HTML5 Web Audio API Tutorial: Building a Virtual Synth Pad


The World Wide Web Consortium’s Web Audio API working draft describes a high-level API that allows developers to process, synthesize, and analyze audio signals in web applications like HTML5 games or virtual musical instruments. Web Audio models audio processing as a graph of AudioNodes living inside an AudioContext. Within the AudioContext an audio file, for example, is connected to a processing node, which, in turn, is connected to a destination like the speakers on your laptop. Each node in the AudioContext is modular, so a web developer can plug (or unplug) nodes like a toddler snapping Lego blocks in place to build more complicated structures. One of the best ways to become familiar with the Web Audio API is simply to use it. In this article, I am going to describe how to build a very basic virtual synth pad that will play audio samples and provide a basic reverb feature. This HTML synth pad is going to be far from the tone-generating instruments that professional musicians use, but it will show us how to:
  • Create an AudioContext
  • Load audio files
  • Play audio files
  • Add a volume control
  • Loop audio samples
  • Stop audio playback
  • Create a reverb effect
  • Create an audio filter

Creating the Synth Pad in our HTML

This very basic virtual synth pad will be presented in a web browser, so let’s begin with the markup, adding four “pads” to a page. I included the jQuery JavaScript library via Google’s content delivery network. jQuery is in no way required for the Web Audio API, but its powerful selectors will make it a lot easier to interact with the HTML pads. I am also linking to a local JavaScript file that will contain the code for working with the Web Audio API. I have assigned a data attribute to each of the pads with information about each pad’s associated sound file. Here’s the relevant HTML:
<section id="sp">
<div id="pad1" data-sound="kick.wav"></div>
<div id="pad2" data-sound="snare.wav"></div>
<div id="pad3" data-sound="tin.wav"></div>
<div id="pad4" data-sound="hat.wav"></div>
</section>
I use CSS to lay out the four pads in a two-by-two grid, since this would be a standard configuration for a small synth pad. I set a width value for the <section> element and have each ‘pad’ element display as inline-block.
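For completeness, here is one way the layout described above could be written. The exact dimensions and colors are my own illustrative choices, not from the original demo:

```css
/* Wide enough for two inline-block pads per row */
#sp {
  width: 280px;
}

#sp div {
  display: inline-block;
  width: 120px;
  height: 120px;
  margin: 5px;
  background-color: #333;
  cursor: pointer;
}
```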

Creating an AudioContext

Let’s start the scripting. I create a new AudioContext with a single line.
var context = new AudioContext();
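One caveat worth knowing: older WebKit-based browsers exposed this constructor under a vendor prefix (webkitAudioContext), so production code often feature-detects before constructing. A minimal sketch of that check follows; the helper name is my own, not part of the tutorial's code:

```javascript
// Pick whichever constructor the environment exposes, preferring the
// standard, unprefixed one. Taking the global object as a parameter
// keeps the logic easy to test.
function pickAudioContextConstructor(global) {
  return global.AudioContext || global.webkitAudioContext || null;
}

var globalObject = typeof window !== 'undefined' ? window : globalThis;
var AudioContextCtor = pickAudioContextConstructor(globalObject);
var context = AudioContextCtor ? new AudioContextCtor() : null;
if (!context) {
  console.log('Web Audio is not supported in this environment.');
}
```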

Loading Audio Files

The next task is to write a function that will load audio files. This function will:
  • Accept the URL for the audio file
  • Load that file via an XMLHttpRequest
  • Decode the audio for use within the AudioContext
  • Provide some means of accessing the decoded source
Here it is:
function loadAudio(object, url) {
  var request = new XMLHttpRequest();
  request.open('GET', url, true);
  request.responseType = 'arraybuffer';

  request.onload = function() {
    context.decodeAudioData(request.response, function(buffer) {
      object.buffer = buffer;
    });
  };

  request.send();
}
The loadAudio function that I have written for our virtual synth pad accepts two parameters. The first parameter is a pad object. The second parameter is the URL for the sound file the function will be loading. The request variable is assigned a new XMLHttpRequest object. We pass three parameters to the request’s open() method, specifying the method for communicating (GET in this case), the URL for the audio file, and “true” to designate that we want an asynchronous request. The request’s response type is set to “arraybuffer” to handle the binary audio file.
var request = new XMLHttpRequest();
request.open('GET', url, true);
request.responseType = 'arraybuffer';
When the file loads, the script will call an anonymous function, which, in turn, calls the decodeAudioData() method of the AudioContext. This method will asynchronously decode the audio file. The decodeAudioData() method has two required parameters. The first of these is the audio file that it is to decode. In our script that file is stored as “request.response.” The second required parameter is a callback function. For the callback function, I used a second anonymous function to add a property to the pad object. This new property will be an easy way to access the audio source node.
request.onload = function() {
  context.decodeAudioData(request.response, function(buffer) {
    object.buffer = buffer;
  });
};
The request’s send() method is, of course, also added to the script.

Playing an Audio File When a Pad is Clicked

Each virtual synth pad should play an audio file when it is clicked, so there must be a way to associate the pad and a sound. There were several ways that the sound-pad relationship could have been managed, but eventually, I decided to extend, if you will, the <div> element object, adding audio-related properties to the pad <div> itself as a means of making the aforementioned association. Thus, the addAudioProperties() function accepts a pad element object parameter, and adds three properties to that object. A fourth property is added when the pad is “played.”
function addAudioProperties(object) {
  object.name = object.id;
  object.source = $(object).data('sound');
  loadAudio(object, object.source);
  object.play = function () {
    var s = context.createBufferSource();
    s.buffer = object.buffer;
    s.connect(context.destination);
    s.start(0);
    object.s = s;
  };
}
The first line in the function sets the value for the “name” property, so that it matches the pad element’s id attribute, specifically “pad1,” “pad2,” “pad3,” and “pad4.”
object.name = object.id;
The next two lines in the function set the “source” property to match the value of the HTML data-sound attribute that I included in each of the pad’s <div> elements and passes both the object and the source to the loadAudio function, effectively loading the sound file to the buffer. You can think of the buffer as the place in system memory that holds your sounds until you’re ready to play them.
object.source = $(object).data('sound');
loadAudio(object, object.source);
Next, the function gives the pad object a play method. This method has five tasks.
  • It calls the AudioContext’s createBufferSource method, making a new audio buffer source node
  • It sets the node’s source property
  • It connects the audio source to your computer’s speakers
  • It plays the sound
  • It attaches the audio source to the pad object’s s property
Here is the function:
object.play = function () {
  var s = context.createBufferSource();
  s.buffer = object.buffer;
  s.connect(context.destination);
  s.start(0);
  object.s = s;
};
Let’s consider a couple of these tasks in more detail. First, the createBufferSource() method places a new node in the AudioContext. Second, the new node is connected to context.destination. This destination is a special node representing your system’s default sound output. Usually, this will be your computer’s default speakers or, perhaps, a pair of headphones plugged into your computer. Notice also that I used the jQuery selector and the jQuery data() method to make it a little easier to access the data-sound attribute. Now we need to put our new functions and the AudioContext in action. I used jQuery to create the well-known anonymous document ready function that is automatically called when the page loads:
$(function() {

});
When the page loads, I want to go ahead and extend the pad element objects. This code uses jQuery to select each of the pad elements and iterate over every one, calling the addAudioProperties() function on each.
$('#sp div').each(function() {
  addAudioProperties(this);
});
The document ready function also begins to listen, if you will, for click events on the pad elements. When a click event occurs, the virtual synth pad calls the pad element object’s play() method.
$('#sp div').click(function() {
  this.play();
});
Here is the document ready function with all of its parts and pieces thus far.
$(function() {
  $('#sp div').each(function() {
    addAudioProperties(this);
  });

  $('#sp div').click(function() {
    this.play();
  });
});
With all of your files saved and the virtual synth pad loaded in Chrome, Firefox, Safari, or Opera, you should now have a functional synth pad. When you click on a pad, a sound is played.

Add Volume Control

Although the virtual synth pad is functional, it is not terribly entertaining. We need to add some basic controls, beginning with a volume control. This control is going to require a bit of additional HTML and CSS to add a control panel section and four control div elements, below our existing markup for the pads. The HTML for each control panel element looks like this:
<div data-pad="pad1">
  <h2>TL Control</h2>
  <h3>top left pad</h3>
  <label for="volume1">Volume</label>
  <input type="range" min="0" max="5" step="0.1" value="1" data-control="gain" name="volume1">
</div>
Notice that I used a range input element for the volume control. Each of the input elements has a data-control attribute with a value of “gain”. In the Web Audio API, a gain node interface effectively represents a change in sound volume. We need to add the gain or volume control to the pad element object. This addition will require:
  • A new gain node
  • Updating the play() method to route the audio source through the gain node.
The AudioContext has a simple method for creating a gain node.
object.volume = context.createGain();
In the play() method, I simply connected the source to the gain node and then connected the gain node to the destination.
s.connect(object.volume);
object.volume.connect(context.destination);
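A note on units: the gain value we will set on this node is a linear amplitude multiplier, not a decibel figure. A value of 1 leaves the signal untouched, 0.5 halves the amplitude, and 2 doubles it. If you prefer to think in decibels, the conversion is simple arithmetic; these helpers are illustrative and not part of the synth pad code:

```javascript
// Convert a linear gain multiplier to decibels and back.
function gainToDecibels(gain) {
  return 20 * Math.log10(gain);
}

function decibelsToGain(db) {
  return Math.pow(10, db / 20);
}

console.log(gainToDecibels(1));   // 0 dB: no change
console.log(gainToDecibels(0.5)); // roughly -6 dB
console.log(decibelsToGain(20));  // a gain of 10
```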
The updated addAudioProperties() function is just two lines longer, indicated in the comments in the code below:
function addAudioProperties(object) {
  object.name = object.id;
  object.source = $(object).data('sound');
  loadAudio(object, object.source);
  object.volume = context.createGain(); // new line
  object.play = function () {
    var s = context.createBufferSource();
    s.buffer = object.buffer;
    s.connect(object.volume); // changed: connect to the gain node
    object.volume.connect(context.destination); // new line
    s.start(0);
    object.s = s;
  };
}
In the document ready function, I am going to add a bit of code to monitor the volume input and update the sound volume. You’ll notice that I used a JavaScript switch statement, which, at the moment, is something akin to using a jackhammer to put a tack in the wall, but I am foreseeing a time when we have three range inputs in our control panel.
$('#cp input').change(function() {
  var v = $(this).parent().data('pad'),
      pad = $('#' + v)[0];
  switch ($(this).data('control')) {
    case 'gain':
      pad.volume.gain.value = $(this).val();
      break;
    default:
      break;
  }
});
This code snippet has four chores.
  • It monitors the control panel inputs
  • It identifies which pad is associated with the volume control
  • It uses a switch statement to identify the input’s purpose
  • It changes the sound volume
jQuery has a change() method that will fire when there is any change to one of the volume range input elements. The change() method accepts a callback function as a parameter, allowing the script to take some action — like changing the volume level. In the HTML for the controls, I placed a data attribute to identify which virtual synth pad is associated with a given control. The pad value (“pad1,” “pad2,” “pad3,” or “pad4”) is stored in the variable v, which identifies the proper synth pad.
$('#cp input').change(function()...
A second variable, pad, is assigned the pad element object. jQuery allows for this sort of concatenated selector, wherein the “#” is combined with the pad value, for example “pad1,” to be selected as “#pad1.”
pad = $('#' + v)[0];
The JavaScript switch statement considers the data-control attribute of the range input. When the data-control attribute’s value is “gain,” the code updates the pad element object’s volume.gain.value property, changing the sound volume.
switch ($(this).data('control')) {
  case 'gain':
    pad.volume.gain.value = $(this).val();
    break;
  default:
    break;
}
At this point, the virtual synth pad has functional volume controls.

Adding an Audio Loop Feature

The virtual synth pad needs the ability to play a single audio sample repeatedly. So we’re going to add a “Loop” button to the control panel. This loop feature will play the associated audio sample again as soon as it ends. We need to add a little more HTML to display the “Loop” button.
<button type="button" class="loop-button" data-toggle-text="End Loop" value="false">Loop</button>
Make note of the button’s class, value, and data attribute as all of these will be referenced in our JavaScript. To facilitate the loop feature, I made three changes to the addAudioProperties() function, adding a new loop property to the object; setting the source’s loop property to the value of the pad object’s loop property inside the play() method; and adding a stop() method. Remember that stopping an audio source was also one of our objectives mentioned at the beginning of the article, and it really is that simple.
function addAudioProperties(object) {
  object.name = object.id;
  object.source = $(object).data('sound');
  loadAudio(object, object.source);
  object.volume = context.createGain();
  object.loop = false;
  object.play = function () {
    var s = context.createBufferSource();
    s.buffer = object.buffer;
    s.connect(object.volume);
    object.volume.connect(context.destination);
    s.loop = object.loop;
    s.start(0);
    object.s = s;
  };
  object.stop = function () {
    if (object.s) object.s.stop();
  };
}
Inside of the document ready function, I added some code to listen for button clicks. This code has seven tasks.
  • Identify the associated pad
  • Set a variable to the button’s text value, “Loop” in this case
  • Set a variable equal to the pad div element object
  • Use a switch statement to identify the button’s purpose
  • Stop the audio source from playing
  • Swap the button text with the value of a data attribute
  • Set the pad element object’s loop value
Here is the code:
$('#cp button').click(function() {
  var v = $(this).parent().data('pad'),
      toggle = $(this).text(),
      pad = $('#' + v)[0];

  switch ($(this)[0].className) {
    case 'loop-button':
      pad.stop();
      $(this).text($(this).data('toggleText')).data('toggleText', toggle);
      ($(this).val() === 'false') ? $(this).val('true') : $(this).val('false');
      pad.loop = ($(this).val() == 'false') ? false : true;
      break;
    default:
      break;
  }
});
Let’s take a look at each of these steps in a bit more detail. First the variable v is set to the value of the pad name. This is exactly the same technique I used when we added the volume control above.
var v = $(this).parent().data('pad'),
The next two variables are assigned the value of the button text, which in this case is “Loop” and the pad element object respectively. jQuery makes these selections very easy.
toggle = $(this).text(),
pad = $('#' + v)[0];
The switch statement looks at the button’s class name. I used the class name as a way of identifying the button’s purpose, if you will. Here again the switch statement is somewhat overkill, but I know that we are going to add two more buttons to the virtual synth pad, so using it now saves us a bit of trouble later.
switch ($(this)[0].className) {
  case 'loop-button':
    pad.stop();
    $(this).text($(this).data('toggleText')).data('toggleText', toggle);
    ($(this).val() === 'false') ? $(this).val('true') : $(this).val('false');
    pad.loop = ($(this).val() == 'false') ? false : true;
    break;
  default:
    break;
}
The first line in the switch statement for the “loop-button” case calls the pad element object’s stop() method, which I just added. If you are not very familiar with jQuery, the next line of code may look complicated.
$(this).text($(this).data('toggleText')).data('toggleText', toggle);
The first section is a simple jQuery selector capturing the button element (i.e. “this”). The text() method here sets the value of the button’s text to the value of the button’s “data-toggle-text” attribute. Specifically, this will make the button read “End Loop” rather than “Loop.” Moving further down the chain, the data() method is used to set the value of the data-toggle-text attribute to the value of the variable toggle, which only moments ago, I set to the value of the button’s text before we changed that text. Effectively, I have had the button text, which was initially “Loop,” switch places with the value of the data-toggle-text attribute, which was initially “End Loop.” Each time the button is clicked “Loop” and “End Loop” will swap places. The next two lines of code work together to update the pad element object’s loop property.
($(this).val() === 'false') ? $(this).val('true') : $(this).val('false');
pad.loop = ($(this).val() == 'false') ? false : true;
A conditional ternary operator tests the button’s value. If the value is currently “false,” it is changed to “true.” Likewise, if the current value were “true,” it would be changed to “false,” since the button’s value before the click represents the opposite of the user’s intent. It might seem like I could now set the value of the pad element object’s loop property to the button’s value, but this will not quite work. The button’s value is a string, but the loop property requires a Boolean. Thus, I used a second ternary operator to pass the proper Boolean. I suppose I could have also converted the type directly. The virtual synth pad now has a functioning loop feature.
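The two ternary operators described above can also be collapsed into a single helper: since the button’s value is always the string “true” or “false,” one strict comparison both flips the string and yields the Boolean. This is an equivalent alternative of my own, not the code used in the tutorial:

```javascript
// Flip a 'true'/'false' string and report the new value both as a
// string (for the button) and as a Boolean (for the loop property).
function toggleStringBoolean(value) {
  var flipped = value === 'false' ? 'true' : 'false';
  return { value: flipped, asBoolean: flipped === 'true' };
}

console.log(toggleStringBoolean('false')); // { value: 'true', asBoolean: true }
console.log(toggleStringBoolean('true'));  // { value: 'false', asBoolean: false }
```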

Create a Reverb Effect

In the Web Audio API, you can create a reverb effect using a convolver node. The convolver node performs linear convolution on your source audio. Without going into the sound science, this node basically takes your source audio, compares it to an impulse response sound file, and produces a new sound based on the comparison. You may think of the impulse response sound file as a characterization of the way a given space, like a large room, echoes. For the virtual synth pad, I am using an impulse response file representing a fairly large hall. This impulse response file came from Chris Wilson’s Web Audio API Playground project on GitHub and is free to use under an MIT License. Chris Wilson, by the way, is a developer advocate at Google and an editor of the Web Audio API Working Draft. As before, I am going to need some additional HTML to place a reverb button on the virtual synth pad page. The HTML here is almost identical to the HTML for the loop button.
<button type="button" class="reverb-button" data-toggle-text="No Rvrb" value="false">Reverb</button>
The next step in the process of adding this node is to include a new function that will load the impulse response audio file. This function will create a reverb object and then use the loadAudio function to add the impulse response sound to the buffer. There are no new concepts here.
function reverbObject(url) {
  this.source = url;
  loadAudio(this, url);
}
In the addAudioProperties() function, I need to add a single line of code creating a property to represent the reverb.
object.reverb = false;
The play() method of the pad div element object will also need to be updated. At the moment the audio source is connected to the gain node, and the gain node is connected to the speakers. When the user clicks the reverb button, we will need to insert the convolver node into that chain, so that the audio source connects to the gain node, the gain node connects to the convolver node, and the convolver node connects to the speakers. Take a look at the play() method as it is before these changes.
object.play = function () {
  var s = context.createBufferSource();
  s.buffer = object.buffer;
  s.connect(object.volume);
  object.volume.connect(context.destination);
  s.loop = object.loop;
  s.start(0);
  object.s = s;
};
I took the line of code that connected the gain node, “object.volume,” to the speakers and replaced it with an if-else construct.
object.play = function () {
  var s = context.createBufferSource();
  s.buffer = object.buffer;
  s.connect(object.volume);
  if (this.reverb === true) {
    this.convolver = context.createConvolver();
    this.convolver.buffer = irHall.buffer;
    this.volume.connect(this.convolver);
    this.convolver.connect(context.destination);
  } else if (this.convolver) {
    this.volume.disconnect(0);
    this.convolver.disconnect(0);
    this.volume.connect(context.destination);
  } else {
    this.volume.connect(context.destination);
  }
  s.loop = object.loop;
  s.start(0);
  object.s = s;
};
The first part of the if statement checks whether the pad element object’s reverb property is set to true. If the property is true, the convolver node is created, the impulse response file is identified, and the nodes are connected. If the reverb property is false, the method checks whether there is already a convolver node connected to the source. If there is a convolver node and, as we already know, the reverb property is false, then a user must have clicked the reverb button to turn it off. So the script disconnects the gain node and convolver node and reconnects the gain node directly to the speakers. If the reverb property is false and there is no existing convolver node, the gain node is connected directly to the speakers. The reverb feature must be wired into the jQuery document ready function too. Here is a look at the portion of the document ready function that listens for the loop button as we have the virtual synth pad coded right now. Note that the text-swap and value-flip lines have been moved above the switch statement, so they now run for every toggle button.
$('#cp button').click(function() {
  var v = $(this).parent().data('pad'),
      toggle = $(this).text(),
      pad = $('#' + v)[0];
  $(this).text($(this).data('toggleText')).data('toggleText', toggle);
  ($(this).val() === 'false') ? $(this).val('true') : $(this).val('false');
  switch ($(this)[0].className) {
    case 'loop-button':
      pad.stop();
      pad.loop = ($(this).val() == 'false') ? false : true;
      break;
    default:
      break;
  }
});
Adding a new case in the switch statement is all that is required. This new case behaves very much like the code created for the loop button:
case 'reverb-button':
  pad.stop();
  pad.reverb = ($(this).val() == 'false') ? false : true;
  break;
As the last step, a new line of code is inserted into the document ready function to add the impulse response file to the buffer.
var irHall = new reverbObject('irHall.ogg');
The virtual synth pad’s reverb feature is now functional.

Creating an Audio Filter

The virtual synth pad is starting to become fun to play with, but I want to add one more feature: an audio filter. The Web Audio API has several ways to manipulate sounds, but we are going to focus on a simple example with a fancy name, specifically a lowpass biquad filter node. In the HTML, I added a new “Filter” button and two range inputs for frequency and quality.
<button type="button" class="filter-button" data-toggle-text="No Fltr" value="false">Filter</button>
<label class="filter-group faded" for="frequency1">Frequency:</label>
<input class="filter-group faded" type="range" min="0" max="10000" step="1" value="350" data-control="fq" name="frequency1">
<label class="filter-group faded" for="quality1">Quality:</label>
<input class="filter-group faded" type="range" min="0.0001" max="1000" step="0.0001" value="500" data-control="q" name="quality1">
Do take note of the ranges for the frequency and quality inputs. The quality factor, for example, spans the biquad filter node’s nominal range. Also note the “faded” class. When the control section loads, I want the range inputs for the audio filter to appear faded, indicating that they are unavailable. When the user clicks the filter button, the range inputs will come to life, if you will. The pad element object needs three new properties: one to set a Boolean value, one to set a default frequency value, and one to set a default quality value. These properties are, of course, added to the addAudioProperties() function.
object.filter = false;
object.fqValue = 350;
object.qValue = 500;
The pad element object’s play() method also needs a few conditional statements. The concept here is very similar to the if statement that we added with the reverb feature. The code needs to correctly connect nodes depending on whether or not looping, reverb, and filtering are engaged.
if (this.filter === true) {
  this.biquad = context.createBiquadFilter();
  this.biquad.type = 'lowpass';
  this.biquad.frequency.value = this.fqValue;
  this.biquad.Q.value = this.qValue;

  if (this.reverb === true) {
    this.convolver.disconnect(0);
    this.convolver.connect(this.biquad);
    this.biquad.connect(context.destination);
  } else {
    this.volume.disconnect(0);
    this.volume.connect(this.biquad);
    this.biquad.connect(context.destination);
  }
} else {
  if (this.biquad) {
    if (this.reverb === true) {
      this.biquad.disconnect(0);
      this.convolver.disconnect(0);
      this.convolver.connect(context.destination);
    } else {
      this.biquad.disconnect(0);
      this.volume.disconnect(0);
      this.volume.connect(context.destination);
    }
  }
}
Next, we need to make changes to the document ready function. The first of these changes is to add support for the filter button. This will be a new case in the switch statement. Notice that I added a bit of jQuery to toggle the “faded” class we added to the filter labels and inputs.
case 'filter-button':
  pad.stop();
  pad.filter = ($(this).val() == 'false') ? false : true;
  $(this).parent().children('.filter-group').toggleClass('faded');
  break;
I also added new cases to the input switch statement we had been using for the volume control.
case 'fq':
  pad.fqValue = $(this).val();
  break;
case 'q':
  pad.qValue = $(this).val();
  break;
The filter feature is now functional.

Conclusion and Demo

This tutorial sought to provide a basic introduction to the powerful Web Audio API. If you followed along, you should have a virtual (and noisy) synth pad as well as a better understanding of Web Audio’s basic features. You can also download the source files or mess around with the code on CodePen. One thing to note: CodePen seems to cause an error that prevents one of the files from being loaded in Chrome. This doesn’t happen in the HTML demo, and both demos should work fine in Firefox. At the time of writing, the Web Audio API is supported in all modern desktop browsers but not in IE11.

Frequently Asked Questions about Building a Virtual Synth Pad with HTML5 Web Audio API

How can I create a simple synth using the Web Audio API?

Creating a simple synth using the Web Audio API involves a few steps. First, you need to create an audio context. This is the primary ‘container’ for your project. It can be created using the AudioContext interface. Next, you need to create oscillators, which are the basic sound sources in the API. You can set the type of waveform the oscillator produces – sine, square, sawtooth, or triangle. You can also set the frequency and start or stop the oscillator. Finally, you need to connect the oscillator to the audio context’s destination (the speakers), and you have a simple synth.

How can I use MIDI with the Web Audio API?

MIDI (Musical Instrument Digital Interface) can be used with the Web Audio API to control your synth. The Web MIDI API allows you to send and receive MIDI messages, which can be used to play notes, change parameters, and more. You can access MIDI inputs and outputs and send MIDI messages using the output’s send method. You can also listen for MIDI messages using the input’s onmidimessage event.

What are the different types of oscillators in the Web Audio API?

The Web Audio API provides four built-in oscillator waveforms: sine, square, sawtooth, and triangle. The sine wave is a smooth periodic oscillation containing only its fundamental frequency. The square wave has a harsh, hollow timbre because it is rich in odd harmonics. The sawtooth wave contains both even and odd harmonics, which gives it a bright, buzzy character. The triangle wave resembles the sine wave but sounds slightly brighter. Each of these waveforms has a unique sound, and you can choose the one that best fits your needs.

How can I change the frequency of an oscillator in the Web Audio API?

The frequency of an oscillator in the Web Audio API can be changed using the frequency property. This is an a-rate AudioParam, meaning it can be changed over time and can have different values at different times. You can set the value directly, or use methods like setValueAtTime or linearRampToValueAtTime to change the frequency over time.
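As a concrete illustration of choosing frequency values, a common convention maps MIDI note numbers to frequencies in hertz, with A4 (note 69) at 440 Hz. The formula below is standard equal-temperament math, not something the Web Audio API itself provides:

```javascript
// Equal-temperament conversion: MIDI note number to frequency in Hz.
// Each semitone multiplies the frequency by the twelfth root of two.
function midiToFrequency(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

console.log(midiToFrequency(69)); // 440 (A4)
console.log(midiToFrequency(81)); // 880 (A5, one octave up)
console.log(midiToFrequency(60)); // roughly 261.63 (middle C)
// The result could then be passed to, e.g.:
// oscillator.frequency.setValueAtTime(midiToFrequency(60), context.currentTime);
```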

How can I add effects to my synth using the Web Audio API?

The Web Audio API provides several ways to add effects to your synth. You can use BiquadFilterNode to create a variety of filter effects, ConvolverNode to create reverb effects, DynamicsCompressorNode to create compression effects, and WaveShaperNode to create distortion effects. You can also use GainNode to control the volume of your synth.

How can I create a modular synth with the Web Audio API?

Creating a modular synth with the Web Audio API involves creating multiple oscillators and filters, and connecting them in various ways to create complex sounds. You can also use AudioParam objects to modulate various parameters of your synth, like the frequency of an oscillator or the gain of a filter.

How can I use the Web Audio API to create a drum machine?

Creating a drum machine with the Web Audio API involves creating buffers for each drum sound, and then playing them back using AudioBufferSourceNode objects. You can control the timing of the playback using the start method, and you can control the volume using a GainNode.
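The timing half of that answer comes down to simple arithmetic: start accepts an absolute time in seconds, so a step sequencer can precompute when each beat should fire from the tempo. A sketch of that calculation follows; the helper name is mine, not part of the API:

```javascript
// Compute the absolute start time, in seconds, of each beat in a bar,
// given a starting time (e.g. context.currentTime) and a tempo in BPM.
function beatTimes(startTime, bpm, beats) {
  var secondsPerBeat = 60 / bpm;
  var times = [];
  for (var i = 0; i < beats; i++) {
    times.push(startTime + i * secondsPerBeat);
  }
  return times;
}

console.log(beatTimes(0, 120, 4)); // [ 0, 0.5, 1, 1.5 ]
// Each value would be passed to an AudioBufferSourceNode's start() method.
```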

How can I visualize the output of my synth using the Web Audio API?

The Web Audio API provides the AnalyserNode, which can be used to get time domain and frequency data from your synth. This data can then be visualized using the Canvas API or WebGL.

How can I save the output of my synth as an audio file?

The Web Audio API does not provide a built-in way to save the output of your synth as an audio file. However, you can use the MediaRecorder API to record the output of your synth, and then save it as an audio file.

How can I make my synth responsive to user input?

Making your synth responsive to user input involves adding event listeners to your HTML elements, and then changing the parameters of your synth in response to these events. For example, you could change the frequency of an oscillator in response to a slider being moved, or start and stop the oscillator in response to a button being clicked.

Armando Roggio

Armando Roggio is an experienced director of marketing, ecommerce expert, growth hacker, web developer and writer. When he is not analyzing data or writing (code or prose), Armando is also a high school wrestling coach.
