Sound generation with JavaScript

A simple introduction to the Web Audio API and sound generation in the browser

May 11, 2017


Recently I’ve worked on a web experiment that uses the Web Audio API for sound generation. The initial goal of this experiment was to generate everything from code, both because it’s fun and because I wanted to keep the app as small as possible, to avoid boring loading screens.

Another experiment with sound generation is this one, where the user can play various notes with their keyboard. The notes’ frequencies are then mapped onto a circle.

In this post I will talk about the basic method I’ve used to generate sound.

Web Audio API

The Web Audio API is a specification for sound manipulation implemented in all major browsers (even though its behavior is still somewhat inconsistent across them). It works alongside the HTML5 <audio> element and is incredibly powerful. You can read more here [1] [2].

The basic idea is to have a series of AudioNodes connected together in a graph. This graph must have a sound source (where the sound comes from), a sound destination (where the sound is played) and, in between, nodes that manipulate the sound. You can see all the available node interfaces at this link.
For this post we will focus on AudioContext, OscillatorNode and GainNode.

What we want to do is use the OscillatorNode to generate a waveform and then shape it with our GainNode, in order to obtain some decent sounds.

The AudioContext is the base on which every audio graph is built; you can read the docs for more info.

Generating sounds

Let’s create our AudioContext

const context = new AudioContext();
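
Note: in some older browsers the constructor is only exposed with a vendor prefix. As a rough sketch, a more defensive version of the line above could look like this:

// Fall back to the prefixed constructor used by older WebKit browsers.
const AudioContextClass = window.AudioContext || window.webkitAudioContext;
const context = new AudioContextClass();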

We can then create our OscillatorNode

const oscillator = context.createOscillator();
oscillator.type = "sine";
oscillator.frequency.value = 196;

The type property indicates the waveform we want. We’re keeping it simple by using a sine wave, but it’s possible to use different waveforms, such as triangle, square, sawtooth and even custom waveforms.
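
As a quick sketch of the custom case: a periodic waveform can be described by its Fourier coefficients with createPeriodicWave and assigned to the oscillator with setPeriodicWave. The coefficient values below are arbitrary, just for illustration.

// real holds the cosine terms and imag the sine terms of the desired waveform.
const real = new Float32Array([0, 1, 0.5, 0.25]);
const imag = new Float32Array([0, 0, 0, 0]);
const customWave = context.createPeriodicWave(real, imag);
oscillator.setPeriodicWave(customWave);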

The frequency value is what defines the actual note that will be played. You can pick frequency values from one of these tables: [1] [2]
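
Alternatively, instead of hard-coding values from a table, you can compute a frequency with the standard equal-temperament formula, where A4 (MIDI note 69) is 440 Hz. A tiny helper (the name noteToFrequency is mine, just for illustration):

// Frequency of a MIDI note number in twelve-tone equal temperament (A4 = 440 Hz).
const noteToFrequency = note => 440 * Math.pow(2, (note - 69) / 12);

oscillator.frequency.value = noteToFrequency(55); // G3, roughly 196 Hz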

Create the GainNode

const gainNode = context.createGain();

Connect everything

oscillator.connect(gainNode);
gainNode.connect(context.destination);

The oscillator is our audio source and context.destination is the audio-rendering device (e.g. your speakers).

Play the sound

oscillator.start(0);

Right now you can only hear a constant “beep”.

Generating notes

In order to turn this constant tone into something that sounds like a played note, you need to gradually silence it.

const duration = 2;

gainNode.gain.linearRampToValueAtTime(0.0001, context.currentTime + duration);
oscillator.stop(context.currentTime + duration);

gainNode.gain.linearRampToValueAtTime is a handy function that schedules a linear change of the gain value over time. In this case it takes the gain down to 0.0001, linearly, over duration seconds.
To prevent distortion, I’ve found it useful to also stop the oscillator once the sound has been silenced.
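
Putting all the pieces together, here is a rough sketch of a reusable helper. The playNote name and the initial setValueAtTime call are my additions; setting the gain explicitly before scheduling the ramp gives it a well-defined starting point.

// Plays a single decaying note; assumes `context` is an existing AudioContext.
function playNote(frequency, duration) {
  const oscillator = context.createOscillator();
  const gainNode = context.createGain();

  oscillator.type = "sine";
  oscillator.frequency.value = frequency;

  oscillator.connect(gainNode);
  gainNode.connect(context.destination);

  const now = context.currentTime;
  // Start at full volume, then fade out linearly over `duration` seconds.
  gainNode.gain.setValueAtTime(1, now);
  gainNode.gain.linearRampToValueAtTime(0.0001, now + duration);

  oscillator.start(now);
  oscillator.stop(now + duration);
}

playNote(196, 2); // a G3, fading out over two seconds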

Here you can see (and hear) a simple example in action:

Conclusion

This is a basic approach to sound generation, but you can build a lot on top of it.
As you can hear in this experiment, it is possible to obtain some fancy effects just by playing around with the nodes in your audio graph. The source code of this experiment is open source and can be found here. And here is the function responsible for sound generation.
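
As a taste of that kind of tweaking, here is a small sketch (the cutoff frequency is arbitrary) that inserts a low-pass BiquadFilterNode between the oscillator and the gain node to soften the sound:

// Route the oscillator through a low-pass filter before the gain node.
const filter = context.createBiquadFilter();
filter.type = "lowpass";
filter.frequency.value = 800;

oscillator.connect(filter);
filter.connect(gainNode);
gainNode.connect(context.destination);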

Where can you find me?

Follow me on Twitter: https://twitter.com/psoffritti
My website/portfolio: pierfrancescosoffritti.com
My GitHub account: https://github.com/PierfrancescoSoffritti
My LinkedIn account: linkedin.com/in/pierfrancescosoffritti/en
