How to attach sound effects to an AudioBuffer

I’m trying to add the following sound effects to some audio files, then grab their audio buffers and convert them to .mp3 format:

  • Fade out the first track
  • Fade in the following tracks:
    - A background track (kept in the background by giving it a low-value gain node)
    - Another track that serves as the more audible of the two merged tracks
  • Fade out those two and fade the first track back in
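For what it’s worth, a fade can also be baked directly into a channel’s raw sample data, independent of any node graph. A minimal sketch in plain JavaScript, using a hypothetical `applyFadeOut` helper, where `samples` stands in for the `Float32Array` returned by `buffer.getChannelData()`:

```javascript
// Bake a linear fade-out into the last `fadeSeconds` of a channel's samples.
// Mutates `samples` in place and returns it.
function applyFadeOut(samples, sampleRate, fadeSeconds) {
  var fadeSamples = Math.min(samples.length, Math.round(fadeSeconds * sampleRate));
  var start = samples.length - fadeSamples;
  for (var i = start; i < samples.length; i++) {
    // gain falls linearly from 1 at the fade start to 0 at the very last sample
    var gain = (samples.length - 1 - i) / (fadeSamples - 1);
    samples[i] *= gain;
  }
  return samples;
}
```

A buffer treated this way keeps its fade no matter how it is later merged, because the effect lives in the data rather than in the graph.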

I’ve observed that the effects exposed by the AudioParam class, as well as those from the GainNode interface, take effect at the context’s destination rather than on the buffer itself. Is there a way to bind the AudioParam values (or the gain property) to the buffers, so that when I merge them into one final buffer they still retain those effects? Or do those effects only have meaning at the destination (meaning I must connect through the sourceNode) and render the output via an OfflineAudioContext and startRendering()? I tried that approach previously, and was told in my last thread that I only needed one BaseAudioContext and that it didn’t have to be an OfflineAudioContext. But to apply different effects to different files, I think I need several contexts, so I’m stuck: I have various AudioParams and GainNodes, yet simply calling start() on the sources seems to lose their effect.
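On the rendering question: a gain automation ultimately amounts to a per-sample multiplier, which is exactly what an OfflineAudioContext bakes into the buffer it resolves from startRendering(). The idea can be illustrated without the Web Audio API at all; a sketch with a hypothetical `renderWithGainCurve` helper, where `curve` plays the role of the `Float32Array` passed to `setValueCurveAtTime` and is stretched linearly across the whole buffer:

```javascript
// Illustrative only: mimic what offline rendering does when a source plays
// through a GainNode - each output sample is the input sample times the
// gain value scheduled for that instant.
function renderWithGainCurve(samples, curve) {
  var out = new Float32Array(samples.length);
  for (var i = 0; i < samples.length; i++) {
    // map the sample index onto the curve with linear interpolation,
    // matching how setValueCurveAtTime interpolates between points
    var pos = (i / (samples.length - 1)) * (curve.length - 1);
    var lo = Math.floor(pos);
    var hi = Math.min(lo + 1, curve.length - 1);
    var frac = pos - lo;
    var gain = curve[lo] * (1 - frac) + curve[hi] * frac;
    out[i] = samples[i] * gain;
  }
  return out;
}
```

The buffer returned this way carries the effect permanently, which is why rendering (rather than attaching params to buffers) is the usual route.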

The following snippets demonstrate the effects I’m referring to; the full code can be found at

    var beginNodeGain = overallContext.createGain(); // create a gain node
    beginNodeGain.gain.setValueAtTime(1.0, buffer.duration - 3); // hold full volume until 3 s before the end
    beginNodeGain.gain.exponentialRampToValueAtTime(0.01, buffer.duration); // ramp down to near-silence over those last 3 seconds
    // connect the AudioBufferSourceNode to the gain node, and the gain node to the destination
    sourceNode.connect(beginNodeGain);
    beginNodeGain.connect(overallContext.destination);

Another snippet goes like this:

    function handleBg (bgBuff) {
        // a separate context here so this track can have its own gain envelope
        var bgContext = new OfflineAudioContext(bgBuff.numberOfChannels, finalBuff[0].length, bgBuff.sampleRate),
            bgAudBuff = bgContext.createBuffer(bgBuff.numberOfChannels, finalBuff[0].length, bgBuff.sampleRate),
            bgGainNode = bgContext.createGain(),
            smoothTrans = new Float32Array(3);
        smoothTrans[0] = overallContext.currentTime; // should be 0, to usher it in, but is actually between 5 and 6
        smoothTrans[1] = 1.0;
        smoothTrans[2] = 0.4; // settle into the background
        bgGainNode.gain.setValueAtTime(0, 0); // currentTime here is 6.something
        bgGainNode.gain.setValueCurveAtTime(smoothTrans, 0, finalBuff.pop().duration); // start at param 2 and last for param 3 seconds
        // copy the source data into the new buffer, looping when the source is shorter
        for (var channel = 0; channel < bgBuff.numberOfChannels; channel++) {
            var data = bgBuff.getChannelData(channel),
                loopBuff = bgAudBuff.getChannelData(channel);
            for (var j = 0; j < finalBuff[0].length; j++) {
                loopBuff[j] = data[j % data.length]; // wrap around instead of reading past the end
            }
        }
        // instead of piping them to the output speakers, merge them
    }
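The wrap-around copy in that inner loop can also be pulled out into a small helper; a sketch, assuming the source channel data may be shorter than the target length (`loopChannel` is a hypothetical name):

```javascript
// Fill `length` samples from `data`, wrapping back to the start when the
// source is shorter than the target - a simple background-track loop.
function loopChannel(data, length) {
  var out = new Float32Array(length);
  for (var j = 0; j < length; j++) {
    out[j] = data[j % data.length]; // wrap instead of reading past the end
  }
  return out;
}
```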
