An ongoing experiment in rebuilding an old granular synthesis Max patch as a JavaScript Web Audio API app

Buffer Source File:

Hi! I’m Patrick and I’m a web developer from Kentucky.

I’ve been wanting to update and improve my JavaScript programming skills, and, as a long-time music technology and creative coding enthusiast, I was curious to find a personal project through which I could explore and teach myself the Web Audio API.

When I originally read Curtis Roads’ excellent book on granular synthesis, Microsound, I found myself very drawn to the concept and sought out a way to play around with the ideas in Max. I soon found Nathan Wolek’s essential Granular Toolkit and spent quite a bit of time teaching myself how it all worked so that I could incorporate it into my own granular patch, which I continued to tinker with and expand slowly over time.

Over the many years since I originally started on that granular patch, technologies have changed quite a bit, and the patch eventually stopped working. I haven’t yet had the time to rebuild it using the current granular tools from Nathan’s LowKeyNW package (installable from Max’s Package Library), but I thought it would be an excellent JavaScript learning opportunity.

Progress Report

I have a basic proof-of-concept up and running and am beginning to expand functionality.

Questions & Answers

Aren’t there already numerous examples of Web Audio-based granular synthesis?

Absolutely! I haven’t found many that implement the grain windows discussed in Microsound or available in Nathan Wolek’s granular tools though. I’ve found the subtle variations between each window type to be interesting, as if they were the granular tool’s vocal cords / glottis, imparting their own tonal coloring on the resultant sound.

How do you personally use an app like this?

I enjoy granular synthesis/processing more for random generation of ideas and textures than for any sort of traditional melodic or tonal usage. Most often, I love using a long source file of speech or dialogue that, when granulated and cut up, sometimes synthesizes new and often bizarre phrases out of the rearranged syntax. See also the literary cut-up techniques pioneered by William S. Burroughs and Brion Gysin (as well as their book on the subject, The Third Mind).

Lessons I’ve Learned So Far

Don’t overcomplicate things when starting out by trying to learn multiple new topics simultaneously. I spent 60% of my time so far debugging React JavaScript errors when the app doesn’t actually need React at all (though I did learn a ton about React in the process).

You can get rock-solid timing with JavaScript and the Web Audio API, but it’s a lot more complicated than it might initially seem. I relied on the WAAclock.js package for all timing and scheduling needs.
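
The core idea underneath libraries like WAAclock.js is the look-ahead scheduling pattern: a coarse JavaScript timer wakes up regularly and schedules every upcoming event against the sample-accurate audio clock before it's due. Here's a minimal sketch of that pattern; `scheduleGrains` and `playGrain` are hypothetical names for illustration, not part of any library.

```javascript
// Schedule each grain whose start time falls within [now, now + lookAhead),
// then return the start time of the first grain left unscheduled.
// All times are in seconds, matching AudioContext.currentTime.
function scheduleGrains(now, nextGrainTime, lookAhead, grainInterval, playGrain) {
  while (nextGrainTime < now + lookAhead) {
    playGrain(nextGrainTime); // playGrain starts a source node at that exact time
    nextGrainTime += grainInterval;
  }
  return nextGrainTime;
}

// In the browser, a setInterval tick would drive this off the audio clock
// (ctx is an AudioContext; the JS timer only needs to be roughly on time,
// because the grains themselves are scheduled on the audio clock):
//
//   let next = ctx.currentTime;
//   setInterval(() => {
//     next = scheduleGrains(ctx.currentTime, next, 0.1, 0.05, playGrain);
//   }, 25);
```

The key point is that `setInterval` jitter doesn't matter: as long as the timer fires within the look-ahead window, every grain still starts exactly on the audio clock.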

ArrayBuffers aren’t really arrays, just raw sequences of bytes. You have to create a typed view (such as a Float32Array) over those bytes to get at the actual array data.
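
A quick sketch of the distinction (in the app itself, the ArrayBuffer would come from something like `fetch()` before being handed to `decodeAudioData()`):

```javascript
// An ArrayBuffer is only a block of raw bytes with no element type.
const buffer = new ArrayBuffer(8);

// Typed views interpret those same bytes as actual array elements.
const floats = new Float32Array(buffer); // two 32-bit floats
const bytes = new Uint8Array(buffer);    // eight individual bytes

// Writing through one view is visible through the other,
// because both views point at the same underlying bytes.
floats[0] = 1.0; // stored as IEEE 754 single precision: 00 00 80 3f (little-endian)
```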

The arrays you use for the grain windows should have as few points as you can get away with. In my testing, a 1000-point curve created garbled noise on playback, while a 100-point curve produced a smooth envelope with AudioParam.setValueCurveAtTime().

AudioParam.setValueCurveAtTime() does all the heavy lifting for windowing the grains’ amplitude curves. You feed it an array of level values and a duration and it handles the complex task of interpolating through those array values evenly over that duration.
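
As a concrete sketch, here's how one of those window curves might be built; a Hann (cosine) window is one common grain window shape, and `hannWindow` is a hypothetical helper name, not the app's actual code. Around 100 points is enough because setValueCurveAtTime() interpolates between the values.

```javascript
// Build a Hann window as a Float32Array suitable for
// AudioParam.setValueCurveAtTime(): rises from 0 to 1 at the
// midpoint, then falls symmetrically back to 0.
function hannWindow(length) {
  const curve = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    curve[i] = 0.5 * (1 - Math.cos((2 * Math.PI * i) / (length - 1)));
  }
  return curve;
}

// In a Web Audio graph, the curve would shape a grain's gain node:
//   gainNode.gain.setValueCurveAtTime(hannWindow(100), startTime, grainDuration);
```

Swapping in a different formula here (Gaussian, Blackman, triangular, and so on) is what produces the subtle tonal differences between window types mentioned above.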