Little distinction is often made between the terms granular synthesis and granulation of sampled sound, but we will treat them somewhat separately, paying less attention to purely synthetic granular synthesis, which is used less frequently these days.
Granular synthesis was first discussed seriously as an outgrowth of quantum physics in the late 1940s, when researchers such as Dennis Gabor examined the idea of reducing sound to its most basic building blocks, which he called sound quanta. Interestingly, Gabor developed a rotating tape-recorder head that could, in essence, break recorded material into small bits for time and pitch manipulation. Even before computers were capable of the enormous number of calculations the technique requires, Iannis Xenakis both discussed and used these atomic-level sound quanta, often an enveloped simple waveform such as a sine wave, or simply a bell-shaped sound impulse called a wavelet, to create statistically shaped "density clouds," as described in his book Formalized Music. By the time computers were ready to lend significant control to the process, Curtis Roads and Barry Truax emerged as leaders in the technique.
Truax was able to use his PDP minicomputer to control grain production and hear the results in real time. The most notable piece to emerge from his work with granular synthesis is Riverrun (named for the first word of James Joyce's Finnegans Wake). At the heart of the technique is the enveloped grain, a short burst of sound typically 10 to 100 milliseconds long. These grains can be produced at very high rates and densities, for example between 100 and 2,000 grains per second. By controlling grain frequency, grain duration, grain amplitude, and envelope shape, either through pre-programming or in real time, the composer can produce cohesive pitched sounds when the grains have similar characteristics, or more random, noise-like textures when those parameters are highly variable. Listen to Riverrun here (you may need to search if the URL changes).
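To make those parameters concrete, here is a minimal sketch of the idea in Python. It does not represent Truax's actual system; the grain rates, durations, and the "variability" control are illustrative values only.

```python
# Minimal granular-synthesis sketch (illustrative parameters, not Truax's system).
# Each grain is a short sine burst shaped by a Hann envelope; grains are emitted
# at a steady rate, so low variability yields a pitched tone and high variability
# a noise-like cloud.
import numpy as np

SR = 44100  # sample rate in Hz

def make_grain(freq, dur_ms, amp):
    """One enveloped grain: a short sine burst shaped by a Hann window."""
    n = int(SR * dur_ms / 1000)
    t = np.arange(n) / SR
    envelope = np.hanning(n)                 # bell-shaped grain envelope
    return amp * envelope * np.sin(2 * np.pi * freq * t)

def grain_cloud(seconds=2.0, grains_per_sec=200, freq=440.0,
                dur_ms=40.0, variability=0.0, seed=0):
    """Sum grains emitted at a steady rate; 'variability' (0..1) randomizes
    frequency and duration, moving the result from pitched toward noise-like."""
    rng = np.random.default_rng(seed)
    out = np.zeros(int(SR * seconds) + SR)   # extra room for the final grain
    hop = int(SR / grains_per_sec)           # samples between grain onsets
    for onset in range(0, int(SR * seconds), hop):
        f = freq * (1 + variability * rng.uniform(-0.5, 0.5))
        d = dur_ms * (1 + variability * rng.uniform(-0.5, 0.5))
        g = make_grain(f, d, amp=0.1)
        out[onset:onset + len(g)] += g       # overlap-add the grain into the output
    return out[:int(SR * seconds)]

pitched = grain_cloud(variability=0.0)   # similar grains -> cohesive pitched sound
noisy   = grain_cloud(variability=0.9)   # highly varied grains -> noise-like texture
```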
Of seemingly greater interest in recent decades is the granulation of real-world sampled sound. The technique was used to fascinating effect, alongside several other techniques, in Paul Lansky's Idle Chatter series beginning in the mid-1980s. Many DAWs, plug-ins, apps, and built-in or add-on objects for synthesis languages provide easy access to the technique. The video below provides a starting point for how the technique works and what some of the basic parameters are. Granulation of sound is essentially an automated form of amplitude modulation, in which portions of a larger sound file are multiplied by an envelope over and over again to form what are called grains. We will examine the parameters involved below.
A sound file (or a real-time input) is read into a buffer. The composer can then select which region of the sound file to granulate, up to and including the entire file. The process is limited to the bounds of that selection, though it is not uncommon to change those bounds as the process runs. From here, we can list the primary control elements found in most granulators.
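Before turning to those control elements, here is a rough sketch of how a buffer, a region selection, and enveloped grains fit together. It assumes a 16-bit mono WAV file readable by Python's standard wave module; the file name and every parameter value are hypothetical.

```python
# Sketch of sampled-sound granulation: slices of a buffer, each multiplied by an
# envelope (a grain), are overlap-added into an output stream.
import wave
import numpy as np

def load_mono(path):
    """Read a 16-bit mono WAV file into a float buffer in the range [-1, 1]."""
    with wave.open(path, "rb") as w:
        sr = w.getframerate()
        data = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    return data.astype(np.float64) / 32768.0, sr

def granulate(buffer, sr, region=(0.0, 1.0), grain_ms=60.0,
              grains_per_sec=50, out_seconds=4.0, seed=0):
    """Granulate the selected region: grab enveloped slices at random positions
    within the region and overlap-add them into the output."""
    rng = np.random.default_rng(seed)
    start = int(region[0] * len(buffer))      # region bounds as fractions of the file
    end = int(region[1] * len(buffer))
    glen = int(sr * grain_ms / 1000)
    envelope = np.hanning(glen)               # the envelope each slice is multiplied by
    hop = int(sr / grains_per_sec)
    out = np.zeros(int(sr * out_seconds) + glen)
    for onset in range(0, int(sr * out_seconds), hop):
        pos = rng.integers(start, max(start + 1, end - glen))  # where to read this grain
        seg = buffer[pos:pos + glen]
        if len(seg) < glen:                   # skip grains that would run past the buffer
            continue
        out[onset:onset + glen] += seg * envelope   # slice x envelope = one grain
    return out[:int(sr * out_seconds)]

# Hypothetical usage:
# buf, sr = load_mono("voice.wav")
# cloud = granulate(buf, sr, region=(0.25, 0.75))
```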
If the grain production rate is above sub-audio rate (20 grains per second and above), and there is no jitter (see below), the grains fuse into a continuous, pitched sound: the grain rate itself is heard as a frequency, and the repeating envelope acts as amplitude modulation, adding sidebands around the pitch content of each grain.
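As a brief formal aside (not part of the original discussion): one way to see why a steady, jitter-free grain stream sounds pitched is that emitting an identical grain $g(t)$ every $T = 1/R$ seconds produces a strictly periodic signal, so its spectrum contains only components at multiples of the grain rate $R$, weighted by the grain's own spectrum $G(f)$:

$$y(t) = \sum_{k=-\infty}^{\infty} g(t - kT) \qquad\Longrightarrow\qquad Y(f) = R\,G(f)\sum_{n=-\infty}^{\infty}\delta(f - nR)$$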
Common Granulation Variants
There are certain added bells and whistles that have come to typify most granulation applications. First and foremost is the addition of constrained randomness to the basic parameters, usually expressed as a percentage or a rate. The most common name for this randomness is jitter, though it may go by other names relating to randomness. Not all applications provide every possibility, but the parameters most commonly given jitter controls are the grain's start position in the buffer, its duration, its pitch (playback rate), its amplitude, and its stereo placement.
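As a rough sketch of how such jitter is typically implemented (the function and parameter names here are my own, not taken from any particular application), each grain's nominal value is simply offset by a bounded random amount before the grain is generated:

```python
# Illustrative jitter: each grain parameter gets a bounded random offset,
# expressed here as a fraction of its nominal value (names are hypothetical).
import numpy as np

rng = np.random.default_rng()

def jittered(nominal, jitter_amount):
    """Return the nominal value offset by up to +/- jitter_amount (0..1) of itself."""
    return nominal * (1.0 + jitter_amount * rng.uniform(-1.0, 1.0))

# Per-grain parameters with, e.g., 10% duration jitter and 5% pitch jitter:
grain_dur_ms  = jittered(50.0, 0.10)    # grain duration
playback_rate = jittered(1.0, 0.05)     # pitch / transposition
read_pos_sec  = jittered(2.0, 0.20)     # start position in the source buffer
grain_amp     = jittered(0.5, 0.15)     # amplitude
```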
If you wish to explore granulation of sampled sound, John Gibson's Granulator app can be downloaded here (MacOS) or here (Win). Below is an example of its use with a variety of the settings discussed above. If you find a spot that interests you, stop the video and examine the settings.