Score Preprocessors, Algorithmic and Generative Systems
Max Mathews wrote in The Technology of Computer Music, “Scores for the instruments thus far discussed contain many affronts to a lazy composer, and all composers should be as lazy as possible when writing scores.” While he was referring to MUSIC V’s score-conversion routines, composers with programming skills had, from early on, been writing routines to generate score material autonomously from processes and algorithms of their own design. Even MUSIC V contained note-event-generating routines (if you knew enough Fortran to use them).

Before long, score preprocessor languages were developed for more general use. The best known was Score 11, written in 1982 by Alexander Brinkman (Eastman School of Music) to generate notes for Barry Vercoe’s Music 11, and subsequently used with Vercoe’s successor language, Csound. By generating scores of user-prescribed note-event gestures, it freed the composer from the tedium of computing and then typing in automated patterns of pitches, rhythms, timbres, and dynamics (such as a smooth crescendo over hundreds of notes). Score 11’s input was based on the typed alphanumeric music transcription coding developed by Leland Smith (Stanford) for his notation program, Score. A subsequent Csound preprocessor, nGen, written by Mikel Kuehn, built on Score 11’s ideas, allowing composers to easily generate expansive granular textures, randomly distributed parameter values, and dense chords. Both programs output a text-based score file for Csound to read, with parameter fields corresponding to those the composer designated in the orchestra file.
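Although the actual input languages of Score 11 and nGen are not reproduced here, the underlying mechanism is easy to sketch: a short description of a gesture is expanded into hundreds of ordinary Csound note statements. The Python sketch below generates the crescendo example mentioned above; the instrument number and the meaning of the p-fields (start, duration, amplitude, frequency) are assumptions made for illustration, not Score 11 or nGen conventions.

```python
# Minimal sketch of what a score preprocessor does: expand a short
# "gesture" description into hundreds of Csound i-statements.
# Assumes a hypothetical Csound instrument 1 whose p-fields are
# p2 = start time, p3 = duration, p4 = amplitude, p5 = frequency;
# the real Score 11 / nGen input languages are far richer than this.

def crescendo_score(num_notes=200, total_dur=20.0,
                    amp_start=1000, amp_end=20000, freq=440.0):
    """Yield Csound i-statements forming a smooth crescendo."""
    note_dur = total_dur / num_notes
    for i in range(num_notes):
        start = i * note_dur
        # Linear interpolation of amplitude across the whole gesture.
        amp = amp_start + (amp_end - amp_start) * i / (num_notes - 1)
        yield f"i1 {start:.4f} {note_dur:.4f} {amp:.1f} {freq:.2f}"

with open("crescendo.sco", "w") as sco:
    sco.write("\n".join(crescendo_score()) + "\ne\n")
```

The composer describes the gesture; the program does the typing.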
Beginning with Iannis Xenakis, composers have explored computational methods for generating music algorithmically by defining a complex of random or probabilistic (stochastic) possibilities within user-defined constraints. A good example applied to synthesis languages was Stochos (named after Xenakis’s own mainframe program), a Max/MSP patch/program developed by Sinan Bokesoy and Gerard Pape around 2000: a “real-time stochastic, chaotic, and deterministic event generator...with a unique control interface for assigning stochastic, chaotic, or deterministic curves to different sound transformation and synthetic parameters” (Computer Music Journal, Vol. 27, No. 3, Autumn 2003, pp. 33-43).
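In the abstract, the technique amounts to drawing each event’s parameters from distributions whose ranges and densities the composer specifies. The Python sketch below shows one such constrained-random event generator; the particular distributions, parameter names, and ranges are illustrative assumptions and do not reproduce Stochos or Xenakis’s own programs.

```python
# A generic constrained-random ("stochastic") event generator, in the
# spirit of the approach described above. This is only an illustration;
# it is not Xenakis's ST program or the Stochos patch, whose actual
# algorithms and interfaces are far more elaborate.

import random

def stochastic_events(num_events=50, density=4.0,
                      pitch_range=(48, 84), dur_range=(0.05, 0.5)):
    """Generate (onset, pitch, duration) events within user-set bounds.

    density is the mean number of events per second; onsets follow an
    exponential (Poisson-process) distribution, a common stochastic model.
    """
    t = 0.0
    for _ in range(num_events):
        t += random.expovariate(density)       # random inter-onset time
        pitch = random.randint(*pitch_range)   # uniform pitch choice
        dur = random.uniform(*dur_range)       # uniform duration choice
        yield (round(t, 3), pitch, round(dur, 3))

for onset, pitch, dur in stochastic_events():
    print(f"t={onset:7.3f}s  midi={pitch:3d}  dur={dur:.3f}s")
```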
With the advent of MIDI, users of the early, pre-audio versions of Max frequently explored algorithmic and generative processes applied to synthesizers or to MIDI-controlled acoustic instruments such as the Yamaha Disklavier. Many modern languages, such as RTcmix and SuperCollider, have built-in capabilities for the automated generation of parameters. Some, such as Max and RTcmix, allow users to program their own add-ons in JavaScript or Python, or even to develop their own modules and routines that integrate with the rest of the program. SuperCollider has often been used to generate real-time algorithmic music interactively, either as a solo performance or in a networked laptop ensemble with other SuperCollider users. The music language ChucK, written by Ge Wang but not explored here, was designed almost exclusively for this "on-the-fly" interactive laptop orchestra (LOrk) practice.
Stanford Laptop Orchestra: Ge Wang, founding director
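As a language-neutral illustration of the kind of generative process such environments run in real time, here is a minimal Python sketch of a constrained random walk over a pitch set, one of the simplest devices in live algorithmic performance; the scale, step size, and durations are arbitrary choices, and SuperCollider, ChucK, or Max would express the same idea in their own pattern and scheduling idioms.

```python
# A tiny example of the kind of generative process a live coder might
# run: a constrained random walk over a pitch set, emitted as note events.
# The scale, ranges, and durations below are invented for illustration.

import random

PITCH_SET = [60, 63, 65, 67, 70, 72]   # assumed C minor pentatonic, MIDI note numbers

def random_walk_notes(num_notes=32, max_step=2):
    """Wander through the pitch set by small random steps."""
    index = random.randrange(len(PITCH_SET))
    for _ in range(num_notes):
        step = random.randint(-max_step, max_step)
        index = max(0, min(len(PITCH_SET) - 1, index + step))
        duration = random.choice([0.25, 0.25, 0.5, 1.0])  # weighted toward short notes
        yield (PITCH_SET[index], duration)

for pitch, dur in random_walk_notes():
    print(f"note {pitch}  dur {dur} beats")
```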