Title: "Software As Sculpture: Creating Music From the Ground Up"
Author: John Bischoff (composer, teacher, and equipment manager), 1374 Francisco St., Berkeley, CA 94702

Abstract:

The composer discusses two electronic music compositions that typify his approach to composing with small computer music systems. Bischoff's compositional technique is characterized by bottom-up software design and close attention to emerging details of the medium. These methods result in a sculptural process of composition that is unique in the field of computer music. Some of the details of this process are outlined and distinctive features of the music are discussed.

Article:

This article discusses two compositions of mine that were composed by writing software for small computer music systems. The process by which these pieces were constructed is characterized by bottom-up design and the empirical method. Though this way of working within a medium is common in visual art, it is unusual within the field of computer programming. It is generally thought that programmers implement their ideas on the computer with as few unforeseen developments as possible. Ideas are thought to flow in one direction, from the operator to the machine. From the vantage point of an artist, though, it is just as easy to see the flow going in the opposite direction: the medium reveals itself as the artist proceeds, and those material details shape the direction in which the artist continues. This interaction between artist and medium, and the intimacy it suggests, is no different for a computer artist.

I began using computers to make electronic music in 1977. Prior to that time I had composed electronic pieces using analog synthesizer modules, tape recorders and custom circuits in live interactive configurations with performer input. These early pieces shared an attention to details of the electronic medium and the use of those details in shaping the unique character of each work. The behavior of a circuit often defined the musical structure of a piece. This focus on the material nature of the medium is also typical of much of my work with computers [1].

My introduction to computers came about through my association with James Horton, an experimental music composer and theorist based in the SF Bay Area who introduced me to the KIM-1 microcomputer around 1976. The KIM-1 was a single-board, 6502-based system with a built-in keypad and 6-digit LED display [2]. Gerald Mueller, of the Electronic Music Lab at City College of San Francisco, also provided me with early opportunities to program in assembly language. After buying my own KIM, I started writing programs in 6502 machine code to make solo pieces and, eventually, to play in computer network bands with Horton and others [3]. In this article, I will focus on two examples of my solo computer work, both of which were also adapted for use in network bands at one time or another.

AUDIO WAVE (1978-80)

AUDIO WAVE [4] was written for pianist Rae Imamura and first performed by her at 1750 Arch Street in Berkeley on May 30, 1980. My idea was to make a live computer piece for Rae where both of her hands would be continually active, as in her conventional keyboard playing, but where her actions would serve to influence an ongoing musical output rather than have the task of initiating each sound. To this end, I extensively modified an earlier random tone-generating program I had written for the KIM, ran it on two KIMs simultaneously, and made its behavior partially controllable by pressing keys on the KIM keypads.
The final version of the program, written in 6502 machine code, generates an 8-bit sonic output characterized by a continuous stream of highly modulated tones. Each tone starts as a simple ramp, triangle or random waveshape built up in memory as a wave table. At each audio cycle output, the waveshape is modified according to the following scheme: the value at a current point in the wave table is altered a number of steps toward a maximum or minimum, and the current point itself is moved a number of steps toward the beginning or end of the table. The result after many audio cycles is a continually changing waveshape that waffles around a given area of points in a repeating modulation pattern (see Fig. 1a and 1b). This technique creates tones with overtone spectra that expand and contract to varying degrees at varying rates.

An additional timbral effect arises from what might be considered a defect in my design: many program functions, including keypad scanning, wave modification, general housekeeping, etc., are coded within the main audio loop at the end of each waveform cycle. This introduces a time delay before the start of the next wave front and a consequent flat portion at the end of each waveform (see Fig. 1a and 1b). This dead period in each audio cycle constantly fluctuates in length due to the wiles of program overhead and therefore creates a kind of indeterminate pulse-width modulation which slightly perturbs both the pitch and the timbre of the tone. This unpredictable musical result came about as a direct consequence of an empirical programming style which relied on demonstrated effect rather than preconceived limits on what might be musical. Many of these effects became integral parts of the sonic character of the piece.

The flat portion of the wave is most substantially lengthened by the insertion of "key" functions triggered as the performer depresses KIM keypads to influence the behavior of the program. The sudden presence of this code drops the pitch of the sounding tone noticeably. This feature was retained for its mechanical quality: one can visualize the performer's keystrokes as not only shaping the sonic output overall but also leaving their mechanical impression on the course of each tone. The sixteen "key" functions for each KIM include: sustain current tone, change tempo, contract pitch range, repeat last 3 events, switch starting waveform, etc.
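As an illustration of the scheme described above, here is a minimal sketch in Python rather than in the original 6502 machine code. The table length, step sizes, and the ranges of the random choices are my own assumptions, not values from the KIM program; only the outline of the loop follows the description.

    import random

    TABLE_LEN = 256            # assumed wave-table length
    MIN_VAL, MAX_VAL = 0, 255  # 8-bit output range

    # start from a simple ramp waveshape built up in memory as a wave table
    table = [int(i * MAX_VAL / (TABLE_LEN - 1)) for i in range(TABLE_LEN)]
    point = TABLE_LEN // 2     # the "current point" in the table
    value_dir = +1             # push the value toward the maximum (+1) or minimum (-1)
    point_dir = +1             # move the point toward the end (+1) or beginning (-1)

    def next_cycle(samples_out):
        """Output one waveform cycle, then modify the waveshape."""
        global point, value_dir, point_dir

        # play the table once (one audio cycle)
        samples_out.extend(table)

        # alter the value at the current point a number of steps toward max or min
        step = random.randint(1, 8)
        table[point] = max(MIN_VAL, min(MAX_VAL, table[point] + value_dir * step))
        if table[point] in (MIN_VAL, MAX_VAL):
            value_dir = -value_dir

        # move the current point a number of steps toward the beginning or end
        point = max(0, min(TABLE_LEN - 1, point + point_dir * random.randint(1, 4)))
        if point in (0, TABLE_LEN - 1):
            point_dir = -point_dir

        # stand-in for the program overhead at the end of the loop: a flat
        # "dead" portion of fluctuating length before the next wave front
        samples_out.extend([table[-1]] * random.randint(2, 20))

    out = []
    for _ in range(80):   # roughly the 80 cycles of modification shown in Fig. 1a
        next_cycle(out)

In the piece itself, the "key" functions execute inside this same loop, so each keypress lengthens the flat portion and audibly drops the pitch of the sounding tone.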
The overall sonic effect of AUDIO WAVE is one of unconstrained electronicism, an embracing of the electronic medium in its power to create a new voice. The nature of this voice is shaped by the limitations of the KIM-1 medium and my response to them. The sound of this voice is the vibratory work of a speaker cone used as an instrument. It does not rely on simulation of acoustic instruments for its musical effects. Rather, it seeks musicality in properties discovered within the electronic system itself. One can view its characteristics as uniquely electronic compensations for the absence of musical phenomena found in acoustic instruments:

-- The missing complex onset of an acoustic attack is made up for by the ebb and flow of waveshaping throughout the electronic tone. It is as though the usual richness at the start of each tone is now spread out in time over its duration. Without the beginning, middle, and end of the acoustic envelope, tones switch from one to another to form one continuous ribbon. What is highlighted in AUDIO WAVE is the bending and warping of that ribbon.

-- The age-old magic of an acoustic sound being struck into being by a human agent is replaced by the surging potential of a waffling speaker. There is a different quality of volition inherent in electronic sound. Like the difference between standing on land and floating on water, motion starts from the individual in the acoustic realm and is inherent in the environment in electronics.

-- The subtle imperfections of pitch and amplitude naturally occurring in a sustained acoustic tone are here replaced by more methodical artifacts. In AUDIO WAVE, small shifts in pitch occur as the result of unorthodox program design: the audio output routine is not insulated from other program functions. As these functions are brought into play by the performer, the length of the output loop is altered and therefore both pitch and timbre are unpredictably shifted. Sudden changes in waveform produce sudden changes in the position of the speaker cone, resulting in pops or clicks in the sound. These are welcome imperfections that become part of the character of the instrument. Tones are constantly revoiced in this manner, creating extended sequences of complex timbral articulation.

The electronic properties outlined above do not have the same musical function as their acoustic counterparts; rather, I am suggesting that some of the richness of the acoustic tradition, which is missing in electronics, is here replaced by new properties discovered in the electronics themselves rather than adopted from the acoustic world. These properties were discovered by empirical play with the medium. As the construction of the music progressed, new artifacts emerged and were incorporated into the music or were discarded by recoding those sections. The fullness of this music is therefore established on an electronic basis; it is composed within an electronic music tradition.

NEXT TONE, PLEASE (1984-85)

NEXT TONE [5] was written in FORTH on a Commodore 64 computer. Additional tone-generating hardware was added to the Commodore to give it a total of 12 digital oscillators, which make up the basic signal source for the piece. All enveloping was done externally with a SERGE MODULAR analog synthesizer panel under control of the computer [6]. The piece was first performed by me on Nov. 2, 1985 at the New College Art Gallery in San Francisco.

NEXT TONE conjures a rarefied world of percussive block chords moving slowly across a phrase of fixed length. The chords are tuned to a subset of 31-tone equal temperament. The source chord progression for each phrase is specified; what varies is the amount of the progression employed and the way the progression is distributed in time. In addition, a performer selects among several modes of embellishment in the upper parts as the chords unfold.
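Since the chords are drawn from 31-tone equal temperament, a brief sketch of that tuning may be useful: each step divides the octave into 31 equal parts of roughly 38.7 cents, so 10 steps come very close to a just major third. The reference pitch and the example chord below are illustrative assumptions only, not tuning data from the piece.

    def tet31_freq(steps_from_ref, ref_hz=440.0):
        """Frequency of a pitch a given number of 31-TET steps from the reference."""
        return ref_hz * 2 ** (steps_from_ref / 31)

    # hypothetical 3-tone chord in scale steps: unison, ~387-cent third, ~697-cent fifth
    example_chord = [0, 10, 18]
    print([round(tet31_freq(s), 2) for s in example_chord])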
Each phrase is constructed by the program according to the following scheme, the order of which recapitulates my compositional process:

1) a slow procession of primary chords (bass & tenor) that subdivide the phrase into a simple polyrhythm (3 to 4, 4 to 5, or 5 to 6). This choice also determines the amount of the progression required, as more chords are needed in, for example, a 4 to 5 than in a 3 to 4 relationship.

2) a layer of secondary chords (alto & soprano) that fall slightly before, right on, or slightly after the primary units. The precise placement before or after is based on higher-number subdivisions of the phrase length (e.g. 24, 36, etc.). Additional "reflecting" chords can occur if the alto and soprano happen to fall within a certain proximity.

3) faster chordal embellishments that fill the space between the secondary chords.
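A schematic sketch of this three-layer scheme, in Python rather than the original FORTH, may help make it concrete. The phrase length and the grid of 24 subdivisions follow the text; the embellishment density is an assumption, and the "reflecting" chords are omitted for brevity.

    import random

    PHRASE_LEN = 15.0   # seconds: the fixed phrase length

    def build_phrase(poly=(3, 4), grid=24):
        # layer 1: primary chords (bass & tenor) locked in a slow polyrhythm
        bass  = [PHRASE_LEN * i / poly[0] for i in range(poly[0])]
        tenor = [PHRASE_LEN * i / poly[1] for i in range(poly[1])]

        # layer 2: secondary chords (alto & soprano) falling slightly before,
        # right on, or slightly after the primary units, on a finer subdivision
        tick = PHRASE_LEN / grid
        alto    = [t + random.choice((-tick, 0.0, tick)) for t in bass]
        soprano = [t + random.choice((-tick, 0.0, tick)) for t in tenor]

        # layer 3: faster embellishments (.2-1 sec. apart) filling the space
        # between successive secondary chords
        secondary = sorted(alto + soprano)
        embellishments = []
        for a, b in zip(secondary, secondary[1:]):
            t = a + random.uniform(0.2, 1.0)
            while t < b:
                embellishments.append(round(t, 2))
                t += random.uniform(0.2, 1.0)
        return bass, tenor, alto, soprano, embellishments

    bass, tenor, alto, soprano, fills = build_phrase(poly=(4, 5))  # one possible phrase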
Each successive layer was added to the piece after extensive listening to the previous layer(s). Particular attention was paid to the manner in which time seemed to flow: if one rhythmic subdivision seemed too dominant, I tried to counterpose it in the next layer. Each rhythmic component exerts its own gravitational pull on the listener; where multiple components are balanced, a single frame of reference is suspended and a kind of polyphony of forward momentum is allowed.

Describing the whole in terms of each of the perceived rhythmic layers:

-- The longest event is the phrase itself (15 sec.) which in its strict regularity provides a fixed gesture within which the changing events occur.
-- Moving a bit faster are the primary chords (3-5 sec.) which are locked in a slow polyrhythm. They unfold at a slow enough rate that their polyrhythmic relationship is just a step beyond constant awareness. They act as the fundamental points of chordal transition against which the other events are heard.
-- The secondary chords occur at roughly the same rate as the primary chords but act as their attendants: appearing synchronous with, or right next to, the primary chords, they anticipate, reinforce or reflect the moments of primary transition.
-- At the fastest level, embellishments (.2-1 sec.) fill the space between chords. This motion is fast enough to be assertively metrical and acts to methodically mark time between events.

What characterizes NEXT TONE is the play between the expected and the unexpected. The use of a polyrhythmic relationship as the primary framework for all other rhythmic placement grounds the sense of time flow on an essentially ambiguous phenomenon. This is because the listener can shift attention from one component subdivision to the other, each time shifting the reference for rhythmic activity. In addition, the slow tempo injects enough feeling of anticipation to make the arrival of each chord slightly unpredictable. Below a certain speed, metrical events tend to loosen their bonds while gaining an air of inevitability. There are also very slight delays introduced by program overhead (each upcoming phrase is calculated during the execution of the current phrase). These delays add further minute displacements to the positioning of each chord and lend to the music a slight feeling of effort which it would not otherwise have. Each phrase progresses harmonically with just a touch of forward momentum. Within the gestural sweep of a fixed phrase length, the shifting details of chord placement parse time in a patchwork of moment-to-moment arrivals and departures. The sense of a musical present is diffused and a more global experience of time is induced.

The development of NEXT TONE grew out of my involvement with the music of Charles Ives, which was introduced to me by James Tenney [7]. Having listened to Ives' music extensively, I have become particularly interested in what I hear as Ives' expansive sense of a musical moment. Ives stretches the definition of a moment to include such rambling arrivals and departures of tones and chords that the listener's sense of the present is widened to include a bit more of the past and future. The moment is felt as an intersection of multiple paths which have suddenly gained additional meaning through close proximity. This potential resonance between rhythmically independent elements is an idea that permeates Ives' music at many levels. In NEXT TONE, I tried to fashion a musical context where cascading near misses in simultaneity would generate surprising instances of musical meaning. The primary chord motion is surrounded by attendant chords and embellishments. The alignment of these accompanying events varies, but the whole moves slowly enough in time to amplify the resulting effects. The slight feeling of anticipation in the music seems to enhance the perception of detail.

I think of these pieces as being sculptural because of the way in which they were written in software. Starting from an initial idea or perception, each new facet was coded and then fine-tuned to enhance its effect. As the material developed, I tried to peer into the emerging behavior of the machine to see where the next musical angle would come from. As elements accumulated, I tried to keep those that belonged and discard those that I felt did not. Like sculpture, the audible result retains some evidence of its construction.

Notes and References:

1. See Tim Perkis' comments in the article "The Future of Music", compiled by Larry Polansky and published in LEONARDO, Vol. 20, No. 4 (1987), p. 365.

2. See the chapter "Improvisation with George Lewis" by Curtis Roads in COMPOSERS AND THE COMPUTER, edited by Curtis Roads and published by William Kaufman, Inc. (1985), p. 79.
3. The League of Automatic Music Composers, the first microcomputer network band, was active 1978-82 and had as its members, at one time or another: David Behrman, John Bischoff, Donald Day, Rich Gold, Jim Horton and Tim Perkis. The group played music by connecting their computers into a network and passing data back and forth to influence the music as they played. See the article "Music for an Interactive Network of Microcomputers" by John Bischoff, Rich Gold, and Jim Horton, published in FOUNDATIONS OF COMPUTER MUSIC, edited by Curtis Roads and John Strawn, MIT Press (1985), pp. 588-600 (originally published in COMPUTER MUSIC JOURNAL 2(3):24-29, 1978). The Hub, a band continuing to work in the network music form since 1986, has as its members: John Bischoff, Chris Brown, Scot Gresham-Lancaster, Tim Perkis, Phil Stone, and Mark Trayle. See the article "Paper Hubrenga" by Mark Trayle and John Bischoff in IS JOURNAL #9, Vol. 5, No. 1, published by International Synergy (1990), pp. 74-85. For recorded examples of both The League and The Hub, see Discography below, items 2 & 5.

4. For recordings of AUDIO WAVE, see Discography below, items 1 & 3.

5. For recordings of NEXT TONE, PLEASE, see Discography below, items 1 & 3.

6. SERGE MODULAR, 572 Haight St., San Francisco, CA 94117.

7. I studied composition and piano with Tenney at the California Institute of the Arts in 1970/71. See Tenney's book "Meta-Hodos", published by Frog Peak Music, Box 9911, Oakland, CA 94613.
Partial Discography:

1. ARTIFICIAL HORIZON (ART 1003), a CD of computer music by John Bischoff and Tim Perkis, available from Artifact Recordings, 1374 Francisco St., Berkeley, CA 94702. This CD includes performances of AUDIO WAVE and NEXT TONE, PLEASE.

2. THE HUB (ART 1002), a CD of nine pieces by the computer network band The Hub, available from Artifact Recordings (see above).
3. NEXT TONE, PLEASE, a cassette of six electronic compositions by John Bischoff, available from Frog Peak Music, Box 9911, Oakland, CA 94613. This cassette includes early performances of AUDIO WAVE and NEXT TONE, PLEASE.

4. JUST FOR THE RECORD (VR 1062), an LP by "Blue" Gene Tyranny performing music by Robert Ashley, John Bischoff, Paul Demarinis, and Phil Harmonic, available from Lovely Music Ltd., 105 Hudson St., New York, NY 10013.

5. LOVELY LITTLE RECORDS (VR 101-06), six EPs of music by John Bischoff, Paul Demarinis, Phil Harmonic, Frankie Mann, Maggi Payne, and "Blue" Gene Tyranny, available from Lovely Music Ltd. (see above). John Bischoff's EP includes a selection by The League of Automatic Music Composers recorded in 1978.

Glossary:

electronicism -- an artifact, gesture, or feature of the music that is characteristic of electronic music technology.
polyrhythm -- used in the sense of a "cross rhythm," where two contrasting subdivisions of a whole time unit are played against each other, as in 3 against 2 or 4 against 3.

Figure Legends:

Fig. 1 -- Waveform modulation in the piece AUDIO WAVE. a) an idealized superimposition of multiple waveforms that are the result of approximately 80 cycles of modification carried out on a single ramp wave. b) several typical waveforms extracted from a).

Fig. 2 -- A diagram representing 2 possible phrases in NEXT TONE, PLEASE. Each dot represents a 3-tone chord in the range specified to the left. The primary rhythmic relation is the polyrhythm between Bass & Tenor (3 to 4 in phrase 1, 4 to 5 in phrase 2). The Alto and Soprano occur in relation to the Bass and Tenor, respectively. The short vertical dashes in the Alto and Soprano represent embellishments of the chord just sounded.