Hub internal document 2892w

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

"The Glass Hand" by John Bischoff Nov. 1991

This piece involves multiple sonic layers that transform themselves at differing rates. Each player produces one layer. At one extreme is sound that barely changes, at the other is sound that rapidly jumps from one state to the next. The way each player moves between these two extremes is determined by network interaction. The player's manual actions serve to mix their layer with the whole and to fuel the network data.

Basic Materials -------------------------------

This is a 10 minute piece for any number of steady state sounds. "Steady state" means anything from a single continuous sound to a multi-component, dynamically vibrating sound mass (good lord!), as long as the sound appears to stay in one place.

Once a number of steady state sounds are defined, make your program capable of executing a transition between one sound and the next by stepping linearly from one set of defining values to another. Make the rate of the transition variable from approximately 1/10th second to 10 seconds.
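
The transition engine described above can be sketched as a simple linear interpolator over a list of defining values. This is a minimal sketch, assuming a sound is defined by a list of floats; the function names and the 20-steps-per-second update rate are assumptions, not part of the score.

```python
def step_transition(start, target, rate_sec, steps_per_sec=20):
    """Yield intermediate parameter sets, stepping linearly from start to
    target over rate_sec seconds (roughly 0.1 s to 10 s per the score)."""
    n = max(1, int(rate_sec * steps_per_sec))
    for i in range(1, n + 1):
        t = i / n
        yield [a + (b - a) * t for a, b in zip(start, target)]

# Example: a two-parameter sound gliding over one second.
frames = list(step_transition([100.0, 0.2], [200.0, 0.8], rate_sec=1.0))
```

Each yielded frame would be sent to the synthesis engine at the step rate; the last frame always lands exactly on the target definition.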

These steady state sound definitions can come from another piece or be generated randomly. The requirement is that they be continuously modifiable. It is also possible to make up "sets" of sound definitions where gradual transitions can be made between sounds within sets but not across sets. As a performer you could then switch sets occasionally during the piece.

Network Dimension -----------------------------

"Triggers" - which start a transition in your sound - and "Speeds" - which set your current rate of transition - are determined by your fellow Hubsters and are encoded in MIDI "note ons". Use the following guidelines to generate your "note on" messages (for 6 players):

1. Player 1 triggers player 2 and sends speeds to player 3;
   player 2 triggers player 3 and sends speeds to player 4;
   and so on, with a wrap-around from 6 to 1.

2. Triggers = note number 100; send triggers at a periodic rate proportional to your overall amplitude. The range of trigger rates is approximately 1/10th second to 10 seconds as mentioned above.

3. Speeds = note numbers 0->99; send speeds proportional to your current pitch.
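
The routing and encoding guidelines above could be sketched like this. The score only fixes note number 100 for triggers and 0-99 for speeds; the 1-based player arithmetic, pitch range, and function names here are illustrative assumptions.

```python
NUM_PLAYERS = 6

def trigger_target(player):
    """Who receives my triggers: 1 -> 2, 2 -> 3, ..., 6 wraps to 1."""
    return player % NUM_PLAYERS + 1

def speed_target(player):
    """Who receives my speeds: 1 -> 3, 2 -> 4, ..., wrapping around."""
    return (player + 1) % NUM_PLAYERS + 1

def speed_note(pitch_hz, lo=20.0, hi=2000.0):
    """Map current pitch proportionally onto speed note numbers 0-99.
    The 20 Hz to 2000 Hz range is an assumed working tessitura."""
    frac = (min(max(pitch_hz, lo), hi) - lo) / (hi - lo)
    return int(round(frac * 99))
```

A trigger would then go out as note number 100 to `trigger_target(me)`, at a period scaled to your current amplitude.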

Manual Performance Controls ---------------------

Mix your amplitude actively as the piece unfolds. Remember, these changes affect the rate that you send triggers to your neighbor. Your program should continue to send network data even if your audio is turned down.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Crybaby / M. Trayle '91 (BMI,SPCA,FBI,DDT,LSD)

This piece was inspired by the Crybaby wahwah pedal.

The General Idea: the audio from some sound source goes through a chain of signal processors. the output of each signal processor goes to a mixer input, then the whole mess is mixed and diffused in hi-fi stereo. various parameters of the signal processors are controlled by data passed around through the Hub/OpCode box. the data travels in a loop. Audio:chain / Data:loop.

I know that John, Scott, Chris, and I have signal processors... I'm not sure about Phil and Tim. Whoever doesn't have a signal processor can play/be the sound source. Right now I'm thinking CD player for sound source... maybe a sound fx CD. Or maybe a radio. Tim? Shortwave?

Specifics: The thing about the wahwah pedal is that you can't get from A to C without going through B, i.e., it ain't discrete (in more ways than one). So the processor players need to select an effect that already has an oscillating component (I'll probably use a chorus+delay or flange+delay, e.g.), or write some code to make some portion of a chosen effect oscillate. The data from one of your fellow Hubsters will determine the frequency of this oscillation. Still with me? Okay, data only flows one way through this loop. You get your data from the guy "upstream" from you. That data will come in the form of a MIDI noteon, where note value = frequency of oscillating effect and velocity = width (amplitude) of the effect.

Once you use the data, change the value of the noteon and then pass it on to the guy downstream. Rule for changing noteon: use a translation table, i.e., no adding of random numbers to noteons, no xoring it with your favorite prime number.
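
A minimal sketch of the translation-table rule, assuming the noteon arrives as a (note, velocity) pair. The particular permutation below is arbitrary; what matters, per the rule, is that the table is built once and then frozen -- no randomness or arithmetic tricks at pass-on time.

```python
# Fixed lookup: each incoming note value maps to exactly one outgoing value.
# (7 is coprime with 128, so this formula yields a permutation of 0-127.)
NOTE_TABLE = {n: (n * 7 + 13) % 128 for n in range(128)}

def pass_downstream(note, velocity):
    """Use the incoming value for your effect, then translate the note
    through the fixed table before passing it to the guy downstream."""
    return NOTE_TABLE[note], velocity
```

Because the table is one-to-one, no two upstream notes collapse into the same downstream note, so information keeps circulating around the loop.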

Comments? Questions? Flames?

Pictures and more details will be available soon.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Listener's Digest by Scot Gresham-Lancaster

Each of the 6 players is responsible for a monophonic melody that is created by consensus with the other 5 voices. The choices of pitch will be based on decisions made in accordance with several parameters. The choice of each pitch will be entirely context dependent and will arise out of the context of the others' choices.

I will be passing midi note-on messages to each player informing them of the current 1:1 for their melody and the peak dynamic (ie velocity); a 0 velocity message means rest, anything else means playing is at your discretion. These note messages will be passed in a rhythmic manner, and if possible players should try to perform pitches in conjunction with the notes as they arrive.

Below are tables of criteria based on information from Barlow's "Two Essays on Theory" from the CMJ, which will provide an easy table-driven base from which to build your melody generating algorithms. If possible, at first, always go for the interval with the greatest "Harmonicity Value" in the context of the notes being played by other individuals. Every time you play a note, broadcast your note choice to each of the other players. For example, you communicate a note choice of the 3:2 of the current intonation as 159 (midi noteon to ch. 16) 03 02; for 27:16 it would be 159 27 16. When you play a new note make sure to turn off the old one (ie 159 03 00 and 159 27 00). Each player will be responsible for keeping track of which intervals are available. The cumulative "harmonicity" of the current vertical sonority will be sent as a running 14 bit pitch bend on channel 16. So in this case if you get a note on message, you are dealing with a 3:2 and 27:16; from the table below this equals about .35606 or $.8562. The higher the value the better.
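
The broadcast encoding above can be sketched directly from the examples: status byte 159 ($9F, noteon on ch. 16), data bytes = the interval's numerator and denominator, and a 0 in the denominator slot to turn the old choice off. The function names are assumptions.

```python
def announce(numerator, denominator):
    """Broadcast a new interval choice, e.g. 3:2 -> [159, 3, 2]."""
    return [159, numerator, denominator]

def retract(numerator):
    """Turn off the previous choice, e.g. 3:2 -> [159, 3, 0]."""
    return [159, numerator, 0]
```

A player changing from 3:2 to 27:16 would thus send `retract(3)` followed by `announce(27, 16)`.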

Making a note choice should be done in whole or half steps in the opposite direction of the last leap. Leaps should be arbitrarily executed every once in a while, when no scalar choice is available or possibly when the edge of the tessitura is reached. Since I am asking for a monophonic melody, that means you can use MIDI pitchbend to sharpen any equal-tempered pitch to exactly the frequency needed. I will be adding MIDI note number plus 14 bit pitchbend value to the Harmonicity table as soon as I get finished calculating all of those. I will assume that you have set the global parameter on your MIDI sound producing unit to the range of a whole tone in either direction (the standard default). For those of you using Amiga local sound, simply multiply the frequency of the 1:1 times the "Frequency Ratio" in the table below to get the frequency.

The overall structure of the piece will start off as a well tuned chorale, but as this goes along vagrant "free style" intonation will be introduced into some of the players' information (ie differing 1:1s). As this begins to happen, more raucous timbres should be employed. The rhythmic activity level is canonic in nature and will increase in frenzy as the piece progresses; in keeping with this, the melodic choice will become more free and less reasoned as it progresses. At some point the activity will begin to slacken and we will wind down back to the tame chorale of the opening section, as we hunt for an arbitrary cadence.

Harmonicity table

Interval (cents)   Frequency ratio   Harmonicity
------------------------------------------------
       0.000           1:1            2.000000
     111.731          16:15           0.076531
     182.404          10:9            0.078534
     203.910           9:8            0.120000
     231.174           8:7            0.075269
     266.871           7:6            0.071672
     294.135          32:27           0.076923
     315.641           6:5            0.099338
     386.314           5:4            0.119048
     407.820          81:64           0.060000
     435.084           9:7            0.064024
     498.045           4:3            0.214286
     519.551          27:20           0.060976
     701.955           3:2            0.272727
     764.916          14:9            0.060172
     813.686           8:5            0.106383
     884.359           5:3            0.110294
     905.865          27:16           0.083333
     933.129          12:7            0.066879
     968.826           7:4            0.081395
     996.090          16:9            0.107143
    1017.596           9:5            0.085227
    1088.269          15:8            0.082873
    1200.000           2:1            1.000000
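
The "greatest Harmonicity Value" rule can be table-driven, as the score suggests. A sketch using a few entries from the table above; how you track which intervals are still "available" is your own bookkeeping, so the list passed in here is an assumption.

```python
# A few (numerator, denominator) -> harmonicity entries from the table above.
HARMONICITY = {
    (3, 2): 0.272727, (4, 3): 0.214286, (5, 4): 0.119048,
    (9, 8): 0.120000, (27, 16): 0.083333, (16, 15): 0.076531,
}

def best_interval(available):
    """Pick the available interval with the greatest harmonicity value."""
    return max(available, key=lambda ratio: HARMONICITY[ratio])
```

For example, among 5:4, 27:16 and 9:8 the rule favors 9:8, since 0.120000 edges out 0.119048.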

Approximate tessituras assigned in terms of equal temperament

        pitch       MIDI note#

john    b2 to b4    59 to 83
tim     g2 to g4    55 to 79
chris   e2 to e4    52 to 76
scot    a1 to a3    45 to 69
mark    f1 to f3    41 to 65
phil    d1 to d3    38 to 62

As a postscript I should add that I am intending to make a change to the Studio V patch so that we all have a global channel of MIDI ch 16 to communicate big info, like the pitchbend/group harmonicity number. I hope I am not asking too much....

HERE IS THE LONG AWAITED TABLE FOR "LISTENER'S DIE JEST". I HAVE GIVEN LOTS OF NEW INFORMATION HERE THAT SHOULD MAKE THE TASK AT HAND MUCH EASIER. THE LAST COLUMN IS THE ONE THAT YOU CAN BASE ALL YOUR HARMONIC DECISIONS ON, IN A MUCH MORE PALATABLE FORM. I.E. UNISON IS NUMBER 1, AN OCTAVE IS #2, A FIFTH IS #3, A FOURTH IS #4, ETC. THIS IS MUCH EASIER THAN THE HARMONICITY FACTOR BEFORE. TO PLAY A MIDI NOTE IN TUNE WITH PITCH BEND, TAKE THE 1:1 I GIVE YOU AS A NOTEON, ADD THE OFFSET, THEN USE ONE OF THE PITCH BEND OFFSETS IN THE TABLE. SOME SYSTEMS PARSE THE 14 BIT NUMBER INTO TWO 7 BIT MIDI DATA BYTES AND OTHERS NEED 2 SEPARATE 7 BIT BYTES AFTER THE BEND COMMAND. I HAVE ASSUMED THAT THE LSB IS 00; YOU CAN TWEAK THAT IF YOU THINK THESE INTERVALS ARE OUT OF TUNE, BUT WHEN I TESTED IT THESE WERE WITHIN A CENT. REMEMBER I WANT TO START WITH HARMONICALLY SIMPLE SUSTAINED TONES AT FIRST AND MOVE INTO HARMONIC COMPLEXITY AS THE PIECE ACCELERATES.

      lo to hi     ET offset       14-bit          7-bit MSB    Harmonic
      freq ratio   (+ midinote#)   HEX     DEC     HEX   DEC    Priority
      ------------------------------------------------------------------
 1.)   1:1              0          $2000    8192   $40    64        1
 2.)  16:15             0          $3100   12544   $62    98       17
 3.)  10:9              1          $2C00   11264   $58    88       15
 4.)   9:8              1          $3000   12288   $60    96        5
 5.)   8:7              2          $2480    9384   $49    73       18
 6.)   7:6              2          $2780   10112   $4F    79       19
 7.)  32:27             2          $2E80   11904   $5D    93       16
 8.)   6:5              3          $2380    9088   $47    71       10
 9.)   5:4              4          $1D80    7552   $3B    59        6
10.)  81:64             4          $2100    8448   $42    66       24
11.)   9:7              4          $2500    9472   $4A    74       21
12.)   4:3              5          $1F80    8064   $3F    63        4
13.)  27:20             5          $2180    8576   $43    67       22
14.)   3:2              7          $2004    8196   $40    64        3
15.)  14:9              7          $2780   10112   $4F    79       23
16.)   8:5              8          $2280    8832   $45    69        9
17.)   5:3              8          $2000   11392   $59    89        7
18.)  27:16             9          $2100    8448   $42    66       12
19.)  12:7              9          $2580    9600   $4B    75       20
20.)   7:4              9          $2B00   11008   $56    86       14
21.)  16:9             10          $1F00    7936   $3E    62        8
22.)   9:5             10          $2200    8704   $44    68       11
23.)  15:8             11          $1D80    7552   $3B    59       13
24.)   2:1             12          $2000    8192   $40    64        2
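
The tuning recipe (noteon = the 1:1 plus the ET offset, then the row's 14-bit bend, LSB assumed 00) might be sketched like this. The channel, the velocity of 100, and sending bend before noteon are assumptions, not part of the score.

```python
def tuned_messages(root_note, et_offset, bend14, channel=0):
    """Build the pitch-bend and note-on byte lists for one table row."""
    note = root_note + et_offset
    msb, lsb = bend14 >> 7, bend14 & 0x7F   # split 14-bit value into 7-bit bytes
    bend = [0xE0 | channel, lsb, msb]        # pitch bend: status, LSB, MSB
    note_on = [0x90 | channel, note, 100]    # velocity 100 assumed
    return bend, note_on

# 3:2 above a 1:1 of middle C: row 14 gives ET offset 7 and bend 8196.
bend, note_on = tuned_messages(60, 7, 8196)
```

This assumes the whole-tone bend range mentioned earlier, so $2000 (8192) is no bend at all.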

ADDRESS ALL QUESTIONS TO

HI LEE UNLIKE LEE

SGL

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

WHEELIES (1992) by Chris Brown

This piece uses Midi System Real Time messages, generated by one player, to play with the variability of rhythmic synchronization in the group. It also sets up a system of interaction in which members of the group change the rhythmic performance of each other's systems during the piece.

Each player has programmed their system to count Timing Clocks, and respond appropriately to Start, Stop, and Continue Midi messages. They are prepared to play repeating cycles of samples, or percussive voices, as controlled by three parameters called "Ictus", "Meter", and "Density". Ictus sets the number of timing clocks in a beat, "Meter" sets the number of beats in a cycle, and "Density" controls a percentage of the beats that will be silent. When a Start message is received (all players receive them at the same time, since System Real Time messages apply to all Midi channels) every player sends out a package of values for these three parameters to any other player(s) in the network. That player MUST implement this parameter data in the playback of the new section.
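
A station's clock-counting player might look like the sketch below. The score leaves open which beats fall silent under "Density", so this sketch freezes a random silence pattern per cycle; the class and method names are illustrative.

```python
import random

class CyclePlayer:
    """Counts Midi Timing Clocks and decides, per clock, whether a beat sounds."""

    def __init__(self, ictus, meter, density, seed=0):
        self.ictus = ictus    # timing clocks per beat
        self.meter = meter    # beats per cycle
        self.clock = 0
        rng = random.Random(seed)
        # density = fraction of the cycle's beats that stay silent,
        # chosen once here so the pattern repeats every cycle.
        self.silent = {b for b in range(meter) if rng.random() < density}

    def on_clock(self):
        """Call once per Timing Clock; returns True when a beat should sound."""
        on_beat = self.clock % self.ictus == 0
        beat_index = (self.clock // self.ictus) % self.meter
        self.clock += 1
        return on_beat and beat_index not in self.silent
```

On a Start message a player would reset `clock` to 0 and swap in whatever Ictus/Meter/Density package arrived from the network.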

The result of this situation is that the group plays a synchronized pulse-oriented music that is often in many meters, and subdivisions of the group pulse, at once. And each player can strategically affect the music of any other player, while giving up control of the same part of their own music to the group. My intention here has been both musical (to accomplish rational rhythmic complexities otherwise unperformable by humans) and social (to invent a new form of group music that at once allows individuality and submits the individual to the primacy of the group).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

"Waxlips" by Tim Perkis (1991)
===============================

Ok, hubbies, this is a score. I'd like to try to do one where you can really see if there is any emergent pattern to a static setup, where each station acts in a fixed, predictable way, but the interconnects are so complex that the overall behavior is still groovy.

The piece is simple. Each player does essentially the same thing: take keydown midi messages in, transform them in a regular way that I'll specify below, play the new transformed note and send out a copy of it to somebody. I'll "seed" the process at the beginning of the piece or section by sending out a few notes to start. The transformation can be anything you want, within these limitations:

* One note in, one note out. For every possible midi note and channel input combination (127 * 5 or 635 total) you define a unique transform to some other midi note and channel combination. Within any one performance of the piece this mapping is fixed: each time a particular note on a particular channel is received, the same transformed output is sent. No random number changes, no knob or slider or button adjustments, no algorithms which depend on previous states of your machine or previous input. A simple, fixed mapping. For example, the mapping could be as simple as: send the same note out, transposed up a fifth, to the next channel (mod 5). Or it could be as complex and arbitrary as a randomly generated (beforehand!) lookup table giving transforms for each of the 635 cases.

* These mappings are fixed for the performance of any one section, but you are encouraged to provide yourself with controls which allow you to easily define new mappings for new performances of the piece. I really don't know what this will sound like, but I imagine a concert performance of the piece would involve playing several sections, stopping each when it gets boring, changing the mappings and starting over.
* If you wish you can incorporate a delay into your mapping -- that is, you can define for each input case not only an output note and channel, but also an amount of time you wait before playing your note and sending your output. This delay should be a fixed one-to-one mapping, just like the note and channel mapping, not subject to any user, algorithmic or random adjustment within any one section.

* You may also transform the velocity of the note you receive, again only through a simple mapping, and with the additional restriction that you only transform the velocity by the amount -1, 0, or +1. For example, you might decrement velocity for all notes below middle C, and increment all others. Or decrement velocity on notes received on odd channels, increment velocity on notes received on evens, except for notes you get from Scot, which you pass with velocity unchanged. You get the idea. Any mapping you want, but fixed and one-to-one.

* Avoid droning voices: no keyups are being sent around, so you should set your synthesizer to percussive-type sounds which die by themselves without keyups, or generate your own keyups on your local midi chain.

4.23.93 Addition to Waxlips for Berlin, Moers and Rova, 1993. In the new version, an additional structure is added which defines a set of pitches which are legal for use in defining the mapping table. Using the Hub blob data standard, addresses 0-11 in the ch 12 kd blob will hold pitch class designators in range 0-11, where 0 = C, 1 = C#, etc. This table should be continuously updated, but it shouldn't change the network behavior until a new table command is sent.
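
One legal station can be sketched using the score's own "up a fifth, next channel" example as the fixed one-to-one mapping over all 635 cases. Velocity is passed unchanged here (a transform of 0, which the rules allow); the names are illustrative.

```python
# Fixed mapping over all 635 (note, channel) combinations, built once
# before the performance and never altered during a section.
# (note + 7) mod 127 and (ch + 1) mod 5 are both bijections, so the
# whole mapping is one-to-one as the score requires.
MAPPING = {
    (note, ch): ((note + 7) % 127, (ch + 1) % 5)
    for note in range(127) for ch in range(5)
}

def station(note, ch, velocity):
    """One note in, one note out: same input always gives same output."""
    out_note, out_ch = MAPPING[(note, ch)]
    return out_note, out_ch, velocity
```

Since the mapping never depends on prior state, the whole network is a fixed function of the seed notes, which is exactly what makes emergent pattern (or its absence) observable.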

tp

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

