Musical Expression and the Human-Computer Interface
For Miller Puckette, M209, Sp’99
By Harry D. Castle

This century has witnessed the rise of electronic and technological invention in composition and performance.  This paper investigates the evolving state of musical performance skill as expressed through computers and electronics.  How have performers’ physical relationships to their instruments changed, or been maintained, in relation to technology-based instruments and systems, and is that relationship driven by performers’ desires or by technological development?  What performance languages have developed, how have they evolved, and what further changes can be anticipated?  This paper is principally concerned with issues of real-time interaction with technology-based systems for making music.

Technological developments through this century have made it possible to design and build machines that compose, that may be used as instruments, and that act as “composing instruments.”  While the brains built into these machines have received a great deal of attention, generally less attention has been devoted to designing meaningful interfaces that provide adequately accessible yet complex control over increasingly abstracted musical behaviors.  These and related areas of concern are addressed in some detail by the architecture professor Malcolm McCullough in his “Abstracting Craft: The Practiced Digital Hand” (1996), and by the philosopher and chemist Michael Polanyi in his book “Personal Knowledge” (1958).  Each author writes at length about people and their relationships to the tools they use, taking a broad view of the word “tool.”

 
You probably think of a tool as something to hold in your hand.  It is something to extend your powers: a piece of technology, or applied intelligence, for overcoming the limitations of the body.  The hand-held tool comes to mind because more than any other it demands an especially active sort of skill.  It requires your participation, and for that reason it engages your imagination.  (McCullough, pg. 59).

I propose that an expressive instrument or performance system should be marked by these qualities. It should demand an “active sort of skill” and require participation in a way that ultimately “engages your imagination.”  McCullough believes that such engagement encourages practice and learning, which in turn bring their own rewards.  In his words:

 
If you feel satisfaction in using a well-practiced tool, you probably do so on several levels.  Tool usage simultaneously involves direct sensation, provides a channel for creative will, and affirms a commitment to practice.  The latter is quite important: only practice produces the most lasting and satisfying form of knowing.  Practiced mastery is something we crave in itself.  Most anthropologists would affirm a fundamental relation between tools and humanity.  Deep in our very nature, we are tool users.  (McCullough, pg. 61).

As a lens through which to examine the present, it is worth taking a look at a complex synthesizer developed in the 70’s by composer/performer/improvisor Salvatore Martirano.   The synthesizer, named The SalMar Cunstruction was conceived, developed, and built by the composer with help
from some very talented engineers.  It was, in brief:

 
A hybrid digital-analogue electronic instrument, developed from about 1970 at the University of Illinois in Urbana by a team that was led by the composer Salvatore Martirano…A console the size of a chamber organ carries 291 controls, which, when the performer touches them, connect flip-flop switches through his body to a source of electrical current; by means of these controls the performer can select from a large reservoir of programmed material and modify and reshape every aspect of it in performance.  Any one of four ‘orchestras’ (including two ‘percussion ensembles’) can be selected at a time.  The music is spatially distributed over 24 lightweight loudspeakers suspended above and around the audience.  (Davis, pg. 285).

Martirano and his team built the SalMar Construction from scratch.  Circuits were designed, and graduate students were employed to etch and build each of the numerous circuit boards that constituted the synthesizer.  The imposing piece of work (weighing about 400 lbs) was completed in 1972 and reflected the compositional concerns and performance desires of its creator.  Martirano had a background as a serial composer and as a jazz improvisor/pianist with a penchant for electronics.  His immersion in electronics led to the development, with James Divilbiss (chief engineer of the ILLIAC project, the world’s first supercomputer), of touch-sensitive switches that could be controlled either by Martirano at the console or by programs running in the synthesizer itself.  There were 291 such switches, which controlled the four software orchestras.  Furthermore, Martirano could vary the degree of control provided by each switch: he could alternate between controlling microstructural and macrostructural elements during performance.  In his own words:

 
In performance, I can change my relationship to the sound by zooming in on a microprocess, fiddle around, change the process or not, remain there or turn my attention to another process  (Martirano, from Chadabe, pg 291).

The most oft asked question after a concert performance with Sal-Mar was: "Do you know what it will do next?". Though too complex to analyze it was possible to predict what sound would result and this caused me to lightly touch or slam the switch as if this had an effect on the two-state logic. I was in the loop, trading swaps with the logic. Let's face it, there are some things you can't talk about and make much sense. I enabled paths, or better, I steered. To make music with the SMC is like driving a flying bus  (Martirano, from his web page).
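
It is worth pausing over what Martirano’s “zooming” between levels might mean in schematic terms.  The sketch below is not a description of the SalMar Construction’s logic, which is not documented here; it only illustrates, with invented names, a single physical control that the performer can reassign on the fly between a microstructural target (one note’s loudness) and a macrostructural one (the density of an ongoing process).

    class Process:
        # A running "orchestra" process with one micro and one macro parameter.
        def __init__(self):
            self.note_velocity = 64      # microstructure: one note's loudness
            self.event_density = 0.5     # macrostructure: how busy the process is

    def apply_control(process, value, level):
        # Route a single physical control to whichever level the performer selects.
        if level == "micro":
            process.note_velocity = int(value * 127)
        else:
            process.event_density = value

    orchestra = Process()
    apply_control(orchestra, 0.9, "micro")   # zoom in: shape one note
    apply_control(orchestra, 0.2, "macro")   # zoom out: thin the whole texture
    print(orchestra.note_velocity, orchestra.event_density)

The value of such an arrangement lies less in the code than in the fact that one gesture can address either scale of the music, which is the flexibility Martirano claims for himself above.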

It is interesting that Martirano insisted on interactive control of both micro and macro processes for performance.  As a serial composer he designed the software to be able to store, transform, and reintroduce sequences of notes or control values.  As a jazz improvisor he wanted the capability to actuate discrete notes and to apply nuanced expression to ongoing processes.  But regardless of the particular customized set of features he designed into the instrument, it is above all significant that they held meaning for him and that he was thus able, through extensive performance and practice, to effectively negotiate the complex musical world he had created.  It was engaging enough to encourage use and learning, and expressive enough that it became for him a useful extension of his musical mind.  As Polanyi says:

 
Our subsidiary awareness of tools and probes can be regarded now as the act of making them form a part of our own body.  The way we use a hammer or a blind man uses his stick, shows in fact that in both cases we shift outwards the points at which we make contact with the things that we observe as objects outside ourselves.  While we rely on a tool or a probe, these are not handled as external objects.  We may test the tool for its effectiveness or the probe for its suitability, e.g. in discovering the hidden details of a cavity, but the tool and the probe can never lie in the field of these operations; they remain necessarily on our side of it, forming part of ourselves, the operating persons.  We pour ourselves out into them and assimilate them as parts of our own existence.  We accept them existentially by dwelling in them.  (Polanyi, pg. 59).

Similar traits are evident in Joel Chadabe’s description of some of his performance experiences with complex technological setups.  The CEMS (Coordinated Electronic Music Studio) System was installed at the State University of New York at Albany late in 1969.  He describes it as “the world’s largest concentration of Moog sequencers under a single roof” (Chadabe, pg. 286).  It was nearly as imposing as the SalMar Construction, and to use it one faced a wall of knobs, switches, and plug-points through which modules could be interconnected using patch cables.  As with Martirano’s instrument, the array was complex enough to yield detailed control, but could be configured for “macroprocess” control as well.  It also supplied an almost inescapable degree of indeterminacy.  In Chadabe’s words:

 
…I was using joysticks to control oscillators, filters, modulators, amplifiers, and several sequencers.  The sequencers, configured to generate pseudo-random patterns, were also controlling the oscillators, filters, modulators, and amplifiers.  And I was also controlling the sequencers.  It was a complex network of modular interconnections which, as intended, caused a certain balance between predictability and surprise.  Because I was sharing control of the music with the sequencers, I was only partially controlling the music, and the music, consequently, contained surprising as well as predictable elements.  The surprising elements made me react.  The predictable elements made me feel that I was exerting some control.  It was like conversing with a clever friend who was never boring but always responsive.  I was, in effect, conversing with a musical instrument that seemed to have its own interesting personality.  (Chadabe, pg. 286-7).
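
A minimal sketch can make this notion of shared control concrete.  The code below is not a reconstruction of the CEMS patch; it is only an illustration, in Python, of a single parameter steered jointly by a performer and a pseudo-random sequencer, and every name and value in it is invented for the example.

    import random

    def sequencer_step(rng, spread):
        # The "sequencer" contributes values the performer cannot dictate.
        return rng.uniform(-spread, spread)

    def shared_control(performer_value, rng, spread=0.3, mix=0.5):
        # Blend the performer's setting with the sequencer's contribution.
        # mix = 1.0 leaves the performer in full control (all predictability);
        # mix = 0.0 hands the parameter to the sequencer (all surprise).
        return mix * performer_value + (1.0 - mix) * sequencer_step(rng, spread)

    rng = random.Random(1969)        # arbitrary seed, chosen for the example
    joystick = 0.8                   # an imagined performer gesture, scaled 0..1
    for step in range(4):
        print("step %d: control value = %.3f" % (step, shared_control(joystick, rng)))

The balance Chadabe describes lives in the mix: the performer exerts some control through the gesture, while the pseudo-random element guarantees that the result can never be fully predicted.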

We can see some common elements in the Martirano and Chadabe examples.  Both were using systems that were highly complex, and both composers possessed an expert working affinity with the atomic elements of the technology at their disposal.  Those fundamental elements were exposed in a way that made them easy to manipulate in performance, yet they remained directly accessible for experimentation while higher-level controls over ongoing processes were being developed.  Their systems were “tweakable” and, perhaps most important of all, capable of surprising them.  Both composers clearly enjoyed this element of indeterminacy and were thus encouraged to immerse themselves in practice.  As McCullough comments:

 
the circumstances of practice are often themselves a source of satisfaction.  This is because skill is sentient: it involves cognitive cues and affective intent.  It is also very habitual.  In particular, it develops an intimate relation with certain contexts or tools, which makes it individual.  No two people will be skilled alike; no machine will be skilled at all.  Of course the latter is debatable if we accept simple mechanical or deductive capacity as skill—but we are maintaining that there is a sentient component too.  One way our sentient activity differs from the action of machines is play.  We putter about in our studios.  We enjoy being skilled.  We experiment to grow more so.  Skills beget more skills.  (McCullough, pg. 7).

Although these composers refined their performance skills over the years, they were clearly not interested in “refining” the unpredictable characteristics of their systems so as to eliminate them.  I believe that, as improvisors and experts on their own systems, they were operating at a higher level of musical discourse that is difficult to describe adequately.  So even though Martirano says that the music generated by the SalMar Construction was “too complex to analyze” while playing, he follows with: “I was in the loop.”  Polanyi speaks to this issue when he says:

 
The fact that skills cannot be fully accounted for in terms of their particulars may lead to serious difficulties in judging whether or not skilful performance is genuine.  The extensive controversy on the ‘touch’ of pianists may serve as an example.  Musicians regard it as a glaringly obvious fact that the sounding of a note on the piano can be done in different ways, depending on the ‘touch’ of the pianist.  To acquire the right touch is the endeavour of every learner, and the mature artist counts its possession among his chief accomplishments.  A pianist’s touch is prized alike by the public and by his pupils: it has a great value in money.  Yet when the process of sounding a note on the piano is analysed, it appears difficult to account for the existence of ‘touch’.  When a key is depressed, a hammer is set in motion which hits a string.  The hammer is pushed by the depressed key only for a short distance and is thereby flung into free motion, which is eventually stopped by the chord.  Therefore, it is argued, the effect of the hammer on the chord is fully determined by the speed of the hammer in free motion at the moment when it hits the chord.  As this speed varies, the note of the chord will sound more or less loudly.  This may be accompanied by changes in colour, etc., owing to concurrent changes in the composition of overtones, but it should make no difference in what manner the hammer acquired any particular speed.  Accordingly, there could be no difference as between tyro and virtuoso in the tone of the notes which they strike on a given piano; one of the most valued qualities of the pianist’s performance would be utterly discredited.  (Polanyi, pg. 50).

George Lewis’ “Voyager” stands in complement to these two examples.  Lewis felt that “notions about the nature and function of music are embedded right into the structure of music software” (Lewis 1997), and proceeded to construct an interactive playing environment that he could engage on his own terms, i.e., through playing his trombone.  As a masterful, lifelong improvisor, Lewis did not find the notions embedded in commercial software packages consistent with his own, and so he taught himself to program so that he could write software according to his own needs and interests.

 
In Voyager, an improvisor interacts with a large, computer-driven group of “virtual improvisors.”  A computer program analyzes aspects of an improvisor’s musical behavior in real time, using that analysis to guide an automatic composing program that generates complex “orchestral” responses to the musician’s playing.  The computer response to the human improvisor also includes generative behavior on the part of the system that, while influenced by the improvisor, is ultimately independent of outside input.  (Lewis, 1995).

Voyager makes real-time decisions about what it will play based on an analysis of what its human co-improvisor is playing.  While using this analysis to inform its playing, Voyager nevertheless produces an independent stream of musical activity that is not strictly derivative of, or even reliant upon, what the human improvisor is playing.  Lewis set out to address musical issues very different from those addressed in the two previous examples discussed here, and he did so using available technologies in ways that best suited him.  The resulting system was one that allowed him to maintain physical access through an intimately familiar channel of expression.  Voyager also encouraged practice through its inherent ability to surprise, not only in its reactions to Lewis’ playing but also by generating material “independently.”  Lewis was thus in a position to use his skills to beget new skills, as McCullough suggests.  Lewis had access to the atomic elements of his program only while preparing for performance, but he compensated by preserving for himself greater freedom during performance, since, with his horn, he could interject musical ideas of his own choosing at any time.
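
The division of labor described above can be sketched schematically.  The code below is emphatically not Lewis’ program, whose internals are not documented here; it is only an illustration, with invented names and a deliberately crude form of “listening,” of one response stream shaped by analysis of the incoming material alongside a second stream generated independently of it.

    import random

    def analyze(recent_pitches):
        # Crude "listening": summarize the register and activity of the input.
        if not recent_pitches:
            return {"center": 60, "density": 0.2}
        return {"center": sum(recent_pitches) / float(len(recent_pitches)),
                "density": min(1.0, len(recent_pitches) / 16.0)}

    def respond(features, rng):
        # Stream one: shaped by the analysis of the human player.
        count = 1 + int(features["density"] * 7)
        return [int(rng.gauss(features["center"], 5)) for _ in range(count)]

    def generate(rng):
        # Stream two: independent of the input altogether.
        return [rng.randrange(36, 96) for _ in range(rng.randrange(1, 6))]

    rng = random.Random()
    incoming = [62, 65, 69, 72]          # imagined pitches from the performer
    print("responsive stream: ", respond(analyze(incoming), rng))
    print("independent stream:", generate(rng))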

I chose the three preceding examples because I felt that each of them exemplified aspects of interactive systems that work, and because they contrast in many ways with the possibilities afforded by present-day circumstances.  As someone who has in recent years spent a great deal of time with computers trying, with varying degrees of success, to get seemingly simple things to work, I may come off sounding a bit pessimistic.  I am, in fact, generally optimistic about the future, but would nevertheless like to make some observations about the present.  Computers have for the most part supplanted large-scale, custom hardware systems such as those described above.  In theory, since a computer can replicate any waveform and thereby produce any sound, this is a wonderful turn of events.  The reality, however, proves to be somewhat different, and I think the difference can be attributed to a lack of some of the qualities that made the above systems viable (and fun).  As McCullough observes:

 
By and large, everyday computing involves very little touch technology.  Most ordinary tools and technologies, and most understandings of skillful process, exist without explicit formulations for the role of the hands.  Nonmechanical aspects of touch elude practical engineering; some elude scientific description.  (McCullough, pg. 6).

While the mass marketing of computers is making them more and more accessible, and may ultimately be the salvation of the live, interactive, technology-based musics that rely on them, it has so far generally led to more user-friendly, i.e., “closed,” architectures.  The expert experimenter now seems to need to know more than before in order to bypass commercially provided solutions, as the layers of software and hardware support have become more convoluted.  It is increasingly difficult to build and interface the physical extensions that one may want for a customized system of one’s own, and to harness the power of computers for live performance.

The instruments that are currently commercially available are simply not that great.  Although it is possible to purchase MIDI keyboards with “realistically” weighted keys, they are no substitute for the real thing when it comes to expressivity, and most acoustically trained pianists groan at the thought of being required to play them for performance.  In short, it is difficult to create a real-time environment that “engages the senses.”

And, finally, there has to date been a notable lack of interest, on the part of computer manufacturers, in providing real-time operating systems.  As Brandt and Dannenberg state:

 
A modern CPU can run extensive and sophisticated signal processing faster than real time.  Unfortunately, executing the right code at the right time and performing time-critical I/O may very well exceed the capabilities of a modern operating system.  Latency is the stumbling block for music software, and the OS is often the cause of it.  (Brandt, pg. 137).
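
The arithmetic behind that complaint is simple, and a rough sketch (the figures below are generic, not taken from Brandt and Dannenberg) shows why buffering is the crux: an interface that collects n samples at sample rate r cannot respond in less than n/r seconds, and each additional buffer the driver or operating system interposes adds that much again.

    SAMPLE_RATE = 44100.0    # samples per second

    def buffer_latency_ms(frames_per_buffer, buffers_in_chain=2):
        # Minimum added latency, in milliseconds, for a given buffering scheme.
        return 1000.0 * frames_per_buffer * buffers_in_chain / SAMPLE_RATE

    for frames in (64, 256, 1024, 4096):
        print("%5d frames x 2 buffers -> %6.1f ms" % (frames, buffer_latency_ms(frames)))

A few milliseconds passes unnoticed by a performer; a tenth of a second does not, and the point of the quotation is that scheduling by a general-purpose operating system, rather than the arithmetic itself, is what makes the smaller figures hard to guarantee.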

Current trends in software development have been geared toward desktop, production-studio environments.  This is fine for certain types of musical work, but it is notably lacking when it comes to real-time performance.

There is every reason to expect that adequate real-time performance response using computers will someday be in vogue, though it may well be driven by extra-musical influences (most notably the requirements of the gaming community).  I don’t regret the passing of huge, custom-built performance systems such as those described in this paper.  However, I hope that as new capabilities arrive they are applied with an eye toward the concerns and observations raised by McCullough and Polanyi, and then some of us can, at least once in a while, get out from behind our desks.

ENDNOTES

Brandt, E., and R. B. Dannenberg.  1998.  “Low-Latency Music Software Using Off-the-Shelf Operating Systems.”  Proceedings of the International Computer Music Conference.  San Francisco: International Computer Music Association.

Bruner, Jerome, Alison Jolly, and Kathy Sylva, eds.  1976.  Play.  Harmondsworth: Penguin.

Chadabe, Joel.  1997.  Electric Sound: The Past and Promise of Electronic Music.  Upper Saddle River, NJ: Prentice Hall.

Davis, Hugh.  1984.  “Sal-Mar Construction.”  In The New Grove Dictionary of Musical Instruments, ed. Stanley Sadie, vol. 3.  London: Macmillan Press.

Laurel, Brenda, ed.  1990.  The Art of Human-Computer Interface Design.  Reading, MA: Addison-Wesley.

Lewis, George.  1995.  “A Listener’s Guide to Voyager.”  Tijdkring (Contemporary Music Festival 1995) program book.  Den Haag: Stichting Jazz in Nederland/Phoenix Foundation.

Lewis, George.  1997.  “Singing the Alternative Interactivity Blues.”  Grantmakers in the Arts 8, no. 1 (Spring).

Lucie-Smith, Edward.  1981.  The Story of Craft: The Craftsman’s Role in Society.  Ithaca, NY: Cornell University Press.

McCullough, Malcolm.  1996.  Abstracting Craft: The Practiced Digital Hand.  Cambridge, MA: MIT Press.

Polanyi, Michael.  1958.  Personal Knowledge: Towards a Post-Critical Philosophy.  Chicago: University of Chicago Press.
 

ONLINE REFERENCES

Web links are problematic, as they change and die without warning.  For that reason I have given the search key that obtained the result.  This is a web site constructed recently in memory of Martirano; although it may move, it should be around for some time to come.

Search key: “Martirano, Salvatore”
Web site:  https://cmp-rs.music.uiuc.edu/~martiran/HTdocs/salmar.html