Musical Expression and the Human-Computer Interface
For Miller Puckette, M209, Sp’99
By Harry D. Castle
This century has witnessed a rise in the inclusion of electronic and technological invention in composition and performance. This paper investigates the evolving state of musical performance skill as expressed through computers and electronics. How have performers’ physical relationships to their instruments changed or been maintained in relation to technology-based instruments and systems, and is the form of that mutual relationship driven by performers’ desires or by technological development? What performance languages have developed, how have they evolved, and what further changes are to be anticipated? This paper is principally concerned with issues pertaining to real-time interaction with technologically based systems for making music.
Technological developments through this century have made it possible to design and build machines that compose, that may be used as instruments, and that act as “composing instruments.” While the brains built into these machines have received a great deal of attention, less has been devoted to designing meaningful interfaces that provide adequately accessible yet complex control over increasingly abstracted musical behaviors. These and related areas of concern are addressed in some detail by architecture professor Malcolm McCullough in his “Abstracting Craft: The Practiced Digital Hand” (1996), and by the philosopher/chemist Michael Polanyi in his book “Personal Knowledge” (1958). Each author writes at length about people and their relationships to the tools they use, taking a broad view of the word “tool.”
I propose that an expressive instrument or performance system should be marked by the following qualities. It should demand an “active sort of skill” and require participation in a way that ultimately “engages your imagination.” McCullough believes that such engagement encourages practice and learning, which in turn bring their own rewards. In his words:
As a lens through which to examine the present, it is worth taking a look at a complex synthesizer developed in the 70’s by composer/performer/improvisor Salvatore Martirano. The synthesizer, named the SalMar Construction, was conceived, developed, and built by the composer with help from some very talented engineers. It was, in brief:
Martirano and his team built the SalMar Construction from scratch. Circuits were designed, and graduate students were employed to etch and build each of the numerous circuit boards that constituted the synthesizer. The imposing piece of work (weighing about 400 lbs) was completed in 1972, and reflected the compositional concerns and performance desires of its creator. Martirano had a background as a serial composer and as a jazz improvisor/pianist with a penchant for electronics. His immersion in electronics led to the development, with James Divilbiss (chief engineer of the ILLIAC project, one of the world’s first supercomputers), of touch-sensitive switches that could be controlled either by Martirano at the console or by programs running in the synthesizer itself. There were 291 such switches, which controlled each of the four software orchestras. Furthermore, Martirano could vary the degree of control provided by each switch. He could alternate between controlling microstructural and macrostructural elements during performance. In his own words:
The most oft asked question after a concert performance with Sal-Mar was: "Do you know what it will do next?". Though too complex to analyze it was possible to predict what sound would result and this caused me to lightly touch or slam the switch as if this had an effect on the two-state logic. I was in the loop, trading swaps with the logic. Let's face it, there are some things you can't talk about and make much sense. I enabled paths, or better, I steered. To make music with the SMC is like driving a flying bus (Martirano, from his web page).
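The shared human/machine control Martirano describes, a performer “in the loop, trading swaps with the logic,” can be caricatured in code. What follows is purely a toy model of the idea; the class, the weighting scheme, and all names here are my own invention, not a reconstruction of the SalMar’s actual circuitry:

```python
import random

class TouchSwitch:
    def __init__(self, human_weight=1.0):
        # human_weight = 1.0: the performer has full control of this switch;
        # human_weight = 0.0: the internal logic drives it unopposed.
        self.human_weight = human_weight
        self.state = 0

    def update(self, human_input, logic_input):
        # Probabilistically choose whose two-state value wins,
        # according to the current degree of human control.
        if random.random() < self.human_weight:
            self.state = human_input
        else:
            self.state = logic_input
        return self.state

# A console of such switches, one per hypothetical "orchestra":
console = [TouchSwitch(human_weight=0.5) for _ in range(4)]
states = [sw.update(human_input=1, logic_input=0) for sw in console]
```

Varying `human_weight` during performance corresponds, loosely, to shifting between microstructural control (the performer’s touch decides) and macrostructural steering (the logic mostly runs itself).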
It is interesting that Martirano insisted on interactive control of both micro and macro processes for performance. As a serial composer he designed the software to be able to store, transform, and reintroduce sequences of notes or control values. As a jazz improvisor he wanted the capability to actuate discrete notes and to apply nuanced expression to ongoing processes. But regardless of the particular customized set of features he designed into the instrument, it is above all significant that they held meaning for him and that he was thus able, through extensive performance and practice, to effectively negotiate the complex musical world he had created. It was engaging enough to encourage use and learning, and expressive enough that it became for him a useful extension of his musical mind. As Polanyi says:
Similar traits are evident in Joel Chadabe’s description of some of his performance experiences with complex technological setups. The CEMS (Coordinated Electronic Music Studio) System was installed at the State University of New York at Albany late in 1969. He describes it as “the world’s largest concentration of Moog sequencers under a single roof” (Chadabe, p. 286). It was nearly as imposing as the SalMar Construction, and to use it one faced a wall of knobs, switches, and plug-points through which modules could be interconnected using patch cables. As with Martirano’s instrument, the array was complex enough to yield detailed control, but could be configured for “macroprocess” control as well. It also supplied an almost inescapable degree of indeterminacy. In Chadabe’s words:
We can see some common elements in the Martirano and Chadabe examples. Both were using systems that were highly complex, and both composers possessed an expert working affinity with the atomic elements of the technology at their disposal. The fundamental elements were exposed to the composers in a way that made them easy to manipulate in performance, while remaining directly accessible for experimentation as higher-level controls of ongoing processes were developed. Their systems were “tweakable” and, perhaps most important of all, capable of surprising them. Both composers clearly enjoyed this element of indeterminacy and were thus encouraged to immerse themselves in practice. As McCullough comments:
Although these composers refined their performance skills over the years, they were clearly not interested in “refining” the unpredictable characteristics of their systems so as to eliminate them. I believe that, as improvisors and experts in their own systems, they were operating at a higher level of musical discourse that is difficult to describe adequately. So even though Martirano says that the music generated by the SalMar Construction was “too complex to analyze” while playing, he follows with: “I was in the loop.” Polanyi speaks to this issue when he says:
George Lewis’ “Voyager” stands in complement to these two examples. Lewis felt that “notions about the nature and function of music are embedded right into the structure of music software” (Lewis 1997), and proceeded to construct an interactive playing environment that he could engage on his own terms, i.e., through playing his trombone. As a masterful, lifelong improviser, Lewis did not find the notions embedded in commercial software packages consistent with his own, and so he taught himself to program so that he could write software according to his own needs and interests.
Voyager makes real-time decisions as to what it will play based on an analysis of what its human co-improviser is playing. While using this analysis to inform its playing, Voyager nevertheless produces an independent stream of musical activity that is not strictly derivative of, or even reliant upon, what the human improviser is playing. Lewis set out to address musical issues very different from those addressed in the two previous examples discussed here, and he did it using available technologies in ways that best suited him. The resulting system allowed him to maintain physical access through an intimately familiar channel of expression. Voyager also encouraged practice by its inherent ability to surprise, not only through its reaction to Lewis’s playing but by generating material “independently” as well. Lewis was thus in a position to use his skills to beget new skills, as McCullough suggests. Lewis had access to the atomic elements of his program only while preparing for performance, but compensated by preserving for himself greater freedom during performance since, with his horn, he could interject musical ideas of his own choosing at any time.
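The listen-and-respond architecture described above can be sketched in a few lines of code. This is not Lewis’s implementation (every feature name, threshold, and phrase length here is my own illustrative invention); it is only meant to show how a response can be informed by analysis of the input without being derived from it note for note:

```python
import random

def analyze(recent_pitches):
    """Reduce the human's recent playing to coarse features
    (a stand-in for analysis, not Lewis's actual method)."""
    if not recent_pitches:
        return {"register": 60.0, "density": 0.5}
    return {
        "register": sum(recent_pitches) / len(recent_pitches),
        "density": min(len(recent_pitches) / 16.0, 1.0),
    }

def respond(features, rng=random):
    """Generate an independent phrase: the analysis biases
    register and note count, but the notes themselves are
    the program's own, not echoes of the input stream."""
    center = int(features["register"])
    n_notes = 4 + int(features["density"] * 8)
    return [center + rng.randint(-12, 12) for _ in range(n_notes)]

phrase = respond(analyze([62, 64, 65, 67]))
```

Note that `respond` also produces output when the analysis window is empty, which mirrors the point that Voyager can generate material “independently” of its co-improviser.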
I chose the three preceding examples of interactive systems because I felt that each of them exemplified aspects of interactive systems that work, and because they contrast in many ways with possibilities afforded by present day circumstances. As someone who has in recent years spent a great deal of time with computers trying, with varying degrees of success, to get seemingly simple things to work, I may come off sounding a bit pessimistic. I am, in fact, generally optimistic about the future, but would nevertheless like to make some observations regarding the present. Computers have for the most part supplanted the large-scale, custom hardware systems as described above. In theory, since a computer can replicate any waveform and thereby produce any sound, this is a wonderful turn of events. The reality, however, proves to be somewhat different, and I think it can be attributed to a lack of some of the qualities that made the above systems viable (and fun). As McCullough observes:
While the mass-marketing of computers is making them more and more accessible, and may ultimately be the salvation of the live, interactive, technology-based musics that rely on them, it has so far generally led to a more user-friendly, i.e., “closed,” architecture. The expert experimenter now needs to know more than before to bypass commercially provided solutions, as the layers of software and hardware support have become more convoluted. It is increasingly difficult to build and interface the physical extensions that one may want for a customized system of one’s own, and to harness the power of computers for live performance.
The instruments that are currently commercially available are simply not that great. Although it is possible to purchase MIDI keyboards with “realistically” weighted keys, they are no substitute for the real thing when it comes to expressivity, and most acoustically trained pianists groan at the thought of being required to play them for performance. In short, it is difficult to create a real-time environment that “engages the senses.”
And, finally, there has to date been a notable lack of interest, on the part of computer manufacturers, in providing real-time operating systems. As Brandt and Dannenberg state:
Current trends in software development have been geared toward desk-top, production studio environments. This is fine for certain types of musical work, but is notably lacking when it comes to real-time performance.
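The real-time shortfall at issue is concrete: the audio buffer size an operating system or application imposes translates directly into response latency for the performer. As a rough illustration (my own arithmetic and example figures, not drawn from Brandt and Dannenberg):

```python
def buffer_latency_ms(buffer_frames, sample_rate=44100):
    """Time (ms) the performer waits for one audio buffer to play out."""
    return buffer_frames / sample_rate * 1000.0

# A 4096-frame buffer, comfortable for desktop playback, is audibly
# laggy for performance; 64 frames approaches the responsiveness
# a player expects from an acoustic instrument.
desktop = buffer_latency_ms(4096)  # roughly 93 ms
live = buffer_latency_ms(64)       # under 2 ms
```

Desktop production software can afford large buffers because no one is waiting on the other end of a keypress; a live performer can feel the difference immediately.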
There is every reason to expect that adequate real-time performance response using computers will someday be in vogue, though it may well be driven by extra-musical influences (most notably requirements of the gaming community). I don’t regret the passing of huge, custom built, performance systems such as those described in this paper. However, I hope that as new capabilities arrive they are applied with an eye toward the concerns and observations raised by McCullough and Polanyi, and then some of us can, at least once in a while, get out from behind our desks.
Brandt, E., and R.B. Dannenberg. 1998. “Low-Latency Music Software Using Off-the-Shelf Operating Systems.” Proceedings of the 1998 International Computer Music Conference. San Francisco: International Computer Music Association.
Bruner, Jerome, Alison Jolly, and Kathy Sylva, eds. 1976. Play. Harmondsworth: Penguin.
Chadabe, Joel. 1997. Electric Sound: The Past and Promise of Electronic Music. Upper Saddle River, NJ: Prentice Hall.
Davies, Hugh. 1984. “Sal-Mar Construction.” In The New Grove Dictionary of Musical Instruments, ed. Stanley Sadie, vol. 3. London: Macmillan Press Limited.
Laurel, Brenda, ed., 1990. The Art of Human-Computer Interface Design. Reading, MA: Addison-Wesley.
Lewis, George. 1995. “A Listener’s Guide to Voyager.” Tijdkring (Contemporary Music Festival 1995) program book. Den Haag: Stichting Jazz in Nederland/Phoenix Foundation.
Lewis, George. 1997. “Singing the Alternative Interactivity Blues.” Grantmakers in the Arts 8, no. 1 (Spring).
Lucie-Smith, Edward. 1981. The Story of Craft—the Craftsman's Role in Society. Ithaca, NY: Cornell University Press.
McCullough, Malcolm. 1996. Abstracting Craft: The Practiced Digital Hand. Cambridge, MA: MIT Press.
Polanyi, Michael. 1958. Personal Knowledge: Towards a Post-Critical Philosophy. Chicago: University of Chicago Press.
Web links are problematic, as they change and die without warning. For that reason I have given the search key which obtained the result. This is a web site constructed recently in memory of Martirano; although it may move, it should be around for some time to come.
Search key: “Martirano, Salvatore”
Web site: http://cmp-rs.music.uiuc.edu/~martiran/HTdocs/salmar.html