PRINTED FROM OXFORD HANDBOOKS ONLINE ( © Oxford University Press, 2018. All Rights Reserved. Under the terms of the licence agreement, an individual user may print out a PDF of a single chapter of a title in Oxford Handbooks Online for personal use (for details see Privacy Policy and Legal Notice).


Gesture and Morphology in Laptop Music Performance

Abstract and Keywords

This article discusses the design and development of new interfaces for electronic music performance for which the affordances inherent in the acoustic instrument move into the virtual. It gives particular attention to the way in which performative gestures are linked to principal control components used to shape the resultant sound properties in musical performance and outlines issues to do with authenticity and a perception of counterfeit musical performances using laptop computers. It gives a brief outline of the Thummer Mapping Project, presenting a model for musical control developed from a musician's perspective. It seeks to draw the research into approaches to mapping together with a consideration of phenomenology to understand better the conscious and unconscious nature of the engagement between a musician and the musician's instrument.

Keywords: electronic music performance, performative gestures, laptop computers, Thummer Mapping Project, phenomenology

1. Instrument or Interface

This chapter discusses the design and development of new interfaces for electronic music performance for which the affordances inherent in the acoustic instrument move into the virtual. It gives particular attention to the way in which performative gestures are linked to principal control components used to shape the resultant sound properties in musical performance and outlines issues to do with authenticity and a perception of counterfeit musical performances using laptop computers. It seeks to draw the research into approaches to mapping together with a consideration of phenomenology to better understand the conscious and unconscious nature of the engagement between a musician and the musician's instrument.

A. Gesture

Relationships to sound are in part physical: musical instruments generally require us to blow, pluck, strum, squeeze, stroke, hit, and bow. The acoustic instrument vibrates in a manner determined by the energy transmitted into it. The physical gesture determines the amplitude, pitch, and timbre of each event.

Within this context, a proprioceptive relationship is established: a largely unconscious perception of movement and of stimuli arising within the body from the relationship between the human body and the instrument during performance. A direct relationship is thereby established between the physical gesture, the nature of the stimuli, and the perceived outcome. The resulting awareness is multifaceted and has been at the core of musical performance for centuries. I would argue that these levels of engagement extend to distributed cognition, a product of the whole body and not simply the brain, and as such allow musicians an embodied relationship with their instrument (by which instrument and performer may appear to dissolve into one entity), a relationship that is often communicated to the audience through performance gestures. Computer-based music, however, heralded the dislocation of the excitation-sonification mechanism, dissolving the embodied relationship the musician previously enjoyed with the instrument while simultaneously introducing a broad range of possibilities that defy the limits of the human body, raising questions about the role of gesture in musical performance and the value of haptics in successful musical instruments.

B. Interface

Playing a musical instrument causes the transfer of spatial (pitch) and temporal (duration/rhythm) information from the conscious and subconscious systems of the body to the apparatus that physically produces the sound. Any such information transfer operates from within complex traditions of culture, musical design, and performance technique and is shaped by human cognitive and motor capacities (e.g., the event speed and complex polyrhythms in the compositions of Conlon Nancarrow;1 Carlsen 1988, Gann 1995, Duckworth 1999) as well as personal experiences (Pressing 1990).

The mechanization of musical instruments has a long history, surfacing in the music boxes of Europe and the hand-cranked street organs, through to the theatrical extravagance of the Wurlitzer organ and the player piano. A brief overview of mechanized musical instruments would include Salomon de Caus's pegged organ (1644), the forty-two robot musicians built by Johann Maelzel (inventor of the metronome), for which Beethoven composed Wellington's Victory, music boxes, and musical clocks.

Electrical automation also has a long history, dating to the late 19th century with Cahill's telharmonium,2 a vast electromechanical synthesizer that occupied five train carriages when touring. Developments proceeded through various incarnations to purely electronic instruments such as Friedrich Adolf Trautwein's trautonium, on which Oskar Sala was a virtuoso; the ondes Martenot; and the theremin, made famous by the virtuosic performances of Clara Rockmore (Chadabe 1997) and perhaps the most famous electronic instrument for which gesture is critical in performance. Each of these instruments retains a limited and clearly defined timbral range, a fixed morphology (Wishart 1996): even for the theremin, a clear relationship between gesture and musical outcome is evident. The performance of the theremin traditionally allocates the left hand to the control of amplitude and the right to the control of pitch. Some timbral variation can be associated with changes in pitch through modifying the shape of the right hand, but the synthesis engine remains unchanged, so while pitch is characteristically fluid on the instrument, the timbre, or in Wishart's sense the morphology (the relationship among pitch, timbre, and time), remains fixed.
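The theremin's fixed morphology can be caricatured in a few lines of Python. This is a minimal sketch, not a model of any real instrument: the function name, the four-octave range, and the two-partial timbral recipe are illustrative assumptions. The point is structural: gesture varies only amplitude and pitch, while the synthesis recipe is hard-coded.

```python
import math

def theremin_sample(t, left_hand, right_hand, sr=44100):
    """One output sample of a theremin-like mapping.

    left_hand (0..1)  -> amplitude only
    right_hand (0..1) -> pitch only
    The timbral recipe below never changes: the morphology is fixed.
    """
    amplitude = max(0.0, min(1.0, left_hand))
    pitch_hz = 65.4 * (2 ** (right_hand * 4))  # assumed ~4-octave sweep from C2
    phase = 2 * math.pi * pitch_hz * t / sr
    # Fixed timbre: a fundamental plus one quieter second harmonic.
    return amplitude * (math.sin(phase) + 0.3 * math.sin(2 * phase))
```

However expressive the hand movements, no gesture reaches inside the final line; that inaccessibility is exactly what Wishart's notion of fixed morphology describes.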

C. The Computer as Instrument

By contrast, the computer is a utilitarian child of science and commerce, a chameleon with no inherent property other than acting as an interface to desired functionality. Kittler's notion of construction by process (Kittler and Johnston 1997) neatly summarizes the computer as an entity whose character is generated by its momentary context. Elsewhere (Kittler 1999), he referenced the typewriter, pointing out that the letters of the typewriter are dissociated from the communicative act with which it is commonly associated. When we examine the computer, this dissociation is magnified many times. Each key on a typewriter has a single associated function, whereas a computer keyboard is amorphous, adapted to the programmer's desire as an interface to a communication framework, a controller for a game, or a function key changing the sound volume or display brightness.

When the computer is used as a musical instrument, this disjunction between interface and function is a dramatic diversion from the fixed morphology of traditional instruments for which each element of the interface (the keys on a wind instrument for instance) has affordances for a limited set of clearly understood/defined functions, and the gestures evoked through engaging with the interface have a functional musical association that may also be communicated to and understood by an audience (for instance, the quality of finger movement, bow movement, or the force of a percussive attack).

The computer as musical instrument offers the possibility for interactive music systems (Paine 2002), which, if they utilize real-time synthesis, are one of the few possible dynamic authoring systems available for which the nuance, temporal form, and micro- or macrostructures can be produced, selected, and developed in real time. In that sense, it is the iconic instrument of our time, eschewing the traditional composer/performer model for a real-time authoring environment. Such a claim cannot be supported when addressing the DJ/VJ (video jockey) model, as performative outcomes are sculpted from prerecorded material (which through the process of recording is archived with a fixed morphology) and collaged into new soundscapes (in which the potentials are fundamentally preestablished by the material). By contrast, real-time synthesis on a laptop computer offers an almost-infinite aesthetic scope, posing the challenge of how to constrain the system in such a way that it provides a virtuosic performance of a recognizable musical work. In other words, all possibilities are not contained in all works; such a situation would in theory produce a superwork containing all possible works, clearly an untenable and undesirable effect.

Perhaps the notion of control is passé? Perhaps the laptop musician is not so much “in control” as navigating the potentials inherent in the work? If so, performance gestures take on a very different function; their designation moves from an event-based classification to encompass the notion of gesture as form and timbre as interrelationship, influencing orchestration, focus, or structural evolution as the performance/musical work evolves. Many approaches have been taken to this problem (Mulder 1994, Mulder and Fels 1998, Bongers 2000, Hunt and Kirk 2000, Cook 2001, Wanderley 2001, Wessel and Wright 2002), one that now commands an annual international conference, the International Conference on New Interfaces for Musical Expression.3

2. Why Are Gesture and Morphology Important?

The conception of computer-based instruments is still often established on precepts of acoustic instruments. They often exhibit the following, among other features:

  1. Limited and fixed timbral characteristics, which operate on

  2. Excitation-sonification models (attack, sustain, decay envelopes, as well as timbral structure, i.e., noise in the attack stage, etc.) derived from existing acoustic instruments

In other words, they are derived from prior experience, from history and tradition. Ought these precepts to remain true in all cases for computer-based performance instruments? It is in fact meaningless to argue so, for as Kittler (1999) pointed out, the computer does not act as a signifier for any one approach to communication, let alone musical performance. Its amorphous nature lends itself to a remarkably wide range of real-time music-making possibilities, from interactive installation works utilizing video tracking or biological or environmental sensing, to a synthesis engine addressed from an interface of knobs and sliders, to an entity in a collaborative networked ensemble such as The Hub.4 Indeed, the software tools artists exploit in these varying applications are just as diverse in their approach to control. Ableton Live5 focuses on the triggering of sample files or the sequencing of effects, with the gestural input addressing events (start/stop) and variation of defined variables (volume, pan, effect mix, effect parameters, etc.). On the other hand, tools such as Max/MSP/Jitter6 from Cycling '74 strive to be a blank canvas, avoiding the imposition of a musical ideology. A more extreme case is the live coding, or real-time scripting, movement, which uses open source languages such as SuperCollider,7 Impromptu,8 and ChucK9 to create performances consisting of musicians writing and executing the software that generates the sounds in front of the audience. Such an approach forces reflection on the fact that if the instrument has changed so fundamentally, then so can the performance practice (Borgo 2005).
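The core move in live coding, redefining the running program mid-performance, can be caricatured in a few lines of Python. The names here are illustrative; the languages cited above couple this idea to real-time schedulers and synthesis engines, whereas this sketch only returns pitch values.

```python
# A toy "live coding" session: the performance is the act of redefining
# `pattern` while the surrounding machinery keeps running.
def pattern(beat):
    return 440  # the set opens as a plain A4 drone

def scheduler(beats):
    # Looks up `pattern` at call time, so redefinitions take effect
    # immediately, just as a live coder's re-evaluated block would.
    return [pattern(b) for b in beats]

print(scheduler(range(4)))   # -> [440, 440, 440, 440]

# Mid-performance, the musician executes a new definition:
def pattern(beat):
    return 440 if beat % 2 == 0 else 660

print(scheduler(range(4)))   # -> [440, 660, 440, 660]
```

The audience watches the instrument itself being rewritten, which is precisely why live coding unsettles inherited notions of performance practice.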

Kim Cascone, a recognized exponent of laptop music performance, commented that “if computers are simply the repositories of intellectual property, then musical composition and its performance are now also located in this virtual space. The composer transfers his or her mental work into the computer, and it is brought to life by interacting with it through the interface of a software application” (2000, p. 95). For Cascone and others, a laptop music performance presents a dichotomy by which the virtualization of the sonification-excitation mechanism and the subsequent dissolution of the embodied relationship acoustic musicians enjoy with their instruments deprives the audience of a conduit to engagement because if the audience is unable to identify the role the performer is playing in the production of the music they hear, the authenticity of the action is questioned:

Spectacle is the guarantor of presence and authenticity, whereas laptop performance represents artifice and absence, the alienation and deferment of presence…. Laptop music adopts the quality of having been broadcast from an absent space-time rather than a displaced one.

The laptop musician broadcasts sounds from a virtual non-place; the performance feigns the effect of presence and authenticity where none really exists. The cultural artifact produced by the laptop musician is then misread as “counterfeit,” leaving the audience unable to attach value to the experience. The laptop performer, perhaps unknowingly, has appropriated the practice of acousmatic music and transplanted its issues. (Cascone 2000, p. 95)

This alienation has attracted a wide variety of solutions; some performers destroy or hack objects during their performances, an action that may or may not have anything to do with sound production, while others project imagery, seeking to avoid a sense of “artifice and absence, the alienation and deferment of presence” (Cascone 2000, p. 95).

Some argue that the focus has moved from the visual, the excitement of watching the flamboyant performer (Nigel Kennedy, for instance), to the audible, a deep listening experience in which the intricacy of the sonic event is primary. “Digital performance is totally referential as a performative process. Therefore, when considering its cultural implications, it is perhaps more productive to consider it as a form of aesthetic regurgitation rather than altering old notions of performativity. The upside to this is that it means we have over a century and a half of critical materials developed in critical response to such approaches. The downside is that most of those critical materials ultimately secretly reaffirm the object they wish to critique” (Cascone 2000). Many laptop music performers, however, do see the need to inject a sense of the now, an engagement with the audience, in an effort to reclaim the authenticity associated with “live” performance. Music performances on acoustic instruments illustrate relationships common to all acoustic phenomena by which the source of the sound is readily identifiable. The acoustic model established our precepts of “liveness.” The performance gesture on an acoustic instrument is inherently associated with musical sonification. When this is not the case, a gestural paradigm needs to be invented, composed, and rationalized; it could be anything, a point illustrated by the fact that real-time music systems are used in sound installations (Paine 2001, 2006, 2007), with dancers and acrobats, and to sonify data associated with factory operations, pilot systems in aircraft, and weather states (Paine 2003), to mention but a few.

Cascone essentially suggested that we need to rethink the paradigm of musical performance, which he would argue has been largely in the domain of entertainment: a spectacle of virtuosity, of expression, of passion and angst, expressed through music using instruments that encourage theatricalized gesturing. Stanley Godlovitch discusses the act of musical performance in detail in his book Musical Performance: A Philosophical Study (1998), pointing out that it is far more than pure entertainment; it is a ritualized form of collective conscience, a rare opportunity within modern Western society for communal catharsis. Music plays an important role in the emotional state of the society from which it emerges, and the performance practice of the time is in part a critique of the fashion (manners, customs, and clothing) of the time.

I posit therefore that the physicality of musical performance remains a critical and inherently valuable characteristic of live music. The laptop computer may circumvent the need for gestural input in and of itself; however, a number of interfaces for musical performance encourage gesturing, be it moving sliders, turning knobs, shaking, rotating, or the like, and here lies the real problem: what characteristics of human movement are meaningful when controlling or creating live electronic music on a laptop computer?

Inventing a gestural language without inherent intent leaves the computer musician open again to charges of counterfeit. The question of designing interfaces that address authenticity, that illustrate a link between action and result, is therefore of paramount importance.

The flaw in this argument may be that musicians have continually adapted existing technologies for music-making (the turntable, the mixing desk, etc.), seeking instruments that express the evolving cultural climate even though they were never designed for that purpose. Genres evolve in accordance with these creative redeployments of technology and the resulting remediation of the concept of musical performance. Until recently, DJs far outnumbered laptop musicians. Nevertheless, they are never understood to be making the musical material in the moment, but to be navigating a pathway through current musical artifacts, drawing links, and opening communication channels and new perspectives on music composed by others. The DJ is to some extent the live exponent of the remix, and the DJ's act is often highly gestural, ranging from accentuated swaying to the beats of the music to spinning on the turntable itself, horizontally suspended from one hand on the platter. What I believe this tells us is that the need for showmanship, for performance (Godlovitch 1998), is far from obsolete, that communal catharsis is just as current and critical to today's society as it has always been.

It is critical that new instruments be developed that facilitate and nurture this expression, musical instruments that facilitate subtlety and nuanced expressivity of the same granularity as traditional acoustic musical instruments.

As outlined, in electronic musical instruments, unlike acoustic instruments, the performer's physical gestures are decoupled from the sound-generating mechanism. A crucial step in the development of new musical interfaces, therefore, is the design of the relationship between the performer's physical gestures and the parameters that control the generation of the instrument's sound (Cook 2001, Wessel and Wright 2002). This process is known in the computer science and engineering worlds as control mapping (Wessel 1991, Rowe 1993, Mulder 1994, Winkler 1995, Roads 1996, Mulder et al. 1997, Rovan et al. 1997, Chadabe 2002); the musician, however, perceives it as a more homogeneous engagement in which agency is decisive.
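The mapping literature commonly distinguishes one-to-one, one-to-many, and many-to-one (or many-to-many) strategies. A minimal sketch of the three shapes, with parameter names that are illustrative assumptions rather than any particular system's API:

```python
def one_to_one(slider):
    # Each control drives exactly one synthesis parameter,
    # the typical behaviour of a knob-and-slider interface.
    return {"cutoff_hz": 200 + slider * 4800}

def one_to_many(pressure):
    # A single gesture shapes several parameters at once, closer to
    # the coupled behaviour of acoustic instruments.
    return {
        "amplitude": pressure,
        "brightness": 0.3 + 0.7 * pressure,   # louder is also brighter
        "vibrato_depth": 0.1 * pressure,
    }

def many_to_one(bow_speed, bow_pressure):
    # Several gestural dimensions combine into one perceptual result.
    return {"loudness": bow_speed * bow_pressure}
```

The mapping studies cited above report that coupled mappings of the second and third kinds feel more "instrument-like" to performers than banks of independent one-to-one controls.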

The issue of embodied knowledge is vital in both the learning and the teaching of musical performance skills and the relationship the musician has to his or her instrument. Don Ihde's phenomenological explorations of music and sound (1990) refer to “embodiment relations,” a relationship with an instrument by which the instrument “disappears” in use to become a conduit for expression rather than an object in its own right.

In their article “Corporeal Virtuality: The Impossibility of a Fleshless Ontology,” Ingrid Richardson and Carly Harper (2001) extended Ihde's approach (itself drawn from Merleau-Ponty 1962), interlinking epistemology, phenomenology, and notions of experience in a framework for the consideration of experiential engagement:

Phenomenology, via both Merleau-Ponty and Heidegger, not only prioritises the body as … [a] condition of knowledge, but can also situate technics or equipmentality in primary relation with that body, as mutually imbricated in the processes of knowing and perception. Both Heidegger and Merleau-Ponty develop a latent “phenomenology of instrumentation” (Ihde 1990, p. 40) and thus lay the potential groundwork for a promising reconfiguration of agency in relation to high technology.

Merleau-Ponty, in particular, challenges dominant neo-Cartesian models of subjectivity, by highlighting the a priori coincidence of consciousness and the body i.e. abandoning the mind/body dualism in favour of the notion of a “body-subject.”

Richardson and Harper were seeking to address interaction in virtual environments through a materialist, somatic approach to existence and the production of knowledge. I believe this approach is equally valid for electronic musical instruments. Phenomenology (after Ihde and Merleau-Ponty) provides a framework for the consideration of experiential engagement, or “embodiment relations,” which, as Ihde commented, is the state of interaction a highly trained musician develops with the dynamical system that is his or her instrument. The musician does not consciously consider every action he or she executes in performance; it is a trained, subliminal process that utilizes principal components to shape the sound properties. Ihde referred to this as lifeworld perception, citing Merleau-Ponty as follows: “What counts for the orientation of the spectacle is not my body as it in fact is, as a thing of object space, but as a system of possible actions, a virtual body with its phenomenal ‘place’ defined by its task and situation. My body is wherever there is something to be done” (Merleau-Ponty 1962, from Ihde 1990, p. 39).

To summarize, I propose that one of the reasons for the perseverance of acoustic musical instruments is that their design and construction provide a set of affordances that have facilitated modes of engagement that extend to profound “embodiment relations” (Ihde 1990, p. 39) that encourage expression on a highly abstract but simultaneously visceral and rewarding basis.

The Thummer Mapping Project (ThuMP) sought to understand the way in which these phenomenological approaches might yield information about the epistemic condition of knowledge, that is, the subconscious body of knowledge associated with “embodiment relations” (after Ihde), that could guide electronic instrument/interface design in such a manner that the gestural language used to engage with the instrument would exhibit affordances sufficiently convincing to overcome any concern about authenticity in performance.

3. The ThuMP Project

Although a considerable body of literature exists discussing models of mapping (one to many, many to many; Hunt and Kirk 2000, Hunt et al. 2000, Hunt and Wanderley 2002), the literature is largely devoid of discussion of the underlying musical intentionality associated with the control mechanisms outlined.

In 2005–2006, in an effort to develop a model of musical intentionality, I established the Thummer Mapping Project (ThuMP) (Paine et al. 2007) at the University of Western Sydney with colleague Ian Stevenson and industry partner Thumtronics10 P/L. Rather than analyzing the mapping strategies displayed in existing electronic music interface paradigms, ThuMP sought to develop a generic model of successful and enduring acoustic musical instruments, with the selection constrained by the specification that each instrument should be able to produce a continuous tone that could be varied throughout. The ThuMP project interviewed wind, brass, string, and piano accordion performers, asking each musician to reflect on personal practice and to specify the control parameters he or she brought to bear in playing the instrument, seeking to prioritize the parameters and understand their inherent interrelationships.

Each interview was analyzed by a researcher skilled in qualitative data analysis yet a layperson regarding musicianship. This was done to reduce bias in the analysis that may have been introduced as a result of training within a particular musical paradigm. The musical parameters each interviewee discussed included pitch; dynamics; articulation (attack, release, sustain); and vibrato. Through further content analysis, based solely on the logic outlined in each discourse, the physical controls (speed, direction, force, etc.) that each musician utilized to affect these control parameters were also noted. In addition, the interconnections between these controls and the overall effect on the sound of the instrument were distinguished. The analysis was then represented diagrammatically, noting the connections and interrelatedness of the physical controls, the control parameters, and their effect on the overall sound of the instrument (see fig. 11.1).


Figure 11.1 Common underlying physicality involved in controlling sound dynamics.


Figure 11.2 Analysis of control for the concert flute.

Using the NVivo11 qualitative data analysis program, each of the pathways outlined diagrammatically was then supported with transcript data. For example, fig. 11.2 indicates that pathway 6 concerns the embouchure and its effect on the dynamics of the flute. As stated by the participant:

I personally believe that you should have it as wide as possible, not to the point where it's really windy sounding, but you want to have all those extra harmonics and the richness of the sound. So you would use the smaller embouchure, the smaller circle, when you're playing softer because when you're playing softly the air has to come out faster, has to still come out fast, I shouldn't say it has to come out faster.

To play softly you can't just stop blowing because it doesn't work, so it's like; you know if you put your thumb over the end of the hose and the water comes out faster because you've made a smaller hole, kind of the same thing when you're playing softer.

For loud, more air. That's qualified by; the embouchure has to get larger to allow that air to come out…. That's where the angle of the air comes in as well, you've got to aim the air, angle the air downwards.

For softer, smaller embouchure. Less air than is required for the loud playing but still enough air so that the note works. Also, the angle of the air generally angles upwards.

The transcribed discourse was then subjected to a summary analysis, so that each pathway was succinctly represented. For example, pathway 6 was summarized to include the following:

A smaller embouchure is used to play softly—because the air still has to come out fast. There is less air than when playing loud, but still enough to make the note work. The air is angled upwards.

To play loudly, more air is required, that is, the embouchure gets larger to allow more air and the air is angled downwards.

A second round of interviews was conducted with the instrumentalists to clarify the relationships between the physical controls of the instrument, the defined principal control parameters (dynamics, pitch, vibrato, articulation, release, attack), and the tone color as outlined in fig. 11.3.

The principal aim of this phase was to identify the commonalities among the interviews regarding controllable sound properties and the physical controls that are exercised in the manipulation of these properties. Four parameters, pressure, speed, angle, and position, were consistently noted across all the interviews (see fig. 11.1). A robust generic model representing these physical controls was developed for dynamics, pitch, vibrato, and articulation (including attack, release, and sustain), as represented in figure 11.3.


Figure 11.3 Common underlying physicality involved in controlling sound dynamics.

With regard to the physical controls that are exercised in the manipulation of these properties, a number of commonalities were identified. However, given the variance evident in the physical manipulation of the instruments included in the study (e.g., the flute and the double bass), the commonalities identified were based on similarities in the underlying physicality of the process involved. To illustrate, in controlling the sound dynamics, double bass players vary the amount of bow hair used to impact the string by varying the angle of the bow (relative to the string) and increasing the pressure between the bow and string; flute players vary the angle of the airstream and the amount of air moving through the instrument, which is in turn a product of embouchure size and diaphragmatic support. The underlying physical process across these two manipulations can then be identified as a variance of angle and pressure. This type of analysis was repeated for each of the four control parameters outlined and was again represented diagrammatically.

In summary, fig. 11.3 represents a generalized model of the control parameters identified in the interviews using the NVivo qualitative data analysis approach, all of which depend on the pragmatics of the instrument in question (i.e., bowing technique or airstream control) but which together determine the most critical musical attribute, the overall tone color. The principal controls are dynamics, pitch, vibrato, articulation, and attack and release.

It should be noted that tone color is seen here not simply as a variable but as the principal objective of all control, with musical concepts such as dynamics and volume, expression, duration, and intonation falling under more general concepts, such as pitch, dynamics, and articulation.
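One way to make this model concrete is as a function from the four shared physical controls to the principal musical parameters. The weightings and couplings below are illustrative assumptions, not values from the ThuMP study; what the sketch preserves is the structure the interviews revealed: every musical parameter depends on more than one physical control, and tone colour is treated as the aggregate objective of all control rather than one variable among many.

```python
from dataclasses import dataclass

@dataclass
class Gesture:
    """The four physical controls ThuMP found across all interviews."""
    pressure: float  # 0..1, e.g. bow pressure or diaphragmatic support
    speed: float     # 0..1, e.g. bow or airstream speed
    angle: float     # -1..1, e.g. bow or airstream angle
    position: float  # 0..1, e.g. position along the string

def thump_model(g: Gesture) -> dict:
    """Illustrative (not the published) mapping from physical controls
    to musical parameters; deliberately interrelated throughout."""
    dynamics = 0.6 * g.pressure + 0.4 * g.speed
    pitch = g.position + 0.05 * g.pressure       # pressure bends pitch slightly
    vibrato = g.speed * abs(g.angle)
    articulation = g.pressure * (1.0 - g.position)
    # Tone colour as the aggregate objective of all control.
    tone_colour = (dynamics + vibrato + articulation) / 3
    return {"dynamics": dynamics, "pitch": pitch, "vibrato": vibrato,
            "articulation": articulation, "tone_colour": tone_colour}
```

Because no output can be moved without disturbing the others, isolating a single parameter is impossible by construction, which is exactly what the interviewed musicians reported about their instruments.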

Interrelationships exist within even the most generalized model; when asked to identify the interrelationships among the myriad specialist control parameters relating to their instruments, musicians often commented that the parameters were all interrelated, and that very little could be done by isolating a single one.

The Thummer Mapping Project produced a generic model that illustrated the relationships between musical characteristics and human control gestures within a context that ensured the gestures were meaningful. It is suggested that the model can be translated into a gestural language for controlling/creating live electronic music on a laptop computer. It is further suggested that control mechanisms should be developed from a consideration of the outlined gestural model rather than the reverse, which has previously been the norm in electronic interface design, that is, gestures developed to make the most efficient use of a preexisting, or already designed and manufactured, musical interface. The ThuMP approach made the musical context paramount. It shifted the principal considerations from electrical engineering, ergonomics (most acoustic musical instruments are not considered ergonomically sound), and computer science to a basis that provided an inherently musical framework for the development of a gestural control paradigm for musical performance using electronic musical interfaces, and as such it at least partly addresses the issues of authenticity outlined.

4. Composition in the Timbre Domain

A possible criticism of the model derived from ThuMP is that it is based on a consideration of musical instruments developed and utilized for chromatic/tonal music, a musical paradigm built on the musical note, and as such is an event-based paradigm. Wishart (1996) outlined the dramatic shift that occurred in electroacoustic composition, as well as in some late 20th-century acoustic music (Varèse, Xenakis, etc.), by which the morphology of the harmonic material became a paramount compositional consideration. This movement from an event-based, note-driven compositional approach (lattice-based composition) to a temporal, timbral, morphological approach also has profound implications for instrument design, and hence for approaches to electronic music performance interfaces and the associated design of control mechanisms, physical and virtual (software). This movement toward timbral composition introduces the possibility of gestural control having a direct relationship to musical content.

In approaching such a task, one must also consider the act of musical performance. I have cited both Cascone's and Godlovitch's considerations of this and outlined notions of body knowledge and somatic engagement with a musical instrument. Richardson and Harper pointed to Heidegger's and Merleau-Ponty's notions of the phenomenology of instrumentation. It stands to reason that musical practice evolves in accordance with these perceptual changes and that the nature of performative gesture and control also transforms. Richardson and Harper discussed the associated transformation of corporeal possibilities with reference to the human-technology relationship:

(p. 226) This provides an important and highly relevant theorisation of corporeal transformation, an idea that becomes central in the context of human-technology relations (Weiss 1999, p. 10). Knowledge of our bodies is technologically mediated and our perception is instrumentally embodied, both in the sense that tools assimilate and materially impinge upon our field of perception, and in the sense that as environmental probes, sensory tools become virtually inseparable from what we would discern as our own perceptual and sensorial boundaries…. [E]mphasising the corporeal-instrumental embodiment of knowledge becomes particularly imperative when critiquing technologies of virtuality. (Richardson and Harper 2001)

5. Gesture and Spatialization

Gestures, regardless of size, reflect a spatial characteristic. A gesture is always in reference to another point and contains morphology, direction, energy, and intent. There is a history of sound diffusion, dating from the antiphony of biblical times to the complexity of modern-day movie theater surround-sound systems and acousmatic diffusion systems such as those of the Birmingham ElectroAcoustic Sound Theatre (BEAST), Le Groupe de Recherches Musicales (GRM), Zentrum für Kunst und Medientechnologie (ZKM), Karlsruhe, Groupe de Musique Experimentale de Bourges (GMEB), and the Gmebaphone (Bourges), and other institutions with an interest in acousmatic music. Denis Smalley referred to the practice of sound diffusion as “‘sonorizing’ of the acoustic space and the enhancing of sound-shapes and structures in order to create a rewarding listening experience” (Austin 2000, p. 10).

The more abstract electronic music becomes, the more important it is for many listeners to identify and localize a source for heard events to make meaning from the musical activity in front of them. Dan Trueman and Perry Cook took a novel approach to this issue when they designed the Bowed-Sensor-Speaker-Array (BoSSA) (Trueman and Cook 2000). They made reference to the violin's “spatial filtering audio diffuser”:

Traditional musical instruments provide compelling metaphors for human-computer interfacing, both in terms of input (physical, gestural performance activities) and output (sound diffusion). The violin, one of the most refined and expressive of traditional instruments, combines a peculiar physical interface with a rich acoustic diffuser. We have built a new instrument that includes elements of both the violin's physical performance interface and its spatial filtering audio diffuser, yet eliminates both the resonating body and the strings. The instrument, BoSSA (Bowed-Sensor-Speaker-Array), is an amalgamation and extension of our previous work with violin interfaces, physical models, and directional tonal radiation studies.

The BoSSA instrument utilizes geosonic loudspeakers, which, its designers argue, have allowed them to “substantially reinvent their approach to the performance of live interactive computer music” (Trueman et al. 2000, p. 38):

(p. 227) Through the design and construction of unique sound diffusion structures, the nature of electronic sound can be reinvented. When allied with new sensor technologies, these structures offer alternative modes of interaction with techniques of sonic computation. This paper describes several recent applications of Geosonic Speakers (multichannel, outward-radiating geodesic speaker arrays) and Sensor-Speaker-Arrays (SenSAs: combinations of various sensor devices with outward-radiating multichannel speaker arrays). Geosonic Speakers, building on previous studies of the directivity of acoustic instruments (the NBody Project),12 attempt to reproduce some of the diffusion characteristics of conventional acoustic instruments; they engage the reverberant qualities of performance spaces, and allow electronic and acoustic instruments to blend more readily. (Trueman et al. 2000, p. 38)

In addition to the work by Trueman et al. (2000), Simon Emmerson has outlined the perceptual issues relating to the “local-field” confusion (Emmerson 1990, 1996, 1998, 2000, 2001). Local-field refers to the expansion of a sound source brought about by amplifying and diffusing the sound of an acoustic instrument, which unamplified constitutes a point source. When acoustic instruments are used in electroacoustic music performances, amplification brings the electronic or recorded sounds and the acoustic sources into the same sonic space, situating the acoustic instrument within the broader sonic experience. At the same time, it alerts us to the fact that the presence of an acoustic instrument on stage does not necessarily resolve these issues of perceptible sound source (Riikonen 2004).

The diffusion of sound sources, however, can add substantially to the performance of electroacoustic and acousmatic music. Once individual sonic elements are treated independently in terms of diffusion, they are perceived as individual entities within the broader architecture of the music. This provides a compositional tool by which sonic elements can be characterized as having some autonomy, a collaborator rather than simply a subservient part of the overall composition, a cincture at the base of a much larger and consuming structure (Paine 2005). New interfaces can offer a truly new opportunity for the musician to take control of spatialization during performance. Because the sound source is not located at the interface but diffused through an array of loudspeakers, gestural performance can meaningfully extend to the spatialization of individual elements of the composition. To some extent, it can be argued that such an application of gesture through new interfaces for electronic music performance may help overcome the local-field confusion discussed above; it is an approach distinct from the point-source strategy embodied in the geosonic loudspeaker (Trueman et al. 2000).
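As a concrete illustration of gesture-driven spatialization, the sketch below maps a single gestural parameter, an azimuth angle, onto amplitude gains for a ring of loudspeakers, using equal-power panning between the two nearest speakers. This is a generic panning technique, not a description of any system discussed above; the speaker layout and function names are assumptions made for the example.

```python
import math

# Hypothetical sketch: map a gestural azimuth (radians) to gains for a
# ring of N equally spaced loudspeakers, crossfading between the two
# nearest speakers. Speaker 0 sits at azimuth 0; angles increase evenly.

def ring_gains(azimuth: float, n_speakers: int) -> list:
    """Return one amplitude gain per speaker for a source at `azimuth`."""
    spacing = 2 * math.pi / n_speakers
    pos = (azimuth % (2 * math.pi)) / spacing      # fractional speaker index
    left = int(pos) % n_speakers                   # nearest speaker behind
    right = (left + 1) % n_speakers                # nearest speaker ahead
    frac = pos - int(pos)                          # 0.0 at left, 1.0 at right
    gains = [0.0] * n_speakers
    gains[left] = math.cos(frac * math.pi / 2)     # equal-power crossfade:
    gains[right] = math.sin(frac * math.pi / 2)    # squared gains sum to 1
    return gains

# A source panned exactly between speakers 0 and 1 of an 8-speaker ring
# receives equal gain on both, zero elsewhere:
print(ring_gains(math.pi / 8, 8))
```

A performer's gesture would drive `azimuth` continuously, so each sonic element of the composition can be given its own independent trajectory around the array.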

Such an application of performance gesture would need careful consideration to address the possible confusions between performance, as in the excitation and creation of the events being heard in real time, and control, as in the triggering, mixing, or in this case spatialization of events previously generated or recorded. It may be that a control-create confusion would replace the local-field dilemma if this issue is not properly addressed.

(p. 228) I have used this technique in interactive responsive sound environment installations:

As I step over the threshold, through the open door into the room, the resonant singing dissipates as if the attention of the room has been interrupted. As it turns its attention to me, I am greeted by a multitude of small, intimate trickling bubble-like sounds, moving from the far corner around the room to greet me, to investigate my appearance and in that instance to make me aware of my presence, my immediate and total presence within the system. No longer an observer, but an integral part of the whole, I move another step, and sense a whoosh of small watery sounds moving away from me as if in fright, dancing to-and-fro, at once intimate and curious, wrapping around me and then dashing away as if to get a bigger-picture view—acting as an observer as the dynamic of my movement increases rather than immersing me. I am aware of the space as alive, dynamically filling itself with sonic invitations to engage with it in a dance, enquiring as to my intentions, which I seek to make clearer through my gestures and behavioural responses, exploring the terrain, I find myself embraced in a kind of sonic womb. (Paine 2007, p. 348)

I experienced this same sense of intimacy at a BEAST concert in Birmingham in 2003. The 2020 Re:Vision concert13 featured over one hundred loudspeakers14 mounted in a full three-dimensional array around the audience (front, sides, back, above, below, and many points in between), a setup now mirrored in a number of research institutions. The experience of being immersed in the sound is qualitatively and perceptually distinct from listening to music presented as a stereo image (as per a proscenium arch theater), in which the audience is always and necessarily separate from the activity, to which they can relate only as spectators.

Sound diffusion is practiced with both multichannel source material and stereo source material that is diffused (dynamically spatialized) through multiple loudspeakers. Stereo diffusion presents a number of issues regarding the maintenance of a coherent sonic image, which can easily become unintelligible when spread over physical and sonic distances not experienced during composition. A good deal of research pertaining to spectral or dynamic separation of sounds has taken place, but discussion is beyond the scope of this chapter.
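A minimal sketch of the fader-based stereo diffusion described here: each loudspeaker receives a weighted mix of the left and right channels, and the diffuser varies those weights over time. The gain values below are invented for illustration; a real diffusion desk would also interpolate fader moves smoothly to avoid audible stepping.

```python
# Hypothetical sketch of stereo diffusion: the two channels of a stereo
# source are sent to several loudspeakers at once, with per-speaker fader
# gains that the diffuser changes during performance.

def diffuse(stereo_sample, fader_gains):
    """Mix one (left, right) sample into N loudspeaker feeds.

    fader_gains: list of (left_gain, right_gain) pairs, one per speaker.
    """
    left, right = stereo_sample
    return [gl * left + gr * right for gl, gr in fader_gains]

# Four speakers: the front pair carries the image strongly, the rear pair
# faintly; keeping left and right on separate speakers preserves the
# stereo correlation, so the image stays coherent as it spreads.
faders = [(1.0, 0.0), (0.0, 1.0), (0.3, 0.0), (0.0, 0.3)]
print(diffuse((0.5, -0.5), faders))  # -> [0.5, -0.5, 0.15, -0.15]
```

The coherence problem the paragraph raises corresponds to fader settings that mix left and right into the same distant speakers, collapsing the inter-channel differences on which the stereo image depends.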

In summary, music that is designed for loudspeaker presentation has the ability to utilize a number of strategies for the diffusion of that musical material over many loudspeakers to engage the architectural space and to present sonic elements as individual and intimate. Spatialization of sonic entities may assist in communicating the “embodiment relations” (Ihde 1990) musicians experience. I propose that both traditional and digital instruments allow embodiment relations, but the special spatialization potential afforded by gestures enacted on digital interfaces may provide a rich and rewarding avenue for development of the way in which abstract music engages an audience.

(p. 229) 6. Conclusion

Cascone (2000, p. 95) stated that “laptop performance represents artifice and absence, the alienation and deferment of presence.” This chapter has focused specifically on addressing these issues through notions of somatological and epistemic affordances and Ihde's embodiment relations (1990), within the context of developing performance interfaces that provide sufficiently convincing gestural control affordances to overcome concerns about authenticity in performance while offering the potential for highly nuanced, expressive, embodied music performances.

The discussion of ThuMP gave a brief outline of a new approach to these issues, presenting a model for musical control developed from a musician's perspective. This model permits the design of computer music performance interfaces, derived from a musical rather than an engineering or computer science perspective as has been the norm in the past, that utilize a gestural language for controlling/creating live electronic music on a laptop computer.

The identified interrelationship of all musical parameters exemplifies a dynamical system. The relationship between the complexity of control parameters and the evolving nature of musical practice has also been discussed with specific reference to the notion of dynamic morphology (Wishart 1996), addressing both musical material and the notion of morphology of gesture in musical control.

The adoption of interface technologies such as the WiiMote15 and the Wacom16 graphics tablet for laptop music performance makes the consideration of morphological approaches to musical interfaces an imperative. The extraordinarily swift uptake of these interfaces is a clear indication that gestural control is seen as important by musicians and audiences alike, and it remains one of the most intricate and complex areas of development in laptop music performance tools.

Acknowledgments: I would like to thank the MARCS Auditory labs and colleagues Ian Stevenson from the School of Communication Arts and Angela Pearce from the School of Psychology at the University of Western Sydney for their assistance during ThuMP.


Austin, L. 2000. Sound diffusion in composition and performance: An interview with Denis Smalley. Computer Music Journal 24(2): 10–21.

Bongers, B. 2000. Physical interfaces in the electronic arts—interaction theory and interfacing techniques for real-time performance. In Trends in Gestural Control of Music, ed. M. M. Wanderley and M. Battier. Paris: IRCAM—Centre Pompidou, pp. 41–70.

Borgo, D. 2005. Sync or Swarm: Improvising Music in a Complex Age. New York: Continuum.

Carlsen, P. 1988. The Player-Piano Music of Conlon Nancarrow: An Analysis of Selected Studies. Vol. 1. SAM Monographs, no. 26. Brooklyn: Institute for Studies in American Music, Conservatory of Music, Brooklyn College of the City University of New York.

Cascone, K. 2000. Comatonse recordings: Articles and reviews. Computer Music Journal 24(4): 91–94.

Chadabe, J. 1997. Electric Sound: The Past and Promise of Electronic Music. Upper Saddle River, NJ: Prentice Hall.

Chadabe, J. 2002. The limitations of mapping as a structural descriptive in electronic music. Paper presented at NIME 2002, Dublin.

(p. 231) Cook, P. 2001. Principles for designing computer music controllers. In Proceedings of NIME-01, New Interfaces for Musical Expression, CHI 2001, Seattle.

Duckworth, W. 1999. Talking Music: Conversations with John Cage, Philip Glass, Laurie Anderson, and Five Generations of American Experimental Composers. New York: Da Capo Press.

Emmerson, S. 1990. Computers and live electronic music: Some solutions, many problems. In Proceedings of the International Computer Music Conference, Glasgow, Scotland.

Emmerson, S. 1996. Local/field: Towards a typology of live electronic music. Journal of Electroacoustic Music 9: 10–12.

Emmerson, S. 1998. Acoustic/electroacoustic: The relationship with instruments. Journal of New Music Research 27(1–2): 146–164.

Emmerson, S. 2000. “Losing touch?” The human performer and electronics. In Music, Electronic Media and Culture, ed. S. Emmerson. Aldershot, UK: Ashgate, pp. 194–216.

Emmerson, S. 2001. New spaces/new places: A sound house for the performance of electroacoustic music and sonic art. Organised Sound 6(2): 103–105.

Gann, K. 1995. The Music of Conlon Nancarrow. Music in the Twentieth Century. Cambridge: Cambridge University Press.

Godlovitch, S. 1998. Musical Performance: A Philosophical Study. London: Routledge.

Hunt, A., and R. Kirk. 2000. Mapping strategies for musical performance. In Trends in Gestural Control of Music, ed. M. Wanderley and M. Battier. Paris: IRCAM—Centre Pompidou, pp. 231–258.

Hunt, A., M. Wanderley, and R. Kirk. 2000. Towards a model for instrumental mapping in expert musical interaction. In Proceedings of the International Computer Music Conference, Berlin, pp. 209–211.

Hunt, A., and M. M. Wanderley. 2002. Mapping performer parameters to synthesis engines. Organised Sound 7(2): 97–108.

Ihde, D. 1990. Technology and the Lifeworld: From Garden to Earth. Indiana Series in the Philosophy of Technology. Bloomington: Indiana University Press.

Kittler, F. A. 1999. Gramophone, Film, Typewriter. Writing Science. Stanford, CA: Stanford University Press.

Kittler, F. A., and J. Johnston. 1997. Literature, Media, Information Systems: Essays. Critical Voices in Art, Theory and Culture. Amsterdam: GB Arts International.

Merleau-Ponty, M. 1962. Phenomenology of Perception. New York: Humanities Press.

Mulder, A. 1994. Virtual musical instruments: Accessing the sound synthesis universe as a performer. In Proceedings of the First Brazilian Symposium on Computer Music, Caxambu, Minas Gerais, Brazil, pp. 2–4.

Mulder, A., S. Fels, and K. Mase. 1997. Mapping virtual object manipulation to sound variation. IPSJ SIG Notes 97(122): 63–68.

Mulder, A., and S. Fels. 1998. Sound sculpting: Performing with virtual musical instruments. In Proceedings of the 5th Brazilian Symposium on Computer Music, Belo Horizonte, Minas Gerais, Brazil, pp. 3–5.

Paine, G. 2001. “Gestation.” Exhibition catalogue.

(p. 232) Paine, G. 2002. Interactivity, where to from here? Organised Sound 7(3): 295–304.

Paine, G. 2003. Reeds, a responsive environmental sound installation. Organised Sound 8(2): 139–150.

Paine, G. 2005. Sonic immersion: Interactive engagement in realtime responsive environments. In Proceedings of e-Performance and Plugins, Sydney.

Paine, G. 2006. Interactive, responsive environments: A broader artistic context. In Engineering Nature: Art and Consciousness in the Post-Biological Era. Chicago: University of Chicago Press/Intellect, pp. 312–334.

Paine, G. 2007. Hearing places: Sound, place, time and culture. In Hearing Places, ed. R. Bandt, M. Duffy, and D. MacKinnon. Newcastle, UK: Cambridge Scholars Press, pp. 348–368.

Paine, G., I. Stevenson, and A. Pearce. 2007. The Thummer Mapping Project (ThuMP). In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME07), New York, pp. 70–77.

Pressing, J. 1990. Cybernetic issues in interactive performance systems. Computer Music Journal 14(1): 12–25.

Richardson, I., and C. Harper. 2001. Corporeal virtuality: The impossibility of a fleshless ontology. Body, Space and Technology Journal 2(2).

Riikonen, T. 2004. Shared sounds in detached movements: Flautist identities inside the “local-field” spaces. Organised Sound 9(3): 233–242.

Roads, C. (ed.). 1996. The Computer Music Tutorial. 2nd ed. Cambridge, MA: MIT Press.

Rovan, J., M. Wanderley, S. Dubnov, and P. Depalle. 1997. Mapping strategies as expressive determinants in computer music performance. Paper presented at the AIMI International Workshop Kansei: The Technology of Emotion, Genoa, Italy.

Rowe, R. 1993. Interactive Music Systems. Cambridge, MA: MIT Press.

Trueman, D., C. Bahn, and P. Cook. 2000. Alternative voices for electronic sound. Journal of the Acoustical Society of America 108(5): 25–38.

Trueman, D., and P. Cook. 2000. BoSSA: The deconstructed violin reconstructed. Journal of New Music Research 29(2): 121–130.

Wanderley, M. 2001. Gestural control of music. In Proceedings of the International Workshop on Human Supervision and Control in Engineering and Music, Kassel, Germany.

Weiss, G. 1999. Body Images: Embodiment as Intercorporeality. New York: Routledge.

Wessel, D. L. 1991. Instruments that learn, refined controllers, and source model loudspeakers. Computer Music Journal 15(4): 82–86.

Wessel, D., and M. Wright. 2002. Problems and prospects for intimate control of computers. Computer Music Journal 26(3): 11–22.

Winkler, T. 1995. Making motion musical: Gestural mapping strategies for interactive computer music. Paper presented at the 1995 International Computer Music Conference, San Francisco.

Wishart, T. 1996. On Sonic Art. Ed. S. Emmerson. Philadelphia: Harwood.


                                                                                              (1.) See “Conlon Nancarrow,” Wikipedia,, accessed February 2, 2008.

                                                                                              (2.) See Jay Williston, “Thaddeus Cahill's Teleharmonium,”,, accessed February 2, 2008.

                                                                                              (3.) See the NIME Web site,, accessed June 3, 2008.

                                                                                              (4.) See the official Web site of the “original computer network band,” The HUB,, accessed June 3, 2008.

                                                                                              (5.) See the development company's official Web site,, accessed June 3, 2008.

                                                                                              (6.) See the development company's official Web site,, accessed June 3, 2008.

                                                                                              (7.) See the official repository and development community for the Supercollider synthesis software,, accessed June 3, 2008.

                                                                                              (8.) See the official repository and development community for the Impromptu software environment,, accessed June 3, 2008.

                                                                                              (9.) See the official download and information page for the musical programming language Chuck,, accessed June 3, 2008.

                                                                                              (10.) See the official Web site for Thumtronics Ltd.,, accessed December 3, 2008.

                                                                                              (11.) See the official Web site for QSR International, the developer of the qualitative analysis software Nvivo,, accessed December 3, 2008.

                                                                                              (12.) See “N-Body: Spatializing electric and virtual instruments,”, accessed July 5, 2008.

                                                                                              (13.) See “Previous Events,” Electroacoustic Music Studios/Beast,, accessed July 5, 2008.

                                                                                              (14.) Loudspeaker specifications can be found at “About BEAST,” Electroacoustic Music Studios/Beast,, accessed March 21, 2009.

                                                                                              (15.) See the official Nintendo P/L Web site for the Wii gaming console and accessories,, accessed December 3, 2008.

                                                                                              (16.) See the official Wacom P/L Web site,, accessed December 3, 2008, which provides information on the Wacom Tablet products.