Introduction
The Oxford Handbook of Interactive Audio is a collection of chapters on interactivity in music and sound whose primary purpose is to offer a new set of analytical tools for the growing field of interactive audio. We began with the premise that interacting with sound is different from merely listening to sound, in terms of both the audience’s and the creator’s experience. Physical agency and control through interactivity add a level of involvement with sound that alters the ways in which sound is experienced in games, interfaces, products, toys, environments (virtual and real), and art. A series of related questions drives the Handbook: What makes interactive audio different from noninteractive audio? Where does interacting with audio fit into our understanding of sound and music? What are the future directions of interactive audio? And how do we begin to approach interactive audio from a theoretical perspective?
We began the Oxford Handbook of Interactive Audio by approaching authors who work with interactive audio across a wide spectrum, hoping that, together, we might begin to answer these questions. What we received in return was an incredible array of approaches to the idea of interacting with sound. Contributors to the Handbook approach the ontological and philosophical question of “What is interactive audio, and what can it do?” from a number of different perspectives. For some, an understanding of sound emerges through developments and advancements in technology, in writing software and code, or in building original hardware and equipment to create new types of sound. For others, interactive audio is more of an aesthetic consideration of how its inherent power can be used in creative projects and art installations. For still others, new perspectives on audio emerge through exploration of its communicative power: how audio works as a link not only between human and machine but also, increasingly, between human beings.
From the outset, our goal was to put together a volume of work that was both inclusive and dialectical in nature, a volume that would be humanities-driven, but that would also take into account approaches from practitioners and those within the natural sciences and engineering disciplines. Rather than direct contributors to write to a specific brief, we instead encouraged them to interrogate, interpret, and challenge current theories and understandings of interactive audio, in whatever forms and contexts were meaningful to them. What has emerged from this type of open-ended mandate demonstrates not only a remarkable range of scholarship but also the inherent importance of interactive audio to so many different areas.
However, beneath the wide disparity in the chapters’ approaches and subject matter, a series of themes began to surface and recur across disciplines. It was these themes that eventually led to the overall structure of the Oxford Handbook of Interactive Audio and its separation into six sections: (1) Interactive Sound in Practice; (2) Videogames and Virtual Worlds; (3) The Psychology and Emotional Impact of Interactive Audio; (4) Performance and Interactive Instruments; (5) Tools and Techniques; and (6) The Practitioner’s Point of View. These sections are to some extent driven by the overarching themes that tie them together, although, as will become apparent upon reading, there is considerable overlap between sections, making our organizational structure just one of any number of ways of presenting and making sense of so many diverse and diffuse ideas.
Interactive Sound in Practice
The first section, Interactive Sound in Practice, presents research drawn from an arts perspective, with a particular focus on interactive audio as a component of art practice (where “art” is defined broadly). What is clear from the chapters in this section is the idea that interactivity in the arts arose as a defining element of the twentieth-century avant-garde. Interactivity facilitated (and was facilitated by) a new relationship between audience and creator, a relationship that broke down the “fourth wall” of artistic practice. The fourth wall is a term borrowed from performance theory that considers the theatrical stage as having three walls (the rear and two sides) and an invisible fourth wall between the actors and audience. “Breaking” the fourth wall has become an expression for eliminating the divide between performer or creator and audience. Alongside this creator–audience dissolution is a new emphasis on art as an experience and practice, rather than as a text or object. The shift in the arts in the twentieth century from object-based work to practice-based work has been described as a change of focus toward doing: a shift to an aesthetics of relationships (Bourriaud 2002; Green 2010, 2). Gell, for instance, suggests a redefinition of art as the “social relations in the vicinity of objects mediating social agency…between persons and things, and persons and persons via things” (Gell 1998, 5).
One of the challenges of thinking of interactivity in these terms—that is, as an ongoing social construct—is that it brings up difficult questions about the nature of texts as finished products (Saltz 1997, 117). Tied closely to the concept of the open work (an idea of “unfinishedness” that was made famous by John Cage, although the idea certainly existed much earlier), interactivity presents work that is always evolving, always different, and never finished. Interactive texts are inherently unfinished because they require a participant with whom to interact before they can be realized in their myriad forms: a player is needed for a game, and an audience is required for an interactive play. The structures that are inherent in interactive media encourage a greater affordance for, and a greater interest on the part of, the audience toward coauthorship. In this way, notions of interactivity both feed into and draw from postmodern aesthetics, shifting away from “art” and “play” as cogent and unproblematic terms, moving toward a system that defines interactivity as a necessarily individualized and interpretive process.
From a technological–industrial perspective, it becomes evident that interactivity has been, in no small measure, influenced by advances in digital machines and media. Marshall McLuhan and Barrington Nevitt predicted as early as 1972 that the consumer–producer dichotomy would blur with new technologies. Rob Cover argues that “the rise of media technologies which not only avail themselves to certain forms of interactivity with the text, but also to the ways in which the pleasure of engagement with the text is sold under the signifier of interactivity is that which puts into question the functionality of authorship and opens the possibility for a variety of mediums no longer predicated on the name of the author” (Cover 2006, 146).
The dissolution of the creator–audience divide and the rise of the audience-creator are explored in a variety of forms in this section of the book. Holly Rogers takes on this history in video art in “Spatial Reconfiguration in Interactive Video Art,” drawing on Frances Dyson’s conceptualization of the change as going from “looking at” to “being in” art (Dyson 2009, 2). The theme is further interrogated in Nye Parry’s “Navigating Sound: Locative and Translocational Approaches to Interactive Audio,” which explores the influence of the avant-garde on site-specific and environmental sound.
In each of the chapters in this section, it is clear that the role of the audience has gone from one of listening to one of sound-making. The audience is no longer disconnected from the sounds produced in the environment, but is actively involved in adding to, shaping, and altering the sonic environment around them. This activity is made explicit in Andrew Dolphin’s chapter on sound toys, “Defining Sound Toys: Play as Composition.” Dolphin questions the role of the composer as a kind of auteur, suggesting instead that interactive audio leads to a democratization of sound-making practice in the form of affordable, user-friendly interactive toys.
The new means to interact with sound may lead to potentially new ways to enhance learning, an idea explored by M. J. Bishop in her chapter, “Thinking More Dynamically About Using Sound to Enhance Learning from Instructional Technologies.” Finally, Jan Paul Herzer explores the concept of an audience’s participation in an interactive environment, an environment where audio becomes a component of a functional interactive ecosystem, in “Acoustic Scenography and Interactive Audio: Sound Design for Built Environments.”
Videogames and Virtual Worlds
Perhaps one of the most influential drivers of interactive audio technology today is videogames and virtual worlds. For those who have grown up playing videogames, interacting with audio (and video) is an almost instinctive process. Our physical interaction with sound, coupled with the meaning derived from these sounds (and our interaction with them), directly informs the ways in which videogames and game franchises are created. Publishers and online companies rely on sound and music to communicate key ideas about the game and gameplay. Videogames have offered a uniquely commercial avenue for the exploration and exploitation of interactive audio concepts, from generative and procedural content to nonlinear open-form composition. The nonlinear nature inherent in videogames, along with the different relationship the audio has with its audience, poses interesting theoretical problems and issues. One of the most significant aspects has been the influence of games on sound’s structure, particularly the highly repetitive character of game audio and the desire for variability.
The chapters in the Videogames and Virtual Worlds section explore the influence of interactivity on sound’s meanings and structures. Inherent in all of the chapters in this section is the idea that games are fundamentally different from film, and that interactivity drives this difference. In “The Unanswered Question of Musical Meaning: A Cross-domain Approach,” Tom Langhorst draws on elements of psychoacoustics, linguistics, and semiotics to explore the meaning behind seemingly simple sounds of early 8-bit games such as Pong and Pac-Man, suggesting that new methods must be developed to explore interactive sound in media.
Jon Inge Lomeland takes a different approach to meaning in “How Can Interactive Music be Used in Virtual Worlds Like World of Warcraft?” Lomeland approaches the meaning of game music for the audience in terms of the nostalgia that builds up around highly repetitive music tied to hours of enjoyment with a game. As games evolve over time, what changes can be made to the music without disturbing the attachments that players have developed to it, and how will new music be received by its audience?
Guillaume Roux-Girard further explores the listening practices of game players in “Sound and the Videoludic Experience.” Roux-Girard suggests methods that scholars can employ in analyzing interactive music, focusing on the experiential aspects of play. Roux-Girard, Lomeland, and Langhorst all focus on the idea that interactivity alters the relationship that players have with music, and suggest that game music cannot be analyzed outside the context of the game: there is a fundamental necessity to include the player’s experience in any analysis.
Just as games can influence music’s structure, the final two chapters of the section suggest how music can influence the structure of games. In “Designing a Game for Music: Integrated Design Approaches for Ludic Music and Interactivity,” Richard Stevens and Dave Raybould take a cue from sound designer Randy Thom’s well-known article “Designing a Movie for Sound” (1999), in which Thom argues that sound can be a driving force in film if the film is written with sound in mind right from the beginning. The idea was later explored by game sound director Rob Bridgett in his Gamasutra article “Designing a Next-gen Game for Sound” (2007), in which he argues that it is necessary to design games with “sound moments” in order to entice the audience. Stevens and Raybould offer their own take on this concept, suggesting that previous definitions of interactivity have focused merely on reactivity, and that by reconceptualizing the notion of interactivity itself we may begin to think about new ways of developing games around audio, rather than developing the audio around the game, as is commonly done. Melanie Fritsch offers us some insight into music-based games in her chapter, “Worlds of Music: Strategies for Creating Music-based Experiences in Videogames.” By presenting three case studies of musically interactive games, Fritsch brings forth the notion that games are activities, driven by our physical, embodied interaction.
The Psychology and Emotional Impact of Interactive Audio
Historically, researchers into human cognition believed thinking and problem-solving to be exclusively mental phenomena (Clancey 1997, in Gee 2008). But more contemporary research, specifically that of embodied cognition theory, holds that our understanding of the world is shaped by our ability to physically interact with it. According to embodied cognition theory, our knowledge is tied to the original state that occurred in the brain when information was first acquired. Therefore, cognition is considered “embodied” because it is inextricably tied to our sensorimotor experience; our perception is always coupled with a mental reenactment of our physical, embodied experience (Collins 2011).
In the third section of the Handbook, The Psychology and Emotional Impact of Interactive Audio, embodiment through sound technology is explored from an embodied cognition perspective in the two chapters that focus on videogames: Mark Grimshaw and Tom Garner’s “Embodied Virtual Acoustic Ecologies of Computer Games” and Inger Ekman’s “A Cognitive Approach to the Emotional Function of Game Sound.” The importance of the role that our body plays in experiencing interactive sound—not only through direct physical interaction with sound, but also through the multimodal act of listening—is explored in the following two chapters, Rolf Nordahl and Niels C. Nilsson’s “The Sound of Being There: Presence and Interactive Audio in Immersive Virtual Reality” and Stefania Serafin’s “Sonic Interactions in Multimodal Environments: An Overview.”
Nordahl and Nilsson explore the importance of sound to the concepts of immersion and presence. The theory of immersion currently most in favor within the game studies and virtual reality communities is related to Csíkszentmihályi’s (1990) concept of “optimal experience,” or “flow.” Csíkszentmihályi describes flow as follows: “The key element of an optimal experience is that it is an end in itself. Even if initially undertaken for other reasons, the activity that consumes us becomes intrinsically rewarding” (Csikszentmihalyi 1990, 67). He outlines eight criteria for the flow experience: (1) definable tasks; (2) the ability to concentrate; (3) clear goals; (4) immediate feedback; (5) “deep but effortless involvement that removes from awareness the worries and frustrations of everyday life”; (6) a sense of control over one’s actions; (7) a disappearing concern for the self; and (8) an altered sense of the duration of time.
Several attempts have been made to identify the elements of virtual environments or games that lead to or contribute to immersion. One of the least explored areas of immersion is the influence of sound. Nordahl and Nilsson attempt to define presence and immersion in the context of interactive virtual environments, exploring the auditory influence as well as specific auditory techniques on immersive experiences. Serafin expands on this argument by focusing specifically on sound as one component within a multimodal system.
The interactions that occur between our sensory modalities can vary depending on the context they are operating in. Our perception of one modality can be significantly affected by the information that we receive in another modality. Some researchers have studied the interactions among modalities in general (Marks 1978). Others have focused on the interactions of two specific sensory modalities, such as vision and touch (Martino and Marks 2000), sound and touch (Zampini and Spence 2004), sound and taste (Simner, Cuskley, and Kirby 2010), and sound and odor (Tomasik-Krótki and Strojny 2008). Serafin interrogates these cross-modal interactions with sound, examining how an understanding of our perceptual system may improve our ability to design and create technologies.
Indeed, an understanding of the emotional and cognitive aspects of sound can potentially lead to much greater engagement with a variety of media. Anders-Petter Andersson and Birgitta Cappelen even show in “Musical Interaction for Health Improvement” that sound (specifically, music) can influence and improve our health. Natasa Paterson and Fionnuala Conway’s “Engagement, Immersion and Presence: The Role of Audio Interactivity in Location-aware Sound Design” specifically focuses on the role of sound in the design of location-aware games and activities, arguing for greater engagement and immersion through sound design.
Performance and Interactive Instruments
The fourth section of the Handbook, Performance and Interactive Instruments, brings together emerging ideas about how we physically interact with audio: through what devices, media, and technologies? New generations of game consoles manifest the idea that we physically interact with audio: through devices shaped like guitars and light sabers, through hand-held controllers and other gestural interaction devices. However, what are the constraints of these systems? How are designers and engineers working to overcome current technical and industrial limitations? In addition, how does the increasingly important role of social and online media influence the ways in which people interact with audio? In seeking solutions to these and other questions, the work of the authors in this section challenges traditional thinking about audio and the environment, about performer and audience, about skill and virtuosity, and about perception and reality.
Each author presents a different perspective on what interactive sound means in terms of digital sound production and consumption, exploring liveness, instrument creation, and embodiment. Kiri Miller explores interactivity through dance in “Multisensory Musicality in Dance Central.” Miller argues that through the performative practice of dance, and the social interactions that take place around games like Dance Central, audiences may develop a new relationship to music and sound.
Mike Frengel and Michael Gurevich each explore interactivity in the performing arts from the perspective of the composer and performer, rather than audience. This is not to say that an audience isn’t a component of that performance. Indeed, Frengel argues that “Interactivity in the performing arts is distinctive because there is a third party involved—the spectator. In concert music performances, the interaction typically occurs between a performer and a system, but it is done for an audience that remains, in most cases, outside the interactive discourse.” Both Frengel’s “Interactivity and Liveness in Electroacoustic Concert Music” and Gurevich’s “Skill in Interactive Digital Music Systems” examine the relationship between the performer and the audience in electronic (and particularly digital) interactive music, exploring what it means to perform with technology.
Research has shown that we can recognize and feel the emotion conveyed by a performer when we listen to music (Bresin and Friberg 2001). An embodied cognition account of why this occurs suggests that we understand human-made sounds (including those generated by playing a musical instrument) in terms of our own experience of making similar sounds and movements. We therefore give meaning to sound in terms of emulated actions, or corporeal articulations (Leman 2008). More specifically, we mentally and sometimes physically imitate the expressiveness of the action behind the sound, based on our “prior embodied experience of sound production” (Cox 2001). As Winters puts it, “The mimetic hypothesis might also provide an explanation for why we might find ourselves unconsciously ‘imitating’ the emotion seemingly being expressed, in addition to any willing participation in a game of make-believe” (Winters 2008). Electronically generated or synthesized sounds and music remove this corporeal connection to causality, and issues of liveness therefore frequently arise in discussions of electronic music. What is made clear in Frengel’s and Gurevich’s chapters is that digital electronic instruments can disguise some of the important performative aspects of music. Marc Ainger and Benjamin Schroeder’s “Gesture in the Design of Interactive Sound Models” focuses on the role of gesture in the relationship between performer, instrument, and listener, suggesting some means of overcoming the lack of gesture in some types of digital music performance. Nick Collins suggests that the machine can become a performer in its own right, an intelligent responsive instrument that can listen and learn, in “Virtual Musicians and Machine Learning.” This idea is further expanded upon by Norbert Herber in “Musical Behavior and Amergence in Technoetic and Media Arts.” Herber suggests that generative music systems can offer one means to enhance the live experience, as variation and difference can be brought into performance.
Tools and Techniques
The concept of machine learning, and of how the machine “talks” back to us and interacts with us, brings us to the section on Tools and Techniques, which focuses on the enabling nature of new tools, technologies, and techniques in interactive audio. Within Tools and Techniques, the ontological implications of questions regarding the evolving, ongoing, and often contested relationship between human and machine are explored. The essence of interactivity lies within the medium of interaction; unsurprisingly, then, computers, hardware, and software are the media integral to the production of digital audio. New technologies such as digital sensors have enabled interactivity to thrive in the arts, but how, specifically, can these media influence interaction with sound? In some instances, such as music for film and television, audio is transmitted in one direction: from creator to listener, with little or no interactivity involved; in others, sound can and indeed must be interactive, as is the case with videogames. Despite this difference, implicit in all of these cases is the understanding that technology is simply a tool—true creativity is an inherently human trait. But is such a statement necessarily the case? The research presented in this section questions the essential elements of interactivity by linking these findings to wider questions about creativity and creative work. Is creativity, by definition, something that can be produced only by human beings? Can machines produce output that evokes emotion?
Chris Nash and Alan F. Blackwell begin the section with “Flow of Creative Interaction with Digital Music Notations,” exploring the relationship between digital music notation and creation and examining the software at the heart of digital music production, from sequencer- or tracker-based systems such as Pro Tools to graphic programming software such as Max/MSP. They present a series of design heuristics based on their research into the influence that software has on creativity. David Bessell’s “Blurring Boundaries: Trends and Implications in Audio Production Software Developments” complements Nash and Blackwell’s chapter with a historical overview of the digital audio workstation, or DAW, focusing on the development of this musical software.
The next two chapters focus on generative and procedural production systems for videogames. Procedural soundtracks may offer interesting possibilities for composing for games, but procedural music composers also face a particular difficulty when creating for videogames: the sound in a game must accompany an image as part of a narrative, implying that sound must fulfill particular functions in games. Cues need to relate to each other, to the gameplay level, to the narrative, to the game’s run-time parameters, and even to other games in the case of episodic games or those that are part of a larger series. Procedural music and sound in (most) games, therefore, must be bound by quite strict control logics (the commands or rules that control playback) in order to function adequately (see Collins 2009). In particular, music must still drive the emotion of the game, a fact explored by Maia Hoeberechts, Jeff Shantz, and Michael Katchabaw in “Delivering Interactive Experiences through the Emotional Adaptation of Automatically Composed Music.” Niels Böttcher and Stefania Serafin focus specifically on the question of how procedural sound relates to the gestural interactions of the player in “A Review of Interactive Sound in Computer Games: Can Sound Affect the Motoric Behavior of a Player?”
The Tools and Techniques section of the Handbook is rounded out by Victor Lazzarini’s “Interactive Spectral Processing of Musical Audio,” which explores emerging ideas in interactive spatial sound and interactive spectral processing. Although such tools and techniques often occur “behind the scenes” of the creative and experiential aspects of sound production and listening, the ideas and concepts are driving new tools and technologies that are sure to become familiar to us in the future.
The Practitioner’s Point of View
The final section of the book, The Practitioner’s Point of View, steps back from some of the academically inspired issues and questions to consider interactive audio from the point of view of some of its practitioners. The collection of chapters presented in this section coalesce around considerations of the past, present, and future of interactive audio. “Let’s Mix it up: Interviews Exploring the Practical and Technical Challenges of Interactive Mixing in Games” by Helen Mitchell presents interview material with game sound designers, outlining some of the creative and technical challenges of designing interactive sound. Damian Kastbauer, an audio implementation specialist for games, explores what “Our Interactive Audio Future” might look like, introducing some of the technical work that is being undertaken through a narrative of sound synthesis in the future. Leonard J. Paul’s “For the Love of Chiptune” explores what it means to compose with game sound tools, and how practitioners can develop their own aesthetic within a community of composers.
Andy Farnell, one of the leading proponents of procedural audio, introduces us to his take on “Procedural Audio Theory and Practice,” providing a useful complementary chapter to some of the theoretical work presented in other chapters. Likewise, complementing the chapters in Performance and Interactive Instruments, composer Rafał Zapała gives his theory on and techniques for live electronic and digital performance with “Live Electronic Preparation: Interactive Timbral Practice.” Finally, game composer and sound designer Tim van Geelen introduces us to “New Tools for Interactive Audio, and What Good They Do,” suggesting how new hardware, software, and techniques may lead us forward in our production and understanding of interactive audio.
A Series of Lists…
The crossover between chapters has meant that common references to products and concepts recur throughout the Handbook. To facilitate referencing common software, games, and acronyms, we have compiled three lists following this Introduction: (1) a list of acronyms; (2) a list of software; and (3) a list of games. It is our hope that by presenting the information collated in this fashion, readers will more easily be able to follow up on references. Likewise, we have presented a list of further references for those readers who wish to seek out videos, images, sound files, and other content beyond what we could include in this text. This latter list was compiled by the authors of the chapters included here, and is presented as a kind of “recommended reading, viewing, and listening” list.
Bourriaud, N. 2002. Esthétique relationnelle. Dijon: Les presses du réel.
Bresin, Roberto, and Anders Friberg. 2001. Expressive Musical Icons. In Proceedings of the 2001 International Conference on Auditory Display, ed. J. Hiipakka, N. Zakarov, and T. Takala, 141–143. Espoo, Finland: Helsinki University of Technology.
Bridgett, Rob. 2007. Designing a Next-gen Game for Sound. Gamasutra, November 22. http://www.gamasutra.com/view/feature/130733/designing_a_nextgen_game_for_sound.php.
Clancey, W. J. 1997. Situated Cognition: On Human Knowledge and Computer Representations. Cambridge, UK: Cambridge University Press.
Collins, Karen. 2009. An Introduction to Procedural Audio in Video Games. Contemporary Music Review 28(1): 5–15.
——. 2011. Making Gamers Cry: Mirror Neurons and Embodied Interaction with Game Sound. In ACM AudioMostly 2011: 6th Conference on Interaction with Sound, 39–46. Coimbra, Portugal, September 2011.
——. 2013. Playing With Sound: A Theory of Interacting with Sound and Music in Video Games. Cambridge, MA: MIT Press.
Cover, Rob. 2006. Audience Inter/active: Interactive Media, Narrative Control and Reconceiving Audience History. New Media and Society 8(1): 139–158.
Cox, Arnie. 2001. The Mimetic Hypothesis and Embodied Musical Meaning. Musicae Scientiae 5(2): 195–212.
Csikszentmihalyi, Mihaly. 1990. Flow: The Psychology of Optimal Experience. New York: Harper Perennial.
Dyson, Frances. 2009. Sounding New Media: Immersion and Embodiment in the Arts and Culture. Berkeley: University of California Press.
Gee, James Paul. 2008. Video Games and Embodiment. Games and Culture 3(3–4): 253–263.
Gell, Alfred. 1998. Art and Agency: An Anthropological Theory of Art. Oxford: Oxford University Press.
Green, Jo-Anne. 2010. Interactivity and Agency in Real Time Systems. In Soft Borders Conference and Festival Proceedings, 1–5. São Paulo, Brazil.
Leman, Marc. 2008. Embodied Music Cognition and Mediation Technology. Cambridge, MA: MIT Press.
Marks, Lawrence E. 1978. The Unity of the Senses: Interrelations among the Modalities. New York: Academic Press.
Martino, Gail, and Lawrence E. Marks. 2000. Cross-modal Interaction between Vision and Touch: The Role of Synesthetic Correspondence. Perception 29(6): 745–754.
McLuhan, Marshall, and Barrington Nevitt. 1972. Take Today: The Executive as Dropout. New York: Harcourt, Brace and Jovanovich.
Saltz, David Z. 1997. The Art of Interaction: Interactivity, Performativity, and Computers. Journal of Aesthetics and Art Criticism 55(2): 117–127.
Simner, J., C. Cuskley, and S. Kirby. 2010. What Sound Does That Taste? Cross-modal Mappings across Gustation and Audition. Perception 39(4): 553–569.
Thom, Randy. 1999. Designing a Movie for Sound. Film Sound. http://filmsound.org/articles/designing_for_sound.htm.
Tomasik-Krótki, Jagna, and Jacek Strojny. 2008. Scaling of Sensory Impressions. Journal of Sensory Studies 23(2): 251–266.
Winters, Ben. 2008. Corporeality, Musical Heartbeats, and Cinematic Emotion. Music, Sound, and the Moving Image 2(1): 3–25.
Zampini, Massimiliano, and Charles Spence. 2004. The Role of Auditory Cues in Modulating the Perceived Crispness and Staleness of Potato Chips. Journal of Sensory Studies 19(5): 347–363.