When Music Unfolds into Image: Conceiving Visual Concerts for Kaija Saariaho’s Works
Abstract and Keywords
The authors reflect on their own experience of developing a specific form of multimedia live performance: the visual concert. The various video projects they realized for works by Finnish composer Kaija Saariaho serve as examples illustrating a more general aesthetic question: what can video art bring to music within the concert ritual? Answers are suggested first in a general assessment of the scientific (perception and cognition research) and cultural roots and parameters of cross-media art forms, and second in an analysis of the contemporary technological tools that allow the visual concert to move beyond the antiquated paradigms of synesthesia, synchronization, or aleatory autonomy of juxtaposed media, and thus to meet the challenges of contemporary music. These mostly unexplored links between new musical techniques and video art open new opportunities that expand the listener’s experience of music and suggest a practice that can become an art form of its own.
We have been exploring for many years—as a multimedia artist and a stage director, respectively—the manifold relationships that can be created between live music and video. Together with Image Auditive (IA)1—an organization dedicated to the conception, realization, and diffusion of innovative projects using new technologies to combine music and image—we have produced different sorts of visual concerts, in particular with diverse combinations of pieces for instruments and voices by Finnish composer Kaija Saariaho (born in 1952).2
In our practice and research, we aim to bring the relationship between live music and video to the status of a mature artistic form, one that is, we hope, more refined and aesthetically consistent than many productions in which advanced technologies are used to add a visual component to the live performance of classical and contemporary music. For us, conceiving and realizing visual concerts should proceed from a clear and constructive methodology; from serious knowledge and experience of music and the visual arts, including their histories and interrelations, of digital synthesis and processing techniques, and of research in cognition and perception; and from a responsibly assumed and consistent aesthetic standpoint.
In this chapter, we explore the theoretical background of our research and describe our own approach to “visual concerts,” concentrating the discussion on our work with the music of Saariaho. In this way, we hope to suggest a new paradigm of interest for music performance practice and, more generally, for research on the interactions between music and image.
I. Intents and Purposes
A. Defining a New Paradigm for Visual Concert
Although a common feature of pop and rock concerts since the 1960s, the use of visual projections, and video in particular, in the context of classical and contemporary music is a rather recent fashion. There have been, of course, notable exceptions, coming from avant-garde music and theatre. Two historical examples took place at the 1958 Brussels World’s Fair: Poème électronique, Edgard Varèse’s spatialized music, synchronized to a film in a building specially designed by Le Corbusier (with both musical and architectural contributions by Iannis Xenakis); and Laterna Magika, a multimedia performance by director Alfréd Radok and scenographer Josef Svoboda, including actors, dancers, live projections, and music. Svoboda later put these technologies to the service of avant-garde music, such as Luigi Nono’s Intolleranza 1960 (La Fenice, Venice, 1961). Despite these occasional forays, for historical and sociological reasons the classical concert of our days (especially in the case of symphonic music and recitals) is still an extremely codified event, in which rituals serve as reminders of traditions supposedly embodied by the “classics,” and where everything and everyone is, at least in theory, aiming at optimizing the audience’s experience of “pure” music.
Nevertheless, since the 2000s, many concert venues, including some of the most prestigious ones worldwide, have become increasingly interested in expanding their audience’s experience beyond the supposedly “neutral” visuals of the concert. The occasional attempts of past decades at using more sophisticated lighting (colored filters, slides, lasers, and the like), which never really became part of the concert routine, opened the way for the use of diverse projected materials, which has become much more widely accepted (Stevens 2009). The projection of films, cartoons, photos, paintings, and ultimately coherent video works—all forms that can be termed visual concerts according to our definition—serves various pedagogical, conceptual, and artistic functions, which will not be described here at length.3 These attempts indicate a strong tendency toward the use of visuals as a creative means to offer original, less abstract ways to approach classical and contemporary music, including in forms that are conceived as self-sufficiently musical (i.e., aural), which will be our focus here.
In the case of concert performances of operas, video is increasingly viewed as a way of compensating for the lack of staging and scenography, in a format meant to be economically advantageous: there are no longer set-building or staging-specific rehearsal costs, and relatively little technical assistance is needed, since the video artist arrives only at the last moment to calibrate his or her work on the music, rehearsed in normal concert/recital conditions. Probably the most famous example in this field is the version of Wagner’s Tristan und Isolde, which director Peter Sellars created with video artist Bill Viola. Although premiered in an opera house (Opéra Bastille, Paris, 2005), the project has since toured in concert halls all over the world—easily so, from a technical point of view, since it only requires a projection screen, minimal stage space and scenery, few props, and simple lighting that creates a visual bridge between the video and the live singers.
This example is rather unusual, since the video does not aim at replacing a full staging or illustrating the action, but creates its own world of symbols and textures, showcasing video art’s ability to fabricate its own function in the realm of live performance. In that way, it demonstrates how manifold the purpose of video can be in the concert performance; one could simply consider it superfluous because it appears as an “added layer” to an experience sufficient in itself, conceived as an independent object. As such, video can seem not only unnecessary but also disturbing, simply because for the audience (especially when it is more cultivated in music than in visual arts), concentrating on a visual input draws attention away from the sonic experience of the music. This can happen in many ways: our reflex, trained by cinema and television, of concentrating preeminently on picture rather than music can be automatically activated and impose itself hegemonically; or the visual content can simply be distracting because of its intrinsic force or its estrangement from the music, a common phenomenon in opera performances (audience members sometimes complain of not being able to concentrate on the music or even the words because of the staging).
Whereas video works conceived for concert versions of operas might benefit from the broader reflections and examples set by stage productions, there are seemingly very few tools available to theorize the possible adjunction of video to a pure music performance, and the way such an adjunction can avoid the above-mentioned difficulties. This is why, although we have created video concepts for operas performed in concert versions (cf. Barrière 2008), we will focus here more specifically on non-operatic music and its challenges for video, which differ from those of replacing a staging in works that were written from the beginning as stage works. A clarifying parallel to the visual concert is that of an original choreography created for music that was not written for the stage (opera or ballet), as has been common for decades. Although by its very nature the choreography is “unnecessary” to the experience of the music, it still stems from the score and offers a visual interpretation of it, hopefully enriching our experience of the piece’s rhythm, motives, and possibly narrative, if not thematic or contextual materials.
For this reason, another form we will not discuss here is the audiovisual performance or installation, where music and video are conceived together (the equivalent, perhaps, of a choreography created for the premiere of a ballet score)—even though it is an important part of our activities and, more generally speaking, has played a huge role in the development of video as an art form.4 One important example is Nam June Paik’s Concerto for TV Cello and Video Tape (1971), in which the form’s poiesis avoids any questioning of the video’s relevance, since it is created as a natural and necessary complement to the music. The same applies to projections/slides/videos specifically requested by the composers of an operatic score, such as the famous interlude in Berg’s Lulu, the composite forms created by composer/filmmaker Michel van der Aa, or our own works Violance and Ekstasis.5 This tendency could be traced back to “synesthetic” musical projects, such as Scriabin’s Prometheus, Kandinsky’s Der gelbe Klang, Schoenberg’s Die glückliche Hand, Messiaen’s Saint-François d’Assise, and of course Xenakis’s Polytopes and Diatope. All these endeavors can be, and sometimes have been, re-explored through the means of modern video technology, but once again, this is a different issue.6
As for the matter of video’s relevance, from the perspective of complementing self-sufficient music, we hope to suggest a few answers in the course of explaining the intents, purposes, and means of our work.
B. Kaija Saariaho: A Visual Composer?
Most of the scores by Kaija Saariaho for which we have developed visual concert versions were actually not written specifically for a multimedia approach. Why is it that we are intuitively driven to interpret them visually? Certainly not because we consider the music insufficient in itself, as if it needed an accompaniment to be elevated into an artistic experience. But something in Saariaho’s music seems to be intrinsically visual: the general concept of composition, building larger “pictures” and atmospheres; the use of motives within this broader frame; the work on textures and “colors” (a subject studied, among others, in Barrière 1991); and, in later works, what can be called a sense of narrative—in short, all elements often cited by musicologists and critics writing on Saariaho’s music.7 These are only a few elements that can serve as a starting point for a broader reflection on what could be called pictoriality in music, along with its mirror-concept, musicality in pictures—two concepts crucial to the development of a cross-media art form.
These concepts belong to a long tradition, starting with the works of Kandinsky, Klee, and their contemporaries, who looked for bridges between the arts (cf. Cook 1998), and culminating in various attempts to transform synesthesia into an artistic principle, on the basis of scientific research on wavelengths. Saariaho’s relationship to this tradition is obvious: not only is the fact that she studied fine arts before entering the Sibelius Academy’s composition program often quoted as a relevant aspect of her curriculum, but her strong attraction to multimedia forms can be traced back to her early works—for instance, the early piece Study for Life (1980), for soprano and tape, based on a text by T. S. Eliot, contains indications for lighting, choreography, and even odors. Culminating in her stage works for the ballet (Maa, 1991, choreographed by Carolyn Carlson) and the opera (starting from L’Amour de loin, 2000, directed by Peter Sellars), Saariaho has explored various forms of collaboration with other arts, as diverse as sonic atmospheres for several exhibitions by painter Raija Malka (from La Dame à la licorne in Paris, 1993, to Tidelines in Helsinki, 2011), a filmic project with director/producer Anne Grange (Vagues/Miroirs, started in 1996), and the interactive CD-ROM Prisma (1999), to which we also contributed.8
Another “symptom” that can be mentioned here is the strong influence Saariaho received from composers often considered “visual,” such as Debussy and Messiaen; the inspiration she took from the filmography of Ingmar Bergman and Andrei Tarkovsky9; her taste for metaphor—the most synesthetic figure in poetry—in the texts she decides to set to music, for example, Saint-John Perse’s poems10; and, more generally speaking, the visual experiences that are the starting point for many of her works and that are alluded to in their titles (Du cristal …, … à la fumée, Mirrors, Changing Light, Mirages, Lumière et Pesanteur, Laterna Magica, and more). Few composers, then, are seemingly as open to cross-media collaborations. This is perhaps why her music seems to be a perfect starting point for the creation of a multimedia experience: it is concrete enough to call for images, and abstract enough to set a standard far away from the temptation of illustration (as the elements described in Appendices A and B point out).
The function of such a multimedia form, therefore, has the same relationship to visual accompaniments that try to tell a story with the music (following the pioneering model of Walt Disney’s Fantasia, 1940) as contemporary dance has to classical ballet, with its codes and its narrative aims. But video may also be a way to dig deeper into music than the aesthetic paradigm of contemporary dance set by John Cage and Merce Cunningham, based on the superposition of independent layers (in their case, music and dance). Live computer video, when used in conjunction with cameras on stage, has the ability to include music and the musicians in its processes, thus integrating the visual presence of the performers into an enhanced level of listening that takes into account that the classical concert is never a purely aural experience. It therefore builds a form based on inner coherence as well as complementarity, which is the balance required, in our vision, by a sensitive multimedia approach. The Cage/Cunningham approach was revolutionary in its own time, in reaction to what was then the academicism of the relation between music and dance, but today it appears to us in most cases as a form of aesthetic laziness, a pretext for not embracing the challenge of renewing these relations. This is even more salient in regard to the artistic challenges raised by the use of the computer in all domains, which by construction allows bridges between the different arts. At a time when collaborations between artists and crafts are often superficial, dissociated, and insufficiently based on dialogue and mutual understanding (in the fields of classical music and opera, in particular), the search for unity, coherence, and mutual “intelligence” is, in our opinion, a strong paradigm for all artistic endeavors and, generally speaking, for the place of the artist in society.
C. Computer Video and the Interactions Between Music and Image
We consider in this chapter mostly computer video in concert situations, whether generated (i.e., synthesizing images) and/or processed (i.e., modifying live or pre-recorded sources) and, more specifically, projected rather than displayed on monitor screens. A unique feature of this form is that it acts like a second level of interpretation, added to the performance itself in real time, which in a way is opposed to the simplistic mapping of parameters between audio and visual that generates more or less automatic visual illustrations or visualizations of the sound.
To some extent, one can envision that the realm of video image today can recapitulate, encompass, or encapsulate all the historical artistic and aesthetic backgrounds from fine arts or photography to cinema, light shows, image processing, and synthesis—even though, in terms of image quality, substantial differences obviously remain between the media. Indeed, computer programs (whether purely commercial or issued from artistic research) can be used to produce, reproduce, or emulate images similar to the ones produced by previously existing media, as well as to create images that were impossible to conceive and realize before. Video projection by itself can also be considered and used as an advanced and sophisticated form of light show. In short, computer video is today a very general artistic tool. Several computer programs offer all these possibilities in the form of a unified environment, some of them conceived and used by and for DJs and VJs. Most of them are rather “closed” programs (i.e., packaged applications), while others are open programmable environments that can be customized and extended at will by persons with the relevant interest and programming knowledge.
It is interesting to note that the ones we have chosen to use, Max11 and Isadora,12 actually come from researchers and programmers originally involved in music: Max was first developed by Miller Puckette during the mid-1980s at Ircam, the largest musical research and production center in the world; Isadora was designed specifically for image and is still developed by a single person, Mark Coniglio, originally a composer, who had previously written a computer music program called Interactor (conceived at around the same time as Max). Both programs offer toolboxes to manipulate music and image, and allow one to work closely on their interactions. This is especially true for Max: its openness and programmability allow the user to imagine and realize many possible forms of connection between the design of sound and image.
It is clear that the existence of such tools encourages the elaboration of interactions between music and image in ways that were much more difficult, if not impossible, before. They offer conceptual and practical environments that allow direct experimentation and testing of audiovisual interactions, and therefore make it easy to validate or eliminate assumptions. However, specific conceptual and practical tools for actually representing and manipulating these interaction processes are still sorely lacking. In fact, one consistent weakness since the beginnings of computer music, whether in commercial or research software, is the lack of tools for time representation. The handling of music notation in contemporary computer music tools remains disappointing, and this clearly limits artistic research on music/image interactions. Therefore, what is needed for the development of any serious research on these interactions is, at the very minimum, an open programming framework for sound and image synthesis and transformation that finally includes sophisticated time representation. Max/MSP/Jitter does not provide this, at least not yet, precisely because it lacks rich time representations. So, to move to the next step in this domain of research, a new integrated conceptual and practical temporal system may be required, one enabling the notation and manipulation of the specificities of these emerging multidimensional artistic objects, representing both music and image and their interactions on a timeline. The existence of such a system is the condition for moving beyond the interesting but limited current experimentations, and for bringing this art form to the level of sophistication that notation has brought to music composition (Dufourt 1981).
II. The Poetics of the Visual Concert
A. Building a Grammar
In our attempt to bring the form of the visual concert to artistic maturity and autonomy, our starting point is the music. By music we mean here both the score and its live interpretation by musicians in a concert situation. We therefore deliberately attempt to use the experience of a musical form, through its varied layered aspects and dimensions, to elaborate the form of what we conceive as a new global artistic experience. While doing so, we try to keep in mind never to distort the perception of the music; quite the contrary, we aim at helping its sensitive understanding. However, as already mentioned, it is a determining aesthetic principle in our approach to refuse straightforward synchronizations, illustrations, or “visualizations” of music. Synchronization, a term that comes historically from cinema, is often just a trick to capture and maintain the attention of the audience. Likewise, illustration and visualization (especially when used as a pseudo-scientific argument) are aesthetically naive and unsatisfying when considered from the more ambitious and demanding point of view of designing dynamic structural interactions between music and image. Indeed, we aim to serve but also to challenge the music by making its structure proliferate in visuals, and in this way give the artistic result an autonomous status that may eventually culminate in a new artistic form.
It may also be important to emphasize at this point that we are not aiming at any form of synesthesia, which, as it is understood today, is first of all a neurological particularity, even though, as a source of artistic inspiration, many interesting fantasies have been developed from it. Synesthesia’s major drawback for our concerns is its subjective nature, which makes it inappropriate as a common basis of perceptual experience (several cases are described from a neuropsychological point of view in Sacks 2007). Hence, we tend not to believe in what can be considered the utopian project of synesthesia, which we would define, from our point of view, as an extreme case of systematic mapping, both too arbitrary and too limiting for our purposes (as analyzed precisely by Nicholas Cook in his book on musical multimedia, cf. Cook 1998).13 But at the same time, we consider the opposite tendency, consisting in telling a completely independent other “story,” as equally spurious, or at least antiquated, as an approach to the music/image relationship.
A specificity of our approach is to start from the musical score, considered as a scenario, to deduce relations and search for metaphors, prolongations, or comments. Our methodology for setting relations between music and image starts, for each specific piece, by choosing visual materials—from concrete, natural, or synthetic sources—that can be related aesthetically to the musical materials of the given work. We define scales of values evolving between polarities for each parameter in music and image, which can be applied to the various transformations of these materials. We then continue by defining a network of processes (sometimes generative), rules of correlation and variation that evolve dynamically throughout the piece, controlled by an interaction score/scenario. In this way, original forms of narration, embedded in the music, may emerge.
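Such scales of values and correlation rules can be made concrete in code. The following Python sketch (all parameter names are hypothetical, chosen for illustration, and not taken from our actual patches) represents a minimal interaction score: time-indexed cues, each remapping a normalized musical parameter onto a visual one through its own curve.

```python
import numpy as np

def scale(value, lo, hi):
    """Normalize a parameter value onto a 0..1 axis between two polarities."""
    return float(np.clip((value - lo) / (hi - lo), 0.0, 1.0))

# Hypothetical interaction score: each cue activates a different rule
# mapping a (normalized) musical parameter onto a visual parameter.
interaction_score = [
    # (start_time_s, musical_param, visual_param, curve)
    (0.0,  "bow_pressure", "saturation", lambda x: x),       # linear
    (45.0, "bow_pressure", "distortion", lambda x: x ** 2),  # emphasized extremes
    (90.0, "breathiness",  "blue_level", lambda x: 1.0 - x), # inverted
]

def active_rule(t):
    """Return the rule in force at time t (the last cue not after t)."""
    current = interaction_score[0]
    for rule in interaction_score:
        if rule[0] <= t:
            current = rule
    return current

# Example: at t = 60 s, bow pressure 0.7 (already normalized) drives distortion.
_, m_param, v_param, curve = active_rule(60.0)
print(m_param, "->", v_param, "=", round(curve(0.7), 2))
```

Redefining a relationship when it becomes too predictable, as described above, then amounts to appending a new cue with a different curve or target parameter.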
In a search for similar formal processes between music and image, we manage areas of coordination (rather than synchronization), of coincidences and conflicts, convergences and divergences. Redundancy (or pleonasm) between music and image, besides being an obvious aesthetic flaw, should be avoided if only because it may be a source of boredom for the audience. Moreover, direct correspondences have no hermeneutic properties, except when used locally: they do not “explain” anything; they do not provide any interesting “clue” or key for building the formal discourse between music and image aesthetically. The same applies to strict synchronization. It can only be relevant when used to highlight certain structural markers, in important places, and even then, it should not be applied systematically. Whatever our pre-compositional ideas, we eventually define the limits of these kinds of interaction processes according to perceptual criteria: whenever a relationship starts to become too predictable, that is, when it begins to be somehow too clear perceptually, it needs to be redefined. Therefore, in order to be able to move forward again formally, we need at such points to change the relationships between parameters.
This brings forward an aesthetic and epistemological paradigm: creating dynamic structures of interaction between music and image, what Gregory Bateson calls “patterns that connect” (Bateson 1979, 16). To achieve this, we have designed and refined, over fifteen years and through a number of projects, a consistent technical paradigm: prepared visual materials crafted by us (which may or may not be inspired by other artistic sources), of arbitrary complexity (and therefore not constrained by the technical limitations of real-time treatments), are processed and interpolated with live materials. We try to build continuity between the real world of the performers and the abstract world of the video, looking for solutions to integrate musicians and singers into the video, whether by processing their live images or by tracking their gestures.14
This technique is, in part, an inheritance from our experience in computer music, the use of live cameras being equivalent to that of microphones in electronic music. Furthermore, this same concept can be found in Saariaho’s musical electronics, which are mostly concerned with the fusion between instruments and computer sounds, with the aim of elaborating a continuity that starts from the instruments, goes through instrumental sound processing, and culminates in sound synthesis.15 We also believe that this approach serves a didactic purpose: everything proceeds from the score, thus extending the music to another sensory dimension. Sharing elements of structure between music and image helps the understanding both of the music and of this new audiovisual form as a totality.
The compositional strategies encompassing music and image in this artistic form need to take into account the perceptual complexity of music and image combined. Indeed, the total rate of information that can be perceived and assimilated at a given moment is obviously limited, so that psychoacoustical and psychovisual research projects should be defined to measure our capacity to handle these levels of complexity. More generally, we need access to serious cross-studies of perception and cognition, of music and image, separately and together. The importance of such interdisciplinary studies may be well accepted now, at least in principle, but in practice there are very few examples of such projects, and scientific results are rare, since the subject is technically very complex, requiring multiple forms of knowledge and competence to be interwoven (see, for instance, Godøy 2001).
Meanwhile, it is quite clear from our experiences and observations that people with either a musically or visually predominant background do not handle sensory information in the same way, in particular near and beyond overload. Therefore, in the conception of such artistic forms, besides building multiple layers of complexity in the interactions between music and image for purposes of aesthetic and artistic richness, we need to provide alternative “entrance points” for people who have, for biological, cultural, or incidental reasons, different approaches/relationships to music and image, and their interactions.
B. Common Concepts for Music and Image
The main musical parameters of pitch, duration, and amplitude, and the basic corresponding functions that can be applied to them (e.g., transposition, delay, and intensity rescaling), can be given various, more or less straightforward, but nevertheless arbitrary visual equivalents (e.g., spatial translation, video delay, brightness/fade). But for the aesthetic reasons discussed earlier, we do not want to rely on simplistic mappings and equivalence strategies between musical and visual parameters or their technical formalizations in specific computer tools. Rather, we search for equivalents between structural parameters related to underlying formal processes. Therefore, in composing the coordination between music and image, we proceed by creating structural developments that, while being specific to either music or image, may be formally linked together.
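To make concrete what such “arbitrary visual equivalents” look like in practice, here is a deliberately simplistic Python sketch of the one-to-one mappings named above (the function names and scaling constants are our own illustrative assumptions); it is precisely this kind of direct mapping that our approach tries to move beyond.

```python
# Direct, "arbitrary" parameter mappings: pitch -> translation,
# amplitude -> brightness, duration -> video delay. Illustrative only.
def pitch_to_translation(midi_pitch, width_px=1920):
    """Map MIDI pitch 0..127 onto a horizontal screen position."""
    return int(midi_pitch / 127 * width_px)

def amplitude_to_brightness(amp_db, floor_db=-60.0):
    """Map a level in dBFS onto a 0..1 brightness factor."""
    return max(0.0, min(1.0, 1.0 - amp_db / floor_db))

def onset_to_video_delay(ioi_s, scale=0.5):
    """Map an inter-onset interval (seconds) onto a video delay (seconds)."""
    return ioi_s * scale

print(pitch_to_translation(60))                   # middle C, near mid-screen
print(round(amplitude_to_brightness(-12.0), 2))   # a moderately loud level
```

Each mapping is perceptually legible almost immediately, which is exactly why, used on its own, it quickly becomes redundant in the sense discussed earlier.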
This audiovisual approach is complemented by the insertion of the detailed musical gestures of instrumentalists and the facial expressions of singers into the palette of visual materials, thanks to cameras in the live concert situation. Thus, the musicians are completely involved in the process, while genuine musical gestures and body attitudes build bridges between materials, bringing instrumental causalities, organically connected with the music, into confrontation with other visual materials that may sometimes be very abstract (see Figure 5.1).
The intimate relations with the score and with the performers demanded by such an approach are the reason, along with practical considerations (mostly related to the economy of music and the stage arts today), that we tend to prefer working on chamber music projects rather than on larger forms, such as music for ensembles or orchestras, or operas. Although it is not immune to the limiting normalization of performance conditions across venues (e.g., the size, shape, and position of the projection screen made available), chamber music, from a musical point of view, usually allows more rehearsals, more flexibility in each rehearsal, and more intimate relations with the performers, therefore making it possible to involve them more deeply in the process. Indeed, ideally in our vision, everything, whether musical or visual, should proceed from, and return toward, the performer.
In developing such ideas with Saariaho’s music, it is interesting to take into consideration her conception of electronics, which is often the continuation of instrumental composing by other means, as mentioned earlier. Based on this observation, we often consider the visuals as an extension of the electronics, thereby linking all these elements or levels, from instrumental sounds to image. Another obvious emerging factor in her compositional writing, of which she has also developed important theoretical conceptions, is timbre.16 Involving several interdependent lower-level parameters, it is a more complex parameter, whose protean nature can be the source of many metaphors, and it can be extremely fertile in our endeavor.
In search of interactions between music and image, it seems rather natural to develop parallels between timbre and color. But this is particularly interesting in the case of Saariaho, due to her personal conception of timbre. Already in her early works, she conceived an axis to classify sounds from pure to noisy, in order to build musical scales for various parameters of timbre. She used these scales to compose timbral interpolations in her instrumental as well as electronic or mixed pieces. In her instrumental writing, for example, she has been using two modes of playing to build this axis. The first one is a gradual change in the place of the bow on string instruments—from sul tasto to sul ponticello—while controlling its pressure (for example, in her cello piece Petals from 1988). The second mode is controlling the sound production in wind instruments, particularly flute, changing from normal, “pure,” “pitched,” to “noisy,” by progressively adding “breath” (for example, in her flute piece Laconisme de l’aile, 1982).17
We transposed this idea of a timbral axis into image control. For instance, we assigned bow pressure or breath to control the interpolation from one specific color to another, for a given reference image, typically a musician playing, interpolated with an abstract set. For example, in the cello piece Petals, we follow the bow position and pressure evolutions by making the image of the live cellist playing meld with the image of an imaginary garden, moving from “normal” (that is, used as such, without transformation) to “colored.” The same source image therefore oscillates between “pure” and “altered” with the level of a specific color changing from one part of the piece to the other. Or, for the ensemble piece Lichtbogen (1986) (literally, “arches of light,” in reference to the phenomenon of the Northern Lights), we follow both bow pressure and breathy sound evolutions, by making a synthetic image of an abstract visual structure, inspired by the aurora borealis, evolve from “normal” to “blued” (by exaggerating the level of blue compared to other colors) and/or distorted (by alteration of that structure). This type of correspondence is practically achieved by changing visual parameters according to the relevant variations of the musical source, either with predefined variations of values specified at given cues in the score, and/or variations controlled by real-time audio analysis of the given instrument(s), the choice of strategy depending on the context in the piece.
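As a sketch of how such a timbral axis can drive the image (Python/NumPy; the tiny stand-in frames and the fixed control value are illustrative assumptions, since the real system takes its cues from the score or from real-time audio analysis):

```python
import numpy as np

def interpolate_images(img_a, img_b, t):
    """Linear blend between two images for t in 0..1."""
    t = float(np.clip(t, 0.0, 1.0))
    return ((1.0 - t) * img_a + t * img_b).astype(img_a.dtype)

def tint(img, rgb_gain):
    """'Color' an image by scaling its channels, e.g. exaggerating blue."""
    out = img.astype(np.float64) * np.asarray(rgb_gain)
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical control value from audio analysis: 0.0 = "pure"
# (sul tasto / pitched), 1.0 = "noisy" (sul ponticello / breathy).
noisiness = 0.75

live_cellist = np.full((4, 4, 3), 128, dtype=np.uint8)  # stand-in frames
garden = np.full((4, 4, 3), 200, dtype=np.uint8)

# The timbral axis drives both the blend with the abstract set
# and the strength of the "blued" tint applied to it.
frame = interpolate_images(live_cellist, tint(garden, (0.8, 0.8, 1.2)), noisiness)
print(frame[0, 0])  # one blended pixel
```

At `noisiness = 0.0` the live image passes through untransformed; as the playing grows noisier, the frame melds with the tinted abstract set, mirroring the pure-to-noisy axis in the sound.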
C. Filtering Sound and Image
In our long exposure to and study of Saariaho’s music, another musical concept that proved key for the visuals is transparence: the masking/filtering of one sound by another, serving as a metaphor for the masking/filtering of one image by another. In her electronic parts, Saariaho has often used a technique called “models of resonance” (Potard, Baisnée, and Barrière 1991; Barrière, Potard, and Baisnée 1985). In this sound synthesis/processing technique, a spectral model of a sound (obtained with a (p. 94) specific analysis technique) is used to filter another sound. By analogy with the excitation/resonance paradigm that has proved so important in musical acoustics, one sound is used to “excite” another sound, which resonates. For instance, in Noa Noa (1992), the speaking or singing voice of the flutist may excite the flute and at the same time be filtered by the flute’s resonant body.18 With this technique, any sound that can be modeled can filter and/or be filtered by another sound. For instance, in the piece Près (1992) for cello and electronics, as well as in the related concerto for cello and ensemble Amers (1992), sounds of sea waves are filtered by models of the resonant body of the cello; and in the piece Lonh (1996), for soprano and electronics, birdsongs and voices are filtered through models of percussion.
In computer music, a related generic approach is “cross-synthesis,” in which the spectrum of one sound is used to filter another, one sound being described as the “source,” the other as the “filter.” All these techniques are related to the idea of timbral interpolations and mutations (Barrière 1991a), and can provide rich models for developing equivalent image interpolations. In the visual domain, there are many different techniques for filtering one image by another. A simple one is to interpolate one image with another by mixing (i.e., blending them more or less together). A more sophisticated technique is “masking,” in which the levels of black and white of one image are used to gradually “let through”—filter, or more poetically reveal—another image.
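The masking idea can be sketched by using the per-pixel luminance of one image as a blending weight for another: bright areas of the mask reveal the second image, dark areas keep the first. This is a minimal sketch; the Rec. 601 luminance weights and the linear blend are generic assumptions, not the specific technique used in the productions described here.

```python
import numpy as np

def luminance_mask(front, back):
    """Use the black/white levels of `front` to gradually reveal `back`.

    front, back : float RGB arrays (H, W, 3) with values in [0, 1].
    Bright areas of `front` let `back` through; dark areas keep `front`.
    """
    # Per-pixel luminance of the masking image (Rec. 601 weights)
    luma = front @ np.array([0.299, 0.587, 0.114])
    alpha = luma[..., None]  # shape (H, W, 1), broadcast over channels
    return (1.0 - alpha) * front + alpha * back
```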
Yet another related series of techniques is “displacement”: the relative luminosity of each pixel in one image can be used to displace (to move spatially within the frame of the image) the pixels of another image. All sorts of processes of “perturbation” of one image by another can be imagined and implemented, in order to realize image interpolations producing new concepts and forms of reflections (Miller 1998) and anamorphoses (Baltrušaitis 1976). In practice, one image then modifies another. Depending on the precise parameter values, this can range from making an image slightly agitated, disrupted by another one, to embedding one image in another, even hiding an image in another, as in the piece Lonh (see Figure 5.2).
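A displacement of this kind can be sketched, in a deliberately simplified form, as a horizontal shift of each pixel’s sampling position by an offset derived from the other image’s luminosity. The signed mapping and the `strength` parameter are illustrative assumptions; real implementations typically displace in two dimensions with sub-pixel interpolation.

```python
import numpy as np

def displace(image, driver, strength=8):
    """Displace pixels of `image` according to the luminosity of `driver`.

    image, driver : float grayscale arrays (H, W) with values in [0, 1]
    strength      : maximum displacement in pixels (illustrative default)
    Mid-gray regions of `driver` leave pixels in place; darker or brighter
    regions push the sampling position left or right. Small values merely
    agitate the image; large ones can embed or hide one image in the other.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Map luminosity [0, 1] to a signed pixel offset [-strength, +strength]
    offset = ((driver - 0.5) * 2 * strength).astype(int)
    xs2 = np.clip(xs + offset, 0, w - 1)  # stay inside the frame
    return image[ys, xs2]
```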
For Noa Noa, a solo piece for flute and electronics based on painter Paul Gauguin’s Tahiti diary of the same name, the image of the performing flutist, captured by cameras, is masked by images built from Gauguin’s engravings: the flutist’s image is seen through the engraving as through a sort of veil (see Figure 5.3). Each movement of the flutist produces something like a slight movement of the veil, bringing it to life. Technically, the flutist’s image is filtered by the image of the etching, in the same way that the fragments of Gauguin’s diary spoken by the flutist are filtered by models of resonance in the electronic part of the music. Visually, the flutist’s playing “reveals” the various elements of the etchings.
We have been using such processes of image interpolation, inspired by our work on timbral interpolations (Barrière 1985, 1991), in most of our projects, implementing them with varied techniques. Singers and instrumentalists filter and/or are filtered by all sorts of visual materials, often coming from nature, as happens in the computer music part. These visual sources can be static, as in the previous examples, or dynamic, that is, achieved with cinematic materials, (p. 95) (p. 96) sometimes shot on locations related to the context of the piece. For example, for Six Japanese Gardens (1994), a piece for percussion and electronics inspired by a visit to specific gardens in Kyoto (which give their titles to the different parts), we filmed these precise gardens. These landscape details, transformed in various ways, are then used to filter the image of the percussionist, captured during the performance by a mobile cameraman moving around the percussion sets. Similarly, for the string quartet Nymphea (1987) and the related cello piece Petals, inspired by Monet’s paintings, we have used all sorts of textures and colored materials filmed in his garden at Giverny, with which the musicians are interpolated during the performance, their playing “revealing” the various aspects of these materials.
D. Different Levels of Music Control over Image
We are using three main superimposed levels of control of music over image. The first one, as mentioned earlier, uses traditional musical analysis of the score to find anchor points in the explicit musical parameters, as well as in the underlying structural elements. The second one uses audio analysis of a particular performance, therefore taking into account a specific interpretation of the music. The third one is the “manual,” direct control of parameters through various gestural devices (such as MIDI faders or specially designed graphical interfaces on an iPad), allowing a final, arbitrary level of manual control. Occasionally, we have also used video analysis of the performers’ gestures as an additional source of control, as we do more frequently in interactive installations to track visitors’ movements.
Whereas the score usually provides structural macro events, the specific interpretation makes it possible to access microscopic events through continuous controls (for instance, bow gestures or the quantity of noise/breath, detected through variations in spectral content). However, audio analysis results are often “noisy” and “jittery,” and require post-processing to smooth the data and make it usable. This is especially true for Saariaho’s music, because of her use of many complex instrumental modes of playing for timbre control. In passages that are too noisy or inharmonic, as well as in polyphonic situations, audio analysis can often be unreliable. In practice, this can result in many recognition mistakes, which make the reproduction of specific sequences of parameters very tedious.
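The smoothing step mentioned above is often realized with something as simple as an exponential moving average (a one-pole low-pass filter) applied to the stream of analysis values. The sketch below is a generic illustration of the principle, not the authors’ actual post-processing; the `alpha` value is an arbitrary assumption (smaller means smoother output but more lag).

```python
def smooth(stream, alpha=0.1):
    """Exponential moving average over a stream of analysis values.

    A one-pole low-pass filter of this kind is a common way to tame
    'jittery' real-time audio-analysis data before it drives visual
    parameters. Returns the smoothed value for each input value.
    """
    out, y = [], None
    for x in stream:
        # First value initializes the filter; later values are blended in
        y = x if y is None else alpha * x + (1 - alpha) * y
        out.append(y)
    return out
```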
The main audio analysis principles we have found useful are the following:
• Pitch following: more specifically, techniques that discriminate the perceived pitches in complex sounds, which Saariaho herself used for the analysis of instrumental sounds for pre-compositional purposes and for building synthesized sounds.19
• Envelope following: that is, amplitude tracking for nuances (which may have considerable variability of scale according to context, typically from one interpretation to another). (p. 97)
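Envelope following of the kind listed above can be illustrated by a windowed RMS (root-mean-square) measurement. This is a minimal sketch: real-time systems typically filter the rectified signal sample by sample rather than in blocks, and the window length here is an arbitrary assumption.

```python
import numpy as np

def envelope(signal, sr=44100, window_ms=20):
    """Simple RMS envelope follower (amplitude tracking).

    signal    : 1-D float array of audio samples
    sr        : sample rate in Hz
    window_ms : analysis window length in milliseconds (illustrative)
    Returns one RMS amplitude value per non-overlapping window.
    """
    n = max(1, int(sr * window_ms / 1000))
    # Trim to a whole number of windows, then compute RMS per window
    trimmed = signal[: len(signal) // n * n].reshape(-1, n)
    return np.sqrt((trimmed ** 2).mean(axis=1))
```

The resulting amplitude curve would then be rescaled to the nuance range of the given interpretation before driving a visual parameter.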
We have experimented with many other audio analysis techniques, with more or less success: periodic/non-periodic and harmonic/inharmonic separation, event detection, and more generally all sorts of parametric “alignments” based on higher-level descriptors. Anticipatory music score following (Cont 2008), which integrates many of these, will certainly become more and more useful, provided it succeeds in completely removing from the performer the burden of feeling imprisoned by a given template of interpretation.
Every technique must be validated in a specific musical context and submitted to intensive tests to prove robust enough to withstand a real concert situation, with all its possible accidents and interferences. For instance, the use of different microphones and the acoustics of a specific hall have a considerable impact on the results of audio analysis. In most cases, it is absolutely necessary to validate the expected results of audio analysis in the precise context, which concretely means having access to the specific hall and rehearsing in it under conditions as close as possible to the final ones. Even then, the change in a hall’s acoustics between the general rehearsal and the concert, with the audience “absorbing” part of the reverberation, may sometimes be enough to make the analysis chaotic and cause the attempted interactions to fail.
Thus, the information coming from automatic analysis always has to be moderated by controls at higher structural levels, and/or superseded and modified in real time with gestural control devices. More constructively, these gestural devices may constitute a necessary step toward an instrumental control of the relationship between music and image—that is, controlling an image in ways inspired by the model of instrumental gestures in music, which carry crucial musical knowledge accumulated over centuries across many cultures. This approach can be combined with the use of transducers placed on instruments and/or instrumentalists, to follow the gestural production of music sometimes more accurately than audio and/or video analysis. In any case, we consider it important to continue investigating audio analysis techniques in order to insert them into the process of interaction between music and image, since they will eventually provide access to the structural characteristics of interpretation, which in our vision should be an essential part of this work.
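The layering of automatic analysis and manual override described above can be reduced, in its simplest form, to a weighted blend between the two sources. This is a hypothetical sketch of the principle, not the actual control architecture used in these productions.

```python
def control_value(auto, manual, override):
    """Blend an automatically analyzed parameter with a manual one.

    auto     : value coming from real-time audio/video analysis
    manual   : value set by a gestural device (fader, tablet interface)
    override : weight in [0, 1]; 0 = fully automatic, 1 = fully manual
    """
    return (1 - override) * auto + override * manual
```

In practice the `override` weight itself can be assigned to a fader, letting the operator gradually take over from the analysis when it becomes unreliable.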
The development and deepening of the artistic form of the “visual concert” raises issues far greater than its own intents and purposes: How can we consistently expand the realm of interaction between music and image beyond the traditional forms of performance? Can such an expansion be of any relevance in the audience’s approach to (p. 98) music or, in the case we are concerned with, contemporary music—or is this extra layer doomed to distract the listener, rather than enhance the experience? What are the possibilities provided by advanced technologies and fields of knowledge, in terms of visual design and live information processing, for the development of other forms of performance (such as film, theatre, opera, visual arts, or architecture)?
These questions remain unanswered, and a definitive goal cannot be set. Nevertheless, we hope we have noted some of the most significant challenges in this specific field of multimedia art, alongside concepts and tools that can allow a more sensitive approach to music through image. We believe it is only through the thoughtful mastery of technology and understanding of the complex parameters of live performance that renewed artistic possibilities can be explored, and new aesthetic paradigms created. Our own experience has proved that such forays, although seemingly disturbing (or even upsetting) when first experienced by some parts of the audience, like any change of convention in art, can be of great interest in opening new horizons in the experience of live music, whether artistically, intellectually, or pedagogically. We therefore hope that such novel forms will soon be developed far beyond the grammar sketched in this chapter, to give birth to creations that are able to build bridges between arts and audiences.
Appendix A Stage Works Developed In Visual Concert Form
1. L’Amour de loin (2000; Figure 5.4), Kaija Saariaho’s first opera, on a libretto by French-Lebanese author Amin Maalouf, was premiered at the Salzburg Festival in 2000, in a stage production by Peter Sellars, with the Südwestfunk Orchestra conducted by Kent Nagano. Since its premiere, it has been presented in several different productions all over the world. In 2006, we were asked to create a visual part for a concert version to be conducted by Nagano. This was particularly challenging for us, since we had followed the genesis of the opera very closely, including the composition process and the realization of the electronics, which we supervised. We therefore knew the libretto, the music, and Sellars’s staging very well, and had to forget most of what we knew so that we could start from scratch and produce something else.
As with other dramatic works for which we have designed visual dimensions, we started by examining the points of view of the different characters through the visual part, in order to render their subjective vision. The singers’ faces, captured live by cameras, are processed and inserted into abstract mental landscapes acting as virtual sets, while nevertheless evoking the different locations: Blaye’s castle near Bordeaux, where the poet Jaufré Rudel lives and writes his poems about the imagined beauty of the princess of Tripoli, whom he has never seen; and Tripoli, where the Princess Clémence lives and receives the unexpected and, at first, unwanted songs of Jaufré. Both characters are seen through a chessboard (a symbol of the Amour courtois of the Middle Ages), which progressively transforms itself into a spiral absorbing them. The two separated worlds, the West and the East, represented by contrasting forms, shapes, and (p. 99) colors, progressively merge when Jaufré crosses the sea to finally achieve the encounter, bound to failure, with the subject of his impossible love.
2. La Passion de Simone (2006; Figure 5.5) is an oratorio for soprano, orchestra, choir, and electronics about the life and thoughts of French writer and philosopher Simone Weil (1909–1943), premiered in Vienna in a staging by Peter Sellars. In a modern rendering of the traditional Passion form, the piece is centered on a young woman’s evocation of important moments of Weil’s life and ideas, at times admiringly identifying with her, at times distancing herself from Weil’s deeds.
Our version of the piece was premiered at the Helsinki Music House in October 2012, by the Finnish Radio Orchestra conducted by Esa-Pekka Salonen, with Dawn Upshaw as soloist. Unlike in almost all of our works based on Saariaho’s music, this time we did not use live cameras on stage to capture the soprano’s facial expressions. Rather, we asked the Italian, New York–based choreographer Luca Veggetti to conceive with us some dance fragments to be filmed in the studio, as well as on location in the ruins of an abandoned Renault factory on Ile Seguin (an island in a suburb of Paris), where Weil herself worked anonymously during the 1930s to study the industrial working world. These dance fragments, representing in another form the stations of the Passion narrated by the soprano, are then processed and interpolated with purely abstract visual materials aimed at representing the evolutions of Weil’s states of mind. Beyond the practical reason for this deviation from our usual way of working (the lack of rehearsal time in the final concert hall), we thought that representing the singer on stage through the dancer in the video offered a good analogy or metaphor for the duality of the character telling the story, at times fusing with her heroine, at times taking her distance.
3. Maa (1991; Figure 5.6), ballet music consisting of seven pieces for seven musicians and electronics, was re-choreographed (twenty years after the original version by Carolyn Carlson) by Luca Veggetti, who asked us to direct a video part. We worked with him and his dancers in the Martha Graham Company studio in New York to film diverse materials. Edited and (p. 100) (p. 101) transformed, these pre-recorded materials are then processed and interpolated with live filming, and screened on a bamboo sculpture by Japanese artist Moe Yoshida, acting as a screen suspended four meters above the stage.
Two cameramen, located on each side of the stage, as well as dancers manipulating cameras on the stage, captured precise details of the choreography. The visual part takes advantage of the contrasting instrumentation of each part and of referential source materials dealing with natural elements like water, earth, forest, wind, and air. More than in other projects, the visual dimension works here as a permanently transforming virtual set, giving a strong character to the various situations of the piece.
Appendix B Installations Related to Visual Concerts
1. Miroir des Songes (2004; Figure 5.7) (in English, Mirror of Dreams) is an interactive installation, premiered at the Institut Culturel Finlandais (Paris) and Kiasma (the Contemporary Art Museum of Helsinki). It reconsiders the concept of the mirror (Baltrušaitis 1970) in the digital age by “staging,” among other sound and music materials, some of the pieces Saariaho composed on a recurrent theme in her work: the dream. She composed two pieces whose titles clearly announce that subject—From the Grammar of Dreams (1988) and Grammaire des rêves (1988)—as well as dream sequences in her operas L’Amour de loin (2000) and Adriana Mater (2006); her violin piece Nocturne and the mixed choir piece Nuits, Adieux are both based on the motif of night. In the installation, we integrated all these pieces, except for the two opera scenes, and also added Mirrors, a piece originally created for the CD-ROM Prisma. This last piece consists of a series of flute and cello fragments that can be assembled according to mirror compositional rules applied to various parameters (including timbre). These principles were transposed into the installation through its interactive modalities, applied both to the musical fragments and to the image of the visitors, captured by a camera and transformed according to these rules.
The interactivity was achieved through a 3D data glove, allowing the visitor to explore various oneiric universes by browsing dreams recorded in different languages (up to 19 languages had been recorded by the end of the first exhibition) by people from all over the world. These dreams could be sent through the Internet, by MMS on mobile phones (thanks to the sponsorship of Nokia and Orange), or recorded in a so-called Dream Station conceived for this purpose outside the installation, and were automatically merged into the installation’s dream database. Interactivity allows the linking of music and image in completely different ways than in the concert situation: the arbitrary interactive scenario allows a sophisticated analysis of the music, which can be prepared in advance, while the analysis of the visitor’s gestures allows spontaneous, real-time control. The final result, then, is the complex, always different merging of the information coming from these different sources.
2. Nox Borealis (2008) is a musical and visual installation built around the piece Lichtbogen (1986) for chamber ensemble and electronics. Lichtbogen was composed after a personal experience of the aurora borealis in the Arctic Circle during the mid-1980s. The installation was premiered more than twenty years later in Paris, and was then presented in many places around the world. It is presented on a screen as large as possible, suspended horizontally and (p. 102) high on the ceiling of a room, with eight loudspeakers distributed all around the audience, which lies on mattresses, to emulate the situation of people looking at the Northern Lights in nature. We took a multi-track recording of Lichtbogen, remixed it, and dynamically spatialized each instrument, as well as the electronics, on eight channels. Prior to the actual realization, we made a variety of analyses that served to determine the relevant information, mostly trying to track timbral information and linking it to color, light, and structural evolutions of visual forms inspired by auroras.
We analyzed the musical score of Lichtbogen and divided it into eight parts. In parallel, we studied and analyzed many images, both photos and films, of the aurora borealis, and designed eight structures/forms that we realized in image synthesis. We analyzed the spectra of the musical recordings for each instrument separately, and then edited the results in constant comparison with the score to fit given formal criteria. The forms of the visual structures, their colors, opacities or transparencies, movements, and vibrations are linked to—and therefore partly controlled by—the spectral analysis of chosen instruments and/or parameters taken from the musical score. These relations evolve in time and change from one of the eight parts to the next.
Nox Borealis can be considered as eight sketches of a workshop on interactions between musical and visual structures, through an exploration of musical timbre, light, and colors. This (p. 103) is mostly a speculative workbench about these interactions, and it attempts to define the conditions for a precise exploration of these relations, in order to prepare further steps of investigation that should ideally constitute a true research project.
Baltrušaitis, Jurgis. 1970. Le miroir, essai sur une légende scientifique, révélations, science fiction. Paris: Le Seuil.
Baltrušaitis, Jurgis. 1976. Anamorphic Art. New York: Abrams.
Barrière, Aleksi. 2013. “Théâtre musical, théâtre de la musique: la rencontre de Kaija Saariaho et Peter Sellars.” In Kaija Saariaho: l’ombre du songe (Tempus perfectum no. 11), edited by Clément Mao-Takacs, 25–31. Lyon: Symétrie.
Barrière, Jean-Baptiste, ed. 1991. Le Timbre, métaphore pour la composition. Paris: Christian Bourgois.
Barrière, Jean-Baptiste. 1991a. “Problématique de la mutation.” InHarmoniques 8: 144–159.
Barrière, Jean-Baptiste. 2008. “L’opéra avec création visuelle, une nouvelle forme artistique” and “Concevoir une partie visuelle pour Saint François d’Assise d’Olivier Messiaen.” Arcadi, la revue 8: 12–13, 14–17.
Barrière, Jean-Baptiste, Xavier Chabot, and Kaija Saariaho. 1993. “On the Realization of NoaNoa and Près, Two Pieces for Solo Instruments and Ircam Signal Processing Workstation.” In Proceedings of the 1993 International Computer Music Conference, ICMA, 210–213. Ann Arbor: Michigan Publishing.
Barrière, Jean-Baptiste, Yves Potard, and Pierre-François Baisnée. 1985. “Models of Continuity Between Synthesis and Processing for the Elaboration and Control of Timbre Structures.” In Proceedings of the 1985 International Computer Music Conference, ICMA, 193–198. Ann Arbor: Michigan Publishing.
Bateson, Gregory. 1979. Mind and Nature: A Necessary Unity. New York: Hampton Press.
Bosseur, Jean-Yves. 1998. Musique et arts plastiques: Interactions au xxe siècle. Paris: Minerve.
Chion, Michel. 1994. Audio-Vision: Sound on Screen. New York: Columbia University Press.
(p. 105) Cont, Arshia. 2008. “ANTESCOFO: Anticipatory Synchronization and Control of Interactive Parameters in Computer Music.” In Proceedings of the 2008 International Computer Music Conference, ICMA, 33–40. Ann Arbor: Michigan Publishing.
Cook, Nicholas. 1998. Analyzing Musical Multimedia. Oxford: Oxford University Press.
Dufourt, Hugues. 1981. “L’artifice d’écriture dans la musique occidentale.” Critique 408: 465–477.
Gage, John. 1999. Color and Culture: Practice and Meaning from Antiquity to Abstraction. Berkeley: University of California Press.
Godøy, Rolf Inge, and Harald Jørgensen, eds. 2001. Musical Imagery. Lisse: Swets and Zeitlinger.
Godøy, Rolf Inge, and Marc Leman, eds. 2010. Musical Gestures: Sound, Movement and Meaning. London: Routledge.
Hoitenga, Camilla. 2011. “The Flute Music of Kaija Saariaho: Some Notes on the Musical Language.” hoitenga.com/site/flute-music-of-kaija-saariaho.
Hoitenga, Camilla. 2013. “La musique pour flûte de KS: quelques notes sur son langage musical.” In Kaija Saariaho: l’ombre du songe (Tempus perfectum no. 11), edited by Clément Mao-Takacs, 15–23. Lyon: Symétrie.
Howell, Tim, ed. 2011. Kaija Saariaho: Visions, Narratives, Dialogues. Farnham, UK: Ashgate.
Joubert, Muriel. 2011. “L’œuvre de Kaija Saariaho et les arts visuels: lumière, reflets, et transparence.” In Séminaire “Musique et arts plastiques,” June 18, 2011, OMF, Michèle Barbe, http://joubert.muriel.pagesperso-orange.fr/Joubert.Muriel/Publications.html.
Malt, Mikhail, and Emmanuel Jourdan. 2011. “Real-Time Uses of Low Level Sound Descriptors as Event Detection Functions.” Special Issue: New Paradigms for Computer Music, Journal of New Music Research 40, no. 3: 217–233.
Mao-Takacs, Clément, ed. 2013. Kaija Saariaho: l’ombre du songe (Tempus perfectum no. 11). Lyon: Symétrie.
McAdams, Stephen, and Kaija Saariaho. 1985. “Qualities and Functions of Musical Timbre.” In Proceedings of the 1985 International Computer Music Conference, ICMA, 367–374. Ann Arbor: Michigan Publishing.
McAdams, Stephen, and Bruno L. Giordano. 2009. “The Perception of Musical Timbre.” In Oxford Handbook of Music Psychology, edited by Susan Hallam, Ian Cross, and Michael Thaut, 72–80. New York: Oxford University Press.
Miller, Jonathan. 1998. On Reflection. London: National Gallery Publications Limited.
Moisala, Pirkko. 2009. Kaija Saariaho. Urbana: University of Illinois Press.
Nieminen, Risto. 1994. Kaija Saariaho. Les Cahiers de l’Ircam: Compositeurs d’aujourd’hui, no. 6. Paris: Ircam Publications.
Peeters, Geoffroy, Bruno L. Giordano, Patrick Susini, Nicolas Misdariis, and Stephen McAdams. 2011. “The Timbre Toolbox: Extracting Audio Descriptors from Musical Signals.” Journal of the Acoustical Society of America 130: 2902–2916.
Potard, Yves, Pierre-François Baisnée, and Jean-Baptiste Barrière. 1991. “Méthodologie de synthèse du timbre: l’exemple des modèles de résonance.” In Le Timbre, métaphore pour la composition, edited by Jean-Baptiste Barrière, 135–163. Paris: Christian Bourgois.
Saariaho, Kaija. 1987. “Timbre and Harmony: Interpolations of Timbral Structures.” Contemporary Music Review 2, no. 1: 93–133.
Saariaho, Kaija. 2013. Le Passage des frontières: écrits sur la musique. Paris: Musica Falsa.
Sacks, Oliver. 2007. Musicophilia: Tales of Music and the Brain. New York: Knopf.
Shaw-Miller, Simon. 2002. Visible Deeds of Music: Art and Music from Wagner to Cage. New Haven, CT: Yale University Press.
(p. 106) Stevens, Meghan. 2009. Music and Image in Concert: Using Images in the Instrumental Music Concert. Sydney: Music and Media.
Terhardt, Ernst, Gerhard Stoll, and Manfred Seewann. 1982. “Algorithm for Extraction of Pitch and Pitch Salience from Complex Tonal Signals.” Journal of the Acoustical Society of America 71, no. 3: 679–688.
Todoroff, Todor, Éric Daubresse, and Joshua Fineberg. 1995. “IANA: A Real-Time Environment for Analysis and Extraction of Frequency Components of Complex Orchestral Sounds and Its Application within a Musical Realization.” In Proceedings of the 1995 International Computer Music Conference, ICMA, 292–294. Ann Arbor: Michigan Publishing.
Vergo, Peter. 2012. The Music of Painting: Music, Modernism and the Visual Arts from the Romantics to John Cage. London: Phaidon.
(1.) IA was founded by Jean-Baptiste Barrière in 1997; its first project was the CD-ROM Prisma, Discovering Contemporary Music Through the Works of Kaija Saariaho (1999; 2nd edition, 2001, Naïve), which won the Grand Prix Multimédia de l’Académie Charles Cros in 2000. The team involved in each project depends on its nature. Over the years, the main contributors have been Jean-Baptiste Barrière for the general conception, direction, and realization of electronics and images, Pierre-Jean Bouyer and François Galard for the realization of the images, Isabelle Barrière for the live cameras and stage managing, and Aleksi Barrière for dramaturgy and stage direction.
(3.) For general information about visual music and for documentation, see the Center for Visual Music: http://www.centerforvisualmusic.org/; the Iota Center: http://www.iotacenter.org/; and Musima: http://homepage.eircom.net/~musima/visualmusic/visualmusic.htm. See also Rhythmic Light: http://rhythmiclight.com, on the idea of playing image as one plays music (all websites mentioned in this chapter were accessed 12/25/2015).
(4.) Appendices A and B provide information about some of our visual concert programs based on stage works and some of our audiovisual installations.
(6.) For further reading on the interactions between music and the visual arts in the twentieth and twenty-first centuries, see Bosseur (1998), Shaw-Miller (2002), and Vergo (2012). See also Cook (1998) for his concept of “music film,” which seeks a balance between music and image quite similar to what we are looking for with the concept of the “visual concert,” and Chion’s concept of “audiovision” (1994), a theory of sound/picture relationships, although primarily concerned with cinema.
(7.) For a general presentation of Saariaho’s music, beyond the scope of this chapter, see Nieminen (1994), Moisala (2009), Howell (2011), Mao-Takacs (2013), and the CD-ROM Prisma (note 1); for a discussion of her relation to visual arts, see for example Joubert (2011).
(9.) In a symposium about her sources of inspiration (Cité de la musique, Paris, April 20, 2013), Saariaho typically chose to comment on extracts from Bergman’s Cries and Whispers (1972) and Tarkovsky’s Stalker (1979); her string quartet Nymphea (1987) utilizes texts by Tarkovsky’s father, Arseni Tarkovsky, and her orchestral piece Laterna magica (2008) is based on Bergman’s memoirs of the same name.
(10.) Saariaho is using Perse’s poems in her flute piece Laconisme de l’aile (1982) and, as an inspiration, in her cello and ensemble piece Amers (1992) and in her flute concerto L’Aile du songe (2001).
(15.) Since 1982 Jean-Baptiste Barrière has supervised the realizations of most of the musical electronics of Saariaho’s pieces. See his multimedia presentation on the use of electronics in Saariaho’s music in the CD-ROM Prisma (note 1).
(16.) For further reading in English, see Saariaho (1987); a slightly different French version can be found in Barrière (1991, 412–453), and a third, synthetic version in Saariaho (2013). See also McAdams and Saariaho (1985), and McAdams and Giordano (2009) for a more up-to-date account of research.
(17.) Cf. the CD-ROM Prisma (note 1) for multimedia presentations of her instrumental techniques by Anssi Karttunen (cello) and Camilla Hoitenga (flute). For further reading in English, see Hoitenga (2011); for further reading in French, see Hoitenga (2013).
(19.) Cf. Saariaho (1987). Most important in this context is the program Iana, developed during the 1980s by Gérard Assayag at Ircam using the algorithms of psychoacoustician Ernst Terhardt, which can now be executed in real time to analyze instruments in concert situations. For further reading, see Terhardt, Stoll, and Seewann (1982); Todoroff, Daubresse, and Fineberg (1995).