Why do you want so many Loudspeakers?
This article is a version of a presentation first made at the 3rd Acousmatic Festival of Cagliari, Sardinia, in June 2006.
Abstract
All sounds are an evocation of voice; all sounds are an evocation of dance; all sounds are about exploring time. Sometimes the voice is explicit, as in Feist, sometimes abstract, as in Wright. The dance can be clear and playful, as in Le Caine and Alvarez, energetic, as in Moore, or dark and hidden, as in Mahtani and Karneef.
The three approaches to time in this concert are ordered time (Le Caine, Feist, Alvarez), directed time (Wright and Moore), and open time (Mahtani and Karneef). Listen closely to how these seven composers, in works composed over the past half-century, play with time, dance and voice.
Program
- Daniel Feist (Canada 1954–2005) — Auxferd Nightburr’d November 2 a.m. (1986, 1’)
  An old bird, a crazy old bird in the middle of the night, singing and singing and singing. He thinks the streetlamp is the sun. Crazy old bird. Disconnected. Hasn’t he heard? Nobody is listening.
- Hugh Le Caine (Canada 1914–1977) — Dripsody (1955, 2’)
  A rhapsody on a drop of water.
- Maurice Wright (USA b. 1949) — Electronic Composition (1973, 6’)
  The analog synthesizer speaks and dances.
- Adrian Moore (UK b. 1969) — Superstrings (1998–99, 12’)
  A fractured piano speaks and dances.
- Philip Karneef (Canada b. 1983) — Pneumothorax pt. I... (2005, 9’)
  In a hospital, time can be stretched and distorted.
- Annie Mahtani — Surfacing (2003, 13’)
  In the subconscious, time can be stretched and distorted.
- Javier Alvarez (Mexico b. 1956) — Mambo à la Braque (1990, 3’)
  Boogie Honey!
- Daniel Feist — Auxferd Nightburr’d November 2 a.m. (1986, 1’)
  Maybe someone now listens?
The first time an audience sees 8 or 16 or 32 loudspeakers at a concert, “Why do you want so many loudspeakers?” is one of the first questions. The answer has more than two parts, and this brief introduction will look at two of them.
We can listen to Furtwängler conducting Beethoven in 1942 and hear almost all of the instruments of the orchestra even though the sound comes through only one loudspeaker. Part of the reason for this is that the perceptual systems of the ear and brain hear the complex sound and separate (segregate) the individual lines or instruments into their own individual ‘streams’ of sound, much like listening to one voice when many people are speaking. In the Beethoven symphony, the number and types of sounds are limited (woodwinds, brass, percussion, strings), and each is quite ‘different’ from the others: trumpets do not sound like timpani.
Electroacoustic composers often create sounds that are more complex than those of acoustic instruments, and the composer may want the listener to be able to listen to parts which are ‘inside’ the sound, like focusing on just one note in a chord. Listening to a stereo playback uses the ears’ capacity to separate sounds based upon the direction from which they come, but with only two speakers in front, the sound field is limited to a screen in front of the listener, like looking through a window.
With a window in front, others on the sides and more behind, the listener is able to hear sounds from all around, and as they originate from different directions, it is easier to separate them. This process of ‘segregation’ allows for a broader context in which to place sounds, as sounds which are quite similar can come from different directions without being heard as ‘one sound’ (fusion, or integration).
Whether complex sounds segregate or integrate was not something composers could easily control before the advent of electroacoustic technology. Two interesting examples of this are found in the piano and the string quartet.
A piano is really 88 instruments joined together in a common box. These 88 instruments are made to integrate by all being heard as coming from the same place. A composer who would like C# to come from the extreme right and D to come from the extreme left will have difficulty without the intervention of electroacoustics, as the sketch below suggests. The use of space and direction is not easy with a piano.
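To show how trivial this intervention becomes once the sound is electronic, here is a minimal sketch in Python (using NumPy and SciPy; the sine tones standing in for piano notes, the amplitudes and the file name are inventions of this illustration, not anything from the concert repertoire):

```python
import numpy as np
from scipy.io import wavfile

SR = 44100                                      # sample rate in Hz
t = np.linspace(0, 2.0, int(SR * 2.0), endpoint=False)

# Two adjacent semitones that a piano fuses into a single location:
csharp = 0.4 * np.sin(2 * np.pi * 277.18 * t)   # C#4
d      = 0.4 * np.sin(2 * np.pi * 293.66 * t)   # D4

# The electroacoustic 'intervention': D hard left, C# hard right.
stereo = np.stack([d, csharp], axis=1)          # columns: left, right
wavfile.write("csharp_right_d_left.wav", SR, stereo.astype(np.float32))
```

On the piano the two notes are heard from one place; on two loudspeakers they are segregated by direction alone.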
Western classical music (1750–1900/50) written for the string quartet (Haydn to Bartók) was largely based upon hearing all the parts “together”. The ensemble sits close together not only to be in physical proximity for emotional impact, but also so that the listener hears ‘one’ complex source.
On occasion before the twentieth century, composers used ensembles in different places (Gabrieli, Mozart etc.), but starting in the twentieth century, more composers began to experiment with the use of direction in their work as another aspect of the music (Ives, Brant, Stockhausen etc.). It was discovered quite quickly that if the instruments were moved away from each other, what was heard were independent streams — the ear was not (easily) able to integrate all of the sound into a single homogeneous mix. One effect is that it is very difficult to hear ‘harmony’ (integration) when the sonic elements are separated in space.
Composers began to use this idea to create musics of much greater complexity by placing instruments and ensembles in different locations around the audience. In doing this, they opened up new musical resources, and younger generations of composers grew up thinking and imagining not only in terms of pitch and tone color, but also in terms of space.
For composers of acoustic works, however, there was one limitation that was difficult to overcome: that of making the sound move through space. A composer can have the violinist walk from the back of the hall to the front, but the maximum speed is determined by how fast the person can walk and play. As we know from surround-sound films, this is not an issue in electroacoustics — galaxy starships and battle-cruisers can pass to the left, to the right and overhead at almost any velocity with no apparent difficulty.
Until very recently, a major limitation for electroacoustic (ea) composers has been the lack of ‘standards’ for playing pieces in various locations. The ‘high acousmatic tradition’ was restricted to working from a stereo source (stereo tape, DAT or CD in most cases), and having sounds ‘appear’ to move by moving faders up and down on a mixer placed in the middle of the audience. For the past 25 years or so, this has been adapted to sound projection systems of between 8 and 30+ loudspeakers, with 16 to 24 speakers being something of the norm in many situations.
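The fader technique has a simple signal-level core: constant-power cross-fading between adjacent speakers. Below is a minimal sketch (Python with NumPy; the function name and the single circular trajectory are choices of this illustration, not a standard diffusion practice) of a mono sound swept once around a ring of loudspeakers:

```python
import numpy as np

def sweep_around_ring(mono, n_speakers):
    """Move a mono signal once around a ring of n_speakers.

    At every instant the sound sits between two adjacent speakers,
    cross-faded with equal-power (cos/sin) gains: the same gesture a
    diffusion performer makes by raising one fader while lowering its
    neighbour, only automated.

    Returns an (n_samples, n_speakers) array, one column per speaker.
    """
    n_samples = len(mono)
    out = np.zeros((n_samples, n_speakers))

    # A virtual position travels 0 .. n_speakers over the whole sound:
    pos = np.linspace(0.0, n_speakers, n_samples, endpoint=False)
    near = np.floor(pos).astype(int) % n_speakers   # speaker behind the sound
    ahead = (near + 1) % n_speakers                 # speaker in front of it
    frac = pos - np.floor(pos)                      # how far between the two

    # The equal-power gain pair keeps loudness steady during the move:
    out[np.arange(n_samples), near] = mono * np.cos(frac * np.pi / 2)
    out[np.arange(n_samples), ahead] = mono * np.sin(frac * np.pi / 2)
    return out
```

A call such as `sweep_around_ring(my_sound, 8)` completes one revolution in exactly the duration of the sound; a shorter input traces the same path faster, which is why velocity is essentially unlimited in this medium.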
Two general set-ups have existed: one of the “surround” variety, with speakers around the audience (front, side and back), and the other the “orchestra of loudspeakers”, with a large number of speakers placed on a stage. Each encourages certain kinds of work, and each carries restrictions.
With the widespread presence of multi-channel sound (5.1 etc.) in movies, ‘surround’ has become dominant, and here there are a number of schools of thought. In commercial 5.1, the ‘back’ speakers are used to provide acoustic ‘fill’, or ambience, while most ‘concert’ ea composers wish the side and back speakers to carry the complete signal, not just reverberation and atmosphere.
Taking pieces around the world has recently been simplified by the widespread availability of faster computers, better software and larger, high-quality audio interfaces. There is also interest in a flexible approach to ‘surrounding’ sound, one in which the audience may be seated facing many different directions rather than only facing ‘forwards’. This becomes possible as concerts take place in open performance spaces (churches, large open halls, out of doors etc.).
But what about the sounds?
Acousmatics has been described as ‘musique concrète’ (where the sounds all come from a microphone) grown up. In many ways this is true, as the ‘high acousmatic art tradition’ takes sounds picked up by a microphone as raw material for processing and transformation. ‘Acousmatic’ is not “electronic music”, in that “electronic music” will mostly have synthesis (oscillators, noise, filters etc.) or algorithmic composition as its sound source. But the lines of this distinction are being blurred more and more as software is able to use concrete sounds (samples) as the source for synthesis. Yet a distinction of some sort may remain for some composers.
One major distinction has related to the role of pitch. In acousmatic works, while there may be strong references to the pitches of sounds, pitch is seldom the element which determines the structure or carries the piece. In Beethoven, by contrast, it is the relationships of the pitches that give the piece its identity — it is recognized as the same piece whether played by an orchestra, a chamber ensemble, on a piano, or even sung.
It has been more than half a century since this ‘pure’ division began to break down, and composers today more and more recognize the areas of commonality, most frequently in dealing with aspects of sonic gesture. In much traditional acousmatic work, the gesture defines its own duration: the recording of a passing bus, for example, has a duration defined by the action itself.
There are composers who use their recorded sounds as ‘raw material’, editing and shaping them into the patterns and shapes that they want rather than keeping their natural acoustical shapes. This method of work is very much like that of the traditional electronic music studio, and it shows a strong link between the two, in some ways two faces of the same head.
The acousmatic tradition has worked with the sound ‘as the sound’, or as a representation of an existing concrete (or real) sound. Electronically generated or transformed sounds now exist in a world somewhere between ‘pitch-based’ electronic music and acousmatics. This new(er) sound world makes full use of multi-speaker sound projection systems, shares the signal-processing capabilities of the acousmatic tradition, and is an area of integration between these two worlds.
The concert I have prepared for Cagliari, Sardinia plays across many of these ideas. Hugh Le Caine’s Dripsody is tonal musique concrète: the sound source is a drop of water, and pitch is a central aspect of the structure. Javier Alvarez fragments melody and harmony into pure sound gesture. Adrian Moore uses a piano in an unpitched fashion and shares gestural shapes and types with Maurice Wright’s Electronic Composition, composed a quarter century earlier. Philip Karneef and Annie Mahtani inhabit a similar psychological sound world, the sources being in one case purely electronic and in the other entirely concrete. Daniel Feist in 1986 felt that nobody was listening, and 20 years later, it seems many are.