The Development of the Clarinet

         Compared to other instruments like the violin or the flute, the clarinet is a fairly modern instrument; even among the woodwinds it is considered very young. The first instrument that resembled a clarinet was the chalumeau, also a single-reed cylindrical instrument, but one that played somewhat lower. It was not until around the turn of the eighteenth century that the chalumeau began developing into something that more closely resembled a clarinet, with a wider tonal range. Johann Christoph Denner (1655-1707) of Nuremberg, Germany is said to have been one of the earlier figures who experimented with chalumeaux and began finding ways to improve upon the instrument.

Chalumeau (image from fmasson)

          The new instrument, which was then called a clarinetto (Italian for “little trumpet,” since its bright upper register recalled the trumpet), became much more relevant to the Western musical world. The clarinetto was essentially an improvement on the chalumeau. It had two or three keys; even though that may seem like little, it helped facilitate technical runs that would otherwise have been too difficult. It also had a louder tone and began to be used more and more in orchestras.

         More and more improvements were made to the new instrument as the works written for the clarinet grew more demanding. The clarinet was also seen to have potential, with its beautiful tone, to color the sound of the orchestra. Works such as the Mozart clarinet concerto or the Stamitz concertos required solid technical proficiency, which led various musicians and inventors to develop the clarinet further. Iwan Müller improved the clarinet by introducing the thirteen-keyed model; the extra keys gave the player technical ease and the ability to produce more tone colors.

         The type of clarinet used in this period is more closely related to the Oehler clarinet that is widely used today. It is known for its beautiful, focused tone; because of that, it can be seen as inflexible, since the sound is harder to manipulate. The Oehler system is used mostly in Germany and central Europe; it is the clarinet played in the Berlin Philharmonic and other European orchestras. Even though this system has a beautiful sound, its fingering is more complicated and can make technical works difficult.

         The system of clarinet that the rest of the world uses is called the Boehm system, developed in the mid-nineteenth century and named after Theobald Boehm, whose flute key system it adapts. The two main differences between the Boehm clarinets and the Oehler (or German) clarinets are tone and ease of technique. Unlike the Oehler clarinets, the Boehm clarinets have a more flexible, lighter sound; you could say the tonal ideals of the two clarinets are almost opposites, one with a focused, dark sound and the other with a lighter one. The keywork is also very different. In the Boehm system each little finger has four keys to make playing easier, while the Oehler system has only two for each, with a roller in the middle that the finger must slide across when the music requires the other note. It should also be noted that Michele Zukovsky, the former principal clarinetist of the LA Philharmonic, took almost a year to get acquainted with the Oehler system after the Boehm because of its technical difficulty.

Left: the German (Oehler) clarinet; right: the Boehm clarinet (image from the-clarinets)

        All these small inventions and refinements helped create the modern clarinet. Musicians faced challenges from the technical limits of the instrument, which set off developments that improved the clarinet in various ways, such as bettering its tone, tuning, and keywork. These improvements help players meet what is required of them and bring out their artistry and music making.

The Instrument That Plays Without Being Touched

Throughout history, musical instruments have always been played with some part of the body. Whether it is creating vibrations with the lips like brass players, striking objects held in the hands like percussionists, or simply bowing strings, musical instruments have a physical dimension. Even an instrument as lightly touched as the piano still involves contact with something to produce sound. As musicians, we associate people with their bodies, and this becomes a cliché in the music world: brass players will have big, puffy lips, string players will have calluses on their fingers, and percussionists may always be tapping on something. But what if an instrument required no physical touch at all? What if one could produce a sound just by moving one's hands? This is a new, inventive category and the starting point for electronic musical instruments. In the 1920s, an instrument called the theremin was invented and had a major impact on the world of electronic instruments.

History

The theremin was invented in 1920 by a Russian physicist named Lev Termen, better known as Leon Theremin. He first stumbled upon the idea while researching the density of gases. He created a device to measure density, fitting it with a meter to display the readings as well as a whistling device whose pitch changed with variations in density. Theremin then discovered that his hands affected the pitch because they altered the electromagnetic field. He played around with the device until he could perform a melody on it and showed his co-workers. He then went on to complete the project and construct the instrument. The final product had two antennas, one placed vertically and the other horizontally, connected to two different circuits, with an electric field around each antenna. The right hand manipulates pitch while the left hand controls the volume.
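The classic explanation of the pitch circuit is heterodyning: the hand near the antenna adds a tiny capacitance to a radio-frequency oscillator, and the audible note is the beat between that detuned oscillator and a fixed one. A minimal numeric sketch of that arithmetic follows; the component values are invented for illustration and are not taken from any real theremin schematic.

```python
import math

def lc_frequency(inductance_h, capacitance_f):
    # Resonant frequency of an LC oscillator: f = 1 / (2*pi*sqrt(L*C))
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative component values (not from any real theremin schematic):
L = 1e-3          # 1 mH coil
C_base = 100e-12  # 100 pF base capacitance of the pitch circuit

f_fixed = lc_frequency(L, C_base)  # the fixed reference oscillator

# Bringing the hand toward the pitch antenna adds a small capacitance,
# detuning the variable oscillator. The audible note is the difference
# (beat) between the two radio-frequency oscillators.
for extra_pf in (0.0, 0.5, 1.0, 2.0):
    f_variable = lc_frequency(L, C_base + extra_pf * 1e-12)
    beat_hz = abs(f_fixed - f_variable)
    print(f"hand adds {extra_pf:3.1f} pF -> audible beat about {beat_hz:7.1f} Hz")
```

Note how a capacitance change of a single picofarad, far too small to matter at audio frequencies, sweeps the beat note through the audible range, which is why the instrument responds to mere hand motion.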


https://www.carolinaeyck.com/theremin/
Leon Theremin playing his instrument.

Theremin Music

Most people have probably heard what a theremin sounds like without realizing it. Classic old movies like “The Lost Weekend” and “Spellbound,” various science-fiction films, and even the recent “First Man” display the theremin in all sorts of ways. Albert Glinsky, author of Theremin: Ether Music and Espionage, describes it as “this squealing, wailing sound that sometimes goes along with the violins and creates this eerie sound.” In Alfred Hitchcock’s “Spellbound,” the theremin is prevalent throughout the score. Of these two examples, the first displays the theremin in a very haunting way: it begins with a wavy, eerie sound that fits the haunting mood of the movie and complements what is happening in the scene. The second example is the main theme to “Spellbound.” The interesting part here is that the theremin opens with the theme; it becomes the first melodic-sounding instrument one hears in the film's music, and it is then contrasted by long lines in the strings. The sound is refreshing to hear after knowing what all the typical orchestral instruments sound like.

One of my favorite examples of this great, unique sound actually comes from the soundtrack of “First Man,” a movie about the life of Neil Armstrong. There is a scene in which he puts on music while in space, and that song is “Lunar Rhapsody.” “Lunar Rhapsody” comes from a record called Music Out of the Moon featuring the then-famous theremin player Dr. Samuel J. Hoffman (who also played on “Spellbound”). Released in April 1947, it became one of the best-selling theremin records. “Lunar Rhapsody” features that “squealing” yet warm, soothing sound; the theme is so melodic that it shows the theremin is more than a sound effect.

In the end, what makes an instrument an instrument is the sound it can produce. It does not matter whether playing it is as physical as a drum set or as technical as a harp. The theremin requires no physical touch, yet it has appeared on many soundtracks and studio records that have had a profound impact on music culture. In today's music world, Moog produces theremins far more advanced than the old ones, and they have become some of the company's best-selling instruments. It is fascinating that what started as a project in a lab has been transformed into one of the most unique instruments of today.


Sources

http://tuvalu.santafe.edu/projects/musicplusmath/index.php?id=29

“Theremin: Ether Music and Espionage” by Albert Glinsky, Bob Moog

http://www.thereminworld.com/Article/14232/what-s-a-theremin-

https://www.cbsnews.com/news/the-theremin-a-strange-instrument-with-a-strange-history/

https://www.youtube.com/watch?v=YNoR-SR5t1s

https://www.youtube.com/watch?v=dawxnlRTgE8&t=0s&list=LLRr8TWpP-T8xAEzawPycdng&index=16

https://www.youtube.com/watch?v=gvK0NkrZXxM

https://www.youtube.com/watch?v=CrDC_LuifkU

The Grain of Sound: Development of Granular Synthesis and Its Relationships with Musical Performance

It seems that Western classical performers' pursuit of instrumental sound has always been a bit of a paradox. On the one hand, one seeks an “impossible perfection” of timbre: players work against the physical limitations of the instrument in order to attain a flawless sound. No matter how “natural” and “relaxed” one is taught to be, producing a purer sound is always the more important task, and that often results in greater suffering of the body. On the other hand, many musicians seem to value occurrences of “imperfection” in playing. A brief moment of scratch tone, a pitch that slips aside, or some unexpected error of rhythm can sometimes become the most expressive moment in a performance. Very often, one will even intentionally “distort” the sound so that a more dramatic effect can be achieved.

But why would that be? What makes a sound expressive? Composers in the 20th century were intrigued by the reasoning behind these ideas, and they proposed numerous theories on how the most minute details of a sound change everything in a performance.

During a lecture on electronic music in 1972, Karlheinz Stockhausen proposed that compressing and stretching the duration of a sound completely changes the listener's perception of it. Every piece of music can be a distinct timbre, and every brief sound can be a piece of music. This theory regards all sounds as highly complex compounds of information and structure, and thus resonates with the idea that a single molecule is loaded with infinite content. Indeed, nature never ceases to overwhelm us with its sheer amount of detail, and it is from different combinations of these details that we recognize an object's qualities. If one regards a sound as an object in the auditory realm, one can see what the sound consists of through deconstruction.

But how does one use this idea in composition? How can one find directions within the vast ocean of sounds that in reality last only a second? The answers are infinite. The micro-structure of a sound is a world of its own; we can of course explore it as freely as we explore our universe. Here is an example of complex sonic detail created by a new way of using materials in performance.

Australian composer Liza Lim uses a unique kind of bow in her cello solo piece Invisibility. The hair is wrapped around the stick of the bow; and, in Liza Lim’s words, “the stop/start structure of the serrated bow adds an uneven granular layer of articulation over every sound.” In her mind, this special bow enables the sound to outline the movement of the player, simultaneously outputting the “grains” and the “fluid”, thus providing new expressive possibilities in the relationship between the instrument and the player. Arguably, it is the instability and randomness in such grains that evokes the sense of body movement.

Helped by the development of a new technology in the 20th century, granular synthesis, composers were able to find the grains of sound for the first time, and that opened a whole world of sonic expression previously unheard. Arguably, many composers' use of a grain layer in sound stems from an aesthetic inspired by this newfound granulation technique.

Demonstration of a simple process of granular synthesis. (source link)

The basic concept of granular synthesis is a playback system that splits a sound sample into hundreds of thousands of small “grains,” making possible microscopic manipulations such as stretching and transposing. The Greek-French composer Iannis Xenakis was the first to introduce this concept into musical composition. In his piece Analogique A-B, he physically cut tape recordings into extremely small segments and rearranged them when splicing them back together. It was a tremendous amount of work without the help of a computer, and the experiments one could carry out were very limited.
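The cut-and-rearrange procedure can be sketched in code. Below is a minimal, idealized granulator (a toy illustration, not Xenakis's tape technique or any real granulation engine): it chops a sample into short windowed grains and overlap-adds them at a slower rate, stretching the sound in time while each grain keeps its local pitch content.

```python
import math

def granulate(sample, grain_size=256, stretch=2.0):
    # Chop the sample into overlapping grains read every hop_in samples,
    # then write them back out every hop_out samples: stretch > 1.0 spaces
    # the grains further apart, lengthening the sound without transposing it.
    hop_in = grain_size // 2
    hop_out = int(hop_in * stretch)
    # A Hann window fades each grain's edges so adjacent grains cross-fade.
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (grain_size - 1))
              for n in range(grain_size)]
    n_grains = max(1, (len(sample) - grain_size) // hop_in + 1)
    out = [0.0] * ((n_grains - 1) * hop_out + grain_size)
    for g in range(n_grains):
        grain = sample[g * hop_in : g * hop_in + grain_size]
        for n, x in enumerate(grain):
            out[g * hop_out + n] += x * window[n]
    return out

# A quarter-second 440 Hz sine at 8 kHz, stretched to roughly twice its length:
sr = 8000
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr // 4)]
stretched = granulate(tone, grain_size=256, stretch=2.0)
```

What took Xenakis weeks of razor-blade splicing reduces here to a read hop and a write hop; every later refinement, including Truax's, is in essence a more sophisticated way of choosing and shaping these grains.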

It was not until 1990 that the Canadian composer Barry Truax fully implemented real-time granular synthesis in his piece Riverrun, using a computer program that allowed immediate playback from the middle of a sample while the configuration of the synthesis was being changed. Now one could experiment efficiently with all kinds of granulation and transition gradually, in real time, from one kind to another, using the difference in fluctuation as a musical parameter. With such an advanced granulation system, one can truly combine the ideas of Stockhausen and Lim mentioned above: the sense of physical movement achieved by stretching and exposing the details of sound, that is, the sonic particles, the complexity of grains. Below is a piece by Truax called “Bamboo, Silk and Stone,” for koto and electronics.

In the piece, the player performs the initial material for granulation, the tape answers with the granulated sound, and so on. Bell-like source materials are also added, along with granulations of those sounds. In the processed electronic sound, we can hear that Truax uses granulation to separate each attack of the koto, turning it into a rapid cluster of nearly identical “clouds” of sound with a ghostly quality. We can also hear an airy sound with rapid pulses derived from the sound of the vessel flute xun. The transformation produces the effect that the sound is physically constructing and deconstructing itself. The reason one has this impression is that, in the process of stretching and magnifying the small grains, the characteristics of the original sound are still perceivable. We can therefore say that, through microscopic manipulation, we can treat sounds fully as physical objects and make them flexible to distortion without losing their identities.

Working with the vast details and finding the physicality in sound has not only given birth to new forms of electronic music and compositional inspirations, but also provided new insights into performance practices.

In his essay “The Grain of the Voice,” the French philosopher Roland Barthes examines and compares the voices of two singers (Panzera and Fischer-Dieskau) and explains why he finds one of them (Panzera, who has a very distinctive bright voice and gives peculiar interpretations) superior. One of his conclusions is that the physicality, the bodily communication, of speaking a language shows through the grains of sound, and that this physicality expresses itself without the limitations of linguistic law. He calls this kind of singing a “genosong.”

Returning to another technical detail of granular synthesis: randomization is very important when one granulates a sound, because intentional unevenness in grain positions improves the effect, especially for stretched sound. Inspired by this technology, the percussionist Tim Feeney writes that his drum roll is essentially a “hand-made granular synthesis.” Each attack is a single grain, and the attacks' positions in time and on the drum skin are, in part, the basic configuration of a synthesis. More importantly, he writes that when he has rolled for a long time and his strength flags, the occasional technical failures of the roll bring out the real-world equivalent of a randomization function in the granular process, and that this provides a variety of new effects.
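That randomization function can be mimicked directly in a granulator. The sketch below (the parameter names are invented for illustration) displaces each grain's read position by a random offset: with jitter at zero the grains are read at perfectly even positions, while larger values smear the mechanical regularity in much the way a tiring drum roll does.

```python
import math
import random

def granulate_jittered(sample, grain_size=256, stretch=2.0, jitter=0.5, seed=1):
    # Same overlap-add time-stretch idea as a plain granulator, but each
    # grain's read position is displaced by a random fraction of the hop.
    rng = random.Random(seed)            # fixed seed: reproducible "randomness"
    hop_in = grain_size // 2
    hop_out = int(hop_in * stretch)
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (grain_size - 1))
              for n in range(grain_size)]
    n_grains = max(1, (len(sample) - grain_size) // hop_in + 1)
    out = [0.0] * ((n_grains - 1) * hop_out + grain_size)
    for g in range(n_grains):
        offset = int(rng.uniform(-jitter, jitter) * hop_in)
        # Clamp so the jittered read position stays inside the sample.
        read = min(max(g * hop_in + offset, 0), len(sample) - grain_size)
        for n in range(grain_size):
            out[g * hop_out + n] += sample[read + n] * window[n]
    return out

# Jitter changes which slice of the source each grain carries, not the length:
sr = 8000
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr // 4)]
even = granulate_jittered(tone, jitter=0.0)
uneven = granulate_jittered(tone, jitter=0.8)
```

The even version repeats the source's periodicity exactly and can sound artificially metallic; the jittered version breaks that regularity, which is precisely the virtue Feeney hears in an imperfect roll.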

If one views granular synthesis as a whole, one finds that the process is still very much like the paradox of traditional instrumental playing mentioned earlier. One exercises fine control over a sound and at the same time adds a layer of randomness to it. It seems that humans have never really left this duality of “imperfect perfection.” It is then natural that composers in the 21st century have been trying to combine the technology with traditional practices so as to maximize expressiveness. Live granulation is now possible on faster computer systems, and performers can hear the sound of their instrument being granulated the instant they play. Using the power of granulation, live computer processing can “amplify” a player's physical actions, transform the sound of the instrument, and expand its musical vocabulary.

Barthes writes that “the ‘grain’ is the body in the voice as it sings, the hand as it writes, the limb as it performs.” It is possible that, after all these advancements in technology, people still value their own behaviors most. In the future, with this focus on physical movement, one potential evolution of music would be a merging of the relationships between composers, performers, and audiences: technologies would allow the sound of a piece to be changed by the listener's behavior. Overall, art can be regarded as organized expressive human behavior. The opening gesture of a piece, the first splash of color on a canvas: all point to the motion of the flesh, which, although it is the most primal and ritualistic thing, signifies a cry of our existence.

–Yan Yue

Sources:

  1. Roads, Curtis. “Introduction to Granular Synthesis.” Computer Music Journal 12, no. 2 (1988): 11-13. doi:10.2307/3679937.
  2. https://www.granularsynthesis.com/
  3. Barthes, Roland, and Stephen Heath. 1977. Image, music, text. London: Fontana Press.
  4. Feeney, Tim. “Weakness, Ambience and Irrelevance: Failure as a Method for Acoustic Variety.” Leonardo Music Journal 22 (2012): 53-54.
  5. Harley, James. “Iannis Xenakis (1922-2001).” Computer Music Journal 25, no. 3 (2001): 7.
  6. https://lizalimcomposer.files.wordpress.com/2011/07/liza-lim-patterns-of-ecstasy.pdf

Electrically Live Organs!

Electricity in organs is one of the most significant innovations ever to reach modern organists. It gives the organist many opportunities for improvement, both in the ability to practice and in the ability to perform.

There are a few terms the reader should know. An organ's “bellows” are similar to the bellows with which you would fan a flame in a fireplace; they produce wind that travels through the wind trunks, the pipes that carry the air, to arrive at the sounding pipes and produce the sound we hear. The next term is “stops,” the individual sounds that can be combined to make the “normal” sound of an organ. Finally, the keyboards of an organ have a “pluck,” a slight resistance in the movement of the key from its rest position to its depressed position. This resistance comes from opening the passageway that the air follows to reach the intended pipe, producing the desired harmony or melodic line.

Diagram of organ bellow and its route to the pipes

Before electricity's use in organs, an organist would practice by one of two methods. The first was hiring a person or two (or as many as required) to pump the bellows of the organ, supplying wind for the pipes to speak. This method was not the most desirable: the organist had to pay the so-called bellows treader(s), and because this practice took place in the church, the organist would also have to work in a cold environment or pay for some system of heating in the building. The other method was to use an instrument that did not require winding, such as a harpsichord or clavichord, or that could supply wind of its own, such as a harmonium. This method was better in two ways: it required no bellows treaders and no heating beyond the home's normal livable temperature.

Pedal Clavichord

While these two methods allowed excellent music making and a way for organists to develop their technique, both had disadvantages: the cost of practice time, or not hearing the intended instrument's sound. With electricity, in the form of electric blowers (heavy-duty fans), organists can now practice without the aid of bellows treaders and without missing out on the sounds of the organ on which they intend to perform.

Just as important is the use of electricity to recall combinations of stops at any moment, achieving a specific sound in service of the music and the organist's expression. The sounds can be anything the organist desires in terms of pitch level and dynamics. Olivier Messiaen used this feature to employ intriguing, distinct colors in his pieces that would have been nearly impossible on an organ without electricity to execute such drastic and pertinent changes of sound.

Changes of sound could not be as drastic without the aid of electricity

In addition to these improvements, electricity enabled greater technical facility for organists. The development of direct electric action was the cause of this progress: it eased the overcoming of the pluck of the pallet, allowing the pipe to speak. With this new ease came an increase in virtuoso writing for the organ. Before this technology, the organist's ability was stunted, since the key's resistance had to be overcome with more weight, which slowed the speed at which organists tended to play because of the physical limitations of the instrument.

Finding Beings in Sound: A Short History of Found Sound in Electronic Music

The essence of technology, according to philosopher Martin Heidegger, is, in his own jargon, a clearing: specifically, technology is a kind of worldview that reveals—or more often than not imposes—an essential purpose to things. For those who are interested in reading Heidegger’s original paper, it can be found here.

His most important points, for our purposes, are as follows:

  1. Things by themselves simply are—they do not need to serve any purpose (a tree simply is)
  2. Technology is the imposition—the forcing-on—of purpose on things (technology imposes purposes on a tree—as part of a shelter, or the handle of an axe, etc.)
  3. Technology is therefore a violation of the freedom of things, which prevents us from appreciating them in-and-of-themselves (we view the tree as a means to an end; this perspective degrades our experience of the tree)

Before recording devices, free sonic beings in-and-of-themselves did not appear in formal concert settings. All music-making, pre-phonograph, necessitated a technological manipulation of an object outside of its natural context. The bone flute, perhaps the most technologically “primitive” of all instruments, is still relatively technological in its appropriation of the bone—which does not seem remotely musical by itself—as a means to musical sound. Or, from another point of view, sonic experiences, pre-phonograph, were marked by a clear partition between outdoor and indoor sounds: there was the natural world itself (the world of free sounds) and the technological world of the concert hall.

Jiahu flute, circa 7000 BC (image from Virtual Collection of Asian Masterpieces)

            The idea of found sound, then, should be absolutely revolutionary—at least from a Heideggerian point of view. Found sound—the use of recorded, so-called “non-musical” sounds in an “indoor” setting—is perhaps the very first instance of a sonic-being appreciated in-and-of-itself in a concert hall milieu. Its history, therefore, outlines a story of ethics, framed around changing conceptions of the role of the “unadulterated” sound. We will attempt to dissect this narrative in four seminal works of electronic music.

1: Pierre Schaeffer: Études de bruits

            Found sound entangles itself with philosophy; it is not surprising, then, that it has been subject to much polemical writing. The first major works of found-sound composition, musique concrète (music made by manipulating recorded sound), were entrenched in the typically heated debates of the post-1945 generation of composers, many of whom were heavily invested in creating a new kind of electronic art distinct from the burdened music of the past (for a more detailed discussion of this and the following, see chapter 1 of Joanna Demers's Listening Through the Noise: The Aesthetics of Experimental Electronic Music).

            Most discussions of electronic music in general (including this one) begin with musique concrète, but that umbrella term encompassed at least two very different perspectives on found sound. Indeed, Pierre Schaeffer, who almost single-handedly invented musique concrète in his Paris studio in the 1940s, wrote music of constantly evolving character, not least as a result of the technological limitations of his time, and these divergent approaches, although not always philosophically grounded in themselves, positioned found sound radically differently in relation to Heidegger's notion of being-in-and-of-itself.

The first and earliest work of musique concrète, the Études de bruits, was composed on shellac record disks, such that techniques now considered commonplace (transposition, looping, filtering) had to be carried out excruciatingly by hand. The limits of this “primitive” technology can be heard in the piece, which contradicts, for better or worse, the ideals developed in Schaeffer's later philosophical writings (discussed shortly): this is a kind of musique concrète in which sound sources are easily identified, and indeed come with whole packages of connotations.

Schaeffer at work on shellac (image from Prepared Guitar blog)

            What might Heidegger have thought of these musicalized trains and sauce pans? His questions may have revolved around the intended affect of such a piece: it is a settled fact that the first étude recalls a running train, but to what end? My personal impression of the piece is primarily of a filmic experience. There is a sense in which the grainy audio recalls a likewise visually grainy old black-and-white film, populated by the objects suggested in the music.

 Is this an experience of sound as a being-in-and-of-itself? It is rather easy to suggest that Schaeffer fails to achieve this kind of experience because the recorded sound becomes a technological means—and therefore an unfree being—of evoking a filmic quasi-narrative. And yet, on a certain plane, this kind of filmic sound already suggests a lesser degree of technological manipulation than instrumental music. Traditional instrumental music subsumes the identity of the object within its sound, such that when we hear, say, a violin, we do not hear it as a sound stemming from an object but as a sound, to which the object is subservient. These Études are, to my ears, something different: no such hierarchy exists between the train and its sounds.

2: Pierre Schaeffer: Étude aux objets

            Schaeffer eventually published his musico-philosophical musings in several writings. Many of his ideas are summarized in Demers’s book—Schaeffer turns his back on the filmic sounds of Bruits, the failures of which he blames on the technological limitations of the 1940s. With the invention of the more versatile tape recorder, Schaeffer experienced a degree of artistic freedom which enabled him to experiment with “emancipated” sound, much inspired by Husserl’s phenomenology (as Dostal writes, Heidegger’s work is based on the framework of Husserl’s thought). As Demers writes, Schaeffer attempts to create a free sound-being “through the removal of visual cues” and “through the intentional disregard of the perceived sources and origins of a sound.”

            What this sounds like in Étude aux objets is a rather complex collage of sounds: sounds of obviously physical or natural emanation, but of imprecise origin and context. This, for Schaeffer, was a liberated sound-being. Schaeffer is in part addressing his own Études de bruits and their technologization of sound, in which sounds become mnemonics, markers for objects: in this new musique concrète, Schaeffer advocates a music that leads us to, one, hear the sound as being real, and, two, hear the sound as having interest in and of itself. Sounds must be heard as beings distinct from visual entities.

            Demers’s book captures some of the polemic that surrounded this claim. While Schaeffer’s ideas—on paper—seem to suggest a true “autonomous” and free sound being, Demers notes that composers, especially of the younger generation (the infamously argumentative young Pierre Boulez perhaps leading the charge), had doubts about a sound’s ability to separate itself from an outside context without being turned into an instrument (which would eliminate its philosophically vital distinction from instrumental music).

            Such complaints are clearly heard in the music. Listening to Étude aux objets, one is compelled to guess the origins of the sounds, and it is difficult to hear the piece without feeling that there are two contradictory layers of organization, as Lévi-Strauss notes: one, the implied contexts and worldly emanations of the sounds, whose implications suggest a network of relationships, and two, the actual organizing principles of the music. For instance, a sound similar to a car horn followed by a crumpling or crashing sound automatically suggests its own narrative, such that the sequence interferes with a more abstract, composed structure.

            I have mixed feelings, however, about the value of such objections to Schaeffer’s thought. Heidegger’s thought is centered around the idea that our understanding of the world must be unlearned: likewise, is it not possible to unlearn our mnemonic understanding of sound?

3: Karlheinz Stockhausen: Kontakte

            Supposing that Boulez et al. had touched on truth in their rejection of Schaeffer’s “autonomous” sound-being, it may be the case that a truly freed sound must speak of itself. Citing Stockhausen’s Kontakte as an example of found sound is a bit of a stretch, since the sounds are entirely synthesized—made from scratch—from basic wave generators, but my thinking here is that, in Stockhausen, the attempted autonomous sound-being is sound itself, detached from any specific context. In The Concept of Unity in Electronic Music, Stockhausen illuminates how works like Kontakte stem structurally from acoustic principles. One particularly spectacular instance is a long and dramatic glissando (which Stockhausen draws out in the air rather spectacularly in a 1970s lecture here) which illustrates how a single pitch can be lowered until it is a series of attacks—illustrating that timbre/pitch and rhythm exist on the same continuum. Like in some modernist architecture, in which light is not used merely to illuminate spaces but to be savored on its own as an independent architectural entity, sound here is not merely a material for music, but the driving force behind the music itself.
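Stockhausen's continuum claim can be demonstrated numerically. The sketch below (a generic illustration, not Stockhausen's actual synthesis setup) builds the same click object at two repetition rates; above roughly 20 clicks per second the ear fuses the clicks into a pitch, while below that threshold it hears them as a rhythm. The 20 Hz figure is the standard psychoacoustic boundary of hearing, not a number taken from Stockhausen's lecture.

```python
def pulse_train(rate_hz, duration_s, sr=8000):
    # One-sample clicks repeated every sr/rate_hz samples.
    period = int(sr / rate_hz)
    n_samples = int(sr * duration_s)
    return [1.0 if i % period == 0 else 0.0 for i in range(n_samples)]

# The same sound object on both sides of the pitch/rhythm continuum:
as_pitch = pulse_train(200.0, 1.0)   # 200 clicks/s: fuses into a 200 Hz tone
as_rhythm = pulse_train(4.0, 1.0)    # 4 clicks/s: heard as a pulse, a rhythm
```

Nothing changes between the two lists except the spacing of identical events, which is exactly the point: timbre/pitch and rhythm are one parameter viewed at two time scales.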

Ando: Church of Light (full-scale reproduction) (image from Dezeen)

4: Hildegard Westerkamp: Talking Rain

            One of the criticisms of Schaeffer's earliest works is, as mentioned above, the mnemonic quality imposed by found sounds of obvious origin. One can reject a found sound's context, as Schaeffer did, or disregard found sound entirely, as Stockhausen did, but it is also possible to go in the opposite direction and embrace the context of found sounds as the primary element of musical organization. Soundscape composition, often described as acoustic ecology, a school of electronic music first written about extensively by the Canadian composer R. Murray Schafer (no relation to the French Schaeffer), is such a movement.

Soundscape composer Gordon Hempton with a binaural microphone (image from KNKX)

            As Demers writes, soundscape composers recognize that sounds are inextricably linked to an environmental context. Whole sections of pieces consist of field recordings of what one might call ambient sounds, often with the intent of preserving the unique soundscape of a locale, be it an inhabited or pristinely natural one. Such recordings are often achieved with binaural recording systems, replicating a listener's experience of the sound with maximal precision. In a sense, soundscape composition is a kind of virtual travel for the ears alone. In addition to being “unadulterated” sound in a literal sense, the idea of soundscape can also be seen as Heideggerian in that it recognizes that the very definition of being implies being part of an environment, what Heidegger calls facticity. Yet this claim can be problematic. In Talking Rain we have a sequence of environmental milieus, not an actual field recording that runs for 15 minutes. One can argue that the milieus are extracted and technologized to serve the structure of the piece: the recordings lose their freedom in the context of a larger musical structure. On the other hand, I would argue that each soundscape is sufficiently immersive on its own to become autonomous, though ultimately this relies on a specific mode of listening.

Indeed, as with all previous attempts to liberate sound from technology, what is perhaps most important is the perspective of the listener. It is the listener who imposes purpose, but it is also the listener who frees sound by the act of listening.

-Haotian Yu