Now in 2019, how many people have never used Spotify, Apple Music, Pandora, or any other music streaming service? There is simply an overwhelming amount of music available to us at the press of a button.
However, did you know that the history of music streaming services traces back to Napster, Inc., which used a peer-to-peer (P2P) mechanism? This unprecedented internet application, begun in the fall of 1998 by Shawn Fanning, brought both enthusiastic acclaim and troublesome disputes (lawsuits from major record labels) at the same time. Napster’s central servers provided a real-time directory listing file names and locations; the songs themselves stayed on users’ computers. Users shared music ripped from their vinyl, tapes, and CD recordings and, in return, downloaded countless other songs in MP3 format directly from one another. “MP3 technology was developed by a German engineering firm in 1987 as a way of compressing digital audio files by removing inaudible space and squeezing the rest.” (Honigsberg, 474)
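To make the central-index idea concrete, here is a minimal, schematic sketch in Python. This is purely illustrative of the mechanism described above, not Napster’s actual protocol; all names and addresses are hypothetical.

```python
# Schematic sketch of a Napster-style central index: the server stores
# only *who has what*; the files themselves transfer peer-to-peer.
from collections import defaultdict

class CentralIndex:
    def __init__(self):
        self.catalog = defaultdict(set)  # song name -> set of peer addresses

    def register(self, peer_addr, song_names):
        """A peer announces the MP3s it is sharing."""
        for name in song_names:
            self.catalog[name].add(peer_addr)

    def search(self, song_name):
        """Return peers that claim to have the song."""
        return sorted(self.catalog[song_name])

index = CentralIndex()
index.register("203.0.113.7:6699", ["strawberry_fields.mp3"])
index.register("198.51.100.2:6699", ["strawberry_fields.mp3", "kashmir.mp3"])

# The download itself would then go directly peer-to-peer,
# never touching the index server.
print(index.search("strawberry_fields.mp3"))
```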
On December 6th, 1999, A&M Records and seventeen other record companies filed a complaint against Napster for copyright infringement. Napster’s public image rapidly deteriorated, and on February 12th, 2001, the court ordered Napster to install filters to halt the use of any copyrighted materials, thus “blocking over ninety-nine percent of copyrighted material.” On July 2nd, 2001, Napster finally had to shut down its online service.
Nevertheless, during the span of court hearings, Napster was preparing for its transformation. BMG, one of the five major labels that had sued Napster, switched sides and partnered with Napster on a “fee-based membership service.” Together, Napster and BMG planned to create a new online service providing digital versions of music, books, and magazines using the P2P mechanism. Hank Barry, then Napster’s CEO, announced a plan to charge users $4.95 per month, with roughly seventy to eighty percent of revenues shared with the record companies. Unfortunately, the offer did not appeal to the other major labels, whose calculations suggested the deal was not profitable enough. After Konrad Hilbers replaced Hank Barry, Napster previewed its new subscription model in January 2002, though with a limited selection of music.
Soon after, Napster sadly had to declare bankruptcy, and Roxio, a CD-burning software maker, purchased Napster’s brand and logo with a bid worth about $5.3 million. After Roxio successfully brought Shawn Fanning back to the company, it planned to launch a fully legalized version of Napster. Roxio acquired PressPlay for $12.5 million in cash and relaunched it under the name Napster 2.0. Five years later, Best Buy purchased Napster for $121 million, then sold “Napster’s customers and intellectual property” to Rhapsody in 2011 in return for a minority stake. Rhapsody has grown ever since, particularly in Europe, and in 2016 it rebranded itself under the name Napster. Now Napster competes against the major music streaming services: Spotify, Pandora, Apple Music, iHeartRadio, Deezer, Beats Music, and many more.
This is the brief history of Napster, the pioneering company that brought music streaming to us. Watch some of the documentaries about Napster.
Sources:
A&M Records, Inc. v. Napster, Inc., 114 F. Supp. 2d 896 (N.D. Cal. 2000)
Peter Jan Honigsberg, The Evolution and Revolution of Napster, 36 U.S.F. L. Rev. 473 (2002)
When you think of the theremin, what is the first thing that comes to mind? Perhaps a violin being played under water? Ghost movies? Alien abductions?? For me, I always think of that one episode of The Big Bang Theory where Sheldon used it to play the Star Trek theme song (much to the annoyance of his friends).
Even though your views of the theremin might not be as intrinsically linked to The Big Bang Theory as mine, I’m sure we can all agree that this instrument is already pretty cool. And now that we have established that the theremin is pretty freaking cool and therefore worth studying, I’m about to flood your brain with all the necessary knowledge you never thought you needed about how this pretty incredible piece of electronic technology came to be.
A History of Lev and his Theremin
The theremin began in 1919 in the mind of Russian inventor Lev Sergeyevich Termen, more commonly known today as Léon Theremin. The 23-year-old Soviet (who also worked as a spy for Soviet intelligence) invented the device accidentally while working on a meter that measured the density of gases. Basically, this gas meter created an electromagnetic field that would produce a sound when the area around it was disturbed. Theremin realized that the closer he brought his hands to the gas meter, the higher the pitch became, and the further away he pulled his hands, the lower the pitch became. So, like any 23-year-old in a laboratory who finds out his new machine makes funny noises, Theremin busted out some tunes for his lab buddies. His buddies and his boss were like, “Wow, that’s so cool. How about you, like, make an actual instrument out of it and, like, take it on the road and stuff?” And so he did.
But first, young Theremin made a pit stop at Vladimir Lenin’s house in 1922 to show him the new diddy maker he had just made, which he called the Aetherphone. And Lenin was like, “Woah, this is cool, like really cool. It’s electronic technology like this that will help me spread all the communism. You should totally go out and share this Aetherphone with the people (and also maybe think of a new name while you’re at it).” So, with Lenin’s gold star of approval, Léon Theremin spent the 1920s touring Europe with his fancy new doodad, which he now called the Thereminvox (later shortened to Theremin because it’s easier to say).
After traveling and performing around Europe, Mr. Theremin and his wife Katia made their way to America in 1927. There, Theremin performed in the nation’s top concert halls and venues, making his debut at the Metropolitan Opera in 1928, followed by the New York Philharmonic in 1928 and Carnegie Hall in 1928 and 1929. It was at this time that Léon Theremin also patented his theremin in the United States, and the instrument began to be produced and marketed by RCA (the Radio Corporation of America) in 1929 and 1930. Unfortunately, it was not a commercial success.
However, while in America, Mr. Theremin met Clara Rockmore (née Reisenberg), who would go on to become a theremin virtuoso and perpetuate the use of theremins in modern music and cinema. Clara devised her own fingering to allow for greater control and dexterity on the instrument, and as their partnership continued, she convinced Mr. Theremin to keep refining it, expanding the instrument’s range from three octaves to five. Mr. Theremin, who was captivated by Clara’s gifts, then proposed to her (a bunch of times) ((even though he may have still been married to Katia)), was rejected, and Clara went on to marry the attorney Robert Rockmore.
In the 1930s, Mr. Theremin established a laboratory in New York where he continued to develop the theremin and other electronic instruments, including the Rhythmicon (an electronic drum machine) and the Fingerboard (cello) Theremin. Theremin even went on to present a ten-theremin program at Carnegie Hall in 1930 and conducted his first electronic orchestra in 1932. Mr. Theremin also married the African-American ballet dancer Lavinia Williams, which resulted in his ostracization from society.
The theremin continued to make appearances in films and media in the background tracks of movies like The Lost Weekend (1945), Spellbound (1945), and Forbidden Planet (1956). Meanwhile, Clara Rockmore continued to play the theremin in a variety of concert halls and venues (and was also featured in the 1932 performances at Carnegie Hall). Clara went on to release an album entitled “The Art of the Theremin” in 1977 on the Delos label, containing a variety of selections from the classical canon. Even into the late 20th and early 21st centuries, the theremin can still be heard in a variety of pop songs, including the Beach Boys’ 1966 single “Good Vibrations” and the 1967 Rolling Stones albums Between the Buttons and Their Satanic Majesties Request.
And through the theremin’s continued success across mainstream media and musical performance, whatever happened to Léon Theremin? One day in 1938, he vanished from his New York studio, swept back to Russia, leaving behind his wife Lavinia and his theremin (among his many other musical inventions), never to be heard from again until the fall of the Iron Curtain in 1991.
Technology Behind the Theremin
So how does this thing actually work? Well, you’re in luck, because SciShow made a super informative video that explains the whole thing. The theremin really hasn’t changed a whole lot since its invention; the body of the device grew smaller thanks to advancements in microtechnology, and the rod that determines pitch was made longer to accommodate a more extended range. Aside from these small adjustments, the science behind the theremin has remained relatively unchanged.
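For the extra curious, the core trick is heterodyning: the instrument runs two radio-frequency oscillators, and the audible pitch is the difference between their frequencies; your hand adds a tiny bit of capacitance to one of them. Here’s a toy numeric sketch of that idea with made-up component values (not taken from any real theremin schematic):

```python
import math

# Minimal heterodyne model of a theremin's pitch circuit.
# Component values are illustrative, not from a real schematic.
L = 1e-3           # tank inductance (henries)
C_FIXED = 100e-12  # tank capacitance (farads)

def osc_freq(c_total):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(L * c_total))

f_ref = osc_freq(C_FIXED)  # fixed reference oscillator

# The player's hand acts as one plate of a capacitor with the pitch
# antenna: a closer hand means more added capacitance.
for c_hand_pf in (0.0, 0.2, 0.5, 1.0):
    f_var = osc_freq(C_FIXED + c_hand_pf * 1e-12)
    beat = abs(f_ref - f_var)  # audible pitch = difference frequency
    print(f"hand capacitance {c_hand_pf:.1f} pF -> pitch {beat:8.1f} Hz")
```

Run it and the pattern matches Theremin’s accidental discovery: more hand capacitance (a closer hand) means a bigger difference frequency, i.e. a higher pitch.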
The Theremin In Action
Here are some super cool videos of the Theremin in action!
Music, besides being a purely sensual, and often surreal, kind of enjoyment, has a million facets and presents a wide array of challenges for different mentalities: for performers, music is a fluid oscillation between obtaining eloquent delivery of tones and phrases and reimagining the philosophical pillars from which the piece in question was derived; for composers, and usually scholars as well, music yields a metaphysical reality, in pursuit of which our concerns about material, and sometimes even practical, realities shrivel; for the general audience (assuming one familiar with the context of the piece they listen to or recollect), music tends to be interpreted as a manifestation of an all-encompassing, higher being, in which the listener is dissolved, elevated to a vantage point, and able to re-deliver the music in its comprehensiveness.
Discussions of the psychological impact of music are often imbued with praise for its extraordinary illusory capabilities. This observation pertains directly to an integral part of musical imagination: space. Composers use techniques of distance and spatiality to create a premise for intricate structural progressions and volatile ideas; sometimes it becomes so compelling that the listener’s awareness of the surroundings is completely subsumed into it. In such cases an ‘environment’ is created. In relation to more traditional aesthetics, the sole agenda of creating space is to open an alternative path toward metaphysical reality, apart from the teachings of reasoning. Space instills fertility of thought in us. When we listen to music attentively enough, the boundary between the listener, who perceives sonic information, and the music, which configures and ‘emanates’ the information, is obscured; it is therefore not hard to imagine the multiplicity and simultaneity of perceptual conduits and the listener’s self-awareness achieved by spatialization, through which the attentive locating of sound — sometimes also the listener’s subconscious, self-seeking appreciation of such attentiveness — is transposed into a kind of panoramic ‘vision’ when the listener recognizes another sound source. The experience is then translated, to a certain extent, into an experience of anonymity and metaphysical clarity beyond the subjectively imposed characterization of external sonic objects. Ultimately, a virtual environment is created, and the listener is rendered receptive to it; the process of signification is diluted within a complex procedure of transitioning between being and non-being. The listener’s ideas are therefore, ideally, equally represented and given ‘amorphous’ shapes according to how the composite matters are delimited and when the ideas ‘intersect’ the music during the listening experience. When we internalize spatiality, polyphony begins.
Spatialization technology is nowadays primarily associated with electronic settings, thanks to the proliferation of electronic music and the development of electronic equipment. Its roots, however, can be traced back to the antiphonal singing of chants in the medieval era. The antiphonal style — that is, the call-and-response setting between segregated choruses — was implemented in chant more often than the responsorial (solo-chorus) style and the direct (unison choir) style. Besides exploring the poetic images behind the antiphons, the Renaissance era inherited these performance practices and further intensified implicit soundscapes in significantly elaborate polyphony. One can relate this movement to concurrent scientific discoveries (or, perhaps more accurately, the acknowledgment of their validity by the Church) regarding the motional and spatial relativity between Earth and other celestial objects, which helped replace the Earth-centric view of the universe with a spatially much greater one. The explosive expansion of the hypothetical universe led to a new way of looking at space; the spherical representation of the universe and the sphere as a theological representation of perfection both emerged during this period, and the octave, considered the most ‘spherical’ of all intervals, was employed in ways that enhanced, regulated, and reorganized the tonal space — the handling of tonal implications and motional relativity became increasingly reified and conceptualized until it virtually became a ‘spatial’ parameter — which correlates to a revised end-goal of contrapuntal writing. Counterpoint had been treated as, obviously enough, a rigorously linear, ‘contrapuntal’ context in the previous century. This can be seen in Johannes Tinctoris’ formulation in his 1477 treatise Liber de arte contrapuncti:
Counterpoint is a regulated and rational concentus [literally, “singing together”] realized by setting one voice against another. Its name counterpoint derives from counter and point, because one note is set against another as if it were constituted by one point against another.
Tinctoris, Liber de arte contrapuncti, 1477
However, new ideas emerged unhindered, and we see the recognition of the totality of polyphony as a ‘body,’ an organic whole. Counterpoint had therefore been endowed with a mystical quality. Here is one example of the numerous expressions of this then-revolutionary theory, quoted from Franchinus Gaffurius’ Angelicum ac divinum opus musice, written in 1518:
The concento or many-voiced work is a certain organism that contains different parts adapted for singing and disposed between voices distanced in commensurable intervals. This is what the singers call counterpoint.
Gaffurius, Angelicum ac divinum opus musice, 1518
The latter, notably favored by the theorist Gioseffo Zarlino, who appropriated the quote nearly verbatim in his seminal treatise Istitutioni harmoniche, combined the perception of external spatiality and that of its internal analogue into a single, transcendent unit. If external spatial distribution were viewed as insufficient to fulfill our perceptual intuitions, Gaffurius’ conception of the organic composition may well serve to alleviate the apparent mediocrity of seemingly signal-like, ‘unmusical’ tactics. In other words, spatialization had again been able to yield its perceptual potency thanks to the intensification of tonal organization. Further discussion of its aesthetic history can be found in this beautifully written paper.
The revitalized interest in spatial arrangement was evident in the architectural plans of Catholic churches. Many of them have places specially designed for antiphonal choirs; in this article, the author specifically examines the floor plan of St Mark’s Basilica in Venice.
The liturgical significance of antiphonal settings is evident here; even as the organ was introduced into these buildings, spaces were retained for antiphonal choirs.
Unsurprisingly, antiphonal writing is enormously difficult because, by the time polyphony came to fruition during the mid- and late-Renaissance era, segregated choirs were treated not only as purely antiphonal forces but also as a composite choir. Besides maintaining the independence of melodic lines, the composer had to manage the composite choir — typically an eight-part force divided into two four-part choirs of equal strength — in such a way that the independence of the individual choirs could be recognized while the unity of the eight-part body was preserved. In Istitutioni harmoniche, Zarlino wrote about the principles of this kind of writing:
Because the choirs are located at some distance from one another, the composer must see to it that each chorus has music that is consonant, that is without dissonance among its parts, and that each has a self-sufficient four-part harmony. Yet when the choirs sound together, their parts must make good harmony without dissonances. Thus composed, each choir has independent music which could be sung separately without offending the ear.
Zarlino, Istitutioni harmoniche, 1558
One example of an eight-voice setting is the Ave Maria (1572) by Tomás Luis de Victoria, included in the supreme compilation of his musical art, Missae, Magnificat, Motecta, Psalmi, published in 1600. This motet remarkably encapsulates different kinds of choral writing, all fused into a compelling dramatic trajectory. In addition to eloquent shifting from kind to kind, Victoria exploited the organizational possibilities within the expanded setting, usually when the texture is diminished to four parts. The curtailed choir, however, may be drawn from both sides of the entire force as opposed to one; the strict, ‘primitive’ distinction of sides which once defined antiphony became a form of interlacing, its original functional implication — to elicit call-and-response reciprocations — giving way to intricate transitioning between different pairs at different distances. To carefully calculate the relative amplitude of each side is to manipulate the ‘movement’ of a sound (not to be confused with that of a pitch, which is contour) — an implicit, heavily context-dependent, yet immensely affective parameter.
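The same ‘relative amplitude’ game survives today in stereo production as the pan law. Here is a minimal sketch of constant-power panning between two sides, purely as a modern illustration of the principle (sixteenth-century choirmasters obviously worked by ear and placement, not by formula):

```python
import math

def constant_power_pan(position):
    """position in [0, 1]: 0 = fully left 'choir', 1 = fully right.
    Returns (left_gain, right_gain) with L^2 + R^2 = 1, so perceived
    loudness stays constant as the sound 'moves' between the sides."""
    theta = position * math.pi / 2
    return math.cos(theta), math.sin(theta)

# A sound 'moving' from one side to the other in five steps.
for step in range(5):
    left, right = constant_power_pan(step / 4)
    print(f"position {step/4:.2f}: left {left:.3f}, right {right:.3f}")
```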
For further investigation of the performative considerations of an eight-voice setting, this article offers a detailed discussion of Victoria’s Victimae paschali laudes, a sequence also included in the 1600 collection Missae, Magnificat, Motecta, Psalmi.
The spatial component saw a second rise in significance in the nineteenth century and full fruition in the centuries that followed. In the third movement of Hector Berlioz’s Symphonie fantastique, an oboist is instructed to remain offstage while playing the remote echoes of the shepherd’s melody, which is in turn played by the English horn; the Schalldeckel of Richard Wagner’s revolutionary Bayreuth Festspielhaus reflects the lush orchestral sound from the pit back into the auditorium in such a way that at any given point the sound seems completely immersive and omnidirectional; Luigi Nono credited the Venetian masters of the Renaissance era as being of primary importance to his music, because spatialization directly pertains to contemporary theatrical philosophy. Such is the relevance of our instinctive awareness of surroundings to metaphysical and spiritual truthfulness. However, spatialization through purely contrapuntal means is no less complex than the handling of electronic facilities. Perhaps the evocation of a primal revelation and reverence — although it may appear more akin to an ‘unlearning’ process — is only plausible through rigorously regulating, reinventing, and augmenting the conduit toward an authentic yet universal experience.
Imagine a young Elvis Presley, only 21 years old, in his hometown of Tupelo, Mississippi. Finally coming home for the first time as a massive celebrity, Elvis decides to put on a homecoming concert for the town. Performing for tens of thousands of screaming fans, Elvis makes sure to pull out all the stops. He sings some of his most famous hits like “Hound Dog” and “Don’t Be Cruel,” dances in the sensual fashion that never made it into his “waist up” performance on The Ed Sullivan Show, and holds his hand out to a sea of people desperately wanting to touch him, all the while clutching a bulbous chrome microphone that would come to be playfully nicknamed the “Elvis Mic”: the Shure Unidyne Model 55.
This microphone, first developed in 1939 as part of the Shure company’s Unidyne microphone series, has been in the presence of some of the most famous musicians and some of the most recognizable events in American history. The Shure Unidyne Model 55 was the preferred microphone not only of Elvis Presley, but of the great jazz singer Billie Holiday, the “Queen of Swing” Mildred Bailey, and Frank Sinatra. It stood in front of Martin Luther King Jr. during his famous “I Have A Dream” speech at the Lincoln Memorial, was quite noticeable in the “Dewey Defeats Truman” photo, and was the iconic microphone that helped Michael Buffer utter the words, “Let’s get ready to RUMBLE!!!”
However, even though the Model 55 has been around for 80 years and has been an integral part of America’s musical and social culture, not many people really know much about this mic and what made it the groundbreaking technology that it truly is. Well, I intend to right this incredible wrong of society and present to you a rundown of the Model 55’s history and the ingenious design that more than certainly led to its popularity.
1. It was the first of its kind to be a “single element dynamic cardioid” microphone
Now I know what you’re thinking. “This is how you’re going to reel me in? Throwing together a bunch of engineering terms and hoping I think it sounds cool? You’ve lost me.” But wait! While they might sound a little dry, those four words (single element dynamic cardioid) are the basis for almost all modern recording technology and, in the context of the 1930’s, opened a new realm of possibilities for studio and live recordings. Here is a breakdown of those words.
“Cardioid” refers to the specific directional pattern that the mic makes. Back in the 1930’s, most microphones either picked up sounds equally from all sides (an omnidirectional pattern) or equally from two sides (a bidirectional pattern), but the desired pattern for live performances was a unidirectional pattern that picked up sound from only one side of the microphone. That way, only a performer’s sound would go into the microphone, without the ambient noise that could normally cause unwanted feedback. This unidirectional pattern is often in the shape of a heart, which is why it is specifically called a cardioid pattern. If you want to read more on directional patterns, click here.
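If you like numbers more than hearts, these patterns have simple textbook formulas. Here’s a quick, illustrative sketch comparing the three patterns mentioned above (the classic idealized formulas, not Shure’s measured spec):

```python
import math

def pattern_gain(theta_deg, kind):
    """Relative sensitivity of a mic at angle theta (0 = straight on).
    Textbook polar patterns, normalized to 1.0 on-axis."""
    theta = math.radians(theta_deg)
    if kind == "omnidirectional":        # equal pickup from all sides
        return 1.0
    if kind == "bidirectional":          # figure-8: front and back only
        return abs(math.cos(theta))
    if kind == "cardioid":               # heart shape: rejects the rear
        return 0.5 * (1 + math.cos(theta))
    raise ValueError(kind)

for angle in (0, 90, 180):
    gains = {k: round(pattern_gain(angle, k), 2)
             for k in ("omnidirectional", "bidirectional", "cardioid")}
    print(f"{angle:3d} deg: {gains}")
```

Note the cardioid row at 180 degrees: the gain goes to zero, which is exactly the feedback-fighting rear rejection the performers wanted.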
“Dynamic” refers to how the microphone turns the acoustic waves of a sound into electric waves. In a dynamic microphone, sound pushes against a diaphragm that is connected to a coil of wire surrounding a magnet. Whenever the diaphragm moves, the coil moves over the magnet, creating a small current that momentarily runs through the wire. There are many different ways in which sound can be converted to an electric signal, and if you want to learn more about these methods, click here.
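The physics here is Faraday’s law of induction: the voltage is proportional to how fast the coil moves, not how far. A toy sketch in arbitrary units (purely illustrative, with made-up constants):

```python
import math

# Toy model of dynamic-mic transduction: the induced voltage is
# proportional to the *velocity* of the coil (Faraday's law), so the
# electrical signal tracks the rate of diaphragm motion.
SAMPLE_RATE = 48_000
FREQ = 440.0       # incoming test tone (Hz)
SENSITIVITY = 1.0  # lumps together coil turns, field strength, etc.

def diaphragm_position(t):
    return math.sin(2 * math.pi * FREQ * t)

def induced_voltage(t, dt=1e-6):
    # Numerical derivative of position = velocity of the coil.
    velocity = (diaphragm_position(t + dt) - diaphragm_position(t)) / dt
    return SENSITIVITY * velocity

for n in range(4):
    t = n / SAMPLE_RATE
    print(f"t={t*1000:.3f} ms  v_out={induced_voltage(t):+.1f}")
```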
“Single element” is probably the most important term of the group because it’s what made this microphone so successful as a product. In the 1930’s, to create a cardioid directional pattern, recordists had to use huge microphones that effectively contained multiple omnidirectional and bidirectional mics whose outputs were summed or subtracted. These “multiple element” mics were heavy and not always reliable, so Shure researched ways to modify the dynamic configuration of the mic so that only one element was needed. With the help of Benjamin Bauer, the head designer and inventor of the mic, the company found a way to alter how sound reaches the diaphragm from the back, effectively nullifying any sound coming from that direction. The result was a mic that was extremely lightweight and significantly more reliable than its competitors, features that many performers and announcers were attracted to.
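Conceptually, the single-element trick is a delay-and-cancel game: sound can reach the diaphragm’s back side through a rear opening, and the path inside the mic is tuned so that rear-arriving sound hits both faces at once and cancels itself out. Here’s a rough toy model of that idea (illustrative numbers only, not Bauer’s actual acoustic network):

```python
import math

# Toy model: the diaphragm is driven by the pressure DIFFERENCE
# between its front face and a rear port whose sound is delayed
# inside the mic body before reaching the diaphragm's back.
FREQ = 1000.0                    # test tone (Hz)
SPEED_OF_SOUND = 343.0           # m/s
SPACING = 0.02                   # front-to-rear-port distance (m)
EXTERNAL_DELAY = SPACING / SPEED_OF_SOUND
INTERNAL_DELAY = EXTERNAL_DELAY  # network tuned to match the air path

def diaphragm_drive(theta_deg, t):
    """Net pressure on the diaphragm for a tone arriving at angle theta."""
    theta = math.radians(theta_deg)
    front = math.sin(2 * math.pi * FREQ * t)
    # The rear port hears the wave earlier (rear incidence) or later
    # (front incidence) than the front face; the internal network then
    # delays it again before it reaches the diaphragm's back side.
    rear_time = t - EXTERNAL_DELAY * math.cos(theta) - INTERNAL_DELAY
    rear = math.sin(2 * math.pi * FREQ * rear_time)
    return front - rear

# Peak drive over one cycle: strong from the front, zero from the rear.
for angle in (0, 90, 180):
    peak = max(abs(diaphragm_drive(angle, n / (FREQ * 64))) for n in range(64))
    print(f"{angle:3d} deg -> relative output {peak:.2f}")
```

At 180 degrees the two arrivals line up exactly and the net drive is zero, while the 90-degree output lands at about half the front output, which is the cardioid shape from the previous sketch emerging from one element.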
2. It was extremely cost effective
Because of its single element design, Shure could sell these microphones at a reasonable price to broadcast groups. The Shure Unidyne Model 55 cost around $45, which, given its reliability and weight, was a great deal for the performers buying them.
3. People loved the outer design of the microphone
Without a doubt people were attracted to the futuristic look of the Model 55. According to the Shure company, the outer design of the mic was inspired by the grill of the 1937 Oldsmobile as well as the Art Deco movement of the 1920’s and 1930’s.
All in all, the Shure Unidyne Model 55 was a feat of technological brilliance. It offered an efficient way to accurately record vocals without the fear of feedback or odd frequency response. The Shure Model 55 should be remembered as the father of all modern dynamic microphones because it truly was the first of its kind. So, whenever you see a picture of Elvis waving around the “Elvis Mic” or Sinatra crooning into the Model 55, just remember how groundbreaking that microphone was.
What do you think of when the word hammer comes to mind?
A tool?
Rapper/dancer, MC Hammer?
A piano?
You may be thinking, “What’s a hammer have to do with a piano?”
Good question.
Hammers are mechanisms inside the piano that play a crucial role in its structure and sound. At times, we can forget about them because they are hidden inside the instrument, but they are still an essential part of this big wooden contraption. Without the hammers, it would not be able to produce the sound we hear today.
Before the piano was invented, the harpsichord was the main keyboard instrument. It produced sound when pressed keys raised devices called “jacks” inside the harpsichord, which plucked the strings. A “jack rail” would then control how many strings were plucked at a time, and this was what adjusted the volume.
Here’s a simple demonstration of how the jack works:
The piano eventually came into play in the 1700’s. It was invented by Bartolomeo Cristofori in Italy because many people were unsatisfied with the lack of control they had over the harpsichord’s volume, so Cristofori swapped the plucking mechanism for a hammer. He developed an “escapement” mechanism, which allowed the hammer to fall back after hitting the strings, as well as a damping mechanism on the jack so that a string would not keep sounding after its key was released. His invention completely changed the sound of the keyboard instrument. Silencing the string seems like a very minute detail; however, it makes a big difference. There’s a less abrupt sound and a nice resonance. All of these characteristics make the instrument more appealing to the ear.
Cristofori also developed another mechanism that improved the striking action: what he named a “slide-slip.” The device (which was activated by a hand stop) would shift the mechanism so that the hammer would hit only one string instead of three. This is where the soft pedal, or una corda, originated.
The head of the hammer was also covered with a piece of felt, which protected it and kept it from clashing against the strings as the keys were played. Hammers were originally covered with a layer of leather; this changed, most likely because felt-making wasn’t fully developed until the mid-1800’s. The felt allowed pianists to produce a softer sound than the harpsichord, which was sharper and more abrupt, and gave the piano larger gradations in dynamics, which previous keyboard instruments did not have. As felt quality gradually improved over time, modern pianos developed better tone, which gave room for more expression.
So why are the hammers in the piano even that important? Is it even considered a technology? I would say so! The development of the hammer revolutionized the keyboard instrument. Before, keyboard players had no control over the volume at which they could play. As a pianist, that would have really bothered me, because the most important thing to have is a large range of dynamics. With the earlier keyboard instruments, the volume could only be controlled with the jack rail, and that still didn’t give much range in dynamics. The articulation of the fingers was essentially the only thing that could control the sound and tone. It didn’t matter how much weight you put into the keys.
With the development of the hammer mechanism, pianists were able to change the sound and volume with the weight of their arms. This allowed them to produce a much bigger range of dynamics. It’s the reason why we are able to play a vast range of fortes and pianos today.
The bassoon reed: so small, yet so capable of ruining my life. The bassoon reed is one of the few pieces of technology that makes life difficult just as often as it makes life easy. Every bassoonist knows the struggle: you spend hours on one reed, only to find out that it is not, and never will be, very good.
At the foundation of the bassoon reed is a plant called Arundo donax, or, more commonly, “giant reed.” Once it’s harvested and sent to bassoonists, it’s generally referred to as “cane.” It is an invasive species and grows all over the world, but most of the cane used for bassoon reeds is grown in France. That may seem like an oddly specific, arbitrary location, but there actually is a difference in the makeup of the plant depending on the region in which it grows. Arundo donax contains certain percentages of natural minerals and chemicals that serve to protect it from insects. One of these minerals is silica (a glass-like mineral that gives a piece of cane strength), and the amount of it in the cane dictates whether or not the cane is usable for reed making. Too much means the cane will be too stiff to vibrate (making it very difficult to make a sound), and too little means that it vibrates much too easily and will probably sound similar to a kazoo. Basically, there are only certain regions in which cane grows in a way that is ideal for reed making, and France just so happens to have those growing conditions.
When the cane first reaches the bassoonist, it looks nothing like a reed. At this point in the process (wherein the only thing done to the cane has been harvesting it and cutting it into sections about one foot tall), it pretty closely resembles a bamboo shoot.
This is where the manual labor begins for the bassoonist. To put it simply, the tube has to be split into four equal pieces and cut to a precise length, and the inside material has to be scooped out and thinned. At this point, the cane has undergone what is called the gouging process, and it looks like this:
The cane then undergoes a series of transformations during which it begins to look like a reed. These would be boring and confusing to explain, but this video of Abe Weiss (the former principal bassoonist of the RPO) does a good job of demonstrating the steps. The first half of this video is about the steps involved in processing cane, and the second half is about finishing the reed.
How someone chooses to finish a reed depends on a variety of factors. These factors include things like where they live, who they study with, and what kind of playing they plan on doing. According to George Sakakeeny in his book, Making Reeds from Start to Finish, there are three main styles of reed making which all suit different needs.
The first distinct reed style is the Garfield style. This type of reed is rare and mostly found in North America. It is named for Bernard Garfield, who developed it in the mid-twentieth century while playing in the Philadelphia Orchestra. The goal of this reed type was to achieve a darker sound that was easy to control for orchestral playing. Over the years it has fallen out of favor, mostly because it does not project well. The other two reed styles don’t have special names. One is used mainly in North America and tends to elicit a brighter sound from the bassoon. This type of reed is good for solo and principal orchestral playing. The third is used pretty much everywhere outside of North America, and elicits a darker sound from the bassoon. While there are three “main” styles of bassoon reeds, every individual adjusts their reeds to suit their needs in so specific of a way that almost no two people’s reeds are the same.
In terms of development, the bassoon reed hasn’t really changed since…pretty much ever. Even dulcians (the precursors to bassoons) used reeds that look similar to ones bassoonists use today. Obviously there are differences in things like shape and size, but the general structure of the bassoon reed doesn’t seem to have gotten an update in over 300 years.
Making a reed out of organic material, however, can be a frustrating task given how inconsistent plants can be. This means reed making ends up being an incredibly time-consuming task, since at least half of all reeds are not suitable to be played and have to be thrown out. As a result, modern companies are trying to find a synthetic material that will work just as well as traditional bassoon cane. The forerunner in this industry right now is Légère, which has created a synthetic reed that, all things considered, functions pretty well. That being said, synthetic reeds are not widely accepted in the bassoon world yet. Reed making is honestly a pretty culty thing, so much so that people tend to look down upon those who don’t make their own reeds. Steve Paulson, the principal bassoonist of the San Francisco Symphony, even said, “I’m almost reluctant to reveal publicly how much I am enjoying the experience. As good as these reeds are, I’m sure that even the folks at Legere understand that it will take a long time to have synthetic reeds accepted as mainstream in our worldwide culture of bassoonists, at least among professionals. Prospective conservatory students will want the assurance that a bassoon teacher will continue to devote the time and energy to the teaching of cane reed making, as I will, even if the professional happens to be ‘doing a little Legere on the side.’”
Reeds are a vital part of the bassoon playing experience. Without a good reed, there’s no way to play the bassoon to the standard of an orchestra or any ensemble for that matter. They are the most important piece of technology a bassoonist has available to them, and, in the process of being made, the reeds accrue their own history. Through understanding how a reed is made and how specific they are to different people/regional sound preferences, we can gain an appreciation for how bassoonists have adapted this technology to make it meet their individual needs.
Each musical genre can be associated with a key instrument: tenor saxophone for Swing and Bebop, electric guitar for Blues and Rock, and the synthesizer for 80’s pop. With this in mind, what timbre accurately depicts the past decade of music? There isn’t exactly one sound that fits this criterion, and that is a result of the emergence of samplers in modern music production.
Of course, sampling dates back far before rappers and DJs had instant sampling and real-time loops on portable digital devices. Sampling technology began with tape recordings in the 1940’s, with Harry Chamberlin’s invention of the Model 100 Rhythmate. This instrument played a selection of pre-recorded drum loops on a tape reel so that users could play along on other instruments. After a few different versions of the Rhythmate, along with the addition of a keyboard and recordings of violins, woodwinds, and choirs, the first Mellotron was born (http://egrefin.free.fr/eng/mellotron/melhist.php).
Like most things in life, the Mellotron was not perfect to begin with. In “Poor Man’s Mellotron,” Bruce Harvie shares his experience with his own instrument: “I have to warm mine up for an hour or two to get it to where it will play back the tape banks without warbling, and even then it’s dicey as to whether or not it will play the notes clearly.” Clearly, it takes a bit more than just owning a Mellotron to be able to use it effectively. “Mine has trumpet, French horn, violin, cello, and the wonderful sound of individual men’s and woman’s voices… and that’s it!” he further shares.
Harvie recounts the instrumentation of his Mellotron as a downside, but for producers and songwriters, this was more than enough to spark creativity. Just listen to the iconic Beatles cut “Strawberry Fields Forever,” which begins with a Mellotron playing a recording of a flute.
Being limited to the option of a flute Mellotron helped the Fab Four bring this tune to life in a way that could not otherwise have been imagined. The warm sound of the warbling, distorted tape disguises the fact that the recording is the sound of a flute, creating a sound that is entirely unique. Voice and string Mellotrons are notorious for creating dense atmospheric textures, which can be heard in British progressive rock outfit Genesis’ “Dancing With The Moonlit Knight.”
Countless recordings have been made legendary by the sound of Mellotrons. The instrument became a staple of several artists in the 70’s, heard most notably on David Bowie’s “Space Oddity,” Led Zeppelin’s “Kashmir,” and Tangerine Dream’s ambient and illustrious Phaedra. (http://ultimateclassicrock.com/mellotron-songs/)
Naturally, borrowing pieces of music has evolved. During the 80’s, hip hop artists began using vinyl records to sample recordings. Like the tape on a Mellotron, the warbling sound of vinyl maintained the warm analog character of sampling (https://entertainment.howstuffworks.com/music-sampling1.htm). The production of Mellotrons came to a halt in 1986 as digital samplers took over the market. Fast forward to the 21st century and Roland’s SP404, which allows users to record digital samples with a built-in microphone and even apply reverb, chorus, and filters to them. (https://www.roland.com/us/products/sp-404sx/)
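Under the hood, a digital sampler’s most basic trick is the same one tape relies on: play the recording back faster or slower and the pitch moves with it. Here’s a minimal sketch of that varispeed resampling idea (illustrative only, not the SP404’s actual engine):

```python
import numpy as np

def varispeed(sample: np.ndarray, ratio: float) -> np.ndarray:
    """Resample a mono recording by `ratio`: 2.0 plays back twice as
    fast and an octave up, 0.5 at half speed and an octave down --
    the same effect as speeding up or slowing down a tape."""
    out_len = int(len(sample) / ratio)
    # Fractional read positions into the original recording.
    positions = np.arange(out_len) * ratio
    return np.interp(positions, np.arange(len(sample)), sample)

# Demo: a one-second 440 Hz tone resampled at ratio 2 becomes 880 Hz.
sr = 44_100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
octave_up = varispeed(tone, 2.0)
print(len(tone), "->", len(octave_up), "samples")
```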
Despite this convenient technology, many still prefer the unique texture that a Mellotron creates. Jonny Greenwood of Radiohead has discussed his own impressions of the instrument: “It didn’t sound like any other keyboard. Instead there was a choir, and a weird, fucked-up sort of choir. I love the fact that the notes run out after a few seconds.” As a true testimony to the influence such a unique instrument had even on future generations, various Mellotrons can be heard throughout the band’s 1997 masterpiece album, “OK Computer.” (https://www.spin.com/2017/06/radiohead-jonny-greenwood-genesis-paranoid-android-ok-computer/)
While digital samplers continue to dominate the scene, there is still a market for the iconic sound of a Mellotron. Just this past weekend at the 2019 NAMM convention, Quilter Labs unveiled the Panoptigon, a machine that plays optical discs and allows users to manipulate the pitch of the audio, quite reminiscent of the sound of a Mellotron: https://reverb.com/news/video-quilters-panoptigon-brings-back-the-optical-disc-instrument
The EWI, otherwise known as an electronic wind instrument, is a technological invention that has made a huge impact on many different genres of music and has a recent history that is often overlooked.
The History of the Instrument:
It all started in 1981, with inventor Nyle Steiner. In its first stages, the EWI was made by hand and was essentially an analog controller that didn’t have very many sounds other than the ones built in. The top of the EWI contains sensors inside the mouthpiece that measure how much wind is being blown into the instrument, which changes the volume. The front of the instrument was made up of non-movable buttons and parts. On the back, close to the mouthpiece, is a series of metal rollers that allows the user to control the octave register with the thumb.
Shortly after the EWI was created, its increasing popularity led some users to carry lots of extra equipment in order to create extra sounds, along with cords to make it compatible with other synthesizers. The solution to that problem came when Steiner integrated a MIDI box into the EWI in 1985. This allowed it to be more compatible with commonly used samplers and to mimic any real sound the user wanted to make. That is part of why the instrument is so versatile, along with its ability to be programmed with different fingerings (for brass instruments or saxophone) that are more familiar to users.
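MIDI is just a stream of small messages, so a wind controller’s breath sensor maps onto it very naturally. Here’s a minimal sketch using the mido library; CC 2 is the MIDI spec’s “breath controller,” but the pressure values and threshold below are made up, and this is only a conceptual illustration, not Steiner’s or Akai’s firmware:

```python
import mido

def breath_to_midi(pressure: float, note: int = 60):
    """pressure in [0.0, 1.0] from a hypothetical mouthpiece sensor.
    Returns the MIDI messages a wind controller might emit."""
    # Continuous breath data rides on CC 2 (breath controller);
    # many players remap it to CC 7 (volume) or CC 11 (expression).
    messages = [mido.Message('control_change', control=2,
                             value=int(pressure * 127))]
    if pressure > 0.02:  # past the (made-up) breath threshold
        messages.append(mido.Message('note_on', note=note,
                                     velocity=int(pressure * 127)))
    else:
        messages.append(mido.Message('note_off', note=note))
    return messages

for p in (0.0, 0.3, 0.9):
    print(p, breath_to_midi(p))
```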
Once Steiner was no longer able to make the EWIs by hand, he went to Akai Professional, who were already working on their own digital sampler at the time with the music instrument company Electro-Harmonix, and made a deal for the prototype to be mass-produced. The EWI continued to be revised over the years to improve both its technology and its playability. The most recent model, the Akai EWI 5000, was revealed in 2014 and even contains its own onboard sound engine for adjusting reverb, delay, chorus, and pitch. It features the same button and octave mechanics as the original, but in a much slimmer form containing more advanced technology and more patch sounds.
One example is my favorite piece featuring the EWI: “Original Rays” on Michael Brecker’s Michael Brecker (1987). Here the EWI is also plugged into an Oberheim Xpander (a six-voice, keyless analog synthesizer and interval generator). The notes being played on the EWI are marked in pink, and the color-coded chunks mark each time the Oberheim Xpander generates a new set of six intervals harmonizing the main note.
One last example, just because Michael Brecker is that awesome, is the song “Itsbynne Reel” on Don’t Try This at Home (1988). It showcases the EWI in a different context, as described in the liner notes by George Varga: “The opening section, ‘Itsbynne Reel’ begins with a vigorous traditional Irish-reel-cum bluegrass duet between Brecker on EWI and violinist O’Connor before leading into a driving, harmonized vamp…” It’s not the typical setting for an electronic instrument, paired with violin, but it totally works, and that is the best part. The EWI is not just limited to jazz or fusion music; it can go anywhere if it fits the context. I also highly recommend listening to the rest of the track; it’s quite unbelievable.
EWI in the context of contemporary music: Although the EWI became more popular among other users, more artists became critical of its legitimacy in music after Michael Brecker stopped using it as often in the 1990’s. Despite that, a lot of musicians have continued to use it at a very high level, one of them being Bob Mintzer. A group called the Yellowjackets features him on saxophone and on a more recent model of the instrument. One of my favorite snippets of the group is a 2009 performance in Stockholm, showcasing the amount of technical ability that can be achieved while staying musical and assimilating vocabulary from blues and jazz.
EWI as its own instrument:
As awesome as the EWI can sound, people often mistake playing it for being too similar to playing an acoustic instrument, specifically the saxophone, clarinet, and flute. As described in an article regarding technique and expressivity on the EWI, the reason the instrument is incredible is that it requires its own technical mastery, completely separate from any other instrument. That is why people often experiment with the EWI but do not get past the early stages. One major difference is that the buttons are touch-sensitive, as opposed to physical keys that are pressed down or tone holes that are covered. It is essential that the finger movement be clean and precise. If users are not paying attention, their fingers can easily brush other buttons and swirl between notes that were not intended. Another challenging concept is the touch-sensitive thumb roller for the octave register. It is not the same as producing the upper and lower harmonics on an acoustic instrument. If users aren’t careful, the thumb can easily roll between octaves, creating a huge whirlpool of morphed, unintentional sounds. There are also seven to eight octaves on the instrument, which is far more than most acoustic instruments have. Figuring out how to properly incorporate this huge range into music can be very challenging as well. The EWI’s further capabilities, including pitch bend, vibrato, and glissandos, are not as easy to use in context as users might think. The mouthpiece is also made of hard rubber, which feels much different than actively vibrating a reed or buzzing into a brass mouthpiece. As an EWI 5000 user myself, I absolutely love the instrument, but the technical challenges are certainly apparent.
EWI and its place in music today:
One issue that the EWI ran into at the beginning of its development is that it was considered a replacement for 80’s jazz/pop saxophone. This limited its usage and its credibility in other contexts. I believe that the EWI should be treated as its own instrument and assimilated into any musical context in which it is appropriate. As something like a technological version of an acoustic wind instrument, it is very unique and has a futuristic, contemporary feeling to it. It can certainly push the boundaries of what is possible in music and can also lead to the creation of other musical niches and genres in the future.
Do drum machines provide a source for “Sexual Healing”? Let’s ask Marvin Gaye.
Do drum machines put you in a “Love Lockdown” or say “Welcome to Heartbreak”? Kanye West knows the answer.
Maybe drum machines will have you joining Whitney Houston in saying “I Wanna Dance with Somebody,” or better yet, have you screaming “Yeah!” with Lil Jon and Usher.
Behind all of the tracks referenced above was one specific drum machine. The Roland TR-808 revolutionized the way music was created and heard. It provided a whole new interface for artists and producers to be creative, and it brought forth an entire soundscape that did not exist prior to 1980, transforming the way we hear rhythm and beats.
History of Roland
Ikutaro Kakehashi, aka Mr. K, was born in Osaka, Japan in 1930. He studied mechanical engineering as a young man, eventually working for a company called Ace Electronics. There he helped manufacture what were called “Combo Rhythm Units”: early drum machines built into organs that provided the organist with a beat when no other musicians were present. Some of the earliest recordings featuring drum machines came from artists such as Sly and the Family Stone and Timmy Thomas utilizing these rhythm units. Continuing his interest in electronic musical instruments, Mr. K founded the Roland Corporation in 1972. https://www.theverge.com/2017/4/3/15162488/roland-tr-808-music-drum-machine-revolutionized-music
Like anything revolutionary or new, people didn’t understand the TR-808 when it was first released in 1980. The machine, however, was unlike anything else on the market. The sounds it produced were synthetic rather than natural; they sounded as if they came from outer space or from the future. People were confused. https://www.theverge.com/2017/4/3/15162488/roland-tr-808-music-drum-machine-revolutionized-music
The New York Scene
In the underground scene of New York City, the sound was about mixing records and spinning vinyl in clubs. In 1981 the game changed when one man introduced the 808’s sounds to the world. Afrika Bambaataa was already mixing artists such as James Brown, Sly and the Family Stone, and Kraftwerk as a DJ, and with the help of producer Arthur Baker, the song “Planet Rock” was recorded for Tommy Boy Records. Soon, the sounds of the 808 were being played throughout the New York clubs. The most noticeable sound of the track was its low bass end. No one had ever heard a bass sound of that magnitude before the 808.
Sexual Healing
The next major event in the life of the 808 came when Marvin Gaye chose to make a career move. Struggling to find happiness in his life, Gaye moved to Belgium to escape family troubles and drug abuse, as well as to remove himself from the Motown sound. He went into the studio with a new, stripped-down writing style and a vision. With the 808, he created a groove and was adamant about that one sound. After recording his vocals over the 808 loop, Marvin Gaye transformed the 808 sound into his best-selling track of all time, one that helped him receive his only Grammy Award. The track “Sexual Healing” helped bring the sounds of the 808 into the pop music world in 1982.
After this breach into the pop music world, the Roland TR-808 sound began to bridge the gap between multitudes of genres and city scenes, from the Miami bass scene to Atlanta, New Orleans, Chicago, and even across the seas to Europe. One of the most revolutionary producers to utilize the 808 is Rick Rubin, co-founder of Def Jam Records and co-president of Columbia Records. Rubin is known for discovering how to use the 808 for a bass line: he found a way to maximize the sustain and tune the pitch, allowing for the creation of bass lines. The sounds of artists such as LL Cool J, the Beastie Boys, Run DMC, and Public Enemy can be attributed to the work of Rick Rubin and the 808. Rubin developed the sound of American hip hop with the help of this iconic drum machine. https://www.britannica.com/biography/Rick-Rubin
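The sustain-and-tune trick makes sense once you know what an 808 kick is underneath: roughly a sine wave with a fast initial pitch drop and an exponential amplitude decay. Stretch the decay and tune the sine to scale degrees, and the drum becomes a bass note. A rough sketch of that idea (illustrative numbers, not Roland’s actual circuit):

```python
import numpy as np

SR = 44_100

def kick_808(pitch_hz, length=0.6, pitch_drop=2.0, decay=6.0):
    """Very rough 808-style kick/bass tone: a sine whose frequency
    sweeps quickly down to `pitch_hz` and whose amplitude decays
    exponentially. A longer decay plus a tuned pitch = a bass note."""
    t = np.arange(int(SR * length)) / SR
    freq = pitch_hz * (1 + pitch_drop * np.exp(-t * 30))  # fast pitch drop
    phase = 2 * np.pi * np.cumsum(freq) / SR              # integrate freq
    return np.exp(-t * decay) * np.sin(phase)

# A tiny "bass line": three tuned kicks at different pitches.
notes_hz = [55.0, 65.4, 49.0]  # roughly A1, C2, G1
line = np.concatenate([kick_808(f) for f in notes_hz])
print(f"{len(line) / SR:.2f} seconds of 808-ish bass rendered")
```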
3 Years of Never-Ending Influence
Between 1980 and 1983, Roland produced and sold 12,000 machines. In developing the machine, Mr. K chose to utilize defective transistors in the analog circuit because they created a unique buzzing sound. As technology improved, access to these specific transistors diminished. Rather than changing the formula, Mr. K stopped all production of the TR-808. Even though only a limited number were created, the Roland TR-808 changed music forever. As a listener, you cannot turn on the radio without hearing a track built on 808 sounds or influence. Artists still use it today. https://www.rollingstone.com/music/music-news/8-ways-the-808-drum-machine-changed-pop-music-249148/
Back to the original question… you could say maybe the faulty transistor is the soul within the 808 drum machine. Maybe it’s the soul within the artist or producer that shines through. All in all though, with the impact the 808 had on music, it is easy to say it has more soul than a ginger like me…
Can breathing be a technology, and how does our breathing evolve?
When we think of technology, we list off every electronic device that comes to mind, and if we can’t think of any more, then we search on the internet, which also falls into the electronic category. But what is rarely thought of as a technology is the body. Your body, my body, everyone’s body. More specifically, an involuntary function of our body: the breath. Ok… isn’t technology supposed to be techy or something? No. The breath is something we as musicians, and I guess also as human beings, depend on. Technology is something developed and, in turn, used to facilitate something or to make something work. In order to make most of our musical instruments work, we must use an airstream, which requires our breath. We don’t just *breathe* into our instruments — there is a lot of thought that goes into making the airstream. The airstream is a development of our breath; it is something we manipulate for our own use, therefore making it a technology for musicians.
We never really wake up and think, “oh man, maybe I should breathe,” but as musicians, we lock ourselves in practice rooms and obsess over and over-analyze breathing and airstream; it is no longer involuntary. It dictates intonation, tone, color, vibrancy, pitch accuracy, etc. As a flutist, any pain in my body, a stiff jaw, a tight chest, or a wack oral chamber affects the air I am trying to achieve. So there is a reason to obsess over it — we cannot play our instruments beautifully without it. The development of our breath into airstream has sparked an interest in developing other technologies to further improve our breath, which in turn helps our airstream, such as variations on the breath builder, breathing bags, finger breathing, and other breathing accessories specific to instrument types, such as the Pneumo Pro for flutists.
It is interesting to think that musicians hold and attend classes that teach you how to breathe. Like, why do I need to sit here for an hour listening to some old guy talk about breathing? Over time, and across the globe, musicians have developed different ways of explaining and manipulating breathing. These classes, although they sound boring, help us think about breathing and air, rather than doing it mindlessly and involuntarily. I’ve definitely attended classes like these, and each time I freak out because I suddenly overthink breathing and then sit there breathing very uncomfortably for the rest of the class; but I also come away with a new perspective on breathing as a tool for improvement.
But actually, productively thinking about how your air moves through your body, and then how it moves through your instrument, allows you to develop as a musician, since air has so much to do with playing (which is pretty wild, honestly). Usually, the longer you play an instrument, the more organic this dream airstream becomes (obviously with lots of practice). I breathe every day and think about my air when I play, so I hope maybe someday I’ll have a marvelous airstream too!