Sex As We Don’t Know It: Computer Music Futures

Pauline Oliveros

This text was originally presented as the keynote address for the International Computer Music Conference 2010 at SUNY Stony Brook.


Do you love the music that you make or that others make?
or
Do you love how you make the music that you make or that others make?

Where do we locate this love?
In our own bodies?
Within a group that shares values?
Both?
Some other place?

How attached are you to the tools and instruments of your music making?
How about your methods and structures for making music?
Your role as a composer, performer, improviser or sound artist?

Composing or improvising for me is about producing new knowledge. I want the music that I make to expand my mind and stimulate the pleasure center in my brain so that I can feel good and transmit good feeling to others through my music. Pleasure in music is closely related to sexual pleasure: the associated areas that light up in the brain are found in close proximity to each other (Levitin).

I continue the practice of Deep Listening because I benefit from it, and others tell me that they also benefit, deriving pleasure and knowledge from it. My intention is to continue to deepen the practice. Listening takes place not in the ear but in the brain/body, after the ears gather and transduce the sound waves and deliver them to the auditory cortex. So listening is already inside of the body and not out in the world, even though we perceive sound as outside of us.

Deep Listening is a practice that facilitates the process of experiencing heightened awareness of sound, silence and sounding. Such experiences are intended to enhance or expand the listening mind across all abilities including those with special needs.

Deep Listening is a form of meditation that is a natural possibility for humans. Anyone can be, or may already be, a deep listener. For most people hearing occurs all of the time, listening occurs most of the time, and yet listening remains mysterious in its process.

So far as I know, listening is not yet measurable as a scientific physical process. Listening remains a private matter for each of us.

iPods, earbuds and other in-ear listening devices are only a crude step along the way toward listening completely inwardly rather than just privately or publicly. In the future the ears may be bypassed altogether for a totally private experience.

Let’s take a look at where we are with hearing technology:

The stigma associated with hearing loss has diminished greatly from former times. Still, many people hide their hearing loss or are not aware of it.

There is often impatience with the hard of hearing rather than support in getting professional help. Regrettably, many people never address their loss.

Fortunately, from my perspective, hearing loss does not necessarily interfere with listening.

Numerous returning veterans have suffered hearing loss from explosions, forcing the United States military to research and produce remedies for these injuries. Consequently, hearing research is finally beginning to catch up with eye research. Until recently much more was known about the visual system than the auditory system, since sight continues to be favored over sound in this culture. The highly sophisticated hearing aids resulting from this research have filtered down to the general populace.

Some of the improvements include (from Healthy Hearing):

  • Full spectrum, broad range sound
  • A more natural organic listening experience
  • Digital signal processing
  • Built in automation of settings
  • Automated feedback suppression
  • Spatialized sound
  • Automatic or manual adaptation to all listening environments
  • Digital noise reduction
  • Wireless technology including communication with other audio devices and listening directly to TV, landline phone, cell phone or MP3 player

As usual, this list says nothing about improving listening as I have defined it, only the means to improve hearing. Still, this is a remarkable list of features for in-ear hearing aids, and a pointer to future ear enhancements incorporating the latest neuroscience research findings.

This brings us back to contemplating the future.

With this level of technology, simulations could include hearing through the ears of others for diagnostics, curiosity and creativity. Other possibilities include hearing through the ears of other species of animals, birds and insects. I wonder what a grasshopper hears with its ears on its knees. What do bees hear? is a most often asked question. Bees have no external ears; they sense vibrations in the air with a tympanic hearing organ.

However, it is the improvement of our mental powers and abilities, through our own efforts as well as transhuman efforts, that will bring us the pleasure and knowledge that we need and seek.

The means to achieve this improvement comes through our music: its conception and its implementation of new structures and tools.

Let’s imagine computer music futures.

One’s listening has expanded beyond the boundaries of our human sensory systems. We are listening to the unknown, our bodies superseded by machines and machine intelligence – intelligence that can already calculate and sense far more than our human selves. What will happen to us? Senses will be amplified: we will see more, hear more, touch more and process far faster than ever before. This is already true of the Super Soldier in today’s military, with augmentation of his/her physical strength, super-vision, ultra-sensitive sensors and protective responsive fabric. Iron Man the fantasy is becoming reality sooner than we think.

As we develop listening we are involved in transhuman activity, especially if we are enhancing hearing and listening through technology or through the way that we organize our listening. The increasing sophistication and purposes of technology are moving us into a posthuman world.

Sex is no longer absolutely necessary for reproduction. In fact, one researcher labeled it inefficient. This means that babies can be born to any parent – single, coupled or otherwise. Sex for pleasure is another matter.

Computers are assisting us to go deeper into the mind as knowledge explodes and sharing of knowledge accelerates.

So what happens to us as we continue to merge with our technology?
What happens when machine intelligence exceeds human intelligence?
Our bodies will become increasingly unnecessary.

Gender will not matter. The voluntary elimination of gender in the human species through the application of advanced biotechnology and assisted reproductive technologies is creating a new social philosophy in keeping with such a transformation.

We have evolved a music that has moved through acoustics to electronics to digital technology in fifty years, as evidenced by this ICMC 2010. Electronic music before digital technology developed out of post-World War II castoffs such as signal generators, other test equipment and tape machines. Today’s Super Soldier castoffs may yield our next musical materials and instrumentations.

Currently we are still bound to analog technology for output from digital technology, and to humans to trigger that technology. Loudspeakers will become obsolete. As analog technology fades into the past, our perceptions may simply become internalized as computers evolve toward a new kind of sentience and as we are enhanced and connected with instantaneous communication in a collective network.

Sexual pleasure may be felt without traditional stimulation of membranes; this kind of stimulation is rapidly becoming unnecessary and obsolete (who has time for it anyway?). A simulation in an amplified brain could be triggered anytime desired, at any intensity. This is sex as we don’t know it. Perhaps musical pleasure can be experienced this way too: music as we don’t necessarily know it.

Imagine composing for an audience of posthuman, hybrid beings.

In a network, your music might be heard below and beyond the range of hearing, with timbres in some new and unknown dimension. Rhythms might come from the dance of molecules or atoms, or of galaxies or black holes. A rather different venue is called for: one delivered directly to the brain.

How far can we go until there is a new species that lives beyond human needs? We are being absorbed into this new species.

How long will this take? Think of moving from CDs to file transfers, from landlines to cell phones, from email to texting. Technology is accelerating evolution. Humans and computers will be merged by the end of the 21st century.

Listen to how fast our music is transforming during this conference. Will music disappear with humans? We may understand this evolution by studying the music of other species now coexisting with us, like birds, chimps and whales, or the posthumans that may already be living among us due to advances in medical nanotechnology.

Our next music may come from nanotechnology. In recent research at UC Berkeley, Jeffrey Grossman has been using vibrational energy exchange to, in effect, “listen” to the vibrations of a molecule. “The concept is much like bringing a set of nano tuning forks up to a molecule and seeing which ones become excited. Those would form a chord of ‘notes’ that are unique to that particular molecule. Thus, the molecule can be identified.”
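To make the “nano tuning forks” image concrete, here is a minimal sketch in Python of how such a fingerprint match might work. The molecules, frequencies and tolerance below are invented for illustration; this is not Grossman’s actual method, only a toy analogy of matching a measured set of vibrational “notes” against a library of known chords.

```python
# Toy illustration: identify a molecule by its vibrational "chord".
# All frequency data below are invented placeholders, not real spectra.

# Library of known molecules and their characteristic vibrational
# frequencies (arbitrary units): the "notes" of each chord.
LIBRARY = {
    "molecule_A": [420.0, 980.0, 1610.0],
    "molecule_B": [310.0, 980.0, 2050.0],
    "molecule_C": [420.0, 1130.0, 1610.0],
}

TOLERANCE = 5.0  # how close a "tuning fork" must be to ring


def note_matches(measured, reference):
    """A tuning fork rings if it lies within TOLERANCE of a measured note."""
    return abs(measured - reference) <= TOLERANCE


def identify(measured_chord):
    """Return the library molecule whose chord best matches the measurement."""
    best_name, best_score = None, 0
    for name, reference_chord in LIBRARY.items():
        score = sum(
            any(note_matches(m, r) for r in reference_chord)
            for m in measured_chord
        )
        if score > best_score:
            best_name, best_score = name, score
    return best_name


print(identify([421.5, 979.0, 1612.0]))  # -> molecule_A
```

The point of the analogy is that the chord, not any single note, identifies the molecule: molecule_A and molecule_C share notes, yet the full set distinguishes them.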

The sound of a neuron firing has been heard in a laboratory, as you may know. Scientists are listening for microscopic data, going beyond listening to breathing and heart rate. Thus we may be able to listen to the microscopic music of our own cellular structures in the future.
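As a rough illustration of how microscopic data might be made audible, here is a minimal sonification sketch in Python: it maps a short data series (the values are synthetic placeholders) onto an audible sine tone whose pitch follows the data, and writes the result to a WAV file. This is one naive strategy among many, not how laboratory neuron recordings are actually made.

```python
# Minimal sonification sketch: map a data series to pitch.
# The data values are invented placeholders for illustration.
import math
import struct
import wave

SAMPLE_RATE = 44100
data = [0.1, 0.4, 0.9, 0.3, 0.7, 0.2, 0.8, 0.5]  # synthetic measurements

samples = []
phase = 0.0
for value in data:
    freq = 220.0 + value * 660.0        # map 0..1 data to 220..880 Hz
    for _ in range(SAMPLE_RATE // 4):   # a quarter second per data point
        phase += 2 * math.pi * freq / SAMPLE_RATE
        samples.append(int(32767 * 0.5 * math.sin(phase)))

with wave.open("sonification.wav", "w") as f:
    f.setnchannels(1)                   # mono
    f.setsampwidth(2)                   # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(struct.pack("<" + "h" * len(samples), *samples))
```

Keeping a running phase, rather than restarting the sine wave at each new frequency, avoids audible clicks at the data boundaries.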

After computers pass the Turing test, demonstrating a mind indistinguishable from a human’s in terms of knowledge, emotion and self-awareness (the year 2029, according to Kurzweil), maybe we can have “smart” music that finds its own way to the appropriate audiences, wherever they are and in whatever form they may take.

When nanobots have entered our bloodstreams and performed hyper-accurate brain scanning, the first AI simulation of the human brain will appear. Subsequent “AIs will inevitably become smarter and more powerful than unenhanced humans,” says Kurzweil. “Also AIs will exhibit moral thinking and will respect humans as their ancestors.”

May AIs be super musicians. And if they are, I wonder how it will be to perform and improvise with them. AIs could perform at any rate of speed and track any number of voices. They could listen to deeper and higher tones and parse any noises. All expressive actions could be more detailed and refined than ever before. Their output could be delivered individually to any mind.

I would like to line up for more enhancements! How about musical nanobots designed to retrofit humans for detailed upgraded musical abilities?

Interface is the central problem in computer music: how to access the powers of the computer for musical performance and creation, as well as for everyday life purposes. Direct human–machine interface is a solution I have been contemplating for a long time as I struggle to deal with the ever-increasing knobs and switches that I need in order to control numerous parameters simultaneously on the fly during performances.

“With the line blurred between machines with human-level intelligence and humans upgraded with cybernetic implants that enhance human cognitive and physical abilities direct interface between humans and machines will be possible.” (Kurzweil)

My own musical journey has taken me from analog tape delay to digital technology in a search to expand my capabilities as an improviser/composer. Having multiplied the number of lines, rhythms and timbres that I can track simultaneously, my performances require continually accessible controls that can now only be managed with algorithms. Currently a virtual agent that can listen and make intelligent decisions about how to improvise with me and others is underway at Rensselaer Polytechnic Institute, supported by the National Science Foundation.
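As a concrete, if greatly reduced, picture of what “managed with algorithms” can mean, here is a minimal sketch in Python of one-to-many macro control: a single performer gesture (one knob value) is mapped algorithmically onto several processing parameters at once. The parameter names and ranges are invented for illustration and are not taken from any particular system.

```python
# Minimal one-to-many macro control: one knob drives many parameters.
# Parameter names and ranges are invented placeholders.

# Each parameter has (low, high) bounds; the macro interpolates between them.
PARAMETER_RANGES = {
    "delay_time_s": (0.05, 2.0),
    "feedback": (0.1, 0.9),
    "filter_cutoff_hz": (200.0, 8000.0),
    "wet_dry_mix": (0.0, 1.0),
}


def apply_macro(knob):
    """Map a single knob value in [0, 1] onto all parameters at once."""
    knob = max(0.0, min(1.0, knob))  # clamp the incoming gesture
    return {
        name: low + knob * (high - low)
        for name, (low, high) in PARAMETER_RANGES.items()
    }


# One gesture mid-performance updates every parameter simultaneously.
for name, value in apply_macro(0.75).items():
    print(f"{name}: {value:.3f}")
```

A real performance system would add per-parameter curves and interpolation between stored presets, but the principle is the same: the algorithm, not the performer’s hands, fans one gesture out to many controls.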

OK this is where more futures come to life.

Well, maybe you don’t agree with Kurzweil or me about such a drastic transformation of humans and machines. However, ready or not, agree or not, we are already infected with digital and nanotechnology and are moving within and toward this merger.

I have my first implant: an artificial lens in my left eye that corrects for distance and astigmatism. With my glasses I now have 20/20 vision in my left eye. Spring this year looks greener to me than ever before. The green is breathtaking. It took me a while to realize that my transhuman eye is contributing to this beauty. At least it is delivering more accurate signals to my brain for me to enjoy my renewed color perception. Other friends have cochlear implants, heart and liver transplants, etc., and are alive because of their implants.

Artificial organs are now sustaining lives. Prosthetics are restoring mobility for millions of people. Research is advancing on the restoration of damaged hearing via regeneration of hair cells in the cochlea. Birds are already capable of this kind of hair cell regeneration.

One of the top predictions from Scientific American’s “7 Top Predictions for the Future of the Brain” is Wi-Fi everything: “Microchips in or on your brain will enhance memory, store data, and connect wirelessly to the internet, eliminating your cell phone and allowing you to control machines or even clones via mental wi-fi.” Neuroscience beyond Iron Man 2 is here.

We need to be careful of what we build upon. Posthuman citizenry is a distinct possibility, with old and new political, social, educational, philosophical and musical problems to solve. For me the time is right to investigate the possibility of becoming a posthuman citizen.

I want to be a transformed musician who listens, creates, collaborates, performs new music, and remains thoughtful and concerned about others no matter who they are or what their origin may be. Technology is taking us on a wild sexy ride into the future. If we are mindful of our purposes, creations, designs, models and simulations we could open up new and thrilling musical territory, as we don’t know it.


Pauline Oliveros’s career spanned over fifty years of boundary-dissolving music making. In the ’50s she was part of a circle of iconoclastic composers, artists and poets gathered together in San Francisco. A recipient of the 2012 John Cage Award from the Foundation for Contemporary Arts, Oliveros was Distinguished Research Professor of Music at Rensselaer Polytechnic Institute, Troy, NY, and Darius Milhaud Artist-in-Residence at Mills College. Oliveros was as interested in finding new sounds as in finding new uses for old ones; her primary instrument was the accordion, an unexpected visitor perhaps to the musical cutting edge, but one which she approached in much the same way that a Zen musician might approach the Japanese shakuhachi. Pauline Oliveros’s life as a composer, performer and humanitarian was about opening her own and others’ sensibilities to the universe and facets of sound. From the 1960s on she influenced American music profoundly through her work with improvisation, meditation, electronic music, myth and ritual. Pauline Oliveros was the founder of “Deep Listening,” which came from her childhood fascination with sounds and from her work in concert music with composition, improvisation and electro-acoustics. Pauline Oliveros described Deep Listening as a way of listening in every possible way to everything possible to hear, no matter what you are doing. Such intense listening includes the sounds of daily life, of nature, and of one’s own thoughts, as well as musical sounds. “Deep Listening is my life practice,” she explained, simply. Oliveros was the founder of the Deep Listening Institute, formerly the Pauline Oliveros Foundation, now the Center for Deep Listening at Rensselaer.