A 365-Day Project
"We Are All Mozart"
I talk too fast. Too many words, too many thoughts. That's how the neurons fire. Whaddya want? Does it affect my music? Too many notes? Too much going on? Density, yeah, density... Okay, break.
There is an unusual intersection of musical, technological, and economic forces taking place. It's not a set theory intersection, but merely the simultaneous and overlapping presence of forces that contain few common elements save for existing at once. These forces all act on how we use sound, and they have moved toward each other, influencing sonic expression, and soon will be moving away again. The results are unknowable.
Very generally, the forces include worldwide networking, expansion of intellectual property, technological advances in sound manipulation, and musical ubiquity. This list is unremarkable and well-known, but its intersection has created serious anomalies in listening that affect the ability to grasp a musical composition now and in the future, and may influence composers' work.
This all sounds reasonable. What is the nexus of crisis here? Here it is: Conflicting expectations and incompatible concepts create a severe dissonance in transmission, reception and perception.
Loss and lossiness are a major theme of the transition to a digital paradigm. Digital information, permanent-sounding in its binary consistency, is actually friable. A few critical lost bits cause severe degradation and destruction of data. Storage deteriorates, and we have a loss of history and heritage.
At a less catastrophic level, digital noise in photographs or the compression of audio and image files may seem insignificant -- after all, with proper visual or acoustic modeling, the eye does not miss and the ear cannot distinguish what is lost. It has been compressed to fool the senses in a 'good' way. The soft lossiness of film emulsion was analog: we could infer what was missing from the gradations of color and gray. But digital pictures enlarge into simple blocks. There is a point below which no information exists, a digital wall. Smart algorithms may recover some of it through a programmed understanding of the analog world, but that is not enough. Music gives us an even starker recognition. Compression such as Sony's excellent ATRAC (used in MiniDiscs) closely models the ear-brain interaction to mask sounds below a certain volume and to cover sounds under reverberant tails. Even so, an enormous amount is lost.
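The irreversibility described here can be sketched in a few lines of Python. This is an illustration under a deliberately crude assumption -- simple requantization standing in for perceptual coding, which real codecs like ATRAC do far more cleverly -- but the core point survives: once distinct sample values collapse onto the same coarse value, no remixing or processing can tell them apart again.

```python
# Sketch: lossy reduction discards information irrecoverably.
# (Illustrative only -- a real perceptual codec uses psychoacoustic
# masking, not naive requantization as shown here.)
import math

def quantize(samples, levels):
    """Map each sample in [-1, 1] onto a coarse grid of `levels` steps."""
    step = 2.0 / (levels - 1)
    return [round(s / step) * step for s in samples]

# A 'dense' signal: a loud component plus a much quieter one.
N = 64
signal = [math.sin(2 * math.pi * 5 * n / N)
          + 0.01 * math.sin(2 * math.pi * 13 * n / N)
          for n in range(N)]

coarse = quantize(signal, 9)  # only 9 amplitude levels survive

# Many distinct inputs map to the same output, so the mapping
# cannot be inverted: the quiet strand is simply gone.
error = max(abs(a - b) for a, b in zip(signal, coarse))
print(f"max reconstruction error: {error:.4f}")
```

The quiet second component -- the compositional detail buried in the texture -- vanishes entirely below the quantization step, which is the digital wall in miniature.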
Let's take this from a composer's viewpoint, and for this a reference to Elaine Lillios's university classes is useful. A few years ago, she chose exirxion, an excerpt from Circular Screaming, for class analysis. She wrote:
One striking aspect of this piece is the number of things happening at one time. Listen to the layering in this piece and answer the following questions.
It's what Elaine asks first that is relevant here. A compressed version (such as that on my own website) has eviscerated some of the information. No re-mixing, no rebalancing, no re-equalization will ever bring that information back. The density deliberately composed into the piece, a density that is audible in the master and even more so in the multitrack version, is simplified and stupidized. Studio tools that have allowed the enhancement of placement, color, and knitting together so many strands are rendered ridiculous by compression.
Acoustic work suffers as well. Another dense piece, full of registers and changing instrumental purposes, is heard in Softening Cries, an orchestral work that has an increasingly energetic string ensemble, many rising lines of subtle percussive effects, quickly moving clustered woodwinds, and an embedded string quartet. Listening to the recording is by itself a rewarding experience, but imagine if instead that recording was done with the highest level of equipment (instead of my simple stereo DAT), with a fully convincing surround of the original acoustic environment. Now imagine if in the not-too-distant future, that original recording could be re-balanced instrument by instrument at home. This is not a new concept. In fact, the original recorded release of John Cage & Lejaren Hiller's HPSCHD was packaged with a computer printout (unique to each copy of the record) called KNOBS that listed instructions for manipulating volume, balance, treble and bass controls to create a new experience of the recording and include the participation of the home listener. And that was forty years ago. We have the technology to re-involve the listener, but not with compressed files lacking multichannel dimensionality -- and certainly not with the musical environment we are encouraging.
Both remaining elements militate against that involvement. The first is the expansion of intellectual property laws and locks. Given the state of Digital Rights Management (DRM), the remix tape of the 1970s-90s will not exist in any format within a few years. The creative impulse that gave rise to hip-hop and turntable artists of enormous imagination will cease. Whatever sampling of existing music might have meant in 1995, it will have no substantive meaning in 2015.
What is not accomplished by laws and locks will be achieved by the psychological shift to personal music. It seems such an achievement, the iPod concept: an entire library on the hip, in the pocket, or around the neck, swapping acoustic and stylistic and cultural worlds with the slightest touch, or even at random. Even setting aside an inevitable ennui with stylistic sensibility, there is the acoustic damage -- to hearing, to the ability to listen past compression into the music's color and placement, to the sense of direction and acoustic space. This last is not so easy. Earphones may put the sound in the head, but that is not the same as having the artist's intentions in your ears. There are so many psychoacoustic cues that don't exist when music is squeezed into the headphone universe. There's a pair of 1970s-era so-called 'four-channel' headphones in my studio, in which the designers had embedded two transducers on each side, as if somehow the reflections and ear-space that make directionality possible could be achieved so clumsily. They were expensive, hot, heavy -- and they didn't work.
Not working is what happens to carefully composed and engineered music in that semi-distracted headphone space. (And this discussion doesn't even begin to touch on the wealth of musical experience made available -- the question of 'how do you choose' if not at random? Another day for that.)
There is a secondary effect of ubiquitous personal music. Many people render their entire existing libraries to compressed formats for generic listening. One professor is in the process of converting his entire LP collection to iPod. The degraded sound of the LP further degraded, and then used for teaching -- aside from the fact that it reiterates the necrosonic cycle of listening to 'oldies', this process mixes the worst of the analog and digital domains for a depressingly muddled result. (Yes, good conversions can be done, but it's not merely plopping the LP on the old turntable and pushing it through the mp3 ripping software.)
Moreover, portable music changes concentration. Who cares about compression if it's just a pair of earbuds interfered with by outside sounds that one listens to casually? This is not a generational complaint; were the technology well-designed and the players somehow stylistically intelligent and able to compensate for the external world interaction, that would be one step closer to making the music meaningful as art. But because it also minimizes the creative act of choosing, it alienates the listener's role even further, and reduces the expectation of being a participant to almost nothing. There will be no KNOBS today (there are no knobs anyway; the concept is extinct).
The studio world has moved incessantly toward a higher quality of sound and new concepts in its presentation: Remixes in digital format, Super Audio CDs (SACD), multimedia CDs with added content, the display of text information, surround sound in five or seven channels with additional low-frequency effects (LFE), DVD-Audio, etc. Recording is done at sampling rates so high that the steep, distortion-inducing analog anti-aliasing filters previously needed have vanished from the chain. These all supplement years of research into surround techniques that are extraordinary in replicating real (or imaginary) acoustic spaces -- ambisonics, ambiphonics, convolution, B-format, UHJ, etc. -- and can even place electroacoustic works in a 'real' concert hall.
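Why do higher sampling rates relax the analog filtering problem? The anti-alias filter's transition band runs from the top of the audible range (roughly 20 kHz, an assumed round figure) up to the Nyquist frequency, half the sampling rate. A quick sketch of the arithmetic:

```python
# Headroom available to the analog anti-alias filter: the gap
# between the top of the audible band and the Nyquist frequency.
# 20 kHz is an assumed nominal limit of hearing, not a spec value.
AUDIBLE_TOP_HZ = 20_000

def transition_band_hz(sample_rate_hz):
    """Width of the filter's transition band at a given sample rate."""
    nyquist = sample_rate_hz / 2
    return nyquist - AUDIBLE_TOP_HZ

for fs in (44_100, 96_000, 192_000):
    print(f"{fs} Hz sampling: {transition_band_hz(fs):.0f} Hz of filter headroom")
```

At 44.1 kHz the analog filter had to fall off a cliff in about 2 kHz, which demanded steep, phase-distorting designs; at 96 or 192 kHz the transition band is tens of kilohertz wide, and a gentle filter suffices.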
So we have reduced the capability of discernment through personal music while simultaneously refining the material to be discerned; we have outlawed and locked away the ability to be involved with how the music is re-created while simultaneously providing a world's worth of musical wealth online to touch the imagination. And stylistically, the pop and nonpop worlds take a step backwards for each technological step forward, in a tightening retro spiral.
Returning to density: It bothers me that my multichannel compositions, not just the old ones from the quadrisonic days of the 1970s but newer ones like the eight-channel Spammung and Memento Mori and six-channel Detritus of Mating (with its enormously dense 100-voice canon), will wait yet another generation or two to be heard as intended ... even if by then we have overcome the damage done by ubiquitous headphone-based listening or the evanescent character of digital storage itself. New formats are introduced every few years. Who will bring this vast worldwide library of new nonpop forward to those formats? Will they be incompatible? How much will simply vanish in a digital diaspora?
Not long from now Internet II will overcome the compression issues, and new storage may be more permanent. Maybe if I just talk fast enough...