In the summer of 2011, Lauren Oakes, age 30, found herself limboing through a chaos of fallen trees on Alaska’s Chichagof Island, a densely forested land mass situated between the Gulf of Alaska and the Inside Passage. She had come to gauge the health of Alaska’s temperate rainforest, which is suffering from a kind of fever.
The problem, as Oakes saw it, is that climate change is killing great swaths of Callitropsis nootkatensis, the Alaska yellow cedar. While healthy stands of cedar still populate the northern reaches of Oakes’ study area, their lifeless trunks rise in great thickets in the south, where rain is increasingly replacing snowfall. Meanwhile, western hemlock is usurping the tracts of forest the cedar once dominated.
The resulting study, “Long-Term Vegetation Changes in a Temperate Forest Impacted by Climate Change,” published in the journal Ecosphere, capped Oakes’ doctoral work at Stanford’s Emmett Interdisciplinary Program in Environment and Resources (E-IPER). Translation: longtime inhabitants of Alaska’s coastal forests, like the yellow cedar, will continue to perish in large numbers due to a warming planet, which will trigger a massive change in the composition of species on the forest floor.
Relatively few saw the study, much less read it. (The same is true of the thousands of scientific papers released each year, whose primary audience is the research community.) But this spring, Oakes’ Stanford colleague Nik Sawe (pronounced saw-vay) liberated the raw numbers from the study and transformed them into music, using a discipline-bending process called data sonification. Part science, part art, data sonification transmutes metrics into soundscapes using a combination of a composer’s aesthetic sensibilities and special modeling software.
The goal? To reveal nuances of scientific phenomena not easily seen. In the case of Oakes’ sonified study, which eerily resembles phrases from piano sonatas by the Russian composer Alexander Scriabin, the music conveys the meaning—and pathos—of her findings in about three minutes.
Sawe, a social scientist who graduated from E-IPER this year, is a pioneer in the field of environmental neuroeconomics, meaning he spends his days deciphering the root cause of earth-friendly investment decisions. What, for example, compels a homeowner to invest in an Energy Star refrigerator or donate money to save a natural landscape from ruination? (He uses fMRI to light up test subjects’ brains when they are asked to make such decisions.)
His research is still in its infancy, but Sawe is clear about one thing: there’s little correlation between philanthropy and bar charts demonstrating the decline of ecosystems. At their core, Sawe’s studies show that humans act pro-environmentally when they are exposed to stimuli that elicit bliss or moral outrage.
Sawe has used images of iconic landscapes, like Yosemite Valley, to tickle test subjects’ nucleus accumbens—the same dopamine-rich region of the brain that craves sex, drugs, and rock and roll—before asking for money to preserve those landscapes. (Most people donate.) His desire to further plumb the brain’s pleasure centers led him last spring into Stanford’s mysterious “mansion of music” so he might learn to convert big data into an environmental message pleasing to the ear.
The mansion is Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA, pronounced “karma”), one of the country’s first computer music research programs. There, raw data sets enter a computer and emerge as music. How it works: software maps the data onto the Musical Instrument Digital Interface (MIDI), a protocol that a synthesizer translates into computerized music, such as loops or samples of the clarinet or guitar.
When the numbers become a soundscape, researchers can hear information that has escaped their eyes, and the music’s disposition can reveal the meaning of complex systems to nonscientists, often in a single sitting.
“We learn behaviors of systems in an intuitive way and internalize them and recognize them because we’re good at hearing patterns,” says Chris Chafe, who heads up the CCRMA and calls himself part scientist, part musician.
Though computers have been used to compose music for more than 50 years, the subspecialty of data sonification is still in its infancy, says Chafe. And the art form is constrained only by the composer’s imagination, musicianship, and scientific savvy. (In the proper hands, the statistics of climate change—such as the mathematics contained across the rows of Oakes’ spreadsheets—might contain a musical trove, if only someone were able to perceive them and elicit their lyricism.) Sonifiers like Chafe—and now his mentee Sawe—imbue their data-driven music with their own aesthetic to transform scientific data into a comprehensible musical narrative. Sawe hoped to make pleasing music from Oakes’ data while maintaining the integrity of her research.
The tangle of numbers Oakes e-mailed to Sawe was a chronosequence, a data set for trees at sites affected by a phenomenon—say, tree death—at different points in time. (Researchers commonly use the technique to show how ecological processes occur over many years.) Oakes’ chronosequence included five conifer species. The study area stretched north to south, from Glacier Bay National Park and Preserve to the Slocum Arm in the Tongass National Forest. She and her team surveyed 48 plots across that range, recording mosses, sedges, seedlings, saplings, and snags, along with each tree’s height and diameter. The data set contained nearly 30 variables for more than 2,000 conifers.
First, Sawe needed to narrow Oakes’ data, partly to shoehorn it into the constraints of Western music’s 12-note octave. A piece of music is divided into short segments of time called measures; within each measure are the individual notes that constitute the music itself. For Oakes’ data, each of the 48 tree plots became one measure, and the number of notes in each measure corresponded to the number of trees in each plot. (Dead trees became dropped notes.)
Each note’s pitch was determined by the height of individual trees, and the duration of each note was fixed to the health of each tree’s crown, with fuller crowns assigned greater sustain. The trunk diameters informed the force with which the notes were played. Sawe also assigned a different instrument to each species: piano for yellow cedar, flute for western hemlock, clarinet for shore pine, and violin for mountain hemlock. Because Sitka spruce is a popular material for string instruments, he mapped that species to the cello. Sawe now had his ensemble. He used the key of D natural minor. Structurally, the composition would resemble a kind of flyover of the terrain Oakes and her team measured, with the initial strains of music coming from the north, near Glacier Bay National Park and Preserve, and ending in the south, by the Slocum Arm.
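The mapping above can be sketched in a few lines of Python. This is a minimal illustration in the spirit of Sawe’s approach, not his actual code; the field names, pitch ranges, and scale-snapping rule are assumptions made for the example.

```python
# D natural minor as MIDI pitch classes: D, E, F, G, A, B-flat, C.
D_MINOR = [2, 4, 5, 7, 9, 10, 0]

# Species-to-instrument assignments, as described in the article.
INSTRUMENT = {
    "yellow_cedar": "piano",
    "western_hemlock": "flute",
    "shore_pine": "clarinet",
    "mountain_hemlock": "violin",
    "sitka_spruce": "cello",
}

def snap_to_scale(midi_pitch):
    """Move a raw MIDI pitch to the nearest pitch class of D natural minor."""
    octave, pc = divmod(midi_pitch, 12)
    nearest = min(D_MINOR, key=lambda s: min(abs(pc - s), 12 - abs(pc - s)))
    return octave * 12 + nearest

def tree_to_note(tree):
    """Map one surveyed tree to an (instrument, pitch, duration, velocity) note.

    `tree` is a dict with keys species, height_m, crown_health (0-1, 0 = dead),
    and diameter_cm -- hypothetical field names, not Oakes' actual columns.
    """
    if tree["crown_health"] == 0:
        return None  # a dead tree becomes a dropped note (a rest)
    pitch = snap_to_scale(48 + int(tree["height_m"]))    # taller tree, higher pitch
    duration = 0.25 + tree["crown_health"] * 1.75        # fuller crown, longer sustain
    velocity = min(127, 40 + int(tree["diameter_cm"]))   # thicker trunk, louder note
    return (INSTRUMENT[tree["species"]], pitch, duration, velocity)

def plot_to_measure(plot):
    """One plot becomes one measure: its trees' notes, with rests dropped."""
    return [n for n in map(tree_to_note, plot) if n is not None]

# One toy plot: a healthy cedar, a middling hemlock, and a snag (dead tree).
plot = [
    {"species": "yellow_cedar", "height_m": 22, "crown_health": 0.9, "diameter_cm": 60},
    {"species": "western_hemlock", "height_m": 15, "crown_health": 0.5, "diameter_cm": 30},
    {"species": "yellow_cedar", "height_m": 18, "crown_health": 0.0, "diameter_cm": 50},
]
measure = plot_to_measure(plot)
```

Running the sketch on the toy plot yields a two-note measure: the dead cedar vanishes as a rest, while the surviving cedar and hemlock sound as a piano note and a flute note, each constrained to the D natural minor scale.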
Throughout the piece, Sawe wanted to highlight the relationship between the yellow cedar and the encroaching western hemlock. He braided the sounds of the two species, both to amplify their voices and to highlight the fall of one and the rise of the other. Just as the keyboard and strings in Mozart’s “Sonata for Piano and Violin in E minor” play off of one another to create a musicality greater than the sum of their parts, this musical death dance between the two becomes, in its own way, the sound of climate change.
And what of the music that emerged from the data? While it’s true that Oakes’ study limns the yellow cedar’s demise, it’s equally true that Sawe’s composition sounds nothing like a dirge.
At first, the piano dominates the composition—remember, the piece opens in the north, where dense stands of yellow cedar are still in abundance. By the end, however, the flute has supplanted the piano, suffusing the music with something that sounds like hope. But when Sawe strips away the flute, strings, and woodwinds from the piano in a second rendition, the gaps between piano notes from beginning to end make the ravages of climate change more than evident. A live orchestra will perform the piece on Stanford’s campus this autumn.
Sawe considers himself a dabbler in sonification, yet Joel Thome, a Grammy-winning composer of modern classical, or “new” music, lauded the composition for its timbral color and evocation of the impressionists. “He’s tapping into a kind of synesthesia,” Thome says, “because the composition does reflect the natural ambience of the material that he’s working with and its relationship to climate change.” At the same time, Thome questioned the use of D natural minor, which he supposed was meant to imbue the piece with poignancy in a key acceptable to Western ears. (True, says Sawe.) “I might have encouraged him to walk on the wild side, to create the music of his own time,” Thome says.
“Form, of course, is the province of the artist,” says Chafe. “Nik could have made the piece extremely unnerving by making other musical choices. But by making it not disturbing, he’s helping his audience hear the nuance in the data.” Sawe and Oakes are exploring avenues to bring the yellow cedar composition and others to a larger audience. They’ve spoken with staffers at the San Francisco–based California Academy of Sciences about a public installation. “This was just the first exploration,” says Sawe. “I’m looking for ways to portray data that are more complex with regard to music theory but are still intuitive to listeners.”
It’s structure and story that Oakes herself perceived when Sawe shared the piece with her for the first time. “For me, I can hear what I’ve only seen through two years of obsessing about many different ways of visualizing lots of different variables,” she says. “There is an element of loss, for sure. But there’s also an element of a new ecosystem emerging that people can potentially relate to in different ways, right? So it is a sad song as written. I kind of wonder if there’s a happy one, too.”