Category Archives: Music

Using search data to predict which new songs will be hits

In 2000, a Stanford Ph.D. named Avery Wang co-founded, with a couple of business-school graduates, a tech start-up called Shazam. Their idea was to develop a service that could identify any song within a few seconds, using only a cellphone, even in a crowded bar or coffee shop.

At first, Wang, who had studied audio analysis and was responsible for building the software, feared it might be an impossible task. No technology existed that could distinguish music from background noise, and cataloging songs note for note would require authorization from the labels. But then he made a breakthrough: rather than trying to capture whole songs, he built an algorithm that would create a unique acoustic fingerprint for each track. The trick, he discovered, was to turn a song into a piece of data.
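Wang later described the technique in an academic paper, but the gist is easy to sketch: reduce each song to a “constellation” of spectrogram peaks, which tend to survive background noise, then hash pairs of peaks into compact fingerprints. Below is a toy illustration in Python; the function names, thresholds, and parameters are illustrative guesses, not Shazam’s actual implementation.

```python
# Toy spectral-peak fingerprinting, loosely in the spirit of audio-search
# systems like Shazam's. All parameter values are illustrative, not real ones.
import hashlib

import numpy as np
from scipy.ndimage import maximum_filter
from scipy.signal import spectrogram


def fingerprint(samples, rate=11025, fan_out=5):
    """Map raw audio samples to a list of (hash, time_offset) pairs."""
    # 1. Spectrogram: energy per (frequency, time) cell.
    _, _, spec = spectrogram(samples, fs=rate, nperseg=1024)
    spec = np.log(spec + 1e-10)

    # 2. Keep only local peaks; loud landmarks tend to survive bar noise.
    peaks_mask = (spec == maximum_filter(spec, size=20)) & (spec > spec.mean())
    f_idx, t_idx = np.nonzero(peaks_mask)
    peaks = sorted(zip(t_idx, f_idx))  # (time, frequency), ordered by time

    # 3. Pair each peak with a few that follow it, and hash the geometry.
    hashes = []
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1 : i + 1 + fan_out]:
            token = f"{f1}|{f2}|{t2 - t1}".encode()
            hashes.append((hashlib.sha1(token).hexdigest()[:10], t1))
    return hashes
```

A server-side index then maps each hash to the songs and offsets where it occurs; a noisy ten-second clip matches when many of its hashes agree on one consistent time offset into a single song.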

Shazam became available in 2002. (In the days before smartphones, users would dial a number, play the song through their phones, and then wait for Shazam to send a text with the title and artist.) Since then, it has been downloaded more than 500 million times and used to identify some 30 million songs, making it one of the most popular apps in the world. It has also helped set off a revolution in the recording industry. While most users think of Shazam as a handy tool for identifying unfamiliar songs, it offers music executives something far more valuable: an early-detection system for hits.

By studying 20 million searches every day, Shazam can identify which songs are catching on, and where, before just about anybody else. “Sometimes we can see when a song is going to break out months before most people have even heard of it,” Jason Titus, Shazam’s former chief technologist, told me. (Titus is now a senior director at Google.) Last year, Shazam released an interactive map overlaid with its search data, allowing users to zoom in on cities around the world and look up the most Shazam’d songs in São Paulo, Mumbai, or New York. The map amounts to a real-time seismograph of the world’s most popular new music, helping scouts discover unsigned artists just as they’re starting to set off tremors. (The company has a team of people who update its vast music library with the newest recorded music—including self-produced songs—from all over the world, and artists can submit their work to Shazam.)
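Shazam has not published how its breakout detection works, but the signal described above is easy to imagine in code: count queries per song per city, then flag songs whose volume is growing unusually fast. A deliberately simple sketch, with invented thresholds and data shapes:

```python
def breakout_songs(this_week, last_week, min_queries=1000, growth=3.0):
    """Flag (city, song) pairs whose query volume jumped week over week.

    this_week / last_week: dicts mapping (city, song) -> query count.
    min_queries and growth are arbitrary placeholders, not Shazam's rules.
    """
    flagged = []
    for key, count in this_week.items():
        prev = last_week.get(key, 0)
        if count >= min_queries and count >= growth * max(prev, 1):
            flagged.append((key, count, prev))
    # Biggest current volumes first: these are the loudest "tremors."
    return sorted(flagged, key=lambda row: -row[1])
```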

How Auto-Tune conquered pop music

Sebert, whose label did not respond to a request for an interview, has built a persona as a badass wastoid; she told Rolling Stone that all male visitors to her tour bus had to submit to being photographed with their pants down. Even the bus drivers.

Yet this past November on the Today Show, the 25-year-old Sebert looked vulnerable, standing awkwardly in her skimpy purple, gold, and green unitard. She was there to promote her new album, Warrior, which was supposed to reveal the authentic her.

“Was it really important to let your voice be heard?” asked the host, Savannah Guthrie.

“Absolutely,” Sebert said, gripping the mic nervously in her fingerless black gloves.

“People think they’ve heard the Auto-Tune, they’ve heard the dance hits, but you really have a great voice, too,” said Guthrie, helpfully.

“No, I got, like, bummed out when I heard that,” said Sebert, sadly. “Because I really can sing. It’s one of the few things I can do.”

Warrior starts with a burst of shredding electrical static; then comes her voice, sounding like what the Guardian called “a robo squawk devoid of all emotion.”

“That’s pitch correction software for sure,” wrote Drew Waters, Head of Studio Operations at Capitol Records, in an email. “She may be able to sing, but she or the producer chose to put her voice through Auto-Tune or a similar plug-in as an aesthetic choice.”

So much for showing the world the authentic Ke$ha.

Since rising to fame as the weird techno-warble effect in the chorus of Cher’s 1998 song, “Believe,” Auto-Tune has become bitchy shorthand for saying somebody can’t sing. But the diss isn’t fair, because everybody’s using it.

For every T-Pain — the R&B artist who uses Auto-Tune as an over-the-top aesthetic choice — there are 100 artists who are Auto-Tuned in subtler ways. Fix a little backing harmony here, bump a flat note up to diva-worthy heights there: smooth everything over so that it’s perfect. You can even use Auto-Tune live, so an artist can sing totally out of tune in concert and be corrected before their flaws ever reach the ears of an audience. (On season 7 of the UK X-Factor, it was used so excessively on contestants’ auditions that viewers got wise, and protested.)
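Under the hood, the basic move of any pitch corrector, hard or subtle, is quantization: estimate the sung frequency, snap it to the nearest note, and shift by the ratio between the two. Auto-Tune’s actual signal processing is proprietary; this sketch shows only the snapping arithmetic, with an illustrative strength knob:

```python
import math

A4 = 440.0  # reference tuning, in Hz


def nearest_note_hz(freq_hz):
    """Snap a detected pitch to the nearest equal-tempered semitone."""
    semitones = 12 * math.log2(freq_hz / A4)
    return A4 * 2 ** (round(semitones) / 12)


def correction_ratio(freq_hz, strength=1.0):
    """Pitch-shift ratio to apply. strength=1.0 is hard, T-Pain-style
    retuning; smaller values give the subtle fixes described above."""
    target = nearest_note_hz(freq_hz)
    return (target / freq_hz) ** strength


# A singer drifting flat: 432 Hz instead of a true A4 at 440 Hz.
hard = correction_ratio(432.0)         # ~1.019, pulled all the way to 440 Hz
gentle = correction_ratio(432.0, 0.5)  # ~1.009, nudged halfway back
```

A real plug-in also has to detect the pitch in the first place and resynthesize the shifted audio without artifacts, which is where the hard engineering lives.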

Indeed, finding out that all the singers we listen to have been Auto-Tuned does feel like someone’s messing with us. As humans, we crave connection, not perfection. But we’re not the ones pulling the levers. What happens when an entire industry decides it’s safer to bet on the robot? Will we start to hate the sound of our own voices?

A short philosophical history of personal music

If you are reading this on a computer, there is an excellent chance that you are wearing, or within arm’s reach of, a pair of headphones or earbuds.

To visit a modern office is to walk into a room with a dozen songs playing simultaneously but to hear none of them. Up to half of younger workers listen to music on their headphones, and the vast majority think it makes them better at their jobs. In survey after survey, we report with confidence that music makes us happier, better at concentrating, and more productive.

Science says we’re full of it. Listening to music hurts our ability to recall other stimuli, and any pop song — loud or soft — reduces overall performance for both extraverts and introverts. A Taiwanese study linked music with lyrics to lower scores on concentration tests for college students, and other research has shown that music with words scrambles our brains’ verbal-processing skills. “As silence had the best overall performance it would still be advisable that people work in silence,” one report dryly concluded.

If headphones are so bad for productivity, why do so many people wear them at work?

There is an economic answer: The United States has moved from a farming/manufacturing economy to a service economy, and more jobs “demand higher levels of concentration, reflection and creativity.” This leads to a logistical answer: With 70 percent of office workers in cubicles or open work spaces, it’s more important to create one’s own cocoon of sound. That brings us to a psychological answer: There is evidence that music relaxes our muscles, improves our mood, and can even moderately reduce blood pressure, heart rate, and anxiety. What music steals in acute concentration, it returns to us in the form of good vibes.

That brings us, finally, to a cultural answer: Headphones give us absolute control over our audio environment, allowing us to privatize our public spaces. This is an important development for dense office environments in a service economy. But it also represents nothing less than a fundamental shift in humans’ basic relationship to music.

A SHORT HISTORY OF PRIVATE MUSIC

In 1910, the Radio Division of the U.S. Navy received a freak letter from Salt Lake City written in purple ink on blue-and-pink paper. Whoever opened the envelope probably wasn’t expecting to hear from the next Thomas Edison. But the invention contained within represented the apotheosis of one of Edison’s more famous, and incomplete, discoveries: the creation of sound from electrical signals.

Muzak in the realm of retail theatre

If you blindfolded Dana McKelvey and led her into a retail store, a restaurant, a doctor’s office, or a bank, she could tell fairly quickly whether the music playing in the background was Muzak. You may think that you would be able to tell, too, but unless your job is creating Muzak programs, as McKelvey’s is, you probably wouldn’t. The syrupy orchestral “elevator music” that most people associate with the company scarcely exists anymore. Muzak sells about a hundred prepackaged programs and several hundred customized ones, and only one—“Environmental”—truly fits the stereotype. It consists of “contemporary instrumental versions of popular songs,” and it is no longer terribly popular anywhere, except in Japan. (“The Japanese think they love it, but they actually don’t,” a former Muzak executive told me. “They’ll get over it soon.”) All of Muzak’s other programs are drawn from the company’s huge digital inventory, called the Well, which contains more than 1.5 million commercially recorded songs, representing dozens of genres and subgenres—acid jazz, heavy metal, shag, neo-soul, contemporary Italian—and is growing at the rate of twenty thousand songs a month. (Some record labels now upload new releases directly to the company, which, like a radio station, pays licensing fees for the songs it uses.) The Well includes seven hundred and seventy-five tracks recorded by the Beatles, a hundred and thirty by Kanye West, three hundred and twenty-four by Led Zeppelin, eighty-four by Gwen Stefani, a hundred and ninety-one by 50 Cent, and nine hundred and eighty-three by Miles Davis. It also includes many covers—among them, versions of the Rolling Stones’ song “Paint It Black” by U2, Ottmar Liebert, and a late-sixties French rock band with a female vocalist (who sang it in French), and approximately five hundred versions of the Beatles’ song “Yesterday,” which, according to Guinness World Records, is the most frequently covered song in the world.

“There are so many songs out there that if I listened to just one I’d never know whether it was Muzak or not,” McKelvey, who is twenty-six years old, and has the kind of soft, persuasive voice that would sound good on late-night radio, told me. “But I could tell if I listened to the flow of a few. The key is consistency. How did those songs connect? What story did they tell? Why is this song after that song, and why is that one after that one? When we make a program, we pay a lot of attention to the way songs segue. It’s not like songs on the radio, or songs on a CD. Take Armani Exchange. Shoppers there are looking for clothes that are hip and chic and cool. They’re twenty-five to thirty-five years old, and they want something to wear to a party or a club, and as they shop they want to feel like they’re already there. So you make the store sound like the coolest bar in town. You think about that when you pick the songs, and you pay special attention to the sequencing, and then you cross-fade and beat-match and never break the momentum, because you want the program to sound like a d.j.’s mix.” She went on, “For Ann Taylor, you do something completely different. The Ann Taylor woman is conservative, not edgy, and she really couldn’t care less about segues. She wants everything bright and positive and optimistic and uplifting, so you avoid offensive themes and lyrics, and you think about Sting and Celine Dion, and you leave a tiny space between the songs or gradually fade out and fade in.”

Muzak’s corporate headquarters are in Fort Mill, South Carolina. Naturally, there’s an awesome sound system, which extends into the parking lot but not (for deeply felt symbolic reasons) into the elevator. McKelvey works in a section of the building called the Circle, a curved arrangement of cubicle-size offices, which are the only Muzak work spaces that have doors. She has spent many hours behind hers, listening to hundreds of songs and thinking about how best to employ music to further the marketing ambitions of the hundred or so clients she manages at once. At the time I visited, she was working on a proposal for a prospective customer, a French-owned chocolatier in New York City. “They want the program to include music from everyplace in the world where cocoa grows,” McKelvey told me. “It’s a challenge, to say the least, but it’s fun.” Shortly before we talked, she had been listening to lounge and rhythmic music from Brazil and West Africa, and to a number of less exotic songs, including familiar jazz tunes that she felt conveyed a mood of chocolate-appropriate romance.

McKelvey, a creative manager at Muzak, is one of twenty-two “audio architects”—the company’s term for its program designers. All but two are in their twenties or thirties, and all have serious, eclectic, long-term relationships with music. (Eight of the architects work in the Circle, ten work in the Muzak office in Seattle, two work in New York, and two work from home, in Connecticut and in California.) McKelvey was born in 1980 in Charleston, South Carolina. Her parents weren’t musicians, but her mother liked to sing and her father worked as a d.j.; he now owns a night club in Charleston called Casablanca. McKelvey began playing the piano when she was two, could read notes on the treble clef before she could read words, and took up the violin when she was seven. Two years later, she joined the Charleston Youth Symphony, as a violinist, and performed through high school. At home, when she wasn’t practicing classical pieces, she listened mainly to eighties pop—Michael Jackson, DeBarge, the Jets—and to the music her parents loved, which was Motown and funk. “I never had a TV in my room,” she told me. “I always had a 45-player. My dad had an amazing record collection, and he still does, and it’s all first runs, not reissues. Whenever I’m in Charleston, I try to sneak records from him.” She says her current taste in music is too diverse to characterize.

People at Muzak sometimes speak of a song’s “topology,” the cultural and temporal associations that it carries with it, like a hidden refrain. When McKelvey works on a program for a client whose customers represent a range of ages—such as Old Navy, whose market extends from infants to adults—she has to accommodate more than one sensibility without offending any. The task is simplified somewhat by the fact that musical eras and genres are not always moored firmly in time. Elvis Presley (who is represented in the Well by fourteen hundred and five tracks) sounds dated to many people today, but teen-agers can listen to Beatles songs from just a few years later without necessarily thinking of them as oldies.

Sailing by ear

I just spent eight weeks working on a screenplay ten hours a day while listening to the same three albums—Popol Vuh: Einsjäger und Siebenjäger (1974); The Six Parts Seven: Casually Smashed to Pieces (2007); and the Jerome Moross soundtrack to the 1958 film The Big Country—on infinite repeat. All the tracks were AAC files that I had downloaded from the iTunes Music Store, and I was listening to them through a pair of small, attractive podules that connected to my iMac through its FireWire port. This is, roughly, the setup that I have been using for a long time now, since before there was an iTunes, or an iPod, or a Napster, back when the only MP3s available were those you had ripped yourself. And though I also listen to music in the kitchen, in the car, on airplanes, and while running, given the amount of time that I spend at my desk, and the fact that I listen to music constantly while writing, over the past ten years I have probably listened to more music in the form of MP3s playing through cute little pods placed about three feet from my head than in any other way. So I was surprised, last week, when for no apparent reason, while writing a big Martian air battle scene, I looked up from the iMac’s monitor to one of those cute little FireWire ovoids, as Popol Vuh guitarist Daniel Fichelscher attempted unbelievably intricate and beautiful things on the title track of Einsjäger, and thought: Dude, what’s with the Fisher-Price speakers?

You might suppose that repetition would have dulled my powers of aural discernment—this must have been the fiftieth or sixtieth time I’d listened to the track over the past two months—but on the contrary it abruptly seemed to have heightened them, to have broken through the dam of convenience, simplicity, and ready access to the music, to have flooded my jaded ear with sudden understanding. I’m no audiophile; I want to say that right off. I have no idea what impedance is, or how to set the levels of an equalizer with any confidence or panache, and I still find infantile amusement in saying the word “woofer.” But it struck me all at once that the sound quality of the music I’d been listening to so heavily, with the indirect attentiveness I give music when I’m writing, was thin, brittle; all sheen and no depth. It was tinny, tiny, and pallid. It sounded like shit, in fact; and not only did it sound like shit, but it had been sounding like shit for years. Shit in the kitchen, playing from a big hard drive attached to an old PowerBook, through a couple of small, flush-mounted wall speakers. Shit, in the minivan and the Prius, patched from an iPod through factory-installed speakers greased over with a scurf of children and their miasmas. Shit, through the endless, vaguely rattling series of earbuds—that nauseating term, with its suggestion of Van Goghesque mutilations—accompanying me on morning runs and onto airplanes. The digitized music itself was “compressed,” “lossy,” reduced to a state of parity with whatever system I consigned it to. With the possible exception of books, I love music more than I love anything in my life that is not a person or a dog. At one time, I now realized, I had known how to express and indulge and nourish that love: with iron-heavy black records, a fifty-watt amplifier, and a pair of speakers that were themselves pieces of furniture, far too large for any desktop. I hit the space bar, stopping the music, and observed a moment of silence for my own lossy life, and thought about a man whom I had not seen in almost thirty years.