Music to Suit Your Changing Mood

To improve streaming music playlists, researchers create a “personalized DJ” computer program.

Based on the research of Maytal Saar-Tsechansky

Imagine having a disc jockey inside your computer who matches the music played to your current frame of mind.

That’s what a data scientist at Texas McCombs has created, together with a pair of computer science researchers at The University of Texas at Austin. Their goal is to outdo streaming music services by making their playlists more individual.

“The idea was to create your own personalized DJ,” says Maytal Saar-Tsechansky, professor of Information, Risk, and Operations Management. “Whether you’re getting into the car after a long day of meetings, or you’re getting out of bed on a weekend morning, it should tailor its recommendations to your changing moods.”

Songs in Sequences

The project started as the brainchild of Elad Liebman, a Ph.D. student in computer science at UT Austin who also has a degree in music composition.

He was frustrated with music streaming and recommendation platforms like Spotify and Pandora. Liebman found most of their song choices bland and uninspiring.

One problem, he believed, was that if he liked a song, Spotify simply queued up songs by similar artists that it assumed he would like, too. By focusing instead on the music itself and which of its elements he liked, a service might present him with more adventurous selections.

Pandora, at least, factored in some musical attributes of songs. But it paid no attention to their order and how each song might affect his experience of the next one.

“The long-term goal of the ideal playlist is to maximize the listener’s overall pleasure.” — Elad Liebman

Could he devise a program to do that? To try, he teamed up with Saar-Tsechansky, who develops machine learning methods to address business and organizational challenges. She has developed techniques for domains as diverse as chronic disease management and smart electric grids.

Learning from Listening

The researchers turned to a strategy called reinforcement learning. Instead of relying on what other users have liked, it uses trial and error to learn what a listener prefers in the moment.

“It’s learning on the fly while you’re collecting more information about the user.” — Maytal Saar-Tsechansky

The program they designed, with UT Computer Science Professor Peter Stone, runs a series of feedback loops. It tries out a song, the listener rates it, and the program heeds that rating in choosing the next song. “Then you alter the model accordingly,” says Liebman.
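
In code, one turn of that loop might look like the minimal sketch below, assuming a simple linear preference model over acoustic features. The names and the update rule are illustrative stand-ins, not the authors' exact design:

```python
import numpy as np

# A minimal sketch of the feedback loop described above. The linear
# preference model and update rule are illustrative assumptions.

N_FEATURES = 34  # acoustic descriptors such as pitch, tempo, and loudness

def score(weights, song):
    """Predicted appeal of a song under the current preference model."""
    return float(weights @ song)

def update(weights, song, liked, lr=0.1):
    """Nudge the preferences toward (or away from) the song just rated."""
    direction = 1.0 if liked else -1.0
    return weights + lr * direction * song

rng = np.random.default_rng(0)
weights = np.zeros(N_FEATURES)
song = rng.random(N_FEATURES)   # stand-in for one song's acoustic profile

# One turn of the loop: play the song, collect the rating, alter the model.
liked = True                    # the listener clicks "like"
weights = update(weights, song, liked)
```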

Because the program focuses on a listener’s emotional state, it strings songs together by similar acoustic properties rather than by artist or genre. Instead of following one blues song with another, it might play a bluesy-sounding Iranian song. It uses 34 properties such as pitch, tempo, and loudness, drawn from an online database that has already analyzed the songs.
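
As a toy illustration of grouping by sound rather than by genre label, the made-up feature profiles below put a bluesy-sounding track from another tradition closer to a blues song than a metal track sits, so a feature-based DJ would favor that transition:

```python
import numpy as np

def acoustic_distance(a, b):
    """Euclidean distance between two songs' acoustic feature vectors."""
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

# Toy three-dimensional profiles (say pitch, tempo, loudness); the numbers are invented.
blues_song   = [0.30, 0.45, 0.60]
iranian_song = [0.32, 0.47, 0.58]   # a bluesy-sounding Iranian track
metal_song   = [0.80, 0.90, 0.95]

# The Iranian track is the closer acoustic match, so a feature-based DJ
# would favor that transition despite the differing genre labels.
print(acoustic_distance(blues_song, iranian_song))   # small distance
print(acoustic_distance(blues_song, metal_song))     # large distance
```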

But in the researchers’ design, the music service is thinking well beyond the next song. Like a chess player, it plans its moves 10 songs ahead. While one song is playing, it generates tens of thousands of possible sequences, and it predicts which one will please the listener the most. It serves up the next song on that playlist — and while that song is playing, it creates and tests new sequences.
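
A rough sketch of that planning step, assuming two hypothetical learned scoring functions (one for individual songs, one for transitions), samples many candidate 10-song sequences, scores each, and keeps only the opening song of the best one before replanning:

```python
import numpy as np

rng = np.random.default_rng(0)

def plan_next_song(candidates, last_played, song_score, transition_score,
                   horizon=10, n_rollouts=10_000):
    """Sample random length-`horizon` sequences, score them, return the best opener."""
    best_value, best_first = float("-inf"), None
    for _ in range(n_rollouts):
        picks = rng.choice(len(candidates), size=horizon, replace=False)
        value, prev = 0.0, last_played
        for i in picks:
            song = candidates[i]
            value += song_score(song) + transition_score(prev, song)
            prev = song
        if value > best_value:
            best_value, best_first = value, candidates[picks[0]]
    return best_first

# Toy usage: random 34-dimensional song profiles and linear preference models.
songs = [rng.random(34) for _ in range(200)]
w_song, w_trans = rng.random(34), rng.random(34)
next_up = plan_next_song(
    songs, last_played=songs[0],
    song_score=lambda s: float(w_song @ s),
    transition_score=lambda prev, s: -float(w_trans @ np.abs(s - prev)),
    horizon=10, n_rollouts=2_000,
)
```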

In machine learning, the mechanism is known as a Monte Carlo search, which inspired the program’s name: DJ-MC.

Transitions Matter

With their virtual DJ queued up, it was time to try it out on human listeners. In a laboratory, 47 subjects listened to 50 songs apiece, drawn from Rolling Stone magazine’s list of the 500 greatest albums of all time, in one-minute snippets. After each song, listeners clicked a like or dislike button. They also decided whether they liked the transition from the previous song.

The first 25 songs were offered at random, to learn about the individual’s tastes. The program then used those lessons to select the remaining 25 songs.

When the results were tabulated, DJ-MC’s playlists racked up 19 percent more likes than the songs played at random.

The experiment also compared two competing strategies: maximizing likes for an overall session versus maximizing likes for individual songs. With 11 percent more likes, the overall session came out ahead.

“Listeners enjoy not just the songs but the sequence, the transitions from one song to another.” — Maytal Saar-Tsechansky

In that way, she adds, the program works like a good DJ.

Simple is Sufficient

The researchers were pleasantly surprised to find that DJ-MC was simple enough to run on a laptop with 8 gigabytes of memory. They tried variations that analyzed additional song properties or tried out more songs to discover a listener’s tastes. But more complex algorithms didn’t increase listener satisfaction.

“Part of the power of the model is how lightweight it is,” Liebman says.

A model is all that the researchers intend DJ-MC to be, says Saar-Tsechansky. She has no plans to commercialize it, though she’d be happy to see another company do so.

What interests the researchers more is applying reinforcement learning beyond music. Liebman says the program could be adapted to other kinds of media, from news stories to videos.

“Learning algorithms don’t have taste, they just have data,” he says. “You can replace the dataset with anything, as long as people are consuming it in a similar fashion.”

Saar-Tsechansky goes further. “It can work in any case where you’re recommending things to humans, experienced in a sequence,” she says. “It could even be food.”

“The Right Music at the Right Time: Adaptive Personalized Playlists Based on Sequence Modeling” is forthcoming in MIS Quarterly.

Story by Steve Brooks