Jeanna Isham

When you think of the word “sadness,” what images come to mind? What emotions? Does sadness mean devastation, heartbreak, loneliness, misery, depression, longing?

With all of these different definitions of sadness, how could we possibly confine them to a single interpretation in a Spotify playlist?

A UX designer friend of mine wrote a very thought-provoking article recently discussing a Spotify feature she was developing. It identified a user’s musical preferences by having the user select a mood, such as “bright mood,” and then navigate to a drop-down that defined said “bright mood” further: happy, euphoric, relaxed, sentimental, motivated, spiritual, optimistic.

By defining the mood beyond the initial suggestion, the app could then better learn that specific user’s musical preference.
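To make the idea concrete, here’s a minimal sketch of what that two-level mood selection might look like as a data structure. The structure and function name are my own illustration, not Spotify’s actual data model; the “bright mood” refinements come from the feature described above, and the “sad mood” refinements borrow the definitions from the opening paragraph.

```python
# A minimal sketch of a nested mood taxonomy. This is illustrative only,
# not Spotify's actual data model.

MOOD_TAXONOMY = {
    "bright mood": [
        "happy", "euphoric", "relaxed", "sentimental",
        "motivated", "spiritual", "optimistic",
    ],
    "sad mood": [
        "devastated", "heartbroken", "lonely",
        "miserable", "depressed", "longing",
    ],
}

def refine_mood(top_level: str, refinement: str) -> str:
    """Validate that the refinement belongs to the selected top-level mood."""
    options = MOOD_TAXONOMY.get(top_level, [])
    if refinement not in options:
        raise ValueError(f"{refinement!r} is not a refinement of {top_level!r}")
    return refinement

# The user picks "bright mood," then narrows it to "sentimental";
# the service can now learn preferences at this finer-grained level.
print(refine_mood("bright mood", "sentimental"))
```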

My mind took this a step further

What if we could program an algorithm (of sorts) that detected when our musical choices were veering into “too much” sadness?

Don’t get me wrong. I know this may bump up against privacy and rights and all of that, but think about it for a minute.

Have you ever enjoyed sitting in sadness? Wallowing in it? Have you ever fed that feeling and sunk deeper and deeper into it? If you have, how hard was it for you to climb back out? I would wager it took you longer to get out than it took you to get in.

Suicide and depression are real. They’re a dark and disturbing place, and when you feel your lowest, sitting in your sadness almost feels good. The music we choose during these low times can make the difference between recovery and devastation.

Sound heals

I’ve written about sound in healing. Sound can heal at a cellular level. It can also be used as a therapy, which in turn can help physical wounds heal. If you are mentally in the right place, your body can follow suit.

So back to my question.

What if AI could help catch a person’s slip toward depressive or suicidal sadness levels and course correct by slowly feeding them happier and more uplifting music?

In my scenario, appropriate musical choices could elevate you from depression, to reassurance, to feeling uplifted, to optimism, and eventually to feeling happy again. Think of it like an IV drip of music. By working backwards through that user’s musical preferences, the algorithm could deliver a gentle dopamine boost, lifting the listener back up through song.

In real life, no one flips directly from sad to happy, so the music we listen to shouldn’t do that either. It wouldn’t feel natural, and therefore wouldn’t be effective. Guaranteed, that user will reach over and change the playlist if they feel they’re not getting what they want.
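Here’s a rough sketch of what that gradual “IV drip” could look like in code. It assumes each track carries a 0-to-1 positivity score (something like the valence value streaming services attach to tracks); the track titles, scores, and step size are invented for illustration.

```python
# A minimal sketch of the "IV drip" idea: step the listener gradually from a
# low mood toward happier territory, never jumping straight from sad to happy.
# The 0-to-1 positivity score, track titles, and step size are assumptions
# made up for illustration; a real library would have far more tracks, so
# each step would land on a different song.

from typing import List, Tuple

Track = Tuple[str, float]  # (title, positivity score between 0.0 and 1.0)

def build_recovery_playlist(
    library: List[Track],
    current_level: float,
    target_level: float = 0.8,
    step: float = 0.1,
) -> List[Track]:
    """Pick one track per step, each slightly more positive than the last."""
    playlist = []
    level = current_level
    while level < target_level:
        level = min(level + step, target_level)
        # Choose the track whose positivity is closest to the next step up.
        playlist.append(min(library, key=lambda t: abs(t[1] - level)))
    return playlist

library = [
    ("slow rainy-day ballad", 0.15),
    ("wistful acoustic tune", 0.35),
    ("steady mid-tempo groove", 0.55),
    ("warm feel-good chorus", 0.70),
    ("full-on summer anthem", 0.90),
]

for title, score in build_recovery_playlist(library, current_level=0.2):
    print(f"{score:.2f}  {title}")
```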

Limited Definition Means Lost Opportunity

Suicide and depression aside, most of us are a roller coaster of emotions. Limiting a mood choice to one word doesn’t really give you access to fine-tuning your musical experience. My definition of sadness is different from yours. Heck, my definition of sadness today differs from what I thought it was five days or five weeks or even five minutes ago. It’s all perspective and context—two things that only the individual user can provide.

Our journey through emotions is unpredictable. We are constantly living and thinking and changing; our mood evolves as we go.

Marketing to a Moving Target

Here’s where I throw marketing into the mix.

If we could find a way to give more choices to a user’s “mood journey,” how much more value might we bring to their advertising experiences?

Ads in general (and in Voice AI as well) are going to happen. It’s just a fact. If we have to experience them anyway, wouldn’t we prefer them to make sense to our world and, dare I say, even be enjoyable? With the ability to fine-tune a mood music experience, advertisers and marketers could better define the parameters of exactly who they’re marketing to.

For example: it’s summertime and my mood is tending toward summertime music. To me that would be The Beach Boys, The Monkees, Santana, Nelly Furtado, Pharrell, Justin Timberlake, etc. An advertiser might see this trend and target swimwear or sunscreen or skateboards. On the other hand, my neighbor is in the summertime mood, too, but he prefers Jimi Hendrix, The Doors, Janis Joplin, and The Ramones. The ads designed for me would be totally inappropriate and/or irrelevant to him—and vice versa.

By targeting advertisements based on subsets of musical moods, the brand has a better opportunity for a higher ROI.
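As a toy illustration of that matching, here’s how the same broad “summertime” mood might split into artist-based subsets, each mapped to different ad categories. The subset names and the classic-rock ad categories are invented for the example; the sunny-pop artists and ad ideas come from the paragraph above.

```python
# A toy illustration of mood-subset ad targeting. The subset names and the
# classic-rock ad categories are invented; the listener gets whichever
# subset overlaps most with what they've actually been playing.

SUMMERTIME_SUBSETS = {
    "sunny pop": {
        "artists": {"The Beach Boys", "The Monkees", "Santana",
                    "Nelly Furtado", "Pharrell", "Justin Timberlake"},
        "ad_categories": ["swimwear", "sunscreen", "skateboards"],
    },
    "classic rock": {
        "artists": {"Jimi Hendrix", "The Doors", "Janis Joplin", "The Ramones"},
        "ad_categories": ["festival tickets", "vinyl reissues", "road-trip gear"],
    },
}

def ads_for_listener(recently_played: set) -> list:
    """Return ad categories from the subset that best matches the listener's plays."""
    best_subset = max(
        SUMMERTIME_SUBSETS.values(),
        key=lambda subset: len(subset["artists"] & recently_played),
    )
    return best_subset["ad_categories"]

print(ads_for_listener({"Pharrell", "The Beach Boys"}))   # -> sunny-pop ads
print(ads_for_listener({"The Doors", "Janis Joplin"}))    # -> classic-rock ads
```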

Refining and Defining

I’m excited to see how we can better use this opportunity of streaming radio playlists. With proper cultivation by Spotify, Pandora, iHeartRadio, and all the other large streaming radio services, there’s potential to both save lives and more accurately market to an audience. A really weird combination, I know, but it just shows that the sky’s the limit when we make sound on purpose! And isn’t that what we’re really trying to do, whether it’s around marketing and advertising and branding or just plain ol’ zoning out to good vibrations? Sound shouldn’t just be noise, or filler, or something that someone somewhere thinks—algorithmically—would make us feel better, or feel sad in a way that’s helpful. Sound, including Voice AI, should be as purposeful and well thought out as a Mozart symphony or a bit of Shakespeare dialogue.

Conclusion

If you’re interested in learning more about sound in marketing, check out my new course, Sound’s Power and Influence in Marketing, at www.soundinmarketing.com, where we talk about the beginning of sound (not recorded sound but sound sound), early radio advertising, the rise of the internet, sound’s purchase power, and how your brain reacts and responds to sound.