5 Ways AI Has Already Changed the Music Industry

“Fake Drake” and similar controversies have gotten most of the attention, but not all uses of artificial intelligence in music are cause for concern.

Hype around artificial intelligence has been higher this year than at any time since The Terminator, with implications ranging from dating app messages to doomsday predictions. In music, excitement and hysteria have been similarly mixed, thanks to a flurry of AI-generated soundalikes that have shown the potential to change artistry and fandom as we know them, while many companies assess how best to protect their artists, copyrights and revenue streams from the growing threat.

But not all AI in music is “Fake Drake.” In fact, many uses are a lot less freaky. 

For example, when Paul McCartney told BBC Radio 4 that he would use artificial intelligence to create the final Beatles song, including vocals from the late John Lennon, it prompted widespread confusion. Many fans assumed McCartney was using AI to bring his bandmate’s voice back from the dead, generating some kind of new Lennon recording out of thin air. McCartney quickly clarified on Twitter that “nothing has been artificially or synthetically created.” Instead, the singer is using AI to clean up an old recording the bandmates made while Lennon was still alive, through a process known as “stem separation.”

Not every use case of the emerging technology involves instantaneously generating computer-made songs or voices. While some applications of AI certainly present urgent legal and ethical concerns, many others give musicians and rights holders new creative opportunities, from the way music is created to how it’s released and beyond.

Here are five of the ways AI is already affecting the music business: 

Revolutionizing Production

Thanks to the increasing portability and affordability of technology, it has been getting easier and easier to make professional-sounding music for decades; an aspiring artist who can afford Apple products can start fiddling around with production in GarageBand, or buy “type beats” online and record vocals with a phone. Even so, increasingly popular AI-driven technology takes a wrecking ball to the already-porous wall separating civilians from musicians. Users of the app Boomy, for example, can select a few options, like Rap Beats or Global Groove, and generate an instrumental in seconds that they can then rearrange, retool or record a vocal over. BandLab’s SongStarter can generate an instrumental based on specific lyrics and emojis. “Writer’s block is real,” BandLab noted when launching SongStarter in May. “Sometimes, you just need a nudge in the right direction.”

Getting Stems

Just as AI technology can help aspiring artists build songs from scratch, it can also break finished songs into their component parts, known as stems. Having those audio building blocks can be essential if, for example, a studio wants to use an instrumental version of a track in a film trailer, or a brand wants to incorporate an a cappella vocal into a commercial. Some musicians have lost their stems over time; other artists may have cut albums before recording technology existed to isolate all the different parts, and those albums may now be in the hands of catalog owners looking for new revenue opportunities.

The producer Rodney Jerkins used AI technology to pull audio of Wu-Tang Clan’s Ol’ Dirty Bastard off a VHS tape and sample it for a SZA track. This technology is only likely to become more popular at a time when the music industry is acknowledging the extent to which younger listeners want to manipulate audio on their own, crafting homemade remixes that can earn viral attention on TikTok. “It’s not just, ‘how do [you] in some very controlled way reimagine songs with your people in-house?’” says Jessica Powell, CEO of Audioshake, which created the tech that Jerkins used for his ODB sample. “The next wave of it is how do you bring fans and artists and fans and music closer together? How do you actually give the keys [to a song] over in a way that you’re comfortable with, to really let people go wild with it?”
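For a rough sense of what stem separation looks like in practice, here is a minimal sketch using the open-source Spleeter library from Deezer; it is an illustration only, not the proprietary Audioshake technology described above, and the file names are placeholders.

```python
# Minimal stem-separation sketch using the open-source Spleeter library
# (https://github.com/deezer/spleeter). Illustrative only; file names are placeholders.
from spleeter.separator import Separator

# Load Spleeter's pretrained 4-stem model: vocals, drums, bass and "other"
separator = Separator("spleeter:4stems")

# Write one WAV file per stem into output/my_track/
separator.separate_to_file("my_track.mp3", "output/")
```

Spleeter’s pretrained models split a finished mix into two, four or five stems; output like that is what makes instrumental versions, a cappellas and fan remixes possible from a single master recording.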

The Deluge

The modern music industry was designed in a world where the supply of professional-grade music was fairly limited and largely controlled by a few large companies. But as AI technology gets better and better, it’s now possible to create a torrent of music very quickly. This has caused a fair amount of anxiety at the major labels, which face questions from financial analysts about “market share dilution”: If AI has the capacity to turbocharge the amount of music being made outside of the majors’ purview, it could hurt their payouts under the streaming services’ pro-rata business model. Earlier this year, JP Morgan’s Sebastiano Petti asked Warner Music Group’s new CEO, Robert Kyncl, “Are you concerned about the dilution of music from AI-generated content?” “AI is probably one of the most transformative things that humanity has ever seen,” Kyncl replied. “It has so many different implications. Because of that, yes, I’m paying very close attention to it.”
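To see why a flood of AI-generated tracks worries the majors, here is a back-of-the-envelope sketch of the pro-rata model with invented numbers; real payouts also depend on subscription tier, territory and deal terms.

```python
# Back-of-the-envelope pro-rata dilution with made-up numbers.
# Under pro-rata, a rights holder's payout = pool * (its streams / all streams).
pool = 1_000_000             # monthly royalty pool (dollars), hypothetical
label_streams = 200_000_000  # the label's streams, unchanged in both scenarios
all_streams = 1_000_000_000  # total streams on the service today

before = pool * label_streams / all_streams                 # $200,000
after = pool * label_streams / (all_streams + 500_000_000)  # AI adds 500M streams -> ~$133,333

print(f"payout before: ${before:,.0f}, after: ${after:,.0f}")
```

The label’s own streams haven’t moved, but its share of the pool, and therefore its payout, shrinks as AI-generated tracks inflate the denominator.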

Personalized Soundtracks

A number of start-ups are producing malleable music that morphs in real time to underscore actions in video games, VR, workouts and Snapchat filters. Often called “dynamic” or “personalized” music, the approach doesn’t use artificial intelligence to generate music at the click of a button; instead, companies like Reactional Music, LifeScore, Minibeats and others take human-made music and shuffle its individual stems around to arrange newfound compositions that best underscore a user’s needs and actions, much like a film score does for your favorite scene. Their work raises the question, “How magical would it be if we listened to music and music listened back to us?” as LifeScore co-founder and CEO Philip Sheppard puts it.
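As a purely hypothetical sketch of the stem-layering idea behind dynamic music (not any of these companies’ actual systems), an engine might mix pre-recorded stems in and out based on a real-time intensity signal from a game or workout:

```python
# Hypothetical stem-layering logic for "dynamic" music; the stem names and
# thresholds are invented for illustration, not any company's real API.
STEM_LAYERS = [
    ("pads", 0.0),   # always audible
    ("bass", 0.3),   # joins at moderate intensity
    ("drums", 0.5),
    ("lead", 0.8),   # only at peak moments
]

def active_stems(intensity: float) -> list[str]:
    """Return the stems that should be audible for an intensity in [0, 1]."""
    return [name for name, threshold in STEM_LAYERS if intensity >= threshold]

print(active_stems(0.2))  # ['pads']
print(active_stems(0.9))  # ['pads', 'bass', 'drums', 'lead']
```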

Pitch Records

Some songwriters and publishers are now experimenting with AI voice synthesis technology to help them place their compositions with top-tier artists. These days, “pitch records,” songs written entirely by professional songwriters and later shopped around to artists to record, can be especially hard to land because more artists want to play a larger role in the song creation process. AI voice technology has helped tech-forward publishers and writers show an artist’s team what the singer might sound like on a track before they even record it. While it hasn’t been widely adopted yet, some proponents say this use of AI is a cheaper and more precise alternative to hiring demo singers with voices similar to popular artists, a common industry practice. However, detractors warn it could reduce work opportunities for those demo singers, and replicating an artist’s voice with AI might also scare off the very artists being pitched.

By Elias Leight, Kristin Robinson
