CM In Depth

Rise of the Robots: How AI Is Changing the Music World

This article originally appeared in the September/October 2018 issue of Canadian Musician magazine.

By Michael Raine

Artificial intelligence is here, it’s everywhere, and it’s radically changing our daily lives, from internet searches to banking, shopping, commuting, and even the temperature in our homes.

In the music world, AI is fundamentally changing not just how we listen to music, but how music is made and even how the music industry operates. And as all tech does, AI is evolving at such a radical pace that, frankly, we can only speculate about its long-term impacts.

“Didn’t AI just write a symphony? I saw that when I was at the TED Conference. I was like, ‘Oh my god, this is insane; it literally wrote the damn symphony! … What the hell is going on here?’” says Jason Flom, CEO of Lava Records, recalling the moment to Canadian Musician.

“Maybe it’s a good thing, maybe it’s a bad thing. I don’t know, but … it’s probably a net positive if it helps with research. If it helps identify artists who otherwise wouldn’t get discovered, then ironically, AI would be being utilized, in a certain way, to expose non-AI creativity. I’m sure there will be some nefarious consequences as well, but I don’t know what those would be. It’s a crazy world we live in now.”

It’s a crazy world, indeed. So, let’s look at how AI is impacting the two poles of the music business: the music and the business.


Will AI Help or Disrupt Songwriters?

The big question for artists is: is AI going to assist the musician or is it going to be the musician? The answer appears to be “both,” and an awful lot of money is being spent to make it happen.

Here is just a sample of the investments being made in AI around the music industry. For starters, Abbey Road Red, an AI incubator run by the iconic studio, is connecting tech start-ups with the music industry. As well, Sony Music and Warner Music Group are partly funding Techstars Music, an L.A.-based start-up accelerator. Its cohort includes Amper Music, the leading AI composer, performer, and producer of production music, which has raised $9 million in seed money, as well as Popgun, which uses deep-learning algorithms to create original pop music. It’s also no surprise that Google is deeply invested in AI music: its Magenta project has already produced songs written and performed by AI, and the Google Brain research team is working on AI assistants for musicians. Flow Machines, an AI application created by Sony Computer Science Laboratories, released a fairly convincing song in the style of The Beatles, “Daddy’s Car,” in 2016. One of the brains behind that project, AI scientist François Pachet, has since been hired by Spotify to head its new Creator Technology Research Lab. And French start-up AIVA was the first AI composing app to be registered with a performing rights organization.

In May 2018, Taryn Southern brought some of these applications together on her album I AM AI. Using four AI applications – Amper Music, IBM’s Watson Beat, Google’s Magenta, and AIVA – it is credited as the first pop album to be fully composed and produced by AI. But she is not the only one exploring the world of AI collaboration. French collective SKYGGE used Flow Machines to make the album Hello World. Additionally, producer Alex da Kid worked with Spotify and IBM’s Watson Beat to compose four songs.

“I think [AI] will change music,” Southern told the Toronto Star earlier this year. She compares AI composers, and the worries they elicit, to the advent of digital samples and beats. Rather than kill musicianship like some predicted, those new tools propelled music forward, particularly EDM and hip-hop; likewise, the musicians who embrace AI, according to Southern, “are going to create all kinds of things which we can’t yet even conceive of.”

But what are the intentions of the humans behind the AIs?

“Our goal is to empower anyone around the world to express themselves creatively through music, regardless of their background, expertise, or access to resources,” Amper Music CEO and Co-Founder Drew Silverstein tells Canadian Musician. “As we become successful at that goal, it means that Amper should be involved in the creation and collaboration of every piece of music around the world. It is a very exciting goal for us to have.”

That’s ambitious.

[Photo: Drew Silverstein]
Amper Music has become one of the leaders in AI-created production music (though it has competition from the likes of U.K. start-up Jukedeck), partly because Silverstein and his two fellow co-founders are also professional film composers based in Los Angeles. Amper specializes in original, royalty-free production music that matches the style, mood, and length of a video. Those who want to go deeper can also specify tempo, instrumentation, rhythm, harmonic progression, etc. Frankly, the results are impressive.

“What’s really critical is that Amper is designed as a system that is contextually aware,” Silverstein explains. “You can give feedback to Amper. You can say, ‘Here’s what I like and what I don’t like. I want you to change that, and I want more of this and less of that,’ as if we’re sitting in a studio talking to each other. Amper can then create a revision of that music, again in a matter of seconds, and truly collaborate with you as a creative partner until you’re really content with what you’ve created. Then you download the music and get a royalty-free, global, and perpetual licence to use it however you want.”

Artists can take that music, add or subtract parts, add vocals, and end up with an original song co-written with an AI.

Silverstein says two things sparked the idea for Amper Music. First, as composers, they felt new technology usually made them more productive, creative, and efficient. Second, many film producers, directors, and editors said they preferred working with professional composers but often didn’t have the budget and were forced to license stock music.

“They’d say, ‘Look, could you just write the music for us instead?’ More often than not, we’d say, ‘We wish we could but we can’t because the economics just don’t work,’” recalls Silverstein. “But then we said, ‘Look, our job as composers is to translate emotion into music and music into emotion.’ That is kind of what it boils down to. And so we suggested, ‘If we could build an AI and give you the same collaborative approach as if you’re working with a composer but with the time and economic framework that you need, would you want it?’ Overwhelmingly, their answer was, ‘Yes!’ and we kind of laughed and were like, ‘Great, well we’re glad you like the idea but we’re not sure it’s possible. But thank you for the endorsement.’ As I’m sure you know, this idea was in no way new. It’s been around for as long as computers have been around, but it had never been successful in a commercial sense. So, we spent a lot of time in the woodshed to try to figure out what we could do to build an actual creative AI that could express itself and express its collaborators’ desires through music.”

The counterpoint to the complete AI composer like Amper is the AI assistant. One example is Lyric AI Assistant, a collaboration between Art & Icons, the creative studio of Canadian singer-songwriter and Moist frontman David Usher, and the developers at Google Brain. Working off a dataset of lyrics, Lyric AI Assistant generates personalized fragments of lyrics that the artist can then either stitch together or use as inspiration. A prototype of Lyric AI Assistant was unveiled at SXSW 2018, but it’s still a work in progress.

“The core idea is to use what is called a ‘recurrent neural network.’ Neural networks are a type of model used in deep learning that is vaguely inspired by the brain. You have a bunch of neurons transmitting messages between each other, so you feed it inputs and it produces some type of output,” explains Google Brain Senior Software Developer Pablo Castro, himself a musician. “Say it produces some word like ‘song’; that output gets fed back into the model to produce the next word. So if you have ‘the song remains the’ and you feed all that back into the model, the next word would be ‘same,’ if it’s trained on Led Zeppelin lyrics.”
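Castro’s description boils down to a simple feedback loop: run the words so far through the network, pick a next word from its output, append it, and repeat. Below is a minimal sketch of that loop in Python with PyTorch; the toy vocabulary, the LyricRNN class, and the model sizes are illustrative assumptions, not Google Brain’s actual system, and an untrained model like this produces exactly the kind of nonsense Castro goes on to describe.

```python
import torch
import torch.nn as nn

# Toy vocabulary; a real assistant would build this from a lyrics corpus.
vocab = ["the", "song", "remains", "same", "<unk>"]
word_to_id = {w: i for i, w in enumerate(vocab)}

class LyricRNN(nn.Module):
    """Recurrent network that predicts the next word from the words so far."""
    def __init__(self, vocab_size, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        x = self.embed(token_ids)       # (batch, seq, embed)
        output, _ = self.rnn(x)         # (batch, seq, hidden)
        return self.out(output[:, -1])  # logits for the next word

model = LyricRNN(len(vocab))  # untrained; training would fit it to real lyrics

# The feedback loop Castro describes: each sampled word is fed back in.
words = ["the", "song", "remains", "the"]
for _ in range(3):
    ids = torch.tensor([[word_to_id.get(w, word_to_id["<unk>"]) for w in words]])
    logits = model(ids)
    next_id = torch.distributions.Categorical(logits=logits).sample().item()
    words.append(vocab[next_id])

print(" ".join(words))  # a model trained on Led Zeppelin would favour "same"
```

Training the network on an artist’s back catalogue is what turns those random samples into suggestions in that artist’s voice.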

According to Castro, there is really no part of an album that AI couldn’t potentially assist, from helping to write chords, melodies, and lyrics to the actual recording process, all the way to the album artwork.

“From the beginning, it was always clear to both of us that we wanted to approach this as an assistive technology, not a replacement technology. We don’t want to create something where you press a button and you get the next top-40 hit. It’s really something that we want the artist to play with and to interact with,” says Castro.

Having a computer spit out words seems simple on its face, but making it a useful collaborator is very complex. Like a baby, the AI has to learn the foundations of language: first the vocabulary, then the difference between lyrics and prose, then structure, cadence, genre, rhyme, syllables, and so on – and it has to understand it all literally front-to-back and back-to-front.

The first project for Lyric AI Assistant was rewriting David Usher’s song “Sparkle and Shine” by learning from a dataset of his past lyrics. The first attempts weren’t thrilling.

“One of my favourite lines it produced was, ‘So those green ladies, they ain’t all bad,’” Castro recalls with a laugh. “I sent it to David and he called me back almost immediately and he said, ‘Pablo, these lyrics are terrible. I can’t use any of them.’ I couldn’t disagree. They were pretty nonsensical.”
“I’m actually looking to build a useful tool,” says Usher. “I would definitely like to use it as a writing partner for sure. One of the biggest challenges, and one of the reasons we decided to start with lyrics, is that as a writer who has written a lot of songs, you start to rewrite yourself. It’s, ‘How do I push my vocabulary out of its normal set? How do I push myself to a new place? How do I find input that is not always falling back to the same metaphors?’”

Recently, an AI trained on Taylor Swift lyrics wrote a song in the pop star’s style. While, as Castro and Usher learned, an AI can write real nonsense, this one also produced some pretty Swiftian lyrics. For example: “We’re starry-eyed hipsters and wanna-be players” and “I wore your tee in the morning and love at night.” It’s not hard to imagine those lines in a Swift song.

So, if AIs can compose and perform worthwhile music, and can write lyrics that pass for the world’s biggest pop star’s, and the nature of deep learning means they’ll only get better, what does this all mean for the future of popular music? After all, it’s one thing to task AI with something utilitarian like production music for a YouTube video, but commercial music, whether classical or hip-hop or anything in between, is supposed to be more than utilitarian. The music we listen to in our daily lives serves many functions that we like to think of as “human.” It helps us understand the world and ourselves, and it connects us with our emotions.

If AI is used merely as a tool by human artists, it can be excused as just another technological progression in music creation, not unlike Pro Tools and plug-ins.

But nearly every person Canadian Musician spoke with predicts that fully AI-generated songs will eventually be on the Billboard charts. To ask the age-old tech question: just because we can do something, should we?

“I think it is absolutely possible and I think it will happen sooner than people think. I think in a relatively short time that AI’s music, and Amper’s music, will be indistinguishable from human-created music in any genre or style or vertical,” says Silverstein, adding that the only real debate is how long it will take. “The other part of your question, the should it happen, is a little bit moot because it’s going to happen.”

It seems inevitable, in part because of the sheer number of songs an AI can produce, and it theoretically improves with each one. When Alex da Kid made songs using IBM’s Watson Beat, the AI was fed 26,000 Billboard Hot 100 songs and analyzed them for patterns in keys, chord progressions, and genres. If it takes an AI just minutes to analyze decades’ worth of hit songs and, if asked, it could output another decade’s worth of new songs, at least one is bound to be decent, right?
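
IBM hasn’t published Watson Beat’s internals, but the kind of pattern analysis described above can be pictured as tallying recurring sequences across a corpus. Here is a toy sketch, assuming songs have already been reduced to chord symbols (the corpus and helper function are illustrative, not IBM’s pipeline):

```python
from collections import Counter

def common_progressions(songs, length=4):
    """Tally the most frequent chord progressions of a given length
    across a corpus of songs (each song is a list of chord symbols)."""
    tally = Counter()
    for chords in songs:
        for i in range(len(chords) - length + 1):
            tally[tuple(chords[i:i + length])] += 1
    return tally.most_common(5)

# Two toy "hit songs"; a real corpus would be the 26,000 analyzed tracks.
corpus = [
    ["C", "G", "Am", "F", "C", "G", "Am", "F"],
    ["G", "D", "Em", "C", "G", "D", "Em", "C"],
]
print(common_progressions(corpus))  # the I-V-vi-IV shape dominates both songs
```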

So again, will we and should we have AIs making music for the masses?

“I think that we will. Do I think we should? Probably not,” says Usher cautiously. “If you just look into the ecosystem, there are a ton of different companies racing to build AI-generated music and we’re starting to see it in the background of commercials, in video games, and that is how things seep in. They don’t come in at the top of the food chain; they come in in the middle somewhere and they seep in slowly and the technology just builds itself into your life.”

That sounds disconcerting.

“AI is coming. It’s already being used in multiple areas, but one thing is for sure: top talent in music is unique, incomparable, and transcendent,” attests Richard Gooch, CTO at the IFPI, which represents the global recorded music business.

Let’s hope he’s right.


AI & The Music Business

In March of this year, Warner Music Group acquired Sodatone, a Toronto-based tech start-up with an algorithmic platform that combines social, streaming, and touring data to identify promising unsigned artists. The acquisition is a good indication of where the industry is headed.

“In a really quick amount of time, people are going to be using artificial intelligence in their music careers, whether it’s having a computer automatically mix and master your music, having a computer tell you which songs to release and which songs not to, and telling you which parts of songs resonate and which ones don’t, and then your tour routing,” says Travis Laurendine, a tech entrepreneur, investor, and producer of music industry-focused hackathons at SXSW, Canadian Music Week, and elsewhere. “What’s going to happen is, as a result of that, people are going to get signed, like Moneyball, by the computer where there is going to be a machine that says, ‘Hey, Universal Music, this guy here in Toronto is the next big star.’ That’s not going to be some A&R telling his boss that; they’re going to send the A&R out there to meet the guy, but the identification process is largely going to be from computers.”

That is effectively what Sodatone is already trying to do.

“It’s just another source of information, right? It’s a source of information that is going to point out things that are already resonating with people. I think that’s the big thing,” says Ron Lopata, VP of A&R at Warner Music Canada. In his view, using AI is just the latest iteration of the types of analytics that A&R departments have always used, whether it was examining radio spins or Shazam hits. “The thing that AI obviously doesn’t do is tell you whether the song is good. Also, what it can’t do is tell you if an artist has tons of potential but needs development.”

[Photo: Ron Lopata]
A common worry across all industries is AI’s ability to automate increasingly complex jobs. Consider a cautionary example from earlier this year: a contest pitted 20 experienced lawyers against the LawGeex AI algorithm, tasking both with spotting errors in five nondisclosure agreements. The AI was both more accurate (94 per cent vs. 85 per cent) and exceptionally faster, with the lawyers taking an average of 92 minutes to review the contracts while the AI took 26 seconds.

“The AIs are just going to get smarter and faster and the lawyers are never going to get better at those corrections, or very minimally. And the response from the lawyers was, ‘Well, that’s OK because this will take some of the load off from the middle of the business – from the younger lawyers and assistants.’ But the way it’s going to work is AI, in my opinion, is going to eat its way from the inside out. It’s going to do the simple tasks, but it’s just going to get better and faster at those and slowly learn tasks up and down the food chain,” warns Usher, who also founded the Human Impact Lab at Montreal’s Concordia University. “As we learned in the music business, when you eat the middle of the business out, it fundamentally changes the business from being a music business into being a technology business, and from being a law business into being a technology business, from being a medical business, and on and on and on.”

Despite Usher’s ominous warning, Lopata isn’t too concerned about AI’s imminent takeover. “Yes, computers have taken over a lot of jobs, but have they not caused new jobs to be formed as well? Is it going to take over A&R so that the only thing that we’re hearing and getting signed are things that are out there and have those numbers right away? Because then there are going to be other people who go, ‘Well, I am going to look for the other artists that this thing doesn’t pick up, and I am going to build those to the point where it does pick it up.’ I kind of look at it as it’s going to just evolve the process.”

In the world of rights and royalties, SOCAN has been at the forefront of the AI revolution. Like all major PROs, SOCAN has seen its job grow from tracking and paying royalties on a few hundred thousand performances per year in the radio era to billions in the streaming era. To help with this, SOCAN bought Audiam and MediaNet in 2016, but also invested in and formed collaborations on a few AI-focused initiatives. The goal is to have AI help on a number of fronts, such as identifying unlicensed music venues or songs with missing or incorrect metadata.

One of SOCAN’s collaborations with IBM Watson focuses on song identification, converting sung lyrics into text that can be searched and matched. “It’s doing melodic analysis and looking at patterns and then increasing the probability of identifying which song this really is,” explains Jeff King, SOCAN’s COO. That is particularly helpful for identifying the songs in mashups and cover versions on YouTube, Facebook, etc.

“One of the things we didn’t realize until we started doing our AI experiments was that social media comments, particularly outside your own broad postal code, are huge indicators of potential success,” says King, noting this can help identify emerging songwriters who aren’t registered with SOCAN. “If you’re mostly in, say, the M4 postal code, it’s probably friends and family. But if all of a sudden an L shows up or an N or a P, let alone some other province, you’re getting traction somewhere else and that is a good indicator that something is happening. That’s been very useful.”
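
SOCAN hasn’t detailed its implementation, but the shape of the signal King describes is straightforward: tally where an artist’s comments come from and flag the artist once enough of them originate outside the home postal-code area. A toy sketch follows; the data shape, the home_prefix default, and the threshold are assumptions for illustration.

```python
from collections import Counter

def traction_outside_home(comment_prefixes, home_prefix="M4", threshold=0.25):
    """Return True when enough social-media comments originate outside an
    artist's home postal-code area to suggest real traction elsewhere."""
    counts = Counter(comment_prefixes)
    total = sum(counts.values())
    if total == 0:
        return False, {}
    away_share = (total - counts.get(home_prefix, 0)) / total
    return away_share >= threshold, dict(counts)

# Mostly friends and family in M4, but L and N prefixes starting to appear.
flagged, breakdown = traction_outside_home(["M4"] * 12 + ["L5"] * 3 + ["N2"] * 2)
print(flagged, breakdown)  # True once the outside-region share passes 25%
```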

Through a partnership with graduate students at the University of Toronto, a similar strategy is being used to identify venues hosting live music that aren’t licensed with SOCAN.

Looking at the big picture, King sees AI’s potential to be transformative across the entire industry because of its unparalleled processing power.

[Photo: Jeff King]
“As opposed to, say, blockchain, which is very slow moving and does only six transactions a minute and such — it’ll eventually be fast but right now it’s a very slow process — IBM Watson, for instance, can do 700,000 pageviews per second. So it can work through 140 years of, let’s say, case law on a certain type of litigation in four or five minutes, whereas right now it would take two or three articling students six weeks to finish the research,” he says. “That is going to be transformative for a whole bunch of industries, including ours. With so much [data] from YouTube and Spotify and such, we’re really just scratching the surface of what this will look like. That processing power is going to be very, very important.”

As the world of AI-made music takes off, there is a pressing question with no clear answer: who owns the rights to AI-generated music? Is it the software developers who made the AI, or the corporation that owns it? In the case of a human/AI co-write, does the human writer own all the rights, or just half? As well, if the AI’s dataset includes copyrighted music and lyrics, does that affect who owns the rights to new songs?

After consulting with some colleagues, one prominent music industry lawyer contacted by Canadian Musician concluded that there aren’t any clear answers to these questions for the time being. For that reason, he didn’t want to speculate on the record, but as King says, it’s likely this will be “fertile ground for copyright lawyers” for the foreseeable future.

“It’s a tricky area to really nail who owns the rights to what. But from our perspective, we’re really just trying to both advance research and to empower musicians,” says Castro at Google Brain. “From our perspective, we’re not really interested in making money off of this.”

Amper Music, because it is already generating commercially available music, has had to wrestle with these questions. In order to offer users a royalty-free, global, and perpetual licence for the AI-generated music, the company had to ensure that every bit of sound fed into the AI was recorded in Amper’s own studio and wholly owned by the company.

“A lot of this isn’t final and there is very little case law or court precedent on AI music,” says Silverstein. “I think the industry, in some sense, is ahead of the curve in terms of where the legal system is, at least in our country. That being said, we know what the rules of the game are today and they very much involve access and intent. The best way to be confident that there is no possible infringement is to have zero access and zero intent. With that, it makes it a very black and white case, which we are on the right side of. At the same time, we recognize that this is going to be an important part of the creative and musical world for, hopefully, the next hundreds of years, and so we want to play an active and positive role in leading the conversation around how the law should evolve and how we should think about this so that as precedents are created and as frameworks are set, they’re done in a way that we think is most beneficial for the creator.”

As we’ve invited AI into our daily lives in helpful ways, people’s unease with it has waned. But when Spotify’s latest playlist is called Chill AI Music, what will that mean for musicians and the industry? It’s coming sooner than you think, and we all need to wrestle with that.


Michael Raine is the Senior Editor of Canadian Musician.