The Flood Is Real. So Is the Chance to Rebuild. (3/3)

Inside the AI music boom: hype, power shifts, and what’s missing.
Part III: Licensing – Who Owns the Future of AI Music?
Introduction
Generative AI music has rapidly evolved from a niche experiment into a mainstream phenomenon. The global market for AI-driven music is projected to grow tenfold – from around $300 million in 2023 to over $3.1 billion by 2028 – fueled by massive investment and technological advances. This report provides a comprehensive overview of the current generative AI music landscape, focusing on hard data and diverse perspectives, especially those of musicians around the world. We examine adoption statistics, technical breakthroughs, legal debates, and industry viewpoints to illuminate both the opportunities and challenges that AI-generated music presents in 2025.
Executive Summary
Generative AI is transforming the music industry at unprecedented speed. Platforms like Suno and Udio have reached millions of users within months, generating music at industrial scale—often with quality that rivals commercial releases. Listeners are engaging, creators are experimenting, and tech companies are racing ahead.
But the infrastructure beneath this boom is fractured.
Most generative models rely on unlicensed data. Rights holders are litigating. Artists are split between curiosity and fear. Legal frameworks lag behind, while the sheer volume of AI-generated content floods platforms, raising urgent questions around authorship, attribution, and economic participation.
The success of AI music systems is no longer theoretical: it’s happening. But the systems that govern their use—creative, legal, and financial—are not keeping pace. Without intervention, the gap between AI capabilities and cultural accountability will only widen.
CORPUS enters this landscape with a constructive alternative:
A music licensing system designed for the age of AI. Artist-led, legally sound, and globally scalable. CORPUS makes it possible to train AI on high-quality music with full consent, clear rights, and fair compensation. It shifts the dynamic from extraction to collaboration, and opens the door for musicians to actively shape the next generation of creative tools.
This report outlines the current state of generative AI in music—its drivers, risks, and contradictions—and shows the context in which CORPUS offers one path forward.
Part III
After examining the scale of adoption and the expanding toolkit in Parts I and II, we come to the structural core of the issue: licensing. This section addresses what is arguably the most urgent and unresolved dimension of generative AI in music. Who owns the data that trains these models? Who should be paid when AI creates a song? And what kinds of legal or economic systems could ensure that artists, not just tech companies, benefit from this transformation? We’ll explore the global legal landscape, emerging proposals, and where momentum is heading.
Licensing and the Future: New Models or Free-for-All?
As generative AI accelerates, the most urgent gap is the absence of a licensing and regulatory framework to govern it. The current moment is often likened to the advent of sampling in the 1980s or the dawn of music streaming in the 2000s – a disruptive technology outpacing the legal system and forcing stakeholders to negotiate new arrangements. Today we are essentially in a Wild West for AI music licensing: there is no universal standard for whether and how AI companies may use copyrighted compositions or recordings to train models, and no clear rules on ownership or royalties for AI-generated works that mimic existing artists. But stakeholders are staking out positions that could shape the eventual equilibrium.
On one side, AI developers argue for minimal friction – they prefer an open environment where training on publicly available music is allowed without individual licenses, akin to how humans learn by listening to music. OpenAI, Google and others are lobbying lawmakers (especially in the U.S.) to affirm that using copyrighted data in training falls under fair use or similar exceptions. They warn that requiring case-by-case permission or payment for every song ingested would cripple AI innovation, effectively locking up the training fuel these models need. In submissions to the U.S. Copyright Office and in public comments, some of these companies have even floated nationalistic reasons – e.g. “if U.S. law doesn’t allow generous fair use for AI, Chinese companies that disregard copyright will surge ahead”. The subtext is that AI is strategically important, and copyright rules might need to bend to accommodate it. This argument has drawn fierce rebuttal from the creative community (who see it as a false choice between innovation and artists’ rights), but it has resonated with some policymakers. Notably, Japan essentially embraced this view: as we saw, it created a de jure exception blessing AI training on any content – effectively a compulsory free license – reasoning that the societal benefits of AI outweigh the tradable value of the copies made in training.
On the other side, artists, labels, and publishers seek a system where AI pays its dues. They are not trying to ban AI models from using music entirely (realistically, that genie is out of the bottle), but they insist on consent and compensation. The ideal outcome for this camp would be a collective licensing regime for AI training data: AI companies would pay into a fund or pay per-work licensing fees to use large catalogs of music for training, much as radio stations pay blanket licenses to play songs, and those fees would then be distributed to creators based on usage metrics or negotiated shares. In Europe, this thinking is reflected in proposals to require AI firms to obtain licenses from collective management organizations (CMOs) when they ingest protected works. Imagine OpenAI or Google striking a deal with, say, GEMA or SACEM to cover the use of all works by their members, with a payout based on the number of works ingested or the revenue derived from the AI service. Another approach floated is a new right or levy – for instance, a “compensated training exception” under which AI training is allowed but the provider must pay a government- or CMO-administered levy that goes to creators (akin to the blank-tape levies of old).
The EU AI Act, finalized in 2024, moves partway in this direction: it requires providers of general-purpose AI to publish summaries of their training data and to respect rightsholders’ opt-outs under EU copyright law. Meanwhile, in the U.S., a group of music industry players has floated the idea of a compulsory AI-usage license managed by a body like SoundExchange (which currently administers royalties for digital radio). Under such a scheme, every time an AI generates a track that is used commercially (or every time an AI service is subscribed to), a micro-payment would flow into a pool for the artists whose works trained that AI. These ideas are complex to implement (how do you quantify each artist’s contribution to a model trained on millions of songs?), but they show the industry gearing up to rebuild the rules of value exchange.
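To make the accounting question concrete, here is a minimal sketch in Python, with invented names and figures, of the crudest allocation rule on the table: a blanket training fee split pro rata by the number of works each rights holder contributed. Any real scheme would need to weight by usage, popularity, or composition/recording splits, but the shape of the math is this simple:

```python
# Hypothetical sketch (not an existing scheme): splitting a blanket
# AI-training licence fee pro rata by the number of works each rights
# holder contributed to the training set. All names and figures invented.

def distribute_pool(pool_eur: float, works_ingested: dict[str, int]) -> dict[str, float]:
    """Split a licence pool pro rata by works ingested per rights holder."""
    total = sum(works_ingested.values())
    return {holder: pool_eur * n / total for holder, n in works_ingested.items()}

# e.g. an AI firm pays a EUR 10M blanket fee to a CMO for one year of training
payouts = distribute_pool(
    10_000_000,
    {"Label A": 120_000, "Label B": 45_000, "Indie co-op": 5_000},
)
for holder, amount in payouts.items():
    print(f"{holder}: EUR {amount:,.2f}")
```

Even this toy version exposes the hard policy question: counting works treats a filler track and a catalog-defining hit identically, which is exactly why usage- or influence-based weighting keeps coming up in negotiations.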
Companies like OpenAI have publicly signaled openness to finding a middle ground. Sam Altman’s statement that creators “deserve control” suggests OpenAI might not oppose an opt-out mechanism or a licensing scheme, especially if it heads off worse outcomes like an outright ban or endless litigation. Indeed, voluntary partnerships have started to appear: the Google/UMG voice-model deal, if it ever comes to fruition, would essentially create a licensed product rather than a piracy problem. And just as labels license compositions for training vocal models, perhaps they will license entire catalogs for training AI compositional models – one can imagine a label proudly advertising an album as “AI-Augmented: Trained on the Best of [Our Catalog]” because it turned its archives into a proprietary AI for its artists to use.
OpenAI’s Jukebox (a 2020 research experiment) raised eyebrows by generating songs “in the style of” Elvis or Sinatra; at the time it was free and for research, but if such a model were commercial, rights holders would surely demand either a shutdown or a cut. Those theoretical cases are now turning real: when an app allowed users to create songs mimicking famous artists without permission, it quickly drew lawsuits from major labels seeking up to $150,000 per infringed work in statutory damages. The pressure from litigation could force tech companies to the negotiating table. In April 2024, more than 200 artists (Billie Eilish among them) signed an open letter calling on AI firms to stop exploiting their work without consent and on lawmakers to ensure AI outputs that mimic artists are not given a free pass. This cultural and legal pressure is reminiscent of the Napster era – and we know how that went: once the dust settled, it birthed new licensed platforms (iTunes, Spotify).
So what new licensing models might emerge? One possibility is a tiered approach: training licensing (for feeding AI models) and output licensing (for AI-generated tracks that reference specific artist identities). The former could be handled by collective blanket licenses or opt-in frameworks. The latter might involve right-of-publicity or trademark-like rules for voices, under which using an artist’s voice via AI without permission is illegal – unless you have an explicit license, as with the Google/UMG tool. U.S. states are already moving: Tennessee’s ELVIS Act (2024) extended its right of publicity to cover AI voice imitations, and California has enacted similar protections for digital replicas. The EU AI Act will also require labeling of AI-generated content, which, while not a license, at least ensures transparency (listeners should know whether a song is AI-made or human-made, especially if it imitates a known artist).
Furthermore, new intermediaries could arise: imagine agencies that represent an artist’s “digital likeness” rights, brokering deals for AI voice usage much as sync licensing agencies do for film and TV placements. We might see official “AI plugins” for DAWs authorized by artists – for example, a plugin that legitimately lets you generate vocals “in the style of Freddie Mercury” because Queen’s estate licensed an AI model of his voice. This would monetize the late singer’s legacy in a controlled way (similar to hologram tours). In fact, ABBA’s virtual “Voyage” concert in London – while not AI (the avatars are pre-programmed using motion capture) – shows audiences’ appetite for digital resurrections of beloved acts, which AI could further enable.
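To illustrate how an output-licensing rule might surface in software, here is a hedged sketch of a consent gate for voice models. The registry, the Queen-estate entry, and the royalty share are all hypothetical – this describes no real tool, only the logic such a tool would need:

```python
# Hypothetical sketch of an output-licensing gate: a generation service
# renders a named artist's voice only if an explicit licence is on file.
# The registry, names, and royalty share are invented for illustration.

LICENSED_VOICES = {
    # voice_id: (rights holder, royalty share per render)
    "freddie_mercury": ("Queen estate (hypothetical deal)", 0.15),
}

def render_vocal(voice_id: str, melody: str) -> str:
    """Refuse unlicensed voices; log a royalty event for licensed ones."""
    if voice_id not in LICENSED_VOICES:
        raise PermissionError(f"no licence on file for voice '{voice_id}'")
    holder, share = LICENSED_VOICES[voice_id]
    # ...actual synthesis would happen here; we only log the royalty event...
    print(f"royalty event: {share:.0%} of the render fee owed to {holder}")
    return f"<audio: '{melody}' sung in licensed voice '{voice_id}'>"

print(render_vocal("freddie_mercury", "Somebody to Love, verse 1"))
```

The design point is that consent and compensation are checked at generation time, per identity, rather than baked invisibly into the model weights.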
Crucially, there is an argument against introducing new blanket licenses: some worry that they might retroactively legitimize AI companies’ uncompensated copying – and that if the law simply declares training fair use, any incentive to negotiate disappears. The counter-argument is that without a clear path to licensing, innovation will move to jurisdictions with looser rules (like Japan or China), and creators might paradoxically end up worse off (no control, no pay). As Virginie Berger wrote in a recent analysis, the AI copyright battle is essentially over whether “fair use” becomes “free use” – i.e., if tech giants get their way, AI training might be permanently exempt from the permission economy. Artists and their advocates retort that this would be a massive uncompensated transfer of value. The middle ground could be a compulsory license with statutory rates for training data – much as radio and webcasters have compulsory licenses for music use. This would legalize the practice but ensure payment. It’s messy (because a “usage” in training is hard to quantify), but not impossible. The industry might also push for a share of AI companies’ revenues rather than per-song fees – e.g., if generative AI music services thrive, a fixed percentage of their revenue would go into a pool for the creators whose works were used to build those models. In essence, creators would become stakeholders in the AI boom, not bystanders.
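A quick sketch of that revenue-share idea, with invented figures: a fixed percentage of a service’s revenue funds a creator pool, which then splits by attribution weight (the weights themselves being the hard, unsolved part):

```python
# Hypothetical sketch: creators as revenue-share stakeholders rather than
# per-song licensors. A fixed percentage of an AI service's revenue flows
# into a creator pool and splits by attribution weight. Figures invented.

SERVICE_REVENUE_EUR = 50_000_000  # annual revenue of a generative music service
CREATOR_SHARE = 0.20              # negotiated or statutory creator percentage

pool = SERVICE_REVENUE_EUR * CREATOR_SHARE
weights = {"Rights holder A": 0.5, "Rights holder B": 0.3, "Long-tail fund": 0.2}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must cover the pool
for holder, w in weights.items():
    print(f"{holder}: EUR {pool * w:,.2f}")
```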
As of 2025, the chance to rebuild lies in proactive collaboration, and some forward-looking initiatives are attempting exactly that. CISAC and other music industry bodies have praised the EU’s AI Act as a “groundbreaking step” towards responsible AI – particularly its provisions upholding copyright and transparency. There is talk of an “AI music code of conduct” under which AI firms voluntarily agree to certain principles, such as using only public-domain or licensed music and clearly identifying AI-generated tracks. The Human Artistry Campaign, launched in 2023 by a coalition of artist groups, laid out core principles to guide AI in support of human creativity and has gathered support globally. These soft measures, combined with evolving law, will shape the norms.
One thing often noted as missing in the hype is a robust framework for attribution. In the current state, an AI-generated song that becomes popular doesn’t necessarily credit the artists whose styles or samples influenced it. In contrast, if a human producer makes a track clearly inspired by another, there’s at least the court of public opinion or music journalism that draws the lineage. With AI, the lineage can be opaque. Creators are demanding not just payment but recognition – some propose that AI music should come with metadata listing source material or “training data credits” if known. This is tricky (deep models don’t output a bibliography), but researchers are working on methods to trace which training items most influenced a given output. Such transparency could help ensure original creators aren’t erased in the AI remix.
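As a toy illustration of what machine-generated “training data credits” could look like, the following sketch ranks training items by embedding similarity to an output and emits the top matches as metadata. The embeddings here are random stand-ins, and real attribution research (e.g. influence functions) is considerably harder than a nearest-neighbour lookup:

```python
# Hypothetical sketch of "training data credits": rank training items by
# embedding similarity to an AI output and emit the top matches as
# metadata. Vectors and titles below are random stand-ins, not real data.

import numpy as np

rng = np.random.default_rng(0)
training = {f"track_{i:03d}": rng.standard_normal(128) for i in range(1_000)}
output_embedding = rng.standard_normal(128)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def training_data_credits(output: np.ndarray, k: int = 5) -> list[tuple[str, float]]:
    """Return the k training items most similar to the output, as credits."""
    scores = [(title, cosine(output, emb)) for title, emb in training.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:k]

for title, score in training_data_credits(output_embedding):
    print(f"{title}: similarity {score:.3f}")
```

Similarity in embedding space is only a proxy for influence, but even a proxy would give creators something today’s opaque pipelines do not: a named lineage attached to each output.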
Amid these debates, some initiatives are already exploring practical licensing frameworks tailored for the AI age. One example is CORPUS, a project designed to enable the legal training of generative music models on high-quality, artist-contributed material. Unlike models built on scraped data, CORPUS operates with clear consent and compensation mechanisms, allowing AI developers to license music directly from rights holders. It doesn’t attempt to solve everything—questions around outputs, voice rights, and attribution remain—but it does demonstrate that scalable, rights-respecting training data models are possible. As regulatory momentum builds, such bottom-up efforts could complement top-down legislation by showing what workable alternatives actually look like in practice.
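To show what a consent-first corpus implies at the data level, here is a purely illustrative record type – emphatically not CORPUS’s actual schema; every field name and rate below is an assumption made for the example:

```python
# Purely illustrative: the kind of per-track record a consent-first
# training corpus implies. This is NOT CORPUS's actual data model;
# every field name and figure below is an assumption for the example.

from dataclasses import dataclass

@dataclass(frozen=True)
class LicensedTrack:
    track_id: str
    rights_holder: str
    consent_granted: bool           # explicit opt-in, never scraped
    allowed_uses: tuple[str, ...]   # e.g. ("model_training",)
    rate_per_use_eur: float         # agreed compensation per licensed use

track = LicensedTrack("trk-0001", "Example Artist", True, ("model_training",), 2.50)
assert track.consent_granted, "tracks without consent never enter the corpus"
print(track)
```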
Conclusion
In the midst of this AI music boom, the industry stands at a crossroads. The “flood” of generative content is undeniably real – songs are being created and consumed in ways that were science fiction a few years ago. But equally real is the chance to rebuild the music ecosystem’s foundations. The hype around AI music belies deeper questions of authorship, ownership, and artistry that society now has an opportunity to address. We are witnessing power shifts: from the few gatekeepers of old (labels, studios) to new ones (AI platforms, big tech), and potentially from human creators to algorithms. But these shifts are not foregone conclusions; they depend on choices we make now about laws, norms, and business models. Europe’s cautious, principle-driven approach might yield a more balanced outcome – one where AI serves creators rather than supplanting them, and where new creative labor (like training AI models or curating AI outputs) is valued. The U.S.’s market-driven approach could speed innovation but must guard against treating artists as collateral damage in the tech race. Asia’s headfirst embrace of AI in music shows the creative promise and perils of moving fast: enormous scale and new content, but also a glimpse of a world where human musicians have to work harder to prove their worth against machines.
What’s missing, arguably, is trusted systems of attribution and compensation, and a cultural consensus on how we want AI to interact with something as human as music. Music has always been a blend of art and technology – from electric guitars to Auto-Tune – and each revolution forced a recalibration. Generative AI is just the latest disruptive instrument. It can produce an endless sea of soundalikes, but it’s humanity that will decide whether those are curiosities, utilities, or true cultural touchstones. Listeners might enjoy AI-generated songs, but will they cherish them the way they do a song written from a human soul? The jury is out.
In navigating this, the music world can seize the chance to rebuild an industry that truly respects creators at every level – by integrating AI in a way that amplifies creativity and compensates originators. Some efforts are already taking steps in that direction. CORPUS, for example, is building a legally licensed music corpus with clear consent and compensation for contributors—offering one concrete model for how AI training can support, rather than extract from, the music community. It’s a small part of a much larger puzzle, but it shows that ethical, artist-driven infrastructure is possible.
The flood of AI music may feel overwhelming, but just as communities rebuild after a literal flood, the creative industries can rebuild stronger with the lessons learned. As one European rights group declared, it’s about ensuring “a mutually beneficial future for innovation and creation.” The coming years will show whether we can achieve that balance. The hopeful vision is an industry where AI becomes a tool in artists’ arsenals (not a threat), where fans get new interactive musical experiences and know that their favorite artists are respected in the process, and where the definition of “musician” expands rather than contracts. The flood is here; the rebuilding is up to us.
Further Sources
Industry Reports & Analyses
- Forbes – AI’s Impact On Music In 2025: Licensing, Creativity And Industry Survival
An in-depth analysis of how AI is reshaping the music industry, focusing on licensing challenges and creative implications.
- GlobeNewswire – Generative Artificial Intelligence in Music Strategic Business Report 2024–2030
Market projections indicating significant growth in the generative AI music sector.
- Artsmart.ai – AI in Music Industry Statistics 2025: Market Growth & Impact
Statistical insights into the adoption and impact of AI in the music industry.
Legal & Ethical Debates
- The Guardian – Paul McCartney and Dua Lipa among artists urging Starmer to rethink AI copyright plans
Coverage of artists' concerns over AI's use of copyrighted material.
- The Times – Elton John is right to protest against AI's pillaging of his work
An opinion piece on the ethical implications of AI in music.
- Pitchfork – SoundCloud Updates AI Policy in Terms of Use After Backlash
Details on SoundCloud's policy changes in response to AI-related controversies.
CORPUS Project
- CORPUS Official Website
Learn more about CORPUS, a new system where music creators are rewarded for their contributions to a shared musical corpus.