The Flood Is Real. So Is the Chance to Rebuild. (1/3)

Inside the AI music boom: hype, power shifts, and what’s missing.
Part 1: Adoption and Why Now – The Explosion of Generative AI Music
Introduction
Generative AI music has rapidly evolved from a niche experiment into a mainstream phenomenon. The global market for AI-driven music is projected to grow tenfold – from around $300 million in 2023 to over $3.1 billion by 2028 – fueled by massive investment and technological advances. This report provides a comprehensive overview of the current generative AI music landscape, focusing on hard data and diverse perspectives, especially those of musicians around the world. We examine adoption statistics, technical breakthroughs, legal debates, and industry viewpoints to illuminate both the opportunities and challenges that AI-generated music presents in 2025.
Executive Summary
Generative AI is transforming the music industry at unprecedented speed. Platforms like Suno and Udio have reached millions of users within months, generating music at industrial scale—often with quality that rivals commercial releases. Listeners are engaging, creators are experimenting, and tech companies are racing ahead.
But the infrastructure beneath this boom is fractured.
Most generative models rely on unlicensed data. Rights holders are litigating. Artists are split between curiosity and fear. Legal frameworks lag behind, while the sheer volume of AI-generated content floods platforms, raising urgent questions around authorship, attribution, and economic participation.
The success of AI music systems is no longer theoretical: it’s happening. But the systems that govern their use—creative, legal, and financial—are not keeping pace. Without intervention, the gap between AI capabilities and cultural accountability will only widen.
CORPUS enters this landscape with a constructive alternative:
A music licensing system designed for the age of AI. Artist-led, legally sound, and globally scalable. CORPUS makes it possible to train AI on high-quality music with full consent, clear rights, and fair compensation. It shifts the dynamic from extraction to collaboration, and opens the door for musicians to actively shape the next generation of creative tools.
This report outlines the current state of generative AI in music—its drivers, risks, and contradictions—and shows the context in which CORPUS offers one path forward.
Part 1
Before we dive into specific tools or the legal grey zones of AI music, we need to understand just how widespread and fast-moving adoption has been. This first part sets the stage: Who is using generative music platforms? How many people? Why now? We explore the underlying conditions—technological, cultural, and historical—that make this boom possible. This context is essential for grasping the weight of what follows: the new tools this technology enables (Part 2) and the regulation it urgently demands (Part 3).
Adoption of Generative AI Music Platforms
Explosive Growth in User Adoption: New AI music platforms have seen surging user numbers and content creation volumes. For example, Suno, a generative music startup founded in 2022, claims it has already attracted over 25 million people creating songs – many making music for the first time. Within Suno, engagement is high: nearly 50% of new users hit the 10-song free limit on their first day, indicating strong interest and retention. Another startup, Udio, launched in 2024, drew over 600,000 users in its first two weeks of beta and now sees content being made at an astounding rate of 10 songs per second. This equates to roughly 864,000 AI-generated tracks per day on Udio's platform.
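For transparency, the per-day figure follows directly from the stated per-second rate. A minimal sanity check in Python (the variable names are ours, not Udio's):

```python
# Sanity-check the reported generation rate: 10 songs/second sustained all day.
songs_per_second = 10
seconds_per_day = 24 * 60 * 60             # 86,400 seconds in a day
print(songs_per_second * seconds_per_day)  # -> 864000 tracks per day
```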
AI-Generated Music By the Numbers: Established generative music services also report massive content catalogs. Boomy, an AI music app founded in 2018, has seen over 19 million songs generated to date. In the Middle East, streaming service Anghami partnered with generative platform Mubert and by early 2023 had already created 170,000 AI-generated songs, on track to host 200,000+ AI tracks for its listeners. And in China, Tencent Music Entertainment (TME) disclosed that it has released over 1,000 songs with AI-synthesized vocals – with one hit song surpassing 100 million streams. These figures underscore that AI music is not just tech hype; it is being widely adopted and consumed across different markets.
To summarize some key adoption metrics across platforms and regions:
| Platform (Region) | Key Adoption Metric |
| --- | --- |
| Suno (global) | 25+ million users creating songs; nearly 50% of new users hit the 10-song free limit on day one |
| Udio (global) | 600,000+ users in first two weeks of beta; ~10 songs generated per second (~864,000/day) |
| Boomy (global) | 19+ million songs generated since 2018 |
| Anghami + Mubert (MENA) | 170,000 AI-generated songs by early 2023; on track for 200,000+ |
| Tencent Music (China) | 1,000+ releases with AI-synthesized vocals; one track surpassed 100 million streams |
Streaming Platforms Flooded with AI Content: A significant portion of new music uploads to streaming services now comes from AI. Deezer (France) reports that by April 2025, users were adding over 20,000 fully AI-generated tracks per day, which accounted for 18% of all new uploads – nearly double the share from just a few months prior. This surge has prompted Deezer to deploy AI-detection tools and remove “bot-made” songs from algorithmic recommendations. Rival platform Spotify has also grappled with AI uploads: in mid-2023 it removed tens of thousands of songs created via Boomy amid concerns of artificial streaming manipulation. Listeners are indeed encountering AI-generated music in the wild – sometimes without realizing it. In one experiment, a musician used Suno’s AI to create a song and slipped it into Spotify playlists, garnering over 64,000 listens; none of the curators or listeners flagged the track as artificial. This suggests that, when quality is sufficient, casual listeners may treat AI songs no differently than human-made tracks.
User Behavior and Use Cases: Generative music platforms reveal a spectrum of user engagement. Suno’s community, for instance, ranges from “power users” spending hours perfecting songs to casual users making fun tunes with children or personal “audio souvenirs” (what Suno calls “soundtracking your life”). Notably, many who start out creating AI music continue as listeners on these platforms – enjoying a personalized feed of AI-generated songs tailored to their prompts or preferences. This blurs the line between creator and consumer, hinting at a more interactive, participatory music culture. In a survey of over 15,000 music creators, 35% reported having already used some form of AI in their music-making (rising to 51% among those under 35). Enthusiasm is tempered by caution, however: the same survey found that 71% of creators fear AI’s growth could make it impossible for musicians to earn a living from their work. This mix of excitement and concern is shaping how adoption unfolds.
Why Is AI Music Booming Now? (Modern AI vs. Early Experiments)
Several converging factors explain why generative AI music is succeeding now where earlier efforts remained limited:
- Advances in Model Quality: Today’s AI models can produce music with a realism and complexity that were unattainable in the past. Cutting-edge deep learning architectures (e.g. large transformer models and diffusion networks) learn directly from audio, capturing nuances of timbre, rhythm, and phrasing. The latest generative songs are often indistinguishable from human-made music in popular genres. Listeners no longer hear the tell-tale glitches of earlier AI – the audio equivalent of image generators’ “extra fingers”; a well-tuned model can create a passable pop or hip-hop track complete with vocals, instrumentation, and production value. This represents a leap from even a few years ago (for example, OpenAI’s 2020 Jukebox model could mimic artist styles but with audible distortions). Improved model fidelity greatly boosts mainstream acceptability.
- Increases in Computing Power: The sheer compute available (GPUs, TPUs, cloud clusters) has grown exponentially, allowing researchers to train enormous music models and sample lengthy audio outputs. What once took days on a supercomputer can now be done in hours on accessible hardware. Real-time AI music generation on consumer devices is becoming feasible. This horsepower lets models crunch through huge datasets of audio and generate songs in seconds, as Udio demonstrated (producing a fully mastered track in ~40 seconds in-app). Greater computing muscle also enables higher audio quality (e.g. higher sample rates, stereo sound) and longer compositions than early systems.
- Abundance of Training Data: Modern AI music thrives on vast quantities of digital music data. The internet and streaming era have provided millions of recordings and detailed audio datasets for AI to learn from – far more than what was available to earlier researchers. Moreover, models can ingest not just scores (MIDI or notation) but raw audio, learning directly from the finished songs including their production qualities. This broad exposure lets AIs internalize the patterns of entire genres and eras. (However, as discussed later, using all this data has sparked legal controversy over copyright.) The availability of rich training corpora – including isolated instrument stems, music with aligned lyrics, and large open sample libraries – is a key enabler of today’s generative music quality.
- Music Homogeneity and Patterns: Some observers note that much of contemporary popular music follows relatively standardized structures (common chord progressions, song forms, production styles). This homogenization may incidentally make it easier for AI to learn and reproduce mainstream music styles. Studies have found that over recent decades, aspects of chart hits such as harmonic complexity and timbral diversity have decreased, leaving songs sounding more similar to one another. In effect, an AI model doesn’t need to account for a wildly diverse stylistic range to generate a “typical” radio-friendly track. The formulaic nature of many hit songs thus plays to the strengths of pattern-recognizing algorithms: once an AI masters the formula, it can churn out endless variations that sound plausible to listeners.
- Historical Precedents vs. Today’s AI: Generative music is not entirely new – algorithmic composition experiments date back decades, but their impact was limited by technology. In 1957, Lejaren Hiller and Leonard Isaacson’s Illiac Suite became the first score composed by a computer, using random number generation and screening rules to create a string quartet (a toy sketch of this generate-and-test approach follows this list). Similarly, mechanical music systems like player pianos and music boxes automated performance long before AI. However, those earlier systems could not mimic the expressive subtlety or audio realism of human-made music. The Illiac Suite, while historically important, sounded more like a quirky mathematical exercise than a hit record. Later, in the 1990s and 2000s, researchers like David Cope used algorithms to emulate classical composers’ styles (e.g. the Experiments in Musical Intelligence project), and early software like Band-in-a-Box auto-generated accompaniment. Yet those systems were largely symbolic or rule-based, lacking raw-audio learning and the “feel” of real performances. What’s different now is that neural networks learn from actual audio recordings, capturing performance nuances and production aesthetics. Paired with modern computing, this means AI can compose, arrange, and produce a full song that sounds studio-made. In essence, generative AI music works now because the models finally have the quality, the data, and the compute to sound genuinely musical, bridging the gap that made earlier mechanical or algorithmic music feel artificial or “soulless.”
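To make that contrast concrete, here is a toy Python sketch of the generate-and-test idea behind rule-based systems like the Illiac Suite. It is our own illustration under invented assumptions – the C-major scale, the leap limit, and all names are made up for the example, not taken from the historical program: random notes are proposed, and hand-written rules decide which survive.

```python
# Toy generate-and-test composer: propose random scale degrees, keep only
# those that pass simple counterpoint-like rules. (Illustrative only.)
import random

C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]  # scale degrees 0..6

def allowed(melody, candidate):
    """Rule screen: no leap wider than a fifth; after a leap, move by step."""
    leap = abs(candidate - melody[-1])
    if leap > 4:                                   # wider than a fifth: reject
        return False
    if len(melody) >= 2 and abs(melody[-1] - melody[-2]) >= 3 and leap > 1:
        return False                               # a leap must resolve stepwise
    return True

def generate_melody(length=8, seed=None):
    rng = random.Random(seed)
    melody = [0]                                   # start on the tonic
    while len(melody) < length:
        candidate = rng.randrange(len(C_MAJOR))    # random proposal...
        if allowed(melody, candidate):             # ...kept only if rule-correct
            melody.append(candidate)
    return [C_MAJOR[degree] for degree in melody]

print(generate_melody(seed=1))  # e.g. ['C', 'D', 'C', ...]
```

A system like this yields note sequences that are rule-correct but expressively flat: it has no concept of timbre, performance, or production, which is precisely the gap that audio-trained neural networks have since closed.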
The use of AI in music is no longer speculative. It's here, it's huge, and it's only accelerating. But most of this growth has happened without a shared understanding of its implications. In the next section, we’ll look more closely at the tools AI has introduced—how they’re being used, what they make possible, and what new creative behavior is emerging in response.