Dams in the Infinite River: Limits to Copyright’s Power Over the Next Generation of Generative AI Media
Can copyright effectively govern creative works in an age of infinite digital remixing? Is it possible—or even reasonable—to audit how creators use AI in their art? Should copyright law adapt to prioritize attribution and visibility over strict ownership? What new models are needed for artists to capture value in a world of mass, AI-driven content? These are among the questions raised by Zachary Cooper in his presentation. Watch the video or read the transcript below.

Zachary Cooper, a researcher at the Amsterdam Law and Technology Institute (Vrije Universiteit Amsterdam), delivered a provocative address entitled “Dams for the Infinite River” at the User Rights Conference in Geneva in June 2025. Cooper’s talk challenged conventional debates in copyright law by spotlighting the seismic impact of AI on media creation. Instead of focusing on traditional authorship thresholds, Cooper urged legal scholars and policy-makers to grapple with deeper issues emerging as generative AI lets anyone remix, alter, and mass-produce creative content with unprecedented ease. His research was supported by the Weizenbaum Institute in Berlin and the Copyright Society.

Throughout his presentation, Cooper highlighted the profound difficulties facing rights holders in auditing creative works crafted with AI, questioning both the feasibility and the ethics of tools like watermarking. He noted that legal frameworks around the world are ill-equipped to handle the blurred boundaries between human and machine authorship, which can vary dramatically from one country—or even one creative tool—to the next. As technology enables infinitely variable, interactive content, Cooper argued, copyright law is rapidly losing its power to define, control, and protect creative output.

Cooper concluded by suggesting that copyright is akin to “a dam in an infinite river”—an increasingly obsolete barrier in a world defined by endless remix and transformation.
He warned that unless legal and industry leaders embrace collective licensing models and prioritize attribution and visibility, platforms with massive network effects will continue to undercut the negotiating power of original creators. The talk raised urgent questions about the future of artistic value and legal protection in the era of generative AI.

Cleaned-Up Full Transcript

Below is the transcript, lightly edited for clarity and grammar. Interjections, filler, and stage directions (music playing, technical interruptions) have been omitted unless needed for meaning.

I’m Zach Cooper from the Amsterdam Law and Technology Institute at Vrije Universiteit Amsterdam. This research has also been sponsored and supported by the Weizenbaum Institute in Berlin and the Copyright Society.

I’ve titled the presentation “Dams for the Infinite River.” What do I mean by that? I’ve been trying to reframe the conversation around what I believe are the actual challenges as AI dramatically changes the ways we consume and produce media in the 21st century. Instead of focusing on longstanding debates about authorship thresholds, I argue that a collection of unspoken challenges will more fundamentally shape the issues facing rights holders over the coming decade—some already present, others just emerging.

As a cultural reference, I was encouraged to revisit Taylor Swift’s music, and her work offers a useful metaphor for my argument. With new technologies, it’s possible to change a song’s structure, lyrics, or style almost instantly. For instance, I was able to alter one of her songs for this presentation faster than it took us to listen to it here.
That example raises a fundamental question: Now that we can so easily transform any media, what role does intellectual property play? How can it govern such a fluid world?

It’s become widely accepted that pressing a button to generate content isn’t the same as creating something as an artist. However, definitions differ worldwide. In China, a lengthy and detailed AI prompt might count as authorship. In the US, AI is considered a tool, but the boundaries between “human” and “AI-generated” work remain undefined. Europe has largely avoided the issue, though courts have denied copyright to works made by AI without human involvement. The problem with all these approaches is that they split copyright protection based on how much AI was used. In reality, professional creative software has countless generative and non-generative tools, and creators often use many in combination. At present, pressing one set of buttons might grant copyright; pressing another might not—yet how can we audit those choices?

When people talk about “AI,” they imagine a single function, but AI can do anything: from mastering music to generating drumlines, altering instrument sounds, or even creating new genres. The spectrum of creative tools and applications is vast. And generative art isn’t simply “slop” made thoughtlessly; sometimes, the act of generation itself creates entirely new forms of art. Artists like Databots stream infinitely generated music, turning the process and the model itself into the artwork.

Labeling something as “AI-generated” doesn’t reveal anything meaningful about the creator’s relationship to the work. Judges, copyright offices, and others cannot just ask whether someone used generative AI—they need to know exactly how it was used, down to the specifics of each prompt or button pressed. Currently, there’s no way to reliably track or audit these creative decisions.
The available methods are either to trust creators to self-report their practices, which is unreliable, especially for older works, or to track every creative act—a solution that invades privacy and may stifle creativity. Watermarking is frequently proposed but is technically weak; watermarks can be stripped from images or audio files, and existing protocols like C2PA depend on metadata, which is easy to remove. Even with robust watermarking, creators could simply recreate works without relying on AI, bypassing AI detection entirely.

Thus, the system is inconsistent and essentially unworkable. Current enforcement tries to assign rights based on the use of AI features that remain unauditable and poorly defined, never truly capturing the creator’s intent or involvement. Alternatively, we accept new creative models and ask: What are the real
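Cooper’s point about the fragility of metadata-based provenance can be made concrete. A minimal sketch follows, assuming nothing beyond the PNG file format: it builds a tiny 1×1 PNG carrying a provenance label in an ancillary `tEXt` chunk (C2PA’s actual manifest format is more elaborate; the plain text chunk here is purely illustrative), then shows that a trivial pass over the file, keeping only the critical chunks needed to render the pixels, silently discards the label. The `strip_metadata` helper is a hypothetical name, not part of any real tool.

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: 4-byte length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

SIG = b"\x89PNG\r\n\x1a\n"

# A minimal 1x1 grayscale PNG carrying a provenance-style tEXt chunk.
png = (
    SIG
    + chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
    + chunk(b"tEXt", b"provenance\x00ai-generated")
    + chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + one pixel
    + chunk(b"IEND", b"")
)

def strip_metadata(data: bytes) -> bytes:
    """Drop every ancillary chunk (lowercase first letter in the type),
    keeping only the critical chunks needed to render the pixels."""
    out, pos = [data[:8]], 8  # keep the 8-byte PNG signature
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        end = pos + 12 + length  # length + type + data + CRC
        if ctype[0] < 0x61:      # uppercase first byte => critical chunk
            out.append(data[pos:end])
        pos = end
    return b"".join(out)

clean = strip_metadata(png)
print(b"provenance" in png, b"provenance" in clean)  # True False
```

The image itself is untouched: the stripped file still begins with the PNG signature and retains IHDR, IDAT, and IEND, so any viewer renders it identically, while the provenance label is simply gone. This is why chunk- or metadata-based watermarking alone cannot bear the auditing burden Cooper describes.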