AI, Copyright, and the Future of Creativity: Notes from the Panama International Book Fair
By Andrés Izquierdo
During the second week of August, I was invited to speak at the Panama International Book Fair, an event hosted by the World Intellectual Property Organization (WIPO), the Panama Copyright Office, the Ministry of Culture, and the Panama Publishers Association. My presentation focused on the increasingly complex intersection between copyright law and artificial intelligence (AI)—a topic now at the center of global legal, cultural, and economic debate. This post summarizes the core arguments of that presentation, drawing on recent litigation, academic research, and policy developments, including the U.S. Copyright Office’s May 2025 report on generative AI.
How should copyright law respond to the widespread use of protected works in training generative AI systems? The analysis identifies several key areas of emerging debate: the limits of fair use and other exceptions, the need for enforceable remuneration rights, and the role of licensing and regulatory oversight. The article proceeds in five parts: an overview of the legal and technological context surrounding AI training; a review of academic proposals for recalibrating copyright frameworks; an examination of recent court decisions that test the boundaries of current doctrine; a summary of the U.S. Copyright Office’s 2025 report as an institutional response; and a conclusion outlining four policy considerations for future regulation.
A Shifting Legal and Technological Landscape
The integration of generative AI into creative and informational ecosystems has exposed foundational tensions in copyright law. Current systems routinely ingest large volumes of copyrighted works—such as books, music, images, and journalism—to train AI models. This practice has given rise to unresolved legal questions: Can copyright law meaningfully regulate the use of training data? Do existing doctrines and legal provisions—fair use, or exceptions and limitations—extend to these practices? What remedies, if any, are available to rightsholders whose works are used without consent?
These questions remain open across jurisdictions. While some courts and regulatory agencies have begun to respond, a substantial part of the debate is now being shaped by legal scholarship and litigation, each proposing frameworks to reconcile AI development with copyright’s normative commitments. The following sections examine this evolving landscape, beginning with recent academic proposals.
Academic Perspectives: Towards a New Equilibrium
A review of the literature reveals several clear themes.
First, several authors argue that remuneration rights for authors must be strengthened. Geiger, Scalzini, and Bossi contend that to truly ensure fair compensation for creators in the digital age, especially in light of generative AI, EU copyright law must move beyond weak contractual protections and instead implement strong, unwaivable remuneration rights that guarantee direct and equitable revenue flows to authors and performers as a matter of fundamental rights.
Second, some scholars highlight that the technical opacity of generative AI demands new approaches to author remuneration. Cooper argues that as AI systems evolve, it will become nearly impossible to determine whether a work was AI-generated or whether a particular copyrighted work was used in training. He warns that this loss of traceability renders attribution-based compensation models unworkable. Instead, he calls for alternative frameworks to ensure creators are fairly compensated in an age of algorithmic authorship.
Third, scholars like Pasquale and Sun argue that policymakers should adopt a dual system of consent and compensation—giving creators the right to opt out of AI training and establishing a levy on AI providers to ensure fair payment to those whose works are used without a license. Gervais, meanwhile, argues that creators should be granted a new, assignable right of remuneration for the commercial use of generative AI systems trained on their copyrighted works—complementing, but not replacing, existing rights related to reproduction and adaptation.
There is also a growing consensus on the need to modernize limitations and exceptions, particularly for education and research. Flynn et al. show that a majority of the countries in the world do not have exceptions that enable modern research and teaching, such as academic uses of online teaching platforms. And in Science, several authors propose harmonizing international and domestic copyright exceptions to explicitly authorize text and data mining (TDM) for research, enabling lawful, cross-border access to copyrighted materials without requiring prior licensing.
At WIPO, the Standing Committee on Copyright and Related Rights (SCCR) has taken steps in this area by approving a work program on limitations and exceptions (L&Es), currently under discussion ahead of the upcoming SCCR 47. And in the Committee on Development and Intellectual Property (CDIP), a pilot project has been approved on TDM to Support Research and Innovation in Universities and Other Research-Oriented Institutions in Africa – Proposal by the African Group (CDIP/30/9 REV).
My own work, as well as that of Díaz & Martínez, has emphasized the urgency of updating Latin American educational exceptions to account for digital and cross-border uses.
Eleonora Rosati argues that unlicensed AI training falls outside existing EU and UK copyright exceptions, including Article 3 of the DSM Directive (TDM for scientific research), Article 4 (general TDM with opt-outs), and Article 5(3)(a) of the InfoSoc Directive (use for teaching or scientific research). She finds that exceptions for research, education, or fair use-style defenses do not apply to the full scope of AI training activities. As a result, she concludes that a licensing framework is legally necessary and ultimately unavoidable, even when training is carried out for non-commercial or educational purposes.
Finally, policy experts like James Love warn that “one-size-fits-all” regulation risks sidelining the medical and research breakthroughs promised by artificial intelligence. The danger lies in treating all training data as equivalent—conflating pop songs with protein sequences, or movie scripts with clinical trial data. Legislation that imposes blanket consent or licensing obligations, without distinguishing between commercial entertainment and publicly funded scientific knowledge, risks chilling socially valuable uses of AI. Intellectual property law for AI must be smartly differentiated, not simplistically uniform.
Litigation as a Site of Doctrinal Testing
U.S. courts have become a key venue for testing the boundaries of copyright in the age of artificial intelligence. In the past two years, a growing number of cases have explored whether existing doctrines and foundational concepts of copyright—such as fair use, reproduction, and originality—can meaningfully apply to machine learning systems. While judges assess these questions within the confines of current law, their rulings are increasingly informing the policy debate over whether statutory reform may be necessary.
In August 2025, the court in News Corp v. Perplexity AI denied Perplexity’s motion to dismiss or transfer the case, rejecting the company’s jurisdictional challenge by pointing to its business presence in, and targeting of, New York-based readers, thereby confirming the Southern District of New York as a proper forum. The ruling does not address the merits of the infringement claim—that Perplexity allegedly scraped and repurposed News Corp’s content via AI summarization without permission—but establishes that the case will proceed in New York for now.
In June 2025, the court in Bartz v. Anthropic PBC found that training AI models on lawfully purchased and digitized books constituted fair use—an act it described as “spectacularly” or “quintessentially transformative,” akin to a writer learning from the works of others—but held that copying and storing pirated books to build a central, permanent library was not fair use and must proceed to trial for damages. The court’s decision stands, as Judge Alsup denied Anthropic’s request to appeal before trial; the piracy-related phase will proceed to a jury scheduled for December 2025.
In New York Times v. OpenAI & Microsoft, filed in December 2023, the plaintiffs allege that their articles were ingested without authorization to train large language models—a claim bolstered by the court’s refusal to dismiss key copyright infringement counts, including direct and contributory liability. The dispute includes claims that outputs from the AI models sometimes “regurgitate” or closely mimic Times content, including near-verbatim reproductions or synthetic summaries resembling paywalled material. A key issue underlying the case is how substitution and market harm should be assessed under fair use. The court has allowed the case to proceed into discovery and potential trial.
In the creative industries, AI-generated music is facing its own legal reckoning. In UMG v. Suno, filed in mid-2024, Universal Music Group alleges that the startup unlawfully used copyrighted sound recordings to train generative AI systems that produce new music tracks. The case raises critical questions about whether training on copyrighted recordings constitutes infringement—and whether outputs that mimic the style or sound of human artists can trigger liability. The outcome could establish major precedents for musical copyright in an AI context.
Earlier decisions have already begun to set legal limits. In Thomson Reuters v. Ross Intelligence, a 2024 ruling rejected Ross’s fair use defense after finding that its AI legal assistant copied Westlaw headnotes in a way substantially similar to Westlaw’s own product, making the substitution risk more direct and obvious than in cases involving large, generalized training datasets.
Visual artists and photographers are also pursuing their claims. In parallel lawsuits (Andersen v. Stability AI and Getty v. Stability AI), courts are considering whether AI-generated images infringe the right to prepare derivative works and whether stripping metadata violates the prohibition on removing copyright management information.
On the literary front, the Authors Guild v. OpenAI remains in early stages, but could shape the compensation landscape for book authors whose works were used without consent in LLM training.
Finally, foundational principles are also being reaffirmed. In Thaler v. Perlmutter, the U.S. Court of Appeals for the District of Columbia Circuit in 2025 upheld the U.S. Copyright Office’s decision that purely AI-generated works without human authorship cannot be copyrighted. This ruling reasserted the human-centered foundation of copyright law.
Together, these cases are forging the contours of copyright doctrine in real time. They expose the limitations of existing frameworks—and the growing pressure on courts to reconcile new technologies with enduring legal principles.
Institutional Responses: The U.S. Copyright Office’s 2025 Report
The most comprehensive institutional response to date comes from the U.S. Copyright Office’s May 2025 report on generative AI. Key findings include:
- AI-generated works without human authorship are not copyrightable.
- Training on protected materials may require licenses, unless covered by clearly defined exceptions.
- New policy tools are under consideration, including remuneration systems, dataset registries, and transparency mandates.
The report draws a clear distinction between permissible and impermissible uses:
- Infringing: unauthorized reproduction, derivative-like outputs, removal of copyright management information.
- Permissible: fair use (where applicable), TDM exceptions, use of public domain or synthetic data.
At the same time, the Office underscores that uses for research, analysis, or non-substitutive educational functions are often “highly transformative” and therefore more likely to qualify as fair use. Training models for closed systems or research purposes is distinguished from training designed to output expressive works that compete with the originals.
Policy Directions: Building a Balanced Framework
To close the gap between technological reality and legal capacity, policymakers could explore the following topics more deeply:
- Remuneration rights for authors.
- Exceptions for text and data mining, especially for research, education, and non-commercial innovation.
- Transparent licensing schemes, with disclosure of training data.
- Tools that enable regulators, authors, and the public to understand how data is sourced, processed, and deployed in generative models.
These measures, while not exhaustive, could serve as building blocks for a future copyright system.
Conclusion
The legal community now faces a pivotal challenge: how to adapt copyright frameworks to AI without undermining the principles that underpin creative economies and public access to knowledge.
From academic proposals to courtroom debates, from fair use to human rights, this conversation is no longer theoretical. It is unfolding now—in legislation, in litigation, and in international fora. Countries like Panama, with vibrant creative sectors and strong cultural traditions, are well-positioned to contribute to a fairer, more inclusive AI governance model.
The question is not whether copyright will change. It’s how—and for whom.