AI, Copyright, and the Future of Creativity: Notes from the Panama International Book Fair
During the second week of August, I was invited to speak at the Panama International Book Fair, an event hosted by the World Intellectual Property Organization (WIPO), the Panama Copyright Office, the Ministry of Culture, and the Panama Publishers Association. My presentation focused on the increasingly complex intersection between copyright law and artificial intelligence (AI), a topic now at the center of global legal, cultural, and economic debate. This post summarizes the core arguments of that presentation, drawing on recent litigation, academic research, and policy developments, including the U.S. Copyright Office’s May 2025 report on generative AI.

How should copyright law respond to the widespread use of protected works in the training of generative AI systems? The analysis points to several emerging areas of debate: the limits of fair use and exceptions, the need for enforceable remuneration rights, and the role of licensing and regulatory oversight.

The article proceeds in five parts: it begins with an overview of the legal and technological context surrounding AI training; it then reviews academic proposals for recalibrating copyright frameworks; it examines recent court decisions that test the boundaries of current doctrine; it summarizes the U.S. Copyright Office’s 2025 report as an institutional response; and it concludes by outlining four policy considerations for future regulation.

A Shifting Legal and Technological Landscape

The integration of generative AI into creative and informational ecosystems has exposed foundational tensions in copyright law. Current systems routinely ingest large volumes of copyrighted works, such as books, music, images, and journalism, to train AI models. This practice has given rise to unresolved legal questions: Can copyright law meaningfully regulate the use of training data? Do existing doctrines and legal provisions, such as fair use or exceptions and limitations, extend to these practices? What remedies, if any, are available to rightsholders whose works are used without consent?

These questions remain open across jurisdictions. While some courts and regulatory agencies have begun to respond, a substantial part of the debate is now being shaped by legal scholarship and litigation, each proposing frameworks to reconcile AI development with copyright’s normative commitments. The following sections examine this evolving landscape, beginning with recent academic proposals.

Academic Perspectives: Towards a New Equilibrium

In reviewing the literature, several clear themes emerge. First, some authors agree that remuneration rights for authors must be strengthened. Geiger, Scalzini, and Bossi argue that to truly ensure fair compensation for creators in the digital age, especially in light of generative AI, EU copyright law must move beyond weak contractual protections and instead implement strong, unwaivable remuneration rights that guarantee direct and equitable revenue flows to authors and performers as a matter of fundamental rights.

Second, some scholars highlight that the technical opacity of generative AI demands new approaches to author remuneration. Cooper argues that as AI systems evolve, it will become nearly impossible to determine whether a work was AI-generated or whether a particular copyrighted work was used in training. He warns that this loss of traceability renders attribution-based compensation models unworkable.
Instead, he calls for alternative frameworks to ensure creators are fairly compensated in an age of algorithmic authorship.

Third, scholars like Pasquale and Sun argue that policymakers should adopt a dual system of consent and compensation: giving creators the right to opt out of AI training and establishing a levy on AI providers to ensure fair payment to those whose works are used without a license. Gervais, meanwhile, argues that creators should be granted a new, assignable right of remuneration for the commercial use of generative AI systems trained on their copyrighted works, complementing, but not replacing, existing rights related to reproduction and adaptation.

There is also a growing consensus on the need to modernize limitations and exceptions, particularly for education and research. Flynn et al. show that a majority of the world’s countries lack exceptions that enable modern research and teaching, such as academic uses of online teaching platforms. And in Science, several authors propose harmonizing international and domestic copyright exceptions to explicitly authorize text and data mining (TDM) for research, enabling lawful, cross-border access to copyrighted materials without requiring prior licensing. At WIPO, the Standing Committee on Copyright and Related Rights (SCCR) has been taking steps in this area by approving a work program on limitations and exceptions, currently under discussion ahead of the upcoming SCCR 47. And in the Committee on Development and Intellectual Property (CDIP), there is an approved Pilot Project on TDM to Support Research and Innovation in Universities and Other Research-Oriented Institutions in Africa, proposed by the African Group (CDIP/30/9 REV). My own work, as well as that of Díaz & Martínez, has emphasized the urgency of updating Latin American educational exceptions to account for digital and cross-border uses.

Eleonora Rosati argues that unlicensed AI training falls outside existing EU and UK copyright exceptions, including Article 3 of the DSM Directive (TDM for scientific research), Article 4 (general TDM with opt-outs), and Article 5(3)(a) of the InfoSoc Directive (use for teaching or scientific research). She finds that exceptions for research and education, as well as fair use-style defenses, do not cover the full scope of AI training activities. As a result, she concludes that a licensing framework is legally necessary and ultimately unavoidable, even when training is carried out for non-commercial or educational purposes.

Finally, policy experts like James Love warn that “one-size-fits-all” regulation risks sidelining the medical and research breakthroughs promised by artificial intelligence. The danger lies in treating all training data as equivalent: conflating pop songs with protein sequences, or movie scripts with clinical trial data. Legislation that imposes blanket consent or licensing obligations, without distinguishing between commercial entertainment and publicly funded scientific knowledge, risks chilling socially valuable uses of AI. Intellectual property law for AI must be smartly differentiated, not simplistically uniform.

Litigation as a Site of Doctrinal Testing

U.S. courts have become a key venue for testing the boundaries of copyright in the age of artificial intelligence. In the past two years, a growing number of cases





