AI

Artificial Intelligence, Blog, Centre News

Centre Announces Short Course on Intellectual Property and Artificial Intelligence

The Centre on Knowledge Governance is pleased to announce a new short course on AI and IP, to take place in Geneva from September 7-8, 2026.

COURSE DESCRIPTION

This intensive two-day course provides a comprehensive, comparative analysis of the evolving legal and policy landscape at the intersection of Intellectual Property (IP) and Artificial Intelligence (AI). Participants will explore pressing legal challenges, including copyright protection for AI training data, the patentability and copyright of AI-generated outputs, and the balance between proprietary interests and the public interest in research (Text and Data Mining and computational research) and the development of “Public AI.”

The course will feature in-depth comparative analysis of legal frameworks and policy proposals across the European Union (EU), the United States (USA), India, Brazil, Singapore and Japan, as well as in international forums such as the World Intellectual Property Organization, the World Trade Organization and other agencies.

The learning experience will culminate in a practical role-play exercise in which students will draft a model international legal instrument aimed at ensuring fair remuneration for creators while safeguarding the rights of researchers and public interest organizations developing AI infrastructure. This legal instrument will focus on a range of factors to be used in distinguishing research and public interest uses of AI from commercial competitive uses.

LEARNING OBJECTIVES

Upon completion of this course, participants will be able to:

WHO IS THIS PROGRAMME FOR?

This programme is particularly relevant for mid- to senior-level practitioners from various organisations working at the intersection of intellectual property and AI policy or scholarship, such as:

LECTURERS

The course will be directed by Sean Flynn and Ben Cashdan of the Centre on Knowledge Governance, Geneva Graduate Institute. Guest lecturers will participate in person or online to bring comparative expertise from jurisdictions such as India, Brazil, China and the African continent, in addition to the US and EU.

SCHOLARSHIPS

10 scholarships will be available for highly motivated government delegates from developing countries or representatives of public interest organizations who participate in multilateral policy processes on copyright, AI and the rights of researchers.

EXPRESSION OF INTEREST (INITIAL APPLICATION)

If you are interested in being considered as a student on the course, and/or if you would like to apply for one of our scholarships, please complete the following form:

Africa: Copyright & Public Interest, Artificial Intelligence, TDM Cases

Case Studies of AI for Good and AI for Development

Today the Geneva Centre on Knowledge Governance presents a series of Case Studies on AI for Good in Africa and the Global South. These grew out of our work on Text and Data Mining and our policy work in support of the Right to Research. Researchers in the Global South are responding to local and global challenges, from health and education to language preservation and climate change mitigation. In all these cases, computational methods and Artificial Intelligence (AI) play a leading role in finding and implementing solutions. A common thread that runs through all the cases is how intellectual property laws can support innovation and problem solving in the public interest, whilst protecting the interests of creators, communities and custodians of traditional knowledge. In addition, several practitioners are looking at how to redress data imbalances, where large companies in the Global North have much greater access to works, for historical, legal and economic reasons. The cases include: Each of our case studies is written up in the form of a report, combined with a video exploration of the case study in the words of its leading practitioners.

Blog

The AI Remuneration Debate: Three Perspectives

The rapid development of generative AI has sparked intense debate over how, or even if, creators should be compensated when their copyrighted works are used to train commercial AI systems. This issue pits the drive for technological innovation against the fundamental rights of authors to benefit from their creations, leading to diverse proposals for legal and economic frameworks that seek to strike a fair balance. The following three presentations from the Global Expert Network on Copyright User Rights Symposium in June 2025 explore this complex landscape from distinct legal, philosophical, and geopolitical perspectives. The Geneva Centre on Knowledge Governance and the Program on Information Justice and Intellectual Property bring you three contributions to the AI Remuneration Debate.

PART 1: Christophe Geiger approaches the problem from a human rights perspective, arguing for a balance between the right to develop AI for cultural and scientific progress and the author’s right to benefit from their work. He critiques current systems, noting the “all-or-nothing” nature of the US “fair use” doctrine and the EU’s “bizarre” opt-out rule for text and data mining, which he believes fails to secure fair compensation for authors due to unequal bargaining power with publishers and producers. His central proposal is to replace the EU’s opt-out system with a mandatory statutory remuneration scheme for the commercial use of works in AI training. Drawing on the success of similar “remunerated exceptions” in Europe, which generate significant revenue, Geiger proposes that income from this scheme be distributed directly to creators. Geiger contends this model would uphold authors’ human right to fair remuneration without stifling innovation.

PART 2: Zachary Cooper reframes the debate by arguing that traditional copyright concepts are becoming obsolete in an age of infinite digital remixing and AI-driven content creation. He contends that focusing on authorship thresholds is futile because the line between human and machine creation is hopelessly blurred and impossible to audit reliably. Methods like watermarking are technically weak and easily circumvented. For Cooper, the real issue is the massive scale of AI generation, which makes copyright enforcement impractical and weakens creators’ negotiating power. He describes copyright as “a dam in an infinite river,” an outdated barrier against a constant flow of transformation. Instead of rigid ownership rules, Cooper suggests the future lies in collective licensing models and a greater emphasis on attribution and visibility, which would allow creators to capture value as their work spreads across massive platforms.

PART 3: Vitor Ido situates the remuneration debate within the political and economic context of Brazil and Latin America, presenting it as a crucial tool for regulating corporate power and protecting national creative industries. He explains that for GRULAC (Group of Latin American and Caribbean Countries), the issue is not just about copyright but about challenging the dominance of large, foreign-based platforms that exploit local content with little to no payment to creators. The discussion also encompasses cultural sovereignty, such as protecting the dubbing industry from AI-generated voices, and safeguarding the traditional knowledge of Indigenous communities from misappropriation. Ido highlights Brazil’s draft AI Bill, which proposes an inverse of the EU’s system: a mandatory remuneration right that includes a reciprocity clause and ties the payment amount to the size of the AI company, directly targeting the market power of major corporations. This approach frames remuneration as a strategic element in a broader agenda of economic justice and cultural preservation in the Global South.

Blog, Centre News

Italy updates its copyright law to address AI

On September 18, 2025, the Italian Senate definitively approved the country’s first comprehensive framework law on artificial intelligence (AI). The new law also reflects Italy’s commitment to aligning its domestic legal system with the EU Artificial Intelligence Act (Regulation (EU) 2024/1689), ensuring coherence between national rules and the emerging European regulatory framework. Law no. 132 of September 23, 2025 (Provisions and delegations to the Government regarding artificial intelligence) was published in Official Gazette no. 223 of September 25, 2025, and will enter into force on October 10, 2025. It consists of 6 chapters and 28 articles, not only establishing ethical and regulatory frameworks for AI across various sectors but also bringing several changes to the field of copyright law. In particular, Chapter IV, titled “Provisions for the Protection of Users and Copyright,” modifies Article 1 of Law No. 633/1941 (Italy’s Copyright Act) and introduces a new Article 70-septies, adapting the legal framework to the evolving challenges posed by AI-generated content and data mining.

Emphasising human authorship

The first major change introduced by Article 25, letter a), of the new AI law is a revision to Article 1 of the Italian Copyright Act. The word “human” has been explicitly added, clarifying that only works of human creativity are eligible for protection under Italian copyright law. The amended text now reads:

This law protects works of human creativity in the fields of literature, music, figurative arts, architecture, theatre, and cinematography, whatever the mode or form of expression, even when created with the assistance of artificial intelligence tools, provided they are the result of the author’s intellectual effort.

This addition is not merely semantic. It codifies a crucial principle: while AI can be a tool in the creative process, copyright protection remains reserved for human-generated intellectual effort.
This positions Italian law in alignment with the broader international trend, seen in the EU, U.S., and UK, of rejecting full legal authorship rights for non-human agents such as AI systems. In practice, this means that works solely generated by AI without significant human input will likely fall outside the scope of copyright protection.

Regulating text and data mining for AI

The second key innovation is provided by Article 25, letter b), of the new AI law, which introduces Article 70-septies into the Italian Copyright Act, providing clarity on the legality of text and data mining (TDM) activities used in the training of AI models. The provision states:

1. Without prejudice to the provisions of the Berne Convention for the Protection of Literary and Artistic Works, reproductions and extractions from works or other materials available online or in databases to which one has lawful access, for the purposes of text and data mining by AI systems, including generative AI, are permitted in accordance with Articles 70-ter and 70-quater.

This provision essentially reaffirms that text and data mining is permitted under certain conditions, namely where access to the source materials is lawful and the activity complies with the existing TDM exceptions under EU copyright law, as already implemented in Articles 70-ter and 70-quater of the Italian Copyright Act. It mirrors the spirit of EU Directive 2019/790 on Copyright in the Digital Single Market, which created specific exceptions for TDM, notably distinguishing between scientific and general uses. By formally reiterating the TDM exceptions for the use of AI, Italy seeks to balance the promotion of AI development with the protection of content creators’ rights. However, challenges remain regarding the definition of ‘lawful access’ and the ability of rightsholders to effectively exercise their opt-out rights in relation to TDM activities.
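A recurring practical question behind those opt-out rights is how a rightsholder’s TDM reservation can be expressed in machine-readable form. One emerging convention is the TDM Reservation Protocol (TDMRep), developed in a W3C community group, which signals a reservation through a tdm-reservation value (1 = rights reserved, 0 = no reservation) that can be served, among other ways, as an HTTP response header, together with an optional tdm-policy URL. The sketch below illustrates only that header-based variant and is not a compliance tool; a real crawler would also need to check the protocol’s file-based (/.well-known/tdmrep.json) and HTML metadata variants.

```python
def tdm_reserved(headers):
    """Interpret TDM Reservation Protocol (TDMRep) HTTP headers.

    Returns a (reserved, policy_url) tuple. Absence of the header
    means no reservation has been expressed at this location.
    """
    # HTTP header names are case-insensitive; normalize the keys.
    h = {k.lower(): v for k, v in headers.items()}
    value = h.get("tdm-reservation")
    if value is None:
        return (False, None)           # no signal: no reservation expressed
    reserved = value.strip() == "1"    # "1" reserves TDM rights, "0" does not
    policy = h.get("tdm-policy") if reserved else None
    return (reserved, policy)
```

Note that under the semantics sketched here, silence is not a reservation, which is why a missing header maps to (False, None); whether silence should imply availability for mining is precisely the point of contention in the opt-out debates described above.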
Conclusion

The recent amendments to Italy’s Copyright Act mark an important step toward harmonising traditional legal frameworks with the realities of emerging technologies such as AI. By emphasising human authorship and providing clearer legal pathways for text and data mining, the new provisions aim to foster both innovation and respect for intellectual property. The law enters into force on the fifteenth day following its publication in the Official Gazette of the Italian Republic. This article was reposted from the original at https://communia-association.org/2025/10/01/italy-updates-its-copyright-law-to-address-ai/

Artificial Intelligence, Blog, Latin America / GRULAC

ARTIFICIAL INTELLIGENCE, COPYRIGHT AND THE FUTURE OF CREATIVITY: NOTES FROM THE PANAMA INTERNATIONAL BOOK FAIR

By Andrés Izquierdo

During the second week of August, I was invited to speak at the Panama International Book Fair, an event organized by the Panama Copyright Office, the Ministry of Culture, and the Panama Publishers Association, with the support of the World Intellectual Property Organization (WIPO). My presentation focused on the increasingly complex intersection between copyright law and artificial intelligence (AI), a topic now at the center of global legal, cultural, and economic debate. This post summarizes the main arguments of that presentation, drawing on recent litigation, academic research, and policy developments, including the U.S. Copyright Office’s May 2025 report on generative AI. How should copyright law respond to the widespread use of protected works in the training of generative AI systems? The analysis suggests that debates are emerging in several key areas: the limits of fair use and exceptions, the need for enforceable remuneration rights, and the role of licensing and regulatory oversight. The article proceeds in five parts: it begins with an overview of the legal and technological context surrounding AI training; it then reviews academic proposals for recalibrating copyright frameworks; it examines recent court decisions testing the boundaries of current doctrine; it summarizes the U.S. Copyright Office’s 2025 report as an institutional response; and it concludes with four policy considerations for future regulation.

A SHIFTING LEGAL AND TECHNOLOGICAL LANDSCAPE

The integration of generative AI into creative and informational ecosystems has exposed fundamental tensions in copyright law. Current systems routinely ingest large volumes of protected works, such as books, music, images, and journalism, to train AI models. This practice has given rise to unresolved legal questions: Can copyright law meaningfully regulate the use of training data? Do existing doctrines and legal provisions, such as fair use or exceptions and limitations, extend to these practices? What remedies, if any, are available to rightsholders whose works are used without consent? These questions remain open across jurisdictions. While some courts and regulatory agencies have begun to respond, a substantial part of the debate is now being shaped by legal scholarship and by litigation, each proposing frameworks to reconcile AI development with copyright’s normative commitments. The following sections examine this evolving landscape, beginning with recent academic proposals.

ACADEMIC PERSPECTIVES: TOWARDS A RENEWED BALANCE

In reviewing the academic literature, several clear themes have emerged. First, some authors agree that remuneration rights for authors must be strengthened. Geiger, Scalzini, and Bossi argue that, to truly guarantee fair compensation for creators in the digital age, especially in light of generative AI, European Union copyright law must move beyond weak contractual protections and instead implement robust, inalienable remuneration rights that guarantee direct and equitable revenues to authors and performers as a matter of fundamental rights. Second, several scholars stress that the technical opacity of generative AI demands new approaches to author remuneration. Cooper argues that, as AI systems evolve, it will become nearly impossible to determine whether a work was AI-generated or whether a specific protected work was used in training. He warns that this loss of traceability makes attribution-based compensation models unworkable. Instead, he advocates alternative frameworks to ensure that creators receive fair compensation in an era of algorithmic authorship. Third, scholars such as Pasquale and Sun argue that policymakers should adopt a dual system of consent and compensation: granting creators the right to opt out of AI training and establishing a levy on AI providers to secure fair payment to those whose works are used without a license. Gervais, for his part, argues that creators should receive a new, assignable remuneration right for the commercial use of generative AI systems trained on their copyrighted works; this right would complement, but not replace, existing rights related to reproduction and adaptation. There is also a growing consensus on the need to modernize limitations and exceptions, particularly for education and research. Flynn et al. show that a majority of the world’s countries lack exceptions that enable modern research and teaching, such as academic use of online teaching platforms. And in Science, several authors propose harmonizing international and domestic copyright exceptions to explicitly authorize text and data mining (TDM) for research, enabling lawful, cross-border access to protected materials without requiring prior licenses. At WIPO, the Standing Committee on Copyright and Related Rights (SCCR) has taken steps in this area by approving a work program on limitations and exceptions, currently under discussion for the upcoming SCCR 47. And in the Committee on Development and Intellectual Property (CDIP), a Pilot Project on TDM to Support Research and Innovation in Universities and Other Research-Oriented Institutions in Africa – Proposal by the African Group (CDIP/30/9 REV) has been approved. My own work, like that of Díaz & Martínez, has emphasized the urgency of updating Latin American educational exceptions to account for digital and cross-border uses. Eleonora Rosati argues that unlicensed AI training falls outside existing EU and UK copyright exceptions, including Article 3 (TDM for scientific research) of the DSM Directive, Article 4 (general TDM with opt-outs), and Article 5(3)(a) of the InfoSoc Directive (use for teaching or research)

Artificial Intelligence, Blog, Latin America / GRULAC

AI, Copyright, and the Future of Creativity: Notes from the Panama International Book Fair

During the second week of August, I was invited to speak at the Panama International Book Fair, an event hosted by the World Intellectual Property Organization (WIPO), the Panama Copyright Office, the Ministry of Culture, and the Panama Publishers Association. My presentation focused on the increasingly complex intersection between copyright law and artificial intelligence (AI)—a topic now at the center of global legal, cultural, and economic debate. This post summarizes the core arguments of that presentation, drawing on recent litigation, academic research, and policy developments, including the U.S. Copyright Office’s May 2025 report on generative AI. How should copyright law respond to the widespread use of protected works in the training of generative AI systems? The analysis suggests there are emerging discussions around several key areas: the limits of fair use and exceptions, the need for enforceable remuneration rights, and the role of licensing and regulatory oversight. The article proceeds in five parts: it begins with an overview of the legal and technological context surrounding AI training; it then reviews academic proposals for recalibrating copyright frameworks; it examines recent court decisions that test the boundaries of current doctrine; it summarizes the U.S. Copyright Office’s 2025 report as an institutional response; and it concludes by outlining four policy considerations for future regulation.

A Shifting Legal and Technological Landscape

The integration of generative AI into creative and informational ecosystems has exposed foundational tensions in copyright law. Current systems routinely ingest large volumes of copyrighted works—such as books, music, images, and journalism—to train AI models. This practice has given rise to unresolved legal questions: Can copyright law meaningfully regulate the use of training data?
Do existing doctrines and legal provisions—fair use, or exceptions and limitations—extend to these practices? What remedies, if any, are available to rightsholders whose works are used without consent? These questions remain open across jurisdictions. While some courts and regulatory agencies have begun to respond, a substantial part of the debate is now being shaped by legal scholarship and litigation, each proposing frameworks to reconcile AI development with copyright’s normative commitments. The following sections examine this evolving landscape, beginning with recent academic proposals.

Academic Perspectives: Towards a New Equilibrium

In reviewing the literature, several clear themes have emerged. First, some authors agree that remuneration rights for authors must be strengthened. Geiger, Scalzini, and Bossi argue that to truly ensure fair compensation for creators in the digital age, especially in light of generative AI, EU copyright law must move beyond weak contractual protections and instead implement strong, unwaivable remuneration rights that guarantee direct and equitable revenue flows to authors and performers as a matter of fundamental rights. Second, some scholars highlight that the technical opacity of generative AI demands new approaches to author remuneration. Cooper argues that as AI systems evolve, it will become nearly impossible to determine whether a work was AI-generated or whether a particular copyrighted work was used in training. He warns that this loss of traceability renders attribution-based compensation models unworkable. Instead, he calls for alternative frameworks to ensure creators are fairly compensated in an age of algorithmic authorship. Third, scholars like Pasquale and Sun argue that policymakers should adopt a dual system of consent and compensation—giving creators the right to opt out of AI training and establishing a levy on AI providers to ensure fair payment to those whose works are used without a license.
Gervais, meanwhile, argues that creators should be granted a new, assignable right of remuneration for the commercial use of generative AI systems trained on their copyrighted works—complementing, but not replacing, existing rights related to reproduction and adaptation. There is also a growing consensus on the need to modernize limitations and exceptions, particularly for education and research. Flynn et al. show that a majority of the countries in the world do not have exceptions that enable modern research and teaching, such as academic uses of online teaching platforms. And in Science, several authors propose harmonizing international and domestic copyright exceptions to explicitly authorize text and data mining (TDM) for research, enabling lawful, cross-border access to copyrighted materials without requiring prior licensing. At WIPO, the Standing Committee on Copyright and Related Rights (SCCR) has been taking steps in this area by approving a work program on limitations and exceptions, currently under discussion for the upcoming SCCR 47. And in the Committee on Development and Intellectual Property (CDIP), a Pilot Project on TDM to Support Research and Innovation in Universities and Other Research-Oriented Institutions in Africa – Proposal by the African Group (CDIP/30/9 REV) has been approved. My own work, as well as that of Díaz & Martínez, has emphasized the urgency of updating Latin American educational exceptions to account for digital and cross-border uses. Eleonora Rosati argues that unlicensed AI training falls outside existing EU and UK copyright exceptions, including Article 3 of the DSM Directive (TDM for scientific research), Article 4 (general TDM with opt-outs), and Article 5(3)(a) of the InfoSoc Directive (use for teaching or scientific research). She finds that exceptions for research, education, or fair use-style defenses do not apply to the full scope of AI training activities.
As a result, she concludes that a licensing framework is legally necessary and ultimately unavoidable, even when training is carried out for non-commercial or educational purposes. Finally, policy experts like James Love warn that “one-size-fits-all” regulation risks sidelining the medical and research breakthroughs promised by artificial intelligence. The danger lies in treating all training data as equivalent—conflating pop songs with protein sequences, or movie scripts with clinical trial data. Legislation that imposes blanket consent or licensing obligations, without distinguishing between commercial entertainment and publicly funded scientific knowledge, risks chilling socially valuable uses of AI. Intellectual property law for AI must be smartly differentiated, not simplistically uniform.

Litigation as a Site of Doctrinal Testing

U.S. courts have become a key venue for testing the boundaries of copyright in the age of artificial intelligence. In the past two years, a growing number of cases

Blog

Ethical Data Scraping for Research – Expert Workshop held in Amsterdam

A unique, expert-led workshop on ethical data scraping was organized by Professor Niva Elkin-Koren and Dr. Maayan Perel and hosted by the Shamgar Center of Digital Law and Innovation, Tel Aviv University. The workshop was made possible by the generous support of the Right to Research in International Copyright Law coalition at the American University, especially Professor Sean Flynn, the Director of the Program on Information Justice and Intellectual Property (PIJIP). An interdisciplinary group of information law experts gathered in Amsterdam’s beautiful Volkshotel on July 2, 2025, to discuss data scraping for research and innovation and its ethical boundaries. The event aligned with the agenda of the Standing Committee on Copyright and Related Rights (SCCR), which promotes public interest strategies, coordinated action, and research, and seeks to inform public policy on legal exceptions and limitations for researchers. Data scraping is an essential research tool for academics and scientists across a wide range of disciplines. It is also critical for training artificial intelligence (AI) models and developing innovative research methodologies. The legal boundaries of data scraping attract considerable attention, not only from academics but also from policymakers, governments, courts, technology companies, and data providers worldwide. The boundaries of ethical data scraping—often dependent on the type of data being scraped, the technologies being used, the purpose of scraping, and the applicable legal framework—remain unclear. Consequently, researchers are left to navigate potential legal risks and the changing technological barriers set by tech giants such as Cloudflare, which recently adopted a permission-based approach to data scraping. As a result, researchers may be deterred from engaging in lawful data scraping, at the cost of forgoing research that could serve the public interest. Moderated by Dr.
Maayan Perel and Professor Eldar Haber, the workshop aimed to bring greater clarity to what ethical data scraping is and should be. The workshop applied practical and technical insights from real-world data scraping, analyzed the legal implications of various transatlantic approaches, and proposed guidelines for promoting ethical data scraping for research and development. To obtain a better understanding of how data scraping models work in practice, participants explored a test case model from Bright Data, an international data scraping company, whose model was also discussed in recent litigation with X and Meta. In a stimulating presentation, Bright Data representatives described their publicly available data scraping technology, elaborated on their ethical policies, and presented their “data for good” initiative, which offers scraping opportunities for researchers as well as other stakeholders. To encourage a productive dialogue between academic and business participants, the discussion followed a “red teaming” approach. Red teaming, a concept we adapted from the cybersecurity realm, essentially aims to help organizations proactively identify weaknesses and strengthen their security posture before actual attacks occur. Applying red-teaming’s critical approach, the participants identified potential legal challenges in Bright Data’s data test case model from various perspectives, including intellectual property law, competition law, privacy law, and data protection law, while also identifying points of legal tension between the US and the EU frameworks. The issues highlighted included the legal application of copyright law to information copying and storage; questions of competition law arising from the dominant market actors’ ability to adjust behavior and match prices; and the scope of privacy protection in personal information that data providers voluntarily make publicly accessible.   
Next, insights from Bright Data’s test case were used to draw broader observations about what constitutes ethical data scraping in practice, especially for AI training. Key issues included: The workshop concluded with a broader discussion of potential legal, technical, and institutional strategies to promote ethical data scraping for academic research and technological development. Participants identified the need to distinguish between questions of access to data and questions of the use of the data, as each raises different legal issues. Key suggestions included: Participants: Tanya Aplin, Mor Avisar, Balazs Bodo, Sharon Bar Ziv, Sean Flynn, Eldar Haber, Uri Hacohen, Bernt Hugenholtz, Aline Iramina, Matthias Leistner, Dana Mazia, Maayan Perel, Mando Rachovista, Pamela Samuelson, Martin Senftleben, Ben Sobel, Streffan Verhultz, Amit Zac
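Several of the courtesies that recur in these discussions have simple technical counterparts. As a minimal, illustrative sketch only (using Python’s standard library, with the robots.txt rules inlined here rather than fetched from a real site), a research scraper can honour a site’s robots.txt exclusions and its declared crawl delay before making any request; robots.txt compliance is, of course, only one strand of the ethical and legal analysis discussed above.

```python
import urllib.robotparser

# Illustrative robots.txt content; a real scraper would fetch
# https://<site>/robots.txt instead of hard-coding rules.
ROBOTS_LINES = [
    "User-agent: *",
    "Disallow: /private/",
    "Crawl-delay: 2",
]

def build_policy(lines):
    """Parse robots.txt lines into a reusable policy object."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(lines)
    return rp

def may_fetch(policy, agent, url):
    """True if the given agent is allowed to fetch the URL."""
    return policy.can_fetch(agent, url)

def polite_delay(policy, agent, default=1.0):
    """Seconds to wait between requests; falls back to a default."""
    delay = policy.crawl_delay(agent)
    return float(delay) if delay is not None else default
```

A scraping loop would then call may_fetch before each request and sleep for polite_delay seconds between requests.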

Artificial Intelligence, Blog

A first look into the JURI draft report on copyright and AI

This post was originally published on COMMUNIA by Teresa Nobre and Leander Nielbock. Last week we saw the first draft of the long-anticipated own-initiative report on copyright and generative artificial intelligence authored by Axel Voss for the JURI Committee (download as a PDF file). The report, which marks the third entry in the Committee’s recent push on the topic, after a workshop and the release of a study in June, fits in with the ongoing discussions around copyright and AI at the EU level. In his draft, MEP Voss targets the legal uncertainty and perceived unfairness around the use of protected works and other subject matter for the training of generative AI systems, strongly encouraging the Commission to address the issue as soon as possible instead of waiting for the looming review of the Copyright Directive in 2026.

A good starting point for creators

The draft report starts by calling on the Commission to assess whether the existing EU copyright framework addresses the competitive effects associated with the use of protected works for AI training, particularly the effects of AI-generated outputs that mimic human creativity. The rapporteur recommends that such an assessment should consider fair remuneration mechanisms (paragraph 2) and that, in the meantime, the Commission should “immediately impose a remuneration obligation on providers of general-purpose AI models and systems in respect of the novel use of content protected by copyright” (paragraph 4). Such an obligation would be in effect “until the reforms envisaged in this report are enacted.” However, we fail to understand how such a transitory measure could be introduced without a reform of its own. Voss’s thoughts on fair remuneration also require further elaboration, but clearly the rapporteur is solely concerned with remunerating individual creators and other rightholders (paragraph 2).
Considering, however, the vast amounts of public resources being appropriated by AI companies for the development of AI systems, remuneration mechanisms need to channel value back to the entire information ecosystem. Expanding this recommendation beyond the narrow category of rightholders therefore seems crucial.

Paragraph 10 deals with the much-debated issue of transparency, calling for “full, actionable transparency and source documentation by providers and deployers of general-purpose AI models and systems”, while paragraph 11 asks for an “irrebuttable presumption of use” where the full transparency obligations have not been complied with. Recitals O to Q clarify that full transparency shall consist “in an itemised list identifying each copyright-protected content used for training”—an approach that does not seem proportionate, realistic or practical. At this stage, a more useful approach to copyright transparency would be to go beyond the disclosure of training data, which is already dealt with in the AI Act, and recommend the introduction of public disclosure commitments on opt-out compliance. A presumption of use—which is a reasonable demand—could still kick in based on a different set of indicators.

Another set of recommendations aimed at addressing the grievances of creators is found in paragraphs 6 and 9, including the standardization of opt-outs and the creation of a centralized register for opt-outs. These measures are very much in line with COMMUNIA’s efforts to uphold the current legal framework for AI training, which relies on creators being able to exercise and enforce their opt-out rights.
Two points of concern for users

At the same time that it tries to uphold the current legal framework, the draft report also calls either for the introduction of a new “dedicated exception to the exclusive rights to reproduction and extraction” or for expanding the scope of Article 4 of the DSM Directive “to explicitly encompass the training of GenAI” (paragraph 7). At first glance, this recommendation may appear innocuous—redundant, even, given that the AI Act already assumes that such legal provision extends to AI model providers. However, the draft report does not simply intend to clarify the current EU legal framework. On the contrary, the report claims that the training of generative AI systems is “currently not covered” by the existing TDM exceptions. This challenges the interpretation provided for in the AI Act and in multiple statements by the Commission, and opens the door to discussions around the legality of current training practices, with all the consequences this entails, including for scientific research.

The second point of concern for users is paragraph 13, which calls for measures to counter copyright infringement “through the production of GenAI outputs.” Throughout the stakeholder consultations on the EU AI Code of Practice, COMMUNIA was very vocal about the risks this category of measures could entail for private uses, protected speech and other fundamental freedoms. We strongly opposed the introduction of system-level measures to block output similarity, since those would effectively require the use of output filters without safeguarding users’ rights. We also highlighted that model-level measures targeting copyright-related overfitting could have the effect of preventing the lawful development of models supporting substantial legitimate uses of protected works.
As this report evolves, it is crucial to keep this in mind and to ensure that any copyright compliance measures targeting AI outputs are accompanied by relevant safeguards that protect the rights of users of AI systems.

A win for the Public Domain

One of the last recommendations in the draft report concerns the legal status of AI-generated outputs. Paragraph 12 suggests that “AI-generated content should remain ineligible for copyright protection, and that the public domain status of such works be clearly determined.” While some AI-assisted expressions can qualify as copyright-protected works under EU law—most importantly when there is sufficient human control over the output—many will not meet the standards for copyright protection. However, these outputs can still potentially be protected by related rights, since most related rights have no threshold for protection. This calls into question whether the related rights system is fit for purpose in the age of AI: protecting non-original AI outputs with exclusive rights regardless of any underlying creative activity, and in the absence of meaningful investment, is certainly inadequate. We therefore support the recommendation that their public domain status be asserted in those cases.

Next steps

Once the draft report is officially published and presented in JURI on

Artificial Intelligence, Blog

Danish Bill Proposes Using Copyright Law to Combat Deepfakes

Recently, a Danish Bill has been making headlines by addressing deepfakes through a rather uncommon approach: copyright. Speaking to The Guardian, the Danish Minister of Culture, Jakob Engel-Schmidt, explained that they “are sending an unequivocal message that everybody has the right to their own body, their own voice and their own facial features, which is apparently not how the current law is protecting people against generative AI.” According to CNN, the minister believes the “proposed law would help protect artists, public figures, and ordinary people from digital identity theft.”

Items 8, 10, and 19 of the proposal include some of the most substantive changes to the law. Among other measures, Item 8 proposes adding a new § 65(a), requiring the prior consent of performers and performing artists before digitally generated imitations of them are made and made available to the public, and establishing protection for a term of 50 years after their death. Item 10 introduces a new § 73(a), focusing on “realistic digitally generated imitations of a natural person’s personal, physical characteristics,” requiring prior consent from the person being imitated before such imitations can be made available to the public. This exclusive right would also last for 50 years after the death of the imitated person and would not apply to uses such as caricature, satire, parody, pastiche, criticism, or similar purposes.

It could be argued that this approach is uncommon because several countries, including those in the European Union, already have laws regulating personality rights and, more specifically, personal data. Copyright is known for regulating the use of creative expressions of the human mind, not the image, voice, or likeness of a person considered individually, i.e., outside the context of an artistic performance.
According to CNN, “Engel-Schmidt says he has secured cross-party support for the bill, and he believes it will be passed this fall.” A machine-translated version of the Proposal is below.

Notes:

Blog

Unfair Licensing Practices in the Library Sector

Teresa Nobre outlines a chilling range of practices by publishers to restrict the ability of researchers to conduct computational research, from ‘choice of law’ clauses that seek to circumvent EU law to increased liability and penalties on libraries that fail to police their users. Nobre suggests a series of urgent measures to tip the balance back in favour of libraries and their users, and ultimately in favour of the right to research. This presentation was delivered at the User Rights meeting in Geneva on 17 June 2025. The full text is available below.

The transition to licensing

We have transitioned from a sales-based model in printed publications to a licence-based model in digital publications. What happens is that even if you have a fit-for-purpose framework that allows libraries to make certain uses of copyrighted works, they still need to rely on licences to have first access to the material, and that gives publishers a lot of power in determining what libraries can and cannot do with the licensed materials, even if you have exceptions that allow them to make certain uses.

Communia’s research

We know that these licences tend to be subject to confidentiality agreements, which means that we don’t know what the terms of these licences are. Communia is a non-profit based in Brussels; we have been involved in copyright reform for many years, and we have been coming to the SCCR for many years. In February this year, we invited licensing managers, people from the public library and academic library sector in Europe, to come to Brussels, and we held a Chatham House rules meeting. We also invited the European Commission to attend and observe this meeting.
And this environment, where statements could not be attributed to individuals, was the right environment for licensing managers to come and talk about the issues they are facing with the licences: the unfair licensing practices, the unfair terms they are being subjected to. So I will be mentioning some of those practices, and I will start with a very hot topic right now, which is the topic of AI, but also text and data mining for scientific research. Maybe I should also tell you that in addition to inviting librarians to come and talk to us in private, in front of the Commission, we also invited them to share with us in confidence clauses that they considered unfair, clauses that are part of those licensing agreements or licensing offers.

Efforts to Circumvent the European TDM Directive

Maybe, for those who are not European, I should give you a bit of the legal context in Europe. Six years ago, we passed a new directive that guarantees that researchers in Europe can make text and data mining uses of copyrighted materials for scientific research. So we have a mandatory exception for these research uses, and this mandatory exception is protected against contractual overrides. What does that mean? It means that if a licence says that you cannot make those uses, you don’t need to follow the licence, because the European law protects you.

And what we realised, to our surprise, was that publishers were actually concerned with prohibiting these uses in Europe, where we have a law that allows these uses and prohibits contractual overrides. But that was indeed the case. So we noticed, and they told us, that since 2023, at the same time that generative AI was rising, suddenly all the contracts say that library users cannot conduct text and data mining on e-books and e-journals that are available in the libraries. They cannot conduct any related AI uses with those materials.
‘Choice of Law’ clauses

And surprisingly, what was interesting to see was that, although the law would not allow for those prohibitions, publishers could circumvent EU policy, EU law, and our prohibition of contractual overrides by selecting a law from outside of Europe. We know that ‘choice of law’ is typically a clause that the parties need to negotiate, and it takes time to negotiate; everyone wants to choose their own law. But in this case, by choosing a law that is not the national law where the library is located, meaning not the EU law which would protect these uses against contractual overrides, they are able to circumvent the EU law and the prohibition of contractual overrides. And that’s enough. So imagine: all of the work that we have done throughout the years to have exceptions in place, exceptions that are protected against contractual overrides, is simply circumvented by a choice of law clause. I’m going to give you an example of what the prohibition of AI uses in these licences means. There are different ones, and you can see some examples in our report.

Prohibition of AI-enabled browsers

But publishers go as far as prohibiting the use of browsers with connected AI functionality. Nowadays, there are hardly any browsers that do not use AI, and publishers are prohibiting library users from using browsers with AI functionality. This is how far it goes. We saw different variations of this. For instance, you see one that is very simple and straightforward: you cannot conduct text and data mining, which is exactly what the EU law allows you to do. And when it comes to the choice of law, I think typically what we are seeing is that they are choosing U.S. law, maybe because it is not very clear right now whether U.S. law allows these sorts of uses or not. If it’s a UK publisher, they will select UK law, which also doesn’t permit as many text and data mining uses as the EU law. So this is the first, let me say, the first category of obstacles and really
