Artificial Intelligence

Artificial Intelligence, Centre News

A Scale of Tools for Copyright and AI Training Data?

At the Centre on Knowledge Governance, we are working at the intersection of Copyright, the Right to Research and ‘Just AI’. We are looking for tools that can be used to ensure that researchers are able to use AI for public interest purposes, like health and education, whilst ensuring that the rights of creators, communities and custodians of traditional knowledge are respected. To that end we have developed a table which locates different policy approaches to IP and AI training data on a spectrum, starting with ‘full access’ for public interest research and ending with ‘full protection’ for systems that can replicate or mimic works of entertainment, such as music and movies. This table proposing a Scale of Tools was developed with input from a group of African experts and policy makers at our Retreat on Copyright and Just AI at the Cradle of Humankind in South Africa in February 2026. The approach will be discussed further at our upcoming Meeting on Creating and Researching with AI in Rio de Janeiro in June 2026 and at our User Rights meeting on Copyright, the Right to Research and Just AI from 1-2 October 2026. It will also be used as part of the curriculum during our upcoming Upskill Executive Education Course on IP and AI to be held in Geneva from 29-30 September 2026. For more on ‘Just AI’, visit our Focus Area page on Copyright, the Right to Research and Just AI. You are free to use the graphic on your own site, as long as you attribute us as the source and link back to our articles. Do you have ideas or suggestions about our approach? Please reach out to us using our contact form.

Africa: Copyright & Public Interest, Artificial Intelligence, Technical Assistance, Traditional Knowledge

Knowledge Governance and Just AI – Cradle Report

This document was prepared by the Centre on Knowledge Governance (CKG) to summarise discussions on principles of Just AI and Knowledge Governance held in the Cradle of Humankind, South Africa, in February 2026. The Cradle discussions included representatives of African governments, academics, Artificial Intelligence (AI) model developers, scholars and negotiators of international agreements on traditional knowledge, and creators and performers of copyrighted works and data used in AI training, listed at the bottom of the document. The objective of the discussions was to articulate an African voice and focus on the topic while reflecting principles with potentially global application. This document records the substance of the discussions without attempting to resolve potential conflicts in views or endorsing any particular norms as reflecting the positions of all or any of the participants. When we use the word “should” we mean that at least some participants proposed that the action be taken, not necessarily that everyone in the room consented to the precise framing used here. It is not a proposed legal text. It is the starting point, not the end, of a longer and broader discussion that CKG plans to host, including in consultation with other stakeholders, academics and policy makers.
Table of contents

- Just AI
  - Essence and purpose
  - Principles
- Needs of African AI Developers
  - From Knowledge Governance Systems
  - From Other Resources
  - From Exercise of Agency (code of conduct)
- Needs of Creators of Works and Content Used in Training
  - From Knowledge Governance Systems
  - From Other Resources
  - From Exercise of Agency
- Objectives of Copyright and Knowledge Governance
  - Objectives Table: Scale of Tools
  - Objectives for Protecting Traditional Knowledge
- What Governments should do
- Workshop Participants

Just AI

Essence and purpose

The goal of AI models and applications should be to serve as a tool to help humans, not to achieve an intelligence independent of humanity. AI should support human life and the planet. Models and tools should be decentralised and context specific wherever possible; be as small as possible (e.g. use as little data and compute power as possible); be adaptable and malleable; be publicly accountable and transparent (whilst respecting privacy); reflect the lived experiences and cultures of their creators, developers and users; and be accessible to all people wherever possible.

Definitions of Public AI and Just AI

Public AI models and tools are subject to public policies to ensure that they promote human rights and justice. Public AI policies aim to ensure that AI development is in the public interest, subject to transparent governance and democratic public accountability. Just AI builds on Public AI, aiming to protect the moral and material interests of creators and the stewardship of traditional knowledge, cultural expressions, and genetic resources by communities, and to promote the developmental priorities of the Global South.

Principles

Just AI systems should not exploit labour, use unsustainable environmental practices, or be concentrated in and controlled by large for-profit corporations in one or two countries.
Just AI systems require recognition of the data providers and data communities who are impacted, must have guardrails to prevent abuse (such as deepfakes), and should prevent the misappropriation of traditional knowledge and traditional cultural expressions. African governments should explore strategies to support Public AI, where compute, data, models, and expertise are made available to innovators and stakeholders on the continent.

Needs of African AI Developers

From Knowledge Governance Systems

African AI developers need knowledge governance systems, including balanced copyright and other intellectual property provisions, that promote the ability to fairly access and use data for the development of models and tools. For example, fair use of the JW300 dataset of language translations was required for the Masakhane Natural Language Processing (NLP) project to create translation tools for African languages. NLP work also needs access to other sources of local language translations, including content from broadcasters (especially public broadcasters). The content and data needed are often held or created by governments, but not always released through open access policies. There is also a need for access to African data and information held abroad, such as in foreign libraries, museums, archives, and media collections. African AI developers need the ability to use reverse engineering and knowledge distillation to learn from foundational large language models in order to create smaller, more specific applications for the African context. Reverse engineering rights are recognised in many systems of the Global North, but often lack expression in the copyright and other laws of the Global South. African AI developers often wish to ensure protection of their own curated datasets against competing global for-profit institutions.
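The knowledge distillation mentioned above can be sketched in a few lines: a small "student" model is trained to match the softened output distribution of a larger "teacher" model. Everything below (the toy linear models, the random data, the temperature value) is an illustrative placeholder, not the code of any project named in this report.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    # Temperature T > 1 softens the distribution, exposing more of the
    # teacher's "dark knowledge" about relative class similarities.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy stand-ins: the "teacher" plays the role of a large foundation model,
# the "student" a smaller, context-specific one.
W_teacher = rng.normal(size=(16, 8))
W_student = np.zeros((16, 8))

X = rng.normal(size=(32, 16))        # unlabelled in-domain examples
T = 2.0                              # distillation temperature
targets = softmax(X @ W_teacher, T)  # teacher's soft labels

def mean_kl(p, q):
    # Average KL divergence between teacher (p) and student (q) outputs.
    return float((p * (np.log(p) - np.log(q))).sum(axis=1).mean())

kl_before = mean_kl(targets, softmax(X @ W_student, T))

lr = 0.5
for _ in range(300):
    probs = softmax(X @ W_student, T)
    # Gradient of the KL loss w.r.t. the student's logits is (probs - targets) / T.
    W_student -= lr * (X.T @ ((probs - targets) / T)) / len(X)

kl_after = mean_kl(targets, softmax(X @ W_student, T))
# kl_after should now be far below kl_before: the student mimics the teacher.
```

In practice the student is trained on the teacher's soft outputs over in-domain text rather than toy vectors, which is why access to (and the legal right to query and learn from) foundation models matters so much to the developers described above.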
A particular concern was expressed about the rapacious appetite for data of the largest model builders, who rarely share back with African producers of data or African model builders. Positive approaches include the Nwulite Obodo Open Data License (NOODL), which includes customised licence terms and tiered pricing schemes for the distribution of African language datasets.

From Other Resources

African AI developers need access to a variety of resources to do their work. They need affordable access to a stable Internet and to computing infrastructure. Computing power can be accessed through a hybrid of public and private infrastructure; some examples include the African GPU Hub, Sannah AI, and the MIND institute. There is a need for access to affordable legal advice and support, for example from legal networks that understand both the law and the needs of AI developers. Developers need access to various kinds of information and training, including access to performance metrics of public AI, information on data provenance, information on IP and data protection laws, and participation in the development of public policies and strategies.

From Exercise of Agency (code of conduct)

Developers need to establish norms for their own conduct, similar to the Malabo Convention and the South African NIMA. Organisation should be furthered through support for structures that allow developers to engage each other and have a strong voice with policy makers (such as through the Deep Learning Indaba). In their work, developers need to prioritise the African context and cultural sensitivity, and prioritise local over global

Artificial Intelligence, Latin America / GRULAC

200 Bills and Counting: AI Legislation in the Brazilian Congress

Artificial Intelligence (AI), and Generative AI in particular, is transforming the way we create and challenging fundamental concepts of copyright law — including authorship, originality, and the very notion of a “protected work.” As AI tools become increasingly embedded in creative processes, they also raise concerns among creators and intellectual workers about potential job displacement and the risk that AI-generated outputs may undermine creative markets. These outputs are only possible because Generative AI systems are typically trained on human-authored works — a practice that has already prompted lawsuits in several parts of the world. As part of research activities carried out by the Global Expert Network on Copyright User Rights, researchers from the Centre on Knowledge Governance and the Brazilian Copyright Institute (IBDAutoral) mapped all legislative proposals currently under discussion in the Brazilian National Congress that address AI, including those at the intersection of AI and copyright.

Methodology

We mapped the databases of legislative proposals from the Federal Senate and the Chamber of Deputies in search of bills addressing issues related to Artificial Intelligence. The searches were conducted between January and February 2025 on the websites of the Chamber of Deputies and the Federal Senate. On the Chamber of Deputies website, a subject-based search was performed using the platform’s built-in tool. The first query used the term “artificial intelligence,” with “bill” selected as the proposal type and no restrictions applied to the status field — meaning all propositions were included. This returned 173 bills, some of which were incorrectly classified as AI-related, as discussed below. A second search, using the same parameters but adding the term “copyright,” returned 13 results, also with some misclassifications. After removing duplicates, the total number of bills mapped in the Chamber of Deputies came to 175.
On the Federal Senate website, the search was conducted under the “Search – Senate Portal” tab using the free-text term “artificial intelligence,” with the filters “Bills and Subject Matters – Propositions” and “PL – Bills” applied. This returned 25 records, some of which overlapped with bills already identified in the Chamber of Deputies search. No time restrictions were applied at any stage, in order to obtain the broadest possible view of AI-related legislation currently under consideration in the National Congress. Once the bills were identified, we collected and categorized key information about each one — including bill number, date of introduction, authorship, party affiliation, affected legislation, rapporteur, and current status. In addition, a short description of each bill was prepared based on its summary and full text.

Preliminary Findings

The initial mapping, after removing duplicates, identified 200 bills related to artificial intelligence. Of these, 10 were found to be incorrectly classified as AI-related or did not feature AI as a meaningful element. In terms of when bills were introduced, a modest increase was observed in 2019, with 10 proposals filed that year. The real surge, however, came between 2022 and 2023, when the number of bills rose from 15 to 53, and again in 2024, with 82 bills introduced. This acceleration is understood to be tied to the widespread diffusion of generative AI systems — such as ChatGPT — beginning in the second half of 2022. Regarding subject matter, the most prevalent theme across the mapped bills is criminal law (33 bills), followed by labor law (17) and consumer protection (17). Next come bills of a more general or principles-based nature addressing the development and use of AI (16), and then those specifically dealing with copyright (14). It is worth noting that a single bill may be classified under more than one category.
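The dedup-and-tally step described above can be sketched in a few lines of Python. The bill numbers and theme labels below are illustrative placeholders, not the actual dataset (only PL 2338 appears in the text itself); the point is simply that duplicates across the two search queries are removed by bill number, and that a single bill can count toward several themes.

```python
from collections import Counter

# Illustrative records of the kind collected for each bill.
bills = [
    {"number": "PL 2338/2023", "themes": ["general principles", "copyright"]},
    {"number": "PL 2338/2023", "themes": ["general principles", "copyright"]},  # duplicate hit from the second query
    {"number": "PL 0001/2024", "themes": ["criminal law"]},
    {"number": "PL 0002/2024", "themes": ["copyright", "criminal law"]},
]

# Remove duplicates returned by overlapping searches, keyed on bill number.
unique_bills = list({b["number"]: b for b in bills}.values())

# A single bill may be classified under more than one category, so the
# theme counts can sum to more than the number of bills.
theme_counts = Counter(theme for b in unique_bills for theme in b["themes"])

print(len(unique_bills))             # 3
print(theme_counts["criminal law"])  # 2
```

This is why, in the findings above, the per-theme figures (33 + 17 + 17 + …) exceed the 200 mapped bills.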
Main Themes Across AI Legislative Proposals

A preliminary analysis of the 14 bills addressing the intersection of generative AI and copyright reveals a strong focus on two issues: recognizing the use of protected works for AI training as an act subject to exclusive rights — or even as grounds for creating a new exclusive right (6 bills) — and establishing remuneration obligations when protected works are used for training purposes (6 bills). Other frequently addressed topics include civil and criminal sanctions for copyright infringement (5 bills) and transparency obligations imposed on AI developers and operators (5 bills). By contrast, the equally important debate around limitations and exceptions — particularly regarding the use of works for AI training in research and educational contexts, or for certain text and data mining purposes — has received considerably less legislative attention, appearing in only 1 bill. That bill is No. 2338/23, which has already been approved by the Federal Senate and is currently under review in the Chamber of Deputies.

Most Frequent Topics in AI and Copyright Legislative Proposals

The full mapping of bills under consideration in the National Congress will be made publicly available on the website of the Observatório Nacional de Direitos Autorais, an initiative created by the Brazilian Copyright Institute in 2022. The Observatory’s main objective is to provide open access to a wide range of materials — including judicial decisions, theses and dissertations, draft laws, legislation, and international treaties — on copyright law, serving as a reference resource for researchers, legal practitioners, and anyone seeking to understand and deepen their knowledge of the subject.

Artificial Intelligence

Centre announces a policy agenda on ‘Just AI’

In today’s world, research in fields ranging from health, education and agriculture to economics, social sciences and humanities relies on computational methods, and in particular artificial intelligence tools. Policy makers and public interest advocates around the world are beginning to formulate a policy agenda for the promotion of Just AI. The concept of Just AI combines the public accountability and accessibility in AI infrastructure promoted by “Public AI” advocates with additional human rights concerns, including the moral and material interests of creators, the stewardship of traditional knowledge, cultural expressions, and genetic resources by communities, and the developmental priorities of the Global South. Many of the core elements of a Just AI vision require the implementation or alteration of copyright and related knowledge governance policies (including, e.g., privacy law, data governance, and competition law). These areas of law are often shaped and informed by international treaties and policies being implemented and reformed in International Geneva. At the Centre on Knowledge Governance we are working with a network of 100 scholars in 30 countries (through the User Rights Network) and with representatives of governments in multilateral organisations in Geneva to help define a policy agenda on Copyright, the Right to Research and Just AI. To read more about our vision for Just AI, see our full concept note below. For case studies on Just AI, visit our focus area page on Just AI.

Artificial Intelligence, WIPO

WIPO Launches Artificial Intelligence Infrastructure Interchange

WIPO launched its Artificial Intelligence Infrastructure Interchange (AIII) on March 17, with the stated goal of supporting the development of AI technology that supports the livelihoods of creators and innovators. The goal has two aspects – making AI tools available to creators to help their work, while at the same time assuring that the works used to create such tools support the moral and material rights of authors. The key focus is on “infrastructure” that can technically identify AI creations and promote models for creators to use AI as a tool. Assistant Director General Ken Natsume explained that “the answer lies in various tools: Watermarks, metadata, digital ID, authentication tools, digital distribution frameworks.” The AIII’s launch page similarly defines the “IP infrastructure” of its focus as composed of “watermarks, authentication tools, standards, metadata, digital identifiers, rights management and content recognition systems, and digital distribution frameworks … developed by rightsholders and creators to build new business models that safeguard their rights.” This definition of AI infrastructure is quite different from the broader sense embraced by Public AI advocates. That approach proposes “treating AI as public infrastructure, emphasising democratic governance, broad accessibility, and accountability to the communities that AI systems serve.” The concept of “Just AI” used by the Centre on Knowledge Governance and others is largely congruent with the goals of Public AI, but also raises additional human rights concerns, including the moral and material interests of creators. In this sense, the WIPO AIII focus on tools to enable remuneration and creator opt-outs in AI tools can be seen as promoting some but not all aspects of a Just AI vision.
At the launch event, participants described the goal of AIII as providing a neutral forum for creators, rights holders, developers, and experts to share information on the development and use of such tools, including tools that can be used in the creation process. Music and voice or actor simulation models are a core focus of the project. These are areas where AI tools have the potential to create content that competes with the works used to train them. In such areas, the justification for using fully licensed tools, and for giving creators maximum ability to opt their content out of training, is at its apex. The WIPO project has created a “Technical exchange network (TEN)” where technical experts from the private sector, including academics and civil society, will share information on the development and use of content identification tools. There will also be an annual public meeting of the project and a government expert group that will share information with policy makers about such infrastructure and exchange views on national developments.

Artificial Intelligence, Blog

The Moratorium the AI Industry Cannot Afford to Lose

The WTO’s 14th Ministerial Conference (MC14) starts in Yaoundé, Cameroon, next week with a packed agenda and real stakes. Buried in the long list of negotiations is a decision that will have a significant impact beyond trade: whether to renew the moratorium on non-violation complaints under the TRIPS Agreement. The outcome will help determine whether the TRIPS flexibilities and exceptions, particularly copyright exceptions, which have recently become the backbone of the AI economy, can be challenged at the WTO.

Two Moratoriums, One Bargain

Since 1998, WTO members have supported a temporary moratorium on customs duties on electronic transmissions, including software downloads, streamed content, and digital services. That moratorium has been extended at every Ministerial Conference since. It is up for renewal again at MC14, where the United States (US) is pushing to make it permanent. The moratorium originated at the 1998 WTO Ministerial in Geneva, where members adopted a Work Program on E-commerce and committed to “continue their current practice of not imposing customs duties on electronic transmissions” (WTO 1998). Critically, the term “electronic transmissions” was never defined. That ambiguity allowed the scope of the moratorium to expand alongside the digital economy, covering an ever-wider range of digital content and services without any fresh multilateral agreement. Since then, the US has been embedding the moratorium in its bilateral free trade agreements. The US-Jordan FTA in 2000 was the first agreement to include a binding commitment not to impose customs duties on electronic transmissions. Recent agreements on reciprocal trade (ARTs) go further and require countries to support multilateral adoption of a permanent moratorium on customs duties on electronic transmissions at the WTO. All these efforts build a web of bilateral obligations that formalize the current push for a permanent multilateral moratorium at MC14.
Less discussed but just as consequential is a second moratorium: the freeze on non-violation and situation complaints under the TRIPS Agreement. The moratorium on the TRIPS non-violation and situation complaints (NVC) has also been extended at each Ministerial Conference since 1995.  Under TRIPS Article 64, a WTO member can file a non-violation complaint even when no TRIPS rule has been broken, claiming only that expected benefits have been “nullified or impaired” by another member’s measures. Non-violation claims create a significant IP weapon: they mean that a country’s copyright exceptions, fair use, limitations for research and education, patentability requirements, and compulsory licenses could, in principle, be challenged at the WTO not for violating TRIPS but for frustrating the commercial expectations of foreign rightsholders.  Any TRIPS measure that allegedly nullifies or impairs benefits under TRIPS may, under certain conditions, be challenged through a non-violation complaint (e.g., on the theory that it frustrates a member’s legitimate expectations). In principle, this creates a pathway to challenge a wide range of legitimate public-interest policies that affect rightsholders. Such policies could include, among others, rules on patentability, compulsory licensing, and copyright limitations and exceptions, including the US fair use doctrine. US copyright law includes a variety of specific exceptions, but fair use is the oldest and the most broadly applicable of all US exceptions to copyright infringement. As IP scholar Frederick Abbott warned as early as 2003, “non-violation causes of action could be used to threaten developing Members’ use of flexibilities inherent in the TRIPS Agreement and intellectual property law more generally. 
Thus, for example, Members that adopt relatively generous fair use rules in the fields of copyright or trademark might find that they are claimed against for depriving another.” The two moratoriums have been traded as a package. Developing countries seeking the TRIPS NVC moratorium, which protects domestic policy space in health, access to knowledge, education, and technology transfer, have had to support the e-commerce moratorium, which benefits US digital platforms. Each Ministerial Conference is, in effect, another round of that exchange. If the e-commerce moratorium becomes permanent at MC14, as the US proposes, the key question is what developing countries receive in return, particularly on the TRIPS NVC side.

Significance of Copyright Exceptions

Many key internet functions rely on copyright limitations and exceptions. Search engines cache and index content without negotiating individual licensing agreements; search previews display short snippets; CDNs buffer and transmit protected works; cloud services store user-uploaded copyrighted files. According to the CCIA’s 2025 report, fair use industries accounted for 18 percent of US GDP, $4.9 trillion in value added, and $10.2 trillion in revenues in 2023, employing one in seven American workers. Within that broader figure, AI-related fair use industries alone generated $1.7 trillion in revenues in 2023, up 78 percent since 2017. The AI industry has added a new dimension. Training large language models requires access to vast quantities of text: books, articles, web pages, and code repositories. Much of that access has been justified under fair use, on the theory that training is transformative and serves a new purpose. In that sense, AI companies and the broader data economy are the newest dependents on copyright exceptions.
If those limitations and exceptions can be challenged through non-violation complaints at the WTO, bypassing the question of whether they infringe TRIPS, the legal foundation for AI training could become globally contestable.

The Buenos Aires Lesson

At the Buenos Aires Ministerial Conference in December 2017, during Donald Trump’s first term, the renewal of both the e-commerce and TRIPS NVC moratoria was uncertain. Both were eventually extended. The Buenos Aires episode revealed, or at least made visible, that the fair use and safe harbor exceptions underpinning internet commerce were potentially vulnerable to non-violation challenges. There was growing awareness among US tech industry stakeholders of how much the TRIPS NVC moratorium mattered to their legal operating environment. The two moratoriums were treated as a package. That understanding should be stronger today. AI companies are actively navigating copyright litigation in domestic courts, the outcomes of which remain unresolved. Exposure via non-violation complaints at the WTO would add a second front. What was at stake in 2017 is now more visible and more significant.

What’s Next

The argument is pretty straightforward. If the US

Artificial Intelligence

Public AI Launch, and Some Thoughts on Copyright

I attended the exciting launch of a series of papers and reflections on “Public AI” at the EU Parliament this week. The core of the idea is that the non-US/China world needs more publicly directed and open source AI resources — from computational capacity to open data sets (like the EU’s “data spaces”) — to build both commercial and non-commercial AI tools delinked from big tech. There is an important copyright issue at its core. To build AI infrastructure, including to support the development of frontier and foundation models that may themselves be non-profit but can serve as the base for other (including commercial) developers, Public AI model builders need legal certainty as to what material they can use for training. If they don’t have the same rights as Chinese and US developers, they won’t be able to succeed. Some developers are working with only openly licensed and public domain sources, but their models are then trained on much smaller data sets. Cultural heritage organizations want to help, but they also need certainty as to whether they can curate and share data with model builders. Article 3 of the EU CDSM (2019) provides some cover, but publishers are claiming it covers only traditional academic pursuits, not AI training. Most developing countries lack even an Art. 3 type leg to stand on. In this context, the future of Public AI appears to depend a lot on the definition of the right to research within modern copyright laws. Proposals to apply remuneration requirements, if any, only after a specific application (“output”) of a foundation model proves to have copyright-relevant effects (e.g. commercial substitution) may be one path forward. See Senftleben, Martin, Generative AI and Author Remuneration (June 14, 2023), International Review of Intellectual Property and Competition Law 54 (2023), pp. 1535-1560.

Artificial Intelligence, Blog, Centre News

Centre Announces Short Course on Intellectual Property and Artificial Intelligence

The Centre on Knowledge Governance is pleased to announce a new short course on AI and IP to take place in Geneva from September 29-30, 2026.

COURSE DESCRIPTION

This intensive two-day course provides a comprehensive, comparative analysis of the evolving legal and policy landscape at the intersection of Intellectual Property (IP) and Artificial Intelligence (AI). Participants will explore pressing legal challenges, including copyright protection for AI training data, the patentability and copyright of AI-generated outputs, and the balance between proprietary interests and the public interest in research (Text and Data Mining and computational research) and the development of “Public AI.” The course will feature in-depth comparative analysis of legal frameworks and policy proposals across the European Union (EU), United States (USA), India, Brazil, Singapore, Japan, and international forums such as the World Intellectual Property Organization, the World Trade Organization and other agencies. The learning experience will culminate in a practical role-play exercise in which students will draft a model international legal instrument aimed at ensuring fair remuneration for creators while safeguarding the rights of researchers and public interest organizations developing AI infrastructure. This legal instrument will focus on a range of factors to be used in distinguishing research and public interest uses of AI from commercial competitive uses.

LEARNING OBJECTIVES

Upon completion of this course, participants will be able to:

WHO IS THIS PROGRAMME FOR?

This programme is particularly relevant for mid- to senior level practitioners from various organisations working at the intersection of intellectual property and AI policy or scholarship, such as:

LECTURERS

The Course will be directed by Sean Flynn and Ben Cashdan of the Centre on Knowledge Governance, Geneva Graduate Institute.
Guest lecturers will participate in person or online to bring comparative expertise from jurisdictions such as India, Brazil, China and the African continent, in addition to the US and EU.

SCHOLARSHIPS

10 scholarships will be available for highly motivated government delegates from developing countries or representatives of public interest organizations who participate in multilateral policy processes on copyright, AI and the rights of researchers. You can apply below:

APPLICATION FOR COURSE

To enroll for the course itself, please use the online form on this page. If you have also applied for a scholarship, please note this when you enroll. Thanks.

Africa: Copyright & Public Interest, Artificial Intelligence, TDM Cases

Case Studies of AI for Good and AI for Development

Today the Geneva Centre on Knowledge Governance presents a series of Case Studies on AI for Good in Africa and the Global South. These grew out of our work on Text and Data Mining and our policy work in support of the Right to Research. Researchers in the Global South are responding to local and global challenges, from health and education to language preservation and mitigation of climate change. In all these cases, computational methods and Artificial Intelligence (AI) play a leading role in finding and implementing solutions. A common thread that runs through all the cases is how intellectual property laws can support innovation and problem solving in the public interest, whilst protecting the interests of creators, communities and custodians of traditional knowledge. In addition, several practitioners are looking at how to redress data imbalances, where large companies in the Global North have much greater access to works, for historical, legal and economic reasons. The cases include: Each of our case studies is written up in the form of a report, combined with a video exploration of the case study in the words of its leading practitioners.

Artificial Intelligence, Blog, Latin America / GRULAC

ARTIFICIAL INTELLIGENCE, COPYRIGHT AND THE FUTURE OF CREATIVITY: NOTES FROM THE PANAMA INTERNATIONAL BOOK FAIR

By Andrés Izquierdo

During the second week of August, I was invited to speak at the Panama International Book Fair (Feria Internacional del Libro de Panamá), an event organised by the Panama Copyright Office, the Ministry of Culture and the Panamanian Publishers Association, with the support of the World Intellectual Property Organization (WIPO). My presentation focused on the increasingly complex intersection between copyright law and artificial intelligence (AI), a topic now at the centre of legal, cultural and economic debate worldwide. This post summarises the main arguments of that presentation, drawing on recent litigation, academic research and policy developments, including the US Copyright Office’s May 2025 report on generative AI. How should copyright law respond to the widespread use of protected works in the training of generative AI systems? The analysis suggests that debates are emerging in several key areas: the limits of fair use and exceptions, the need for enforceable remuneration rights, and the role of licensing and regulatory oversight. The article unfolds in five parts: it begins with an overview of the legal and technological context surrounding AI training; then reviews academic proposals for recalibrating copyright frameworks; examines recent judicial decisions testing the limits of current doctrine; summarises the US Copyright Office’s 2025 report as an institutional response; and concludes with four policy considerations for future regulation.

A LEGAL AND TECHNOLOGICAL LANDSCAPE IN TRANSFORMATION

The integration of generative AI into creative and informational ecosystems has exposed fundamental tensions in copyright law.
Today's systems routinely ingest large volumes of protected works, such as books, music, images, and journalism, to train AI models. This practice has given rise to unresolved legal questions: Can copyright law meaningfully regulate the use of training data? Do existing doctrines and statutory provisions, such as fair use or exceptions and limitations, extend to these practices? What remedies, if any, are available to rights holders whose works are used without consent? These questions remain open across jurisdictions. While some courts and regulatory agencies have begun to respond, a substantial part of the debate is now being shaped by academic legal scholarship and by litigation, each proposing frameworks to reconcile AI development with copyright's normative commitments. The following sections examine this evolving landscape, beginning with recent academic proposals.

ACADEMIC PERSPECTIVES: TOWARD A RENEWED BALANCE

A review of the academic literature reveals several clear themes. First, some authors agree that remuneration rights for authors should be strengthened. Geiger, Scalzini, and Bossi argue that, to truly guarantee fair compensation for creators in the digital age, especially in light of generative AI, European Union copyright law must move beyond weak contractual protections and instead implement robust, inalienable remuneration rights that guarantee direct and equitable income to authors and performers as a matter of fundamental rights. Second, several scholars emphasize that the technical opacity of generative AI demands new approaches to author remuneration.
Cooper argues that, as AI systems evolve, it will become nearly impossible to determine whether a work was generated by AI or whether a specific protected work was used in training. He warns that this loss of traceability makes attribution-based compensation models unworkable, and he advocates instead for alternative frameworks to ensure that creators receive fair compensation in an era of algorithmic authorship. Third, scholars such as Pasquale and Sun argue that policymakers should adopt a dual system of consent and compensation: granting creators the right to opt out of AI training, and establishing a levy on AI providers to ensure fair payment to those whose works are used without a license. Gervais, for his part, contends that creators should receive a new, assignable remuneration right for the commercial use of generative AI systems trained on their copyrighted works; this right would complement, but not replace, existing rights of reproduction and adaptation. There is also growing consensus on the need to modernize limitations and exceptions, particularly for education and research. Flynn et al. show that a majority of the world's countries lack exceptions that permit modern research and teaching, such as the academic use of online teaching platforms. And in Science, several authors propose harmonizing international and domestic copyright exceptions to explicitly authorize text and data mining (TDM) for research, allowing lawful, cross-border access to protected materials without requiring prior licenses.
At WIPO, the Standing Committee on Copyright and Related Rights (SCCR) has taken steps in this area by approving a work program on limitations and exceptions, currently under discussion ahead of SCCR 47. And in the Committee on Development and Intellectual Property (CDIP), a Pilot Project on TDM to Support Research and Innovation in Universities and Other Research-Oriented Institutions in Africa, a proposal of the African Group (CDIP/30/9 REV), has been approved. My own work, like that of Díaz & Martínez, has emphasized the urgency of updating Latin American educational exceptions to account for digital and cross-border uses. Eleonora Rosati argues that unlicensed AI training falls outside the existing copyright exceptions in the EU and the UK, including Article 3 (TDM for scientific research) of the DSM Directive, Article 4 (general TDM subject to opt-out), and Article 5(3)(a) of the InfoSoc Directive (use for teaching or research).
