July 15, 2025

Blog

Torres Strait Islander climate change decision by Fed Court at odds with UN

16 July 2025: QUT legal expert Professor Matthew Rimmer is available to speak on yesterday's decision:

· Judge doubted negligence law was the appropriate vehicle to deal with climate change matters
· Judge said action on climate change was a political matter for the Federal Government
· Judge maintained Torres Strait Islanders' only recourse was via 'the ballot box'
· An appeal court could further explore comparative international developments in climate litigation

In the landmark case of Pabai Pabai v Commonwealth, Justice Michael Wigney of the Federal Court of Australia doubted whether the law of negligence was the appropriate vehicle to deal with matters of climate change. QUT legal expert Professor of Intellectual Property and Innovation Matthew Rimmer said the judge held that the Commonwealth did not owe a duty of care to Torres Strait Islanders to protect them from climate change.

"Justice Wigney acknowledged that the Commonwealth's response to the threat of climate change to the Torres Strait Islands and their traditional inhabitants has been wanting," Professor Rimmer said. "However, Justice Wigney stressed that the question of action on the reduction of greenhouse gases was ultimately a political matter for the Federal Government.

"The judge warned there could be little, if any, doubt that the Torres Strait Islands face a bleak future if urgent action is not taken. The judge maintained that the only recourse that Torres Strait Islanders have is via 'the ballot box'."

Professor Rimmer said that the case recognised that the Torres Strait Islands had been ravaged by the impacts of climate change.

"The judge also noted that climate change is having 'a devastating impact on the traditional way of life of Torres Strait Islanders and their ability to practise Ailan Kastom, their unique and distinctive body of customs, traditions, observances and beliefs.'

"The judge doubted, though, that the applicants could obtain relief 'in respect of their loss of fulfilment of Ailan Kastom'."

Professor Rimmer said the protection of traditional knowledge, cultural heritage, and Indigenous intellectual property warranted greater consideration.

"The judge showed a significant amount of judicial humility, maintaining that a single judge of the Federal Court of Australia could not change the law.

"The judge did observe that the law could change through 'the incremental development or expansion of the common law by appellate courts, or by the enactment of legislation.'

"The judge said the plaintiffs could take this case further to the Full Court of the Federal Court of Australia and, ultimately, the High Court of Australia (which has previously engaged in judicial innovation in the Mabo case)."

Professor Rimmer said an appeal could explore the consistency of the decision with comparative law and international law. "An appeal court could further explore comparative developments in climate litigation."

Professor Rimmer said yesterday's decision was also at odds with the successful 2019 Urgenda decision in the Netherlands, in which the Dutch Supreme Court held that the Dutch government had an obligation to urgently reduce greenhouse gas emissions in line with its human rights obligations.

The decision of the Federal Court of Australia in Pabai Pabai v Commonwealth could also be contrasted with the 2022 Torres Strait Eight case of Daniel Billy and others v Australia.
Professor Rimmer said that in the Torres Strait Eight case the UN Human Rights Committee found that Australia's failure to adequately protect Torres Strait Islander people from adverse climate impacts violated their human rights.

"The Committee found that under the UN's Covenant on Civil and Political Rights, which Australia ratified in 1980, Australia had violated their human rights, in particular their cultural rights and their rights to be free from arbitrary interferences with their private life, family, and home," Professor Rimmer said.

Pabai v Commonwealth of Australia (No 2) [2025] FCA 796

Decision: https://www.judgments.fedcourt.gov.au/judgments/Judgments/fca/single/2025/2025fca0796

Summary: https://www.judgments.fedcourt.gov.au/judgments/Judgments/fca/single/2025/2025fca0796/summaries/2025fca0796-summary

For the reactions of the defendants, see The Hon Chris Bowen MP, Minister for Climate Change and Energy, and the Hon Malarndirri McCarthy, Minister for Indigenous Australians and Senator for the Northern Territory, 'Joint statement on Pabai Pabai v Commonwealth', Australian Government, 15 July 2025, https://minister.dcceew.gov.au/bowen/media-releases/joint-statement-pabai-pabai-v-commonwealth

For the reactions of the plaintiffs, see Joseph Gunzier, '"My Heart is Broken": Climate Case Dismissed Despite Findings of Cultural Loss', National Indigenous Times, 16 July 2025, https://nit.com.au/15-07-2025/19140/my-heart-is-broken-climate-case-dismissed-despite-findings-of-cultural-loss

Niki Widdowson, 'Torres Strait Islander climate change decision by Fed Court at odds with UN', Media Alert, QUT, 16 July 2025, https://drrimmer.medium.com/torres-strait-islander-climate-change-decision-by-fed-court-at-odds-with-un-4d1da4c805a2

Artificial Intelligence, Blog

A first look into the JURI draft report on copyright and AI

This post was originally published on COMMUNIA by Teresa Nobre and Leander Nielbock.

Last week we saw the first draft of the long-anticipated own-initiative report on copyright and generative artificial intelligence authored by Axel Voss for the JURI Committee (download as a PDF file). The report, which marks the third entry in the Committee's recent push on the topic after a workshop and the release of a study in June, fits in with the ongoing discussions around copyright and AI at the EU level. In his draft, MEP Voss targets the legal uncertainty and perceived unfairness around the use of protected works and other subject matter for the training of generative AI systems, strongly encouraging the Commission to address the issue as soon as possible, instead of waiting for the looming review of the Copyright Directive in 2026.

A good starting point for creators

The draft report starts by calling on the Commission to assess whether the existing EU copyright framework addresses the competitive effects associated with the use of protected works for AI training, particularly the effects of AI-generated outputs that mimic human creativity. The rapporteur recommends that such an assessment shall consider fair remuneration mechanisms (paragraph 2) and that, in the meantime, the Commission shall "immediately impose a remuneration obligation on providers of general-purpose AI models and systems in respect of the novel use of content protected by copyright" (paragraph 4). Such an obligation shall be in effect "until the reforms envisaged in this report are enacted." However, we fail to understand how such a transitory measure could be introduced without a reform of its own.

Voss's thoughts on fair remuneration also require further elaboration, but clearly the rapporteur is solely concerned with remunerating individual creators and other rightholders (paragraph 2). Considering, however, the vast amounts of public resources that are being appropriated by AI companies for the development of AI systems, remuneration mechanisms need to channel value back to the entire information ecosystem. Expanding this recommendation beyond the narrow category of rightholders therefore seems crucial.

Paragraph 10 deals with the much-debated issue of transparency, calling for "full, actionable transparency and source documentation by providers and deployers of general-purpose AI models and systems", while paragraph 11 asks for an "irrebuttable presumption of use" where the full transparency obligations have not been fully complied with. Recitals O to Q clarify that full transparency shall consist "in an itemised list identifying each copyright-protected content used for training", an approach that does not seem proportionate, realistic or practical. At this stage, a more useful approach to copyright transparency would be to go beyond the disclosure of training data, which is already dealt with in the AI Act, and recommend the introduction of public disclosure commitments on opt-out compliance. A presumption of use, which is a reasonable demand, could still kick in based on a different set of indicators.

Another set of recommendations that aims at addressing the grievances of creators is found in paragraphs 6 and 9 and includes the standardization of opt-outs and the creation of a centralized register for opt-outs. These measures are very much in line with COMMUNIA's efforts to uphold the current legal framework for AI training, which relies on creators being able to exercise and enforce their opt-out rights.
Two points of concern for users

At the same time as it tries to uphold the current legal framework, the draft report also calls for either the introduction of a new "dedicated exception to the exclusive rights to reproduction and extraction" or for expanding the scope of Article 4 of the DSM Directive "to explicitly encompass the training of GenAI" (paragraph 7). At first glance, this recommendation may appear innocuous, redundant even, given that the AI Act already assumes that such legal provision extends to AI model providers. However, the draft report does not simply intend to clarify the current EU legal framework. On the contrary, the report claims that the training of generative AI systems is "currently not covered" by the existing TDM exceptions. This challenges the interpretation provided for in the AI Act and in multiple statements by the Commission, and opens the door for discussions around the legality of current training practices, with all the consequences this entails, including for scientific research.

The second point of concern for users is paragraph 13, which calls for measures to counter copyright infringement "through the production of GenAI outputs." Throughout the stakeholder consultations on the EU AI Code of Practice, COMMUNIA was very vocal about the risks this category of measures could entail for private uses, protected speech and other fundamental freedoms. We strongly opposed the introduction of system-level measures to block output similarity, since those would effectively require the use of output filters without safeguarding users' rights. We also highlighted that model-level measures targeting copyright-related overfitting could have the effect of preventing the lawful development of models supporting substantial legitimate uses of protected works. As this report evolves, it is crucial to keep this in mind and to ensure that any copyright compliance measures targeting AI outputs are accompanied by relevant safeguards that protect the rights of users of AI systems.

A win for the Public Domain

One of the last recommendations in the draft report concerns the legal status of AI-generated outputs. Paragraph 12 suggests that "AI-generated content should remain ineligible for copyright protection, and that the public domain status of such works be clearly determined." While some AI-assisted expressions can qualify as copyright-protected works under EU law, most importantly when there is sufficient human control over the output, many will not meet the standards for copyright protection. However, these outputs can still potentially be protected by related rights, since most related rights have no threshold for protection. This calls into question whether the related rights system is fit for purpose in the age of AI: protecting non-original AI outputs with exclusive rights, regardless of any underlying creative activity and in the absence of meaningful investment, is certainly inadequate. We therefore support the recommendation that their public domain status be asserted in those cases.

Next steps

Once the draft report is officially published and presented in JURI on

Artificial Intelligence, Blog

Danish Bill Proposes Using Copyright Law to Combat Deepfakes

Recently, a Danish Bill has been making headlines by addressing issues related to deepfakes through a rather uncommon approach: copyright. As stated to The Guardian, the Danish Minister of Culture, Jakob Engel-Schmidt, explained that they "are sending an unequivocal message that everybody has the right to their own body, their own voice and their own facial features, which is apparently not how the current law is protecting people against generative AI." According to CNN, the minister believes that the "proposed law would help protect artists, public figures, and ordinary people from digital identity theft."

Items 8, 10, and 19 of the proposal include some of the most substantive changes to the law. Among other measures, Item 8 proposes adding a new § 65(a), requiring the prior consent of performers and performing artists to digitally generate imitations of them and make these available to the public, and establishing protection for a term of 50 years after their death. Item 10 introduces a new § 73(a), focusing on "realistic digitally generated imitations of a natural person's personal, physical characteristics," requiring prior consent from the person being imitated before such imitations can be made available to the public. This exclusive right would also last for 50 years after the death of the imitated person and would not apply to uses such as caricature, satire, parody, pastiche, criticism, or similar purposes.

It could be argued that this approach is uncommon because several countries, including those in the European Union, already have laws regulating personality rights and, more specifically, personal data. Copyright is known for regulating the use of creative expressions of the human mind, not the image, voice, or likeness of a person when considered individually, i.e., outside the context of an artistic performance. According to CNN, "Engel-Schmidt says he has secured cross-party support for the bill, and he believes it will be passed this fall."

A machine-translated version of the Proposal is below:

Notes:
