July 25, 2025


An Open Letter to the ICANN Community: Not the Community Priority Evaluation We Intended

This post was originally published on CircleID by Kathy Kleiman.

To the ICANN Community,

Today, I share a warning about serious changes to the Community Priority Evaluation (CPE) of the New gTLD Applicant Guidebook. They are not driven by public comment, but by a few voices within the SubPro Implementation Review Team, and they are very likely to lead to disastrous misappropriation of well-known community names, including those of Tribes, Indigenous Peoples and NGOs around the world.

The reason is that we (the ICANN Community) envisioned .CHEROKEE for the Cherokee Nation and other tribes, peoples and NGOs, not for a group of enthusiasts of Jeep Cherokee and Grand Cherokee vehicles. But the policy written by the SubPro PDP Working Group (2016-2020) and accepted by the GNSO Council and ICANN Board was recently and deeply changed, replaced with a scoring system that eliminates the ability of well-known communities to stop unrelated groups, a fraction of their community, or a group completely opposed to them from using the same name as a new gTLD, provided the applicant has some semblance of internal organization and activity. This change will result in the misappropriation of well-known community names and great harm that we never intended when we wrote the policy.

The Subsequent Procedures PDP Working Group (meeting 2016-2020) was fairly balanced in its recommendations for both the applicant and the communities that might oppose the CPE application. I share some of the language showing the independence of the Community Experts on the CPE panel to conduct research, and the ability of other communities and tribes to send comments and letters of opposition and raise concerns, all to be taken into account in the CPE evaluation (Final Report, 2020).

Unfortunately and very recently, a few members of the SubPro Implementation Review Team (“IRT”), a group charged with implementing policy, not rewriting it, made change after change to the language, terms and scoring of the Community Priority Evaluation rules. In April, they stripped out carefully negotiated policies and balances to create an unfair advantage for applicants, including through new rules telling the CPE Panelists to greatly limit the use of their expertise and independent research skills and not to give much weight to external opposition and comments they may receive. The changes are buried in Module 4: Contention Set Resolution, 4.4 Community Priority Evaluation, pages 133-150, of the final draft of the Applicant Guidebook now out for public comment.

If you look at the new CPE scoring system, called Community Priority Evaluation Criteria (Section 4.4.7, p. 139 in the draft AGB), in the edited versions (“redlines” that I share from the IRT on April 14, 2025, and April 30, 2025, and a special redline combining both sets of edits that I created), you will see that the hands of the CPE Panelists are newly “tied”: they cannot engage in the research and application of their knowledge that the adopted policy requires. Sadly, under the new changes:

And these are just a few examples. Under this new language, newly shared with the community and not arising from public comment, self-identified communities will win CPE. What a prize for the applicant (no auction), and what a tragedy for the peoples, tribes and NGOs of the same name, one that will last far longer than the applicant!
Overall, if these rules are adopted, we can predict that letters and comments of heartfelt opposition against CPE applicants will pour into ICANN, only to be systematically ignored by the Panel because of these recent changes to the scoring and evaluation criteria. As shared above, this April editing came not from accepted policy, but from a few strong voices on the SubPro IRT. I fear disastrous misappropriation of the well-known names of peoples, tribes and communities if the recent changes to CPE text and scoring are not reversed and the original language is not restored.

If you agree, I ask you to write a small set of comments, and I share how to do it below, as it will make a difference.

Thank you for reading and caring,

Kathy Kleiman, Co-Founder, ICANN’s Noncommercial Users Constituency

To Submit a Comment in ICANN’s Open Proceeding on the Final Draft of the Applicant Guidebook, due July 23rd. Thank you!


Ethical Data Scraping for Research – Expert Workshop held in Amsterdam

A unique, expert-led workshop on ethical data scraping was organized by Professor Niva Elkin-Koren and Dr. Maayan Perel and hosted by the Shamgar Center of Digital Law and Innovation, Tel Aviv University. The workshop was made possible by the generous support of the Right to Research in International Copyright Law coalition at American University, especially Professor Sean Flynn, Director of the Program on Information Justice and Intellectual Property (PIJIP).

An interdisciplinary group of information law experts gathered in Amsterdam’s beautiful Volkshotel on July 2, 2025, to discuss data scraping for research and innovation and its ethical boundaries. The event aligned with the agenda of the Standing Committee on Copyright and Related Rights (SCCR), which promotes public interest strategies, coordinated action, and research, and seeks to inform public policy on legal exceptions and limitations for researchers.

Data scraping is an essential research tool for academics and scientists across a wide range of disciplines. It is also critical for training artificial intelligence (AI) models and developing innovative research methodologies. The legal boundaries of data scraping attract considerable attention, not only from academics but also from policymakers, governments, courts, technology companies, and data providers worldwide. The boundaries of ethical data scraping, which often depend on the type of data being scraped, the technologies being used, the purpose of the scraping, and the applicable legal framework, remain unclear. Consequently, researchers are left to navigate potential legal risks and the changing technological barriers set by tech giants such as Cloudflare, which recently adopted a permission-based approach to data scraping. As a result, researchers may be deterred from engaging in lawful data scraping, at the cost of forgoing research that can serve the public interest.

Moderated by Dr. Maayan Perel and Professor Eldar Haber, the workshop aimed to bring greater clarity to what ethical data scraping is and should be. The workshop applied practical and technical insights from real-world data scraping, analyzed the legal implications of various transatlantic approaches, and proposed guidelines for promoting ethical data scraping for research and development.

To obtain a better understanding of how data scraping models work in practice, participants explored a test case model from Bright Data, an international data scraping company whose model was also discussed in recent litigation with X and Meta. In a stimulating presentation, Bright Data representatives described their publicly available data scraping technology, elaborated on their ethical policies, and presented their “data for good” initiative, which offers scraping opportunities for researchers as well as other stakeholders.

To encourage a productive dialogue between academic and business participants, the discussion followed a “red teaming” approach. Red teaming, a concept we adapted from the cybersecurity realm, aims to help organizations proactively identify weaknesses and strengthen their security posture before actual attacks occur. Applying red teaming’s critical approach, the participants identified potential legal challenges in Bright Data’s test case model from various perspectives, including intellectual property law, competition law, privacy law, and data protection law, while also identifying points of legal tension between the US and EU frameworks.
The issues highlighted included the legal application of copyright law to information copying and storage; questions of competition law arising from dominant market actors’ ability to adjust behavior and match prices; and the scope of privacy protection in personal information that data providers voluntarily make publicly accessible.

Next, insights from Bright Data’s test case were used to draw broader observations about what constitutes ethical data scraping in practice, especially for AI training. Key issues included:

The workshop concluded with a broader discussion of potential legal, technical, and institutional strategies to promote ethical data scraping for academic research and technological development. Participants identified the need to distinguish between questions of access to data and questions of the use of the data, as each raises different legal issues. Key suggestions included:

Participants: Tanya Aplin, Mor Avisar, Balazs Bodo, Sharon Bar Ziv, Sean Flynn, Eldar Haber, Uri Hacohen, Bernt Hugenholtz, Aline Iramina, Matthias Leistner, Dana Mazia, Maayan Perel, Mando Rachovista, Pamela Samuelson, Martin Senftleben, Ben Sobel, Streffan Verhultz, Amit Zac
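For readers less familiar with what scraping “good practice” can look like at a technical level, the following is a minimal sketch, not drawn from the workshop or from Bright Data’s technology, of steps commonly associated with responsible scraping: checking a site’s robots.txt, identifying the scraper and a contact address, and rate-limiting requests. The URLs and the user-agent string are placeholders.

```python
import time
import urllib.robotparser
import urllib.request

# Hypothetical identifiers: a real research scraper would use its own name and contact address.
USER_AGENT = "ExampleResearchBot/0.1 (contact: researcher@example.org)"
TARGET_PAGES = ["https://example.org/page1", "https://example.org/page2"]  # placeholder URLs

# Honor the site's robots.txt before fetching anything.
robots = urllib.robotparser.RobotFileParser()
robots.set_url("https://example.org/robots.txt")
robots.read()

for url in TARGET_PAGES:
    if not robots.can_fetch(USER_AGENT, url):
        print(f"Skipping {url}: disallowed by robots.txt")
        continue
    # Identify the scraper explicitly instead of masquerading as a browser.
    request = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(request) as response:
        html = response.read().decode("utf-8", errors="replace")
    print(f"Fetched {len(html)} characters from {url}")
    # Crude rate limit: pause between requests to avoid burdening the server.
    time.sleep(5)
```

Steps like these address only part of the picture; as the workshop discussion made clear, access to data and use of data raise distinct legal questions that no scraping script can settle on its own.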
