The Need for a Legal ‘Safe Harbor’ to Enable Evaluation of AI Tools by Researchers, Journalists, and Artists, According to Experts

In a recent paper, 23 AI researchers, academics, and creatives argue that a legal ‘safe harbor’ is essential for researchers, journalists, and artists who evaluate AI tools. The paper emphasizes that such protections are crucial for conducting “good-faith” evaluations of AI products and services, yet the terms of service of popular AI models often prohibit independent research into vulnerabilities, hindering progress in this area.

The authors call on tech companies to indemnify public-interest AI research and to protect it from account suspensions and legal consequences. While terms of service are designed to deter malicious use, they also restrict research on AI safety and trustworthiness, and companies frequently enforce these policies by suspending researchers’ accounts.

One example cited in the paper is OpenAI’s characterization of the New York Times’ evaluation of ChatGPT as “hacking” in a recent lawsuit. The Times’ lead counsel clarified that the evaluation was not hacking but rather an attempt to find evidence of copyright infringement. These incidents highlight the need for legal protections to enable researchers, journalists, and artists to carry out their evaluations without fear of legal reprisal.

Shayne Longpre of the MIT Media Lab and Sayash Kapoor of Princeton University, co-authors of the paper, explain that the concept of a ‘safe harbor’ was initially proposed by the Knight First Amendment Institute for research on social media platforms. They point to a history of academics and journalists facing lawsuits or even imprisonment while seeking to uncover weaknesses in those platforms. A ‘safe harbor’ would give researchers the legal protection they need to investigate AI systems and identify potential harms.

The paper, titled “A Safe Harbor for AI Evaluation and Red Teaming,” notes that account suspensions during public-interest research have occurred at companies such as OpenAI, Anthropic, Inflection, and Midjourney. Artist Reid Southen, one of the paper’s co-authors, was suspended from Midjourney after sharing generated images that closely resembled copyrighted works. His investigation showed that Midjourney could infringe copyright even with simple prompts. He believes that independent evaluation and red teaming should be permitted in order to protect the rights of content owners.

Transparency is a key issue raised by Longpre. He argues that independent researchers should be able to investigate the capabilities and flaws of AI products, provided they do not misuse them or cause harm, while also stressing the need to work with companies to improve transparency and address flaws in their systems. Kapoor adds that while companies may have valid reasons for banning certain types of usage, a one-size-fits-all policy is inappropriate: distinguishing malicious users from researchers conducting safety-critical research is crucial.

The authors of the paper have been in conversation with companies whose terms of use are under scrutiny. Many companies have shown willingness to engage in dialogue, although no firm commitments have been made regarding the implementation of a ‘safe harbor.’ OpenAI modified its terms of service after reviewing the first draft of the paper, indicating a potential willingness to support some aspects of the proposed ‘safe harbor.’

In conclusion, experts in the field have highlighted the need for a legal ‘safe harbor’ that would enable researchers, journalists, and artists to evaluate AI tools. The paper emphasizes that current terms of service restrict independent research and hinder progress on AI safety and trustworthiness. While conversations with companies show some positive signs, further dialogue and action are needed to ensure transparency and protection for those conducting evaluations in the public interest.