Researchers Call for Protection of Whistleblowers and Critics
In a new open letter titled “Right to Warn,” a group of 11 researchers, including former employees of OpenAI and Google DeepMind, is calling on AI companies to commit to four principles protecting whistleblowers and critics who raise AI safety concerns. The letter warns of serious risks posed by AI technologies, from the entrenchment of existing inequalities to manipulation and misinformation to the potential loss of control over autonomous AI systems. The signatories argue that oversight of frontier AI development is inadequate, that profit motives shape company behavior, and that dissenting voices are suppressed within the organizations building these systems.
Principles to Rectify Concerns
The four principles outlined in the open letter aim to address these concerns. The signatories call on AI companies to:

1. Not enter into or enforce agreements that prohibit disparagement, or retaliate against employees for risk-related criticism.
2. Establish a verifiable, anonymous process for employees to raise risk-related concerns with the company’s board, with regulators, and with independent organizations.
3. Support a culture of open criticism that allows employees to share risk-related concerns publicly, with appropriate protection for trade secrets.
4. Not retaliate against employees who share risk-related confidential information after other reporting channels have failed.
Insights from Former OpenAI Employee
One of the signatories, Daniel Kokotajlo, elaborated on his reasons for leaving OpenAI in a series of social media posts. He said he had lost confidence in the company’s ability to act responsibly in its pursuit of artificial general intelligence, and emphasized the need for transparency and ethical conduct in the development of advanced AI systems. Kokotajlo explained that he joined OpenAI hoping the company would pivot toward prioritizing safety research as its systems became more capable; when that pivot did not happen, he and several other researchers left. He also objected to the non-disparagement agreement presented to him upon his departure, which he considered unethical.
Turbulence and Criticism Surrounding OpenAI
This wave of criticism is part of an ongoing period of turbulence for OpenAI. In November 2023, the company’s former non-profit board fired co-founder and CEO Sam Altman, saying he had not been consistently candid in his communications with the board. Altman was reinstated as CEO days later, but concerns about the company’s transparency persisted. The recent release of OpenAI’s GPT-4o model also drew criticism, with actor Scarlett Johansson accusing the company of imitating her voice without permission. OpenAI denied the claim, and subsequent reporting indicated that the voice in question had been recorded by a different actress. The departures of high-profile figures involved in AI safety work further fueled concerns about OpenAI’s safety policies and practices.
The Need for Accountability and Transparency
The open letter and the accounts of former OpenAI employees underscore the importance of accountability and transparency in the development and deployment of AI technologies. Whistleblowers and critics play a crucial role in holding AI companies accountable to the public, particularly in the absence of effective government oversight. The principles outlined in the letter aim to ensure that employees can voice their concerns without fear of retaliation. By promoting open criticism and anonymous reporting channels while still protecting legitimate trade secrets, they seek to create a culture of accountability within AI companies.
The open letter calling for the protection of whistleblowers and critics sheds light on the serious risks posed by AI technologies and the need for effective oversight. By committing to the four principles it outlines, AI companies can begin to address these concerns and work toward the responsible development and deployment of AI models.