OpenAI Collaborates with U.S. Government for Safety Checks on Next AI Model

OpenAI, a prominent name in the AI industry, has recently faced scrutiny over the safety of its advanced AI systems. In response, the company’s CEO, Sam Altman, has announced that OpenAI’s next major generative AI model will undergo safety checks by the U.S. government.

Altman revealed in a post that OpenAI has been collaborating with the U.S. AI Safety Institute, a federal government body, to provide early access to its next foundation model and advance the science of AI evaluations. This partnership demonstrates OpenAI’s commitment to ensuring the safety of its AI systems.

To further address these concerns, OpenAI has changed its non-disparagement policies: current and former employees are now free to raise concerns about the company and its work. Altman also emphasized that at least 20% of OpenAI’s computing resources will be allocated to safety research, underscoring the company’s stated commitment to prioritizing safety.

However, OpenAI’s commitment to safety has come under scrutiny. In a letter to Altman, five U.S. senators questioned the company’s approach to safety and raised concerns about possible retribution against former employees who publicly voiced their concerns under the non-disparagement clause in their employment contracts.

In response, OpenAI’s chief strategy officer, Jason Kwon, reaffirmed the company’s commitment to developing AI that benefits humanity. He highlighted the steps being taken by OpenAI, including the allocation of computing resources to safety research, the removal of the non-disparagement clause, and the partnership with the AI Safety Institute to ensure safe model releases.

Altman reiterated these commitments but offered few specifics, particularly about the collaboration with the AI Safety Institute. The government body, housed within the National Institute of Standards and Technology, aims to address risks associated with advanced AI and is working with a consortium of tech companies, including OpenAI.

It’s worth noting that OpenAI has a similar agreement with the U.K. government for safety screening of its models, signaling an effort to align with safety standards beyond the U.S.

Concerns over safety at OpenAI began to grow in May, when the two co-leaders of its superalignment team, Ilya Sutskever and Jan Leike, resigned. Leike publicly criticized the company for letting safety culture and processes fall by the wayside. OpenAI has nonetheless continued its product releases and has established a new safety and security committee to review its processes and safeguards.

The committee, whose members include Bret Taylor, Adam D’Angelo, Nicole Seligman, and Sam Altman, is focused on ensuring that OpenAI maintains a strong safety framework.

In conclusion, OpenAI is taking visible steps to address the concerns surrounding the safety of its AI systems. Through partnerships with government bodies and the formation of a safety and security committee, the company is positioning itself to prioritize safety and responsible AI development.