OpenAI Teams Up with U.S. AI Safety Institute for Early Access to Next Generative AI Model

OpenAI, a leading artificial intelligence (AI) company, has announced a partnership with the U.S. AI Safety Institute under which it will give the institute early access to its next major generative AI model for safety testing. The collaboration comes after OpenAI faced criticism for allegedly deprioritizing AI safety in favor of building more powerful AI systems.

Earlier this year, OpenAI disbanded a team dedicated to researching controls that would prevent superintelligent AI systems from behaving unpredictably. The move followed the resignations of the team’s co-leads and raised concerns about OpenAI’s commitment to AI safety. In response to the backlash, OpenAI made several promises, including eliminating the non-disparagement clauses that discouraged whistleblowing and dedicating 20% of its computing resources to safety research.

Despite these measures, skeptics remained unconvinced, especially after OpenAI staffed its safety commission with its own insiders and reassigned a top AI safety executive to a different role within the company. The company’s endorsement of the Future of AI Innovation Act, a proposed Senate bill that would formally authorize the AI Safety Institute as the federal body responsible for setting AI standards and guidelines, further fueled suspicions of regulatory capture.

OpenAI CEO Sam Altman’s involvement in the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board and the company’s increased federal lobbying expenditures also raised eyebrows. These actions suggest that OpenAI is actively influencing AI policymaking at the federal level.

The U.S. AI Safety Institute, housed within the Commerce Department’s National Institute of Standards and Technology, collaborates with a consortium of companies, including OpenAI, Google, Microsoft, Meta, Apple, Amazon, and Nvidia. This industry group is focused on implementing President Joe Biden’s AI executive order, which includes developing guidelines for AI red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.

OpenAI’s partnership with the U.S. AI Safety Institute demonstrates the company’s commitment to addressing AI safety concerns. By providing early access to its next generative AI model for safety testing, OpenAI aims to allay fears about the potential risks of advanced AI technologies. The collaboration also highlights the importance of industry-government cooperation in shaping responsible AI development and deployment.

Overall, OpenAI’s efforts to prioritize AI safety and engage with regulatory bodies reflect a growing recognition of the need for ethical and safe AI practices. As AI continues to advance, it is crucial for companies and policymakers to work together to ensure that AI technologies benefit society while minimizing potential risks.