Former OpenAI Exec Ilya Sutskever Starts Safe AI Company

Ilya Sutskever, former chief scientist at OpenAI, has embarked on a new mission to develop safe and beneficial artificial intelligence through his newly formed company, Safe Superintelligence Inc. (SSI). Teaming up with former OpenAI colleague Daniel Levy and former Apple AI lead Daniel Gross, Sutskever aims to tackle what he and his co-founders call “the most important technical problem of our time.”

Mission of Safe Superintelligence Inc.

The name Safe Superintelligence Inc. (SSI) encapsulates the company’s singular focus: developing artificial superintelligence (ASI) that is safe and ethical. “SSI is our mission, our name, and our entire product roadmap,” the founders declared on their website. The team is dedicated to ensuring that the development of ASI, a hypothetical future stage where AI surpasses human intelligence, does not pose an existential threat to humanity.

Sutskever’s emphasis on safety isn’t new. During his tenure at OpenAI, he co-led the Superalignment team, which worked on techniques for steering and controlling AI systems more capable than their creators. Luminaries in the field, such as Geoffrey Hinton, have voiced concerns about the potential dangers of ASI, emphasizing the importance of aligning AI development with human interests.

Aftermath of OpenAI’s Power Struggle

Sutskever’s departure from OpenAI in May followed a dramatic boardroom upheaval in November 2023, when he, along with independent board members Helen Toner, Tasha McCauley, and Adam D’Angelo, voted to remove OpenAI CEO Sam Altman. Chairman Greg Brockman resigned in protest, and after a near-unanimous revolt by employees and pressure from investors, Altman was reinstated within days. The failed ouster highlighted internal conflicts over the direction and governance of OpenAI.

Following the power struggle, Sutskever expressed regret for his role in the attempted ousting of Altman. The upheaval at OpenAI revealed deep governance issues, including accusations that the company had deviated from its mission to develop AI for the benefit of all humanity. These events likely influenced Sutskever’s decision to form a new company dedicated solely to safe AI development.

Insulating from Commercial Pressures

SSI’s approach to AI development sets it apart from other tech companies. Sutskever has stated that SSI does not intend to sell AI products or services in the near term. Instead, the company will focus exclusively on developing safe superintelligence. “This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever told Bloomberg.

By insulating SSI from the pressures of the competitive AI market, Sutskever hopes to avoid the pitfalls that can come with the pursuit of commercial success. The company’s business model and its choice of investors are structured to prioritize safety over short-term gains. Treating safety and capabilities as intertwined technical challenges reflects the company’s commitment to advancing AI responsibly.

Future of AI Development

The formation of SSI comes at a time when major tech companies such as Google, Apple, Meta, and Microsoft are rapidly advancing their AI capabilities. OpenAI has continued on its own development trajectory, recently launching new models such as GPT-4o, which offers faster responses and stronger multimodal capabilities.

However, Sutskever’s new venture represents a different approach to AI development. By concentrating on the long-term goal of safe superintelligence, SSI aims to ensure that the transformative potential of AI is realized without compromising safety or ethical standards.