The AI Safety Institute Expands with New Location in San Francisco

The U.K.’s AI Safety Institute, established in November 2023 to assess and address risks in AI platforms, has announced plans to open a second location in San Francisco. The move puts the institute closer to the epicenter of AI development: the Bay Area is home to major players like OpenAI, Anthropic, Google, and Meta. Although the U.K. already has a memorandum of understanding with the U.S. on AI safety collaboration, the decision to establish a direct presence in the U.S. underscores its commitment to the issue.

Michelle Donelan, the U.K. secretary of state for science, innovation, and technology, emphasized the importance of having a base in San Francisco, stating that it would provide access to the headquarters of many AI companies and an additional pool of talent. The U.K. sees AI as a significant opportunity for economic growth and investment, making visibility with these firms crucial.

The recent controversy surrounding OpenAI’s Superalignment team has only underscored the timeliness of establishing a presence in San Francisco. The AI Safety Institute currently has just 32 employees, a small team relative to the billions of dollars invested in the AI companies it is meant to scrutinize.

One of the institute’s notable achievements is the release of Inspect, a set of tools for testing the safety of foundation AI models. However, companies’ engagement with the evaluation process is currently voluntary and inconsistent: they are not legally obligated to have their models vetted, which means identified risks could go unaddressed.

Donelan acknowledged that evaluation is still an emerging science and said the institute plans to present Inspect to regulators at the AI safety summit in Seoul, in the hope of encouraging them to adopt it as standard practice.

Moving forward, Donelan envisions the U.K. developing more AI legislation. However, she emphasized the need to better understand AI risks before implementing new laws. The recent international AI safety report highlighted the gaps in research and called for more global collaboration and incentivized research.

Ian Hogarth, chair of the AI Safety Institute, emphasized the importance of an international approach to AI safety and expressed pride in scaling operations to San Francisco. The institute aims to collaborate with other countries, test models, and anticipate risks associated with frontier AI.

By expanding to San Francisco, the U.K.’s AI Safety Institute aims to enhance its presence in the global AI community, strengthen collaboration with AI companies, and contribute to making AI safer for society as a whole.