Apple has committed to developing safe, secure, and trustworthy AI by signing the White House’s voluntary agreement. The move comes as Apple prepares to bring its generative AI offering, Apple Intelligence, to its core products, putting generative AI in front of its roughly 2 billion users.
Joining 15 other tech giants, including Amazon, Google, Microsoft, and OpenAI, Apple has signed on to the White House’s guidelines for the development of generative AI. While Apple had previously said little about the extent of its plans to integrate AI into iOS, it made its intentions clear at the Worldwide Developers Conference (WWDC) in June, announcing a partnership with OpenAI to embed ChatGPT in the iPhone. By aligning itself with the White House’s rules on AI, Apple signals its willingness to cooperate with regulators, potentially getting ahead of any future regulatory fights over AI.
It is worth asking, however, how much weight Apple’s voluntary commitments hold. Although they carry little enforcement power, they serve as a starting point in the pursuit of safe and trustworthy AI: the White House has framed such commitments as a “first step,” followed by President Biden’s AI executive order last October and ongoing legislative efforts to regulate AI models at the federal and state levels.
As part of the commitment, AI companies, including Apple, have agreed to subject their AI models to red-teaming, in which testers act as adversarial attackers to stress-test a model’s safety measures, and to share the results of those tests with the public. The commitment also requires treating unreleased AI model weights as confidential information: Apple and the other signatories will work on weights in secure environments and limit access to as few employees as possible. Finally, the commitment calls for content labeling systems, such as watermarking, so that users can tell AI-generated content apart from content that is not AI-generated.
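To make the watermarking idea concrete, here is a toy sketch of one well-known statistical approach to marking generated text, in the spirit of the “green list” scheme described by Kirchenbauer et al. (2023). It is purely illustrative and not any signatory’s actual system; the vocabulary, bias parameter, and detector here are all invented for the example.

```python
# Toy statistical text watermark: a generator quietly favors a "green"
# subset of the vocabulary chosen by hashing the previous token, and a
# detector checks whether green tokens appear more often than chance.
# Illustrative only -- not any company's production watermarking system.
import hashlib
import random
from math import sqrt

VOCAB = [f"tok{i}" for i in range(1000)]  # made-up toy vocabulary

def green_list(prev_token: str, fraction: float = 0.5) -> set[str]:
    """Deterministically pick the 'green' slice of the vocabulary,
    seeded by a hash of the previous token."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate(n: int, bias: float = 0.9) -> list[str]:
    """Stand-in for a language model: picks a green token with
    probability `bias`, otherwise a non-green one."""
    rng = random.Random()
    out = ["tok0"]
    for _ in range(n):
        greens = green_list(out[-1])
        pool = list(greens) if rng.random() < bias else [t for t in VOCAB if t not in greens]
        out.append(rng.choice(pool))
    return out

def detect_z(tokens: list[str], fraction: float = 0.5) -> float:
    """Count tokens that land in their predecessor's green list and
    return a z-score; large values suggest watermarked (AI) text."""
    hits = sum(tok in green_list(prev, fraction) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - fraction * n) / sqrt(fraction * (1 - fraction) * n)

watermarked = generate(200)
unmarked = ["tok0"] + [random.choice(VOCAB) for _ in range(200)]
print(f"watermarked z-score: {detect_z(watermarked):.1f}")  # large, e.g. ~11
print(f"unmarked z-score:    {detect_z(unmarked):.1f}")     # near 0
```

The appeal of this family of schemes is that detection needs only the hashing rule, not the model itself; in real systems the bias is applied to the model’s output probabilities rather than a coin flip, and robustness to paraphrasing remains an open problem.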
In a related development, the Department of Commerce is preparing a report on the potential benefits, risks, and implications of open-source foundation models. Access to the weights of powerful AI models has become a contentious, politically charged issue: some advocate restricting access in the name of safety, but such restrictions could stifle AI startups and research. The White House’s stance on the matter will therefore have significant consequences for the broader AI industry.
Furthermore, the White House highlighted the progress made by federal agencies in fulfilling the objectives set out in the October executive order. Over 200 AI-related hires have been made, more than 80 research teams have been granted access to computational resources, and various frameworks for AI development have been released. These initiatives demonstrate the government’s commitment to fostering advancements in AI while prioritizing safety and regulation.
In conclusion, Apple’s decision to sign the White House’s voluntary commitment reflects its willingness to play by the rules and cooperate with regulators. While the commitments’ enforceability is limited, they mark a first step toward responsible AI development, and together with the executive order and ongoing legislative discussions they signal a broader push for AI regulation and safety. How these developments will ultimately shape the AI industry, particularly on the question of open-source models, remains to be seen.