Apple Unveils Apple Intelligence: A Closer Look at the Company’s Generative AI Push

Apple’s announcement of Apple Intelligence at its Worldwide Developers Conference (WWDC) has taken the AI world by storm. The company’s push into generative AI spans a range of features, from an upgraded Siri to AI-generated emoji to photo-editing tools. Apple promises that its AI is built with safety in mind and will deliver highly personalized experiences. However, the company has said little about how its models were trained, raising transparency concerns.

Apple revealed that it trains its AI models on a combination of licensed datasets and crawls of the public web. Publishers can opt out of future training, but it remains unclear whether the artists and creators whose work was already used have been compensated or credited. The courts have yet to decide whether the fair use doctrine covers training generative AI on copyrighted material, leaving many questions unanswered.

The lack of transparency may be driven by competitive pressure, but it also raises copyright concerns. Apple’s embrace of fair use arguments while presenting itself as a champion of sensible tech policy is disappointing, and the company could address these concerns by explaining its training practices in more detail.

In other news, OpenAI has made new hires, including Sarah Friar as its CFO and Kevin Weil as its chief product officer. Yahoo Mail has also introduced new AI capabilities, including AI-generated summaries of emails. A recent study from Carnegie Mellon highlights that not all generative AI models treat polarizing subject matter equally.

Google has released a research paper on its Personal Health Large Language Model (PH-LLM), which aims to provide recommendations for improving sleep and fitness based on wearable data. While PH-LLM shows promise, it still falls slightly short of human sleep experts’ recommendations.

Apple has also detailed, in broad strokes, the on-device and cloud-based generative AI models in its Apple Intelligence suite. The on-device model has roughly 3 billion parameters and runs offline on Apple devices, while the server model is larger and more capable. Apple claims both models are less likely to produce toxic outputs than comparably sized models.

Lastly, the sixth anniversary of GPT-1, OpenAI’s groundbreaking generative AI model, was marked. GPT-1 helped shift the field toward pretraining primarily on unlabeled text, which supplies its own prediction targets, rather than on manually labeled data. AI has come a long way since then, but experts believe another paradigm shift of this magnitude may not happen anytime soon.
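The core idea behind that shift is that raw text needs no human annotation, because each token serves as the prediction target for the tokens before it. As a rough illustration (a toy sketch, not OpenAI's actual pipeline, using a naive whitespace tokenizer), here is how unlabeled text can be turned into supervised-looking training pairs:

```python
# Toy illustration of self-supervised next-token prediction:
# raw, unlabeled text supplies its own training targets, so no
# manual labeling is needed (the approach GPT-1 popularized).

def next_token_pairs(text, context_size=3):
    """Split raw text into (context, target) training examples."""
    tokens = text.split()  # naive whitespace tokenizer, for illustration only
    pairs = []
    for i in range(context_size, len(tokens)):
        context = tokens[i - context_size:i]  # preceding tokens
        target = tokens[i]                    # the "label" comes from the text itself
        pairs.append((context, target))
    return pairs

examples = next_token_pairs("the cat sat on the mat", context_size=2)
# e.g. the first example pairs the context ["the", "cat"] with the target "sat"
```

A real model would train on billions of such pairs with a learned tokenizer, but the principle is the same: the dataset labels itself.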

Overall, Apple’s entry into generative AI has generated excitement and curiosity, but concerns regarding transparency and copyright issues remain. The AI industry continues to evolve with new hires, advancements in health-related AI models, and ongoing research.