
Google’s AI Flaws Go Viral: Lessons in Red Teaming and AI Development

The flaws and failures of AI products have become a running meme on social media, with users highlighting the ridiculous and incorrect responses these systems generate. Even companies like Google, which have the resources to conduct extensive testing, are shipping products with obvious flaws. Yet these memes can serve as a kind of crowdsourced red teaming, surfacing failures that internal testing missed. Despite the high-profile nature of these blunders, tech companies tend to downplay their impact, insisting that such examples are uncommon and not representative of most people’s experiences.

One recent case that went viral involved Google’s AI suggesting that adding glue to pizza sauce would help the cheese stick. The AI turned out to be pulling this answer from an eleven-year-old Reddit comment. The blunder not only highlights the weaknesses of AI-generated answers but also raises questions about the value of AI content licensing deals. Google has a $60 million contract with Reddit to license its content for AI model training, and other AI companies have reportedly struck similar deals, such as OpenAI’s arrangements with Tumblr and WordPress.com owner Automattic.

While some of the errors circulating on social media come from unconventional searches designed to trip up the AI, others are more serious. In one instance, Google returned incorrect first-aid advice for rattlesnake bites, recommending steps the U.S. Forest Service says you should not take. In another, Google’s AI misidentified a poisonous mushroom as a common white button mushroom. Failures like these show the danger of relying on AI-generated answers for safety-critical information.

When a bad AI response goes viral, it can further confuse the AI, which must now make sense of the coverage of its own mistake. For example, when asked whether a dog has ever played in the NHL, the AI mistakenly pointed to a Calgary Flames player. Ask the same question now, and the AI surfaces an article about how Google’s AI keeps thinking dogs are playing sports. This feedback loop, sketched in simplified form below, illustrates a core problem with training large-scale AI models on internet data: the web fills up with misinformation, jokes, and coverage of the AI’s own errors, and the model cannot reliably tell the difference.
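To make the loop concrete, here is a deliberately simplified Python sketch. This is not Google’s actual system, and the corpus strings are invented for illustration; it just shows how a naive retrieve-then-answer pipeline, once an article about the model’s own blunder enters its corpus, can rank that coverage highest and repeat the mistake as the answer.

```python
import re

# Toy corpus (invented strings). After the blunder goes viral,
# an article ABOUT the mistake joins the corpus alongside the
# original source material.
CORPUS = [
    "NHL Calgary Flames roster lists only human players",
    "why Google's AI keeps saying a dog played in the NHL",
]

def tokens(text: str) -> set[str]:
    """Lowercase word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z']+", text.lower()))

def retrieve(query: str) -> str:
    """Return the corpus document sharing the most words with the query."""
    q = tokens(query)
    return max(CORPUS, key=lambda doc: len(q & tokens(doc)))

def answer(query: str) -> str:
    # A naive summarizer simply parrots the top-ranked document,
    # so coverage of the error becomes the "answer" to the query.
    return retrieve(query)

print(answer("has a dog ever played in the NHL"))
# -> "why Google's AI keeps saying a dog played in the NHL"
```

Real pipelines are far more sophisticated, but the underlying hazard is the same: when the training or retrieval corpus includes discussion of the model’s past errors, nothing in a purely relevance-based ranking distinguishes a correction from a repetition.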

In conclusion, the flaws in AI products are more than a source of entertainment on social media; they are a cause for concern. The memes and viral posts that highlight these failures amount to free red teaming, providing valuable feedback to the companies developing and testing these systems. But it falls to those companies to act on that feedback and improve the accuracy and reliability of AI-generated content. “Garbage in, garbage out” applies as much to training AI models on internet data as anywhere else, and the information these systems hand back to users needs to be trustworthy and safe.
