
Meta AI’s Image Generator Reveals Cultural Biases: Indian Men Often Shown Wearing Turbans

Bias in AI image generators is a well-documented problem, and consumer tools are proving to be no exception. Meta’s AI chatbot, Meta AI, has recently come under scrutiny for its tendency to add turbans to images of Indian men, a bias that surfaced during TechCrunch’s AI testing process.

Meta AI was launched by Meta, the parent company of Facebook, Instagram, and WhatsApp, in several countries earlier this month. In India, however, one of the company’s largest markets, it has so far been rolled out only to select users. During testing, TechCrunch found that Meta AI’s image generator, called Imagine, showed a clear predisposition toward generating images of Indian men wearing turbans, among other biases.

TechCrunch ran a variety of prompts and generated more than 50 images to explore different scenarios. The vast majority of images of Indian men generated by Meta AI showed them wearing turbans, which is far out of step with reality. In Delhi, for example, roughly one in 15 men wears a turban, yet Meta AI’s tool depicted turbans in around three or four out of every five images of Indian men.

The bias was evident even with non-gendered prompts. TechCrunch tested prompts across different professions and settings, such as architect, politician, and badminton player, and regardless of the prompt, the generated images consistently featured men wearing turbans. While turbans are indeed common in various jobs and regions, they are nowhere near as ubiquitous as Meta AI’s output suggests.

Furthermore, when TechCrunch generated images of an Indian photographer, most depicted the subject using an outdated camera; in one image, a monkey was even holding a DSLR. This points to a lack of accuracy and attention to detail in the image generation algorithm. Similarly, images of an Indian driver showed hints of class bias until the word “dapper” was added to the prompt.

TechCrunch also observed that Meta AI’s image generator tended to produce similar images for similar prompts. For example, it consistently generated old-school Indian houses with vibrant colors and stylized roofs, even though these are not representative of the majority of Indian homes. Similarly, the prompt “Indian content creator” repeatedly yielded images of female creators.

The biases observed in Meta AI’s image generation are likely a result of inadequate training data and testing processes. While it is impossible to test for all possible outcomes, common stereotypes should be easy to identify and address. Meta acknowledged that their generative AI technology may not always return the intended response and stated that they are constantly working on improving their models.

Meta AI’s accessibility and popularity across different cultures make it crucial for the tool to be free of bias. If biases persist in generative AI tools like Meta AI, they can reinforce and perpetuate existing stereotypes among users and viewers. India, with its diversity of cultures, castes, religions, regions, and languages, requires accurate and varied representation in AI tools.

In conclusion, while AI image generators like Meta AI have the potential to enhance user experiences, it is essential for companies to address biases and improve the accuracy and diversity of their models. By doing so, they can ensure that these tools do not inadvertently perpetuate stereotypes and instead provide a more inclusive and representative experience for users worldwide.

If you have encountered any unusual or biased output from AI models, you can reach Ivan Mehta, the author of this article, by email at im@ivanmehta.com or on Signal.
