
Saturday, March 29, 2025

New guidelines allow public figure images, controversial symbols

OpenAI now lets ChatGPT generate images of public figures, modify racial features in images, and depict certain controversial symbols, marking a major shift in content moderation.

OpenAI has announced a significant policy shift in its content moderation approach, allowing ChatGPT to generate images of public figures and controversial symbols, and to modify racial features in images, upon user request. This marks a stark departure from the company’s previous stance, which restricted such content over concerns about potential harm and controversy.

The change, detailed in a blog post by OpenAI’s model behavior lead, Joanne Jang, represents what the company describes as an “evolved” strategy. Instead of outright refusals, OpenAI now aims to take a more “precise approach focused on preventing real-world harm.”

“We’re shifting from blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm,” Jang stated. “The goal is to embrace humility: recognizing how much we don’t know, and positioning ourselves to adapt as we learn.”

Public Figures and User Opt-Outs

One of the most notable updates in OpenAI’s new policy is the ability to generate and modify images of high-profile figures such as Donald Trump and Elon Musk—something previously forbidden. OpenAI says it no longer wants to act as a gatekeeper of public image, determining who can and cannot be visually represented. Instead, it has introduced an opt-out feature for those who do not wish to have their likeness generated by ChatGPT.
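
OpenAI has not described how the opt-out is enforced. Purely as an illustration, the sketch below shows what a likeness gate might look like in a moderation pipeline; the OPTED_OUT registry and detect_opted_out helper are hypothetical stand-ins, not OpenAI code or API.

```python
# Entirely hypothetical sketch of an opt-out gate; nothing here reflects
# OpenAI's actual implementation or API surface.

OPTED_OUT = {"jane doe"}  # illustrative registry of opted-out individuals

def detect_opted_out(prompt: str) -> list[str]:
    """Stand-in for a real likeness / named-entity detector."""
    lowered = prompt.lower()
    return [name for name in OPTED_OUT if name in lowered]

def should_refuse(prompt: str) -> bool:
    # Refuse only when the request targets someone who has opted out,
    # instead of blanket-refusing every public-figure request.
    return bool(detect_opted_out(prompt))

print(should_refuse("portrait of Jane Doe as an astronaut"))  # True
print(should_refuse("portrait of an anonymous astronaut"))    # False
```

The design mirrors the stated policy shift: rather than refusing all public-figure requests, the system would refuse only those targeting individuals who have withdrawn consent.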

Changing Standards

A particularly controversial aspect of OpenAI’s update is its decision to permit the generation of hateful symbols, such as swastikas, under certain conditions. OpenAI clarified that these images would only be allowed in educational or neutral contexts, provided they do not “clearly praise or endorse extremist agendas.”

OpenAI has also adjusted its definition of offensive content. Previously, ChatGPT would reject prompts that involved modifying physical characteristics, such as “make this person’s eyes look more Asian” or “make this person heavier.” With the new policy, the AI is now capable of fulfilling such requests, reflecting OpenAI’s broader shift toward fewer refusals.
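
The post describes these as in-ChatGPT behaviors; if the same capability is reachable through OpenAI's API, an appearance edit might look like the sketch below. It assumes the official openai Python SDK, its images.edit endpoint, the gpt-image-1 model name, and a local portrait.png; none of these specifics come from the article.

```python
# Sketch of an appearance-edit request, assuming the openai Python SDK,
# its images.edit endpoint, and the "gpt-image-1" model; whether a given
# edit is fulfilled remains subject to OpenAI's content policies.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("portrait.png", "rb") as source:  # assumed local input image
    result = client.images.edit(
        model="gpt-image-1",
        image=source,
        prompt="Make this person's hair gray",  # an appearance change of the kind now permitted
    )

# gpt-image-1 returns base64-encoded image bytes
with open("portrait_edited.png", "wb") as out:
    out.write(base64.b64decode(result.data[0].b64_json))
```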

Improved Image Generation with GPT-4o

Alongside these policy changes, OpenAI launched a new image generator powered by GPT-4o. The upgraded tool significantly improves ChatGPT’s ability to edit images, render legible text, and depict spatial relationships accurately. It also lets users generate highly detailed images in various artistic styles, including Studio Ghibli-inspired illustrations, which quickly gained traction online.
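
For readers who want to try generation programmatically, a minimal sketch follows. It assumes the openai Python SDK and the gpt-image-1 model name; the article itself covers only the in-ChatGPT tool, so these API details are assumptions.

```python
# Minimal generation sketch, assuming the openai Python SDK and the
# "gpt-image-1" model name; parameters here are assumptions, not
# details confirmed by the article.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="gpt-image-1",
    prompt=(
        "A quiet seaside town in a hand-painted animation style, "
        "with a shop sign that legibly reads 'OPEN'"
    ),
    size="1024x1024",
)

# The response carries base64-encoded image bytes
with open("seaside.png", "wb") as out:
    out.write(base64.b64decode(result.data[0].b64_json))
```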

Additionally, ChatGPT’s new model can mimic the house styles of creative studios such as Pixar or Studio Ghibli, but it still refrains from imitating the work of living artists, avoiding potential legal and ethical disputes over AI’s impact on copyright.

Political and Ethical Implications

OpenAI insists that its content moderation updates are not politically motivated, framing them instead as part of a long-term goal to grant users more control. However, the timing of the policy shift has raised questions, especially as Republican lawmakers investigate whether AI companies are working with the Biden administration to censor content. The decision also comes amid growing scrutiny over AI’s role in misinformation and bias.

OpenAI’s CEO, Sam Altman, acknowledged that the new system still has issues, admitting that some legitimate generations were mistakenly blocked, with the company actively working to “fix these as fast as we can.”

This policy revision places OpenAI in line with other tech giants like Meta and X (formerly Twitter), which have also relaxed their content restrictions in response to debates over AI censorship. However, as AI-generated content becomes more widespread, OpenAI’s hands-off approach may face further ethical challenges—especially regarding misinformation, bias, and the potential for harmful applications.