OpenAI Removes Voice from GPT-4o Model Amidst Scarlett Johansson Controversy

**OpenAI removes voice from GPT-4o model after controversy**

OpenAI faced backlash this week after users noticed that one of the voices offered by its new GPT-4o model sounded eerily similar to Scarlett Johansson, who voiced an AI assistant in the movie “Her.” The company announced that it would remove the voice, known as Sky, from the model. Johansson released a statement saying she had hired legal counsel to inquire into how the voice was developed, noting that OpenAI had previously approached her about lending her voice to the model and that she had declined. OpenAI maintains that the voice was not based on Johansson’s.

This controversy raises important questions about the ethical considerations surrounding AI and voice synthesis. As AI technology continues to advance, it becomes increasingly important to ensure that AI systems and models are developed and used in an ethical and responsible manner. This includes obtaining proper permissions and consent when using someone’s likeness or voice for AI applications.

**US Department of Justice files lawsuit against Live Nation Entertainment**

The US Department of Justice, along with 30 state attorneys general, filed a lawsuit against Live Nation Entertainment, the parent company of Ticketmaster, over alleged monopolistic practices. Attorney General Merrick Garland said in a press conference that Live Nation “suffocates its competition.” The lawsuit follows legislators’ investigation into the entertainment giant’s control over the industry, prompted by complaints from Taylor Swift fans who struggled to purchase tickets for her Eras Tour.

This lawsuit highlights the ongoing concerns about monopolistic practices in the entertainment industry. When a single company dominates a market, it can stifle competition and limit consumer choice. Regulators play a crucial role in ensuring fair competition and protecting consumers from anti-competitive behavior.

**Techstars CEO Maëlle Gavet steps down amid controversy**

Techstars, a prominent startup accelerator, experienced a major shakeup this week as CEO Maëlle Gavet announced her departure. Gavet’s leadership style had been a subject of controversy during her tenure, with allegations of an “autocratic and punishing” culture that drove a significant exodus of employees. Co-founder and board chairman David Cohen will replace her as CEO.

This leadership change at Techstars underscores the importance of fostering a positive and inclusive work culture. A toxic work environment can have detrimental effects on employee morale, productivity, and overall company success. It is essential for leaders to prioritize creating a supportive and respectful workplace that values the well-being and growth of its employees.

**Slack uses user data to train AI services**

Slack has come under scrutiny for using its customers’ data to train some of its new AI services. Users who do not want their data used for training must email Slack to opt out. The policy raises concerns about user privacy and data usage in the context of AI development.

As AI technology becomes more prevalent, it is crucial for companies to be transparent about their data practices and obtain proper consent from users before using their data for training models. Safeguarding user privacy should be a top priority to maintain trust and ensure ethical AI development.

**Meta’s lack of diversity in AI council**

Meta, the parent company of Facebook, recently announced its new AI advisory council, which is composed entirely of white men. This lack of diversity is concerning, as it reflects a broader issue within the AI industry. Women and people of color have played a key role in the AI revolution but continue to be overlooked in leadership positions.

Diverse perspectives are essential in AI development to ensure that biases are identified and mitigated. Without representation from different backgrounds and experiences, AI systems can perpetuate existing inequalities and biases. It is crucial for companies like Meta to prioritize diversity and inclusion in their decision-making processes to create more equitable AI technologies.

**OpenAI’s Superalignment team resignations**

OpenAI’s Superalignment team, responsible for developing ways to govern and steer “superintelligent” AI systems, experienced several resignations due to limitations and restrictions imposed by the company. The team, including co-lead Jan Leike, felt that their work was being hindered and that they lacked the necessary resources to fulfill their mission.

This situation highlights the challenges of ensuring responsible AI development and governance. It is crucial for organizations to provide the necessary support and resources to teams working on AI safety and alignment. Building robust frameworks for governing AI systems is essential to prevent unintended consequences and ensure the ethical and responsible use of AI technology.
