What a Harris presidency could mean for U.S. AI regulation

With President Joe Biden announcing his decision not to seek reelection and endorsing Vice President Kamala Harris as the Democratic Party’s nominee, many are wondering what her candidacy could mean for AI regulation in the US. Both Biden and Harris have said they believe in protecting the public while advancing innovation, and Biden previously issued an executive order directing new standards for safe and trustworthy AI development.

Harris has been vocal about the need for stronger government oversight in the absence of federal AI regulation, arguing that some technology companies prioritize profit over the well-being of their customers and the stability of democracies. AI policy experts expect a Harris administration to maintain continuity with current AI policy rather than dismantle it, as Trump’s allies have proposed.

Lee Tiedrich, an AI consultant at the Global Partnership on Artificial Intelligence, believes that Biden’s endorsement of Harris could increase the chances of maintaining continuity in US AI policy. Tiedrich highlights the 2023 AI executive order and the focus on multilateralism through international organizations like the United Nations, G7, and OECD.

Sarah Kreps, a professor of government at Cornell, does not anticipate Harris rolling back any of the AI safety protocols established under Biden. However, she believes that a Harris administration might take a less top-down regulatory approach to address concerns raised by certain segments of the tech industry.

Krystal Kauffman, a research fellow at the Distributed AI Research Institute, agrees that Harris will likely continue Biden’s work in addressing the risks associated with AI use and increasing transparency. However, she hopes that Harris will include the voices of data workers in policy discussions. Kauffman believes that closed-door meetings with tech CEOs should not be the sole basis for formulating policy.

Overall, it seems that a Harris administration would prioritize AI regulation and oversight while seeking input from various stakeholders. The focus would be on striking a balance between protecting the public and fostering innovation in the AI industry.

In other news, Meta has released Llama 3.1 405B, a text-generating and -analyzing model with 405 billion parameters. Adobe has introduced new Firefly tools that give graphic designers more ways to use its AI models. An English school has been reprimanded for using facial recognition technology without obtaining students’ specific opt-in consent. Cohere, a generative AI startup that builds customized AI models for enterprises, has raised $500 million, with backing from companies like Cisco and AMD. And TechCrunch interviewed Lakshmi Raman, the CIA’s director of AI, about the agency’s use of AI and the responsible deployment of new technologies.

Researchers have been exploring alternatives to the transformer, the architecture underlying most of today’s generative AI models, which becomes computationally expensive on long inputs. State space models (SSMs) show promise as a more efficient architecture for processing long sequences of data. Mamba-2, one of the strongest SSMs to date, can handle larger inputs than comparable transformer-based models while remaining competitive in performance.
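The efficiency argument comes down to how an SSM consumes its input: instead of attending over every pair of tokens, it folds the sequence into a fixed-size recurrent state, so cost grows linearly with sequence length rather than quadratically. The toy diagonal SSM below is a minimal illustrative sketch, not Mamba-2’s actual parameterization (which adds input-dependent selection and hardware-aware kernels); all names here are made up for illustration.

```python
def ssm_scan(A, B, C, xs):
    """Run a toy diagonal linear state space model over a sequence.

    Per step:  h_t = A * h_{t-1} + B * x_t   (elementwise; A, B diagonal)
               y_t = C . h_t                 (dot product readout)

    Cost is O(len(xs) * len(A)) with constant memory, versus the
    O(len(xs)^2) pairwise cost of transformer attention.
    """
    h = [0.0] * len(A)
    ys = []
    for x in xs:
        h = [a * hi + b * x for a, hi, b in zip(A, h, B)]
        ys.append(sum(c * hi for c, hi in zip(C, h)))
    return ys

# Toy usage: 4-channel state, scalar inputs. A acts as a decay,
# so an impulse fades geometrically through the state.
print(ssm_scan([0.9] * 4, [1.0] * 4, [0.25] * 4, [1.0, 0.0, 0.0]))
```

Because the state has fixed size, the model never needs to look back at earlier tokens directly; everything it retains about the past lives in `h`, which is what makes very long inputs tractable.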

Furthermore, a team of researchers has developed test-time training (TTT) models, a new type of generative AI model whose internal state is itself a small model that keeps learning as the sequence is processed, allowing it to reason over millions of tokens. The researchers argue TTT models could eventually scale to billions of tokens and power next-generation generative AI applications.
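In broad strokes, a TTT layer replaces a fixed hidden state with the weights of a tiny inner model, which takes one small self-supervised gradient step on each incoming token. The sketch below is a deliberately simplified toy under that assumption, with a scalar inner model and a reconstruction loss chosen purely for illustration; it is not the architecture from the paper.

```python
def ttt_layer(xs, lr=0.1):
    """Toy test-time-training layer.

    The "hidden state" is the weight w of a one-parameter linear model.
    For each token x, w takes a gradient step on the self-supervised
    reconstruction loss l(w; x) = (w*x - x)**2, then the layer emits
    the inner model's prediction w*x. Memory stays constant no matter
    how long the sequence is.
    """
    w = 0.0
    outs = []
    for x in xs:
        grad = 2.0 * (w * x - x) * x   # d/dw of (w*x - x)**2
        w -= lr * grad                  # inner model learns at test time
        outs.append(w * x)
    return outs

print(ttt_layer([1.0, 1.0]))
```

The constant-size, continually updated state is what lets this family of models target very long contexts: like an SSM, the cost per token does not grow with the length of the history.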

In the world of generative AI startups, Stability AI has faced controversy over its restrictive terms of use and licensing policies. However, the company has recently announced adjustments to allow for more liberal commercial use and clarify its licensing terms.

The ongoing legal challenges and controversial licensing terms in the generative AI industry highlight the complexity and lack of consensus in regulating AI. It remains to be seen how the industry will navigate these challenges and work towards clarity in the future.