NIST GenAI Program Launched to Assess Generative AI Technologies and Combat Deepfakes

The National Institute of Standards and Technology (NIST), part of the U.S. Commerce Department, has launched a new program called NIST GenAI to evaluate generative AI technologies, including those that generate text and images. NIST GenAI will release benchmarks, help develop content authenticity detection systems to combat deepfakes, and encourage the creation of software that identifies the source of misleading AI-generated content.

The program will issue a series of challenge problems to measure the capabilities and limitations of generative AI technologies. These evaluations will help promote information integrity and guide the safe and responsible use of digital content. NIST GenAI’s first project focuses on building systems that can accurately distinguish between human-created and AI-generated media, starting with text.

While services exist that claim to detect deepfakes, they have proven unreliable, especially for text. NIST GenAI invites teams from academia, industry, and research labs to submit AI systems that generate content (generators) or systems designed to identify AI-generated content (discriminators). Generators must produce 250-word summaries based on a given topic and set of documents, while discriminators must determine whether a given summary is potentially AI-written.
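To make the discriminator side of the task concrete, here is a deliberately naive sketch of the kind of classifier a team might submit, assuming only that the input is a text summary and the output is a yes/no judgment. This is purely illustrative, not NIST's evaluation harness: the "burstiness" heuristic (human prose tends to vary more in sentence length than machine text) and the threshold value are hypothetical assumptions, and real detectors of this sort have proven unreliable, as noted above.

```python
# Illustrative sketch only: a naive baseline discriminator, NOT NIST's
# actual evaluation method. The feature choice and threshold below are
# hypothetical assumptions for demonstration.
import re
import statistics


def sentence_lengths(text: str) -> list[int]:
    """Word counts per sentence, splitting on ., !, and ?."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]


def burstiness(text: str) -> float:
    """Standard deviation of sentence length. Human prose often varies
    more than machine-generated text (a rough heuristic, not reliable)."""
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)


def looks_ai_written(summary: str, threshold: float = 4.0) -> bool:
    """Flag a summary as potentially AI-written when its sentence
    lengths are unusually uniform. The threshold is an arbitrary guess."""
    return burstiness(summary) < threshold


uniform = ("The cat sat on the mat. The dog lay on the rug. "
           "The bird sat on the perch.")
varied = ("Deepfakes are spreading. Regulators, researchers, and platforms "
          "alike have struggled for years to keep pace with them. "
          "Detection remains hard.")
print(looks_ai_written(uniform), looks_ai_written(varied))  # True False
```

A real submission would of course use far stronger signals (model-based perplexity scores, stylometric features, or trained classifiers), but the interface is the same: text in, a potentially-AI-written verdict out.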

To ensure fairness, NIST GenAI will provide the data needed to test the generators. However, systems built on publicly available data in ways that violate applicable laws and regulations will not be accepted. Registration for the pilot study begins on May 1, with the first round closing on August 2. Final results are expected to be published in February 2025.

The launch of NIST GenAI is a response to President Joe Biden’s executive order on AI, which emphasizes greater transparency from AI companies and establishes new standards for labeling AI-generated content. It is also the first announcement from NIST’s AI Safety Institute since the appointment of Paul Christiano, a former OpenAI researcher.

Christiano’s appointment has raised concerns among critics, including scientists within NIST, due to his “doomerist” views on AI development. Despite the controversy, NIST states that NIST GenAI will contribute to the work of the AI Safety Institute.

The need for NIST GenAI and its deepfake-focused study is clear as the volume of AI-generated misinformation and disinformation continues to grow rapidly. A recent study by Clarity, a deepfake detection firm, found a 900% increase in deepfakes created and published compared to the same period last year. The trend has alarmed Americans: 85% say they are worried about the spread of misleading deepfakes online, according to a YouGov poll.

The launch of NIST GenAI is a step toward addressing the challenges posed by AI-generated content and deepfakes. By benchmarking the capabilities and limitations of generative AI, NIST aims to promote information integrity and guide the safe, responsible use of digital content, and the program's benchmarks and content authenticity detection systems will play a crucial role in combating AI-generated misinformation and disinformation. As AI continues to advance, establishing such standards and safeguards is essential to protect against the technology's potential misuse.