
Generative AI Floods Academic Journals with Spam, Highlighting Potential Risks to Research Evaluation

The rise of generative AI is causing a new problem in academic publishing – AI-generated spam articles. Three journals published by Addleton Academic Publishers have been found to consist almost entirely of AI-generated content. The articles follow a template and are filled with buzzwords like “blockchain” and “deep learning.” More concerning, these fake journals rank in the top 10 for philosophy research on CiteScore, a widely used citation metric. The journals extensively cross-cite one another, inflating their rankings. This manipulation could have real consequences for researchers, since such rankings often influence academic awards, promotions, and hiring decisions. It highlights how easily generative AI can exploit existing evaluation systems. While some may argue that the real problem lies with flawed metrics like CiteScore, it’s clear that generative AI needs oversight to prevent further abuse.
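To see why cross-citation is so effective, note that CiteScore-style metrics are roughly citations received over a counting window divided by documents published in that window. Here is a minimal sketch with made-up numbers (not Scopus's actual pipeline) showing how a small ring of cooperating journals can multiply the score:

```python
def citescore_like(citations_received: int, documents_published: int) -> float:
    """Citations per document over the counting window,
    the rough shape of a CiteScore-style metric."""
    return citations_received / documents_published

# Hypothetical baseline: a journal with 200 papers earning 150 organic citations.
organic = citescore_like(150, 200)

# Same journal after two partner journals each publish 40 templated papers
# that cite it 10 times apiece, adding 2 * 40 * 10 = 800 citations.
inflated = citescore_like(150 + 2 * 40 * 10, 200)

print(f"organic score:  {organic:.2f}")   # 0.75
print(f"inflated score: {inflated:.2f}")  # 4.75
```

With no new legitimate readership at all, the cross-citing cluster lifts the score by a factor of six, which is exactly the kind of jump that can push a spam journal into a top-10 ranking.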

In other news, DeepMind, Google’s AI research lab, is developing AI technology to generate soundtracks for videos. The AI takes a description of a soundtrack paired with a video and creates music, sound effects, and dialogue that match the content. This could revolutionize the film industry by automating the process of creating original soundtracks.

Researchers at the University of Tokyo have developed a robot called Musashi that can drive a small electric car. Equipped with cameras for “eyes,” Musashi can watch the road and check the mirrors as it navigates a test track. The work could pave the way for humanoid drivers that operate safely on public roads.

A new AI-powered search engine called Genspark has raised $60 million in funding. It uses generative AI to write custom summaries in response to search queries. The platform aims to compete with rivals like Perplexity by offering unique and personalized search results.

OpenAI’s ChatGPT, an AI-powered chatbot platform, has different pricing options depending on usage. To help users navigate the various subscription options, an updated guide to ChatGPT pricing has been created.

In research news, a group of researchers from Nvidia, USC, UW, and Stanford have developed an AI system called LLaDa that can resolve ambiguous driving situations by consulting the local driver’s handbook. By reading the driving manual for a specific region, the AI can generate appropriate responses to unexpected circumstances on the road. While it’s not a complete driving system, it offers an alternative approach to building one that can adapt to local rules anywhere.
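The core idea — look up the relevant local rule, then ask a language model what to do — can be sketched in a few lines. Everything below is illustrative: the handbook entries, `local_handbook`, and `build_prompt` are hypothetical names standing in for LLaDa's actual pipeline, and the LLM call itself is left out (the sketch only constructs the prompt a planner would answer):

```python
# Toy regional handbook: keyword -> rule text. A real system would
# ingest the full manual for the driver's current region.
local_handbook = {
    "roundabout": "Yield to traffic already circulating in the roundabout.",
    "honking": "Honking may signal an overtaking vehicle; hold your lane.",
    "flashing green": "A flashing green light permits a protected left turn.",
}

def build_prompt(situation: str) -> str:
    """Retrieve handbook rules whose keywords appear in the situation,
    then assemble a prompt for an LLM-based driving planner."""
    rules = [rule for key, rule in local_handbook.items()
             if key in situation.lower()]
    context = " ".join(rules) if rules else "No specific local rule found."
    return (f"Local rules: {context}\n"
            f"Situation: {situation}\n"
            f"What should the driver do?")

print(build_prompt("A car behind me is honking as I approach a roundabout."))
```

The design choice worth noting is that the driving policy itself stays fixed; only the retrieved rule text changes from region to region, which is what makes the approach portable.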

Runway, a company focused on generative AI tools for content creators, has unveiled its latest model, Gen-3 Alpha. Trained on a vast number of images and videos, Gen-3 can generate video clips from text descriptions and still images. It boasts improved generation speed and fidelity compared to previous models and offers more control over the structure, style, and motion of the videos. This technology has the potential to disrupt the film and TV industry by automating the creation of video content.

On the flip side, AI is facing setbacks in the fast-food industry. McDonald’s announced that it would remove automated order-taking technology from over 100 restaurant locations due to inaccuracies. The voice-recognition system, developed with IBM, was accurate only about 85% of the time, so human staff had to step in on a significant share of orders. This suggests that certain jobs, particularly those requiring comprehension of diverse accents and dialects, cannot yet be fully automated by AI.