

‘Godfather of AI’ and fellow scientist warn of future risks

Geoffrey Hinton has grown increasingly concerned about the risks posed by advanced AI systems, particularly those that may surpass human intelligence.

Geoffrey Hinton, often referred to as the “godfather of artificial intelligence,” and John Hopfield, an American physicist, were jointly awarded the 2024 Nobel Prize in Physics. The Royal Swedish Academy of Sciences recognized their contributions to AI and neural networks, foundational technologies that have led to the rapid expansion of machine learning today. While their groundbreaking work has revolutionized the field of artificial intelligence, both laureates voiced concerns about the potential risks posed by AI’s continued advancement.

Groundbreaking Partnership in AI

Hinton, 76, and Hopfield, 91, were honored for their separate but complementary contributions to the field of neural networks. Hopfield is renowned for devising the Hopfield network, an artificial neural network capable of mimicking how the brain stores and retrieves memories. This innovative model laid the groundwork for more advanced developments in AI.
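As a rough illustration of that idea (a minimal Python sketch for readers, not the laureates' own code), an associative memory of this kind stores patterns in a symmetric weight matrix and then recovers a stored pattern from a corrupted cue:

```python
# Illustrative Hopfield-style associative memory (simplified sketch).
import numpy as np

def store(patterns):
    """Build the weight matrix from +/-1 patterns (Hebbian outer products)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)           # no self-connections
    return W / n

def recall(W, cue, steps=10):
    """Repeatedly update the state until it settles into a stored memory."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1                # break ties consistently
    return s

# Store one 8-bit pattern, then recover it from a noisy version of itself.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = store(pattern[None, :])
noisy = pattern.copy()
noisy[:2] *= -1                      # corrupt two bits of the cue
print(recall(W, noisy))              # settles back to the stored pattern
```

Flipping a couple of bits in the cue and watching the network settle back to the original pattern captures, in miniature, what "retrieving a memory" means here.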


Building upon Hopfield’s work, Hinton co-developed the Boltzmann machine in the 1980s, an artificial neural network that introduced randomness into the learning process. This development paved the way for deep learning and other AI applications that power many of today’s systems. Hinton’s work with neural networks has been instrumental in enabling the creation of AI models capable of understanding and generating human language, classifying images, and other machine learning tasks.
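That randomness can also be sketched in a few lines: in a Boltzmann machine, each binary unit switches on with a probability set by its energy gap, rather than deterministically as in a Hopfield network. The example below is a simplified illustration using made-up random weights, not the original formulation:

```python
# Illustrative stochastic (Gibbs) updates, the core idea of a Boltzmann machine.
import numpy as np

rng = np.random.default_rng(0)

n = 6                                          # number of binary units
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2                              # symmetric connection weights
np.fill_diagonal(W, 0)                         # no self-connections
b = rng.normal(scale=0.1, size=n)              # unit biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(s, temperature=1.0):
    """Resample each unit in turn from its conditional distribution."""
    for i in range(n):
        gap = W[i] @ s + b[i]                  # energy difference for unit i
        p_on = sigmoid(gap / temperature)
        s[i] = 1.0 if rng.random() < p_on else 0.0
    return s

# Run the sampler: the chain wanders among low-energy configurations
# instead of settling into a single fixed point.
s = rng.integers(0, 2, size=n).astype(float)
for _ in range(100):
    s = gibbs_step(s)
print(s)
```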

Growing Concerns

Despite his pioneering role in AI, Hinton has emerged as one of its most vocal critics in recent years, increasingly concerned that advanced systems may one day surpass human intelligence. His warnings echo similar sentiments from other technology leaders, including Elon Musk and Meta’s Mark Zuckerberg, who have called for greater regulation of AI.

“It will be comparable with the industrial revolution,” Hinton remarked when discussing the potential impact of AI. “But instead of exceeding people in physical strength, it’s going to exceed people in intellectual ability. We have no experience of what it’s like to have things smarter than us.” He stressed that while AI has the potential to revolutionize industries like healthcare, the possibility of AI systems getting out of control remains a significant concern.

Hopfield’s Parallel Warnings

John Hopfield shared these concerns, reflecting on the rise of other powerful technologies during his lifetime. Speaking by video link from Princeton University, Hopfield likened AI to nuclear physics and biological engineering—fields that have brought immense benefits but also dangerous consequences. “As a physicist, I’m very unnerved by something which has no control,” he stated.

Hopfield warned that the current understanding of deep-learning systems is still limited. Despite the marvels AI has achieved, scientists do not fully comprehend how these systems function, creating the risk of unforeseen, possibly dangerous consequences. Both laureates called for a deeper understanding of AI’s inner workings to prevent catastrophic scenarios.


Hinton and Hopfield’s warnings emphasize the need for greater investment in AI safety research. Hinton advocated for the brightest minds to focus on AI safety, urging governments to compel large tech companies to provide the necessary resources. “Right now there are 99 very smart people trying to make AI better, and one very smart person trying to stop it from taking over,” Hinton said. “Maybe you want to be more balanced.”