Google’s Gemini Live: Natural, Hands-Free Conversations with an AI Chatbot

Gemini Live: Google’s Answer to OpenAI’s Advanced Voice Mode

During Google’s recent Made By Google event in Mountain View, California, the tech giant unveiled Gemini Live, a new feature that allows users to engage in semi-natural spoken conversations with an AI chatbot. Powered by Google’s latest large language model, Gemini Live aims to provide a more intuitive and hands-free experience compared to traditional text-based interactions or voice assistants like Siri or Alexa.

While OpenAI showcased a similar feature earlier with Advanced Voice Mode for ChatGPT, Google is the first to roll out a finished version of this kind of conversational AI tool. Gemini Live takes advantage of low latency, responding to user queries in real time, typically in under two seconds. The AI chatbot can also quickly adapt and pivot mid-conversation, even when it is interrupted.

One notable advantage of Gemini Live is the range of voice options it offers. Users can choose from ten different voices, each created with the help of voice actors to sound more humanlike. In contrast, OpenAI’s ChatGPT provides only three voice options. This variety enhances the user experience and adds a personal touch to the interactions.

During a demonstration, a Google product manager asked Gemini Live to find family-friendly wineries near Mountain View, complete with outdoor areas and nearby playgrounds. Despite the complexity of the request, Gemini Live successfully recommended Cooper-Garrod Vineyards in Saratoga, which met all the specified criteria. There was a minor flaw, however: the AI mentioned a playground it called Henry Elementary School Playground, supposedly just ten minutes from the vineyard. While there are other playgrounds nearby, the nearest Henry Elementary School is actually a two-hour drive away.

One of the highlighted features of Gemini Live is its ability to handle interruptions during conversations, which Google says gives users more control over the exchange. In practice, this feature did not work flawlessly during the demonstration: there were moments when the AI and the product manager spoke over each other, resulting in miscommunication.

Google has placed certain limitations on Gemini Live to avoid copyright issues. The company explicitly states that the AI chatbot cannot sing or mimic voices beyond the ten options provided. Additionally, Gemini Live cannot yet detect emotional intonation in a user’s voice, a capability that OpenAI showcased during its demo.

Despite these limitations, Gemini Live presents an exciting opportunity to explore subjects more deeply through a more natural, interactive interface. Google acknowledges that Gemini Live is just a stepping stone toward Project Astra, the fully multimodal AI model unveiled at Google I/O. While Gemini Live currently supports only voice conversations, Google plans to expand its capabilities with real-time video understanding in the future.

In conclusion, Gemini Live offers users a more intuitive and engaging way to interact with AI chatbots. With its low latency, wide range of voices, and the potential for future enhancements, Gemini Live demonstrates Google’s commitment to advancing conversational AI technology.