Google’s Gemini Live AI assistant will show you what it’s talking about

Google is enhancing its Gemini Live AI assistant with new features that aim to make interactions more intuitive and engaging. The key highlights include:

1. Visual guidance: Gemini Live will be able to highlight specific items on your screen while sharing your camera, helping you identify the right tool or object during a conversation.

2. Expanded app integration: Gemini Live will soon integrate with messaging, phone, and clock apps, allowing you to seamlessly transition between conversations and take actions, such as sending a message.

3. Improved audio model: Google has updated Gemini Live's audio model to better mimic human speech patterns, including intonation, rhythm, and pitch. The assistant will now adjust its tone and speaking speed based on the context of the conversation.

4. Personalized narratives: Users will be able to ask Gemini Live for dramatic retellings of stories from the perspective of different characters or historical figures, with the assistant potentially adopting a unique accent or tone to enhance the narrative.

These new features are set to roll out to the recently announced Pixel 10 devices on August 28th, with broader availability on other Android and iOS platforms in the coming weeks.
Note: This is an AI-generated summary of the original article. For the full story, please visit the source link below.