At Google I/O 2025, Google announced a major wave of Gemini AI chatbot updates, introducing new capabilities across mobile, browser, and subscription tiers. These enhancements aim to strengthen Gemini’s role as a powerful multimodal assistant and deepen its utility across Google’s ecosystem.
One standout feature rolling out now is Gemini Live’s camera and screen-sharing support, available to all iOS and Android users. Built on Google’s Project Astra, the feature lets users hold real-time voice conversations while streaming visuals from their phone camera or screen to the AI. Imagine pointing your phone at a historical landmark and asking Gemini about its architecture—responses arrive nearly instantaneously.
Soon, Gemini Live will tap directly into apps like Google Maps, Calendar, and Tasks. That means users could verbally request directions, create events, or build a to-do list, all from the same AI-driven experience. These moves further position Gemini as a modern digital assistant rivaling ChatGPT, Apple’s Siri, and Microsoft’s Copilot.
Google reports Gemini now serves 400 million monthly active users, and it’s leaning into premium plans to drive growth. It’s rebranding Gemini Advanced as Google AI Pro ($20/month) and launching Google AI Ultra ($250/month), which offers advanced perks like exclusive model access, higher rate limits, and early feature previews. Google AI Ultra subscribers can now try out the Veo 3 video model, which can generate audio natively based on scene content.
Research Tools, Education Features, and Better Responses
Another key part of the Gemini AI chatbot updates is the expansion of Deep Research, the tool designed for long-form reports. Users can now upload private PDFs and images, which Gemini will cross-reference with public data to produce detailed, personalized insights. Coming soon, Deep Research will also connect with Gmail and Google Drive, allowing even more context-aware reports.
Meanwhile, free users get access to the upgraded Imagen 4 image model, which improves text rendering within images and overall visual quality. Google is also making Gemini 2.5 Flash the platform’s new default model, promising more accurate responses with reduced latency.
For students and learners, Gemini is becoming more personalized. It can now generate custom quizzes based on weak areas, analyze incorrect responses, and create action plans to help users master difficult topics. The goal is to turn Gemini into an adaptive tutor as well as an assistant.
Lastly, Pro and Ultra subscribers in the U.S. whose Chrome language is set to English will now see Gemini appear directly in their browser. Users can ask Gemini to summarize or answer questions about the web page they’re currently viewing, making the browsing experience smarter and more responsive.
These updates reflect Google’s broader push to redefine how users interact with AI—by making it more useful, personal, and seamlessly integrated into everyday tools.