Islamabad, Mar 26, 2025: Google’s AI assistant, Gemini Live, is receiving a groundbreaking upgrade, transforming smartphones into intelligent, visually aware devices.
This latest enhancement allows Gemini to analyze both your phone’s screen and camera feed in real time, delivering an advanced level of visual assistance.
A recent discovery by a Reddit user revealed that Google has quietly begun rolling out these cutting-edge features, which are linked to its highly anticipated Project Astra.
Previously, Gemini could only process static screenshots, but with this upgrade, the AI assistant continuously observes and interprets live content on your screen.
This real-time analysis opens new possibilities for more interactive and responsive assistance.
Users can activate the feature using the “Share screen with Live” button, enabling Gemini to monitor on-screen activities seamlessly.
Whether it’s navigating apps, understanding displayed content, or providing instant support, this upgrade significantly enhances user experience.
Additionally, the AI assistant now extends its vision to your phone’s camera, allowing it to identify objects, colors, and surroundings in real time.
This added capability paves the way for practical applications, from helping users understand unfamiliar objects to aiding accessibility.
Availability and Access
Google is currently introducing this feature to Gemini Advanced subscribers on the Google One AI Premium plan, which costs $20 per month.
The rollout appears to be expanding gradually, with reports confirming functionality on devices such as Xiaomi smartphones.
Google has also indicated that Pixel and Samsung Galaxy S25 users may receive early or enhanced access to these new capabilities.
How Does Gemini Compare?
Competitors such as Microsoft Copilot, ChatGPT, Grok, and Hugging Face’s HuggingSnap offer similar AI-driven visual analysis, but those tools live in standalone third-party apps.
Google’s decision to build these capabilities directly into Android gives it a distinct advantage, making its AI assistant more accessible and more seamlessly embedded in the everyday user experience.
As Google continues refining Gemini Live, its ability to “see” and interpret real-world visuals could redefine how users interact with their devices.
The integration of real-time vision-based AI assistance within smartphones positions Google at the forefront of AI-driven mobile technology, offering a glimpse into the future of intelligent digital interaction.