The news: Google will add real-time video and screen-sharing features to its Gemini AI. Screenshare will let users show Gemini what’s on their phone screen and ask questions about it, while Live Video will offer AI insights on users’ surroundings via their smartphone cameras, per TechCrunch.
Google demoed an online shopper asking Gemini to suggest clothing to pair with a pair of jeans they were browsing. It’s the latest example of AI moving beyond text-based prompts to become a contextual voice-and-video assistant for consumers.
AI is coming out of its shell: AI features are becoming more context-aware. In Gemini’s case, users can make queries and comparisons without opening multiple search windows or apps. Here are examples from other companies: