Google is making Gemini AI part of everything you do on your smartphone – here’s how
Google showed off a lot of impressive hardware at this year’s Made by Google event, including new Pixel smartphones, earbuds, and smartwatches. But the company’s Gemini AI model was arguably the real star, playing a central or supporting role in nearly every feature unveiled.
We’ve rounded up the most striking, interesting, and quirky ways Gemini is part of Google’s mobile future.
Gemini Live
Gemini’s most notable introduction came in the form of Gemini Live, which, as the name suggests, brings the AI assistant to life and makes it act much more human. Not only can you chat with Gemini informally without formal commands, but you can also interrupt its response and redirect the conversation without having to restart it. What’s more, with ten new voice options and a better speech engine, Gemini Live feels more like a phone call with a friend or personal assistant than its more robotic predecessors.
Pixel screenshots
Screenshots may be a mundane name, but the app is a key element of the new Pixel 9 smartphone series. It uses the Gemini Nano AI model built into the phone to automatically turn your screenshots into a searchable database, processing each image much as a human would.
Let’s say you screenshot a sign with event details. When you open that image, Gemini will offer to add the event to your calendar, map directions to the location, or even open a web page listed on the sign. The AI also improves more general searches, like finding images of a spotted dog or a brick building.
Pixel Studio
Google is using Gemini and its new smartphones to get a head start in the fast-growing AI image generation market with its Pixel Studio app. This text-to-image app pairs the on-device Gemini Nano model with cloud-based models like Imagen 3 to render images faster than standard web portals.
The app also includes a menu for changing the image style. The biggest caveat is that it won’t generate human faces. Google didn’t say whether that restriction stems from the controversy over its image generator earlier this year, but it may simply be caution.
Add Me
Another image-based AI feature Google announced is almost the inverse of the face-shy Pixel Studio. Add Me uses AI to create a (mostly) seamless group photo that includes the person taking the photo.
All the photographer has to do is hand the phone to someone else; the AI will then guide the new photographer through lining up a second shot and compositing the two images into one that includes everyone.
Pixel weather and more
The least necessary use of Gemini’s advanced AI, and perhaps the most used, is probably the Pixel Weather app. The Gemini Nano AI model produces customized weather reports that match what the user wants to see in the app, simplifying customization in subtle but very real ways. There were plenty of other smaller AI highlights during the presentation, too.
For example, Android users can overlay Gemini on their screens and ask questions about what’s visible. Meanwhile, a new Research with Gemini tool will tailor research reports to specific questions, likely for academic settings. Other features aren’t out yet, but Android phones will soon be able to share what they find using a feature called Circle to Search.