Gemini AI Gets a Boost on Pixel 9 and Pixel Buds Pro 2
Google’s Gemini AI model was squarely in the spotlight Tuesday at the company’s Made by Google event, where the tech giant also introduced a lineup of new Pixel 9 phones, along with a smartwatch and earbuds. The executives who took the stage mentioned Gemini 115 times over the course of the 80-minute presentation.
That included mentions of the chatbot itself, as well as the following Gemini products:
- Gemini Advanced, the $20-a-month subscription service that provides access to Google’s latest AI model, Gemini 1.5 Pro
- Gemini Assistant, the AI assistant on Pixel devices
- Gemini Live, the conversational interface for Gemini
- Gemini Nano, the AI model for smartphones
There were also repeated references to “the Gemini era.”
Case in point:
“We’re fully in the Gemini era, with AI infused into almost everything we’re doing at Google, across our full tech stack,” Rick Osterloh, senior vice president of platforms and devices at Google, said at the Mountain View, California, event. “It’s all to bring you the most helpful AI.”
Google execs also talked up the theme of helpful AI as they highlighted how they think AI will change the way we use our devices. This comes as competitors like ChatGPT maker OpenAI also try to convince us to talk to chatbots and let AI do more of the heavy lifting in search and other daily activities, like checking dates on a calendar or messaging a friend. In Google’s case, more-powerful devices mean we can do more with generative AI beyond our laptops and tablets. But, as Google’s Dear Sydney ad mishap during the Paris Olympics demonstrated, there’s still a gap between what we’re willing to do with AI and what tech companies think we want from AI.
While we got a preview of most of Tuesday’s Gemini news at Google’s I/O developer event in May, there were two new hardware-specific updates worth highlighting:
Faster processing to fuel Gemini on Pixel
Generative AI can yield impressive results in creating images and crafting emails, essays and other writing, but it requires a lot of power. A recent study found that generating a single image with an AI model uses as much energy as fully charging your phone. Typically, that kind of computing power lives in data centers.
But when Pixel 8 devices came out in October, Google introduced its first AI-specific processor, the Tensor G3. This powerful silicon chip helps make on-device generative AI possible, with “on device” meaning the processing happens on your phone, not in a far-off and costly data center. Its successor, the Tensor G4, was developed with Google’s AI research laboratory DeepMind to help Gemini run on Pixel 9 devices and to power everyday activities like shooting and streaming videos with less of a hit to the battery.
Google calls Tensor G4 “our fastest, most powerful chip yet.” According to Shenaz Zack, senior director of Pixel product management, that means 20% faster web browsing and 17% faster app launching than with the Tensor G3.
She noted that the TPUs in the Tensor G4 can generate output on the device at a rate of 45 tokens per second. Here’s what that means:
TPUs are tensor processing units. They help speed up generative AI.
Tokens are pieces of words. AI models are like readers who need help: they break text down into these smaller pieces, called tokens, so they can better understand each part and then the overall meaning.
One token is the equivalent of about four characters in English. At 45 tokens per second, that works out to roughly 180 characters a second, or about three short sentences, as the quick sketch below shows.
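To make that conversion concrete, here’s a minimal back-of-the-envelope sketch in Python. The 45-tokens-per-second rate and the four-characters-per-token rule of thumb come from the figures above; the roughly 60-character average sentence length is an assumption used purely for illustration.

```python
# Back-of-the-envelope math for the Tensor G4's quoted output rate.
# 45 tokens/sec and ~4 characters per token come from the article;
# the ~60-character sentence length is an assumed figure for illustration.

TOKENS_PER_SECOND = 45    # output rate quoted for the Tensor G4's TPUs
CHARS_PER_TOKEN = 4       # rough English-language average
CHARS_PER_SENTENCE = 60   # assumed length of a short English sentence

chars_per_second = TOKENS_PER_SECOND * CHARS_PER_TOKEN        # 180
sentences_per_second = chars_per_second / CHARS_PER_SENTENCE  # 3.0

print(f"{chars_per_second} characters per second")
print(f"~{sentences_per_second:.0f} sentences per second")
```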
The Tensor G4 is the first processor to run Gemini Nano with Multimodality, the on-device AI model that helps your Pixel 9 phone better understand the text, image and audio inputs you give it.
Google has also upgraded the memory in its Pixel 9 devices, to 12 or 16 gigabytes depending on the model, so generative AI works quickly and the phones can keep up with future advances. At least until the next big thing comes along.
Eyes-free access to Gemini
Like the Pixel 9 family, the new Pixel Buds Pro 2 earbuds come with a dedicated processor, the Tensor A1 chip, which also powers AI functionality.
You can think of the earbuds as another audio interface for Gemini, but one without a screen. You can ask for information from your email, as well as for directions, reminders and song recommendations, but you won’t be able to take photos and ask questions about them.
To have a conversation with Gemini Live while wearing the Pixel Buds Pro 2, you first say, “Hey, Google, let’s talk live.”
There’s one caveat: You’ll need a Google One AI Premium subscription first. This $20-a-month plan provides access to Google’s latest AI models, as well as to Gemini in Google properties like Gmail and Docs, along with 2TB of storage.
Google is offering a free 12-month subscription to the Google One AI Premium plan to anyone who buys the Pixel 9 Pro, 9 Pro XL or 9 Pro Fold now.
“I found myself asking Gemini different types of questions than I do with my phone in front of me. My questions are a lot more open-ended,” said Sandeep Waraich, product management lead for Google Wearables, of using Gemini Live on the Pixel Buds Pro 2. “There are more walks and talks, longer sessions that are far more contemplative than not.”
That may be true, but as my CNET colleague David Carnoy pointed out, it looks like you’re wearing Mentos candies in your ears while you ask those different questions.
Source: CNET