Google's Made by Google event showcases the new assistant experience of Gemini AI in real time.

At its Made by Google event on August 13, Google not only introduced its most extensive Pixel phone lineup to date but also ran live demonstrations. Throughout the event, presenters conversed with the Gemini AI assistant in real time, aiming to dispel doubts about the practicality of AI and hoping it will spark consumer demand for Pixel phones.

Live Demonstration of Gemini AI Function in Real Time

At the end of last year, Google introduced its powerful Gemini AI, demonstrating that it could excel in roles such as advisor, tutor, and assistant. However, internal employees later revealed that Google's demonstration videos had been edited, and that actual operation was not as smooth as depicted.


During the event, Google ran live demonstrations in which presenters communicated with the AI assistant conversationally. When product manager Dave Citron asked the assistant to check his calendar, using a photo, to see whether he was free for a friend's performance, it took three attempts to succeed. Despite these initial hiccups, Gemini AI performed well on tasks such as drafting emails and brainstorming.

In addition to Google's own Pixel phones, demonstrations ran on Samsung Galaxy and Motorola devices, showcasing how thoroughly Google AI is integrated into the Android ecosystem.

Google Emphasizes Security and Privacy

Regarding security and privacy, Google emphasized that AI access to user data would require the user's permission, and that all computation would run either in Google Cloud or directly on-device via Gemini Nano, in which case data never leaves the phone. Furthermore, because the AI technologies involved are all developed by Google itself, there is no concern about data passing to third parties. This sets Google apart from Apple and Microsoft, both of which collaborate with OpenAI or other AI vendors.

Google Aims for AI to Ignite Demand for Pixel Phones

According to a report by CommonWealth Magazine, Gemini Live's voice conversation mode closely resembles ChatGPT's voice feature. However, Gemini Live requires a paid Gemini Advanced subscription and currently supports conversations in English only.

This year's Made by Google event was held two months earlier than usual, not only to get ahead of Apple's launch but also to demonstrate Google's commitment to live demonstrations despite the risk of minor setbacks.

Google showcased four models in the Pixel 9 series, including the entry-level Pixel 9, the high-end Pixel 9 Pro, Pixel 9 Pro XL, and the foldable Pixel 9 Pro Fold, all equipped with the Tensor G4 chip. The Tensor G4 was developed in collaboration with Google's DeepMind team, aiming to leverage the advantages of the Gemini multimodal model.

Google hopes that the new AI features will ignite consumers' enthusiasm for Pixel phones, disrupting the smartphone market dominated by Apple and Samsung. However, market rumors suggest that compared to the previous generation of smartphone chips, the Tensor G4 chip in this release does not offer significant improvements and could even be seen as a "transitional model."

The market is more eagerly anticipating the next-generation Tensor G5 chip. Media outlet Android Authority, speculating from public trade information, reports that the Tensor G5 will be manufactured by TSMC, promising a noticeable upgrade. Since the first four generations of Tensor chips have all been manufactured by Samsung, switching to TSMC could mark a significant turning point in Google's delicate relationship of cooperation and competition with Samsung.