Google Releases Gemini 2.0 Flash Experimental For Users, Announces Improvements To Project Astra

TL;DR

  • With features like native image and audio output, Gemini 2.0 is now available for developers and trusted testers.
  • Google used several benchmarks to show how the new Gemini 2.0 Flash Experimental model is more powerful than the Gemini 1.5 Pro 002.
  • To access Gemini 2.0, users can head to the Gemini web chat or mobile app and select the 2.0 Flash Experimental option from the drop-down menu.

In December 2023, the Alphabet-owned company launched Gemini 1.0, its first natively multimodal language model. In 2024, it followed up with Gemini 1.5, which added a much longer context window. Now, Google has revealed its “most capable model yet,” Gemini 2.0.

Gemini 2.0 Offers Native Image And Audio Output

With features like native image and audio output, Gemini 2.0 is now available to developers and trusted testers. Regular users, meanwhile, can try the Gemini 2.0 Flash Experimental model via the publicly available Gemini chatbot. For those wondering, Gemini 2.0 Flash supports multimodal inputs such as images, video, and audio.

It also supports multimodal output, such as natively generated images mixed with text and steerable, multilingual text-to-speech (TTS) audio. Released in the era of agentic AI, the model can also call tools such as Google Search, code execution, and third-party user-defined functions.
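
Google frames these tool-calling abilities around the Gemini API for developers. As a rough illustration only, here is a minimal sketch of how a developer might register a user-defined function as a tool using the google-generativeai Python SDK; the model name gemini-2.0-flash-exp, the get_device_price helper, and its stub data are assumptions for the sake of the example and are not taken from Google’s announcement.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Hypothetical user-defined tool the model is allowed to call.
def get_device_price(device_name: str) -> dict:
    """Return stub pricing data for a device (illustrative only)."""
    return {"device": device_name, "price_inr": 79999}

# Model name assumed from Google's announcement of the experimental release.
model = genai.GenerativeModel(
    model_name="gemini-2.0-flash-exp",
    tools=[get_device_price],
)

# Automatic function calling lets the SDK run the tool and feed its result
# back to the model before returning the final text answer.
chat = model.start_chat(enable_automatic_function_calling=True)
response = chat.send_message("How much does the Pixel 9 Pro cost right now?")
print(response.text)
```

The same chat interface also accepts image, audio, and other media parts alongside text, which is how the multimodal inputs mentioned above would typically be supplied.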

Gemini 2.0 Flash Experimental Outperforms Gemini 1.5 Pro 002 Model

Google used several benchmarks to show how the new Gemini 2.0 Flash Experimental model is more powerful than Gemini 1.5 Pro 002. For instance, the new model beats Gemini 1.5 Pro 002 on MMLU-Pro (which tests models with higher-difficulty questions across multiple subjects), Natural2Code (which evaluates code generation across Python, Java, C++, JS, and other languages), FACTS Grounding (which judges the ability to give factually accurate responses grounded in provided source material), and MATH (which tests the model on challenging math problems).

The only two benchmarks where Gemini 1.5 Pro 002 still performs better are MRCR (1M), a long-context comprehension test run over a one-million-token context, and CoVoST2 (21 languages), which measures speech-translation quality. Besides Gemini 2.0 Flash Experimental, other Gemini 2.0 models will arrive in 2025.

How To Access Gemini 2.0 Flash Experimental?

To access Gemini 2.0, users can head to the Gemini web chat or mobile app and select the 2.0 Flash Experimental option from the drop-down menu. “With this new model, users can experience an even more helpful Gemini assistant,” Google says in its official press release. The company also plans to extend the Gemini 2.0 model to other products early next year.

With Gemini 2.0 Flash Experimental, Google also announced improvements to Project Astra. Now, the agentic AI can converse in multiple languages (including mixed languages) and better understand accents and uncommon words. With Gemini 2.0, Project Astra can now use Google Search, Maps, and Lens, making it a more helpful AI assistant in users’ day-to-day lives.

Updates To Project Astra, Project Mariner, And Jules

Google has also enhanced Project Astra’s memory. It can now remember up to 10 minutes of in-session conversation and can recall past conversations to deliver more personalized results. Last but not least, the company has improved Project Astra’s latency, allowing it to stream and understand audio at roughly the speed of human conversation.

Alongside Gemini 2.0 Flash Experimental and Project Astra, Google also shared details about Project Mariner, an early research prototype that understands and reasons across the information on a user’s browser screen, from the pixels to the text, images, and code. Working through an experimental Chrome extension, it can then act on that information to complete tasks for the user. Finally, Google is also working on Jules, an experimental AI agent that helps developers with coding tasks.

Shikhar Mehrotra
Shikhar Mehrotra is a seasoned technology writer and reviewer with over five years of experience covering consumer tech across India and global markets. At Smartprix, he has authored more than 1,700 articles, including news stories, features, comparisons, and product reviews spanning automobiles, smartphones, chipsets, wearables, laptops, home appliances, and operating systems. Shikhar has reviewed flagship devices such as the iPhone 16, Galaxy S25+, and Sennheiser HD 505 Open-Ear headphones. He also contributes regularly to Smartprix’s growing automotive section.

With a deep understanding of both iOS and Android ecosystems, Shikhar specializes in daily tech news, how-to explainers, product comparisons, and in-depth reviews. His DSLR photography in product reviews is recognized as among the best on the team.

Before joining Smartprix, Shikhar wrote for leading publications including Forbes Advisor India, Republic World, and ScreenRant. He holds a Bachelor of Arts in Journalism and Mass Communication from Amity University, Lucknow.
