TL;DR
- With features like native image and audio output, Gemini 2.0 is now available for developers and trusted testers.
- Google used several benchmarks to show how the new Gemini 2.0 Flash Experimental model is more powerful than Gemini 1.5 Pro 002.
- To access Gemini 2.0, users can head to the Gemini web chat or mobile app and select the 2.0 Flash Experimental option from the drop-down menu.
In December 2023, the Alphabet-owned company launched Gemini 1.0, a multimodal language processing model. In 2024, the company followed up with Gemini 1.5, an improved version that added a long context window. Now, Google has revealed its “most capable model yet,” Gemini 2.0.
Gemini 2.0 Offers Native Image And Audio Output

With features like native image and audio output, Gemini 2.0 is now available for developers and trusted testers. However, regular users can already access the Gemini 2.0 Flash Experimental model via the publicly available chatbot. For those wondering, the Gemini 2.0 Flash model supports multimodal inputs like images, videos, and audio.
Further, it also supports multimodal output, such as natively generated images mixed with text and steerable text-to-speech (TTS) multilingual audio. Released in the era of agentic AI, the model can also call tools like Google Search and code execution, as well as third-party user-defined functions.
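To give a sense of what user-defined function calling looks like in practice, here is a minimal sketch using Google's `google-generativeai` Python SDK. The `get_product_price` helper, its toy catalog, and the question asked are illustrative assumptions, not part of Google's announcement; the model identifier `gemini-2.0-flash-exp` is the experimental model's API name at the time of writing.

```python
import os


def get_product_price(product_name: str) -> dict:
    """Hypothetical user-defined tool: look up a price in a local catalog."""
    catalog = {"Pixel 9a": 499, "Pixel 9 Pro": 999}
    return {"product": product_name, "price_usd": catalog.get(product_name)}


def ask_gemini(question: str) -> str:
    """Send a question to Gemini 2.0 Flash, letting it call our tool as needed."""
    # Deferred import so the tool function above works without the SDK installed.
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    # Plain Python functions passed as tools are wrapped automatically by the SDK.
    model = genai.GenerativeModel("gemini-2.0-flash-exp", tools=[get_product_price])
    chat = model.start_chat(enable_automatic_function_calling=True)
    return chat.send_message(question).text


if __name__ == "__main__" and os.environ.get("GOOGLE_API_KEY"):
    print(ask_gemini("How much does the Pixel 9a cost?"))
```

With automatic function calling enabled, the SDK executes the tool on the model's behalf and feeds the result back, so the final reply arrives as ordinary text.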
Gemini 2.0 Flash Experimental Outperforms Gemini 1.5 Pro 002 Model

Google used several benchmarks to show how the new Gemini 2.0 Flash Experimental model is more powerful than Gemini 1.5 Pro 002. For instance, the new language model beats Gemini 1.5 Pro 002 in MMLU-Pro (which tests models with higher-difficulty questions across multiple subjects), Natural2Code (which measures code generation across Python, Java, C++, JS, and more), FACTS Grounding (which judges the ability to provide factually accurate responses grounded in supplied source material), and MATH (which tests the model on challenging math problems).
The only two benchmarks where Gemini 1.5 Pro 002 performs better are MRCR (1M), a long-context test of multi-round co-reference resolution, and CoVoST2 (21 languages), which measures speech-translation quality. Besides Gemini 2.0 Flash Experimental, other Gemini 2.0 models will break cover in 2025.
How To Access Gemini 2.0 Flash Experimental?
To access Gemini 2.0, users can head to the Gemini web chat or mobile app and select the 2.0 Flash Experimental option from the drop-down menu. “With this new model, users can experience an even more helpful Gemini assistant,” mentions Google in the official press release. Further, the company plans to extend the Gemini 2.0 model to other products early next year.
With Gemini 2.0 Flash Experimental, Google also announced improvements to Project Astra. Now, the agentic AI can converse in multiple languages (including mixed languages) and better understand accents and uncommon words. With Gemini 2.0, Project Astra can now use Google Search, Maps, and Lens, making it a more helpful AI assistant in users’ day-to-day lives.
Updates To Project Astra, Project Mariner, And Jules
Google has also enhanced Project Astra’s memory. It can now remember up to 10 minutes of in-session conversation, and it can recall past conversations to offer more personalized results. Last but not least, the company has improved Project Astra’s latency, allowing it to stream and understand audio at roughly the speed of human conversation.
With Gemini 2.0 Flash Experimental and Project Astra, Google also unveiled some information about Project Mariner, an early research prototype that understands and reasons across the information on a user’s browser screen, from the pixels to the text, images, and code. Working as an experimental Chrome extension, it can then use that information to complete tasks on the user’s behalf. Last but not least, Google is also working on Jules, an experimental AI-powered coding assistant.
You can follow Smartprix on Twitter, Facebook, Instagram, and Google News. Visit smartprix.com for the latest tech and auto news, reviews, and guides.