In December, we kicked off the agentic era by releasing an experimental version of Gemini 2.0 Flash. Earlier this year, we updated 2.0 Flash Thinking Experimental in Google AI Studio, improving its performance by combining Flash's speed with the ability to reason through more complex problems.
Last week, we made the updated 2.0 Flash available to all users of the Gemini app on desktop and mobile, helping everyone discover new ways to create, interact, and collaborate with Gemini.
Today, we are making the updated Gemini 2.0 Flash generally available via the Gemini API in Google AI Studio and Vertex AI. Developers can now build production applications with 2.0 Flash.
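To make the API path concrete, here is a minimal sketch of a request to the Gemini API's v1beta `generateContent` REST endpoint. The model name `gemini-2.0-flash` matches this release; the API key is a placeholder you would obtain from Google AI Studio, and the prompt text is only illustrative. The snippet builds and prints the request rather than sending it, so no network access or credentials are needed to follow along.

```python
import json

API_KEY = "YOUR_API_KEY"  # placeholder: create a key in Google AI Studio
MODEL = "gemini-2.0-flash"

# v1beta REST endpoint for single-turn or multi-turn generation.
endpoint = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent?key={API_KEY}"
)

# generateContent takes a list of Content objects, each a list of Parts.
payload = {
    "contents": [
        {"parts": [{"text": "Summarize the Gemini 2.0 model family in one sentence."}]}
    ]
}

print(endpoint)
print(json.dumps(payload, indent=2))
```

POSTing that JSON body to the printed endpoint with a valid key returns the model's response as JSON.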
We are also releasing an experimental version of Gemini 2.0 Pro, our best model yet for coding performance and complex prompts. It is available in Google AI Studio and Vertex AI, and in the Gemini app for Gemini Advanced users.
We are also releasing a new model, Gemini 2.0 Flash-Lite, our most cost-efficient model yet, in public preview in Google AI Studio and Vertex AI.
Finally, Gemini app users can access 2.0 Flash Thinking Experimental in the model dropdown on desktop and mobile.
All of these models feature multimodal input with text output at release, with more modalities ready for general availability in the coming months. More information, including pricing specifics, can be found on the Google for Developers blog. Looking ahead, we are working on more updates and improved capabilities for the Gemini 2.0 family of models.
2.0 Flash: now generally available with a new update
First introduced at I/O 2024, the Flash series of models is popular with developers as a powerful workhorse model, optimal for high-volume, high-frequency tasks at scale and highly capable of multimodal reasoning across vast amounts of information with a context window of 1 million tokens. We have been thrilled to see its reception by the developer community.
2.0 Flash is now generally available to more people across our AI products, alongside improved performance on key benchmarks, with image generation and text-to-speech capabilities coming soon.
Try Gemini 2.0 Flash in the Gemini app, or via the Gemini API in Google AI Studio and Vertex AI. Pricing details can be found on the Google for Developers blog.
2.0 Pro Experimental: our best model yet for coding performance and complex prompts
As we have continued to share early experimental versions of Gemini 2.0, such as Gemini-Exp-1206, we have received excellent feedback from developers about its strengths and best use cases, like coding.
Today, we are releasing an experimental version of Gemini 2.0 Pro that responds to that feedback. It has the strongest coding performance and the best ability to handle complex prompts, with better understanding and reasoning about world knowledge, of any model we have released so far. It comes with our largest context window yet at 2 million tokens, enabling comprehensive analysis and understanding of vast amounts of information, as well as the ability to call tools like Google Search and code execution.
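The tool-calling capability above can be sketched as a `generateContent` request payload that enables tools alongside the prompt. This is a hedged illustration, not a definitive recipe: the model identifier `gemini-2.0-pro-exp` and the exact tool field names (`google_search`, `code_execution`) are assumptions about the v1beta REST API and should be checked against the current Gemini API reference. The snippet only constructs and prints the payload, so it runs without credentials.

```python
import json

MODEL = "gemini-2.0-pro-exp"  # assumed identifier for the experimental release

payload = {
    "contents": [
        {"parts": [{"text": "Plot the trend in this dataset and explain it."}]}
    ],
    # Declaring tools lets the model decide when to invoke them:
    "tools": [
        {"google_search": {}},   # assumed field name: ground answers in search results
        {"code_execution": {}},  # assumed field name: let the model write and run code
    ],
}

print(json.dumps(payload, indent=2))
```

Sent to the model's `generateContent` endpoint, a payload like this lets the model decide per request whether to search, execute code, or answer directly.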