Introducing Gemini 1.5
Demis Hassabis, CEO of Google DeepMind, on behalf of the Gemini team
These are exciting times for AI. New advances in this field could make AI even more useful to billions of people in the coming years. Since introducing Gemini 1.0, we’ve been testing, refining, and enhancing its features.
Today we are announcing our next generation model, Gemini 1.5.
Gemini 1.5 delivers dramatically enhanced performance. It represents a step change in our approach, building on research and engineering innovations across nearly every part of our foundation model development and infrastructure. This includes making Gemini 1.5 more efficient to train and serve with a new Mixture-of-Experts (MoE) architecture.
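The core idea behind MoE models is that a learned gating network routes each token to a small subset of specialist "expert" sub-networks, so only a fraction of the model's parameters run per token. As a rough, hypothetical illustration of that routing step (none of the sizes or names below reflect Gemini internals):

```python
import math
import random

random.seed(0)

# All dimensions here are toy values chosen for the sketch, not Gemini's.
NUM_EXPERTS = 4   # number of expert feed-forward blocks
D_MODEL = 8       # hidden dimension of a token vector
TOP_K = 2         # experts activated per token

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    """Multiply a (len(v) x cols) matrix by vector v, giving a cols-length vector."""
    return [sum(v[i] * m[i][j] for i in range(len(v))) for j in range(len(m[0]))]

def softmax(xs):
    mx = max(xs)
    exps = [math.exp(x - mx) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Each "expert" is modeled as a single weight matrix; the gate scores experts.
experts = [rand_matrix(D_MODEL, D_MODEL) for _ in range(NUM_EXPERTS)]
gate_w = rand_matrix(D_MODEL, NUM_EXPERTS)

def moe_forward(token):
    """Route one token to its top-k experts and mix their outputs."""
    scores = softmax(matvec(gate_w, token))                    # gating probabilities
    top = sorted(range(NUM_EXPERTS), key=lambda i: scores[i])[-TOP_K:]
    norm = sum(scores[i] for i in top)                         # renormalize over chosen experts
    out = [0.0] * D_MODEL
    for i in top:                                              # only selected experts compute
        weight = scores[i] / norm
        expert_out = matvec(experts[i], token)
        out = [o + weight * e for o, e in zip(out, expert_out)]
    return out

token = [random.gauss(0, 1) for _ in range(D_MODEL)]
out = moe_forward(token)
```

Because only `TOP_K` of the `NUM_EXPERTS` blocks run per token, total capacity can grow with expert count while per-token compute stays roughly constant, which is the efficiency property the MoE design targets.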
The first Gemini 1.5 model we are releasing for early testing is Gemini 1.5 Pro. It is a mid-size multimodal model optimized for scaling across a wide range of tasks, and it performs at a similar level to 1.0 Ultra, our largest model to date. It also introduces a breakthrough experimental capability in long-context understanding.
Gemini 1.5 Pro comes with a standard 128,000-token context window. But starting today, a limited group of developers and enterprise customers can try it with a context window of up to 1 million tokens via a private preview in AI Studio and Vertex AI.
As we roll out the full 1 million token context window, we are actively working on optimizations to reduce latency and computational requirements and to improve the user experience. We are excited for people to try this breakthrough capability, and we share more details on future availability below.
The continued advancement of these next-generation models opens up new possibilities for people, developers, and businesses to create, discover, and build with AI.