For a comprehensive overview of the technical details behind these features and an approach to responsible development, see the Gemma 3 Technical Report.
Rigorous safety protocols to build Gemma 3 responsibly
Open models require careful risk assessment, and our approach balances innovation with safety. Gemma 3's development included extensive data governance, alignment with our safety policies via fine-tuning, and robust benchmark evaluations. While thorough testing of more capable models often informs our assessment of less capable ones, Gemma 3's enhanced STEM performance prompted specific evaluations focused on its potential for misuse in creating harmful substances; the results of these evaluations indicate low risk levels.
As the industry develops more powerful models, it is important that we collectively develop risk-proportionate approaches to safety. We will continue to learn and refine our safety practices for open models over time.
Built-in safety for image applications with ShieldGemma 2
Alongside Gemma 3, we are also releasing ShieldGemma 2, a powerful 4B image safety checker built on the Gemma 3 foundation. ShieldGemma 2 provides a ready-made solution for image safety, outputting safety labels across three categories: dangerous content, sexually explicit material, and violence. Developers can further customize ShieldGemma 2 for their own safety needs and users. ShieldGemma 2 is open by design, offering flexibility and control, and leverages the performance and efficiency of the Gemma 3 architecture to promote responsible AI development.
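As a quick illustration of how an application might act on per-category safety labels like the ones ShieldGemma 2 produces, here is a minimal, hedged sketch. The dictionary-of-scores format and the threshold policy are assumptions for illustration, not ShieldGemma 2's actual output interface; consult the model card for the real format.

```python
# Hedged sketch: applying a per-category blocking policy to
# ShieldGemma-2-style image safety labels. The score format below is a
# hypothetical example, not the model's documented output.
CATEGORIES = ("dangerous_content", "sexually_explicit", "violence")

def is_image_allowed(scores, threshold=0.5):
    """Return True only if every safety-category score is below the threshold."""
    return all(scores.get(category, 0.0) < threshold for category in CATEGORIES)

# Hypothetical checker output for a benign image:
scores = {"dangerous_content": 0.1, "sexually_explicit": 0.05, "violence": 0.2}
print(is_image_allowed(scores))
```

Because ShieldGemma 2 is open, developers can tune thresholds, or add categories, to match their own safety policies.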
Ready to integrate with the tools you already use
Gemma 3 and ShieldGemma 2 integrate seamlessly into your existing workflows.
Gemma 3 works with your favorite tools, including Hugging Face Transformers, Ollama, JAX, Keras, PyTorch, Google AI Edge, Unsloth, vLLM, and gemma.cpp, so you have the flexibility to choose the best option for your project. Start experimenting with Gemma 3's full potential in Google AI Studio, or download the models from Kaggle or Hugging Face.

Customize Gemma 3 to your specific needs: Gemma 3 ships with a revamped codebase that includes recipes for efficient fine-tuning and inference. Train and adapt the models on your preferred platform, from Google Colab and Vertex AI to even a gaming GPU.

Gemma 3 delivers maximum performance on NVIDIA GPUs of every size, from Jetson Nano to the latest Blackwell chips, and is featured in the NVIDIA API Catalog, enabling rapid prototyping with just an API call. Accelerate your AI workloads across many hardware platforms: Gemma 3 is also optimized for Google Cloud TPUs and integrates with AMD GPUs via the open-source ROCm™ stack. For CPU execution, gemma.cpp offers a direct solution.
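Whichever runtime you pick, instruction-tuned Gemma checkpoints consume chat-formatted prompts. The sketch below shows a minimal, dependency-free prompt builder using the `<start_of_turn>`/`<end_of_turn>` control tokens used by Gemma's instruction-tuned models; in practice you should prefer your framework's built-in chat template (for example, the tokenizer's `apply_chat_template` in Transformers) rather than hand-rolling the format.

```python
# Minimal sketch of Gemma's chat prompt format. The turn markers below are
# Gemma's control tokens; verify against your checkpoint's chat template
# before relying on this in production.
def build_gemma_prompt(messages):
    """Format a list of {role, content} dicts as a Gemma chat prompt."""
    parts = []
    for message in messages:
        # Gemma uses the role name "model" for assistant turns.
        role = "model" if message["role"] == "assistant" else message["role"]
        parts.append(f"<start_of_turn>{role}\n{message['content']}<end_of_turn>\n")
    # End with an open model turn to cue the model to respond.
    parts.append("<start_of_turn>model\n")
    return "".join(parts)

prompt = build_gemma_prompt([{"role": "user", "content": "Hello!"}])
print(prompt)
```

The same formatted string can then be passed to vLLM, gemma.cpp, or any other runtime that accepts raw prompts.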
A “Gemmaverse” of models and tools
The Gemmaverse is a vast ecosystem of community-created Gemma models and tools, ready to power and inspire your innovation. For example, AI Singapore's SEA-LION v3 breaks down language barriers and fosters communication across Southeast Asia. INSAIT's BgGPT, a pioneering first-of-its-kind Bulgarian large language model, demonstrates Gemma's power to support a wide variety of languages. And Nexa AI's OmniAudio shows the possibilities of on-device AI, bringing advanced audio processing capabilities to everyday devices.
To further fuel academic research breakthroughs, we are also launching the Gemma 3 Academic Program. Academic researchers can apply for Google Cloud credits (worth $10,000 per award) to accelerate their Gemma 3-based research. The application form opens today and will remain open for four weeks. Apply on our website.
Get started with Gemma 3
Gemma 3 marks the next step in our continued commitment to democratizing access to high-quality AI. Ready to explore Gemma 3? Here's where to start:
Instant exploration:
Try Gemma 3 at full precision directly in your browser with Google AI Studio, no setup required. Get an API key directly from Google AI Studio and use Gemma 3 with the Google GenAI SDK.
Customize and build:
Download the Gemma 3 models from Hugging Face, Ollama, or Kaggle. Use Hugging Face's Transformers library, or your preferred development environment, to fine-tune and adapt the models to your own requirements.
Deploy and extend: