Healthcare is increasingly adopting AI to improve workflow management, patient communication, and support for diagnosis and treatment. It is important that these AI-based systems not only perform well, but are also efficient and privacy-preserving. With these considerations in mind, we built and recently released Health AI Developer Foundations (HAI-DEF). HAI-DEF is a collection of lightweight open models designed to provide developers with a robust starting point for their own health research and application development. Because the HAI-DEF models are open, developers retain full control over privacy, infrastructure, and modifications to the models. In May, we expanded the HAI-DEF collection with MedGemma, a collection of generative models based on Gemma 3 designed to accelerate healthcare and life sciences AI development.
Today, we are proud to announce two new models in this collection. The first is MedGemma 27B Multimodal, which complements the previously released 4B Multimodal and 27B text-only models by adding support for interpreting complex multimodal and longitudinal electronic health records. The second is MedSigLIP, a lightweight image and text encoder for classification, search, and related tasks. MedSigLIP is based on the same image encoder that powers the 4B and 27B MedGemma models.
MedGemma and MedSigLIP are powerful starting points for medical research and product development. MedGemma is useful for medical text and image processing tasks that require free text generation, such as generating reports or answering visual questions. MedSigLIP is recommended for image processing tasks with structured output, such as classification and search. All of the above models can run on a single GPU, and MedGemma 4B and MedSigLIP can also be adapted to run on mobile hardware.
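To illustrate how an image-text encoder like MedSigLIP supports classification with structured output, here is a minimal conceptual sketch (not the actual MedSigLIP API): the encoder maps an image and each candidate label's text into a shared embedding space, and the prediction is the label whose embedding is closest to the image's. The embeddings below are hypothetical stand-ins for encoder outputs.

```python
# Conceptual sketch of zero-shot classification with a paired image-text
# encoder: embed the image and each candidate label, then pick the label
# with the highest cosine similarity. Embedding values are illustrative only.
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def classify(image_embedding, label_embeddings):
    """Return the label whose text embedding is closest to the image embedding."""
    return max(
        label_embeddings,
        key=lambda label: cosine_similarity(image_embedding, label_embeddings[label]),
    )


# Hypothetical encoder outputs for one image and two candidate labels.
image_emb = [0.9, 0.1, 0.2]
labels = {
    "pneumonia": [0.8, 0.2, 0.3],
    "normal": [0.1, 0.9, 0.4],
}
print(classify(image_emb, labels))  # prints "pneumonia"
```

In practice the embeddings would come from MedSigLIP's image and text towers rather than hand-written vectors, but the nearest-label decision rule is the same, which is why this style of encoder is a natural fit for classification and retrieval rather than free-text generation.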
For more information on the development and evaluation of MedGemma and MedSigLIP, please see the MedGemma Technical Report.

