Goodfire, a company focused on AI interpretability research, has raised $50 million in a Series A funding round to expand its interpretability research and develop its Ember platform.
The funding round, led by Menlo Ventures, saw participation from Anthropic, Lightspeed Venture Partners, B Capital, Work-Bench, Wing, South Park Commons, and other investors.
Goodfire will use the funds to expand its research activities and further develop its core interpretability platform, Ember.
Ember is designed to allow users to access the internal mechanisms of neural networks, aiming to make these systems more understandable and controllable.
Deedy Das, an investor at Menlo Ventures, said:
“Goodfire’s world-class team is drawn from OpenAI and Google DeepMind, and they’re opening up the black box to help businesses truly understand, guide, and control AI systems.”
Goodfire focuses on mechanistic interpretability, the study of understanding and reverse-engineering neural networks.
The Ember platform is designed to decode the neural processes within AI models, providing direct, programmable access to their internal workings.
Rather than treating models as black boxes defined only by their inputs and outputs, Ember opens up the ability to understand, train, and adjust AI models from the inside. This allows users to uncover hidden insights, precisely control model behavior, and improve overall performance.
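As a rough illustration of what programmable access to a model’s internals can look like in practice, the sketch below uses generic PyTorch forward hooks to read and nudge hidden activations in a toy network. This is a minimal, hypothetical example of activation inspection and steering in general; it does not reflect Ember’s actual API, and the model, layer choice, and steering direction are assumptions made for illustration.

```python
# Minimal sketch of activation inspection and steering with forward hooks.
# Generic PyTorch illustration only -- NOT Goodfire's Ember API; the model,
# layer, and steering direction below are hypothetical.
import torch
import torch.nn as nn

# A toy two-layer network standing in for a real foundation model.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 4),
)

captured = {}

def capture_and_steer(module, inputs, output):
    # Read the hidden activations produced by the first layer.
    captured["hidden"] = output.detach().clone()
    # Steer the model by nudging activations along a chosen direction.
    steering_vector = torch.zeros_like(output)
    steering_vector[:, 0] = 1.0  # hypothetical "feature" direction
    return output + 0.5 * steering_vector  # returned tensor replaces the output

handle = model[0].register_forward_hook(capture_and_steer)

x = torch.randn(2, 16)
steered_logits = model(x)

print("captured hidden shape:", captured["hidden"].shape)  # (2, 32)
print("steered output:", steered_logits)

handle.remove()  # restore the original, un-steered model
```

In this kind of workflow, the captured activations are what an interpretability researcher would analyze, while the returned, modified activations show how the same access point can be used to control behavior rather than just observe it.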
“We’re committed to providing a range of services to our customers,” said Eric Ho, co-founder and CEO of Goodfire. “Our vision is to build tools that make it easier to understand, design, and modify neural networks. This technology is important for building the next frontier of safe and powerful foundation models.”
“The investment in Goodfire reflects our belief that mechanistic interpretability is one of the best ways to help transform black-box neural networks into understandable and steerable systems,” said Dario Amodei, CEO and co-founder of Anthropic. “This is an important foundation for the responsible development of powerful AI.”
Goodfire said it is advancing interpretability research through strategic collaborations with leading model developers.
The company also plans to release further research previews demonstrating interpretability techniques in areas such as language models, image processing, and scientific modeling.
These initiatives are set to uncover new scientific insights and change how we understand, interact with, and leverage AI models.