This setup was not a fluke. Thanks to the innovations of companies like DeepSeek, open source AI models are becoming more powerful and accessible. With its recent releases, including DeepSeek-V3, the company demonstrated that high-quality AI models can be developed with relatively modest computational resources, significantly lower energy consumption, and a limited budget. That overturned the assumption that only the industry's titans can afford to build AI.
The rise of open source AI
The rapid evolution of open source AI is democratizing access to machine learning. In recent years, many generative AI tools have demonstrated that small, fine-tuned models can outperform larger, general-purpose alternatives at specific tasks.
Open source AI models drive innovation across industries, from healthcare to finance. For example, TensorFlow and PyTorch are widely used in medical imaging for tasks such as tumor detection, improving diagnostic speed and accuracy. OpenChem has helped researchers develop predictive models for drug discovery.
In the financial sector, open source AI tools, including QuantConnect and Tazama, are being adopted for algorithmic trading, risk assessment, and fraud detection. These applications let financial institutions process huge volumes of data efficiently, leading to better-informed decisions and stronger security measures.
Meanwhile, more and more companies are realizing that they don't need OpenAI- or Google-scale resources to leverage AI. The impact of this shift is profound. AI, once the domain of billion-dollar R&D labs, is now accessible to startups, researchers, and individuals. Companies that embrace this reality will gain a competitive edge by innovating smarter and faster rather than outspending their rivals.
However, there is a caveat. Developers and users of AI models, whether proprietary or open source, should be aware of potential biases in AI output and ensure compliance with relevant regulations. Maintaining consumer trust also requires preserving data privacy and transparency in AI applications.
Even today's "digitally transformed" organizations need a level of agility that often surpasses their existing capabilities to adapt to technological disruption. In periods of high volatility, uncertainty, complexity, and ambiguity, it is important to separate quickly from the temporarily dominant but false narrative: find the facts, stick to the evidence, and revisit the assumptions and frames that underlie our beliefs and shape our attitudes toward technology-driven innovation.
The Internet was a groundbreaking technological disruption in the mid-1990s. AI is doing the same thing today, only faster and with deeper impact.
New ways of thinking about AI
After the demonstration, I asked the executives another question: if AI is this accessible and affordable, what could you do today?
The conversation shifted dramatically. Instead of worrying about prohibitive barriers to entry, such as deep expertise, risk, or dependencies, the executives brainstormed practical applications: automating internal processes, enhancing customer service, and improving risk assessment and decision-making.
The challenges were no longer financial or technical; they were about imagination and execution. This is the lesson we need to embrace: AI isn't just for the Silicon Valley giants. It is built, refined, and applied by people with modest budgets and great ideas.
The future will be the outcome of what we choose to build now. So pick up your tools: AI is yours.