In 2016, Microsoft CEO Satya Nadella stood beside Saqib Shaikh, a software engineer who is blind, as Shaikh demonstrated a pair of smart glasses developed to assist people with visual impairments. The glasses used advanced machine learning and facial recognition to describe the wearer’s surroundings in real time.
This was not just a technical showcase. It was a moment that reframed how we think about artificial intelligence (AI). Nadella later reflected on the experience in an article for Slate, saying: “The beauty of machines and humans working in tandem is lost in discussions about whether AI is a good or bad thing. The discussion should be about the value instilled in the people and institutions that create this technology.”
Since then, Microsoft has embarked on a purposeful, values-driven journey to ensure that AI innovation is grounded in ethics and human impact. This commitment has been realised through concrete actions, including the formation of an AI ethics committee, the establishment of the Office of Responsible AI, the publication of Microsoft’s six AI principles, and the development of an internal Responsible AI Standard, now in its second version. Microsoft’s experience demonstrates that innovation and core values are not mutually exclusive, but essential partners in building technology that benefits society.
This philosophy is now reflected in a new regulatory framework: the EU AI Act. As the world’s first comprehensive AI regulation, the Act represents a turning point. It establishes a risk-based approach that aims to balance fostering innovation with reducing the potential harms of AI. Much like Microsoft’s internal governance efforts, the Act regulates AI systems proportionately, based on their use and impact, rather than through a one-size-fits-all approach.
The phased enforcement of the EU AI Act is intentional and ambitious. In February, the Act’s requirements on organisational AI literacy and its prohibitions on certain unacceptable practices begin to apply. By August, new rules for general-purpose AI take effect, acknowledging the unique challenges posed by foundation models, which serve as the backbone of countless downstream applications. A year later, in August 2026, the Act begins regulating high-risk AI systems (those with a significant impact on health, safety or fundamental rights) with robust requirements for transparency, human oversight and data governance.
The global significance is clear. Organisations that provide AI products or services in the EU, or whose systems are used there, are subject to the Act. In effect, the EU AI Act sets a new international benchmark for responsible AI. Compliance should not be dismissed as a mere regulatory obligation. Rather, it is an opportunity to strengthen AI practices around the world.
Microsoft is already working to ensure its AI offerings align with the requirements of the AI Act. This is where collaboration matters. As regulatory environments evolve, companies will need expert guidance to interpret, implement and operationalise these new standards in ways suited to their unique capabilities and context.
IQBusiness’s deep expertise in digital transformation, regulatory alignment and risk management illustrates how this can be put into practice. It is essential to understand that adopting responsible AI is not simply a matter of compliance. It is about building trust, demonstrating accountability and sustaining innovation over the long term.
For businesses, this starts with governance. To comply with the EU AI Act, ethical principles must be embedded into the very structure of the organisation. This means defining the roles and responsibilities of leadership and technical teams, training employees to understand the implications of AI systems, and ensuring that decisions made by AI are explainable and auditable. It also means integrating AI governance across enterprise risk management, data privacy and digital transformation initiatives.
Risk management is another important pillar. The EU AI Act focuses on identifying, assessing and mitigating the risks associated with AI. Organisations need to understand how their AI systems interact with sensitive data, assess potential harms, including bias and unfair outcomes, and ensure those risks are addressed throughout a system’s lifecycle. These responsibilities cannot be siloed; they must be built into enterprise-wide processes with appropriate monitoring and accountability.
There is a clear first-mover advantage here. Companies that align early and intentionally with the EU AI Act can establish themselves as leaders in ethical AI. Building transparency, fairness and human-centred design into AI systems will not only meet regulatory requirements, but also build trust with users, regulators and wider society.
Our shared vision is that responsible AI becomes the norm, not the exception. The EU AI Act is an important milestone, but it is just the beginning. As part of this beginning, local South African organisations and the broader African AI community should consider leveraging the best practices and lessons established by their EU counterparts, as well as existing privacy regulations, and applying these as they evolve and iterate within appropriate governance structures and ethical frameworks.
I believe this is the moment for industry leaders to step up: not merely to comply, but to shape the future of AI in ways that are inclusive, fair and deeply human.
• Watson is a senior corporate advisor at Microsoft; Craker is CEO of IQBusiness.