As AI matures into an industry, it becomes important to balance innovation with responsibility. Government frameworks and evolving regulations around the world are key to ensuring the deployment of ethical, safe and fair AI across the sector.
Over the past few years, it has been impossible for the industry not to notice the explosion of interest in AI. The hype cycle we are currently in began in November 2022 with the launch of ChatGPT, and it has driven industry interest in using AI to increase productivity, improve service quality and create new business models. Two and a half years later, we are moving beyond the stage of companies merely experimenting with AI: more and more are putting AI solutions into production and seeing returns on their investments.
As AI use became more widespread and the new normal, challenges emerged. Left unchecked, AI can exhibit bias, produce offensive content and hallucinate, leading to false, negative and harmful outcomes. Such bad experiences can be avoided with guardrails and other administrative controls.
There are also situations in which it is essential to understand why an AI or machine learning model has generated particular content or recommended particular actions. For example, in a healthcare setting, it is very important that AI is not unduly influenced by a patient's race, gender or other demographic attributes when recommending a particular course of care.
AI governance is a set of processes that can be used to ensure that AI is responsible: safe, secure, ethical and fit for purpose. When such governance is applied to AI, its use can be kept secure and controlled.
In many cases, as with other new technologies, guidance and government regulation stating how AI should and should not be used have not kept pace with its development and adoption. Furthermore, some influential voices hold the view that AI regulation limits innovation. At this point, we are in a situation where some jurisdictions are enforcing AI regulation – the European Union and China, for example – while others are not. The latter include the United States, where President Trump rescinded President Biden's AI executive order earlier this year (an order that essentially set out the Biden administration's approach to AI regulation within the United States).
Whether or not a company or government agency operates within a jurisdiction that regulates AI, there is a strong driver for using AI responsibly: the desire to avoid reputational damage, security or data breaches, legal challenges, or other serious issues that may arise from the unintended consequences of AI use.
This raises awareness of the risks associated with AI use and the need to manage those risks. AI regulation provides a framework for identifying and managing such risks, but it is entirely possible for organisations in unregulated jurisdictions to use the same kind of risk management framework to ensure their AI is ethical and responsible.
Globally, many countries have stated their intention to regulate AI (including India and the UK), and some have begun drafting legislation. For many, however, there appears to be something of a "wait-and-see" approach, as governments want to understand the approaches of their peers and competitors. Regulatory development is slow.
Earlier this month, the US government issued guidance to US federal agencies directing them to innovate responsibly with AI in delivering their services. The approaches this guidance takes to ensuring responsible AI innovation are similar to those that underpin the EU AI Act: catalogue all AI in use, carry out a risk assessment for each AI system, and ensure that higher-risk AI has its risks managed within an appropriate AI governance framework.
The guidance for ensuring that AI is responsible, whether or not a jurisdiction is regulated, is therefore steadily becoming clear. Managing AI risk through a governance framework promotes responsible AI innovation, as it can ensure that AI is ethical, safe, secure and legal. As the world continues to explore, experiment with and embed AI across industry and wider society, this allows us to build fairness and equity into AI for the future.
(The author is Chief Data Scientist and Head of Responsible AI at UST, UK)