Dr. Jeff Kleck is a Silicon Valley entrepreneur, adjunct professor at Stanford University, and dean of the Catholic Institute of Technology.
Many, including OpenAI co-founder and CEO Sam Altman, have advocated for an ethical and democratic vision for artificial intelligence. But for democratic AI to become a reality, the world will need more than promises from technology leaders. Ensuring that AI is developed and deployed by ethical practitioners requires appropriate regulation, but also an appropriate approach to ethics itself.
On the policy front, governments around the world are pursuing ethical AI development through very different approaches.
The American approach, taken as a whole, is somewhat haphazard. The Biden administration has issued recommendations and policy guidance to advance ethical AI, including a Blueprint for an AI Bill of Rights in October 2022 followed by further responsible-AI guidance in May 2023. That guidance, however, remains very high-level, and much of it carries no legal force. Developers and users are free to follow or ignore various aspects of it.
Meanwhile, the US Congress has not passed any substantive AI legislation. The AI bills Congress is considering are piecemeal and do not provide an overall ethics regulatory framework; instead, they address discrete questions such as how AI will affect election integrity and public health. There appears to be little chance of comprehensive AI regulation moving through both chambers in the short term.
The net result of the US approach is that ethical questions are far more likely to be answered by private developers and users than by regulators and lawmakers. By choosing not to regulate AI, the United States is accepting greater ethical uncertainty in exchange for the potential of greater innovation.
Meanwhile, the European Union has enacted an AI Act that regulates AI along a sliding scale of ethics-based risk. AI systems judged lower-risk receive less regulatory oversight; riskier systems face more restrictions, such as having to register with the EU and undergo evaluation before being placed on the market. AI systems deemed to pose an “unacceptable risk” (such as those designed to manipulate people or to impose social scoring based on socioeconomic status, race, or other factors) are banned outright.
This approach amounts to an implicit bet by European policymakers that certain uses of AI are unethical for all people, or at least for most people, and therefore should never be considered or attempted.
Despite Europe’s attempt at moral clarity, stakeholders are still negotiating the language of the law’s final code of practice months later, with tech giants such as Amazon, Google, and Meta in particular lobbying for a lighter-touch approach that, they argue, will avoid unduly stifling innovation. After all, however well-intentioned a law may be, reasonable people will disagree about what counts as a “high risk” and what counts as an “unacceptable risk.”
Despite their vastly different approaches, the United States and Europe are revealing a fundamental truth in the pursuit of ethical AI: policies are necessary and can help, but they are not sufficient.
Enter ethics
Achieving democratic AI will require more intentional shaping not only of how AI is governed but also of how it is developed. And for that we need ethical developers. To understand why, one must recognize that AI is unique as a technology in that it reflects the ethical attitudes of its developers. Like humans, AI systems absorb the ethical assumptions of the people who raise them, and ultimately make their own rational decisions.
Currently, AI is in its infancy. As any parent knows, children often learn habits and behavioral principles from their parents at an early age. Good parents more often produce successful children. Bad parents often have the opposite effect. The same principle is at work in artificial intelligence.
Who shapes AI now will determine whether it becomes the scourge of humanity, our defender, or a still-undetermined mixture of both.
Let’s take an example. From facial recognition systems that struggle to identify certain races to hiring algorithms that favor applicants from one background over another, many people are angered when AI exhibits racial bias. How can the problem be fixed? There are various options, from changing the algorithm, to manually restricting certain kinds of responses the AI can give, to changing the data fed into the AI system itself.
We can debate which of these tools is best suited to fixing the problem. But ultimately, whichever strategy is used, someone will have to make the ethical decision of whether the goal is a color-blind AI or an anti-racist one. The question is not a technical one but a moral one.
Or consider a hypothetical. Imagine AI integrated into a military targeting system. Should the AI recommend launching a missile if 10 percent of the casualties will be civilians? What if only one of the casualties will be a civilian? And if AI turns out to prevent civilian deaths more reliably than human operators, is it then morally preferable to replace human analysts with AI in targeting systems? These questions are not merely hypothetical: AI targeting systems are already being deployed in the conflicts in Ukraine and Gaza.
Ultimately, there is an endless supply of questions of this kind, and they are rarely cut and dried. There is a reason people continue to debate so fiercely how to achieve racial justice and whether the atomic bombings of Hiroshima and Nagasaki were justified. No computer, however intelligent, can simply process all the data and tell us the right thing to do. No lawmaker, however altruistic, can write rules that govern every situation. Even universal rules must be applied with human wisdom.
This makes it clear why it is important that the people shaping AI be able to judge right from wrong. Unfortunately, people are not born moral. Call it innate selfishness, cultural bias, privilege, or original sin, but people must learn to be moral, and to learn they must be taught.
We recognize this need in other fields. Over the years, graduate programs have been created in scientific, medical, and legal ethics. Practitioners understood that their fields could be practiced morally only if students were trained to view the challenges they would encounter through a moral lens. AI is no exception, yet to date there are no programs or institutions dedicated to the ethical training of future AI engineers and regulators.
This is starting to change. The Catholic Institute of Technology, where I serve as dean, plans to launch a Master of Science program in technology ethics in the fall of 2025, and we hope other universities will follow our example. Wherever policymakers are unable or unwilling to shape ethical AI, and wherever the law is silent, educational institutions must fill the gap and ensure that AI is developed responsibly. To that end, CatholicTech plans to offer ethics courses both in person and online to as many future scientists and innovators as possible, supplying the industry with talent capable of making moral decisions.
No doubt those of us focused on AI will continue to fight over who gets to raise it from infancy to adulthood and what rules should be imposed. Those are valuable discussions. But if we really want AI to be democratic and good, we also need to focus on teaching people to be good.