Versa AI hub
AI Legislation

Democratic AI requires good policy and ethical development

By versatileai | November 20, 2024

Dr. Jeff Kleck is a Silicon Valley entrepreneur, adjunct professor at Stanford University, and dean of the Catholic Institute of Technology.

Image credit: Clarote & AI4Media / Better Images of AI / Power/Profit / CC-BY 4.0

Many, including OpenAI co-founder and CEO Sam Altman, have advocated for an ethical and democratic vision for artificial intelligence. But for democratic AI to become a reality, the world will need more than promises from technology leaders. Ensuring that AI is developed and deployed by ethical practitioners requires appropriate regulation and an appropriate approach to ethical policy.

On the policy front, policymakers around the world are pursuing ethical AI development through highly diverse approaches.

The American approach, taken as a whole, is somewhat haphazard. The Biden administration has issued recommendations and policy guidance to advance ethical AI, including the Blueprint for an AI Bill of Rights in October 2022, followed by further policy guidance on responsible AI development in May 2023. This administrative guidance, however, remains very high-level, and much of it lacks legal enforcement; developers and users are free to follow or ignore various aspects of it.

Meanwhile, the US Congress has not passed any substantive AI legislation. The AI bills Congress is considering are piecemeal and do not provide an overall ethics regulatory framework; instead, they address narrow questions such as how AI will affect election integrity and public health. There appears to be little chance of comprehensive AI regulation moving forward in either chamber in the short term.

The net result of the US approach is that ethical questions are far more likely to be answered by private developers and users than by regulators and lawmakers. By choosing not to regulate AI, the United States is accepting greater ethical uncertainty in exchange for the potential for greater innovation.

Meanwhile, the European Union has enacted an AI law that regulates AI along a sliding scale of ethics-based risk. AI innovations considered lower risk receive less regulatory oversight. Riskier systems face more restrictions, such as having to register with the EU and undergo evaluation before being placed on the market. AI systems deemed to pose an “unacceptable risk” (such as those designed to manipulate people or impose social scoring systems based on socio-economic, racial, or other factors) are prohibited outright.

With this approach, European policymakers are implicitly betting that certain uses of AI are unethical for all people, or at least for most people, and therefore should never be considered or attempted.

Despite Europe’s attempt at moral clarity, months later stakeholders are still negotiating the language of the law’s final code of practice, with tech giants such as Amazon, Google, and Meta lobbying for a lighter-touch approach that avoids unduly stifling innovation. After all, no matter how well-intentioned a law may be, reasonable people will disagree about what counts as a “high risk” and what counts as an “unacceptable risk.”

Despite their vastly different approaches, the United States and Europe reveal a fundamental truth in their pursuit of ethical AI: policy is necessary and can help, but it is not sufficient.

Enter ethics

Achieving democratic AI will require more intentional shaping not only of how AI is governed, but also of how it is developed. And that requires ethical developers. To understand why, consider that AI is unique among technologies in that it reflects the ethical attitudes of its developers. Like humans, AI systems are built on the ethical assumptions of the people who raised them, and ultimately make their own rational decisions.

Currently, AI is in its infancy. As any parent knows, children often learn habits and behavioral principles from their parents at an early age. Good parents more often produce successful children. Bad parents often have the opposite effect. The same principle is at work in artificial intelligence.

Who shapes AI now will determine whether it becomes the scourge of humanity, our defender, or a still-undetermined mixture of both.

Take an example. From facial recognition that struggles to identify certain races to hiring algorithms that promote applicants from one background over another, many people are angered when AI exhibits racial bias. How can the problem be fixed? There are various approaches: changing the algorithm, manually restricting certain types of responses the AI gives, or changing the data fed into the AI system itself.
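The last of these levers, adjusting the data, can be made concrete with a minimal sketch: under-represented groups in a training set are up-weighted so that each group contributes equally in aggregate. The function name and toy data here are illustrative assumptions, not anything from the article, and real bias-mitigation pipelines involve far more than reweighting.

```python
from collections import Counter

def group_balanced_weights(groups):
    """Per-sample weights inversely proportional to group frequency,
    so every group contributes equal total weight during training.
    Illustrative sketch only -- not a production fairness method."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # Each group's total weight becomes n / k, regardless of its size.
    return [n / (k * counts[g]) for g in groups]

# Toy dataset: group labels for six training samples.
groups = ["a", "a", "a", "a", "b", "b"]
print(group_balanced_weights(groups))
# Majority-group samples are weighted down, minority-group samples up.
```

Even this trivial intervention embeds an ethical judgment: that equal group influence, rather than proportional influence, is the right target.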

We can debate which of these tools is best for the job. But ultimately, whichever strategy is used, someone will have to make an ethical decision about whether the goal is color-blind AI or anti-racist AI. The question is not a technical one but a moral one.

Or consider a hypothetical. Imagine AI integrated into military targeting systems. If 10% of the casualties would be civilians, should the AI recommend launching the missile? What if only one of the casualties would be a civilian? What if AI turns out to prevent civilian deaths more reliably than human operators do? Would it then be morally preferable to replace human analysts with AI in targeting systems? These questions are not merely hypothetical: AI targeting systems are already being deployed in the conflicts in Ukraine and Gaza.

After all, there are an infinite number of questions of this kind, and they often aren’t cut and dried. There’s a reason people continue to debate so fiercely over how to achieve racial justice and whether the atomic bombings of Hiroshima and Nagasaki were justified. No matter how intelligent a computer is, it cannot simply process all the data and tell us the right thing to do. No lawmaker, however altruistic, can write rules that govern every situation. Even universal rules must be applied with human wisdom.

At a minimum, then, it is clearly important that the people shaping AI be able to judge right from wrong. Unfortunately, people are not born moral. Call it innate selfishness, cultural bias, privilege, or original sin, but people must learn to be moral, and to do so they must be taught.

We recognize this need in other fields. Over the years, graduate programs have been created to teach ethics in science, medicine, and law. Practitioners understood that their fields could be applied morally only if students were trained to face the challenges they would encounter through a moral lens. AI is no exception, yet to date there are no programs or institutions dedicated to the ethical training of future AI engineers and regulators.

This is starting to change. The Catholic Institute of Technology, where I belong, plans to open a Master of Science program in technology ethics in the fall of 2025. We hope other universities will follow our example. Wherever policymakers are unable or unwilling to shape ethical AI, and wherever the law is silent, educational institutions need to fill the gap and ensure that AI is developed properly. In any case, CatholicTech plans to offer ethics courses both in-person and online to as many future scientists and innovators as possible, replenishing industry with talent capable of making moral decisions.

No doubt those of us focused on AI will continue to fight over who gets to raise it from infancy to adulthood and what rules should be imposed. Those are valuable discussions. But if we really want AI to be democratic and good, we also need to focus on teaching people to be good.
