AI is evolving and being adopted at lightning speed, while the laws designed to keep us safe are struggling to keep pace.
You probably already knew that. But when it comes to AI regulation, plenty of other widely held ideas are not quite as watertight.
The topic of AI regulation is vast, covering everything from differing national attitudes, to the challenge of enforcing rules on privacy and human rights, to the question of how to govern tools that are often open source and easily accessible to everyone.
But understanding what it all means is becoming increasingly important as we face decisions about how to use AI in our business and personal lives.
So, if you want to understand how AI regulation affects us, our businesses, and society at large, here are five misconceptions about how AI is regulated that deserve to be put to rest.
AI regulations are important only for engineers
The first assumption many people make is that AI regulations are something only AI engineers, data scientists, and developers need to worry about. However, as AI systems become increasingly embedded in business functions from marketing and HR to customer service, everyone shares an obligation to ensure they are used legally and safely.
It is important to remember that the AI regulations we have seen so far, such as those in the EU, China, and various US laws, primarily impose restrictions on the people using AI rather than on those developing it.
Whatever their role within an organization, professionals need to understand the rules and safeguards that apply to them. That means knowing what data they are using to do their jobs, what happens to it, and what they need to do to stay on the right side of the law.
AI regulations curb innovation
There is a strong sentiment among sections of the AI community that regulation suppresses innovation; the argument goes that imposing rules limits what AI developers can build and what users can do with it.
The counterargument is that regulation actually promotes innovation by creating a level playing field and giving businesses the confidence to operate within clear legal and ethical frameworks.
By setting up guardrails around potentially dangerous or harmful use cases, regulations help industries build trust with their customers and safely experiment with new ideas.
In reality, this is a balancing act, with regulators aiming to promote innovation while mitigating risks. Viewing regulation as anti-innovation or as unnecessary interference, however, is a common and dangerous misconception.
AI regulations control what can be developed
We’ve touched on this already, but it deserves its own point. It’s easy to assume that AI regulations are imposed on major AI developers such as Google or OpenAI, limiting what they can build in some way.
In fact, most of the laws we’ve seen so far focus on the effects of AI and on what the people who use it are allowed to do. For example, the EU’s law prohibits or strictly regulates “high-risk” AI activities such as social scoring, real-time biometric identification of people in public places, and the exploitation of vulnerable groups. Other use cases, such as facial recognition, are limited to law enforcement and subject to strict guidelines.
So while developers can build inherently powerful models, just because something is possible with AI doesn’t mean it’s legal. Ultimately, end users are responsible for the consequences of their actions.
Geopolitics overrides AI law
In 2017, Vladimir Putin said that whoever becomes the leader in AI will become the ruler of the world. So far, his prediction seems to be holding up. So why would leaders erect barriers on the path to achieving it, given the military, intelligence, and economic advantages AI grants nation-states?
In fact, it’s because they understand that regulation itself can be used as a tool to advance political and geopolitical agendas. For example, the EU’s law emphasizes the importance of protecting privacy and fundamental rights, while China’s policy focuses on social harmony and the maintenance of public order. In the US, lawmakers have signaled that boosting the competitiveness of the domestic AI industry is a priority.
Taking an early lead in the AI arms race gives a nation the opportunity to shape the direction the AI market will take over the next decade, and regulation is an important tool for accomplishing this.
AI is a “black box” so it cannot be regulated
Even the creators of foundation AI systems, such as the large language models (LLMs) that power ChatGPT, don’t know exactly how they work.
And if no one knows how they work, how can we impose rules on them? Would they even follow them? Perhaps they could even pretend to follow them in order to win our misplaced trust, a behavior known as alignment faking.
These questions often arise when the pros and cons of AI regulation are being discussed. However, as we’ve covered, regulation is not designed to control or limit the development or capabilities of AI itself; it places guardrails around potentially dangerous behaviors.
By focusing on outcomes rather than inner workings, regulators don’t need to fully understand how AI works in order to regulate it. Ensuring that the regulatory frameworks we are building now are robust will be important in helping us address the implications of more sophisticated, and potentially more dangerous, AI in the future.
Why everyone needs to understand how AI regulations affect them
It is not just government policymakers and computer scientists who need to understand how and why AI is regulated, and how those regulations affect them.
As AI becomes increasingly embedded in our lives, understanding the rules and why they exist is essential to exploiting the opportunities AI offers in a safe and ethical way.