AI has had a profound impact on society, prompting growing demands for regulation. Before enacting new laws, it is wise to assess whether existing legal frameworks can already address AI-related issues.
History shows that the legal system has adapted, rather than been overhauled, to manage disruptive technologies. AI should be no exception.
Back to basics: Laws govern humans, not tools
The law regulates relationships between people and the entities they create, such as businesses and governments. AI itself is a tool and cannot be regulated as such; it is the people who develop, deploy, and benefit from AI who can be held accountable.
If an autonomous vehicle malfunctions, product liability law holds the manufacturer liable. Apply the law's core principles, and it becomes clear that specialized AI laws are unnecessary.
Just as society adapted property and contract law to the use of horses in the 19th century and applied existing legal frameworks to the Internet, AI does not require new laws. Regardless of the tools involved, legal principles such as responsibility, transparency, and justice already govern human behavior.
Section 230: A modern legal hurdle
The core issue is the legal immunity granted to tech companies under Section 230 of the 1996 US Communications Decency Act. Designed to protect early Internet platforms, it now shields businesses from liability for deepfakes, harassment, fraud, and other AI-generated harms.
Imagine an automaker avoiding liability for a brake defect by claiming "the car drove itself." Section 230 enables this absurdity, allowing platforms to escape the consequences of AI misuse. Rather than creating new regulations, revising this immunity shield would address many of AI's legal issues and risks.
A reality test for AI rules
Before enacting a new AI law, check whether existing rules can handle the cases AI triggers. This test can be performed by working through the AI pyramid layer by layer.
Layer 1: AI's hardware and computing power are already managed by technical standards. AI server farms are regulated by environmental laws that monitor energy and water consumption. The global semiconductor trade is strictly regulated by export-control regimes.

Layer 2: Algorithms and AI capabilities are at the heart of regulatory debates, which range from AI safety and alignment to bias. Initially, the focus was on quantitative metrics such as parameter counts and FLOPs (floating-point operations per second). However, models like DeepSeek have turned this approach on its head, showing that powerful AI inference does not always require massive computational resources.

Layer 3: Data and knowledge, the lifeblood of AI, are already regulated by data protection and intellectual property laws. However, the courtroom dramas unfolding today reveal cracks in these frameworks. In the United States, OpenAI is fighting The New York Times, and Universal Music Group is suing Anthropic. Across the Atlantic, Getty Images has taken Stability AI to court.

The apex: AI applications are where the social, legal, and ethical consequences of AI are concentrated. Whether deepfakes, biased hiring algorithms, or autonomous weapons, risk arises from how the technology is used, not from the technology itself.
The entire legal system (contract law, tort law, labor law, criminal law, and so on) operates here. The basic principle applies: those who develop or benefit from a technology must be accountable for its consequences. This means holding companies responsible for harm caused by AI systems through negligence, misuse, or malicious intent.
Conclusion
History shows that transformative technologies, from railroads to the Internet, have demanded adaptation of legal frameworks rather than reinvention. AI is no exception. Existing laws on liability, intellectual property, and data can address AI issues. The core challenge lies not in regulatory gaps but in outdated statutes like Section 230, which shields tech platforms from accountability for AI-driven harm.
Society can reduce risks without halting innovation by focusing legal attention on those who create, use, and benefit from tools rather than on the tools themselves. Revise the immunity shields, enforce transparency, and hold people accountable for harm done through AI.