ROME (CNS) – While the potential benefits of artificial intelligence may outweigh the risks that fuel fear of it, a panel of experts meeting in Rome argued that global laws and ethical principles must be developed and applied to protect human rights and to curb or prevent possible harm.
“Voluntary compliance with ethical principles and standards is no longer sufficient for high-risk AI applications,” said Marcus Wu, chargé d’affaires of the Australian Embassy.
“Like many countries, Australia is working on ways to harness the benefits of AI while reducing risk, ensuring that AI is developed and used in a safe, human-centric, reliable and responsible way,” he said.
“Australia is working to develop baseline rules to govern AI use in high-risk settings,” he said. “This includes assessing how AI is used, the risks it poses to people’s physical and mental health and to human rights, and how severe its impact would be.”
To inform that effort, embassy staff invited five experts representing theology, economics, technology, public policy and law to discuss what they considered the most pressing concerns requiring immediate oversight, safeguards and ethical guidelines.
Luigi Laggerrone, professor of economics at the Catholic University of the Sacred Heart in Milan and director of business and innovation research at Intesa Sanpaolo, Italy’s largest bank, said wage inequality is very likely to keep growing.
Just as increased mechanization boosted workers’ productivity after the Industrial Revolution, artificial intelligence “certainly helps to increase labor productivity,” he said. Economic theory holds that workers’ wages should rise roughly in step with increased productivity.
“But there’s a problem here,” he said. Over the past 70 years, the wages and incomes of 99.9% of earners have not risen by the 1.5% or 2% a year that would have been needed to keep pace with estimated annual productivity growth.
Only the top 0.01% of people, “so basically the managers, the people who decide the paychecks,” have seen such increases, he said, and their salaries have grown much faster than their productivity gains.
So while AI is expected to deliver a significant increase in productivity, those who own the capital stand to become much richer and those who provide the labor “a lot poorer.”
Franciscan Father Paolo Benanti, an artificial intelligence expert and professor of moral theology at the Pontifical Gregorian University in Rome, said the “attention economy,” in which tech companies compete for people’s attention on their platforms, could give way to an “intention economy” thanks to large language models. These models can be trained on users to predict their intentions and propose products and services, turning people’s “intentions” into a new product that can be bought and sold.
Father Benanti asked how being pushed to interact through this kind of “software-defined reality” would affect people’s rights.
“AI makes it technically possible to monitor and control what everyone does, everywhere,” said Diego Ciulli, head of government affairs and public policy at Google Italy.
Before AI, he said, it was technically impossible to police the huge amount of content on YouTube. But the same technology developed to detect pornographic and terrorist content “can be used to monitor everything online and offline and control freedom of speech.”
Edward Santow, a lawyer and co-founder and co-director of the Human Technology Institute at the University of Technology Sydney, said one of the biggest concerns from a human rights perspective is freedom of speech.
Free speech encompasses two rights, he said: the right to express oneself and impart information freely, and the right to receive information without it being filtered or distorted.
“If your intellectual diet is mediated primarily by a handful of social media platforms, it becomes increasingly difficult to form your opinions freely, whether the issue is religion or politics or anything else,” he said.
He said there is sometimes an exclusive focus today on the idea that there should be no restrictions at all, that people should be able to say whatever they want.
However, he said, there have long been laws restricting people from making false and defamatory statements about others, as well as privacy protections and intellectual property rules that prevent revealing people’s trade secrets, “so we know it is probably not an absolute right.”
Most of the panelists agreed that the opportunities offered by AI, particularly in expanding access to education and health care, are greater than the risks, but Santow cautioned that this may not be “the right calculus.”
He recalled that, as a lawyer, he never had a client who sought leniency from a judge by arguing that for every day he had stolen something there were six days when he had not tried to steal anything.
“That is just not how law or, candidly, morality works,” Santow said, and the same applies to AI and tech companies. “If it is causing human rights harm, you cannot gain credibility on human rights or ethical grounds from all the good uses of artificial intelligence. You must avoid harming human rights in any case.”
Copyright ©2025 Catholic News Service/United States Conference of Catholic Bishops