(OSV News) — Since generative artificial intelligence capable of producing human-like text, realistic images, and convincing video exploded onto the scene in the 2020s, AI users and developers have warned that consistent regulatory guardrails are needed to protect against documented harms, a particular concern of Pope Leo XIV.
On December 11, 2025, President Donald Trump signed an executive order with the promising title “Securing a National Policy Framework for Artificial Intelligence.” But the order, which calls for a single federal framework to replace state regulations, specifically declares that “AI companies must be free to innovate without burdensome regulations.”
OSV News asked Taylor Black, founding director of the Leonham Institute for AI and Emerging Technologies at Catholic University of America and director of AI and venture ecosystems at Microsoft Corporation, to share his thoughts on how or if these two objectives can be reconciled and what the Catholic faith should say about the ethical development of AI.
Advantages and disadvantages of regulating AI
OSV News: What are the potential impacts of this executive order, both positive and negative?
Taylor Black: This executive order raises a question we’ve been grappling with since the early days of AI governance: What is the appropriate locus of regulatory authority over an essentially borderless technology?
There are valid arguments on both sides here.
A unified national framework could provide the clarity and consistency that responsible developers, especially small and medium-sized enterprises and start-ups, really need. The current patchwork of state laws poses compliance challenges, and there is a real risk that well-intentioned but technically ill-informed regulations may inadvertently stifle beneficial innovation. This is not an abstract concern. I’ve seen it firsthand.
But we need to be honest about what we are sacrificing. States have historically served as laboratories of democracy. In the field of AI, some of the most thoughtful regulatory efforts have emerged at the state level precisely because state legislators are often close to the communities experiencing the real-world impacts of AI. Colorado’s algorithmic discrimination law, which the order specifically criticizes, represents an attempt to address documented harms—harms that communities of color, low-income households, and other marginalized groups are currently experiencing, not hypothetically.
A Catholic perspective on AI regulation
This order posits that “innovation” and “responsible supervision” are fundamentally in tension. But this is exactly the false dichotomy that Pope Leo XIV described in his message to the Builders AI Forum (Rome, November 6-7, 2025). “The question is not just what AI can do, but who we become through the technologies we build.” That framework is important. It moves us from purely utilitarian calculations to questions about something more fundamental: human identity and flourishing.
Catholic social teaching does not ask us to choose between human flourishing and economic dynamism. I argue that true development must include both. The Pope reminded forum participants that “technological innovation can be a form of participation in the sacred act of creation.” But precisely because of that creative participation, “every design choice expresses a vision of humanity and has an ethical and spiritual weight.”
The question is not whether to regulate, but whether to regulate wisely and in a way that protects human dignity while allowing true creativity to flourish.
Tensions in AI regulation: national frameworks vs. local enforcement, technological freedom vs. safety
OSV News: A central ethical concern is child protection, which the administration says the framework will take into account. Could problems arise if states cannot be involved in “local” regulation and enforcement?
Black: The administration has committed to ensuring that child safety protections are not compromised, and Section 8(b)(i) expressly exempts state child safety laws from preemption. That’s important, and I take it seriously.
But what keeps me up at night is enforcement capacity.
State attorneys general have been at the forefront of child protection efforts in the digital space. They understand their communities, they can move quickly, and they have built relationships with local schools, parents, law enforcement, and advocacy groups. No national framework, however well-intentioned, can reproduce this fine-grained relational capacity.
Online child exploitation is not an abstract policy debate. It is happening in real time and at scale, and perpetrators are adapting faster than central regulators can respond. The platforms themselves have acknowledged that their safety teams are overwhelmed, sometimes under legal pressure.
This is not an issue where we can afford to experiment with jurisdictional restructuring while “waiting and seeing” how the national framework turns out. The principle of subsidiarity, that matters should be handled by the lowest competent authority, suggests that states should retain meaningful enforcement capacity, not just authority on paper.
Any national framework must include strong funding for state-level enforcement, clear mechanisms for state attorneys general to address child safety issues without delay from federal preemption, and explicit repeal provisions that restore state authority if federal enforcement proves inadequate.
OSV News: “Big Tech” sometimes resists regulation as harmful to innovation. But the dangers of the way AI is already exploiting humans have been documented. What are your thoughts on how balance can or will be struck in national framework policies, especially in light of the concerns expressed by the Vatican and Pope Leo?
Black: The industry argument, and I say this as someone who has worked in the industry for years, is that regulation stifles innovation. There is a valid version of this argument: poorly designed regulations, written without technical understanding, can create perverse incentives and impose real costs without commensurate benefits.
But there is also a version of this argument that is simply a demand for impunity. And we’ve seen where that leads.
The exploitation we documented at the November 14 Catholic University School of Law conference, “Corporate Social Responsibility in Big Tech” (sexual extortion, forced labor, algorithmic discrimination), is not hypothetical. These are current harms happening to real people, often facilitated by systems deployed with minimal oversight because “moving fast” has been treated as an unqualified good.
What might balanced AI regulation include?
I believe that a balanced national framework should include:
First, transparency requirements that allow independent researchers, civil society, and affected communities to understand how these systems work without forcing disclosure of genuinely proprietary technical details.
Second, meaningful accountability mechanisms that assign responsibility when AI systems cause harm. Blanket immunity benefits no one but bad actors.
Third, investment in formation, not just training. We need engineers, executives, and policymakers who are not only trained in technical compliance but also formed in a moral tradition that can address these questions.
Fourth, continued engagement with the most affected communities. The Rome Call for AI Ethics (signed by the Vatican, Microsoft, IBM, and others) calls for inclusivity. This means that the people governed by these systems need a say in their design and deployment. It also means building shared infrastructures that allow institutions, particularly Catholic institutions, to act intelligently and generatively, rather than remaining passive recipients of technologies built by others.
Fifth, clear recognition that “innovation” divorced from ethical responsibility is not true development.
The Pope is clear about what happens if we get this wrong. In his first interview as pope, he warned that “the danger is that the digital world will take its own course and we will become pawns or be sidelined.” He noted that “very wealthy” people are investing in AI with what he described as complete disregard for human beings and human values.
Catholic social teaching: An ethical, human-first framework for the use and regulation of AI
The Church’s vision is not one of Luddite resistance but something much more radical: technology for the integral development of humanity. And realizing that vision will require a shared infrastructure that is open, interoperable, and managed by the communities it serves, rather than fragmented vendor stacks and ad hoc technology decisions. This is precisely the kind of federated, ecosystem-level thinking that needs to inform any national framework.
OSV News: Is there anything else you would like to add?
Black: I don’t come to this question as an opponent of the technology industry or as a skeptic of innovation. I have spent my career building and investing in emerging technologies because I believe in their potential to bring real benefits to humanity.
But I also believe that Catholic social teaching offers us something that is often missing from current debates. It is a framework that begins not with systems or markets, but with humans, created in the image of God and endowed with dignity that no algorithm can give or take away.
The church cannot be satisfied with criticism from the sidelines. We must build the ventures, infrastructure, and formation pathways that embody the vision we have articulated. Pope Leo XIV’s point is correct. This must be a “very ecclesiastical initiative.” And that effort requires an organizational structure, not just an academic program.
Pope Leo XIV concluded his message to the Builders AI Forum with the following prayer: “May your collaboration bear fruit as an intelligent, relational, and lovingly guided AI that reflects the design of its Creator. May the Lord bless your efforts and may they be a sign of hope for the entire human family.”
That vision – AI as a sign of hope for the entire human family – should be the standard by which we measure all national frameworks.
The question before us is not just “How can we win the AI race?” The question is, “What kind of society will we build and who will be left behind?”
Kimberly Hetherington is an OSV News correspondent. She writes from Virginia.

