As the AI industry's insatiable energy demand collides with infrastructure constraints, pressure is mounting to accelerate the deployment of nuclear energy at the expense of safety and oversight. Major nuclear build-outs are underway to meet this recent surge in demand, but speed comes with risk. Nuclear development timelines (often 10-20 years) are out of step with the pace of AI deployment, and efforts to fast-track them raise serious safety and oversight concerns. The "AI arms race" is increasingly invoked to justify rolling back decades of nuclear safety and regulatory mechanisms that exist to protect the public, workers, and the environment.
The AI Now Institute is launching a new workstream on AI's impact on energy infrastructure, with a special focus on nuclear safety and regulation. As part of this effort, we welcome Dr. Sofia Guerra as our advisor. Dr. Guerra is a global expert in assurance and governance for complex digital systems. She has worked for decades with the US Nuclear Regulatory Commission, the UK Office for Nuclear Regulation, and the International Atomic Energy Agency to strengthen public oversight and safety in high-hazard environments.
Our new Energy In Flynis initiative builds on our extensive research into AI safety and governance, focusing on energy security, regulatory accountability, and the concentration of power in high-stakes decision-making. At a moment when industry narratives risk eroding public protections in the name of innovation, we ask: do the claimed benefits of AI justify the risks of expanding nuclear power, and of "fast-tracking" regulation, to meet its surging energy demand?
We will examine the implications of AI deployment in high-risk sectors, challenge efforts to erode public oversight, and highlight strategies that prioritize safety and democratic accountability.