Generative AI has a dirty little secret. Behind every whimsical AI sketch of a cat in a space helmet sits an enormous industrial footprint: racks of GPUs guzzling electricity and water to churn out digital art. OpenAI reported that its new image model produced 700 million images in a single week, each one demanding a non-trivial amount of energy.
Now, researchers say they may have found a way out of this spiraling energy problem by building AI that literally draws with light.
The new system, developed at UCLA, does not rely on the usual brute-force calculations of silicon chips. Instead, it uses laser beams and optical components to generate images almost instantly, consuming only a few millijoules of energy per image.
“Unlike digital diffusion models that require hundreds to thousands of iterative steps, this process achieves image generation in a snapshot and requires no additional computation beyond the initial encoding,” said the team led by Aydogan Ozcan, the study’s senior author.
Turning static into art

To understand why this matters, it helps to peek inside the black box of traditional AI art. Most image generators rely on a process called diffusion. During training, digital static is added to images until nothing recognizable remains, and the AI learns to reverse that corruption; millions, if not billions, of images go through this process. Then, when you ask for a new image, say “a house on Mars”, the model starts from random static and gradually removes the noise until a picture emerges. It’s clever, but it’s slow and computationally hungry.
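The denoising loop above can be caricatured in a few lines of Python. This is only a structural sketch, not the paper's method: the "denoiser" here cheats by peeking at the clean image, whereas a real diffusion model uses a trained neural network, and the point is simply that generation takes many small iterative steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": an 8x8 array. Real models work on megapixel images
# with a learned neural denoiser; this only mirrors the structure.
image = rng.random((8, 8))
T = 100  # number of diffusion steps

def add_noise(x, t, T=100):
    """Forward process: blend the image toward pure Gaussian static."""
    alpha = 1.0 - t / T          # how much signal survives at step t
    return alpha * x + (1 - alpha) * rng.standard_normal(x.shape)

def denoise_step(x, x_clean, t, T=100):
    """Stand-in for a learned denoiser: nudge the sample toward the
    clean image. A real model predicts the noise from data alone."""
    return x + (x_clean - x) / (T - t)

noisy = add_noise(image, t=90)          # late in the forward process

# Generation: start from pure static and take many small steps back.
sample = rng.standard_normal((8, 8))
for t in range(T):
    sample = denoise_step(sample, image, t, T)
```

After 100 steps the sample converges to the target, which is exactly the repeated computation the optical approach avoids.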

The UCLA team has translated this process into optics. A small digital encoder, trained on a standard dataset, produces a phase pattern that serves as a mathematical blueprint of the static. That pattern is loaded onto a spatial light modulator, a kind of liquid-crystal screen. When laser light shines through it, the encoded pattern is carried to a second modulator known as the diffractive decoder. The result is an image that materializes on a sensor almost instantly, much as light simply passes through glass.
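The optical pipeline can be mimicked numerically. The sketch below is a loose Fresnel-style simulation with made-up constants, not the authors' code: a phase-only pattern stands in for the first modulator, an FFT-based transfer function stands in for free-space propagation, a second phase surface plays the diffractive decoder, and the sensor records only intensity.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64  # simulation grid; a real SLM has megapixel resolution

def propagate(field):
    """Crude free-space propagation: apply a quadratic phase in the
    Fourier domain (Fresnel-style). Physical constants are omitted;
    this only mimics the structure of the optics."""
    fx = np.fft.fftfreq(N)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * (FX**2 + FY**2) * 50.0)  # arbitrary distance
    return np.fft.ifft2(np.fft.fft2(field) * H)

# 1) Encoder output: a phase-only pattern loaded on the first SLM.
encoder_phase = rng.uniform(0, 2 * np.pi, (N, N))
field = np.exp(1j * encoder_phase)          # laser illuminates the SLM

# 2) Light travels to the second modulator (the diffractive decoder).
field = propagate(field)

# 3) The decoder applies its own fixed phase surface; the light then
#    reaches the sensor, which records only intensity |field|^2.
decoder_phase = rng.uniform(0, 2 * np.pi, (N, N))
sensor_image = np.abs(propagate(field * np.exp(1j * decoder_phase)))**2
```

Notice that generation is just two phase masks and two stretches of free space; once the encoder has produced the pattern, the "computation" is the light propagating.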
“Our optical generative models require almost no computing power and can synthesize countless images, offering a scalable, energy-efficient alternative to digital AI models,” study author Shiqi Chen told Phys.org.

The team tested the system on handwritten digits, butterflies, human faces, and even paintings in the style of Vincent van Gogh. The optical results were not perfect, but they were statistically similar to what a digital model produces.
“This is probably the first example of optical neural networks being not just laboratory toys, but a computational tool that can produce results of practical value,” Alexander Lvovsky, a quantum optics researcher at the University of Oxford, told New Scientist.
From green AI to secure AI
The paper covers two flavors of the technology. Snapshot models create an image in a single optical pass, while iterative models mimic digital diffusion more closely, refining the output over successive flashes of light. Both approaches produced multicolor Van Gogh-style artwork at resolutions comparable to some digital generators.
Beyond efficiency, the researchers also paid attention to privacy. Each image is encoded with a unique optical phase pattern, so only the correct decoder surface can reconstruct the final picture. This creates what the authors call a “physical key-lock mechanism.”
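The key-lock idea has a simple numerical analogue. In this hypothetical sketch (not the paper's actual scheme), a random phase key scrambles the image in the Fourier plane; only a decoder carrying the conjugate phase undoes the scrambling, while any other surface yields noise.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32

image = rng.random((N, N))                       # the picture to protect
key = np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N)))  # encoder's phase key
encoded = np.fft.fft2(image) * key               # key applied in Fourier plane

def decode(field, decoder):
    """Apply a decoder phase surface, then read out intensity."""
    return np.abs(np.fft.ifft2(field * decoder))

# Matching decoder (conjugate of the key) recovers the image.
right = decode(encoded, np.conj(key))

# A mismatched decoder surface leaves the light scrambled.
wrong_key = np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N)))
wrong = decode(encoded, np.conj(wrong_key))
```

The physical version is stronger than this digital analogy suggests: the "key" is a fabricated optical surface, so decoding requires possessing the matching hardware, not just a number.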
The system could eventually shrink onto an integrated photonic chip, with the bulky lasers and modulators replaced by nanofabricated surfaces. That would allow optical generative AI models to be built into glasses, VR headsets, or medical imaging tools. As Ozcan put it, the work shows that optics can be harnessed to perform generative tasks at scale.
The bigger picture here is sustainability. The rapid growth of generative AI has sparked fears that its energy demand is spiraling out of control. In 2023, researchers estimated that training a single large model could release as much carbon as flying thousands of passengers overseas. By eliminating the need for repeated digital computation during inference, optical AI could make content generation far more sustainable.
Of course, challenges remain. Optical hardware is finicky, prone to misalignment, and limited by modulator resolution, and scaling from a lab setup to a data center will not happen overnight. Still, the UCLA team has shown that generative AI can be reimagined as a dance of photons rather than a power hog.

