Lightricks is using the latest artificial intelligence models to raise the bar for rapid video creation and iteration. The company claims that LTX-2, its newly released foundation model, can generate new content faster than playback speed while also pushing resolution and quality higher.
The open-source LTX-2 can generate a stylized, high-resolution six-second video in just five seconds without compromising quality, allowing creators to produce professional content far faster than before.
While that is an impressive accomplishment, it’s not the only thing that sets LTX-2 apart. The model combines native audio and video synthesis with open-source transparency, and the company says it can scale output to 4K resolution at up to 48 frames per second if users are willing to wait a few extra seconds. Better still, creators can run the model on consumer-grade GPUs, significantly reducing computing costs.
Maturation of the diffusion model
LTX-2 is a so-called diffusion model. Such models are trained by incrementally adding “noise” to their training data; at generation time they run that process in reverse, starting from pure noise and progressively removing it until the output resembles the video assets the model was trained on.
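To make that mechanic concrete, here is a minimal, purely illustrative sketch of a reverse-diffusion sampling loop. It is not Lightricks’ code: the `predict_noise` function stands in for the trained denoising network, and the tensor shape is an arbitrary placeholder for a video latent.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_noise(x, t):
    # Stand-in for the trained denoiser: a real network estimates the
    # noise component of x at timestep t. Returning a fraction of x
    # keeps this toy loop runnable end to end.
    return 0.1 * x

# Generation starts from pure Gaussian noise shaped like the target
# latent -- (frames, height, width) in this toy example.
x = rng.standard_normal((8, 16, 16))

num_steps = 30  # distilled models cut this to as few as 4-8 steps
for t in reversed(range(num_steps)):
    x = x - predict_noise(x, t)  # strip away a little noise per step

print("final sample std:", round(float(x.std()), 4))
```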
With LTX-2, Lightricks has accelerated that denoising process, allowing creators to see live previews and iterate on ideas almost instantly. The model can also generate accompanying audio at the same time, including soundtracks, dialogue, and ambient sound effects, dramatically accelerating creative workflows.
That is a bigger deal than it may sound. Previously, creators had to generate audio separately from the video and spend time splicing the two together to get them perfectly in sync. Google’s Veo models are known for their strong synchronized sound generation, so these new capabilities help reinforce the idea that Lightricks’ technology is on par with the cutting edge.
When it comes to access options, Lightricks is using LTX-2 to give creators plenty of flexibility. The company’s flagship LTX Studio platform is aimed at professionals who want to create the highest-quality videos, sometimes at the expense of some speed. Lightricks claims that accepting a slight reduction in generation speed lets the model output native 4K video at up to 48fps, meeting the standards you’d expect from a cinematic production.
The platform also provides extensive creative control over the model’s customizable parameters. Full details are still to be announced, but they will include pose and depth controls, video-to-video generation, and alternative rendering options, with a release expected later this fall.
Lightricks co-founder and CEO Zeev Farbman believes LTX-2’s enhanced capabilities show just how much the diffusion approach is maturing. He said in a statement that LTX-2 is “the most complete and comprehensive creative AI engine we’ve ever built, combining synchronized audio and video, 4K fidelity, flexible workflows, and radical efficiency.”
“This is not vaporware or a research demo,” he said. “This is a real advancement in video generation.”
A big milestone
With LTX-2, Lightricks has positioned itself at the cutting edge of AI video generation, and the model builds on numerous industry firsts from its earlier LTXV releases.
In July, the company’s LTXV model family, including LTXV-2B and LTXV-13B, added support for long-duration video generation for the first time, with subsequent updates extending output to as long as 60 seconds. This makes AI video production “truly directable,” letting users start with an initial prompt and add further prompts in real time as the video streams.
Even before that one-minute update, LTXV-13B already had a reputation as one of the most powerful video generation models available. Launched in May, it was the industry’s first to support multi-scale rendering, which lets users progressively enhance a video by adding color and detail in successive passes, much as professional animators “layer” detail onto their work in traditional production pipelines.
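Generically, coarse-to-fine rendering of this kind looks like the sketch below: draft at low resolution, then upscale and refine with progressively lighter noise. This is a conceptual illustration only, not the LTXV implementation; `render_pass` stands in for a full denoising run.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_pass(shape, init=None, strength=1.0):
    # Stand-in for one diffusion rendering pass: a real model would
    # re-noise the upscaled draft by `strength` and denoise it again,
    # adding detail while preserving the existing composition.
    noise = rng.standard_normal(shape) * strength
    return noise if init is None else init + noise

def upsample(x, factor=2):
    # Nearest-neighbour upsampling of a (frames, height, width) draft.
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

draft = render_pass((8, 32, 32))      # pass 1: rough composition
for strength in (0.5, 0.25):          # later passes: lighter noise
    up = upsample(draft)
    draft = render_pass(up.shape, init=up, strength=strength)

print("final frame resolution:", draft.shape[1:])  # (128, 128)
```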
The 13B model was trained on data licensed from Getty Images and Shutterstock. The company’s partnerships with these content giants matter not only for the quality of the training data but also for ethical reasons: the model’s output carries far fewer of the copyright concerns that plague many other generative AI models.
Lightricks has also released a distilled version of LTXV-13B that simplifies and speeds up the diffusion process, generating content in as few as four to eight steps. The distilled version also supports LoRA, so users can fine-tune it to produce content that matches the aesthetic style of their project, along the lines of the sketch below.
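As a rough illustration, the earlier open LTX-Video weights can be driven through Hugging Face diffusers’ LTXPipeline; something similar should apply to the distilled checkpoint, though the checkpoint ID and LoRA path below are placeholders rather than confirmed release details.

```python
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

# Load the open LTX-Video weights; swap in the distilled checkpoint ID
# once it is published to benefit from few-step sampling.
pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
).to("cuda")

# Optional: apply a LoRA adapter to steer the model toward your
# project's aesthetic (the path is a placeholder for your own weights).
# pipe.load_lora_weights("path/to/your-style-lora")

frames = pipe(
    prompt="a slow dolly shot through a neon-lit street in the rain",
    width=704,
    height=480,
    num_frames=121,          # LTX expects frame counts of the form 8k+1
    num_inference_steps=8,   # few-step sampling, per the distilled model
).frames[0]

export_to_video(frames, "clip.mp4", fps=24)
```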
An innovative pricing model
Like those previous models, LTX-2 is being released under an open-source license and is a viable alternative to Alibaba’s Wan2 series of models. Lightricks emphasized that it is not merely “open access” but truly open source, meaning the pre-trained weights, dataset, and all tooling will be available on GitHub alongside the model itself.
LTX-2 is currently available to users through LTX Studio and its API, with an open source version expected to be released in November.
For those who prefer the paid version via the API, Lightricks offers flexible pricing. The fastest tier starts at just $0.04 per second of output and generates HD video in about five seconds. The Pro tier, which balances speed and quality, starts at $0.07 per second, while the Ultra tier costs $0.12 per second and delivers 4K resolution, 48fps generation, and full-fidelity audio. Prices also vary by resolution, with users able to choose from 720p, 1080p, 2K, and 4K.
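As a quick back-of-the-envelope check of what those rates mean for a single six-second clip (the “Fast” label for the entry tier is our shorthand, and resolution surcharges are ignored):

```python
# Per-second API rates quoted in the article, in US dollars.
RATES = {"Fast": 0.04, "Pro": 0.07, "Ultra": 0.12}

def clip_cost(seconds: float, tier: str) -> float:
    return seconds * RATES[tier]

for tier in RATES:
    print(f"{tier}: ${clip_cost(6, tier):.2f}")
# Fast: $0.24, Pro: $0.42, Ultra: $0.72
```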
Lightricks claims that thanks to the model’s processing efficiency, LTX-2 is priced up to 50% lower than competing models, making larger-scale projects more economically viable while delivering faster iteration and higher quality than previous generations. Alternatively, once the model is published on GitHub next month, users will be able to download the open-source version and run it on a consumer-grade GPU.