
It's rare to see an AI video model that isn't the technological equivalent of a closed black box, and it's even rarer to see a video model optimized enough to run locally on your devices rather than relying on cloud-based services. LTX-2, Lightricks' new AI video model built in partnership with Nvidia, can do both.
Lightricks launched the model with Nvidia at CES 2026, one of the largest technology fairs. Nvidia also introduced a number of next-generation, AI-based software updates for gamers, including an agent assistant, an educational advisor and an AI upscaler for smoother, more defined graphics.
Lightricks' new model will be capable of creating AI clips up to 20 seconds long at 50 frames per second, the longest end of the spectrum of AI video capabilities in the industry. The model will also include native audio. The ability to output in 4K will be essential for creators who want to use the new model for professional-quality projects. But it's the on-device capabilities that really set the new model apart from competitors like Google's Veo 3 and OpenAI's Sora.
The model was built with professional creators in mind, whether individual filmmakers or large studios. The focus on clip quality, as well as its on-device optimizations, aims to make it one of the most engaging and secure options for AI-inclined creators.
To learn more about the hardware, check out the new HP professional laptops suited for computing and AMD's fast mobile processors.
When AI companies talk about "open" models, they are usually referring to open-weight AI models. These models are not truly open source, which would require every part of the process to be disclosed, but they do give developers insight into how the model was built. Weights are like ingredients in a cake: open-weight models tell you all the ingredients that go into the batter, but not the exact measurements of each. Lightricks' model is open-weight and available now on Hugging Face and ComfyUI.
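The open-weight idea can be sketched with a toy example (the numbers here are hypothetical, not LTX-2's actual parameters): the released weights fully determine what the model outputs, so anyone can run it, but they say nothing about the data or training recipe that produced them.

```python
# Toy "open-weight" model: the published numbers (the ingredients)
# fully determine the model's behavior, but the training process
# that produced them (the exact recipe) stays undisclosed.
weights = {"w": 2.0, "b": 0.5}  # hypothetical released weights


def model(x, params):
    """Anyone holding the weights can reproduce the model's output."""
    return params["w"] * x + params["b"]


print(model(3.0, weights))  # prints 6.5
```

Real open-weight releases work the same way at a vastly larger scale: billions of numbers are published, while the training data and procedure remain private.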
This is an example of the level of detail in LTX-2 AI videos.
Lightricks' new video model is also capable of running locally on your devices. That is not normally the case for AI video. Generating AI video clips, even short ones, is a very computationally intensive process, which is why video models use more energy than other AI tools. To get the best results with most AI video generators, you'd rely on computers in data centers run by Google or OpenAI to do the heavy lifting and generate your videos in the cloud rather than on your laptop or phone. With Nvidia's RTX chips, you can get these high-quality results without outsourcing the workload to a cloud service.
There are many benefits to running AI models locally. You control your data; you don't have to share it with big tech companies that might use it to improve their own AI models. This is an extremely important factor for major entertainment studios that are diving into generative AI but must protect their intellectual property rights. Running AI models on your device, with the right equipment, can also give you results faster. The average AI video generation takes one to two minutes per prompt, so reducing that could save time and money, two of the strongest arguments for creators integrating AI into their work.
To find out more, check out the AI note-taking ring expanding the AI wearable industry and the new Gemini features available on Google TV devices.