NVIDIA used CES 2026 to push generative AI workflows further onto local hardware, releasing RTX-focused upgrades that target a recurring bottleneck for PC creators: generating video at high resolution without running out of VRAM.
In a CES update posted on January 5, NVIDIA said the latest optimizations are rolling out across ComfyUI, Lightricks’ LTX-2, llama.cpp, Ollama, and Nexa AI’s Hyperlink, aiming to make advanced video, image, and language pipelines feasible on RTX AI PCs with lower latency and on-device privacy.
ComfyUI Gains NVFP4/8 Support and RTX Video Upscaling
The headline gains land in ComfyUI, where NVIDIA says PyTorch CUDA optimizations and native NVFP4/FP8 precision support can deliver up to 3x performance while cutting VRAM use by as much as 60% for video and image generation workloads.
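To see why lower-precision weights translate into large VRAM savings, a back-of-envelope calculation helps: FP16 stores two bytes per parameter, FP8 one, and FP4 half a byte. The sketch below is illustrative only; the model size is hypothetical, and real end-to-end savings (NVIDIA's "up to 60%" figure) are smaller than the weights-only reduction because activations and other buffers also occupy VRAM.

```python
# Back-of-envelope estimate of weight memory at different precisions.
# Weights-only view: real pipelines also hold activations, attention
# buffers, etc., so total VRAM savings are smaller than shown here.

BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "fp4": 0.5}

def weight_vram_gib(num_params_billions: float, precision: str) -> float:
    """Approximate weight memory in GiB for a model of the given size."""
    total_bytes = num_params_billions * 1e9 * BYTES_PER_PARAM[precision]
    return total_bytes / 2**30

# A hypothetical 13B-parameter video model (not a real LTX-2 size):
fp16 = weight_vram_gib(13, "fp16")  # ~24.2 GiB
fp4 = weight_vram_gib(13, "fp4")    # ~6.1 GiB
print(f"FP16: {fp16:.1f} GiB, FP4: {fp4:.1f} GiB, weights cut {1 - fp4 / fp16:.0%}")
```

The weights-only reduction from FP16 to FP4 is 75%; quoted whole-workload numbers like 60% reflect the parts of the pipeline that stay at higher precision.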
NVIDIA also confirmed RTX Video Super Resolution integration for ComfyUI via a new RTX Video node that upscales generated clips to 4K in seconds, with availability slated for next month.
LTX-2 Open Weights Bring Longer, Higher-Fidelity Local Video
These ComfyUI changes are designed to pair with LTX-2, Lightricks’ audio-video generation model launched with open weights at CES. NVIDIA’s GeForce post describes LTX-2 as capable of producing clips at up to 4K resolution and 50 FPS, up to 20 seconds long, with NVFP8 quantized weights that reduce model size by roughly 30% and can deliver up to 2x faster performance on RTX GPUs.
Lightricks CEO Zeev Farbman framed the launch as a shift toward local, customizable production workflows, describing the model as “designed to run locally on consumer GPUs.”
A 3D-Guided Pipeline for More Controllable 4K Generations
Beyond raw speedups, NVIDIA outlined an RTX-powered video pipeline that uses Blender as a 3D scene scaffold to improve controllability: create scene assets, render photorealistic keyframes from the 3D layout, then animate between keyframes and upscale the result to 4K with RTX Video.
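The appeal of keyframe-guided generation is easy to quantify: at the clip limits NVIDIA cites for LTX-2 (50 FPS, 20 seconds), only a small fraction of frames need to be rendered in Blender, and the model interpolates the rest. The keyframe interval below is an arbitrary illustrative choice, not a value from NVIDIA's workflow.

```python
# Frame budget for a keyframe-guided clip at LTX-2's quoted limits.
# The 2-second keyframe interval is an illustrative assumption.

FPS = 50
CLIP_SECONDS = 20
KEYFRAME_INTERVAL_S = 2  # render one Blender keyframe every 2 seconds

total_frames = FPS * CLIP_SECONDS                    # 1000 frames total
keyframes = CLIP_SECONDS // KEYFRAME_INTERVAL_S + 1  # 11 rendered keyframes
generated = total_frames - keyframes                 # frames the model fills in

print(f"{keyframes} rendered keyframes guide {generated} generated frames "
      f"({keyframes / total_frames:.1%} of the clip is hand-controlled)")
```

With roughly 1% of frames rendered from the 3D scene, the artist keeps precise control over composition and camera while the model does the bulk of the work.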
NVIDIA said the workflow will be downloadable next month, while the LTX-2 open weights and ComfyUI RTX updates are available now.
Local Search Expands to Video, and SLMs Run Faster
NVIDIA also highlighted two productivity-oriented upgrades. First, Nexa AI’s Hyperlink local search agent is adding video search in a CES beta, extending on-device retrieval beyond documents and photos to objects, actions, and speech inside video files.
Second, NVIDIA said recent community-driven improvements have accelerated small language model inference by 35% in llama.cpp and 30% in Ollama over the last four months, with downstream integrations expected in tools like LM Studio.