Luma AI's Ray3: First HDR, Reasoning AI Video Model Lands with Adobe

Ray3 elevates generative video with groundbreaking HDR and AI reasoning, empowering professionals through seamless Adobe integration.

September 18, 2025

In a significant development for the rapidly evolving field of generative artificial intelligence, Luma AI has officially launched Ray3, a new video generation model that pushes the boundaries of creative possibility. The company asserts that Ray3 is the first model of its kind capable of producing studio-quality High Dynamic Range (HDR) video and, crucially, the first to be endowed with a "reasoning" engine designed to function like a creative partner. The launch is amplified by a major strategic partnership with Adobe, which will integrate Ray3 into its Firefly application. That deal puts the model immediately in front of a vast base of creative professionals and signals a major inflection point for the adoption of AI in high-end video production.
The headline feature of Ray3 is its ability to generate video in native HDR, a long-sought capability for professional-grade AI video. This advancement allows the model to create footage with significantly greater contrast, deeper shadows, and more brilliant highlights, mirroring the quality expected from high-end cinema cameras.[1] Ray3 supports 10-, 12-, and 16-bit color depths and can export files in professional formats like ACES2065-1 EXR, ensuring seamless integration into existing film and advertising production pipelines.[2][3][4] This technical leap means that AI-generated content can now meet the stringent quality standards of professional color grading and visual effects workflows.[5] Beyond creating HDR content from scratch, the model can also convert standard dynamic range (SDR) videos into vibrant HDR, offering a powerful tool for remastering and enhancing existing footage.[4] For creative professionals, this eliminates a critical barrier, transforming AI-generated video from a novelty into a viable asset for broadcast and studio productions.[4]
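Luma has not disclosed how Ray3 performs its SDR-to-HDR conversion. For readers unfamiliar with the general idea, the technique is broadly known as inverse tone mapping: SDR code values are linearized and then expanded to a wider luminance range. The sketch below is a deliberately simplified illustration in Python/NumPy, not Luma's method; the function name, the gamma value, and the 1000-nit peak are all assumptions chosen for clarity.

```python
import numpy as np

def sdr_to_hdr_nits(sdr_8bit: np.ndarray, peak_nits: float = 1000.0,
                    gamma: float = 2.4) -> np.ndarray:
    """Toy inverse tone mapping: expand an 8-bit SDR frame to
    linear-light HDR luminance values in nits.

    Hypothetical illustration only. A simple power curve stands in
    for a real display EOTF, and a flat scale to peak_nits stands in
    for a real expansion function.
    """
    normalized = sdr_8bit.astype(np.float64) / 255.0  # 8-bit codes -> [0, 1]
    linear = normalized ** gamma                      # approximate display linearization
    return linear * peak_nits                         # scale to target peak luminance

# A 2x2 grayscale "frame": black, mid-gray, near-white, white.
frame = np.array([[0, 128], [230, 255]], dtype=np.uint8)
hdr = sdr_to_hdr_nits(frame)
print(hdr)
```

Production pipelines do far more than this (local contrast analysis, highlight reconstruction, color-volume mapping), which is precisely why native HDR generation, rather than after-the-fact expansion, is notable.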
Beyond its visual fidelity, Ray3’s most profound innovation may be what Luma AI describes as its multimodal reasoning system.[6] Company executives have positioned this as a fundamental shift away from earlier generative models, which Luma AI CEO Amit Jain likened to "slot machines – powerful but not intelligent."[6] Ray3, by contrast, is designed to "think through" a user's request.[6] The system can interpret creative intent with greater nuance, plan out complex scenes, generate visual concepts, and even evaluate its own outputs to refine the results.[5][4] This internal feedback loop leads to videos that are more coherent, with more consistent characters, more logical scene progressions, and physics that behave more realistically.[2][6] The model's reasoning capability is further enhanced by practical control features, such as visual annotation, which allows creators to draw directly onto images to specify object placement, motion paths, and character interactions without complex prompt engineering.[5] This leap from simple text-to-video translation to a more collaborative and intelligent process marks a significant step toward AI that can function as a genuine creative partner rather than a mere execution tool.[7]
To bridge the gap between advanced technology and practical daily use, Ray3 has been integrated into Luma AI's Dream Machine platform with features specifically tailored to the creative workflow. A new "Draft Mode" allows users to iterate on ideas up to five times faster and more cost-effectively, enabling rapid exploration of concepts before committing to a final, high-quality render.[5][2] Once a desired shot is achieved, a feature called "HiFi" can master the draft into production-grade 4K HDR footage.[5] This workflow is designed to restore a sense of play and experimentation to the creative process. The model's immediate impact is being amplified through key industry partnerships. The most significant of these is with Adobe, which is making Ray3 available in its Firefly app, marking the first time the software giant has integrated a third-party video model in this way.[6][8] For an initial two-week period, Ray3 will be available exclusively on Adobe Firefly and Luma's own Dream Machine platform.[1] This collaboration provides Luma AI with massive distribution while signaling a new strategy for Adobe, which is now embracing leading external models alongside its own. Further validating Ray3’s professional appeal, Luma AI has also partnered with major advertising and creative agencies, including Monks UK, Galeria, Strawberry Frog, and Dentsu Digital, to pioneer its use in global advertising and brand storytelling.[2][4]
In conclusion, the launch of Luma AI's Ray3 represents a pivotal moment for generative video. By achieving both professional-grade HDR output and a sophisticated reasoning system, the model addresses two of the biggest hurdles to widespread adoption in the creative industries: quality and control. The integration into Adobe's ecosystem ensures that this powerful technology will not remain in a research lab but will be placed directly into the hands of millions of creators. Ray3's ability to understand intent, self-correct, and produce footage that meets professional technical standards positions it as a formidable competitor to other high-profile models like OpenAI's Sora.[9] It is more than just an upgrade; it is a clear statement about the future of creative workflows, where AI is an intelligent collaborator capable of elevating human imagination to new, cinematic heights.

Sources