To Runway
Updated: 2026-05
1. About This Page
This page summarizes how to apply the image-generation techniques covered in Comfy Cloud to the upcoming Runway exercise. It is meant to show students, at the end of the class, how the skills they have learned here will stay useful going forward.
2. What You’ve Learned So Far
Here is what you have practiced on Comfy Cloud:
- Basic Operation of Diffusion Models — Images emerge from noise
- The Sense of “Opening” Nodes — Internal processing is not a black box
- How Prompts and Parameters Work — Steps, CFG, Sampler, Seed
- Control Mechanisms — ControlNet, LoRA, img2img, inpaint
- Video Generation Mechanisms — Stills + Prompts → Video (Wan 2.2, LTX-2.3)
- Experiencing AI’s Limits — Contradictory prompts, extreme resolution values, edge cases
These are not skills specific to image-generating AI; they are useful for understanding generative AI in general. Since video-generating AI is essentially a “3D version of a diffusion model” (the same process with a time axis added), these concepts apply to it as well.
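The first three points above (noise, steps, seed) can be sketched as a toy loop in plain Python with NumPy. The `toy_denoise` function and its linear update are illustrative stand-ins, not a real sampler — a real diffusion model uses a trained network to predict the noise — but they show how a seed fixes the starting noise and how each step removes part of it:

```python
import numpy as np

def toy_denoise(seed, steps=20, size=8):
    """Toy sketch of the diffusion idea: start from seeded noise and
    nudge it toward a target over several steps. Illustrative only."""
    rng = np.random.default_rng(seed)       # the Seed fixes the starting noise
    x = rng.standard_normal((size, size))   # pure Gaussian noise (the "latent")
    target = np.ones((size, size))          # stand-in for the model's prediction
    for _ in range(steps):                  # Steps = number of denoising passes
        x = x + 0.3 * (target - x)          # each pass removes part of the noise
    return x

a = toy_denoise(seed=42)
b = toy_denoise(seed=42)   # same seed and steps -> bit-identical result
c = toy_denoise(seed=7)    # different seed -> a slightly different "image"
```

Same seed in, same image out: that determinism is exactly why seeds matter for the iteration workflow later in this page.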
3. What is Runway?
Runway is a commercial video-generation AI platform.
- Input: Text, images, video
- Output: High-quality video (typically 5–10 seconds, extendable to several tens of seconds)
- Features: Superior physical representation and facial consistency compared to open-source video AI
- Uses: Commercial video production, short videos for social media, pre-production
- Pricing: Monthly subscription (tiered plans such as Free, Standard, Pro, and Unlimited)
Runway is one of the leading video-generation AI platforms, on par with the top-tier commercial offerings Sora 2 and Veo 3.
4. The Relationship Between Comfy Cloud and Runway
| Category | Comfy Cloud | Runway |
|---|---|---|
| Access | Browser | Browser |
| Internals | Node-based, visible | Button UI, black box |
| Degree of Control | High | Limited |
| Video Quality (as of 2026) | Medium to High | High |
| Learning Curve | High | Low |
| Use Cases | Learning, Experimentation, Customization | Production, Live Output |
They serve different purposes: Comfy Cloud is where you understand the mechanism, while Runway is where you produce high-quality output.
Ideally, you should be able to use both. The goal of this course is to use high-quality tools while understanding how they work.
5. How the Skills You Gained in Comfy Cloud Will Be Put to Use in Runway
5.1 Prompt Design
- Learned about “motion-explicit prompts” in Comfy Cloud → The same applies to Runway
- Understanding of negative prompts → Runway likely relies on a similar mechanism internally
- The order of prompts affects the results → This is common to both
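The three points above can be sketched with a hypothetical prompt-builder. The `build_prompt` helper and its field names are illustrative, not part of any Runway or Comfy Cloud API; the point is simply to lead with the subject and explicit motion, state the camera separately, and keep the negative prompt apart:

```python
def build_prompt(subject, motion, camera, style):
    # Order matters in both tools: subject and motion first,
    # then camera, then style. (Hypothetical helper for illustration.)
    return ", ".join([subject, motion, camera, style])

prompt = build_prompt(
    subject="a red kite over a windy beach",
    motion="the kite dips left, then climbs steadily",  # motion-explicit
    camera="slow upward tilt, wide shot",               # camera stated separately
    style="golden hour, 35mm film look",
)
negative = "blurry, warped hands, on-screen text"       # what to suppress
```

Keeping the fields separate also makes it easy to change one element (say, the camera move) while holding the rest of the prompt constant between runs.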
5.2 The Concept of Seeds and Iteration
- Runway also uses the same concept of seeds
- “Reproducing a shot you like” and “changing the seed to create variations” are essentially the same process
- The feel for adjusting parameters transfers directly
5.3 A Sense of the Limits of Physics and Consistency
- Experienced “multiple characters breaking down” and “strange physics” in Comfy Cloud → Runway has similar limitations
- However, as a top-tier commercial product, Runway keeps these limitations to a minimum
- The intuition is that “if something doesn’t work in Comfy Cloud, it’s likely to be difficult in Runway as well”
5.4 Working Backwards from the Mechanism
- Designing prompts to avoid scenarios where AI struggles (complex hand movements, text, physical contact)
- The importance of specifying camera movement and composition separately
- Adopting a mindset of “using AI to create something yourself” rather than “leaving it up to the AI”
6. What to Focus on in the Runway Exercise
The “Runway” section of the course consists of two basic sessions and five practical sessions. The expected learning outcomes are as follows:
Basics (2 sessions)
- Runway UI and key features
- Understanding the differences between T2V, I2V, and V2V
- Camera control and Director Mode
- Managing plans and credit usage
Practical Sessions (5 sessions)
- Planning and storyboarding for short films
- Integrating multiple shots (using editing software)
- Synchronizing sound effects and music
- Finalizing the finished work
- Critique and peer review
If you have a feel for “creating a single shot” in Comfy Cloud, you can focus on the stage of “editing multiple shots into a video project” in Runway.
7. The Big Picture of Video-Generating AI
Major models beyond the two tools covered in class (Comfy Cloud and Runway):
- Sora 2 (OpenAI) — Top-tier quality, available for commercial use
- Veo 3.1 (Google) — On par with Sora 2, integrates with Google services
- Kling 2.5 (Kuaishou) — Chinese-made, lower pricing
- Hailuo 02 (MiniMax) — Chinese-made, with real-time capabilities as its strength
- Pika 2.2 — Low-cost commercial tier
- Wan 2.2 / LTX-2.3 — Open source (already tested via Comfy Cloud)
Each model has its strengths and weaknesses, and choosing the right one for the task at hand will be the norm in 2026.
8. If You Plan to Continue Outside of School
If you want to continue learning about generative AI after class:
- Learn systematically using resources such as NVIDIA AI Learning Essentials, as introduced in the “External Resources” section
- Try out the latest models on Civitai and Hugging Face (not available on Comfy Cloud Free; available on the Creator plan)
- Follow the latest workflows on the official ComfyUI Discord
- Check the latest benchmarks for video-generating AI monthly on Artificial Analysis
This is a field where technology advances rapidly, and the landscape can change in just six months. In the long run, “fundamental skills that remain relevant even as tools change” are the most valuable.
9. In Conclusion
Course Design Objectives:
Being someone who uses AI and being someone who understands how it works are two different things. Only those who can do both will be able to master AI as a tool in the long run.
Opening a node in Comfy Cloud to observe the diffusion process gives you a perspective that will still apply to AI tools five years from now, including tools that do not yet exist. From that perspective, Runway is just one of many powerful tools.
From here on out, it’s a journey for each of us to discover how to incorporate AI into our own creative work.
10. Navigation
- History — The History of Image-Generating AI
- AI Tools Overview — An Overview of Major Models
- External Resources — Useful External Resources
- Minimum Workflow — The Basic Workflow to Get Started
