Algorithm Exposure

Updated: 2026-05

1. About This Page

This is the second installment of the “Fun Experiment” trilogy. Algorithm Exposure is a collection of experiments that intentionally push the internal processing of diffusion models in unusual directions to reveal what is normally hidden.

The goal isn’t simply to “produce beautiful images.” The goal is to take a peek inside the AI. I’ll select one or two examples to show during class. This is where students get a real sense of how diffusion models actually work.

This experience will serve as the foundation for imagining “what is happening inside video-generating AI” during the Runway exercise later on.

2. Experiment A: The World of CFG Zero

In the “Parameters” section, CFG was described as “prompt adherence.” Setting CFG to 0 causes the model to completely ignore the prompt.

How to Do It

  • Set K-Sampler’s cfg to 0
  • Whatever you type in the prompt has no effect on the result
  • What you get is “an average representation of what the model derives from the training data”

With SD 1.5, faces and landscapes come out blurry and averaged. With Flux, they still look realistic. Either way, you are looking at the model’s true face.
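
To see why CFG 0 means “ignore the prompt,” it helps to look at the guidance formula itself. Below is a minimal sketch in plain PyTorch; the tensors are toy stand-ins, not real model outputs. At every sampling step the model makes two noise predictions, one with your prompt and one without, and CFG blends them.

```python
import torch

# Toy stand-ins for the model's two noise predictions at one sampling step.
uncond = torch.randn(4, 64, 64)               # prediction with an empty prompt
cond = uncond + 0.1 * torch.randn(4, 64, 64)  # prediction with your prompt

def cfg_blend(uncond, cond, cfg):
    # Classifier-free guidance: push the prediction away from "no prompt"
    # and toward "your prompt" by a factor of cfg.
    return uncond + cfg * (cond - uncond)

# At cfg = 0 the prompt term vanishes entirely: the model runs unconditioned.
print(torch.equal(cfg_blend(uncond, cond, 0.0), uncond))  # True
```

With the prompt term gone, the sampler simply denoises toward the model’s average of its training data, which is exactly the blurry “true face” described above.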

3. Experiment B: How CFG Breaks Down at High Values

This time, go the other way and raise the CFG to 30 or 50.

  • Standard models (SD 1.5, SDXL): Oversaturated, distorted, or burnt-out colors
  • Flux dev: Does not work / Crashes
  • Z Image Turbo: Does not work / Crashes

By observing the point at which CFG begins to break down, you gain a practical understanding of why CFG 7–8 is the standard.
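
The same toy formula from Experiment A shows why large values break things: the prompt term is amplified linearly, so at CFG 30 or 50 the prediction is pushed far outside the value range the model was trained on, which shows up as clipped, burnt-out colors. The numbers below are illustrative, not real model outputs.

```python
import torch

uncond = torch.randn(4, 64, 64)
cond = uncond + 0.1 * torch.randn(4, 64, 64)

for cfg in (1.0, 7.5, 30.0, 50.0):
    guided = uncond + cfg * (cond - uncond)
    # The push away from the unconditional prediction grows linearly with cfg.
    print(f"cfg={cfg:5.1f}  max deviation: {(guided - uncond).abs().max():.3f}")
```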

4. Experiment C: Chaos at Steps 1–3

Reduce the number of steps to just 1, 2, and 3.

  • Step 1: Something faintly resembling noise, with a vague outline
  • Step 2: Beginning to take shape, but still blurry
  • Step 3: The subject finally becomes visible

An experiment to visualize the process of “gradually reducing noise.” Relative to a normal 20-step run, a 1-step image represents roughly 1/20 of the denoising work, and a 2-step image roughly 2/20.

Exception: Turbo/LCM models are designed to finish in 1–4 steps, so even with very few steps they produce reasonably good results. They were distilled specifically to denoise in a handful of steps, so this is a different phenomenon.
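
If you want to reproduce this outside ComfyUI, here is a minimal sketch using the Hugging Face diffusers library; the model ID, prompt, and seed are illustrative assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a cat sitting on a windowsill"  # any prompt works here
for steps in (1, 2, 3, 20):
    # Re-seed each run so the only variable is the step count.
    g = torch.Generator("cuda").manual_seed(42)
    image = pipe(prompt, num_inference_steps=steps, generator=g).images[0]
    image.save(f"steps_{steps:02d}.png")
```

Line the four outputs up side by side and the noise → shape → subject progression is obvious.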

5. Experiment D: Latent Space Interpolation

An experiment to observe the intermediate state between two prompts (A and B).

Procedure

  • Use the ConditioningAverage and ConditioningCombine nodes
  • Prompt A: a cat
  • Prompt B: a dog
  • Generate 5 images with weights of 0.0, 0.25, 0.5, 0.75, and 1.0
  • A “gradual transition from cat to dog” is visible

This is interpolation in CLIP embedding space (the text-interpretation vectors). You can watch the AI blend the two prompts continuously.
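
The same blend can be sketched in code: average the two prompts’ CLIP embeddings with a weight, which is essentially what ConditioningAverage does. A diffusers-based sketch; the model ID and seed are assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def embed(text):
    # Encode a prompt into CLIP text embeddings (1 x 77 x 768 for SD 1.5).
    tokens = pipe.tokenizer(
        text, padding="max_length", truncation=True,
        max_length=pipe.tokenizer.model_max_length, return_tensors="pt",
    ).input_ids.to(pipe.device)
    with torch.no_grad():
        return pipe.text_encoder(tokens)[0]

cat, dog = embed("a cat"), embed("a dog")
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    mixed = (1 - w) * cat + w * dog  # linear blend in CLIP space
    g = torch.Generator("cuda").manual_seed(42)
    pipe(prompt_embeds=mixed, generator=g).images[0].save(f"blend_{w:.2f}.png")
```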

6. Experiment E: Same Seed, Different Prompt

Keep the seed fixed and only change the prompt.

  • Fixed seed + a cat: A specific cat appears
  • Same seed + a dog: A dog appears where the cat was (the composition is similar)
  • Same seed + a robot: A robot appears in the same spot

You can see that while the seed determines the composition and placement of the image, the prompt determines what is drawn. This gives you a sense of how independent the two are.
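
In code form, the trick is simply to re-create the generator with the same seed before each run. A diffusers sketch; the model ID and seed are assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

for prompt in ("a cat", "a dog", "a robot"):
    # Identical starting noise for every prompt: only the conditioning changes.
    g = torch.Generator("cuda").manual_seed(1234)
    pipe(prompt, generator=g).images[0].save(prompt.replace(" ", "_") + ".png")
```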

7. Experiment F: Peeking at the Latent Image Mid-Generation

K-Sampler typically displays the results after all steps are complete. If you want to view the intermediate results:

  • Split the run across two samplers, e.g. “Sampler 1 handles steps 0%–30%, then Sampler 2 handles 30%–100%” (K-Sampler (Advanced)’s start/end step settings do this; the ConditioningSetTimestepRange node similarly restricts a conditioning to part of the range, specified as a fraction)
  • Decode the intermediate latent image with VAE Decode to preview it

You can observe “what the image looks like in Step 5/20” and “what starts to appear in Step 15/20.”

This is an experiment made possible by ComfyUI’s node-based UI. It’s basically impossible to do in a web UI.
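
Outside ComfyUI, the code equivalent of this node trick is a per-step callback that decodes the latent mid-run. A sketch assuming a recent diffusers version (which exposes callback_on_step_end); the model ID and the steps chosen for peeking are assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def peek(pipeline, step, timestep, kwargs):
    # Decode and save the latent image at a few intermediate steps.
    if step in (5, 10, 15):
        latents = kwargs["latents"]
        with torch.no_grad():
            decoded = pipeline.vae.decode(
                latents / pipeline.vae.config.scaling_factor
            ).sample
        img = pipeline.image_processor.postprocess(decoded, output_type="pil")[0]
        img.save(f"peek_step_{step:02d}.png")
    return kwargs  # the callback must hand the kwargs back

pipe("a cat", num_inference_steps=20, callback_on_step_end=peek)
```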

8. Experiment G: Injecting Noise

Mix noise into the latent image before passing it to K-Sampler.

  • In standard img2img, the process involves “adding a little noise to the original image before applying the diffusion process.”
  • Here, the process involves “mixing a little of the original image into pure noise” before applying the diffusion process.

If you set the denoise value to extreme levels (such as 0.95 or 0.99), you may end up with a “creepy result” where the silhouette of the original image remains faintly visible.
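
The denoise (strength) value is easy to sweep in code. A diffusers img2img sketch; the model ID, input file, and prompt are assumptions.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = Image.open("photo.png").convert("RGB").resize((512, 512))
for strength in (0.5, 0.95, 0.99):
    # strength plays the role of ComfyUI's denoise: how much of the
    # original image is replaced by noise before sampling.
    g = torch.Generator("cuda").manual_seed(7)
    out = pipe("a forest at dusk", image=init, strength=strength,
               generator=g).images[0]
    out.save(f"denoise_{strength}.png")
```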

9. Experiment H: Using the Wrong VAE

Run “VAE Decode” with a VAE that doesn’t match the model that generated the image.

  • Normal: Generated using SD 1.5 → Decoded using an SD 1.5 VAE
  • Experiment: Generated using SD 1.5 → Decoded using an SDXL VAE

Because the two latent formats differ, the decoded result comes out riddled with color noise. You can really feel that the VAE is doing the translation between the latent format and the final image.
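
A sketch of the mismatch in diffusers: generate SD 1.5 latents, then decode them once with the matching VAE and once with an SDXL VAE. The model and VAE IDs are illustrative assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline, AutoencoderKL

# float32 throughout to sidestep fp16 precision quirks in the SDXL VAE.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"
).to("cuda")
sdxl_vae = AutoencoderKL.from_pretrained("stabilityai/sdxl-vae").to("cuda")

# output_type="latent" returns the raw latent instead of a decoded image.
latents = pipe("a cat", output_type="latent").images

for name, vae in (("matching_vae", pipe.vae), ("sdxl_vae", sdxl_vae)):
    with torch.no_grad():
        img = vae.decode(latents / vae.config.scaling_factor).sample
    pipe.image_processor.postprocess(img, output_type="pil")[0].save(f"{name}.png")
```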

10. Experiment I: The Sampler Runs Amok

Try out sampler and scheduler combinations that you wouldn’t normally use.

  • dpm_2 + ddim_uniform
  • dpmpp_3m_sde_gpu + exponential

Since each sampler has a different “noise reduction strategy,” combinations that aren’t compatible will produce poor results.

I usually keep these set to euler/normal, but this lets you experience why there are multiple samplers.
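
In diffusers, samplers and schedulers are bundled into scheduler classes, so the experiment reduces to swapping the class. The correspondences to ComfyUI’s sampler names below are approximate, and the model ID is an assumption.

```python
import torch
from diffusers import (
    StableDiffusionPipeline,
    EulerDiscreteScheduler,       # roughly ComfyUI's "euler"
    KDPM2DiscreteScheduler,       # roughly "dpm_2"
    DPMSolverMultistepScheduler,  # roughly the "dpmpp" multistep family
)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

for name, cls in (
    ("euler", EulerDiscreteScheduler),
    ("dpm_2", KDPM2DiscreteScheduler),
    ("dpmpp", DPMSolverMultistepScheduler),
):
    # Swap the solver while keeping model, prompt, seed, and steps identical.
    pipe.scheduler = cls.from_config(pipe.scheduler.config)
    g = torch.Generator("cuda").manual_seed(99)
    pipe("a cat", num_inference_steps=20, generator=g).images[0].save(f"{name}.png")
```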

11. Experiment J: Looping the Same Image Repeatedly

Feed the output of img2img back into the input and repeat the img2img process multiple times.

  • Step 1: Photo → img2img (denoise 0.5)
  • Step 2: Output from Step 1 → img2img (denoise 0.5)
  • Step 3: Output from Step 2 → img2img (denoise 0.5)
  • Repeat 5–10 times

Since the process of “forgetting half” is repeated each time, the result gradually becomes an abstract painting completely detached from the original image. This visualizes the process of the AI “continuing to dream” in succession.
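
The feedback loop is only a few lines in code. A diffusers sketch; the model ID, input file, and prompt are assumptions, and the prompt is kept deliberately neutral so that the drift comes from the loop itself.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = Image.open("photo.png").convert("RGB").resize((512, 512))
for i in range(8):
    # Each pass replaces roughly half the image (strength 0.5), so the
    # drift away from the original compounds with every iteration.
    image = pipe("a photo", image=image, strength=0.5).images[0]
    image.save(f"loop_{i:02d}.png")
```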

12. How It Is Handled in Class

  • You don’t need to show all ten experiments in class
  • Demonstrate 1 or 2 of them in a 5-minute demo
  • Students can try out the ones they find “interesting” at a later date
  • Design it so students don’t get upset if “the image doesn’t look right” (in fact, the goal is for it to break)

13. Credit Budget

For this experiment, SD 1.5 is sufficient (in fact, SD 1.5 produces more interesting collapse patterns). 1–2 credits per run × 30–40 runs = approximately 30–80 credits.

There’s no need for a single student to do everything. The teacher should try it out beforehand and show one interesting example in class.

14. What’s Next

  • Edge Cases — Experiments involving intentional failure (e.g., ControlNet collisions)
  • To Runway — An overview of video generation AI and the bridge to Runway