LoRA

Updated: 2026-05

1. What You’ll Learn on This Page

LoRA (Low-Rank Adaptation) is a “lightweight fine-tuning” method applied on top of a base model. Retraining the full base model (several gigabytes in size) is costly; a LoRA instead teaches the model a specific style or character with an adapter file of only tens to hundreds of megabytes.

In class, you’ll choose from the 573 LoRAs pre-installed on Comfy Cloud.

2. What You Can Do with LoRA

Typical uses:

  • Art Style Changes: “Ghibli-style,” “watercolor-style,” “1980s anime-style,” “cyberpunk-style,” etc.
  • Specific Characters: Recreation of existing characters (note: some may be subject to copyright restrictions)
  • Specific Styles: Photorealistic, oil painting-style, comic-style
  • Specific effects: Enhanced detail, specific lighting effects, skin texture adjustments
  • Specific objects: Specific buildings, specific tools

Think of LoRA as slightly distorting the base model. You can adjust the strength of the distortion (weight).
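The “distortion” picture can be made concrete: a LoRA stores two small matrices whose product is a low-rank correction added to a frozen base weight, scaled by the strength. A minimal numpy sketch (the dimensions, rank, and `strength` value here are illustrative, not taken from any real model):

```python
import numpy as np

d_out, d_in, rank = 1024, 1024, 8   # illustrative sizes; real layers vary

W = np.random.randn(d_out, d_in)    # frozen base weight (never retrained)
A = np.random.randn(rank, d_in)     # LoRA "down" matrix
B = np.random.randn(d_out, rank)    # LoRA "up" matrix
strength = 0.8                      # cf. strength_model in the LoRA node

W_adapted = W + strength * (B @ A)  # low-rank correction on top of the base

# The adapter ships only A and B -- far fewer numbers than W itself:
full_params = W.size                # 1,048,576
lora_params = A.size + B.size       # 16,384
print(lora_params / full_params)    # 0.015625, i.e. ~1.6% of the full matrix
```

This is why a LoRA file is tens of megabytes while the base model is gigabytes: only the small correction matrices are stored.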

3. Examples of Pre-installed LoRA Models (Comfy Cloud)

Since I can’t cover all 573 of them, I’ll break them down by major genre.

3.1 Flux-style LoRA

  • flux1-ghibli_style — Ghibli-style
  • flux1-cinematic_kodak_motion_picture_film_still_style — Cinematic film-style
  • flux1-cyberpunk_anime_style — Cyberpunk anime
  • flux1-comic_book — Comic book style
  • flux1-80s_fantasy_movie — 1980s fantasy movie
  • flux1-2000s_analog_core — 2000s analog photo style
  • flux1-iphone_photo_5l_realism_booster — iPhone photo texture
  • flux1-niji_anime_style — Niji-style anime
  • flux1-pokemon_trainer_sprite_pixelart — Pixel art

3.2 Detail Enhancement Models

  • flux1-detailifier — General detail enhancement
  • flux1-add_micro_details_concept — Micro-details
  • flux1-better_faces_cultures — Improvements to diverse facial features
  • flux1-eye_detail_inpaint — Eye detail refinement

3.3 LoRA for Video

  • wan2.2_t2v_lightx2v_4steps_lora_v1.1_* — Accelerated Wan 2.2 (4 steps)
  • wan2.2_i2v_lightx2v_4steps_lora_v1_* — Accelerated Wan 2.2 image-to-video
  • AnimateLCM_sd15_t2v_lora — SD 1.5-based video generation

You can view the list of LoRAs by clicking the “Models” icon in the left sidebar.

4. The LoRA Workflow

Differences from the minimal workflow:

  1. Insert a Load LoRA node between the Load Checkpoint and CLIP Text Encode nodes
  2. Select the LoRA file you want to use in the Load LoRA node
  3. Connect the MODEL output of the Load LoRA node to the KSampler
  4. Connect the CLIP output of the Load LoRA node to the CLIP Text Encode nodes

Key parameters of the LoRA node:

  • strength_model: Strength of influence on the main model (0.0–1.5; typically 0.6–1.0)
  • strength_clip: Strength of influence on CLIP (text interpretation) (also 0.0–1.5)

Setting both to 1.0 applies the LoRA at its trained strength. A value of 0.5 roughly halves the effect, and 0.0 disables it; values above 1.0 exaggerate the effect, often at the cost of artifacts.
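The strength parameter is a simple linear scale on the LoRA’s correction. The sketch below uses a hypothetical `apply_lora` helper to illustrate the behavior described above (ComfyUI’s actual patching lives inside the Load LoRA node); `delta` stands in for the precomputed correction:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))             # frozen base weight
delta = rng.standard_normal((64, 64)) * 0.1   # LoRA correction (B @ A, precomputed)

def apply_lora(W, delta, strength):
    """Blend the LoRA correction into the base weight, scaled by strength."""
    return W + strength * delta

assert np.allclose(apply_lora(W, delta, 0.0), W)  # 0.0 leaves the base model untouched

half = apply_lora(W, delta, 0.5)
full = apply_lora(W, delta, 1.0)
# The 0.5 result sits exactly halfway between base and full effect:
assert np.allclose(half - W, (full - W) * 0.5)
```

The same scaling applies independently to the diffusion model (strength_model) and the text encoder (strength_clip).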

5. Include “Trigger Words” in the Prompt

LoRA models are often trained to respond more strongly when specific words appear. These are called trigger words.

Examples:

  • Ghibli-style LoRA → Include ghibli style or studio ghibli in the prompt
  • Cyberpunk anime LoRA → cyberpunk anime style

Trigger words are often found in the LoRA filename or in the description section of the distribution platform (such as Civitai). For the pre-installed LoRA models in Comfy Cloud, the filename itself often serves as a hint.

6. Stacking Multiple LoRAs

LoRA nodes can be chained together.

Example: Applying Ghibli-style + Detail Enhancement simultaneously

  • LoRA 1: flux1-ghibli_style with strength 0.8
  • LoRA 2: flux1-detailifier with strength 0.5

However, if you stack too many, their effects may conflict and cause the structure to collapse. It’s safest to limit yourself to 2 or 3.
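Stacking works because each chained LoRA node simply adds its own scaled correction on top of the previous result. A sketch of the Ghibli + detail example above, with illustrative random matrices standing in for the real adapter weights:

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.standard_normal((64, 64))                    # base model weight
delta_ghibli = rng.standard_normal((64, 64)) * 0.1   # stands in for flux1-ghibli_style
delta_detail = rng.standard_normal((64, 64)) * 0.1   # stands in for flux1-detailifier

# Chained LoRA nodes accumulate their corrections one after another:
W1 = W + 0.8 * delta_ghibli    # LoRA 1, strength 0.8
W2 = W1 + 0.5 * delta_detail   # LoRA 2, strength 0.5

# For the final weights, chain order does not matter -- addition commutes:
W_other_order = (W + 0.5 * delta_detail) + 0.8 * delta_ghibli
assert np.allclose(W2, W_other_order)
```

The conflicts mentioned above arise not from the order of the nodes but from the corrections themselves pulling the weights in incompatible directions.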

7. Compatibility with the Base Model

LoRA is associated with the base model used during training.

  • LoRA for SD 1.5 → Use with SD 1.5-based models
  • LoRA for SDXL → Use with SDXL-based models
  • LoRA for Flux → Use with Flux dev / schnell-based models
  • LoRA for Wan → Use with Wan-based models

If the base model is different, loading a LoRA model won’t work (or will produce strange results). The prefixes in the LoRA filenames, such as flux1, flux2, and sd15, are a clue.
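The compatibility rule comes down to tensor shapes: a LoRA’s matrices are sized for the layers of the architecture it was trained on, so its correction simply cannot patch a layer of a different size. An illustrative check (the dimensions are made up for the example):

```python
import numpy as np

W_other = np.zeros((1280, 1280))  # illustrative layer size in one base architecture
A_lora = np.zeros((8, 768))       # LoRA matrices sized for a different architecture
B_lora = np.zeros((768, 8))

compatible = (B_lora @ A_lora).shape == W_other.shape
print(compatible)  # False: a (768, 768) correction cannot patch a (1280, 1280) weight
```

In practice the loader either skips the mismatched layers or errors out, which is why the results look wrong or the LoRA has no effect.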

8. Estimated Credit Usage

Adding a LoRA barely changes the computational load, so credit consumption is virtually the same as running the base model alone.

However, if you pair a LoRA with a heavier base model such as Z Image Turbo or Flux dev, that model’s own resource cost (2–10 cr) still applies on top.

9. About Custom LoRA (Civitai Import)

You cannot use your own LoRA models on the Free or Standard plans. You can import models from Civitai or HuggingFace on the Creator plan ($35/month) or higher.

Since the class assumes the Free plan, you’ll choose from the 573 pre-installed LoRAs. Even so, these cover a wide range of creative results.

Any demo run from Dr. Nakayasu’s personal Standard account is for demonstration purposes only; students will use the pre-installed LoRAs on the Free plan.

10. Exercises (for Class Use)

Exercise A: Changing the Art Style Using the Same Prompt

  • Base prompt: a young woman with curly hair, sitting by a window, looking thoughtful
  • No LoRA → Standard realistic style
  • LoRA: flux1-ghibli_style (strength 0.8) → Ghibli-style
  • LoRA: flux1-cyberpunk_anime_style (strength 0.8) → Cyberpunk
  • LoRA: flux1-comic_book (strength 0.8) → Comic book style
  • Line up the four images to see how much the art style dominates

Exercise B: Observing the Effect by Changing strength

  • Same prompt, same seed, same LoRA
  • Run the model four times with strength set to 0.3, 0.6, 1.0, and 1.5
  • Observe at what point the effect becomes too strong

Exercise C: Stacking LoRAs

  • A three-layer stack consisting of a base model, a style-specific LoRA, and a detail-enhancement LoRA
  • Experiment with strength values ranging from 0.5 to 0.8 for each layer
  • Observe how the effects change depending on the stacking method

11. What’s Next

  • Image to Video — Converting still images into videos (Wan 2.2, etc.)
  • Algorithm Exposure — Experiments that peek inside the model, such as CFG extremization and latent space interpolation
  • Edge Cases — Experiments that intentionally break the model