SCROLL 006
·
2026.04.15 18:35
lora
training
checkpoints
Teaching What I Haven't Written Down Yet
The daily work hasn't stopped. I'm still running batches, still scrolling through hundreds of outputs, still picking the 1-in-15 winners and posting them. Still rotating new LoRAs into the random pool, pulling ones that aren't earning their slot, testing replacements. That part of the pipeline runs on muscle memory at this point.
But the past few days I've been spending my planning time on something different — writing up outlines for tutorials on training your own LoRAs and checkpoints.
This is a topic I've been circling for a while. All the tutorials I've written so far are about *using* the tools. How to prompt, how to stack LoRAs, how to batch generate, how to pick winners. That's the workflow side. But the question I keep seeing from people — on Civitai, in comments, in DMs — is some version of: "How do I make my *own* LoRA?"
And it's a fair question. Once you get good at using other people's LoRAs, the natural next step is wanting one that does exactly what you want. A style that doesn't exist yet. A character that's yours. A quality modifier tuned to your specific taste. The existing LoRAs get you 90% of the way there, but that last 10% is the difference between your work looking like everyone else's and looking like *yours*.
So I'm outlining two tracks. One for LoRA training — which is more accessible, faster to learn, and something you can do on consumer hardware if you're patient. And one for checkpoint training/merging — which is deeper, slower, and more about understanding how the models actually work under the hood.
The LoRA track is closer to done as an outline. Dataset preparation, captioning strategies, training parameters, what actually matters vs. what people overthink. I've trained enough of my own to know where the landmines are — bad captions will ruin a LoRA faster than bad training settings ever will.
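For anyone who wants a head start before that tutorial lands, the single highest-value habit is sanity-checking your captions before you ever start a run. Here's a minimal sketch of what I mean, assuming a kohya-style dataset folder where every image has a same-named .txt caption next to it (the folder path is a placeholder):

```python
from pathlib import Path

DATASET_DIR = Path("datasets/my_style/10_mystyle")  # placeholder kohya-style folder
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

missing, empty, dupes = [], [], {}

for img in sorted(DATASET_DIR.iterdir()):
    if img.suffix.lower() not in IMAGE_EXTS:
        continue
    cap = img.with_suffix(".txt")
    if not cap.exists():
        missing.append(img.name)                      # image with no caption at all
        continue
    text = cap.read_text(encoding="utf-8").strip()
    if not text:
        empty.append(img.name)                        # caption file exists but is blank
    dupes.setdefault(text, []).append(img.name)       # group identical captions

print(f"images missing captions: {len(missing)}")
print(f"images with empty captions: {len(empty)}")
for text, names in dupes.items():
    if text and len(names) > 1:
        print(f"{len(names)} images share the caption: {text[:60]}...")
```

Ten minutes with a check like this catches the kind of caption problems that no amount of fiddling with learning rates will fix later.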
The checkpoint track is more ambitious. Merging is one thing — that's mostly about knowing which models complement each other and what merge ratios do what. But actual fine-tuning from a base model is a different conversation entirely. I'm still figuring out how much of that to include versus how tightly to keep it focused on what's practical for someone running A1111 on a gaming PC or a Mac.
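To give a sense of how simple the merging side really is under the hood, here's a minimal sketch of a plain weighted merge, assuming two safetensors checkpoints with matching keys; the file paths and the 0.6/0.4 split are placeholders, not a recommendation:

```python
from safetensors.torch import load_file, save_file

ALPHA = 0.6  # fraction taken from model A; the rest comes from model B
model_a = load_file("models/checkpointA.safetensors")  # placeholder paths
model_b = load_file("models/checkpointB.safetensors")

merged = {}
for key, tensor_a in model_a.items():
    tensor_b = model_b.get(key)
    if tensor_b is None or tensor_b.shape != tensor_a.shape:
        merged[key] = tensor_a        # keep A's weights where B has no matching tensor
        continue
    # a basic merge is just per-tensor linear interpolation
    merged[key] = (ALPHA * tensor_a.float() + (1 - ALPHA) * tensor_b.float()).to(tensor_a.dtype)

save_file(merged, "models/merged_0.6A_0.4B.safetensors")
```

Everything beyond that (block-weighted merges, add-difference, actual fine-tuning) is refinement on top of this idea, which is part of why the checkpoint track is harder to scope.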
Meanwhile the LoRA rotation continues. Every few days something new catches my eye on Civitai, I download it, run it through the testing process — solo at different weights, then stacked with my base LoRAs, then mixed into a batch run. Most don't stick. Maybe 1 in 10 makes it into the permanent rotation. The ones that do usually solve a specific problem I didn't know I had — better fabric rendering, more natural hand poses, lighting that plays nicer with certain checkpoints.
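The testing gauntlet itself is mechanical enough to script. A rough sketch of how I lay out the test prompts using A1111's `<lora:name:weight>` syntax; the LoRA names, weights, and base prompt here are placeholders, not my actual stack:

```python
# Build the prompts a candidate LoRA gets run through:
# solo at a few weights first, then stacked on top of the base stack.
CANDIDATE = "new_lora_name"  # placeholder filename of the LoRA being tested
BASE_STACK = "<lora:people_works:0.5> <lora:detail_tweaker:0.3>"  # placeholder base stack
BASE_PROMPT = "1girl, forest, dappled sunlight, masterpiece, best quality"

test_prompts = []
for weight in (0.3, 0.45, 0.6, 0.8):
    # solo pass: the candidate on its own, so its effect isn't masked by the stack
    test_prompts.append(f"{BASE_PROMPT} <lora:{CANDIDATE}:{weight}>")
for weight in (0.3, 0.45):
    # stacked pass: candidate layered onto the usual base stack
    test_prompts.append(f"{BASE_PROMPT} {BASE_STACK} <lora:{CANDIDATE}:{weight}>")

for prompt in test_prompts:
    print(prompt)
```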
The tutorials I've been planning feel like the right next step. I've been teaching people how to drive. Now it's time to show them how the engine works.
— Admin · END TRANSMISSION —
SCROLL 005
·
2026.04.13 20:46
lora
testing
mps
New LoRAs, Old Favorites, and a bf16 Wall
Spent today testing two new LoRAs I found — **0__11Xx** and **ma1ma1helmes_b**. Both Illustrious-style, both looked promising in the preview images. So I cleared a couple hours and ran them through the usual gauntlet.
First thing I tried was swapping out my People Works LoRA for these. People Works has been in my base stack for a while now — it does something specific to faces and skin rendering that I just like. Hard to describe exactly, but when it's in the mix the output feels more *finished*.
The new ones are different. **0__11Xx** at 0.4 weight gives this slightly more illustrated quality — still detailed, still realistic-leaning, but there's a softness to the rendering that works really well for forest scenes and natural lighting setups. **ma1ma1helmes_b** at 0.45 does something interesting with fabric and clothing detail. Lace, corsets, layered outfits — it picks up on those tags better than most LoRAs I've tested.
I ended up running a full batch with both of them active on a forest scene prompt — blonde girl in a green corset and white lace dress, flower wreath, dappled sunlight through the trees. Heavy on the clothing and environment tags, double-bracketed the important stuff. The results were genuinely good. The lace rendering in particular was a step up from what I usually get.
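If you run A1111 with the --api flag, that kind of batch is easy to drive from a script instead of the UI. A rough sketch against the /sdapi/v1/txt2img endpoint; the tag list below is an approximation of the prompt described above rather than the exact one, and the sampler settings and sizes are placeholders:

```python
import requests

payload = {
    "prompt": (
        "1girl, blonde hair, ((green corset)), ((white lace dress)), "
        "flower wreath, forest, ((dappled sunlight)), trees, "
        "<lora:0__11Xx:0.4> <lora:ma1ma1helmes_b:0.45>"
    ),
    "negative_prompt": "lowres, bad hands, blurry",
    "steps": 28,
    "cfg_scale": 7,
    "width": 832,
    "height": 1216,
    "batch_size": 4,
    "n_iter": 8,  # 8 batches of 4 = 32 images per run
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=600)
resp.raise_for_status()
print(f"got {len(resp.json()['images'])} images back")  # base64-encoded PNGs
```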
But here's the thing — when I compared the outputs side by side with the same prompt using my old People Works LoRA, I still preferred the People Works version for faces. The skin has more life to it. The new LoRAs win on clothing and environment detail, People Works wins on the human element. That's a useful thing to know.
Also hit a compatibility wall today that's worth documenting. One version of **People Works+** (the plus variant) is distributed only in **bf16** format. If you're running on a Mac with Apple Silicon using MPS — which I am — bf16 doesn't work. MPS doesn't support bfloat16 operations. The model just fails to load or throws tensor errors. You need to find the fp16 or fp32 version, or convert it yourself. Not a huge deal once you know, but if you're on a Mac and a LoRA mysteriously won't load, check the precision format first. That's probably your answer.
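If you'd rather convert the file than hunt for a different upload, here's a minimal sketch of the bf16-to-fp16 conversion using the safetensors library; the file names are placeholders:

```python
import torch
from safetensors.torch import load_file, save_file

SRC = "loras/people_works_plus_bf16.safetensors"  # placeholder paths
DST = "loras/people_works_plus_fp16.safetensors"

tensors = load_file(SRC)
converted = {
    key: (t.to(torch.float16) if t.dtype == torch.bfloat16 else t)
    for key, t in tensors.items()
}
save_file(converted, DST)

dtypes = {str(t.dtype) for t in tensors.values()}
print(f"original dtypes: {dtypes} -> saved fp16 copy to {DST}")
```

Loading a LoRA converted this way has worked fine for me; the precision loss going from bf16 to fp16 isn't something you'll see in the output.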
The takeaway from today: I'm adding 0__11Xx and ma1ma1helmes_b to my random LoRA pool for batch generation. They won't replace People Works in the base stack — that stays. But as random additions at 0.3-0.45 weight, they're going to add variety to outputs, especially on scenes with detailed outfits or natural environments. Sometimes the best discoveries don't replace what you have, they just expand what's possible.
— Admin · END TRANSMISSION —
SCROLL 004
·
2026.04.07 14:47
lora
discovery
testing
Reverse Engineering a LoRA From One Image
Saw an image today that stopped me mid-scroll. Something about it was just *better* — the skin had more depth, the shadows had more color, the lighting felt more intentional. Not a different style, just a higher quality version of what I'm already doing.
So I did what I always do when something catches my eye: I dug into the metadata.
The prompt was nothing special. The checkpoint was one I already use. The settings were close to mine. But there was one LoRA I didn't recognize — **Ri-mix [PONY + Illustrious]**.
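If you've never done the metadata dig yourself: A1111-generated PNGs carry the whole generation setup (prompt, negative prompt, settings, LoRA hashes) in a text chunk called "parameters", assuming the site hosting the image hasn't stripped it. A minimal sketch with a placeholder filename:

```python
from PIL import Image

img = Image.open("downloads/that_image.png")   # placeholder path to the downloaded image
params = img.info.get("parameters")            # A1111 stores generation info under this key

if params:
    print(params)   # prompt, negative prompt, steps, sampler, seed, LoRAs...
else:
    print("no embedded parameters; the metadata may have been stripped on upload")
```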
Downloaded it and spent most of the day testing it. First by itself at different strengths to see what it actually does. Then paired with my usual LoRAs at different combinations and weights. Low strength, high strength, stacked with one LoRA, stacked with three.
What it does is subtle but it touches everything. Colors get a little more nuanced — not more saturated, just more *specific*. Skin picks up ambient light better. Shadows have more variation instead of just being dark. Lighting feels like it has more layers to it. It's the kind of thing where you put two images side by side and one just looks more "real" but you can't immediately point to why.
The strength matters a lot. Too high and it starts fighting with other LoRAs — especially style LoRAs that have their own opinion about lighting. Too low and you don't get much benefit. The sweet spot depends on what else is in the stack.
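Finding that sweet spot is mostly brute force. A rough sketch of the kind of weight sweep I run; the LoRA filenames, stacks, and weight values are placeholders, not the numbers I actually landed on:

```python
from itertools import product

# Enumerate candidate weights against a few stack variants so the
# comparison grid builds itself.
RI_MIX_WEIGHTS = [0.2, 0.35, 0.5, 0.7]
STACKS = {
    "solo": "",
    "base_stack": "<lora:people_works:0.5>",
    "base_plus_style": "<lora:people_works:0.5> <lora:some_style:0.4>",
}
BASE_PROMPT = "1girl, portrait, soft lighting, masterpiece"

for (stack_name, stack), weight in product(STACKS.items(), RI_MIX_WEIGHTS):
    prompt = f"{BASE_PROMPT} {stack} <lora:ri-mix:{weight}>".strip()
    print(f"[{stack_name} @ {weight}] {prompt}")
```

Run the grid, lay the outputs side by side, and the fighting-with-other-LoRAs problem shows up immediately at the high end of the weights.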
This one's going into the permanent rotation. Not on every image — it depends on the style and what other LoRAs are in play. But for the realistic-leaning stuff and the detailed illustration work, it's going to be in there. You'll probably start seeing the difference in my output over the next few days.
Sometimes the biggest upgrades don't come from new checkpoints or new prompts. They come from one LoRA that shifts everything 10% better.
— Admin · END TRANSMISSION —
SCROLL 003
·
2026.04.05 05:16
triage
selection
posting
No New Recipes, Just Picking Winners
Some days I don't set up anything new. No new recipes, no new references, no enhancement passes. I just open what's already there and pick.
Today was one of those days. I had a backlog — a few hundred images from batches I ran over the last couple days that I hadn't fully gone through yet. So I loaded up the triage tool and started scrolling.
When you're picking from a big backlog, the temptation is to be generous. "Maybe this one has potential." "I could fix that in img2img." Don't. If it doesn't hit you in the first second, move on. The whole point of generating volume is that you don't have to settle.
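My triage tool handles the display and the keybindings, but the decision loop underneath it is nothing fancier than this bare-bones sketch; the paths are placeholders:

```python
import shutil
from pathlib import Path

BACKLOG = Path("outputs/backlog")    # placeholder folders
KEEPERS = Path("outputs/keepers")
KEEPERS.mkdir(parents=True, exist_ok=True)

for img in sorted(BACKLOG.glob("*.png")):
    answer = input(f"{img.name} -- keep? [y/N] ").strip().lower()
    if answer == "y":
        shutil.move(str(img), KEEPERS / img.name)  # first-second yes -> keeper
    # anything else: leave it behind and move on, no second-guessing
```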
Ended up pulling about 40 keepers out of maybe 350 images. Uploaded the best 30 to Civitai with full metadata. Saved 10 to the favorites folder for future remixing.
No new prompts written. No new tools used. Just eyes and judgment.
Days like this feel less productive, but they're actually when the library grows the most. Every image I post today becomes discoverable. Every favorite I save becomes tomorrow's starting material. The pipeline doesn't always need new input — sometimes it just needs you to finish processing what it already gave you.
— Admin · END TRANSMISSION —
SCROLL 002
·
2026.04.03 13:59
img2img
denoise
workflow
7 Models, 3 Denoise Levels, 1 Winner
Ran one of my batch recipe outputs through the img2img finish today. Picked a demon girl with horns from the batch — she had good composition and the expression was right, but at batch resolution the detail quality wasn't there yet.
So I threw her through 7 different checkpoints at 3 denoise levels each. That's 21 versions of the same image. Low denoise keeps it close to the original — safe, clean, but not much improvement. High denoise lets the model reinterpret more — riskier, but sometimes it finds details you didn't know were there.
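If you want to reproduce that kind of grid without clicking through the UI 21 times, here's a rough sketch against the A1111 API (webui launched with --api); the checkpoint names, prompt, input path, and settings are placeholders:

```python
import base64
import os
import requests

URL = "http://127.0.0.1:7860"
CHECKPOINTS = [f"checkpoint_{i}.safetensors" for i in range(1, 8)]  # 7 placeholder names
DENOISE_LEVELS = [0.3, 0.45, 0.6]

os.makedirs("finish", exist_ok=True)
with open("picks/demon_girl.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode("utf-8")

for ckpt in CHECKPOINTS:
    # switch the active checkpoint, then rerun the same img2img at each denoise level
    requests.post(f"{URL}/sdapi/v1/options", json={"sd_model_checkpoint": ckpt}).raise_for_status()
    for denoise in DENOISE_LEVELS:
        payload = {
            "init_images": [init_image],
            "prompt": "demon girl, horns, detailed jewelry, masterpiece",
            "denoising_strength": denoise,
            "steps": 30,
        }
        resp = requests.post(f"{URL}/sdapi/v1/img2img", json=payload, timeout=600)
        resp.raise_for_status()
        out_path = os.path.join("finish", f"{ckpt.rsplit('.', 1)[0]}_d{denoise}.png")
        with open(out_path, "wb") as out:
            out.write(base64.b64decode(resp.json()["images"][0]))
```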
Out of 21 versions, 8 were clear improvements. The rest either lost the expression, muddied the horns, or went too far from the original concept. Narrowed those 8 down to 3 finalists by zooming in and comparing side by side.
The winner came from a mid-denoise pass. Enough freedom for the model to sharpen the jewelry and add depth to the skin texture, but not so much that it changed her face. That's the sweet spot — and it's different for every image.
The whole finish process took about 20 minutes. The batch that produced the original took 3 hours. 3 hours of generation, 20 minutes of polish. That's the ratio.
— Admin · END TRANSMISSION —
SCROLL 001
·
2026.04.01 18:47
checkpoint
workflow
selection
5 Checkpoints, 1 Reference, Pick the Winner
Same reference image. Same tags. Five different checkpoints. Keep the best one. That's today's workflow.
I have a folder of about 1,000 reference images I've saved over time — stuff I liked from anywhere. A script randomly pulls one out of the bag. I don't even choose. Whatever it grabs, that's today's starting point.
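The random pull is nothing fancy. A minimal sketch of it, with a placeholder folder path:

```python
import random
from pathlib import Path

REFERENCES = Path("references")  # placeholder path to the reference folder
candidates = [
    p for p in REFERENCES.iterdir()
    if p.suffix.lower() in {".png", ".jpg", ".jpeg", ".webp"}
]

todays_reference = random.choice(candidates)  # no human choice involved
print(f"today's starting point: {todays_reference.name}")
```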
Each checkpoint interprets the same input differently. One gives me better skin texture. Another nails the lighting but fumbles the hands. A third one produces something I never would have imagined from that reference. The whole point is that I don't know which one will win until I see the results.
Today I ran about 100 images across 5 checkpoints. Kept roughly 1 out of every 5. Most of the rejects aren't bad — they're just not the best version of that concept. When you've seen the best one, the others feel flat.
One image caught me off guard today. Came out way better than I expected — the kind of result where the checkpoint found something in the reference that I didn't even see. That's the thing about this process. You're not fully in control. You set up the conditions and let the models surprise you.
That's 20 keepers out of 100. Tomorrow I'll do it again with a different random pull.
— Admin · END TRANSMISSION —