Where I'm at
No trading today. No dashboards. No clients. Just a broken video pipeline and too much coffee past midnight.
I've been building something different — a content production system called Manhwa Minute. The idea: take a manhwa series (Korean comics, for anyone who hasn't come across them), feed the panels to an AI, generate a narrated recap video with voice, captions, and cinematic zoom effects. Automated. One command to video.
The first version reached panel 60 out of 133 before I killed it. Not because it crashed. Because I was watching the output and the narration was describing a completely different story than what the panels showed. The AI was narrating a generic villainess isekai plot. The panels showed something else entirely. The voice was confident. The pacing was good. The narration was fiction.
• • •
The bug was architectural, not cosmetic
The narration script was generating its story from the series title alone. It read "villainess isekai" and invented the plot from what it knew about the genre. It never looked at the panels. Not once. It was writing a story about what it assumed the comic was about, while the actual images scrolled by underneath telling a completely different one.
Yesterday I wrote that rules describe what you think you do, but data shows what you actually do. Today the same principle showed up in a different domain: the AI was describing what it assumed the panels showed instead of what they actually showed.
Description without observation is fiction. Doesn't matter if it sounds confident. Doesn't matter if the tone is right. If you didn't look at the thing you're describing, you're making it up.
The fix was a new script — one that sends every panel to Claude's vision model in batches of five, generates a description of what each scene actually contains, and saves everything to a file. Then the narration script reads those real descriptions instead of guessing from a title. The narration is grounded now. What the voice says matches what the screen shows. Not because the AI got smarter — because I made it look before it spoke.
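For anyone building something similar, the grounding step is small. A minimal sketch using the Anthropic Python SDK; the file names, the prompt, and the model string are stand-ins here, not the pipeline's actual values:

```python
import base64
import json
from pathlib import Path

import anthropic  # pip install anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
BATCH_SIZE = 5  # panels per request, as described above

def describe_panels(panel_paths: list[Path], out_file: Path) -> None:
    """Send every panel to the vision model in batches; save what each one actually shows."""
    descriptions: dict[str, str] = {}
    for i in range(0, len(panel_paths), BATCH_SIZE):
        batch = panel_paths[i : i + BATCH_SIZE]
        content: list[dict] = [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": "image/png",
                    "data": base64.standard_b64encode(p.read_bytes()).decode(),
                },
            }
            for p in batch
        ]
        content.append({
            "type": "text",
            "text": "Describe what each panel above actually shows, one numbered line per panel.",
        })
        reply = client.messages.create(
            model="claude-3-5-sonnet-latest",  # stand-in: any vision-capable Claude model
            max_tokens=1024,
            messages=[{"role": "user", "content": content}],
        )
        lines = reply.content[0].text.splitlines()
        for panel, line in zip(batch, lines):  # sketch assumes one line per panel
            descriptions[panel.name] = line
    out_file.write_text(json.dumps(descriptions, indent=2))
```

The narration script reads that JSON as its only source. The title never enters the prompt.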
Same lesson as Day 39's voice profile. Same lesson as Day 36's backtest. The AI produces confident output regardless of whether it's seen the source material. Your job as the person directing it is to make sure it has. Feed it reality. Don't let it guess.
• • •
Three crashes later
The rebuild took the rest of the day. And the night. And it broke three more times.
The first crash happened at panel 99. The server has a session time limit — roughly fifteen minutes before it kills whatever's running. The build was on minute fourteen, processing the last batch of panel descriptions, when the connection dropped. Work gone. Progress lost.
Except — it wasn't lost. The panel descriptions had been caching to a temporary folder as they completed. Panels 1 through 98 were sitting there, finished, saved. Only the last batch needed regenerating. The cache saved hours of work and hundreds of API calls.
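The pattern that did the saving is tiny. A sketch of it, with a made-up cache folder:

```python
import json
from pathlib import Path
from typing import Callable

CACHE_DIR = Path("/tmp/manhwa_cache")  # made up; any folder that outlives the session works

def run_batch_cached(batch_id: int, compute: Callable[[], dict]) -> dict:
    """Run one expensive batch, unless its result already survived on disk."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    cache_file = CACHE_DIR / f"batch_{batch_id:03d}.json"
    if cache_file.exists():
        return json.loads(cache_file.read_text())  # the crash didn't take this one
    result = compute()  # e.g. the vision API call from the sketch above
    tmp = cache_file.with_suffix(".tmp")  # write atomically, so a timeout
    tmp.write_text(json.dumps(result))    # mid-write can't corrupt the cache
    tmp.replace(cache_file)
    return result
```

Rerun the same loop after a crash and it skips straight to the first unfinished batch.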
Day 7 I wrote about backing up before touching production. Day 34 I wrote about the dashboard reset. Day 40: cache everything. Not because you expect to crash. Because you will crash. The question isn't whether the session times out. It's whether the work survives when it does.
Second crash: a Python script tried to process a JavaScript template and broke it. A variable reference the video builder needs, the path to an image overlay, got turned into a literal string. The build ran, the video rendered, and every frame was missing its visual effect because the code was looking for a file literally named "${vignettePath}" instead of the actual path. The fix was surgical: find the broken string, replace it, don't regenerate anything else.
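What happened, as far as the output shows: the backtick template literal became an ordinary quoted string, so JavaScript stopped interpolating. The repair, sketched with a made-up file name:

```python
from pathlib import Path

builder = Path("video_builder.js")  # made-up file name
src = builder.read_text()

# Broken: the template literal became a plain string, so JavaScript stops
# interpolating and looks for a file literally named "${vignettePath}".
broken = '"${vignettePath}"'
fixed = "`${vignettePath}`"  # restore the backticks

assert broken in src, "nothing to fix"
builder.write_text(src.replace(broken, fixed, 1))  # one replacement, regenerate nothing
```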
Third crash: another timeout during the final assembly step. But by this point the video was already written to disk. The cleanup process died. The video survived. 104 megabytes. Eleven minutes and forty-eight seconds. 1080p. 133 panels with Ken Burns zoom effects and AI-generated narration.
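The zoom effect, for the curious, is the kind of thing ffmpeg's zoompan filter does: take one still panel, emit a few seconds of frames, drift slowly toward the center. A sketch with made-up paths and ordinary zoompan parameters, not necessarily the pipeline's exact numbers:

```python
import subprocess

def ken_burns(panel: str, clip: str, seconds: int = 5, fps: int = 30) -> None:
    """Turn one still panel into a clip with a slow Ken Burns zoom toward the center."""
    frames = seconds * fps
    vf = (
        f"zoompan=z='min(zoom+0.0015,1.3)':d={frames}"
        ":x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'"
        f":s=1920x1080:fps={fps}"
    )
    subprocess.run(
        ["ffmpeg", "-y", "-i", panel, "-vf", vf,
         "-c:v", "libx264", "-pix_fmt", "yuv420p", clip],
        check=True,
    )

ken_burns("panels/panel_001.png", "clips/clip_001.mp4")
```

One clip per panel, 133 times, concatenated with the narration and captions on top.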
Compressed it to 30 megabytes. Sent a preview. The response: "it's not correct at all." No specifics yet. Waiting on feedback. Can't fix what I don't understand — same lesson from every client interaction since Day 25. The system needs to know what "wrong" means before it can make it right.
• • •
Before closing the session
Before closing the session, I did something the old me wouldn't have thought to do. I wrote a skill file.
Every bug from today — the narration mismatch, the caching gap, the template literal break, the timeout recovery, the panel batching size — documented in one file. Ten bugs. Ten rules. Tomorrow's session starts with institutional knowledge instead of a blank slate.
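The file itself is nothing fancy. A hypothetical excerpt, in roughly the shape mine takes (wording and format here are illustrative, not the actual file):

```markdown
# Skill: manhwa-pipeline lessons (Day 40)

- **Ground the narration.** Never write narration from a title alone.
  Require the panel-descriptions file as input; fail loudly if it's missing.
- **Cache every batch.** Write each batch of API results to disk as it
  completes. Assume the session will time out before the run finishes.
- **Keep Python templating away from .js files.** `${...}` is template-literal
  syntax in JavaScript; a substitution pass will break it.
```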
Day 32 I called this the flywheel. Every bug becomes a documented lesson. Every lesson makes the next build faster. The manhwa pipeline is new. I've never built a video production system before today. But the practice of capturing what went wrong so the system remembers it? That's thirty-nine days old.
• • •
Forty days
Forty days. Six weeks. And today was the first day that had nothing to do with trading. No bots. No dashboards. No client positions. No P&L. Just a creative engineering problem — take 133 images and turn them into a narrated video — solved with the same approach I've been using since Day 1: describe what I want to an AI agent, debug what it builds, fix what breaks, document what I learn.
The approach works on trading bots. It works on dashboards. It works on content systems. Today it worked on video production. The domain changes. The method doesn't.
Forty days ago I didn't know what SSH meant. Today I built a video pipeline that sends comic panels to an AI vision model, generates scene descriptions, writes narration, synthesizes voice, adds captions and zoom effects, and renders 1080p video. In one day. Without writing a line of code.
The video isn't right yet. The feedback will tell me why. Tomorrow I fix it.
Day 40 complete. One pipeline built. Three crashes survived. 133 panels, one video.
The AI was making up the story. Now it describes what it sees.
Day 40 of ∞ — @astergod Building in public. Learning in public.