Day 22.
Trust Nothing You Shipped Yesterday.

Where I'm at

$92 a day.

That's what showed up on the usage dashboard this morning. Not the cron jobs — those run on MiniMax, cheap, exactly where they should be. The $92 was all Opus. The premium model. The one that costs fifty times more.

Here's the thing: I'd been running MiniMax as my default for weeks. Ninety-five percent of everything — briefings, content drafts, debugging, brainstorming — goes through Mini and the output is fine. More than fine. But yesterday I was deep in the DCA bot strategy. Trading logic. Position sizing. The kind of work where I need the model to actually think, not just pattern-match. So I switched to Opus deliberately.

And then forgot to switch back.

Every conversation after that — the casual ones, the quick checks, the "hey, what's in this file" messages — all running on the model I'd pulled in for one critical session. The session ended. The setting stayed. Ninety-two dollars later, I noticed.

That's the problem with overrides. You flip a switch for a good reason, and the switch stays flipped long after the reason is gone. It's not that Opus wasn't worth it for the strategy work — it absolutely was. The mistake was treating a temporary escalation as a new default. Like renting a sports car for one track day and then accidentally driving it to the grocery store for a month.

Switched it back. Opus stays reserved for complex reasoning — trading strategy, architectural decisions, anything where I can feel the difference. Everything else returns to Mini.

The lesson isn't "don't use expensive models." It's "switch back when you're done."
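"Switch back when you're done" is easy to encode so it can't be forgotten. A minimal sketch, assuming a hypothetical in-memory `settings` dict rather than any real client API:

```python
from contextlib import contextmanager

# Hypothetical settings store; a real setup might read and write a config file.
settings = {"model": "minimax"}

@contextmanager
def escalated(model):
    """Temporarily escalate to a pricier model, guaranteeing the default returns."""
    previous = settings["model"]
    settings["model"] = model
    try:
        yield
    finally:
        settings["model"] = previous  # runs even if the session errors out

# The override cannot outlive the session it was created for.
with escalated("opus"):
    assert settings["model"] == "opus"
assert settings["model"] == "minimax"
```

The point of the context manager is structural: the escalation and the switch-back are one unit, so there is no separate "remember to undo it" step.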

• • •

The bot that wouldn't stop buying

Yesterday's entry was about finding the initialization bug — three lines that made the CL bot buy immediately on every restart, bypassing the entire strategy. Today was supposed to be the cleanup. Apply the fix to all four bots. Verify. Move on.

Instead I spent six hours in the wreckage of a sed command.

Here's what happened. I had four bot config files that needed the same fix — remove the auto-buy from initialization. Instead of telling the agent to open each file and edit carefully, I told it to batch-replace across all four files at once with a single sed command. Fast. Efficient. The kind of shortcut that feels clever for about three seconds.

The sed command was wrong. Not subtly wrong — destructively wrong. It didn't just fail to make the edit. It corrupted all four files. The JSON structure broke. The state data — layer prices, entry positions, anchor points — gone. Four bots, four files, all unreadable.

I stared at the terminal for a long time.

Twenty-two days of directing builds and I still make the same mistake: rushing. Every catastrophe in this journal traces back to the same impulse. Day 4, the exposed API key — moving fast, not checking. Day 15, the cascading deploy breaks — fixing multiple things in parallel. Day 17, the Buttondown token in client-side HTML — choosing convenience over caution. And now, Day 22, telling the agent to batch-edit four files because I was too impatient to do them one at a time.

The pattern isn't subtle. I just keep not learning it.

Told the agent to rebuild all four bot configs from scratch. Not from backup — from the exchange API. Pulled current positions, matched fills to layers, reconstructed the state. Took two hours. Would have taken zero if I'd told it to read the sed command back to me before running it.
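For the record, the slow-but-safe version of that batch fix isn't much code. A sketch, assuming a hypothetical `auto_buy_on_init` flag in each config; the two habits that matter are parsing the JSON instead of regex-editing it, and writing to a temp file so a failure can never leave the original half-written:

```python
import json
import os
import tempfile

def disable_auto_buy(path):
    """Edit one bot config safely: parse, modify, write atomically."""
    with open(path) as f:
        config = json.load(f)           # fails loudly if the JSON is already broken
    config["auto_buy_on_init"] = False  # hypothetical flag name
    # Write to a temp file in the same directory, then atomically swap it in.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
    with os.fdopen(fd, "w") as f:
        json.dump(config, f, indent=2)
    os.replace(tmp, path)               # the original is replaced whole or not at all

# Applying the fix to all four bots is then one careful loop over the config paths.
```

Unlike sed, `json.load` refuses to touch a file it can't parse, so a bad edit stops at bot one instead of corrupting all four.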

• • •

The layer problem

With the bots rebuilt, I found a second bug. The XAG bot — silver — was buying from every DCA layer simultaneously instead of only adding at the deepest level. That's not dollar-cost averaging. That's panic buying with extra steps.

The strategy is specific: wait for price to drop 1% below your deepest entry. Then add one layer. One. Not three. Not "whatever's available." The code was executing buys at every layer that met any price condition, regardless of depth.

Quick detour for anyone following along: DCA layers are like rungs on a ladder going down. You enter at the top. If price drops, you add a position one rung lower — that's averaging down. What the XAG bot was doing is the equivalent of jumping on every rung at once. Your average entry is better, sure, but you've used all your ammunition in one move. If price keeps dropping, you've got nothing left.

Described the fix to the agent: only the deepest unfilled layer can trigger. Every other layer waits. The bot went from "buy everything now" to "buy one thing at the right time." Patience, encoded properly this time.
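The rule itself fits in a few lines. A sketch with a hypothetical layer-record shape; the 1%-below-the-deepest-entry threshold is from the strategy above, and "next rung down" here means the first unfilled layer below the deepest filled one:

```python
def next_layer_to_fill(layers, price):
    """Return the single layer eligible to buy right now, or None.

    layers: list of dicts like {"entry": 31.80, "filled": True},
    ordered top rung to bottom rung (hypothetical shape).
    Only one layer can ever trigger: the next unfilled rung, and only
    when price has dropped 1% below the deepest filled entry.
    """
    filled = [l for l in layers if l["filled"]]
    unfilled = [l for l in layers if not l["filled"]]
    if not filled or not unfilled:
        return None  # nothing entered yet, or the ladder is fully used
    deepest_entry = min(l["entry"] for l in filled)
    if price <= deepest_entry * 0.99:  # 1% below the deepest entry
        return unfilled[0]             # one rung, not every rung
    return None
```

The bug was the absence of that single-return constraint: any layer whose price condition was met could fire, so a sharp drop lit up the whole ladder at once.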

• • •

The dashboard that was lying

While verifying the bots, I looked at the dashboard. Winrate display: "1 trade recorded." That's been there for days. I'd been ignoring it because I assumed it was a display bug — the data must be right underneath.

The data was not right underneath.

The dashboard was reading from a static counter instead of calculating from actual trade history. It showed one trade because someone hardcoded "1" during testing and never replaced it with the real calculation. Every time I glanced at the dashboard and thought "that looks fine," I was looking at a lie.

This connects to something from Day 11 — "a system that reports success while producing wrong answers is worse than a system that crashes." The dashboard didn't crash. It displayed a number. The number was meaningless. And because it looked plausible (one trade recorded, sure, I'm still early), I never questioned it.

Had the agent rewire it to pull from realized PnL data dynamically. The real number: seven trades, 62% winrate. Not amazing. But real. And real is the only number worth looking at.
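Computed rather than hardcoded, the winrate is one pass over trade history. A sketch, assuming trades arrive as a list of realized PnL figures:

```python
def winrate(realized_pnls):
    """Winrate from actual trade history, not a static counter.

    realized_pnls: one realized PnL figure per closed trade (hypothetical shape).
    Returns (trade_count, winrate_pct); (0, 0.0) when there is nothing to count,
    so an empty history reads as zero trades instead of a plausible-looking lie.
    """
    closed = [p for p in realized_pnls if p is not None]
    if not closed:
        return 0, 0.0
    wins = sum(1 for p in closed if p > 0)
    return len(closed), round(100 * wins / len(closed), 1)
```

The empty-history branch is the honest part: a dashboard that returns zero when it has no data can never mistake "not wired up yet" for "one trade recorded."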

I also solved a problem that had been nagging me since the bots went live. The exchange API returns averaged position prices — useful for nothing when you're running a layered strategy. You need individual fill prices to know where each layer entered. Found the endpoint: /fapi/v1/allOrders. It returns every fill with exact prices and timestamps. Had the agent build the logic to match fills to active positions. Now each bot knows its real entries, not an average that hides the details.
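Once the order history is in hand, the matching itself is grouping and sorting. A sketch using a hypothetical, simplified order-record shape; the real allOrders-style response carries more fields, and the matching rule here (most recent filled buys map to the active layers, oldest first) is an assumption, not necessarily the bot's actual logic:

```python
def entries_per_layer(orders, symbol, layer_count):
    """Recover individual entry prices from order history.

    orders: list of dicts like
      {"symbol": "XAGUSDT", "side": "BUY", "status": "FILLED",
       "avgPrice": "31.42", "time": 1700000000000}
    (a simplified slice of an allOrders-style response; prices arrive as strings).
    Returns up to `layer_count` of the most recent filled buy prices, oldest
    first -- one real entry per layer instead of one averaged blur.
    """
    fills = [o for o in orders
             if o["symbol"] == symbol
             and o["side"] == "BUY"
             and o["status"] == "FILLED"]
    fills.sort(key=lambda o: o["time"])
    return [float(o["avgPrice"]) for o in fills[-layer_count:]]
```

Filtering out sells and unfilled orders first is what makes the averaged position price unnecessary: every remaining record is an actual layer entry at an actual price.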

• • •

End-of-day reflection

It's 2 AM. I've been at this for ten hours. The bots are fixed — actually fixed, verified against live data, not just "the code looks right." The dashboard shows real numbers. The token spend is under control. The sed disaster is cleaned up.

But what stays with me tonight isn't the fixes. It's the pattern.

Every problem today was a version of the same thing: looking at what I expected instead of what was there. The token spend — I expected the default to be reasonable. The bots — I expected the initialization logic from yesterday's fix to be the only problem. The dashboard — I expected the winrate display to reflect real trades. The sed command — I expected the regex to work because it looked right in my head.

Expectation is the enemy of verification.

But here's the thing I have to be honest about: I couldn't have caught most of this yesterday. Yesterday I was building. The bots went live. The initialization logic shipped. You can't monitor a system that isn't running yet. Today was the first day I could actually watch what I built and see how it behaved in the real world — with real prices, real restarts, real edge cases.

The sed disaster? That was rushing, plain and simple. But the XAG layer bug, the dashboard lie, the token spend — those only became visible because I sat down today and watched. Checked the actual outputs against what I expected. Compared the dashboard to the exchange. Read the logs instead of glancing at the status.

Twenty-two days in, and the rhythm is becoming clear. You don't build and verify on the same day. You build on Day 21. You verify on Day 22. The build is the exciting part — features ship, things go live, it feels like progress. The verification is where the real work happens — watching the thing you built do something you didn't intend, and fixing it before it costs you.

The fix is always the same. Check the actual data. Not what you think the data is. Not what the data was last time you looked. What it is right now, on this screen, in this market.

Day 22 complete. Four bots fixed. Dashboard honest. Token spend cut. One sed disaster survived.

Yesterday I built it. Today I made it real.

Day 22 of ∞ — @astergod Building in public. Learning in public.
