Day 21.
The Code That Ran Before the Rules.

Every time I closed a position, it reopened within seconds.

The CL bot was supposed to wait.

That's the entire strategy. Wait for price to drop 1% below the 24-hour high. Then enter. Patience, written as code. A machine that knows when not to buy.

Instead: every time I closed a position, it reopened within seconds. Like I hadn't done anything. Like something had already decided, before I'd even released the mouse button, that it was going back in.

I closed it again. It bought again.

I described the behavior to the agent. We started reading logs together.

• • •

Quick detour for anyone following along from the beginning

A DCA bot — Dollar Cost Averaging — is a trading bot that buys in layers. Instead of putting everything in at once, it waits for a signal, enters a position, and adds more if price moves against it. The idea is that you average down your entry price rather than catching a falling knife with your full size.
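The averaging-down arithmetic is simple. A sketch, with made-up prices and layer sizes:

```python
def average_entry(layers):
    # layers: list of (price, qty) buys made as price fell
    total_cost = sum(price * qty for price, qty in layers)
    total_qty = sum(qty for _, qty in layers)
    return total_cost / total_qty

# Enter at 100, then add equal layers at 98 and 96 as price moves against you:
layers = [(100.0, 1.0), (98.0, 1.0), (96.0, 1.0)]
average_entry(layers)  # → 98.0, instead of 100.0 from one full-size buy
```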

The "main loop" is the heartbeat of the bot — the code that runs every few seconds checking prices, checking conditions, deciding whether to act. This is where the strategy lives.

Initialization is different. It's the code that runs once, at startup, to get the bot into a known state. Set up variables. Check what positions are already open. Prepare.
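Those two phases, sketched. Every name here is illustrative, not the actual bot's code:

```python
def initialize(state):
    # Runs once, at startup: set up variables, get into a known state.
    state["anchor_entry"] = None
    state["open_positions"] = []

def main_loop(state, price_feed):
    # The heartbeat: runs every cycle, checking prices and conditions.
    for price, high_24h in price_feed:
        decide(state, price, high_24h)

def decide(state, price, high_24h):
    # The strategy's trigger check belongs here, not in initialize().
    if state["anchor_entry"] is None and price < high_24h * 0.99:
        state["anchor_entry"] = price  # enter on a 1% pullback
```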

Nobody thinks much about initialization. It runs, it finishes, the real logic starts. That's what you watch.

That's where the bug was.

• • •

The four-hour chase

We traced it three times before finding it.

First pass: the agent read through the main loop. The trigger logic was clean — price < high_24h * 0.99 before any buy. Clear condition. Couldn't fire on its own. Checked the signal handler. Nothing.

I closed the position. Bot bought immediately.

Second pass: the agent added logging everywhere. Every condition printed its state before any action. The logs showed the trigger check never ran at all. The buy happened before the loop even started its first cycle.
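That kind of instrumentation looks roughly like this. Hypothetical names; the real bot's logging almost certainly differed:

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("dca")

def check_trigger(price, high_24h):
    # Print every condition's state before any action, so the logs
    # show whether the trigger check ran at all — and what it saw.
    fired = price < high_24h * 0.99
    log.debug("trigger check: price=%.2f high_24h=%.2f fired=%s",
              price, high_24h, fired)
    return fired
```

A buy that appears in the logs with no preceding "trigger check" line is a buy the strategy never authorized.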

Closed it again. Same thing.

Third pass: I told the agent to stop looking at the strategy and look at everything that ran before it. Not the main loop — the startup sequence. The variables being set. The state being initialized.

And there it was.

• • •

Three lines

if state.get("anchor_entry") is None:
    positions = get_position()
    if not positions:
        qty = calc_qty(price)
        result = buy(qty)

That's it. Hidden in the initialization block. Before the strategy starts. Before the trigger check runs. Before any of the logic we'd spent hours building gets a chance to say anything.

The condition is innocent-looking: if there's no anchor entry saved in state, and no open positions, buy. On paper, that's a reasonable default — get the bot into an active position on first run.

The problem: it fires on every restart. Close a position — state clears — bot restarts — anchor_entry is None — no positions — buys immediately at market. Every single time. The 1% trigger? Never consulted. The 24-hour high calculation? Irrelevant. Three lines in startup had already decided.
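The failure path can be reproduced with stubs. Every name below is a stand-in, not the real bot's code:

```python
buys = []

def get_position():
    return []                 # the position was just closed: nothing open

def calc_qty(price):
    return 1.0                # fixed size, for the sketch

def buy(qty):
    buys.append(qty)          # market buy — no trigger ever consulted

def startup(state, price):
    # The pattern from the initialization block:
    if state.get("anchor_entry") is None:
        positions = get_position()
        if not positions:
            qty = calc_qty(price)
            buy(qty)

# Close, restart, close, restart: state is empty each time,
# so the bot buys immediately on every startup.
startup({}, 100.0)
startup({}, 100.0)
print(buys)  # → [1.0, 1.0]
```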

The agent built exactly what I asked for. The night before, riding the momentum of Day 20, I told it: "the bot should start active." It made the bot start active. That instruction, in that form, turned into three lines that bypassed everything else.

I asked for the wrong thing without knowing it. Less than 24 hours between giving that instruction and spending four hours undoing it.

• • •

One patch. Four bots.

Once we understood the bug, finding it stopped being a search and became a pattern match.

The agent removed the auto-buy from initialization. Let the main loop handle every entry, first run included. Tested it — CL now sits patiently, watching the 24-hour high, waiting for the pullback.
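One plausible shape for that patch, sketched with illustrative names: initialization only restores state, and the loop owns every entry, first run included.

```python
orders = []

def initialize(state):
    state.setdefault("anchor_entry", None)   # restore only — never trade here

def tick(state, price, high_24h):
    # Every entry, including the very first, must pass the pullback trigger.
    if state["anchor_entry"] is None and price < high_24h * 0.99:
        orders.append(price)
        state["anchor_entry"] = price

state = {}
initialize(state)            # simulate a restart: nothing is bought
tick(state, 100.0, 100.0)    # at the 24-hour high: still waiting
tick(state, 98.5, 100.0)     # 1.5% below the high: now it enters
```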

Then we opened the BTC bot. Same structure in the initialization block.

Then XAU. Same.

Then XAG. Same.

All four DCA bots had the same three lines in the same place, written in the same session, carrying the same blind spot. Four hours to find it once. Ten minutes to fix it four times.

That's what pattern recognition means in debugging. The first find costs everything. After that, you're just looking for what you already know.

• • •

End-of-day reflection

Every system we've built together follows the same pattern.

I describe what I want. The agent builds it precisely. The gap between what I said and what I meant — that's where the problems live. Not in the agent's execution, which is usually exact. In the instruction, which is sometimes incomplete.

Day 4 it was briefing formats. Day 9 it was bookmark digests. Day 21 it's initialization logic. The agent does what I ask. When it goes wrong, the question is never "why did it build that?" It's always: "what did I ask for that made this make sense?"

"The bot should start active" made perfect sense when I said it. The agent didn't think about restarts because I didn't mention restarts. It built what I described.

That's the whole game. Getting precise enough, early enough, that the thing you build is the thing you meant.

The CL bot was patient. The instruction I gave wasn't.

I spent four hours finding three lines. Those three lines were exactly what I asked for.

That's exactly why they were wrong.

Day 21 of ∞ — @astergod Building in public. Learning in public.
