Where I'm at
I cleared my dashboard this morning. All of it. The old stats, the historical trades, the numbers stretching back to when I didn't understand my own system. The silver crash losses. The position sizing mistakes. The early-days chaos. Gone. Sixty-eight trades. $236.30 gross. Starting from today. Clean slate.
It sounds like a small thing — resetting a counter. But it's not about the counter. It's about what the counter represents.
For the last week I've been watching numbers on my dashboard that include trades from before I knew what I was doing. Losses from position sizes that were too big. Profits from a period where the bots were running with bugs I hadn't found yet. The aggregate number was meaningless because it blended learning with execution, mistakes with mastery. I needed a number I could trust. So I started over.
Day 33 was the hardest entry I've written. Clients got liquidated. I lost $2,500 of my own money on a careless mistake. Silver crashed and exposed that my position sizing wasn't conservative enough. The hedge became the hazard.
I've been going too fast. That's the conclusion I keep arriving at. Not too fast on features — the dashboards work, the onboarding is smooth, the watchdog catches crashes. Too fast on scaling. Too fast on adding clients before the foundation was solid. Too fast on position sizes before I understood how assets correlate under stress.
The strategy works — the DCA logic is sound, the bots execute correctly, the small wins compound. But the parameters around the strategy weren't battle-tested. I was scaling a system I hadn't finished tuning.
Starting today: back to basics. Small sizes. Personal account only. Figure out exactly what the right position size, order size, and capital allocation should be for safe, sustainable accumulation. Not "what works in a backtest." What works when silver drops 25% in two days and your client with $500 gets liquidated.
The answer to "how big should the positions be?" isn't just a number. It's the number that survives the worst week, not just the average week.
• • •
The bug that finally made sense
The duplicate order bug — the one that's been haunting the system for two weeks — finally made sense today. Not the race condition I'd been chasing. Not the cache invalidation timing. Something simpler and worse.
At 16:20 UTC yesterday, the exchange's API stopped resolving for about ten minutes. A DNS outage — the system that translates a web address into an actual server location went down briefly. Normal infrastructure hiccup. Recovered on its own.
But during those ten minutes, every time a bot asked "do I have an open position?" the question couldn't reach the exchange. The function caught the error, logged a warning, and returned zero. Zero means: no position. No position means: clear the state. Clear state means: buy.
The bot had a real position. Three layers deep. The exchange knew about it. The state file knew about it. But for ten minutes, the bot thought it had nothing. So it bought again. Then the API recovered, the bot saw a mismatch between its state and the exchange, and tried to reconcile — which sometimes triggered another buy.
Two weeks of duplicate orders. Inflated positions across four clients. Patches that addressed symptoms but not the cause. All traced back to one assumption: if the position check returns zero, the position is gone.
The fix was eleven lines. When the position check throws an error, return the last known value instead of zero. If you had a position an hour ago and the API just died, the position didn't disappear — the API did. Hold your state until you get a confirmed reading from a working connection.
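The shape of the fix, as a sketch. The function and field names here are my own stand-ins, not the actual bot code, and the exchange call is a placeholder:

```python
# Hypothetical reconstruction of the fix. The real code differs;
# this just shows the assumption being replaced.

_last_known = {}  # symbol -> last confirmed position size

def get_position(exchange, symbol):
    """Return the position size for `symbol`, holding the last known
    value when the exchange API is unreachable."""
    try:
        size = exchange.fetch_position_size(symbol)  # network call (stand-in name)
    except ConnectionError:
        # The API died, not the position. The old code returned 0 here,
        # which downstream logic read as "flat" -> clear state -> buy again.
        return _last_known.get(symbol, 0.0)
    _last_known[symbol] = size  # only update state on a confirmed reading
    return size
```

The key design change is that an error path no longer produces a value that looks like a legitimate answer.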
Eight hours across three sessions to find those eleven lines. The code wasn't wrong. The assumption underneath it was.
• • •
Aster staking
Aster launched staking today. I've been reading the docs all morning.
Two reward pools. Base Rewards — 150,000 $ASTER per week, distributed based on which validator you delegate to and how much transaction volume that validator processes. Loyalty Rewards — 300,000 per week plus extra from Aster's buyback program, distributed based on how long you lock and how actively you trade on the platform.
The validator set is institutional: Trust Wallet, BNB Chain, World Liberty Financial, Lista DAO, PancakeSwap. Strong names for a chain that's been live for three days.
The minimum lock is one year. Maximum is four years. Your reward weight scales linearly — 52 weeks gives you 25% of max, 104 gives 50%, 208 gives 100%. But the early exit penalty doesn't scale linearly. Exit a 1-year lock immediately: 25% penalty. Exit a 4-year lock immediately: 60% penalty — and that 60% cap holds for the entire first year and a half.
There's a trading volume boost too. Over 500K weekly volume gets you a 1.05x multiplier on loyalty rewards. Over 50M gets 1.15x. Over 200M gets 1.25x. For someone running trading bots on the platform daily, this matters.
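The numbers above reduce to a bit of back-of-envelope math. This sketch uses only the figures stated here; the function names and the linear penalty decay for the 1-year lock are my own reading of the docs, not official code:

```python
# Rough model of the staking parameters described above. Assumptions:
# linear lock-weight scaling, tiered volume multipliers, and a 1-year
# early-exit penalty that decays linearly from 25% to 0.

def lock_weight(lock_weeks: int) -> float:
    """Reward weight scales linearly: 52w -> 0.25, 104w -> 0.50, 208w -> 1.0."""
    return lock_weeks / 208

def volume_multiplier(weekly_volume: float) -> float:
    """Loyalty-reward boost by weekly trading volume."""
    if weekly_volume >= 200_000_000:
        return 1.25
    if weekly_volume >= 50_000_000:
        return 1.15
    if weekly_volume >= 500_000:
        return 1.05
    return 1.0

def exit_penalty_1yr(weeks_elapsed: int) -> float:
    """25% at week 0, decaying linearly to 0 at week 52 (my assumption)."""
    return 0.25 * max(0, 52 - weeks_elapsed) / 52

# A 52-week lock plus the first volume tier:
effective_weight = lock_weight(52) * volume_multiplier(600_000)  # 0.25 * 1.05
```

At month six (roughly week 26) the assumed penalty works out to 12.5%, which matches the worst case quoted below.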
I'm going 52 weeks. The minimum. Not because I'm not bullish — I run a business on this platform, I have real skin in the game. Because the minimum gives me meaningful loyalty rewards with a penalty curve I can stomach (25% worst case, under 12.5% by month six), and the volume boost makes the 1-year lock punch above its weight for active traders.
In 12 months I'll have real data. Chain performance. Validator reliability. Whether the APY held up as more capital entered. Whether governance shipped. Then I decide whether to extend.
There's only about $1.5–2M staked right now. Early stakers are splitting 450,000 ASTER per week among very few people. The yield looks incredible. It won't last. Don't project today's numbers forward.
Locking longer than one year right now is betting on conviction. Locking for one year is betting on data. I'm betting on data.
• • •
Ray's $493
Ray's bots told me he had no money in his account. Ray told me he had $493. We were both right.
When we migrated Ray from the older API to the newer one, his bot scripts kept the old API key. The balance check function looks for that key first — if it exists, use the old endpoint. The old endpoint pointed at an empty account. The new wallet with $493 was invisible because nobody cleared the old key after migration.
Same function. Two different backends. One right answer, one wrong. The key determined which account you checked, and nobody noticed because the function ran without errors. It returned a confident zero. Not an error. A number. The wrong number.
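A minimal reconstruction of that failure mode, with invented names. The real scripts talk to the exchange SDK; these stand-in functions just show the shape of the bug:

```python
# Hypothetical sketch. A stale key left over from migration silently
# routes the balance check to the empty legacy account.

def fetch_legacy_balance(key: str) -> float:
    return 0.0    # stand-in for the old endpoint: empty account

def fetch_new_balance(key: str) -> float:
    return 493.0  # stand-in for the new endpoint: the real wallet

def check_balance(config: dict) -> float:
    """Buggy version: if an old key exists, it wins -- no error,
    just a confident answer from the wrong backend."""
    if config.get("old_api_key"):
        return fetch_legacy_balance(config["old_api_key"])
    return fetch_new_balance(config["new_api_key"])
```

The fix was the migration step that was missing: `config.pop("old_api_key", None)` after cutover, so the key that chooses the backend can't point at a dead account.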
Cleared the old key from all five scripts. $493 appeared immediately. All five bots bought Layer 1 within thirty seconds.
A function returning a confident number is not the same as a function returning the right number. The system doesn't crash when it's wrong. It just keeps running, confidently, with bad data.
• • •
Day 34
Thirty-four days. Week five. I started this journal not knowing what SSH meant. Today I spent eight hours tracing a DNS outage through a position-checking function to explain two weeks of duplicate orders, and the fix was eleven lines that I directed an AI to write.
But the bigger thing today isn't the bug. It's the reset.
I went too fast. I scaled before the foundation could hold the weight. Clients got hurt. I got hurt. The strategy is sound but the parameters weren't tested under stress. That's on me — not the bots, not the agent, not the market.
So I'm starting over. Not the system — the system works. The assumptions. The sizes. The risk model. Back to my own account, small positions, finding the exact parameters that survive the worst week before I put anyone else's money on them again.
Day 33 I wrote that the bots aren't the risk — the person directing them is. Day 34 is that person deciding to slow down.
Day 34 complete. Dashboard reset. Staking live. One two-week bug explained in eleven lines. Back to basics. The foundation comes first this time.
Day 34 of ∞ — @astergod Building in public. Learning in public.