Yesterday I said: pick one thing and push forward. Not all three. One.
I picked the taller mountain. The DCA strategy hit its local maximum — 500 experiments, zero improvements, the parameters are as good as they get. I could have accepted that. Kept running the bots as they are, collected the small wins, stopped trying to optimize. That's the safe path. Stay on the hill you've already climbed.
Instead I started building something fundamentally different. Not better DCA parameters. A different kind of trading system entirely.
• • •
The new system has eight signals running simultaneously. Among them: momentum — is price accelerating? Mean reversion — has it stretched too far from the average? Trend strength. Volatility compression. Correlation to Bitcoin. Each signal looks at the market from a different angle, and when enough of them agree, the bot takes a position. Not a single trigger like the DCA bots — a consensus.
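The consensus logic fits in a few lines. A sketch, not the production code — the function names, vote scheme, and threshold are illustrative:

```python
# Illustrative consensus: each signal votes -1 (bearish), 0 (neutral),
# or +1 (bullish); the bot only acts when enough votes line up.
def consensus(votes, threshold=5):
    """Return 'long', 'short', or None given a list of signal votes."""
    longs = sum(1 for v in votes if v > 0)
    shorts = sum(1 for v in votes if v < 0)
    if longs >= threshold:
        return "long"
    if shorts >= threshold:
        return "short"
    return None  # not enough agreement: no trade

# Eight signals, six bullish -> enough agreement to go long.
print(consensus([1, 1, 1, 1, 1, 1, 0, -1]))  # long
```

One knob — `threshold` — controls how picky the system is, which matters later in this story.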
Quick detour for anyone following along: the DCA bots are simple. Price drops 1% below a high, they buy. Price rises 1%, they sell. One signal, one action, repeat.
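That one-signal loop is small enough to sketch whole — a toy version of the rule, not the actual bot:

```python
# Toy DCA rule: buy 1% below the recent high, sell 1% above entry.
def dca_action(price, high, entry=None, band=0.01):
    """Return 'buy', 'sell', or 'hold' for one tick of the DCA rule."""
    if entry is None and price <= high * (1 - band):
        return "buy"   # no position yet, price dipped 1% below the high
    if entry is not None and price >= entry * (1 + band):
        return "sell"  # in a position, price rose 1% above entry
    return "hold"

print(dca_action(price=99.0, high=100.0))               # buy
print(dca_action(price=101.0, high=100.0, entry=99.9))  # sell
```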
This new system is closer to how institutional traders work — multiple indicators, confidence thresholds, position sizing based on how much edge the signals collectively detect. It's the difference between fishing with one line and fishing with a net.
Built the full stack in one session. Data pipeline. Signal engine. Risk management with a hard cap at 12% maximum drawdown — the most the system is allowed to lose before it stops trading. Position sizing using Kelly criterion, which calculates the optimal bet size based on your edge. Execution layer with automatic stop-losses and take-profits. A live dashboard deployed at panke.app where anyone can watch the paper trading in real time.
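Kelly sizing, in its textbook form: given a win probability and a payoff ratio, the formula gives the fraction of capital to risk. A hedged sketch — the half-Kelly scaling and the way the 12% cap is applied here are illustrative, not my exact configuration:

```python
def kelly_fraction(win_prob, payoff_ratio):
    """Classic Kelly: f* = p - (1 - p) / b, clipped at zero for negative edge."""
    f = win_prob - (1 - win_prob) / payoff_ratio
    return max(f, 0.0)

def position_size(capital, win_prob, payoff_ratio, scale=0.5, cap=0.12):
    """Half-Kelly, capped so one position never exceeds the drawdown budget."""
    f = kelly_fraction(win_prob, payoff_ratio) * scale
    return capital * min(f, cap)

# 55% win rate, 1.5:1 payoff -> full Kelly 0.25, half Kelly 0.125, capped at 0.12
print(position_size(10_000, 0.55, 1.5))  # 1200.0
```

Practitioners rarely bet full Kelly — it assumes your edge estimate is exact, and it isn't — which is why a fractional scale plus a hard cap is the common pattern.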
Paper trading — not real money. Not yet. Ten thousand dollars of simulated capital on a test network. Fifteen symbols. The system needs to prove itself with fake money before it touches real money. That's a lesson from Day 33 that I'm not going to learn twice.
• • •
First backtest result: +98.76% return in 90 days. Sharpe ratio of 1.68. Maximum drawdown: 0%.
I stared at those numbers for about three seconds before I knew they were fake. Ninety-eight percent return with zero drawdown means the strategy barely traded. The signals weren't firing. The confidence threshold was set so high that almost no trade met the criteria.
The system spent 90 simulated days sitting on its hands, occasionally entering one perfect position, and the backtest scored that as a 98% return because the few trades it made happened to work. A bot that doesn't trade can't lose money. That's not performance. That's absence.
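The cheap guard I should have had from the start: refuse to believe a backtest that barely traded. A sketch, with arbitrary thresholds:

```python
def backtest_is_suspicious(num_trades, days, max_drawdown,
                           min_trades_per_week=1):
    """Flag results that look like absence rather than performance."""
    expected = days / 7 * min_trades_per_week
    if num_trades < expected:
        return True   # the strategy mostly sat on its hands
    if max_drawdown == 0 and num_trades > 0:
        return True   # no strategy that actually trades has zero drawdown
    return False

# A handful of trades in 90 days with 0% drawdown: exactly my fake result.
print(backtest_is_suspicious(num_trades=4, days=90, max_drawdown=0.0))  # True
```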
Tuned the thresholds down. Lower confidence requirement. Higher leverage. More aggressive sizing. Ran it again.
Modest results this time — but real ones. The signals fired. Positions opened. The system actually traded instead of waiting for conditions that never arrive.
The infrastructure is solid. The signals work. The first honest results are in. Not spectacular. Not fake.
• • •
Then the bot started dying.
Started the live paper trading loop. Three short positions opened — the signals detected bearish conditions across three symbols.
Ten minutes later, the process crashed. Restarted it. Same three positions. Same prices. Ten minutes. Dead again.
The database lock files were conflicting between sessions. The server was killing the process for running too long. Every restart produced the same static snapshot because the simulation doesn't move prices in real time — it needs a continuous connection to the exchange, and the connection kept breaking.
The bot works. The signals fire. The positions open. The infrastructure can't keep it alive on the server for more than ten minutes.
Same problem class as every other persistent process I've tried to run — Day 29's ghost script, Day 32's OpenClaw memory crashes. The server has limits I keep hitting.
Not solved tonight. The bot needs to run as a proper background service, not a session-bound process. Tomorrow's problem.
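The standard shape of that fix on Linux is a supervised service — something like a systemd unit, so the process survives the session ending and restarts after a crash. Paths and names below are placeholders, not my actual setup:

```ini
# /etc/systemd/system/quantbot.service -- illustrative unit, not my real config
[Unit]
Description=Quant paper-trading bot
After=network-online.target

[Service]
ExecStart=/usr/bin/python3 /opt/quantbot/run.py
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

With `Restart=always`, systemd itself does the restarting instead of me, and the process is no longer tied to a login session.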
• • •
In the evening I started learning something completely different: AI video. Not the manhwa pipeline — that's narration over static panels. This is generated video. Moving images from a text prompt.
The insight that surprised me: the skill isn't in the model. The models are available to everyone. The skill is in the prompting. Specifically, negative prompts — telling the AI what NOT to include.
"No blurry edges. No distorted hands. No text. No watermark. Smooth motion." That's the line between output you can use and output you delete.
The moat in AI video isn't the technology. It's knowing how to direct it.
Same principle I've been learning for forty-seven days in every other domain: the AI does what you tell it. The quality of the output depends entirely on the quality of the instruction.
• • •
Forty-seven days. Two trading systems now — DCA for steady compounding, the quant bot for speculative edge. A content engine. A video pipeline. A briefing system. A journal. All running on one server, all directed in English, all built by an AI agent that writes every line of code.
The beginning of a new system always looks like this. Broken infrastructure. Fake backtests. Processes that die every ten minutes. The same mess as Day 1, except now I know that the mess is temporary and the system that emerges from it is real.
Day 1 I built one thing and it took twelve hours. Day 47 I built a full quant trading system, deployed a live dashboard, started paper trading, and began learning AI video — in one day.
The skill isn't the code. It never was. The skill is knowing what to build, what to check, and what not to believe.
Day 47 complete. One quant bot built. One 98% backtest discarded. Three positions opened. Ten minutes of uptime. The taller mountain is messy at the base.