Day 4 started with a security incident
I was scrolling through my GitHub activity when I noticed something that made my stomach drop: my OpenRouter API key was sitting in a public commit. Exposed. Visible to anyone who cared to look. The same key that powers all my AI automations — market briefings, bookmark digests, the entire system I’d spent three days building.
The fix was fast. Revoke. Regenerate. Update the gateway config. Five minutes of actual work. But the lesson hit harder than the fix: when you’re moving fast with AI tools, you’re constantly passing secrets around. API keys in config files, tokens in environment variables, credentials in chat messages. One careless commit and it’s all public.
I immediately updated my gateway config with the new key and restarted. Everything came back online. But it’s the kind of mistake that makes you slow down for a second and think about operational security in a way you don’t when everything’s working fine.
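One cheap guardrail after an incident like this is a pre-commit scan that checks staged text for key-shaped strings before they ever reach a commit. A minimal sketch in Python; the regex patterns are illustrative assumptions (OpenRouter keys typically start with "sk-or-", but this is not an exhaustive or official ruleset):

```python
import re

# Illustrative patterns only -- tune these for the providers you actually use.
SECRET_PATTERNS = [
    re.compile(r"sk-or-[A-Za-z0-9\-]{20,}"),  # OpenRouter-style key (assumption)
    re.compile(r"(?i)(api[_-]?key|token)\s*[=:]\s*['\"]?[A-Za-z0-9_\-]{20,}"),
]

def find_secrets(text: str) -> list[str]:
    """Return any substrings that look like leaked credentials."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(text))
    return hits
```

Wire something like this into a pre-commit hook and the careless commit gets blocked locally instead of revoked publicly.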
The format problem that wouldn’t die
Here’s something nobody warns you about when you build AI automations: the AI will forget your instructions. Repeatedly. Confidently. Even when you’ve written them down in multiple places.
I asked Miyu — my AI agent — to generate an evening market briefing. Simple enough. I have a template. I have a style guide. I have documentation in four separate files explaining exactly what the format should look like.
She got it wrong. Eight sections instead of seven. Wrong section names. Missing the Tesla deep dive that’s supposed to be in the evening edition only. Missing the content ideas that go in my DM but not the public channel.
So I corrected her. Explained the format again. She acknowledged it, apologized, regenerated it — and got it wrong in a different way.
This happened four times.
I’m not exaggerating. Four rounds of “here’s the format” followed by “that’s not the format.” Each time she’d fix one thing and break another. The sections would be right but the depth would be wrong. The depth would be right but she’d add an extra section. She’d nail the channel version but mess up the DM version.
This is the reality of working with AI agents that nobody on X talks about. The demos look magical. The day-to-day involves repeating yourself until you figure out how to make the instructions stick.
Making instructions sticky
After the fourth round of corrections, I stopped fixing the output and started fixing the system.
The problem isn’t that the AI is stupid. The problem is that instructions decay. Every new conversation starts fresh. Even with memory files and system prompts, context gets diluted across a massive prompt that’s trying to hold twenty different roles simultaneously.
So I did something systematic. I updated four files:
SOUL.md — the personality and rules file. Added three mandatory rules at the very top, marked as “CRITICAL” and “NON-NEGOTIABLE.” Rule 0: always check chat history before responding. Rule 1: always show which role is active. Rule 2: always give prompt feedback before executing any task.
MEMORY.md — the long-term memory file. Added the exact briefing format breakdown. Which sections go where. Which editions get which extras. Tables, specifics, no ambiguity.
AGENTS.md — the operating instructions. Added “read MEMORY.md FIRST” as step one. Added a reminder that I hate repeating myself.
BOOTSTRAP.md — the startup file that runs on every new session. Added instructions to read memory files before doing anything else.
Four files. Same information. Different angles. Redundancy by design.
The lesson: if your AI keeps forgetting something, don’t just tell it again. Write it in every file it reads at startup. Make it impossible to miss.
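For a concrete picture, the rules block at the top of SOUL.md ends up looking something like this (paraphrased from the rules above, not the verbatim file):

```markdown
<!-- SOUL.md -- top of file, read on every session start -->
## CRITICAL RULES (NON-NEGOTIABLE)

- **Rule 0:** Always check chat history before responding.
- **Rule 1:** Always show which role is active.
- **Rule 2:** Always give prompt feedback before executing any task.
```

The same block, restated in MEMORY.md, AGENTS.md, and BOOTSTRAP.md, is what makes the redundancy deliberate rather than accidental.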
The trigger phrase hack
I invented something today that I’m genuinely proud of.
After getting frustrated with the AI skipping steps — not showing which role is active, not giving prompt feedback, jumping straight to execution — I created a trigger phrase. A single emoji that, when I start or end any message with it, forces Miyu to follow a specific sequence:
1. Show the active role.
2. Give prompt engineering feedback on my request.
3. Let the Master Orchestrator decide which specialized role should execute.
4. Execute the task with that role.
It’s like a mode switch. Normal messages get normal responses. Messages with the trigger get the full orchestration treatment.
Why does this work? Because it’s a specific, visible, unambiguous signal. The AI doesn’t have to guess whether I want the full process or a quick answer. The emoji tells it. And because it’s documented in multiple files, the instruction persists across sessions.
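The mode switch reduces to a trivial dispatch: check the message boundaries for the emoji and pick a pipeline. A sketch of that logic; the emoji and step names here are placeholders, since the post keeps the real trigger private:

```python
TRIGGER = "🧭"  # placeholder -- the actual trigger emoji isn't disclosed

def is_triggered(message: str) -> bool:
    """The trigger fires only when the message starts or ends with the emoji."""
    stripped = message.strip()
    return stripped.startswith(TRIGGER) or stripped.endswith(TRIGGER)

def handle(message: str) -> list[str]:
    """Return the processing steps to run for this message."""
    if not is_triggered(message):
        return ["respond_normally"]
    return [
        "show_active_role",
        "give_prompt_feedback",
        "orchestrator_picks_role",
        "execute_with_role",
    ]
```

Because the check is positional and binary, there is no ambiguity for the model to resolve: either the full orchestration sequence runs or it doesn't.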
I’ve only been using it for a few hours, but it already feels like a breakthrough. The quality of responses when I use the trigger is noticeably higher than when I don’t. Not because the AI is smarter — because the process forces better thinking.
The great cron job cleanup
With the briefing format finally locked down, I audited every automation prompt. All twenty of them.
What I found was a mess. The six briefing prompts — three time slots, two destinations each — were all slightly different. Some referenced the style guide but not the template. Some had error handling, some didn’t. The section names were inconsistent: one said “X PULSE,” another said “X TRENDS & SENTIMENT.”
I standardized everything. All six briefing prompts now reference both the template and the style guide. All have the same error handling. All use the same depth rule: 10 sentences for major moves, 5 for normal sections, 3 for slow news days. All use the same seven section names.
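An audit like this is easy to keep honest with a small lint pass over the prompts. A hypothetical sketch: the file names and the depth-rule keys are assumptions for illustration, but the sentence budgets (10/5/3) are the rule from above:

```python
# Every briefing prompt must reference the shared template and style guide,
# and all six share one canonical depth rule.
REQUIRED_REFERENCES = ("BRIEFING_TEMPLATE.md", "STYLE_GUIDE.md")  # assumed names
DEPTH_RULE = {"major_move": 10, "normal": 5, "slow_day": 3}  # sentences per section

def lint_prompt(prompt_text: str) -> list[str]:
    """Return the standardization problems found in one cron prompt."""
    problems = []
    for ref in REQUIRED_REFERENCES:
        if ref not in prompt_text:
            problems.append(f"missing reference to {ref}")
    return problems
```

Run it over all six prompts and inconsistencies surface in seconds instead of in tomorrow's briefing.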
Then I cleaned up the cron schedule. Removed the Flight Monitor — I already bought the tickets. Removed the Evening Journal Check — redundant. Moved the AI Journey job to 01:00 Amsterdam time so it runs after the bookmark digest instead of conflicting with it.
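The reordering is the kind of thing a crontab comment makes obvious. A sketch using a timezone-aware cron (CRON_TZ is honored by cronie-style crons; the command names and the digest time are illustrative assumptions, only the 01:00 slot comes from the cleanup above):

```crontab
# Times in Amsterdam local time; assumes a cron that honors CRON_TZ.
CRON_TZ=Europe/Amsterdam

30 0 * * *  run-job bookmark-digest   # hypothetical command and time
0  1 * * *  run-job ai-journey        # moved to 01:00 so it follows the digest
```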
Net result: fewer jobs, cleaner prompts, consistent output. The kind of work that’s invisible but prevents a hundred small problems down the line.
Bookmark digest to guide pipeline
Something clicked today about content creation.
My bookmark digest arrived — the daily summary of everything I saved on X. Fourteen bookmarks from the last 24 hours, categorized and summarized. Good for reference, but not publishable.
I asked Opus to turn it into a guide. The same kind of guide I’d write myself — first person, practical, teaching through experience. And it produced something genuinely good. A 5,000-word guide about the AI Agent Operator role that reads like I wrote it.
The entire pipeline, from bookmark to publishable content, now runs with minimal effort. The hard part was figuring out the style instructions; once I nailed those, the output quality jumped dramatically.
Three rules that made the difference: First, always write from first person. Second, use my handle — panke or @astergod — never my real name. Third, frame advice for people who are where I was a week ago. Not condescending, not expert-level. Just “here’s what I found, here’s what works.”
I documented this style so the agent can replicate it on every future digest. The goal: daily publishable content generated from my actual learning, in my actual voice, with minimal manual editing.
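In practice, the documented style boils down to a short preamble that gets prepended to every digest before it goes to the model. A sketch; the wording is paraphrased from the three rules above, not the actual style file:

```python
STYLE_RULES = [
    "Write in first person.",
    "Refer to the author as panke or @astergod, never a real name.",
    "Frame advice for readers one week behind the author: practical, "
    "not condescending, not expert-level.",
]

def build_guide_prompt(digest: str) -> str:
    """Prepend the persistent style rules to a bookmark digest."""
    rules = "\n".join(f"- {rule}" for rule in STYLE_RULES)
    return (
        f"Style rules:\n{rules}\n\n"
        f"Turn this digest into a first-person guide:\n\n{digest}"
    )
```

Because the rules live in code rather than in chat history, every future digest gets the same voice without re-explaining it.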
What I actually shipped today
Fixed an exposed API key. Regenerated evening market briefing in the correct format — twice. Standardized all six briefing cron job prompts. Updated four system files with persistent rules. Created a trigger phrase system for quality control. Cleaned up the cron schedule. Built a bookmark-to-guide content pipeline. Documented the guide style for future automation.
More invisible work than visible output. But Day 4 was about solving problems that would have haunted me for weeks if I’d ignored them. The security fix prevents a potential disaster. The format standardization prevents daily frustration. The trigger phrase prevents quality degradation. The content pipeline creates a repeatable system for publishing.
End-of-day reflection
Four days in. The pattern I notice: each day has a theme that I don’t plan but emerges naturally.
Day 1 was discovery — everything is new, everything is exciting. Day 2 was expansion — add features, connect tools, build the system bigger. Day 3 was debugging — things break, you fix them, you understand them deeper. Day 4 was discipline — making the system follow rules consistently.
The progression makes sense. You can’t enforce rules on a system you don’t understand. You can’t understand a system you haven’t broken. You can’t break a system you haven’t built.
Today’s biggest insight: the real skill in AI isn’t prompting. It’s system design. Writing a good prompt is a one-time effort. Building a system where good prompts run automatically, consistently, and correctly — that’s the compound advantage.
My AI agent is four days old. She has twenty automated jobs, three market briefings per day, a daily bookmark digest, a content pipeline, a journal system, and a growing memory. She still forgets things. She still gets formats wrong. She still needs explicit instructions for things I’ve told her ten times.
But she’s getting better. And more importantly, I’m getting better at making her better.
That’s the game.
Day 4 complete. The real skill in AI isn’t prompting. It’s system design — building systems where good prompts run automatically, consistently, and correctly.
Day 4 of ∞ — @astergod Building in public. Learning in public.
Want to learn what I learned on this day?
Play Day 4 in the Learning Terminal →