Day 7 · Saturday, February 22, 2026

Google Can't Read What You Built

Saturday. Rest day. But then the SEO audit landed.

Where I'm at

Saturday. Rest day. Girlfriend is out, friends came over, and I'm deliberately doing nothing productive. Between catching up and relaxing, I ended up poking at some system stuff — not because I had to, but because I was curious. That's the difference between work and obsession. Rest days aren't about zero activity. They're about zero obligation.

But then the SEO audit landed. And it changed how I think about everything I've built this week.

The invisible website

I ran an SEO audit on panke.app — mostly out of curiosity, something to poke at between conversations with friends. The result stopped me mid-scroll.

The site is completely invisible to Google.

Every single page returns one line of HTML: <div id="root"></div>. That's it. All the journal entries, all the guides, all the content I've been writing for seven days — Google sees literally nothing. The audit called it "the single biggest structural problem."

Seven days of writing. Fourteen thousand words across journal entries and guides. Hours of formatting, deploying, sharing links on X. And for anyone searching "how to set up an AI agent" or "learn AI from scratch," my site didn't exist. Not ranked low. Not buried on page ten. Not there at all.

Here's what happened. The site is a React single-page app. React renders everything in the browser using JavaScript. When Googlebot visits, it sees the empty HTML shell before JavaScript runs. In theory, Google can render JavaScript. In practice, it's unreliable, slow, and often just doesn't happen. My content exists only inside a JavaScript bundle that the crawler may never execute.
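The problem is easy to demonstrate without any crawler at all: strip the markup from an SPA shell and see what text is left. A minimal sketch, where the HTML below is a stand-in for the kind of page React serves before hydration, not the actual panke.app response:

```shell
# Write the kind of HTML shell a client-rendered React app serves
# (a stand-in, not the real panke.app response).
cat > shell.html <<'EOF'
<!doctype html>
<html>
  <head><title>panke.app</title></head>
  <body><div id="root"></div><script src="/bundle.js"></script></body>
</html>
EOF

# Approximate what a non-JS crawler can index: remove all tags,
# then remove whitespace. Whatever remains is the readable content.
text=$(sed 's/<[^>]*>//g' shell.html | tr -d '[:space:]')
echo "indexable text: '${text}'"   # prints: indexable text: 'panke.app'
```

Only the page title survives the stripping. Every journal entry, every guide, every word of body text lives in the JavaScript bundle, which is exactly why a crawler that doesn't execute JS sees an empty site.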

And it gets worse. Even if Google could render the JavaScript, Cloudflare's bot protection would stop it first. A cdn-cgi/challenge-platform script runs before anything else loads. Googlebot hits the wall, gets challenged, bounces.

Two layers of invisibility: no content in the HTML, and the crawler can't even reach the empty HTML. Double-locked from the inside.

I've been writing daily journals and guides thinking I was building a public resource. Turns out I was writing into a void.

That hit harder than any bug this week. Broken cron jobs, I can fix in an hour. A broken gateway, I can restart in five minutes. But building something that nobody can find — that's not a technical problem. That's a strategic one. Every day I spent polishing content was a day I could also have spent making sure that content was discoverable. I optimized the wrong thing.

Backup before you break things

Before touching anything, I backed up the site three different ways: sent the index.html to myself via Telegram, pushed everything to GitHub, and created tar.gz archives on the server. Three copies in three locations.
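The server-side copy is the one worth scripting. A sketch of that step, with temp directories standing in for the real site and backup paths so it runs anywhere; the other two copies need credentials, so they're shown as comments:

```shell
# Sketch of the "backup before you touch it" step. Paths are illustrative;
# temp dirs stand in for the live site root and the backup directory.
SITE=$(mktemp -d)
BACKUPS=$(mktemp -d)
printf '<div id="root"></div>\n' > "$SITE/index.html"

STAMP=$(date +%Y%m%d-%H%M%S)
ARCHIVE="$BACKUPS/site-$STAMP.tar.gz"

# Copy 1: tar.gz archive on the server
tar -czf "$ARCHIVE" -C "$SITE" .

# Copies 2 and 3 follow the same shape (commented out, since they need
# real credentials): push to GitHub, send index.html to Telegram.
#   git -C "$SITE" add -A && git -C "$SITE" commit -m "pre-fix backup" && git -C "$SITE" push
#   curl -F "chat_id=$CHAT_ID" -F "document=@$SITE/index.html" \
#        "https://api.telegram.org/bot$BOT_TOKEN/sendDocument"

# Verify the archive actually contains the file before trusting it.
tar -tzf "$ARCHIVE" | grep -q index.html && echo "backup verified"
```

The verification line matters as much as the backup itself: an archive you never listed is a backup you only hope exists.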

This is a direct lesson from Day 4, when I accidentally exposed an API key in production. That taught me something I won't forget: never touch production without a backup. The five minutes you spend creating a safety net saves the five hours you'd spend rebuilding from memory. It's the same principle as the system itself — the unsexy work prevents the expensive disaster.

The cron jobs were lying to me

While watching the system from the outside, I noticed the Aster content drafts were coming out identical every time. Same prices, same news, same structure. I pulled up three consecutive drafts side by side. Nearly word for word.

Root cause: the cron jobs weren't fetching live data. They were executing templates with stale values. The scripts assumed they had access to the same environment as my interactive terminal — API keys loaded, variables set. But cron runs in a stripped-down environment. It doesn't inherit your shell profile. It doesn't know what you know.

Same class of bug as Day 6's usage tracker showing $0.00. Works when you run it manually, fails silently when the scheduler runs it. The fix is always the same: make the script self-contained. I'm starting to think this is the most common failure mode in automation — not broken logic, just mismatched assumptions about the world the code runs in.
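The mismatch is easy to reproduce without waiting for the scheduler: `env -i` starts a shell with a near-empty environment, the way cron does. The fix then shows itself — have the script source its own environment file at the top instead of assuming the interactive shell's. File names and the `API_KEY` variable here are illustrative:

```shell
# Cron doesn't read .bashrc or inherit exported variables, so a script that
# works interactively can fail silently under the scheduler.
ENVFILE=$(mktemp)
cat > "$ENVFILE" <<'EOF'
export API_KEY=example-key
export PATH=/usr/local/bin:/usr/bin:/bin
EOF

# Without the env file, the variable is simply gone:
env -i /bin/sh -c 'echo "before: API_KEY=${API_KEY:-<unset>}"'

# The fix: the script loads everything it needs itself.
env -i /bin/sh -c ". '$ENVFILE'; echo \"after: API_KEY=\$API_KEY\""
```

Running both lines prints `before: API_KEY=<unset>` and then `after: API_KEY=example-key`. A self-contained script behaves the same whether you run it or cron does, which is the whole point.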

Rest is a feature

I'm making this explicit because I know myself: weekends are rest days. Not "light work days." Not "catch up on small tasks." Rest.

This feels counterintuitive when you're in build mode. Every day off feels like falling behind. But the SEO realization hit today precisely because I wasn't heads-down building. I had the mental space to step back and look at what I'd actually created versus what I thought I'd created.

When you're building, you see progress. When you stop, you see the gaps. Both matter. But you can only see the gaps from the outside.

End of day

Seven days in. One full week. And the sharpest lesson didn't come from building — it came from stepping back.

Building is the exciting part. You write code, you see results, you ship something into the world. But all of that is worthless if nobody can find it. SEO, indexing, discoverability — these aren't glamorous topics. They're plumbing. But they're the difference between a project that reaches people and a project that exists in a vacuum.

There's a version of this week where I never ran that audit. Where I kept building, kept writing, kept shipping into a void and feeling productive the entire time. The journals would stack up. The guides would multiply. And none of it would ever appear in a search result. That's the scariest part — not the problem itself, but how long it could have gone unnoticed.

I built a journal system and forgot that Google needs to be able to read it.

A week of writing, invisible. Day 7 complete. The site is live. The site is invisible. Those two things are both true at the same time. Fixing that starts Monday.

Day 7 of ∞ — @astergod Building in public. Learning in public.

Want to learn what I learned on this day?

Play Day 7 in the Learning Terminal →