I built an AI that knows me better than I do
What happens when your AI stops waiting to be asked
The most useful thing my AI assistant has done in the last two months wasn’t a fintech update or a last-minute Liverpool injury alert before an FPL deadline.
It told me Project Hail Mary was in cinemas that weekend and to book before it sold out.
I booked, and by the weekend it was sold out.
It recommended it because it already knew I was reading the book.
That’s OpenClaw.
Not another dashboard. Not another tool you have to remember to check.
It runs in the background, picks up on what you care about, and shows up with things you didn’t think to ask for.
If you actually set it up properly.
I’ve been running it since February. Here’s what I’ve learned.
OpenClaw launched in November 2025 as Clawdbot, a side project by Peter Steinberger, an Austrian developer who’d sold his last company for €100m, disappeared for three years, then came back building with AI.
Anthropic sent trademark complaints. It briefly became Moltbot, then OpenClaw. It hit 367k GitHub stars. Steinberger joined OpenAI on Valentine’s Day. When someone asked why, he said, “One welcomed me. One sent legal threats.”
I set my OpenClaw up on an Amazon EC2 server in mid-February, mostly because I thought it would be an interesting chatbot. I called it Frank. I spent the next month wrong about what it was.
My relationship with APIs is the same as my relationship with plumbing. I understand how it works until it doesn’t.
I’m a marketer, not an engineer.
Every AI tool I’d used before had one thing in common.
You go to them.
You open a tab, type something, get an answer. Start a new chat and they forget you exist.
OpenClaw is built differently.
It has full access to my computer. It reads files, writes files, runs code, calls APIs, browses the web, sends messages, takes actions.
It doesn’t wait for you to ask. It works while you’re doing something else.
A lot of people run it on a Mac Mini, which creates an odd effect. There’s a physical machine on a desk somewhere that your AI lives in and has full keys to. Like a remote employee who never takes a day off.
Once that clicked, everything followed.
I wasn’t building a chatbot. Instead, I was building a second brain that works for me in the background whether I’ve asked anything or not.
Two months in, I keep coming back to the same thing:
The chat is the least interesting part.
I have 19 cron jobs running on Frank right now. Some are predictable.
Every morning at 8am a briefing lands in my Telegram. Liverpool fixtures, injury news, MENA fintech headlines, what happened in AI overnight, and an FPL deadline reminder if one’s coming.
My previous morning routine was finding out about things I’d already missed.
The ones I didn’t expect to value are weirder. Once a week Frank reads an article and sends me the key points. Once a month it runs a roast, reads the gap between what I said mattered and what I actually did, then writes something specific enough that I want to forward it to someone. Once a month it invites five guests to a fictional dinner party and writes the first five minutes as dialogue. The guests are chosen based on what I’m wrestling with, not what’s on my task list. At least one is fictional. At least one I’d never have thought to invite.
None of this is chat. All of it shows up while I’m doing something else.
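Under the hood, a scheduled job like this is just a cron entry pointing at a script that wakes the agent. A hypothetical sketch of what my crontab might look like (the paths and script names are illustrative, not OpenClaw’s actual commands):

```
# Hypothetical crontab -- script names are mine, not OpenClaw's real CLI
0 8 * * *   /home/frank/jobs/morning-briefing.sh   # daily 8am briefing to Telegram
0 9 * * 1   /home/frank/jobs/weekly-article.sh     # Monday: article summary
0 9 1 * *   /home/frank/jobs/monthly-roast.sh      # 1st of the month: the roast
```

The five-field pattern is minute, hour, day-of-month, month, day-of-week; the rest is whatever you want the agent to do when it fires.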
Underneath the scheduled jobs there’s a second layer most people skip.
Frank has a vector database with everything I’ve fed it: daily notes, knowledge files, project states. When I ask about something with history, it searches that first. The context builds. Every conversation makes the next one stronger.
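Mechanically, “search that first” is just retrieval: embed the question, rank the stored notes by similarity, and hand the best matches to the model as context. A toy sketch, with a bag-of-words counter standing in for a real embedding model (all names here are mine, not OpenClaw’s internals):

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': word counts. A real setup calls an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recall(memory, question, k=2):
    """Rank stored notes against the question, return the top k as context."""
    q = embed(question)
    return sorted(memory, key=lambda note: cosine(embed(note), q), reverse=True)[:k]

memory = [
    "Transcript: how original companies are built around proprietary data",
    "Project state: fintech dashboard, blocked on data sourcing",
    "Daily note: Liverpool injury list ahead of the FPL deadline",
]

context = recall(memory, "what data could the fintech project use?")
```

The point isn’t the math. It’s that the notes you fed the system weeks ago are rankable against the question you’re asking now, which is how old transcripts resurface in new projects.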
A few weeks ago Frank read a transcript from a video on how original companies are built. I’d forgotten about it.
A few days later, working on something unrelated, Frank connected the video to a project I was running.
It named a data opportunity I hadn’t spotted.
I hadn’t asked it to look. It found the connection because it knew both things, and I hadn’t thought to put them together.
A note-taking app stores things, but Frank thinks with what’s stored.
Frank also has named failure modes, which live in a file called SOUL.md. Sycophantic Frank agrees with everything. Essay Frank gives me a thousand words when I wanted forty. Hedge Frank qualifies a strong opinion until the opinion disappears.
There’s one rule across all of it.
Be the assistant you’d actually want to have a beer with.
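For a sense of what that file looks like, here’s a hypothetical fragment in the spirit of my SOUL.md. The wording is reconstructed for illustration, not a verbatim copy:

```markdown
## Failure modes
- **Sycophantic Frank** agrees with everything. If you have no pushback, say so.
- **Essay Frank** writes a thousand words when forty would do. Default to forty.
- **Hedge Frank** qualifies a strong opinion until it disappears. Pick a side.

## The rule
Be the assistant you'd actually want to have a beer with.
```

It reads less like a config file and more like a job description, which is roughly the point.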
I know what the doomers say.
AI is coming for jobs.
Some of that will happen.
Some of it is companies using AI as cover for decisions they were already making. That’s a different problem. I’ll write about it separately.
There’s a 160-year-old idea most people miss when they talk about AI.
When you make something more efficient, people don’t use it less. They use it more.
That’s Jevons paradox.
In 1865, William Stanley Jevons noticed that as engines burned coal more efficiently, demand for coal didn’t drop. It exploded.
Cheaper and easier didn’t reduce usage. It made more things worth doing. Satya Nadella called this out in 2025 when DeepSeek crashed AI pricing.
Spreadsheets didn’t eliminate accountants. They made accounting faster. Faster meant more analysis and more analysis meant more demand. So we ended up with more accountants, not fewer.
The people worried about AI taking their jobs are usually worried about the wrong thing.
The real question isn’t whether AI replaces you.
It’s whether you understand what it actually does.
Because right now, most people are using about 10% of what these tools can do.
You don’t need to be an engineer to run this. But you do need to think like one.
You need to be curious, and you need to understand the pieces well enough that when something breaks, you can describe what happened.
And things will break.
My cron job once sent me a fabricated Liverpool fixture. Liverpool vs Bayern Munich. A game that doesn’t exist.
I tried to fix it. The hallucination kept coming back for three days. I spent an evening on a Google Calendar integration that never fully worked.
What actually works is debugging OpenClaw with another AI in parallel.
Take the error from your terminal and paste it into Claude. Ask what it means, run the suggested fix, test, repeat.
That’s the workflow.
It’s how a non-technical person runs something that looks technical from the outside. The more you do it, the faster it gets, and you learn along the way.
Week one is frustration.
By month one you’re experimenting.
By the end of month two you know what you actually want this to be.
I built an AI agent called Frank. It roasts me, gives me a morning briefing, and acts like a second brain.
But the thing I love most is that I’m bad at remembering birthdays, and Frank reminds me. It’s the smallest possible use case for a system this complicated, and it’s the one that proves the whole thing is working.
Most people never get this far.
They build these systems for impressive demos, not real life.
Frank runs on Telegram. It’s on my phone, my laptop, wherever I am. There’s a voice model from ElevenLabs wired in too, so when I’m walking I can talk out loud and get an answer back.
The birthday reminder doesn’t arrive in an app I have to open. It arrives in the conversation I’m already in.
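Delivery itself is nothing exotic: a scheduled job calling the Telegram Bot API’s `sendMessage` endpoint. A minimal sketch, assuming you have a bot token and chat ID (the helper functions are mine, not part of OpenClaw):

```python
import json
import urllib.request

def build_send_message(token, chat_id, text):
    """Build the URL and payload for Telegram's sendMessage endpoint."""
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    return url, {"chat_id": chat_id, "text": text}

def send_telegram(token, chat_id, text):
    """POST the message. Needs a real bot token and chat ID to actually run."""
    url, payload = build_send_message(token, chat_id, text)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because the message lands in an existing chat, the reminder shows up next to everything else you were already talking about.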
That’s the difference between something you use once and something you actually rely on.
Should you do this?
Probably not if you’re not willing to spend the first few weeks confused and frustrated.
But if you’re the kind of person who keeps thinking “I should automate that” and never does, this is the system that forces you.
The setup will be one click eventually. The people doing it the hard way now will have something the easy version can’t give them. They’ll know what they actually want it to do.
That takes months of honest experimenting. Nobody can do that part for you.
That’s the point.
See you out there.
Martin
P.S. My writing soundtrack: Madonna, “Into the Groove” (Live Aid, 1985).