I've Seen a Thousand OpenClaw Deploys. Here's the Truth.
We made a YouTube video showing how NonBioS can deploy OpenClaw on a fresh Linux VM automatically - zero human intervention, about 7 minutes start to finish. It was meant as a demo of what NonBioS can do with any open source software.
It went a little further than we expected.
Since then, we’ve had roughly a thousand OpenClaw deployments through our infrastructure. People come in, spin up a VM, get OpenClaw running, connect it to WhatsApp or Discord, and start experimenting with this thing that Jensen Huang called “the operating system for personal AI.”
I also spoke with multiple people in my own network - engineers, founders, technical operators - who deployed OpenClaw independently and spent real time trying to make it useful. Not a weekend of tinkering. Weeks. Some of them genuinely wanted to make it work and went to great lengths setting it up.
Here’s what I found: there are zero legitimate use cases.
I don’t want to be unfair - OpenClaw is not fake. It’s a real piece of software. It installs. It runs. It connects to your messaging apps. It can talk to Claude and GPT. It can execute shell commands. The technology exists.
But when I looked at what people are actually doing with it - across our thousand deploys, across conversations with my network, across the flood of LinkedIn and Twitter posts - I couldn’t find a single use case that holds up under scrutiny.
The core issue is memory, and everything else flows from it.
OpenClaw runs as a persistent agent. It’s supposed to be your always-on assistant. But its memory is unreliable, and the worst part is that you don’t know when it will break.
Think about what that means in practice. You ask OpenClaw to send an email on your behalf. It’s been following a conversation thread about a birthday party you’re planning. Three people confirmed. One person declined. OpenClaw sends the update email - but it’s lost the context about who declined. Now you’ve sent a message with wrong information to everyone on the list, and you didn’t catch it because the whole point of an autonomous agent is that you’re not supposed to be checking every output.
An autonomous agent that you have to verify every time is just a chatbot with extra steps.
This isn’t a bug that gets fixed in the next release. It’s a fundamental constraint of how OpenClaw manages context. The agent runs, the context fills up, things get forgotten. Sometimes the important things. You’ll never know which things until after the damage is done.
I’ve spent the last year working on this exact problem at NonBioS. We call our approach Strategic Forgetting, and I can tell you from deep experience: keeping an AI agent coherent over long task horizons is the hardest engineering problem in this entire space. It’s not something you solve with a memory architecture that maps each day, month, and year to a separate file. The brain is not a list of files that you index. You don’t hold a complete high-level record of everything that happened last month, and you can’t ‘pull in’ the details of a specific day on demand. You remember what was important, all at once, and you forget the details unless they were important too. That is the core of Strategic Forgetting.
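To make the contrast concrete, here is a toy sketch of the idea in Python. Everything in it is invented for illustration (the class, the scoring formula, the birthday-party facts); it is not the NonBioS implementation, just the shape of importance-weighted forgetting versus recency-based eviction:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    importance: float  # 0.0 (trivial detail) .. 1.0 (critical fact)
    age: int = 0       # turns since this memory was recorded

class StrategicForgettingStore:
    """Toy context store: keeps what matters, lets details fade."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.memories: list[Memory] = []

    def remember(self, text: str, importance: float) -> None:
        for m in self.memories:
            m.age += 1
        self.memories.append(Memory(text, importance))
        self._evict()

    def _evict(self) -> None:
        # Score = importance discounted by age; drop the weakest first.
        # A naive FIFO or day-indexed store would instead evict by recency,
        # losing critical facts ("Dana declined") as readily as trivia.
        while len(self.memories) > self.capacity:
            self.memories.sort(key=lambda m: m.importance / (1 + 0.1 * m.age))
            self.memories.pop(0)

store = StrategicForgettingStore(capacity=3)
store.remember("Chatted about cake flavors", importance=0.2)
store.remember("Dana declined the invite", importance=0.9)
store.remember("Three guests confirmed", importance=0.9)
store.remember("Someone sent a meme", importance=0.1)
store.remember("Party moved to Saturday", importance=0.8)

print(sorted(m.text for m in store.memories))
# → ['Dana declined the invite', 'Party moved to Saturday', 'Three guests confirmed']
```

The point of the sketch: when the store fills up, the meme and the cake chatter get dropped, but “Dana declined” survives even though it is older. An agent that evicts by recency instead is exactly the one that sends the birthday email with the wrong guest list.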
After going through everything I could find - our deploy data, user conversations, posts online - the only use case that genuinely works is daily news summaries. OpenClaw searches the web for topics you care about, summarizes them, and sends the summary to you on WhatsApp every morning.
That’s it. That’s the killer app.
A personalized daily briefing is nice. But you can already do this with a Zapier workflow and any LLM API. Or with ChatGPT’s scheduled tasks. Or with about a dozen other tools that have existed for years. You don’t need a 250,000-star GitHub project running on a dedicated server with root access to your environment to get a morning news digest.
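To show how little machinery the digest use case actually needs, here is a sketch of the whole pipeline in Python. The function names and message format are mine, not OpenClaw’s, and the fetch and model calls are stubbed out; in a real version, `summarize` would be one chat-completion request to any LLM API, and a cron job or Zapier trigger would push the result to WhatsApp or email:

```python
def summarize(topic: str, headlines: list[str]) -> str:
    """Stub standing in for a single LLM API call per topic."""
    return f"{topic}: {len(headlines)} item(s), top story: {headlines[0]}"

def build_digest(feeds: dict[str, list[str]]) -> str:
    """Format the morning message a scheduler would deliver."""
    lines = ["Good morning! Your briefing:"]
    for topic, headlines in feeds.items():
        lines.append("- " + summarize(topic, headlines))
    return "\n".join(lines)

# Hypothetical fetched headlines; a real version would pull RSS or search results.
feeds = {
    "AI agents": ["Agent memory still unsolved", "New context paper"],
    "Markets": ["Chips rally again"],
}
print(build_digest(feeds))
```

That is the entire “killer app”: fetch, summarize, format, send. Nothing about it requires a persistent agent with root access.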
But there is one part of the OpenClaw saga that I think needs to be said plainly.
The vast majority of posts you see about OpenClaw - “I automated my entire team with OpenClaw,” “OpenClaw replaced three of my employees,” “My OpenClaw agent runs my business while I sleep” - are designed to ride the hype. People know that OpenClaw content gets engagement right now, so they produce OpenClaw content. The incentive is the audience, not the accuracy.
I’ve talked to people behind some of these posts. In every case, when you dig deeper, the story is one of two things: either what they built could already be done with standard AI tools (ChatGPT, Claude, any decent LLM with a simple integration), or it’s aspirational - a weekend prototype that technically works in a demo but that nobody would trust with real tasks.
I’m not calling anyone a liar. I think most of these people genuinely believe in what they’re building. But there is a meaningful gap between “I got OpenClaw to do something cool once” and “I rely on OpenClaw to do something important every day.” I haven’t found anyone in the second category.
The safety situation around OpenClaw has been well documented, so I won’t belabor it. This is the environment in which people are connecting OpenClaw to their email, their calendar, and their messaging apps. With an agent that has unreliable memory. Running on their personal computers.
We made the NonBioS deployment video specifically because we saw this problem - at minimum, if you’re going to experiment with OpenClaw, do it in an isolated VM where a compromise doesn’t touch your personal data. That’s table stakes, and most people aren’t even doing that.
So should you bother?
Here’s my honest take. If you have a weekend to spare and you enjoy tinkering with new technology, OpenClaw is a fascinating experiment. You will learn things about how AI agents work, about the gap between demos and production, about why context management matters. It’s a great educational experience.
But if you’re evaluating whether to invest real time in OpenClaw as it exists today, you can give it a pass without feeling left out. You’re not missing a productivity revolution. You’re missing a morning news digest and a lot of time spent configuring YAML files.
The ideas behind OpenClaw are right. The era of AI agents that do real things on real computers is here. I believe that deeply - it’s what we’re building at NonBioS every day.
But the execution isn’t there yet. And until the memory problem is solved - until you can actually trust an autonomous agent to remember what matters and forget what doesn’t, consistently, over hours and days of work - the rest is theater.

