
How One Developer Built the "Biggest AI Moment Since ChatGPT" in 90 Days—And Why He Refused to Sell


Key Points:

  • Peter Steinberger built OpenClaw (originally Clawdbot) in under three months as a personal tool, evolving it from a one-hour WhatsApp prototype into one of the fastest-growing open-source projects in GitHub history
  • The project demonstrated autonomous AI agents capable of executing real-world tasks, moving beyond chat interfaces to become operational systems that can control computers, read screens, and chain tools independently
  • Despite $10,000-$20,000 in monthly infrastructure costs, a trademark dispute with Anthropic, and serious acquisition interest from Meta and OpenAI, Steinberger kept the project open source
  • OpenClaw represents a fundamental shift from "AI as conversation" to "AI as execution," with implications for how software is built, who controls intelligence infrastructure, and the future viability of traditional apps and APIs
  • The story offers crucial lessons for founders on rapid prototyping, the power of solving personal problems, managing viral growth, and strategic decision-making when corporate giants come calling

In November 2025, Peter Steinberger spent one hour building a tool that would eventually attract offers from Meta and OpenAI. It wasn't meant to be a product. It was meant to let him text his own computer.


The tool—initially called WA-Relay, then Claudus, then Clawdbot, and finally OpenClaw—went public on January 1, 2026. Within weeks, it became one of the fastest-growing repositories in GitHub history, surpassing 100,000 stars and sparking what many observers called the most significant AI development moment since ChatGPT's launch in 2022.


But the real story isn't about GitHub stars or viral growth. It's about what happens when AI breaks free from chat windows and starts executing tasks in the real world—and what one developer's decisions reveal about the future of software, control, and intelligence infrastructure.



The Prototype That Changed Everything


Steinberger didn't start with grand ambitions. After selling his document platform company PSPDFKit to Insight Partners in 2021, he became obsessed with a simple question: "If AI can reason, why is it trapped inside chat windows?"


His answer was WA-Relay, a rough prototype connecting WhatsApp to a local AI loop running on his machine. The evolution from prototype to phenomenon happened through real-world use, not strategic planning.


The pivotal moment came in Marrakesh. While traveling, Steinberger sent his system a voice message—the kind of casual communication anyone does with friends. The agent detected the file format, used ffmpeg to process it, called OpenAI's Whisper API for transcription, and responded. He hadn't coded any of that specific chain.


The system had independently selected and orchestrated the necessary tools. This autonomous tool-chaining behavior marked the shift from AI assistants that answer questions to AI agents that perform actions.
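The kind of chain the agent assembled in Marrakesh can be sketched in a few lines. This is an illustrative reconstruction, not OpenClaw's actual code: the file names and the `transcribe` callback (standing in for the Whisper API call) are hypothetical.

```python
import subprocess
from pathlib import Path

AUDIO_EXTS = {".ogg", ".opus", ".m4a", ".mp3", ".wav"}  # common voice-note formats

def is_audio(path: str) -> bool:
    # Step 1: detect that the incoming attachment is an audio file.
    return Path(path).suffix.lower() in AUDIO_EXTS

def ffmpeg_cmd(src: str, dst: str) -> list[str]:
    # Step 2: build the ffmpeg call that normalizes the clip to
    # 16 kHz mono WAV, a format speech-to-text models accept.
    return ["ffmpeg", "-y", "-i", src, "-ar", "16000", "-ac", "1", dst]

def handle_attachment(path: str, transcribe) -> str:
    # Step 3: chain the tools. `transcribe` stands in for the
    # Whisper API call; non-audio files just pass through.
    if not is_audio(path):
        return f"received {path}"
    cmd = ffmpeg_cmd(path, "clip.wav")
    # subprocess.run(cmd, check=True)  # executed on the agent's host
    return transcribe("clip.wav")
```

The point is not any single step but that the agent chose and ordered them itself; here the chain is hand-wired only to show its shape.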


For business leaders, this distinction matters profoundly. An assistant requires explicit instructions for every step. An agent interprets goals, selects appropriate tools, and executes multi-step workflows without predefined paths. The efficiency implications are substantial.



When Viral Growth Becomes Operational Warfare


On January 1, 2026, Clawdbot went public. The growth was immediate and overwhelming. Tech influencers amplified it. The developer community embraced it. Within days, it attracted tens of thousands of GitHub stars.


Then Anthropic intervened. The name Clawdbot sounded too similar to their flagship product Claude, creating trademark concerns. The project briefly became Moltbot before settling on OpenClaw.


What appeared to be a simple rebranding exercise turned into what Steinberger described as a "public war." Within seconds of claiming the new name, impersonators sniped the handle. The stolen accounts promoted crypto tokens and distributed malware. Root NPM packages were targeted. Infrastructure redirects broke.


Steinberger created decoy names and coordinated quiet renames. He paid approximately $10,000 for a business account to reclaim the OpenClaw identity. Crypto promoters flooded Discord channels. Twitter mentions became unusable. At one point, he seriously considered deleting the entire project.


The lesson for founders building in public: viral attention attracts not just users and admirers, but organized bad actors. Security considerations that seem theoretical at 100 users become existential at 100,000. Platform dependencies that feel convenient become vulnerabilities. Growth without infrastructure preparation creates exploitable gaps.


Most founders would welcome such problems—they signal success. But they require resources, processes, and security thinking that solo developers rarely possess.



The Economics of Open Source at Scale


While fighting impersonators, Steinberger faced another challenge: OpenClaw was burning $10,000-$20,000 monthly in infrastructure and API costs. This wasn't venture-backed spending. This was personal capital funding an open-source project.


VCs began calling. Meta showed interest. OpenAI explored possibilities. Each conversation likely involved substantial funding offers and acquisition interest. For context, GitHub's acquisition by Microsoft in 2018 valued the platform at $7.5 billion, and that was before the current AI boom.


Steinberger refused to build an enterprise fork or accept terms that would compromise the open-source nature. His position was clear: if OpenClaw survived, it would survive as open source.

This decision contradicts conventional startup wisdom. When major tech companies express acquisition interest, founders typically engage seriously. The financial outcomes can be life-changing. The resources and distribution these companies offer can amplify impact.


Yet Steinberger prioritized control over compensation. His reasoning centered on a philosophical belief about intelligence infrastructure: SaaS centralizes intelligence; OpenClaw decentralizes it. Instead of renting AI capabilities through vendor platforms, users host agents locally and grant permissions directly. Control shifts from companies to operators.


For business leaders evaluating their own AI strategies, this tension between centralization and control will only intensify. Every SaaS dashboard represents a dependency. Every API call represents vendor lock-in. Every cloud-hosted model represents data flowing to external systems.



Why Every App Is Now "A Slow API"


During his conversation on the Lex Fridman podcast, Steinberger made a provocative claim: "Every app is now a slow API."


The logic is straightforward. Most applications are interfaces—screens and buttons—sitting atop systems that AI agents can now operate directly. If OpenClaw can read screens, control browsers, and execute actions, official APIs become optional. The app layer itself becomes unnecessary friction.
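One way to picture the claim: to an agent, the official API and the app's interface are just two routes to the same outcome, one fast and one slow. A minimal sketch, with illustrative function names that are not from OpenClaw:

```python
from typing import Callable, Optional

def complete_task(
    task: str,
    api: Optional[Callable[[str], str]] = None,
    drive_ui: Optional[Callable[[str], str]] = None,
) -> str:
    # Fast path: a structured API call, milliseconds per request.
    if api is not None:
        return api(task)
    # Slow path: the agent reads the screen and clicks through the
    # app, seconds per step. The app is operated as a "slow API".
    if drive_ui is not None:
        return drive_ui(task)
    raise RuntimeError(f"no route available for task: {task!r}")
```

In this framing an official API is an optimization, not a prerequisite: withholding one no longer blocks the agent, it only slows it down.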


Steinberger predicted agents could eliminate up to 80% of existing applications. Coming from a developer whose system can inspect its own source code and debug itself when things break, the claim isn't empty hyperbole.


Consider the implications. Companies spend millions building polished mobile apps, web interfaces, and dashboard experiences. These interfaces exist because humans need visual representations and simple controls. But if an AI agent can accomplish the same task through screen reading or browser automation, the app becomes a costly intermediary.


The strategic question for business leaders: Are you building apps for humans or building systems for agents? These are increasingly different requirements with different investment priorities.



Programming as Knitting: The New Developer Landscape


Steinberger also challenged the developer community directly, arguing that programming may become like knitting—still creative, still valuable, but no longer a sustainable moat.

His prediction: there won't be "simple iOS engineers anymore, only builders." The core skill shifts from writing syntax to directing systems like OpenClaw. Implementation becomes commoditized; orchestration becomes premium.


This mirrors historical technological shifts. Accountants didn't disappear when spreadsheets emerged; they evolved from calculation specialists to financial strategists. Graphic designers didn't vanish when Photoshop arrived; they elevated from technical execution to creative direction.


But the pace of this AI transition is different. Spreadsheets took decades to fully transform accounting. OpenClaw went from one-hour prototype to 100,000 GitHub stars in roughly two months.


For companies making hiring and training decisions right now, this creates genuine uncertainty. Do you hire traditional software engineers and risk skill obsolescence? Do you hire "AI-native builders" when the category barely exists and standards haven't emerged? Do you invest in retraining programs when the target skills keep shifting?


There are no clear answers yet. But the question itself has become unavoidable.



The Open Source Gambit


Steinberger's decision to keep OpenClaw open source despite corporate interest represents a bet on ecosystem over equity.


Open source creates different value dynamics than proprietary software. When Clawdbot/OpenClaw went viral, thousands of developers immediately began experimenting, extending, and adapting the code. This distributed innovation happens at a pace no single company can match, regardless of resources.


The risk is that someone else captures the value. A well-funded competitor could fork the code, add enterprise features, and build a commercial offering. Meta or OpenAI could integrate similar capabilities into their platforms. The original creator gets attribution but not compensation.


The opportunity is shaping the direction of an emerging technology category. As the reference implementation, OpenClaw influences how developers think about AI agents, what features they expect, and what standards they adopt. This soft power has historically proven valuable—sometimes more valuable than direct ownership.


For business leaders evaluating open source strategies, Steinberger's approach offers a template: build in public, solve real problems, prioritize adoption over monetization initially, and trust that value creation precedes value capture.



What This Means for Your Business


The OpenClaw story contains three actionable insights for business leaders:


First, AI agents are fundamentally different from AI assistants. If your AI strategy assumes that ChatGPT-style interfaces represent the mature form of the technology, you're planning for the past. Agents that execute tasks, chain tools, and operate systems represent the next phase. Evaluate which workflows in your organization could benefit from autonomous execution rather than conversational assistance.


Second, control over intelligence infrastructure will become a strategic decision. The choice between cloud-hosted AI services and locally-run agents isn't purely technical—it's about data sovereignty, vendor dependency, and long-term flexibility. As models become more capable, the question of who controls them and where they run becomes more consequential.


Third, the app layer is under threat. If your business depends on maintaining user engagement through interfaces, consider what happens when agents can accomplish the same tasks without those interfaces. This doesn't mean apps disappear immediately, but it does mean the value proposition shifts. Interfaces become convenience features, not moats.


The broader pattern here is acceleration. Steinberger built something significant in under three months, working alone. The tools, models, and frameworks now available enable individual developers to create systems that previously required teams and years. This democratization of capability means competition can emerge from unexpected places, and incumbent advantages erode faster.



Looking Forward


OpenClaw reached 100,000 GitHub stars faster than most venture-backed startups reach their Series A. It attracted interest from the world's largest tech companies. It demonstrated capabilities that many experts thought were still years away.


Yet the technology itself isn't the most interesting part. What matters is the shift it represents: from AI as a conversation partner to AI as an operational system. From intelligence you talk to, to intelligence you direct.


For business leaders, the question isn't whether this transition happens—it's how quickly and how to position for it. Companies that understand AI agents as infrastructure rather than features will have strategic advantages. Those that continue treating AI as fancy autocomplete will find themselves outpaced.


Steinberger made his choice: keep it open, shape the ecosystem, and accept that the financial outcome was secondary to directional influence. Not every founder would make that choice. But every founder should understand why it mattered—and what it signals about where this technology is headed.


The age of AI conversation may have started with ChatGPT. The age of AI execution is starting now.



Frequently Asked Questions


What exactly is OpenClaw and how does it differ from ChatGPT?

OpenClaw (originally Clawdbot) is an AI agent framework that executes tasks on local systems rather than just providing conversational responses. While ChatGPT and similar tools operate within chat interfaces and respond to queries, OpenClaw can control your computer, execute terminal commands, read screens, manage files, and autonomously chain together different tools to complete complex workflows. The fundamental difference is that ChatGPT is designed for conversation; OpenClaw is designed for action. You can connect it through WhatsApp, give it system permissions, and have it perform real operational tasks—from transcribing voice messages using ffmpeg and Whisper to debugging its own source code when something breaks.


Why did Anthropic force the rename from Clawdbot to OpenClaw?

Anthropic, the company behind Claude AI, raised trademark concerns because "Clawdbot" sounded too similar to their flagship product "Claude." This type of trademark enforcement is standard practice when companies believe another product's name could create market confusion or dilute their brand. The similarity wasn't just phonetic—Clawdbot was explicitly named as a playful reference to Claude, which created clear trademark proximity. Steinberger initially renamed the project to "Moltbot" before settling on "OpenClaw." While the rename was legally necessary, it created operational chaos when impersonators immediately stole the new handles across multiple platforms, leading to security incidents, crypto scams, and infrastructure disruptions that nearly caused Steinberger to abandon the project entirely.


How can a solo developer afford $10,000-$20,000 monthly in infrastructure costs?

Most solo developers can't sustainably absorb those costs, which is precisely why Steinberger's situation was unsustainable without external funding or monetization. Having sold his previous company PSPDFKit to Insight Partners in 2021, Steinberger had personal capital to fund the project temporarily, but these weren't trivial expenses even for someone with a successful exit. The costs came primarily from API usage (calling language models at scale), server infrastructure for a rapidly growing user base, and various cloud services required to run a distributed AI agent system. This financial pressure is exactly why venture capital and corporate acquisition offers became attractive—and why his decision to refuse them and keep the project open source was economically significant rather than just philosophically principled.


What does it mean when Steinberger says "every app is now a slow API"?

This statement challenges the fundamental value proposition of traditional applications. Most apps are essentially visual interfaces (screens, buttons, menus) that sit on top of core functionality that could be accessed through APIs or direct system calls. When AI agents can read screens, interpret interfaces, and execute actions programmatically, the app layer becomes unnecessary friction rather than helpful abstraction. For example, instead of a human opening a banking app, navigating menus, and transferring money through button clicks, an AI agent could accomplish the same task by either calling the bank's API directly or by reading and controlling the app interface automatically. The "slow" part refers to the inefficiency of maintaining complex UI/UX when the underlying task could be completed through direct system access—making the app a performance bottleneck rather than an enhancement.


Why would Meta and OpenAI be interested in acquiring or funding OpenClaw?

Both companies have strategic interests in AI agent technology and ecosystem control. For Meta, AI agents represent a potential platform play—if agents become the primary way people interact with digital services, controlling the agent framework means controlling user access to the broader internet, similar to how mobile operating systems became gateway platforms. For OpenAI, acquiring OpenClaw would eliminate a potential competitor while gaining proven technology for local AI execution and autonomous tool-chaining, capabilities that complement their API-based model offerings. Additionally, both companies face criticism around centralization and vendor lock-in; associating with or acquiring a popular open-source project could provide strategic positioning benefits. The project's rapid growth and developer mindshare made it valuable not just for its technology but for its ecosystem momentum and community credibility.


How did OpenClaw grow so fast when it was just a personal tool?

Several factors converged to accelerate OpenClaw's growth. First, timing: it launched in early 2026 when developers were actively experimenting with AI agents and looking for frameworks that went beyond chat interfaces. Second, it solved a real problem that Steinberger himself needed—personal tools built to solve genuine problems tend to have better product-market fit than theoretical solutions. Third, the technology demonstrated genuinely novel capabilities (autonomous tool-chaining, self-inspection, voice message processing) that weren't widely available in other frameworks. Fourth, being open source on GitHub created natural discovery and viral mechanics within developer communities. Fifth, the name changes and public conflicts with Anthropic, impersonators, and platform security issues generated additional attention and publicity. The combination of technical innovation, open-source accessibility, authentic origin story, and dramatic external conflicts created narrative momentum that traditional marketing budgets struggle to replicate.


What are the security risks of giving an AI agent terminal access to your computer?

The security implications are substantial and not fully solved. Granting an AI agent terminal access means it can execute any command you could execute—deleting files, modifying system configurations, installing software, accessing sensitive data, or making network requests. Current AI models occasionally "hallucinate" or make mistakes, and those errors become dangerous when tied to system-level permissions. Additional risks include: prompt injection attacks where malicious actors craft inputs that trick the agent into executing harmful commands; unintended data exposure if the agent sends sensitive information to external APIs; credential leakage if the agent accesses and transmits authentication tokens; and irreversible actions if the agent executes destructive commands before you can intervene. OpenClaw and similar frameworks implement permission systems and sandboxing to mitigate these risks, but using such tools requires understanding that you're trading convenience and capability for increased risk surface. This is appropriate for developers and technical users who understand the implications, but premature for general consumer use without significant additional safety infrastructure.
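Permission systems of the sort mentioned above can start as something as simple as an operator-defined allowlist checked before any command runs. A minimal sketch, assuming a shell-command agent; the allowed commands and protected paths below are illustrative, not OpenClaw's actual policy:

```python
import shlex
from pathlib import Path

ALLOWED_COMMANDS = {"ls", "cat", "git", "ffmpeg"}  # operator-approved binaries
PROTECTED_PREFIXES = ("/etc", "/System", str(Path.home() / ".ssh"))

def approve(command: str) -> bool:
    # Gate every shell command the agent proposes: unknown binaries
    # and anything touching protected paths are rejected before
    # execution, so a hallucinated destructive command never runs.
    parts = shlex.split(command)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        return False
    return not any(
        arg.startswith(prefix)
        for arg in parts[1:]
        for prefix in PROTECTED_PREFIXES
    )
```

A static allowlist like this does not stop prompt injection on its own, which is why real deployments layer it with sandboxing and human confirmation for irreversible actions.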


Could AI agents really eliminate 80% of existing apps as Steinberger predicts?

The 80% figure is provocative but directionally plausible over a long enough timeline, though the transition will be more nuanced than simple elimination. Many apps exist primarily as interfaces to backend systems—banking apps, customer service portals, simple utilities, dashboard tools. If AI agents can accomplish these tasks through APIs, screen reading, or browser automation, the dedicated app becomes optional rather than necessary. However, apps won't disappear entirely for several reasons: many tasks benefit from visual presentation and human oversight; some applications involve creativity and subjective judgment where automation remains limited; certain experiences (games, social media, content consumption) are inherently interface-dependent; and regulatory, security, and liability concerns will slow agent adoption in sensitive domains. The more accurate prediction is that apps become increasingly optional for transactional tasks while remaining relevant for experiential, creative, and oversight functions. The app economy will consolidate around experiences that agents can't replace rather than interfaces that simply wrap API calls.


Why did Steinberger choose to make the agent "aware of its own source code"?

This capability—sometimes called self-inspection or self-modification—fundamentally changes how the system handles errors and improvements. When OpenClaw encounters a bug or limitation, Steinberger can simply tell the agent to examine its own code, identify the problem, and implement a fix. Traditional software requires developers to manually debug, write patches, and deploy updates. A self-aware agent can diagnose its own issues because it has access to its complete codebase and understands its architecture. This isn't fully autonomous self-improvement (the agent still needs human direction and approval), but it dramatically accelerates the development cycle. For Steinberger working solo, this meant he could maintain and improve a complex system faster than would normally be possible. The broader implication is that software development itself becomes a task that AI agents can participate in, blurring the line between users, developers, and the systems they build—which is part of what Steinberger meant when he said programming might become "like knitting."


What does "decentralized intelligence" mean in the context of OpenClaw versus SaaS?

Centralized intelligence (the SaaS model) means AI capabilities live in vendor-controlled cloud infrastructure. You access intelligence by sending data to external servers, where processing happens, and results return. The vendor owns the models, controls access, sees your data, sets pricing, and can change terms or shut down service. Decentralized intelligence (the OpenClaw model) means AI capabilities run on your own infrastructure—your computer, your servers, your environment. You host the models locally, control what data they see, decide which tools they can access, and aren't dependent on external vendors for operational continuity. The tradeoff is complexity and resource requirements: running capable models locally demands technical expertise and computational hardware that not everyone possesses. For businesses, this distinction matters enormously for data sovereignty (especially in regulated industries), vendor lock-in avoidance, long-term cost control, and strategic independence. OpenClaw represents a bet that as models become more efficient and hardware becomes more powerful, the benefits of decentralized intelligence will outweigh the convenience of centralized services—at least for organizations that have the technical capacity to self-host.
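In practice the operational difference is often a single URL. Several local runtimes (Ollama, llama.cpp's server) expose an OpenAI-compatible chat endpoint, so pointing a client at localhost instead of a vendor keeps prompts and data on the operator's machine. A sketch; the port and model name are Ollama-style assumptions, not a universal standard:

```python
import json
import urllib.request

# Assumed local endpoint and model name, for illustration only.
LOCAL_URL = "http://localhost:11434/v1/chat/completions"

def build_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    # The payload shape is identical to a cloud call; only the host
    # changes, so the prompt never leaves the operator's machine.
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        LOCAL_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

The symmetry of the API surface is what makes the centralized-versus-decentralized choice reversible: the same agent code can be repointed as hardware and model efficiency improve.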




© 2023 White Space
