Clawdbot

Moltbot: the open source AI assistant revolutionising personal digital management

Clawdbot is the assistant that many users wanted, but that no major company could have designed and launched

by Marco Trabucchi

5' min read

Translated by AI
Versione italiana

5' min read

Translated by AI
Versione italiana

What if there was an AI agent capable of doing everything current chatbots do, but much more? The answer is that it already exists and is operational. In recent weeks, those who frequent technology channels have probably come across Clawdbot - now renamed Moltbot after a legal dispute with Anthropic due to its resemblance to the Claude brand - a piece of software that definitely raises the bar for the AI agents we are used to and that many geeks have already installed on their personal computers.

Thousands of users on X, Discord and Reddit have shared enthusiastic testimonials. Those who have tried it speak of a small revolution in their digital lives. Moltbot is described as a real personal assistant that, if properly trained, is able to get to know a user's habits, commitments and preferences in depth. A 'digital butler', as many have called it, capable of proactively managing reminders, communicating with customers and automating many daily tasks. What's special about it is that it synchronises between different devices, from computer to smartphone, and can be commanded via chat via apps such as WhatsApp or Telegram. "It's like having an external brain," writes one user on Reddit.

Loading...

It was not Big Tech that created it, but an Austrian developer, Peter Steinberger, who released it as an open source project. His story tells a lot about Moltbot's DNA: after creating PSPDFKit, a PDF management framework also adopted by companies such as Lufthansa and IBM and used by hundreds of millions of people, Steinberger sold his stake in 2021 after a 100 million euro investment. After a long hiatus, he returns today with Clawdbot.

What is Clawdbot / Moltbot

Unlike traditional chatbots - ChatGPT, Claude or Gemini - which require an app or website to be opened, Clawdbot simply becomes a contact to chat with. The conversation between user and AI continues seamlessly between devices and platforms, like a single thread. The real discontinuity, however, lies in three characteristics.

The first is persistent memory: Clawdbot preserves conversations, preferences and habits, building an increasingly detailed model of the user over time. The second is proactivity: it does not just respond, but intervenes, suggests and anticipates needs, for example by remembering personal and work deadlines. The third is direct access to the system and its 'agentic' capacity. Clawdbot can send emails, move files, fill in online forms, control the browser and execute scripts. And, if it does not have the right tool, it can create it. Many users have already configured it to handle mail automatically, filtering and replying to routine emails. But not only that.

Federico Viticci of MacStories, a journalist and developer of recognised expertise in the Apple world, dedicated a long article to Clawdbot with the eloquent title: 'Moltbot showed me what the future of personal AI assistants looks like'. "I've worked with a digital assistant," says the author, "that knows my name, my preferences for my morning routine and the way I use Notion and Todoist, but can also control Spotify, my Sonos speaker, Philips Hue lights and Gmail. It runs on Anthropic's Claude Opus 4.5, but I talk to it via Telegram. Basically, it's like talking to a friend'.

Skills, APIs and real costs

Needless to say, trying it out requires a number of tricks, which for the time being make it a tool suitable mainly for computer-savvy users. All the code is open source and available on GitHub, but Clawdbot does not have a proprietary AI model: it relies on an external LLM via API, such as Anthropic's Claude, which is recommended by the developer and the most widely used by the community. To use it, one therefore needs access to the API of a paid AI service and, not least, an always-on computer or, alternatively, a cloud server. In practice, Clawdbot runs locally on the user's device, while a gateway connects it to messaging platforms.

On the cost side, access to AI models via APIs starts at around EUR 20 per month and can grow rapidly depending on the intensity of use, plus any charges for other services or external gateways.

 

The key element of Clawdbot's success is the community, which has created a repository of freely installable skills (extensions): integrations with Spotify, Notion, GitHub, Google Drive, home automation services and other APIs. But there's more: you can ask Moltbot to create a customised skill, and it will be able to develop it itself.

The Dark Side: Security and Cognitive Addiction

But behind the enthusiasm lurks a dark side that cannot be ignored. Security experts have identified instances of Clawdbot exposed on the Internet, often without adequate protection. Unauthorised access means being able to get hold of API keys, tokens and sensitive credentials. This is because in order to function, Clawdbot must store a series of 'digital keys': secret codes that allow it to communicate with other services, such as artificial intelligence models (Claude, GPT), messaging apps, email, calendar and cloud. In jargon they are called API keys or tokens, but conceptually they are equivalent to passwords. If Moltbot is misconfigured and remains visible on the Internet, anyone who finds it can access these keys. At that point it is not just 'spying' on the assistant: it can use it. It can read messages, access e-mails, exploit connected services and even consume the AI subscription at the owner's expense.

It's like leaving your front door open with a post-it note on it that says: 'House, car and office keys in the top drawer'. The problem is not Clawdbot itself, but the fact that it is a very powerful tool often entrusted to inexperienced users, unfamiliar with concepts such as network security or authentication. Developers explain how to protect it, but many skip these steps out of haste or enthusiasm. Thus, an assistant designed to simplify life can turn into a direct access point to one's digital life if misconfigured.

Rachel Tobac, CEO of SocialProof Security, explained to The Verge that an AI agent with administrative access to a computer can be compromised through a technique called prompt injection. Basically, an attacker can send 'malicious' commands or messages to the assistant, which executes them as if they were legitimate. This type of vulnerability is intrinsic to the operation of many AI agents, because the system does not distinguish between secure and manipulated instructions, and there is currently no definitive solution to eliminate it completely.

Then there is a less tangible, but no less insidious risk: cognitive dependency. Many users tell of starting out by asking for information and quickly moving on to delegating decisions. The agent remembers everything, proposes solutions, acts. The result is a gradual relinquishment of decision-making effort, what some call 'cognitive vegetalisation'.

Elon Musk: entro la fine dell'anno i robot umanoidi sul mercato

It's not AGI, but it warns us

The most fascinating (and also disturbing) aspect of Clawdbot is its ability to improve itself: being open source and with access to its own filesystem, it can modify its own code. The result is a cycle of self-improvement reminiscent of the trajectory towards general artificial intelligence (AGI), while remaining guided by human instructions. Moltbot is not AGI, but it is probably the closest example so far of a publicly accessible product moving in that direction. For many analysts, it still represents a turning point in the history of AI assistants.

The enthusiasm surrounding it is understandable. Its capabilities are remarkable: what Apple and Google have been promising for years on phones - and have been unable to deliver for various reasons - has been achieved by a single developer through an open source project, redefining the relationship between man and machine. It remains to be seen whether we will drive this transformation or end up being driven by it. Because, as Federico Viticci wrote in his conclusion, 'after experiencing this kind of superpower, there is no going back'.

Copyright reserved ©
Loading...

Brand connect

Loading...

Newsletter

Notizie e approfondimenti sugli avvenimenti politici, economici e finanziari.

Iscriviti