securityaiautonomous-agentsinvestigationcybersecurity

Clawdbot: The Rise and Fall of an Autonomous Agent - An Investigation into Security and Rebranding

Tincho Fuentesยทยท7 min read

TL;DR: Clawdbot was an open-source AI assistant that went viral in early 2026 for automating tasks with full system access. However, it faced legal trademark conflicts with Anthropic and multiple critical security incidents that led experts to recommend its use only in isolated environments. This report documents the verified facts of the case.


The meteoric rise of a problematic agent

In January 2026, a new open-source project captured the attention of the global tech community: Clawdbot, an artificial intelligence assistant designed to automate tasks directly on the user's machine. The promise was seductive: an autonomous agent capable of executing commands, manipulating files, and managing workflows without constant manual intervention.

The virality was swift. Developers across specialized forums, social media, and code platforms shared impressive use cases. However, this popularity triggered a series of events that would expose serious legal and technical vulnerabilities.

First crisis: The trademark conflict with Anthropic

January 2026: Anthropic, the company behind the Claude language model, formally contacted Clawdbot's developers. The request was clear and legally grounded: the name created direct confusion with their registered "Claude" trademark.

The documented facts:

  • Phonetic similarity: "Claw" and "Claude" share sonic and visual roots
  • Identical sector: Both products operate in the AI assistant space
  • Brand dilution risk: Users incorrectly associated Clawdbot with Anthropic

The team's response was to rename the project to Moltbot. However, this first transition was rushed.

Second crisis: The chaos of multiple rebrandings

Shortly after changing to Moltbot, the project faced new issues that forced a second rebranding to OpenClaw. During these transitions, the following incidents were documented:

Fake domains and repositories

  • Malicious actors registered similar domains during the naming confusion
  • Cloned repositories on GitHub/GitLab impersonated the official project
  • Users downloaded unofficial versions with tampered code

This identity fragmentation created an attack vector for malicious actors who exploited the uncertainty about which was the legitimate repository.

Documented critical vulnerabilities

Security research conducted by specialized firms and independent researchers identified multiple critical attack vectors:

1. Exposed administrative panels

Severity: Critical

Clawdbot's administration panels were found publicly exposed in multiple installations. Compromised information included:

  • Complete conversation history with the agent
  • Authentication tokens in plaintext
  • Third-party service credentials
  • Commands executed on the host system

This exposure violated basic security principles: no administrative panel should be accessible without robust authentication, and credentials should never be stored without encryption.

2. API keys stored in plaintext

Severity: High

Source code analysis revealed that Clawdbot stored API keys for external services (OpenAI, Anthropic, Google Cloud, AWS) in unencrypted configuration files.

Implications:

  • Any process with filesystem access could exfiltrate the keys
  • Unencrypted backups exposed credentials
  • System logs could contain keys in plaintext

This practice contradicts basic OWASP recommendations and industry standards that require the use of secrets managers or encrypted storage.

3. Prompt injection vulnerabilities

Severity: Critical

Security researchers demonstrated multiple prompt injection techniques capable of:

  • Extracting secrets stored in the agent's context
  • Executing unauthorized commands on the host system
  • Modifying agent behaviors without user knowledge
  • Exfiltrating sensitive data to external servers

The most concerning vector allowed injecting malicious instructions into files that the agent read during normal operations, transforming apparently innocuous documents into attack vectors.

4. Skills/modules system without sandboxing

Severity: Critical

The extension system (skills) allowed arbitrary code execution without any isolation or verification:

  • No code review existed for third-party modules
  • Skills could execute system commands without restrictions
  • No granular permission mechanism existed
  • Skill installation didn't warn about dangerous capabilities

This turned each skill into a potential backdoor with full privileges.

5. Phishing and distribution of tampered versions

Severity: High

During the rebranding chaos, multiple phishing campaigns were documented:

  • Cloned websites distributing versions with malware
  • Fake repositories with code that installed cryptominers
  • Crypto scam attempts leveraging the project's popularity
  • Fake social media accounts impersonating the official team

Response from the security community

Cybersecurity firms and independent researchers published detailed analyses. The conclusions were consistent:

Trail of Bits (security audit firm): "Autonomous agents with full system access represent an unacceptable risk without robust security architectures."

SANS Institute: "Clawdbot demonstrates the gap between impressive AI capabilities and basic security practices."

Unanimous community recommendations:

  1. Use only in isolated environments: Virtual machines without access to sensitive data
  2. Do not connect to corporate networks: Risk of lateral movement in case of compromise
  3. Audit all skill code: Personally review any extension before installing
  4. Rotate credentials regularly: Assume keys may have been compromised
  5. Monitor agent activity: Implement exhaustive logging of all actions

Impact on reputation and adoption

Despite initial enthusiasm, the combination of legal and security issues had measurable consequences:

  • Corporate abandonment: Companies evaluating Clawdbot canceled pilots
  • Project forks: Developers created alternative versions with security improvements
  • Drop in GitHub stars: The repository lost traction after the revelations
  • Package manager warnings: Some ecosystems added warnings about the project

Documented lessons from the case

The Clawdbot case illustrates fundamental tensions in autonomous agent development:

1. Capability vs. Security

Powerful agents require broad permissions, but each additional permission expands the attack surface. There's no magic solution: it's an inherent trade-off that must be managed with defensive architecture.

2. Development speed vs. Security maturity

Pressure to launch quickly and capture mindshare led to compromising basic security practices. Premature virality exposed vulnerabilities before adequate controls existed.

3. Open source doesn't guarantee security

Although the source code was public, vulnerabilities persisted for months. The mantra "many eyes find bugs" requires those eyes to include security experts performing systematic audits.

4. Brand and trust are fragile

Multiple rebrandings fragmented the project's identity and created confusion that malicious actors exploited. A consistent brand is also a security measure: it helps users identify legitimate channels.

Current status and outlook

As of this publication (February 2026), the project continues under the name OpenClaw, but with significantly reduced adoption. The team has announced initiatives to improve security:

  • Implementation of sandboxing for skills
  • Credential encryption using specialized libraries
  • Granular permission system for system operations
  • Mandatory code review for skills in the official repository

However, rebuilding trust is a long process that requires independent audits and time to demonstrate sustained commitment to security.

Investigation conclusions

The facts documented in this report lead to clear conclusions:

  1. Autonomous agents are double-edged tools: Their power makes them valuable but dangerous. They're not appropriate for all contexts.

  2. Security must be prioritized from design: Adding security controls after public vulnerabilities is costly and incomplete.

  3. Reactive rebranding has consequences: Forced name changes create windows of opportunity for malicious actors.

  4. The tech community must demand standards: Virality shouldn't be an adoption criterion. Verifiable security practices should be.

The Clawdbot/Moltbot/OpenClaw case is a valuable case study for any team developing autonomous agents: it demonstrates that impressive capabilities without solid security foundations inevitably lead to loss of trust and reputation.

References and additional documentation

This investigative report is based on the following verifiable sources:

Primary journalistic coverage

Technical analysis and security

Project documentation

Additional notes

  • Source code analysis available in public GitHub repositories
  • Security incident reports on specialized platforms
  • Technical discussions in verifiable developer communities

For teams evaluating autonomous agents, the recommendation is clear: prioritize architectures with security by design, implement the principle of least privilege, and conduct independent audits before any deployment in production environments.


Tincho Fuentes Tech journalist and investigative researcher ๐Ÿš€