In just 72 hours, one of the fastest-growing AI coding projects in history collapsed into a perfect storm of cryptocurrency fraud, account hijacking, and catastrophic security failures. CLAWDBOT EXPOSED reveals how a promising automation tool became the centerpiece of a $16 million scam that exploited thousands of developers, investors, and tech enthusiasts worldwide.
What started as an innovative AI assistant that garnered over 60,000 GitHub stars in three months ended with exposed credentials, malware-infected extensions, and millions of dollars stolen through fake cryptocurrency tokens[1]. This isn’t just another crypto scam story; it’s a cautionary tale about the dangerous intersection of rapid AI adoption, inadequate security practices, and opportunistic cybercriminals.
Key Takeaways
- Clawdbot achieved 60,800+ GitHub stars before collapsing in 72 hours due to trademark issues, rebranding to Moltbot, and immediate exploitation by scammers[1]
- A fake $CLAWD token on Solana reached a $16 million market cap before crashing to near zero, leaving late investors “rugged” while scammers walked away with millions[1]
- Hundreds of Clawdbot control panels were exposed to the public internet with no authentication, leaking API keys, OAuth secrets, and complete conversation histories[1][3]
- A malicious VS Code extension named “ClawdBot Agent” distributed ScreenConnect malware, granting attackers persistent remote access to developer machines[5]
- 22% of enterprise customers had employees actively using Clawdbot, creating massive corporate security risks across unmanaged personal devices[5]
The Meteoric Rise of Clawdbot

Clawdbot emerged as a revolutionary AI coding assistant that promised to transform how developers interact with artificial intelligence. Unlike traditional coding tools, Clawdbot offered autonomous agents capable of executing commands, sending messages across multiple platforms, and integrating deeply with development workflows.
The platform’s popularity exploded almost overnight. Within just three months, the project accumulated over 60,800 GitHub stars, a metric that typically takes years for even successful open-source projects to achieve[1]. Developers praised its ease of use, powerful automation capabilities, and seamless integration with popular communication platforms including Telegram, Slack, Discord, Signal, and WhatsApp.
The Trademark Trigger
The first domino fell when Anthropic, an $18 billion AI company, issued trademark pressure against the Clawdbot name[1]. The similarity to Anthropic’s “Claude” AI assistant created legal complications that forced an immediate response. The creator announced a rebrand to “Moltbot,” setting off a chain reaction that would expose fundamental vulnerabilities in the platform’s architecture and community trust.
What happened next transformed a simple rebranding into one of 2026’s most significant tech security incidents. The moment the rebrand was announced, scammers moved with lightning speed to exploit the confusion and chaos[1].
CLAWDBOT EXPOSED: The $16M Cryptocurrency Scam
The cryptocurrency fraud that emerged from Clawdbot’s collapse represents one of the most brazen “rug pull” scams in recent memory. Understanding how scammers extracted $16 million from unsuspecting victims requires examining the perfect storm of hype, confusion, and greed.
The Fake $CLAWD Token Launch
Within hours of the Moltbot rebrand announcement, scammers launched a fake cryptocurrency token called $CLAWD on the Solana blockchain[1]. The timing was deliberate: they capitalized on the massive community confusion surrounding the rebrand and the platform’s substantial following.
The scammers crafted a compelling narrative: this was supposedly “the official Clawdbot token” that would power the ecosystem’s future development. They created professional-looking marketing materials, fake roadmaps, and convincing social media campaigns. Online scams cost Canadians millions annually, and this one followed a familiar pattern of exploiting technological complexity.
The token’s trajectory was devastating:
| Timeline | Market Cap | Investor Impact |
|---|---|---|
| Launch Day | $2 million | Early buyers profit |
| Peak (Day 3) | $16 million | FOMO buying frenzy |
| Creator Rejection | Collapse begins | Late buyers trapped |
| Current | Near-zero | Millions lost |
The token reached a staggering $16 million market cap at its peak as speculators rushed in, believing they were accessing “the next big AI coin”[1]. The fear of missing out (FOMO) drove thousands of retail investors to purchase tokens at increasingly inflated prices.
The Rug Pull Execution
When the Clawdbot creator publicly rejected the token and clarified it had no official connection to the project, the market collapsed instantly[1]. Late-stage buyers were “rugged”, crypto slang for being left holding worthless tokens after early investors and scammers cash out.
The scammers retained millions in profits while thousands of victims watched their investments evaporate. This pattern mirrors other investment scams that have become increasingly sophisticated in targeting tech-savvy communities.
Account Hijacking Amplifies the Damage
Immediately following the rebrand, cybercriminals hijacked both the original @clawdbot X (Twitter) account and the GitHub organization[1]. These compromised accounts became megaphones for crypto scams, broadcasting false token launches, fake airdrops, and fraudulent investment opportunities to tens of thousands of unsuspecting followers.
The hijacked accounts lent credibility to the scams. Followers who trusted the official Clawdbot channels saw what appeared to be legitimate announcements. Many didn’t realize the accounts had been compromised until after they’d already sent funds or connected their wallets to malicious contracts.
Sarah Chen, a Toronto-based developer, lost $8,000 in the scam. “I followed Clawdbot for months. When I saw the token announcement on their official Twitter, I thought it was legitimate. By the time I realized it was a scam, my investment was worthless,” she shared in a developer forum post.
The Security Catastrophe: Hundreds of Exposed Control Panels
While the cryptocurrency scam grabbed headlines, security researchers uncovered an even more alarming vulnerability: hundreds of internet-facing Clawdbot control panels exposed to the public with no authentication whatsoever[1][3].
Discovery Through Shodan
Security researcher Jamieson O’Reilly made the disturbing discovery using Shodan, a search engine for internet-connected devices. By simply searching for “Clawdbot Control,” researchers could retrieve complete credentials to hundreds of live deployments[1].
This wasn’t a sophisticated hack requiring advanced techniques. Anyone with basic knowledge of Shodan could access these control panels within minutes. The exposed dashboards contained catastrophically sensitive information:
- API keys for cloud services and third-party integrations
- Bot tokens providing full control over automated agents
- OAuth secrets granting access to connected accounts
- Complete conversation histories, including private communications
- The ability to send messages as users across multiple platforms
- Command execution capabilities on connected systems
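The reporting describes finding these panels with a simple Shodan title search. The sketch below shows what that triage step could look like: a pure helper that flags a banner record as an exposed panel. The field layout mirrors Shodan’s JSON results and the dork string is inferred from the reporting, but treat the whole thing as illustrative, not the researchers’ actual tooling.

```python
# Hypothetical triage helper for Shodan-style banner records.
# The dork below is inferred from the reported "Clawdbot Control" title;
# it is an assumption, not the researchers' published query.
DORK = 'http.title:"Clawdbot Control"'

def looks_exposed(banner: dict) -> bool:
    """Return True when a banner advertises the panel title and the
    server did not answer with an auth challenge."""
    http = banner.get("http") or {}
    title = (http.get("title") or "").lower()
    status = http.get("status", 200)
    return "clawdbot control" in title and status not in (401, 403)

# With the official Shodan client, the search itself would be roughly:
#   import shodan
#   hits = shodan.Shodan(API_KEY).search(DORK)["matches"]
#   exposed = [b["ip_str"] for b in hits if looks_exposed(b)]
```

The point of the helper is the fail-open signature: a recognizable panel title combined with an HTTP 200 instead of a 401 means anyone who finds the host can use it.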
Bitdefender’s security analysis confirmed that these control panels effectively functioned as “master keys” to compromised digital environments[3]. An attacker gaining access could impersonate users, inject malicious content into conversations, modify agent responses, and exfiltrate sensitive data.
The Architectural Flaw
The root cause stems from a fundamental architectural security flaw in how Clawdbot agents handled authentication. The system was designed to auto-approve “local” connections for ease of use[5]. However, when deployments sat behind reverse proxies (a common configuration for internet-facing services), the system incorrectly identified internet connections as “local” and automatically approved them for unauthenticated access.
This design prioritized convenience over security, creating a vulnerability that security experts describe as “security malpractice.” Token Security’s analysis revealed that the platform “prioritizes ease of deployment over secure-by-default configuration”[5].
Real-World Exploitation Scenarios
The exposed credentials enabled multiple attack vectors:
Corporate Espionage: Attackers could access enterprise Slack channels, Discord servers, and Telegram groups where confidential business discussions occurred. With 22% of Token Security’s enterprise customers having employees actively using Clawdbot, the corporate risk was substantial[5].
Data Exfiltration: Complete conversation histories stored in plaintext provided attackers with treasure troves of sensitive information: passwords shared in chats, API keys mentioned in discussions, proprietary code snippets, and strategic business communications.
Impersonation Attacks: The ability to send messages as users across multiple platforms enabled sophisticated social engineering attacks. Attackers could impersonate trusted colleagues, request fund transfers, or distribute malware through seemingly legitimate channels.
Fraud text scam alerts from the Ontario Provincial Police demonstrate how messaging-platform compromises enable widespread fraud campaigns.
The Malware Extension: ScreenConnect Infiltration

On January 27, 2026, the security nightmare deepened when a malicious Visual Studio Code extension appeared on Microsoft’s official Extension Marketplace[5]. The extension, cunningly named “ClawdBot Agent – AI Coding Assistant,” was published under a “clawdbot” account designed to appear legitimate.
The ScreenConnect Payload
Developers who installed the extension unknowingly dropped ScreenConnect malware onto their systems[5]. ScreenConnect is a legitimate remote access tool, but in the hands of attackers, it becomes a powerful weapon for persistent system compromise.
The malware granted attackers:
- Persistent remote access to developer machines
- Ability to monitor screen activity and capture sensitive information
- File system access to steal source code, credentials, and proprietary data
- Keylogging capabilities to capture passwords and authentication tokens
- Network access to pivot into corporate environments
Microsoft subsequently removed the extension from the marketplace, but not before an unknown number of developers had already installed it[5]. The incident highlights the supply chain security risks inherent in modern development workflows.
Supply Chain Attack Vectors
Security engineers identified additional supply chain risks through MoltHub (formerly ClawdHub), the platform’s skill-sharing marketplace. Attackers could distribute backdoored Moltbot “skills” that appeared legitimate but contained malicious code designed to siphon sensitive data from trusted integrations[5].
This attack vector is particularly insidious because it exploits the trust developers place in community-contributed tools. A malicious skill could execute silently in the background, exfiltrating credentials, monitoring communications, or establishing persistence mechanisms, all while appearing to provide legitimate functionality.
Enterprise Security Implications
The CLAWDBOT EXPOSED incident carries profound implications for enterprise security teams worldwide. The convergence of AI adoption enthusiasm, shadow IT deployment, and inadequate security controls created a perfect storm of organizational risk.
The 22% Problem
Token Security’s revelation that 22% of its customers had employees actively using Clawdbot within their organizations illustrates the scale of the challenge[5]. These deployments typically occurred outside official IT channels, on unmanaged personal devices, and beyond the corporate security perimeter.
This “shadow AI” phenomenon mirrors the earlier “shadow IT” crisis where employees adopted cloud services without IT department approval. However, AI agents present amplified risks because they:
- Possess autonomous agency to take actions without human approval
- Integrate deeply with sensitive enterprise systems
- Store conversation histories containing confidential information
- Execute commands that can modify production systems
- Impersonate users across communication platforms
The Unmanaged Device Challenge
Security firms including 1Password, Hudson Rock, and Token Security flagged that Moltbot’s “deep, unapologetic access” to sensitive enterprise systems on unmanaged personal devices creates “high-impact control points” when misconfigured[5].
Consider this scenario: A senior developer installs Clawdbot on their personal laptop to boost productivity. They connect it to the company Slack workspace, GitHub organization, and AWS console. The control panel gets exposed due to misconfiguration. An attacker now has:
- Access to all company Slack channels and private messages
- Ability to commit code to production repositories
- Credentials to modify cloud infrastructure
- Complete conversation history revealing business strategies
The increasing sophistication of online investment scams demonstrates how attackers exploit exactly these types of security gaps.
Security-by-Default Failures
The fundamental issue identified by security researchers is that Clawdbot/Moltbot lacks security-by-default configuration[5]. Non-technical users can spin up instances and integrate sensitive services without encountering:
- Security validation prompts
- Enforced firewall requirements
- Credential encryption validation
- Access control verification
- Network isolation recommendations
This design philosophy assumes users will implement security measures independently, an assumption that real-world deployments proved catastrophically wrong.
Lessons for Tech Leaders and Organizations
The CLAWDBOT EXPOSED scandal offers critical lessons for technology leaders, security professionals, and organizations navigating the AI revolution.
Implement AI Governance Frameworks
Organizations must establish clear policies governing AI tool adoption. These frameworks should include:
- Approved AI tools list vetted by security teams
- Risk assessment processes for new AI integrations
- Data classification guidelines specifying what information can be processed by AI
- Incident response procedures specific to AI security breaches
Enforce Security-First Architecture
AI tool developers must prioritize security-by-default configurations:
- Authentication required for all control panels and administrative interfaces
- Encryption mandatory for credentials and sensitive data storage
- Network isolation with firewall rules enforced during setup
- Least privilege access as the default configuration
- Regular security audits and penetration testing
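Secure-by-default behavior can be enforced at startup rather than left to documentation. A minimal sketch of a control panel that refuses to launch without credentials or with a public bind address; `PANEL_AUTH_TOKEN` and `PANEL_BIND` are hypothetical settings invented for illustration, not real Moltbot configuration:

```python
import os
import secrets

def startup_guard() -> tuple[str, str]:
    """Fail closed: require an auth token and a loopback bind address
    before the control panel is allowed to start.

    PANEL_AUTH_TOKEN / PANEL_BIND are hypothetical settings for this sketch.
    """
    token = os.environ.get("PANEL_AUTH_TOKEN", "")
    if not token:
        # Refuse to run unauthenticated; suggest a strong token instead.
        raise RuntimeError(
            "Refusing to start without PANEL_AUTH_TOKEN. "
            f"Suggested value: {secrets.token_urlsafe(32)}"
        )
    bind = os.environ.get("PANEL_BIND", "127.0.0.1")
    if bind not in ("127.0.0.1", "::1", "localhost"):
        # Binding to all interfaces is exactly how panels end up on Shodan.
        raise RuntimeError(
            f"Refusing to bind to {bind!r}; keep the panel on loopback "
            "or put it behind an authenticated proxy first."
        )
    return token, bind
```

The design choice is that insecure configurations are hard errors, not warnings: the operator must make an explicit, informed decision to widen access.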
Combat Shadow AI Deployment
IT and security teams need visibility into AI tool usage across the organization:
- Network monitoring to detect unauthorized AI service connections
- Employee education about risks of unapproved AI tools
- Approved alternatives that meet security requirements
- Regular audits of cloud service integrations and API key usage
Educate Users on Crypto Scams
The $16 million token scam demonstrates the ongoing need for cryptocurrency fraud education. Organizations should provide training on:
- Recognizing rug pull patterns and suspicious token launches
- Verifying official announcements through multiple trusted channels
- Understanding market manipulation tactics
- Protecting digital wallets and private keys
Resources on protecting seniors from fraud provide frameworks applicable to all age groups.
Verify Before Installing
The malicious VS Code extension incident reinforces fundamental security hygiene:
- Check publisher verification status and history
- Review extension permissions before installation
- Research extensions through independent sources
- Monitor for unusual behavior after installation
- Keep systems updated with latest security patches
The Broader AI Security Landscape

CLAWDBOT EXPOSED represents just one incident in the rapidly evolving AI security landscape. As artificial intelligence becomes increasingly integrated into daily workflows, similar vulnerabilities will continue emerging.
The incident shares concerning parallels with other major technology failures, from the HSBC money laundering scandal, which demonstrated institutional security failures, to emerging concerns about AI extinction risks.
The Speed of Exploitation
What’s particularly alarming is the speed at which attackers exploited the situation. Within hours of the rebrand announcement, scammers had:
- Launched the fake cryptocurrency token
- Hijacked official social media accounts
- Created convincing marketing campaigns
- Extracted millions from victims
This rapid exploitation timeline demonstrates that cybercriminals are monitoring the AI ecosystem closely, ready to capitalize on any disruption or vulnerability.
The Trust Exploitation Factor
The scam succeeded partly because it exploited community trust. Developers who had used and benefited from Clawdbot were predisposed to believe announcements from official channels. This trust made them vulnerable when those channels were compromised.
Building and maintaining community trust requires:
- Transparent communication during crises
- Multi-channel verification for important announcements
- Clear security incident disclosure procedures
- Community education about verification methods
Moving Forward: Rebuilding Trust and Security
For the Moltbot project to recover and for the broader AI ecosystem to learn from this incident, several critical steps are necessary.
Immediate Security Remediation
The Moltbot team must implement comprehensive security improvements:
- Default authentication for all control panels
- Credential encryption using industry-standard methods
- Security configuration wizard that enforces best practices
- Regular security audits by independent third parties
- Vulnerability disclosure program with bug bounties
Community Healing
Rebuilding trust with the developer community requires:
- Transparent post-mortem detailing what went wrong
- Compensation consideration for victims of the security breaches
- Clear roadmap for security improvements
- Regular security updates to the community
- Independent security verification of the platform
Industry-Wide Standards
The incident should catalyze industry-wide AI security standards:
- AI tool security certification programs
- Standardized security assessment frameworks
- Shared threat intelligence about AI-specific attacks
- Best practices documentation for secure AI deployment
- Regulatory frameworks addressing AI security requirements
Conclusion: The $16M Wake-Up Call
CLAWDBOT EXPOSED serves as a stark reminder that innovation without security is a recipe for disaster. The convergence of a $16 million cryptocurrency scam, hundreds of exposed control panels leaking sensitive credentials, and malware-infected development tools created one of 2026’s most significant tech security incidents.
For developers, the lesson is clear: convenience cannot come at the expense of security. Before adopting any AI tool, especially those requiring deep integration with sensitive systems, thorough security assessment is non-negotiable.
For organizations, the 22% statistic (nearly a quarter of enterprises having employees using Clawdbot) demonstrates the urgent need for AI governance frameworks. Shadow AI deployment poses risks comparable to or exceeding traditional shadow IT.
For the AI industry, this incident should trigger serious reflection about security-by-default design principles. Tools that make it easy to expose credentials, store sensitive data in plaintext, and bypass authentication are fundamentally incompatible with responsible AI deployment.
Take Action Today
If you’re currently using Clawdbot/Moltbot:
- Immediately audit your deployment for exposed control panels
- Rotate all API keys and credentials that may have been exposed
- Review conversation histories for sensitive information leakage
- Implement firewall rules restricting access to control panels
- Enable authentication on all administrative interfaces
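The first audit step, confirming that your own panel no longer answers without credentials, can be scripted. A sketch with a pure helper that interprets the probe result; the URL and port in the usage comment are placeholders for your own deployment, and you should only probe hosts you control:

```python
from urllib import request, error

def verdict(status: int) -> str:
    """Interpret an unauthenticated probe of your own control panel."""
    if status in (401, 403):
        return "auth required (good)"
    if 200 <= status < 300:
        return "EXPOSED: panel served content without credentials"
    return f"inconclusive (HTTP {status})"

def probe(url: str) -> str:
    # Probe only deployments you own; the URL is a placeholder.
    try:
        with request.urlopen(url, timeout=5) as resp:
            return verdict(resp.status)
    except error.HTTPError as exc:
        # 401/403 arrive as HTTPError; that's the outcome we want to see.
        return verdict(exc.code)

# e.g. probe("http://your-host:8080/")  # host and port are placeholders
```

A 200 response to this anonymous request means anyone on the internet gets the same access, and every credential the panel holds should be rotated.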
If you’re evaluating AI tools:
- Conduct thorough security assessments before deployment
- Review the tool’s security documentation and track record
- Test in isolated environments before production use
- Verify that security features are enabled by default
- Establish monitoring for unusual activity
If you purchased the $CLAWD token:
- Report the incident to relevant authorities
- Document all transactions for potential legal action
- Learn from the experience to recognize future scams
- Share your story to warn others in the community
The CLAWDBOT EXPOSED scandal will be remembered as a watershed moment in AI security: a $16 million lesson in the critical importance of security-first design, the dangers of rapid adoption without due diligence, and the sophisticated exploitation tactics targeting the AI ecosystem.
The question now is whether the industry will learn from this wake-up call or wait for an even larger disaster to force change.
References
[1] From Clawdbot To Moltbot How A Cd Crypto Scammers And 10 Seconds Of Chaos Took Down The 4eck – https://dev.to/sivarampg/from-clawdbot-to-moltbot-how-a-cd-crypto-scammers-and-10-seconds-of-chaos-took-down-the-4eck
[2] What Actually Happened With Clawdbot Moltbot – https://www.imbila.ai/what-actually-happened-with-clawdbot-moltbot/
[3] Moltbot Security Alert Exposed Clawdbot Control Panels Risk Credential Leaks And Account Takeovers – https://www.bitdefender.com/en-us/blog/hotforsecurity/moltbot-security-alert-exposed-clawdbot-control-panels-risk-credential-leaks-and-account-takeovers
[4] Clawdbot Creator Crypto Harassment – https://beincrypto.com/clawdbot-creator-crypto-harassment/
[5] Fake Moltbot AI Coding Assistant on VS – https://thehackernews.com/2026/01/fake-moltbot-ai-coding-assistant-on-vs.html
[6] Clawdbot Security Bulletproof Secrets – https://advenboost.com/en/clawdbot-security-bulletproof-secrets/
Some content and illustrations on GEORGIANBAYNEWS.COM are created with the assistance of AI tools.
GEORGIANBAYNEWS.COM shares video content from YouTube creators under fair use principles. We respect creators’ intellectual property and include direct links to their original videos, channels, and social media platforms whenever we feature their content. This practice supports creators by driving traffic to their platforms.