Anthropic briefly blocked Peter Steinberger from accessing its Claude models last Friday, citing "suspicious" activity, according to a notification he shared on social media. The OpenClaw creator posted about the incident early that morning, warning that future compatibility with Anthropic's offerings might become more challenging. His account was restored within hours after the post gained widespread attention, though what triggered the reinstatement remains unclear.
Steinberger, who now works for OpenAI, faced a flood of responses on the platform, including one from an Anthropic engineer, who asserted that the company has never banned anyone for using OpenClaw and offered to help. It's unclear whether that intervention resolved the suspension, and Anthropic has not provided further details on the matter.
The temporary ban followed Anthropic’s recent announcement that subscriptions to Claude would no longer cover usage through third-party tools like OpenClaw. Instead, users must pay separately via the Claude API based on consumption, effectively imposing what Steinberger calls a “claw tax.” He claims he was adhering to this new policy by using his API key when the ban occurred.
Anthropic justified the pricing shift by stating that subscriptions weren’t designed to handle the “usage patterns” of claws. These tools can be more compute-intensive than standard prompts or scripts, often running continuous reasoning loops, automating retries, and integrating with numerous external services.
Steinberger dismissed this explanation, suggesting a strategic motive behind the change. He noted the timing coincided with Anthropic rolling out features like Claude Dispatch for its Cowork agent, which allows remote control and task assignment. He implied that Anthropic first incorporated popular open-source functionalities into its proprietary system, then restricted access to alternatives.
During the social media exchange, some commenters questioned Steinberger’s decision to join OpenAI instead of Anthropic. One user remarked, “You had the choice, but you went to the wrong one.” Steinberger retorted, “One welcomed me, one sent legal threats,” highlighting past tensions between him and Anthropic.
Others asked why he continues to use Claude for testing rather than relying solely on OpenAI's models. He clarified that his role at the OpenClaw Foundation involves ensuring compatibility with all model providers, including Claude, which many OpenClaw users still prefer over ChatGPT. His job at OpenAI focuses on future product strategy, and he hinted that he is working to reduce OpenClaw's reliance on Claude in response to the pricing update.
Steinberger did not respond to requests for additional comment, leaving open the broader implications of the incident for open-source AI tooling and competition between model providers.


