The open-source OpenClaw AI tool began circulating through developer communities late last year. Installation guides quickly spread across coding forums and chat groups. Engineers shared setup scripts on GitHub mirrors, troubleshooting tips circulated in private WeChat groups, and freelance technicians began offering paid installation services.
OpenClaw is an AI agent designed to automate tasks by connecting large language models with real computer actions. Created by Austrian developer Peter Steinberger and released in late 2025, the system links AI models with a user’s operating system. This allows it to browse websites, run commands, manage files and organise information.
[See more: China Telecom and Alibaba launch major AI data centre in Guangdong]
For many engineers, the appeal is clear. OpenClaw AI can function like a digital assistant capable of performing real work, from running software tests and analysing data to automating programming workflows, customer service tasks and research. As interest grew, the project quickly surpassed 100,000 stars on GitHub – an indicator of popularity in the open-source community – and attracted millions of visits during the peak of the frenzy.
The rapid spread of the software has also drawn attention to broader tensions within China’s rapidly expanding artificial intelligence ecosystem, particularly the balance between rapid innovation and regulatory oversight.
The rise of AI agents

The surge of interest around OpenClaw AI reflects a broader shift in artificial intelligence development. For much of the past decade, AI systems have largely functioned as analytical or conversational tools. Chatbots powered by companies such as OpenAI can answer questions, summarise documents or generate code. AI agents represent a different step; they can take actions, not just produce information.
OpenClaw AI works by linking large language models with a user’s computer environment. Once configured, the system can break down complex instructions into smaller steps and attempt to carry them out sequentially. For example, a user might instruct the system to research a topic, download relevant documents, summarise the findings and organise the results in folders. The system then attempts to execute those steps with limited supervision.
Autonomous AI agents have attracted growing interest from technology companies and researchers. Organisations including Google DeepMind and Meta have been experimenting with similar agent-based systems designed to automate tasks such as coding, research and software testing. Some researchers say these systems could increase productivity by allowing individuals or small teams to manage larger workloads.
[See more: AI-Immersive shows to feature at Guangdong Fashion Week]
China’s developer community has proven particularly receptive to the idea. The country’s digital economy already relies heavily on integrated online platforms that combine messaging, payments, shopping and logistics. Within ecosystems run by companies such as Alibaba and Tencent, automation tools can interact with multiple services simultaneously. For programmers experimenting with OpenClaw AI, the technology offers a way to automate complex tasks across these platforms.
The surge in interest has even produced a small service economy around the software. Because installing OpenClaw AI can be technically challenging, freelance technicians have begun offering paid setup services online. Reports suggest remote installations can cost around 100 yuan, while in-person setup services may reach roughly 1,500 yuan. During periods of intense demand, some technicians reported earning substantial sums installing the software for developers and entrepreneurs eager to experiment with AI agents.
Security concerns and regulatory scrutiny
Despite the excitement surrounding AI agents, cybersecurity researchers say systems like OpenClaw AI raise important security questions. Because the software performs tasks autonomously, it requires extensive access to a user’s computer environment. In order to operate effectively, it may need permission to run commands, interact with applications, access files and connect to external services.
Security researchers warn that this level of system access could create vulnerabilities if the system is misconfigured or manipulated. Analysts studying AI agent frameworks have highlighted the risk of prompt injection attacks, in which malicious instructions embedded in websites or documents can trick an AI system into performing unintended actions.
In theory, attackers could attempt to persuade an AI agent to download harmful software, expose sensitive information or execute commands that compromise a system. Cybersecurity analysts have warned that some OpenClaw deployments may be exposed online, suggesting the software could be installed without adequate safeguards.
[See more: China bets on AI to create jobs and revive growth]
The rapid adoption of the software has drawn the attention of regulators in China, where authorities have increasingly focused on the governance of artificial intelligence systems. Officials have introduced rules governing generative AI services, algorithms and data security as the technology spreads across industries.
In recent months, Chinese authorities have reportedly warned some government agencies and financial institutions about the potential risks of installing autonomous AI software without a proper security review. Some organisations have restricted the tool while officials evaluate possible vulnerabilities.
The response reflects the broader balancing act within China’s technology policy. The country has invested heavily in artificial intelligence development and is home to a rapidly growing ecosystem of AI startups and research institutions. Companies such as Baidu have launched generative AI models and services, while government initiatives aim to strengthen China’s position in global AI competition.
At the same time, policymakers have emphasised the importance of maintaining oversight of emerging technologies that could affect cybersecurity, data protection or social stability.
A glimpse of the next phase of AI

Whether OpenClaw AI becomes a lasting platform or remains a short-lived technology trend remains uncertain. Open-source software projects often experience waves of rapid experimentation before developers identify stable uses and address security concerns.
However, the attention surrounding the tool highlights a broader shift in how artificial intelligence may evolve. The next generation of AI systems is already moving beyond generating text or images toward autonomous agents capable of performing tasks on behalf of users. If that vision develops further, individuals and companies could deploy AI systems that perform tasks with minimal supervision.
[See more: AI glasses sales in China jumped 80 percent. Here are the 6 brands behind the surge]
For China’s technology sector, the debate around OpenClaw AI also illustrates a familiar tension. The country’s engineers and entrepreneurs continue to experiment with new tools that promise productivity gains and new business models. At the same time, regulators are attempting to ensure that rapidly advancing technologies remain secure and controllable.
As AI agents continue to develop, the debate around OpenClaw AI may offer an early glimpse of the challenges ahead. Governments, companies and developers will likely face new risks as autonomous AI moves from experimental software into real-world use.


