At a high-profile panel in Wuzhen, a senior researcher from China’s DeepSeek, Chen Deli, delivered a stark message: while the near term could feel like a productivity honeymoon, AI may threaten large swathes of jobs within the next decade or two. His comments landed amid renewed attention on DeepSeek after the startup’s low-cost R1 reasoning model drew global notice earlier this year. That combination of a powerful model and an internal warning from a prominent researcher has reignited debate about how quickly AI could drive job displacement and what companies and governments should do in response.
Who Is Chen Deli and What He Said
Chen Deli is a senior researcher at DeepSeek who spoke for the company during one of its rare public appearances. At the World Internet Conference in Wuzhen, he said that AI could eliminate most jobs within the next ten to twenty years, and he urged tech companies to act as guardians of humanity by sounding the alarm and helping to reshape social structures for the transition. He framed the near-term period as a honeymoon phase, in which AI augments human work and raises productivity, but warned that continued capability growth could later produce sweeping disruption. These were not abstract academic remarks: they came from a scientist inside one of the companies that has shaken global AI markets with a cost-efficient reasoning model.
Chen’s comments stand out for two reasons. First, they come from an industry insider rather than an outside observer. Second, DeepSeek’s R1 model and related work have contributed to a perception that highly capable models can be produced at much lower cost than previously assumed. That dynamic raises the possibility that adoption could happen faster and more broadly than many policy frameworks assume.
DeepSeek, R1, and the China AI Moment
DeepSeek was founded in 2023 and broke into global view earlier in 2025 when it released R1, a reasoning-focused model that analysts and rivals described as unusually cost efficient relative to leading Western models. The firm has since kept a low public profile, creating additional intrigue around any public statements its researchers make. Reports describe R1 as matching or rivaling some competitors on benchmarks while reportedly being much cheaper to train and run, a claim that reshaped market conversations about compute intensity and the economics of model scale.
DeepSeek’s rise is also playing into larger geopolitical dynamics. Observers note that breakthroughs from Chinese AI labs feed into an intensifying US-China AI race that affects investment, talent flows, and policy stances on both sides. Chen’s public warning, therefore, did more than flag worker risk; it arrived in the context of a strategic contest where technological capability, national ambition, and regulatory choices now interact in real time.
What Chen Deli Means by Job Displacement and AI Jobs Risk
When Chen speaks of job losses, he is describing a spectrum, not a single event. AI jobs risk covers a range of outcomes: task automation inside roles that remain, role compression that reduces headcount for the same output, and full role elimination when systems handle the end-to-end responsibility. The examples are intuitive: routine back-office processing, standardized contract drafting, basic coding tasks, and template-driven customer support are all areas where generative and reasoning models can already assist or substitute. As models improve at reasoning, the frontier shifts toward higher-skill tasks, expanding the set of roles at risk.
History suggests two important patterns. First, technology rarely removes all jobs without creating some new ones. New categories emerge: AI system designers, model auditors, data governance specialists, and roles tied to human oversight. Second, though net employment can recover, the transition can be disruptive and uneven, affecting sectors, geographies, and demographic groups differently. Chen’s point is about scale and speed: if AI capabilities accelerate rapidly, the social mechanisms that normally smooth transitions may be overwhelmed. That is the core of the AI jobs risk he highlighted.
How Fast Could This Happen? Timelines and Uncertainty
Chen suggested a timeframe of ten to twenty years for major labor market disruption. Forecasting timelines in technological revolutions is notoriously difficult, and reasonable experts disagree. The variables include model capability progress, compute and data availability, regulatory constraints, and business incentives to automate. But DeepSeek’s R1 and similar models have altered baseline assumptions about cost and capability, increasing the probability of faster adoption across industries. That does not guarantee wholesale job loss within a fixed period, but it does raise the urgency of preparing labor markets and social safety nets.
Practical Implications for Policymakers and Industry
Chen’s warning has practical implications that fall into three buckets.
Reskilling and Education
Governments and firms should accelerate investment in reskilling at scale. That includes short-cycle technical training, but equally important are capabilities machines struggle to replicate: creative problem solving, people management, and domain-specific judgment. Public-private partnerships can help align curriculum with market needs.
Corporate Governance and Responsible Deployment
Companies working on generative AI should adopt risk-assessment frameworks, document potential labor impacts, and create phased deployment plans with human-in-the-loop safeguards. Chen called for tech firms to act as whistleblowers, proactively communicating risk and helping to design mitigations rather than hiding the consequences.
Social Safety Nets and Transition Policies
If AI jobs risk materializes at scale, society may need policies ranging from portable benefits to wage insurance and targeted unemployment supports. Policymakers should pilot interventions now rather than wait for crises to emerge.
What Business Leaders Should Do Today
For business leaders, the immediate steps are pragmatic. Map which roles and tasks are most exposed to automation, invest in internal retraining programs, and redesign jobs to combine human strengths with AI augmentation. Treat AI deployments as change programs, not pure technology rollouts, with metrics that include worker outcomes and not only cost savings. Companies should also be transparent with employees about timelines and provide pathways for role transitions. Those actions reduce operational disruption and preserve morale.
Chen Deli’s public warning is a sobering contribution to a debate about speed, scale, and responsibility in AI. It matters because it comes from inside a firm whose technology has already reshaped market expectations about cost and capability. While the future is uncertain, the combination of DeepSeek’s rise and Chen’s cautionary framing should motivate both policymakers and companies to treat AI jobs risk as a pressing problem that requires planning and investment now. Acting early can make the difference between a chaotic displacement and a managed transition where AI elevates human work rather than simply replacing it.