
Humanity "needs to wake up" to the "serious civilizational challenge" posed by powerful artificial intelligence (AI) systems, warns Anthropic chief executive Dario Amodei, who sits at the forefront of AI development but is also one of the most vocal leaders on the potential dangers of the technology.
"I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species. Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it," Amodei wrote in a 38-page essay published on January 26.
The essay, titled 'The Adolescence of Technology', is a follow-up to last year's 'Machines of Loving Grace', which focused on what powerful AI could achieve if developed and deployed well.
In the latest essay, Amodei aims to map out the risks humanity is about to face and chart a plan to tackle them, combining voluntary measures by companies with actions by governments.
"I firmly believe that government actions will be required to some extent, but these interventions are different in character because they can potentially destroy economic value or coerce unwilling actors who are skeptical of these risks," Amodei wrote.
"It’s also common for regulations to backfire or worsen the problem they are intended to solve. It’s thus very important for regulations to be judicious: they should seek to avoid collateral damage, be as simple as possible, and impose the least burden necessary to get the job done," he added.
'Country of geniuses in a data centre'
Amodei's concern stems from his belief that powerful AI systems could be as little as one to two years away, systems he describes as a "country of geniuses in a data centre."
"I think the best way to get a handle on the risks of AI is to ask the following question: suppose a literal “country of geniuses” were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist... (and) for every cognitive action we can take, this country can take ten," he said.
"I think it should be clear that this is a dangerous situation—a report from a competent national security official to a head of state would probably contain words like “the single most serious national security threat we’ve faced in a century, possibly ever.” It seems like something the best minds of civilization should be focused on," he added.
To be sure, Amodei believes that if we act "decisively and carefully", the risks can be overcome. "I would even say our odds are good...But we need to understand that this is a serious civilizational challenge," he said.
Risks posed by AI
Among the risks Amodei highlights, bioterrorism looms largest. "Biology is by far the area I’m most worried about, because of its very large potential for destruction and the difficulty of defending against it," he said.
"A disturbed loner [who] can perpetrate a school shooting, but probably can’t build a nuclear weapon or release a plague...will now be elevated to the capability level of the PhD virologist," he added.
To address these risks, AI companies can build guardrails to limit misuse, including preventing models from assisting in the development of bioweapons, and invest in defences against biological threats. Governments can also set transparency requirements to improve monitoring and collective risk management.
'AI countries'
Amodei also warned that nations could exploit an AI advantage to gain power over others, empowering authoritarian governments and potentially leading to a "global totalitarian dictatorship."
He also reiterated the risk of China taking the lead in AI. "I am not singling out China out of animus to them in particular—they are simply the country that most combines AI prowess, an autocratic government, and a high-tech surveillance state," he wrote.
Amodei has repeatedly argued against selling AI chips to China, even comparing it to selling nuclear weapons to North Korea.
"It makes no sense to sell the CCP the tools with which to build an AI totalitarian state and possibly conquer us militarily," he said.
The next tier of risk is AI companies themselves, he says. "AI companies control large data centres, train frontier models, have the greatest expertise on how to use those models, and in some cases have daily contact with and the possibility of influence over tens or hundreds of millions of users," he wrote, adding that he believes the governance of AI companies "deserves a lot of scrutiny".
'Large-scale job losses'
In the essay, Amodei also warns about "large-scale job losses and concentration of economic power". He has previously predicted that AI could displace 50 percent of all entry-level white-collar jobs within one to five years.
Amodei says that companies should think about how to take care of their employees. "In the short term, being creative about ways to reassign employees within companies ... In the long term, in a world with enormous total wealth, in which many companies increase greatly in value due to increased productivity and capital concentration, it may be feasible to pay human employees even long after they are no longer providing economic value in the traditional sense," he said.
He also noted that Anthropic is currently considering a range of possible pathways for its own employees that the company plans to share in the near future.
Anthropic raised $13 billion at a $183 billion post-money valuation in a September 2025 round led by investment firm Iconiq. The company is reportedly in talks to raise around $25 billion at a $350 billion valuation.