Anthropic boss issues warning about ‘unimaginable’ power of AI


Published: January 27, 2026 10:35 pm
Author: RT

Dario Amodei has urged humanity to “wake up” and address existential risks before it’s too late

The CEO of leading AI company Anthropic, Dario Amodei, has issued an ominous public warning that humanity is on the cusp of being handed “almost unimaginable power,” for which it is dangerously unprepared.

In a nearly 20,000-word essay titled “The Adolescence of Technology,” Amodei sketches a near future in which AI systems vastly more capable than any Nobel laureate or statesman are at everyone’s disposal within a few years. A critical accelerating factor, he notes, is that AI development is now creating a self-reinforcing feedback loop.

“Because AI is now writing much of the code at Anthropic, it is already substantially accelerating our progress in building the next generation of AI systems,” he writes, warning that the company is close to “a point where the current generation of AI autonomously builds the next.”


He argues that without decisive and careful action, this technology could lead to catastrophic risks ranging from mass job displacement to human extinction. Other existential dangers include the potential for “a global totalitarian dictatorship” enabled by AI-powered surveillance, propaganda and autonomous weapons.

Amodei also details “autonomy risks,” where AI systems could “go rogue and overpower humanity” – noting that this danger would not even require a sci-fi army of physical robots. The essay chillingly observes that “plenty of human action is already performed on behalf of people whom the actor has not physically met.”

Among the most urgent threats, Amodei highlights the potential for AI to drastically lower the barrier to creating biological and other weapons of mass destruction.


“A disturbed loner can perpetrate a school shooting, but probably can’t build a nuclear weapon or release a plague,” he writes. A powerful AI, however, would make “everyone a PhD virologist who can be walked through the process of designing, synthesizing, and releasing a biological weapon step-by-step.”

In a worst-case scenario, he warns, a powerful AI could theoretically guide the creation of a synthetic pathogen capable of “destroying all life on Earth.”

Amodei, one of the industry’s key leaders and head of a chief rival to OpenAI, calls for “surgical” regulation, starting with transparency laws, to build the necessary guardrails.



“Humanity needs to wake up,” Amodei concludes, framing the coming years as a critical test of civilization’s maturity. With the technology itself now fueling its own breakneck evolution, he urges a collective response to steer the “glittering prize” of AI away from potential ruin.


Copyright © 2023 The Hopper News
