
Google removes ban on weaponizing AI

by Admin
February 5, 2025
in News, Politics, World

Published: February 5, 2025 5:59 pm
Author: RT

In a reversal of previous policies, the tech giant will now permit the technology to be used for developing weapons and surveillance tools

Google has significantly revised its artificial intelligence principles, removing earlier restrictions on using the technology for developing weaponry and surveillance tools. The update, announced on Tuesday, alters the company’s prior stance against applications that could cause “overall harm.”  

In 2018, Google established a set of AI principles in response to criticism over its involvement in military endeavors, such as a US Department of Defense project that involved the use of AI to process data and identify targets for combat operations. The original guidelines explicitly stated that Google would not design or deploy AI for use in weapons or technologies that cause or directly facilitate injury to people, or for surveillance that violates internationally accepted norms.  

The latest version of Google’s AI principles, however, has scrubbed these points. Instead, Google DeepMind CEO Demis Hassabis and senior executive for technology and society James Manyika have published a new list of the tech giant’s “core tenets” regarding the use of AI. These include a focus on innovation and collaboration and a statement that “democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights.”

Margaret Mitchell, who had previously co-led Google’s ethical AI team, told Bloomberg the removal of the ‘harm’ clause may suggest that the company will now work on “deploying technology directly that can kill people.”  


According to The Washington Post, the tech giant has collaborated with the Israeli military since the early weeks of the Gaza war, competing with Amazon to provide artificial intelligence services. Shortly after the October 2023 Hamas attack on Israel, Google’s cloud division worked to grant the Israel Defense Forces access to AI tools, despite the company’s public assertions of limiting involvement to civilian government ministries, the paper reported last month, citing internal company documents.  

Google’s reversal of its policy comes amid continued concerns over the dangers posed by AI to humanity. Geoffrey Hinton, a pioneering figure in AI and recipient of the 2024 Nobel Prize in physics, warned late last year that the technology could lead to human extinction within the next three decades, putting the probability at up to 20%.

Hinton has warned that AI systems could eventually surpass human intelligence, escape human control and potentially cause catastrophic harm to humanity. He has urged that significant resources be allocated to AI safety and the ethical use of the technology, and that proactive safeguards be developed.


Tags: Russia Today
thehopper.news

Copyright © 2023 The Hopper News
