Artificial intelligence could wipe out humanity, experts warn

Nepal Views

Kathmandu: Artificial intelligence (AI) could lead to the extinction of humanity, experts including the heads of OpenAI and Google DeepMind have warned.

Dozens of experts have supported a statement published on the webpage of the Center for AI Safety. ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,’ the statement reads.

Sam Altman, chief executive of ChatGPT-maker OpenAI, Demis Hassabis, chief executive of Google DeepMind, and Dario Amodei of Anthropic have all supported the statement.

The Center for AI Safety website suggests a number of possible disaster scenarios: AIs could be weaponised – for example, drug-discovery tools could be used to build chemical weapons. AI-generated misinformation could destabilise society and ‘undermine collective decision-making’.

Likewise, the power of AI could become increasingly concentrated in fewer and fewer hands, enabling ‘regimes to enforce narrow values through pervasive surveillance and oppressive censorship’.

Dr Geoffrey Hinton, who issued an earlier warning about risks from super-intelligent AI, has also supported the Center for AI Safety’s call. Yoshua Bengio, professor of computer science at the University of Montreal, also signed.

Dr Hinton, Prof Bengio and NYU Professor Yann LeCun are often described as the ‘godfathers of AI’ for their groundbreaking work in the field – for which they jointly won the 2018 Turing Award, which recognises outstanding contributions in computer science.

But others say the fears are overblown. Prof LeCun, who also works at Meta, has dismissed these apocalyptic warnings, tweeting that ‘the most common reaction by AI researchers to these prophecies of doom is face palming’.

Many other experts similarly believe that fears of AI wiping out humanity are unrealistic, and a distraction from issues, such as bias in existing systems, that are already a problem.

Arvind Narayanan, a computer scientist at Princeton University, has previously told the BBC that sci-fi-like disaster scenarios are unrealistic. ‘Current AI is nowhere near capable enough for these risks to materialise,’ he said. ‘As a result, it’s distracted attention away from the near-term harms of AI.’


Copyright © 2024 Digital House Nepal Pvt. Ltd. - All rights reserved