Skynet getting closer? Tech industry leaders warn of "extinction risk" posed by AI

Pedro Domínguez

The enormous progress being made in artificial intelligence cuts both ways. On one hand, the “AI revolution” can be genuinely useful, whether that means helping you with your work (ChatGPT) or generating images in a matter of seconds (Midjourney). On the other hand, depending on how it is implemented, this technology could compromise our privacy, something the European Union intends to regulate.

Yet, as negative and even dystopian as that sounds, the loss of privacy and other risks associated with the advance of artificial intelligence would be far from the worst thing that could happen to us as a species. The large-scale deployment of AI in our society could carry a much greater risk: total extinction.

And no, this is not some fanciful scenario confined to science fiction. A group of technology industry leaders, including big names such as Sam Altman (CEO of OpenAI), Demis Hassabis (CEO of Google DeepMind) and Dario Amodei (CEO of Anthropic), have warned that the AI they are developing could one day pose a threat to the very existence of the human race, and that it should be treated as a societal risk on the same level as nuclear war or pandemics.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the open letter, released by the nonprofit Center for AI Safety and signed by top tech industry leaders.

“There’s a misconception, even in the artificial intelligence community, that there are only a handful of doomsayers,” says Dan Hendrycks, executive director of the Center for AI Safety. “But, in fact, many people privately express concern about these things.”

The letter is signed by more than 350 executives, researchers and engineers working in the field of artificial intelligence, and it highlights the potential danger of this technology if technical and legislative safeguards are not put in place in time.

Microsoft itself endorsed last week a series of AI regulations that the U.S. government could put in place, among them the possibility of slowing down or shutting off AI systems entirely in an emergency.

Some of the links added in the article are part of affiliate campaigns and may represent benefits for Softonic.

Pedro Domínguez

Publicist and audiovisual producer in love with social networks. I spend more time thinking about which videogames I will play than playing them.
