
AI leaders warn the technology poses ‘risk of extinction’ like pandemics and nuclear war

Hundreds of business leaders and public figures sounded a sobering alarm on Tuesday over what they described as the risk of human extinction posed by artificial intelligence.

Among the 350 signatories of the public statement are Sam Altman, the chief executive of OpenAI, the company behind the popular conversation bot ChatGPT; and Demis Hassabis, the CEO of Google DeepMind, the tech giant’s AI division.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” said the one-sentence statement released by the San Francisco-based nonprofit Center for AI Safety.

Signatories also include public figures such as musician Grimes, environmental activist Bill McKibben and author and neuroscientist Sam Harris.

Concern about the risks posed by AI and calls for forceful regulation of the technology have drawn greater attention in recent months in response to major breakthroughs like ChatGPT.

In testimony before the Senate two weeks ago, Altman warned lawmakers: “If this technology goes wrong, it can go quite wrong.”

PHOTO: OpenAI CEO Sam Altman delivers a speech at Station F in Paris, May 26, 2023. (Joel Saget/AFP via Getty Images, FILE)

“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he added, suggesting the adoption of licensing or safety requirements for the operation of AI models.

Like other AI-enabled chat bots, ChatGPT can immediately respond to prompts from users on a wide range of subjects, generating an essay on Shakespeare or a set of travel tips for a given destination.

Microsoft launched a version of its Bing search engine in March that offers responses delivered by GPT-4, the latest model behind ChatGPT. Rival search company Google in February announced its own AI chatbot, called Bard.

The flood of AI-generated content has raised fears over the potential spread of misinformation, hate speech and manipulative responses.

Hundreds of tech leaders, including billionaire entrepreneur Elon Musk and Apple co-founder Steve Wozniak, signed an open letter in March calling for a six-month pause in the development of AI systems and a major expansion of government oversight.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” the letter said.

In comments last month to Fox News host Tucker Carlson, Musk raised further alarm: “There’s certainly a path to AI dystopia, which is to train AI to be deceptive.”

PHOTO: FILE - Tesla CEO Elon Musk in Washington, U.S., Jan. 27, 2023. (Jonathan Ernst/Reuters, FILE)

The statement released on Tuesday included other major backers from the AI industry, including Microsoft Chief Technology Officer Kevin Scott and OpenAI Head of Policy Research Miles Brundage.

Addressing the brevity of the 22-word statement released on Tuesday, the Center for AI Safety said on its website: “It can be difficult to voice concerns about some of advanced AI’s most severe risks.”

“The succinct statement below aims to overcome this obstacle and open up discussion,” the Center for AI Safety added.

Source: ABC News

