Generative AI Is a ‘Double-Edged Sword,’ US Senators Say, Urging Harm Assessment
Two US senators have joined the long list of notable figures demanding that generative artificial intelligence, the technology behind tools like ChatGPT and Midjourney, be put under the microscope to examine its risks.
In a letter Thursday to the head of the US Government Accountability Office, Sens. Edward Markey of Massachusetts and Gary Peters of Michigan asked that the agency conduct a detailed technology assessment, saying generative artificial intelligence carries with it a “broad range of serious harms.”
“Although generative AI holds the promise of many benefits, it is already causing significant harm,” the letter says. “In order to draw the maximum benefits from advances in AI, we must carefully study and understand its costs.” Markey and Peters add that “Congress urgently requires the non-partisan, technical expertise that GAO is well placed to deliver.”
The senators cite risks from generative AI like manipulative voice, text and image synthesis; “deepfakes” that include pornographic imagery and videos made of people (particularly women) without their consent; and companies that implement AI chatbots that provide customers with harmful and incorrect information.
The technological advancements of generative AI are already playing a positive part in industries from the arts to the sciences, the senators say, but they call the technology “a double-edged sword” with serious implications if irresponsibly used.
Companies have been using AI systems for years, but concerns around generative AI offerings like OpenAI’s ChatGPT chatbot and the image-generating tools Midjourney and DALL-E have grown sharply since ChatGPT’s meteoric rise in popularity after its launch late last year.
Read more: Bank Customers Aren’t Happy With AI Chatbots. Here’s Why
Tech executives, industry experts and even the CEOs behind these AI technologies have been outspoken about the potential negative repercussions of generative AI models in the absence of industry or government regulation and oversight.
Microsoft, an industry leader in the AI space, published a 40-page report in May that said AI regulation is necessary to stay ahead of bad actors and misuse.
In March, Tesla CEO Elon Musk and other industry leaders called for a pause on AI models more advanced than GPT-4, the most advanced version of the large language model behind the ChatGPT chatbot, citing “profound risks” to humanity.
And earlier this month, OpenAI CEO Sam Altman and DeepMind CEO Demis Hassabis, alongside other scientists and notables, signed a statement regarding advanced AI systems, saying that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
In their Thursday letter, Markey and Peters include a list of questions the Government Accountability Office could use to assess generative AI’s risks. It includes queries on security measures, commercial pressures, and the impact on workers.
Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.
Source: CNET