AI and You: Google’s News Ambitions, Tech Companies Sign White House Pledge
This recap of some interesting developments around generative AI was written by a human.
I say that because of a report this week that Google is working on a new AI tool called Genesis that’s supposed to be able to write news stories. The company has pitched the tool to a handful of major news organizations, including The New York Times, The Washington Post and The Wall Street Journal, as a “personal assistant” or “helpmate” for journalists that can automate some tasks, the Times reported. Genesis is able to “take in information — details of current events, for example — and generate news content,” the Times said, citing “people familiar with the matter.”
In an emailed statement to CNET, Google acknowledged it’s exploring how AI could aid news publishers but didn’t give specifics on the tools it’s testing. “In partnership with news publishers, especially smaller publishers, we’re in the earliest stages of exploring ideas to potentially provide AI-enabled tools to help journalists with their work,” said a Google spokesperson. “These tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating and fact-checking their articles.”
Still, people who’ve seen Google’s pitch called it “unsettling,” the Times said, because it “seemed to take for granted the effort that went into producing accurate and artful news stories.” Another concern: Google, which decides which news stories users see at the top of their search results, could give preference to stories that use Genesis.
To be sure, many publishers, including the Post, the Journal, The Associated Press, NPR, Insider and CNET, are experimenting with genAI tools to see how they might assist reporters by creating everything from headlines to story summaries to routine recaps of sports events and election results. AI tools could help media organizations, which have been cutting staff amid a challenging advertising market, keep pace with the 24/7 news cycle.
But Google’s efforts come as governments have criticized the search engine juggernaut for not giving “news outlets a larger slice of its advertising revenue,” and as news sites call out Google (and other AI companies) for “sucking up” their editorial content to train their AI systems without permission and “without compensating the publishers,” the Times noted.
On top of this, Google's chatbot, Bard, which provides more-complex answers to users' search queries, is already raising publishers' concerns, because it may mean Google doesn't need to send users to more authoritative sources, such as news publishers, for answers.
How will this story end? Not sure even an AI could predict that at this point.
7 Tech Companies Sign White House Safety Pledge
The other big news of the week came Friday, when seven AI tech companies (Amazon, Google, Meta, Microsoft, ChatGPT maker OpenAI, Anthropic and Inflection) agreed to the Biden administration's request that they allow "independent security experts to test their systems before they are released to the public" and that they commit to "sharing data about the safety of their systems with the government and academics," The Washington Post reported. "The firms also pledged to develop systems to alert the public when an image, video or text is created by artificial intelligence, a method known as 'watermarking.'"
“US companies lead the world in innovation, and they have a responsibility to do that and continue to do that, but they have an equal responsibility to ensure that their products are safe, secure and trustworthy,” Jeff Zients, the White House chief of staff, said in an interview with NPR.
The companies' assurances around AI safety come as governments and experts in AI, tech and other fields warn that generative AI systems may pose serious risks to humanity and that the companies creating these systems should be regulated. Congress has generally not offered "comprehensive legislation" to regulate Silicon Valley, the Post noted, adding that Sen. Chuck Schumer has created a bipartisan group of senators to look at creating new rules around AI.
Already under scrutiny by the FTC over its ChatGPT chatbot, OpenAI tweeted that the White House pledge shows that AI companies have agreed to “a set of voluntary commitments to reinforce the safety, security and trustworthiness of AI technology and our services. An important step in advancing meaningful and effective AI governance around the world.”
Expect more details about how the companies will live up to their AI safety pledges to emerge in coming weeks. Here’s the White House fact sheet on the announcement.
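The pledge doesn't spell out how watermarking would be implemented, and robustly marking AI-generated media is still an open research problem. Purely as a toy sketch of the underlying idea (the TAG marker and both helper functions below are hypothetical, not any company's actual scheme), here's how a naive image watermark could be hidden in pixel data:

```python
# Toy illustration of image watermarking: hide a marker string in the least
# significant bits of the red channel. Real AI-content watermarks are far more
# robust; the TAG value and both helpers here are hypothetical.
import numpy as np
from PIL import Image

TAG = "AI-GENERATED"

def embed_tag(img: Image.Image, tag: str = TAG) -> Image.Image:
    """Write the tag's bits into the red channel's least significant bits."""
    bits = [int(b) for byte in tag.encode() for b in f"{byte:08b}"]
    pixels = np.array(img.convert("RGB"))
    red = pixels[..., 0].flatten()
    red[: len(bits)] = (red[: len(bits)] & 0xFE) | bits  # overwrite the LSBs
    pixels[..., 0] = red.reshape(pixels.shape[:2])
    return Image.fromarray(pixels)

def read_tag(img: Image.Image, length: int = len(TAG)) -> str:
    """Read the hidden tag back out of the red channel's LSBs."""
    red = np.array(img.convert("RGB"))[..., 0].flatten()
    bits = red[: length * 8] & 1
    return bytes(
        int("".join(str(b) for b in bits[i : i + 8]), 2)
        for i in range(0, length * 8, 8)
    ).decode()

marked = embed_tag(Image.new("RGB", (64, 64), "white"))
print(read_tag(marked))  # -> AI-GENERATED
```

Production approaches, such as statistical watermarks applied during a model's sampling step, aim to survive the cropping, re-encoding and paraphrasing that would erase brittle bits like these.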
Google, Bard and beyond
In other Google news, the company, which made Bard publicly available in English, Japanese and Korean at its Google I/O developer conference in May, said its chatbot now supports more than 40 languages, including Arabic, Chinese, Danish, Farsi, French, German, Greek, Polish, Portuguese, Spanish, Thai, Ukrainian and Vietnamese. The complete list of languages can be found here.
Google also made good on an I/O promise to "allow users to drop images into Bard to help you analyze, create a caption or find more information on the internet," CNET reported. That feature is available only in English, at least for now.
The details about these and other Bard updates can be found on Google’s blog.
Meta and Microsoft partner on Llama 2 AI engine
Meta, which launched its Llama large language model in February, is stepping up efforts to get more people to use its AI tech. The company this week said the next generation of Llama, Llama 2, is now available free for commercial and research use, as part of a deal with Microsoft, CNET reported. Meta shared the news on its blog.
Large language models, or LLMs, are what power generative AI chatbots, including OpenAI's ChatGPT and Google's Bard. Earlier this year, Microsoft launched an AI-powered Bing search that uses ChatGPT's technology. Under the partnership with Meta, Microsoft said it now also offers access to Llama 2 through Azure AI and on Windows.
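Meta's blog post announces availability rather than code, but since the Llama 2 weights are also distributed through Hugging Face's hub (the gated meta-llama checkpoints require Meta's approval), a minimal self-hosted sketch might look like the following; the prompt is a placeholder, and the transformers and accelerate packages plus a capable GPU are assumed:

```python
# A minimal sketch of running Llama 2 locally via Hugging Face transformers.
# Assumes: `pip install transformers accelerate`, approved access to the gated
# meta-llama checkpoints, and a GPU with enough memory for the 7B model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # smallest of the Llama 2 chat variants
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain in one sentence what a large language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Azure AI customers get the same models through Microsoft's model catalog instead, per the announcement, without hosting the weights themselves.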
AP licenses its archive to ChatGPT maker OpenAI
The Associated Press says it's licensed its text archive of news stories going back to 1985 to OpenAI, the maker of ChatGPT, in a deal with undisclosed financial terms. The news comes as copyright holders and authors, including comedian Sarah Silverman, sue OpenAI for allegedly harvesting their copyrighted content without permission to train its chatbot, and as the US Federal Trade Commission investigates the company over ChatGPT's data practices and potentially harmful output.
“In order to guard against how the courts may decide, maybe [AI companies] want to go out and sign licensing deals so you’re guaranteed legal access to the material you’ll need,” Nick Diakopoulos, a professor of communications studies and computer science at Northwestern University, told the AP.
While the AP has billed itself as one of the first media organizations to use AI to create news summaries and other content, it says it doesn't use any genAI in its news stories today. But that will most certainly change. In any case, with this deal OpenAI is paying for a publisher's content in some way, and that alone is interesting.
Tracking subway fare hoppers, drug dealers
Here are two interesting stories on how AI technology is being used to track people doing things they shouldn’t be doing — and is spurring privacy questions along the way.
First up, the New York City subway system has been quietly using AI surveillance software at some stations to capture the faces of people who skip paying fares. It's part of a program to reduce losses from "fare evasion," according to public documents and government contracts obtained by NBC News.
"The system was in use in seven subway stations in May, according to a report on fare evasion published online by the Metropolitan Transportation Authority, which oversees New York City's public transportation," NBC News said. The MTA expects that by the end of the year the system will expand by "approximately two dozen more stations, with more to follow," the report says. The report also found that the MTA lost $690 million to fare evasion in 2022.
Though the MTA says its focus is on fare evasion, privacy advocates are concerned about what the subway system will do with the face scans. NBC News said an MTA spokesperson told the news outlet that the AI system “doesn’t flag fare evaders to New York police, but she declined to comment on whether that policy could change.”
And now for the second story: Forbes reported that authorities in New York used AI tech, including automatic license plate recognition, to assess the driving behavior of a drug trafficker, analyzing the routes the driver had taken over multiple years and flagging travel patterns deemed suspicious.
While the software, made by Rekor, allowed police to identify and arrest the drug trafficker, it was also used "to examine the driving patterns of anyone passing one of Westchester County's 480 cameras over a two-year period," Forbes added, citing an ACLU senior staff attorney who described the mass surveillance of drivers as "quite horrifying."
AI, anime and Harry Potter
AI is now converting live-action movies into anime style using Stable Diffusion, the popular text-to-image generator, CNET video producer Jason Pepper told me. "The power of this app continues to impress me. This example takes a scene from a Harry Potter movie and converts it into anime." The 45-second clip, created by Twitter user @heyBarsee, in which Hermione shows off her wizarding skills with the Wingardium Leviosa levitation charm, is worth a watch.
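Neither the tweet nor Pepper spelled out the workflow, and smooth video conversions usually need extra machinery for frame-to-frame consistency. The core technique, though, is Stable Diffusion's image-to-image mode; here's a single-frame sketch using the open-source diffusers library, with placeholder file names and prompt:

```python
# A rough single-frame sketch of the technique, using the open-source diffusers
# library. The input file name and prompt are placeholders; the actual clip was
# presumably made with a more elaborate video-to-video pipeline.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

frame = Image.open("live_action_frame.png").convert("RGB").resize((512, 512))
result = pipe(
    prompt="anime style, cel shading, a young witch casting a levitation spell",
    image=frame,
    strength=0.6,        # how far the output may drift from the source frame
    guidance_scale=7.5,  # how strongly to follow the text prompt
).images[0]
result.save("anime_frame.png")
```

The strength parameter is the key dial here: lower values preserve the actor's pose and composition, while higher values let the anime styling take over.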
AI word of the week
Over the past few months, I've read through AI glossaries to get caught up on the vocabulary of the new world of generative AI. This week's word, paperclips, comes courtesy of CNBC's "How to talk about AI like an insider."
“Paperclips: An important symbol for AI Safety proponents because they symbolize the chance an AGI [artificial general intelligence program] could destroy humanity. It refers to a thought experiment published by philosopher Nick Bostrom about a ‘superintelligence’ given the mission to make as many paperclips as possible. It decides to turn all humans, Earth, and increasing parts of the cosmos into paperclips. OpenAI’s logo is a reference to this tale.
"Example: 'It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal,' Bostrom wrote in his thought experiment."
Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.
Source: CNET