AI and You: The Copyright ‘Sword’ Over AI, Life Coaches Including Jesus Coming Your Way

Anyone following the twists and turns over generative AI tools knows that content creators are justifiably unhappy that tools like OpenAI’s ChatGPT and Google Bard may be slurping up their content, without permission or compensation, to “train” the large language models powering those chatbots.

Now there’s word that The New York Times may sue OpenAI. 

The paper updated its terms of service on Aug. 3 to say outsiders can’t scrape any of its copyrighted content to train a machine learning or AI system without permission. That content includes “text, photographs, images, illustrations, designs, audio clips, video clips, ‘look and feel,’ metadata, data, or compilations.” The paper told AdWeek that it didn’t have any additional comment beyond what was spelled out in its terms of service.

But after reportedly meeting with the maker of ChatGPT and having “tense” and “contentious” conversations, the NYT may end up suing OpenAI “to protect the intellectual property rights associated with its reporting,” NPR said, citing two people with direct knowledge of the discussions.

“A lawsuit from the Times against OpenAI would set up what could be the most high-profile legal tussle yet over copyright protection in the age of generative AI,” NPR noted. “A top concern for the Times is that ChatGPT is, in a sense, becoming a direct competitor with the paper by creating text that answers questions based on the original reporting and writing of the paper’s staff.”

(ChatGPT wouldn’t be the only one using that information to answer users’ questions, or prompts. As a reminder, the technology behind ChatGPT powers Microsoft’s Bing chatbot, and Microsoft has invested at least $11 billion in OpenAI as of January, according to Bloomberg.)

This possible legal battle comes after more than 4,000 writers, including Sarah Silverman, Margaret Atwood and Nora Roberts, called out genAI companies for essentially stealing their copyrighted work. Getty Images sued Stability AI in February, saying the maker of the popular Stable Diffusion AI image-generation engine trained its system on more than 12 million photos from Getty’s archive without a license.

Over the past few months, OpenAI has seemed to acknowledge the copyright issues. In July, the company signed an agreement with the Associated Press to license the AP’s news archive back to 1985 for undisclosed terms. (The AP this week announced its new AI edit standards, noting that while its reporters can “experiment” with ChatGPT, they can’t use it to create “publishable content.”)

The AP deal is a tacit acknowledgement by OpenAI that it needs to license copyrighted content, which opens the door for other copyright owners to pursue their own agreements. 

In the meantime, OpenAI this month told website operators they can opt out of having their websites scraped for training data. Google also said there should be a “workable opt-out,” according to a legal filing in Australia reported on by The Guardian, though the company “has not said how such a system should work,” the paper noted.
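OpenAI’s opt-out works through the web’s long-standing robots.txt convention. Per OpenAI’s documentation, site owners can block the company’s GPTBot crawler by adding two lines to their robots.txt file (a minimal example; a site could instead disallow only specific directories):

    User-agent: GPTBot
    Disallow: /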

While opting out is something, it doesn’t really address the copyright issues. And while tech companies’ counterarguments may focus on fair use of copyrighted materials, the sheer quantity of content that goes into feeding these large language models may go beyond fair use.

“If you’re copying millions of works, you can see how that becomes a number that becomes potentially fatal for a company,” Daniel Gervais, who studies generative AI and is co-director of the intellectual property program at Vanderbilt University, told NPR. 

The Times didn’t comment to NPR about its report, so NPR quoted Times executives’ recent public comments about protecting the paper’s intellectual property against AI companies. Those include New York Times Company CEO Meredith Kopit Levien, who said at a conference in June, “There must be fair value exchange for the content that’s already been used, and the content that will continue to be used to train models.”

Federal copyright law says violators can face fines from $200 up to $150,000 for each infringement “committed willfully,” NPR noted. 
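Some back-of-the-envelope arithmetic (mine, not NPR’s) shows why those numbers add up fast: even at the $200 minimum, 1 million infringed works would mean $200 million in statutory damages, and at the $150,000 willful maximum, the same million works would mean $150 billion.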

Where will this all go? We’ll see, but I’ll give the last word to Vanderbilt’s Gervais: “Copyright law is a sword that’s going to hang over the heads of AI companies for several years unless they figure out how to negotiate a solution.” 

Here are the other doings in AI worth your attention. 

Amazon: Generative AI will create ‘customer review highlights’  

The world’s largest e-commerce site will use generative AI to make it easier for buyers who rely on customer product reviews to make purchase decisions, Amazon said in a blog post this week. Specifically, it’s rolling out AI-generated “review highlights” designed to help customers identify “common themes” across those customer reviews.

“Want to quickly determine what other customers are saying about a product before reading through the reviews?” wrote Vaughn Schermerhorn, director of community shopping at Amazon. “The new AI-powered feature provides a short paragraph right on the product detail page that highlights the product features and customer sentiment frequently mentioned across written reviews to help customers determine at a glance whether a product is right for them.”

Amazon notes that “last year alone, 125 million customers contributed nearly 1.5 billion reviews and ratings to Amazon stores—that’s 45 reviews every second.”

Of course, there’s a question about whether those reviews are legit, as CNET, Wired and others have reported. Amazon says it “proactively blocked over 200 million suspected fake reviews in 2022” and reiterated in another blog post this week that it “strictly prohibits fake reviews.” The company says it uses “machine learning models that analyze thousands of data points to detect risk, including relations to other accounts, sign-in activity, review history, and other indications of unusual behavior,” and that it recently filed two lawsuits against brokers of fake reviews.

The new AI-generated review highlights, meanwhile, will “use only our trusted review corpus from verified purchases.”

Snapchat AI goes rogue, people ‘freak out’ that it may be alive

Remember that time Microsoft introduced an AI called Tay, which then went rogue after people on Twitter taught it to swear and make racist comments?

Well, something similar – the going rogue part – happened to Snapchat’s chatbot, causing “users to freak out over an AI bot that had a mind of its own,” CNN reported.

Instead of offering recommendations and answering questions in its conversations with users, Snapchat’s My AI, powered by ChatGPT, did something that up until now only humans could do: post a live Story (a short video of what appeared to be a wall) for all Snapchat users to see, CNN said.

Snapchat users took to social media to express their puzzlement and concern: “Why does My AI have a video of the wall and ceiling in their house as their story?” asked one. “This is very weird and honestly unsettling,” said another. And my favorite: “Even a robot ain’t got time for me.”

Snapchat told CNN it was a “glitch” and not a sign of sentience. Sure, it was a glitch.  

But even before the tool went rogue, some Snapchat users were already less than thrilled with My AI. Launched in April, the tool has been criticized by users for “creepy exchanges and an inability to remove the feature from their chat feed unless they pay for a premium subscription,” CNN said.

“Unlike some other AI tools, Snapchat’s version has some key differences: Users can customize the chatbot’s name, design a custom Bitmoji avatar for it and bring it into conversations with friends,” CNN added. “The net effect is that conversing with Snapchat’s chatbot may feel less transactional than visiting ChatGPT’s website. It also may be less clear that you’re talking to a computer.”

McKinsey unveils Lilli, a genAI to organize its IP

Instead of offering up another McKinsey & Co. report on how speedily businesses are adopting genAI, the nearly 100-year-old consultancy nabs a mention in this roundup this week for introducing its own generative AI tool for employees. McKinsey describes the tool, which is called Lilli and uses the firm’s intellectual property and proprietary data, as a “researcher, time saver, and an inspiration.”

“It’s a platform that provides a streamlined, impartial search and synthesis of the firm’s vast stores of knowledge to bring our best insights, quickly and efficiently, to clients,” McKinsey said, noting that it “spans more than 40 carefully curated knowledge sources; there will be more than 100,000 documents and interview transcripts containing both internal and third-party content, and a network of experts across 70 countries.”

The goal, the company adds, is to help its employees find stuff. “This includes searching for the most salient research documents and identifying the right experts, which can be an overwhelming task for people who are new to our firm. Even for senior colleagues, the work typically takes two weeks of researching and networking.”
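McKinsey hasn’t published Lilli’s actual design, but the description (search across a curated corpus, then synthesis of the results) matches what the industry calls retrieval-augmented generation. Here’s a deliberately toy Python sketch of the retrieval half; the bag-of-words similarity and the sample documents below are stand-ins I made up, and a production system would use learned embeddings and hand the top results to an LLM for the synthesis step:

    import math
    from collections import Counter

    def embed(text):
        # Bag-of-words term counts; a stand-in for a real embedding model.
        return Counter(text.lower().split())

    def cosine(a, b):
        # Cosine similarity between two term-count vectors.
        dot = sum(a[t] * b[t] for t in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def search(query, corpus, k=2):
        # Rank documents by similarity to the query; return the top k ids.
        q = embed(query)
        ranked = sorted(corpus, key=lambda doc_id: cosine(q, embed(corpus[doc_id])), reverse=True)
        return ranked[:k]

    # Hypothetical mini-corpus standing in for "100,000 documents and interview transcripts."
    corpus = {
        "doc-1": "supply chain resilience strategies for global manufacturers",
        "doc-2": "how quickly industries are adopting generative ai tools",
        "doc-3": "retail pricing models and shifting consumer sentiment",
    }

    print(search("generative ai adoption across industries", corpus, k=1))
    # Prints ['doc-2']; a real system would then synthesize an answer from the retrieved documents.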

Though I typically don’t like it when these AI assistants are named after women, I see that McKinsey was paying homage to an important member of the team. It says Lilli is named after Lillian Dombrowski, who was the first woman McKinsey hired as a professional and who later became the controller and corporate secretary for the firm.

OpenAI makes its first acquisition, a design studio  

OpenAI made its first-ever acquisition, announcing in a blog post that it bought Global Illumination, a “company that has been leveraging AI to build creative tools, infrastructure, and digital experiences” that will work on “our core products including ChatGPT.” Terms of the deal weren’t disclosed, but OpenAI said the Global Illumination team is known for building products for Instagram and Facebook and has made “significant contributions” at Google, YouTube, Pixar and Riot Games.

One of Global Illumination’s founders is Thomas Dimson, who served as director of engineering at Instagram and helped run a team for the platform’s discovery algorithms, according to TechCrunch.

Google testing a new kind of AI assistant offering life advice  

As part of its battle with OpenAI and Microsoft for AI dominance, Google is reportedly working on turning its genAI tech into a “personal life coach” able to “answer intimate questions about challenges in people’s lives,” according to The New York Times.

Google’s DeepMind research lab is working to have genAI “perform at least 21 different types of personal and professional tasks, including tools to give users life advice, ideas, planning instructions and tutoring tips,” the paper said, citing documents about the project it was able to review.

What kind of things might it advise you on? Stuff like how to tell a really good friend you won’t be able to attend her wedding because you can’t afford it, or what you need to do to train to be a better runner, the NYT said. It might also create a financial budget for you, including meal and workout plans, the Times said. 

But here’s the rub: Google’s own AI safety experts told the company’s executives in December that users “could experience diminished health and well-being” and “a loss of agency” by becoming too dependent on the AI, the NYT added. That’s why Google Bard, launched in March, “was barred from giving medical, financial or legal advice.”

Google DeepMind said in a statement to the Times that it’s evaluating many projects and products and that “isolated samples” of the work it’s doing “are not representative of our product roadmap.” Translation: It’s still working on the tech and hasn’t decided whether it’ll become a public-facing product.

AI app offers spiritual guidance from Jesus, Mary and Joseph — and Satan 

Speaking of life coaches, want to share thoughts with Jesus Christ, the apostles, the prophets, Mary, Joseph, Judas, Satan or other biblical figures? Turns out there’s now an app for that.

Called Text With Jesus, the ChatGPT-powered app impersonates biblical figures and offers responses that incorporate at least one Bible verse, “whether the topic is personal relationship advice or complex theological matters,” The Washington Post reported. “Many people in the Bible, Mary Magdalene among them, are only accessible in the app’s premium version, which costs $2.99 a month.”

You can also choose to “Chat With Satan,” who signs his texts with a “smiling face with horns” emoji, the Post said. Yeah, what could possibly go wrong with that? 

The app, available since July, was created by Catloaf Software and CEO Stéphane Peter, who said he’d previously built static apps that let users get quotes from historical figures like author Oscar Wilde and America’s founding fathers. But ChatGPT opened up the possibility of real interaction with users. Peter said he’s gotten positive feedback from church leaders, as well as criticism from some online users who called the app blasphemous, according to the Post.

I downloaded the app so I could ask “Jesus Christ” for comment. In answer to my question (“Why should I believe anything you say?”), “Jesus” offered this response: “I understand your skepticism, and it is important to question and seek truth.”

As a journalist, I’ll just say, Amen to that.  

AI word of the week: Anthropomorphism 

Reading about Google’s life coach, the Jesus app and Snapchat’s AI meanderings inspired me to choose “anthropomorphism” to add to your AI vocabulary. Ascribing humanlike qualities to nonhuman things, like computers or animals, isn’t a new idea. But it takes on an interesting dimension when it’s applied to genAI, and when you consider that someone wants us to think a chatbot can stand in for a biblical figure.

The following definition comes courtesy of The New York Times and its “Artificial Intelligence Glossary: Neural networks and other terms explained.”

“Anthropomorphism: The tendency for people to attribute humanlike qualities or characteristics to an AI chatbot. For example, you may assume it is kind or cruel based on its answers, even though it is not capable of having emotions, or you may believe the AI is sentient because it is very good at mimicking human language.”

Editors’ note: CNET is using an AI engine to help create some stories.

Source: CNET
