Technology

AI and You: Sarah Silverman Calls Out AI Funny Business, Ikea Rethinks the Couch

It’s only funny until someone loses an eye. Or in the case of conversational AI companies, until copyright holders say they’re not OK with having their work used without permission to train the large language models powering today’s generative AI giants.

This week, comedian Sarah Silverman, along with authors Christopher Golden and Richard Kadrey, filed a lawsuit against OpenAI, creator of ChatGPT, and Meta, which developed the AI model called LLaMA. The suit alleges that AI systems were trained on the authors’ copyrighted works, likely taken from pirated digital-book collections known as “shadow libraries,” the Associated Press reports. 

“The OpenAI suit notes that a request to ChatGPT to summarize Silverman’s book ‘The Bedwetter’ returns a detailed summary of the book, and asserts that it wouldn’t be possible to provide a summary of that variety without having the full text in the training model,” according to Barron’s. “Most large language model creators provide little data on the underlying data powering their models.”

Meta and OpenAI declined to comment to the AP and Barron’s.

This isn’t the first time authors have called out AI companies for using their work without permission or compensation. Last month, best-selling authors including Margaret Atwood and Nora Roberts signed an open letter from the Authors Guild to the CEOs of Google, IBM, OpenAI, Meta and Microsoft calling out the “inherent injustice in exploiting our works as part of your AI systems without our consent, credit or compensation.”

“Millions of copyrighted books, articles, essays and poetry provide the ‘food’ for AI systems, endless meals for which there has been no bill. You’re spending billions of dollars to develop AI technology. It is only fair that you compensate us for using our writings, without which AI would be banal and extremely limited,” the open letter says.

Courts will have to decide whether AI systems ingesting some copyrighted materials qualifies as “fair use.” But in the meantime, expect other copyright holders to bring similar challenges.

Here are the other doings in AI worth your attention.

FTC investigates ChatGPT over consumer data

In a scoop this week, The Washington Post reported that the US Federal Trade Commission has opened an “expansive investigation into OpenAI, probing whether the maker of the popular ChatGPT bot has run afoul of consumer protection laws by putting personal reputations and data at risk.”

The investigation covers personal privacy, data security practices and how OpenAI handles complaints that its chatbot makes “false, misleading or disparaging” statements about real individuals, according to a 20-page FTC demand for records shared by the Post. The FTC declined to comment to the Post, but OpenAI CEO Sam Altman tweeted this week that he’s disappointed the FTC’s request for information about the company’s business practices started with a “leak” to the newspaper.

“That said, it’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law. Of course we will work with the FTC,” Altman tweeted. “We built GPT-4 on top of years of safety research and spent 6+ months after we finished initial training making it safer and more aligned before releasing it,” Altman said in another tweet. “We protect user privacy and design our systems to learn about the world, not private individuals.”

AI detectors are biased, easy to fool  

One of the more popular guessing games online these days is whether something was written by a human or by AI. A group of researchers from Stanford University set out to test generative AI “detectors” to see if they could tell the difference. 

“The research team was surprised to find that some of the most popular GPT detectors, which are built to spot text generated by apps like ChatGPT, routinely misclassified writing by non-native English speakers as AI generated, highlighting limitations and biases users need to be aware of,” CNET science editor Jackson Ryan reported.

The takeaway: Such detection software misclassifies writing by non-native English speakers, a bias problem, and it can also be fooled by “literary language.”

“Basically, if you’re using verbose and literary text, you’re less likely to be classified as an AI,” Ryan noted after running some of his own experiments. “But this shows a worrying bias and raises concerns non-native English speakers could be adversely affected in, for instance, job hiring or school exams, where their text is flagged as generated by AI.”
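The bias has a plausible mechanical explanation: many detectors score text by its “perplexity,” that is, how predictable the writing looks to a language model, and flag low-perplexity (highly predictable) text as machine-written. Here’s a minimal Python sketch of that approach, assuming GPT-2 as the scoring model and an illustrative threshold; commercial detectors are more elaborate.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Score how predictable the text is to the model (lower = more predictable).
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(input_ids=ids, labels=ids).loss  # mean cross-entropy
    return float(torch.exp(loss))

def looks_ai_generated(text: str, threshold: float = 60.0) -> bool:
    # The threshold is illustrative, not any vendor's. Plain, simple prose
    # (including much non-native writing) scores low and gets flagged;
    # ornate, "literary" wording scores high and slips past the check.
    return perplexity(text) < threshold

In this framing, the detector’s signal tracks vocabulary and phrasing rather than authorship, which is exactly the failure mode the Stanford team describes.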

Elon Musk announces AI company, taps 11 men to help him

After signing open letters urging a pause in AI development because of the potential risks artificial intelligence poses to society, Twitter owner and Tesla CEO Elon Musk announced a new AI company this week: xAI.

The news comes after Musk earlier this year filed to incorporate an AI company in a challenge to OpenAI’s ChatGPT. In April, Musk said he was going to launch a company called TruthGPT as a “maximum truth-seeking AI.”

Musk tweeted Wednesday that xAI was being formed “to understand reality.”

[Photo: Elon Musk’s xAI will seek “to understand reality,” he says. Credit: James Martin/CNET]

The xAI website says the company’s goal is to “understand the true nature of the universe” and that it will work with Twitter, Tesla and other companies on its mission. Meanwhile, xAI has tweeted to ask, “What are the most fundamental unanswered questions?”

A group of 12 men, including Musk, was announced as the team working for xAI. They have experience across OpenAI, DeepMind, Google Research, Microsoft Research and Tesla, xAI says. The company is also being advised by Dan Hendrycks, the director of the nonprofit Center for AI Safety. 

AI and jobs

There’s already a lot of analysis and speculation about how generative AI might change the future of work – namely, which jobs could be transformed or eliminated by the new technology. In March, Goldman Sachs said AI automation could affect 300 million jobs.

This week, the Organization for Economic Cooperation and Development, which represents 38 countries including the US, the UK and Canada, found that “27% of the region’s labor force works in occupations at a high risk of being replaced by AI, even though adoption of the technology remains fairly low as the technology is, for the moment, in its infancy,” according to a Forbes summary of the report, which is worth a glance.

The OECD said workers in the manufacturing and finance industries are excited about the opportunities AI presents but also concerned about its risks: “The findings show that AI use at work can lead to positive outcomes for workers around job satisfaction, health and wages. Yet there are also risks around privacy, work intensity and bias. The survey revealed a clear divide between what workers think about AI use in their jobs today and their fears for the future.”

The results, added the organization, “highlight the urgent need for policy action now, to ensure that no one is left behind.”

YouTube tests AI-generated quizzes to gauge merit of educational videos

YouTube said on its experiments page that it’s testing AI-generated quizzes in its mobile app, based on educational videos people have watched. The quizzes test your “understanding of a subject covered in a video you recently watched. If you choose to take a quiz, a link to the recently watched video will appear under it so you can easily navigate back to learn more about the topic at hand.” The test will be offered to a limited number of English-language users.

The goal: to help YouTube get a “better understanding of how well each video covers a certain topic,” according to TechCrunch. But the quizzes may also give YouTube another way to showcase educational videos on its site.

Ikea uses AI to create prototypes for new dinner plates, foldable couches  

Juniper Research says global retail spend on chatbots – for use cases like customer support, marketing and payment processing – is “forecast to reach $12 billion in 2023; growing to $72 billion by 2028. Increasing by 470% over five years, much of this growth will be driven by the emergence of cost-effective open language models, most notably ChatGPT, in regions such as North America and Europe.”

But retailers may also use the technology to help them design new products. At least, that’s the takeaway from work being done by Ikea, which tapped design lab Space10 to use AI to test out new product designs for dinner plates (with an emphasis on using local materials) and for couches, according to It’s Nice That, a blog I follow that covers all things design.

Watch the dinner plates video (which scrolls through more than 100 designs super fast) to get a sense of the emphasis on using regional, easily sourced materials. 

And take a look at the couch. Space10 wrote a design brief, called Couch in an Envelope, that used ChatGPT to come up with a flat-pack, completely recyclable sofa weighing only 10 kilograms (22 pounds). It’s got a midcentury modern vibe. The designers talked about the importance of refining their prompts, from “couch” to sitting or reclining “platforms,” as sketched below.
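For the curious, that kind of prompt refinement is easy to try yourself. Here’s a minimal sketch using OpenAI’s Python client; the prompts are my paraphrase of the designers’ description, not Space10’s actual brief.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Two passes at the same design problem: a generic prompt, then one
# reframed around "platforms" with the brief's constraints folded in.
# (Both prompts are illustrative paraphrases, not Space10's real brief.)
prompts = [
    "Design a couch.",
    "Design a flat-pack, fully recyclable sitting and reclining platform "
    "that weighs under 10 kilograms and can be carried by one person.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")

The second prompt does the creative work the designers describe: reframing the object and encoding the constraints, so the model has something specific to design against.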

The prototype “is screwless, it folds down flat for warehouse storage and can be carried by one person alone. It features fewer materials than your standard sofa – currently imagined with aluminum, cellulose-based fabrics and mycelium foam – which would make it easier to produce. It’s also modular, meaning you can change its setup depending on your space,” It’s Nice That noted.

Cool, right? But I call this out as a reminder of why AI needs humans in the loop. Creative thinkers will still be needed to prompt generative AI systems into dreaming up these new, interesting kinds of ideas.

When crochet patterns go awry: You say banana, AI says BB-8? 

ChatGPT, which is focused on text, wasn’t designed to create knitting or crochet patterns, which is why CNN reporter AJ Willingham’s experiments getting the generative AI engine to produce those patterns are fun to see. They’re also an illustration of “narrow AI,” a term used to describe AI systems designed to handle a specific task.

The “control project” was to create a “simple object with a distinct shape that has been reiterated in innumerable crochet patterns across the internet,” Willingham said. But ChatGPT’s attempt at a pattern for a banana instead yielded two stacked spheres that reminded me of the Star Wars robot BB-8 and in no way resembled a banana. Things only got crazier from there, when the reporter tried to get patterns for Baby Yoda, Antarctica and Dubai’s distinctive hotel, the Burj Al Arab.

“At first glance, ChatGPT’s crochet patterns look and read exactly like a crochet pattern,” Willingham notes. “They even have chirpy little introductions, and the program can clearly mimic terms any crafter would recognize, such as ‘work a stitch.’ … However, once the instructions progressed past a few common beginning stitches, the project usually devolved into one of two things: spheres, or complete nonsense.”

See for yourself. 

Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.



Source: CNET
