Succor borne every minute


Earnest chats with objects are not so unusual. Mark “The Bird” Fidrych, the famed Detroit Tiger, used to stand on the pitching mound whispering to the baseball. Forky, the highly animate utensil from Toy Story 4, once posed deep questions about friendship to a ceramic mug. And many of us have made repeated queries of the Magic 8 Ball despite its limited set of randomly generated answers.

Our talking to computers also goes way back, and that history is getting weirder. We’re seeing a wave of avatars and bots marketed to provide companionship, romance, therapy, or portals to dead loved ones, and even to meet religious needs. It may be a function of AI companies making chatbots better at human mimicry in order to convince us that chatbots have social value worth paying for. Consider that some of these companies compare their products to magic (they aren’t), talk about the products having feelings (they don’t), or admit they just want people to feel that the products are magic or have feelings.

People can and do disagree about the risks and rewards of these services and whether any of them are good or bad for specific types of people or for humanity in general. Research can help inform such debates, which also encompass ethical and philosophical issues. Luckily for us, these larger questions are mostly beyond the scope of the Business Blog, which we’re happy to report is still written by and for real people. Here’s what is in scope:

  • Don’t misrepresent what these services are or can do. Your therapy bots aren’t licensed psychologists, your AI girlfriends are neither girls nor friends, your griefbots have no soul, and your AI copilots are not gods. We’ve warned companies about making false or unsubstantiated claims about AI or algorithms. And we’ve followed up with action, including recent cases against WealthPress, DK Automation, Automators AI, and CRI Genetics. We’ve also repeatedly advised companies – with reference to past cases – not to use automated tools to mislead people about what they’re seeing, hearing, or reading.
  • Don’t offer these services without adequately mitigating risks of harmful output. It’s a deliberate design choice to offer bots and avatars that perform as if human. We’ve discussed how that choice has inherent risks in terms of manipulation and inducing consumers, even if inadvertently, to make harmful choices. The risks may be greater for certain audiences, such as children. And we’ve warned AI companies repeatedly to assess and mitigate risks of reasonably foreseeable harm before and after deploying their tools. Our recent case against Rite Aid for its unfair use of facial recognition technology is a case in point.
  • Don’t insert ads into a chat interface without clarifying that it’s paid content. More and more advertising will likely creep into the output that consumers get when interacting with various generative AI services. It will be tempting for firms offering simulated humans for companionship and the like to do the same, especially given the ability to target ads based on what these services gather or glean about their users. We’ve explained that any generative AI output should distinguish clearly between what is organic and what is paid. The Commission has also explored the wider problem of blurred digital advertising to children, advising marketers to steer clear of it altogether.   
  • Don’t use consumer relationships with avatars and bots for commercial manipulation. A company offering an anthropomorphic service also shouldn’t manipulate people via the attachments formed with that service, such as by inducing people to pay for more services or steering them to affiliated businesses. That notion applies equally to how such a service is designed to react if people try to cancel their subscription. Consistent with the FTC’s rulemaking proposal to make it easier for people to “click to cancel” subscriptions, a bot shouldn’t plead, like HAL 9000 in 2001: A Space Odyssey, not to be turned off.
  • Don’t violate consumer privacy rights. These avatars and bots can collect or infer a lot of intensely personal information. Indeed, some companies are marketing as a feature the ability of such AI services to know everything about us. It’s imperative that companies are honest and transparent about the collection and use of this information and that they don’t surreptitiously change privacy policies or relevant terms of service. These principles take on even more importance when these AI services are targeted to children. As reflected by our cases involving Alexa and Ring, the FTC will hold companies accountable for how they obtain, retain, and use consumer data. Such accountability applies fully in the context of AI and algorithms because, as you may have heard, there’s no AI exception to the laws on the books.

Is it possible that companies will ultimately develop all of these services in ways that merit no FTC attention? We posed this question to our Magic 8 Ball. Its answer: “Outlook not so good.”

The FTC has more posts in the AI and Your Business series.

J Jane
June 12, 2024

SUCCOR means "assistance and support in times of hardship and distress"

Dave Bauman
June 24, 2024

In reply to by J Jane

Succor, which rhymes with sucker, which means easily deceived and swindled. See P.T. Barnum and how a showman swindles money from others.

M
June 18, 2024

I didn't expect to find a brutal comedy roast of AI bros on the FTC website but here I am, clapping and cheering from the audience.

Sophie
June 18, 2024

This is a brilliantly written article - thank you! Precise, sassy, humane. Appreciate ya FTC.

David
June 21, 2024

I don't care why the FTC has a blog but I'm here for it.

Mort D'Impaler…
June 21, 2024

If I smoked, I would have lit up a Lucky after reading that.

Brett
June 25, 2024

This was extremely interesting. Thank you! I am happy to know there is some oversight; it seems like this is a bit out of control right now. AI is just fuel to an already big fire. The 'Hey, Alexa! What are you doing with my data?' post was also interesting. I am curious about a few specific things.

> "Don’t violate consumer privacy rights"

Is it possible to learn more about this section? It seems like many companies are just collecting whatever data they want.

> "Consumers – not companies – control their data."

Do I actually have control of my data? Can I delete the data that a company has on me?

> "The Commission has also explored the wider problem of blurred digital advertising to children, advising marketers to steer clear of it altogether."

This troubled me; they do not take your advisement. We need much stronger enforcement.

R Rogger
July 08, 2024

This guidance is helping us so much with our business thinking. Thanks for sharing the info.
