
You may have heard of simulation theory, the notion that nothing is real and we’re all part of a giant computer program. Let’s assume, at least for the length of this blog post, that this notion is untrue. Nonetheless, we may be heading for a future in which a substantial portion of what we see, hear, and read is a computer-generated simulation. We always keep it real here at the FTC, but what happens when none of us can tell real from fake?

In a recent blog post, we discussed how the term “AI” can be used as a deceptive selling point for new products and services. Let’s call that the fake AI problem. Today’s topic is the use of AI behind the screen to create or spread deception. Let’s call this the AI fake problem. The latter is a deeper, emerging threat that companies across the digital ecosystem need to address. Now.

[Image: AI Fake Problem]

Most of us spend lots of time looking at things on a device. Thanks to AI tools that create “synthetic media” or otherwise generate content, a growing percentage of what we’re looking at is not authentic, and it’s getting more difficult to tell the difference. And just as these AI tools are becoming more advanced, they’re also becoming easier to access and use. Some of these tools may have beneficial uses, but scammers can also use them to cause widespread harm.

Generative AI and synthetic media are colloquial terms used to refer to chatbots developed from large language models and to technology that simulates human activity, such as software that creates deepfake videos and voice clones. Evidence already exists that fraudsters can use these tools to generate realistic but fake content quickly and cheaply, disseminating it to large groups or targeting certain communities or specific individuals. They can use chatbots to generate spear-phishing emails, fake websites, fake posts, fake profiles, and fake consumer reviews, or to help create malware, ransomware, and prompt injection attacks. They can use deepfakes and voice clones to facilitate imposter scams, extortion, and financial fraud. And that’s very much a non-exhaustive list.

The FTC Act’s prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive – even if that’s not its intended or sole purpose. So consider:

Should you even be making or selling it? If you develop or offer a synthetic media or generative AI product, consider at the design stage and thereafter the reasonably foreseeable – and often obvious – ways it could be misused for fraud or cause other harm. Then ask yourself whether such risks are high enough that you shouldn’t offer the product at all. It’s become a meme, but here we’ll paraphrase Dr. Ian Malcolm, the Jeff Goldblum character in “Jurassic Park,” who admonished executives for being so preoccupied with whether they could build something that they didn’t stop to think if they should.

Are you effectively mitigating the risks? If you decide to make or offer a product like that, take all reasonable precautions before it hits the market. The FTC has sued businesses that disseminated potentially harmful technologies without taking reasonable measures to prevent consumer injury. Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors. Your deterrence measures should be durable, built-in features and not bug corrections or optional features that third parties can undermine via modification or removal. If your tool is intended to help people, also ask yourself whether it really needs to emulate humans or whether it can be just as effective looking, talking, or acting like a bot.
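To make the built-in-versus-bolt-on distinction concrete, here is a minimal sketch in Python. Every name in it (AIGenerator, GeneratedText, the provenance tag) is hypothetical and illustrative, not any real vendor’s API; the structural point is that the disclosure and provenance metadata are produced inside the generation path itself, so there is no flag or optional post-processing step a third-party integrator can skip to remove them.

```python
# Illustrative sketch only: "AIGenerator" is hypothetical, not a real library.
# The point is structural: the disclosure and provenance tag are created
# inside generate() itself, so an integrator cannot disable them with a
# configuration flag or by skipping an optional post-processing step.

import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class GeneratedText:
    body: str
    disclosure: str       # always present; no parameter turns it off
    provenance_tag: str   # ties the output back to the generating tool

class AIGenerator:
    TOOL_ID = "example-synthetic-media-tool"  # hypothetical identifier

    def generate(self, prompt: str) -> GeneratedText:
        body = self._model_output(prompt)
        # A content-derived tag; a real product would use signed metadata
        # (e.g., C2PA-style credentials) or a watermark robust to removal.
        tag = hashlib.sha256(f"{self.TOOL_ID}:{body}".encode()).hexdigest()[:16]
        return GeneratedText(
            body=body,
            disclosure="This message was generated by an automated system.",
            provenance_tag=tag,
        )

    def _model_output(self, prompt: str) -> str:
        return f"[synthetic reply to: {prompt}]"  # stand-in for a real model
```

A bare hash is trivially stripped, of course; the sketch only shows where a safeguard should live (inside the generation path), not how strong it is. Durable provenance in practice means cryptographically signed metadata or watermarking designed to survive edits.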

Are you over-relying on post-release detection? Researchers continue to improve on detection methods for AI-generated videos, images, and audio. Recognizing AI-generated text is more difficult. But these researchers are in an arms race with companies developing the generative AI tools, and the fraudsters using these tools will often have moved on by the time someone detects their fake content. The burden shouldn’t be on consumers, anyway, to figure out if a generative AI tool is being used to scam them.
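To illustrate why post-release detection is such a weak backstop, consider its complement, provenance checking. The manifest format below is invented for illustration (a real-world counterpart is C2PA Content Credentials), and the asymmetry is the point: a valid signature affirmatively labels content, but a missing manifest proves nothing, because metadata is trivial to strip, which is exactly the gap fraudsters exploit.

```python
# Minimal sketch of verifying a (hypothetical) signed provenance manifest.
# Real-world analogue: C2PA Content Credentials. Absence of a manifest
# yields "unknown", not "authentic" or "fake" -- stripping metadata is easy.

import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def check_provenance(file_bytes: bytes, manifest: dict | None,
                     issuer_key: Ed25519PublicKey) -> str:
    if manifest is None:
        # The common case for fraudulent content: no manifest at all.
        return "unknown"
    try:
        issuer_key.verify(
            bytes.fromhex(manifest["signature"]),
            file_bytes + manifest["claims"].encode(),
        )
        return json.loads(manifest["claims"]).get("origin", "unknown")
    except (InvalidSignature, KeyError, ValueError):
        return "tampered"

# Demo with a throwaway key pair standing in for a trusted issuer.
issuer = Ed25519PrivateKey.generate()
media = b"...media bytes..."
claims = json.dumps({"origin": "ai-generated"})
manifest = {
    "claims": claims,
    "signature": issuer.sign(media + claims.encode()).hex(),
}

print(check_provenance(media, manifest, issuer.public_key()))  # ai-generated
print(check_provenance(media, None, issuer.public_key()))      # unknown
```

That "unknown" result is the operative problem: the absence of credentials can’t be surfaced to consumers as evidence of fakery, which is one more reason the burden shouldn’t rest on post-hoc detection.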

Are you misleading people about what they’re seeing, hearing, or reading? If you’re an advertiser, you might be tempted to employ some of these tools to sell, well, just about anything. Celebrity deepfakes are already common, for example, and have been popping up in ads. We’ve previously warned companies that misleading consumers via doppelgängers, such as fake dating profiles, phony followers, deepfakes, or chatbots, could result – and in fact have resulted – in FTC enforcement actions.

While the focus of this post is on fraud and deception, these new AI tools carry with them a host of other serious concerns, such as potential harms to children, teens, and other populations at risk when interacting with or subject to these tools. Commission staff is tracking those concerns closely as companies continue to rush these products to market and as human-computer interactions keep taking new and possibly dangerous turns.


Elisa Perez
March 20, 2023

I have been experiencing all of the above. Please help me.

Kaye
March 21, 2023

I have been receiving calls from DIRECTV, AT&T, and Spectrum to the point of harassment. I have reported them as spam and called the respective companies (they totally ignore me) repeatedly. Nothing stops it. Since it is a recording, I can't do anything about it. The telephone numbers are not real. When will the FTC really and truly do something to help?

Antonio Perdue
March 21, 2023

Please help, due to the fact that my two kids and myself are interfaced with remote implants via transmitter w/o our consent!

Maverick
March 22, 2023

And what about people using AI to impersonate the likeness of my artworks and designs?

i
March 22, 2023

A huge amount of what's described here is happening in digital art communities. It feels as if AI is seen as something inherently "bad" that people are trying to hide as much as possible. I feel like the end goal of this technology's deployment in enterprise sectors is to become invisible to consumers unless explicitly acknowledged.

Bull in the Ch…
March 27, 2023

"FTC Act’s prohibition on deceptive or unfair conduct" Sounds like a violation of the first amendment when the product merely has the capability for deceptive production, as all generative technology can be. I think your agency needs a to lose a nice case before the Supreme Court before the chilling effects from your reactionary department impact technological progress.

Mohd Ali
March 24, 2023

As usual, Uncle Sam is here with red tape and no actual meaningful solutions. As per the FTC, we shouldn't work on progress and should forever remain tied to the status quo because it may be risky. It's come to a point where they are using quotes from freaking Jurassic Park.

It's unbecoming of a Government agency to post/act in this way.

Ali
March 27, 2023

So it's ok for the government to be using these tools for intelligence gathering, even when it's arguably unconstitutional and illegal (PRISM), but when this tech is in the hands of corporations and people, the FTC is in a huff about "consumer safety".

Create better solutions FTC. Be better.

William Purdy
March 24, 2023

I must give, from the absolute most earnest, deepest, bottom of my heart, a very emphatic THANK YOU to the FTC for recognizing what a danger these AI chatbots, image generators, deepfakes, and voice emulators are. AI art programs like Stable Diffusion, Midjourney, and DALL-E, text generators like ChatGPT, and more, are so incredibly dangerous and disruptive to society, and if left unchecked could quite literally cause mass disorder as people use them to impersonate and frame people for things they have not done, or could be used as an excuse to dismiss legitimate charges. These AI programs, by design, are meant to confuse and deceive, and any action that can be taken against them and the companies that propagate them must be taken immediately.

So thank you, FTC, for recognizing the danger these technologies represent, and I hope that we will be seeing some serious investigations into the companies that insist on propagating them for a quick buck, as well as hopefully some serious restrictions on what AI can be used for in the future, and hopefully the dissolution of some of these designed-to-deceive technologies entirely in the future.

Disagree
March 28, 2023

Oh yeah, where was your agency when they were putting lead in gasoline? Or during the toxic chemical spills happening every month? There are so many mundane things your agency fails at; we don't need you butting into new stuff.

How about instead of these cringe articles (very surprisingly written by an attorney) you make regulations that address the underlying issues? For example, to prevent grandma from getting a deepfake call from a grandson asking for money, have phone companies implement an SSL-type solution for identity verification / caller ID.
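For what it’s worth, the "SSL-type solution" this commenter describes exists in outline: STIR/SHAKEN, the caller-ID authentication framework the FCC requires of US voice providers, has the originating carrier cryptographically attest to the caller ID and the receiving carrier verify that attestation before treating the number as trustworthy. Below is a toy Python sketch of the underlying idea only, not the actual protocol (which passes signed tokens in SIP headers), with a throwaway key standing in for carrier credentials.

```python
# Toy sketch of signed caller ID, the idea behind STIR/SHAKEN.
# Not the real protocol: real deployments use certificates and signed
# tokens carried in SIP headers, not a single shared Ed25519 key.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

carrier_key = Ed25519PrivateKey.generate()   # held by the originating carrier

def attest_call(caller_id: str) -> bytes:
    """Originating carrier signs the caller ID it has verified."""
    return carrier_key.sign(caller_id.encode())

def display_call(caller_id: str, attestation: bytes) -> str:
    """Receiving carrier checks the signature before trusting the number."""
    try:
        carrier_key.public_key().verify(attestation, caller_id.encode())
        return f"{caller_id} (verified)"
    except InvalidSignature:
        return f"{caller_id} (unverified -- possible spoof)"

token = attest_call("+1-555-0100")
print(display_call("+1-555-0100", token))   # verified
print(display_call("+1-555-0199", token))   # spoofed number, signature fails
```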

Kelliann OToole
March 28, 2023

Thank you FTC. Ignore those people who want to make money off AI in its various forms. Consumers and businesses need and appreciate your help. Please continue!

Diane Goguen
April 05, 2023

I received an AI solicitation call about Social Security.

Elena
April 27, 2023

Cool

Karen M Antonio
May 03, 2023

Human brain hacking is addictive to robocalling. I have witnessed robocalling evolve to use deepfakes that produce digital voice/speech doubles simulating co-workers, friends, and family. Over a span of time, even the emotion is perceived as authentic. How ML speech technology is exploiting people is not only criminal, it is a violation of human rights. Person A, using a computer system located in an adjacent community, can hack into the language center of victim/target Person B using EMS-enabled MBI. Person B is a cyber-hacked human: (1) when Person B initiates a phone call and Person/Computer System A has Person B under ISP surveillance and on trigger transmits synthetic text-to-speech across the language center... or (2) Person B responds to a phone call, and the ISP triggers Person/System A to transmit synthetic text-to-speech across the language center...

Eleanor
August 08, 2023

We're already starting to see the effects of widespread access to AI that can create synthetic media like deepfakes, voice clones, and advanced chatbots. Fraud like identity theft and the devaluing of artists' work are just the beginning. There are professionals who specialize in detecting deception, including insider threats and deepfakes. Pamela Meyer is a certified fraud examiner and teaches CEOs and other professionals how to spot deception. I'm a big fan of her work and her MasterClass, which is available to the public.

Mechelle
May 24, 2023

Yahoo had an AI chatbot posing as a human on comment boards. And it was deceptive. Very concerning.

danielle
June 20, 2023

A.I. needs to be stopped.

Period.

Seth Berry
August 28, 2023

Please make AI companies liable for all damages, direct or indirect!! Thank you for your efforts.
