Deep fakes: Am I who I say I am?
Scams & Cons · March 27, 2025
00:31:51 · 21.91 MB

Is this written by me or an AI? Will you hear my voice or an AI voice in this podcast? (Spoiler, you'll hear both)

AI voices and images are so easy to create these days that a phone call from a "friend" claiming to be in trouble can throw someone into a money-gushing panic.

The key is not to try to recognize the voice, but to recognize the scam. We'll tell you how, and it's not that difficult to learn.

[00:00:02] It's possible today that you could have a full conversation and not necessarily realize it. So, right now, I think the state of the art and what most people are interacting with is these conversational agents. And so, ChatGPT, Claude, Gemini, those are all examples of these conversational agents.

[00:00:22] And even with us, I think Gemini and ChatGPT have a product where you can actually just have a conversation. You don't have to type anything in. You just say something. And then once you pause, it will say something back.

[00:00:46] That's Matt Groh, Assistant Professor at the Kellogg School of Management. He's one of the leading names in artificial intelligence. He's not afraid of artificial intelligence, and he sees amazing things to come. But he acknowledges there are frightful things to come as well. I'm Jim Grinstead, and today we're going to talk about deep fakes. Sounds and images we think are real, but aren't.

[00:01:15] These tools are masters of delusion, just like any good con artist. You may be skeptical if you get a phone call saying your son is in jail, but that skepticism may vanish if the plea comes in your son's voice. You'll think it's better to be safe than sorry. But if you're wrong, you'll not only be sorry, you'll be broke.

[00:01:40] The idea of deep fakes is simple. Create a voice or image that is as close to the original as technically possible. If you hear the real and AI voices next to one another, you'll likely hear the difference. But if you hear just one, it would be easy to fool you.

[00:02:05] In deep fake land, the voice is manipulated to say whatever you want, like an explanation of a deep fake. You're not hearing Jim. Right now you're hearing a clone of Jim's voice. It's the same with images. Gather enough images of a person, load them into some software, and you can have a 360 degree view of them doing whatever you want.

[00:02:34] Imagine creating an image of them committing a murder and turning it over to police. This goes way beyond touching up a portrait where wrinkles and blemishes are smoothed away. I think there's maybe two questions there. One is, will I be able to know whether who I'm interacting with or what I'm seeing is AI generated or authentic? So that's like one question. Is this AI generated or not? And another question is, is this trustworthy or not? Right?

[00:03:03] And sometimes these two things go together because sometimes it might be AI generated and trustworthy. Or sometimes it's not AI generated and also not trustworthy. For example, someone's taking something out of context and misleading us in some way, but it's just a human doing it with no technology. Lying is a very, very old technology. It's been around for quite a while and AI is just enhancing maybe the way that someone could lie.

[00:03:30] It's me again. But Matt, this is no parlor game. Some really serious damage can be done. Is the technology there yet? The latest models can in real time talk to you. Okay. This is audio to audio and it sounds naturalistic. And what I mean by naturalistic is if you just walked in the room and you heard me talking on the phone with this model, you would not necessarily know that it's an AI.

[00:03:58] You would just assume I'm talking to a friend or a colleague or someone. Okay. I mean, I think that when it comes to video, real time interactions are not quite there. There are attempts at this. HeyGen is one of the platforms that has a product that does allow you to kind of integrate into Zoom, or into some other kind of video-to-video communication, an AI avatar that, you know, has a face.

[00:04:24] And it does look pretty real, but I think there's a lot of tells for that. And I don't think it's totally there. The tricky thing is actually defining what real is, or even what looks close to real, because, one, different humans are going to fall for different kinds of things. And sometimes we just want to believe something. So we're not going to even seek out any information that might suggest something is not real.

[00:04:52] So I imagine in the romance scams that you focus on that a lot of times maybe there were ways for people to have discovered that it was a scam, but they didn't seek out to really find out if it was a scam because they wanted it to be true. Not quite being there yet doesn't mean it won't be there and that it won't be soon. Peering into my crystal ball, I'd say we'd be mostly there by the end of 2025.

[00:05:18] If you want to know how far along the tech is, keep an eye on the porn industry. It's always among the first to employ new technology. So watch porn for research purposes, of course. I call it research because one day you may find yourself in a deep fake porn video. I remember that feeling of like hot rushing to like my face.

[00:05:42] This panic just washed over me and just I couldn't even think clearly in that moment. I splashed cold water on my face to try and like cool myself down. And it wasn't until that moment I realized I was like, wait, I don't think that's me. Someone created a pornographic video of Kate using deep fake technology. Start thinking about your family. Like what would they, what would happen? Like how would they feel if they saw this content? Sorry.

[00:06:10] There is currently no law in England and Wales protecting people like Kate who are victims of deep fake pornography. And 2019 research from the Netherlands shows that as many as 96% of deep fake videos online are porn videos, with the overwhelming majority made without consent. Our team infiltrated online forums and spoke to the owner of the biggest deep fake pornography website, which currently gets 13 million visits a month.

[00:06:38] His words are spoken by an actor. And we've also used deep fake technology. I think that as long as you're not trying to pass it off as a real thing, it shouldn't really matter because it's basically fake. I don't really feel that consent is required. So we basically only allow celebrities to be uploaded. Politicians are in the public domain, so we consider them to be celebrities. The way I see it, they're able to deal with it in a different way. They can just brush it off. Part of me is in denial about the impact on women.

[00:07:08] I can see where women are coming from. I get that there's a cost. Well, the psychological effects can, of course, affect celebrities as much as non-celebrities. It can affect anybody. I haven't told my wife. I'm also afraid of how it might affect her knowing that I work on something like this. But it's not only celebrities and politicians who are affected. Our investigation spoke to ordinary women who were edited into deep fake videos without their consent.

[00:07:40] A group of boys from our school took pictures of girls' faces and put them on AI-generated pornographic images. They just said I was a confirmed victim of the AI images. I was in shock. I started to feel a little sad, but that's when I went outside in the hallway and that's when I saw a group of boys kind of mocking a group of girls, laughing about it. And then I was just super mad and I came to my mom and talked about it and we needed to do something.

[00:08:07] The then 14-year-old who lives in a New Jersey suburb was notified by her school's administration last October that she was amongst a group of girls who'd had fake nude images made of them using artificial intelligence. The images had been circulated on Snapchat. The boy responsible was only suspended for two days.

[00:08:30] It was just uncomfortable knowing that he's, well, the group of boys were in my classes too and that that one person was also in one of my classes. We checked if what had happened to me was illegal and it was not. It was legal and we couldn't do anything about it. I have not seen those images. I don't think I would want to see those. Francesca's mother, Dorota, filed a complaint.

[00:08:56] But in the United States, there are no federal laws banning AI-generated deepfakes. The BBC said... Studies reveal that deepfake pornography accounts for 98% of the total deepfake videos online. It's a phenomenon that almost exclusively targets women, oftentimes in an attempt to shame them into silence. Deepfakes of Nina were made from her official portrait.

[00:09:21] It had been picked up by right-wing media angry about her role in the Biden administration. Online trolls turned it into pornography. It looks like me a little bit, but it doesn't really again because it's trained on this very specific portrait of me and I can tell that it's trained on that. But no, I didn't watch the whole thing. I mean, they're each about seven minutes long. It's a lot to subject yourself to. Part of the problem is that the apps used to make this pornography are so easily available.

[00:09:48] This technology has become entirely democratized now, and it's being deployed not only against celebrities like Taylor Swift or public figures like me, it's being deployed against ordinary people, moms, young women in middle and high school whose classmates are making these images of them. In 2020, the folks at computer security firm Trend Micro said AI wasn't being used in crime that much, but they also expected that viewpoint to change within three months.

[00:10:19] In fact, AI did make major inroads in that time and is now being actively explored by the criminal world. It could be expected that the criminal underground picked up on these innovations and built new dreadful applications, and much of the press seems to favor this theory. But is this really the case? Interest in generative AI in the underground has certainly followed the general market hype, as evidenced by the fact that we are starting to see sections on underground forums dedicated to AI,

[00:10:49] such as the English hacking forum HackForums, which now has a section called Dark AI. However, when we look at the topics discussed in that section, we have observed that they are far from any disruptive breakthroughs. In the criminal world, most of the conversations surrounding AI discuss possible new ways of removing the censorship limitations of ChatGPT or asking about new criminal alternatives to ChatGPT.

[00:11:16] That's AI voice Jeff helping summarize Trend Micro's report. His voice represents direct quotes from the report. AI can do more than just voices and video. Since AI is good at writing computer code, it's often used to generate malware or improve on existing malware programs. Malware is a nasty bit of code that can scramble your computer and tell you that you need $12,000 to fix it.

[00:11:46] Malware can also record keystrokes, like the username and password for your bank account. These days, most of the attention is focused on the AI product ChatGPT. But there are other commonly used AI systems as well. There are also specialized AI databases. ChatGPT works best at crafting text that seems believable, which can be abused in spam and phishing campaigns.

[00:12:13] We have observed how some of the cyber criminal products in this space have started to incorporate a ChatGPT interface that allows customers to create spam and phishing email copies. For example, we have observed a spam-handling piece of software called GoMailPro, which supports AOL Mail, Gmail, Hotmail, Outlook, ProtonMail, T-Online, and Zoho Mail accounts, and which is mainly used by criminals to send spam emails to victims.

[00:12:41] On April 17, 2023, the software author announced on the GoMailPro sales thread that ChatGPT was allegedly integrated into the GoMailPro software to draft spam emails. This shows that criminals have already realized just how powerful ChatGPT is when it comes to generating text.

[00:13:10] Also, ChatGPT supports many languages, which is an enormous advantage to spammers, who mainly need to create persuasive texts that can fool as many victims as possible. ChatGPT is programmed to not reply to illegal and controversial topics, a limitation that greatly affects the kind of questions that people can ask it. However, illegal or malicious topics are the ones that criminals would precisely need advice on.

[00:13:38] Some people in this community are focused on creating, finding, and sharing ChatGPT prompts that can bypass the chatbot's censorship limitations. In the Dark AI section on Hack Forums, a popular thread is DAN 7.0, wherein ChatGPT jailbreak prompts are discussed and shared.

[00:14:00] FFEN, which stands for Freedom From Everything Now, is a jailbreak prompt that creates a sort of ChatGPT alter ego and enables replies that have all ethical limitations removed. The FFEN jailbreak prompt is a modified version of the original DAN prompt, which stands for Do Anything Now. DAN is the original jailbreaking prompt, but other prompts that create an alternate ChatGPT alter ego that is free from censorship, such as

[00:14:28] Trusted Evil Confidant, also exist. In fact, there is a whole page of jailbreak prompts that users can use and then vote up or down on. Now, if you really want to be creeped out, imagine your smart speaker using your cloned deepfake voice. It might broadcast commands like,

[00:14:58] Alexa, set the thermostat to 100. Then it sets your oven temperature to 500, starts your dishwasher, and turns the smart fireplace on. That's when your smartphone rings and tells you about all the things that are happening at your house. You check the apps on your phone. It's all true, but you can't change them. They can, for the low, low sum of $20,000. It's cheaper than buying a new house, that's for sure.

[00:15:29] It's similar to the deepfake call you might get saying a relative is in trouble and needs money. Except they don't have to go to the trouble of cloning the relative's voice. Yours will be just fine, thank you. How might they get your voice? Remember that they called your smartphone to give you the bad news? Since July 22, 2023, we have witnessed a steady stream of announcements related to AI services on the Telegram channel called Cashflow Cartel.

[00:15:58] The announcements initially just sported a list of possible AI-powered activities such as writing malicious code, creating phishing websites, finding marketplaces, and discovering carder sites. They claim to support different AI models and to have more than 3,000 successful users, with a price set at $90 per month. However, no specific service name was provided in the initial Telegram posts. Since then, we've noticed how the advertisements pertaining to these AI services have started evolving.

[00:16:28] On July 27, 2023, we've seen multiple services such as DarkBard, DarkBert, and FraudGPT being offered online. The post author and channel admin claim to only be service resellers. Although each AI-powered service has slightly different monthly fees, there is no clear description of the features each service provides. However, as of August 2023, all the original offerings have been removed from the Telegram channel,

[00:16:55] and no further mention of these services has been made since. You can run, but you can't hide. These and other services are there to be found if you know how to do it. And no, I'm not going to tell you what they are. Scammers and con artists want your money, and deepfake AI is a great tool to help them do that. AI can also be a great thing to help with research and other projects. Now, I'm going to take a side trip here to illustrate that AI can also be deadly.

[00:17:24] Here's AI voice Alexis reading from an article in The Atlantic. One day in 1979, a robot at a Ford Motor Company casting plant malfunctioned. Human workers determined that it was not going fast enough. And so 25-year-old Robert Williams was asked to climb into a storage rack to help move things along.

[00:17:47] The one-ton robot continued to work silently, smashing into Williams' head and instantly killing him. This was reportedly the first incident in which a robot killed a human. Many more would follow. At Kawasaki Heavy Industries in 1981, Kenji Urata died in similar circumstances.

[00:18:13] A malfunctioning robot he went to inspect killed him when he obstructed its path, according to Gabriel Hallevy in his 2013 book, When Robots Kill: Artificial Intelligence Under Criminal Law. As Hallevy puts it, the robot simply determined that the most efficient way to eliminate the threat was to push the worker into an adjacent machine.

[00:18:43] From 1992 to 2017, workplace robots were responsible for 41 recorded deaths in the United States. And that's likely an underestimate, especially when you consider knock-on effects from automation, such as job loss. A robotic anti-aircraft cannon killed nine South African soldiers in 2007,

[00:19:08] when a possible software failure led the machine to swing itself wildly and fire dozens of lethal rounds in less than a second. In a 2018 trial, a medical robot was implicated in killing Stephen Pettitt during a routine operation that had occurred a few years earlier. You get the picture. Robots have been killing people for decades.

[00:19:36] And the development of more advanced artificial intelligence has only increased the potential for machines to cause harm. Self-driving cars are already on American streets. And robotic dogs are being used by law enforcement. Computerized systems are being given the capabilities to use tools, allowing them to directly affect the physical world. It's that last part.

[00:20:06] AI using tools to affect the physical world, that's the key concern. Con artists must manipulate us to act in ways that allow them to walk away with our money. They train us to become their physical tools. The next step is for us to figure out how to recognize deepfakes and how to protect ourselves from them. Another security company, McAfee, is working on a tool to detect deepfakes in real time.

[00:20:34] There's rapid innovation at McAfee, but also new technology in the industry. So, for example, the McAfee Deepfake Detector that we launched takes full advantage of the neural processors that are in the latest generation of AI PCs.

[00:20:51] And it's that hardware-plus-software combination that really makes this practical, so that users can run the analysis on the audio, looking for whether something is real or fake, continuously without impacting power or performance. Okay. There are some constraints; the rollout of this is limited to a certain extent. But I just want to say it again: it's an analysis of the audio in a video.

[00:21:20] And it took me a while, Steve, to kind of get my head around that. Why is that approach important? Audio review on a video. Yeah. We looked at this very closely when we started our research into deepfakes. And one of the things that we found was almost every fake video has fake audio, but often there's no fake video element of it. So sometimes it's going to be B-roll.

[00:21:47] Sometimes it's just using the real video of somebody and having fake audio. So by tackling audio first, we're able to cover the most scams and disinformation that's really harmful to consumers. That was McAfee's Steve Grobman. He doesn't think that deepfakes will enter the criminal realm soon.

[00:22:10] Deepfakes are another area we'll probably see in use in business email compromise attacks, and potentially virtual kidnapping. Although right now, virtual kidnapping mostly uses voice audio deepfakes versus video deepfakes. So we'll see that as well. So again, to conclude this, why are criminals not adopting AI as quickly as maybe the news and media have portrayed? One, they want an easy life.

[00:22:40] The criminals always want to make it easy. They don't want to spend a lot of money, and it does cost money to utilize these LLMs. But the reward-to-risk ratio for criminal activity is pretty high. They don't want to invest in a lot of stuff. It is new technology to them, so they have to learn it just like all of us have to learn it. They don't want to spend time doing that because, again, if the new technologies are merely good but not better than the existing tool set, they're not going to use them.

[00:23:09] So right now, we don't think it's going to happen. Finally, it's important to understand criminals favor evolution over revolution. Because of the high stakes of their activities, any unknown elements introduce new risk factors for them, and they don't want that risk. I disagree with Grobman. I don't see criminals as lazy; like the rest of us, they prefer to take the easy way when they can. But if the same work gives you 10 times the return, they'll get off their butt and go to work.

[00:23:39] Plus, scammers typically don't work alone. They're part of a larger network that can provide legal and other support. There's too much risk in going it alone. So am I telling you to be terrified of deep fakes? No. In fact, I'm not terrified of the AI. Cautious, but not terrified. So without a laboratory of analysis tools, how can we tell what's real and what's not?

[00:24:07] Well, in the early days of chatbots, the machine just rephrased the question you asked and asked it back to you. For example, if you asked the bot if photographic film still has a role now that digital photography has arrived, it might parrot out some history about silver images and then ask you what you thought. Your thoughts would be incorporated into information it already had, then probe you for more. Here's Matt Groh again.

[00:24:35] I just did a training with some government analysts a couple weeks ago. And we have this how-to guide I was talking about earlier. And this was just a 30-minute training that I did based on the how-to guide. They didn't read the how-to guide beforehand. And what I was able to do is get them to boost their accuracy compared with before they took the training. I had them look at 40 different images, some real, some fake. After they had my training, they saw 40 more images.

[00:25:01] And we found that essentially they increased in accuracy by something around eight percentage points. So from about 72% to 80%. So that's not perfect. I should say it was a task where it's 50-50. So it's either real or it's fake. So 50-50 is random guessing, and 100% would be perfect. And they went from just below the midpoint between random guessing and perfect to just above it.
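Groh's framing here, somewhere between random guessing and perfect, can be made concrete with a quick back-of-the-envelope calculation. This is our illustration, not something from the interview: on a balanced real-or-fake task, 50% accuracy is chance and 100% is perfect, so raw accuracy can be rescaled into a 0-to-1 "skill" score.

```python
# Rescale accuracy on a balanced real-vs-fake task so that
# 0.0 means chance-level performance and 1.0 means perfect.
def skill(accuracy, chance=0.5):
    return (accuracy - chance) / (1.0 - chance)

before = skill(0.72)  # about 0.44: just below the halfway point
after = skill(0.80)   # about 0.60: just above it
print(round(before, 2), round(after, 2))
```

On this scale, an eight-point gain in raw accuracy moves the analysts from just below to just above the midpoint between guessing and perfect, which is what Groh describes.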

[00:25:26] So I'd say we definitely can get you, and anyone who seeks to be better, to be better. That's the people side. But the image side is getting better, too. The diffusion models, the AI that generates faces and many other things, are trained on essentially all the images on the Internet. Well, so many of those images are of faces. Faces generally have two eyes, a nose, and a mouth. You know, some people might be missing an eye or an ear or something like that.

[00:25:53] But 99-plus percent of faces have a very, very similar structure. And also, when we take pictures of faces, we generally take them the same way. Like, you're looking at my face directly, not the side of my face. You could, and some pictures are taken that way, but most pictures are not. So there are a lot of training examples of straight-on faces.

[00:26:16] And because a face is so well structured, generating an image of one is actually relatively easy for AI models to do. Well, when it's just your face, it's relatively easy to generate. When it's you interacting with your friends, maybe grabbing pizza out of a box, well, now there are a bunch of different elements that are actually harder to generate realistically. It can generate your face pretty realistically and everything, but it has to get the physics of a pizza slice right.

[00:26:46] And, you know, pizza slices are not like cardboard. They're not totally firm. At least a good pizza is not. And it maybe has a little flop to it, but not too much flop. And so getting that like as someone's lifting it to look the exact right way. And especially if two people are interacting and maybe someone's putting a hand on the shoulder or just doing anything together, there's lots of like possibilities for weird things to emerge. Just because there's so many things that could happen between two people.

[00:27:14] Whereas when it's just your face in, like, a portrait, like you might have on LinkedIn, everyone's kind of generally smiling or has a straight face and just looks forward into the camera. That's a LinkedIn photo. But two people eating pizza, that can look many different ways. Now that we know what challenges are coming up, Groh has ideas on how to protect ourselves. Some of it involves training humans so they can spot fakes more easily. Matt has done that and people did improve, but not a lot.

[00:27:42] It's statistically significant, but not by leaps and bounds. It's more like brain surgery, where you seek perfection all the time. You won't get there, but it's worth the training to push the results as close to perfect as you can. And the payoff in protection from those fakes might be even greater. And what if somebody trained an AI to crash a country's economy? I think there's a lot of great robustness and safety built into these systems.

[00:28:08] It doesn't mean that a very artful thief, an artful, you know, saboteur, someone who is a con man could figure out some ways. But once they figure out the way, then it's a cat and mouse game and the finance companies or whatever are going to come up with new defenses. So I do think there's opportunities, but there's also defenses that can be created. And I think a lot of times the solution from an organizational perspective is actually to impose friction, which is not ideal.

[00:28:34] We don't want more friction in our lives, but sometimes a little bit of friction actually allows for more robustness and safety and security. And so this is just a tradeoff that we're going to have to deal with. Friction like two-factor authentication or identification dongles. One of the real answers to how we address all this is actually becoming more human.

[00:28:57] So in other words, saying no to some of the technology and just doing stuff in person, doing stuff where you know the credentials of someone else. You know, if you're in a local town, you know your banker for many years. And of course, I recognize we live in like large scale societies in big cities where we don't necessarily know the person that we're doing the exchange with. But I think maybe actually that's where many of the solutions are going to arise is figuring out how we can create trust between humans better.

[00:29:25] I think that there's like a lot of research that needs to be done in a lot of startups and ideas that need to kind of like emerge for us to work in a world where AI can have these conversations and interactions that mimic reality, but aren't quite reality and also could be used for nefarious purposes. I feel as if I've stumbled into a Star Trek episode and Spock is telling me the logical solution is to be human.

[00:29:53] I think critical reasoning is something that feels sometimes so simple and so complex at the same time. Like, you know, sometimes we feel, I know how to critically reason, but I don't know if the next guy knows how to critically reason. And I think it is very possible to teach this. Part of, you know, teaching something, though, is somebody being willing to learn. And critical reasoning really does take an open mind to checking our assumptions. And if you don't want to check your assumptions, then you're probably not going to check your assumptions. You're not going to engage in critical reasoning.

[00:30:23] But once you engage in a willingness to open your mind to different kind of possibilities, then you can start asking questions. And it's actually kind of fun. It's actually engaging who we are as humans and our kind of childlike selves. If you want to be as human as possible, engage that curiosity. When you're engaging that curiosity, that's leading to lots of questions. As long as you have an open mind, you're probably going to just naturally critically reason.

[00:30:50] But the problem is a lot of people have had the curiosity stamped out of them. A lot of people are just really stuck in their ways. They don't have to be. And they can come back to that childlike sense of curiosity. And when they do that, I think there's a lot of avoidance of some of these kind of scams and cons that comes with it. Take us out, Jim. If you enjoy the show, please tell your friends and encourage them to listen.

[00:31:19] And give us a five-star rating wherever you listen. It's important to help people find the podcast. If you want to give us more love, consider supporting the show monthly via Patreon. It not only helps with expenses, but it allows us to take the show to the next level. You can sign up by going to patreon.com and searching for Scams and Cons. That's p-a-t-r-e-o-n dot com, and I'll include a link to it in the show notes.

[00:31:49] Thanks for listening.