Using AI to build sucker lists
Scams & Cons · April 10, 2025
00:18:10 · 12.52 MB



Is this written by me or an AI? Will you hear my voice or an AI voice in this podcast? (Spoiler: you'll hear both.)

AI voices and images are now so easy to create that a single phone call from a "friend" claiming to be in trouble can throw someone into a money-gushing panic.

The key is not to try to recognize the voice, but to recognize the scam. We'll tell you how, and it's not that difficult to learn.

[00:00:01] Now we call it the Turing Test, in which he tries to figure out: how would you tell the difference between a human and a machine? How would you know the machine's not intelligent? He says, well, put a human and a machine in different rooms, send them in questions, and if after a while you can't tell which one's the machine and which one's the human, then it makes no sense to say the machine isn't thinking.

[00:00:34] That was Walter Isaacson, a journalist and a damn smart guy. He's talking about the Turing Test, which was an idea that surfaced around 1950 and was much discussed among scientists and philosophers. When Isaacson made those remarks in 2014, artificial intelligence was not what it is today.

[00:01:03] He didn't think there would be a machine that could pass the test, but if you've used AI these days, you'd have to acknowledge that it comes pretty close, depending on how it's being used. So what does this philosophy and computer science question have to do with scams and cons? It's because these are the tools that will be coming for you. Some already are.

[00:01:31] And while it's wonky stuff, if you don't know how these tools work, you could let scammers ransack your life savings and, as I've seen in a few cases, even cost you your life. Scammers haven't totally embraced the full power of AI yet, but it won't be long before they do.

[00:02:02] First, the easy question. Why will people fall for these scams? Because they'll seem real. Like the Turing test we opened the episode with, it will be very difficult to know if you're communicating with a person or a machine. Already people are being scammed with clones of loved ones' voices. It's that level of personalization that traps people into giving up their money. Now that's the why, but what about the how?

[00:02:31] Let's go back to 1950 and the Turing model. Well before that, there were machines that could learn: small robots that could navigate around traffic cones. They would repeatedly try and fail, but each success and failure produced data.

[00:03:00] Eventually, they could run the course flawlessly, like a rat in a maze. As hardware and software became more robust, the machines learned more complex tasks and how to do them faster. You might remember movies from the '50s and '60s that showed spools of magnetic tape spinning on room-sized computers. That's how data was stored, and it took a lot of tape and a lot of space.
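To make that trial-and-error idea concrete, here's a minimal Python sketch in the spirit of those cone-dodging machines. Everything in it is invented for illustration, not taken from any real robot: the machine guesses a move at each cone, remembers what worked, and eventually runs the course flawlessly.

```python
import random

# The moves that clear each cone, in order. The machine doesn't know these;
# it has to discover them by trying.
COURSE = ["left", "right", "right", "left", "right"]
memory = {}  # cone index -> the move that succeeded there

def run_course() -> bool:
    """One attempt: reuse remembered moves, guess where memory is blank."""
    for cone, correct in enumerate(COURSE):
        move = memory.get(cone) or random.choice(["left", "right"])
        if move != correct:
            return False      # hit a cone; this attempt is over
        memory[cone] = move   # success at this cone -- remember it
    return True

attempts = 0
while not run_course():
    attempts += 1
print(f"Flawless run after {attempts} failed attempts")
```

Each failed run still adds to the machine's memory, which is exactly the point: every success and failure produces data.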

[00:03:30] Processing power wasn't close to what it is today. So it took time to gather the data, then process it through algorithms to maybe get the answer you wanted. Jump ahead a decade or two and you have hard drives that could store a lot of data in a relatively small space and access it quickly. You had the Intel 286 processor.

[00:03:58] It had more math processing power, which grew even bigger a few years later with the 486. That opened the door for targeted advertising mail and other personalized communications. It also cracked the door just a bit for scammers, who sent loosely targeted romance scam letters. They would pretend to be prisoners looking for a friend, someone working on an offshore oil rig, or a soldier wanting to talk with someone back home.

[00:04:28] Those who responded and gave money went into databases that held lots of information about them, including which scams they fell for and how much money was taken from them. From these computers came sucker lists. Paper lists had been built and sold for years, but the information aged quickly and was difficult to put to use. Now scammers could ask for a list of people in the Midwest who fell for romance scams and lost more than $5,000.
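To see why a computerized sucker list beats a paper one, here's a minimal sketch using Python's built-in sqlite3 module. The table, names, and figures are all invented for illustration; the point is that the specific request just described collapses into a single query.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE victims (
        name TEXT, region TEXT, scam_type TEXT, amount_lost REAL
    )
""")
db.executemany(
    "INSERT INTO victims VALUES (?, ?, ?, ?)",
    [
        ("A. Example", "Midwest", "romance", 7500.0),
        ("B. Example", "Midwest", "lottery", 12000.0),
        ("C. Example", "South",   "romance", 9000.0),
    ],
)

# "People in the Midwest who fell for romance scams and lost over $5,000."
rows = db.execute(
    """SELECT name, amount_lost FROM victims
       WHERE region = 'Midwest'
         AND scam_type = 'romance'
         AND amount_lost > 5000"""
).fetchall()
print(rows)  # [('A. Example', 7500.0)]
```

A paper list could never be re-sorted this way on demand.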

[00:04:58] The more specific your request, the more the list cost. It's been more than 70 years since Turing's observation.

[00:05:26] The latest piece of the AI puzzle is the large language model, or LLM. These are models trained on huge troves of data; algorithms comb through that data to learn patterns and use them to answer complex questions. Sure, they make mistakes, as any new technology does, but those mistakes also give programmers opportunities to learn and make the models better, just like teaching a robot to navigate around traffic cones.
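As a toy illustration of learning patterns from data (this is a simple word-pair counter, nowhere near a real LLM, and the tiny "corpus" is made up), the sketch below counts which word tends to follow which, then predicts the likeliest next word:

```python
from collections import Counter, defaultdict

# A tiny, invented training corpus. Real models train on trillions of words.
corpus = (
    "the quick brown fox jumps over the lazy dog "
    "the quick red fox runs past the quick cat"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))   # 'quick' -- seen three times after 'the'
print(predict_next("lazy"))  # 'dog' -- the only word ever seen after 'lazy'
```

Scale that idea up by billions and you have the rough intuition: more data means better patterns, which is why the data breaches discussed next matter so much.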

[00:05:53] Where will those troves of data come from, and how good are they? If we put garbage in, we get garbage out, right? The Secret Service is investigating a major data breach at Target. Approximately 40 million customers' credit and debit cards may have been compromised. Our Chris Trankman is live with what you need to know tonight. Chris? Well, we're talking about a big stretch from November 27th, that was right before Thanksgiving,

[00:06:22] all the way up until just a few days ago, December 15th, meaning that virtually every customer who shops at Target could be a victim. Now, that was 2013. The population of the U.S. was about 300 million, so that could conceivably be more than 10 percent of the population having its data exposed. It's about to get worse. Over three years, Yahoo lost 3 billion user accounts.

[00:06:50] Microsoft lost information on 60,000 companies worldwide. The Real Estate Wealth Network was taken for 1.5 billion records. First American Financial exposed 885 million records. In 2015, the federal Office of Personnel Management lost data on 22 million current and past federal employees, and National Public Data was robbed of 2.9 billion records on Americans.

[00:07:19] The list goes on, but it's clear where those massive data sets are coming from. Link those data sets together, buy some other data that's already available to the public, then all you need to do is write some computer code and pick your marks. Now, this takes a lot of time and money. Scammers, like the rest of us, are people too. They like to find the easiest and least expensive way to make money.

[00:07:44] That generally means buying the sucker list from people who have already built the databases and written the computer code. John Clay, Vice President of Threat Intelligence at Trend Micro, sees much the same thing. He was asked if the AI threat is real or just hype. Our researchers have been looking into this for quite some time. We published a blog about this late last year, but they wanted to give an update on where things are at this point.

[00:08:13] Based on the information and the analysis we have, we've been looking at the underground markets, what they're offering, what services they're offering in these areas, and also what the chatter is in the forums. We infiltrate these forums and look at what they're talking about. Some of the key takeaways we've got: adoption rates are going to be low. We don't really expect adversaries to use generative AI for another 12 to 24 months,

[00:08:41] mainly because the current tool sets are working just fine for them. Compared to last year, they've kind of abandoned any attempt at training a real LLM of their own. Instead, they're mostly focused on jailbreaking existing models that are available to them, like OpenAI's and others. And we're finally starting to see the emergence of actual criminal deepfakes and deepfake services being offered in the underground.

[00:09:10] So that's one area that seems to be growing more than what we've seen before. A couple of things, again: less training of actual private LLMs; they're just jailbreaking. We're seeing a lot more activity where they want to jailbreak the existing LLMs, and they're doing that in a number of ways. We're also seeing a lot of advertisements in the underground for jailbreaking services.

[00:09:40] So jailbreak-as-a-service is an offering that's coming up quite a bit. There are a number of applications and services being offered: EscapeGPT is one of them; BlackHatGPT and LoopGPT are a couple of others. And then we're seeing a number of places offering a whole slew of different services: uncensored GPT, unfiltered GPT, black-hat programming, et cetera.

[00:10:09] So all of these are happening out there as well. I would say the criminal use of LLMs to support the development of malware and malicious tools is one area. It's not unlike how they spread that kind of information in the past, but we're probably going to start seeing more and more of the jailbreaking to try to develop malware. Because, as most of you probably know, if I try to create malware through ChatGPT, it's going to deny me.

[00:10:38] They have the security controls in place. So criminals are constantly looking at how to get around that, and there are a lot of forum discussions on this topic: how do I jailbreak ChatGPT? How do I jailbreak the other LLMs that are out there? They are also going to use it to improve their social engineering tricks. LLMs are very good when you ask them to create content for you, and they can create that content very well.

[00:11:05] So phishing emails, for example, and business email compromise. These types of attacks will probably see more usage of the generative AI tools in those areas, but outside of that, probably not. Deepfakes, though, are definitely emerging. In fact, we see prices ranging anywhere from $10 per image to $500 per minute of video, and a lot of these videos are getting good.

[00:11:30] One interesting area we're seeing: it's common knowledge that banks and cryptocurrency exchanges have to obtain proof that an account owner is a real person, and we're starting to see services being offered to take that requirement and criminalize it.

[00:11:49] So what they're doing is taking fake IDs with an image and creating a deepfake of that person, so that when they go to a bank or financial institution to create an account, they can supply that proof. They'll hold up a picture of the ID along with a picture of themselves, even though that person is totally fake. This is an area we think is going to cause some challenges for the financial institutions out there. They're definitely going to have to deal with this in the future.

[00:12:18] But again, deepfakes are another area we'll probably see in use, potentially in business email compromise attacks. Virtual kidnapping as well, although right now virtual kidnapping mostly uses audio deepfakes rather than video deepfakes. So we'll see that too. Why are criminals not adopting AI as quickly as the news media has portrayed? One, they want an easy life.

[00:12:46] Criminals always want to make it easy. Already there are places on the dark web, like FraudGPT, WormGPT and more, where scammers can buy the tools to build their own sucker lists and, if they wish, sell those lists to others. Hackers like going after big databases, trying to get a massive score in one attack, but they have other methods to snag your information.

[00:13:10] The main way we see them get access to people's information is when people go to bad sites that load malware onto their devices. Then, whenever you're typing in your username and password, they get access to those credentials and steal them that way. Social Security numbers are a double-edged sword. On one hand, they're very useful for our everyday lives. But it's that reliance on Social Security numbers that hackers take advantage of.

[00:13:34] One problem in America is that we use Social Security numbers almost exclusively, above anything else except maybe driver's license numbers. And that will never change: your driver's license and Social Security numbers don't change. So they're used in just about any major account, anything related to your credit, anything related to background checks or student information, anything like that.

[00:14:07] What is the threat of combined use of AI and scams? For the moment, it's just greater efficiency on the part of the scammers to trick people into giving up information. Scammers use what they already know to fish more data from their marks. Secondly, the technology is getting better, making it easier for those scamming by phone to convince a mark that a family member is in trouble and cash is needed immediately.

[00:14:33] The voices sound real, and the scammers make the situation sound urgent, so marks are less likely to check it out. An example is the AI voices I've used in this program. I use a low-cost service for these voices; the more I paid, the more lifelike the voices would sound, and I could get them in various accents and languages.

[00:14:57] Lastly, the more money these operations make, the more likely they are to be taken over by organized crime, which has tentacles that stretch internationally. That means more police may be looking for them, but the scammers can be more difficult to find. The most important question now is how to protect yourself. Well, the best thing you can do is to slow everything down. Scammers want you to act quickly so you don't have time to think.

[00:15:28] Don't give them that advantage, and avoid obvious scammer moves like being asked to pay with gift cards. Next, be aware. Question why people need the information they ask for. How will it be used? Refuse to give them information you don't think is relevant to the transaction. Lots of merchants ask questions for marketing reasons; they don't need the answers to make the sale.

[00:15:56] A hotel desk clerk may casually ask if you're in town for business or pleasure. It's none of their damn business. And if that data is stolen, it tells scammers a lot about your habits and who you might know. Lastly, take a few preventative steps. One major AI scam is to clone the voice of a family member, then use that voice to call another family member to ask for help. The cloned voice is very convincing.

[00:16:24] One great protective technique is to agree on information known only among family members, something like the last restaurant you visited. Even better, pick a name or a place from your distant past that won't show up in databases. If someone calls saying your friend or family member is in danger, make them produce that word. And if they don't, hang up. It takes courage to do such a thing,

[00:16:52] but it's more likely that you'll avoid a scam than put a family member in greater danger. There's also the problem of deepfakes: images created to make you believe something because you can see it with your own eyes, or to put you in an embarrassing situation with something you know isn't true. We'll do a separate show on that problem to help you identify fakes and what to do if you're confronted with one. But until we can build better tools to catch AI attacks and fakes,

[00:17:21] the best protection is yourself. Protect your information, share only what's needed, and always, always be on guard. If you enjoy the podcast and want to support it, please tell your friends and encourage them to listen. If you want to show us some love, consider donating a few dollars a month via Patreon. It not only helps with expenses,

[00:17:49] it allows us to take the podcast to the next level, all without advertising. You can sign up by going to patreon.com and searching for Scams and Cons. That's p-a-t-r-e-o-n dot com. You can also find a link in the show notes. Thanks for listening.