Published: July 8, 2024
Reading time: 6 minutes
Written by: Doriel Abrahams

Generative AI has come to almost every field, including online fraud. If you’ve been reading the same articles and reports I have over the last year and a half, you’ll have seen a lot of material that ranges from gloom to doom. 

Looking at what’s happening on the front lines of fraud, we do need to keep tracking this. But we also need to retain a balanced perspective, because so far I’m not seeing any magic, and that’s great. 

Fraudsters Never Fear AI “Taking Their Job”

In other industries, people worry about AI taking their jobs. Fraudsters never worry about that. The goal of a fraudster is to achieve the maximum possible financial gain with the minimum possible effort. In other words, it’s about ROI. If AI makes it possible to steal the same, or even more, with less work from the fraudster — that’s a criminal’s jackpot.

That’s why we see fraudsters enthusiastically embracing AI to streamline the creation of convincing phishing emails and messages, expand into multiple languages with ease (no, I haven’t learned French; that was a deepfake doing it for me), enhance and automate scams through chatbots and instant messaging, and accelerate malware development. Even non-technical bad actors can use ChatGPT and the like to scale up their attacks and make them more sophisticated. Zscaler found a 60% increase in phishing attacks in 2023. That’s not something you can ignore.

As the US Treasury Department said in March, “Complex and persistent cyber threats continue to grow… Generative AI can help existing threat actors develop and pilot more sophisticated malware, giving them complex attack capabilities previously available only to the most well-resourced actors. It can also help less-skilled threat actors to develop simple but effective attacks.”

Generative AI Streamlines Fraud, But Doesn’t Change the Game

Generative AI can be a fraudster’s friend. I’m not dismissing the legitimate concerns over it or the need to keep a sharp eye on how it’s evolving in the hands of criminals (this is how I justify spending so much time experimenting with GenAI to see what I could do if I were a fraudster, and checking out what fraudsters are discussing among themselves).

That said, the impact is limited. Generative AI might sometimes look like magic, but it’s not. It doesn’t create something out of nothing. The fraudster still needs to do the thinking and analysis to find the chinks in a site’s armor that make a successful fraud attack possible. That’s time-consuming and difficult. 

In this respect, fraudsters are similar to hackers exploiting vulnerabilities — one study showed that if an LLM agent was taught about specific vulnerabilities and exposures, it could succeed in an attack 87% of the time. But when it wasn’t given that playbook, it succeeded only 7% of the time. It’s the same with fraudsters; the hard work of finding the weaknesses and planning the attack takes time and still needs to be done by humans.

Once there’s a plan, GenAI will surely come to the fore to automate the attack. It will generate fake data for the attack, but only if you phrase things correctly (admittedly, as I showed myself, you don’t need complex prompt engineering to make this happen, but it still takes time). And if you need to test 50 times to find the right way to attack, that’s still 50 sets of identity data used up, whether fake or stolen. If social engineering is involved, GenAI will develop a base script or handle translation.

These are all things fraudsters have been doing for years. Generative AI makes them faster, easier, and more accessible to criminals with minimal expertise. But it’s not magic. It’s just the next stage in the same arms race.

Fraud Prevention Keeps Pace

Fraudsters might be adapting to incorporate GenAI in their attacks, but fraud fighters are also upping their GenAI game. 

As the Treasury says in the same report, “Most institutions are now assessing novel AI technologies to enhance core business, customer, and risk management activities. The integration of AI offers the sector increased efficiency, precision, and adaptability, as well as the potential to bolster the resiliency of institutions’ systems, data, and services.”

At Forter, we recently launched AI Insights as part of our data studio so that our users can ask questions about business, customer, payment, and fraud patterns and numbers from their own data, and get instant answers from within an automatically generated custom dashboard. This makes uncovering fraud and abuse patterns far easier. 

For instance, if you’re starting to prepare for the 2024 holiday season and want to reflect on last year, you could ask the model to show you how you did on Black Friday or Cyber Monday in 2023, segmented by payment method or other filters. The model will then generate a dashboard that provides an overview of Black Friday performance, including approvals, declines, and chargeback rates by payment method. The model won’t tell you what to do; instead, it makes it far easier to see the trends and information you need for an actionable analysis. 

It goes without saying that AI has been key to effective fraud detection and prevention for years already. That technology isn’t lagging now that AI has gone generative, certainly not at Forter.

Looking Back, Looking Forward: The Arms Race Continues

It’s been about 18 months since ChatGPT launched and overnight became the main topic of discussion. It’s been long enough for fraudsters and fraud fighters to explore the potential of AI as it is now. I think we can look back at the hype and breathe a bit. 

There is an impact, but the nature of the game hasn’t changed. Some things are faster and easier, but that’s true for both fraudsters and fraud fighters. GenAI isn’t magic. (Yet, anyway.) So far, it helps us take what we have and make it faster, easier, and more automated in some areas. 

It may well be that GenAI speeds things up. For example, the fact that there’s more info-stealing malware might mean more stolen identities are on the market to buy. Using those stolen identities might be easier in terms of speed and barrier to entry. And so on. It’s too early to say how much this impact will spiral and how much it will be contained by the other constraints that require human action, thought, and involvement.

I don’t know what’s coming next, which is why I continue to monitor it so closely and with such fascination. Will AI agents soon allow us to control all our mobile device apps through voice assistants, opening up the options for fraud attacks and automation on mobile that used to be possible only on the web? Will highly tailored and personalized phishing scams become things we’re exposed to every day? Will a new wave of fraudsters enter the field simply because the barrier to entry is now so low? 

I’ll keep sharing what I find. For now, though, from my perspective, GenAI fraud is the same fraud I’ve been fighting for over a decade. It’s just got a slightly better set of wheels. That’s OK, because we do, too.

Doriel Abrahams is the Principal Technologist at Forter, where he monitors emerging trends in the fight against fraudsters, including new fraud rings, attacker MOs, rising technologies, etc. His mission is to provide digital commerce leaders with the latest risk intel so they can adapt and get ahead of what’s to come.
