Why Fraud Doesn’t Look Like Fraud to a Data Scientist
Fraud rarely announces itself.
For most organizations, it shows up after the fact as a loss, a chargeback or a rule that fired too late. To a data scientist, fraud looks different.
It appears as a deviation: a subtle shift in behavior that only becomes clear when transactions are viewed not as isolated events, but as part of a living, constantly adapting system.
That difference in perspective is becoming critical as fraudsters increasingly use artificial intelligence to move faster, test defenses at scale and exploit rigid controls. Broad rules and aggressive shutdowns may stop some attacks, but they also blur the signals that distinguish criminals from legitimate customers.
That reality surfaced quickly in a conversation between PYMNTS CEO Karen Webster and Visa’s John Munn, senior vice president and head of predictive fraud intelligence. Webster described receiving yet another letter informing her that her personal data had been exposed in a breach and was now circulating on the dark web. The notice was unsettling, but hardly surprising after years of high-profile compromises.
For Munn, the episode underscored a broader truth about fraud today. The advantage for criminals increasingly lies not in stealing data, but in exploiting it efficiently and at scale. Fraud is not an occasional threat but a permanent and profitable criminal business, one that continuously adapts to whatever defenses are put in place.
Munn likened the business of fraud prevention to squeezing a balloon. Pressure applied in one place simply causes activity to surface somewhere else. That dynamic is why fraud never truly goes away. It just changes shape.
“I think that’s always going to be the story with fraud,” Munn said. “It’s such a big and booming criminal business.”
The Problem With Treating Fraud Like an Incident
Historically, fraud teams were measured on how much fraud they stopped. That often led to rigid rules that shut down entire categories of transactions deemed risky. Those controls worked in a narrow sense, but they also blocked large volumes of legitimate activity.
Munn warned that rules that broadly block suspicious behavior may prevent certain attacks, but they often stop more good transactions than bad ones. Each false decline creates friction that customers feel immediately, even if institutions rarely see the cumulative impact.
These declines may seem minor compared to fraud losses, but they add up. Consumers switch cards, abandon purchases or disengage from merchants that make paying feel difficult. Over time, overly aggressive controls erode trust and revenue just as surely as fraud itself.
Modern fraud prevention must be precise enough to distinguish legitimate behavior from criminal activity in real time, Munn said.
Fraud as Misclassification
From a data science perspective, the real risk is not just fraud, but misclassification, Munn said.
When legitimate behavior is mistaken for criminal activity, systems lose valuable signal. Fraudsters expect overreaction. They probe systems knowing that broad controls will shut everything down, creating noise that hides their activity. Precision changes that dynamic by allowing normal behavior to pass through while anomalies become easier to spot.
“When you create false positives, it’s annoying,” Munn said. “It’s a bad customer experience. It slows us down in our life.”
Reducing those false positives requires understanding behavior deeply enough to know what normal looks like and how it changes.
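The false-positive tradeoff Munn describes can be made concrete with a small sketch. The scores, labels and thresholds below are entirely invented for illustration; they are not Visa's models or data, only a schematic of how a blunt decision threshold declines legitimate customers while a more precise one does not.

```python
# Hypothetical illustration: how a risk-score threshold trades off
# fraud caught against false declines of legitimate transactions.
# All numbers here are invented for illustration.

def classify(scores, threshold):
    """Flag any transaction whose risk score exceeds the threshold."""
    return [s > threshold for s in scores]

def false_positive_rate(flags, labels):
    """Share of legitimate (label == 0) transactions that were flagged."""
    legit_flags = [f for f, y in zip(flags, labels) if y == 0]
    return sum(legit_flags) / len(legit_flags)

# Risk scores from some model; label 1 = actual fraud, 0 = legitimate.
scores = [0.10, 0.15, 0.55, 0.70, 0.92, 0.95]
labels = [0,    0,    0,    0,    1,    1]

# A blunt rule (low threshold) catches both frauds but also declines
# half the legitimate customers; a precise one declines none of them
# while still flagging both frauds.
blunt = false_positive_rate(classify(scores, 0.5), labels)    # 0.5
precise = false_positive_rate(classify(scores, 0.9), labels)  # 0.0
```

In this toy setup both thresholds stop the fraud; only the calibrated one preserves the "signal" Munn refers to, because legitimate behavior keeps flowing through the system unflagged.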
Why Fraud Starts Before the Transaction
Another way data scientists see fraud differently is in timing. Fraud does not begin at authorization.
Enumeration attacks harvest emails, phone numbers and identifiers before a purchase is attempted. Token provisioning introduces risk when a card is added to a digital wallet. Account logins and authentication events create additional opportunities for attackers to test defenses.
“There are a bunch of things that happen well before a transaction occurs that are moments of vulnerability,” Munn said.
By identifying and addressing risk earlier in the lifecycle, organizations can reduce downstream fraud and approve more legitimate transactions when customers are ready to pay.
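One common pre-transaction defense against the enumeration pattern described above is rate monitoring: counting identifier-testing attempts per source before any authorization happens. The sketch below is a minimal, hypothetical version; the class name, thresholds and window are assumptions, not a description of any real system.

```python
# Hypothetical sketch of pre-transaction risk detection: flagging a
# source that tests identifiers too quickly (an enumeration pattern)
# before a purchase is ever attempted. Thresholds are invented.

from collections import defaultdict

class EnumerationMonitor:
    def __init__(self, max_attempts=5, window_seconds=60):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.attempts = defaultdict(list)  # source -> attempt timestamps

    def record(self, source, timestamp):
        """Log an identifier-test attempt; return True when the source
        exceeds the allowed rate and should be challenged or blocked."""
        recent = [t for t in self.attempts[source]
                  if timestamp - t < self.window]
        recent.append(timestamp)
        self.attempts[source] = recent
        return len(recent) > self.max_attempts

monitor = EnumerationMonitor()
# A few normal attempts pass; a burst of six in a minute gets flagged.
flags = [monitor.record("ip-203.0.113.7", t) for t in range(6)]
# flags -> [False, False, False, False, False, True]
```

Catching the probe at this stage means the stolen identifiers never reach authorization, which is the point Munn makes about moving defenses earlier in the lifecycle.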
Why Models Learn Faster Than Rules
Traditional machine learning models relied heavily on manually engineered features. Those models worked, but they were slow to adapt and struggled to keep up with rapidly changing fraud patterns.
Newer deep learning architectures allow models to ingest raw, unstructured data and analyze long histories of behavior. Instead of relying on a fixed set of rules or summaries, models can identify subtle changes automatically.
“With a deep learning model, I might simply expose the model to all of that data as far back as I can go, and the model will algorithmically go in and find those subtle changes,” Munn said.
That ability improves predictive accuracy while reducing false positives, a combination that legacy systems find difficult to achieve.
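The difference between the two approaches can be sketched schematically. In the hand-engineered pipeline, the model only ever sees a few chosen summaries; a sequence model receives the raw event stream and can find patterns the summaries discard. Everything below is an invented, simplified analogue, not Visa's architecture, and the deep model itself is elided.

```python
# Schematic contrast: hand-engineered features vs. raw history.
# Data, field names and the card-testing pattern are invented.

def engineered_features(history):
    """Manually chosen summaries: a traditional model sees only these."""
    amounts = [t["amount"] for t in history]
    return {
        "txn_count": len(amounts),
        "avg_amount": sum(amounts) / len(amounts),
        "max_amount": max(amounts),
    }

def raw_sequence(history, max_len=512):
    """A deep model instead consumes the (truncated) raw event stream,
    free to learn patterns the summaries hide, such as a sudden run
    of small card-testing charges."""
    return history[-max_len:]

history = [{"amount": 80.0, "merchant": "grocer"},
           {"amount": 1.0, "merchant": "web-shop"},
           {"amount": 1.0, "merchant": "web-shop"},
           {"amount": 1.0, "merchant": "web-shop"}]

feats = engineered_features(history)
# The average (20.75) smooths away the burst of $1 test charges...
seq = raw_sequence(history)
# ...while the raw sequence preserves it for the model to learn from.
```

The averaged feature makes the account look unremarkable; the subtle change Munn describes is only visible in the full sequence, which is why exposing models to long raw histories matters.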
Letting More Good Transactions Through
The goal of this approach is not just to stop fraud, but to confidently approve legitimate commerce.
According to Munn, Visa’s deep learning models deliver authorization rates that are 15% to 20% higher than previous generations. Those gains reflect better modeling techniques, improved latency and the use of multiple models together to strengthen decisions without slowing transactions.
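Using multiple models together can be pictured, in the simplest possible form, as combining independent risk scores into one decision. The model names, scores and weights below are invented, and real systems would score in parallel to avoid adding latency; this is only a sketch of the general idea.

```python
# Hypothetical sketch of multi-model decisioning: a weighted blend of
# independent risk scores. Names, scores and weights are invented.

def ensemble_score(scores, weights):
    """Weighted average of per-model risk scores."""
    total = sum(weights.values())
    return sum(scores[name] * w for name, w in weights.items()) / total

scores = {"behavior_model": 0.20,
          "enumeration_model": 0.90,
          "device_model": 0.40}
weights = {"behavior_model": 0.5,
           "enumeration_model": 0.3,
           "device_model": 0.2}

risk = ensemble_score(scores, weights)  # ~0.45
```

Here a strong signal from one specialized model raises the blended score even when the others see nothing unusual, which is one way several models can strengthen a single decision.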
A key advantage is scale. Visa has one of the widest payments datasets in the world, which in turn gives Visa insight into attacks seen across the globe. That data also enables models to be trained, tested and updated continuously. Continuous updates are critical because fraudsters look for gaps wherever defenses fail to adapt.
“If you’re not adjusting and taking advantage of these best-in-class tools, fraudsters find the gaps in the system,” Munn said.
Seeing the System, Not Just the Threat
To data scientists, fraud is not a problem to be solved once. It is a system to be observed, understood and continuously refined.
Criminals are already using AI to operate faster and more efficiently. Defenders have little choice but to respond in kind. But the advantage does not come from locking everything down. It comes from seeing clearly enough to know when not to.
As Munn put it, fighting fraud today requires precision, not sledgehammers.
The post Why Fraud Doesn’t Look Like Fraud to a Data Scientist appeared first on PYMNTS.com.