
But here is the harder question, and it is one that should concern anyone working inside a bank, a fintech, or a payment platform: can consumers still detect fraud based on the signals they have been taught to look for?
Because those signals — awkward language, suspicious links, requests that feel slightly off — are no longer sufficient markers. AI has made fraud increasingly indistinguishable from legitimate contact, and the gap between a real communication and a manufactured one is closing fast.
This Easter, Standard Bank, Absa, Nedbank, and GoTyme all issued fraud warnings. The advice is correct. The fact that it keeps needing to be issued, at increasing frequency, is worth paying attention to.
According to the South African Banking Risk Information Centre's 2024 Annual Crime Statistics, digital banking fraud incidents rose 86% last year — nearly 98,000 cases, with losses approaching R1.9bn.
The primary driver is social engineering: criminals manipulating customers into surrendering credentials rather than breaching banking systems directly.
Fraudsters now use AI-generated phishing emails that are grammatically flawless, contextually accurate, and calibrated to the tone of the institution they are impersonating.
Voice cloning can replicate a bank official convincingly enough to pass a real-time call. Deepfake video is beginning to appear in higher-value scenarios. AI-assisted tools allow syndicates to run more attacks simultaneously, at a fraction of the cost of traditional operations.
Consumer education programmes have always assumed that a sufficiently alert customer is a meaningful line of defence. At scale, that assumption is no longer reliable, and the volume of fraud landing downstream is evidence of it.
Rules-based systems work by identifying known patterns: transaction amounts above a threshold, unusual geographies, mismatched device fingerprints. They are effective against fraud that behaves like fraud.
AI-assisted social engineering is specifically designed not to. When a customer is manipulated into authorising a transfer, the credentials are legitimate, the session is genuine, and the payment instruction is valid.
The transaction clears the rules because it was constructed to. By the time a complaint surfaces the incident, the money has moved.
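The point can be sketched in a few lines of Python. The rules, thresholds, and field names below are invented for illustration, not drawn from any bank's actual system; the sketch shows why a socially engineered transfer passes every static check.

```python
# Illustrative rules-based fraud check (hypothetical thresholds and fields).
AMOUNT_THRESHOLD = 50_000          # flag transfers above this amount
TRUSTED_COUNTRIES = {"ZA"}         # expected transaction geographies

def rules_based_check(txn: dict) -> bool:
    """Return True if the transaction trips any static rule."""
    if txn["amount"] > AMOUNT_THRESHOLD:
        return True                 # amount above known-pattern threshold
    if txn["country"] not in TRUSTED_COUNTRIES:
        return True                 # unusual geography
    if txn["device_id"] != txn["registered_device_id"]:
        return True                 # mismatched device fingerprint
    return False

# A manipulated customer authorises this transfer themselves: own phone,
# in-country, under the threshold. Every rule passes and the payment clears.
scam_txn = {
    "amount": 30_000,
    "country": "ZA",
    "device_id": "dev-123",
    "registered_device_id": "dev-123",
}
print(rules_based_check(scam_txn))  # False: nothing flagged
```

The rules only fire on fraud that behaves like fraud; a transaction constructed to look legitimate sails through.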
Real-time transaction monitoring evaluates behaviour in context — assessing what is happening against what is normal for this customer, at this time, on this device, in this payment corridor.
A transfer that clears the credential layer can still carry a behavioural signature. The amount may be outside the customer's typical range. The destination account may be newly registered.
The session may have been preceded by an unusual sequence of actions. The timing may be inconsistent with how this customer has ever transacted. None of these signals is definitive on its own.
Evaluated together, in real time, before the transaction settles, they can shift the probability calculation enough to trigger a hold, a step-up verification, or a flag for review.
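A minimal sketch of that probability calculation, again with invented weights, thresholds, and field names rather than any production model, might combine the weak signals like this:

```python
# Sketch of scoring weak behavioural signals together (all values illustrative).
def risk_score(txn: dict, profile: dict) -> float:
    """Combine per-signal weights into a single risk score."""
    score = 0.0
    if txn["amount"] > profile["typical_max_amount"]:
        score += 0.3                   # outside the customer's typical range
    if txn["beneficiary_age_days"] < 7:
        score += 0.25                  # newly registered destination account
    if txn["unusual_session_sequence"]:
        score += 0.25                  # odd sequence of actions before payment
    if txn["hour"] not in profile["usual_hours"]:
        score += 0.2                   # timing inconsistent with history
    return score

def decide(score: float) -> str:
    """Map the combined score to an intervention before settlement."""
    if score >= 0.7:
        return "hold"                  # stop the transaction pending contact
    if score >= 0.4:
        return "step-up"               # require extra verification
    if score >= 0.25:
        return "review"                # flag for an analyst
    return "allow"

# Example: a transfer that trips all four signals at once.
profile = {"typical_max_amount": 10_000, "usual_hours": {9, 10, 11, 12}}
txn = {"amount": 25_000, "beneficiary_age_days": 2,
       "unusual_session_sequence": True, "hour": 22}
print(decide(risk_score(txn, profile)))  # hold
```

No single signal here would justify blocking a payment on its own; it is the combination, evaluated before settlement, that crosses the intervention threshold.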
This is the intervention point legacy tools miss: not before the customer has been deceived, but before the fraud completes. It is here that adaptive models, ones that learn from live behaviour rather than static rules, have a meaningful advantage.
The institutions that will contain exposure most effectively are those running monitoring infrastructure that moves as fast as the fraud does, which means acting before the transaction settles.