Feel safe in the knowledge that you’re covered by KYC checks? Are you sure about that? An arms race of sorts is brewing between financial institutions and cybercriminals to see whose sophisticated measures will win out. Tamas Kadar of SEON looks at how to defeat advanced techniques like voice phishing and deepfakes.
With businesses increasingly using customer identity verification solutions to prevent criminals from impersonating real customers, many fraudsters are fighting back with new ways to trick them.
Consider biometric tools: These use what’s known as “unique identifiers” to determine that a customer is legitimate, and they can pick someone out of a group of similar individuals. This has spurred cybercriminals to create more “deepfakes” to impersonate real customers, cloning a voice to fool voice recognition or fabricating a video to fool facial recognition in video-based verification systems.
This is an example of synthetic identity fraud, carried out here through biometric spoofing. Such threats are becoming more prevalent due to the widespread availability of generative AI, which fraudsters can use to animate a victim’s photo and even convince some video biometric software that the footage is legitimate.
Indeed, a lot of software can be fooled: A Which? survey found that the face recognition on 40% of new phones was easily spoofed with a printed photo. Many companies are countering this risk by combining face recognition with other tools, such as less easily spoofed fingerprint recognition.
Phishing is on the rise — and in more sophisticated ways than before
Generally speaking, phishing is on the rise, and voice phishing in particular has become much more common thanks to voice cloning technology like Resemble AI and ElevenLabs. But even text-based phishing is growing more sophisticated thanks to technological advancements like ChatGPT and other types of generative AI.
AI makes phishing an even more lucrative option for cybercriminals: They can use it both to write especially convincing text and to clone someone’s voice, lending even further credence to the scam. In this scenario, the fraudster targets victims by impersonating someone they know, such as a manager, coworker, IT support worker or family member.
They then trick individuals into providing sensitive information (like passwords or other login credentials) that enables the fraudster to access the victim’s funds. While phishing scams were often less convincing in the past thanks to incoherent text, AI tools are now helping criminals write more sophisticated messages to their victims.
A CBS investigation found that a TikTok video containing someone’s voice is all it takes to clone it. Criminals can then, say, trick a victim into thinking a loved one is in trouble and needs a large sum of money transferred to help them.
Traditional KYC isn’t enough, so what’s the solution?
With the above in mind, what are some new ways that banks can respond to the changing fraud landscape? By fighting fire with fire.
One solution involves combining your KYC process with pre-KYC checks, such as digital footprint analysis, as this enables you to gather more information on a customer based on other publicly available info on them. All you need for this is their email address, phone number or IP address.
For example, by using a reverse phone or email lookup, you can gather additional data points such as:
- Whether their email address or phone number has been involved in a data breach
- Whether it’s been blacklisted
- Whether there are any social media accounts linked to the phone or email address (criminals are far less likely to have an extensive social media trail)
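A minimal sketch of how such pre-KYC signals might be combined is shown below. The breach, blacklist and social-profile lookups are stubbed with in-memory sets purely for illustration; in practice each would be a call to a reverse-lookup or data-enrichment provider, and the scoring weights are hypothetical, not something the article prescribes.

```python
# Hypothetical pre-KYC digital footprint check. All lookup data is stubbed
# in memory; a real deployment would query a reverse email/phone lookup
# service instead (provider API not specified here).

KNOWN_BREACHED = {"jdoe@example.com"}      # addresses seen in breach dumps
BLACKLISTED = {"fraud@example.net"}        # addresses you have blocked before
SOCIAL_PROFILES = {"jdoe@example.com": ["linkedin", "facebook"]}

def footprint_score(email: str) -> dict:
    """Gather simple digital-footprint signals for an email address."""
    profiles = SOCIAL_PROFILES.get(email, [])
    signals = {
        "in_breach": email in KNOWN_BREACHED,
        "blacklisted": email in BLACKLISTED,
        "social_profiles": len(profiles),
    }
    # Illustrative weighting: a blacklisted address is a strong risk
    # signal; an address with no social media trail is a mild one.
    risk = 0
    if signals["blacklisted"]:
        risk += 3
    if signals["social_profiles"] == 0:
        risk += 1
    signals["risk"] = risk
    return signals
```

An established address with linked social accounts would score low here, while an unknown or blacklisted address would be flagged for closer review before full KYC begins.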
Of course, some customers may hide their IP address behind a VPN, but another tool can reveal whether they’re potentially suspicious: device fingerprinting. By identifying the combination of hardware and software configurations a customer is using and assigning it a unique identifier, device fingerprinting can track information such as the user’s true location, whether they’re using the Tor browser (often favored and misused by criminals for its anonymity) and whether you’ve already blocked that device identifier, even if the user was operating under a different user name at the time.
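The core idea can be sketched as hashing a stable, sorted view of a device’s attributes into one identifier that survives a change of user name. The attribute names below are illustrative assumptions; real fingerprinting products combine many more signals (canvas rendering, installed fonts, screen metrics and so on).

```python
import hashlib

def device_fingerprint(attrs: dict) -> str:
    """Hash a canonical, sorted rendering of device attributes.

    Sorting the keys makes the identifier stable regardless of the
    order in which attributes were collected.
    """
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Devices you have previously blocked, keyed by fingerprint.
BLOCKED_DEVICES = set()

attrs = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080",
    "timezone": "Europe/Budapest",
    "languages": "en-GB,hu",
}

fp = device_fingerprint(attrs)
if fp in BLOCKED_DEVICES:
    print("known bad device - step up verification")
```

Because the fingerprint is derived from the device rather than the account, a returning fraudster who signs up under a fresh user name still matches an entry in the block list.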
Going the extra mile to gather additional data points on customers will make you better able to spot criminals who may otherwise bypass your KYC verification checks using phished customer details or sophisticated biometric spoofing techniques.