The widespread availability of AI tools is being hailed as a massive business enabler. But guess whose business it’s also enabling? Fraudsters. Baptiste Collot, co-founder and CEO of fraud prevention platform Trustpair, explores audio-jacking, the latest tool in the fraud arsenal.
Your CFO just called about an urgent payment to one of your biggest suppliers. They emphasized how essential making this payment on time is to continue a positive business relationship — and asked you to wire the money to the supplier’s bank account as soon as possible. You drop everything and immediately begin the process to finalize the multimillion-dollar transaction.
You find out later that you’ve been audio-jacked, a rising generative AI-driven scheme in which fraudsters hijack a live conversation and use a large language model (LLM) to interpret it and manipulate the audio output without your knowledge. In this case, a sophisticated fraudster intercepted a live conversation between you and your CFO and gave you fake bank account information for your top supplier, leading you to wire money to a fraudster’s bank account.
This isn’t just a problem for CFOs: This type of AI-driven fraud attack can happen between people at any level across the payment chain, throwing the door wide open to fraud.
As attacks like this accelerate in sophistication — and come with a hefty price tag — successfully navigating this new era of risks becomes table stakes for finance and risk leaders.
The AI-driven fraud landscape
Recently, a finance worker at a multinational company sent $26 million to fraudsters who used deepfake technology to pose as the company’s CFO on a video conference call. Using past online conferences where the CFO was speaking to train generative AI, the fraudster was able to digitally recreate a scenario where a deepfake CFO ordered money transfers to fraudulent bank accounts. This is no longer science fiction but a reality that business leaders need to prepare for.
In addition, OpenAI is currently testing a new model called Voice Engine, which uses text input and a single 15-second audio sample to generate natural-sounding speech that closely resembles the original speaker. Once released to the broader public, this model could create even easier access to voice cloning tools used to commit fraud.
In fact, recent research found that 83% of U.S. companies saw an increase in cyber fraud (activities like hacking, deepfakes, highly sophisticated phishing schemes, voice cloning, etc.) attempts on their organization in the past year. Overall, payment fraud attempts on U.S. businesses spiked 71% in 2023 as fraudsters wield more sophisticated attacks.
3 steps to fight back
As fraudsters’ methods change, companies need to advance their fraud prevention tactics as well. Companies can take three immediate steps to help mitigate these risks.
Enable humans with automation
Traditional methods, such as training employees on security principles, remain important when fighting fraud, but humans are often the weakest link when it comes to fraud risks.
Automation technology and humans can work together to instantly validate bank accounts and third-party identities across thousands of vendors and payments. Supply chains are growing increasingly complex, and it’s common for large companies to have tens of thousands of suppliers spread across the globe.
Finance and treasury teams tasked with paying this complex web of vendors are at a huge disadvantage. Fraudsters need to successfully dupe only one person with manipulated and fake payment data to get paid; companies have to be confident they are paying the right person every time.
Automation helps finance teams elevate their payment processes by detecting when a bank account doesn’t match the company being paid, ensuring payments never go to the wrong account.
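As a minimal illustration of the kind of check described above (the registry, vendor names and function are hypothetical, not any particular platform’s API), automated account validation boils down to comparing the account on an outgoing payment against a verified vendor record:

```python
# Hypothetical sketch: validate that a payment's bank account matches the
# verified record for that vendor. In practice the verified registry would be
# an external, independently confirmed data source; here it is a plain dict.

VERIFIED_ACCOUNTS = {
    "Acme Supplies Ltd": "FR7630006000011234567890189",
    "Globex GmbH": "DE89370400440532013000",
}

def validate_payment(vendor_name: str, iban: str) -> bool:
    """Return True only if the account matches the vendor's verified record."""
    return VERIFIED_ACCOUNTS.get(vendor_name) == iban

# A fraudster who swaps in their own account fails the check:
assert validate_payment("Acme Supplies Ltd", "FR7630006000011234567890189")
assert not validate_payment("Acme Supplies Ltd", "GB33BUKB20201555555555")
```

The value of automating this is scale: the same lookup runs on every payment to every vendor, rather than relying on an employee to spot a swapped account number.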
Implement two layers of defense
In this new era of fraud where cybersecurity and prevention are inextricably intertwined, businesses need to adopt two layers of defense. Cyber defenses are the first line, but 2023 was one of the worst years on record for damaging cyber attacks.
Modern fraud prevention technology is the second line of defense. Even if an employee is duped by a deepfake video call, phishing email or audio-jacked phone call, when finance teams go to make a fraudulent payment to a new bank account, effective technology should immediately flag that the bank account and identity of the vendor do not match, stopping fraud in its tracks.
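A payment-time gate of this kind can be sketched as follows (a simplified, hypothetical example; the vendor data and return messages are illustrative only). The key fraud signal it captures is a payment directed to an account that differs from the one on file:

```python
# Hypothetical sketch of a second-line-of-defense check: hold any payment
# where the destination account is new or differs from the account on file.

accounts_on_file = {"Acme Supplies Ltd": "FR7630006000011234567890189"}

def screen_payment(vendor: str, iban: str) -> str:
    """Return 'OK' or a hold reason for a proposed payment."""
    known = accounts_on_file.get(vendor)
    if known is None:
        return "HOLD: new vendor - verify identity and account first"
    if known != iban:
        return "HOLD: bank account changed - re-verify before paying"
    return "OK"
```

Even if the request itself came from a convincing deepfake, the mismatch between the known account and the one supplied by the fraudster triggers the hold.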
Recent data shows that many companies are investing in cybersecurity defenses like training and tech, but only 23% had implemented automation to confirm bank accounts before payments.
The growth of cyber fraud attacks requires investment across both areas. Fraud prevention tech ensures that cyber attacks don’t escalate from digital intrusions to multimillion-dollar, reputation-damaging fraud incidents.
Focus on quick value and implementations
Companies are struggling with IT bandwidth. They don’t have the time or the resources to embark on a multi-year, expensive overhaul of their technology. In fact, 92% of CFOs planned to increase investment in technology this year, yet only 30% of technology projects will succeed. Given that failure rate, many companies decide that manual processes and employee training will suffice in the fight against fraud.
Tech integrations with enterprise resource planning (ERP) platforms or treasury systems can ensure payment data accuracy and weed out the outdated information that leads to payment rejections — without the heavy overhead of a major digital transformation project.