Description
As fraud enabled by artificial intelligence (AI) becomes increasingly sophisticated and accessible, many legacy lines of defense can no longer effectively protect financial institutions and their customers. Financial institutions need to take a more proactive approach to fighting fraud. By collecting and analyzing real-time data and using AI to identify patterns, FIs can quickly detect suspicious activity and clamp down on fraud.
Karen Postma, Senior Vice President of Risk Solutions at PSCU/Co-op Solutions, has long been a leader in detecting and deterring financial fraud. In a recent PaymentsJournal podcast, she sat down with Jennifer Pitt, Senior Analyst in Javelin Strategy & Research’s Fraud and Security practice, to discuss the nature of the latest attacks against credit unions and their members as well as the scourge of first-party fraud.
The Old Rules Don’t Apply
Consumers have learned that if an email doesn't sound quite right or contains suspicious punctuation or misspellings, it may not be legitimate. However, fraudsters are now leveraging generative AI tools like ChatGPT to create phishing content that reads like a normal, legitimate email.
“We can no longer tell consumers to look for those basic things like spelling errors, grammar errors,” Pitt said. “We need to be better at giving more generic advice to consumers about emails. If you're not intending to get this email, if you don't know the sender, don't answer it. Instead, contact the company directly yourself.”

Another way non-technical individuals use AI is with a tool called WormGPT, which writes code or malware with fraudulent intent.
“I don't have a technical background, but I could leverage these tools to create malware that I could embed in a phishing email or in other content to put keyloggers on a consumer's computer or other device,” Postma said. “That's probably one of the most unnerving components of AI utilization by cybercriminals.”
AI is also being used to target employees at large companies. Several recent data breaches that Postma has seen began as phishing campaigns aimed at high-level employees; once those credentials are compromised, an entire company can be compromised.
AI is being leveraged to trick identity verification and circumvent know-your-customer (KYC) protocols via deepfakes using voice, photo and video. Criminals are also using AI to get around multifactor authentication.
“These scams are looking for anything from passwords to financial payment to one-time passwords to absolutely anything that they can get their hands on,” Postma said. “As soon as fraudsters have convinced the consumer that they are their financial institution, those multifactors become very compromised.”
The Fourth Layer
Postma’s team at PSCU/Co-op Solutions has been talking to credit unions about adding a fourth layer to multifactor authentication: the data aspect. This data validates the transaction, and that final verification step can raise a red flag that a scam may be underway.
This is not data that would typically appear in an authorization message. Instead, it is gathered through online banking, the contact center, and other channels, and it can confirm whether the IP address is one the consumer has used before, whether the consumer has used the device before, and whether the inquiry is coming from overseas or from the geographic location expected for that consumer.
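The data checks described above can be sketched in code. The following is a minimal illustration, not PSCU/Co-op Solutions' actual implementation: the profile fields (`known_ips`, `known_devices`, `home_country`) and the flag names are hypothetical, chosen only to mirror the three signals mentioned: prior IP use, prior device use, and expected geography.

```python
from dataclasses import dataclass

# Hypothetical customer profile assembled from online banking and
# contact-center history; the schema here is illustrative only.
@dataclass
class CustomerProfile:
    known_ips: set
    known_devices: set
    home_country: str

def data_layer_flags(profile: CustomerProfile, ip: str,
                     device_id: str, country: str) -> list:
    """Return red flags raised by the 'fourth layer' data check."""
    flags = []
    if ip not in profile.known_ips:          # IP never seen for this consumer
        flags.append("new_ip")
    if device_id not in profile.known_devices:  # unfamiliar device
        flags.append("new_device")
    if country != profile.home_country:      # outside expected geography
        flags.append("geo_mismatch")
    return flags
```

In a real deployment these signals would feed a risk score alongside the transaction authorization rather than act as hard blocks, so that an unusual but legitimate login (say, a consumer traveling) triggers step-up verification instead of an outright decline.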