Will generative AI kill KYC authentication?

For decades, the financial sector and other industries have relied on an authentication mechanism dubbed “know your customer” (KYC), a process that confirms a person’s identity when an account is opened and then periodically reconfirms that identity over time. KYC typically involves a prospective customer providing a variety of documents to prove that they are who they claim to be, although the same process can also be applied to authenticating other people, such as employees. With generative artificial intelligence (AI) tools built on large language models (LLMs) now able to create highly persuasive document replicas, many security executives are rethinking how KYC should look in a generative AI world.

How generative AI uses LLMs to enable KYC fraud

Consider someone walking into a bank in Florida to open an account. The prospective customer says that they just moved from Utah and that they are a citizen of Portugal. They present a Utah driver’s license, bills from two Utah utility companies, and a Portuguese passport. The problem goes beyond the likelihood that the bank staffer does not know what a Utah driver’s license or a Portuguese passport looks like: the AI-generated replicas are going to look exactly like the real thing. The only way to authenticate them is to check with the issuing authorities in Utah and Portugal, whether by connecting to their databases or by making a phone call, to verify not only that these documents exist in the official systems but also that the image in those systems matches the photo on the documents being examined.
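
To make that check concrete, here is a minimal sketch of the verification flow described above, assuming a hypothetical issuing-authority lookup client and a generic face-comparison function. The names, lookup method, and matching threshold are illustrative stand-ins, not any real registry API.

```python
# Hypothetical sketch: confirm each presented document against its issuing
# authority's records, then compare the photo on file with the photo on the
# presented document. registry_client.lookup() and compare_faces() are
# invented placeholders for whatever integration a bank actually has.

from dataclasses import dataclass

@dataclass
class Document:
    doc_type: str          # e.g. "utah_drivers_license", "portuguese_passport"
    doc_number: str
    holder_name: str
    photo_bytes: bytes     # image scanned from the presented document

def verify_document(doc: Document, registry_client, compare_faces) -> bool:
    """Return True only if the document exists in the issuer's records
    and the photo on file matches the photo being presented."""
    record = registry_client.lookup(doc.doc_type, doc.doc_number)
    if record is None:
        return False                      # no such document was ever issued
    if record.holder_name != doc.holder_name:
        return False                      # issued to someone else
    similarity = compare_faces(record.photo_bytes, doc.photo_bytes)
    return similarity >= 0.90             # threshold is an arbitrary example

def verify_applicant(documents, registry_clients, compare_faces) -> bool:
    # Every presented document must pass; a single unverifiable item fails KYC.
    return all(
        verify_document(doc, registry_clients[doc.doc_type], compare_faces)
        for doc in documents
    )
```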

An even bigger security threat is generative AI’s ability to create bogus documents quickly and at massive scale. Cyber thieves love scale and efficiency. “This is what is coming: Unlimited fake account setup attempts and account recovery attempts,” says Kevin Alan Tussy, CEO at FaceTec, a vendor of 3D face liveness and matching software.

AI-generated fake personal histories could validate AI-generated fake KYC documents

Lee Mallon, the chief technology officer at AI vendor Humanity.run, sees an LLM cybersecurity threat that goes well beyond quickly producing false documents. He worries that thieves could use LLMs to create deep backstories for their frauds in case someone at a bank or a government agency reviews social media posts and websites to see whether a person truly exists.

“Could social media platforms be getting seeded right now with AI-generated life histories and images, laying the groundwork for elaborate KYC frauds years down the line? A fraudster could feasibly build a ‘credible’ online history, complete with realistic photos and life events, to bypass traditional KYC checks. The data, though artificially generated, would seem perfectly plausible to anyone conducting a cursory social media background check,” Mallon says. “This isn’t a scheme that requires a quick payoff. By slowly drip-feeding artificial data onto social media platforms over a period of years, a fraudster could create a persona that withstands even the most thorough scrutiny. By the time they decide to use this fabricated identity for financial gains, tracking the origins of the fraud becomes an immensely complex task.”

KYC security tools, processes need to adapt

Alexandre Cagnoni, director of authentication at WatchGuard Technologies, agrees that the KYC security threats from LLMs are frightening. “I do believe that KYC techniques will need to incorporate more sophisticated identity verification processes that will for certain require AI-based validations, using deepfake detection systems. The same way MFA and then transaction signing became a requirement for financial institutions in the 2000s because of the new MitB [man-in-the-browser] attacks, now they will have to deal with the growth of those fake identities,” he says. “It’s going to be a challenge because there are not a lot of (good) deepfake detection technologies around and it will have to be quite good to avoid time-consuming tasks, false positives or the creation of more friction and frustration for users.”

Cagnoni says that decent tools for catching deepfake videos are available, but even the best ones still struggle. He points to recent testing in which systems that correctly identified fake videos as fake also flagged valid videos as likely fake.

Rex Booth, CISO at identity management firm SailPoint Technologies, differs from others in that he sees the KYC LLM problem as serious but not immediately critical. “I don’t think the KYC script needs to be completely rewritten, but it does need to be built upon, strengthened, and augmented. We do not fully use the potential of some of the authentication measures that we have available today,” he says. “Granted, the tools that we have today are insufficient, but they are nonetheless the tools that we have.”

A handful of current authentication mechanisms–including biometrics with liveness testing and behavioral analytics drawing on as many data points as possible–are often mentioned as possible ways to combat LLM-generated identity fraud, but most are strongest at verifying whether an existing customer is indeed the one making a request. They are far less effective for onboarding because there is often no data about a new prospective customer. Behavioral analytics, for example, only work if a history of that user’s behavior is available for comparison. Biometrics only work with a highly reliable record of what that person truly looks like.
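
As a rough illustration of why those mechanisms favor existing customers, the toy scoring function below only counts biometric and behavioral signals when reference data is already on file. The weights, thresholds, and signal names are invented for this sketch, not drawn from any vendor’s product.

```python
# Illustrative only: a toy verification score showing why liveness checks and
# behavioral analytics contribute little at onboarding, when there is no
# stored reference data to compare against. Weights and thresholds are
# arbitrary examples, not a recommended model. Lower scores mean less
# corroboration, i.e. more risk.

def kyc_verification_score(liveness_score: float | None,
                           enrolled_face_match: float | None,
                           behavior_history_samples: int,
                           behavior_match: float | None) -> float:
    score = 0.0
    if liveness_score is not None:
        score += 0.3 * liveness_score          # is a live person present?
    if enrolled_face_match is not None:
        score += 0.4 * enrolled_face_match     # needs a trusted enrolled photo
    if behavior_history_samples >= 20 and behavior_match is not None:
        score += 0.3 * behavior_match          # needs prior behavior on file
    return score

# Existing customer: all reference data is available.
print(kyc_verification_score(0.95, 0.92, 150, 0.88))   # ~0.92

# New applicant: liveness is the only usable signal.
print(kyc_verification_score(0.95, None, 0, None))     # ~0.29
```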

Some behavioral analytics can operate standalone and do not always have to draw on a customer’s history, says Linda Miller, CEO of the Audient Group, a Washington, DC-based consulting firm. “Are they typing in their Social Security number as though they are copying it rather than typing it from memory? Have you checked the applicant’s name against a database of recent data-breach victims?” she says. “You are not going to solve this problem with a tool. The KYC strategy has to be multi-layered, and it has to be risk-based.”
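
A minimal sketch of the kind of standalone checks Miller describes might look like the following. The timing threshold, function names, and breach-index lookup are hypothetical stand-ins, not an actual detection product.

```python
# Hypothetical heuristics for two of the standalone checks mentioned above:
# (1) does the applicant appear to paste or transcribe their SSN rather than
# type it from memory, and (2) does the applicant appear in known data-breach
# victim records? All thresholds and data structures are illustrative.

def looks_pasted_or_transcribed(key_event_times_ms: list[float]) -> bool:
    """A pasted value arrives as a single event (one or zero keystroke
    timestamps); a value copied character-by-character from another document
    tends to show long, irregular pauses between keystrokes."""
    if len(key_event_times_ms) <= 1:
        return True                                   # whole value arrived at once
    gaps = [b - a for a, b in zip(key_event_times_ms, key_event_times_ms[1:])]
    long_pauses = sum(1 for g in gaps if g > 1500)    # >1.5s between keystrokes
    return long_pauses >= 2

def applicant_in_breach_corpus(name: str, ssn_last4: str, breach_index: set) -> bool:
    # breach_index stands in for a lookup against known data-breach victim
    # records; how such a corpus is sourced is out of scope here.
    return (name.strip().lower(), ssn_last4) in breach_index
```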

Sharing identity details one hedge against AI- and LLM-based fraud

One possible way to do that would be to share far more identity details among financial entities. Cagnoni argues that companies are hesitant to share such details, however. Beyond competitive concerns, there is the poorly defined area of global compliance and whether such data sharing would violate regional compliance rules.

“Consider the privacy regulations here in Europe. Could I share any behavioral information? They would have to share some identifying information to communicate,” Cagnoni says. “What if I have a limp in my right leg? Could sharing that violate any type of regulations? Most of the banks don’t want to share this information about customers. I don’t see banks sharing behavioral information.”

Booth, however, doubts that the compliance implications of sharing such authentication data among companies would be a problem, because just about all the affected compliance rules focus on personally identifiable data. “Behavioral analytics in terms of how I move my mouse is an individual-level authentication. That is not data.”

As for trying to access government databases to verify identities or documents, Miller says she is highly skeptical. “Government and data are two words that go horribly together. Government agencies are using very old data through really antiquated systems. Some 25 states today are still on COBOL.”

AI-enabled fake employees another KYC fraud risk

Another problem Cagnoni flags is fake employees, a scheme in which a thief targets companies that allow fully remote work. “How are individuals vetted? In situations where you wouldn’t have to turn up at the office, you could be maintaining 100 jobs. We don’t currently have a silver bullet for this.”

That scheme would take advantage of how many enterprises operate. The first several weeks of a new job are consumed by paperwork, training, and other tasks that don’t necessarily produce much that is tangible. Cagnoni suggests that an effective thief could collect payroll for many weeks, potentially months, before being discovered, assuming they are ever discovered.

Business models don’t incentivize KYC detection

Part of the problem, authentication experts stressed, is that many business models today do not meaningfully allow for the time, effort, and investment needed for extensive identity verification. “The authentication systems we have today are a reflection of the incentives we put in place. The incentives right now are to encourage workers to move fast and reduce friction and accept a certain amount of fraud,” Booth says. “As it stands right now, the incentives don’t work for meaningful identity verification.”

Miller agrees with Booth that the structures businesses use for bonuses and other incentives discourage workers from trying to identify fraudulent behavior. “There is a human element to it. Businesses put in place perverse incentives when it comes to a diligent KYC process,” she says, adding that management doesn’t want to slow down processes “when people are waiting to close a transaction with a customer.”

Instead, Miller, who had been a principal at Grant Thornton, the audit and assurance firm, suggests that CISOs identify their top financial issues “and tailor your KYC to those highest risk areas. But remember that in almost every arena, generative AI will accelerate fraud. Social engineering and phishing are about to become an order of magnitude more effective [because of generative AI].”
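One way to read that advice in practice is as a risk-tiered configuration, where the flows a CISO judges highest risk get the most verification layers. The tiers, product examples, and check names below are invented for illustration and are not drawn from Miller’s recommendations.

```python
# Sketch of a risk-based, multi-layered KYC policy: map the products or flows
# judged highest-risk to stricter verification steps. Tier names, example
# flows, required checks, and amounts are all hypothetical.

KYC_REQUIREMENTS = {
    "high_risk": {          # e.g. wire transfers, fully remote account opening
        "checks": ["document_registry_lookup", "liveness_check",
                   "behavioral_signals", "manual_review"],
        "max_auto_approve_amount": 0,
    },
    "medium_risk": {
        "checks": ["document_registry_lookup", "liveness_check"],
        "max_auto_approve_amount": 5_000,
    },
    "low_risk": {
        "checks": ["document_registry_lookup"],
        "max_auto_approve_amount": 25_000,
    },
}

def required_checks(product_risk_tier: str) -> list[str]:
    return KYC_REQUIREMENTS[product_risk_tier]["checks"]
```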