AI now a double-edged sword

While it has unquestionable benefits for banks and financial services institutions in fraud containment, AI is also a high-end tool for fraudsters:

Artificial intelligence (AI) techniques have been in use in banks since the early 2010s for what was then termed anomaly detection – in other words, finding deviations from a norm – in matters relating to fraud, cybersecurity and AML processes. Banks have taken different approaches to using AI against various types of fraud, and they have reaped substantial benefits from AI-based fraud detection solutions because a single model, once developed, can be applied across different channels.

Fraud management is essentially a process to identify, detect, react to and stop the execution of fraud. By analyzing massive volumes of data, AI systems are now capable of spotting irregularities that could point to possible fraud. Several forms of fraud, including phishing, identity theft and payment fraud, can be detected and prevented by AI-powered systems. They can also adjust to changing circumstances and pick up on emerging trends, making them more effective over time. Today’s AI-based solutions can also connect with other security systems – for example, biometric authentication and identity verification tools – and work in concert.
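At its simplest, the anomaly detection described above flags observations that deviate sharply from a customer's norm. A minimal sketch, assuming transaction amounts cluster around a stable mean (a simplification; production systems use far richer features than amount alone):

```python
import statistics

def flag_anomalies(history, new_txns, threshold=3.0):
    """Flag transactions whose amount deviates more than `threshold`
    standard deviations from the customer's historical mean."""
    mean = statistics.mean(history)
    std = statistics.stdev(history)
    return [t for t in new_txns if abs(t - mean) > threshold * std]

history = [42.0, 55.0, 48.0, 60.0, 51.0, 47.0]
print(flag_anomalies(history, [50.0, 49.0, 5000.0]))  # → [5000.0]
```

The `threshold` value is illustrative; real deployments tune such cut-offs per channel to balance missed fraud against false positives.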


One example frequently cited by AI experts of the use of AI in fraud management is Danske Bank, which implemented a fraud detection solution developed by Teradata, a San Diego, California-based global enterprise software company. Teradata helped the bank modernize its fraud detection process, which had been generating some 1,200 false positives per day. According to a case study on the implementation, once the solution was fully deployed, the bank reduced its false positives by 60% – expected to reach 80% as the model continued to learn – increased detection of real fraud by 50%, and refocused its time and resources on actual cases of fraud and on identifying new fraud methods.

Coming to more recent times, fraudulent activities have evolved to unprecedented levels. Globally, one notable instance of innovative fraud has been dividend tax fraud, which has impacted several financial institutions in European countries – cited as a clear illustration of how much fraudulent activity can change over time and how even the best preventive solutions can fail. Also called dividend stripping, it is committed through a complex mechanism of trading, selling and repurchasing shares over a certain period to unlawfully avoid payment of dividend taxes or to claim unjustified tax reimbursements.


In the fraud space, it has recently been demonstrated that one can break into an individual’s bank account using an AI-generated copy of the individual’s voice. Using generative AI tools, a fraudster can obtain information such as account balance and transaction history by faking the account holder’s voice and convincing the bank official concerned that it is the customer who is interacting. Fraudsters have also been using images and videos imitating an individual to obtain his or her login credentials and then hack the account. Another threat is fake account creation, where automated bots can create fake accounts at great speed, giving fraudsters the ability to influence product reviews, distribute false information and spread malware.


Even as newer methods of fraud detection and prevention are evolved and implemented across the world, fraudsters have only become more sophisticated and innovative. For example, global trends show increases in deepfake scams, cybercrime, phishing, ransomware attacks, investment frauds and e-commerce frauds. Some of the newer genres of fraud are:

  • Card-testing fraud, where fraudsters steal card credentials but, before attempting any large-scale fraud, make a series of small transactions to test the card and only then make the hit.
  • Account takeover fraud, a type of identity theft in which fraudsters gain control of someone’s account and use it for illegal activities.
  • Triangulation fraud, which involves 3 key players – a genuine seller, an individual who intends to shop, and a fraudster.
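The card-testing pattern described above lends itself to a simple velocity check: many tiny transactions on one card inside a short window. A hedged sketch – the `small`, `window` and `limit` thresholds are illustrative values, not industry standards:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def card_testing_alerts(txns, small=2.0, window=timedelta(minutes=10), limit=5):
    """Return card IDs with more than `limit` transactions of at most
    `small` units inside any `window`-long interval (a card-testing signal).
    txns: iterable of (card_id, timestamp, amount)."""
    by_card = defaultdict(list)
    for card, ts, amount in txns:
        if amount <= small:
            by_card[card].append(ts)
    alerts = set()
    for card, stamps in by_card.items():
        stamps.sort()
        for i in range(len(stamps)):
            # Count small transactions inside the window starting at stamps[i]
            j = i
            while j < len(stamps) and stamps[j] - stamps[i] <= window:
                j += 1
            if j - i > limit:
                alerts.add(card)
                break
    return alerts

t0 = datetime(2024, 1, 1, 12, 0)
burst = [("A", t0 + timedelta(minutes=m), 1.0) for m in range(6)]
normal = [("B", t0, 1.0), ("B", t0 + timedelta(minutes=1), 1.0)]
print(card_testing_alerts(burst + normal))  # → {'A'}
```

Real systems would combine such velocity signals with device, merchant and geolocation features rather than rely on counts alone.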

AI is, however, a double-edged sword. While it has been unscrupulously used by fraudsters, it has also come to the aid of financial services institutions.

AI has proven that it can make fraud detection faster, more reliable and more efficient where traditional fraud-detection models fail.


For example, AI-powered systems can process huge amounts of data faster and more accurately than many existing software-based systems. Some of these systems help reduce the margin of error in distinguishing normal from fraudulent customer behaviour, authenticate payments faster and provide analysts with actionable insights.

AI-based systems are now routinely used to detect and flag anomalies in real time in banking transactions, app usage, payment methods and similar areas.

Some of the latest AI models are capable of self-learning: they process historical data and continuously adapt to evolving fraud patterns. Using machine learning, predictive models can be developed to mitigate fraud risks with minimal or no human intervention.
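The continuous adaptation described above can be illustrated with a streaming detector that updates its statistics on every new observation – here via Welford's online algorithm, a toy stand-in for the production ML models the article refers to:

```python
class OnlineAnomalyDetector:
    """Streaming detector: maintains a running mean and variance
    (Welford's online algorithm) and flags values far from the mean,
    so the notion of 'normal' adapts as new data arrives."""

    def __init__(self, threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean
        self.threshold = threshold

    def update(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def score(self, x):
        """True if x is an outlier under the current statistics."""
        if self.n < 2:
            return False  # not enough history yet
        std = (self.m2 / (self.n - 1)) ** 0.5
        return std > 0 and abs(x - self.mean) > self.threshold * std

det = OnlineAnomalyDetector()
for amount in [48, 52, 49, 51, 50, 47, 53, 50, 49, 51]:
    det.update(amount)
print(det.score(5000), det.score(51))  # → True False
```

Because the statistics update with each transaction, the detector tracks drift in customer behaviour without retraining from scratch – the same property, in miniature, that makes self-learning fraud models attractive.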

AI offers very effective tools against identity theft, for example. Being familiar with customer behaviour patterns, these tools can detect unusual activity such as changes to passwords and contact details, flag it to the customer and trigger remedial measures. Likewise, in the case of phishing, fraudulent activity can be detected by assessing email subject lines, content and other details and classifying questionable emails as spam, which alerts the user and mitigates fraud risk. Algorithms based on ML can also differentiate between original and fake identities, authenticate signatures and spot forgeries with a high accuracy rate.
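As a toy illustration of the email-content assessment just described, a crude keyword score can be sketched – real filters use trained classifiers over many more signals, and the word list and cutoff here are invented for the example:

```python
import re

# Illustrative word list only; production filters learn such features from data.
SUSPICIOUS = {"verify", "urgent", "password", "suspended", "click", "prize"}

def phishing_score(subject, body):
    """Fraction of the suspicious vocabulary present in the email text."""
    words = set(re.findall(r"[a-z]+", (subject + " " + body).lower()))
    return len(words & SUSPICIOUS) / len(SUSPICIOUS)

def is_suspect(subject, body, cutoff=0.3):
    """Classify an email as questionable when its score passes `cutoff`."""
    return phishing_score(subject, body) >= cutoff

print(is_suspect("Urgent: verify your password",
                 "Click here or your account will be suspended"))  # → True
print(is_suspect("Lunch tomorrow?", "See you at noon"))  # → False
```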


In general, AI uses a group of algorithms that monitor incoming data and stop fraud threats before they materialize. Deployed AI-based systems can learn from historical data and adjust their rules to stop threats that may never have arisen in the past. Being dynamic, these systems can also work continuously to cut down false positives (genuine users being blocked) by improving the accuracy of their rules. And they can perform these activities at great speed without impacting user experience.


Experts identify the major benefits of using AI in fraud detection as:

  • Real-time detection, as AI tools can process incoming data and block new threats dynamically.
  • Improving accuracy as usage grows, as AI tools become better and more effective as more data gets analyzed.
  • Fraud detection in near real time, which helps free up staff involved in preventing frauds.

AI tools in fraud detection have their own limitations, however. Even the most refined AI tools have shown frailties in countering phishing, social engineering and other types of social fraud, mainly because these threats are not automated.


What are the major benefits that AI brings in fraud management domain?

  • Real-time surveillance, whereby fraudulent transactions can be thwarted as they occur.
  • Advanced pattern recognition, whereby complex patterns and correlations can be detected in data that may be impossible for human analysts to spot.
  • Behavioural analysis, where user behaviour is analyzed and profiles are created to detect aberrations from typical patterns.
  • Adaptive learning, as AI systems continuously gain knowledge from fresh data, adjusting to changing fraud techniques.
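The behavioural-analysis benefit above can be sketched as a per-customer profile plus a deviation check. The profile fields and the 3-sigma threshold are illustrative assumptions, not a real vendor's schema:

```python
from statistics import mean, stdev

def build_profile(txns):
    """txns: list of (hour_of_day, amount). Returns a simple profile of
    the customer's typical transaction hours and amount distribution."""
    hours = [h for h, _ in txns]
    amounts = [a for _, a in txns]
    return {"hours": set(hours), "amt_mean": mean(amounts), "amt_std": stdev(amounts)}

def deviation_flags(profile, txn):
    """Return the ways a new transaction departs from the profile."""
    hour, amount = txn
    flags = []
    if hour not in profile["hours"]:
        flags.append("unusual_hour")
    if abs(amount - profile["amt_mean"]) > 3 * profile["amt_std"]:
        flags.append("unusual_amount")
    return flags

profile = build_profile([(9, 40.0), (10, 55.0), (12, 48.0),
                         (14, 60.0), (18, 52.0), (11, 45.0)])
print(deviation_flags(profile, (3, 900.0)))  # → ['unusual_hour', 'unusual_amount']
```

A transaction that trips several flags at once would be scored higher risk and routed for step-up authentication or review.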


Major AI tools that are being developed and deployed, or are almost ready for deployment, in the fraud management domain are:

1. Fingerprinting tools for creating thorough user profiles, with extensive data enrichment based on email addresses, IP addresses or phone numbers, and the ability to examine social and digital profiles.

2. Digital security tools that integrate separate API tools into a unified, complete solution and verify users’ identities, thereby preventing unauthorized access to accounts. These are essentially tools against account takeover fraud.

3. Tools for payments optimization, revenue protection and abuse protection.

4. Tools that can retrieve email information from databases, including those of service providers, social networking sites and webmail hosts. These tools can also examine and enhance information derived from phone numbers and IP addresses.

5. Tools that can extract actionable information from various sources to counter fraud risks in transactions, account opening, KYC etc. These tools are intended to protect an organization from fraud schemes that use automated scripts, such as account takeover, credential keying, card fraud and fake account creation.


AI has developed and assumed an extraordinary role today. Generative AI tools like ChatGPT, Claude and Midjourney are now available to anyone with an internet connection. While these tools are useful across a wide range of use cases, they have also been exploited by fraudsters.

Generative AI has become a major tool for fraudsters, enabling them to create scores of fake identity documents, contracts, invoices and other essential scam materials in a matter of minutes. They can therefore scale up fraud schemes and target thousands of victims. This is mainly because generative AI has made it possible for anyone to create text, images and audio that resemble human creations. Generative AI tools help fraudsters create varied, tailored content for each target, and most of these tools can be used anonymously.

And there is no effective regulatory framework to counter this malady.

Many experts in financial crime have warned that generative AI is one of the biggest threats facing the financial services industry. Some of them have compared it to the Wild West and the gunslingers, bank robberies and bounties.

Nevertheless, many of them have not lost hope. They say generative AI can also be tapped to create tools for fraud detection and control. Since much of financial services involves text and numbers, generative AI and Large Language Models (LLMs), capable of learning meaning and context, can be harnessed to develop more intelligent and capable chatbots and improve fraud detection. Those in banks handling manual fraud reviews can make use of LLM-based assistants to tap into information from policy documents that can help expedite decision-making on whether cases are fraudulent. LLMs are also helpful in predicting the next transaction of a customer, which can help banks pre-emptively assess risks and block fraudulent transactions.
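The next-transaction prediction mentioned above can be illustrated, in miniature, with a first-order Markov model over merchant categories – a toy stand-in for what an LLM would do with far richer context, and the category names are invented for the example:

```python
from collections import defaultdict, Counter

def train_markov(sequences):
    """Estimate P(next category | current category) from historical
    per-customer sequences of merchant categories."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    return counts

def likelihood(model, cur, nxt):
    """How plausible a `cur` → `nxt` transition is under the model;
    improbable transitions can be routed to fraud review."""
    total = sum(model[cur].values())
    return model[cur][nxt] / total if total else 0.0

history = [["grocery", "fuel", "grocery", "fuel"],
           ["grocery", "fuel", "restaurant"]]
model = train_markov(history)
print(likelihood(model, "grocery", "fuel"))     # → 1.0
print(likelihood(model, "grocery", "jewelry"))  # → 0.0
```

An incoming transaction whose category is highly unlikely given the previous one earns extra scrutiny, which is the same pre-emptive risk assessment the article attributes to LLM-based prediction.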

It is now more than ever necessary for financial services institutions to develop more advanced models using more data to secure themselves. They must also aim at reducing false positives in fraud detection for transactions to improve customer satisfaction.

[email protected]


Copyright © Glocal Infomart Pvt Ltd. All rights reserved. Usage of content from website is subject to Terms and Conditions.