
AI Adoption: Key Advantages for Treasury

  • By Andrew Deichler
  • Published: 1/21/2020


A key limitation of robotic process automation (RPA) is that it is not actually “intelligent.” RPA does what it is told. Artificial intelligence (AI), in contrast, uses machine learning so that it can essentially think for itself. The software learns, without human intervention, by analyzing data. It can be used to develop new rules, instantly discover exceptions and build forecasts.
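
To make the “build forecasts” capability concrete, the sketch below fits a simple trend to historical daily cash positions and projects it forward. The figures, the ordinary-least-squares model and the variable names are illustrative assumptions, not features of any particular product.

```python
# Minimal sketch: learn a cash-position trend from data and project it.
# All numbers are made up for illustration.
import numpy as np

days = np.arange(10)                          # day index 0..9
cash = np.array([5.0, 5.2, 5.1, 5.4, 5.6,     # daily cash position ($M)
                 5.5, 5.8, 6.0, 5.9, 6.2])

slope, intercept = np.polyfit(days, cash, 1)  # fit a straight-line trend
forecast = slope * 14 + intercept             # project to day 14
print(f"Projected position on day 14: ${forecast:.2f}M")
```

A real forecasting model would use far richer inputs (seasonality, payables and receivables schedules), but the principle is the same: the forecast comes from the data, not from a hand-written rule.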

RPA is like an Excel macro; it is automation that mimics what the user tells it to mimic, noted Bob Stark, vice president of strategy for Kyriba. Treasury management systems (TMS) and other financial platforms don’t typically rely on RPA within their own products, but rather support their customers’ use of robotic process automation to automate interactions with other systems. “Machine learning, on the other hand, learns from the data that it receives within the treasury system, so it has a natural role within a TMS,” he said.

CHALLENGES TO ADOPTION

But although AI has incredible potential to improve many processes for treasury and finance, it has yet to catch on, noted Jason Dobbs, senior manager, and Kyle Olovson, CTP, senior consultant, both with Actualize Consulting. They see this hesitancy as irrational, particularly since humans can only recognize a handful of patterns at a time, while machines can keep detecting patterns indefinitely. However, pattern recognition still depends on human design, which means that AI can contain biases that skew its outcomes.

Dr. Rana el Kaliouby, CEO and co-founder of Affectiva and MindShift keynote speaker at AFP 2019, sees bias as the true “threat” that AI poses. She noted that while many people are apprehensive about the technology itself, what we should really be wary of is AI that is not ethical: systems that perpetuate the biases that exist today. Companies that adopt this technology should therefore take care to guard against bias in the systems they deploy.

In an interview with the AFP Conversations podcast, Dr. el Kaliouby explained that the key to avoiding data and algorithmic bias is diversity. “I think the way we get there is to ensure that the teams that are designing and deploying these AI systems are as diverse and inclusive as possible,” she said. “At Affectiva, at my company, we make sure that we bring in people from different age groups, different genders, different ethnic groups, and different perspectives. We have our machine learning scientists, our data scientists and our computer vision folks around the table, but we also bring artists into the equation or psychologists because I believe they have a very different perspective on how we think about this, and that's really important.”

But for the treasury and finance space specifically, Stark is less concerned about bias affecting AI and ML. Unlike RPA, AI and ML aren’t based on what the user thinks, but on what the artificially intelligent process is able to glean from the data. “I think in a treasury use case, introducing bias is more likely when using RPA because the bot is automating the steps and finding the data that the user thinks is correct,” he said. “In treasury, at least, machine learning programs allow the data to drive the decision, and not you as a programmer. Machine learning can help you overcome your own bias by potentially finding a very different, objective conclusion.”

Stark noted that if, in the initial setup of an AI/ML program, an organization sets certain parameters or thresholds before it has adequate data, then it’s possible that biases could impact the outcomes. “But if you do it right, data should drive your decisions rather than the other way around,” he said.
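
Stark’s point about letting the data set the parameters can be illustrated with a small sketch: instead of hard-coding a payment-size threshold up front, derive it from observed history. The sample amounts and the mean-plus-three-standard-deviations rule below are assumptions chosen for illustration.

```python
# Hedged sketch: derive a flagging threshold from historical data
# rather than presetting it before adequate data exists.
import statistics

def data_driven_threshold(amounts):
    """Cutoff for an 'unusually large payment': mean + 3 standard
    deviations of past payment amounts."""
    return statistics.mean(amounts) + 3 * statistics.stdev(amounts)

history = [1200.0, 980.0, 1430.0, 1100.0, 1250.0, 990.0]  # sample data
print(f"Flag payments above {data_driven_threshold(history):,.2f}")
```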

USE CASE: FIGHTING FRAUD

Financial institutions have gradually been adopting AI and ML solutions to protect customer accounts. AI can help banks and their corporate customers keep track of fraudulent activity and anomalies much faster than ever before, explained David Duan, data science stream lead and principal data scientist at Fraedom. “This works by the model having an understanding of what is ‘normal’ for each account or card and recognizing patterns based on past transactions and behaviors,” he explained. “For example, if 99% of the transactions for one account happen Monday through Friday, a transaction that occurs over the weekend will be seen as abnormal and flagged as such.”
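
A minimal sketch of the per-account pattern Duan describes might look like the following: an account that transacts almost exclusively on weekdays gets its weekend transactions flagged. The function names, history format and 99% threshold are hypothetical.

```python
# Illustrative sketch of the weekday/weekend rule from Duan's example.
from datetime import datetime

def weekday_share(history):
    """Fraction of past transactions that occurred Monday-Friday."""
    return sum(1 for t in history if t.weekday() < 5) / len(history)

def flag_if_abnormal(txn_time, history, threshold=0.99):
    """Flag a weekend transaction when at least `threshold` of the
    account's past transactions happened on weekdays."""
    return txn_time.weekday() >= 5 and weekday_share(history) >= threshold

# Example: an all-weekday account sees a Saturday charge.
history = [datetime(2019, 12, d, 10, 0) for d in range(2, 7)]  # Mon-Fri
print(flag_if_abnormal(datetime(2019, 12, 7, 15, 30), history))  # True
```

A production model would learn many such behavioral features jointly rather than rely on one hand-picked rule; this simply isolates the single pattern from the quote.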

Duan noted that anomalous transactions aren’t always fraud incidents. However, the protection gained by flagging out-of-the-ordinary transactions outweighs the inconvenience of occasionally slowing down a legitimate but unusual one. Last year, Visa helped banks prevent approximately $25 billion in payments fraud using AI, reducing global fraud rates to 0.1%.

Visa Advanced Authorization (VAA) is a risk management tool that monitors and evaluates transaction authorizations on VisaNet in real time, helping banks identify and respond to emerging fraud patterns and trends. Visa processed more than 127 billion transactions between merchants and banks last year, using AI to analyze each of them in about one millisecond, allowing legitimate transactions to proceed while fraud is quickly rooted out.

Of course, there will always be bad actors that try to use innovative technology to their advantage, and AI is no exception. Michael Reitblat, CEO of Forter, noted that criminals are using AI to mimic the way good users behave. “So instead of collecting static data like credit cards and addresses and getting access to physical locations, they are actually recording user behavior,” he said. “They understand that AI is being used to track fraudsters, so they need to fight back with a similar technology.”

Furthermore, the good actors are actually at a disadvantage. Although they have more data at their disposal now than ever before, they don’t have enough “bad” data, Reitblat explained. “If there’s a fraudulent transaction, we can’t collect a million of them just to learn a better model,” he said.
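
One standard way to work around the scarcity of “bad” data Reitblat describes is to reweight the rare fraud class during training. The synthetic data and the use of scikit-learn’s class_weight="balanced" option below illustrate that general technique; they are not Forter’s actual approach.

```python
# Sketch: train on heavily imbalanced labels by upweighting the rare class.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 4))   # stand-in transaction features
y = np.zeros(10_000, dtype=int)
y[:10] = 1                         # only 10 "fraud" examples in 10,000

# "balanced" reweights samples inversely to class frequency, so the few
# fraud cases are not drowned out by legitimate traffic during training.
model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X, y)
print(model.predict_proba(X[:3])[:, 1])  # estimated fraud probabilities
```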

Additionally, using AI to make predictions on fraud isn’t like predicting normal customer behaviors, because fraudsters adapt. They are constantly fighting the efforts of the good actors; the moment they understand that AI is being used to identify and catch them, they change their behavior. Once that happens, the AI model needs to learn from scratch again.

For more insights on AI and machine learning, download Part 2 of the AFP Executive Guide to Emerging Technologies, underwritten by Kyriba.
