AI in Financial Services

By: Guest

14 June 2019

Categories:

Artificial Intelligence - Consumer - Funding - Machine Learning - Virtual Assistant


N. R. Srinivasa Raghavan, PhD, Chief of Data Science, Infosys

The Backdrop

In a financial services market where disruption now seems as common as regulation, AI has shown that it can execute advanced computations in smarter, faster and more efficient ways, with process automation and data analysis as its core intelligent automation functions. The real question is how corporate leaders should leverage AI to achieve their goals. For example, financial services decision makers must answer a fundamental question: how can AI actually help them make better decisions and save money?

In recent times, companies are increasingly delegating repetitive tasks to machines through robotic process automation (RPA) and predictive analytics. Computers are automatically processing incoming invoices to improve efficiency in departments burdened by mountains of paperwork and escalating deadlines.

AI gaining a foothold

Enterprise-scale AI applications include fraud detection, robo-advisory for wealth management, predictive maintenance of assets, real-time intrusion detection and surveillance, product recommendation and user experience personalization. A large and growing share of banks are also using predictive analysis and voice recognition to interact with customers, and even with their own employees.

There are many fintech startups that use disruptive technologies, including AI, to innovate and create new services. More recently, even traditional banks have been investing in these technologies to solve business challenges such as compliance (sanctions and fraud screening), operational efficiency, customer experience, risk mitigation, needs analysis and product matching. An Infosys survey of corporate leaders across the globe showed that 80 percent of global CEOs believe AI applications will drive at least 30 percent of their new revenue and cost-savings initiatives.

Even though at an early stage of maturity, AI has become a business necessity rather than a technology toy.

AI and the skills gap

Automation and AI will enable professionals to devote more time to product development and customer service, creating value in ways only humans can. AI even promises to free financial services analysts from their stereotypical 100-plus-hour weeks, although that is still some way off. Even at this pace, we are still at least 15 to 20 years away from AI sitting at the heart of a major corporation and making decisions now reserved for senior executives.

Deploying Enterprise AI Applications

Before a large organization can seriously deploy AI, it must deal with the problem of data dispersed among multiple silos and systems. AI is a holistic technology that works best when it gathers data from across an organization. This reorganizing of data and systems is a serious effort and a significant hurdle to AI implementation.

Each AI solution, however, is specific to the problem it addresses or the opportunity it exploits. As expectations shift or the context changes, an existing model's performance deteriorates. And if a model uses highly specialized techniques such as deep learning, reuse is even more difficult, because the solution is tightly coupled to a specific type of data.
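To make this deterioration concrete, the sketch below is a minimal, hypothetical Python monitor (not any particular vendor's tooling) that tracks a deployed model's rolling accuracy and flags when recent performance falls below an acceptable level, which is one common way teams notice that the context has shifted under a model:

```python
# Hypothetical sketch: flag "drift" when a model's recent accuracy
# drops below a threshold. Window size and threshold are invented
# for illustration.
from collections import deque


class DriftMonitor:
    """Tracks rolling accuracy over the last `window` predictions."""

    def __init__(self, window: int = 100, threshold: float = 0.9):
        self.window = deque(maxlen=window)   # keeps only recent outcomes
        self.threshold = threshold

    def record(self, predicted, actual) -> None:
        # Store whether this prediction matched reality.
        self.window.append(predicted == actual)

    def drifting(self) -> bool:
        # No data yet means nothing to flag.
        if not self.window:
            return False
        accuracy = sum(self.window) / len(self.window)
        return accuracy < self.threshold


monitor = DriftMonitor(window=4, threshold=0.75)
for predicted, actual in [(1, 1), (0, 0), (1, 0), (0, 1)]:
    monitor.record(predicted, actual)
print(monitor.drifting())  # prints True: rolling accuracy 0.5 < 0.75
```

In practice the "accuracy" signal would come from delayed ground truth (chargebacks, defaults, customer complaints), but the principle is the same: a model that was once fit for purpose must be continuously re-measured against the current context.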

Organizations that successfully use AI will be ones that drag it out of the lab. When successful, the technology empowers users across every department to solve their day-to-day problems. AI will also require a change in the traditional chief information officer (CIO) role. Historically their responsibilities were overwhelmingly technical, but AI will force them to more formally straddle the boundary between business and IT. CIOs must become more influential voices in the boardroom. They have a unique opportunity to be the voice that bridges the gap between business and technology.

Risk Management and Ethical Aspects of AI

Financial services organizations face risks from every direction, internal and external, accidental and criminal. Even small typos can have monumental consequences. Last year, a Samsung Securities employee accidentally wiped out 12 percent of the company's stock price. About 2,000 employees were due a dividend worth roughly $2 billion in total, just under a dollar for each share owned. Instead of the cash, the administrator accidentally issued 2 billion shares.

Advanced analytics and machine learning can calculate and assign a fraud score to transactions within milliseconds, highlighting fraudulent purchases or approving real ones without human intervention – or detrimental impact on customer experience.
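As a toy illustration of such scoring, the Python sketch below computes a fraud score from a weighted handful of invented transaction features and maps it to approve/review/block decisions. The features, weights and thresholds are assumptions made up for this example; a production system would use a trained model, not hand-set rules:

```python
# Hypothetical sketch: score a transaction and decide automatically,
# so only borderline cases need human review. All features, weights
# and thresholds are invented for illustration.
def fraud_score(txn: dict) -> float:
    """Return a score in [0, 1]; higher means more likely fraudulent."""
    score = 0.0
    if txn["amount"] > 5000:                    # unusually large amount
        score += 0.4
    if txn["country"] != txn["home_country"]:   # cross-border purchase
        score += 0.3
    if txn["hour"] < 6:                         # overnight activity
        score += 0.2
    if txn["new_merchant"]:                     # never-seen merchant
        score += 0.1
    return min(score, 1.0)


def decide(txn: dict, block_at: float = 0.7, review_at: float = 0.4) -> str:
    score = fraud_score(txn)
    if score >= block_at:
        return "block"
    if score >= review_at:
        return "review"
    return "approve"


txn = {"amount": 120, "country": "NL", "home_country": "NL",
       "hour": 14, "new_merchant": False}
print(decide(txn))  # prints "approve": nothing suspicious, score 0.0
```

The point of the two thresholds is the customer-experience claim in the paragraph above: the vast majority of transactions fall cleanly into "approve" or "block" and never wait on a human.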

Machine learning is helping organizations go even further by gradually recognizing new signs of fraud that older systems could not catch. Criminals may be a few steps ahead today, but AI can close that gap and help buffer against new attacks.

AI and machine learning are taking banks' defenses to a higher level. Able to consider hundreds or even thousands of parameters when looking for suspicious patterns, machine learning is proving faster, sharper and more accurate at sniffing out wrongdoing.
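One simple way to picture pattern-based detection across many parameters is the sketch below, which compares each feature of a transaction with its historical distribution using z-scores and flags transactions whose features are collectively far from normal. A real system would learn from far more features and far more data, so treat this purely as an assumption-laden illustration of the idea:

```python
# Hypothetical sketch: mean absolute z-score across a transaction's
# features, as a stand-in for "how unusual is this pattern overall?".
from statistics import mean, stdev


def anomaly_score(history: list, txn: dict) -> float:
    """Average of per-feature |z-scores| against historical values."""
    zs = []
    for feature in txn:
        values = [h[feature] for h in history]
        mu, sigma = mean(values), stdev(values)
        # Guard against zero variance in the history.
        zs.append(abs(txn[feature] - mu) / sigma if sigma else 0.0)
    return sum(zs) / len(zs)


history = [{"amount": a, "hour": h} for a, h in
           [(40, 12), (55, 13), (60, 11), (45, 14), (50, 12)]]
normal = {"amount": 52, "hour": 12}    # close to historical behavior
odd = {"amount": 900, "hour": 3}       # large overnight outlier
print(anomaly_score(history, normal) < anomaly_score(history, odd))  # prints True
```

Extending the same loop from two features to thousands is exactly where machine learning earns its keep: no human analyst can weigh that many parameters per transaction in real time.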

Ethical AI

“Don’t be evil” is simple advice for people and businesses alike. But what happens when an organization hands responsibility for decision making to machines? Without human oversight, AI can go spectacularly wrong, even if the underlying decisions are made with sound logic. Examples include Microsoft’s racist chatbot Tay, life-and-death decisions facing autonomous car manufacturers, and issues of algorithmic bias in recruiting.

There are serious pitfalls to handing over too much autonomy to machines. Executives should appoint ethics committees to oversee AI projects, which can ensure objectivity and avoid straying into morality minefields.

To address these risks, we believe any AI system should ideally be designed to support the following themes:

• AI should be socially beneficial.

• AI should avoid creating or reinforcing known biases.

• AI should not exploit the vulnerabilities of individuals or groups.

• AI should be built and tested for safety first, and be accountable for outcomes.

• AI should embed privacy by design and be open to audits.

Organizations must also recognize that the world is watching how they use AI and what its ethical implications are. And it is not just the media and non-governmental organizations (NGOs) that are interested in how the AI evolution unfolds and in its potentially uneven impact on underrepresented groups.

Infosys will be exhibiting at the AI & Big Data Expo in Amsterdam (19-20 June 2019) at booth number 568.