Chaitanya Chokkareddy
Indian banks have come a long way in adopting Artificial Intelligence (AI) and its applications. Today, AI is hands-down the best tool available for crunching data. It can sift through mounds of numbers, words and images to predict, recommend, interact, detect and manage. In the banking and finance world, this has meant new ways of underwriting loans, analysing trades, minimising risk and detecting fraud. It is even being used to improve interactions: allowing banks and financial organisations to interface better with their customers, and to provide faster, easier, more personalised communication.
The flip side of this pretty picture is that hackers and fraudsters are using AI too. Many of these “experts” aren’t looking for ways to improve your AI; they are looking for ways to manipulate it.
Security has always been a running concern in the world of money. But while AI often provides the means to prevent many attacks, you need to ask: are there ways in which AI-based applications make you susceptible to hacks or data leaks? What risks can AI-based applications pose to your security?
Though there are several security issues we could consider, in my experience the biggest vulnerability lies within your training data. Training data is the data you feed into a machine learning algorithm to build your AI-based application. In the wrong hands, this data can be manipulated to crash your systems or to breach your customers’ privacy and security.
Protecting your AI-based application means protecting your data, and there are three security questions you must address:
1. Where are you storing your training data?
Using cloud AI services means that you upload your data to the cloud. But you must find out whether this cloud is under the direct control of your service provider, and whether they own the entire stack.
Apart from this, remember that while your cloud may be secure, it isn’t always used securely. Gartner predicts that through 2022, at least 95% of cloud security failures will be the customer’s fault. When you use the cloud for AI or other services, you will need to audit the threats and risks that your employees’ usage may pose.
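If your training data sits in a cloud object store, even a short scripted audit will catch the most common misconfigurations. Here is a minimal sketch in Python, assuming AWS S3 and the boto3 SDK purely for illustration; the specific checks matter more than the platform.

```python
# A minimal audit sketch, assuming training data is stored in AWS S3
# and credentials are already configured for boto3. The checks below
# are illustrative, not a complete security review.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]

    # Flag buckets that do not fully block public access.
    try:
        cfg = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        if not all(cfg.values()):
            print(f"WARN: {name} does not fully block public access")
    except ClientError:
        print(f"WARN: {name} has no public-access-block configuration")

    # Flag buckets with no default server-side encryption configured.
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError:
        print(f"WARN: {name} has no default encryption configured")
```

Run on a schedule, a check like this turns Gartner’s warning into an operational control rather than a statistic.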
2. Who owns your training data?
When you use a cloud-based machine learning algorithm to create an AI-based application for your business, you will need to upload your data into the service’s algorithm. For example, you may use an NLP service to sift through your call data. This is often proprietary data: not only does it give you your competitive edge, it is also often sensitive data that you are legally bound to protect. You need to ensure that you do not give away any rights to your data, and that you will own the final model created with that data. So, look carefully at the multiple “terms of service”, licence agreements and other fine print before signing up for any AI technology. If you find the provisions unfavourable or vague, consider another service.
Likewise, you also need to ensure you aren’t using unmoderated public datasets for your training data, as this can expose you to malicious data sources that poison your application.
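Before any public dataset enters your pipeline, a few scripted sanity checks are worth the effort. The sketch below is illustrative: the file name, expected checksum and label column are hypothetical, but the two ideas, verifying the file against its publisher’s hash and scanning for crude poisoning signals such as label skew or mass-duplicated rows, apply generally.

```python
# A minimal sanity-check sketch for third-party training data. The
# file name, expected hash and "label" column are hypothetical; the
# aim is to verify provenance and spot crude signs of tampering
# before any record reaches your training pipeline.
import csv
import hashlib
from collections import Counter

DATASET = "loan_training_data.csv"          # hypothetical file
EXPECTED_SHA256 = "<publisher's published checksum>"

# 1. Provenance: the file should match the hash its publisher posted.
sha256 = hashlib.sha256()
with open(DATASET, "rb") as f:
    for chunk in iter(lambda: f.read(8192), b""):
        sha256.update(chunk)
if sha256.hexdigest() != EXPECTED_SHA256:
    raise SystemExit("Checksum mismatch: dataset may have been altered")

# 2. Crude poisoning signals: skewed labels or heavy duplication.
with open(DATASET, newline="") as f:
    rows = list(csv.DictReader(f))

labels = Counter(row["label"] for row in rows)
print("Label distribution:", labels)        # sudden skew is a red flag

duplicates = len(rows) - len({tuple(sorted(r.items())) for r in rows})
print(f"{duplicates} duplicate rows")       # mass-injected rows repeat
```

Checks like these will not stop a sophisticated poisoning attack, but they do stop the cheap ones, and they force you to record where every dataset came from.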
3. Have you remembered to clean up?
Once training is complete, it is important to remove all traces of your data from the servers used. Eventually, you will use AI in some form, and when you start your AI project it is easy to get carried away with the possibilities it opens up. Managers often prioritise speed and innovation. However, take time out to understand what goes on behind the scenes: it will go a long way towards securing your application from manipulation while ensuring customer data privacy too.
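Concretely, the cleanup itself can be scripted as the final step of the training pipeline. This is a minimal sketch: the local paths and the S3 staging bucket are hypothetical placeholders, and simple deletion does not purge versioned copies or provider backups, which you must confirm with your provider separately.

```python
# A minimal post-training cleanup sketch, assuming artefacts were
# staged in local directories and an S3 bucket. All paths and the
# bucket name are hypothetical; adapt them to wherever your own
# pipeline leaves copies of raw data and checkpoints.
import shutil
from pathlib import Path

import boto3

LOCAL_ARTEFACTS = [Path("./training_data"), Path("./checkpoints")]
STAGING_BUCKET = "my-training-staging"      # hypothetical bucket

# Remove local copies of raw data and intermediate checkpoints.
for path in LOCAL_ARTEFACTS:
    if path.exists():
        shutil.rmtree(path)
        print(f"Removed {path}")

# Empty the cloud staging bucket as well.
s3 = boto3.resource("s3")
s3.Bucket(STAGING_BUCKET).objects.all().delete()
print(f"Emptied s3://{STAGING_BUCKET}")
```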
The author is Chief Innovation Officer, Ozonetel.