The banking and financial services sector in India has come a long way in adopting Artificial Intelligence (AI) and its applications. AI not only helps enhance customer experience but also improves operational excellence.
The ability to use AI at scale for financial services is enabling innovative solutions for complex problems which were once difficult to resolve. AI in the BFSI market is expected to witness a 30 per cent growth by 2024. “The intelligent and adaptive nature of this technology helps streamline and automate processes and cater to customers’ changing demands. AI applications are helping banks analyse massive amounts of data and provide actionable insights,” says Faisal Husain, Co-founder & CEO of Synechron.
Even though AI has been around for a few years now, its application in the banking sector is still in its nascent stage and suffers from security concerns due to limited knowledge among users. “It is important for enterprises to keep themselves updated on the latest information security techniques by installing a holistic data protection architecture to filter incoming information and train AI systems to identify potential threat vectors,” he adds.
“While this emerging technology is increasing the level of accuracy and fast-tracking decision making, banks often face the challenge of protecting the data residing with them, both personal information and organization-specific details.” Some of the security risks AI systems suffer from include:
- Data Poisoning: Owing to the large volumes of structured and unstructured data they hold, BFSI companies are a prime target for cyber crooks. As deployment of AI-enabled models in financial services sees an uptick, there is a risk of hackers manipulating the data used to train these models. Known as data poisoning, such an attack results in the generation of erroneous output. Cybercriminals use reverse engineering to extract a replica of the AI model and carry out these attacks, which often go unnoticed for a long period. As a result, AI models can be skewed using sophisticated techniques to produce a targeted result.
- Adversarial AI: As organizations deploy intelligent systems, untrusted infrastructure components such as open-source data analytics and ML frameworks can be compromised by criminals to extract data. Hackers use adversarial machine learning to detect patterns and identify vulnerabilities in the fraud controls on the network. This enables them to plant malware which sits undetected on the network and slowly exfiltrates confidential data passing through the system.
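The label-flipping flavour of data poisoning described above can be illustrated with a deliberately tiny sketch. All transaction amounts, labels, and the threshold rule here are invented for illustration; real fraud models are far more complex, but the mechanism is the same: the attacker changes the training data, not the code.

```python
# Toy fraud scorer: learns a threshold between typical legitimate and
# fraudulent transaction amounts. All numbers are hypothetical.

def train_threshold(samples):
    """Naive 'model': midpoint between the mean amount of legitimate
    and fraudulent training transactions."""
    fraud = [amt for amt, label in samples if label == "fraud"]
    legit = [amt for amt, label in samples if label == "legit"]
    return (sum(fraud) / len(fraud) + sum(legit) / len(legit)) / 2

clean = [(100, "legit"), (120, "legit"), (900, "fraud"), (1100, "fraud")]

# Poisoning: attacker injects large transactions mislabelled as "legit".
poisoned = clean + [(950, "legit"), (1050, "legit")]

clean_threshold = train_threshold(clean)        # 555.0
poisoned_threshold = train_threshold(poisoned)  # 777.5

print(clean_threshold, poisoned_threshold)
```

With the poisoned records included, the learned threshold jumps from 555 to 777.5, so a fraudulent 700-unit transaction now slips under it even though the training pipeline itself was never touched.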
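In the same spirit, the adversarial attack described above can be sketched against a toy linear fraud scorer. The feature names, weights, and perturbation budget are all hypothetical: the point is that once an attacker has probed the model enough to infer the direction of each weight, a small coordinated nudge to the inputs flips the decision.

```python
# Toy linear fraud scorer; a positive score means "flag as fraud".
# Weights and features are invented for illustration.
weights = {"amount": 0.8, "foreign": 0.5, "night": 0.3}
BIAS = -1.5

def score(tx):
    return sum(weights[k] * tx[k] for k in weights) + BIAS

tx = {"amount": 1.0, "foreign": 1.0, "night": 1.0}  # score ~0.1: flagged

# Attacker nudges each feature slightly against the sign of its weight,
# mimicking a gradient-sign (FGSM-style) perturbation.
epsilon = 0.2
adv = {k: tx[k] - epsilon * (1 if weights[k] > 0 else -1) for k in weights}

print(score(tx) > 0, score(adv) > 0)
```

A perturbation of just 0.2 per feature drops the score from roughly 0.1 to about -0.22, so the same fraudulent transaction is now waved through. Defences such as input sanitization aim to catch exactly this kind of slightly-off input.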
AI systems are susceptible to adversarial attacks, so input sanitization should be on the security agenda of BFSI companies. Systems can be trained to identify potential adversarial attacks by exposing them to weaker versions of the same attacks, such as distorted images. To prevent data leakage, security infrastructure should exhaustively cover all network endpoints. Humans are often the weakest link in security; business leaders should take conscious steps, such as regular trainings and awareness initiatives, to develop a common understanding of the company’s security procedures among employees. It is important to note that security is a property of system design: security features should be baked in at the design stage of an AI-based application and updated over time to tackle the expanding threat landscape.
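The idea of training on weaker versions of an attack can be sketched as simple data augmentation. The dataset, noise model, and nearest-centroid classifier below are all invented for illustration; real adversarial training perturbs inputs along the model's own gradients rather than with random noise, but the principle of including attacked samples in the training set is the same.

```python
import random

random.seed(0)  # deterministic for the example

def distort(x, scale=0.3):
    """Crude stand-in for an adversarial distortion: bounded random noise."""
    return [v + random.uniform(-scale, scale) for v in x]

def centroid(points):
    return [sum(col) / len(points) for col in zip(*points)]

# Invented 2-feature transactions: low values = legitimate, high = fraud.
legit = [[0.1, 0.2], [0.2, 0.1], [0.0, 0.15]]
fraud = [[0.9, 1.0], [1.0, 0.9], [0.95, 1.1]]

# "Adversarial training" in miniature: also train on distorted copies.
legit_aug = legit + [distort(x) for x in legit]
fraud_aug = fraud + [distort(x) for x in fraud]
c_legit, c_fraud = centroid(legit_aug), centroid(fraud_aug)

def classify(x):
    """Nearest-centroid decision."""
    d_legit = sum((a - b) ** 2 for a, b in zip(x, c_legit))
    d_fraud = sum((a - b) ** 2 for a, b in zip(x, c_fraud))
    return "legit" if d_legit < d_fraud else "fraud"

# A distorted fraudulent transaction should still be classified correctly.
print(classify(distort([0.95, 1.0])))
```

With this seed, the distorted fraudulent transaction is still flagged as fraud: because distorted copies were part of training, the learned centroids already account for the kind of perturbation the attacker applies.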