With Artificial Intelligence systems playing a key role in transforming the BFSI sector, the most important challenge for technology leaders looking to leverage AI is collecting, storing, retaining and protecting the data that AI systems need. In particular, all the acquired intelligence is stored as 'models' for as long as the AI system is in use.
Sharing his views on the top security issues for AI-based applications, Rajesh Ganesan, Director of Product Management, ManageEngine, says, “Companies trying to leverage AI must understand that the results depend directly on the volume, quality and diversity of data from which the AI systems can learn optimally and continuously. The better the learning, the more accurate the subsequent predictions made by the AI system, and hence it is critically important that data remains available.”
“Both the business data and the AI models could contain sensitive information, including people's personal data, and hence become a hot target for attackers looking to steal information. It is essential to treat this set of data the same way as other sensitive data and to build multiple layers of security, including encryption. Where encrypting data hinders the efficiency of AI systems, organizations should look at technologies like homomorphic encryption to balance AI system benefits against information security.”
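The appeal of homomorphic encryption is that certain computations can be performed directly on ciphertexts, so a processing system never sees the plaintext. A minimal way to see the idea is the multiplicative homomorphism of unpadded "textbook" RSA, sketched below; the tiny hard-coded primes are purely illustrative and nothing here is a secure or production scheme.

```python
# Toy demonstration of a homomorphic property: with unpadded RSA,
# Enc(a) * Enc(b) mod n decrypts to a * b, so a server can multiply
# values it cannot read. Toy primes only; NOT secure.

p, q = 61, 53            # illustrative small primes (real keys use 2048-bit primes)
n = p * q                # public modulus
phi = (p - 1) * (q - 1)
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent via modular inverse (Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 6
# The untrusted party multiplies ciphertexts without ever seeing a or b...
c_product = (encrypt(a) * encrypt(b)) % n
# ...yet decryption yields the product of the plaintexts.
assert decrypt(c_product) == a * b   # 42
```

Fully homomorphic schemes extend this idea to both addition and multiplication, which is what makes computing on encrypted business data practical in principle, at a performance cost that organizations must weigh.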
“In addition to securing the data storage infrastructure, organizations must be aware of the quirks that can be exploited by attackers to fool the AI system. Most prominent of these is the adversarial attack, which can be mounted by one or more motivated attackers to intentionally fool the AI system by supplying it with wrong data, causing the system to learn the wrong things and carry that learning over into its eventual behaviour,” he says.
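The mechanics of such training-data poisoning can be sketched with a deliberately tiny example: the same learner, fed label-flipped records, carries the bad learning into its later predictions. The nearest-centroid "model" and the fraud/legit framing below are illustrative assumptions, not any specific production system.

```python
# Sketch of training-data poisoning: injecting mislabelled records
# shifts what the model learns, and later predictions go wrong.

def train_centroids(samples):
    """Nearest-centroid 'model': the mean feature value per class label."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, x):
    """Classify x as the label whose centroid is nearest."""
    return min(model, key=lambda label: abs(x - model[label]))

clean = [(0, "legit"), (1, "legit"), (2, "legit"),
         (9, "fraud"), (10, "fraud"), (11, "fraud")]

# An attacker slips fraud-sized records labelled "legit" into training.
poisoned = clean + [(10, "legit"), (11, "legit"), (9, "legit"),
                    (12, "legit"), (10, "legit"), (13, "legit")]

clean_model = train_centroids(clean)
bad_model = train_centroids(poisoned)

# A borderline transaction at x = 6:
print(predict(clean_model, 6))  # "fraud" -- the clean model flags it
print(predict(bad_model, 6))    # "legit" -- the poisoned model waves it through
```

The point of the sketch is that nothing in the learner itself changed; only the data did, which is why securing the training pipeline matters as much as securing the model.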
“Also troublesome is the phenomenon of concept drift, where the intelligence acquired in the AI model from operational data suddenly changes drastically, rendering the models irrelevant. Any subsequent prediction or action the AI system makes based on an irrelevant model leads to security issues, such as data exposure, theft or even deletion. Concept drift can also be engineered by intentionally supplying wrong data to business applications, as AI systems typically have no intelligence to ascertain whether they are learning from a right or wrong data source,” Ganesan adds.
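Detecting drift before an irrelevant model causes damage typically comes down to monitoring: compare a recent window of some model-input statistic against a reference window and raise an alert when the shift is too large. The window sizes, the statistic, and the three-sigma threshold below are illustrative choices for a sketch, not a prescribed method.

```python
# Hedged sketch of a concept-drift monitor: flag when the mean of
# recent observations moves too many reference standard deviations
# away from the reference mean.

def drift_detected(reference, recent, threshold=3.0):
    """Return True when the recent mean is more than `threshold`
    reference standard deviations from the reference mean."""
    ref_mean = sum(reference) / len(reference)
    ref_var = sum((x - ref_mean) ** 2 for x in reference) / len(reference)
    ref_std = ref_var ** 0.5 or 1e-9   # avoid division by zero
    recent_mean = sum(recent) / len(recent)
    return abs(recent_mean - ref_mean) / ref_std > threshold

stable = [100, 102, 98, 101, 99, 100, 97, 103]   # normal transaction sizes
shifted = [160, 158, 162, 159, 161]              # sudden regime change

print(drift_detected(stable, stable[-5:]))  # False -- behaviour unchanged
print(drift_detected(stable, shifted))      # True  -- drift flagged
```

An alert like this does not say whether the drift is organic or engineered, but it gives operators the chance to quarantine the model before its stale predictions are acted upon.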
Equally important to be aware of is that humans can engineer bias into the AI system, again by intentionally injecting unfair, non-inclusive data into the business applications, especially during the training that builds the models. With this ability, attackers can steer the functioning of a business application to cause gender bias, favouritism and the like, to their advantage.

“One definitive way for organizations to protect themselves from the above factors is to invest in 'explainable AI', which is simply the added ability of the AI system to explain why it makes its various predictions and takes its various actions, before actually doing them. This gives people the opportunity to counter engineered factors like adversarial attacks, concept drift or intentionally planted bias in real time. A well-informed organization with a well-meaning leadership will understand the nuances of all such concepts and the need to invest in a wholesome system to fully leverage the benefits of AI technologies,” explains Ganesan.
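For a simple model class, the explainability Ganesan describes can be made concrete: a linear risk scorer can report each feature's contribution (weight times value) alongside its score, so a human can review the reasons before the action is taken. The feature names and weights below are illustrative assumptions for a sketch, not any real scoring model.

```python
# Sketch of 'explainable AI' for a linear scorer: return the score
# together with per-feature contributions, largest first, so a human
# can sanity-check the reasons before the system acts.

WEIGHTS = {"amount": 0.004, "foreign_ip": 2.5, "night_time": 1.0}  # assumed weights

def score_with_explanation(features):
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    total = sum(contributions.values())
    reasons = sorted(contributions.items(), key=lambda kv: -kv[1])
    return total, reasons

total, reasons = score_with_explanation(
    {"amount": 500, "foreign_ip": 1, "night_time": 0})

print(f"risk score {total:.1f}")          # risk score 4.5
for name, contrib in reasons:
    print(f"  {name}: {contrib:+.2f}")    # foreign_ip first: +2.50
```

If a planted bias or poisoned weight were driving the score, it would surface here as an implausibly large contribution, which is exactly the real-time review opportunity the quote describes.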