Traditional data privacy frameworks have been based on the principle that access to our personal information is predicated on obtaining our consent. The deployment of artificial intelligence (AI) to collect and process personal data from public sources is changing that paradigm.
Ever since customer data became commoditised, the key challenge in garnering more users, and thereby access to their personal data, has been the need to wrangle consent. Companies that offered ostensibly ‘free’ services to entice users to consent to sharing their personal data have been hit by tightening data privacy laws that limit the use of that data to only those purposes expressly consented to.
Consent Wrangled From Us

While our right to privacy has become more universally recognised, ironically, our ability to restrict access to our personal information by withholding consent has diminished. With the terms and conditions of content-sharing apps becoming increasingly complex, users’ consent is extracted by drowning them in a blizzard of words.
Increasingly, apps seek permissions that bear no relation to the services they provide. Consent wrangling has become par for the course, with apps offering ad-free experiences devoid of annoying pop-ups if you grant consent for unrelated data collection: access to your camera, microphone or location, or even the logging of keystrokes.
In response to increasingly complex privacy notices, some countries have tightened their data privacy laws to mandate that express, informed consent be obtained before personal information is accessed. Despite these measures, the volume of personal information willingly shared on social media widens the window of opportunity for automated data harvesting tools to access information that was once considered private.
Enter AI Tools

Companies have accordingly diverted their attention to deploying AI tools to stitch together meaningful analysis regarding their users from non-personal data that exists in the public domain.
These fragmented pieces of person-specific data scattered over the internet are relatively meaningless when assessed by humans as strands of non-personal data. However, when pieced together by AI tools, this data yields user-specific inferences that could arguably be viewed as personal information.
Since vast amounts of our personal data are shared on social media and sit in the public domain, this may not seem particularly sensitive. However, the aggregated analysis of unrelated segments of non-sensitive personal data about individuals can yield insights greater than the sum of the parts.
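To illustrate how aggregation can reveal more than any single fragment, consider a deliberately simplified sketch in Python. Every identifier, data source and inference rule below is invented for illustration; real systems rely on statistical models rather than keyword matching, but the privacy risk arises from the same linkage step.

```python
# Hypothetical illustration: individually innocuous public fragments,
# joined on a shared identifier, can yield a sensitive inference.
from collections import defaultdict

# Each fragment is harmless on its own (all names and values are invented).
fragments = [
    {"user": "u42", "source": "photo_tag",   "value": "marathon finish line"},
    {"user": "u42", "source": "wifi_log",    "value": "airport terminal 2"},
    {"user": "u42", "source": "purchase",    "value": "glucose test strips"},
    {"user": "u42", "source": "fitness_app", "value": "daily steps shared"},
]

# Step 1: link the fragments using the common identifier.
profiles = defaultdict(list)
for fragment in fragments:
    profiles[fragment["user"]].append(fragment["value"])

# Step 2: apply a crude inference rule to the combined profile.
def infer(values):
    combined = " ".join(values)
    inferences = []
    if "glucose" in combined:
        inferences.append("possible diabetes management")  # health inference
    if "airport" in combined:
        inferences.append("frequent traveller")            # movement inference
    return inferences

for user, values in profiles.items():
    print(user, "->", infer(values))
# Output: u42 -> ['possible diabetes management', 'frequent traveller']
```

No single record above discloses a health condition; only the join does, which is precisely why the aggregated analysis of public fragments sits uneasily with consent-based privacy frameworks.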
Photos posted by a friend on social media in which you appear, facial recognition tools at an airport or a sporting event for security or ticketless entry, or even your phone’s interactions with smart devices or airport Wi-Fi all yield insights into what you do and where you travel. It should come as no surprise, then, that apps encourage the sharing of personal information in subtle but persuasive ways.
AI-driven Hyper-Targeted Ads

Device information, photos and videos that are voluntarily shared are increasingly processed by AI-driven image and video content analysis tools. Developments in machine learning have enabled video content analysers to be trained, on the basis of the information they process, to identify individuals, their interactions, and their surroundings. This ability to recognise objects, themes, locations and arguably other individuals in publicly available photos or videos helps classify the relevance of a video for certain audiences.
While this does prevent irrelevant or explicit content from being channelled to certain audiences, there is a growing debate about alternative avenues for misuse. With the advertising industry pivoting to hyper-targeting based on personal tastes and preferences, sensitive personal data such as health information, spending patterns and purchasing power has become an actively harvested commodity - one that is a major line item on the balance sheets of Big Data companies.
AI tools are designed to use all data at their disposal, without regard for how that data was collected or where it was obtained. The lack of ethical boundaries coded into AI tools raises very real concerns about the targeting of vulnerable populations and arbitrary AI-driven decision-making. While AI tools used by apps are inherently designed to promote purchases, without a regulatory framework, AI-driven hyper-targeted advertisements could exploit the addictions and proclivities of vulnerable customer groups to promote alcohol, gambling or tobacco. Mass surveillance is what has traditionally alarmed privacy advocates, but AI deployed even for targeted advertising poses a significant risk to privacy rights.
Regulating AI Tools

Unsurprisingly, regulators, and even the Ministry of Electronics and Information Technology, are considering regulatory frameworks (via the Digital India Act) to protect citizens from being at the mercy of automated AI decision-making tools. Without a framework to govern how AI tools are utilised, there is scarcely any control over the permitted end-use of data disclosed to and processed by AI tools.
The use cases in which sensitive personal information is processed by AI are expanding each day: from virtual therapists detecting suicide risks to healthcare providers setting health insurance premia by monitoring inactive lifestyles through apps that collect background data. Traditionally, assessments of creditworthiness conducted by human assessors were limited to financial statements and other data disclosed by the assessee.
However, AI tools may draw on fragments of personal data in the public domain, beyond the data wilfully disclosed by an assessee. Such data may include spending patterns on e-commerce websites, the use of buy-now-pay-later schemes, and even factors that point to disposable income or fiscal stress. With companies racing to define target audiences, financial profiling has led to the use of AI to identify individuals, their social networks, brand preferences and offline spending patterns. These illustrative use cases of AI, for hyper-targeted advertisements and creditworthiness assessments, have already begun to make people feel stalked.
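A toy version of such an assessment, again in Python, makes the concern concrete. All feature names, weights and figures below are invented for illustration; real credit models are statistical rather than hand-weighted, but the contested point is the same: the harvested inputs are folded into the score without the assessee’s knowledge or consent.

```python
# Hypothetical sketch of AI-assisted creditworthiness scoring.
# All feature names and weights are invented for illustration.

def credit_score(disclosed: dict, harvested: dict) -> float:
    """Combine disclosed financials with harvested behavioural signals."""
    score = 600.0  # arbitrary baseline

    # Data the assessee wilfully disclosed (the traditional inputs).
    score += 0.001 * disclosed.get("annual_income", 0)
    score -= 50.0 * disclosed.get("existing_defaults", 0)

    # Fragments harvested from the public domain (the new, contested inputs).
    score -= 30.0 * harvested.get("bnpl_schemes_active", 0)     # buy-now-pay-later use
    score -= 0.0005 * harvested.get("monthly_ecommerce_spend", 0)
    if harvested.get("signals_fiscal_stress", False):
        score -= 75.0

    return score

applicant = credit_score(
    disclosed={"annual_income": 900_000, "existing_defaults": 0},
    harvested={"bnpl_schemes_active": 2, "monthly_ecommerce_spend": 40_000,
               "signals_fiscal_stress": True},
)
print(round(applicant, 1))  # 1345.0 = 600 + 900 - 60 - 20 - 75
```

The point of the sketch is not the arithmetic but the inputs: the assessee never handed over the contents of the harvested dictionary, yet it moves the score.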
Private entities may be transgressing into the collection of sensitive personal data, acting like a fly on the wall armed with AI-driven insights. However, the use of AI by the Internal Revenue Service in the US to detect patterns of tax evasion and, closer to home, the unregulated use of AI-driven facial recognition tools for surveillance and security, and of apps for COVID surveillance, are stark reminders of how the boundaries between public interest and privacy are being blurred. It would therefore be remiss for any regulatory framework seeking to prevent overreach by AI to regulate private entities but exempt the government.
Akash Karmakar is a technology and telecommunications lawyer and a partner with the Law Offices of Panag & Babu. Views are personal and do not represent the stand of this publication.