
AI and Risk: The privacy and security perils of ChatGPT

This week Samsung banned its employees from using ChatGPT for work purposes after an engineer accidentally leaked sensitive internal data, including source code. For companies and individuals alike, vigilance while using chatbots and handling digital information is the best way forward

May 05, 2023 / 15:25 IST

A number of concerns have been raised specifically with regards to the data privacy and security aspects of using ChatGPT. (File image)

Conscious, self-aware, truly intelligent or just useful tools: the debate on artificial intelligence rages on and will be the defining question of our era. For lawyers and policy specialists, the core issue is 'risk', and how to assess, allocate and audit it within a matrix of potential harms and desired outcomes.

Take ChatGPT, the AI chatbot based on the GPT-4 language model, which has taken the world by storm. IBM CEO Arvind Krishna recently stated that he believed approximately 30 percent of the company's non-customer-facing jobs, around 7,800 roles, could be replaced by artificial intelligence over the next five years.


Promise And Pitfalls

Yet the advantages and utility of these chatbots cannot be denied or overlooked. Beyond their ability to comb through and process enormous amounts of data in seconds, they have proven excellent at simplifying everyday tasks such as writing emails, answering simple questions and drafting standard documents. These chatbots have also become highly proficient at mimicking human-like emotions and responses and engaging in realistic conversations with users.