
The human perils of giving ChatGPT more memory

OpenAI mustn't repeat Facebook's mistake of "remembering" user preferences and views and feeding more of the same, ultimately driving them into silos. It must offer diverse perspectives on political or social issues, even if they challenge a user’s prejudices

February 15, 2024 / 09:46 IST

OpenAI is rolling out what it calls a memory feature in ChatGPT. The popular chatbot will be able to store key details about its users to make answers more personalised and “more helpful,” according to OpenAI. These can be facts about your family or health, or preferences about how you want ChatGPT to talk to you, so that instead of starting on a blank page it’s armed with useful context. As with so many tech innovations, what sounds cutting-edge and useful also has a dark flipside: it could blast another hole in our digital privacy and, just maybe, push us further into the echo chambers that social media forged.

AI firms have been chasing new ways of increasing chatbot “memory” for years to make their bots more useful. They’re also following a roadmap that worked for Facebook: gleaning personal information to better target users with content that keeps them scrolling.

OpenAI’s new feature — which is rolling out to both paying subscribers and free users — could also make its customers more engaged, benefiting the business. At the moment, ChatGPT’s users spend an average of seven-and-a-half minutes per visit on the service, according to market research firm SimilarWeb. That makes it one of the stickiest AI services available, but the metric could go higher. Time spent on YouTube, for instance, is 20 minutes
for each visit. By processing and retaining more private information, OpenAI could boost those stickiness numbers, and stay ahead of competing chatbots from Microsoft, Anthropic, and Perplexity.

But there are worrying side effects. OpenAI states that users will be “in control of ChatGPT’s memory,” but also that the bot can “pick up details itself.” In other words, ChatGPT could choose to remember certain facts that it deems important. Customers can go into ChatGPT’s settings menu and delete anything they want the chatbot to forget, or shut down the memory feature entirely. But “memory” will be on by default, putting the onus on users to opt out.

Collecting data by default has been the setup for years at Facebook, and the expansion of “memory” could become a privacy minefield in AI if other companies follow OpenAI’s lead. OpenAI says it only uses people’s data to train its models, but other chatbot makers can be far looser. A recent survey of 11 romance chatbots found nearly all of them said they might share personal data with advertisers and other third parties, including details about people’s sexual health and medication use, according to the Mozilla Foundation, a nonprofit that promotes online transparency.

Here’s another unintended consequence that has echoes of Facebook: a memory-retentive ChatGPT that’s more personalised could reinforce the filter bubbles people find themselves in, thanks to social feeds that for years have fed them a steady diet of content confirming their cultural and political biases.

Imagine ChatGPT logging in its memory bank that I supported a certain political party. If I then asked the chatbot why its policies were better for the economy, it might prioritise information that supported the party line and omit critical analysis of those policies, insulating me from viable counterarguments.

If I told ChatGPT to remember that I’m a strong advocate for environmental sustainability, my future queries about renewable energy sources might get answers that neglect to mention that fossil fuels can sometimes be viable. That would leave me with a narrower view of the energy debate.

OpenAI could tackle this by making sure ChatGPT offers diverse perspectives on political or social issues, even if they challenge a user’s prejudices. It could add critical thinking prompts to encourage users to consider perspectives they haven’t expressed yet. And in the interests of transparency, it could also tell users when it’s giving them tailored information. That might put a damper on its engagement metrics, but it would be a more responsible approach.

ChatGPT has experienced gangbusters growth, pushed for user engagement and is now storing personal information, making its path look a lot like the one Mark Zuckerberg once trod, with similarly noble intentions. To avoid the same toxic side effects his apps had on mental health and society, OpenAI must do everything it can to stop its software from putting people into ever-deeper silos. The very idea of critical thinking could become dangerously novel for humans.

Parmy Olson is a Bloomberg Opinion columnist. Views do not represent the stand of this publication. 

Credit: Bloomberg 


