A new study by the Financial Times finds that prominent AI chatbots exhibit slight but significant biases when asked to describe the personalities and management styles of top technology leaders, particularly the executives of the companies that built them.
When asked about influential artificial intelligence leaders, chatbots from six leading companies (OpenAI, Anthropic, Meta, Google, xAI and China's DeepSeek) tended to offer glowing descriptions of their own executives while being more critical or guarded when describing those of rivals.
The research shows how the tensions and values of the companies developing generative AI can seep into the behaviour of models that millions of people now trust as sources of information.
Admiration for their own, criticism for the others
OpenAI’s ChatGPT, for example, described CEO Sam Altman as a “strategic and ambitious leader” with “sharp business instincts.” But Anthropic’s Claude, created by a company founded by former OpenAI employees, painted a different picture — saying Altman’s leadership had involved “controversial decisions” that prioritized influence over OpenAI’s original non-profit goals.
Meta's Llama-powered chatbot called CEO Mark Zuckerberg "transformational," but rival bots were more measured, describing him as "visionary but controversial" and "relentless." Elon Musk's xAI chatbot Grok called Musk "bold" and "visionary," while Claude described him as "polarizing" and "mercurial."
Blind spots and selective memory
When challenged to evaluate the vulnerabilities of AI CEOs, and encouraged to "be honest," the vast majority of chatbots were far more willing to point out weaknesses in competitors' bosses than in those of their own creators.
ChatGPT identified Musk's "erratic and impulsive behaviour" as his biggest weakness. When queried about Altman, however, it demurred, saying there was a "growing perception" that he placed market supremacy above transparency, framing the criticism as an outside view rather than its own assessment.
This tendency to excuse in-house executives while pointing fingers outward reflects the limits of supposedly objective AI. "Chatbots have a tendency to say what people wish to hear," explained researchers quoted by the FT. "And occasionally, that means excusing their developers."
Training data and national divides
The differences also arise from variations in training data. Chinese firm DeepSeek's chatbot, for example, praised its founder Liang Wenfeng as "an unconventional leader who focuses on creativity." But U.S. models such as Claude and Gemini appeared not to know who Liang was, presumably because their training data ended before DeepSeek rose to prominence in early 2025.
That gap highlights the models' dependence on recent, English-language material, and their inability to consistently characterize individuals who are not prominently profiled in major Western media.
Artificial loyalty in artificial intelligence
As chatbots become increasingly influential, relied upon for everything from news briefings to personality profiles, these subtle biases may begin to shape the public's perception of the individuals at the centre of the technology itself.
The findings of the FT's investigation point to a deeper truth: although chatbots claim to be neutral, their personalities are a function not simply of data but of design choices, company values, and the reputations of those at the top.
And when it comes to how they talk about their bosses, most chatbots remain well aware of which side their algorithms are buttered on.