Small AI firm claims it breached McKinsey’s internal platform in hours

A reported two-hour breach of a widely used internal AI tool is raising fresh questions about how secure enterprise AI systems really are.
March 18, 2026 / 12:00 IST
McKinsey has used Lilli internally for about two years. (Image credit: Reuters)
Snapshot
  • AI startup claims breach of McKinsey's internal platform Lilli
  • CodeWall.ai reports its agent swiftly accessed sensitive data
  • McKinsey has not confirmed the breach or extent of exposed data

A small AI startup has claimed it was able to breach an internal platform used by McKinsey in just a couple of hours, drawing attention to potential security gaps in enterprise AI systems.

According to the claim, the platform, known as Lilli, has been used internally by McKinsey for about two years to support its consulting work. The breach was carried out by CodeWall.ai, a company whose founder Paul Price says he is currently its only employee. He said an AI agent was deployed to test the system and was able to gain access far more quickly than expected.

CodeWall.ai claims the agent was able to access a large volume of internal data, including millions of chat messages and hundreds of thousands of files. The company described the material as highly sensitive, suggesting it included what it called McKinsey’s “intellectual crown jewels”.

The details, however, are based on the startup’s own account, and it is not yet clear how much of the claim has been independently verified. McKinsey has not publicly confirmed the extent of any breach at the time of reporting, and there is no official statement outlining what data, if any, may have been exposed.

Even so, the claim has quickly gained attention because of what it suggests about how AI tools are being used inside large organisations. Platforms like Lilli are designed to pull together internal knowledge, documents and conversations, making them powerful but also potentially risky if access controls are not tightly managed.

For a while now, security researchers have been flagging a simple risk. The more these AI tools are plugged into a company’s internal data, the more damage they can do if something goes wrong. If access controls are loose or poorly designed, an automated agent can pull out large volumes of sensitive information very quickly.
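The safeguard researchers point to is straightforward in principle: check what the requesting user is allowed to see before an AI agent acting on their behalf touches internal data. The sketch below is purely illustrative and assumes nothing about how Lilli actually works; all names (`Document`, `filter_for_user`, the group labels) are hypothetical, made up to show the idea of document-level access filtering.

```python
# Illustrative sketch only. No names here come from McKinsey, Lilli or
# CodeWall.ai; this just shows the access-control pattern security
# researchers recommend for AI tools wired into internal data.
from dataclasses import dataclass


@dataclass(frozen=True)
class Document:
    doc_id: str
    allowed_groups: frozenset  # groups permitted to read this document


def filter_for_user(results, user_groups):
    """Drop any retrieved document the requesting user may not read.

    Without a check like this, an agent answering on a user's behalf
    effectively inherits access to everything the index contains.
    """
    return [d for d in results if d.allowed_groups & user_groups]


# Example: an internal index that mixes broadly shared and
# engagement-restricted material.
index = [
    Document("staff-handbook", frozenset({"all-staff"})),
    Document("client-finances", frozenset({"engagement-team"})),
]

# A user in only the "all-staff" group should never see the second file.
visible = filter_for_user(index, frozenset({"all-staff"}))
```

The point of the sketch is where the check sits: it runs on every retrieval, so even a fast-moving automated agent can only surface what its human principal could already read.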

If the claims hold up, the incident points to a broader problem: companies have been moving fast to roll out AI across teams, often prioritising what the tools can do over how well they are secured and monitored.

That is really the larger takeaway here. It is not just about one platform or one incident. As AI becomes a core part of how organisations work, even a small gap can have outsized consequences. Cases like this are likely to push companies to take a harder look at how these systems are being built and what safeguards are actually in place.

Moneycontrol World Desk
first published: Mar 18, 2026 12:00 pm
