There are over four million apps across the App Store and Google Play. The average smartphone has somewhere between 60 and 90 apps installed. Yet, the average person uses only nine or ten of them daily, and about 30 per month. Two-thirds of total app usage time flows to just two categories: social media and entertainment. Everything else fights for scraps.
This is the market AI enters - saturated at the top, brutal everywhere else.
To understand what's actually changing, you have to separate two phenomena that often get conflated: AI is transforming how apps get built, and it's transforming what apps can do. These are different shifts with different implications, and they're happening simultaneously.
A year ago, AI coding assistants were useful for boilerplate and simple completions. Today, they handle complex multi-file changes, long-horizon debugging, and refactoring across entire codebases.
This isn't an incremental improvement on a smooth curve - it's a step change in capability.
The frontier models have moved into territory previously considered too difficult: autonomous coding sessions that maintain coherence for dozens of hours, resolving the kind of tangled GitHub issues that would take a human developer half a day of context-loading just to understand.
Alongside the models themselves, a new category of tools has emerged. Platforms like Replit, Cursor, and Lovable combine language models with instant hosting and real-time collaboration. You describe what you want; the system generates, tests, and deploys it. Lovable reportedly hit $100 million in annual recurring revenue in eight months. Replit went from $10 million to $100 million in the nine months following their Agent release. These aren't vanity metrics - they reflect user demand for a fundamentally different way of building software.
This creates two distinct paths. Professional developers use AI-augmented IDEs (integrated development environments) to accelerate existing workflows. Non-technical builders use natural language to create functional applications that would have required a development team 18 months ago. Both paths are real, neither invalidates the other.
What matters most is that the bottleneck for app development has shifted. The old constraint was "can we write this code?" Now that writing code is cheap, three new constraints emerge.
First, verification - can we confirm this code is actually correct? AI-generated code is probabilistic; for a social feature, a subtle bug annoys users, but for code that moves money, it's catastrophic.
Second, testing - do we have systematic coverage for output that varies with each generation? The testing problem becomes harder, not easier, when the code isn't deterministic.
Third, and most importantly, insight - do we know what to build in the first place? When execution costs collapse, the relative value of understanding users deeply increases. The scarce input is no longer engineering capacity; it's clarity about the problem worth solving.
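The testing constraint above can be made concrete. One common approach is to test non-deterministic output against invariant properties rather than exact strings. This is a minimal sketch of that idea; `generate_summary` is a hypothetical stand-in for any AI-backed function whose output varies from call to call, not a real library API.

```python
# Sketch: testing non-deterministic output by asserting properties,
# not exact strings. `generate_summary` is a hypothetical placeholder
# for a function backed by a language model.
import random

def generate_summary(text: str) -> str:
    # Placeholder: a real implementation would call a model and
    # return different output on each invocation.
    sentences = text.split(". ")
    return ". ".join(random.sample(sentences, k=min(2, len(sentences))))

def check_summary_properties(source: str, summary: str) -> bool:
    """Invariant checks that hold for ANY acceptable output."""
    if not summary:                  # must produce something
        return False
    if len(summary) > len(source):   # must actually compress
        return False
    # Every sentence must come from the source (no fabrication).
    return all(s in source for s in summary.split(". "))

source = "Revenue grew. Costs fell. Margins improved. Headcount was flat."
# Run many times: the output varies, the properties must not.
for _ in range(20):
    assert check_summary_properties(source, generate_summary(source))
```

The point of the sketch: you stop asking "is the output equal to X?" and start asking "does every output satisfy the contract?", which is the only question that survives when each generation differs.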
The competitive implications follow directly. When development costs fall, the advantage of "we have more engineers" weakens. Small teams gain ground against incumbents, at least for products where the moat was engineering capacity rather than data, distribution, or network effects. The barrier to entry has dropped; the barrier to excellence hasn't.
The product side of the equation is equally significant, though subtler.
AI offers genuinely new primitives for building products. Natural language understanding that actually works - users can express intent in words instead of learning navigation hierarchies. "Find that Italian place we went to in March" beats tapping through nested menus. Multimodal input where cameras become query interfaces - point at a plant, get its name; photograph a math problem, get the solution. Voice that goes from dictation to conversation.
And then there's reasoning. We now have entities that can think through problems, maintain context across long interactions, and act with something resembling judgment. Not pattern matching but planning, evaluating, and executing multi-step tasks.
These primitives enable product categories that were previously impossible - AI companions, creative co-pilots, autonomous agents that act on behalf of users. And they make previously complex features trivial - contextual support that understands what went wrong, onboarding that adapts to each person, personalization that operates on meaning rather than click patterns.
Traditional mobile apps had near-zero marginal cost per user. Once built, serving another user costs almost nothing. This made freemium viable, enabled ad-supported models, and kept subscription prices anchored low.
AI features reintroduce meaningful marginal costs. A heavy user of an AI-powered feature might cost several dollars per month in API calls. This breaks assumptions that many business models depend on. Freemium gets harder: you can't give away the expensive features for free. Ad-supported models require much higher revenue per user. Usage-based pricing becomes more attractive but adds friction and unpredictability.
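The arithmetic behind this is simple enough to sketch. All numbers below are illustrative assumptions, not real pricing: the point is how a free tier flips from painless to ruinous once each active user carries real inference cost.

```python
# Sketch: how per-user AI inference costs reshape freemium economics.
# All numbers are illustrative assumptions, not real pricing.

def monthly_margin(subscribers: int, price: float,
                   free_users: int, cost_per_user: float) -> float:
    """Revenue from paying subscribers minus serving costs for everyone."""
    revenue = subscribers * price
    serving = (subscribers + free_users) * cost_per_user
    return revenue - serving

# Classic mobile app: marginal cost near zero, a large free tier is painless.
print(monthly_margin(subscribers=1_000, price=10.0,
                     free_users=50_000, cost_per_user=0.001))

# AI-heavy feature: each active user costs real money in API calls,
# and the same free tier now swamps subscription revenue.
print(monthly_margin(subscribers=1_000, price=10.0,
                     free_users=50_000, cost_per_user=3.0))
```

With near-zero marginal cost the first scenario nets roughly the full subscription revenue; with a few dollars of inference per active user, the identical user base runs deeply negative, which is why free tiers get capped and usage-based pricing creeps in.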
This constraint is real, not temporary. Inference costs will fall, but the fundamental economics of compute-per-query aren't disappearing.
Interestingly, there's an offsetting factor: what people are willing to pay for software is also changing.
Consumer subscriptions anchored around $10-15 per month for years. Netflix established the ceiling; everyone else stayed below it. That ceiling held because software was framed as "access to content or services."
AI products are breaking through that ceiling. OpenAI launched ChatGPT Pro at $200 per month. Anthropic offers Claude at $100 and $200 monthly tiers. And people pay. This would have been unthinkable for any consumer product three years ago.
The mechanism is subtle but important: when software feels like hiring an assistant rather than subscribing to a service, willingness to pay restructures entirely. The framing shifts from "content access" to "capability expansion." Software that makes users measurably more productive or capable gets evaluated by different criteria than software that entertains.
This isn't automatic. The product has to actually deliver capability expansion, and the user has to perceive it that way. But when it works, the old pricing anchors simply don't apply.
People exposed to ChatGPT or Gemini now expect more from software generally. The shift isn't about any particular interface - it's about directness. Users have experienced expressing what they want and getting it, without first learning an app's navigation hierarchy or mental model. That's the new baseline: intent in, result out, no abstractions in between.
Search that doesn't understand meaning, forms that demand you adapt to their structure, menus that require you to think like a developer - all of it now feels like friction that shouldn't exist.
This creates pressure on existing apps to add AI capabilities, even when those features aren't core to the product.
The relationship between speed and value has also clarified in unexpected ways. Deep Research features - now available in ChatGPT, Gemini, and Perplexity - have users willingly waiting ten or more minutes for results. This is novel in consumer tech, where conventional wisdom held that anything beyond a few seconds meant abandonment. It turns out users will wait, but only when the output justifies the wait. "Slow but powerful" is viable if powerful means something.
Jeff Bezos famously noted that it's hard to predict what will change, but easier to predict what won't. The fundamentals of building valuable software haven't moved.
Understanding the user problem in detail still matters more than the tools you use to solve it. Product-market fit isn't something AI generates for you. Distribution remains hard - getting in front of users, earning their attention, converting them to retained customers. Network effects still compound advantages for those who achieve them and remain inaccessible to those who don't.
AI changes the tools. It changes the cost structures. It changes what's possible to build. It even changes what users expect. What it doesn't change: the hard work of figuring out what's worth building, for whom, and why they'll care. That's still the game. The players just have new tools.

(Pranav Sanga is a Strategic Partnership Manager at Google, based in San Francisco. He has spent six years working with mobile app developers across Asia-Pacific and the US.)
Views are personal and do not represent the stand of this publication.