AI isn’t coming. It’s already here – in the tools we use, the workflows we trust, and the decisions we make without even realising it.
It’s ambient, fast-moving, and quietly reshaping how work gets done across every industry. But while the headlines shout disruption and possibility, most companies remain stuck in observer mode – watching, waiting, and hoping someone else figures it out first.
The hesitation is understandable. Technology adoption often follows a familiar arc: excitement, scepticism, experimentation, then scale. But with generative AI, the arc has compressed. Tools like GPT-4o, Claude, Gemini, and open-source models such as LLaMA 3 and Mistral have rapidly matured. Costs are plummeting. Capabilities are multiplying.
This is why I propose a practical, simple, and high-leverage move: organisations should allocate 10% of their capital expenditure – or the equivalent in team time – to structured AI experiments.
This isn’t a call for reckless investment. It’s a disciplined approach to de-risk innovation while building internal capacity for the AI-native era.
Why This Moment Demands Action
In 2023 alone, OpenAI slashed its token pricing by more than 90%. This means the marginal cost of applied intelligence – research, summarisation, synthesis, decision support – has dramatically declined.
More importantly, we are witnessing a shift from theoretical use-cases to practical deployment.
- At Khan Academy, the AI-powered tutor “Khanmigo” now supports students with real-time learning assistance.
- A mid-sized law firm in India recently integrated a contract-review assistant powered by GPT, enabling junior associates to reduce turnaround times by over 60%.
- An FMCG startup used generative tools to create 300 variants of marketing visuals and copy in under 72 hours, something that once required weeks of agency coordination.
These are not moonshots. They are small, contained, high-impact experiments, with measurable outcomes.
Why 10%? The Logic of Optionality
Ten per cent is a meaningful yet manageable threshold: large enough to fund real experiments rather than pilots staged for internal presentations, and small enough not to threaten operational budgets or provoke excessive scrutiny.
Organisations that underfund experimentation risk stagnation. Those that overfund without structure often fall into ‘innovation theatre’ – where the outputs are visually impressive but commercially irrelevant.
The 10% allocation strikes a balance. It forces prioritisation. It invites urgency. And it builds psychological safety for teams to take creative risks without betting the company.
What Makes a Good AI Experiment?
Not every initiative needs to be transformative. In fact, most shouldn’t be. The ideal AI experiment:
- Solves a real, clearly defined problem
- Can be executed in 2–4 weeks
- Requires minimal infrastructure
- Involves real users and measurable outcomes
- Can be sunset with minimal loss if unsuccessful
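One lightweight way to operationalise these criteria is a go/no-go screen that teams complete before an experiment is funded. The sketch below is illustrative only – the field names and the "every criterion must hold" rule are my assumptions, not a standard framework.

```python
# Illustrative go/no-go screen for the experiment criteria above.
# Field names and the all-criteria-must-hold rule are assumptions.
from dataclasses import dataclass

@dataclass
class ExperimentProposal:
    problem_defined: bool   # solves a real, clearly defined problem
    weeks_to_execute: int   # target: 2-4 weeks
    needs_new_infra: bool   # should require minimal infrastructure
    has_real_users: bool    # involves real users and measurable outcomes
    easy_to_sunset: bool    # can be shut down with minimal loss

def passes_screen(p: ExperimentProposal) -> bool:
    """Fund the experiment only if every criterion holds."""
    return (
        p.problem_defined
        and p.weeks_to_execute <= 4
        and not p.needs_new_infra
        and p.has_real_users
        and p.easy_to_sunset
    )

proposal = ExperimentProposal(True, 3, False, True, True)
print(passes_screen(proposal))  # True
```

A checklist like this keeps the portfolio honest: a proposal that needs new infrastructure or eight weeks of build time is a project, not an experiment.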
Recent examples include:
- A logistics firm developing an AI-powered assistant to summarise daily route issues and generate incident reports for operations managers.
- A financial controller using GPT to draft compliance summaries, later refined by human experts.
- A customer support team deploying a “draft-first” chatbot that reduced first response time by 40%, while retaining human judgement.
None of these required a custom model or proprietary data. They were built using off-the-shelf tools, governed by simple prompts and automation layers.
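As a sketch of what such a simple "automation layer" can look like, here is a reusable prompt template of the kind the logistics example might use. The template wording and field names are illustrative assumptions; a real deployment would send the rendered prompt to an off-the-shelf model API and route the draft to a human reviewer.

```python
# Minimal sketch of a prompt-template layer for a draft-first workflow.
# The template text and field names are illustrative assumptions; the
# rendered prompt would be sent to an off-the-shelf model, and the
# returned draft reviewed by a human before use.
from string import Template

INCIDENT_SUMMARY_PROMPT = Template(
    "You are drafting an incident report for an operations manager.\n"
    "Route: $route\n"
    "Issues reported today:\n"
    "$issues\n"
    "Summarise the issues in three bullet points and suggest one action."
)

def render_incident_prompt(route: str, issues: list[str]) -> str:
    """Fill the template with today's route data before calling a model."""
    return INCIDENT_SUMMARY_PROMPT.substitute(
        route=route,
        issues="\n".join(f"- {issue}" for issue in issues),
    )

prompt = render_incident_prompt(
    route="Depot 7 northbound",
    issues=["Truck 14 delayed 2h at toll plaza",
            "Cold-chain sensor fault on truck 9"],
)
print(prompt)
```

Storing templates like this in a shared library is what turns one team's experiment into an organisational asset.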
Making Failure Useful
Experimentation should not be measured only by success rate, but by the speed and quality of learning. A healthy AI experimentation programme expects that 70% of trials may not scale; each one, however, offers learnings that feed into future cycles.
Well-run teams build internal libraries of prompts, reusable workflows, and learnings from failed attempts. Over time, this institutional knowledge becomes a competitive advantage – one that cannot be replicated by competitors simply purchasing the same tools.
Organisations that reward learning, and not just output, will outpace those that reward only compliance.
How to Get Started
For leaders unsure where to begin, a simple playbook may help:
1. Identify Friction: Ask every business unit to list the top three repetitive or error-prone tasks they wish AI could assist with.
2. Prioritise Use-Cases: Focus on internal workflows, research tasks, and customer-facing touchpoints that are rule-based.
3. Ring-Fence Resources: Dedicate 10% of capex or 10% of team time, whichever is more feasible, to AI-focused trials.
4. Timebox Execution: Every experiment should be designed for delivery, testing, and evaluation within 4 to 6 weeks.
5. Celebrate Learnings: Create a public space to share what worked, what didn’t, and what should be tried next.
The goal is to create a flywheel of action, insight, and iteration.
The Real Risk Is Inaction
AI experimentation is no longer a luxury. It is an operational necessity. The most valuable outcome of a 10% investment is not the tools you build, but the capability your team develops in understanding, adapting to, and shaping this new paradigm.
The winners of this AI transition won’t be the ones who wait for perfect clarity. They will be the ones who move early, learn fast, and compound their insights through relentless small bets.
In an age where intelligence is becoming a commodity, the ability to experiment intelligently may well be your organisation’s strongest moat.