Moneycontrol PRO

AI's Revolutionary Promise: Progress, cost, and consequence

The latest AI advances showcase extraordinary potential to transform work, communication, and creativity. However, they also raise urgent concerns around energy consumption, equity, privacy, and inadequate governance frameworks 

May 29, 2025 / 10:13 IST

The artificial intelligence (AI) advancements unveiled by Google and Microsoft in May 2025 are revolutionising how technology integrates into daily life. What seemed like science fiction just a few years ago is now within reach. The sophistication of these systems, their multimodal capabilities, and their potential to augment human intelligence in unprecedented ways deserve genuine admiration. From Google's AI Mode, which transforms search with real-time camera interactions and agentic shopping assistants, to Microsoft's autonomous GitHub Copilot and enterprise-grade Discovery platform, these innovations dramatically expand what is possible. From real-time multilingual translation in Google Meet to autonomous code generation in Copilot, they are remarkable achievements of modern engineering that demonstrate AI's transformative potential.

They stitch together disparate workflows efficiently, make creativity more accessible and egalitarian, and bridge communication gaps at a scale never before imagined. Yet, as we marvel at these breakthroughs, it is necessary to confront the less glamorous realities: these systems incur enormous financial, environmental, and societal costs, and existing governance frameworks may not yet be capable of managing their ethical complexities.

Cost of New Technology

The price at which these technologies are available, and whether that pricing creates inequities, is something the world needs to keep an eye on. Google's AI Ultra subscription, priced at $249.99 monthly, is one example of the financial barrier to accessing cutting-edge AI. While discounts may ease initial adoption, such pricing risks widening the digital divide, reserving advanced tools like Gemini 2.5 Pro and Deep Think mode for those who can afford premium tiers. Microsoft's Azure AI Foundry and Windows AI Foundry, though democratising model deployment, still require substantial computational infrastructure, privileging enterprises with deep pockets over smaller developers. The shift toward agentic AI, systems that act autonomously, introduces hidden costs too. For instance, Google's Project Mariner, which handles event bookings, and Microsoft's multi-agent trip planners rely on continuous cloud compute resources, driving up energy consumption. Training models like Gemini 2.5 and GPT-4o demands vast data centres, whose carbon footprints remain largely unaddressed in corporate sustainability reports.

Energy and Infrastructure Crisis

The environmental toll of these AI systems is perhaps the most underappreciated challenge. Google’s Live Search, which processes real-time camera feeds, and Microsoft’s on-device Phi 4 mini model for Edge browser AI tasks, both depend on sprawling server farms and optimised hardware. While companies tout efficiency gains, the sheer scale of deployment—Google’s AI Overviews now serve 1.5 billion users monthly—suggests net energy use will rise sharply. Microsoft’s partnership with xAI to host Grok3 on Azure further compounds this, as training and inference for large language models (LLMs) are notoriously energy-intensive. Without a parallel investment in renewable energy grids, the AI revolution risks undermining global climate goals.

It must be accepted that the computational muscle powering these advancements comes at a staggering cost, even with algorithmic and hardware improvements like those popularised by DeepSeek and others. Training models like Gemini 2.5 Pro and GPT-4o requires data centres consuming 25 terawatt-hours annually, equivalent to powering 2.3 million US households. Inference operations, the daily deployment of AI for tasks like Live Search queries or GitHub Copilot suggestions, are even more voracious: current estimates suggest AI systems globally could soon consume 23 gigawatts, rivalling the Netherlands' entire electricity usage. Microsoft's Azure-hosted Grok3 and Google's AI Overviews exemplify this paradox: each user convenience incrementally strains energy grids. While companies tout efficiency gains through specialised chips like TPU v6 and on-device Phi 4 models, the sheer scale of deployment risks negating these benefits. Without accelerated renewable energy adoption, AI's carbon footprint could undermine the world's climate commitments.
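The household equivalence above is straightforward arithmetic to verify. A quick back-of-the-envelope sketch (assuming an average US household consumption of roughly 11,000 kWh a year, a figure not stated in this piece) shows how the cited numbers line up:

```python
# Back-of-the-envelope check of the energy figures cited above.
# Assumed: average US household electricity use of ~11,000 kWh/year.

TRAINING_TWH = 25        # cited annual data-centre consumption for training
HOUSEHOLD_KWH = 11_000   # assumed per-household annual usage

households = TRAINING_TWH * 1e9 / HOUSEHOLD_KWH  # 1 TWh = 1e9 kWh
print(f"{households / 1e6:.1f} million households")  # ~2.3 million

# Converting the 23 GW inference estimate to annual energy for comparison
# with national electricity consumption:
INFERENCE_GW = 23
annual_twh = INFERENCE_GW * 8760 / 1000  # GW x hours per year, in TWh
print(f"{annual_twh:.0f} TWh per year")
```

The conversion is illustrative only; the underlying consumption estimates are the article's cited figures, not independent measurements.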

Battling Privacy, Autonomy, and Bias 

Ethical concerns cannot just be wished away. Google's virtual try-on feature, which uses personal photos to simulate apparel, raises valid concerns about data consent and biometric privacy. Google's AI Mode, which personalises responses using private search histories, will stretch the limits of existing privacy frameworks. Similarly, Microsoft's NLWeb framework, which enables chatbots to access proprietary website data via the MCP protocol, could expose sensitive information if security protocols are not adhered to, even inadvertently. Agentic AI introduces deeper dilemmas: when Google's Gemini books a restaurant or Microsoft's Copilot drafts code autonomously, who bears liability for errors? Accountability mechanisms remain undefined, and these systems' "black box" nature makes assigning responsibility harder still, particularly in critical domains like healthcare and finance. Bias, too, remains a persistent threat. Despite advances, AI models trained on historical data risk further entrenching inequalities, especially in AI-driven drug programmes.

Current approaches to data privacy, intellectual property, and algorithmic accountability were not designed for systems that can generate human-like content, make autonomous decisions, or process personal information at unprecedented scales. The legal and ethical frameworks that govern AI deployment vary dramatically across jurisdictions, creating a patchwork of regulations that multinational AI systems must navigate.

Educational institutions and workforce development programmes face similar challenges in preparing society for an AI-integrated future. While these new AI capabilities promise to enhance human productivity and creativity, those same capabilities will, by their very nature, displace certain types of work and render some skills obsolete.

Governance Gaps and Policy Imperatives

Current regulatory frameworks are not keeping pace with AI's rapid march. Public policy rarely keeps up with technological progress: not all issues are apparent at the start, and no single entity can spot every problem, understand it, and then develop reasonable policy. Feedback loops are slow, if not non-existent. Google's Deep Search and Microsoft's Discovery platform, which automate research and scientific workflows, operate in a regulatory space where standards for validation and reproducibility are poorly defined or absent altogether. The lack of global consensus on AI ethics exacerbates these risks. For instance, Microsoft's MCP protocol, while enabling interoperability, could centralise power over AI ecosystems among tech giants; what that could do to competition is not difficult to imagine. The regulatory gaps are conspicuous: while the EU's AI Act mandates energy reporting for model training, it ignores the inference phase, which is responsible for 80% of AI's power usage. Malaysia's 2025 AI Ethics Framework proposes solutions, including third-party bias audits, watermarking of AI-generated content, and sustainability-linked tax incentives, but global coordination lags.

Moreover, the governance challenges are particularly complex because AI systems operate across traditional regulatory boundaries. A conversational AI trained on global data, deployed through cloud infrastructure spanning multiple countries, and used by individuals worldwide defies neat categorisation under existing regulatory schemes. Questions of jurisdiction, liability, and enforcement become increasingly murky as AI systems become more autonomous and their decision-making processes more opaque.

Proactive Policymaking

Policymakers must prioritise several actions. There is an urgent need for sensible energy regulation: transparency in AI-related energy use should be mandated, and corporate tax incentives should be tied to renewable energy adoption. Public disclosure of energy use per AI query and per model, modelled on Malaysia's carbon reporting standards, would inform both corporate strategy and user behaviour. Such mandates cost little but can bring much-needed transparency. The next step is to push for renewable energy: Microsoft's cloud investments and Google's server farm networks could be leveraged to create renewable microgrids, transforming data centres into clean energy hubs.

To tackle the problem of access equity, AI tools for public institutions and SMEs could be subsidised to prevent a two-tiered digital society. For ethical oversight, third-party bias and safety assessments should be enforced for high-stakes AI applications, akin to clinical trials for pharmaceuticals. Liability frameworks must also be made robust, clarifying legal responsibility for AI-driven decisions, particularly in healthcare, finance, and autonomous systems.

These are only suggestions, but the larger point is that these issues need serious thought. Any policy action should follow a thorough cost-benefit analysis, so as not to be too onerous for businesses; otherwise these efforts will die a silent death.

Collaboration Over Competition

The trajectory set by Google and Microsoft need not lead to dystopia. Their innovations, like Google's crisis response enhancements and Microsoft's AI-augmented scientific research, are genuinely exciting. Realising this potential, however, requires collaboration among nations and private entities. Public-private partnerships could pool resources for greener AI infrastructure, while open-source initiatives, such as Microsoft's contributions to the MCP standard, must be prioritised to prevent monopolisation. International bodies like the UN and IEEE need to speed up the development of global AI ethics standards, ensuring that technologies like Gemini's Agent Mode and Copilot's autonomous coding conform to human rights principles.

The AI era demands not just admiration for what’s possible, but vigilance against what could be lost.


Arindam Goswami is a software professional and a Research Scholar at The Takshashila Institution. Views are personal, and do not represent the stand of this publication.
