AI Energy Use and Risks Debated at Summit

Mark Bennett

Politeness, Compute Power and Data Centers

At the NDTV AI Summit, former UK Prime Minister Rishi Sunak and AI researcher Stuart Russell tackled growing public concerns about artificial intelligence, focusing on its energy demands and long-term implications.

Sunak shared a personal anecdote about his daughters including “please” and “thank you” in chatbot interactions. While acknowledging the politeness, he noted that AI systems are not human and added that additional wording increases computational load. His remark highlighted a broader issue: every AI query requires processing power inside data centers, where specialised chips perform multiple layers of neural-network calculations to generate responses.

Russell responded that polite phrasing does not meaningfully change how AI systems behave. He also argued that AI’s electricity usage should be assessed in context alongside other energy-intensive sectors.

How Much Power Does AI Use?

Research comparing digital services suggests a single generative AI query can consume several times more electricity than a standard web search. Traditional search engines retrieve pre-indexed results, while generative models compute each answer on demand, running large neural networks for every response.

Academic studies show energy consumption per query varies widely depending on factors such as prompt length, response size, model architecture, and hardware efficiency. More complex outputs generally require more computational cycles.
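The comparison above can be made concrete with a back-of-envelope sketch. The per-query figures below are illustrative assumptions in the spirit of commonly cited estimates, not numbers from this article; actual values vary with the factors listed above.

```python
# Illustrative comparison of per-query electricity use.
# Both constants are ASSUMED values for demonstration only.
SEARCH_WH_PER_QUERY = 0.3   # assumed: a conventional web search, in watt-hours
GENAI_WH_PER_QUERY = 2.9    # assumed: a generative AI response, in watt-hours

def relative_cost(genai_wh: float, search_wh: float) -> float:
    """How many times more energy a generative query uses than a search."""
    return genai_wh / search_wh

def daily_energy_kwh(queries_per_day: int, wh_per_query: float) -> float:
    """Total daily energy in kilowatt-hours for a given query volume."""
    return queries_per_day * wh_per_query / 1000.0

ratio = relative_cost(GENAI_WH_PER_QUERY, SEARCH_WH_PER_QUERY)
print(f"Generative query uses roughly {ratio:.1f}x a standard search")

# At 100 million generative queries per day under these assumptions:
print(f"{daily_energy_kwh(100_000_000, GENAI_WH_PER_QUERY):,.0f} kWh/day")
```

Changing either constant shifts the result substantially, which is the article's point: per-query energy is not a fixed quantity.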

At a broader level, AI’s rapid adoption is contributing to rising electricity demand from data centers. Research cited by Reuters from the Electric Power Research Institute estimates that data centers could account for up to 9% of U.S. electricity use by 2030, more than double current levels. Separate analysis supported by the U.S. Department of Energy projects that data-center power demand could nearly triple in the coming years, driven partly by AI workloads.
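The arithmetic behind "more than double" can be sketched directly from the projection. The current-share figure below is an assumption chosen only to illustrate the implied growth; the article gives the 2030 estimate, not today's exact share.

```python
# Growth implied by the EPRI projection cited above: up to 9% of U.S.
# electricity by 2030, described as more than double current levels.
projected_share_2030 = 0.09   # from the cited estimate
assumed_current_share = 0.04  # ASSUMED for illustration only

growth_factor = projected_share_2030 / assumed_current_share
print(f"Implied growth over the period: {growth_factor:.2f}x")
```

Any current share below 4.5% is consistent with the "more than double" characterization.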

Global reporting has pointed to similar trends, with AI infrastructure expansion accelerating electricity demand and potentially requiring significant new power generation capacity.

Efficiency and Optimization

Despite these projections, researchers emphasize that energy use is not fixed. Advances in hardware design, cooling systems, and model optimization can reduce power consumption without sacrificing performance. Smarter deployment strategies and improved chip efficiency may also limit the sector's long-term energy impact.

As a result, the future energy footprint of AI will depend not only on demand growth but also on technological improvements in efficiency.

Long-Term Questions About AI Capabilities

Beyond energy concerns, Russell addressed broader debates about the trajectory of advanced AI systems. He noted that some technology companies openly aim to build machines that exceed human-level intelligence, raising questions about safety and control.

He referenced early warnings from Alan Turing, who speculated that machines could one day surpass human intellectual abilities. Today, discussions often revolve around concepts such as artificial general intelligence and superintelligence, theoretical systems that do not yet exist but are actively debated in research and policy circles.

Russell stressed that current AI systems operate within defined computational frameworks and lack consciousness or independent intent. Nonetheless, scientists and policymakers continue examining potential long-term societal impacts as systems grow more capable.

Balancing Infrastructure and Innovation

The summit underscored two parallel realities. Artificial intelligence depends on expanding computing infrastructure that carries measurable energy costs. At the same time, debates about its future development and potential risks remain ongoing across academia, industry, and government.

As AI adoption accelerates, both its environmental footprint and its broader implications are likely to remain central topics in global technology discussions.
