Google Claims Gemini AI Prompt Uses Less Energy Than 9 Seconds of TV
Google asserts per-prompt AI efficiency, but critics question if these small figures account for the technology's full environmental toll.
August 22, 2025

In a move to address the growing concerns over the environmental footprint of artificial intelligence, Google has released figures suggesting a single prompt in its Gemini model consumes less energy than watching television for nine seconds.[1][2][3][4][5][6][7][8] The technology giant claims that a median text query to its AI uses a minuscule 0.24 watt-hours of energy, produces just 0.03 grams of carbon dioxide equivalent, and consumes 0.26 milliliters of water, an amount comparable to about five drops.[9][1][3][10][5][6] These figures, detailed in a technical paper and a series of blog posts, represent an effort by Google to bring more transparency to the often-opaque world of AI's energy consumption and to position its technology as a leader in efficiency.[11][1][2] However, the announcement has been met with skepticism from some industry experts who argue the numbers, while seemingly small, may not paint a complete picture of the technology's true environmental cost.[5][12][13][7]
Google's central assertion is that through a "full-stack" approach to efficiency—encompassing everything from more efficient model architectures and custom-built Tensor Processing Units (TPUs) to improvements in data center operations—it has achieved dramatic reductions in the environmental impact of its AI.[11][1] The company reports that over a recent 12-month period, the energy consumption for a median Gemini text prompt plummeted by a factor of 33, while the associated carbon footprint fell by a factor of 44.[11][1][3][10] The published methodology is, according to Google, more comprehensive than many public estimates because it accounts for the entire AI serving infrastructure, including the power drawn by active AI accelerators, host systems, idle machines, and data center overhead like cooling.[2][10] The company argues that narrower methodologies, which might only consider the active computer chip, can substantially underestimate the real operational footprint of AI at scale.[1][2]
The comparison to television usage is a key part of Google's messaging, aiming to frame the energy cost of an AI prompt in familiar, everyday terms. Modern televisions consume on average between 50 and 200 watts of power.[14][15][16][17] A 100-watt television running for nine seconds uses 900 joules, or 0.25 watt-hours, placing Google's claim of 0.24 watt-hours for a Gemini prompt squarely within that range. While the per-prompt impact appears minimal, the cumulative effect of hundreds of millions of users making billions of queries daily raises significant questions about the overall energy demand of the rapidly growing AI industry.[2][8] The International Energy Agency has projected that the electricity consumption of data centers, largely driven by AI, could more than double by 2030.[8]
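The TV-equivalence arithmetic above can be sketched as a quick back-of-the-envelope check. The 0.24 Wh per-prompt figure is Google's published number; the TV wattages and the one-billion-prompts-per-day volume below are illustrative assumptions, not reported statistics.

```python
# Back-of-the-envelope check of the TV comparison.
# PROMPT_WH is Google's reported median energy per Gemini text prompt;
# the TV wattages and daily query volume are illustrative assumptions.

PROMPT_WH = 0.24  # watt-hours per prompt (Google's reported median)

def tv_seconds_equivalent(tv_watts: float, prompt_wh: float = PROMPT_WH) -> float:
    """Seconds of TV viewing that consume the same energy as one prompt."""
    return prompt_wh * 3600 / tv_watts  # Wh -> joules, then divide by watts

for watts in (50, 100, 200):
    print(f"{watts} W TV: {tv_seconds_equivalent(watts):.2f} s per prompt")
# A 100 W set lands at 8.64 s -- roughly the "nine seconds" in the headline.

# Cumulative scale: an assumed 1 billion prompts/day at 0.24 Wh each
daily_mwh = 1e9 * PROMPT_WH / 1e6  # Wh -> MWh
print(f"1B prompts/day ~= {daily_mwh:.0f} MWh/day")
```

Even at a fraction of a watt-hour per query, the assumed billion daily prompts sum to hundreds of megawatt-hours per day, which is why per-prompt figures alone do not settle the aggregate-demand question.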
Despite Google's push for transparency, several independent researchers have raised critical questions about the company's calculations, particularly concerning water consumption and carbon emissions accounting.[5][12][7] Experts like Shaolei Ren, an associate professor at the University of California, Riverside, argue that Google's figure of five drops of water per prompt is misleading because it only accounts for the direct water used for cooling its data centers.[5][12][7] It omits the significantly larger volume of "indirect" water consumed by power plants to generate the electricity that powers those data centers.[7] This omission, critics say, presents only the "tip of the iceberg" of the true water footprint.[5][12] Furthermore, scrutiny has been applied to Google's use of "market-based" emissions reporting, which allows the company to subtract renewable energy purchases.[12][7][18][19] Researchers suggest that a "location-based" measure, reflecting the actual mix of energy sources on the local grid, would provide a more accurate and less flattering picture of the immediate environmental impact.[5][12][7]
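The gap between the two accounting methods critics describe can be illustrated with a small calculation. The grid carbon intensity and the renewable-matching fraction below are hypothetical example values chosen for illustration, not Google's actual figures.

```python
# Illustrative comparison of location-based vs market-based emissions
# accounting for a single prompt. The grid intensity and the
# renewable-matched fraction are hypothetical example values.

PROMPT_WH = 0.24                   # Google's reported median energy per prompt
GRID_INTENSITY_G_PER_KWH = 400.0   # assumed local-grid average (gCO2e/kWh)
RENEWABLE_MATCHED_FRACTION = 0.9   # assumed share offset by renewable purchases

kwh = PROMPT_WH / 1000  # watt-hours -> kilowatt-hours

# Location-based: emissions from the actual local grid mix, no credits
location_based_g = kwh * GRID_INTENSITY_G_PER_KWH

# Market-based: renewable-energy purchases are subtracted from the footprint
market_based_g = location_based_g * (1 - RENEWABLE_MATCHED_FRACTION)

print(f"location-based: {location_based_g:.4f} gCO2e/prompt")
print(f"market-based:   {market_based_g:.4f} gCO2e/prompt")
```

Under these assumed inputs the market-based figure is an order of magnitude lower than the location-based one, which is the crux of the critics' argument: the choice of accounting method, not just the hardware, drives the headline number.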
The debate over Gemini's energy use highlights a fundamental tension within the AI industry. On one hand, there are undeniable and impressive gains in computational efficiency, as touted by Google.[11][1][10] On the other hand, the explosive growth in AI's deployment and capabilities threatens to outpace these efficiency gains, a phenomenon known as the Jevons paradox, where increased efficiency leads to greater overall consumption.[20] Google's own sustainability reports have shown a significant increase in its overall carbon footprint and water usage in recent years, largely attributed to the expansion of its AI services.[18] This underscores the challenge facing the entire technology sector: reconciling the drive for ever-more-powerful AI with the urgent need for environmental sustainability. As AI becomes more deeply integrated into daily life, the demand for greater transparency and more comprehensive, independently verified accounting of its environmental costs will only intensify.[21]