Syntax and Script

Come on a tech journey with me.
The Hidden Cost Of Your AI Prompts

Chika O., July 1, 2025

Every time you ask ChatGPT a question, servers spin faster in a data center, more energy is consumed, and more fresh water is drawn from its natural sources for cooling.

Your prompt has a cost!

If AI models came with a warning label, it would read: Use responsibly.

Growing up, I remember TV campaigns promoting efficient energy use: switch off electric gadgets and bulbs when they are not in use. But no one is telling us to ease up on our prompts. AI companies are too busy marketing their products and competing with each other to be concerned enough to have this discussion. We need to start the conversation!

According to OpenAI CEO Sam Altman, a single ChatGPT prompt consumes about 0.34 watt-hours of electricity (enough to power a high-efficiency light bulb for a couple of minutes) and a few drops of water, since AI data centers currently use water for cooling.

This figure may be a gross underestimate, since AI companies don't like disclosing their data, and on its own it doesn't sound like much. But even taking it at face value, multiplying these small costs by the millions of prompts ChatGPT handles daily makes the numbers staggering.
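As a rough illustration, here is that multiplication spelled out. The 0.34 Wh figure comes from the Altman quote above; the one-billion-prompts-per-day number is an assumed ballpark for the sake of the arithmetic, not an official OpenAI statistic:

```python
# Back-of-the-envelope estimate of ChatGPT's daily energy use.
# Assumptions (illustrative only):
#   0.34 Wh per prompt  -- Altman's reported figure
#   1 billion prompts/day -- an assumed round number, not official
WH_PER_PROMPT = 0.34
PROMPTS_PER_DAY = 1_000_000_000

daily_wh = WH_PER_PROMPT * PROMPTS_PER_DAY  # 340,000,000 Wh
daily_mwh = daily_wh / 1_000_000            # watt-hours -> megawatt-hours

print(f"~{daily_mwh:,.0f} MWh per day")  # ~340 MWh per day
```

Even under these conservative assumptions, the "few drops" add up to hundreds of megawatt-hours every single day.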

For context, your ChatGPT prompt or query uses roughly 10 to 15 times the energy it takes to run a Google search. And this is just for text generation. Video generation is far more demanding.

LLMs, or Large Language Models, like ChatGPT or Gemini, are AI models trained on massive amounts of data to generate human-like responses. 

When you hit 'Send' on an AI prompt, the LLM is not pulling information from a database; it's running complex neural-network calculations on thousands of GPUs in a data center. It analyzes your prompt using natural language processing and generates a response based on its training and the specific instructions you gave.

This is a simplified version of what happens, and it costs a lot. However, the real cost sink happens during the training of these AI models.

Training GPT-4, which is not even the most recent ChatGPT model, reportedly cost over $100 million. Training Gemini reportedly cost between $30 million and $191 million, even before accounting for staff salaries.

According to Epoch AI, an AI research institute, salaries can account for 29 to 49% of the total cost of building LLMs.

Let’s break down these costs:

  • Computational power: The GPUs and TPUs used for training the models and processing our queries don't come cheap. They cost thousands of dollars each, and they are just the bare hardware essentials.
  • Data curation: LLMs require massive amounts of data for proper training. Teams of data scientists handle data acquisition and cleaning to produce high-quality training datasets. Think about how much content they have to scrape from books, websites, you name it, then check for bias and copyrighted material. They also have to avoid legal trouble while acquiring this data, even though recent news of copyright infringement by AI companies is starting to make trespassing look legal. Anything for AI!
  • Engineering and deployment: Building and deploying the LLM takes another team and more infrastructure: model design, training, testing, ongoing research to improve the models, and even lawyers. These are all salaries to be paid.
  • Maintenance: Once the model is deployed, it requires continuous maintenance and updates based on user feedback.

Training an artificial intelligence model is a capital and human-hours-intensive project, and these costs are growing exponentially with the evolution of AI capabilities. 

The cost of training frontier AI models has grown by a factor of 2–3x every year for the last eight years. If that trend continues, we’re talking over a billion dollars per model by 2027.
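The "over a billion dollars by 2027" claim follows directly from compounding that growth rate. A quick sketch, assuming GPT-4's reported ~$100 million cost around 2023 as the baseline and taking 2.5x per year as the midpoint of the 2-3x range:

```python
# Extrapolating frontier-model training costs by compounding growth.
# Assumptions (illustrative): ~$100M baseline circa 2023 (GPT-4's
# reported cost), growing 2.5x/year (midpoint of the 2-3x range).
base_cost_musd = 100        # millions of USD, 2023
growth_per_year = 2.5
years = 2027 - 2023

cost_2027_musd = base_cost_musd * growth_per_year ** years
print(f"~${cost_2027_musd / 1000:.1f} billion per model by 2027")
# ~$3.9 billion per model by 2027
```

Even the low end of the range (2x per year) lands at $1.6 billion, so "over a billion" is, if anything, conservative.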

So, we’ve established that building and running AI models is a resource-intensive process. This brings us to the controversies surrounding the costs of the training and the data centers where AI’s processing power is physically located. 

There have been accusations of encroachment and concerns about pressure on land, freshwater, and electricity grids from AI data centers. All of these are finite resources (at least on Earth).

Think about this: the water used to cool data centers is fresh water, often pulled from the same sources that feed farms and cities. A single hyperscale data center can use up to 1.5 million litres of water per day. Northern Virginia in the US hosts almost 600 data centers, the highest concentration of data centers in the world.
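To put 1.5 million litres a day in perspective, here is a quick comparison against an Olympic swimming pool (roughly 2.5 million litres; both figures are approximations used only for scale):

```python
# How many Olympic pools does one hyperscale data center drink per year?
# Assumptions (approximate, for scale only):
LITRES_PER_DAY = 1_500_000        # reported upper bound for one site
OLYMPIC_POOL_LITRES = 2_500_000   # rough volume of an Olympic pool

pools_per_year = LITRES_PER_DAY * 365 / OLYMPIC_POOL_LITRES
print(f"~{pools_per_year:.0f} Olympic pools per year, per site")
# ~219 Olympic pools per year, per site
```

And that is a single site, before multiplying by the hundreds of data centers clustered in regions like Northern Virginia.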

And even though this water eventually returns to the water cycle as rain, it may fall where it cannot be used as potable water, and even closed-loop systems that reuse the water come with their own challenges. Dumping hot water back into the source harms aquatic plants and animals, along with the ecosystems that depend on them.

The water requirement for AI data centers has been reported to put a strain on the water supply of the communities where they are located.

Then there’s the large carbon footprint from the training phase of getting the LLM ready to use. Training large models can emit hundreds of tons of CO₂, and AI-related emissions are expected to grow 11-fold by 2030.

Despite these concerns, the AI industry is still booming, and given the massive compute requirements of AI, top tech companies are pouring billions into building more data centers. However, finding land to site these facilities is becoming increasingly difficult, as communities do not want these massive electric hubs near them.

Energy-wise, the International Energy Agency says data center electricity demand will double by 2030, reaching the equivalent of Japan’s current energy consumption.

These AI costs affect not just electricity grids. They affect the local communities where the data centers are situated. A case in point is xAI’s Memphis data center pollution allegations, where the local community alleges that “xAI has installed and operated at least 35 combustion turbines and other sources of air pollution at the Colossus site …”

So what is the industry doing about these problems?

To mitigate the water resource strain, big tech companies have begun exploring water-free liquid cooling techniques for data centers. However, at the time of writing, this solution has not yet gained widespread adoption.

For electricity, companies like Google, Microsoft, and Amazon are eyeing nuclear energy as a clean and scalable solution. 

And yes, nuclear power might actually be the only solution to this energy-guzzling AI problem because it is a highly efficient source of energy, but it doesn’t come without baggage. Nuclear energy generation creates radioactive waste that can remain dangerous for tens of thousands of years.

Right now, Finland is the only country with a permanent nuclear waste disposal solution, which involves burying the waste 400 meters deep in the earth.

Everywhere else? Nuclear waste is stored temporarily at decommissioned nuclear plants or entombed in steel and concrete. This is very expensive, and safe for now, until the waste can be moved to a permanent disposal site, or something goes wrong.

So, where does all this leave us with AI advancements and needs?

Conclusion

First off, it’s easy to forget that behind every clever AI answer is a chain of computation, infrastructure, human teams, and environmental impact. 

Every prompt you type leaves a footprint. It would be better for big tech companies and governments to take more responsibility for AI, but in truth, that responsibility falls on all of us.

We don’t need to stop using AI. We just need to use it smarter.

You can do your part to reduce emissions by:

  • being more concise with your prompts,
  • batching your queries instead of firing off ten in a row,
  • or choosing lighter AI models for less compute-intensive queries.
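To see why batching matters, here is a toy comparison using the per-prompt figure from earlier. This is a deliberate oversimplification: real per-prompt energy varies with model size and response length, so a single long batched prompt won't literally cost the same as one short one. Treat it as an upper-bound illustration, not a measurement:

```python
# Toy comparison: ten separate prompts vs. the same questions batched.
# Assumption (oversimplified): a flat 0.34 Wh per prompt, regardless
# of length. Real energy use scales with response length.
WH_PER_PROMPT = 0.34

separate = 10 * WH_PER_PROMPT  # ten one-off questions
batched = 1 * WH_PER_PROMPT    # the same ten questions in one prompt

print(f"saved ~{separate - batched:.2f} Wh")  # saved ~3.06 Wh
```

The per-request overhead is tiny, but as the earlier arithmetic showed, tiny numbers multiplied by millions of users stop being tiny.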

For consumers like me, this can mean using perplexity.ai instead of ChatGPT if my query is on something that doesn’t require much depth. 

For the more technical AI prompters, here’s an article by the Open Data Science Community on some of the best lightweight LLMs in 2025.

Hopefully, with advancements in the technologies surrounding AI, we will be able to fully harness its power without further harming the planet. Because if left unchecked, some of these strains on scarce resources can lead to far-reaching effects that generations after us will have to deal with.
