Innovation comes with impact. The social media revolution changed how we share content and how we buy, sell and learn, but it also raised questions about technological abuse, censorship and data protection. Every time we take a step forward we have to tackle new challenges, and AI is no different.
One of the biggest challenges for AI is energy consumption. Together, data centers and AI currently use between 1% and 2% of the world's electricity, and this figure is rising rapidly.
To make things more complicated, these estimates shift as AI technologies and usage patterns evolve. In 2022, data centers, including AI and cryptocurrency workloads, used around 460 TWh of power. At the beginning of 2024, it was projected that they could consume up to an extra 900 TWh by 2030. At the beginning of 2025, this figure was radically revised down to around 500 TWh, largely due to more efficient AI models and data center technologies. To put this in context, demand from the electric vehicle industry will probably reach 854 TWh by 2030, with domestic and industrial heating at around 486 TWh.
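To keep these projections straight, here is a minimal sketch in Python that compares them. The figures are the rough point estimates quoted above, drawn from different sources, so the comparison is indicative rather than exact:

```python
# Rough 2030 electricity-demand projections quoted in the text, in TWh.
# These come from different sources and are not strictly comparable.
projections_2030_twh = {
    "data centers incl. AI (2025 revision)": 500,
    "electric vehicles": 854,
    "domestic and industrial heating": 486,
}

baseline_2022_twh = 460  # data centers incl. AI and crypto, 2022

# Rank sectors by projected demand and express each as a multiple of
# the 2022 data center baseline.
for sector, twh in sorted(projections_2030_twh.items(), key=lambda kv: -kv[1]):
    print(f"{sector}: {twh} TWh ({twh / baseline_2022_twh:.1f}x the 2022 baseline)")
```

Seen this way, even the revised data center projection is only a modest rise over 2022, and smaller than the expected demand from electric vehicles.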
However, this growth is still significant, and everyone, providers and users alike, has a duty to ensure that AI tools are used as efficiently as possible.
Global Environment Director, OVHCloud.
How is AI infrastructure becoming more efficient?
Whether it is Moore's law, which tells us we will see more transistors on the same size of chip, or Koomey's law, which tells us we will see more computations per joule of energy, computing has always become more efficient over time, and GPUs, the "engines" of AI, will certainly follow that trend.
Looking back at the period between 2010 and 2018, the amount of compute performed by data centers increased by 550%, yet their energy consumption increased by only 6%. We are already seeing this kind of improvement in AI workloads, and we have many reasons to be optimistic about the future.
We are also seeing increased adoption of liquid cooling technologies. According to MarketsandMarkets, the market for liquid cooling in data centers will grow almost tenfold over the next seven years. Water has a much higher thermal conductivity than air, making liquid cooling techniques more efficient (and therefore cheaper) than air cooling. This is ideal for AI workloads, which tend to draw more power and run hotter than non-AI workloads. Liquid cooling dramatically improves data centers' power usage effectiveness.
We are also seeing significant innovation within liquid cooling itself. Historically, data centers have used direct liquid-to-chip cooling (DLTC), where cold plates sit on CPUs or GPUs. As power draw (and consequently heat) rises, we are seeing more immersion cooling, where the entire server is submerged in a non-conductive liquid so that all components can be cooled at once.
Immersion cooling can even be combined with DLTC, so that components that typically run "hot" (such as the CPU and GPU) receive extra cooling capacity, while the rest of the server is cooled by the surrounding liquid.
How can we make AI more resource-efficient?
Beyond power, we should consider water as a resource in its own right. Take a standard internet search: an AI-assisted search uses about 25 ml of water, whereas a non-AI search uses 50 times less, around half a millilitre. At industrial scale, a recent National Renewable Energy Laboratory test case found that smart water cooling reduced water consumption by approximately 56%; in their case, more than a million litres of water per year.
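The arithmetic behind those figures can be sketched in a few lines. The per-search values are the averages quoted above, treated here as rough estimates rather than measurements; the implied baseline consumption is back-calculated from the NREL numbers:

```python
# Per-query water figures quoted in the text (rough averages, not measurements).
AI_SEARCH_ML = 25.0       # water per AI-assisted search, in millilitres
NON_AI_SEARCH_ML = 0.5    # water per conventional search

ratio = AI_SEARCH_ML / NON_AI_SEARCH_ML
print(f"An AI-assisted search uses {ratio:.0f}x more water per query")

# NREL smart water-cooling test case: ~56% reduction, saving over
# 1,000,000 L/year. Back-calculate the implied pre-optimization baseline.
saved_l_per_year = 1_000_000
reduction = 0.56
implied_baseline = saved_l_per_year / reduction
print(f"Implied baseline consumption: ~{implied_baseline:,.0f} L/year")
```

The second calculation only holds if the quoted million litres corresponds exactly to the 56% reduction; the article gives both figures but does not state the baseline directly.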
It is also important to think about the minerals our infrastructure uses, because they do not exist in isolation. Reusing components where possible, and recycling them where it is not, can be a hugely efficient way to avoid unnecessary purchases and reduce the environmental impact of AI.
Consider lithium, for example, a key component in electric cars. Producing one tonne of the metal can require up to half a million litres of water and generate fifteen tonnes of CO2. At the same time, there is a geopolitical element to our use of resources: around a third of our nickel, used in heat sinks, came from Russia.
In many cases it is even possible to recover certain metals. With pyrolysis, for example, you can obtain "black" copper from complex components, then separate the elements via electrolysis to recover pure copper, nickel, iron, palladium, titanium, silver and gold, turning e-waste into valuable assets. While this will never be a significant revenue stream, it is a powerful example of sustainability as an income generator rather than a cost center!
How can users make their AI processes more efficient?
It is not enough for users to rely on data center operators and equipment manufacturers to reduce energy consumption and CO2 footprints. All organizations must be aware of their energy consumption and ensure that their operations are, wherever possible, sustainable by design.
To give a hands-on example, AI model training is rarely latency-sensitive, because it is usually not a user-facing process. This means it can be done anywhere, and should therefore be done in locations with greater access to renewable energy. For example, a company that trains its models in Canada rather than in Poland cuts the CO2 footprint of that training by about 85%.
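The reasoning here is simple: a training run's footprint is roughly the energy it consumes multiplied by the grid's carbon intensity. The sketch below illustrates this; the intensity values are illustrative assumptions (not figures from the article), chosen to reflect a hydro-heavy grid versus a coal-heavy one and to reproduce the roughly 85% gap quoted above:

```python
# Footprint ~= energy consumed x grid carbon intensity.
# Intensity values (gCO2e per kWh) are illustrative assumptions; real
# grid intensities vary by year and by region within each country.
GRID_INTENSITY_G_PER_KWH = {
    "Canada": 110,   # assumed: hydro-heavy grid
    "Poland": 750,   # assumed: coal-heavy grid
}

def training_footprint_kg(energy_mwh: float, country: str) -> float:
    """CO2e footprint in kg for a training run consuming `energy_mwh` MWh."""
    grams = energy_mwh * 1_000 * GRID_INTENSITY_G_PER_KWH[country]
    return grams / 1_000  # grams -> kg

run_mwh = 100  # hypothetical training run
ca = training_footprint_kg(run_mwh, "Canada")
pl = training_footprint_kg(run_mwh, "Poland")
print(f"Canada: {ca:,.0f} kg CO2e, Poland: {pl:,.0f} kg CO2e")
print(f"Reduction from choosing Canada: {1 - ca / pl:.0%}")
```

Because the energy term cancels out of the ratio, the percentage saving depends only on the two grids' relative intensities, which is why location alone can move the footprint so dramatically.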
At the same time, it is important to be pragmatic about AI infrastructure. According to Intel PCF / OVHCloud LCA data, an Nvidia H100 has a cradle-to-gate (production) CO2 footprint about three times higher than an Nvidia L4, which underlines how important it is for organizations to understand which GPUs they actually need for the task at hand.
In many cases the newest GPU will matter, especially when organizations are racing to bring applications to market, but in others a lower-spec, more sustainable GPU will do the same work.
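One way to reason about that trade-off is to amortize each card's embodied (cradle-to-gate) footprint over the work it will actually do. In the sketch below, only the roughly 3:1 H100-to-L4 ratio comes from the article; the absolute footprints and job counts are hypothetical placeholders:

```python
# Illustrative embodied (cradle-to-gate) footprints in kg CO2e.
# Only the ~3:1 ratio is from the text; absolute values are assumptions.
EMBODIED_KG_CO2E = {
    "H100": 1500,  # assumed
    "L4": 500,     # assumed, roughly a third of the H100
}

def embodied_per_job(gpu: str, jobs_over_lifetime: int) -> float:
    """Embodied CO2e (kg) attributed to each job over the card's lifetime."""
    return EMBODIED_KG_CO2E[gpu] / jobs_over_lifetime

# If the faster card does three times the work over its lifetime, the
# per-job embodied cost evens out; if the workload never needs that
# extra throughput, the smaller card is the more sustainable choice.
print(embodied_per_job("H100", 30_000), embodied_per_job("L4", 10_000))
```

The point is not the specific numbers but the framing: production footprint only pays for itself if the extra capability is actually used.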
AI Sustainability: An exercise in attention to detail
There is absolutely no doubt that our power and resource consumption will increase in the future; that is the price of progress. What we can do is set a precedent by making every part of our AI supply chains and processes as efficient as possible from the start, so that future developments build this into their standard business procedures too.
If we make fractional gains wherever possible, they will add up and help ensure that the needs of today do not endanger the world of tomorrow.
We've listed the best cloud computing providers.
This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-techradar-pro