Increased electricity demand from the growing use of artificial intelligence should be factored into load-growth management and policymaking, according to a commentary in the energy research journal Joule.
“There is increasing apprehension that the computational resources necessary to develop and maintain AI models and applications could cause a surge in data centers' contribution to global electricity consumption,” Alex de Vries, a doctoral candidate at the VU Amsterdam School of Business and Economics and the founder of Digiconomist, said in the Joule commentary, “The growing energy footprint of artificial intelligence.”
Generative AI, which is used to create new content such as text, images, or videos, has recently attracted a lot of attention with products such as OpenAI’s ChatGPT and DALL-E. Both tools use natural language processing and follow a common workflow: an initial training phase followed by an inference phase that produces useful output.
In the training phase, an AI model is fed huge datasets. Large language models such as Generative Pre-trained Transformer 3 (GPT-3), from which ChatGPT was developed, reportedly consumed as much as 1,287 megawatt-hours of electricity during training.
Although training has often been considered the most energy-intensive phase of development, and has been the focus of most sustainability research, “there are indications that the inference phase might contribute significantly to an AI model’s life-cycle costs,” the author said.
The inference phase’s energy demand appears considerably higher than the estimated 1,287 MWh used in GPT-3’s training phase, de Vries said. He cited research firm SemiAnalysis, which estimated that OpenAI required 3,617 NVIDIA servers, with a total of 28,936 graphics processing units, to support ChatGPT, implying an energy demand of 564 MWh per day.
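The daily figure follows from the cited counts with simple arithmetic; a minimal sketch, assuming each server draws the 6.5 kW that de Vries quotes for NVIDIA servers later in the commentary:

```python
# Back-of-the-envelope check of the ChatGPT inference figure.
# Server and GPU counts are from the article; the 6.5 kW per-server
# draw is an assumption, taken from the figure de Vries quotes
# elsewhere for NVIDIA servers.
servers = 3_617
gpus_per_server = 28_936 / servers      # = 8 GPUs per server
power_kw_per_server = 6.5               # assumed per-server power demand

total_mw = servers * power_kw_per_server / 1_000   # about 23.5 MW
daily_mwh = total_mw * 24                          # about 564 MWh per day
print(f"{total_mw:.1f} MW combined, {daily_mwh:.0f} MWh per day")
```

Run as written, this reproduces the cited 564 MWh per day, suggesting the article's figures are internally consistent.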
In addition, de Vries said, Google reported that 60 percent of AI-related energy consumption from 2019 to 2021 stemmed from inference. He also noted that Google’s parent company, Alphabet, expressed concern regarding the costs of inference compared to the costs of training.
Google’s energy demand “could substantially increase” if generative AI is integrated into every Google search, de Vries said, citing SemiAnalysis, which estimated that implementing AI similar to ChatGPT in each Google search would require 512,821 NVIDIA servers. “At a power demand of 6.5 kilowatts per server, this would translate into a daily electricity consumption of 80 gigawatt hours and an annual consumption of 29.2 terawatt hours,” de Vries said.
That worst-case scenario suggests Google’s AI alone “could consume as much electricity as a country such as Ireland,” about 29.3 TWh per year, de Vries said. He noted, however, that this “scenario assumes full-scale AI adoption utilizing current hardware and software, which is unlikely to happen rapidly.”
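The numbers in the quote are internally consistent, as a quick calculation shows (all inputs are taken from the article):

```python
# Reproducing the worst-case Google search scenario from the cited figures.
servers = 512_821        # NVIDIA servers estimated by SemiAnalysis
power_kw = 6.5           # per-server power demand quoted by de Vries

daily_gwh = servers * power_kw * 24 / 1e6    # kWh per day -> GWh per day
annual_twh = daily_gwh * 365 / 1_000         # GWh per year -> TWh per year
print(f"{daily_gwh:.0f} GWh per day, {annual_twh:.1f} TWh per year")
# Prints 80 GWh per day and 29.2 TWh per year, the figures in the quote;
# Ireland's roughly 29.3 TWh per year is the article's benchmark.
```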
A more pragmatic projection of worldwide AI-related electricity consumption could be derived from NVIDIA’s sales of AI servers.
The company is expected to deliver 100,000 AI servers in 2023. If operating at full capacity, those servers would have a combined power demand of 650 to 1,020 megawatts and could consume 5.7 to 8.9 TWh annually, the article said. Compared with a historical estimate of 205 TWh of annual data center electricity consumption, “this is almost negligible,” de Vries said. In addition, the supply chain bottleneck for AI servers is likely to persist for several more years, he said.
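The annual range follows from the shipment figure and the stated power band; a short sketch of the arithmetic (the per-server kilowatt range is implied by dividing the power band by the shipment count, not stated directly in the article):

```python
# Bounding annual consumption from NVIDIA's projected 2023 shipments.
# The 650-1,020 MW range implies roughly 6.5-10.2 kW per server.
servers = 100_000
hours_per_year = 8_760

for power_kw in (6.5, 10.2):                        # implied per-server range
    combined_mw = servers * power_kw / 1_000
    annual_twh = combined_mw * hours_per_year / 1e6  # MWh -> TWh
    print(f"{combined_mw:.0f} MW -> {annual_twh:.1f} TWh per year")
# Prints 650 MW -> 5.7 TWh and 1020 MW -> 8.9 TWh, small next to the
# roughly 205 TWh historical estimate for all data centers.
```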
De Vries also noted that server utilization rates will likely be less than 100 percent, which would mitigate part of this potential electricity consumption. And, he said, hardware efficiency improvements and innovations in model architectures and algorithms could help to mitigate, or even reduce, AI-related electricity consumption in the long term.
Nonetheless, increased efficiency often leads to increased demand, de Vries noted.
“While the exact future of AI-related electricity consumption remains difficult to predict, the scenarios discussed in this commentary underscore the importance of tempering both overly optimistic and overly pessimistic expectations,” de Vries said.
“It is probably too optimistic to expect that improvements in hardware and software efficiencies will fully offset any long-term changes in AI-related electricity consumption,” de Vries said. “These advancements can trigger a rebound effect whereby increasing efficiency leads to increased demand for AI, escalating rather than reducing total resource use.”
“It would be advisable for developers not only to focus on optimizing AI, but also to critically consider the necessity of using AI in the first place, as it is unlikely that all applications will benefit from AI or that the benefits will always outweigh the costs,” de Vries said.
In addition, de Vries said, “regulators might consider introducing specific environmental disclosure requirements to enhance transparency across the AI supply chain, fostering a better understanding of the environmental costs of this emerging technological trend.”
House Hearing Examines AI and Energy Demand
The House Energy and Commerce Committee’s Energy, Climate, and Grid Security Subcommittee on Oct. 19 held a hearing, “The Role of Artificial Intelligence in Powering America’s Energy Future,” which examined advances in artificial intelligence applications and the opportunities AI can bring to the energy sector.
Witnesses at the hearing were Edward Abbo, President and Chief Technology Officer of C3 AI; Paul Dabbar, former Under Secretary for Science at the U.S. Department of Energy and Distinguished Visiting Fellow at Columbia University’s Center on Global Energy Policy; Jeremy Renshaw, Senior Technical Executive for AI, Quantum, and Innovation at the Electric Power Research Institute; and Sreedhar Sistu, Vice President of Artificial Intelligence at Schneider Electric.