Artificial intelligence (AI) has undeniably brought both convenience and challenges to various industries. While AI has made significant strides in fields like healthcare and astronomy, its environmental impact and potential harm in other sectors raise questions about its overall benefit. The complex relationship between AI and the environment is prompting calls for more research and transparency.

Teresa Heffernan, a professor and AI researcher at Saint Mary’s University, highlights concerns about the environmental footprint of large language models (LLMs) such as Google’s Bard and ChatGPT. These models, celebrated for their text-based capabilities, consume substantial computing energy during both training and use, and that energy consumption translates into carbon emissions.

Transparency is a key issue, with Heffernan pointing to a lack of openness about the data and processes behind these models. A recent report by the Canadian Institute for Advanced Research (CIFAR), which assessed the environmental impact of AI, focused on the carbon dioxide emitted while training LLMs. The report identified three critical factors: how long a model trains, how much power its hardware draws, and the carbon intensity of the electricity grid supplying it. Together, these determine the emissions from an LLM’s dynamic power consumption.
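As a rough illustration of how those three factors combine, the sketch below multiplies training time, hardware power draw and grid carbon intensity to estimate training emissions. Every input value is an illustrative assumption, not a figure from the CIFAR report.

```python
# Back-of-the-envelope estimate of training emissions from the three factors
# named in the CIFAR report: training time, hardware power usage, and the
# carbon intensity of the electricity grid. All inputs below are assumptions.

def training_emissions_tonnes(
    training_hours: float,          # how long the model trains
    num_accelerators: int,          # GPUs/TPUs running in parallel
    watts_per_accelerator: float,   # average power draw per device
    pue: float,                     # data-center overhead (power usage effectiveness)
    grid_kg_co2_per_kwh: float,     # carbon intensity of the local grid
) -> float:
    energy_kwh = training_hours * num_accelerators * watts_per_accelerator / 1000 * pue
    return energy_kwh * grid_kg_co2_per_kwh / 1000  # kg CO2 -> tonnes

# Hypothetical run: 1,000 GPUs at 300 W for 30 days on a 0.4 kg CO2/kWh grid.
print(round(training_emissions_tonnes(24 * 30, 1000, 300.0, 1.2, 0.4), 1), "tonnes of CO2")
```

On those assumed inputs the estimate comes out around 104 tonnes of CO2; scaling up the hardware or the training time scales the emissions accordingly.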

The research revealed staggering carbon emissions associated with training AI models. OpenAI’s GPT-3, for instance, emitted an estimated 502 tonnes of CO2 during training, roughly the annual emissions of 304 homes. Similarly, DeepMind’s Gopher, a 2021 LLM, released 352 tonnes of CO2 during its training. Importantly, the emissions do not stop once training ends: models keep consuming energy, and producing carbon, every time they respond to a query.

The varied environmental impacts of AI applications

Smaller models such as BLOOM, though seemingly less impactful, still produce about 19 kilograms of CO2 per day during development. That footprint becomes substantial when such models are deployed in user-facing applications like web search, where they can field millions of queries a day.
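To see how a modest daily footprint scales once a model sits behind a popular service, the sketch below multiplies an assumed per-query footprint by a hypothetical query volume. Both numbers are illustrative assumptions, not figures from the report.

```python
# Illustrative scaling of a small per-query footprint across a large user base.
# Both inputs are assumptions chosen for illustration, not reported data.
grams_co2_per_query = 2.0       # assumed footprint of a single model response
queries_per_day = 10_000_000    # hypothetical volume for a busy web service

daily_tonnes = grams_co2_per_query * queries_per_day / 1_000_000  # grams -> tonnes
print(f"{daily_tonnes:.0f} tonnes of CO2 per day")  # prints: 20 tonnes of CO2 per day
```

At millions of queries a day, even a per-response footprint measured in grams adds up to tonnes of CO2 daily, which is the article’s point about user-facing deployment.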

Beyond carbon emissions, AI systems also deplete freshwater reserves, because the heat they generate during operation has to be cooled. Cornell University research indicated that Google’s data centers consumed 12.7 billion liters of fresh water in 2021, while training GPT-3 in Microsoft’s data centers used around 700,000 liters. Even a simple interaction with a model like ChatGPT has been likened to pouring out a 500-milliliter bottle of water for cooling.
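Applying the same scaling logic to the 500-milliliter figure gives a sense of the water cost at scale; the daily conversation count here is a hypothetical assumption.

```python
# Scale the article's 500 ml-per-interaction cooling estimate to a hypothetical
# daily volume of conversations. The conversation count is an assumption.
liters_per_interaction = 0.5        # the 500 ml bottle cited in the article
interactions_per_day = 10_000_000   # hypothetical daily usage

daily_liters = liters_per_interaction * interactions_per_day
print(f"{daily_liters / 1_000_000:.1f} million liters of water per day")  # prints: 5.0 million liters of water per day
```

Even at these rough numbers, cooling water for a single popular service would be measured in millions of liters a day.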

Microsoft’s commitment to environmental responsibility

CTVNews.ca reached out to the companies named in the report about AI’s environmental impact. Microsoft, for instance, emphasized its commitment to sustainability, pledging to invest in research that measures the energy use and carbon impact of AI while improving efficiency and relying on clean energy.

While the report focuses on LLMs, other AI applications also exert environmental pressure. David Rolnick, a computer science professor at McGill University, stressed that the impact of AI depends on its application. AI can be a tool for good in applications like monitoring deforestation, but it can also exacerbate environmental harm when applied, for example, to oil and gas exploration.

Rolnick likens AI to a hammer, emphasizing that its impact depends on how it’s wielded. Many AI algorithms are energy-efficient and play essential roles in various industries, from manufacturing to finance.