The environmental impact of human activity on the planet, and of our technology in particular, has been widely discussed in recent years. Environmental problems that drive climate change are a huge challenge, and we must produce solutions… fast.
A 2019 study from the Allen Institute for AI argued for prioritizing “Green AI” efforts that focus on the energy efficiency of AI systems. In this study we can read that “The computations required for deep learning research have been doubling every few months…” and that “These computations have a surprisingly large carbon footprint…”.
Here is the problem: on the one hand, artificial intelligence can be a powerful tool to combat climate change; on the other, the energy consumed by AI can be excessive. The question arises: what would it take to make AI “greener”?
What would it take to make AI “greener” and more efficient?
Companies need to change their mindset… bigger is (not) always better. To reduce the carbon footprint of artificial intelligence, companies need to promote more holistic, multidimensional model evaluation practices.
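What might “multidimensional evaluation” look like in practice? Here is a minimal sketch that scores candidate models on accuracy per unit of training energy rather than on accuracy alone. The model names and figures are purely illustrative assumptions, not real benchmarks:

```python
# Score candidate models on accuracy AND energy, not accuracy alone.
# All model names and numbers below are illustrative assumptions.

def green_score(accuracy: float, energy_kwh: float) -> float:
    """Accuracy points delivered per kWh of training energy."""
    return accuracy / energy_kwh

candidates = {
    "small_model": {"accuracy": 0.89, "energy_kwh": 120.0},
    "large_model": {"accuracy": 0.91, "energy_kwh": 2400.0},
}

for name, m in candidates.items():
    m["score"] = green_score(m["accuracy"], m["energy_kwh"])

best = max(candidates, key=lambda n: candidates[n]["score"])
print(best)  # the small model wins: +2 accuracy points cost 20x the energy
```

Under these assumed numbers, the marginally less accurate model is by far the greener choice, which is exactly the trade-off an accuracy-only leaderboard hides.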
Also, most companies don’t have the talent to build AI efficiently, but they are aware that AI can elevate their products and services. Our tip… don’t do it alone! Companies should look for partnerships to jumpstart their AI strategies.
It’s important to recognize that AI systems have real environmental impacts, and that we need to mitigate them. For example, did you know that training GPT-3 (a powerful language model by OpenAI) consumed energy roughly equivalent to driving a car from Earth to the moon and back? So, how do we make AI cleaner? Here are some key steps…
Open Sourcing AI research & efficient sharing: One of the major problems is that AI and deep learning research is often shared without code. This needs to go “open source”. By doing so, we promote efficient sharing and reduce the effort needed to replicate results. It’s a relief to see this situation changing, as conferences like NeurIPS are now requiring reproducible code submissions along with research papers.
Increase hardware efficiency: Hardware is improving every day, offering better performance on deep learning tasks as well as greater efficiency (performance per watt). Google, for example, developed TPUs and pushed the entire chip market toward more specialized products. In the next few years, we hope to see other companies bring more focus to hardware for AI workloads.
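“Performance per watt” is simply throughput divided by power draw. A minimal sketch, using assumed, illustrative numbers for a generic GPU versus a hypothetical specialized accelerator:

```python
# Compare accelerators by performance per watt, not raw throughput.
# Hardware figures below are illustrative assumptions, not real specs.

def perf_per_watt(throughput_tops: float, power_watts: float) -> float:
    """Tera-operations per second delivered per watt consumed."""
    return throughput_tops / power_watts

generic_gpu = perf_per_watt(throughput_tops=120.0, power_watts=300.0)  # 0.4
specialized = perf_per_watt(throughput_tops=180.0, power_watts=200.0)  # 0.9

# The specialized chip does 2.25x more work per joule under these
# assumptions, which is what matters for AI's energy footprint.
print(specialized / generic_gpu)
```

This is why specialization matters: the greener chip is not necessarily the one with the biggest headline throughput, but the one that does the most work per joule.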
Understand and democratize deep learning: Deep learning clearly works, but the research community still doesn’t fully understand how or why. Pushing the limits of deep learning’s accuracy remains an exciting area of research, and existing models are already accurate enough to be deployed in a wide range of applications. If more people work on the technology, we are more likely to see surprising innovations in performance and energy efficiency, leading to more accurate and efficient models.
If you liked this article, please check our news section. Don’t miss out, and if you’re struggling with your digital transformation, remember… you are not alone in this. Texter Blue is here to help you achieve the best results! Make sure you read our news and articles and contact us.