The US Department of Energy (DOE) has highlighted both the opportunities and the risks of artificial intelligence (AI) in the management of critical energy infrastructure, according to a risk assessment published on April 29.
AI optimization and potential threats
Artificial intelligence could boost US energy security by helping to analyze vast amounts of data, simulate weather events, and predict maintenance needs. However, AI could also be exploited to sabotage critical infrastructure or mislead decision-makers, underscoring the need for rigorous security practices.
Department of Energy statements
DOE’s Office of Cybersecurity, Energy Security and Emergency Response (CESER) has announced plans to publish an updated assessment later this year, after gathering input from energy sector stakeholders on the opportunities AI presents and on remaining knowledge gaps.
Risks of misuse and cyber attacks
Using faulty AI models to automatically inform or execute decisions can lead to poor outcomes, such as overestimating economic gains at the expense of system reliability. Moreover, machine learning-based systems are themselves vulnerable to attack: an adversary can, for example, “poison” an AI model by injecting erroneous data into its training set.
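To make the poisoning risk concrete, here is a minimal, purely illustrative sketch (not drawn from the DOE assessment): a toy nearest-centroid classifier stands in for any learned anomaly detector, and the sensor readings, cluster positions, and injected points are all invented for the example. Mislabeled "fault-like" points inserted into the "normal" training data drag the model's notion of normal toward the fault region, so a genuine fault reading is no longer flagged.

```python
# Illustrative data-poisoning sketch: a nearest-centroid classifier
# trained on 2D "sensor readings". All data below is hypothetical.

def centroid(points):
    """Mean of a list of 2D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def predict(c_normal, c_fault, x):
    """Classify x by whichever centroid is closer (squared distance)."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(c, x))
    return "fault" if dist(c_fault) < dist(c_normal) else "normal"

# Clean training data: "normal" cluster near (1, 1), "fault" near (5, 5).
normal = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1)]
fault = [(5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]

x = (4.5, 4.5)  # a reading close to the fault cluster
clean_verdict = predict(centroid(normal), centroid(fault), x)

# Attack: inject fault-like points mislabeled as "normal", dragging the
# normal centroid toward the fault region so real faults go undetected.
poisoned_normal = normal + [(6.0, 6.0)] * 6
poisoned_verdict = predict(centroid(poisoned_normal), centroid(fault), x)

print(clean_verdict)     # → fault
print(poisoned_verdict)  # → normal (the poisoned model masks the fault)
```

The same mechanism scales to real models: if an attacker can corrupt even a fraction of the training data, the learned decision boundary shifts, which is why the assessment stresses protecting training datasets.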
AI interaction with other technologies
Coupling AI with other technologies, such as unmanned aerial systems (drones), could also be used to attack energy infrastructure. However, human supervision and the protection of sensitive models and datasets can help mitigate these risks.
Other studies and future implications
Other DOE reports have examined the near-term opportunities AI offers to improve the planning, permitting, and operation of grid infrastructure. A report from Argonne National Laboratory identified long-term challenges in the energy sector that AI could potentially help solve, while Lawrence Berkeley National Laboratory has studied the electricity requirements of AI and other types of computing.
The integration of artificial intelligence into the US energy sector offers significant advantages, but requires constant vigilance to guard against the potential risks of abuse and malicious exploitation.