Title

Using spatial reinforcement learning to build forest wildfire dynamics models from satellite images
Document Type: Journal Article
Author(s): Sriram Ganapathi Subramanian; Mark Crowley
Publication Year: 2018

Cataloging Information

Keyword(s):
  • algorithms
  • Canada
  • fire management
  • fire spread
  • Fort McMurray
  • Markov
  • satellite imagery
  • wildfire dynamics
  • wildfire prediction
Region(s):
  • International
Record Maintained By:
Record Last Modified: June 20, 2019
FRAMES Record Number: 58075

Description

Machine learning algorithms have increased tremendously in power in recent years but have yet to be fully utilized in many ecology and sustainable resource management domains such as wildlife reserve design, forest fire management, and invasive species spread. One thing these domains have in common is that they contain dynamics that can be characterized as a spatially spreading process (SSP), which requires many parameters to be set precisely to model the dynamics, spread rates, and directional biases of the elements which are spreading. We present related work in artificial intelligence and machine learning for SSP sustainability domains including forest wildfire prediction. We then introduce a novel approach for learning in SSP domains using reinforcement learning (RL) where fire is the agent at any cell in the landscape and the set of actions the fire can take from a location at any point in time includes spreading north, south, east, or west or not spreading. This approach inverts the usual RL setup since the dynamics of the corresponding Markov Decision Process (MDP) is a known function for immediate wildfire spread. Meanwhile, we learn an agent policy for a predictive model of the dynamics of a complex spatial process. Rewards are provided for correctly classifying which cells are on fire or not compared with satellite and other related data. We examine the behavior of five RL algorithms on this problem: value iteration, policy iteration, Q-learning, Monte Carlo Tree Search, and Asynchronous Advantage Actor-Critic (A3C). We compare to a Gaussian process-based supervised learning approach and also discuss the relation of our approach to manually constructed, state-of-the-art methods from forest wildfire modeling. We validate our approach with satellite image data of two massive wildfire events in Northern Alberta, Canada: the Fort McMurray fire of 2016 and the Richardson fire of 2011. The results show that we can learn predictive, agent-based policies as models of spatial dynamics using RL on readily available satellite images; these learned policies are competitive with other methods and have many additional advantages in terms of generalizability and interpretability.
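The inverted setup described above — the fire itself acting as the RL agent, choosing among five spread actions and rewarded for matching observed burn labels — can be sketched with tabular Q-learning (one of the five algorithms the article examines). This is a minimal illustrative sketch under assumed names, grid, ignition point, and reward shaping, not the authors' implementation:

```python
import random

# Illustrative sketch (assumptions, not the paper's code): the fire is the
# agent at each cell, choosing to spread north/south/east/west or stay.
# Reward is +1 for spreading into a cell the (satellite-derived) burn map
# labels as burned, -1 otherwise.
ACTIONS = ["north", "south", "east", "west", "stay"]
MOVES = {"north": (-1, 0), "south": (1, 0),
         "east": (0, 1), "west": (0, -1), "stay": (0, 0)}

def step(cell, action, size):
    """Apply a spread action to a cell, clamping to the grid."""
    dr, dc = MOVES[action]
    return (min(max(cell[0] + dr, 0), size - 1),
            min(max(cell[1] + dc, 0), size - 1))

def q_learn(burned, size=5, episodes=2000, alpha=0.1, gamma=0.9,
            eps=0.2, seed=0):
    """Tabular Q-learning of a fire-spread policy against a burn map."""
    rng = random.Random(seed)
    Q = {}  # (cell, action) -> estimated value
    for _ in range(episodes):
        cell = (0, 0)  # assumed ignition point
        for _ in range(2 * size):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q.get((cell, x), 0.0))
            nxt = step(cell, a, size)
            r = 1.0 if nxt in burned else -1.0  # classification reward
            best_next = max(Q.get((nxt, b), 0.0) for b in ACTIONS)
            Q[(cell, a)] = Q.get((cell, a), 0.0) + alpha * (
                r + gamma * best_next - Q.get((cell, a), 0.0))
            cell = nxt
    return Q

# Usage: a fire observed to have spread due east from the ignition point.
burned = {(0, 1), (0, 2), (0, 3), (0, 4)}
Q = q_learn(burned)
policy_at_origin = max(ACTIONS, key=lambda a: Q.get(((0, 0), a), 0.0))
print(policy_at_origin)  # the learned policy favors "east" here
```

In this toy setting only eastward spread earns positive reward from the origin, so the greedy policy recovers the observed spread direction; the article's full method plays the same game at every cell against real satellite-derived burn maps.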

Online Link(s):
Citation:
Ganapathi Subramanian, Sriram; Crowley, Mark. 2018. Using spatial reinforcement learning to build forest wildfire dynamics models from satellite images. Frontiers in ICT 5:6.