DeepMCLP: Solving the MCLP with Deep Reinforcement Learning for Urban Spatial Computing

Published Web Location

https://doi.org/10.25436/E2KK5V
Abstract

The Maximal Covering Location Problem (MCLP) is a classical spatial optimization problem that plays a significant role in urban spatial computing. Because the problem is NP-hard, finding an exact solution is computationally challenging. This study proposes a deep reinforcement learning-based approach, DeepMCLP, to address the MCLP. We model the MCLP as a Markov Decision Process: an encoder with attention mechanisms learns the interactions between demand points and candidate facility points, a decoder outputs a probability distribution over the candidate facility points, and a greedy policy selects facility points to construct a feasible solution. We apply the trained DeepMCLP model to both synthetic data and real-world scenarios. Experimental results demonstrate that our algorithm solves the MCLP effectively, achieving faster solving times than mature solvers and smaller optimality gaps than a genetic algorithm. Our algorithm offers a novel perspective on spatial optimization, and future research can explore its application to other spatial optimization problems, providing scientific and effective guidance for urban planning and urban spatial analysis.

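To make the sequential formulation concrete, the sketch below (not the authors' code) casts the MCLP as a step-by-step decision process: the state is the set of facilities opened so far, an action opens one more candidate site, and the reward is the newly covered demand weight. A simple marginal-coverage score stands in for the learned decoder's probability distribution; in DeepMCLP that score would instead come from the attention-based encoder-decoder. All function and variable names here are illustrative assumptions.

```python
# Minimal sketch (assumed implementation, not the paper's code): MCLP as a
# sequential decision process with greedy action selection. The marginal
# coverage gain plays the role of the learned policy's score.
import numpy as np

def greedy_mclp(demand_xy, demand_w, cand_xy, radius, p):
    """Open p facilities from cand_xy to maximize the covered demand weight."""
    # coverage[i, j] = True if candidate j covers demand point i
    dists = np.linalg.norm(demand_xy[:, None, :] - cand_xy[None, :, :], axis=2)
    coverage = dists <= radius

    covered = np.zeros(len(demand_xy), dtype=bool)  # state: demand already covered
    chosen = []                                     # state: facilities opened so far
    for _ in range(p):
        # marginal gain of each candidate (stand-in for the decoder's scores)
        gains = ((coverage & ~covered[:, None]) * demand_w[:, None]).sum(axis=0)
        gains[chosen] = -np.inf                     # mask already-opened facilities
        j = int(np.argmax(gains))                   # greedy action selection
        chosen.append(j)
        covered |= coverage[:, j]                   # transition to the next state
    return chosen, float(demand_w[covered].sum())

# Toy synthetic instance
rng = np.random.default_rng(0)
demand_xy = rng.random((200, 2))
demand_w = rng.integers(1, 10, 200).astype(float)
cand_xy = rng.random((30, 2))
sites, covered_weight = greedy_mclp(demand_xy, demand_w, cand_xy, radius=0.15, p=5)
print(sites, covered_weight)
```

In the learned approach described in the abstract, the hand-coded marginal-gain score would be replaced by the probability distribution produced by the attention-based encoder-decoder, with the same masking of already-selected facilities at each decoding step.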