Optimizing Smart City Water Distribution Systems Using Deep Reinforcement Learning
Abstract
Inefficient scheduling in water distribution systems can lead to energy waste, costly overflows, and an inability to keep up with demand. Managing system components such as pumps and valves simultaneously and in real time to optimize operation in response to demand variations is a challenging task. Recent advances in deep reinforcement learning provide an opportunity to overcome the state-explosion problem by using function approximation to generalize from limited interaction with the environment. In this work, we train a Long Short-Term Memory (LSTM)-based reinforcement learning (RL) agent to optimize the energy usage of a smart water distribution system while maintaining a safe operating envelope. We compare the performance of the RL agent to two agents based on human domain experience: a baseline controller built on simple operational logic, and a fuzzy logic controller that captures imprecise human requirements. We show that the RL agent outperforms the other agents in terms of energy usage and operational safety, indicating its potential benefits for large-scale smart city systems. Future work will focus on prioritized large-scale system scheduling to cope with smart-city emergency situations.
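
As a rough illustration of the kind of controller the abstract describes, the sketch below pairs an LSTM policy head (mapping a window of demand and tank-level readings to discrete pump/valve actions) with a reward that trades energy cost against a penalty for leaving the safe operating envelope. This is a minimal sketch under assumed interfaces, dimensions, and weights, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): an LSTM policy for discrete pump
# actions plus a reward balancing energy cost and operational safety.
# Observation layout, action set, and reward coefficients are assumptions.
import torch
import torch.nn as nn


class LSTMPolicy(nn.Module):
    def __init__(self, obs_dim: int, hidden_dim: int, n_actions: int):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_actions)

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, time, obs_dim) window of demand / tank-level readings
        out, _ = self.lstm(obs_seq)
        return self.head(out[:, -1, :])  # logits over pump/valve settings


def reward(energy_kwh: float, tank_level: float,
           low: float = 0.2, high: float = 0.9,
           energy_w: float = 1.0, safety_w: float = 10.0) -> float:
    """Negative energy cost minus a penalty when the tank level leaves
    the allowed [low, high] envelope (all weights are illustrative)."""
    violation = max(0.0, low - tank_level) + max(0.0, tank_level - high)
    return -energy_w * energy_kwh - safety_w * violation


# Example: score one 24-step observation window and an unsafe tank level.
policy = LSTMPolicy(obs_dim=4, hidden_dim=64, n_actions=3)
logits = policy(torch.randn(1, 24, 4))
print(logits.shape, reward(energy_kwh=5.2, tank_level=0.15))
```

In practice the policy would be trained against a hydraulic simulator with a standard policy-gradient or value-based RL algorithm; the recurrent layer lets the agent condition on recent demand history rather than a single snapshot of the system state.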