
A Learning-Informed Optimisation Framework for Dynamic Balancing-Charging Management of Shared Autonomous Electric Vehicle Systems

Seminar

Prof. LIU Yang
  • Date

    27 Aug 2025

  • Organiser

    Department of Aeronautical and Aviation Engineering

  • Time

    17:00 - 18:00

  • Venue

    FJ302

Enquiry

General Office aae.info@polyu.edu.hk

Remarks

To receive a confirmation of attendance, please present your student or staff ID card at check-in.

Summary

Abstract

This seminar examines the dynamic balancing-charging (BC) management problem, which seeks to optimise the real-time operation of Shared Autonomous Electric Vehicle Systems (SAEVSs). We focus on jointly optimising fleet rebalancing and charging decisions under stochastic demand and operational constraints. This requires advanced modelling and foresight analytics that capture system dynamics and the long-term impact of decisions, enabling informed real-time decision-making. To this end, this work proposes a novel learning-informed optimisation framework that combines the precision of an optimisation approach with the foresight of Deep Reinforcement Learning (DRL). Specifically, we develop a multi-agent DRL model, the "Manager", which learns a dynamic BC strategy at the grid level under system uncertainties. The Manager preserves information about the underlying demand distribution and the intricate interactions between current decisions and future system dynamics, and uses the learned BC strategy to inform the optimisation of operational decisions. To support vehicle-level decisions, we propose a customised space-time-battery network flow model, the "Worker", which follows the far-sighted BC strategy developed by the Manager to optimise real-time vehicle assignments. Learning the optimal BC strategy is, however, highly non-trivial for combinatorial decision-making in large-scale SAEVSs. To tackle these learning challenges, we establish a synergistic DRL-based algorithm that solves the learning-informed optimisation framework by coordinating the learning of the BC strategy, via the Multi-Agent Twin Delayed Deep Deterministic policy gradient (MA-TD3) algorithm, with the optimised decision-making process.
The proposed algorithm is enhanced with two key innovations: a bottom-up reward assignment strategy that improves credit assignment by capturing the chain effect of vehicle assignments, and a learning enhancement that leverages optimisation-generated demonstrations to improve exploration efficiency in large learning spaces. Through extensive experiments on a city-scale SAEVS, we demonstrate that our framework achieves a 6.19% increase in system profit and an 11.16% improvement in the order fulfilment rate by optimising dynamic BC management. This study also lays a methodological basis for further exploring the integration of DRL and optimisation techniques, with the aim of enhancing decision-making capabilities in urban mobility systems.
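The Manager–Worker decomposition described above can be illustrated with a minimal sketch. This is not the speakers' implementation: the heuristic standing in for the learned MA-TD3 policy, and all names such as `manager_policy`, `worker_assign`, and `bc_target` are assumptions for illustration only. The idea shown is the division of labour: a grid-level policy emits a BC target per zone (how many vehicles to send to chargers), and a vehicle-level routine follows that target while matching the remaining vehicles to orders.

```python
# Illustrative sketch of a Manager-Worker split for balancing-charging (BC)
# management. NOT the authors' method: the Manager here is a hand-written
# heuristic standing in for a learned MA-TD3 policy, and the Worker is a
# greedy matcher standing in for the space-time-battery network flow model.
import random

random.seed(0)
ZONES = 3

def manager_policy(state):
    """Stand-in for the DRL Manager: map zone-level state
    (idle vehicles, expected demand, mean battery) to a BC target,
    i.e. how many vehicles per zone to send to chargers."""
    targets = []
    for idle, demand, mean_battery in state:
        # Placeholder rule: charge surplus vehicles when the zone's
        # fleet is low on battery; a trained policy would decide this.
        to_charge = max(0, idle - demand) if mean_battery < 0.5 else 0
        targets.append(to_charge)
    return targets

def worker_assign(vehicles, orders, targets):
    """Stand-in for the 'Worker': follow the Manager's BC target,
    then greedily match the remaining vehicles to orders."""
    plan = {"charge": [], "serve": []}
    for z in range(ZONES):
        # Charge the lowest-battery vehicles first, up to the target.
        pool = sorted((v for v in vehicles if v["zone"] == z),
                      key=lambda v: v["battery"])
        plan["charge"].extend(v["id"] for v in pool[: targets[z]])
        # Match remaining vehicles with enough charge to local orders.
        free = [v for v in pool[targets[z]:] if v["battery"] > 0.2]
        local = [o for o in orders if o["zone"] == z]
        plan["serve"].extend((v["id"], o["id"]) for v, o in zip(free, local))
    return plan

vehicles = [{"id": i, "zone": i % ZONES, "battery": random.random()}
            for i in range(9)]
orders = [{"id": j, "zone": j % ZONES} for j in range(4)]
state = [(sum(v["zone"] == z for v in vehicles),
          sum(o["zone"] == z for o in orders),
          sum(v["battery"] for v in vehicles if v["zone"] == z) / 3)
         for z in range(ZONES)]

plan = worker_assign(vehicles, orders, manager_policy(state))
print(plan)
```

In the full framework, the Worker's assignment outcomes would feed back into the Manager's training signal (via the bottom-up reward assignment), closing the loop between learning and optimisation.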

Speaker

Prof. Liu Yang is jointly appointed as an Associate Professor in the Department of Civil and Environmental Engineering and the Department of Industrial Systems Engineering and Management at the National University of Singapore. Prof. Liu teaches and researches transport planning and modelling, urban mobility and logistics, traffic congestion management, and data-driven methods. She received her BS from Tsinghua University, her MPhil from the Hong Kong University of Science and Technology, and her PhD from Northwestern University. She has worked on research projects supported by the Singapore Ministry of Education, National Research Foundation, Land Transport Authority, Urban Redevelopment Authority, A*STAR, Cisco Systems, and ST Engineering. Her work is internationally recognised with research awards, including the Transportation Science Journal Best Paper Award. She serves on the editorial boards of Transportation Science (Associate Editor), Transportation Research Part C, and Socio-Economic Planning Sciences (Associate Editor).
