TR2019-142

Model-based deep reinforcement learning for CACC in mixed-autonomy vehicle platoons


    •  Chu, T., Kalabić, U., "Model-based deep reinforcement learning for CACC in mixed-autonomy vehicle platoons", IEEE Conference on Decision and Control (CDC), DOI: 10.1109/CDC40024.2019.9030110, December 2019, pp. 4079-4084.
      BibTeX:

      @inproceedings{Chu2019dec,
        author = {Chu, Tianshu and Kalabić, Uroš},
        title = {Model-based deep reinforcement learning for CACC in mixed-autonomy vehicle platoons},
        booktitle = {Proc. IEEE Conference on Decision and Control},
        year = 2019,
        pages = {4079--4084},
        month = dec,
        doi = {10.1109/CDC40024.2019.9030110},
        url = {https://www.merl.com/publications/TR2019-142}
      }
  • Research Areas: Control, Dynamical Systems, Machine Learning

Abstract:

This paper proposes a model-based deep reinforcement learning (DRL) algorithm for cooperative adaptive cruise control (CACC) of connected vehicles. Unlike most existing CACC work, we consider a platoon consisting of both human-driven and autonomous vehicles. The human-driven vehicles are heterogeneous and connected via vehicle-to-vehicle (V2V) communication, while the autonomous vehicles are controlled by a cloud-based centralized DRL controller via vehicle-to-cloud (V2C) communication. To overcome the safety and robustness issues of RL, the algorithm informs lower-level controllers of desired headway signals instead of directly controlling vehicle accelerations. The lower-level behavior is modeled according to the optimal velocity model (OVM), which determines vehicle acceleration according to a headway input. Numerical experiments show that the model-based DRL algorithm outperforms its model-free version in both safety and stability of CACC. Furthermore, we study the impact of different penetration ratios of autonomous vehicles on the safety, stability, and optimality of the CACC policy.
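For illustration, below is a minimal Python sketch of an OVM-style lower-level controller of the kind the abstract describes: acceleration relaxes the vehicle's speed toward an optimal velocity determined by the current headway. The particular optimal-velocity function and the parameter values (alpha, h_st, h_go, v_max) are assumptions for the sketch, not the paper's exact formulation.

    import numpy as np

    def ovm_acceleration(headway, velocity,
                         alpha=0.6, h_st=5.0, h_go=35.0, v_max=30.0):
        """Optimal velocity model (OVM): relax the vehicle's speed toward
        an optimal velocity V(h) set by the current headway h (meters)."""
        # Assumed piecewise optimal-velocity function: zero below the
        # standstill headway h_st, v_max above the free-flow headway h_go,
        # with a smooth cosine transition in between.
        if headway <= h_st:
            v_opt = 0.0
        elif headway >= h_go:
            v_opt = v_max
        else:
            v_opt = 0.5 * v_max * (1.0 - np.cos(np.pi * (headway - h_st)
                                                / (h_go - h_st)))
        # Acceleration is proportional to the gap between the optimal
        # velocity and the current velocity (sensitivity alpha).
        return alpha * (v_opt - velocity)

    # Example: a vehicle 20 m behind its predecessor, travelling at 15 m/s.
    # In the paper's architecture, a DRL layer would adjust headway signals
    # (e.g., the setpoints h_st/h_go here) rather than command acceleration
    # directly.
    print(ovm_acceleration(headway=20.0, velocity=15.0))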
