TR2017-033

Efficient Convex Optimization on GPUs for Embedded Model Predictive Control


    •  Yu, L., Goldsmith, A., Di Cairano, S., "Efficient Convex Optimization on GPUs for Embedded Model Predictive Control", Workshop on General Purpose Processing with Graphics Processing Units, DOI: 10.1145/3038228.3038234, February 2017, pp. 12-21.
      BibTeX:

      @inproceedings{Yu2017feb,
        author = {Yu, Leiming and Goldsmith, Abraham and Di Cairano, Stefano},
        title = {Efficient Convex Optimization on GPUs for Embedded Model Predictive Control},
        booktitle = {Workshop on General Purpose Processing with Graphics Processing Units},
        year = 2017,
        pages = {12--21},
        month = feb,
        doi = {10.1145/3038228.3038234},
        isbn = {978-1-4503-4915-4},
        url = {https://www.merl.com/publications/TR2017-033}
      }
  • Research Area: Control

Abstract:

GPU applications have traditionally run on PCs or in larger-scale systems. With the introduction of the Tegra line of mobile processors, NVIDIA expanded the types of systems that can exploit the massive parallelism offered by GPU computing architectures. In this paper, we evaluate the suitability of the Tegra X1 processor as a platform for embedded model predictive control (MPC). MPC relies on the real-time solution of a convex optimization problem to compute the control input(s) to a system. Relative to traditional control techniques such as PID, MPC is very computationally demanding. Quadratic programming (QP) algorithms for the solution of convex optimization problems generally lend themselves to parallelization. However, until the introduction of the Tegra, there had been no off-the-shelf embedded processor that would enable a massively parallel embedded implementation.
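In a condensed formulation (a generic sketch; the exact problem structure used in the paper may differ), the optimization solved at each control step is a QP of the standard form

    \min_{z} \; \tfrac{1}{2} z^{\top} H z + q^{\top} z \quad \text{subject to} \quad G z \le b,

where z stacks the control inputs over the prediction horizon, H is positive semidefinite, and Gz <= b collects the input and state constraints.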
We investigate two gradient-based algorithms, ADMM and PQP, for solving the QPs that arise in a large class of MPC problems. The performance of these algorithms is dominated by the performance of matrix-matrix and matrix-vector products. Our work focuses on maximizing the performance of these operations for relatively small matrices of 100 to 1000 elements per dimension, which are common in the MPC implementations found in automotive and factory automation applications. Modern BLAS libraries for CPUs and GPUs are quantitatively evaluated. We create SGEMV kernels that outperform the state-of-the-art cuBLAS library by 2.3x on the TX1. Different kernel fusion schemes utilizing concurrent kernel execution and zero-copy mechanisms are investigated. For ADMM, our implementation achieves a 46.6x speedup over the single-threaded CPU version and a 2.7x speedup over the optimized OpenBLAS version. For PQP, we achieve a 41.2x speedup over the single-threaded CPU version and a 4.2x speedup over the OpenBLAS version.
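As a minimal illustration of the kind of operation being tuned, a naive single-precision matrix-vector product (SGEMV) kernel in CUDA is sketched below. This is only a baseline for the matrix sizes discussed above, not the optimized kernel from the paper; the row-major layout, kernel name, and launch configuration are assumptions made for the example.

    // Naive SGEMV sketch: computes y = A * x for an m x n matrix A stored
    // row-major in single precision. One thread accumulates one output row;
    // the kernels described in the paper apply further tuning and fusion.
    __global__ void sgemv_naive(const float* A, const float* x, float* y,
                                int m, int n)
    {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < m) {
            float acc = 0.0f;
            for (int col = 0; col < n; ++col) {
                acc += A[row * n + col] * x[col];
            }
            y[row] = acc;
        }
    }

    // Illustrative launch for a 1000 x 1000 problem, a size typical of the
    // MPC matrices considered here; the block size is an arbitrary choice.
    // dim3 block(128);
    // dim3 grid((m + block.x - 1) / block.x);
    // sgemv_naive<<<grid, block>>>(d_A, d_x, d_y, m, n);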