Learning-Based Control in Safety-Critical Systems: Lyapunov-Guided Reinforcement Learning, Barrier Functions, and Formal Guarantees
Keywords:
Safe reinforcement learning, Lyapunov stability, control barrier functions, formal verification, runtime assurance, safety-critical systems, autonomous control

Abstract
This review aimed to synthesize and critically analyze recent advances in integrating learning-based control with formal safety mechanisms—specifically Lyapunov-guided reinforcement learning (RL), control barrier functions (CBFs), and formal verification frameworks—to identify the key themes, methodological progress, and implementation challenges in safety-critical systems. A qualitative review design was employed, focusing on 12 peer-reviewed journal and conference papers published between 2017 and 2025 that explicitly addressed learning-based control with formal safety and stability guarantees. Data collection relied exclusively on systematic literature analysis, emphasizing relevance to safety-critical applications such as robotics, autonomous vehicles, and power systems. The selected studies were imported into NVivo 14 software for qualitative coding. Using open, axial, and selective coding, recurring patterns and concepts were extracted until theoretical saturation was achieved. The data were organized into four main themes—Lyapunov-guided RL, CBF frameworks, formal guarantees and verification, and practical applications—each containing multiple subthemes and conceptual codes. The synthesis revealed that Lyapunov-guided reinforcement learning provides theoretical stability certificates during policy optimization, while CBF-based frameworks act as safety filters enforcing real-time constraint satisfaction. Formal guarantees and verification methods—such as runtime assurance architectures, reachability analysis, and proof-carrying policies—extend these approaches to certifiable control. However, implementation challenges persist regarding scalability, data efficiency, and computational tractability in real-world applications. Across studies, hybrid strategies combining learning with classical control and verification yielded the most promising balance between adaptability and safety. 
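The safety-filter role the abstract attributes to CBF-based frameworks can be illustrated with a minimal sketch. For a single-integrator system (dynamics ẋ = u) with a circular obstacle, the CBF quadratic program that minimally modifies a nominal (e.g., learned) action reduces to a closed-form projection onto one half-space. The dynamics, obstacle geometry, and gain below are illustrative assumptions, not a method from any of the reviewed papers.

```python
import numpy as np

def cbf_safety_filter(x, u_nom, obstacle, radius, alpha=1.0):
    """Minimally modify u_nom so the CBF condition dh/dt >= -alpha*h holds.

    For single-integrator dynamics xdot = u and barrier
    h(x) = ||x - obstacle||^2 - radius^2, the CBF-QP
        min ||u - u_nom||^2  s.t.  grad_h . u >= -alpha * h
    has the closed-form solution: project u_nom onto the half-space
    whenever the constraint is violated.
    """
    h = np.dot(x - obstacle, x - obstacle) - radius**2
    grad_h = 2.0 * (x - obstacle)
    slack = grad_h @ u_nom + alpha * h
    if slack >= 0.0:          # nominal action already safe: pass it through
        return u_nom
    # Violated: shift along grad_h until the constraint holds with equality.
    return u_nom - (slack / (grad_h @ grad_h)) * grad_h

# Nominal action drives straight into an obstacle at the origin;
# the filter deflects it just enough to keep h non-increasing too fast.
x = np.array([1.0, 0.0])
u_safe = cbf_safety_filter(x, np.array([-2.0, 0.0]),
                           obstacle=np.zeros(2), radius=0.5)
```

A real safety filter would solve the QP numerically (the closed form only covers a single constraint), but the pass-through-or-project behavior is the core of the mechanism.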
Learning-based control in safety-critical systems is evolving toward a hybrid paradigm where data-driven adaptability coexists with analytical safety guarantees. Integrating Lyapunov, barrier, and formal verification methods enables provably safe reinforcement learning but demands advances in scalability, uncertainty handling, and real-time computation for widespread adoption.
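The runtime-assurance pattern mentioned above can be sketched in a few lines: a monitor checks, against a nominal model, whether a learned action would decrease a candidate Lyapunov function, and falls back to a verified backup controller otherwise. The scalar dynamics, gains, and decay rate here are illustrative assumptions chosen so the example is self-contained.

```python
# Runtime assurance sketch: a Lyapunov-decrease monitor gates a learned
# policy and switches to a verified backup controller on violation.
# Nominal scalar model x_{t+1} = A*x + B*u (assumed for illustration).
A, B = 1.1, 0.5

def V(x):
    """Candidate Lyapunov function V(x) = x^2."""
    return x * x

def backup(x):
    """Verified backup law: closed loop x_{t+1} = (A - 0.5*B/B... ) -- here
    simply u = -x, giving x_{t+1} = 0.6*x, which is stable."""
    return -x

def assured_action(x, u_learned, decay=0.9):
    """Accept the learned action only if the nominal model predicts
    V shrinks by the required factor; otherwise use the backup."""
    x_pred = A * x + B * u_learned
    if V(x_pred) <= decay * V(x):
        return u_learned
    return backup(x)
```

This simplex-style switching is the simplest instance of the runtime-assurance architectures the reviewed studies describe; production systems would additionally account for model uncertainty when predicting `x_pred`.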
References
A Review On Safe Reinforcement Learning Using Lyapunov and Barrier Functions. (2025). arXiv.
A Unified View of Safety-Critical Control in Autonomous Systems. (2024). Annual Reviews in Control.
Adaptive and Learning-Based Control of Safety-Critical Systems. (n.d.). Springer.
Du, D., Han, S., Qi, N., Bou Ammar, H., Wang, J., & Pan, W. (2023). Reinforcement Learning for Safe Robot Control using Control Lyapunov Barrier Functions. arXiv.
Fisac, J. F., Akametalu, A. K., Zeilinger, M. N., Kaynama, S., Gillula, J., & Tomlin, C. J. (2017). A General Safety Framework for Learning-Based Control in Uncertain Robotic Systems. arXiv.
Goodloe, A. E. (2022). Assuring Safety-Critical Machine Learning Enabled Systems: Challenges and Promise. NASA Langley.
Zhang, J., & Li, J. (2019). Testing and verification of neural-network-based safety-critical control software: A systematic literature review. arXiv.
Learning-Based Safety-Stability-Driven Control for Safety-Critical Systems under Model Uncertainties. (2020). arXiv.
Safe Learning for Control using Control Lyapunov Functions and Control Barrier Functions: A Review. (2021). Elsevier.
Tambon, F., Laberge, G., Le An, N., Nikanjam, A., Mindom, P. S., Pequignot, Y., … Laviolette, F. (2021). How to Certify Machine Learning Based Safety-critical Systems? arXiv.
Qin, C., Wu, Y., Zhang, J., & Zhu, T. (2023). Reinforcement Learning-Based Decentralized Safety Control for Constrained Interconnected Nonlinear Safety-Critical Systems. Entropy, 25(8), 1158.
Safe Reinforcement Learning Using Robust Control Barrier Functions. (n.d.). Semantic Scholar.