Data-driven virtual reference feedback tuning and reinforcement Q-learning for model-free position control of an aerodynamic system
Document Type
Conference Proceeding
Publication Title
2016 24th Mediterranean Conference on Control and Automation (MED)
Publisher
Institute of Electrical and Electronics Engineers Inc.
School
School of Engineering
RAS ID
24345
Abstract
This paper compares a linear Virtual Reference Feedback Tuning model-free technique for feedback controller tuning from input-output data with two Reinforcement Q-learning model-free nonlinear state-feedback controllers tuned from input-state experimental data (ED), treating the two as separate learning techniques. The state-feedback controllers are tuned in a model reference setting that aims to linearize the control system (CS) over a wide operating range. Both learning techniques are validated on a position control case study for an open-loop stable aerodynamic system. The tuning techniques are compared in terms of their structural complexity, CS performance, and the amount of ED needed for learning.
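Illustrative sketch (not from the paper): the Virtual Reference Feedback Tuning idea named in the abstract tunes a fixed-structure controller directly from one batch of input-output data by filtering the measured output through the inverse of a reference model to obtain a virtual reference and virtual error, then fitting the controller parameters by least squares. The reference model M(z) = (1 - a)/(z - a), the discrete PI controller class, the synthetic plant, and the function names below are assumptions made only for this Python sketch; the usual VRFT prefilter is omitted for brevity.

    import numpy as np
    from scipy.signal import lfilter

    # Hypothetical VRFT sketch: tune a discrete PI controller
    # C(z; theta) = Kp + Ki * z / (z - 1) from one batch of input-output data,
    # assuming a first-order reference model M(z) = (1 - a) / (z - a).

    def vrft_pi_tuning(u, y, a=0.7):
        """One-shot VRFT: returns [Kp, Ki] minimizing ||u - C(z; theta) e_v||^2."""
        # Virtual reference r_v = M^{-1}(z) y, i.e. y filtered through the
        # inverse reference model (z - a) / (1 - a); one step non-causal,
        # which is acceptable in batch processing.
        r_v = np.empty_like(y)
        r_v[:-1] = (y[1:] - a * y[:-1]) / (1.0 - a)
        r_v[-1] = r_v[-2]                      # pad the last sample
        e_v = r_v - y                          # virtual tracking error

        # PI regressor acting on e_v: proportional term and discrete integral
        phi = np.column_stack([e_v, np.cumsum(e_v)])
        theta, *_ = np.linalg.lstsq(phi, u, rcond=None)
        return theta                           # [Kp, Ki]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        u = rng.uniform(-1.0, 1.0, 500)            # PRBS-like excitation
        y = lfilter([0.2], [1.0, -0.8], u)         # hypothetical first-order plant
        print(vrft_pi_tuning(u, y))

A comparable sketch for the Reinforcement Q-learning state-feedback controllers would require iterative collection of input-state data, which is one axis of the comparison discussed in the abstract.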
DOI
10.1109/MED.2016.7535876
Access Rights
subscription content
Comments
Radac, R. B., Precup, R. E., & Roman, R. C. (2016). Data-driven virtual reference feedback tuning and reinforcement Qlearning for model-free position control of an aerodynamic system. In 2016 24th Mediterranean Conference on Control and Automation (MED) (pp. 1126-1132). Available here.