Data-driven virtual reference feedback tuning and reinforcement Q-learning for model-free position control of an aerodynamic system
Institute of Electrical and Electronics Engineers Inc.
School of Engineering
This paper compares a linear Virtual Reference Feedback Tuning (VRFT) model-free technique, which tunes a feedback controller from input-output data, with two model-free nonlinear state feedback controllers tuned by reinforcement Q-learning from input-state experimental data (ED), using two separate learning techniques. The state feedback controllers are tuned in a model reference setting that aims to linearize the control system (CS) over a wide operating range. Both learning techniques are validated on a position control case study for an open-loop stable aerodynamic system. The tuning techniques are compared in terms of structural complexity, CS performance, and the amount of ED required for learning.
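As a rough illustration of the VRFT idea summarized above, the sketch below tunes a PI controller in one shot from a single batch of open-loop input-output data: the measured output is passed through the inverse of a reference model to build a virtual reference and virtual error, and the controller gains are fit by least squares so the controller maps the virtual error to the recorded input. The first-order plant, reference model, and PI structure here are illustrative assumptions only, not the paper's aerodynamic system or controllers.

```python
import numpy as np

# Hypothetical first-order plant (illustration only, not the paper's system):
#   y[k+1] = 0.9*y[k] + 0.1*u[k]
rng = np.random.default_rng(0)
N = 200
u = rng.standard_normal(N)          # open-loop excitation signal
y = np.zeros(N + 1)
for k in range(N):
    y[k + 1] = 0.9 * y[k] + 0.1 * u[k]

# Assumed reference model M:  y_d[k+1] = 0.6*y_d[k] + 0.4*r[k].
# Invert M on the measured output y to get the virtual reference r_v,
# then form the virtual error e_v = r_v - y.
r_v = (y[1:] - 0.6 * y[:-1]) / 0.4
e_v = r_v - y[:-1]

# A discrete PI law u[k] = Kp*e[k] + Ki*sum(e[0..k]) is linear in (Kp, Ki),
# so one least-squares fit of the recorded input on the virtual error
# regressors gives the gains directly -- no plant model is identified.
Phi = np.column_stack([e_v, np.cumsum(e_v)])
(Kp, Ki), *_ = np.linalg.lstsq(Phi, u, rcond=None)
# For this synthetic plant/model pair the ideal PI controller lies in the
# chosen class, so the fit recovers Kp = 3.6, Ki = 0.4 exactly.
print(Kp, Ki)
```

In this noise-free toy case the controller that makes the closed loop match M exactly is itself a PI law, which is why the one-shot fit is exact; with noisy data, VRFT in the literature adds instrumental variables or data filtering.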