What is it about?
This paper proposes the combination of two model-free controller tuning techniques: linear Virtual Reference Feedback Tuning (VRFT) and nonlinear state-feedback Q-learning. Our new mixed VRFT-Q-learning approach employs neural networks and solves reference trajectory tracking problems posed as optimization problems.
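To illustrate the Q-learning half of the combination, here is a minimal sketch of a state-feedback Q-learning loop for reference tracking. This is not the paper's algorithm: the paper uses neural-network approximators and VRFT initialization, whereas the toy first-order plant, the tabular discretization, and all hyperparameters below are invented assumptions for illustration only.

```python
import numpy as np

def q_learning_tracking(a=0.9, b=0.5, ref=1.0, episodes=200, steps=30,
                        alpha=0.1, gamma=0.95, eps=0.2, seed=0):
    """Tabular Q-learning on a hypothetical plant x_{k+1} = a*x_k + b*u_k,
    with the tracking cost -(x - ref)^2 used as the reward."""
    rng = np.random.default_rng(seed)
    states = np.linspace(-2.0, 2.0, 21)   # discretized state grid (assumed)
    actions = np.linspace(-1.0, 1.0, 9)   # discretized control inputs (assumed)
    Q = np.zeros((len(states), len(actions)))

    def nearest(grid, v):
        # index of the grid point closest to the continuous value v
        return int(np.argmin(np.abs(grid - v)))

    for _ in range(episodes):
        x = 0.0
        for _ in range(steps):
            s = nearest(states, x)
            # epsilon-greedy action selection
            if rng.random() < eps:
                ai = int(rng.integers(len(actions)))
            else:
                ai = int(np.argmax(Q[s]))
            u = actions[ai]
            x_next = a * x + b * u
            r = -(x_next - ref) ** 2      # negative tracking error as reward
            s_next = nearest(states, x_next)
            # standard Q-learning temporal-difference update
            Q[s, ai] += alpha * (r + gamma * Q[s_next].max() - Q[s, ai])
            x = x_next
    return Q, states, actions
```

After training, a greedy state-feedback controller reads off `u = actions[np.argmax(Q[s])]` for the current discretized state `s`; in the paper this table is replaced by a neural network and the initial controller comes from VRFT.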
Why is it important?
The approach matters for the many control applications where an accurate process model is unavailable, and its effectiveness is demonstrated through extensive simulation results for a representative nonlinear system.
Read the Original
This page is a summary of: Model-free constrained data-driven iterative reference input tuning algorithm with experimental validation, International Journal of General Systems, October 2015, Taylor & Francis, DOI: 10.1080/03081079.2015.1072524.