Reinforcement Learning Tracking Control for Robotic Manipulator With Kernel-Based Dynamic Model
Update time: 2020-12-30

Reinforcement learning (RL) is an efficient learning approach to solving control problems for a robot: the robot interacts with its environment to acquire an optimal control policy. However, RL still faces many challenges in continuous control tasks. In this article, a kernel-based dynamic model for RL is proposed that removes the need to know or learn the analytical dynamic model of the robotic manipulator. In addition, a new tuple is formed through kernel function sampling to describe a robotic RL control problem. In this algorithm, a reward function is defined according to the features of tracking control in order to speed up the learning process, and an RL tracking controller with a kernel-based transition dynamic model is then proposed. Finally, a critic system is presented to evaluate whether a policy is good or bad for the RL control tasks. The simulation results illustrate that the proposed method fulfills robotic tracking tasks effectively and achieves similar or even better tracking performance with much smaller force/torque inputs than other learning algorithms, demonstrating the effectiveness and efficiency of the proposed RL algorithm.
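To make the summary above more concrete, the following is a minimal, illustrative Python sketch (not the authors' implementation) of the three ingredients the abstract describes: a kernel-based transition model learned from sampled (state, action, next state) tuples, a tracking-error-shaped reward, and a critic that scores a policy by rolling it out in the learned model. All function names, the choice of an RBF kernel with Nadaraya-Watson regression, and the hyperparameters are assumptions made for illustration only.

```python
import numpy as np

def rbf_kernel(query, samples, bandwidth=0.5):
    """RBF similarity between a query point and each row of the sample matrix."""
    squared_dist = np.sum((samples - query) ** 2, axis=1)
    return np.exp(-squared_dist / (2.0 * bandwidth ** 2))

class KernelDynamicsModel:
    """Kernel (Nadaraya-Watson) regression over sampled transitions:
    the predicted next state is a similarity-weighted average of observed next states."""
    def __init__(self, bandwidth=0.5):
        self.bandwidth = bandwidth
        self.inputs = None        # stacked [state, action] samples
        self.next_states = None

    def fit(self, states, actions, next_states):
        self.inputs = np.hstack([states, actions])
        self.next_states = next_states

    def predict(self, state, action):
        query = np.concatenate([state, action])
        weights = rbf_kernel(query, self.inputs, self.bandwidth)
        weights /= weights.sum() + 1e-12
        return weights @ self.next_states

def tracking_reward(state, reference, torque, w_err=1.0, w_tau=0.01):
    """Reward shaped for tracking: penalize tracking error and large torque inputs."""
    return -w_err * np.linalg.norm(state - reference) - w_tau * np.linalg.norm(torque)

def critic_value(model, policy, initial_state, reference_traj, gamma=0.95):
    """Evaluate a policy by accumulating discounted rewards along a rollout
    simulated with the learned kernel transition model."""
    state, value = initial_state, 0.0
    for t, ref in enumerate(reference_traj):
        action = policy(state, ref)
        value += (gamma ** t) * tracking_reward(state, ref, action)
        state = model.predict(state, action)
    return value

# Toy usage: random data standing in for a 2-joint manipulator
# (state = 2 joint positions + 2 joint velocities, action = 2 torques).
rng = np.random.default_rng(0)
S = rng.normal(size=(200, 4))
A = rng.normal(size=(200, 2))
S_next = S + 0.05 * np.hstack([S[:, 2:], A])          # crude double-integrator-like transitions
model = KernelDynamicsModel()
model.fit(S, A, S_next)
pd_policy = lambda s, ref: 5.0 * (ref[:2] - s[:2]) - 1.0 * s[2:]   # simple PD tracking policy
print(critic_value(model, pd_policy, S[0], [rng.normal(size=4)] * 20))
```

In this sketch the critic simply returns a discounted return under the learned model; higher values indicate a better tracking policy, which mirrors the role the abstract assigns to the critic system of judging whether a policy is good or bad for the control task.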

This study was published in IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 9, 2020, pp. 3570-3578.

 
