Recently, the paper "Deep Reinforcement Learning Control of Fully-Constrained Cable-Driven Parallel Robots", with Ph.D. student 卢彦岐 of the laboratory as first author and laboratory faculty member 吴承伟 as corresponding author, was accepted by IEEE Transactions on Industrial Electronics, a leading journal in the control field.
The paper studies the design of deep reinforcement learning based control algorithms for cable-driven parallel robots. These robots have complex dynamics and operate in uncertain environments, and existing control methods are conservative and offer limited precision, so they fall short of practical requirements. Building on prior results, the paper introduces a deep reinforcement learning mechanism and proposes a soft actor-critic control algorithm constrained by a Lyapunov function, which improves control precision, and the stability of the closed-loop system is proved. A baseline controller introduced in the design improves the effectiveness of the training data. Both computer simulations and experiments demonstrate the effectiveness and advantages of the proposed algorithm.
Abstract
Cable-driven parallel robots (CDPRs) have complex cable dynamics and operate under working-environment uncertainties, which makes precise control of CDPRs challenging. This paper introduces reinforcement learning to offset the negative effect of these uncertainties on the control performance of CDPRs, and investigates controller design for CDPRs in the framework of deep reinforcement learning. A learning-based control algorithm is proposed to compensate for uncertainties due to cable elasticity, mechanical friction, etc. A basic control law is given for the nominal model, and a Lyapunov-based deep reinforcement learning control law is designed on top of it. Moreover, the stability of the closed-loop tracking system under the reinforcement learning algorithm is proved. Both simulations and experiments validate the effectiveness and advantages of the proposed control algorithm.
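To make the control structure described in the abstract concrete, below is a minimal Python sketch of the general idea: a basic control law for the nominal model combined with a learned correction, plus a Lyapunov-decrease term that could constrain the policy update during training. The specific function names, the PD-type baseline, and the quadratic Lyapunov candidate are illustrative assumptions for exposition, not the paper's actual formulation.

```python
import numpy as np

# Illustrative sketch only: the names, the PD-type baseline, and the quadratic
# Lyapunov candidate are assumptions for exposition, not taken from the paper.

def baseline_control(q, q_dot, q_ref, q_ref_dot, Kp, Kd):
    """Basic control law for the nominal CDPR model (PD-type tracking term)."""
    e = q_ref - q
    e_dot = q_ref_dot - q_dot
    return Kp @ e + Kd @ e_dot

def lyapunov_candidate(e, e_dot, Kp):
    """Quadratic Lyapunov candidate V = 0.5*e'Kp*e + 0.5*e_dot'e_dot."""
    return 0.5 * e @ Kp @ e + 0.5 * e_dot @ e_dot

def lyapunov_decrease_penalty(V_next, V_curr, alpha=1e-3):
    """Positive when V fails to decrease; usable as a constraint term in the actor loss."""
    return max(0.0, V_next - V_curr + alpha * V_curr)

def total_control(q, q_dot, q_ref, q_ref_dot, policy, Kp, Kd):
    """Total input: baseline term plus a learned correction from the RL policy."""
    u_base = baseline_control(q, q_dot, q_ref, q_ref_dot, Kp, Kd)
    obs = np.concatenate([q_ref - q, q_ref_dot - q_dot])
    u_learn = policy(obs)  # e.g. the mean action of a trained soft actor-critic policy
    return u_base + u_learn

# Example usage with a placeholder (zero) policy and a 3-DOF pose
if __name__ == "__main__":
    n = 3
    Kp, Kd = 20.0 * np.eye(n), 5.0 * np.eye(n)
    zero_policy = lambda obs: np.zeros(n)
    q, q_dot = np.zeros(n), np.zeros(n)
    q_ref, q_ref_dot = np.array([0.1, 0.0, 0.2]), np.zeros(n)
    u = total_control(q, q_dot, q_ref, q_ref_dot, zero_policy, Kp, Kd)
    print(u)  # with the zero policy this reduces to the PD baseline
```

In such a split, the baseline term handles the nominal model while the learned term only has to compensate residual uncertainty (cable elasticity, friction, etc.), which is one way a baseline controller can make the training data more informative, as the announcement above describes.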