Abstract
Tracking a maneuvering target autonomously, accurately, and in real time in an uncertain environment is one of the most challenging missions for unmanned aerial vehicles (UAVs). In this paper, an online path-planning approach for UAVs based on deep reinforcement learning is developed to address the control problem of maneuvering target tracking and obstacle avoidance. Through end-to-end learning powered by neural networks, the proposed approach achieves environment perception and continuous motion control. The approach includes: (1) a deep deterministic policy gradient (DDPG)-based control framework that provides learning and autonomous decision-making capability for UAVs; (2) an improved method, named MN-DDPG, that introduces a type of mixed noise to help the UAV explore stochastic strategies for online optimal planning; and (3) a task-decomposition and pre-training algorithm for efficient transfer learning, which improves the generalization capability of the UAV control model built on MN-DDPG. Simulation results verify that the proposed approach achieves good self-adaptive adjustment of the UAV's flight attitude in maneuvering-target-tracking tasks, with a significant improvement in the generalization capability and training efficiency of the UAV tracking controller in uncertain environments.
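The abstract does not specify how the mixed noise in MN-DDPG is composed, so the following is only a minimal sketch of one plausible reading: blending temporally correlated Ornstein-Uhlenbeck noise (standard in DDPG exploration) with Gaussian noise and annealing the overall scale over training. The class names, parameters (`gauss_sigma`, `noise_decay`), and the decay schedule are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

class OrnsteinUhlenbeckNoise:
    """Temporally correlated noise process, commonly paired with DDPG exploration."""
    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2):
        self.mu, self.theta, self.sigma, self.dt = mu, theta, sigma, dt
        self.state = np.full(size, mu, dtype=np.float64)

    def sample(self):
        dx = (self.theta * (self.mu - self.state) * self.dt
              + self.sigma * np.sqrt(self.dt) * np.random.randn(*self.state.shape))
        self.state = self.state + dx
        return self.state

class MixedNoiseExplorer:
    """Adds a blend of OU and Gaussian noise to the deterministic actor output,
    gradually shrinking the noise scale so the policy is trusted more over time.
    (Hypothetical interpretation of the paper's 'mixed noises'.)"""
    def __init__(self, action_dim, action_low, action_high,
                 gauss_sigma=0.1, noise_decay=0.999):
        self.ou = OrnsteinUhlenbeckNoise(action_dim)
        self.gauss_sigma = gauss_sigma
        self.noise_scale = 1.0
        self.noise_decay = noise_decay
        self.low, self.high = action_low, action_high

    def perturb(self, deterministic_action):
        noise = self.ou.sample() + np.random.normal(
            0.0, self.gauss_sigma, size=deterministic_action.shape)
        noisy = deterministic_action + self.noise_scale * noise
        self.noise_scale *= self.noise_decay  # anneal exploration as training proceeds
        return np.clip(noisy, self.low, self.high)

# Usage: wrap the actor's output before sending the command to the UAV simulator.
explorer = MixedNoiseExplorer(action_dim=2, action_low=-1.0, action_high=1.0)
actor_output = np.array([0.3, -0.1])   # stand-in for actor_net(state)
action = explorer.perturb(actor_output)
```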
Original language | English |
---|---|
Journal | Defence Technology |
DOIs | |
Publication status | Published - 27 Nov 2020 |
Keywords
- maneuvering target tracking
- multi-task
- deep reinforcement learning
- UAV
- meta-learning