Abstract
Developing efficient motion policies for multi-agent systems is challenging in decentralized dynamic settings, where each agent plans its own path without knowing the policies of the other agents involved. This paper presents an efficient learning-based motion planning method for multi-agent systems. It adopts the framework of multi-agent deep deterministic policy gradient (MADDPG) to map partially observed information directly to motion commands for multiple agents. To improve the sample efficiency of MADDPG, and thereby train more capable agents that can adapt to more complex environments, a mixing experience (ME) strategy is introduced into MADDPG, yielding the proposed ME-MADDPG algorithm. The ME strategy comprises three mechanisms: 1) an artificial potential field (APF) based sample generator that produces high-quality samples in the early training stage; 2) a dynamic mixed sampling strategy that blends training data from different sources in a variable proportion; 3) a delayed learning technique that stabilizes the training of the multiple agents. A series of experiments verifies the performance of ME-MADDPG: compared with MADDPG, the proposed algorithm significantly improves both convergence speed and convergence quality during training, and it shows better efficiency and adaptability when applied to multi-agent motion planning in complex dynamic environments.
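As a rough illustration of mechanisms 2) and 3), the Python sketch below mixes minibatches from an APF demonstration buffer and a self-exploration buffer in a decaying proportion, and gates network updates behind a warm-up period and an update interval. All names and hyperparameters here (`MixedReplay`, `mix_ratio`, `warmup`, `update_every`, and so on) are illustrative assumptions, not the authors' implementation.

```python
import random

class MixedReplay:
    """Sketch of the 'mixing experience' idea: two sample sources,
    blended in a variable proportion that anneals over training."""

    def __init__(self, initial_ratio=0.8, decay=0.999, min_ratio=0.0):
        self.apf_buffer = []            # high-quality APF-generated transitions
        self.agent_buffer = []          # transitions from the agents' own policies
        self.mix_ratio = initial_ratio  # fraction of each batch drawn from APF data
        self.decay = decay              # annealing factor toward pure self-play
        self.min_ratio = min_ratio

    def add(self, transition, from_apf=False):
        (self.apf_buffer if from_apf else self.agent_buffer).append(transition)

    def sample(self, batch_size):
        # Draw a variable proportion of the minibatch from each source;
        # cap by buffer sizes so random.sample never over-draws.
        n_apf = min(int(batch_size * self.mix_ratio), len(self.apf_buffer))
        n_agent = min(batch_size - n_apf, len(self.agent_buffer))
        batch = (random.sample(self.apf_buffer, n_apf)
                 + random.sample(self.agent_buffer, n_agent))
        # Rely less on expert samples as training proceeds.
        self.mix_ratio = max(self.min_ratio, self.mix_ratio * self.decay)
        return batch

def maybe_sample_for_update(step, replay, batch_size=1024,
                            warmup=5000, update_every=100):
    """Delayed-learning sketch: skip updates during a warm-up period and
    between update intervals to stabilize multi-agent training."""
    if step < warmup or step % update_every != 0:
        return None
    return replay.sample(batch_size)
```

With `initial_ratio=0.8`, early minibatches are dominated by APF demonstrations; as `mix_ratio` decays, training shifts toward the agents' own experience, matching the variable-proportion mixing described in the abstract.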
| Original language | English |
| --- | --- |
| Pages (from-to) | 2393-2427 |
| Number of pages | 35 |
| Journal | International Journal of Intelligent Systems |
| Volume | 37 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - 9 Dec 2021 |
Keywords
- Software
- Human-Computer Interaction
- Artificial Intelligence
- Theoretical Computer Science