Multi-robot behavior decision making method based on individual-collaborative trigger reinforcement learning*
CLC number: TH89; TP242.6    Document code: A    National standard subject classification code: 510.8050

*Fund projects: Supported by the Major Program of the National Natural Science Foundation of China (71991463, 71790615), the Integrated Project of the Major Research Plan of the National Natural Science Foundation of China (91846301), the Key Scientific Research Project of the Education Department of Hunan Province (18A303), the Hunan Provincial Social Science Foundation Project (18YBA272), the Hunan Provincial Social Science Review Committee Project (XSP18YBZ123), and the Open Research Fund of the Hunan Provincial Key Laboratory (1807)


Multirobot behavior decision making method based on individualcollaborative trigger reinforcement learning .txt
Author:
Affiliation:

Fund Project:


Abstract: In order to improve the efficiency and convergence speed of reinforcement learning in optimal decision-making control of multi-robot behavior, the distributed Markov modeling and control strategy for multi-robot systems are studied in this paper. According to the limited perception ability of the robots, an individual-cooperative trigger perception function is designed. Each robot computes an individual-cooperative trigger response probability from its environmental observations, and the joint-strategy calculation is defined to start only after a trigger event, which reduces the communication load and computing resources required among robots. The Q-learning algorithm is improved by introducing a dual-learning-rate strategy, and the improved algorithm is applied to robot behavior decision-making. Simulation results show that the proposed algorithm achieves high cooperative efficiency when the group contains about 20 robots, with a unit time-step ratio of 1.085 0. The distance adjustment parameter η also influences the cooperative search efficiency: when η = 0.008, both the required moving time-step ratio and the average moving distance reach their minimum values. Owing to the dual learning rate, the proposed algorithm attains higher learning efficiency and applicability than the environment-model-based reinforcement learning algorithm, with an average performance improvement of about 35%. The method is of theoretical significance and application value for improving the autonomous cooperative ability of multi-robot systems.
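The two core ingredients described in the abstract — a cooperative-trigger response probability computed from each robot's local observation, and a Q-learning update with two learning rates — can be sketched as follows. This is a minimal illustration only, not the paper's method: the exponential trigger form, the positive/negative TD-error split between the two learning rates, and all names and parameter values (`eta`, `alpha_pos`, `alpha_neg`) are assumptions for the sketch.

```python
import math


def trigger_probability(distance, eta=0.008):
    """Cooperative-trigger response probability for an observed neighbor
    at the given distance; eta is the distance adjustment parameter.
    An exponential decay is assumed here purely for illustration."""
    return math.exp(-eta * distance)


class DualRateQLearner:
    """Tabular Q-learning with two learning rates: a larger rate for
    positive TD errors and a smaller one for negative TD errors (one
    common way to realize a dual-learning-rate update; the paper's
    exact rule may differ)."""

    def __init__(self, n_states, n_actions,
                 alpha_pos=0.2, alpha_neg=0.05, gamma=0.9):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha_pos = alpha_pos
        self.alpha_neg = alpha_neg
        self.gamma = gamma

    def update(self, s, a, r, s_next):
        # Standard TD error against the greedy value of the next state.
        td = r + self.gamma * max(self.q[s_next]) - self.q[s][a]
        # Select one of the two learning rates by the sign of the TD error.
        alpha = self.alpha_pos if td >= 0 else self.alpha_neg
        self.q[s][a] += alpha * td
        return td
```

A robot would call `trigger_probability` on each neighbor observation and, once a trigger fires, enter the joint-strategy phase driven by `DualRateQLearner.update`; everything beyond the two formulas above is application scaffolding.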

Cite this article:

徐雪松, 曾智, 邵红燕, 杨胜杰, 李想. 基于个体协同触发强化学习的多机器人行为决策方法[J]. 仪器仪表学报, 2020, 41(5): 66-75

History
  • Available online: 2022-03-01