通信学报 (Journal on Communications)


On-policy temporal difference algorithm based on double-layer fuzzy partitioning

MU Xiang1, LIU Quan1,2, FU Qi-ming1, SUN Hong-kun1, ZHOU Xin1

  1. School of Computer Science and Technology, Soochow University, Suzhou 215006, China; 2. Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
  • Online: 2013-10-25; Published: 2013-10-15
  • Supported by:
    The National Natural Science Foundation of China (61070223, 61103045, 61070122, 61272005); the Natural Science Foundation of Jiangsu Province (BK2012616); the Natural Science Research Foundation of Jiangsu Higher Education Institutions (09KJA520002, 09KJB520012); the Foundation of the Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University (93K172012K04)

TD algorithm based on double-layer fuzzy partitioning

  • Online: 2013-10-25; Published: 2013-10-15

Abstract: To address the slow convergence of traditional Q-value iteration algorithms based on lookup tables or function approximation on continuous-space problems, and their difficulty in deriving continuous action policies, an on-policy temporal difference algorithm based on double-layer fuzzy partitioning, DFP-OPTD, is proposed and its convergence is analyzed theoretically. In the algorithm, the first layer of fuzzy partitioning acts on the state space, the second layer acts on the action space, and the Q-value function is computed by combining the two layers. Based on the resulting Q-value function, the consequent parameters of the fuzzy rules are updated by gradient descent. DFP-OPTD is applied to classical reinforcement learning problems, and experimental results show that the algorithm has good convergence performance and can derive continuous action policies.

Abstract: When dealing with continuous-space problems, traditional Q-value iteration algorithms based on lookup tables or function approximation converge slowly and have difficulty obtaining a continuous action policy. To overcome these weaknesses, an on-policy TD algorithm named DFP-OPTD was proposed based on double-layer fuzzy partitioning, and its convergence was proved. The first layer of fuzzy partitioning was applied to the state space, the second layer was applied to the action space, and the Q-value function was computed by combining the two layers of fuzzy partitioning. Based on the Q-value function, the consequent parameters of the fuzzy rules were updated by the gradient descent method. When DFP-OPTD was applied to two classical reinforcement learning problems, experimental results showed that the algorithm not only can be used to obtain a continuous action policy, but also has good convergence performance.
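
To make the construction described in the abstract concrete, the following is a minimal Python sketch of an on-policy TD (Sarsa-style) update over a two-layer fuzzy partition, written from the abstract alone rather than from the paper. The triangular membership functions, uniform spacing of the fuzzy sets, one-dimensional state and action, and every identifier (DoubleLayerFuzzyQ, td_update, theta, and so on) are illustrative assumptions, not the authors' DFP-OPTD implementation.

```python
# Minimal sketch, written from the abstract alone (not the authors' DFP-OPTD code).
# Assumptions: triangular membership functions, uniformly spaced fuzzy sets,
# one-dimensional state and action; all identifiers are illustrative.
import numpy as np

def memberships(x, centers, width):
    """Normalized triangular membership degrees of x in one fuzzy partition layer."""
    mu = np.maximum(0.0, 1.0 - np.abs(x - centers) / width)
    total = mu.sum()
    return mu / total if total > 0 else mu

class DoubleLayerFuzzyQ:
    def __init__(self, state_centers, action_centers, alpha=0.05, gamma=0.95):
        self.sc = np.asarray(state_centers, dtype=float)   # layer 1: partition of the state space
        self.ac = np.asarray(action_centers, dtype=float)  # layer 2: partition of the action space
        self.theta = np.zeros((self.sc.size, self.ac.size))  # consequent parameters of the fuzzy rules
        self.alpha, self.gamma = alpha, gamma

    def features(self, s, a):
        """Firing strength of each rule: product of state and action membership degrees."""
        phi_s = memberships(s, self.sc, self.sc[1] - self.sc[0])
        phi_a = memberships(a, self.ac, self.ac[1] - self.ac[0])
        return np.outer(phi_s, phi_a)

    def q(self, s, a):
        """Q(s, a) as the firing-strength-weighted sum of the consequent parameters."""
        return float(np.sum(self.features(s, a) * self.theta))

    def td_update(self, s, a, r, s_next, a_next, done=False):
        """On-policy TD(0) step: gradient descent on the consequents toward the Sarsa target."""
        target = r if done else r + self.gamma * self.q(s_next, a_next)
        delta = target - self.q(s, a)          # TD error
        # dQ/dtheta equals the firing-strength matrix, so the update spreads
        # the TD error over the rules that are currently active.
        self.theta += self.alpha * delta * self.features(s, a)
```

Under these assumptions, agent = DoubleLayerFuzzyQ(np.linspace(-1, 1, 9), np.linspace(-1, 1, 5)) would cover a normalized one-dimensional task, and a continuous greedy action could be approximated by weighting the action centers by their Q-values at the current state, a common fuzzy-inference readout; how DFP-OPTD itself derives its continuous policy is specified in the paper, not in the abstract, so it is not reproduced here.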
