[1] |
BROWN N , SANDHOLM T . Superhuman AI for multiplayer poker[J]. Science, 2019,365(6456): 885-890.
|
[2] |
SILVER D , HUANG A , MADDISON C J ,et al. Mastering the game of Go with deep neural networks and tree search[J]. Nature, 2016,529(7587): 484-489.
|
[3] |
卢卡斯·西蒙, 沈甜雨, 王晓 ,等. 基于统计前向规划算法的游戏通用人工智能[J]. 智能科学与技术学报, 2019,1(3): 219-227.
|
|
LUCAS S , SHEN T Y , WANG X ,et al. General game AI with statistical forward planning algorithms[J]. Chinese Journal of Intelligent Science and Technology, 2019,1(3): 219-227.
|
[4] |
HASSABIS D . Artificial intelligence:chess match of the century[J]. Nature, 2017,544(7651): 413-414.
|
[5] |
LAI M . Giraffe:using deep reinforcement learning to play chess[J]. arXiv preprint, 2015,arXiv:1509.01549.
|
[6] |
TORRADO R R , BONTRAGER P , TOGELIUS J ,et al. Deep reinforcement learning for general video game AI[C]// 2018 IEEE Conference on Computational Intelligence and Games. Piscataway:IEEE Press, 2018.
|
[7] |
CERTICKY M , CHURCHILL D . The current state of StarCraft AI competitions and bots[C]// The 13th Artificial Intelligence and Interactive Digital Entertainment Conference. Beijing:Tsinghua University Press, 2017.
|
[8] |
ARULKUMARAN K , CULLY A , TOGELIUS J . AlphaStar:an evolutionary computation perspective[C]// The Genetic and Evolutionary Computation Conference Companion. [S.l.:s.n.], 2019.
|
[9] |
ONTAÑÓN S , SYNNAEVE G , URIARTE A ,et al. RTS AI problems and techniques[M]. [S.l.:s.n.], 2015.
|
[10] |
TOGELIUS J . AI researchers,video games are your friends[C]// International Joint Conference on Computational Intelligence. Berlin:Springer, 2015.
|
[11] |
TURING A M . Computing machinery and intelligence[J]. Mind, 1950,59(236): 433-460.
|
[12] |
张宏达, 李德才, 何玉庆 . 人工智能与“星际争霸”:多智能体博弈研究新进展[J]. 无人系统技术, 2019,2(1): 5-16.
|
|
ZHANG H D , LI D C , HE Y Q . Artificial intelligence and StarCraft:new progress in multi-agent game research[J]. Unmanned Systems Technology, 2019,2(1): 5-16.
|
[13] |
MNIH V , KAVUKCUOGLU K , SILVER D ,et al. Human-level control through deep reinforcement learning[J]. Nature, 2015,518(7540): 529-533.
|
[14] |
BROWN N , SANDHOLM T . Superhuman AI for heads-up no-limit poker:Libratus beats top professionals[J]. Science, 2018,359(6374): 418-424.
|
[15] |
YE D H , LIU Z , SUN M F ,et al. Mastering complex control in MOBA games with deep reinforcement learning[J]. arXiv preprint, 2019,arXiv:1912.09729.
|
[16] |
SONG S , WENG J , SU H ,et al. Playing FPS games with environment-aware hierarchical reinforcement learning[C]// The 28th International Joint Conference on Artificial Intelligence. [S.l.:s.n.], 2019: 3475-3482.
|
[17] |
王飞跃 . 人工智能在多角色游戏中获胜[J]. 中国科学基金, 2020,34(2): 205-206.
|
|
WANG F Y . AI wins in multi-role games[J]. Bulletin of National Natural Science Foundation of China, 2020,34(2): 205-206.
|
[18] |
王飞跃 . 人工社会、计算实验、平行系统——关于复杂社会经济系统计算研究的讨论[J]. 复杂系统与复杂性科学, 2004,1(4): 25-35.
|
|
WANG F Y . Artificial society,computational experiment,parallel system:discussion on computational research of complex social and economic system[J]. Complex Systems and Complexity Science, 2004,1(4): 25-35.
|
[19] |
WANG F Y . Toward scientific games:an ACP-based approach[R]. 2010.
|
[20] |
LI L , LIN Y L , ZHENG N N ,et al. Parallel learning:a perspective and a framework[J]. IEEE/CAA Journal of Automatica Sinica, 2017,4(3): 389-395.
|
[21] |
PEARL J , MACKENZIE D . The book of why:the new science of cause and effect[J]. Science, 2018,361(6405): 855.
|
[22] |
LI L , WANG X , WANG K F ,et al. Parallel testing of vehicle intelligence via virtual-real interaction[J]. Science Robotics, 2019,4(28): eaaw4106.
|
[23] |
李宪港, 李强 . 典型智能博弈系统技术分析及指控系统智能化发展展望[J]. 智能科学与技术学报, 2020,2(1): 36-42.
|
|
LI X G , LI Q . Technical analysis of typical intelligent game system and development prospect of intelligent command and control system[J]. Chinese Journal of Intelligent Science and Technology, 2020,2(1): 36-42.
|
[24] |
叶佩军, 王飞跃 . 人工智能——原理与技术[M]. 北京: 清华大学出版社, 2020.
|
|
YE P J , WANG F Y . Artificial intelligence:principles and technology[M]. Beijing: Tsinghua University Press, 2020.
|