StarCraft is an important game for studying autonomous decision-making technology for future combat. Similarities between StarCraft and the autonomous decision-making process are described, and planning, learning, and the handling of uncertainty in decision-making algorithms for StarCraft are analyzed. First, the key problems of future combat autonomous decision-making technology are discussed in terms of decision complexity. Then, the article proposes building a large-scale war game to guide the development of future combat autonomous decision-making technologies, covering the system's top-level architecture, game-AI modeling technology, and large game engines, so as to provide a useful reference for the development of intelligent technology for autonomous decision systems.
HUANG Bincheng, CHEN Si, GAO Fang, GE Jianjun, WU Xueling. On future combat autonomous decision technology for StarCraft[J]. Science & Technology Review, 2021, 39(5): 117-125.
DOI: 10.3981/j.issn.1000-7857.2021.05.013