Invited Talks

Search and State Evaluation in Real-Time Strategy Games by Michael Buro

RTS games are fast-paced war simulations in which players instruct units to gather resources, scout, build armies, upgrade weapons, and fight. RTS games pose many AI challenges that are not present in classic perfect information games: the number of available moves is huge, action effects may be stochastic, the playing field is only partially observable, and the game proceeds at high speed even if players choose not to act. AI systems have improved considerably in recent years, but they are still no match for professional players. In this presentation I will give an overview of AI techniques used in RTS games, ranging from build-order optimization and simulation-based combat AI to hierarchical search and state evaluation based on convolutional neural networks.

Michael Buro is a full professor in the computing science department at the University of Alberta in Edmonton, Canada. He received his Ph.D. in 1994 for his work on Logistello, an Othello-playing program that later defeated the reigning human World champion 6-0 in 1997, while he worked as a research scientist at NEC Research Institute in Princeton, NJ. Michael's research is focused on heuristic search, machine learning, and applied game theory. In his current work, he is developing algorithms for adversarial planning, state/action inference and abstraction, and opponent modelling for complex real-time decision domains. Recent research highlights include the construction of the world's strongest Skat-playing program (based on recursive Monte Carlo for imperfect information games) and one of the strongest StarCraft programs, based on automated build-order planning and heuristic search applied to small-combat scenarios.


Search for Optimal Solutions: The Heart of Heuristic Search is Still Beating by Ariel Felner

The field of heuristic search has spawned a large number of subfields, such as finding good heuristics, abstracting state-spaces, and finding solutions of different qualities or that meet different requirements or constraints. However, a major research direction within the field of heuristic search is that of finding optimal solutions. In many cases optimal solutions are indeed needed and can be practically obtained (e.g., GPS navigation and computerized games). But, more importantly, studying the nature of optimal solutions and the algorithms for achieving them sheds light on the theoretical structure of the tasks and allows the derivation of algorithms and solutions for all the other subfields. Thus, research on finding optimal solutions is still at the heart of the field of heuristic search.

While the A* algorithm was proved to be optimally effective, there exist a large number of algorithms and research directions that enhance the A* family of algorithms and improve their performance. In this talk I'll cover a number of such recent algorithms. These algorithms assume a fixed heuristic function but exploit various algorithmic directions to improve upon A* in many ways along the following lines: better memory usage, improved generation of nodes, interleaving depth-first searches into A*, enhanced calculations of the heuristic, and recent developments in optimal bidirectional search.

Ariel Felner received his Ph.D. in 2002 from Bar-Ilan University, Israel, and is now an associate professor at Ben-Gurion University, Israel. He is the chair of the Israeli Association for Artificial Intelligence (IAAI) and a council member of SoCS. He is interested in all aspects of heuristic search and has performed research in various areas within the field. He pays particular attention to the pedagogical and historical aspects of teaching concepts in this field.