DeepMind, owned by Google, is an artificial intelligence research company developing programs to solve complex problems for the benefit of mankind. So obviously they trained their AI to play StarCraft. Once they had their champion, AlphaStar, they pitted it against professional StarCraft II players TLO (Dario Wünsch) and MaNa (Grzegorz Komincz). AlphaStar beat both humans 5-0.
I know what you’re thinking: the machines obviously won because humans are but filthy meat sacks, jabbing away at the keys with their inarticulate monkey paws. Well, no, actually. Turns out the AI had a far lower APM (actions per minute) than the human professionals, averaging around 280. TLO’s average was around 600, for comparison. AlphaStar also had a greater delay between observation and action, at 350 milliseconds on average.
Surely, you say, it must be because humans have to use their squidgy ocular organs to actually look at things while the AI sees all things at all times. Again, not really. AlphaStar could “see” the whole map (excluding stuff hidden by the fog of war, obvs) without having to zoom or scroll, but it still had to manage its focus of attention, and it switched focus about as often as the humans did. MaNa did manage to win a game against a version of the AI that had to use a camera interface like a human player, but that was a prototype that had only been training for seven days.
Researchers concluded that “AlphaStar’s success against MaNa and TLO was in fact due to superior macro and micro-strategic decision-making, rather than superior click-rate, faster reaction times, or the raw interface.”
Take comfort, dear reader, AlphaStar only knows how to play Protoss v. Protoss. For now.