Competitive computer games are challenging domains for artificial intelligence techniques. In such games, human players often resort to strategies, or game-playing policies, to guide their low-level actions. In this research, we propose a computational version of this behavior by modeling game playing as an algorithm selection problem: agents must map game states to algorithms so as to maximize their performance. By reasoning over algorithms instead of low-level actions, we reduce the complexity of decision making in computer games. With further simplifications of the game's state space, we were able to discuss game-theoretic concepts over aspects of real-time strategy games and to build a game-playing agent that successfully learns to select algorithms in AI tournaments. We plan to extend the approach to incomplete-information settings, in which the possible behaviors of the opponent are unknown.
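The mapping from game states to algorithms described above can be illustrated with a minimal sketch. The class, state labels, and algorithm names below are hypothetical and not taken from the paper; the sketch simply shows one way an agent could learn such a mapping, here via epsilon-greedy value estimation over candidate algorithms:

```python
import random

class AlgorithmSelector:
    """Illustrative sketch: learn which algorithm to run in each
    (abstracted) game state, treating selection as a bandit problem."""

    def __init__(self, algorithms, epsilon=0.1):
        self.algorithms = algorithms   # candidate game-playing algorithms
        self.epsilon = epsilon         # exploration rate
        self.values = {}               # (state, algorithm) -> value estimate
        self.counts = {}               # (state, algorithm) -> times chosen

    def select(self, state):
        # Explore occasionally; otherwise pick the algorithm with the
        # highest estimated value for this state.
        if random.random() < self.epsilon:
            return random.choice(self.algorithms)
        return max(self.algorithms,
                   key=lambda a: self.values.get((state, a), 0.0))

    def update(self, state, algorithm, reward):
        # Incremental-mean update of the value estimate.
        key = (state, algorithm)
        n = self.counts.get(key, 0) + 1
        self.counts[key] = n
        v = self.values.get(key, 0.0)
        self.values[key] = v + (reward - v) / n
```

For example, an agent repeatedly calling `select("early_game")` and updating with match outcomes would converge to the algorithm that performs best in that state, without ever reasoning about low-level actions directly.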