We develop an upper bound on the potential performance improvement an agent can gain by playing a best response to a model of its opponent rather than an uninformed game-theoretic equilibrium strategy. We show that the bound is a function only of the domain structure of an adversarial environment and does not depend on the actual actors in that environment. This bounds-finding technique enables system designers to determine whether, and what type of, opponent models would be profitable in a given adversarial environment. It also gives them a baseline value against which to compare the performance of instantiated opponent models. We demonstrate this method in two domains: selecting intelligence collection priorities for convoy defense and determining the value of predicting enemy decisions in a simplified war game.