We examine the long-term implications of two models of learning with recency bias: recursive weights and limited memory. We show that both models generate similar beliefs and that both have a weighted universal consistency property. Using the limited-memory model, we construct learning procedures that are weighted universally consistent and converge with probability one to strict Nash equilibrium.
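As an illustrative sketch only (the details below are assumptions, not taken from the paper), the two models of recency bias can be thought of as two ways of forming beliefs about an opponent's play: recursive weights discount past observations geometrically by a factor `lam`, while limited memory averages only the last `K` observations. The function names, `lam`, and `K` are all hypothetical.

```python
import numpy as np

def recursive_weight_beliefs(observations, lam=0.1, prior=0.5):
    """Recency-biased belief via geometric discounting (hypothetical sketch).

    Each period, the belief that the opponent plays action 1 is updated
    recursively: the new observation gets weight lam, the old belief (1 - lam),
    so the weight on an observation decays geometrically with its age.
    """
    belief = prior
    for obs in observations:  # obs is 0 or 1 (opponent's observed action)
        belief = (1 - lam) * belief + lam * obs
    return belief

def limited_memory_beliefs(observations, K=10, prior=0.5):
    """Recency-biased belief via a sliding window of the last K observations."""
    window = observations[-K:]
    if not window:
        return prior
    return float(np.mean(window))

# Example: the opponent switches from action 0 to action 1. Both models
# respond quickly to the recent change, which is the sense in which they
# generate similar (recency-weighted) beliefs.
history = [0] * 50 + [1] * 10
print(recursive_weight_beliefs(history))  # weights the recent 1s heavily
print(limited_memory_beliefs(history))    # averages only the last 10 plays
```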