The paper concerns learning in large-scale multi-agent games. The empirical centroid fictitious play (ECFP) algorithm is a variant of the well-known fictitious play algorithm that remains practical and computationally tractable in large-scale games. ECFP has been shown to be an effective tool for learning consensus equilibria (a subset of the Nash equilibria) in certain games. However, the behavior of ECFP has so far been characterized only in terms of convergence of the network-average empirical frequencies, as opposed to the more traditional notion of learning mixed equilibria, namely convergence of the individual empirical frequencies. The behavior of ECFP in terms of convergence of the individual empirical frequencies is studied here, and the equilibrium concept of mean-centric equilibrium (MCE) is introduced. MCE is similar in spirit to Nash equilibrium (NE), but in an MCE each player is in equilibrium with respect to a centroid representing the aggregate behavior, whereas in an NE players are in equilibrium with respect to the strategies of individual opponents. The MCE concept is well suited to large-scale games: it reflects the fact that in many large-scale games of interest, utilities are strongly affected by changes in the aggregate behavior but relatively insensitive to changes in the strategy of any single opponent. MCE is also well suited to large-scale games in that it can be learned using practical, low-information-overhead behavior rules such as ECFP.
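As a rough illustration of the centroid-based learning dynamic described above, the following sketch has each player best respond to the network-average (centroid) of the empirical action frequencies rather than to individual opponents' frequencies. The coordination-style utility, the parameter values, and the function names are illustrative assumptions for exposition, not the paper's formal construction:

```python
import numpy as np

def ecfp_sketch(num_players, num_actions, utility, T):
    """Illustrative sketch of a centroid-based fictitious-play dynamic.

    Each player tracks only the centroid (network-average) of the
    empirical action frequencies and best responds to it, rather than
    tracking each opponent's empirical frequency individually.
    """
    # empirical frequency of each player's past actions, initialized uniform
    q = np.full((num_players, num_actions), 1.0 / num_actions)
    for t in range(1, T + 1):
        centroid = q.mean(axis=0)  # network-average empirical frequency
        # each player best responds to the centroid (ties broken by argmax)
        actions = [int(np.argmax(utility(i, centroid))) for i in range(num_players)]
        # recursive running-average update of each empirical frequency
        for i, a in enumerate(actions):
            e = np.zeros(num_actions)
            e[a] = 1.0
            q[i] += (e - q[i]) / (t + 1)
    return q, q.mean(axis=0)

# hypothetical coordination utility: an action's payoff grows with the
# centroid mass already on it, so players want to match the aggregate
payoff = lambda i, centroid: centroid

q, centroid = ecfp_sketch(num_players=5, num_actions=3, utility=payoff, T=200)
```

Under this assumed coordination utility the players lock onto a common action and the centroid concentrates on it, mirroring the consensus-style behavior the abstract attributes to ECFP; the low information overhead comes from each player needing only the single centroid vector rather than all opponents' frequencies.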