Effective norms, emerging from sustained individual interactions over time, can complement societal rules and significantly enhance the performance of individual agents and of agent societies. Researchers have used a model that supports the emergence of social norms via learning from interaction experiences, where each interaction is viewed as a stage game. In this social learning model, which is distinct from an agent learning from repeated interactions against the same player, an agent learns a policy for playing the game from repeated interactions with multiple learning agents. The key research question is to characterize when and how an entire population of homogeneous learners converges to a consistent norm when multiple action combinations yield the same optimal payoff. In this paper we study two extensions to the social learning model that significantly enhance its applicability. We first explore the effects of heterogeneous populations, where different agents may use different learning algorithms. We also investigate norm emergence when agent interactions are physically constrained: we consider agents located on a grid, where an agent is more likely to interact with agents situated nearby than with those farther away. A key new result is the surprising acceleration of learning under limited interaction ranges. We also study the effects of pure-strategy players, i.e., non-learners, in the environment.
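The social learning model described above can be illustrated with a minimal sketch. The abstract does not fix the learning rule, the stage game, or the parameter values, so the following are all illustrative assumptions: epsilon-greedy Q-learning agents, a 2-action coordination game paying 1 for matching actions and 0 otherwise, and random pairwise matching of agents at each step.

```python
import random

ACTIONS = [0, 1]

class QLearner:
    """Illustrative single-state Q-learner (an assumption; the model
    itself does not mandate a particular learning algorithm)."""

    def __init__(self, alpha=0.3, epsilon=0.1):
        self.q = {a: 0.0 for a in ACTIONS}
        self.alpha = alpha      # learning rate
        self.epsilon = epsilon  # exploration probability

    def act(self):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[a])

    def update(self, action, reward):
        # Q-update for a repeatedly played stage game with a single state.
        self.q[action] += self.alpha * (reward - self.q[action])

def social_learning(n_agents=20, episodes=2000, seed=0):
    """Each episode, two randomly chosen agents play one stage game;
    agents thus learn from interactions with many different partners."""
    random.seed(seed)
    agents = [QLearner() for _ in range(n_agents)]
    for _ in range(episodes):
        i, j = random.sample(range(n_agents), 2)
        a, b = agents[i].act(), agents[j].act()
        reward = 1.0 if a == b else 0.0  # coordination payoff (assumed game)
        agents[i].update(a, reward)
        agents[j].update(b, reward)
    # Measure norm emergence: fraction of agents whose greedy action
    # agrees with the population majority.
    greedy = [max(ACTIONS, key=lambda x: ag.q[x]) for ag in agents]
    return max(greedy.count(a) for a in ACTIONS) / n_agents

if __name__ == "__main__":
    print(f"fraction following the majority norm: {social_learning():.2f}")
```

Because both action combinations (0,0) and (1,1) yield the same optimal payoff, which action becomes the norm depends on the random history of interactions; the question studied in the paper is when and how the population nonetheless converges on a single one. The grid extension would replace the uniform `random.sample` pairing with a distance-biased choice of partners.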