Distributed collaborative learning has enabled building machine learning models from mobile users' distributed data. It allows a server and users to collaboratively train a learning model where users share only model parameters with the server. To protect privacy, the server can use secure multiparty computation to learn the global model without seeing users' parameter updates in the clear. However, this privacy-preserving distributed learning opens the door to poisoning attacks, in which malicious users poison their training data to influence the behavior of the global model. In this paper, we propose MLGuard, a privacy-preserving distributed collaborative learning system with poisoning attack mitigation. MLGuard employs a lightweight secret sharing scheme and a novel poisoning attack mitigation technique. We address several challenges: preserving users' privacy, mitigating poisoning attacks, respecting the resource constraints of mobile devices, and scaling to a large number of users. Evaluation results demonstrate the effectiveness of MLGuard in building highly accurate learning models in the presence of malicious users, while imposing minimal communication cost on mobile devices.
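The secret-sharing idea behind such secure aggregation can be illustrated with a minimal additive-sharing sketch. This is an assumption-laden toy, not MLGuard's actual protocol: the prime modulus, the two-aggregator setup, and the function names are all illustrative choices, and real systems operate on quantized high-dimensional update vectors rather than single integers.

```python
import random

# Illustrative prime modulus; MLGuard's actual parameters are not specified here.
PRIME = 2**61 - 1

def share(value, n_shares):
    """Split an integer update into additive shares that sum to it mod PRIME.
    Any proper subset of shares is uniformly random and reveals nothing."""
    shares = [random.randrange(PRIME) for _ in range(n_shares - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def aggregate(all_shares):
    """Each aggregator sums one share per user; combining the partial
    sums reveals only the total, never an individual user's update."""
    partial_sums = [sum(column) % PRIME for column in zip(*all_shares)]
    return sum(partial_sums) % PRIME

# Three users, each holding one (quantized) parameter update.
updates = [5, 12, 30]
shared = [share(u, n_shares=2) for u in updates]
total = aggregate(shared)  # equals sum(updates) mod PRIME, i.e. 47
```

Additive sharing of this kind is cheap for mobile devices (each user only generates random masks and sends one share per aggregator), which is consistent with the lightweight, communication-efficient design the abstract describes.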