Motivation - To investigate ways to support human-automation teams working with real-world, imperfect automation, where many system failures result from systematic error.

Research approach - An experimental approach was used to investigate how variance in agent reliability influences human operators' trust in, and subsequent reliance on, agents' decision aids. Sixty command and control (C2) teams, each consisting of a human operator and two cognitive agents, were asked to detect and respond to battlefield threats in six ten-minute scenarios. At the end of each scenario, participants completed Situation Awareness Global Assessment Technique (SAGAT) queries, followed by NASA Task Load Index (NASA-TLX) queries.

Findings/Design - Results revealed that teams with experienced human operators accepted significantly fewer inappropriate recommendations from agents than teams with inexperienced operators. More importantly, knowledge of agent reliability and the ratio of unreliable tasks had significant effects on operators' trust, as manifested in both team performance and operators' rectification of inappropriate agent recommendations.

Originality/Value - This work represents an important step toward uncovering the nature of human trust in human-agent collaboration.

Take away message - This research has shown that even a minimal basis for understanding when operators should and should not trust agent recommendations allows them to make better automation use decisions (AUDs), to maintain better situation awareness of the critical issues associated with automation error, and to establish better-calibrated trust in intelligent agents.