In previous work we presented a computational framework that allows a robot or agent to reason about whether it should trust an interactive partner, or whether that partner trusts the robot (Wagner & Arkin, 2011). This article examines the use of this framework in a well-known situation for studying trust: the Investor-Trustee game (King-Casas, Tomlin, Anen, Camerer, Quartz, & Montague, 2005). Our experiment pits the robot against a person in this game and explores the impact of recognizing and responding to trust signals. Our results demonstrate that recognizing that a person has intentionally placed themselves at risk allows the robot to reciprocate and, by doing so, improve both individuals' play in the game. This work has implications for home healthcare, search and rescue, and military applications.
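
To make the game referenced above concrete, here is a minimal sketch of one round of the Investor-Trustee game in its standard form (as in King-Casas et al., 2005): the investor sends some portion of an endowment, the amount is multiplied in transit, and the trustee repays a fraction of the multiplied pot. The function name, parameter names, and the default multiplier of 3 are illustrative assumptions, not details taken from the article's experiment.

```python
def play_round(endowment: float, invested: float, repaid_fraction: float,
               multiplier: float = 3.0) -> tuple[float, float]:
    """Return (investor_payoff, trustee_payoff) for one round.

    Sketch of the standard Investor-Trustee game: the amount invested
    is a signal of trust, and the fraction repaid is a signal of
    reciprocity. All names and the multiplier are assumptions.
    """
    assert 0.0 <= invested <= endowment
    assert 0.0 <= repaid_fraction <= 1.0
    pot = invested * multiplier          # investment is multiplied in transit
    repaid = pot * repaid_fraction       # trustee returns a share of the pot
    investor_payoff = endowment - invested + repaid
    trustee_payoff = pot - repaid
    return investor_payoff, trustee_payoff

# Full investment plus an even split rewards the investor's trust:
# 10 invested becomes a pot of 30; repaying half gives each player 15.
investor, trustee = play_round(endowment=10.0, invested=10.0,
                               repaid_fraction=0.5)
```

In this framing, the investor placing money at risk is exactly the kind of intentional trust signal the article's experiment has the robot recognize and reciprocate.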