In this paper, we present results from a human-subject study designed to explore two facets of human mental models of robots, inferred capability and intention, and their relationship to overall trust and eventual decisions. In particular, we examine delegation situations characterized by uncertainty, and explore how inferred capability and intention are applied across different tasks. We develop an online survey in which human participants decide whether to delegate control to a simulated UAV agent. Our study shows that human estimations of robot capability and intent correlate strongly with overall self-reported trust. However, overall trust is not independently sufficient to determine whether a human will decide to trust (delegate) a given task to a robot. Instead, our study reveals that estimations of robot intention, capability, and overall trust are integrated when deciding to delegate. From a broader perspective, these results suggest that calibrating overall trust alone is insufficient; to make correct decisions, humans need (and use) multifaceted mental models when collaborating with robots across multiple contexts.

INTRODUCTION

Trust is a cornerstone of long-lasting collaboration in human teams, and is crucial for human-robot cooperation. For example, human trust in robots influences usage, and willingness to accept information or suggestions. Misplaced trust in robots can lead to poor task allocation and unsatisfactory outcomes.