A growing body of empirical evidence shows the effectiveness of actors engaged in different collaborative governance arrangements in addressing environmental problems. However, studies also show that actors sometimes collaborate only as a means of advocating their own interests, while largely lacking a willingness to contribute towards jointly negotiated solutions to common problems. Hence, collaboration is sometimes unable to deliver any tangible outcomes, or merely produces symbolic outcomes such as aggregated wish lists in which conflicts of interest are left untouched. Clearly, no single blueprint exists for how to succeed with collaborative approaches to solving environmental problems. One way of approaching this puzzle is through the lens of the participating actors and the ways in which they engage in collaboration with each other.
This thesis aims to provide a foundation for designing computer agents able to work better with people and with other agents in heterogeneous groups. When agents work together on a collaborative activity, in addition to performing their share of the activity, they may be able to help one another and thus improve the collective utility. The thesis specifically investigates how, when, and what kinds of helpful behavior should emerge when agents collaborate, taking into account the costs of a helpful action. It considers collaborative activities that take place in settings in which there is uncertainty about agents' capabilities and about the state of the world. To ensure that helpful behavior improves the overall benefit of the collaboration, the thesis incorporates decision-theoretic mechanisms for managing helpful behavior into existing formalizations of collaborative activity. It investigates how people perceive the usefulness of helpful actions when proposed by a computer agent, and it proposes incentives for facilitating collaboration among self-interested agents. In addition to these theoretical and empirical contributions, the thesis applies its findings to several real-life application domains with different characteristics.
Interacting actions — actions whose joint effect differs from the union of their individual effects — are challenging both to represent and to plan with, due to their combinatorial nature. So far, there have been few attempts to provide a succinct language for representing them that can also support efficient centralized and distributed privacy-preserving planning. In this paper we suggest an approach for representing interacting actions succinctly and show how such a domain model can be compiled into a standard single-agent planning problem as well as into a privacy-preserving multi-agent planning problem. We test the performance of our method on a number of novel domains involving interacting actions and privacy.
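The compilation idea can be illustrated with a toy sketch. This is not the paper's actual formalism — the action encoding, the `compile_interactions` helper, and the block-pushing domain are all illustrative assumptions — but it shows the basic move: keep each individual action, and add one explicit "joint" operator per interacting group whose effect is the union of the members' effects plus the extra joint effect, so an off-the-shelf single-agent planner can reason about the interaction.

```python
def make_action(name, pre, eff):
    """A minimal STRIPS-like action: name, precondition facts, effect facts."""
    return {"name": name, "pre": frozenset(pre), "eff": frozenset(eff)}

def compile_interactions(actions, interactions):
    """Compile a domain with interacting actions into a single-agent action set.

    `interactions` is a list of (group, joint_eff) pairs: when all actions in
    `group` are executed together, `joint_eff` holds in addition to their
    individual effects. We materialize each group as one new joint action.
    """
    by_name = {a["name"]: a for a in actions}
    compiled = list(actions)  # individual actions are kept as-is
    for group, joint_eff in interactions:
        members = [by_name[n] for n in group]
        compiled.append(make_action(
            "+".join(group),
            frozenset().union(*(m["pre"] for m in members)),
            frozenset().union(*(m["eff"] for m in members)) | frozenset(joint_eff),
        ))
    return compiled

# Toy domain: pushing a block from one side merely nudges it; pushing from
# both sides at once crushes it -- an effect neither action has on its own.
acts = [
    make_action("push_left",  {"at_left"},  {"nudged"}),
    make_action("push_right", {"at_right"}, {"nudged"}),
]
inter = [(("push_left", "push_right"), {"crushed"})]

for a in compile_interactions(acts, inter):
    print(a["name"], sorted(a["eff"]))
```

The compiled set contains the two original actions plus a `push_left+push_right` joint action whose effects include `crushed`; a standard planner can then select the joint action whenever the interaction's combined effect is needed. The cost of this naive compilation is the combinatorial blow-up in the number of joint actions, which is exactly what a succinct representation aims to avoid.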
For a while, it looked like Rethink Robotics would shake up the world with its collaborative robots: rather than having to write code, workers could teach bots to perform tasks by guiding them through the process. However, the market doesn't appear to have shared its vision. Rethink has suddenly shut down after a potential buyer backed out of a deal. Sales of Baxter and Sawyer robots weren't meeting expectations, Rethink chief Scott Eckert said, leaving the company low on cash. It really needed this acquisition to go through, in other words.