Measuring the Efficiency of Charitable Giving with Content Analysis and Crowdsourcing

AAAI Conferences

In the U.S., individuals give more than 200 billion dollars to over 50 thousand charities each year, yet how people make these choices is not well understood. In this study, we use data from CharityNavigator.org and web browsing data from the Bing toolbar to understand charitable giving choices. Our main goal is to use data on charities' overhead expenses to better understand efficiency in the charity marketplace. A preliminary analysis indicates that the average donor is "wasting" more than 15% of their contribution by opting for poorly run organizations over higher-rated charities in the same Charity Navigator categorical group. However, charities within these groups may not represent good substitutes for each other. We use text analysis to identify substitutes for charities based on their stated missions and validate these substitutes with crowdsourced labels. Using these similarity scores, we simulate market outcomes using web browsing and revenue data. With more realistic similarity requirements, the estimated loss drops by 75%: much of what looked like inefficient giving can be explained by crowd-validated similarity requirements that are not fulfilled by most charities within the same category. A choice experiment helps us further investigate the extent to which a recommendation system could impact the market. The results indicate that money could be redirected away from the long tail of inefficient organizations. If widely adopted, the savings would be in the billions of dollars, highlighting the role the web could play in shaping this important market.
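The similarity step described above can be illustrated with a minimal sketch: pairwise cosine similarity over TF-IDF vectors of mission statements, with high-scoring pairs treated as candidate substitutes to be checked against crowdsourced labels. The charity names, statements, library choice, and threshold below are illustrative assumptions, not the paper's actual pipeline or data.

# Illustrative sketch (assumed approach, not the authors' exact method):
# score mission-statement similarity between charities with TF-IDF + cosine.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical mission statements keyed by charity name.
missions = {
    "Charity A": "We provide clean drinking water to rural communities.",
    "Charity B": "Bringing safe water and sanitation to villages worldwide.",
    "Charity C": "Funding after-school music programs for children.",
}

names = list(missions.keys())
tfidf = TfidfVectorizer(stop_words="english")
matrix = tfidf.fit_transform([missions[n] for n in names])
scores = cosine_similarity(matrix)

# Treat pairs above an (assumed) threshold as candidate substitutes,
# to be validated later with crowdsourced labels.
THRESHOLD = 0.3
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if scores[i, j] >= THRESHOLD:
            print(f"{names[i]} <-> {names[j]}: {scores[i, j]:.2f}")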


Distributed Knowledge in Crowds: Crowd Performance on Hidden Profile Tasks

AAAI Conferences

Individuals today discuss information and form judgments as crowds in online communities and platforms. "Wisdom of the crowd" arguments suggest that, in theory, crowds have the capacity to bring together diverse expertise, pooling distributed knowledge and thereby solving challenging and complex problems. This paper concerns one way that crowds might fall short of this ideal. A large body of research in the social psychology of small groups concerns the shared information bias, a tendency for group members to focus on common knowledge at the expense of rarer information that only one or a few individuals might possess. We investigated whether this well-known bias for small groups also impacts larger crowds of 30 participants working on Amazon's Mechanical Turk. We found that crowds failed to adequately pool distributed facts; that they were partially biased in how they shared facts; and that individual perceptions of group decisions were unstable. Nonetheless, we found that aggregating individual reports from the crowd resulted in moderate performance on the assigned task.
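As a rough illustration of the aggregation step mentioned above, the sketch below pools 30 hypothetical individual reports by plurality vote. The option labels and counts are invented, and the paper's actual hidden-profile task and aggregation rule may differ.

# Minimal sketch of aggregating individual crowd reports by plurality vote.
from collections import Counter

# Each of 30 hypothetical workers independently reports which option
# they believe the pooled evidence supports.
reports = ["A"] * 13 + ["B"] * 11 + ["C"] * 6

def aggregate(reports):
    """Return the most frequently reported option and the full tally."""
    counts = Counter(reports)
    winner, _ = counts.most_common(1)[0]
    return winner, counts

winner, counts = aggregate(reports)
print(winner, dict(counts))  # e.g. 'A' {'A': 13, 'B': 11, 'C': 6}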


Rise of the robot teachers: iPad apps can teach children as well as humans, says study

Daily Mail - Science & tech

Researchers compared how well children learnt from an iPad app to how well they learnt speaking in person with an instructor.


An Assessment of Intrinsic and Extrinsic Motivation on Task Performance in Crowdsourcing Markets

AAAI Conferences

Crowdsourced labor markets represent a powerful new paradigm for accomplishing work. Understanding the motivating factors that lead to high quality work could have significant benefits. However, researchers have so far found that motivating factors such as increased monetary reward generally increase workers’ willingness to accept a task or the speed at which a task is completed, but do not improve the quality of the work. We hypothesize that factors that increase the intrinsic motivation of a task – such as framing a task as helping others – may succeed in improving output quality where extrinsic motivators such as increased pay do not. In this paper we present an experiment testing this hypothesis along with a novel experimental design that enables controlled experimentation with intrinsic and extrinsic motivators in Amazon’s Mechanical Turk, a popular crowdsourcing task market. Results suggest that intrinsic motivation can indeed improve the quality of workers’ output, confirming our hypothesis. Furthermore, we find a synergistic interaction between intrinsic and extrinsic motivators that runs contrary to previous literature suggesting “crowding out” effects. Our results have significant practical and theoretical implications for crowd work.
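The interaction claim can be illustrated with a hedged analysis sketch for a 2x2 design (intrinsic framing by pay level) using an ordinary least squares model with an interaction term. The data below are synthetic and the coefficients are invented solely to show the shape of such an analysis, not the paper's design details or results.

# Hedged sketch: testing for an interaction between an intrinsic framing
# manipulation and pay level on output quality. Synthetic data only; the
# assumed positive interaction is for demonstration, not a reported result.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "intrinsic": rng.integers(0, 2, n),   # 1 = "helping others" framing
    "high_pay":  rng.integers(0, 2, n),   # 1 = higher monetary reward
})
# Synthetic quality score with an assumed positive interaction term.
df["quality"] = (
    0.5 * df["intrinsic"] + 0.1 * df["high_pay"]
    + 0.4 * df["intrinsic"] * df["high_pay"]
    + rng.normal(0, 1, n)
)

model = smf.ols("quality ~ intrinsic * high_pay", data=df).fit()
print(model.summary().tables[1])  # the interaction row tests the synergy claim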


Researchers use Twitter, AI to develop flood warning system

#artificialintelligence

Researchers are combining Twitter, citizen science and artificial intelligence (AI) techniques to develop an early-warning system for flood-prone communities in urban areas.