Facebook apologises for psychological experiments on users

The Guardian

Facebook's second most powerful executive, Sheryl Sandberg, has apologised for the conduct of secret psychological tests on nearly 700,000 users in 2012, which prompted outrage from users and experts alike. The experiment, revealed by a scientific paper published in the March issue of the Proceedings of the National Academy of Sciences, hid "a small percentage" of emotional words from people's news feeds, without their knowledge, to test what effect that had on the statuses or "likes" that they then posted or reacted to. "This was part of ongoing research companies do to test different products, and that was what it was; it was poorly communicated," said Sandberg, Facebook's chief operating officer, speaking in New Delhi. "We never meant to upset you." The statement by Sandberg, deputy to chief executive Mark Zuckerberg, is a marked climbdown from the company's insistence on Tuesday that the experiment was covered by its terms of service.


Scientists Are Just as Confused About the Ethics of Big-Data Research as You

WIRED

When a rogue researcher last week released 70,000 OkCupid profiles, complete with usernames and sexual preferences, people were pissed. When Facebook researchers manipulated stories appearing in users' News Feeds for a mood contagion study in 2014, people were really pissed. OkCupid filed a copyright claim to take down the dataset; the journal that published Facebook's study issued an "expression of concern." Outrage has a way of shaping ethical boundaries. Shockingly, though, the researchers behind both of those big data blowups never anticipated public outrage.


"I Didn't Sign Up for This!": Informed Consent in Social Network Research

AAAI Conferences

The issue of whether, and how, to obtain informed consent for research studies that use social network data has recently come to the fore in some controversial cases. Determining how to acquire valid consent that meets the expectations of participants, while minimising the burden placed on them, remains an open problem. We apply Nissenbaum's model of contextual integrity to the consent process, to study whether social norms of willingness to share social network data can be leveraged to avoid burdening participants with too many interventions, while still accurately capturing their own sharing intent. We find that for the 27.7% of our participants (N = 109) who conform to social norms, contextual integrity can be used to significantly reduce the time taken to capture their consent, while still maintaining accuracy. Our findings have implications for researchers conducting such studies who are looking to acquire informed consent without having to burden participants with many interactions.
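A minimal sketch of the kind of norm-based shortcut the abstract describes, assuming a hypothetical SOCIAL_NORM table and collect_consent helper (neither is from the paper): participants whose answers to a few probe questions match the prevailing norm are not asked explicitly about every remaining item.

```python
# Illustrative sketch only, not the paper's implementation: a consent flow
# that asks a few probe questions and, for norm-conforming participants,
# infers the remaining choices from a (hypothetical) social norm.

# Hypothetical norm: the sharing choice most participants make per data type.
SOCIAL_NORM = {
    "friend_list": True,
    "public_posts": True,
    "private_messages": False,
    "likes": True,
    "location_history": False,
}

def collect_consent(participant_answers, probe_items=("friend_list", "private_messages")):
    """Ask only the probe items; if the participant matches the norm on those,
    assume the norm for the rest, otherwise require an explicit answer for each."""
    conforms = all(participant_answers[item] == SOCIAL_NORM[item] for item in probe_items)
    consent = {}
    for item in SOCIAL_NORM:
        if item in probe_items or not conforms:
            consent[item] = participant_answers[item]  # explicit answer used
        else:
            consent[item] = SOCIAL_NORM[item]          # inferred from the norm
    return consent, conforms

# Example: a norm-conforming participant only needs the two probe questions.
answers = {"friend_list": True, "private_messages": False,
           "public_posts": True, "likes": True, "location_history": False}
print(collect_consent(answers))
```

The design choice being illustrated is simply that fewer interventions are needed for the majority who conform to the norm, while non-conforming participants still answer every item.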


No Data in the Void: Values and Distributional Conflicts in Empirical Policy Research and Artificial Intelligence (Economics for Inclusive Prosperity)

#artificialintelligence

Economics has experienced an empirical turn in the last few decades. We have entered an era of big data, machine learning, and artificial intelligence. Experimental methods have greatly increased in importance in both the social and life sciences. And recent efforts at reforming the publication system promise to improve the replicability and credibility of published findings. One might be tempted to conclude that this increased availability of and reliance on quantitative evidence allows us to dispense with the normative judgements of earlier days. I will argue that the opposite is the case. The choice of objective functions, which define our goals, and of the set of policies to be considered matters ever more in all of these contexts. A famous example in debates about the dangers of artificial intelligence (AI) is the hypothetical AI system with the objective of producing as many paperclips as possible. If sufficiently capable, such an AI system might end up annihilating humanity in the pursuit of this objective. Another example is the design of experiments. The majority of experiments in the social and life sciences are designed based on the (implicit) objective of obtaining precise estimates of causal effects. Such experiments randomly assign treatments using fixed probabilities.
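The closing sentences describe the standard design whose implicit objective is a precise causal-effect estimate: assign each unit to treatment with a fixed probability and estimate the effect by a difference in means. A minimal, hedged sketch of that baseline (the simulated data and parameter names are illustrative, not from the paper):

```python
# Illustrative baseline: fixed-probability random assignment and a
# difference-in-means estimate of the average treatment effect.
import random

def run_experiment(n=1000, p_treat=0.5, effect=0.3, seed=0):
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n):
        y0 = rng.gauss(0.0, 1.0)          # untreated outcome (simulated)
        if rng.random() < p_treat:        # fixed assignment probability
            treated.append(y0 + effect)   # simulated additive treatment effect
        else:
            control.append(y0)
    # Difference-in-means estimator of the average treatment effect.
    return sum(treated) / len(treated) - sum(control) / len(control)

print(run_experiment())  # should land near the true effect of 0.3
```

The abstract's point is that this design already encodes a normative choice of objective; other objectives (for example, participant welfare during the experiment) would lead to different assignment rules.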


A comparative study of artificial intelligence and human doctors for the purpose of triage and diagnosis

arXiv.org Artificial Intelligence

Online symptom checkers have significant potential to improve patient care; however, their reliability and accuracy remain variable. We hypothesised that an artificial intelligence (AI) powered triage and diagnostic system would compare favourably with human doctors with respect to triage and diagnostic accuracy. We performed a prospective validation study of the accuracy and safety of an AI-powered triage and diagnostic system. Identical cases were evaluated by both an AI system and human doctors. Differential diagnoses and triage outcomes were evaluated by an independent judge, who was blinded to the source (AI system or human doctor) of the outcomes. Independently of these cases, vignettes from publicly available resources were also assessed to provide a benchmark against previous studies and the diagnostic component of the MRCGP exam. Overall, we found that the Babylon AI-powered Triage and Diagnostic System was able to identify the condition modelled by a clinical vignette with accuracy comparable to that of human doctors (in terms of precision and recall). In addition, we found that the triage advice recommended by the AI system was, on average, safer than that of human doctors, when compared against the ranges of acceptable triage provided by independent expert judges, with only a minimal reduction in appropriateness.
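As a hedged illustration of the precision and recall comparison mentioned above, one can score a differential-diagnosis list against the condition(s) a vignette models; the function and example conditions below are assumptions for illustration, not the study's evaluation code.

```python
# Illustrative only: precision and recall of a predicted differential
# against the correct condition(s) for a vignette.

def precision_recall(predicted, relevant):
    """predicted: set of conditions in the differential;
    relevant: set of correct conditions for the vignette."""
    true_positives = len(predicted & relevant)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

# Example: a three-item differential containing the single correct condition.
print(precision_recall({"migraine", "tension headache", "sinusitis"},
                       {"migraine"}))  # -> (0.333..., 1.0)
```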