Threats of a Replication Crisis in Empirical Computer Science

Communications of the ACM

Andy Cockburn (andy.cockburn@canterbury.ac.nz) is a professor at the University of Canterbury, Christchurch, New Zealand, where he is head of the HCI and Multimedia Lab. Pierre Dragicevic is a research scientist at Inria, Orsay, France.


What has happened down here is the winds have changed - Statistical Modeling, Causal Inference, and Social Science

#artificialintelligence

Someone sent me this article by psychology professor Susan Fiske, scheduled to appear in the APS Observer, a magazine of the Association for Psychological Science. The article made me a little bit sad, and I was inclined to just keep my response short and sweet, but then it seemed worth the trouble to give some context. I'll first share the article with you, then give my take on what I see as the larger issues. The title and headings of this post allude to the fact that the replication crisis has redrawn the topography of science, especially in social psychology, and I can see that to people such as Fiske who'd adapted to the earlier lay of the land, these changes can feel catastrophic. I will not be giving any sort of point-by-point refutation of Fiske's piece, because it's pretty much all about internal goings-on within the field of psychology (careers, tenure, smear tactics, people trying to protect their labs, public-speaking sponsors, career-stage vulnerability), and I don't know anything about this, as I'm an outsider to psychology and I've seen very little of this sort of thing in statistics or political science. Since I don't know enough about the academic politics of psychology to comment on most of what Fiske writes about, what I'll mostly be talking about is how her attitudes, distasteful as I find them both in substance and in expression, can be understood in light of the recent history of psychology and its replication crisis. In short, Fiske doesn't like when people use social media to publish negative comments on published research. She's implicitly following what I've sometimes called the research incumbency rule: that, once an article is published in some approved venue, it should be taken as truth.


Science Needs to Learn How to Fail So It Can Succeed

WIRED

Social science is great at making wacky, wonderful claims about the way the world (and the human mind) works. College students walk more slowly after being exposed to words relating to elderly people. Elections are determined by the outcome of college football games. Obesity is contagious; you can have business success by standing in an expansive "power pose"; baseball players with a K in their name are more likely to strike out; and hurricanes with girl names are more dangerous than hurricanes with boy names. Andrew Gelman is a professor of statistics and political science at Columbia University.


Two unrelated topics in one post: (1) Teaching useful algebra classes, and (2) doing more careful psychological measurements

#artificialintelligence

Kevin Lewis and Paul Alper send me so much material that I think they need their own blogs. In the meantime, I keep posting the stuff they send me, as part of my desperate effort to empty my inbox. "Should Students Assessed as Needing Remedial Mathematics Take College-Level Quantitative Courses Instead? A Randomized Controlled Trial," by A. W. Logue, Mari Watanabe-Rose, and Daniel Douglas, begins: Many college students never take, or do not pass, required remedial mathematics courses theorized to increase college-level performance. Some colleges and states are therefore instituting policies allowing students to take college-level courses without first taking remedial courses.


[Perspective] Measurement error and the replication crisis

Science

Measurement error adds noise to predictions, increases uncertainty in parameter estimates, and makes it more difficult to discover new phenomena or to distinguish among competing theories. A common view is that any study finding an effect under noisy conditions provides evidence that the underlying effect is particularly strong and robust. Yet, statistical significance conveys very little information when measurements are noisy. In noisy research settings, poor measurement can contribute to exaggerated estimates of effect size. This problem and related misunderstandings are key components in a feedback loop that perpetuates the replication crisis in science.
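
To make the point concrete, here is a minimal simulation sketch in Python. The parameter choices (a true standardized effect of 0.2, 20 participants per group, and measurement noise that doubles the outcome variance) are illustrative assumptions, not values from the paper. It shows how selecting results on statistical significance exaggerates effect estimates when measurements are noisy:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2   # true standardized mean difference (illustrative assumption)
n = 20              # participants per group (illustrative assumption)
noise_sd = 1.0      # measurement noise added to each observation
n_sims = 10_000

significant = []
for _ in range(n_sims):
    # Latent outcomes: a small true difference between the two groups.
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(true_effect, 1.0, n)
    # Measurement error adds independent noise on top of the latent values.
    control_obs = control + rng.normal(0.0, noise_sd, n)
    treatment_obs = treatment + rng.normal(0.0, noise_sd, n)
    _, p = stats.ttest_ind(treatment_obs, control_obs)
    if p < 0.05:
        significant.append(treatment_obs.mean() - control_obs.mean())

print(f"true effect:                     {true_effect:.2f}")
print(f"mean estimate among significant: {np.mean(significant):.2f}")
print(f"fraction of studies significant: {len(significant) / n_sims:.1%}")

In runs of this sketch, only a small fraction of the simulated studies reach p < 0.05, and those that do report an average effect several times larger than the true 0.2: noise plus selection on significance produces exactly the exaggerated effect sizes the excerpt describes.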