Uncovering Response Biases in Recommendation

AAAI Conferences

A user-specific tendency toward biased movie ratings is investigated, leading to six identified types of rating patterns in a massive movie rating dataset. Based on the observed bias assumption, we propose a method for rescaling preferential scores that accounts for these rating types. Experimental results show significant improvement for movie recommendation systems.
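The abstract does not give the exact rescaling formula, so the following is only a minimal sketch of one common way to correct for user-specific rating tendencies: normalizing each user's ratings by that user's own mean and spread, so that habitually harsh and habitually generous raters become comparable. The function name and the per-user z-score approach are illustrative assumptions, not the paper's method.

```python
import numpy as np

def rescale_user_ratings(ratings):
    """Rescale one user's raw ratings to zero mean, unit variance.

    Hypothetical illustration of rating rescaling: a user who rates
    everything high and a user who rates everything low map to the
    same scale if their *relative* preferences agree.
    """
    ratings = np.asarray(ratings, dtype=float)
    mu = ratings.mean()
    sigma = ratings.std()
    if sigma == 0:  # user gives every movie the same score
        return np.zeros_like(ratings)
    return (ratings - mu) / sigma

# A "generous" rater and a "harsh" rater with the same relative preferences:
generous = rescale_user_ratings([5, 4, 5, 3])
harsh = rescale_user_ratings([3, 2, 3, 1])
```

After rescaling, the two users' score vectors coincide, which is the kind of comparability a bias-aware recommender would want before aggregating ratings across users.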


Teaching machines to avoid our mistakes

#artificialintelligence

The conventional wisdom is that intelligent systems, while good with numbers and maybe facts, are not going to be able to cope with the world of judgment and decision-making. The common assumption is that computers will not be able to deal with the nuance of reasoning that drives the solely human ability to assess what is happening in the world and then make reasoned decisions in reaction to that assessment. And herein lies my problem -- the assumption hidden in this belief is that humans are actually good at this sort of reasoning. And it's not clear that this is true. In particular, we seem prone to reasoning mistakes based on biases in decision-making that hinder us every day.


We jump to conclusions even when it pays to wait for the facts

New Scientist

People jump to conclusions they want to be true, even when it is against their interests to do so, according to a study of how we make decisions. "In the case of topics important for one's identity, preferential treatment of information consistent with a person's worldview is understandable," says Filip Gęsiarz of University College London. "It can fulfil many psychological needs other than a search for objective truth, such as protecting one's core values or avoiding uncertainty." So when it comes to big, defining issues like Brexit or climate change, we know that objective truth is not always what people are looking for. But Gęsiarz and his colleagues wanted to see whether similar factors were in effect even in trivial judgements, and ones where it pays to be accurate.


The World Isn't as Bad as Your Wired Brain Tells You

WSJ.com: WSJD - Technology

Our best hope for breaking their spell may lie in understanding the workings of our cognitive and social biases--and the algorithms of online social networks that reinforce them. First described in 1973 by psychologists Amos Tversky and Daniel Kahneman, the latter the author of the book "Thinking, Fast and Slow," the availability bias refers to our tendency to think that whatever we heard about most recently is more common than it actually is. This might have been useful when we had to make life choices based on a trickle of information, but now that we have a fire hose of it, we can't seem to be rational about the likelihood of bad things happening. [Chart: share of survey respondents saying there is more or less crime in the U.S. than there was a year ago.] The availability bias helps explain why people are afraid of shark attacks, even though they're more likely to drown at the beach. People fear terrorism, even though the odds they will die in a plane crash are far higher--and the odds that they'll be killed walking down the street are many times higher still.