Simulation of Human Behavior


CNN's Don Lemon claims Trump voters must have 'cognitive dissonance' to support such a 'bad person'

FOX News

Don Lemon reacts to President Trump's RNC speech, points blame at Trump voters. CNN anchor Don Lemon went after Trump voters yet again following the president's speech at the Republican National Convention Thursday, saying they must suffer from "cognitive dissonance" to support someone Lemon described as a "bad person." Lemon's colleague Chris Cuomo had told him that the president's supporters had concluded that despite Trump's flaws, "Joe Biden will be worse" for the country. Cuomo theorized that Trump's voters are willing to "forgive" Trump's wrongdoings rather than vote for the Democrat. "I think you're letting them off easy," Lemon responded.


Overcome your cognitive biases with help from this online training

Mashable

As humans, we're forced to make lots of decisions on a daily basis. And while we may think we make every single decision based on facts, logic, and reasoning, there's a lot more to it than that. As it turns out, we kind of suck at the whole decision-making process due to our own cognitive biases. First described by psychologists Daniel Kahneman and Amos Tversky in the 1970s, cognitive biases are the systematic errors that creep in when we rely on mental shortcuts, or rules of thumb, to simplify the decision-making process. Sure, we try to rationalize our choices after the fact through logic and reasoning, but much of the work has already been done by the unconscious parts of the mind.


What Soldiers, Doctors, and Professors Can Teach Us About Artificial Intelligence During COVID-19

#artificialintelligence

Artificial intelligence technology can tell doctors when a scan reveals a tumor, can help the military distinguish between a truck and a school bus as a target, and can answer a high volume of college students' questions. Sectors of our economy such as the military, health care, and higher education are much further along than the K-12 system in incorporating artificial intelligence systems and machine learning into their operations. And many of those uses--even when they are not specifically for education--can spark ideas for applications in K-12 that may be more pertinent than ever imagined. With the coronavirus upending traditional ways of delivering education, AI technologies--which are designed to model human intelligence and solve complex problems--may be able to help with logistical challenges such as busing and classroom social distancing, provide support to overwhelmed teachers, and glean new information about remote learning. AI techniques and systems are "like the internal combustion engine--you can use them to power a lot of different things," said David Danks, a professor of philosophy and psychology at Carnegie Mellon University in Pittsburgh, who studies cognitive science, machine learning, and how AI affects people.


Global Big Data Conference

#artificialintelligence

Recently, I was reading Rolf Dobelli's The Art of Thinking Clearly, which made me think about cognitive biases in a way I never had before. I realized how deeply seated some cognitive biases are. In fact, we often don't even consciously realize when our thinking is being affected by one. For data scientists, these biases can really change the way we work with data and make our day-to-day decisions, and generally not for the better. Data science is, despite the seeming objectivity of all the facts we work with, surprisingly subjective in its processes.
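
The excerpt stays at the level of intuition, so here is a small, self-contained illustration (our own example, not from the article) of one such bias in a data workflow: filtering records to those that confirm a prior belief before measuring an effect. The dataset and the "analyst's belief" are entirely hypothetical.

```python
# Toy illustration of confirmation bias in a data analysis (hypothetical data).
import random

random.seed(0)

# Hypothetical dataset of (feature_used, observed_outcome) pairs; the feature
# actually has no effect on the outcome.
records = [(random.random() < 0.5, random.gauss(0.0, 1.0)) for _ in range(1000)]

def mean(values):
    return sum(values) / len(values)

# Honest analysis: compare outcomes for records with and without the feature.
with_feature = [y for used, y in records if used]
without_feature = [y for used, y in records if not used]
print("all data      :", round(mean(with_feature) - mean(without_feature), 3))

# Confirmation-biased analysis: keep only records that fit the analyst's prior
# belief that the feature helps, then "measure" its effect on what is left.
confirming = [(used, y) for used, y in records if (y > 0) == used]
with_f = [y for used, y in confirming if used]
without_f = [y for used, y in confirming if not used]
print("cherry-picked :", round(mean(with_f) - mean(without_f), 3))
```

The honest comparison comes out close to zero, while the cherry-picked one manufactures a large "effect" from the same data, which is the kind of quiet subjectivity the article is warning about.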


Towards a Human-Centred Cognitive Model of Visuospatial Complexity in Everyday Driving

arXiv.org Artificial Intelligence

We develop a human-centred, cognitive model of visuospatial complexity in everyday, naturalistic driving conditions. With a focus on visual perception, the model incorporates quantitative, structural, and dynamic attributes identifiable in the chosen context; the human-centred basis of the model lies in its behavioural evaluation with human subjects with respect to psychophysical measures pertaining to embodied visuoauditory attention. We report preliminary steps to apply the developed cognitive model of visuospatial complexity for human-factors guided dataset creation and benchmarking, and for its use as a semantic template for the (explainable) computational analysis of visuospatial complexity.
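
The abstract names three groups of attributes but, naturally, does not spell out a formula. As a purely illustrative sketch (the attribute names, weights, and linear combination are our assumptions, not the paper's model), a scalar complexity score for a driving scene could be composed from quantitative, structural, and dynamic terms like this:

```python
# Illustrative visuospatial-complexity score for a driving scene.
# Attribute names and weights are ad hoc assumptions, not the paper's model.
from dataclasses import dataclass

@dataclass
class SceneAttributes:
    # Quantitative: how much there is to look at.
    object_count: int
    clutter_density: float      # 0..1, fraction of the view occupied
    # Structural: how the scene is organised.
    occlusion_ratio: float      # 0..1, fraction of objects partially hidden
    symmetry: float             # 0..1, higher means a more regular layout
    # Dynamic: how much is changing.
    moving_objects: int
    mean_relative_speed: float  # m/s, averaged over moving objects

def visuospatial_complexity(s: SceneAttributes) -> float:
    """Weighted combination of the three attribute groups (weights are ad hoc)."""
    quantitative = 0.05 * s.object_count + 1.0 * s.clutter_density
    structural = 1.0 * s.occlusion_ratio + 0.5 * (1.0 - s.symmetry)
    dynamic = 0.1 * s.moving_objects + 0.05 * s.mean_relative_speed
    return quantitative + structural + dynamic

quiet_road = SceneAttributes(4, 0.1, 0.05, 0.9, 1, 2.0)
busy_junction = SceneAttributes(25, 0.6, 0.4, 0.3, 12, 6.0)
print(round(visuospatial_complexity(quiet_road), 2))
print(round(visuospatial_complexity(busy_junction), 2))
```

In the paper's framing, such a score would then be validated behaviourally, for example against psychophysical measures of visuoauditory attention, rather than taken at face value.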


AI Research Considerations for Human Existential Safety (ARCHES)

arXiv.org Artificial Intelligence

Framed in positive terms, this report examines how technical AI research might be steered in a manner that is more attentive to humanity's long-term prospects for survival as a species. In negative terms, we ask what existential risks humanity might face from AI development in the next century, and by what principles contemporary technical research might be directed to address those risks. A key property of hypothetical AI technologies is introduced, called "prepotence", which is useful for delineating a variety of potential existential risks from artificial intelligence, even as AI paradigms might shift. A set of contemporary research directions are then examined for their potential benefit to existential safety. Each research direction is explained with a scenario-driven motivation, and examples of existing work from which to build. The research directions present their own risks and benefits to society that could occur at various scales of impact, and in particular are not guaranteed to benefit existential safety if major developments in them are deployed without adequate forethought and oversight. As such, each direction is accompanied by a consideration of potentially negative side effects.


On the Causes and Consequences of Deviations from Rational Behavior

arXiv.org Artificial Intelligence

Traditionally, economists have focused on a rational decision maker - the "homo economicus" - to model human behavior. The observation of various deviations of behavior from the benchmark of optimizing rational decision making has motivated an entire field, behavioral economics. Research in this field has identified a plethora of different, partly distinct and partly interacting, behavioral biases, which are related to cognitive limitations, stress, limited memory, preference anomalies, and social interactions, among others. These biases are typically established by comparing actual behavior against a theoretical benchmark, often in simplistic, unrealistic, or abstract settings that are unfamiliar to the decision makers. Field evidence for behavioral biases among professionals is still scarce, mostly because of the difficulty of establishing a rational benchmark in complex real-world settings. Consequently, most contributions focus on documenting a behavioral deviation in one particular dimension, which often makes it difficult to compare the behavioral biases documented in the literature. Moreover, deviations from rational behavior are usually seen as being related to suboptimal performance. However, this connotation often rests on a priori reasoning or value judgments, because it is typically even harder, or impossible, to identify the consequences of deviations from the rational benchmark than the deviations themselves.
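
To make the benchmark comparison concrete, here is a toy sketch (our own construction, not from the paper) in which observed choices over simple lotteries are compared against an expected-value-maximising benchmark and the disagreement rate is reported; the "observed" decision maker is a crude, hypothetical stand-in for a risk-averse subject.

```python
# Toy measurement of a behavioral deviation: observed choices vs. a rational
# (expected-value-maximising) benchmark over simple lotteries. All numbers are
# hypothetical.

# Each decision problem: choose between a safe amount and a risky lottery
# (payoff with some probability, otherwise nothing).
problems = [
    {"safe": 45.0, "risky": (100.0, 0.5)},   # EV of risky = 50 > safe
    {"safe": 55.0, "risky": (100.0, 0.5)},   # EV of risky = 50 < safe
    {"safe": 30.0, "risky": (120.0, 0.3)},   # EV of risky = 36 > safe
]

def rational_choice(p):
    payoff, prob = p["risky"]
    return "risky" if payoff * prob > p["safe"] else "safe"

def observed_choice(p, loss_aversion=1.5):
    """Hypothetical decision maker who over-weights the chance of ending up
    with nothing (a crude stand-in for risk/loss aversion)."""
    payoff, prob = p["risky"]
    subjective_value = payoff * prob - loss_aversion * (1 - prob) * p["safe"]
    return "risky" if subjective_value > p["safe"] else "safe"

deviations = sum(rational_choice(p) != observed_choice(p) for p in problems)
print(f"deviation rate: {deviations}/{len(problems)}")
```

The abstract's point is that the hard part in the field is the benchmark itself: in realistic settings there is rarely such a clean "rational_choice" to compare against, and even when a deviation is documented its consequences are harder still to pin down.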


AI-Powered Digital People - Synced

#artificialintelligence

People around the world enjoy "virtual human" characters, whether in Hollywood films, Japanese anime, or video games. In recent years, AI-powered virtual humans have increasingly insinuated themselves into our daily lives. The virtual pop icon Teresa Teng has performed songs with Taiwanese singer Jay Chou, achieving huge success. The popular Chinese debate show "I CAN I BB" hosted a spirited episode on whether "Falling in love with an AI human can be considered true love or not," where many people argued it is possible for a human to fall in love with an AI. Are there limits to such human-machine relationships?


Improving Confidence in the Estimation of Values and Norms

arXiv.org Artificial Intelligence

Autonomous agents (AA) will increasingly be interacting with us in our daily lives. While we want the benefits attached to AAs, it is essential that their behavior is aligned with our values and norms. Hence, an AA will need to estimate the values and norms of the humans it interacts with, which is not a straightforward task when solely observing an agent's behavior. This paper analyses to what extent an AA is able to estimate the values and norms of a simulated human agent (SHA) based on its actions in the ultimatum game. We present two methods to reduce ambiguity in profiling the SHAs: one based on search space exploration and another based on counterfactual analysis. We found that both methods are able to increase the confidence in estimating human values and norms, but differ in their applicability, the latter being more efficient when the number of interactions with the agent is to be minimized. These insights are useful to improve the alignment of AAs with human values and norms.
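
As a rough, hypothetical analogue of the estimation problem described above (not the authors' method), the sketch below models the simulated human agent as an ultimatum-game responder with a hidden fairness threshold and lets an observer maintain a posterior over candidate thresholds, probing the full space of offers in the spirit of search-space exploration:

```python
# Hypothetical sketch: estimating a simulated responder's fairness threshold
# in the ultimatum game from its accept/reject behavior. Not the paper's
# method; the threshold model and probing scheme are our assumptions.
PIE = 10  # total amount to be split between proposer and responder

def simulated_human(threshold):
    """Responder who accepts an offer iff it meets its fairness threshold."""
    return lambda offer: offer >= threshold

true_threshold = 4
respond = simulated_human(true_threshold)

# Uniform prior over candidate thresholds 0..PIE.
candidates = list(range(PIE + 1))
posterior = {t: 1.0 / len(candidates) for t in candidates}

# Probe the responder with every possible offer (a crude form of
# search-space exploration) and update the posterior after each response.
for offer in candidates:
    accepted = respond(offer)
    for t in candidates:
        consistent = (offer >= t) == accepted
        posterior[t] *= 1.0 if consistent else 1e-6  # soft zero keeps sums finite
    total = sum(posterior.values())
    posterior = {t: p / total for t in posterior}

estimate = max(posterior, key=posterior.get)
print(f"estimated fairness threshold: {estimate} (true value: {true_threshold})")
```

In this toy setting the posterior collapses onto the true threshold because every offer can be probed; the paper's counterfactual-analysis method is aimed precisely at the harder case where interactions with the agent must be kept to a minimum.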