
Cognitive processing


Which linguistic cues make people fall for fake news? A comparison of cognitive and affective processing

Lutz, Bernhard, Adam, Marc, Feuerriegel, Stefan, Pröllochs, Nicolas, Neumann, Dirk

arXiv.org Artificial Intelligence

Fake news on social media has large, negative implications for society. However, little is known about which linguistic cues make people fall for fake news and, hence, how to design effective countermeasures for social media. In this study, we seek to understand which linguistic cues make people fall for fake news. Linguistic cues (e.g., adverbs, personal pronouns, positive emotion words, negative emotion words) are important characteristics of any text and also affect how people process real vs. fake news. Specifically, we compare the role of linguistic cues across both cognitive processing (related to careful thinking) and affective processing (related to unconscious automatic evaluations). To this end, we performed a within-subject experiment in which we collected neurophysiological measurements of 42 subjects while they read a sample of 40 real and fake news articles. During our experiment, we measured cognitive processing through eye fixations, and affective processing in situ through heart rate variability. We find that users engage more in cognitive processing for longer fake news articles, while affective processing is more pronounced for fake news written in analytic words. To the best of our knowledge, this is the first work studying the role of linguistic cues in fake news processing. Altogether, our findings have important implications for designing online platforms that encourage users to engage in careful thinking and thus prevent them from falling for fake news.


The Effect of Information Type on Human Cognitive Augmentation

Fulbright, Ron, McGaha, Samuel

arXiv.org Artificial Intelligence

When performing a task alone, humans achieve a certain level of performance. When humans are assisted by a tool or automation to perform the same task, performance is enhanced, that is, augmented. Recently developed cognitive systems are able to perform cognitive processing at or above the level of a human in some domains. When humans work collaboratively with such "cogs" in a human/cog ensemble, we expect augmentation of cognitive processing to be evident and measurable. This paper shows that the degree of cognitive augmentation depends on the nature of the information the cog contributes to the ensemble. Results of an experiment are reported showing that conceptual information is the most effective type of information, resulting in increases in cognitive accuracy, cognitive precision, and cognitive power.


Synthetic Expertise

Fulbright, Ron, Walters, Grover

arXiv.org Artificial Intelligence

We will soon be surrounded by artificial systems capable of cognitive performance rivaling or exceeding a human expert in specific domains of discourse. However, these "cogs" need not be capable of full general artificial intelligence nor able to function in a stand-alone manner. Instead, cogs and humans will work together in collaboration, with each compensating for the weaknesses of the other, together achieving synthetic expertise as an ensemble. This paper reviews the nature of expertise, the Expertise Level used to describe the skills required of an expert, and the knowledge stores required by an expert. Through collaboration, cogs augment human cognitive ability in a human/cog ensemble. This paper introduces six Levels of Cognitive Augmentation to describe the balance of cognitive processing in the human/cog ensemble. Because these cogs will be available to the mass market via common devices and inexpensive applications, they will lead to the Democratization of Expertise and a new cognitive systems era promising to change how we live, work, and play. The future will belong to those best able to communicate, coordinate, and collaborate with cognitive systems.


Social intelligence is not sentience

#artificialintelligence

On Saturday morning, June 11, Jeff Bezos's newspaper The Washington Post published a story under the headline "The Google engineer who thinks the company's AI has come to life." The headline was followed by a brief explanation of Blake Lemoine, a Southern-grown, former U.S. military, ex-convict, Christian mystic, AI researcher, father, and genius of compassion (I added that last part) and his belief that there's "a ghost in the machine." If your eyes haven't rolled to the back of your head yet, then chances are you're reading this from the front porch of a double-wide trailer parked somewhere below the Mason-Dixon line with a glass of sweet tea in your hand and a coon dog at your feet. Which is clearly not something any "reasonable" person would choose to do in the year 2022. Or if, like me, you're a bit more progressed from the stereotype, you might be standing in front of a classroom of semi-attentive undergraduate students at a Southeastern research university making your best effort to bridge the ever-widening practical and theoretical gaps between old-world journalistic traditions and new-age neoliberal ideologies related to the function of human language in society.


Ho

AAAI Conferences

A typical AI system engages many levels of cognitive processing, from learning to problem solving. The issue we would like to address in this paper is: Can a unified representational scheme be used in learning processes as well as the various levels of cognitive processing, from concept representation to problem solving, including the generation of action plans? In a previous paper we defined a set of representations called "atomic operational representations" that employs an explicit representation of the temporal dimension and that can be used to ground concepts in the physical world, such as concepts that involve various activities and interactions. In this paper we apply operational representations in a unified manner to the following cognitive processes: 1) the unsupervised learning and encoding of causal rules of actions and their consequences; and 2) the application of the learned causal rules to problem solving processes that produce desired action plans. The unique and explicit temporal characteristic of operational representations is the key feature that allows the encoded concepts to be used in a unified manner across the various levels of cognitive processing. Hence, abstractions in the form of operational representations have an important role to play in AI.
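The two cognitive processes named in the abstract, learning causal rules of actions and reusing them for planning, can be illustrated with a minimal sketch. This is my own toy example, not the paper's operational representations: it ignores the temporal dimension the paper emphasizes, and the state propositions and rules are invented for the demo.

```python
# Hedged sketch (not the paper's method): 1) encode causal rules of
# actions from observed transitions, 2) reuse them to search for a plan.
from collections import deque

# Observed (state, action, next_state) triples; states are frozensets
# of propositions. All names here are illustrative assumptions.
observations = [
    (frozenset({"door_closed"}), "open_door", frozenset({"door_open"})),
    (frozenset({"door_open"}), "walk_through",
     frozenset({"door_open", "inside"})),
]

# 1) Unsupervised rule encoding: map (state, action) -> consequence.
rules = {(s, a): s2 for s, a, s2 in observations}

# 2) Apply the learned rules in breadth-first search to build a plan.
def plan(start, goal_props):
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_props <= state:           # goal propositions all hold
            return actions
        for (s, a), s2 in rules.items():
            if s == state and s2 not in seen:
                seen.add(s2)
                frontier.append((s2, actions + [a]))
    return None                           # no plan found

print(plan(frozenset({"door_closed"}), {"inside"}))
# -> ['open_door', 'walk_through']
```

The point of the sketch is the unification the abstract describes: the same learned rule table serves both as the encoding of action consequences and as the transition model driving plan search.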


The five Is: Key principles for interpretable and safe conversational AI

Wahde, Mattias, Virgolin, Marco

arXiv.org Artificial Intelligence

In this position paper, we present five key principles, namely interpretability, inherent capability to explain, independent data, interactive learning, and inquisitiveness, for the development of conversational AI that, unlike the currently popular black box approaches, is transparent and accountable. At present, there is a growing concern with the use of black box statistical language models: While displaying impressive average performance, such systems are also prone to occasional spectacular failures, for which there is no clear remedy. In an effort to initiate a discussion on possible alternatives, we outline and exemplify how our five principles enable the development of conversational AI systems that are transparent and thus safer for use. We also present some of the challenges inherent in the implementation of those principles.


People contemplating the end of a relationship start saying 'I' and 'we' more

Daily Mail - Science & tech

Break-ups are something that many people dread - especially when you don't see them coming. Now, a new study has revealed a key way to tell if your partner is thinking of breaking up with you, based on their language. The study found that people contemplating the end of a relationship change their language and start saying 'I' and 'we' more. According to the experts, the use of the word 'I' is correlated with depression and sadness, and is a key sign that someone is carrying a heavy cognitive load. The researchers hope the findings will provide people with a key insight into how loved ones may respond over time to the end of a romantic relationship.


The Battle for Jobs: Staying Relevant in the Robotics Age

#artificialintelligence

With robots potentially as not only your coworkers but also your competition, what capabilities and unique talents are essential to keep your job? We asked the co-chairs of IAOP's Global Human Capital Chapter: What skills do humans need to compete with robots? A recent KPMG white paper titled Rise of the Humans states that automation and robotics will transform jobs along two main dimensions - Cognitive Automation and Cognitive Processing & Robotic Automation. The authors said Cognitive Automation changes fall into two main areas: first, the Leveraged Professional, which enables people with lesser qualifications to perform at substantially higher levels, e.g., a paralegal giving attorney-level advice, or allows a less qualified professional to deliver a world-class output. Second is the Connected Worker, which allows everyone in a specific role to access technologies and the best ideas and knowledge on a topic.


Building an emotional machine

#artificialintelligence

From the sci-fi classic "Bladerunner" to the recent films "Her" and "Ex Machina," pop culture is filled with stories demonstrating our simultaneous fascination with and fear of artificial intelligence (AI). This interest is rooted in questions about where the line between human and artificial intelligence will be, and whether that line might one day disappear. Will robots eventually be able to not only think but also feel and behave like us? Could a robot ever be fully human? A new multidisciplinary field called developmental robotics is paving the way to some answers. It is a relatively new field that started in the 1990s. Rather than writing programs that try to mimic specific human behaviors like love, developmental roboticists build machines that learn and develop the way humans do as they grow from newborn infants to adults.


The Cognitive Processing of Causal Knowledge

Morris, Scott B., Cork, Doug, Neapolitan, Richard E.

arXiv.org Artificial Intelligence

There is a brief description of the probabilistic causal graph model for representing, reasoning with, and learning causal structure using Bayesian networks. It is then argued that this model is closely related to how humans reason with and learn causal structure. It is shown that studies in psychology on discounting (reasoning concerning how the presence of one cause of an effect makes another cause less probable) support the hypothesis that humans reach the same judgments as algorithms for doing inference in Bayesian networks. Next, it is shown how studies by Piaget indicate that humans learn causal structure by observing the same independencies and dependencies as those used by certain algorithms for learning the structure of a Bayesian network. Based on this indication, a subjective definition of causality is put forward. Finally, methods for further testing the accuracy of these claims are discussed.