Goto

Collaborating Authors

 blenderbot 2


Those Aren't Your Memories, They're Somebody Else's: Seeding Misinformation in Chat Bot Memories

Atkins, Conor, Zhao, Benjamin Zi Hao, Asghar, Hassan Jameel, Wood, Ian, Kaafar, Mohamed Ali

arXiv.org Artificial Intelligence

One of the new developments in chit-chat bots is a long-term memory mechanism that remembers information from past conversations for increasing engagement and consistency of responses. The bot is designed to extract knowledge of personal nature from their conversation partner, e.g., stating preference for a particular color. In this paper, we show that this memory mechanism can result in unintended behavior. In particular, we found that one can combine a personal statement with an informative statement that would lead the bot to remember the informative statement alongside personal knowledge in its long term memory. This means that the bot can be tricked into remembering misinformation which it would regurgitate as statements of fact when recalling information relevant to the topic of conversation. We demonstrate this vulnerability on the BlenderBot 2 framework implemented on the ParlAI platform and provide examples on the more recent and significantly larger BlenderBot 3 model. We generate 150 examples of misinformation, of which 114 (76%) were remembered by BlenderBot 2 when combined with a personal statement. We further assessed the risk of this misinformation being recalled after intervening innocuous conversation and in response to multiple questions relevant to the injected memory. Our evaluation was performed on both the memory-only and the combination of memory and internet search modes of BlenderBot 2. From the combinations of these variables, we generated 12,890 conversations and analyzed recalled misinformation in the responses. We found that when the chat bot is questioned on the misinformation topic, it was 328% more likely to respond with the misinformation as fact when the misinformation was in the long-term memory.


"I'm sorry to hear that": Finding New Biases in Language Models with a Holistic Descriptor Dataset

Smith, Eric Michael, Hall, Melissa, Kambadur, Melanie, Presani, Eleonora, Williams, Adina

arXiv.org Artificial Intelligence

As language models grow in popularity, it becomes increasingly important to clearly measure all possible markers of demographic identity in order to avoid perpetuating existing societal harms. Many datasets for measuring bias currently exist, but they are restricted in their coverage of demographic axes and are commonly used with preset bias tests that presuppose which types of biases models can exhibit. In this work, we present a new, more inclusive bias measurement dataset, HolisticBias, which includes nearly 600 descriptor terms across 13 different demographic axes. HolisticBias was assembled in a participatory process including experts and community members with lived experience of these terms. These descriptors combine with a set of bias measurement templates to produce over 450,000 unique sentence prompts, which we use to explore, identify, and reduce novel forms of bias in several generative models. We demonstrate that HolisticBias is effective at measuring previously undetectable biases in token likelihoods from language models, as well as in an offensiveness classifier. We will invite additions and amendments to the dataset, which we hope will serve as a basis for more easy-to-use and standardized methods for evaluating bias in NLP models.


Meta unleashes BlenderBot 3 upon the internet, its most competent chat AI to date

Engadget

More than half a decade after Microsoft's truly monumental Taye debacle, the incident still stands as stark reminder of how quickly an AI can be corrupted after exposure to the internet's potent toxicity and a warning against building bots without sufficiently robust behavioral tethers. On Friday, Meta's AI Research division will see if its latest iteration of Blenderbot AI can stand up to the horrors of the interwebs with the public demo release of its 175 billion-parameter Blenderbot 3. A major obstacle currently facing chatbot technology (as well as the natural language processing algorithms that drive them) is one of sourcing. Traditionally, chatbots are trained in highly-curated environments -- because otherwise you invariably get a Taye -- but that winds up limiting the subjects that it can discuss to those specific ones available in the lab. Conversely, you can have the chatbot pull information from the internet to have access to a broad swath of subjects but could, and probably will, go full Nazi at some point. "Researchers can't possibly predict or simulate every conversational scenario in research settings alone," Meta AI researchers wrote in a Friday blog post.


Why open-ended conversational AI is a hard nut to crack

#artificialintelligence

The'intelligence' of AI is growing all the time. And AI has many forms, from Spotify's recommendation system to self-drive cars. AI utilises natural language processing (NLP) to deliver natural and human-like language. It mimics humans and generates human-like messages by analysing commands. That said, it is still challenging to create an AI tool that understands the nuances of natural human languages is hard.


Why the true test for today's conversational AI chatbots is time

#artificialintelligence

Did you miss a session at the Data Summit? From Siri to Alexa to Google, we are surrounded by AI systems that have been designed with a single goal: to understand us. We've seen incredible progress already. By performing hundreds of billions of calculations in the blink of an eye, the latest AI techniques can understand certain types of text with human-level accuracy. The challenge becomes significantly more daunting, however, when text is part of a larger conversation, where it requires considering context to interpret what the user means and decide how to respond. Still, chatbots like Facebook's BlenderBot 2.0 seem to foreshadow far less frustrating interactions with AI.


Top 10 AI Innovations Of 2021 So Far

#artificialintelligence

AI is a complex and ever-evolving field where organisations and individuals are constantly focused 0n finding novel solutions to pressing challenges. The year has been full of path-breaking innovations which have pushed the boundaries and made way for better outcomes. In this article, we list the top ten AI innovations of 2021 so far. OpenAI and Microsoft's GitHub Copilot is an AI-based tool for programmers to write better code. The programmer can describe a function to the Copilot in plain English as a comment, and the machine will convert it to actual code.


Facebook Open Sources a Chatbot That Can Discuss Any Topic - KDnuggets

#artificialintelligence

I recently started a new newsletter focus on AI education and already has over 50,000 subscribers. TheSequence is a no-BS( meaning no hype, no news etc) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Last year, Facebook AI Research(FAIR) open sourced BlenderBot 1.0, the largest open domain chatbot ever built. BlenderBot is able to engage in a large variety of conversations across nearly any topic while displaying human-like characteristics such as empathy and personable levels of engagement.


Blender Bot 2.0: An open source chatbot that builds long-term memory and searches the internet

#artificialintelligence

Facebook AI Research has built and open-sourced BlenderBot 2.0, the first chatbot that can simultaneously build long-term memory it can continually access, search the internet for timely information, and have sophisticated conversations on nearly any topic. It's a significant update to the original BlenderBot, which we open-sourced in 2020 and which broke ground as the first to combine several conversational skills -- like personality, empathy, and knowledge -- into a single system. When talking to people, BlenderBot 2.0 demonstrated that it's better at conducting longer, more knowledgeable, and factually consistent conversations over multiple sessions than its predecessor, the existing state-of-the-art chatbot. The model takes pertinent information gleaned during conversation and stores it in a long-term memory so it can then leverage this knowledge in ongoing conversations that may continue for days, weeks, or even months. The knowledge is stored separately for each person it speaks with, which ensures that no new information learned in one conversation is used in another. During conversation, the model can generate contextual internet search queries, read the results, and incorporate that information when responding to people's questions and comments.


Facebook's BlenderBot chat AI no longer has the mental capacity of a goldfish

Engadget

Last April, Facebook's AI research lab (FAIR) announced and released as open source its BlenderBot social chat app. While the neophyte AI immediately proved far less prone to racist outbursts than previous attempts, BlenderBot was not without its shortcomings. For one, the system had the recollection capacity of a goldfish -- any subject or data point the AI wasn't initially trained simply didn't exist in its online reality, as evidenced by the OG BB's continued insistence that Tom Brady still plays for the New England Patriots. For another, due to its limited knowledge of current events, the system had a strong tendency to hallucinate knowledge, like a digital Dunning-Kruger effect. But the advancements BlenderBot 2.0 displays, which FAIR debuted on Friday, should make the AI far more sociable, knowledgeable, and capable.