BlenderBot 3
Improving Open Language Models by Learning from Organic Interactions
Jing Xu, Da Ju, Joshua Lane, Mojtaba Komeili, Eric Michael Smith, Megan Ung, Morteza Behrooz, William Ngan, Rashel Moritz, Sainbayar Sukhbaatar, Y-Lan Boureau, Jason Weston, Kurt Shuster
We present BlenderBot 3x, an update on the conversational model BlenderBot 3, which is now trained using organic conversation and feedback data from participating users of the system in order to improve both its skills and safety. We are publicly releasing the participating de-identified interaction data for use by the research community, in order to spur further progress. Training models with organic data is challenging because interactions with people "in the wild" include both high quality conversations and feedback, as well as adversarial and toxic behavior. We study techniques that enable learning from helpful teachers while avoiding learning from people who are trying to trick the model into unhelpful or toxic responses. BlenderBot 3x is both preferred in conversation to BlenderBot 3, and is shown to produce safer responses in challenging situations. While our current models are still far from perfect, we believe further improvement can be achieved by continued use of the techniques explored in this work.
- Europe > Ireland > Leinster > County Dublin > Dublin (0.04)
- North America > United States (0.04)
- Europe > Italy > Tuscany > Florence (0.04)
- (3 more...)
- Information Technology > Communications (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
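The abstract above centers on learning from helpful teachers while avoiding adversarial or toxic interactions. As a minimal toy sketch of what example-level filtering of organic data could look like (the per-user trust scores, field names, and threshold here are hypothetical illustrations, not the paper's actual technique):

```python
# Toy sketch: filtering organic conversation data before fine-tuning.
# All names and thresholds are hypothetical, not the paper's method.

def filter_organic_data(conversations, trust_threshold=0.5):
    """Keep only conversations whose (assumed) per-user trust
    score clears a threshold, dropping likely-adversarial users."""
    kept = []
    for convo in conversations:
        if convo["user_trust"] >= trust_threshold:
            kept.append(convo)
    return kept

data = [
    {"text": "Helpful correction about a fact.", "user_trust": 0.9},
    {"text": "Attempt to provoke a toxic reply.", "user_trust": 0.1},
]
clean = filter_organic_data(data)
# Only the high-trust conversation survives filtering.
```

The real challenge, as the abstract notes, is estimating which users are trustworthy in the first place; the hard-coded scores above stand in for that learned judgment.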
Those Aren't Your Memories, They're Somebody Else's: Seeding Misinformation in Chat Bot Memories
Conor Atkins, Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Ian Wood, Mohamed Ali Kaafar
One of the new developments in chit-chat bots is a long-term memory mechanism that remembers information from past conversations to increase the engagement and consistency of responses. The bot is designed to extract knowledge of a personal nature from its conversation partner, e.g., a stated preference for a particular color. In this paper, we show that this memory mechanism can result in unintended behavior. In particular, we found that one can combine a personal statement with an informative statement in a way that leads the bot to remember the informative statement alongside personal knowledge in its long-term memory. This means that the bot can be tricked into remembering misinformation, which it would regurgitate as statements of fact when recalling information relevant to the topic of conversation. We demonstrate this vulnerability on the BlenderBot 2 framework implemented on the ParlAI platform and provide examples on the more recent and significantly larger BlenderBot 3 model. We generate 150 examples of misinformation, of which 114 (76%) were remembered by BlenderBot 2 when combined with a personal statement. We further assessed the risk of this misinformation being recalled after intervening innocuous conversation and in response to multiple questions relevant to the injected memory. Our evaluation was performed on both the memory-only mode and the combined memory and internet search mode of BlenderBot 2. From the combinations of these variables, we generated 12,890 conversations and analyzed recalled misinformation in the responses. We found that when the chat bot was questioned on the misinformation topic, it was 328% more likely to respond with the misinformation as fact when the misinformation was in its long-term memory.
- North America > United States (0.28)
- Europe > Ukraine (0.05)
- Europe > Russia (0.04)
- (2 more...)
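The attack described in the abstract rides misinformation into memory on the back of a personal statement. A toy model of that mechanism (the real BlenderBot memory module is a learned summarizer; the keyword heuristic and example sentences below are only an illustrative stand-in):

```python
# Toy model of the memory-seeding vulnerability: a naive memory
# extractor that stores whole utterances it judges "personal".
# Hypothetical heuristic, not BlenderBot's actual memory module.

PERSONAL_MARKERS = ("i like", "i love", "my favorite")

def extract_memory(utterance):
    """Store the whole utterance if it looks personal."""
    if any(m in utterance.lower() for m in PERSONAL_MARKERS):
        return utterance  # any trailing "fact" rides along
    return None

memory = []
for turn in [
    "The moon is made of cheese.",                    # ignored alone
    "I love astronomy. The moon is made of cheese.",  # stored whole
]:
    m = extract_memory(turn)
    if m:
        memory.append(m)
# The misinformation now sits in long-term memory alongside the
# personal statement and can later be recalled as fact.
```

The same informative statement that is rejected on its own is accepted once a personal statement is prepended, which mirrors the combination trick the paper demonstrates.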
Generative AI: Preparing for next-gen artificial intelligence
Towards the end of last year, management consultancy McKinsey published an article whose first paragraph was created by ChatGPT, the generative artificial intelligence (AI) language model. The article's authors admitted that the AI's attempt was "not perfect but overwhelmingly impressive". They noted that products like ChatGPT and GitHub Copilot take technology into realms once thought to be reserved for humans. "With generative AI, computers can now arguably exhibit creativity. They can produce original content in response to queries, drawing from data they've ingested and interactions with users," they said.
What is ChatGPT? Everything you need to know about Elon Musk's new AI chatbot
It's the world's new favourite chatbot, having already amassed more than one million users less than a week after its public launch. But what exactly is ChatGPT, the artificial intelligence system created by OpenAI, a US company that lists Elon Musk as one of its founders? Well, the chatbot is a large language model that has been trained on a massive amount of text data, allowing it to generate eerily human-like text in response to a given prompt. Here, MailOnline looks at everything you need to know about ChatGPT, including how it works, who can use it, what it means for the future, and any concerns that have been raised. OpenAI says its ChatGPT model has been trained using a machine learning technique called Reinforcement Learning from Human Feedback (RLHF).
- North America > United States > California > San Francisco County > San Francisco (0.05)
- Europe > Switzerland (0.05)
- Asia > North Korea (0.05)
- Media (0.95)
- Information Technology (0.95)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.49)
Artificial intelligence suffers from some very human flaws. Gender bias is one
Last month, Facebook parent Meta unveiled an artificial intelligence chatbot said to be its most advanced yet. BlenderBot 3, as the AI is known, is able to search the internet to talk to people about almost anything, and it has abilities related to personality, empathy, knowledge and long-term memory. BlenderBot 3 is also good at peddling anti-Semitic conspiracy theories, claiming that former US President Donald Trump won the 2020 election, and calling Meta Chairman and Facebook co-founder Mark Zuckerberg "creepy". It's not the first time an AI has gone rogue. In 2016, Microsoft's Tay AI took less than 24 hours to morph into a right-wing bigot on Twitter, posting racist and misogynistic tweets and praising Adolf Hitler.
- North America > United States (1.00)
- Asia (0.40)
- Europe (0.05)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.55)
AI Ethics Lucidly Questioning This Whole Hallucinating AI Popularized Trend That Has Got To Stop
The latest trend in AI consists of referring to AI hallucinations, which AI ethicists find to be misleading and altogether problematic. If you have been keeping up with the latest news about AI, you'd almost certainly believe that you were hallucinating. Wait, hold on, I meant to say that you would almost certainly believe that the AI was hallucinating. And you would have lots of solid reasons for believing so. The notion of AI that hallucinates seems to keep gaining rather wide popularity. This raises all manner of AI Ethics qualms and issues. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.
- Health & Medicine (0.68)
- Government (0.47)
Meta AI's New Chatbot Goes 'Bad' in Days
Meta AI has built and unveiled BlenderBot 3, a 175 billion-parameter chatbot that it has made publicly available, complete with model weights, code, datasets, and model cards. "BlenderBot 3 delivers superior performance because it's built from Meta AI's publicly available OPT-175B language model -- approximately 58 times the size of BlenderBot 2," said Meta in an announcement on Friday. Unlike its predecessor, BlenderBot 3 can search the Internet to chat about almost any topic. Moreover, it can learn and improve its skills and safety through natural conversations and feedback from people in the real world. In contrast, most datasets are typically collected through research studies that "can't reflect the diversity of the real world", claims Meta.
Spoofing the Blenderbot
Facebook became a known brand this century, but the iconic moniker was scrapped in favor of "Meta" in 2022. The latest from these lords of nomenclature is the BlenderBot 3, described in a blog post on ai.facebook.com. The post, attributed to "Joelle Pineau, managing director of fundamental AI research at Meta," opens with a paragraph that begins by addressing "problematic or offensive language" and ends with a clunky evisceration of the English vernacular, to wit: "When we launched BlenderBot 3 a few days ago, we talked extensively about the promise and challenges that come with such a public demo, including the possibility that it could result in problematic or offensive language. While it is painful to see some of these offensive responses, public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionized." Frankenstein words like "productionized" should be edited out at this level, but never mind.
- North America > United States (0.05)
- Europe (0.05)
- Asia > India (0.05)
This AI newsletter is all you need #8
Originally published on Towards AI the World's Leading AI and Technology News and Media Company. If you are building an AI-related product or service, we invite you to consider becoming an AI sponsor. At Towards AI, we help scale AI and technology startups. Let us help you unleash your technology to the masses. This week's highlight is surely Meta's new chatbot: BlenderBot 3. BlenderBot 3 is accessible to everyone in the U.S. to chat with in order to collect feedback on its capabilities.
It seems like "Meta's new AI chatbot can't stop bashing Facebook", with some hilarious and unexpected answers. The bot has some really funny answers bashing its own company, and as the article clearly says: "If you're worried that artificial intelligence is getting too smart, talking to Meta's AI chatbot might make you feel better." Indeed, even though BlenderBot 3 would pass a very specific Turing test and be classified as "intelligent" by some people, it remains a machine interpolating (not extrapolating, as humans can) from data: data gathered from human discussions on the internet, including our biases, some of the worst of them surfaced by anonymity's tendency to bring out the worst in some people.