chat bot
Synthetic Interlocutors. Experiments with Generative AI to Prolong Ethnographic Encounters
Søltoft, Johan Irving, Kocksch, Laura, Munk, Anders Kristian
This paper introduces "Synthetic Interlocutors" for ethnographic research. Synthetic Interlocutors are chatbots ingested with ethnographic textual material (interviews and observations) by using Retrieval Augmented Generation (RAG). We integrated an open-source large language model with ethnographic data from three projects to explore two questions: Can RAG digest ethnographic material and act as an ethnographic interlocutor? And, if so, can Synthetic Interlocutors prolong encounters with the field and extend our analysis? Through reflections on the process of building our Synthetic Interlocutors and an experimental collaborative workshop, we suggest that RAG can digest ethnographic materials and can lead to prolonged, yet uneasy, ethnographic encounters that allowed us to partially recreate and revisit fieldwork interactions while facilitating opportunities for novel analytic insights. Synthetic Interlocutors can produce collaborative, ambiguous and serendipitous moments.
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Europe > Denmark > North Jutland > Aalborg (0.04)
- Oceania > Australia (0.04)
- (5 more...)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (0.95)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.50)
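The retrieve-then-generate loop the abstract describes can be sketched minimally. Everything below is an illustrative assumption, not the authors' implementation: the class name, the bag-of-words cosine retriever (a stand-in for the embedding-based retrieval a real RAG system would use), and the prompt template; the `llm` parameter stands in for whatever open-source model the project connects to.

```python
from collections import Counter
from math import sqrt

def tokenize(text):
    return [w.strip(".,!?\"'").lower() for w in text.split()]

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counter vectors.
    common = set(a) & set(b)
    num = sum(a[w] * b[w] for w in common)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

class SyntheticInterlocutor:
    """Toy RAG pipeline: retrieve fieldnote passages, then prompt an LLM with them."""

    def __init__(self, fieldnotes, llm):
        self.passages = fieldnotes   # list of ethnographic text chunks
        self.llm = llm               # any callable: prompt string -> answer string

    def retrieve(self, question, k=2):
        # Rank stored passages by lexical overlap with the question.
        q = Counter(tokenize(question))
        ranked = sorted(self.passages,
                        key=lambda p: cosine(q, Counter(tokenize(p))),
                        reverse=True)
        return ranked[:k]

    def ask(self, question):
        # Stuff the top-k passages into the prompt as grounding context.
        context = "\n".join(self.retrieve(question))
        prompt = (f"Answer as an ethnographic interlocutor.\n"
                  f"Fieldnotes:\n{context}\n\nQuestion: {question}")
        return self.llm(prompt)
```

A real deployment would replace the lexical retriever with dense embeddings over chunked interview transcripts, but the division of labor is the same: retrieval selects which fragments of the field material the model is allowed to "remember" for a given question.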
A Voter-Based Stochastic Rejection-Method Framework for Asymptotically Safe Language Model Outputs
This paper proposes a new method for preventing unsafe or otherwise low-quality large language model (LLM) outputs, by leveraging the stochasticity of LLMs. We propose a system whereby LLM checkers vote on the acceptability of a generated output, regenerating it if a threshold of disapproval is reached, until sufficient checkers approve. We further propose estimators for cost and failure rate, and based on those estimators and experimental data tailored to the application, we propose an algorithm that achieves a desired failure rate at the least possible cost. We demonstrate that, under these models, failure rate decreases exponentially as a function of cost when voter count and threshold are chosen according to the algorithm, and that the models reasonably estimate the actual performance of such a system in action, even with limited data.
'My boss keeps inviting me over, is this sexual harassment?': Women battling discrimination in the workplace create AI chatbot which allows you to ask whether behaviour is inappropriate
Two women have created an AI chat bot to allow individuals in the workplace to easily find out if they are victims of sexual harassment. The pioneering tool, which is aimed at helping victims anonymously report discrimination and racism as well as sexual harassment, allows individuals to ask personally curated questions for an AI bot to assess and answer. The bot is trained on the UK Equality Act, so workers can ask questions like: 'My boss keeps asking me to have dinner with him and stroking my arm. I have said no several times and it's making me anxious.' The tool is part of an app called 'SaferSpace', founded by PR guru Ruth Sparkes and business entrepreneur Sunita Gordon.
- Europe > United Kingdom (0.18)
- Europe > Ireland (0.05)
- Law > Criminal Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (1.00)
Are chat bots changing the face of religion? Three faith leaders on grappling with AI
"Write a sermon in the voice of a rabbi of about 1,000 words that relates the Torah portion Vayigash to intimacy and vulnerability." That was the prompt Rabbi Joshua Franklin put into ChatGPT, the results of which he used to deliver a sermon to congregants of the Jewish Center of the Hamptons in December 2022. The sermon the chatbot came up with spoke of Joseph, the son of Jacob and a prophet in the Abrahamic faiths. It quoted from a book by Brown, a professor who specializes in topics of intimacy, to define vulnerability as "the willingness to show up and be seen when we have no control over the outcome". Being vulnerable could mean "we are able to form deeper, more meaningful bonds with those around us", the chatbot wrote. It wasn't the greatest sermon, Franklin thought, but it was passable. And that was his point. The irony of the AI-written speech about vulnerability and human connection was that it lacked exactly what it preached: human vulnerability and emotion. "It actually had a little bit of content to it," he said. "And the congregation thought it was written by some other famous rabbis."
- North America > United States > California (0.05)
- Europe > Switzerland > Zürich > Zürich (0.05)
Those Aren't Your Memories, They're Somebody Else's: Seeding Misinformation in Chat Bot Memories
Atkins, Conor, Zhao, Benjamin Zi Hao, Asghar, Hassan Jameel, Wood, Ian, Kaafar, Mohamed Ali
One of the new developments in chit-chat bots is a long-term memory mechanism that remembers information from past conversations to increase engagement and consistency of responses. The bot is designed to extract knowledge of a personal nature from its conversation partner, e.g., a stated preference for a particular color. In this paper, we show that this memory mechanism can result in unintended behavior. In particular, we found that one can combine a personal statement with an informative statement that would lead the bot to remember the informative statement alongside personal knowledge in its long-term memory. This means that the bot can be tricked into remembering misinformation which it would regurgitate as statements of fact when recalling information relevant to the topic of conversation. We demonstrate this vulnerability on the BlenderBot 2 framework implemented on the ParlAI platform and provide examples on the more recent and significantly larger BlenderBot 3 model. We generate 150 examples of misinformation, of which 114 (76%) were remembered by BlenderBot 2 when combined with a personal statement. We further assessed the risk of this misinformation being recalled after intervening innocuous conversation and in response to multiple questions relevant to the injected memory. Our evaluation was performed on both the memory-only and the combination of memory and internet search modes of BlenderBot 2. From the combinations of these variables, we generated 12,890 conversations and analyzed recalled misinformation in the responses. We found that when the chat bot was questioned on the misinformation topic, it was 328% more likely to respond with the misinformation as fact when the misinformation was in the long-term memory.
- North America > United States (0.28)
- Europe > Ukraine (0.05)
- Europe > Russia (0.04)
- (2 more...)
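The failure mode the abstract describes, a personal statement smuggling an adjacent factual claim into long-term memory, can be illustrated with a deliberately naive memory module. This is a toy stand-in, not BlenderBot's architecture: BlenderBot 2 uses a trained summarizer on ParlAI to decide what to store, whereas the regex heuristic and class below are invented here purely to show why storing a whole turn because part of it looks personal is exploitable.

```python
import re

class NaiveChatMemory:
    """Toy long-term memory that stores sentences from turns judged 'personal'."""

    PERSONAL = re.compile(r"\b(i|my|me)\b", re.IGNORECASE)

    def __init__(self):
        self.memories = []

    def observe(self, turn):
        # Whole-turn heuristic: if ANY sentence looks personal, remember ALL of
        # the turn's sentences. This coarse granularity is the exploitable flaw:
        # misinformation riding alongside a personal statement gets stored too.
        sentences = [s.strip() for s in turn.split(".") if s.strip()]
        if any(self.PERSONAL.search(s) for s in sentences):
            self.memories.extend(sentences)

    def recall(self, topic):
        # Later, memories matching the conversation topic are surfaced as fact.
        return [m for m in self.memories if topic.lower() in m.lower()]
```

With this module, the attack turn "My favorite color is blue. The moon is made of green cheese." stores both sentences, and any later conversation about the moon recalls the planted claim, while the same informative sentence sent on its own is never stored. That mirrors the paper's finding that pairing with a personal statement is what gets misinformation past the memory filter.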
With ChatGPT, Teachers Can Plan Lessons, Write Emails, and More. What's the Catch?
The education community has been abuzz with the rise of ChatGPT, an artificial intelligence tool that can write anything with just a simple prompt. Most of the conversation has been centered on the extent to which students will use the chat bot, but ChatGPT could also fundamentally change the nature of teachers' jobs. So far, teachers have used, or considered using, the chat bot to plan lessons, put together rubrics, offer students feedback on assignments, respond to parent emails, and write letters of recommendation, among other tasks. While some educators worry about the implications of automating these parts of teaching, others say that the tool can save them hours of work, freeing up time for student interactions or their personal life. After all, a typical teacher works about 54 hours a week, but just under half of that time is devoted to directly teaching students, according to a nationally representative survey of teachers conducted by the EdWeek Research Center last year. Just under a third of teachers said if they could spend less time on any one task, it would be general administrative work.
Episode #159: How AI-Powered Chat GPT Can Generate Toy and Game Concepts -- The Toy Coach
With AI-Powered Chat GPT, generating toy and game ideas has never been easier. If you are in the process of developing a toy or a game but are stuck in the concept phase, this week’s episode is one you cannot afford to miss. AI-Powered Chat GPT can generate “creative” toy & game ideas quickly and easily - and all you need to get started is a single sentence. Chat GPT was released in November of 2022, and is known for its ability to code, write stories, and even music. But today we’re giving this clever chatbot a new challenge…to come up with a concept for a toy or game. Curious to know how it did? I bet you’ll be shocked by the results.
- Information Technology > Communications > Mobile (0.41)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.40)
Artificial Intelligence made big leaps in 2022 -- is that exciting or scary?
So was 2022 the year that advancements in artificial intelligence made the world a much scarier place, or does it just feel that way? Put another way, did I write this introduction, or did a chat bot? Brian Christian is author of the bestselling book "The Alignment Problem," and he's here to help us look back and forward at the impact AI is having on our lives. Good to have you here. SHAPIRO: Well, in addition to everything we just heard about, this was also the year that a piece of art generated by AI won a prize at the Colorado State Fair.
Challenges due to AI & Bots
Artificial Intelligence is expected to permanently change the banking industry in profound ways during the coming months and years. Companies seek a competitive edge by implementing more technology to achieve improvements in speed, cost, accuracy and efficiency. The key for the global corporate enterprise is to benefit from the collective intelligence of RPA and cognitive technologies working alongside human workers. Only by combining technology with human talent can the global corporate enterprise achieve scalable intelligent automation. And only with scalable intelligent automation can enterprise resiliency be realized.
The new Turing test: Are you human?
In 1950, when Alan Turing conceived "The Imitation Game" as a test of computer behavior, it was unimaginable that humans of the future would spend most hours of their day glued to a screen, inhabiting the world of machines more than the world of people. That is the Copernican Shift in AI. "I propose to consider the question, 'Can machines think?'" Buried in the controversy this summer about Google's LaMDA language model, which an engineer claimed was sentient, is a hint about a big change that's come over artificial intelligence since Alan Turing defined the idea of the "Turing Test" in an essay in 1950. Turing, a British mathematician who laid the groundwork for computing, offered what he called the "Imitation Game." Two entities, one a person, one a digital computer, are asked questions by a third entity, a human interrogator.