Specifying and Implementing Multi-Party Conversation Rules with Finite-State-Automata

AAAI Conferences

Existing chatbot engines do not properly handle group chats with many users and many chatbots, which prevents chatbots from reaching their full potential as social participants. A key reason is the lack of methods and tools for designing and engineering conversation rules. The work presented in this paper makes two major contributions: a Finite-State-Automata-based DSL (Domain-Specific Language), called DSL-CR, for engineering multi-party conversation rules that enforce inter-message coherence in chatbot engines; and its application to a real-world dialogue problem involving four bots and human participants. With this tool, the amount of domain and programming expertise needed to create conversation rules is reduced, so a larger group of people, such as linguists, can specify them.
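As a rough illustration of the finite-state idea behind such rules (the class, state names, and message fields below are a hypothetical sketch, not the paper's actual DSL-CR syntax), a multi-party conversation rule can be modeled as an automaton whose transitions fire on tagged messages from particular participant roles:

```python
# Illustrative sketch only: a toy finite-state automaton for one multi-party
# conversation rule. Names and structure are hypothetical, not DSL-CR itself.

class ConversationRuleFSA:
    def __init__(self, start, accepting, transitions):
        self.state = start                  # current state
        self.accepting = accepting          # states where the rule is satisfied
        self.transitions = transitions      # {(state, (sender_role, intent)): next_state}

    def on_message(self, sender_role, intent):
        """Advance the automaton on an incoming message; messages that match
        no outgoing transition of the current state are ignored."""
        key = (self.state, (sender_role, intent))
        if key in self.transitions:
            self.state = self.transitions[key]
        return self.state

    def rule_satisfied(self):
        return self.state in self.accepting


# Hypothetical rule: after a human asks a question, exactly one bot must answer
# before other bots are allowed to comment.
rule = ConversationRuleFSA(
    start="idle",
    accepting={"answered"},
    transitions={
        ("idle", ("human", "question")): "awaiting_answer",
        ("awaiting_answer", ("bot", "answer")): "answered",
        ("answered", ("bot", "comment")): "answered",
    },
)

rule.on_message("human", "question")   # -> "awaiting_answer"
rule.on_message("bot", "answer")       # -> "answered"
print(rule.rule_satisfied())           # True
```

The point of encoding rules this way is inter-message coherence: the engine only lets a bot contribute when the rule automaton's current state permits it.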


The success of Multibot: the soft cap has been reached!

#artificialintelligence

There are many blockchain projects entering the ICO stage, all of them offering solutions in a wide variety of fields. However, not all of them manage to collect the funds required for further development. The Multibot startup, which is right in the midst of its ICO, can already be proud that its soft cap (the lower limit of the fundraising) has been successfully reached. This alone indicates that our development is supported by the community and is of interest to investors!


Multibot: an effective solution for automation of exchange trades

#artificialintelligence

Crypto-trading is becoming more and more popular, and this is natural: the world has only recently become acquainted with cryptocurrencies, and their price swings can be a good way to improve your financial well-being. However, cryptocurrency trading, like other kinds of exchange trading, is complex work that requires self-control and accounting for a huge range of factors. That is why exchange trades are increasingly entrusted to robo-advisors.


Dialogue Design and Management for Multi-Session Casual Conversation with Older Adults

arXiv.org Artificial Intelligence

We address the problem of designing a conversational avatar capable of carrying on a sequence of casual conversations with older adults. Users at risk of loneliness, social anxiety, or a sense of ennui may benefit from practicing such conversations in private, at their convenience. We describe an automatic spoken dialogue manager for LISSA, an on-screen virtual agent that can keep older users involved in conversations over several sessions, each lasting 10-20 minutes. The idea behind LISSA is to improve users' communication skills by providing feedback on their non-verbal behavior at certain points in the course of the conversations. In this paper, we analyze the dialogues collected from the first session between LISSA and each of 8 participants. We examine the quality of the conversations by comparing the transcripts with those collected in a Wizard-of-Oz (WOZ) setting. LISSA's contributions to the conversations were judged by research assistants who rated the extent to which the contributions were "natural", "on track", "encouraging", "understanding", "relevant", and "polite". The results show that the automatic dialogue manager was able to handle conversations with the users smoothly and naturally.


Tell Me About Yourself: Using an AI-Powered Chatbot to Conduct Conversational Surveys

arXiv.org Artificial Intelligence

The rise of increasingly powerful chatbots offers a new way to collect information through conversational surveys, in which a chatbot asks open-ended questions, interprets a user's free-text responses, and probes answers when needed. To investigate the effectiveness and limitations of such a chatbot in conducting surveys, we conducted a field study involving about 600 participants. In this study, half of the participants took a typical online survey on Qualtrics and the other half interacted with an AI-powered chatbot to complete a conversational survey. Our detailed analysis of over 5200 free-text responses revealed that the chatbot drove a significantly higher level of participant engagement and elicited responses of significantly better quality in terms of relevance, depth, and readability. Based on our results, we discuss design implications for creating AI-powered chatbots to conduct effective surveys and beyond.
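The ask-interpret-probe loop described in the abstract can be sketched very roughly as follows; the question list, probing heuristic, and function names are illustrative assumptions, not the study's actual system:

```python
# Minimal sketch of a conversational-survey loop: ask an open-ended question,
# read the free-text reply, and probe when the answer looks too thin.
# All heuristics and prompts here are hypothetical placeholders.

QUESTIONS = [
    "Tell me about yourself.",
    "What do you usually do in your free time?",
]

PROBES = [
    "Could you say a bit more about that?",
    "What makes you feel that way?",
]

def needs_probe(response: str) -> bool:
    """Toy heuristic: probe when the answer is very short or non-informative."""
    return len(response.split()) < 5 or response.strip().lower() in {"idk", "nothing"}

def run_survey(get_user_input=input, max_probes=2):
    transcript = []
    for question in QUESTIONS:
        print(question)
        answer = get_user_input("> ")
        probes_used = 0
        # Follow up until the answer seems substantive or the probe budget runs out.
        while needs_probe(answer) and probes_used < max_probes:
            print(PROBES[probes_used % len(PROBES)])
            answer = get_user_input("> ")
            probes_used += 1
        transcript.append((question, answer))
    return transcript

if __name__ == "__main__":
    run_survey()
```

A real system would replace the word-count heuristic with a language model that judges relevance and depth, which is where the quality gains reported in the study would come from.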