Say What I Want: Towards the Dark Side of Neural Dialogue Models

Neural dialogue models have been widely adopted in chatbot applications because of their strong performance in simulating and generalizing human conversations. However, these models have a dark side: due to the vulnerability of neural networks, a neural dialogue model can be manipulated by users into saying what they want, which raises concerns about the security of practical chatbot services. In this work, we investigate whether we can craft inputs that lead a well-trained black-box neural dialogue model to generate targeted outputs. We formulate this as a reinforcement learning (RL) problem and train a Reverse Dialogue Generator that efficiently finds such inputs for targeted outputs. Experiments conducted on a representative neural dialogue model show that our proposed model discovers the desired inputs in a considerable portion of cases. Overall, our work reveals this weakness of neural dialogue models and may prompt further research into solutions that mitigate it.

Designing a Smart Conversational Interface - Growth Tech News


The world has witnessed significant advancements in human-computer dialogue. Today, conversational interfaces are slowly taking the place of rigid GUI dialogue boxes and web forms. This evolution of UI design is all about leveraging artificial intelligence so that end users can get answers and perform routine tasks quickly and effortlessly. But what is the key to designing and building AI-powered, user-friendly chat interfaces? Chatbots are programs that facilitate text-based conversations between computers and people in a natural language.

Recent advances in conversational NLP: Towards the standardization of chatbot building

Dialogue systems have recently become essential in our lives, and their use keeps getting more fluid and effortless over time. This boils down to the improvements made in the NLP and AI fields. In this paper, we provide an overview of the current state of the art of dialogue systems, their categories, and the different approaches to building them. We follow with a discussion that compares all the techniques and analyzes the strengths and weaknesses of each. Finally, we present an opinion piece suggesting that research be oriented towards the standardization of dialogue system building.

Ensemble-Based Deep Reinforcement Learning for Chatbots

Such an agent is typically characterised by: (i) a finite set of states S = {s_i} that describe all possible situations in the environment; (ii) a finite set of actions A = {a_j} to change the environment from one situation to another; (iii) a state transition function T(s, a, s′) that specifies the next state s′ for having taken action a in the current state s; (iv) a reward function R(s, a, s′) that specifies the numerical value given to the agent for taking action a in state s and transitioning to state s′; and (v) a policy π: S → A that defines a mapping from states to actions [2, 30]. The goal of a reinforcement learning agent is to find an optimal policy by maximising its cumulative discounted reward, defined as Q*(s, a) = max_π E[r_t + γ r_{t+1} + γ² r_{t+2} + ... | s_t = s, a_t = a, π], where the function Q* represents the maximum sum of rewards r_t discounted by factor γ at each time step. While a reinforcement learning agent takes actions with probability Pr(a | s) during training, at test time it selects the best action according to π*(s) = arg max_{a ∈ A} Q*(s, a). A deep reinforcement learning agent approximates Q* using a multi-layer neural network [31]. The Q function is parameterised as Q(s, a; θ), where θ are the parameters or weights of the neural network (a recurrent neural network in our case). Estimating these weights requires a dataset of learning experiences D = {e_1, ..., e_N} (also referred to as the 'experience replay memory'), where each experience is described as a tuple e_t = (s_t, a_t, r_t, s_{t+1}). Inducing a Q function consists in applying Q-learning updates over minibatches of experience MB = {(s, a, r, s′) ~ U(D)} drawn uniformly at random from the full dataset D. This process is implemented in learning algorithms using Deep Q-Networks (DQN) such as those described in [31, 32, 33], and the following section describes a DQN-based algorithm for human-chatbot interaction.
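The Q-learning update described above can be illustrated in miniature. The sketch below uses a plain lookup table for Q rather than the neural network a DQN would use, and runs on a hypothetical toy chain MDP (invented here for illustration, not taken from the paper): states 0..4, actions left/right, and a reward of 1 for reaching the final state.

```python
import random

# Tabular sketch of the Q-learning update underlying DQN.
# Toy chain MDP (hypothetical, for illustration only): states 0..4,
# actions 0 = left, 1 = right; reaching state 4 yields reward 1.
N_STATES, ACTIONS, GAMMA, ALPHA = 5, (0, 1), 0.9, 0.5

def step(s, a):
    """Transition: returns the next state s' and reward r for taking a in s."""
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # During training the agent acts stochastically (epsilon-greedy here).
            if rng.random() < 0.2:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])
            s2, r = step(s, a)
            # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            target = r + GAMMA * max(Q[(s2, x)] for x in ACTIONS)
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])
            s = s2
    return Q

Q = train()
# At test time the agent is greedy: pi*(s) = argmax_a Q(s, a).
policy = {s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES)}
```

A DQN replaces the table with Q(s, a; θ) and draws its updates from minibatches sampled uniformly from the replay memory D instead of from the current transition alone.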

A Chatbot from Future: Building an end-to-end Conversational Assistant with Rasa


You might have seen in my previous post that I've been using Rasa to build chatbots. You will find many tutorials that use Rasa APIs to build a chatbot, but I haven't found anything that discusses those APIs in detail: what the different API parameters are, what they mean, and so on. In this post, I will not only share how to build a chatbot with Rasa, but also discuss the APIs used and how you can use your Rasa model as a service to communicate with from a NodeJS application. Rasa is an open source conversational AI framework.
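To give a concrete flavour of "Rasa model as a service": a running Rasa server exposes a REST channel at `/webhooks/rest/webhook` that accepts a JSON body with a `sender` id and a `message`, and any HTTP client (NodeJS, Python, curl) can talk to it. The sketch below is a minimal Python client, assuming a server started with `rasa run`, the REST channel enabled in `credentials.yml`, and the default port 5005; adjust `RASA_URL` for your deployment.

```python
import json
import urllib.request

# Assumed local deployment; change for your own server.
RASA_URL = "http://localhost:5005/webhooks/rest/webhook"

def build_payload(sender_id, message):
    """The REST channel expects a JSON body with `sender` and `message`."""
    return {"sender": sender_id, "message": message}

def ask_rasa(sender_id, message, url=RASA_URL):
    """POST a user message to the Rasa REST channel and return the bot replies."""
    data = json.dumps(build_payload(sender_id, message)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # The server answers with a JSON list of reply objects,
        # e.g. [{"recipient_id": ..., "text": ...}, ...].
        return json.loads(resp.read().decode("utf-8"))
```

A NodeJS application would do the same thing with `fetch` or `axios`: the contract is just the JSON payload shown in `build_payload`.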