One goal of AI work in natural language is to enable communication between people and computers without resorting to memorization of complex commands and procedures. Automatic translation – enabling scientists, business people and just plain folks to interact easily with people around the world – is another goal. Both are just part of the broad field of AI and natural language, along with the cognitive science aspect of using computers to study how humans understand language.
Humans are constantly fascinated by self-operating, AI-driven gadgets. The latest trend catching the eye of much of the tech industry is chatbots, and with so much research and advancement in the field, these programs are becoming more human-like on top of being automated. The blend of immediate response and constant connectivity makes them an engaging alternative to the web-application trend. In general terms, a bot is simply a piece of software that performs automated tasks.
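To make the definition concrete, here is a minimal sketch of a rule-based bot of the simplest kind. The keywords and replies are invented for illustration; real chatbots use far richer language understanding.

```python
# A tiny rule-based chatbot: match user input against keywords and
# return canned replies. Rules and wording are illustrative only.

RULES = {
    "hello": "Hi there! How can I help you?",
    "hours": "We are open 9am-5pm, Monday through Friday.",
    "bye": "Goodbye! Have a great day.",
}

def reply(message: str) -> str:
    """Return the first matching canned answer, or a fallback."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I did not understand that."

print(reply("Hello bot"))  # -> "Hi there! How can I help you?"
```

Even this toy example shows the core loop of any bot: receive input, decide automatically, respond immediately.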
It's been said that nostalgia isn't what it used to be, and in the world of technology there's a lot to be nostalgic about! Just over twenty years ago client-server was all the rage; then the internet arrived, and suddenly browser-based systems became the new way to do everything. Then in the late 2000s Apple coined the phrase "there's an app for that," which begat a mad rush to build phone-based apps for everything under the sun. And now, in 2018, all of those things have been usurped by a new UI that will change the technology landscape yet again: chatbots. For Generation Z and millennials, this is how they expect to interact with everything.
They can identify patterns in voice commands and react to them according to predefined algorithms. In HR, these are employed when initial contact with applicants is made, often to answer standard questions about an advertised position. As chatbots, they can support an ongoing interaction between recruiter and applicant throughout the recruiting process; in this case they quite simply increase the recruiter's accessibility for the applicant. A second field of application is Natural Language Processing (NLP): this technology supports the scanning of letters of application to characterize an applicant's range of vocabulary and general wording. NLP can also assist in writing job advertisements by using language that is precisely targeted at the preferred group of applicants.
Although there is an emerging trend towards generating embeddings for primarily unstructured data, and more recently for structured data, there is not yet any systematic suite for measuring the quality of embeddings. This deficiency is felt even more acutely for embeddings generated from structured data, because there are no concrete evaluation metrics measuring how well structural and semantic patterns are encoded in the embedding space. In this paper, we introduce a framework containing three distinct tasks concerned with the individual aspects of ontological concepts: (i) the categorization aspect, (ii) the hierarchical aspect, and (iii) the relational aspect. Then, within the scope of each task, a number of intrinsic metrics are proposed for evaluating the quality of the embeddings. Furthermore, multiple experimental studies were run within this framework to compare the quality of the available embedding models. Employing this framework in future research can reduce misjudgment and provide greater insight into quality comparisons of embeddings for ontological concepts.
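As an illustration of what an intrinsic metric for the categorization aspect could look like, the sketch below scores how often a concept's nearest category centroid matches its labeled category. This is an assumed formulation for illustration, not necessarily the paper's exact metric.

```python
# Toy intrinsic "categorization" metric: concepts in the same category
# should lie closer (by cosine similarity) to their own category centroid
# than to the centroids of other categories.

import numpy as np

def categorization_accuracy(embeddings, labels):
    """Fraction of concepts whose nearest category centroid matches their label."""
    embeddings = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    categories = np.unique(labels)
    centroids = np.stack([embeddings[labels == c].mean(axis=0) for c in categories])
    # L2-normalize so the dot product equals cosine similarity.
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    nearest = categories[(e @ c.T).argmax(axis=1)]
    return float((nearest == labels).mean())

# Two well-separated clusters yield a perfect score.
vecs = [[1.0, 0.1], [0.9, 0.0], [0.0, 1.0], [0.1, 0.9]]
print(categorization_accuracy(vecs, ["animal", "animal", "plant", "plant"]))  # 1.0
```

Analogous metrics could probe the hierarchical and relational aspects, e.g. by checking whether parent-child or related concepts are systematically closer in the space.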
Recently, Talmor and Berant (2018) introduced ComplexWebQuestions - a dataset focused on answering complex questions by decomposing them into a sequence of simpler questions and extracting the answer from retrieved web snippets. In their work the authors used a pre-trained reading comprehension (RC) model (Salant and Berant, 2018) to extract the answer from the web snippets. In this short note we show that training an RC model directly on the training data of ComplexWebQuestions reveals a leakage from the training set to the test set that makes it possible to obtain unreasonably high performance. As a solution, we construct a new partitioning of ComplexWebQuestions that does not suffer from this leakage and release it publicly. We also perform an empirical evaluation on these two datasets and show that training an RC model on the training data substantially improves state-of-the-art performance.
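A simple way to surface leakage of this kind is to measure how many test questions already appear, after light normalization, in the training set. The sketch below is illustrative; its normalization is deliberately crude and is not the note's actual methodology.

```python
# Estimate train/test overlap: the fraction of test questions that also
# occur (after whitespace/case normalization) in the training set.

def normalize(question: str) -> str:
    """Lowercase and collapse whitespace for a crude equality check."""
    return " ".join(question.lower().split())

def leakage_rate(train_questions, test_questions) -> float:
    seen = {normalize(q) for q in train_questions}
    overlap = sum(1 for q in test_questions if normalize(q) in seen)
    return overlap / len(test_questions)

train = ["What city is the Eiffel Tower in?", "Who wrote Hamlet?"]
test = ["what city is the  Eiffel Tower in?", "When did WWII end?"]
print(leakage_rate(train, test))  # 0.5
```

In practice one would also check softer overlaps (shared sub-questions, shared answers, near-duplicate snippets), since leakage rarely takes the form of exact string matches.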
We propose Object-oriented Neural Programming (OONP), a framework for semantically parsing documents in specific domains. Basically, OONP reads a document and parses it into a predesigned object-oriented data structure (referred to as an ontology in this paper) that reflects the domain-specific semantics of the document. An OONP parser models semantic parsing as a decision process: a neural-net-based Reader sequentially goes through the document, building and updating an intermediate ontology to summarize its partial understanding of the text it has covered. OONP supports a rich family of operations (both symbolic and differentiable) for composing the ontology, and a wide variety of forms (both symbolic and differentiable) for representing the state and the document. An OONP parser can be trained with supervision of different forms and strengths, including supervised learning (SL), reinforcement learning (RL), and a hybrid of the two. Our experiments on both synthetic and real-world document-parsing tasks show that OONP can learn to handle fairly complicated ontologies with training data of modest size.
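The core idea, a reader that walks a document once and incrementally fills an object-oriented structure, can be sketched without any neural machinery. The object types and extraction rules below are invented for illustration and stand in for OONP's learned Reader and operations.

```python
# A non-neural caricature of the OONP idea: scan the document token by
# token and apply "update operations" that populate a small ontology.

import re

def parse_report(text: str) -> dict:
    """Build a toy ontology (persons, dates) in a single pass over the text."""
    ontology = {"persons": [], "dates": []}
    for token in text.split():
        if re.fullmatch(r"\d{4}-\d{2}-\d{2}", token):
            ontology["dates"].append(token)    # update operation: add a Date
        elif token.istitle() and len(token) > 2:
            ontology["persons"].append(token)  # update operation: add a Person
    return ontology

print(parse_report("Alice met Bob on 2021-03-15"))
# -> {'persons': ['Alice', 'Bob'], 'dates': ['2021-03-15']}
```

In OONP itself, the decision of which operation to apply at each step is made by a trained neural Reader rather than hand-written rules, and the operations can be differentiable.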
ABSTRACT
The availability of powerful Natural Language Processing techniques has led to the emergence of AI tools that read and interpret unstructured textual information, such as news and social media messages. The sentiment of finance-related content influences the trading and investment decisions of players in financial markets and hence moves the prices of assets. Dr. Svetlana Borovkova has been working for several years in the area of sentiment analysis and its relation to financial markets; applications of sentiment analysis range from commodity trading to systemic risk to quantitative investment strategies. In this talk, Dr. Borovkova will give an overview of this exciting field and show, among other things, how media sentiment can be used to forecast global financial distress, to generate sector and country rotation investment strategies, and to help enhance machine learning applications to intraday trading.
SPEAKER
Dr. Svetlana Borovkova is an Associate Professor of Quantitative Finance at Vrije Universiteit Amsterdam and Head of Quantitative Modelling at the risk advisory firm Probability & Partners.
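To give a flavor of what sentiment extraction from financial text looks like at its most basic, here is a toy lexicon-based scorer. The word lists and scoring rule are invented for illustration; production systems use domain lexicons and far richer models.

```python
# Toy lexicon-based sentiment for financial headlines:
# score = (positive hits - negative hits) / total hits, in [-1, 1].

POSITIVE = {"surge", "gain", "beat", "upgrade", "record"}
NEGATIVE = {"plunge", "loss", "miss", "downgrade", "default"}

def headline_sentiment(headline: str) -> float:
    words = headline.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(headline_sentiment("Shares surge after earnings beat"))  # 1.0
print(headline_sentiment("Bank reports record loss"))          # 0.0
```

Note how the second headline scores neutral because "record" and "loss" cancel out, a small example of why word-level lexicons alone are fragile and context-aware models are needed.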
The practical application of Artificial Intelligence (AI) by banks and financial service providers will be a hot topic in 2018. While seemingly a recent development, banks have in fact deployed AI to improve efficiency and lower costs for well over two decades. Starting with the use of Natural Language Processing (NLP) disciplines across several banking processes in the 1990s, the emergence of big data and cloud computing drove the adoption of additional Machine Learning capabilities by banks. Robo advisors and fraud detection are two recent examples of AI applications in banking.
Inflection Point
Despite the long history of AI deployment, we are now at an inflection point in the transformation of banking.
Artificial Intelligence (AI) is an important and evolving concept that is having significant impact within the Customer Experience industry -- and it's a topic that is being talked about on a seemingly daily basis at this point. But is AI really ready for prime time in customer care? I sat down with Michael Johnston, Director of Research and Innovation at Interactions, to get answers to some questions that are frequently asked about AI and Machine Learning as they apply to customer care. Artificial Intelligence refers to the capability of a machine to imitate intelligent human behavior. Put another way, AI technologies are algorithms that attempt to mimic things that humans do.
Due to the recent explosion of text data, researchers have been overwhelmed by the ever-increasing volume of articles produced by different research communities. Various scholarly search websites, citation recommendation engines, and research databases have been created to simplify text search tasks. However, it is still difficult for researchers to identify potential research topics without intensive review of the tremendous number of articles published in journals, conferences, meetings, and workshops. In this paper, we consider a novel topic diffusion discovery technique that incorporates sparseness-constrained Non-negative Matrix Factorization with generalized Jensen-Shannon divergence to help understand term-topic evolutions and identify topic diffusions. Our experimental results show that this approach can extract more prominent topics from large article databases, visualize relationships between terms of interest and abstract topics, and further help researchers understand whether given terms/topics have been widely explored or whether new topics are emerging from the literature.
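The divergence step of such a pipeline can be sketched on its own: once NMF yields per-period term distributions for a topic, a Jensen-Shannon comparison quantifies how much the topic has drifted. The distributions below are illustrative, and this standard base-2 JS divergence is an assumed stand-in for the paper's generalized variant.

```python
# Jensen-Shannon divergence between two term distributions of a topic,
# e.g. the same NMF topic estimated in two time periods. Base-2, in [0, 1].

import numpy as np

def js_divergence(p, q, eps=1e-12):
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)                                # mixture distribution
    kl = lambda a, b: np.sum(a * np.log2(a / b))     # KL divergence, base 2
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

topic_2015 = [0.5, 0.3, 0.2, 0.0]  # term weights for a topic in one period
topic_2020 = [0.1, 0.2, 0.3, 0.4]  # the same topic's weights in a later period
print(round(js_divergence(topic_2015, topic_2020), 3))
```

A divergence near zero suggests a stable, well-explored topic, while a large value flags a topic whose vocabulary is shifting, which is one signal of topic diffusion.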