How machines are changing the way companies talk

#artificialintelligence

Anyone who's ever been on an earnings call knows company executives already tend to look at the world through rose-colored glasses, but a new study by economics and machine learning researchers says that's getting worse, thanks to machine learning. The analysis found that companies are adapting their language in forecasts, SEC regulatory filings, and earnings calls due to the proliferation of AI used to analyze and derive signals from the words they use. In other words: Businesses are beginning to change the way they talk because they know machines are listening. Forms of natural language processing are used to parse and process text in the financial documents companies are required to submit to the SEC. Machine learning tools are then able to do things like summarize text or determine whether language used is positive, neutral, or negative.
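
To make that concrete, here is a toy, lexicon-style sketch (in Python) of the kind of tone scoring such tools perform: count positive versus negative words in a filing or call transcript. The word lists are illustrative stand-ins for the finance-specific lexicons (such as Loughran-McDonald) that production systems typically rely on.

```python
# Toy lexicon-based tone scorer. The word lists below are illustrative
# stand-ins for the finance-specific lexicons real tools use.
import re

POSITIVE = {"growth", "strong", "improve", "record", "exceed"}
NEGATIVE = {"decline", "loss", "weak", "impairment", "restatement"}

def tone(text: str) -> float:
    """Return a score in [-1, 1]: negative, roughly neutral (~0), or positive."""
    words = re.findall(r"[a-z]+", text.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

print(tone("Record growth this quarter; results should exceed guidance."))  # 1.0
```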


Learning from the Best: Rationalizing Prediction by Adversarial Information Calibration

arXiv.org Artificial Intelligence

Explaining the predictions of AI models is paramount in safety-critical applications, such as in legal or medical domains. One form of explanation for a prediction is an extractive rationale, i.e., a subset of features of an instance that leads the model to give its prediction on that instance. Previous works on generating extractive rationales usually employ a two-phase model: a selector that selects the most important features (i.e., the rationale), followed by a predictor that makes the prediction based exclusively on the selected features. One disadvantage of these works is that the main signal for learning to select features comes from comparing the answers given by the predictor with the ground-truth answers. In this work, we propose to squeeze more information from the predictor via an information calibration method. More precisely, we train two models jointly: one is a typical neural model that solves the task at hand in an accurate but black-box manner, and the other is a selector-predictor model that additionally produces a rationale for its prediction. The first model is used as a guide for the second model. We use an adversarial technique to calibrate the information extracted by the two models such that the difference between them is an indicator of missed or over-selected features. In addition, for natural language tasks, we propose to use a language-model-based regularizer to encourage the extraction of fluent rationales. Experimental results on a sentiment analysis task as well as on three tasks from the legal domain show the effectiveness of our approach to rationale extraction.
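
A minimal PyTorch-style sketch of the training setup the abstract describes may help. Everything concrete here is an assumption for illustration: the guide is treated as already trained (and is left untrained for brevity), the selector emits a soft sigmoid mask rather than the paper's exact rationale mechanism, final hidden states stand in for the "information" being calibrated, and the language-model fluency regularizer is omitted.

```python
# Sketch of adversarial information calibration: a discriminator tries to tell
# guide features from selector-predictor features; the selector-predictor is
# trained on the task while learning to fool the discriminator.
import torch
import torch.nn as nn

class Guide(nn.Module):                      # accurate black-box model
    def __init__(self, d, h, c):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d, h), nn.ReLU())
        self.clf = nn.Linear(h, c)
    def forward(self, x):
        z = self.enc(x)
        return self.clf(z), z

class SelectorPredictor(nn.Module):          # rationale-producing model
    def __init__(self, d, h, c):
        super().__init__()
        self.selector = nn.Sequential(nn.Linear(d, d), nn.Sigmoid())
        self.enc = nn.Sequential(nn.Linear(d, h), nn.ReLU())
        self.clf = nn.Linear(h, c)
    def forward(self, x):
        mask = self.selector(x)              # soft rationale over features
        z = self.enc(x * mask)               # predict from selected features only
        return self.clf(z), z, mask

d, h, c = 32, 64, 2
guide, sp = Guide(d, h, c), SelectorPredictor(d, h, c)
disc = nn.Sequential(nn.Linear(h, h), nn.ReLU(), nn.Linear(h, 1))  # adversary
bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
opt_sp = torch.optim.Adam(sp.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

x = torch.randn(16, d); y = torch.randint(0, c, (16,))
_, z_g = guide(x)                            # guide assumed pre-trained

# 1) discriminator learns to tell guide features from rationale features
opt_d.zero_grad()
loss_d = bce(disc(z_g.detach()), torch.ones(16, 1)) + \
         bce(disc(sp(x)[1].detach()), torch.zeros(16, 1))
loss_d.backward(); opt_d.step()

# 2) selector-predictor: task loss + fool the discriminator + sparsity on mask
opt_sp.zero_grad()
logits, z, mask = sp(x)
loss = ce(logits, y) + bce(disc(z), torch.ones(16, 1)) + 0.01 * mask.mean()
loss.backward(); opt_sp.step()
```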


You Are What You Tweet: Profiling Users by Past Tweets to Improve Hate Speech Detection

arXiv.org Artificial Intelligence

Hate speech detection research has predominantly focused on purely content-based methods, without exploiting any additional context. We briefly critique the pros and cons of this task formulation. We then investigate profiling users by their past utterances as an informative prior to better predict whether new utterances constitute hate speech. To evaluate this, we augment three Twitter hate speech datasets with additional timeline data, then embed this additional context into a strong baseline model. Promising results suggest merit for further investigation, though analysis is complicated by differences in annotation schemes and processes, as well as Twitter API limitations and data sharing policies.
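
A hedged sketch of the general idea, not the paper's architecture: encode a user's past tweets into a profile vector and concatenate it with the encoding of the new tweet before classifying. The `EmbeddingBag` encoder, mean-pooled history, and two-class head are all illustrative choices.

```python
# Illustrative sketch: combine a tweet's encoding with a profile vector
# mean-pooled from the user's past tweets. Sizes and encoder are assumptions.
import torch
import torch.nn as nn

class ProfileAwareClassifier(nn.Module):
    def __init__(self, vocab=30000, d=128):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, d)   # cheap bag-of-words text encoder
        self.clf = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 2))

    def forward(self, tweet_ids, histories):
        tweet_vec = self.emb(tweet_ids)                                  # (B, d)
        profile = torch.stack([self.emb(h).mean(0) for h in histories])  # (B, d)
        return self.clf(torch.cat([tweet_vec, profile], dim=-1))  # hate / not

model = ProfileAwareClassifier()
tweet = torch.randint(0, 30000, (2, 12))          # batch of 2 new tweets
histories = [torch.randint(0, 30000, (5, 12)),    # 5 past tweets for user 1
             torch.randint(0, 30000, (8, 12))]    # 8 past tweets for user 2
logits = model(tweet, histories)                  # shape (2, 2)
```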


Open Problems in Cooperative AI

arXiv.org Artificial Intelligence

Problems of cooperation--in which agents seek ways to jointly improve their welfare--are ubiquitous and important. They can be found at scales ranging from our daily routines--such as driving on highways, scheduling meetings, and working collaboratively--to our global challenges--such as peace, commerce, and pandemic preparedness. Arguably, the success of the human species is rooted in our ability to cooperate. Since machines powered by artificial intelligence are playing an ever greater role in our lives, it will be important to equip them with the capabilities necessary to cooperate and to foster cooperation. We see an opportunity for the field of artificial intelligence to explicitly focus effort on this class of problems, which we term Cooperative AI. The objective of this research would be to study the many aspects of the problems of cooperation and to innovate in AI to contribute to solving these problems. Central goals include building machine agents with the capabilities needed for cooperation, building tools to foster cooperation in populations of (machine and/or human) agents, and otherwise conducting AI research for insight relevant to problems of cooperation. This research integrates ongoing work on multi-agent systems, game theory and social choice, human-machine interaction and alignment, natural-language processing, and the construction of social tools and platforms. However, Cooperative AI is not the union of these existing areas, but rather an independent bet about the productivity of specific kinds of conversations that involve these and other areas. We see opportunity to more explicitly focus on the problem of cooperation, to construct unified theory and vocabulary, and to build bridges with adjacent communities working on cooperation, including in the natural, social, and behavioural sciences.


Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges

arXiv.org Artificial Intelligence

As we make tremendous advances in machine learning and artificial intelligence technosciences, there is a renewed understanding in the AI community that we must ensure that human beings are at the center of our deliberations so that we don't end up in technology-induced dystopias. As strongly argued by Green in his book The Smart Enough City, the incorporation of technology in city environs does not automatically translate into prosperity, wellbeing, urban livability, or social justice. There is a great need to deliberate on the future of cities worth living in and worth designing. There are philosophical and ethical questions involved, along with various challenges that relate to the security, safety, and interpretability of the AI algorithms that will form the technological bedrock of future cities. Several research institutes on human-centered AI have been established at top international universities. Globally, there are calls for technology to be made more humane and human-compatible; Stuart Russell, for example, has written a book on the subject, Human Compatible. The Center for Humane Technology advocates for regulators and technology companies to avoid business models and product features that contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In this paper, we analyze and explore key challenges, including security, robustness, interpretability, and ethical challenges, to a successful deployment of AI or ML in human-centric applications, with a particular emphasis on the convergence of these challenges. We provide a detailed review of the existing literature on these key challenges and analyze how one of these challenges may lead to others or help in solving them. The paper also discusses the current limitations, pitfalls, and future directions of research in these domains, and how future work can fill current gaps and lead to better solutions.


What is AI? Everything you need to know about Artificial Intelligence

ZDNet

It depends who you ask. Back in the 1950s, the fathers of the field, Minsky and McCarthy, described artificial intelligence as any task performed by a program or a machine that, had a human carried it out, we would say required intelligence to accomplish. That's obviously a fairly broad definition, which is why you will sometimes see arguments over whether something is truly AI or not. Modern definitions of what it means to create intelligence are slightly more specific. Francois Chollet, AI researcher at Google and creator of the machine-learning software library Keras, has said intelligence is tied to a system's ability to adapt and improvise in a new environment, to generalise its knowledge and apply it to unfamiliar scenarios. "Intelligence is the efficiency with which you acquire new skills at tasks you didn't previously prepare for," he said. "Intelligence is not skill itself, it's not what you can do, it's how well and how efficiently you can learn new things." It's a definition under which modern AI-powered systems, such as virtual assistants, would be characterised as having demonstrated 'narrow AI': the ability to generalise their training when carrying out a limited set of tasks, such as speech recognition or computer vision. Typically, AI systems demonstrate at least some of the following behaviours associated with human intelligence: planning, learning, reasoning, problem solving, knowledge representation, perception, motion, and manipulation, and, to a lesser extent, social intelligence and creativity. AI is ubiquitous today, used to recommend what you should buy next online, to understand what you say to virtual assistants such as Amazon's Alexa and Apple's Siri, to recognise who and what is in a photo, to spot spam, and to detect credit card fraud.


information retrieval document search using vector space model in R

#artificialintelligence

Now calculate the cosine similarity between each document and each query. For each query, sort the cosine-similarity scores over all documents and take the three documents with the highest scores.
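
The linked article builds this pipeline in R; the sketch below shows the same steps in Python with scikit-learn, using toy documents and a toy query: TF-IDF vectors for documents and queries, cosine similarity between them, then the top-3 documents per query.

```python
# Vector-space retrieval: TF-IDF + cosine similarity + top-3 ranking.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["gold silver truck",
        "shipment of gold damaged in a fire",
        "delivery of silver arrived in a silver truck",
        "shipment of gold arrived in a truck"]
queries = ["gold silver truck"]

vec = TfidfVectorizer()
D = vec.fit_transform(docs)                 # document-term matrix
Q = vec.transform(queries)                  # queries mapped into the same space

scores = cosine_similarity(Q, D)            # one row of scores per query
for q, row in zip(queries, scores):
    top3 = row.argsort()[::-1][:3]          # indices of the 3 highest scores
    print(q, [(docs[i], round(row[i], 3)) for i in top3])
```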


Generate Your Counterfactuals: Towards Controlled Counterfactual Generation for Text

arXiv.org Artificial Intelligence

Machine learning has seen tremendous growth recently, which has led to wider adoption of ML systems for educational assessment, credit risk, healthcare, employment, and criminal justice, to name a few domains. Trustworthiness of ML and NLP systems is a crucial aspect and requires a guarantee that the decisions they make are fair and robust. Aligned with this, we propose a framework, GYC, to generate a set of counterfactual text samples, which are crucial for testing these ML systems. Our main contributions include: a) we introduce GYC, a framework to generate counterfactual samples such that the generation is plausible, diverse, goal-oriented, and effective; b) we generate counterfactual samples that can direct the generation towards a corresponding condition such as a named-entity tag, semantic role label, or sentiment. Our experimental results on various domains show that GYC generates counterfactual text samples exhibiting the above four properties. The generated counterfactuals can then be fed, complementary to existing data augmentation, to improve the performance of debiasing algorithms compared with counterfactuals generated by token substitution. GYC generates counterfactuals that can act as test cases to evaluate a model and any text debiasing algorithm.
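
As a sketch of that last use case, counterfactual pairs can serve as behavioral test cases: an effective counterfactual should flip the model's prediction, while a superficial edit should not. The `model` callable and the hand-written pairs below are hypothetical stand-ins for a real classifier and GYC-generated outputs.

```python
# Hypothetical harness: use counterfactual pairs as test cases for a classifier.
def check_counterfactuals(model, pairs):
    """Each pair is (original, counterfactual, expected_flip)."""
    failures = []
    for orig, cf, should_flip in pairs:
        flipped = model(orig) != model(cf)       # did the prediction change?
        if flipped != should_flip:
            failures.append((orig, cf))
    return failures

pairs = [
    ("The service was great.", "The service was terrible.", True),   # sentiment flip
    ("Alice praised the film.", "Bob praised the film.", False),     # entity swap only
]
# failures = check_counterfactuals(my_sentiment_model, pairs)  # model is assumed
```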


Towards Coinductive Models for Natural Language Understanding. Bringing together Deep Learning and Deep Semantics

arXiv.org Artificial Intelligence

This article contains a proposal to add coinduction to the computational apparatus of natural language understanding. This, we argue, will provide a basis for more realistic, computationally sound, and scalable models of natural language dialogue, syntax, and semantics. Given that bottom-up, inductively constructed semantic and syntactic structures are brittle, and seemingly incapable of adequately representing the meaning of longer sentences or realistic dialogues, natural language understanding is in need of a new foundation. Coinduction, which uses top-down constraints, has been successfully used in the design of operating systems and programming languages. Moreover, it has been implicitly present in text mining, machine translation, and in some attempts to model intensionality and modalities, which provides evidence that it works. This article shows high-level formalizations of some of these uses. Since coinduction and induction can coexist, they can provide a common language and a conceptual model for research in natural language understanding. In particular, such an opportunity seems to be emerging in research on compositionality. This article shows several examples of the joint appearance of induction and coinduction in natural language processing. We argue that the known individual limitations of induction and coinduction can be overcome in empirical settings by a combination of the two methods. We see an open problem in providing a theory of their joint use.
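
For readers unfamiliar with the distinction, here is a generic illustration (not from the paper) using Python generators: an inductive structure is built bottom-up from a base case and is finite, whereas a coinductive structure is defined by how it unfolds, may be infinite, and is observed top-down one step at a time.

```python
# Generators as coinductive (possibly infinite) streams: no base case, only an
# unfolding rule; consumers observe finitely many steps.
from itertools import islice

def ones():
    while True:          # coinductive stream: defined by how it unfolds
        yield 1

def add(xs, ys):         # operations are defined observation-by-observation
    for x, y in zip(xs, ys):
        yield x + y

print(list(islice(add(ones(), ones()), 5)))   # [2, 2, 2, 2, 2]
```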


Clustering-based Automatic Construction of Legal Entity Knowledge Base from Contracts

arXiv.org Artificial Intelligence

In contract analysis and contract automation, a knowledge base (KB) of legal entities is fundamental for performing tasks such as contract verification, contract generation, and contract analytics. However, such a KB does not always exist, nor can it be produced in a short time. In this paper, we propose a clustering-based approach to automatically generate a reliable knowledge base of legal entities from given contracts without any supplemental references. The proposed method is robust to different types of errors introduced by pre-processing such as Optical Character Recognition (OCR) and Named Entity Recognition (NER), as well as to editing errors such as typos. We evaluate our method on a dataset that consists of 800 real contracts of varying quality from 15 clients. Compared to the collected ground-truth data, our method is able to recall 84% of the knowledge.
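
A hedged sketch of the general idea, not the paper's exact method: cluster noisy surface forms of the same legal entity by string similarity and keep one canonical name per cluster. The normalization, `difflib` similarity measure, greedy threshold clustering, and the 0.85 cutoff are all illustrative choices.

```python
# Group noisy entity mentions (OCR errors, typos, abbreviations) by string
# similarity, then pick one canonical form per cluster.
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    return " ".join(name.lower().replace(",", " ").replace(".", " ").split())

def cluster_entities(mentions, threshold=0.85):
    clusters = []                      # each cluster is a list of raw mentions
    for m in mentions:
        for cl in clusters:
            if SequenceMatcher(None, normalize(m), normalize(cl[0])).ratio() >= threshold:
                cl.append(m)
                break
        else:
            clusters.append([m])       # no close match: start a new cluster
    return clusters

mentions = ["Acme Corp.", "ACME Corp", "Acme C0rp",          # OCR "0" for "o"
            "Beta Holdings Ltd.", "Beta Holdings Limited"]
for cl in cluster_entities(mentions):
    print(max(cl, key=len), "<-", cl)  # longest form as the canonical name
```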