
An End-to-End Conversational Second Screen Application for TV Program Discovery

AI Magazine

In this article, we report on a multiphase R&D effort to develop a conversational second screen application for TV program discovery. Our goal is to share with the community the breadth of artificial intelligence (AI) and natural language (NL) technologies required to develop such an application, along with lessons learned from target end users. We first give an overview of our application from the perspective of the end user. We then present the architecture of our application along with the main AI and NL components, which were developed over multiple phases. The first phase focuses on enabling core functionality, such as effectively finding programs matching the user's intent. The second phase focuses on enabling dialog with the user. Finally, we present two user studies, corresponding to these two phases. The results from both studies demonstrate the effectiveness of our application in the target domain.

MiTAP for Biosecurity: A Case Study

AI Magazine

MITAP (MITRE text and audio processing) is a prototype system available for monitoring infectious disease outbreaks and other global events. MITAP focuses on providing timely, multilingual, global information access to medical experts and individuals involved in humanitarian assistance and relief work. Multiple information sources in multiple languages are automatically captured, filtered, translated, summarized, and categorized by disease, region, information source, person, and organization. Critical information is automatically extracted and tagged to facilitate browsing, searching, and sorting. The system supports shared situational awareness through collaboration, allowing users to submit other articles for processing, annotate existing documents, post directly to the system, and flag messages for others to see. MITAP currently stores over 1 million articles and processes an additional 2,000 to 10,000 daily, delivering up-to-date information to dozens of regular users.
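The abstract describes a multi-stage document pipeline: capture, filter, then categorize and tag. A minimal sketch of that flow is below; the function names, document structure, and disease keywords are illustrative assumptions, not MITRE's actual implementation (which also handles translation, summarization, and entity extraction).

```python
"""Minimal sketch of a MITAP-style capture/filter/categorize pipeline.
All stage functions and tags here are hypothetical placeholders."""


def capture(sources):
    # In the real system, articles arrive continuously from many
    # multilingual feeds; here each source is just a list of documents.
    return [doc for src in sources for doc in src]


def filter_relevant(docs, keywords):
    # Keep only documents mentioning outbreak-related terms.
    return [d for d in docs if any(k in d["text"].lower() for k in keywords)]


def categorize(doc):
    # Tag by disease; the real system also tags region, source,
    # person, and organization to support browsing and sorting.
    tags = []
    if "ebola" in doc["text"].lower():
        tags.append("disease:ebola")
    return {**doc, "tags": tags}


def pipeline(sources, keywords):
    docs = capture(sources)
    docs = filter_relevant(docs, keywords)
    return [categorize(d) for d in docs]
```

A downstream browsing interface could then index documents by their `tags` field, mirroring the browse/search/sort facilities the abstract mentions.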

Fake News: A Survey of Research, Detection Methods, and Opportunities

Artificial Intelligence

The explosive growth in fake news and its erosion of democracy, justice, and public trust has increased the demand for fake news analysis, detection, and intervention. This survey comprehensively and systematically reviews fake news research. The survey identifies and specifies fundamental theories across various disciplines, e.g., psychology and social science, to facilitate and enhance the interdisciplinary research of fake news. Current fake news research is reviewed, summarized, and evaluated. These studies examine fake news from four perspectives: (1) the false knowledge it carries, (2) its writing style, (3) its propagation patterns, and (4) the credibility of its creators and spreaders. We characterize each perspective with the various analyzable and utilizable information provided by news and its spreaders, the strategies and frameworks that are adaptable, and the techniques that are applicable. By reviewing the characteristics of fake news and open issues in fake news studies, we highlight some potential research tasks at the end of this survey.

CiteSeerX: AI in a Digital Library Search Engine

AI Magazine

Since then, the project has been directed by C. Lee Giles. While it is challenging to rebuild a system like CiteSeerX from scratch, many of these AI technologies are transferable to other digital libraries and search engines. This is different from arXiv, Harvard ADS, and PubMed, where papers are submitted by authors or pushed by publishers. Unlike Google Scholar and Microsoft Academic Search, where a significant portion of documents have only metadata (such as titles, authors, and abstracts) available, CiteSeerX gives users access to full text. CiteSeerX keeps its own repository, which serves cached versions of papers even if their previous links are no longer alive. In addition to paper downloads, CiteSeerX provides automatically extracted metadata and citation context; a document metadata download service is not available from Google Scholar and only recently became available from Microsoft Academic Search. Finally, CiteSeerX moved from a machine cluster to a private cloud using virtualization techniques (Wu et al. 2014). CiteSeerX extensively leverages open source software, which significantly reduces development effort. Red Hat Enterprise Linux (RHEL) 5 and 6 are the operating systems for all servers. Tomcat 7 is used for web service deployment on web and indexing servers. MySQL is used as the database management system to store metadata. Apache Solr is used for the index, and the Spring framework is used in the web application. In this section, we highlight four AI solutions that are leveraged by CiteSeerX and that tackle different challenges in the metadata extraction and ingestion modules (tagged by C, E, D, and A in figure 1).
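The abstract notes that Apache Solr serves as CiteSeerX's index. As an illustration of how a client might issue a metadata query against such an index, here is a small sketch that builds a standard Solr `select` request URL; the base URL, collection name, and `title` field are assumptions for the example, not CiteSeerX's actual schema or endpoint.

```python
"""Hypothetical example of constructing a Solr query URL for a
metadata search, of the kind a Solr-backed digital library might serve."""

from urllib.parse import urlencode


def build_solr_query(base_url, title_terms, rows=10):
    # Solr's standard query syntax: field:value clauses joined by AND.
    q = " AND ".join(f"title:{t}" for t in title_terms)
    params = urlencode({"q": q, "rows": rows, "wt": "json"})
    return f"{base_url}/select?{params}"
```

For instance, `build_solr_query("http://localhost:8983/solr/papers", ["citation", "graph"])` yields a `/select` URL whose `q` parameter asks Solr for documents whose title contains both terms, returned as JSON.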

Latent Semantic Search and Information Extraction Architecture

Artificial Intelligence

This paper describes the motivation, concept, design, and implementation of latent semantic search. Existing search engines have limited semantic search, entity extraction, and property attribution features; offer insufficient accuracy and response time for latent search; may raise privacy concerns; and their results are unavailable in offline mode for robotic search operations. The suggested alternative is an autonomous search engine with adaptive storage consumption, configurable search scope, and configurable latent search response time, with built-in options for entity extraction and property attribution, available as an open source platform for mobile, desktop, and server solutions. The suggested architecture attempts to implement artificial general intelligence (AGI) principles insofar as autonomous behaviour constrained by limited resources is concerned, and it is applied to the specific task of enabling Web search for artificial agents implementing AGI.