Building AI Applications: Yesterday, Today, and Tomorrow

AI Magazine

AI applications have been deployed and used for industrial, government, and consumer purposes for many years, and these experiences have been documented in IAAI conference proceedings since 1989. Over the years, the breadth of applications has expanded many times over, and AI systems have become more commonplace. Indeed, AI has recently become a focal point of industrial and consumer consciousness. This article focuses on the changes in the world of computing over the last three decades that made building AI applications more feasible. We then examine lessons learned during this time and distill them into succinct advice for future application builders.


The Evolution of Scheduling Applications and Tools

AI Magazine

Neither of these terms is a fundamental category. The initial AIMS scheduling problem encompassed 29,000 discrete activities, subject to 97,000 complex metric constraints specified by AIMS application developers. Generating feasible schedules was an essential requirement for operating the 777, potentially threatening a Boeing investment of almost 10 billion dollars. The scale and complexity of this problem were unprecedented, and there were very few applicable tools or standards. Input requirements were provided as text, with semantics negotiated and maintained through frequent discussion.
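
As a hedged illustration of what a "metric constraint" between scheduled activities looks like, the sketch below encodes a few invented activities with bounded inter-activity separations using a modern constraint solver (Google OR-Tools CP-SAT). The original AIMS work predates such tools, so the activity names, horizon, and bounds here are purely illustrative assumptions.

```python
# Minimal sketch of metric constraints between activities (invented example,
# not the AIMS system). Uses Google OR-Tools CP-SAT.
from ortools.sat.python import cp_model

model = cp_model.CpModel()
HORIZON = 1000  # scheduling horizon in time units (assumed)
start = {a: model.NewIntVar(0, HORIZON, a)
         for a in ("read_sensor", "fuse", "display")}

# Metric constraints: lower/upper bounds on the separation between activities.
model.Add(start["fuse"] >= start["read_sensor"] + 5)    # at least 5 units after
model.Add(start["fuse"] <= start["read_sensor"] + 20)   # at most 20 units after
model.Add(start["display"] >= start["fuse"] + 2)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print({a: solver.Value(v) for a, v in start.items()})
```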


AAAI Conferences Calendar

AI Magazine

This page includes forthcoming AAAI sponsored conferences, conferences presented by AAAI Affiliates, and conferences held in cooperation with AAAI. AI Magazine also maintains a calendar listing that includes nonaffiliated conferences at www.aaai.org/Magazine/calendar.php. ICWSM-17, the Eleventh International AAAI Conference on Web and Social Media, will be held May 15-18, 2017 in Montréal, Québec, Canada. IEA/AIE-2017 will be held June 17-21, 2017 in Arras, France. LPNMR'17 will be held July 3-6, 2017. Also listed are the AAAI 2017 Spring Symposium Series and AAAI-18, to be held at the New Orleans Riverside Hotel, New Orleans, Louisiana USA.


RuleML (Web Rule Symposium) 2016 Report

AI Magazine

Two keynote and two tutorial papers were also invited. Most regular papers were presented in one of these tracks: Smart Contracts, Blockchain, and Rules; Constraint Handling Rules; Event Driven Architectures and Active Database Systems; Legal Rules and Reasoning; Rule- and Ontology-Based Data Access and Transformation; and Rule Induction and Learning. Following up on previous years, RuleML also hosted the 6th RuleML Doctoral Consortium and the 10th International Rule Challenge, which this year was dedicated to applications of rule-based reasoning, such as Rules in Retail, Rules in Tourism, Rules in Transportation, Rules in Geography, Rules in Location-Based Search, Rules in Insurance Regulation, Rules in Medicine, and Rules in Ecosystem Research. The 10th International Rule Challenge Awards went to Ingmar Dasseville, Laurent Janssens, Gerda Janssens, Jan Vanthienen, and Marc Denecker for their paper Combining DMN and the Knowledge Base Paradigm for Flexible Decision Enactment, and to Jacob Feldman for his paper What-If Analyzer for DMN-based Decision Models. As in previous years, RuleML 2016 was also a place for presentations and face-to-face meetings about rule technology standardization.


Using Global Constraints to Automate Regression Testing

AI Magazine

Nowadays, communicating and autonomous systems rely on high-quality software-based components. To ensure a sufficient level of quality, these components must be thoroughly verified before being released and deployed in operational settings. Regression testing is a crucial verification process that executes any new release of a software-based component against previous versions of the component, using existing test cases. However, the selection of test cases in regression testing is challenging, as the time available for testing is limited and selection criteria must be respected. This problem, known as Test Suite Reduction (TSR), is usually addressed by validation engineers through manual analysis or approximation techniques. Even if the underlying optimization problem is intractable in theory, solving it in practice is crucial when there is pressing need to release high-quality components while reducing the time-to-market of new software releases. In this paper, we address the TSR problem with sound Artificial Intelligence techniques, namely Constraint Programming (CP) and global constraints. By associating with each test case a cost value aggregating distinct criteria, such as execution time, priority, or importance due to the error-proneness of the test case, we propose several constraint optimization models that find a subset of test cases covering all the test requirements while optimizing the overall cost of the selected test cases. Our models are based on a combination of NVALUE, GLOBALCARDINALITY, and SCALAR_PRODUCT, three well-known global constraints that can faithfully encode the coverage relation between test cases and test requirements. Our contribution includes the reuse of existing preprocessing rules to simplify the problem before solving it and the design of structure-aware search heuristics that take into account the costs associated with test cases. The work presented in this paper has been motivated by an industrial application in the communication domain. Our overall goal is to develop a constraint-based approach to test suite reduction that can be deployed to test a complete product line of conferencing systems in continuous delivery mode. By implementing this approach in a software prototype tool and experimentally evaluating it on both randomly generated and industrial instances, we hope to foster quick adoption of the technology.
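
To make the optimization concrete, here is a minimal sketch of TSR as cost-optimal set cover, written with Google OR-Tools CP-SAT. This is not the paper's model: the authors' formulations use the NVALUE, GLOBALCARDINALITY, and SCALAR_PRODUCT global constraints, whereas this Boolean version only illustrates the same coverage-plus-cost idea; the test cases, requirements, and costs below are invented.

```python
# Test suite reduction as cost-optimal set cover (illustrative sketch only).
from ortools.sat.python import cp_model

# covers[t] = set of requirement ids exercised by test case t (invented data)
covers = {"t1": {0, 1}, "t2": {1, 2}, "t3": {0, 2, 3}, "t4": {3}}
cost = {"t1": 5, "t2": 3, "t3": 7, "t4": 2}  # aggregated cost per test case
requirements = {r for reqs in covers.values() for r in reqs}

model = cp_model.CpModel()
selected = {t: model.NewBoolVar(t) for t in covers}

# Every requirement must be covered by at least one selected test case.
for r in requirements:
    model.AddBoolOr([selected[t] for t in covers if r in covers[t]])

# Minimize the overall cost of the selected subset.
model.Minimize(sum(cost[t] * selected[t] for t in covers))

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    chosen = [t for t in covers if solver.Value(selected[t])]
    print("reduced suite:", chosen, "cost:", solver.ObjectiveValue())
```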



Deploying nEmesis: Preventing Foodborne Illness by Data Mining Social Media

AI Magazine

Foodborne illness afflicts 48 million people annually in the U.S. alone. Over 128,000 are hospitalized and 3,000 die from the infection. While preventable with proper food safety practices, the traditional restaurant inspection process has limited impact given the predictability and low frequency of inspections and the dynamic nature of the kitchen environment. Despite this reality, the inspection process has remained largely unchanged for decades. The CDC has even identified food safety as one of seven "winnable battles"; however, progress to date has been limited. In this work, we demonstrate significant improvements in food safety by marrying AI and the standard inspection process. We apply machine learning to Twitter data, develop a system that automatically detects venues likely to pose a public health hazard, and demonstrate its efficacy in the Las Vegas metropolitan area in a double-blind experiment conducted over three months in collaboration with Nevada's health department. By contrast, previous research in this domain has been limited to indirect correlative validation using only aggregate statistics. We show that the adaptive inspection process is 64 percent more effective at identifying problematic venues than the current state of the art. If fully deployed, our approach could prevent over 9,000 cases of foodborne illness and 557 hospitalizations annually in Las Vegas alone. Additionally, adaptive inspections yield unexpected benefits, including the identification of venues lacking permits and of contagious kitchen staff, as well as fewer customer complaints filed with the Las Vegas health department.
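
As a hedged illustration of the kind of text classifier such a system might start from (the deployed pipeline additionally ties tweets to specific venues and uses far more training data), the sketch below trains a toy tweet classifier with scikit-learn; all tweets and labels are invented.

```python
# Toy classifier flagging tweets that may report foodborne illness
# (illustrative stand-in, not the nEmesis model; training data is invented).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "got food poisoning after lunch at that buffet, been sick all night",
    "stomach cramps and vomiting since dinner downtown",
    "amazing tacos today, totally recommend this place",
    "great service and the steak was perfect",
]
labels = [1, 1, 0, 0]  # 1 = possible foodborne-illness report

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(tweets, labels)
# Probability that a new tweet is an illness report:
print(clf.predict_proba(["feeling nauseous after eating there yesterday"])[0, 1])
```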


Editorial Introduction: Innovative Applications of Artificial Intelligence 2016

AI Magazine

This issue features expanded versions of articles selected from the 2016 AAAI Conference on Innovative Applications of Artificial Intelligence held in Phoenix, Arizona. We present a selection of three articles that describe deployed applications, two articles that discuss work on emerging applications, and an article based on the 2016 Robert S. Engelmore Memorial Lecture.


PAWS — A Deployed Game-Theoretic Application to Combat Poaching

AI Magazine

Poaching is considered a major driver for the population drop of key species such as tigers, elephants, and rhinos, which can be detrimental to whole ecosystems. While conducting foot patrols is the most commonly used approach in many countries to prevent poaching, such patrols often do not make the best use of the limited patrolling resources.
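
The abstract stops at the motivation, but the "game-theoretic" in the title refers to security games in which a defender commits to a randomized patrol strategy and a poacher best-responds. The toy Stackelberg game below, which is not the deployed PAWS algorithm, shows this commit-then-best-respond structure; all targets, payoffs, and the brute-force solution method are invented for illustration.

```python
# Toy Stackelberg security game (illustrative only, not PAWS).
import itertools

targets = ["A", "B", "C"]
# (defender_covered, defender_uncovered, attacker_covered, attacker_uncovered)
U = {"A": (2, -5, -3, 4), "B": (1, -2, -1, 2), "C": (3, -8, -4, 6)}
RESOURCES = 1.0  # total patrol effort to distribute across targets

def attacker_best_target(cov):
    # Poacher attacks the target maximizing their expected utility
    # (ties broken arbitrarily here, unlike a strong Stackelberg equilibrium).
    return max(targets, key=lambda t: cov[t] * U[t][2] + (1 - cov[t]) * U[t][3])

best = None
# Brute force over discretized coverage allocations summing to RESOURCES.
for alloc in itertools.product([i / 10 for i in range(11)], repeat=len(targets)):
    if abs(sum(alloc) - RESOURCES) > 1e-9:
        continue
    cov = dict(zip(targets, alloc))
    t = attacker_best_target(cov)
    defender_eu = cov[t] * U[t][0] + (1 - cov[t]) * U[t][1]
    if best is None or defender_eu > best[0]:
        best = (defender_eu, cov, t)

print("defender EU %.2f with coverage %s; poacher attacks %s" % best)
```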


Automated Volumetric Intravascular Plaque Classification Using Optical Coherence Tomography

AI Magazine

An estimated 17.5 million people died from cardiovascular disease in 2012, representing 31 percent of all global deaths. Most acute coronary events result from rupture of the protective fibrous cap overlying an atherosclerotic plaque. The task of early identification of plaque types that can potentially rupture is therefore of great importance. The state-of-the-art approach to imaging blood vessels is intravascular optical coherence tomography (IVOCT). Currently, however, this is an offline approach in which the images are first collected and then manually analyzed one image at a time to identify regions at risk of thrombosis. This process is extremely laborious, time-consuming, and prone to human error. We are building a system that, when complete, will provide interactive 3D visualization of a blood vessel while an IVOCT scan is in progress. The visualization will highlight different plaque types and enable quick identification of regions at risk of thrombosis. In this paper, we describe our approach, focusing on the machine learning methods that are its key enabling technology. Our empirical results on real OCT data show that our approach can identify different plaque types efficiently and with high accuracy across multiple patients.
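
As a hedged sketch of the evaluation setup such work implies, the code below trains a stand-in patch-level classifier and cross-validates it with patient-level grouping, so the reported accuracy reflects generalization across patients, as the abstract claims; the features, labels, and the random forest model are placeholders, not the authors' method.

```python
# Patient-grouped evaluation of a stand-in plaque-type classifier
# (synthetic data; illustrative only, not the authors' pipeline).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 32))            # placeholder texture/intensity features
y = rng.integers(0, 3, size=600)          # 0=fibrous, 1=lipid, 2=calcified (synthetic)
patients = rng.integers(0, 10, size=600)  # patch -> patient id

# Group folds by patient so no patient appears in both train and test splits.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, groups=patients, cv=GroupKFold(n_splits=5))
print("patient-grouped CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```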