Distributed Control of Microscopic Robots in Biomedical Applications

arXiv.org Artificial Intelligence

Current developments in molecular electronics, motors and chemical sensors could enable constructing large numbers of devices able to sense, compute and act in micron-scale environments. Such microscopic machines, of sizes comparable to bacteria, could simultaneously monitor entire populations of cells individually in vivo. This paper reviews plausible capabilities for microscopic robots and the physical constraints due to operation in fluids at low Reynolds number, diffusion-limited sensing and thermal noise from Brownian motion. Simple distributed controls are then presented in the context of prototypical biomedical tasks, which require control decisions on millisecond time scales. The resulting behaviors illustrate trade-offs among speed, accuracy and resource use. A specific example is monitoring for patterns of chemicals in a flowing fluid released at chemically distinctive sites. Information collected from a large number of such devices allows estimating properties of cell-sized chemical sources in a macroscopic volume. The microscopic devices moving with the fluid flow in small blood vessels can detect chemicals released by tissues in response to localized injury or infection. We find the devices can readily discriminate a single cell-sized chemical source from the background chemical concentration, providing high-resolution sensing in both time and space. By contrast, such a source would be difficult to distinguish from background when diluted throughout the blood volume as obtained with a blood sample.
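The operating regime the abstract invokes is easy to sanity-check numerically. The sketch below uses only textbook formulas (the Reynolds number Re = rho*v*L/mu and the Smoluchowski diffusion-limited capture rate J = 4*pi*D*a*C); all parameter values are illustrative assumptions, not figures from the paper. It shows why inertia is negligible at this scale and why a millisecond decision window is dominated by molecule-counting noise.

```python
import math

# Back-of-the-envelope checks for micron-scale operation in fluid.
# All parameter values below are illustrative assumptions, not from the paper.

RHO = 1000.0        # fluid density, kg/m^3 (water-like)
MU = 1e-3           # dynamic viscosity, Pa*s (water-like)
AVOGADRO = 6.022e23

def reynolds_number(speed_m_s: float, size_m: float) -> float:
    """Re = rho * v * L / mu; Re << 1 means viscosity dominates inertia."""
    return RHO * speed_m_s * size_m / MU

def capture_rate(diff_m2_s: float, radius_m: float, conc_molar: float) -> float:
    """Smoluchowski diffusion-limited rate to an absorbing sphere,
    J = 4*pi*D*a*C, returned in molecules per second."""
    conc_per_m3 = conc_molar * AVOGADRO * 1e3  # mol/L -> molecules/m^3
    return 4.0 * math.pi * diff_m2_s * radius_m * conc_per_m3

# A bacterium-sized (~1 um) device drifting at ~1 mm/s:
print(f"Re ~ {reynolds_number(1e-3, 1e-6):.0e}")   # ~1e-03: low Reynolds number

# Counting a small molecule (D ~ 1e-9 m^2/s) at an assumed 1 nM concentration:
rate = capture_rate(1e-9, 0.5e-6, 1e-9)            # ~3.8e3 molecules/s
per_ms = rate * 1e-3                                # ~3.8 molecules per millisecond
print(f"{rate:.0f} molecules/s, {per_ms:.1f} per ms, "
      f"shot noise ~ {1.0 / math.sqrt(per_ms):.0%}")
```

At these assumed numbers a single millisecond reading is roughly half noise, consistent with the abstract's point that useful estimates must pool information across many devices and readings.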


AAAI 2006 Spring Symposium Reports

AI Magazine

The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Computer Science Department, was pleased to present its 2006 Spring Symposium Series held March 27-29, 2006, at Stanford University, California. The titles of the eight symposia were (1) Argumentation for Consumers of Health Care (chaired by Nancy Green); (2) Between a Rock and a Hard Place: Cognitive Science Principles Meet AI Hard Problems (chaired by Christian Lebiere); (3) Computational Approaches to Analyzing Weblogs (chaired by Nicolas Nicolov); (4) Distributed Plan and Schedule Management (chaired by Ed Durfee); (5) Formalizing and Compiling Background Knowledge and Its Applications to Knowledge Representation and Question Answering (chaired by Chitta Baral); (6) Semantic Web Meets e-Government (chaired by Ljiljana Stojanovic); (7) To Boldly Go Where No Human-Robot Team Has Gone Before (chaired by Terry Fong); and (8) What Went Wrong and Why: Lessons from AI Research and Applications (chaired by Dan Shapiro).


Guest Editors' Introduction

AI Magazine

This editorial introduces the articles published in the AI Magazine special issue on Innovative Applications of Artificial Intelligence (IAAI), based on a selection of papers that appeared at the IAAI-05 conference, held July 9-13, 2005, in Pittsburgh, Pennsylvania. IAAI is the premier venue for learning about AI's impact through deployed applications and emerging AI application technologies. Case studies of deployed applications with measurable benefits arising from the use of AI technology provide clear evidence of the impact and value of AI technology in today's world. The emerging applications track features technologies that are rapidly maturing to the point of application. The six articles selected for this special issue are extended versions of papers that appeared at the conference. Three of the articles describe deployed applications that are already in use in the field. Three articles from the emerging technology track were particularly innovative and demonstrated unique technology features ripe for deployment.


Comparative Analysis of Frameworks for Knowledge-Intensive Intelligent Agents

AI Magazine

A recurring requirement for human-level artificial intelligence is the incorporation of vast amounts of knowledge into a software agent that can use the knowledge in an efficient and organized fashion. This article discusses representations and processes for agents and behavior models that integrate large, diverse knowledge stores, are long-lived, and exhibit high degrees of competence and flexibility while interacting with complex environments. There are many different approaches to building such agents, and understanding the important commonalities and differences between approaches is often difficult. We introduce a new approach to comparing frameworks based on the notions of commitment, reconsideration, and a categorization of representations and processes. We review four agent frameworks, concentrating on the major representations and processes each directly supports. By organizing the approaches according to a common nomenclature, the analysis highlights points of similarity and difference and suggests directions for integrating and unifying disparate approaches and for incorporating research results from one framework into alternatives.


Celebrating AI's Fiftieth Anniversary and Continuing Innovation at the AAAI/IAAI-06 Conferences

AI Magazine

The seeds of AI were sown at the Dartmouth Conference in the summer of 1956. John McCarthy, then an assistant mathematics professor at Dartmouth, organized the conference and coined the name "artificial intelligence" in his conference proposal. This summer AAAI celebrates the first 50 years of AI and continues to foster the fertile fields of AI at the National AI Conference (AAAI-06) and the Innovative Applications of AI Conference (IAAI-06) in Boston.


Achieving Human-Level Intelligence through Integrated Systems and Research: Introduction to This Special Issue

AI Magazine

This special issue is based on the premise that, in order to achieve human-level artificial intelligence, researchers will have to find ways to integrate insights from multiple computational frameworks and to exploit insights from other fields that study intelligence. Articles in this issue describe recent approaches for integrating algorithms and data structures from diverse subfields of AI. Much of this work incorporates insights from neuroscience, social and cognitive psychology, or linguistics. The new applications, and the significant improvements to existing applications, that this work has enabled demonstrate the ability of integrated systems and research to continue progress toward human-level artificial intelligence.


Components, Curriculum, and Community: Robots and Robotics in Undergraduate AI Education

AI Magazine

Although the Lego RCX's Hitachi H8 microcontroller lists at 16 megahertz and 32 kilobytes of memory, the overhead of the firmware and interpreter yields about 10 kilobytes and 500 hertz throughput for a typical user--slightly better with alternative firmware. Community feedback has helped guide Sony's own choice of next-generation AIBO features and software support. As for two-legged platforms, the University of Freiburg has already prototyped a soccer team of Robosapiens running from handheld computers.


Complexity Results and Approximation Strategies for MAP Explanations

Journal of Artificial Intelligence Research

MAP is the problem of finding a most probable instantiation of a set of variables given evidence. MAP has always been perceived to be significantly harder than the related problems of computing the probability of a variable instantiation (Pr) or computing the most probable explanation (MPE). This paper investigates the complexity of MAP in Bayesian networks. Specifically, we show that MAP is complete for NP^PP and provide further negative complexity results for algorithms based on variable elimination. We also show that MAP remains hard even when MPE and Pr become easy. For example, we show that MAP is NP-complete when the networks are restricted to polytrees, and even then cannot be effectively approximated. Given the difficulty of computing MAP exactly, and the difficulty of approximating MAP while providing useful guarantees on the resulting approximation, we investigate best-effort approximations. We introduce a generic MAP approximation framework and provide two instantiations of it: one for networks that are amenable to exact computation of Pr, and one for networks for which even exact inference is too hard. This allows MAP approximation on networks that are too complex even for exact solution of the easier problems, Pr and MPE. Experimental results indicate that using these approximation algorithms yields much better solutions than standard techniques and provides accurate MAP estimates in many cases.
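The MAP/MPE distinction that drives these complexity results can be made concrete on a toy network. The sketch below (a hypothetical three-variable chain, not an example from the paper) brute-forces both queries and shows that the MAP assignment can differ from simply projecting the MPE onto the MAP variables, because MAP must sum out the non-MAP variables before maximizing.

```python
from itertools import product

# Toy chain A -> B -> C with evidence C = 1. The CPTs are chosen
# (arbitrarily, for illustration) so that MAP over {A} and MPE disagree.
pA = {0: 0.5, 1: 0.5}
pB_given_A = {0: {0: 0.5, 1: 0.5}, 1: {0: 1.0, 1: 0.0}}
pC_given_B = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.3, 1: 0.7}}

def joint(a, b, c):
    """Pr(A=a, B=b, C=c) by the chain rule of the network."""
    return pA[a] * pB_given_A[a][b] * pC_given_B[b][c]

evidence_c = 1

# MAP over {A}: argmax_a sum_b Pr(a, b, C=1)  -- marginalize B, then maximize.
map_scores = {a: sum(joint(a, b, evidence_c) for b in (0, 1)) for a in (0, 1)}
map_a = max(map_scores, key=map_scores.get)

# MPE: argmax_{a,b} Pr(a, b, C=1)  -- maximize over everything at once.
mpe_a, mpe_b = max(product((0, 1), repeat=2),
                   key=lambda ab: joint(ab[0], ab[1], evidence_c))

print("MAP over A:", map_a, map_scores)   # A=0 wins: 0.30 vs 0.25
print("MPE:", (mpe_a, mpe_b))             # (A=1, B=0) wins with 0.25
```

The interleaved sum-then-max structure is what lets MAP resist the dynamic-programming tricks that work for Pr (pure summation) and MPE (pure maximization), since maximization and summation do not commute.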


Intelligent DNA-Based Molecular Diagnostics Using Linked Genetic Markers

AAAI Conferences

Dhiraj K. Pathak (1), Eric P. Hoffman (2), and Mark W. Perlin (1); (1) Department of Computer Science, Carnegie Mellon University; (2) Department of Molecular Genetics and Biochemistry, University of Pittsburgh. Abstract: This paper describes a knowledge-based system for molecular diagnostics, and its application to fully automated diagnosis of X-linked genetic disorders. Molecular diagnostic information is used in clinical practice for determining genetic risks, such as carrier determination and prenatal diagnosis. Initially, blood samples are obtained from related individuals, and PCR amplification is performed. Linkage-based molecular diagnosis then entails three data analysis steps. First, for every individual, the alleles (i.e., DNA composition) are determined at specified chromosomal locations. Second, the flow of genetic material among the individuals is established. Third, the probability that a given individual is either a carrier of the disease or affected by the disease is determined. The current practice is to perform each of these three steps manually, which is costly, time-consuming, labor-intensive, and error-prone. As such, the knowledge-intensive data analysis and interpretation supersede the actual experimentation effort as the major bottleneck in molecular diagnostics.
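Of the three analysis steps, the final risk calculation reduces to elementary probability once the alleles and inheritance are established. Below is a minimal sketch under simplifying assumptions not taken from the paper: an obligate-carrier mother whose marker-disease phase is known, and a recombination fraction theta between the linked marker and the disease locus.

```python
# Minimal sketch of linkage-based carrier risk (step three), assuming an
# obligate-carrier mother with KNOWN phase between marker and disease allele.
# The marker names and theta value are hypothetical, for illustration only.

def carrier_probability(inherited_marker: str,
                        disease_linked_marker: str,
                        theta: float) -> float:
    """Probability a daughter carries the X-linked disease allele, given
    which maternal marker allele she inherited and the recombination
    fraction theta between the marker and the disease locus."""
    if inherited_marker == disease_linked_marker:
        return 1.0 - theta   # marker and disease allele stayed together
    return theta             # a crossover separated marker and disease allele

# Daughter inherited the maternal marker allele in phase with the disease
# allele; the marker sits ~5 cM from the locus (theta ~ 0.05, an assumption):
print(carrier_probability("M1", "M1", 0.05))  # 0.95
print(carrier_probability("M2", "M1", 0.05))  # 0.05
```

A deployed system must additionally handle unknown phase, uninformative markers, and evidence propagated across a whole pedigree, which is where the knowledge-based machinery the paper describes comes in.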