But University of Wisconsin-Madison research psychiatrist Giulio Tononi, who was recently selected to take part in the creation of a "cognitive computer," says the goal of building a computer as quick and flexible as a small mammalian brain is more daunting than it sounds. Tononi, professor of psychiatry at the UW-Madison School of Medicine and Public Health and an internationally known expert on consciousness, is part of a team of collaborators from top institutions who have been awarded a $4.9 million grant from the Defense Advanced Research Projects Agency (DARPA) for the first phase of DARPA's Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project. Tononi and scientists from Columbia University and IBM will work on the "software" for the thinking computer, while nanotechnology and supercomputing experts from Cornell, Stanford and the University of California-Merced will create the "hardware." Dharmendra Modha of IBM is the principal investigator. "Every neuron in the brain knows that something has changed," Tononi explains.
When many people think about the progress of AI and its impact on work, they envision a world where robots and thinking software do all of the work, leaving little room for the work humans used to do. That is certainly not the future for the AI and human workforce that the Defense Advanced Research Projects Agency (DARPA) sees. DARPA is the agency that helped usher in the Internet, the original expert systems of the 1960s through the 1980s, and the big data analysis and machine learning systems that laid the foundation for natural language processing, self-driving cars, and personal assistant bots. Now DARPA is leading the effort to make AI and humans even more collaborative co-workers. AI has proven some of its value in the form of very targeted and specialized systems.
This paper reports on the findings of an ongoing project to investigate techniques to diagnose complex dynamical systems that are modeled as hybrid systems. In particular, we examine continuous systems with embedded supervisory controllers which experience abrupt, partial or full failure of component devices. The problem we address is: given a hybrid model of system behavior, a history of executed controller actions, and a history of observations, including an observation of behavior that is aberrant relative to the model of expected behavior, determine what fault occurred to have caused the aberrant behavior. Determining a diagnosis can be cast as a search problem to find the most likely model for the data. Unfortunately, the search space is extremely large. To reduce search space size and to identify an initial set of candidate diagnoses, we propose to exploit techniques originally applied to qualitative diagnosis of continuous systems. We refine these diagnoses using parameter estimation and model fitting techniques. As a motivating case study, we have examined the problem of diagnosing NASA's Sprint AERCam, a small spherical robotic camera unit with 12 thrusters that enable both linear and rotational motion.
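The two-stage approach described above — generate an initial set of candidate diagnoses, then refine them by parameter estimation and model fitting — can be illustrated with a minimal sketch. Here a single thruster-like component is modeled by one gain parameter; the candidate fault modes, gain ranges, and data are invented for illustration and are not taken from the AERCam model.

```python
# Hypothetical sketch: diagnosis as search over candidate fault models,
# refined by least-squares parameter estimation. The most likely model
# is the one whose fitted parameters best explain the observations.

def fit_gain(commands, observations):
    """Closed-form least-squares estimate of a single thrust gain."""
    num = sum(c * o for c, o in zip(commands, observations))
    den = sum(c * c for c in commands)
    return num / den if den else 0.0

def residual(gain, commands, observations):
    """Sum of squared errors between predicted and observed behavior."""
    return sum((o - gain * c) ** 2 for c, o in zip(commands, observations))

def diagnose(commands, observations):
    # Candidate diagnoses, each constraining the gain to an allowed range.
    candidates = {
        "nominal":       (1.0, 1.0),  # gain fixed at 1.0
        "partial_fault": (0.0, 1.0),  # degraded gain between 0 and 1
        "full_failure":  (0.0, 0.0),  # gain fixed at 0.0
    }
    scored = []
    for name, (lo, hi) in candidates.items():
        g = min(max(fit_gain(commands, observations), lo), hi)
        scored.append((residual(g, commands, observations), name, g))
    return min(scored)  # lowest residual = best-fitting model

commands = [1.0, 2.0, 1.5, 2.5]
observations = [0.52, 1.01, 0.73, 1.27]  # thruster producing ~half thrust
err, label, gain = diagnose(commands, observations)
```

In the full problem the search space covers combinations of faults across many components, which is why the paper proposes qualitative-diagnosis techniques to prune candidates before the fitting step.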
Effective knowledge management maintains the knowledge assets of an organization by identifying and capturing useful information in a usable form, and by supporting refinement and reuse of that information in service of the organization's goals. A particularly important asset is the "internal" knowledge embodied in the experiences of task experts, which may be lost with shifts in projects and personnel. Concept Mapping provides a framework for making this internal knowledge explicit in a visual form that can easily be examined and shared. However, it does not address how relevant concept maps can be retrieved or adapted to new problems. CBR is playing an increasing role in knowledge retrieval and reuse for corporate memories, and its capabilities are appealing for augmenting the concept mapping process.
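A minimal sketch of the kind of CBR-style retrieval the abstract alludes to: stored concept maps are indexed by the concepts they contain, and the most similar maps are retrieved for a new problem by concept overlap. The map names, concepts, and the choice of Jaccard similarity are illustrative assumptions, not details from the described system.

```python
# Hypothetical sketch: case-based retrieval of concept maps by concept
# overlap. Each stored "case" is a concept map reduced to its set of
# concept labels; similarity is Jaccard overlap with the query concepts.

def jaccard(a, b):
    """Jaccard similarity of two concept sets (0.0 to 1.0)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve(case_base, query_concepts, k=1):
    """Return the k stored maps most similar to the query concepts."""
    ranked = sorted(case_base.items(),
                    key=lambda item: jaccard(item[1], query_concepts),
                    reverse=True)
    return ranked[:k]

case_base = {
    "launch_vehicle_fueling": {"propellant", "valve", "pressure", "sensor"},
    "thermal_protection":     {"tile", "adhesive", "inspection", "sensor"},
}
best = retrieve(case_base, {"valve", "pressure", "leak"})
```

A real system would index richer structure (propositions, not just concept labels) and support adaptation of the retrieved map, but the retrieve-by-similarity step has this shape.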
We motivate and describe an implementation of the MINDS speech recognition system. MINDS uses knowledge of dialog structures, user goals, and focus in a problem-solving situation. This knowledge is combined to form predictions, which translate into dynamically generated semantic network grammars. An experiment evaluated recognition accuracy given different levels of knowledge as constraints. Our results show that speech recognition accuracy improves dramatically when the maximally constrained dynamic network grammar is used to process the speech input signal.
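The core idea — dialog-level predictions shrinking the recognizer's search space — can be sketched in a toy form. Here a recognizer picks the best-scoring word hypothesis, optionally restricted to a dynamically predicted vocabulary; the words, scores, and domain are invented for illustration and do not reproduce MINDS's actual grammar machinery.

```python
# Hypothetical sketch: constraining word hypotheses with predictions
# derived from dialog context, in the spirit of dynamically generated
# grammars. Unconstrained decoding can pick an acoustically confusable
# but contextually implausible word; the constrained search cannot.

def recognize(acoustic_scores, allowed_words=None):
    """Return the highest-scoring word, optionally restricted to a
    dynamically predicted vocabulary."""
    hyps = acoustic_scores
    if allowed_words is not None:
        hyps = {w: s for w, s in hyps.items() if w in allowed_words}
    return max(hyps, key=hyps.get)

# Acoustically confusable hypotheses for one utterance:
scores = {"ship": 0.41, "chip": 0.43, "shift": 0.38}

unconstrained = recognize(scores)
# Dialog context predicts naval-domain words only:
constrained = recognize(scores, allowed_words={"ship", "shift"})
```

In the toy example the unconstrained decoder prefers "chip" on acoustics alone, while the constrained vocabulary forces the contextually sensible "ship" — a miniature version of why the maximally constrained grammar improved accuracy.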