Beyond the Elves: Making Intelligent Agents Intelligent

AI Magazine

The Electric Elves (Scerri, Pynadath, and Tambe 2002; Pynadath and Tambe 2003) required detailed information about the calendars of the people using the system. Thus, we decided to deploy a new application of the Electric Elves, called the Travel Elves. This application appeared to be ideal for wider deployment since it could be hosted entirely outside an organization, and communication could be performed over wireless devices, such as cellular telephones. The mission of the Travel Elves (Ambite et al. 2002; Knoblock 2004) was to facilitate planning a trip and to ensure that the resulting travel plan would execute smoothly. Initial deployment of the Travel Elves at DARPA, which funded the project, went smoothly. The Travel Elves introduced two major advantages over traditional approaches to travel planning. First, the Travel Elves provided an interactive approach to making travel plans in which all of the data required to make informed choices is available. For example, when deciding whether to park at the airport or take a taxi, the system compares the cost of parking and the cost of a taxi given other selections, such as the airport, the specific parking lot, and the starting location. Finally, we present some lessons learned and recent research that was motivated by our experiences in deploying the Travel Elves.
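The parking-versus-taxi comparison described above can be sketched as a simple cost calculation over the traveler's other selections. This is a minimal illustration, not the Travel Elves implementation; the function names, rates, and fares are hypothetical.

```python
# Hedged sketch: compare parking vs. taxi cost given other trip selections.
# All lot rates and fares below are hypothetical illustrations.

def parking_cost(lot_daily_rate: float, trip_days: int) -> float:
    """Total cost of leaving a car at the chosen airport parking lot."""
    return lot_daily_rate * trip_days

def taxi_cost(one_way_fare: float) -> float:
    """Round-trip taxi cost between the starting location and the airport."""
    return 2 * one_way_fare

def recommend(lot_daily_rate: float, trip_days: int, one_way_fare: float) -> str:
    """Pick the cheaper option for this trip's selections."""
    park = parking_cost(lot_daily_rate, trip_days)
    taxi = taxi_cost(one_way_fare)
    return "park" if park <= taxi else "taxi"

# A 5-day trip: $14/day parking vs. a $40 one-way taxi fare.
print(recommend(14.0, 5, 40.0))  # parking $70 vs. round-trip taxi $80 -> prints "park"
```

The point of the interactive approach is that changing any one selection (airport, lot, starting location) re-runs comparisons like this one immediately.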


You Recommended What?

AI Magazine

We had spent months integrating with the front-end call center software (no mean feat: these were Windows PCs simulating IBM "green screens"!) and tuning the recommender to produce high-quality recommendations that were successful against historical sales data. The recommendations were designed to be delivered in real time to the call center agents during live inbound calls. For instance, if the customer ordered the pink housecoat, the recommender might suggest the fuzzy pink slippers to go with it, based on prior sales experience. The company was ready for a big test: our lead consultant was standing behind one of the call center agents, watching her receive calls. Then the moment came: the IT folk at the company pushed the metaphoric big red button and switched her over to the automated recommender system.
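The housecoat-and-slippers suggestion is the classic co-occurrence pattern: recommend items that historically appeared in the same orders. The article does not say what algorithm the company used, so the following is only a minimal co-occurrence sketch with invented order data.

```python
# Minimal sketch of a co-occurrence recommender trained on historical orders.
# The algorithm and the sample orders are illustrative assumptions; the
# article does not specify the company's actual method.
from collections import Counter
from itertools import permutations

def build_cooccurrence(orders):
    """Count how often each pair of items appears in the same order."""
    counts = {}
    for order in orders:
        for a, b in permutations(set(order), 2):
            counts.setdefault(a, Counter())[b] += 1
    return counts

def recommend(counts, item, k=1):
    """Suggest the k items most often bought alongside `item`."""
    if item not in counts:
        return []
    return [other for other, _ in counts[item].most_common(k)]

orders = [
    ["pink housecoat", "fuzzy pink slippers"],
    ["pink housecoat", "fuzzy pink slippers", "hand towels"],
]
counts = build_cooccurrence(orders)
print(recommend(counts, "pink housecoat"))  # prints ['fuzzy pink slippers']
```

A real deployment would add support/confidence thresholds so rare coincidences are not surfaced to agents on live calls.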


The Voice of the Turtle: Whatever Happened to AI?

AI Magazine

On March 27, 2006, I gave a light-hearted and occasionally bittersweet presentation on “Whatever Happened to AI?” at the Stanford Spring Symposium, to a lively audience of active AI researchers and formerly active ones (whose current inaction could be variously ascribed to their having aged, reformed, given up, redefined the problem, and so on). This article is a brief chronicling of that talk, and I entreat the reader to take it in that spirit: a textual snapshot of a discussion with friends and colleagues, rather than a scholarly article. I begin by whining about the Turing Test, but only for a thankfully brief bit, and then get down to my top-10 list of factors that have retarded progress in our field, that have delayed the emergence of a true strong AI.


The Third International Conference on Human-Robot Interaction

AI Magazine

More than 250 representatives from academia, government, and industry attended the Third International Conference on Human-Robot Interaction (HRI-2008); the National Science Foundation and the European Network for the Advancement of Artificial Cognitive Systems (EU Cognition) provided grants. HRI is the premier forum for the presentation and discussion of research results in human-robot interaction, a field that is inherently interdisciplinary, drawing on artificial intelligence, cognitive science, ergonomics, human-computer interaction, psychology, robotics, and other fields. The conference theme, "living with robots," highlights the importance of this research. HRI-2008 also featured a panel on "robo-ethics," intended to start a discussion of the ethical and societal implications of autonomous robots, and a panel on "what is HRI?" that examined the constitutive components of human-robot interaction. Of the 134 submissions, the program committee accepted 48 full papers; 27 submissions were featured in a special session. The workshops addressed metrics (an examination of proposed guidelines for evaluating HRI) and coding behavioral video data. Seven student teams competed, and the team from the University of Amsterdam took top honors; an award also went to "Robots in Organizations," a paper coauthored by Jodi Forlizzi. Fong has published more than 50 papers in field robotics, human-robot interaction, virtual reality user interfaces, and parallel processing; was chair of the 2006 AAAI Spring Symposium on human-robot interaction in space; and is cogeneral chair of HRI-2008. From 1997 to 2000, he was vice president of development for Fourth Planet, Inc., a developer of real-time visualization software. Kerstin Dautenhahn is the research professor of artificial intelligence in the School of Computer Science and coordinator of the Adaptive Systems Research Group at the University of Hertfordshire in the United Kingdom. She was general chair of IEEE RO-MAN06 and cogeneral chair of HRI-2008. Scheutz was the coprogram chair for HRI-2008.


Reconstructing True Wrong Inductions

AI Magazine

There have been many erroneous pre-scientific and common-sense inductions. We want to understand why people believe in wrong theories. Our hypothesis is that mistaken inductions are due not only to a lack of facts, but also to poor descriptions of existing facts and to implicit knowledge that is transmitted socially. This paper presents several experiments that aim to validate this hypothesis by using machine learning and data mining techniques to simulate the way people build erroneous theories from observations.


Lessons Learned Delivering Optimized Supply Chain Planning to the Business World

AI Magazine

The development of online commerce forced businesses to question the week-plus supply-chain planning cycles that had been the norm. Finally, the year 2000 (Y2K) problem caused an across-the-board replacement of enterprise software, allowing many businesses to update their approach to supply-chain planning. The end result of all of these factors was a huge upswing in demand for supply-chain planning tools from i2 Technologies and other vendors. Technically, the underlying optimization problem is either NP-complete or PSPACE-complete (depending on the details of the domain). Furthermore, the problem mixes a dozen or so classic optimization problems from AI and operations research (OR), and much of the expected savings from global supply-chain optimization are lost if these subproblems are solved in isolation. This article describes our experience from four years of solving supply-chain planning and optimization problems across industries, and some of the lessons we learned.


Often, It’s not About the AI

AI Magazine

Narrowly focused, task- and domain-specific AI has been applied successfully for more than twenty-five years and has produced immense value in industry and government. It doesn’t lead directly to artificial general intelligence (AGI), but it does have real problem-solving value. It is useful to note that many of the reasons why some otherwise meritorious AI applications fail have nothing to do with the AI per se, but rather with systems engineering and organizational issues. For example: the domain expert is pulled out to work on more critical projects; the application champion rotates out of his or her position; or the sponsor changes priorities. A system may not make it past an initial pilot test for logistical rather than substantive technical reasons. Some embedded AI systems may work well for years on a software platform that is then orphaned, when porting would be prohibitively expensive. A system may work well in a pilot test, but it might not scale to huge numbers of users without extensive performance optimization. The core AI system may be great, but the user interface could be suboptimal. The delivered application system might work well, but it could be hard to maintain internally. The system may work according to the sponsor’s requirements, but it might not be applied to the part of the problem that delivers the largest economic results; or the system might not produce enough visible organizational benefits to protect it in subsequent budget battles. Alternatively, the documented results may be quite strong, but may not be communicated effectively across organizational boundaries. All software projects are vulnerable to one or more of these problems. The fact that some software projects have a relatively small percentage of their total code in embedded AI methods doesn’t make them an exception. However, knowing about these potential problems could help AI project teams to be proactive about avoiding them whenever possible.


Electric Elves: What Went Wrong and Why

AI Magazine

Software personal assistants continue to be a topic of significant research interest. This article outlines some of the important lessons learned from a successfully deployed team of personal assistant agents (Electric Elves) in an office environment. In the Electric Elves project, a team of almost a dozen personal assistant agents was continually active for seven months. Each elf (agent) represented one person and assisted in daily activities in an actual office environment. This project led to several important observations about privacy, adjustable autonomy, and social norms in office environments. In addition to outlining some of the key lessons learned, we describe our continued research to address some of the concerns raised.


Learning from Noise

AI Magazine

Because the data consisted of long records of real values, the student was advised to use artificial neural networks. After several weeks of producing random classifiers, the student showed up at my office and asked whether I could help. It always seems a good idea to analyze the data first, so we constructed a primitive visualization: signal strength of four antennae over time. The graphs looked like we'd glued a pen on a dog's tail while showing him a juicy T-bone steak. I suggested we add a few functions, such as pairwise difference, mean, deviation, and so on--just to get a feel for the data.
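The derived features the author suggests (pairwise differences, mean, and deviation of the antenna readings) can be sketched as follows. The four-readings-per-record layout and the feature names are assumptions for illustration, not the student's actual code.

```python
# Sketch of the derived features from the story: pairwise differences, mean,
# and standard deviation of the four antenna signal strengths per record.
# The data layout (four real-valued readings per record) is an assumption.
from itertools import combinations
from statistics import mean, pstdev

def derived_features(antennae):
    """antennae: four signal-strength readings for one time step."""
    feats = {f"diff_{i}_{j}": antennae[i] - antennae[j]
             for i, j in combinations(range(4), 2)}  # 6 pairwise differences
    feats["mean"] = mean(antennae)
    feats["stdev"] = pstdev(antennae)
    return feats

row = [0.8, 0.5, 0.9, 0.4]  # one (hypothetical) time step
feats = derived_features(row)
print(sorted(feats))
```

Features like these are invariant to some of the raw signal's jitter, which is exactly why they can give "a feel for the data" that the raw traces hide.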


A Too-Clever Ranking Method

AI Magazine

I developed a method in which instances were scored, and those with the lowest scores could be removed before running C4.5 to build a decision tree with the remainder. I ran an experiment in which I removed the bottom 10 percent of the instances in a University of California, Irvine (UCI) data set. The resulting tree was smaller and more accurate (as measured by 10-fold cross-validation) than the tree built on the full data set. Then I removed the bottom 20 percent of the instances and got a tree that was smaller than the last one and just as accurate. At that point I had the feeling that this was going to make a great paper for the International Conference on Machine Learning (ICML).
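The experiment described above can be sketched as follows. scikit-learn's DecisionTreeClassifier stands in for C4.5, and the scoring function is a hypothetical placeholder, since the article does not reveal the actual ranking method. Note that, as in the story, the 10-fold cross-validation is run on the already-pruned data.

```python
# Sketch of the ranking experiment: score instances, drop the lowest-scoring
# fraction, and cross-validate a tree on the remainder. DecisionTreeClassifier
# is a stand-in for C4.5, and score_instances is a hypothetical placeholder.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def score_instances(X):
    # Placeholder score: negated distance to the overall mean, so outliers
    # get the lowest scores (illustrative only, not the article's method).
    return -np.linalg.norm(X - X.mean(axis=0), axis=1)

def prune_and_evaluate(X, y, drop_fraction):
    """Drop the lowest-scoring fraction, then 10-fold CV on what remains."""
    keep = np.argsort(score_instances(X))[int(drop_fraction * len(X)):]
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[keep], y[keep], cv=10).mean()

X, y = load_iris(return_X_y=True)
for frac in (0.0, 0.1, 0.2):
    print(f"dropped {frac:.0%}: accuracy {prune_and_evaluate(X, y, frac):.3f}")
```

Evaluating on the pruned set is precisely the trap the title hints at: the held-out folds no longer contain the hard instances, so accuracy is measured on an easier distribution than the one the deployed tree would face.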