Meet the Woman Pioneering Work To Make AI Emotionally Intelligent

#artificialintelligence

Humans are already forming relationships with their artificial intelligence (AI) assistants, so we should make that technology as emotionally aware as possible by teaching it to respond to our feelings. That is the premise of Rana el Kaliouby, cofounder and CEO of Affectiva, an MIT spinout that sells emotion recognition technology based on her computer science PhD, during which she built the first-ever computer that could recognise emotions. The machine-learning-based software uses a camera or webcam to identify parts of the human face (eyebrows, the corners of the eyes, and so on), classify expressions, and map them onto emotions like joy, disgust, surprise, anger, and so on, in real time. "We are getting lots of interest around chatbots, self-driving cars, anything with a conversational interface. If it's interfacing with a human, it needs social and emotional skills," says el Kaliouby.
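A minimal sketch of the pipeline the article describes: grab webcam frames, locate faces, crop them, and hand each crop to an expression classifier whose output is mapped to a coarse emotion label. The face detection uses OpenCV's stock Haar cascade; the classifier itself (classify_expression) is a hypothetical placeholder, since Affectiva's actual models are proprietary.

```python
import cv2

EMOTIONS = ["joy", "disgust", "surprise", "anger", "neutral"]

# Stock frontal-face detector shipped with OpenCV.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def classify_expression(face_crop):
    """Placeholder for a trained expression model (e.g. a CNN over facial
    landmarks); here it simply returns 'neutral'."""
    return "neutral"

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        # Crop the detected face region and label it with an emotion.
        emotion = classify_expression(gray[y:y + h, x:x + w])
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, emotion, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("emotion demo", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```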


Robot 'sets new Rubik's Cube record' - BBC News

#artificialintelligence

A robot has just set a new record for the fastest-solved Rubik's Cube, according to its makers. The Sub1 Reloaded robot took just 0.637 seconds to analyse the toy and make 21 moves, so that each of the cube's sides showed a single colour. That beats a previous record of 0.887 seconds, which was achieved by an earlier version of the same machine using a different processor. Chipmaker Infineon supplied the new processor to highlight advances in self-driving car technology. But one expert has questioned the point of the stunt.


Adaptive Intelligent Vehicle Modules for Tactical Driving

AAAI Conferences

SAPIENT is a reasoning system that combines high-level task goals with low-level sensor constraints to control simulated and (ultimately) real vehicles like the Carnegie Mellon Navlab robot vans. SAPIENT consists of a number of reasoning modules whose outputs are combined using a voting scheme. The behavior of these modules is directly dependent on a large number of parameters both internal and external to the modules. Without carefully setting these parameters, it is difficult to assess whether the reasoning modules can interact correctly; furthermore, selecting good values for these parameters manually is tedious and error-prone. We use an evolutionary algorithm, termed Population-Based Incremental Learning, to automatically set each module's parameters.
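An illustrative sketch of Population-Based Incremental Learning (PBIL) as the abstract describes it: candidate parameter settings are encoded as bit strings, sampled from a probability vector, scored, and the probability vector is nudged toward the best candidate each generation. The fitness function below is a toy stand-in for SAPIENT's driving-performance evaluation, which is not spelled out here.

```python
import random

N_BITS = 32        # bits encoding one candidate set of module parameters
POP_SIZE = 50
LEARN_RATE = 0.1
GENERATIONS = 200

def fitness(bits):
    """Placeholder score; SAPIENT would run a driving evaluation here.
    Toy objective: maximise the number of 1-bits."""
    return sum(bits)

# Probability vector starts uninformative: each bit is 1 with probability 0.5.
prob = [0.5] * N_BITS

for _ in range(GENERATIONS):
    # Sample a population of candidate bit strings from the probability vector.
    population = [[1 if random.random() < p else 0 for p in prob]
                  for _ in range(POP_SIZE)]
    best = max(population, key=fitness)
    # Shift each probability toward the corresponding bit of the best sample.
    prob = [(1 - LEARN_RATE) * p + LEARN_RATE * b for p, b in zip(prob, best)]

print("learned probability vector:", [round(p, 2) for p in prob])
```

Decoding the final bit string back into each module's internal and external parameters would depend on how SAPIENT encodes them, which the abstract does not specify.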


How Task Analysis Can Be Used to Derive and Organize the Knowledge For the Control of Autonomous Vehicles

AAAI Conferences

The Real-time Control System (RCS) Methodology has evolved over a number of years as a technique to capture task knowledge and organize it in a framework conducive to implementation in computer control systems. The fundamental premise of this methodology is that the present state of the task activities sets the context that identifies the requirements for all of the support processing. In particular, the task context at any time determines what is to be sensed in the world, what world model states are to be evaluated, which situations are to be analyzed, what plans should be invoked, and which behavior generation knowledge is to be accessed. This results in a methodology that concentrates first and foremost on the task definition. It starts with the definition of the task knowledge in the form of a decision tree that clearly represents the branching of tasks into layers of simpler and simpler subtask sequences. This task decomposition framework is then used to guide the search for and to emplace all of the additional knowledge. This paper explores this process in some detail, showing how this knowledge is represented in a task context-sensitive relationship that supports the very complex real-time processing the computer control systems will have to do.
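A toy rendering of the central RCS idea that the current task context indexes all supporting knowledge: each node in the task decision tree carries the sensing requirements, world-model states, and plans relevant only in that context. The class, field names, and driving tasks below are illustrative assumptions, not the NIST RCS data model.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TaskNode:
    name: str
    sense: List[str] = field(default_factory=list)         # what must be sensed
    world_states: List[str] = field(default_factory=list)  # states to evaluate
    plans: List[str] = field(default_factory=list)          # plans to invoke
    subtasks: List["TaskNode"] = field(default_factory=list)

# Task knowledge branches into layers of simpler and simpler subtask sequences.
drive = TaskNode(
    name="drive_to_goal",
    subtasks=[
        TaskNode("follow_lane",
                 sense=["lane markings", "lead vehicle range"],
                 world_states=["in_lane", "closing_on_lead_vehicle"],
                 plans=["maintain_speed", "adjust_headway"]),
        TaskNode("pass_vehicle",
                 sense=["adjacent lane occupancy", "lead vehicle speed"],
                 world_states=["gap_available"],
                 plans=["lane_change_left", "abort_pass"]),
    ],
)

def knowledge_for(node: TaskNode) -> Dict[str, List[str]]:
    """Return the support knowledge that this task context makes relevant."""
    return {"sense": node.sense, "evaluate": node.world_states, "plans": node.plans}

print(knowledge_for(drive.subtasks[0]))
```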



AI Magazine

Keith M. Andress, coauthor of "Evidence Accumulation and Flow of Control in a Hierarchical Spatial Reasoning System," is a research associate in the Robot Vision Lab at Purdue University. His research interests are in formalisms for accumulation of evidence, expert systems, and computer vision. Steven J. Frank, author of "What AI Practitioners Should Know about the Law. Part Two," is an attorney practicing with Nutter, McClennen & Fish, One International Place, Boston, Massachusetts 02210-2699. Martin Herman, coauthor of "A Framework for Representing and Reasoning about Three-Dimensional Objects for Vision," is group leader of the Sensory Intelligence Group in the Robot Systems Division at the National Bureau of Standards, Gaithersburg, MD 20899. His research interests are robotics, robot vision, image understanding, world modeling, real-time planning, autonomous vehicles, and remotely operated vehicles. Avinash C. Kak, coauthor of "Evidence Accumulation and Flow of Control in a Hierarchical Spatial Reasoning System," is a professor of electrical engineering at Purdue University.