In part, the critics of AI are driven by the knowledge that 'white collar jobs' are the ones now under threat. Business leaders are frequently confronted by notions of job-killing automation and headlines on variations of the theme that "Robots Will Steal Our Jobs." Elon Musk, CEO of Tesla, Silicon Valley figurehead, and champion of technology-driven innovation, even goes a step further by suggesting AI is a fundamental threat to human civilisation. The robot on the assembly line is now a familiar image. AI in middle management is new.
That's because, to paraphrase Amazon's Jeff Bezos, artificial intelligence (AI) is "not just in the first inning of a long baseball game, but at the stage where the very first batter comes up." Look around, and you will find AI everywhere: in self-driving cars, Siri on your phone, online customer support, movie recommendations on Netflix, fraud detection for your credit cards, and more. To be sure, there's more to come. Featuring 30 lectures, MIT's course "introduces students to the basic knowledge representation, problem solving, and learning methods of artificial intelligence." It includes interactive demonstrations designed to "help students gain intuition about how artificial intelligence methods work under a variety of circumstances."
Recent innovations around the autonomous car have shaken up the automotive industry. Manufacturers and their suppliers are all accelerating their work on the cars of the future, both conventional human-operated cars and driverless or semi-autonomous vehicles. But beyond questions of autonomy, these cars of the future are undergoing a fundamental shift in human-machine interaction. Consumers today crave more relational and conversational interactions with devices, as evidenced by the popularity of chatbots and virtual assistants like Siri and Alexa, and the automotive industry has taken notice. As such, next-generation cars are emerging as advanced artificial intelligence (AI) systems that will power an entirely new automotive experience, one in which cars become conversational interfaces between the driver, the passengers, the vehicle itself and its controls, all connected to the IoT and the mobile devices we use.
Humans are already forming relationships with their artificial intelligence (AI) assistants, so we should make that technology as emotionally aware as possible by teaching it to respond to our feelings. That is the premise of Rana el Kaliouby, cofounder and CEO of Affectiva, an MIT spinout company that sells emotion recognition technology based on her computer science PhD, during which she built the first computer that could recognise emotions. The machine learning-based software uses a camera or webcam to identify parts of human faces (eyebrows, the corners of eyes, etc.), classify expressions and map them onto emotions like joy, disgust, surprise, and anger, in real time. "We are getting lots of interest around chatbots, self-driving cars, anything with a conversational interface. If it's interfacing with a human it needs social and emotional skills."
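The pipeline described above (detect facial features, classify the expression, map it to an emotion label) can be sketched in a few lines. This is a deliberately simplified, rule-based illustration; the feature names and thresholds are hypothetical and bear no relation to Affectiva's actual models, which are learned from data.

```python
# Illustrative sketch of an emotion-recognition pipeline: geometric
# features extracted from face landmarks are classified into a coarse
# emotion label. All feature names and thresholds are invented for
# illustration; a production system would use a trained classifier.

from dataclasses import dataclass


@dataclass
class FaceFeatures:
    """Normalised measurements derived from detected face landmarks."""
    mouth_corner_lift: float  # > 0 when mouth corners rise (smiling)
    brow_raise: float         # > 0 when eyebrows lift (surprise)
    brow_furrow: float        # > 0 when brows pull together (anger)


def classify_emotion(f: FaceFeatures) -> str:
    """Map expression features to one coarse emotion label."""
    if f.mouth_corner_lift > 0.5:
        return "joy"
    if f.brow_raise > 0.5:
        return "surprise"
    if f.brow_furrow > 0.5:
        return "anger"
    return "neutral"
```

In a real system this classification would run per frame on a webcam stream, with the landmark detection itself performed by a computer-vision model.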
The IJCAI-09 Workshop on Learning Structural Knowledge from Observations (STRUCK-09) took place as part of the International Joint Conference on Artificial Intelligence (IJCAI-09) on July 12 in Pasadena, California. The workshop program included paper presentations, discussion sessions about those papers, group discussions on two selected topics, and a joint discussion. A recurring theme was that many cognitive architectures use structural models to represent relations between knowledge of different complexity. Structural modeling has led to a number of representation and reasoning formalisms, including frames, schemas, abstractions, hierarchical task networks (HTNs), and goal graphs, among others. These formalisms have in common the use of certain kinds of constructs (for example, objects, goals, skills, and tasks) that represent knowledge of varying degrees of complexity and that are connected through structural relations.
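One of the formalisms named above, the hierarchical task network, can be sketched very compactly: complex tasks are linked to simpler subtasks through structural decomposition relations, and a plan is obtained by expanding tasks until only primitives remain. The task names and the single-method decomposition below are invented for illustration; real HTN planners support multiple methods per task, preconditions, and ordering constraints.

```python
# Minimal hierarchical task network (HTN) sketch: each compound task
# maps to an ordered list of subtasks; tasks with no entry are primitive.
# Task names are hypothetical examples, not from any particular planner.

methods = {
    "make-tea": ["boil-water", "steep-leaves", "pour"],
    "boil-water": ["fill-kettle", "heat-kettle"],
}


def decompose(task: str) -> list[str]:
    """Recursively expand a task into the primitive actions it comprises."""
    if task not in methods:        # primitive task: no further structure
        return [task]
    plan = []
    for subtask in methods[task]:  # structural relation: task -> subtasks
        plan.extend(decompose(subtask))
    return plan
```

The same parent-child pattern underlies frames and goal graphs as well: constructs of higher complexity are defined in terms of simpler ones via explicit structural links.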