Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.
In a Harvard Business School classroom in Boston, MA, robots are on the rise. MBA students are trying to crack a case study on the self-driving cars pioneered by Tesla, Google, and Uber. What is the potential for robots to reshape our roads? And what are the challenges and opportunities of entering that business? This is a case that David Yoffie, professor of international business administration, believes is essential reading for tomorrow's business leaders.
This is an integrative review that addresses the question, "What makes for a good explanation?" with reference to AI systems. The pertinent literatures are vast. Thus, this review is necessarily selective. That said, most of the key concepts and issues are expressed in this Report. The Report encapsulates the history of computer science efforts to create systems that explain and instruct (intelligent tutoring systems and expert systems). The Report expresses the explainability issues and challenges in modern AI, and presents capsule views of the leading psychological theories of explanation. Certain articles stand out by virtue of their particular relevance to XAI, and their methods, results, and key points are highlighted. It is recommended that AI/XAI researchers be encouraged to include in their research reports fuller details on their empirical or experimental methods, in the fashion of experimental psychology research reports: details on Participants, Instructions, Procedures, Tasks, Dependent Variables (operational definitions of the measures and metrics), Independent Variables (conditions), and Control Conditions.
So you want to learn the mathematics for Machine Learning? Well, for Machine Learning, Deep Learning, and AI, a thorough mathematical understanding is not optional. I know the options out there, the prerequisites, and the skills you need to succeed in Machine Learning and AI. If you want to learn Machine Learning, these classes will help you master the mathematical foundation required for writing programs and algorithms for Machine Learning, Deep Learning, and AI. My goal in this piece is to help you find the resources to build good intuition and get the hands-on experience you need with coding neural nets, stochastic gradient descent, and principal component analysis.
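To give a small taste of the hands-on work mentioned above, here is a minimal sketch (my own illustration, not drawn from any particular course) of stochastic gradient descent fitting a one-variable linear model with NumPy; the data and hyperparameters are made up for the example:

```python
import numpy as np

# Hypothetical example data: y = 3*x + 2 plus a little noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 0.05, size=200)

w, b = 0.0, 0.0   # parameters to learn
lr = 0.1          # learning rate (chosen for illustration)

for epoch in range(50):
    for i in rng.permutation(len(X)):   # visit samples in random order
        pred = w * X[i, 0] + b
        err = pred - y[i]               # gradient of the squared-error loss 0.5*err**2
        w -= lr * err * X[i, 0]         # one stochastic gradient step per sample
        b -= lr * err

print(w, b)  # should end up close to the true values 3.0 and 2.0
```

The same update rule generalizes to neural networks; only the way the gradient is computed (backpropagation) changes.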