The St. Thomas Common Sense Symposium: Designing Architectures for Human-Level Intelligence

AI Magazine

To build a machine that has "common sense" was once a principal goal in the field of artificial intelligence. But most researchers in recent years have retreated from that ambitious aim. Instead, each developed some special technique that could deal with some class of problem well but did poorly at almost everything else. We are convinced, however, that no one such method will ever turn out to be "best," and that instead, the powerful AI systems of the future will use a diverse array of resources that, together, will deal with a great range of problems. To build a machine that's resourceful enough to have humanlike common sense, we must develop ways to combine the advantages of multiple methods to represent knowledge, multiple ways to make inferences, and multiple ways to learn. We held a two-day symposium in St. Thomas, U.S. Virgin Islands, to discuss such a project: to develop new architectural schemes that can bridge between different strategies and representations. This article reports on the events and ideas developed at this meeting and subsequent thoughts by the authors on how to make progress.
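The core idea above, falling back across a diverse array of specialized methods rather than betting on one "best" technique, can be sketched as a trivial dispatcher. This is a toy illustration only, not the architecture the symposium proposed; all function names and facts here are hypothetical:

```python
# Toy sketch of a multi-method solver: try each specialized method in turn
# and return the first answer any of them produces. Purely illustrative.

def arithmetic_method(problem):
    # Handles only simple "a + b" style questions.
    try:
        a, op, b = problem.split()
        if op == "+":
            return int(a) + int(b)
    except ValueError:
        pass
    return None

def lookup_method(problem):
    # Handles only questions it has memorized.
    facts = {"capital of France": "Paris"}
    return facts.get(problem)

METHODS = [arithmetic_method, lookup_method]

def solve(problem):
    """Try each method; return the first non-None answer."""
    for method in METHODS:
        answer = method(problem)
        if answer is not None:
            return answer
    return "no method applies"

print(solve("2 + 3"))              # 5
print(solve("capital of France"))  # Paris
```

Each individual method fails on almost everything, exactly as the abstract describes; only the combination covers a wider range of problems.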


Microsoft co-founder launches $125m fund to teach AI common sense

#artificialintelligence

Called Project Alexandria, the research effort is based out of Allen's Allen Institute for Artificial Intelligence (AI2) in Seattle. It will first seek to produce standard measurements of the common-sense abilities of AI models. Once it is possible to measure whether a system has common sense, researchers will be better placed to start figuring out how to teach it. Project Alexandria will also be looking at ways to crowdsource common-sense knowledge from individuals. By collecting common-sense reactions "at an unprecedented scale," Alexandria aims to develop a dataset comprehensive enough to train an AI model.
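The pipeline described above, crowdsource labeled common-sense assertions and then score a model against them, can be sketched minimally. Everything here (the assertions, the scoring function, the stand-in model) is a hypothetical illustration, not AI2's actual benchmark:

```python
# Hypothetical sketch: score a model's common-sense accuracy against
# crowdsourced true/false assertions. Illustrative data only.

# Assertions, each labeled True or False by human annotators.
assertions = [
    ("Water is wet.", True),
    ("People sleep at night more often than at noon.", True),
    ("A mouse is larger than an elephant.", False),
]

def score_model(predict, labeled_assertions):
    """Return the fraction of assertions the model judges correctly."""
    correct = sum(
        1 for text, label in labeled_assertions if predict(text) == label
    )
    return correct / len(labeled_assertions)

# A stand-in "model" that answers True for everything.
always_true = lambda text: True

print(score_model(always_true, assertions))  # 2 of 3 correct
```

Such a standard measurement makes the goal concrete: progress in teaching common sense becomes a number that can be tracked as models improve.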


IBM: Response to RFI

#artificialintelligence

As AI systems become ubiquitous in people's lives, serving many purposes in both personal and professional tasks, there are still many things they cannot do, or that they should do much better. In order for AI systems to enhance humans' quality of life, both personally and professionally, they must acquire broad and deep knowledge from multiple domains, learn continuously from interactions with people and environments, and support reasoned decisions. Broadly, the AI field's long-term progress depends upon many such advances, and significant research efforts should be devoted to addressing these deficiencies.


Artificial Intelligence & Their Rights

#artificialintelligence

The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. Type 1: Reactive Machines. Cortana, Siri, Google Now, A.L.I.C.E., Tumblr bots, AlphaGo, Deep Blue, and IBM's Watson are all examples of reactive machines: machines that learn, to a point. For example, Deep Blue, which beat the reigning world chess champion at his own game, could learn and predict possible moves and knew the rules of the game. But that was it; it could only study and play the game in real time.