Commonsense Reasoning


[P] Interactive demo of a SOTA neural coreference resolution model (open-source code) • r/MachineLearning

@machinelearnbot

Coreference resolution is a very challenging NLP task in which you try to link mentions in a text to the real-world entities they refer to. It is the basis of the Winograd Schema Challenge, a test designed to defeat the AIs that have beaten the Turing Test! Hope you like it; I definitely think there should be more interactive demos of NLP systems like this!
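
A minimal sketch of what a system like this exposes programmatically, assuming the open-source neuralcoref extension for spaCy (the kind of model behind such demos); the attribute names below come from one release of that package and may differ in other versions.

```python
# Coreference sketch, assuming the neuralcoref spaCy extension
# (pip install spacy neuralcoref); attribute names vary by version.
import spacy
import neuralcoref

nlp = spacy.load("en_core_web_sm")   # standard English pipeline
neuralcoref.add_to_pipe(nlp)         # register the coreference component

doc = nlp("My sister has a dog. She loves him.")

print(doc._.has_coref)        # True if any coreference cluster was found
print(doc._.coref_clusters)   # clusters of mentions, e.g. [My sister: [My sister, She], ...]
print(doc._.coref_resolved)   # the text with pronouns replaced by their antecedents
```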


Data Resources: Datasets | Center for Data on the Mind

#artificialintelligence

Dataset from the U.S. Department of Education that includes various metrics on outcomes from degree-granting undergraduate institutions from 1996 to 2015, including student debt, college completion rates, job placement, and more.
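
A quick sketch of how one might start exploring a dataset like this with pandas. The file name and column names below are hypothetical placeholders, not the dataset's actual schema; consult the official data dictionary before relying on any field.

```python
# Exploratory sketch for a College Scorecard-style CSV.
# NOTE: "scorecard.csv", "completion_rate", and "median_debt" are hypothetical
# placeholders; check the official data dictionary for real field names.
import pandas as pd

df = pd.read_csv("scorecard.csv", low_memory=False)

# Basic coverage and missing-data checks.
print(df.shape)
print(df.isna().mean().sort_values(ascending=False).head())

# Example aggregate: median student debt by completion-rate quartile.
df["completion_quartile"] = pd.qcut(df["completion_rate"], 4, labels=False)
print(df.groupby("completion_quartile")["median_debt"].median())
```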


How IBM Is Building a Business Around Watson | True Viral News

#artificialintelligence

Paul Horn, then director of IBM Research, had been bugging Lickel to come up with an idea for the company's next "grand challenge," Big Blue's tradition of tackling incredibly tough problems just to see if they can be solved. In the beginning, the researchers experimented with rule-based systems similar to Doug Lenat's Cyc project, which would answer questions based on information provided by human experts, almost the way an encyclopedia works. But where the company really sees great opportunity is in offering Watson as a service that other companies and developers can access through APIs in order to develop their own applications. "So Watson is not only giving answers; it is also, in some cases, posing questions to human conventional wisdom."
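
To make the contrast with statistical approaches concrete, here is a toy sketch of the rule-based, expert-curated style of question answering described above: facts entered by hand, answers derived by lookup and simple inference. It is purely illustrative and not how Watson or Cyc is actually built.

```python
# Toy rule-based QA: hand-entered facts plus simple inference rules.
# Purely illustrative; not how Watson or Cyc is actually implemented.
FACTS = {
    ("Toronto", "is_a"): "city",
    ("Toronto", "located_in"): "Canada",
    ("Canada", "is_a"): "country",
}

RULES = [
    # If X is located in Y and Y is a country, then X's country is Y.
    lambda kb: {(x, "country"): y
                for (x, rel), y in kb.items()
                if rel == "located_in" and kb.get((y, "is_a")) == "country"},
]

def answer(entity, relation):
    kb = dict(FACTS)
    for rule in RULES:
        kb.update(rule(kb))          # apply each expert-written rule once
    return kb.get((entity, relation), "unknown")

print(answer("Toronto", "country"))  # -> Canada
```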


Robots Need "Common Sense" AI to Work Out Our Uncertain World

#artificialintelligence

At the Machine Intelligence Summit in Berlin last week, Jeremy Wyatt presented advances in mobile robot task planning and manipulation, with an overview of the field and examples of work from his lab, including machine vision, common-sense reasoning, and robotic grasping. That work includes methods for task planning, manipulation, long-life robots, whole-body control, machine vision, and machine learning. I would also expect robot grasping in unstructured settings, such as logistics picking, to be solved, though not necessarily with the speed and reliability of humans. Wyatt spoke at the Machine Intelligence Summit, Berlin, on 29-30 June.
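
As a rough illustration of what "task planning" means in this context, the sketch below is a minimal, generic state-space planner: actions with preconditions and effects, searched breadth-first for a sequence that reaches the goal. It is a textbook-style toy, not a description of Wyatt's lab's methods.

```python
# Minimal STRIPS-style task planner: breadth-first search over actions with
# preconditions, add-effects, and delete-effects. A generic textbook toy,
# not any particular lab's system.
from collections import deque

ACTIONS = {
    # name: (preconditions, effects added, effects deleted)
    "pick_up_cup":  ({"at_table", "hand_empty"}, {"holding_cup"}, {"hand_empty"}),
    "move_to_sink": ({"at_table"}, {"at_sink"}, {"at_table"}),
    "place_cup":    ({"at_sink", "holding_cup"}, {"cup_in_sink", "hand_empty"}, {"holding_cup"}),
}

def plan(start, goal):
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # all goal facts hold
            return steps
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:                   # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"at_table", "hand_empty"}, {"cup_in_sink"}))
# -> ['pick_up_cup', 'move_to_sink', 'place_cup']
```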


Winograd Schema Challenge Results: AI Common Sense Still a Problem, for Now

IEEE Spectrum Robotics Channel

The Winograd Schema Challenge tasks computer programs with answering a specific type of simple, commonsense question called a pronoun disambiguation problem (PDP). In the example passage, a rich old man offers to buy Babar (a young elephant) the new clothes he wants; solving the problem means successfully determining whether each pronoun refers to Babar or to the old man. To figure out who this "he" refers to, you have to understand that giving people (or elephants) things makes them happy, and that the old man, being rich, is in a position to give Babar the thing that he wants. It was known that there were issues with the Turing test, and there were many research groups in other areas such as learning, natural language processing, and computer vision that had challenge problems, whereas we really didn't have one.
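
A PDP can be written down very compactly: a passage, a marked pronoun, and two candidate antecedents. The sketch below (a condensed paraphrase of the Babar example, with an intentionally naive resolver) just makes that structure explicit; the "nearest mention" baseline it uses is exactly the kind of shortcut these problems are designed to defeat.

```python
# A pronoun disambiguation problem (PDP) as data: given a passage, a pronoun,
# and two candidate antecedents, decide which candidate the pronoun refers to.
from dataclasses import dataclass

@dataclass
class PDP:
    passage: str
    pronoun: str          # the pronoun occurrence to resolve
    candidates: tuple     # the two possible antecedents
    answer: str           # gold label, for scoring

problem = PDP(
    passage=("Babar wonders how he can get new clothing. Luckily, a very "
             "rich old man offers to buy him what he wants."),
    pronoun="he wants",
    candidates=("Babar", "old man"),
    answer="Babar",
)

def nearest_mention_baseline(pdp: PDP) -> str:
    # Pick whichever candidate was mentioned most recently before the pronoun.
    pos = pdp.passage.find(pdp.pronoun)
    return max(pdp.candidates, key=lambda c: pdp.passage.rfind(c, 0, pos))

print(nearest_mention_baseline(problem), "| gold:", problem.answer)
# The naive baseline answers "old man", which is wrong: commonsense is needed.
```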


A tougher Turing Test shows that computers still have virtually no common sense

#artificialintelligence

The Winograd Schema Challenge asks computers to make sense of sentences that are ambiguous but usually simple for humans to parse. Disambiguating Winograd Schema sentences requires some common-sense understanding. Gary Marcus, who is also the cofounder of a new AI startup, Geometric Intelligence, says it's notable that Google and Facebook did not take part in the event, even though researchers at these companies have suggested they are making major progress in natural language understanding. "It's going to come up when you start to support dialogues," says Charlie Ortiz, a senior principal researcher at Nuance, a company that makes voice recognition and voice interface software, which sponsored the Winograd Schema Challenge.


A tougher Turing Test shows chatbots are still pretty stupid

#artificialintelligence

To find out just how advanced our current AI systems are, researchers have developed a tougher Turing Test, called the Winograd Schema Challenge, which measures how well machine intelligence matches human intelligence. The challenge was created by Hector Levesque of the University of Toronto. The entrant submitted by Quan Liu, built with assistance from researchers at York University in Toronto and the National Research Council of Canada, used techniques known as deep learning, in which software is trained on huge amounts of data to try to spot patterns and mimic the neuron activity going on in our own brains. The hope is that some advance will take place, raising the level of AI intelligence quickly, but that's very wishful thinking.
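
The deep-learning approach described above can be made concrete with a generic candidate-scoring sketch: substitute each candidate antecedent for the pronoun and ask a pretrained language model which version it finds more plausible. The snippet uses an off-the-shelf modern model (GPT-2 via the transformers library) purely for illustration; it is not the entrants' actual system.

```python
# Generic "pick the substitution a language model prefers" sketch for
# Winograd-style pronoun disambiguation. Illustrative only; this is not
# the Winograd Schema Challenge entrants' actual method.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def log_likelihood(text: str) -> float:
    # Total log-likelihood of the text under the language model.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean negative log-likelihood per token
    return -loss.item() * ids.size(1)

template = "The trophy does not fit in the suitcase because {} is too big."
candidates = ["the trophy", "the suitcase"]
best = max(candidates, key=lambda c: log_likelihood(template.format(c)))
print(best)   # a reasonable model should prefer "the trophy"
```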


Rise of the machines postponed after all contestants fail AI challenge | The INQUIRER

#artificialintelligence

THE WINOGRAD Schema Challenge is a competition intended to reward technologists who can build a system that understands the kind of ambiguous sentences humans come out with all the time, but which are simple for other humans, even stupid ones, to understand. With things like Apple's Siri, Microsoft's Cortana and Google Assistant around, you might assume the Winograd Schema Challenge must surely be as good as obsolete by now. However, one of the two best-placed systems, led by Quan Liu, a researcher at the University of Science and Technology of China, together with researchers from York University in Toronto and the National Research Council of Canada, used neural network-based machine learning in a bid to train their computer to recognise the many different contexts in which words can be used. The Challenge is deliberately designed to be different from the Turing Test, which tests only whether a human can be fooled into thinking that an AI program is human.


An AI with 30 Years' Worth of Knowledge Finally Goes to Work

#artificialintelligence

And now, after years of work, Lenat's system is being commercialized by a company called Lucid. Among other projects, the company is developing a personal assistant equipped with Cyc's general knowledge. One such project involved adding new information to the Cyc knowledge base and building a new front-end interface that allows doctors to input natural-language queries such as "Find patients with bacteria after a pericardial window." "Deep learning is mainly about perception," he says, "but there is a lot of inference involved in everyday human reasoning, and Cyc represents a serious effort to grapple with the subtlety of that inference."
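
The kind of query quoted above comes down to matching structured facts against a hand-written rule. The toy sketch below shows that shape; the predicates, field names, and records are all made up for illustration and are not Cyc's actual representation or a real clinical schema.

```python
# Toy rule-driven retrieval over structured facts, in the spirit of the
# natural-language query quoted above. All predicates, fields, and records
# are made up; this is not Cyc's representation or a real clinical schema.
from datetime import date

PROCEDURES = [  # (patient, procedure, date performed)
    ("p1", "pericardial_window", date(2016, 3, 1)),
    ("p2", "appendectomy",       date(2016, 3, 5)),
]
CULTURES = [    # (patient, finding, date of culture)
    ("p1", "bacteria_positive", date(2016, 3, 9)),
    ("p2", "bacteria_positive", date(2016, 3, 2)),
]

def patients_with_bacteria_after(procedure: str):
    # Rule: a culture positive for bacteria, dated after the given procedure.
    hits = set()
    for patient, proc, proc_date in PROCEDURES:
        if proc != procedure:
            continue
        for p, finding, culture_date in CULTURES:
            if p == patient and finding == "bacteria_positive" and culture_date > proc_date:
                hits.add(patient)
    return sorted(hits)

print(patients_with_bacteria_after("pericardial_window"))  # -> ['p1']
```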