The central challenge in commonsense knowledge representation research is to develop content theories that achieve a high degree of both competency and coverage. We describe a new methodology for constructing formal theories in commonsense knowledge domains that complements traditional knowledge representation approaches by first addressing issues of coverage. The concepts identified in this way are sorted into a manageable number of coherent domains, one of which is the representational area of commonsense human memory. These representational areas are then analyzed using more traditional knowledge representation techniques, as demonstrated in this article by our treatment of commonsense human memory.
This paper presents a semantically grounded method for extracting commonsense knowledge. First, commonsense rules are identified, e.g., one cannot see imaginary objects. Second, those rules are combined with a basic semantic representation in order to infer commonsense facts, e.g., one cannot see a flying carpet. Further combinations of semantic relations with inferred commonsense facts are proposed and analyzed. Results show that this novel method is able to extract thousands of commonsense facts with little human interaction and high accuracy.
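The rule-plus-semantics combination described above can be illustrated with a minimal sketch. This is not the paper's implementation; the relation names, rule format, and toy facts are all illustrative assumptions.

```python
# Illustrative sketch: combine a commonsense rule ("one cannot see
# imaginary objects") with basic semantic class-membership facts to
# infer new commonsense facts. All predicate names are hypothetical.

# Rule: anything that is-a member of the class cannot have the property.
RULES = [
    ("isa", "imaginary_object", "cannot", "be_seen"),
]

# Basic semantic representation: (subject, relation, class) triples.
SEMANTIC_FACTS = [
    ("flying_carpet", "isa", "imaginary_object"),
    ("unicorn", "isa", "imaginary_object"),
    ("table", "isa", "physical_object"),
]

def infer_commonsense(rules, facts):
    """Apply each rule to every semantic fact whose relation and class match."""
    inferred = []
    for rel, cls, modality, prop in rules:
        for subj, fact_rel, fact_cls in facts:
            if fact_rel == rel and fact_cls == cls:
                inferred.append((subj, modality, prop))
    return inferred

print(infer_commonsense(RULES, SEMANTIC_FACTS))
# → [('flying_carpet', 'cannot', 'be_seen'), ('unicorn', 'cannot', 'be_seen')]
```

Because rules quantify over classes rather than individual entities, a handful of rules applied to a large semantic resource can yield thousands of facts, which is the scaling property the abstract reports.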
Contextual knowledge is essential in answering questions given specific observations. While recent approaches to building commonsense knowledge bases via text mining and/or crowdsourcing are successful, contextual knowledge is largely missing. To address this gap, this paper presents SocialExplain, a novel approach to acquiring contextual commonsense knowledge from explanations of social content. The acquisition process is broken into two cognitively simple tasks: to identify contextual clues from the given social content, and to explain the content with the clues. An experiment was conducted to show that multiple pieces of contextual commonsense knowledge can be identified from a small number of tweets. Online users verified that 92.45% of the acquired sentences are good, and 95.92% are new sentences compared with existing crowd-sourced commonsense knowledge bases.
The Winograd Schema Challenge has recently been proposed as an alternative to the Turing test. A Winograd Schema consists of a sentence and question pair such that the answer to the question depends on the resolution of a definite pronoun in the sentence. The answer is fairly intuitive for humans but is difficult for machines because it requires commonsense knowledge about words or concepts in the sentence. In this paper, we propose a novel technique that semantically parses the text, hunts for the needed commonsense knowledge, and uses that knowledge to answer the given question.
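The parse-then-resolve idea above can be sketched with the standard trophy/suitcase schema. The data layout, the toy knowledge base, and the role assignments are illustrative assumptions, not the paper's actual pipeline.

```python
# A Winograd Schema as a data structure, plus a toy knowledge-based
# pronoun resolver. The schema text is the standard example; the tiny
# knowledge base and role mapping are hypothetical.

SCHEMA = {
    "sentence": "The trophy doesn't fit in the suitcase because it is too big.",
    "pronoun": "it",
    "candidates": ["trophy", "suitcase"],
    "question": "What is too big?",
}

# Commonsense knowledge: for a "does not fit" event, which semantic
# role a size property explains.
KNOWLEDGE = {
    ("not_fit", "big"): "object_that_does_not_fit",
    ("not_fit", "small"): "container",
}

# Role fillers a semantic parse of the sentence would supply.
ROLES = {"object_that_does_not_fit": "trophy", "container": "suitcase"}

def resolve(schema, property_word):
    """Map the property in the because-clause to a role, then to a candidate."""
    role = KNOWLEDGE[("not_fit", property_word)]
    return ROLES[role]

print(resolve(SCHEMA, "big"))    # → trophy
print(resolve(SCHEMA, "small"))  # → suitcase
```

Swapping "big" for "small" flips the answer, which is exactly the property that makes Winograd schemas resistant to surface-statistics approaches.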
Reasoning with commonsense knowledge plays an important role in various NLU tasks. Often this commonsense knowledge needs to be extracted separately. In this paper we present our work on automatically extracting a certain type of commonsense knowledge, which resembles the kind that humans have about events and the entities that participate in those events. One example of such knowledge is "IF A bullying B causes T rescued Z THEN (possibly) Z = B". We call this knowledge an event-based conditional commonsense. Our approach involves semantic parsing of natural language sentences using the Knowledge Parser (K-Parser) and extracting the knowledge, if found. We extracted about 19,000 instances of such knowledge from the Open American National Corpus.
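The event-based conditional in the example above can be sketched as a small matching function over event frames. The dictionary encoding of events is a hypothetical stand-in for the K-Parser graph output, not the authors' representation.

```python
# Illustrative sketch of the event-based conditional commonsense rule
# "IF A bullying B causes T rescued Z THEN (possibly) Z = B".
# Event frames here are simplified dicts standing in for K-Parser output.

def apply_rule(cause, effect):
    """If a bullying event causes a rescue, the rescued participant is
    possibly the same entity as the bullying victim."""
    if cause["event"] == "bullying" and effect["event"] == "rescued":
        return {"possibly_equal": (effect["patient"], cause["patient"])}
    return None

cause = {"event": "bullying", "agent": "A", "patient": "B"}
effect = {"event": "rescued", "agent": "T", "patient": "Z"}
print(apply_rule(cause, effect))  # → {'possibly_equal': ('Z', 'B')}
```

The "possibly" hedge matters: the rule yields a defeasible coreference hint, not a hard constraint, so downstream reasoners can override it when other evidence disagrees.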