Commonsense is a challenge not only for representation and reasoning but also for the large-scale knowledge engineering required to capture the breadth of our "everyday" world. One approach to knowledge engineering is to "outsource" the effort to the public through games that generate structured commonsense knowledge from user play. To date, such games have focused on symbolic and textual knowledge. However, an effective commonsense reasoning system will also require spatial and physical reasoning capabilities. In this paper, I propose a tool for gathering commonsense information from ordinary people: a user-friendly 3D sculpting tool for building and annotating models of physical objects and spaces.
Identity relations are at the foundation of many logic-based knowledge representations. We argue that the traditional notion of equality is unsuited for many realistic knowledge representation settings. The classical interpretation of equality is too strong when equality statements are reused outside their original context. On the Semantic Web, equality statements are used to interlink multiple descriptions of the same object via owl:sameAs assertions, and indeed, many practical uses of owl:sameAs are known to violate the formal Leibniz-style semantics. We provide a more flexible semantics for identity by assigning meaning to the subrelations of an identity relation in terms of the predicates that are used in a knowledge base. Using these indiscernibility predicates, we define upper and lower approximations of equality in the style of rough-set theory, resulting in a quality measure for identity relations.
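The rough-set construction in the abstract above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: it assumes entities are described by simple predicate-value profiles, treats an owl:sameAs clique as one identity class, and uses hypothetical entity names. Entities are indiscernible when they agree on all predicates; the lower approximation collects indiscernibility classes fully inside the identity class, the upper approximation collects those that merely overlap it, and their size ratio gives a quality measure.

```python
def ind_classes(profiles):
    """Partition entities into indiscernibility classes:
    entities with identical predicate-value profiles."""
    classes = {}
    for entity, profile in profiles.items():
        classes.setdefault(frozenset(profile.items()), set()).add(entity)
    return list(classes.values())

def approximate(identity_class, profiles):
    """Rough-set lower/upper approximation of one identity class
    with respect to the indiscernibility partition."""
    lower, upper = set(), set()
    for cls in ind_classes(profiles):
        if cls & identity_class:        # class overlaps the identity class
            upper |= cls
            if cls <= identity_class:   # class lies fully inside it
                lower |= cls
    return lower, upper

# Hypothetical toy knowledge base: two descriptions of the same city,
# plus a lookalike that the predicates can discern.
profiles = {
    "dbp:Amsterdam":   {"country": "NL", "type": "city"},
    "geo:Amsterdam":   {"country": "NL", "type": "city"},
    "dbp:AmsterdamNY": {"country": "US", "type": "city"},
}
identity = {"dbp:Amsterdam", "geo:Amsterdam"}  # an owl:sameAs clique

lower, upper = approximate(identity, profiles)
quality = len(lower) / len(upper)  # 1.0 when lower and upper coincide
```

With richer predicate sets the two approximations typically diverge, and the ratio drops below 1, flagging identity links that the available predicates cannot support.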
The Internet of Things (IoT) is a continuously connected network of embedded objects/devices with unique identifiers that exchange data without human intervention using standard communication protocols. It provides encryption, authorization, and identification with device protocols such as MQTT, STOMP, or AMQP to securely move data from one network to another. IoT in connected government helps to deliver better citizen services and provides transparency. It improves employee productivity and yields cost savings. It helps deliver contextual and personalized services to citizens, enhances security, and improves the quality of life.
I propose that the notion of cognitive state be broadened from the current predicate-symbolic, Language-of-Thought framework to a multimodal one in which perceptual and kinesthetic modalities participate in thinking. In contrast to the roles assigned to perception and motor activities as modules external to central cognition in the currently dominant theories in AI and Cognitive Science, in the proposed approach central cognition incorporates parts of the perceptual machinery. I motivate and describe the proposal schematically, and describe the implementation of a bimodal version in which a diagrammatic representation component is added to the cognitive state. The proposal explains our rich multimodal internal experience and can be a key step in the realization of embodied agents. The proposed multimodal cognitive state can significantly enhance an agent's problem solving.
The aim of this track is to bring researchers from the knowledge representation (KR) and natural language processing (NLP) communities together to discuss common "representational" and "reasoning" issues. In addition to the two main challenges, namely expressivity and fast reasoning, representations should also strive to be transparent and friendly. The NLP community has made some progress in processing and handling ambiguity, and the KR community has realized that a lot of knowledge is already "coded" in NL. Researchers on both sides are considering how to benefit from each other's progress and how to take on issues that were left to be solved by the "other" community. The accepted papers and posters in this track discuss issues relating to the Semantic Web, semantic annotation, NL semantics and NLP-based techniques, the KR bottleneck problem, ontologies and NL interpretation, the use of KR and knowledge bases to resolve NL ambiguity, the use of NL to "disambiguate and strengthen" single observations for learning tasks, the use of NL to support the construction of DB/ontology query interfaces, the possibility of using (controlled) NL as a KR, underspecified representations and reasoning, the mapping of "syntax and semantics" to a KR, and the disadvantages of using a KR that is remote from NL semantics.