Methods for Integrating Knowledge with the Three-Weight Optimization Algorithm for Hybrid Cognitive Processing

AAAI Conferences

In this paper we consider optimization as an approach for quickly and flexibly developing hybrid cognitive capabilities that are efficient, scalable, and can exploit knowledge to improve solution speed and quality. In this context, we focus on the Three-Weight Algorithm, which aims to solve general optimization problems. We propose novel methods by which to integrate knowledge with this algorithm to improve expressiveness, efficiency, and scaling, and demonstrate these techniques on two example problems (Sudoku and circle packing).


A Framework for Parallelizing OWL Classification in Description Logic Reasoners

arXiv.org Artificial Intelligence

In this paper we report on a black-box approach to parallelizing existing description logic (DL) reasoners for the Web Ontology Language (OWL). We focus on OWL ontology classification, an important inference service supported by every major OWL/DL reasoner. We propose a flexible parallel framework which can be applied to existing OWL reasoners in order to speed up their classification process. To test its performance, we evaluated our framework by parallelizing major OWL reasoners for concept classification. Compared to the selected black-box reasoners, our results demonstrate that the wall clock time of ontology classification can be improved by one order of magnitude for most real-world ontologies.


Mixed-Initiative Reasoning for Integrated Domain Modeling, Learning and Problem Solving

AAAI Conferences

The main challenge addressed by this research is the knowledge acquisition bottleneck, defined as the difficulty of creating and maintaining a knowledge base that represents a model of the expertise domain that exists in the mind of a domain expert. The mixed-initiative approach we are investigating, called Disciple (Tecuci et al., 1999; Boicu et al., 2000), relies on developing a very capable agent that can collaborate with the domain expert to develop its knowledge base. In this approach both the agent and the expert are accorded responsibility for those elements of knowledge engineering for which they have the most aptitude, and together they form a complete team for knowledge base development. The domain modeling and problem solving approach is based on the task reduction paradigm. The knowledge base to be developed consists of an OKBC-type ontology that defines the terms of the application domain, and a set of plausible task reduction rules expressed with these terms.


Efficiency of GDL Reasoners - IEEE Xplore Document

#artificialintelligence



When is it right and good for an intelligent autonomous vehicle to take over control (and hand it back)?

arXiv.org Artificial Intelligence

There is much debate in machine ethics about the most appropriate way to introduce ethical reasoning capabilities into intelligent autonomous machines. Recent incidents involving autonomous vehicles in which humans have been killed or injured have raised questions about how we ensure that such vehicles have an ethical dimension to their behaviour and are therefore trustworthy. The main problem is that hardwiring such machines with rules not to cause harm or damage is not consistent with the notion of autonomy and intelligence. Moreover, such ethical hardwiring leaves intelligent autonomous machines without any course of action if they encounter situations or dilemmas for which they are not programmed, or where some harm is caused no matter what action is taken. Teaching machines so that they learn ethics may also be problematic, given recent findings in machine learning that machines pick up the prejudices and biases embedded in their learning algorithms or data. This paper describes a fuzzy reasoning approach to machine ethics. The paper shows how it is possible for an ethics architecture to reason about when taking over from a human driver is morally justified. The design behind such an ethical reasoner is also applied to an ethical dilemma resolution case. One major advantage of the approach is that the ethical reasoner can generate its own data for learning moral rules (hence, autometric) and thereby reduce the possibility of picking up human biases and prejudices. The results show that a new type of metric-based ethics appropriate for autonomous intelligent machines is feasible, and that our current, largely qualitative concept of ethical reasoning may need revising if we want to construct future autonomous machines whose reasoning has an ethical dimension, so that they become moral machines.