Expert Systems


From data to knowledge and AI via graphs: Technology to support a knowledge-based economy

ZDNet

These past few months have not been kind to any of us. The ripples caused by the COVID-19 crisis are felt far and wide, and the world's economies have taken a staggering blow. As with most things in life, however, this crisis has also brought some interesting side effects.


On Exploiting Hitting Sets for Model Reconciliation

arXiv.org Artificial Intelligence

In human-aware planning, a planning agent may need to explain to a human user why its plan is optimal. A popular approach to do this is called model reconciliation, where the agent tries to reconcile the differences between its model and the human's model such that the plan is also optimal in the human's model. In this paper, we present a logic-based framework for model reconciliation that extends beyond the realm of planning. More specifically, given a knowledge base $KB_1$ entailing a formula $\varphi$ and a second knowledge base $KB_2$ not entailing it, model reconciliation seeks an explanation, in the form of a cardinality-minimal subset of $KB_1$, whose integration into $KB_2$ makes the entailment possible. Our approach, based on ideas originating in the analysis of inconsistencies, exploits the hitting set duality between minimal correction sets (MCSes) and minimal unsatisfiable sets (MUSes) to identify an appropriate explanation. However, unlike prior work targeting inconsistent formulas, which assumes a single knowledge base, our MCSes and MUSes are computed over two distinct knowledge bases. We conclude with an empirical evaluation of the new approach on planning instances, where it outperforms an existing state-of-the-art solver, and on generic non-planning instances from recent SAT competitions, for which no other solver exists.
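
To make the problem statement concrete, here is a minimal, self-contained sketch of model reconciliation as a search for a cardinality-minimal explanation. It uses a toy Horn-clause entailment check and brute-force subset enumeration rather than the paper's MCS/MUS hitting-set machinery; the example knowledge bases and all names are illustrative.

```python
from itertools import combinations

# A KB is a set of Horn rules (frozenset_of_body_atoms, head_atom);
# facts are rules with an empty body. Toy stand-ins for KB_1 and KB_2.

def entails(kb, goal):
    """Forward chaining: does `kb` entail the atom `goal`?"""
    derived, changed = set(), True
    while changed:
        changed = False
        for body, head in kb:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return goal in derived

def explain(kb1, kb2, goal):
    """Cardinality-minimal E subset of kb1 with kb2 + E |= goal (brute force)."""
    for k in range(len(kb1) + 1):
        for subset in combinations(kb1, k):
            if entails(kb2 | set(subset), goal):
                return set(subset)
    return None  # no explanation exists

kb1 = {(frozenset(), "rain"), (frozenset({"rain"}), "wet")}  # entails "wet"
kb2 = {(frozenset(), "rain")}                                # does not
print(explain(kb1, kb2, "wet"))  # returns the missing rule rain -> wet
```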


Knowledge Graphs in Manufacturing and Production: A Systematic Literature Review

arXiv.org Artificial Intelligence

Knowledge graphs in manufacturing and production aim to make production lines more efficient and flexible with higher-quality output. This makes knowledge graphs attractive for companies pursuing Industry 4.0 goals. However, existing research in the field is quite preliminary, and more research effort is needed on analyzing how knowledge graphs can be applied in manufacturing and production. Therefore, we have conducted a systematic literature review to characterize the state of the art in this field, i.e., by identifying existing research and by identifying gaps and opportunities for further research. To do that, we focused on finding the primary studies in the existing literature, which were classified and analyzed according to four criteria: bibliometric key facts, research type facets, knowledge graph characteristics, and application scenarios. In addition, an evaluation of the primary studies was carried out to gain deeper insights in terms of methodology, empirical evidence, and relevance. As a result, we offer a complete picture of the domain, including the following findings: knowledge fusion is currently the main use case for knowledge graphs; empirical research and industrial application are still largely missing; graph embeddings are not fully exploited; and the technical literature is fast-growing but seems to be still far from its peak.


Neurosymbolic AI: The 3rd Wave

arXiv.org Artificial Intelligence

Current advances in Artificial Intelligence (AI) and Machine Learning (ML) have achieved unprecedented impact across research communities and industry. Nevertheless, concerns about trust, safety, interpretability, and accountability of AI have been raised by influential thinkers. Many have identified the need for well-founded knowledge representation and reasoning to be integrated with deep learning, and for sound explainability. Neural-symbolic computing has been an active area of research for many years, seeking to bring together robust learning in neural networks with reasoning and explainability via symbolic representations for network models. In this paper, we relate recent and early research results in neurosymbolic AI with the objective of identifying the key ingredients of the next wave of AI systems. We focus on research that integrates, in a principled way, neural network-based learning with symbolic knowledge representation and logical reasoning. The insights provided by 20 years of neural-symbolic computing are shown to shed new light on the increasingly prominent role of trust, safety, interpretability, and accountability of AI. We also identify promising directions and challenges for the next decade of AI research from the perspective of neural-symbolic systems.


Learning Contextual Causality from Time-consecutive Images

arXiv.org Artificial Intelligence

Causality knowledge is crucial for many artificial intelligence systems. Conventional text-based causality knowledge acquisition methods typically require laborious and expensive human annotation. As a result, their scale is often limited. Moreover, as no context is provided during annotation, the resulting causality knowledge records (e.g., ConceptNet) typically do not take context into consideration. To explore a more scalable way of acquiring causality knowledge, in this paper we step outside the textual domain and investigate the possibility of learning contextual causality from the visual signal. Compared with pure text-based approaches, learning causality from the visual signal has the following advantages: (1) causality knowledge is commonsense knowledge, which is rarely expressed in text but rich in videos; (2) most events in video are naturally time-ordered, which provides a rich resource for mining causality knowledge; (3) all the objects in a video can be used as context to study the contextual property of causal relations. Specifically, we first propose a high-quality dataset, Vis-Causal, and then conduct experiments to demonstrate that with good language and visual representation models, as well as enough training signal, it is possible to automatically discover meaningful causal knowledge from videos. Further analysis also shows that the contextual property of causal relations indeed exists and may be crucial to take into consideration when using causality knowledge in real applications, and that the visual signal can serve as a good resource for learning such contextual causality.


Deliberative and Conceptual Inference in Service Robots

arXiv.org Artificial Intelligence

Service robots need to reason to support people in daily-life situations. Reasoning is an expensive resource that should be used on demand: whenever the robot's expectations do not match the situation of the world and the execution of the task breaks down, the robot must perform the commonsense daily-life inference cycle, consisting of diagnosing what happened, deciding what to do about it, and inducing and executing a plan, repeating this behavior until the service task can be resumed. Here we examine two strategies to implement this cycle: (1) a pipeline strategy involving abduction, decision-making, and planning, which we call deliberative inference, and (2) the use of the knowledge and preferences stored in the robot's knowledge base, which we call conceptual inference. The former involves an explicit definition of a problem space that is explored through heuristic search; the latter is based on conceptual knowledge, including the human user's preferences, and its representation requires a non-monotonic knowledge-based system. We compare the strengths and limitations of both approaches. We also describe a service-robot conceptual model and architecture capable of supporting the daily-life inference cycle during the execution of a robotic service task. The model is centered on the declarative specification and interpretation of the robot's communication and task structure. We also show the implementation of this framework in the fully autonomous robot Golem-III. The framework is illustrated with two demonstration scenarios.
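
The diagnose-decide-plan loop described above can be pictured with a small, purely illustrative sketch; the toy world model and function names are hypothetical and do not reflect Golem-III's actual architecture.

```python
# Toy daily-life inference cycle: when an expectation is broken, diagnose
# what happened, decide on a recovery goal, plan, execute, and resume.
world = {"cup_at": "table"}               # toy world state
task = [("deliver", "cup", "kitchen")]    # the cup is expected in the kitchen

def diagnose(world, expected_loc):
    # Abduction: explain the expectation mismatch.
    return f"cup is at {world['cup_at']}, not {expected_loc}"

def decide(fault):
    # Decision-making: choose what to do about the fault.
    return "relocate_cup"

def plan(goal, world, target):
    # Induce a recovery plan for the chosen goal.
    return [("pick", "cup", world["cup_at"]), ("place", "cup", target)]

def execute(action, world):
    if action[0] == "place":
        world["cup_at"] = action[2]
    print("executing:", action)

for _, obj, expected_loc in task:
    while world["cup_at"] != expected_loc:   # expectation broken?
        fault = diagnose(world, expected_loc)
        print("diagnosis:", fault)
        for action in plan(decide(fault), world, expected_loc):
            execute(action, world)
    print("task resumed:", obj, "is in the", expected_loc)
```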


A Practical Approach towards Causality Mining in Clinical Text using Active Transfer Learning

arXiv.org Artificial Intelligence

Objective: Causality mining is an active research area that requires the application of state-of-the-art natural language processing techniques. In the healthcare domain, medical experts create clinical text to overcome the limitations of well-defined, schema-driven information systems. The objective of this research work is to create a framework that can convert clinical text into causal knowledge. Methods: A practical approach based on term expansion, phrase generation, BERT-based phrase embedding and semantic matching, semantic enrichment, expert verification, and model evolution has been used to construct a comprehensive causality mining framework. This active transfer learning-based framework, along with its supplementary services, is able to extract and enrich causal relationships and their corresponding entities from clinical text. Results: The multi-model transfer learning technique, when applied over multiple iterations, improves accuracy and recall while keeping precision constant. We also present a comparative analysis of the presented techniques against common alternatives, which demonstrates the correctness of our approach and its ability to capture most causal relationships. Conclusion: The presented framework has provided cutting-edge results in the healthcare domain, and it can be tweaked to provide causality detection in other domains as well. Significance: Although the presented framework is generic enough to be utilized in any domain, healthcare services stand to gain massive benefits because of the voluminous and varied nature of their data. This causal knowledge extraction framework can be used to summarize clinical text, create personas, discover medical knowledge, and provide evidence for clinical decision-making.
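
As a rough illustration of the semantic-matching step, the sketch below scores candidate phrases from text against seed causal cue phrases by embedding cosine similarity. It uses the sentence-transformers library as a stand-in for the paper's BERT-based phrase embeddings; the model name, phrases, and threshold are assumptions made for the example.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model

seeds = ["caused by", "resulted in", "led to"]            # expanded cue terms
candidates = ["was induced by", "is associated with",     # phrases from text
              "triggered", "was observed alongside"]

seed_emb = model.encode(seeds, convert_to_tensor=True)
cand_emb = model.encode(candidates, convert_to_tensor=True)

# Keep candidates whose best similarity to any seed clears a threshold.
best = util.cos_sim(cand_emb, seed_emb).max(dim=1).values
for phrase, score in zip(candidates, best):
    label = "causal cue" if score > 0.5 else "rejected"   # invented cutoff
    print(f"{phrase!r}: {score:.2f} -> {label}")
```

In the paper's active transfer learning loop, matches like these would then pass through semantic enrichment and expert verification before being fed back to evolve the model.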


Scalable and interpretable rule-based link prediction for large heterogeneous knowledge graphs

arXiv.org Artificial Intelligence

Neural embedding-based machine learning models have shown promise for predicting novel links in biomedical knowledge graphs. Unfortunately, their practical utility is diminished by their lack of interpretability. Recently, the fully interpretable, rule-based algorithm AnyBURL yielded highly competitive results on many general-purpose link prediction benchmarks. However, its applicability to large-scale prediction tasks on complex biomedical knowledge bases is limited by long inference times and difficulties with aggregating predictions made by multiple rules. We improve upon AnyBURL by introducing the SAFRAN rule application framework, which aggregates rules through a scalable clustering algorithm. SAFRAN yields new state-of-the-art results for fully interpretable link prediction on the established general-purpose benchmark FB15K-237 and the large-scale biomedical benchmark OpenBioLink. Furthermore, it exceeds the results of multiple established embedding-based algorithms on FB15K-237 and narrows the gap between rule-based and embedding-based algorithms on OpenBioLink. We also show that SAFRAN increases inference speeds by up to two orders of magnitude.
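
The aggregation problem SAFRAN addresses shows up already in a toy example: several rules may fire for the same candidate triple, and their confidences must be combined into a single score. The sketch below contrasts two standard strategies, maximum and noisy-or; the triples, rules, and confidences are invented, and this is not AnyBURL or SAFRAN code.

```python
import math

triples = {("ibuprofen", "similar_to", "aspirin"),
           ("aspirin", "treats", "headache"),
           ("ibuprofen", "studied_for", "headache")}
entities = {s for s, _, _ in triples} | {o for _, _, o in triples}

# Each rule: (confidence, body test for the candidate triple (X, treats, Z)).
rules = [
    # X similar_to Y, Y treats Z  =>  X treats Z
    (0.70, lambda x, z: any((x, "similar_to", y) in triples and
                            (y, "treats", z) in triples for y in entities)),
    # X studied_for Z  =>  X treats Z (a weaker, invented signal)
    (0.30, lambda x, z: (x, "studied_for", z) in triples),
]

def predict(x, z):
    fired = [conf for conf, body in rules if body(x, z)]
    max_agg = max(fired, default=0.0)                    # maximum aggregation
    noisy_or = 1.0 - math.prod(1.0 - c for c in fired)   # noisy-or aggregation
    return max_agg, noisy_or

print(predict("ibuprofen", "headache"))  # (0.7, 0.79): both rules fire
```

Maximum ignores corroborating evidence, while noisy-or overcounts redundant rules; per the abstract, SAFRAN's clustering-based aggregation is designed to combine rule predictions more scalably than either naive strategy.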


The Three Ghosts of Medical AI: Can the Black-Box Present Deliver?

arXiv.org Artificial Intelligence

Our title alludes to the three Christmas ghosts encountered by Ebenezer Scrooge in "A Christmas Carol", who guide Ebenezer through the past, present, and future of Christmas holiday events. Similarly, our article takes readers on a journey through the past, present, and future of medical AI. In doing so, we focus on the crux of modern machine learning: the reliance on powerful but intrinsically opaque models. When applied to the healthcare domain, these models fail to meet the needs for transparency that their clinician and patient end-users require. We review the implications of this failure and argue that opaque models (1) lack quality assurance, (2) fail to elicit trust, and (3) restrict physician-patient dialogue. We then discuss how upholding transparency in all aspects of model design and model validation can help ensure the reliability of medical AI.


Deep Learning-Based Bearing Fault Diagnosis Method for Embedded Systems

#artificialintelligence

Bearing elements are vital in induction motors; therefore, early fault detection of rolling-element bearings is essential in machine health monitoring. Leveraging time-frequency-domain fault feature representation techniques for nonstationary signals and the advent of convolutional neural networks (CNNs), bearing fault diagnosis has achieved high accuracy, even at variable rotational speeds. However, the computation and memory resources required by CNN-based fault diagnosis methods make them difficult to deploy on embedded systems, which are essential in real industrial settings because of their portability and low cost. This paper proposes a novel approach for establishing a CNN-based bearing fault diagnosis pipeline on embedded devices using acoustic emission signals, which significantly reduces the computational cost of classifying bearing faults. A lightweight state-of-the-art CNN model, MobileNet-v2, is established via pruning to optimize the required system resources. The input image size, which significantly affects the consumption of system resources, is decreased by our proposed signal representation method, based on the constant-Q nonstationary Gabor transform and signal decomposition using ensemble empirical mode decomposition with a CNN-based method for selecting intrinsic mode functions. According to our experimental results, the proposed method classifies bearing faults with accuracy of up to 99.58% and less computational overhead than previous deep learning-based fault diagnosis methods.
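
For orientation, here is a hedged sketch of the general shape of such a pipeline: convert an acoustic emission signal into a constant-Q time-frequency image and classify it with MobileNet-v2. librosa's constant-Q transform stands in for the paper's constant-Q nonstationary Gabor transform, the EEMD/IMF-selection and pruning steps are omitted, and the file name and class count are invented.

```python
import librosa
import numpy as np
import torch
import torchvision

# Hypothetical recording of acoustic emission from a bearing test rig.
signal, sr = librosa.load("bearing_ae.wav", sr=None)

# Constant-Q magnitude spectrogram, log-scaled and normalized to [0, 1].
cqt = np.abs(librosa.cqt(signal, sr=sr))
img = librosa.amplitude_to_db(cqt, ref=np.max)
img = (img - img.min()) / (img.max() - img.min())

# Shrink the image (input size drives resource consumption) and make it
# 3-channel for the ImageNet-style stem of MobileNet-v2.
x = torch.tensor(img, dtype=torch.float32)[None, None]   # (1, 1, F, T)
x = torch.nn.functional.interpolate(x, size=(96, 96))
x = x.repeat(1, 3, 1, 1)

# Untrained network for illustration; in practice the weights would come
# from training on labeled fault data.
model = torchvision.models.mobilenet_v2(num_classes=4)   # e.g., 4 fault types
model.eval()
with torch.no_grad():
    fault_class = model(x).argmax(dim=1).item()
print("predicted fault class:", fault_class)
```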