Expert Systems


g-f(2)144 THE BIG PICTURE OF THE DIGITAL AGE, Accenture, Technology Vision 2021. Leaders Wanted. Masters of Change at a Moment of Truth.

#artificialintelligence

I joined the Technology Research & Development team from Advanced Technology & Architecture, where I was the global lead for Emerging Technology. I have held several global leadership roles within our technology group, covering Application Portfolio Optimization and SOA/Integration Architecture. I have worked at the leading edge of technology, notably in voice recognition, knowledge-based systems, and neural networks.


The Inescapable Duality of Data and Knowledge

arXiv.org Artificial Intelligence

We will discuss how, over the last 30 to 50 years, systems that rely only on data have been limited to success on narrowly scoped tasks, and how knowledge has been critical in developing smarter, more effective systems. We will draw a parallel with the role of knowledge and experience in human intelligence, based on cognitive science. We will end with the recent interest in neuro-symbolic or hybrid AI systems, in which knowledge is the critical enabler for combining data-intensive statistical AI with symbolic AI, resulting in more capable systems that support more human-like intelligence.
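For illustration only, here is a minimal sketch of the hybrid pattern the abstract points to: a statistical model proposes an answer and a small symbolic knowledge base checks it against known constraints. The classifier, facts, and labels are assumptions made up for the example, not taken from the paper.

```python
# Minimal, illustrative neuro-symbolic pattern (not from the paper):
# a statistical model proposes a label, and a small symbolic knowledge
# base either confirms it or vetoes it based on known constraints.

# Hypothetical "statistical" component: scores from some trained model.
def statistical_classifier(features):
    # Stand-in for a neural/statistical model's class probabilities.
    return {"bird": 0.7, "bat": 0.3}

# Hypothetical symbolic knowledge: simple facts about each label.
KNOWLEDGE = {
    "bird": {"has_feathers": True},
    "bat":  {"has_feathers": False},
}

def neuro_symbolic_predict(features, observed_facts):
    scores = statistical_classifier(features)
    # Keep only labels whose known properties are consistent with observations.
    consistent = {
        label: p for label, p in scores.items()
        if all(KNOWLEDGE[label].get(k) == v for k, v in observed_facts.items())
    }
    candidates = consistent or scores
    return max(candidates, key=candidates.get)

print(neuro_symbolic_predict({"wingspan_cm": 20}, {"has_feathers": False}))  # -> "bat"
```

Here the symbolic constraint overrides the statistically favored label, which is the kind of knowledge-driven correction the abstract argues for.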


Actionable Cognitive Twins for Decision Making in Manufacturing

arXiv.org Artificial Intelligence

Actionable Cognitive Twins are the next generation of Digital Twins, enhanced with cognitive capabilities through a knowledge graph and artificial intelligence models that provide insights and decision-making options to the users. The knowledge graph describes the domain-specific knowledge regarding entities and interrelationships related to a manufacturing setting. It also contains information on possible decision-making options that can assist decision-makers, such as planners or logisticians. In this paper, we propose a knowledge graph modeling approach to construct actionable cognitive twins for capturing specific knowledge related to demand forecasting and production planning in a manufacturing plant. The knowledge graph provides semantic descriptions and contextualization of the production lines and processes, including data identification and the simulation or artificial intelligence algorithms and forecasts used to support them. Such semantics provide ground for inferencing, relating different knowledge types: creative, deductive, definitional, and inductive. To develop knowledge graph models that describe the use case completely, a systems thinking approach is proposed to design and verify the ontology, develop the knowledge graph, and build an actionable cognitive twin. Finally, we evaluate our approach in two use cases developed for a European original equipment manufacturer in the automotive industry, as part of the European Horizon 2020 project FACTLOG.
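As a rough illustration of the kind of knowledge-graph encoding described above, the sketch below builds a tiny graph with rdflib. The namespace, entities (production line, process, forecast, decision option), and property names are hypothetical placeholders, not the FACTLOG ontology.

```python
# A minimal sketch (not the FACTLOG ontology) of a production-line
# knowledge graph encoded with rdflib; all names are hypothetical.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/factory#")
g = Graph()
g.bind("ex", EX)

# Domain entities: a production line, a process it runs, and a forecast feeding it.
g.add((EX.LineA, RDF.type, EX.ProductionLine))
g.add((EX.Assembly1, RDF.type, EX.Process))
g.add((EX.LineA, EX.runsProcess, EX.Assembly1))
g.add((EX.DemandForecastQ3, RDF.type, EX.Forecast))
g.add((EX.DemandForecastQ3, EX.predictsDemandFor, EX.LineA))
g.add((EX.DemandForecastQ3, EX.producedBy, Literal("gradient-boosting model")))

# A decision-making option attached to the line, as the abstract describes.
g.add((EX.AddExtraShift, RDF.type, EX.DecisionOption))
g.add((EX.LineA, EX.hasDecisionOption, EX.AddExtraShift))

# Query: which decision options are available for lines that have a demand forecast?
q = """
SELECT ?line ?option WHERE {
    ?forecast ex:predictsDemandFor ?line .
    ?line ex:hasDecisionOption ?option .
}
"""
for row in g.query(q, initNs={"ex": EX}):
    print(row.line, row.option)
```

A query of this kind is one simple way the contextualized semantics described in the abstract can be turned into concrete decision options for a planner.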


Hardware Acceleration of Explainable Machine Learning using Tensor Processing Units

arXiv.org Artificial Intelligence

Machine learning (ML) is successful in achieving human-level performance in various fields. However, it lacks the ability to explain an outcome due to its black-box nature. While existing explainable ML is promising, almost all of these methods focus on formulating interpretability as an optimization problem. Such a mapping leads to numerous iterations of time-consuming complex computations, which limits their applicability in real-time applications. In this paper, we propose a novel framework for accelerating explainable ML using Tensor Processing Units (TPUs). The proposed framework exploits the synergy between matrix convolution and the Fourier transform, and takes full advantage of TPUs' natural ability to accelerate matrix computations. Specifically, this paper makes three important contributions. (1) To the best of our knowledge, our proposed work is the first attempt at enabling hardware acceleration of explainable ML using TPUs. (2) Our proposed approach is applicable across a wide variety of ML algorithms, and effective utilization of TPU-based acceleration can lead to real-time outcome interpretation. (3) Extensive experimental results demonstrate that our proposed approach can provide an order-of-magnitude speedup in both classification time (25x on average) and interpretation time (13x on average) compared to state-of-the-art techniques.
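The "synergy between matrix convolution and the Fourier transform" rests on the convolution theorem. The snippet below is a generic NumPy demonstration of that identity (not the authors' TPU implementation): a circular convolution equals an element-wise product in the frequency domain, which is what makes the computation map well onto matrix-oriented accelerators.

```python
# Generic convolution-theorem demo in NumPy (illustrative, not the paper's code):
# circular convolution in the time domain == element-wise product in the
# frequency domain.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
k = rng.standard_normal(8)

# Direct circular convolution.
direct = np.array([sum(x[j] * k[(i - j) % 8] for j in range(8)) for i in range(8)])

# Same result via FFT: conv(x, k) = IFFT(FFT(x) * FFT(k)).
via_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

assert np.allclose(direct, via_fft)
print(direct)
```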


Semantic Contextual Reasoning to Provide Human Behavior

arXiv.org Artificial Intelligence

In recent years, the world has witnessed various primitives pertaining to the complexity of human behavior. Identifying an event in the presence of insufficient, incomplete, or tentative premises, along with constraints on resources such as time, data, and memory, is a vital aspect of an intelligent system. Data explosion presents one of the most challenging research issues for intelligent systems: how to optimally represent and store this heterogeneous and voluminous data semantically in order to model human behavior. There is a requirement for intelligent yet personalized modeling of human behavior, subject to constraints on resources and the priority of the user. Knowledge, when represented in the form of an ontology, yields an intelligent response to a query posed by users, but it does not offer content in accordance with the user context. To this end, we propose a model to quantify the user context and provide semantic contextual reasoning. A diagnostic belief algorithm (DBA) is also presented that identifies a given event and computes the confidence of the decision as a function of available resources, premises, exceptions, and desired specificity. We conduct an empirical study in the domain of day-to-day routine queries, and the experimental results show that both the answer to a query and its confidence vary with user context.
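The paper's DBA is not reproduced here, but a purely hypothetical sketch of a confidence score that, as described, increases with satisfied premises and available resources and decreases with triggered exceptions and demanded specificity might look like the following. The function name, inputs, and weighting choices are all assumptions for illustration.

```python
# Hypothetical confidence computation in the spirit of the abstract
# (not the paper's DBA): confidence grows with satisfied premises and
# available resources, shrinks with triggered exceptions and higher
# desired specificity.
def decision_confidence(premises_satisfied, premises_total,
                        exceptions_triggered, exceptions_total,
                        resource_availability, desired_specificity):
    """resource_availability and desired_specificity are in [0, 1]."""
    premise_support = premises_satisfied / max(premises_total, 1)
    exception_penalty = exceptions_triggered / max(exceptions_total, 1)
    raw = premise_support * resource_availability * (1.0 - exception_penalty)
    # A more specific answer demands stronger evidence for the same confidence.
    return raw ** (1.0 + desired_specificity)

# Example: 3 of 4 premises hold, 1 of 5 exceptions triggered,
# 80% of resources available, moderately specific answer requested.
print(round(decision_confidence(3, 4, 1, 5, 0.8, 0.5), 3))
```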


White Paper Machine Learning in Certified Systems

arXiv.org Artificial Intelligence

Machine Learning (ML) seems to be one of the most promising solutions to automate, partially or completely, some of the complex tasks currently performed by humans, such as driving vehicles, recognizing voice, etc. It is also an opportunity to implement and embed new capabilities that are out of reach of classical implementation techniques. However, ML techniques introduce new potential risks. Therefore, they have only been applied in systems where their benefits are considered worth the increase in risk. In practice, ML techniques raise multiple challenges that could prevent their use in systems subject to certification constraints. But what are the actual challenges? Can they be overcome by selecting appropriate ML techniques, or by adopting new engineering or certification practices? These are some of the questions addressed by the ML Certification 3 Workgroup (WG) set up by the Institut de Recherche Technologique Saint Exupéry de Toulouse (IRT), as part of the DEEL Project.


Why Artificial Intelligence Might Not Win a War

#artificialintelligence

It's widely presumed that artificial intelligence (AI) will play a dominant role in future wars. However, the way the future unfolds might be nothing like that. AI developments, increasingly led by machine-learning-enabled technologies, seem to be heading in another direction. AI is a fairly plastic term. Its meaning has shifted over time, reflecting changes both in our understanding of what intelligence is and in the technology available to mimic it.


Evaluation of a Bi-Directional Methodology for Automated Assessment of Compliance to Continuous Application of Clinical Guidelines, in the Type 2 Diabetes-Management Domain

arXiv.org Artificial Intelligence

Evidence-based recommendations are often published in the form of clinical guidelines and protocols, as documents intended to be used by clinicians to provide state-of-the-art care. However, as demonstrated repeatedly in multiple clinical domains, clinicians often do not sufficiently adhere to the guidelines in a manner sensitive to the context of each patient. Such gaps are important to detect; fast, large-scale detection might lead to specific adjustments, usually of the clinicians' management patterns, but possibly of the guidelines themselves. In this study, we evaluated the DiscovErr system, in which we had implemented a new methodology for assessing compliance to continuous application of clinical guidelines. This methodology is based on a bi-directional search, from the objective of the guideline to the longitudinal multivariate patient data, and vice versa. The evaluation of DiscovErr was performed in the type 2 diabetes management domain, by comparing its performance to that of a panel of three clinicians: two experts in diabetes-patient management and a senior family practitioner highly experienced in diabetes treatment. The system and the three experts commented on the management of 10 patients who were randomly selected, before the evaluation, from a database containing longitudinal records of 2,000 type 2 diabetes patients. On average, each patient record spanned 5.23 years; the overall data of the selected patients included 1,584 time-oriented medical transactions (laboratory tests or medication administrations). We assessed the correctness (i.e.


Quick Learning Mechanism with Cross-Domain Adaptation for Intelligent Fault Diagnosis

arXiv.org Machine Learning

This paper presents a quick learning mechanism for intelligent fault diagnosis of rotating machines operating under changeable working conditions. Since real-case machines in industry run under different operating conditions, a deep learning model trained on a laboratory-case machine fails to perform well for fault diagnosis using data recorded from real-case machines. This poses the need to train a new diagnostic model for the real-case machine under every new working condition. Therefore, there is a need for a mechanism that can quickly transform the existing diagnostic model for machines operating under different conditions. We propose a quick learning method with Net2Net transformation followed by a fine-tuning step that minimizes the maximum mean discrepancy between the new data and the previous data. This transformation enables us to create a new network with any architecture that is almost ready to be used for the new dataset. The effectiveness of the proposed fault diagnosis method has been demonstrated on the CWRU dataset, the IMS bearing dataset, and the Paderborn University dataset. We show that the diagnostic model trained on CWRU data at zero load can be used to quickly train another diagnostic model for the CWRU data at different loads, and also for the IMS dataset. Using the dataset provided by Paderborn University, we validate that a diagnostic model trained on the artificially damaged fault dataset can be used to quickly train another model for the real-damage dataset.
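The maximum mean discrepancy minimized during fine-tuning is a standard quantity. The snippet below shows a generic biased MMD estimator with an RBF kernel in NumPy; it is illustrative only, not the paper's training code, and the kernel bandwidth and toy data are assumptions.

```python
# Generic biased estimate of squared MMD with an RBF kernel between a
# "source" and "target" sample (illustrative, not the paper's code).
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise squared Euclidean distances between rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between samples x and y."""
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2 * rbf_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(100, 8))   # e.g. features at zero load
target = rng.normal(0.5, 1.0, size=(100, 8))   # e.g. features at a new load
print(mmd2(source, target))                    # larger when the domains differ
```

Driving this quantity toward zero during fine-tuning is what aligns the transformed network's feature distribution with the new operating condition.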


Knowledge-based Extraction of Cause-Effect Relations from Biomedical Text

#artificialintelligence

We propose a knowledge-based approach for extraction of Cause-Effect (CE) relations from biomedical text. Our approach combines an unsupervised machine learning technique to discover causal triggers with a set of high-precision linguistic rules to identify the cause/effect arguments of these causal triggers. We evaluate our approach on a corpus of 58,761 Leukaemia-related PubMed abstracts consisting of 568,528 sentences. We extract 152,655 CE triplets from this corpus, where each triplet consists of a cause phrase, an effect phrase, and a causal trigger. Compared to the existing knowledge base SemMedDB (Kilicoglu et al., 2012), the number of extractions is almost twice as large.
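As a toy illustration of the trigger-plus-rule idea (the actual system discovers triggers with unsupervised learning and applies richer linguistic rules over parsed biomedical text), a naive version with a hand-picked trigger list and a single surface pattern might look like this; the trigger list and example sentence are assumptions.

```python
# Toy trigger-plus-rule cause-effect extraction (illustrative only; the
# paper's triggers are learned and its rules operate on parsed text).
import re

CAUSAL_TRIGGERS = ["induces", "causes", "leads to", "results in"]

def extract_ce_triplets(sentence):
    triplets = []
    for trigger in CAUSAL_TRIGGERS:
        # Naive high-precision rule: text before the trigger is the cause
        # phrase, text after it is the effect phrase.
        m = re.search(rf"(.+?)\s+{re.escape(trigger)}\s+(.+)", sentence)
        if m:
            triplets.append((m.group(1).strip(), trigger, m.group(2).rstrip(". ")))
    return triplets

print(extract_ce_triplets(
    "Imatinib treatment induces apoptosis in BCR-ABL positive leukaemia cells."
))
# [('Imatinib treatment', 'induces', 'apoptosis in BCR-ABL positive leukaemia cells')]
```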