Expert Systems: Overviews


A literature review on current approaches and applications of fuzzy expert systems

arXiv.org Artificial Intelligence

The main purpose of this study is to identify research trends in publications on the applications of fuzzy expert and knowledge-based systems, based on a classification of studies from the last decade. The present investigation covers 60 articles from related scholarly journals, international conference proceedings, and several major literature review papers. Our results reveal an upward trend in the number of recent publications, which is evidence of the growing popularity of the various applications of fuzzy expert systems. This rise in publications is mainly in medical neuro-fuzzy and fuzzy expert systems. Another important observation is that many modern industrial applications are being extended with knowledge-based systems built by extracting experts' knowledge.


Fuzzy Knowledge-Based Architecture for Learning and Interaction in Social Robots

arXiv.org Artificial Intelligence

In this paper, we introduce an extension of our previously presented cognitive-based emotion model [27], [28] and [30], in which we enhance the knowledge-based emotion unit of the architecture by embedding a fuzzy rule-based system in it. The model uses the dependencies among cognitive parameters and their corresponding weights to regulate the robot's behavior and fuses the behavior data to reach a final decision in the robot's interaction with the environment. Using this fuzzy system, our previous model can handle linguistic parameters, giving better control and generating understandable, flexible behaviors in the robots. We implement our model on an assistive healthcare robot, named Robot Nurse Assistant (RNA), and test it with human subjects. Our model records all the emotion states and essential information based on its predefined rules and learning system. Our results show that our robot interacts with patients in a reasonable, faithful way under conditions defined by the rules. This work has the potential to provide better on-demand service for clinical experts to monitor patients' emotion states and help them make better decisions accordingly.
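As a rough illustration of how a weighted fuzzy rule base can fuse parameter memberships into a single behavior decision, here is a minimal hand-rolled sketch; the parameter names, membership functions, and rule weights are invented for illustration and are not the authors' actual model.

```python
# Minimal sketch of a weighted fuzzy rule base (hypothetical parameters).

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer(stress, fatigue):
    # Fuzzify crisp cognitive parameters into linguistic terms.
    stress_high = tri(stress, 0.4, 0.8, 1.0)
    fatigue_high = tri(fatigue, 0.5, 0.9, 1.0)
    # Weighted rules: IF stress is high THEN comfort behavior (w=0.7), etc.
    rules = [(stress_high, 0.7), (fatigue_high, 0.3)]
    # Fuse rule activations by their weights into one behavior intensity.
    num = sum(mu * w for mu, w in rules)
    den = sum(w for _, w in rules)
    return num / den if den else 0.0

print(infer(stress=0.7, fatigue=0.6))  # 0.6
```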


A Guide to Rules Engines for IoT: Forward-Chaining Engines - DZone IoT

#artificialintelligence

An inference engine using forward chaining applies a set of rules and facts to deduce conclusions, searching the rules until it finds one where the IF clause is known to be true. The process of matching new or existing facts against rules is called pattern matching, which forward chaining inference engines perform through various algorithms, such as Linear, Rete, Treat, Leaps, etc. When a condition is found to be TRUE, the engine executes the THEN clause, which results in new information being added to its dataset. In other words, the engine starts with a number of facts and applies rules to derive all possible conclusions from those facts. This is where the name "forward chaining" comes from -- the fact that the inference engine starts with the data and reasons its way forward to the answer, as opposed to backward chaining, which works the other way around.
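For concreteness, here is a minimal forward-chaining loop using naive linear matching (production engines typically use Rete-style networks instead); the facts and rules are hypothetical IoT examples.

```python
# Minimal forward-chaining sketch. Each rule pairs an IF clause
# (a set of required facts) with a THEN clause (a fact to add).

rules = [
    ({"temperature_high", "humidity_low"}, "fire_risk"),
    ({"fire_risk", "sensor_armed"}, "raise_alarm"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose IF clause matches the known facts
    until no new facts can be derived (naive linear matching)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition <= facts and conclusion not in facts:
                facts.add(conclusion)  # THEN clause adds new information
                changed = True
    return facts

print(forward_chain({"temperature_high", "humidity_low", "sensor_armed"}, rules))
# Derives 'fire_risk', then chains forward to 'raise_alarm'.
```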


Lie on the Fly: Strategic Voting in an Iterative Preference Elicitation Process

arXiv.org Artificial Intelligence

A voting center is in charge of collecting and aggregating voter preferences. In an iterative process, the center sends comparison queries to voters, requesting them to submit their preference between two items. Voters might discuss the candidates among themselves, figuring out during the elicitation process which candidates stand a chance of winning and which do not. Consequently, strategic voters might attempt to manipulate the outcome by deviating from their true preferences and submitting a different response in order to maximize their own gain. We provide a practical algorithm for strategic voters which computes the best manipulative vote, maximizing the voter's selfish outcome when such a vote exists. We also provide a careful voting center which is aware of the possible manipulations and avoids manipulative queries when possible. In an empirical study on four real-world domains, we show that in practice manipulation occurs in a low percentage of settings and has a low impact on the final outcome. The careful voting center reduces manipulation even further, thus allowing for a non-distorted group decision process to take place. We thus provide a core technology study of a voting process that can be adopted in opinion or information aggregation systems and in crowdsourcing applications, e.g., peer grading in Massive Open Online Courses (MOOCs).
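A much-simplified sketch of the strategic response to a single pairwise query follows; the paper's algorithm optimizes over the whole iterative process, which this toy rule does not attempt.

```python
# Illustrative single-query strategy: compromise toward a viable frontrunner.

def respond(query, true_ranking, frontrunners):
    """Return the reported winner for query = (a, b). A truthful voter
    reports the item it ranks higher; a strategic voter may instead
    report its favorite frontrunner as preferred."""
    a, b = query
    truthful = a if true_ranking.index(a) < true_ranking.index(b) else b
    # Favorite among the candidates that still stand a chance of winning.
    favorite = min(frontrunners, key=true_ranking.index)
    if favorite in (a, b) and truthful != favorite:
        return favorite  # lie: boost the best viable candidate
    return truthful

ranking = ["c", "a", "b", "d"]  # the voter's true preference order
print(respond(("a", "b"), ranking, frontrunners={"a", "b"}))  # truthful: 'a'
print(respond(("c", "a"), ranking, frontrunners={"a"}))
# 'c' is not viable, so the voter reports 'a' ahead of its true favorite.
```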


KALM: A Rule-based Approach for Knowledge Authoring and Question Answering

arXiv.org Artificial Intelligence

Knowledge representation and reasoning (KRR) is one of the key areas in the field of artificial intelligence (AI). It is intended to represent world knowledge in formal languages (e.g., Prolog, SPARQL) and then enable expert systems to perform querying and inference tasks. Currently, constructing large-scale, high-quality knowledge bases (KBs) is hindered by the fact that the construction process requires many qualified knowledge engineers who not only understand the domain-specific knowledge but also have sufficient skill in knowledge representation. Unfortunately, qualified knowledge engineers are in short supply. Therefore, it would be very useful to build a tool that allows the user to construct and query the KB simply via text. Although a number of systems have been developed for knowledge extraction and question answering, they mainly fail because they do not achieve sufficiently high accuracy, whereas KRR is highly sensitive to erroneous data. In this thesis proposal, I present Knowledge Authoring Logic Machine (KALM), a rule-based system which allows the user to author knowledge and query the KB in text. The experimental results show that KALM achieves superior accuracy in knowledge authoring and question answering as compared to state-of-the-art systems.
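To make the authoring-and-querying idea concrete, here is a toy sketch of turning restricted text into logical facts and matching queries against them; this is a stand-in for the concept only, not KALM's machinery (KALM uses controlled natural language and logical frames, none of which this reproduces).

```python
# Toy text-based knowledge authoring and querying (hypothetical patterns).
import re

kb = set()

def author(sentence):
    """Parse 'X is a Y' / 'X works for Y' patterns into logical facts."""
    m = re.fullmatch(r"(\w+) is a (\w+)", sentence)
    if m:
        kb.add(("isa", m.group(1), m.group(2)))
    m = re.fullmatch(r"(\w+) works for (\w+)", sentence)
    if m:
        kb.add(("works_for", m.group(1), m.group(2)))

def query(pred, *args):
    """Match a query like ('works_for', '?x', 'IBM') against the KB;
    arguments starting with '?' are variables."""
    return [fact for fact in kb
            if fact[0] == pred
            and all(q.startswith("?") or q == f
                    for q, f in zip(args, fact[1:]))]

author("Mary is a professor")
author("Mary works for IBM")
print(query("works_for", "?x", "IBM"))  # [('works_for', 'Mary', 'IBM')]
```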


LF-PPL: A Low-Level First Order Probabilistic Programming Language for Non-Differentiable Models

arXiv.org Machine Learning

We develop a new Low-level, First-order Probabilistic Programming Language (LF-PPL) suited for models containing a mix of continuous, discrete, and/or piecewise-continuous variables. The key strength of this language and its compilation scheme is its ability to automatically distinguish the parameters with respect to which the density function is discontinuous, while also providing runtime checks for boundary crossings. This enables the introduction of new inference engines that are able to exploit gradient information while remaining efficient for models which are not everywhere differentiable. We demonstrate this ability by incorporating a discontinuous Hamiltonian Monte Carlo (DHMC) inference engine that delivers automated and efficient inference for non-differentiable models. Our system is backed by a mathematical formalism which ensures that any model expressed in this language has a density with measure-zero discontinuities, maintaining the validity of the inference engine.
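A minimal sketch of the kind of piecewise-continuous density and runtime boundary check the language targets is below; the density and check are hand-written here, whereas LF-PPL derives this information automatically from the program.

```python
# Hypothetical piecewise log-density with one discontinuity at x = 0.

def log_density(x):
    """Log-density that jumps at the boundary x = 0."""
    return -0.5 * x * x if x < 0 else -0.5 * x * x - 1.0

def boundary_crossed(x_old, x_new, boundary=0.0):
    """Runtime check: did a proposed move step across the discontinuity?"""
    return (x_old - boundary) * (x_new - boundary) < 0

x, step = -0.3, 0.5
x_new = x + step
if boundary_crossed(x, x_new):
    # A DHMC-style integrator handles the jump at the boundary
    # (e.g., refracting or reflecting) instead of stepping through it.
    jump = log_density(0.0) - log_density(-1e-12)
    print("crossed discontinuity; log-density jump:", jump)
```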


Generating Natural Language Explanations for Visual Question Answering using Scene Graphs and Visual Attention

arXiv.org Artificial Intelligence

In this paper, we present a novel approach to the task of eXplainable Question Answering (XQA), i.e., generating natural language (NL) explanations for the Visual Question Answering (VQA) problem. We generate NL explanations comprising the evidence supporting the answer to a question asked about an image, using two sources of information: (a) annotations of entities in the image (e.g., object labels, region descriptions, relation phrases) generated from its scene graph, and (b) the attention map generated by a VQA model when answering the question. We show how combining the visual attention map with the NL representation of relevant scene-graph entities, carefully selected using a language model, can give reasonable textual explanations without the need for any additionally collected data (explanation captions, etc.). We run our algorithms on the Visual Genome (VG) dataset and conduct internal user studies to demonstrate the efficacy of our approach over a strong baseline. We have also released a live web demo showcasing our VQA and textual explanation generation using scene graphs and visual attention.
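One step described above can be sketched simply: score scene-graph entities by how much VQA attention mass falls inside their regions, then keep the most attended ones. The region format and selection rule below are assumptions for illustration, not the paper's code.

```python
# Rank scene-graph entities by attention mass inside their bounding boxes.
import numpy as np

def attention_mass(attn, box):
    """Sum normalized attention inside a bounding box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return float(attn[y0:y1, x0:x1].sum())

attn = np.zeros((8, 8))
attn[2:5, 2:5] = 1.0
attn /= attn.sum()  # normalized attention map from a VQA model

entities = {"dog": (2, 2, 5, 5), "tree": (6, 0, 8, 3)}  # hypothetical regions
scores = {name: attention_mass(attn, box) for name, box in entities.items()}
best = max(scores, key=scores.get)  # entity most attended to
print(best, scores)  # 'dog' dominates here, so it anchors the explanation
```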


Towards a more efficient use of process and product traceability data for continuous improvement of industrial performances

arXiv.org Artificial Intelligence

Nowadays, all industrial sectors are increasingly faced with an explosion in the amount of data, which raises the question of how to use this large amount of data efficiently. In this research work, we are concerned with process and product traceability data. In some sectors (e.g., pharmaceutical and agro-food), the collection and storage of these data are required. Beyond this regulatory and/or contractual constraint, we are interested in the use of these data for continuous improvement of industrial performance. Two research axes were identified: product recall and responsiveness to production hazards. For the first axis, a procedure for product recall exploiting traceability data will be proposed. For the second axis, the development of detection and prognosis functions combining process and product data is envisaged.


Research for Practice

Communications of the ACM

This installment of Research for Practice features a curated selection from Alex Ratner and Chris Ré, who provide an overview of recent developments in Knowledge Base Construction (KBC). While knowledge bases have a long history dating to the expert systems of the 1970s, recent advances in machine learning have led to a knowledge base renaissance, with knowledge bases now powering major product functionality including Google Assistant, Amazon Alexa, Apple Siri, and Wolfram Alpha. Ratner and Ré's selections highlight key considerations in the modern KBC process, from interfaces that extract knowledge from domain experts to algorithms and representations that transfer knowledge across tasks.


Towards the Development of a Rule-based Drought Early Warning Expert Systems using Indigenous Knowledge

arXiv.org Artificial Intelligence

Drought forecasting and prediction is a complicated process due to the complexity and scale of the environmental parameters involved; hence, it requires a high level of expertise. In this paper, we describe the research and development of a rule-based drought early warning expert system (RB-DEWES) for forecasting drought using local indigenous knowledge obtained from domain experts. The system generates inferences using its rule set and provides drought advisory information with an attributed certainty factor (CF) based on the user's input. The system is believed to be the first expert system for drought forecasting to use local indigenous knowledge on drought. The architecture and components of the system, such as the knowledge base, the JESS inference engine, and the model base, and their functions are presented. The intricate complexity of drought has always been a stumbling block for drought forecasting and prediction systems [1]. This is mostly due to the web of environmental events (such as climate variability) that directly or indirectly triggers this phenomenon. There are six broad categories of drought: meteorological, climatological, atmospheric, agricultural, hydrologic, and water drought [1]. Nevertheless, irrespective of the category, there is a consensus among scientists that drought is a disastrous condition of lack of moisture caused by a deficit in precipitation in a certain geographical region over some period of time [2].
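As a sketch of how certainty factors of the kind the abstract describes are typically propagated and combined (MYCIN-style), consider the following; the rule contents and CF values here are made up, since the actual RB-DEWES rule base is not given in the abstract.

```python
# MYCIN-style certainty-factor arithmetic (hypothetical drought rules).

def rule_cf(evidence_cf, rule_strength):
    """CF contributed by one rule: evidence CF scaled by the rule's CF."""
    return evidence_cf * rule_strength

def combine(cf1, cf2):
    """Combine two positive CFs for the same conclusion (MYCIN formula)."""
    return cf1 + cf2 * (1 - cf1)

# Two rules both suggesting 'drought_likely' from indigenous indicators:
cf_a = rule_cf(evidence_cf=0.9, rule_strength=0.7)  # e.g., early leaf fall
cf_b = rule_cf(evidence_cf=0.6, rule_strength=0.8)  # e.g., dry wind pattern
print(combine(cf_a, cf_b))  # 0.63 + 0.48 * (1 - 0.63) = 0.8076
```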