"Today's expert systems deal with domains of narrow specialization. For expert systems to perform competently over a broad range of tasks, they will have to be given very much more knowledge. ... The next generation of expert systems ... will require large knowledge bases. How will we get them?"
– Edward Feigenbaum, Pamela McCorduck, H. Penny Nii, from The Rise of the Expert Company. New York: Times Books, 1988.
Machine learning gives us the ability to train a model that maps data rows to labels in such a way that similar data rows are mapped to the same or similar labels. For example, suppose we are building a spam filter for email messages. We have many email messages, some marked as SPAM and some as INBOX. We can build a model that learns to identify SPAM messages: the messages it marks as SPAM will be similar, in some way, to those already marked as SPAM. The concept of similarity is vitally important for machine learning. In the real world, similarity is highly specific to the subject matter, and it depends on our knowledge.
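The similarity idea above can be sketched as a nearest-neighbor classifier over bag-of-words vectors. This is a minimal illustration, not a production filter; the training messages, labels, and the choice of cosine similarity are all illustrative assumptions.

```python
# Minimal sketch: label a message with the label of its most similar
# training message, using bag-of-words cosine similarity.
from collections import Counter
import math

def vectorize(text):
    """Bag-of-words term counts for a message."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Labeled training messages (hypothetical examples).
train = [
    ("win a free prize now", "SPAM"),
    ("claim your free money prize", "SPAM"),
    ("meeting agenda for tomorrow", "INBOX"),
    ("please review the project report", "INBOX"),
]

def classify(message):
    """Return the label of the most similar training message."""
    vec = vectorize(message)
    best = max(train, key=lambda pair: cosine(vec, vectorize(pair[0])))
    return best[1]

print(classify("free prize waiting for you"))  # -> SPAM
```

A real filter would use a richer representation and a learned decision boundary, but the core intuition is the same: new SPAM resembles known SPAM.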
Knowledge representation and reasoning (KRR) is one of the key areas of the artificial intelligence (AI) field. Its aim is to represent world knowledge in formal languages (e.g., Prolog, SPARQL) so that expert systems can perform querying and inference tasks over it. Currently, constructing large-scale, high-quality knowledge bases (KBs) is impractical because the construction process requires many qualified knowledge engineers who not only understand the domain-specific knowledge but also have sufficient skill in knowledge representation. Unfortunately, qualified knowledge engineers are in short supply. Therefore, it would be very useful to build a tool that allows users to construct and query the KB simply via text. Although a number of systems have been developed for knowledge extraction and question answering, they largely fall short because they do not achieve sufficiently high accuracy, whereas KRR is highly sensitive to erroneous data. In this thesis proposal, I will present the Knowledge Authoring Logic Machine (KALM), a rule-based system that allows users to author knowledge and query the KB in text. The experimental results show that KALM achieves superior accuracy in knowledge authoring and question answering as compared to state-of-the-art systems.
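As a toy illustration of the KRR idea (facts plus rules, queried by inference), here is a sketch in Python. The parent/grandparent rule is a standard textbook example and is not KALM's actual representation or logic.

```python
# A tiny knowledge base of facts, plus one rule applied by forward inference:
# grandparent(A, D) holds if parent(A, B) and parent(B, D).
facts = {("parent", "mary", "john"), ("parent", "john", "ann")}

def infer_grandparents(facts):
    """Derive grandparent facts from parent facts (one inference step)."""
    derived = set(facts)
    parents = [f for f in facts if f[0] == "parent"]
    for (_, a, b) in parents:
        for (_, c, d) in parents:
            if b == c:
                derived.add(("grandparent", a, d))
    return derived

kb = infer_grandparents(facts)
print(("grandparent", "mary", "ann") in kb)  # -> True
```

In a real KRR system the same rule would be stated declaratively (e.g., in Prolog: `grandparent(A, D) :- parent(A, B), parent(B, D).`) and the inference engine, not hand-written loops, would answer queries.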
FlexLogix has announced inference-optimized nnMAX clusters used to build the InferX X1 edge inference co-processor, available for incorporation in SoCs as IP and in chip form in Q3. Its claimed performance advantage is strongest at low batch sizes, which are required in edge applications where there is typically only one camera/sensor. InferX X1's performance at small batch sizes is close to that of data center inference boards, and it is optimized for large models that need hundreds of billions of operations per image. For example, for YOLOv3 real-time object recognition, InferX X1 processes 12.7 frames/second of 2 megapixel images at batch size 1. Performance is roughly linear with image size, so the frame rate approximately doubles for 1 megapixel images.
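The "roughly linear with image size" claim can be checked with back-of-the-envelope arithmetic. This sketch assumes frame rate is inversely proportional to pixel count, which is only an approximation of the vendor's claim, not a measured result.

```python
# Estimate frame rate at other image sizes from the one reported data point
# (12.7 frames/second at 2 megapixels, batch size 1), assuming throughput
# scales inversely with pixel count.
FPS_2MP = 12.7  # reported YOLOv3 throughput at 2 megapixels

def fps_at(megapixels, ref_fps=FPS_2MP, ref_mp=2.0):
    """Estimated frames/second at a given image size under linear scaling."""
    return ref_fps * ref_mp / megapixels

print(round(fps_at(1.0), 1))  # -> 25.4 (frame rate roughly doubles at 1 MP)
```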
Grade prediction for courses not yet taken by students is important, as it can help them and their advisers during course selection as well as in designing personalized degree plans and modifying them based on performance. One of the successful approaches for accurately predicting a student's grades in future courses is Cumulative Knowledge-based Regression Models (CKRM). CKRM learns shallow linear models that predict a student's grade as the similarity between his/her knowledge state and the target course. A student's knowledge state is built by linearly accumulating the learned provided knowledge components of the courses he/she has taken in the past, weighted by his/her grades in them. However, not all prior courses contribute equally to the target course. In this paper, we propose a novel Neural Attentive Knowledge-based model (NAK) that learns the importance of each historical course in predicting the grade of a target course. Compared to CKRM and other competing approaches, our experiments on a large real-world dataset consisting of $\sim$1.5 million grades show the effectiveness of the proposed NAK model in accurately predicting students' grades. Moreover, the attention weights learned by the model can be helpful in better designing students' degree plans.
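The CKRM-style prediction described above (a grade-weighted accumulation of course knowledge vectors, compared against the target course) can be sketched as follows. The two-dimensional vectors, grades, and attention weights below are invented for illustration and are not from the paper; NAK's actual attention weights are learned, not hand-set.

```python
# Sketch: predict a grade as the similarity (dot product) between a student's
# accumulated knowledge state and the target course's knowledge vector.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Hypothetical "provided knowledge" vectors for two prior courses.
prior_courses = [[1.0, 0.0], [0.2, 0.8]]  # e.g., Calculus I, Intro Programming
grades = [4.0, 3.0]                       # grades earned in those courses
target_course = [0.0, 1.0]                # e.g., Data Structures

def predict(attention):
    """Accumulate an attention- and grade-weighted knowledge state,
    then score it against the target course."""
    state = [0.0, 0.0]
    for course, grade, w in zip(prior_courses, grades, attention):
        for i in range(len(state)):
            state[i] += w * grade * course[i]
    return dot(state, target_course)

print(predict([0.5, 0.5]))  # CKRM-style: prior courses weighted equally
print(predict([0.1, 0.9]))  # NAK-style: attention emphasizes the related course
```

Shifting attention toward the prior course that overlaps with the target raises the predicted grade, which is exactly the per-course importance NAK is designed to learn.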
This talk presents the application of decision support systems to conflict modeling in information operations recognition. An information operation is considered as a complex, weakly structured system. A model of the conflict between two subjects is proposed, based on a second-order rank reflexive model. A method is described for constructing a design pattern for the knowledge bases of decision support systems. The talk proposes a methodology for using decision support systems to model conflicts in information operations recognition, based on expert knowledge and content monitoring.
Viral hepatitis is among the most commonly encountered health problems throughout the world, alongside other easily transmitted diseases such as tuberculosis, human immunodeficiency virus, and malaria. Among all hepatitis viruses, the greatest number of deaths result from chronic hepatitis C or chronic hepatitis B infection. To develop this system, knowledge was acquired through both structured and semi-structured interviews with internists at St.Paul Hospital. Once acquired, the knowledge was modeled and represented using rule-based reasoning techniques. Both forward and backward chaining are used to infer the rules and provide appropriate advice in the developed expert system. The prototype expert system was developed using the SWI-Prolog editor. The proposed system can adapt to dynamic knowledge by generalizing rules and discovering new rules, learning newly arrived knowledge from domain experts adaptively without any help from a knowledge engineer.
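Forward chaining, one of the two inference strategies mentioned above, repeatedly fires rules whose conditions are satisfied until no new facts can be derived. The schematic below shows that loop; the hepatitis-related facts and rules are simplified placeholders, not the actual medical knowledge acquired from the internists.

```python
# Schematic forward chaining: each rule is (set of conditions, conclusion).
rules = [
    ({"jaundice", "fatigue"}, "suspect_hepatitis"),
    ({"suspect_hepatitis", "positive_hbsag_test"}, "hepatitis_b"),
]

def forward_chain(facts, rules):
    """Fire rules whose conditions hold until no new fact is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = forward_chain({"jaundice", "fatigue", "positive_hbsag_test"}, rules)
print("hepatitis_b" in result)  # -> True: both rules fire in sequence
```

Backward chaining works in the opposite direction: it starts from a goal (e.g., `hepatitis_b`) and recursively checks whether the conditions of some rule concluding that goal can be established.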
On a less-trafficked floor of the Whitney Museum, curators have scoured the museum's permanent collection to display art that uses "instructions, sets of rules, and code" to investigate a world "increasingly driven by automated systems." In the nineties, the game designer Frank Lantz produced such work. "I would make some marks on a page, and then I would just connect the endpoints of all the lines to the nearest unconnected endpoint, and then I would add another rule," he said. His method had a whiff of misanthropy. He wanted to render himself obsolete and let something else take over.
Expert System is making enhancements to Cogito, its artificial intelligence platform that understands textual information and automatically processes natural language, delivering key updates in the areas of knowledge graphs, machine learning, and RPA. Cogito 14.4 enables users to more easily customize its Knowledge Graph of approximately 350,000 concepts connected by 2.8 million relationships, and lets them import targeted knowledge from any source (such as company repositories, Wikipedia, or Geonames) in only a few clicks, enabling the platform to resolve references to real-world entities (such as people, companies, and locations) and to link them to knowledge repositories using standardized identifiers. Cogito 14.4 also extends its Natural Language Processing (NLP) extraction pipeline with a new active learning workflow that accelerates machine-learning-based analytics projects. Through an intuitive web application, the active learning workflow enables end users to visualize the quality of extraction and provide feedback to the engine, which is iteratively retrained to reach the user's quality goals, thus reducing the amount of manual annotation needed. Cogito 14.4 also includes a Robotic Process Automation (RPA) connector that extends the use of RPA bots into process automation that leverages knowledge (and not only structured data) and requires human-like judgement. The Cogito RPA Connector leverages deep contextual understanding to extract precise data from unstructured business documents.
In this paper we investigate two variants of association rules for preference data: Label Ranking Association Rules and Pairwise Association Rules. Label Ranking Association Rules (LRAR) are the equivalent of Class Association Rules (CAR) for the Label Ranking task. In CAR, the consequent is a single class, to which the example is expected to belong. In LRAR, the consequent is a ranking of the labels. The generation of LRAR requires special support and confidence measures to assess the similarity of rankings. In this work, we carry out a sensitivity analysis of these similarity-based measures. We want to understand which datasets benefit more from such measures and which parameters have more influence on the accuracy of the model. Furthermore, we propose an alternative type of rule, the Pairwise Association Rules (PAR), which are defined as association rules with a set of pairwise preferences in the consequent. While PAR can be used both as descriptive and predictive models, they are essentially descriptive models. Experimental results show the potential of both approaches.
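To see why LRAR need similarity-based measures, consider computing a rule's confidence not as an exact-match rate but as an average ranking similarity over the examples the antecedent covers. The sketch below uses a simple normalized pairwise-agreement similarity (a Kendall-tau-style measure); the toy dataset and the rule are invented for illustration and are not the paper's actual measures.

```python
# Similarity-weighted confidence for a label ranking rule A=value -> ranking.
from itertools import combinations

def ranking_similarity(r1, r2):
    """Fraction of label pairs ordered the same way in both rankings."""
    pairs = list(combinations(r1, 2))
    agree = sum(
        (r1.index(a) < r1.index(b)) == (r2.index(a) < r2.index(b))
        for a, b in pairs
    )
    return agree / len(pairs)

# Toy examples: (attribute value, observed label ranking).
data = [
    (1, ["x", "y", "z"]),
    (1, ["x", "z", "y"]),
    (0, ["z", "y", "x"]),
]

def rule_confidence(antecedent_value, consequent_ranking):
    """Average similarity between the rule's ranking and covered examples."""
    covered = [r for v, r in data if v == antecedent_value]
    if not covered:
        return 0.0
    return sum(ranking_similarity(r, consequent_ranking) for r in covered) / len(covered)

print(rule_confidence(1, ["x", "y", "z"]))  # -> 0.833...: partial credit
```

With exact-match confidence the rule would score only 0.5 on the two covered examples; the similarity-based measure gives partial credit to the nearly identical ranking, which is the point of the LRAR measures analyzed in the paper.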
As Artificial Intelligence (AI) becomes an integral part of our lives, the development of explainable AI, embodied in the decision-making process of an AI or robotic agent, becomes imperative. For a robotic teammate, the ability to generate explanations of its behavior is one of the key requirements of an explainable agency. Prior work on explanation generation focuses on supporting the reasoning behind the robot's behavior. These approaches, however, fail to consider the cognitive effort needed to understand the received explanation. In particular, the human teammate is expected to understand any explanation provided before the task execution, no matter how much information it contains. In this work, we argue that an explanation, especially a complex one, should be made in an online fashion during execution, which helps spread out the information to be explained and thus reduces the cognitive load on humans. A challenge here, however, is that the different parts of an explanation depend on each other, which must be taken into account when generating online explanations. To this end, a general formulation of online explanation generation is presented. We base our explanation generation method on a model reconciliation setting introduced in our prior work. Our approach is evaluated both with human subjects in a standard planning competition (IPC) domain, using the NASA Task Load Index (TLX), as well as in simulation with four different problems.