Expert Systems


Pinaki Laskar on LinkedIn: #AI #Engineering #machinelearning

#artificialintelligence

AI Researcher, Cognitive Technologist, Inventor (AI Thinking, Think Chain), Innovator (AIoT, XAI, Autonomous Cars, IIoT), Founder of Fisheyebox, Spatial Computing Savant, Transformative Leader, Industry X.0 Practitioner.

Why Is Transdisciplinary #AI Needed? TransAI should be human-centred by including the following aspects:

- Explainable AI: it allows humans to understand the reasons behind its recommendations or decisions.
- Verifiable AI: it guarantees fundamental properties like safety, privacy and security.
- Physical AI: the use of AI techniques to solve problems that involve direct interaction with the physical world, e.g., observing the world through sensors or modifying it through actuators. What distinguishes Physical AI systems is this direct interaction with the physical world, in contrast with other AI types, e.g., financial recommendation systems (where the AI sits between the human and a database), chatbots (where the AI interacts with the human via the Internet), or AI chess players (where the chess board state is passed to the AI algorithm).
- Collaborative AI: it can share knowledge with humans and take decisions jointly with them.
- Integrative AI: it can combine different requirements and methods into one AI system.

TransAI embraces interdependent elements: Philosophical AI. AI has closer scientific connections with philosophy than other sciences do, because AI shares many concepts with philosophy, e.g.


Even computer experts think ending human oversight of AI is a very bad idea

ZDNet

The right to a human review will become impractical and disproportionate in many cases as AI applications grow in the next few years, according to a consultation from the UK government. While the world's largest economies are working on new laws to keep AI under control and avoid the technology creating unintended harms, the UK seems to be pushing for a rather different approach. The government has recently proposed to get rid of some of the rules that already exist to put brakes on the use of algorithms, and experts are now warning that this is a dangerous way to go. In a consultation launched earlier this year, the Department for Digital, Culture, Media and Sport (DCMS) invited experts to submit their thoughts on new proposals designed to reform the UK's data protection regime. Among those featured was a bid to remove a legal provision that currently enables citizens to challenge a decision made about them by automated decision-making technology and to request a human review of that decision.


AI in Software Automation Process - Time Bulletin

#artificialintelligence

Artificial Intelligence (AI) is when a machine imitates the cognitive functions that humans associate with other human minds, such as learning and problem solving, reasoning, knowledge representation, social intelligence, and general intelligence, in terms of computer systems. It is an emerging field, and vital applications include machine learning, expert systems, natural language processing, speech recognition, machine vision and neural semantic systems. Approaches include statistical methods, computational intelligence, soft computing, and traditional symbolic AI. One word that best describes the use of artificial intelligence is automation, or digitalization. The automation process involves employing AI platforms that can support the digitalization process and deliver the same or better results than the human brain would have achieved.


Tying quantum computing to AI prompts a smarter power grid

#artificialintelligence

Fumbling to find flashlights during blackouts may soon be a distant memory, as quantum computing and artificial intelligence could learn to decipher an electric grid's problematic quirks and solve system hiccups so fast that humans may not notice. Rather than energy grid faults turning into giant problems, such as voltage variations or widespread blackouts, blazing-fast computation blended with artificial intelligence could rapidly diagnose trouble and find solutions in fractions of a second, according to Cornell research forthcoming in Applied Energy (Dec. 1, 2021). "Energy power system failures are an old problem and we are still using classic computational methods to resolve them," said Fengqi You, the Roxanne E. and Michael J. Zak Professor in Energy Systems Engineering in the College of Engineering. "Today's power systems can benefit from AI and the computational power of quantum computing, so power systems can be stable and reliable." You, along with doctoral student Akshay Ajagekar, are co-authors of "Quantum Computing-based Hybrid Deep Learning for Fault Diagnosis in Electrical Power Systems."


Every time I fire a conversational designer, the performance of the dialog system goes down

arXiv.org Artificial Intelligence

Incorporating explicit domain knowledge into neural-based task-oriented dialogue systems is an effective way to reduce the need for large sets of annotated dialogues. In this paper, we investigate how the use of explicit domain knowledge from conversational designers affects the performance of neural-based dialogue systems. To support this investigation, we propose the Conversational-Logic-Injection-in-Neural-Network system (CLINN), where explicit knowledge is coded in semi-logical rules. Using CLINN, we evaluated semi-logical rules produced by a team of differently skilled conversational designers. We experimented with the Restaurant topic of the MultiWOZ dataset. Results show that external knowledge is extremely important for reducing the need for annotated examples in conversational systems. In fact, rules from conversational designers used in CLINN significantly outperform a state-of-the-art neural-based dialogue system.
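To make the idea of a designer-written semi-logical rule concrete, here is a minimal sketch in the spirit of CLINN: an IF-THEN rule that constrains a dialogue-state slot in the restaurant domain. The rule name, state keys, and rule format are illustrative stand-ins, not the paper's actual syntax.

```python
# A hand-written semi-logical rule for task-oriented dialogue:
# IF the domain is "restaurant" AND the user says "cheap"
# THEN constrain the pricerange slot to "cheap".
def rule_cheap_food(state, utterance):
    if state.get("domain") == "restaurant" and "cheap" in utterance.lower():
        return dict(state, pricerange="cheap")  # new state with slot filled
    return state  # rule does not fire; state unchanged

state = {"domain": "restaurant"}
state = rule_cheap_food(state, "I want a cheap place to eat")
print(state["pricerange"])  # -> cheap
```

A pool of such rules, written by designers of varying skill, can be applied to the dialogue state before or alongside a neural policy, which is the kind of knowledge injection the paper evaluates.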


RuleBert: Teaching Soft Rules to Pre-trained Language Models

arXiv.org Artificial Intelligence

While pre-trained language models (PLMs) are the go-to solution for many natural language processing problems, they are still very limited in their ability to capture and use common-sense knowledge. In fact, even if information is available in the form of approximate (soft) logical rules, it is not clear how to transfer it to a PLM in order to improve its performance on deductive reasoning tasks. Here, we aim to bridge this gap by teaching PLMs how to reason with soft Horn rules. We introduce a classification task where, given facts and soft rules, the PLM should return a prediction with a probability for a given hypothesis. We release the first dataset for this task, and we propose a revised loss function that enables the PLM to learn how to predict precise probabilities for the task. Our evaluation results show that the resulting fine-tuned models achieve very high performance, even on logical rules that were unseen during training. Moreover, we demonstrate that logical notions expressed by the rules are transferred to the fine-tuned model, yielding state-of-the-art results on external datasets.
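The task setup can be illustrated with a tiny sketch of how a target probability for a hypothesis follows from facts and a single soft Horn rule, i.e., the kind of label a model like RuleBert is trained to predict. The rule encoding below is an illustrative stand-in, not the paper's dataset format.

```python
# A soft Horn rule: (body atoms, head atom, confidence).
# If every body atom is among the known facts, the head holds
# with the rule's confidence; otherwise there is no support for it.
def hypothesis_prob(facts, rule):
    body, head, conf = rule
    if all(atom in facts for atom in body):
        return conf
    return 0.0

facts = {("bird", "tweety")}
rule = ([("bird", "tweety")], ("flies", "tweety"), 0.85)
print(hypothesis_prob(facts, rule))  # -> 0.85
```

The PLM's job in the paper's classification task is to produce this probability directly from a textual rendering of the facts, the rule, and the hypothesis, which is why the revised loss targets precise probabilities rather than hard labels.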


WRENCH: A Comprehensive Benchmark for Weak Supervision

arXiv.org Machine Learning

Recent Weak Supervision (WS) approaches have had widespread success in easing the bottleneck of labeling training data for machine learning by synthesizing labels from multiple potentially noisy supervision sources. However, proper measurement and analysis of these approaches remain a challenge. First, datasets used in existing works are often private and/or custom, limiting standardization. Second, WS datasets with the same name and base data often vary in terms of the labels and weak supervision sources used, a significant "hidden" source of evaluation variance. Finally, WS studies often diverge in terms of the evaluation protocol and ablations used. To address these problems, we introduce a benchmark platform, WRENCH, for a thorough and standardized evaluation of WS approaches. It consists of 22 varied real-world datasets for classification and sequence tagging; a range of real, synthetic, and procedurally generated weak supervision sources; and a modular, extensible framework for WS evaluation, including implementations of popular WS methods. We use WRENCH to conduct extensive comparisons over more than 100 method variants to demonstrate its efficacy as a benchmark platform. The code is available at https://github.com/JieyuZ2/wrench.
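The simplest label model a weak-supervision benchmark of this kind evaluates is majority voting over the noisy sources. Here is a minimal, self-contained sketch (function names are illustrative; by the usual convention, -1 marks a labeling function that abstains on an example).

```python
from collections import Counter

# Aggregate one example's votes from multiple noisy labeling functions.
def majority_vote(weak_labels):
    votes = [l for l in weak_labels if l != -1]  # drop abstentions
    if not votes:
        return -1  # no source fired on this example
    return Counter(votes).most_common(1)[0][0]

# Label matrix L: rows are examples, columns are labeling functions.
L = [[1, 1, -1, 0],
     [0, -1, 0, 0],
     [-1, -1, -1, -1]]
print([majority_vote(row) for row in L])  # -> [1, 0, -1]
```

More sophisticated label models weight each source by an estimated accuracy instead of counting votes equally; a standardized benchmark makes those variants directly comparable on the same label matrices.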


Improved genetic algorithm and XGBoost classifier for power transformer fault diagnosis

#artificialintelligence

The power transformer is an essential component for the stable and reliable operation of the electrical power grid. Traditional diagnostic methods based on dissolved gas analysis (DGA) have been used to identify power transformer faults. However, the application of these methods is limited due to the low accuracy of fault identification. In this paper, a transformer fault diagnosis system is developed based on the combination of an improved genetic algorithm (IGA) and XGBoost. In the transformer fault diagnosis system, the improved genetic algorithm is employed to pre-select the input features from the DGA data and to optimize the XGBoost classifier. Performance measures such as average unfitness value, likelihood of evolution leap, and likelihood of optimality are used to validate the efficacy of the proposed improved genetic algorithm. The results of simulation experiments show that the improved genetic algorithm can reach the optimal solution stably and reliably, and the proposed method improves the average accuracy of transformer fault diagnosis to 99.2%. Compared to IEC ratios, the Duval triangle, support vector machine (SVM), and common vector approach (CVA), the diagnostic accuracy of the proposed method is improved by 30.2%, 47.2%, 11.2%, and 3.6%, respectively. The proposed method can be a potential solution for identifying transformer fault types.
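The feature pre-selection step can be sketched as a plain genetic algorithm over binary feature masks. The fitness function below is a toy stand-in (the paper's fitness would involve classifier accuracy on DGA data, e.g., from an XGBoost model); the GA mechanics of selection, one-point crossover, and bit-flip mutation are the generic ingredients, not the paper's specific improvements.

```python
import random

# Genetic algorithm over binary masks: a 1 keeps a feature, a 0 drops it.
def ga_select(n_features, fitness, pop=20, gens=30, seed=0):
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_features)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop // 2]           # selection (elitist)
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_features)      # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_features)           # bit-flip mutation
            child[i] ^= 1
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

# Toy fitness: reward masks close to a known-good mask (stand-in for
# "validation accuracy of a classifier trained on the kept features").
target = [1, 0, 1, 0, 0]
best = ga_select(5, lambda m: -sum(x != t for x, t in zip(m, target)))
print(best)
```

In the paper's setting, each mask would be scored by training the XGBoost classifier on the selected DGA features, so the GA jointly searches the feature subset and, with an extended chromosome, the classifier's hyperparameters.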


Symbols as a Lingua Franca for Bridging Human-AI Chasm for Explainable and Advisable AI Systems

arXiv.org Artificial Intelligence

Despite the surprising power of many modern AI systems that often learn their own representations, there is significant discontent about their inscrutability and the attendant problems in their ability to interact with humans. While alternatives such as neuro-symbolic approaches have been proposed, there is a lack of consensus on what they are about. There are often two independent motivations: (i) symbols as a lingua franca for human-AI interaction, and (ii) symbols as (system-produced) abstractions used in the system's internal reasoning. The jury is still out on whether AI systems will need to use symbols in their internal reasoning to achieve general intelligence capabilities. Whatever the answer, the need for (human-understandable) symbols in human-AI interaction seems quite compelling. Symbols, like emotions, may well not be sine qua non for intelligence per se, but they will be crucial for AI systems to interact with us humans, as we can neither turn off our emotions nor get by without our symbols. In particular, in many human-designed domains, humans would be interested in providing explicit (symbolic) knowledge and advice, and expect machine explanations in kind. This alone requires AI systems to at least do their I/O in symbolic terms. In this blue sky paper, we argue this point of view and discuss research directions that need to be pursued to allow for this type of human-AI interaction.