
Deep Neural Network in R

#artificialintelligence

A neural network is modeled loosely on the human nervous system: just as the nervous system is made up of interconnected neurons, a neural network is made up of interconnected information-processing units. The method's strength comes from its parallel processing of information. Neural networks help us extract meaningful information and detect hidden patterns from complex data sets, and they are considered among the most powerful techniques in data science. They were developed to solve problems that are easy for humans but difficult for machines.
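The "interconnected information processing units" described above can be sketched in a few lines. (The article works in R; this Python version, with arbitrary layer sizes and random weights, only illustrates the structure and is not the article's code.)

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A tiny feed-forward network: 3 inputs -> 4 hidden units -> 1 output.
# Each unit is an information-processing unit connected to every unit in
# the previous layer; the layer sizes are illustrative choices.
n_in, n_hidden = 3, 4
W1 = [[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_hidden)]
W2 = [random.gauss(0, 1) for _ in range(n_hidden)]

def forward(x):
    # All hidden units process the same inputs independently ("in parallel").
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)))

y = forward([0.5, -1.2, 3.0])
```

Training such a network (adjusting W1 and W2 from data) is what lets it detect hidden patterns; the forward pass above is only the wiring.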


Data Science: Supervised Machine Learning in Python

#artificialintelligence

Data Science: Supervised Machine Learning in Python - Full Guide to Implementing Classic Machine Learning Algorithms in Python and with Scikit-Learn. Created by Lazy Programmer Team, Lazy Programmer Inc. In recent years, we've seen a resurgence in AI, or artificial intelligence, and machine learning. Machine learning has led to some amazing results, like being able to analyze medical images and predict diseases on par with human experts. Google's AlphaGo program was able to beat a world champion in the strategy game Go using deep reinforcement learning. Machine learning is even being used to program self-driving cars, which is going to change the automotive industry forever. Imagine a world with drastically reduced car accidents, simply by removing the element of human error.


Artificial Intelligence and IoT: Naive Bayes

#artificialintelligence

A project-based course that teaches you how to build an AIoT system from theory to prototype, particularly using the Naive Bayes algorithm. Sample code is provided for every project in the course, and you will receive a certificate of completion when you finish it. Udemy's 30-day money-back guarantee also applies if you are not satisfied with the course.


Naive Bayes Classifiers II: Application

#artificialintelligence

Now we're going to see how we can use our training data to train our Naive Bayes model. What does it even mean to train a Naive Bayes model? In our task we have two classes, so n = 2. Let's work our way through the formula and see how its different terms are calculated. First, let's look at the P(c) term.
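The P(c) term, the class prior, is estimated as each class's relative frequency in the training data. A minimal sketch (the class labels and counts here are made up for illustration, not taken from the article's task):

```python
from collections import Counter

# Illustrative training labels for a two-class task (n = 2).
labels = ["spam", "ham", "ham", "spam", "ham", "ham"]

counts = Counter(labels)
total = len(labels)

# P(c): the fraction of training examples belonging to each class.
priors = {c: counts[c] / total for c in counts}
```

Training the full model then means also counting, per class, how often each feature value occurs, which gives the likelihood terms that multiply the prior.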


The Inescapable Duality of Data and Knowledge

arXiv.org Artificial Intelligence

We will discuss how, over the last 30 to 50 years, systems that focused only on data have been handicapped, succeeding only at narrowly scoped tasks, while knowledge has been critical in developing smarter, more effective intelligent systems. We will draw a parallel with the role of knowledge and experience in human intelligence, based on cognitive science. And we will end with the recent interest in neuro-symbolic or hybrid AI systems, in which knowledge is the critical enabler for combining data-intensive statistical AI systems with symbolic AI systems, resulting in more capable AI systems that support more human-like intelligence.


Conditions and Assumptions for Constraint-based Causal Structure Learning

arXiv.org Machine Learning

The paper formalizes constraint-based structure learning of the "true" causal graph from observed data in the presence of unobserved variables. We define a "generic" structure-learning algorithm that provides conditions which, under the faithfulness assumption, the output of all known exact algorithms in the literature must satisfy, and which outputs graphs that are Markov equivalent to the causal graph. More importantly, we provide clear assumptions, weaker than faithfulness, under which the same generic algorithm outputs graphs Markov equivalent to the causal graph. We provide the theory for the general class of models under the assumption that the distribution is Markovian with respect to the true causal graph, and we specialize the definitions and results for structural causal models.


NNrepair: Constraint-based Repair of Neural Network Classifiers

arXiv.org Artificial Intelligence

NNrepair is a constraint-based technique that aims to fix the logic of a network at an intermediate layer or at the last layer. NNrepair first uses fault localization to find potentially faulty network parameters (such as the weights) and then performs repair using constraint solving to apply small modifications to the parameters to remedy the defects. We present novel strategies to enable precise yet efficient repair, such as inferring correctness specifications to act as oracles for intermediate-layer repair, and generating experts for each class. We demonstrate the technique in three different scenarios: (1) improving the overall accuracy of a model, (2) fixing security vulnerabilities caused by poisoning of training data, and (3) improving the robustness of the network against adversarial attacks. Our evaluation on MNIST and CIFAR-10 models shows that NNrepair can improve accuracy by 45.56 percentage points on poisoned data and 10.40 percentage points on adversarial data. NNrepair also provides a small improvement in the overall accuracy of models, without requiring new data or re-training.
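The core idea, finding a small parameter change that fixes failing inputs while preserving passing ones, can be shown on a toy scale. (The single-neuron "network", its weights, and the brute-force grid search below stand in for NNrepair's fault localization and constraint solving; they are illustrative, not the paper's algorithm.)

```python
# Toy "network": one linear unit with a threshold decision.
def predict(weights, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0

weights = [1.0, -2.0]
dataset = [([1.0, 0.2], 1),   # currently classified correctly
           ([0.5, 0.1], 1),   # currently classified correctly
           ([0.3, 0.4], 1)]   # currently wrong: 0.3 - 0.8 < 0 -> predicts 0

def num_correct(ws):
    return sum(predict(ws, x) == y for x, y in dataset)

# "Fault localization + repair" in miniature: for each parameter, try small
# deltas and keep the first modification that makes every example correct.
repaired = None
for i in range(len(weights)):
    for delta in [d / 10 for d in range(-20, 21)]:
        candidate = list(weights)
        candidate[i] += delta
        if num_correct(candidate) == len(dataset):
            repaired = candidate
            break
    if repaired:
        break
```

The real technique replaces this exhaustive search with a constraint solver over the layer's activations, which scales to networks where grid search cannot.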


Markov Modeling of Time-Series Data using Symbolic Analysis

arXiv.org Machine Learning

Markov models are often used to capture the temporal patterns of sequential data for statistical learning applications. While Hidden Markov modeling-based learning mechanisms are well studied in the literature, we analyze a symbolic-dynamics-inspired approach. Under this umbrella, Markov modeling of time-series data consists of two major steps: discretization of continuous attributes, followed by estimating the size of the temporal memory of the discretized sequence. These two steps are critical for an accurate and concise representation of time-series data in the discrete space. Discretization governs the information content of the resultant discretized sequence, while memory estimation of the symbolic sequence helps to extract the predictive patterns in the discretized data. Clearly, the effectiveness of signal representation as a discrete Markov process depends on both steps. In this paper, we review different techniques for discretization and memory estimation for discrete stochastic processes, focusing on the individual problems of discretization and order estimation. We present results from the literature on partitioning from dynamical systems theory and on order estimation using concepts from information theory and statistical learning. The paper also presents related problem formulations that will be useful for machine learning and statistical learning applications using the symbolic framework of data analysis. Finally, we present results of a statistical analysis of a complex thermoacoustic instability phenomenon during lean-premixed combustion in jet-turbine engines using the proposed Markov modeling method.
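The two steps above can be sketched as follows; the synthetic data, the three-symbol alphabet, and the assumption of one-step memory are illustrative choices, not the paper's settings:

```python
import random

random.seed(1)
# Illustrative continuous time series (stand-in for real signal data).
series = [random.gauss(0, 1) for _ in range(1000)]

# Step 1: discretization via equal-frequency (quantile) partitioning
# into a 3-symbol alphabet {0, 1, 2}.
sorted_vals = sorted(series)
cuts = [sorted_vals[len(series) // 3], sorted_vals[2 * len(series) // 3]]

def symbolize(x):
    return sum(x > c for c in cuts)

symbols = [symbolize(x) for x in series]

# Step 2: estimate transition probabilities assuming a temporal memory of
# one step (a full treatment would estimate the memory/order itself, e.g.
# with information-theoretic criteria, rather than fixing it).
counts = [[0] * 3 for _ in range(3)]
for a, b in zip(symbols, symbols[1:]):
    counts[a][b] += 1

P = []
for row in counts:
    s = sum(row)
    P.append([c / s for c in row] if s else [0.0] * 3)
```

Here the discretization fixes how much information the symbol stream retains about the signal, and the estimated matrix P is the concise Markov representation built on top of it.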


The Efficient Shrinkage Path: Maximum Likelihood of Minimum MSE Risk

arXiv.org Machine Learning

When linear models are fit to ill-conditioned or confounded narrow data, TRACE plots are useful in demonstrating and justifying deliberately biased estimation. This makes TRACE diagnostics powerful "visual" displays. If advanced students of regression are trained in the interpretation of TRACE plots, they could help administrators capable of basic statistical thinking avoid misinterpretations of questionable regression coefficient estimates. All five types of ridge TRACE plots for a wide variety of ridge paths can be explored using R functions. For example, the RXshrink aug.lars() function generates TRACEs for Least-Angle, Lasso and Forward Stagewise methods (Efron, Hastie, Johnstone and Tibshirani 2004; Hastie and
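The idea behind a ridge trace, watching coefficient estimates shrink along the path as the penalty grows, can be sketched without the R packages. (The abstract works with RXshrink in R; this tiny two-predictor Python example, with made-up nearly collinear data, only illustrates the closed form beta(k) = (X'X + kI)^{-1} X'y.)

```python
import random

random.seed(0)
n = 50
# Two nearly collinear predictors -> an ill-conditioned X'X.
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [v + random.gauss(0, 0.1) for v in x1]
y = [2 * a + 3 * b + random.gauss(0, 0.5) for a, b in zip(x1, x2)]

def ridge_coefs(k):
    # Solve the 2x2 system (X'X + kI) beta = X'y directly (no intercept).
    a = sum(v * v for v in x1) + k
    b = sum(u * v for u, v in zip(x1, x2))
    d = sum(v * v for v in x2) + k
    g1 = sum(u * v for u, v in zip(x1, y))
    g2 = sum(u * v for u, v in zip(x2, y))
    det = a * d - b * b
    return ((d * g1 - b * g2) / det, (a * g2 - b * g1) / det)

# The "trace": coefficients evaluated along a grid of penalty values k.
ks = (0.0, 0.1, 1.0, 10.0)
trace = {k: ridge_coefs(k) for k in ks}
```

Plotting each coefficient against k reproduces the familiar TRACE display: wild, unstable estimates near k = 0 settle into smoothly shrinking, more defensible values as the deliberate bias increases.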


Any Part of Bayesian Network Structure Learning

arXiv.org Artificial Intelligence

We study an interesting and challenging problem: learning any part of a Bayesian network (BN) structure. For this problem, it is computationally inefficient to use existing global BN structure learning algorithms, which must find the entire BN structure just to obtain the part in which we are interested; and local BN structure learning algorithms encounter the false edge orientation problem when they are applied directly. In this paper, we first present a new concept, Expand-Backtracking, to explain why local BN structure learning methods have the false edge orientation problem, then propose APSL, an efficient and accurate Any Part of BN Structure Learning algorithm. Specifically, APSL divides the V-structures in a Markov blanket (MB) into two types, collider V-structures and non-collider V-structures, then starts from a node of interest and recursively finds both types of V-structure in the discovered MBs until the part of the BN structure in which we are interested is oriented. To improve the efficiency of APSL, we further design APSL-FS, a variant that uses feature selection. Extensive experiments on six benchmark BNs validate the efficiency and accuracy of our methods.