


The Challenges Facing Today's Artificial Intelligence Strategies

#artificialintelligence

The hype surrounding artificial intelligence (AI) is intense despite the fact that, for most enterprises, AI is still at an early, or planning, stage. While a lot has been done, there is a lot more to do before it becomes commonplace. However, that hasn't stopped spe...


Exploring Supervised Machine Learning Algorithms

#artificialintelligence

The main goal of this reading is to understand enough statistical methodology to be able to leverage the machine learning algorithms in Python's scikit-learn library and then apply this knowledge to solve a classic machine learning problem. The first stop of our journey will take us through a brief history of machine learning. Then we will dive into different algorithms. On our final stop, we will use what we learned to solve the Titanic Survival Rate Prediction Problem. With that noted, let's dive in!
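To make that concrete, here is a minimal sketch of the scikit-learn workflow the article builds toward, assuming the usual Kaggle-style Titanic CSV (the file name, columns, and model choice here are illustrative, not taken from the article):

```python
# Minimal sketch: a scikit-learn classifier on Titanic survival data.
# Assumes a CSV like Kaggle's train.csv with Survived, Pclass, Sex,
# Age, and Fare columns; all names are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("train.csv")

# Simple feature prep: encode sex as 0/1, fill missing ages with the median.
df["Sex"] = (df["Sex"] == "female").astype(int)
df["Age"] = df["Age"].fillna(df["Age"].median())

X = df[["Pclass", "Sex", "Age", "Fare"]]
y = df["Survived"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Any scikit-learn classifier can be swapped in behind the same fit/predict interface, which is what makes the library convenient for comparing algorithms on a problem like this.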


Learning Structured Text Representations

arXiv.org Artificial Intelligence

In this paper, we focus on learning structure-aware document representations from data without recourse to a discourse parser or additional annotations. Drawing inspiration from recent efforts to empower neural networks with a structural bias, we propose a model that can encode a document while automatically inducing rich structural dependencies. Specifically, we embed a differentiable non-projective parsing algorithm into a neural model and use attention mechanisms to incorporate the structural biases. Experimental evaluation across different tasks and datasets shows that the proposed model achieves state-of-the-art results on document modeling tasks while inducing intermediate structures which are both interpretable and meaningful.
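The differentiable non-projective parsing the abstract refers to is the kind of machinery usually built on Kirchhoff's Matrix-Tree theorem, which turns arc scores into marginal arc probabilities that attention can consume. A minimal numpy sketch of that computation, on the assumption that this is the construction in play (following Koo et al., 2007):

```python
# Arc marginals over non-projective dependency trees via the Matrix-Tree
# theorem (Koo et al., 2007) -- the textbook way to make such parsing
# differentiable. Whether this matches the paper's exact construction is
# an assumption; the algebra below is the standard version.
import numpy as np

def tree_marginals(arc_scores, root_scores):
    """arc_scores[i, j]: score of an arc from head i to modifier j.
    root_scores[j]: score of word j being the root.
    Returns (arc_marginals, root_marginals)."""
    A = np.exp(arc_scores)
    np.fill_diagonal(A, 0.0)                  # no self-arcs

    # Laplacian: column sums on the diagonal, -A off the diagonal.
    L = -A.copy()
    np.fill_diagonal(L, A.sum(axis=0))

    # Replace the first row with root weights; det(L_hat) is then the
    # partition function over all non-projective trees.
    L_hat = L.copy()
    L_hat[0, :] = np.exp(root_scores)

    L_inv = np.linalg.inv(L_hat)

    # Marginals follow from differentiating log det(L_hat) in the scores.
    term1 = A * np.diag(L_inv)[None, :]       # A[i,j] * L_inv[j,j]
    term1[:, 0] = 0.0
    term2 = A * L_inv.T                       # A[i,j] * L_inv[j,i]
    term2[0, :] = 0.0
    arc_marginals = term1 - term2
    root_marginals = np.exp(root_scores) * L_inv[:, 0]
    return arc_marginals, root_marginals
```

For each word j, root_marginals[j] plus the j-th column of arc_marginals sums to one, i.e., every word gets exactly one head in expectation. Because everything here is matrix algebra, the marginals are differentiable in the scores, which is what lets a parser like this sit inside a neural model.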


Addressing the AI Engineering Gap - AI and Big Data

#artificialintelligence

Realising the dream of AI is more process than magic. But the engineering process itself is a journey. Given the talk of AI in the general media, you could forgive yourself for thinking that it will 'just happen'. It sometimes sounds as if machine learning is on an unstoppable roll, poised to revolutionise industry and commerce through sheer momentum of technology. But of course, it's not that simple for organisations that want to adopt AI methods.


Detecting Diseases in Chest X-ray Using Deep Learning

@machinelearnbot

Chest X-rays are used to diagnose multiple diseases. From pneumonia to lung nodules, many conditions can be diagnosed from this single modality using deep learning. The ChestX-ray14 dataset, recently released by the NIH, contains over 90,000 X-ray plates, each tagged with one or more of 14 diseases or marked as normal. This has started a race to build computer-aided diagnosis (CAD) systems that can learn to discern thoracic diseases from X-rays. If you have been following developments since the dataset's release, you will have noticed research on it coming out of various labs.
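Much of that follow-up work treats the task as multi-label image classification: a pretrained CNN with 14 independent sigmoid outputs trained with binary cross-entropy. A minimal PyTorch sketch of that recipe (the model choice and hyperparameters are illustrative assumptions, not details from the article):

```python
# Sketch of a multi-label classifier for ChestX-ray14, in the style of
# follow-up work on the NIH dataset. Model, input size, and training
# details are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_DISEASES = 14  # the 14 thoracic disease labels in ChestX-ray14

# DenseNet-121 pretrained on ImageNet, with the classifier head replaced
# by 14 independent logits -- one per disease, since labels can co-occur.
model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
model.classifier = nn.Linear(model.classifier.in_features, NUM_DISEASES)

# Multi-label loss: sigmoid + binary cross-entropy per disease.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step. images: (B, 3, 224, 224) floats,
    labels: (B, 14) multi-hot disease indicators."""
    optimizer.zero_grad()
    logits = model(images)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# At inference time, per-disease probabilities come from a sigmoid:
# probs = torch.sigmoid(model(batch))
```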


Kicking the Sensing Habit

AI Magazine

Sensor dependency is an affliction that affects an alarming number of robots, and the problem is spreading. In some situations, sensor use is advisable, perhaps even unavoidable. However, there is an important difference between sensor use and sensor abuse. This article lists some of the telltale signs of sensor dependency and reveals the tricks of the trade used on unwitting roboticists by wily sensor pushers.


Knowledge Verification Base

AI Magazine

He points out that one of the key features these systems lack is "a suitable verification methodology or a technique for testing the consistency and completeness of a rule set." It is precisely this feature that we address here. LES is a generic rule-based expert system building tool (Laffey, Perkins, and Nguyen 1986), similar to EMYCIN (Van Melle 1981), that has been used as a framework to construct expert systems in many areas, such as electronic equipment diagnosis, design verification, photointerpretation, and hazard analysis. LES represents factual data in its frame database, and heuristic and control knowledge in its production rules. LES allows the knowledge engineer to use both data-driven and goal-driven rules.
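To make the data-driven versus goal-driven distinction concrete, here is a toy sketch of the two rule-firing regimes over one rule set (the rules and facts are invented for illustration; LES's actual frame-plus-rule representation is richer):

```python
# Toy illustration of data-driven (forward-chaining) vs. goal-driven
# (backward-chaining) rule firing. Rules and facts are invented.
RULES = [
    ({"voltage_low", "fuse_ok"}, "battery_fault"),
    ({"battery_fault"}, "replace_battery"),
]

def forward_chain(facts):
    """Data-driven: fire any rule whose premises all hold, to a fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Goal-driven: prove the goal by recursively proving some rule's premises."""
    if goal in facts:
        return True
    return any(
        conclusion == goal and all(backward_chain(p, facts) for p in premises)
        for premises, conclusion in RULES
    )

print(forward_chain({"voltage_low", "fuse_ok"}))
# -> {'voltage_low', 'fuse_ok', 'battery_fault', 'replace_battery'}
print(backward_chain("replace_battery", {"voltage_low", "fuse_ok"}))  # True
```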


Tree-Structured Neural Machine for Linguistics-Aware Sentence Generation

arXiv.org Artificial Intelligence

Unlike other sequential data, sentences in natural language are structured by linguistic grammars. Previous generative conversational models with chain-structured decoders ignore this structure in human language and might generate plausible responses with less satisfactory relevance and fluency. In this study, we aim to incorporate the results from linguistic analysis into the process of sentence generation for high-quality conversation generation. Specifically, we use a dependency parser to transform each response sentence into a dependency tree and construct a training corpus of sentence-tree pairs. A tree-structured decoder is developed to learn the mapping from a sentence to its tree, where different types of hidden states are used to depict the local dependencies from an internal tree node to its children. For training acceleration, we propose a tree canonicalization method, which transforms trees into equivalent ternary trees. Then, with a proposed tree-structured search method, the model is able to generate the most probable responses in the form of dependency trees, which are finally flattened into sequences as the system output. Experimental results demonstrate that the proposed X2Tree framework outperforms baseline methods with over an 11.15% increase in acceptance ratio.
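The abstract does not spell out the canonicalization, but a natural way to bound the branching factor of a dependency tree is to give each node three pointers: first left dependent, first right dependent, and next sibling. A sketch of that idea follows; this is one plausible ternary encoding, not necessarily X2Tree's exact construction:

```python
# Sketch of canonicalizing an n-ary dependency tree into a bounded-arity
# tree: three pointers per node (first left dependent, first right
# dependent, next sibling). One plausible ternary encoding -- the paper's
# exact construction may differ.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DepNode:                       # n-ary dependency node
    word: str
    left: List["DepNode"] = field(default_factory=list)   # left dependents
    right: List["DepNode"] = field(default_factory=list)  # right dependents

@dataclass
class TernaryNode:                   # bounded-arity node
    word: str
    first_left: Optional["TernaryNode"] = None
    first_right: Optional["TernaryNode"] = None
    sibling: Optional["TernaryNode"] = None

def chain(nodes: List[DepNode]) -> Optional[TernaryNode]:
    """Link a list of dependents through their `sibling` pointers."""
    head: Optional[TernaryNode] = None
    for dep in reversed(nodes):
        t = canonicalize(dep)
        t.sibling = head
        head = t
    return head

def canonicalize(node: DepNode) -> TernaryNode:
    return TernaryNode(node.word,
                       first_left=chain(node.left),
                       first_right=chain(node.right))
```

Flattening back to a sequence, as the abstract describes for the system output, is then a deterministic traversal of the canonical tree.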


Robust Algorithms for Machine Learning

@machinelearnbot

Machine learning is often held out as a magical solution to hard problems that will absolve us mere humans from ever having to actually learn anything. But in reality, for data scientists and machine learning engineers, there are a lot of problems that are much more difficult to deal with than simple object recognition in images or playing board games with finite rule sets. For the majority of problems, it pays to have a variety of approaches that help you reduce the noise and anomalies and focus on something more tractable. One approach is to design more robust algorithms, where the testing error is consistent with the training error, or the performance is stable after adding noise to the dataset. The idea of any traditional (non-Bayesian) statistical test is the same: we compute a number (called a "statistic") from the data, and use the known distribution of that number to answer the question, "What are the odds of this happening by chance?"
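As a concrete version of that last point, here is the pattern with a two-sample t-test on synthetic data; the statistic's known distribution under the null hypothesis is exactly what turns it into a p-value:

```python
# "Compute a statistic, then ask what are the odds of this happening by
# chance": a two-sample t-test. The data here is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=200)   # control sample
treated  = rng.normal(loc=0.3, scale=1.0, size=200)   # slightly shifted sample

# Under the null hypothesis (equal means), the t statistic follows a known
# t distribution; the p-value is the probability of a value at least this
# extreme arising by chance.
t_stat, p_value = stats.ttest_ind(treated, baseline)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```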