ISN's machine learning model crunches safety data – ConstructConnect Canada


ISN, a global contractor and supplier information management company based in Texas, is using machine learning to predict how and when serious …

Interpretable Low-Resource Legal Decision Making Artificial Intelligence

Over the past several years, legal applications of deep learning have been on the rise. However, as with other high-stakes decision making areas, the requirement for interpretability is of crucial importance. Models currently used by legal practitioners are mostly of the conventional machine learning type: inherently interpretable, yet unable to harness the performance of data-driven deep learning models. In this work, we utilize deep learning models in the area of trademark law to shed light on the issue of likelihood of confusion between trademarks. Specifically, we introduce a model-agnostic interpretable intermediate layer, a technique which proves to be effective for legal documents. Furthermore, we utilize weakly supervised learning by means of a curriculum learning strategy, demonstrating the improved performance of a deep learning model. This is in contrast to conventional models, which can only use the limited number of samples expensively hand-annotated by legal experts. Although the methods presented in this work tackle the task of likelihood of confusion for trademarks, it is straightforward to extend them to other fields of law, or more generally, to other similar high-stakes application scenarios.
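The weak-supervision curriculum described above can be illustrated with a generic sketch (not the paper's model or data): a classifier is first trained on many cheap, noisy labels, then refined on a small expert-annotated set. The dataset, noise rate, and classifier here are all illustrative assumptions.

```python
# Minimal curriculum-style weak supervision sketch (illustrative, assumed setup):
# stage 1 learns from abundant noisy labels, stage 2 refines on scarce expert labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=2200, n_features=20, random_state=0)

# Small, expensive expert-annotated set; large, cheaply (weakly) labeled set.
X_expert, y_expert = X[:200], y[:200]
X_weak, y_weak = X[200:2000], y[200:2000].copy()
flip = rng.rand(len(y_weak)) < 0.15            # simulate 15% label noise in the weak set
y_weak[flip] = 1 - y_weak[flip]
X_test, y_test = X[2000:], y[2000:]

clf = SGDClassifier(random_state=0)
# Curriculum stage 1: learn coarse structure from the abundant weak labels.
clf.partial_fit(X_weak, y_weak, classes=np.array([0, 1]))
# Curriculum stage 2: refine on the small, clean expert set (several passes).
for _ in range(5):
    clf.partial_fit(X_expert, y_expert)

acc = clf.score(X_test, y_test)
print(f"held-out accuracy: {acc:.2f}")
```

The point of the two-stage schedule is that the noisy stage provides most of the signal a purely expert-labeled conventional model would never see.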

Hard machine learning can predict hard materials


Superhard materials are in high demand by industry, for use in applications ranging from energy production to aerospace, but finding suitable new materials has largely been a matter of trial and error, based on classical hard materials such as diamonds. In a paper in Advanced Materials, researchers from the University of Houston (UH) and Manhattan College report a machine-learning model that can accurately predict the hardness of new materials, allowing scientists to more readily find compounds suitable for use in a variety of applications. Materials that are superhard – defined as those with a hardness value exceeding 40 gigapascals on the Vickers scale, meaning it would take more than 40 gigapascals of pressure to leave an indentation on the material's surface – are rare. "That makes identifying new materials challenging," said Jakoah Brgoch, associate professor of chemistry at UH and corresponding author of the paper. "That is why materials like synthetic diamond are still used even though they are challenging and expensive to make."

A deep reinforcement learning model for predictive maintenance planning of road assets: Integrating LCA and LCCA Artificial Intelligence

Road maintenance planning is an integral part of road asset management. One of the main challenges in Maintenance and Rehabilitation (M&R) practice is determining maintenance type and timing. This research proposes a framework using Reinforcement Learning (RL) based on the Long Term Pavement Performance (LTPP) database to determine the type and timing of M&R practices. A predictive DNN model is first developed, which serves as the Environment for the RL algorithm. For policy estimation, both DQN and PPO models are developed; PPO was ultimately selected due to its better convergence and higher sample efficiency. The indicators used in this study are the International Roughness Index (IRI) and Rutting Depth (RD). We initially considered the Cracking Metric (CM) as a third indicator, but excluded it because it has far less data than the other indicators, which lowered the accuracy of the results. Furthermore, in the cost-effectiveness calculation (the reward), we considered both the economic and environmental impacts of M&R treatments. Costs and environmental impacts were evaluated with the paLATE 2.0 software. Our method is tested on a hypothetical case study of a 23-kilometer, six-lane highway located in Texas, which has a warm and wet climate. The results propose a 20-year M&R plan under which the road remains in the excellent condition range. Because the road initially offers a good level of service, no heavy maintenance practices are needed in the first years. Later, after heavy M&R actions, there are several one-to-two-year periods during which no treatment is needed. Together, these indicate that the proposed plan is logical. Decision-makers and transportation agencies can use this scheme to conduct better maintenance practices that prevent budget waste while minimizing environmental impacts.
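The framework above pairs a learned pavement model (the RL environment) with PPO. As a much-simplified illustration of the same state/action/reward framing, the toy sketch below uses tabular Q-learning (a stand-in, not the paper's method) with hypothetical costs, penalties, and transition rules:

```python
# Toy pavement-maintenance MDP solved with tabular Q-learning (illustrative only;
# the paper uses a DNN environment with PPO and paLATE-derived costs).
import numpy as np

rng = np.random.RandomState(0)
N_STATES = 5                      # discretized condition: 0 = excellent .. 4 = failed
ACTIONS = ["nothing", "minor", "major"]
COST = {"nothing": 0.0, "minor": 2.0, "major": 6.0}  # combined economic + environmental cost (toy units)

def step(state, action):
    """One year of pavement life under a given treatment."""
    if action == "major":
        nxt = 0                                             # full rehabilitation
    elif action == "minor":
        nxt = max(state - 1, 0)                             # partial improvement
    else:
        nxt = min(state + int(rng.rand() < 0.6), N_STATES - 1)  # stochastic deterioration
    reward = -COST[action] - 1.5 * nxt                      # treatment cost plus condition penalty
    return nxt, reward

Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1
for episode in range(5000):
    state = rng.randint(N_STATES)                           # start from a random condition
    for year in range(20):                                  # a 20-year planning horizon
        a = rng.randint(len(ACTIONS)) if rng.rand() < eps else int(Q[state].argmax())
        nxt, r = step(state, ACTIONS[a])
        Q[state, a] += alpha * (r + gamma * Q[nxt].max() - Q[state, a])
        state = nxt

policy = [ACTIONS[int(Q[s].argmax())] for s in range(N_STATES)]
print("learned policy by condition:", policy)
```

In the paper's setting the state is continuous (IRI, RD predicted by the DNN) and the reward combines paLATE 2.0 economic and environmental costs, but the learned pattern is the same: light or no treatment while the road is in good condition, heavier treatment as it deteriorates.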

How neural networks simulate symbolic reasoning


Researchers at the University of Texas have discovered a new way for neural networks to simulate symbolic reasoning. This discovery sparks an exciting path toward uniting deep learning and symbolic reasoning AI. In the new approach, each neuron has a specialized function that relates to specific concepts. "It opens the black box of standard deep learning models while also being able to handle more complex problems than what symbolic AI has typically handled," Paul Blazek, University of Texas Southwestern Medical Center researcher and one of the authors of the Nature paper, told VentureBeat. This work complements previous research on neurosymbolic methods such as MIT's Clevrer, which has shown some promise in predicting and explaining counterfactual possibilities more effectively than neural networks.

Simple Recurrent Neural Networks is all we need for clinical events predictions using EHR data Artificial Intelligence

Recently, there has been great interest in investigating the application of deep learning models to the prediction of clinical events using electronic health record (EHR) data. In EHR data, a patient's history is often represented as a sequence of visits, and each visit contains multiple events. As a result, deep learning models developed for sequence modeling, such as recurrent neural networks (RNNs), are a common architecture for EHR-based clinical event prediction models. While a large variety of RNN models have been proposed in the literature, it is unclear whether complex architectural innovations offer superior predictive performance. To move this field forward, a rigorous evaluation of the various methods is needed. In this study, we conducted a thorough benchmark of RNN architectures for modeling EHR data. We used two prediction tasks: the risk of developing heart failure and the risk of early readmission after inpatient hospitalization. We found that simple gated RNN models, including GRUs and LSTMs, often offer competitive results when properly tuned with Bayesian optimization, which is in line with findings in the natural language processing (NLP) domain. For reproducibility, our codebase is shared at
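The gated update at the heart of such simple GRU models can be sketched in plain NumPy. This is an illustrative, untrained toy, not the benchmark code; the dimensions, random parameters, and logistic readout are all assumptions, and each visit is assumed to be pre-encoded as a fixed-size event vector.

```python
# Minimal GRU cell over a sequence of visit vectors (illustrative, untrained).
import numpy as np

rng = np.random.RandomState(0)
d_in, d_h = 8, 16                  # event-vector size, hidden size (illustrative)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Randomly initialized GRU parameters (in practice, learned from EHR data).
Wz, Uz, bz = rng.randn(d_h, d_in) * 0.1, rng.randn(d_h, d_h) * 0.1, np.zeros(d_h)
Wr, Ur, br = rng.randn(d_h, d_in) * 0.1, rng.randn(d_h, d_h) * 0.1, np.zeros(d_h)
Wh, Uh, bh = rng.randn(d_h, d_in) * 0.1, rng.randn(d_h, d_h) * 0.1, np.zeros(d_h)

def gru_step(h, x):
    z = sigmoid(Wz @ x + Uz @ h + bz)          # update gate: how much to rewrite memory
    r = sigmoid(Wr @ x + Ur @ h + br)          # reset gate: how much history to consult
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)
    return (1 - z) * h + z * h_tilde           # interpolate old state and candidate

# A patient represented as a sequence of 5 visits; the final hidden state
# summarizes the history and feeds a (toy, untrained) logistic risk readout.
visits = rng.randn(5, d_in)
h = np.zeros(d_h)
for x in visits:
    h = gru_step(h, x)
w_out = rng.randn(d_h) * 0.1
risk = sigmoid(w_out @ h)
print("hidden state shape:", h.shape, "risk score:", round(float(risk), 3))
```

The study's finding is essentially that this well-tuned gated recurrence already captures most of what more elaborate architectures add on top.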

Hierarchical Gaussian Process Models for Regression Discontinuity/Kink under Sharp and Fuzzy Designs Machine Learning

We propose nonparametric Bayesian estimators for causal inference exploiting Regression Discontinuity/Kink (RD/RK) under sharp and fuzzy designs. Our estimators are based on Gaussian Process (GP) regression and classification. The GP methods are powerful probabilistic modeling approaches that are advantageous in terms of derivative estimation and uncertainty quantification, facilitating RK estimation and inference of RD/RK models. These estimators are extended to hierarchical GP models with an intermediate Bayesian neural network layer and can be characterized as hybrid deep learning models. Monte Carlo simulations show that our estimators perform comparably to, and often better than, competing estimators in terms of precision, coverage and interval length. The hierarchical GP models improve upon one-layer GP models substantially. An empirical application of the proposed estimators is provided.
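As a rough illustration of the sharp-design idea (not the authors' hierarchical estimator), one can fit a separate GP on each side of a known cutoff and read the treatment effect off the difference of the two predictions at the cutoff. The data-generating process, kernel, and jump size below are synthetic assumptions.

```python
# Sharp regression discontinuity via two one-sided GP regressions (illustrative).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.RandomState(0)
n, cutoff, true_jump = 200, 0.0, 2.0
x = rng.uniform(-1, 1, n)
# Smooth baseline plus a discontinuous treatment effect at the cutoff, with noise.
y = np.sin(2 * x) + true_jump * (x >= cutoff) + rng.normal(0, 0.1, n)

kernel = 1.0 * RBF(length_scale=0.5) + WhiteKernel(noise_level=0.01)
# One GP per side of the cutoff; the RD effect is the gap between their
# predictions at the cutoff point.
left = GaussianProcessRegressor(kernel=kernel).fit(x[x < cutoff].reshape(-1, 1), y[x < cutoff])
right = GaussianProcessRegressor(kernel=kernel).fit(x[x >= cutoff].reshape(-1, 1), y[x >= cutoff])
jump = right.predict([[cutoff]])[0] - left.predict([[cutoff]])[0]
print(f"estimated discontinuity: {jump:.2f}")
```

The paper's hierarchical variant replaces these plain GPs with GPs stacked on a Bayesian neural network layer, and the fuzzy design additionally models treatment take-up with GP classification.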

Neuro-Symbolic AI: An Emerging Class of AI Workloads and their Characterization Artificial Intelligence

Neuro-symbolic artificial intelligence is a novel area of AI research which seeks to combine traditional rules-based AI approaches with modern deep learning techniques. Neuro-symbolic models have already demonstrated the capability to outperform state-of-the-art deep learning models in domains such as image and video reasoning. They have also been shown to obtain high accuracy with significantly less training data than traditional models. Due to the recency of the field's emergence and relative sparsity of published results, the performance characteristics of these models are not well understood. In this paper, we describe and analyze the performance characteristics of three recent neuro-symbolic models. We find that symbolic models have less potential parallelism than traditional neural models due to complex control flow and low-operational-intensity operations, such as scalar multiplication and tensor addition. However, the neural aspect of computation dominates the symbolic part in cases where they are clearly separable. We also find that data movement poses a potential bottleneck, as it does in many ML workloads.
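The low-operational-intensity point can be made concrete with back-of-the-envelope roofline-style accounting (illustrative sizes and float32 assumed): elementwise tensor addition does one FLOP per element however large the tensors are, while matrix multiplication does O(n) FLOPs per element moved.

```python
# Operational intensity (FLOPs per byte moved) of tensor addition vs. matmul.
n = 1024
bytes_per_float = 4               # float32

# Elementwise addition of two n x n tensors: n^2 FLOPs, 3 n^2 floats moved
# (two inputs read, one output written).
add_flops = n * n
add_bytes = 3 * n * n * bytes_per_float
add_intensity = add_flops / add_bytes        # 1/12 FLOP per byte, independent of n

# n x n matrix multiplication: 2 n^3 FLOPs, same 3 n^2 floats moved.
mm_flops = 2 * n ** 3
mm_bytes = 3 * n * n * bytes_per_float
mm_intensity = mm_flops / mm_bytes           # grows linearly with n

print(f"add: {add_intensity:.3f} FLOP/byte, matmul: {mm_intensity:.1f} FLOP/byte")
```

This is why symbolic components dominated by scalar multiplications and tensor additions are memory-bound and offer little parallel arithmetic to hide data movement behind, consistent with the bottleneck the authors observe.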

QA Dataset Explosion: A Taxonomy of NLP Resources for Question Answering and Reading Comprehension Artificial Intelligence

Alongside huge volumes of research on deep learning models in NLP in recent years, there has also been much work on the benchmark datasets needed to track modeling progress. Question answering and reading comprehension have been particularly prolific in this regard, with over 80 new datasets appearing in the past two years. This study is the largest survey of the field to date. We provide an overview of the various formats and domains of the current resources, highlighting the current lacunae for future work. We further discuss the current classifications of "reasoning types" in question answering and propose a new taxonomy. We also discuss the implications of over-focusing on English, and survey the current monolingual resources for other languages and multilingual resources. The study is aimed at both practitioners looking for pointers to the wealth of existing data, and at researchers working on new resources.

How to build a machine learning model in 10 minutes


I spent the first era learning how to build models with tools like scikit-learn and TensorFlow, which was hard and took forever. I spent most of that time feeling insecure about all the things I didn't know. The second era – after I kind of knew what I was doing – I spent wondering why building machine learning models was so damn hard. After my insecurity cleared, I took a critical look at the machine learning tools we use today and realized this stuff is a lot harder than it needs to be. That's why I think the way we learn machine learning today is about to change. It's also why I'm always delighted when I discover a tool that makes model-building fun, intuitive, and frictionless.
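In that spirit, here is a minimal scikit-learn pipeline (my own illustrative example, not a specific tool the author endorses) that trains and scores a model in a handful of lines:

```python
# A complete train-and-evaluate loop on a bundled dataset, in a few lines.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scaling + logistic regression bundled into one fit/predict object.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
acc = model.score(X_test, y_test)
print(f"test accuracy: {acc:.2f}")
```

The friction the author describes lives mostly outside a snippet like this: getting real data into shape, validating honestly, and shipping the result.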