Machine Learning in Materials Science

#artificialintelligence

Before getting into what polymers are at the molecular level, let's look at some familiar materials that serve as good examples. Polymers include plastic, nylon, rubber, wood, protein, and DNA. In this article, we will focus primarily on synthetic polymers like plastic and nylon. At the molecular level, polymers are composed of long chains of repeating molecules. The molecule that repeats in this chain is known as a monomer (or subunit).
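
To make the monomer/polymer relationship concrete, here is a toy Python sketch; the SMILES-style carbon string and the chain length are illustrative assumptions, not a chemically rigorous encoding of polyethylene.

```python
# Toy illustration of the monomer/polymer relationship: a polymer chain as a
# monomer unit repeated many times. The SMILES-style string is a simplified
# stand-in for how such structures are often encoded as inputs to ML models.
monomer = "C" * 2               # ethylene-derived repeat unit: two backbone carbons
degree_of_polymerization = 10   # how many times the monomer repeats (illustrative)
polymer = monomer * degree_of_polymerization
print(polymer)                  # "CCCC..." -- a 20-carbon linear chain
print(len(polymer), "backbone carbon atoms")
```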


Classifying Music Genres with LightGBM

#artificialintelligence

Many machine learning algorithms perform poorly on data with an extremely large number of features (dimensions), particularly when many of those features are highly sparse. This is where dimension reduction can be useful. The idea is to project the high-dimensional data into a lower-dimensional subspace while retaining as much of the variance present in the data as possible. We will initially use two methods (PCA and t-SNE) to explore whether dimension reduction is appropriate for our lyric data, and to get an early indication of a good range of dimensions to reduce into.
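
As a rough illustration of that workflow, here is a minimal scikit-learn sketch; the random matrix stands in for the lyric features, and the component counts are placeholder choices rather than the article's actual values.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Toy stand-in for a high-dimensional, sparse lyric feature matrix
rng = np.random.default_rng(42)
X = rng.random((500, 1000))
X[X < 0.95] = 0.0  # zero out ~95% of entries to mimic sparse text features

# PCA: check how much variance a modest number of components retains
pca = PCA(n_components=50)
X_pca = pca.fit_transform(X)
print("variance retained:", pca.explained_variance_ratio_.sum())

# t-SNE: 2-D embedding for visual inspection (often run on the PCA output)
X_2d = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(X_pca)
print(X_2d.shape)
```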


Papers to Read on Stochastic Gradient Descent

#artificialintelligence

Abstract: We study the Stochastic Gradient Descent (SGD) algorithm in nonparametric statistics, kernel regression in particular. The directional bias property of SGD, which is known in the linear regression setting, is generalized to kernel regression. More specifically, we prove that SGD with a moderate, annealing step size converges along the direction of the eigenvector that corresponds to the largest eigenvalue of the Gram matrix. These facts are referred to as the directional bias properties; they may explain why an SGD-computed estimator can have a smaller generalization error than a GD-computed estimator. The application of our theory is demonstrated by simulation studies and a case study based on the FashionMNIST dataset.
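
A toy numerical probe of that claim might look like the following; the kernel, step-size schedule, and stopping time are arbitrary choices here, so this only gestures at the directional-bias effect rather than reproducing the paper's precise setting.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(-1, 1, n)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(n)

# RBF Gram matrix over the training inputs
gamma = 5.0
K = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)

# SGD on the kernel least-squares objective with an annealing step size
alpha = np.zeros(n)
for t in range(1, 20001):
    i = rng.integers(n)
    residual = K[i] @ alpha - y[i]
    alpha -= (0.1 / (1 + 0.001 * t)) * residual * K[i]

# Compare the estimator's direction with the top eigenvector of K
eigvals, eigvecs = np.linalg.eigh(K)   # eigenvalues in ascending order
top = eigvecs[:, -1]                   # eigenvector of the largest eigenvalue
cosine = abs(alpha @ top) / np.linalg.norm(alpha)
print(f"|cos(angle to top eigenvector)| = {cosine:.3f}")
```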


The History of Artificial Intelligence - Science in the News

#artificialintelligence

It began with the "heartless" Tin Man from The Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis. By the 1950s, we had a generation of scientists, mathematicians, and philosophers with the concept of artificial intelligence (or AI) culturally assimilated in their minds. One such person was Alan Turing, a young British polymath who explored the mathematical possibility of artificial intelligence. Turing suggested that humans use available information as well as reason in order to solve problems and make decisions, so why can't machines do the same thing? This was the logical framework of his 1950 paper, Computing Machinery and Intelligence, in which he discussed how to build intelligent machines and how to test their intelligence.


Amazon digs into ambient and generalizable intelligence at re:MARS

#artificialintelligence

Many, if not most, AI experts maintain that artificial general intelligence (AGI) is still many decades away, if not longer. And the AGI debate has been heating up over the past couple of months. However, according to Amazon, the route to "generalizable intelligence" begins with ambient intelligence. And it says that future is unfurling now.


HPE invests in TruEra for AI explainability and quality management

#artificialintelligence

With artificial intelligence (AI) and machine learning (ML) now serving as key attributes that make IT systems faster, more accurate, and more beneficial to an enterprise's bottom line, transparency into how these components work becomes more critical as well. Why? Biases can creep into AI/ML models just as they do in humans, and when they do, queries can go awry and skewed analytics can produce incorrect results. Explainable AI is important for trust, compliance, and building less-biased AI models. Both customers and regulators want to know more about what's inside the black box.
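
TruEra's own tooling isn't shown here, but as a flavor of what peeking inside the black box involves, here is a minimal sketch using scikit-learn's permutation importance on a synthetic black-box model; the dataset and model are stand-ins, not anything from TruEra or HPE.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a "black box" classification model
X, y = make_classification(n_samples=1000, n_features=10,
                           n_informative=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: shuffle one feature at a time and measure the
# drop in held-out score -- a simple, model-agnostic explanation signal
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"feature {i}: {result.importances_mean[i]:.3f}"
          f" +/- {result.importances_std[i]:.3f}")
```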


How to Create an AI Chatbot in a Python Framework

#artificialintelligence

Chatbots are software tools created to interact with humans through chat. The first chatbots were able to carry out simple conversations based on a complex system of rules. Using the Flask Python framework and the Kompose bot, you will be able to build intelligent chatbots. In this post, we will learn how to add a Kompose chatbot to the Flask Python framework. You will need a Kommunicate account to deploy the Python chatbot.
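
A minimal Flask skeleton for a chatbot webhook might look like the following; the /webhook route, payload fields, and reply format are assumptions for illustration, since the actual schema Kommunicate expects is defined in the Kompose documentation.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical webhook endpoint; the exact payload a bot platform sends
# is an assumption here -- consult the Kompose docs for the real schema.
@app.route("/webhook", methods=["POST"])
def webhook():
    data = request.get_json(force=True)
    user_message = data.get("message", "")
    # Trivial rule-based reply standing in for real bot logic
    if "hello" in user_message.lower():
        reply = "Hi there! How can I help you?"
    else:
        reply = "Sorry, I didn't understand that."
    return jsonify({"message": reply})

if __name__ == "__main__":
    app.run(port=5000)
```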


How do Kernel Regularizers work with neural networks?

#artificialintelligence

Regularization is the process of fine-tuning neural network models by introducing a penalty term into the loss function, in order to obtain an optimal and reliable model that converges with minimal loss during testing and performs better on unseen data. Regularization helps us get a more generic and reliable model that holds up well against changes in data patterns and any possible uncertainties. So in this article, let us see how kernel regularizers work with neural networks and at which layers of a neural network they are best placed to obtain an optimal model. Regularization adds penalty factors to the network layers, altering how weights propagate through the layers and helping the model converge optimally. There are two main types of penalty that can be enforced on the network layers: L1 regularization, which penalizes the absolute values of the layer weights, and L2 regularization, which penalizes their squares.
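
In Keras, these penalties attach to individual layers through the kernel_regularizer argument. Here is a minimal sketch; the layer sizes and penalty strengths are illustrative placeholders, not tuned values.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Small classifier with kernel regularizers attached to the dense layers
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),  # L2: squared weights
    layers.Dense(32, activation="relu",
                 kernel_regularizer=regularizers.l1(1e-5)),  # L1: absolute weights
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```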


Humans in the loop help robots find their way: Computer scientists' interactive program aids motion planning for environments with obstacles

#artificialintelligence

Engineers at Rice University have developed a method that allows humans to help robots "see" their environments and carry out tasks. The strategy, called Bayesian Learning IN the Dark (BLIND for short), is a novel solution to the long-standing problem of motion planning for robots that work in environments where not everything is clearly visible all the time. The peer-reviewed study, led by computer scientists Lydia Kavraki and Vaibhav Unhelkar with co-lead authors Carlos Quintero-Peña and Constantinos Chamzas of Rice's George R. Brown School of Engineering, was presented at the Institute of Electrical and Electronics Engineers' International Conference on Robotics and Automation in late May. The algorithm, developed primarily by Quintero-Peña and Chamzas, both graduate students working with Kavraki, keeps a human in the loop to "augment robot perception and, importantly, prevent the execution of unsafe motion," according to the study. To do so, they combined Bayesian inverse reinforcement learning (by which a system learns from continually updated information and experience) with established motion planning techniques to assist robots that have "high degrees of freedom," that is, many moving parts.
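
To give a flavor of what "continually updated" means here, below is a generic Bayesian belief update over whether path segments are safe, given noisy human feedback. This is an illustrative sketch only, not the BLIND algorithm itself; the segment model and human-accuracy figure are assumptions.

```python
import numpy as np

n_segments = 5
p_safe = np.full(n_segments, 0.5)   # uninformative prior per path segment
accuracy = 0.9                      # assumed reliability of human labels

def update(prior, says_safe, acc=accuracy):
    # Posterior P(safe | feedback) via Bayes' rule with a noisy-label model
    like_safe = acc if says_safe else 1 - acc
    like_unsafe = 1 - acc if says_safe else acc
    return like_safe * prior / (like_safe * prior + like_unsafe * (1 - prior))

feedback = [True, True, False, True, False]   # simulated human responses
for i, says_safe in enumerate(feedback):
    p_safe[i] = update(p_safe[i], says_safe)
print(np.round(p_safe, 3))   # segments the planner should trust vs. avoid
```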


Dr. Zubin Jelveh: Machine Learning Can Predict Shooting Victimization Well Enough to Help Prevent It - UMD College of Information Studies

#artificialintelligence

Using arrest and victimization records from the Chicago Police Department, a machine learning model can predict the risk of being shot in the next 18 months. UMD College of Information Studies Assistant Professor Zubin Jelveh, alongside co-authors Sara B. Heller of the University of Michigan, Benjamin Jakubowski of the Courant Institute of Mathematical Sciences, and Max Kapustin of the Brooks School of Public Policy, recently published a paper showing that shootings are predictable enough to be preventable. Using arrest and victimization records for almost 644,000 people from the Chicago Police Department, the team trained a machine learning model to predict the risk of being shot in the next 18 months. They addressed central concerns about police data and algorithmic bias by predicting shooting victimization rather than arrest, which accurately captures risk differences across demographic groups despite bias in the predictors. Out-of-sample accuracy is strikingly high: of the 500 people with the highest predicted risk, 13 percent are shot within 18 months, a rate 130 times that of the average Chicagoan.
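
As a quick sanity check on those figures: a 13 percent rate that is 130 times the citywide average implies an average 18-month victimization rate of roughly 0.1 percent.

```python
# Sanity check on the reported figures: 13% among the top-500 being 130x the
# average implies a citywide 18-month shooting-victimization rate of ~0.1%.
top_rate, multiplier = 0.13, 130
print(f"implied average rate: {top_rate / multiplier:.2%}")  # 0.10%
```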