Machine Learning


Deep Learning, Knowledge Representation and Reasoning

Journal of Artificial Intelligence Research

The recent success of deep neural networks at tasks such as language modelling, computer vision, and speech recognition has attracted considerable interest from industry and academia. Achieving a better understanding and wider adoption of such models calls for combining Knowledge Representation and Reasoning with sound Machine Learning methodologies and systems. The goal of this special track, which closed in 2017, was to serve as a home for the publication of leading research in deep learning for cognitive tasks, focusing on applications of neural computation to advanced AI tasks that require knowledge representation and reasoning.


Columbia University DSI Alumni Use Machine Learning to Discover Coronavirus Treatments - insideBIGDATA

#artificialintelligence

Two graduates of the Data Science Institute (DSI) at Columbia University are using computational design to quickly discover treatments for the coronavirus. Andrew Satz and Brett Averso are chief executive officer and chief technology officer, respectively, of EVQLV, a startup creating algorithms capable of computationally generating, screening, and optimizing hundreds of millions of therapeutic antibodies. They apply their technology to discover treatments most likely to help those infected by the virus responsible for COVID-19.


Machine Learning in Python: Principal Component Analysis (PCA) for Handling High-Dimensional Data

#artificialintelligence

In this video, I will be showing you how to perform principal component analysis (PCA) in Python using the scikit-learn package. PCA is a powerful approach that enables the analysis of high-dimensional data and reveals the contribution of individual descriptors to the distribution of data clusters. In particular, we will be creating a PCA scree plot, a scores plot, and a loadings plot. This video is part of the [Python Data Science Project] series.
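A minimal sketch of the workflow the video describes, assuming scikit-learn's built-in iris dataset as a stand-in for the descriptors; the video's actual dataset and plot styling may differ.

```python
# Minimal PCA sketch with scikit-learn: scree, scores, and loadings plots.
# The iris dataset is an assumed stand-in; the video's own data may differ.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_scaled = StandardScaler().fit_transform(X)  # PCA is scale-sensitive

pca = PCA()
scores = pca.fit_transform(X_scaled)   # samples projected onto the PCs
loadings = pca.components_.T           # descriptor contributions per PC

fig, axes = plt.subplots(1, 3, figsize=(15, 4))

# Scree plot: variance explained by each principal component.
axes[0].bar(range(1, len(pca.explained_variance_ratio_) + 1),
            pca.explained_variance_ratio_)
axes[0].set(xlabel="Principal component", ylabel="Explained variance ratio",
            title="Scree plot")

# Scores plot: samples in the space of the first two PCs.
axes[1].scatter(scores[:, 0], scores[:, 1], c=y)
axes[1].set(xlabel="PC1", ylabel="PC2", title="Scores plot")

# Loadings plot: how each descriptor contributes to PC1 and PC2.
axes[2].scatter(loadings[:, 0], loadings[:, 1])
axes[2].set(xlabel="PC1 loading", ylabel="PC2 loading", title="Loadings plot")

plt.tight_layout()
plt.show()
```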


AI at the Edge Still Mostly Consumer, not Enterprise, Market

#artificialintelligence

Data-driven experiences are rich, immersive and immediate. Think pizza delivery by drone, video cameras that can record traffic accidents at an intersection, freight trucks that can identify a potential system failure. These kinds of fast-acting activities need lots of data -- quickly. So they can't sustain latency as data travels to and from the cloud. That to-and-fro takes too long.


Deep reinforcement learning for supply chain and price optimization

#artificialintelligence

Supply chain and price management were among the first areas of enterprise operations to adopt data science and combinatorial optimization, and they have a long history of using these techniques with great success. Although a wide range of traditional optimization methods is available for inventory and price management applications, deep reinforcement learning has the potential to substantially improve the optimization capabilities for these and other types of enterprise operations, thanks to impressive recent advances in generic self-learning algorithms for optimal control. In this article, we explore how deep reinforcement learning methods can be applied in several basic supply chain and price management scenarios.

The traditional price optimization process in retail or manufacturing environments is typically framed as a what-if analysis of different pricing scenarios using some sort of demand model. In many cases, developing the demand model is the hard part, because it has to capture a wide range of factors and variables that influence demand, including regular prices, discounts, marketing activities, seasonality, competitor prices, cross-product cannibalization, and halo effects. Once the demand model is developed, however, optimizing the pricing decisions is relatively straightforward, and standard techniques such as linear or integer programming typically suffice.

For instance, consider an apparel retailer that purchases a seasonal product at the beginning of the season and has to sell it out by the end of the period. Assuming that the retailer chooses pricing levels from a discrete set (e.g., \$59.90, \$69.90, etc.) and can change the price frequently (e.g., weekly), we can pose the following optimization problem, in which the first constraint ensures that each time interval has only one price, and the second constraint ensures that total demand matches the available stock level.
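The problem is easiest to state with binary variables: let $x_{tj} = 1$ if price level $p_j$ is chosen in week $t$, let $d(t, p_j)$ be the demand predicted by the demand model, and let $c$ be the initial stock. A formulation consistent with the two constraints described above (a reconstruction, not the article's verbatim equation) is

$$\max_{x} \sum_{t=1}^{T} \sum_{j=1}^{J} p_j \, d(t, p_j) \, x_{tj} \quad \text{s.t.} \quad \sum_{j=1}^{J} x_{tj} = 1 \;\; \forall t, \qquad \sum_{t=1}^{T} \sum_{j=1}^{J} d(t, p_j) \, x_{tj} = c, \qquad x_{tj} \in \{0, 1\}.$$

As a concrete illustration of the "linear or integer programming" remark, here is a minimal sketch that solves a toy instance with scipy.optimize.milp. The demand model, stock level, and all numbers are invented for the example, and the sell-out equality is relaxed to an inequality so the toy instance stays feasible.

```python
# Toy price-scheduling integer program, sketched with scipy.optimize.milp.
# The demand model and all numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

T, J = 8, 4                                  # weeks in season, price levels
prices = np.array([59.90, 69.90, 79.90, 89.90])
# Hypothetical demand model d(t, p): falls with price, decays over the season.
demand = (240.0 - 2.0 * prices)[None, :] * np.linspace(1.0, 0.6, T)[:, None]
stock = 0.9 * demand.mean() * T              # available units (assumed)

# Decision variables x[t, j] in {0, 1}, flattened to length T*J.
revenue = (prices[None, :] * demand).ravel()

# Constraint 1: exactly one price level per week.
one_price = np.zeros((T, T * J))
for t in range(T):
    one_price[t, t * J:(t + 1) * J] = 1.0

# Constraint 2: total demand must not exceed stock (the article states an
# equality, i.e. sell out exactly; relaxed here to keep the toy feasible).
stock_row = demand.ravel()[None, :]

res = milp(
    c=-revenue,                              # milp minimizes, so negate revenue
    constraints=[LinearConstraint(one_price, 1, 1),
                 LinearConstraint(stock_row, -np.inf, stock)],
    integrality=np.ones(T * J),
    bounds=Bounds(0, 1),
)
schedule = res.x.reshape(T, J).argmax(axis=1)
print("weekly prices:", prices[schedule])
print("revenue:", -res.fun)
```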



Army Seeks AI Ground Truth

#artificialintelligence

Deep neural networks are being mustered by U.S. military researchers to marshal new technology forces on the Internet of Battlefield Things. U.S. Army and industry researchers said this week they have developed a "confidence metric" for assessing the reliability of AI and machine learning algorithms used in deep neural networks. The metric seeks to boost reliability by limiting predictions strictly to what the system was trained on. The goal is to develop AI-based systems that are less prone to deception when presented with information beyond their training. SRI International has been working since 2018 with the Army Research Laboratory as part of the service's Internet of Battlefield Things Collaborative Research Alliance.
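The article does not spell out the metric itself; the sketch below illustrates the general idea with a simple, commonly used stand-in (not SRI's method): thresholding a classifier's softmax confidence so the system abstains on inputs that look unlike its training data. The threshold value and function names are assumptions for illustration.

```python
# Illustrative stand-in for a prediction-confidence gate (not SRI's metric):
# abstain when softmax confidence falls below a threshold, so inputs far
# from the training distribution yield "no prediction" instead of a guess.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()          # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum()

def predict_with_confidence(logits: np.ndarray, threshold: float = 0.9):
    """Return (class, confidence), or (None, confidence) to abstain."""
    probs = softmax(logits)
    confidence = float(probs.max())
    if confidence < threshold:         # threshold is an assumed tuning knob
        return None, confidence
    return int(probs.argmax()), confidence

# In-distribution-looking input: one logit dominates, so predict class 0.
print(predict_with_confidence(np.array([8.0, 1.0, 0.5])))
# Out-of-distribution-looking input: logits nearly uniform, so abstain.
print(predict_with_confidence(np.array([1.1, 1.0, 0.9])))
```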


Artificial Intelligence A-Z : Learn How To Build An AI

#artificialintelligence

Combine the power of Data Science, Machine Learning and Deep Learning to create powerful AI for real-world applications! Related courses: Your CCNA start; Deep Learning A-Z: Hands-On Artificial Neural Networks; Deep Learning and Computer Vision A-Z: OpenCV, SSD & GANs; Artificial Intelligence for Business; ZERO to GOD Python 3.8 FULL STACK MASTERCLASS 45 AI projects.


Google is using AI to design AI processors much faster than humans can

#artificialintelligence

To one extent or another, artificial intelligence is practically everywhere these days, from games to image upscaling to smartphone "personal assistants." More than ever, researchers are pouring a ton of time, money, and effort into AI designs. At Google, AI algorithms are even being used to design AI chips. Google is not tackling a complete silicon design, but a subset of chip design known as placement optimization, a time-consuming task for humans.
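Placement optimization means deciding where blocks sit on the chip so that, for example, total wire length stays small. Below is a minimal sketch of that idea using simulated annealing on a toy grid; the netlist, cost function, and every parameter are invented for illustration and are unrelated to Google's actual learning-based approach.

```python
# Toy placement optimization: put cells on a grid to minimize total wirelength.
# Simulated annealing with random swaps; all data and parameters are invented.
import math
import random

random.seed(0)
GRID, N_CELLS, N_NETS = 8, 16, 24

# Random netlist: each net connects two distinct cells.
nets = [random.sample(range(N_CELLS), 2) for _ in range(N_NETS)]

# Initial placement: each cell gets a distinct (x, y) slot on the grid.
slots = random.sample([(x, y) for x in range(GRID) for y in range(GRID)],
                      N_CELLS)
place = dict(enumerate(slots))

def wirelength(place):
    """Total Manhattan distance over all nets (a standard placement proxy)."""
    return sum(abs(place[a][0] - place[b][0]) + abs(place[a][1] - place[b][1])
               for a, b in nets)

cost, temp = wirelength(place), 5.0
for step in range(20000):
    a, b = random.sample(range(N_CELLS), 2)
    place[a], place[b] = place[b], place[a]        # propose swapping two cells
    new_cost = wirelength(place)
    # Metropolis rule: keep improvements, occasionally keep uphill moves.
    if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
        cost = new_cost
    else:
        place[a], place[b] = place[b], place[a]    # undo the swap
    temp *= 0.9997                                 # cool down slowly

print("final wirelength:", cost)
```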


Researchers find AI is bad at predicting GPA, grit, eviction, job training, layoffs, and material hardship

#artificialintelligence

A paper coauthored by more than 112 researchers across 160 data and social science teams found that AI and statistical models, when used to predict six life outcomes for children, parents, and households, weren't very accurate even when trained on 13,000 data points from over 4,000 families. The authors assert that the work is a cautionary tale about the use of predictive modeling, especially in the criminal justice system and social support programs. "Here's a setting where we have hundreds of participants and a rich data set, and even the best AI results are still not accurate," said study co-lead author Matt Salganik, a professor of sociology at Princeton and interim director of the Center for Information Technology Policy at the Woodrow Wilson School of Public and International Affairs. "These results show us that machine learning isn't magic; there are clearly other factors at play when it comes to predicting the life course." The study, published this week in the journal Proceedings of the National Academy of Sciences, is the fruit of the Fragile Families Challenge, a multi-year collaboration that recruited researchers to tackle a common predictive task: forecasting the same outcomes from the same data.