Statistical Learning


How to Prepare for an Automated Future: 7 Steps to Machine Learning

#artificialintelligence

The increasingly digital economy requires boards and executives to have a solid understanding of the rapidly changing digital landscape, and artificial intelligence (AI) naturally plays a central role in it. Organisations that want to prepare for an automated future should have a thorough understanding of AI. However, AI is an umbrella term that covers multiple disciplines, each affecting the business in a slightly different way: artificial intelligence consists of the seamless integration of robotics, cognitive systems and machine learning.


Beyond CUDA: GPU Accelerated Python for Machine Learning on Cross-Vendor Graphics Cards Made Simple

#artificialintelligence

Machine learning algorithms, together with many other advanced data processing paradigms, fit incredibly well with the parallel architecture that GPU computing offers. This has driven massive growth in the advancement and adoption of graphics cards for accelerated computing in recent years, and it has also driven exciting research around techniques that optimise for concurrency, such as model parallelism and data parallelism. In this article you'll learn how to write your own GPU accelerated algorithms in Python, which you will be able to run on virtually any GPU hardware, including non-NVIDIA GPUs. We'll introduce core concepts and show how you can get started with the Kompute Python framework with only a handful of lines of code. First we will build a simple GPU accelerated Python script that multiplies two arrays in parallel, which will introduce the fundamentals of GPU processing.
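To give a flavour of what that looks like, here is a minimal sketch of the array-multiply example. It assumes the Kompute Python bindings (the kp package) expose the Manager/tensor/sequence API roughly as in the project's documentation, and that the compute shader has already been compiled offline to SPIR-V (for instance with glslangValidator) as a hypothetical file multiply.spv; treat it as an outline to check against the current API, not copy-paste-ready code.

```python
# Minimal sketch of GPU array multiplication with Kompute (Python bindings).
# Assumes: `pip install kp numpy`, a Vulkan-capable GPU, and `multiply.spv`,
# a compute shader compiled offline to SPIR-V that writes out[i] = a[i] * b[i].
import numpy as np
import kp

mgr = kp.Manager()  # picks an available Vulkan device

# Host data is mirrored into GPU tensors.
tensor_a = mgr.tensor(np.array([2.0, 4.0, 6.0], dtype=np.float32))
tensor_b = mgr.tensor(np.array([1.0, 2.0, 3.0], dtype=np.float32))
tensor_out = mgr.tensor(np.zeros(3, dtype=np.float32))
params = [tensor_a, tensor_b, tensor_out]

spirv = open("multiply.spv", "rb").read()  # precompiled shader bytes
algo = mgr.algorithm(params, spirv)

# Record and run: upload inputs, dispatch the shader, download the result.
(mgr.sequence()
    .record(kp.OpTensorSyncDevice(params))
    .record(kp.OpAlgoDispatch(algo))
    .record(kp.OpTensorSyncLocal([tensor_out]))
    .eval())

print(tensor_out.data())  # expected: [ 2.  8. 18.]
```

Because Kompute sits on Vulkan rather than CUDA, the same script runs on AMD, Intel and NVIDIA GPUs alike, which is exactly the cross-vendor point the article makes.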


Paper: Bayesian statistics and modelling

#artificialintelligence

Bayesian statistics and modelling is an open access paper published in the first volume of Nature Reviews Methods Primers. Bayesian statistics is an approach to data analysis based on Bayes' theorem, in which available knowledge about the parameters of a statistical model is updated with the information in observed data. The background knowledge is expressed as a prior distribution and combined with the observed data, in the form of a likelihood function, to determine the posterior distribution; the posterior can also be used for making predictions about future events. This Primer describes the stages involved in Bayesian analysis, from specifying the prior and data models to deriving inference, model checking and refinement.
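As a reminder of the machinery the Primer walks through, the update is just Bayes' theorem applied to parameters theta and data y (standard notation, not necessarily the paper's):

```latex
% Bayes' theorem: the posterior is proportional to likelihood times prior.
p(\theta \mid y) \;=\; \frac{p(y \mid \theta)\, p(\theta)}{p(y)}
\;\propto\; p(y \mid \theta)\, p(\theta)
```

Here p(theta) is the prior distribution, p(y | theta) the likelihood, and p(theta | y) the posterior used for inference, prediction and model checking.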



EETimes - ReRAM Machine Learning Embraces Variability

#artificialintelligence

TORONTO--Sometimes a problem can become its own solution. For CEA-Leti scientists, it means that traits of resistive-RAM (ReRAM) devices previously considered "non-ideal" may be the answer to overcoming barriers to developing ReRAM-based edge-learning systems, as outlined in a recent Nature Electronics publication titled "In-situ learning using intrinsic memristor variability via Markov chain Monte Carlo sampling." It describes how ReRAM, or memristor, technology can be used to create intelligent systems that learn locally at the edge, independent of the cloud. Thomas Dalgaty, a CEA-Leti scientist at France's Université Grenoble, explained how the team navigated the intrinsic non-idealities of ReRAM technology, chief among them the device programming randomness, or variability, that the learning algorithms used in current ReRAM-based edge approaches cannot be reconciled with. In a telephone interview with EE Times, he said the solution was to implement a Markov chain Monte Carlo (MCMC) sampling learning algorithm in a fabricated chip that acts as a Bayesian machine-learning model and actively exploits memristor randomness. For the purposes of the research, Dalgaty said, it is important to clearly define what is meant by an edge system.
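The chip itself is beyond a snippet, but the algorithmic idea, Metropolis-style MCMC in which random perturbations act as the proposal mechanism (the role the memristors' programming variability plays in hardware), can be sketched in plain Python. Everything below (the toy logistic model, data and step size) is illustrative, not the paper's implementation:

```python
# Illustrative Metropolis MCMC for a tiny Bayesian logistic-regression model.
# On the CEA-Leti chip the random "proposal" comes from memristor programming
# variability; here np.random stands in for it. Toy example only.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D dataset: two Gaussian blobs with labels 0 and 1.
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(+1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def log_posterior(w):
    """Log prior (standard normal on weights) + Bernoulli log likelihood."""
    logits = X @ w
    log_lik = np.sum(y * logits - np.log1p(np.exp(logits)))
    log_prior = -0.5 * np.sum(w ** 2)
    return log_lik + log_prior

w = np.zeros(2)  # current sample of the weights
samples = []
for _ in range(5000):
    w_prop = w + rng.normal(0, 0.3, size=2)  # random proposal ("variability")
    # Metropolis rule: accept with probability min(1, posterior ratio).
    if np.log(rng.uniform()) < log_posterior(w_prop) - log_posterior(w):
        w = w_prop
    samples.append(w)

print("posterior mean of weights:", np.mean(samples[1000:], axis=0))
```

The point the paper makes is that the randomness a conventional training algorithm would fight against is exactly the randomness an MCMC sampler needs, so the device non-ideality becomes the compute primitive.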



Machine Learning A-Z: Hands-On Python & R In Data Science

#artificialintelligence

Learn to create Machine Learning Algorithms in Python and R from two Data Science experts. In this section we're talking about the K-means clustering algorithm, and in this tutorial we're going to cover the intuition behind K-means. K-means is an algorithm that allows you to cluster your data and, as we will see, it's a very convenient tool for discovering categories or groups in your dataset that you wouldn't have otherwise thought of yourself. In this specific tutorial we'll learn how to understand K-means on an intuitive level, and we'll see an example of how it works (a quick sketch in code follows below).
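The course builds the intuition before writing any code, but for readers who want to see K-means in action right away, here is a minimal sketch using scikit-learn; the cluster count and toy data are arbitrary choices for illustration, not the course's own example:

```python
# Minimal K-means demo with scikit-learn on toy 2-D data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Three synthetic "categories" the algorithm should rediscover.
X = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(100, 2)),
    rng.normal(loc=(0, 5), scale=0.5, size=(100, 2)),
])

# Fit K-means with k=3; n_init restarts guard against bad initialisations.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster centers:\n", kmeans.cluster_centers_)
print("labels of first five points:", kmeans.labels_[:5])
```

The fitted centers should land near (0, 0), (5, 5) and (0, 5), which is the "discovering groups you wouldn't have thought of" behaviour the tutorial describes.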



US Sanctions on Russia Rewrite Cyberespionage's Rules

WIRED

Less than four months after the revelation of one of the biggest hacking events in history--Russia's massive breach of thousands of networks that's come to be known as the SolarWinds hack--the US has now sent the Kremlin a message in the form of a punishing package of diplomatic and economic measures. But even as the retribution for SolarWinds becomes clear, the question remains: What exactly is that message? By most any interpretation, it doesn't seem to be based on a rule that the United States has ever spelled out before. On Thursday, the Biden administration fulfilled its repeated promises of retaliation for both the SolarWinds hacking campaign and a broad array of other Russian misbehavior that includes the Kremlin's continuing disinformation operations and other interference in the 2020 election, the poisoning of Putin political adversary Aleksey Navalny, and even older Russian misdeeds including the NotPetya worm and the cyberattack on the 2018 Winter Olympics. The Treasury Department has leveled new sanctions at six cybersecurity companies with purported ties to Russian intelligence services, as well as four organizations associated with its disinformation operations.


Maximum entropy RL (provably) solves some robust RL problems

AIHub

Nearly all real-world applications of reinforcement learning involve some degree of shift between the training environment and the testing environment. However, prior work has observed that even small shifts in the environment cause most RL algorithms to perform markedly worse. As we aim to scale reinforcement learning algorithms and apply them in the real world, it is increasingly important to learn policies that are robust to changes in the environment. Broadly, prior approaches to handling distribution shift in RL aim to maximize performance in either the average case or the worst case. While these methods have been successfully applied to a number of areas (e.g., self-driving cars, robot locomotion and manipulation), their success rests critically on the design of the distribution of environments.
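For readers who want the one-line formalism behind the post: the maximum entropy RL objective augments expected return with an entropy bonus on the policy, weighted by a temperature alpha (standard notation; the post's exact formulation may differ):

```latex
% Maximum entropy RL: expected return plus a policy-entropy bonus,
% weighted by a temperature parameter alpha.
J_{\text{MaxEnt}}(\pi) \;=\; \mathbb{E}_{\pi}\!\left[\sum_{t} r(s_t, a_t)
  \;+\; \alpha\, \mathcal{H}\big(\pi(\cdot \mid s_t)\big)\right]
```

The entropy term keeps the policy stochastic rather than committing to a single brittle behaviour, which is the property the post connects to robustness under environment shift.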