
Using SHAP to Explain Machine Learning Models


Do you understand how your machine learning model works? Despite the ever-increasing usage of machine learning (ML) and deep learning (DL) techniques, the majority of companies say they can't explain the decisions of their ML algorithms [1]. This is, at least in part, due to the increasing complexity of both the data and models used. It's not easy to find a nice, stable aggregation over 100 decision trees in a random forest to say which features were most important or how the model came to the conclusion it did. This problem grows even more complex in application domains such as computer vision (CV) or natural language processing (NLP), where we no longer have the same high-level, understandable features to help us understand the model's failures.
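The idea behind SHAP is to assign each feature a Shapley value: its average marginal contribution to the prediction, taken over all subsets of the other features. As a toy illustration (a made-up two-feature linear model, not an example from the article or from [1]), the exact Shapley values can be computed by brute force:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values: weighted average of each feature's marginal
    contribution over all subsets of the remaining features.
    Exponential in n_features, so only usable for tiny toy models."""
    phi = [0.0] * n_features
    features = list(range(n_features))
    for i in features:
        others = [f for f in features if f != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n_features - len(S) - 1) \
                    / factorial(n_features)
                phi[i] += w * (value_fn(set(S) | {i}) - value_fn(set(S)))
    return phi

# Hypothetical "model": f(x) = 2*x0 + 1*x1, explained at x = (1, 1),
# with missing features treated as contributing a baseline of 0.
x = [1.0, 1.0]
weights = [2.0, 1.0]

def value_fn(S):
    return sum(weights[j] * x[j] for j in S)

phi = shapley_values(value_fn, 2)
print(phi)  # [2.0, 1.0]: each feature credited with its own contribution
```

For a linear model with independent features this recovers each feature's own contribution exactly; real SHAP libraries use model-specific algorithms (such as a tree-path method for random forests) to avoid the exponential enumeration.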

Understanding BERT with Hugging Face - KDnuggets


In a recent post on BERT, we discussed BERT transformers and how they work on a basic level. The article covered BERT's architecture, training data, and training tasks. However, we don't really understand something until we implement it ourselves. So in this post, we will implement a question answering neural network using BERT and the Hugging Face library. In this task, our BERT architecture is given a question and a paragraph in which the answer lies, and the objective is to determine the start and end of the answer span within the paragraph.
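The final step of extractive QA described above can be sketched without any model at all: given per-token start and end scores (in the real system, the outputs of BERT's two span-prediction heads), pick the highest-scoring valid span. The scores below are made up for illustration:

```python
def best_span(start_logits, end_logits, max_len=15):
    """Return (start, end) token indices maximizing start_score + end_score,
    subject to start <= end and a maximum span length."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_score in enumerate(start_logits):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_score + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best

# Toy scores for a 4-token paragraph: token 1 looks like a span start,
# token 2 like a span end.
start = [0.1, 2.0, 0.3, 0.0]
end   = [0.0, 0.1, 1.5, 0.2]
print(best_span(start, end))  # (1, 2)
```

In the full system, the selected token indices are mapped back to character offsets in the paragraph to produce the answer text.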

How to Create Unbiased Machine Learning Models - KDnuggets


AI systems are becoming increasingly popular and central in many industries. They decide who might get a loan from the bank or whether an individual should be convicted, and in the near future we may even entrust them with our lives in systems such as autonomous vehicles. Thus, there is a growing need for mechanisms to harness and control these systems so that we can ensure they behave as desired. One important issue that has been gaining attention in the last few years is fairness. While ML models are usually evaluated on metrics such as accuracy, fairness requires that we also ensure our models are unbiased with regard to sensitive attributes such as gender and race.
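One simple fairness check of the kind hinted at above is demographic parity: compare the model's positive-prediction rate across groups defined by a sensitive attribute. A minimal sketch on synthetic predictions (the data and groups are invented for illustration):

```python
def positive_rate(preds):
    """Fraction of positive (e.g. loan-approval) predictions."""
    return sum(preds) / len(preds)

# Binary predictions (1 = approve) for two groups defined by a
# sensitive attribute. These numbers are synthetic.
group_a = [1, 1, 0, 1, 0, 1]   # approval rate 4/6
group_b = [0, 1, 0, 0, 1, 0]   # approval rate 2/6

# Demographic parity difference: 0 means equal treatment under this metric.
gap = positive_rate(group_a) - positive_rate(group_b)
print(round(gap, 3))  # 0.333
```

A large gap flags potential bias, though demographic parity is only one of several competing fairness definitions (equalized odds, calibration, and others), and they cannot all be satisfied simultaneously in general.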

7 Open Source Libraries for Deep Learning Graphs - KDnuggets


If you're a deep learning enthusiast, you're probably already familiar with some of the basic mathematical primitives that have been driving the impressive capabilities of what we call deep neural networks. Although we like to think of a basic artificial neural network as some nodes with some weighted connections, it's computationally more efficient to think of neural networks as matrix multiplication all the way down. We might draw a cartoon of an artificial neural network like the figure below, with information traveling from left to right, from inputs to outputs (ignoring recurrent networks for now). This type of neural network is a feed-forward multilayer perceptron (MLP). If we want a computer to compute the forward pass for this model, it's going to use a string of matrix multiplies and some sort of non-linearity (represented by the Greek letter sigma) in the hidden layer. MLPs are well-suited for data that can be naturally shaped as 1D vectors.
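The forward pass described above is just a couple of matrix multiplies with a non-linearity in between. A minimal sketch in NumPy, with the layer sizes chosen arbitrarily for illustration:

```python
import numpy as np

def sigma(z):
    """Logistic non-linearity (one common choice for the sigma above)."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x  = rng.normal(size=(4,))       # 1D input vector
W1 = rng.normal(size=(3, 4))     # input -> hidden weights
W2 = rng.normal(size=(2, 3))     # hidden -> output weights

# Forward pass: matrix multiply, non-linearity, matrix multiply.
h = sigma(W1 @ x)                # hidden activations
y = W2 @ h                       # output (left linear here)
print(y.shape)  # (2,)
```

Bias terms are omitted to keep the sketch minimal; in practice each layer also adds a bias vector before the non-linearity.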

Researchers Hid Malware Inside an AI's 'Neurons' And It Worked Scarily Well


Neural networks could be the next frontier for malware campaigns as they become more widely used, according to a new study posted to the arXiv preprint server on Monday. The study shows that malware can be embedded directly into the artificial neurons that make up machine learning models, using a technique called steganography, in a way that keeps it from being detected; the neural network can even continue performing its set tasks normally. The authors concluded that a 178MB AlexNet model can hold up to 36.9MB of malware embedded in its structure without detection. "As neural networks become more widely used, this method will be universal in delivering malware in the future," the authors, from the University of the Chinese Academy of Sciences, write.

DeepMind's AI predicts structures for a vast trove of proteins


The human mediator complex has long been one of the most challenging multi-protein systems for structural biologists to understand. (Credit: Yuan He.) The human genome holds the instructions for more than 20,000 proteins. But only about one-third of those have had their 3D structures determined experimentally. And in many cases, those structures are only partially known. Now, a transformative artificial intelligence (AI) tool called AlphaFold, which has been developed by Google's sister company DeepMind in London, has predicted the structure of nearly the entire human proteome (the full complement of proteins expressed by an organism). In addition, the tool has predicted almost complete proteomes for various other organisms, ranging from mice and maize (corn) to the malaria parasite (see 'Folding options').

What Can Artificial Intelligence Do?


Can Artificial Intelligence see, think, act? Many questions revolve around the mysterious and fascinating world of Artificial Intelligence. The answers are not always clear, and it often takes a little imagination to recognize human abilities in machines. Artificial Intelligence is still far from what humans fear, especially in the workplace: the replacement of human workers with machines. To date, AI can simulate human abilities, but it cannot emulate creativity, nor can it provide answers or outputs different from those for which it was programmed.

Google's Translatotron 2 removes ability to deepfake voices


All the sessions from Transform 2021 are available on-demand now. In 2019, Google released Translatotron, an AI system capable of directly translating a person's voice into another language. The system could create synthesized translations that kept the sound of the original speaker's voice intact. But Translatotron could also be used to generate speech in a different voice, making it ripe for potential misuse in, for example, deepfakes. This week, researchers at Google quietly released a paper detailing its successor, Translatotron 2, which addresses that issue by restricting the system to retaining the source speaker's voice.

The Rapid Evolution of the Canonical Stack for Machine Learning


You might think that's something like Kubeflow, but Kubeflow is more of a pipelining and orchestration system; it's not really agnostic to the languages and frameworks that run on it.

The Tokyo Olympics' opening ceremony featured an orchestrated video game soundtrack


The Tokyo Olympics' opening ceremony kicked off early this morning, and the parade of nations, in which athletes walk through Japan's Olympic stadium, had a Japanese twist: a medley of orchestrated video game music formed the soundtrack for the parade. It all kicked off with the main theme from Dragon Quest -- which sounds pretty Olympian outright -- followed by hits from Final Fantasy, Monster Hunter, Nier, Sonic, Chrono Trigger and, er, eFootball. There are some notable omissions -- no Nintendo songs (Pokemon? Zelda?) being the biggest -- but some Street Fighter II tracks might have fit well with the competitive theme.