

Neural networks from scratch

#artificialintelligence

Creating complex neural networks with different architectures in Python should be standard practice for any machine learning engineer or data scientist. But a genuine understanding of how a neural network works is equally valuable. In this article, you will learn the fundamentals of building neural networks without the frameworks that normally do the work for you. While reading, you can open the notebook on GitHub and run the code alongside the text. I explain how to make a basic deep neural network by implementing the forward and backward pass (backpropagation). This requires some specific knowledge of how neural networks function, which I discuss in this introduction to neural networks.
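To make the forward and backward pass concrete, here is a minimal NumPy sketch of a two-layer network trained on a toy task. The architecture, learning rate, and data are illustrative assumptions, not the article's notebook code.

```python
import numpy as np

# Tiny two-layer network with an explicit forward and backward pass (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                  # 64 samples, 3 features
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0  # toy binary target

W1, b1 = rng.normal(scale=0.1, size=(3, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(scale=0.1, size=(8, 1)), np.zeros((1, 1))
lr = 0.5

for step in range(200):
    # Forward pass
    z1 = X @ W1 + b1
    a1 = np.maximum(z1, 0)                    # ReLU
    z2 = a1 @ W2 + b2
    p = 1.0 / (1.0 + np.exp(-z2))             # sigmoid output
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

    # Backward pass (backpropagation)
    dz2 = (p - y) / len(X)                    # gradient of mean cross-entropy w.r.t. z2
    dW2, db2 = a1.T @ dz2, dz2.sum(axis=0, keepdims=True)
    da1 = dz2 @ W2.T
    dz1 = da1 * (z1 > 0)                      # ReLU gradient
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0, keepdims=True)

    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

The exact derivative expressions depend on the chosen activations and loss; the sigmoid-plus-cross-entropy combination above is what gives the simple (p - y) term.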


D.R.I.V.E. 2020 MLB Projections: DataRobot Intelligent Value Estimator

#artificialintelligence

The MLB models were built on a dataset going back to 1998 that includes roughly 1,500 season-specific statistics for each player, everything from age to wRC to days on the Injured List. We created approximately 2,000 more variables for each player via feature engineering, capturing relevant information from previous seasons in an attempt to let the model understand their trajectories as players. After building this massive dataset, we relied on DataRobot to do the heavy data science lifting of training and evaluating many dozens of different machine learning models to determine which model (or ensemble of multiple models) would give us the best predictions for the 2020 season. Normally, this modeling stage of the data science process would've taken weeks, but with automated machine learning from DataRobot our team could iterate quickly, building models overnight and then going back to data acquisition and preparation to refine our approach.
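As a hypothetical illustration of the kind of season-over-season feature engineering described above, the following pandas sketch carries a player's previous-season statistics forward as lag features; the column names and values are made up and are not the actual dataset schema.

```python
import pandas as pd

# Toy stand-in for the player/season table (hypothetical columns and numbers).
stats = pd.DataFrame({
    "player_id": [1, 1, 1, 2, 2],
    "season":    [2017, 2018, 2019, 2018, 2019],
    "wRC":       [95, 110, 120, 80, 88],
})

stats = stats.sort_values(["player_id", "season"])
# Carry each player's previous seasons forward so a model can see a trajectory.
for lag in (1, 2):
    stats[f"wRC_lag{lag}"] = stats.groupby("player_id")["wRC"].shift(lag)
# A simple derived trend feature: change between the two most recent prior seasons.
stats["wRC_delta"] = stats["wRC_lag1"] - stats["wRC_lag2"]
```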


IBM's AI generates new footage from video stills

#artificialintelligence

A paper coauthored by researchers at IBM describes an AI system -- Navsynth -- that generates videos seen during training as well as unseen videos. While this in and of itself isn't novel -- it's an acute area of interest for Alphabet's DeepMind and others -- the researchers say the approach produces superior quality videos compared with existing methods. If the claim holds water, their system could be used to synthesize videos on which other AI systems train, supplementing real-world data sets that are incomplete or marred by corrupted samples. As the researchers explain, the bulk of work in the video synthesis domain leverages GANs, or two-part neural networks consisting of generators that produce samples and discriminators that attempt to distinguish between the generated samples and real-world samples. They're highly capable but suffer from a phenomenon called mode collapse, where the generator generates a limited diversity of samples (or even the same sample) regardless of the input.
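For a concrete picture of the generator/discriminator setup described above, here is a minimal, generic GAN training step in PyTorch. It is an illustrative sketch, not IBM's Navsynth; the layer sizes and data are placeholders.

```python
import torch
import torch.nn as nn

# Generator maps noise to a fake sample; discriminator scores real vs. fake (toy sizes).
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(8, 32)          # stand-in for a batch of real data

# Discriminator step: learn to tell real from generated samples.
fake = G(torch.randn(8, 16)).detach()
loss_d = bce(D(real), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to fool the discriminator.
fake = G(torch.randn(8, 16))
loss_g = bce(D(fake), torch.ones(8, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Mode collapse, mentioned above, is what happens when the generator step keeps winning with a narrow set of outputs: the loss can look fine even though sample diversity has collapsed.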


Good news for lazy joggers: Scientists develop ankle 'exoskeleton' that makes running easier

Daily Mail - Science & tech

Couch potatoes trying to get in shape could one day be helped along their fitness journey by an ankle exoskeleton that makes it easier and less tiring to run. The robotic device attaches to a jogger's ankle and was found in lab tests to slash energy expenditure by 14 per cent compared to standard running shoes. It was created by robotics experts at Stanford University and funded in part by sporting behemoth Nike. The engineers behind the project say the equipment currently works only on a treadmill, with the device hooked up to a machine via cables. However, they are working to make the exoskeleton portable, lightweight, and easy to integrate into future running equipment.


5G Commercialization and Trials in Korea

Communications of the ACM

Since Korea has a limited ICT R&D budget compared with other leading IT countries, a focused strategy was essential to achieving global competitiveness in each generation of mobile communication. Just after the rollout of the world's first 5G service, the government took the next step by announcing a 5G strategy to promote 5G applications across a wide range of industries and to create a sustainable 5G ecosystem leading to new growth engines. In this article, we focus on government-industry 5G collaborations, including the R&D roadmap and the push toward 5G commercialization, global collaboration, the first 5G experience, and 5G vertical trials intended to make 5G-enabled industrial transformation take place in Korea. The development of an electronic digital switching system called TDX in the 1980s, the world's first CDMA mobile service in the 1990s, and the nationwide wired and mobile broadband Internet networks in the 2000s are the key advances that made it possible for Korean consumers to easily adopt new technologies such as LTE and 5G. In 2018, the handset penetration rate of South Korea was similar to that of western Europe, with LTE adoption at 84%, 99.95% coverage, and 65Mbps downlink capacity.4


AIkido Pharma Adds Artificial Intelligence Leader to Advisory Board

#artificialintelligence

AIkido Pharma Incorporated (Nasdaq: AIKI) today announced the addition of Andreas Typaldos to the Company's Advisory Board. Mr. Typaldos is a pioneering software and technology entrepreneur and a private equity investor through a Typaldos Family Office. Together with leading scientists in drug development at Tufts University, Fudan University in Shanghai, and the Shanghai Center for Drug Discovery and Development, he serves on the Board of Directors of Quantitative Cell Diagnostix (www.qcd-x.com). In the past, Mr. Typaldos was a founder, founding investor, board member, and chief executive of a number of software, technology, consulting services, and internet companies. Anthony Hayes, CEO of AIkido, noted, "Mr. Typaldos is an industry leader in Artificial Intelligence and Machine Learning. His participation on our advisory board will help the Company expand its Artificial Intelligence (AI) and Machine Learning (ML) presence in the drug development field. We are honored he has agreed to lend his expertise and we are excited to work with him."


Backprop with Approximate Activations for Memory-efficient Network Training

Neural Information Processing Systems

Training convolutional neural network models is memory intensive since back-propagation requires storing the activations of all intermediate layers. This presents a practical concern when seeking to deploy very deep architectures in production, especially when models need to be frequently re-trained on updated datasets. In this paper, we propose a new implementation of back-propagation that significantly reduces memory usage by enabling the use of approximations with negligible computational cost and minimal effect on training performance. The algorithm reuses common buffers to temporarily store full activations and compute the forward pass exactly, while storing approximate per-layer copies of the activations, at a significant memory saving, for use in the backward pass.
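To illustrate the general idea of keeping only approximate activations for the backward pass, here is a minimal PyTorch sketch of a ReLU that computes its forward pass exactly but saves an 8-bit quantized copy of its input for backward. This is an assumption-laden illustration of the technique, not the paper's algorithm or code.

```python
import torch

class ApproxReLU(torch.autograd.Function):
    """ReLU that keeps only a coarse 8-bit copy of its input for the backward pass
    (a sketch of the activation-approximation idea, not the authors' implementation)."""

    @staticmethod
    def forward(ctx, x):
        out = x.clamp(min=0)                       # exact forward computation
        scale = x.abs().max() / 127.0 + 1e-8       # per-tensor quantization scale (illustrative)
        ctx.scale = scale
        ctx.save_for_backward((x / scale).round().to(torch.int8))  # ~4x less memory than float32
        return out

    @staticmethod
    def backward(ctx, grad_out):
        (x_q,) = ctx.saved_tensors
        x_approx = x_q.to(grad_out.dtype) * ctx.scale   # dequantize the saved activation
        return grad_out * (x_approx > 0).to(grad_out.dtype)

# Usage: y = ApproxReLU.apply(x) in place of torch.relu(x).
```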


Google's New Shoe Insole Analyzes Your Soccer Moves

#artificialintelligence

Jacquard started out as a sensor on a denim jacket, where specially woven textile on the sleeve let the wearer control actions on their phone by touching the fabric. Swipe a palm up the sleeve to change music tracks, swipe down to call an Uber. A double-tap during a bike ride would send an ETA to a pair of headphones. But Google's wearable sensor technology is evolving beyond just taps and swipes. The Jacquard sensor, called the Tag, can now be installed into the insole of a shoe, where it can automatically identify a series of physical motions.


Why Artificial Intelligence Projects Are Failing

#artificialintelligence

The promise of Artificial Intelligence (AI) to solve real problems through automation, amplification, and simplification is definitely achievable. Today, AI offers technologists some of the most glamorous projects to work on, and we are tempted to jump on the bandwagon. But everything is not so great in AI Land. The truth is that AI projects are failing.


Building an AI-powered Battlesnake with reinforcement learning on Amazon SageMaker | Amazon Web Services

#artificialintelligence

Battlesnake is an AI competition based on the traditional snake game in which multiple AI-powered snakes compete to be the last snake surviving. Battlesnake attracts a community of developers at all levels. Hundreds of snakes compete and rise up in the ranks in the online Battlesnake global arena. Battlesnake also hosts several offline events that are attended by more than a thousand developers and non-developers alike and are streamed on Twitch. Teams of developers build snakes for the competition and learn new tech skills, learn to collaborate, and have fun. Teams can build snakes by using a variety of strategies ranging from state-of-the-art deep reinforcement learning (RL) algorithms to unique heuristics-based strategies. This post shows how to use Amazon SageMaker to build an RL-based snake.
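For a rough idea of how the snake game can be framed as an RL problem, the sketch below defines a toy Gym-style environment. The class, observation encoding, and rewards are hypothetical simplifications for illustration, not the actual Battlesnake environment or the post's SageMaker code.

```python
import gym
import numpy as np
from gym import spaces

class SnakeEnv(gym.Env):
    """Toy single-snake environment (hypothetical); the real competition has a richer
    board state, food spawning, and multi-snake dynamics."""

    def __init__(self, size=11):
        self.size = size
        self.action_space = spaces.Discrete(4)                     # up, down, left, right
        self.observation_space = spaces.Box(0, 2, (size, size), dtype=np.int8)

    def reset(self):
        self.head = np.array([self.size // 2, self.size // 2])
        self.food = np.array([1, 1])
        return self._obs()

    def step(self, action):
        moves = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}
        self.head = self.head + moves[action]
        off_board = not (0 <= self.head[0] < self.size and 0 <= self.head[1] < self.size)
        ate_food = not off_board and np.array_equal(self.head, self.food)
        reward = 1.0 if ate_food else (-1.0 if off_board else 0.0)  # simplified reward shaping
        return self._obs(), reward, off_board, {}

    def _obs(self):
        # Encode the board as a grid: 0 empty, 1 snake head, 2 food.
        board = np.zeros((self.size, self.size), dtype=np.int8)
        board[tuple(self.food)] = 2
        if 0 <= self.head[0] < self.size and 0 <= self.head[1] < self.size:
            board[tuple(self.head)] = 1
        return board
```

An environment like this can then be handed to a deep RL algorithm (for example, a policy-gradient or DQN-style learner) locally or as part of a SageMaker RL training job, as the post describes.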