Help, My Friend Got Me a Dumb AI-Generated Present

WIRED

"An artist friend of mine got me an AI-generated painting as a gift. I can see she tried to personalize the concept, and it's nicely framed, but part of me still feels a little cheated." There's something implicitly paradoxical about feeling "cheated" by a present. A gift is, by definition, something that comes into your possession at no cost or effort, an object that exists outside the economic concepts of debt and fair exchange.


Federated Hypergradient Descent

Kan, Andrew K

arXiv.org Artificial Intelligence

In this work, we explore combining automatic hyperparameter tuning and optimization for federated learning (FL) in an online, one-shot procedure. We apply a principled approach to adapting the client learning rate, number of local steps, and batch size. In our federated learning applications, our primary motivations are minimizing the communication budget as well as local computational resources in the training pipeline. Conventionally, hyperparameter tuning methods involve at least some degree of trial and error, which is known to be sample inefficient. To address these motivations, we propose FATHOM (Federated AuTomatic Hyperparameter OptiMization) as a one-shot online procedure. We investigate the challenges and solutions of deriving analytical gradients with respect to the hyperparameters of interest. Our approach is inspired by the fact that, with the exception of local data, we have full knowledge of all components involved in the training process, and our algorithm exploits this fact. We show that FATHOM is more communication efficient than Federated Averaging (FedAvg) with optimized, statically valued hyperparameters, and is also more computationally efficient overall. As a communication-efficient, one-shot online procedure, FATHOM addresses the bottlenecks of costly communication and limited local computation by eliminating a potentially wasteful tuning process and by optimizing the hyperparameters adaptively throughout training, without trial and error. We present numerical results from extensive empirical experiments with the Federated EMNIST-62 (FEMNIST) and Federated Stack Overflow (FSO) datasets, using FedJAX as our baseline framework.
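The core idea of online hypergradient descent can be illustrated with a minimal Python sketch: adapt the learning rate during training by differentiating the loss with respect to it. This is a hypothetical toy on a quadratic objective, not the paper's federated algorithm; the function name, constants, and update rule are illustrative assumptions.

```python
import numpy as np

def hypergrad_descent(steps=100, eta=0.01, beta=0.001):
    """Toy online hypergradient descent on the learning rate eta."""
    w = np.array([5.0, -3.0])          # parameters of L(w) = ||w||^2
    prev_grad = np.zeros_like(w)
    for _ in range(steps):
        grad = 2.0 * w                 # analytical gradient of the toy loss
        # Hypergradient signal: the loss decreases faster when successive
        # gradients align, so increase eta when grad_t . grad_{t-1} > 0.
        eta += beta * (grad @ prev_grad)
        w -= eta * grad
        prev_grad = grad
    return w, eta
```

Because the gradient is known analytically, the learning rate is tuned in the same pass as training itself, with no trial-and-error restarts; that is the flavor of "one-shot online" tuning the abstract describes.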


Improved Learning Bounds for Branch-and-Cut

Balcan, Maria-Florina, Prasad, Siddharth, Sandholm, Tuomas, Vitercik, Ellen

arXiv.org Artificial Intelligence

Branch-and-cut is the most widely used algorithm for solving integer programs, employed by commercial solvers like CPLEX and Gurobi. Branch-and-cut has a wide variety of tunable parameters that have a huge impact on the size of the search tree that it builds, but are challenging to tune by hand. An increasingly popular approach is to use machine learning to tune these parameters: using a training set of integer programs from the application domain at hand, the goal is to find a configuration with strong predicted performance on future, unseen integer programs from the same domain. If the training set is too small, a configuration may have good performance over the training set but poor performance on future integer programs. In this paper, we prove sample complexity guarantees for this procedure, which bound how large the training set should be to ensure that for any configuration, its average performance over the training set is close to its expected future performance. Our guarantees apply to parameters that control the most important aspects of branch-and-cut: node selection, branching constraint selection, and cutting plane selection, and are sharper and more general than those found in prior research.
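The learning setup the guarantees apply to can be sketched as empirical risk minimization over configurations: pick the parameter setting with the best average performance on the training instances. Everything below is a hypothetical stand-in (the cost function is a toy, not an integer-program solver), meant only to show the selection procedure whose sample complexity the paper bounds.

```python
import random

def best_configuration(configs, instances, cost):
    """Return the configuration minimizing average cost over the training set."""
    def avg_cost(c):
        return sum(cost(c, inst) for inst in instances) / len(instances)
    return min(configs, key=avg_cost)

# Toy example: configurations are cut-selection weights on a grid in [0, 1];
# the toy per-instance cost is minimized near 0.3 for every instance.
random.seed(0)
configs = [i / 10 for i in range(11)]
instances = [random.gauss(0.3, 0.05) for _ in range(50)]
chosen = best_configuration(configs, instances,
                            lambda c, inst: (c - inst) ** 2)
```

The paper's guarantees say how many training instances are needed so that the average cost used here is close to the expected cost on unseen instances, for every configuration simultaneously.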


Fathom AI Launches Fathom Pro Wearable Fitness System

#artificialintelligence

Sports injury startup Fathom AI has launched Fathom Pro, a wearable sensor-based system aimed at reducing injuries resulting from running and providing recovery exercises based on biometric feedback. Fathom Pro uses three compact sensors, each the size of a quarter, that the user places on their lower back and just above each ankle. These sensors collect movement and force data and then send it to the Fathom app. The data is then analyzed in the app using human movement research and recovery best practices to find imbalances in the user's running form. The Fathom AI app then identifies ways the user may be compensating for weak musculature or limited mobility.


Federated Multi-task Hierarchical Attention Model for Sensor Analytics

Chen, Yujing, Ning, Yue, Chai, Zheng, Rangwala, Huzefa

arXiv.org Machine Learning

Sensors are an integral part of modern Internet of Things (IoT) applications. There is a critical need for the analysis of heterogeneous multivariate temporal data obtained from the individual sensors of these systems. In this paper we focus in particular on the problem of the scarce amount of training data available per sensor. We propose a novel federated multi-task hierarchical attention model (FATHOM) that jointly trains classification/regression models from multiple sensors. The attention mechanism of the proposed model seeks to extract feature representations from the input and to learn a shared representation across multiple sensors, focused on the time dimension. The underlying temporal and non-linear relationships are modeled using a combination of attention mechanisms and long short-term memory (LSTM) networks. We find that our proposed method outperforms a wide range of competitive baselines in both classification and regression settings on activity recognition and environment monitoring datasets. We further provide visualizations of the feature representations learned by our model at the input-sensor level and the central time level.
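The time-dimension attention the abstract describes can be illustrated with a small Python sketch: a softmax-weighted pooling of per-timestep hidden states (such as LSTM outputs) into a single representation. This is a generic additive-attention illustration, not the paper's FATHOM architecture; the shapes and the random inputs are assumptions.

```python
import numpy as np

def attention_pool(h, w):
    """h: (T, d) per-timestep hidden states; w: (d,) attention parameters."""
    scores = h @ w                          # one relevance score per timestep
    alpha = np.exp(scores - scores.max())   # softmax over the time axis
    alpha /= alpha.sum()
    return alpha @ h                        # weighted sum: a (d,) representation

rng = np.random.default_rng(0)
h = rng.normal(size=(20, 8))   # e.g. LSTM outputs for 20 timesteps
w = rng.normal(size=8)
context = attention_pool(h, w)
```

Because the attention weights expose which timesteps dominate the pooled representation, the same mechanism supports the kind of per-sensor and per-timestep visualization mentioned at the end of the abstract.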


A new benchmark suite for machine learning

#artificialintelligence

To learn more about how to build next-generation machine learning applications, check out the session "Building reinforcement learning applications with Ray" at the AI Conference in San Francisco, September 4-7, 2018. Hurry: best price ends June 8. We are in an empirical era for machine learning, and it's important to be able to identify tools that enable efficient experimentation with end-to-end machine learning pipelines. Organizations that are using and deploying machine learning are confronted with a plethora of options for training models and for model inference, at the edge and on cloud services. To that end, MLPerf, a new set of benchmarks compiled by a growing list of industry and academic contributors, was recently announced at the Artificial Intelligence conference in New York.


Learning to Branch

Balcan, Maria-Florina, Dick, Travis, Sandholm, Tuomas, Vitercik, Ellen

arXiv.org Artificial Intelligence

Tree search algorithms, such as branch-and-bound, are the most widely used tools for solving combinatorial and nonconvex problems. For example, they are the foremost method for solving (mixed) integer programs and constraint satisfaction problems. Tree search algorithms recursively partition the search space to find an optimal solution. In order to keep the tree size small, it is crucial to carefully decide, when expanding a tree node, which question (typically variable) to branch on at that node in order to partition the remaining space. Numerous partitioning techniques (e.g., variable selection) have been proposed, but there is no theory describing which technique is optimal. We show how to use machine learning to determine an optimal weighting of any set of partitioning procedures for the instance distribution at hand using samples from the distribution. We provide the first sample complexity guarantees for tree search algorithm configuration. These guarantees bound the number of samples sufficient to ensure that the empirical performance of an algorithm over the samples nearly matches its expected performance on the unknown instance distribution. This thorough theoretical investigation naturally gives rise to our learning algorithm. Via experiments, we show that learning an optimal weighting of partitioning procedures can dramatically reduce tree size, and we prove that this reduction can even be exponential. Through theory and experiments, we show that learning to branch is both practical and hugely beneficial.
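The "optimal weighting of partitioning procedures" idea can be sketched in a few lines of Python: score each candidate branching variable under several rules and branch on the maximizer of a learned convex combination. The two toy scoring rules and the weight `mu` below are hypothetical stand-ins, not the paper's learned procedures.

```python
def select_branching_variable(fractional_values, mu):
    """Branch on the variable maximizing a mu-weighted mix of two scores."""
    def most_fractional(x):
        return 0.5 - abs(x - 0.5)      # distance of the LP value from integrality
    def product_score(x):
        return x * (1 - x)             # a product-style scoring rule
    scores = [mu * most_fractional(x) + (1 - mu) * product_score(x)
              for x in fractional_values]
    return max(range(len(scores)), key=scores.__getitem__)

# LP-relaxation values of three candidate variables; 0.5 is the most fractional.
idx = select_branching_variable([0.1, 0.5, 0.9], mu=0.7)
```

Learning then amounts to choosing `mu` from sample instances so that the induced trees are small on average, which is exactly the configuration problem the sample complexity guarantees cover.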


This Computer Uses Light--Not Electricity--To Train AI Algorithms

#artificialintelligence

William Andregg ushers me into the cluttered workshop of his startup Fathom Computing and gently lifts the lid from a bulky black box. Inside, green light glows faintly from a collection of lenses, brackets, and cables that resemble an exploded telescope. It's a prototype computer that processes data using light, not electricity, and it's learning to recognize handwritten digits. In other experiments the device learned to generate sentences in text. Right now, this embryonic optical computer is good, not great: on its best run it read 90 percent of scrawled numbers correctly.


I want to be a machine learning engineer … what will my salary be?

#artificialintelligence

The Role: Once relegated to science fiction, machine learning is a fast-growing field of computer science today. It focuses on designing algorithms that allow machines to make decisions based on information they gather themselves, finding patterns and insights in places they weren't explicitly programmed to look, mimicking human learning. Machine learning can be found in many products today, such as when streaming services recommend a new show or artist based on previous consumption habits, a text messaging application guesses the rest of a sentence or an autonomous vacuum cleaner swerves around the leg of a table that it crashed into earlier. In each example the machine has adjusted its behaviour based on information it has gathered on its own. It is up to a machine learning engineer to design the complex algorithms that enable machines to gather information and identify such patterns.


Crazy new military tech

FOX News

Included in the new technology are machine-gun-toting robots that charge up the beaches as an advance assault, as well as speedboats that instantly transform into small, stealthy submarines, diving beneath the surface to avoid detection. For the past two weeks, the Navy and Marine Corps have been quietly testing about 50 fascinating new technologies at Camp Pendleton, California, at the Ship-to-Shore Maneuver Exploration and Experimentation Advanced Naval Technology Exercise 2017. The exercise is investigating how the military can leverage the latest technological advances for the ship-to-shore space, the stretch between the Naval ship and the beach where forces could potentially land. Sailors and Marines have been experimenting with the technology and evaluating a wide range of sea, air, and land innovations in a variety of realistic scenarios. The tech includes amphibious vehicles, but also drones such as quadcopters and potentially weapon-wielding ground robots.