Carlos Gaitan, the CEO and Co-founder of Benchmark Labs, a leading provider of AI- and IoT-driven weather forecasting solutions for the agriculture, energy, and insurance sectors, joins Enterprise Radio. Dr. Gaitan completed his doctoral studies at the University of British Columbia (Vancouver, Canada), working with William Hsieh on machine learning applications in the environmental sciences. He also holds a Bachelor's degree in Civil Engineering and a Master's degree in Hydrosystems from the Pontificia Universidad Javeriana (Bogota, Colombia), and is an elected member of the American Meteorological Society's (AMS) Artificial Intelligence Committee.
A little while ago, I covered Google AI's Pathways architecture, calling it a revolution in machine learning. One of the standouts of Google's novel approach was its use of sparse activation in the training architecture. I liked this idea so much that I decided to explore it in more depth. That's how I came across Sparse Weight Activation Training (SWAT), by researchers at the Department of Electrical and Computer Engineering, University of British Columbia. And the paper definitely has me excited.
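To make the core idea concrete, here is a minimal NumPy sketch of magnitude-based top-K weight sparsification, the kind of mechanism SWAT builds on: only the largest-magnitude weights participate in a pass, while the dense copy is retained so the active set can change as training proceeds. This is my own simplified illustration, not the paper's actual implementation.

```python
import numpy as np

def topk_mask(w, density):
    """Keep only the largest-magnitude fraction `density` of entries in w."""
    k = max(1, int(density * w.size))
    threshold = np.sort(np.abs(w), axis=None)[-k]
    return (np.abs(w) >= threshold).astype(w.dtype)

def sparse_forward(x, w, density=0.1):
    # The forward pass uses only the top-K weights; the dense copy of w
    # is still kept (and updated), so the active set can change over training.
    w_sparse = w * topk_mask(w, density)
    return x @ w_sparse

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3))   # dense weight matrix
x = rng.normal(size=(2, 4))   # a batch of two inputs
y = sparse_forward(x, w, density=0.25)
print(y.shape)  # (2, 3)
```

With `density=0.25`, only 3 of the 12 weights contribute to the matrix product, which is what makes this style of training attractive for reducing compute.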
It may seem intuitive that AI and deep learning can speed up workflows -- including novel drug discovery, a typically years-long and several-billion-dollar endeavor. But professors Artem Cherkasov and Olexandr Isayev were surprised to find that no recent academic papers provided a comprehensive, global research review of how deep learning and GPU-accelerated computing impact drug discovery. In March, they published a paper in Nature to fill this gap, presenting an up-to-date review of the state of the art for GPU-accelerated drug discovery techniques. Cherkasov, a professor in the department of urologic sciences at the University of British Columbia, and Isayev, an assistant professor of chemistry at Carnegie Mellon University, join NVIDIA AI Podcast host Noah Kravitz this week to discuss how GPUs can help democratize drug discovery. In addition, the guests cover their inspiration and process for writing the paper, talk about NVIDIA technologies that are transforming the role of AI in drug discovery, and give tips for adopting new approaches to research.
With millions of boats across the world, it is difficult for regulators to stop fishers who exceed catch limits. Even those with no malicious intent can accidentally catch too much because of manual reporting practices. Enter OnDeck Fisheries AI, a Vancouver, B.C.-based startup developing software that automatically counts the fish brought onto a ship. The company's machine learning and computer vision technology tracks the precise biomass and type of fish without requiring human observers. The idea is to help regulators and fishers rely on an automated solution to ensure they are complying with fishing regulations.
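OnDeck's pipeline is not public, but the aggregation step such a system needs is easy to picture: given per-fish detections from a vision model (species, estimated mass, confidence), tally counts and biomass per species. The detections below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical detector output: (species, estimated_mass_kg, confidence)
# per fish detected on deck. These values are made up for illustration.
detections = [
    ("sockeye salmon", 2.8, 0.97),
    ("sockeye salmon", 3.1, 0.91),
    ("pacific halibut", 11.4, 0.88),
    ("sockeye salmon", 2.5, 0.42),  # low confidence: filtered out below
]

def tally_catch(detections, min_confidence=0.8):
    """Aggregate per-species counts and total biomass from detections."""
    totals = defaultdict(lambda: {"count": 0, "biomass_kg": 0.0})
    for species, mass_kg, conf in detections:
        if conf < min_confidence:
            continue
        totals[species]["count"] += 1
        totals[species]["biomass_kg"] += mass_kg
    return dict(totals)

print(tally_catch(detections))
```

A real system would feed this tally into compliance checks against per-species quotas; the confidence threshold is the knob that trades missed fish against false detections.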
Francis, Jonathan (Carnegie Mellon University) | Kitamura, Nariaki (Carnegie Mellon University) | Labelle, Felix (Carnegie Mellon University) | Lu, Xiaopeng (Carnegie Mellon University) | Navarro, Ingrid (Carnegie Mellon University) | Oh, Jean
Recent advances in the areas of multimodal machine learning and artificial intelligence (AI) have led to the development of challenging tasks at the intersection of Computer Vision, Natural Language Processing, and Embodied AI. Whereas many approaches and previous survey pursuits have characterised one or two of these dimensions, there has not been a holistic analysis at the center of all three. Moreover, even when combinations of these topics are considered, more focus is placed on describing, e.g., current architectural methods, as opposed to also illustrating high-level challenges and opportunities for the field. In this survey paper, we discuss Embodied Vision-Language Planning (EVLP) tasks, a family of prominent embodied navigation and manipulation problems that jointly use computer vision and natural language. We propose a taxonomy to unify these tasks and provide an in-depth analysis and comparison of the new and current algorithmic approaches, metrics, simulated environments, as well as the datasets used for EVLP tasks. Finally, we present the core challenges that we believe new EVLP works should seek to address, and we advocate for task construction that enables model generalizability and furthers real-world deployment.
In the last decade, there have been significant advances in multi-agent reinforcement learning (MARL), but there are still numerous challenges, such as high sample complexity and slow convergence to stable policies, that need to be overcome before widespread deployment is possible. However, many real-world environments already, in practice, deploy sub-optimal or heuristic approaches for generating policies. An interesting question that arises is how to best use such approaches as advisors to help improve reinforcement learning in multi-agent domains. In this paper, we provide a principled framework for incorporating action recommendations from online sub-optimal advisors in multi-agent settings. We describe the problem of ADvising Multiple Intelligent Reinforcement Agents (ADMIRAL) in non-restrictive general-sum stochastic game environments and present two novel Q-learning based algorithms: ADMIRAL - Decision Making (ADMIRAL-DM) and ADMIRAL - Advisor Evaluation (ADMIRAL-AE), which allow us to improve learning by appropriately incorporating advice from an advisor (ADMIRAL-DM) and to evaluate the effectiveness of an advisor (ADMIRAL-AE). We analyze the algorithms theoretically and provide fixed-point guarantees regarding their learning in general-sum stochastic games. Furthermore, extensive experiments illustrate that these algorithms can be used in a variety of environments, have performance that compares favourably to other related baselines, can scale to large state-action spaces, and are robust to poor advice from advisors.
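The general idea of learning from a sub-optimal advisor can be sketched in a much simpler setting than the paper's multi-agent stochastic games: below is a single-agent Q-learner on a toy chain MDP that follows an advisor's recommended action with a probability that decays over training. This is only an illustration of the advising principle, not the ADMIRAL-DM or ADMIRAL-AE algorithms themselves.

```python
import random

# Toy 5-state chain MDP: actions 0 (left) / 1 (right); reward 1 at state 4.
N_STATES, GOAL = 5, 4

def step(s, a):
    s2 = min(GOAL, s + 1) if a == 1 else max(0, s - 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def advisor(s):
    # A sub-optimal heuristic advisor: usually (but not always) recommends
    # the correct direction.
    return 1 if random.random() < 0.8 else 0

def q_learn_with_advisor(episodes=500, alpha=0.5, gamma=0.9):
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for ep in range(episodes):
        s, done = 0, False
        follow_p = max(0.05, 1.0 - ep / 250)  # rely on the advisor less over time
        while not done:
            if random.random() < follow_p:
                a = advisor(s)                      # take the recommended action
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1   # act greedily
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])   # standard Q-learning update
            s = s2
    return Q

random.seed(0)
Q = q_learn_with_advisor()
print([round(max(q), 2) for q in Q])
```

Early on, the advisor's recommendations drive exploration toward the goal; as `follow_p` decays, the agent's own learned values take over, which mirrors how advising frameworks aim to accelerate learning without being permanently limited by a poor advisor.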
The seven-arm octopus, Haliphron atlanticus, weighs as much as a person and haunts deep, dark waters from New Zealand to Brazil and British Columbia. So few people have seen this creature alive that researchers must study it in death--typically, as a mound of purplish flesh that washes ashore or turns up in a net. A living seven-arm octopus was scooped up by a Norwegian fishing trawler in 1984, but "when laid on deck the body collapsed," a local zoologist wrote at the time. What remained of the creature, he added, was "sack-shaped, large and flappy." Another turned up in a South Pacific research trawl in the early two-thousands, but the preservation process turned it into a "frozen lump," the giant-squid expert Steve O'Shea wrote.
Filming a top-level backcountry snowboarding event presents distinct technical challenges. The action moves all over the mountain; riders navigate through groups of trees, sail over jumps, and carve around obstacles, all the while making split-second adjustments to their speed and direction. The unpredictable and fast-paced nature of the competition can leave even the most talented camera operators struggling to keep up. For Travis Rice and Liam Griffin, the organizers of the Natural Selection Tour, this issue was compounded by the fact that they wanted to broadcast their event live. The annual three-stop jamboree sees a hand-picked field of the world's top snowboarders (eight women and 16 men) compete at specially selected courses in Jackson Hole, Wyoming; Alaska; and British Columbia.
We present an oracle-efficient algorithm for boosting the adversarial robustness of barely robust learners. Barely robust learning algorithms learn predictors that are adversarially robust only on a small fraction $\beta \ll 1$ of the data distribution. Our proposed notion of barely robust learning requires robustness with respect to a "larger" perturbation set, which we show is necessary for strongly robust learning; weaker relaxations are not sufficient. Our results reveal a qualitative and quantitative equivalence between two seemingly unrelated problems: strongly robust learning and barely robust learning.
We establish new generalisation bounds for multiclass classification by abstracting to a more general setting of discretised error types. Extending PAC-Bayes theory, we are able to provide fine-grained bounds on performance for multiclass classification, as well as applications to other learning problems, including discretisation of regression losses. Tractable training objectives are derived from the bounds. The bounds are uniform over all weightings of the discretised error types and thus can be used to bound weightings not foreseen at training, including the full confusion matrix in the multiclass classification case.
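For context, the classical PAC-Bayes-kl bound for a scalar (e.g. binary) loss, which results of this kind refine, states that with probability at least $1-\delta$ over an i.i.d. sample of size $n$, simultaneously for all posteriors $Q$ over predictors and a fixed prior $P$,

$$\mathrm{kl}\!\left(\hat{L}_n(Q) \,\middle\|\, L(Q)\right) \le \frac{\mathrm{KL}(Q \,\|\, P) + \ln(2\sqrt{n}/\delta)}{n},$$

where $\hat{L}_n(Q)$ and $L(Q)$ are the empirical and true risks of the Gibbs predictor and $\mathrm{kl}$ denotes the binary KL divergence. This classical statement is background only: the paper's contribution is to generalise the scalar error rate here to a vector over discretised error types, such as the cells of a confusion matrix.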