Imitation & Innovation In AI - FoundersList
Speaker: Alison Gopnik, Distinguished Professor of Psychology, UC Berkeley
Talk Title: Imitation & Innovation in AI: What Four-Year-Olds Can Do & AI Can't (Yet)
About Talk: Young children's learning may be an important model for artificial intelligence (AI). Comparing children & artificial agents in the same tasks & environments can help us understand the abilities of existing systems & create new ones. In particular, many current large data-supervised systems, such as large language models (LLMs), provide new ways to access information collected by past agents. However, they lack the kinds of exploration & innovation that are characteristic of children. New techniques may help to instantiate child-like curiosity, exploration & play in AI systems.
Diffusion Models for Video Prediction and Infilling
Höppe, Tobias, Mehrjou, Arash, Bauer, Stefan, Nielsen, Didrik, Dittadi, Andrea
Predicting and anticipating future outcomes or reasoning about missing information in a sequence are critical skills for agents to be able to make intelligent decisions. This requires strong, temporally coherent generative capabilities. Diffusion models have shown remarkable success in several generative tasks, but have not been extensively explored in the video domain. We present Random-Mask Video Diffusion (RaMViD), which extends image diffusion models to videos using 3D convolutions, and introduces a new conditioning technique during training. By varying the mask we condition on, the model is able to perform video prediction, infilling, and upsampling. Due to our simple conditioning scheme, we can utilize the same architecture as used for unconditional training, which allows us to train the model in a conditional and unconditional fashion at the same time. We evaluate RaMViD on two benchmark datasets for video prediction, on which we achieve state-of-the-art results, and one for video generation.
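The core idea in the abstract above — varying which frames are conditioned on so that one model handles prediction, infilling, and upsampling — can be illustrated with a minimal sketch. This is a simplified, hypothetical illustration of random-mask conditioning (function names and the NumPy stand-in for the denoising network are assumptions, not the authors' code):

```python
import numpy as np

def random_mask(num_frames, rng):
    """Sample a conditioning mask over frames.

    mask[t] = True  -> frame t is kept clean (conditioning frame)
    mask[t] = False -> frame t is noised and must be denoised.
    Different fixed mask patterns at inference time correspond to
    different tasks:
      - first k frames clean        -> video prediction
      - first and last frames clean -> infilling
      - every other frame clean     -> temporal upsampling
    """
    k = rng.integers(0, num_frames)  # number of conditioning frames (0 = unconditional)
    idx = rng.choice(num_frames, size=k, replace=False)
    mask = np.zeros(num_frames, dtype=bool)
    mask[idx] = True
    return mask

def diffusion_training_step(video, sigma, rng):
    """One simplified noising step with mask conditioning.

    `video` has shape (T, H, W, C). Noise is added only to the
    non-conditioning frames; conditioning frames pass through clean.
    Because a sampled mask can also be empty, the same architecture is
    trained conditionally and unconditionally at the same time.
    """
    mask = random_mask(video.shape[0], rng)
    noise = rng.normal(size=video.shape)
    noisy = np.where(mask[:, None, None, None], video, video + sigma * noise)
    return noisy, mask
```

In a real diffusion setup, `noisy` and `mask` would be fed to a 3D-convolutional denoiser and the loss computed only on the noised frames; the sketch shows only the conditioning scheme itself.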
- Europe > Sweden > Stockholm > Stockholm (0.04)
- South America > Brazil (0.04)
- North America > United States > New York > New York County > New York City (0.04)
A First Principles Theory of Generalization - KDnuggets
I recently started a new newsletter focused on AI education that already has over 50,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Most machine learning (ML) papers we see these days focus on advancing new techniques and methods in areas such as natural language or computer vision. ML research is advancing at a frantic pace despite the lack of a fundamental theory of machine intelligence.
Here Are Free AI Learning Resources For Beginners - Analytics India Magazine
Given that artificial intelligence is a buzzing topic, it has sparked a slew of beginner-friendly introductory resources that clarify the general concepts of this very broad field. For most newcomers, the most interesting topic in AI is deep learning. In fact, Google's Python-based deep learning framework TensorFlow has helped many a developer get up to speed with the technical concepts. Besides videos and free online courses, you should also have a reading list that covers the math and statistics behind the algorithms. While YouTube videos remain the main learning source and a key starting point for beginners, there are plenty of other resources, especially books, that can help cement the fundamental concepts.
- Education > Educational Setting > Online (0.92)
- Education > Educational Technology > Educational Software > Computer Based Training (0.36)
Campus artificial intelligence researchers aim to improve self-driving cars
The Berkeley Artificial Intelligence Research Lab, or BAIR, released a study May 12 on BDD100K, a driving database that can be used to train the artificial intelligence programs of self-driving cars, according to BAIR's website. The study concluded that the data set can help researchers understand how different scenarios affect current self-driving car programs. A study by the research team that created the data set described two contributions to self-driving cars: the data set itself and its video annotation system. According to BAIR's website, BDD100K is "the largest and most diverse driving video dataset," containing 100,000 driving clips.
- Transportation > Passenger (1.00)
- Transportation > Ground > Road (1.00)
- Information Technology > Robotics & Automation (1.00)
Berkeley Researchers Create Virtual Acrobat – Synced – Medium
The Berkeley Artificial Intelligence Research (BAIR) Lab yesterday proposed DeepMimic, a Reinforcement Learning (RL) technique that enables simulated characters to reproduce highly dynamic physical movements learned from motion data collected from human subjects. BAIR is a top-tier research lab focused on computer vision, machine learning, natural language processing, and robotics. RL methods have been shown to be applicable to a diverse suite of robotic tasks, particularly motion control problems. A typical RL setup includes a policy function, which maps the agent's current state to a choice among the actions available to it, and a value function, which estimates the expected long-term reward the agent will accumulate from a given state. The epoch-making Go computer AlphaGo produced by DeepMind is grounded in the same technique.
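The policy/value distinction described above can be made concrete with a tiny tabular sketch. This toy example (the state/action counts and the `td_update` helper are illustrative assumptions, not anything from DeepMimic) shows a policy as a state-to-action mapping and a value function updated by temporal-difference learning:

```python
import numpy as np

# Toy problem: 3 states, 2 actions (numbers chosen for illustration only).
n_states, n_actions = 3, 2

# Policy: for each state, a probability distribution over actions.
# Here it starts uniform; RL training would sharpen it toward
# high-reward actions.
policy = np.full((n_states, n_actions), 1.0 / n_actions)

# Value function: the expected cumulative future reward the agent
# predicts from each state under the current policy.
value = np.zeros(n_states)

def td_update(state, reward, next_state, alpha=0.1, gamma=0.99):
    """One temporal-difference update of the value estimate.

    Nudges value[state] toward the observed reward plus the
    discounted value of the state that followed.
    """
    td_target = reward + gamma * value[next_state]
    value[state] += alpha * (td_target - value[state])
    return value[state]
```

Methods like DeepMimic use neural networks rather than tables for both functions, but the division of labor — the policy selects actions, the value function scores how well things are expected to go — is the same.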
NVIDIA Delivers AI Supercomputer to Berkeley NVIDIA Blog
NVIDIA CEO Jen-Hsun Huang earlier this year delivered our NVIDIA DGX-1 AI supercomputer in a box to the University of California, Berkeley's Berkeley AI Research Lab (BAIR). BAIR's more than two dozen faculty and more than 100 graduate students are at the cutting edge of multi-modal deep learning, human-compatible AI, and connecting AI with other scientific disciplines and the humanities. "I'm delighted to deliver one of the first ones to you," Jen-Hsun told a group of researchers at BAIR celebrating the arrival of their DGX-1. The team at BAIR is working on a dazzling array of AI problems across a huge range of fields -- and they're eager to experiment with as many different approaches as possible. To do that, they need speed, explains Pieter Abbeel, an associate professor in UC Berkeley's Department of Electrical Engineering and Computer Science.
- Information Technology > Hardware (1.00)
- Education > Educational Setting > Higher Education (0.60)