For the last two years, Facebook AI Research (FAIR) has worked with 13 universities around the world to assemble the largest-ever data set of first-person video, specifically to train deep-learning image-recognition models. AIs trained on the data set will be better at controlling robots that interact with people, or at interpreting images from smart glasses.

"Machines will be able to help us in our daily lives only if they really understand the world through our eyes," says Kristen Grauman at FAIR, who leads the project. Such tech could support people who need assistance around the home, or guide people in tasks they are learning to complete.

"The video in this data set is much closer to how humans observe the world," says Michael Ryoo, a computer vision researcher at Google Brain and Stony Brook University in New York, who is not involved in Ego4D.