Pic2Recipe!, a website created by MIT electrical engineering and computer science student Nick Hynes, is a neural network that's been trained to recognise food from more than one million recipes on Food.com and AllRecipes. "It can look at a photo of a dish and be able to predict the ingredients and even suggest similar recipes," Hynes says. The website lets anyone upload images that are then analysed by machine learning systems. When the system identifies the food in a photo, it searches a database and suggests similar recipes.
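The ingredient-prediction step described above is typically framed as multi-label classification: the model scores every ingredient in a vocabulary and reports those whose probability clears a threshold. A minimal sketch of that idea, with a toy ingredient list and made-up scores standing in for real model output (none of these names come from the Pic2Recipe system itself):

```python
import numpy as np

# Hypothetical ingredient vocabulary; real systems score thousands of labels.
INGREDIENTS = ["flour", "sugar", "egg", "tomato", "cheese", "basil"]

def predict_ingredients(logits, threshold=0.5):
    """Turn raw per-ingredient scores from an image model into a
    multi-label prediction: every ingredient whose sigmoid probability
    clears the threshold is reported as present."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits)))
    return [name for name, p in zip(INGREDIENTS, probs) if p >= threshold]

# Stand-in scores for a photo of a margherita pizza.
scores = [-2.0, -1.5, -3.0, 1.8, 2.2, 0.9]
print(predict_ingredients(scores))  # ['tomato', 'cheese', 'basil']
```

The threshold trades precision against recall: raising it keeps only ingredients the model is very sure about, lowering it surfaces more candidates at the cost of false positives.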
In March, researchers from Glasgow Caledonian University and London South Bank University found that consuming alcohol after witnessing a crime can protect memory from misleading information. In the study, participants watched a video of a staged theft in which a man and a woman entered a house and stole some jewellery, money and a laptop. While the reason for this effect remains unclear, the researchers suggest that alcohol may block the learning of new information, freeing up resources for the brain to consolidate recently learned information into long-term memory.
Meeker's analysis highlighted the opportunities surrounding digital innovation in patient empowerment and health management, improvements to clinical pathways and protocols, and preventative health. These technologies can be leveraged to capture the massive volume of data that describes a patient's past and present state, project potential future states, analyze that data in real time, assist in reasoning about the best way to achieve patient and physician goals, and provide both patient and physician with constant real-time support. New technologies, including computer vision, natural language understanding, and machine learning, offer interface capabilities that let individuals easily "show" or "talk to" their AI virtual assistant about what they're doing. With the algorithmic developments of deep learning, symbolic AI, computer vision, natural language, and machine learning, combined with a smartphone -- which puts the power of a supercomputer in everyone's pocket and is always with you, always on, and always connected -- we are at the beginning of the AI era.
With a single click of a button, the website allows users to upload a photo of the mystery dish; the system then uses machine learning to comb through massive mounds of data and analyze it. This association allowed the AI to retrieve the correct recipe for a food image 65 percent of the time during test trials. The technology is reminiscent of the dish recognition feature Pinterest introduced back in May, which lets users find pins and recipes relevant to the meal photos they upload.
It's the latest addition to an increasing list of similar projects, following the likes of both Pinterest and, yes, Silicon Valley's actually real hot dog-identifying app. Simply put: the researchers fed the AI more than 1 million recipes and nearly 1 million images. Instead of generating nightmare fuel, though, this web portal grabs recipes based on the food photos that you upload and ranks them by how confident the system is that the results are correct. Funnily enough, the only result that Pic2Recipe got right on the nose was when I uploaded a picture of a hot dog.
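The ranking behaviour described here — returning recipes ordered by how sure the system is — can be sketched as nearest-neighbour retrieval over a shared embedding space, with cosine similarity standing in for the confidence score. The vectors and recipe names below are toy stand-ins, not the actual model's output:

```python
import numpy as np

def rank_recipes(photo_vec, recipe_vecs, names, top_k=3):
    """Score every candidate recipe by cosine similarity to the photo's
    embedding and return the best matches, most confident first."""
    p = photo_vec / np.linalg.norm(photo_vec)
    r = recipe_vecs / np.linalg.norm(recipe_vecs, axis=1, keepdims=True)
    scores = r @ p
    order = np.argsort(scores)[::-1][:top_k]
    return [(names[i], float(scores[i])) for i in order]

# Toy 4-dimensional embeddings standing in for real model output.
names = ["hot dog", "sugar cookies", "chicken curry"]
recipe_vecs = np.array([
    [1.0, 0.0, 0.0, 0.0],   # hot dog
    [0.0, 1.0, 0.0, 0.0],   # sugar cookies
    [0.0, 0.0, 1.0, 0.0],   # chicken curry
])
photo_vec = np.array([0.9, 0.1, 0.0, 0.0])  # a photo that looks like a hot dog

for name, score in rank_recipes(photo_vec, recipe_vecs, names):
    print(f"{name}: {score:.3f}")  # "hot dog" ranks first
```

In a real deployment the photo embedding would come from a trained vision model and the recipe embeddings would be precomputed for the whole database, so ranking reduces to one matrix-vector product.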
MIT has created an artificial intelligence algorithm which can accurately tell you the recipe behind a dish after being shown nothing more than a picture. Researchers trained the deep-learning AI system, dubbed Pic2Recipe, to predict the ingredients and suggest similar recipes; in tests, the AI was correct 65 percent of the time. MIT hopes the AI could also be used to better understand our eating habits, which in turn could provide information for researchers and healthy eating initiatives in the future. In a paper to be presented later this month at the Computer Vision and Pattern Recognition conference in Honolulu, lead author and CSAIL graduate student Nick Hynes, alongside Amaia Salvador of the Polytechnic University of Catalonia in Spain, Javier Marin, Ferda Ofli and research director Ingmar Weber of QCRI, said another aim is to modernize and increase the scope of "Food-101," a 2014 project to create an algorithm capable of detecting images of food.
Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) believe that analyzing photos like these could help us learn recipes and better understand people's eating habits. In a new paper with the Qatar Computing Research Institute (QCRI), the team trained an artificial intelligence system called Pic2Recipe to look at a photo of food and be able to predict the ingredients and suggest similar recipes. "In computer vision, food is mostly neglected because we don't have the large-scale datasets needed to make predictions," says Yusuf Aytar, an MIT postdoc who co-wrote a paper about the system with MIT Professor Antonio Torralba. The CSAIL team's project aims to build on this earlier work while dramatically expanding its scope.
The team then used the recipe data to train a neural network to find patterns and make connections between the food images and the corresponding ingredients and recipes.
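The training step described here — finding connections between food images and their recipes — can be illustrated with a deliberately simplified stand-in: learning a projection that maps image features onto paired recipe features, then matching a new photo to the closest recipe. The real system uses a deep neural network over a million real pairs; everything below is synthetic, and least squares replaces gradient training only because the toy relationship is linear:

```python
import numpy as np

# Synthetic paired training data standing in for image/recipe pairs:
# each image feature vector is matched to its recipe's feature vector.
rng = np.random.default_rng(1)
n_pairs, img_dim, rec_dim = 200, 8, 5
true_map = rng.normal(size=(img_dim, rec_dim))
image_feats = rng.normal(size=(n_pairs, img_dim))
recipe_feats = image_feats @ true_map  # paired targets

# "Training": learn a projection from image space into recipe space.
W, *_ = np.linalg.lstsq(image_feats, recipe_feats, rcond=None)

# "Inference": a new photo is matched to the recipe whose features
# lie closest to the photo's projection.
query = image_feats[0]
dists = np.linalg.norm(recipe_feats - query @ W, axis=1)
print("best match index:", int(np.argmin(dists)))  # 0, the paired recipe
```

The point of the sketch is the shared-space idea: once images and recipes live in comparable feature spaces, "which recipe goes with this photo?" becomes a nearest-neighbour lookup.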
Images of the latest culinary masterpieces on social media may soon serve a practical purpose, if the creators of a new AI system have their way. MIT researchers developed Recipe1M, a database of recipes annotated with information about the ingredients in a wide range of dishes. We also tested the app using some of the most recognisable food items from a number of popular restaurants and fast food chains: the closest match Pic2Recipe came up with for a McDonald's Big Mac was a White Castle cheeseburger, a competing brand popular in the US, at a match of 88 per cent.