AI offers recipes and ingredient lists based on food photos

The CSAIL (Computer Science and Artificial Intelligence Laboratory) team has created a deep-learning algorithm that can detect and identify the ingredients used to cook a dish shown in a photo. It then suggests recipes and lists their ingredients.
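To make the ingredient-detection idea concrete, here is a minimal sketch framed as multi-label image classification with a generic CNN backbone. It is not the team's actual model; the ingredient vocabulary, the backbone choice, and the 0.5 threshold are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical ingredient vocabulary; a real system would use thousands of ingredients.
INGREDIENTS = ["flour", "sugar", "egg", "butter", "chocolate", "tomato", "basil", "cheese"]

class IngredientDetector(nn.Module):
    """Multi-label classifier: one sigmoid output per ingredient."""
    def __init__(self, num_ingredients: int):
        super().__init__()
        self.backbone = models.resnet18(weights=None)  # generic CNN backbone, untrained here
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_ingredients)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # Independent per-ingredient probabilities rather than a single class.
        return torch.sigmoid(self.backbone(image))

model = IngredientDetector(len(INGREDIENTS))
photo = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed food photo
probs = model(photo)[0]
detected = [name for name, p in zip(INGREDIENTS, probs) if p > 0.5]
print(detected)
```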



Social media has created a cult of food photography. Some people have more pictures of what they have eaten than selfies. However, those images simply sit on our phones, and we rarely look at them again.


CSAIL believes that if an algorithm can tell us the ingredients and other details of the food we eat, we can make more consistent, healthier choices. It can help us understand our eating habits and cravings, recall recipes, and encourage us to cook at home. The researchers tested the app, called Pic2Recipe, on its ability to identify the ingredients and the recipe of a dish from a photo. It also suggests recipes a person might prefer.
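The recipe-suggestion step can be pictured as nearest-neighbor retrieval: the photo is embedded into the same space as a database of recipes, and the closest match is returned. The sketch below uses random stand-in embeddings and a made-up three-recipe database purely for illustration; it is not Pic2Recipe's actual code.

```python
import numpy as np

# Toy recipe database: recipe title -> precomputed embedding (random stand-ins here).
rng = np.random.default_rng(0)
recipe_db = {
    "chocolate chip cookies": rng.normal(size=128),
    "margherita pizza":       rng.normal(size=128),
    "caesar salad":           rng.normal(size=128),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def suggest_recipe(photo_embedding: np.ndarray) -> str:
    """Return the recipe whose embedding is closest to the photo's embedding."""
    return max(recipe_db, key=lambda title: cosine(photo_embedding, recipe_db[title]))

photo_embedding = rng.normal(size=128)  # would come from an image encoder in practice
print(suggest_recipe(photo_embedding))
```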


Computer vision research has largely ignored food, because there were no large datasets that would let researchers work on the topic. However, the social media photos so many people complain about, like pictures of what everybody ate today, can give great statistical insight into the food preferences of an audience and say a lot about a person's eating habits, health, and lifestyle.


Yusuf Aytar and Antonio Torralba co-authored a paper about this deep-learning algorithm. Its main ideas were presented at the Computer Vision and Pattern Recognition (CVPR) conference in Honolulu.


A computer model can now anticipate the movements of objects about as well as people do. CSAIL has a project that uses a 3-D physics engine to predict how an object will move; in effect, it is a kind of brain simulation.


We rarely appreciate the things we can do by default, especially predicting the paths that objects will follow. Almost everybody enjoys complicated Rube Goldberg machines, and one of the main reasons is that we can follow the chain of objects interacting, falling, rolling, and crashing, and anticipate the next move; that anticipation is what satisfies us the most. Have you ever wondered how we acquire this knowledge and experience, how our mechanisms of anticipation work, and whether we can teach artificial intelligence something similar?


Researchers at MIT's CSAIL (Computer Science and Artificial Intelligence Laboratory) wanted to teach AI visual anticipation, making it guess the direction and motion of an object as well as humans do.


For this purpose, they used a 3-D physics engine to simulate intuition similar to a human's. The system, called Galileo, trained by watching real videos to understand physical processes and the possible outcomes of actions in different situations.
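The core idea, comparing what a physics engine would simulate against what a video actually shows, can be sketched as a simple parameter search. The ramp scenario, the constant-friction model, and all the numbers below are illustrative assumptions, not Galileo's actual code.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def simulate_slide(theta_deg: float, mu: float, t: float) -> float:
    """Distance a block slides down a ramp after t seconds, assuming constant
    sliding friction (a crude stand-in for a full 3-D physics engine)."""
    theta = math.radians(theta_deg)
    accel = max(0.0, G * (math.sin(theta) - mu * math.cos(theta)))
    return 0.5 * accel * t ** 2

def infer_friction(theta_deg: float, t: float, observed_distance: float) -> float:
    """Pick the friction coefficient whose simulated outcome best matches the video."""
    candidates = [i / 100 for i in range(0, 101)]
    return min(candidates,
               key=lambda mu: abs(simulate_slide(theta_deg, mu, t) - observed_distance))

# e.g. the tracked object slid 1.2 m in 1 s down a 30-degree ramp
print(infer_friction(30.0, 1.0, 1.2))
```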


The researchers tried to get the system to generalize about physical laws across the situations in the videos it trained on, mostly objects falling, rolling, and colliding. The trial succeeded, which suggests it is possible to teach machines to predict movement according to fundamental laws of physics.


Take the ramp case, for instance. In that situation, Galileo can estimate the density of an object and work out whether it would float. This is only a first step in teaching deep-learning systems, but the main conclusion from the case is that, with proper training, it is possible to teach an AI system the basic laws of physics.
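The buoyancy check itself reduces to a one-line density comparison. The mass and volume figures below are made-up stand-ins for values the system would estimate from video.

```python
WATER_DENSITY = 1000.0  # kg/m^3

def will_float(estimated_mass_kg: float, estimated_volume_m3: float) -> bool:
    """An object floats in water when its density is below that of water."""
    density = estimated_mass_kg / estimated_volume_m3
    return density < WATER_DENSITY

# Hypothetical estimates recovered from the ramp observation:
# a 0.4 kg block occupying 0.0005 m^3 has density 800 kg/m^3, so it floats.
print(will_float(0.4, 0.0005))  # True
```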