Social media lovers often post photos of their favorite food online. Now, researchers at the Massachusetts Institute of Technology (MIT) have developed a technology that uses artificial intelligence to look at an image of food and pick out the ingredients used in the recipe. The new technology, named Pic2Recipe, also has the potential to be developed into a "dinner assistant" that helps people analyze what's in their food when they don't have clear nutritional information.
According to the researchers, food images posted online can offer interesting insights into people's health habits and food preferences.
Seemingly trivial still pictures of food posted online can thus be used to study people's eating habits, with artificial intelligence and algorithms building a database of recipes and suggesting ingredients.
Previous models developed by other researchers did not use large datasets. By drawing on large-scale datasets, this smart food technology could in the near future also predict how a dish was prepared and provide nutritional information.
Although researchers have created datasets to train algorithms that recognize pictures of food with about 50 percent accuracy, they expect that figure to improve to around 80 percent, suggesting that the scope of the dataset may be a limiting factor. Even the largest available dataset, from the University of Hong Kong, which contains more than 110,000 images and 65,000 recipes, each with an ingredient list and instructions, has limitations because it covers only Chinese cuisine.
MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) team plans to build on this work by expanding the dataset's scope so that it generalizes across the population. According to the professors involved, the digitization of food is typically neglected, which is why large-scale datasets for making such predictions do not yet exist.
To develop this database, the researchers are combing food websites, gathering over one million recipes annotated with ingredient information across a wide range of dishes. They then use this data to train a neural network to find patterns and make associations between food images and the corresponding ingredients and recipes.
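As a rough illustration of the retrieval idea described above (not MIT's actual model), a trained network maps each photo and each recipe into a shared embedding space; finding a matching recipe then amounts to a nearest-neighbor search by cosine similarity. The tiny hand-made embeddings below are purely hypothetical stand-ins for the vectors such a network would learn:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve_recipes(image_embedding, recipe_embeddings, top_k=3):
    """Rank stored recipes by similarity to the photo's embedding."""
    ranked = sorted(recipe_embeddings.items(),
                    key=lambda kv: cosine_similarity(image_embedding, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Toy 3-dimensional embeddings standing in for learned representations.
recipes = {
    "chocolate chip cookies": [0.9, 0.1, 0.2],
    "green smoothie": [0.1, 0.9, 0.3],
    "banana bread": [0.8, 0.3, 0.1],
}
photo = [0.85, 0.15, 0.2]  # embedding of an uploaded food photo

print(retrieve_recipes(photo, recipes, top_k=2))
```

In a real system the embeddings would be hundreds of dimensions and produced by the neural network itself, but the ranking step works the same way.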
The team has an online demo where people can upload a picture of food and test the system. Given a photo of a food item, the technology can currently identify ingredients such as flour, eggs, and butter. It also suggests a number of recipes from the database that it determines to be similar to the uploaded image.
At present, the system performs well with desserts, but it has difficulty determining the ingredients of more ambiguous foods, such as smoothies. In the future, the team hopes to improve the system so that it understands food in more detail, including recognizing and distinguishing different versions of the same dish.
Later this month, the researchers will present their paper at a conference in Honolulu. The work was funded in part by QCRI, along with the European Regional Development Fund (ERDF) and the Spanish Ministry of Economy, Industry, and Competitiveness.