LLaVA-Chef: A Multi-modal Generative Model for Food Recipes
Mohbat, Fnu; Zaki, Mohammed J.
arXiv.org Artificial Intelligence
In the rapidly evolving landscape of online recipe sharing within a globalized context, there has been a notable surge in research towards comprehending and generating food recipes. Recent advancements in large language models (LLMs) like GPT-2 and LLaVA have paved the way for Natural Language Processing (NLP) approaches to delve deeper into various facets of food-related tasks, encompassing ingredient recognition and comprehensive recipe generation. Despite the impressive performance and multi-modal adaptability of LLMs, domain-specific training remains paramount for their effective application. This work evaluates existing LLMs for recipe generation and proposes LLaVA-Chef, a novel model trained on a curated dataset of diverse recipe prompts in a multi-stage approach. First, we refine the mapping of visual food image embeddings to the language space. Second, we adapt LLaVA to the food domain by fine-tuning it on relevant recipe data. Third, we utilize diverse prompts to enhance the model's recipe comprehension. Finally, we improve the linguistic quality of generated recipes by penalizing the model with a custom loss function. LLaVA-Chef demonstrates impressive improvements over pretrained LLMs and prior works. A detailed qualitative analysis reveals that LLaVA-Chef generates more detailed recipes with precise ingredient mentions than existing approaches.
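The abstract's final training stage relies on a custom loss that penalizes low linguistic quality, but its exact formulation is not given here. As a hypothetical sketch only (the `penalty_weight` parameter and the `quality_score` term are assumptions for illustration, not the paper's loss), such an objective might combine standard cross-entropy with a weighted quality penalty:

```python
def recipe_loss(ce_loss: float, quality_score: float,
                penalty_weight: float = 0.1) -> float:
    """Hypothetical combined objective (not the paper's exact loss):
    standard token-level cross-entropy plus a penalty that grows as a
    linguistic quality score in [0, 1] (e.g. a BLEU-like sentence
    score against the reference recipe) drops."""
    return ce_loss + penalty_weight * (1.0 - quality_score)

# A maximally fluent generation (quality_score = 1.0) adds no penalty,
# so the objective reduces to plain cross-entropy.
print(recipe_loss(2.0, quality_score=1.0))
```

The penalty weight would trade off fluency against the base language-modeling objective; too large a weight could let the model game the quality metric at the expense of recipe correctness.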
Aug-29-2024
- Genre:
- Research Report (0.70)
- Industry:
- Health & Medicine > Consumer Health (0.68)