Hmv123/Food_Calorie_Tracker

Plate Calorie Analyzer

Overview

This project is a Colab notebook demo that analyzes a photo of a food plate to detect food items, estimate quantities and calories, compute total macronutrients, assign a simple nutritional score, and suggest ways to improve meal balance. It uses an image-to-text pipeline (the image is encoded as base64), a large language model (via LangChain/OpenAI wrappers) for interpretation, and Gradio for the interactive UI.

Files / primary artifact

  • Food_Calorie_Tracker.ipynb — main notebook (encoding, LLM prompt, Gradio UI)
  • .env (recommended) — contains OPENAI_API_KEY if using OpenAI
  • (Optional) requirements.txt — list of Python dependencies

Dependencies

Recommended Python packages (create a virtual environment before installing):

  • pillow
  • gradio
  • langchain-openai (or langchain + compatible OpenAI client)
  • langchain-core
  • python-dotenv
  • requests (optional)

Example requirements.txt snippet:

pillow
gradio
langchain-openai
langchain-core
python-dotenv
requests

Setup

  1. Create and activate a virtual environment, then install dependencies (PowerShell example):

     python -m venv .venv
     .\.venv\Scripts\Activate.ps1
     pip install -r requirements.txt

  2. Create a .env file in the notebook directory with your OpenAI API key:

     OPENAI_API_KEY=your_api_key_here

  3. Open Food_Calorie_Tracker.ipynb in Jupyter or VS Code and run the cells top to bottom. The Gradio demo cell launches an interactive UI (the notebook uses demo.launch(share=True)).

Usage

  • Upload a plate image (clear, top-down photo works best).
  • Select Meal Type (Breakfast/Lunch/Dinner) and Diet Type (Vegan/Vegetarian/Non-Vegetarian).
  • Click "Analyze" and wait for the streamed LLM output — the UI shows progress as the model responds.

How it works (high level)

  1. User uploads a plate image in Gradio (the notebook encodes it to base64).
  2. The notebook builds a structured nutrition prompt and embeds the base64 image as an image_url object.
  3. The prompt is sent to an LLM via LangChain’s ChatOpenAI wrapper. Responses are streamed back so the UI shows incremental progress.
  4. Output is expected in a fixed markdown/table format (Items Detected, Total Nutrition, Nutritional Score, What's Missing).
  5. Results are displayed in the Gradio Markdown output area.
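Steps 1–3 above can be sketched in Python. The helper names (`encode_image`, `build_messages`) and the prompt wording are illustrative, not taken from the notebook; the message shape is the standard OpenAI-style multimodal format that LangChain chat models accept:

```python
import base64


def encode_image(path: str) -> str:
    """Read an image file and return its contents as a base64 string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def build_messages(image_b64: str, meal_type: str, diet_type: str) -> list:
    """Assemble a multimodal message: text instructions plus an
    image_url object carrying the base64-encoded plate photo."""
    return [{
        "role": "user",
        "content": [
            {"type": "text",
             "text": f"Analyze this {diet_type} {meal_type} plate and "
                     "report items, calories, and macronutrients."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }]
```

With LangChain, this message list would then be passed to something like `ChatOpenAI(...).stream(messages)` so the Gradio callback can yield partial text as it arrives.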

Prompt & output format

  • The notebook uses a strict format in the prompt asking the model to return:
    • A table of detected items with Quantity, Calories, Protein, Carbs, Fat
    • Total nutrition summary
    • Nutritional score (X/10 components + overall)
    • Three suggestions to improve balance
  • Keeping this fixed format makes downstream parsing easier.
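A prompt skeleton enforcing that structure could look like the following. This is a hedged reconstruction, not the notebook's actual prompt text; only the four section names come from the source:

```python
# Illustrative prompt skeleton; the notebook's exact wording may differ.
NUTRITION_PROMPT = """You are a nutrition assistant. Analyze the attached plate image.
Return your answer in EXACTLY this markdown structure:

## Items Detected
| Item | Quantity | Calories | Protein (g) | Carbs (g) | Fat (g) |
|------|----------|----------|-------------|-----------|---------|

## Total Nutrition
(one summary row totalling calories, protein, carbs, and fat)

## Nutritional Score
(component scores as X/10, plus an overall X/10)

## What's Missing
(exactly three suggestions to improve meal balance)
"""
```

Pinning the section headings and table columns like this is what makes the output parseable with simple string or regex matching downstream.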

Limitations & caveats

  • The notebook relies on the LLM to interpret images via base64 embedding inside text prompts — accuracy depends on LLM multimodal capability.
  • Calories and portions are approximate; not a certified nutrition tool.
  • Using demo.launch(share=True) exposes a public Gradio URL while running — be careful with sensitive images.
  • Streaming capability depends on the installed LangChain/OpenAI wrapper supporting llm.stream.
  • Network and API costs: using OpenAI will consume tokens and may be billable.

Privacy & security

  • Uploaded images are included in prompts and sent to the model provider. If you need local-only processing, replace the LLM step with a local vision model or remove share=True from Gradio.
  • Never commit .env to source control.

Testing suggestions

  • Test with a few plate photos of varying complexity: single-item (banana), mixed plate (rice + curry + veg), and multi-serving plates.
  • Inspect the raw LLM output if parsing fails; adjust the prompt to enforce exact formatting.
  • If streaming fails, switch to a non-streamed Chat API call to get the whole output at once.
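One way to make that fallback automatic is a small wrapper that tries `llm.stream` and drops back to `llm.invoke`. Both are real LangChain chat-model methods; the wrapper itself is a sketch and the defensive `getattr` calls are there so it also tolerates non-LangChain clients:

```python
def run_llm(llm, messages) -> str:
    """Prefer streaming; fall back to a single blocking call if the
    installed wrapper does not support it."""
    try:
        parts = []
        for chunk in llm.stream(messages):
            # LangChain message chunks expose .content.
            parts.append(getattr(chunk, "content", str(chunk)))
        return "".join(parts)
    except (AttributeError, NotImplementedError):
        result = llm.invoke(messages)
        return getattr(result, "content", str(result))
```

In the notebook you would call `run_llm(llm, messages)` from the Gradio handler instead of iterating over `llm.stream` directly.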

Next steps / improvements

  • Integrate a local vision model (CLIP/YOLO) to detect items and send structured labels to the LLM.
  • Add a local calorie database to convert detected items to better calorie estimates.
  • Wrap the notebook into a Flask/FastAPI service for production deployment and add tests.
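The local calorie database could start as small as a dictionary keyed by item name. The per-100 g values below are typical reference figures but should be replaced with a vetted source (e.g. USDA FoodData Central) before relying on them:

```python
# Approximate kcal per 100 g; illustrative values, not a vetted database.
CALORIES_PER_100G = {
    "banana": 89,
    "white rice (cooked)": 130,
    "chicken breast (cooked)": 165,
}


def estimate_calories(item: str, grams: float):
    """Scale a per-100 g entry to the given portion; None if unknown."""
    per_100 = CALORIES_PER_100G.get(item.lower())
    if per_100 is None:
        return None
    return per_100 * grams / 100.0
```

With a lookup like this in place, the LLM only needs to return item names and portion sizes, and the calorie arithmetic becomes deterministic and auditable.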

About

The Plate Calorie Analyzer notebook transforms a user-uploaded plate photograph into a concise nutritional report using a hybrid image-to-text and LLM pipeline.
