@Chesars Chesars commented Dec 12, 2025

Title

fix(cost_calculator): correct gpt-image-1 cost calculation using token-based pricing

Relevant issues

Fixes #13847

Pre-Submission checklist

  • I have added testing in the tests/litellm/ directory
  • My PR passes all unit tests when run with make test-unit
  • My PR's scope is as isolated as possible; it solves one specific problem

Type

🐛 Bug Fix

Changes

Summary

The current cost calculator for gpt-image-1 uses pixel-based pricing (inherited from DALL-E), but OpenAI's gpt-image-1 actually uses token-based pricing.

Example

import litellm

response = litellm.image_generation(
    model="gpt-image-1",
    prompt="A cat sitting on a windowsill",
    size="1024x1024",
    quality="medium",
)

# Response includes usage data:
# response.usage.input_tokens = 150
# response.usage.output_tokens = 8000
# response.usage.input_tokens_details.text_tokens = 150
# response.usage.input_tokens_details.image_tokens = 0

# Cost with this fix is calculated correctly using tokens
cost = litellm.completion_cost(completion_response=response)
# Returns: $0.32075 (not $0.042 as before)

Old calculation (incorrect - DALL-E style):

cost = input_cost_per_pixel × width × height × n
cost = 4.0054321e-08 × 1024 × 1024 × 1 = $0.042

New calculation (token-based):

# OpenAI gpt-image-1 pricing:
# - Text input:   $5.00 / 1M tokens
# - Image input:  $10.00 / 1M tokens
# - Image output: $40.00 / 1M tokens
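Combining those rates with the usage values from the example response reproduces the $0.32075 figure; this is plain arithmetic for illustration, not litellm code:

```python
# gpt-image-1 token prices in USD per token (from the rates listed above)
TEXT_INPUT = 5e-06    # $5.00  / 1M tokens
IMAGE_INPUT = 1e-05   # $10.00 / 1M tokens
IMAGE_OUTPUT = 4e-05  # $40.00 / 1M tokens

# usage values from the example response above
text_tokens = 150
image_input_tokens = 0
image_output_tokens = 8000

cost = (
    text_tokens * TEXT_INPUT
    + image_input_tokens * IMAGE_INPUT
    + image_output_tokens * IMAGE_OUTPUT
)
print(round(cost, 5))  # 0.32075
```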

Solution

  1. Updated pricing JSON: Changed gpt-image-1 entries from input_cost_per_pixel to token-based fields:

    • input_cost_per_token: 5e-06 ($5/1M)
    • input_cost_per_image_token: 1e-05 ($10/1M)
    • output_cost_per_image_token: 4e-05 ($40/1M)
  2. New cost calculator: litellm/llms/openai/image_generation/cost_calculator.py

    • Uses usage data from the API response
    • Calculates: text_input + image_input + image_output costs
  3. Updated router: Routes gpt-image-1 models to the new token-based calculator while DALL-E continues to use pixel-based calculation
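A minimal sketch of what the token-based calculator in step 2 could look like; the function name and default rates here are illustrative assumptions, not the PR's actual cost_calculator.py:

```python
def gpt_image_token_cost(
    text_tokens: int,
    image_input_tokens: int,
    image_output_tokens: int,
    input_cost_per_token: float = 5e-06,         # $5 / 1M text-input tokens
    input_cost_per_image_token: float = 1e-05,   # $10 / 1M image-input tokens
    output_cost_per_image_token: float = 4e-05,  # $40 / 1M image-output tokens
) -> float:
    """Sum text-input, image-input, and image-output token costs,
    mirroring the three pricing fields added to the pricing JSON."""
    return (
        text_tokens * input_cost_per_token
        + image_input_tokens * input_cost_per_image_token
        + image_output_tokens * output_cost_per_image_token
    )

# Example from the summary: 150 text tokens in, 8000 image tokens out
print(round(gpt_image_token_cost(150, 0, 8000), 5))  # 0.32075
```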

Files changed

  • model_prices_and_context_window.json - Updated gpt-image-1 pricing
  • litellm/model_prices_and_context_window_backup.json - Same
  • litellm/llms/openai/image_generation/cost_calculator.py - New calculator
  • litellm/litellm_core_utils/llm_cost_calc/utils.py - Router update
  • tests/test_litellm/test_gpt_image_cost_calculator.py - 8 new tests
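The router update can be pictured as a simple prefix dispatch. This sketch assumes a startswith("gpt-image") routing rule and hypothetical calculator names, which may differ from the real utils.py:

```python
def pixel_based_cost(width: int, height: int,
                     cost_per_pixel: float = 4.0054321e-08) -> float:
    # legacy DALL-E path: price scales with the number of output pixels
    return width * height * cost_per_pixel

def token_based_cost(text_tokens: int, image_output_tokens: int) -> float:
    # gpt-image-1 path: price scales with input/output tokens
    return text_tokens * 5e-06 + image_output_tokens * 4e-05

def select_image_cost_fn(model: str):
    # assumed routing rule: gpt-image* models use token pricing,
    # every other image model keeps the pixel-based calculation
    if model.startswith("gpt-image"):
        return token_based_cost
    return pixel_based_cost

print(select_image_cost_fn("gpt-image-1") is token_based_cost)  # True
print(select_image_cost_fn("dall-e-3") is pixel_based_cost)     # True
```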

…n-based pricing (BerriAI#13847)

gpt-image-1 uses token-based pricing (like chat models), not pixel-based pricing
like DALL-E. The old code was calculating incorrect costs by treating it as DALL-E.

Changes:
- Update model pricing JSON with correct token-based costs for gpt-image-1
- Add dedicated cost calculator for OpenAI gpt-image models
- Route gpt-image-1 to token-based calculator in cost router
- Add comprehensive tests for the new calculator


Development

Successfully merging this pull request may close these issues.

[Bug]: gpt-image-1 cost calculation missing input token costs
