Merged
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "2.13.0"
".": "2.14.0"
}
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 137
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-cded37ac004364c2110ebdacf922ef611b3c51258790c72ca479dcfad4df66aa.yml
openapi_spec_hash: 6e615d34cf8c6bc76e0c6933fc8569af
config_hash: d013f4fdd4dd59c6f376a9ca482b7f9e
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-75926226b642ebb2cb415694da9dff35e8ab40145ac1b791cefb82a83809db4d.yml
openapi_spec_hash: 6a0e391b0ba5747b6b4a3e5fe21de4da
config_hash: adcf23ecf5f84d3cadf1d71e82ec636a
18 changes: 18 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,23 @@
# Changelog

## 2.14.0 (2025-12-19)

Full Changelog: [v2.13.0...v2.14.0](https://github.com/openai/openai-python/compare/v2.13.0...v2.14.0)

### Features

* **api:** slugs for new audio models; make all `model` params accept strings ([e517792](https://github.com/openai/openai-python/commit/e517792b58d1768cfb3432a555a354ae0a9cfa21))


### Bug Fixes

* use async_to_httpx_files in patch method ([a6af9ee](https://github.com/openai/openai-python/commit/a6af9ee5643197222f328d5e73a80ab3515c32e2))


### Chores

* **internal:** add `--fix` argument to lint script ([93107ef](https://github.com/openai/openai-python/commit/93107ef36abcfd9c6b1419533a1720031f03caec))

## 2.13.0 (2025-12-16)

Full Changelog: [v2.12.0...v2.13.0](https://github.com/openai/openai-python/compare/v2.12.0...v2.13.0)
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "openai"
version = "2.13.0"
version = "2.14.0"
description = "The official Python library for the openai API"
dynamic = ["readme"]
license = "Apache-2.0"
9 changes: 7 additions & 2 deletions scripts/lint
@@ -4,8 +4,13 @@ set -e

cd "$(dirname "$0")/.."

echo "==> Running lints"
rye run lint
if [ "$1" = "--fix" ]; then
echo "==> Running lints with --fix"
rye run fix:ruff
else
echo "==> Running lints"
rye run lint
fi

echo "==> Making sure it imports"
rye run python -c 'import openai'
2 changes: 1 addition & 1 deletion src/openai/_base_client.py
@@ -1806,7 +1806,7 @@ async def patch(
options: RequestOptions = {},
) -> ResponseT:
opts = FinalRequestOptions.construct(
method="patch", url=path, json_data=body, files=to_httpx_files(files), **options
method="patch", url=path, json_data=body, files=await async_to_httpx_files(files), **options
)
return await self.request(cast_to, opts)

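The one-line fix above swaps a synchronous file-conversion helper for its awaitable counterpart in the async client's `patch` method, so file reads no longer run directly on the event loop. A minimal sketch of the pattern (the function below is illustrative, not the SDK's actual `async_to_httpx_files` implementation):

```python
import asyncio
import io

async def to_httpx_files_async(files):
    # Sketch: reading file handles is blocking I/O, so offload it to a
    # worker thread instead of stalling the event loop. The SDK's
    # async_to_httpx_files helper plays this role inside `patch()`.
    def read_all():
        return [(name, fh.read()) for name, fh in files]
    return await asyncio.to_thread(read_all)

converted = asyncio.run(to_httpx_files_async([("file", io.BytesIO(b"abc"))]))
```

Using the sync `to_httpx_files` in the async path worked, but performed blocking reads on the loop; awaiting the async variant keeps the client responsive under concurrency.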
2 changes: 1 addition & 1 deletion src/openai/_version.py
@@ -1,4 +1,4 @@
# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

__title__ = "openai"
__version__ = "2.13.0" # x-release-please-version
__version__ = "2.14.0" # x-release-please-version
4 changes: 2 additions & 2 deletions src/openai/resources/audio/speech.py
@@ -72,7 +72,7 @@ def create(
model:
One of the available [TTS models](https://platform.openai.com/docs/models#tts):
`tts-1`, `tts-1-hd` or `gpt-4o-mini-tts`.
`tts-1`, `tts-1-hd`, `gpt-4o-mini-tts`, or `gpt-4o-mini-tts-2025-12-15`.
voice: The voice to use when generating the audio. Supported voices are `alloy`, `ash`,
`ballad`, `coral`, `echo`, `fable`, `onyx`, `nova`, `sage`, `shimmer`, and
@@ -168,7 +168,7 @@ async def create(
model:
One of the available [TTS models](https://platform.openai.com/docs/models#tts):
`tts-1`, `tts-1-hd` or `gpt-4o-mini-tts`.
`tts-1`, `tts-1-hd`, `gpt-4o-mini-tts`, or `gpt-4o-mini-tts-2025-12-15`.
voice: The voice to use when generating the audio. Supported voices are `alloy`, `ash`,
`ballad`, `coral`, `echo`, `fable`, `onyx`, `nova`, `sage`, `shimmer`, and
65 changes: 36 additions & 29 deletions src/openai/resources/audio/transcriptions.py
@@ -91,8 +91,9 @@ def create(
flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.

model: ID of the model to use. The options are `gpt-4o-transcribe`,
`gpt-4o-mini-transcribe`, and `whisper-1` (which is powered by our open source
Whisper V2 model).
`gpt-4o-mini-transcribe`, `gpt-4o-mini-transcribe-2025-12-15`, `whisper-1`
(which is powered by our open source Whisper V2 model), and
`gpt-4o-transcribe-diarize`.

chunking_strategy: Controls how the audio is cut into chunks. When set to `"auto"`, the server
first normalizes loudness and then uses voice activity detection (VAD) to choose
@@ -102,8 +103,9 @@
include: Additional information to include in the transcription response. `logprobs` will
return the log probabilities of the tokens in the response to understand the
model's confidence in the transcription. `logprobs` only works with
response_format set to `json` and only with the models `gpt-4o-transcribe` and
`gpt-4o-mini-transcribe`.
response_format set to `json` and only with the models `gpt-4o-transcribe`,
`gpt-4o-mini-transcribe`, and `gpt-4o-mini-transcribe-2025-12-15`. This field is
not supported when using `gpt-4o-transcribe-diarize`.

language: The language of the input audio. Supplying the input language in
[ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) (e.g. `en`)
@@ -239,8 +241,9 @@ def create(
flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.

model: ID of the model to use. The options are `gpt-4o-transcribe`,
`gpt-4o-mini-transcribe`, `whisper-1` (which is powered by our open source
Whisper V2 model), and `gpt-4o-transcribe-diarize`.
`gpt-4o-mini-transcribe`, `gpt-4o-mini-transcribe-2025-12-15`, `whisper-1`
(which is powered by our open source Whisper V2 model), and
`gpt-4o-transcribe-diarize`.

stream: If set to true, the model response data will be streamed to the client as it is
generated using
@@ -261,9 +264,9 @@
include: Additional information to include in the transcription response. `logprobs` will
return the log probabilities of the tokens in the response to understand the
model's confidence in the transcription. `logprobs` only works with
response_format set to `json` and only with the models `gpt-4o-transcribe` and
`gpt-4o-mini-transcribe`. This field is not supported when using
`gpt-4o-transcribe-diarize`.
response_format set to `json` and only with the models `gpt-4o-transcribe`,
`gpt-4o-mini-transcribe`, and `gpt-4o-mini-transcribe-2025-12-15`. This field is
not supported when using `gpt-4o-transcribe-diarize`.

known_speaker_names: Optional list of speaker names that correspond to the audio samples provided in
`known_speaker_references[]`. Each entry should be a short identifier (for
@@ -346,8 +349,9 @@ def create(
flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.

model: ID of the model to use. The options are `gpt-4o-transcribe`,
`gpt-4o-mini-transcribe`, `whisper-1` (which is powered by our open source
Whisper V2 model), and `gpt-4o-transcribe-diarize`.
`gpt-4o-mini-transcribe`, `gpt-4o-mini-transcribe-2025-12-15`, `whisper-1`
(which is powered by our open source Whisper V2 model), and
`gpt-4o-transcribe-diarize`.

stream: If set to true, the model response data will be streamed to the client as it is
generated using
@@ -368,9 +372,9 @@
include: Additional information to include in the transcription response. `logprobs` will
return the log probabilities of the tokens in the response to understand the
model's confidence in the transcription. `logprobs` only works with
response_format set to `json` and only with the models `gpt-4o-transcribe` and
`gpt-4o-mini-transcribe`. This field is not supported when using
`gpt-4o-transcribe-diarize`.
response_format set to `json` and only with the models `gpt-4o-transcribe`,
`gpt-4o-mini-transcribe`, and `gpt-4o-mini-transcribe-2025-12-15`. This field is
not supported when using `gpt-4o-transcribe-diarize`.

known_speaker_names: Optional list of speaker names that correspond to the audio samples provided in
`known_speaker_references[]`. Each entry should be a short identifier (for
@@ -535,8 +539,9 @@ async def create(
flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.

model: ID of the model to use. The options are `gpt-4o-transcribe`,
`gpt-4o-mini-transcribe`, `whisper-1` (which is powered by our open source
Whisper V2 model), and `gpt-4o-transcribe-diarize`.
`gpt-4o-mini-transcribe`, `gpt-4o-mini-transcribe-2025-12-15`, `whisper-1`
(which is powered by our open source Whisper V2 model), and
`gpt-4o-transcribe-diarize`.

chunking_strategy: Controls how the audio is cut into chunks. When set to `"auto"`, the server
first normalizes loudness and then uses voice activity detection (VAD) to choose
@@ -548,9 +553,9 @@
include: Additional information to include in the transcription response. `logprobs` will
return the log probabilities of the tokens in the response to understand the
model's confidence in the transcription. `logprobs` only works with
response_format set to `json` and only with the models `gpt-4o-transcribe` and
`gpt-4o-mini-transcribe`. This field is not supported when using
`gpt-4o-transcribe-diarize`.
response_format set to `json` and only with the models `gpt-4o-transcribe`,
`gpt-4o-mini-transcribe`, and `gpt-4o-mini-transcribe-2025-12-15`. This field is
not supported when using `gpt-4o-transcribe-diarize`.

known_speaker_names: Optional list of speaker names that correspond to the audio samples provided in
`known_speaker_references[]`. Each entry should be a short identifier (for
@@ -679,8 +684,9 @@ async def create(
flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.

model: ID of the model to use. The options are `gpt-4o-transcribe`,
`gpt-4o-mini-transcribe`, `whisper-1` (which is powered by our open source
Whisper V2 model), and `gpt-4o-transcribe-diarize`.
`gpt-4o-mini-transcribe`, `gpt-4o-mini-transcribe-2025-12-15`, `whisper-1`
(which is powered by our open source Whisper V2 model), and
`gpt-4o-transcribe-diarize`.

stream: If set to true, the model response data will be streamed to the client as it is
generated using
@@ -701,9 +707,9 @@
include: Additional information to include in the transcription response. `logprobs` will
return the log probabilities of the tokens in the response to understand the
model's confidence in the transcription. `logprobs` only works with
response_format set to `json` and only with the models `gpt-4o-transcribe` and
`gpt-4o-mini-transcribe`. This field is not supported when using
`gpt-4o-transcribe-diarize`.
response_format set to `json` and only with the models `gpt-4o-transcribe`,
`gpt-4o-mini-transcribe`, and `gpt-4o-mini-transcribe-2025-12-15`. This field is
not supported when using `gpt-4o-transcribe-diarize`.

known_speaker_names: Optional list of speaker names that correspond to the audio samples provided in
`known_speaker_references[]`. Each entry should be a short identifier (for
@@ -786,8 +792,9 @@ async def create(
flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.

model: ID of the model to use. The options are `gpt-4o-transcribe`,
`gpt-4o-mini-transcribe`, `whisper-1` (which is powered by our open source
Whisper V2 model), and `gpt-4o-transcribe-diarize`.
`gpt-4o-mini-transcribe`, `gpt-4o-mini-transcribe-2025-12-15`, `whisper-1`
(which is powered by our open source Whisper V2 model), and
`gpt-4o-transcribe-diarize`.

stream: If set to true, the model response data will be streamed to the client as it is
generated using
@@ -808,9 +815,9 @@
include: Additional information to include in the transcription response. `logprobs` will
return the log probabilities of the tokens in the response to understand the
model's confidence in the transcription. `logprobs` only works with
response_format set to `json` and only with the models `gpt-4o-transcribe` and
`gpt-4o-mini-transcribe`. This field is not supported when using
`gpt-4o-transcribe-diarize`.
response_format set to `json` and only with the models `gpt-4o-transcribe`,
`gpt-4o-mini-transcribe`, and `gpt-4o-mini-transcribe-2025-12-15`. This field is
not supported when using `gpt-4o-transcribe-diarize`.

known_speaker_names: Optional list of speaker names that correspond to the audio samples provided in
`known_speaker_references[]`. Each entry should be a short identifier (for
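The updated docstrings restate one compatibility rule in each overload: `logprobs` requires `response_format="json"` and a non-diarize `gpt-4o` transcribe model. A small hypothetical guard (not part of the SDK) makes the rule concrete:

```python
def logprobs_supported(model: str, response_format: str) -> bool:
    # Per the docstrings above: `logprobs` only works with
    # response_format set to `json`, and only with the gpt-4o
    # transcribe models -- never with `gpt-4o-transcribe-diarize`
    # or `whisper-1`.
    allowed = {
        "gpt-4o-transcribe",
        "gpt-4o-mini-transcribe",
        "gpt-4o-mini-transcribe-2025-12-15",
    }
    return response_format == "json" and model in allowed
```

The server enforces this rule; a client-side check like this would only make the failure mode earlier and clearer.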
4 changes: 4 additions & 0 deletions src/openai/resources/realtime/calls.py
@@ -125,8 +125,10 @@ def accept(
"gpt-4o-mini-realtime-preview-2024-12-17",
"gpt-realtime-mini",
"gpt-realtime-mini-2025-10-06",
"gpt-realtime-mini-2025-12-15",
"gpt-audio-mini",
"gpt-audio-mini-2025-10-06",
"gpt-audio-mini-2025-12-15",
],
]
| Omit = omit,
@@ -450,8 +452,10 @@ async def accept(
"gpt-4o-mini-realtime-preview-2024-12-17",
"gpt-realtime-mini",
"gpt-realtime-mini-2025-10-06",
"gpt-realtime-mini-2025-12-15",
"gpt-audio-mini",
"gpt-audio-mini-2025-10-06",
"gpt-audio-mini-2025-12-15",
],
]
| Omit = omit,
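The new `-2025-12-15` slugs above are added to a `Union[str, Literal[...]]` parameter, which matches the release's "make all `model` params accept strings" feature. A sketch of the typing pattern (the alias name and slug subset here are illustrative, not copied from the SDK):

```python
from typing import Literal, Union, get_args

# The Literal arm gives editors autocomplete and exhaustiveness for
# known slugs; the `str` arm lets callers pass a newly released
# snapshot before the SDK ships an updated Literal.
RealtimeModelParam = Union[
    str,
    Literal[
        "gpt-realtime-mini",
        "gpt-realtime-mini-2025-10-06",
        "gpt-realtime-mini-2025-12-15",
    ],
]
```

With this shape, an unknown string still type-checks at call sites, so users are never blocked on an SDK release to try a new model.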
11 changes: 5 additions & 6 deletions src/openai/resources/videos.py
@@ -10,7 +10,6 @@
from .. import _legacy_response
from ..types import (
VideoSize,
VideoModel,
VideoSeconds,
video_list_params,
video_remix_params,
@@ -34,8 +33,8 @@
from .._base_client import AsyncPaginator, make_request_options
from .._utils._utils import is_given
from ..types.video_size import VideoSize
from ..types.video_model import VideoModel
from ..types.video_seconds import VideoSeconds
from ..types.video_model_param import VideoModelParam
from ..types.video_delete_response import VideoDeleteResponse

__all__ = ["Videos", "AsyncVideos"]
@@ -66,7 +65,7 @@ def create(
*,
prompt: str,
input_reference: FileTypes | Omit = omit,
model: VideoModel | Omit = omit,
model: VideoModelParam | Omit = omit,
seconds: VideoSeconds | Omit = omit,
size: VideoSize | Omit = omit,
# Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
@@ -130,7 +129,7 @@ def create_and_poll(
*,
prompt: str,
input_reference: FileTypes | Omit = omit,
model: VideoModel | Omit = omit,
model: VideoModelParam | Omit = omit,
seconds: VideoSeconds | Omit = omit,
size: VideoSize | Omit = omit,
poll_interval_ms: int | Omit = omit,
@@ -421,7 +420,7 @@ async def create(
*,
prompt: str,
input_reference: FileTypes | Omit = omit,
model: VideoModel | Omit = omit,
model: VideoModelParam | Omit = omit,
seconds: VideoSeconds | Omit = omit,
size: VideoSize | Omit = omit,
# Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
@@ -485,7 +484,7 @@ async def create_and_poll(
*,
prompt: str,
input_reference: FileTypes | Omit = omit,
model: VideoModel | Omit = omit,
model: VideoModelParam | Omit = omit,
seconds: VideoSeconds | Omit = omit,
size: VideoSize | Omit = omit,
poll_interval_ms: int | Omit = omit,
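The `VideoModel` → `VideoModelParam` swap in the signatures above follows the SDK convention of separating response-side types from request-side ones: responses keep a closed `Literal`, while the `*Param` alias widens to accept any string. A sketch under that assumption (the `Literal` members below are illustrative):

```python
from typing import Literal, Union, get_args

# Response-side alias: a closed set of known slugs.
VideoModel = Literal["sora-2", "sora-2-pro"]

# Request-side alias: the same slugs, plus arbitrary strings so
# callers can target new snapshots immediately (assumed shape,
# mirroring the SDK's *Param convention).
VideoModelParam = Union[str, VideoModel]
```

This is why the import list changes from `..types.video_model` to `..types.video_model_param` only in `resources/videos.py`, where the type annotates request parameters.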
1 change: 1 addition & 0 deletions src/openai/types/__init__.py
@@ -53,6 +53,7 @@
from .completion_choice import CompletionChoice as CompletionChoice
from .image_edit_params import ImageEditParams as ImageEditParams
from .video_list_params import VideoListParams as VideoListParams
from .video_model_param import VideoModelParam as VideoModelParam
from .eval_create_params import EvalCreateParams as EvalCreateParams
from .eval_list_response import EvalListResponse as EvalListResponse
from .eval_update_params import EvalUpdateParams as EvalUpdateParams
2 changes: 1 addition & 1 deletion src/openai/types/audio/speech_create_params.py
@@ -17,7 +17,7 @@ class SpeechCreateParams(TypedDict, total=False):
model: Required[Union[str, SpeechModel]]
"""
One of the available [TTS models](https://platform.openai.com/docs/models#tts):
`tts-1`, `tts-1-hd` or `gpt-4o-mini-tts`.
`tts-1`, `tts-1-hd`, `gpt-4o-mini-tts`, or `gpt-4o-mini-tts-2025-12-15`.
"""

voice: Required[
2 changes: 1 addition & 1 deletion src/openai/types/audio/speech_model.py
@@ -4,4 +4,4 @@

__all__ = ["SpeechModel"]

SpeechModel: TypeAlias = Literal["tts-1", "tts-1-hd", "gpt-4o-mini-tts"]
SpeechModel: TypeAlias = Literal["tts-1", "tts-1-hd", "gpt-4o-mini-tts", "gpt-4o-mini-tts-2025-12-15"]
12 changes: 6 additions & 6 deletions src/openai/types/audio/transcription_create_params.py
@@ -29,9 +29,9 @@ class TranscriptionCreateParamsBase(TypedDict, total=False):
model: Required[Union[str, AudioModel]]
"""ID of the model to use.
The options are `gpt-4o-transcribe`, `gpt-4o-mini-transcribe`, `whisper-1`
(which is powered by our open source Whisper V2 model), and
`gpt-4o-transcribe-diarize`.
The options are `gpt-4o-transcribe`, `gpt-4o-mini-transcribe`,
`gpt-4o-mini-transcribe-2025-12-15`, `whisper-1` (which is powered by our open
source Whisper V2 model), and `gpt-4o-transcribe-diarize`.
"""

chunking_strategy: Optional[ChunkingStrategy]
@@ -49,9 +49,9 @@
Additional information to include in the transcription response. `logprobs` will
return the log probabilities of the tokens in the response to understand the
model's confidence in the transcription. `logprobs` only works with
response_format set to `json` and only with the models `gpt-4o-transcribe` and
`gpt-4o-mini-transcribe`. This field is not supported when using
`gpt-4o-transcribe-diarize`.
response_format set to `json` and only with the models `gpt-4o-transcribe`,
`gpt-4o-mini-transcribe`, and `gpt-4o-mini-transcribe-2025-12-15`. This field is
not supported when using `gpt-4o-transcribe-diarize`.
"""

known_speaker_names: SequenceNotStr[str]
8 changes: 7 additions & 1 deletion src/openai/types/audio_model.py
@@ -4,4 +4,10 @@

__all__ = ["AudioModel"]

AudioModel: TypeAlias = Literal["whisper-1", "gpt-4o-transcribe", "gpt-4o-mini-transcribe", "gpt-4o-transcribe-diarize"]
AudioModel: TypeAlias = Literal[
"whisper-1",
"gpt-4o-transcribe",
"gpt-4o-mini-transcribe",
"gpt-4o-mini-transcribe-2025-12-15",
"gpt-4o-transcribe-diarize",
]