Update dependency diffusers to v0.38.0#89
renovate[bot] wants to merge 1 commit into
This PR contains the following updates:
diffusers: `==0.35.2` → `==0.38.0`

Release Notes
huggingface/diffusers (diffusers)
v0.38.0: Diffusers 0.38.0: New image and audio pipelines, core library improvements, and more
New Pipelines
LLaDA2
LLaDA2 is a family of discrete diffusion language models that generate text through block-wise iterative refinement. Instead of autoregressive token-by-token generation, LLaDA2 starts with a fully masked sequence and progressively unmasks tokens by confidence over multiple refinement steps.
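The confidence-based unmasking schedule can be illustrated with a toy loop. Everything below is a hypothetical sketch: `mock_predictor` stands in for the actual LLaDA2 forward pass, which would score every masked position from full sequence context.

```python
import random

MASK = "<mask>"

def mock_predictor(tokens, rng):
    # Hypothetical stand-in for the model: propose a (token, confidence)
    # pair for every masked position. A real forward pass would produce
    # these scores from the whole partially-unmasked sequence.
    return {i: (f"tok{i}", rng.random())
            for i, t in enumerate(tokens) if t == MASK}

def iterative_unmask(length, steps, seed=0):
    """Start fully masked; at each refinement step, commit only the
    most confident predictions until no masks remain."""
    rng = random.Random(seed)
    tokens = [MASK] * length
    per_step = max(1, length // steps)
    while MASK in tokens:
        preds = mock_predictor(tokens, rng)
        # highest-confidence positions are unmasked first
        ranked = sorted(preds.items(), key=lambda kv: kv[1][1], reverse=True)
        for i, (tok, _conf) in ranked[:per_step]:
            tokens[i] = tok
    return tokens

result = iterative_unmask(length=8, steps=4)
```

Low-confidence positions stay masked and get re-predicted in later steps with more unmasked context available, which is what distinguishes this schedule from one-shot infilling.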
Nucleus-MoE
NucleusMoE-Image is a 17B-parameter sparse Mixture-of-Experts (MoE) model with 2B active parameters, trained with efficiency at its core. Its novel architecture highlights the scalability of sparse MoE architectures for image generation.
Thanks to @sippycoder for the contribution.
Ernie-Image
ERNIE-Image is a powerful and highly efficient image generation model with 8B parameters.
Thanks to @HsiaWinter for the contribution.
LongCat-AudioDiT
LongCat-AudioDiT is a text-to-audio diffusion model from Meituan LongCat.
Thanks to @RuixiangMa for the contribution.
Ace-Step 1.5
ACE-Step 1.5 generates variable-length stereo audio at 48 kHz (10 seconds to 10 minutes) from text prompts and optional lyrics. The full system pairs a Language Model planner with a Diffusion Transformer (DiT) synthesizer; this pipeline wraps the DiT half of that stack. It consists of three components: an AutoencoderOobleck VAE that compresses waveforms into 25 Hz stereo latents, a Qwen3-based text encoder for prompt and lyric conditioning, and an AceStepTransformer1DModel DiT that operates in the VAE latent space using flow matching.
Thanks to @ChuxiJ for the contribution.
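The sampling side of a flow-matching model like this can be sketched as a simple Euler integration in latent space. This is a toy illustration, not the real pipeline: `mock_velocity` is a hypothetical stand-in for the conditioned DiT, and plain Python lists stand in for tensor latents.

```python
import random

def mock_velocity(x, t):
    # Hypothetical stand-in for the DiT: a real model would predict the
    # flow-matching velocity from text/lyric conditioning. Here the field
    # simply points from the current latent toward an all-ones "data" latent.
    return [1.0 - xi for xi in x]

def flow_matching_sample(dim, steps, seed=0):
    """Euler-integrate dx/dt = v(x, t) from Gaussian noise at t=0
    toward the data distribution at t=1, entirely in latent space."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(dim)]  # noisy start latents
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        v = mock_velocity(x, t)
        x = [xi + dt * vi for xi, vi in zip(x, v)]
    return x

latents = flow_matching_sample(dim=16, steps=50)
```

In the actual pipeline, the resulting latents would then be decoded back to a 48 kHz stereo waveform by the AutoencoderOobleck VAE.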
Flux.2 Small Decoder
Make your Flux.2 decoding faster with this new small decoder model from Black Forest Labs. You can check it out here. It was contributed by @huemin-art in this PR.
Modular Pipeline Support
We added modular support for LTX-2 and Hunyuan 1.5.
Core Library
- `ring_anything` as a new CP backend

All commits
- `lru_cache` warnings during `torch.compile` by @jiqing-feng in #13384
- `--with_prior_preservation` by @chenyangzhu1 in #13396
- `0.8.0-rc.0` by @McPatate in #13470
- `trust_remote_code` by @hlky in #13448

Significant community contributions
The following contributors have made significant changes to the library over the last release:
- `trust_remote_code` (#13448)

v0.37.1: Fixes for AutoModel type hints in Modular Pipelines and Flux Klein LoRA loading
- `ModularPipelines` with `AutoModel` type hints in their `modular_model_index.json` #13271
- `torchvision` import in Cosmos Predict 2.5 #13321

v0.37.0: Diffusers 0.37.0: Modular Diffusers, New image and video pipelines, multiple core library improvements, and more 🔥
Modular Diffusers
Modular Diffusers introduces a new way to build diffusion pipelines by composing reusable blocks. Instead of writing entire pipelines from scratch, you can now mix and match building blocks to create custom workflows tailored to your specific needs! This complements the existing `DiffusionPipeline` class, providing a more flexible way to create custom diffusion pipelines. Find more details on how to get started with Modular Diffusers here, and also check out the announcement post.
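The idea of composing reusable blocks can be sketched in plain Python. This is an illustrative toy, not the actual Modular Diffusers API: the `Block` class, `compose` helper, and stage names are all hypothetical.

```python
class Block:
    """A toy pipeline block: a named function from state dict to state dict.
    (Illustrative only -- not the real diffusers modular API.)"""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def __call__(self, state):
        return self.fn(dict(state))

def compose(*blocks):
    """Chain blocks into a pipeline that threads state through each one."""
    def pipeline(state):
        for block in blocks:
            state = block(state)
        return state
    return pipeline

# Hypothetical blocks standing in for text encoding, denoising, and decoding.
encode  = Block("encode",  lambda s: {**s, "embeds": f"emb({s['prompt']})"})
denoise = Block("denoise", lambda s: {**s, "latents": f"denoised({s['embeds']})"})
decode  = Block("decode",  lambda s: {**s, "image": f"img({s['latents']})"})

text2img = compose(encode, denoise, decode)
out = text2img({"prompt": "a cat"})
```

Because each stage is a standalone block, a different workflow (say, image-to-image) could reuse the denoise and decode stages while swapping only the front of the chain.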
New Pipelines and Models
Image 🌆
Video + audio 🎥 🎼
Improvements to Core Library
New caching methods
New context-parallelism (CP) backends
Misc
- `@apply_lora_scale` decorator for simplifying model definitions (#12994)
- `device_map` (#12811)

A lot of the above features/improvements came as part of the MVP program we have been running. Immense thanks to the contributors!
Bug Fixes
- `T5Tokenizer` for Transformers v5.0+ compatibility (#12877)
- `num_videos_per_prompt > 1` and CFG (#13121)
- `txt_seq_lens` handling (#12702)
- `prefix_token_len` bug (#12845)
- `is_fsdp` determination (#12960)
- `get_image_features` API (#13052)
- `aiter` availability check (#13059)
- `prompt` and `prior_token_ids` simultaneously in `GlmImagePipeline` (#13092)

All commits
- `OvisImagePipeline` in `AUTO_TEXT2IMAGE_PIPELINES_MAPPING` by @alvarobartt in #12876
- `T5Tokenizer` instead of `MT5Tokenizer` (removed in Transformers v5.0+) by @alvarobartt in #12877
- `AutoencoderMixin` by @sayakpaul in #12873
- `enable_auto_cpu_offload` by @sayakpaul in #12578
- how `is_fsdp` is determined by @sayakpaul in #12960

Configuration
📅 Schedule: (UTC)
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.