Description
I want to use the ulysses sequence-parallel attention backend to run inference in parallel across multiple GPUs.
Steps to Reproduce
I am using the following config and launch script:
```json
{
    "infer_steps": 15,
    "target_video_length": 69,
    "target_height": 720,
    "target_width": 1280,
    "self_attn_1_type": "flash_attn3",
    "cross_attn_1_type": "flash_attn3",
    "cross_attn_2_type": "flash_attn3",
    "sample_guide_scale": 5,
    "sample_shift": 16,
    "enable_cfg": true,
    "cpu_offload": false,
    "parallel": {
        "seq_p_size": 4,
        "seq_p_attn_type": "ulysses"
    }
}
```
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 \
PYTHONPATH="${workspaceFolder}:/mnt/evad_fs/worldmodel/xiongkaixin/local_project/distill/LightX2V-main" \
torchrun --nproc_per_node=4 -m lightx2v.infer_caption \
    --model_cls wan2.1_vace \
    --task vace \
    --model_path "/mnt/evad_fs/worldmodel/xiongkaixin/local_project/distill/Self-Forcing-Plus/wan_models/Wan2.1-VACE-14B-Genesis-X" \
    --config_json "/mnt/evad_fs/worldmodel/xiongkaixin/local_project/distill/LightX2V-main/configs/wan/wan_vace_mi_1X_ulysses.json" \
    --prompt_path "/mnt/evad_fs/worldmodel/sang/data/union_go_dataset_5star_1124_v2/adr::dataclip:prod.lm5-pp.0104::1752214918000000000:1752214953000000000/caption/adr::dataclip:prod.lm5-pp.0104::1752214918000000000:1752214953000000000_0.txt" \
    --negative_prompt "Vivid tone, overexposed, blurry details, stylized, overall gray, worst quality, low quality, JPEG artifacts" \
    --src_video "/mnt/evad_fs/worldmodel/xiongkaixin/local_project/genesis_go/20260106/genesis-go/outputs/test_vid/GenesisX_VACE_step_500_sunny_69/output_adr::dataclip:prod.lm5-pp.0104::1752214918000000000:1752214953000000000/control_video_adr::dataclip:prod.lm5-pp.0104::1752214918000000000:1752214953000000000.mp4" \
    --src_ref_images "/mnt/evad_fs/worldmodel/sang/data/union_go_dataset_5star_1124_v2/adr::dataclip:prod.lm5-pp.0104::1752214918000000000:1752214953000000000/image_1280/imgs/mid_center_top_tele/000001.png" \
    --save_result_path "/mnt/evad_fs/worldmodel/xiongkaixin/local_project/distill/LightX2V-main/save_results/output_lightx2v_wan_vace.mp4"
```
Expected Result
Inference initializes normally and runs with 4-way ulysses sequence parallelism, producing the output video.
Actual Result
The run crashes while constructing the transformer weights with `KeyError: 'ulysses'` (full traceback in Log Information below).
Environment Information
- Operating System: [e.g., Ubuntu 22.04]
- Commit ID: [Version of the project]
Log Information
```
[rank0]: Traceback (most recent call last):
[rank0]:   File "<frozen runpy>", line 198, in _run_module_as_main
[rank0]:   File "<frozen runpy>", line 88, in _run_code
[rank0]:   File "/mnt/evad_fs/worldmodel/xiongkaixin/local_project/distill/LightX2V-main/lightx2v/infer_caption.py", line 172, in <module>
[rank0]:     main()
[rank0]:   File "/mnt/evad_fs/worldmodel/xiongkaixin/local_project/distill/LightX2V-main/lightx2v/infer_caption.py", line 161, in main
[rank0]:     runner = init_runner(config)
[rank0]:              ^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/mnt/evad_fs/worldmodel/xiongkaixin/local_project/distill/LightX2V-main/lightx2v/infer_caption.py", line 32, in init_runner
[rank0]:     runner.init_modules()
[rank0]:   File "/mnt/evad_fs/worldmodel/xiongkaixin/local_project/distill/LightX2V-main/lightx2v/models/runners/default_runner.py", line 69, in init_modules
[rank0]:     self.load_model()
[rank0]:   File "/mnt/evad_fs/worldmodel/xiongkaixin/local_project/distill/LightX2V-main/lightx2v/models/runners/default_runner.py", line 117, in load_model
[rank0]:     self.model = self.load_transformer()
[rank0]:                  ^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/mnt/evad_fs/worldmodel/xiongkaixin/local_project/distill/LightX2V-main/lightx2v/models/runners/wan/wan_vace_runner.py", line 34, in load_transformer
[rank0]:     model = WanVaceModel(
[rank0]:             ^^^^^^^^^^^^^
[rank0]:   File "/mnt/evad_fs/worldmodel/xiongkaixin/local_project/distill/LightX2V-main/lightx2v/models/networks/wan/vace_model.py", line 20, in __init__
[rank0]:     super().__init__(model_path, config, device)
[rank0]:   File "/mnt/evad_fs/worldmodel/xiongkaixin/local_project/distill/LightX2V-main/lightx2v/models/networks/wan/model.py", line 97, in __init__
[rank0]:     self._init_weights()
[rank0]:   File "/mnt/evad_fs/worldmodel/xiongkaixin/local_project/distill/LightX2V-main/lightx2v/models/networks/wan/model.py", line 311, in _init_weights
[rank0]:     self.transformer_weights = self.transformer_weight_class(self.config)
[rank0]:                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/mnt/evad_fs/worldmodel/xiongkaixin/local_project/distill/LightX2V-main/lightx2v/models/networks/wan/weights/vace/transformer_weights.py", line 14, in __init__
[rank0]:     super().__init__(config)
[rank0]:   File "/mnt/evad_fs/worldmodel/xiongkaixin/local_project/distill/LightX2V-main/lightx2v/models/networks/wan/weights/transformer_weights.py", line 25, in __init__
[rank0]:     [
[rank0]:   File "/mnt/evad_fs/worldmodel/xiongkaixin/local_project/distill/LightX2V-main/lightx2v/models/networks/wan/weights/transformer_weights.py", line 26, in <listcomp>
[rank0]:     WanTransformerAttentionBlock(
[rank0]:   File "/mnt/evad_fs/worldmodel/xiongkaixin/local_project/distill/LightX2V-main/lightx2v/models/networks/wan/weights/transformer_weights.py", line 167, in __init__
[rank0]:     WanSelfAttention(
[rank0]:   File "/mnt/evad_fs/worldmodel/xiongkaixin/local_project/distill/LightX2V-main/lightx2v/models/networks/wan/weights/transformer_weights.py", line 349, in __init__
[rank0]:     ATTN_WEIGHT_REGISTER[self.config["parallel"].get("seq_p_attn_type", "ulysses")](),
[rank0]:     ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/mnt/evad_fs/worldmodel/xiongkaixin/local_project/distill/LightX2V-main/lightx2v/utils/registry_factory.py", line 32, in __getitem__
[rank0]:     return self._dict[key]
[rank0]:            ~~~~~~~~~~^^^^^
[rank0]: KeyError: 'ulysses'
```
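The failing call reduces to a plain dictionary lookup: `ATTN_WEIGHT_REGISTER["ulysses"]` raises because no class was ever registered under that key. Below is a minimal sketch of the pattern; the `Register` class and the `FlashAttn3Weight` stand-in are simplified assumptions for illustration, not LightX2V's actual implementation:

```python
# Hypothetical, simplified version of the registry pattern in the traceback.
# The real mapping is ATTN_WEIGHT_REGISTER in lightx2v/utils/registry_factory.py.

class Register:
    def __init__(self):
        self._dict = {}

    def register(self, key):
        """Decorator that stores a class under the given key."""
        def wrap(cls):
            self._dict[key] = cls
            return cls
        return wrap

    def __getitem__(self, key):
        # Plain dict lookup with no fallback: this is what raises in the
        # log when the requested key was never registered.
        return self._dict[key]


ATTN_WEIGHT_REGISTER = Register()


@ATTN_WEIGHT_REGISTER.register("flash_attn3")
class FlashAttn3Weight:  # stand-in for a real attention weight class
    pass


# If the "ulysses" backend was never added to the registry (for example,
# because the module that registers it failed to import), the lookup
# fails exactly as in the log:
try:
    ATTN_WEIGHT_REGISTER["ulysses"]
except KeyError as exc:
    print("KeyError:", exc)

# Listing the registered keys shows which seq_p_attn_type values are
# actually usable in the current environment:
print("registered:", sorted(ATTN_WEIGHT_REGISTER._dict))
```

Printing the registered keys at startup (or in a debugger, right before the failing line) would confirm whether the ulysses backend registration was skipped in my environment.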