
[AMD/ROCm] kimik2.5 int4 mi355x: upgrade to vllm-openai-rocm:v0.18.0 #950

Merged
functionstackx merged 2 commits into main from claude/kimik2.5-int4-mi355x-v0.18.0 on Mar 27, 2026

Conversation

@Klaud-Cold (Collaborator)

Summary

Ports PR #909 with the resolved upstream docker image (vllm/vllm-openai-rocm:v0.18.0).

Changes

Supersedes #909.

CC: @seungrokj @functionstackx

Port changes from PR #909 with the resolved upstream docker image:
- Upgrade image from v0.15.1 to v0.18.0
- Enable AITER MLA (VLLM_ROCM_USE_AITER=1)
- Add --max-num-seqs 256, remove --disable-log-requests
- Update perf-changelog
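The changes above can be sketched as a launch command. This is an illustration only: the image tag, the `VLLM_ROCM_USE_AITER=1` env var, and `--max-num-seqs 256` come from this PR, while the model name, device flags, and everything else are hypothetical placeholders, not the repo's actual benchmark script.

```shell
# Illustrative sketch, not the repo's benchmark script.
# From this PR: image tag v0.18.0, VLLM_ROCM_USE_AITER=1, --max-num-seqs 256.
# Hypothetical: model name and all other arguments.
export VLLM_ROCM_USE_AITER=1   # enable AITER kernels (incl. MLA) on ROCm

docker run --rm \
  --device=/dev/kfd --device=/dev/dri \
  -e VLLM_ROCM_USE_AITER=1 \
  vllm/vllm-openai-rocm:v0.18.0 \
  --model <kimi-k2.5-int4-checkpoint> \
  --max-num-seqs 256
```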

Co-authored-by: functionstackx <functionstackx@users.noreply.github.com>
Co-authored-by: seungrokj <seungrokj@users.noreply.github.com>
@github-actions (Contributor)

Thanks for the contribution! For vLLM & SGLang, please ensure that your recipes are similar to the official vLLM recipes and/or the SGLang cookbook.

If they are not, please create a PR to bring them in line first, before we merge your PR into the master branch. Let's ensure that the documentation is first-class so that the entire ML community can benefit from your hard work. Thank you!

2 similar comments


@claude bot left a comment


LGTM — straightforward image upgrade with correct config and script changes; the pr-link: .../pull/XXX nit is flagged inline.

Extended reasoning...

Overview

This PR touches three files: amd-master.yaml (one-line image bump from v0.15.1 to v0.18.0), benchmarks/single_node/kimik2.5_int4_mi355x.sh (adds VLLM_ROCM_USE_AITER=1, --max-num-seqs 256, removes --disable-log-requests), and perf-changelog.yaml (appends a new changelog entry). All changes are mechanical and self-consistent with the PR description.

Security risks

No security-sensitive code is touched. The image tag bump references a known upstream public image (vllm/vllm-openai-rocm:v0.18.0). No auth, crypto, or permission logic is involved.

Level of scrutiny

This is a low-risk configuration and benchmark-script update following an established pattern in the repository (dozens of similar image-bump entries exist in amd-master.yaml and perf-changelog.yaml). The env var VLLM_ROCM_USE_AITER=1 and --max-num-seqs 256 are standard vLLM/ROCm tuning knobs with no side-effects outside the benchmark runtime.

Other factors

The sole finding is a documentation nit (pr-link: .../pull/XXX placeholder) which has no runtime impact and is already captured as an inline comment. Several pre-existing entries in the file share the same unfilled placeholder, confirming this is a known hygiene issue rather than a blocking defect. No outstanding human reviewer comments exist.

New perf-changelog.yaml entry:
- "Enable AITER MLA, export VLLM_ROCM_USE_AITER=1, https://github.com/vllm-project/vllm/issues/35641"
- "Triton Fused Moe Tuning https://github.com/vllm-project/vllm/pull/35093"
- "Add --max-num-seqs 256, remove --disable-log-requests"
pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/950

🟡 The new changelog entry for kimik2.5-int4-mi355x-vllm in perf-changelog.yaml still has pr-link: .../pull/XXX — a placeholder that was never replaced with the actual PR number (#950). Please update it to .../pull/950 before merging.

Extended reasoning...

The diff adds a new entry to perf-changelog.yaml (lines 1085-1087) documenting the vLLM ROCm image upgrade for kimik2.5-int4-mi355x-vllm. The pr-link field in that entry reads https://github.com/SemiAnalysisAI/InferenceX/pull/XXX, which is a template placeholder that was never substituted with the real PR number.

The concrete code path is straightforward: the PR author likely copy-pasted a changelog template or prior entry and forgot to replace XXX with 950 before opening the PR. The PR itself is numbered #950 (visible in the PR metadata), so the correct value is unambiguous.

A refutation argued that this is an "established pattern" because five other pre-existing entries in the file also use XXX (the entries for dsr1-fp8-h200-sglang, minimaxm2.5-fp8-h200-vllm, glm5-fp8-mi355x-sglang, qwen3.5-bf16-mi325x-sglang, and qwen3.5-fp8-mi325x-sglang). This is factually correct — those entries exist and also contain unfilled placeholders. However, the existence of prior unfixed instances does not make the new occurrence correct. Each merged PR that still contains XXX in its pr-link is individually a broken changelog reference; this PR would add another one. The pattern being recurrent makes it a systemic documentation hygiene issue, not a justification to perpetuate it.

The impact is limited to documentation and traceability: anyone trying to navigate from the changelog entry to the source PR would land on a 404. It does not affect runtime behavior, benchmarks, or CI in any way.

Fix: change pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/XXX to pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/950 in the newly added entry. The five pre-existing stale entries are outside the scope of this PR but could be cleaned up in a follow-up.
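Since the review notes this placeholder recurs across several merged entries, a mechanical check could catch it before merge. A hypothetical sketch, not part of the repo's actual CI; the function name is invented for illustration:

```shell
# Hypothetical pre-merge hygiene check: fail when a changelog file still
# contains the pull/XXX template placeholder instead of a real PR number.
check_changelog() {
  if grep -q 'pull/XXX' "$1"; then
    echo "unfilled pr-link placeholder in $1" >&2
    return 1
  fi
}

# Demo against a throwaway file reproducing the flagged line.
tmp=$(mktemp)
printf 'pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/XXX\n' > "$tmp"
check_changelog "$tmp" 2>/dev/null || echo "placeholder detected"
rm -f "$tmp"
```

Running this against the entry quoted above would print "placeholder detected"; a clean file passes silently.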

@functionstackx (Contributor)

[image attachment]

@functionstackx functionstackx merged commit da23bb4 into main Mar 27, 2026
42 checks passed
@functionstackx functionstackx deleted the claude/kimik2.5-int4-mi355x-v0.18.0 branch March 27, 2026 01:25
