2 changes: 1 addition & 1 deletion .github/configs/amd-master.yaml

@@ -294,7 +294,7 @@ glm5-fp8-mi355x-sglang:
     - { tp: 8, conc-start: 4, conc-end: 64 }

 kimik2.5-int4-mi355x-vllm:
-  image: vllm/vllm-openai-rocm:v0.15.1
+  image: vllm/vllm-openai-rocm:v0.18.0
   model: moonshotai/Kimi-K2.5
   model-prefix: kimik2.5
   runner: mi355x
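The only change in this file is the image tag bump. As a hedged pre-flight check, not part of the PR and assuming the MI355x runners pull this tag from the same registry the config refers to, one could confirm the upgraded image resolves before the benchmark job picks it up:

    # Illustrative sanity check: verify the upgraded image tag is pullable.
    # The image name comes from the diff above; whether this exact tag is
    # published in the runners' registry is an assumption.
    docker pull vllm/vllm-openai-rocm:v0.18.0
    docker image inspect vllm/vllm-openai-rocm:v0.18.0 --format '{{.Id}}'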
3 changes: 2 additions & 1 deletion benchmarks/single_node/kimik2.5_int4_mi355x.sh

@@ -30,13 +30,14 @@ PORT=${PORT:-8888}
 start_gpu_monitor

 set -x
+export VLLM_ROCM_USE_AITER=1
 vllm serve $MODEL --port $PORT \
     --tensor-parallel-size=$TP \
     --gpu-memory-utilization 0.95 \
     --max-model-len $MAX_MODEL_LEN \
     --block-size=64 \
-    --disable-log-requests \
     --trust-remote-code \
+    --max-num-seqs 256 \
     --mm-encoder-tp-mode data > $SERVER_LOG 2>&1 &

 SERVER_PID=$!
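Since the uplift this PR targets depends on VLLM_ROCM_USE_AITER=1 actually reaching the server process, a small post-launch assertion can catch a lost export. A minimal sketch, assuming a Linux runner with /proc available and reusing the SERVER_PID the script already captures; it is illustrative, not part of the benchmark script:

    # Confirm the backgrounded vllm serve process inherited the AITER toggle.
    # /proc/<pid>/environ holds NUL-separated KEY=VALUE pairs for the process.
    if tr '\0' '\n' < "/proc/$SERVER_PID/environ" | grep -qx 'VLLM_ROCM_USE_AITER=1'; then
        echo "VLLM_ROCM_USE_AITER=1 is set for the server (PID $SERVER_PID)"
    else
        echo "WARNING: AITER env var missing for PID $SERVER_PID" >&2
    fi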
10 changes: 9 additions & 1 deletion perf-changelog.yaml

@@ -1076,4 +1076,12 @@
     - "Add expert parallel, TP4, and TP4/EP4 search spaces"
     - "Switch block-size 64 to 1 gpu-memory-utilization 0.95 to 0.90"
   pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/936

+- config-keys:
+    - kimik2.5-int4-mi355x-vllm
+  description:
+    - "Upgrade vLLM ROCm image from v0.15.1 to v0.18.0"
+    - "Enable AITER MLA, export VLLM_ROCM_USE_AITER=1, https://github.com/vllm-project/vllm/issues/35641"
+    - "Triton Fused Moe Tuning https://github.com/vllm-project/vllm/pull/35093"
+    - "Add --max-num-seqs 256, remove --disable-log-requests"
+  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/950
🟡 The new changelog entry for kimik2.5-int4-mi355x-vllm in perf-changelog.yaml still has pr-link: .../pull/XXX — a placeholder that was never replaced with the actual PR number (#950). Please update it to .../pull/950 before merging.

Extended reasoning...

The diff adds a new entry to perf-changelog.yaml (lines 1085-1087) documenting the vLLM ROCm image upgrade for kimik2.5-int4-mi355x-vllm. The pr-link field in that entry reads https://github.com/SemiAnalysisAI/InferenceX/pull/XXX, which is a template placeholder that was never substituted with the real PR number.

The concrete code path is straightforward: the PR author likely copy-pasted a changelog template or prior entry and forgot to replace XXX with 950 before opening the PR. The PR itself is numbered #950 (visible in the PR metadata), so the correct value is unambiguous.

A refutation argued that this is an "established pattern" because five other pre-existing entries in the file also use XXX (the entries for dsr1-fp8-h200-sglang, minimaxm2.5-fp8-h200-vllm, glm5-fp8-mi355x-sglang, qwen3.5-bf16-mi325x-sglang, and qwen3.5-fp8-mi325x-sglang). This is factually correct — those entries exist and also contain unfilled placeholders. However, the existence of prior unfixed instances does not make the new occurrence correct. Each merged PR that still contains XXX in its pr-link is individually a broken changelog reference; this PR would add another one. The pattern being recurrent makes it a systemic documentation hygiene issue, not a justification to perpetuate it.

The impact is limited to documentation and traceability: anyone trying to navigate from the changelog entry to the source PR would land on a 404. It does not affect runtime behavior, benchmarks, or CI in any way.

Fix: change pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/XXX to pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/950 in the newly added entry. The five pre-existing stale entries are outside the scope of this PR but could be cleaned up in a follow-up.
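
The reasoning above frames the unfilled XXX placeholders as a recurring hygiene issue rather than a one-off slip. A minimal guard sketch, assuming the repository wanted to enforce this in a pre-merge lint step; the grep pattern and exit behavior are illustrative, not an existing check in InferenceX:

    # Fail the job if any changelog entry still carries the pull/XXX placeholder.
    # perf-changelog.yaml is the file touched in this PR; the rest is an
    # assumed convention for a CI lint step.
    if grep -nE 'pr-link:.*pull/XXX' perf-changelog.yaml; then
        echo "ERROR: unfilled pr-link placeholder(s) found above" >&2
        exit 1
    fi

Note that, per the comment above, the five pre-existing XXX entries would also trip such a check, so the follow-up cleanup would have to land before or alongside it.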