Add non-record 16MB submission: MOEA outer-loop proxy F2 on 1xRTX 3090 #823
ai-wes wants to merge 1 commit into openai:main
**Community Review** on "Add non-record 16MB submission: MOEA outer-loop proxy F2 on 1xRTX 3090"

**Compliance: LOOKS CLEAN.** Pure-neural submission; no TTT, SLOT, or n-gram-cache constructs.

---

## Analysis

### N-gram / BigramHash family bug check

No n-gram, bigram, hash, or XOR constructs anywhere in `train_gpt.py`. Verdict: LOOKS CLEAN.

### Recommendation

To @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE, pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the audit; this looks like a clean pure-neural submission.

Reviewed by @MatoTeziTanka (The Agora). Compliance audit performed by an LLM agent (Sonnet) reviewing the full `train_gpt.py` source, cross-checked against a deterministic AST classifier. If this review misread your code, please call it out so I can re-audit manually.
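For context, a deterministic AST pass of the kind this review references might look roughly like the sketch below. The flagged-name list and the scan scope are assumptions for illustration; this is not the actual classifier used in the audit.

```python
import ast
import sys

# Hypothetical name fragments the scan flags; the real classifier's rules are not shown here.
FLAG_NAMES = ("ngram", "n_gram", "bigram", "hash")

class ComplianceScan(ast.NodeVisitor):
    """Deterministic scan for n-gram / bigram / hash / XOR constructs."""

    def __init__(self):
        self.flags = []

    def visit_BinOp(self, node):
        # Flag any `a ^ b` expression (bitwise XOR).
        if isinstance(node.op, ast.BitXor):
            self.flags.append(f"XOR at line {node.lineno}")
        self.generic_visit(node)

    def visit_Name(self, node):
        # Flag identifiers whose names suggest n-gram / hash machinery.
        if any(frag in node.id.lower() for frag in FLAG_NAMES):
            self.flags.append(f"name {node.id!r} at line {node.lineno}")
        self.generic_visit(node)

source = open(sys.argv[1]).read()  # e.g. `python scan.py train_gpt.py`
scanner = ComplianceScan()
scanner.visit(ast.parse(source))
print("\n".join(scanner.flags) or "LOOKS CLEAN")
```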
## Summary
This PR adds a non-record 16MB submission documenting the first artifact handoff from an offline MOEA outer-loop search workflow into a runnable `parameter-golf` submission folder. This is not a leaderboard attempt: it is a single-GPU proxy run intended to validate the search-to-artifact workflow and to provide a concrete non-record reference point for future 8xH100 campaigns.
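As an illustration of the workflow being validated, here is a minimal sketch of a MOEA-style outer loop that scores candidate configs on a proxy objective, keeps the Pareto front, and hands the best eligible candidate off to a submission folder. The config fields, the `evaluate` stub, and the size formula are hypothetical placeholders, not the actual search code behind this PR.

```python
import json
import random
from pathlib import Path

SIZE_CAP = 16 * 2**20  # 16MB artifact cap from the record-track rules

def evaluate(config):
    """Hypothetical single-GPU proxy evaluation: returns (val_loss, param_bytes).
    In the real workflow this would launch a short train_gpt.py proxy run."""
    random.seed(hash(frozenset(config.items())))
    val_loss = 3.0 + random.random()                             # placeholder proxy metric
    param_bytes = config["d_model"] * config["n_layer"] * 4096   # rough size proxy
    return val_loss, param_bytes

def pareto_front(scored):
    """Keep candidates not dominated on (val_loss, param_bytes), both minimized."""
    front = []
    for cfg, (loss, size) in scored:
        dominated = any(
            l2 <= loss and s2 <= size and (l2, s2) != (loss, size)
            for _, (l2, s2) in scored
        )
        if not dominated:
            front.append((cfg, (loss, size)))
    return front

population = [{"d_model": d, "n_layer": n}
              for d in (256, 384, 512) for n in (4, 8, 12)]
scored = [(cfg, evaluate(cfg)) for cfg in population]
front = pareto_front(scored)

# Hand off: pick the lowest-loss front member under the size cap and
# write its config into a runnable submission folder.
eligible = [(cfg, m) for cfg, m in front if m[1] <= SIZE_CAP]
best_cfg, (best_loss, best_size) = min(eligible, key=lambda x: x[1][0])
out = Path("submissions/moea_proxy_f2")  # hypothetical folder name
out.mkdir(parents=True, exist_ok=True)
(out / "submission.json").write_text(json.dumps(
    {"config": best_cfg, "proxy_val_loss": best_loss, "param_bytes": best_size},
    indent=2))
```

The two-objective Pareto selection is what makes the 16MB cap natural to enforce at handoff time: the front always contains the smallest candidate, so an eligible artifact exists even when the lowest-loss config is over the cap.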
## What is included
- `README.md`
- `submission.json`
- `train.log`
- `train_gpt.py`

## Notes
The run uses a reduced validation token budget (`VAL_TOKEN_LIMIT=1048576`) and `ENABLE_COMPILE=0`, so the reported metric is explicitly a proxy-tier non-record result.
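For readers unfamiliar with these gates, the sketch below shows one common way such proxy-tier switches are wired into a training script via environment variables. The function names and the loss interface are assumptions for illustration; the actual `train_gpt.py` in this PR may differ.

```python
import os
import torch

# Proxy-tier gates read from the environment.
VAL_TOKEN_LIMIT = int(os.environ.get("VAL_TOKEN_LIMIT", "1048576"))  # cap on validation tokens
ENABLE_COMPILE = os.environ.get("ENABLE_COMPILE", "1") == "1"        # torch.compile on/off

def maybe_compile(model: torch.nn.Module) -> torch.nn.Module:
    """Skip torch.compile on proxy runs (e.g. a single RTX 3090) for faster startup."""
    return torch.compile(model) if ENABLE_COMPILE else model

@torch.no_grad()
def evaluate(model, val_loader):
    """Average loss over at most VAL_TOKEN_LIMIT tokens, then stop early."""
    seen, total = 0, 0.0
    for inputs, targets in val_loader:
        loss = model(inputs, targets)          # assumed to return the mean token loss
        total += loss.item() * inputs.numel()
        seen += inputs.numel()
        if seen >= VAL_TOKEN_LIMIT:
            break
    return total / max(seen, 1)
```

Truncating validation this way trades metric fidelity for wall-clock time, which is why a number produced under these settings is reported as proxy-tier rather than record-eligible.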