M3: finalize gradio ui, traceability artifacts, and logger override support #7
Open
guru-code-expert wants to merge 46 commits into AgentOpt:main from
Conversation
Implements the M1 milestone for Trace-Bench:

CLI surface:
- `trace-bench list-tasks`, `list-trainers`, `validate --config --strict`, `run`, `ui`
- Strict validation: trainer kwarg checking, optimizer/guide/logger resolution, trainable parameter detection, matrix expansion with manifest output

Runner & training:
- `BenchRunner` with deterministic SHA256-based job IDs
- Algorithm-aware kwarg mapping (PrioritySearch vs GEPA-Base/UCB/Beam)
- `DummyLLM` stub mode for offline testing
- Training error capture in feedback field

Canonical artifact layout:
- `meta/config.snapshot.yaml`, `manifest.json`, `env.json` (redacted), `git.json`
- Per-job: `job_meta.json`, `results.json`, `events.jsonl`, `artifacts/`, `tb/`
- Run-level: `results.csv` (16 columns) + `summary.json`

Task coverage:
- 4 internal types (`code_param`, `numeric_param`, `multi_param`, `non_trainable`)
- `trace_examples:greeting_stub`
- `llm4ad:circle_packing` (bounded timeout)
- `veribench:smoke_placeholder` (`NotImplementedError` stub)

Trainer coverage:
- PrioritySearch + GEPA-Base exercised in real mode
- GEPA-UCB + GEPA-Beam configured (M4 scope)

Tests: 30 pass, 2 skipped (m0 smoke, m1 artifacts, matrix e2e, internal tasks, opentrace examples, trainer config, veribench CLI)

Notebook: `01_m1_minimal_api.ipynb` with Colab badge, auto-detected API key (real/stub mode), 2x2 matrix smoke (4/4 ok), executed outputs committed.
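The deterministic SHA256-based job IDs mentioned above could work roughly like this (a sketch under assumptions: `job_id_for`, its signature, and the hashed fields are illustrative, not the actual `BenchRunner` code):

```python
import hashlib
import json

def job_id_for(task: str, trainer: str, params: dict) -> str:
    """Derive a stable job ID from the fields that define a job.

    Serializing with sort_keys=True makes the hash independent of
    dict insertion order, so re-running the same config always maps
    to the same job ID (and hence the same artifact directory).
    """
    payload = json.dumps(
        {"task": task, "trainer": trainer, "params": params},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:12]
```

Truncating to 12 hex characters keeps directory names short while leaving collisions vanishingly unlikely at benchmark scale.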
This reverts commit 51622f2.
…nd crossbench smoke
# Conflicts:
#   LLM4AD/benchmark_tasks/science_discovery_bactgrow/__init__.py
#   LLM4AD/benchmark_tasks/science_discovery_ode_1d/__init__.py
#   LLM4AD/benchmark_tasks/science_discovery_oscillator1/__init__.py
#   LLM4AD/benchmark_tasks/science_discovery_oscillator2/__init__.py
#   LLM4AD/benchmark_tasks/science_discovery_stresstrain/__init__.py
#   configs/m2_coverage.yaml
#   notebooks/02_m2_coverage.ipynb
#   notebooks/04_m2_full_coverage.ipynb
#   trace_bench/runner.py
…coverage 56/58 OK
…fig runner passthrough
- Runner passthrough for `objective_config`
- 3 benchmark tasks: convex (SixHumpCamel), BBEH (boolean_expressions), GSM8K
- Each task supports weighted/pareto mode switching via `eval_kwargs`
- 3 notebooks comparing BasicSearch/Beamsearch/PrioritySearch x weighted/pareto
- BBEH + GSM8K notebooks compare two models with auto-detection (OpenRouter or direct API keys)
- `UsageTrackingLLM` with `__deepcopy__` support for `ContextVar`
- 23 pytest tests (15 unit, 4 feature, 2 non-regression)
- Config: `m3_multiobjective.yaml` (18-job matrix, `max_workers=6`)
- End-to-end validated: convex 6/6, BBEH 10/12, GSM8K 6/12
…to m3/deliverable
m3: notebook with output
Summary
M3 delivery: Gradio launch/monitor UI, traceability artifacts, and logger override compatibility.
What’s included
- Provider selection `custom | openai | openrouter` with base URL/key auto-fill behavior
- Config source modes (`picker | upload | editor`)
- Real `trace-bench run` execution against `runs_dir`
- `leaderboard.csv` (excluding failed jobs)
- `meta/files_index.json`
- `initial_state`, `best_state`, `final_state` (yaml + json)
- `state_history.jsonl`
- `token_scope=trace_optimization_only`
- `trace-bench run --logger ...` accepting `default | none | <logger>`

Compatibility
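The `state_history.jsonl` artifact above is an append-only line-delimited JSON log; a minimal sketch of writing one (the `append_state` helper and file location are assumptions for illustration, not the trace_bench API):

```python
import json
from pathlib import Path

def append_state(run_dir: Path, record: dict) -> None:
    """Append one state snapshot as a single JSON line.

    JSONL keeps the history streamable and crash-tolerant: every
    line is a complete JSON document, so a job that dies mid-run
    never corrupts the entries already written.
    """
    path = run_dir / "state_history.jsonl"
    with path.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record, sort_keys=True) + "\n")
```

A monitor UI can then tail the file and `json.loads` each new line without reparsing the whole history.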
Validation
- `PYTEST_DISABLE_PLUGIN_AUTOLOAD=1 pytest -q tests/m3` passing (42 passed, 1 skipped)
- `notebooks/03_ui_launch_monitor.ipynb`