Merged
2 changes: 1 addition & 1 deletion .claude/CLAUDE.md
@@ -7,7 +7,7 @@ Rust library for NP-hard problem reductions. Implements computational problems w
These repo-local skills live under `.claude/skills/*/SKILL.md`.

- [run-pipeline](skills/run-pipeline/SKILL.md) -- Pick a Ready issue from the GitHub Project board, move it through In Progress -> issue-to-pr -> Review pool. One issue at a time; forever-loop handles iteration.
- [issue-to-pr](skills/issue-to-pr/SKILL.md) -- Convert a GitHub issue into a PR with an implementation plan. One item per PR: `[Rule]` issues require both models to exist on `main`; never bundle model + rule in the same PR.
- [issue-to-pr](skills/issue-to-pr/SKILL.md) -- Convert a GitHub issue into a PR with an implementation plan. Default rule: one item per PR. Exception: a `[Model]` issue that explicitly claims direct ILP solvability should implement the model and its direct `<Model> -> ILP` rule together; `[Rule]` issues still require both models to exist on `main`.
- [add-model](skills/add-model/SKILL.md) -- Add a new problem model. Can be used standalone (brainstorms with user) or called from `issue-to-pr`.
- [add-rule](skills/add-rule/SKILL.md) -- Add a new reduction rule. Can be used standalone (brainstorms with user) or called from `issue-to-pr`.
- [review-structural](skills/review-structural/SKILL.md) -- Project-specific structural completeness check: model/rule checklists, build, semantic correctness, issue compliance. Read-only, no code changes. Called by `review-pipeline`.
29 changes: 29 additions & 0 deletions .claude/skills/add-model/SKILL.md
@@ -55,6 +55,11 @@ Before implementation, verify that at least one reduction rule exists or is plan

**If associated rules are found:** List them and continue.

**If the issue explicitly claims ILP solvability in "How to solve":**
- One associated rule MUST be a direct `[Rule] <ProblemName> to ILP`
- Treat that direct ILP rule as part of the same implementation scope
- Do NOT split the model and its direct ILP rule into separate PRs

## Reference Implementations

Read these first to understand the patterns:
@@ -75,6 +80,7 @@ Before implementing, make sure the plan explicitly covers these items that struc
- `declare_variants!` is present with exactly one `default` variant when multiple concrete variants exist
- CLI discovery and `pred create <ProblemName>` support are included where applicable
- A canonical model example is registered for example-db / `pred create --example`
- If the issue explicitly claims direct ILP solving, the plan also includes the direct `<Problem> -> ILP` rule with exact overhead metadata, feature-gated registration, strong regression tests, and ILP-enabled verification
- `docs/paper/reductions.typ` adds both the display-name dictionary entry and the `problem-def(...)`

## Step 1: Determine the category
@@ -193,6 +199,19 @@ This example is now the canonical source for:
- paper/example exports via `load-model-example()` in `reductions.typ`
- example-db invariants tested in `src/unit_tests/example_db.rs`

## Step 4.7: Implement Direct ILP Rule When Claimed

If the issue explicitly says the model is solvable by reducing **directly** to ILP, implement `src/rules/<problem>_ilp.rs` in the **same PR** as the model. This is the one exception to the normal "one item per PR" policy: the direct `<Problem> -> ILP` rule is part of the model feature, not optional follow-up work.

Completeness bar:
- Feature-gate the rule under `ilp-solver` and register it normally
- Add exact overhead expressions and any required size-field getters; metadata must match the constructed ILP exactly
- Add strong tests in `src/unit_tests/rules/<problem>_ilp.rs`: structure/metadata, closed-loop semantics vs the source problem or brute force, extraction, `solve_reduced()` or ILP path coverage when appropriate, and weighted/infeasible/pathological regressions whenever the model semantics admit them
- Update CLI/example-db/paper paths so the claimed ILP solver route is actually usable and documented
- Verify with ILP-enabled workspace commands, not just non-ILP unit tests

A direct ILP rule shipped with a model issue must match the completeness bar of a standalone production ILP reduction. Do not add a stub just to satisfy the issue text.
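The closed-loop part of the bar can be sketched as follows. This is a standalone toy, not repo code: the problem (a tiny 0/1 knapsack), `brute_force_best`, and `solve_via_ilp_rule` are all hypothetical names, and the stand-in solve path just reuses brute force so the sketch runs on its own; a real test would route `solve_via_ilp_rule` through the direct `<Problem> -> ILP` rule under the `ilp-solver` feature.

```rust
// Brute-force optimum of a toy 0/1 knapsack: the ground truth the
// reduced solve path must reproduce.
fn brute_force_best(values: &[u32], weights: &[u32], cap: u32) -> u32 {
    let n = values.len();
    (0..1u32 << n)
        .filter(|mask| {
            (0..n)
                .filter(|i| mask >> i & 1 == 1)
                .map(|i| weights[i])
                .sum::<u32>()
                <= cap
        })
        .map(|mask| {
            (0..n)
                .filter(|i| mask >> i & 1 == 1)
                .map(|i| values[i])
                .sum::<u32>()
        })
        .max()
        .unwrap()
}

// Hypothetical reduced solve path. In the real test this would build the
// ILP via the direct rule, solve it, and extract a source solution; here
// it reuses brute force so the sketch is self-contained.
fn solve_via_ilp_rule(values: &[u32], weights: &[u32], cap: u32) -> u32 {
    brute_force_best(values, weights, cap)
}

fn main() {
    let (values, weights, cap) = ([4u32, 2, 10, 1], [12u32, 1, 4, 1], 15);
    // Closed loop: the objective reached through the reduction must equal
    // the source problem's brute-force optimum.
    let via_rule = solve_via_ilp_rule(&values, &weights, cap);
    assert_eq!(via_rule, brute_force_best(&values, &weights, cap));
    println!("closed-loop objective: {via_rule}");
}
```

The same shape covers the weighted/infeasible regressions: vary the instance, keep the equality assertion.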

## Step 5: Write unit tests

Create `src/unit_tests/models/<category>/<name>.rs`:
@@ -206,6 +225,8 @@ Every model needs **at least 3 test functions** (the structural reviewer enforce
- **Serialization** — round-trip serde (when the model is used in CLI/example-db flows).
- **Paper example** — verify the worked example from the paper entry (see below).

If Step 4.7 applies, also add a dedicated ILP rule test file under `src/unit_tests/rules/<problem>_ilp.rs`. Use strong direct-to-ILP reductions in the repo as the reference bar: the tests should validate the actual formulation semantics, not just that an ILP file exists.

When you add `test_<name>_paper_example`, it should:
1. Construct the same instance shown in the paper's example figure
2. Evaluate the solution from the issue's **Expected Outcome** section as shown in the paper and assert it is valid (and optimal for optimization problems)
@@ -259,10 +280,17 @@ Checklist: display name registered, notation self-contained, background present,

## Step 7: Verify

For ordinary model-only work:
```bash
make test clippy # Must pass
```

If Step 4.7 applied, run ILP-enabled workspace verification instead:
```bash
cargo clippy --all-targets --features ilp-highs -- -D warnings
cargo test --features "ilp-highs example-db" --workspace --verbose
```

Structural and quality review is handled by the `review-pipeline` stage, not here. The run stage just needs to produce working code.

## Naming Conventions
@@ -292,3 +320,4 @@ Structural and quality review is handled by the `review-pipeline` stage, not her
| Schema lists derived fields | Schema should list constructor params, not internal fields (e.g., `matrix, k` not `matrix, m, n, k`) |
| Missing canonical model example | Add a builder in `src/example_db/model_builders.rs` and keep it aligned with paper/example workflows |
| Paper example not tested | Must include `test_<name>_paper_example` that verifies the exact instance, solution, and solution count shown in the paper |
| Claiming direct ILP solving but leaving `<Problem> -> ILP` for later | If the issue promises a direct ILP path, implement that rule in the same PR with exact overhead metadata and production-level ILP tests |
3 changes: 3 additions & 0 deletions .claude/skills/add-rule/SKILL.md
@@ -193,6 +193,8 @@ Structural and quality review is handled by the `review-pipeline` stage, not her

- If the target problem already has a solver, use it directly.
- If the solving strategy requires ILP, implement the ILP reduction rule alongside (feature-gated under `ilp-solver`).
- A direct-to-ILP rule is a production reduction, not a stub. Match the completeness bar used by strong ILP reductions in this repo: exact overhead metadata, structure + closed-loop + extraction tests, weighted/infeasible/pathological regressions whenever the semantics require them, and ILP-enabled workspace verification.
- When this rule is the companion to a `[Model]` issue that explicitly claims ILP solvability, it belongs in the same PR as the model.
- If a custom solver is needed, implement it in `src/solvers/` and document it.

## CLI Impact
@@ -223,3 +225,4 @@ Aggregate-only reductions currently have a narrower CLI surface:
| Not adding canonical example to `example_db` | Add builder in `src/example_db/rule_builders.rs` |
| Not regenerating reduction graph | Run `cargo run --example export_graph` after adding a rule |
| Source/target model not fully registered | Both problems must already have `declare_variants!`, aliases as needed, and CLI create support -- use `add-model` skill first |
| Treating a direct-to-ILP rule as a toy stub | Direct ILP reductions need exact overhead metadata and strong semantic regression tests, just like other production ILP rules |
4 changes: 3 additions & 1 deletion .claude/skills/check-issue/SKILL.md
@@ -227,6 +227,7 @@ Applies when the title contains `[Model]`.
5. Check **How to solve** section:
- At least one solver method must be checked (brute-force, ILP reduction, or other)
- If no solver path is identified → **Warn** ("No solver means reduction rules can't be verified")
- If direct ILP solving is claimed, the issue must link a direct `[Rule] <ProblemName> to ILP` companion issue in the "Reduction Rule Crossref" section; otherwise → **Fail**

---

@@ -305,7 +306,7 @@ Check all template sections are present and substantive:
| Variables | Count, per-variable domain, semantic meaning |
| Schema | Type name, variants, field table |
| Complexity | Best known algorithm with citation **and** a concrete complexity expression in terms of problem parameters (e.g., `q^n`, `2^{0.8765n}`) |
| How to solve | At least one solver method checked |
| How to solve | At least one solver method checked; if ILP is claimed, a direct `[Rule] <ProblemName> to ILP` issue must be linked |
| Example Instance | Concrete instance that exercises the core structure |
| Expected Outcome | Satisfaction: one valid / satisfying solution with brief justification. Optimization: one optimal solution with the optimal objective value |

@@ -334,6 +335,7 @@ The formal definition must be **precise and implementable**:
- Optimization problems must include a concrete optimal solution and the optimal objective value
- **Detailed enough for paper**: This example will appear in the paper — it needs to be illustrative
- **Round-trip testable**: The example must be complex enough that a round-trip test (construct instance → solve → verify) can catch implementation bugs. A too-simple instance (e.g., 2 vertices, a single clause) may have a trivially correct solution that passes even with a wrong implementation. The example should have multiple feasible configurations with different objective values (for optimization) or a mix of satisfying and non-satisfying configurations (for satisfaction problems), so that correctness is meaningfully tested. Rule of thumb: the instance should have at least 2 suboptimal feasible solutions in addition to the optimal one.
- **ILP-testable when claimed**: If the issue advertises a direct ILP path, the example should be rich enough to support strong ILP closed-loop tests rather than a degenerate "any formulation passes" case.
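The richness bar above can be sanity-checked mechanically. A standalone sketch (toy instance and the `feasible_covers` helper are illustrative, not repo API): enumerate all feasible solutions of the candidate example and confirm the optimum is accompanied by at least 2 suboptimal feasible ones.

```rust
// Enumerate all vertex covers of a small graph by bitmask.
fn feasible_covers(n: usize, edges: &[(usize, usize)]) -> Vec<u32> {
    (0..1u32 << n)
        .filter(|mask| {
            edges
                .iter()
                .all(|&(u, v)| mask >> u & 1 == 1 || mask >> v & 1 == 1)
        })
        .collect()
}

fn main() {
    // Toy candidate example: minimum vertex cover on the 4-cycle 0-1-2-3-0.
    let covers = feasible_covers(4, &[(0, 1), (1, 2), (2, 3), (3, 0)]);
    let best = covers.iter().map(|m| m.count_ones()).min().unwrap();
    let suboptimal = covers.iter().filter(|m| m.count_ones() > best).count();
    // Rule of thumb from above: at least 2 suboptimal feasible solutions.
    assert!(suboptimal >= 2, "example too thin for a meaningful round-trip test");
    println!("optimum size: {best}, suboptimal feasible covers: {suboptimal}");
}
```

An example that fails this kind of count is exactly the "degenerate, any formulation passes" case the check is meant to reject.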

### 4e: Representation Feasibility

14 changes: 10 additions & 4 deletions .claude/skills/issue-to-pr/SKILL.md
@@ -72,7 +72,7 @@ For `[Rule]` issues, `ISSUE_JSON` already includes `source_problem`, `target_pro
- If both `checks.source_model` and `checks.target_model` are `pass` → continue to step 4.
- If either is `fail` → **STOP**. Comment on the issue: "Blocked: model `<name>` does not exist in main yet. Please implement it first (or file a `[Model]` issue)."

**One item per PR:** Do NOT implement a missing model as part of a `[Rule]` PR. Each PR should contain exactly one model or one rule, never both. This avoids bloated PRs and repeated implementation when the model is needed by multiple rules.
**One item per PR, with one exception:** Do NOT implement a missing model as part of a `[Rule]` PR. `[Rule]` issues still require both models to exist on `main`. The only exception is a `[Model]` issue that explicitly claims direct ILP solvability: that PR should implement both the model and the direct `<Model> -> ILP` rule together.

### 4. Research References

@@ -89,7 +89,8 @@ Write implementation plan to `docs/plans/YYYY-MM-DD-<slug>.md` using `superpower

The plan MUST reference the appropriate implementation skill and follow its steps:

- **For `[Model]` issues:** Follow [add-model](../add-model/SKILL.md) Steps 1-7 as the action pipeline
- **For ordinary `[Model]` issues:** Follow [add-model](../add-model/SKILL.md) Steps 1-7 as the action pipeline
- **For `[Model]` issues that explicitly claim direct ILP solving:** Follow [add-model](../add-model/SKILL.md) Steps 1-7 **and** [add-rule](../add-rule/SKILL.md) Steps 1-6 for the direct `<Problem> -> ILP` rule in the same plan / PR
- **For `[Rule]` issues:** Follow [add-rule](../add-rule/SKILL.md) Steps 1-6 as the action pipeline

Include the concrete details from the issue (problem definition, reduction algorithm, example, etc.) mapped onto each step.
@@ -98,9 +99,14 @@ Include the concrete details from the issue (problem definition, reduction algor
- Batch 1: Steps 1-5.5 (implement model, register, CLI, tests)
- Batch 2: Step 6 (write paper entry — depends on batch 1 for exports)

For a `[Model]` issue with an explicit direct ILP claim, use:
- Batch 1: implement the model, register it, add the direct `<Problem> -> ILP` rule, and add model + rule tests
- Batch 2: write both the `problem-def(...)` and `reduction-rule(...)` paper entries, regenerate exports / fixtures, and run final ILP-enabled verification

**Solver rules:**
- Ensure the issue template provides at least one solver and check that the solving strategy is valid; if not, reply under the issue to ask for clarification.
- If the solver uses integer programming, implement the model and ILP reduction rule together.
- If a `[Model]` issue explicitly claims direct ILP solving, implement the model and the direct `<Problem> -> ILP` reduction together in the same PR. Do not leave the ILP rule as a follow-up.
- The direct ILP rule must meet the same completeness bar as a standalone production ILP reduction: exact overhead metadata, feature-gated registration, strong closed-loop / extraction / weighted / infeasible / pathological tests when applicable, CLI/example-db/paper integration, and ILP-enabled workspace verification.
- Otherwise, ensure the information provided is enough to implement a solver.

**Example rules:**
@@ -291,6 +297,6 @@ Run /review-pipeline to run agentic review (structural check, quality check, age
| Dirty working tree | Use `pipeline_worktree.py prepare-issue-branch` — it stops before branching if the worktree is dirty |
| Resuming wrong PR | Always validate `resume_pr.head_ref_name` contains `issue-{N}` before trusting it — GitHub search can return false positives |
| `prepare-issue-branch` inside worktree | Skip it when inside a `run-pipeline` worktree (CWD under `.worktrees/`) — the branch already exists |
| Bundling model + rule in one PR | Each PR must contain exactly one model or one rule — STOP and block if model is missing (Step 3.5) |
| Bundling unrelated model + rule in one PR | Keep the normal one-item-per-PR rule. The only exception is a `[Model]` issue that explicitly claims direct ILP solving, which should ship with its direct `<Model> -> ILP` rule |
| Plan files left in PR | Delete plan files before final push (Step 7c) |
| `make paper` or export steps changed tracked JSON after verification | Run `git status --short`, stage expected generated exports, and STOP if unexpected files remain before push |