Catches TLE and MLE by running your solution against adversarial max-constraint inputs and measuring wall-clock time and peak memory.
| File | Purpose | Edit per problem? |
|---|---|---|
| gen_perf.cpp | Worst-case input generator | ✅ make_test() + t |
| solution.cpp | Your solution | ✅ |
| measure.cpp | Times and measures memory of solution | ❌ |
| perf_test.bat | Test runner | ❌ |
After running, these folders are created automatically:
```
build\       <- compiled binaries
dump\        <- temporary I/O files (auto-cleaned after each run)
perf_result\ <- saved inputs from TLE/MLE/crash tests
```
```
perf_test.bat               :: 20 tests, TL=2000ms, ML=256MB
perf_test.bat 50 1000 512   :: 50 tests, TL=1000ms, ML=512MB
perf_test.bat 20 2000 256 0 :: skip recompile
```

Arguments in order: loops, time_limit_ms, mem_limit_mb, compile. All optional; defaults are 20 2000 256 1.
```
============================================================
 PERF TEST | loops=20 TL=2000ms ML=256MB
============================================================
Test 1 / 20
  time=1823ms mem=102.4MB exit=0
  [OK]
Test 2 / 20
  time=2341ms mem=104.1MB exit=0
  [TLE: 2341ms]
============================================================
 RESULTS (20 tests | TL=2000ms ML=256MB)
============================================================
Tests run    : 20
TLE          : 1
MLE          : 0
Crashes      : 0
Worst time   : 2341ms (seed 2)
Worst memory : 104.1MB (seed 2)
TLE seeds    : 2
```
Failing inputs are saved to perf_result\tle_seed2_input.txt etc.
To replay any test:

```
build\gen_perf <seed> > dump\perf_input.txt
build\solution < dump\perf_input.txt
```

gen_perf.cpp is the only file you edit per problem. Unlike gen.cpp in the stress kit, gen_perf.cpp must always emit worst-case inputs, not random ones.
Knowing the constraints alone is not enough. Think about what input structure maximises your specific algorithm's work:
| Algorithm characteristic | Adversarial input |
|---|---|
| O(n²) inner loop | t=1, n=N_MAX |
| Expensive per-test setup | t=T_MAX, n=1, val=VAL_MAX |
| DFS / backtracking | inputs that maximise branching factor |
| Sorting based | reverse sorted input |
| GCD / factorisation | values with many shared prime factors |
Problems often have both a per-test limit and a global sum limit (e.g. n ≤ 5000 and sum of n ≤ 5000). Always test both extremes:
Extreme 1: t=T_MAX, n=1 (many tiny test cases — maximises per-test setup cost)
Extreme 2: t=1, n=N_SUM (one large test case — maximises per-test work)
Edit gen_perf.cpp for one extreme, run perf_test.bat, then switch and run again.
Problem: t ≤ 5000, n ≤ 5000, a[i] ≤ 5000, sum of n ≤ 5000.
Solution has O(maxval²) initialisation per test case.
Worst case is t=T_MAX with n=1 — not t=1 with n=N_SUM.
```cpp
#include <bits/stdc++.h>
using namespace std;

mt19937_64 rng; // kept for interface parity with gen.cpp; this worst case is deterministic

void make_test()
{
    cout << 1 << "\n";    // n = 1
    cout << 5000 << "\n"; // a[0] = VAL_MAX, maximises per-test init cost
}

int main(int argc, char* argv[])
{
    if (argc > 1) rng.seed(atoll(argv[1]));
    int t = 5000; // t = T_MAX
    cout << t << "\n";
    while (t--) make_test();
}
```

Your machine is typically 3–4x faster than Codeforces judges, so a solution that barely TLEs on Codeforces may pass locally. If this happens, look for inputs where your local time exceeds ~500–700ms; that is a strong signal of a TLE on the judge.
1. Get a TLE verdict on Codeforces
2. Identify what input structure maximises your algorithm's work
3. Edit gen_perf.cpp -> make_test() and t to reflect that input
4. Run perf_test.bat to confirm the TLE reproduces locally
5. Optimise solution.cpp
6. Re-run perf_test.bat to verify the fix
7. Resubmit