- Add InstructionBenchmark trait for clean separation of concerns
- Benchmark owns: SVM setup, keypairs, signing
- Framework owns: unsigned tx building, CU measurement, statistics
- Implement benchmark_instruction() runner with SVM state accumulation
- Convert SOL and SPL token transfer benchmarks to use new framework
- Add solana-message dependency for unsigned transaction creation
- Simplify benchmark output to single summary line + JSON data
- Eliminate 150+ lines of boilerplate from benchmark implementations
- Maintain identical CU measurements (300 CU SOL, 4794 CU SPL)

The framework enables measuring any Solana instruction's compute unit usage with minimal code while providing structured estimates similar to the Helius Priority Fee API.
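The benchmark/framework split described above could look roughly like the following sketch. Only the names `InstructionBenchmark` and `benchmark_instruction` come from the commit message; every other type, method, and signature here is an illustrative stand-in, not the crate's real API (in particular, `Instruction` and `simulate_cu` are placeholders for the actual SVM types and execution).

```rust
// Hypothetical sketch of the InstructionBenchmark separation of concerns.
// `Instruction` is a placeholder; the real framework would use Solana types.

/// Placeholder for a built (unsigned) instruction.
#[derive(Clone)]
struct Instruction {
    program: &'static str,
}

/// The benchmark owns environment setup and instruction construction;
/// the framework owns measurement and statistics.
trait InstructionBenchmark {
    /// One-time setup: fund keypairs, create accounts, etc.
    fn setup(&mut self);
    /// Build the instruction to measure for iteration `i`.
    fn build_instruction(&mut self, i: usize) -> Instruction;
}

/// Framework-side runner: executes `iterations` instructions against an
/// accumulating SVM state and returns the CU cost observed for each one.
fn benchmark_instruction<B: InstructionBenchmark>(bench: &mut B, iterations: usize) -> Vec<u64> {
    bench.setup();
    (0..iterations)
        .map(|i| {
            let ix = bench.build_instruction(i);
            // Stand-in for "build unsigned tx, run it on the SVM, read CUs consumed".
            simulate_cu(&ix)
        })
        .collect()
}

fn simulate_cu(ix: &Instruction) -> u64 {
    // Fixed costs mirroring the numbers quoted in the commit message.
    match ix.program {
        "system" => 300,
        "spl-token" => 4794,
        _ => 0,
    }
}

/// A benchmark implementation then shrinks to just the domain-specific parts.
struct SolTransferBench;

impl InstructionBenchmark for SolTransferBench {
    fn setup(&mut self) {}
    fn build_instruction(&mut self, _i: usize) -> Instruction {
        Instruction { program: "system" }
    }
}

fn main() {
    let samples = benchmark_instruction(&mut SolTransferBench, 10);
    println!("SOL transfer: {} CU", samples[0]);
}
```

The point of the split is that a benchmark author writes only `setup` and `build_instruction`; signing, measurement, and statistics live once in the framework, which is what eliminates the repeated boilerplate.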
Add comprehensive CU benchmarking framework with dual instruction/transaction paradigms.

Framework Features:
- InstructionBenchmark: pure instruction CU measurement (no framework overhead)
- TransactionBenchmark: complete workflow measurement with multi-program context
- Rich execution context discovery through simulation
- Percentile-based CU estimates (min/conservative/balanced/safe/very_high/unsafe_max)
- Professional logging with env_logger integration
- Clean JSON output with proper domain modeling

Key Design Decisions:
- Remove automatic ComputeBudgetInstruction from instruction benchmarks for transparency
- Two-phase measurement: simulation for context + execution for statistics
- SVM state accumulation across measurements for realism
- StatType enum for clean instruction vs transaction distinction
- Comprehensive unit tests for percentile calculations

Benchmarks:
- SOL transfer: 150 CU (pure instruction)
- SPL token transfer: instruction-level benchmark
- Token setup workflow: 28,322-38,822 CU transaction benchmark

This provides systematic, reproducible CU analysis for both research and production planning.
Transform project positioning from "testing framework" to "testing and benchmarking framework" with comprehensive documentation.

Documentation Additions:
- Add BENCHMARKING.md: complete guide with living examples and best practices
- Enhance README: prominently feature CU benchmarking alongside testing
- Create a clear learning path: README → BENCHMARKING.md → benchmark files

Key Documentation Features:
- Dual-paradigm explanation (instruction vs transaction benchmarking)
- Statistical output interpretation (percentile-based estimates)
- Production integration patterns for fee estimation
- Troubleshooting guide for common benchmark issues
- Living documentation that references actual working benchmark files

Project Positioning:
- README hero section now highlights both testing AND benchmarking capabilities
- CU benchmarking quick start with concrete examples (SOL transfer: 150 CU, token setup: 28K-38K CU)
- Updated roadmap showing the completed benchmarking framework
- Enhanced examples section showcasing benchmark files as primary documentation

This establishes systematic CU analysis as a unique differentiator alongside the existing comprehensive testing utilities.
Dual Benchmarking Paradigms
🔬 Instruction Benchmarking - Pure CU measurement
🔄 Transaction Benchmarking - Complete workflow analysis
Statistical Analysis Engine
Percentile-Based Estimates (inspired by Helius Priority Fee API):
```json
{
  "cu_estimate": {
    "min": 28322,          // 0th percentile - absolute minimum
    "conservative": 30145, // 25th percentile - safe for most cases
    "balanced": 32891,     // 50th percentile - good default
    "safe": 35123,         // 75th percentile - high reliability
    "very_high": 37456,    // 95th percentile - very reliable
    "unsafe_max": 38822,   // 100th percentile - maximum observed
    "sample_size": 100
  }
}
```

🔧 Major Technical Achievements
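A minimal sketch of how such estimates could be derived from a set of CU samples. The field names match the JSON output above; the interpolation method (nearest-rank percentiles) is an assumption, not necessarily what the crate implements.

```rust
// Nearest-rank percentile over a sorted sample set: index = ceil(p/100 * n) - 1.
fn percentile(sorted: &[u64], p: f64) -> u64 {
    assert!(!sorted.is_empty());
    let n = sorted.len() as f64;
    let idx = ((p / 100.0 * n).ceil() as usize).saturating_sub(1);
    sorted[idx.min(sorted.len() - 1)]
}

/// Map raw CU samples to the named estimate tiers shown in the JSON output.
fn cu_estimate(mut samples: Vec<u64>) -> [(&'static str, u64); 6] {
    samples.sort_unstable();
    [
        ("min", percentile(&samples, 0.0)),
        ("conservative", percentile(&samples, 25.0)),
        ("balanced", percentile(&samples, 50.0)),
        ("safe", percentile(&samples, 75.0)),
        ("very_high", percentile(&samples, 95.0)),
        ("unsafe_max", percentile(&samples, 100.0)),
    ]
}

fn main() {
    // With samples 1..=100, "min" is 1, "balanced" is 50, "unsafe_max" is 100.
    let est = cu_estimate((1..=100).collect());
    for (name, cu) in est {
        println!("{name}: {cu} CU");
    }
}
```

The tier names encode a reliability trade-off: a caller budgeting `balanced` CUs succeeds on roughly half the observed runs, while `very_high` fails only in the top 5% of observed costs.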
1. Context Discovery System
2. Statistical Rigor
3. Clean Architecture
- `StatType` enum for instruction vs transaction distinction

4. Framework Design Excellence
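The `StatType` distinction might be as simple as the following sketch; only the enum name comes from the PR, and the variant names and labels are assumptions.

```rust
/// Hypothetical sketch: tags a statistic with the paradigm that produced it.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum StatType {
    /// Pure instruction cost, no framework overhead.
    Instruction,
    /// Full workflow cost, covering every instruction in the transaction.
    Transaction,
}

fn label(stat: StatType) -> &'static str {
    match stat {
        StatType::Instruction => "instruction CU",
        StatType::Transaction => "transaction CU",
    }
}

fn main() {
    println!("{}", label(StatType::Instruction));
    println!("{}", label(StatType::Transaction));
}
```

Keeping the distinction in the type system means reports and JSON output cannot silently mix the two kinds of measurement.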
📊 Working Examples & Living Documentation
Benchmarks as Primary Documentation
- `cu_bench_sol_transfer_ix.rs` - 150 CU pure system program call
- `cu_bench_spl_transfer_ix.rs` - complex multi-account instruction
- `cu_bench_token_setup_tx.rs` - 5-instruction, 4-program workflow

Comprehensive Documentation
- `BENCHMARKING.md` - 274 lines of practical documentation

🚀 Key Design Decisions & Evolution
Problems Solved During Development
Architecture Choices
📈 Impact & Results
Ecosystem Positioning
Elevates `litesvm-testing` from "another testing framework" to a unique dual-purpose toolkit.

Concrete Value Delivered
Technical Metrics
🔄 Migration & Integration
Existing Users: Zero breaking changes - all existing testing functionality preserved
New Capabilities: opt-in via `--features cu_bench` for benchmarking functionality

Production Integration:
This PR establishes `litesvm-testing` as the definitive toolkit for Solana program development - combining comprehensive testing utilities with production-ready performance analysis capabilities not available elsewhere in the ecosystem.

Ready for review! 🎯