# beckn-onix Adapter Benchmarks

End-to-end performance benchmarks for the beckn-onix ONIX adapter, built on Go's native `testing.B` framework and `net/http/httptest`. No Docker, no external services — everything runs in-process.
## Quick Start

```bash
# From the repo root
go mod tidy                        # fetch miniredis + benchstat checksums
bash benchmarks/run_benchmarks.sh  # compile plugins, run all scenarios, generate report
```

Runtime output lands in `benchmarks/results/<timestamp>/` (gitignored). Committed reports live in `benchmarks/reports/`.
## What Is Being Benchmarked

The benchmarks target the `bapTxnCaller` handler — the primary outbound path a BAP takes when initiating a Beckn transaction. Every request travels through the full production pipeline:

```text
Benchmark goroutine(s)
      │  HTTP POST /bap/caller/<action>
      ▼
httptest.Server ← ONIX adapter (real compiled .so plugins)
      │
      ├── addRoute        router plugin              resolve BPP URL from routing config
      ├── sign            signer + simplekeymanager  Ed25519 / BLAKE-512 signing
      └── validateSchema  schemav2validator          Beckn OpenAPI spec validation
      │
      └──▶ httptest mock BPP (instant ACK — no network)
```
Mock services replace all external dependencies, so results reflect adapter-internal latency only:

| Dependency | Replaced by |
|---|---|
| Redis | `miniredis` (in-process) |
| BPP backend | `httptest` mock — returns `{"message":{"ack":{"status":"ACK"}}}` |
| Beckn registry | `httptest` mock — returns the dev key pair for signature verification |
## Benchmark Scenarios

| Benchmark | What it measures |
|---|---|
| `BenchmarkBAPCaller_Discover` | Baseline single-goroutine latency for `/discover` |
| `BenchmarkBAPCaller_Discover_Parallel` | Throughput under concurrent load; run with `-cpu=1,2,4,8,16` |
| `BenchmarkBAPCaller_AllActions` | Per-action latency: discover, select, init, confirm |
| `BenchmarkBAPCaller_Discover_Percentiles` | p50 / p95 / p99 latency via `b.ReportMetric` |
| `BenchmarkBAPCaller_CacheWarm` | Latency when the Redis key cache is already populated |
| `BenchmarkBAPCaller_CacheCold` | Latency on a cold cache — full key-derivation round-trip |
| `BenchmarkBAPCaller_RPS` | Requests per second under parallel load (`req/s` custom metric) |
## How It Works

### Startup (`TestMain`)

Before any benchmark runs, `TestMain` in `e2e/setup_test.go`:

- Compiles all required plugins to a temporary directory using `go build -buildmode=plugin`. The first run takes 60–90 s (cold Go build cache); subsequent runs are near-instant.
- Starts miniredis — an in-process Redis server used by the `cache` plugin (no external Redis needed).
- Starts mock servers — an instant-ACK BPP and a registry mock that returns the dev signing public key.
- Starts the adapter — wires all plugins programmatically (no YAML parsing) and wraps it in an `httptest.Server`.
### Per-iteration (`buildSignedRequest`)

Each benchmark iteration:

- Loads the JSON fixture for the requested Beckn action (`testdata/<action>_request.json`).
- Substitutes sentinel values (`BENCH_TIMESTAMP`, `BENCH_MESSAGE_ID`, `BENCH_TRANSACTION_ID`) with fresh values, ensuring unique message IDs per iteration.
- Signs the body using the Beckn Ed25519/BLAKE-512 spec (the same algorithm as the production `signer` plugin).
- Sends the signed `POST` to the adapter and validates a `200 OK` response.
### Validation test (`TestSignBecknPayload`)

A plain `Test*` function runs before the benchmarks and sends one signed request end-to-end. If the signing helper is mis-implemented, this fails fast before any benchmark time is wasted.
## Directory Layout

```text
benchmarks/
├── README.md                ← you are here
├── run_benchmarks.sh        ← one-shot runner script
├── e2e/
│   ├── bench_test.go        ← benchmark functions
│   ├── setup_test.go        ← TestMain, startAdapter, signing helper
│   ├── mocks_test.go        ← mock BPP and registry servers
│   ├── keys_test.go         ← dev key pair constants
│   └── testdata/
│       ├── routing-BAPCaller.yaml  ← routing config (BENCH_BPP_URL placeholder)
│       ├── discover_request.json   ← Beckn search payload fixture
│       ├── select_request.json
│       ├── init_request.json
│       └── confirm_request.json
├── tools/
│   ├── parse_results.go     ← CSV exporter for latency + throughput data
│   └── generate_report.go   ← fills REPORT_TEMPLATE.md with run data
├── reports/                 ← committed benchmark reports and template
│   ├── REPORT_TEMPLATE.md   ← template used to generate each run's report
│   └── REPORT_ONIX_v150.md  ← baseline report (Apple M5, Beckn v2.0.0)
└── results/                 ← gitignored; created by run_benchmarks.sh
    └── <timestamp>/
        ├── BENCHMARK_REPORT.md           ← generated human-readable report
        ├── run1.txt, run2.txt, run3.txt  ← raw go test -bench output
        ├── parallel_cpu*.txt             ← concurrency sweep
        ├── benchstat_summary.txt         ← statistical aggregation
        ├── latency_report.csv            ← per-benchmark latency (from parse_results.go)
        └── throughput_report.csv         ← RPS vs GOMAXPROCS (from parse_results.go)
```
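The core job of a tool like `parse_results.go` is turning raw `go test -bench` output lines into CSV rows. A sketch of that parsing step — the function name and return shape are illustrative, not the tool's actual code:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// benchLine matches one `go test -bench` result line, e.g.
//   BenchmarkBAPCaller_Discover-10    5000    240132 ns/op    18432 B/op
// The optional -N suffix is the GOMAXPROCS value for that run.
var benchLine = regexp.MustCompile(`^(Benchmark\S+?)(?:-(\d+))?\s+(\d+)\s+([\d.]+) ns/op`)

// parseBenchLine extracts the benchmark name, GOMAXPROCS suffix,
// iteration count, and ns/op from one output line.
func parseBenchLine(line string) (name string, procs, iters int, nsPerOp float64, ok bool) {
	m := benchLine.FindStringSubmatch(line)
	if m == nil {
		return "", 0, 0, 0, false
	}
	procs = 1 // no suffix means the run used one proc marker
	if m[2] != "" {
		procs, _ = strconv.Atoi(m[2])
	}
	iters, _ = strconv.Atoi(m[3])
	nsPerOp, _ = strconv.ParseFloat(m[4], 64)
	return m[1], procs, iters, nsPerOp, true
}

func main() {
	name, procs, iters, ns, ok := parseBenchLine(
		"BenchmarkBAPCaller_Discover-10    5000    240132 ns/op")
	fmt.Println(name, procs, iters, ns, ok)
}
```

From here, emitting `latency_report.csv` is just writing one row per parsed line with `encoding/csv`.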
## Reports

Committed reports are stored in `benchmarks/reports/`. Each report documents the environment, raw numbers, and analysis for a specific run and adapter version.

| File | Platform | Adapter version |
|---|---|---|
| `REPORT_ONIX_v150.md` | Apple M5 · darwin/arm64 · GOMAXPROCS=10 | beckn-onix v1.5.0 |

The script auto-generates `BENCHMARK_REPORT.md` in each results directory using `REPORT_TEMPLATE.md`. To permanently record a run:

- Run `bash benchmarks/run_benchmarks.sh` — `BENCHMARK_REPORT.md` is generated automatically.
- Review it and fill in the B5 bottleneck analysis section.
- Copy it to `benchmarks/reports/REPORT_<tag>.md` and commit. `benchmarks/results/` stays gitignored; only the curated report goes in.
## Running Individual Benchmarks

```bash
# Single benchmark, 10 s
go test ./benchmarks/e2e/... \
  -bench=BenchmarkBAPCaller_Discover \
  -benchtime=10s -benchmem -timeout=30m

# All actions in one shot
go test ./benchmarks/e2e/... \
  -bench=BenchmarkBAPCaller_AllActions \
  -benchtime=5s -benchmem -timeout=30m

# Concurrency sweep at 1, 4, and 16 goroutines
go test ./benchmarks/e2e/... \
  -bench=BenchmarkBAPCaller_Discover_Parallel \
  -benchtime=30s -cpu=1,4,16 -timeout=30m

# Race detector check (no data races)
go test ./benchmarks/e2e/... \
  -bench=BenchmarkBAPCaller_Discover_Parallel \
  -benchtime=5s -race -timeout=30m

# Percentile metrics (p50/p95/p99 in µs)
go test ./benchmarks/e2e/... \
  -bench=BenchmarkBAPCaller_Discover_Percentiles \
  -benchtime=10s -benchmem -timeout=30m
```
## Comparing Two Runs with benchstat

```bash
go test ./benchmarks/e2e/... -bench=. -benchtime=10s -count=6 > before.txt
# ... make your change ...
go test ./benchmarks/e2e/... -bench=. -benchtime=10s -count=6 > after.txt
go tool benchstat before.txt after.txt
```
## Dependencies

| Package | Purpose |
|---|---|
| `github.com/alicebob/miniredis/v2` | In-process Redis for the cache plugin |
| `golang.org/x/perf/cmd/benchstat` | Statistical benchmark comparison (CLI tool) |

Both are declared in `go.mod`. Run `go mod tidy` once to fetch their checksums.