scripts to run benchmarks
157
benchmarks/README.md
Normal file
@@ -0,0 +1,157 @@
# beckn-onix Adapter Benchmarks

End-to-end performance benchmarks for the beckn-onix ONIX adapter, using Go's native `testing.B` framework and `net/http/httptest`. No Docker, no external services — everything runs in-process.

---

## Quick Start

```bash
# From the repo root
go mod tidy                         # fetch miniredis + benchstat checksums
bash benchmarks/run_benchmarks.sh   # compile plugins, run all scenarios, generate report
```

Results land in `benchmarks/results/<timestamp>/`.

---

## What Is Being Benchmarked

The benchmarks target the **`bapTxnCaller`** handler — the primary outbound path a BAP takes when initiating a Beckn transaction. Every request travels through the full production pipeline:

```
Benchmark goroutine(s)
        │  HTTP POST /bap/caller/<action>
        ▼
httptest.Server ← ONIX adapter (real compiled .so plugins)
        │
        ├── addRoute         router plugin                resolve BPP URL from routing config
        ├── sign             signer + simplekeymanager    Ed25519 / BLAKE-512 signing
        ├── validateSchema   schemav2validator            Beckn OpenAPI spec validation
        │
        └──▶ httptest mock BPP (instant ACK — no network)
```

Mock services replace all external dependencies so results reflect **adapter-internal latency only**:

| Dependency | Replaced by |
|------------|-------------|
| Redis | `miniredis` (in-process) |
| BPP backend | `httptest` mock — returns `{"message":{"ack":{"status":"ACK"}}}` |
| Beckn registry | `httptest` mock — returns the dev key pair for signature verification |

---

## Benchmark Scenarios

| Benchmark | What it measures |
|-----------|-----------------|
| `BenchmarkBAPCaller_Discover` | Baseline single-goroutine latency for `/discover` |
| `BenchmarkBAPCaller_Discover_Parallel` | Throughput under concurrent load; run with `-cpu=1,2,4,8,16` |
| `BenchmarkBAPCaller_AllActions` | Per-action latency: `discover`, `select`, `init`, `confirm` |
| `BenchmarkBAPCaller_Discover_Percentiles` | p50 / p95 / p99 latency via `b.ReportMetric` |
| `BenchmarkBAPCaller_CacheWarm` | Latency when the Redis key cache is already populated |
| `BenchmarkBAPCaller_CacheCold` | Latency on a cold cache — full key-derivation round-trip |
| `BenchmarkBAPCaller_RPS` | Requests-per-second under parallel load (`req/s` custom metric) |

---

## How It Works

### Startup (`TestMain`)

Before any benchmark runs, `TestMain` in `e2e/setup_test.go`:

1. **Compiles all required plugins** to a temporary directory using `go build -buildmode=plugin`. The first run takes 60–90 s (cold Go build cache); subsequent runs are near-instant.
2. **Starts miniredis** — an in-process Redis server used by the `cache` plugin (no external Redis needed).
3. **Starts mock servers** — an instant-ACK BPP and a registry mock that returns the dev signing public key.
4. **Starts the adapter** — wires all plugins programmatically (no YAML parsing) and wraps it in an `httptest.Server`.

### Per-iteration (`buildSignedRequest`)

Each benchmark iteration:

1. Loads the JSON fixture for the requested Beckn action (`testdata/<action>_request.json`).
2. Substitutes sentinel values (`BENCH_TIMESTAMP`, `BENCH_MESSAGE_ID`, `BENCH_TRANSACTION_ID`) with fresh values, ensuring unique message IDs per iteration.
3. Signs the body using the Beckn Ed25519/BLAKE-512 spec (same algorithm as the production `signer` plugin).
4. Sends the signed `POST` to the adapter and validates a `200 OK` response.

### Validation test (`TestSignBecknPayload`)

A plain `Test*` function runs before the benchmarks and sends one signed request end-to-end. If the signing helper is mis-implemented, this fails fast before any benchmark time is wasted.

---

## Directory Layout

```
benchmarks/
├── README.md                ← you are here
├── run_benchmarks.sh        ← one-shot runner script
├── e2e/
│   ├── bench_test.go        ← benchmark functions (T8)
│   ├── setup_test.go        ← TestMain, startAdapter, signing helper (T3/T4/T7)
│   ├── mocks_test.go        ← mock BPP and registry servers (T5)
│   ├── keys_test.go         ← dev key pair constants (T6a)
│   └── testdata/
│       ├── routing-BAPCaller.yaml   ← routing config (BENCH_BPP_URL placeholder)
│       ├── discover_request.json    ← Beckn search payload fixture
│       ├── select_request.json
│       ├── init_request.json
│       └── confirm_request.json
├── tools/
│   └── parse_results.go     ← CSV exporter for latency + throughput data (T10)
└── results/
    └── BENCHMARK_REPORT.md  ← report template (populate after a run)
```

---

## Running Individual Benchmarks

```bash
# Single benchmark, 10 s
go test ./benchmarks/e2e/... \
  -bench=BenchmarkBAPCaller_Discover \
  -benchtime=10s -benchmem -timeout=30m

# All actions in one shot
go test ./benchmarks/e2e/... \
  -bench=BenchmarkBAPCaller_AllActions \
  -benchtime=5s -benchmem -timeout=30m

# Concurrency sweep at 1, 4, and 16 goroutines
go test ./benchmarks/e2e/... \
  -bench=BenchmarkBAPCaller_Discover_Parallel \
  -benchtime=30s -cpu=1,4,16 -timeout=30m

# Race detector check (no data races)
go test ./benchmarks/e2e/... \
  -bench=BenchmarkBAPCaller_Discover_Parallel \
  -benchtime=5s -race -timeout=30m

# Percentile metrics (p50/p95/p99 in µs)
go test ./benchmarks/e2e/... \
  -bench=BenchmarkBAPCaller_Discover_Percentiles \
  -benchtime=10s -benchmem -timeout=30m
```

## Comparing Two Runs with benchstat

```bash
go test ./benchmarks/e2e/... -bench=. -benchtime=10s -count=6 > before.txt
# ... make your change ...
go test ./benchmarks/e2e/... -bench=. -benchtime=10s -count=6 > after.txt
benchstat before.txt after.txt
```

---

## Dependencies

| Package | Purpose |
|---------|---------|
| `github.com/alicebob/miniredis/v2` | In-process Redis for the `cache` plugin |
| `golang.org/x/perf/cmd/benchstat` | Statistical benchmark comparison (CLI tool) |

Both are declared in `go.mod`. Run `go mod tidy` once to fetch their checksums.
186
benchmarks/e2e/bench_test.go
Normal file
@@ -0,0 +1,186 @@
package e2e_bench_test

import (
	"fmt"
	"net/http"
	"sort"
	"sync/atomic"
	"testing"
	"time"
)

// ── BenchmarkBAPCaller_Discover ───────────────────────────────────────────────
// Baseline single-goroutine throughput and latency for the discover endpoint.
// Exercises the full bapTxnCaller pipeline: addRoute → sign → validateSchema.
func BenchmarkBAPCaller_Discover(b *testing.B) {
	b.ReportAllocs()
	b.ResetTimer()

	for i := 0; i < b.N; i++ {
		req := buildSignedRequest(b, "discover")
		if err := sendRequest(req); err != nil {
			b.Errorf("iteration %d: %v", i, err)
		}
	}
}

// ── BenchmarkBAPCaller_Discover_Parallel ─────────────────────────────────────
// Measures throughput under concurrent load. Run with -cpu=1,2,4,8,16 to
// produce a concurrency sweep. Each goroutine runs its own request loop.
func BenchmarkBAPCaller_Discover_Parallel(b *testing.B) {
	b.ReportAllocs()
	b.ResetTimer()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			req := buildSignedRequest(b, "discover")
			if err := sendRequest(req); err != nil {
				b.Errorf("parallel: %v", err)
			}
		}
	})
}

// ── BenchmarkBAPCaller_AllActions ────────────────────────────────────────────
// Measures per-action latency for discover, select, init, and confirm in a
// single benchmark run. Each sub-benchmark is independent.
func BenchmarkBAPCaller_AllActions(b *testing.B) {
	actions := []string{"discover", "select", "init", "confirm"}

	for _, action := range actions {
		action := action // capture for sub-benchmark closure
		b.Run(action, func(b *testing.B) {
			b.ReportAllocs()
			b.ResetTimer()
			for i := 0; i < b.N; i++ {
				req := buildSignedRequest(b, action)
				if err := sendRequest(req); err != nil {
					b.Errorf("action %s iteration %d: %v", action, i, err)
				}
			}
		})
	}
}

// ── BenchmarkBAPCaller_Discover_Percentiles ───────────────────────────────────
// Collects individual request durations and reports p50, p95, and p99 latency
// in microseconds via b.ReportMetric. The percentile data is only meaningful
// when -benchtime is at least 5s (default used in run_benchmarks.sh).
func BenchmarkBAPCaller_Discover_Percentiles(b *testing.B) {
	durations := make([]time.Duration, 0, b.N)

	b.ReportAllocs()
	b.ResetTimer()

	for i := 0; i < b.N; i++ {
		req := buildSignedRequest(b, "discover")
		start := time.Now()
		if err := sendRequest(req); err != nil {
			b.Errorf("iteration %d: %v", i, err)
			continue
		}
		durations = append(durations, time.Since(start))
	}

	// Compute and report percentiles.
	if len(durations) == 0 {
		return
	}
	sort.Slice(durations, func(i, j int) bool { return durations[i] < durations[j] })

	p50 := durations[len(durations)*50/100]
	p95 := durations[len(durations)*95/100]
	p99 := durations[len(durations)*99/100]

	b.ReportMetric(float64(p50.Microseconds()), "p50_µs")
	b.ReportMetric(float64(p95.Microseconds()), "p95_µs")
	b.ReportMetric(float64(p99.Microseconds()), "p99_µs")
}

// ── BenchmarkBAPCaller_CacheWarm / CacheCold ─────────────────────────────────
// Compares latency when the Redis cache holds a pre-warmed key set (CacheWarm)
// vs. when each iteration has a fresh message_id that the cache has never seen
// (CacheCold). The delta reveals the key-lookup overhead on a cold path.

// BenchmarkBAPCaller_CacheWarm sends a fixed body (constant message_id) so the
// simplekeymanager's Redis cache is hit on every iteration after the first.
func BenchmarkBAPCaller_CacheWarm(b *testing.B) {
	body := warmFixtureBody(b, "discover")

	// Warm-up: send once to populate the cache before the timer starts.
	warmReq := buildSignedRequestFixed(b, "discover", body)
	if err := sendRequest(warmReq); err != nil {
		b.Fatalf("cache warm-up request failed: %v", err)
	}

	b.ReportAllocs()
	b.ResetTimer()

	for i := 0; i < b.N; i++ {
		req := buildSignedRequestFixed(b, "discover", body)
		if err := sendRequest(req); err != nil {
			b.Errorf("CacheWarm iteration %d: %v", i, err)
		}
	}
}

// BenchmarkBAPCaller_CacheCold uses a fresh message_id per iteration, so every
// request experiences a cache miss and a full key-derivation round-trip.
func BenchmarkBAPCaller_CacheCold(b *testing.B) {
	b.ReportAllocs()
	b.ResetTimer()

	for i := 0; i < b.N; i++ {
		req := buildSignedRequest(b, "discover") // fresh IDs each time
		if err := sendRequest(req); err != nil {
			b.Errorf("CacheCold iteration %d: %v", i, err)
		}
	}
}

// ── BenchmarkBAPCaller_RPS ────────────────────────────────────────────────────
// Reports requests-per-second as a custom metric alongside the default ns/op.
// Run with -benchtime=30s for a stable RPS reading.
func BenchmarkBAPCaller_RPS(b *testing.B) {
	b.ReportAllocs()

	var count int64
	start := time.Now()

	b.ResetTimer()
	b.RunParallel(func(pb *testing.PB) {
		var local int64
		for pb.Next() {
			req := buildSignedRequest(b, "discover")
			if err := sendRequest(req); err == nil {
				local++
			}
		}
		// Several goroutines reach this line concurrently, so the shared
		// counter must be updated atomically to stay clean under -race.
		atomic.AddInt64(&count, local)
	})

	elapsed := time.Since(start).Seconds()
	if elapsed > 0 {
		rps := float64(count) / elapsed
		b.ReportMetric(rps, "req/s")
		fmt.Printf("  RPS: %.0f over %.1fs\n", rps, elapsed)
	}
}

// ── helper: one-shot HTTP client ─────────────────────────────────────────────

// benchHTTPClient is a shared client for all benchmark goroutines.
// MaxConnsPerHost caps the total active connections to localhost so we don't
// exhaust the OS ephemeral port range. MaxIdleConnsPerHost keeps that many
// connections warm in the pool so parallel goroutines reuse them rather than
// opening fresh TCP connections on every request.
var benchHTTPClient = &http.Client{
	Transport: &http.Transport{
		MaxIdleConns:        200,
		MaxIdleConnsPerHost: 200,
		MaxConnsPerHost:     200,
		IdleConnTimeout:     90 * time.Second,
		DisableCompression:  true, // no benefit compressing localhost traffic
	},
}
13
benchmarks/e2e/keys_test.go
Normal file
@@ -0,0 +1,13 @@
package e2e_bench_test

// Development key pair from config/local-retail-bap.yaml.
// Used across the retail devkit for non-production testing.
// DO NOT use in any production or staging environment.
const (
	benchSubscriberID = "sandbox.food-finder.com"
	benchKeyID        = "76EU7VwahYv4XztXJzji9ssiSV74eWXWBcCKGn7jAdm5VGLCdYAJ8j"
	benchPrivKey      = "rrNtVgyASCGlo+ebsJaA37D5CZYZVfT0JA5/vlkTeV0="
	benchPubKey       = "oFIk7KqCqvqRYkLMjQqiaKM5oOozkYT64bfLuc8p/SU="
	benchEncrPrivKey  = "rrNtVgyASCGlo+ebsJaA37D5CZYZVfT0JA5/vlkTeV0="
	benchEncrPubKey   = "oFIk7KqCqvqRYkLMjQqiaKM5oOozkYT64bfLuc8p/SU="
)
63
benchmarks/e2e/mocks_test.go
Normal file
@@ -0,0 +1,63 @@
package e2e_bench_test

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
	"time"
)

// startMockBPP starts an httptest server that accepts any POST request and
// immediately returns a valid Beckn ACK. This replaces the real BPP backend,
// isolating benchmark results to adapter-internal latency only.
func startMockBPP() *httptest.Server {
	ackBody := `{"message":{"ack":{"status":"ACK"}}}`
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusOK)
		fmt.Fprint(w, ackBody)
	}))
}

// subscriberRecord mirrors the registry API response shape for a single subscriber.
type subscriberRecord struct {
	SubscriberID     string `json:"subscriber_id"`
	UniqueKeyID      string `json:"unique_key_id"`
	SigningPublicKey string `json:"signing_public_key"`
	ValidFrom        string `json:"valid_from"`
	ValidUntil       string `json:"valid_until"`
	Status           string `json:"status"`
}

// startMockRegistry starts an httptest server that returns a subscriber record
// matching the benchmark test keys. The signvalidator plugin uses this to
// resolve the public key for signature verification on incoming requests.
func startMockRegistry() *httptest.Server {
	record := subscriberRecord{
		SubscriberID:     benchSubscriberID,
		UniqueKeyID:      benchKeyID,
		SigningPublicKey: benchPubKey,
		ValidFrom:        time.Now().AddDate(-1, 0, 0).Format(time.RFC3339),
		ValidUntil:       time.Now().AddDate(10, 0, 0).Format(time.RFC3339),
		Status:           "SUBSCRIBED",
	}
	body, _ := json.Marshal([]subscriberRecord{record})

	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Support both GET (lookup) and POST (lookup with body) registry calls.
		// Respond with the subscriber record regardless of subscriber_id query param.
		subscriberID := r.URL.Query().Get("subscriber_id")
		if subscriberID == "" {
			// Try extracting from path for dedi-registry style calls.
			parts := strings.Split(strings.TrimPrefix(r.URL.Path, "/"), "/")
			if len(parts) > 0 {
				subscriberID = parts[len(parts)-1]
			}
		}
		_ = subscriberID // the mock serves the same record for every subscriber
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusOK)
		w.Write(body)
	}))
}
466
benchmarks/e2e/setup_test.go
Normal file
@@ -0,0 +1,466 @@
|
||||
package e2e_bench_test
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"crypto/ed25519"
|
||||
"encoding/base64"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/alicebob/miniredis/v2"
|
||||
"github.com/beckn-one/beckn-onix/core/module"
|
||||
"github.com/beckn-one/beckn-onix/core/module/handler"
|
||||
"github.com/beckn-one/beckn-onix/pkg/model"
|
||||
"github.com/beckn-one/beckn-onix/pkg/plugin"
|
||||
"github.com/google/uuid"
|
||||
"github.com/rs/zerolog"
|
||||
"golang.org/x/crypto/blake2b"
|
||||
)
|
||||
|
||||
// Package-level references shared across all benchmarks.
|
||||
var (
|
||||
adapterServer *httptest.Server
|
||||
miniRedis *miniredis.Miniredis
|
||||
mockBPP *httptest.Server
|
||||
mockRegistry *httptest.Server
|
||||
pluginDir string
|
||||
moduleRoot string // set in TestMain; used by buildBAPCallerConfig for local file paths
|
||||
)
|
||||
|
||||
// Plugins to compile for the benchmark. Each entry is (pluginID, source path relative to module root).
|
||||
var pluginsToBuild = []struct {
|
||||
id string
|
||||
src string
|
||||
}{
|
||||
{"router", "pkg/plugin/implementation/router/cmd/plugin.go"},
|
||||
{"signer", "pkg/plugin/implementation/signer/cmd/plugin.go"},
|
||||
{"signvalidator", "pkg/plugin/implementation/signvalidator/cmd/plugin.go"},
|
||||
{"simplekeymanager", "pkg/plugin/implementation/simplekeymanager/cmd/plugin.go"},
|
||||
{"cache", "pkg/plugin/implementation/cache/cmd/plugin.go"},
|
||||
{"schemav2validator", "pkg/plugin/implementation/schemav2validator/cmd/plugin.go"},
|
||||
{"otelsetup", "pkg/plugin/implementation/otelsetup/cmd/plugin.go"},
|
||||
// registry is required by stdHandler to wire KeyManager, even on the caller
|
||||
// path where sign-validation never runs.
|
||||
{"registry", "pkg/plugin/implementation/registry/cmd/plugin.go"},
|
||||
}
|
||||
|
||||
// TestMain is the entry point for the benchmark package. It:
|
||||
// 1. Compiles all required .so plugins into a temp directory
|
||||
// 2. Starts miniredis (in-process Redis)
|
||||
// 3. Starts mock BPP and registry HTTP servers
|
||||
// 4. Starts the adapter as an httptest.Server
|
||||
// 5. Runs all benchmarks
|
||||
// 6. Tears everything down in reverse order
|
||||
func TestMain(m *testing.M) {
|
||||
ctx := context.Background()
|
||||
|
||||
// ── Step 1: Compile plugins ───────────────────────────────────────────────
|
||||
var err error
|
||||
pluginDir, err = os.MkdirTemp("", "beckn-bench-plugins-*")
|
||||
if err != nil {
|
||||
fmt.Fprintf(os.Stderr, "ERROR: failed to create plugin temp dir: %v\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
defer os.RemoveAll(pluginDir)
|
||||
|
||||
moduleRoot, err = findModuleRoot()
|
||||
if err != nil {
|
||||
fmt.Fprintf(os.Stderr, "ERROR: failed to locate module root: %v\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
fmt.Printf("=== Building plugins (first run may take 60-90s) ===\n")
|
||||
for _, p := range pluginsToBuild {
|
||||
outPath := filepath.Join(pluginDir, p.id+".so")
|
||||
srcPath := filepath.Join(moduleRoot, p.src)
|
||||
fmt.Printf(" compiling %s.so ...\n", p.id)
|
||||
cmd := exec.Command("go", "build", "-buildmode=plugin", "-o", outPath, srcPath)
|
||||
cmd.Dir = moduleRoot
|
||||
if out, buildErr := cmd.CombinedOutput(); buildErr != nil {
|
||||
fmt.Fprintf(os.Stderr, "ERROR: failed to build plugin %s:\n%s\n", p.id, string(out))
|
||||
os.Exit(1)
|
||||
}
|
||||
}
|
||||
fmt.Printf("=== All plugins compiled successfully ===\n\n")
|
||||
|
||||
// ── Step 2: Start miniredis ───────────────────────────────────────────────
|
||||
miniRedis, err = miniredis.Run()
|
||||
if err != nil {
|
||||
fmt.Fprintf(os.Stderr, "ERROR: failed to start miniredis: %v\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
defer miniRedis.Close()
|
||||
|
||||
// ── Step 3: Start mock servers ────────────────────────────────────────────
|
||||
mockBPP = startMockBPP()
|
||||
defer mockBPP.Close()
|
||||
|
||||
mockRegistry = startMockRegistry()
|
||||
defer mockRegistry.Close()
|
||||
|
||||
// ── Step 4: Start adapter ─────────────────────────────────────────────────
|
||||
adapterServer, err = startAdapter(ctx)
|
||||
if err != nil {
|
||||
fmt.Fprintf(os.Stderr, "ERROR: failed to start adapter: %v\n", err)
|
||||
os.Exit(1)
|
||||
}
|
||||
defer adapterServer.Close()
|
||||
|
||||
// ── Step 5: Run benchmarks ────────────────────────────────────────────────
|
||||
// Silence the adapter's zerolog output for the duration of the benchmark
|
||||
// run. Without this, every HTTP request the adapter processes emits a JSON
|
||||
// log line to stdout, which interleaves with Go's benchmark result lines
|
||||
// (BenchmarkFoo-N\t\t<count>\t<ns/op>) and makes benchstat unparseable.
|
||||
// Setup logging above still ran normally; zerolog.Disabled is set only here,
|
||||
// just before m.Run(), so errors during startup remain visible.
|
||||
zerolog.SetGlobalLevel(zerolog.Disabled)
|
||||
os.Exit(m.Run())
|
||||
}
|
||||
|
||||
// findModuleRoot walks up from the current directory to find the go.mod root.
|
||||
func findModuleRoot() (string, error) {
|
||||
dir, err := os.Getwd()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
for {
|
||||
if _, err := os.Stat(filepath.Join(dir, "go.mod")); err == nil {
|
||||
return dir, nil
|
||||
}
|
||||
parent := filepath.Dir(dir)
|
||||
if parent == dir {
|
||||
return "", fmt.Errorf("go.mod not found from %s", dir)
|
||||
}
|
||||
dir = parent
|
||||
}
|
||||
}
|
||||
|
||||
// writeRoutingConfig reads the benchmark routing config template, replaces the
|
||||
// BENCH_BPP_URL placeholder with the live mock BPP server URL, and writes the
|
||||
// result to a temp file. Returns the path to the temp file.
|
||||
func writeRoutingConfig(bppURL string) (string, error) {
|
||||
templatePath := filepath.Join("testdata", "routing-BAPCaller.yaml")
|
||||
data, err := os.ReadFile(templatePath)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("reading routing config template: %w", err)
|
||||
}
|
||||
content := strings.ReplaceAll(string(data), "BENCH_BPP_URL", bppURL)
|
||||
f, err := os.CreateTemp("", "bench-routing-*.yaml")
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("creating temp routing config: %w", err)
|
||||
}
|
||||
if _, err := f.WriteString(content); err != nil {
|
||||
f.Close()
|
||||
return "", fmt.Errorf("writing routing config: %w", err)
|
||||
}
|
||||
f.Close()
|
||||
return f.Name(), nil
|
||||
}
|
||||
|
||||
// startAdapter constructs a fully wired adapter using the compiled plugins and
|
||||
// returns it as an *httptest.Server. All external dependencies are replaced with
|
||||
// local mock servers: Redis → miniredis, BPP → mockBPP, registry → mockRegistry.
|
||||
func startAdapter(ctx context.Context) (*httptest.Server, error) {
|
||||
routingConfigPath, err := writeRoutingConfig(mockBPP.URL)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("writing routing config: %w", err)
|
||||
}
|
||||
|
||||
// Plugin manager: load all compiled .so files from pluginDir.
|
||||
mgr, closer, err := plugin.NewManager(ctx, &plugin.ManagerConfig{
|
||||
Root: pluginDir,
|
||||
})
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("creating plugin manager: %w", err)
|
||||
}
|
||||
_ = closer // closer is called when the server shuts down; deferred in TestMain via server.Close
|
||||
|
||||
// Build module configurations.
|
||||
mCfgs := []module.Config{
|
||||
buildBAPCallerConfig(routingConfigPath, mockRegistry.URL),
|
||||
}
|
||||
|
||||
mux := http.NewServeMux()
|
||||
if err := module.Register(ctx, mCfgs, mux, mgr); err != nil {
|
||||
return nil, fmt.Errorf("registering modules: %w", err)
|
||||
}
|
||||
|
||||
srv := httptest.NewServer(mux)
|
||||
return srv, nil
|
||||
}
|
||||
|
||||
// buildBAPCallerConfig returns the module.Config for the bapTxnCaller handler,
|
||||
// mirroring config/local-retail-bap.yaml but pointing at benchmark mock services.
|
||||
// registryURL must point at the mock registry so simplekeymanager can satisfy the
|
||||
// Registry requirement imposed by stdHandler — even though the caller path never
|
||||
// performs signature validation, the handler wiring requires it to be present.
|
||||
func buildBAPCallerConfig(routingConfigPath, registryURL string) module.Config {
|
||||
return module.Config{
|
||||
Name: "bapTxnCaller",
|
||||
Path: "/bap/caller/",
|
||||
Handler: handler.Config{
|
||||
Type: handler.HandlerTypeStd,
|
||||
Role: model.RoleBAP,
|
||||
SubscriberID: benchSubscriberID,
|
||||
HttpClientConfig: handler.HttpClientConfig{
|
||||
MaxIdleConns: 1000,
|
||||
MaxIdleConnsPerHost: 200,
|
||||
IdleConnTimeout: 300 * time.Second,
|
||||
ResponseHeaderTimeout: 5 * time.Second,
|
||||
},
|
||||
Plugins: handler.PluginCfg{
|
||||
// Registry is required by stdHandler before it will wire KeyManager,
|
||||
// even on the caller path where sign-validation never runs. We point
|
||||
// it at the mock registry (retry_max=0 so failures are immediate).
|
||||
Registry: &plugin.Config{
|
||||
ID: "registry",
|
||||
Config: map[string]string{
|
||||
"url": registryURL,
|
||||
"retry_max": "0",
|
||||
},
|
||||
},
|
||||
KeyManager: &plugin.Config{
|
||||
ID: "simplekeymanager",
|
||||
Config: map[string]string{
|
||||
"networkParticipant": benchSubscriberID,
|
||||
"keyId": benchKeyID,
|
||||
"signingPrivateKey": benchPrivKey,
|
||||
"signingPublicKey": benchPubKey,
|
||||
"encrPrivateKey": benchEncrPrivKey,
|
||||
"encrPublicKey": benchEncrPubKey,
|
||||
},
|
||||
},
|
||||
SchemaValidator: &plugin.Config{
|
||||
ID: "schemav2validator",
|
||||
Config: map[string]string{
|
||||
"type": "file",
|
||||
"location": filepath.Join(moduleRoot, "benchmarks/e2e/testdata/beckn.yaml"),
|
||||
"cacheTTL": "3600",
|
||||
},
|
||||
},
|
||||
Cache: &plugin.Config{
|
||||
ID: "cache",
|
||||
Config: map[string]string{
|
||||
"addr": miniRedis.Addr(),
|
||||
},
|
||||
},
|
||||
Router: &plugin.Config{
|
||||
ID: "router",
|
||||
Config: map[string]string{
|
||||
"routingConfig": routingConfigPath,
|
||||
},
|
||||
},
|
||||
Signer: &plugin.Config{
|
||||
ID: "signer",
|
||||
},
|
||||
},
|
||||
Steps: []string{"addRoute", "sign", "validateSchema"},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
// ── T7: Request builder and Beckn signing helper ──────────────────────────────
|
||||
|
||||
// becknPayloadTemplate holds the raw JSON for a fixture file with sentinels.
|
||||
var fixtureCache = map[string][]byte{}
|
||||
|
||||
// loadFixture reads a fixture file from testdata/ and caches it.
|
||||
func loadFixture(action string) ([]byte, error) {
|
||||
if data, ok := fixtureCache[action]; ok {
|
||||
return data, nil
|
||||
}
|
||||
path := filepath.Join("testdata", action+"_request.json")
|
||||
data, err := os.ReadFile(path)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("loading fixture %s: %w", action, err)
|
||||
}
|
||||
fixtureCache[action] = data
|
||||
return data, nil
|
||||
}
|
||||
|
||||
// buildSignedRequest reads the fixture for the given action, substitutes
|
||||
// BENCH_TIMESTAMP / BENCH_MESSAGE_ID / BENCH_TRANSACTION_ID with fresh values,
|
||||
// signs the body using the Beckn Ed25519 spec, and returns a ready-to-send
|
||||
// *http.Request targeting the adapter's /bap/caller/<action> path.
|
||||
func buildSignedRequest(tb testing.TB, action string) *http.Request {
|
||||
tb.Helper()
|
||||
|
||||
fixture, err := loadFixture(action)
|
||||
if err != nil {
|
||||
tb.Fatalf("buildSignedRequest: %v", err)
|
||||
}
|
||||
|
||||
// Substitute sentinels with fresh values for this iteration.
|
||||
now := time.Now().UTC().Format(time.RFC3339)
|
||||
msgID := uuid.New().String()
|
||||
txnID := uuid.New().String()
|
||||
|
||||
body := bytes.ReplaceAll(fixture, []byte("BENCH_TIMESTAMP"), []byte(now))
|
||||
body = bytes.ReplaceAll(body, []byte("BENCH_MESSAGE_ID"), []byte(msgID))
|
||||
body = bytes.ReplaceAll(body, []byte("BENCH_TRANSACTION_ID"), []byte(txnID))
|
||||
|
||||
// Sign the body per the Beckn Ed25519 spec.
|
||||
authHeader, err := signBecknPayload(body)
|
||||
if err != nil {
|
||||
tb.Fatalf("buildSignedRequest: signing failed: %v", err)
|
||||
}
|
||||
|
||||
url := adapterServer.URL + "/bap/caller/" + action
|
||||
req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
|
||||
if err != nil {
|
||||
tb.Fatalf("buildSignedRequest: http.NewRequest: %v", err)
|
||||
}
|
||||
req.Header.Set("Content-Type", "application/json")
|
||||
req.Header.Set(model.AuthHeaderSubscriber, authHeader)
|
||||
|
||||
return req
|
||||
}
|
||||
|
||||
// buildSignedRequestFixed builds a signed request with a fixed body (same
|
||||
// message_id every call) — used for cache-warm benchmarks.
|
||||
func buildSignedRequestFixed(tb testing.TB, action string, body []byte) *http.Request {
|
||||
tb.Helper()
|
||||
|
||||
authHeader, err := signBecknPayload(body)
|
||||
if err != nil {
|
||||
tb.Fatalf("buildSignedRequestFixed: signing failed: %v", err)
|
||||
}
|
||||
|
||||
url := adapterServer.URL + "/bap/caller/" + action
|
||||
req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
|
||||
if err != nil {
|
||||
tb.Fatalf("buildSignedRequestFixed: http.NewRequest: %v", err)
|
||||
}
|
||||
req.Header.Set("Content-Type", "application/json")
|
||||
req.Header.Set(model.AuthHeaderSubscriber, authHeader)
|
||||
return req
|
||||
}
|
||||
|
||||
// signBecknPayload signs a request body using the Beckn Ed25519 signing spec
// and returns a formatted Authorization header value.
//
// Beckn signing spec:
//  1. Digest: "BLAKE-512=" + base64(blake2b-512(body))
//  2. Signing string: "(created): <ts>\n(expires): <ts+5m>\ndigest: <digest>"
//  3. Signature: base64(ed25519.Sign(privKey, signingString))
//  4. Header: Signature keyId="<sub>|<keyId>|ed25519",algorithm="ed25519",
//     created="<ts>",expires="<ts+5m>",headers="(created) (expires) digest",
//     signature="<sig>"
//
// Reference: pkg/plugin/implementation/signer/signer.go
func signBecknPayload(body []byte) (string, error) {
	createdAt := time.Now().Unix()
	expiresAt := time.Now().Add(5 * time.Minute).Unix()

	// Step 1: BLAKE-512 digest. (blake2b.New512 never errors with a nil key.)
	hasher, _ := blake2b.New512(nil)
	hasher.Write(body)
	digest := "BLAKE-512=" + base64.StdEncoding.EncodeToString(hasher.Sum(nil))

	// Step 2: Signing string.
	signingString := fmt.Sprintf("(created): %d\n(expires): %d\ndigest: %s", createdAt, expiresAt, digest)

	// Step 3: Ed25519 signature.
	privKeyBytes, err := base64.StdEncoding.DecodeString(benchPrivKey)
	if err != nil {
		return "", fmt.Errorf("decoding private key: %w", err)
	}
	privKey := ed25519.NewKeyFromSeed(privKeyBytes)
	sig := base64.StdEncoding.EncodeToString(ed25519.Sign(privKey, []byte(signingString)))

	// Step 4: Format Authorization header (matches generateAuthHeader in step.go).
	header := fmt.Sprintf(
		`Signature keyId="%s|%s|ed25519",algorithm="ed25519",created="%d",expires="%d",headers="(created) (expires) digest",signature="%s"`,
		benchSubscriberID, benchKeyID, createdAt, expiresAt, sig,
	)
	return header, nil
}

// warmFixtureBody returns a fixed body for the given action with stable IDs —
// used to pre-warm the cache so cache-warm benchmarks hit the Redis fast path.
func warmFixtureBody(tb testing.TB, action string) []byte {
	tb.Helper()
	fixture, err := loadFixture(action)
	if err != nil {
		tb.Fatalf("warmFixtureBody: %v", err)
	}
	body := bytes.ReplaceAll(fixture, []byte("BENCH_TIMESTAMP"), []byte("2025-01-01T00:00:00Z"))
	body = bytes.ReplaceAll(body, []byte("BENCH_MESSAGE_ID"), []byte("00000000-warm-0000-0000-000000000000"))
	body = bytes.ReplaceAll(body, []byte("BENCH_TRANSACTION_ID"), []byte("00000000-warm-txn-0000-000000000000"))
	return body
}

// sendRequest executes an HTTP request using the shared bench client and
// discards the response body. Returns a non-nil error for non-2xx responses.
func sendRequest(req *http.Request) error {
	resp, err := benchHTTPClient.Do(req)
	if err != nil {
		return fmt.Errorf("http do: %w", err)
	}
	defer resp.Body.Close()
	// Drain the body so the connection is returned to the pool for reuse.
	// Without this, Go discards the connection after each request, causing
	// port exhaustion under parallel load ("can't assign requested address").
	_, _ = io.Copy(io.Discard, resp.Body)
	// We accept any 2xx response (ACK or forwarded BPP response).
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("unexpected status: %d", resp.StatusCode)
	}
	return nil
}

// ── TestSignBecknPayload: validation test before running benchmarks ───────────
// Sends a signed discover request to the live adapter and asserts a 200 response,
// confirming the signing helper produces headers accepted by the adapter pipeline.
func TestSignBecknPayload(t *testing.T) {
	if adapterServer == nil {
		t.Skip("adapterServer not initialised (run via TestMain)")
	}
	fixture, err := loadFixture("discover")
	if err != nil {
		t.Fatalf("loading fixture: %v", err)
	}

	// Substitute sentinels.
	now := time.Now().UTC().Format(time.RFC3339)
	body := bytes.ReplaceAll(fixture, []byte("BENCH_TIMESTAMP"), []byte(now))
	body = bytes.ReplaceAll(body, []byte("BENCH_MESSAGE_ID"), []byte(uuid.New().String()))
	body = bytes.ReplaceAll(body, []byte("BENCH_TRANSACTION_ID"), []byte(uuid.New().String()))

	authHeader, err := signBecknPayload(body)
	if err != nil {
		t.Fatalf("signBecknPayload: %v", err)
	}

	url := adapterServer.URL + "/bap/caller/discover"
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		t.Fatalf("http.NewRequest: %v", err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set(model.AuthHeaderSubscriber, authHeader)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		t.Fatalf("sending request: %v", err)
	}
	defer resp.Body.Close()

	var result map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		t.Logf("decoding response body: %v", err)
	}
	t.Logf("Response status: %d, body: %v", resp.StatusCode, result)

	if resp.StatusCode != http.StatusOK {
		t.Errorf("expected 200 OK, got %d", resp.StatusCode)
	}
}
3380  benchmarks/e2e/testdata/beckn.yaml (vendored, new file)
File diff suppressed because it is too large.
84  benchmarks/e2e/testdata/confirm_request.json (vendored, new file)
@@ -0,0 +1,84 @@
{
  "context": {
    "action": "confirm",
    "bapId": "sandbox.food-finder.com",
    "bapUri": "http://bench-bap.example.com",
    "bppId": "bench-bpp.example.com",
    "bppUri": "BENCH_BPP_URL",
    "messageId": "BENCH_MESSAGE_ID",
    "transactionId": "BENCH_TRANSACTION_ID",
    "timestamp": "BENCH_TIMESTAMP",
    "ttl": "PT30S",
    "version": "2.0.0"
  },
  "message": {
    "order": {
      "provider": {
        "id": "bench-provider-001"
      },
      "items": [
        {
          "id": "bench-item-001",
          "quantity": {
            "selected": {
              "count": 1
            }
          }
        }
      ],
      "billing": {
        "name": "Bench User",
        "address": "123 Bench Street, Bangalore, 560001",
        "city": {
          "name": "Bangalore"
        },
        "state": {
          "name": "Karnataka"
        },
        "country": {
          "code": "IND"
        },
        "area_code": "560001",
        "email": "bench@example.com",
        "phone": "9999999999"
      },
      "fulfillments": [
        {
          "id": "f1",
          "type": "Delivery",
          "stops": [
            {
              "type": "end",
              "location": {
                "gps": "12.9716,77.5946",
                "area_code": "560001"
              },
              "contact": {
                "phone": "9999999999",
                "email": "bench@example.com"
              }
            }
          ],
          "customer": {
            "person": {
              "name": "Bench User"
            },
            "contact": {
              "phone": "9999999999",
              "email": "bench@example.com"
            }
          }
        }
      ],
      "payments": [
        {
          "type": "ON-FULFILLMENT",
          "params": {
            "amount": "150.00",
            "currency": "INR"
          }
        }
      ]
    }
  }
}
17  benchmarks/e2e/testdata/discover_request.json (vendored, new file)
@@ -0,0 +1,17 @@
{
  "context": {
    "action": "discover",
    "bapId": "sandbox.food-finder.com",
    "bapUri": "http://bench-bap.example.com",
    "messageId": "BENCH_MESSAGE_ID",
    "transactionId": "BENCH_TRANSACTION_ID",
    "timestamp": "BENCH_TIMESTAMP",
    "ttl": "PT30S",
    "version": "2.0.0"
  },
  "message": {
    "intent": {
      "textSearch": "pizza"
    }
  }
}
80  benchmarks/e2e/testdata/init_request.json (vendored, new file)
@@ -0,0 +1,80 @@
{
  "context": {
    "action": "init",
    "bapId": "sandbox.food-finder.com",
    "bapUri": "http://bench-bap.example.com",
    "bppId": "bench-bpp.example.com",
    "bppUri": "BENCH_BPP_URL",
    "messageId": "BENCH_MESSAGE_ID",
    "transactionId": "BENCH_TRANSACTION_ID",
    "timestamp": "BENCH_TIMESTAMP",
    "ttl": "PT30S",
    "version": "2.0.0"
  },
  "message": {
    "order": {
      "provider": {
        "id": "bench-provider-001"
      },
      "items": [
        {
          "id": "bench-item-001",
          "quantity": {
            "selected": {
              "count": 1
            }
          }
        }
      ],
      "billing": {
        "name": "Bench User",
        "address": "123 Bench Street, Bangalore, 560001",
        "city": {
          "name": "Bangalore"
        },
        "state": {
          "name": "Karnataka"
        },
        "country": {
          "code": "IND"
        },
        "area_code": "560001",
        "email": "bench@example.com",
        "phone": "9999999999"
      },
      "fulfillments": [
        {
          "id": "f1",
          "type": "Delivery",
          "stops": [
            {
              "type": "end",
              "location": {
                "gps": "12.9716,77.5946",
                "area_code": "560001"
              },
              "contact": {
                "phone": "9999999999",
                "email": "bench@example.com"
              }
            }
          ],
          "customer": {
            "person": {
              "name": "Bench User"
            },
            "contact": {
              "phone": "9999999999",
              "email": "bench@example.com"
            }
          }
        }
      ],
      "payments": [
        {
          "type": "ON-FULFILLMENT"
        }
      ]
    }
  }
}
13  benchmarks/e2e/testdata/routing-BAPCaller.yaml (vendored, new file)
@@ -0,0 +1,13 @@
# Routing config for v2.0.0 benchmark. Domain is not required for v2.x.x — the
# router ignores it and routes purely by version + endpoint.
# BENCH_BPP_URL is substituted at runtime with the mock BPP server URL.
routingRules:
  - version: "2.0.0"
    targetType: "url"
    target:
      url: "BENCH_BPP_URL"
    endpoints:
      - discover
      - select
      - init
      - confirm
55  benchmarks/e2e/testdata/select_request.json (vendored, new file)
@@ -0,0 +1,55 @@
{
  "context": {
    "action": "select",
    "bapId": "sandbox.food-finder.com",
    "bapUri": "http://bench-bap.example.com",
    "bppId": "bench-bpp.example.com",
    "bppUri": "BENCH_BPP_URL",
    "messageId": "BENCH_MESSAGE_ID",
    "transactionId": "BENCH_TRANSACTION_ID",
    "timestamp": "BENCH_TIMESTAMP",
    "ttl": "PT30S",
    "version": "2.0.0"
  },
  "message": {
    "order": {
      "provider": {
        "id": "bench-provider-001"
      },
      "items": [
        {
          "id": "bench-item-001",
          "quantity": {
            "selected": {
              "count": 1
            }
          }
        }
      ],
      "fulfillments": [
        {
          "id": "f1",
          "type": "Delivery",
          "stops": [
            {
              "type": "end",
              "location": {
                "gps": "12.9716,77.5946",
                "area_code": "560001"
              },
              "contact": {
                "phone": "9999999999",
                "email": "bench@example.com"
              }
            }
          ]
        }
      ],
      "payments": [
        {
          "type": "ON-FULFILLMENT"
        }
      ]
    }
  }
}
145  benchmarks/run_benchmarks.sh (new executable file)
@@ -0,0 +1,145 @@
#!/usr/bin/env bash
# =============================================================================
# run_benchmarks.sh — beckn-onix adapter benchmark runner
#
# Usage:
#   cd beckn-onix
#   bash benchmarks/run_benchmarks.sh
#
# Requirements:
#   - Go 1.24+ installed
#   - benchstat is declared as a tool in go.mod; invoked via "go tool benchstat"
#
# Output:
#   benchmarks/results/<YYYY-MM-DD_HH-MM-SS>/
#     run1.txt, run2.txt, run3.txt      — raw go test -bench output
#     parallel_cpu1.txt ... cpu16.txt   — concurrency sweep
#     benchstat_summary.txt             — statistical aggregation
# =============================================================================
set -euo pipefail

REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
RESULTS_DIR="$REPO_ROOT/benchmarks/results/$(date +%Y-%m-%d_%H-%M-%S)"
BENCH_PKG="./benchmarks/e2e/..."
BENCH_TIMEOUT="10m"
# ── Benchmark durations: these are the full-run values; lower them (e.g. to 1s)
# for a quick smoke test ───────────────────────────────────────────────────────
BENCH_TIME_SERIAL="10s"
BENCH_TIME_PARALLEL="30s"
BENCH_COUNT=1   # keep at 1; benchstat aggregates across the 3 serial files

cd "$REPO_ROOT"

# ── benchstat is declared as a go tool in go.mod; no separate install needed ──
# Use: go tool benchstat (works anywhere without PATH changes)

# bench_filter: tee full output to the .log file for debugging, and write a
# clean copy (only benchstat-parseable lines) to the .txt file.
# The adapter logger is silenced via zerolog.SetGlobalLevel(zerolog.Disabled)
# in TestMain, so stdout should already be clean; the grep is a safety net for
# any stray lines from go test itself (build output, redis warnings, etc.).
bench_filter() {
  local txt="$1" log="$2"
  tee "$log" | grep -E "^(Benchmark|goos:|goarch:|pkg:|cpu:|ok |PASS|FAIL|--- )" > "$txt" || true
}

# ── Create results directory ──────────────────────────────────────────────────
mkdir -p "$RESULTS_DIR"
echo "=== beckn-onix Benchmark Runner ==="
echo "Results dir : $RESULTS_DIR"
echo "Package     : $BENCH_PKG"
echo ""

# ── Serial runs (3x for benchstat stability) ──────────────────────────────────
echo "Running serial benchmarks (3 runs × ${BENCH_TIME_SERIAL})..."
for run in 1 2 3; do
  echo "  Run $run/3..."
  go test \
    -timeout="$BENCH_TIMEOUT" \
    -run=^$ \
    -bench="." \
    -benchtime="$BENCH_TIME_SERIAL" \
    -benchmem \
    -count="$BENCH_COUNT" \
    "$BENCH_PKG" 2>&1 | bench_filter "$RESULTS_DIR/run${run}.txt" "$RESULTS_DIR/run${run}.log"
  echo "  Saved → $RESULTS_DIR/run${run}.txt (full log → run${run}.log)"
done
echo ""

# ── Concurrency sweep ─────────────────────────────────────────────────────────
echo "Running parallel concurrency sweep (cpu=1,2,4,8,16; ${BENCH_TIME_PARALLEL} each)..."
for cpu in 1 2 4 8 16; do
  echo "  GOMAXPROCS=$cpu..."
  go test \
    -timeout="$BENCH_TIMEOUT" \
    -run=^$ \
    -bench="BenchmarkBAPCaller_Discover_Parallel|BenchmarkBAPCaller_RPS" \
    -benchtime="$BENCH_TIME_PARALLEL" \
    -benchmem \
    -cpu="$cpu" \
    -count=1 \
    "$BENCH_PKG" 2>&1 | bench_filter "$RESULTS_DIR/parallel_cpu${cpu}.txt" "$RESULTS_DIR/parallel_cpu${cpu}.log"
  echo "  Saved → $RESULTS_DIR/parallel_cpu${cpu}.txt (full log → parallel_cpu${cpu}.log)"
done
echo ""

# ── Percentile benchmark ──────────────────────────────────────────────────────
echo "Running percentile benchmark (${BENCH_TIME_SERIAL})..."
go test \
  -timeout="$BENCH_TIMEOUT" \
  -run=^$ \
  -bench="BenchmarkBAPCaller_Discover_Percentiles" \
  -benchtime="$BENCH_TIME_SERIAL" \
  -benchmem \
  -count=1 \
  "$BENCH_PKG" 2>&1 | bench_filter "$RESULTS_DIR/percentiles.txt" "$RESULTS_DIR/percentiles.log"
echo "  Saved → $RESULTS_DIR/percentiles.txt (full log → percentiles.log)"
echo ""

# ── Cache comparison ──────────────────────────────────────────────────────────
echo "Running cache warm vs cold comparison..."
go test \
  -timeout="$BENCH_TIMEOUT" \
  -run=^$ \
  -bench="BenchmarkBAPCaller_Cache" \
  -benchtime="$BENCH_TIME_SERIAL" \
  -benchmem \
  -count=1 \
  "$BENCH_PKG" 2>&1 | bench_filter "$RESULTS_DIR/cache_comparison.txt" "$RESULTS_DIR/cache_comparison.log"
echo "  Saved → $RESULTS_DIR/cache_comparison.txt (full log → cache_comparison.log)"
echo ""

# ── benchstat statistical summary ─────────────────────────────────────────────
echo "Running benchstat statistical analysis..."
go tool benchstat \
  "$RESULTS_DIR/run1.txt" \
  "$RESULTS_DIR/run2.txt" \
  "$RESULTS_DIR/run3.txt" \
  > "$RESULTS_DIR/benchstat_summary.txt" 2>&1
echo "  Saved → $RESULTS_DIR/benchstat_summary.txt"
echo ""

# ── Parse results to CSV (optional step; a failure here does not abort) ───────
echo "Parsing results to CSV..."
go run benchmarks/tools/parse_results.go \
  -dir="$RESULTS_DIR" \
  -out="$RESULTS_DIR" 2>&1 || echo "  (parse_results.go failed; skipping CSV step)"

# ── Summary ───────────────────────────────────────────────────────────────────
echo ""
echo "========================================"
echo "✅ Benchmark run complete!"
echo ""
echo "Results written to:"
echo "  $RESULTS_DIR"
echo ""
echo "Key files:"
echo "  benchstat_summary.txt  — statistical analysis of 3 serial runs"
echo "  parallel_cpu*.txt      — concurrency sweep results"
echo "  percentiles.txt        — p50/p95/p99 latency data"
echo "  cache_comparison.txt   — warm vs cold Redis cache comparison"
echo ""
echo "To view benchstat summary:"
echo "  cat $RESULTS_DIR/benchstat_summary.txt"
echo "========================================"
258  benchmarks/tools/parse_results.go (new file)
@@ -0,0 +1,258 @@
// parse_results.go — Parses raw go test -bench output from the benchmark results
// directory and produces two CSV files for analysis and reporting.
//
// Usage:
//
//	go run benchmarks/tools/parse_results.go \
//	  -dir=benchmarks/results/<timestamp>/ \
//	  -out=benchmarks/results/<timestamp>/
//
// Output files:
//
//	latency_report.csv    — per-benchmark mean, p50, p95, p99 latency, allocs
//	throughput_report.csv — RPS at each GOMAXPROCS level from the parallel sweep
package main

import (
	"bufio"
	"encoding/csv"
	"flag"
	"fmt"
	"os"
	"path/filepath"
	"regexp"
	"strconv"
	"strings"
)

var (
	// Matches standard go bench output:
	//   BenchmarkFoo-8   1000   1234567 ns/op   1234 B/op   56 allocs/op
	benchLineRe = regexp.MustCompile(
		`^(Benchmark\S+)\s+\d+\s+([\d.]+)\s+ns/op` +
			`(?:\s+([\d.]+)\s+B/op)?` +
			`(?:\s+([\d.]+)\s+allocs/op)?` +
			`(?:\s+([\d.]+)\s+p50_µs)?` +
			`(?:\s+([\d.]+)\s+p95_µs)?` +
			`(?:\s+([\d.]+)\s+p99_µs)?` +
			`(?:\s+([\d.]+)\s+req/s)?`,
	)

	// Matches custom metric lines in percentile output.
	metricRe = regexp.MustCompile(`([\d.]+)\s+(p50_µs|p95_µs|p99_µs|req/s)`)
)

type benchResult struct {
	name     string
	nsPerOp  float64
	bytesOp  float64
	allocsOp float64
	p50      float64
	p95      float64
	p99      float64
	rps      float64
}

// cpuResult pairs a GOMAXPROCS value with a benchmark result from the parallel sweep.
type cpuResult struct {
	cpu int
	res benchResult
}

func main() {
	dir := flag.String("dir", ".", "Directory containing benchmark result files")
	out := flag.String("out", ".", "Output directory for CSV files")
	flag.Parse()

	if err := os.MkdirAll(*out, 0o755); err != nil {
		fmt.Fprintf(os.Stderr, "ERROR creating output dir: %v\n", err)
		os.Exit(1)
	}

	// ── Parse serial runs (run1.txt, run2.txt, run3.txt) ─────────────────────
	var latencyResults []benchResult
	for _, runFile := range []string{"run1.txt", "run2.txt", "run3.txt"} {
		path := filepath.Join(*dir, runFile)
		results, err := parseRunFile(path)
		if err != nil {
			fmt.Fprintf(os.Stderr, "WARNING: could not parse %s: %v\n", runFile, err)
			continue
		}
		latencyResults = append(latencyResults, results...)
	}

	// Also parse percentiles file for p50/p95/p99.
	percPath := filepath.Join(*dir, "percentiles.txt")
	if percResults, err := parseRunFile(percPath); err == nil {
		latencyResults = append(latencyResults, percResults...)
	}

	if err := writeLatencyCSV(filepath.Join(*out, "latency_report.csv"), latencyResults); err != nil {
		fmt.Fprintf(os.Stderr, "ERROR writing latency CSV: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("Written: %s\n", filepath.Join(*out, "latency_report.csv"))

	// ── Parse parallel sweep (parallel_cpu*.txt) ──────────────────────────────
	var throughputRows []cpuResult

	for _, cpu := range []int{1, 2, 4, 8, 16} {
		path := filepath.Join(*dir, fmt.Sprintf("parallel_cpu%d.txt", cpu))
		results, err := parseRunFile(path)
		if err != nil {
			fmt.Fprintf(os.Stderr, "WARNING: could not parse parallel_cpu%d.txt: %v\n", cpu, err)
			continue
		}
		for _, r := range results {
			throughputRows = append(throughputRows, cpuResult{cpu: cpu, res: r})
		}
	}

	if err := writeThroughputCSV(filepath.Join(*out, "throughput_report.csv"), throughputRows); err != nil {
		fmt.Fprintf(os.Stderr, "ERROR writing throughput CSV: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("Written: %s\n", filepath.Join(*out, "throughput_report.csv"))
}

// parseRunFile reads a go test -bench output file and returns all benchmark results.
func parseRunFile(path string) ([]benchResult, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var results []benchResult
	currentBench := ""

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())

		// Main benchmark line.
		if m := benchLineRe.FindStringSubmatch(line); m != nil {
			r := benchResult{name: stripCPUSuffix(m[1])}
			r.nsPerOp = parseFloat(m[2])
			r.bytesOp = parseFloat(m[3])
			r.allocsOp = parseFloat(m[4])
			r.p50 = parseFloat(m[5])
			r.p95 = parseFloat(m[6])
			r.p99 = parseFloat(m[7])
			r.rps = parseFloat(m[8])
			results = append(results, r)
			currentBench = r.name
			continue
		}

		// Custom metric lines (e.g., "123.4 p50_µs").
		if currentBench != "" {
			for _, mm := range metricRe.FindAllStringSubmatch(line, -1) {
				val := parseFloat(mm[1])
				metric := mm[2]
				for i := range results {
					if results[i].name == currentBench {
						switch metric {
						case "p50_µs":
							results[i].p50 = val
						case "p95_µs":
							results[i].p95 = val
						case "p99_µs":
							results[i].p99 = val
						case "req/s":
							results[i].rps = val
						}
					}
				}
			}
		}
	}
	return results, scanner.Err()
}

func writeLatencyCSV(path string, results []benchResult) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	w := csv.NewWriter(f)
	defer w.Flush()

	header := []string{"benchmark", "mean_ms", "p50_µs", "p95_µs", "p99_µs", "allocs_op", "bytes_op"}
	if err := w.Write(header); err != nil {
		return err
	}

	for _, r := range results {
		row := []string{
			r.name,
			fmtFloat(r.nsPerOp / 1e6), // ns/op → ms
			fmtFloat(r.p50),
			fmtFloat(r.p95),
			fmtFloat(r.p99),
			fmtFloat(r.allocsOp),
			fmtFloat(r.bytesOp),
		}
		if err := w.Write(row); err != nil {
			return err
		}
	}
	return nil
}

func writeThroughputCSV(path string, rows []cpuResult) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	w := csv.NewWriter(f)
	defer w.Flush()

	header := []string{"gomaxprocs", "benchmark", "rps", "mean_latency_ms", "p95_latency_ms"}
	if err := w.Write(header); err != nil {
		return err
	}

	for _, row := range rows {
		r := []string{
			strconv.Itoa(row.cpu),
			row.res.name,
			fmtFloat(row.res.rps),
			fmtFloat(row.res.nsPerOp / 1e6),
			fmtFloat(row.res.p95),
		}
		if err := w.Write(r); err != nil {
			return err
		}
	}
	return nil
}

// stripCPUSuffix removes trailing "-N" goroutine count suffixes from benchmark names.
func stripCPUSuffix(name string) string {
	if idx := strings.LastIndex(name, "-"); idx > 0 {
		if _, err := strconv.Atoi(name[idx+1:]); err == nil {
			return name[:idx]
		}
	}
	return name
}

func parseFloat(s string) float64 {
	if s == "" {
		return 0
	}
	v, _ := strconv.ParseFloat(s, 64)
	return v
}

func fmtFloat(v float64) string {
	if v == 0 {
		return ""
	}
	return strconv.FormatFloat(v, 'f', 3, 64)
}