4 Commits

Author SHA1 Message Date
Mayuresh
e0d7e3508f feat(benchmarks): auto-generate Interpretation/Recommendation and add -report-only flag
generate_report.go:
- buildInterpretation: derives narrative from p99/p50 tail-latency ratio,
  per-action complexity trend (% increase vs discover baseline), concurrency
  scaling efficiency (GOMAXPROCS=1 vs 16), and cache warm/cold delta
- buildRecommendation: identifies the best throughput/cost GOMAXPROCS level
  from scaling efficiency and adds production sizing guidance
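The derivation described above can be sketched roughly as follows. The helper names, thresholds, and narrative strings here are illustrative assumptions, not the actual buildInterpretation code:

```go
package main

import "fmt"

// classifyTail turns a p99/p50 ratio into a one-line narrative fragment.
// Thresholds are illustrative; the real generator may use different cutoffs.
func classifyTail(p50, p99 float64) string {
	ratio := p99 / p50
	switch {
	case ratio < 2:
		return fmt.Sprintf("tail latency is tight (p99/p50 = %.1fx)", ratio)
	case ratio < 5:
		return fmt.Sprintf("moderate tail amplification (p99/p50 = %.1fx)", ratio)
	default:
		return fmt.Sprintf("heavy tail (p99/p50 = %.1fx); investigate GC or lock contention", ratio)
	}
}

// scalingEfficiency compares throughput at GOMAXPROCS=16 against 16x the
// GOMAXPROCS=1 baseline; 1.0 would mean perfectly linear scaling.
func scalingEfficiency(rps1, rps16 float64) float64 {
	return rps16 / (rps1 * 16)
}

func main() {
	fmt.Println(classifyTail(120, 900))
	fmt.Printf("efficiency: %.2f\n", scalingEfficiency(1000, 12000))
}
```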

run_benchmarks.sh:
- Add -report-only <dir> flag: re-runs parse_results.go + generate_report.go
  against an existing results directory without rerunning benchmarks

REPORT_TEMPLATE.md:
- Replace manual placeholders with __INTERPRETATION__ and __RECOMMENDATION__
  markers filled by the generator
2026-04-09 22:34:52 +05:30
Mayuresh
e6accc3f26 feat(benchmarks): auto-generate BENCHMARK_REPORT.md at end of run
- Add benchmarks/reports/REPORT_TEMPLATE.md — template with __MARKER__
  placeholders for all auto-populated fields (latency, throughput,
  percentiles, cache delta, benchstat block, environment, ONIX version)

- Add benchmarks/tools/generate_report.go — reads latency_report.csv,
  throughput_report.csv, benchstat_summary.txt and run1.txt metadata,
  fills the template, and writes BENCHMARK_REPORT.md to the results dir.
  ONIX version sourced from the latest git tag (falls back to 'dev').
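Sourcing a version from the latest tag with a fallback might look like this sketch; `git describe --tags --abbrev=0` is an assumption, since the commit does not show the exact command used:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// onixVersion returns the most recent git tag, or "dev" when the
// repository has no tags (or git is unavailable).
func onixVersion() string {
	out, err := exec.Command("git", "describe", "--tags", "--abbrev=0").Output()
	if err != nil {
		return "dev"
	}
	v := strings.TrimSpace(string(out))
	if v == "" {
		return "dev"
	}
	return v
}

func main() {
	fmt.Println(onixVersion())
}
```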

- Update run_benchmarks.sh to call generate_report.go after parse_results.go;
  also derive ONIX_VERSION from git tag and pass to generator

- Update README and directory layout to reflect new files and workflow
2026-04-09 22:01:56 +05:30
Mayuresh
23e39722d2 fix(benchmarks): fix three parsing bugs in parse_results.go and bench_test.go
- parse_results.go: fix metric extraction order — Go outputs custom metrics
  (p50_µs, p95_µs, p99_µs, req/s) BEFORE B/op and allocs/op on the benchmark
  line. The old positional regex had B/op first, so p50/p95/p99 were always
  empty in latency_report.csv. Replaced with separate regexps for each field
  so order no longer matters.
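The order-independent approach can be sketched as one regexp per field; these patterns are illustrative and not the parser's actual expressions:

```go
package main

import (
	"fmt"
	"regexp"
)

// One regexp per metric: extraction no longer depends on the order in
// which fields appear on the benchmark output line.
var (
	reP50 = regexp.MustCompile(`([\d.]+) p50_µs`)
	reP99 = regexp.MustCompile(`([\d.]+) p99_µs`)
	reBop = regexp.MustCompile(`([\d.]+) B/op`)
)

// extract returns the captured value, or "" when the field is absent
// (e.g. parallel sweep lines that carry no percentile metrics).
func extract(re *regexp.Regexp, line string) string {
	if m := re.FindStringSubmatch(line); m != nil {
		return m[1]
	}
	return ""
}

func main() {
	// Custom metrics precede B/op on Go's benchmark line, as noted above.
	line := "BenchmarkSearch-16  1000  1200 ns/op  85.0 p50_µs  140 p99_µs  4096 B/op  12 allocs/op"
	fmt.Println(extract(reP50, line), extract(reP99, line), extract(reBop, line))
}
```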

- parse_results.go: remove p95_latency_ms column from throughput_report.csv —
  parallel sweep files only emit ns/op and req/s, never p95 data. The column
  was structurally always empty.

- bench_test.go: remove fmt.Printf from BenchmarkBAPCaller_RPS — the debug
  print interleaved with Go's own benchmark output line, garbling the result
  to 'BenchmarkRPS-N  RPS: N over Ns', a form the result parser could not
  read, so req/s never appeared in the structured output. b.ReportMetric
  alone is sufficient.
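The resulting pattern (report through the framework, never print) can be sketched as follows; the benchmark body here is a stand-in, not the real BenchmarkBAPCaller_RPS:

```go
package main

import (
	"fmt"
	"testing"
	"time"
)

// benchRPS reports requests/sec via b.ReportMetric only; a fmt.Printf here
// would interleave with the framework's own output line and corrupt it.
func benchRPS(b *testing.B) {
	start := time.Now()
	for i := 0; i < b.N; i++ {
		time.Sleep(time.Microsecond) // simulate one request
	}
	elapsed := time.Since(start).Seconds()
	b.ReportMetric(float64(b.N)/elapsed, "req/s")
}

func main() {
	// testing.Benchmark lets us exercise the benchmark outside `go test`;
	// custom metrics land in BenchmarkResult.Extra.
	res := testing.Benchmark(benchRPS)
	fmt.Printf("req/s metric present: %v\n", res.Extra["req/s"] > 0)
}
```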
2026-04-09 17:01:13 +05:30
Mayuresh
bccb381bfa scripts to run benchmarks
2026-04-09 12:07:55 +05:30