generate_report.go:
- buildInterpretation: derives the narrative from the p99/p50 tail-latency ratio,
the per-action complexity trend (% increase vs. the discover baseline),
concurrency scaling efficiency (GOMAXPROCS=1 vs. 16), and the cache warm/cold delta
- buildRecommendation: identifies the best throughput/cost GOMAXPROCS level
from scaling efficiency and adds production sizing guidance
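The two helpers above could be sketched roughly as follows. The `Metrics` struct, the threshold values, and the throughput map shape are all illustrative assumptions, not the generator's actual types:

```go
package main

import "fmt"

// Metrics is a hypothetical aggregate of the parsed benchmark results;
// the field names here are assumptions for illustration.
type Metrics struct {
	P50Ms, P99Ms float64 // latency percentiles in milliseconds
}

// buildInterpretation derives a short narrative from the p99/p50 tail
// ratio; the cutoffs (2x, 5x) are assumed, not taken from the tool.
func buildInterpretation(m Metrics) string {
	ratio := m.P99Ms / m.P50Ms
	switch {
	case ratio < 2:
		return fmt.Sprintf("Tail latency is tight (p99/p50 = %.1fx): performance is predictable.", ratio)
	case ratio < 5:
		return fmt.Sprintf("Moderate tail (p99/p50 = %.1fx): occasional slow requests.", ratio)
	default:
		return fmt.Sprintf("Heavy tail (p99/p50 = %.1fx): investigate outliers.", ratio)
	}
}

// buildRecommendation picks the GOMAXPROCS level with the best
// throughput per core; throughputByProcs maps GOMAXPROCS -> ops/sec.
func buildRecommendation(throughputByProcs map[int]float64) string {
	best, bestPerCore := 0, 0.0
	for procs, ops := range throughputByProcs {
		if perCore := ops / float64(procs); perCore > bestPerCore {
			best, bestPerCore = procs, perCore
		}
	}
	return fmt.Sprintf("GOMAXPROCS=%d gives the best throughput per core (%.0f ops/s/core).", best, bestPerCore)
}

func main() {
	fmt.Println(buildInterpretation(Metrics{P50Ms: 10, P99Ms: 15}))
	fmt.Println(buildRecommendation(map[int]float64{1: 1000, 16: 8000}))
}
```

The per-core normalization is one way to express "throughput/cost"; the real implementation may weight cost differently.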
run_benchmarks.sh:
- Add a -report-only <dir> flag: re-runs parse_results.go and generate_report.go
against an existing results directory without rerunning the benchmarks
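A minimal sketch of how the flag could be handled in run_benchmarks.sh; the tool paths under `tools/` are assumptions based on the file list in this change:

```shell
#!/usr/bin/env bash
set -euo pipefail

# parse_report_only echoes the results dir when "-report-only <dir>" is
# passed, and nothing otherwise.
parse_report_only() {
  if [[ "${1:-}" == "-report-only" && -n "${2:-}" ]]; then
    printf '%s\n' "$2"
  fi
}

report_dir="$(parse_report_only "$@")"
if [[ -n "$report_dir" ]]; then
  # Regenerate reports from existing results; skip the benchmark runs.
  go run tools/parse_results.go "$report_dir"
  go run tools/generate_report.go "$report_dir"
  exit 0
fi
```

Keeping the flag parsing in a small function makes the "report-only" early exit easy to test in isolation.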
REPORT_TEMPLATE.md:
- Replace manual placeholders with __INTERPRETATION__ and __RECOMMENDATION__
markers filled by the generator
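The marker convention in the template might look like the fragment below; `__INTERPRETATION__` and `__RECOMMENDATION__` come from this change, while the latency marker names are assumed for illustration:

```markdown
## Latency

| Metric | Value |
|--------|-------|
| p50    | __P50_MS__ ms |
| p99    | __P99_MS__ ms |

__INTERPRETATION__

## Recommendation

__RECOMMENDATION__
```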
- Add benchmarks/reports/REPORT_TEMPLATE.md — template with __MARKER__
placeholders for all auto-populated fields (latency, throughput,
percentiles, cache delta, benchstat block, environment, ONIX version)
- Add benchmarks/tools/generate_report.go — reads latency_report.csv,
throughput_report.csv, benchstat_summary.txt and run1.txt metadata,
fills the template, and writes BENCHMARK_REPORT.md to the results dir.
ONIX version sourced from the latest git tag (falls back to 'dev').
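The template fill and the git-tag fallback described above could be sketched like this; `fillTemplate` and the `ONIX_VERSION` marker name are hypothetical helpers, not the tool's actual API:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// onixVersion returns the latest git tag, falling back to "dev" when
// no tag is available (e.g. outside a git checkout).
func onixVersion() string {
	out, err := exec.Command("git", "describe", "--tags", "--abbrev=0").Output()
	v := strings.TrimSpace(string(out))
	if err != nil || v == "" {
		return "dev"
	}
	return v
}

// fillTemplate replaces __MARKER__ placeholders in the template with
// the supplied values; marker names are an assumed convention.
func fillTemplate(tmpl string, values map[string]string) string {
	pairs := make([]string, 0, 2*len(values))
	for marker, v := range values {
		pairs = append(pairs, "__"+marker+"__", v)
	}
	return strings.NewReplacer(pairs...).Replace(tmpl)
}

func main() {
	fmt.Println(fillTemplate("ONIX version: __ONIX_VERSION__",
		map[string]string{"ONIX_VERSION": onixVersion()}))
}
```

`strings.NewReplacer` handles all markers in a single pass, which avoids one replacement accidentally matching text produced by another.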
- Update run_benchmarks.sh to call generate_report.go after parse_results.go;
also derive ONIX_VERSION from the latest git tag and pass it to the generator
- Update the README and directory layout to reflect the new files and workflow