v0.7.0 — visual compare + report exports

Turn MySQL ROW binlogs into incident-ready reports

BinlogViz helps DBAs and on-call engineers move from raw binlog events to a readable workload story. Find hot tables, large transactions, write spikes, and incident-window changes, then compare the current window against a trusted baseline and hand the result to your team in a format they can actually use.

$ binlogviz analyze --from-dir /var/lib/mysql --prefix mysql-bin. \
    --start 2026-03-15T10:00:00Z --end 2026-03-15T10:30:00Z --format json > current.json
$ binlogviz analyze --from-dir /var/lib/mysql --prefix mysql-bin. \
    --start 2026-03-08T10:00:00Z --end 2026-03-08T10:30:00Z --format json > baseline.json
$ binlogviz compare current.json baseline.json --format html > compare.html
4 output formats · Streaming pipeline · 0 hosted services required · HTML browser-shareable report

Answer the operator questions that matter before the incident room loses patience

BinlogViz does not replace low-level binlog tooling. It sits one layer above it: aggregating workload shape, surfacing abnormal moments, and packaging the result so it can move cleanly into an incident review, issue thread, or postmortem.

Hot tables first

See where writes concentrated instead of scanning raw row events line by line.


Large transactions surfaced

Find outsized transactions before they disappear into a long stream of ordinary noise.

Spike-aware investigation

Optional spike detection points attention at the minute most likely to explain impact.
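For intuition, spike detection of this kind can be sketched as a per-minute ratio against a typical baseline. This is an illustration only; the bucketing, the `factor` threshold, and the input shape below are assumptions, not BinlogViz's actual detector:

```python
from statistics import median

def find_spikes(rows_per_minute, factor=3.0):
    """Flag minutes whose write count exceeds `factor` times the median minute.

    `rows_per_minute` maps a minute label to a row count. Both this input
    shape and the median-ratio rule are illustrative assumptions.
    """
    baseline = median(rows_per_minute.values())
    return {
        minute: round(count / baseline, 1)   # ratio vs. the typical minute
        for minute, count in rows_per_minute.items()
        if baseline and count > factor * baseline
    }

counts = {"10:12": 4_100, "10:13": 4_400, "10:14": 19_700, "10:15": 4_050}
print(find_spikes(counts))  # only the 10:14 minute stands out
```

A real detector would likely use a trailing window rather than a whole-run median, but the output shape (minute plus ratio) matches the alert line shown in the sample report.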

Start where real binlog investigations actually begin

The value of BinlogViz is clearest at the moment somebody already has binlogs and needs answers fast: validate one file, widen to an incident window, then export something a teammate can read without replaying the whole investigation.

🗂

Validate one file quickly

Use a single binlog to prove the path, preview the default report, and sanity-check event volume fast.

Constrain the incident window

Use time, schema, and table filters to reduce noise and keep the workload story centered on the event you care about.

📄

Compare and hand the answer off cleanly

Export JSON reports from analyze, compare current versus baseline, then share the result as HTML for review, JSON for automation, Markdown for docs, or text for terminal-first debugging.

Resolve files, aggregate in a streaming pass, render the answer in the right surface

The architecture stays intentionally direct: parse, normalize, analyze, finalize, render. That makes the tool easier to reason about and keeps the product story focused on fast-to-insight investigation instead of infrastructure complexity.

Provide files: single file or directory range
Apply filters: time, schema, table, object focus
Get a report: alerts, summaries, charts, exports

Streaming by design

Parsing, normalization, analysis, finalization, and rendering flow forward rather than collecting every event into one giant in-memory buffer first.
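The forward-flowing shape can be sketched with generators, where each stage consumes the previous one event at a time instead of buffering the full stream. The event format and stage names here are invented for illustration, not BinlogViz internals:

```python
from collections import Counter

def parse(lines):
    """Yield (table, row_count) pairs one at a time; a stand-in for a real
    binlog event parser. The "table rows" line format is made up."""
    for line in lines:
        table, _, rows = line.partition(" ")
        yield table, int(rows)

def aggregate(events):
    """Fold events into per-table totals in a single forward pass;
    no stage ever materializes the full event list."""
    totals = Counter()
    for table, rows in events:
        totals[table] += rows
    return totals

stream = iter(["orders.payments 120", "orders.refunds 30", "orders.payments 80"])
print(aggregate(parse(stream)))  # totals accumulate as events flow through
```

Because `parse` is a generator, memory stays proportional to the aggregation state, not to the number of events, which is what makes large incident windows tractable.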

Automation-safe I/O contract

Reports stay on stdout while progress and runtime status stay on stderr, so redirecting output into files or pipelines remains predictable.

Analyze locally, compare when needed, hand off in the right surface

Output format is part of the product. Analyze can end as text, JSON, Markdown, or self-contained HTML. Compare turns two exported JSON reports into a text summary, structured JSON delta, or chart-based HTML review artifact.

Preview the reporting surfaces teams actually use

Use the tabs below to see how an incident answer moves across terminals, automation, docs, and browser-based review. Compare reuses text, JSON, and HTML once two analyze JSON reports exist.

=== Workload Summary ===
transactions: 438
rows: 1,842,977
events: 28,194
time range: 2026-03-15 10:00 → 10:30 UTC

=== Top Tables ===
orders.payments      814,420 rows
orders.refunds       242,199 rows
checkout.ledger      138,501 rows

=== Alerts ===
large_transaction · txn_1842 · 17,904 rows
spike · 2026-03-15T10:14:00Z · 4.7× baseline

Why visual compare expands the product story

HTML is no longer just a prettier analyze export. It also gives DBAs a browser-ready compare view for current-versus-baseline review, with charts that make workload changes easier to scan together.

  • Interactive charts summarize workload shape and current-versus-baseline deltas without requiring CLI context.
  • Compare HTML exposes table shifts, operation mix changes, and alert additions or removals in one review surface.
  • Markdown stays useful when the next destination is a wiki or incident note.
  • JSON remains stable for automation, downstream processing, and compare inputs.
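A compare delta of the kind described above can be sketched as a per-table diff of two exported reports. The `{"tables": {name: rows}}` shape is a hypothetical schema for illustration; the real report layout may differ:

```python
def table_delta(current, baseline):
    """Diff per-table row totals from two analyze reports, keeping only
    tables whose totals changed. The report schema here is assumed."""
    cur, base = current.get("tables", {}), baseline.get("tables", {})
    return {
        name: cur.get(name, 0) - base.get(name, 0)
        for name in sorted(set(cur) | set(base))
        if cur.get(name, 0) != base.get(name, 0)
    }

current = {"tables": {"orders.payments": 814_420, "orders.refunds": 242_199}}
baseline = {"tables": {"orders.payments": 173_000, "checkout.ledger": 9_000}}
print(table_delta(current, baseline))
```

Positive values mean the current window wrote more than the baseline; a negative value (here, `checkout.ledger`) flags activity that disappeared, which is often just as interesting in a review.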

Trust the contract, not just the parser

BinlogViz already has a stronger adoption story than “it parses binlogs.” It has published docs, explicit output contracts, release notes around HTML and Markdown exports, and a product boundary that makes sense for DBAs.

📘

Documented behavior

CLI flags, output formats, architecture, and limitations are already documented in a way that supports product messaging.

📄

Stable report surfaces

Text, JSON, Markdown, and HTML make it clear where the answer can go next and who can consume it.

📦

Release-driven story

Recent releases show the project moving from core analysis toward team-facing deliverable artifacts.

🧭

Focused scope

It is not trying to be a general observability platform — and that constraint makes the positioning sharper.

A short release arc that already explains the product direction

The sequence matters: filters and analysis control first, then report exports, then visual compare. That progression tells a better website story than raw feature lists alone.

2026-02
v0.5.x — focus and control
Time filters, object selectors, and better control over the incident analysis surface.
2026-03
v0.6.0 — report exports for handoff
Markdown and HTML exports turned analysis results into artifacts that can move cleanly into docs, reviews, and browser-based sharing.
2026-04
v0.7.0 — visual compare for incident review
Compare turns two analyze JSON reports into text, JSON, and chart-based HTML deltas that teams can review together.
Now
Clear DBA-first positioning
A local CLI that explains workload shifts faster and hands the answer off cleanly.

What the website should emphasize

  • Lead with operator questions, not parser internals.
  • Show that the tool works locally on files teams already have.
  • Highlight HTML as the bridge from CLI investigation to team communication and current-versus-baseline review.
  • Keep the product scope narrow and credible.

Install once, validate one file, widen only when the question demands it

The right onboarding path for a product site is simple and credible: install the binary, analyze one binlog, then move to ordered directory ranges and JSON export once the incident story grows wide enough to warrant comparing against a baseline.

Quick start

# Homebrew
$ brew tap Fanduzi/binlogviz
$ brew install --cask binlogviz

# analyze one file
$ binlogviz analyze mysql-bin.000123

# export current + baseline JSON, then open a visual compare report
$ binlogviz analyze --from-dir /var/lib/mysql --prefix mysql-bin. \
    --start "2026-03-15T10:00:00Z" --end "2026-03-15T10:30:00Z" --format json > current.json
$ binlogviz analyze --from-dir /var/lib/mysql --prefix mysql-bin. \
    --start "2026-03-08T10:00:00Z" --end "2026-03-08T10:30:00Z" --format json > baseline.json
$ binlogviz compare current.json baseline.json --format html > compare.html

What a good first session looks like

  • Validate one file to confirm the default report shape.
  • Use --from-dir and --prefix when the window spans multiple binlogs.
  • Add --start, --end, schema, or table filters to cut noise.
  • Export JSON when you need a baseline, then use compare --format html for browser-based review.

Make the next binlog investigation easier to explain

BinlogViz gives you a local-first path from raw MySQL ROW binlogs to an answer the rest of the incident room can actually read.