Turn MySQL ROW binlogs into incident-ready reports
BinlogViz helps DBAs and on-call engineers move from raw binlog events to a readable workload story. Find hot tables, large transactions, write spikes, and incident-window changes — then hand the result to your team in a format they can actually use.
$ binlogviz analyze --from-dir /var/lib/mysql --prefix mysql-bin. --start 2026-03-15T10:00:00Z --end 2026-03-15T10:30:00Z --format html > report.html
Why BinlogViz
Answer the operator questions that matter before the incident room loses patience
BinlogViz does not replace low-level binlog tooling. It sits one layer above it: aggregating workload shape, surfacing abnormal moments, and packaging the result so it can move cleanly into an incident review, issue thread, or postmortem.
Hot tables first
See where writes concentrated instead of scanning raw row events line by line.
Large transactions surfaced
Find outsized transactions before they disappear into a long stream of ordinary noise.
Spike-aware investigation
Optional spike detection points attention at the minute most likely to explain impact.
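BinlogViz's actual detector is not documented here, so the following is only a rough sketch of the idea (function name, threshold, and baseline rule are all illustrative, not the tool's API): bucket events by minute, take the median per-minute count as a baseline, and flag minutes that exceed it by a large factor.

```python
from collections import Counter
from statistics import median

def find_spikes(event_minutes, factor=3.0):
    """Flag minutes whose event count is at least `factor` times the
    median per-minute count (a hypothetical baseline rule)."""
    counts = Counter(event_minutes)
    baseline = median(counts.values())
    return {
        minute: round(n / baseline, 1)
        for minute, n in counts.items()
        if baseline and n >= factor * baseline
    }

# A quiet half hour (10 events per minute) with one hot minute.
minutes = ["10:%02d" % (i % 30) for i in range(300)] + ["10:14"] * 40
print(find_spikes(minutes))  # → {'10:14': 5.0}
```

A real detector would likely use a rolling window rather than a whole-run median, but the output shape mirrors the report's `spike · … · 4.7× baseline` alert line.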
Core Workflows
Start where real binlog investigations actually begin
The value of BinlogViz is clearest at the moment somebody already has binlogs and needs answers fast: validate one file, widen to an incident window, then export something a teammate can read without replaying the whole investigation.
Validate one file quickly
Use a single binlog to prove the path, preview the default report, and sanity-check event volume fast.
Constrain the incident window
Use time, schema, and table filters to reduce noise and keep the workload story centered on the event you care about.
Hand the answer off cleanly
Export HTML for review calls, Markdown for docs, JSON for pipelines, or text for terminal-first debugging.
How It Works
Resolve files, aggregate in a streaming pass, render the answer in the right surface
The architecture stays intentionally direct: parse, normalize, analyze, finalize, render. That makes the tool easier to reason about and keeps the product story focused on fast-to-insight investigation instead of infrastructure complexity.
Streaming by design
Parsing, normalization, analysis, finalization, and rendering flow forward rather than collecting every event into one giant in-memory buffer first.
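As a minimal sketch of that forward-flowing shape (the event tuple and names here are assumptions, not BinlogViz internals), each normalized event is folded into running totals as it arrives, so memory stays proportional to the number of distinct tables rather than the number of events:

```python
from collections import defaultdict
from typing import Iterable, Tuple

# Hypothetical normalized event: (schema.table, affected row count).
Event = Tuple[str, int]

def aggregate(events: Iterable[Event]):
    """One streaming pass: fold each event into running totals instead
    of materializing the whole binlog in memory first."""
    rows_by_table = defaultdict(int)
    total_events = 0
    for table, rows in events:
        rows_by_table[table] += rows
        total_events += 1
    return total_events, dict(rows_by_table)

stream = iter([("orders.payments", 120), ("orders.refunds", 3),
               ("orders.payments", 80)])
events, tables = aggregate(stream)
print(events, tables["orders.payments"])  # → 3 200
```

Because `aggregate` only ever asks for the next event, the same loop works unchanged whether the source is one binlog file or an ordered directory range.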
Automation-safe I/O contract
Reports stay on stdout while progress and runtime status stay on stderr, so redirecting output into files or pipelines remains predictable.
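The contract is easy to see in miniature (the `emit` helper is illustrative, not part of BinlogViz): the report payload is the only thing written to stdout, so `> report.html` captures it cleanly, while status lines go to stderr and stay visible on the terminal even under redirection.

```python
import sys

def emit(report: str, progress: str) -> None:
    # Human-facing status goes to stderr and survives redirection;
    # the report payload goes to stdout so `> report.html` captures
    # only the report, nothing else.
    print(progress, file=sys.stderr)
    print(report)

emit("<html>…</html>", "parsed 28,194 events")
```

Any tool that mixes progress chatter into stdout breaks this kind of pipeline; keeping the two streams separate is what makes the output automation-safe.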
Output Surfaces
One analysis run, four different handoff surfaces
Output format is part of the product. A BinlogViz run can end up as a terminal snapshot, a structured payload, a Markdown artifact, or a self-contained HTML report, depending on who needs the result next.
Preview the same result in different forms
Use the tabs below to see how one incident summary can move across terminals, automation, docs, and browser-based review.
=== Workload Summary ===
transactions: 438
rows:         1,842,977
events:       28,194
time range:   2026-03-15 10:00 → 10:30 UTC

=== Top Tables ===
orders.payments   814,420 rows
orders.refunds    242,199 rows
checkout.ledger   138,501 rows

=== Alerts ===
large_transaction · txn_1842 · 17,904 rows
spike · 2026-03-15T10:14:00Z · 4.7× baseline
{
  "summary": {
    "total_transactions": 438,
    "total_rows": 1842977,
    "total_events": 28194
  },
  "alerts": [
    { "type": "large_transaction" },
    { "type": "spike" }
  ]
}
# BinlogViz Report

## Workload Summary

| Transactions | Rows | Events |
| --- | ---: | ---: |
| 438 | 1,842,977 | 28,194 |

## Top Tables

| Table | Rows | Txns |
| --- | ---: | ---: |
| orders.payments | 814,420 | 216 |
| orders.refunds | 242,199 | 104 |
- Interactive charts for minute activity and table concentration
- Summary cards for transactions, rows, and range
- Five-theme switcher for presentation-ready exports
- Single-file artifact suitable for incident review
Why HTML changes the product story
The HTML report turns a CLI investigation into a reviewable artifact. It is easier to share, easier to scan, and easier to remember than raw terminal output pasted into chat.
- Interactive charts summarize workload shape without requiring CLI context.
- Theme switching makes the report feel intentional rather than disposable.
- Markdown stays useful when the next destination is a wiki or incident note.
- JSON remains stable for automation and downstream processing.
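When the next destination is automation rather than a reader, the JSON surface is the natural entry point. The field names below come from the JSON preview above; the gating logic itself is a hypothetical downstream consumer, not part of BinlogViz.

```python
import json

# Payload shape matches the JSON report preview; any fields beyond
# it are assumptions about a hypothetical consumer, not the tool.
payload = json.loads("""
{
  "summary": {"total_transactions": 438, "total_rows": 1842977},
  "alerts": [{"type": "large_transaction"}, {"type": "spike"}]
}
""")

alert_types = {a["type"] for a in payload["alerts"]}
if "spike" in alert_types:
    # e.g. fail a CI gate, open a ticket, or page the on-call.
    print(f"spike during window: {payload['summary']['total_rows']} rows written")
```

A stable JSON shape is what lets this kind of consumer survive new releases of the tool without rewrites.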
Trust Signals
Trust the contract, not just the parser
BinlogViz already has a stronger adoption story than “it parses binlogs.” It has published docs, explicit output contracts, release notes around HTML and Markdown exports, and a product boundary that makes sense for DBAs.
Documented behavior
CLI flags, output formats, architecture, and limitations are already documented in a way that supports product messaging.
Stable report surfaces
Text, JSON, Markdown, and HTML make it clear where the answer can go next and who can consume it.
Release-driven story
Recent releases show the project moving from core analysis toward team-facing deliverable artifacts.
Focused scope
It is not trying to be a general observability platform — and that constraint makes the positioning sharper.
Releases
A short release arc that already explains the product direction
The sequence matters: filters and analysis control first, then Markdown and HTML outputs. That progression tells a better website story than raw feature lists alone.
What the website should emphasize
- Lead with operator questions, not parser internals.
- Show that the tool works locally on files teams already have.
- Highlight HTML as the bridge between CLI investigation and team communication.
- Keep the product scope narrow and credible.
Install & Start
Install once, validate one file, widen only when the question demands it
The right onboarding path for a product site is simple and credible: install the binary, analyze one binlog, then move to ordered directory ranges and HTML export when the incident story gets wider.
Quick start
# Homebrew
$ brew tap Fanduzi/binlogviz
$ brew install --cask binlogviz
# analyze one file
$ binlogviz analyze mysql-bin.000123
# export a shareable HTML report
$ binlogviz analyze --from-dir /var/lib/mysql --prefix mysql-bin. --start "2026-03-15T10:00:00Z" --end "2026-03-15T10:30:00Z" --format html > report.html
What a good first session looks like
- Validate one file to confirm the default report shape.
- Use --from-dir and --prefix when the window spans multiple binlogs.
- Add --start, --end, schema, or table filters to cut noise.
- Export html when the answer needs to leave the terminal.
Make the next binlog investigation easier to explain
BinlogViz gives you a local-first path from raw MySQL ROW binlogs to an answer the rest of the incident room can actually read.