Odiffr provides R bindings to Odiff, a blazing-fast pixel-by-pixel image comparison tool, suited to visual regression testing of screenshots, rendered images, and other raster graphics.
Odiffr requires the Odiff binary to be installed on your system:

```sh
# npm (cross-platform, recommended)
npm install -g odiff-bin

# Or download binaries from GitHub releases:
# https://github.com/dmtrKovalenko/odiff/releases
```

If you cannot install Odiff system-wide, call `odiffr_update()` after installing the package to download a binary to your user cache.
The main function is `compare_images()`, which returns a tibble (or data.frame).
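A minimal sketch, assuming `compare_images()` takes the two image paths as its first arguments (as in the batch examples below) and returns the same columns as a batch result:

```r
library(odiffr)

# Compare one pair of images; returns a one-row tibble/data.frame
res <- compare_images("baseline.png", "current.png")

res$match            # TRUE if the images match
res$diff_percentage  # percentage of pixels that differ
```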
The threshold parameter (0-1) controls color sensitivity. Lower values are more precise:
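For example (a sketch, assuming `compare_images()` forwards a `threshold` argument as `odiff_run()` does):

```r
# Stricter: count even small per-pixel color differences
compare_images("baseline.png", "current.png", threshold = 0.05)

# Looser: tolerate larger color deviations before flagging a pixel
compare_images("baseline.png", "current.png", threshold = 0.3)
```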
Ignore antialiased pixels that often differ between renders:
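A sketch, assuming `compare_images()` accepts the same `antialiasing` flag shown later for `odiff_run()`:

```r
# Skip pixels that differ only because of antialiasing
compare_images("baseline.png", "current.png", antialiasing = TRUE)
```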
Compare multiple image pairs efficiently:
```r
pairs <- data.frame(
  img1 = c("baseline/page1.png", "baseline/page2.png", "baseline/page3.png"),
  img2 = c("current/page1.png", "current/page2.png", "current/page3.png")
)

results <- compare_images_batch(pairs, diff_dir = "diffs/")

# View failures
results[!results$match, ]
```

Compare all images in two directories by matching filenames:
```r
# Compare baseline/ vs current/ directories
results <- compare_image_dirs("baseline/", "current/")

# Include subdirectories
results <- compare_image_dirs("baseline/", "current/", recursive = TRUE)

# Only compare PNG files
results <- compare_image_dirs("baseline/", "current/", pattern = "\\.png$")
```

Note: `compare_image_dirs()` matches files by name in both directories. If there are files in `current/` with no matching baseline, a message is printed showing which files were skipped.
Extract passing or failing pairs from batch results:
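Since a batch result is an ordinary data.frame, plain subsetting on the `match` column works (a sketch with a toy result; the real columns come from `compare_images_batch()`):

```r
# Toy batch result with the same shape as an odiffr_batch object
results <- data.frame(
  img1            = c("a.png", "b.png"),
  match           = c(TRUE, FALSE),
  diff_percentage = c(0, 3.2)
)

failed <- results[!results$match, ]  # pairs that differ
passed <- results[results$match, ]   # pairs that match
```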
Get aggregate statistics for batch results:
```r
results <- compare_image_dirs("baseline/", "current/")
summary(results)
#> odiffr batch comparison: 50 pairs
#> ───────────────────────────────────
#> Passed: 42 (84.0%)
#> Failed: 8 (16.0%)
#>   - pixel-diff: 6
#>   - layout-diff: 2
#>
#> Diff statistics (failed pairs):
#>   Min:    0.15%
#>   Median: 2.34%
#>   Mean:   3.21%
#>   Max:    12.45%
#>
#> Worst offenders:
#>   1. page_a.png (12.45%, 1245 pixels)
#>   2. page_b.png (8.32%, 832 pixels)
```

The `odiffr_batch` object returned by `compare_images_batch()` and `compare_image_dirs()` contains these columns:
| Column | Type | Description |
|---|---|---|
| `pair_id` | integer | Sequential comparison ID |
| `match` | logical | `TRUE` if images match |
| `reason` | character | `"match"`, `"pixel-diff"`, or `"layout-diff"` |
| `diff_count` | integer | Number of different pixels |
| `diff_percentage` | numeric | Percentage of pixels different |
| `diff_output` | character | Path to diff image, or `NA` |
| `img1` | character | Path to baseline image |
| `img2` | character | Path to current image |
Speed up batch comparisons using multiple CPU cores (Unix only):
```r
# Compare in parallel on macOS/Linux
results <- compare_images_batch(pairs, parallel = TRUE)

# Also works with directory comparison
results <- compare_image_dirs("baseline/", "current/", parallel = TRUE)
```

Note: on Windows, `parallel = TRUE` falls back to sequential processing.
Generate standalone HTML reports for QA review:
```r
# Run batch comparison with diff images
results <- compare_image_dirs(
  "baseline/",
  "current/",
  diff_dir = "diffs/"
)

# Generate HTML report (links to diff images)
batch_report(results, output_file = "qa-report.html")

# Self-contained report with embedded images (for sharing)
batch_report(results, output_file = "qa-report.html", embed = TRUE)

# Portable report with relative paths (move report + diffs together)
batch_report(results, output_file = "output/report.html", relative_paths = TRUE)

# Customize the report
batch_report(
  results,
  output_file = "report.html",
  title = "Dashboard Visual Regression",
  n_worst = 20,    # Show top 20 failures
  show_all = TRUE  # Include all comparisons, not just failures
)
```

Reports include:

- Pass/fail statistics with visual cards
- Failure reason breakdown
- Diff statistics (min, median, mean, max)
- Worst offenders table with thumbnails
The `relative_paths` option is useful when you want to move or share the report together with the diff images folder: with relative paths, the report finds the images wherever the two are placed, as long as they are moved together.
For the common workflow of comparing directories and generating a
report, use compare_dirs_report():
```r
# Compare and generate report in one step
compare_dirs_report("baseline/", "current/")
# -> Creates diffs/ directory with diff images and report.html

# Self-contained report with embedded images (recommended for sharing)
compare_dirs_report("baseline/", "current/", embed = TRUE)

# See all comparisons, not just failures
compare_dirs_report("baseline/", "current/", show_all = TRUE)

# Portable report with relative image paths
compare_dirs_report("baseline/", "current/", relative_paths = TRUE)

# Combine options: parallel processing with embedded report
compare_dirs_report("baseline/", "current/", parallel = TRUE, embed = TRUE)
```

The `compare_dirs_report()` one-liner is ideal for CI pipelines:

```r
# In your CI script
results <- compare_dirs_report("baseline/", "current/")

# Fail the build if any images differ
if (any(!results$match)) {
  stop("Visual regression detected! See diffs/ for details.")
}
```

For GitHub Actions, upload `diffs/` as an artifact on failure:
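For instance, a minimal workflow step using the standard `actions/upload-artifact` action (the step name and artifact name are illustrative):

```yaml
- name: Upload visual diffs
  if: failure()
  uses: actions/upload-artifact@v4
  with:
    name: visual-diffs
    path: diffs/
```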
Odiffr integrates with the magick package for preprocessing:
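For example, you might crop screenshots to a stable region of interest with magick before comparing (a sketch; the crop geometry and file names are illustrative):

```r
library(magick)

# Crop the raw screenshot to the region of interest, then compare
img <- image_read("current_raw.png")
img <- image_crop(img, "800x600+0+0")
image_write(img, path = "current.png")

compare_images("baseline.png", "current.png")
```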
For full control, use odiff_run():
```r
result <- odiff_run(
  img1 = "baseline.png",
  img2 = "current.png",
  diff_output = "diff.png",
  threshold = 0.1,
  antialiasing = TRUE,
  fail_on_layout = TRUE,
  diff_mask = FALSE,
  diff_overlay = 0.5,
  diff_color = "#FF00FF",
  diff_lines = TRUE,
  reduce_ram = FALSE,
  ignore_regions = list(ignore_region(10, 10, 100, 50)),
  timeout = 60
)

# Detailed result
result$match
result$reason
result$diff_count
result$diff_percentage
result$diff_lines
result$exit_code
result$duration
```

Download the latest Odiff binary to your user cache:
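Using `odiffr_update()`, mentioned in the installation notes above:

```r
# Fetch the latest odiff release into the user cache
odiffr_update()
```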
Use a specific binary (useful for validated environments):
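A sketch using the `odiffr.path` option referenced in the validated-environments notes (the path shown is illustrative):

```r
# Point odiffr at a fixed, pre-installed binary
options(odiffr.path = "/opt/odiff/bin/odiff")
```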
Odiffr provides dedicated testthat expectations for visual regression testing:
```r
library(testthat)
library(odiffr)

test_that("dashboard renders correctly", {
  skip_if_no_odiff()

  # Generate current screenshot (using your preferred method)
  webshot2::webshot("http://localhost:3838/dashboard", "current.png")

  # Compare to baseline using expect_images_match()
  expect_images_match(
    "current.png",
    "baselines/dashboard.png",
    threshold = 0.1,
    antialiasing = TRUE
  )
})

test_that("button changes on hover", {
  skip_if_no_odiff()

  # Assert that images are different
  expect_images_differ(
    "button_normal.png",
    "button_hover.png"
  )
})
```

When `expect_images_match()` fails, a diff image is automatically saved to `tests/testthat/_odiffr/` for debugging. This behavior can be controlled with package options; see the `expect_images_match()` documentation.
Odiffr and vdiffr are complementary tools:

- vdiffr uses SVG-based comparison for ggplot2/grid graphics snapshots
- odiffr uses pixel-based comparison for screenshots, rendered images, and bitmaps

Use vdiffr for testing R plots; use odiffr for testing screenshots of Shiny apps, web pages, PDFs, or any raster image comparison.
Odiffr is designed for validated pharmaceutical/clinical research:

- Pin a specific binary with `options(odiffr.path = ...)`
- Use `odiff_version()` to document the binary version for audit trails