
pxdiff vs Happo

An honest comparison of pxdiff and Happo.io for visual regression testing. Both are solid tools with real trade-offs — this guide helps you choose the right one.

For Storybook and Ladle, both tools work similarly: your static build is uploaded and screenshots are captured in cloud browsers. Happo renders in multiple browsers (Chrome, Firefox, Safari, Edge, iOS Safari) from a single upload. pxdiff captures in cloud Chromium.

The real architectural difference is in test framework integrations (Playwright, Cypress, Vitest).

When used with Playwright or Cypress, Happo captures a DOM snapshot — serialized HTML, CSS, and assets — from your local browser, uploads it to Happo’s cloud, and re-renders it in remote browsers to produce screenshots.

What this gives you:

  • Cross-browser screenshots from a single test run. Define Chrome, Firefox, and Safari targets — Happo renders your DOM snapshot in all of them.
  • Happo-managed rendering stability. For Playwright/Cypress integrations, Happo controls the rendering environment, so you don’t need to pin browser versions or manage CI consistency yourself. The trade-off is that Happo’s remote render may not match what your app actually looks like — missing styles, wrong input state, or serialization gaps can produce screenshots of an app state that doesn’t exist in reality. (This doesn’t apply to Storybook, where both tools render in cloud browsers.)

What it costs you:

  • Lost JavaScript state. DOM serialization captures outerHTML, but input.value (the JS property for typed text) isn’t reflected in HTML attributes. Happo syncs checked for checkboxes and scrollTop for scroll positions, but text input values, select states, and other JS-only properties are lost. If you type “feat” into a combobox and screenshot it, Happo’s remote render shows an empty input.
  • Shadow DOM limitations. Shadow DOM is inlined into synthetic <happo-shadow-content> elements via shadowRoot.innerHTML, which has the same serialization gaps — input state inside shadow roots is lost too, and the checkbox/radio sync doesn’t apply inside shadows.
  • Constructed stylesheets require monkey-patching. Happo patches CSSStyleSheet.prototype methods to capture CSS-in-JS rules. This works for many cases but can miss edge cases or cause subtle differences.
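The lost-state problem above comes down to the difference between HTML attributes (which serialization preserves) and JS properties (which it cannot see). The sketch below is a toy model of that distinction — it is not Happo's actual code, and `FakeInput`/`serialize` are invented names for illustration:

```typescript
// Toy model of DOM serialization losing JS-only state.
// outerHTML-style serialization walks attributes; properties like
// input.value live only on the live object and are never emitted.
interface FakeInput {
  attributes: Record<string, string>; // what outerHTML-style serialization sees
  value: string;                      // JS property, invisible to serialization
}

function serialize(el: FakeInput): string {
  // Mimics outerHTML: only attributes survive the round-trip
  const attrs = Object.entries(el.attributes)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  return `<input${attrs}>`;
}

// User types "feat" into a combobox input, then the DOM is snapshotted
const combobox: FakeInput = { attributes: { type: "text" }, value: "feat" };
serialize(combobox); // → '<input type="text">' — the typed text is gone
```

This is why Happo syncs a handful of properties (`checked`, `scrollTop`) back into the markup by hand: anything it doesn't explicitly sync is lost on the remote render.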

pxdiff: pixel screenshots for test frameworks

When used with Playwright or Vitest, pxdiff captures pixel screenshots directly in your local browser, then uploads the images for diffing.

What this gives you:

  • What you see is what you diff. Screenshots are taken from the actual browser running your code. No serialization gaps, no lost state.
  • Full framework support. Any tool that produces a PNG works — Playwright, Vitest Browser Mode, Puppeteer, Selenium, or a folder of screenshots. No integration-specific capture code needed.
  • No rendering re-interpretation. Canvas elements, WebGL, SVGs, iframes, Shadow DOM, CSS-in-JS — everything renders exactly as the browser sees it.

What it costs you:

  • Single-browser screenshots. Each test run captures from one browser. Cross-browser testing requires running tests in multiple browsers separately.
  • Local rendering variance. Screenshots taken on macOS vs Linux, or with different font stacks, may differ slightly. pxdiff mitigates this with anti-aliasing detection and configurable thresholds, but it’s something to be aware of when comparing CI screenshots against local ones.
| Integration | pxdiff | Happo |
| --- | --- | --- |
| Storybook | ✅ GitHub Action + CLI | ✅ Static build upload |
| Ladle | ✅ GitHub Action + CLI | — |
| Playwright | ✅ Native plugin (`toMatchPxdiff`) | ✅ Fixture (`happoScreenshot`, DOM snapshot) |
| Vitest | ✅ Native plugin (Browser Mode) | — |
| Cypress | — | ✅ DOM snapshot |
| Bring your own PNGs | ✅ First-class (`pxdiff upload`) | ⚠️ Image API (manual, no CLI) |
| Feature | pxdiff | Happo |
| --- | --- | --- |
| Screenshot method | Pixel capture | DOM snapshot + remote render |
| Cross-browser | One browser per run | Multiple browsers from one run |
| Diffing algorithm | pixelmatch (threshold 0.063) | Hash comparison, then YIQ color-delta or SSIM |
| Diff threshold | Single threshold (per-diff) | Two-level: per-pixel + % of pixels allowed to differ |
| Anti-aliasing detection | ✅ Built-in (pixelmatch) | Not documented |
| Re-diff | ✅ Re-run against current baselines | — |
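The "single threshold" vs "two-level threshold" rows deserve a concrete illustration. The sketch below models both policies as pure functions over normalized per-pixel deltas (0..1). This is an illustrative simplification, not either tool's actual implementation:

```typescript
// pxdiff-style policy: a single per-pixel threshold; any pixel past it
// fails the diff.
function singleThreshold(deltas: number[], perPixel = 0.063): boolean {
  return deltas.every((d) => d <= perPixel);
}

// Happo-style policy: count pixels past the per-pixel threshold, then
// pass the diff if their share stays under an allowed ratio.
function twoLevel(
  deltas: number[],
  perPixel: number,
  maxDiffRatio: number,
): boolean {
  const differing = deltas.filter((d) => d > perPixel).length;
  return differing / deltas.length <= maxDiffRatio;
}

const deltas = [0, 0.02, 0.9, 0.01]; // one clearly different pixel
singleThreshold(deltas);       // → false: one pixel exceeds 0.063
twoLevel(deltas, 0.063, 0.3);  // → true: 25% of pixels differ, under the 30% cap
```

The practical consequence: a two-level threshold can absorb a small cluster of genuinely different pixels, while a single per-pixel threshold flags any pixel that moves past the limit.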
| Feature | pxdiff | Happo |
| --- | --- | --- |
| Baseline model | Per-branch, per-snapshot | Per-commit SHA |
| Rebase / force-push | ✅ Works naturally | ⚠️ SHAs change, needs fallback walking |
| Stacked PRs | ✅ Branch-based resolution | ⚠️ May require re-running on base |
| Carry-forward | ✅ Auto-approves unchanged snapshots | — |
| Stale detection | ✅ Marks diffs when baselines change | — |
| Approval flow | ✅ Approve/reject/revoke per-snapshot | ✅ Accept/reject |
| GitHub check runs | | |
| Session grouping | `sessionId` (auto-managed) | `--nonce` (manual + finalize step) |
| Feature | pxdiff | Happo |
| --- | --- | --- |
| Local dev mode | ✅ First-class (`pxdiff local`) | ⚠️ Synthetic SHA, manual flags |
| Local baselines | ✅ User-scoped, isolated from CI | — |
| Feature | pxdiff | Happo |
| --- | --- | --- |
| Config file | None (CLI flags + env vars) | Required (`happo.config.ts`) |
| Auth | API key only | API key + secret (JWT) |
| Pricing | Per-screenshot credits | Screenshot-based tiers |
| Accessibility testing | — | ✅ axe-core target type |
| Flake management | — | `happo flake` CLI |
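For reference, a minimal `happo.config.ts` might look like the sketch below. The field names follow Happo's documented config shape (`apiKey`, `apiSecret`, `targets` with `RemoteBrowserTarget`), but the target name and viewport are example values — check Happo's own docs for the full option list:

```typescript
// happo.config.ts — minimal sketch; "chrome" and the viewport string
// are placeholder values, not required names
import { RemoteBrowserTarget } from "happo.io";

export default {
  apiKey: process.env.HAPPO_API_KEY,
  apiSecret: process.env.HAPPO_API_SECRET,
  targets: {
    chrome: new RemoteBrowserTarget("chrome", { viewport: "1024x768" }),
  },
};
```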

pxdiff separates capture and diff into explicit steps. For Storybook:

```yaml
- run: npm run build-storybook
- uses: pxdiff/storybook@v1
  with:
    api-key: ${{ secrets.PXDIFF_API_KEY }}
    source: ./storybook-static
```

For Playwright or Vitest, screenshots upload inline during tests — no wrapper command needed:

```yaml
- run: npx playwright test
  env:
    PXDIFF_API_KEY: ${{ secrets.PXDIFF_API_KEY }}
```

Happo wraps your test command:

```yaml
- run: npx happo -- npx playwright test
  env:
    HAPPO_API_KEY: ${{ secrets.HAPPO_API_KEY }}
    HAPPO_API_SECRET: ${{ secrets.HAPPO_API_SECRET }}
```

For Storybook, Happo builds, uploads, and renders in one step:

```yaml
- run: npx happo
  env:
    HAPPO_API_KEY: ${{ secrets.HAPPO_API_KEY }}
    HAPPO_API_SECRET: ${{ secrets.HAPPO_API_SECRET }}
```

Happo requires running on pushes to main so baseline reports exist for PR comparisons. pxdiff resolves baselines from merge-base commits automatically.

pxdiff stores baselines per-branch. When you approve a snapshot, it becomes the baseline for that branch. Approvals persist across commits — push a fix to your PR and unchanged snapshots carry forward automatically. Rebasing, force-pushing, and stacked PRs all work naturally because baselines are tied to branches, not specific commits.
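The branch-scoped model above can be sketched as a tiny key-value store. The names (`approve`, `needsReview`) and the key scheme are invented for illustration — this is not pxdiff's actual code, just the shape of the idea:

```typescript
// Branch-scoped baselines with carry-forward: approving a snapshot
// stores its content hash under "branch/snapshot". Later pushes that
// produce the same hash stay approved without re-review.
const baselines = new Map<string, string>(); // "branch/snapshot" -> hash

function approve(branch: string, snapshot: string, hash: string): void {
  baselines.set(`${branch}/${snapshot}`, hash);
}

function needsReview(branch: string, snapshot: string, hash: string): boolean {
  // Unchanged hash carries forward; a new hash needs approval again
  return baselines.get(`${branch}/${snapshot}`) !== hash;
}

approve("my-pr", "button-default", "abc123");
needsReview("my-pr", "button-default", "abc123"); // → false: carried forward
needsReview("my-pr", "button-default", "def456"); // → true: pixels changed
```

Because the key contains the branch name rather than a commit SHA, rebasing or force-pushing the branch leaves every approval intact.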

Happo stores baselines per-commit SHA. Each run creates a “report” tied to a SHA, and comparisons are between two SHAs. This means rebasing a branch changes all the SHAs, so Happo needs to search up to 50 ancestor commits to find a matching baseline. If your team uses rebase workflows, squash merges, or stacked PRs, baseline resolution can become unreliable or require re-running Happo on the base branch.
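To make the fallback concrete, here is an illustrative first-parent walk — an assumed simplification of what "search up to 50 ancestor commits" entails, not Happo's actual implementation:

```typescript
// Walk up to maxDepth first-parent ancestors of a SHA, returning the
// first one that has a stored report, or null if none is reachable.
function findBaselineSha(
  sha: string,
  parents: Map<string, string>, // child SHA -> first-parent SHA
  reports: Set<string>,         // SHAs that have a report
  maxDepth = 50,
): string | null {
  let current: string | undefined = sha;
  for (let depth = 0; depth <= maxDepth && current; depth++) {
    if (reports.has(current)) return current;
    current = parents.get(current);
  }
  return null; // e.g. a rebase rewrote every SHA the reports were tied to
}

const parents = new Map([["c3", "c2"], ["c2", "c1"]]);
findBaselineSha("c3", parents, new Set(["c1"])); // → "c1"
findBaselineSha("c3", parents, new Set());       // → null: no baseline found
```

The failure mode falls out of the model: after a rebase, every ancestor SHA is new, so the walk can exhaust its depth limit without ever finding a report.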

pxdiff has first-class local mode with user-scoped baselines:

```sh
pxdiff local -- npm test
```

Local approvals are isolated per-user — they never affect CI baselines. This means a developer on macOS can approve screenshots locally without breaking CI baselines produced on Linux.
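Isolation here is just namespacing. The key scheme below is an assumption for illustration — pxdiff's real storage format isn't documented in this guide — but it shows why a local approval can never collide with a CI baseline:

```typescript
// Baseline keys namespaced by scope: CI baselines and each user's local
// baselines live under disjoint prefixes, so writes never collide.
function baselineKey(
  scope: { kind: "ci" } | { kind: "local"; user: string },
  branch: string,
  snapshot: string,
): string {
  const prefix = scope.kind === "ci" ? "ci" : `user/${scope.user}`;
  return `${prefix}/${branch}/${snapshot}`;
}

baselineKey({ kind: "local", user: "alice" }, "main", "button");
// → "user/alice/main/button"
baselineKey({ kind: "ci" }, "main", "button");
// → "ci/main/button" — a different key, so Alice's macOS approvals
//   never overwrite Linux-rendered CI baselines
```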

Happo supports local runs but doesn’t have automatic baseline resolution for iterative local development — you need to pass explicit --beforeSha/--afterSha flags to compare runs. There’s also no concept of user-scoped baselines, so local and CI baselines aren’t isolated.

Choose Happo if:

  • You need cross-browser screenshots (Chrome, Firefox, Safari) from a single test run.
  • Your components don’t rely on JavaScript-driven input.value or complex DOM state for visual appearance.
  • You want accessibility testing bundled into your VRT pipeline.
  • You’re primarily using Storybook (Happo’s Storybook integration is mature and well-optimized).
  • You want Happo to manage rendering stability for Playwright/Cypress tests rather than pinning browser versions yourself.

Choose pxdiff if:

  • You want to test what users actually see — pixel-perfect screenshots from real browser execution.
  • You use Vitest Browser Mode (Happo has no Vitest integration).
  • You need bring-your-own-screenshots as a first-class workflow — drop a folder of PNGs, no integration code required.
  • You want local development mode with user-scoped baselines that don’t affect CI.
  • You prefer no config file — CLI flags and environment variables only.
  • You want transparent, credit-based pricing instead of opaque tiers.
  • Your components use Shadow DOM, Canvas, WebGL, or CSS-in-JS that may not serialize correctly for DOM snapshot re-rendering.
  • You’re building with any test framework — not just the ones Happo integrates with.
```sh
npm uninstall happo
npm install -D @pxdiff/cli
```

In CI, swap the Happo secrets for the pxdiff key:

```diff
 env:
-  HAPPO_API_KEY: ${{ secrets.HAPPO_API_KEY }}
-  HAPPO_API_SECRET: ${{ secrets.HAPPO_API_SECRET }}
+  PXDIFF_API_KEY: ${{ secrets.PXDIFF_API_KEY }}
```

For Storybook, replace the single `npx happo` step with a Storybook build plus the pxdiff action:

```diff
-- run: npx happo
+- run: npm run build-storybook
+- uses: pxdiff/storybook@v1
+  with:
+    api-key: ${{ secrets.PXDIFF_API_KEY }}
+    source: ./storybook-static
```

For Playwright, replace the happo wrapper with pxdiff’s native plugin.

Before (Happo):

```ts
import { test } from "happo/playwright";

test("my test", async ({ page, happoScreenshot }) => {
  await page.goto("https://example.com");
  await happoScreenshot(page.locator("body"), {
    component: "Home",
    variant: "default",
  });
});
```

After (pxdiff):

```ts
import { test, expect } from "@playwright/test";
import { createPxdiffFixture } from "@pxdiff/playwright";

const pxdiffTest = test.extend(createPxdiffFixture());

pxdiffTest("my test", async ({ page }) => {
  await page.goto("https://example.com");
  await expect(page).toMatchPxdiff("home-default");
});
```

Delete happo.config.ts. pxdiff doesn’t use a config file — set PXDIFF_API_KEY as an environment variable and you’re done.

| Happo | pxdiff | Notes |
| --- | --- | --- |
| Report | Capture | A set of screenshots at a point in time |
| Job | Diff | Comparing two reports/captures |
| `happoScreenshot()` | `toMatchPxdiff()` | Screenshot + comparison |
| `happo.config.ts` | (none) | pxdiff uses CLI flags and env vars |
| `--nonce` | `sessionId` | Grouping parallel test shards |
| `happo finalize` | Automatic | Sessions auto-finalize |
| `data-happo-hide` | `data-pxdiff="ignore"` | Hide dynamic content |
| `happo flake` | (not yet available) | Flake tracking |
| API key + secret | API key only | Simpler auth model |
| `RemoteBrowserTarget` | (not applicable) | pxdiff uses your local browser |
| DOM snapshot | Pixel screenshot | Fundamentally different capture approach |