Testing helpers: pytest integration
Snapshots, fixtures, and CLI assertions tailored for your test suite.
Pytest plugin overview
pydantic-fixturegen exposes a `pytest11` entry point, so the plugin is available as soon as the package is installed. The plugin offers the `pfg_snapshot` fixture, which delegates to the snapshot runner defined in `pydantic_fixturegen.testing.snapshot`.
Quick start
```python
from pathlib import Path

from pydantic_fixturegen.testing import JsonSnapshotConfig


def test_user_snapshot(pfg_snapshot):
    snapshot = JsonSnapshotConfig(out=Path("tests/snapshots/users.json"), indent=2)
    pfg_snapshot.assert_artifacts(
        target="./app/models.py",
        json=snapshot,
        include=["app.models.User"],
        seed=42,
    )
```
- `target` points to a Python module that exports Pydantic models, dataclasses, or TypedDicts.
- Pass one or more configs (`JsonSnapshotConfig`, `FixturesSnapshotConfig`, `SchemaSnapshotConfig`).
- Use `include`/`exclude`, `seed`, `preset`, or `freeze_seeds` as you would with `pfg diff`.
Update modes
By default, the helper fails when drift is detected and shows the unified diff generated by pfg diff. You can enable automatic updates in three ways:
- CLI flag: `pytest --pfg-update-snapshots=update`
- Environment variable: `PFG_SNAPSHOT_UPDATE=update`
- Per-call override: `pfg_snapshot.assert_artifacts(..., update="update")`
When update mode is active, the helper regenerates the requested artifacts using the deterministic CLI paths and re-diffs to confirm that changes were applied successfully.
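For a one-off refresh from the shell, either trigger works; the test path below is illustrative:

```bash
# Refresh snapshots once via the environment variable, then re-run in the
# default fail mode to confirm the regenerated artifacts are stable.
PFG_SNAPSHOT_UPDATE=update pytest tests/test_snapshots.py
pytest tests/test_snapshots.py

# Equivalent one-off refresh using the CLI flag:
pytest tests/test_snapshots.py --pfg-update-snapshots=update
```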
If you already depend on pytest-regressions, the pfg_snapshot fixture honours its CLI toggles as well:
- `pytest --force-regen` refreshes artifacts and still fails the test so you remember to commit the changes.
- `pytest --regen-all` refreshes artifacts and lets tests pass, mirroring the plugin’s default behaviour.
Per-test overrides via marker
Need to bump the timeout, force updates, or flip discovery modes for a single test? Annotate it with @pytest.mark.pfg_snapshot_config(...):
```python
import pytest


@pytest.mark.pfg_snapshot_config(update="update", timeout=10.0, ast_mode=True)
def test_models(pfg_snapshot):
    ...
```
Supported keyword arguments:
- `update` — accepts the same values as the CLI flag (`fail`/`update`).
- `timeout`/`memory_limit_mb` — feed directly into the safe-import sandbox used during diff discovery.
- `ast_mode`/`hybrid_mode` — opt a single assertion into the matching discovery strategy without toggling the command line.
Markers are enforced per-test, so suite-level defaults still come from the CLI option or PFG_SNAPSHOT_UPDATE unless the marker is present.
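For example, a single slow module can get a looser sandbox and hybrid discovery. The limits below are placeholder values, not recommendations:

```python
import pytest


# Loosen the safe-import sandbox and switch discovery for this test only;
# 30 s and 512 MB are illustrative numbers.
@pytest.mark.pfg_snapshot_config(timeout=30.0, memory_limit_mb=512, hybrid_mode=True)
def test_slow_models(pfg_snapshot):
    ...
```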
Multiple artifact types
Snapshot configs map directly to CLI emitters:
- `JsonSnapshotConfig` — wraps `pfg gen json` options (`count`, `jsonl`, `indent`, `use_orjson`, `shard_size`).
- `FixturesSnapshotConfig` — wraps `pfg gen fixtures` options (`style`, `scope`, `cases`, `return_type`).
- `SchemaSnapshotConfig` — wraps `pfg gen schema` options (`indent`).
You can pass any combination in a single assertion to validate multiple outputs at once:
```python
from pathlib import Path

from pydantic_fixturegen.testing import FixturesSnapshotConfig, JsonSnapshotConfig


def test_snapshots(pfg_snapshot):
    json_snapshot = JsonSnapshotConfig(out=Path("tests/snapshots/users.json"), indent=2)
    fixtures_snapshot = FixturesSnapshotConfig(
        out=Path("tests/snapshots/users_fixtures.py"),
        style="factory",
        cases=3,
    )
    pfg_snapshot.assert_artifacts(
        target="./models.py",
        json=json_snapshot,
        fixtures=fixtures_snapshot,
        include=["app.models.User"],
        seed=42,
    )
```
Determinism tips
Combine snapshots with features that keep outputs stable:
- `seed` — set explicit seeds per assertion or rely on project-level configuration.
- `freeze_seeds`/`freeze_seeds_file` — point to the same freeze file used by the CLI.
- `now` — supply a fixed timestamp when your models contain temporal data.
- `preset` — reuse generation presets defined for production builds.
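A hedged sketch combining these options; the keyword names follow the list above, while the freeze-file name, timestamp format, and preset name are placeholders:

```python
from pathlib import Path

from pydantic_fixturegen.testing import JsonSnapshotConfig


def test_deterministic_snapshot(pfg_snapshot):
    # Placeholder values: "pfg-seeds.json", the ISO-string form of `now`, and the
    # "ci" preset name are assumptions for illustration only.
    pfg_snapshot.assert_artifacts(
        target="./app/models.py",
        json=JsonSnapshotConfig(out=Path("tests/snapshots/users.json"), indent=2),
        include=["app.models.User"],
        seed=1234,
        freeze_seeds_file=Path("pfg-seeds.json"),
        now="2024-01-01T00:00:00Z",
        preset="ci",
    )
```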
Troubleshooting snapshot failures
- Inspect the diff in the pytest failure message; it mirrors `pfg diff --show-diff` output.
- Newer releases annotate diffs with hints such as field additions/removals, schema definition churn, or fixture header drift (seed, model digest, style), so scan the `Hint:` lines before digging into the raw diff.
- When no diff is produced, check that paths in your configs exist and are writable.
- Make sure the models you target are importable and that entry points (if any) are registered before the assertion runs. Use module-level imports or autouse fixtures to register plugins when needed.
- If you run pytest from a different working directory, convert `target` and snapshot paths to absolute paths or anchor them on `Path(__file__).parent`.
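For example, anchoring both paths on the test file keeps the assertion independent of the invocation directory (a minimal sketch reusing `JsonSnapshotConfig` from earlier):

```python
from pathlib import Path

from pydantic_fixturegen.testing import JsonSnapshotConfig

HERE = Path(__file__).parent


def test_snapshot_from_anywhere(pfg_snapshot):
    # Absolute paths keep the assertion stable no matter where pytest is invoked.
    pfg_snapshot.assert_artifacts(
        target=str(HERE / "models.py"),
        json=JsonSnapshotConfig(out=HERE / "snapshots" / "users.json"),
    )
```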
Snapshot CLI
Prefer managing snapshots from the command line instead of pytest? Use the dedicated helpers:
- `pfg snapshot verify models.py --json-out snapshots/users.json --include app.models.User` — regenerates artifacts in-memory and exits with code `1` if any drift is detected.
- `pfg snapshot write models.py --json-out snapshots/users.json --fixtures-out snapshots/users_fixtures.py` — refreshes the on-disk snapshots using the same deterministic config as the pytest helper and prints whether anything changed.
Both commands support the same knobs as pfg diff (--seed, --preset, --freeze-seeds, --rng-mode, --link), so you can script snapshot refreshes in CI or pre-commit hooks without duplicating logic from your test suite.
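For example, a CI step could verify drift while a local task refreshes the snapshots before committing (paths and the seed are illustrative):

```bash
# Fail the build when generated artifacts drift from the committed snapshots.
pfg snapshot verify models.py \
  --json-out snapshots/users.json \
  --include app.models.User \
  --seed 42

# Locally, refresh the on-disk snapshots in place before committing.
pfg snapshot write models.py \
  --json-out snapshots/users.json \
  --seed 42
```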
Coverage tips
To capture code that is imported while pytest is still bootstrapping (for example, package
__init__ modules or CLI entry points), invoke pytest through coverage rather than relying
on pytest --cov:
```bash
coverage run -m pytest
coverage report --show-missing
```
`coverage run` starts measurement before pytest loads plugins or helper modules, so those early imports are included in the report. The project’s coverage configuration lives in `pyproject.toml`, so the usual settings (`branch`, `fail_under`, etc.) still apply. Use `coverage erase` before rerunning when you need a clean slate, and `coverage html` if you want a browsable report.
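Put together, a clean-slate run looks like this:

```bash
coverage erase                   # drop any stale measurement data
coverage run -m pytest           # measure from interpreter start-up, before plugins load
coverage report --show-missing
coverage html                    # optional browsable report (written to htmlcov/ by default)
```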
CLI aids for testing
Even without the pytest plugin, you can script CLI commands inside tests:
```python
import json
from pathlib import Path
from subprocess import PIPE, run


def test_cli_list(tmp_path: Path) -> None:
    module = tmp_path / "models.py"
    module.write_text(
        "from pydantic import BaseModel\n\nclass User(BaseModel):\n    id: int\n",
        encoding="utf-8",
    )
    result = run(["pfg", "list", str(module), "--json-errors"], stdout=PIPE, text=True, check=True)
    payload = json.loads(result.stdout)
    assert payload["models"][0]["qualname"] == "models.User"
```
Use `--json-errors` for machine-readable output and point to the `pfg` script from your virtual environment or Poetry shell.
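If you would rather not rely on a bare PATH lookup, you can resolve the console script explicitly; this sketch uses only the standard library:

```python
import shutil
import subprocess

# Resolve the `pfg` console script from the active environment before shelling out.
pfg_path = shutil.which("pfg")
assert pfg_path is not None, "pfg CLI not found; is pydantic-fixturegen installed in this environment?"

result = subprocess.run([pfg_path, "--help"], capture_output=True, text=True, check=True)
```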
Where to go next
- Cookbook Recipe 6 – concrete usage examples.
- CLI reference – learn more about `pfg diff`, `pfg check`, and generator flags.
- Configuration reference – control seeds, presets, and emitter defaults underpinning your tests.