# Announcing pytest-test-categories v1.1.0: Bring Google's Testing Philosophy to Python
## The Problem With Most Test Suites
Be honest – how many of these apply to your codebase?
- ⏱️ Slow CI pipelines because tests have no time budgets
- 🎲 Flaky tests from network timeouts, race conditions, or shared state
- 🔺 Inverted test pyramid with too many slow integration tests
- 🚫 No enforced boundaries between unit, integration, and system tests
If you nodded at any of these, you’re not alone. These are the most common testing anti-patterns I see in Python projects.
## Enter pytest-test-categories
Today I’m releasing v1.1.0 of pytest-test-categories – a pytest plugin that brings Google’s battle-tested testing philosophy (from “Software Engineering at Google”) to Python.
```bash
pip install pytest-test-categories
```
## What It Does
**1. Categorize tests by size** with clear resource constraints:
```python
import pytest
import requests


@pytest.mark.small
def test_pure_function():
    """Must complete in <1 second, no I/O allowed."""
    assert calculate_total([1, 2, 3]) == 6  # calculate_total: your code under test


@pytest.mark.medium
def test_database_query(db_connection):
    """Can access localhost, up to 5 minutes."""
    result = db_connection.query("SELECT * FROM users")
    assert len(result) > 0


@pytest.mark.large
def test_external_api():
    """Full network access, up to 15 minutes."""
    response = requests.get("https://api.example.com")
    assert response.ok


@pytest.mark.xlarge
def test_extended_e2e():
    """Full access, up to 15 minutes, for extensive E2E tests."""
    pass  # long-running end-to-end workflow
```
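Because these are ordinary pytest markers, you can also select a size tier at the command line with pytest's standard `-m` option:

```bash
# Run only the fast, hermetic tests (e.g. on every commit)
pytest -m small

# Run everything except the slowest tier
pytest -m "not xlarge"
```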
**2. Enforce hermeticity** – when a small test tries to access the network:
```text
======================================================================
[TC001] Network Violation
======================================================================
Category: SMALL

What happened:
  SMALL test attempted network connection to api.example.com:443

To fix this (choose one):
  • Mock the network call using responses, httpretty, or respx
  • Use dependency injection to provide a fake HTTP client
  • Change test category to @pytest.mark.medium
======================================================================
```
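For reference, this is the kind of small test that triggers the violation above (a sketch; the endpoint mirrors the error message):

```python
import pytest
import requests


@pytest.mark.small
def test_fetch_users():
    # Real socket connection to api.example.com:443 - blocked for SMALL tests
    response = requests.get("https://api.example.com/users")
    assert response.ok
```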
**3. Validate your test pyramid** – ensure you maintain the recommended 80/15/5 distribution:
```text
======================== Test Size Distribution ========================
Small:   120 tests (80.0%)  -  Target: 80% ✓
Medium:   22 tests (14.7%)  -  Target: 15% ✓
Large:     8 tests ( 5.3%)  -  Target:  5% ✓
========================================================================
```
## The “No Escape Hatches” Philosophy
Unlike other tools, pytest-test-categories is intentionally strict. There’s no @pytest.mark.allow_network marker.
Why? Because escape hatches become the norm:
```python
# This defeats the entire purpose
@pytest.mark.small
@pytest.mark.allow_network  # ❌ This marker doesn't exist!
def test_api():
    requests.get("https://api.example.com")  # Still flaky!
```
Instead, use the right category:
```python
@pytest.mark.medium  # ✓ Honest about what the test does
def test_api():
    requests.get("https://api.example.com")
```
## Resource Isolation by Test Size
| Resource | Small | Medium | Large | XLarge |
|---|---|---|---|---|
| Time | 1s | 5min | 15min | 15min |
| Network | ❌ Blocked | Localhost | ✓ Allowed | ✓ Allowed |
| Filesystem | ❌ Blocked | ✓ Allowed | ✓ Allowed | ✓ Allowed |
| Database | ❌ Blocked | ✓ Allowed | ✓ Allowed | ✓ Allowed |
| Subprocess | ❌ Blocked | ✓ Allowed | ✓ Allowed | ✓ Allowed |
| Sleep | ❌ Blocked | ✓ Allowed | ✓ Allowed | ✓ Allowed |
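The localhost allowance means a medium test can exercise a service running on the local machine. A sketch (the port and endpoint are placeholders for whatever your test harness starts):

```python
import pytest
import requests


@pytest.mark.medium
def test_local_health_endpoint():
    # Localhost traffic is permitted for MEDIUM tests, e.g. a service
    # started by your test harness or docker-compose
    response = requests.get("http://127.0.0.1:8080/health")
    assert response.status_code == 200
```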
## Mocking Libraries Just Work
Small tests can use mocking libraries like responses, respx, pytest-mock, pyfakefs, and VCR.py without triggering violations. Why? Because mocks intercept at the library layer before reaching the actual resource:
```python
import pytest
import responses
import requests


@pytest.mark.small
@responses.activate
def test_api_with_mock():
    """This is hermetic - no real network call is made."""
    responses.add(
        responses.GET,
        "https://api.example.com/users",
        json={"users": []},
        status=200,
    )

    response = requests.get("https://api.example.com/users")
    assert response.json() == {"users": []}
```
For filesystem operations in small tests, use pyfakefs or io.StringIO/io.BytesIO for in-memory file handling.
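For instance, here's a minimal sketch with pyfakefs, whose `fs` fixture swaps in an in-memory filesystem for the duration of the test:

```python
import pytest


@pytest.mark.small
def test_read_config(fs):  # "fs" fixture is provided by pyfakefs
    # Files are created in memory; the real disk is never touched
    fs.create_file("/etc/app/config.ini", contents="[app]\ndebug = true\n")
    with open("/etc/app/config.ini") as f:
        assert "debug = true" in f.read()
```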
## Gradual Adoption
You don’t have to go strict on day one:
```toml
# pyproject.toml - step up one level at a time
[tool.pytest.ini_options]
# Week 1: Discovery - see what would fail
test_categories_enforcement = "off"

# Weeks 2-4: Migration - fix violations incrementally
# test_categories_enforcement = "warn"

# Week 5+: Enforced - violations fail the build
# test_categories_enforcement = "strict"
```
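Since this is a standard ini option, you can also trial a stricter level for a single run with pytest's built-in `-o` override, without editing `pyproject.toml`:

```bash
# One-off strict run while your configured default is still "warn"
pytest -o test_categories_enforcement=strict
```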
## Full pytest-xdist Support
Run your categorized tests in parallel with full support for pytest-xdist:
```bash
pytest -n auto
```
Distribution stats are aggregated correctly across workers, and timer isolation ensures no race conditions.
## JSON Reports for CI/CD
Export machine-readable reports for dashboards and CI integration:
```bash
pytest --test-size-report=json --test-size-report-file=report.json
```
The resulting `report.json`:

```json
{
  "summary": {
    "total_tests": 150,
    "distribution": {
      "small": {"count": 120, "percentage": 80.0},
      "medium": {"count": 22, "percentage": 14.67},
      "large": {"count": 8, "percentage": 5.33}
    },
    "violations": {
      "timing": 0,
      "hermeticity": {
        "network": 0,
        "filesystem": 0,
        "subprocess": 0,
        "database": 0,
        "sleep": 0,
        "total": 0
      }
    }
  }
}
```
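A CI job can then gate on the report. Here's a hypothetical check script written against the schema above:

```python
# check_report.py - hypothetical CI gate for the JSON report above
import json
import sys

with open("report.json") as f:
    summary = json.load(f)["summary"]

violations = summary["violations"]
total = violations["timing"] + violations["hermeticity"]["total"]
if total > 0:
    sys.exit(f"FAIL: {total} test-category violation(s) detected")
print(f"OK: {summary['total_tests']} tests, no category violations")
```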
## Get Started
```bash
pip install pytest-test-categories
```
Then mark your tests and enable enforcement:
```toml
# pyproject.toml
[tool.pytest.ini_options]
test_categories_enforcement = "warn"
```
## Feedback

I’d love to hear your feedback – questions, feature requests, bug reports. Open an issue on GitHub or drop a comment below.
Happy testing! 🧪
