Introduction
test-better is a Rust testing library built around one idea: a test that
returns Result and uses ? is strictly better than a test that panics.
A panicking test stops at the first failure, throws away everything it knew
about why it failed, and gives you a backtrace through the test harness
instead of a description of what went wrong. A Result-returning test keeps
the failure as a value: it carries the expression that failed, the values
involved, the source location, and any context you attached on the way down.
use test_better::prelude::*;

#[test]
fn the_answer_is_right() -> TestResult {
    let answer = compute_answer();
    check!(answer).satisfies(eq(42))?;
    Ok(())
}
When that assertion fails, the message names the expression (answer), not
just its value, and the comparison it expected. There is no .unwrap(), no
assert_eq!, and no panic: the ? turns the failure into an early return that
the test harness reports.
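The mechanism underneath is plain Rust: the built-in test harness already accepts tests that return Result, and ? carries the error value to it. A std-only illustration of that foundation, with no library at all:

```rust
use std::num::ParseIntError;

// A Result-returning test with nothing but std: `?` turns a parse failure
// into the test's failure, and the harness reports the error value.
fn parse(input: &str) -> Result<u16, ParseIntError> {
    input.parse()
}

#[test]
fn a_result_returning_test() -> Result<(), ParseIntError> {
    let port = parse("8080")?; // a failure here ends the test with the error
    assert_eq!(port, 8080);
    Ok(())
}
```

What test-better adds on top of this built-in mechanism is the failure value itself: the expression text, the expected and actual sides, and the context chain.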
What you get
- check! and matchers. check!(value).satisfies(matcher) is the single
  assertion form. Matchers (eq, lt, contains, some, …) compose with
  combinators (not, all_of, any_of) and you can write your own.
- ?-friendly conversions. or_fail replaces .unwrap(); context annotates a
  failure with where you were in the test when it happened.
- Rich failure output. Failures render the expression, the expected and
  actual values, a diff for multi-line text, the source location, and the
  context chain.
- One surface across test kinds. Async assertions, property tests,
  snapshot tests, and fixture-driven tests all return the same TestResult
  and compose with the same ?.
How this book is organized
Getting Started gets a test file compiling. Migrating
from assert! is the translation table if you
have an existing suite. The remaining chapters each take one area in depth:
writing your own matchers, async,
property testing, snapshots, and
fixtures. Recipes collects shorter answers to
common questions.
The full API reference is the rustdoc; this book is the prose companion to it.
Getting Started
Add the dependency
test-better is a dev-dependency: it is only used by your tests.
[dev-dependencies]
test-better = "0.2"
That single crate is a facade: it re-exports the whole library, so a test file needs one dependency and one import.
The one import
use test_better::prelude::*;
The prelude brings in everything an everyday test uses: the TestResult type,
the check! macro, the matcher constructors (eq, lt, contains, …),
and the ?-friendly extension methods (context, or_fail). Less common
items (the custom-matcher machinery, the structured-failure types) are imported
by name when you need them, so they stay out of the body of every test.
Your first test
A test-better test returns TestResult, which is an alias for
Result<(), TestError>. The body uses ? on each assertion and ends with
Ok(()):
use test_better::prelude::*;

fn parse_port(input: &str) -> Option<u16> {
    input.parse().ok()
}

#[test]
fn parses_a_valid_port() -> TestResult {
    let port = parse_port("8080").or_fail_with("8080 is a valid port")?;
    check!(port).satisfies(eq(8080))?;
    Ok(())
}
Three things are happening:
- or_fail_with replaces .unwrap(). On None it produces a TestError whose
  message is the string you gave it; the ? returns it.
- check!(port) captures both the value and the source text port, so a
  failure names the expression.
- .satisfies(eq(8080)) returns a TestResult. The ? propagates a mismatch;
  on a match it is Ok(()) and execution continues.
The trailing Ok(()) is the test passing. If the last line of the test is
itself an assertion, you can return it directly and drop the Ok(()):
use test_better::prelude::*;

fn parse_port(input: &str) -> Option<u16> { input.parse().ok() }

#[test]
fn parses_a_valid_port() -> TestResult {
    let port = parse_port("8080").or_fail_with("8080 is a valid port")?;
    check!(port).satisfies(eq(8080))
}
What a failure looks like
When check!(port).satisfies(eq(8080)) fails, the test does not panic with
assertion failed: left == right. It returns a TestError that renders the
expression, what was expected, and what was found:
assertion failed
check!(port).satisfies(eq(8080))
expected: equal to 8080
actual: 9090
at tests/config.rs:11:5
If you attached context with .context(..) on the way down, that chain is
printed too. The next chapter, Migrating from assert!,
covers the rest of the everyday vocabulary.
Negation and multiple matchers
violates is the negation of satisfies:
use test_better::prelude::*;

#[test]
fn a_fresh_cart_is_empty_and_has_no_total() -> TestResult {
    let cart: Vec<u32> = Vec::new();
    check!(&cart).satisfies(is_empty())?;
    check!(cart.iter().sum::<u32>()).violates(gt(0))?;
    Ok(())
}
To assert several things about one value in a single check!, combine
matchers with all_of (see Recipes); to keep going after the
first failure and report all of them, use soft (also in Recipes).
Migrating from assert!
If you have an existing test suite, you do not have to rewrite it all at once.
test-better tests are ordinary #[test] functions; a TestResult-returning
test sits next to a panicking one in the same file. Convert a test when you
next touch it.
This chapter is the translation table.
The shape of the function
A panicking test returns () and its assertions panic. A test-better test
returns TestResult and its assertions are ?-propagated:
// Before
#[test]
fn before() {
    let user = load_user(1);
    assert_eq!(user.name, "alice");
}
use test_better::prelude::*;

// After
#[test]
fn after() -> TestResult {
    let user = load_user(1);
    check!(user.name).satisfies(eq("alice"))
}
Assertion translation table
| Panicking | test-better |
|---|---|
| assert!(x) | check!(x).satisfies(is_true())? |
| assert!(!x) | check!(x).satisfies(is_false())? |
| assert_eq!(a, b) | check!(a).satisfies(eq(b))? |
| assert_ne!(a, b) | check!(a).satisfies(ne(b))? |
| assert!(a < b) | check!(a).satisfies(lt(b))? |
| assert!(a >= b) | check!(a).satisfies(ge(b))? |
| assert!(v.contains(&x)) | check!(&v).satisfies(contains(eq(x)))? |
| assert!(v.is_empty()) | check!(&v).satisfies(is_empty())? |
| assert!(s.contains("foo")) | check!(s).satisfies(contains_str("foo"))? |
| assert!(opt.is_some()) | check!(opt).satisfies(some(always_matches()))? * |
| assert_eq!(opt, Some(x)) | check!(opt).satisfies(some(eq(x)))? |
| assert!(res.is_ok()) | check!(res).satisfies(ok(always_matches()))? * |
| assert_eq!(res, Ok(x)) | check!(res).satisfies(ok(eq(x)))? |
* some and ok take an inner matcher for the contained value. To assert
only that the option or result is the right variant, pass always_matches();
otherwise pass a matcher for the value you expect inside it.
Replacing .unwrap() and .expect()
.unwrap() and .expect("...") panic. Their ?-friendly replacements live on
the OrFail extension trait, in the prelude:
use test_better::prelude::*;

fn config_path() -> Option<String> { Some("/etc/app.toml".into()) }
fn read(_: &str) -> Result<String, std::io::Error> { Ok(String::new()) }

#[test]
fn loads_the_config() -> TestResult {
    // Before: let path = config_path().unwrap();
    let path = config_path().or_fail_with("a config path is configured")?;
    // Before: let body = read(&path).expect("config is readable");
    let body = read(&path).or_fail_with("the config file is readable")?;
    check!(body.is_empty()).satisfies(is_true())
}
- or_fail() uses a generic message; or_fail_with("...") lets you say what
  you expected. On a Result it preserves the underlying error as the cause,
  so the original error message is still in the output.
- Use these everywhere you would have reached for .unwrap() in test setup,
  not just on the value under test.
Annotating where a failure happened: context
.context("...") (and .with_context(|| ...), which builds its message only
on the failure path) attach a frame describing what the test was doing. They
work on any Result whose error implements std::error::Error, and on a
TestResult directly:
use test_better::prelude::*;

fn open_db() -> Result<(), std::io::Error> { Ok(()) }
fn run_migrations() -> Result<(), std::io::Error> { Ok(()) }

#[test]
fn the_database_is_ready() -> TestResult {
    open_db().context("opening the test database")?;
    run_migrations().context("running migrations")?;
    Ok(())
}
A failure inside run_migrations is reported “while running migrations”, so
you do not have to reconstruct what step you were on from a line number.
A pragmatic order of operations
1. Change the signature to -> TestResult and add Ok(()) at the end.
2. Replace each assert*! with the check! form from the table, ? on each.
3. Replace .unwrap() / .expect() in the test’s setup with or_fail*.
4. Add .context(..) where a bare failure would be ambiguous.
The result is a test that, when it fails, tells you what it was doing and what it found, rather than just where the panic was caught.
Writing Matchers
The built-in matchers cover most assertions, but a test suite for a real domain
eventually wants its own vocabulary: is_freezing(), is_a_valid_iban(),
settled(). A custom matcher is reusable, composes with the combinators
(not, all_of, some, …), and produces a failure message written in
domain terms rather than in raw field values.
There are two ways to write one: the declarative define_matcher! shortcut,
and a hand-written Matcher impl. The runnable companion to this chapter is
the examples/custom-matcher/ crate in the repository, and the
test_better::cookbook module in the rustdoc.
Before writing one: check the built-ins
To assert on the shape of a struct, tuple, or enum variant, the structural
macros (matches_struct!, matches_tuple!, matches_variant!) compose
existing matchers and need no new type. To wrap an ad-hoc closure once, without
naming it, satisfies is lighter still:
use test_better::prelude::*;

#[test]
fn the_id_is_even() -> TestResult {
    let id = 4096_u32;
    check!(id).satisfies(satisfies("an even id", |n| n % 2 == 0))
}
Reach for a real matcher when the predicate is reused, or when the failure message needs to be better than “did not satisfy …”.
1. define_matcher!: the declarative shortcut
When the matcher is a predicate plus a description and nothing more,
define_matcher! writes the matcher type, its Matcher impl, and the
constructor function for you:
use test_better::define_matcher;

define_matcher! {
    /// Matches a temperature, in degrees Celsius, at or below freezing.
    pub fn is_freezing for f64 {
        expects: "a temperature at or below 0°C",
        matches: |celsius| *celsius <= 0.0,
    }
}
The matcher can take parameters; the expects description can be computed from
them:
use test_better::define_matcher;

define_matcher! {
    /// Matches a temperature strictly warmer than `floor` degrees Celsius.
    pub fn warmer_than(floor: f64) for f64 {
        expects: format!("a temperature warmer than {floor}°C"),
        matches: |celsius| *celsius > floor,
    }
}
Both are used like any built-in matcher: check!(reading).satisfies(is_freezing()),
check!(reading).satisfies(warmer_than(18.0)). This is the right tool for the large
majority of cases.
2. A hand-written impl Matcher<T>: full control
When the shortcut is not enough (you want a structured diff, an inner matcher
applied to a projection, or a failure message phrased for the domain type),
implement Matcher<T> directly. The trait has two methods:
use test_better::{Description, MatchResult, Matcher, Mismatch};

#[derive(Debug, Clone, Copy, PartialEq)]
pub struct Temperature(pub f64);

struct IsFreezingReading;

impl Matcher<Temperature> for IsFreezingReading {
    fn check(&self, actual: &Temperature) -> MatchResult {
        if actual.0 <= 0.0 {
            MatchResult::pass()
        } else {
            MatchResult::fail(Mismatch::new(
                self.description(),
                format!("{:.1}°C, which is above freezing", actual.0),
            ))
        }
    }

    fn description(&self) -> Description {
        Description::text("a temperature at or below 0°C")
    }
}

/// Matches a `Temperature` reading at or below freezing.
#[must_use]
pub fn is_freezing_reading() -> impl Matcher<Temperature> {
    IsFreezingReading
}
- check returns MatchResult::pass() or MatchResult::fail(mismatch). The
  Mismatch carries the Description of what was expected and a string for
  what was actually found.
- description returns the matcher’s expectation. It is what not negates
  and what combinators compose, so keep it a noun phrase (“a temperature
  at or below 0°C”), not a sentence.
The convention is to keep the matcher type private and expose a constructor
function. Mark the constructor #[must_use]: a matcher that is built and
dropped is a bug.
3. A matcher that adapts an inner matcher
The most composable shape takes an inner Matcher<U> and applies it to a
projection of T. This lets every numeric matcher (gt, between,
close_to, …) work on your domain type without a dedicated matcher for each:
use test_better::{Description, MatchResult, Matcher, Mismatch};

#[derive(Debug, Clone, Copy, PartialEq)]
pub struct Temperature(pub f64);

struct AsCelsius<M>(M);

impl<M: Matcher<f64>> Matcher<Temperature> for AsCelsius<M> {
    fn check(&self, actual: &Temperature) -> MatchResult {
        let inner = self.0.check(&actual.0);
        match inner.failure {
            None => MatchResult::pass(),
            Some(mismatch) => MatchResult::fail(Mismatch {
                expected: Description::labeled("degrees Celsius", mismatch.expected),
                ..mismatch
            }),
        }
    }

    fn description(&self) -> Description {
        Description::labeled("degrees Celsius", self.0.description())
    }
}

/// Applies `inner` to the underlying degrees-Celsius value of a `Temperature`.
pub fn as_celsius<M: Matcher<f64>>(inner: M) -> impl Matcher<Temperature> {
    AsCelsius(inner)
}
Description::labeled wraps the inner description with a header, so a nested
failure keeps the layer that failed: the output shows degrees Celsius and,
underneath it, whatever the inner matcher expected.
Describing expectations
Description is the composable account of what a matcher expects:
- Description::text("...") is a leaf.
- Description::labeled(header, child) nests a description under a header.
- a.and(b) / a.or(b) combine two descriptions; !d negates one.
Building the description out of these, rather than formatting a string, is what
lets not, all_of, and any_of produce a sensible message when they wrap
your matcher.
Async Testing
test-better tests an async value in three ways: by awaiting it and asserting
on its output, by polling a condition until it becomes true, and by bounding
how long an operation may take. The first is runtime-agnostic; the second
comes in a runtime-free form and a runtime-gated form; the third always
needs a runtime.
Asserting on a future’s output: resolves_to
When the expression handed to check! is a Future, the Subject grows an
await-based method, resolves_to. It awaits the future and applies the
matcher to its output:
use test_better::prelude::*;

async fn doubled(n: i32) -> i32 {
    n + n
}

#[tokio::test]
async fn doubling_resolves_to_the_sum() -> TestResult {
    check!(doubled(21)).resolves_to(eq(42)).await?;
    Ok(())
}
resolves_to only awaits the future, so it is runtime-agnostic: the same
assertion works under #[tokio::test], #[async_std::test],
pollster::block_on, or any other executor. A mismatch is reported the same
way satisfies reports one: the expression (doubled(21)) and the actual
output.
Polling until a condition holds: eventually
Some conditions become true after an operation, not synchronously: a
background task finishes, a file appears, a queue drains. eventually polls a
probe until it passes or a timeout elapses.
The runtime-free form is eventually_blocking. It needs no executor, so it is
an ordinary #[test]:
use std::time::Duration;
use test_better::prelude::*;

#[test]
fn the_worker_drains_the_queue() -> TestResult {
    let queue = start_worker();
    eventually_blocking(Duration::from_secs(5), || queue.is_empty())?;
    Ok(())
}
The async form is eventually: its probe is a future, and it sleeps on the
runtime between attempts. It is gated on a runtime feature of test-better
(tokio, async-std, or smol) being enabled, so the inter-probe sleep has
an executor to run on:
use std::time::Duration;
use test_better::prelude::*;

#[tokio::test]
async fn the_endpoint_comes_up() -> TestResult {
    let server = spawn_server();
    eventually(Duration::from_secs(5), || async { server.health().await.is_ok() }).await?;
    Ok(())
}
Both forms return the moment the probe passes, rather than always waiting out
the budget, and both report the elapsed time and probe count on a timeout. The
eventually_with / eventually_blocking_with variants take a Backoff to
control the inter-probe delay.
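As a sketch only: neither the Backoff constructors nor the exact argument order of the _with variants are shown in this chapter, so Backoff::exponential below is a hypothetical name and the call shape is an assumption; consult the rustdoc for the real API.

```rust
use std::time::Duration;
use test_better::prelude::*;

#[test]
fn drains_with_growing_delays() -> TestResult {
    let queue = start_worker();
    // Backoff::exponential(..) is a hypothetical constructor used for
    // illustration; the real Backoff API lives in the rustdoc.
    eventually_blocking_with(
        Duration::from_secs(5),
        Backoff::exponential(Duration::from_millis(10)),
        || queue.is_empty(),
    )?;
    Ok(())
}
```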
Bounding how long an operation may take: completes_within
completes_within asserts that a future finishes inside a time limit. It
needs a real runtime to drive the timeout, so it is gated on one of
test-better’s runtime features and is only callable inside that runtime’s
test:
use std::time::Duration;
use test_better::prelude::*;

#[tokio::test]
async fn the_cache_lookup_is_fast() -> TestResult {
    check!(cache_lookup("key"))
        .completes_within(Duration::from_millis(50))
        .await?;
    Ok(())
}
If the future does not complete in time, the failure is an ErrorKind::Timeout
naming the limit. Because the three runtime features are mutually exclusive in
a single build, pick the one matching your test runtime in Cargo.toml:
[dev-dependencies]
test-better = { version = "0.2", features = ["tokio"] }
Choosing the right tool
- The value is a future and you want to assert on its output: resolves_to.
- A condition becomes true asynchronously and you want to wait for it:
  eventually (or eventually_blocking with no runtime).
- An operation must finish within a deadline: completes_within.
Property Testing
A property test asserts that something holds for every input in a range,
rather than for a handful of hand-picked cases. test-better’s property layer
is a thin seam over proptest: you write the property as a closure that
returns TestResult, and a failure is shrunk to a minimal counterexample that
still carries the matcher failure that broke it.
The property! macro
The everyday form is the property! macro. The closure binding’s type
annotation names the strategy: for any type that implements
proptest::Arbitrary (most std types do), the strategy is inferred from the
annotation.
use test_better::prelude::*;

#[test]
fn incrementing_changes_the_value() -> TestResult {
    property!(|n: u32| {
        check!(n.wrapping_add(1)).satisfies(ne(n))
    })
}
The macro call is the test body: it returns the TestResult the #[test]
function returns.
To name an explicit strategy instead of inferring one, add a using clause.
The binding is then bare; its type and values come from the strategy. A numeric
range is a proptest strategy, so it works directly:
use test_better::prelude::*;

#[test]
fn values_in_range_stay_in_range() -> TestResult {
    property!(|n| {
        check!(n).satisfies(lt(10u64))
    } using 0u64..10)
}
Shrinking and counterexamples
When a property fails, proptest shrinks the failing input toward the simplest
value that still fails, and test-better reports both the shrunk and the
original input, alongside the matcher failure:
use test_better::prelude::*;

#[test]
fn this_property_is_false() -> TestResult {
    // "every value in 0..1000 is below 500" is false; the run shrinks the
    // counterexample down to exactly 500.
    let error = property!(|n: u32| {
        check!(n).satisfies(lt(500u32))
    } using 0u32..1_000)
    .err()
    .or_fail_with("values at or above 500 exist in 0..1000")?;
    let rendered = error.to_string();
    check!(rendered.contains("the shrunk (minimal) input is 500")).satisfies(is_true())?;
    check!(rendered.contains("less than 500")).satisfies(is_true())
}
The point of carrying the matcher failure through shrinking is that the report
is not just “500 failed”: it is the full check! failure for the minimal
input, so you see what about 500 broke the property.
The function form: for_all and for_all_with
property! expands to a call to for_all. You can call it directly when you
want the Result<(), PropertyFailure<T>> as a value rather than as the test’s
return:
use test_better::prelude::*;
use test_better::for_all;

#[test]
fn doubling_stays_in_bounds() -> TestResult {
    let outcome = for_all(0u32..1_000, |n| check!(n * 2).satisfies(lt(2_000u32)));
    check!(outcome.is_ok()).satisfies(is_true())
}
PropertyFailure<T> exposes the shrunk input, the original input, and the
carried TestError, so a test can assert on the counterexample itself.
for_all_with takes a PropertyConfig (the case count) and a Runner (seeded
deterministically or randomized), for when the defaults are not what you want:
use test_better::prelude::*;
use test_better::{PropertyConfig, Runner, for_all_with};

#[test]
fn run_more_cases() -> TestResult {
    let mut runner = Runner::randomized();
    let outcome = for_all_with(PropertyConfig { cases: 32 }, &mut runner, 0u64..10, |n| {
        check!(n).satisfies(lt(10u64))
    });
    check!(outcome.is_ok()).satisfies(is_true())
}
Custom strategies
A Strategy<T> describes how to generate and shrink values of T. Any
proptest strategy is a test-better Strategy through a blanket impl, so
proptest’s combinators (prop_map, prop_filter, tuples, collections) are
available with no wrapper. any::<T>() is the default strategy for a type, the
same one property! infers.
There is also an optional quickcheck bridge behind the quickcheck feature:
arbitrary::<T>() turns a quickcheck::Arbitrary type into a Strategy<T>.
proptest is the primary backend; reach for the bridge only when you already
have quickcheck::Arbitrary impls you want to reuse.
Snapshots
A snapshot test asserts that a value still renders the way it did last time. Instead of writing the expected output by hand, you let the test record it once, commit that, and fail on any later change. It is the right tool for output that is large, structured, or tedious to spell out: rendered HTML, serialized payloads, formatted reports, error messages.
test-better has two flavours: file snapshots, stored in a .snap file next
to the test, and inline snapshots, stored in a string literal in the test
itself.
File snapshots
check!(value).matches_snapshot("name") compares the value’s Display
output against tests/snapshots/<module_path>__<name>.snap:
use test_better::prelude::*;

#[test]
fn the_home_page_renders() -> TestResult {
    let rendered = render_home_page();
    check!(rendered).matches_snapshot("home_page")
}
The first time this runs there is no .snap file, so the test fails with a
“missing snapshot” error. Record it by running with UPDATE_SNAPSHOTS=1:
UPDATE_SNAPSHOTS=1 cargo test
That writes the .snap file. Review it, commit it, and from then on the test
compares against it. When the output legitimately changes, re-run with
UPDATE_SNAPSHOTS=1 and commit the updated file; when it changes
unexpectedly, the test fails with a diff.
Inline snapshots
For short values, an inline snapshot keeps the expected output in the test:
use test_better::prelude::*;

#[test]
fn arithmetic_still_works() -> TestResult {
    check!(2 + 2).matches_inline_snapshot("4")
}
Multi-line values are written as a raw string; leading indentation is normalized, so the literal can be indented to match the surrounding code:
use test_better::prelude::*;

#[test]
fn the_report_renders() -> TestResult {
    let report = ["name: alice", "score: 42", "status: active"].join("\n");
    check!(report).matches_inline_snapshot(
        r#"
        name: alice
        score: 42
        status: active
        "#,
    )
}
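The normalization described above can be pictured in std terms. This is a sketch of the general shape (an assumption about the algorithm, not the crate's exact rules, and it assumes ASCII indentation):

```rust
// Strips the common leading indentation from a multi-line literal, so an
// indented raw string can be compared against unindented output.
fn dedent(s: &str) -> String {
    // The common indent is the smallest indent of any non-blank line.
    let indent = s
        .lines()
        .filter(|l| !l.trim().is_empty())
        .map(|l| l.len() - l.trim_start().len())
        .min()
        .unwrap_or(0);
    s.lines()
        .map(|l| if l.len() >= indent { &l[indent..] } else { l.trim_start() })
        .collect::<Vec<_>>()
        .join("\n")
}
```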
An inline snapshot starts empty. Run the test under UPDATE_SNAPSHOTS=1 and it
records a pending patch rather than editing your source mid-run; apply the
pending patches with the cargo test-better accept companion (see the
runner recipe).
Redactions: ignoring the parts that always change
Real output often contains values that change every run (timestamps, UUIDs,
temp paths) but are not what the test is about. Redactions rewrites those to
a stable placeholder before the comparison:
use test_better::Redactions;
use test_better::prelude::*;

#[test]
fn the_audit_line_renders() -> TestResult {
    let line = format!("{} user=alice action=login", now_rfc3339());
    let redactions = Redactions::new()
        .redact_rfc3339_timestamps()
        .redact_uuids();
    check!(line).matches_snapshot_with("audit_line", &redactions)
}
Redactions is a builder: redact_rfc3339_timestamps and redact_uuids are
built in; replace(needle, placeholder) swaps a fixed string; redact_with
takes an arbitrary rewrite rule. matches_snapshot_with and
matches_inline_snapshot_with take the configured Redactions.
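The underlying operation is a textual rewrite before the comparison. A std-only illustration of the fixed-string case (not the crate's Redactions type):

```rust
// Rewrite the unstable part of the output to a stable placeholder, then
// compare against the stored snapshot text.
fn redact_fixed(line: &str, needle: &str, placeholder: &str) -> String {
    line.replace(needle, placeholder)
}

fn redacted_matches(actual: &str, snapshot: &str, needle: &str) -> bool {
    redact_fixed(actual, needle, "[REDACTED]") == snapshot
}
```

The built-in timestamp and UUID redactions are the pattern-matching generalization of this same rewrite-then-compare step.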
When to snapshot, and when not
Snapshots are powerful but blunt: a snapshot test asserts on the whole
output, so it fails on any change, intended or not. Use one when the output is
genuinely too large or too structured to assert piece by piece. When you care
about one field, a targeted check! with matches_struct! or contains_str
says more about what matters and fails more precisely.
Fixtures
A fixture is a named, reusable piece of test setup. Instead of repeating the same “open a database, run migrations, insert a user” preamble in every test, you write it once as a fixture and name it as a parameter of the tests that need it.
The design goal is that a fixture failure is setup, not an assertion miss.
If the database will not open, the test that needed it fails with an
ErrorKind::Setup error naming the fixture, not a confusing assertion failure
deep in the body.
Defining a fixture
A fixture is a fn returning TestResult<T>, marked #[fixture]:
use test_better::prelude::*;

#[fixture]
fn answer() -> TestResult<i32> {
    Ok(42)
}
The body does whatever setup is needed and returns the value (or an error,
which becomes the Setup failure). Real fixtures build connections, temp
directories, seeded data: anything a test would otherwise construct inline.
Using fixtures in a test
A #[test_with_fixtures] test names fixtures as parameters. Each is resolved
before the body runs, and the resolved value is passed in:
use test_better::prelude::*;

#[fixture]
fn answer() -> TestResult<i32> { Ok(42) }

#[test_with_fixtures]
fn the_answer_reaches_the_test(answer: i32) -> TestResult {
    check!(answer).satisfies(eq(42))
}
The parameter name must match the fixture’s function name; the parameter type
is the T the fixture produces. Several fixtures are resolved left to right:
use test_better::prelude::*;

#[fixture]
fn name() -> TestResult<String> {
    Ok(String::from("alice"))
}

#[fixture]
fn age() -> TestResult<u32> {
    Ok(30)
}

#[test_with_fixtures]
fn both_fixtures_are_available(name: String, age: u32) -> TestResult {
    check!(name.len() as u32).satisfies(le(age))
}
Fixture scope
By default a fixture runs once per test that names it: each test gets its own fresh value. For expensive setup that is safe to share, declare module scope, and the body runs once and every test gets a clone:
use test_better::prelude::*;

#[fixture(scope = "module")]
fn shared_config() -> TestResult<String> {
    Ok(String::from("loaded-once"))
}

#[test_with_fixtures]
fn one_test_sees_the_config(shared_config: String) -> TestResult {
    check!(shared_config.as_str()).satisfies(eq("loaded-once"))
}

#[test_with_fixtures]
fn another_test_sees_the_same_config(shared_config: String) -> TestResult {
    check!(shared_config.is_empty()).satisfies(is_false())
}
Use per-test scope (the default) when tests must not see each other’s mutations; use module scope when the value is read-only and the setup is worth doing once.
When a fixture fails
A fixture that returns Err (or whose ? propagates one) makes every test
that depends on it fail with an ErrorKind::Setup error. The failure names the
fixture and preserves the original error’s detail, so the report points at the
broken setup rather than at whatever assertion happened to run first:
use test_better::prelude::*;

#[fixture]
fn broken_db() -> TestResult<i32> {
    Err(TestError::custom("could not connect to the database"))
}
Any #[test_with_fixtures] test taking broken_db fails before its body runs,
and the failure is re-categorized as Setup: it renders “test setup failed”,
names “setting up fixture broken_db”, and still includes the original “could
not connect to the database” detail. In practice a fixture rarely constructs an
error by hand: it propagates a real one with ?, using .context(..) or
.or_fail_with(..) exactly as a test body would. That separation, setup
failure versus assertion failure, is the whole point of the fixture system.
Performance
The short version: check! is slower than assert_eq! per call, by a
single-digit multiple, and it does not matter.
What the benchmark measures
crates/test-better/benches/expect_overhead.rs is a harness = false
benchmark: an ordinary fn main that times two hot loops with
std::time::Instant and prints a table. It compares a passing primitive
assertion written two ways:
- assert_eq!(a, b) and assert!(a < b), the stock macros;
- check!(a).satisfies(eq(b)) and check!(a).satisfies(lt(b)), the
  test-better form.
Run it with cargo bench -p test-better --bench expect_overhead. A typical
run on a developer laptop:
check! overhead vs the stock assert macros (10000000 iters/loop)
matcher assert (ns) expect (ns) ratio
eq 0.57 4.02 7.1x
lt 0.44 3.51 8.0x
The exact numbers move with the machine, but the shape holds: a passing
check! on a primitive matcher costs a few nanoseconds, a single-digit
multiple of the stock macro. That is comfortably within an order of
magnitude of assert_eq!.
Where the overhead comes from
assert_eq! on two u32s compiles down to a compare and a branch. check!
does a little more on the passing path:
- it constructs a Subject wrapping a reference to the value;
- it constructs the matcher (eq(b) is a small value holding b);
- it calls Matcher::check, which returns MatchResult::pass().
None of that allocates. The matcher’s Description, the expected/actual
rendering, the source-location capture: those are built only on the failure
path, which a passing test never takes. So the per-call cost is a few struct
moves and a non-inlined call or two, not heap traffic.
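The measurement shape is easy to reproduce with nothing but std. This sketch times a passing comparison two ways: the stock macro, and a matcher-shaped call that builds a small value and calls through it. It is an illustration of the shape, not the crate's benchmark, and any numbers it prints are machine-dependent:

```rust
use std::time::Instant;

// Times `iters` calls of `f` and returns nanoseconds per call.
fn time_ns(iters: u32, mut f: impl FnMut()) -> f64 {
    let start = Instant::now();
    for _ in 0..iters {
        f();
    }
    start.elapsed().as_nanos() as f64 / f64::from(iters)
}

// Returns (stock-macro ns/call, matcher-shaped ns/call).
fn run(iters: u32) -> (f64, f64) {
    let (a, b) = (21u32 + 21, 42u32);
    // The stock macro: a compare and a branch on the passing path.
    let stock = time_ns(iters, || assert_eq!(a, b));
    // Matcher-shaped: build a small value holding `b`, call through it.
    let matcher_shaped = time_ns(iters, || {
        let expected = b;
        let check = move |actual: &u32| *actual == expected;
        assert!(check(&a));
    });
    (stock, matcher_shaped)
}
```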
Why it does not matter
A few nanoseconds per assertion disappears next to anything a real test does.
Parsing a string, touching the filesystem, allocating a Vec, spawning an
async runtime: each is hundreds to millions of times more expensive than the
gap between assert_eq! and check!. A test suite’s wall time is dominated
by its setup and its I/O, never by the assertion macro.
The one case where assertion cost could be visible is a property test running
the same check! across many thousands of generated inputs. Even there the
matcher call is dwarfed by the strategy’s value generation and shrinking
machinery. If you ever do find an assertion in a genuine hot loop, the fix is
the same as it would be with assert_eq!: hoist it out of the loop, or assert
on the aggregate instead of each element.
test-better buys a great deal at that single-digit-nanosecond price: a
failure that is a value rather than a panic, the expression text, the
expected and actual sides, the source location, and the context chain. The
trade is heavily in your favor.
Recipes
Shorter answers to common questions, each independent of the others.
Assert several things about one value
all_of combines matchers: the value must satisfy every one. any_of is the
or-form. Both take a tuple of matchers:
use test_better::prelude::*;

#[test]
fn the_score_is_in_a_sensible_range() -> TestResult {
    let score = 73_u32;
    check!(score).satisfies(all_of((ge(0), le(100), ne(50))))?;
    Ok(())
}
Keep going after the first failure: soft
A ? on a failed check! returns immediately, so a test stops at its first
failure. When you want to see every failure in one run (checking each field
of a response, say), soft collects them:
#![allow(unused)]
fn main() {
use test_better::prelude::*;
#[test]
fn every_field_is_checked() -> TestResult {
soft(|s| {
s.check(&1, eq(1));
s.check(&"alice", eq("alice"));
s.check(&true, is_true());
})
}
}
soft returns Ok(()) if every soft assertion passed, or a single TestError
that renders all of them, each with its own source location. Inside the
closure, s.check(&value, matcher) is the soft form of check!, and
s.context("...") opens a labeled scope for the assertions that follow.
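The collecting pattern behind soft can be sketched in plain Rust. This is illustrative only: test-better's real soft collector also records each failure's source location and context scope, which this sketch omits.

```rust
/// Minimal failure collector: checks record into a Vec instead of
/// returning early, and `finish` turns the batch into one error.
struct Soft {
    failures: Vec<String>,
}

impl Soft {
    fn new() -> Self {
        Soft { failures: Vec::new() }
    }

    /// The soft analogue of a single assertion: a failed check is
    /// pushed onto the list and execution continues.
    fn check(&mut self, ok: bool, what: &str) {
        if !ok {
            self.failures.push(what.to_string());
        }
    }

    /// Ok(()) if everything passed, otherwise one error that renders
    /// every collected failure.
    fn finish(self) -> Result<(), String> {
        if self.failures.is_empty() {
            Ok(())
        } else {
            Err(self.failures.join("\n"))
        }
    }
}

fn main() {
    let mut s = Soft::new();
    s.check(1 == 1, "1 == 1");
    s.check("alice" == "bob", "name matches");
    s.check(2 + 2 == 5, "arithmetic holds");
    // Both failures are visible in one run; a `?`-style early return
    // would have stopped at the first.
    assert_eq!(
        s.finish(),
        Err("name matches\narithmetic holds".to_string())
    );
}
```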
Match the shape of a struct, tuple, or enum
The structural macros assert on shape without a custom matcher. Each field
position holds a matcher, and .. ignores the rest:
#![allow(unused)]
fn main() {
use test_better::prelude::*;
use test_better::{matches_struct, matches_tuple, matches_variant};
#[derive(Debug)]
struct User { name: String, age: u32, email: String }
#[derive(Debug)]
struct Point(i32, i32);
#[derive(Debug)]
enum Shape { Circle { radius: f64 } }
#[test]
fn structural_matchers() -> TestResult {
let user = User { name: "alice".into(), age: 30, email: "alice@example.com".into() };
check!(user).satisfies(matches_struct!(User {
name: eq(String::from("alice")),
age: gt(18u32),
..
}))?;
check!(Point(3, 4)).satisfies(matches_tuple!(Point(gt(0), lt(100))))?;
check!(Shape::Circle { radius: 2.0 })
.satisfies(matches_variant!(Shape::Circle { radius: gt(0.0) }))?;
Ok(())
}
}
On a failure, the message names the field or position that did not match. The
matchers nest: an inner matches_struct! is just another matcher expression.
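To see why an inner matches_struct! is "just another matcher expression", it helps to picture a purely hypothetical desugaring: each field holds an independent predicate, and the composite reports the field that failed. Plain-Rust sketch, not the macro's real expansion:

```rust
#[derive(Debug)]
struct User {
    name: String,
    age: u32,
}

/// Hypothetical shape of what a structural matcher does: run each
/// field's predicate in turn and name the field that did not match.
fn check_user(user: &User) -> Result<(), String> {
    let checks: [(&str, bool); 2] = [
        ("name", user.name == "alice"), // field matcher: eq("alice")
        ("age", user.age > 18),         // field matcher: gt(18)
        // `..` in the macro corresponds to simply having no entry
        // here for the remaining fields.
    ];
    for (field, ok) in checks {
        if !ok {
            return Err(format!("field `{field}` did not match"));
        }
    }
    Ok(())
}

fn main() {
    let user = User { name: "alice".into(), age: 30 };
    assert_eq!(check_user(&user), Ok(()));

    let minor = User { name: "alice".into(), age: 12 };
    assert_eq!(check_user(&minor), Err("field `age` did not match".into()));
}
```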
Assert on collections
contains takes a matcher and checks that at least one element satisfies it;
every checks that every element does; have_len, is_empty, and is_not_empty
check size. contains_in_order checks for a subsequence:
#![allow(unused)]
fn main() {
use test_better::prelude::*;
#[test]
fn collection_matchers() -> TestResult {
let scores = vec![10, 20, 30, 40];
check!(&scores).satisfies(have_len(4))?;
check!(&scores).satisfies(contains(eq(30)))?;
check!(&scores).satisfies(every(gt(0)))?;
check!(&scores).satisfies(contains_in_order([eq(10), eq(40)]))?;
Ok(())
}
}
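"Subsequence" here means the matchers must be satisfied in order by some elements of the collection, but those elements need not be adjacent: [eq(10), eq(40)] passes against [10, 20, 30, 40]. A standalone sketch of that check, with plain predicates standing in for matchers:

```rust
/// True if each predicate in `pats` is satisfied, in order, by some
/// element of `items`, skipping non-matching elements in between.
fn contains_in_order(items: &[i32], pats: &[fn(&i32) -> bool]) -> bool {
    let mut it = items.iter();
    // For each pattern, advance through the remaining items until one
    // matches; if the items run out first, the subsequence is absent.
    pats.iter().all(|p| it.by_ref().any(|x| p(x)))
}

fn main() {
    let scores = [10, 20, 30, 40];
    assert!(contains_in_order(&scores, &[|x| *x == 10, |x| *x == 40]));
    assert!(contains_in_order(&scores, &[|x| *x == 20, |x| *x == 30]));
    // Order matters: 40 before 10 is not a subsequence of `scores`.
    assert!(!contains_in_order(&scores, &[|x| *x == 40, |x| *x == 10]));
}
```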
Parameterized tests with #[test_case]
#[test_case] turns one function into many generated #[test]s, one per
attribute line. Each line is the argument list, optionally followed by
; "label":
#![allow(unused)]
fn main() {
use test_better::prelude::*;
use test_better::test_case;
#[test_case(2, 2, 4)]
#[test_case(10, 5, 15 ; "bigger numbers")]
fn addition_works(a: i32, b: i32, sum: i32) -> TestResult {
check!(a + b).satisfies(eq(sum))
}
}
The generated tests are gathered into a module named for the function, so the
second case above runs as addition_works::bigger_numbers. Import test_case
by name: it is deliberately kept out of the prelude because the compiler
reserves a built-in #[test_case] attribute of the same name (unstable, part
of custom test frameworks).
Match a string
contains_str, starts_with, and ends_with are the substring matchers; with
the regex feature, matches_regex takes a pattern:
#![allow(unused)]
fn main() {
use test_better::prelude::*;
#[test]
fn string_matchers() -> TestResult {
let greeting = "Hello, alice!";
check!(greeting).satisfies(starts_with("Hello"))?;
check!(greeting).satisfies(contains_str("alice"))?;
check!(greeting).satisfies(ends_with("!"))?;
Ok(())
}
}
The cargo test-better runner
test-better-runner provides an optional cargo-test-better binary: a thin
wrapper around cargo test that groups failures by their context area and
prints a run summary. Install it and run it in place of cargo test:
cargo install test-better-runner
cargo test-better
It is opt-in tooling: your tests do not depend on it, and a plain cargo test
behaves exactly as before. The same crate also provides the cargo test-better
accept subcommand, which applies the pending patches that inline snapshots
record when run with UPDATE_SNAPSHOTS=1.
Control colored output
Failure rendering is colored when the output is a terminal. To force it on or off (in CI logs, or when capturing output for a test), set the color choice:
use test_better::{ColorChoice, set_color_choice};
fn main() {
set_color_choice(ColorChoice::Never);
}
ColorChoice is Always, Never, or Auto; color_choice() reads the
current setting.
Inspect a failure as data
For tooling, TestError::to_structured() produces a StructuredError: an
owned, Clone-able, serde-serializable (behind the serde feature) form of
the failure, with the kind, message, location, context chain, and payload. It
is what the cargo-test-better runner consumes; a test that needs to assert on
the structure of a failure rather than its rendered text can use it directly.
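A sketch of what asserting on that structured form might look like. The field names below mirror the list above (kind, message, location, context chain) but are illustrative, not test-better's exact API:

```rust
/// Illustrative mirror of the structured-failure shape described
/// above: kind, message, location, and context chain as plain data.
#[derive(Debug, Clone, PartialEq)]
struct StructuredError {
    kind: String,
    message: String,
    location: (String, u32), // file, line
    context: Vec<String>,    // outermost scope first
}

fn main() {
    let err = StructuredError {
        kind: "assertion".into(),
        message: "expected 42, got 41".into(),
        location: ("tests/answer.rs".into(), 7),
        context: vec!["computing the answer".into()],
    };

    // A test asserting on failure *structure* rather than rendered
    // text is insulated from changes to the text formatting.
    assert_eq!(err.kind, "assertion");
    assert!(err.message.contains("42"));
    assert_eq!(err.location.0, "tests/answer.rs");
}
```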