24.8 Benchmarking
Benchmarking measures the execution speed (latency) or throughput of code snippets. It complements testing by tracking performance characteristics, helping to identify regressions, and validating optimizations. Systems programming often requires careful performance management, making benchmarking a valuable tool.
Rust offers several approaches to benchmarking:
- Built-in Benchmark Harness: A basic harness available only on the nightly Rust toolchain.
- Dedicated Crates: Third-party libraries such as `criterion` and `divan` that work on stable Rust and offer more advanced features, statistical analysis, and reporting.

For most benchmarking needs, dedicated crates are preferred due to their stability and richer feature sets.
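Before reaching for a harness, a rough first measurement can be taken by hand with `std::time::Instant`. The sketch below is an illustrative, std-only example (the `sum_to` workload and the iteration count are made up for demonstration); it reports both average latency and throughput:

```rust
use std::hint::black_box; // stable optimization barrier (Rust 1.66+)
use std::time::Instant;

// Illustrative workload: sum the integers below `n`.
fn sum_to(n: u64) -> u64 {
    (0..n).sum()
}

fn main() {
    let iterations: u32 = 10_000;
    let start = Instant::now();
    for _ in 0..iterations {
        // black_box keeps the compiler from optimizing the work away.
        black_box(sum_to(black_box(1_000)));
    }
    let elapsed = start.elapsed();
    // Average latency per call, and calls per second (throughput).
    println!("avg latency: {:?}", elapsed / iterations);
    println!(
        "throughput: {:.0} calls/sec",
        f64::from(iterations) / elapsed.as_secs_f64()
    );
}
```

Manual timing like this ignores warm-up, outliers, and timer resolution, which is exactly what the harnesses below handle.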
24.8.1 Built-in Benchmarks (Nightly Rust Only)
If you are using the nightly Rust compiler, you can use the language’s built-in, unstable benchmarking support. This can be useful for very simple benchmarks without adding external dependencies.
- Enable Feature and Import: Add `#![feature(test)]` to your crate root (usually `src/lib.rs` or `src/main.rs`) and import the `test` crate.
- Write Benchmark Functions: Benchmark functions are typically placed within a `#[cfg(test)]` module, similar to unit tests. They are marked with the `#[bench]` attribute and take a mutable reference to `test::Bencher`.
- Use `Bencher::iter`: Inside the benchmark function, the code to be measured is passed as a closure to `b.iter(|| ...)`.
Example:
```rust
// In src/lib.rs or src/main.rs
#![feature(test)] // Required for built-in benchmarks

// `extern crate test;` brings in the `test` crate provided by the compiler.
// It is needed because `test` is not a regular dependency in Cargo.toml.
extern crate test;

pub fn expensive_calculation(input: u32) -> u32 {
    // A simple placeholder for a function to benchmark
    (0..input).fold(0, |acc, x| acc.wrapping_add(x))
}

#[cfg(test)]
mod benchmarks {
    use super::*;
    use test::Bencher; // Import the Bencher type

    #[bench]
    fn bench_expensive_calculation(b: &mut Bencher) {
        // The iter method runs the closure multiple times and measures its execution.
        b.iter(|| {
            // Use test::black_box to prevent the compiler from optimizing away
            // the code being benchmarked if its result isn't used.
            expensive_calculation(test::black_box(1000))
        });
    }
}
```
Running Built-in Benchmarks:
Use the `cargo bench` command. It compiles your code in the test configuration (enabling `#[cfg(test)]`) and runs the functions annotated with `#[bench]`.

```shell
cargo bench
```

Output is printed to the console, showing the time taken per iteration.
Note: The built-in benchmark harness is very basic. It lacks the statistical rigor and advanced features found in dedicated crates. Compiler optimizations can also heavily affect benchmark results; using `test::black_box` around inputs to and outputs from benchmarked code is crucial to prevent the compiler from optimizing away the work. While the harness is available on nightly, for comprehensive analysis consider using `criterion` or `divan`.
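If you want the same guard on stable Rust, `std::hint::black_box` (stabilized in Rust 1.66) provides an equivalent optimization barrier. A minimal sketch reusing the placeholder function from the example above:

```rust
use std::hint::black_box;

// Placeholder workload, as in the example above.
pub fn expensive_calculation(input: u32) -> u32 {
    (0..input).fold(0, |acc, x| acc.wrapping_add(x))
}

fn main() {
    // Shield both the input and the result: a constant argument and an
    // unused return value are exactly what lets the optimizer delete the call.
    let result = black_box(expensive_calculation(black_box(1000)));
    println!("result: {result}");
}
```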
24.8.2 Benchmarking with criterion (Stable Rust)
`criterion` is a powerful, statistics-driven benchmarking library for stable Rust. It performs multiple runs, analyzes results statistically to mitigate environmental noise, detects performance changes over time, and can generate detailed HTML reports.
- Add Dependency and Configure Harness: Add `criterion` to your `[dev-dependencies]` in `Cargo.toml`. You also need to configure Cargo to use `criterion`'s harness for benchmark targets.

  ```toml
  # Cargo.toml
  [dev-dependencies]
  criterion = { version = "0.5", features = ["html_reports"] } # Check for the latest version

  # Tell Cargo to use criterion's test harness for benchmarks.
  # 'main_bench' corresponds to the benchmark file benches/main_bench.rs
  [[bench]]
  name = "main_bench" # This is the name of your benchmark target
  harness = false     # Disables the default libtest harness
  ```
- Create Benchmark File: Create a file in the `benches` directory at the root of your project (e.g., `benches/main_bench.rs`).

  ```rust
  // benches/main_bench.rs
  use criterion::{black_box, criterion_group, criterion_main, Criterion, BenchmarkId};

  // Example function to benchmark (could be imported from your library)
  fn fibonacci(n: u64) -> u64 {
      match n {
          0 => 0,
          1 => 1,
          n => fibonacci(n - 1) + fibonacci(n - 2),
      }
  }

  fn fibonacci_benchmarks(c: &mut Criterion) {
      // Benchmark fibonacci(10).
      // "fib 10" is a unique string ID for this specific benchmark case.
      // This ID is used in reports and when comparing performance over time.
      c.bench_function("fib 10", |b| b.iter(|| fibonacci(black_box(10))));

      // Benchmark fibonacci(20) with a different ID
      c.bench_function("fib 20", |b| b.iter(|| fibonacci(black_box(20))));

      // You can also benchmark with varying inputs using a group
      let mut group = c.benchmark_group("Fibonacci Numbers");
      for i in [5u64, 10u64, 15u64].iter() {
          // BenchmarkId creates a unique ID for each parameter value.
          // It takes a function name and the parameter value.
          group.bench_with_input(BenchmarkId::new("Recursive", i), i, |b, i_val| {
              b.iter(|| fibonacci(black_box(*i_val)))
          });
      }
      group.finish();
  }

  // The criterion_group! macro defines a benchmark group.
  // The first argument `benches` is the name of the group suite.
  // Subsequent arguments are the benchmark functions to include in this suite.
  criterion_group!(benches, fibonacci_benchmarks);

  // The criterion_main! macro generates the main function necessary
  // to run all benchmark group suites defined by criterion_group!.
  criterion_main!(benches);
  ```
- `criterion::black_box`: A function that acts as an opaque barrier to compiler optimizations, ensuring the benchmarked code is actually executed.
- `Criterion::bench_function("ID", ...)`: Defines a single benchmark case. The first argument is a string identifier for this benchmark.
- `Criterion::benchmark_group("Group Name")`: Allows grouping related benchmarks and comparing different functions or parameters side by side.
- `Bencher::iter`: Runs the provided closure multiple times to gather timing statistics.
- Run Benchmarks: Execute `cargo bench`.

  ```shell
  cargo bench
  ```

`criterion` saves results and generates detailed HTML reports, typically found in `target/criterion/report/index.html`. These reports include plots and statistical analysis, making it easier to understand performance characteristics and regressions.
24.8.3 Benchmarking with divan (Stable Rust)
`divan` is a newer benchmarking library (requiring Rust 1.75 or later as of divan 0.1.x) focused on simplicity, low overhead, and ergonomic features such as attribute-based benchmark registration and parameterization.
- Add Dependency and Configure Harness: Add `divan` to your `[dev-dependencies]` in `Cargo.toml` and configure the benchmark harness.

  ```toml
  # Cargo.toml
  [dev-dependencies]
  divan = "0.1" # Check for the latest version

  [[bench]]
  name = "app_benchmarks" # Corresponds to benches/app_benchmarks.rs
  harness = false
  ```
- Create Benchmark File: Create a file in the `benches` directory (e.g., `benches/app_benchmarks.rs`).

  ```rust
  // benches/app_benchmarks.rs

  // Example function to benchmark (could be imported from your library)
  fn fibonacci_divan(n: u32) -> u64 {
      if n <= 1 {
          n as u64
      } else {
          fibonacci_divan(n - 1) + fibonacci_divan(n - 2)
      }
  }

  fn main() {
      // Run all benchmarks registered in this crate (binary).
      divan::main();
  }

  // Simple benchmark for a fixed input.
  // The function itself becomes the benchmark.
  #[divan::bench]
  fn fib_10() -> u64 {
      fibonacci_divan(divan::black_box(10))
  }

  // Parameterized benchmark: runs for each value in `args`.
  // Divan automatically handles `black_box` for arguments in many cases.
  #[divan::bench(args = [5, 10, 15])]
  fn fib_params(n: u32) -> u64 {
      fibonacci_divan(n)
  }
  ```
- `divan::main()`: Initializes and runs all benchmarks registered with `#[divan::bench]` in the current crate.
- `#[divan::bench]`: Attribute macro that marks a function as a benchmark.
- `args = [...]`: An option for `#[divan::bench]` that provides a list of input values for parameterized benchmarks.
- `divan::black_box` is available if explicit control over optimization prevention is needed, though `divan` often applies such measures implicitly for arguments.
- Run Benchmarks: Execute `cargo bench`.

  ```shell
  cargo bench
  ```

`divan` outputs benchmark results directly to the console. For more advanced features and configuration options, consult the divan documentation.
Choosing between `criterion` and `divan` depends on your specific needs: `criterion` is known for its in-depth statistical analysis and historical trend reporting, making it excellent for tracking performance over a project's lifetime. `divan` offers a more lightweight and arguably more ergonomic experience for defining and running benchmarks quickly, with good support for parameterization. Both are excellent choices for benchmarking on stable Rust.