Bitcoin Core 28.99.0
P2P Digital Currency
Main entry point to nanobench's benchmarking facility. More...
#include <nanobench.h>
Public Member Functions | |
Bench () | |
Creates a new benchmark for configuration and running of benchmarks. More... | |
Bench (Bench &&other) noexcept | |
Bench & | operator= (Bench &&other) noexcept(ANKERL_NANOBENCH(NOEXCEPT_STRING_MOVE)) |
Bench (Bench const &other) | |
Bench & | operator= (Bench const &other) |
~Bench () noexcept | |
template<typename Op > | |
Bench & | run (char const *benchmarkName, Op &&op) |
Repeatedly calls op() based on the configuration, and performs measurements. More... | |
template<typename Op > | |
Bench & | run (std::string const &benchmarkName, Op &&op) |
template<typename Op > | |
Bench & | run (Op &&op) |
Same as run(char const* benchmarkName, Op op), but instead uses the previously set name. More... | |
Bench & | title (char const *benchmarkTitle) |
Title of the benchmark, will be shown in the table header. More... | |
Bench & | title (std::string const &benchmarkTitle) |
Bench & | name (char const *benchmarkName) |
Name of the benchmark, will be shown in the table row. More... | |
Bench & | name (std::string const &benchmarkName) |
Bench & | context (char const *variableName, char const *variableValue) |
Set context information. More... | |
Bench & | context (std::string const &variableName, std::string const &variableValue) |
Bench & | clearContext () |
Reset context information. More... | |
template<typename T > | |
Bench & | batch (T b) noexcept |
Sets the batch size. More... | |
ANKERL_NANOBENCH (NODISCARD) double batch() const noexcept | |
Bench & | unit (char const *unit) |
Sets the operation unit. More... | |
Bench & | unit (std::string const &unit) |
Bench & | timeUnit (std::chrono::duration< double > const &tu, std::string const &tuName) |
Sets the time unit to be used for the default output. More... | |
Bench & | output (std::ostream *outstream) noexcept |
Set the output stream where the resulting markdown table will be printed to. More... | |
Bench & | clockResolutionMultiple (size_t multiple) noexcept |
Modern processors have a very accurate clock, being able to measure as low as 20 nanoseconds. More... | |
ANKERL_NANOBENCH (NODISCARD) size_t clockResolutionMultiple() const noexcept | |
Bench & | epochs (size_t numEpochs) noexcept |
Controls number of epochs, the number of measurements to perform. More... | |
ANKERL_NANOBENCH (NODISCARD) size_t epochs() const noexcept | |
Bench & | maxEpochTime (std::chrono::nanoseconds t) noexcept |
Upper limit for the runtime of each epoch. More... | |
Bench & | minEpochTime (std::chrono::nanoseconds t) noexcept |
Minimum time each epoch should take. More... | |
Bench & | minEpochIterations (uint64_t numIters) noexcept |
Sets the minimum number of iterations each epoch should take. More... | |
ANKERL_NANOBENCH (NODISCARD) uint64_t minEpochIterations() const noexcept | |
Bench & | epochIterations (uint64_t numIters) noexcept |
Sets exactly the number of iterations for each epoch. More... | |
ANKERL_NANOBENCH (NODISCARD) uint64_t epochIterations() const noexcept | |
Bench & | warmup (uint64_t numWarmupIters) noexcept |
Sets a number of iterations that are initially performed without any measurements. More... | |
ANKERL_NANOBENCH (NODISCARD) uint64_t warmup() const noexcept | |
Bench & | relative (bool isRelativeEnabled) noexcept |
Marks the next run as the baseline. More... | |
ANKERL_NANOBENCH (NODISCARD) bool relative() const noexcept | |
Bench & | performanceCounters (bool showPerformanceCounters) noexcept |
Enables/disables performance counters. More... | |
ANKERL_NANOBENCH (NODISCARD) bool performanceCounters() const noexcept | |
ANKERL_NANOBENCH(NODISCARD) std::vector< Result > const & | results () const noexcept |
Retrieves all benchmark results collected by the bench object so far. More... | |
template<typename T > | |
Bench & | complexityN (T n) noexcept |
ANKERL_NANOBENCH (NODISCARD) double complexityN() const noexcept | |
std::vector< BigO > | complexityBigO () const |
template<typename Op > | |
BigO | complexityBigO (char const *name, Op op) const |
Calculates bigO for a custom function. More... | |
template<typename Op > | |
BigO | complexityBigO (std::string const &name, Op op) const |
Bench & | render (char const *templateContent, std::ostream &os) |
Bench & | render (std::string const &templateContent, std::ostream &os) |
Bench & | config (Config const &benchmarkConfig) |
ANKERL_NANOBENCH (NODISCARD) Config const &config() const noexcept | |
template<typename Arg > | |
Bench & | doNotOptimizeAway (Arg &&arg) |
Private Attributes | |
Config | mConfig {} |
std::vector< Result > | mResults {} |
Main entry point to nanobench's benchmarking facility.
It holds configuration and results from one or more benchmark runs. Usually it is used in a single line, where the object is constructed, configured, and then a benchmark is run. E.g. like this:
ankerl::nanobench::Bench().unit("byte").batch(1000).run("random fluctuations", [&] {
    // here be the benchmark code
});
In that example Bench() constructs the benchmark; it is then configured with unit() and batch(), and after configuration the benchmark is executed with run(). Once run() has finished, it prints the result to std::cout. It also stores the results in the Bench instance, but in this case the object is immediately destroyed, so they are no longer available.
Definition at line 627 of file nanobench.h.
ankerl::nanobench::Bench::Bench | ( | ) |
Creates a new benchmark for configuration and running of benchmarks.
ankerl::nanobench::Bench::Bench | ( | Bench && | other | ) |
noexcept |
ankerl::nanobench::Bench::Bench | ( | Bench const & | other | ) |
ankerl::nanobench::Bench::~Bench | ( | ) |
noexcept |
Bench & ankerl::nanobench::Bench::batch | ( | T | b | ) |
noexcept |
Sets the batch size.
E.g. the number of bytes processed, or some other metric for the size of the processed data in each iteration. If you benchmark hashing of a 1000-byte string and want byte/sec as a result, you can specify 1000 as the batch size.
T | Any input type is internally cast to double . |
b | batch size |
Definition at line 1258 of file nanobench.h.
Bench & ankerl::nanobench::Bench::clearContext | ( | ) |
Reset context information.
This may improve efficiency when using many context entries, or improve robustness by removing spurious context entries.
Bench & ankerl::nanobench::Bench::clockResolutionMultiple | ( | size_t | multiple | ) |
noexcept |
Modern processors have a very accurate clock, being able to measure as low as 20 nanoseconds.
This is the main trick for nanobench to be so fast: we find out how accurate the clock is, then run the benchmark only often enough that the clock's accuracy is good enough for accurate measurements.
The default is to run one epoch for 1000 times the clock resolution. So for 20 ns resolution and 11 epochs, this gives a total runtime of 20 ns * 1000 * 11 = 220 microseconds.
To be precise, nanobench adds a 0-20% random noise to each evaluation. This is to prevent any aliasing effects, and further improves accuracy.
Total runtime will be higher though: Some initial time is needed to find out the target number of iterations for each epoch, and there is some overhead involved to start & stop timers and calculate resulting statistics and writing the output.
multiple | Target number of times of clock resolution. Usually 1000 is a good compromise between runtime and accuracy. |
std::vector< BigO > ankerl::nanobench::Bench::complexityBigO | ( | ) | const |
Calculates Big O of the results with all preconfigured complexity functions. Currently these complexity functions are fitted to the benchmark results:
O(1), O(log n), O(n), O(n log n), O(n^2), O(n^3).
If we e.g. evaluate the complexity of std::sort, this is the result of std::cout << bench.complexityBigO():
So in this case O(n log n) provides the best approximation.
embed:rst See the tutorial :ref:`asymptotic-complexity` for details.
BigO ankerl::nanobench::Bench::complexityBigO | ( | char const * | name, |
Op | op | ||
) | const |
Calculates bigO for a custom function.
E.g. to calculate the mean squared error for O(log log n), which is not part of the default set of complexityBigO(), you can do this:
The resulting mean squared error can be printed with std::cout << logLogN. E.g. it prints something like this:
Op | Type of mapping operation. |
name | Name for the function, e.g. "O(log log n)" |
op | Op's operator() maps a double with the desired complexity function, e.g. log2(log2(n)) . |
Definition at line 1246 of file nanobench.h.
BigO ankerl::nanobench::Bench::complexityBigO | ( | std::string const & | name, |
Op | op | ||
) | const |
Bench & ankerl::nanobench::Bench::complexityN | ( | T | n | ) |
noexcept |
embed:rst Sets N for asymptotic complexity calculation, so it becomes possible to calculate `Big O <https://en.wikipedia.org/wiki/Big_O_notation>`_ from multiple benchmark evaluations. Use :cpp:func:`ankerl::nanobench::Bench::complexityBigO` when the evaluation has finished. See the tutorial :ref:`asymptotic-complexity` for details.
T | Any type is cast to double . |
n | Length of N for the next benchmark run, so it is possible to calculate bigO . |
Definition at line 1265 of file nanobench.h.
Bench & ankerl::nanobench::Bench::context | ( | char const * | variableName, |
char const * | variableValue | ||
) |
Set context information.
The information can be accessed using custom render templates via {{context(variableName)}}. Trying to render a variable that hasn't been set before raises an exception. It is not included in the (default) markdown table.
variableName | The name of the context variable. |
variableValue | The value of the context variable. |
Bench & ankerl::nanobench::Bench::context | ( | std::string const & | variableName, |
std::string const & | variableValue | ||
) |
std::vector< Result > const & ankerl::nanobench::Bench::results | ( | ) | const noexcept
Retrieves all benchmark results collected by the bench object so far.
Each call to run() generates a Result that is stored within the Bench instance. This is mostly for advanced users who want to see all the nitty-gritty details.
Bench & ankerl::nanobench::Bench::doNotOptimizeAway | ( | Arg && | arg | ) |
embed:rst Convenience shortcut to :cpp:func:`ankerl::nanobench::doNotOptimizeAway`.
Bench & ankerl::nanobench::Bench::doNotOptimizeAway | ( | Arg && | arg | ) |
Bench & ankerl::nanobench::Bench::epochIterations | ( | uint64_t | numIters | ) |
noexcept |
Sets exactly the number of iterations for each epoch.
Ignores all other epoch limits. This forces nanobench to use exactly the given number of iterations for each epoch, not more and not less. Default is 0 (disabled).
numIters | Exact number of iterations to use. Set to 0 to disable. |
Bench & ankerl::nanobench::Bench::epochs | ( | size_t | numEpochs | ) |
noexcept |
Controls number of epochs, the number of measurements to perform.
The reported result will be the median of the evaluations of each epoch. The higher you choose this, the more deterministic the result will be, and outliers will be more easily removed. The err% will also be more accurate the higher this number is. Note that the err% will not necessarily decrease when the number of epochs is increased. But it will be a more accurate representation of the benchmarked code's runtime stability.
Choose the value wisely. In practice, 11 has been shown to be a reasonable choice between runtime performance and accuracy. This setting goes hand in hand with minEpochIterations() (or minEpochTime()). If you are more interested in median runtime, you might want to increase epochs(). If you are more interested in mean runtime, you might want to increase minEpochIterations() instead.
numEpochs | Number of epochs. |
Bench & ankerl::nanobench::Bench::maxEpochTime | ( | std::chrono::nanoseconds | t | ) |
noexcept |
Upper limit for the runtime of each epoch.
As a safety precaution if the clock is not very accurate, we can set an upper limit for the maximum evaluation time per epoch. Default is 100ms. At least a single evaluation of the benchmark is performed.
t | Maximum target runtime for a single epoch. |
Bench & ankerl::nanobench::Bench::minEpochIterations | ( | uint64_t | numIters | ) |
noexcept |
Sets the minimum number of iterations each epoch should take.
Default is 1, and we rely on clockResolutionMultiple(). If the err% is high and you want a smoother result, you might want to increase the minimum number of iterations, or increase minEpochTime().
numIters | Minimum number of iterations per epoch. |
Bench & ankerl::nanobench::Bench::minEpochTime | ( | std::chrono::nanoseconds | t | ) |
noexcept |
Minimum time each epoch should take.
Default is zero, so we fully rely on clockResolutionMultiple(). In most cases this is exactly what you want. If you see that the evaluation is unreliable with a high err%, you can increase either minEpochTime() or minEpochIterations().
t | Minimum time each epoch should take. |
Bench & ankerl::nanobench::Bench::name | ( | char const * | benchmarkName | ) |
Name of the benchmark, will be shown in the table row.
Bench & ankerl::nanobench::Bench::name | ( | std::string const & | benchmarkName | ) |
Bench & ankerl::nanobench::Bench::output | ( | std::ostream * | outstream | ) |
noexcept |
Set the output stream where the resulting markdown table will be printed to.
The default is &std::cout. You can disable all output by setting nullptr.
outstream | Pointer to output stream, can be nullptr . |
Bench & ankerl::nanobench::Bench::performanceCounters | ( | bool | showPerformanceCounters | ) |
noexcept |
Enables/disables performance counters.
On Linux, nanobench has a powerful feature to use performance counters. This enables counting of retired instructions, number of branches, missed branches, etc. By default this is enabled, but you can disable it if you don't need that feature.
showPerformanceCounters | True to enable, false to disable. |
Bench & ankerl::nanobench::Bench::relative | ( | bool | isRelativeEnabled | ) |
noexcept |
Marks the next run as the baseline.
Call relative(true) to mark the run as the baseline. Successive runs will be compared to this run, and the output table gains a column showing each run's speed relative to that baseline.
See the tutorial section "Comparing Results" for example usage.
isRelativeEnabled | True to enable processing |
Bench & ankerl::nanobench::Bench::render | ( | char const * | templateContent, |
std::ostream & | os | ||
) |
embed:rst Convenience shortcut to :cpp:func:`ankerl::nanobench::render`.
Bench & ankerl::nanobench::Bench::render | ( | std::string const & | templateContent, |
std::ostream & | os | ||
) |
Bench & ankerl::nanobench::Bench::run | ( | char const * | benchmarkName, |
Op && | op | ||
) |
Repeatedly calls op() based on the configuration, and performs measurements.
This call is marked noinline to prevent the compiler from optimizing across different benchmarks. This can have quite a big effect on benchmark accuracy.
embed:rst .. note:: Each call to your lambda must have a side effect that the compiler can't possibly optimize it away. E.g. add a result to an externally defined number (like `x` in the above example), and finally call `doNotOptimizeAway` on the variables the compiler must not remove. You can also use :cpp:func:`ankerl::nanobench::doNotOptimizeAway` directly in the lambda, but be aware that this has a small overhead.
Op | The code to benchmark. |
Definition at line 1234 of file nanobench.h.
Bench & ankerl::nanobench::Bench::run | ( | Op && | op | ) |
Same as run(char const* benchmarkName, Op op), but instead uses the previously set name.
Op | The code to benchmark. |
Definition at line 1212 of file nanobench.h.
Bench & ankerl::nanobench::Bench::run | ( | std::string const & | benchmarkName, |
Op && | op | ||
) |
Definition at line 1240 of file nanobench.h.
Bench & ankerl::nanobench::Bench::timeUnit | ( | std::chrono::duration< double > const & | tu, |
std::string const & | tuName | ||
) |
Sets the time unit to be used for the default output.
Nanobench defaults to using ns (nanoseconds) as output in the markdown. For some benchmarks this is too coarse, so it is possible to configure this. E.g. use timeUnit(1ms, "ms") to show ms/op instead of ns/op.
tu | Time unit to display the results in, default is 1ns. |
tuName | Name for the time unit, default is "ns" |
Bench & ankerl::nanobench::Bench::title | ( | char const * | benchmarkTitle | ) |
Title of the benchmark, will be shown in the table header.
Changing the title will start a new markdown table.
benchmarkTitle | The title of the benchmark. |
Bench & ankerl::nanobench::Bench::title | ( | std::string const & | benchmarkTitle | ) |
Bench & ankerl::nanobench::Bench::unit | ( | char const * | unit | ) |
Sets the operation unit.
Defaults to "op". Could be e.g. "byte" for string processing. This is used for the table header, e.g. to show ns/byte
. Use singular (byte, not bytes). A change clears the currently collected results.
unit | The unit name. |
Bench & ankerl::nanobench::Bench::unit | ( | std::string const & | unit | ) |
Bench & ankerl::nanobench::Bench::warmup | ( | uint64_t | numWarmupIters | ) |
noexcept |
Sets a number of iterations that are initially performed without any measurements.
Some benchmarks need a few evaluations to warm up caches / database / whatever access. Normally this should not be needed, since we show the median result so initial outliers will be filtered away automatically. If the warmup effect is large though, you might want to set it. Default is 0.
numWarmupIters | Number of warmup iterations. |
Config ankerl::nanobench::Bench::mConfig {}
private |
Definition at line 1011 of file nanobench.h.
std::vector< Result > ankerl::nanobench::Bench::mResults {}
private |
Definition at line 1012 of file nanobench.h.