Make warmup_time be configurable #1844

Merged 1 commit on Feb 1, 2020
docs/command-line.md: 10 additions & 0 deletions
@@ -24,6 +24,7 @@
[Specify the number of resamples for bootstrapping](#specify-the-number-of-resamples-for-bootstrapping)<br>
[Specify the confidence-interval for bootstrapping](#specify-the-confidence-interval-for-bootstrapping)<br>
[Disable statistical analysis of collected benchmark samples](#disable-statistical-analysis-of-collected-benchmark-samples)<br>
[Specify the amount of time in milliseconds spent on warming up each test](#specify-the-amount-of-time-in-milliseconds-spent-on-warming-up-each-test)<br>
[Usage](#usage)<br>
[Specify the section to run](#specify-the-section-to-run)<br>
[Filenames as tags](#filenames-as-tags)<br>
@@ -64,6 +65,7 @@ Click one of the following links to take you straight to that option - or scroll
<a href="#benchmark-resamples"> ` --benchmark-resamples`</a><br />
<a href="#benchmark-confidence-interval"> ` --benchmark-confidence-interval`</a><br />
<a href="#benchmark-no-analysis"> ` --benchmark-no-analysis`</a><br />
<a href="#benchmark-warmup-time"> ` --benchmark-warmup-time`</a><br />
<a href="#use-colour"> ` --use-colour`</a><br />

</br>
@@ -317,6 +319,14 @@ Must be between 0 and 1 and defaults to 0.95.
When this flag is specified no bootstrapping or any other statistical analysis is performed.
Instead the user code is only measured and the plain mean from the samples is reported.

<a id="benchmark-warmup-time"></a>
## Specify the amount of time in milliseconds spent on warming up each test
<pre>--benchmark-warmup-time</pre>

> [Introduced](https://github.com/catchorg/Catch2/pull/1844) in Catch X.Y.Z.

Configure the amount of time, in milliseconds, spent warming up each test. Defaults to 100 milliseconds.
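
For example, an illustrative invocation (`./tests` is just a placeholder for your test binary) that raises the warmup period to 500 milliseconds per test:

<pre>./tests --benchmark-warmup-time=500</pre>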

Review comment (Member): This should get an "introduced in" placeholder.

<a id="usage"></a>
## Usage
<pre>-h, -?, --help</pre>
docs/configuration.md: 13 additions & 0 deletions
@@ -16,6 +16,7 @@
[Windows header clutter](#windows-header-clutter)<br>
[Enabling stringification](#enabling-stringification)<br>
[Disabling exceptions](#disabling-exceptions)<br>
[Overriding Catch's debug break (`-b`)](#overriding-catchs-debug-break--b)<br>

Catch is designed to "just work" as much as possible. For most people the only configuration needed is telling Catch which source file should host all the implementation code (```CATCH_CONFIG_MAIN```).

@@ -257,6 +258,18 @@ namespace Catch {
}
```

## Overriding Catch's debug break (`-b`)

> [Introduced](https://github.com/catchorg/Catch2/pull/1846) in Catch X.Y.Z.

You can override Catch2's break-into-debugger code by defining the
`CATCH_BREAK_INTO_DEBUGGER()` macro. This is useful if, for example, Catch2
does not know your platform, or your platform is misdetected.

The macro will be used as-is; that is, `CATCH_BREAK_INTO_DEBUGGER();`
must compile and must break into the debugger.
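
As an illustration, here is a minimal sketch of such an override that routes the break into `std::raise(SIGTRAP)`. It assumes a POSIX-like platform (where `SIGTRAP` is available) and a single-header `catch.hpp` include, so adapt both to your own setup:

```cpp
// Sketch only: std::raise(SIGTRAP) stands in for whatever trap
// primitive your platform provides.
// The override must be defined before Catch2's headers are included,
// otherwise the default definition is picked up instead.
#include <csignal>

#define CATCH_BREAK_INTO_DEBUGGER() std::raise(SIGTRAP)

#define CATCH_CONFIG_MAIN
#include "catch.hpp"
```

With this in place, running the test binary with `-b` traps via `SIGTRAP` on a failed assertion instead of using Catch2's built-in mechanism.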


---

[Home](Readme.md#top)
include/internal/benchmark/catch_benchmark.hpp: 2 additions & 2 deletions
@@ -44,10 +44,10 @@ namespace Catch {
template <typename Clock>
ExecutionPlan<FloatDuration<Clock>> prepare(const IConfig &cfg, Environment<FloatDuration<Clock>> env) const {
auto min_time = env.clock_resolution.mean * Detail::minimum_ticks;
auto run_time = std::max(min_time, std::chrono::duration_cast<decltype(min_time)>(Detail::warmup_time));
auto run_time = std::max(min_time, std::chrono::duration_cast<decltype(min_time)>(cfg.benchmarkWarmupTime()));
auto&& test = Detail::run_for_at_least<Clock>(std::chrono::duration_cast<ClockDuration<Clock>>(run_time), 1, fun);
int new_iters = static_cast<int>(std::ceil(min_time * test.iterations / test.elapsed));
return { new_iters, test.elapsed / test.iterations * new_iters * cfg.benchmarkSamples(), fun, std::chrono::duration_cast<FloatDuration<Clock>>(Detail::warmup_time), Detail::warmup_iterations };
return { new_iters, test.elapsed / test.iterations * new_iters * cfg.benchmarkSamples(), fun, std::chrono::duration_cast<FloatDuration<Clock>>(cfg.benchmarkWarmupTime()), Detail::warmup_iterations };
}

template <typename Clock = default_clock>
include/internal/catch_commandline.cpp: 3 additions & 0 deletions
@@ -213,6 +213,9 @@ namespace Catch {
| Opt( config.benchmarkNoAnalysis )
["--benchmark-no-analysis"]
( "perform only measurements; do not perform any analysis" )
| Opt( config.benchmarkWarmupTime, "benchmarkWarmupTime" )
["--benchmark-warmup-time"]
( "amount of time in milliseconds spent on warming up each test (default: 100)" )
| Arg( config.testsOrTags, "test name|pattern|tags" )
( "which test or tests to use" );

include/internal/catch_config.cpp: 5 additions & 4 deletions
@@ -72,10 +72,11 @@ namespace Catch {
bool Config::showInvisibles() const { return m_data.showInvisibles; }
Verbosity Config::verbosity() const { return m_data.verbosity; }

bool Config::benchmarkNoAnalysis() const { return m_data.benchmarkNoAnalysis; }
int Config::benchmarkSamples() const { return m_data.benchmarkSamples; }
double Config::benchmarkConfidenceInterval() const { return m_data.benchmarkConfidenceInterval; }
unsigned int Config::benchmarkResamples() const { return m_data.benchmarkResamples; }
bool Config::benchmarkNoAnalysis() const { return m_data.benchmarkNoAnalysis; }
int Config::benchmarkSamples() const { return m_data.benchmarkSamples; }
double Config::benchmarkConfidenceInterval() const { return m_data.benchmarkConfidenceInterval; }
unsigned int Config::benchmarkResamples() const { return m_data.benchmarkResamples; }
std::chrono::milliseconds Config::benchmarkWarmupTime() const { return std::chrono::milliseconds(m_data.benchmarkWarmupTime); }

IStream const* Config::openStream() {
return Catch::makeStream(m_data.outputFilename);
include/internal/catch_config.hpp: 2 additions & 0 deletions
@@ -47,6 +47,7 @@ namespace Catch {
unsigned int benchmarkSamples = 100;
double benchmarkConfidenceInterval = 0.95;
unsigned int benchmarkResamples = 100000;
std::chrono::milliseconds::rep benchmarkWarmupTime = 100;

Verbosity verbosity = Verbosity::Normal;
WarnAbout::What warnings = WarnAbout::Nothing;
@@ -113,6 +114,7 @@ namespace Catch {
int benchmarkSamples() const override;
double benchmarkConfidenceInterval() const override;
unsigned int benchmarkResamples() const override;
std::chrono::milliseconds benchmarkWarmupTime() const override;

private:

include/internal/catch_debugger.h: 6 additions & 4 deletions
@@ -48,10 +48,12 @@ namespace Catch {
#define CATCH_TRAP() DebugBreak()
#endif

#ifdef CATCH_TRAP
#define CATCH_BREAK_INTO_DEBUGGER() []{ if( Catch::isDebuggerActive() ) { CATCH_TRAP(); } }()
#else
#define CATCH_BREAK_INTO_DEBUGGER() []{}()
#ifndef CATCH_BREAK_INTO_DEBUGGER
#ifdef CATCH_TRAP
#define CATCH_BREAK_INTO_DEBUGGER() []{ if( Catch::isDebuggerActive() ) { CATCH_TRAP(); } }()
#else
#define CATCH_BREAK_INTO_DEBUGGER() []{}()
#endif
#endif

#endif // TWOBLUECUBES_CATCH_DEBUGGER_H_INCLUDED
include/internal/catch_interfaces_config.h: 2 additions & 0 deletions
@@ -11,6 +11,7 @@
#include "catch_common.h"
#include "catch_option.hpp"

#include <chrono>
#include <iosfwd>
#include <string>
#include <vector>
@@ -81,6 +82,7 @@ namespace Catch {
virtual int benchmarkSamples() const = 0;
virtual double benchmarkConfidenceInterval() const = 0;
virtual unsigned int benchmarkResamples() const = 0;
virtual std::chrono::milliseconds benchmarkWarmupTime() const = 0;
};

using IConfigPtr = std::shared_ptr<IConfig const>;
projects/SelfTest/Baselines/compact.sw.approved.txt: 2 additions & 0 deletions
@@ -1116,6 +1116,8 @@ CmdLine.tests.cpp:<line number>: passed: cli.parse({ "test", "--benchmark-confid
CmdLine.tests.cpp:<line number>: passed: config.benchmarkConfidenceInterval == Catch::Detail::Approx(0.99) for: 0.99 == Approx( 0.99 )
CmdLine.tests.cpp:<line number>: passed: cli.parse({ "test", "--benchmark-no-analysis" }) for: {?}
CmdLine.tests.cpp:<line number>: passed: config.benchmarkNoAnalysis for: true
CmdLine.tests.cpp:<line number>: passed: cli.parse({ "test", "--benchmark-warmup-time=10" }) for: {?}
CmdLine.tests.cpp:<line number>: passed: config.benchmarkWarmupTime == 10 for: 10 == 10
Misc.tests.cpp:<line number>: passed: std::tuple_size<TestType>::value >= 1 for: 3 >= 1
Misc.tests.cpp:<line number>: passed: std::tuple_size<TestType>::value >= 1 for: 2 >= 1
Misc.tests.cpp:<line number>: passed: std::tuple_size<TestType>::value >= 1 for: 1 >= 1
projects/SelfTest/Baselines/console.std.approved.txt: 1 addition & 1 deletion
@@ -1381,5 +1381,5 @@ due to unexpected exception with message:

===============================================================================
test cases: 305 | 231 passed | 70 failed | 4 failed as expected
assertions: 1662 | 1510 passed | 131 failed | 21 failed as expected
assertions: 1664 | 1512 passed | 131 failed | 21 failed as expected

projects/SelfTest/Baselines/console.sw.approved.txt: 21 additions & 3 deletions
@@ -8046,7 +8046,7 @@ with expansion:
-------------------------------------------------------------------------------
Process can be configured on command line
Benchmark options
resamples
confidence-interval
-------------------------------------------------------------------------------
CmdLine.tests.cpp:<line number>
...............................................................................
@@ -8064,7 +8064,7 @@ with expansion:
-------------------------------------------------------------------------------
Process can be configured on command line
Benchmark options
resamples
no-analysis
-------------------------------------------------------------------------------
CmdLine.tests.cpp:<line number>
...............................................................................
@@ -8079,6 +8079,24 @@ CmdLine.tests.cpp:<line number>: PASSED:
with expansion:
true

-------------------------------------------------------------------------------
Process can be configured on command line
Benchmark options
warmup-time
-------------------------------------------------------------------------------
CmdLine.tests.cpp:<line number>
...............................................................................

CmdLine.tests.cpp:<line number>: PASSED:
CHECK( cli.parse({ "test", "--benchmark-warmup-time=10" }) )
with expansion:
{?}

CmdLine.tests.cpp:<line number>: PASSED:
REQUIRE( config.benchmarkWarmupTime == 10 )
with expansion:
10 == 10

-------------------------------------------------------------------------------
Product with differing arities - std::tuple<int, double, float>
-------------------------------------------------------------------------------
@@ -13285,5 +13303,5 @@ Misc.tests.cpp:<line number>: PASSED:

===============================================================================
test cases: 305 | 215 passed | 86 failed | 4 failed as expected
assertions: 1679 | 1510 passed | 148 failed | 21 failed as expected
assertions: 1681 | 1512 passed | 148 failed | 21 failed as expected

projects/SelfTest/Baselines/junit.sw.approved.txt: 4 additions & 3 deletions
@@ -1,7 +1,7 @@
<?xml version="1.0" encoding="UTF-8"?>
<testsuitesloose text artifact
>
<testsuite name="<exe-name>" errors="17" failures="132" tests="1680" hostname="tbd" time="{duration}" timestamp="{iso8601-timestamp}">
<testsuite name="<exe-name>" errors="17" failures="132" tests="1682" hostname="tbd" time="{duration}" timestamp="{iso8601-timestamp}">
<properties>
<property name="filters" value="~[!nonportable]~[!benchmark]~[approvals]"/>
<property name="random-seed" value="1"/>
@@ -1015,8 +1015,9 @@ Message.tests.cpp:<line number>
<testcase classname="<exe-name>.global" name="Process can be configured on command line/use-colour/error" time="{duration}"/>
<testcase classname="<exe-name>.global" name="Process can be configured on command line/Benchmark options/samples" time="{duration}"/>
<testcase classname="<exe-name>.global" name="Process can be configured on command line/Benchmark options/resamples" time="{duration}"/>
<testcase classname="<exe-name>.global" name="Process can be configured on command line/Benchmark options/resamples" time="{duration}"/>
<testcase classname="<exe-name>.global" name="Process can be configured on command line/Benchmark options/resamples" time="{duration}"/>
<testcase classname="<exe-name>.global" name="Process can be configured on command line/Benchmark options/confidence-interval" time="{duration}"/>
<testcase classname="<exe-name>.global" name="Process can be configured on command line/Benchmark options/no-analysis" time="{duration}"/>
<testcase classname="<exe-name>.global" name="Process can be configured on command line/Benchmark options/warmup-time" time="{duration}"/>
<testcase classname="<exe-name>.global" name="Product with differing arities - std::tuple&lt;int, double, float>" time="{duration}"/>
<testcase classname="<exe-name>.global" name="Product with differing arities - std::tuple&lt;int, double>" time="{duration}"/>
<testcase classname="<exe-name>.global" name="Product with differing arities - std::tuple&lt;int>" time="{duration}"/>
projects/SelfTest/Baselines/sonarqube.sw.approved.txt: 3 additions & 2 deletions
@@ -64,8 +64,9 @@
<testCase name="Process can be configured on command line/use-colour/error" duration="{duration}"/>
<testCase name="Process can be configured on command line/Benchmark options/samples" duration="{duration}"/>
<testCase name="Process can be configured on command line/Benchmark options/resamples" duration="{duration}"/>
<testCase name="Process can be configured on command line/Benchmark options/resamples" duration="{duration}"/>
<testCase name="Process can be configured on command line/Benchmark options/resamples" duration="{duration}"/>
<testCase name="Process can be configured on command line/Benchmark options/confidence-interval" duration="{duration}"/>
<testCase name="Process can be configured on command line/Benchmark options/no-analysis" duration="{duration}"/>
<testCase name="Process can be configured on command line/Benchmark options/warmup-time" duration="{duration}"/>
<testCase name="Test with special, characters &quot;in name" duration="{duration}"/>
</file>
<file path="projects/<exe-name>/IntrospectiveTests/GeneratorsImpl.tests.cpp">
projects/SelfTest/Baselines/xml.sw.approved.txt: 26 additions & 4 deletions
@@ -10140,7 +10140,7 @@ Nor would this
<OverallResults successes="2" failures="0" expectedFailures="0"/>
</Section>
<Section name="Benchmark options" filename="projects/<exe-name>/IntrospectiveTests/CmdLine.tests.cpp" >
<Section name="resamples" filename="projects/<exe-name>/IntrospectiveTests/CmdLine.tests.cpp" >
<Section name="confidence-interval" filename="projects/<exe-name>/IntrospectiveTests/CmdLine.tests.cpp" >
<Expression success="true" type="CHECK" filename="projects/<exe-name>/IntrospectiveTests/CmdLine.tests.cpp" >
<Original>
cli.parse({ "test", "--benchmark-confidence-interval=0.99" })
@@ -10162,7 +10162,7 @@ Nor would this
<OverallResults successes="2" failures="0" expectedFailures="0"/>
</Section>
<Section name="Benchmark options" filename="projects/<exe-name>/IntrospectiveTests/CmdLine.tests.cpp" >
<Section name="resamples" filename="projects/<exe-name>/IntrospectiveTests/CmdLine.tests.cpp" >
<Section name="no-analysis" filename="projects/<exe-name>/IntrospectiveTests/CmdLine.tests.cpp" >
<Expression success="true" type="CHECK" filename="projects/<exe-name>/IntrospectiveTests/CmdLine.tests.cpp" >
<Original>
cli.parse({ "test", "--benchmark-no-analysis" })
@@ -10183,6 +10183,28 @@ Nor would this
</Section>
<OverallResults successes="2" failures="0" expectedFailures="0"/>
</Section>
<Section name="Benchmark options" filename="projects/<exe-name>/IntrospectiveTests/CmdLine.tests.cpp" >
<Section name="warmup-time" filename="projects/<exe-name>/IntrospectiveTests/CmdLine.tests.cpp" >
<Expression success="true" type="CHECK" filename="projects/<exe-name>/IntrospectiveTests/CmdLine.tests.cpp" >
<Original>
cli.parse({ "test", "--benchmark-warmup-time=10" })
</Original>
<Expanded>
{?}
</Expanded>
</Expression>
<Expression success="true" type="REQUIRE" filename="projects/<exe-name>/IntrospectiveTests/CmdLine.tests.cpp" >
<Original>
config.benchmarkWarmupTime == 10
</Original>
<Expanded>
10 == 10
</Expanded>
</Expression>
<OverallResults successes="2" failures="0" expectedFailures="0"/>
</Section>
<OverallResults successes="2" failures="0" expectedFailures="0"/>
</Section>
<OverallResult success="true"/>
</TestCase>
<TestCase name="Product with differing arities - std::tuple&lt;int, double, float>" tags="[product][template]" filename="projects/<exe-name>/UsageTests/Misc.tests.cpp" >
@@ -15871,7 +15893,7 @@ loose text artifact
</Section>
<OverallResult success="true"/>
</TestCase>
<OverallResults successes="1510" failures="149" expectedFailures="21"/>
<OverallResults successes="1512" failures="149" expectedFailures="21"/>
</Group>
<OverallResults successes="1510" failures="148" expectedFailures="21"/>
<OverallResults successes="1512" failures="148" expectedFailures="21"/>
</Catch>
projects/SelfTest/IntrospectiveTests/CmdLine.tests.cpp: 8 additions & 2 deletions
@@ -503,17 +503,23 @@ TEST_CASE( "Process can be configured on command line", "[config][command-line]"
REQUIRE(config.benchmarkResamples == 20000);
}

SECTION("resamples") {
SECTION("confidence-interval") {
CHECK(cli.parse({ "test", "--benchmark-confidence-interval=0.99" }));

REQUIRE(config.benchmarkConfidenceInterval == Catch::Detail::Approx(0.99));
}

SECTION("resamples") {
SECTION("no-analysis") {
CHECK(cli.parse({ "test", "--benchmark-no-analysis" }));

REQUIRE(config.benchmarkNoAnalysis);
}

SECTION("warmup-time") {
CHECK(cli.parse({ "test", "--benchmark-warmup-time=10" }));

REQUIRE(config.benchmarkWarmupTime == 10);
}
}
}
