The track format has changed a bit due to a more flexible approach in how benchmarks are executed:
- Operations are defined in the `operations` section; execution details like the number of warmup iterations, warmup time etc. are defined as part of the `schedule`.
- Each query needs to be defined as a separate operation and referenced in the `schedule`.
- You can (and in fact should) specify a `warmup-time-period` for bulk index operations. The warmup time period is specified in seconds.
For details please refer to the updated JSON schema for Rally tracks.
Hint: This is only relevant if you have defined your own tracks. We have already taken care of updating the official Rally tracks.
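For those who maintain their own tracks, the following is a minimal sketch of the new layout. Operation and challenge names as well as all parameter values are illustrative, and the property names follow the current track schema, so please consult the JSON schema for the authoritative structure:

```
{
  "operations": [
    { "name": "bulk-index", "operation-type": "index", "bulk-size": 5000 },
    { "name": "match-all-query", "operation-type": "search", "body": { "query": { "match_all": {} } } }
  ],
  "challenges": [
    {
      "name": "append-and-query",
      "schedule": [
        { "operation": "bulk-index", "warmup-time-period": 120, "clients": 8 },
        { "operation": "match-all-query", "warmup-iterations": 100, "iterations": 1000 }
      ]
    }
  ]
}
```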
We have separated the previously known "track setup" into two parts:
- Challenges, which describe what happens during a benchmark (whether to index or search, and with which parameters)
- Cars, which describe the settings of the benchmark candidate (e.g. heap size, logging configuration)
This influences the command line interface in a couple of ways:
- To list all known cars, we have added a new command `esrally list cars`. To select a challenge, use `--challenge` instead of `--track-setup`, and specify a car with `--car` (see the example below this list).
- Tournaments created by older versions of Rally are incompatible.
- Rally must now be invoked with only one challenge and only one car (previously it was possible to specify multiple track setups)
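For example (track, challenge and car names are illustrative; `esrally list cars` and the track documentation show what is actually available):

```
# list all known cars
esrally list cars

# previously: esrally --track-setup=<name>
# now: exactly one challenge and exactly one car per invocation
esrally --track=geonames --challenge=append-no-conflicts --car=defaults
```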
We have also moved tracks to a dedicated repository. This allows us to support tracks for multiple versions of Elasticsearch, but it also requires that all users have `git` installed.
We have spent a lot of time simplifying the first-time setup of Rally. For starters, you are not required to set up your own metrics store if you don't need it.
However, you can then only run individual benchmarks; you cannot compare results or visualize anything in Kibana. If you do not need these features, we recommend that you remove the configuration directory and run `esrally configure`. Rally will notify you of this change on its first start and guide you through the process.
Please raise a ticket in case of problems.
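If you choose to reconfigure, the steps are roughly as follows (assuming the configuration directory is in its default location, `~/.rally`):

```
# remove the old configuration directory (back it up first if you customized anything)
rm -rf ~/.rally
# run the interactive configuration wizard again
esrally configure
```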
- Add a tournament mode. More information in the user docs
- External benchmarks can now specify target hosts and ports (see the combined example after this list)
- Ability to add a user-defined tag as metric meta-data
- Support gzipped benchmark data (contributed by @monk-ee. Thanks!)
- Support for the `perf` profiler
- Add a fulltext benchmark
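As a combined illustration of these options (host addresses, tags and race identifiers below are placeholders; flag names assume a reasonably recent Rally version):

```
# benchmark an already running external cluster, tagging the metrics
esrally --pipeline=benchmark-only --target-hosts=10.5.5.10:9200,10.5.5.11:9200 --user-tag="setup:baseline"

# tournament mode: compare two previously recorded races
esrally compare --baseline=20160518T122341Z --contender=20160519T091651Z
```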
Major changes:
- Rally can now benchmark a binary Elasticsearch distribution (starting with Elasticsearch 5.0.0-alpha1); see the example after this list.
- Reporting improvements for query latency and indexing throughput on the command line.
- We store benchmark environment data alongside metrics.
- A new [percolator track](elastic#74) contributed by Martijn van Groningen. Thanks!
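A sketch of benchmarking a binary distribution on the command line (the version is just an example; flag and pipeline names assume a reasonably recent Rally version):

```
# benchmark an official binary distribution instead of building from sources
esrally --pipeline=from-distribution --distribution-version=5.0.0-alpha1
```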
Major changes:
- Added a JIT profiler. This allows you to check warmup times and also to inspect in depth which optimizations were performed by the JIT compiler. If the HotSpot disassembler library is available, the logs will also contain the disassembled JIT compiler output, which can be used for low-level analysis. We recommend using JITWatch for analysis.
- Added pipeline support. Pipelines allow you to define more flexibly which steps Rally executes during a benchmark. One use case is to run a benchmark against a released build of Elasticsearch rather than building it from sources. See the sketch below.
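A minimal sketch of selecting a pipeline explicitly (the pipeline name is an example; `esrally list pipelines` shows what your installation actually supports):

```
# show all available pipelines, then run a benchmark with a specific one
esrally list pipelines
esrally --pipeline=from-sources-complete
```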
Major changes:
- Migrated the metrics data store from file-based to a dedicated Elasticsearch instance. Graphical reports can be created with Kibana (optional but recommended). It is necessary to set up an Elasticsearch cluster to store metrics data (a single node is sufficient). The cluster will be configured automatically by Rally. For details, please see the README; a configuration excerpt is sketched below.
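For orientation, the metrics store connection details end up in Rally's configuration file. The excerpt below is a sketch based on current Rally versions; section and property names may differ in older releases:

```
# excerpt from ~/.rally/rally.ini (sketch; property names assume a current Rally version)
[reporting]
datastore.type = elasticsearch
datastore.host = localhost
datastore.port = 9200
```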
Related issues: #8, #21, #46,