diff --git a/docs/Advanced-usage.md b/docs/Advanced-usage.md
new file mode 100644
index 0000000000..561098de0b
--- /dev/null
+++ b/docs/Advanced-usage.md
@@ -0,0 +1,2686 @@

# Advanced usage

## Configuring Client resources

Client resources are configuration settings for the client related to
performance, concurrency, and events. A large part of the client resources
consists of thread pools (`EventLoopGroup`s and an `EventExecutorGroup`)
which build the infrastructure for the connection workers. In general,
it is a good idea to reuse instances of `ClientResources` across
multiple clients.

Client resources are stateful and need to be shut down if they are
supplied from outside the client.

### Creating Client resources

Client resources are required to be immutable. You can create instances
using two different patterns:

**The `create()` factory method**

By using the `create()` method on `DefaultClientResources` you create
`ClientResources` with default settings:

``` java
ClientResources res = DefaultClientResources.create();
```

This approach fits most needs.

**Resources builder**

You can build instances of `DefaultClientResources` by using the
embedded builder. It is designed to configure the resources to your
needs. The builder accepts the configuration in a fluent fashion and
then creates the `ClientResources` at the end:

``` java
ClientResources res = DefaultClientResources.builder()
        .ioThreadPoolSize(4)
        .computationThreadPoolSize(4)
        .build();
```

### Using and reusing `ClientResources`

A `RedisClient` and `RedisClusterClient` can be created without passing
`ClientResources` upon creation. The resources are exclusive to the
client and are managed by the client itself. When calling `shutdown()`
on the client instance, the `ClientResources` are shut down as well.

``` java
RedisClient client = RedisClient.create();
...
client.shutdown();
```

If you require multiple instances of a client or you want to provide
existing thread infrastructure, you can configure a shared
`ClientResources` instance using the builder. The shared client
resources can be passed upon client creation:

``` java
ClientResources res = DefaultClientResources.create();
RedisClient client = RedisClient.create(res);
RedisClusterClient clusterClient = RedisClusterClient.create(res, seedUris);
...
client.shutdown();
clusterClient.shutdown();
res.shutdown();
```

Shared `ClientResources` are never shut down by the client. The same applies
to shared `EventLoopGroupProvider`s, which are an abstraction to provide
`EventLoopGroup`s.

#### Why `Runtime.getRuntime().availableProcessors()` \* 3?

Netty requires different `EventLoopGroup`s for NIO (TCP) and for EPoll
(Unix Domain Socket) connections. One additional `EventExecutorGroup` is
used to perform computation tasks. `EventLoopGroup`s are started lazily
to allocate threads on demand.

#### Shutdown

Every client instance requires a call to `shutdown()` to clear the used
resources. Clients with dedicated `ClientResources` (i.e. no
`ClientResources` passed within the constructor/`create`-method) will
shut down the `ClientResources` on their own.

Client instances using shared `ClientResources` (i.e.
`ClientResources` passed using the constructor/`create`-method) won’t
shut down the `ClientResources` on their own. The `ClientResources`
instance needs to be shut down once it’s not used anymore.
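
When `ClientResources` are shared, the shutdown order matters: shut down the
clients first, then the shared resources. The following sketch illustrates a
graceful variant of the example above; it assumes the
`shutdown(quietPeriod, timeout, unit)` overload on `ClientResources`, and
`redis://localhost` is a placeholder URI.

``` java
import java.util.concurrent.TimeUnit;

import io.lettuce.core.RedisClient;
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;

// Shared resources are created outside the clients, so the caller owns their lifecycle.
ClientResources res = DefaultClientResources.create();
RedisClient client = RedisClient.create(res, "redis://localhost");

// ... use the client ...

// 1. Shut down the client first; it releases its connections but never the shared resources.
client.shutdown();

// 2. Then shut down the shared resources, allowing in-flight tasks to drain
//    (quiet period of 0, upper bound of 10 seconds).
res.shutdown(0, 10, TimeUnit.SECONDS);
```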

### Configuration settings

The basic configuration options are listed in the table below:

| Name | Method | Default |
|------|--------|---------|
| **I/O Thread Pool Size** | `ioThreadPoolSize` | `Number of processors` |
| The number of threads in the I/O thread pools. The number defaults to the number of available processors that the runtime returns (which, as is well known, sometimes does not represent the actual number of processors). Every thread represents an internal event loop where all I/O tasks are run. The number does not reflect the actual number of I/O threads because the client requires different thread pools for Network (NIO) and Unix Domain Socket (EPoll) connections. The minimum number of I/O threads is `3`. A pool with fewer threads can cause undefined behavior. | | |
| **Computation Thread Pool Size** | `computationThreadPoolSize` | `Number of processors` |
| The number of threads in the computation thread pool. The number defaults to the number of available processors that the runtime returns (which, as is well known, sometimes does not represent the actual number of processors). Every thread represents an internal event loop where all computation tasks are run. The minimum number of computation threads is `3`. A pool with fewer threads can cause undefined behavior. | | |

### Advanced settings

The advanced options are listed in the table below. They should not be
changed unless there is a truly good reason to do so.

| Name | Method | Default |
|------|--------|---------|
| **Provider for EventLoopGroup** | `eventLoopGroupProvider` | none |
| For those who want to reuse existing netty infrastructure or want total control over the thread pools, the `EventLoopGroupProvider` API provides a way to do so. `EventLoopGroup`s are obtained and managed by an `EventLoopGroupProvider`. A provided `EventLoopGroupProvider` is not managed by the client and needs to be shut down once you no longer need the resources. | | |
| **Provided EventExecutorGroup** | `eventExecutorGroup` | none |
| Those who want to reuse existing netty infrastructure or want total control over the thread pools can provide an existing `EventExecutorGroup` to the client resources. A provided `EventExecutorGroup` is not managed by the client and needs to be shut down once you no longer need the resources. | | |
| **Event bus** | `eventBus` | `DefaultEventBus` |
| The event bus system is used to transport events from the client to subscribers. Events are about connection state changes, metrics, and more. Events are published using a RxJava subject and the default implementation drops events on backpressure. Learn more about the Reactive API. You can also publish your own events. If you wish to do so, make sure that your events implement the `Event` marker interface. | | |
| **Command latency collector options** | `commandLatencyCollectorOptions` | `DefaultCommandLatencyCollectorOptions` |
| The client can collect latency metrics while dispatching commands. The options allow configuring the percentiles, the level of metrics (per connection or per server) and whether the metrics are cumulative or reset after obtaining them. Command latency collection is enabled by default and can be disabled by setting `commandLatencyCollectorOptions(…)` to `DefaultCommandLatencyCollectorOptions.disabled()`. The latency collector requires LatencyUtils to be on your class path. | | |
| **Command latency collector** | `commandLatencyCollector` | `DefaultCommandLatencyCollector` |
| The client can collect latency metrics while dispatching commands. Command latency metrics are collected on connection or server level. Command latency collection is enabled by default and can be disabled by setting `commandLatencyCollectorOptions(…)` to `DefaultCommandLatencyCollectorOptions.disabled()`. | | |
| **Latency event publisher options** | `commandLatencyPublisherOptions` | `DefaultEventPublisherOptions` |
| Command latencies can be published using the event bus. Latency events are emitted by default every 10 minutes. Event publishing can be disabled by setting `commandLatencyPublisherOptions(…)` to `DefaultEventPublisherOptions.disabled()`. | | |
| **DNS Resolver** | `dnsResolver` | `DnsResolvers.JVM_DEFAULT` (or netty if present) |
| Since: 3.5, 4.2. Configures a DNS resolver to resolve hostnames to a `java.net.InetAddress`. Since 4.4: defaults to the netty DNS resolver if it is available on the class path. | | |
| **Reconnect Delay** | `reconnectDelay` | `Delay.exponential()` |
| Since: 4.2. Configures a reconnect delay used to delay reconnect attempts. Defaults to binary exponential delay with an upper boundary of `30 SECONDS`. | | |
| **Netty Customizer** | `nettyCustomizer` | none |
| Since: 4.4. Configures a netty customizer to enhance netty components. Allows customization of netty `Bootstrap` and `Channel` settings. | | |
| **Tracing** | `tracing` | disabled |
| Since: 5.1. Configures a `Tracing` component to trace Redis commands, e.g. using Brave. | | |
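
To make the table above concrete, the sketch below wires a few of the advanced
settings through the `DefaultClientResources` builder. The builder methods
mirror the option names in the table; their exact availability and signatures
depend on your Lettuce version, so treat this as an illustrative assumption
rather than a definitive configuration.

``` java
import io.lettuce.core.RedisClient;
import io.lettuce.core.event.DefaultEventPublisherOptions;
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;
import io.lettuce.core.resource.Delay;
import io.lettuce.core.resource.DnsResolvers;

// Sketch: tune a few of the advanced ClientResources settings via the builder.
ClientResources res = DefaultClientResources.builder()
        // Binary exponential backoff between reconnect attempts (the default strategy).
        .reconnectDelay(Delay.exponential())
        // Use the JVM default DNS resolution explicitly.
        .dnsResolver(DnsResolvers.JVM_DEFAULT)
        // Disable latency event publishing, as described in the table above.
        .commandLatencyPublisherOptions(DefaultEventPublisherOptions.disabled())
        .build();

RedisClient client = RedisClient.create(res);
```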

## Client Options

Client options control the connection and command processing behavior of a
client. They are applied to a client instance via `setOptions(…)`:

| Name | Method | Default |
|------|--------|---------|
| **PING before activating connection** | `pingBeforeActivateConnection` | `true` |
| Since: 3.1, 4.0. Perform a lightweight `PING` connection handshake when establishing a Redis connection. If enabled, every connection and reconnect issues a `PING` command and awaits its response before the connection is activated and enabled for use. Failed `PING`s cause the connect/reconnect to be treated as a failure. The handshake validates that the remote end behaves like a Redis server. | | |
| **Auto-Reconnect** | `autoReconnect` | `true` |
| Since: 3.1, 4.0. Controls auto-reconnect behavior on connections. As soon as a connection gets closed/reset without the intention to close it, the client will try to reconnect, activate the connection and re-issue any queued commands. This flag also has the effect that disconnected connections will refuse commands and cancel these with an exception. | | |
| **Cancel commands on reconnect failure** | `cancelCommandsOnReconnectFailure` | `false` |
| Since: 3.1, 4.0. This flag is deprecated and should not be used as it can lead to race conditions and protocol offsets. SSL is natively supported by Lettuce and no longer requires the use of SSL tunnels where protocol traffic can get out of sync. If this flag is `true`, queued commands are canceled when a reconnect attempt fails. | | |
| **Policy how to reclaim decode buffer memory** | `decodeBufferPolicy` | ratio-based at 75% |
| Since: 6.0. Policy to discard read bytes from the decoding aggregation buffer to reclaim memory. See `DecodeBufferPolicies` for the available policies. | | |
| **Suspend reconnect on protocol failure** | `suspendReconnectOnProtocolFailure` | `false` (was introduced in 3.1 with default `true`) |
| Since: 3.1, 4.0. If this flag is `true`, reconnects are suspended on protocol failures (for example SSL errors or a failed `PING` before connection activation). Reconnection can be activated again, but there is no public API to obtain the `ConnectionWatchdog` instance. | | |
| **Request queue size** | `requestQueueSize` | `2147483647` (`Integer#MAX_VALUE`) |
| Since: 3.4, 4.1. Controls the per-connection request queue size. The command invocation will lead to a `RedisException` if the queue size is exceeded. Setting `requestQueueSize` to a lower value causes exceptions to surface earlier during overload or while the connection is in a disconnected state. | | |
| **Disconnected behavior** | `disconnectedBehavior` | `DEFAULT` |
| Since: 3.4, 4.1. A connection can behave in a disconnected state in various ways. The auto-reconnect feature allows in particular to retrigger commands that have been queued while a connection is disconnected. The disconnected behavior setting allows fine-grained control over the behavior. The following settings are available: `DEFAULT`, `ACCEPT_COMMANDS`, and `REJECT_COMMANDS`. | | |
| **Protocol Version** | `protocolVersion` | Latest/Auto-discovery |
| Since: 6.0. Configuration of which protocol version (RESP2/RESP3) to use. Leaving this option unconfigured performs a protocol discovery to use the latest available protocol. | | |
| **Script Charset** | `scriptCharset` | `UTF-8` |
| Since: 6.0. Charset to use for Lua scripts. | | |
| **Socket Options** | `socketOptions` | 10 seconds connection timeout, no keep-alive, no TCP noDelay |
| Since: 4.3. Options to configure low-level socket options for the connections kept to Redis servers. | | |
| **SSL Options** | `sslOptions` | (none), use JDK defaults |
| Since: 4.3. Configure SSL options regarding SSL providers (JDK/OpenSSL) and key store/trust store. | | |
| **Timeout Options** | `timeoutOptions` | Do not timeout commands. |
| Since: 5.1. Options to configure command timeouts applied to time out commands after dispatching these (active connections, queued while disconnected, batch buffer). By default, the synchronous API times out commands using `RedisURI.getTimeout()`. | | |
| **Publish Reactive Signals on Scheduler** | `publishOnScheduler` | Use I/O thread. |
| Since: 5.1.4. Use a dedicated `Scheduler` to emit reactive data signals instead of the I/O thread. This can be useful when downstream processing of reactive sequences is expensive. | | |
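
As a rough illustration of how several of these options are combined, the
sketch below builds a `ClientOptions` instance and applies it to a client. The
builder methods mirror the option names in the table; availability of
individual methods depends on the Lettuce version (for example,
`protocolVersion` requires 6.0), and `redis://localhost` is a placeholder URI.

``` java
import java.time.Duration;

import io.lettuce.core.ClientOptions;
import io.lettuce.core.RedisClient;
import io.lettuce.core.SocketOptions;
import io.lettuce.core.TimeoutOptions;
import io.lettuce.core.protocol.ProtocolVersion;

ClientOptions options = ClientOptions.builder()
        .autoReconnect(true)
        // Reject commands instead of buffering them while disconnected.
        .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS)
        // Pin the protocol instead of relying on auto-discovery.
        .protocolVersion(ProtocolVersion.RESP3)
        .socketOptions(SocketOptions.builder()
                .connectTimeout(Duration.ofSeconds(10))
                .keepAlive(true)
                .build())
        // Time out commands after 10 seconds regardless of the API in use.
        .timeoutOptions(TimeoutOptions.enabled(Duration.ofSeconds(10)))
        .build();

RedisClient client = RedisClient.create("redis://localhost");
client.setOptions(options);
```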

### Cluster-specific options

The `RedisClusterClient` uses `ClusterClientOptions`, which extend
`ClientOptions` with cluster-specific settings such as topology refresh:

| Name | Method | Default |
|------|--------|---------|
| **Periodic cluster topology refresh** | `enablePeriodicRefresh` | `false` |
| Since: 3.1, 4.0. Enables or disables periodic cluster topology refresh. The refresh is handled in the background. Partitions, the view on the Redis Cluster topology, are valid for the whole `RedisClusterClient` instance. The refresh job is regularly executed; the period between the runs can be set with `refreshPeriod`. | | |
| **Cluster topology refresh period** | `refreshPeriod` | `60 SECONDS` |
| Since: 3.1, 4.0. Set the period between the refresh job runs. The effective interval cannot be changed once the refresh job is active. Changes to the value will be ignored. | | |
| **Adaptive cluster topology refresh** | `enableAdaptiveRefreshTrigger` | (none) |
| Since: 4.2. Enables adaptive topology refresh triggers selectively. Adaptive refresh triggers initiate topology view updates based on events that happen during Redis Cluster operations. Adaptive triggers lead to an immediate topology refresh. These refreshes are rate-limited using a timeout since events can happen on a large scale. Adaptive refresh triggers are disabled by default. Triggers such as `MOVED_REDIRECT`, `ASK_REDIRECT`, and `PERSISTENT_RECONNECTS` can be enabled. | | |
| **Adaptive refresh triggers timeout** | `adaptiveRefreshTriggersTimeout` | `30 SECONDS` |
| Since: 4.2. Set the timeout between the adaptive refresh job runs. Multiple triggers within the timeout will be ignored; only the first enabled trigger leads to a topology refresh. The effective period cannot be changed once the refresh job is active. Changes to the value will be ignored. | | |
| **Reconnect attempts (Adaptive topology refresh trigger)** | `refreshTriggersReconnectAttempts` | `5` |
| Since: 4.2. Set the threshold for the `PERSISTENT_RECONNECTS` refresh trigger. Topology updates based on persistent reconnects are only triggered once the reconnect process has reached at least this number of attempts. | | |
| **Dynamic topology refresh sources** | `dynamicRefreshSources` | `true` |
| Since: 4.2. Discover cluster nodes from the topology and use only the discovered nodes as the source for the cluster topology. Using dynamic refresh will query all discovered nodes for the cluster topology details. If set to `false`, only the initial seed nodes are used as sources for topology discovery. Note that enabling dynamic topology refresh sources uses node addresses reported by Redis in the `CLUSTER NODES` output. | | |
| **Close stale connections** | `closeStaleConnections` | `true` |
| Since: 3.3, 4.1. Stale connections are existing connections to nodes which are no longer part of the Redis Cluster. If this flag is set to `true`, stale connections are closed upon cluster topology refresh. | | |
| **Limitation of cluster redirects** | `maxRedirects` | `5` |
| Since: 3.1, 4.0. When the assignment of a slot-hash is moved in a Redis Cluster and a client requests a key that is located on the moved slot-hash, the Cluster node responds with a `-MOVED` redirection. In this case, the client follows the redirection and queries the node given in the redirection. `maxRedirects` limits the number of redirections that are followed for a single command. | | |
| **Filter nodes from Topology** | `nodeFilter` | no filter |
| Since: 6.1.6. When providing a `nodeFilter` predicate, nodes in the topology view are filtered against it; nodes that do not pass the filter are removed from the topology. | | |
| **Validate cluster node membership** | `validateClusterNodeMembership` | `true` |
| Since: 3.3, 4.0. Validate the cluster node membership before allowing connections to that node. The current implementation performs redirects using `MOVED` and `ASK` responses. There are some scenarios where the strict validation is an obstruction, for example connecting to non-cluster members to reconfigure them while using the `RedisClusterClient` connection. | | |
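
To tie the cluster-specific options together, the following sketch configures
topology refresh via `ClusterTopologyRefreshOptions` and plugs it into
`ClusterClientOptions`. The method and enum names follow the table above;
`redis://localhost:7000` is a placeholder seed URI, and version-specific
details (such as the `Duration` overloads) are assumptions to verify against
your Lettuce version.

``` java
import java.time.Duration;

import io.lettuce.core.RedisURI;
import io.lettuce.core.cluster.ClusterClientOptions;
import io.lettuce.core.cluster.ClusterTopologyRefreshOptions;
import io.lettuce.core.cluster.RedisClusterClient;

ClusterTopologyRefreshOptions topologyRefresh = ClusterTopologyRefreshOptions.builder()
        // Background refresh every 60 seconds.
        .enablePeriodicRefresh(Duration.ofSeconds(60))
        // Immediate (rate-limited) refresh on redirects and persistent reconnects.
        .enableAdaptiveRefreshTrigger(
                ClusterTopologyRefreshOptions.RefreshTrigger.MOVED_REDIRECT,
                ClusterTopologyRefreshOptions.RefreshTrigger.ASK_REDIRECT,
                ClusterTopologyRefreshOptions.RefreshTrigger.PERSISTENT_RECONNECTS)
        .adaptiveRefreshTriggersTimeout(Duration.ofSeconds(30))
        .dynamicRefreshSources(true)
        .build();

ClusterClientOptions clusterOptions = ClusterClientOptions.builder()
        .topologyRefreshOptions(topologyRefresh)
        .maxRedirects(5)
        .validateClusterNodeMembership(true)
        .build();

RedisClusterClient clusterClient = RedisClusterClient.create(RedisURI.create("redis://localhost:7000"));
clusterClient.setOptions(clusterOptions);
```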