Direct Buffers may not be released for certain HTTP/2 clients #12661
Root cause analysis

After finding some configuration options to disable direct buffers, I was now able to figure out that my issue was related to the output buffers. I used the following code snippet to track the currently used direct buffer allocations (byteBufferPool and log refer to the application's pool instance and logger):

import java.lang.management.{BufferPoolMXBean, ManagementFactory}
import java.util.{Timer, TimerTask}
import java.util.concurrent.TimeUnit
import org.apache.commons.io.FileUtils
import scala.jdk.CollectionConverters._

val timer = new Timer()
timer.schedule(
  new TimerTask {
    override def run(): Unit = {
      log.debug("[byte-buffer-pool-debug] {}", byteBufferPool.toString)
      val builder = new StringBuilder
      builder.append("[gc-debug]")
      // The platform MXBeans report the JVM's "direct" and "mapped" buffer pools
      val directPools = ManagementFactory.getPlatformMXBeans(classOf[BufferPoolMXBean])
      for (pool <- directPools.asScala) {
        builder.append(
          s" [pool:${pool.getName}, count:${pool.getCount}, used:${FileUtils.byteCountToDisplaySize(pool.getMemoryUsed)}] ")
      }
      log.debug(builder.toString())
    }
  },
  0,
  TimeUnit.SECONDS.toMillis(1)
)

Then I tried all permutations of enabled/disabled direct buffers for input and output:

val httpConfiguration = new HttpConfiguration()
// Tested (true, true), (true, false), (false, true), (false, false)
httpConfiguration.setUseInputDirectByteBuffers(true)
httpConfiguration.setUseOutputDirectByteBuffers(true)
val h1ConnectionFactory = new HttpConnectionFactory()
val h2cConnectionFactory = new HTTP2CServerConnectionFactory(httpConfiguration)
val serverConnector = new ServerConnector(server, h1ConnectionFactory, h2cConnectionFactory)

The outcome was that the problem must be related to the output buffers, as only then was a significant increase in direct buffer usage noticeable in the debug logs. Next I tried to figure out which parts of the Jetty code make use of the output buffers. I did another test with the same warmup tool (https://github.com/ExpediaGroup/mittens) as before, but this time with a proxy in between. Now the buffer usage was constant at about 70 MB, so the reason had to be client related. Hence the problem was either connection related or, in the case of HTTP/2, maybe stream related.

After debugging my application locally I was able to follow the stack trace to the place where the client-advertised SETTINGS_MAX_HEADER_LIST_SIZE is used to size the output buffer. The corresponding client-side option in Go's net/http2 (which Mittens uses) is documented as follows:
// send in the initial settings frame. It is how many bytes
// of response headers are allowed. Unlike the http2 spec, zero here
// means to use a default limit (currently 10MB). If you actually
// want to advertise an unlimited value to the peer, Transport
// interprets the highest possible value here (0xffffffff or 1<<32-1)
// to mean no limit.
MaxHeaderListSize uint32

After modifying the Mittens code base to use 8 KiB here and doing the load test again, the issue was gone. Direct buffer allocation was now consistently below 10 MB; before, it went up to 7 GB on my machine (see the back-of-envelope sketch at the end of this comment).

I found a patch for the buffer issue in the Jetty PRs which is already merged to the current code base; I guess this will be released with Jetty v12.1.0.

Conclusion

In my opinion this was not just a minor issue, but also a potential memory leak and a DoS attack vector when the Jetty web server is not behind a proxy.
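For illustration, here is a back-of-envelope check of the orders of magnitude reported above. The two buffer sizes come from the report; the number of outstanding allocations is purely a hypothetical assumption.

// Rough sanity check (sketch): if each HTTP/2 header-encode buffer is sized to the
// client-advertised SETTINGS_MAX_HEADER_LIST_SIZE and a few hundred of them are alive
// at once without being pooled, the totals land in the observed ranges.
val goDefault   = 10L * 1024 * 1024 // ~10 MB, the Go net/http2 default advertised by Mittens
val patched     = 8L * 1024         // 8 KiB after modifying Mittens
val outstanding = 700L              // hypothetical count of un-released buffers

println(s"default: ${goDefault * outstanding / (1024 * 1024)} MiB") // 7000 MiB, i.e. ~7 GB
println(s"patched: ${patched * outstanding / (1024 * 1024)} MiB")   // ~5 MiB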
@jrauschenbusch thanks for the detailed report!

The main cause of the issue you reported was a combination of the exceedingly large value for SETTINGS_MAX_HEADER_LIST_SIZE advertised by the client and the server's buffer pool configuration. By default, the pool only retains buffers up to a maximum pooled capacity; larger acquisitions are allocated outside the pool buckets.

This is the cause of the large memory consumption you were seeing, and explains why changing the Mittens configuration to 8 KiB (a capacity that would be pooled by the default configuration) made the difference.

We have filed #12689 to better track out-of-bucket allocations, and #12690 to cap the client-advertised value to the server-configured one. Alternatively, you can configure the server side to accommodate such clients.

I don't think this issue is a vulnerability; it is just a misconfiguration of the server for a given client that has a very aggressive configuration.

We have filed and resolved issues to protect Jetty against aggressive peer configurations, and to report information about buffer pooling, so I consider this issue resolved.
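A minimal sketch of the kind of server-side pool configuration hinted at above, assuming Jetty 12's ArrayByteBufferPool(minCapacity, factor, maxCapacity) constructor and the Server constructor that accepts a ByteBufferPool; the capacities are illustrative only, not a recommendation.

import org.eclipse.jetty.io.ArrayByteBufferPool
import org.eclipse.jetty.server.Server
import org.eclipse.jetty.util.thread.{QueuedThreadPool, ScheduledExecutorScheduler}

// Buckets from 0 up to 1 MiB in 4 KiB steps; acquisitions above maxCapacity are not pooled,
// so the ceiling should cover the capacities the deployment actually acquires.
val pooled = new ArrayByteBufferPool(0, 4096, 1024 * 1024)
val server = new Server(new QueuedThreadPool(), new ScheduledExecutorScheduler(), pooled)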
FYI: https://github.com/jetty/jetty.project/releases/tag/jetty-12.0.17 solved the issue, as it caps the client-advertised value at the server-configured value. So a client is no longer able to exceed this setting.
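For reference, a minimal sketch of the corresponding dependency bump, assuming an sbt build; the coordinates are the ones listed in the report below, only the version changes.

// build.sbt (sketch): bump the Jetty artifacts from 12.0.16 to 12.0.17
libraryDependencies ++= Seq(
  "org.eclipse.jetty"       % "jetty-server"       % "12.0.17",
  "org.eclipse.jetty.ee10"  % "jetty-ee10-servlet" % "12.0.17",
  "org.eclipse.jetty.http2" % "jetty-http2-server" % "12.0.17"
)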
Jetty version(s)
Jetty v12.0.16
Jetty Environment
org.eclipse.jetty:jetty-server:12.0.16
org.eclipse.jetty.ee10:jetty-ee10-servlet:12.0.16
org.eclipse.jetty.http2:jetty-http2-server:12.0.16
Java version/vendor
openjdk version "25-ea" 2025-09-16
OpenJDK Runtime Environment (build 25-ea+2-135)
OpenJDK 64-Bit Server VM (build 25-ea+2-135, mixed mode, sharing)
OS type/version
Description
When using Mittens (https://github.com/ExpediaGroup/mittens) as a warmup tool running inside a Kubernetes pod (configured as a sidecar) which performs a high volume of concurrent HTTP/2 requests, it seems that direct buffers are not released properly by Jetty (or the underlying OS).
After a short time the Kubernetes container exits with status 137 (OOMKilled). Analyzing the heap and non-heap memory did not bring any insights. A deeper analysis with a bunch of tools (JFR, Eclipse MAT, Native Memory Tracking, ...) indicated that native memory allocations of type=Other were the reason for the OOMKill. This kind of memory allocation increased all the time, but never decreased. After some time I stumbled over the direct buffer settings of Jetty. I also tried to find out more details by using the ByteBufferPool.Tracking#dump() output (see the sketch below), but from this point of view there was no indication of bigger issues.

Then I tried to disable the direct input buffers and the problem was gone with the same configuration for the Mittens warmup.
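For context, a minimal sketch of how such a tracking pool can be installed; the ByteBufferPool.Tracking wrapper and the Server constructor taking a ByteBufferPool are assumed from the Jetty 12 API, and the exact constructor shapes may differ.

import org.eclipse.jetty.io.{ArrayByteBufferPool, ByteBufferPool}
import org.eclipse.jetty.server.Server
import org.eclipse.jetty.util.thread.{QueuedThreadPool, ScheduledExecutorScheduler}

// Wrap the real pool so acquire/release pairs are recorded and can be dumped when hunting leaks.
val trackingPool = new ByteBufferPool.Tracking(new ArrayByteBufferPool())
val server = new Server(new QueuedThreadPool(), new ScheduledExecutorScheduler(), trackingPool)

// e.g. periodically, or on demand:
// log.debug("[byte-buffer-pool-debug] {}", trackingPool.dump())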
A load test with 7k req/s and direct buffers enabled (this time no Mittens warmup sidecar was in place) did not lead to an OOMKill. This time an Envoy proxy was in front of the Java application. The load test tool was a custom implementation in Node.js.
The issue did not occur when using Jetty 11.0.24, also configured to use direct buffers for input and output. It seems something has changed underneath regarding the buffer handling.
During my tests I was able to make the following observations:
Using a new ByteBufferPool.NonPooling() pool was better than using the ArrayByteBufferPool one (the OOMKill came later).
How to reproduce?
Create an application with the following configuration:
Example of warmup.json structure. Of course filled with content.