Direct Buffers may not be released for certain HTTP/2 clients #12661

Closed · jrauschenbusch opened this issue Dec 20, 2024 · 3 comments
Labels: Bug (For general bugs on Jetty side)

jrauschenbusch commented Dec 20, 2024

Jetty version(s)

Jetty v12.0.16

Jetty Environment

  • Jetty embedded in Scala application
  • Libraries in use:
    • org.eclipse.jetty:jetty-server:12.0.16
    • org.eclipse.jetty.ee10:jetty-ee10-servlet:12.0.16
    • org.eclipse.jetty.http2:jetty-http2-server:12.0.16

Java version/vendor (use: java -version)

openjdk version "25-ea" 2025-09-16
OpenJDK Runtime Environment (build 25-ea+2-135)
OpenJDK 64-Bit Server VM (build 25-ea+2-135, mixed mode, sharing)

OS type/version

  • Docker Image: openjdk:25-slim-bullseye
  • Debian GNU/Linux 11 (bullseye)
  • Kubernetes Container w/ QoS Guaranteed (4 CPU, 10 GiB Memory)

Description

When using Mittens (https://github.com/ExpediaGroup/mittens) as a warmup tool running inside a Kubernetes Pod (configured as a sidecar), which performs a high volume of concurrent HTTP/2 requests, it seems that Direct Buffers are not released properly by Jetty (or the underlying OS).

After a short time the Kubernetes container exits with status 137 (OOMKilled). Analyzing the heap and non-heap memory did not bring any insights. A deeper analysis with a bunch of tools (JFR, Eclipse MAT, Native Memory Tracking, ...) indicated that native memory allocations of type=Other were the reason for the OOMKill. These allocations increased all the time but never decreased. After some time I stumbled over the Direct Buffer settings of Jetty. I also tried to find out more details using the ByteBufferPool.Tracking#dump() output, but from that point of view there was no indication of bigger issues.

Then I tried disabling direct input byte buffers, and the problem was gone with the same configuration for the Mittens warmup.

A load test with 7k req/s and direct buffers enabled (this time no Mittens warmup sidecar was in place) did not lead to an OOMKill. This time an Envoy proxy was in front of the Java application. The load test tool was a custom implementation in Node.js.

The issue did not occur when using Jetty 11.0.24, also configured to use direct buffers for input and output. It seems something has changed underneath regarding the buffer handling.

During my tests I was able to make the following observations:

  • Using ZGC instead of G1 makes the problem even worse. OOMKill comes much faster than with G1.
  • Using -XX:MaxDirectMemorySize=4g stabilized the Java app, but Jetty no longer accepted all requests and rejected some, which led to EOFs in the Mittens warmup tool
  • Using a new ByteBufferPool.NonPooling() pool was better than using the ArrayByteBufferPool one (the OOMKill came later); see the sketch after this list
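
For reference, a minimal sketch of how such a non-pooling pool can be handed to the server; the Server(ThreadPool, Scheduler, ByteBufferPool) constructor overload and its null-defaulting scheduler argument are assumptions based on the Jetty 12 API, not something taken from this report.

import org.eclipse.jetty.io.ByteBufferPool
import org.eclipse.jetty.server.Server
import org.eclipse.jetty.util.thread.QueuedThreadPool

// Replace the default ArrayByteBufferPool with a non-pooling pool; combine with the JVM flag
// -XX:MaxDirectMemorySize=4g to additionally cap direct memory, as in the observations above.
val threadPool = new QueuedThreadPool()
val nonPoolingPool = new ByteBufferPool.NonPooling()
val server = new Server(threadPool, null /* scheduler: null lets Jetty pick a default (assumed) */, nonPoolingPool)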

How to reproduce?

Create an application with the following configuration:

import jakarta.servlet.http.{HttpServlet, HttpServletRequest, HttpServletResponse}
import org.eclipse.jetty.ee10.servlet.{ServletContextHandler, ServletHolder}
import org.eclipse.jetty.http2.server.HTTP2CServerConnectionFactory
import org.eclipse.jetty.server.{HttpConfiguration, HttpConnectionFactory, Server, ServerConnector}
import org.eclipse.jetty.util.thread.QueuedThreadPool

val threadPool = new QueuedThreadPool()
val server = new Server(threadPool)
val httpConfig = new HttpConfiguration()
// Disabling direct byte buffers made the problem disappear:
//httpConfig.setUseInputDirectByteBuffers(false)
//httpConfig.setUseOutputDirectByteBuffers(false)
val connector = new ServerConnector(server, new HttpConnectionFactory(httpConfig), new HTTP2CServerConnectionFactory(httpConfig))
connector.setHost("0.0.0.0")
connector.setPort(8080)
server.addConnector(connector)
val servletContextHandler = new ServletContextHandler()
servletContextHandler.addServlet(new ServletHolder(new DataServlet), "/postData")
servletContextHandler.addServlet(new ServletHolder(new HealthzServlet), "/healthz")
server.setHandler(servletContextHandler)
server.start()

class HealthzServlet extends HttpServlet {
  override def doGet(req: HttpServletRequest, resp: HttpServletResponse): Unit = {
    // empty
  }
}

class DataServlet extends HttpServlet {
  override def doPost(req: HttpServletRequest, resp: HttpServletResponse): Unit = {
    // empty
  }
}
Then run the Mittens warmup against the application:

docker run mittens:latest \
       --concurrency=1000 \
       --concurrency-target-seconds=100 \ 
       --max-duration-seconds=600  \ 
       --max-warmup-seconds=180  \ 
       --max-readiness-wait-seconds=240  \ 
       --target-readiness-http-path=/healthz  \ 
       --target-http-protocol=h2c  \ 
       --http-requests=post:/postData:file:/tmp/warmup.json  \ 
       --http-requests-compression=gzip  \ 
       --http-headers=content-type:application/json  \ 
       -fail-readiness=true

Example of the warmup.json structure (of course filled with actual content):

{
  "object1": {
    ...
  },
  "object2": {
    ...
  },
  "object3": {
    ...
  },
  "array": [{
    ...
    "object4": {
      ...
    }
  }]
}
jrauschenbusch added the Bug label on Dec 20, 2024
jrauschenbusch (Author) commented Feb 5, 2025

Root cause analysis

After finding some configuration options to disable direct buffers, I was able to figure out that my issue was related to the output buffers.

As I mentioned, the ArrayByteBufferPool.Tracking#dumpLeaks() method did not bring any indications: printing a log after the warmup was done (while the system was idling), no leaks could be detected.
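
For context, a minimal sketch of how such a tracking pool can be wired in and dumped; the no-arg ArrayByteBufferPool.Tracking constructor and the Server(ThreadPool, Scheduler, ByteBufferPool) overload are assumptions based on the Jetty 12 API.

import org.eclipse.jetty.io.ArrayByteBufferPool
import org.eclipse.jetty.server.Server
import org.eclipse.jetty.util.thread.QueuedThreadPool

// Use a tracking pool so buffers that were acquired but never released can be reported later.
val trackingPool = new ArrayByteBufferPool.Tracking()
val server = new Server(new QueuedThreadPool(), null, trackingPool)

// ... start the server, run the warmup, then while the system is idling:
println(trackingPool.dumpLeaks()) // produced no indications in this case (see above)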

Then I used the following code snippet to track the currently used direct buffer allocations:

import java.lang.management.{BufferPoolMXBean, ManagementFactory}
import java.util.{Timer, TimerTask}
import java.util.concurrent.TimeUnit
import org.apache.commons.io.FileUtils
import scala.jdk.CollectionConverters._

val timer = new Timer()

timer.schedule(
  new TimerTask {
    override def run(): Unit = {
      // The Jetty pool instance passed to the Server.
      log.debug("[byte-buffer-pool-debug] {}", byteBufferPool.toString)

      // JVM-wide buffer pool statistics (direct and mapped buffers).
      val builder = new StringBuilder
      builder.append("[gc-debug]")
      val directPools = ManagementFactory.getPlatformMXBeans(classOf[BufferPoolMXBean])
      for (pool <- directPools.asScala) {
        builder.append(
          s" [pool:${pool.getName}, count:${pool.getCount}, used:${FileUtils.byteCountToDisplaySize(pool.getMemoryUsed)}] ")
      }
      log.debug(builder.toString())
    }
  },
  0,
  TimeUnit.SECONDS.toMillis(1)
)

Then I tried all permutations of enabled/disabled direct byte buffers for input and output:

val httpConfiguration = new HttpConfiguration()
// Tested (true, true), (true, false), (false, true), (false, false)
httpConfiguration.setUseInputDirectByteBuffers(true)
httpConfiguration.setUseOutputDirectByteBuffers(true)
val h1ConnectionFactory = new HttpConnectionFactory(httpConfiguration)
val h2cConnectionFactory = new HTTP2CServerConnectionFactory(httpConfiguration)
val serverConnector = new ServerConnector(server, h1ConnectionFactory, h2cConnectionFactory)

The outcome was that it must be related to the output buffers, as only then was a significant increase in direct buffer usage noticeable in the debug logs.

Next I tried to figure out which parts of the Jetty code make use of the output buffers. I did another test with the same warmup tool (https://github.com/ExpediaGroup/mittens) as before, but this time with a proxy in between. I was able to identify that the reason must be client related, as this time the buffer usage was constant at 70 MB. Hence the problem was either connection related or, in the case of HTTP/2, maybe stream related.

Navigating to the HTTP2CServerConnectionFactory class, I stumbled upon the AbstractHTTP2ServerConnectionFactory#newConnection() method. Digging a bit deeper, I found out that HeaderGenerator#generate() was the right place to look for my issue. Within that method there is the statement RetainableByteBuffer buffer = getByteBufferPool().acquire(capacity, isUseDirectByteBuffers());.
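
To make the allocation pattern concrete, here is a simplified, self-contained illustration of what that acquire does with the default pool; it is a sketch of the mechanism described in this thread, not Jetty's actual code.

import org.eclipse.jetty.io.ArrayByteBufferPool

// The default pool only keeps buckets up to 64 KiB (see the maintainer comment below),
// so a capacity driven by SETTINGS_MAX_HEADER_LIST_SIZE = 10485760 is allocated on-the-fly.
val byteBufferPool = new ArrayByteBufferPool()
val advertisedMaxHeaderListSize = 10 * 1024 * 1024 // 10 MiB, as advertised by the Go http2 client
val buffer = byteBufferPool.acquire(advertisedMaxHeaderListSize, true) // true = direct buffer
try {
  // ... HPACK-encode the response headers into the buffer and write the HEADERS frame ...
} finally {
  buffer.release() // released, but far too large for the default buckets, so never reused
}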

After debugging my application locally, I was able to follow the stack trace to the method FrameGenerator#encode(). Following the stack trace further, I identified that the root cause of my issue was HTTP2Session#configure(): here a SETTINGS frame was the cause, which decoded a SETTINGS_MAX_HEADER_LIST_SIZE property of 10485760 and updated it in the HPackEncoder. After some reading about the HTTP/2 SETTINGS frame, I realized that this information must be coming from the HTTP/2 client, in my case the warmup tool Mittens. As Mittens is written in Go, to which I had already contributed, I was able to verify that this is the default value of the Go http2 library:

// MaxHeaderListSize is the http2 SETTINGS_MAX_HEADER_LIST_SIZE to
// send in the initial settings frame. It is how many bytes
// of response headers are allowed. Unlike the http2 spec, zero here
// means to use a default limit (currently 10MB). If you actually
// want to advertise an unlimited value to the peer, Transport
// interprets the highest possible value here (0xffffffff or 1<<32-1)
// to mean no limit.
MaxHeaderListSize uint32

After modifying the Mittens code base to use 8 KiB here and running the load test again, the issue was gone. Direct buffer allocation was now consistently < 10 MB; before, it went up to 7 GB on my machine.

I also tried some calls with curl, and there I could identify that it was related to how the HTTP/2 connection is established: the buffer allocation only happened when using curl --http2-prior-knowledge, while curl --http2 (upgrade) did not show the same issue.

I found a patch for the buffer issue in the Jetty PRs, which is already merged into the current code base:
#12690

I guess this will be released with Jetty v12.1.0.

Conclusion

In my opinion this was not just a minor issue, but also a potential memory leak and DoS attack vector when the Jetty web server is not behind a proxy.

sbordet (Contributor) commented Feb 5, 2025

@jrauschenbusch thanks for the detailed report!

The main cause of the issue you reported was a combination of the exceedingly large value for SETTINGS_MAX_HEADER_LIST_SIZE and the behavior of Jetty's ArrayByteBufferPool.

By default, ArrayByteBufferPool pools buffers only up to 64 KiB; when asked for larger buffers, it just allocates them on-the-fly and never pools them.
When requested to allocate 10 MiB buffers, it therefore allocated a fresh buffer every time, and none of them were ever pooled.

This is the cause of the large memory consumption you were seeing, and explains why changing the Mittens configuration to 8 KiB (a capacity that would be pooled by ArrayByteBufferPool) solved the issue.

We have filed #12689 to better track out-of-bucket allocations, and #12690 to cap SETTINGS_MAX_HEADER_LIST_SIZE.

Alternatively, you can use ArrayByteBufferPool.Quadratic, configured to pool at least up to capacities of 10 MiB.
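
For completeness, a minimal sketch of that alternative, assuming the ArrayByteBufferPool.Quadratic(minCapacity, maxCapacity, maxBucketSize) constructor and the Server(ThreadPool, Scheduler, ByteBufferPool) overload of the Jetty 12 API; the concrete values are illustrative only.

import org.eclipse.jetty.io.ArrayByteBufferPool
import org.eclipse.jetty.server.Server
import org.eclipse.jetty.util.thread.QueuedThreadPool

// Quadratic buckets (1 KiB, 2 KiB, 4 KiB, ...) up to a pooled capacity of 10 MiB, so even the
// large header buffers caused by SETTINGS_MAX_HEADER_LIST_SIZE = 10485760 would be pooled.
val minCapacity = 1024
val maxCapacity = 10 * 1024 * 1024
val maxBucketSize = 64 // at most 64 pooled buffers per bucket (illustrative value)
val quadraticPool = new ArrayByteBufferPool.Quadratic(minCapacity, maxCapacity, maxBucketSize)
val server = new Server(new QueuedThreadPool(), null, quadraticPool)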

I don't think this issue is a vulnerability; it is just a misconfiguration of the server for a given client that has a very aggressive configuration.

We have filed and resolved issues to protect Jetty against aggressive peer configurations, and report information about buffer pooling, so I consider this issue resolved.

sbordet closed this as completed on Feb 5, 2025
jrauschenbusch (Author) commented:
FYI: https://github.com/jetty/jetty.project/releases/tag/jetty-12.0.17 solved the issue, as it caps the value at the server-configured value, so a client is no longer able to exceed this setting.
