[Git][java-team/netty][trixie] 2 commits: Backport patches

Bastien Roucariès (@rouca) gitlab at salsa.debian.org
Sat Nov 29 11:46:21 GMT 2025



Bastien Roucariès pushed to branch trixie at Debian Java Maintainers / netty


Commits:
184728a3 by Bastien Roucariès at 2025-11-29T12:42:51+01:00
Backport patches

- - - - -
8c6650d7 by Bastien Roucariès at 2025-11-29T12:45:36+01:00
Fix changelog

- - - - -


5 changed files:

- debian/changelog
- debian/patches/CVE-2025-58056.patch
- + debian/patches/CVE-2025-58057.patch
- + debian/patches/CVE-2025-59419.patch
- debian/patches/series


Changes:

=====================================
debian/changelog
=====================================
@@ -15,6 +15,15 @@ netty (1:4.1.48-10+deb13u1) trixie-security; urgency=high
     no limit in how often it calls pull, decompressing
     data 64K bytes at a time. The buffers are saved in
     the output list, and remain reachable until OOM is hit.
+  * Fix CVE-2025-58057:
+    When supplied with specially crafted input, BrotliDecoder
+    and certain other decompression decoders will allocate
+    a large number of reachable byte buffers, which can lead
+    to denial of service. BrotliDecoder.decompress has no limit
+    in how often it calls pull, decompressing data 64K bytes at
+    a time. The buffers are saved in the output list, and remain
+    reachable until OOM is hit.
+    (Closes: #1113994)
   * Fix CVE-2025-59419 (Closes: #1118282)
     SMTP Command Injection Vulnerability Allowing Email Forgery
     An SMTP Command Injection (CRLF Injection) vulnerability
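
[Editor's note: the strict chunk-delimiter check introduced by the CVE-2025-58056 patch below can be illustrated outside Netty. The following is a minimal standalone sketch (plain Java, hypothetical names, not the actual HttpObjectDecoder code): after each chunk-data section, only an exact CR LF pair is accepted, so a lone LF — which lenient parsers or intermediary proxies may treat as a terminator, enabling request smuggling — is rejected.]

```java
// Illustrative sketch only; mirrors the idea of the patched
// READ_CHUNK_DELIMITER state, not Netty's real implementation.
public class ChunkDelimiterCheck {
    static final byte CR = 13, LF = 10;

    // True only when the two bytes starting at idx are exactly CR LF.
    // A lone LF (no preceding CR) fails the check.
    static boolean isStrictChunkDelimiter(byte[] buf, int idx) {
        return idx + 1 < buf.length && buf[idx] == CR && buf[idx + 1] == LF;
    }

    public static void main(String[] args) {
        byte[] strict  = {'x', 'x', '\r', '\n'}; // chunk-data "xx" + CR LF
        byte[] lenient = {'x', 'x', '\n'};       // chunk-data "xx" + lone LF
        System.out.println(isStrictChunkDelimiter(strict, 2));  // true
        System.out.println(isStrictChunkDelimiter(lenient, 2)); // false
    }
}
```

A lenient decoder that scans forward for any LF would accept both inputs, which is exactly the parser disagreement the w4ke.info write-up exploits.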


=====================================
debian/patches/CVE-2025-58056.patch
=====================================
@@ -1,48 +1,247 @@
-From: Norman Maurer <norman_maurer at apple.com>
-Date: Wed, 3 Sep 2025 10:35:05 +0200
-Subject: Merge commit from fork (#15612)
+From 39d3ecf8f0c57a7469ba927b2163d4cb4314b138 Mon Sep 17 00:00:00 2001
+From: Chris Vest <christianvest_hansen at apple.com>
+Date: Tue, 2 Sep 2025 23:25:09 -0700
+Subject: [PATCH] Merge commit from fork
+
+* Prevent HTTP request/response smuggling via chunk encoding
 
 Motivation:
+Transfer-Encoding: chunked has some strict rules around parsing CR LF delimiters.
+If we are too lenient, it can cause request/response smuggling issues when combined with proxies that are lenient in different ways.
+See https://w4ke.info/2025/06/18/funky-chunks.html for the details.
 
-We should ensure our decompressing decoders will fire their buffers
-through the pipeliner as fast as possible and so allow the user to take
-ownership of these as fast as possible. This is needed to reduce the
-risk of OOME as otherwise a small input might produce a large amount of
-data that can't be processed until all the data was decompressed in a
-loop. Beside this we also should ensure that other handlers that uses
-these decompressors will not buffer all of the produced data before
-processing it, which was true for HTTP and HTTP2.
+Modification:
+- Make sure that we reject chunks with chunk-extensions that contain lone Line Feed octets without their preceding Carriage Return octet.
+- Make sure that we issue HttpContent objects with decoding failures, if we decode a chunk and it isn't immediately followed by a CR LF octet pair.
 
-Modifications:
+Result:
+Smuggling requests/responses is no longer possible.
 
-- Adjust affected decoders (Brotli, Zstd and ZLib) to fire buffers
-  through the pipeline as soon as possible
-- Adjust HTTP / HTTP2 decompressors to do the same
-- Add testcase.
+Fixes https://github.com/netty/netty/issues/15522
 
-Result:
+* Enforce CR LF line separators for HTTP messages by default
+
+But also make it configurable through `HttpDecoderConfig`, and add a system property opt-out to change the default back.
+
+* Remove property for the name of the strict line parsing property
 
-Less risk of OOME when doing decompressing
+* Remove HeaderParser.parse overload that only takes a buffer argument
 
-Co-authored-by: yawkat <jonas.konrad at oracle.com>
-origin: backport, https://github.com/netty/netty/commit/39d3ecf8f0c57a7469ba927b2163d4cb4314b138
-bug: https://github.com/netty/netty/security/advisories/GHSA-3p8m-j85q-pgmj
-bug-debian: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1113994
+Origin: backport, https://github.com/netty/netty/commit/39d3ecf8f0c57a7469ba927b2163d4cb4314b138
+
+[Ubuntu note: This patch uses a new constructor to configure strict line
+parsing (since HttpDecoderConfig.java does not exist), and uses a new field to
+pass strictCRLFCheck from HeaderParser.parse() to HeaderParser.process().
+-- Edwin Jiang <edwin.jiang at canonical.com>]
 ---
- .../codec/compression/JZlibIntegrationTest.java    | 31 ++++++++++++++++++++++
- .../codec/compression/JdkZlibIntegrationTest.java  | 31 ++++++++++++++++++++++
- 2 files changed, 62 insertions(+)
- create mode 100644 codec/src/test/java/io/netty/handler/codec/compression/JZlibIntegrationTest.java
- create mode 100644 codec/src/test/java/io/netty/handler/codec/compression/JdkZlibIntegrationTest.java
+ codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java                |   71 +++-
+ codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java               |    9 
+ codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseDecoder.java              |    9 
+ codec-http/src/main/java/io/netty/handler/codec/http/InvalidChunkExtensionException.java   |   45 +++
+ codec-http/src/main/java/io/netty/handler/codec/http/InvalidChunkTerminationException.java |   45 +++
+ codec-http/src/main/java/io/netty/handler/codec/http/InvalidLineSeparatorException.java    |   48 +++
+ codec-http/src/test/java/io/netty/handler/codec/http/HttpRequestDecoderTest.java           |  148 ++++++++--
+ codec-http/src/test/java/io/netty/handler/codec/http/HttpResponseDecoderTest.java          |   62 ++++
+ 8 files changed, 403 insertions(+), 34 deletions(-)
+ create mode 100644 codec-http/src/main/java/io/netty/handler/codec/http/InvalidChunkExtensionException.java
+ create mode 100644 codec-http/src/main/java/io/netty/handler/codec/http/InvalidChunkTerminationException.java
+ create mode 100644 codec-http/src/main/java/io/netty/handler/codec/http/InvalidLineSeparatorException.java
 
-diff --git a/codec/src/test/java/io/netty/handler/codec/compression/JZlibIntegrationTest.java b/codec/src/test/java/io/netty/handler/codec/compression/JZlibIntegrationTest.java
-new file mode 100644
-index 0000000..252f134
+--- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java
++++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java
+@@ -27,6 +27,7 @@
+ import io.netty.handler.codec.TooLongFrameException;
+ import io.netty.util.ByteProcessor;
+ import io.netty.util.internal.AppendableCharSequence;
++import io.netty.util.internal.SystemPropertyUtil;
+ 
+ import java.util.List;
+ 
+@@ -104,11 +105,28 @@
+ public abstract class HttpObjectDecoder extends ByteToMessageDecoder {
+     public static final boolean DEFAULT_ALLOW_DUPLICATE_CONTENT_LENGTHS = false;
+     private static final String EMPTY_VALUE = "";
++    public static final boolean DEFAULT_STRICT_LINE_PARSING =
++            SystemPropertyUtil.getBoolean("io.netty.handler.codec.http.defaultStrictLineParsing", true);
++
++    private static final Runnable THROW_INVALID_CHUNK_EXTENSION = new Runnable() {
++        @Override
++        public void run() {
++            throw new InvalidChunkExtensionException();
++        }
++    };
++
++    private static final Runnable THROW_INVALID_LINE_SEPARATOR = new Runnable() {
++        @Override
++        public void run() {
++            throw new InvalidLineSeparatorException();
++        }
++    };
+ 
+     private final int maxChunkSize;
+     private final boolean chunkedSupported;
+     protected final boolean validateHeaders;
+     private final boolean allowDuplicateContentLengths;
++    private final Runnable defaultStrictCRLFCheck;
+     private final HeaderParser headerParser;
+     private final LineParser lineParser;
+ 
+@@ -180,6 +198,15 @@
+             int maxInitialLineLength, int maxHeaderSize, int maxChunkSize,
+             boolean chunkedSupported, boolean validateHeaders, int initialBufferSize,
+             boolean allowDuplicateContentLengths) {
++        this(maxInitialLineLength, maxHeaderSize, maxChunkSize, chunkedSupported,
++             validateHeaders, initialBufferSize, allowDuplicateContentLengths,
++             DEFAULT_STRICT_LINE_PARSING);
++    }
++
++    protected HttpObjectDecoder(
++            int maxInitialLineLength, int maxHeaderSize, int maxChunkSize,
++            boolean chunkedSupported, boolean validateHeaders, int initialBufferSize,
++            boolean allowDuplicateContentLengths, boolean strictLineParsing) {
+         checkPositive(maxInitialLineLength, "maxInitialLineLength");
+         checkPositive(maxHeaderSize, "maxHeaderSize");
+         checkPositive(maxChunkSize, "maxChunkSize");
+@@ -191,6 +218,7 @@
+         this.chunkedSupported = chunkedSupported;
+         this.validateHeaders = validateHeaders;
+         this.allowDuplicateContentLengths = allowDuplicateContentLengths;
++        defaultStrictCRLFCheck = strictLineParsing ? THROW_INVALID_LINE_SEPARATOR : null;
+     }
+ 
+     @Override
+@@ -203,7 +231,7 @@
+         case SKIP_CONTROL_CHARS:
+             // Fall-through
+         case READ_INITIAL: try {
+-            AppendableCharSequence line = lineParser.parse(buffer);
++            AppendableCharSequence line = lineParser.parse(buffer, defaultStrictCRLFCheck);
+             if (line == null) {
+                 return;
+             }
+@@ -313,11 +341,11 @@
+             return;
+         }
+         /**
+-         * everything else after this point takes care of reading chunked content. basically, read chunk size,
++         * Everything else after this point takes care of reading chunked content. Basically, read chunk size,
+          * read chunk, read and ignore the CRLF and repeat until 0
+          */
+         case READ_CHUNK_SIZE: try {
+-            AppendableCharSequence line = lineParser.parse(buffer);
++            AppendableCharSequence line = lineParser.parse(buffer, THROW_INVALID_CHUNK_EXTENSION);
+             if (line == null) {
+                 return;
+             }
+@@ -352,16 +380,16 @@
+             // fall-through
+         }
+         case READ_CHUNK_DELIMITER: {
+-            final int wIdx = buffer.writerIndex();
+-            int rIdx = buffer.readerIndex();
+-            while (wIdx > rIdx) {
+-                byte next = buffer.getByte(rIdx++);
+-                if (next == HttpConstants.LF) {
++            if (buffer.readableBytes() >= 2) {
++                int rIdx = buffer.readerIndex();
++                if (buffer.getByte(rIdx) == HttpConstants.CR &&
++                        buffer.getByte(rIdx + 1) == HttpConstants.LF) {
++                    buffer.skipBytes(2);
+                     currentState = State.READ_CHUNK_SIZE;
+-                    break;
++                } else {
++                    out.add(invalidChunk(buffer, new InvalidChunkTerminationException()));
+                 }
+             }
+-            buffer.readerIndex(rIdx);
+             return;
+         }
+         case READ_CHUNK_FOOTER: try {
+@@ -560,7 +588,7 @@
+         final HttpMessage message = this.message;
+         final HttpHeaders headers = message.headers();
+ 
+-        AppendableCharSequence line = headerParser.parse(buffer);
++        AppendableCharSequence line = headerParser.parse(buffer, defaultStrictCRLFCheck);
+         if (line == null) {
+             return null;
+         }
+@@ -580,7 +608,7 @@
+                     splitHeader(line);
+                 }
+ 
+-                line = headerParser.parse(buffer);
++                line = headerParser.parse(buffer, defaultStrictCRLFCheck);
+                 if (line == null) {
+                     return null;
+                 }
+@@ -661,7 +689,7 @@
+     }
+ 
+     private LastHttpContent readTrailingHeaders(ByteBuf buffer) {
+-        AppendableCharSequence line = headerParser.parse(buffer);
++        AppendableCharSequence line = headerParser.parse(buffer, defaultStrictCRLFCheck);
+         if (line == null) {
+             return null;
+         }
+@@ -701,7 +729,7 @@
+                 name = null;
+                 value = null;
+             }
+-            line = headerParser.parse(buffer);
++            line = headerParser.parse(buffer, defaultStrictCRLFCheck);
+             if (line == null) {
+                 return null;
+             }
+@@ -865,14 +893,19 @@
+         private final int maxLength;
+         private int size;
+ 
++        private Runnable strictCRLFCheck;
++
+         HeaderParser(AppendableCharSequence seq, int maxLength) {
+             this.seq = seq;
+             this.maxLength = maxLength;
+         }
+ 
+-        public AppendableCharSequence parse(ByteBuf buffer) {
++        public AppendableCharSequence parse(ByteBuf buffer, Runnable strictCRLFCheck) {
+             final int oldSize = size;
+             seq.reset();
++
++            this.strictCRLFCheck = strictCRLFCheck;
++
+             int i = buffer.forEachByte(this);
+             if (i == -1) {
+                 size = oldSize;
+@@ -895,6 +928,10 @@
+                 if (len >= 1 && seq.charAtUnsafe(len - 1) == HttpConstants.CR) {
+                     -- size;
+                     seq.setLength(len - 1);
++                } else {
++                    if (strictCRLFCheck != null) {
++                        strictCRLFCheck.run();
++                    }
+                 }
+                 return false;
+             }
+@@ -927,9 +964,9 @@
+         }
+ 
+         @Override
+-        public AppendableCharSequence parse(ByteBuf buffer) {
++        public AppendableCharSequence parse(ByteBuf buffer, Runnable strictCRLFCheck) {
+             reset();
+-            return super.parse(buffer);
++            return super.parse(buffer, strictCRLFCheck);
+         }
+ 
+         @Override
 --- /dev/null
-+++ b/codec/src/test/java/io/netty/handler/codec/compression/JZlibIntegrationTest.java
-@@ -0,0 +1,31 @@
++++ b/codec-http/src/main/java/io/netty/handler/codec/http/InvalidChunkExtensionException.java
+@@ -0,0 +1,45 @@
 +/*
-+ * Copyright 2014 The Netty Project
++ * Copyright 2025 The Netty Project
 + *
 + * The Netty Project licenses this file to you under the Apache License,
 + * version 2.0 (the "License"); you may not use this file except in compliance
@@ -56,30 +255,41 @@ index 0000000..252f134
 + * License for the specific language governing permissions and limitations
 + * under the License.
 + */
-+package io.netty.handler.codec.compression;
++package io.netty.handler.codec.http;
 +
-+import io.netty.channel.embedded.EmbeddedChannel;
++import io.netty.handler.codec.CorruptedFrameException;
 +
-+public class JZlibIntegrationTest extends AbstractIntegrationTest {
++/**
++ * Thrown when HTTP chunk extensions could not be parsed, typically due to incorrect use of CR LF delimiters.
++ * <p>
++ * <a href="https://datatracker.ietf.org/doc/html/rfc9112#name-chunked-transfer-coding">RFC 9112</a>
++ * specifies that chunk header lines must be terminated in a CR LF pair,
++ * and that a lone LF octet is not allowed within the chunk header line.
++ */
++public final class InvalidChunkExtensionException extends CorruptedFrameException {
++    private static final long serialVersionUID = 536224937231200736L;
 +
-+    @Override
-+    protected EmbeddedChannel createEncoder() {
-+        return new EmbeddedChannel(new JZlibEncoder());
++    public InvalidChunkExtensionException() {
++        super("Line Feed must be preceded by Carriage Return when terminating HTTP chunk header lines");
 +    }
 +
-+    @Override
-+    protected EmbeddedChannel createDecoder() {
-+        return new EmbeddedChannel(new JZlibDecoder(0));
++    public InvalidChunkExtensionException(String message, Throwable cause) {
++        super(message, cause);
++    }
++
++    public InvalidChunkExtensionException(String message) {
++        super(message);
++    }
++
++    public InvalidChunkExtensionException(Throwable cause) {
++        super(cause);
 +    }
 +}
-diff --git a/codec/src/test/java/io/netty/handler/codec/compression/JdkZlibIntegrationTest.java b/codec/src/test/java/io/netty/handler/codec/compression/JdkZlibIntegrationTest.java
-new file mode 100644
-index 0000000..6dca41d
 --- /dev/null
-+++ b/codec/src/test/java/io/netty/handler/codec/compression/JdkZlibIntegrationTest.java
-@@ -0,0 +1,31 @@
++++ b/codec-http/src/main/java/io/netty/handler/codec/http/InvalidChunkTerminationException.java
+@@ -0,0 +1,45 @@
 +/*
-+ * Copyright 2014 The Netty Project
++ * Copyright 2025 The Netty Project
 + *
 + * The Netty Project licenses this file to you under the Apache License,
 + * version 2.0 (the "License"); you may not use this file except in compliance
@@ -93,19 +303,422 @@ index 0000000..6dca41d
 + * License for the specific language governing permissions and limitations
 + * under the License.
 + */
-+package io.netty.handler.codec.compression;
++package io.netty.handler.codec.http;
++
++import io.netty.handler.codec.CorruptedFrameException;
++
++/**
++ * Thrown when HTTP chunks could not be parsed, typically due to incorrect use of CR LF delimiters.
++ * <p>
++ * <a href="https://datatracker.ietf.org/doc/html/rfc9112#name-chunked-transfer-coding">RFC 9112</a>
++ * specifies that chunk bodies must be terminated in a CR LF pair,
++ * and that the delimiter must follow the given chunk-size number of octets in chunk-data.
++ */
++public final class InvalidChunkTerminationException extends CorruptedFrameException {
++    private static final long serialVersionUID = 536224937231200736L;
 +
-+import io.netty.channel.embedded.EmbeddedChannel;
++    public InvalidChunkTerminationException() {
++        super("Chunk data sections must be terminated by a CR LF octet pair");
++    }
 +
-+public class JdkZlibIntegrationTest extends AbstractIntegrationTest {
++    public InvalidChunkTerminationException(String message, Throwable cause) {
++        super(message, cause);
++    }
 +
-+    @Override
-+    protected EmbeddedChannel createEncoder() {
-+        return new EmbeddedChannel(new JdkZlibEncoder());
++    public InvalidChunkTerminationException(String message) {
++        super(message);
 +    }
 +
-+    @Override
-+    protected EmbeddedChannel createDecoder() {
-+        return new EmbeddedChannel(new JdkZlibDecoder(0));
++    public InvalidChunkTerminationException(Throwable cause) {
++        super(cause);
 +    }
 +}
+--- /dev/null
++++ b/codec-http/src/main/java/io/netty/handler/codec/http/InvalidLineSeparatorException.java
+@@ -0,0 +1,48 @@
++/*
++ * Copyright 2025 The Netty Project
++ *
++ * The Netty Project licenses this file to you under the Apache License,
++ * version 2.0 (the "License"); you may not use this file except in compliance
++ * with the License. You may obtain a copy of the License at:
++ *
++ *   https://www.apache.org/licenses/LICENSE-2.0
++ *
++ * Unless required by applicable law or agreed to in writing, software
++ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
++ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
++ * License for the specific language governing permissions and limitations
++ * under the License.
++ */
++package io.netty.handler.codec.http;
++
++import io.netty.handler.codec.DecoderException;
++
++/**
++ * Thrown when {@linkplain HttpDecoderConfig#isStrictLineParsing() strict line parsing} is enabled,
++ * and HTTP start- and header field-lines are not separated by CR LF octet pairs.
++ * <p>
++ * Strict line parsing is enabled by default since Netty 4.1.124 and 4.2.4.
++ * This default can be overridden by setting the {@value HttpObjectDecoder#PROP_DEFAULT_STRICT_LINE_PARSING} system
++ * property to {@code false}.
++ * <p>
++ * See <a href="https://datatracker.ietf.org/doc/html/rfc9112#name-message-format">RFC 9112 Section 2.1</a>.
++ */
++public final class InvalidLineSeparatorException extends DecoderException {
++    private static final long serialVersionUID = 536224937231200736L;
++
++    public InvalidLineSeparatorException() {
++        super("Line Feed must be preceded by Carriage Return when terminating HTTP start- and header field-lines");
++    }
++
++    public InvalidLineSeparatorException(String message, Throwable cause) {
++        super(message, cause);
++    }
++
++    public InvalidLineSeparatorException(String message) {
++        super(message);
++    }
++
++    public InvalidLineSeparatorException(Throwable cause) {
++        super(cause);
++    }
++}
+--- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpRequestDecoderTest.java
++++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpRequestDecoderTest.java
+@@ -19,6 +19,7 @@
+ import io.netty.buffer.Unpooled;
+ import io.netty.channel.embedded.EmbeddedChannel;
+ import io.netty.handler.codec.TooLongFrameException;
++import io.netty.handler.codec.DecoderResult;
+ import io.netty.util.AsciiString;
+ import io.netty.util.CharsetUtil;
+ import org.junit.Test;
+@@ -43,6 +44,12 @@
+     private static final byte[] CONTENT_MIXED_DELIMITERS = createContent("\r\n", "\n");
+     private static final int CONTENT_LENGTH = 8;
+ 
++    private static final int DEFAULT_MAX_INITIAL_LINE_LENGTH = 4096;
++    private static final int DEFAULT_MAX_HEADER_SIZE = 8192;
++    private static final int DEFAULT_MAX_CHUNK_SIZE = 8192;
++    private static final boolean DEFAULT_VALIDATE_HEADERS = true;
++    private static final int DEFAULT_INITIAL_BUFFER_SIZE = 128;
++
+     private static byte[] createContent(String... lineDelimiters) {
+         String lineDelimiter;
+         String lineDelimiter2;
+@@ -80,18 +87,45 @@
+         testDecodeWholeRequestAtOnce(CONTENT_MIXED_DELIMITERS);
+     }
+ 
++    @Test
++    public void testDecodeWholeRequestAtOnceFailsWithLFDelimiters() {
++        testDecodeWholeRequestAtOnce(CONTENT_LF_DELIMITERS, DEFAULT_MAX_HEADER_SIZE, true, true);
++    }
++
++    @Test
++    public void testDecodeWholeRequestAtOnceFailsWithMixedDelimiters() {
++        testDecodeWholeRequestAtOnce(CONTENT_MIXED_DELIMITERS, DEFAULT_MAX_HEADER_SIZE, true, true);
++    }
++
+     private static void testDecodeWholeRequestAtOnce(byte[] content) {
+-        EmbeddedChannel channel = new EmbeddedChannel(new HttpRequestDecoder());
++        testDecodeWholeRequestAtOnce(content, DEFAULT_MAX_HEADER_SIZE, false, false);
++    }
++
++    private static void testDecodeWholeRequestAtOnce(byte[] content, int maxHeaderSize, boolean strictLineParsing,
++                                                     boolean expectFailure) {
++        EmbeddedChannel channel =
++                new EmbeddedChannel(new HttpRequestDecoder(DEFAULT_MAX_INITIAL_LINE_LENGTH,
++                                                           maxHeaderSize,
++                                                           DEFAULT_MAX_CHUNK_SIZE,
++                                                           DEFAULT_VALIDATE_HEADERS,
++                                                           DEFAULT_INITIAL_BUFFER_SIZE,
++                                                           strictLineParsing));
+         assertTrue(channel.writeInbound(Unpooled.copiedBuffer(content)));
+         HttpRequest req = channel.readInbound();
+         assertNotNull(req);
+-        checkHeaders(req.headers());
+-        LastHttpContent c = channel.readInbound();
+-        assertEquals(CONTENT_LENGTH, c.content().readableBytes());
+-        assertEquals(
+-                Unpooled.wrappedBuffer(content, content.length - CONTENT_LENGTH, CONTENT_LENGTH),
+-                c.content().readSlice(CONTENT_LENGTH));
+-        c.release();
++        if (expectFailure) {
++            assertTrue(req.decoderResult().isFailure());
++            assertThat(req.decoderResult().cause(), instanceOf(InvalidLineSeparatorException.class));
++        } else {
++            assertFalse(req.decoderResult().isFailure());
++            checkHeaders(req.headers());
++            LastHttpContent c = channel.readInbound();
++            assertEquals(CONTENT_LENGTH, c.content().readableBytes());
++            assertEquals(
++                    Unpooled.wrappedBuffer(content, content.length - CONTENT_LENGTH, CONTENT_LENGTH),
++                    c.content().readSlice(CONTENT_LENGTH));
++            c.release();
++        }
+ 
+         assertFalse(channel.finish());
+         assertNull(channel.readInbound());
+@@ -116,27 +150,45 @@
+ 
+     @Test
+     public void testDecodeWholeRequestInMultipleStepsCRLFDelimiters() {
+-        testDecodeWholeRequestInMultipleSteps(CONTENT_CRLF_DELIMITERS);
++        testDecodeWholeRequestInMultipleSteps(CONTENT_CRLF_DELIMITERS, true, false);
+     }
+ 
+     @Test
+     public void testDecodeWholeRequestInMultipleStepsLFDelimiters() {
+-        testDecodeWholeRequestInMultipleSteps(CONTENT_LF_DELIMITERS);
++        testDecodeWholeRequestInMultipleSteps(CONTENT_LF_DELIMITERS, false, false);
+     }
+ 
+     @Test
+     public void testDecodeWholeRequestInMultipleStepsMixedDelimiters() {
+-        testDecodeWholeRequestInMultipleSteps(CONTENT_MIXED_DELIMITERS);
++        testDecodeWholeRequestInMultipleSteps(CONTENT_MIXED_DELIMITERS, false, false);
+     }
+ 
+-    private static void testDecodeWholeRequestInMultipleSteps(byte[] content) {
++    @Test
++    public void testDecodeWholeRequestInMultipleStepsFailsWithLFDelimiters() {
++        testDecodeWholeRequestInMultipleSteps(CONTENT_LF_DELIMITERS, true, true);
++    }
++
++    @Test
++    public void testDecodeWholeRequestInMultipleStepsFailsWithMixedDelimiters() {
++        testDecodeWholeRequestInMultipleSteps(CONTENT_MIXED_DELIMITERS, true, true);
++    }
++
++    private static void testDecodeWholeRequestInMultipleSteps(
++            byte[] content, boolean strictLineParsing, boolean expectFailure) {
+         for (int i = 1; i < content.length; i++) {
+-            testDecodeWholeRequestInMultipleSteps(content, i);
++            testDecodeWholeRequestInMultipleSteps(content, i, strictLineParsing, expectFailure);
+         }
+     }
+ 
+-    private static void testDecodeWholeRequestInMultipleSteps(byte[] content, int fragmentSize) {
+-        EmbeddedChannel channel = new EmbeddedChannel(new HttpRequestDecoder());
++    private static void testDecodeWholeRequestInMultipleSteps(
++            byte[] content, int fragmentSize, boolean strictLineParsing, boolean expectFailure) {
++        EmbeddedChannel channel =
++                new EmbeddedChannel(new HttpRequestDecoder(DEFAULT_MAX_INITIAL_LINE_LENGTH,
++                                                           DEFAULT_MAX_HEADER_SIZE,
++                                                           DEFAULT_MAX_CHUNK_SIZE,
++                                                           DEFAULT_VALIDATE_HEADERS,
++                                                           DEFAULT_INITIAL_BUFFER_SIZE,
++                                                           strictLineParsing));
+         int headerLength = content.length - CONTENT_LENGTH;
+ 
+         // split up the header
+@@ -158,6 +210,12 @@
+ 
+         HttpRequest req = channel.readInbound();
+         assertNotNull(req);
++        if (expectFailure) {
++            assertTrue(req.decoderResult().isFailure());
++            assertThat(req.decoderResult().cause(), instanceOf(InvalidLineSeparatorException.class));
++            return; // No more messages will be produced.
++        }
++        assertFalse(req.decoderResult().isFailure());
+         checkHeaders(req.headers());
+ 
+         for (int i = CONTENT_LENGTH; i > 1; i --) {
+@@ -531,6 +589,66 @@
+     }
+ 
+     @Test
++    public void mustRejectImproperlyTerminatedChunkExtensions() throws Exception {
++        // See full explanation: https://w4ke.info/2025/06/18/funky-chunks.html
++        String requestStr = "GET /one HTTP/1.1\r\n" +
++                "Host: localhost\r\n" +
++                "Transfer-Encoding: chunked\r\n\r\n" +
++                "2;\n" + // Chunk size followed by illegal single newline (not preceded by carraige return)
++                "xx\r\n" +
++                "45\r\n" +
++                "0\r\n\r\n" +
++                "GET /two HTTP/1.1\r\n" +
++                "Host: localhost\r\n" +
++                "Transfer-Encoding: chunked\r\n\r\n" +
++                "0\r\n\r\n";
++        EmbeddedChannel channel = new EmbeddedChannel(new HttpRequestDecoder());
++        assertTrue(channel.writeInbound(Unpooled.copiedBuffer(requestStr, CharsetUtil.US_ASCII)));
++        HttpRequest request = channel.readInbound();
++        assertFalse(request.decoderResult().isFailure()); // We parse the headers just fine.
++        assertTrue(request.headers().names().contains("Transfer-Encoding"));
++        assertTrue(request.headers().contains("Transfer-Encoding", "chunked", false));
++        HttpContent content = channel.readInbound();
++        DecoderResult decoderResult = content.decoderResult();
++        assertTrue(decoderResult.isFailure()); // But parsing the chunk must fail.
++        assertThat(decoderResult.cause(), instanceOf(InvalidChunkExtensionException.class));
++        content.release();
++        assertFalse(channel.finish());
++    }
++
++    @Test
++    public void mustRejectImproperlyTerminatedChunkBodies() throws Exception {
++        // See full explanation: https://w4ke.info/2025/06/18/funky-chunks.html
++        String requestStr = "GET /one HTTP/1.1\r\n" +
++                "Host: localhost\r\n" +
++                "Transfer-Encoding: chunked\r\n\r\n" +
++                "5\r\n" +
++                "AAAAAXX" + // Chunk body contains extra (XX) bytes, and no CRLF terminator.
++                "45\r\n" +
++                "0\r\n" +
++                "GET /two HTTP/1.1\r\n" +
++                "Host: localhost\r\n" +
++                "Transfer-Encoding: chunked\r\n\r\n" +
++                "0\r\n\r\n";
++        EmbeddedChannel channel = new EmbeddedChannel(new HttpRequestDecoder());
++        assertTrue(channel.writeInbound(Unpooled.copiedBuffer(requestStr, CharsetUtil.US_ASCII)));
++        HttpRequest request = channel.readInbound();
++        assertFalse(request.decoderResult().isFailure()); // We parse the headers just fine.
++        assertTrue(request.headers().names().contains("Transfer-Encoding"));
++        assertTrue(request.headers().contains("Transfer-Encoding", "chunked", false));
++        HttpContent content = channel.readInbound();
++        assertFalse(content.decoderResult().isFailure()); // We parse the content promised by the chunk length.
++        content.release();
++
++        content = channel.readInbound();
++        DecoderResult decoderResult = content.decoderResult();
++        assertTrue(decoderResult.isFailure()); // But then parsing the chunk delimiter must fail.
++        assertThat(decoderResult.cause(), instanceOf(InvalidChunkTerminationException.class));
++        content.release();
++        assertFalse(channel.finish());
++    }
++
++    @Test
+     public void testContentLengthHeaderAndChunked() {
+         String requestStr = "POST / HTTP/1.1\r\n" +
+                 "Host: example.com\r\n" +
+--- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpResponseDecoderTest.java
++++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpResponseDecoderTest.java
+@@ -18,6 +18,7 @@
+ import io.netty.buffer.ByteBuf;
+ import io.netty.buffer.Unpooled;
+ import io.netty.channel.embedded.EmbeddedChannel;
++import io.netty.handler.codec.DecoderResult;
+ import io.netty.handler.codec.PrematureChannelClosureException;
+ import io.netty.handler.codec.TooLongFrameException;
+ import io.netty.util.CharsetUtil;
+@@ -672,6 +673,63 @@
+     }
+ 
+     @Test
++    public void mustRejectImproperlyTerminatedChunkExtensions() throws Exception {
++        // See full explanation: https://w4ke.info/2025/06/18/funky-chunks.html
++        String requestStr = "HTTP/1.1 200 OK\r\n" +
++                "Transfer-Encoding: chunked\r\n" +
++                "\r\n" +
++                "2;\n" + // Chunk size followed by illegal single newline (not preceded by carriage return)
++                "xx\r\n" +
++                "1D\r\n" +
++                "0\r\n\r\n" +
++                "HTTP/1.1 200 OK\r\n" +
++                "Transfer-Encoding: chunked\r\n\r\n" +
++                "0\r\n\r\n";
++        EmbeddedChannel channel = new EmbeddedChannel(new HttpResponseDecoder());
++        assertTrue(channel.writeInbound(Unpooled.copiedBuffer(requestStr, CharsetUtil.US_ASCII)));
++        HttpResponse response = channel.readInbound();
++        assertFalse(response.decoderResult().isFailure()); // We parse the headers just fine.
++        assertTrue(response.headers().names().contains("Transfer-Encoding"));
++        assertTrue(response.headers().contains("Transfer-Encoding", "chunked", false));
++        HttpContent content = channel.readInbound();
++        DecoderResult decoderResult = content.decoderResult();
++        assertTrue(decoderResult.isFailure()); // But parsing the chunk must fail.
++        assertThat(decoderResult.cause(), instanceOf(InvalidChunkExtensionException.class));
++        content.release();
++        assertFalse(channel.finish());
++    }
++
++    @Test
++    public void mustRejectImproperlyTerminatedChunkBodies() throws Exception {
++        // See full explanation: https://w4ke.info/2025/06/18/funky-chunks.html
++        String requestStr = "HTTP/1.1 200 OK\r\n" +
++                "Transfer-Encoding: chunked\r\n\r\n" +
++                "5\r\n" +
++                "AAAAXX" + // Chunk body contains extra (XX) bytes, and no CRLF terminator.
++                "1D\r\n" +
++                "0\r\n" +
++                "HTTP/1.1 200 OK\r\n" +
++                "Transfer-Encoding: chunked\r\n\r\n" +
++                "0\r\n\r\n";
++        EmbeddedChannel channel = new EmbeddedChannel(new HttpResponseDecoder());
++        assertTrue(channel.writeInbound(Unpooled.copiedBuffer(requestStr, CharsetUtil.US_ASCII)));
++        HttpResponse response = channel.readInbound();
++        assertFalse(response.decoderResult().isFailure()); // We parse the headers just fine.
++        assertTrue(response.headers().names().contains("Transfer-Encoding"));
++        assertTrue(response.headers().contains("Transfer-Encoding", "chunked", false));
++        HttpContent content = channel.readInbound();
++        assertFalse(content.decoderResult().isFailure()); // We parse the content promised by the chunk length.
++        content.release();
++
++        content = channel.readInbound();
++        DecoderResult decoderResult = content.decoderResult();
++        assertTrue(decoderResult.isFailure()); // But then parsing the chunk delimiter must fail.
++        assertThat(decoderResult.cause(), instanceOf(InvalidChunkTerminationException.class));
++        content.release();
++        assertFalse(channel.finish());
++    }
++
++    @Test
+     public void testConnectionClosedBeforeHeadersReceived() {
+         EmbeddedChannel channel = new EmbeddedChannel(new HttpResponseDecoder());
+         String responseInitialLine =
+@@ -718,7 +776,7 @@
+         EmbeddedChannel channel = new EmbeddedChannel(new HttpResponseDecoder());
+         String requestStr = "HTTP/1.1 200 OK\r\n" +
+                 "Transfer-Encoding : chunked\r\n" +
+-                "Host: netty.io\n\r\n";
++                "Host: netty.io\r\n\r\n";
+ 
+         assertTrue(channel.writeInbound(Unpooled.copiedBuffer(requestStr, CharsetUtil.US_ASCII)));
+         HttpResponse response = channel.readInbound();
+@@ -787,7 +845,7 @@
+         testHeaderNameEndsWithControlChar(0x0c);
+     }
+ 
+-    private void testHeaderNameEndsWithControlChar(int controlChar) {
++    private static void testHeaderNameEndsWithControlChar(int controlChar) {
+         ByteBuf responseBuffer = Unpooled.buffer();
+         responseBuffer.writeCharSequence("HTTP/1.1 200 OK\r\n" +
+                 "Host: netty.io\r\n", CharsetUtil.US_ASCII);
+--- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java
++++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpRequestDecoder.java
+@@ -81,6 +81,15 @@
+         super(maxInitialLineLength, maxHeaderSize, maxChunkSize, true, validateHeaders, initialBufferSize);
+     }
+ 
++    public HttpRequestDecoder(
++            int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, boolean validateHeaders,
++            int initialBufferSize, boolean strictLineParsing) {
++        super(maxInitialLineLength, maxHeaderSize, maxChunkSize, true,
++              validateHeaders, initialBufferSize,
++              HttpObjectDecoder.DEFAULT_ALLOW_DUPLICATE_CONTENT_LENGTHS,
++              strictLineParsing);
++    }
++
+     @Override
+     protected HttpMessage createMessage(String[] initialLine) throws Exception {
+         return new DefaultHttpRequest(
+--- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseDecoder.java
++++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpResponseDecoder.java
+@@ -112,6 +112,15 @@
+         super(maxInitialLineLength, maxHeaderSize, maxChunkSize, true, validateHeaders, initialBufferSize);
+     }
+ 
++    public HttpResponseDecoder(
++            int maxInitialLineLength, int maxHeaderSize, int maxChunkSize, boolean validateHeaders,
++            int initialBufferSize, boolean strictLineParsing) {
++        super(maxInitialLineLength, maxHeaderSize, maxChunkSize, true,
++              validateHeaders, initialBufferSize,
++              HttpObjectDecoder.DEFAULT_ALLOW_DUPLICATE_CONTENT_LENGTHS,
++              strictLineParsing);
++    }
++
+     @Override
+     protected HttpMessage createMessage(String[] initialLine) {
+         return new DefaultHttpResponse(


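The tests above exercise the "funky chunks" desynchronization vector: a chunk-size line such as `2;\n` is terminated by a bare LF instead of CRLF, and two parsers that disagree on whether that counts as a line ending can be smuggled past each other. As an illustrative sketch only (this is not netty's `HttpObjectDecoder`, and the class name `StrictChunkSizeLine` is made up for the example), a strict chunk-size-line parser in plain Java looks like this:

```java
// Illustrative sketch of strict chunk-size-line parsing (not netty code):
// per the chunked transfer coding, the size line is "HEX[;extensions]\r\n".
// A bare LF, as in "2;\n", must be rejected, otherwise lenient and strict
// parsers disagree about where the chunk body starts.
public final class StrictChunkSizeLine {
    /** Parses "HEX[;extensions]\r\n" and returns the chunk size. */
    public static int parse(byte[] line) {
        int i = 0;
        int size = 0;
        // hex digits of the chunk size
        while (i < line.length && isHex(line[i])) {
            size = size * 16 + hexVal(line[i]);
            i++;
        }
        if (i == 0) {
            throw new IllegalArgumentException("missing chunk size");
        }
        // optional chunk extensions up to CR; a bare LF here is illegal
        while (i < line.length && line[i] != '\r') {
            if (line[i] == '\n') {
                throw new IllegalArgumentException("bare LF in chunk size line");
            }
            i++;
        }
        // require the terminating CRLF
        if (i + 1 >= line.length || line[i] != '\r' || line[i + 1] != '\n') {
            throw new IllegalArgumentException("chunk size line not CRLF-terminated");
        }
        return size;
    }

    private static boolean isHex(byte b) {
        return (b >= '0' && b <= '9') || (b >= 'a' && b <= 'f') || (b >= 'A' && b <= 'F');
    }

    private static int hexVal(byte b) {
        if (b >= '0' && b <= '9') {
            return b - '0';
        }
        return Character.toLowerCase((char) b) - 'a' + 10;
    }
}
```

With this shape, `"1D\r\n"` parses to 29 while `"2;\n"` throws, which is the behavior the `mustRejectImproperlyTerminatedChunkExtensions` tests above pin down for the real decoder.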
=====================================
debian/patches/CVE-2025-58057.patch
=====================================
@@ -0,0 +1,930 @@
+From: Norman Maurer <norman_maurer at apple.com>
+Date: Wed, 3 Sep 2025 10:35:05 +0200
+Subject: [PATCH] Merge commit from fork (#15612)
+
+Motivation:
+
+We should ensure our decompressing decoders fire their buffers through
+the pipeline as soon as possible, so the user can take ownership of
+them as early as possible. This reduces the risk of OOME, as otherwise
+a small input might produce a large amount of data that cannot be
+processed until all of it has been decompressed in a loop. Besides
+this, we should also ensure that other handlers that use these
+decompressors do not buffer all of the produced data before processing
+it, which was the case for HTTP and HTTP/2.
+
+Modifications:
+
+- Adjust affected decoders (Brotli, Zstd and ZLib) to fire buffers
+through the pipeline as soon as possible
+- Adjust HTTP / HTTP2 decompressors to do the same
+- Add test cases.
+
+Result:
+
+Less risk of OOME when decompressing
+
+Co-authored-by: yawkat <jonas.konrad at oracle.com>
+
+origin: backport, https://github.com/netty/netty/commit/34894ac73b02efefeacd9c0972780b32dc3de04f
+---
+ .../handler/codec/http/HttpContentDecoder.java     | 239 +++++++++++----------
+ .../codec/http/HttpContentDecompressorTest.java    |  88 ++++++++
+ .../http2/DelegatingDecompressorFrameListener.java | 177 +++++++--------
+ .../handler/codec/compression/JZlibDecoder.java    |  32 ++-
+ .../handler/codec/compression/JdkZlibDecoder.java  |  45 ++--
+ .../codec/compression/AbstractIntegrationTest.java |  62 ++++++
+ 6 files changed, 416 insertions(+), 227 deletions(-)
+
+diff --git a/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentDecoder.java b/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentDecoder.java
+index d2513e4..3c43900 100644
+--- a/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentDecoder.java
++++ b/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentDecoder.java
+@@ -17,6 +17,7 @@ package io.netty.handler.codec.http;
+ 
+ import io.netty.buffer.ByteBuf;
+ import io.netty.channel.ChannelHandlerContext;
++import io.netty.channel.ChannelInboundHandlerAdapter;
+ import io.netty.channel.embedded.EmbeddedChannel;
+ import io.netty.handler.codec.CodecException;
+ import io.netty.handler.codec.DecoderResult;
+@@ -52,125 +53,136 @@ public abstract class HttpContentDecoder extends MessageToMessageDecoder<HttpObj
+     private EmbeddedChannel decoder;
+     private boolean continueResponse;
+     private boolean needRead = true;
++    private ByteBufForwarder forwarder;
+ 
+     @Override
+     protected void decode(ChannelHandlerContext ctx, HttpObject msg, List<Object> out) throws Exception {
+-        try {
+-            if (msg instanceof HttpResponse && ((HttpResponse) msg).status().code() == 100) {
++        needRead = true;
++        if (msg instanceof HttpResponse && ((HttpResponse) msg).status().code() == 100) {
+ 
+-                if (!(msg instanceof LastHttpContent)) {
+-                    continueResponse = true;
+-                }
+-                // 100-continue response must be passed through.
+-                out.add(ReferenceCountUtil.retain(msg));
+-                return;
++            if (!(msg instanceof LastHttpContent)) {
++                continueResponse = true;
+             }
++            // 100-continue response must be passed through.
++            needRead = false;
++            ctx.fireChannelRead(ReferenceCountUtil.retain(msg));
++            return;
++        }
+ 
+-            if (continueResponse) {
+-                if (msg instanceof LastHttpContent) {
+-                    continueResponse = false;
+-                }
+-                // 100-continue response must be passed through.
+-                out.add(ReferenceCountUtil.retain(msg));
+-                return;
++        if (continueResponse) {
++            if (msg instanceof LastHttpContent) {
++                continueResponse = false;
+             }
++            needRead = false;
++            ctx.fireChannelRead(ReferenceCountUtil.retain(msg));
++            return;
++        }
+ 
+-            if (msg instanceof HttpMessage) {
+-                cleanup();
+-                final HttpMessage message = (HttpMessage) msg;
+-                final HttpHeaders headers = message.headers();
++        if (msg instanceof HttpMessage) {
++            cleanup();
++            final HttpMessage message = (HttpMessage) msg;
++            final HttpHeaders headers = message.headers();
+ 
+-                // Determine the content encoding.
+-                String contentEncoding = headers.get(HttpHeaderNames.CONTENT_ENCODING);
+-                if (contentEncoding != null) {
+-                    contentEncoding = contentEncoding.trim();
++            // Determine the content encoding.
++            String contentEncoding = headers.get(HttpHeaderNames.CONTENT_ENCODING);
++            if (contentEncoding != null) {
++                contentEncoding = contentEncoding.trim();
++            } else {
++                String transferEncoding = headers.get(HttpHeaderNames.TRANSFER_ENCODING);
++                if (transferEncoding != null) {
++                    int idx = transferEncoding.indexOf(",");
++                    if (idx != -1) {
++                        contentEncoding = transferEncoding.substring(0, idx).trim();
++                    } else {
++                        contentEncoding = transferEncoding.trim();
++                    }
+                 } else {
+                     contentEncoding = IDENTITY;
+                 }
+-                decoder = newContentDecoder(contentEncoding);
+-
+-                if (decoder == null) {
+-                    if (message instanceof HttpContent) {
+-                        ((HttpContent) message).retain();
+-                    }
+-                    out.add(message);
+-                    return;
+-                }
++            }
++            decoder = newContentDecoder(contentEncoding);
+ 
+-                // Remove content-length header:
+-                // the correct value can be set only after all chunks are processed/decoded.
+-                // If buffering is not an issue, add HttpObjectAggregator down the chain, it will set the header.
+-                // Otherwise, rely on LastHttpContent message.
+-                if (headers.contains(HttpHeaderNames.CONTENT_LENGTH)) {
+-                    headers.remove(HttpHeaderNames.CONTENT_LENGTH);
+-                    headers.set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED);
++            if (decoder == null) {
++                if (message instanceof HttpContent) {
++                    ((HttpContent) message).retain();
+                 }
+-                // Either it is already chunked or EOF terminated.
+-                // See https://github.com/netty/netty/issues/5892
++                needRead = false;
++                ctx.fireChannelRead(message);
++                return;
++            }
++            decoder.pipeline().addLast(forwarder);
+ 
+-                // set new content encoding,
+-                CharSequence targetContentEncoding = getTargetContentEncoding(contentEncoding);
+-                if (HttpHeaderValues.IDENTITY.contentEquals(targetContentEncoding)) {
+-                    // Do NOT set the 'Content-Encoding' header if the target encoding is 'identity'
+-                    // as per: http://tools.ietf.org/html/rfc2616#section-14.11
+-                    headers.remove(HttpHeaderNames.CONTENT_ENCODING);
+-                } else {
+-                    headers.set(HttpHeaderNames.CONTENT_ENCODING, targetContentEncoding);
+-                }
++            // Remove content-length header:
++            // the correct value can be set only after all chunks are processed/decoded.
++            // If buffering is not an issue, add HttpObjectAggregator down the chain, it will set the header.
++            // Otherwise, rely on LastHttpContent message.
++            if (headers.contains(HttpHeaderNames.CONTENT_LENGTH)) {
++                headers.remove(HttpHeaderNames.CONTENT_LENGTH);
++                headers.set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED);
++            }
++            // Either it is already chunked or EOF terminated.
++            // See https://github.com/netty/netty/issues/5892
+ 
+-                if (message instanceof HttpContent) {
+-                    // If message is a full request or response object (headers + data), don't copy data part into out.
+-                    // Output headers only; data part will be decoded below.
+-                    // Note: "copy" object must not be an instance of LastHttpContent class,
+-                    // as this would (erroneously) indicate the end of the HttpMessage to other handlers.
+-                    HttpMessage copy;
+-                    if (message instanceof HttpRequest) {
+-                        HttpRequest r = (HttpRequest) message; // HttpRequest or FullHttpRequest
+-                        copy = new DefaultHttpRequest(r.protocolVersion(), r.method(), r.uri());
+-                    } else if (message instanceof HttpResponse) {
+-                        HttpResponse r = (HttpResponse) message; // HttpResponse or FullHttpResponse
+-                        copy = new DefaultHttpResponse(r.protocolVersion(), r.status());
+-                    } else {
+-                        throw new CodecException("Object of class " + message.getClass().getName() +
+-                                                 " is not an HttpRequest or HttpResponse");
+-                    }
+-                    copy.headers().set(message.headers());
+-                    copy.setDecoderResult(message.decoderResult());
+-                    out.add(copy);
+-                } else {
+-                    out.add(message);
+-                }
++            // set new content encoding,
++            CharSequence targetContentEncoding = getTargetContentEncoding(contentEncoding);
++            if (HttpHeaderValues.IDENTITY.contentEquals(targetContentEncoding)) {
++                // Do NOT set the 'Content-Encoding' header if the target encoding is 'identity'
++                // as per: https://tools.ietf.org/html/rfc2616#section-14.11
++                headers.remove(HttpHeaderNames.CONTENT_ENCODING);
++            } else {
++                headers.set(HttpHeaderNames.CONTENT_ENCODING, targetContentEncoding);
+             }
+ 
+-            if (msg instanceof HttpContent) {
+-                final HttpContent c = (HttpContent) msg;
+-                if (decoder == null) {
+-                    out.add(c.retain());
++            if (message instanceof HttpContent) {
++                // If message is a full request or response object (headers + data), don't copy data part into out.
++                // Output headers only; data part will be decoded below.
++                // Note: "copy" object must not be an instance of LastHttpContent class,
++                // as this would (erroneously) indicate the end of the HttpMessage to other handlers.
++                HttpMessage copy;
++                if (message instanceof HttpRequest) {
++                    HttpRequest r = (HttpRequest) message; // HttpRequest or FullHttpRequest
++                    copy = new DefaultHttpRequest(r.protocolVersion(), r.method(), r.uri());
++                } else if (message instanceof HttpResponse) {
++                    HttpResponse r = (HttpResponse) message; // HttpResponse or FullHttpResponse
++                    copy = new DefaultHttpResponse(r.protocolVersion(), r.status());
+                 } else {
+-                    decodeContent(c, out);
++                    throw new CodecException("Object of class " + message.getClass().getName() +
++                                             " is not an HttpRequest or HttpResponse");
+                 }
++                copy.headers().set(message.headers());
++                copy.setDecoderResult(message.decoderResult());
++                needRead = false;
++                ctx.fireChannelRead(copy);
++            } else {
++                needRead = false;
++                ctx.fireChannelRead(message);
+             }
+-        } finally {
+-            needRead = out.isEmpty();
+         }
+-    }
+-
+-    private void decodeContent(HttpContent c, List<Object> out) {
+-        ByteBuf content = c.content();
+-
+-        decode(content, out);
+-
+-        if (c instanceof LastHttpContent) {
+-            finishDecode(out);
+ 
+-            LastHttpContent last = (LastHttpContent) c;
+-            // Generate an additional chunk if the decoder produced
+-            // the last product on closure,
+-            HttpHeaders headers = last.trailingHeaders();
+-            if (headers.isEmpty()) {
+-                out.add(LastHttpContent.EMPTY_LAST_CONTENT);
++        if (msg instanceof HttpContent) {
++            final HttpContent c = (HttpContent) msg;
++            if (decoder == null) {
++                needRead = false;
++                ctx.fireChannelRead(c.retain());
+             } else {
+-                out.add(new ComposedLastHttpContent(headers, DecoderResult.SUCCESS));
++                // call retain here as it will call release after it's written to the channel
++                decoder.writeInbound(c.content().retain());
++
++                if (c instanceof LastHttpContent) {
++                    boolean notEmpty = decoder.finish();
++                    decoder = null;
++                    assert !notEmpty;
++                    LastHttpContent last = (LastHttpContent) c;
++                    // Generate an additional chunk if the decoder produced
++                    // the last product on closure,
++                    HttpHeaders headers = last.trailingHeaders();
++                    needRead = false;
++                    if (headers.isEmpty()) {
++                        ctx.fireChannelRead(LastHttpContent.EMPTY_LAST_CONTENT);
++                    } else {
++                        ctx.fireChannelRead(new ComposedLastHttpContent(headers, DecoderResult.SUCCESS));
++                    }
++                }
+             }
+         }
+     }
+@@ -228,6 +240,7 @@ public abstract class HttpContentDecoder extends MessageToMessageDecoder<HttpObj
+     @Override
+     public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+         this.ctx = ctx;
++        forwarder = new ByteBufForwarder(ctx);
+         super.handlerAdded(ctx);
+     }
+ 
+@@ -249,30 +262,30 @@ public abstract class HttpContentDecoder extends MessageToMessageDecoder<HttpObj
+         }
+     }
+ 
+-    private void decode(ByteBuf in, List<Object> out) {
+-        // call retain here as it will call release after its written to the channel
+-        decoder.writeInbound(in.retain());
+-        fetchDecoderOutput(out);
+-    }
++    private final class ByteBufForwarder extends ChannelInboundHandlerAdapter {
++
++        private final ChannelHandlerContext targetCtx;
+ 
+-    private void finishDecode(List<Object> out) {
+-        if (decoder.finish()) {
+-            fetchDecoderOutput(out);
++        ByteBufForwarder(ChannelHandlerContext targetCtx) {
++            this.targetCtx = targetCtx;
+         }
+-        decoder = null;
+-    }
+ 
+-    private void fetchDecoderOutput(List<Object> out) {
+-        for (;;) {
+-            ByteBuf buf = decoder.readInbound();
+-            if (buf == null) {
+-                break;
+-            }
++        @Override
++        public boolean isSharable() {
++            // We need to mark the handler as sharable as we will add it to every EmbeddedChannel that is
++            // generated.
++            return true;
++        }
++
++        @Override
++        public void channelRead(ChannelHandlerContext ctx, Object msg) {
++            ByteBuf buf = (ByteBuf) msg;
+             if (!buf.isReadable()) {
+                 buf.release();
+-                continue;
++                return;
+             }
+-            out.add(new DefaultHttpContent(buf));
++            needRead = false;
++            targetCtx.fireChannelRead(new DefaultHttpContent(buf));
+         }
+     }
+ }
+diff --git a/codec-http/src/test/java/io/netty/handler/codec/http/HttpContentDecompressorTest.java b/codec-http/src/test/java/io/netty/handler/codec/http/HttpContentDecompressorTest.java
+index 4a659fa..f54e98d 100644
+--- a/codec-http/src/test/java/io/netty/handler/codec/http/HttpContentDecompressorTest.java
++++ b/codec-http/src/test/java/io/netty/handler/codec/http/HttpContentDecompressorTest.java
+@@ -15,6 +15,8 @@
+  */
+ package io.netty.handler.codec.http;
+ 
++import io.netty.buffer.ByteBuf;
++import io.netty.buffer.PooledByteBufAllocator;
+ import io.netty.buffer.Unpooled;
+ import io.netty.channel.ChannelHandlerContext;
+ import io.netty.channel.ChannelInboundHandlerAdapter;
+@@ -23,6 +25,8 @@
+ import org.junit.Assert;
+ import org.junit.Test;
+ 
++import java.util.ArrayList;
++import java.util.List;
+ import java.util.concurrent.atomic.AtomicInteger;
+ 
+ public class HttpContentDecompressorTest {
+@@ -67,4 +71,92 @@
+         Assert.assertEquals(2, readCalled.get());
+         Assert.assertFalse(channel.finishAndReleaseAll());
+     }
++
++    @Test
++    public void testZipBombGzip() {
++        testZipBomb("gzip");
++    }
++
++    @Test
++    public void testZipBombDeflate() {
++        testZipBomb("deflate");
++    }
++
++    @Test
++    public void testZipBombSnappy() {
++        testZipBomb("snappy");
++    }
++
++    private static void testZipBomb(String encoding) {
++        int chunkSize = 1024 * 1024;
++        int numberOfChunks = 256;
++        int memoryLimit = chunkSize * 128;
++
++        EmbeddedChannel compressionChannel = new EmbeddedChannel(new HttpContentCompressor());
++        DefaultFullHttpRequest req = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "/");
++        req.headers().set(HttpHeaderNames.ACCEPT_ENCODING, encoding);
++        compressionChannel.writeInbound(req);
++
++        DefaultHttpResponse response = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
++        response.headers().set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED);
++        compressionChannel.writeOutbound(response);
++
++        for (int i = 0; i < numberOfChunks; i++) {
++            ByteBuf buffer = compressionChannel.alloc().buffer(chunkSize);
++            buffer.writeZero(chunkSize);
++            compressionChannel.writeOutbound(new DefaultHttpContent(buffer));
++        }
++        compressionChannel.writeOutbound(LastHttpContent.EMPTY_LAST_CONTENT);
++        compressionChannel.finish();
++        compressionChannel.releaseInbound();
++
++        ByteBuf compressed = compressionChannel.alloc().buffer();
++        HttpMessage message = null;
++        while (true) {
++            HttpObject obj = compressionChannel.readOutbound();
++            if (obj == null) {
++                break;
++            }
++            if (obj instanceof HttpMessage) {
++                message = (HttpMessage) obj;
++            }
++            if (obj instanceof HttpContent) {
++                HttpContent content = (HttpContent) obj;
++                compressed.writeBytes(content.content());
++                content.release();
++            }
++        }
++
++        PooledByteBufAllocator allocator = new PooledByteBufAllocator(false);
++
++        ZipBombIncomingHandler incomingHandler = new ZipBombIncomingHandler(memoryLimit);
++        EmbeddedChannel decompressChannel = new EmbeddedChannel(new HttpContentDecompressor(), incomingHandler);
++        decompressChannel.config().setAllocator(allocator);
++        decompressChannel.writeInbound(message);
++        decompressChannel.writeInbound(new DefaultLastHttpContent(compressed));
++
++        Assert.assertEquals((long) chunkSize * numberOfChunks, incomingHandler.total);
++    }
++
++    private static final class ZipBombIncomingHandler extends ChannelInboundHandlerAdapter {
++        final int memoryLimit;
++        long total;
++
++        ZipBombIncomingHandler(int memoryLimit) {
++            this.memoryLimit = memoryLimit;
++        }
++
++        @Override
++        public void channelRead(ChannelHandlerContext ctx, Object msg) {
++            PooledByteBufAllocator allocator = (PooledByteBufAllocator) ctx.alloc();
++            Assert.assertTrue(allocator.metric().usedHeapMemory() < memoryLimit);
++            Assert.assertTrue(allocator.metric().usedDirectMemory() < memoryLimit);
++
++            if (msg instanceof HttpContent) {
++                HttpContent buf = (HttpContent) msg;
++                total += buf.content().readableBytes();
++                buf.release();
++            }
++        }
++    }
+ }
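The zip-bomb test above compresses 256 MiB of zeros into a small payload and asserts that decompression never holds more than a fraction of it in allocator memory at once, which is why the patch streams decompressed buffers through the pipeline instead of collecting them in the `out` list. The underlying idea, capping how much output a small compressed input may produce, can be sketched with the JDK's own `Inflater` (illustrative only; `BoundedInflate` and the `maxOutput` parameter are inventions for this example, not netty API):

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Illustrative sketch (not netty code): inflate in fixed-size steps and
// fail as soon as a caller-supplied output cap is exceeded, instead of
// buffering every decompressed chunk until the input is exhausted.
public final class BoundedInflate {
    public static byte[] inflate(byte[] compressed, int maxOutput) throws DataFormatException {
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] step = new byte[8192]; // decompress 8 KiB at a time
        try {
            while (!inflater.finished()) {
                int n = inflater.inflate(step);
                if (n == 0 && inflater.needsInput()) {
                    break; // truncated input, nothing more to do
                }
                out.write(step, 0, n);
                if (out.size() > maxOutput) {
                    throw new DataFormatException(
                            "decompressed size exceeds " + maxOutput + " bytes");
                }
            }
        } finally {
            inflater.end();
        }
        return out.toByteArray();
    }

    // Helper for the demo: deflate a buffer of zeros, which compresses
    // extremely well -- the shape of a decompression bomb.
    public static byte[] deflateZeros(int size) {
        Deflater deflater = new Deflater();
        deflater.setInput(new byte[size]);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] step = new byte[8192];
        while (!deflater.finished()) {
            int n = deflater.deflate(step);
            out.write(step, 0, n);
        }
        deflater.end();
        return out.toByteArray();
    }
}
```

A kilobyte of deflated zeros expands to a mebibyte here; the fixed 8 KiB step mirrors the patched decoders, which hand each decompressed buffer to the next handler immediately so a downstream limit (like the test's allocator check) can take effect before OOM.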
+diff --git a/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java b/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java
+index 6793f28..af15318 100644
+--- a/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java
++++ b/codec-http2/src/main/java/io/netty/handler/codec/http2/DelegatingDecompressorFrameListener.java
+@@ -17,6 +17,7 @@ package io.netty.handler.codec.http2;
+ import io.netty.buffer.ByteBuf;
+ import io.netty.buffer.Unpooled;
+ import io.netty.channel.ChannelHandlerContext;
++import io.netty.channel.ChannelInboundHandlerAdapter;
+ import io.netty.channel.embedded.EmbeddedChannel;
+ import io.netty.handler.codec.ByteToMessageDecoder;
+ import io.netty.handler.codec.compression.ZlibCodecFactory;
+@@ -63,7 +64,7 @@ public class DelegatingDecompressorFrameListener extends Http2FrameListenerDecor
+             public void onStreamRemoved(Http2Stream stream) {
+                 final Http2Decompressor decompressor = decompressor(stream);
+                 if (decompressor != null) {
+-                    cleanup(decompressor);
++                    decompressor.cleanup();
+                 }
+             }
+         });
+@@ -78,66 +79,7 @@ public class DelegatingDecompressorFrameListener extends Http2FrameListenerDecor
+             // The decompressor may be null if no compatible encoding type was found in this stream's headers
+             return listener.onDataRead(ctx, streamId, data, padding, endOfStream);
+         }
+-
+-        final EmbeddedChannel channel = decompressor.decompressor();
+-        final int compressedBytes = data.readableBytes() + padding;
+-        decompressor.incrementCompressedBytes(compressedBytes);
+-        try {
+-            // call retain here as it will call release after its written to the channel
+-            channel.writeInbound(data.retain());
+-            ByteBuf buf = nextReadableBuf(channel);
+-            if (buf == null && endOfStream && channel.finish()) {
+-                buf = nextReadableBuf(channel);
+-            }
+-            if (buf == null) {
+-                if (endOfStream) {
+-                    listener.onDataRead(ctx, streamId, Unpooled.EMPTY_BUFFER, padding, true);
+-                }
+-                // No new decompressed data was extracted from the compressed data. This means the application could
+-                // not be provided with data and thus could not return how many bytes were processed. We will assume
+-                // there is more data coming which will complete the decompression block. To allow for more data we
+-                // return all bytes to the flow control window (so the peer can send more data).
+-                decompressor.incrementDecompressedBytes(compressedBytes);
+-                return compressedBytes;
+-            }
+-            try {
+-                Http2LocalFlowController flowController = connection.local().flowController();
+-                decompressor.incrementDecompressedBytes(padding);
+-                for (;;) {
+-                    ByteBuf nextBuf = nextReadableBuf(channel);
+-                    boolean decompressedEndOfStream = nextBuf == null && endOfStream;
+-                    if (decompressedEndOfStream && channel.finish()) {
+-                        nextBuf = nextReadableBuf(channel);
+-                        decompressedEndOfStream = nextBuf == null;
+-                    }
+-
+-                    decompressor.incrementDecompressedBytes(buf.readableBytes());
+-                    // Immediately return the bytes back to the flow controller. ConsumedBytesConverter will convert
+-                    // from the decompressed amount which the user knows about to the compressed amount which flow
+-                    // control knows about.
+-                    flowController.consumeBytes(stream,
+-                            listener.onDataRead(ctx, streamId, buf, padding, decompressedEndOfStream));
+-                    if (nextBuf == null) {
+-                        break;
+-                    }
+-
+-                    padding = 0; // Padding is only communicated once on the first iteration.
+-                    buf.release();
+-                    buf = nextBuf;
+-                }
+-                // We consume bytes each time we call the listener to ensure if multiple frames are decompressed
+-                // that the bytes are accounted for immediately. Otherwise the user may see an inconsistent state of
+-                // flow control.
+-                return 0;
+-            } finally {
+-                buf.release();
+-            }
+-        } catch (Http2Exception e) {
+-            throw e;
+-        } catch (Throwable t) {
+-            throw streamError(stream.id(), INTERNAL_ERROR, t,
+-                    "Decompressor error detected while delegating data read on streamId %d", stream.id());
+-        }
++        return decompressor.decompress(ctx, stream, data, padding, endOfStream);
+     }
+ 
+     @Override
+@@ -218,7 +160,7 @@ public class DelegatingDecompressorFrameListener extends Http2FrameListenerDecor
+             }
+             final EmbeddedChannel channel = newContentDecompressor(ctx, contentEncoding);
+             if (channel != null) {
+-                decompressor = new Http2Decompressor(channel);
++                decompressor = new Http2Decompressor(channel, connection, listener);
+                 stream.setProperty(propertyKey, decompressor);
+                 // Decode the content and remove or replace the existing headers
+                 // so that the message looks like a decoded message.
+@@ -250,36 +192,6 @@ public class DelegatingDecompressorFrameListener extends Http2FrameListenerDecor
+         return stream == null ? null : (Http2Decompressor) stream.getProperty(propertyKey);
+     }
+ 
+-    /**
+-     * Release remaining content from the {@link EmbeddedChannel}.
+-     *
+-     * @param decompressor The decompressor for {@code stream}
+-     */
+-    private static void cleanup(Http2Decompressor decompressor) {
+-        decompressor.decompressor().finishAndReleaseAll();
+-    }
+-
+-    /**
+-     * Read the next decompressed {@link ByteBuf} from the {@link EmbeddedChannel}
+-     * or {@code null} if one does not exist.
+-     *
+-     * @param decompressor The channel to read from
+-     * @return The next decoded {@link ByteBuf} from the {@link EmbeddedChannel} or {@code null} if one does not exist
+-     */
+-    private static ByteBuf nextReadableBuf(EmbeddedChannel decompressor) {
+-        for (;;) {
+-            final ByteBuf buf = decompressor.readInbound();
+-            if (buf == null) {
+-                return null;
+-            }
+-            if (!buf.isReadable()) {
+-                buf.release();
+-                continue;
+-            }
+-            return buf;
+-        }
+-    }
+-
+     /**
+      * A decorator around the local flow controller that converts consumed bytes from uncompressed to compressed.
+      */
+@@ -360,24 +272,93 @@ public class DelegatingDecompressorFrameListener extends Http2FrameListenerDecor
+      */
+     private static final class Http2Decompressor {
+         private final EmbeddedChannel decompressor;
++
+         private int compressed;
+         private int decompressed;
++        private Http2Stream stream;
++        private int padding;
++        private boolean dataDecompressed;
++        private ChannelHandlerContext targetCtx;
+ 
+-        Http2Decompressor(EmbeddedChannel decompressor) {
++        Http2Decompressor(EmbeddedChannel decompressor,
++                          final Http2Connection connection, final Http2FrameListener listener) {
+             this.decompressor = decompressor;
++            this.decompressor.pipeline().addLast(new ChannelInboundHandlerAdapter() {
++                @Override
++                public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
++                    ByteBuf buf = (ByteBuf) msg;
++                    if (!buf.isReadable()) {
++                        buf.release();
++                        return;
++                    }
++                    incrementDecompressedBytes(buf.readableBytes());
++                    // Immediately return the bytes back to the flow controller. ConsumedBytesConverter will convert
++                    // from the decompressed amount which the user knows about to the compressed amount which flow
++                    // control knows about.
++                    connection.local().flowController().consumeBytes(stream,
++                            listener.onDataRead(targetCtx, stream.id(), buf, padding, false));
++                    padding = 0; // Padding is only communicated once on the first iteration.
++                    buf.release();
++
++                    dataDecompressed = true;
++                }
++
++                @Override
++                public void channelInactive(ChannelHandlerContext ctx) throws Exception {
++                    listener.onDataRead(targetCtx, stream.id(), Unpooled.EMPTY_BUFFER, padding, true);
++                }
++            });
+         }
+ 
+         /**
+-         * Responsible for taking compressed bytes in and producing decompressed bytes.
++         * Release remaining content from the {@link EmbeddedChannel}.
+          */
+-        EmbeddedChannel decompressor() {
+-            return decompressor;
++        void cleanup() {
++            decompressor.finishAndReleaseAll();
+         }
+ 
++        int decompress(ChannelHandlerContext ctx, Http2Stream stream, ByteBuf data, int padding, boolean endOfStream)
++                throws Http2Exception {
++            final int compressedBytes = data.readableBytes() + padding;
++            incrementCompressedBytes(compressedBytes);
++            try {
++                this.stream = stream;
++                this.padding = padding;
++                this.dataDecompressed = false;
++                this.targetCtx = ctx;
++
++                // call retain here as it will call release after its written to the channel
++                decompressor.writeInbound(data.retain());
++                if (endOfStream) {
++                    decompressor.finish();
++
++                    if (!dataDecompressed) {
++                        // No new decompressed data was extracted from the compressed data. This means the application
++                        // could not be provided with data and thus could not return how many bytes were processed.
++                        // We will assume there is more data coming which will complete the decompression block.
++                        // To allow for more data we return all bytes to the flow control window (so the peer can
++                        // send more data).
++                        incrementDecompressedBytes(compressedBytes);
++                        return compressedBytes;
++                    }
++                }
++                // We consume bytes each time we call the listener to ensure if multiple frames are decompressed
++                // that the bytes are accounted for immediately. Otherwise the user may see an inconsistent state of
++                // flow control.
++                return 0;
++            } catch (Throwable t) {
++                // Http2Exception might be thrown by writeInbound(...) or finish().
++                if (t instanceof Http2Exception) {
++                    throw (Http2Exception) t;
++                }
++                throw streamError(stream.id(), INTERNAL_ERROR, t,
++                        "Decompressor error detected while delegating data read on streamId %d", stream.id());
++            }
++        }
+         /**
+          * Increment the number of bytes received prior to doing any decompression.
+          */
+-        void incrementCompressedBytes(int delta) {
++        private void incrementCompressedBytes(int delta) {
+             assert delta >= 0;
+             compressed += delta;
+         }
+@@ -385,7 +366,7 @@ public class DelegatingDecompressorFrameListener extends Http2FrameListenerDecor
+         /**
+          * Increment the number of bytes after the decompression process.
+          */
+-        void incrementDecompressedBytes(int delta) {
++        private void incrementDecompressedBytes(int delta) {
+             assert delta >= 0;
+             decompressed += delta;
+         }
+diff --git a/codec/src/main/java/io/netty/handler/codec/compression/JZlibDecoder.java b/codec/src/main/java/io/netty/handler/codec/compression/JZlibDecoder.java
+index 6c65cd5..9bc684e 100644
+--- a/codec/src/main/java/io/netty/handler/codec/compression/JZlibDecoder.java
++++ b/codec/src/main/java/io/netty/handler/codec/compression/JZlibDecoder.java
+@@ -28,6 +28,7 @@ public class JZlibDecoder extends ZlibDecoder {
+ 
+     private final Inflater z = new Inflater();
+     private byte[] dictionary;
++    private boolean needsRead;
+     private volatile boolean finished;
+ 
+     /**
+@@ -125,6 +126,7 @@ public class JZlibDecoder extends ZlibDecoder {
+ 
+     @Override
+     protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
++        needsRead = true;
+         if (finished) {
+             // Skip data received after finished.
+             in.skipBytes(in.readableBytes());
+@@ -166,6 +168,14 @@ public class JZlibDecoder extends ZlibDecoder {
+                     int outputLength = z.next_out_index - oldNextOutIndex;
+                     if (outputLength > 0) {
+                         decompressed.writerIndex(decompressed.writerIndex() + outputLength);
++                        if (maxAllocation == 0) {
++                            // If we don't limit the maximum allocations we should just
++                            // forward the buffer directly.
++                            ByteBuf buffer = decompressed;
++                            decompressed = null;
++                            needsRead = false;
++                            ctx.fireChannelRead(buffer);
++                        }
+                     }
+ 
+                     switch (resultCode) {
+@@ -196,10 +206,13 @@ public class JZlibDecoder extends ZlibDecoder {
+                 }
+             } finally {
+                 in.skipBytes(z.next_in_index - oldNextInIndex);
+-                if (decompressed.isReadable()) {
+-                    out.add(decompressed);
+-                } else {
+-                    decompressed.release();
++                if (decompressed != null) {
++                    if (decompressed.isReadable()) {
++                        needsRead = false;
++                        ctx.fireChannelRead(decompressed);
++                    } else {
++                        decompressed.release();
++                    }
+                 }
+             }
+         } finally {
+@@ -212,6 +225,17 @@ public class JZlibDecoder extends ZlibDecoder {
+         }
+     }
+ 
++    @Override
++    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
++        // Discard bytes of the cumulation buffer if needed.
++        discardSomeReadBytes();
++
++        if (needsRead && !ctx.channel().config().isAutoRead()) {
++            ctx.read();
++        }
++        ctx.fireChannelReadComplete();
++    }
++
+     @Override
+     protected void decompressionBufferExhausted(ByteBuf buffer) {
+         finished = true;
+diff --git a/codec/src/main/java/io/netty/handler/codec/compression/JdkZlibDecoder.java b/codec/src/main/java/io/netty/handler/codec/compression/JdkZlibDecoder.java
+index 7e69422..426b84e 100644
+--- a/codec/src/main/java/io/netty/handler/codec/compression/JdkZlibDecoder.java
++++ b/codec/src/main/java/io/netty/handler/codec/compression/JdkZlibDecoder.java
+@@ -57,6 +57,7 @@ public class JdkZlibDecoder extends ZlibDecoder {
+     private GzipState gzipState = GzipState.HEADER_START;
+     private int flags = -1;
+     private int xlen = -1;
++    private boolean needsRead;
+ 
+     private volatile boolean finished;
+ 
+@@ -178,6 +179,7 @@ public class JdkZlibDecoder extends ZlibDecoder {
+ 
+     @Override
+     protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
++        needsRead = true;
+         if (finished) {
+             // Skip data received after finished.
+             in.skipBytes(in.readableBytes());
+@@ -239,14 +241,20 @@ public class JdkZlibDecoder extends ZlibDecoder {
+                     if (crc != null) {
+                         crc.update(outArray, outIndex, outputLength);
+                     }
+-                } else {
+-                    if (inflater.needsDictionary()) {
+-                        if (dictionary == null) {
+-                            throw new DecompressionException(
+-                                    "decompression failure, unable to set dictionary as non was specified");
+-                        }
+-                        inflater.setDictionary(dictionary);
++                    if (maxAllocation == 0) {
++                        // If we don't limit the maximum allocations we should just
++                        // forward the buffer directly.
++                        ByteBuf buffer = decompressed;
++                        decompressed = null;
++                        needsRead = false;
++                        ctx.fireChannelRead(buffer);
++                    }
++                } else if (inflater.needsDictionary()) {
++                    if (dictionary == null) {
++                        throw new DecompressionException(
++                                "decompression failure, unable to set dictionary as non was specified");
+                     }
++                        inflater.setDictionary(dictionary);
+                 }
+ 
+                 if (inflater.finished()) {
+@@ -278,11 +286,13 @@ public class JdkZlibDecoder extends ZlibDecoder {
+         } catch (DataFormatException e) {
+             throw new DecompressionException("decompression failure", e);
+         } finally {
+-
+-            if (decompressed.isReadable()) {
+-                out.add(decompressed);
+-            } else {
+-                decompressed.release();
++            if (decompressed != null) {
++                if (decompressed.isReadable()) {
++                    needsRead = false;
++                    ctx.fireChannelRead(decompressed);
++                } else {
++                    decompressed.release();
++                }
+             }
+         }
+     }
+@@ -454,4 +464,15 @@ public class JdkZlibDecoder extends ZlibDecoder {
+         return (cmf_flg & 0x7800) == 0x7800 &&
+                 cmf_flg % 31 == 0;
+     }
++
++    @Override
++    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
++        // Discard bytes of the cumulation buffer if needed.
++        discardSomeReadBytes();
++
++        if (needsRead && !ctx.channel().config().isAutoRead()) {
++            ctx.read();
++        }
++        ctx.fireChannelReadComplete();
++    }
+ }
+diff --git a/codec/src/test/java/io/netty/handler/codec/compression/AbstractIntegrationTest.java b/codec/src/test/java/io/netty/handler/codec/compression/AbstractIntegrationTest.java
+index 5eaed2f..dc05eb6 100644
+--- a/codec/src/test/java/io/netty/handler/codec/compression/AbstractIntegrationTest.java
++++ b/codec/src/test/java/io/netty/handler/codec/compression/AbstractIntegrationTest.java
+@@ -17,7 +17,10 @@
+ 
+ import io.netty.buffer.ByteBuf;
+ import io.netty.buffer.CompositeByteBuf;
++import io.netty.buffer.PooledByteBufAllocator;
+ import io.netty.buffer.Unpooled;
++import io.netty.channel.ChannelHandlerContext;
++import io.netty.channel.ChannelInboundHandlerAdapter;
+ import io.netty.channel.embedded.EmbeddedChannel;
+ import io.netty.util.CharsetUtil;
+ import io.netty.util.ReferenceCountUtil;
+@@ -166,4 +169,63 @@
+         decompressed.release();
+         in.release();
+     }
++
++    @Test
++    public void testHugeDecompress() {
++        int chunkSize = 1024 * 1024;
++        int numberOfChunks = 256;
++        int memoryLimit = chunkSize * 128;
++
++        EmbeddedChannel compressChannel = createEncoder();
++        ByteBuf compressed = compressChannel.alloc().buffer();
++        for (int i = 0; i <= numberOfChunks; i++) {
++            if (i < numberOfChunks) {
++                ByteBuf in = compressChannel.alloc().buffer(chunkSize);
++                in.writeZero(chunkSize);
++                compressChannel.writeOutbound(in);
++            } else {
++                compressChannel.close();
++            }
++            while (true) {
++                ByteBuf buf = compressChannel.readOutbound();
++                if (buf == null) {
++                    break;
++                }
++                compressed.writeBytes(buf);
++                buf.release();
++            }
++        }
++
++        PooledByteBufAllocator allocator = new PooledByteBufAllocator(false);
++
++        HugeDecompressIncomingHandler endHandler = new HugeDecompressIncomingHandler(memoryLimit);
++        EmbeddedChannel decompressChannel = createDecoder();
++        decompressChannel.pipeline().addLast(endHandler);
++        decompressChannel.config().setAllocator(allocator);
++        decompressChannel.writeInbound(compressed);
++        decompressChannel.finishAndReleaseAll();
++        assertEquals((long) chunkSize * numberOfChunks, endHandler.total);
++    }
++
++    private static final class HugeDecompressIncomingHandler extends ChannelInboundHandlerAdapter {
++        final int memoryLimit;
++        long total;
++
++        HugeDecompressIncomingHandler(int memoryLimit) {
++            this.memoryLimit = memoryLimit;
++        }
++
++        @Override
++        public void channelRead(ChannelHandlerContext ctx, Object msg) {
++            ByteBuf buf = (ByteBuf) msg;
++            total += buf.readableBytes();
++            try {
++                PooledByteBufAllocator allocator = (PooledByteBufAllocator) ctx.alloc();
++                assertThat(allocator.metric().usedHeapMemory(), lessThan((long) memoryLimit));
++                assertThat(allocator.metric().usedDirectMemory(), lessThan((long) memoryLimit));
++            } finally {
++                buf.release();
++            }
++        }
++    }
+ }


=====================================
debian/patches/CVE-2025-59419.patch
=====================================
@@ -0,0 +1,180 @@
+From: DepthFirst Disclosures <disclosures at depthfirst.com>
+Date: Tue, 14 Oct 2025 01:41:47 -0700
+Subject: CVE-2025-59419: Merge commit from fork
+
+* Patch 1 of 3
+
+* Patch 2 of 3
+
+* Patch 3 of 3
+
+* Fix indentation style
+
+* Update 2025
+
+* Optimize allocations
+
+* Update codec-smtp/src/main/java/io/netty/handler/codec/smtp/SmtpUtils.java
+
+Co-authored-by: Chris Vest <christianvest_hansen at apple.com>
+
+---------
+
+Co-authored-by: Norman Maurer <norman_maurer at apple.com>
+Co-authored-by: Chris Vest <christianvest_hansen at apple.com>
+origin: https://github.com/netty/netty/commit/2b3fddd3339cde1601f622b9ce5e54c39f24c3f9
+bug: https://github.com/netty/netty/security/advisories/GHSA-jq43-27x9-3v86
+bug-debian: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1118282
+---
+ .../handler/codec/smtp/DefaultSmtpRequest.java     |  2 +
+ .../io/netty/handler/codec/smtp/SmtpUtils.java     | 44 +++++++++++++
+ .../netty/handler/codec/smtp/SmtpRequestsTest.java | 73 ++++++++++++++++++++++
+ 3 files changed, 119 insertions(+)
+ create mode 100644 codec-smtp/src/test/java/io/netty/handler/codec/smtp/SmtpRequestsTest.java
+
+--- a/codec-smtp/src/main/java/io/netty/handler/codec/smtp/DefaultSmtpRequest.java
++++ b/codec-smtp/src/main/java/io/netty/handler/codec/smtp/DefaultSmtpRequest.java
+@@ -43,6 +43,7 @@
+      */
+     public DefaultSmtpRequest(SmtpCommand command, CharSequence... parameters) {
+         this.command = ObjectUtil.checkNotNull(command, "command");
++        SmtpUtils.validateSMTPParameters(parameters);
+         this.parameters = SmtpUtils.toUnmodifiableList(parameters);
+     }
+ 
+@@ -55,6 +56,7 @@
+ 
+     DefaultSmtpRequest(SmtpCommand command, List<CharSequence> parameters) {
+         this.command = ObjectUtil.checkNotNull(command, "command");
++        SmtpUtils.validateSMTPParameters(parameters);
+         this.parameters = parameters != null ?
+                 Collections.unmodifiableList(parameters) : Collections.<CharSequence>emptyList();
+     }
+--- a/codec-smtp/src/main/java/io/netty/handler/codec/smtp/SmtpUtils.java
++++ b/codec-smtp/src/main/java/io/netty/handler/codec/smtp/SmtpUtils.java
+@@ -28,5 +28,49 @@
+         return Collections.unmodifiableList(Arrays.asList(sequences));
+     }
+ 
++    /**
++     * Validates SMTP parameters to prevent SMTP command injection.
++     * Throws IllegalArgumentException if any parameter contains CRLF sequences.
++     */
++    static void validateSMTPParameters(CharSequence... parameters) {
++        if (parameters != null) {
++            for (CharSequence parameter : parameters) {
++                if (parameter != null) {
++                    validateSMTPParameter(parameter);
++                }
++            }
++        }
++    }
++
++    /**
++     * Validates SMTP parameters to prevent SMTP command injection.
++     * Throws IllegalArgumentException if any parameter contains CRLF sequences.
++     */
++    static void validateSMTPParameters(List<CharSequence> parameters) {
++        if (parameters != null) {
++            for (CharSequence parameter : parameters) {
++                if (parameter != null) {
++                    validateSMTPParameter(parameter);
++                }
++            }
++        }
++    }
++
++    private static void validateSMTPParameter(CharSequence parameter) {
++        if (parameter instanceof String) {
++            String paramStr = (String) parameter;
++            if (paramStr.indexOf('\r') != -1 || paramStr.indexOf('\n') != -1) {
++                throw new IllegalArgumentException("SMTP parameter contains CRLF characters: " + parameter);
++            }
++        } else {
++            for (int i = 0; i < parameter.length(); i++) {
++                char c = parameter.charAt(i);
++                if (c == '\r' || c == '\n') {
++                    throw new IllegalArgumentException("SMTP parameter contains CRLF characters: " + parameter);
++                }
++            }
++        }
++    }
++
+     private SmtpUtils() { }
+ }
+--- /dev/null
++++ b/codec-smtp/src/test/java/io/netty/handler/codec/smtp/SmtpRequestsTest.java
+@@ -0,0 +1,73 @@
++/*
++ * Copyright 2025 The Netty Project
++ *
++ * The Netty Project licenses this file to you under the Apache License,
++ * version 2.0 (the "License"); you may not use this file except in compliance
++ * with the License. You may obtain a copy of the License at:
++ *
++ *   https://www.apache.org/licenses/LICENSE-2.0
++ *
++ * Unless required by applicable law or agreed to in writing, software
++ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
++ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
++ * License for the specific language governing permissions and limitations
++ * under the License.
++ */
++package io.netty.handler.codec.smtp;
++
++import org.junit.jupiter.api.Test;
++import org.junit.jupiter.api.function.Executable;
++
++import static org.junit.jupiter.api.Assertions.assertThrows;
++
++public class SmtpRequestsTest {
++    @Test
++    public void testSmtpInjectionWithCarriageReturn() {
++        assertThrows(IllegalArgumentException.class, new Executable() {
++            @Override
++            public void execute() {
++                SmtpRequests.mail("test at example.com\rQUIT");
++            }
++        });
++    }
++
++    @Test
++    public void testSmtpInjectionWithLineFeed() {
++        assertThrows(IllegalArgumentException.class, new Executable() {
++            @Override
++            public void execute() {
++                SmtpRequests.mail("test at example.com\nQUIT");
++            }
++        });
++    }
++
++    @Test
++    public void testSmtpInjectionWithCRLF() {
++        assertThrows(IllegalArgumentException.class, new Executable() {
++            @Override
++            public void execute() {
++                SmtpRequests.rcpt("test at example.com\r\nQUIT");
++            }
++        });
++    }
++
++    @Test
++    public void testSmtpInjectionInAuthParameter() {
++        assertThrows(IllegalArgumentException.class, new Executable() {
++            @Override
++            public void execute() {
++                SmtpRequests.auth("PLAIN", "dGVzdA\rQUIT");
++            }
++        });
++    }
++
++    @Test
++    public void testSmtpInjectionInHelo() {
++        assertThrows(IllegalArgumentException.class, new Executable() {
++            @Override
++            public void execute() {
++                SmtpRequests.helo("localhost\r\nQUIT");
++            }
++        });
++    }
++}
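
The check added above rejects CR/LF before parameters are serialized into the SMTP wire format, where a bare CRLF terminates a command. A minimal standalone sketch of the check and the injection it blocks (the helper name is illustrative, not Netty API; it mirrors `validateSMTPParameter` from the patch):

```java
public class SmtpParamCheck {

    // Reject any parameter containing CR or LF, mirroring the patch above.
    static void validateParameter(CharSequence parameter) {
        for (int i = 0; i < parameter.length(); i++) {
            char c = parameter.charAt(i);
            if (c == '\r' || c == '\n') {
                throw new IllegalArgumentException(
                        "SMTP parameter contains CRLF characters: " + parameter);
            }
        }
    }

    public static void main(String[] args) {
        // Without the check, this single "parameter" becomes two SMTP
        // commands on the wire:
        //   MAIL FROM:<test@example.com>
        //   QUIT
        String injected = "test@example.com>\r\nQUIT";
        try {
            validateParameter(injected);
            System.out.println("accepted (vulnerable)");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");
        }
        validateParameter("test@example.com"); // clean parameter passes
        System.out.println("clean ok");
    }
}
```

Validating at request-construction time (in the `DefaultSmtpRequest` constructors) means every code path that builds a request, including the `SmtpRequests` factory methods exercised by the new tests, is covered by one check.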


=====================================
debian/patches/series
=====================================
@@ -25,8 +25,9 @@ CVE-2023-34462.patch
 CVE-2023-44487.patch
 22-java-21.patch
 CVE-2024-29025.patch
-CVE-2025-59419
+CVE-2025-59419.patch
 CVE-2025-55163_before-1.patch
 CVE-2025-55163_1.patch
 CVE-2025-55163_2.patch
+CVE-2025-58057.patch
 CVE-2025-58056.patch



View it on GitLab: https://salsa.debian.org/java-team/netty/-/compare/3c4ac603134815b9e7431c7a80fb73f766cdfdc3...8c6650d73aef30f08dc856635793729aa05439bc


