Content-Encoding

The HTTP Content-Encoding representation header identifies the compression algorithm applied to a message body, allowing the client to decode the content and recover the original resource.

Usage

The Content-Encoding header lists the encodings applied to the representation data, in the order they were applied. The client reverses the encoding chain to reconstruct the original content. Compression is the most common use case, reducing transfer size at the cost of CPU time on both server and client.

The Accept-Encoding request header advertises which encodings the client supports. The server selects an encoding from the list and indicates the choice in the Content-Encoding response header. When no acceptable encoding exists, the server sends the response uncompressed.

When Content-Encoding is present, metadata headers like Content-Length refer to the encoded form, not the original resource. Pre-compressed media formats like JPEG, PNG, and ZIP already contain internal compression and gain little from an additional encoding pass. Applying a content encoding to these formats wastes CPU and sometimes increases transfer size.
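The distinction between the encoded form and the original resource is easy to see with the standard library's gzip module. A minimal sketch:

```python
import gzip

# Content-Length describes the encoded body, not the original resource:
# the header must carry len(compressed), not len(original).
original = b"<html>" + b"hello world " * 200 + b"</html>"
compressed = gzip.compress(original)

print(len(original))    # size of the underlying resource
print(len(compressed))  # the value Content-Length must carry with gzip applied

assert len(compressed) < len(original)
assert gzip.decompress(compressed) == original  # client recovers the resource
```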

The Content-Encoding header describes the representation encoding, which is an end-to-end property of the resource. This differs from Transfer-Encoding, which is a hop-by-hop property applied and removed at each network intermediary.

Note

Gzip remains the most widely deployed encoding. Brotli offers the best compression ratios for static text content. Zstandard provides a balance of fast compression and strong ratios, with browser support still expanding.

Note

The original media type of the resource is described by the Content-Type header. Content-Encoding reflects the compression state of the current representation, not the format of the underlying content.

Directives

gzip

The Lempel-Ziv coding (LZ77) algorithm with a 32-bit CRC. Released in 1992 and standardized for HTTP in RFC 1952 (1996), gzip remains the most widely supported encoding on the web. Servers also recognize x-gzip as an alias.

compress

The Lempel-Ziv-Welch (LZW) algorithm, originally from the UNIX compress program. Patent concerns led to the algorithm's decline, and no modern browser supports compress.

deflate

The zlib structure wrapping the deflate algorithm. Supported across all browsers but largely superseded by gzip and newer algorithms.

br

The Brotli algorithm. Brotli achieves higher compression ratios than gzip, particularly on text-based web content, by using a built-in static dictionary of common web terms. Browsers advertise br only over HTTPS connections. Supported by all major browsers.

zstd

The Zstandard algorithm. Zstandard compresses at speeds comparable to gzip while achieving compression ratios closer to Brotli. The algorithm supports configurable compression levels from 1 (fastest) to 22 (highest ratio), letting servers trade CPU time for smaller payloads. Decompression speed remains consistently fast regardless of the compression level used.

Zstandard also supports dictionary-based compression, where a pre-shared dictionary trained on representative content reduces payload sizes further. The Use-As-Dictionary header and the Compression Dictionary Transport mechanism build on this capability.
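The shared-dictionary principle can be illustrated with the standard library's zlib preset dictionaries, as a stand-in for Zstandard (real dcz responses use Zstandard dictionaries, which are more capable; the header strings below are only sample data):

```python
import zlib

# Stdlib sketch of dictionary-based compression: the compressor may
# reference substrings from a pre-shared dictionary that never appear
# earlier in the stream itself, shrinking small payloads markedly.
dictionary = b"Content-Encoding: zstd\r\nContent-Type: application/json\r\n"
payload = b"Content-Type: application/json\r\nContent-Encoding: zstd\r\n"

c = zlib.compressobj(zdict=dictionary)
with_dict = c.compress(payload) + c.flush()
without_dict = zlib.compress(payload)

print(len(without_dict), len(with_dict))  # dictionary version is smaller

# Decompression requires the same dictionary the compressor used.
d = zlib.decompressobj(zdict=dictionary)
assert d.decompress(with_dict) == payload
```

This mirrors the protocol requirement that client and server agree on the exact dictionary bytes before a dictionary-compressed response can be decoded.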

Supported in Chromium-based browsers and Firefox; Safari does not yet support zstd.

dcb

Dictionary-compressed Brotli. Defined as part of the Compression Dictionary Transport specification. The client and server negotiate a shared dictionary through the Use-As-Dictionary and Available-Dictionary headers. The server compresses the response using Brotli with the shared dictionary and signals the encoding as dcb.

dcz

Dictionary-compressed Zstandard. Uses the same dictionary transport mechanism as dcb, with the Zstandard algorithm instead of Brotli. The server compresses with the negotiated dictionary and signals the encoding as dcz.

Example

A response compressed with gzip. This is the most common encoding on the web and has universal browser support.

Content-Encoding: gzip

A response compressed with Brotli. Brotli typically produces smaller payloads than gzip for HTML, CSS, and JavaScript resources.

Content-Encoding: br

A response compressed with Zstandard. Servers choosing Zstandard benefit from fast compression with ratios comparable to Brotli.

Content-Encoding: zstd

Multiple encodings applied in sequence. The resource was first encoded with deflate, then with gzip. The client decodes in reverse order: gzip first, then deflate.

Content-Encoding: deflate, gzip
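The reverse-order rule for the chain above can be sketched with the standard library, since HTTP deflate is the zlib format that zlib.compress produces:

```python
import gzip
import zlib

# Content-Encoding: deflate, gzip  ->  body = gzip(deflate(resource)).
resource = b"original representation data"
body = gzip.compress(zlib.compress(resource))  # applied: deflate, then gzip

# The client undoes the chain right to left.
step1 = gzip.decompress(body)   # undo gzip (last applied, first removed)
step2 = zlib.decompress(step1)  # undo deflate
assert step2 == resource
```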

A response using dictionary-compressed Zstandard. The client previously received a dictionary via the Use-As-Dictionary header and advertised the dictionary in the Available-Dictionary request header. The server compressed the response against the shared dictionary.

Content-Encoding: dcz

Troubleshooting

Compression-related failures surface as browser decoding errors, corrupted downloads, or responses arriving uncompressed.

  1. Double-compression produces corrupted responses. A reverse proxy or CDN re-compresses an already-compressed response from the origin, producing an invalid payload the client cannot decode. Check the Content-Encoding header for stacked values like gzip, gzip. In nginx, strip Accept-Encoding from the upstream request so the origin sends an uncompressed body and compression happens only once, at the proxy:

    proxy_set_header Accept-Encoding "";
    

    In Cloudflare, disable "Brotli" in the Speed settings when the origin sends pre-compressed responses.

  2. Browser shows ERR_CONTENT_DECODING_FAILED. This error appears when the Content-Encoding header declares an encoding that does not match the actual body content. Common causes: the origin sends an uncompressed body with Content-Encoding: gzip, or middleware strips the encoding but leaves the header intact. Run curl -v -H "Accept-Encoding: gzip" https://example.re and inspect whether the raw body matches the declared encoding. Pipe through gunzip to verify: curl -s -H "Accept-Encoding: gzip" https://example.re | gunzip | head.

  3. Content-Length mismatch after compression. The Content-Length value must reflect the compressed body size, not the original size. A mismatch causes browsers to truncate or reject the response. With gzip on, nginx removes the Content-Length header and streams the compressed response using chunked transfer encoding, so no mismatch arises. Middleware that sets Content-Length before compression runs causes this problem. Move Content-Length assignment to after compression, or remove the header and let the server use chunked Transfer-Encoding.

  4. Compression not applied despite server configuration. The server checks the Accept-Encoding request header before compressing. Missing or empty Accept-Encoding means no compression. Some proxies strip Accept-Encoding from forwarded requests. In nginx, verify gzip on is set and the MIME types match:

    gzip on;
    gzip_types text/plain application/json
               text/css application/javascript;
    

    In Apache, enable mod_deflate and verify the filter is applied:

    AddOutputFilterByType DEFLATE text/html
    AddOutputFilterByType DEFLATE application/json
    
  5. Serving pre-compressed files for better performance. Compressing on every request wastes CPU. Serve static files pre-compressed from disk instead. In nginx, enable gzip_static on to serve .gz files alongside originals. In Apache, enable MultiViews or use mod_rewrite to map requests to .gz or .br variants. Ensure the pre-compressed files stay in sync with the originals during deployments.
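Several of the failures above reduce to a mismatch between the declared encoding and the actual body bytes. A small check against known magic numbers can sketch the diagnosis; the helper name is hypothetical and the signature table is illustrative, not exhaustive (Brotli streams, notably, carry no magic bytes):

```python
import gzip

# Leading-byte signatures for encodings that have one.
MAGIC = {
    "gzip": b"\x1f\x8b",          # RFC 1952 gzip member header
    "zstd": b"\x28\xb5\x2f\xfd",  # Zstandard frame magic number
}

def encoding_matches_body(declared: str, body: bytes) -> bool:
    """Return False when the body visibly contradicts the declared encoding."""
    magic = MAGIC.get(declared)
    if magic is None:
        return True  # no signature to check (e.g. br has no magic bytes)
    return body.startswith(magic)

ok_body = gzip.compress(b"hello")
assert encoding_matches_body("gzip", ok_body)
assert not encoding_matches_body("gzip", b"hello")  # header lies: plain body

# Double compression: one decode pass still yields gzip data, not the resource.
double = gzip.compress(ok_body)
assert gzip.decompress(double).startswith(MAGIC["gzip"])
```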

Last updated: April 4, 2026