Prevent Zip bombs in .NET

  • Security

This post is part of the series 'Vulnerabilities'. Be sure to check out the rest of the blog posts of the series!

If your website allows users to upload a zip file that will be extracted on the server, you should be very careful. Websites are often protected against uploading big files. By default, Kestrel and IIS limit the size of a request to 30MB. Apache and nginx also have limits. However, this limit applies only to the uploaded file, not to its decompressed size. In the case of a zip file, a 281TB file can be compressed into a 10MB zip file (source). So, imagine a file of 30MB. If you choose to decompress such a file in memory or on disk, you may run out of memory or disk space very quickly. This is a denial-of-service (DoS) attack.
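As a reference point, the request size limit mentioned above can also be tightened in ASP.NET Core. A minimal sketch, assuming a typical minimal-hosting `Program.cs` (the 10MB value is an arbitrary example, not from the original post):

```csharp
var builder = WebApplication.CreateBuilder(args);

// Lower Kestrel's default 30MB request body limit to 10MB
builder.WebHost.ConfigureKestrel(options =>
{
    options.Limits.MaxRequestBodySize = 10 * 1024 * 1024;
});

var app = builder.Build();
app.Run();
```

Note that this only caps the *compressed* upload; it does nothing about the decompressed size, which is the subject of the rest of this post.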

You need to check the content of the zip file before processing it. The header of each entry in a zip file indicates its uncompressed size. However, you cannot trust this value: it is just an informational value set by the tool that created the zip file. While regular applications set it correctly, an attacker won't… So, you first need to check the value from the header to detect very big files, but you also need to throw an exception if, while extracting, you detect that a file is bigger than declared.

const long MaxLength = 10 * 1024 * 1024; // 10MB

using var zipFile = ZipFile.OpenRead("");

// Quickly check the value from the zip header
var declaredSize = zipFile.Entries.Sum(entry => entry.Length);
if (declaredSize > MaxLength)
    throw new Exception("Archive is too big");

foreach (var entry in zipFile.Entries)
{
    using var entryStream = entry.Open();

    // Use MaxLengthStream to ensure we don't read more than the declared length
    using var maxLengthStream = new MaxLengthStream(entryStream, entry.Length);

    // Be sure to read the content of the entry from maxLengthStream, not entryStream
    using var ms = new MemoryStream();
    maxLengthStream.CopyTo(ms);
}

Here's the code of the MaxLengthStream class. It's a read-only stream that throws an exception when you read more than the specified maximum length.

internal sealed class MaxLengthStream : Stream
{
    private readonly Stream _stream;
    private long _length = 0L;

    public MaxLengthStream(Stream stream, long maxLength)
    {
        _stream = stream ?? throw new ArgumentNullException(nameof(stream));
        MaxLength = maxLength;
    }

    public long MaxLength { get; }

    public override bool CanRead => _stream.CanRead;
    public override bool CanSeek => false;
    public override bool CanWrite => false;
    public override long Length => _stream.Length;

    public override long Position
    {
        get => _stream.Position;
        set => throw new NotSupportedException();
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        var result = _stream.Read(buffer, offset, count);
        _length += result;
        if (_length > MaxLength)
            throw new Exception("Stream is larger than the maximum allowed size");

        return result;
    }

    // TODO ReadAsync

    public override void Flush() => throw new NotSupportedException();
    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();
    public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();

    protected override void Dispose(bool disposing)
    {
        _stream.Dispose();
        base.Dispose(disposing);
    }
}
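The class leaves ReadAsync as a TODO. A possible override (my sketch, not from the original post) simply mirrors the synchronous Read:

```csharp
public override async Task<int> ReadAsync(byte[] buffer, int offset, int count, CancellationToken cancellationToken)
{
    var result = await _stream.ReadAsync(buffer, offset, count, cancellationToken).ConfigureAwait(false);
    _length += result;
    if (_length > MaxLength)
        throw new Exception("Stream is larger than the maximum allowed size");

    return result;
}
```

Even without this override, the limit is still enforced: the base Stream.ReadAsync implementation ultimately delegates to the synchronous Read, which contains the check. Overriding it just avoids that fallback and stays truly asynchronous.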

You are now safe when extracting archive files on your server.
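As a quick sanity check of the guard, wrapping an in-memory stream with a small limit should throw as soon as more bytes are read than allowed (a sketch using hypothetical sizes):

```csharp
using System;
using System.IO;

// A 100-byte payload wrapped with a 10-byte limit
var payload = new MemoryStream(new byte[100]);
using var limited = new MaxLengthStream(payload, maxLength: 10);

try
{
    limited.CopyTo(new MemoryStream()); // reads past the limit
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message); // "Stream is larger than the maximum allowed size"
}
```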

Do you have a question or a suggestion about this post? Contact me!
