nginx: allow large file uploads for /api/v4/files endpoint #184
madest92 wants to merge 1 commit into mattermost:main
Conversation
📝 Walkthrough
A new Nginx location block was added for /api/v4/files.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes. 🚥 Pre-merge checks: ✅ 3 passed.
Actionable comments posted: 1
🧹 Nitpick comments (1)
nginx/conf.d/default.conf (1)
113-126: Consider adding `proxy_request_buffering off` and `proxy_send_timeout` for large uploads. The configuration correctly removes the nginx body size limit and increases timeouts. However, for large file uploads, consider these improvements:
- `proxy_request_buffering off;` – streams the upload directly to the backend without buffering it in nginx memory, reducing memory pressure for large files.
- `proxy_send_timeout` – currently missing; defaults to 60s. For slow uploads of large files, this timeout could be reached while nginx sends the request body to the backend. Consider setting it to match `proxy_read_timeout` (600s) or higher.

♻️ Suggested improvement
```diff
 location = /api/v4/files {
     client_max_body_size 0;
     client_body_timeout 1200s;
+    proxy_request_buffering off;
     proxy_set_header Connection "";
     proxy_set_header Host $http_host;
     proxy_set_header X-Real-IP $remote_addr;
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
     proxy_set_header X-Forwarded-Proto $scheme;
     proxy_set_header X-Frame-Options SAMEORIGIN;
     proxy_set_header Early-Data $ssl_early_data;
+    proxy_send_timeout 600s;
     proxy_read_timeout 600s;
     proxy_http_version 1.1;
     proxy_pass http://backend;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@nginx/conf.d/default.conf` around lines 113 - 126, In the location block matching "= /api/v4/files" add proxy_request_buffering off to stream request bodies to the backend (avoid nginx buffering large uploads) and set proxy_send_timeout (e.g., 600s to match proxy_read_timeout) so nginx won't time out while sending the request body to the upstream; update the location = /api/v4/files block to include these two directives alongside the existing proxy_read_timeout and proxy_http_version settings.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@nginx/conf.d/default.conf`:
- Line 113: The nginx config currently has an exact match location "location =
/api/v4/files" which raises client_max_body_size for legacy single-file uploads
but does not cover resumable/chunked uploads under "/api/v4/uploads"; if your
Mattermost uses resumable uploads add a corresponding location block for the
uploads endpoints (e.g. a prefix or regex location for "/api/v4/uploads" and
"/api/v4/uploads/*") and set the same client_max_body_size value as used for
"location = /api/v4/files" so chunk initialization (POST /api/v4/uploads) and
chunk POSTs (POST /api/v4/uploads/{upload_id}) are not rejected. Ensure the new
block(s) use the same directives (client_max_body_size and any other relevant
proxy/fastcgi settings) and test upload flows after deploying the change.
---
Nitpick comments:
In `@nginx/conf.d/default.conf`:
- Around line 113-126: In the location block matching "= /api/v4/files" add
proxy_request_buffering off to stream request bodies to the backend (avoid nginx
buffering large uploads) and set proxy_send_timeout (e.g., 600s to match
proxy_read_timeout) so nginx won't time out while sending the request body to
the upstream; update the location = /api/v4/files block to include these two
directives alongside the existing proxy_read_timeout and proxy_http_version
settings.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d9d0e957-bc4c-4666-8d44-4818d6f8e68a
📒 Files selected for processing (1)
nginx/conf.d/default.conf
@coderabbitai approve
✅ Actions performed: comments resolved and changes approved.
@hanzei Maybe you can take a look at this PR?
@NARSimoes hi!
aminvakil left a comment
This requires maintainers to follow api version and change this if it would be changed to /api/v5/ or some other endpoint.
I think 50M suffices many use cases and you can change it locally for yourself (as I did).
This better be documented on how to increase file upload limit size IMHO.
Yes. You're right. But now users are forced to find a way to increase the file upload limit.
In 99% of cases, yes. But in the remaining 1%, files are larger than 50 MB. Plus, the file is stored in Git, which complicates updates if there are conflicts with the pull.
This PR has been automatically labelled "stale" because it hasn't had recent activity.
@NARSimoes Are you the right person to review this PR?
lieut-data left a comment
@edgarbellot, curious if you have any input for why we would need to enforce a body payload size at the proxy server? It would be ideal if we could just remove the setting below.
```nginx
    }

    location / {
        client_max_body_size 50M;
```
I'd like to understand why we even have this limit at all. Should we just lift this instead?
---
It's 1M by default (https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size).
Setting it to 0 makes it pretty easy for a malicious actor to fill up system resources even without authentication. I'd rather not go into details in a public comment :)
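As a rough illustration of the risk described above (a sketch, not a quote from this repo's config; the upstream name `backend` is an assumption):

```nginx
# nginx's built-in default for client_max_body_size is 1M.
# Raising it is an explicit decision; setting it to 0 disables the check
# entirely, so a request body of any size is accepted and buffered by nginx
# before the application ever gets a chance to reject it.
location / {
    client_max_body_size 50M;   # requests above this are rejected with HTTP 413
    proxy_pass http://backend;
}
```

Because the 413 is returned by nginx itself, the oversized body never reaches Mattermost, which is exactly the protection being discussed here.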
---
Yes, that's why I only removed the body size limitation from one method (uploading files), because there's an additional check at the backend level.
---
In that case, I'm wondering if we just raise the default to 100MB to match the Mattermost default, and clarify in docs that if you're using this and tweaking the configuration, you'll want to tweak your proxy as well.
Note too that /api/v4/files is one file upload path, but there may be other endpoints (e.g. plugins).
---
In that case, I'm wondering if we just raise the default to 100MB to match the Mattermost default,
That would be a better default IMHO as well, as it may raise confusion right now in case a user sees that mattermost default is 100MB, but they can not upload something larger than 50MB right now.
and clarify in docs that if you're using this and tweaking the configuration, you'll want to tweak your proxy as well.
Sounds like a plan.
Note too that /api/v4/files is one file upload path, but there may be other endpoints (e.g. plugins).
Thanks for bringing that up. I was wondering about later versions (/api/v5/ and so on) in #184 (review), and it seems there is already a problem with just changing /api/v4/files.
---
clarify in docs that if you're using this and tweaking the configuration, you'll want to tweak your proxy as well.
Agree, documenting this is definitely useful 👍
But there’s a responsibility boundary aspect: Mattermost and proxy are often managed by different teams. Even a simple limit change may require coordination between both, which adds unnecessary friction.
So it might be better, where possible, to keep such settings on the Mattermost side, allowing admins to manage them without depending on infrastructure changes.
Note too that /api/v4/files is one file upload path, but there may be other endpoints (e.g. plugins).
According to the code, there is currently only one place for message attachments. I’m addressing a narrow issue: the attachment size setting in the admin UI is misleading if frontend/proxy limits are stricter.
---
@madest92 makes a valid point about operational boundaries - forcing infra and app teams to stay in sync on every MaxFileSize change can create some friction. That said, client_max_body_size 0 alone doesn't safely delegate enforcement to Mattermost. With nginx's default proxy_request_buffering on (not overridden anywhere in this config), nginx fully buffers the request body to disk before forwarding - so Mattermost's MaxBytesReader only fires after nginx has already written the entire payload. An attacker can exploit that to exhaust disk with unbounded payloads to /api/v4/files, which is the risk @aminvakil flagged.
proxy_request_buffering off would fix that, but it introduces a streaming mode not used anywhere else in this config and goes beyond what this PR needs to solve.
I think @lieut-data's suggestion is the right call: raise client_max_body_size from 50M to 100M in location / to match Mattermost's MaxFileSize default, remove the specific location = /api/v4/files block, and document that the two values should be kept in sync. It's consistent with the existing config pattern, covers plugin endpoints and future API paths, and keeps the DoS surface bounded. I think the coupling between nginx and Mattermost config is a reasonable tradeoff when it's explicitly documented.
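That recommendation can be sketched as a minimal nginx fragment. This is an illustration of the suggested direction, not the final diff; the upstream name `backend` is assumed from the existing config.

```nginx
# Sketch of the suggested consolidation: one global limit matching
# Mattermost's 100 MB default, with no per-endpoint exception, so plugin
# endpoints and future API paths are covered by the same bound.
location / {
    client_max_body_size 100M;   # keep in sync with Mattermost's MaxFileSize
    proxy_pass http://backend;   # assumes the upstream name used elsewhere
}
```

The explicit coupling between the two values is the documented tradeoff: raising the app-side limit requires raising this one as well.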
---
OK, the buffering and DoS risk arguments make sense — I agree the limit should be enforced at the nginx level.
So let’s raise it to 100 MB globally, add information to the "Maximum file size" tooltip and to the documentation.
At the same time, it might be useful to keep /api/v4/files explicitly defined with the same limit — this simplifies administration. In that case, increasing the limit is just changing client_max_body_size, without introducing additional location blocks.
How about this as a compromise?
---
Thanks, @madest92 -- what's the rationale for distinguishing /api/v4/files after the global uplift?
I'd advocate for keeping the configuration as simple as possible: if an admin wants to take on the complexity of managing which endpoints can accept larger payloads, they should opt into that instead of assuming /api/v4/files is the only place that would matter. (It's probably the primary place, but I underscore that this could have knock on effects for other parts of the system and I wouldn't recommend it as the default.)

This prevents 413 (Request Entity Too Large) errors when uploading large files.
Key changes:
Note:

File size limits are still enforced at the application level via Mattermost settings, where administrators can configure the maximum allowed upload size.
This change only removes nginx as a limiting factor and delegates control to the application.
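For reference, the application-level limit mentioned above lives in Mattermost's `config.json` under `FileSettings.MaxFileSize` (value in bytes, also editable in the System Console). The exact value below is an assumption based on the 100 MB default discussed in this thread:

```json
{
    "FileSettings": {
        "MaxFileSize": 104857600
    }
}
```

With nginx no longer capping the body size on the upload endpoint, this setting becomes the effective limit administrators manage.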