Using s3cmd/s5cmd with Cloudflare R2

Pedro Gomes
May 15, 2023


tl;dr: don’t use R2 if you need to upload files over 5 GB with these tools.

s3cmd:

s3cmd -c /etc/s3cmd-r2.cfg --no-check-md5 --multipart-chunk-size-mb=50 sync my-dir s3://my-bucket

Basically, Cloudflare is not returning the MD5 hash s3cmd expects in the ETag header, which breaks multipart uploads of large files.

You might try setting `multipart-chunk-size-mb` to something higher than the default of 15 MB (s3cmd allows up to 5 GB); files smaller than the chunk size go up as a plain PUT, so a bigger chunk size keeps more of them off the broken multipart path. If that still doesn’t cover your files, you may need another tool.
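For example, pushing the chunk size to s3cmd’s documented 5 GB maximum for a one-off upload could look something like this (untested against R2; the file and bucket names are placeholders):

# Rough sketch: with a 5120 MB chunk size, anything up to 5 GB stays on the
# single-PUT path and never touches R2's multipart ETags. Names are placeholders.
s3cmd -c /etc/s3cmd-r2.cfg --no-check-md5 --multipart-chunk-size-mb=5120 \
  put big-backup.tar s3://my-bucket/big-backup.tar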

contents of s3cmd-r2.cfg

[default]
access_key = [redacted]
secret_key = [redacted]
bucket_location = auto
host_base = [redacted-account-id].r2.cloudflarestorage.com
host_bucket = [redacted-account-id].r2.cloudflarestorage.com
enable_multipart = False
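
With `enable_multipart = False` every upload goes out as a single PUT, which is where the 5 GB ceiling in the tl;dr comes from. Before syncing anything, a quick listing is an easy way to check the config can actually reach the bucket (bucket name is a placeholder):

s3cmd -c /etc/s3cmd-r2.cfg ls s3://my-bucket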

I’ll come back and update this s5cmd section later.

s5cmd:

s5cmd --credentials-file s5cmd-r2.cfg --part-size=50

contents of s5cmd-r2.cfg

[default]
aws_access_key_id = [redacted]
aws_secret_access_key = [redacted]
aws_region = auto
host_base = [redacted-account-id].r2.cloudflarestorage.com
host_bucket = [redacted-account-id].r2.cloudflarestorage.com
enable_multipart = False
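
The one-liner above is only a skeleton. As far as I can tell, s5cmd expects a plain AWS-style credentials file (just the two key lines), takes the R2 endpoint on the command line via `--endpoint-url`, and treats `--part-size` as an option of the `cp` subcommand, so a full invocation should look roughly like this (untested; account id, bucket and file names are placeholders):

# Rough, untested sketch against R2; every name below is a placeholder.
s5cmd --credentials-file s5cmd-r2.cfg \
  --endpoint-url https://[redacted-account-id].r2.cloudflarestorage.com \
  cp --part-size 50 big-backup.tar s3://my-bucket/big-backup.tar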
