How it works

s3m uses different upload paths depending on the source data:

  • If the input file is smaller than the multipart buffer, it is uploaded in a single request.
  • If the input file is larger than the multipart buffer, it is uploaded in multipart mode and can be resumed.
  • If the input comes from STDIN, or is compressed or encrypted, the upload is streamed and is not resumable.

The default multipart buffer for file uploads is 10 MiB, configurable with -b/--buffer.
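The threshold decision above can be sketched in shell. This is only an illustration of the rule, not s3m's code; the 10 MiB constant mirrors the default buffer, and the temp file is a stand-in for your input:

```shell
# Decide which upload path a regular file would take under the
# default 10 MiB multipart buffer (override with -b/--buffer).
THRESHOLD=$((10 * 1024 * 1024))

tmp=$(mktemp)
printf 'hello' > "$tmp"          # a 5-byte stand-in file

size=$(wc -c < "$tmp")
if [ "$size" -gt "$THRESHOLD" ]; then
  mode="multipart (resumable)"
else
  mode="single request"
fi
echo "$mode"
rm -f "$tmp"
```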

Buffer sizing

For regular files, s3m can size multipart uploads based on the file size. You can also choose a buffer explicitly when you already know the size and want more control.

Examples:

sh
# 512 MiB parts (536870912 bytes) for a 5 TB file
s3m /path/to/5TB.file backup/archive/full-backup -b 536870912

# 30 MiB parts (31457280 bytes) for a 300 GB file
s3m /path/to/300GB.file backup/archive/quarterly -b 31457280

Current S3 multipart limits:

  • maximum object size: 5 TiB
  • maximum parts per upload: 10,000
  • part size: 5 MiB to 5 GiB
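Given these limits, one way to pick a buffer is to divide the file size by 10,000, round up, and clamp to the 5 MiB floor. A minimal sketch; the part_size helper is ours, not part of s3m:

```shell
# Smallest part size (bytes) that keeps a file of the given size
# within S3's 10,000-part limit, respecting the 5 MiB minimum.
part_size() {
  size=$1
  min=$((5 * 1024 * 1024))            # 5 MiB floor
  part=$(( (size + 9999) / 10000 ))   # ceil(size / 10000)
  if [ "$part" -lt "$min" ]; then part=$min; fi
  echo "$part"
}

part_size 524288000000   # large file -> 52428800 (50 MiB)
part_size 1073741824     # 1 GiB file -> clamped to 5242880 (5 MiB)
```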

For a 500 GB file (treated here as 500,000 MiB, i.e. 524,288,000,000 bytes), 50 MiB parts land exactly on the 10,000-part limit:

txt
524288000000 / 10000 = 52428800
sh
s3m /path/to/500GB.file backup/archive/500GB -b 52428800

Resume behavior

When the size is known in advance, s3m calculates a checksum and stores multipart state locally so interrupted uploads can resume later.

Clean the local state with:

sh
s3m --clean

STDIN and transformed uploads

Streamed (--pipe) uploads have no checksum or saved multipart state, so they cannot be resumed.

This affects:

  • STDIN / --pipe
  • --compress
  • encrypted uploads using enc_key
  • combined compress + encrypt flows

For STDIN streams of unknown size, s3m uses fixed 512 MiB parts. This is intentional: the final size cannot be known up front for live sources such as pg_dump, mariadb-dump, or mariabackup.
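One consequence of the fixed part size: with 512 MiB parts and at most 10,000 parts, a streamed upload tops out at 5,368,709,120,000 bytes (about 4.88 TiB), just under the 5 TiB object limit. The arithmetic:

```shell
PART=$((512 * 1024 * 1024))        # fixed STDIN part size: 512 MiB
MAX_PARTS=10000                    # S3 part-count limit
MAX_STREAM=$((PART * MAX_PARTS))
echo "$MAX_STREAM"                 # 5368709120000 bytes (~4.88 TiB)
```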

Released under the BSD License.