AWS command line - s3put EntityTooLarge


#1

Hi

I got this command line tool working with dreamobjects:
http://timkay.com/aws/

by simply editing this line:
vi aws
$s3host ||= $ENV{S3_URL} || "objects.dreamhost.com";

and by adding my key info to:
vi ~/.awssecret
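(If I remember right, .awssecret just wants the Access Key ID on the first line and the Secret Access Key on the second; these are placeholder values:)

AKIAEXAMPLEACCESSKEY
exampleSecretAccessKeyGoesHere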

I can issue commands correctly,
example:

~/Desktop$ perl ./aws ls
+-----------+--------------------------+
| Name      | CreationDate             |
+-----------+--------------------------+
| mybackups | 2012-09-14T22:28:18.000Z |
+-----------+--------------------------+

So that was good; however, I wanted to use this tool to upload a large backup file:

~/Desktop$ perl ./aws put mybackups/40Gig.file /Volumes/FILES/40Gig.file
400 Bad Request
+----------------+
| Code           |
+----------------+
| EntityTooLarge |
+----------------+

Is there a maximum file size limit?
Or should I be issuing a multipart put somehow?

Thanks


#2

According to our overview:

“Objects are limited to 10TBs, but will need to be uploaded in 5GB chunks.”

So yes, that’s probably what you’re running into.


#3

That’s correct; you’ll want to use multipart upload. Unfortunately, I’m not sure if this is supported in the aws command line tool you’re using.

I know boto includes a command line utility called s3multiput. It doesn’t support specifying a host, though, so I hacked the s3multiput script to include an endpoint option. Just include -e objects.dreamhost.com or --endpoint objects.dreamhost.com.

I’ll warn you now that I’m no developer! If you’re willing to try it out, I’ve put a copy of it here - modified s3multiput.
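The -e option basically just tells boto to talk to objects.dreamhost.com instead of Amazon. If the script gives you trouble, you can also drive boto’s multipart API directly. Here’s a rough, untested sketch of the same idea (the key/bucket/file names and the 500MB part size are placeholders, and the OrdinaryCallingFormat bit may not even be necessary):

# rough sketch of a multipart upload straight through boto (untested)
import math
import os

import boto
from boto.s3.connection import OrdinaryCallingFormat

conn = boto.connect_s3(
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
    host='objects.dreamhost.com',              # what -e/--endpoint sets
    calling_format=OrdinaryCallingFormat())    # path-style URLs; possibly optional

bucket = conn.get_bucket('mybackups')

source = '/Volumes/FILES/40gig.file'
size = os.stat(source).st_size
part_size = 500 * 1024 * 1024                  # 500MB parts, well under the 5GB cap
part_count = int(math.ceil(size / float(part_size)))

mp = bucket.initiate_multipart_upload(os.path.basename(source))
with open(source, 'rb') as fp:
    for i in range(part_count):
        offset = i * part_size
        fp.seek(offset)
        # size= makes boto read only this part's bytes from the file pointer
        mp.upload_part_from_file(fp, part_num=i + 1,
                                 size=min(part_size, size - offset))
mp.complete_upload()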


#4

Thanks for the responses, guys.
I’ll give s3multiput a try (if I can ever get boto to work with my version of python…)


#5

OK, got those python dependencies resolved. :)

Tried a couple small files with your modified s3multiput script and they worked fine. Thanks!
Trying the large 40gig file now.
It says it’s copying, but there is no progress meter, and I assume this could take a couple of days to complete.

This was the command I used:
python s3multiput -e objects.dreamhost.com -a accesidhere -s secretkeyhere -b mybackups /Volumes/FILES/40gig.file

All you see is:
Copying /Volumes/FILES/40gig.file to mybackups/Volumes/FILES/40gig.file
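If I end up calling boto directly instead, I gather its upload calls take a cb progress callback (cb/num_cb are standard boto parameters, not s3multiput options), so something like this should give a crude readout (untested):

# hypothetical progress callback for boto uploads (untested)
def progress(bytes_sent, bytes_total):
    print('%.1f%% (%d of %d bytes)' % (100.0 * bytes_sent / bytes_total,
                                       bytes_sent, bytes_total))

# e.g. on each part of a multipart upload:
# mp.upload_part_from_file(fp, part_num=n, size=part_size, cb=progress, num_cb=100)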

Anybody know if it will resume the put where it left off if the connection is interrupted and I reissue the command?
I doubt I’ll be able to guarantee an uninterrupted connection for large files like this.

Gonna give s3fs a try with rsync next.


#6

The file is chunked into pieces, and the upload only counts as a “success” if all the pieces make it and get assembled back together. I think partial uploads are just considered failed.
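
If you’re curious whether an interrupted run left pieces behind, boto can list the bucket’s in-progress multipart uploads and throw them away. A rough sketch, reusing the same host override mentioned above (untested; the access key and bucket name are placeholders):

# rough sketch: list (and optionally abort) unfinished multipart uploads
import boto

conn = boto.connect_s3(aws_access_key_id='YOUR_ACCESS_KEY',
                       aws_secret_access_key='YOUR_SECRET_KEY',
                       host='objects.dreamhost.com')
bucket = conn.get_bucket('mybackups')

for mp in bucket.get_all_multipart_uploads():
    print('%s  %s  started %s' % (mp.key_name, mp.id, mp.initiated))
    # mp.cancel_upload()   # uncomment to discard the partial pieces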