S3fs-fuse and DreamObjects

dreamobjects

#1

Has anyone been successful in mounting a bucket using s3fs-fuse?

I think I have the options correct, but am getting this when trying to mount/connect:

CURLE_SSL_CONNECT_ERROR

Edit: Never mind, I figured it out.

I was using https in my URL, which did not agree with the link that DreamHost gives me for that bucket.

So, when setting the url option for s3fs, I did this:
s3fs nameofmybucket /mnt/bucket -o url=https://objects.dreamhost.com

but should have used:
s3fs nameofmybucket /mnt/bucket -o url=http://objects.dreamhost.com

and now it seems to be working.


#2

and for anyone interested, I was able to add this line to fstab for auto mount:

s3fs#nameofmybucket /mnt/bucket fuse allow_other,_netdev,nosuid,nodev,url=http://objects.dreamhost.com 0 0
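
For the fstab mount to work at boot, s3fs also needs to find your DreamObjects keys. A minimal sketch of that setup, assuming you use the system-wide credentials file s3fs looks for (/etc/passwd-s3fs; the keys below are placeholders):

[code]# format is ACCESS_KEY_ID:SECRET_ACCESS_KEY (placeholders here)
echo 'YOUR_ACCESS_KEY:YOUR_SECRET_KEY' > /etc/passwd-s3fs
# s3fs refuses a credentials file that other users can read
chmod 600 /etc/passwd-s3fs[/code]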


#3

If you’re going to keep an s3fs-fuse bucket mounted all the time, keep in mind that it’ll use bandwidth on your DreamObjects user every time it gets accessed. Depending on your system, this may include automated processes like file indexing and thumbnailing, so you may want to keep a close eye on your bandwidth usage to make sure things don’t get out of hand.


#4

Thanks for the heads up. I will keep an eye on that. I should probably just write a script to mount, make backups, then unmount for the buckets I use as backups.
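
Something like this, as a rough sketch (the bucket name and paths are just placeholders):

[code]#!/bin/bash
# mount the bucket, tar up /home into it, then unmount
s3fs nameofmybucket /mnt/bucket -o url=http://objects.dreamhost.com
tar -czf /mnt/bucket/home-$(date +%m%d%y).tgz /home/
fusermount -u /mnt/bucket[/code]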

I was considering using additional buckets for file storage (photos and such) but if it is going to be costly I may as well just buy more hard drives.


#5

Just to follow up on this: I have been running three different VPSes on DigitalOcean, each with DreamObjects mounted, for the past week or so.
I run backups every night, plus another set of backups once a month.

So far, total “download” cost is 6 cents. Of course, I am only uploading content, which is free with DreamObjects. Paying for the occasional download when I need to access the backups will be well worth it.

In summary: keeping DreamObjects mounted 24/7 has not cost me anything extra, as I am only using them for uploading backups.
So far, my monthly usage cost is 28 cents. A very good deal and exactly what I need for storing backups.


#6

I’m glad to hear it’s working so well for you! If you’re interested in writing a tutorial, I’d be happy to post it on the DreamHost blog.


#7

Here you go!

https://firefli.de/tutorials/s3fs-and-dreamobjects.html


#8

Thanks!


#9

I followed your tutorial, Adam, but cannot mount my bucket.
If I try with plain mount, I get

[code]mount: wrong fs type, bad option, bad superblock on s3fs#myownbucket,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail or so[/code]

If I try with
s3fs -f -d $bucket $mountpoint
I get

[code]set_moutpoint_attribute(3389): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
s3fs_init(2660): init
s3fs_check_service(2978): check services.
CheckBucket(2366): check a bucket.
RequestPerform(1571): connecting to URL http://mybucketname.s3.amazonaws.com/
RequestPerform(1587): HTTP response code 403
RequestPerform(1606): HTTP response code 403 was returned, returning EPERM
CheckBucket(2400): Check bucket failed, S3 response: <?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>Th1sISN0Tmy4CC355K3y</AWSAccessKeyId><RequestId>SOMET3XTHER3</RequestId><HostId>/0THERT3XTH3R3</HostId></Error>[/code]

so s3fs only tries to access s3.amazonaws.com… but mount can’t mount it…


#10

http://mybucketname.s3.amazonaws.com/ is probably the issue.

In my tutorial, the mounting is done from fstab, and uses:

s3fs#YOUR_BUCKET /mnt/bucket fuse allow_other,_netdev,nosuid,nodev,url=http://objects.dreamhost.com 0 0

Note the url option: http://objects.dreamhost.com. Without it, s3fs defaults to Amazon’s s3.amazonaws.com endpoint, which is why it was trying to check your bucket there.


#11

Got it…
I now have a backup script which cron will run weekly for me with

[code]#!/bin/bash

filedate=$(date +%m%d%y)

# send stdout and stderr to dated log files
exec 1>/var/log/backup/$filedate.backup.log
exec 2>/var/log/backup/$filedate.berror.log

# timestamped logger
log() {
    echo "$(date) : $*"
}

# mount the bucket if it is not already mounted
if [ ! -d /mnt/bucket/backups ] ; then
    log "mounting backup drive"
    s3fs -o use_cache=/tmp/cache bucket /mnt/bucket -o allow_other,url=http://objects.dreamhost.com
fi

# bail out if the mount still is not there
if [ ! -d /mnt/bucket/backups ] ; then
    log "failed to mount backup drive! BACKUP FAILURE! exiting..."
    exit 1
fi

# tar up each home directory into the bucket
for dir in /home/*/; do
    i=$(basename "$dir")
    log "backing up /home/$i ..."
    tar -czf /mnt/bucket/backups/$i.$filedate.tgz /home/$i/
    sleep 10
done

# prune backups older than 15 days (keeps the last two weekly runs)
cd /mnt/bucket/backups
find . -maxdepth 1 -name '*.tgz' -mtime +15 -exec rm -f {} \;

log "backups complete for $filedate"

exit 0[/code]

It will delete backups older than 15 days (i.e., only keep the last two weekly backups; I think I learned that from you, too, somewhere…).
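
For reference, the weekly cron entry could look something like this (the script path is a placeholder):

[code]# crontab entry: run the backup script every Sunday at 03:00
0 3 * * 0 /usr/local/bin/backup.sh[/code]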


#12

Nice one!

I notice some people like to compress their backups. I prefer to just rsync my backups. That way, only files that have been modified are updated, rather than doing a full backup every time.

Also, it makes retrieving that random file I need much easier, as I just have to open the backup folder and browse to the file I need.

I run rsync every night at midnight…
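
For anyone wanting to do the same, a minimal sketch of that nightly job, assuming the bucket is already mounted at /mnt/bucket (paths are placeholders):

[code]# crontab entry: mirror /home into the mounted bucket at midnight
0 0 * * * rsync -a --delete /home/ /mnt/bucket/backups/home/[/code]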


#13

I figured tarring things up would economize on storage space and bandwidth.
I back up DBs daily and keep three days of uncompressed backups in my /home. This way
I will have Thu/Fri/Sat from the previous week in the bucket, plus the most recent three days in $HOME, but it’s a lotta stuff.


#14

Hi,

I have used your guide, but ran into a snag: the DreamObjects s3fs-fuse file system appears to be ‘mounted’ as an executable file, rather than a directory.

I get the same result whether I mount the s3fs manually (“s3fs -o url=http://objects.dreamhost.com”) or through fstab on boot. To diagnose, I’ve tried running with fuse’s -f option:

[code]s3fs -f <mybucket> /mnt/<mountdir> -o url=http://objects.dreamhost.com
set_moutpoint_attribute(3537): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
s3fs_init(2713): init
s3fs_check_service(3070): check services.
CheckBucket(2525): check a bucket.
insertV4Headers(1961): computing signature [GET] [/] [] []
url_to_host(99): url is http://objects.dreamhost.com
RequestPerform(1648): HTTP response code 400 was returned, returing EIO.
CheckBucket(2563): Check bucket failed, S3 response: <?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidArgument</Code></Error>
s3fs_check_service(3125): Could not connect, so retry to connect by signature version 2.
CheckBucket(2525): check a bucket.
RequestPerform(1636): HTTP response code 200[/code]

So it seems to me that the bucket is connecting fine, since the second RequestPerform returned correctly. Still, when I try to access the mounted drive, I find:

[code]#>ls /mnt/<mountdir> -l
total 1
-rwxr-xr-x 1 root root 0 Apr 23 03:18 <root_dir_of_s3fs>[/code]

It shows up as an executable file instead of a directory! wtf! Any ideas? I swear that I had this sorta-working like a week ago when I initially set it up…


#15

I cannot at this point recommend using DreamObjects. Two days ago I had some issues and the response from support was basically “sorry you are having trouble. good luck.”

I am going to try a few things, but it looks like this is not a good option at this point.


#16

As stated in another thread in this forum, DreamObjects did have a bout of slowness and issues earlier this week. We had a 6X traffic spike on Monday and Tuesday that caused instability. The issue has since been resolved. During the period of slower response times and possible 500 or 503 errors, software like s3fs may not react to such issues as expected and can behave unpredictably. There are alternatives, such as boto-rsync or backup software like duplicity, that are better at handling these kinds of issues. We also have plans to improve the stability of the product early next week. If you have any questions on how to use any of these, please let us know and we would be happy to help.
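
For example, a duplicity backup to DreamObjects might look roughly like this (the bucket name and paths are placeholders; check your duplicity version's manual for the exact S3 URL scheme it expects):

[code]# duplicity's S3 backend reads the keys from the environment
export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_KEY
duplicity /home s3://objects.dreamhost.com/nameofmybucket/backups[/code]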


#17

* DreamObjects has worked fine for me for the past year, with a few minor glitches.
* On May 5th, there were some problems with DreamObjects, and since then I have been having issues across multiple servers with multiple buckets.
* I contacted support, and they told me that it wasn’t them, it was what I was doing, and that I would have the same issues even if I switched to a different storage service.
* I am switching to a different storage service.

If I end up having the same issues, I will follow up here, switch back to DreamObjects, and find a different way to back up my servers than s3fs.


#18

Switched to Amazon S3 and everything has been rock solid. It is a little more expensive, but at this point I am willing to pay extra rather than go through all that again.

The thing is, I really was rooting for DreamHost on this one, but it completely takes the wind out of my sails to have so many issues and have support tell me it’s my fault.

Not sure I would have enough faith to switch back anymore.