Sorry DreamObjects... I tried. I really did


#1

I’m hoping to hear from other DHers about their experience with DreamObjects so far. In the two months I’ve used the service (lightly), I must have experienced at least a dozen outages due to 503 Service Unavailable responses, and I canceled my DreamObjects hosting as a result. One of the main reasons I moved my operation to DH was DreamObjects, and I see many other posts complaining about the same issues. I thought DreamObjects was out of beta. All of DH’s other hosting features are excellent and seem to be high quality.

Anyone have any insight?

David


#2

Hi, no insight here, but just to say: apart from the current intermittent web-access timeouts and 503s, which only became an issue for me in the last few days (maybe 4–5), DreamObjects has worked great for me over the last year, with fast access times and consistently good web access to the files.
(Hopefully it can be resolved.)


#3

The last couple of weeks have been really bad. We typically upload 10–25k objects (very small files) each night for archiving into a bucket approaching 7.5 million objects. Previously the run would finish in about 20 minutes; lately it’s been taking 12+ hours, with a lot of 503 errors.

Even just hitting the endpoint straight up gives a 503 half the time: http://objects.dreamhost.com

I had a brief email exchange with Justin last week. They are working on expanding the cluster and updating software versions (see upcoming maintenance). Hopefully that gets things back to normal, since we were very happy with the service up until about a month ago.
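In case it helps anyone hitting the same thing, wrapping the uploads in a retry with exponential backoff at least keeps a nightly run from dying on the first 503. A rough boto3 sketch of that idea is below; the bucket name, key, and retry counts are placeholders rather than our actual setup, and credentials are assumed to come from the usual boto3 config/environment variables.

```python
# Rough sketch: retry an upload with exponential backoff when the gateway
# returns 503, instead of failing the whole nightly archive run.
import time

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", endpoint_url="https://objects.dreamhost.com")

def put_with_backoff(bucket, key, body, attempts=5):
    """Upload one object, backing off whenever the gateway answers 503."""
    for attempt in range(attempts):
        try:
            s3.put_object(Bucket=bucket, Key=key, Body=body)
            return
        except ClientError as err:
            status = err.response["ResponseMetadata"]["HTTPStatusCode"]
            if status == 503 and attempt < attempts - 1:
                time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, ... then retry
                continue
            raise

# placeholder bucket/key, e.g. one archived email per object
put_with_backoff("mail-archive", "123456.mail", b"...message body...")
```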


#4

Hello folks, indeed DreamObjects has only recently started to respond more slowly than usual (hence the 503 errors). The investigation we’re running to find the cause points to two concurrent factors: one is a sharp increase in cluster utilization, with the API endpoint receiving many more requests per second. At the same time as that extra load arrived, we also began expanding the cluster, which generates a lot of disk and network activity (because of the way Ceph works). Together, these two events have put more stress on the request queues between haproxy and the RGWs, causing the errors.

The expansion of the cluster is almost complete; from the monitors we can see things have already improved today, and they will keep getting better.

Steve-o: a bucket with millions of objects is more likely to run into issues. There are some best practices to follow if you’re not doing them already. Feel free to reach out to me privately (or share here) and describe how you’re storing the objects. I’m thinking of writing up those best practices, and using a real-life example could be more useful than a general-purpose article.


#5

[quote=“smaffulli, post:4, topic:63700”]I’m thinking of writing up those best practices, and using a real-life example could be more useful than a general-purpose article.
[/quote]
I’ve noticed the slowdown and complained here. I’d love to see an updated best-practices guide. I know there have been changes since I started using DHO.
-mort


#6

Our use case is pretty simple: we archive the content of the emails our web app sends in DreamObjects. The object name is just the incremental id assigned by the database, so every key is an integer (e.g. 123456.mail). We never actually need to list the bucket contents, just read specific objects back out.

The bucket is pretty old too, with a creation date of 2014-06-27, which I know means it’s missing some of the features and functionality of newer buckets. It’s also approaching the 10-million-object quota; I think we’ll hit it by the end of the year.

I can pretty easily just mod 10 the id and shard the data into 10 different buckets.
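For illustration, the routing logic for that would only be a few lines, something like this sketch (the bucket prefix and key format here are placeholders):

```python
# Sketch of the mod-10 idea: route each mail id to one of ten buckets
# based on its last digit. Bucket prefix and key format are placeholders.
def shard_bucket(mail_id, prefix="mail-archive", shards=10):
    """Pick the bucket for a given numeric mail id."""
    return f"{prefix}-{mail_id % shards}"

def object_key(mail_id):
    return f"{mail_id}.mail"

# 123456 % 10 == 6, so this object would land in mail-archive-6
assert shard_bucket(123456) == "mail-archive-6"
assert object_key(123456) == "123456.mail"
```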


#7

Sharding would make things a lot faster, even if you don’t list the contents of the buckets. We’re working on a more comprehensive tutorial, but in general any of the suggestions for improving the performance of S3 buckets on Amazon’s cloud can be applied in similar ways to Ceph/RGW/DreamObjects. Stay tuned for more details.
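To give one illustrative example of the kind of tip that translates directly (a sketch only, not an official recommendation, and the key format is made up): spreading writes across the bucket index by hashing the key prefix, so new keys don’t all arrive in strictly increasing order.

```python
# Illustrative only: prepend a couple of hex characters from a hash of the id
# so new keys spread across the bucket index instead of arriving in
# ever-increasing lexicographic order.
import hashlib

def prefixed_key(mail_id):
    """e.g. 123456 -> 'e1/123456.mail'"""
    digest = hashlib.md5(str(mail_id).encode()).hexdigest()
    return f"{digest[:2]}/{mail_id}.mail"

print(prefixed_key(123456))  # e1/123456.mail
```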