Setting the Expires header on objects stored in buckets




I’ve managed to set up WordPress with W3TC to store static content inside a DreamObjects bucket. Now I’m testing it from the outside, to see what is working well and what isn’t. Testing from Pingdom reveals that many of the stored objects seem to have an expiry header that is shorter than it should be (ideally at least a week or so).

Is this user-settable and, if so, is it something that can be done through the S3 API? The nearest thing I found was the “bucket lifecycle” command, which is currently unsupported by DreamObjects, and I’m not sure it does what I want anyway (apparently not, though it might also set the appropriate headers as a side effect).


Our DreamObjects developers are reviewing your request and we’ll follow up soon; sorry about the wait.


Dear Cedric,

There is no need to apologise! You guys rock and are lightning-fast in your replies :slight_smile:

There is a good reason for my many requests/suggestions/emails to tech support and whatnot. In a moment of insanity, I thought that I’d do a serious crash test on DH’s technology. Joking! :slight_smile: But on October 27th, an article will be published in a major newspaper that usually has 100,000 online readers, and it will include links to the three websites whose static content I’ve been pushing to DreamObjects. So, expect your poor DreamObjects servers to squeal under the load :slight_smile: But, then again, what better way to test something than a “real life” test…? :slight_smile:

While I have CloudFlare in front of all three sites, there are two things that would give them a little extra help. The first is for DreamObjects to support CNAMEs, or for me to use the hack suggested by justinlund; this should allow CloudFlare to cache DreamObjects content, too. The second, if all else fails, is to at least have longer Expires headers (which CloudFlare will obviously also appreciate).

My goal is to survive the extra traffic on October 27th without being kicked out by you guys :slight_smile:


I did implement justinlund’s suggestion in my hacked version of W3TC. Still, there seems to be nothing I can do about the Expires headers.

I imagined that, when serving content from DreamObjects via CloudFlare, CloudFlare would correctly assume this content is static and thus send its own Expires headers. But apparently this doesn’t happen. Either CloudFlare doesn’t touch the headers, or it’s “confused” about the “staticness” of DreamObjects, or, well, there are some options elsewhere that I couldn’t find (yet).


I guess you know the S3 API lets you set Expires headers for objects when you store them.

You just cannot change the headers after an object is stored, at least not on Amazon, and apparently not here either.
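For reference, this is roughly how you attach those headers at upload time with boto. The credentials, bucket name and filenames below are just placeholders, and the host is the DreamObjects endpoint; on AWS S3 the headers come back exactly as stored:

```python
import boto
import boto.s3.connection

# Placeholder credentials and names; "host" points at the DreamObjects endpoint.
conn = boto.connect_s3(
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
    host="objects.dreamhost.com",
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)
bucket = conn.get_bucket("my-bucket")

key = bucket.new_key("images/logo.png")
# The extra headers are sent along with the PUT; on AWS S3 they are served back verbatim.
key.set_contents_from_filename(
    "logo.png",
    headers={
        "Cache-Control": "public, max-age=604800",    # cache for one week
        "Expires": "Thu, 01 Nov 2012 00:00:00 GMT",   # explicit expiry date
    },
    policy="public-read",
)
```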

Oh wait - you know what - you are right.

I just tested with a custom expiry date way in the future; it works on Amazon but not here on DreamObjects.

Here on DreamObjects it’s showing the current time as the expiry date.

Wow, this is a showstopper. I can’t use DreamObjects until the Expires header is supported and keep-alive is enabled.

Because without an Expires header, the bandwidth used for public objects will skyrocket.

7 cents vs. 12.5 cents is meaningless if the object is downloaded twice as often (or more).


Actually, assume I’m completely ignorant of the S3 API :slight_smile: It’s been years since I last looked at its specs, but yes, I assumed this was something to be set via one of the calls. I agree it’s a show-stopper for now and it completely defeats the point of offloading static content to a cloud…

And it’s a good thing for both of us that DH is looking into it. With my configuration, I sometimes get CloudFlare to add its own Expires headers for at least 4 hours, but that’s way, way too low, and apparently it doesn’t work that well.

So we have to be patient and wait until DH fixes this. Remember, it’s all part of the beta-testing :slight_smile: It’s good to know we found this out BEFORE we started to get charged for it :slight_smile:


Basically, AWS S3 allows you to store almost any header you want with an object; it’s simply served back exactly as you stored it. You can add “Expires”, “Cache-Control”, etc.

Because DreamObjects uses Apache, unless it’s modified and/or allowed to serve objects raw, I’m not sure how they are going to fix this, but I’ll wait and see.

So, other than for private backups, DreamObjects is currently useless for public content, unless you want to pay for the same visitors downloading your images over and over and over again. In theory a browser may still cache an object and just do an HTTP 304 check. But because they don’t use keep-alive, the visitor’s browser has to open a new connection for each object and can never reuse a connection, so there is still some overhead (though, unlike AWS S3, DreamObjects wouldn’t charge for the 304 requests).
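If you want to see that revalidation (and the keep-alive situation) for yourself, a quick sketch like this works; the object URL is just a placeholder, and this is nothing more than what a browser does when it revalidates its cache:

```python
import requests

# Placeholder public object URL; substitute one of your own.
url = "https://objects.dreamhost.com/my-bucket/images/logo.png"

# First request: remember the validator the server hands back.
first = requests.get(url)
last_modified = first.headers.get("Last-Modified")

# Second request: the conditional GET a browser does when revalidating.
# A 304 response means "not modified" and carries no body.
second = requests.get(url, headers={"If-Modified-Since": last_modified})
print(second.status_code)                 # 304 if the object hasn't changed
print(second.headers.get("Connection"))   # shows whether keep-alive is on offer
```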


Expires headers aren’t available yet, sadly. Our next roll-out should have these, but I don’t have an ETA on that for you. I updated our FAQ with some information about it.


Hm. There is a lot in what you say. I was taking a look to see what kind of headers are being returned, and it is just as you say: DreamObjects runs behind Apache and sets almost no headers, and this will certainly hurt caching performance. You’re right: this makes it bad for serving content publicly (unlike Amazon and other competing cloud storage services). But other than for backups, what is DreamObjects good for? :slight_smile:
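My “taking a look”, by the way, was nothing fancier than dumping the response headers of one public object, roughly like this (the URL is just a placeholder):

```python
import requests

# Placeholder URL; point it at any public object in one of your buckets.
resp = requests.head("https://objects.dreamhost.com/my-bucket/images/logo.png")
for name, value in resp.headers.items():
    print(name + ": " + value)
# The caching-related headers (Expires, Cache-Control) simply aren't in the list.
```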

So this “request” is actually a bit more important than I thought, but probably also harder to implement than Cedric led us to believe :slight_smile: DH’s devs tend to amaze and surprise us, though, so it’s still too early to say…

My current workaround is simply to place CloudFlare in front of everything stored on DreamObjects. CloudFlare (which is free to use) can selectively set the Expires headers, so I added 8 days for the static content and 4 hours for the “mostly dynamic” content. This is really easy to set up!

Since CloudFlare caches everything anyway, it remains to be seen if DreamObjects is actually worth the trouble of configuring! I mean, I could set up the static content on a different subdomain, not even activate PHP on it, and do pretty much the same as I do with DreamObjects, i.e. push static content automatically from WP to a self-hosted, FTP-based domain. The only advantage of using DreamObjects is reducing the number of requests hitting the shared server I’m using…

Again, for a beta trial, the good thing is that it works and that there are workarounds :slight_smile: But I would agree with you that maybe, just maybe, the 7 cents might be overpriced if we have far less control over how our content is stored and delivered… I’m sure DH will be looking very closely at this!

Ah, I posted the other message before reading this :slight_smile:

Ipstenu, I guess that’s why some of your team are also busy working on a CDN solution: maybe you’re trying to “fix” the issue by pushing it into the CDN layer (which is pretty much what I’m doing right now with CloudFlare). The question then will be how much extra the CDN layer will cost us end-users :slight_smile:


DreamObjects should probably warn people in your import tool that any custom headers from AWS S3 will be lost on import, too; at least, I suspect that’s what will happen if you aren’t even storing them.

I can’t tell from this side whether the headers are stored and then stripped when served, or never stored at all.

Will check back in November and see if there is any progress.


Gwenyth: I’m only an adjunct to DHO :slight_smile: (I know about it because I want to do a CDN for WordPress integration, like you!) so I’m not 100% clear on what’s up with that. I’m fairly sure the Expires headers are step one for our CDN solution.

c.k.: Good question! I’ll ask!


I found out more information about this. Right now Ceph’s RADOS Gateway only supports x-amz-meta- and content-type headers. Anything else is stripped. The next version of Ceph addresses this and is in the final stages of development. We’ll do testing on our development and staging clusters before pushing to production. At the very least, I can add some documentation in the wiki.
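In case anyone wants to check on their own buckets, something like this rough sketch (untested here, with placeholder names and credentials) shows which headers survive the round trip today:

```python
import boto
import boto.s3.connection

# Placeholder credentials and names; "host" is the DreamObjects endpoint.
conn = boto.connect_s3(
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
    host="objects.dreamhost.com",
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)
bucket = conn.get_bucket("my-bucket")

key = bucket.new_key("header-test.txt")
key.set_contents_from_string(
    "hello",
    headers={
        "x-amz-meta-note": "kept",                     # custom metadata: passed through
        "Content-Type": "text/plain",                  # also passed through
        "Expires": "Thu, 01 Nov 2012 00:00:00 GMT",    # currently stripped by the gateway
    },
)

# Read the key back and see what was actually stored.
stored = bucket.get_key("header-test.txt")
print(stored.metadata)       # {'note': 'kept'}
print(stored.content_type)   # text/plain
```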


That’s cool to know! Thanks for keeping us updated!