I’m familiar with everything you’ve mentioned - I just haven’t ‘put it all together’ like this. So thank you VERY much for your time and consideration.
I didn’t think about changing users. It would be helpful to see some actual rules from DH on per-user resource allocation versus per-account. I would have to do that with symlinks, since some sites are under a primary user while others are under other users. But on this topic, I do see that when remote operations like site updates are initiated externally, all of my sites (10ish) across all users (4ish) get reported as being down. My guess is that they’re all being hammered at once and the account-level resource limits are kicking in.
Using ps in a loop is a good idea, as long as ps itself doesn’t get terminated as a spinning process. LOL. Your command samples are VERY helpful, thanks again.
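Something like this rough loop is what I have in mind (assuming a standard procps `ps` is available on the DH shell; the interval, fields, and log path are just my guesses):

```bash
#!/bin/bash
# Snapshot the current user's heaviest processes every 10 seconds.
# Columns: PID, resident memory (KB), %MEM, elapsed time, command.
while true; do
    date '+%Y-%m-%d %H:%M:%S'
    ps -u "$USER" -o pid,rss,pmem,etime,comm --sort=-rss | head -n 15
    echo '---'
    sleep 10
done >> "$HOME/ps-snapshots.log"
```

Run under nohup or in a screen session it should survive the shell disconnecting, though I suppose the watcher itself could still get killed if the process killer is aggressive.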
It’s troubling that every request has to go through the same process of reading code from disk, which simply multiplies memory consumption for each concurrent user. I understand that users are only concurrent for a few seconds and that they can be executing different code, so the life cycle of the code for each process isn’t consistent. But it would be nice to be able to pin (“core-lock”) specific modules in memory, like the WordPress core and the handful of plugins we know get hit on every transaction. I guess I’m showing ignorance of what memcached might be able to do for us. I’ll have to do some research on these WordPress-specific optimizations, but it’s tough without solid DreamHost-specific information to combine with that, to understand how this environment (DreamHost) can work most effectively with that one (WordPress).
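One thing I’ve started reading up on: for keeping compiled code in memory across requests, PHP’s OPcache (rather than memcached) seems to be the relevant piece, while memcached/object caching is more about query results and page fragments. A quick sanity check of what’s actually active on the host, assuming the CLI binary matches the web PHP (which it may not; the versioned path below is only a hypothetical example):

```bash
# Does this PHP build have Zend OPcache loaded, and is it switched on?
php -v                                   # look for "with Zend OPcache"
php -i | grep -i 'opcache.enable'        # expect: opcache.enable => On => On
php -r 'var_dump(function_exists("opcache_get_status"));'

# If DreamHost uses versioned binaries, check the one the site actually runs,
# e.g. /usr/local/php82/bin/php -v   (path is a guess, not a DH fact)
```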
I’ll do my homework and post back here soon. Comments from DH staff would be most welcome as well.
Another point on this. Again, I understand that each concurrent user will consume some MB of RAM for some number of seconds. I’m thinking a VPS with 2GB of RAM might need roughly 500MB or less for core functionality, leaving about 1.5GB for traffic. So with basic math, and an average visitor hit consuming about 80MB, I can expect to handle about 18 truly concurrent users.
And with 4GB minus the same 500MB for server resources, that leaves 3.5GB for transactions; divided by 80MB per transaction, that’s about 43 concurrent users.
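Spelling that arithmetic out (every number here is just my assumption, not a DH figure):

```bash
# Back-of-the-envelope concurrency estimate.
base_mb=500      # OS + web server + MySQL, etc. (guess)
per_hit_mb=80    # average RAM per in-flight request (guess)
for total_mb in 2000 4000; do
    echo "${total_mb}MB VPS: $(( (total_mb - base_mb) / per_hit_mb )) truly concurrent requests"
done
# => 2000MB VPS: 18 truly concurrent requests
# => 4000MB VPS: 43 truly concurrent requests
```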
Is that the right way to look at this?
I understand that on a VPS we can get somewhat better resource usage with Nginx vs Apache, so between the OS and the web server that core allocation will vary a bit, maybe by up to a full GB.
I also understand we’re talking about users requesting resources at the very same second, and only higher-traffic sites really face regular bombardment from 18-43 truly concurrent users. And with timeout settings, etc., we can expect some “concurrent” users to queue for a few seconds, so the actual number of users served during such peaks is a lot higher than the truly-concurrent count.
Given what I’ve described here, for a few sites with initially low traffic, it seems to me that a VPS with just 2GB of RAM might serve the expected load. What I’m trying to avoid is making an assumption like that and then getting blindsided by huge unexpected resource consumption, like getting a good deal on mobile phone service and later finding out about all of the extra little fees that weren’t considered up front.