Understanding RAM use in shared space

I’ve been doing this IT thing for almost 40 years but I’m going to admit to being dense about how RAM is used in DH shared space.

I’ve been trying to test the limits of a WordPress site I’m building, adding plugins and watching memory usage. As I add each plugin, I see memory use rise a few hundred K; a couple add over 1MB. Now an average hit to the admin dashboard uses just over 90MB.

So a single user/developer is hitting the site and using 90MB RAM for about 2 seconds. What happens when a second person hits the same page? Is shared space now attempting to consume 180MB? Is anything cached so that the code pulled into memory by the first is re-used by the second? How long does it remain in cache?

I know there’s some fudge-space between 90MB and 128MB. I’m not going to push it but I’m trying to figure out how many users can hit that page if the first one consumes 90MB. If code is cached for multiple users and each additional user adds memory usage just for session-specific data, then is it reasonable in this scenario to guess that this one site might be able to support about 20 simultaneous users @90MB for code and about 1MB per session for each additional user?

That’s one site. I have a number of domains on my account. So is that 90MB allocated from a pool available to the account? In other words: that one 90MB hit only works because no one is hitting one of the other domains. A single hit on another site might consume maybe another 70MB, and I’m guessing that a total of 160MB at any given second is likely to trigger Procwatch to randomly kill one of the connections.

To respond in advance to expected comments: Yes, I know this one site is heavy with plugins. I want to move it to a VPS here at DH or elsewhere. What I’m trying to figure out is how much RAM I actually consume on an average hit, so that I know how much RAM I need to buy to support X simultaneous visitors. Without metrics, I don’t want to just throw hardware at a site and hope it stays up. I want to understand how the resources are being used and make solid decisions based on that data.



You are correct in your assumption that if a page uses X amount of RAM, that usage stacks with the number of concurrent visitors you have. This is why site optimization and caching are important.
The server will cache your site(s) based on the caching rules you have set up, and those rules are per site. Depending on the CMS you are using, there should be plugins you can add and configure to set your site’s caching properties. That said, the admin dashboard of most CMSes tends to use more resources than the actual web pages, so admin-page metrics aren’t the best thing to measure, unless you plan on having several people working in the admin section at once.

There is the DreamPress service that DreamHost offers as a managed WordPress solution, which includes server-side caching with Varnish and Memcached.
Information on DreamPress is here: https://help.dreamhost.com/hc/en-us/articles/214581728

On shared hosting, your account has an overall cap on how much of the server’s resources it can use at a given time, and each domain user also has its own limits within that. Because of that, it is best to split your domains across individual domain users. For example, if you had three domains examp1e.com, example_2.com, and exampl3.com, you could set them up under user1, user2, and user3 respectively.


With this setup, if examp1e.com received a lot of traffic and used up enough resources to start being killed by procwatch, example_2.com and exampl3.com would not be affected, because they are under different users and have their own separate resource limits. This assumes that example_2.com and exampl3.com are receiving little to no traffic and are not contributing much to your overall resource usage for the account. If all of your domains together use enough resources to exceed your account’s limits, the procwatch service may stop all of your scripts from running until the usage is low again.

One way to find out how much RAM your domain user is using is with the shell command “ps” over SSH; ps displays information about a selection of the active processes.

Replace $user with your domain’s shell username. You can also put in the specific PHP CGI your domain is running, instead of the user, to get just that domain’s RAM usage. E.g. if your domain is using PHP 7.0, you would replace $user with php70.cgi (or php56.cgi for PHP 5.6) to find out how much RAM is being used by all of the PHP scripts running at that moment.
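The exact command wasn’t preserved in the post above, so this is a hedged reconstruction that produces the RAM/PID/user/process listing shown below; the awk conversion from kilobytes to megabytes is my own addition, not necessarily what the original poster used:

```shell
# Hedged reconstruction -- the original command was not preserved in this thread.
# List resident memory (converted from KB to MB), PID, user, and process name
# for every process owned by the current user:
ps -o rss=,pid=,user=,comm= -u "$(id -un)" \
  | awk '{printf "%.4fMB %s %s %s\n", $1/1024, $2, $3, $4}'
```

To narrow this to the PHP processes instead of the whole user, you could select by command name, e.g. `ps -C php70.cgi -o rss=,pid=,user=,comm=` and pipe through the same awk filter.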

You should get output similar to this:

RAM, Process ID, User, Process
28.043MB 22016 12345678 php70.cgi
32.5391MB 21677 12345678 php70.cgi
30.51172MB 31822 12345678 php70.cgi

You can also run the above in a loop to see output every few seconds. This can be helpful: have it running while you click around the site a bit to generate some traffic.
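A minimal sketch of such a loop, assuming the same per-user `ps` selection as above (adjust the sample count and interval freely):

```shell
# Hedged sketch: take a handful of samples a couple of seconds apart while you
# click around the site in another window to generate a bit of traffic.
for i in 1 2 3; do
    date
    ps -o rss=,pid=,comm= -u "$(id -un)" \
      | awk '{printf "%.1fMB\t%s\t%s\n", $1/1024, $2, $3}'
    sleep 2
done
```

On shared hosting it is worth keeping the interval modest so the monitoring loop itself doesn’t look like a runaway process.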

I know it’s a lot of information, but hope it helps!

I’m familiar with everything you’ve mentioned - I just haven’t ‘put it all together’ like this. So thank you VERY much for your time and consideration.

I didn’t think about changing users. It would be helpful to see some actual rules from DH on per-user resource allocation versus per-account. I will have to do that with symlinks, since some sites are under a primary user while others are under other users. But on this topic, I do see that when remote operations like site updates are initiated externally, all of my sites (10ish) across all users (4ish) get reported as being down. My guess is that they’re all being hammered and the account-level resource limits are kicking in.

Using ps in a loop is a good idea, as long as ps itself doesn’t get terminated as a spinning process. LOL Your command samples are VERY helpful, thanks again.

It’s troubling that every request goes through the same process of reading code from disk, simply multiplying memory consumption for each concurrent user. I understand that users are only concurrent for a few seconds and that they can be executing different code, so the life cycle of the code for each process isn’t consistent. But it would be nice to be able to core-lock specific modules, like the WordPress core and the plugins we know get hit on every transaction. I guess I’m showing ignorance of what memcached might be able to do for us. I’ll have to do some research on these WordPress-specific optimizations. But that’s tough without solid DreamHost-specific information to combine with it, so I can understand how This environment can work most effectively with That environment.

I’ll do my homework and post back here soon. Comments from DH staff would be most welcome as well.

Thanks again.
Another point on this. Again, I understand that each concurrent user will consume some MB of RAM for some number of seconds. I’m thinking a VPS with 2GB of RAM might need about 500MB or less for core functionality, leaving about 1.5GB for traffic. So with basic math, and an average visitor hit consuming about 80MB, I can expect about 18 truly concurrent users.

And with 4GB, minus the same 500MB for server resources, that leaves 3.5GB for transactions; divided by 80MB per transaction, that’s about 43 concurrent users.
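The back-of-envelope math above can be sanity-checked with shell arithmetic; the 500MB base and 80MB-per-hit figures are this thread’s estimates, not measured values:

```shell
# 2GB VPS: ~1.5GB left for traffic at ~80MB per concurrent hit
echo $(( 1500 / 80 ))   # prints 18
# 4GB VPS: ~3.5GB left for traffic at ~80MB per concurrent hit
echo $(( 3500 / 80 ))   # prints 43
```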

Is that the right way to look at this?

I understand that in a VPS we can get better resource usage with Nginx vs Apache, so between the OS and the web server that core allocation will vary a bit, maybe up to a full GB.

I also understand we’re talking about users requesting resources in the very same second, and only higher-traffic sites really face regular bombardment from 18-43 concurrent users. And with timeout settings, etc., we can expect some “concurrent” users to queue for a few seconds, so the actual supported user count is a lot higher at such peak usage.

Given what I’ve described here, for a few sites with initially low traffic, it’s seeming to me like a VPS with just 2GB RAM might serve the expected load. What I’m trying to avoid is making an assumption like that and then getting blind-sided with huge unexpected resource consumption - like getting a good deal on mobile phone service and later finding out about all of the extra little fees that weren’t considered up-front.


Uhm, this is interesting info!

So (insert head-scratching here) let’s say some idiot (AKA: me), every time he started a new instance of WordPress, instead of creating a new domain user, used a user he created way back when he had only 2 sites… now all of his sites have that same domain user…

What you are saying is that, for best practice, I should have a separate user for each domain and possibly subdomains?

Is this possibly a common thread among us users having 404 issues on the backends of our WordPress sites?

[quote=“MrJoelieC, post:4, topic:64569”]
What you are saying is that, for best practice, I should have a separate user for each domain and possibly subdomains? Is this possibly a common thread among us users having 404 issues on the backends of our WordPress sites?[/quote]

Inquiring minds wanna know.

I’m really hoping someone at DH can answer the second part of my question above about VPS memory. Specifically:

Is it reasonable to assume that in a VPS with nothing else installed, the OS, web server, and (what else?) will probably consume an average of about 0.5GB?

And, not to be too redundant, but that leads me to believe that:

  • in a 2GB VPS,
  • there is about 1.5GB for transactions,
  • and (ignoring cron and other server processes) at about 100MB per hit,
  • that means I should be able to handle up to about 15 truly concurrent users without bumping up against the allocation
  • during any given 25ms-1200ms transaction? (Stats estimated in this DH thread.)
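The bullets above reduce to one line of integer arithmetic (the 500MB base and 100MB-per-hit figures are this post’s estimates, not measured values):

```shell
# 2048MB total, ~500MB reserved for OS/web server, ~100MB per concurrent hit
echo $(( (2048 - 500) / 100 ))   # prints 15
```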

If that’s the right way to look at this then a 2GB VPS is in my near future.

I just replied to your other thread (https://discussion.dreamhost.com/thread-148870-post-192173.html#pid192173). Posting here too: a newly created VPS uses about 100MB for the basic OS and the rest is free for the applications.
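For anyone who wants to verify that baseline on their own Linux VPS, `free -m` reports memory in MB; exact column layout varies slightly between procps versions:

```shell
# Show memory usage in MB; the "used" column minus buffers/cache approximates
# what the OS and resident services actually hold.
free -m
```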

Some good discussion on this! This is a really complicated question and the real-world memory consumption will vary significantly from site to site. Some technical tidbits just to add to the confusion:

  • The underlying OS will try to re-use as much existing memory as possible, so two concurrent users will probably not use 2x as much memory as one.
  • Our existing memory limits are applied on a per-user as well as a per-account basis (different limits), but they are not applied in real time, so something that works at one moment may not work at another.
  • We tend to avoid stating exactly what our limits are because we change them regularly to adapt to changing use cases. We shoot for limits that most people will rarely, if ever, bump into, but that will protect the overall server from out of control sites.
  • Shared servers can be a great value in some cases, but the conditions can vary wildly based on the activity of other users.

I’m working with a colleague on a project where he just purchased BrandH VPS services. Frankly, I’m stunned at their low pricing for 6 cores and 6GB of RAM (Cloud Site, Business), but perhaps you can tell me what he’s not getting that we do get.

The screenshot below was taken from his dashboard, where memory, PIDs, and CPU time are tracked live. We can actually watch this site from one minute to the next. He has the option to allocate an extra 2GB for peak usage at the click of a button, for just $2. It seems too good to be true. But my point here is that this is the kind of data I’d like to see, so that perhaps during peak usage I can disable some heavy plugins and throttle down my site before it starts throwing server error 500s to visitors.

About the memory_consumed value there: at consistently under 35K, I have to wonder if they’re not including OS/webserver resource usage. Also, that’s a live WordPress site with a good number of plugins - why are we seeing hits recorded in the tens of K when my DH shared space reports similar hits at 90+MB?

I don’t intend to ask you how BrandH calculates their stats, but it looks like their RAM allocation does not include the underlying tiers. And those stats make me doubt my understanding of how memory is being utilized or reported.


That is an impressive dashboard. I wonder if that VPS is also managed or unmanaged like DreamCompute would be.

I don’t know how to answer that. There is no way a WordPress site uses anything less than 1 megabyte. Is it possible the Ks are thousands of something other than bytes? There are two Y axes, and I’m not sure what their units are.

I have started looking at monitoring tools that are useful and can be installed on a DreamHost managed VPS (without root/sudo). I found two interesting ones that I plan on testing more:

If anybody wants to help with testing them out let me know, we’ll coordinate. Once I’m done with the tests, I’ll share a brief tutorial on #howto.

I see you are both interested and active on this, so having made my point I’ll happily move on to other interests and look forward to whatever decisions are made there.

I’ll try to get more info from BrandH but don’t want to make this look like a competitive comparison. That said, their offering is extremely compelling and (for the few sites for which I need a VPS) I must consider them. I’m really hoping DH Marketing can step up to convince me that DH VPS is a superior offering.

As to testing, perhaps I can help. Please see my ticket #139215171 and contact me by email if we can help one another.


Still hoping DreamHost Marketing can help us with this. I’ve removed sites from my shared space, moved others to different users, and all of my sites are going down every day when a third-party pushes in auto-updates. I need to migrate to a VPS, preferably with DH. But I’m lacking thorough and compelling information here to make this business decision.

I have yet to get all info from competitors as described in this thread, and I have not run the scripts provided here either. So I still need to do my homework. But I’m doing that now.

I feel like I’m having to go out of my way to ask food companies for nutrition info, when we now have laws in the USA that mandate this info be clear and readily available on the package.
Is it too much to ask? I’m literally begging DH Marketing for technical information to help sell me more services.

OK, I’m whining … I’ll post what I find as possible. Thanks.

Indeed, their right-side Y axis for the blue memory-usage line is incorrectly labelled, and that usage is in MB, not K. The left side is shared by PIDs and CPU time, though there’s no clear definition of what that CPU time actually is, especially where multiple cores are involved.

Related to DH: my WP plugins send my memory usage into the 90+MB range per transaction, and with the procwatch cutoff somewhere in the 90-128MB range, my sites simply won’t survive in shared space, which is why I’m moving soon to a VPS.