
Understanding RAM use in shared space
08-11-2017, 10:09 AM
Post: #1
Understanding RAM use in shared space
I've been doing this IT thing for almost 40 years but I'm going to admit to being dense about how RAM is used in DH shared space.

I've been trying to test the limits of a WordPress site I'm building, adding plugins and watching memory usage. As I add each plugin I see memory use rise by a few hundred KB; a couple add over 1MB. Now an average hit to the admin dashboard uses just over 90MB.

So a single user/developer is hitting the site and using 90MB RAM for about 2 seconds. What happens when a second person hits the same page? Is shared space now attempting to consume 180MB? Is anything cached so that the code pulled into memory by the first is re-used by the second? How long does it remain in cache?

I know there's some fudge-space between 90MB and 128MB. I'm not going to push it but I'm trying to figure out how many users can hit that page if the first one consumes 90MB. If code is cached for multiple users and each additional user adds memory usage just for session-specific data, then is it reasonable in this scenario to guess that this one site might be able to support about 20 simultaneous users @90MB for code and about 1MB per session for each additional user?

That's one site. I have a number of domains on my account. So is that 90MB allocated from a pool available to the account? In other words: that one 90MB hit only works because no one is hitting any of the other domains. A single hit on another site might consume another 70MB or so, and I'm guessing that a total of 160MB at any given second is likely to trigger Procwatch to randomly kill one of the connections.

To respond in advance to expected comments: Yes, I know this one site is heavy with plugins. I want to move it to a VPS here at DH or elsewhere. What I'm trying to figure out is how much RAM I actually consume on an average hit, so that I know how much RAM I need to buy to support X simultaneous visitors. Without metrics, I don't want to just throw hardware at a site and hope that it stays up. I want to understand how the resources are being used and make solid decisions based on that data.

Thanks.
08-12-2017, 05:45 AM
Post: #2
RE: Understanding RAM use in shared space
(08-11-2017 10:09 AM)Starbuck Wrote:  I've been doing this IT thing for almost 40 years but I'm going to admit to being dense about how RAM is used in DH shared space. [...]

You are correct in your assumption that if a page uses X amount of RAM, that usage stacks with the number of concurrent visitors you have. This is why site optimization and caching are important.
The server will cache your site(s) based on the caching rules you have set up, and those rules are per site. Depending on the CMS you are using, there should be plugins you can add and configure to set your site's caching properties. That said, the admin dashboard of most CMSes tends to use more resources than the public-facing pages themselves, so admin-page metrics aren't the best thing to measure unless you plan on having several people working in the admin section at once.
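One quick way to confirm caching is actually kicking in is to look at the response headers. This is a minimal, offline sketch: with server-side caching (Varnish, or a WP caching plugin) in front, responses typically carry headers such as X-Cache or Age. Against a live site you would run the `curl` line shown in the comment (yourdomain.example is a placeholder); here the same filter runs against a canned response so the matching logic is visible without network access.

```shell
# Against a live site you would run:
#   curl -sI https://yourdomain.example/ | grep -iE 'x-cache|x-varnish|^age:'
# Canned response below stands in for the live headers (assumption for
# illustration only) so the grep filter itself can be demonstrated.
printf 'HTTP/1.1 200 OK\nX-Cache: HIT\nAge: 120\nContent-Type: text/html\n' \
    | grep -iE 'x-cache|x-varnish|^age:'
# -> X-Cache: HIT
# -> Age: 120
```

An `X-Cache: HIT` (or a nonzero `Age`) on repeat requests means the second visitor is being served from cache rather than re-running the full PHP stack.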

There is the DreamPress service that DreamHost offers as a managed WordPress solution, which includes server-side caching with Varnish and Memcached.
Information on DreamPress here: https://help.dreamhost.com/hc/en-us/articles/214581728

On shared hosting, your account has an overall cap on how many resources it can use at a given time, and each domain user also has its own limits within that. Because of this, it is best to split your domains across individual domain users. For example, if you had three domains examp1e.com, example_2.com, and exampl3.com, you could set them up under user1, user2, and user3:

/home/user1/examp1e.com
/home/user2/example_2.com
/home/user3/exampl3.com

With this setup, if examp1e.com received a lot of traffic and used enough resources to start being killed by procwatch, example_2.com and exampl3.com would not be affected, because they are under different users with their own separate resource limits. This assumes example_2.com and exampl3.com are receiving little to no traffic and are not contributing much to the account's overall usage. If all of your domains together use enough resources to exceed the account limits, procwatch may stop all of your scripts from running until usage drops again.
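To see how much of the account's headroom each domain user is actually consuming, you can total the resident memory (RSS) of that user's processes. A minimal sketch, run while logged in as the user in question (it uses the current login via `id -un`; substitute user1, user2, etc. when comparing users):

```shell
# Sum resident memory (RSS, reported by ps in KB) across all processes
# owned by the current shell user, and print the total in MB.
ps -u "$(id -un)" -o rss= | awk '{ sum += $1 } END { printf "%.1f MB\n", sum/1024 }'
```

Running this as each domain user gives you a per-user figure to compare against the per-user limits described above.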

One way to find out how much RAM your domain user is using is to run the "ps" shell command over SSH; ps displays information about a selection of the active processes.
Code:
ps -eo rss,pid,euser,args:100 --sort=%mem | grep -v grep | grep -i $user | awk '{printf $1/1024 "MB"; $1=""; print }'
where you would replace $user with your domain's shell username. You can also put in the specific PHP CGI process your domain runs instead of the username to get just the PHP RAM usage; e.g. if your domain uses PHP 7.0, replace $user with php70.cgi (or php56.cgi for PHP 5.6) to find out how much RAM all of the PHP scripts running at that moment are using.

Code:
ps -eo rss,pid,euser,args:100 --sort=%mem | grep -v grep | grep -i php70.cgi | awk '{printf $1/1024 "MB"; $1=""; print }'

You should get an output similar to this.
RAM, Process ID, User, Process
28.043MB 22016 12345678 php70.cgi
32.5391MB 21677 12345678 php70.cgi
30.51172MB 31822 12345678 php70.cgi


You can also run the above in a loop to see output every few seconds. This can be helpful because you can leave it running while you click around the site a bit to generate some traffic.
Code:
while true ; do ps -eo rss,pid,euser,args:100 --sort=%mem | grep -v grep | grep -i php70.cgi | awk '{printf $1/1024 "MB"; $1=""; print }'; sleep 5; done
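A bounded variant of that loop can also log timestamped totals to a file, so spikes can be matched against specific page hits afterwards. This sketch assumes the same php70.cgi process name as above; ram.log is an arbitrary filename, and the three-sample count is just for illustration (use `while true` for continuous monitoring):

```shell
# Take three timestamped samples of total PHP RAM (RSS summed across all
# php70.cgi processes, converted from KB to MB) and append them to ram.log.
# If no php70.cgi processes exist, the total prints as 0.0.
for i in 1 2 3; do
    rss="$(ps -C php70.cgi -o rss= 2>/dev/null | awk '{ s += $1 } END { printf "%.1f", s/1024 }')"
    printf '%s %s MB\n' "$(date '+%H:%M:%S')" "$rss"
    sleep 2
done >> ram.log
```

Afterwards, `sort -k2 -n ram.log | tail -1` shows the peak sample, which is the number that matters for sizing.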

I know it's a lot of information, but hope it helps!
08-14-2017, 02:12 PM (This post was last modified: 08-14-2017 02:43 PM by Starbuck.)
Post: #3
RE: Understanding RAM use in shared space
I'm familiar with everything you've mentioned - I just haven't 'put it all together' like this. So thank you VERY much for your time and consideration.

I didn't think about changing users. It would be helpful to see some actual rules from DH on per-user resource allocation versus per-account. I will have to do that with symlinks, since some sites are under a primary user while others are under other users. But on this topic I do see that when remote operations like site updates are initiated externally, all of my sites (10ish) across all users (4ish) get reported as being down. My guess is that they're all being hammered and the account-level resource limits are kicking in.

Using ps in a loop is a good idea, as long as ps itself doesn't get terminated as a spinning process. LOL Your command samples are VERY helpful, thanks again.

It's troubling that every request needs to go through the same process of reading code from disk, simply multiplying memory consumption for each concurrent user. I understand that users are only concurrent for a few seconds and that they can be executing different code, so the life cycle of the code for each process isn't consistent. But it would be nice to be able to core-lock specific modules, like the WordPress core and the few plugins that we know get hit on every transaction. I guess I'm showing ignorance of what memcached might be able to do for us. I'll have to do some research on these WordPress-specific optimizations. But it's tough without solid DreamHost-specific information to combine with that, to understand how this environment can work most effectively with that environment.
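Worth noting: the "keep hot code in memory" idea is largely what PHP's OPcache does; it keeps compiled script bytecode in shared memory so each hit doesn't re-parse plugin code from disk (Memcached, by contrast, caches data/objects, not code). A hedged check for whether a PHP build exposes OPcache; this probes the CLI binary, and the FastCGI configuration on the server may differ:

```shell
# Check whether the local php CLI exposes OPcache (opcache_get_status is
# part of the OPcache extension). This is only indicative: the web
# server's php-cgi/FPM build can be configured differently.
if command -v php >/dev/null 2>&1; then
    php -r 'var_export(function_exists("opcache_get_status"));'
    echo
else
    echo 'php CLI not found on this host'
fi
```

If it prints `true`, compiled WordPress code is being reused across requests even though each request still gets its own working memory for data.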

I'll do my homework and post back here soon. Comments from DH staff would be most welcome as well.

Thanks again.

Another point on this. Again, I understand that each concurrent user will consume some MB of RAM for some number of seconds. I'm thinking a VPS with 2GB of RAM might need about 500MB or less for core functionality, leaving roughly 1.5GB for traffic. So with basic math and an average visitor hit consuming about 80MB, I can expect about 18 truly concurrent users.

And with 4GB minus the same 500MB for server resources, that leaves 3.5GB for transactions, divided by 80MB/trans = about 43 concurrent users.

Is that the right way to look at this?
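The arithmetic above can be sanity-checked in one line, using the same assumed figures (roughly 500MB reserved for the OS/stack, ~80MB per average hit, both estimates rather than measured constants):

```shell
# Divide the RAM left for traffic (MB) by the RAM cost of one average
# hit to get the number of truly concurrent hits each VPS size supports.
awk 'BEGIN {
    per_hit = 80                      # MB per average page hit (estimate)
    n = split("1500 3500", free)      # MB left for traffic on 2GB / 4GB
    for (i = 1; i <= n; i++)
        printf "%.1fGB free -> ~%d concurrent hits\n", free[i]/1000, int(free[i]/per_hit)
}'
# -> 1.5GB free -> ~18 concurrent hits
# -> 3.5GB free -> ~43 concurrent hits
```

The useful part is that the per-hit figure dominates: halving the 80MB (via caching) roughly doubles the concurrent capacity at every VPS size.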

I understand that on a VPS we can get better resource usage with Nginx vs Apache, so between OS and web server that core allocation will vary a bit, maybe up to a full GB.

I also understand we're talking about users requesting resources at the very same second, and only higher-traffic sites really face regular bombardment from 18-43 concurrent users. And with timeout settings, etc., we can expect some "concurrent" users to queue for a few seconds, so the actual number of users served during such peaks is a lot higher.

Given what I've described here, for a few sites with initially low traffic, it's seeming to me like a VPS with just 2GB RAM might serve the expected load. What I'm trying to avoid is making an assumption like that and then getting blind-sided with huge unexpected resource consumption - like getting a good deal on mobile phone service and later finding out about all of the extra little fees that weren't considered up-front.

Thanks!