Nginx Problems - 502 Bad Gateway Error due to crashing php-cgi

Hey everyone,

Our site is currently on a VPS using nginx as the web server (which we found is LOADS better on memory than Apache!). However, every so often our php-cgi processes suddenly crash, and our users are left with a “502 Bad Gateway” error from nginx.

Our PHP FastCGI settings are the following:
PHP_FCGI_CHILDREN = 8 (should be plenty for our traffic)
PHP_FCGI_MAX_REQUESTS = 400 (used to be 1000; we tried lowering it to see if that would help, and nada.)
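For context (my sketch, not the original poster's actual setup; the socket path, user, and binary path are assumptions): these are environment variables that php-cgi reads at startup, usually exported by whatever launches it, e.g. spawn-fcgi. The important mechanic is that the first php-cgi process forks the children and respawns them as they hit MAX_REQUESTS, but nothing respawns the parent itself if it dies, which is exactly the 502 failure mode described here.

```shell
# Illustrative launch only; socket path, user, and binary path are guesses.
export PHP_FCGI_CHILDREN=8        # workers forked by the parent php-cgi
export PHP_FCGI_MAX_REQUESTS=400  # each worker exits after this many requests
spawn-fcgi -s /home/USER/.php.sock -u USER -- /usr/bin/php5-cgi
```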

It isn’t a memory issue; we monitor memory usage and have plenty of headroom for our processes.

Reading the HTTP error logs, we see these sorts of error messages:

[code]9 connect() to unix:/home/**/.php.sock failed (111: Connection refused) while connecting to upstream, client: *****, server: ****, request: "GET *", upstream: "fastcgi://unix:/home//.php.sock:"

28898 connect() to unix:/home/***/.php.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: **, server: *, request: "GET ", upstream: "fastcgi://unix:/home//.php.sock:"[/code]
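A side note on reading these two errors (my interpretation, not from the thread): errno 111 (Connection refused) means nothing is listening on the socket at all, i.e. every php-cgi process is gone, while errno 11 (Resource temporarily unavailable) means a listener exists but its accept queue is full, i.e. the workers are alive but stalled or overloaded. The nginx side of such a setup typically looks roughly like this (socket path and names are placeholders, not the poster's real config):

```nginx
location ~ \.php$ {
    # must match the socket the FastCGI spawner binds
    fastcgi_pass  unix:/home/USER/.php.sock;
    include       fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```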

We wrote a daemon that restarts nginx when it detects the processes have crashed or these errors start appearing, but sometimes restarting the web server fails to bring the php5.cgi processes back, and our users are left without access to the site for a while.

Why in the world is PHP locking up and crashing like this, and why can’t nginx recover automatically? Is there another way to serve PHP to our users that doesn’t use php-cgi? (I’ve heard some people use php-fpm or something, but I’m not too familiar with it.) Going back to Apache is not an option, since we have decently high traffic and nginx handles it much better than Apache does.
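Since php-fpm comes up here: it addresses exactly this weakness, because its master process supervises the workers and respawns any that crash, which plain php-cgi does not do. A minimal pool sketch (file location and values are my assumptions, only mirroring the settings mentioned above) might look like:

```ini
; pool config sketch; path and values are illustrative, not recommendations
[www]
listen = /home/USER/.php.sock
pm = dynamic
pm.max_children = 8        ; plays the role of PHP_FCGI_CHILDREN
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 4
pm.max_requests = 400      ; plays the role of PHP_FCGI_MAX_REQUESTS
```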

We also use MySQL extensively, so could PHP be locking up while waiting on SQL queries? If so, why would that make PHP unstable?
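On the MySQL theory (my reasoning, not confirmed in the thread): if queries stall, each busy worker holds its FastCGI slot until PHP gives up, and with only 8 children the socket's accept queue fills up fast, producing the errno 11 case above. Capping the relevant timeouts in php.ini bounds how long a stuck query can pin a worker. These are standard directives; the values are only illustrative:

```ini
; php.ini sketch; values are illustrative, not recommendations
max_execution_time = 30      ; hard cap on script runtime
default_socket_timeout = 10  ; applies to socket reads, incl. DB connections
mysql.connect_timeout = 5    ; fail fast if MySQL is unreachable
```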

Thanks to anyone who can help… this problem has been driving us berserk for the last month or so.

I would LOVE to know what causes this. I’ve seen it happen sporadically for PHP nginx sites (even this discussion forum, once or twice!), but I’ve never been able to track down exactly what causes it. It may be associated with the nightly log rotation script, but I’m really not certain.

If you are able to figure anything out (or even make any headway), please let me know! :)

I am going to give this a try. Mine always crashes about an hour after midnight, like clockwork. It’s the only time it goes down, and I don’t know why.

But it turns out that any kind of error will make php5.cgi crash, and nginx doesn’t restart it, so we have to run a monitor to do it for us.

and here is another I will also try

and the last I will try if the other 2 do not work is


We are seeing the exact same problem. We ran Apache before, but our traffic pattern is much better suited to nginx, so we switched a few days ago. The first night, around 1 a.m., it started throwing 502s. We restarted the PHP processes and nginx and it came back just fine. Three days later the same thing happened.

The error log says:
connect() to unix:/home/nginxdf/.php.sock failed (11: Resource temporarily unavailable) while connecting to upstream,

Have you had any luck whatsoever with your attempts? We are desperate to find a reliable solution to this.

Yeah, it is working great for me now. I worked with a DreamHost tech and have a pl script running every minute via cron.

It checks whether PHP is running, and if it isn’t, it restarts it. Haven’t had a problem since :)

Here is the entire pl file.



[code]#!/bin/sh
# no php processes running, so restart them via the nginx init script
if ! pgrep -x php5.cgi > /dev/null; then
    sudo /etc/init.d/nginx stopphp
    sudo /etc/init.d/nginx startphp
fi[/code]
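For completeness, the cron side of this (every minute, as described) would be a single crontab line; the script path here is an assumption:

```shell
# crontab -e (script path is hypothetical)
* * * * * /home/USER/bin/php_check.sh >/dev/null 2>&1
```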

Are you sure that is the script in full? For one thing, it’s shell, not Perl, despite being called a pl file.

Also, the script only restarts PHP when no php processes are running at all. The last time I had this problem (this morning), I actually had four processes; they were just stalled or something. I will, however, activate this script once I can get it running properly and see if it helps.
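To cover that stalled-process case, the restart decision can't rely on process count alone. Here is a tiny sketch of the decision logic only (the function name and probe are mine, purely illustrative): in the real cron job you would feed it `$(pgrep -c -x php5.cgi)` and the result of a quick HTTP probe against a known PHP page.

```shell
# decide_restart: takes a worker count and a health-probe result ("ok"/"fail"),
# prints "restart" when php is dead OR alive-but-stalled, else "leave".
decide_restart() {
  workers=$1
  probe=$2
  if [ "$workers" -eq 0 ] || [ "$probe" != "ok" ]; then
    echo restart
  else
    echo leave
  fi
}

decide_restart 0 ok    # no workers at all -> restart
decide_restart 4 fail  # four workers present but stalled -> restart
decide_restart 4 ok    # healthy -> leave
```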

Do you think you could confirm your script against this post, please?

Last night it happened again. DreamHost said the solution was to move to Debian Squeeze and run FPM, and they helped us move. Once on Squeeze, they didn’t install FPM but instead gave us a broken script that restarts PHP every minute.

Last night it died again at 1 a.m., and this time nginx startphp didn’t do the trick. One php5.cgi process was stuck in the process list and prevented any PHP from starting up. We had to kill it to get the site back.

I’m drawing a blank on this one; I think I need to go with FPM.

Sorry for bringing back the old thread, but I am facing the same issue, with occasional logs like:

2013/05/27 20:52:24 [error] 15687#0: *916 connect() to unix:/home/shout_test/.php.sock failed (111: Connection refused) while connecting to upstream, client: , server: , request: "GET /shoutbox/start.php?key=624193435 HTTP/1.1", upstream: "fastcgi://unix:/home/shout_test/.php.sock:", host: "", referrer: ""

and crashes. Any update on that?

Just for the record, I have had this problem too and it just started last night.


  • After midnight, 502 Bad Gateway errors
  • Reboot and it comes up fine.

Running WordPress with 20+ plugins.

I know this is an old thread, but I’m having this exact issue and wondering if anyone has found a fix yet?