Lifetime of Rails fastcgi processes

software development


My Rails fcgi processes are working ok, but after about five minutes they are exiting. Next time the site is accessed they startup again fine. Can I change the delay before fcgi exits? There is a considerable pause when the fcgi app first starts up and establishes the database connections.


I changed my dispatch.fcgi to this. Something is sending the fastcgi processes a kill -TERM after about five minutes. This makes the Rails app ignore it, and the fcgi process stays running.

[code]require File.dirname(__FILE__) + "/../config/environment"
require 'fcgi_handler'

class MyRailsFCGIHandler < RailsFCGIHandler
  # RailsFCGIHandler maps 'TERM' => :exit_now; override the handler so the
  # periodic kill -TERM is logged and ignored instead of ending the process.
  def exit_now_handler(signal)
    dispatcher_log :info, "ignoring request to terminate immediately"
  end
end

MyRailsFCGIHandler.process! nil, 50[/code]


Wouldn’t this make the other (bigger) Rails issue even worse?


I think this should help the Error 500 problem. I get Error 500 when Apache has my Rails app only halfway up. By stabilizing my fcgi process I reduce that window. I can also see that my FCGI process gets swapped out when idle, so it shouldn’t be consuming resources.

The -TERM that comes five minutes after starting is likely a source of Error 500 too: -TERM will kill the process in the middle of a request. You need to send -USR1 for a graceful shutdown. The way Rails is currently implemented, receiving the USR1 does nothing at first; only after the next web request comes in does the fcgi process exit and restart itself. DreamHost still sends my FCGI process a USR1 signal every four hours.
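To see why the deferred handler still lets the current request finish, here is a minimal standalone sketch (hypothetical code, not DreamHost’s setup or the actual fcgi_handler internals) of a USR1 handler that only sets a flag, so the loop exits between requests rather than mid-request:

```ruby
# Hypothetical sketch: a graceful USR1 handler that only records the signal,
# so the process finishes the request it is on and exits between requests
# (a bare TERM, by contrast, would kill the process mid-request).
requests_served = 0
exit_requested  = false

trap('USR1') { exit_requested = true }  # just note the request to exit

5.times do |i|
  requests_served += 1                          # pretend this is one FastCGI request
  Process.kill('USR1', Process.pid) if i == 1   # signal arrives mid-"request"
  sleep 0.05                                    # give Ruby a chance to run the trap
  break if exit_requested                       # exit only between requests
end

puts "served #{requests_served} requests, then exited gracefully"
# => served 2 requests, then exited gracefully
```

The flag-only handler is what makes the shutdown graceful; the cost, as noted above, is that an idle process won’t notice the signal until the next request arrives.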

I’m using this code now and it seems to be working. I’ve been playing around with code to do an immediate restart on -USR1, but it is complex to do in the current Rails code. I’m also trying to figure out how to write an FCGI authorizer handler.

[code]ENV['GEM_PATH'] = '/home/jonsmirl/gems' if ENV["RAILS_ENV"] == "production"

require File.dirname(__FILE__) + "/../config/environment"
require 'fcgi_handler'

class MyRailsFCGIHandler < RailsFCGIHandler
  def initialize(log_file_path = nil, gc_request_period = nil)
    super(log_file_path, gc_request_period)
    # Replace the default TERM handler so the process ignores kill -TERM.
    trap('TERM', method(:exit_now_handler).to_proc)
  end

  def exit_now_handler(signal)
    dispatcher_log :info, "ignoring request to terminate immediately"
  end
end

MyRailsFCGIHandler.process! nil, 50[/code]


Sure it would help YOU not get as many 500 errors, but wouldn’t it hog all the FCGI resources so that there aren’t as many to go around?

I’m certainly no expert in this area, but it seems like that is why they would want to kill them after 5 minutes.


There is no pool of FCGI-specific resources; FCGI is a communication protocol. FCGI processes are processes just like all the others on the system, so they are constrained by the overall limits of the operating system.

The processes get swapped from RAM out to disk when they aren’t active. A swapped-out process consumes very few resources other than some disk space. If there is a problem somewhere, it is probably because something is misconfigured; these hosts are running nowhere near the limits of Linux. For example, the one I am on is only using about 10% of its OS resources right now.

Think of it like when your laptop goes into sleep mode. Sleep mode takes more power than being turned off, but not much, and you can come back from sleep much more quickly than from power off.

I suspect these intermittent Error 500s are caused by a bug or misconfiguration somewhere that no one has found yet.


They probably want to kill them because people may not know what they are doing and create hundreds of processes. That is easy to do if there is a bug and you are unaware that it is happening.

There are also two classes of long-lived processes: ones that sit idle and ones that keep using the CPU. Things like BitTorrent may keep up a permanent CPU load, never go idle, and never get swapped out. That class should be on a dedicated host.

Another reason to kill processes is that they have memory leaks and grow without bounds. I am honoring the -USR1 kill that comes every four hours, except there is a bug in Rails where it receives the USR1 but won’t actually exit until the next web request comes in. In the middle of the night the process doesn’t exit immediately since there are no web requests. But that isn’t a big deal: if there aren’t any web requests, no one is using the app, and it can just sit idle in swap space without doing any harm.
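The `50` passed to `process! nil, 50` is Rails’ own hedge against slow growth: a `gc_request_period` that forces a GC run every 50 requests. A rough standalone sketch of that countdown idea (the class and method names here are illustrative, not the real fcgi_handler internals):

```ruby
# Sketch of a periodic-GC countdown like RailsFCGIHandler's gc_request_period.
# GcCountdown and #tick are hypothetical names for illustration only.
class GcCountdown
  def initialize(period)
    @period = period
    @requests_left = period
  end

  # Call once per request; returns true when a GC run was triggered.
  def tick
    @requests_left -= 1
    return false if @requests_left > 0
    GC.start                # reclaim leaked-but-collectable memory
    @requests_left = @period
    true
  end
end

gc = GcCountdown.new(50)
ran = (1..100).count { gc.tick }
puts ran   # GC fires at requests 50 and 100, so this prints 2
```

Periodic GC only helps with memory that is still collectable; a true leak (references that are never dropped) still grows, which is why hosts fall back to killing the process outright.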


New version of the code that makes kill -USR1 work correctly at DH.

[code]ENV['GEM_PATH'] = '/home/jonsmirl/gems' if ENV["RAILS_ENV"] == "production"

require File.dirname(__FILE__) + "/../config/environment"
require 'fcgi_handler'

class MyRailsFCGIHandler < RailsFCGIHandler
  def process!(provider = FCGI)
    # Make a note of $" so we can safely reload this instance.
    mark!

    run_gc! if gc_request_period

    # While idle between requests, leave USR1 at its default (exit now);
    # while handling a request, restore the previous deferred handler.
    usr1 = trap("USR1", "DEFAULT")
    provider.each_cgi do |cgi|
      trap("USR1", usr1)
      process_request(cgi)

      case when_ready
      when :reload  then reload!
      when :restart then restart!
      when :exit    then break
      end

      gc_countdown

      usr1 = trap("USR1", "DEFAULT")
    end

    trap("USR1", "DEFAULT")

    dispatcher_log :info, "terminated gracefully"

  rescue SystemExit => exit_error
    dispatcher_log :info, "terminated by explicit exit"

  rescue Object => fcgi_error
    # retry on errors that would otherwise have terminated the FCGI process,
    # but only if they occur more than 10 seconds apart.
    if !(SignalException === fcgi_error) && Time.now - @last_error_on > 10
      @last_error_on = Time.now
      dispatcher_error(fcgi_error, "almost killed by this error")
      retry
    end
    #dispatcher_error(fcgi_error, "killed by this error")
  end

  def exit_now_handler(signal)
    dispatcher_log :info, "ignoring request to terminate immediately"
  end
end

MyRailsFCGIHandler.process! nil, 50[/code]