How do I avoid process kill?


Certain shell operations, such as running tar or gzip on large sites or files, often result in the process being killed and my terminal disconnected. This seems to happen after about a minute of clock time.

I assume that I’m being smacked for using too much CPU over a short time.

I need to do long tars for backups, but they don’t happen very often. How can I avoid having my processes killed?


I don’t think it should be getting killed. I make backups with tar on a weekly basis, but mine are only about 200 MB max. It could be some other issue with your server… might check with support.



Was there ever an answer discovered for this question? It’s a problem that has been annoying me for well over a year now. It appears that process accounting or something like it kills processes when they use more than x CPU minutes.

For instance, I bzip2 some web log files. After a big day of traffic, the logs cannot be compressed, as the bzip2 process gets killed after it runs for a while.

I have asked support about this before, but they said there is no such mechanism in place. This is false: there is definitely something that kills my processes, and history tends to indicate it’s when bzip2 is trying to compress a larger-than-usual log. Meaning it’s probably CPU-related.


try running it under the ‘nice’ command mayhaps?


I can’t really provide any help, just wanted to say I noticed this the other day while compressing about 50 MB of data (I used the zip command). This is not a usual problem for me, but I wanted to point out it’s not only affecting the original poster.



DreamHost said:

[quote]It’s most likely procwatch killing it off. Try using nice +19 on your commands and that’ll give it a lower priority so it might not get killed.[/quote]


This has worked for me so far.
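For anyone else landing here, a minimal sketch of the nice approach (the paths and filenames below are made up for the demo, not from my actual setup):

```shell
# Stage a throwaway log file so the example is self-contained
mkdir -p /tmp/nice_demo
echo "127.0.0.1 - - [example access line]" > /tmp/nice_demo/access.log

# nice -n 19 runs the job at the lowest scheduling priority,
# which seems to keep procwatch from flagging it
nice -n 19 tar -czf /tmp/nice_demo/logs.tar.gz -C /tmp/nice_demo access.log
```

Note that `nice +19` (the older syntax in the quote above) and `nice -n 19` mean the same thing on most systems.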


Thanks for the various responses. I had been using nice but only using a value of 15. I have tried with 19 and it seems to be working okay. I’ll see how it goes for a few days.

Regarding why I am using bzip2 rather than waiting for the auto gzip: I want maximum compression on the log files so I can transfer them off to another host. My key reason for using DreamHost is to take advantage of the significantly cheaper bandwidth charges for hosting available in the US. While I am not hosting a download site, heavy HTML pages do add up. Remotely hosting sections of a site, then compressing, shipping, and merging the web logs back into the master site’s logs, works out quite a bit cheaper. However, a timezone difference of around 17 hours between sites, plus the couple of days’ lag on the automatic gzip compression, made me opt for the bzip2 process, which is automated and runs from cron.
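In case it helps anyone with a similar setup, a crontab entry for that kind of job might look like the sketch below. The schedule, username, and log path are placeholders, not my real configuration:

```shell
# m h dom mon dow  command
# Run daily at 03:30 server time, at lowest priority so procwatch
# leaves the compression job alone; bzip2 -9 gives maximum compression
30 3 * * * nice -n 19 bzip2 -9 /home/username/logs/example.com/http/access.log.0
```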