My git push is being killed for excessive resource usage

Hi all. I’ve got a shared DreamHost server which holds some git repositories. I was having problems pushing to it (a commit containing a lot of binary files), getting an “index-pack died of signal 9” error. So I copied my working folder onto the repo server, so that I could do the push locally rather than over the internet. And then I discovered that DreamHost was killing the process for using (I presume) too much memory:

Yikes! One of your processes (git, pid 7746) was just killed for excessive resource usage.
Please contact DreamHost Support for details.

Does anyone know a less memory-intensive way to do a git push? Maybe a blob at a time or something? Any other helpful advice would be welcome…

EDIT: following the recommendations I was given, I built my own local git binaries with the NO_MMAP=1 option set. I made sure these came before the DreamHost-installed versions in my PATH, and “which git” shows my local version. But I get exactly the same problem. Do I need to do something to my repo to make the NO_MMAP option work, or do you think the problem is something else?
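For anyone trying the same thing: the build step was roughly `make NO_MMAP=1 prefix=$HOME/local all install` from inside an unpacked git source tree (NO_MMAP is an option in git’s Makefile; `$HOME/local` is just my example prefix), then making sure the private binaries win in PATH:

```shell
# Make the private build take precedence over the system git
# (e.g. append this to ~/.bash_profile). $HOME/local is an example prefix.
export PATH="$HOME/local/bin:$PATH"
hash -r        # forget the shell's cached command locations
which git      # should now resolve to the private build, if it exists
```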


I’m having the same issue. Git was compiled with NO_MMAP=1.[hr]
Just checked with DH technical support. They deployed a new Procwatch, and that’s why my git is breaking.

It seems the NO_MMAP=1 flag doesn’t work anymore. We need to find another way to lower git’s memory footprint.

Any ideas?


Right… I don’t have any ideas for reducing git’s memory use, I’m afraid. I emailed DH customer support and they temporarily raised my memory limit. In the meantime, though, I had fixed my problem by:

  • tarring my repo
  • copying it to the server containing the project
  • untarring it there
  • changing the project’s git config to point to the local repo instead of the DH server
  • doing the push there
  • tarring it again and copying it back to the DH server
  • untarring it and replacing the old one
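The trick that makes this work is that git will happily push over a plain filesystem path, with no ssh (and no remote git process) involved. A minimal local simulation of that step, with all paths made up:

```shell
# Simulate the workaround: push to a bare repo over a plain
# filesystem path instead of ssh. All paths here are examples.
set -e
demo="${TMPDIR:-/tmp}/dh-git-demo"
rm -rf "$demo"

git init --bare "$demo/server.git"      # stand-in for the repo on DH

git init "$demo/work"
cd "$demo/work"
git config user.email you@example.com   # identity needed to commit
git config user.name "You"
echo hello > file.txt
git add file.txt
git commit -m "first commit"

# Point the remote at a local path instead of ssh://user@host/...
git remote add origin "$demo/server.git"
git push origin HEAD:refs/heads/master
```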

Bit of a hassle :slight_smile: I think if I’d just waited for the DH tech support reply it would have been easier, but neither solution is very satisfactory.

Thanks. Yeah, that’s not a permanent option.

I really need to find a way to either reduce git’s footprint or move the repos from DH.

BTW, I was able to push by doing it commit by commit.

My problem now is with the Pull. I have some people on the team unable to get the latest version of the code :expressionless:


Ah, how do you push a single commit? Or do you mean that you rolled back to an earlier commit, did a push, then rolled forward one, did another push, and so on?

Tbh I don’t think it’s DH’s fault so much as a general problem with shared servers. Are you on a shared server (I am in this instance)? I’ve had similar problems (not with git) on shared servers in the past (not DH, as it happens), and now I would never set up on a shared server again for precisely this reason. I always get a slice server, i.e. a VPS, where there are no ‘rules’. Slices are pretty cheap, especially when you factor in even an hour of dev time a month spent dealing with this sort of thing.

I created a new branch, did a reset --hard commit by commit, and pushed each commit to master. But you can also push a commit directly.
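For what it’s worth, pushing a single commit is just `git push origin <sha>:master`, so the whole thing can be done without the branch/reset dance. A sketch of pushing a branch one commit at a time so each transfer stays small (remote and branch names are examples, and it assumes each push is a fast-forward for the server):

```shell
# Push history to the remote one commit at a time, oldest first.
# Each individual push is a fast-forward, so the server accepts it,
# and each transfer (and hence each pack) stays small.
for rev in $(git rev-list --reverse master); do
    git push origin "$rev":refs/heads/master
done
```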

Well, it’s not DH’s fault, because they don’t officially support git.

But Procwatch was updated recently. I’ve had git working on DH for two years now. It’s a PITA to have it broken. If DH doesn’t support it, that means it’s not possible to have git repos on DH anymore, because Procwatch will kill them. You can have a small repo, but if it grows a bit, and your team does too… boom.

That’s life.

yeah that does suck.

Btw, it would be nice to remove this page from DH’s wiki:

Since git doesn’t work with NO_MMAP=1 anymore, this would save newcomers a lot of trouble.

Create an account on the wiki (it’s open to all) and edit the page to be more accurate. Removing it entirely isn’t necessary, as there’s plenty of other useful information on there…

Ok, I’m now copying the repo to my HD to repack it with 10m packs. Repacking on DH kills the git process, no kidding.
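For reference, the local repack looks something like this (the sizes are guesses; both options are documented for git-repack):

```shell
# Rewrite all packs, capping each pack file at roughly 10 MB and
# limiting the memory used for delta search. -a repacks everything,
# -d deletes the old, now-redundant packs afterwards.
git repack -a -d --max-pack-size=10m --window-memory=30m
git count-objects -v     # sanity check: shows pack count and sizes
```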

Then I’ll put the repacked repo on DH and see if it works. Damn procwatch.[hr]
Btw, unless there is a solution to get git running on DH, that wiki page really doesn’t make sense.

It’s pointless to have someone set up git on their account only to find out 10 commits later that it’s breaking.

Let’s see how it goes.

Out of curiosity, how large of a repository are you dealing with?

Approx. 2 GB.[hr]
It’s not much for a git repo once you put PSDs in it.

Ehhh… 2 GB is pretty damn big for a git repository. Our internal source code repository is around 1.2 GB, and that’s with 10+ years of history and about 50,000 revisions. The Linux kernel is even smaller, at barely 500 MB.

Git is not intended to be used for large files. (Linus Torvalds has said as much himself.) It’s mainly intended for managing repositories with lots of small text files (like source code). Large binary files, like Photoshop sources, aren’t its intended use, and it doesn’t deal with them well.

I too am hosting a large git repo on DH, using gitolite, and recently a pull of the repo started failing with the kill -9 error during object receive. With NO_MMAP things used to work… :frowning:

Some Googling around brought to light these git config settings, which I have applied (using git config --global).

I’m unsure of what the largest (i.e. best-performing) settings would be, but setting everything down quite small seems to be working: a test clone of the full repo succeeds, albeit more slowly than before. The kill would hit at around 70 MB transferred. I don’t know if that correlates to actual VM in use by git, but setting sizes to about 50m–60m was my first, and apparently successful, guess.

I need to research these settings. I only have a vague idea about what they actually configure.

The settings I used were (in ~/.gitconfig; the first three keys belong in the [pack] section, the last two in [core]):

[pack]
	windowMemory = 60m
	packSizeLimit = 60m
	deltaCacheSize = 50m
[core]
	packedGitWindowSize = 50m
	packedGitLimit = 50m
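If it’s useful to anyone, the same settings can be applied from the command line; per the git-config documentation these keys live under pack.* and core.* (the values are still just guesses):

```shell
# Limit memory used when building packs (pack.*) and when mapping
# existing packs into memory (core.*).
git config --global pack.windowMemory 60m
git config --global pack.packSizeLimit 60m
git config --global pack.deltaCacheSize 50m
git config --global core.packedGitWindowSize 50m
git config --global core.packedGitLimit 50m
```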

This is affecting me too! And my git repo is really small.