Okay, the issue is kind of 'solved'. After spending a long time searching, I finally discovered that there is apparently a 'procwatch' daemon running on Dreamhost that kills processes which use 'too much' memory. By disabling gcc's -O2 switch, the compiler used less memory and the build worked.
I just want to say that the people at Dreamhost should be much more open about this, and should put a warning or something in their wiki. I completely understand 'why' it is done, and I would implement some mechanism myself if I were running a host like this, but I have a problem with 'how' it is implemented: in a cloud of secrecy (I suppose that is not intentional).
A friend and I spent a long time figuring out what was going on, since it generated all kinds of random errors while compiling... Could it at least be arranged that the daemon sends some kind of warning to the user's email address?
The result is that we recompiled gcc twice (since only a very old gcc is available on nova, we first thought that might be the cause of the internal errors), and we also recompiled a couple of dependencies with different options.
And you might say: yeah, but you know you shouldn't run jobs that use too much memory. Well, I do, but it is very difficult to guess how much memory gcc will use (compiling gcc itself, for example, gave no problems), and I also had no clue what would happen when a job did use too much.
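If the limit were documented, users could even enforce it themselves and get a visible failure instead of a mysteriously vanishing process. A minimal sketch with the shell's ulimit builtin (the 20000 KB cap below is an arbitrary illustration, not an actual Dreamhost figure, and python3 just stands in for any memory-hungry job):

```shell
# Run a job under a virtual-memory cap so exceeding it produces a
# clear failure rather than a silent kill by a watchdog daemon.
(
  ulimit -v 20000                               # cap address space at ~20 MB
  python3 -c 'x = "a" * (100 * 1024 * 1024)'    # try to grab ~100 MB
) 2>/dev/null && echo "job survived" || echo "job exceeded the cap"
```

The subshell keeps the limit from affecting your login shell; inside it, allocations beyond the cap fail with an ordinary out-of-memory error.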
And in a previous post by a user with the same problem, the answer was that he shouldn't be compiling daemons on nova. True, but that is not what I was doing: we were just compiling a piece of software to get my friend's website working (it acts as an agent, not as a daemon).
So my request, please, to keep other users from wasting their time, is to either put very clear information about procwatch on the website, or to send an email when a job is killed...
Werner Van Geit