Overloading a server often causes instability. This is why Distributed Denial of Service (DDoS) attacks are so popular with hackers. The server becomes so busy handling illegitimate requests that it can't handle legitimate requests fast enough.
As the legitimate requests begin to time out, those requests are retried, adding even more load. The server also has to context switch between handling network requests, I/O requests, running system processes, running user apps, etc. The situation snowballs until the server appears completely unresponsive, or even crashes.
The problem is that the servers need to be able to distinguish between legitimate and illegitimate requests, and they need to do this really, really quickly. DreamHost doesn't have an easy way to do this in general, since that would require making a lot of assumptions about the programs each of us is running on the shared servers. All combined, users are running thousands of unique applications on these servers, and new vulnerabilities are reported every day.
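One cheap heuristic for telling abusive traffic from normal traffic quickly is per-client rate limiting. Here's a minimal sketch of the idea; the thresholds and function names are hypothetical, not anything DreamHost is known to run:

```python
import time
from collections import defaultdict

# Minimal per-IP rate limiter sketch: allow at most MAX_REQUESTS from a
# client within a sliding WINDOW_SECONDS window. Thresholds are made up
# for illustration.
MAX_REQUESTS = 5
WINDOW_SECONDS = 1.0
_hits = defaultdict(list)  # client_ip -> list of request timestamps

def allow_request(client_ip, now=None):
    """Return True if this request is under the rate limit."""
    now = time.monotonic() if now is None else now
    window = _hits[client_ip]
    # Drop timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.pop(0)
    if len(window) >= MAX_REQUESTS:
        return False  # over the limit; treat as suspect traffic
    window.append(now)
    return True
```

The check is a couple of list operations per request, which is the point: the decision has to be far cheaper than actually serving the request.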
Over time, DreamHost can detect common issues like comment spamming, and then they can start setting up blacklists, etc., to block the incoming requests. This isn't easy to do, though, without side effects that also hit legitimate users. A more effective solution is to handle as much of the problem as possible at the application level. If you keep up to date on patches and make it hard for hackers and spammers to even attack your apps, then DreamHost will have a better chance of thwarting the smaller number that do gain a foothold.
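A blacklist itself can be as simple as a set lookup done before any real work happens, which is what makes it cheap enough to run on every request. A sketch, using addresses from the documentation-only 203.0.113.0/24 range purely as placeholders:

```python
# Hypothetical application-level blacklist check. The addresses are
# reserved documentation IPs, used here only as stand-ins.
BLACKLIST = {"203.0.113.7", "203.0.113.99"}

def is_blocked(client_ip):
    """Return True if the request should be rejected before any processing."""
    return client_ip in BLACKLIST
```

The hard part, as noted above, isn't the mechanism; it's keeping the list accurate so legitimate users don't land on it.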
Limiting load on a single server is also a challenging problem. How do you choose what to limit? The easiest approach is to move some of the processes (which typically implies users) to another server. That's not quick, though, and it's really hard to do when a server is already under attack. Linux and Unix do have some effective ways to adjust the priorities of processes, but again, it's hard to do when you're already under attack. It is also very time-consuming for the system administrators to deal with on a case-by-case basis.
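The priority adjustment mentioned here is the process's "nice" value, usually reached via nice(1)/renice(1), and the same knob is available programmatically. A minimal sketch of a process deprioritizing itself (which any unprivileged process may do; lowering the value again requires root):

```python
import os

# Lower the scheduling priority of the current process by raising its
# "nice" value. Higher nice = lower priority on Linux/Unix.
PRIO_BUMP = 5  # arbitrary illustrative amount

before = os.getpriority(os.PRIO_PROCESS, 0)  # 0 means "this process"
os.setpriority(os.PRIO_PROCESS, 0, before + PRIO_BUMP)
after = os.getpriority(os.PRIO_PROCESS, 0)
```

In practice an administrator would run `renice` against someone else's runaway process rather than have it deprioritize itself, which is exactly the manual, case-by-case work being described.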
I'm part of an engineering team that runs applications we have written on Linux-based app server clusters co-located at multiple data centers, and I can speak from personal experience as to how challenging a task it is to keep servers secure, yet doing their jobs, with minimal to no downtime.
We should all be glad we aren't on Windows servers (of course, many of us wouldn't be here if DH used Windows). I've had to write, deploy, and manage apps on Windows servers at previous jobs, and it feels like fighting hand-to-hand combat with one hand tied behind your back and sand thrown in your eyes. Attempting to control process priority on a Windows server is generally an exercise in futility. It's easier on Linux, but it's still not a walk in the park.