Slooooooow cpu

Are the vCPUs supposed to be super slow, or is there some sort of throttling going on? I need CPU speed and nothing else. I can work with as little as 5 GB of HDD and 256 MB of RAM, but the speed I’m seeing would get me nowhere fast.

According to /proc/cpuinfo, the instance has a 2.2 GHz CPU. In testing, it’s the slowest VPS instance I’ve ever seen (I’ve only worked with 4 or 5 providers, but still…). It’s even slower than one of my other VPS instances at another provider with a 2.0 GHz processor. I ran UnixBench on it, and ran one of my image optimization tests (with optipng) for benchmarking.
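In case it helps, that clock reading comes straight from procfs; a minimal check (standard Linux paths, the model string will vary by host):

```shell
# Read the advertised CPU model and current clock from procfs.
# (Standard Linux paths; the exact values differ per host.)
grep -m1 "model name" /proc/cpuinfo
grep -m1 "cpu MHz" /proc/cpuinfo
```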

Hi Shane. The vCPUs are not supposed to be slow, and there’s no throttling going on. We’re using KVM on boxes with 64 AMD cores, none of which are oversubscribed.

Can you post more details on the benchmarks you ran? I’d like to see the numbers in comparison to the other instance you tested.

Sure thing. I run two tests on every VPS node I bring online to make sure it’s up to my standards, particularly because raw CPU speed is critical to my application (usage is on-demand rather than steady, but nodes must be as responsive as possible):

Test #1: UnixBench
- higher scores are better
Test #2: time optipng -o7 Foto.png
- lower times are better, and the image is here:

All of these results are from single core VPS instances:

3.4 GHz vCPU, 768 MB RAM, SSD storage
unixbench: 2637
optipng: 18 seconds

3.3 GHz vCPU, 256 MB RAM, SSD storage
unixbench: 1831
optipng: 23 seconds

2.3 GHz vCPU, 512 MB RAM, SSD storage
unixbench: 1438
optipng: 28 seconds

2.0 GHz vCPU, 512 MB RAM, SSD storage
unixbench: 1250
optipng: 32 seconds

2.2 GHz vCPU, 1024 MB RAM, Ceph storage (DreamCompute)
unixbench: 518
optipng: 81 seconds
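For what it’s worth, normalizing each score by the advertised clock makes the gap obvious; a quick awk sketch using the numbers above:

```shell
# Divide each UnixBench score by the advertised clock speed to get
# a rough "points per GHz" figure for each instance listed above.
awk 'BEGIN {
    printf "3.4 GHz SSD : %.0f/GHz\n", 2637/3.4
    printf "3.3 GHz SSD : %.0f/GHz\n", 1831/3.3
    printf "2.3 GHz SSD : %.0f/GHz\n", 1438/2.3
    printf "2.0 GHz SSD : %.0f/GHz\n", 1250/2.0
    printf "2.2 GHz Ceph: %.0f/GHz\n", 518/2.2
}'
```

The Ceph-backed instance delivers well under half the per-GHz score of any of the others.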

I’ve been spending some time on and off looking into this for you.

The first thing that jumped out at me is that your other VPS instances are all on local SSD storage, which is going to be significantly faster than Ceph-backed storage. With Ceph, we’re aiming for desktop-drive speeds. We made the tradeoff of speed for reliability (though we do gain a speed advantage on instance boots with copy-on-write). Ceph lets us keep three copies of your data, so it’s redundant. It’s also distributed and fault-tolerant. Those are three things we love in a storage system!

The UnixBench suite does seem to spend a lot of time on file-copy tests, and systems with local SSDs are going to outperform there.
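One way to take disk out of the picture is to run only UnixBench’s CPU-bound tests (the suite’s Run script accepts individual test names). A sketch, assuming the suite is unpacked in ./UnixBench:

```shell
# Run only the CPU-bound UnixBench tests (Dhrystone and Whetstone) so
# local-disk speed doesn't dominate the index. Assumes the suite is
# unpacked in ./UnixBench; the guard keeps the script safe elsewhere.
if [ -x UnixBench/Run ]; then
    (cd UnixBench && ./Run dhry2reg whetstone-double)
else
    echo "UnixBench not found in ./UnixBench"
fi
```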

We’re also using AMD processors clocked at 2.2 GHz, which are likely to score lower than comparable Intel processors. Still, the numbers do seem lower than I’d expect, and we’ll spend some more time looking into these results.

Yeah, I’m sure that has an effect on UnixBench, and I’m not terribly concerned about storage speed; it just happens that all my current nodes have been on SSDs. Raw CPU horsepower is much more important to my workload, which is why I also run test #2, the optipng test. I’ve never seen a machine take that long on the optipng test, not even close. I expected somewhere around 25-30 seconds based on past tests, but my jaw dropped when I saw 82 seconds…

Hi Shane, I wanted to post an update for you. We’re rolling out configuration changes to the hypervisors this week that should increase CPU speed for you (we discovered some incorrect BIOS settings for power management). I tested an instance on a rebooted hypervisor and optipng ran in 42 seconds. Still not as fast as we’d like, but a big step in the right direction! And we’ll continue investigating further tweaks.

Here are my results from optipng:

[code]dhc-user@jumpbox:~$ time optipng -o7 Foto.png
OptiPNG 0.6.4: Advanced PNG optimizer.
Copyright © 2001-2010 Cosmin Truta.

** Processing: Foto.png
480x640 pixels, 4x8 bits/pixel, RGB+alpha
Reducing image to 3x8 bits/pixel, RGB
Input IDAT size = 557694 bytes
Input file size = 558159 bytes

zc = 9 zm = 9 zs = 0 f = 1 IDAT size = 445407
zc = 8 zm = 9 zs = 0 f = 1 IDAT size = 445407
zc = 9 zm = 9 zs = 1 f = 1 IDAT size = 437923
zc = 8 zm = 9 zs = 1 f = 1 IDAT size = 437923
zc = 9 zm = 9 zs = 0 f = 3 IDAT size = 431429
zc = 9 zm = 8 zs = 0 f = 3 IDAT size = 431195
zc = 9 zm = 9 zs = 1 f = 5 IDAT size = 428145

Selecting parameters:
zc = 9 zm = 9 zs = 1 f = 5 IDAT size = 428145

Output IDAT size = 428145 bytes (129549 bytes decrease)
Output file size = 428202 bytes (129957 bytes = 23.28% decrease)

real 0m42.243s
user 0m42.183s
sys 0m0.036s[/code]

I just ran the UnixBench tests and got a much more respectable 1293.5.
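To put the improvement in perspective, a quick sketch with the before-and-after numbers from this thread:

```shell
# optipng went from 81 s to 42 s, and the UnixBench index from 518 to
# 1293.5, after the power-management fix (numbers from the posts above).
awk 'BEGIN {
    printf "optipng:   81 s -> 42 s   = %.1fx faster\n", 81/42
    printf "UnixBench: 518 -> 1293.5 = %.1fx higher\n", 1293.5/518
}'
```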