Only 100BaseT network links?

I was trying to figure out why my gallery 2 site is so slow on Dreamhost tonight, and noticed that Dreamhost puts your home directory on a NFS fileserver.

What really surprised me was to find the server network interfaces running at 100BT. This of course would really slow down NFS performance to the file server where my data is located.

eth0: Link is up at 100 Mbps, full duplex.
eth1: Link is up at 100 Mbps, full duplex.

Of course, this is going to make NFS traffic to the file server very slow, even though the NIC itself is a Gigabit NIC:

0000:02:03.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5704 Gigabit Ethernet (rev 10)
0000:02:03.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5704 Gigabit Ethernet (rev 10)


What’s up, DreamHost? Gigabit switches are the standard in datacenters now, and have been for a while. Why haven’t you upgraded your switches?

That is really interesting! I’ve never thought to check that before, and I’m interested to hear a response from DH about that. Have you submitted a support ticket and asked them “what up”? :wink:


It depends on network load.

If only a small fraction of the 100 Mbps is being used, why go for gigabit?

There is probably gigabit between their switches, but on the final link, 100 Mbps is enough most of the time.


I work in a large data center where almost every application is NFS based, and I can tell you that 100BT is definitely not a big enough pipe. Even a small amount of NFS traffic can saturate a 100BT link. Gig-e is 125 MB/s, whereas 100BT is only 12.5 MB/s, and then you have to subtract packet overhead.
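To put the two pipe sizes side by side, here is a quick back-of-the-envelope sketch (theoretical link maxima only, before subtracting Ethernet/IP/NFS packet overhead):

```python
# Theoretical link throughput, ignoring packet and protocol overhead.
def link_throughput_mbytes(link_mbps):
    """Convert a link speed in megabits/s to megabytes/s (8 bits per byte)."""
    return link_mbps / 8

fast_ethernet = link_throughput_mbytes(100)    # 100BT  -> 12.5 MB/s
gigabit       = link_throughput_mbytes(1000)   # gig-e  -> 125.0 MB/s

print(f"100BT : {fast_ethernet} MB/s")
print(f"gig-e : {gigabit} MB/s")
print(f"ratio : {gigabit / fast_ethernet:.0f}x")
```

Even before overhead, every byte simply takes ten times longer to move through the smaller pipe.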

We also run Debian Sarge clients with Network Appliance NFS servers, and all clients are connected to gigabit switches. The fileservers themselves are etherchannel dual-gig links.

Gig-e is the standard in datacenters, and the switches have become very affordable; there is no reason to still be at 100BT.

What is your domain, coorsleftfield? Unless your server is doing at least 10 Mbit/sec of traffic, a Gbit line isn’t going to make much difference. Checking the machine and switch for rlparker, for instance, his machine averages below 1 Mbit/sec. There are a couple of machines on his switch that average almost 10 Mbit during peak times, and the biggest spikes are about 20 Mbit for very short durations.

Managing a large data center is about finding the bottlenecks and removing them. For instance, I have been adding a lot of extra web servers and MySQL servers, as well as upgrading many existing web servers to 4 GB of memory. Without access to our network graphs or usage statistics, you are diagnosing a patient you have never even seen. The file storage switches are Gbit, as they push more traffic. We do buy Gbit for our new switches, but there isn’t much point in taking down a lot of websites to replace barely loaded switches; it would just take time, money, and planning away from other more important projects.


Thanks for that response (and for checking my machine :wink: ). What you are saying makes a lot of sense, and I appreciate your description of the traffic on the switch in my case.

It seems obvious that, at least on that switch, it doesn’t make much sense to take “down” a “lot of websites” to replace it, as it is “barely loaded”. It makes me feel good that you looked into this and posted back (and that you are trying to put your resources where they are needed most). Thanks for the report on this.


My domain is It’s running Gallery 2 on I find the speed to be acceptable during off-peak hours, but horrible during the day.

Looking at average port traffic is interesting, but not as relevant as burst speed. Here is a simple example. Time how long it takes to copy a 50 megabyte file over NFS. Assuming there is no other competing traffic on the link, it should take about 5 seconds on a 100BT link and about 1 second on a gig-e link. So if these servers are only connected at 100BT, it’s going to take Gallery/Apache longer to fetch files from the NFS drive no matter how busy the link is, simply because it’s a smaller pipe, and it takes more time to transfer the bits through a smaller pipe.
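The same estimate as a tiny script. The 80% usable-bandwidth figure is my own assumption standing in for packet and protocol overhead, not a measured number:

```python
def transfer_seconds(size_mb, link_mbps, efficiency=0.8):
    """Estimated wire time for one file.

    efficiency is an assumed fraction of the theoretical link rate
    that survives packet/protocol overhead (hypothetical 80% default).
    """
    usable_mbytes_per_s = (link_mbps / 8) * efficiency
    return size_mb / usable_mbytes_per_s

print(f"50 MB over 100BT : {transfer_seconds(50, 100):.1f} s")   # ~5 s
print(f"50 MB over gig-e : {transfer_seconds(50, 1000):.1f} s")  # ~0.5 s
```

The exact seconds depend on the overhead assumption, but the 10x gap between the links does not.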

We run gig-e everywhere not because our links were averaging more than 10MB/s, but because when we do a file copy or NFS write, we want it to go fast.

The argument you’re making is basically that there is NO performance difference between 100BT ethernet and gig ethernet if the link isn’t loaded, and that just isn’t true; the gig link is 10x faster for every transfer.

Since you said you have some hosts on gig, I would like to request my hosting be moved to one of these servers, because right now, it really is unusable.

My point is that when it comes to websites, the average file size is under 50 KB. The 1/100th of a second you would save by changing the port to gigE is dwarfed by other factors such as the load on the MySQL server, web server, or file server. So when looking at the page load chain, you try to spend your time working on the slowest item. It looks like the performance hit you are seeing is caused by some unindexed queries being run on your MySQL server. I will have someone work on that first to see if we can improve your site’s performance.
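For what it’s worth, the per-file arithmetic backs this up. This sketch uses theoretical link rates and ignores NFS round trips and server-side latency, which in practice dominate for small files:

```python
def per_file_ms(size_kb, link_mbps):
    """Wire time in milliseconds for one file at the theoretical link rate."""
    bytes_per_s = link_mbps * 1_000_000 / 8
    return (size_kb * 1000) / bytes_per_s * 1000

t_100bt = per_file_ms(50, 100)    # ~4 ms for a 50 KB file on 100BT
t_gige  = per_file_ms(50, 1000)   # ~0.4 ms on gig-e
print(f"saved per 50 KB file: {t_100bt - t_gige:.1f} ms")
```

A few milliseconds saved per small file is easy to lose inside one slow, unindexed database query.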

You mention that you work in a large data center, which gigE switches are you using? (vendor/model)

I agree that it wouldn’t be much difference when you are only transferring small files.

Our current data center has a mix of Cisco 6509 switches and some HP ProCurve 5308s. We have a new datacenter under construction, and the plans I’ve seen call for all 6509s at the top level with multiple trunked 10G interconnects between them, and then Cisco 4506s as a distribution layer.

We are almost 100% Debian Sarge on HP DL380/DL360 servers, with new Woodcrest 1U systems coming in. These are all in a grid pool running various validation jobs, then writing results back to NetApp NFS servers.

I appreciate you looking into the Gallery performance issues. You guys are the best bang-for-the-buck hosting service, and I’d like to host my domains here.