It's a dedicated box installed 5.5 years ago
I built the computer I'm using in 2009, so not long after this one.
Bare Metal Server Installed 2/19/2009 in Seattle @ Softlayer
CentOS 2.6.18-92.el5
SuperMicro X7QCE Intel Xeon HexCore QuadProc Sata 4Proc
The "Hexcore" part refers to Dunnington compatibility which came out after Tigerton.
This is incorrect. This motherboard only uses FB-DIMMs, which were Intel's failed attempt at a new memory standard: serial links were used to avoid the fan-out problem of a shared parallel memory bus, the goal being to allow massive amounts of memory. This particular motherboard has 24 separate FB-DIMM slots.
4x2.13GHz Intel Xeon-Tigerton (7320-Quadcore)
The 2.13 GHz is mid-range; these went from 1.6 to 2.93 GHz. Also, it's not a true quad-core: it's actually two separate dual-core chips in the same package.
SuperMicro AOC-SIMSO-plus Remote Management Card
2xSuperMicro PWS-1K01-1R Power Supply
SuperMicro BPN-SAS-828TQ Backplane
Adaptec 3405 Drive Controller
Western Digital Raptor 10,000 RPM WD1500ADFD (sdb) for Database
Seagate Cheetah ST373455SS [73GB] (sda) for system
Back then, 10,000 RPM drives were fairly expensive. The motherboard ran $1,000 and the four CPUs were $1,100 apiece. The FB-DIMM memory ran about $400. This system had to cost at least $8,000, so they weren't skimping when they bought it.
The problem with this system is that it is the wrong kind. These processors are still based on Intel's old Front Side Bus technology, which connects the processors via a shared bus to the northbridge. The northbridge has four separate memory channels going out to the four FB-DIMM banks. Each bank can handle six separate FB-DIMMs, and each one can be 1, 2, 4, or 8 GB. This would allow for 192 GB of memory. These FB-DIMMs are probably 533 MHz (although they could be 667).
The northbridge can transfer 17 GB of data per second to and from memory. The FSB is clocked at 1066 MHz (technically 266 MHz quad-pumped). This FSB can move 8.5 GB per second, or half of what the memory controller can handle. However, since these are actually twin-die packages, the FSB sees them as eight separate processors. If each of those processors wanted to access memory, the demand would be 8x more than the FSB can handle. This configuration works pretty well if you have floating-point-intensive operations and want a large model held in memory, as might happen with a large CAD, engineering, or graphics file.
Unfortunately, this system does nothing like that. It makes no use of Tigerton's excellent floating-point crunching capability; instead it is memory-intensive, and the FSB is a massive logjam. To be honest, three of those four processors in this configuration are doing nothing most of the time since they are mostly waiting on memory. As some have pointed out, two separate systems, each with one processor, would actually have been faster as well as cheaper. It's very unfortunate, because clearly someone was trying to set up a robust system and ended up with something inadequate even though it was not a cheap system. Someone mentioned memory: adding memory to this system would not help, since the FSB is the problem.
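To put rough numbers on the bandwidth picture, here is a back-of-the-envelope sketch. It follows the model above of one shared FSB, and the ~4.26 GB/s per-channel figure for DDR2-533 FB-DIMMs is an assumption, not something from the original spec sheet.

```python
# Back-of-the-envelope bandwidth math for this board, assuming DDR2-533 FB-DIMMs
# and the nominal 1066 MT/s FSB. Real-world throughput would be lower.

# Front side bus: 266 MHz base clock, quad-pumped, 64 bits wide.
fsb_bw = 266e6 * 4 * 8 / 1e9
print(f"FSB bandwidth:               {fsb_bw:.1f} GB/s")   # ~8.5 GB/s

# Northbridge memory side: four FB-DIMM branches, ~4.26 GB/s each for DDR2-533.
mem_bw = 4 * 4.26
print(f"Memory controller bandwidth: {mem_bw:.1f} GB/s")   # ~17 GB/s

# Maximum capacity: 4 branches x 6 slots x 8 GB DIMMs.
print(f"Maximum memory:              {4 * 6 * 8} GB")      # 192 GB

# Eight dual-core dies (two per package, four packages) contend for that one bus,
# so a memory-bound workload leaves each die with roughly:
print(f"Per-die share of the FSB:    {fsb_bw / 8:.2f} GB/s")
```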
I assume from that it's on a 100 Mbps network.
Probably, although the motherboard actually has two Gigabit Ethernet ports, so it would top out at 2000 Mbps if both were used.
I'm not an expert on interpreting vmstat, but this sample doesn't appear too bad (i.e. it's not from a time when things are lagging badly). A CPU upgrade would provide some immediate benefits, though, as I suspect would moving images out of the db, if that hasn't been done already.
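For what it's worth, here's a quick sketch of how one might eyeball vmstat on a box like this. The column handling follows standard Linux vmstat output; the thresholds are illustrative assumptions, not numbers from the actual sample.

```python
# Rough sketch: run `vmstat 1 5`, parse the columns, and flag the ones that
# usually distinguish a CPU-bound box from a memory/IO-bound one.
# Thresholds below are illustrative assumptions only.
import subprocess

out = subprocess.run(["vmstat", "1", "5"], capture_output=True, text=True).stdout
lines = out.strip().splitlines()
header = lines[1].split()                  # e.g. r b swpd free ... us sy id wa
samples = [dict(zip(header, row.split())) for row in lines[2:]]

for s in samples:
    runq    = int(s["r"])                  # runnable processes waiting for a CPU
    blocked = int(s["b"])                  # processes blocked on IO
    iowait  = int(s.get("wa", 0))          # % CPU time stalled on IO
    idle    = int(s["id"])                 # % CPU time idle
    if runq > 8 and idle < 10:
        print("looks CPU-bound:      ", s)
    elif blocked > 2 or iowait > 20:
        print("looks IO/memory-bound:", s)
    else:
        print("looks OK:             ", s)
```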
I assume here you mean a system upgrade rather than just the CPUs.
It was a good motherboard for a particular task. Although the capability isn't being used, the drive controller can do RAID 1, 5, and 10. It's a shame that it was so far off the mark for this particular application.