New CPUs! (And more?)

Written by John Miller

The CPUs and heatsink actually arrived on Friday, two days ago, and I'm just now getting around to writing the post. I don't have a whole lot to say about the upgrade process; it was pretty typical. I only really ran into two snafus. 1) I slightly dropped one of the CPUs while placing it in the socket, and a corner fell right on the pins. Normally I wouldn't be too worried, but these E5s are rather big and heavy, and two or three pins were definitely bent. Thankfully, with a needle and a magnifying glass I was able to realign them, and the CPU has registered just fine. Also thankfully, it was the second CPU socket, so even if I had trashed the pins, the first socket could still work (AFAIK it's not possible to populate only the second socket on these boards, and I'm not too keen to find out anyways). 2) I didn't realize that hyperthreading was disabled - presumably because the previous CPU was just 4c/4t - so all of my initial benchmarks are useless.

Anyway, pictures!

First up: the way the seller packed the heatsink. It came in this very nice little enclosure of cardboard and styrofoam. I have actually received heatsinks in the mail that were slightly crushed and had bent fins, so seeing this is nice.

And here's all the bits laid out on top of the chassis pre-upgrade:

Aww yiss...

Now for the "And more?" ....

Well, apparently, my ESXi license only allows me to allocate 8 vCPUs per VM, which just wasn't going to cut it with 24 threads available. I should have known better than to configure this machine as a production environment right away, because OF COURSE I would want to play with it in different configurations with different OSes. So I re-configured my Enterprise GIS server (SFF ThinkCentre M91p) with Ubuntu 16.04; it now runs caddy, observium, UNMS, (other misc. docker bits), plex, my SSH bouncer, and others I am probably forgetting. This frees up the C220 M3 to be more of a playground.

The first thing I did was pull the two 128GB SSDs and the 750GB laptop drive, leaving just the 600GB SAS drive and the two 480GB SSDs. I also pulled the Adaptec 2405 and switched back to the onboard RAID controller, which can apparently be configured as either Intel or LSI softRAID. Server 2012 R2 didn't see the Intel RAID arrays, but the LSI ones worked fine, so that's what I'm using now. I have set up the Hyper-V role but haven't tested it much yet, so that's next. I have done some remedial benchmarking, and overall the C220M3 comes out a bit ahead of the ML350 G6 - not significantly, but that is to be expected. The X5660s in the G6 have a 400MHz higher ceiling, but the ML350 G6 also pulls about twice as much power (both under load and at idle) as the C220M3 does.

More testing and fiddling with Hyper-V is forthcoming.

"New" Desktop

Written by John Miller

Since beginning the process of moving services to the C220 and phasing out the TS140, I have been wondering what to do with it. It's still a very capable machine, but since the C220 has replaced it as a server, it needs a new role. I knew the E3-1225v3 (TS140) and i5-6400 (desktop) were fairly closely matched, but I had never compared them directly. They are very similar CPUs, but the 1225 does win out by about 10%, likely because of its higher base and turbo clocks. Otherwise they are both 4c/4t and consume about the same amount of power, though the 6400 is a newer generation and likely a bit more efficient.

The main things that drew me to using the TS140 as a desktop were:

  • Nicer, smaller case
  • More (and faster) RAM: 32GB ECC vs 16GB
  • My 1050 Ti doesn't require a PCIe power connector, which the TS140 doesn't offer without swapping the power supply
  • The TS140 has more USB3 ports
  • Onboard Intel Soft-RAID

There were some issues in switching though:

  • The 1050ti blocks one of the SATA ports as well as the front panel USB3 connector
  • The TS140 doesn't have front-panel audio jacks (circumvented by using a USB-Audio adapter plugged into the back for headphones, speakers use the back panel connectors)
  • Fitting a 5.25" optical drive seems to be troublesome, and I would like to burn DVDs. I may have a slimline SATA DVD burner I can swap in.
  • The TS140 firmware prevents it from sleeping/hibernating

I have been switched over for a couple of days now, and it has been great. I went ahead and opted for a fresh Windows 10 Pro install, and it's been running very smoothly. In Cinebench the CPU benches 50-60 points higher than the 6400 and the GPU scores about 2-3FPS better, so I'm not missing out by changing systems. I also decided to sell off some parts of the previous desktop - specifically the CPU and motherboard (the RAM, SSD, PSU, etc. can be used elsewhere). The CPU sold very quickly, and that funded the CPUs and heatsink for the C220 (more on that soon!). I don't expect much for the motherboard (since I bought the cheapest one possible), but someone can probably find it useful, and I could get $40-50 for it.

I also moved the i5-3475S from the 7010 into the 3010, and I have configured the 3010 with Ubuntu Desktop 18.04 as my secondary desktop, which is synergy-linked to the TS140. The 7010 now has the i3-2something from the 3010 and will be sold soon; since I had to hack up the drive cage a bit to fit in a big GPU it may not sell quickly, but all the same I'd like to move it along.

First Wave of Disk Benchmarks

Written by John Miller

First off, I will heartily admit that these benchmarks are by no means good; I just wanted to get them done quickly. They were performed in Arch with the gnome-disks tool's built-in benchmark.

All this tells us is "striped SSDs are faster than single HDDs!" What I really should do is get some good (and identical) HDDs and perform the same test, and test the SSDs standalone as well. I don't want to fiddle with it that much, so this is what we have.

It would be better to run these tests bare metal rather than through a VM and host, but since I am going to be using this machine solely for VMs, it made sense to measure what the VM performance is going to be. Absolute numbers would only tell me how much overhead ESXi is injecting into the situation, and I don't really care about that.
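For anyone curious what a crude sequential-read test looks like outside of gnome-disks, here's a minimal Python sketch of the same idea. This is my own illustration, not what the post used: it reads a file in fixed-size blocks and reports MB/s, with no O_DIRECT, so the page cache will inflate repeat runs. A real tool like fio would be far more rigorous.

```python
import os
import tempfile
import time

def rough_read_throughput(path, block_size=1024 * 1024):
    """Crude sequential-read benchmark: read the file in 1 MiB blocks
    and report MB/s. No O_DIRECT, so cached re-reads will look absurdly
    fast -- treat the numbers as relative, not absolute."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / (1024 * 1024) / elapsed

# Demo against a scratch file; a real run would target a file on the
# disk (or array) under test.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(8 * 1024 * 1024))  # 8 MiB of junk data
mb_per_s = rough_read_throughput(tmp.name)
print(f"{mb_per_s:.1f} MB/s")
os.remove(tmp.name)
```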

In any case, I think my RAID1+0 of four 240GB drives will perform great.

Adaptec in the Cisco!

Written by John Miller

So far these posts have been severely lacking in media, so this one is going to have a ton of pictures to overcompensate.

I got home, and after talking to and congratulating my wife on completing her first day at a new job (woo!), began the process of installing the Adaptec 2405 into the C220M3. The longest part of the process was waiting for updates to finish running on a Windows 7 VM. Once that was done it was pop the lid, install the card. Done. Sort of.

The card fits just right into the PCI1 riser, you can see the two SAS connectors very conveniently close on the motherboard... but there may not be enough slack in the cables....

Yay! Cisco provided additional cabling under one of the plastic shields!

The SAS cable reaches the extra two inches with no problems. Time to boot it up...

It works! Hooray! (Ignore the 15 minutes it took me to figure out how to re-enable option ROMs in the BIOS...)

I moved my two 128GB SSDs to bays 7 and 8 for the test.

Pulled them into a striped array.

And ESXi sees it!

Up next, benchmarking!

Figuring out the servers Part II

Written by John Miller

In the past week or so since my last post, I've been chewing on the questions I listed pretty much non-stop, and I'm pretty sure I have made a decision - or a series of decisions - about what to do.

I have the ML350 G6 listed, right now for $300, but who knows. I put both X5660s back in it along with 32GB RAM (6x4GB and 4x2GB), leaving 32GB for the C220M3. I also dropped in a dual-gigabit Ethernet card and a 3.5" 500GB spinner, and left in two 2.5" drive trays. Hopefully it doesn't take too long to sell, but that's what I said last time.

As for the C220M3, the main issue was on deciding what CPUs to go with. Previously, I had been dead set on the v2 versions of the E5 CPUs, for no good reason other than "new!" Having now spent some time looking at benchmarks, I'm ready to loosen up a bit. The v1s are a year older and about 5-15% slower than their equivalent v2s, but they are significantly less expensive. For example: the E5-2630v2 (2.6GHz 6c/12t) retails at about $70-80 each, whereas the E5-2630v1 (2.3GHz 6c/12t) retails at half that (or less!), and if you look at the benchmark scores the v1 scores a 1379 (per core) whereas the v2 scores a 1552 - just 11% slower for not even half the cost? Yeah, that's a pretty good deal.
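A quick sanity check on that value math, using the per-core scores quoted above. The v1 price here is just my "half the v2's price" midpoint, not a real quote:

```python
# Per-core benchmark scores quoted above
v1_score, v2_score = 1379, 1552   # E5-2630 v1 vs E5-2630 v2

# Rough prices: v2 at the middle of the $70-80 range, v1 at half that
v2_price = 75.0
v1_price = v2_price / 2

slowdown = 1 - v1_score / v2_score
print(f"v1 is about {slowdown:.0%} slower per core")

# Score per dollar: the v1 wins handily at these prices
print(f"v1: {v1_score / v1_price:.1f} points per dollar")
print(f"v2: {v2_score / v2_price:.1f} points per dollar")
```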

Unfortunately the 8c/16t E5s are still fairly pricey unless you're okay with a very low clock. Even the 2GHz E5-2650 usually hits $60-70 apiece - I'd be interested in the 2.4GHz E5-2665, but those still usually run at least $80 apiece, and I don't have a real need for 16c/32t. That said, eventually the v2 8c/16ts will fall in price, and maybe in another year or so I could do another big upgrade - this is what I did with the ML350G6 in the past.

Drives are my next concern for the C220M3, and as SSDs continue to fall in price they look better and better. I saw a 1TB SSD for $230 at Best Buy the other day! The onboard Intel soft-RAID controller is not supported by ESXi except as a passthrough device, so I will need a hardware RAID controller. I have my Adaptec 2405, which, while older, does work well in ESXi and offers good enough performance at its price point. Granted, it only has one SAS connector and therefore only supports four drives, but it should fit into the chassis nicely and host bays 5-8. I only have two drives so far, but ultimately I'd like four 240GB SSDs running in a RAID1+0. I'll test-fit the 2405 in the C220M3 later this afternoon and see how it goes.
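The back-of-envelope math for that array is simple enough to sketch. The ~4x read / ~2x write scaling factors below are the usual idealized RAID1+0 rules of thumb (mirrored pairs striped together), not measurements:

```python
def raid10_summary(n_drives, size_gb):
    """Idealized RAID1+0 math: mirrored pairs striped together.
    Usable capacity is half the raw total; in the best case reads can
    be served by every drive, writes only by every mirror pair."""
    assert n_drives % 2 == 0, "RAID1+0 needs an even drive count"
    usable = n_drives * size_gb // 2
    read_scale = n_drives        # best case: all drives serve reads
    write_scale = n_drives // 2  # each write hits both halves of a mirror
    return usable, read_scale, write_scale

usable, r, w = raid10_summary(4, 240)
print(f"{usable} GB usable, ~{r}x read / ~{w}x write scaling")
```

So four 240GB SSDs net 480GB of usable space with one drive of fault tolerance per mirror pair.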

So the proposed buildout for the C220M3 is as follows:

  • 2nd heatsink - $35
  • Two E5-2630s - $85
  • 64GB RAM total - free
  • Four drive sleds - $40
  • Adaptec 2405 - free
  • Four 240GB SSDs (two needed) - $110? (haven't priced it out)
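Tallying the list above (the SSD figure is my own rough guess, as noted):

```python
# Line items from the proposed C220M3 buildout above
buildout = {
    "2nd heatsink": 35,
    "Two E5-2630s": 85,
    "64GB RAM total": 0,      # already on hand
    "Four drive sleds": 40,
    "Adaptec 2405": 0,        # already on hand
    "Four 240GB SSDs": 110,   # rough guess, not priced out yet
}
total = sum(buildout.values())
print(f"Estimated total: ${total}")  # $270
```

Not bad at all for a dual 6c/12t box with 64GB of RAM.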

Now for something completely different. I decommissioned the Optiplex 3010 as my pfSense box - I am now back to running the EdgeRouter X. Upgrading to EdgeOS 1.10.1 was a chore, but it is running great and my previous DNS woes have been resolved. I moved the i5-3475S from the 7010 into the 3010 and have set it up as my secondary desktop, sharing a kb/mouse with my primary desktop, and it's running Ubuntu 18.04. The 7010 has the 3010's i3 and, once configured with 4GB RAM and a base OS, will be sold.