Posts tagged with “infrastructure”

New CPUs! (And more?)

The CPUs and heatsink actually arrived on Friday, two days ago, and I'm just now getting around to writing the post. I don't have a whole lot to say about the upgrade process; it was pretty typical. I only really ran into two snafus. First, I slightly dropped one of the CPUs while placing it in the socket, and the corner fell right on the pins. Normally I wouldn't be too worried, but these E5s are rather big and heavy, and two or three pins were definitely bent. Thankfully, with a needle and a magnifying glass I was able to realign them, and the CPU has registered just fine. Also thankfully it was the second CPU socket, so in the event that I trashed the pins, at least the first socket would still work (AFAIK it's not possible to run a CPU in just the second socket of these boards, and I'm not too keen to find out anyway). The second snafu was that I didn't realize hyperthreading was disabled - presumably because the previous CPU was just 4c/4t - so all of my initial benchmarks are useless.

Anyway, pictures!

First was the way the heatsink seller packed the heatsink. It's in this very nice little enclosure of cardboard and styrofoam. I have actually received heatsinks in the mail that were crushed slightly and had bent fins, so seeing this is nice.

And here's all the bits laid out on top of the chassis pre-upgrade:

Aww yiss...

Now for the "And more?" ....

Well, apparently, my ESXi license only allows me to allocate 8 vCPUs per VM, which just wasn't going to cut it with 24 available. I should have known better than to configure this machine right away as a production environment, because OF COURSE I would want to play with it in different configurations with different OSes. So, what I did was re-configure my Enterprise GIS server (SFF ThinkCentre M91p) with Ubuntu 16.04 so it now runs caddy, observium, UNMS, (other misc. docker bits), plex, my SSH bouncer, and others I am probably forgetting. This frees up the C220 M3 to be more of a playground.

The first thing I did was pull the two 128GB SSDs and the 750GB laptop drive, leaving just the 600GB SAS and the two 480GB SSDs. I also pulled the Adaptec 2405 and switched back to using the onboard RAID controller, which can apparently be configured as either Intel or LSI softRAID. Server 2012r2 didn't see the Intel RAID arrays, but the LSI ones worked fine, so that's what I'm using now. I have set up the Hyper-V role but haven't tested it much yet, so that's next. I have done some remedial benchmarking, and overall it comes out a bit ahead of the ML350 G6 - not significantly, but that's to be expected. The X5660s in the G6 have a 400MHz higher ceiling, but the ML350 G6 also pulls about twice as much power (both under load and at idle) as the C220M3 does.
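To put the power difference in perspective, here's a quick perf-per-watt sketch in Python. The scores and wattages are stand-in numbers I made up for illustration (roughly matching "a bit ahead" on performance and "twice the power"), not my actual measurements:

```python
# Rough performance-per-watt comparison between the two hosts.
# The benchmark scores and wattages below are HYPOTHETICAL placeholders.
def perf_per_watt(score, watts):
    """Benchmark points delivered per watt drawn."""
    return score / watts

# Placeholder figures for illustration only:
ml350_score, ml350_watts = 100.0, 300.0   # dual X5660, assumed
c220_score,  c220_watts  = 105.0, 150.0   # dual E5-2630, assumed

ratio = perf_per_watt(c220_score, c220_watts) / perf_per_watt(ml350_score, ml350_watts)
print(round(ratio, 2))  # ~2.1x the work per watt under these assumptions
```

Even with the C220M3 only "a bit ahead" on raw score, halving the wall draw roughly doubles the work per watt.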

More testing and fiddling with Hyper-V is forthcoming.

First Wave of Disk Benchmarks

First off, I will heartily admit that these benchmarks are by no means good; I just wanted to do them quickly. They were performed in Arch with the gnome-disks tool's built-in benchmark.

All this tells us is "striped SSDs are faster than single HDDs!" What I really should do is get some good (and identical) HDDs and perform the same test, and test the SSDs standalone as well. I don't want to fiddle with it that much, so this is what we have.

It would be better to run these tests bare metal, not through a VM/Host, but seeing as I am going to be using this machine solely for VMs, it made sense to me to know what the VM performance is going to be. I don't really care about the absolute performance, since all it will tell me is how much overhead ESXi is injecting into the situation, and I don't really care about that.

In any case, I think my RAID1+0 of four 240GB drives will perform great.

Adaptec in the Cisco!

So far these posts have been severely lacking in media, so this one is going to have a ton of pictures to overcompensate.

I got home, and after talking to and congratulating my wife for completing her first day at a new job (woo!) began the process of installing the Adaptec 2405 into the C220M3. The longest part of the process was waiting for updates to finish running on a Windows 7 VM. Once that was done it was pop the lid, install the card. Done. Sort of.

The card fits just right into the PCI1 riser, you can see the two SAS connectors very conveniently close on the motherboard... but there may not be enough slack in the cables....

Yay! Cisco provided additional cabling under one of the plastic shields!

The SAS cable reaches the extra two inches with no problems. Time to boot it up...

It works! Hooray! (Ignore the 15 minutes it took me to figure out how to re-enable option ROMs in the BIOS...)

I moved my two 128GB SSDs to bays 7 and 8 for the test.

Pulled them into a striped array.

And ESXi sees it!

Up next, benchmarking!

Figuring out the servers Part II

In the past week or so since my last post, I've been chewing on the questions I listed pretty much non-stop, and I'm pretty sure I've made a decision - or series of decisions - about what to do.

I have the ML350 G6 listed, right now for $300, but who knows. I put both X5660s back in it as well as 32GB RAM (6x4s and 4x2s) leaving me with 32GB left for the C220M3. I also dropped a dual-gig ethernet card in, a 3.5" 500GB spinner, and left in two 2.5" drive trays. Hopefully it doesn't take too long to sell, but that's what I said last time.

As for the C220M3, the main issue was on deciding what CPUs to go with. Previously, I had been dead set on the v2 versions of the E5 CPUs, for no good reason other than "new!" Having now spent some time looking at benchmarks, I'm ready to loosen up a bit. The v1s are a year older and about 5-15% slower than their equivalent v2s, but they are significantly less expensive. For example: the E5-2630v2 (2.6GHz 6c/12t) retails at about $70-80 each, whereas the E5-2630v1 (2.3GHz 6c/12t) retails at half that (or less!), and if you look at the benchmark scores the v1 scores a 1379 (per core) whereas the v2 scores a 1552 - just 11% slower for not even half the cost? Yeah, that's a pretty good deal.
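Here's that v1-vs-v2 value math as a quick Python sketch. The $37.50/$75 prices are just midpoints of the rough street-price ranges above, not actual quotes:

```python
# Price/performance check on the E5-2630 v1 vs v2, using the per-core
# benchmark scores quoted above. Prices are assumed midpoints of the
# ranges mentioned (~$75 for a v2, about half that for a v1).
v1_score, v2_score = 1379, 1552
v1_price, v2_price = 37.5, 75.0  # assumed midpoints, not quotes

slowdown = 1 - v1_score / v2_score            # how much slower the v1 is
points_per_dollar_v1 = v1_score / v1_price
points_per_dollar_v2 = v2_score / v2_price

print(f"{slowdown:.1%}")  # ~11.1% slower
print(round(points_per_dollar_v1 / points_per_dollar_v2, 2))  # ~1.78x the value
```

So the v1 delivers nearly 1.8x the benchmark points per dollar, which is why I'm ready to loosen up on the "v2 only" rule.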

Unfortunately the 8c/16t E5s are still fairly pricey unless you're okay with having a very low clock. Even the 2GHz E5-2650 usually hits $60-70/per - I'd be interested in the 2.4GHz E5-2665 but they still usually run at least $80/per, and I don't have a very valid need for 16c/32t. That said, eventually the v2 8c/16ts will fall in price, and maybe in another year or so I could do another big upgrade - this is what I did for the ML350G6 in the past.

Drives are my next concern for the C220M3, and as SSDs continue to fall in price they look better and better. I saw a 1TB SSD for $230 at Best Buy the other day! The Intel soft-RAID controller onboard is not supported by ESXi except as a pass-through device, so I will need a hardware RAID controller. I have my Adaptec 2405, which, while older, does work well in ESXi and offers good enough performance at its price point. Granted, it only has one SAS connector and therefore only supports four drives, but it should be able to fit into the chassis nicely and host bays 5-8. I only have two drives so far, but ultimately I'd like to have four 240GB SSDs running in a RAID1+0. I'll test fit the 2405 in the C220M3 later this afternoon and see how it goes.
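For reference, here's what that four-drive RAID1+0 works out to, as a quick sketch. The 450 MB/s per-drive figure is an assumed ballpark for a SATA SSD, not a spec for any particular drives:

```python
# Usable capacity and rough best-case throughput of a RAID1+0 array:
# drives are paired into mirrors, then the mirrors are striped, so you
# keep half the raw capacity and scale throughput with the pair count.
def raid10(drives, size_gb, drive_mb_s):
    """Return (usable GB, rough sequential MB/s) for a RAID1+0 set."""
    pairs = drives // 2
    return pairs * size_gb, pairs * drive_mb_s

usable_gb, est_mb_s = raid10(4, 240, 450)  # 450 MB/s per SSD is assumed
print(usable_gb)  # 480 GB usable out of 960 GB raw
print(est_mb_s)   # ~900 MB/s sequential, best case
```

480GB usable with mirror redundancy and two-drive striping is plenty for my VM workloads.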

So the proposed buildout for the C220M3 is as follows:

  • 2nd heatsink - $35
  • Two E5-2630s - $85
  • 64GB RAM total - free
  • four drive sleds - $40
  • Adaptec 2405 - free
  • Four 240GB SSDs (two needed) - $110? (haven't priced it out)

Now for something completely different. I decommissioned the Optiplex 3010 as my pfsense box - I am now back to running the EdgeRouter X. Upgrading to EdgeOS 1.10.1 was a chore, but it is running great and my previous DNS woes have been resolved. I moved the i5-3245S from the 7010 into the 3010 and have set it up as my secondary desktop, sharing a kb/mouse with my primary desktop, and it's running ubuntu 18. The 7010 has the 3010's i3 and, once configured with 4GB RAM and a base OS, will be sold.

Getting started and figuring stuff out (what am I doing with these servers?)

Okay, well, I have had this blogging platform (or whatever it's called) up and running for about three months now, so I figured I might as well go ahead and post something. I don't have a good introductory post idea or a well structured project, but I have been thinking about my home lab quite a bit lately and I need to start recording some of this mess.

I have a lot of computers, new and old, super powered and super dinky. Right now what I'm most interested in organizing and planning for are the newer machines that can handle modern workloads and are actually (potentially) useful to me. What I'm looking at (specifically the "infrastructure" machines) is as follows:

  • TS140
    • Main server, some aspects change a lot. Right now it runs all my services aside from storage.
    • E3-1220v3, 4x4GB DDR3 ECC REG, 2x480GB SSD RAID0, Ubuntu 16.04.4
    • SSDs don't need to be RAID'd, or setup dual 240GB in RAID0 instead.
    • RAID0 only used recently for performance, and it's not appreciably better, so IDK.
    • Have thought about upgrading to an E3-1275v3 but it's not a huge bump in performance, mostly just going from 4c/4t to 4c/8t. Still expensive...
    • Would also like a 40g Mellanox card in it.
    • Might get another? (Have a possible lead)
  • Optiplex 7010
    • i5-3475S, 2x4GB & 2x2GB DDR3, 240GB(?) SSD, Win10
    • This box needs a life. It's a good machine but doesn't get enough use. It has been a dedicated GIS workstation as well as a dedicated 3D printing workstation but neither persisted. I think it will be my desktop linux box on a DVI KVM with Theseus
  • Cisco UCS C220 M3
    • E5-2609v2, 8x4GB DDR3 ECC REG, 600GB SAS Spinner, ESXi 6.7
    • New from defor, and also the machine that prompted this post. It is a bit older than the TS140 but I think more capable.
    • I got it with 16GB RAM but moved 4 more 4GB DIMMs from ML350G6. Will add another 32GB when I get a 2nd CPU.
    • Would like a 4c/8t or 6c/12t pair of CPUs for it, but those cost dollars. See below plans...
    • CPU Plans:
      • Plan A: Do nothing, leave it at a single E5-2609v2 for now
      • Plan B: Get another E5-2609v2 for cheap.
        • Pros? Cheap. $40 shipped.
        • Cons? Only 4c/4t 2.4GHz. Have to get a second heatsink - $40-50
      • Plan C: Get two beefier E5s. e.g. 2620v2 (6c/12t)
        • Pros? More power (can compete with TS140 more gooder), should last longer
        • Cons? Expensive. $150ish including heatsink.
      • Plan $$$: Get two E5-2697 v2s
        • Pros? Awesome.
        • Cons? $$$$$$$$$$$$$$$$$$$$$
    • Storage. Right now this thing has the single stock 600GB 10k SAS drive. It's not cutting it for a VM host.
      • Unfortunately, sleds are not as cheap as I would like. Hovering around $10-15/ea. I would like to get 8, but spending $70-110 on trays is out of the question at the moment. I'll be happy with two more if I can get them at $10/ea.
      • Media - HDDs or SSDs? I lean to SSDs as often as possible for a few reasons: speed, noise, and heat. All my bulk storage is handled by the NAS which has ~5TB available across both volumes. My VM projects do not tend to be data heavy, at the most requiring 40-60GB space for media, and even then most of that is scratch/temp. Windows VMs occupy the most space - right now even a base windows 7 vhd is like.. 40GB (and I always thin provision). I like to keep compute VMs below 8GB unless necessary. I would really like faster network storage, and I have something in the works (sneak peek: it involves 40g Mellanox/Infiniband cards with point-to-point connections). Ultimately, right now I'm thinking about starting with two 240s in either RAID1 or RAID0 (because RAID0 don't be redundant, y'all) depending on how dangerous I feel. The TS140 will remain my "production" box, so by and large I don't expect anything on the C220 to be critical in the event that a RAID0-SSD dies on me.
        • HDDs - if I can find a fourth caddy cheap enough, I would like to throw a 1TB 7.2kRPM laptop drive in there - a cheap seagate or WD will do. This would be for slow storage, e.g. backups, ISOs, VM overflow, etc, and would provide me with four tiers of storage: fast SSDs, slow 600GB 10k SAS, really slow 1TB HDD for "cool" (lukewarm?) storage, and a gig link for NFS - to be upgraded to 40g IB(?).
  • ML350 G6
    • Ugh. It's complicated. Parts are scattered in other machines. But...
    • 1 or 2 X5660s, up to 72GB RAM (18 4GB ECC REG DIMMs), Adaptec 2405 SATA RAID card + onboard RAID
    • I want to sell it, but I don't know anyone who wants it. Too big and heavy to ship, probably asking too much locally. Should pull the X5660 back from the T3500 to make it complete.
  • T3500
    • Also a bit ugh.
    • X5660 (from above), 8GB RAM, GTX265, misc SSD, two 3TB reds in RAID1
    • Newest machine with PCI slots, will be driving the new tape drive.
    • Need a standalone CPU for it, probably just 4c/8t - E5630 is less than $10 shipped.
  • Optiplex 3010 (honorable mention)
    • i3-2100 (?), 2GB DDR3, misc. HDD, HP nc630 dual Gig-E NIC, pfsense
    • Nothing about this needs to be changed, though I may switch down to an ARM based router (e.g. ER-X, if the firmware is fixed). Just wanted to mention it because it's technically a PC in my infrastructure.
  • Three DL380 G5s
    • Also from defor, have not yet tested them at all. Dual CPUs in each, and I think 16GB (2x8GB FBDIMM) in each. Need more 2.5" drives and caddies.
    • Do not need all three, will likely give at least two away, eventually all three. Depends on noise and power consumption.
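As a sanity check on the C220 M3 storage question above, here's a rough sketch of how many VMs the two 240GB SSDs could hold in RAID0 vs RAID1, using the worst-case sizes I mentioned (~40GB for a Windows VHD, ~8GB for a compute VM). Since I thin provision, reality would be better than this:

```python
# Worst-case VM counts for two 240 GB SSDs in RAID0 (striped, full
# capacity) vs RAID1 (mirrored, half capacity). Sizes are the rough
# figures from the notes above, assuming every disk image fills out.
def vm_capacity(raid0, windows_vms, drive_gb=240, win_gb=40, linux_gb=8):
    total = 2 * drive_gb if raid0 else drive_gb
    remaining = total - windows_vms * win_gb
    return remaining // linux_gb  # compute VMs that still fit

print(vm_capacity(raid0=True,  windows_vms=3))  # RAID0: 3 Windows + 45 compute VMs
print(vm_capacity(raid0=False, windows_vms=3))  # RAID1: 3 Windows + 15 compute VMs
```

Either way there's plenty of headroom for my usual mix, which is why I'm comfortable gambling on RAID0 if I'm feeling dangerous.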

So, there's that. Right now my plan is basic: go ahead and get a heatsink for the C220M3, since I know I will drop another CPU in it. Filling it to 64GB RAM (using the 4GB DIMMs I have) will leave 12 DIMMs (48GB) for the ML350G6, leaving it still viable for sale. Get a cheap CPU for the T3500 - like a $25 hex core or a $6 E-series 4/8, giving the X5660 back to the ML350G6. Then continue to eye prices for either another E5-2609v2, or splurge on a pair of 2620s, 2637s, or 2640s. If I can find a super cheap 2609 I may just jump on it, and likewise if I can get a good deal on a set of hex or octa core chips. Frankly I'd rather go ahead and invest in chips with more, faster cores, because I know I have workloads that require fast cores, and I have many that can also leverage massively parallel distribution (e.g. raster processing). I also don't want to have two cheap 2609s holding me back from upgrading - one big step is wiser than two medium steps. Just have to wait for a good deal.

Expect to see more rambling on this in the future...