Thursday, August 11, 2016

Adventures in 10 gigabit Ethernet for a home lab

I wanted 10 gigabit Ethernet to my home three-node vSphere cluster. Perhaps excessive, but even with four gigabit ports per host, vMotion and VSAN performance was less than I wanted.  My side plan is to retire my old NetApp 2020 in favor of all-flash VSAN; the NetApp, though reliable, is dog slow, being based on 7200 RPM 500GB SATA drives.

The best option I could find was an old HP 6400CL: six CX4 ports plus a slot for an extra two, so eight ports for circa $250.  My existing HP 3400CL 48-port gigabit switch takes one of the same modules, so the two switches now have 20 gigabits between them.  The only spoiler was the immense cost of SFP+ to CX4 cables: over $300 for six.  I found low-profile single-port Mellanox PCIe NICs for $10 each.

Foolishly, I purchased the above but didn't get around to installing it for the best part of a year, and then, lo and behold, it didn't work.  I got link lights on the NIC end (and link status in ESXi), but the switch didn't see link, so no traffic passed.  Troubleshooting was going to be expensive: I could buy an Intel X520 NIC (my preferred choice, but much more expensive than the Mellanox cards), new cables, or find another CX4 switch.  I might have been more inclined to go that route were my lab at home, but driving to a colo and paying for parking while losing half a day was not attractive.
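For reference, this is roughly how I confirmed link state from the ESXi shell before blaming the switch; the vmnic name below is a hypothetical example and will vary per host:

    # List all physical NICs with their link state, speed, and driver
    esxcli network nic list
    # Detailed status for one NIC (vmnic4 is a hypothetical example)
    esxcli network nic get -n vmnic4

If the NIC reports "Up" here but the switch shows no link, the cable or the switch port is the next suspect.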

I bought an H3C S5820X and six SFP+ to SFP+ cables, which was much simpler.  The switch was $300 and the SFP+ cables $25 apiece on Amazon with Prime delivery; I could have had them for $15 each had I been prepared to wait for shipping from Hong Kong.  Installed and working, almost.  It turns out one of my NICs is bad too!  Argh.  (Yes, I swapped cables and switch ports to be sure.)  I can address that another day, but at least my VSAN and vMotion traffic has 10 gigabit now.
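For completeness, this is a minimal sketch of how the VSAN and vMotion vmkernel setup can be verified from the ESXi shell; the vmk interface number is a hypothetical example:

    # Show which vmkernel interface carries VSAN traffic
    esxcli vsan network list
    # Tag a vmkernel interface for vMotion (vmk2 is a hypothetical example)
    esxcli network ip interface tag add -i vmk2 -t VMotion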

I did try a CX4 to SFP+ connection between the new and old switches.  No dice, which makes me think that, despite such cables existing, SFP+ ports do not ordinarily support CX4 signaling at all, and that path was a rat hole.  The S5820X has 14 SFP+ ports and four 10/100/1000 ports, so I also have enough ports that adding a fourth host wouldn't be a blocker (and a fourth host would enable VSAN deduplication and erasure coding).


Postscript
I couldn't find another matching Mellanox NIC, so I bought three Intel X540-DA2 cards, complete with two SFP+ cables each, on eBay.  I swapped out the bad card and put one in each of the other hosts, so now all three have 30 gigabits into the switch.  A bit excessive, but whatever.  I like that the Mellanox can handle VSAN traffic and be left alone, while I regularly upgrade and mess about with NSX on the Intel NICs.
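As an illustration of that split, on a standard vSwitch the traffic separation can be expressed as an explicit failover order per port group; the port group and vmnic names here are hypothetical, and NSX deployments typically live on a distributed switch instead:

    # Pin the VSAN port group to the Mellanox uplink (all names hypothetical)
    esxcli network vswitch standard portgroup policy failover set -p "VSAN" -a vmnic2
    # Pin general VM traffic to the Intel uplinks
    esxcli network vswitch standard portgroup policy failover set -p "VM Network" -a vmnic3,vmnic4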
