Monday, August 29, 2016

RSA SecurID Authentication Manager 8.2

To update the notes from the 8.1 post: I had a working setup with a primary and replica 8.1 AM server, and a web tier server for each.

Updating the Authentication Managers themselves was straightforward: edit the VMs to add a CD-ROM drive and mount the ISO of the 8.1 SP1 update (going from 8.1.0 directly to 8.2 is not supported).  Take a snapshot of the working 8.1 VM.  Enter the Service Console and navigate to Updates in the Maintenance menu.  Set the CD as the update source, do a scan, then select Install on the resulting option.  This got both AM servers to 8.1.1 in fairly short order.  Delete the snapshots when complete.

Repeat to go from 8.1.1 to 8.2.

In theory the web servers are similar; in practice I tried to update them to 8.1.1 and somewhere along the line things went awry: the primary went into status 'reinstall required' while the secondary became disconnected altogether.

I uninstalled the RSA software from each of them and reinstalled, complete with a new web tier package file generated from the Manager, and all was well.

Update

All wasn't well after all: replication was broken.  I found RSA DOC 49528 with a fix for it:

SSH to the primary as rsaadmin:

cd /opt/rsa/am/utils
./rsautil manage-secrets -a get com.rsa.db.dba.password
com.rsa.db.dba.password: blah blah long password here
cd ../pgsql/bin
./psql -h localhost -p 7050 -d db -U rsa_dba
Password for user rsa_dba: blah blah long password here
psql.bin (9.4.1)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-SHA, bits: 256, compression: off)
Type "help" for help.

db=# select * from rsa_rep.IMS_INSTANCE_NODE;

(returns a table of your authentication manager instances)

db=# update RSA_REP.IMS_INSTANCE set deployed_state='out_of_sync' where is_primary='FALSE';
UPDATE 1
db=# 

Then you can go back into the Operations Console, select manual sync within the replication status report, and things are fixed.
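As a follow-up sanity check, you can re-query the instance state from the same psql session before logging out. This is only a sketch based on the table and columns used in the fix above; the exact value deployed_state takes when an instance is healthy may vary by version, the point is just that the replica should no longer show 'out_of_sync':

```shell
# Reconnect to the internal database exactly as above (the rsa_dba password
# comes from 'rsautil manage-secrets -a get com.rsa.db.dba.password') and
# list the deployed state of each instance.
/opt/rsa/am/pgsql/bin/psql -h localhost -p 7050 -d db -U rsa_dba \
  -c "select is_primary, deployed_state from RSA_REP.IMS_INSTANCE;"
```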

Thursday, August 11, 2016

Adventures in 10 gigabit Ethernet for a home lab

I wanted 10 gigabit to my home three-node vSphere cluster.  Perhaps excessive, but even with four gigabit ports per host, vMotion and VSAN performance was less than I wanted.  My side plan was to retire my old NetApp 2020 in favor of all-flash VSAN, as the NetApp, though reliable, is dog slow, being based on 7200 RPM 500GB SATA drives.

The best option I could find was an old HP 6400cl, which is 6 ports of CX4 plus a slot for an extra 2 ports, so 8 ports for circa $250.  My existing 3400cl 48-port gigabit switch took one of the same modules, so now they have 20 gigabits between them.  The only spoiler was the immense cost of SFP+ to CX4 cables: over $300 for 6.  I found low-profile single-port Mellanox PCIe NICs for $10 each.

Foolishly I purchased the above but didn't get around to installing it for the best part of a year, and then lo and behold, it doesn't work.  I got link lights on the NIC end (and link status in ESXi) but the switch didn't see link, so no traffic passed.  Troubleshooting was going to be expensive: I could buy an Intel X520 NIC (my preferred choice, but much more expensive than the Mellanox ones), new cables, or find another CX4 switch.  I might have been more inclined to go that route were my lab at home, but driving to a colo and paying for parking / losing half a day = not attractive.

I bought an H3C S5820X and 6 SFP+ to SFP+ cables instead, which was much simpler.  The switch was $300 and the SFP+ cables $25 each on Amazon with Prime delivery; I could have had them for $15 had I been prepared to wait for them to come from Hong Kong.  Installed and working, almost.  Turns out one of my NICs is bad too!  Argh.  (Yes, I swapped cables/switchports to be sure.)  I can address that another day, but at least my VSAN and vMotion traffic has 10 gigabit now.

I did try a CX4 to SFP+ connection between the new and old switches - no dice, which makes me think that, despite my finding cables, SFP+ ports do not ordinarily support CX4 signaling at all and that path was a rat hole.  The 5820 has 14 SFP+ ports and 4 x 10/100/1000, so I also have enough ports that were I to add a 4th host it wouldn't be a blocker (and would enable VSAN dedupe / erasure coding).


Postscript
I couldn't find another matching Mellanox NIC, so I bought 3 Intel X540-DA2 cards, complete with 2 SFP+ cables each, on eBay.  I switched out the bad card and put one in each of the other hosts, so now all three have 30 gigabits into the switch - a bit excessive, but whatever.  I like that the Mellanox NICs can handle VSAN traffic and be left alone, while I regularly upgrade / mess about with NSX on the Intel NICs.

Thursday, August 4, 2016

Verify Cisco IOS against MD5 / SHA hash

I'm not sure if this is exactly a problem:

2911-2 uptime is 2 years, 42 weeks, 3 days, 21 hours, 7 minutes

but it seems sensible to update IOS once every few years (yes, I am joking - an actual maintenance cycle of six-monthly, or whenever there's a critical security patch) just for the many security fixes that will have accumulated.  Now, as this box is a long way from me and I don't have the time or money to travel to it, I wanted to actually verify that the bits I'd installed on the flash were good.

To verify a Cisco IOS image against its internal SHA hash:

2911-2#verify flash0:/c2900-universalk9-mz.SPA.154-3.M4.bin
Starting image verification
Hash Computation:    100% Done!
Computed Hash   SHA2: 4363F1CFF3EF05BB32E48BB49C9E03B3
                      5D7C9D91F351C095E94E82267DCC5719
                      7C5D1CC1669184B20A37CF9DD710806B
                      7388298DB7DD5B18581330D3F388B77A
                     
Embedded Hash   SHA2: 4363F1CFF3EF05BB32E48BB49C9E03B3
                      5D7C9D91F351C095E94E82267DCC5719
                      7C5D1CC1669184B20A37CF9DD710806B
                      7388298DB7DD5B18581330D3F388B77A
                     
CCO Hash        MD5 : 9F652984B1DBB1146AF25DCD5F6F5020

Digital signature successfully verified in file flash0:/c2900-universalk9-mz.SPA.154-3.M4.bin


And to check against the MD5 published on CCO:

verify /md5 (flash0:/c2900-universalk9-mz.SPA.154-3.M4.bin) = 9f652984b1dbb1146af25dcd5f6f5020



In both cases, reassuringly, the same, which made me feel better about scheduling a reboot and not waiting up for it to happen at 2:00 AM.
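A belt-and-braces check is to compute the MD5 of the image on a workstation before uploading it, so a corrupted transfer never reaches flash in the first place.  A minimal sketch, assuming a Linux box with coreutils and a local copy of the image downloaded from cisco.com (substitute your real path):

```shell
# Hypothetical local copy of the image downloaded from cisco.com.
IMG=c2900-universalk9-mz.SPA.154-3.M4.bin

# The MD5 of the whole file should match the MD5 published on cisco.com
# (the "CCO Hash" above) and the router's own "verify /md5" output.
if [ -f "$IMG" ]; then
    md5sum "$IMG"
fi
```

Note the router's 128-hex-digit "SHA2" values are computed internally as part of image signature verification, so the published MD5 is the practical hash to compare against a local download.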