Thursday, March 11, 2021

Supermicro IPMI yet again

So I broke one of my ESXi hosts by installing 7.0U2 as a patch baseline instead of an upgrade baseline in Lifecycle Manager (formerly Update Manager).  It then failed to boot, getting stuck at 'loading crypto...'.

Easy fix, supposedly: boot from CD, do an upgrade install over the top of the existing install, and boot right back into the cluster.

Forums abounded with other people hitting the same thing and using iDRAC, iLO etc to mount the image and recover, so I tried to do the same with Supermicro IPMI.  That's how I installed 6.7 on these in the first place, so I knew I had the capability to mount an ISO from an SMB share.  However, this install was at home, where I was mounting the images off a Synology.  I dimly remembered having to mess with the Synology but couldn't remember just how.

After wasting a long time trying to mount the ISO off a Windows box, I gave up and installed a fresh Ubuntu VM to use instead, figuring correctly that Samba logging would help me work it out.  Supermicro's SMB client not only speaks only SMB 1 ('server min protocol = NT1') but also doesn't support any decent authentication methods, so after also adding 'ntlm auth = yes' the mount worked and I could recover.  The Samba VM got 150 random SMB hits from the Internet during its brief lifespan too, though all either zero-length log files or ones filled with auth failures.  (My IPMI ports are out on the Internet, but with ACLs to limit access to just some static IPs I have access to; I'm not completely crazy.)

[global]
server min protocol = NT1
ntlm auth = yes

[shared]
path = /home/simon/shared
valid users = simon
read only = no
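After restarting smbd with that config, it's worth a quick check from another machine before pointing the BMC at the share.  A sketch, using the share and user from the config above; 'samba-vm' is a placeholder for the Samba VM's hostname or IP:

```shell
# Sanity-check the config for syntax errors, then restart Samba.
testparm -s
sudo systemctl restart smbd

# Confirm the share answers over SMB 1 specifically, since that is
# all the Supermicro BMC speaks (-m sets the max protocol).
smbclient -m NT1 -U simon '//samba-vm/shared' -c 'ls'
```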


To get the IPMI settings of the local system from ESXi (localcli talks directly to the host, so it works even when hostd is unhappy):

localcli hardware ipmi bmc get

esxcli hardware ipmi bmc get

Wednesday, March 3, 2021

What to do when VCSA 7 runs out of space

In my case '/var/log' was full, it being one of the smaller 10GB virtual disks.

The beauty of the vCSA having 16 disks, all in separate files, is the ease with which you can grow one.

Get onto the console via virtual console or SSH and run a shell.  'df -h' will confirm the full mount point, then 'lsblk' lets you trace that back from its '/dev/mapper' mount point to an actual device like '/dev/sde'.  'e' being the 5th letter of the alphabet correlates with it being a 10GB device here and also my disk 5 in the VM settings.
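The tracing above amounts to a couple of commands (a sketch; the mapper and device names will differ per appliance):

```shell
# Find the 100% full mount point, e.g. /var/log backed by a /dev/mapper device.
df -h

# Walk the device tree to find which physical disk backs that mapper device,
# e.g. /dev/sde, which corresponds to virtual disk 5 in the VM's settings.
lsblk
```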

Now take a backup.  Of course you're already doing nightly backups, but check that they're actually working; mine hadn't been for six weeks without my noticing, due to an NFS permissions issue.

Gracefully shut down the vCSA, taking note of which host it's on.  Connect to that host, edit settings for the vCSA, and increase the size of that virtual disk.  Feel free to expand any other disks while you're there; it's not like most virtual storage isn't thin provisioned anyhow.  I took the opportunity to increase my RAM and CPU count too, as I'm not resource constrained and figured 4 vCPUs and 24GB would make my vCenter snappier.  Power back on and get a coffee while it boots and starts services.

If you get 'editing host resources is disabled because this host is managed by vCenter', you can work around it by SSHing to the host and restarting vpxa and hostd.  This will kick you out of the GUI, but once you have re-authenticated you can make changes.
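Restarting those agents on the host is just two commands (run over SSH on the ESXi host; expect your client session to drop):

```shell
# Restart the vCenter agent and the host management agent on ESXi.
/etc/init.d/vpxa restart
/etc/init.d/hostd restart
```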


Console or SSH in again, open a shell, and run the '/usr/lib/applmgmt/support/scripts/autogrow.sh' script; it should find your extra space and grow both the partition and the file system.
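For reference, the final step plus a quick check, run in the vCSA shell:

```shell
# Grow the partition and filesystem into the newly added space.
/usr/lib/applmgmt/support/scripts/autogrow.sh

# The previously full mount should now show free space.
df -h /var/log
```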

Done.