Change in Plans

     Instead of backing up the existing www configuration, I am going to attempt to move the web server from a virtual server to a physical server.  The reason for this is that the 5.16 kernel is not handling interrupts as efficiently as previous kernels did, but those previous kernels are vulnerable to the “Dirty Pipe” exploit, which makes running them hazardous.

     So by moving the web server to a physical machine I’ll more than halve the number of ethernet-related interrupts the CPU has to handle, and those are the majority.  The reason is that the emulated ethernet in the virtual machine has no hardware offloading, so it has to generate an interrupt for every packet, and the CPU has to process both the interrupts for the emulated ethernet and the interrupts for the real ethernet, though the latter are fewer because the real interface does have hardware offloading.  By eliminating the emulated ethernet, I will cut the ethernet-generated interrupt traffic by more than half.
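     For anyone curious, the imbalance is easy to see in /proc/interrupts.  Below is a rough Python sketch, not something running in production here, that totals the interrupt counts per NIC; the interface names "virtio0" and "eth0" are placeholders and would need to be changed to whatever names the host actually uses.

#!/usr/bin/env python3
# Rough sketch: total interrupt counts per NIC from /proc/interrupts so the
# per-packet interrupts of the emulated ethernet can be compared against
# the physical NIC.  "virtio0" and "eth0" are placeholder interface names.

def irq_totals(devices):
    totals = {d: 0 for d in devices}
    with open("/proc/interrupts") as f:
        next(f)                              # skip the CPU header row
        for line in f:
            fields = line.split()[1:]        # drop the "NN:" IRQ label
            count = sum(int(x) for x in fields if x.isdigit())
            desc = " ".join(x for x in fields if not x.isdigit())
            for dev in devices:
                if dev in desc:
                    totals[dev] += count
    return totals

for dev, total in irq_totals(["virtio0", "eth0"]).items():
    print(dev, total, "interrupts since boot")

     Running something like this before and after the move should make the difference in per-NIC totals obvious.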

Web / Debian / Ubuntu

     The work I set out to do last night is complete, though the final configuration is not what I had initially intended.  I was not able to boot from flash because the machine’s BIOS only recognizes SATA flash drives, not PCIe drives, which is unfortunate since all the SATA slots are full and the PCIe drive provides superior performance.  I was, however, able to use it as a data drive, so the database has been moved off the Fusion I/O drive onto it, and the entire web server is on it as well.  The defective disk has been replaced and there have been no more disk errors.

     Starting around 11PM tonight, I plan on taking the web server down for about an hour to back up the new configuration.  This will impact all eskimo.com web services including Nextcloud and our social media sites Friendica and Hubzilla.

     I plan to take Ubuntu down for about an hour to move it to another physical host for load balancing purposes.

Maintenance Tonight 11pm – Sometime Saturday Morning

     This is a reminder that we will be doing some fairly extensive maintenance on Iglulik, which is the server that hosts the ftp/www, ubuntu, debian, and mint virtual machines, the MariaDB database, and the /home directory partitions.  As a consequence, all services that require any of these facilities will be unavailable for a number of hours.

     I am going to be:

     1) Replacing another failed drive in the RAID array that houses /home.  This drive failed literally two hours after I replaced its mate.  Both drives have only a handful of bad sectors (7 on the first, 2 on the second), but the firmware on these drives seems to be defective and is not re-mapping the failed sectors even though there are two full spare tracks available for this purpose (see the sketch after this list).  Had the drive not been in a RAID array I could make the OS work around it by adding the sectors to the bad block inode, but that does not work in RAID.

     2) Replacing the existing flash drive that the database is operating on with a larger but more conventional model that is fully supported by the Linux kernel, so that I’m not forced to run buggy, insecure kernels while I wait for the manufacturer to port its drivers.
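     As mentioned in item 1, here is a rough sketch of how the re-mapping failure shows up, assuming smartmontools is installed (the device path is a placeholder): if Current_Pending_Sector stays non-zero while Reallocated_Sector_Ct never increases, the firmware is not re-mapping anything.

#!/usr/bin/env python3
# Rough sketch: pull the two SMART attributes that show whether a drive's
# firmware is actually re-mapping bad sectors.  Requires smartmontools and
# root; /dev/sda is a placeholder for the suspect RAID member.

import subprocess

def smart_attrs(device, wanted=("Reallocated_Sector_Ct", "Current_Pending_Sector")):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    values = {}
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] in wanted:
            values[fields[1]] = fields[9]    # RAW_VALUE column
    return values

for name, raw in smart_attrs("/dev/sda").items():
    print(name, "=", raw)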

     The replacement flash drive is also much larger, so I can move the root file system to a partition on it, which should speed up booting significantly.  As it is now, it takes about five minutes for the OS to fully boot on this machine because of all the virtual machine start-ups.  Copying this data will be the major time factor, and I have no idea how long it will take; it may be a couple of hours or it might be five or six.

     This will also affect https://friendica.eskimo.com/, https://hubzilla.eskimo.com/, https://nextcloud.eskimo.com/, and https://www.eskimo.com/.  Incoming mail will be stopped to prevent errors due to unavailability of the /home partition during this operation.

Contact Info – Important Please Read!

     If you have not updated your contact information and the telephone number you registered with is no longer current, please contact me to update it.

     Tonight I’ve had a spate of people calling to request password changes from out-of-state numbers that did not match the client’s number on file at all, yet the callers KNEW the client’s correct number.

     Because all of the accounts they tried to change date back to 2000, I suspect this is someone working from information obtained in the root compromise back in 2000.

     If you request a password change, I will only change it if I can reach you at the number we have on file, so if you have any doubts, please make sure I have current contact info.

Post Office and Payments

     I’m having difficulty accessing my Shoreline Post Office Box because they have taken to locking the building after hours.  I’ve had a PO Box there for around three decades but this has only recently become an issue.  So if you are making a payment that is near your expiration date, please use a debit or credit card rather than mailing a check because there may be a significant delay before I can pick up the check.

More Extended Maintenance

     More extended maintenance is planned for the server that hosts the /home directories and web service this Friday, March 18th, starting at around 11PM and running until around 4AM Saturday, Pacific Daylight Time (GMT-7).  This time frame is approximate at best.

     The mate to the drive that failed developed two bad sectors right after that drive was replaced, so I guess there is some value to the recommendation that you not buy the drives for a RAID array from the same place at the same time, since they will likely come from the same manufacturing batch and thus be prone to simultaneous failure.

     There also seems to be a firmware bug with these particular drives: they have two spare tracks for sector re-allocation, but neither drive automatically re-allocated its failed sectors.  I don’t have a spare on hand, so I have one on order, but it probably won’t arrive in time for Friday’s maintenance.

     This Friday, unless the spare arrives, the primary maintenance will be installing the new flash drive and copying the existing data over to it.  I don’t know how long this copy will take, which is the main reason the time window is so uncertain.

     If the replacement drive arrives by then, I’m going to change out the drive as well, even though it only has two flawed sectors.  The firmware is supposed to handle re-assignment automatically and internally, but two drives of the same model have failed to do so, so I assume this is a firmware bug.  The new drives also have a 4x larger cache, so the swap is worthwhile from a performance standpoint anyway.

     If the drive does not arrive in time for replacement Friday, then when it does arrive I’m going to attempt to manually force re-assignment, but I don’t want to do that until I have a spare on hand, just in case I brick the drive.  At any rate, I will replace it during the next convenient Friday-night maintenance window.
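     For the curious, the manual re-assignment amounts to re-writing the suspect sectors so the firmware gets another chance to re-map them from the spare pool.  A rough sketch of the idea, using hdparm, is below; it is destructive, the device path and sector numbers are placeholders, and it is only something I would run against a drive already pulled from the array.

#!/usr/bin/env python3
# Rough sketch of the manual re-assignment idea: read each suspect LBA and,
# if the read fails, re-write it so the firmware gets a chance to re-map it.
# DESTRUCTIVE -- only for a drive already pulled from the array.  Requires
# hdparm and root; the device and sector numbers are placeholders.

import subprocess

DEVICE = "/dev/sdX"                  # placeholder: NOT a live array member
SUSPECT_SECTORS = [123456789]        # placeholder LBAs from the SMART error log

for lba in SUSPECT_SECTORS:
    read = subprocess.run(["hdparm", "--read-sector", str(lba), DEVICE],
                          capture_output=True)
    if read.returncode != 0:
        subprocess.run(["hdparm", "--yes-i-know-what-i-am-doing",
                        "--write-sector", str(lba), DEVICE])
        print("sector", lba, "read failed, re-wrote it")
    else:
        print("sector", lba, "read OK")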

Only Accomplished Part of What I Planned

     I replaced the failed drive but did not install the new flash drive, because the mounting screw was missing from the motherboard and there is a card in the way.  So instead of using the socket on the motherboard, I’m going to get a PCIe adapter card for a whole $9 and put the drive in another PCIe slot at a later date.  The failed drive has been replaced with a brand-new drive that has 4x as much cache memory as the old one, so that should help performance a tad.

Eskimo North Extended Maintenance Outage 11PM March 12th – 4AM March 13th

     Tonight I am going to perform hardware surgery on the machine that hosts home directories.  As a result, ALL services except virtual private servers will be down for a number of hours.

     The server that hosts the /home directory partition has an ailing drive in the RAID array for that partition.  It has about seven bad sectors, which, if they were HARD failures, would not be a big deal: the drive would re-map them and life would go on.  But they aren’t.  Instead, if you write them and read them back immediately they pass, but within a week or so reads start failing again.

     If the mate to this drive in the RAID array were to fail, the result would be data corruption, so I’m going to replace this drive tonight.
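     For anyone who wants to keep an eye on their own arrays, a quick Python sketch like the one below, which just reads /proc/mdstat and looks for a missing member (an "_" in the [UU] status string), is enough to flag a degraded array:

#!/usr/bin/env python3
# Rough sketch: scan /proc/mdstat and flag any md array running with a
# missing member (an "_" in the [UU...] status string).

import re

def degraded_arrays():
    bad, current = [], None
    with open("/proc/mdstat") as f:
        for line in f:
            m = re.match(r"(md\d+)\s*:", line)
            if m:
                current = m.group(1)
                continue
            status = re.search(r"\[([U_]+)\]\s*$", line)
            if current and status and "_" in status.group(1):
                bad.append((current, status.group(1)))
    return bad

for name, status in degraded_arrays():
    print(name, "is degraded:", status)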

     The other issue: when I tried to upgrade the kernel on this machine last night, the drivers for the Fusion I/O flash drive would not compile under the 5.16 kernel.  Earlier kernels have a bug that can result in either data corruption or privilege escalation, neither of which is desirable.  The Fusion I/O folks tell me it may be a while before the drivers are fixed, as there were extensive changes to the kernel.

     So I am going to replace the Fusion I/O drive with a Western Digital Black 1TB drive, which is natively supported by the Linux kernel.  This drive is much larger, so I am going to put the root file system, the boot block, and the database on it.  It will take some time to copy all this data and move the boot block to this drive.  The database copy should go quickly as it is flash-to-flash, but the rest will take several hours.  This drive also does not conflict with the Broadcom NIC, so once this is completed and the Fusion I/O drive is removed, I can restore the Broadcom NIC, which handles hardware offloading properly.

     This will affect ALL services EXCEPT for virtual private servers, which do not depend upon the site-wide /home directories.  It will affect all web services, including virtual domains and hosting packages.  I had hoped to put the new server in place, transfer these services over to it, and then fix the old one, but the security flaw being publicized in the Linux kernel no longer affords me that luxury.

     This will also affect https://friendica.eskimo.com/, https://hubzilla.eskimo.com/, https://nextcloud.eskimo.com/, and our main website https://www.eskimo.com/.

Kernel Upgrades

     Kernel upgrades are done except for four machines.  One of them is just borked: somehow the image got corrupted, so it is being restored from backups.  I could not get the drivers for the Fusion I/O drive to compile under 5.16, so I’ve ordered a more mainstream conventional SSD; it won’t provide quite the same I/O rate, but it should still be adequate.  So the main physical server is still on 5.13.19 for now.  UUCP also won’t work with a modern kernel owing to an issue with CentOS’s start-up routines wanting a feature that has been deprecated.  Zorin is borked.  And Manjaro had to be restored from backups but hasn’t been brought fully current with upgrades yet, as I am having some issues with the AUR processing.

Oops

     I was doing half a dozen things at once, got into the wrong terminal, and accidentally rebooted one of the physical servers before I had intended to and without stopping the virtual servers first.  Under that circumstance it takes forever and a day to reboot, so it will probably be 15-30 minutes before the mail system and many of the shell servers are available again.  My apologies, but I was trying to get kernel upgrades in place to address a potential security issue, and two older machines required a lot of modification to run a modern kernel.