Outage Difficulties

First, some background.

     Our old web server was overburdened, particularly when it came to RAM.  It also booted off a rotating disk, with only the MariaDB data on NVMe, so it was slow to boot.  Linux likes having a lot more RAM than it strictly needs, because it uses any memory not required by something else as I/O cache, and this improves average disk latency considerably because frequently accessed items will always be in memory.  The cache was configured as write-back, so the system never slowed waiting on writes.  The disks themselves had 512MB buffers, so even when it waited on the drive it would not have to wait for the physical write to media.

     So I decided to build a new server, and for this new server I had several things on the wish list.  One, it would address more RAM, and primarily for this reason I went with an i9-10900X CPU.  This CPU can address 256GB of RAM and has four memory channels instead of two.  It also has ten cores and twenty threads, a step up from six cores and twelve threads.  The primary limit to this CPU’s performance is cooling.  It’s rated at a TDP of 165 watts, but that is at the stock 3.6GHz clock.  One does not buy a binned ‘X’ CPU to run it at stock speed.

     Some testing revealed it was electrically stable up to about 4.7GHz, but busy at 4.7GHz it drew 360 watts of power.  I used a Noctua NH-D15 cooler, but rather than use the stock quiet fans, I used some noisy aftermarket fans that produced about twice the CFM and about ten times the noise level, but if you’ve ever been in a data center, noise is not a big concern.  With these fans, testing revealed it could keep the CPU at or below 90C at 4.6GHz, and at that speed it drew 320 watts.

     I wanted to avoid a water-based cooler because at home a leak ruins a few thousand dollars’ worth of equipment.  In a data center a leak goes into the under-floor power, and you burn down a building and go out of business.

     So I only had to give up about 2-1/2% of this CPU’s performance to avoid water cooling, not bad.  Then I wanted everything on RAID, and I wanted all the time-sensitive data on NVMe so it would go fast.  I tried to find a hardware NVMe RAID controller, but if they make such a beast I was unable to find one.  I could only find “fake RAID” devices; these work with Whenblows but not Linux.

     So I ended up going with software RAID.  The one thing I could not RAID was the EFI system partition, because it is read by the machine’s UEFI firmware, which knows nothing about Linux software RAID.  So while that was un-RAIDed, I duplicated the EFI system partition on each NVMe drive so that if one drive failed the system would still be bootable, and all I had to do to keep them in sync was modify the scripts that install a kernel to do a grub-install to both devices.
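
The kernel-install hook can be sketched roughly as follows.  The ESP mount points are assumptions (one per NVMe drive), and this dry-run only prints the commands rather than executing them:

```shell
# Dry-run sketch of keeping both EFI system partitions bootable.
# /boot/efi and /boot/efi2 are assumed mount points, one per drive's ESP.
ESP_MOUNTS="/boot/efi /boot/efi2"

CMDS=$(for esp in $ESP_MOUNTS; do
    # 'echo' instead of executing; drop the echo on a live system
    echo grub-install --target=x86_64-efi --efi-directory="$esp" --bootloader-id=ubuntu
done)
printf '%s\n' "$CMDS"
```

Running grub-install once per ESP is what lets either drive boot on its own if the other fails.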

     And it worked for a while.  Then we lost our fourth router there (fried), and at that point I decided to spring for a Juniper router.  The reason I went with this brand is that when we first moved our equipment to the co-lo at ELI, they used Junipers, and we never once had a data outage there; they were not at all easy to packet flood, which is what made it possible for us to run IRC servers there.  After Citizens bought them, they sold the Junipers and replaced them with Ciscos, and packet flooding then took out the whole co-lo center, which basically left us in a situation where either we got rid of the IRC server or they got rid of us.  Having had such a good experience with the routers there, I decided to go that route.  But it’s a command syntax I’m not entirely familiar with, and I’m still learning (it is similar to Cisco’s but not the same).

     Meanwhile I decided to use one of the Linux boxes as a router, and I used the newest server only because at the time it was the only machine with multiple interfaces.  But it was not stable routing and I did not understand why, so after a bit I moved the duty to another machine into which I just put a 1G Intel Ethernet card.  It ran for a bit, then ate its interface card and became unstable.  I had some spare cards, but they all had Realtek chipsets.  What I didn’t know about Realtek is that the Linux drivers for them are absolute crap.  They work okay at 100Mb/s, but at 1Gb/s they randomly lose carrier or cycle up and down.  So I put one of these cards in a machine and set it up to act as a router; that lasted about two days before it crashed.  I went over and found no carrier lights, but after playing with it for a while I thought, okay, this is just a bad card, and went to replace it thinking it was a 20-minute job.

     Three cards later, and now 10AM the next day, it still wasn’t working, so I drove from the co-location facility down to Re-PC and picked up an Intel-based industrial 4-port card.  These are much more robust but require multiple PCIe lanes, so you need to use a big slot, but that’s okay as I only had a wimpy graphics card that required one.  That solved the networking issue for now.  The Juniper will still be a better solution, but I could completely saturate the 1G interface, so we’re not losing any speed with this arrangement.

     But the fun and games were still not over.  I got all of the machines up and running except the new web server.  For some reason it would not automatically assemble the RAID arrays and come up online; it would go into emergency mode.  There I could type mdadm --assemble --scan and it would assemble the RAID partitions, and I could mount them and bring the machine up, but if it crashed while I wasn’t there it would not come up on its own.  I spent until 6pm trying to troubleshoot and fix it.  In the past when this has happened it has always been either an issue with the EFI system partition, and I had already re-installed grub 32 times to no avail, or a problem with the initramfs, solved by re-creating it, but neither of those was the cause, and I wasn’t successful at locating the error in the logs.

     So finally at 6pm I just re-installed Linux and resigned myself to recovering everything from backups.  I re-installed Linux and went home, and by then it was 8pm; I had been working on this for about 33 hours without sleep (I had started working on it at home before deciding to go down and swap out the network interface cards).  So I went to sleep.

     This morning I proceeded to install software, restore things from backups, and get the machine configured again.  Part of that process required a reboot, from which it did not recover.  So I drove to the co-lo thinking I had just forgotten to configure the proper boot partition in the UEFI firmware or something like that, and instead found it in the same condition it was in before I re-installed Linux.

     But this time, after a number of attempts, I caught an error message it threw that was on the screen for, I would guess, less than a tenth of a second.  What I noticed was that it started with initrd, suggesting an issue with the initramfs.  It took about ten more reboots to make out that the message was: initrd: duplicate entry in mdadm.conf file.

     So I checked, and sure enough the system had added an entry identical to one I had entered by hand.  I took the extra entry out, did a chattr +i on the file to mark it immutable so the operating system wouldn’t modify it for me again, and went home, hoping I could finish restoring it to service, but when I got home it was again dead.
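
The fix amounts to removing the exact duplicate ARRAY line and then locking the file.  Here is a small reproduction on a throwaway copy of mdadm.conf (the device names and UUIDs are made up):

```shell
# Strip exact duplicate lines from a sample mdadm.conf.
TMP=$(mktemp)
cat > "$TMP" <<'EOF'
ARRAY /dev/md0 metadata=1.2 UUID=11111111:22222222:33333333:44444444
ARRAY /dev/md1 metadata=1.2 UUID=55555555:66666666:77777777:88888888
ARRAY /dev/md0 metadata=1.2 UUID=11111111:22222222:33333333:44444444
EOF

# awk keeps the first occurrence of each line and drops exact repeats
DEDUPED=$(awk '!seen[$0]++' "$TMP")
printf '%s\n' "$DEDUPED"
rm -f "$TMP"
# On the real system, follow up with:  chattr +i /etc/mdadm/mdadm.conf
```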

     I drove back to the co-location center (and it is 25 miles each way) and found it did not have power.  So I power cycled the power supply and it came back up, but by the time I got home it was dead again.  If I move the cord around it goes on and off, so I am assuming there is a bad connection at the pin on the supply end, maybe a cold solder joint or something.  At any rate, I ordered a new supply, which should get here between 2pm-6pm tomorrow, and I will go back and replace it when it arrives.  Right now I also have one customer on this new machine, MartinMusic.com, so before I replace the supply I will try to grab the data for his website, just in case it is something else, so that I can put it on the old server until this one is solid.

     So hopefully I can get this stable, then go back to learning the Juniper syntax and get that installed.  Then I’m going to work on upgrading the old web server for other work.  Its motherboard has one bad USB port now, so I’m not really sure how long it is going to last.

Outage

     I still have not restored some services, particularly the newer web server on which a few customers’ sites reside.  I’ve been at this constantly for about 33 hours without sleep and I have reached my physical endurance limit.  It was necessary to completely re-install one system’s operating system, as something in the boot process had become corrupted and I could not figure out what.

     I will go into greater details when it’s all done, but right now I really must sleep.

Maintenance Outage

I plan to take most services off line between 11pm and midnight tonight,
March 12th, 2024, for about twenty minutes to replace a failed network
interface card in one of the servers.  Because this server provides disk
storage for most of the machines via NFS, most services will be unavailable
during that interval.

Outage

     The machine that is doing double duty as a webhost and router is ill.  I have it running in a crippled state but if it reboots it will not come up automatically.

     For those who are interested in the technical details, and may have encountered this before and can provide some hints: what is happening is that when it boots, it tries to create system users via systemd-sysusers.service, which runs systemd-sysusers as a one-shot; it does not complete, however, and times out.

     The other thing that is not working is mdmonitor, the systemd service that runs mdadm to watch over the RAID devices (everything on this machine is on RAID).

     After both of these time out, I can log in to a single-user shell, run them by hand, and they run fine.  Hence the big mystery.  I am going to go back tonight and try rebuilding the initramfs on the off chance it is broken.  If that fails I’m going to attempt an upgrade to 24.04; I did manage to get the old web server, which was doing something similar, working this way.
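
For the record, these are the sort of commands I use from the single-user shell to chase a unit that times out at boot but runs fine by hand (echoed here as a sketch rather than executed):

```shell
# Diagnostic commands for units that fail at boot only; printed, not run.
CMDS=$(
    echo journalctl -b -u systemd-sysusers.service   # boot-time log for the one-shot
    echo journalctl -b -u mdmonitor.service          # same for the RAID monitor
    echo systemd-analyze critical-chain              # ordering/timing of boot units
)
printf '%s\n' "$CMDS"
```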

     But if worst comes to worst I am going to have to re-install the machine, and that may take a while.  So service may be spotty tonight.  I apologize for that, but sometimes things just don’t give you an option.

Server Issues

    We had an issue with the web server eating itself after an upgrade introduced a library that conflicted with one I had compiled to enable the HTTP/2 protocol before Ubuntu included it in their distribution.

     I ended up having to restore this server from backups and bring it forward again, removing the offending library in advance so the restore would proceed properly.

     I upgraded the server to PHP 8.1; however, some apps which were supposedly 8.1-compatible did not work, which suggests a problem with our 8.1 install.  I am working to move existing users and apps off this server and transfer them to a new server.

     I am laying some things out differently.  In particular, some webapps, like roundcube, which are presently at https://www.eskimo.com/roundcube, will instead get their own subdomain, e.g. https://roundcube.eskimo.com/.  There are several reasons for this.  First, it allows each to exist in the root directory of its subdomain; most code doesn’t care, but some applications do.  Second, it allows each application to have its own .htaccess file so I can tailor the server environment for that specific application.  Third, it allows moving applications to different servers for load balancing.
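
A minimal sketch of what one of those per-application subdomains might look like, assuming an Apache front end (as the .htaccess mention suggests) and a made-up DocumentRoot; TLS certificate directives omitted:

```apache
<VirtualHost *:443>
    ServerName roundcube.eskimo.com
    DocumentRoot /var/www/roundcube          # assumed path
    <Directory /var/www/roundcube>
        AllowOverride All                    # let the app's own .htaccess apply
        Require all granted
    </Directory>
</VirtualHost>
```

With each app in its own vhost like this, moving one to another server for load balancing is just a DNS change plus copying the vhost.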

      I am upgrading some servers to 24.04 early just because it has better compatibility with a lot of my self-compiled libraries than 22.04, which is two years old now.  There are some elements of 24.04 that are improvements, but pretty much all the systemd bugs from 22.04 were retained.  There seems to be a new bug in which NIS reports to systemd that it is up and running about three seconds before this is actually the case.  This causes issues for applications that depend upon NIS being up first.  I’ve worked around this by adding automatic restarts to the affected services, so that if they fail to start the first time, they will restart three seconds later.
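
That workaround can be expressed as a small systemd drop-in; here is a hypothetical example for a service that depends on NIS (the unit name and path are made up):

```ini
# /etc/systemd/system/example-nis-dependent.service.d/restart.conf
# (hypothetical): if the service fails to start because NIS is not
# really up yet, retry three seconds later, by which time it is.
[Service]
Restart=on-failure
RestartSec=3
```

After adding a drop-in like this, a `systemctl daemon-reload` makes it take effect.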

Carl Jung

“The spirit of evil is fear, negation, the adversary who opposes life in its struggle for eternal duration and thwarts every great deed, who infuses into the body the poison of weakness and age through the treacherous bite of the serpent; he is the spirit of regression, who threatens us with bondage to the mother and with dissolution and extinction in the unconscious. For the hero, fear is a challenge and a task, because only boldness can deliver from fear. And if the risk is not taken, the meaning of life is somehow violated.” —C. G. Jung, Symbols of Transformation, par 551.

SSH Key Vulnerability

     A new ssh key vulnerability has been found affecting RSA keys, which are the default in many older Linux implementations.  I strongly suggest generating a new ed25519 key using the command ssh-keygen -t ed25519 and removing any RSA keys you may be using.  To do this, remove the relevant lines from your ~/.ssh/authorized_keys file.  RSA keys all start with ssh-rsa.

     Although any keys other than RSA will be safe from this particular attack, I recommend ed25519 because it is a modern elliptic-curve algorithm with deterministic signatures and no known practical attacks.  (To be clear about the quantum picture: a sufficiently large quantum computer running Shor’s algorithm would break elliptic-curve keys as well as RSA, since it makes both factoring the product of two large primes and computing discrete logarithms fast and easy; genuine quantum resistance requires the newer post-quantum algorithms.)

     After you make this new key and delete your old RSA keys, you will need to use ssh-copy-id login@hostname on each machine where you want to use ssh key authentication.
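
The cleanup step can be illustrated on a throwaway copy of authorized_keys (the keys below are fake):

```shell
# Drop every ssh-rsa line from a sample authorized_keys, keep the rest.
AUTH=$(mktemp)
cat > "$AUTH" <<'EOF'
ssh-rsa AAAAB3FAKEFAKEFAKE user@oldbox
ssh-ed25519 AAAAC3FAKEFAKE user@newbox
EOF

KEPT=$(grep -v '^ssh-rsa' "$AUTH")
printf '%s\n' "$KEPT"
rm -f "$AUTH"
```

On your real file you would edit ~/.ssh/authorized_keys in place rather than a temp copy, but the filter is the same.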

     Here is the article where I learned about this new exploit.  Unfortunately, they don’t go into detail about what constitutes a computational error, whether it is a hardware or an algorithmic error.

https://arstechnica.com/security/2023/11/hackers-can-steal-ssh-cryptographic-keys-in-new-cutting-edge-attack/amp/

Outage

Sorry it took me so long to get this back up.  I thought the machine that was serving as the router had crashed; it had not.  Rather, the device driver for the Ethernet card had unloaded.

When I tried to load it, it told me “invalid argument”.  Odd, since I hadn’t changed any arguments; in fact I had no arguments at all.

I spent about an hour and a half futzing with that and then decided to see if that driver was included with the generic kernel (a different version); it was.  I loaded the generic kernel (sub-optimal for our needs but okay until we get the new router running), and then I was able to get the card operational.  However, I had messed up the network settings by this time and had to spend another two hours figuring that out, primarily because at some point I transposed the two connectors for the WAN and LAN.

Stability or the Lack Thereof

This morning I finally figured out the source of the most recent instability (since we changed out the bad NIC).

We kept having these incidents where I’d go to the co-lo and think I had
everything working, but in minutes or hours or sometimes a few days it would
just stop talking to the Internet.

One of those occurred this morning.  I went down, looked at the settings,
nothing appeared to have changed, but it wasn’t routing.  I rebooted the
server and it started routing again.  Went back home; couldn’t ping anything.

And I’m really half asleep, and my workstation is busted on account of the
fact that the night before I tried to upgrade the OS, it failed, and when I
tried to restore from backup I could not boot afterwards.  At this point I
have had approximately two hours of sleep in the past 48 hours.

So I drove back down to the co-lo center again, and keep in mind it’s
22 miles each way and pretty close to rush hour, so not at all a pleasant
drive.  This time I rebooted again, several times, but still no route,
still the same.

So at this point I got the NOC involved.  Neither of us could ping the
other end of the wire, and that should not have been difficult since it’s
just a single Ethernet-to-Ethernet cable.  But we couldn’t, so I got the
idea of unplugging the LAN interface, and after doing that the WAN interface
immediately came up.  So this pointed to something wrong on my end; I didn’t
know what, but it had to be something on my end.

So I took a look at the routing table and it quickly became apparent what
was wrong: there were not one but TWO default routes.  Linux will not route
sanely with two conflicting default routes, so how did this come about?

Well, on the WAN interface I had the correct gateway address, but on the
LAN side I was pointed at my own machine instead of his router.  The reason
I did this is that if I had a router in between my machines and his router,
my router’s IP would be the gateway IP we point all the local machines to.

So I corrected this to the proper gateway IP, everything came up from a
routing perspective, and it has remained that way ever since.
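
The symptom is easy to reconstruct from a sample routing table (all addresses below are made up):

```shell
# Two default routes, one pointing at the wrong gateway.
TABLE='default via 203.0.113.1 dev eth0
default via 192.168.1.1 dev eth1
192.168.1.0/24 dev eth1 proto kernel scope link'

DEFAULTS=$(printf '%s\n' "$TABLE" | grep -c '^default')
echo "default routes: $DEFAULTS"
# On the live box: 'ip route show' to inspect, then something like
# 'ip route del default via 192.168.1.1' to drop the bad one.
```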

But once I got home, I got another telephone call: not able to receive
e-mail from Gmail.  At first I had the problem of not having a workstation
to use; I totally forgot about my laptop, probably sleep deprivation.

But I did remember I had an antique Dell loaded with Linux, so I fired it
up and SSH’d into the mail servers.  I found both were operational, but both
had a bunch of stopped jobs and no logged errors indicating why.  I rebooted
them and they came up and ran fine, except that the load went up to around
200 for about half an hour, then settled down to a normal below-1 load.

At this point I’m speculating that without a network, queue runs got stuck
until they exhausted memory, and then things died.

At this point I returned to my workstation.  I had restored a corrupted
root partition from backups, but after doing so it would not boot.  I
finally chased this down to the fact that I had reformatted prior to
reloading from backups, and this changed the partition UUID so that it no
longer matched the fstab file.  Fixed that up and now it’s running properly
again.
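
That failure mode is simple to reproduce in miniature (both UUIDs below are made up):

```shell
# After a reformat, the partition's real UUID no longer matches fstab.
FSTAB_UUID="0f37b9e3-1111-2222-3333-444444444444"   # what /etc/fstab records
REAL_UUID="7c1d2a44-5555-6666-7777-888888888888"    # what blkid now reports

if [ "$FSTAB_UUID" != "$REAL_UUID" ]; then
    STATUS="mismatch"   # fix: copy the UUID reported by blkid into fstab
else
    STATUS="ok"
fi
echo "fstab UUID check: $STATUS"
```

On a real system you would compare the output of blkid against the UUID= entries in /etc/fstab and update fstab to match.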

I expect to receive the new router sometime between this Friday and next
Monday.  I don’t know how long it will take me to learn how to use it well
enough to put it into service, but it is very thoroughly documented: it has
eight manuals, and the administrative manual is 3061 pages.  There are also
some free online courses, and if you want to get into it deeply you can
spend as much as $6000 on non-free training.  It supports damned near every
communication protocol known to man.