I announced some time ago that I was going to move NextCloud. In its old location, its .htaccess rules conflicted with WordPress's .htaccess handling of .well-known URLs. To resolve this, I gave NextCloud its own hostname.
The OLD URL was: https://www.eskimo.com/nextcloud/
The NEW URL is: https://nextcloud.eskimo.com/
Please update your devices and any links to reflect the NEW URL. Some devices may not properly follow the redirect.
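For anyone curious how a move like this is typically handled on the server side, here is a minimal sketch of an Apache virtual host fragment that permanently redirects the old path to the new hostname. The vhost details here are assumptions for illustration, not Eskimo's actual configuration:

```apache
# Hypothetical vhost fragment for the old location (www.eskimo.com):
# permanently redirect the old /nextcloud/ path to the new hostname,
# preserving the remainder of the request path.
<VirtualHost *:443>
    ServerName www.eskimo.com
    # ... existing WordPress configuration ...
    Redirect permanent /nextcloud/ https://nextcloud.eskimo.com/
</VirtualHost>
```

A browser following `https://www.eskimo.com/nextcloud/index.php` would be sent to `https://nextcloud.eskimo.com/index.php`; some sync clients, however, do not follow redirects, which is why updating the URL on each device is still necessary.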
I’ve also updated to version 20.0.9, which is the current stable release. I am also installing and configuring a number of new applications, including connectors to other Fediverse social media.
Kernel upgrades completed without any major incidents at 11:38 PM. Most services were back up by 11:15 PM; the remaining time was spent checking NFS mounts and the like.
https://friendica.eskimo.com/ and https://hubzilla.eskimo.com/ are available.
I am planning another kernel upgrade this Friday, April 16th, starting at 11 PM. If all goes well it should be concluded by 11:30 PM; if it does not, it may run as late as 12:30 AM.
These upgrades will affect virtually all of our services, including the https://friendica.eskimo.com/ and https://hubzilla.eskimo.com/ Fediverse social media sites.
There is an issue with the current version of GRUB that sometimes causes the bootloader update after a new kernel is installed to fail, requiring manual intervention.
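When a GRUB update fails like this, the manual fix usually amounts to finishing the interrupted package configuration and regenerating or reinstalling the bootloader. A hedged recovery sketch for Debian/Ubuntu-family systems (the device name is a placeholder, and the exact steps depend on how the update failed):

```shell
# Finish any packages left half-configured by the failed update.
sudo dpkg --configure -a
# Regenerate /boot/grub/grub.cfg so the new kernel is listed.
sudo update-grub
# If the bootloader itself was damaged, reinstall it to the boot disk
# (replace /dev/sda with the actual boot device).
sudo grub-install /dev/sda
```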
One of our servers rebooted at 2:30 AM with no oops, crash dump, or error logged. I thought I had crash dumps enabled, but did not, so I corrected that and also upgraded to the latest point release of the kernel.
This server hosts most of the private virtual machines, which is why your virtual machine may have rebooted.
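For reference, enabling crash dumps on an Ubuntu-family server is normally a matter of installing the kdump tooling and verifying that a crash kernel is loaded. A hedged sketch (package and tool names are the usual Ubuntu ones and are assumptions about this setup):

```shell
# Install the crash-dump tooling (pulls in kdump-tools).
sudo apt install linux-crashdump
# Verify that kdump is configured and the crash kernel is loaded.
sudo kdump-config show
# After a reboot, confirm memory was reserved for the crash kernel.
grep crashkernel /proc/cmdline
```

With this in place, an unexplained reboot should leave a dump under /var/crash that can be examined afterwards.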
It was not a clean upgrade. GRUB got corrupted on one machine, and one server didn’t properly export its file systems until I manually restarted the NFS server, leaving quite a few broken NFS mounts. Some things were back up at 11:16, and the broken systems were gradually restored, with restoration completed at 12:07.
https://friendica.eskimo.com/ and https://hubzilla.eskimo.com/ are back up and operational; in fact, they were among the services restored at 11:16.
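The NFS recovery described above typically looks something like the following on Debian/Ubuntu-family systems. This is a hedged sketch, not the exact commands used here; the service name and the /mnt/share mount point are placeholders:

```shell
# On the NFS server: re-read /etc/exports and re-export everything.
sudo exportfs -ra
# If exports are still missing, restart the NFS server itself.
sudo systemctl restart nfs-kernel-server
# Confirm the export list is populated.
showmount -e localhost

# On a client with a stale or broken mount: lazy-unmount, then remount.
sudo umount -l /mnt/share && sudo mount /mnt/share
```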
I plan to upgrade kernels on all the servers tonight starting at 11 PM. Upgrades should be concluded by midnight. This will result in outages of 10-20 minutes for each physical server, and perhaps another ten minutes for the virtual machines on that server, depending upon whether NFS and NIS bind properly after the reboots.
We had one of our physical servers, which mainly services virtual private servers, crash and reboot this morning.
The crash was caused by snapd attempting to apply a Livepatch to our kernel, which is not a Canonical kernel. Livepatch was disabled in Software Properties, but it tried anyway.
I’ve removed snapd from our physical servers to prevent a recurrence and any security risk that snapd may represent.
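Removing snapd and making sure Livepatch stays off on an Ubuntu-family system generally looks like the following. This is a hedged sketch of the usual commands, not necessarily the exact sequence used here:

```shell
# Disable the Livepatch client if it is present (no-op otherwise).
sudo canonical-livepatch disable || true
sudo snap remove canonical-livepatch 2>/dev/null || true
# Stop snapd and remove it so it cannot act again.
sudo systemctl stop snapd.service snapd.socket
sudo apt purge snapd
```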
Mint is back online and is now running Mint 20.1. However, it is a fresh install and not a lot of software is installed yet. If something you’d like is missing, please go to our website https://www.eskimo.com/, select Support -> Tickets, and generate a trouble ticket with your request.
I attempted to upgrade Mint again today, and again it failed, this time leaving the machine in an unusable state: python3 wasn’t properly installed, and since apt depends on python3, I could not repair it in place.
I could have restored from backups, but instead opted to try a fresh install of Mint 20.1 (it had been on 19.3). Unlike the Mint 20 upgrade, this fresh install succeeded, and MATE is working.
I am still in the process of getting other needed software installed and at least a basic configuration in place so Mint may be unavailable this evening and possibly part of tomorrow depending upon how much I complete tonight.
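As an aside on the python3/apt deadlock mentioned above: when the package tooling is broken in that way, one common escape hatch is to install the needed .deb directly with dpkg, which does not depend on python. A hedged sketch, with the mirror URL and version string as placeholders:

```shell
# Fetch the matching package from the distribution mirror
# (URL and VERSION are placeholders for the release in use).
wget http://archive.ubuntu.com/ubuntu/pool/main/p/python3-defaults/python3-minimal_VERSION_amd64.deb
# Install it directly with dpkg, bypassing apt.
sudo dpkg -i python3-minimal_VERSION_amd64.deb
# Then let dpkg finish any half-configured packages.
sudo dpkg --configure -a
```

Whether that would have worked in this case is unknown; a fresh install was the route actually taken.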
Our web server was largely unavailable between 12:30 AM and about 4 AM on March 30th owing to one of the system database tables in MariaDB becoming corrupted. I could have restored everything from backups in less time, but instead I loaded the backups, recovered the damaged system table’s structure from them, returned the server to the current image, then dropped the corrupted table and re-created it from the backed-up data. That way no user data was lost.
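A single-table recovery of this kind can be sketched as follows: pull just the damaged table’s section out of a full logical dump, then drop and re-create the table from that data. This is a hedged illustration; the database name, table name, and dump path are placeholders, and the actual recovery here also involved restoring the system table structure from an image:

```shell
# Extract one table's section (structure + data) from a full mysqldump file.
sed -n '/^-- Table structure for table `mytable`/,/^-- Table structure for table/p' \
    full_backup.sql > mytable.sql
# Drop the corrupted table, then reload it from the extracted dump.
mysql -u root -p mydb -e 'DROP TABLE IF EXISTS mytable;'
mysql -u root -p mydb < mytable.sql
```

The trade-off described above is exactly this: a full restore is faster, but a targeted table rebuild preserves all user data written since the backup elsewhere in the database.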