I will be taking the web server down for approximately 1/2 hour to image the machine in case future restoration is necessary. It will be unavailable from 5:15AM until 5:45AM May 29th, 2017.
I will be moving the vps1-vps5 private virtual servers to a different physical host for load balancing. You will experience a 20-30 minute outage of these servers during this operation.
The client mail server is back up. The copy took less time than I had anticipated owing to the faster disk subsystem. Mail is now on a RAID10 system with 4 Western Digital 7200RPM Black drives. These drives are each capable of sustained write speeds of around 120MB/s and the machine has adequate bandwidth to fully saturate all four of them continuously. This should make for a much nicer mail experience.
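As a back-of-the-envelope check on that claim: in a four-drive RAID10, the drives form two mirrored pairs and writes stripe across the two pairs, so sustained write throughput is roughly twice a single drive's rate (mirroring adds redundancy, not write speed). Using the 120MB/s per-drive figure above:

```shell
# Rough RAID10 write-throughput estimate.
# 4 drives = 2 mirrored pairs; writes stripe across the 2 pairs,
# so aggregate write speed is ~2x a single drive, not 4x.
SINGLE_DRIVE_MBS=120   # sustained write per drive, per the post
STRIPES=2              # number of mirrored pairs data is striped over
echo "$(( SINGLE_DRIVE_MBS * STRIPES )) MB/s"   # ~240 MB/s aggregate
```

Reads can do better than this, since either drive of a mirror can service them.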
There will be some slowness for a couple more hours as I move some other hosts. The network interface is 1Gb/s and these moves tend to saturate it, but the result will be a better-balanced load system-wide and the best user experience possible.
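To give a sense of why a single move saturates the link: a gigabit interface tops out at about 125MB/s of payload, so even one guest image in flight uses all of it. A rough estimate, assuming a hypothetical 40GB guest image (the size is illustrative, not an actual figure from these moves):

```shell
# Rough transfer-time estimate for a host move over gigabit Ethernet.
# 1 Gb/s divided by 8 bits per byte = 125 MB/s theoretical ceiling.
LINK_MBS=125
IMAGE_GB=40   # hypothetical guest image size, for illustration only
echo "$(( IMAGE_GB * 1000 / LINK_MBS )) seconds"   # 320 seconds
```

So each host of that size ties up the wire for five-plus minutes even at full line rate, which is why other traffic feels sluggish while the moves run.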
The last hold out, ‘eskimo.com’, is back up and running!
I am going to be taking the client mail server down for approximately 1/2 hour to move it to different hardware, both for load balancing and to get it onto the same physical machine as the mail spool files, so that 1GB files don't need to be constantly shuttled back and forth across the internal network. This will commence at 4:40AM on May 29th and should conclude by 5:10AM.
During this interval mail will still be received and spooled. It can also be accessed via shell mailers, but not webmail. Outgoing mail sent from shell mailers will be spooled; if your mail client connects directly, you will get a rejection during this interval.
Everything is back up except for the old eskimo shell server. That will require another trip to the co-location facility, and after being there all night and then working on it more from home, I'm too tired for another 44-mile round-trip drive. So please use centos6 or some other server for today, and I will get back down and restore it later tonight.
There is a possibility some e-mail was rejected between about 3:30AM and 6:30AM this morning, when all of the mail servers were off line or inoperative because the spool was not mounted or NIS was not bound.
This evening I had to reboot the server that currently serves all of the files, because updates required it. It did not boot, and it took me three hours to get it booting again. Something in the new nvidia drivers conflicted with the nfs-kernel-server package used to serve those files, and the system purged the latter, including the /etc/exports file that tells it which machines to make the files available to.
I have solved the video conflict by removing the nvidia drivers and using the Linux nouveau drivers instead. They are slower, but it's not as if I am going to play video games on this machine, and they are adequate for everything else.
I am now rebuilding the exports file by hand and hope to have everything operational again in a couple of hours. I will focus on the most-used services first.
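For anyone curious what that rebuild involves: the exports file is one line per exported path, listing which hosts or networks may mount it and with what options, and it can be re-applied without restarting NFS. A minimal sketch (the paths, hostname, and network below are placeholders, not our actual configuration):

```shell
# Example /etc/exports contents (placeholders, one export per line):
#   /home    192.168.0.0/24(rw,sync,no_subtree_check)
#   /spool   mailhost(rw,sync,no_root_squash)

exportfs -ra   # re-read /etc/exports and apply the changes
exportfs -v    # verify what is currently being exported
```

The painful part is not the syntax but reconstructing from memory which of many client machines mounts which filesystem.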
Good news! I successfully got SunOS 4.1.4 to boot in a qemu Sparc emulation of an SS-10. I was told SunOS 4.1.4 would not boot under OpenPROM, but apparently they've made some improvements, because it did. It spewed a number of errors, but none of them were show stoppers. I haven't installed it yet, but this is the first time I've even gotten it to boot. It's been so long since I installed SunOS that it's now mostly a human-memory problem, not a machine problem, but I'll get there.
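For anyone who wants to try the same experiment: qemu's Sparc system emulator includes an SS-10 machine type. A sketch of the sort of invocation I'd expect to work (the image filenames are placeholders, and memory size and media layout are assumptions, not a recipe guaranteed to install cleanly):

```shell
# Boot a SunOS 4.1.4 CD image in qemu's SparcStation 10 emulation.
# Filenames below are placeholders for your own disk and CD images.
qemu-system-sparc -M SS-10 -m 64 \
    -drive file=sunos414-disk.img,format=raw \
    -drive file=sunos414-cd.iso,format=raw,media=cdrom \
    -nographic
```

`qemu-system-sparc -M help` lists the supported machine types if your build differs.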
I've got three remaining Sparc machines, including an LX that is substituting rather poorly for the SS-10 that is eskimo.com. I would still very much like to find a good SS-10 chassis with a non-fried DMA chip, or an image of the SS-10 ROMs so I can try to get the qemu emulation to work; the latter would really be the preferred solution, since then I would not have to maintain antique hardware. So if anyone can help with either of these things, it would be much appreciated.
Another advantage of getting Sparc emulation working is that these machines would then become virtual machines, and I could remotely reboot or power-cycle them or perform other operations that, as physical machines, require me to drive 22 miles to perform.
I got NIS working, except one Radius server won't bind to it, and I can't log in remotely to fix it since it is a physical machine and not a virtual one. I would like to fix this, but so far I have not been able to get the modern Radius software to work. The documentation says it will read the old-style configuration files, but I've not been able to get it to function. The examples given are unfortunately much simpler than the insanely messy setup we have here, which is mostly the result of dealing with a large number of dial-up and DSL providers in the past (now only a handful remain).
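Once I have console access to that box, the usual first checks for a client that won't bind to NIS look something like this (standard NIS client tools, sketched here as a general procedure rather than our exact one):

```shell
# Quick NIS binding diagnostics, run on the client that won't bind.
domainname            # is the NIS domain name set correctly?
ypwhich               # which NIS server, if any, are we bound to?
rpcinfo -p localhost  # is ypbind registered with the portmapper?
ypcat passwd | head   # can we actually pull a map from a server?
```

If `ypwhich` hangs or errors, the problem is binding itself (domain name, ypbind, or network); if it answers but `ypcat` fails, the problem is on the server side.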
We are presently running less one important server, so the load, now shared by the remaining two, is higher than normal.
When I left, things appeared to be working but when I returned, NIS was not functional.
I still can't get one of the NIS servers operational and haven't figured out why yet, but one working server is enough to keep things running.
I have to get some sleep, but I will troubleshoot the other NIS server and get the other file server back on line soon, so any delay or lag is temporary.