<![CDATA[Eskimo North]]> http://www.eskimo.com/forums 2018-09-21T02:30:26-07:00 Smartfeed extension for phpBB <![CDATA[Health :: Mitochondria :: Author Nanook]]> 2018-09-10T00:34:04-07:00 2018-09-10T00:34:04-07:00 http://www.eskimo.com/forums/viewtopic.php?f=35&t=277&p=589#p589
I achieved the degree of healing of my nervous system that I have by following something I stumbled across on Facebook called "The Protocol Works". The protocol did indeed work, but it had many expensive components. I researched each one using studies on nih.gov and eliminated all that did not show benefit in studies. I stumbled across several other agents in the process and added them.

One of the things I found in common with every protocol that worked is that they all aimed at restoring mitochondrial health. Mitochondria are the tiny organelles inside your cells that metabolize sugars, fats, and proteins into ATP, the chemical that drives all of our energy processes. And it happens that every one of the degenerative diseases I have is directly related to mitochondria, except for cataracts, which seem to be secondary to that and greatly affected by blood glucose.

I started researching things that could help mitochondrial health that were not already on my stack and discovered mitoQ. It has improved my energy, decreased my appetite and need for sleep, and greatly reduced my cataract symptoms. Before taking it I had trouble driving at night and had to avoid the inside lane because of the glare of oncoming headlamps. Any light had a halo; the moon was only a yellow-white blob in the sky, and I could not even tell its phase. I could see only a few of the brightest stars, and they were whitish blobs, not point light sources. After taking mitoQ for about three months, lights no longer have halos, I can use the inside lane at night without problems, the moon is sharp and well defined rather than a blob, and stars are once again point sources of light, and there are a lot of them.

Unfortunately, I'm not yet doing a good job of the two things that would probably help the most: shorting myself of calories but not nutrition, and strenuous exercise. I'm doing neither at present. My cholesterol and blood pressure are presently controlled by medicines, but I hope I can get both of those down in time and get away from the need for them.

How mitoQ is helping the eye lens is a mystery since mature lens cells do not have mitochondria. I speculate that it must be helping some support cells.]]>

<![CDATA[Introductions :: Re: Hello, :: Reply by castle]]> 2018-08-28T19:11:19-07:00 2018-08-28T19:11:19-07:00 http://www.eskimo.com/forums/viewtopic.php?f=15&t=238&p=580#p580
____________________
gmail sign up new account]]>

<![CDATA[Music :: Hurdy Gurdy Man :: Author Nanook]]> 2018-08-28T14:58:35-07:00 2018-08-28T14:58:35-07:00 http://www.eskimo.com/forums/viewtopic.php?f=28&t=270&p=579#p579 https://www.youtube.com/watch?v=3lKCUuyojDI


But until now I never knew what a hurdy gurdy was; now, thanks to this video, I do:
https://www.youtube.com/watch?v=gYJg9cLk1us]]>
<![CDATA[Music :: Big Audio Dynamite Ripped Off The Clash :: Author Nanook]]> 2018-09-18T05:15:54-07:00 2018-09-18T05:15:54-07:00 http://www.eskimo.com/forums/viewtopic.php?f=28&t=285&p=598#p598 <![CDATA[Random Noise :: Grumps :: Author Nanook]]> 2018-08-22T16:43:18-07:00 2018-08-22T16:43:18-07:00 http://www.eskimo.com/forums/viewtopic.php?f=6&t=257&p=566#p566 <![CDATA[Random Noise :: The Enslaved :: Author Nanook]]> 2018-08-23T14:47:43-07:00 2018-08-23T14:47:43-07:00 http://www.eskimo.com/forums/viewtopic.php?f=6&t=259&p=568#p568 ]]> <![CDATA[Random Noise :: ET @ Mt Shasta :: Author Nanook]]> 2018-08-23T17:06:22-07:00 2018-08-23T17:06:22-07:00 http://www.eskimo.com/forums/viewtopic.php?f=6&t=260&p=569#p569 https://youtu.be/19HH4L79cNY
Dr. Steven Greer also takes groups of 20-30 people to Mt. Shasta and some other locations. They go equipped with night vision equipment and lasers, and they almost always have sightings. I have seen video where the sky looked so busy it seemed like a miracle they did not crash into each other.]]>
<![CDATA[Random Noise :: Personal Space :: Author Nanook]]> 2018-08-23T17:46:00-07:00 2018-08-23T17:46:00-07:00 http://www.eskimo.com/forums/viewtopic.php?f=6&t=262&p=571#p571 ]]> <![CDATA[Random Noise :: Complete Bullshit but Fun Story :: Author Nanook]]> 2018-08-23T19:24:43-07:00 2018-08-23T19:24:43-07:00 http://www.eskimo.com/forums/viewtopic.php?f=6&t=263&p=572#p572 https://youtu.be/e5nJA8Hkvyc]]> <![CDATA[Random Noise :: I have two dead cousins that got dead more or less this way. :: Author Nanook]]> 2018-09-03T16:30:18-07:00 2018-09-03T16:30:18-07:00 http://www.eskimo.com/forums/viewtopic.php?f=6&t=275&p=586#p586 https://youtu.be/L7e71uWlzlU]]> <![CDATA[Random Noise :: Re: I have two dead cousins that got dead more or less this way. :: Reply by carl]]> 2018-09-18T18:03:15-07:00 2018-09-18T18:03:15-07:00 http://www.eskimo.com/forums/viewtopic.php?f=6&t=275&p=599#p599 <![CDATA[Random Noise :: Re: I have two dead cousins that got dead more or less this way. :: Reply by Nanook]]> 2018-09-18T20:53:58-07:00 2018-09-18T20:53:58-07:00 http://www.eskimo.com/forums/viewtopic.php?f=6&t=275&p=600#p600 <![CDATA[Space, UFO, Et, Aliens :: Gene Cernan - The Last Man to Walk On The Moon :: Author Nanook]]> 2018-09-01T19:14:52-07:00 2018-09-01T19:14:52-07:00 http://www.eskimo.com/forums/viewtopic.php?f=36&t=274&p=585#p585 https://social.eskimo.com/file/view/989 ... -interview
Sorry the embed isn't working quite right on my own server, but if you click on the link it will play.]]>
<![CDATA[Tech Talk :: Getting Chipped :: Author Nanook]]> 2018-09-19T14:38:52-07:00 2018-09-19T14:38:52-07:00 http://www.eskimo.com/forums/viewtopic.php?f=7&t=286&p=601#p601 https://youtu.be/O01e5loduus
One thing to know about getting chipped: animals that receive chips have a 1-in-2000 chance of developing cancer at the chip site. Because people live longer, our chances of developing cancer are probably even greater. I have noticed a growing trend toward chipping Alzheimer's patients so that they can be identified if they escape and wander off. This is not a trend I find encouraging.]]>
<![CDATA[World Events :: Under Control :: Author Nanook]]> 2018-08-25T03:37:32-07:00 2018-08-25T03:37:32-07:00 http://www.eskimo.com/forums/viewtopic.php?f=26&t=264&p=573#p573 ]]> <![CDATA[World Events :: Drumpf :: Author Nanook]]> 2018-08-25T05:34:45-07:00 2018-08-25T05:34:45-07:00 http://www.eskimo.com/forums/viewtopic.php?f=26&t=267&p=576#p576 https://youtu.be/XYviM5xevC8]]> <![CDATA[World Events :: Re: NSA Whistle Blower 3 Hour Interview :: Author Nanook]]> 2018-08-30T23:00:12-07:00 2018-08-30T23:00:12-07:00 http://www.eskimo.com/forums/viewtopic.php?f=26&t=273&p=584#p584 https://social.eskimo.com/file/view/9897/nsa
This won't embed properly but click on the link to view. I have brought the video here specifically to prevent it from being censored.]]>
<![CDATA[World Events :: Overcriminalization :: Author Nanook]]> 2018-09-06T15:39:01-07:00 2018-09-06T15:39:01-07:00 http://www.eskimo.com/forums/viewtopic.php?f=26&t=276&p=588#p588 https://www.youtube.com/watch?v=xJxFBAAHLWk]]> <![CDATA[World Events :: 911 :: Author Nanook]]> 2018-09-13T20:49:18-07:00 2018-09-13T20:49:18-07:00 http://www.eskimo.com/forums/viewtopic.php?f=26&t=283&p=595#p595 https://social.eskimo.com/file/download ... sed%21.mp4]]> <![CDATA[World Events :: The De-evolution of the Telephone System :: Author Nanook]]> 2018-09-20T17:03:40-07:00 2018-09-20T17:03:40-07:00 http://www.eskimo.com/forums/viewtopic.php?f=26&t=288&p=603#p603
When I started, George Walker was the company President. He was a guy who had worked his way up the ranks from lineman to the President's office. He knew the business and was committed to good customer service.

When I started, service was the number one thing; nothing else mattered more. To the degree that it was less than perfect, it was pretty much the best the technology of the day could provide. It wasn't a result of cost cutting or other human considerations; we did everything humanly possible to provide the best service the technology of the day allowed.

The records back then were kept in paper bins. Most of the switches were electromechanical, not operating under stored program control. Those that were under stored program control consisted of electromechanical switches operated by computers built from discrete transistors: many thousands of circuit packs consuming many tens of thousands of watts of electricity to achieve an 8 MHz CPU cycle speed. Permanent memory was stored on aluminum cards with little bar magnets glued to them. There were 64 words of 44 bits on each card; of these, 37 bits were instruction or data, and the rest were parity and Hamming bits for error correction. Our first #1ESS was already 13 years old when I started with the company; it was installed in Bellevue Glencourt in 1965. The CPUs were duplicated and had matching circuitry. They ran in step, and any error caused a mismatch, which was discovered by the matching circuitry, which would then invoke diagnostics to determine which unit was in error and remove it from service.
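Those bit counts work out: 37 data bits can be protected by 6 Hamming check bits plus 1 overall parity bit, giving single-error correction with double-error detection (SECDED) in exactly 44 bits. A minimal sketch of the arithmetic, assuming a textbook Hamming layout (the actual bit layout of the #1ESS card store is not something I can confirm):

```python
# SECDED over a 44-bit word: 37 data bits + 6 Hamming bits + 1 parity.
# The layout below is the textbook one, assumed for illustration only.

DATA_BITS = 37
HAMMING_BITS = 6                           # 2**6 >= 37 + 6 + 1, so 6 suffice
WORD_BITS = DATA_BITS + HAMMING_BITS + 1   # = 44

def encode(data: int) -> list[int]:
    """Pack 37 data bits into a 44-bit SECDED codeword (list of bits)."""
    assert 0 <= data < 1 << DATA_BITS
    word = [0] * WORD_BITS            # index 0 holds the overall parity
    d = 0
    for pos in range(1, WORD_BITS):   # positions 1..43, Hamming layout
        if pos & (pos - 1):           # not a power of two -> data bit
            word[pos] = (data >> d) & 1
            d += 1
    for r in range(HAMMING_BITS):     # check bit p covers positions whose
        p = 1 << r                    # index has bit r set (including p)
        word[p] = sum(word[i] for i in range(1, WORD_BITS) if i & p) & 1
    word[0] = sum(word) & 1           # overall parity over the whole word
    return word

def check(word: list[int]) -> tuple[int, bool]:
    """Return (syndrome, parity_ok). syndrome == 0: clean word; nonzero
    with bad parity: single error at that position (correctable); nonzero
    with good parity: double error (detected but not correctable)."""
    syndrome = 0
    for r in range(HAMMING_BITS):
        p = 1 << r
        if sum(word[i] for i in range(1, WORD_BITS) if i & p) & 1:
            syndrome |= p
    return syndrome, sum(word) & 1 == 0
```

With a scheme like this, a single bad bar magnet on a card could be corrected on the fly (the syndrome points straight at the failed position), while a double failure was at least detected rather than silently executed.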

The software for the #1ESS was extremely efficient. It allowed an 8 MHz CPU to effectively service more than 100,000 calls per hour in Bellevue Glencourt.

The switching fabric of these machines was built around a special type of ferreed switch called a remreed. It was called a remreed because, rather than requiring a continuously applied external magnetic field to stay closed, the remreed's reeds were made of a ferrous material that could be magnetized or demagnetized by an external field and then would remember that state, staying either closed or open.

What was amazing to me about the early technology was the sheer cleverness of its implementation. So much functionality was squeezed out of so little, and, by today's standards, primitive technology. Back in those days we would have anywhere from 2-5 technicians in a central office, and often a much larger crew of special services people; these were people who would wire up design circuits for customers, things like getting a radio station connected from its studios to its transmitter, private networks, and alarm circuits.

What I saw over the 17 years I was there was an explosive evolution of hardware sophistication along with a destruction of management infrastructure and, later, of the overall personnel infrastructure. The company gradually evolved from one in which nothing mattered more than customer service to one in which nothing mattered more than profits. Although both the cost of capital infrastructure and the cost of maintenance decreased substantially over the years, the cost of a basic phone line went from around $8/month to about $60/month today.

As we prepared to comply with the AT&T consent decree, in which AT&T would sell off all of its local exchange carriers, including PNB, for whom I worked, we were told that new "holding companies" would be formed but that management and the way we did business would remain the same. This was the biggest, boldest lie ever told.

Under George Walker's rule a hell of a lot of trust was placed in employees. I had a key that would get me into pretty much any PNB facility except the President's office and a couple of doors in the tunnel between Main and Mutual. One door led down to train tracks some 50 feet below the buildings, and the other to the main power room, which was exceptionally huge and dangerous. Because there was still a lot of cross-bar equipment, the normal current in the power bus was around 5,000 amps, and God only knows what the fault current capabilities were, but they were truly phenomenal. I could get into the power rooms at any other facility except Main.

Very little was walled off. We'd have several switches on a floor with no wall between the CPUs and the switches, or between the switches and the main frames, and we had underground tunnels connecting three buildings: 1101 4th Avenue (now a hotel), 1122 3rd Avenue (Main), and 1200 3rd Avenue (Mutual, now a retail center).

Fairly early on, several things happened after the break-up. They relieved George Walker of his duties, along with most of the upper management, and replaced them with a bunch of MBAs with zero knowledge of the business. They split the company up into 17 subsidiaries, one of which held all the property, and through some tricky stock manipulations they managed to get most of that property into their own pockets. We then leased back many of the properties no longer owned by the company.

They took away our keys and put card locks on all the doors so they could limit us to only where we needed to go and keep track of our entries and exits. By this time a number of #1ESS machines, including the mains, had been upgraded to #1AESS. In the mains they walled off the #1AESS CPUs from everything else, with no reason given. In some ways it was nice, as the clatter of 100,000 relays was at least dampened by the walls, but it also made you less aware of the operational status of the machine. When the walls were not there, you could hear when something was wrong, a frame acting up, etc. Now you were not as aware of what the periphery was doing while you were doing work involving the CPUs.

Along with the new CPU technology, the #1AESS also brought new switching fabric technology. Although the old equipment was still supported, a newer, much smaller switching fabric was introduced, consisting of remreed grid switches packaged as sealed units with many switches per grid. This freed up a lot of floor space, and we started seeing things like customer equipment being placed there. I suppose this was one of the reasons they had to segment our access more: it was one thing if our own equipment was affected by an employee, quite another if it was a customer's, and there were also the psychological aspects of allowing customers to feel their equipment was secure.

Another disturbing trend I saw was the use of our test equipment by law enforcement authorities to bug customers' phone lines. All of our switches were equipped with what were called number test verticals. These basically offered a way to connect into an existing line or trunk and monitor it. Their original purpose was to troubleshoot problems in the circuits, but they ended up being used heavily by law enforcement for monitoring. Another disturbing change was in what we called AMA, which was basically the accounting data necessary to charge someone for usage, be it on a measured rate line or a long distance call. Initially we recorded only chargeable information, but later we started recording the metadata of all calls. A lot of expense was associated with this, and when I asked why we were doing it, they never provided an answer. Well, now Edward Snowden has told us the answer.

When I started, the second line there was Bill Pittman. I didn't like Bill because he wasn't exactly a people person. If he was walking by and you attempted to ask him a question and he didn't like you, he would just keep walking as if you didn't exist. But I respected Bill: he was extremely knowledgeable technically, and he was extremely customer service oriented.

Jan Cyre was also technically knowledgeable and very service conscious, but she was quite a bit more personable, at least toward me, than Bill was. She was still a no-nonsense, get-the-job-done-whatever-it-takes manager, but she solicited input from people and would at least take a minute to listen if you had an idea you wanted to bring to her. I had a lot of respect for her.

Denny Eckhert was another great second line, and really the last in the line, because the next person we got was not technically knowledgeable; I mean, she didn't even know how to use a fax machine. I don't remember her name, and if I did I wouldn't give it, as I feel that when you can't say something nice about someone it's better not to say anything at all. I only bring this up to demonstrate the type of decisions upper management was making as we got further into being US West and eventually Qwest.

By this time we had replaced pretty much all of the electromechanical switches, and some of the #1ESS and #1AESS switches, with #5ESS switches. The #5 went away from a central processor driving everything to distributed intelligence. The central processor was now a 3B20D, which is a dual 3B20 processor with matching circuitry. This switch used a much different software architecture: still very much table driven in terms of translations, but unlike the #1ESS and #1AESS, which were coded mostly in assembly language, this new processor ran Unix-RTR and was mostly coded in the C programming language. By this time we also had some switches from other vendors, Ericsson and Northern Telecom. I was never trained on the DMS-100 switches, so I do not know what they used for an operating system, but I do know they were coded in Pascal. They never handled the traffic they were advertised as being able to handle, and from a maintenance perspective they were a real pain in the ass even though I didn't work on them: I worked on switches with trunking to them, and those switches would take the circuits out of service because the DMS-100 was taking too long to respond.

If we had encountered this in PNB times, we would have taken immediate action to address the service issue; instead we kept demanding they fix it, and so these problems went on for a very long time. Customers would have to wait a long time for dial-tone. They'd get re-order when they dialed a number because all the trunks were out of service, and somehow this seemed to be okay with management. As an employee who still believed customer service should be the number one priority, I really felt out of place and frustrated working in this environment.

Another thing I didn't like about working in the field was the lack of any company. It used to be there were at least two of us in a central office; now one of us would go between five different central offices and do the bare minimum maintenance to keep them alive.

I went from doing maintenance and/or surveillance from the Switching Control Center, basically watching for troubles, doing pattern analysis on switching fabric failures, etc., to doing cutover work. There, again, there was a team and we had a solid objective, and although the plans management came up with never worked, we always ended up largely winging it and making it work in the end, so there was some sense of satisfaction. Later in the process I started getting invited to both the planning meetings before the cutovers and the critique meetings afterwards. It was good having a chance at some input, but frustrating that management would ignore the realities of the situation. For example, we had to work with other regional companies, most notably GTE (General Telephone), and we'd schedule their people to do trunk testing, but they'd never be there at the date and time scheduled; thus the trunk testing schedule that management always insisted we create was always ignored in the field, so it seemed like such a waste of time to make it in the first place. We also knew it was a bad idea to reuse trunks to GTE, as odds were good they wouldn't have anybody in the central office to make the cutover changes necessary. Often we'd go from DP signaling in the mechanical switch to MF signaling or, later, SS7 signaling, and in spite of scheduling in advance, GTE would not do it at the scheduled time, so customers would not be able to call numbers in those offices until someone came in on day shift and finished the job.

Then I left the field and worked first on trunking and then as a system admin in the Switching Control Center. I worked on building a new network operations center to replace the existing systems. I know we spent around $14 million of company money on this project, and yet two weeks after it was fully operational they decided to close it down and fold everything into Denver. This is the point at which I left the company.

Now, 23 years later, I can see from the outside that the state of telephone service has continued to go to hell in a handbasket. It used to be you had defined signals for the various call states. Call a number that was busy and you'd get a busy signal. Call a number that didn't have the necessary equipment to complete the call and you'd get a re-order, what people call a fast busy. Call a number that was disconnected and you'd get either a disconnect recording or what we called an intercept, where it refers you to the new number.

Now, if I call a number that is disconnected, rarely will I get a recording, and virtually never an intercept. Often I'll get busy 24x7, sometimes re-order. It is very frustrating, as a customer, to call someone for two weeks, always getting a busy signal, only to eventually find out they've died; the number was disconnected, and a busy signal is what you get rather than a disconnect recording.
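The old, well-defined behavior amounts to a simple mapping from call outcome to progress signal. A minimal sketch (the outcome names are my own illustrative labels, not switch terminology; the cadences are the standard North American tone-plan values):

```python
# Illustrative mapping of call outcomes to the progress signals a caller
# used to hear. Keys are made-up labels for this sketch, not switch terms.
CALL_PROGRESS = {
    "busy": "busy tone, 60 interruptions per minute",
    "no_equipment": "re-order ('fast busy'), 120 interruptions per minute",
    "disconnected": "disconnect recording",
    "number_changed": "intercept recording giving the new number",
}

def signal_for(outcome: str) -> str:
    # Unknown outcomes fall back to re-order, the catch-all failure signal.
    return CALL_PROGRESS.get(outcome, CALL_PROGRESS["no_equipment"])
```

The point of the old scheme was that every outcome had a distinct, informative signal; the modern failure mode described above is everything collapsing into a plain busy signal.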

I wish it were possible to have the technical advances we have enjoyed along with upper management that was service driven, like we had when I started in 1978 and really, at the local level, until almost the time I left. But by that point there was no support from upper management, so things were going to hell despite our best efforts.

Now the land line is almost dead. I'm still using them for my business because the voice quality of most wireless networks is not so good, but recently some of the networks have provided an option for broadband high quality voice, so I may go fully wireless soon as well. I really feel the phone company missed the boat in terms of capabilities it could have provided customers, but now the wireless companies are starting to pick up on many of these.]]>

When I started, George Walker was the company President. George Walker was a guy that had worked his way up the ranks from a lineman to the Presidents office. George Walker knew the business and was committed to good customer service.

When I started service was the number one thing, nothing else mattered more. To the degree that it was less than perfect it was pretty much the best the technology of the day could provide. It wasn't a result of cost cutting or other human considerations. We did everything humanly possible to provide the best service the technology of the day allowed.

The records back then were kept in paper bins. Most of the switches were electromechanical, not operating under stored program control. Those that were under control consisted of electromechanical switched operated by computers that used discrete transistors. Many thousands of circuit packs consuming many tens of thousands of watts of electricity to effect an 8 Mhz CPU cycle speed. Permanent memory was stored on aluminum cards with little bar magnets glued to them. There were 64 words of 44 bits on each card. Of these 37 bits were instruction or data, the rest was parity and hamming for error correction. Our first #1ESS was already 13 years old when I started with the company, it was installed in Bellevue Glencourt in 1965. The CPUs were duplicated and had matching circuitry. They ran in step and any error caused a mis-match which was discovered by the matching circuitry which would then invoke diagnostics to determine which until was in error and remove it from service.

The software for the #1ESS was extremely efficient. It allowed an 8Mhz CPU to effective service more than 100,000 calls per hour in Bellevue Glencourt.

The switching fabric of these machines were built around a special type of ferreed switch called a remreed. It was called a remreed because rather than requiring a continuously applied external magnetic field to close, the remreed switches reeds were made out of a ferrous material that could be magnetized or demagnetized by an external force and then would remember and either stay closed or open.

What was amazing to me about the early technology was the sheer cleverness of it's implementation. So much functionality was squeezed out of so little and primitive, by
today's standards, technology. Back in those days we would have anywhere from 2-5 technicians in a central office, and often a much larger crew of special services people, these were people who would wire up design circuits for customers. Things like getting a radio station connected from it's studios to it's transmitter, private networks, and alarm circuits.

What I saw over the 17 years I was there was an explosive evolution of hardware sophistication along with a destruction of management infrastructure and later overall personnel infrastructure. The company gradually evolved from one in which nothing mattered more than customer service to nothing mattered more than profits. Although both the costs of capital infrastructure and the cost of maintenance decreased substantially over the years, the cost of a basic phone line went from around $8 to about $60/month today.

As we prepared to comply with the AT&T consent decree in which ATT would sell off all of it's local exchange carriers including PNB, for whom I worked, we were told that new "holding companies" would be formed, but management and the way we did business would remain the same. This was the biggest boldest lie ever told.

Under George Walker's rule a hell of a lot of trust was placed in employees. I had a key that would get me into pretty much any PNB facility except the Presidents office and a couple of doors in the tunnel between main and mutual. One door lead down to train tracks some 50 feet below the buildings, and the other to the main power room was exceptionally huge and dangerous. Because there was still a lot of cross-bar equipment the normal current in the power bus was around 5,000 AMPS and God only knows what the fault current capabilities were but it was something truly phenomenal. I could get into power rooms at any other facility except main.

Very little was walled off. We'd have several switches on a floor with no wall between the CPUs and switches, the switches and main frames and we had under ground tunnels connecting three buildings, 1101 4th Avenue (now a Hotel), 1122 3rd avenue (Main), and 1200 3rd avenue (Mutual, now a retain center).

Fairly early on several things happened after the break-up. They relieved George Walker of his duty and most of the upper management. They were replaced by a bunch of MBAs with zero knowledge of the business. They split the company up into 17 subsidiaries, one of which held all the property and they managed by some tricky stock manipulations to get most of that property into their own pockets. We then leased back many of the properties no longer owned by the company.

They took away our keys and put card locks on all the doors so they could limit us to only where we needed to go and keep track of our entries and exits. By this time a number of #1ESS machines including the mains had been upgraded to #1AESS. In the mains they walled of the #1AESS CPUs from everything else with no reason given. In some ways it was nice as the clatter of 100,000 relays was at least dampened by the walls but it also made you less aware of the operational status of the machine. When the walls were not there, you could hear when something was wrong, a frame was acting up, etc. Now you were not able to be so aware of what the periphery was doing while you were doing work involving the CPUs.

Along with the new CPU technology, the #1AESS also brought new switching fabric technology. Although the old equipment was still supported, a newer much smaller switching fabric consisting of remreed grid switches as a sealed unit and many switches per grid. This freed up a lot of floor space and we started seeing things like customer equipment being placed there. I suppose this was one of the reasons they had to segment our access more. It was one thing if our own equipment were affected by an employee, quite another if it was a customers, and just the psychological aspects of allowing the customer to feel their equipment is secure.

Another disturbing trend I saw was the use of our test equipment by law enforcement authorities to bug customers phone lines. All of our switches were equipped with what were called number test verticals. This basically offered a way to connect into an existing line or trunk and monitor it. The original purpose of these were to troubleshoot problems in the circuits but it ended up being used heavily by law enforcement for monitoring. Another disturbing change was in what we called AMA which was basically the accounting data necessary to charge someone for usage, being it on a measured rate line or a long distance call. Initially we only recorded chargeable information on these but later we started recording the meta data to all calls. A lot of expense was associated with this and when I asked why we were doing it they never provided an answer. Well now Edward Snowden has told us the answer.

When I started, the second line there was Bill Pittman. I didn't like Bill because he wasn't exactly a people person. If he was walking by and you attempted to ask him a question and he didn't like you, he would just keep walking as if you didn't exist. But I respected Bill, he was extremely technically knowledgeable and he was extremely customer service oriented.

Jan Cyre was also technically knowledgeable and very service conscious, but she was also quite a bit more personable, at least towards me, than Bill was. She was still a no-nonsense, get the job done whatever it takes manager, but she solicited input from people and would at least take a minute to listen if you had an idea you wanted to bring to her. I had a lot of respect for her.

Denny Eckhert was another great second line, and really the last in the line, because the next person we got was not technically knowledgeable; I mean she didn't even know how to use a fax machine. I don't even remember her name, and if I did I wouldn't name her, as I do feel that when you can't say something nice about someone it's better not to say anything at all. I only bring this up to demonstrate the type of decisions that upper management was making as we got further into being US West and eventually Qwest.

By this time we had replaced pretty much all of the electromechanical switches, and some of the #1 and #1AESS switches, with #5ESS switches. The #5 went away from a central processor driving everything to distributed intelligence. The central processor was now a 3B20D, a duplexed 3B20 processor with matching circuitry. This switch used a much different software architecture. It was still very much table driven in terms of translations, but unlike the #1 and #1AESS, which were coded mostly in assembly language, this new processor ran Unix-RTR and was mostly coded in the C programming language. By this time we also had some switches from other vendors, Ericsson and Northern Telecom. I was never trained on the DMS-100 switches, so I do not know what they used for an operating system, but I do know they were coded in Pascal. They never handled the traffic they were advertised as being able to handle, and from a maintenance perspective they were a real pain in the ass even though I didn't work on them; I worked on switches with trunking to them, and those would take circuits out of service because the DMS was taking too long to respond.

If we had encountered this in PNB times we would have taken immediate action to address the service issue; instead we kept demanding they fix it, and so these problems went on for a very long time. Customers would have to wait a long time for dial tone. They'd get re-order when they dialed a number because all the trunks were out of service, and somehow this seemed to be okay with management. As an employee who still believed customer service should be the number one priority, I really felt out of place and frustrated working in this environment.

Another thing I didn't like about working in the field was the lack of company. It used to be there were at least two of us in a central office; now one of us would rotate between five different central offices and do the bare minimum of maintenance to keep them alive.

I went from doing maintenance and/or surveillance from the Switching Control Center, basically watching for troubles, doing pattern analysis on switching fabric failures, etc., to doing cutover work. There again there was a team and we had a solid objective. The plans management came up with never worked, so we always ended up largely winging it, and we always made it work in the end, so there was some sense of satisfaction. Later in the process I started getting invited to both the planning meetings before the cutovers and the critique meetings afterwards. It was good having a chance at some input, but frustrating that management would ignore the realities of the situation. For example, we had to work with other regional companies, most notably GTE, or General Telephone, and we'd schedule people to do trunk testing but they'd never be there at the date and time scheduled. Thus the trunk testing schedule that management always insisted we create was always ignored in the field, so it seemed like such a waste of time to make it in the first place. We also knew it was a bad idea to reuse trunks to GTE, as odds were good they wouldn't have anybody in the central office to make the cutover changes necessary. Often we'd go from DP signaling in the mechanical switch to MF signaling, or later SS7 signaling, and in spite of scheduling in advance, GTE would not do it at the scheduled time, so customers would not be able to call numbers in those offices until someone came in on day shift and finished the job.

Then I left the field and worked first on trunking and then as a system admin in the Switching Control Center. I worked on building a new network operations center to replace the existing systems. I know we spent around 14 million dollars of company money on this project, and yet two weeks after it was fully operational they decided to close it down and fold everything into Denver. This is the point at which I left the company.

Now, 23 years later, I can see from the outside that the state of telephone service has continued to go to hell in a handbasket. It used to be you had defined signals for various call states. Call a number that was busy and you'd get a busy signal. Call a number that didn't have the necessary equipment to complete the call and you'd get a re-order, what people call a fast busy. Call a number that was disconnected and you'd either get a disconnect recording or what we called an intercept, where it refers you to the new number.

Now, if I call a number that is disconnected, rarely will I get a recording, and virtually never an intercept. Often I'll get busy 24x7, sometimes re-order. It is very frustrating, as a customer, to call someone for two weeks, always getting a busy signal, only to eventually find out they've died and the number was disconnected; a busy is what you get rather than a disconnect recording.

I wish it were possible to have the technical advances we have enjoyed along with upper management that was service driven like we had when I started in 1978, and really still had at the local level until shortly before I left. But at that point there was no support from upper management, so things were going to hell despite our best efforts.

Now the landline is almost dead. I'm still using landlines for my business because the voice quality of most wireless networks is not so good, but recently some of the networks have provided an option for broadband high quality voice, so I may go fully wireless soon as well. I really feel the phone company missed the boat in terms of capabilities they could have provided customers, but now the wireless companies are starting to pick up on many of these.
<![CDATA[Announcements :: BBCodes Mostly Working :: Author Nanook]]> 2018-08-23T17:12:20-07:00 2018-08-23T17:12:20-07:00 http://www.eskimo.com/forums/viewtopic.php?f=4&t=261&p=570#p570 <![CDATA[Announcements :: Slow Service Early Wednesday Morning (1-4AM) :: Author Nanook]]> 2018-09-12T04:38:05-07:00 2018-09-12T04:38:05-07:00 http://www.eskimo.com/forums/viewtopic.php?f=4&t=278&p=590#p590
There are aspects of this attack that I do not understand. They forged a source address of 204.122.16.248 from outside (UDP packets, so no three-way handshake) and directed requests at 204.122.16.8, so our name servers would attempt to reply to 204.122.16.248. But there was no host at that IP address, and the result was that our router didn’t know what to do with the replies and was overloaded logging what it considered “Martian” packets.
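For illustration, an ingress anti-spoofing rule plus DNS response rate limiting of the kind relevant here could be sketched as follows. This is a hypothetical fragment, not our actual configuration: the interface name and the rate value are assumptions.

```
# Hypothetical iptables rule (interface name assumed): drop packets
# arriving on the external interface that claim a source inside our
# own 204.122.16.0/24 block, i.e. spoofed "internal" addresses.
iptables -A INPUT -i eth0 -s 204.122.16.0/24 -j DROP

# Hypothetical BIND 9 named.conf fragment (limit value assumed):
# throttle repeated identical responses so a reflection flood aimed
# at one spoofed address is rate limited at the name server itself.
options {
    rate-limit {
        responses-per-second 5;
    };
};
```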

The puzzling aspect of this is that I have a firewall rule that SHOULD block all traffic arriving on an external interface with an internal source address. I was able to mitigate the attack by blackholing 204.122.16.248 at the name servers and rate limiting responses.]]>

<![CDATA[Announcements :: New Server – JuLinux.Yellow-Snow.Net :: Author Nanook]]> 2018-09-12T18:50:20-07:00 2018-09-12T18:50:20-07:00 http://www.eskimo.com/forums/viewtopic.php?f=4&t=279&p=591#p591
We have a new server available for your use, but this one is in the yellow-snow.net domain. The full server name is julinux.yellow-snow.net. If you use this server, e-mail you send will by default be from username@yellow-snow.net. E-mail to this address will also come to your INBOX.

This server runs a new Linux distribution called JULinux, though it is only barely a distinct distribution, as it is essentially the Mate spin of Ubuntu configured to look like Windows, with very nice artwork. The software is all 100% Ubuntu, so it has the stability, security, and currency of Ubuntu.

The KDE implementation is broken inasmuch as logout does not work, so I would ask that you avoid using KDE with x2go on this server until I can figure out how to get it fixed. All other x2go compatible window managers are working.

If you do not have an existing Mate configuration and connect to this machine with x2go using mate, it will look like Windows. If you do have an existing configuration then it will look the same as all the other Debian / Ubuntu derivatives.

We do not yet have a corresponding yellow-snow.net web appearance but this is in the works.]]>

<![CDATA[Announcements :: Mush on All Servers :: Author Nanook]]> 2018-09-12T18:51:24-07:00 2018-09-12T18:51:24-07:00 http://www.eskimo.com/forums/viewtopic.php?f=4&t=280&p=592#p592 <![CDATA[Announcements :: Investigating New Location for User Meetings :: Author Nanook]]> 2018-09-12T18:52:09-07:00 2018-09-12T18:52:09-07:00 http://www.eskimo.com/forums/viewtopic.php?f=4&t=281&p=593#p593
I am investigating a new location for our user meetings. This would be Amante Pizza on 123rd and Roosevelt Northeast in Seattle.

The pizza is excellent, they have spirits for those who care to imbibe, and they have a big screen TV that in theory can be connected to a computer, which we could use for presentations. Many people seem to want a more structured meeting, but trying to do presentations on paper doesn't work so well. Being able to fire up a computer with a live big screen would be a huge plus.

They do not know what inputs it has so I have to stop by and determine how to connect.

I had been to Amante before when they were on 196th, just off of 44th, in Lynnwood. That restaurant had good food and great decor but piss poor service. This restaurant has excellent food and excellent service but marginal decor. However, I think the room here is much better suited to our needs; it is completely walled off from the rest of the restaurant with glass walls, so our noise won't interfere with other diners and vice versa.]]>

<![CDATA[Announcements :: Hardware Maintenance Sept 17-18th 2018 :: Author Nanook]]> 2018-09-17T16:48:52-07:00 2018-09-17T16:48:52-07:00 http://www.eskimo.com/forums/viewtopic.php?f=4&t=284&p=596#p596 Posted on September 17, 2018

I will be taking various machines down tonight for about fifteen minutes each to install new NICs with non-Intel chip sets. From 4.15.0 forward, the Linux kernel has had a bug in the Intel e1000 driver that causes the cards to lock up when hardware offloading is used. Usually these lock-ups are transient, resulting in 2-3 second delays in data, but occasionally the cards will lock hard and require a drive to the co-lo facility to physically reset the machine.
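As an interim measure until a card is swapped, one common workaround for offload-related driver lock-ups is to disable the offload features on the affected interface. A hypothetical Debian-style configuration fragment (interface name assumed, and no claim this cures the underlying driver bug) might look like:

```
# /etc/network/interfaces fragment -- hypothetical interim workaround:
# turn off TCP segmentation, generic segmentation, and generic receive
# offload each time the interface comes up. Interface name is assumed.
iface eno1 inet dhcp
    post-up ethtool -K eno1 tso off gso off gro off
```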

Because the servers most affected are those carrying heavy traffic, the NFS server providing the home directories in particular, I will be replacing the NICs on all the NFS servers. This will affect virtually all of our services, but it will prevent long down times like we suffered Sunday morning from recurring.

I filed a bug report on this problem in April of this year. Canonical has offered me various kernels to try; many of them either did not boot at all or were extremely unstable. At this point I feel it’s more cost effective and less service affecting to just replace the hardware.]]>
<![CDATA[Announcements :: Re: Hardware Maintenance Sept 17-18th 2018 :: Reply by Nanook]]> 2018-09-18T04:42:58-07:00 2018-09-18T04:42:58-07:00 http://www.eskimo.com/forums/viewtopic.php?f=4&t=284&p=597#p597 Posted on September 18, 2018

I’ve replaced the Intel NICs with TP-Link NICs. The first machine took close to two hours because at first I was not able to get it to work. I finally chased it down to a lack of patience on my part; these cards take approximately one minute to initialize.

I was not able to use the old cards in a private network as I had planned because as soon as I configured one, localhost became bound to it and broke the other connection. I’m sure this is operator malfunction but I will need to do further research.]]>
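I can't be sure this matches the failure above, but a common way to add a second NIC on a private subnet without disturbing the primary interface is a static stanza with no gateway line, so the default route and existing bindings are left alone. A hypothetical Debian-style fragment (interface name and addresses assumed):

```
# /etc/network/interfaces fragment -- hypothetical second NIC on an
# RFC 1918 private subnet. Note there is deliberately no "gateway"
# line: the default route stays on the primary interface, so traffic
# to the outside world (and loopback behavior) is not disturbed.
auto eth1
iface eth1 inet static
    address 192.168.10.2
    netmask 255.255.255.0
```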
<![CDATA[Linux Shells :: Supported Desktop Environments :: Author Nanook]]> 2018-08-28T20:26:16-07:00 2018-08-28T20:26:16-07:00 http://www.eskimo.com/forums/viewtopic.php?f=13&t=271&p=581#p581
The exceptions are: eskimo.com, a 1995 vintage 32-bit Sparc server running SunOS 4.1.4, does not support remote graphical environments; it hasn't enough memory for anything but text applications via ssh. It is there for nostalgia purposes, to show where we've come from.

Scientific and Centos6 both do not support Mate.

Zorin does not support Gnome.

All other machines should support all of the above Desktops.]]>

<![CDATA[Linux Shells :: Servers :: Author Nanook]]> 2018-09-13T05:19:04-07:00 2018-09-13T05:19:04-07:00 http://www.eskimo.com/forums/viewtopic.php?f=13&t=282&p=594#p594
We have a variety of Linux servers and a SunOS 4.1.4 Sparc server available for your use. Of the Linux servers there are two basic lines: those derived from RedHat, such as Fedora, Centos, and Scientific Linux, and those derived from Debian, such as Debian, Ubuntu, and Mint. Ubuntu is the most current and feature rich.

All of these servers are available if you have a shell account here. You can use them to learn various versions, decide which to install, and as one place you can develop applications for all versions. We install a wide variety of development tools on all of our publicly available servers.

Centos6.Eskimo.Com
This server is running CentOS 6 Linux. This shell server brings a lot of capabilities to our customers: the option of using a graphical environment, either the KDE or Gnome Desktop; an integrated software development environment, Eclipse; Office Productivity Software (both OpenOffice and LibreOffice); and a huge assortment of software, graphics, and text development tools.

Centos7.Eskimo.Com
This is a recently added shell server based upon the newly released Centos7 Linux distribution (based upon Red Hat Enterprise Linux 7). To connect to this one graphically, you’ll need to use x2go and select the Mate Desktop. The native Gnome3 Desktop does not work with x2go, but Mate, which is a fork of Gnome2, does. There are no games on this machine, but if you are interested in developing for or learning the most recent version of Centos, or the RedHat Enterprise Linux upon which it is based, this server will provide these capabilities. We have all of the development utilities, headers, libraries, and tools that are available for Centos7. This includes tools for C, C++, Java, Go, Erlang, Perl, Ruby, PHP, F77, and shells.

Debian.Eskimo.Com
This server is running Debian Stretch. It has a rich assortment of software installed, including hundreds of games, a huge variety of Office Productivity software, many Educational and Scientific applications, a variety of Integrated Development Environments, and many programming languages. All of the online documentation available for these applications is also loaded.

Eskimo.Com
This is our oldest shell server. It is an SS-10 running SunOS 4.1.4. It has aftermarket 125MHz CPUs instead of the stock 40MHz CPUs provided by Sun. It is equipped with a whopping 512MB of RAM. The only way you can access a desktop remotely is to manually start openwin redirected to the display on your computer, and even then the graphical tools available are few. It is most suited to the command line applications that were in vogue in the 1990 time frame. This machine has a bunch of old BSD games like Rogue, Hack, Adventure, Trek, and Phantasia, and a lot of early utilities like the Zoo archiver. It has both gcc and a K&R C compiler, cc. There are a variety of shell based mail programs like mutt, elm, pine, and BSD mail.

Fedora.Eskimo.Com
This server runs the current version of Fedora. It can be accessed graphically using x2go and the Mate desktop. It has the most applications of any of the Redhat derived servers. I install just about everything I can get to work on this machine so it has a huge variety of applications installed.

Julinux.Yellow-Snow.Net
This server is our first server in yellow-snow.net. E-mail sent from this server will be from user@yellow-snow.net. This server is basically the ubuntu-mate spin with very nice new artwork, configured to look like Windows. It is best to use this server via x2go with the Mate desktop. If you have an existing Mate configuration, it will look like Ubuntu-Mate except that the icons and other artwork have changed. If not, it will have the layout of Windows.

Mint.Eskimo.Com
This is another recent addition to our shell servers. It is running Mint, with the Mate Desktop installed for graphical access. This is necessary because Unity and other compositing Desktops are incompatible with x2go and work very roughly even with VNC. If you like games, this is a good server to use, as there are many installed. Mint is a derivative of Ubuntu, which in turn is derived from Debian, and this server thus follows the structure and layout of Debian and Ubuntu.

MxLinux.Eskimo.Com
MxLinux is an excellent choice for a computer with limited resources. Before the overhead of a Desktop is considered, MxLinux uses 200MB less RAM out of the box than Ubuntu does. If you’re not a fan of Poettering and systemd, you’ll like this OS, as it still uses a System-V init with systemd shims to allow packages requiring systemd to function. The default Desktop is LXDE, which is also small and efficient. But if you like eye candy you can install any Desktop you like, and as with the other servers I will install all that I can get to work on this OS.

OpenSuse.Eskimo.Com
OpenSuse is unique in its approach to software installation and configuration in that it handles both in one utility, Yast. Yast is also unique in being the only Linux software installer that resolves potential conflicts on the fly as you select packages, which is a major labor saver. Overall, OpenSuse is a very complete distribution.

Scientific.Eskimo.Com
This machine is running Scientific Linux 6.9. It has many additional libraries and applications from third party repositories or compiled from source. This server is very similar to Centos6 but brings some newer capabilities owing to better compatibility between the Scientific Linux and Enterprise Linux applications and libraries.

Scientific7.Eskimo.Com
Scientific7 is very much like Centos7. This machine can be accessed graphically using x2go. I recommend the Mate Desktop; Gnome will not work with x2go on this machine because it is Gnome3, a compositing Desktop that is incompatible with x2go.

Ubuntu.Eskimo.Com
This is Ubuntu 17.10. It can be accessed graphically using x2go and the Mate Desktop. The native Unity desktop is a compositing desktop which is incompatible with x2go. This machine is equipped with many applications, including a huge number of games, a rich assortment of development tools and programming languages, a huge assortment of Office productivity software, and many scientific, electronic, and educational tools.

Zorin.Eskimo.Com
Zorin is an Ubuntu derived operating system. It combines the eye candy of some other Ubuntu flavors, such as Mint, with the security awareness and up-to-date nature of Ubuntu, providing a really superb server environment. This machine is our best equipped server. It has the full Ubuntu Studio Suite, a very large assortment of development tools, language packs for all supported languages, many games, both the LibreOffice and Calligra Office Suites, and much more. X2go is supported on this machine.]]>

<![CDATA[Distributions :: My Personal Favorites :: Author Nanook]]> 2018-08-25T04:18:18-07:00 2018-08-25T04:18:18-07:00 http://www.eskimo.com/forums/viewtopic.php?f=30&t=265&p=574#p574
Eskimo started as a single-line BBS running on a Tandy Model III, later a Model IV. It was written in a modified version of BASIC that I wrote, which I called ComBASIC. ComBASIC was written in Z-80 assembly, and the machine was probably the first TRS-80 Model III ever to multi-task. It had two processes: the main process, which interpreted the modified BASIC, and a small process, given CPU time via the clock interrupt, which monitored modem carrier and either ended a session when carrier was lost or started one when it detected carrier. It also had a watchdog function that would reset the computer if the ComBASIC code hung. The BASIC code was a combination of my own BBS functions, which included upload/download (with multiple directories in spite of Tandy's flat file system), e-mail, and a game section, and Glen Gorman's Minibin, from which I primarily adapted the room-based message system to my language and modem driver. I also replaced the BASIC screen formatting routines with Z-80 routines that did some trick things, including spinning up the floppy drives before the buffer was empty so there would not be a delay for the next read.

Then I moved to a Tandy 16B Xenix-based system, originally Microsoft Xenix and later SCO Xenix. This rapidly grew into a Tandy 6000, and then something Tandy never intended: a machine with 4MB of memory. The Tandy 6000 at the time maxed out at 1MB, but after I made the hardware mod and a friend (in Wisconsin, I think) disassembled the Xenix memory management routines, we had 4MB machines before Tandy did. Their machine also officially maxed out at 8 ports; mine had 11. Anyway, hardware was easier to hack back then, when you didn't have 26-layer PC boards and gigahertz speeds.

When that maxed out I bought a Sun 3/180 with 16 ports, added two more muxes for a total of 48 ports, which was all the machine could support, and then went to using external Ethernet-connected muxes for a total of 256 ports and modems. I went from the 3/180 to a 3/280 and then a 4/280. The 3/180 and 3/280 ran SunOS 3.5, but I had to upgrade to SunOS 4.1 to go to the 4/280, not an upgrade I liked: 3.5 was rock solid, whereas 4.1.x was finicky and crashed occasionally, ports would hang, and it was generally problematic. We still run one machine with 4.1.4 here (eskimo.com), so you can see what that was like. It crashes from time to time, but not frequently, as it is not used heavily.

Then I added a 4/330 to handle Usenet news because the volume was becoming too much for the old machine, and later 4/670MPs, which after a bit I upgraded to SS-10s. The SS-10s were much smaller and drew much less electricity, but otherwise used the same CPUs as the 4/670MPs. I had high-end Ross HyperSPARC CPUs in those, $20k apiece new. They were not cheap, but nothing else could touch them in terms of performance back then.

When Linux came out we played around with a home-built kernel, user land, and so on, but did not get very good performance: a 300 MHz Intel chip was outperformed by a 40 MHz SPARC (later upgraded to a 120 MHz SPARC). But over time Linux improved, and gradually I phased out the Suns (save for the old eskimo.com shell server and two Ultra2s that are RADIUS servers). The first Linux distros I used were Redhat 4.1 and 4.2; later I moved to 6.2. On the Intel machines, the first distro I used was CentOS6, and then in 2012 I moved most of the servers to Ubuntu 12.04 and upgraded with each release, so that at present they are on 18.04.

I run shell servers for customers with all of the post-Redhat-6.2 distros mentioned at the beginning of this article. They are all accessible via ssh, or, for full graphics and sound, via x2go, which tunnels everything over ssh so it is secure. Because I've run all these different distros I've had a lot of experience with them and formed some pretty solid opinions. I prefer Debian-based to Redhat-based distros for three reasons. First, Debian-based distros can be upgraded in place across major releases; Redhat forces you to re-install, which is not cool when you have a lot of customers depending on various apps. Second, Debian has about 5x as much software ported to it as Redhat does. Lastly, in the case of Ubuntu, it's FAR more up to date than Redhat: when the Meltdown exploit came out, Ubuntu was the first distro with a fix, Centos6 the last.
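
To illustrate that first point, an in-place upgrade on a Debian-family system is only a couple of commands. The sketch below assumes Ubuntu's release-upgrade tooling; it is an illustration, not a transcript of my actual procedure, and with DRY_RUN left at its default of 1 it only prints each step instead of running it:

```shell
# Sketch of an in-place Ubuntu release upgrade, the Debian-family advantage
# described above.  DRY_RUN=1 (the default) only prints each step.
DRY_RUN=${DRY_RUN:-1}
run() { echo "+ $*"; [ "$DRY_RUN" = 1 ] || "$@"; }

run apt update              # refresh the package lists
run apt full-upgrade -y     # bring the current release fully up to date
run do-release-upgrade      # step the system to the next release in place
```

Run with DRY_RUN=0 as root (and a good backup) to actually perform the upgrade.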

Right now I like Ubuntu and MX Linux the best, based upon functionality and aesthetics, but I have not taken MX Linux across a major upgrade yet, so I am not using it for server infrastructure until I know it will behave in that event.]]>
<![CDATA[How To's. Tips & Tricks :: Installing Linux on a UEFI boot system :: Author Nanook]]> 2018-08-22T06:17:33-07:00 2018-08-22T06:17:33-07:00 http://www.eskimo.com/forums/viewtopic.php?f=34&t=256&p=565#p565
With UEFI boot, a GPT partition table is required. This form of partition table has many advantages. GPT can support drive sizes up to 2^64 logical blocks in length, and it supports a large number of primary partitions (128 with the default partition table size), though Linux without kernel modifications limits this to 16 partitions for most devices (RAID devices and some others can support more).

Here is my recommendation for installing Linux on a drive that will be given a GPT partition table. Most Linux distributions these days include a live-boot ISO that can both boot the system into Linux and install from one DVD. From your existing operating system, download a copy of the live boot/install ISO and burn it to a DVD. On Macs you can use the same software that burns Mac .img files; an .img is Mac's name for an .iso, and the software will work fine with an ISO.

After you have burned the DVD, back up any data on the hard drive you wish to save, as you will be creating a new partition table and wiping the old one.

Boot off of the DVD. When it comes up, do not immediately install Linux; first fire up gparted (it is present in most distributions, but if not, install it).

Start by creating a new partition table, choose "gpt" as the partition table type.

Next create a 1MB partition with the data type set to "cleared". After the partition is created, use the modify-flags function on the partition to set the bios_grub flag.

Next create a 300MB partition for the EFI system partition (you can get away with 250MB or even smaller if your firmware is small and you have an old 512-byte physical-block disk). Select FAT32 for the file system on this partition.

Next create a 512MB to 1GB partition with an ext4 file system. This will be mounted on /boot.

Next create a swap partition with the file system type set to "linux-swap". This should be 2x your physical memory. If you do not create one, Linux will create a swap file, but a partition keeps swap in one contiguous location for better performance.

If this is a single-user workstation, at this point I recommend creating a final partition with a file system type of ext4; its mount point will be /.

If this is a multi-user machine or server, then I suggest making the root partition around 40GB, adding a separate /var partition for logging if it is a large server, and giving the remainder of the space to a /home partition, all of these being ext4 file systems.

Exit gparted.
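
The gparted steps above can also be sketched from a shell with sgdisk. In the sketch below, /dev/sdX and the sizes are placeholders, and every command is echoed rather than executed so it is safe to study as-is; drop the echo prefixes and substitute real values to apply it:

```shell
# Command-line sketch of the partition layout above, using sgdisk.
# /dev/sdX is a placeholder; everything is echoed rather than executed.
DISK=/dev/sdX

echo sgdisk -o "$DISK"                       # fresh GPT partition table
echo sgdisk -n 1:0:+1M   -t 1:ef02 "$DISK"   # 1MB BIOS-boot (bios_grub) area
echo sgdisk -n 2:0:+300M -t 2:ef00 "$DISK"   # EFI system partition
echo sgdisk -n 3:0:+1G   -t 3:8300 "$DISK"   # /boot
echo sgdisk -n 4:0:+16G  -t 4:8200 "$DISK"   # swap, sized here for 8GB RAM x2
echo sgdisk -n 5:0:0     -t 5:8300 "$DISK"   # / gets the rest of the disk
echo mkfs.fat -F32 "${DISK}2"                # FAT32 for the EFI partition
echo mkfs.ext4 "${DISK}3"                    # /boot
echo mkswap    "${DISK}4"
echo mkfs.ext4 "${DISK}5"                    # root
```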

Now, if you have a static IP assignment, this is the time to configure the interface with the parameters of your static IP; if you have a dynamic IP, Linux will default to DHCP to get the address. It is important that you set up networking and verify it functions (ping 8.8.8.8) before you start the install, so you can install third-party drivers and updates concurrently.
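
On current Ubuntu releases, which use netplan, a static configuration of the kind described might look like the following; the file name, interface name, and addresses are placeholders you would replace with your own:

```yaml
# /etc/netplan/01-static.yaml (example values throughout)
network:
  version: 2
  ethernets:
    enp3s0:
      addresses: [203.0.113.10/24]
      gateway4: 203.0.113.1
      nameservers:
        addresses: [8.8.8.8]
```

Apply it with `netplan apply`, then verify with the ping test above.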

Now you are ready to install Linux, so choose the Install option. It will offer several prefab partitioning schemes; ignore them and select "Something Else".

Use the change option on each partition. On the 1MB cleared partition, make sure the installer knows it is the bios_grub partition.

Next use change on the FAT32 partition and make sure it is set as the EFI System Partition.

Select the 512MB-1GB boot partition and set the file system type to ext4 and the mount point to /boot. It is not necessary to format it, because gparted already did that.

You can ignore the swap partition; Linux will see the file system type and use it appropriately automatically.

Next select the remaining partitions, set the file system type to ext4, and set the mount points as you desire.

Exit the partitioner. Next the installer will ask where to write the boot block; tell it the drive you just partitioned, e.g., /dev/sda.

It will then ask a series of questions such as keyboard layout, language, time zone, the name of your computer, an account login and password, and a root password, and then it will proceed to copy files and install. When it finishes, remove the DVD and reboot.

Play .. or work if you must.]]>
<![CDATA[How To's. Tips & Tricks :: Virtual Machine Performance Tricks :: Author Nanook]]> 2018-08-25T05:02:18-07:00 2018-08-25T05:02:18-07:00 http://www.eskimo.com/forums/viewtopic.php?f=34&t=266&p=575#p575
The simple rule of thumb in computing is that memory is faster than any other device. So if code can be executed from memory without first being read from disk, and if HTML, PHP, and all of that can be served from memory, it is going to be faster than even a flash memory device or drive.

So, ways to make that happen. First, lots of RAM: Linux will use any RAM it doesn't need for something else as disk cache. So whatever system board and CPU you choose, stuff the board completely full of as much RAM as will fit and can be addressed. Know that Intel sometimes lies about this: they will quote max RAM based upon the number of memory modules allowed and the maximum size of memory available at the time. If you instead look at the number of address lines the processor makes available externally and take 2 raised to that number, you will know how much memory the processor can actually address. Then it is a question of whether or not your motherboard brings all of those lines out to the CPU. Asus and Gigabyte are usually pretty good about this; the others, not so much.
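
To make the arithmetic concrete, here is the calculation for a hypothetical CPU with 36 external address lines (on Linux you can check the real number under "address sizes" in /proc/cpuinfo):

```shell
# 2 raised to the number of address lines gives the addressable memory.
# 36 bits is an assumed example; substitute your CPU's "address sizes" value.
bits=36
gib=$(( (1 << bits) / (1 << 30) ))   # bytes converted to GB
echo "$bits address lines -> $gib GB addressable"
```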

So some of Intel's processors that they say can take 32GB may take 64GB with larger DIMMs, and some they say take 64GB might take 128GB with larger DIMMs. Tom's Hardware is a good source for information about this, as someone there has tried just about anything conceivable and you can usually find out in advance whether something will work.

Now, to take full advantage of some of the suggestions I am going to offer, it is important to have redundant disk I/O and UPS power, else you may lose data, because what I recommend is using writeback caching. This allows a write call to return as soon as the data is in cache. If your machine were to lose power, or a drive were to fail at that point, data loss would result. Thus I do this and I also use RAID10. Get the largest drives you can afford, because the bigger the drive the higher the density, which means more data is transferred with each rotation. Get the highest spindle speed you can, and equip your machine with a butt-load of fans.

There is an advantage to RAID10 in Linux that most people are not aware of. During reads it will use all four drives at once, so reads are up to 4x as fast: in write mode it is striping and then writing in parallel on two sets of two drives, but on reads it stripes across all four. This assumes you are using the software RAID function in the Linux kernel.
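
A four-drive kernel-software-RAID10 array of the sort described can be sketched with mdadm; the device names below are placeholders, and the command is printed rather than executed so the sketch is harmless as-is:

```shell
# Hypothetical four-drive Linux software RAID10 (placeholder device names);
# printed rather than executed.  Remove the echo/quoting to run for real.
cmd="mdadm --create /dev/md0 --level=10 --raid-devices=4 \
/dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1"
echo "$cmd"
echo "cat /proc/mdstat    # watch the initial sync"
```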

Before I leave hardware, a word on overclocking. Modern Intel processors tend to be limited in speed by one factor, and that's heat. If you can get the heat out of the CPU fast, you can clock them ridiculously high. I run all my i7-6700K's at 4.4 GHz ALL CORES, and my i7-6850's at 4.3 GHz ALL CORES, and that is air cooled. I use a cooler rated to dissipate 220 watts, with two fans per cooler, and am very careful about applying an extremely thin layer of a very good quality silver-based thermal paste. Do NOT put a big glob in the middle of the CPU and then tighten the cooler down; that will result in a thick layer and poor heat conductivity. Instead take a credit card or something similar and spread it as thin as you can across the surface of the CPU, then bolt the cooler down. I could actually clock these machines faster, because under typical load they are damn near at room temperature, but I test with 2x as many copies as cores of an AVX version of mprime, which is about as bad as it gets for CPU heat, and I clock them accordingly so nothing my customers can do will overheat them or even get them hot enough to thermally throttle.

Now on the host machines, with the exception of the root file system, you can add the ordered journaling option to the fstab options, and that will basically give you writeback behavior as well. On the virtual machines, I've benchmarked both Xen and KVM/Qemu under Linux and found little difference in performance, but KVM/Qemu is simpler to set up, more flexible than Xen, and more secure. It can emulate a non-native CPU; Xen cannot. However, if you're working with ancient 32-bit hardware you'll need to go with Xen, as KVM no longer supports 32-bit. At one point I did have a 32-bit machine with 28GB of RAM running Xen and it mostly worked, but KVM is much better.

I use a tool called virt-manager, and under Ubuntu it is a much more complete implementation; Redhat's is not complete, with no support for non-native CPUs, for example. When you create a drive for a new virtual machine in virt-manager, you can either emulate real hardware or use virtio drivers if the guest operating system supports them. Most do, some don't; for some, like Windows, you have to obtain the virtio drivers separately, but I wouldn't use Windows for any Internet-facing anything. Ubuntu and CentOS will both support virtio drivers. These are more efficient than emulating hardware. The drive has some performance options; choose writeback caching and the threads I/O driver. This will allow writes in your virtual machine to return as soon as the data is written to cache.
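
For reference, those GUI choices end up in the libvirt domain XML that virt-manager writes. A disk stanza along these lines is what virtio plus writeback plus the threads driver produces (the image path here is an example):

```xml
<!-- libvirt disk definition (fragment): virtio bus, writeback caching,
     threads I/O driver; the qcow2 path is an example -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback' io='threads'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```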

Now there are a number of places you can use tmpfs, an in-memory file system, to speed things up where temporary files are involved. For example, in /etc/fstab I use:

tmpfs /tmp tmpfs defaults,noatime,nosuid,nodev,mode=1777 0 0
tmpfs /var/tmp tmpfs defaults,noatime,nosuid,nodev,mode=1777 0 0
tmpfs /var/lib/php/sessions tmpfs defaults,noatime,nosuid,mode=1777 0 0

For applications that support memcached, use it; it is much better to store temporary data in cache. It can be used for numerous things in Apache and in many PHP applications such as phpBB3, WordPress, etc.
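
As one example, PHP can keep its session data in memcached instead of in files on disk. With the php-memcached extension installed, settings like these in php.ini do it (the server address is the usual local default; adjust to your setup):

```ini
; store PHP sessions in memcached rather than in files on disk
session.save_handler = memcached
session.save_path    = "127.0.0.1:11211"
```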

There are many tweaks you can use to keep things in memory; hopefully the above will help. The bottom line: try to serve everything out of RAM any time you can, and that will optimize performance.]]>
<![CDATA[How To's. Tips & Tricks :: JDK 10 on Ubuntu 18.04 :: Author Nanook]]> 2018-08-25T21:01:40-07:00 2018-08-25T21:01:40-07:00 http://www.eskimo.com/forums/viewtopic.php?f=34&t=268&p=577#p577 To install Oracle JDK 10 on Ubuntu 18.04 from the Linux Uprising PPA:

add-apt-repository ppa:linuxuprising/java
apt update
apt install oracle-java10-installer

This will uninstall other Java versions and make JDK 10 the default.]]>
<![CDATA[How To's. Tips & Tricks :: Don't Use UPPER CASE in hostnames. :: Author Nanook]]> 2018-08-27T15:25:05-07:00 2018-08-27T15:25:05-07:00 http://www.eskimo.com/forums/viewtopic.php?f=34&t=269&p=578#p578 <![CDATA[Future Trends :: Density - Agenda 21 superceded by Agenda 2030 :: Author Nanook]]> 2018-08-22T17:00:29-07:00 2018-08-22T17:00:29-07:00 http://www.eskimo.com/forums/viewtopic.php?f=19&t=258&p=567#p567
Shoreline exempts developments and redevelopments of 4 units or more from property taxes to encourage this kind of crap and to line the pockets of developers at the expense of the rest of us, who are seeing increased taxes to cover what the property developers should be paying. By increasing density they put a strain on all of the city's resources, yet those of us in single-family dwellings are left picking up the bill. The obvious intent is to force us out so they can bulldoze our homes and build more high-density apartments.]]>