Author Archives: Nanook

Physics Website

I really want to understand the aspects of relativity and quantum mechanics that don’t totally make sense to me, but so far I just can’t wrap my mind around them.

I did stumble across an interesting website that attempts to explain some of these complex physics concepts in an easy-to-understand way, and I thought it might be of interest to others.

The website is called Physics 2000 and it was created by the University of Colorado.

General Relativity Quandary

I’m just having a hard time getting my mind around some science concepts today. Maybe it’s my new drugs (methylphenidate, generic Ritalin), but things I used to have only mild discomfort trying to grasp are now screaming out, “Help, I’m not making sense!”

Yeah, I know what Einstein said about “common sense” being the things drilled into your head before eighteen, and how, if we had spent the first eighteen years of our lives flying about the universe in near-light-velocity spacecraft, common sense might actually make sense.

Still, any time something seems paradoxical or contradictory, it says to me that either my understanding of the model is incomplete or the model itself is flawed. So I post these things hoping someone who thinks they understand it can explain it to me.

Here is my current general relativity quandary… As I understand general relativity, the underlying principle is that all frames of reference are equally valid.

The Hafele-Keating experiment put atomic clocks on commercial airliners and compared them to clocks on the ground at the US Naval Observatory. I first heard of it only vaguely: they flew one clock around the world while keeping another on the ground, and when the flown clock landed, the two disagreed. The way I heard it, the flown clock was slightly slower.

Now actually there are two issues here. There is gravitational time dilation, which I’m only a little uncomfortable with (but I’m still uncomfortable), and there is kinematic time dilation, which I’m a lot uncomfortable with.

The reason I am uncomfortable with the kinematic time dilation component is this: if all frames of reference are equally valid, then it is equally valid to say the plane went up and circled around the Earth as it is to say the plane went up and the Earth circled underneath it before it came back down. In other words, what moved relative to what?

The clocks are moving relative to each other, so I don’t see how one can gain and one can lose time relative to the other. It seems terribly arbitrary deciding which clock should run slower, because each should appear to run slower from the other’s point of view. But once they’re back together and stationary, it seems to me that ought to cancel and they should agree, but of course they don’t.

Now, the gravitational time dilation at least I can see in terms of the reference points: one clock sits deeper in a gravitational well than the other, and time flows at different rates within that well.

But the kinematic time dilation I’m having problems grasping, especially since it matters whether the flight is east-west or west-east. Yeah, I know it’s a different velocity, but it’s a relative velocity either way. Why should one frame of reference be favored?
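
For what it’s worth, the standard textbook analysis does the bookkeeping in the Earth-centered, non-rotating frame, which (unlike the rotating surface) is approximately inertial. In that frame the ground clock is itself already moving eastward with the Earth’s surface, so the eastward plane moves faster than the ground clock and the westward plane slower. Here is a rough back-of-the-envelope sketch; the flight numbers are my own assumptions (an equatorial route at constant speed and altitude), not the real 1971 flight profiles:

```python
import math

# Rough Hafele-Keating estimate. Velocities are taken in the
# non-rotating Earth-centered frame, which is why east vs. west
# differ: the ground clock already moves eastward with the surface.
c = 299_792_458.0            # speed of light, m/s
g = 9.81                     # surface gravity, m/s^2
R = 6.371e6                  # Earth's radius, m
omega = 7.292e-5             # Earth's rotation rate, rad/s

v_ground = omega * R         # ~465 m/s eastward at the equator
v_air = 230.0                # assumed aircraft ground speed, m/s
h = 9_000.0                  # assumed cruise altitude, m
T = 2 * math.pi * R / v_air  # rough time to circle the Earth, s

for name, v_plane in (("eastward", v_ground + v_air),
                      ("westward", v_ground - v_air)):
    kinematic = -(v_plane**2 - v_ground**2) / (2 * c**2)  # vs. ground clock
    gravitational = g * h / c**2                          # higher clock runs fast
    print(f"{name}: {(kinematic + gravitational) * T * 1e9:+.0f} ns")
```

This at least reproduces the asymmetry (the eastward clock loses time, the westward clock gains), even if it doesn’t make the “all frames are equally valid” business feel any less slippery.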

Quantum Why

Quantum phenomena are disconcerting to think about because they are so counterintuitive to our everyday way of thinking about the world around us.

I have a background in radio. I obtained my first-class radiotelephone operator’s license in my junior year of high school. I operated pirate radio stations during junior high and high school, and I participated in the radio station we had at my high school.

As a result of my background in radio, I am familiar with electromagnetic radiation, antennas, polarization, and the geometric relationship between the electric field, the magnetic field, and the direction of propagation.

At radio frequencies the packet, or particle, nature of light does not make its presence felt to any significant degree. Quite different from, say, gamma wavelengths, where the particle nature of light seems to predominate.

We generate these radio waves with “electronics.” And just what are electronics? Devices that utilize and manipulate the motion of electrons.

It is the motion of electrons, charge carriers, in a material (a metal conductor of some sort) that creates a magnetic field. The oscillation of those electrons back and forth creates an oscillating magnetic field, and the moving charges in the conductor also set up an electric field; between the two fields, both radiating away from the conductor (the antenna), we have electromagnetic radiation.

But those moving electrons in a conductor are valence electrons that are relatively free to move. What about electrons bound more tightly in orbits? They are weird.

If moving, oscillating free electrons in a conductor generate a radio wave, then why don’t electrons orbiting an atom create electromagnetic radiation? They are a charge moving in space, right?
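
Classically, this was exactly the crisis: an accelerating charge radiates, and by the Larmor formula an electron in a hydrogen-sized orbit should spiral into the nucleus almost instantly. A quick sketch of that classical prediction, using textbook constants (my own illustration):

```python
import math

# Classical picture: the orbiting electron accelerates, so by the
# Larmor formula P = e^2 * a^2 / (6*pi*eps0*c^3) it radiates and
# should quickly lose its orbital energy.
e = 1.602e-19     # elementary charge, C
m = 9.109e-31     # electron mass, kg
eps0 = 8.854e-12  # vacuum permittivity, F/m
c = 2.998e8       # speed of light, m/s
r = 5.29e-11      # Bohr radius, m

a = e**2 / (4 * math.pi * eps0 * m * r**2)     # Coulomb acceleration, m/s^2
P = e**2 * a**2 / (6 * math.pi * eps0 * c**3)  # Larmor radiated power, W
KE = e**2 / (8 * math.pi * eps0 * r)           # orbital kinetic energy, J

print(f"radiated power ~ {P:.1e} W")
print(f"rough time to radiate away its kinetic energy ~ {KE / P:.0e} s")
```

Tens of picoseconds. Classical electromagnetism says atoms shouldn’t exist, which is the problem Bohr’s quantization rule was invented to patch.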

Niels Bohr addressed this issue nearly a century ago. All matter has a wave nature to it, including electrons. Electrons, like photons and all other particles, can be made to interfere with themselves, demonstrating a wave nature.

The “orbit” of an electron around an atom has to be an integral multiple of its wavelength. This means the orbit cannot decay gradually, radiating electromagnetic energy away as it goes. So it is possible to have this moving charge orbiting an atom and not radiating, which seems to defy everything I had come to learn about electromagnetism.
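
As a sketch of Bohr’s condition: requiring the circumference to hold a whole number of de Broglie wavelengths, combined with the Coulomb attraction supplying the centripetal force, pins the orbit to discrete radii. This is standard Bohr-model algebra, and it reproduces the familiar 0.53 Å radius for n = 1:

```python
import math

# Bohr-model sketch: 2*pi*r = n * h/(m*v) (whole number of de Broglie
# wavelengths) plus m*v**2/r = k*e**2/r**2 (Coulomb force balance)
# together give the allowed radii r_n = n^2 * hbar^2 / (m*k*e^2).
h = 6.626e-34   # Planck constant, J*s
m = 9.109e-31   # electron mass, kg
e = 1.602e-19   # elementary charge, C
k = 8.988e9     # Coulomb constant, N*m^2/C^2

for n in (1, 2, 3):
    r = (n * h / (2 * math.pi))**2 / (m * k * e**2)  # allowed radius, m
    v = n * h / (2 * math.pi * m * r)                # orbital speed, m/s
    print(f"n={n}: r = {r:.3e} m, v = {v:.3e} m/s")
```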

This gets back to viewing the electron as a particle. If it is viewed instead as a standing wave, then the charge really isn’t moving; it’s just a standing wave distributed about the atom’s nucleus.

At the same time, though, it does act like a particle: the angular momentum and the energy associated with it are what you would expect for a particle orbiting the nucleus, at the velocity implied by the distances, the masses of the particles, and the electrostatic attraction between the nucleus and the electron.

The nature of the wave is odd too: it is the probability of the particle being in any one place. So if you send electrons through two slits and let them strike a fluorescent target, you will see a wave interference pattern, which manifests as the probability of an electron striking the target at a given location.
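
A toy version of that pattern: add the two path amplitudes and square the sum to get a probability. The slit spacing, wavelength, and screen distance below are arbitrary assumptions for illustration, and the single-slit envelope is ignored:

```python
import math

# Two-slit probability pattern: P(x) ~ cos^2(pi * d * x / (lambda * L))
# in the far-field approximation, from squaring the summed amplitudes.
wavelength = 1e-10  # assumed electron de Broglie wavelength, m
d = 1e-7            # assumed slit spacing, m
L = 0.5             # assumed slit-to-screen distance, m

for i in range(-5, 6):
    x = i * 2.5e-4                                    # screen position, m
    phase = 2 * math.pi * d * x / (wavelength * L)    # path-difference phase
    p = math.cos(phase / 2)**2                        # relative probability
    print(f"x = {x:+.2e} m -> P = {p:.2f}")
```

The bright and dark fringes alternate even though each electron arrives as a single dot; the wave shows up only in the statistics.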

It all seems so odd. I can’t help but think of Einstein’s statement, “God does not play dice,” and yet it would seem at times that God not only plays dice but is obsessed with them.

Retro-Causality

Professor John G. Cramer at the University of Washington had planned an experiment in November of 2006 intended to test retro-causality, the ability of something in the future to affect the past.

His experiment was designed to use quantum-entangled photons to send information back in time, granted only 80 microseconds or so, but it was the proof of principle that was important.

I have tried to find results on the web but have been unsuccessful. If anyone knows anything about this experiment and what the results were, or whether the men in black stepped in and confiscated the whole thing, please let me know by leaving a comment here.

I am very interested in the results. I know there is some way for information to get from the future back to the past because of my own experiences with predictive dreams.

I’ve also heard of a dream study intended to determine how much information from the previous day gets incorporated into dreams. In that study, researchers were surprised to find that, in addition to information from the previous day, information from the following day was also incorporated.

D-Wave Quantum Computer

D-Wave claims to have the first commercially viable quantum computer. If it really works as advertised, the world is in for tremendous disruption, because it would make it possible to almost instantly break the largest encryption keys used for the secure electronic transmission of financial data.

On the other hand, it would also make problems such as weather forecasting and protein-folding modeling far more tractable, so the potential for new medicines and cures for many ailments is the positive side of the coin. It would also make new encryption schemes possible.

Lithium Iron-Disulfide Cells Non-Rechargeable?

Those of you familiar with rechargeable lithium-ion batteries may wonder how these new lithium AA batteries work, given that common rechargeable lithium batteries have a nominal voltage of 3 volts or greater depending upon chemistry.

The rechargeable lithium cells are usually lithium-manganese. Lithium iron disulfide cells are nominally 1.5-volt cells, sold as a higher-capacity replacement for alkaline batteries. They are mostly primary (non-rechargeable) cells; there are some rechargeable versions, but I do not know whether any are in actual production.

There are experimental high-temperature versions of lithium iron disulfide cells being used in electric vehicles. These might be workable for commercial vehicles that are always operating, but keeping a battery at the required temperature (around 350 centigrade) while idle would be wasteful, and consumers aren’t going to wait for their car battery to warm up before they drive somewhere. There is at least one company with an ambient-temperature rechargeable lithium iron disulfide cell and patents pending.

I’ve seen numerous references on the web to secondary cells, but I have yet to find anyone actually selling a rechargeable lithium iron disulfide cell. The potential advantage of such a cell, in terms of power-to-weight ratio, would be substantial.

I’ve tried to find information on the web about exactly what makes this particular chemistry non-rechargeable, and what makes the rechargeable varieties rechargeable, and I’ve found nothing. So if you happen to know, please leave a comment. Any good references would be appreciated.

Moore’s Law and uProcessor Technology

For many years Moore’s law, which holds that the transistor density with the lowest cost per transistor on an integrated circuit doubles every two years, held true. In fact, for much of that time it doubled every 18 months. Note that this is the most economical density per transistor, not the maximum physically possible.
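
For a sense of scale (my own arithmetic, not from any source), compound doubling adds up quickly:

```python
# Density multiplier over a decade at each doubling period.
for months in (24, 18):
    doublings = 10 * 12 / months
    print(f"doubling every {months} months: ~{2 ** doublings:.0f}x per decade")
```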

Recently, Moore’s law has stalled. You may have noticed there hasn’t been a noticeable increase in CPU speeds for the last year or so. Manufacturers, unable to economically scale CPU speeds, have gone to multiple-core CPUs.

Intel has promised an 80-core CPU within five years. Parallel processing has its advantages and limitations. The human brain relies on massive parallel processing to overcome the speed limitations of individual neurons, and when you consider the huge amount of information the human brain processes, it does quite well.

However, not all tasks lend themselves to parallel processing, and those that do require special programming. The human brain gets around this problem by being largely self-programming; that is, neural nets self-train, essentially wiring themselves and adjusting the weightings of individual connections to produce the right results.

So far we haven’t really got this capability in silicon, except for some crude neural-network chips that attempt (poorly) to emulate neurons, and only a very small number of them at that.

For the foreseeable future, parallel-processing computers will need to be programmed specifically to use those parallel processors. Some tasks, like encryption-key breaking, weather prediction, graphics rendering, shaped-charge explosives design, and protein-folding problems, do lend themselves well to parallel computing. But many other tasks require an intermediate answer before the next step can proceed, and these do not parallelize well; they require a single execution unit with a faster clock to gain performance, as the sketch below illustrates.
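
This intuition is captured by Amdahl’s law: the serial fraction of a task caps the speedup no matter how many cores you throw at it. A minimal sketch (the fractions below are arbitrary assumptions):

```python
# Amdahl's law: speedup = 1 / (s + (1 - s) / N) for serial fraction s
# of the work running on N cores.
def speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for s in (0.01, 0.10, 0.50):    # fraction of the task that must run serially
    for n in (2, 8, 80):        # core counts, including the promised 80
        print(f"serial {s:.0%}, {n:2d} cores: {speedup(s, n):5.1f}x")
```

Even at a 1% serial fraction, 80 cores buy only about a 45x speedup, and at 50% serial you can never beat 2x regardless of core count.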

Heat has become the limiting factor in increasing CPU clock speeds. For years, making components smaller allowed faster speeds: power was dissipated primarily during switching, and smaller transistors switch with less energy, so each shrink bought a faster clock within the same heat budget.

As the size of the transistors was reduced, a point was reached where the gates became so small that the leakage current through the gate became the predominant source of heat, and shrinking components further no longer allowed faster operational speeds.

About 80% of the dissipated power comes from this gate leakage current in CPUs made with current 65-nanometer line-width technology. The gate insulator on these transistors is just five atoms thick.

Intel will be coming out with new CPUs later this year based upon 45-nanometer line-width technology. To address the problem of gate leakage, Intel will be substituting a hafnium-based compound for the current silicon dioxide gate insulator. In addition, they are replacing the polysilicon gate itself with a metal (they haven’t said which). Combined, these two changes will greatly reduce gate leakage, and thus power demands, and will allow faster speeds as a result of the reduced heat production.
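
To get a feel for why cutting leakage matters so much, here is a toy power budget built from the ~80% figure above. All numbers are my own assumptions for illustration, including the 10x leakage reduction; I haven’t seen an exact published figure:

```python
# Leakage is burned whether or not transistors switch, so cutting it
# frees heat budget that can (naively, at fixed voltage) go to clock speed.
P_total = 100.0           # assumed chip heat budget, W
P_leak = 0.80 * P_total   # static gate leakage at 65 nm (figure from the post)
P_dyn = P_total - P_leak  # dynamic switching power, roughly C * V**2 * f

P_leak_new = P_leak / 10  # assumed 10x leakage cut from high-k gates
dyn_budget = P_total - P_leak_new
print(f"dynamic budget: {P_dyn:.0f} W -> {dyn_budget:.0f} W "
      f"(~{dyn_budget / P_dyn:.1f}x clock headroom at the same total power)")
```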

Interestingly, IBM is also coming out with a hafnium-based gate insulating material for its transition to 45nm processors. It seems difficult to believe that Intel and IBM came up with the same solution independently; they announced the development separately on the same day, January 26, 2007, as competing advances. IBM, too, will be replacing the polysilicon gate with a metal compound.

Both Intel and IBM have working 45nm processor prototypes running Vista, Mac OS, and other operating systems to demonstrate the functionality of this technology.

AMD is taking a different approach, although it may ultimately incorporate hafnium-based gate insulators in later 45nm production. IBM and AMD are required to share some information as part of an earlier legal settlement.

AMD is experimenting with making the transistor gates out of nickel rather than silicon. This allows a thicker gate insulator to be used while still obtaining the same control over the current flowing through the transistor.

AMD is also experimenting with “strained silicon” gates: a certain number of germanium atoms are embedded within the silicon lattice to stretch it and force the atoms farther apart. This lets electrons move more freely, resulting in greater conductivity, higher drive currents, and thus faster speeds, up to 20-25% faster. This work is being done in conjunction with IBM.

AMD is also experimenting with adding a silicon-on-insulator layer that reduces leakage by eliminating stray carriers from the chip.

Lastly, AMD is looking into various multi-gate FETs that may have advantages over conventional single-gate IGFETs.

It appears that, although Moore’s law took a temporary break, it is back on track, at least for another decade or so. Ultimately these new technologies are expected to enable a move to 32nm and later 24nm line-width designs. Maybe then we can have accurate weather forecasts.

Abiotic Oil

I hate to harp on this subject, because I really feel that, for environmental reasons, we need to shift from hydrocarbon fuels to clean and sustainable sources of energy.

However, the oil companies are doing everything in their power to suppress that transition, keeping the price of oil artificially high through various means such as wars in oil-producing countries and pushing the “peak oil” hype.

Since the beginning of the oil industry, there has been debate over whether or not oil is of biological origin. Most geologists in the United States seem to be of the opinion that it is.

Anyone with any science knowledge knows that biological entities do not create or destroy elements, and that hydrocarbons are composed of hydrogen and carbon. All of the hydrocarbons that exist came from hydrogen and carbon that are primordial, whether or not they were cycled through biotic systems.

We also know from lab experiments that given carbon, water, and a few other ingredients present in the earth’s mantle, subjected to the temperatures and pressures present there, we get a mixture of hydrocarbons that agrees with what we pump out of the ground.

We know the raw ingredients are there. We know that, subject to the conditions present in the mantle, they combine into oil. So I’m hard pressed to understand the opposition to the abiotic oil theory, other than vested economic interests wanting to keep supplies limited and, by extension, prices high.

Another asinine argument I have heard is that the abiotic oil theory is based upon the fact that we see hydrocarbons on other planets, yet, the argument goes, this doesn’t seem to be the case for the inner rocky planets. Understandably this is the case with Mercury: it is so hot and so small that it would have had problems hanging onto its volatiles.

On Venus, while we may not be able to explore and drill for hydrocarbons, we do know that it has a CO2 atmosphere roughly 100x as dense as Earth’s. Given that Earth’s atmosphere presently contains around 380 parts per million of CO2, we’d need to burn something on the order of hundreds of thousands of times as much hydrocarbon as we have so far to put as much CO2 in our atmosphere as Venus has; in other words, there’s plenty of reason to believe there are plenty of hydrocarbons left in the Earth. In short, hydrocarbons aren’t the problem; the atmosphere is.
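
Rough arithmetic behind that ratio, using my own round numbers: Venus’s surface pressure is about 92 bar and its atmosphere about 96% CO2, versus roughly 380 ppm of CO2 in Earth’s 1 bar atmosphere.

```python
# Ratio of CO2 partial pressures, ignoring differences in gravity
# and total atmospheric mass between the two planets (round numbers).
venus_co2 = 92 * 0.96      # bar of CO2 on Venus
earth_co2 = 1.0 * 380e-6   # bar of CO2 on Earth
print(f"Venus holds ~{venus_co2 / earth_co2:,.0f}x Earth's atmospheric CO2")
```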

No doubt a portion of that carbon and hydrogen has been recycled through life forms, and that’s why we do find oil when we drill into old sedimentary deposits, but that is secondary.

The Russians have proved it: by drilling through granite and basaltic capstone, below any sedimentary deposits, they have struck large quantities of oil and become the world’s second-largest oil producer, and for a short while before the Yukos seizure, the world’s largest.

Now similar deposits have been tapped by the Russians in Vietnam (the White Tiger field), by the Chinese in North Korea, and by a small company in Utah here in the United States.

Now, I’ve got really mixed feelings about all of this. On the one hand, we keep burning hydrocarbons, polluting the atmosphere, and altering our climate. On the other hand, as long as oil remains so highly profitable, the oil companies are going to exert huge political influence and prevent alternative technologies from being implemented; yet I fear that if the price of oil fell to realistic levels, people would also be reluctant to develop alternatives. It seems like a catch-22.

Zero Point Energy

I had an idea for producing thrust from zero point energy. Hopefully you are familiar with the Casimir effect. You place two parallel plates very close to each other:

–>||
You get a net attractive force between the plates because wavelengths of virtual energy that are longer than the spacing of the plates are absent between them but present outside, and thus push the plates together. The shorter the distance between the plates, the more wavelengths are excluded and the stronger the force pushing them together.

Let’s suppose instead you don’t make the plates parallel:

–>\/
This would put more force on the plates where they are close together than where they are farther apart. Given that the force is applied at an angle relative to the plates, shouldn’t that make them want to move in a net direction away from the point where they are closest together?

I’m wondering: if you etched a bunch of these very tiny structures into a chip (since most of the energy is contained in the shortest wavelengths), could you get a measurable thrust?
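
For a sense of the magnitudes involved, the standard ideal-plate Casimir pressure is P = π²ħc / (240 d⁴), which grows very steeply as the gap d shrinks. This is just the parallel-plate formula; it says nothing about whether the wedge geometry actually nets out to thrust:

```python
import math

# Ideal parallel-plate Casimir pressure vs. gap size.
hbar = 1.055e-34   # reduced Planck constant, J*s
c = 2.998e8        # speed of light, m/s

for d in (1e-6, 1e-7, 1e-8):   # plate gap, m
    P = math.pi**2 * hbar * c / (240 * d**4)   # attractive pressure, Pa
    print(f"d = {d:.0e} m: P = {P:.3g} Pa")
```

At a 1-micron gap the pressure is around a millipascal, but at 10 nanometers it is on the order of an atmosphere, which is why any such experiment would have to be etched at very small scales.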