The First Direct Measurements of the PVT Surface and Melting Curve Minimum of Solid Helium-Three


Ernst L. Wall

Senior Scientist 

Institute for Basic Research

Palm Harbor, FL


From the Thesis for the Master of Science Degree

The First Direct Measurements of Solid Helium-Three’s Melting Curve Minimum and its PVT Surface

Ernst L Wall

The University of Florida

MS Physics Degree awarded April, 1965

Sometime in my first year at Florida I was looking around for a thesis topic, and I approached Dr. Dwight Adams, a cryophysicist (specifically, a helium-three specialist), and asked what he had that I could do.  His immediate suggestion was: “See if you can devise a way to measure the compressibility and thermal expansion of solid helium-three, which will require pressure measurements in a cryostat.  No one has ever been able to figure out how to make these measurements inside a cryostat.  Here is the cryostat,” he said, pointing to a really interesting Erector set kludge in the middle of the lab.  The kludge had a piece of apparatus (a small Dewar) hanging down that could contain liquid helium-four, and that went inside a long, narrow Dewar of liquid nitrogen that surrounded the helium-four.

The Erector set supported a maze of glassware for containing and moving the helium-three.  The primary movement of the gas was carried out by a Toepler pump, a kludge whereby mercury is used to force the gases from one container to another.  This was something I had to have a go at.  Furthermore, to an old redneck farmer who had grown up around various pumps and their attendant pressure gauges, the solution was obvious.

I told him, “We can hang a capacitor plate on a bourdon gauge inside it and measure the movement caused by temperature and pressure changes by measuring the frequency change of an oscillator, which we can make using a tunnel diode.”

Helium-three had been around since the mid-1940s as a by-product of the wartime atomic bomb development.  Yet here, some 20 years later, no one had figured out how to make such a simple measurement.  One can only surmise that this was because electronics was not taught to physicists to any great extent, and they didn’t learn it on their own.

That was not the case with me.  In addition to being intimately familiar with the innards of pressure gauges (my grandfather was the local plumber, electrician, and water pump expert in addition to being a farmer), I had grown up as an electronics freak, having learned how vacuum tubes worked when I was in the fourth or fifth grade.  I was later able to “liberate” some electronics books from an uncle who had returned from World War II and gone to electronics school on the GI Bill.  Later, while in high school, I liberated an Air Force radar training manual from another uncle who had attended USAF radar school while enrolled in the Air National Guard.  As a result, by the time I graduated from high school, my knowledge of radio and radar circuitry from a theoretical point of view was, as far as I could judge, about as good as that of an Air Force radar technician.

Also, I had worked at Cape Canaveral for a year and a half before coming to the university, where I developed an interest in the various strain gauges used to evaluate aircraft under test, so this fell right in with my prior experience as an electronics freak.

In any case, after listening to my suggestion, he showed me the existing sample chamber that went down inside the cryostat.  It was about half an inch long and had an inside diameter of about one eighth of an inch, if memory serves me correctly.  That was a big “Oops!!” moment.

There simply wasn’t enough helium-three to fill a bourdon gauge, so instead of using one, it was necessary to hang a capacitor plate on the end of the existing sample chamber and hope that the chamber would stretch linearly with pressure.  I put some sketches together and we agreed on what needed to be done.  I then made some more precise drawings, which I took to the machine shop (I didn’t have the time to do the machining myself).  The drawing of the final product is below.  (For the younger generation: this was in the days long before computer-generated graphics, so every drawing had to be done by hand.)

Note that I was Adams’ first and, at the time (around the September 1964 time frame), only graduate student.  The lab was new, having been recently put together by Dr. Adams with the help of an undergraduate or two.  Shortly after I started work on the device, Dick Scribner joined the lab, and a short time later Mike Panczyk joined us.  Gerry Straty did not join the lab until I was finalizing my thesis in April 1965.  Our results were first published in the Bulletin of the American Physical Society 10, p. 519, from the April 1965 Washington meeting of the APS.


I state the above because Dr. Adams claimed in a magazine article that Gerry was his first graduate student and, further, gave him credit for inventing the capacitive strain gauge.  I am not sure what his motivation was, but it was totally unethical and dishonest.  Further, in a phone conversation I had with Straty some years later, I thought he stated that he had invented the capacitive strain gauge.  I was extremely surprised, but chose not to fight over it.  It was only after I found out that my invention had been placed in the Smithsonian as the “Straty-Adams” strain gauge, without my name on it, that I decided to set the record straight.

The final product is shown in the drawing below.




Strain gauge:  By measuring the capacitance of the two plates as a function of pressure at liquid helium temperature, it was possible to calibrate the gauge.  Note also that the sample chamber, and the body that holds the lower capacitor plate, were made of nylon so that the chamber could be used for NMR.  Metal could also be used, but with an insulator to isolate the lower capacitor plate.
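The principle can be sketched with a little modern code (purely illustrative; the actual 1964 calibration was done with known pressures and the capacitance bridge, and every number below is made up).  For an ideal parallel-plate capacitor, C = ε₀A/d, so if the plate gap changes linearly with pressure, 1/C is linear in pressure, and a straight-line fit at known calibration pressures lets unknown pressures be read from a measured capacitance:

```python
# Illustrative sketch of calibrating a parallel-plate capacitive strain
# gauge.  C = eps0 * A / d, and if the gap d shrinks linearly with
# pressure, 1/C is linear in P.  All numbers are hypothetical.

EPS0 = 8.854e-12          # vacuum permittivity, F/m
AREA = 5.0e-5             # assumed plate area, m^2 (hypothetical)

def capacitance(gap_m):
    """Ideal parallel-plate capacitance for a given plate gap."""
    return EPS0 * AREA / gap_m

# Hypothetical elastic response: gap shrinks linearly with pressure.
d0, k = 25e-6, 2.0e-13    # zero-pressure gap (m), compliance (m/Pa) -- assumed
pressures = [0.0, 1e6, 2e6, 3e6]                 # calibration pressures, Pa
caps = [capacitance(d0 - k * p) for p in pressures]

# Least-squares straight-line fit of 1/C against P, done by hand.
x, y = pressures, [1.0 / c for c in caps]
n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def pressure_from_capacitance(c):
    """Invert the calibration line: measured capacitance -> pressure."""
    return (1.0 / c - intercept) / slope

# A measured capacitance maps back to the pressure that produced it.
c_test = capacitance(d0 - k * 1.5e6)
print(round(pressure_from_capacitance(c_test) / 1e6, 3))  # → 1.5 (MPa)
```

Since 1/C is exactly linear in the gap for an ideal parallel-plate geometry, the fit recovers the test pressure; the real gauge needed an empirical calibration because the chamber’s stretch is only approximately linear.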

Later on, we put it on the cryostat, and I built an oscillator using a tunnel diode, with the capacitor plates down inside the cryostat.  This was about the most unstable piece of junk one could possibly imagine.  Whoever was advertising tunnel diodes for use in oscillators turned out to be a charlatan.  I then decided to build a vacuum tube oscillator, which I knew would be stable, but then Dr. Adams came into the lab with a General Radio precision capacitance bridge that he had “liberated” from another professor’s lab.  (Looted, perhaps?)

We hooked up the bridge to the capacitor, put some pressure in the chamber, and it easily detected small pressure changes.  We were now ready to go, and within a few weeks we had some preliminary measurements of the melting curve minimum, the compressibility, and the thermal expansion coefficients.

Some of the final measurements from my Master’s thesis are shown below, along with some theoretical points from several authors.





The compressibility is calculated from these curves.


Actually, the pressure vs. temperature curve is flatter than shown here.  The upward movement is due to slippage in the capillary as the sample nears the melting value.




In addition to the above, I had made some calculations of the specific heat of solid helium-three based on the measured compressibility and thermal expansion data, using methodology from Kittel’s and Dekker’s solid state physics books.  I found that it agreed quite well with experiment.
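The standard thermodynamic identity connecting the measured quantities to the heat capacities is Cp − Cv = TVα²/κ_T, where α is the volume thermal-expansion coefficient and κ_T the isothermal compressibility.  A sketch with placeholder numbers (none of these are the thesis values, just magnitudes of roughly the right order for a cryogenic solid):

```python
# Hedged illustration of the identity Cp - Cv = T * V * alpha^2 / kappa_T.
# alpha: volume thermal-expansion coefficient; kappa_T: isothermal
# compressibility.  All numerical values below are placeholders.

def cp_minus_cv(T, V_molar, alpha, kappa_T):
    """Difference of molar heat capacities, J/(mol K)."""
    return T * V_molar * alpha**2 / kappa_T

T = 1.0            # temperature, K (assumed)
V_molar = 24e-6    # molar volume, m^3/mol (assumed)
alpha = 1e-4       # volume expansion coefficient, 1/K (assumed)
kappa_T = 3e-8     # isothermal compressibility, 1/Pa (assumed)

print(cp_minus_cv(T, V_molar, alpha, kappa_T))  # J/(mol K)
```

With the measured α and κ_T in hand, this correction term and a Debye-type model for Cv are what let a specific heat be computed and compared with experiment.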

I also made some preliminary measurements of solid helium-4.


More on the Dishonest Awarding of Credit for the Capacitance Strain Gauge

About the time I finished my thesis, we published our results as E. D. Adams and E. L. Wall, “Thermal Expansion Coefficient and Compressibility of Solid Helium-Three,” Bulletin of the American Physical Society 10, p. 519, an oral presentation at the April 1965 Washington meeting of the APS.  He did not permit a description of the device in that presentation, for obvious reasons.  That was done much later, in an article by Dr. Adams and Gerry Straty.

At this point I was mentally exhausted, and that, coupled with what I now know was ADHD, made studying for the PhD qualifying exam very questionable.  Hence, I decided to take a year off and settle for a Masters degree for the time being, but with the intent to return in a year and complete a PhD.  I was, however, offered an assistantship to continue.  Dr. Adams was not happy that I was leaving.

Also, a month or so after the April 1965 APS meeting, Dr. Adams published a Letters article with his and my names on it that provided a pathetic little graph or two of thermal expansion, as I recall.  It did not include the melting curve minimum or, as I recall, the compressibility (Physical Review Letters 15, p. 549).

In short, it was a piece of junk that left out some of my better measurements.  I was horrified!  At least it did have my name on it.  He published an article later in August that was hardly better than the first, but with Gerry’s name on it as well as his and mine, which was fair enough if Gerry refined the measurements.  That was Gerry’s first publication on the matter, almost a year after I started working in the lab.

That public record makes it a little hard to credit Gerry with inventing the strain gauge.  If he had invented it, his name would have been on the first publications, not mine.

Neither the specific heat calculation nor the helium-4 measurements were included in my thesis, however, because Dr. Adams forbade me to include them, claiming they were not appropriate for some vague reason.  The whole thing sounded a little strange.  On a visit there a year or so later, I found he had put my calculations in Gerry Straty’s dissertation draft!  (No disrespect here towards Straty; I am sure he had no knowledge of my prior work.  I still consider him to be a friend.)

Note that there were some upward movements of the isochores near the melting curve edges, as shown in my results above.  Dr. Adams felt this was capillary slippage as the sample warmed up and approached the melting curve minimum.  He stated that he was going to tie the capillary to the heat sink so that as the sample warmed, the heat sink (which stayed cold) would keep a section of the capillary cold and solid.  I assume he implemented it himself or later had Gerry implement it.  In any case, this discussion took place not long after taking the measurements, and before Gerry got there.  It was purely Adams’ idea, and I make no claims to it.

Also, it should be noted that he may have changed the configuration of the gauge.  But if it is a capacitive gauge for use in a cryostat or another similar container, it is still my invention.

In any case, of the many fun projects I have had in my career, this was one of the most fun.  It was a real pleasure to work with Dr. Adams, and it was a great learning experience, even if his later conduct was dishonest.  Further, at the above-mentioned Washington APS meeting, I was contacted by a little superconducting magnet company, Cryonetics, in Burlington, MA.  I went up for an interview and got a great job.  Some pictures of my two magnets are shown below in Section 19.

This was in the greater Boston area, the medical capital of the world.  That came in handy a year later, when the company doctor hooked me up with a surgeon, Dr. Eugene Guralnick, then at Mt. Auburn Hospital in Cambridge.  He had started the melanoma program at Mass General Hospital.  That helped out immensely when that “innocent” little lump in my left groin turned out to be metastatic malignant melanoma with a 2% survival rate.  I was initially given 1.5 to 5 years to live in 1966, when I was 27 years old.  So much for starting work on a PhD that I might not live to complete.  Further, the Boston-Cambridge area was one big party town, so I chose to party with what time I had left.  At the same time, I could do superconducting magnet physics at my job with Cryonetics, as well as study other physics issues and other subjects in my spare time in the evenings and on weekends.

Needless to say, I am now 80 years old, and I am still above ground and vertical thanks to the sequence of events that got me to Dr. Guralnick.  In any case, my comments here are to set the historical record straight as to who invented the strain gauge, namely me!

I had maintained contact with Dr. Adams for a year or so after my cancer surgery, so he was aware that I had malignant melanoma and what my prognosis was.  I then had many things going on in my life, so I lost contact with him until the last year or so.  I speculate that he thought I had died and hence that I would never know about the credit for my work being given to someone else, or, if he thought there was any chance that I was alive, he might have had dreams of winning a Nobel prize and only wanted to split it two ways.  But even if I had died, it was still totally dishonest.

It is true that Adams and Straty produced a large volume of work.  However, they were able to do that only because of my invention.  Had I not invented it, there is no reason to believe they would suddenly have done so, because helium-three had been available for some 20 years, yet no one had figured out how to measure its characteristics in the solid state.  Neither of them was an electronics freak.  Further, Adams had worked with helium-three for a number of years at Duke and Stanford and had not figured out the obvious.  It is more likely that Duke University would have beaten them to the measurements.

In spite of this, Dick Scribner (who stayed on for his PhD and who now lives in FL) and I recently discussed getting together and taking Dr. Adams out to supper some night if I should visit FL.  Obviously, the above credit issue would not have been discussed, because I have no desire to fight over it.  Regrettably, we found out he was in a nursing home after having suffered a stroke, so we will not be able to take him out.  I wish him the best because of what I got out of working for him, as it provided me with opportunities in superconducting magnets, semiconductor device physics, and later, digital circuit design, scientific algorithm programming and signal processing, and work in the aerospace industry.  Part of that included a sequence of events that got me interested in electron physics, which led to the above particle models.  In writing this, I am only trying to set the historical record straight as to who invented the capacitive strain gauge.



*  It should be mentioned that, while we were working on the measurements, a former undergraduate student who had helped Dr. Adams build the lab came back to town for a visit.  He was, at the time, doing graduate work at Duke.  He described a capacitive strain gauge they were working on that used radial expansion rather than linear expansion as we were doing.  We said nothing about what we were doing, for obvious reasons.  But the implication was clear: had we not started when we did, Duke might have beaten us to the measurements.  Obviously, timing is everything!

Comments from the book Helium-3 and Helium-4, Wm. E. Keller, Plenum Press, 1969, are shown below.  My strain gauge seems to have done well!  However, my name seems to have gotten lost somewhere here.

I would like to note that I was Dr. Adams’ first graduate student.  Dick Scribner joined the lab a month or so after I got there, and later helped out with some of the measurements, because they turned out to be a lot of work.  This was because once the cryostat was cooled, you needed to keep moving: the liquid helium used to cool the apparatus was quite expensive, and we had no reclamation equipment, so the evaporated helium was lost.  Dick, Dr. Adams, and I worked long shifts, and I even slept on a cot in the lab for a short time.

Dick’s help, by the way, was most appreciated.

Also, Mike Panczyk joined the lab not long after Dick got there, and sometime later, David Heberlein joined the lab about the time I was finishing. 

Both Mike’s and Dick’s pictures are shown below.  I had no pictures of David or Gerry.

We were Dr. Adams’ first four graduate students, and Gerry Straty was the fifth.  I mention this because in an article that I have lost track of, Dr. Adams stated that Gerry was his first graduate student and implied that he was the inventor of the strain gauge (pressure transducer).  I am not sure what he was thinking, but providing Straty with my specific heat calculations and knowingly giving Straty credit for my invention (lying) was grossly dishonest.

This sensor made it to the Smithsonian as the “Straty-Adams” pressure transducer.  Although I invented the device, my name was left off.  See Appendix 1.

Also, some of the data was published in August 1965 with my name and Gerry’s name on it, showing that I was there.  This was some four months after my thesis was completed, and was in addition to the above Bulletin of the APS article with only mine and Adams’ names on it.  See Appendix 2.

It is worth noting that my strain gauge was used by Osheroff, Richardson, and Lee to discover superfluidity in liquid helium-3.  They were awarded the 1996 Nobel prize for this.


The University of FL Helium-Three Cryostat in the 1964 – 1965 Time Frame

  Note:  I located the following pictures in a pile of old photos on August 13, 2013, nearly 50 years after they were taken.  I had forgotten about their existence.

The author standing by the University of FL cryostat.  Was I not one handsome devil, or what?  Not only that, I had a full head of hair and a relatively small pot.  These statements are far, far from true now, almost 50 years later.  I am now a wrinkled-up old geezer, I don’t have much hair, and I have a rather prominent and generous pot (I believe “portly” is the term for my present appearance).  Photo taken by Dick Scribner, as I recall.



Still another picture of the author and the cryostat.  The sample chamber hung from the column of straight pipes (hard to see in this dark photo) on the left side of the Erector set.  A long nitrogen Dewar was pulled up around the column of pipes and the test chamber, and it was filled with liquid nitrogen during operation.


Details of the sample cooling system (based on a 50-year recall):  The large circular chamber just below the cross beam contained liquid helium-four, and when a vacuum was pulled on it, the helium-four boiled, thus cooling itself and the helium-three refrigerator hanging below.  When a vacuum was pulled on the helium-three chamber, the helium-three boiled, cooling itself and the sample chamber below it even more.  The small cylinder at the bottom is the solid helium sample chamber that contains the strain gauge.  It is suspended from the helium-three refrigerator, a dark, nearly invisible entity that terminates several curved tubes in the photo above.

In operation, the refrigerators and sample chamber were sealed in a cylindrical chamber that was bolted to the circular plate above the helium-four refrigerator.  This chamber was evacuated, thus insulating the refrigerators and sample chamber from the outside world.  That way, their only thermal contact with the outside world was the circular plate, so that after cooling down to the minimum temperature, the temperature of the sample chamber would drift up slowly at a fixed molar volume, and the pressure/temperature profile could be observed.  (Again, the complete details are a little hazy, as this is based on my memory after not having viewed this apparatus or these pictures, nor even thought about them, for nearly 50 years.)

Innards of the cryostat.

Dr. Adams is the man with the rubber hose, and I am working on the cryostat.  The other graduate student was Ed Garbaty; he was from another lab, as I recall.  The picture was likely taken by Dick Scribner.

Dr. Adams and graduate student Mike Panczyk operating a vacuum pumping station, probably evacuating the helium-three glassware.



Graduate student Dick Scribner working on the cryostat; Dr. Adams is in the background.  We were taking measurements at the time, and Dick’s help with the measurements was most appreciated.  David Heberlein probably joined not long after these pictures were taken, as I recall.

Note also that there are no pictures of Gerald Straty, because he had not yet joined the lab.  He joined about the time I left, some 6 – 8 months after I built the first version of the strain gauge.



Mack, the cryo tech, at the Collins cryostat.  The machine liquefied the helium for use by the entire physics department.  The square black object by the technician’s hand is the expansion engine, which is like a little steam engine that turns the flywheel (with the silver rim), the exhaust helium being cooled in the process.  The large shiny silver canister in the lower right is a helium Dewar for holding the liquid helium.

© Ernst L Wall 2007, All Rights Reserved



 Nuclear Physics at the University of GA





In the late 1950s, particle physics was relatively rare; most research projects involved low energy nuclear physics.  The 2 MeV, state-of-the-art Van de Graaff accelerator shown here was used for polarized n–p scattering experiments.  The recognizable figure is Dr. Lewis Rayburn, the physics department chairman.


I spent several quarters machining hardware and assisting in carrying out experiments with this machine.


The closest I ever came to getting a job involving nuclear physics was an interview at Oak Ridge National Laboratory.  The job would have involved the thrill and joy of babysitting a calutron that separated U-238 and U-235.  Wow!!  However, I never got an offer, thank goodness!!!




Design Techniques for Large Bore, Split Pair, Superconducting Magnets:  A Breakthrough!


The Fort Belvoir Magnet.  The Lucite rod shows the path that light would take going through the high homogeneity center field.  Room temperature access to the center of the magnet was via a “finger Dewar” that went through the center bore of the magnet.  This was my second magnet.



After I finished my Masters thesis, I decided to take a sabbatical to work at Cryonetics, a cryogenics company in Burlington, MA, with the intent of coming back to FL for a PhD later.  The president of the company was Richard Morse, a physicist who had founded National Research Corp. and was the inventor of Minute Maid orange juice.  The VP was Dr. Conrad Rauch, to whom I reported.


When I arrived there, the standard technique for winding superconducting magnets was a method designed by Laverick at Argonne National Laboratory.  It specified a layer of stainless steel screen between layers of superconducting wire, which allowed the liquid helium to flow in and cool the wire.  That was the state-of-the-art method universally used at the time.


However, the day before I arrived, Cryonetics’ first magnet, a 3 inch bore, 50 kilogauss magnet, had been tested.  It went from the superconducting state to normal at about 25 kilogauss and ripped itself apart in a cloud of helium.  I.e., it exploded.


Obviously, Laverick’s method had a problem.  Worse, a split pair, i.e., two magnets with a gap between them, was known to exacerbate the problems with reaching high fields.  That, by the way, is not meant in any way to disparage the management at Cryonetics, who were obviously an extremely capable bunch.  Had I been designing that magnet for the first time, I would have used Laverick’s method just as they did, because that was the standard industrial methodology.


In any case, the problem was assigned to me.  That put me in a bit of a pickle, because I knew nothing about superconductivity or superconducting magnet design.  Also, there was not enough money in the till to waste on useless experiments, because superconducting wire and liquid helium were very expensive.  It had to work the next time, or else.


Hence, I stopped all work on the project and discussed the problem with Dr. Rauch.  He theorized that the problem was caused by “flux jumping,” wherein a localized high magnetic field caused the superconducting wire (niobium-zirconium) to momentarily go normal; the localized field would then spread out, allowing the wire to go back to the superconducting state.  At a high enough current, the I²R heating could be so extreme as to drive the wire into thermal runaway rather than letting it return to the superconducting state.


After thinking about it, the solution seemed obvious.  I had the magnet rewound, but with a layer of OFHC copper sheeting between each layer of wire.  The copper sheeting was shorted back on itself, the overlapping ends joined with low temperature solder, so as to form a conducting copper sheet between each layer of wire.  This way, any localized flux jump would generate a counter current in the copper that would damp out the jump.  Winding it was a very laborious job, carried out by our excellent cryotech, Guy Petagna.  It should be mentioned, by the way, that Guy was one of the wittiest, funniest people I ever worked with.
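The damping can be put in rough numbers: the shorted copper layer acts as a one-turn secondary with inductance L and resistance R, so any induced eddy current decays with time constant τ = L/R.  If τ is long compared with the sub-millisecond flux jump itself, the counter current holds the flux steady while the jump dies out.  A crude order-of-magnitude sketch (every dimension and material value below is an assumption for illustration, not the original design data):

```python
# Order-of-magnitude estimate of the eddy-current decay time constant
# tau = L / R for a shorted copper sheet acting as a one-turn secondary.
# All dimensions and material properties are assumed, not historical.

import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m

rho_cu_4k = 1.7e-10        # OFHC copper resistivity near 4 K, ohm*m (assumed RRR ~ 100)
r = 0.04                   # radius of the shorted turn, m (assumed)
width = 0.05               # sheet width along the bore axis, m (assumed)
thick = 2e-4               # sheet thickness, m (assumed)

# Resistance of the shorted turn: one circumference over the sheet cross-section.
R = rho_cu_4k * (2 * math.pi * r) / (width * thick)

# Single-turn inductance, treating the strip as a wire loop of effective
# radius a ~ width / 4 (a standard rough approximation).
a = width / 4
L = MU0 * r * (math.log(8 * r / a) - 2)

tau_ms = 1e3 * L / R
print(f"decay time constant ~ {tau_ms:.0f} ms")  # → decay time constant ~ 15 ms
```

Tens of milliseconds against a sub-millisecond disturbance is ample margin, which is consistent with the jumps being heard as damped thumps rather than triggering a quench.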


When the magnet was complete, we put it in a Dewar and charged it up.  It easily made the 50 kilogauss specified field.  We ran the current up until it quenched (went normal), but instead of exploding, it gave off what seemed like a very gentle sigh of relief as the liquid helium slowly boiled off.  We then wound the second magnet of the pair and put them together into a split pair, but in this configuration we could only get them up to 45 kilogauss before they went normal.


However, the customer was very content to accept it at that field.  Having room temperature access to its large bore via a center tunnel in the Dewar, it was to be used for beta decay studies by Dr. James Blue of NASA Lewis Research Center.  It had a variable spacing between the magnets, using a bellows-like Dewar section between the Dewar for the upper magnet and the Dewar for the lower magnet.

NASA Lewis magnet partially assembled.  The lower magnet is contained in the lower Dewar, and the magnet on the bench on the left will later be placed in a Dewar that goes on top.  The two holes in the center plate are where the cryo bellows connect the upper and lower Dewars.  The large center bore allows room temperature access to the sample.  The spacing between the Dewars could be varied, depending on sample size and desired magnetic field homogeneity.  This was my first magnet.


The second magnet, for Fort Belvoir, was next.  (It is shown at the top of this section.)  I overdesigned it in terms of the amount of wire so as to be sure of meeting the 50 kilogauss spec.  When we tested it, we heard metal tools slamming against the door of a nearby metal cabinet as they were attracted to it.  Then we noticed the bottom of the locked cabinet door begin to bend outward towards us.  However, we could only get it up to 65 kilogauss because of the current limits of our power supply.


We finally made it quench by shaking and kicking the Dewar, and like the first magnet, it quenched by giving a slow, gentle sigh of relief.


The wires, multi-stranded niobium-zirconium, were spliced together by spot welding.  Because the weld joints will go normal at a reduced field, we put the splices on top of the outer edge of the magnet and shielded them with thin, flat sheets of NbZr.


This latter magnet is the one shown above.  It was meant to be used for the study of the optical properties of materials in high magnetic fields; hence, the radial holes in the flat center plate could be used for passing light through the material.  We used commercial sapphire windows for passing light through the Dewar.  A “finger Dewar” allowed the room temperature sample to be placed in the center of the magnet for study.


In hindsight, I should have published the methodology because it was, to my knowledge, never done previously.  However, I had other issues to deal with at the time, including the fact that Cryonetics went under and was bought out by Magnion.


Also, in hindsight, aside from damping out the flux jumps (which you could hear as a thumping sound in the Dewar), it is likely that the thermal mass of the copper helped prevent thermal runaway, and also, it helped conduct the heat away from the hot spot towards the edge of the magnet.


When the second magnet was shipped, I was canned.  That seemed like a disaster because I had just gotten out of the hospital with a serious ailment (metastatic melanoma), but it indirectly led to my hearing about a job with Transitron, an early semiconductor house in Wakefield, MA.  I got the job, and I was now in a new field; I had 10 incredible years in signal processing before I got canned again.  I then taught myself digital design and built a single board computer, which I controlled with a teletype.  I later wound up in microcomputer design at the ITT Advanced Telecommunications Center, and later moved into embedded software and signal processing.  After ITT sold System 12, a 100-thousand-line telephone exchange, to the French company Alcatel, I became an embedded software contract engineer, as well as a signal processing and algorithm contractor.


The moral of this is: a physics degree can be an incredible ticket to incredibly fun jobs.  Also, sometimes getting canned is a case of a door being slammed in your face (even with health issues), only to have a much better door open before you down the hall.




Guy Petagna standing by a magnet test Dewar.  The partially assembled NASA Lewis magnet is on the hoist on the right and is covered with frost because it was just taken out of the Dewar after testing.  Note the cloud from the helium boiling off from the Dewar.


I had likely been testing a magnet here, and the helium is boiling off of the test Dewar. This was obviously a very expensive proposition as we were too small a company to own a helium reclamation system.





The Savannah Symphony

(Physics isn’t Absolutely Everything!)


I was the second bassoonist for the Savannah Symphony for two years, the first while I was a senior in high school, and the second during my first year of college at Ga. Southern University in my home town of Statesboro.  (It was then called Ga. Teacher’s College and boasted about 500 students.)  During my first year, I made enough money to buy my own G. H. Huller bassoon, an East German make.  A Heckel bassoon was far beyond my reach, as much as I would have loved to have had one.  In my second year, I made enough to pay for my first year of college, plus a little left over for my second year at the University of GA in Athens.  I was even a member of the Musicians Union, James C. Petrillo, President.

It almost tore my guts out when I had to resign from the Symphony to go to the University of Ga at Athens.  The conductor, Chauncy Kelly,  invited me to come back and audition anytime I wanted.  That pleased me mightily.

I had taken lessons from the bassoonist of the Longines Symphonette the summer before I got the job.  He was a graduate of Juilliard and was working in Savannah for the summer, and at the end of the summer he volunteered that he could get me a scholarship at Juilliard if I was interested.  I then had to choose to be a physicist and do music on the side, or be a musician and do physics on the side; the latter seemed quite improbable.  But for a pore ol’ redneck music freak, the possibility of going to a musicians’ mecca like Juilliard was an opportunity beyond belief!  In the end, however, I chose physics.  That proved to be the far wiser choice, as I did play with several orchestras in the evenings while working.  However, some health problems and an attendant long bit of surgery (for metastatic melanoma) that I had while at Cryonetics left me too debilitated to play regularly in the evenings.



I am the scrawny little bassoonist on the right, just in front of the first trombonist, Dana King.  Dana was my band director at Ga Teachers College.  He just loved to put the bell of his trombone right in back of my head and try to blow my ears forward.  Also, the slide from the trombone went over my shoulder, and I frequently got little drops of slobber from his leaky spit valve on my clothes.

I weighed about 115 pounds soaking wet at this time.  I was so skinny that if I walked outside with my shirt off, you could hear the wind whistling through my ribs.




A Derivation of the Magnetic Moments of the Electron and the Muon by Means of a Negative Mass Tachyon Model


In order to produce the results shown above without resorting to ad hoc methodologies, it is necessary to take a contrarian approach to the πμ and μe transitions.  Instead of saying that a pion “decays” into a muon and a muon “decays” into an electron, we take the approach that the pion captures a negative mass particle and becomes a lighter muon, and the muon in turn captures still another negative mass particle and becomes an electron.  (Note that for those who worry about neutrinos, not only do we acknowledge that they have been observed, we have an appropriate model that we discuss later.)  


Note that the dimensions of this model are precisely defined insofar as its spatial and velocity dimensions are concerned.  This is an issue that is likely to severely try the patience of any self-respecting quantum mechanic.  He would assume that any particle such as this must be described by a wave function and that its dimensions could not be precisely determined.  More will be said later about why this is not necessarily so within the context of this model, even if it is not otherwise obvious from the de Broglie wave model.


The critical issue here is why the electron’s charge was assumed to revolve at all, let alone revolve in an orbit having a  Compton wavelength as the circumference.  The reason was, quite simply, that the tachyonic model and the cutoff energy of the μe curve required the Compton wavelength as a dimension, as will be shown below.  But note that the tiny charge, in high energy collisions, will still appear to be tiny.  The fact that it is revolving will not cause it to appear larger in a scattering experiment than if it were not revolving.                  


It should be noted that the expression for the magnetic moment is the same as the orbital magnetic moment of the ground state orbit of the Bohr Hydrogen atom.  That was first derived by Niels Bohr around 1913.  Why these two different states of the electron produce the same magnetic moment is not clear at this time.





Figure 1.  This graph shows the experimental relative transition rates as a function of energy for the muon to electron and the rare, direct pion to electron conversions.  This graph is the heart of this model in that the magnetic moments of the electron and muon are obtained directly from the cutoff energies of the two curves. 

In fact, from the standpoint of this particular model, the above curves are the most important in all of particle physics. 

(While the V-A Model describes the μe curve quite well, we utilize the cutoff energies here in an entirely different manner.)


Also, note that the use of this curve resulted from a suggestion by Joel Schoen, a physicist who worked at the same company I did in the mid 1980s.  After looking at some rather primitive sketches of a model I had written, he commented, “Why don’t you simply do what everyone else does and find some appropriate energy levels?”  At first, that seemed logical on the one hand but absurd on the other because there were no tachyon energy levels available!  But after thinking about it for a few days, it began to dawn on me that this curve might contain something useful.  It was a good suggestion after all!  After applying it, my magnetic moment was off by only a factor of about 380!!  I sent those results to Walter Niblack, a former colleague, who promptly suggested that I use a negative mass balance condition (both the dynamic and static conditions) and that that would do it.  That proved to be the final critical element; it was promptly applied, and the results are shown below.


To this author’s knowledge there is no other derivation of the Bohr Magneton for the electron spin itself based on its internal structure.  That is, there is no other derivation that states that the electron has an internal structure as opposed to being a simple point particle.  There are some angular momentum (quantum mechanical) requirements that state that the magnetic moment is given by the Bohr magneton, but no derivation of a revolving particle with a finite orbital radius whose value is obtained from some basic, measured energy level.           


However, the fact that the electron’s magnetic moment had this numerical value was well known since the early 1920s.  This was based on the observations of the splitting of Hydrogen’s spectral lines (the fine structure) and on the Stern-Gerlach experiment.


What was not known at that time was exactly why it had this value.                  


Also, it should be noted, the magnetic moment as calculated here is somewhat smaller than observed experimentally.  To have the correct value, it must be multiplied by the gyromagnetic ratio, ge/2, where ge= 2.002319394367.  This value has been measured out to some 12 decimal places.  That is, the Bohr Magneton as calculated here is too small by about 1 part per thousand.               


We have no interest in pursuing a one ppt error at this time insofar as it would apply to this model because there are far more interesting things to pursue.  Some approaches taken earlier to add this correction are described in this author’s book, The Physics of Tachyons, which is listed below.


We have no further comment on the electron at this time.  However, we do note that the same methodology holds for the muon.



Hypothesis:  A muon converts to an electron by capturing a negative mass tachyon.  A pion converts to a muon by capturing a negative mass tachyon.  A pion converts directly to an electron by capturing two negative mass tachyons.                  



To begin the derivation of the characteristics of the electron, the masses of the muon's and electron's tachyons are obtained by subtracting the mass of the heavier particle from that of the lighter particle, i.e.,





Next, we will need to utilize half of these masses as binding energies. I.e., we have





The sum of these energies is



The rightmost curve, the direct πe conversion curve, is less well known and describes the relatively rare, direct conversion of a pion into an electron.  This event occurs about once in 10^4 pion conversions.


Next, examine Fig. 1, above. It is a composite of two particle conversion curves. The μe curve on the left is well known and is contained in most particle physics books.                   


It should be noted that accurate fits to the μe curve have been produced by the V-A theory, so we make no attempt to claim that the positive results of this model invalidate the Standard Model.  This model is simply a different approach.


The generally accepted assumption is that two neutrinos are produced by the decay of a muon into an electron, and the shape of the curve is determined by the relative angles of emission of the two neutrinos. That is to say, the curves are normally considered to be decay spectra.


Furthermore, neutrinos have been observed, and the residual energies of this model are 20 eV for the electron model, and 123 MeV for the muon model, more than enough to account for the generally estimated masses of the neutrinos.                  


The interpretation used here is that the reaction during the capture of a tachyon by a muon has a residual energy whose distribution is described by the  μe curve. However, if the reaction energy is greater than that of the binding energy of the electron's tachyon to the charged particle,  there will be no capture and hence, no electrons will be produced. The point at which this happens, 52.6 MeV, is the cutoff energy of the μe curve. This compares favorably with the energy of Eq. 4.                  


But having said that, the possibility of a neutrino carrying away part of the energy but leaving a tachyon is not precluded.  (See the neutrino model, below.)

Figure 2.  This shows an analog for the balance condition that is used to calculate the center of mass for the positive mass charged particle and the negative mass tachyon.  Note the use of two parallel strings to attach the weight and the balloon (the negative mass tachyon) to the shaft.  Probably the only particle model in which strings have proven to be useful! 


Dr. Walter K. Niblack is thanked for pointing out the above balance condition for a negative mass with a positive mass as well as the dynamic balance conditions of Figure 3, below.


The πμ capture, on the other hand, produces monoenergetic muons at an energy of 4.119 MeV, so that there is no cutoff energy. Therefore, another approach must be taken. So compare Eq. 5 with the 69.5 MeV cutoff energy of the πe curve. The double tachyon capture implies that the total binding energy of the muon's and electron's tachyons is half of the sum of their masses, and hence, the binding energy of the muon's tachyon is also half of its mass energy. Note, incidentally, that the difference in the two cutoff energies is 16.9 MeV, which is half the muon's tachyon's mass energy as given in Eq. 3.
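The cutoff-energy arithmetic above can be checked with a short script. This is only a sketch: the particle rest energies are standard measured values, not numbers taken from the text.

```python
# Sketch: arithmetic check of the binding energies quoted in the text.
# The particle rest energies (MeV) are standard measured values, not
# values taken from the text itself.
m_e  = 0.5110     # electron rest energy, MeV
m_mu = 105.6584   # muon rest energy, MeV
m_pi = 139.5704   # charged pion rest energy, MeV

# Tachyon masses (Eqs. 2-3): lighter particle minus heavier, hence negative.
M_Te  = m_e  - m_mu   # electron's tachyon, ~ -105.15 MeV
M_Tmu = m_mu - m_pi   # muon's tachyon,     ~ -33.91 MeV

# Binding energies: half the tachyon mass energies (magnitudes).
E_Te  = abs(M_Te)  / 2.0   # ~52.57 MeV, cf. the 52.6 MeV mu-e cutoff
E_Tmu = abs(M_Tmu) / 2.0   # ~16.96 MeV, cf. the ~16.9 MeV cutoff difference

print(f"{E_Te:.2f}  {E_Tmu:.2f}  {E_Te + E_Tmu:.2f}")   # 52.57  16.96  69.53
```

The sum of the two binding energies reproduces the 69.5 MeV figure compared with the πe cutoff.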


Again, as in the case of the πe conversion, a neutrino is emitted.  But in any case, we have no state transition model as of this time that will give the energy balance between the neutrinos and the tachyons.


Because of its negative mass, a revolving tachyon will have an inwardly directed force, not an outwardly directed force. This inwardly directed force of the tachyon balances the outwardly directed force of the orbiting charged particle, thus maintaining the particle systems in tightly bound orbits. The balance conditions are similar to those of a helium balloon (a negative mass analog) on one end of a massless rod balanced by a less massive weight placed between the balloon and a pivot on the other end of the rod. Because of the negative mass, the center of mass of the system is at the pivot, and is thus external to the line connecting the charged particle's orbit and the tachyon. This is shown in Fig. 3.

Figure 3.  This shows the rather bizarre behavior of the electron’s revolving charge around the center of mass that is external to the line joining the charge q and the tachyon.  From the tachyon’s perspective, it revolves around the charge with an orbital circumference equal to its de Broglie wavelength, λTe.



Based on the above, in general, the magnitude of the binding energy, which is the same as the ground state energy, is given by




Considering the above, the de Broglie wavelength for the tachyon is given simply by




where h is Planck's constant, MT is the mass of the tachyon in grams, and ET is the energy of the tachyon. Using Eq. 6 for the energy in Eq. 7, we have





It could be argued that it is naive to apply this simple equation to tachyons and ignore relativity. But there is no experimental evidence one way or the other as to how they behave. Certainly it is no more naive than extending the Lorentz transformation to hyperluminal regions and concluding that tachyons have an imaginary mass, as has been the accepted practice. Therefore, we will work with what we have and see how the model develops.


If we assume a single de Broglie wavelength, λ, for the circumference of the tachyon's orbit around the charged particle, we may divide Eq. 8 by 2π. This gives us the tachyon's orbital radius, rλT, as it orbits the charged particle in the charged particle's frame of reference. That is,




Here, the subscript λT refers to the de Broglie wavelength of the tachyon.


While the original model used this concept, another way of looking at it is to consider that both the tachyon and the charged particle revolve around the common, external center of mass. The tachyon has some 207 de Broglie wavelengths in its orbit, which is, in this case, larger than that of the charged particle's orbit.


We will now explore the balance conditions for a negative mass particle that is coupled to a positive mass. This is illustrated in Fig. 3. For the electron, we define



For the muon,



The equation describing the balance of this system for the electron model is






where we used the fact that  .     

Using Eq. 2 ( for  ) in Eq. 12, we have that








The     terms cancel, so that Eq. 13 becomes, after a little rearrangement,




Dividing both sides of Eq. 14 by me, and then using Eq. 10, we obtain



Also, rewrite Eq. 2 using Eq. 10 to obtain


Using Eq. 9 for , Eq. 15 becomes



Using MTe as defined by Eq. 16, we eliminate (Re - 1) and MTe from Eq. 17 so that we have for the electron




 Using an identical approach for the muon model, the orbital radius of the muon's pion is




The magnetic moment of a current loop is, in general,



where I is the current in the loop, and A is its area. (Note that using μ for the magnetic moment is not to be confused with the subscript μ representing the muon.)


Current is, in general, given by the number of charges passing a point multiplied by the charge per particle. Also, recall that in the Gaussian system of units, the charge in statcoulombs divided by the speed of light is the unit of charge used to calculate the magnetic field. Hence, the current at a point caused by a single charged particle revolving about a center point is



where f is the frequency of the particle's rotation, and for a light speed particle is given by



where c is the velocity of the charged particle and rc is its orbital radius. Hence, the magnetic moment of a single, revolving charged particle is obtained from Eqs. 20, 21, and 22, as




where πrc² is used for the area, A, of the current loop of Eq. 20.  Eq. 23 then becomes



Using Eq. 18 in Eq. 24, the magnetic moment of the electron is

        .                                                                        (25)

Using Eq. 19 in Eq. 24, the magnetic moment for the muon is


                         .                                                                        (26)



These are the Bohr magnetons for the electron and muon respectively. These values for the magnetic moments agree with experiment to within 0.17 % for the electron and 0.12 % for the muon. No particular significance is attached to the plus and minus versions of the magnetic moments at this time.                  
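The chain of Eqs. 20 through 24 amounts to μ = (e/c)·f·A with f = c/(2πr) and A = πr², i.e. μ = er/2 in Gaussian units (ecr/2 in SI). As a sketch, using standard SI constants rather than values from the text, one can check that this reproduces the familiar Bohr magneton when r is the electron's reduced Compton wavelength:

```python
# Sketch: a single charge e circling at speed c on a loop of radius
# r = hbar/(m c) (the reduced Compton wavelength) has the magnetic moment
# of the Bohr magneton. SI units; constants are standard values, not
# taken from the text.
import math

e    = 1.602176634e-19   # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J s
c    = 2.99792458e8      # speed of light, m/s
m_e  = 9.1093837015e-31  # electron mass, kg

r = hbar / (m_e * c)           # reduced Compton wavelength, ~3.86e-13 m
f = c / (2 * math.pi * r)      # revolution frequency of a light-speed charge
I = e * f                      # current of one circling charge (Eq. 21)
A = math.pi * r**2             # loop area (Eq. 23)
mu = I * A                     # magnetic moment: mu = I A = e c r / 2

mu_bohr = e * hbar / (2 * m_e) # textbook Bohr magneton
print(mu, mu_bohr)             # both ~9.274e-24 J/T
```

Algebraically the two expressions are identical, since ecr/2 with r = ħ/(mc) collapses to eħ/(2m).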


( Yes, we acknowledge that this is one bizarre methodology!! However, everything on this web page was ultimately derived from these results.  )


But to take it a step further, by requiring that the electron's charged particle have an integral number of wavelengths, the accuracy of the electron's magnetic moment is improved to within 39 parts per million. That is, the gyromagnetic ratio is g/2 = 1.0011208. (QED does better than this, but with hundreds of workers and almost 60 years, this should be the normal course of events.)


It should be noted, for contrast, that the self-energy calculation for the electron provides the well known classical electron radius of 2.8179 fm, which is far smaller than that of the electron as given above. However, it is less than twice that of the muon. No particular significance is attached to this, however. But it is interesting to note that if we divide the electron's charged particle's radius (the reduced Compton wavelength) by the classical electron radius, the result is the fine structure constant. Again, the significance of this with respect to this model, if any, is not clear at this time.              


One objection that may be raised is that the electron modeled here is much larger than the high energy scattering data indicates. The electron's charged particle's orbit has a radius of 386.15933 fm, and the muon's charged particle's orbital radius is 1.8675947 fm. In spite of these large orbital radii, the actual scattering cross section of muons and electrons would be expected to be much smaller at high energies because the actual charged particle itself is no larger than the pion. That is, the uppermost limit of its radius is 0.185 fm (2.15 mb). This does not contradict the much lower experimental value of 5 - 30 nb. (No lower limit is available from the model.)
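Numerically, the two orbital radii quoted above are the reduced Compton wavelengths ħ/(mc) of the electron and muon. A quick check, a sketch using standard constants in MeV·fm rather than values from the text:

```python
# Sketch: the orbital radii quoted in the text (386.15933 fm for the
# electron, 1.8675947 fm for the muon) are the reduced Compton
# wavelengths hbar*c / (m c^2). Standard constants, not from the text.
hbar_c = 197.3269804   # hbar*c in MeV fm
m_e    = 0.51099895    # electron rest energy, MeV
m_mu   = 105.6583755   # muon rest energy, MeV

r_e  = hbar_c / m_e    # ~386.159 fm
r_mu = hbar_c / m_mu   # ~1.86759 fm
print(f"{r_e:.5f} fm  {r_mu:.7f} fm")

# Their ratio is just the muon/electron mass ratio, ~206.77, i.e. the
# "some 207" de Broglie wavelengths mentioned earlier in the text.
print(f"{r_e / r_mu:.2f}")
```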


In the above, we obviously assumed that negative mass tachyons exist, based on the results from this current model.  However, those tachyons are bound to charged particles trapped in light speed orbits. We do not really know if they are created, say, as “holes” in space when the particle transitions from one state to another, or if they are free particles that are captured by these charged particles.  We assume the latter for the moment.


But for the sake of argument we assume here that free tachyons exist.  However, we have no idea what their interactions with each other are, or if in fact, there are any.   Furthermore, we have no evidence that indicates that they interact with free photons nor any reason to believe that they should in the first place.


In Section 9, we address the issues with synchrotron radiation in the case of a revolving charged particle.                  


Finally, it should be noted that this entire derivation was initially done numerically on a Melcor (now obsolete) LED display hand calculator.  I.e., it was done without algebra because the author was trying to get a feel for the small numbers used in electron calculations.  A short time later, it seemed rational to carry out the algebraic derivation shown above.


For the original publications that the above is derived from, see:

1.       Ernst L. Wall.   "The Role of Tachyons in Electron Spin and Muon Spin",  Bulletin of the American Physical Society  30, p. 729 (1985).

2.       Ernst L. Wall.  "Indirect Evidence for the Existence of Tachyons; A Unified Approach to the Pion → Muon → Electron Conversion Problem", Hadronic Journal 8, p. 311 (1985).

3.       Ernst L. Wall.  Book, The Physics of Tachyons, 234 pp., ( Hadronic Press, 1995 )



(See publications 1, 21, 25,  and 26, below.)

© Ernst L. Wall 2007, all rights reserved.


The Imaginary Mass Tachyon Model


This is a section that this author would rather not write because it could be construed as an attack on the very capable researchers who tried to obtain something useful from the imaginary mass model.  Nothing could be further from the truth.  They have this author’s utmost respect and even admiration for their efforts.  They were doing their job, which is to investigate a possible physical model. 


A Religious Icon That is Not Even Wrong!

The internet is filled with references to imaginary mass tachyons.  In spite of the fact that an imaginary mass has no physical meaning, one would think from these references that their existence is a confirmed fact.  “Learned” web pages are filled with algebra that is logically correct and that produces countless pretty graphics.  However, none of this leads to any evidence of the existence of these entities or to any kind of suggestion of reality.


The imaginary mass tachyon has been around for some 50 years now.  In that period of time it has produced nothing that agrees with experiment. (We do note the one exception in Section 12.c where Recami and Mignani derived a negative mass tachyon from it, to put it in simple terms.)


The Emperor’s New Clothes and a Mindless Clinging to an Unproven Idea.

But in spite of this, this model has become a religious icon that many physicists mindlessly cling to with what can only be described as a death grip that feeds on itself, as in the case of the Emperor’s new clothes. 


As a result, any attempted discussion of negative mass tachyons will be met with the mindless insistence that tachyons can only have an imaginary mass to the exclusion of all other models.


The implication is that if you don’t believe in the imaginary mass tachyon, you are incompetent and unworthy of your position as was the case with the emperor and the members of his court.


Hence, with great reluctance, it seems useful to show why these things are bogus.



Why the Imaginary Mass Tachyon is Totally Bogus


First and foremost, there is absolutely no experimental evidence whatsoever to support the existence of imaginary mass tachyons.


The imaginary mass tachyon is derived simply by setting the particle velocity in the Lorentz transformation, Eq. 15.1 below, to a speed greater than c, the speed of light.  Here, m0 is the rest mass of some subluminal particle and m is its mass at some velocity v. 





This was originally derived from the results obtained from the Michelson-Morley experiment, which was carried out in 1887.


For subluminal particles, this is known to be correct.  That is a fact.


Of course, it is algebraically correct to say that m would be imaginary if v > c.  However, by what empirical authority would one say that this is physically correct?  Furthermore, in the case of a tachyon, what is m0?
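The algebraic point can be illustrated with a tiny sketch that evaluates the factor 1/√(1 − v²/c²) from Eq. 15.1 over the complex numbers (the function name and test velocities are mine, purely for illustration):

```python
# Sketch: the relativistic factor of Eq. 15.1, evaluated with complex
# arithmetic, to show that v > c forces a purely imaginary result.
import cmath

def gamma(v, c=1.0):
    """Return 1/sqrt(1 - v^2/c^2), evaluated over the complex numbers."""
    return 1.0 / cmath.sqrt(1.0 - (v / c) ** 2)

print(gamma(0.8))   # subluminal: real, ~1.667
print(gamma(2.0))   # superluminal: purely imaginary, ~ -0.577j
```

Nothing in the arithmetic itself says what an imaginary m means physically; that is precisely the objection being raised here.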


The most obvious reasons this is bogus are the following:


1.       The Lorentz transformation is based on experiments that take place below the speed of light.


2.       In the subluminal domain, all particle interactions (based on this model) occur by means of light speed interactions.


3.       In the superluminal domain photons cannot catch a tachyon.  Further, even if they were to collide, there is no reason whatsoever to assume that a tachyon and a photon would interact, nor any way to know how they would interact if they did.


But to take bullet 3 further, in the superluminal domain we have no idea what the interactions would be like, if there are any.  Certainly there is no experimental evidence of any kind to suggest what they would be like.  But whatever they are like, there is no reason to assume that they behave like subluminal interactions.


Worse, according to this model, the energy contained in a particle whose velocity is just above light speed is greater than that of a particle moving at an infinite velocity.


Hence, there is utterly no a priori justification whatsoever to believe that extending a subluminal model to the superluminal domain is valid.  In the light of its failure to produce experimentally verifiable results, it is clearly a bogus procedure.



Figure 12-1.  A ballistic missile launched from point A on the Earth would follow a Keplerian trajectory to point B, at which point it would strike the Earth.  The trajectory’s focus would be at the point x, the center of the Earth.

a.  While the calculation of the trajectory of the missile above the Earth would be carried out using a Keplerian orbit, that model would not be correct once it strikes the Earth.

b.  Likewise, the Lorentz transformation is totally valid below the speed of light, but it has no meaning above the speed of light.


A more concrete example of such a bogus procedure is shown in Figure 12-1.  Here, the Keplerian trajectory of a missile launched from point A to point B would be modeled using an elliptical orbit with its focus at the center of the Earth.  Clearly, that model would no longer be valid from point B onward unless you believe in the imaginary mass tachyon.


If you believe that the imaginary mass tachyon model is a correct tachyon model, then you have to believe that the ballistic missile would continue in its ballistic trajectory through the interior of the Earth.  It is highly likely that geologists everywhere would welcome your model because they could use it to insert probes into the Earth’s interior to study it, especially the core!!


In any case, if anyone has any experimental evidence to support the existence of imaginary mass tachyons, or the validity of the extended Lorentz transformation, this author would like to see it.  An email address is provided at the top of this page, so please, I implore you, let me know about it.


Finally, we note that Recami and Mignani did show, back in 1974, that the imaginary mass would manifest itself as a negative mass particle in the subluminal world.  This was used as a justification in the original publication of the tachyon model.  However, beyond that, the imaginary mass tachyon is of no value.



12.B.  The Origins of the Imaginary Mass Tachyon


The extension of the Lorentz transformation to superluminal velocities was first published by Bilaniuk, Deshpande, and Sudarshan in 1962, some 50 years ago.  (This is frequently called the extended Lorentz transformation.)  That was not an illogical approach for a first cut at attempting to prove the existence of tachyons, and these authors are to be commended for their originality.


Further, these authors were the pioneers that started the faster-than-light movement in the physics community.  They have truly earned their rightful place in the history of physics.


In the time since then, probably hundreds of papers based on the extended Lorentz transformation have been produced by many very capable investigators.  But in all of that time, and in spite of the obvious capability of those authors, no agreement with experiment whatsoever was achieved.  On reading these papers, the impressive capabilities of the authors are manifest, and if these capable people were unable to achieve any experimentally verifiable results, then clearly the model has failed.


It goes without saying that reasonable agreement with experiment is necessary for a physical model to be considered viable, and the imaginary mass tachyon is certainly not exempt from this rule.                    


But before continuing, we would like to state that the comments here are not intended to criticize or demean any of the many capable workers who have worked with the imaginary mass tachyon.  They were doing what a researcher is supposed to do:  they were investigating a proposed model in order to determine if it had any validity, a procedure that is nothing less than good science in itself.  They gave it a maximum effort.  These workers, especially Bilaniuk et al. (the authors of the imaginary mass tachyon), should be applauded for their efforts.  In fact, this author himself spent some time attempting to apply that theory, but to no avail.     


While it could be argued that a few papers might have produced a vague suggestion of physical reality, for the most part there have been few or no specific models that could be compared with experiment.  I.e., it would not be totally incorrect to say that the imaginary mass tachyon is “not even wrong”.  


As to the “not even wrong” phrase, this was used by Wolfgang Pauli.  Pauli was noted for his frequently blunt and uncomplimentary assessments of other people’s work.  “That is ridiculous!”, he would exclaim.  However, in one particular instance someone asked him what he thought of a paper that he was reviewing.  “It’s not even wrong!”, he said, referring to its lack of a model that was testable through experiment.  This same phrase was used as a title by Woit in his book that discusses the failure of string theory to produce anything testable via experiment.


Because the imaginary mass tachyon has never produced any significant theory or model that was testable by experiment after 50 years, it is appropriate to use Pauli’s phrase here.                  


Regardless of the validity or lack of validity of the extended Lorentz transformation, we simply state that if one simply posits a negative mass tachyon and uses it to develop a particle model, then one can show that that model can produce agreement with experiment.  We are not required to extend relativity into a domain in which it has no empirical validation.  We have demonstrated that above.                  


Again, this is said with all apologies to the very talented and capable investigators that have worked in this area and tried to obtain something from it.



The One Case Where There was a Successful Utilization of the Imaginary Mass Tachyon to Derive a Negative Mass Tachyon


It was published in Rivista del Nuovo Cimento 4, 209 (1974) by Recami and Mignani.  Those authors showed that an imaginary mass tachyon would manifest itself as a negative mass (gravitationally repulsive) particle.  That was used as a justification in this author’s publication of the derivation of the electron’s and muon’s Bohr magnetons [25].


Back in the mid 1970s I was interested in faster than light phenomena, including tachyons.  Because I was working in the semiconductor industry at the time, I was also interested in electrons.  At some point, I got this bug in my head that I could use tachyons to model an electron.  Specifically, I felt I could model an electron as a muon and a negative mass particle.  However, since a negative mass particle had never been detected, if one existed it would have to be a tachyon merely because that was the only way it would not have been observed.  Publishing something like that would have been difficult because getting it past the referee would have been problematic.


About that time, Walter Niblack found their paper and gave me a copy of it.  That was encouraging, and so in the mid 1980s I had some spare time and developed the electron and muon models.  With Recami and Mignani’s paper, I had the justification I needed for publication of my results.                                                


©Ernst L Wall 2007, All Rights Reserved

Terminology:  vortex electron,  vortex electron theory, vortex electron physics, tachyon, tachyons,  negative mass, fine structure constant, negative mass tachyon, spin angular momentum, de Broglie wave, mass energy,  photon, neutrino, electron, Bohr magneton, magnetic moment, vortex muon, meson, vortex proton, vortex neutron, vortex deuteron, solid helium-three pvt surface & melting curve minimum.





Appendix 1.  My Helium-3 Transducer/Strain Gauge Appears to Have Made it to the Smithsonian, but under someone else’s name.


The version I built is more like the white one on the far left in the picture below.   See drawing in Section 17.  It is most strange to note, however, that my name seems to have gotten lost somewhere.






Appendix 2.  The second hard journal publication of some of the results I had previously taken for my MS Thesis.  It appears that at least I got some more hard publication credit.  However, it left out the description of the melting curve minimum.  Also, it appears that nowhere did I get any credit for inventing the strain gauge.  The first paper, in the Physical Review Letters, has been misplaced.  It was published with the two names Adams and Wall only.

The first actual public mention of the gauge and data was prior to this at the Spring, 1965 meeting of the American Physical Society.  See E. D. Adams and E. L. Wall.  "Thermal Expansion Coefficient and Compressibility of Solid Helium-three”, Bulletin of the American Physical Society   10, p. 519.

 (By the way, superfluidity in liquid helium-3, for which the 1996 Nobel prize was awarded, was discovered by Osheroff, Richardson, and Lee using the “Straty-Adams” (actually, Wall-Adams) strain gauge to study the melting pressure.  They would not have made the discovery without the “Straty-Adams” (Wall-Adams) strain gauge.)









Appendix 3


A Digital State Machine Simulation of the Universe and the Difficulties of Time Travel


Ernst L. Wall

The Institute for Basic Research

Palm Harbor, FL 34684

April 26, 2000


Published:   Hadronic Journal Supplement 15, p. 231 (2000).  



The flow of time, in previous scientific literature, has been discussed in terms of classical thermodynamics and statistical mechanics.  Here, we propose a new approach to the study of time flow by taking advantage of concepts derived from modern computer science.  We devise a thought experiment that uses a hypothetical, gigantic digital state machine to simulate the universe.  This simulation will, at least in concept, process objects that include atoms, nuclei, particles, and photons.  These objects change state on a regular basis at a rate determined by a clock whose period is based on the frequency of a gamma ray.  This clock provides a high time resolution so that the total state count, as it progresses from one discrete state to the next most probable discrete state, provides a new definition of absolute time.  Absolute time is a count of all the states of the universe from its beginning to any given count.  Based on this state machine argument, time travel to some absolute past would require that copies of all past states of the universe be stored in some medium, somewhere, so that the time traveler could rewind the universe.  This seems unlikely with today's technology as well as that of the foreseeable future, so time travel appears to be an unlikely possibility.  Further, we demonstrate that time could not exist without the existence of matter.



1.  Introduction.


In many publications in recent years, especially in the popular press, science fiction articles, and even the movies, much has been presented about human beings undertaking reverse time travel that ostensibly occurs as a consequence of such diverse phenomena as traversing wormholes and exceeding the speed of light. 


But reverse time is a very real concern today for those who investigate tachyons, or particles whose velocity exceeds the velocity of light.  This has been a consideration from the earliest days of the investigation of these particles because of causality issues, that is, the assumption that these particles travel backwards in time and cause difficulties with the present (1, 2, 3, 4).


The usual classical thermodynamic counter to the argument for the possibility of reverse time travel, at least for large macroscopic bodies, is to simply state that increasing entropy, the arrow of time, is always in the direction of increasing time, so that reverse time movement is impossible (5). 


While a study of time flow using the concept of increasing entropy is not difficult, we will develop a new method that is conceptually even simpler than the entropy argument while providing far greater conceptual reach.  This methodology easily demonstrates that time travel for a macroscopic body is a highly questionable possibility, at least based on physics as we know it today.  This new method is based on a more modern concept, namely, state machines as implemented by modern computer technology.

2.  Scope of Investigation


In this work we will describe a method of simulating the universe by means of a hypothetical digital state machine.  We will use this state machine model to arrive at a new definition of time, specifically, a definition of absolute universal time.  This definition of time will show that matter is necessary for time to exist.


We will use this simulation to demonstrate that to go backwards in time, you would either have to rewind the entire past universe while the future universe continues its forward trajectory, or you would have to have a record of all states of the universe from the present back to the point in the past that you wished to visit.  We also demonstrate that merely exceeding the speed of light, or transiting a wormhole, does not rewind the universe, nor does it access hypothetical records of the past.  We will use these results to demonstrate that time travel is inherently impossible in the physical universe as we know it today.


In this work, we are only interested in introducing a new, basic concept.  We are not interested in answering all possible questions that arise from this model, in advancing computer science, or even in providing an optimum methodology from computer science.  We are only interested in a very simple, very basic state machine concept that will illustrate time flow from the standpoint of basic physics.  It is not necessary to consider relativistic or quantum mechanical aspects of this model in order to introduce it.  These would be interesting enhancements, and including them would not be extraordinarily difficult, but neither is necessary in order to illustrate the basic state machine method of studying the flow of time, and so we will not consider them in the present work.



3.  A Simulation of the Universe by Means of a Digital State Machine.


In order to arrive at an improved method of analyzing the difficulties associated with time travel, we describe a hypothetical model of the universe that is a gigantic digital state machine that will simulate the general behavior of the universe as time advances. 


State machines are commonly used in the analysis of modern digital logic systems.  Not only are they simple to understand, they also provide a more definite methodology for general simulation of statistical phenomena than generalizing from a statistical ensemble.  And because a state machine implementation of physical phenomena is generally scalable, a computer simulation can be implemented at various levels of complexity that range from huge simulations on complex multiprocessor systems to simple simulations in household computers.


This state machine can be sufficiently general as to process a covariant model when it is desired to do so for a large scale, relativistic model of the universe. 

This state machine will process a set of objects.  Specifically, these objects are particles, including atoms, nuclei, alpha particles, beta particles, electrons and photons, and even tachyons, if desired.  Each of these objects has a state that is uniquely determined by parameters that include its mass, cross section, position, velocity, and spin.

We will define the state of the universe at some integral time, t, as


                                  S_t = { s_t,i(m, r, v, k),  i = 1, 2, ..., N },                          (1)


where s_t,i(m, r, v, k) is the state of some particle i at time t.  The state includes mass m, position r, velocity v, and spin k.  N is the total number of particles in the universe.  Because each particle is in motion, the state of the universe will change from instant to instant.  The nature of this change will determine the new state of the particle as it progresses to the next time interval.  The new state can be generalized as



                              s_t+1,i = I(s_t,i, s_t,j),   j = 1, 2, ..., N,                          (2)


Here, I(s_t,i, s_t,j) represents an interaction that relates a particular particle, i, at some time, t, to all other particles, j, in the universe.  Conceptually, at least, it is inherently symmetric with respect to time reversal, because time is merely the sequential progress of the universe from state to state, regardless of whether the state count goes backwards or forwards.


However, digital numbers are inherently limited in precision.  As a result, the limited precision of the specification of the target's state could cause motion under time reversal to follow a slightly different trajectory than the exact reverse of the trajectory of a preceding, forward state.  This provides a built-in randomness, of sorts, to I(s_t,i, s_t,j).


But even so, it would still be necessary to provide a time-independent random number generator in order to model a more probabilistic trajectory to the next state for each particle, i, as opposed to a definite path.  This is because the randomness built into the real universe allows for many possible trajectories into the future.  Without this inclusion of randomness, each time the simulation was started from the same point, the forward trajectory would be exactly the same.  This randomness must be very small, however.
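The update rule of Eqs. 1 and 2 can be sketched in a few lines of Python.  This is only an illustrative toy, not anything from the original work: the particles fly freely in one dimension, and the interaction I is reduced to a tiny random velocity perturbation standing in for the built-in randomness just described.  All names and the jitter size are arbitrary choices.

```python
import random
from dataclasses import dataclass

@dataclass
class ParticleState:
    """One s_t,i: mass m, position r, velocity v, spin k (1-D for brevity)."""
    m: float
    r: float
    v: float
    k: float

def interaction(s_i, universe, jitter=1e-9):
    """A stand-in for I(s_t,i, s_t,j): free flight plus a tiny random
    velocity perturbation representing the built-in randomness.
    Real inter-particle forces are omitted in this toy."""
    dt = 1.0  # one universal time sequence interval
    new_v = s_i.v + random.uniform(-jitter, jitter)
    return ParticleState(s_i.m, s_i.r + new_v * dt, new_v, s_i.k)

def step(universe):
    """Advance the whole universe one count: S_t -> S_t+1 (Eq. 2)."""
    return [interaction(s, universe) for s in universe]

# Three particles; advance the universal state count by one.
universe = [ParticleState(1.0, float(i), 0.1 * i, 0.5) for i in range(3)]
universe = step(universe)
```

Run twice from the same starting set, the jitter produces two slightly different futures, which is the point of the time-independent random number generator above.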


For a realistic simulation of the universe, the states of all objects, near and far, must all change before a universal state is complete.  This is simultaneity of state change.  Because of the simulation of simultaneity, the interaction, I(s_t,i, s_t,j), of any two objects must be processed in such a manner as to account for the time of propagation of the interaction from one object to the other.  I(s_t,i, s_t,j) is, in fact, an object of simulation in itself.

However, if we wished to simulate a synchronization of distant clocks by means of light signals, then time delays at the macroscopic level would have to be considered as measured by the simulated clocks in the same manner as is used in a typical textbook introduction to special relativity.


4.  The Nature of Forward Time Flow


In a digital simulation, the time, t, is an integer value, not a continuous value.  Further, the division of time into intervals of seconds is meaningless for this state machine.  It is too gross a quantity to calculate the effects of atomic and nuclear transitions, because the state of the universe will change millions, or even billions, of times in one second.  Therefore, a rational calculation of one state based on the previous state is not possible for time divisions of one second or greater.  That is, the end state based on such a gross sequence interval is a completely random state with respect to its starting state.  What we must have is a time division that is smaller than the interval of the fastest changing object in the state set that composes the universe.  Therefore, we will define:


The fundamental universal time sequence interval is the minimum time that is required to resolve the state change of the fastest changing object in the set of all objects that constitutes the universe.


In order to implement this definition, we propose that a hypothetical clock having a time sequence interval based on the frequency of a high-energy gamma ray be used to separate one nuclear state from the next.  In this, we have a mechanical definition of time as a natural, fundamental state-change integer through which the universe can unfold.  This fine division of time does increase the difficulties of simultaneity insofar as the sheer size of the model we must process is concerned, but we are dealing with a generalized hypothetical model that treats the general passage of time conceptually, and this model will be very adequate for that purpose.
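The text does not pin down a particular gamma-ray energy for this clock.  Assuming, purely for illustration, a 1 MeV gamma ray, the sequence interval follows directly from the photon relation f = E/h:

```python
PLANCK_H = 6.62607015e-34     # Planck constant, J*s (exact SI value)
EV_TO_J  = 1.602176634e-19    # joules per electron volt (exact SI value)

def gamma_clock_period(energy_mev):
    """Period of a photon of the given energy: T = 1/f = h/E."""
    energy_j = energy_mev * 1.0e6 * EV_TO_J
    return PLANCK_H / energy_j

tick = gamma_clock_period(1.0)    # assuming a 1 MeV gamma ray
states_per_second = 1.0 / tick    # universal state counts per second
```

A 1 MeV photon oscillates at roughly 2.4 x 10^20 Hz, so one sequence interval is on the order of 4 x 10^-21 s, which shows concretely why a one-second division is far too gross for this state machine.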


But first, we must relate this to the real physical universe.  Here, we make simultaneous, hypothetical digital "samples" of all of the parameters of an object and store the data in a computer memory.  This defines the state of the object.  This hypothetical sampling would be done in the same manner as the analog-to-digital sampling that is used in modern-day digital signal processing, where we would use the above clock to trigger the samples.


 Using this, we define a non-subjective, or non-anthropomorphic time as follows:


Absolute universal time is the total count of the state transitions that occur, starting at some initial time of t = 0 at the beginning of the universe and continuing forward to any specified time.  These state counts occur when the universe makes regular transitions from one discrete state to the next discrete state.1


This definition is not dependent on an anthropomorphic definition of time as derived from earth based intervals.  There are no years, days, seconds, etc.  It is based only on the requirements that the simulation provide for the most probable trajectory of one state of the universe to the next state based on the behavior of the smallest, fastest objects in the universe.


It is to be noted that in the definition, we specified the "next discrete state" of the universe.  But it is important to note that it is also the "next most probable state" of the universe.  If our hypothetical computer were used to implement Eq. 2 with the intent of simulating the real universe, then the simulation would calculate each object's new state based on its current state and I(s_t,i, s_t,j), which provides for the most probable next state, not a predetermined, definite state.  It is because of the slightly probabilistic nature of I(s_t,i, s_t,j) that the future in the simulation is not absolutely ordained in advance.


Based on the definition of absolute universal time, it is obvious that without physical matter, time has no states to count.  And with no state count, there is no passage of time.  Therefore, we state that:


The timeless, eternal void hypothesis:  In the absence of matter, there are no state transitions to count.  Without a state count, there can be no time.  Therefore, in the absence of matter, time is devoid of any meaning and, hence, is nonexistent.



5.  Reverse Time Flow


Suppose we were to reverse the clock in the simulation and begin processing the state machine in reverse.  Starting from the last state that occurred during positively advancing time, the objects would begin to retrace their previous trajectories.  However, the randomness that is built into I(s_t,i, s_t,j) would cause them to follow trajectories that are slightly different from their original trajectories.  The reverse path would be random, and entropy would continue to increase, just as it did while time was moving forward.  However, time reversal would also imply velocity reversal, which would have the effect of reversing the motion of the objects.


But this velocity includes not only the velocity of the individual objects, but also the composite velocities of all objects composing a macroscopic body.  As a result, this macroscopic body would also reverse its velocity, provided that the precision of the digital state specification is sufficient to include both the large particle velocities and the slower velocities of the macroscopic objects that are composed of these particles.


While there might be a trajectory to an approximate near-past point, there would be no trajectory to any previous, but distant, exact point in the past.  As time advances in reverse, the universe would, in time, behave much as it does under the forward movement of time: the same sort of random state changes and movement of events would occur as if the clock had been counting forward.


For example, suppose we simulate a billiard game.  The balls are racked on the table into a triangle, the triangle is broken, and the balls scatter randomly on the table.  Several shots later, we reverse the simulation.  Because of the time-independent, very small randomness built into I, the balls will not go back to their exact original triangular, racked condition.  Disorder, or entropy, has increased.


Similarly, we could simulate the process of adding a drop of milk to a container of water.  After a few minutes, the milk will be dispersed.  If we reverse the simulation, the randomness built into I will not permit the milk molecules to re-coalesce into the spatially bound drop of pure milk that they started out as.
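The milk-drop argument can be made concrete with a toy simulation.  This is a sketch under assumed parameters, not part of the original work: particles random-walk outward from the origin, every move is recorded, and the rewind undoes the recorded moves but with a small per-step error rate standing in for the limited-precision randomness of I.

```python
import random

def diffuse(positions, steps, rng):
    """Forward diffusion: each particle takes random unit steps,
    and every move is recorded so the walk can later be undone."""
    history = []
    for _ in range(steps):
        moves = [rng.choice((-1, 1)) for _ in positions]
        history.append(moves)
        positions = [p + m for p, m in zip(positions, moves)]
    return positions, history

def imperfect_rewind(positions, history, rng, flip_prob=0.01):
    """Undo the recorded moves in reverse order, but with a small
    chance of error per step -- the built-in randomness of I."""
    for moves in reversed(history):
        positions = [
            p - (m if rng.random() > flip_prob else -m)
            for p, m in zip(positions, moves)
        ]
    return positions

rng = random.Random(42)   # fixed seed: an arbitrary illustrative choice
start = [0] * 50          # the "drop of milk": 50 particles at the origin
spread, history = diffuse(start, 1000, rng)
rewound = imperfect_rewind(spread, history, rng)
```

With any nonzero error rate, the rewound state generally fails to match the starting configuration, and the mismatch grows with the length of the history: the milk does not re-coalesce.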


What is more difficult to predict is the effect of simulated humans and their free will on the progress of reverse time.  We will not cover this subject in this work.


6.  Tachyons and Time Travel


As previously noted, a tachyon is a particle whose velocity exceeds the speed of light, and in the literature of the past, it has generally been assumed to travel backwards in time (1).  This is another object whose effects are suitable for a very simple simulation within the hypothetical universe.


We make no assumptions about the characteristics of a tachyon, only that it has a velocity greater than the speed of light and that it has the ability to interact with a subluminal particle.  (To date, there has been no direct detection of a tachyon, although indirect evidence for their existence has been proposed (2, 3).)


In a simulation involving a tachyon, two interacting particles, A and B, might have tachyons that serve to carry information back and forth between them.  While it would be true that the tachyons would carry information faster than photons, particles A and B still exist in their environment in the present state, not the past or the future.  If a tachyon and a photon were simultaneously emitted from particle A and both of them travel toward particle B, the tachyon would scatter B before the photon was able to reach it.   This is not to say that there is a causality violation. The tachyon merely beat the photon to the target.  Only if an observer at A were attempting to measure a characteristic of particle B by using a photon based signal would there be any reason for an uninformed observer at A to question whether or not causality was violated.  This would be a measurement problem, not an actual case of time reversal.


Further, the trajectories of the particles A and B would still progress in a near random fashion before and after the collisions.  The presence of the tachyon would merely serve as a different signaling mechanism.  A more mundane analogy would be the use of optical observation of an object that was simultaneously being observed by a sonar scan.  The light does not present a causality issue with regards to the sonar scan.
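The point that the tachyon merely beats the photon to the target, with both arrival times positive, is simple arithmetic.  A small check, assuming an illustrative separation and a hypothetical tachyon at twice light speed:

```python
C = 299_792_458.0   # speed of light in vacuum, m/s

def arrival_time(distance_m, speed_mps):
    """Time for a signal to cross the given distance."""
    return distance_m / speed_mps

d = 1.0e9                             # illustrative A-to-B separation, 1e6 km
t_photon  = arrival_time(d, C)        # about 3.34 s
t_tachyon = arrival_time(d, 2.0 * C)  # hypothetical 2c tachyon: about 1.67 s
```

Both times are positive and measured forward from the emission at A, so B is scattered in A's future, not its past; the earlier arrival is a signaling difference, not a causality violation.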


In a simulation, a tachyon, even though its velocity exceeds the velocity of light, will not go backwards in time.  Neither will the two particles, A and B, above, go backwards in time.


7.  The Plight of a Would Be Time Traveler


Next, consider the spatial extent of the present-day universe, and an individual who wishes to return to some point in the rather gigantic past.  If he were to travel to some time and location in the past, and if he had the means and the desire to move about the galaxy to any random point, then the entire galaxy must be available to him.  That would constitute true time travel.  Or, if his means of transport is to be limited, at least he should be able to use a high-powered telescope and view the entire galaxy as it existed back at that time.  (But even that reduced capability in a real universe would still be a rather substantial achievement.)


There would be two hypothetical options available to the traveler.  He could try to rewind the universe itself,  or he could try to find a record of the past history and use that to recreate a point in the past.


The probable past would be different from the absolute recorded past because of the randomness built into I(s_t,i, s_t,j).  As already stated, the mere attempt to run the universe in reverse would produce, after a short interval of counts, a different past than the actual past.  In fact, after a time, the randomness of the rewind of the universe would make it difficult to say that time was really reversed.  It is more likely that after a short period of disorientation, the residents of the reversed universe would begin to carry on as if nothing had happened.  They would continue to age, have children, and do their jobs.


To simulate our traveler's visit to an exact point in the past, he must stop the entire universe and then rewind a record of it for some specified number of state counts.  This requires that a copy of the entire universe for all past times be saved somewhere, somehow 2.  That is, all previous absolute recorded states of the entire physical universe must be recorded.  We specify the need to use absolute, recorded states to visit the real past because he does not wish to revisit a mere probable past.


Having reached some point in the past, if our traveler is to move forward from that past point to exactly where he came from in the present, not only must he not cause any influence on the past, he must travel forward in a recorded time sequence, or he will arrive at a substantially different point than that which he departed from because of the random nature of the state change.  That is, if the recorded sequence is not allowed to replay, and the universe begins its progress forward in a random manner, then he will progress forward to a present that might be quite different than the one he departed from.  This is especially true if he interferes with some critical event in the past.

Further,  while the traveler is rewinding the past,  the universe must continue to move forward from the point in the present time from which he departs on his journey,  and the events of this unfolding reverse state sequence must also be recorded if further visits are to be made to correct any problems that a “previous” traveler may have caused.  Further, a new recording of the universe must be made after the present point is reached in order to account for the changes that he caused going forwards from the past, as well as the future point from his departure point.


It could be argued that if a time traveler has only a limited part of space available in a simulation, then he might be able to regenerate a small spatial part of the universe at a particular past time, and then let it move forward in time.  This would be a localized time journey.  But what would happen if he moved to the edge of this localized spatial environment?  What would happen to past residents of this region whose paths crossed over its borders?  Would they step into another universe, or vanish?  What would that do to the future of that local region?  These might present severe difficulties for the traveler as well as the previous occupants of the time-spatial region near his trajectory.  Obviously the future of this local region might be severely disturbed during the return trip to the traveler's original point of departure, especially on both sides of its borders.


But to complicate matters further, suppose there were multiple time travelers who start out on their journeys at the same time but from different locations.  We must ask, which time traveler gets to rewind the universe first?  Or, which one gets to go to which copy of which part of the universe at what time?

This problem can become even more complicated if one time traveler has rewound the past universe and moved backwards in time, and is followed some time later by another time traveler who begins to unwind this past universe.  We must ask what happens to the previous time traveler in his rewound past universe, and what happens as he returns to the time from which he started his journey.


It is to be noted that we have utilized the term "rewinding" the universal record as an analog to rewinding a VCR tape or a binary tape from a computer, because that is a closer analog to running the universe in reverse.  But in these times of random-access computer storage, our simulated time traveler could pick a point in the past and return there immediately.


But the simulation of traversing a black hole and jumping back to some time in the past could be done by, essentially, accessing a random point in a mass storage system.  This would be an example of near immediate access to a specific point in the past that involved no rewind.
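The contrast between tape-style rewind and random access can be sketched as a toy record of universal states.  The class and its names are hypothetical illustrations; a real version would face the storage problem raised in this section, since it must hold a copy of the entire universe per state count.

```python
class UniverseRecord:
    """Hypothetical store of every past state S_0 ... S_t."""

    def __init__(self, initial_state):
        self.states = [initial_state]

    def record(self, state):
        """Append the universe's next state as time moves forward."""
        self.states.append(state)

    def rewind_to(self, t):
        """Tape-style rewind: discard everything after count t."""
        del self.states[t + 1:]
        return self.states[t]

    def jump_to(self, t):
        """Wormhole-style random access: read state t directly,
        leaving the rest of the record intact."""
        return self.states[t]

rec = UniverseRecord({"count": 0})
for t in range(1, 5):
    rec.record({"count": t})
past = rec.jump_to(2)   # -> {"count": 2}
```

The design difference matters for the argument above: `rewind_to` destroys the states between the present and the target, while `jump_to` preserves them, which is why a wormhole transit is better modeled as random access into the record.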


As a brief aside, it is to be noted that as time progresses forward in the recorded universe, the residents have no free will.  The traveler, presumably, would have free will, but this depends on the simulation.  In any case, it is suggested that some interesting philosophical points could be raised from this issue of free will versus predestination.

These are some of the questions that are more clearly enunciated by the use of a digital state machine simulation than we could obtain from a continuous time, statistical ensemble model of the universe.  A continuous time model (i.e., an analog model) that is developed from a statistical mechanical ensemble has no definite transition from one particle state to another particle state (6).  An analog recording of the state of a universe and the interaction of its components, or a recording of even a small ensemble of objects, is rather difficult to envision.  Therefore, the classical analog model does not permit a hypothetical storage methodology that will permit the concept of storing and rewinding the universe that is as conceptually simple as that obtained from the digital model.  The illustrative capability of the analog model is severely limited as compared to a digital state machine.



8.  A Digression On Macroscopic Bodies at Hyperluminal Velocities


To depart somewhat from a pure state machine argument for a moment, we will consider a more general discussion of the argument that an object that moves faster than the speed of light would experience time reversal (1, 4).  For example, the space ship Enterprise, in moving away from Earth at hyperluminal velocities, would overtake the light that was emitted by events that occurred while it was still on the earth.  It would then see the events unfold in reverse time order as it progressed on its path.  This phenomenon would be, in effect, a review of the record of a portion of the Earth's history in the same manner that one views a sequence of events on a VCR as the tape is run backwards.  But this does not mean that the hyperluminal spacecraft or the universe is actually going backwards in time any more than a viewer watching the VCR running in reverse is moving backwards in time.


Further, it must be asked what would happen to the universe itself under these circumstances.  To illustrate this, suppose a colony were established on Neptune.  Knowing the distance to Neptune, it would be trivial, even with today’s technology, to synchronize the clocks on Earth and Neptune so that they kept the same absolute time to within microseconds or better.  Next, suppose that the Enterprise left Earth at a hyperluminal velocity for a trip to Neptune.  When the crew and passengers of the Enterprise arrive at Neptune, say 3 minutes later in Earth time, it is unlikely that the clocks on Neptune would be particularly awed or even impressed by the arrival of the travelers. When the Enterprise arrives at Neptune, it would get there 3 minutes later in terms of the time as measured on both Neptune and Earth, regardless of how long its internal clocks indicated that the trip was.  Neither the Enterprise nor its passengers would have moved backwards in time as measured on earth or Neptune.


The hands of a clock inside the Enterprise, as simulated by a state machine, would not be compelled to reverse themselves just because it is moving at a hyperluminal velocity.  This is because the universal state machine is still increasing its time count, not reversing it.  Nor would any molecule that is not in, or near the trajectory of the space ship, be affected insofar as time is concerned, provided it does not actually collide with the space ship.


In the scheme above, reverse time travel will not occur merely because an object is traveling at hyperluminal velocities.  Depending on the details of the simulation, hyperluminal travel may cause the local time sequencing to slow down, but a simulated, aging movie queen who is traveling in a hyperluminal spacecraft will not regain her lost youth.  Simulated infants will not reenter their mothers' wombs.  Simulated dinosaurs will not be made to reappear.  A simulated hyperluminal spacecraft cannot go back in time to retrieve objects and bring them back to the present.  Nor would any of the objects in the real universe go backward in time as a result of the passage of the hyperluminal spacecraft.


Neither the hyperluminal transmission of information or signals from point to point, nor the travel of objects at hyperluminal velocities from point to point, causes a change in the direction of the time count at the point of departure or at the point of arrival of these hyperluminal entities, nor at any point in between.



9.  Conclusion.


Based on concepts derived from modern computer science, we have developed a new method of studying the flow of time.  It differs from the classical statistical mechanical method of viewing continuous time flow in that we have described a hypothetical simulation of the universe by means of a gigantic digital state machine implemented in a gigantic computer.  This machine has the capability of mirroring the general non-deterministic, microscopic behavior of the real universe.


Based on these concepts, we have developed a new definition of absolute time as a measure of the count of discrete states of the universe that occurred from the beginning of the universe to some later time that might be under consideration.   In the real universe, we would use a high energy gamma ray as a clock to time the states, these states being determined by regular measurements of an object’s parameters by analog-to-digital samples taken at the clock frequency.


And based on this definition of time, it is clear that, without the physical universe to regularly change state, time has no meaning whatsoever.  That is, matter in the physical universe is necessary for time to exist.  In empty space, or an eternal void, time would have utterly no meaning.


This definition of time and its use in the simulation have permitted us to explore the nature of time flow in a statistical, non-determinate universe.  This exploration included a consideration of the possibility of reverse time travel.  By using the concept of a digital state machine as the basis of a thought experiment, we show clearly that to move backward in time, you would have to reverse the state count on the universal clock, which would have the effect of reversing the velocity of the objects.  But this velocity includes not only the velocity of the individual objects, but also the composite velocities of all objects composing a macroscopic body.  As a result, this macroscopic body would also reverse its velocity, provided the state was specified with sufficient precision.


But if you merely counted backward and obtained a reversal of motion, at best you could only move back to some probable past because of the indeterminate nature of the process.  You could not go back to some exact point in the past that is exactly the way it was.  In fact, after a short time, the process would become so random that there would be no real visit to the past.  A traveler would be unable to determine whether he was going back in time or forward in time.  Entropy would continue to increase.

But doing even this in the real universe, of course, would present a problem because you would need naturally occurring, synchronized, discrete states (outside of quantized states, which are random and not universally synchronized).  You would need to be able to control a universal clock that counts these transitions, and further, cause it to go back to previous states simultaneously over the entire universe.   Modern physics has not found evidence of naturally occurring universal synchronized states, nor such an object as a naturally occurring clock that controls them.  And even if the clock were found, causing the clock to reverse the state transition sequence would be rather difficult.


Without these capabilities, it would seem impossible to envision time reversal by means of rewinding the universe.  This would not seem to be a possibility even in a microscopic portion of the universe, let alone time reversal over the entire universe.


But aside from those difficulties, if you wished to go back to an exact point in the past, the randomness of time travel by rewind requires an alternative to rewinding the universe.  This is true both for the simulated universe and for a hypothetical rewind of the real universe.  Therefore, the only way to visit an exact point in the past is to have a record of the entire past set of all states of the universe, from the point in the past that you wish to visit onward to the present.  This record must be stored somewhere, and a means of accessing this record, visiting it, becoming assimilated in it, and then allowing time to move forward from there must be available.  And, while all of this is happening in the past, the traveler's departure point at the present state count, or time, must move forward in time while the traveler takes his journey.


Even jumping back in time because of a wormhole transit would require that a record of the past be stored somewhere.  And, of course, the wormhole would need the technology to access these records, to place the traveler into the record and then to allow him to be assimilated there.  This would seem to be a rather difficult problem.

This, then, is the problem with time travel to an exact point in the past in the real universe.  Where would the records be stored?  How would you access them in order just to read them?  And even more difficult, how would you be able to enter this record of the universe, become assimilated into that time period, and then have your body begin to move forward in time?  At a very minimum, our time traveler would have to have answers to these questions.


Still another conundrum is how the copy of the past universe would merge with the real universe at the traveler’s point of departure.  And then, if he had caused any changes that affected his departure point, they would have to be incorporated into that part of the universal record that is the future from his point of departure, and these changes would then have to be propagated forward to the real universe itself and incorporated into it.  This is assuming that the record is separate from the universe itself.

But if this hypothetical record of the universe were part of the universe itself, or even the universe itself, then that would imply that all states of the entire universe, past, present, and future, exist in that record.  This would further imply that we, as macroscopic objects in the universe, have no free will, are merely stepped along from state to state, and are condemned to carry out actions over which we have no control whatsoever.


In such a universe, if our traveler had access to the record, he might be able to travel in time.  But if he were able to alter the record and affect the subsequent flow of time, he would have to have free will, which would seem to contradict the condition described above.  We obviously would be presented with endless recursive sequences that defy rationality in all of the above.


This is all interesting philosophy, but it seems to be improbable physics.


Therefore, in a real universe, and based on our present knowledge of physics, it would seem that time travel is highly unlikely, if not downright impossible.  

We do not deny the usefulness of time reversal as a mathematical artifact in the calculation of subatomic particle phenomena (7).  However, it does not seem possible even for particles to actually go backwards in time, influence the past, and cause consequential changes to the present.

Further, there is no reason to believe that exceeding the speed of light would cause time reversal in either an individual particle or in a macroscopic body.  Therefore, any objections to tachyon models that are based merely on causality considerations have little merit.


For the sake of completeness, it should be noted that the construction of a computer that would accomplish the above feats exactly would require that the computer itself be part of the state machine.  This could add some rather interesting problems in recursion that should be of interest to computer scientists.  And it is obvious that the construction of such a machine would be a rather substantial boon to the semiconductor industry.
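The recursion problem mentioned here can be made concrete with a minimal sketch, assuming (purely for illustration) that a simulating computer that belongs to the state machine must contain a model of itself:

```python
def simulate(depth=0):
    # A simulator that is part of the universe it simulates must also
    # simulate itself simulating, and so on without end.
    if depth > 5:                 # cut off the otherwise endless regress
        return "unbounded recursion"
    return simulate(depth + 1)

print(simulate())
```

Without the artificial depth cutoff the call never terminates, which is the recursive difficulty the paragraph points to.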

We already know from classical statistical mechanics that increasing entropy dictates that the arrow of time can only move in the forward direction (5).  We have not only reaffirmed this principle here, but have gone considerably beyond it. These concepts would be extremely difficult, if not impossible, to develop with an analog, or continuous statistical mechanical model of the universe.


We have defined time on the basis of a state count of the fastest changing object in the universe.  But it is interesting to note that modern-day time is based on photons from atomic transitions, and is no longer based on the motion of the earth.  Conceptually, however, it is still an extension of earth-based time.


But finally, history is filled with instances of individuals who declared various phenomena impossible, only to be proven wrong later, and even made to look ridiculous.  Most of the technology that we take for granted today would have been thought impossible several hundred years ago, and some of it would have been thought impossible only decades ago.  Therefore, it is emphasized here that we do not say that time travel is absolutely impossible.  We will merely take a rather weak stance on the matter and simply say that, based on physics as we know it today, there are some substantial difficulties that must be overcome before time travel becomes a reality.



1.  G. Feinberg, Phys. Rev. 159, 1089 (1967).

2.  Ernst L. Wall, Hadronic Journal 8, 311 (1985).

3.  Ernst L. Wall, The Physics of Tachyons., (Hadronic Press, 1995).

4.  P. Davies, About Time, p. 234 (Simon & Schuster, 1995).

5.  P. Davies, op. cit. p. 196.

6.  K. Huang, Statistical Mechanics, p. 156 (John Wiley & Sons, 1963).

7.  E. Condon & H. Odishaw, Encyclopedia of Physics, p. 9-139 (McGraw-Hill, 1967).



1 Webster’s New Collegiate Dictionary (1976, G & C Merriam Co) provides multiple definitions of time. The closest definition to what we propose here is “the measured or measurable period during which an action, process or condition exists or continues.”  It then defines period as “a portion of time determined by some recurring phenomena”.  This is, of course, circular reasoning.  It also provides another definition wherein time is “a continuum which lacks spatial dimensions and in which events succeed one another from past through present to future.”  This is somewhat of a weak definition compared to what we introduce here.


2 On a philosophical note, it is of interest that in Hindu cosmology every thought, word, and action that occurs in the universe is stored in the Akashic, or heavenly, records (i.e., the ethers, whatever the ethers are).  It is unlikely that most Hindu mystics believe that time travel is possible, only that it is possible to read the records.  However, some of them do profess to be able to see a probable future.  But having mentioned this as a fragment of distantly related philosophy, it is emphasized that we do not intend to resort to mysticism or pseudoscience in this investigation.







Appendix 4

What Waves When You Have de Broglie Waves?

(A Question Better Left Unasked by Physics Students Before This Current de Broglie Wave Model Came Out)



Question:  Who is this?





Only minutes before this snapshot was taken, he was a healthy, mid-20s graduate student at the University of Florida.  He made the mistake of asking a quantum mechanic, “What waves when you have de Broglie waves?”, without wearing blast-proof clothing.  He was instantly desiccated by the blast of hot air fanned on by the wildly waving hands of the quantum mechanic.

(No flies on the quantum mechanic, another graduate student; he was just using the standard line regarding this troublesome question, which has probably been asked millions of times.)

But not to worry, there was a happy ending!  He was left under a large tree in the Florida humidity (covered with a less-than-clean tablecloth from the U of FL President’s private dining area to protect him from the residue of the very large birds that roosted in the tree), and he was reconstituted by the humidity.  Thus it was that, after three days and three nights, he arose and went forth unto class.

Later, when asked if he finally understood what waved when you have de Broglie waves, he answered dryly “Yes!  The quantum mechanic’s hands!!”.

© Ernst L Wall 2007, All Rights Reserved




Appendix 5. Publications by Ernst Wall


1.       Ernst L. Wall.  Book, The Physics of Tachyons, 234 pp., ( Hadronic Press, 1995 )

2.      Ernst L. Wall.  “A Study of the Fundamental Origin of the Dimensions of the Bohr Radius of the Hydrogen Atom as Determined by the Quantized Dynamic Electric Field Surrounding a Vortex Model of the Electron.”  The Hadronic Journal 39, p. 71, (2016)

3.       Ernst L. Wall. “Maintaining the First Bohr Orbit Radius by Photon Suppression”, Bulletin of the American Physical Society, 2015 Fall New England Section, NEF15-2015-000044.

4.      Ernst Wall, “The Vortex Electron and the Origin of the Bohr Radius and its Fine Structure Constant, and Pilot Waves”, Bulletin of the American Physical Society 60, New England Section Spring, B5.00026, 2015.

5.       Ernst L. Wall, “A Possible Origin of the Fine Structure Constant”, Bulletin of the American Physical Society, 59, New England Section Fall Meeting, E2.0005, 2014.

6.       Ernst L. Wall,  “Revisiting the Bohr Atom 100 Years Later”, Bulletin of the American Physical Society, N32.00002, 58, 2013.

7.       Ernst L. Wall.  “The Electron’s Angular Momentum”, Bulletin of the American Physical Society 58, NEF13-2013_000008, October, 2013.

8.       Ernst L. Wall.  “Earth’s Atmospheric CO2 Saturated IR Absorption”, Bulletin of the American Physical Society, New England Section Fall Meeting, 2008,

9.       Ernst L. Wall, “A Longitudinal Electrical Impulse Field Neutrino and its Origin in Virtual Quanta of the Tachyonic Electron, Muon, and Pion”, Hadronic Journal 24, p. 207 (2001).

10.    Ernst L. Wall, “The Tachyonic Electron’s Revolving Light Speed Particle as a Non-Radiating, Bound Photon”,  Hadronic Journal Supplement 15,  p. 419 (2000). 

11.    Ernst L. Wall, “A Digital State Machine Simulation of the Universe and the Difficulties of Time Travel”,  Hadronic Journal Supplement 15, p. 231 (2000).  

12.    Ernst L. Wall, “The Fundamental Electrodynamic Origin of Electron de Broglie Waves”,  Hadronic Journal Supplement 15, p. 123 (2000).  

13.    Ernst L. Wall,  “Electrodynamics of Revolving Light Speed Particles and A Fundamental Basis for de Broglie Waves”,  Hadronic Journal Supplement 14,  p. 79  (1999).

14.    Ernst L. Wall.  “Radial Stability in a Longitudinal Electrical Field Neutrino”,  Bulletin of the American Physical Society 44, p. 34 (1999).

15.    Ernst L. Wall. “Origin of de Broglie Waves in a Tachyonic Electron Model”,  Bulletin of the American Physical Society 44, p. 35 (1999).

16.    Ernst L. Wall. “A First Tangible Step in the Quest for  Hyperluminal Space Travel”,  Proceedings of NASA’s Breakthrough Propulsion Physics Workshop, NASA/CP - 1999-208694,  p. 349 (Jan 1999).

17.    Ernst L. Wall.  “A Longitudinal Electromagnetic Impulse Neutrino Model”,  Bulletin of the American Physical Society 43, p. 2163 (1998).

18.    Ernst L. Wall.  “A Possible Fundamental Origin of the de Broglie Equation”,  Bulletin of the American Physical Society 43, p. 2163 (1998).

19.    Ernst L. Wall.  “Electrodynamics of Revolving Light Speed Particles”,  Bulletin of the American Physical Society,  43, p. 1399 (1998) .

20.    Ernst L. Wall.  "On Pion Resonances and Mesons, Time Cancellation, and Neutral Particles",  Hadronic Journal 12, p. 309 (1989).

21.    Ernst L. Wall.  "Time Cancellation Hypothesis",  Bulletin of the American Physical Society  33, p. 1076 (1988).

22.    Ernst L. Wall.  "Charm, Other Resonances, and the Tachyonic Particle Model",  Bulletin of the American Physical Society   33, p. 1076 (1988).

23.    Ernst L. Wall.  "Unresolved Problems of the Tachyonic Models of the Electron and the Muon", Hadronic Journal 9, p. 263, (1987).

24.    Ernst L. Wall.  "On Tachyons and Hadrons",  Hadronic Journal  9, p. 239, 1986.

25.    Ernst L. Wall.  "Indirect Evidence for the Existence of Tachyons; A Unified Approach to the Pion → Muon → Electron Conversion Problem", Hadronic Journal 8, p. 311 (1985).

26.    Ernst L. Wall.   "The Role of Tachyons in Electron Spin and Muon Spin",  Bulletin of the American Physical Society  30, p. 729 (1985).

27.    Ernst L. Wall.   "The Role of Tachyons in Proton Spin",  Bulletin of the American Physical Society 30, p. 729 (1985),

28.    Ernst L. Wall.  "Hamming Code Error Correction for Microprocessors", Chapter 3,  Microprocessor Applications Handbook, edited by D. Stout.  McGraw‑Hill.

29.    Ernst L. Wall.  "Applying the Hamming Code to Microprocessor - Based Systems", Electronics (McGraw-Hill) 52, p. 103.  (Note that this was the feature (cover) article of this issue.)

30.   Ernst L. Wall.  "Edge Injection Currents and Their Effects on 1/f Noise in Planar Schottky Diodes", Solid State Electronics  19, p. 389 .

31.   E. D. Adams, G. C. Straty, and E. Wall.  "Thermal Expansion Coefficient and Compressibility of Solid Helium‑three", Physical Review Letters   15, p. 549

32.   E. D. Adams and E. L. Wall.  "Thermal Expansion Coefficient and Compressibility of Solid Helium-three”, Bulletin of the American Physical Society   10, p. 519

33.   Kerson Huang, American Journal of Physics 20, p. 479 (1952).

34.   U. S. Patent 3,800,412, awarded to Walter K. Niblack and Ernst L. Wall, "Process for Producing Surface‑Oriented Devices", April 2, 1974.


Appendix 6.  Max Planck Quote

“A scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it.”