Arcadian Functor

occasional meanderings in physics' brave new world

Name: Marni D. Sheppeard
Location: New Zealand

Friday, July 06, 2007

On The Way

Well, I'll be on the plane to Sydney in a few hours. D. Sudarsky just gave a refreshing talk here on his cosmic structure paper, which takes Penrose's gravitational collapse ideas seriously. He was not afraid to criticize common thinking in cosmology, arguing that some basic alteration to quantum mechanics really is essential. Unfortunately, he used what looked like a fairly conventional inflaton to analyse collapse, but at least this was observationally motivated and the analysis has predictive power.

As time passes, I am becoming more and more confused by the fact that many physicists still appear quite content to study quantum gravity using an old notion of observable, and by their assumption that this should dictate the existence of enormous numbers of particles which have never been observed. And not just a little neutrino (predicted on the basis of conservation of energy) or a few quarks (predicted on the basis of lattice patterns), but ridiculous numbers of squishies which are conveniently too massive to have been observed. How can one possibly investigate new physical regimes in a framework so clearly based on guesswork? I just don't get it, and I would really like someone to explain it to me.

4 Comments:

Blogger Matti Pitkänen said...

Dear Kea,

once one gives up the notion of a pointlike particle, which seems necessary to tame the infinities, one ends up with an infinite spectrum of excitations unless one has a topological QFT.

I would not see the direct non-observability as a problem if the effects of ultraheavy excitations on the physics in the massless sector make them visible in a testable manner.

In p-adic thermodynamics just this occurs. One can loosely say that the electron gets its rest mass by spending a fraction of time of order 10^-38 (corresponding to the Mersenne prime M_127) in ultraheavy states with a mass of order 10^-4 Planck masses.

Since this gives excellent predictions for both lepton and quark masses, and even for hadron masses, to say nothing of a predictive new view of what non-perturbative hadron physics is, one can say that the superheavy particles are, in a well-defined sense, visible in the TGD Universe.

Ultraheavy cosmic rays propagating long distances along dark space-time sheets, where the Planck constant is large and dissipation is low, could allow us to observe direct collisions with ordinary matter of, say, M_31 protons with a mass of about 2.6*10^11 GeV, above the GZK bound (see my latest posting).
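A quick back-of-the-envelope check of the orders of magnitude quoted above, as a minimal Python sketch: the Planck mass and the GZK cutoff are standard values, while the Mersenne-prime interpretation is simply taken at face value from the comment.

    # Back-of-the-envelope check of the numbers quoted above; the constants
    # are standard values, the Mersenne-prime interpretation is TGD's own.
    M_127 = 2**127 - 1            # Mersenne prime, roughly 1.7e38
    M_31 = 2**31 - 1              # Mersenne prime, roughly 2.1e9
    planck_mass_gev = 1.22e19     # Planck mass in GeV
    gzk_cutoff_gev = 5e10         # GZK cutoff, ~5e19 eV expressed in GeV

    print(f"1/M_127          ~ {1 / M_127:.1e}   (the 'fraction of time' of order 1e-38)")
    print(f"1e-4 Planck mass ~ {1e-4 * planck_mass_gev:.1e} GeV  (the 'ultraheavy' mass scale)")
    print(f"M_31 proton      ~ 2.6e11 GeV, vs GZK cutoff ~ {gzk_cutoff_gev:.0e} GeV")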

July 06, 2007 7:01 PM  
Blogger Kea said...

Matti, I wasn't really talking about TGD, but rather strings. Either way, the question is: what is a strict notion of observable?

July 07, 2007 6:50 PM  
Anonymous Anonymous said...

Hi Kea,

My explanation of the inflation of hypothesized particles predicted by this or that theory is that theorists are smart, and they have recognized that as long as their models predict things that cannot easily be ruled out, they deserve a salary to continue working on them.

What is disastrous, instead, is constructing a theory that can be falsified with next year's worth of data at this or that experiment.

So what I really think is at work is a sort of evolutionary selection among theorists. Those who predict more stuff that is hard to reach are the ones who survive longest in academia. That ends up governing the asymptotic behaviour of particle theory.

Cheers,
T.

July 08, 2007 4:30 AM  
Blogger nige said...

Tommaso:

What you write about evolution explaining the rise of string theory is excellent: non-falsifiable theories are (ironically) "fitter" and better able to survive than theories which make easily-checkable predictions.

So falsifiable theories get filtered out, while non-falsifiable theories paradoxically survive. The rise of string theory is a freak of evolution. In the early days - and sometimes even now - string theorists have defended themselves by saying that the theory is too complex to evaluate fully and quickly, and by begging for more and more time to work on it, in the hope of making falsifiable predictions one day. As that hope recedes out of sight (far into the unknown landscape of 10^500 possibilities), the string theorists start saying that string theory is a replacement for religion, and that we must believe it is true not because of experimental evidence (it has none) but because of some abstract quality called beauty, much like a religion.

Kea:

The paradox you raise, whereby mainstream supersymmetric theory is non-falsifiable, reminds me of Lee Smolin's lengthy discussion in The Trouble with Physics (referred to as TTWP hereafter), chapters 18 and 19.

He starts with the (to my mind totally false) claim:

"The one thing everyone who cares about fundamental physics seems to agree on is that new ideas are needed. From the most skeptical critics to the most strenuous advocates of string theory, you hear the same thing: We are missing something big."

That is on page 308 of the U.S. edition of TTWP.

Problem is, string theorists don't admit that something really "big" is missing: they have built up a framework of ideas which can't accommodate any really big changes in thinking. String theorists merely think that some technical innovation is required to help select the correct vacuum state from the 10^500 theories of the landscape. They don't expect, and they certainly don't want, any radical innovation which sweeps away their framework of ideas. This prohibits their expectation of some big new insight.

On page 311, Smolin writes:

"When I first encountered Kuhn's categories of revolutionary and normal science as an undergraduate, I was confused, because I couldn't tell which period we were in. If I looked at the kinds of questions that remained open, we were clearly partway through a revolution. But if I looked at how the people arund me worked, we were just as obviously doing normal science. ... We are indeed in a revolutionary period, but we are trying to get out of it using the inadequate tools and organization of normal science."

On the next page (p312) he writes that during the revolution of quantum mechanics circa 1925:

"People who couldn't let go of their misgivings over the meaning of quantum theory were regarded as losers who couldn't do the work."

That really is the key. Einstein's opposition to quantum mechanics basically amounted to a complaint about the incompleteness of the early theory. John von Neumann falsely claimed to have disproved the existence of hidden variables in 1932, but Bohr and Heisenberg had already been asserting as much at the 1927 Solvay Congress.

The great fallacy is the stupid claim that "any new theory must encompass all that has gone before it".

Not so - quantum mechanics and classical mechanics were initially separated by Bohr's "Complementarity principle", which asserted that the apparent contradiction between classical waves and quantum particles is actually a complementarity, not a contradiction, because (he asserted) in any given experiment you can detect particle-like or wave-like behaviour, but not both.

This is what Einstein objected to. Feynman got rid of the Complementarity principle by inventing path integrals:

'I would like to put the uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas ... But at a certain point the old-fashioned ideas would begin to fail, so a warning was developed that said, in effect, "Your old-fashioned ideas are no damn good when ..." If you get rid of all the old fashioned ideas and instead use the ideas that I'm explaining in these lectures - adding arrows for all the ways an event can happen - there is no need for an uncertainty principle!' - R. P. Feynman, QED, Penguin, 1990, footnote on pages 55-6.

‘When we look at photons on a large scale ... there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small ... these rules fail ... The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [due to pair-production of virtual fermions in the very strong electric fields (above the 1.3*10^18 v/m Schwinger threshold electric field strength for pair-production) on small distance scales] becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’ - R. P. Feynman, QED, Penguin, 1990, pages 84-5.
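The "adding arrows" picture in these quotes amounts to summing one complex phase per alternative path. A minimal toy sketch in Python (using a made-up quadratic phase, not any real QED amplitude) shows how the arrows reinforce only near the stationary path when the phases rotate quickly, and how every path contributes when they do not:

    import numpy as np

    # Toy "adding arrows" sum: one unit arrow (complex phase) per alternative
    # path, with an illustrative quadratic phase around the stationary path at
    # x = 0. A sketch of stationary-phase reinforcement, not a QED calculation.
    def arrow_sum(scale, n_paths=20001):
        x = np.linspace(-10.0, 10.0, n_paths)   # label for each alternative path
        arrows = np.exp(1j * scale * x**2)      # phase grows quadratically away from x = 0
        return abs(arrows.sum()) / n_paths      # net arrow length per path

    # Rapidly rotating phases: distant arrows cancel, only paths near the
    # stationary one survive (the large-scale, particle-like limit).
    print(arrow_sum(scale=5.0))    # small: the sum is dominated by x ~ 0
    # Slowly rotating phases: nearly all arrows point the same way, so every
    # path matters ("all sorts of ways the electron could go").
    print(arrow_sum(scale=0.01))   # close to 1: contributions add coherently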

So there is a Bohr versus Feynman problem. Smolin writes in TTWP that Bohr was not a Feynman "shut up and calculate" physicist:

"Mara Beller, a historian who has studied his [Bohr's] work in detail, points out tha there was not a single calculation in his research notebooks, which were all verbal argumen and pictures."

As you might expect, Feynman's path integrals were savagely attacked by Bohr at the 1948 Pocono conference:

' ... Bohr ... said: "... one could not talk about the trajectory of an electron in the atom, because it was something not observable." ... Bohr thought that I didn’t know the uncertainty principle ... it didn’t make me angry, it just made me realize that ... [ they ] ... didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up ...' - The Beat of a Different Drum: The Life and Science of Richard Feynman, by Jagdish Mehra (Oxford 1994) (pp. 245-248). [Quoted by Tony Smith.]

As you can see from Feynman's explanation, path integrals replace the uncertainty principle. That's radical, and certainly not to Bohr's liking. So Bohr pretended that Feynman had made an error. As Feynman explained, you cannot educate people with the mindset of Bohr. They assume that any new theory must be completely consistent with all previous ideas, instead of replacing obsolete ideas.

The idea that every new theory must contain every old theory as a subset is widely acknowledged to be true, yet it is obviously false, as we see from the examples of caloric and phlogiston.

Maybe you can argue that flat-earth theory is an approximation to a curved earth where the curvature is negligible (though even that approximation is ultimately limited in its applicability, because on small scales the ground or ocean is not completely flat, and on very small scales the ground is lumpy rather than smooth, since particles and atoms aren't smooth).

But even if you can claim that the earth is flat on small distance scales, you can't so easily claim that modern thermodynamics includes caloric and phlogiston as a subset. Caloric was a fluid theory of heat, and convection currents are indeed flows of hot air, but this misses out radiation and conduction of energy. Phlogiston is even harder, because it was supposed that phlogiston escaped from burning wood, when in fact the dynamics are far more complex: the carbon in wood gets oxidised to gases like CO_2, which escape into the air.

String theorists are totally deluded if they think that any future science must include supersymmetry and spin-2 stringy gravitons. But deluded they are!

They are deluded because they choose, like the followers of Bohr such as Oppenheimer (who vigorously opposed Feynman's theory for as long as possible, despite explanations by Dyson and Bethe), to believe in old-fashioned ideas with fanatical, religious-like passion. Most of the time, these fanatics can't be reasoned with, because they simply ignore alternatives completely or sneer at them.

The only way to proceed at all is to apply Smolin's summary of bigots (on p312 of TTWP) to stringers:

"People who couldn't let go of their misgivings over the meaning of quantum theory were regarded as losers who couldn't do the work."

That passage applies directly to mainstream string theorists! Mainstream string theorists aren't doing checkable physics; they have no falsifiable calculations for anything, because the 6 compactified dimensions are unobservable, so their sizes and shapes have 10^500 different possible combinations and can't lead to falsifiable predictions. So mainstream string theorists are losers whose work is totally uncheckable speculation, and whose philosophical arguments about how they think particle-wave duality is explained (by an oscillating string) miss the point that physical theories should address observables, not spin-2 gravitons etc.

July 08, 2007 11:51 AM  
