### M Theory Revision

The modern understanding of Feynman diagrams comes from a beautiful body of mathematical work, such as that of Kreimer et al. on the Hopf algebra structure of renormalisation. In a category theory setting, we know that such structures rely on the concept of an operad. Moreover, higher dimensional operads appear essential in moving beyond CFT and addressing the problem of describing mass quantum numbers.

The emphasis on renormalisation is a mistake. This picture still works in a Minkowski background QFT, the idea being that the Standard Model in its rigorous guise will not be much altered. But we have seen that a twistor correspondence must be implemented, for it is only in this setting that the physical logic has a chance of being written in a topos-like language. The twistor point of view changes our use of operads. The need for higher dimensional structures becomes even more apparent as we match the 1-operad associahedra to mere real moduli.

## 7 Comments:

Yes, emphasis on renormalisation is a mistake. Physicists should be trying to predict these values rather than just "renormalise." Fascinating if the Feynman diagrams can be derived.

When I read these papers they are over my head. There is too much machinery needed for calculations in QFT to allow a lazy dilettante like me to easily follow these papers. There are six complications.

The first is that they assume Minkowski space but have to convert to Euclidean space to calculate. This doesn't seem right. I believe in my version of Euclidean space, which is easier.

The second is that they represent quantum states with spinors, and then use operators to operate on them. I use operators (i.e. density matrices) to represent quantum states, so I only have to know one sort of mathematical object. To get density operators to cover more complicated states, I use Schwinger's measurement algebra, with everything geometrized by assuming a Clifford algebra.

The third is that they represent spin and symmetry with matrix representations. I like to write things in operator form without specifying a representation. The problem with representations is that there are so many choices for how to make them. Of course I use those same Clifford algebra objects for this.

The fourth is that their quantum states are defined only up to gauge transformation. I think that there is only one universe and only one correct way to describe it. Since density matrices eliminate the arbitrary complex phase from spinors, I use density operators only, in the hope of eliminating all the gauge freedoms. (I need to have something in the foreground to give it some scale.) The usual method is to fix the gauge, which becomes very complicated for real life problems.
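
The claim that a density matrix discards the arbitrary overall phase of a spinor can be checked in a few lines (a toy sketch in Python, not part of the original comment; the state and phase values are made up for illustration):

```python
import cmath

def density_matrix(psi):
    """Build rho_ij = psi_i * conj(psi_j) from a state vector."""
    return [[a * b.conjugate() for b in psi] for a in psi]

# A normalised two-component spinor (values chosen arbitrarily).
psi = [3/5, (4/5) * 1j]

# Multiply the spinor by a global phase exp(i*theta).
theta = 1.234
phase = cmath.exp(1j * theta)
psi_rotated = [phase * c for c in psi]

rho1 = density_matrix(psi)
rho2 = density_matrix(psi_rotated)

# The two density matrices agree: the phase cancels in psi_i * conj(psi_j),
# since |exp(i*theta)|^2 = 1.
max_diff = max(abs(rho1[i][j] - rho2[i][j]) for i in range(2) for j in range(2))
print(max_diff)  # essentially zero (floating-point noise)
```

The gauge freedom of the overall phase simply never enters the density matrix, which is the point being made above.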

The fifth complication is that symmetry is insufficient to describe the known particles so they assume symmetry breaking of the vacuum state. In Schwinger's measurement algebra, the vacuum is a fictitious state that is there for calculational convenience only. This implies a preon model.

Finally, all I'm trying to explain is the point particles, so I assume all the interactions happen at a single point in space and time. They're working on interactions between point particles at different points so they have to deal with very complicated Feynman integrals over spacetime. I just have to deal with finite dimensional arithmetic.

Nevertheless, despite all the differences, the problem is the same: starting with a collection of bare (free field Feynman diagram) propagators, how do you assemble them into a dressed propagator? With all the advantages I've given myself, you'd think I'd be done already, but I'm fairly slow.

Hi Louise and Carl

Yes, we must be very slow indeed not to have polished this all off yet, because as you say, everything is quite simple. Maybe if a few more really slow people join in the fun we'll start getting somewhere.

Hi Kea

1 - I think Feynman diagrams are related to EE phasors.

[From a University of Colorado web page; author and source text not known]

Chapter 11, "Particle reality": the term "phasors" is used on p. 13, and appendix figures 11.6 and 11.7 are phasor-like.

http://www.colorado.edu/philosophy/vstenger/Timeless/11-particle.pdf

2 - I finally started reading 'Moonshine Beyond the Monster: The Bridge Connecting Algebra, Modular Forms and Physics' (Cambridge Monographs on Mathematical Physics) by Terry Gannon.

It is available on Amazon with the Search Inside This Book feature.

Great review text with 575 references to proofs etc.

Even I can understand some of the material in the first two chapters and the first section of chapter 3.

Thanks for the link to the fairly interesting string-oriented conformal field theory introduction paper by Matthias R. Gaberdiel, http://arxiv.org/abs/hep-th/9910156

"From an abstract point of view, conformal field theories are Euclidean quantum field theories that are characterised by the property that their symmetry group contains, in addition to the Euclidean symmetries, local conformal transformations, i.e. transformations that preserve angles but not lengths. The local conformal symmetry is of special importance in two dimensions since the corresponding symmetry algebra is infinite-dimensional in this case."

Regarding renormalization, I disagree that the emphasis on it is a mistake, because the physical basis of renormalization, namely the role of vacuum polarization (between the IR and UV cutoffs) in shielding the core charge of a particle, is highly attractive to me and helps to give a clear picture of what QFT is all about!

The entire difference between Maxwellian electrodynamics and QED lies in the polarization of the vacuum by the shell, or rather hollow sphere, of virtual particle loops, which get polarized and shield the core, and which also cause the Lamb shift and the anomalous magnetic moment of the electron. That is, they increase the electron's magnetic moment by 0.116%, from the 1 Bohr magneton predicted by Dirac's non-renormalized theory to approximately 1 + alpha/(2*Pi) = 1.00116 after the Schwinger correction (with further corrective terms for more complex, less important interactions between the electron and its own vacuum field loops).

This interaction picture of loops allowing polarization, which has a physical interpretation in simple, visualizable Dirac sea models, is vital for getting a solid handle on what the mathematical structure of QFT physically corresponds to.

Also, renormalization is vital for getting any checkable predictions from QFT beyond those Dirac obtained, such as the electron's properties, antimatter and, I think, the Klein-Nishina cross-section for the Compton effect.

You need renormalization to predict vacuum perturbation effects like the Lamb shift and the anomalous magnetic moments of the leptons (electron and muon).

Dr Oakley, who is an opponent of string theory, seems to be completely against renormalization; see http://www.cgoakley.demon.co.uk/qft/

The danger of ignoring empirically defensible physics like renormalization is that the theorems you are left with are not much use. If you fiddle around with mathematical theorems without a physical picture of the vacuum such as Dirac and Feynman had (Dirac sea phenomena), you can go on forever getting nowhere. This is like trying to stop the Titanic from sinking by moving the deck chairs around a bit to shift the weight.

Even with simple algebra, it's pretty hopeless to play around without experimental input, trying to discover something.

You have to build a theory on factual evidence, and then make predictions. Otherwise, it's like string theory or even worse.

Hi Nigel

Oh, I think renormalisation theory (especially a la Connes and Marcolli) is very important and useful. But it isn't enough on its own to understand QFT. From our perspective it is an effective description in a Hopf algebraic regime, within which one cannot hope to describe mass.

Hi Kea,

Obviously a gravitational field, where the "charges" are masses, is non-renormalizable because you can't polarize a field of mass.

In an electric field above 10^18 V/m or so, electric charges in the Dirac sea are polarizable; these charges appear spontaneously as part of photon -> electron + positron creation-annihilation "loops" in spacetime.

This is because virtual positrons are attracted to the real electron core while virtual electrons are repelled from it, so there is a slight vacuum displacement, resulting in a cancellation of part of the core charge of the electron.

This explains how electric charge is a renormalizable quantity. The problem is that this heuristic picture doesn't explain why mass is renormalized. For consistency, mass as well as electric charge should be renormalizable to get a working quantum gravity. However, Lunsford's unification of gravity and electromagnetism (which works) shows that both fields are different aspects of the same thing.

Clearly the charge for quantum gravity is some kind of vacuum particle, like a Higgs boson, which via electric field phenomena can be associated with electric charges, giving them mass.

Hence for electric fields, the electron couples directly with the electric field.

For gravitational fields and inertia (i.e., spacetime "curvature" in general) the interaction is indirect: the electron couples to a vacuum particle such as one or more Higgs bosons, which in turn couple with the background field (gravity-causing Yang-Mills exchange radiation).

In this way, renormalization of gravity is identical to renormalization of electric field, because both gravity and electromagnetism depend on the renormalizable electric charge (directly in the case of electromagnetic fields, but indirectly in the case of spacetime curvature).

The renormalization of electric charge and mass for an electron is discussed vividly by Rose in an early introduction to electrodynamics (books written at the time the theory was being grounded are more likely to be helpful for physical intuition than the modern expositions, which try to dispense with physics and present only abstract maths):

Dr M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, pp 75-6:

‘The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy ... The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].

‘Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled ‘negative energy sea’ the complete theory (hole theory) can no longer be a single-particle theory.

‘The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.

‘In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].

‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. ... If these contributions were cut off in any reasonable manner, m’ - m and e’ - e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.

‘All this means that the present theory of electrons and fields is not complete. ... The particles ... are treated as ‘bare’ particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example ... the Dirac theory predicts a magnetic moment of mu = mu[zero] for the electron, whereas a more complete treatment [including Schwinger’s coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu[zero].(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu[zero] = 1.001 ...’

Notice in the above that the magnetic moment of the electron as calculated by QED with the first vacuum loop coupling correction is 1 + alpha/(twice Pi) = 1.00116 Bohr magnetons. The 1 is the Dirac prediction, and the added alpha/(twice Pi) links into the mechanism for mass here.
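
The arithmetic behind that figure is easy to check (a quick sketch; the value of alpha below is the standard CODATA-style figure, not taken from the text):

```python
import math

alpha = 1 / 137.035999  # fine structure constant (approximate CODATA value)

# Dirac theory predicts exactly 1 Bohr magneton; the first-order (Schwinger)
# vacuum-loop correction adds alpha/(2*pi).
schwinger_term = alpha / (2 * math.pi)
magnetic_moment = 1 + schwinger_term

print(round(schwinger_term, 5))   # 0.00116
print(round(magnetic_moment, 5))  # 1.00116
```

This reproduces the 1.00116 Bohr magnetons quoted above, in agreement with Rose's mu/mu[zero] = 1.001 to that precision.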

Most of the charge is screened out by polarised charges in the vacuum around the electron core:

‘... we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum ... amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ - arxiv hep-th/0510040, p 71.

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.
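
The screening described in these two quotations corresponds to the textbook one-loop running of the QED coupling. A sketch (my addition, with only the electron loop included, so the running is underestimated; the full Standard Model result near the Z mass is about 1/128):

```python
import math

alpha0 = 1 / 137.035999   # low-energy fine structure constant
m_e = 0.000511            # electron mass in GeV
Q = 91.19                 # probe energy in GeV (around the Z mass)

# One-loop QED running with a single electron loop:
#   alpha(Q) = alpha0 / (1 - (alpha0 / (3*pi)) * ln(Q^2 / m_e^2))
log_term = math.log(Q**2 / m_e**2)
alpha_Q = alpha0 / (1 - alpha0 / (3 * math.pi) * log_term)

# At short distances (high Q) less of the vacuum polarization cloud
# shields the core charge, so the effective coupling is larger.
print(round(1 / alpha_Q, 1))  # about 134.5
```

The effective charge grows as the probe penetrates the polarized cloud, which is exactly the screening picture in the quotations.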

The way that both electromagnetism and gravity can arise from a single mechanism is quite simple when you try to calculate the ways electric charge can be summed in the universe, assuming say 10^80 positive charges and a similar number of negative charges randomly distributed.

If you assume that Yang-Mills exchange radiation is permitted to take any path possible between all of the charges, you end up with two solutions. Think of it as a lot of charged capacitors with vacuum or air dielectric between the charged plates, all arranged at random orientations throughout the volume of a large room. The drunkard's walk of gauge boson radiation between similar charges results in a strong electromagnetic force which can be either positive or negative and can result in either attraction or repulsion. The net force strength turns out to be proportional to the square root of the number of charges, because the inverse square law due to geometric divergence is totally cancelled out due to the fact that the divergence of gauge radiation going away from one particular charge is cancelled out by the convergence of gauge boson radiation going towards that charge. Hence, the only influence on the resulting net force strength is the number of charges. The force strength turns out to be the average contribution per charge multiplied by the square root of the total number of charges.
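
The square-root claim in the paragraph above is just the statistics of a random walk: the RMS sum of N random +/-1 steps grows as sqrt(N). A quick Monte Carlo sketch (my illustration, with toy numbers, not part of the original comment):

```python
import math
import random

random.seed(42)  # reproducible toy run

def rms_net_charge(n_charges, trials):
    """RMS of the sum of n_charges random +/-1 charges over many trials."""
    total = 0.0
    for _ in range(trials):
        s = sum(random.choice((-1, 1)) for _ in range(n_charges))
        total += s * s
    return math.sqrt(total / trials)

n = 2500
rms = rms_net_charge(n, trials=1000)
# The RMS net charge comes out close to sqrt(N) = 50.
print(rms, math.sqrt(n))
```

So a drunkard's-walk sum over N randomly signed contributions has a typical magnitude of sqrt(N), which is the scaling invoked in the argument.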

However, the alternative solution is to ignore the random walk between similar charges (the zig-zag is required to avoid near cancellation by the equal numbers of positive and negative charges in the universe) and consider a radial line addition across the universe.

The radial line addition is obviously much weaker, because if you draw a long line through the universe, you expect to find that 50% of the charges it passes through are positive, and 50% are negative.

However, there is also the saving grace that such a line is 50% likely to pass through an even number of charges, and 50% likely to pass through an odd number of charges.

The situation we are interested in is the case of an odd number of charges, because then there will always be a net charge present (for an even number, there will on average be no net charge). Hence, the relative force strength for this radial line summation (which is obviously the LeSage "shadowing" effect of gravity) is one unit, from the single odd charge. It turns out that this is an attractive force: gravity. By comparison to the electromagnetic force mechanism, gravity is smaller in strength than electromagnetism by the square root of the number of charges in the universe.

Since it is possible, from the mechanism based on Lunsford's unification of electromagnetism and gravity (which has three orthogonal time dimensions, SO(3,3)), to predict the strength constant of gravity, it is thus possible to predict the strength of electromagnetism by multiplying the gravity coupling constant by the square root of the number of charges.

Lunsford, Int. J. Theoretical Physics, v. 43 (2004), No. 1, pp. 161-177:

http://cdsweb.cern.ch/record/688763

http://www.math.columbia.edu/~woit/wordpress/?p=128#comment-1932
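
As a rough order-of-magnitude check of the claim above (my sketch, using standard constants; the figure of 10^80 charges is the comment's assumption): sqrt(10^80) = 10^40, while the measured ratio of electric to gravitational force between two electrons is about 4 x 10^42, so the two numbers are within a couple of orders of magnitude of each other rather than in exact agreement.

```python
import math

# Standard constants (SI units, approximate CODATA values)
k = 8.9875517923e9       # Coulomb constant, N m^2 / C^2
G = 6.67430e-11          # Newton constant, N m^2 / kg^2
e = 1.602176634e-19      # elementary charge, C
m_e = 9.1093837015e-31   # electron mass, kg

# Measured ratio of electric to gravitational force between two electrons
# (distance cancels, since both forces are inverse square).
measured_ratio = k * e**2 / (G * m_e**2)

# The comment's claim: gravity is weaker than electromagnetism by sqrt(N),
# with N ~ 10^80 charges in the universe (an assumed number).
claimed_ratio = math.sqrt(1e80)

print(f"{measured_ratio:.2e}")  # about 4.17e+42
print(f"{claimed_ratio:.2e}")   # 1.00e+40
```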
