Arcadian Functor

occasional meanderings in physics' brave new world

Marni D. Sheppeard
Location: New Zealand

Tuesday, July 29, 2008

Origin of Species

Two weeks ago, Theoretical Atlas brought to our attention the work of Rivasseau, the constructive field theorist. Now kneemo, in a frenzy of blogging, points out a new paper by Rivasseau et al. on rewriting QFT using trees.

The proposal rests on the use of combinatorial species, introduced long ago by André Joyal, the great category theorist. A lot of our playing with trees and funny infinite sums in M theory is about combinatorial species, although we haven't yet worried about the exact relation. A species is just a functor from the groupoid of finite sets (and bijections) to itself. Recall that this groupoid is a lot like the finite ordinals, which correspond to cardinalities of sets. An example that often comes up in topos theory is the functor that sends each set to its power set, namely the set of all its subsets. This example illustrates the general idea of sending a collection of objects to another collection, equipped with some structure related to the original.
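
To make the definition concrete, here is a minimal Haskell sketch (all the names here are mine, invented for illustration, not from any library): a species assigns to each finite set of labels a set of structures, together with a relabelling action along bijections, and the power set example then reads as follows.

```haskell
{-# LANGUAGE RankNTypes #-}
-- A species as data: structures on each finite label set,
-- transported functorially along bijections of labels.

import Data.List (subsequences)

data Species f = Species
  { structures :: forall a. [a] -> [f a]              -- structures on a label set
  , relabel    :: forall a b. (a -> b) -> f a -> f b  -- transport along a bijection
  }

-- The power set species: a structure on a label set is a subset of it.
newtype Subset a = Subset [a] deriving Show

powerSetSpecies :: Species Subset
powerSetSpecies = Species
  { structures = map Subset . subsequences
  , relabel    = \f (Subset xs) -> Subset (map f xs)
  }

main :: IO ()
main = print (structures powerSetSpecies "ab")
-- [Subset "",Subset "a",Subset "b",Subset "ab"]
```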

3 Comments:

Blogger Unknown said...

It may be true that species as originally defined by Joyal were endofunctors from the groupoid of finite sets to itself, but I think it was recognized very early on that it is much more flexible and useful to consider more generally V-species (for any category V), as functors from that groupoid to V. Most of the essential species operations, including for example convolution product and substitution product, are definable if we assume that V is symmetric monoidally cocomplete (symmetric monoidal and closed under colimits, such that the monoidal product preserves colimits in each of its arguments). For example, some of the most fruitful examples come from taking V to be the category of vector spaces or super-vector spaces over some field. (As in Joyal's marvelous article in Lecture Notes in Mathematics 1234.)
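
(For reference, in the case where V is vector spaces these two products take the standard form

$$(F \cdot G)[S] \;=\; \bigoplus_{S = S_1 \sqcup S_2} F[S_1] \otimes G[S_2], \qquad (F \circ G)[S] \;=\; \bigoplus_{\pi \vdash S} F[\pi] \otimes \bigotimes_{B \in \pi} G[B],$$

where the first sum runs over ordered decompositions of the finite set $S$ into two disjoint parts, and the second over partitions $\pi$ of $S$; both are colimit formulas, which is where the cocompleteness assumption on V earns its keep.)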

I hope I'm not being too obnoxiously anticipatory by also remarking that the usual notion of permutative V-operad can be defined very concisely as a monoid in the monoidal category of V-species, where the monoidal product is species substitution.
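
(Concretely, this says a V-operad is a species $\mathcal{O}$ equipped with maps

$$\gamma : \mathcal{O} \circ \mathcal{O} \to \mathcal{O}, \qquad \eta : X \to \mathcal{O},$$

where $X$ is the unit species concentrated on one-element sets, with $\gamma$ associative and unital; unwinding $\gamma$ over a partitioned set recovers the usual composition of $n$-ary operations.)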

-- Todd

July 30, 2008 2:34 PM  
Blogger nige said...

Thanks for the link to the Tree Quantum Field Theory paper by R. Gurau, J. Magnen, and V. Rivasseau, which I read hoping for some physical insights. It starts very nicely by describing the limitations of the path integral. Page 2 of http://arxiv.org/PS_cache/arxiv/pdf/0807/0807.4122v1.pdf states:

"In this paper ... we show how to base quantum field theory on trees, which lie at the right middle point between functional integrals and Feynman graphs so that they share the advantages of both, but none of their problems."

The Feynman graphs represent physical processes, albeit in a fairly abstract way. Any move away from them, towards greater mathematical abstraction, risks being a step away from concrete interaction modelling. Page 3 adds:

"Model-dependent details such as space-time dimension, interactions and propagators are therefore no longer considered fundamental. They just enter the definition of the matrix elements of this scalar product. These matrix elements are just finite sums of finite dimensional Feynman integrals. It is just the packaging of perturbation theory which is redone in a better way. This is essentially why this formalism accommodates all nice features of perturbative field theory, just curing its single but giant defect, namely divergence."

Divergences are only a mathematical problem if you insist that cutoffs are unphysical. Actually, you get infinities everywhere in physics if you don't use physically justified cutoffs to prevent absurd infinities. E.g., if you observe that the sun's radiant power in watts per square metre varies as the inverse square law of the distance of the Earth from the sun, and then extrapolate to find the radiant power in the middle of the sun (zero distance), you get an infinity. In reality, the radiant power in the middle of the sun is not infinite; it's the radiation flux associated with hydrogen fusion at 15 million K. The inverse square law here only works outside the sun. Once you start looking at what happens inside the sun, the physics changes and the mathematical inverse square "law" is no longer valid.
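
(For concreteness, the extrapolation in question is just the standard flux formula, with $L$ the sun's total luminosity:

$$F(r) \;=\; \frac{L}{4\pi r^2} \;\to\; \infty \quad \text{as } r \to 0,$$

which is only valid outside the region where the energy is actually generated.)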

In the case of infinities at high energy (or small distances from fermions) in path integrals which require a cutoff, the divergences occur as you go towards zero distance because pair-production charges would gain infinite amounts of momentum in the simple mathematical model, and as a result they would cause unphysical effects we don't observe. The error here is that loops (pair production and annihilation) require space for a pair of oppositely charged fermions to briefly separate before annihilating! Because you are going to smaller distances (less space) at higher energy, eventually the reduction in the available amount of space stops loops from forming. This means that there is a grain size to the vacuum below which (or for physical collisions at energy above a cutoff corresponding to that grain-size distance), you don't get any loop effects, because the space is too small for significant pair production and annihilation cycles to occur.

So I disagree that there is a physical problem with high energy divergences: it's physically clear why there is a need to impose a cutoff at high energy to avoid infinities (i.e. renormalize charges to prevent running couplings from going to infinity or zero as zero distance is approached). It's a pseudo-problem to try to get away from this by reformulating the physical model into more abstract concepts, a procedure which strikes me as akin to the old party game of trying to pin the tail on the donkey when blindfolded, e.g. pages 8-9:
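
(For concreteness, the running-coupling divergence referred to here is the textbook one-loop QED result, not anything from the paper under discussion:

$$\alpha(Q^2) \;=\; \frac{\alpha(\mu^2)}{1 - \frac{\alpha(\mu^2)}{3\pi} \ln\!\left(Q^2/\mu^2\right)},$$

which blows up at the finite "Landau pole" scale $Q^2 = \mu^2 e^{3\pi/\alpha(\mu^2)}$ unless a cutoff intervenes first.)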

"A QFT model is defined perturbatively by gluing propagators and vertices and computing corresponding amplitudes according to certain Feynman rules.

"Model dependent features imply the types of lines (propagators for various particles, bosons or fermions), the type of vertices, the space time dimension and its metric (Euclidean/Minkowsky, curved...).

"We now reorganize the Feynman rules by breaking Feynman graphs into pieces and putting them into boxes labeled by trees according to the canonical formula. The content of each tree box is then considered the matrix element of
a certain scalar product."

The danger here is that you're moving away from a physical understanding of what goes on in interactions, and it's not clear from the paper that any benefit really exists. Rearranging the abstract mathematical model into a new form that causes the physical problems to be less clear is pure obfuscation! It's mathematically wallpapering over the physical questions, not physically addressing them.

The Feynman graphs are the nearest thing you have to a depiction of a set of physical processes for what is really going on in producing forces, so this paper seems to be taking a step away from model building, and heading instead towards greater abstraction.

Even with the simplest mathematics, as soon as you get away from physical correspondence between the mathematics and the physical process, you get a great increase in possible mathematical models. This is the problem with all the speculations in physics: getting the mathematics to describe something that is too far from a physical process.

What I'd have loved to see is some effort in the opposite direction, trying to make the physical processes in the Feynman diagrams even more concrete, instead of breaking them up and forming a more abstract model.

The key Feynman diagrams for fundamental interactions at low energy only have two vertices. As Feynman shows in his book QED, the refraction of light by water is even simpler (for a swimmer underwater who sees a distorted position of the sun): path integrals describe it by just one vertex, namely the deflection of the light ray when it hits the water surface and is slowed down.

You work out the action for all paths with this one vertex (i.e., different angles), which then gives different weightings for interference: the phase vector in the path integral maximises contributions from light paths that take the minimum time to travel from the light source to the observer underwater. This is how the classical law of refraction (Snell's law) emerges from the path integral with a varying phase vector for different paths.
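
To make this concrete, here is a toy numerical sketch in Haskell (the geometry, the frequency, and all names are assumptions of mine for illustration, not taken from Feynman's book or the paper): sum unit phase vectors over one-vertex paths crossing the surface at different points, then check that the least-time crossing point reproduces Snell's law.

```haskell
import Data.Complex

c, n, omega, h1, h2, d :: Double
c     = 3.0e8    -- speed of light in air (m/s)
n     = 1.33     -- refractive index of water
omega = 3.5e15   -- optical angular frequency (rad/s)
h1    = 1.0      -- height of the source above the surface (m)
h2    = 1.0      -- depth of the observer below the surface (m)
d     = 1.0      -- horizontal source-observer separation (m)

-- Travel time for the path whose single vertex is at horizontal position x.
travelTime :: Double -> Double
travelTime x = sqrt (x*x + h1*h1) / c + sqrt ((d - x)^2 + h2*h2) / (c / n)

-- Each path contributes a unit phase vector exp(i * omega * t).
amp :: Double -> Complex Double
amp x = cis (omega * travelTime x)

main :: IO ()
main = do
  -- Sum over many one-vertex paths: contributions near the least-time
  -- path add coherently, the rest wash out.
  let total = sum [amp x | x <- [0.0, 0.001 .. d]]
  putStrLn ("|sum of phase vectors| = " ++ show (magnitude total))
  -- Find the least-time vertex by a crude grid search and check Snell's law.
  let xs   = [0.0, 0.0001 .. d]
      xMin = snd (minimum [(travelTime x, x) | x <- xs])
      sin1 = xMin / sqrt (xMin*xMin + h1*h1)            -- sin(incidence)
      sin2 = (d - xMin) / sqrt ((d - xMin)^2 + h2*h2)   -- sin(refraction)
  putStrLn ("sin(theta1)/sin(theta2) = " ++ show (sin1/sin2) ++ ", n = " ++ show n)
```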

I think it is therefore a mistake to try to take Feynman diagrams with lots of vertices and turn them into a tree. The many-vertex Feynman diagrams arise from pair production followed by annihilation, a "loop" cycle of physical processes. I don't see how moving to more abstract territory will be an improvement, and this paper doesn't seem to demonstrate any gains. Moving off the beaten track without a good reason is a recipe for ending up in quicksand. E.g., take a look at what happened to Dr Chris Oakley when he tried to build a quantum field theory without divergences and renormalization: http://www.cgoakley.demon.co.uk/qft/. As soon as you move away from modelling the physical interactions and mechanisms for what is going on in quantum field theory, you end up in a mathematical world of modelling things which are totally abstract rather than physical, and then you can't make falsifiable predictions.

The problem is analogous to someone realizing that the inverse square law for solar radiation absurdly predicts infinite energy density at zero distance from the middle of the sun, and then trying to guess a new mathematical law that gets around this problem, instead of simply imposing an arbitrary cutoff which acknowledges that the law breaks down at small distances. There is a large landscape of mathematical explanations and alternative models you could come up with to replace an existing law. E.g., 1/r^2 can be replaced by 1/(X + r^2) or maybe 1/(X + r)^2, or many other such possibilities, where X is a small constant that has no effect on the inverse square law at big distances, but prevents it from giving infinity at zero distance from the middle of the sun.
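
(A quick check that such regularizations do stay finite at the origin:

$$\frac{1}{X + r^2} \to \frac{1}{X}, \qquad \frac{1}{(X + r)^2} \to \frac{1}{X^2} \quad \text{as } r \to 0,$$

while both reduce to $1/r^2$ when $r$ is large compared with the scale set by $X$.)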

But unless you are modelling the actual physical processes, the mathematics is guesswork. Also, if it happens that there is a large number of possible different mathematical reformulations, then there is a low probability that any one given alternative formulation is really going to be useful.

I wish this paper gave illustrations of which Feynman diagrams were being broken up and reassembled. If it were the case that physical Feynman diagrams were being reassembled individually in order to make calculations easier, then I could understand it. E.g., when you sum a lot of vectors you can add them in any order and the resultant vector is the same; if the tree rearrangement is doing something similar, I could appreciate it. I wish category theory could be used to improve the path integral calculations.

"Perhaps most importantly it removes the space-time background from its central place in QFT, paving the way for a nonperturbative definition of field theory in noninteger dimension."

This statement in the abstract conveys very abstract ambitions. But I haven't been through all the maths in the paper because it's technical and very time-consuming, so if I'm missing anything important, please let me know. (If this comment appears badly written or unhelpful, sorry and please delete it. I just don't see what real physical problem it is solving by moving into more abstract territory.)

July 30, 2008 9:52 PM  
Anonymous Anonymous said...

Thanks Todd! Sorry, must run now and play in the snow. Will hopefully sort out web connection soon.

Kea

July 31, 2008 9:20 AM  
