# Arcadian Functor

occasional meanderings in physics' brave new world

## Saturday, May 31, 2008

I'm going to leave the Saturday report until later, because I need to catch up on sleep before waitressing all day tomorrow. The committee chose Boston for 2014 and confirmed Athens 2010 and Kyoto 2012. As Schneps pointed out, the weather was perfect all week because everybody got given an umbrella in their conference bag. There was a lot of talk about New Physics, but almost nothing other than sterile neutrinos or stringy or loopy models was discussed.

### Neutrino08 - GSI

It turns out that the GSI poster did in fact belong to Manfred Lindner, who gave a 15 minute talk on the anomaly late Friday. As indicated by the poster, he wanted to stress to theorists the silliness of rushing to publish explanations for rumoured oscillations, which it turns out cannot have anything to do (directly) with neutrino oscillations.

GSI has the ability to see single ions via Schottky noise detection, after creating monoisotopic beams. The systems in question are the decays of $^{140}$Pr$^{58+}$ to Ce and $^{142}$Pm$^{60+}$ to Nd. On observing the decays over a period of about 80 seconds, they find superimposed oscillations (at very high $\sigma$) in the count rate, with period $T = 7$ s in both cases. Also note that the phases differ between the two systems, whereas one might expect them to be the same if set at $t = 0$ by some mechanism. They say the oscillations cannot be due to neutrino oscillations, because the capture process should be independent of neutrino mixing.

So what is it? There was no clear answer given, but suggestions include a tiny splitting in the mother system, which sounds reasonable. A new run to clarify the situation should be made this fall.
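
To make the reported effect concrete, here is a toy model (my own sketch, not the GSI analysis) of an exponential decay rate with a superimposed oscillation of period $T = 7$ s; the lifetime, amplitude and phase below are illustrative guesses, not the measured values.

```python
import math

# Toy model of the GSI anomaly: an exponential decay rate with a
# superimposed oscillation of period T = 7 s. The lifetime tau, the
# amplitude a and the phase phi are illustrative guesses, not the
# measured GSI values.
T = 7.0                    # oscillation period in seconds (from the talk)
omega = 2.0 * math.pi / T
tau = 48.0                 # assumed mother-ion lifetime in seconds
a, phi = 0.2, 0.0          # assumed modulation amplitude and phase

def decay_rate(t):
    """Modulated decay rate dN/dt, up to an overall normalisation."""
    return math.exp(-t / tau) * (1.0 + a * math.cos(omega * t + phi))

def modulation(t):
    """Ratio of the modulated rate to the pure exponential."""
    return decay_rate(t) / math.exp(-t / tau)

# The modulation repeats every T seconds.
assert abs(modulation(0.0) - modulation(T)) < 1e-9
```

Fitting such a model to the measured count rate is how a period and the (differing) phases would be extracted.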

### Neutrino08 Day 5g

Perhaps you were thinking we had reached the end of the neutrino experiment catalogue? Hah, hah! The organisers saved the biggest experiments for last. Moving on to deep water and Antarctic ice high energy detectors, T. DeYoung presented results from 7 years of AMANDA, which was fully deployed in 2000.

The 7 year data set amounts to 3.8 years of total live time for 667 optical modules on 19 strings reaching down to 2500m below the surface. The point source search considered 6595 events and a preliminary skymap was shown. Taking into account that 95 out of 100 background maps have point sources with $> 3.38 \sigma$, they conclude that there are no clear observations, but upward fluctuations include MGRO J2019+37 and Geminga. For the solar WIMP search, a preliminary limit on 4 years of data beats the SuperK bound for higher neutralino masses. The IceCube experiment should yield a significant improvement in the 30-100 GeV range. A 7 year analysis paper is due out soon, and AMANDA is being fully incorporated into the IceCube experiment.
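
The remark about the background maps is a trials (look-elsewhere) effect: with enough effective sky bins, background alone will produce a $3.38 \sigma$ spot most of the time. A back-of-envelope sketch (my own; the number of trials is inferred, not taken from the talk):

```python
import math

# If 95 of 100 pure-background skymaps contain a > 3.38 sigma spot,
# we can infer the implied number of independent trials N from
# 1 - (1 - p)^N = 0.95, where p is the one-sided Gaussian tail
# probability of a single 3.38 sigma fluctuation.
def normal_sf(z):
    """One-sided Gaussian tail probability P(Z > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

p_single = normal_sf(3.38)
n_trials = math.log(1.0 - 0.95) / math.log(1.0 - p_single)
print(int(round(n_trials)))   # of order 10^4 effective trials
```

So a lone $3.38 \sigma$ hot spot carries essentially no evidence for a point source once the trials factor is included.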

The ANTARES ocean detector was completed only 12 hours before Carr's report on it. Analysis is progressing on 400 $\nu$ events. Finally, Migneco covered Baikal, Nestor and KM3NeT.

### Neutrino08 Day 5f

By mid afternoon the guy next to me, in the front row near the stage right under the bright lights, has fallen asleep. I guess not everybody here finds the theory talks interesting. Anyway, Richard Easther, an ex-local, was next up with a talk on neutrinos and Future Concordance. He described the 6 basic input parameters, in particular $\Omega_{\Lambda}$, his description of which invoked a Tooth Fairy, which he pointed out might actually be a real tooth fairy under anthropic reasoning. The question is, how will the parameter set develop? It could shrink from knowledge of masses, or perhaps expand.

The important point is that the WMAP observations are now sufficiently accurate that neutrino physics is becoming essential to progress in cosmology. Limits indicate that $\sum m_{i}$ is probably less than 1 eV (and perhaps as low as 0.06 eV) and Planck may well see down to around 0.15 eV. The second half of the talk was an intriguing analysis of the idea of a 21cm high $z$ map of the sky. This would require new foreground removal techniques and a very quiet radio location (perhaps the SKA in Western Australia)! See also this paper.

Let me briefly sketch the rest of Friday theory. Another theorist, Shaposhnikov, discussed his sterile neutrino scenario and methods for detecting them. Nir described the Sakharov conditions. In particular, the MSSM should be testable at the LHC, because parameter constraints indicate $m_{\chi} < 250$ GeV (that's some funny stringy particle). Here is his introduction to Leptogenesis.

### Neutrino08 Day 5e

M. Nakahata introduced supernova detection with 1987A and a long list of past and current detectors including Borexino, KamLAND, SuperK and LVD. He estimated about a 7% probability that a supernova at < 3.16 kpc will generate a 10% improvement in statistics. A detector with a reach of 20 kpc would cover 97% of galactic supernova potential. For SuperK, one expects around 100 $^{16}$O CC events for a supernova at 10 kpc, and a total of 7300 ${\overline{\nu}}_{e} + p$ events. This should be capable of distinguishing between some models. The original neutrino temperatures would be discerned to around 10%. See here for a flux limit on relic neutrinos. To achieve high background reduction one requires neutron tagging in the water detector, and SuperK is introducing Gd for this purpose.

Astrophysical candidate sources were discussed by N. Bell. Deviations from the 1:1:1 ratio at Earth could arise for a number of exotic reasons. For example, a normal hierarchy $\nu$ decay could lead to a 5:1:1 (overabundant in electron neutrinos) observation.

## Friday, May 30, 2008

### Neutrino08 Day 5d

Senjanovic suggested there was a moderately optimistic hope of new dynamics at the LHC. The most promising channel is considered to be same sign dileptons plus jets. Budge quickly outlined standard models of supernovae, from 1938 to the present. He then discussed numerical techniques and the possible need for a tooth fairy since transport methods have difficulty producing an explosion. But John Learned, the star commenter of the conference, pointed out that 3d simulations might well be necessary here, especially given the established correlation with GRBs.

One expects a release of roughly 120 foe (1 foe = $10^{51}$ erg) in the initial collapse, followed by 240 foe during the cooling phase. However, supernova 1987A released 1.7 foe visibly, which is consistent with an antineutrino energy of only 100 foe.

Dighe spoke about the potential of another big supernova observation. A day before the explosion, in the neutronization phase, there is $\nu$ emission for about 10 ms. In the cooling phase $\nu$ emission lasts about 10 seconds. The Early Warning system will hopefully help us catch all we can if one goes off. The important thing to note is that the $\nu$ and $\overline{\nu}$ fluxes at Earth should distinguish (i) the hierarchy type (which is normal from Carl's mass values) and (ii) large/small $\theta_{13}$ (TBM says zero, i.e. small). See this paper.

### Neutrino08 Day 5c

One of the best talks of the conference (I might be biased) was S. King's outline of neutrino mass models. The introduction explained why we really need to go beyond the Standard Model (in which neutrinos are massless) to understand neutrino mass. King then presented his personal roadmap, a large flowchart (ultimately ending in Some Big Theory) that guides one through a series of true/false questions. He opted to begin with the LSND result, now assumed false. What about large extra dimensions and the string scale? The Majorana option seems nicer, because one can get naturally small neutrino masses via a lepton number violating operator involving some heavier particle. Following the chart, if the hierarchy is normal, then the natural mixing appears to be tribimaximal. This would make $\theta_{13} = 0$. Why should this be exact?

Consider instead an expansion around the TBM matrix (see here). This suggests a new family symmetry such as the $A_{4}$ group of the tetrahedron! This could arise from something like 6D orbifolding. See also here. Given its agreement so far with experimental results, the status of the TBM matrix is considered a key question. In summary, he stresses that the status quo is not an option.
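
For concreteness, the TBM matrix and its vanishing $\theta_{13}$ can be checked numerically (the sign convention below is one common choice, my own, not necessarily King's):

```python
import math

# A minimal check of the tribimaximal (TBM) mixing matrix: theta_13
# vanishes exactly, sin^2(theta_12) = 1/3 and sin^2(theta_23) = 1/2.
TBM = [
    [math.sqrt(2.0 / 3.0), 1.0 / math.sqrt(3.0), 0.0],
    [-1.0 / math.sqrt(6.0), 1.0 / math.sqrt(3.0), -1.0 / math.sqrt(2.0)],
    [-1.0 / math.sqrt(6.0), 1.0 / math.sqrt(3.0), 1.0 / math.sqrt(2.0)],
]

def row_dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Unitarity: the rows are orthonormal.
for i in range(3):
    for j in range(3):
        expected = 1.0 if i == j else 0.0
        assert abs(row_dot(TBM[i], TBM[j]) - expected) < 1e-12

sin_theta13 = abs(TBM[0][2])                          # |U_e3|
sin2_theta12 = TBM[0][1] ** 2 / (1 - sin_theta13 ** 2)
print(sin_theta13, round(sin2_theta12, 6))            # -> 0.0 0.333333
```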

### Neutrino08 Day 5b

J. Monroe had such beautiful diagrams that I didn't take many notes on neutrino backgrounds for DM searches (see this paper). The first directional limits come from the NEWAGE experiment. Anyway, most slides are now available on the website and the remainder should be there soon. Another short talk, by D. Nygren, discussed $0 \nu \beta \beta$ and WIMPs using high pressure Xenon. There was also a mention of DUET.

### Neutrino08 Day 5a

Sadoulet started Friday morning with Dark Matter detection via cosmology, noble liquids, phonon mediated detectors and possibly DAMA. He started with the statement that WIMPs are a generic consequence of new physics at the TeV scale. This was followed by a clear, brief overview, including a BBN/WMAP value for the baryonic fraction of $\Omega_{b} = 0.047 \pm 0.006$. His focus was on direct detection of DM via elastic scattering, which would occur via a nuclear recoil signal at an expected rate of one event per kg of target every two months. Signatures include uniformity throughout the detector and galaxy correlation. The big challenge is freedom from backgrounds.

A noble liquid detector using Xenon or Argon (see XENON, ZEPLIN-II, DEAP, MiniCLEAN, WARP and ArDM) takes advantage of recent breakthroughs in electron extraction and in separation of electron and nuclear recoils. Plots of cross section (vs WIMP mass) exclusions were shown for these experiments and also for phonon mediated experiments such as CDMS. A new CDMS result which improves on the previous best XENON bound has been submitted. CDMS should run until December this year. We should reach $10^{-44} \textrm{cm}^{2}$ per nucleon by 2009, and he thinks $10^{-47}$ presents a considerable challenge, but at that level one would have to question the WIMP hypothesis.

Regarding the DAMA claim, Sadoulet says he is convinced there is a modulation, but it cannot be a WIMP. Could it be an axion-like particle? Or perhaps it is an effect related to the well-known modulation of the muon flux (due to seasonal atmospheric differences), which has the same phase.
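
The "same phase" remark can be made concrete with a toy annual modulation $R(t) = R_{0} (1 + A \cos (2 \pi (t - t_{0})/365.25))$; the amplitude and peak day below are illustrative choices of mine, not DAMA's fitted values.

```python
import math

# Toy annual modulation: both the DAMA signal and the seasonal muon
# flux can be parametrised this way, and "same phase" means the same
# t0. R0, A and t0 are illustrative, not fitted DAMA parameters.
R0, A, t0 = 1.0, 0.02, 152.5   # peak near early June (day of year)

def rate(t_days):
    return R0 * (1.0 + A * math.cos(2.0 * math.pi * (t_days - t0) / 365.25))

# The rate peaks at t0 and is minimal half a year later.
assert rate(t0) > rate(t0 + 365.25 / 2.0)
```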

### Neutrino08 continued

Here is Marc Bergevin's authentic conference cell phone photo of Pania, the tagged little blue penguin at the Antarctic Centre (which we visited for a very jolly banquet dinner on Wednesday evening). Lincoln and Poppy were too busy squawking at each other to pay us much attention and the others were sleeping, but the lonely Pania was more curious.

## Thursday, May 29, 2008

### Neutrino08 Day 3d

Double Beta decay $0 \nu \beta \beta$ was properly introduced by G. Gratta, who listed candidate nuclei. The one with the highest isotopic abundance, at 34.5%, is $^{130}$Te. The MOON experiment would use $^{100}$Mo or $^{82}$Se foils and they have a 142g prototype in operation. Xenon is ideal for a large experiment, because it can be purified in real time, enrichment is easier and safer, and the final $^{136}$Ba state can be identified using optical spectroscopy (PRC 44 (1991) 931).

R. Flack presented results from NEMO-3, situated in a tunnel in the European alps. This has a 10 kg source of isotopes and a calorimeter with 1940 plastic scintillators coupled to PMTs. Electron, positron, $\gamma$ and $\alpha$ particle identification is possible in full event reconstruction, which recovers trajectories for the $e^{+}$ and $e^{-}$, their energies, time of flight and track curvature in a magnetic field. Phase II was a radon-reduced phase. A preliminary result for phase I/II is a half life for $^{130}$Te of $(7.6 \pm 1.5 \pm 0.8) \times 10^{20}$ yr. A new value for $^{96}$Zr at 90% confidence is $8.6 \times 10^{21}$ yr and for $^{150}$Nd a value of $1.8 \times 10^{22}$ yr, also at 90%. SuperNEMO is a future project with initial construction hopefully in 2010. It requires 100-200 kg of isotope mass and energy resolution down to 4% (at 3 MeV). All modules should be ready by 2013. The target sensitivity is 50-100 meV by 2016.
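
A half life of order $10^{20}$ yr sounds unmeasurable until you count nuclei; a rough feasibility estimate (my own arithmetic, ignoring detection efficiency and enrichment):

```python
import math

# Why a 10^20 yr half life is measurable at all: a kilogram of isotope
# contains ~10^24 nuclei. Expected double-beta decays per year for
# 1 kg of 130Te at the quoted half life (illustrative only).
N_A = 6.022e23                   # Avogadro's number
mass_kg, molar_mass_g = 1.0, 130.0
half_life_yr = 7.6e20

n_nuclei = mass_kg * 1000.0 / molar_mass_g * N_A
decays_per_yr = n_nuclei * math.log(2) / half_life_yr
print(int(decays_per_yr))        # a few thousand decays per kg per year
```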

The morning poster session, accompanied by coffee and some delicious pastries, was pleasantly interactive. Several people humoured me with a very basic outline of their work. There was a very intriguing poster on the GSI anomaly, belonging to nobody as far as I could tell. Only a few people wandered away from their posters, including one theorist who referenced an interesting paper.

### Neutrino08 Day 3c

Kayser then moved on to nuclear matrix elements for $0 \nu \beta \beta$, for example $^{76}$Ge, whose matrix element lies somewhere between 2 and 6, a rather large uncertainty. He then discussed dipole moments and the present Borexino bound of $5.4 \times 10^{-11} \mu_{B}$.

G. Drexlin spoke about direct mass measurements at KATRIN and MARE. In cosmology, KATRIN could shift the allowed region for the Dark Energy equation of state $w$ away from $w = -1$ towards quintessence. His 2d plot of $w$ vs $\sum m_{i}$ is of course restricted to a line by Carl's normal hierarchy mass values, so a determination of $w$ is definitely on the cards. KATRIN hopes to begin long term data taking in 2011. Sensitivity is at $m (\nu)$ below 200 meV at 90% confidence. Drexlin spent the second half of the talk focusing on the big spectrometer and the structure of the windowless gaseous source. To keep the injection rate stable to $\pm 0.1\%$, the outward flow must be reduced by an amazing factor of $10^{14}$ and the column density must also be held to $\pm 0.1\%$.

MARE starts phase I soon, in which they hope to improve sensitivity by a factor of 10 for their $^{187}$Re $\beta$ emitter and AgReO$_{4}$ crystal pixel array detector. If this phase is successful, phase II would again improve sensitivity by a factor of 10, requiring much R&D.

## Wednesday, May 28, 2008

### Neutrino08 Day 3b

H. Ray discussed the Osc-SNS project for performing precision measurements using a spallation neutron source. They expect to reach 0.8 MW by the end of the (northern) summer and 1.4 MW at full power. Studying $\pi^{+}$ decay at rest, which yields 29.8 MeV muon neutrinos, allows, for instance, removal of the cosmic ray background. Of course one thing they plan to test is the LSND and MiniBooNE low E excess. This experiment can probe $\sin^{2} 2 \theta$ from 0.00001 to 0.01 (and $\Delta m^{2}$ from 0.001 to 10), which heavily impacts supernovae and BBN physics. It should have 100 times the KARMEN statistics for sterile neutrino tests. The beam structure allows simultaneous neutrino and antineutrino modes. In question time, she estimated a 3.5 year wait until data taking, if all goes well.

Vanucci gave an interesting talk on searches for sterile neutrinos (of type [1]). He showed a plot of present limits from BBN and SM decays, which puts the allowed region above about 200 MeV. Can the MiniBooNE excess be interpreted this way? What about LHCb and ATLAS/CMS? In principle, these could extend the mass region to 4 GeV and 50 GeV respectively.

Afternoon sessions began with a run of talks on neutrinoless Double Beta decay, known as $0 \nu \beta \beta$. Kayser from Fermilab introduced the fundamental question of whether or not there is a mass gap in the neutrino hierarchy. Cosmology puts $\sum m_{i}$ at less than 0.17 to 1.0 eV, depending on the data used. If there are 3 generations, this constrains the heaviest mass $m_{H}$ to be less than 0.07 to 0.4 eV. Then the question motivating most of the afternoon's talks: are neutrinos Majorana? The $0 \nu \beta \beta$ amplitude is proportional to the effective Majorana mass $m_{\beta \beta} = | \sum m_{i} U_{ei}^{2} |$. How large is $m_{\beta \beta}$? A measurement of this value could tell us many things. For example, if the hierarchy is known to be inverted, and we find that $m_{\beta \beta} < 0.01$ eV, then the neutrinos are most probably not Majorana. More on this later.
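
As a worked example of the formula (with illustrative masses, tribimaximal $|U_{ei}|^{2}$ values and zero Majorana phases; none of these numbers are from the talk):

```python
# Effective Majorana mass m_bb = | sum_i m_i U_ei^2 |, evaluated for
# an illustrative normal-hierarchy mass set (in eV) and the
# tribimaximal values |U_e1|^2 = 2/3, |U_e2|^2 = 1/3, |U_e3|^2 = 0,
# with Majorana phases set to zero. All inputs are assumptions.
masses = [0.0, 0.009, 0.05]           # illustrative m_1, m_2, m_3 in eV
Ue_sq = [2.0 / 3.0, 1.0 / 3.0, 0.0]   # TBM |U_ei|^2

# With |U_e3|^2 = 0 the heaviest state drops out entirely.
m_bb = abs(sum(m * u for m, u in zip(masses, Ue_sq)))
print(round(m_bb, 4))   # -> 0.003
```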

I'm afraid I skipped the last session on Double Beta decay to, er, blog! Now off to the banquet and a trip to see the little blue penguins!

[1] Shaposhnikov, Nucl. Phys. B763 (2007) 49

### Neutrino08 Day 3a

M. Sorel spoke next about the MIPP and HARP hadron production facilities. HARP is currently looking at a factor of 2 reduction in the 16% muon neutrino normalisation uncertainty from $\pi^{+}$ production.

S. Zeller then discussed low E neutrino cross sections at a range of experiments, including MINERvA, which plans to take data in 2009 using He, C, Fe and Pb targets. For K2K, new charged current $\pi^{0}$ results indicate

$\frac{\sigma_{CC}}{\sigma_{QE}} = 0.306 \pm 0.023 \pm 0.02$

which is 40% higher than Monte Carlo predictions. For the new CC $\pi^{+}$ result of $0.734$ see 0805.0186.
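
When comparing such a ratio with Monte Carlo, the statistical and systematic errors are commonly combined in quadrature (this assumes, as I do here, that they are uncorrelated):

```python
import math

# Combine the quoted K2K ratio's statistical and systematic errors in
# quadrature (assumes the two are uncorrelated).
ratio, stat, syst = 0.306, 0.023, 0.02
total = math.sqrt(stat ** 2 + syst ** 2)
print(ratio, round(total, 3))   # -> 0.306 0.03
```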

A very nice outline of QE scattering was given along with a new K2K $^{12}$C estimate for the axial mass of

$M_{A} = 1.144 \pm 0.077 \pm 0.08$ GeV

The MiniBooNE result for this was $1.23 \pm 0.20$ GeV (PRL 100 (2008) 032301). Statistics for this result are so good that they did a 2D distribution analysis, for which they quote a $\chi^{2}$ of 45/53 (at 77%). There is apparently 'stunning agreement' across phase space with the oscillation data. Note that modern estimates of $M_{A}$ all tend to give higher values than expected. Is the mass absorbing some nuclear effect, or what?
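
The quoted 77% is just the $\chi^{2}$ tail probability for 45 with 53 degrees of freedom; a self-contained check (my own, using Simpson integration rather than a stats library):

```python
import math

# Sanity check on the quoted MiniBooNE fit quality: chi^2 = 45 for 53
# degrees of freedom corresponds to a p-value of roughly 77%.
def chi2_sf(x, k, steps=20000):
    """Survival function P(chi^2 > x) for k dof, via Simpson's rule."""
    norm = 2.0 ** (k / 2.0) * math.gamma(k / 2.0)
    def pdf(t):
        return t ** (k / 2.0 - 1.0) * math.exp(-t / 2.0) / norm
    h = x / steps
    s = pdf(0.0) + pdf(x)          # pdf(0) = 0 here since k > 2
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * pdf(i * h)
    return 1.0 - s * h / 3.0

p = chi2_sf(45.0, 53)
print(round(p, 2))   # close to the quoted 77%
```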

SciBooNE is performing as expected, having taken $0.99 \times 10^{20}$ POT for 21431 events, 16% pure CC. Some preliminary results were (very quickly) shown. Charged current $\pi^{+}$ events expected at high E do not show up. Theorists?

### Neutrino08 MiniBooNE

Apologies to experimentalists for my complete inability to jot down all error figures when speakers flash up 40 slides in half an hour. Don't worry. Slides will all be made available on the conference site.

The 9am start, by S. Brice, was the MiniBooNE talk on oscillation searches! Brice briefly sketched the motivation from the LSND observations and the detector setup, namely 3m of top dirt above a 12m sphere of 800 tonnes of pure oil with 1280 inner PMTs*.

MiniBooNE has the largest sample of neutral current muon neutrino $\pi^{0}$ events, with 28600 $\pi^{0}$s. The $\pi^{0}$ rate is measured to a few percent, which is important in constraining backgrounds. A draft paper promises a 10% to 30% improvement (at 90% confidence) below about 1 $\textrm{eV}^{2}$ (he mentioned a $\chi^{2} = 12.45$). For comparisons with LSND, Karmen and Bugey see arXiv: 0805.1764. A maximum compatibility for these four experiments is estimated at only 4%, at $\Delta m^{2} = 0.242$ and $\sin^{2} 2 \theta = 0.023$.

Regarding the low E excess, they are near the end of a comprehensive review, which is not quite ready, but Brice indicated that there really isn't any change. New effects considered in the analysis are

1. the induced photonuclear effect, where absorption removes one photon from a muon $\nu$ induced $\pi^{0}$ decay

2. some other hadronic processes, which turn out to have a small effect on the excess

3. better handling of beam $\pi^{+}$ production, which can decrease the excess

4. better measurement of $\nu$ induced $\pi^{0}$ production

5. better handling of the radiative decay of the $\Delta$ resonance

and new low E electron neutrino cuts, along with upgrades, indicate no appreciable excess above 475 MeV. He also showed some preliminary results regarding NuMI events (from 745km away) which indicate good agreement with Monte Carlo for muon neutrino CCQE. For electron neutrino CCQE there is a 1.26 $\sigma$ excess below 900 MeV.

For muon neutrino disappearance events they can reach a new region of phase space, and this result is also due out this summer. In summary, they have $6.6 \times 10^{20}$ protons on target (POT) in neutrino mode and $2.5 \times 10^{20}$ POT in antineutrino mode.

*that acronym was easy, but I spent a substantial fraction of talk time trying to figure out some of the more obscure ones

## Tuesday, May 27, 2008

### Neutrino08 Day 2d

K. Lesko introduced the multidisciplinary big cavern DUSEL proposal for the Homestake mine. Construction would take 6 to 8 years and a hopeful timeline is 2012-2018. Funds for concept proposals will be announced in October. The perfectly antipodean J. Gomez-Cadenas decided to start in 2016, now that we were into the swing of living in the future. He discussed superbeams at 1-4 MW and Beta beams, which would be pure neutrino beams.

Maltoni chose to spend 1/3 of his talk on the LSND problem and the apparent requirement of sterile neutrinos, which he explained were ruled out in 2+2 gen models by solar and atmospheric results, ruled out in 3+1 by short baseline data, ruled out in the 3+2 case (which attempted to reconcile LSND and MiniBooNE) by appearance and disappearance data, and ... well, he reckons it's all ruled out.

The last afternoon talk (before a 'generous' 10 minute break before the short evening talks) by Shaevitz discussed NuSOnG, an exciting generation III, TeV scale Fermilab neutrino scattering project using 800 GeV protons from Tevatron. It would have pure $\nu$ or pure $\overline{\nu}$ run modes and a possibility of a sizable tau neutrino fraction in the beam dump. It complements the LHC (see 0803.0354). Schedule estimate: 2009 proposal submission to 2016 data taking.

### Neutrino08 Day 2c

H. Minakata continued with 2 possibilities, (i) $\theta_{13} > 3$deg, in which case conventional superbeams and megaton water detectors should work and (ii) small $\theta_{13}$, which would require new beam technologies, although liquid argon technology could change the situation. He promised to mention unconventional physics, but was forced to skip that section when the chair meanly rang the bell a little early. For varying E, he mentioned a possible 100 kiloton argon facility (3 or 4 times more sensitive than water Cherenkov detectors). For varying L, a test of CP violation would best use a low energy, short L setup.

Moving on to T2K, a 300km baseline Tokai to Kamioka project: I. Kato sketched the aim of observing $\theta_{23}$ and $\Delta m_{23}^{2}$ via muon neutrino disappearance with the help of the J-PARC accelerator. Achievable precision is apparently 0.01 in $\textrm{sin}^{2} 2 \theta$ and $< 10^{-4}$ for $\Delta m^{2}$. Installation and commissioning is on schedule: the LINAC at 181 MeV had good beam stability in Jan '07, the beam line tunnel was completed in Dec '06 and the main ring synchrotron is expected to be operational in 2009. After 5 years at SuperK at 0.75 MW beam power they expect from $10^{3}$ events (for $\textrm{sin}^{2} 2 \theta = 0.1$) down to 10 events (for 0.01).

Despite the excellent IT support, R. Ray had to fight a Mac vs Bill Gates battle (which some people blamed on Fermilab) before commencing his talk on NOvA. This is a second generation NuMI beam line experiment requiring an accelerator upgrade to 700 kW beam power. A surface detector would be placed at Ash River, 810 km away. This requires a 6 storey, football field sized building on a site needing 40 ft of blasting in solid granite! A top cover of concrete/barite would shield the detector, which is a liquid scintillator in homemade highly reflective PVC cells. He stressed the importance of complementarity in experiments and comparisons of multiple results. For example, NOvA with Daya Bay and Chooz can determine if $\nu_{3}$ couples to the muon or tau neutrino (at 95% confidence). They expect a 36% event efficiency for electron neutrinos.

Future neutrino beams at J-PARC and Fermilab were discussed by Kajita and Saoulidou. For J-PARC, a Korean detector would give a 1000km+ baseline. Rubbia talked about proposed megaton detectors, for which there is a positive general consensus after reports in the US, Japan and Europe. In the 100 to 1000 kt range, one needs precise tracking and good calorimetric information. A feasibility study will be carried out 2008-2009. One interesting possibility is supernovae observation: the estimate is for 2 antineutrino events per year at 10 megaparsecs (with a 5 megaton water Cherenkov detector). Deep-TITAND (see hep-ex/0110005) is a 1km deep modular steel proposal.

As a pathologically punctual person, I have observed that the bell needs to be rung loudly before each session as chatting participants demonstrate their enthusiasm to their colleagues by pretending not to hear the bell.

### Neutrino08 Day 2b

Zukanovitch-Funchal gave an overview of mixings and masses, starting with a 1978 quote from Froggart and Nielsen which refers to neutrino oscillations as exotic. Two mass hierarchies are possible with current results, that is the mass of $\nu_{2}$ is closer to only one of the other 2 masses. Although 2-generation analyses worked well, a 3-generation analysis has been carried out since 2001 (see for instance Prog. Part. Nucl. Phys. 57(2006)742). Parameters have been approaching the tribimaximal mixing values. It is exciting that parameter determinations are weakly correlated and we are entering a precision era! Cosmological bounds were briefly mentioned: a combination analysis sets $\Sigma m < 0.19 eV$.

V. Datar described the status of INO, in particular the proposal for detectors (iron) at Pushep, which has a baseline of about 7000km from CERN. See arxiv: 0805.3474. If 1 megaton per year is achieved, then the hierarchy type may be determined. A prototype will be put together in Kolkata in about one month's time. Minakata's talk focussed on long baseline proposals, and he began with a nice picture of Darth Vader to represent our life in the Dark Ages. But if it turns out that $\theta_{13}$ is 'large' then the Dark Ages might end before Neutrino2010!

Sigh. Only half way through Day 2 and already I feel like I'm living on a planet of neutrino physicists, with more detector cities than I can name! Must get coffee and beer ...

### Neutrino08 Day 2a

J. Raaf was first up today with a report on Super Kamiokande, a 50 kiloton water Cherenkov detector under 1km of rock. Solar neutrinos: the focus was on phase III (mid 2006 to late 2008) results using 2 data sets, (i) full (E > 6.5 MeV) and (ii) radon reduced (E > 5 MeV), which are expected to achieve a 60cm elastic scattering vertex resolution. Phase II results showed no correlation with solar activity nor any day-night asymmetry (measured at -0.063 with larger errors). Atmospheric: a re-analysis of phase I data looking for exotic effects can exclude many models.

H. Gallagher represented MINOS, a long baseline experiment based at Fermilab and a Minnesota mine 735km away. Analyses of both charged and neutral current events were done blind. There are about $10^{18}$ protons hitting the target per day at the main injector, and 92.9% of neutrinos produced are muon $\nu$. Charged case: new run 1 and 2 results indicate a $\Delta m^{2} = 2.43 \times 10^{-3} eV^{2}$ and $\textrm{sin}^{2} 2 \theta = 1.00$, or rather $> 0.90$ at 90% confidence. Neutral case: depletion of neutral events is expected in the far detector but no evidence for it was found, the bound being 17% in a 0-120 GeV range. Neutrino decoherence is disfavoured $5.7 \sigma$.
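As a rough illustration (my own aside, not from the talk), the numbers above plug into the standard two-flavour survival probability $P = 1 - \textrm{sin}^{2} 2 \theta \, \textrm{sin}^{2} (1.27 \Delta m^{2} L / E)$. A minimal Python sketch with the MINOS best-fit values (the function name is mine):

```python
from math import pi, sin

def survival_prob(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Two-flavour muon neutrino survival probability,
    P = 1 - sin^2(2 theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])."""
    return 1.0 - sin2_2theta * sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

# MINOS best-fit values at the 735 km baseline
SIN2_2THETA, DM2, L = 1.00, 2.43e-3, 735.0

# Energy of the first oscillation maximum (phase = pi/2), around 1.4 GeV here
E_max = 1.27 * DM2 * L / (pi / 2)
print(round(E_max, 2), round(survival_prob(SIN2_2THETA, DM2, L, E_max), 3))
```

With maximal mixing the muon neutrino flux vanishes entirely at the oscillation maximum, which is why the dip energy is so sensitive to $\Delta m^{2}$.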

OPERA is a 730km baseline (from CERN) emulsion tracking device which hopes to observe $\nu_{\tau}$ events. Muon neutrino flux is optimized with L/E = 43 km/GeV. Rosa described the detector modules, constructed of scintillator strip target modules embedded in 31 walls, each built from up to 3000 custom bricks of layered emulsion and Pb sheets. See 0804.1985. The short 2007 run saw 38 triggered candidate events with (at the end) 64060 bricks. With a high intensity beam at about 200 events per week, it is expected that the new run (starting around June 16) will see 1.2 $\nu_{\tau}$ events.

### Neutrino08 Day 1c

For a change of topic, we heard from J. Coller on coherent neutrino scattering, which is a Standard Model process not yet measured due to current limitations in detector technology. This group uses cryogenic bolometers and works in the Chicago sewer system! The cross section is the same for all standard neutrinos, so an observation of oscillations would imply the existence of sterile neutrinos. Applications include prospecting, planetary tomography, light WIMP searches and other dark matter phenomenology. DAMA results were also mentioned in a noncommittal yet humorous fashion.

Potzel discussed the antineutrino Mossbauer effect, which is a recoilless resonant emission from decays such as 3H $\rightarrow$ 3He. To achieve minimum recoil one considers situating sources and targets in metallic lattices. For the 3H/3He system it appears possible to achieve recoil free fractions of $f_{(3H)} \cdot f_{(3He)} = 0.07$ at low temperature, but the whole project has the potentially serious problem of lattice contraction and expansion due to different storage volumes for 3H and 3He.

The most charming accent award goes to T. Lasserre, who spoke very rapidly about Double Chooz in France. Supposedly systematic errors for the two 7m x 7m detectors have been reduced to 0.2% for proton count and 0.5% for detector efficiency. Data collection should begin in the next year and after 3 years they hope for at least 0.03 sensitivity in $\textrm{sin}^{2} \theta_{13}$. C. White spoke for Daya Bay and RENO. The Hong Kong experiment, which should be fully operational by 2011, uses 0.1% Gd doped liquid scintillator detectors and aims for an impressive 0.01 sensitivity in $\textrm{sin}^{2} \theta_{13}$.

Exhausted after microphone wallah duty, I desisted from note taking in the pizza and beer session, which started at 6.45pm. This was a long series of brief talks, chaired by the town crier with his bell, associated to posters.

## Monday, May 26, 2008

### Neutrino08 Day 1b

The afternoon's talks began with a report on the KamLAND antineutrino scintillator detector by Decowski. The antineutrinos come from 55 reactor cores throughout Japan, giving KamLAND an effective baseline of 180km. Current best values for the standard parameters, including solar neutrino results, are

$\Delta m^2 = 7.59 \times 10^{-5} (eV)^{2}$

$\textrm{tan}^{2} \theta = 0.47$

There is now a 6.2 terawatt upper limit on the (popular new crackpot idea of a) Earth's core georeactor. This was also discussed by McDonough, a real geochemist. He presented a beautiful introduction to the history of collaboration between physicists and geologists, from Lord Kelvin and Wiechert to the new potential of neutrino physics for geochemistry. One of the big questions in this field is the K/U ratio for Earth. Geoneutrinos result from U, K and Th $\beta$ decay chains. They form a small flux on top of the reactor background. To understand the mantle, this would best be investigated far from crustal regions, say near Hawaii deep under the ocean. Hanohano is an exciting proposal for a mobile detector, whose size is limited only by the requirement that the transporting barge fit through the Panama canal.
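For comparison with the $\textrm{sin}^{2} 2 \theta$ convention used for the other experiments in these notes, the quoted $\textrm{tan}^{2} \theta$ converts via $\textrm{sin}^{2} 2 \theta = 4t/(1+t)^{2}$ with $t = \textrm{tan}^{2} \theta$. A minimal sketch (the function name is my own):

```python
def sin2_2theta(tan2_theta):
    """Convert tan^2(theta) to sin^2(2 theta):
    sin^2(theta) = t/(1+t), cos^2(theta) = 1/(1+t),
    so sin^2(2 theta) = 4 t / (1 + t)^2."""
    t = tan2_theta
    return 4.0 * t / (1.0 + t) ** 2

print(round(sin2_2theta(0.47), 3))  # the KamLAND best-fit mixing
```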

### Neutrino08 Day 1a

C. Galbiati represents the Borexino experiment, which observes solar neutrinos in real time using a spherical scintillation detector. Both 7Be and pep neutrinos are good sources for exploring the so called vacuum-matter transition. New results for 192 days of data were announced in this morning's talk: the 7Be result is $49 \pm 3$ counts per day per 100 ton. This is in good agreement with the MSW-LMA oscillation prediction of around 48, and rules out the no oscillation scenario at $4 \sigma$ (arxiv preprint 0805.3843).

Galbiati began with a summary of the (old) standard solar model and its agreement with helioseismology, which is no longer in such good agreement since the new estimate for metallicity appears to be a factor of 2 different. Can neutrino physics explain this discrepancy? One would like to use CNO* neutrinos to measure the metallicity of the core of the sun.

The next speaker was H. Robertson from the SNO collaboration. This is a 12 meter diameter, 1000 ton heavy water detector, with outer water shields. It has operated in three phases: (i) $D_{2}O$ (ii) $D_{2}O$ plus salt and (iii) $D_{2}O$ with 3He detectors. In the final phase, 36 strings of 3He detectors were deployed at a total length of 398m.

R. Hahn then confronted chemically challenged physicists with a talk about radiochemical experiments, including an historical interlude on Ray Davis, who was the first to observe solar neutrinos. He discussed the SAGE and GALLEX experiments. New results are a better fit to the constant flux line than previous results.

J. Klein outlined future solar neutrino experiments, noting the current focus on real time observations. One major goal is to look at the metallicity problem. Did Jupiter or Saturn somehow steal metals from the planetary protosphere? Or is something else going on? The correct value for solar surface metallicity may be obtained from 0805.2013.

* think Chemistry when you see capital letters, except in the last post

### Neutrino08 - Smirnov

The first (slightly) technical talk of Day One was by A. Smirnov, who began with a very entertaining explanation of his title: Where are we? Where are we going? He pointed out that there were 52 (relatively recent) neutrino papers on SPIRES-HEP with headings including the words where are we? Similarly, he found multiple papers in other HEP areas that used the same words. But String Theory only managed one hit. Do they not wonder where they are? He then showed a timeline of neutrino physics, from Rutherford to the present, which was marked mysteriously as being somewhere on a brane.

Comments on the standard picture followed, with brief mentions of nuclear physics, neutrino gases, solar neutrinos, supernovae, AGNs, GRBs, CP violation etc. A fascinating fact is the shift in publications, indicated roughly by the diagram. Smirnov also stressed that although the initial excitement in new neutrino physics moved around the idea of beyond the Standard Model physics, the situation was far from clear. He listed a few bottom up approaches to theory, such as tribimaximal mixing. Actually, he cited Carl Brannen alongside Koide in a final note about nonperturbative approaches (perhaps I should give him Carl's blog url as a reference).

The talk went 15 minutes over time due to a sneaky tactic of ~~lying through his teeth~~ promising to be on the last slide.

## Sunday, May 25, 2008

### Neutrino08

Neutrino08 kicks off this afternoon in the Town Hall, with a reception and cultural performance. Whilst viewing the latest poster listing I noticed that none other than Professor Koide will be attending and his abstract is already available here. (Now I wish I had made a paper poster myself, although it is probably rude for the hosts to take up conference space).

Tomorrow morning kicks off with a lecture on Ernest Rutherford, followed by a talk entitled Where are we? Where are we going?, by A. Smirnov from ICTP. More later.

## Saturday, May 24, 2008

### Mass Update

Carl Brannen has been surreptitiously posting Koide mass formulas for pi meson and (the lightest) rho meson triplets at PF. For $n = 1,2,3$ and that damned number $\delta \simeq \frac{2}{9}$, the square root mass eigenvalues (for the same choice of units) all take the form

$\lambda_{n} = v + 2s \cdot \textrm{cos} (\delta + \frac{2n \pi}{3})$

where the parameters $v$ and $s$ must be set to

lepton: $v = \frac{1}{\sqrt{2}} , s = 1$

pi: $v = \frac{6}{5} , s = \frac{-3}{4}$

rho: $v = \frac{10}{7} , s = \frac{-1}{3}$

Now I must find time to check these against the PDG data...
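In the meantime, the formula itself is easy to tabulate. A quick numerical sketch (my own; the overall scale for each triplet is an arbitrary unit, so only ratios of the squared eigenvalues are meaningful):

```python
from math import cos, sqrt, pi

DELTA = 2.0 / 9.0  # that damned number, delta ~ 2/9

def eigenvalues(v, s, delta=DELTA):
    """Square root mass eigenvalues lambda_n = v + 2 s cos(delta + 2 n pi / 3)."""
    return [v + 2.0 * s * cos(delta + 2.0 * n * pi / 3.0) for n in (1, 2, 3)]

# Parameter choices quoted above, one arbitrary unit per triplet
triplets = {
    "lepton": (1.0 / sqrt(2.0), 1.0),
    "pi":     (6.0 / 5.0, -3.0 / 4.0),
    "rho":    (10.0 / 7.0, -1.0 / 3.0),
}

for name, (v, s) in triplets.items():
    lams = eigenvalues(v, s)
    # the three cosines sum to zero, so sum(lams) equals 3 v exactly
    masses = [l * l for l in lams]
    print(name, [round(m, 4) for m in masses])
```

Note that because the three cosines always sum to zero, the sum of the square root masses is pinned to $3v$ regardless of $s$ and $\delta$.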

## Friday, May 23, 2008

### M Theory Lesson 192

Are the Foaming Loopies secretly doing String Theory at last? The latest paper by Markopoulou et al looks at ribbon graphs in two dimensions and three stranded diagrams in three dimensions. The three strands may be considered as tubes, much as in closed string diagrams. Then the open-closed string duality becomes a duality between 2d simplices and 3d ones. But how can this be? As a Poincare duality one exchanges 2d and 3d objects only in dimension 5, whereas this stringy duality is usually associated with 2-categorical structures. Fortunately, a moduli space perspective solves the mystery. The tube diagram for the tetrahedron is a 4 punctured sphere, whose moduli space is indeed two dimensional. The other two dimensional moduli space is the space of elliptic curves. These two moduli spaces describe the duality as envisaged by Grothendieck in his work on ribbon graphs for surfaces.

### Hidey Holes

The cover story of the last issue of New Scientist talks about the last place you'd expect to find a black hole. Yet another story about higher dimensions at the LHC? No, as the first paragraph states:

> As the outside of the star finally cools, like a dying ember, its outer layers are suddenly blown away into space. And there, uncloaked for the first time, is a monstrous black hole.

The article, based on this recent paper, discusses the work of the University of Colorado's Mitchell Begelman and colleagues (but of course not Louise Riofrio). Referring to the conservative star formation mechanisms discussed in the paper, Fulvio Melia from the University of Arizona says:

> With these mechanisms, something unusual - even dramatic - has to happen to make them work. Somehow this has to happen in a matter of only a few hundred million years, whereas simulations with standard physics show that it should take billions.

Perhaps something else is going on here.

## Thursday, May 22, 2008

### Oh Mini Me II

Tommaso Dorigo reports from PPC08 on a MiniBooNE talk by Zelimir Djurcic, including a discussion of the low energy excess.

> Photonuclear absorption of photons from $\pi^{0}$ decays was found to be a source of events at low energy.

This apparently accounts for some of the excess. Coincidentally, a new PI talk by Jeffrey Harvey is also available, which discusses a low energy QCD (AdS inspired) computation for the MiniBooNE excess based on a novel meson field process which has been neglected in the background analysis (see this paper). Initial results show impressive, if only tentative, agreement with experiment. I am hoping to report further on this after talks by Stephen Brice and Sam Zeller at Neutrino08 next week.

## Wednesday, May 21, 2008

### M Theory Lesson 191

As one moves up in dimension, it quickly becomes difficult to draw all intersections. The tetrahedron comes from four sets, with single (orange), double (blue), triple (pink) and quadruple (green) intersections. This gives an Euler characteristic of $\chi = 4 - 6 + 4 - 1 = 1$ for the ball in three dimensional space. Observe that by alternating signs we lose the information that there are 15 (= 4 + 6 + 4 + 1) pieces of Venn diagram. An invariant that combines both pieces of information is the Pauli circulant

$\begin{pmatrix} A & B \\ B & A \end{pmatrix}$

for $A = V + F$ and $B = E + I$ ($I$ meaning 3d pieces) in this three dimensional example. Recall that the eigenvalues of this circulant are $A - B$ and $A + B$, the first being $\chi$ and the second the subset counter. This works in all dimensions.
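A tiny numerical check of this bookkeeping for the tetrahedron example (my own sketch, using numpy):

```python
import numpy as np

# Cell counts for the solid tetrahedron: vertices, edges, faces, 3d interior
V, E, F, I = 4, 6, 4, 1

A = V + F  # even-dimensional pieces
B = E + I  # odd-dimensional pieces

M = np.array([[A, B],
              [B, A]])  # the 2x2 circulant built on A and B

eigs = sorted(np.linalg.eigvalsh(M))
# smaller eigenvalue A - B is the Euler characteristic chi = 1,
# larger eigenvalue A + B counts all 15 pieces of the Venn diagram
print(eigs)
```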

## Tuesday, May 20, 2008

### M Theory Lesson 190

> Euler characteristic as an alternating sum is related to inclusion-exclusion

said Scott Carter at the n-Category Cafe today. He attributes the quote to Vassiliev. The $n$-simplices used to calculate ordinary Euler characteristics may be viewed as dual to $n$ intersecting sets. For example, the full intersection of three sets corresponds to the face of a triangle, whereas the three edges of the triangle come from the double intersections. The union of the three sets counts vertices once and edges twice, so one takes away the double intersections and then adds back on the single face of the triple intersection. This parity of simple intersections is what gives the terms in $\chi$ their sign. In M Theory, we like to think of set intersections (or the vector space analogue) as topos theory pullbacks, which turns the triangle into the three faces at the corner of a cube! For mass operators, it is important to look at tricategorical analogues. This is why we study ternary geometry!

## Sunday, May 18, 2008

### Cool Cats

It's already here! Videos and slides from the second Categories, Logic and Physics meeting at Imperial. Thanks!

## Saturday, May 17, 2008

## Friday, May 16, 2008

### M Theory Lesson 189

Recall that the MZV weights for period integrals work so that $\zeta (3)$ appears in dimension 3 along with the 9 faced Stasheff associahedron, used by Mulase et al to study the 6 valent ribbon vertex, and also used to tile the (real points of the) 6 point genus zero moduli space which counts the particle generations. In the last lesson, we saw that $\zeta (3)$ appears in connection with the trefoil knot. It seems clear, then, that the connection between the polytope and the trefoil is number theoretic, as well as lying at the heart of physics.
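As an aside of my own, the '9 faced' count is easy to verify from the standard combinatorics of the Stasheff associahedra: the $d$-dimensional associahedron has $C_{d+1}$ vertices (a Catalan number, counting triangulations of a $(d+3)$-gon) and $d(d+3)/2$ facets, one per diagonal. A minimal sketch (function names are mine):

```python
from math import comb

def catalan(n):
    """The n-th Catalan number, C_n = (2n choose n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

def associahedron_counts(d):
    """Vertex and facet counts for the d-dimensional Stasheff associahedron."""
    vertices = catalan(d + 1)   # triangulations of a (d+3)-gon
    facets = d * (d + 3) // 2   # one facet per diagonal of the (d+3)-gon
    return vertices, facets

# d = 2 is the pentagon (5 vertices, 5 edges);
# d = 3 is the 9 faced associahedron with 14 vertices
for d in (2, 3):
    print(d, associahedron_counts(d))
```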

### Around About

Woit points to a very well written article about Garrett Lisi. Todd and Vishal's blog is now on my Category Theory roll. Check out the post on Stone duality.

## Wednesday, May 14, 2008

### M Theory Lesson 188

The 1997 Broadhurst and Kreimer paper shows how knot crossing numbers correspond to the weight of the MZV. For example, the positive braid in $B_{2}$ defined by the word $\sigma_{1}^{5}$ is decorated with three chords, and this corresponds to $\zeta (5)$ at weight $w = 5$. The trefoil knot $\sigma_{1}^{3}$ is the simplest $B_{2}$ knot, which gives a three loop chord diagram (well, Feynman diagram, actually) using only two chords. The pattern of crossing and non-crossing chords gets more interesting for braids with $s$ strands where $s > 2$, via the relations of the MZV algebra, where depth corresponds to $(s - 1)$ (this is why the example of $\zeta (5)$ only has one argument). Who would have thought it was so easy to do QED with knots and number theory? Once upon a time physicists admitted that group and gauge theory was a complicated, messy business, so why bother with it? M Theory is much more fun. Observe that the number of points on the circle of the chord diagram is $2n$ (or $w + 1$) where $n$ is the number of chords, so $\zeta (5)$ is really a decorated hexagon, our favourite polygon, often used to label the vertices of the three dimensional associahedron.

## Tuesday, May 13, 2008

### M Theory Lesson 187

Bar-Natan describes the correspondence between chords and self intersections in knots. A crossing of chords becomes a crossing in the knot diagram. This work led to the classic paper [1], which in turn was used by Broadhurst and Kreimer [2] to analyse the algebra of MZVs as it appears in QFT, although the latter paper uses chorded braid diagrams to represent zeta values. Nowadays we understand that the MZV algebra comes from motivic integrals on spaces tiled by the associahedra, so we expect associahedra and chorded braids to be closely linked.

[1] T.Q.T. Le and J. Murakami, Topology and Appl. 62 (1995) 193-206

[2] D.J. Broadhurst and D. Kreimer, Physics Lett. B 393 (1997) 403-412


## Monday, May 12, 2008

### Today's Mottle Quote

This was just too funny to pass up. On hearing about the appointment of Turok to the head geek job, Mottle says

As soon as the remaining heretics will be removed, the PI's cutting-edge picture of the Universe will be based on ekpyrotic loop quantum cosmology with a variable speed of light and 31+ octopi swimming in the spin network.

Presumably the 31 refers to the Kostant work on Garrett's E8, which is presently being discussed by Schreiber et al.

### M Theory Lesson 186

A new paper by Bloch and Kreimer looks at mixed Hodge structures and renormalization. They begin by noting that the mathematical description of locality in QFT comes from studying a certain monodromy transformation $m: H_{p} \rightarrow H_{p}$ on homology, with the property that the matrix $M = \textrm{log} (m)$ is nilpotent. The nilpotency ensures that the expression

$\textrm{exp} (\frac{- M \textrm{log} t}{2 \pi i})$

is a matrix with entries only polynomial in $\textrm{log} t$, where $t$ is a suitable renormalization parameter. This matrix acts upon a vector of period integrals (this is the fancy operad stuff) to give numerical values of physical interest as $t \rightarrow 0$. Let us consider the example they look at on page 38. The binary matrix $M$ will be an $8 \times 8$ matrix in the case that there are $n + 1 = 4$ loops in the graph being evaluated, namely

$\begin{pmatrix} 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}$

which is built from the modules $0$, $(1,1,1)$, their duals, and the $n = 3$ 2-circulant

$\begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}$

which will be familiar to M theorists.
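As a sanity check, a few lines of Python (a sketch with my own helper names) verify that this $M$ is indeed nilpotent, with $M^{4} = 0$, so the exponential above truncates after the $M^{3}$ term:

```python
# Check that the 8x8 matrix M is nilpotent: M^3 is nonzero but M^4 = 0,
# so exp(x M) = I + x M + (x^2/2) M^2 + (x^3/6) M^3 exactly,
# with x = -log t / 2 pi i, and every entry is polynomial in log t.

M = [
    [0, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 0, 1, 0],
    [0, 0, 0, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0, 0, 0],
]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

M2 = matmul(M, M)
M3 = matmul(M2, M)
M4 = matmul(M3, M)

print(any(any(row) for row in M3))  # True: M^3 is nonzero
print(any(any(row) for row in M4))  # False: M^4 = 0
```

The entry $M^{3}_{0,7}$ counts the 6 directed paths from the top module to the bottom one, which is why the nilpotency degree matches the 4 loops.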

Aside: If a kindly mathematician feels like spending a season or two (self funded) in NZ explaining mixed Hodge structures to me, it would be greatly appreciated!


## Saturday, May 10, 2008

### M Theory Lesson 185

A recent talk by Yong-Shi Wu points out that multiple qubit quantum circuits are closely related to the Jones invariant at a fourth root of unity. The factor of four comes from the 4 Bell states, or rather the $2^{2}$ MUB case.

For any prime $p$, M theoretic quantum information likes to specialise the knot invariants to associated roots of unity. For example, the trefoil knot at a cube root of unity always evaluates to 1, and this normalises torus knots. This follows from the categorical $\hbar$ hierarchy, which insists that $q$ take on a fixed value determined by the categorical dimension. If this dimension were given by the number of knot crossings, as it is in Khovanov homology, this would suggest studying the numerical Jones polynomial with $q$ fixed at a primitive root of unity corresponding to the number of crossings. This is not usually done. One often encounters studies of fixed values of $q$ for all knots, but not a grading by crossing number.

A grading by strand number, however, is common in the connection between MZV algebras, knots, Feynman diagrams and chord diagrams, originally due to Kreimer but now studied by many mathematicians. The strand number is also 3 for a trefoil, or 2 for a basic braid generator associated to a qubit, so this grading is important in the analysis of quantum circuits.
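The trefoil claim is easy to test numerically. A small Python sketch, using the Jones polynomial $V(q) = -q^{4} + q^{3} + q$ of the left handed trefoil in the normalisation $V(\textrm{unknot}) = 1$ (conventions differ by $q \mapsto q^{-1}$, but the conclusion is the same for either chirality):

```python
import cmath

# V(q) = -q^4 + q^3 + q: Jones polynomial of the (left-handed) trefoil,
# normalised so that the unknot evaluates to 1.
def jones_trefoil(q):
    return -q**4 + q**3 + q

omega = cmath.exp(2j * cmath.pi / 3)  # a primitive cube root of unity
value = jones_trefoil(omega)
# Since omega^3 = 1, the q^4 and q terms cancel: -omega + 1 + omega = 1.
print(abs(value - 1) < 1e-9)  # True
```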


## Friday, May 09, 2008

### Differential, Dude

The most common criticism I receive about my work is that it can't possibly have anything to do with physics, because there are no differential equations. So it is with great delight I discover that V. Buchstaber at Manchester is working on turning polytope combinatorics into interesting partial differential equations.

First, consider only simple polytopes. That is, ones in $d$ dimensions with $d$ faces meeting at a vertex. For example, the three dimensional Stasheff associahedron has 3 faces (pentagons or squares) meeting at each vertex. Now group equivalent polytopes into classes (a common trick) and then make an algebra from combinations of these classes. The zero is the empty polytope and the unit is the single point. There is an operator $D$ that sends a $d$ dimensional polytope to a $(d - 1)$ dimensional one. For example, on the simplex $K_{n}$ it acts as

$D K_{n} = (n + 1) K_{n - 1}$

sending a $4$-simplex to the 5 tetrahedra on its boundary. Let $f_{k, n-k}$ denote the number of $k$ dimensional faces of an $n$ dimensional polytope. Then for any such polytope $P$ there is a homogeneous polynomial in $a$ and $t$ given by

$F(P) = a^{n} + f_{n-1,1} a^{n-1} t + \cdots + f_{0,n} t^{n}$

Buchstaber shows that the map $F$ satisfies

$F(DP) = \frac{\partial}{\partial t} F(P)$

An interesting sequence $P_{n}$ of polytopes turns out to be the sequence of associahedra. In this case, by letting

$U(a,t,x) = \sum_{n} F(P_{n}) x^{n+2}$

it turns out that $U$, with $a$ treated as a parameter, must be the solution of the Hopf equation

$\frac{\partial}{\partial t} U(t,x) = U(t,x) \frac{\partial}{\partial x} U(t,x)$

with $U(0,x) = x^{2} (1 - ax)^{-1}$. This is related to the important KdV equation from soliton theory.
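The identity $F(DP) = \frac{\partial}{\partial t} F(P)$ can be checked directly for simplices, using the face counts $f_{k,n-k} = \binom{n+1}{k+1}$ and $D K_{n} = (n+1) K_{n-1}$. A minimal Python sketch, with polynomials in $(a,t)$ encoded as coefficient dictionaries (an ad hoc encoding of mine, not Buchstaber's notation):

```python
from math import comb

# Verify F(D K_n) = dF(K_n)/dt for the n-simplex K_n.
# A polynomial in (a, t) is stored as {(i, j): coefficient} for a^i t^j.

def F_simplex(n):
    """F(K_n) = sum over k of C(n+1, k+1) a^k t^(n-k)."""
    return {(k, n - k): comb(n + 1, k + 1) for k in range(n + 1)}

def dt(poly):
    """Formal derivative with respect to t."""
    return {(i, j - 1): j * c for (i, j), c in poly.items() if j > 0}

def scale(poly, s):
    return {key: s * c for key, c in poly.items()}

for n in range(1, 7):
    # D K_n = (n+1) K_{n-1}, so F(D K_n) = (n+1) F(K_{n-1})
    assert scale(F_simplex(n - 1), n + 1) == dt(F_simplex(n))
print("F(DP) = dF(P)/dt verified for simplices up to dimension 6")
```

The equality is exact on integer coefficients, since $(n-i) \binom{n+1}{i+1} = (n+1) \binom{n}{i+1}$ term by term.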


## Thursday, May 08, 2008

### M Theory Lesson 184

Recall that single chorded polygons label the faces (codimension 1 objects) of an associahedron. Codimension 2 objects are labelled by non-intersecting two chorded polygons. For example, the two dimensional polytope has 5 faces labelled by the 5 chords of a pentagon. The 5 non-crossing two chord diagrams give the 5 vertices. By including the crossed chord diagrams, one effectively describes (the dual of) a full simplex (in 4D) with 10 vertices and 5 faces. For any chorded polygon, the choice of an arbitrary pair of chords amounts to the choice of an arbitrary pair of faces on the polytope. If one represents faces by points, two chords represent an edge joining two points, and one always obtains a full higher dimensional n-simplex $K_{n+1}$. The (dual) associahedra then appear as subgraphs of the complete graph $K_{m}$ for $m$ in this sequence.
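These pentagon counts are small enough to recover by brute force. A Python sketch, using the usual interleaving criterion for crossing chords:

```python
from itertools import combinations

# Pentagon count: 5 chords (faces of the 2d associahedron), and of the
# C(5,2) = 10 chord pairs, 5 cross and 5 do not (the vertices).

n = 5
chords = [(i, j) for i, j in combinations(range(n), 2)
          if (j - i) % n not in (1, n - 1)]  # skip the polygon's edges

def crossing(c1, c2):
    """Two chords cross iff their endpoints interleave around the polygon."""
    if set(c1) & set(c2):
        return False  # chords sharing an endpoint never cross
    (a, b), (c, d) = c1, c2
    return (a < c < b) != (a < d < b)

pairs = list(combinations(chords, 2))
crossing_pairs = [p for p in pairs if crossing(*p)]

print(len(chords))                       # 5 chords
print(len(crossing_pairs))               # 5 crossing pairs
print(len(pairs) - len(crossing_pairs))  # 5 non-crossing pairs
```

Adding the 5 crossing pairs to the 5 non-crossing ones fills out the 10 vertices of the 4D simplex described above.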

## Wednesday, May 07, 2008

### M Theory Lesson 183

Recall that the sixth face of the parity cube may represent a breaking of the Mac Lane pentagon by splitting the symmetric four leaved tree into two parts. This tree was also considered by Forcey et al in a 2004 paper discussing higher operads, beginning with the observation indicated by the following diagram. Consider the boxed vertical lines as a fixed object in the category, and ignore the bottom third of the diagram. Then there are two ways to piece together the tree: do the horizontal (pink) products first, or else the vertical (green) ones. This issue of commutativity for two tensor products is a central axiom of a bicategory, commonly called the interchange rule. By considering categories with three products, Forcey et al magically go on to prove that (ordered) three dimensional Young diagrams can describe what they call a 3-fold monoidal category, a fascinating recursive structure. Moreover, this result generalises to all higher dimensions.
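A concrete linear algebra instance of the interchange rule, purely for illustration (this is not the Forcey et al construction): take horizontal composition to be the Kronecker product and vertical composition to be matrix multiplication, so the two ways of evaluating the square agree, $(AC) \otimes (BD) = (A \otimes B)(C \otimes D)$.

```python
# Interchange law for linear maps: composing then tensoring equals
# tensoring then composing. Pure Python sketch with 2x2 integer matrices.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    """Kronecker (tensor) product; row (i,k), column (j,l) entry is A[i][j] * B[k][l]."""
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [1, 2]]
D = [[1, 1], [0, 1]]

lhs = kron(matmul(A, C), matmul(B, D))  # compose vertically, then tensor
rhs = matmul(kron(A, B), kron(C, D))    # tensor, then compose vertically
print(lhs == rhs)  # True: the two orders agree
```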

## Tuesday, May 06, 2008

### M Theory Lesson 182

In this 1998 paper [1], Burgiel and Reiner define signed analogues of the associahedra. Recall that the vertices of an associahedron could be labelled by chorded polygons, such as the hexagon for the polytope in three dimensions. Here one uses a pentagon to obtain a three dimensional polytope. Signed squares give the octagon, as shown. Note that edges exist if either a sign or chord is flipped. There are always two vertices which remain unsigned. One wonders whether or not this particular extension is interesting in the context of operads. Does this octagon represent an octahedron in the same way that a hexagon represents a cube?

[1] New York J. Math. 4 (1998) 83-95


## Monday, May 05, 2008

### M Theory Lesson 181

As The Everything Seminar pointed out, the 4T relation may be thought of in terms of trivalent knotted diagrams. The chorded circle below is obtained by shifting the internal node down onto the circle, where it is resolved into two trivalent vertices. See this paper by Bar-Natan. Observe that a chorded braid, as drawn in the last lesson, becomes a chorded circle upon composition with a braid such as $(312)$ in $B_3$. A chord diagram can be turned into a knot, allowing self intersection. One rule is to send the endpoints of a chord to a self intersection. The Vassiliev invariants discussed by Bar-Natan use the idea that smooth paths of deformations of embedded knots in three dimensional space should naturally pass through such self intersecting knots.

## Friday, May 02, 2008

### M Theory Lesson 180

The Bar-Natan paper continues with a definition of chorded braids and an algebra over (bracketed) chord diagrams which satisfies, in particular, the 4T relation. On $3$ strands, the algebra is given by combinations of the bracket symbols as shown. We can represent the 4T relation by paths on a triangle. Note that the $[12]$ terms act on either the left or right by composition, giving a direction to the edges $[01]$ and $[02]$, illustrated by the red and green arrows. Thus the 4T relation says that the span and cospan diagrams are equal, in some sense.

## Thursday, May 01, 2008

### M Theory Lesson 179

Recall that Bar-Natan's 1998 paper on the Grothendieck-Teichmuller group discusses associators and braids. On forgetting the crossing information, a braid in $B_{n}$ becomes a permutation on $n$ letters. So with bracketed endpoint sets, it marks a vertex of the $(n - 1)$ dimensional permutoassociahedron for $S_{n}$.

As well as the category of bracketed permutations, Bar-Natan considers the category of bracketed braids with linearised morphisms of the form $\sum \alpha_{j} B^{j}$, where the $B^{j}$ are allowable braids corresponding to a given element $P \in S_{n}$ and the $\alpha_{j}$ are numerical coefficients. For example, the Pauli permutation $\sigma_{x}$ gives morphisms of the form shown, where $a$ and $b$ are usually rational numbers. The functor to the permutation category that forgets the braid structure is an example of a fibration of a very nice kind. The GT group is a certain group of endofunctors (from bracketed braids to bracketed braids) which fix $\sigma_{x}$ (with a choice of crossing). Bar-Natan shows that the bracketed braids are generated by the two basic diagrams shown above (i.e. the associator and $\sigma_{x}$, together with their inverses).
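The forgetful functor down to permutations is easy to sketch in code: simply ignore the crossing signs and compose adjacent transpositions. A toy Python encoding (the conventions here are mine, not Bar-Natan's):

```python
# Forgetful map from braid words to permutations: each generator sigma_i
# (or its inverse) swaps the strands in positions i and i+1.
# Encoding: perm[position] = strand label; generators are 1-based indices.

def braid_to_permutation(word, n):
    """Underlying permutation of a braid word in B_n (sigma_i given as i)."""
    perm = list(range(n))  # identity on n strands
    for i in word:
        perm[i - 1], perm[i] = perm[i], perm[i - 1]  # a crossing swaps two strands
    return perm

# The trefoil braid sigma_1^3 in B_2 forgets to the transposition,
# whose single cycle is why the closure is a knot rather than a link.
print(braid_to_permutation([1, 1, 1], 2))  # [1, 0]
print(braid_to_permutation([1, 2, 1], 3))  # [2, 1, 0]
```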
