The majority of published particle physics papers involve looking for a very specific elementary particle process. They conclude either by showing that the process exists or by placing limits on how often it could possibly occur. But is there a way to take advantage of a collider’s whole data set and see whether it is consistent with an expansive theoretical model, such as the Standard Model of particle physics?
In a wide-ranging discussion about searches for new and exotic particles, forces, and phenomena at Fermilab’s Tevatron, Florida State University’s Todd Adams described today at the American Physical Society meeting in Washington, DC, how physicists interrogate the entire collider data set. The work itself is not new; the collaborations published a paper (preprint, Phys. Rev. D) on the topic about a year ago. However, it’s not the kind of analysis you usually see in particle physics, and it makes for a nice way to look at the data.
Typically, the CDF and DZero collaborations at Fermilab’s Tevatron (and other colliders) publish papers with titles like “Measurement of the branching fraction Br(Bs -> Ds(*) Ds(*))”. In such a paper, physicists examine one very specific way that a particular type of particle decays into others. This is important work for testing the details of models, but it is easy to get overwhelmed by those details if you’re not a particle physicist.
The all-data studies break up the Tevatron data into sets of collision events that each have a specific final state. That is, for a particular proton-antiproton collision in the heart of one of the detectors, the outgoing particles must end up being, say, two electrons and two jets of particles; or a muon, two jets, and missing energy; or one of many other possibilities. This cuts the full data set down to a manageable size without assuming anything about how the particles got to that end state. Perhaps there were top quarks involved, or maybe W bosons. Or perhaps there were more exotic intermediate particles like Higgs bosons, or gravitons, or a Z’ representing an extra, so-far-unknown fundamental force.
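As a rough illustration of the idea (with entirely made-up events and object labels, not the collaborations’ actual reconstruction categories), classifying events by final state amounts to counting the outgoing objects and ignoring everything about where they came from:

```python
from collections import Counter

# Toy events: each one is just the list of objects reconstructed in the
# detector. The labels here are illustrative, not real experimental categories.
events = [
    ["electron", "electron", "jet", "jet"],
    ["muon", "jet", "jet", "missing_energy"],
    ["electron", "electron", "jet", "jet"],
]

def final_state(event):
    # A final state is the multiset of outgoing objects, with no assumption
    # about which intermediate particles (top quarks, W bosons, something
    # more exotic) produced them.
    return tuple(sorted(event))

counts = Counter(final_state(event) for event in events)
for state, n in counts.items():
    print(n, "events with final state", state)
```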
Then physicists take the best model they have of particle interactions, the Standard Model, and grind through the calculations of everything that model predicts for events with that same final state. With the theoretical prediction in hand and the experimental data crunched, physicists can see if the two match up. If not, it might be a sign of exotic physics.
The physicists use four different techniques to look for deviations from theory. First, they look for an overall excess or deficit of end products. Second, they see if the shape of the data (that is, how the number of events depends on energy) is consistent with theory. Third, they look for unexpected events at very high energies. Fourth, they go on a bump hunt: they see if there are extra events in well-defined, narrow energy ranges sitting among the regularly predicted collision events.
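To give a flavor of that last technique, here is a deliberately simplified bump hunt, assuming we already have observed and predicted event counts binned in energy. The algorithms the collaborations actually use are considerably more sophisticated; this sketch just slides a narrow window across the spectrum and asks how improbable the observed count in each window is, given the Standard Model expectation.

```python
import numpy as np
from scipy.stats import poisson

def bump_hunt(observed, expected, window=3):
    """Return the start index and Poisson p-value of the most excess-like
    narrow region in a binned energy spectrum."""
    best_start, best_p = None, 1.0
    for start in range(len(observed) - window + 1):
        obs = observed[start:start + window].sum()
        exp = expected[start:start + window].sum()
        # Probability of seeing at least `obs` events if the Standard Model
        # expectation `exp` is the whole story.
        p = poisson.sf(obs - 1, exp)
        if p < best_p:
            best_start, best_p = start, p
    return best_start, best_p

# Toy spectrum: counts per energy bin, with an artificial bump near bin 5.
expected = np.full(12, 20.0)
observed = np.array([18, 22, 19, 21, 20, 35, 38, 21, 19, 20, 22, 18])
print(bump_hunt(observed, expected))
```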
When physicists do these searches, the first cut turns up a whole bunch of discrepancies. In technical terms, they see final states that deviate from the theory by more than 3 sigma. That would usually be enough to claim evidence for a new particle process; a 5 sigma deviation is generally regarded as the cutoff for a new discovery. Some results from the analysis show deviations as large as 4.3 sigma, enticingly close to discovery.
However, the all-data approach is a different way of analyzing data, more akin to large-scale data mining. When you run so many searches at once, some results are likely to fluctuate randomly almost to the discovery level. But that is all they are: random fluctuations. There are extra statistical precautions you need to take when performing a large number of tests on a single data set, a problem particle physicists often call the look-elsewhere effect. When you apply those precautions, the significance of these deviations drops. What seemed like a 4.3 sigma deviation becomes a mere 2.7 sigma deviation. Interesting, and maybe even a hint of something, but nowhere near enough to claim evidence for a new physics process.
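The effect of that correction can be sketched with a simple trials-factor calculation. The number of independent comparisons used below (400) is purely illustrative, not a figure from the actual analysis, but it shows how a locally impressive excess deflates once you account for how many places you looked:

```python
from scipy.stats import norm

def global_significance(local_sigma, n_tests):
    # One-sided p-value for the local deviation.
    p_local = norm.sf(local_sigma)
    # Chance that at least one of n_tests independent comparisons
    # fluctuates this far purely by accident.
    p_global = 1.0 - (1.0 - p_local) ** n_tests
    # Convert the global p-value back into an equivalent number of sigma.
    return norm.isf(p_global)

# A 4.3 sigma local excess, assuming roughly 400 independent comparisons
# (an illustrative number), comes out at about 2.7 sigma globally.
print(global_significance(4.3, 400))
```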
With the whole data set taken into account (as of about a year ago), there were no convincing signs of new physics, but the exercise demonstrated the possibilities of interrogating the data in a different way. In the absence of a discovery, the data instead place tighter constraints on how often those exotic processes could occur. Adams commented that these techniques are beginning to place real limits on some theoretical predictions that are interesting on cosmological scales, such as the properties of any extra dimensions and of dark matter.
Whatever the power of these techniques, the story reflects an interesting approach to slicing up the huge data sets of particle physics and looking for what might lie beyond the theories that have served physicists so well for so many years.