Scientists conceived of the Large Hadron Collider and its experiments in 1992. Back then, Apple was starting to figure out the laptop computer, and a CERN fellow named Tim Berners-Lee had just released the code for a pet project called the World Wide Web.
“In the days when we started, there was no Google. There was no Facebook. There was no Twitter. All of these companies that tackle big data did not exist,” says Graeme Stewart, a CERN software specialist working on the ATLAS experiment. “Big data didn’t exist.”
The LHC experiments grew up with computing and have adapted remarkably well to the evolving technology. Over the last 15 years, researchers have written more than 20 million lines of code that govern everything from data acquisition to final analysis. But physicists worry that the continually accumulating code has begun to pose a problem.
“This software is not sustainable,” says Peter Elmer, a physicist at Princeton. “Many of the original authors have left physics. Given the complex future demands on the software, it will be very difficult to evolve.”
Back when Stewart and his computer engineering colleagues were designing the computing structure for the LHC research program, they were focused on making their machines perform a single task faster and faster.
“And then in the mid-2000s, hardware manufacturers hit a wall, and it was impossible to get a computer to do one single thing any more quickly,” Stewart says, “so instead they started to do something which we call concurrency: the ability of a computer to do more than one thing at the same time. And that was sort of unfortunate timing for us. If it had happened five or 10 years earlier, we would have built that concurrent paradigm into the software framework for the LHC startup, but it came a little bit too late.”
Thanks to concurrency, today’s personal laptops can perform roughly four tasks at the same time, and the processors in CERN’s computing clusters can perform around 30 tasks at once. But graphics cards—such as the GPUs used in gaming—are now able to process up to 500 tasks at once.
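To make the idea of concurrency a little more concrete, here is a minimal C++ sketch, not drawn from any LHC experiment's actual code, that spreads a toy per-event calculation across however many hardware threads a machine offers. The `Event` struct and the `reconstruct` function are invented stand-ins for the far richer data structures and algorithms the experiments really use.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Invented stand-in for a collision event; real LHC event records are far richer.
struct Event {
    double energy = 0.0;
};

// Toy "reconstruction": per-event arithmetic standing in for real algorithms.
double reconstruct(const Event& e) {
    return std::sqrt(e.energy) * std::log1p(e.energy);
}

int main() {
    const std::size_t n_events = 1000000;
    std::vector<Event> events(n_events);
    for (std::size_t i = 0; i < n_events; ++i) {
        events[i].energy = 0.001 * static_cast<double>(i);
    }

    // Ask the hardware how many tasks it can truly run at once:
    // roughly 4 on a laptop, around 30 on a server-class CPU.
    const unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());

    std::vector<double> results(n_events, 0.0);
    std::vector<std::thread> workers;

    // Carve the event list into one contiguous chunk per thread.
    const std::size_t chunk = (n_events + n_threads - 1) / n_threads;
    for (unsigned t = 0; t < n_threads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = std::min(begin + chunk, n_events);
        if (begin >= end) break;
        workers.emplace_back([&events, &results, begin, end] {
            for (std::size_t i = begin; i < end; ++i) {
                results[i] = reconstruct(events[i]);
            }
        });
    }
    for (auto& w : workers) w.join();

    const double total = std::accumulate(results.begin(), results.end(), 0.0);
    std::cout << "Processed " << n_events << " events on " << workers.size()
              << " threads; checksum = " << total << "\n";
    return 0;
}
```

Each thread works on its own contiguous slice of the event list, so no locks are needed; GPU programming pushes the same pattern much further, keeping hundreds of such lightweight tasks in flight at once.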
“It’s critical that we take advantage of these new architectures to get the most out of the LHC research program,” Stewart says. “At the same time, adapting to that kind of hardware is a tremendous challenge.”
The experiments will need these hardware advancements. In eight years, a turbocharged version of the machine, the High-Luminosity LHC, will turn on with a proton beam four times more intense than today's. This transformation will provide scientists with the huge volume of data they need to search for new physics and study rare processes. But according to Stewart, today's software won't be able to handle it.
“The volume of data anticipated jumps by an order of magnitude, and the complexity goes up by an order of magnitude,” he says. “Those are tremendous computing challenges, and the best way of succeeding is if we work in common.”
Stewart and Elmer are part of a huge community initiative that is planning how to meet the enormous computing challenges of the four big LHC experiments and prepare the program for another two decades of intensive data collection.
According to a white paper recently published by the High Energy Physics Software Foundation, software and computing power will be the biggest limiting factor on the amount of data the LHC experiments can collect and process, and so “the physics reach during HL-LHC will be limited by how efficiently these resources can be used.”
So the HEP Software Foundation has set out to adapt the LHC software to modern computing hardware so that the entire system can run more effectively and efficiently. “It’s like engineering a car,” Stewart says. “You might design something with really great tires, but if it doesn’t fit the axle, then the final result will not work very well.”
Instead of building custom solutions for each experiment—which would be time-consuming and costly—the community is coming together to identify where their computing needs overlap.
“Ninety percent of what we do is the same, so if we can develop a common system which all the experiments can use, that saves us a lot of time and computing resources,” Stewart says. “We’re creating tool kits and libraries that protect the average physicist from the complexity of the hardware and give them good signposts and guidelines as to how they actually write their code and integrate it into the larger system.”
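As a rough illustration of that toolkit idea, with names invented here rather than taken from any real HEP library, a shared layer might expose a single parallel-loop call so that a physicist writes only the per-object logic while the library decides how to spread the work across the available cores:

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <thread>
#include <vector>

// Hypothetical shared toolkit: one call that hides all the threading details.
namespace toolkit {

template <typename T, typename Func>
void parallel_for_each(std::vector<T>& items, Func func) {
    const unsigned n_threads =
        std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (items.size() + n_threads - 1) / n_threads;

    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n_threads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = std::min(begin + chunk, items.size());
        if (begin >= end) break;
        // Each worker owns a disjoint slice, so the user's code needs no locks.
        workers.emplace_back([&items, func, begin, end] {
            for (std::size_t i = begin; i < end; ++i) func(items[i]);
        });
    }
    for (auto& w : workers) w.join();
}

}  // namespace toolkit

// What the "average physicist" writes: plain per-track logic, no threads in sight.
struct Track {
    double momentum = 0.0;
    bool selected = false;
};

int main() {
    std::vector<Track> tracks(100000);
    for (std::size_t i = 0; i < tracks.size(); ++i) {
        tracks[i].momentum = 0.01 * static_cast<double>(i);
    }

    toolkit::parallel_for_each(tracks, [](Track& t) {
        t.selected = (t.momentum > 500.0);  // toy selection cut
    });

    std::size_t n_selected = 0;
    for (const auto& t : tracks) n_selected += t.selected ? 1 : 0;
    std::cout << n_selected << " of " << tracks.size()
              << " tracks passed the cut\n";
    return 0;
}
```

The point of the design is the division of labor: the experiment-specific physics lives in the small user-written function, while the shared toolkit owns the hardware-facing details and can be retuned for new CPUs or GPUs without the physicist's code changing.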
These incremental changes will gradually modernize LHC computing while maintaining continuity with all the earlier work. They will also keep the system flexible and adaptable to future advancements in computing.
“The discovery of the Higgs is behind us,” says Elmer. “The game is changing, and we need to be prepared.”