GRID FROM PAGE 25

Along with lessons about what the universe comprises, the LHC Computing Grid project will teach network engineers valuable lessons about what it takes to run and manage one of the largest 10G-bps networks in the world.
“Everyone is looking to see who’s installing a large backbone on that scale. We’ve become a reference for other people waiting to see what happens,” Grey said. “We have no choice because we need that speed. We’re also learning a lot about shipping data at high rates and how to optimize a grid between 10G-bps and slower links.”
About 200 institutions in 80 countries—some with their own large data centers—will participate in the grid to help process an expected 15 petabytes of data per year generated by the LHC.
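For a sense of scale, a quick back-of-the-envelope conversion (our arithmetic, not a CERN figure) shows that 15 petabytes a year averages out to roughly 4 gigabits per second of sustained flow, which puts the 10G-bps Tier 1 links in context:

    # Rough conversion of 15 PB/year into an average sustained bit rate.
    PETABYTE = 10**15                     # bytes
    SECONDS_PER_YEAR = 365 * 24 * 3600

    bytes_per_year = 15 * PETABYTE
    avg_bytes_per_second = bytes_per_year / SECONDS_PER_YEAR
    avg_gbps = avg_bytes_per_second * 8 / 10**9

    print(f"{avg_gbps:.1f} Gbps average")  # about 3.8 Gbps before bursts or reprocessing

Real traffic is far burstier than a yearly average, so the figure is best read as a floor on what the links must carry rather than a full requirement.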
“We realized early on there was no way we could store all that data and analyze it here at CERN,” Grey said. “The idea was to pull those resources together in a grid.”

[Photo: David Foster (left) and Francois Grey, of CERN’s IT department, said HP’s willingness to work with CERN at an engineering level has helped make the project successful.]
The grid is organized in a three-tiered hierarchy, with CERN serving as the Tier 0 “fountainhead” from which data subsets will be dispersed to 11 Tier 1 data centers in Europe, North America and Asia, according to Grey. Tier 2 data centers, located mostly at more than 250 universities around the globe, will serve as the locations where physicists analyze the data subsets they receive.
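As a purely illustrative sketch of that fan-out (the Tier 2 site names and the distribute function are invented for this example; the real grid uses dedicated data-management middleware), the tiering amounts to a one-to-many copy at each level:

    # Hypothetical illustration of Tier 0 -> Tier 1 -> Tier 2 fan-out.
    TIER1_TO_TIER2 = {
        "fermilab":   ["uni-a", "uni-b"],   # each Tier 1 feeds regional Tier 2 sites
        "brookhaven": ["uni-c", "uni-d"],
        # ... 11 Tier 1 centers in all, serving more than 250 Tier 2 sites
    }

    def distribute(dataset: str) -> None:
        # Tier 0 (CERN) ships a data subset to every Tier 1, which forwards it onward.
        for tier1, tier2_sites in TIER1_TO_TIER2.items():
            print(f"Tier 0 -> {tier1}: {dataset}")
            for site in tier2_sites:
                print(f"  {tier1} -> {site}: subset of {dataset}")

    distribute("collision-run-0001")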
The LHC Computing Grid rides on dark fiber used in national and international research networks to interconnect each of the 11 Tier 1 sites at 10G bps for continuous paths to the different locations. Commercial links are used to connect participants in Canada, Taiwan and the United States.

In North America, Tier 1 sites include two in the United States—Fermi National Accelerator Lab, in Batavia, Ill., and Brookhaven National Laboratory, in Long Island, N.Y.—as well as the TRIUMF laboratory, in Vancouver, British Columbia.

Because of the nature of the computing task, PCs used in the grid don’t have to communicate at very high speeds with one another, so they are linked via grid middleware for “trivially parallel” processing, according to Grey.

Detectors read out images of the collisions, which are analyzed for particular patterns. “Each collision is independent from the next one, which is why trivially parallel processing works,” Grey said.

At CERN, the PCs, CPU servers and disks are linked on a 1G-bps network provided by Hewlett-Packard ProCurve switches. CERN itself will contribute about 10 percent of the 100,000 or so processors needed for the job. In all, CERN will provide about 8,000 systems—using both single- and dual-core chips—to the task. The systems holding the processors will run a version of Linux called Scientific Linux CERN.

The PCs used at CERN are commodity systems from a mix of smaller vendors, including Elonex, of Bromsgrove, England, and Scotland Electronics, of Moray, Scotland.

“We buy them cheap and stack them high,” said David Foster, communications systems group leader in CERN’s IT department. “The physics applications can run in parallel, but independently on separate boxes, so any PC which fails can be replaced and just that job restarted.”

“Our typical workhorses” are dual-processor PCs in a one-rack unit “pizza box” form factor stacked in 19-inch racks, according to Helge Meinhard, technical coordinator for server procurements at CERN.

Although most of the roughly 8,000 PCs are single-socket machines that run single-core chips, about 750 are two-socket systems that use dual-core processors.

Administering all the PCs is a batch scheduler, which identifies available units and assigns a job to them.
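The batch model Foster and Grey describe, in which independent jobs are farmed out to whichever boxes are free and a failed box’s job is simply resubmitted, can be sketched in a few lines. This is a hypothetical illustration (process_event and run_batch are invented names; the production grid relies on its own scheduler and middleware rather than a Python process pool):

    # Sketch of trivially parallel batch processing with restart-on-failure.
    from concurrent.futures import ProcessPoolExecutor, as_completed

    def process_event(event_id: int) -> str:
        # Stand-in for the real per-collision pattern analysis.
        return f"event {event_id}: analyzed"

    def run_batch(event_ids, workers=8, max_retries=3):
        results, attempts = {}, {e: 0 for e in event_ids}
        todo = list(event_ids)
        with ProcessPoolExecutor(max_workers=workers) as pool:
            while todo:
                futures = {pool.submit(process_event, e): e for e in todo}
                todo = []
                for future in as_completed(futures):
                    event = futures[future]
                    try:
                        results[event] = future.result()
                    except Exception:
                        # One box died: rerun just that job; every other job is unaffected.
                        attempts[event] += 1
                        if attempts[event] < max_retries:
                            todo.append(event)
        return results

    if __name__ == "__main__":
        print(len(run_batch(range(100))), "events processed")

Because no event depends on any other, the only recovery logic needed is to put the failed job back on the queue, which is much of the reason commodity PCs bought cheap and stacked high are good enough for the work.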
HP switches, including 600 ProCurve 3400cl, 400 ProCurve 3500yl and 20 ProCurve 5400-series devices, link the CERN processors at 1G bps, with 10 Gigabit uplinks into the grid’s core backbone. The network uses primarily fiber connectivity, although it also uses some UTP (unshielded twisted-pair) Category 6 copper cabling for 1G-bps links.

Sixteen 10G-bps routers from Force10 Networks in the core backbone link the CERN network to other participants in the grid.

HP and Force10 Networks were chosen for the LHC grid project because of their feature [CONTINUED ON PAGE 28]