GIOD – Globally Interconnected Object Databases

(A Caltech/CERN/HP Joint Project)

Julian Bunn/CERN and Harvey Newman/Caltech


High Energy Physics research involves a spectrum of demanding computational tasks. The GIOD Project, a collaboration between Caltech, CERN's Information Technology Division, CERN's CMS collaboration, and Hewlett Packard, is one of the new data-intensive projects in which Caltech's Center for Advanced Computing Research (CACR) is participating.

The project will model the computing environment required for data analysis at the Large Hadron Collider (LHC), a particle accelerator which will begin operation of its colliding proton beams in the year 2005 at CERN, the European Laboratory for Particle Physics. The worldwide High Energy Physics (HEP) community is currently engaged in planning and prototyping the detector hardware and software systems for four unique and separate experiments at the LHC. These experiments will have to cope with an expected compressed raw data rate from the detector electronics of around 100 MBytes per second. The two largest experiments, CMS and ATLAS, each group collaborators from several hundred institutes and universities around the world, and each numbers around 1500 physicists and engineers. Figure 1 shows a simulated Higgs particle decay in the CMS detector.
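
To put the data rate in perspective: assuming the canonical figure of around 10^7 seconds of accelerator running per year, a sustained 100 MBytes per second corresponds to 100 MB/s × 10^7 s = 10^15 bytes, i.e. roughly one PetaByte of raw data per experiment per year.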


In late 1996, the CMS and ATLAS experiments completed work on computing technical proposals that described the unprecedented challenges of computing at the LHC, and how they might be addressed. The proposals identified computing models whose primary goal is a high degree of location independence for physicists wishing to analyse the many PetaBytes (10^15 bytes) of event data expected at the LHC, whilst remaining within practical cost bounds on processing power, digital storage space and network throughput.


The key ideas expressed in the computing models were the use of Object Oriented software and data storage, hierarchical mass storage systems, and a few so-called Regional Computing Centres interconnected by very fast network links. The Regional Centres are expected to house substantial compute resources and support infrastructure, serving data and computing capacity to physicists located at nearby institutes. Typically, they will offer a replica of the OO event database, the master copy of which is populated at CERN directly from the data acquisition systems of the experiments.
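
To illustrate the intended location independence, the sketch below shows the style of client code envisaged: the physicist's program names a logical database federation, and the ODBMS layer decides whether matching objects are served from a local replica or fetched over the network. The EventStore, Event and Muon names are hypothetical placeholders, not the actual Objectivity or Versant API.

```cpp
// Hypothetical client-side view of a replicated object store.
// EventStore, Event and Muon are illustrative types only; a real
// ODBMS exposes its own C++ binding.
#include <iostream>
#include <string>
#include <vector>

struct Muon { double pt; };            // transverse momentum, in GeV

struct Event {
    long id;
    std::vector<Muon> muons;
};

class EventStore {                     // facade over the ODBMS federation
public:
    explicit EventStore(const std::string& federation) {}  // attach to nearest replica
    std::vector<Event> query(const std::string& predicate) {
        // The ODBMS decides whether the matching objects come from the
        // local replica or are paged in across the WAN; the client code
        // is identical in both cases.
        return {};
    }
};

int main() {
    EventStore store("cms_federation");          // logical name, not a host name
    for (const Event& e : store.query("nMuons >= 4")) {
        std::cout << "candidate event " << e.id << '\n';
    }
}
```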


The goal of the GIOD project is to model an LHC Regional Centre. This large-scale prototype will employ the HP Exemplar SPP2000 as a compute and data server. An Object Database Management System (ODBMS) will be installed and coupled with the High Performance Storage System (HPSS): HPSS will manage the physical storage used by the ODBMS, and the two systems need to be seamlessly integrated. Tests of access to the ODBMS will be made over high bandwidth WANs in order to understand how central and remote computing centres will inter-operate with distributed object data. Over the course of the project, a complete LHC software environment will be installed and evaluated. In addition, large scale Monte Carlo simulations of the CMS detector will be made. The project will thereby demonstrate that large systems such as the SPP2000 can successfully be used as general purpose computing resources for the HEP community.
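
One way to picture the required coupling: the ODBMS sees a flat set of database files on disk, while HPSS migrates cold files to tape and stages them back on demand. The sketch below shows the kind of hook needed when the ODBMS opens a file that has been migrated; hpss::stage_in and the file path are hypothetical stand-ins, not the real HPSS client API.

```cpp
// Sketch of an ODBMS <-> mass-storage coupling (C++17).
#include <filesystem>
#include <iostream>
#include <string>

namespace hpss {
    // Assumed helper: copy a migrated file from tape-backed storage
    // back onto the disk cache, blocking until it is resident.
    bool stage_in(const std::string& path) {
        std::cout << "staging " << path << " from HPSS...\n";
        return true;
    }
}

// Called by the database server before it opens a database file.
bool ensure_resident(const std::string& dbfile) {
    if (std::filesystem::exists(dbfile)) return true;   // already on disk
    return hpss::stage_in(dbfile);                      // fetch from tape
}

int main() {
    if (ensure_resident("/giod/db/events_0042.db"))     // hypothetical path
        std::cout << "file resident, ODBMS may open it\n";
}
```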


The project, timed to finish in mid-1999, comprises several phases.

Work started in June 1997, and is now (October 1997) well advanced. Two commercial Object Database systems (Objectivity and Versant) have been evaluated, and their performance compared. The expected scaling behaviour of database query execution times was demonstrated both as a function of the number of objects stored in the database and as a function of the number of clients executing queries. It was confirmed that database queries could be made transparently against local or remote Object data. In another test, an Object Database at CERN was replicated across the WAN to Caltech. The reliability of the network and the transaction commit time were compared with those obtained when replicating the same database across the LAN. The strikingly successful results showed that, during quiet periods on the trans-Atlantic network, the commit time for the Caltech replica was approximately the same as that for the local replica at CERN. These results are important in validating distributed access to large Object stores.
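
A measurement of the kind described, comparing transaction commit times for LAN and WAN replicas, can be pictured as in the sketch below. The Database type and its methods are illustrative placeholders for an ODBMS binding, not the actual interface used in the tests.

```cpp
// Timing sketch for replicated transaction commits.
#include <chrono>
#include <iostream>
#include <string>

struct Database {
    explicit Database(const std::string& replica) {}
    void begin()  {}
    void update() {}   // modify some objects in the transaction
    void commit() {}   // blocks until all replicas acknowledge
};

// Returns the mean commit time in milliseconds over n transactions.
double mean_commit_ms(Database& db, int n) {
    using clock = std::chrono::steady_clock;
    auto t0 = clock::now();
    for (int i = 0; i < n; ++i) {
        db.begin();
        db.update();
        db.commit();
    }
    auto t1 = clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count() / n;
}

int main() {
    Database lan("cern_local_replica");   // replica on the CERN LAN
    Database wan("caltech_replica");      // replica across the trans-Atlantic WAN
    std::cout << "LAN: " << mean_commit_ms(lan, 100) << " ms/commit\n"
              << "WAN: " << mean_commit_ms(wan, 100) << " ms/commit\n";
}
```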


Another area of work has been the development of an Object Oriented data analysis environment for data taken in a small particle beam experiment at CERN. The experiment is being used to evaluate detector elements destined for eventual installation in the CMS detector. The analysis software has been installed at Caltech and run against a replica of the 50 GByte test beam Object Database.


Finally, the HP Exemplar was used to carry out large scale Monte Carlo simulations of the CMS detector. In one run, 64 simultaneous jobs were executed in a 64-CPU system partition, each of which generated 100 fully detailed simulations of the decay of the postulated Higgs particle into four muons. The 6400 different events that resulted from this run were generated and tracked through the detector in a record time of two hours.
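
The pattern of that run can be sketched as a simple one-job-per-CPU launcher: 64 workers, each generating its own batch of 100 events, for 64 × 100 = 6400 events in total. The simulate_events program name, its arguments and the seeding scheme below are hypothetical placeholders for the actual CMS simulation executable, shown only to illustrate the embarrassingly parallel structure of the run.

```cpp
// Sketch of a one-job-per-CPU Monte Carlo launcher (POSIX).
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <string>

int main() {
    const int kJobs = 64;            // one job per CPU in the partition
    const int kEventsPerJob = 100;   // 64 x 100 = 6400 events in total

    for (int job = 0; job < kJobs; ++job) {
        if (fork() == 0) {           // child process: run one simulation job
            std::string n    = std::to_string(kEventsPerJob);
            std::string seed = std::to_string(1000 + job);   // unique RNG seed per job
            execlp("simulate_events", "simulate_events",
                   "--events", n.c_str(), "--seed", seed.c_str(),
                   static_cast<char*>(nullptr));
            _exit(1);                // only reached if exec failed
        }
    }
    while (wait(nullptr) > 0) {}     // parent: wait for all 64 jobs to finish
    std::puts("all simulation jobs finished");
}
```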