USC News


Supercomputing on Demand

Carl Kesselman of the Information Sciences Institute and his colleagues believe that computational grids will eventually put advanced supercomputers, data archives, virtual-reality displays and scientific instruments at the fingertips of the nation’s scientists and engineers – regardless of where tools or people are located.

Photo by Irene Fertik

A PROTOTYPE FOR future “computational grids,” which will provide supercomputing power on demand just as a power grid provides electricity, has been demonstrated by researchers from USC’s Information Sciences Institute (ISI) and the Argonne National Laboratory.

The test-bed grid at the SC97 conference, held in San Jose's McEnery Convention Center in November, harnessed 3,000 processors in the U.S. and Europe.

GUSTO (Globus Ubiquitous Supercomputing Testbed) is the latest development of the Globus system, an integrated set of software components for next-generation high-performance Internet computing.

“By providing pervasive access to supercomputing capabilities, computational grids will change the way we think about and use high-end computing,” said Ian Foster of Argonne, who co-leads the Globus project with Carl Kesselman of ISI. “We’re excited to take this step toward establishing a permanent computational grid facility.”

From previous experiments, Kesselman said, “We know that both the technical and organizational obstacles to creating such integrated grids are tremendous. However, the cooperation that we have received from all participants has been amazing. People are clearly ready for this next step toward widespread collaborative supercomputing.”

Foster and Kesselman believe that future computational grids will place the most advanced supercomputers, data archives, virtual-reality displays and scientific instruments at the fingertips of the nation’s scientists and engineers – regardless of where tools or people are located – and enable new problem-solving techniques, such as distributed supercomputing, remote visualization and tele-immersion.

At SC97, 10 groups used Globus software and GUSTO resources for a range of distributed supercomputing applications, including:

  • SF-Express, a distributed interactive simulation system developed by Caltech and JPL, which uses multiple supercomputers to perform very large synthetic forces simulations.
  • A system that uses remote supercomputers for the reconstruction of crystallographic data from the Advanced Photon Source at Argonne.
  • Neph, a supercomputer-enhanced instrumentation application that uses dynamically acquired computing resources for analysis of satellite data.
  • NICE/CAVERN, a system developed at the University of Illinois at Chicago for the creation of shared virtual spaces.
  • Cactus, a system for computing and visualizing Einstein’s gravitational-wave equations at a distance. This application was run across the Atlantic, using a supercomputer in Garching, Germany.
  • Nimrod, a system for creating and managing large parameter studies, used for environmental and superconductivity studies.
  • MPMM, an integrated weather modeling system that used remote computing resources to generate daily weather forecasts during SC97.

ONE NOVEL RESOURCE incorporated in the GUSTO grid is a large pool of workstations managed by Condor, a system developed at the University of Wisconsin by professor Miron Livny and his colleagues to support high-throughput computing. “We’re excited to be coupling Condor with Globus,” Livny said. “We believe that future grids will need to support both high-performance and high-throughput computing, and this seems to be the way to do it.”

The highlight of the SC97 conference was a world-record SF-Express run. By harnessing a sizable fraction of GUSTO resources, researchers were able to simulate the behavior of 60,000 independent entities in a closed system.

The GUSTO grid was developed in collaboration with the National Computational Science Alliance and the National Partnership for Advanced Computational Infrastructure (NPACI), both recipients of National Science Foundation (NSF) Partnerships for Advanced Computational Infrastructure awards; as well as with staff at Department of Energy and NASA laboratories and other research centers around the world.

The alliance, anchored by the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, helped develop essential software and incorporated its large SGI/Cray Origin2000 and Convex supercomputers into the grid. NPACI, anchored by the San Diego Supercomputer Center at UC San Diego, also supported the effort and provided access to computers.

Foster and Kesselman previously collaborated on I-SOFT, the middleware used for the I-WAY experiment at the Supercomputing ’95 conference.

Globus research and development is supported by the Defense Advanced Research Projects Agency, the Department of Energy and the NSF, and by an equipment grant from Sun Microsystems.

