4-17, CERN School of Computing 2005, Saint Malo, France
6-7, 14th GridPP Collaboration Meeting, Birmingham, UK
6-15, Second National Virtual Observatory Summer School, Aspen, Colorado
11-14, The Second Grid Applications and Middleware Workshop, Poznan, Poland
The Canadian GridX1.
Image courtesy of Daniel Vanderster
GridX1 is a collaborative project between researchers at
the Universities of Alberta, Calgary, Simon Fraser, Toronto, and Victoria,
the National Research Council in Ottawa and the TRIUMF Laboratory in
Vancouver. This experimental Canadian grid computing facility has been used
to execute more than 20,000 jobs for the ATLAS particle physics experiment.
The Global Ring Network for Advanced Applications Development, which provides
scientists around the globe with advanced networking tools, exists due to
the shared commitment of the U.S., Russia, China, Korea,
Canada and the Netherlands to promote increased engagement and cooperation among these nations.
Fermilab and Caltech successfully use UltraScience Net
Fermilab Press Release, September 1, 2005
BATAVIA, Illinois—Preparing for an onslaught of data to be processed and distributed in the upcoming years, scientists at the Department of Energy's Fermi National Accelerator Laboratory and at the California Institute of Technology successfully tested a new ultrafast data transfer connection developed by the Office of Science of the Department of Energy.
CrossGrid project concludes
CERN Courier, September 2005
The CrossGrid project has ended after three years, during which time Grid-enabled solutions were developed for computer- and data-intensive applications that are distributed but that require near-real-time responses.
SDSC Team Supports Tsunami Reconnaissance Data Collection
SDSC News Release, August 31, 2005
More than 20 NSF-funded scientific reconnaissance teams went to work in Asia capturing data from the 2004 tsunami – the deadliest in recorded history.
Grid3 Ends Productive Two-Year Run
The first U.S. grid to allow multiple virtual organizations to share resources in a common infrastructure ended its successful two-year run on September 1. Researchers from several scientific fields used Grid3 to test their new grid-enabled applications, learn how to operate and work within a grid environment, and produce scientific results.
"New discoveries in astrophysics, simulations in particle physics, earthquake engineering optimizations and analyses of protein sequences in biology all benefited from extended use of Grid3 resources," said Fermilab's Ruth Pordes, one of the Grid3 coordinators. "These results would not have been possible without interfacing individual projects' computing resources to Grid3's common infrastructure."
Grid3 was initially created to demonstrate specific technologies at the Supercomputing 2003 conference. It proved so successful that operation was continued well after the conference, to the benefit of participating scientists. Grid3 represented breakthrough collaboration between the National Science Foundation-funded GriPhyN and iVDGL projects, the Department of Energy's Office of Science-funded PPDG project, the U.S. ATLAS and U.S. CMS particle physics experiments and the Condor and Globus teams—collaboration that will continue with the Open Science Grid infrastructure.
Running the Grid on Lite
Any Grid infrastructure consists of three basic building blocks: on the one side, the underlying infrastructure (or fabric) providing computing and storage resources; on the other, the users with their applications, wanting to use the resources; and bringing the two together, the so-called "middleware". Aptly named, this software typically consists of a stack of different modules, which act collectively as an intermediary, hiding the multiple parts and detailed workings of the Grid infrastructure from the user. The Grid thus appears as a single, coherent, easy-to-use resource, in which the middleware ensures that the resources are used as efficiently as possible and in a secure and accountable manner.
Daily average throughput (in MB/s) during one week of the LCG Service Challenge 3. The gLite file-transfer system is used to provide reliable file transfer between sites, and to allow sites to control their resource usage.
Many national and international projects are working in the rapidly evolving field of Grid computing, and there are diverse technologies currently available. Recent efforts have focused on closer collaboration between projects to find the best solutions and ensure that viable standards emerge and evolve—a trend that should help to make Grid technology widely accessible to a larger user community.
It is in this spirit that the Enabling Grids for E-science (EGEE) project launched its gLite middleware initiative in April 2004.
Read the full article in this month's CERN Courier
Matteo Melani: Certified Grid Accountant
In 2003, after a year and a half of traveling around the world, Matteo Melani started work in Italy implementing a computing structure for the BaBar experiment and teaching undergraduate students about computer science. To get freshman physicists interested in his class, he looked for connections between the two fields and discovered the world of grid computing. Soon after that class, his Italian university sent him to work on BaBar databases at the Stanford Linear Accelerator Center in California, and a six-month stint turned into a full-time research software architect position at SLAC working on grid computing.
"My main focus now is the accounting system for the Open Science Grid," said Melani. "I'm also working with a group of ATLAS physicists exploring the idea of making SLAC a Tier 2 computing site for that experiment, and with BaBar experimenters interested getting their software running on the OSG."
Melani and Philippe Canal from Fermilab co-chair the OSG accounting activity, which is gathering requirements and designing an accounting system for the U.S. infrastructure. To achieve the OSG goals of bringing together many institutions and universities and linking consumers and providers of resources, the infrastructure must provide a reliable way to track the usage of those resources.
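The requirement Melani describes, reliably tracking who used which resources, reduces at its core to collecting per-job usage records and aggregating them by user and site. A minimal sketch of that aggregation step follows; the record format is invented for illustration and is not the OSG accounting system's actual schema.

```python
from collections import defaultdict

# Minimal sketch of grid accounting: roll per-job usage records up
# into per-(user, site) CPU-hour totals. Hypothetical record format.

def aggregate(records):
    totals = defaultdict(float)
    for rec in records:
        totals[(rec["user"], rec["site"])] += rec["cpu_hours"]
    return dict(totals)

records = [
    {"user": "alice", "site": "SLAC", "cpu_hours": 12.5},
    {"user": "alice", "site": "SLAC", "cpu_hours": 7.5},
    {"user": "bob",   "site": "FNAL", "cpu_hours": 3.0},
]
print(aggregate(records))
# {('alice', 'SLAC'): 20.0, ('bob', 'FNAL'): 3.0}
```

A production accounting system adds what the sketch omits: authenticated record collection from each site, deduplication, and reporting, which is where the requirements-gathering Melani and Canal lead comes in.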
Catlett on TeraGrid's $150M Payday
The National Science Foundation recently announced a five-year initiative with $150 million in funding to operate and enhance TeraGrid, a distributed national infrastructure supporting computational science. TeraGrid—built over the past four years—is the world's largest, most comprehensive distributed cyberinfrastructure for open scientific research. Through high-performance network connections, TeraGrid integrates high-performance computers, data resources and tools, and high-end experimental facilities around the country.
On the heels of the big NSF award, GRIDtoday editor Derrick Harris spoke with TeraGrid director Charlie Catlett about what the $150 million means to the future of the project. In addition, Catlett discusses his current relationship with the GGF, and what the addition of "Big Ben" means to the TeraGrid.
GRIDtoday: First of all, how good does it feel to receive such a hefty award and commitment from the NSF?
CHARLIE CATLETT: Well, this investment is a really strong statement about the quality of the work that our team did in the construction of TeraGrid and the way that the partners have come together to develop a strong plan for the next five years. I'm really proud of the leaders who have worked together from the sites and of the several hundred individuals who have worked hard to build TeraGrid and who are continuing to collaborate to deliver a valuable facility to the science community.
This article, by GRIDtoday editor Derrick Harris, originally appeared in the September 5, 2005 issue of GRIDtoday.