Energy for computing

CHEYENNE, Wyo. – The din of more than a thousand million million computer calculations per second – that’s a quadrillion, with 15 zeroes – sounds like a rushing river echoing inside this dark, immaculate, wide-open room. A few rows of black, refrigerator-sized cabinets hold 100 racks of hardware and more than 70,000 processors that together comprise one of the world’s fastest supercomputers.

The system, known as Yellowstone, is the beating heart of the NCAR-Wyoming Supercomputing Center, operated by the National Center for Atmospheric Research.

“You get used to the sound signature,” shouted Gary New, the center’s operations manager, referring to the facility’s various noises.

To a visitor standing in the loud hall, Yellowstone’s technological muscle is almost as hard to fathom as the energy needed to keep it all running. Supercomputers that operate at dizzying speeds with massive storage are allowing scientists to run highly complex scientific models and simulations. The components also draw lots of power, however, and typically need lots of cooling to keep the equipment functioning.

Here, the facility’s power bill is $1 million annually.

The tab may sound staggering, but NCAR – a Boulder-based program of the National Science Foundation – and its partners are getting the most computational bang for their buck. This spring, the center won first place for facility design implementation in the Green Enterprise IT awards, presented by the independent Uptime Institute, recognizing its innovative and high-efficiency design. With a power density of 1,000 watts per square foot, the facility is among the top 1 percent of data centers in the world for efficiency, thanks to building features that boost performance and make use of Wyoming’s climate.

Completed in October, the supercomputing center covers 153,000 square feet in a sleek, modern building, located across from a Wal-Mart distribution center and a Microsoft data center under construction. Just a few dozen engineers and technicians work on site at the $70 million building, while other staff monitor systems remotely from NCAR’s Mesa Laboratory in Boulder.

NCAR’s first supercomputer, housed at the Mesa Lab, started operating in 1963 – and, with a single processor, hardly seems “super” by present standards. New arrays in the 1980s brought in early parallel systems that linked thousands of processors to increase capabilities. By the ’90s, supercomputing technology had advanced so far that the Boulder lab no longer was equipped to meet the power, space or cooling requirements for modern arrangements. With aspirations for a high-performance, world-class system, NCAR decided in 2007 to site its new facility in Cheyenne, partnering with state and city entities and the University of Wyoming to finance a land sale, construction and acquisition of the supercomputer. Yellowstone, which was built by IBM, is the 13th fastest computer in the world and the largest dedicated to atmospheric modeling.

“We’re really trying to advance the science of climate and weather,” said Anke Kamrath, director of operations and services for NCAR’s computational and information systems laboratory. “We’re still at the tip of the iceberg.”

The facility allows scientists to run complex and data-intensive climate and atmospheric models at local and global scales. Early projects include studies of ocean currents and turbulence, long-range weather forecasts, air-pollution projections and electricity in space. The supercomputer also has contributed to the forthcoming global climate assessment through the Intergovernmental Panel on Climate Change. A simulation that might have taken its Boulder predecessor, named Bluefire, half a year to process now runs for just a week. More than 1,500 users and about 100 projects are tapping into Yellowstone, and the system is already at 90 percent utilization.

Yellowstone is almost 30 times more powerful than Bluefire. But while moving data among components and running over a quadrillion calculations per second isn’t cheap, Kamrath said, technology and efficiency gains have kept the associated electric costs from rising proportionally. In fact, the Wyoming center runs on only two to three times as much power as Bluefire. Breakthroughs in computing capacity have played a key role, along with special attention to the NWSC’s construction.

“Running the building itself is as complex as running the computer,” Kamrath said.

The facility design, completed by the Lakewood-based RMH Group, takes advantage of Wyoming’s frequent winds and cold air to reduce cooling needs, often a huge cost for data centers. Among the building’s specs is a 135,000-gallon evaporative cooling tower that uses outside air to keep computers from overheating – without having to maintain frigid indoor temperatures. The system is so efficient that the facility’s industrial chillers need to be turned on only about five days a year.

“The center doesn’t need to be a meat locker,” said New, referring to the brisk interiors maintained in many other data centers.

A waste-heat recapture system funnels heat from the components, transferring energy to warm offices and even melt outdoor snow and ice in winter. In the mechanical room in the building’s basement, a looped “chilled-beam system” delivers heating and cooling efficiently in both directions. Building and lighting automation monitors and controls settings based on occupancy to further raise efficiency. On a flat-screen display, technicians can track when someone plugs in a cell phone anywhere in the facility. Overall, the efficiency measures are working so well that it costs more to light the building than to cool the computers, and roughly 90 percent of the center’s power goes directly to computing.

The center has already achieved LEED Gold certification for its green-building credentials, and the award from the Uptime Institute recognizes that it is operating at its potential – and serving as an example for other data and computing centers.

“Everything’s doing what it’s intended to do so far,” said New. “We did our homework.”

Even as Yellowstone efficiently hums along, New, Kamrath and others are planning for the future. Kamrath already is starting to shop for the next-generation supercomputer that will replace Yellowstone in another handful of years – a nod to the rapid advancement in technology. The facility, for its part, is ready for the expansion. Current operations use 2 to 4 megawatts, but the center is designed to handle 25 megawatts. The expansive room that holds Yellowstone is just one of four modular computing spaces in the facility, with the other three waiting.

“We built in expandability, so we weren’t maxed out the day we moved in,” said New. “We’ve designed a building for a 20- to 30-year use life for an industry that doesn’t know where it’ll be in five years. We don’t know what technology will be available down the road.”