According to GCN, the Johns Hopkins University Applied Physics Laboratory Air and Missile Defense Department's Combat Systems Development Facility has achieved a major breakthrough in virtualization, using it to streamline some complicated calculations.
The desire to use the facility's 1,500-node computing clusters more efficiently drove the move toward virtualization: some nodes were fully loaded while others sat idle. A node that isn't processing anything is wasting money once the costs of hardware, power, and cooling are factored in.
The design of the lab's simulations was a major obstacle to virtualizing its high-performance computing system. The lab runs two clusters, one based on Windows and the other on Linux, because it works with outside contractors who have their own platform requirements.
"Some simulations take five seconds per task, and we run that same task up to a million times," Edmond DeMattia, a senior system engineer and virtualization architect told GCN. "While others may take 15 hours per task, but are only run 1,000 times."
These simulations use the "Monte Carlo method," meaning they rely on repeated random sampling to compute their results. All of those repetitions demand a lot of computing power.
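The lab's actual simulations are not public, but the Monte Carlo idea itself can be sketched with a classic toy example: estimating pi by sampling random points in a unit square and counting how many land inside the quarter circle. The function name and sample count below are illustrative, not from the article.

```python
import random

def estimate_pi(num_samples: int, seed: int = 42) -> float:
    """Monte Carlo sketch: sample random points in the unit square
    and count the fraction that fall inside the quarter circle.
    That fraction approximates pi/4, so we multiply by 4."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    inside = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / num_samples

print(estimate_pi(100_000))
```

As in the lab's workloads, the individual samples are independent, which is why such jobs spread so naturally across hundreds of cluster nodes, and why idle nodes are pure waste.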
DeMattia used VMware's ESXi hypervisor to manage the virtual machines. ESXi monitors all of the virtual machines and diverts resources from underused ones to those running intensive tasks, so the cluster is used more efficiently than it would be without the hypervisor.
Given the overhead of virtualization, DeMattia expected a performance loss of 6 to 8 percent. Instead, he was surprised to find a 2 percent performance gain. He then began moving the cluster over to the new virtualized grid.
“My team fundamentally redesigned how high-performance scientific computing is performed in the Air and Missile Defense Department by utilizing virtualization and distributed storage as the framework for pooling resources across multiple departments," he said.
The lab saved about $504,000 in hardware costs and over $40,000 in cooling costs by virtualizing its computing cluster.
Edited by Maurice Nagle