Data centers are multiplying at a rapid rate in an effort to keep up with demand for high-capacity cloud computing resources. As cloud applications and services generate ever-increasing volumes of big data, data center power consumption and server bandwidth requirements have exploded.
Data center operators are now seeking ways to constrain costs and power consumption while still offering the high levels of availability, redundancy and security that their customers demand. Virtualization, of both servers and the broader physical infrastructure, could well prove to be the key to the efficiencies needed in today’s demanding data center environments.
Very simply, virtualization software decouples workloads from the physical infrastructure beneath them, enabling multiple applications and systems to run simultaneously on the same hardware. Organizations can immediately reap cost savings on hardware and operating expenses, since fewer physical servers are needed to run the same variety of systems and software. Virtualization is the powerhouse behind cloud computing and offers massive potential for how data centers operate.
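To see where those hardware savings come from, consider a back-of-the-envelope consolidation sketch; the utilization figures below are illustrative assumptions, not data from IDC or the article:

```python
import math

# Hypothetical consolidation sketch: all figures are illustrative
# assumptions, not data from the article or IDC.
physical_servers = 100      # standalone servers, one workload each
avg_utilization = 0.10      # dedicated hardware often sits mostly idle
target_utilization = 0.70   # safe ceiling for a virtualization host

# Total work, expressed in "fully busy server" units.
total_load = physical_servers * avg_utilization

# Hosts needed once those workloads are packed onto shared hardware as VMs.
hosts_needed = math.ceil(total_load / target_utilization)

print(f"{physical_servers} servers -> {hosts_needed} virtualization hosts")
print(f"Hardware reduction: {1 - hosts_needed / physical_servers:.0%}")
```

Under these assumptions, 100 lightly loaded servers collapse onto 15 virtualization hosts, an 85 percent reduction in machines to buy, power and cool.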
According to research firm IDC, virtualization of servers and physical infrastructure could save businesses in the Asia-Pacific region up to $106 billion by 2020. The firm examined server spending along with the associated costs of administration, power and cooling, and physical floor space. The technology also has a massive environmental impact, with the potential to eliminate 6.4 million tons of carbon dioxide emissions in the region through 2020.
In the U.S., virtualization has the potential to save companies $1.9 trillion in gross energy and fuel costs through 2020, as well as 9.1 gigatons of carbon dioxide emissions. IDC adds that the technology can also significantly shorten time to market for services, which leads to a better ROI.
Virtualization can present complications, however. Stefan Bernbo, CEO of Compuverde, a company specializing in big data cloud storage solutions, recently wrote about some of the drawbacks.
According to Bernbo, rapid virtualization can create issues like data congestion unless the underlying hardware keeps pace with expansion. As virtual machines (VMs) are deployed in quick succession, significant bottlenecks can occur if all servers and VMs connect to the same shared storage. Data center operators need to ensure their infrastructure architectures keep up with the rapid pace of virtualization to avoid these problems.
Bernbo suggests organizations study how early virtualization adopters, such as telcos and service providers, dealt with congestion. Solutions with multiple data entry points that distribute the load across all servers can optimize performance and minimize lag time.
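As a minimal sketch of that idea, the snippet below maps each VM to one of several equivalent entry nodes by hashing its identifier; the node names and the hashing scheme are illustrative assumptions, not Compuverde’s actual design:

```python
import hashlib

# Illustrative "multiple entry points" sketch: node names and the hashing
# scheme are assumptions for this example, not any vendor's design.
ENTRY_NODES = ["node-a", "node-b", "node-c", "node-d"]

def entry_node_for(vm_id: str) -> str:
    """Deterministically map a VM to one of the storage entry nodes."""
    digest = hashlib.sha256(vm_id.encode()).hexdigest()
    return ENTRY_NODES[int(digest, 16) % len(ENTRY_NODES)]

# Each VM gets a stable entry point, and traffic spreads across all nodes,
# so no single node has to absorb every storage request.
for vm in ("vm-001", "vm-002", "vm-003", "vm-004", "vm-005"):
    print(vm, "->", entry_node_for(vm))
```

Because the mapping is deterministic, each VM always talks to the same entry node, while the population of VMs as a whole is spread evenly across all of them.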
Running VMs inside the storage nodes themselves is another way to tackle the problem. “With this approach, the whole architecture is flattened out,” writes Bernbo. “If the organization is using shared storage in a SAN, the VM layer usually sits on top of the storage layer, turning it into one giant storage system with only one point of entry. To fix the data congestion issues that result from this approach, some businesses are starting to move from the typical two-layer architecture to one that keeps virtual machines and storage running out of the same layer.”
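To make the contrast concrete, here is a toy traffic model, with hypothetical node names and request counts, comparing the single-entry two-layer layout with a flattened layout in which each VM runs inside its storage node:

```python
from collections import Counter

# Toy traffic model of the two layouts; VM placement, node names and
# request counts are illustrative assumptions, not measurements.
VMS = {f"vm-{i}": f"storage-{i % 4}" for i in range(16)}  # VM -> home node
REQUESTS_PER_VM = 1_000

two_layer = Counter()
flattened = Counter()
for vm, home in VMS.items():
    two_layer["san-head"] += REQUESTS_PER_VM  # every request crosses one entry point
    flattened[home] += REQUESTS_PER_VM        # requests stay on the VM's own node

print("Two-layer SAN:", dict(two_layer))  # all 16,000 requests hit the SAN head
print("Flattened:   ", dict(flattened))   # 4,000 requests per storage node
```

In the two-layer model every request funnels through the single SAN entry point, while the flattened model spreads the same traffic evenly across the storage nodes.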
By ensuring physical infrastructure keeps up with the rapid pace of virtualization, data center operators and their customers can reap the most benefits from the technology.
Edited by Rory J. Thompson