If the internet is the blood that runs in the veins of today’s economy, then the datacenter is the heart that keeps that blood circulating where it needs to go. Datacenters are where information lives: where it is stored, transmitted, and accessed. Without datacenters and connectivity, there is no cloud and no digital economy. But just as we can’t get complacent about our own heart health, we must keep these digital hearts healthy by addressing the internal and external challenges they face.
We took a look at five datacenter ailments and prescribed a corrective course for avoiding or overcoming each, all to maintain health both inside the walls of the datacenter and across its interconnections.
1. Space and power
Space and power inside the datacenter have long been the operator’s biggest challenges. The largest of these facilities tend to be located where power is cheap and there is lots of room. Even then, however, reducing power consumption is a primary concern, as it is one of the biggest operational costs. Metro-area datacenters face even more complications from power consumption and associated factors, such as heat dissipation and stringent building codes.
Space is also top-of-mind because many small- and medium-sized datacenters are running out of room. That makes it more and more difficult to add capacity – which has typically been accomplished by adding hardware – because there’s no space left for an extra server rack or even a single network element.
Prescription: Purpose-built platforms
Purpose-built datacenter interconnect platforms are designed to significantly reduce space and power and to increase flexibility, making it easier to plan, order, and install high-capacity network elements. These are plug-and-play devices that can be up and running in minutes, with very small power, footprint, and cooling requirements, all of which drive down operating costs.
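As a rough illustration of why wattage matters, the sketch below compares annual power-and-cooling costs for a hypothetical legacy transport chassis and a compact purpose-built platform. The wattages, utility rate, and cooling overhead are assumptions chosen for the arithmetic, not vendor figures.

```python
# Illustrative only: hypothetical wattages and utility rate, not vendor specs.
HOURS_PER_YEAR = 24 * 365

def annual_power_cost(watts, usd_per_kwh=0.10, cooling_overhead=0.5):
    """Yearly electricity cost, with a fractional adder for cooling."""
    total_kw = watts * (1 + cooling_overhead) / 1000
    return total_kw * HOURS_PER_YEAR * usd_per_kwh

legacy = annual_power_cost(watts=1800)   # assumed legacy transport chassis
compact = annual_power_cost(watts=400)   # assumed purpose-built DCI platform

print(f"Legacy chassis:   ${legacy:,.0f}/year")
print(f"Compact platform: ${compact:,.0f}/year")
print(f"Savings:          ${legacy - compact:,.0f}/year per network element")
```

Multiplied across dozens of racks, even a modest per-element difference like this compounds quickly.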
2. Manual operations
Managing datacenter hardware is frequently a labor-intensive and slow process that can often fall prey to human error. And the consequences of these mistakes can be disastrous. In a recent example, errors managing a datacenter led to thousands of flights being grounded, costing an airline an enormous amount of revenue, to say nothing of customer goodwill. Changing the way hardware is managed to avoid these errors is a top priority.
Prescription: Automation and open APIs
Automation and open APIs eliminate the need to program hardware manually. Unlike telephone voice traffic, datacenter traffic is difficult to predict, with different applications placing different loads on the network spontaneously, creating unpredictable spikes in demand. Automation allows both the datacenter and its Layer 2 and Layer 3 equipment to be reconfigured on the fly, via those APIs, to match demand spikes, as the sketch below illustrates.
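A minimal sketch of that closed loop, assuming a hypothetical REST controller: the host, paths, and JSON fields below are invented for illustration, and real platforms typically expose RESTCONF/NETCONF or a vendor-specific API instead.

```python
import requests

# Hypothetical controller API: dci-controller.example.com, its paths, and
# its field names are all invented for this illustration.
CONTROLLER = "https://dci-controller.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <token>"}

def rebalance_link(link_id, threshold=0.8, step_gbps=100):
    """Poll a link's utilization and request more capacity on a spike."""
    stats = requests.get(f"{CONTROLLER}/links/{link_id}",
                         headers=HEADERS, timeout=5).json()
    if stats["utilization"] > threshold:
        # No truck roll, no CLI session: one API call lights more capacity.
        requests.patch(f"{CONTROLLER}/links/{link_id}",
                       headers=HEADERS, timeout=5,
                       json={"capacity_gbps": stats["capacity_gbps"] + step_gbps})

rebalance_link("dc1-dc2-span4")
```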
3. Capacity
Capacity is the primary external challenge. Enterprises need to transmit high-capacity traffic over long distances, and they are limited by the physics that govern such transmission. Those limits are strained further as traffic grows by upwards of 30 to 40 percent a year.
For example, Facebook went from 1 billion video views a day to 5 billion in less than a year, placing a tremendous burden on the network’s ability to scale with that traffic. From a financial perspective, online retailers can sell hundreds of thousands of dollars of merchandise in minutes, so any capacity issue that causes downtime in the network or datacenter can be extremely costly.
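To put the growth figure in perspective, a quick compounding calculation shows why capacity planning can’t be a one-time exercise:

```python
import math

# At 30-40% annual growth, traffic doubles roughly every 2 to 2.6 years.
for growth in (0.30, 0.40):
    years_to_double = math.log(2) / math.log(1 + growth)
    print(f"{growth:.0%} annual growth -> traffic doubles every "
          f"{years_to_double:.1f} years")
```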
Prescription: Coherent optics
The best cure for today’s network capacity issues is a strong dose of coherent optics. This technology has allowed data to be transmitted faster – and farther – than ever before. Where SONET maxed out at 40G and couldn’t travel very far, coherent optics have removed traditional roadblocks, such as chromatic dispersion and polarization mode dispersion, to allow high-capacity transmission over virtually any type of fiber, over long distances, at 100G and beyond.
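A back-of-envelope sketch shows why the old roadblock was so hard: a direct-detect receiver tolerates a roughly fixed budget of accumulated chromatic dispersion, and that budget shrinks with the square of the bit rate. The ~1,000 ps/nm budget assumed for 10G below is a common rule of thumb, not a spec; coherent receivers sidestep the ceiling entirely by compensating dispersion digitally in DSP.

```python
# Rule-of-thumb sketch, not a link-design tool.
D_SMF = 17.0           # ps/nm/km, typical standard single-mode fiber
BUDGET_10G = 1000.0    # ps/nm, assumed dispersion tolerance of a 10G receiver

for rate_gbps in (10, 40):
    budget = BUDGET_10G * (10 / rate_gbps) ** 2   # tolerance scales as 1/B^2
    print(f"{rate_gbps}G direct detect: ~{budget / D_SMF:.0f} km "
          f"uncompensated reach")
# Coherent DSP removes this limit, enabling 100G+ over long-haul distances.
```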
4. Latency
Some top-level applications – such as data mirroring and video streaming – are very sensitive to latency, which can be an important factor in the end-user experience. The more remote a datacenter is, the more latency is introduced, which is why latency-sensitive applications and data are mostly hosted in metro datacenters.
And while the latency contributed by the fiber itself can’t easily be addressed – it is set by distance and the speed of light in glass, as the calculation below shows – latency introduced by hardware and software processes can be fixed.
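The fiber floor is easy to estimate: light in glass travels at roughly the vacuum speed of light divided by the fiber’s group index, so distance alone sets a hard lower bound on round-trip time. The group index of 1.468 below is a typical single-mode-fiber value, assumed for illustration.

```python
# Back-of-envelope: fiber adds roughly 5 microseconds per kilometer each way.
C_KM_PER_MS = 299792.458 / 1000   # speed of light in vacuum, km per millisecond
GROUP_INDEX = 1.468               # typical for single-mode fiber (assumed)

def round_trip_ms(route_km):
    """Propagation-only round-trip time over a fiber route, in milliseconds."""
    return 2 * route_km * GROUP_INDEX / C_KM_PER_MS

for km in (10, 100, 1000):        # metro, regional, long-haul
    print(f"{km:>5} km route: {round_trip_ms(km):.2f} ms round trip (fiber only)")
```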
Prescription: Optimized hardware and software
Latency is best attacked with high-performance, ultra-high-speed optoelectronics and optimized software engines, which have significantly reduced the delay that equipment adds when connecting datacenters.
5. Security
Security is another concern. Most organizations protect data at rest, securing servers, databases, routers, and switches by managing user access and credentials. However, in today’s web-scale networks, large amounts of critical data are in flight, as high-bandwidth communications occur beyond the walls of the datacenter, traversing a larger, potentially worldwide network. This in-flight information must be protected from intruders and hacking tools, which can go so far as to tap directly into the fiber.
Prescription: Wire-speed encryption
Security is being addressed by in-flight wire-speed encryption, which digitally protects data while it’s en route between source and destination. By encrypting data as it leaves the security of the private cloud, operators can ensure it is protected from unauthorized interception as it traverses the network and crosses varying security levels before reaching its destination.
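Wire-speed encryption is implemented in transport hardware at line rate, but the underlying idea is authenticated encryption of every frame while it’s in flight. The sketch below illustrates the concept in software with AES-GCM from the third-party cryptography package; it’s a conceptual stand-in, not how an optical transponder actually does it.

```python
# Conceptual demo of in-flight encryption with AES-GCM (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # shared by the two endpoints
aead = AESGCM(key)

nonce = os.urandom(12)                      # must never repeat under one key
payload = b"replicated database page leaving the private cloud"
ciphertext = aead.encrypt(nonce, payload, None)

# A fiber tap between datacenters sees only ciphertext; the receiving end
# decrypts and verifies integrity in a single authenticated step.
assert aead.decrypt(nonce, ciphertext, None) == payload
```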
Space and power, manual operations, capacity, latency, and security don’t have to give datacenter operators heartburn. With a few simple prescriptions, datacenters can be kept healthy, performing well both inside and outside their walls.