The difference between long-term, upfront strategic planning and reactive change carries significant implications, especially when it comes to building a smart data center.
In a recent webcast featuring market segment partners Panduit and Cisco, experts delved into the “hot” topic of data center availability and how it correlates with unified physical infrastructure. Company executives explained their long-term vision for helping enterprises make more efficient, cost-effective use of their data centers and, ultimately, provide a consistent, successful experience for their end users.
“The big challenge that we’ve been seeing in this space is the newer hardware that’s coming in can do a lot more than the older systems. More processing means it’s going to consume more power and those systems are going to generate more heat so this can put some pressure on your different physical infrastructure type,” explained Cisco IT Architect Doug Alger, who touched on the many challenges physical infrastructures face today.
The apparent trend is undoubtedly the need for more power in data centers around the world. Alger cited the U.S. Environmental Protection Agency’s 2007 report to Congress, which found that U.S. data centers consumed 61 billion kilowatt-hours in 2006 – roughly double the figure for 2000. “The good news is there are things we can do to optimize power, some on the IT side and some on the facilities side,” Alger said.
“Cisco certainly knows its data centers – the company has 52 data centers with 220,000 square feet of hosting space, and tends to inherit a significant number of data centers through mergers and acquisitions,” Alger said, “which often present challenges since they tend to be older.”
“You add in all of these power constraints and you add in the fact that systems are getting smaller, and it can not only stress what’s happening on the power side, but it has a ripple effect on what’s happening with cooling. You put in more systems and it increases your cabling densities, especially in older facilities. If you’re packing more hardware into the same space, you can have challenges with structural loading,” he explained.
During the webcast, the audience was polled on when they were planning to expand their data centers’ physical capacity in the near future. Interestingly, two of the audience segments answered closely, but the implications of their answers were quite divergent: 25 percent said they would be expanding in the next 12 months, while close to 30 percent said they were unsure when they would be expanding. The remaining respondents said they already have ample space.
According to Alger, today the challenge within data centers is in power, whereas two years ago it was space. “Just because there is a power outlet there doesn’t mean you have enough power there,” he warned.
Alger said that industry group ASHRAE recently released a new set of standards for acceptable temperatures in server/hosting operating environments, with a notably wider range than its previous standard. He also noted that Panduit focuses on sustainable ways to optimize data centers.
“We definitely have an understanding that there is an energy consumption cost that goes with cooling, and there is a variety of cooling technologies that involve energy efficiency. If your data center is in a climate where it’s cool, you can use air economizers or heat wheels that essentially allow Mother Nature to do some of your cooling for you, which can lower your energy costs.”
In addition, he said variable frequency drives, which can be applied to anything in the cooling system run by a motor, provide a middle ground so systems aren’t always running at 100 percent with a simple “on” or “off” switch. He also said isolating hot and cold airflow, with less air mixing, allows for more efficiency. Simple steps like sealing off tiny gaps in data center space can add up to significant cost savings and greater efficiency.
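The appeal of variable frequency drives comes from how fan power scales: by the fan affinity laws, a fan’s power draw varies roughly with the cube of its speed, so even a modest slowdown cuts energy use sharply. A minimal sketch (the 70 percent figure is an illustrative assumption, not from the webcast):

```python
def fan_power_fraction(speed_fraction):
    """Fan affinity laws: power draw scales roughly with the cube of fan speed."""
    return speed_fraction ** 3

# A cooling fan throttled to 70% speed by a variable frequency drive,
# instead of cycling between off and 100%:
print(round(fan_power_fraction(0.7), 3))  # ~0.343, i.e. about a third of full power
```

The cubic relationship is why VFDs pay off: a 30 percent speed reduction yields far more than a 30 percent power reduction.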
In addition, distributing heat sources rather than clustering them together helps prevent “hot spots” from forming. On the power side, Alger said enterprises should use high-efficiency components so they aren’t wasting energy. “A few percent may not sound like a whole lot, but it can add up if you have a large data center,” he said. In terms of cooling and temperature, it’s critical to be able to monitor power draw and allow for flexibility so that companies can plan and adjust accordingly.
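The “few percent adds up” point is easy to make concrete with back-of-the-envelope numbers (the load, price, and efficiency-gain figures below are illustrative assumptions, not from the webcast):

```python
def annual_energy_cost(avg_load_kw, price_per_kwh):
    """Annual electricity cost for a constant average load, in dollars."""
    hours_per_year = 24 * 365
    return avg_load_kw * hours_per_year * price_per_kwh

# Hypothetical data center drawing an average of 1 MW at $0.10/kWh:
baseline = annual_energy_cost(1000, 0.10)
# A 3% efficiency gain from higher-efficiency power components:
savings = baseline * 0.03
print(f"${baseline:,.0f} baseline, ${savings:,.0f} saved per year")
# prints "$876,000 baseline, $26,280 saved per year"
```

At large-facility scale, a single-digit percentage improvement translates into tens of thousands of dollars a year.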
“In addition, while most of this focus is on the facilities side, obviously decisions are also made around hardware – since that is what draws the power – which can have a big impact on all of your capacities. Using virtualization on the server and storage side is a good way to extend the life of your data center,” he explained.
Marc Naese, director of IT infrastructure and operations at Tinley Park, Ill.-based Panduit, explained that focusing on individual design elements is critical, both early and upfront in the design process and across the entire lifecycle of the data center.
“IP communications is really driving systems infrastructure convergence – so when we talk about the unified physical infrastructure, and how we develop and design around that, we’re looking at several different elements: power, communication, computing, security and control,” Naese said. “All of these elements are extremely critical, and being able to service and pay attention to each one is important.”
As these different systems converge and are optimized, it becomes a significant challenge within the data center to support and implement them in a way that keeps all of these elements in their optimal state.
Referencing the webcast poll, in which close to 30 percent of companies were unsure when they would need to expand their data centers, Naese said, “The fact that they don’t know, that’s really concerning to me, and that tells me that a lot of people aren’t to that automated state yet.”
This further validates the significance and relevance of Panduit’s UPI (unified physical infrastructure) approach, which ties all system components together to provide a seamless infrastructure.
“If you look around the industry and see performance requirements and technology change, one thing that is consistent throughout all those changes is the physical infrastructure. So if I don’t have the right media types or the performance, and I’m not moving my systems from 1Gig to 10Gig or beyond, I’m not going to be able to support that new technology,” Naese said. “The end result is significant problems within the infrastructure that may cause outages that can remove your ability to provide availability to your end users.”
Naese identified three top challenges:
1) Availability – providing uptime and meeting service level agreements;
2) Agility – maximizing your real estate investment while still having the flexibility to integrate new elements and move things around at a fast pace; and
3) Security – keeping mission-critical elements secure and preventing hacking into the network.
With more companies looking to migrate and ensure seamless operability, Naese said that Panduit thoroughly follows the development cycle and increases interoperability testing to maximize efficiency.
“As technology launches to market, we develop reference architecture best practice designs that give you the recipe for how to deploy these in your infrastructure successfully,” he said. He added that creating a logical mapping design for the physical infrastructure at the forefront, when possible, gives companies a significant advantage.
Knowing that every environment is different, Panduit takes those individual elements into consideration in the many upfront decisions involved.
“These decisions are usually made prior to purchasing network equipment, so if we can help get involved earlier in the process to make better decisions, you can end up with an infrastructure that gives you that flexibility and agility to support that infrastructure successfully,” Naese said.