I recently sat down with Austin Hipes, vice president of technology at NEI, for a podcast that detailed what effect 40GE connectivity will have on packet processing hardware, how advances in both hardware and software may require developers to redesign their solutions, and the overall benefits that new server solutions for telecom platform deployments offer OEMs.
“As we move from 1GE to 10GE and now into 40GE, everything just keeps getting faster. And this increase in speed means you have to accomplish the same amount of work in a much smaller sliver of time as the packets come through in order to keep from affecting network throughput. This means both server I/O and processing power must be significantly increased -- especially to break up the work into individual processes and spread it across multiple cores for efficient throughput,” Hipes said.
New advances coming in 2012, including a brand new processor architecture from Intel and the addition of more cores to the individual processors themselves, will force developers to retool their solutions. Also, the integration of PCI Express I/O into the CPU itself, along with more memory channels and capacity, will provide a very low-latency architecture, as all of the hardware is now centralized in a single piece of silicon. Hipes added, “This will allow for many of these faster throughputs without the need for a dedicated packet processor. A dual-processor system will be able to have 16 cores with a very large amount of PCI Express bandwidth, enabling anywhere from 10 to 20GE of packet processing throughput.”
In addition, the introduction of the DPDK (Data Plane Development Kit) reference library from Intel will allow users to leverage their general-purpose processors strictly for packet processing functions. Also, dedicated network processors from providers such as Cavium and NetLogic are getting faster on the dedicated packet processing side. “This means the market is split for applications that require greater than 20GE of throughput, even when multiple processors are used in a single system,” Hipes added. “Now for many applications that require 20GE and less, you will for the first time be able to utilize standard server architectures with APIs. For real core networks that need beyond 20GE of performance, the dedicated network processors themselves are increasing in speed to keep up with demand.”
As new servers and new architectures begin to come out at the end of the first quarter of 2012, 20GE applications will be feasible on a general-purpose server. Hipes concluded, “The reason this is important for many OEMs is that oftentimes they have had to deploy multiple solutions to hit different performance and cost targets. Now essentially anything 20GE and less, which is still the bulk of the market, can be done entirely in software by varying both the amount of processing in the general-purpose system and the amount of Ethernet performance in multi-core platforms. This makes it much easier on developers if they can deploy one code base in many different areas, as it greatly speeds up time to market and allows them to add more features.”
Jamie Epstein is a TMCnet Web Editor. Previously she interned at News 12 Long Island as a reporter's assistant. After working as an administrative assistant for a year, she joined TMC as a Web editor for TMCnet. Jamie grew up on the North Shore of Long Island and holds a bachelor's degree in mass communication with a concentration in broadcasting from Five Towns College.