10 Gigabit Ethernet – Why Choose It?

Gigabit Ethernet (GbE) has long dominated local area network (LAN) applications since Ethernet technology was born in the 1970s. But when servers must be connected to storage area networks (SANs) and network-attached storage (NAS), or to each other, GbE is often insufficient. Ethernet has therefore evolved into a newer, higher-performing iteration: 10 Gigabit Ethernet (10GbE).

The Institute of Electrical and Electronics Engineers (IEEE) 802.3 working group has published several standards regarding 10GbE, including 802.3ae-2002 (fiber: -SR, -LR, -ER) and 802.3ak-2004 (CX4 copper twinax, InfiniBand-type cable). Among these standard interfaces, 10GBASE-SR is the most widely used type, found in modules such as the SFP-10G-SR and SFP-10G-SR-S. With 10 Gigabit connectivity now widely available, 10GbE has emerged as the connection of choice for many companies growing their networks and supporting new applications and traffic types. Three main advantages explain why users choose 10GbE today.

Data Center Network Simplification

While Fibre Channel and InfiniBand are specialized technologies that can connect servers and storage, they can’t extend beyond the data center. A single 10GbE network with a single switch, however, can support the LAN and server-to-server communications, and can connect to the wide-area network. Ethernet and IP network technology are familiar to network designers, so replacing multiple networks with a single 10GbE network avoids complex staff retraining. And by consolidating multiple gigabit ports into a single 10 Gigabit connection, 10GbE simplifies the network infrastructure while providing greater bandwidth.

Traffic Prioritization and Control

A major advantage of 10GbE is that separate networks for SANs, server-to-server communication, and the LAN can be replaced with a single 10GbE network. While 10Gb links may have sufficient bandwidth to carry all three types of data, bursts of traffic can overwhelm a switch or endpoint.

SAN performance is extremely sensitive to delay. Slowing down access to storage has an impact on server and application performance. Server-to-server traffic also suffers from delays, while LAN traffic is less sensitive. There must be a mechanism to allocate priority to critical traffic while lower-priority data waits until the link is available.

Older Ethernet protocols do not provide the controls needed. A receiving node can send an 802.3x PAUSE command to stop the flow of packets, but PAUSE stops all packets. 802.1p was developed in the 1990s to provide a method to classify packets into one of eight priority levels, but it did not include a mechanism to pause individual levels. The IEEE has since standardized 802.1Qbb Priority-based Flow Control (PFC), which provides a way to stop the flow of low-priority packets while permitting high-priority data to flow.
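To make the per-priority pause concrete, the sketch below builds the MAC Control payload of a PFC frame: the PFC opcode, a priority-enable vector with one bit per 802.1p priority, and eight 16-bit pause times measured in quanta of 512 bit times. The helper name `build_pfc_payload` is illustrative, not from any library, and the sketch covers only the payload, not the full Ethernet frame.

```python
import struct

PFC_OPCODE = 0x0101  # MAC Control opcode for Priority-based Flow Control

def build_pfc_payload(pause_quanta):
    """Build the MAC Control payload of an 802.1Qbb PFC frame (sketch).

    pause_quanta: dict mapping priority (0-7) -> pause time in quanta
    (one quantum = 512 bit times). Priorities absent from the dict keep
    flowing, which is exactly what plain 802.3x PAUSE cannot express.
    """
    enable_vector = 0
    times = [0] * 8
    for prio, quanta in pause_quanta.items():
        if not 0 <= prio <= 7:
            raise ValueError("priority must be 0-7")
        enable_vector |= 1 << prio  # set this priority's enable bit
        times[prio] = quanta
    # opcode, priority-enable vector, then eight 16-bit time fields
    return struct.pack("!HH8H", PFC_OPCODE, enable_vector, *times)

# Pause only priority 3 (e.g. a lossless storage class) for the maximum
# time, while the other seven priorities continue to flow.
payload = build_pfc_payload({3: 0xFFFF})
```

The payload is always 20 bytes; a switch refreshes or cancels the pause by sending another PFC frame with updated time fields.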

A bandwidth allocation mechanism is also required. 802.1Qaz Enhanced Transmission Selection (ETS) provides a way to group one or more 802.1p priorities into a priority group. All of the priority levels within a group should require the same level of service. Each priority group is then assigned a percentage allocation of the link. One special priority group is never limited and can override all other allocations and consume the entire bandwidth of the link. During periods when high-priority groups are not using their allocated bandwidth, lower-priority groups are allowed to use the available bandwidth.
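The sharing behavior described above can be sketched as a small allocator: each group is guaranteed its percentage of the link, and bandwidth left unused by satisfied groups is redistributed to still-hungry groups in proportion to their weights. This is an illustrative model of ETS-style sharing, not the standard's scheduler; the function name `ets_allocate` and the figures are assumptions, and the never-limited strict-priority group is omitted for brevity.

```python
def ets_allocate(link_bw, weights, demands):
    """Sketch of 802.1Qaz-style (ETS) bandwidth sharing.

    weights: dict group -> percent of the link guaranteed to that group
    demands: dict group -> bandwidth the group currently wants
    Returns dict group -> allocated bandwidth. Unused guaranteed
    bandwidth is shared out to unsatisfied groups by weight.
    """
    alloc = {g: 0.0 for g in weights}
    remaining = float(link_bw)
    active = {g for g in weights if demands.get(g, 0) > 0}
    while remaining > 1e-9 and active:
        total_w = sum(weights[g] for g in active)
        snapshot = remaining
        for g in list(active):
            share = snapshot * weights[g] / total_w
            give = min(share, demands[g] - alloc[g])
            alloc[g] += give
            remaining -= give
            if demands[g] - alloc[g] <= 1e-9:
                active.discard(g)  # group satisfied; stop feeding it
        if snapshot - remaining <= 1e-9:
            break  # no progress: every demand has been met
    return alloc

# 10 Gb/s link; SAN guaranteed 50%, server-to-server 30%, LAN 20%.
# The SAN only needs 3 Gb/s, so its spare guarantee is redistributed
# to the server and LAN groups in a 30:20 ratio.
alloc = ets_allocate(10.0, {"SAN": 50, "server": 30, "LAN": 20},
                     {"SAN": 3.0, "server": 5.0, "LAN": 4.0})
```

Here the SAN gets its full 3 Gb/s, while the server and LAN groups end up with 4.2 and 2.8 Gb/s respectively, filling the link.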

Congestion Control

802.1Qbb and 802.1Qaz by themselves don’t solve the packet loss problem. They can pause low-priority traffic on a link, but they don’t prevent congestion when a switch or an end node is being overwhelmed by high-priority packets from two or more links. There must be a way for receiving nodes to notify sending nodes to slow their rate of transmission.

IEEE 802.1Qau provides such a mechanism. When a receiving node detects that it is nearing the point where it will begin discarding incoming packets, it sends a message to all nodes currently sending to it, and those senders slow their transmission rate. When the congestion clears, the node sends another message informing senders to resume their full rate.
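The feedback loop above can be sketched with the quantity at the heart of 802.1Qau (Quantized Congestion Notification): the congestion point computes a feedback value from how far the queue sits above its setpoint and how fast it is growing, and a negative value triggers a notification that makes the sender cut its rate. The constants `W` and `GD` are assumptions typical of QCN descriptions, not values quoted from the standard, and the function names are illustrative.

```python
W = 2.0        # assumed weight on queue growth in the feedback value
GD = 1 / 128   # assumed rate-decrease gain at the sender (reaction point)

def qcn_feedback(qlen, qlen_old, q_eq):
    """Congestion-point feedback in the spirit of 802.1Qau (QCN).

    Fb = -(q_off + W * q_delta): q_off is how far the queue is above
    its equilibrium setpoint q_eq; q_delta is how much it grew since
    the last sample. A negative Fb signals congestion, and the switch
    sends a Congestion Notification Message (CNM) carrying Fb.
    """
    q_off = qlen - q_eq
    q_delta = qlen - qlen_old
    return -(q_off + W * q_delta)

def on_cnm(rate, fb):
    """Reaction point: multiplicatively cut the rate on a CNM."""
    if fb < 0:
        rate *= 1 - GD * min(-fb, 64)  # feedback magnitude is clamped
    return rate

fb = qcn_feedback(qlen=30, qlen_old=20, q_eq=20)  # above setpoint, growing
new_rate = on_cnm(10.0, fb)                       # sender slows down
```

Because the cut is proportional to the feedback magnitude, a queue that is both deep and growing produces a sharper slowdown than one that is merely hovering above its setpoint.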

10GbE in Data Centers

For many institutions, especially those that rely on automated trading, uptime and response time are critical; delays longer than a second can be exceedingly costly. As server traffic grows and downtime becomes ever more expensive, many of today’s data centers need extended bandwidth. 10GbE is an ideal technology for moving large amounts of data quickly. The bandwidth it provides, in conjunction with server consolidation, is highly advantageous for Web caching, real-time application response, parallel processing, and storage.

Conclusion

10GbE provides greater bandwidth for transporting data over Ethernet architectures with reduced cost and complexity, making it an ideal connection choice for many companies. Fiberstore offers a wide range of 10GbE solutions, including high-quality SFP+ modules (e.g. SFP-10G-SR and SFP-10G-SR-S).

