There are many options to consider for final-meters data center connectivity. Most new connections are implemented at 10Gbps, but 25Gbps is starting to emerge as a faster option. Beyond speed, the remaining choices break down into two main groups: cables with integrated transceivers at each end (DACs and AOCs), and connectorized transceivers combined with cables of the appropriate type and length for each connection. Either cabling option can be point-to-point or point-to-multipoint (breakout cables).
Today we are going to take a close look at one popular way to connect a Top-Of-Rack (TOR) leaf switch to a number of rack-mounted high-performance servers. The specific rack-mounted items in this example are Cisco’s Nexus 93180YC-FX switch and Dell’s PowerEdge R640 server. Connectivity between these devices will be accomplished using point-to-point 10Gbps links over OM3 multimode duplex fibers. Additional items required are FluxLight 10GBASE-SR optical transceivers and a 10Gbps Network Interface Card (NIC, one per server). Let’s look at each of these components in a bit more detail.
TOR Leaf Switch: Cisco Nexus 93180YC-FX
The Nexus® 9300 platform is part of the latest generation of the fixed Nexus® 9000 Series Switches. Devices in this family are described by Cisco as being based on Cisco Cloud Scale (CCS) technology. The primary elements of CCS are:
- Greater performance scaling through the use of multi-rate ports (1/10/25/50/100G)
- Lower Total Cost of Ownership
- Detailed, real-time telemetry and analytics
- Improved security with line-rate encryption
- Faster application completion time (claim 50% improvement)
The Nexus 93180YC-FX switch we will use in this example is 1RU (one 1.75” rack unit) in height, for use in standard 19” equipment racks. The switch supports a total of 2.16 Terabits per second (Tbps) of bandwidth and over 850 million packets per second (Mpps). The 93180YC-FX has 48 downlink ports that can be equipped with either SFP+ or SFP28 optical transceivers, and each port may be configured to run at 1Gbps, 10Gbps, or 25Gbps in Ethernet mode, or at 16Gbps or 32Gbps in Fibre Channel mode.
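The per-port speed and mode options above can be summarized in a short sketch. This is purely illustrative (not vendor software): a hypothetical helper that checks whether a requested downlink speed is one of the modes the text lists for the 93180YC-FX.

```python
# Illustrative sketch only: the speed/mode combinations a 93180YC-FX
# downlink port supports, per the description above.
SUPPORTED_GBPS = {
    "ethernet": {1, 10, 25},       # SFP+/SFP28 Ethernet speeds
    "fibre_channel": {16, 32},     # Fibre Channel speeds
}

def valid_speed(mode: str, gbps: int) -> bool:
    """Return True if a downlink port can run at this speed in this mode."""
    return gbps in SUPPORTED_GBPS.get(mode, set())

print(valid_speed("ethernet", 10))       # True  (the speed used in this example)
print(valid_speed("fibre_channel", 10))  # False (FC mode is 16G or 32G only)
```

For the build described here, every downlink port runs in Ethernet mode at 10Gbps.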
Rack-mounted Server: Dell PowerEdge R640
Dell EMC’s PowerEdge line includes a wide array of server models with varying amounts of processor compute power, storage (RAM, SSDs, hard drives), expansion slots, fans, power supplies, etc. The PowerEdge R640 rack server used in this example is 1RU in height. The base R640 chassis has 4 x 3.5” drive slots and 3 PCIe slots. There is a wide range of processor options for the R640, all from Intel’s Xeon Gold and Platinum lines. The base system comes with a single hot-pluggable 495 watt power supply but may be upgraded to dual (1+1) redundant 750 watt supplies.
Dual-SFP+ Port 10G NIC: Broadcom 57810
The Broadcom 57810 is a low-profile PCIe (PCI Express) network interface card equipped with two SFP+ slots. Either or both of these slots may be equipped with any of a wide selection of optical transceiver modules, ranging from short-reach multimode (10GBASE-SR) to very long-reach singlemode (80km 10GBASE-ZR). If both SFP+ slots are equipped with transceivers, IEEE 802.3ad link aggregation can combine them into a single channel supporting a total of 20Gbps full duplex. Beyond the bandwidth increase, fault tolerance improves, since the two fiber jumpers can be run over alternate routes to the host TOR switch. If one of these cables is disconnected for any reason, connectivity is maintained, just at half speed.
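The aggregation behavior described above can be sketched in a few lines. This is a hedged illustration, not Broadcom’s actual firmware: 802.3ad implementations typically hash a flow’s addresses to pick one member link, so each flow stays on a single 10G link (avoiding packet reordering) while many flows spread across both, approaching 20Gbps in aggregate.

```python
import hashlib

# Illustrative sketch of hash-based member-link selection in an
# 802.3ad (LACP) bond. Link names and MAC addresses are hypothetical.
def pick_link(src_mac: str, dst_mac: str, active_links: list) -> str:
    """Hash the flow's MAC pair to choose one active member link."""
    digest = hashlib.md5(f"{src_mac}{dst_mac}".encode()).digest()
    return active_links[digest[0] % len(active_links)]

both_up = ["sfp_0", "sfp_1"]  # both NIC ports up: 20Gbps aggregate
link = pick_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", both_up)
# The same flow always hashes to the same link:
assert link == pick_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", both_up)

# If one jumper is disconnected, hashing over the surviving link keeps
# the server connected, just at half speed:
print(pick_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", ["sfp_0"]))  # sfp_0
```

Routing the two jumpers over different paths, as noted above, is what turns this failover behavior into real fault tolerance.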
10GBASE-SR Optical Transceivers: FluxLight SFP-10G-SR
FluxLight’s SFP-10G-SR is a Cisco® compatible 10GBASE-SR SFP+ optical transceiver, factory pre-programmed with all the configuration data necessary for seamless network integration. These transceivers perform identically to Cisco® originals, are 100% compatible, and support fully hot-swappable operation. The SFP-10G-SR is also 100% MSA (Multi-Source Agreement) compliant.
Fiber Jumper Cables: FluxLight FL-LCLC-OM3-2M
FluxLight carries a wide range of fiber jumper cables, with various terminations, fiber types, and lengths. Particularly well suited to the short-range 10Gbps Ethernet links used in this example is FluxLight part number FL-LCLC-OM3-2M. This is a duplex (2-fiber) cable with a dual LC-type connector on each end. The dual LC connector mates perfectly with the dual LC receptacles on the SFP+ optical transceivers used in this example.
Putting it all together
So we have the parts we need…now we have to put it all together. Where to put the TOR leaf switch? How about the top of the rack! A typical 19” data center equipment rack has 42RU of usable space for mounting equipment. The Cisco switch uses only 1RU and, since each Dell R640 also uses only 1RU, you could theoretically fit in 41 servers. In practice, some space is usually taken up by items such as power supplies and fiber management panels. In any case, a Broadcom 57810 NIC must be installed in each server, with two FluxLight 10GBASE-SR transceivers in each NIC. Since we are running two 10Gbps links per server, two FluxLight 10GBASE-SR modules per server must also be installed in the Nexus 93180YC-FX. With 48 SFP+ slots in the switch, that allows us to connect a total of 24 servers. Finally, an LC-LC OM3 duplex multimode fiber jumper must be connected between each pair of SFP+ 10G transceivers (one in the Nexus switch, the other in a server’s NIC). Remember, to minimize the chance of a fiber accident disconnecting an entire server, route the two fiber jumpers on each NIC to opposite sides of the rack.
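The capacity math above is worth making explicit: with dual links per server, switch ports, not rack space, are the binding limit. A quick sketch using the figures from this example:

```python
# Rack capacity check using the figures from the example above.
switch_ports = 48        # SFP+/SFP28 downlinks on the Nexus 93180YC-FX
links_per_server = 2     # dual-port Broadcom 57810 NIC in each server
rack_units = 42          # usable RU in a typical 19" rack
switch_ru = 1            # the 1RU TOR switch itself

servers_by_ports = switch_ports // links_per_server   # limited by switch ports
servers_by_space = rack_units - switch_ru             # limited by rack space

print(servers_by_ports)                        # 24
print(min(servers_by_ports, servers_by_space)) # 24: ports, not space, limit us
```

So 24 dual-homed servers fill all 48 downlinks, leaving plenty of rack space for power distribution and fiber management.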
Available at FluxLight
Buy Now: FluxLight SFP-10G-SR
Buy Now: FluxLight FL-LCLC-OM3-2M