10Gb switch
At Dell Interop 2012 in Las Vegas, Dell announced the first FTOS-based blade switch: the Force10 MXL 10/40Gbit/s blade switch, and later a 10/40Gbit/s concentrator. The FTOS MXL 40 Gb was introduced on 19 July 2012.[24] The MXL provides 32 internal 10Gbit/s links (two ports per blade in the chassis), two QSFP+ 40Gbit/s ports and two empty expansion slots, allowing a maximum of four additional QSFP+ 40Gbit/s ports or eight 10Gbit/s ports. Each QSFP+ port can be used for a 40Gbit/s switch-to-switch (stacking) uplink or, with a break-out cable, four 10Gbit/s links. Dell offers direct-attach cables with a QSFP+ interface on one side and 4 x SFP+ on the other end, or a QSFP+ transceiver on one end and four fibre-optic pairs to be connected to SFP+ transceivers on the other side. Up to six MXL blade switches can be stacked into one logical switch.
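The port arithmetic above can be sketched in a few lines. This is a hypothetical illustration, not a Dell tool: the fixed port counts and the "4 additional QSFP+ or 8 x 10Gbit/s ports over two slots" totals come from the text, so each expansion module is assumed to carry either 2 QSFP+ ports or 4 SFP+ ports, and each QSFP+ port is assumed usable as 4 x 10Gbit/s links via a break-out cable. The function name is made up for this sketch.

```python
# Assumption: each MXL expansion module is either 2 x QSFP+ or 4 x SFP+,
# matching the text's totals of 4 QSFP+ or 8 x 10Gbit/s ports over two slots.
FIXED_QSFP = 2   # fixed QSFP+ 40Gbit/s ports on the base MXL
BREAKOUT = 4     # 10Gbit/s links per QSFP+ port with a break-out cable

def max_external_10g(qsfp_modules: int, sfp_modules: int) -> int:
    """Maximum external 10Gbit/s links for a given mix of expansion modules."""
    assert qsfp_modules + sfp_modules <= 2, "the MXL has two expansion slots"
    qsfp_ports = FIXED_QSFP + 2 * qsfp_modules  # 2 QSFP+ ports per module
    sfp_ports = 4 * sfp_modules                 # 4 SFP+ ports per module
    return qsfp_ports * BREAKOUT + sfp_ports

print(max_external_10g(2, 0))  # both slots with QSFP+ modules -> 24
print(max_external_10g(0, 2))  # both slots with SFP+ modules  -> 16
```

Filling both slots with QSFP+ modules and breaking every port out gives the largest 10Gbit/s fan-out; keeping the ports at 40Gbit/s instead trades that fan-out for fewer, faster uplinks or stacking links.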
The MXL switches also support Fibre Channel over Ethernet, so that server blades with a converged network adapter mezzanine card can be used for both data and storage traffic with a Fibre Channel storage system. The MXL 10/40Gbit/s blade switch runs FTOS[25] and is therefore the first M1000e I/O product without a web GUI.

In October 2012 Dell also launched the I/O Aggregator for the M1000e chassis, running FTOS. The I/O Aggregator offers 32 internal 10Gbit/s ports towards the blades and, as standard, two 40Gbit/s QSFP+ uplinks, plus two extension slots. Depending on your requirements, you can get extension modules with 40Gbit/s QSFP+ ports, 10Gbit/s SFP+ ports or 1/10Gbit/s 10GBase-T copper interfaces. You can assign up to 16 x 10Gbit/s uplinks to your distribution or core layer. The I/O Aggregator supports FCoE and DCB (Data Center Bridging) features.[26]
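One way to see where the "up to 16 x 10Gbit/s uplinks" figure can come from is to count the physical 10Gbit/s links. This is a hypothetical sketch, not a Dell tool: the two fixed QSFP+ ports, the two extension slots and the break-out cables are from the text, while the per-module port counts (2 QSFP+ or 4 SFP+ per module) are an assumption, and the function name is made up.

```python
# Assumption: each I/O Aggregator extension module carries either
# 2 x QSFP+ ports or 4 x SFP+ ports (hypothetical split for illustration).
BREAKOUT = 4  # 10Gbit/s links per QSFP+ port with a break-out cable

def uplink_candidates(qsfp_modules: int, sfp_modules: int) -> int:
    """Physical 10Gbit/s links available for use as uplinks."""
    assert qsfp_modules + sfp_modules <= 2, "two extension slots"
    qsfp_ports = 2 + 2 * qsfp_modules  # 2 fixed QSFP+ ports plus modules
    return qsfp_ports * BREAKOUT + 4 * sfp_modules

# Both fixed QSFP+ ports broken out (8 links) plus two SFP+ modules
# (8 links) yields 16, matching the figure quoted in the text.
print(uplink_candidates(0, 2))  # -> 16
```

Under these assumptions, a configuration with the two fixed QSFP+ ports broken out and both slots populated with SFP+ modules reaches exactly 16 x 10Gbit/s links towards the distribution or core layer.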