Mellanox – Switch Systems


SB7700/SB7790 – 36-port EDR 100Gb/s InfiniBand Switch Systems

With the exponential growth of data being generated around the world and the increase of applications that can take advantage of real time massive data processing for high performance, data analytics, business intelligence, national security and ‘Internet of Things’ applications, the market demands faster and more efficient interconnect solutions.

The SB7700 and SB7790 switch systems provide the highest-performing fabric solutions in a 1RU form factor by delivering 7.2Tb/s of non-blocking bandwidth to High-Performance Computing and Enterprise Data Centers, with 90ns port-to-port latency. Built with Mellanox’s latest Switch-IB InfiniBand switch device, these switches provide up to 100Gb/s full bidirectional bandwidth per port. These stand-alone switches are an ideal choice for top-of-rack leaf connectivity or for building small to medium sized clusters. They are designed to carry converged LAN and SAN traffic with the combination of assured bandwidth and granular Quality of Service (QoS).
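To illustrate how the quoted 90ns port-to-port figure compounds across a fabric, here is a minimal sketch. It assumes a simple two-tier leaf/spine topology with a worst-case path of three switch hops; the topology and hop counts are illustrative assumptions, not from the product brief, and cable propagation delay is ignored.

```python
# Sketch: cumulative switching latency across a two-tier (leaf/spine)
# fabric built from SB7700-class switches. Assumes the quoted 90ns
# port-to-port figure per switch; hop counts are illustrative.

PORT_TO_PORT_NS = 90  # per-switch latency quoted for Switch-IB

def fabric_switch_latency_ns(hops: int) -> int:
    """Total switching latency for a path crossing `hops` switches."""
    return hops * PORT_TO_PORT_NS

# Traffic between ports on the same leaf crosses one switch;
# cross-leaf traffic crosses leaf -> spine -> leaf (3 hops).
same_leaf = fabric_switch_latency_ns(1)   # 90 ns
cross_leaf = fabric_switch_latency_ns(3)  # 270 ns
print(same_leaf, cross_leaf)
```

Even on the worst-case path, total switching latency stays well under a microsecond, which is the point of the low per-hop figure.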

The integrated InfiniBand routing functionality enables the design and deployment of larger-scale InfiniBand fabrics with no limitations. Its low latency enables fast communication within and across the data center, and InfiniBand routing also enables fabric isolation between different cluster segments (compute/network and storage).

The SB7000 switch family enables efficient computing with features such as static routing, adaptive routing, and advanced congestion management. These features ensure the maximum effective fabric bandwidth by eliminating congestion hot spots.


SB7700 Product Brief (PDF)
SB7790 Product Brief (PDF)

  • Industry-leading switch platform in performance, power, and density
  • Highest ROI – designed for energy and cost savings
  • Ultra-low latency
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion

 

  • 36 EDR (100Gb/s) ports in a 1U switch
  • 7.2Tb/s switching capacity
  • IBTA Specification 1.3 and 1.2.1 compliant
  • Low latency and low power design
  • InfiniBand router
  • Quality of Service enforcement
  • Port Mirroring
  • Adaptive routing
  • Congestion control
  • Redundant power supplies
  • Replaceable fan drawers
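The headline switching-capacity figures follow directly from the port counts: ports × line rate × 2 directions. A quick sanity check of the numbers quoted on this page (the helper function is hypothetical, not a Mellanox tool):

```python
def switching_capacity_tbps(ports: int, port_speed_gbps: int) -> float:
    """Aggregate full-duplex switching capacity in Tb/s:
    port count * per-port line rate * 2 directions."""
    return ports * port_speed_gbps * 2 / 1000

print(switching_capacity_tbps(36, 100))   # SB7700/SB7790: 7.2 Tb/s
print(switching_capacity_tbps(36, 56))    # SX6025/SX6036: 4.032 Tb/s
print(switching_capacity_tbps(648, 100))  # CS7500: 129.6, marketed as 130 Tb/s
```

The same arithmetic reproduces the capacity figure of every fixed and director switch in this catalog.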

MANAGEMENT (SB7700 ONLY)

  • Integrated subnet manager agent (up to 2k nodes)
  • Fast and efficient fabric bring-up
  • Comprehensive chassis management
  • Intuitive CLI and GUI for easy access
  • Can be enhanced with Mellanox’s Unified Fabric Manager (UFM™)

CS7500 – 648-Port EDR 100Gb/s InfiniBand Director Switch


The CS7500 switch provides the highest-performing fabric solution by delivering high bandwidth and low latency to Enterprise Data Centers and High-Performance Computing environments in a 28U chassis. Networks built with the CS7500 can carry converged traffic with the combination of assured bandwidth and granular quality of service. Built with Mellanox’s latest Switch-IB InfiniBand switch device, the CS7500 provides up to 100Gb/s full bidirectional bandwidth per port with ultra-low port-to-port latency below 0.5µs.


Product Brief (PDF)

  • Highest ROI – energy efficiency, cost savings and scalable high performance
  • High-performance fabric for parallel computation or I/O convergence
  • Modular scalability up to 648 ports
  • High-bandwidth, low-latency fabric for compute-intensive applications
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion
  • Fabric Management for cluster and converged I/O applications

  • 648 EDR (100Gb/s) ports in a 28U switch
  • 130Tb/s switching capacity
  • Ultra-low latency
  • IBTA Specification 1.3 and 1.2.1 compliant
  • Quality of Service enforcement
  • N+N power supply

MANAGEMENT

  • Integrated subnet manager agent (up to 2k nodes)
  • Fast and efficient fabric bring-up
  • Comprehensive chassis management
  • Intuitive CLI and GUI for easy access
  • Can be enhanced with Mellanox’s Unified Fabric Manager (UFM™)
  • Temperature sensors and voltage monitors
  • Fan speed controlled by management software

CS7510 – 324-Port EDR 100Gb/s InfiniBand Director Switch


The CS7510 switch provides the highest-performing fabric solution by delivering high bandwidth and low latency to Enterprise Data Centers and High-Performance Computing environments in a 16U chassis. Networks built with the CS7510 can carry converged traffic with the combination of assured bandwidth and granular quality of service. Built with Mellanox’s latest Switch-IB InfiniBand switch device, the CS7510 provides up to 100Gb/s full bidirectional bandwidth per port with ultra-low port-to-port latency below 0.5µs.


Product Brief (PDF)

  • Highest ROI – energy efficiency, cost savings and scalable high performance
  • High-performance fabric for parallel computation or I/O convergence
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion

  • 324 EDR (100Gb/s) ports in a 16U switch
  • 64Tb/s switching capacity
  • Ultra-low latency
  • IBTA Specification 1.3 and 1.2.1 compliant
  • Quality of Service enforcement
  • N+N power supply

MANAGEMENT

  • Integrated subnet manager agent (up to 2000 nodes)
  • Fast and efficient fabric bring-up
  • Comprehensive chassis management
  • Intuitive CLI and GUI for easy access
  • Can be enhanced with Mellanox’s Unified Fabric Manager
  • Temperature sensors and voltage monitors
  • Fan speed controlled by management software

CS7520 – 216-Port EDR 100Gb/s InfiniBand Director Switch


The CS7520 switch provides the highest-performing fabric solution by delivering high bandwidth and low latency to Enterprise Data Centers and High-Performance Computing environments in a 12U chassis. Networks built with the CS7520 can carry converged traffic with the combination of assured bandwidth and granular quality of service. Built with Mellanox’s latest Switch-IB InfiniBand switch device, the CS7520 provides up to 100Gb/s full bidirectional bandwidth per port with ultra-low port-to-port latency below 0.5µs.


Product Brief (PDF)

  • Highest ROI – energy efficiency, cost savings and scalable high performance
  • High-performance fabric for parallel computation or I/O convergence
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion

  • 216 EDR (100Gb/s) ports in a 12U switch
  • 43Tb/s switching capacity
  • Ultra-low latency
  • IBTA Specification 1.3 and 1.2.1 compliant
  • Quality of Service enforcement
  • N+N power supply

MANAGEMENT

  • Integrated subnet manager agent (up to 2000 nodes)
  • Fast and efficient fabric bring-up
  • Comprehensive chassis management
  • Intuitive CLI and GUI for easy access
  • Can be enhanced with Mellanox’s Unified Fabric Manager (UFM™)
  • Temperature sensors and voltage monitors
  • Fan speed controlled by management software

SX6710 – 36-port 56Gb/s InfiniBand/VPI Switch System

The SX6710 switch system provides the highest-performing fabric solution in a 1RU form factor by delivering 4.032Tb/s of non-blocking bandwidth to High-Performance Computing and Enterprise Data Centers, with 200ns port-to-port latency. Built with Mellanox’s latest SwitchX®-2 InfiniBand switch device, the SX6710 provides up to 56Gb/s full bidirectional bandwidth per port. This stand-alone switch is an ideal choice for top-of-rack leaf connectivity or for building small to medium-sized clusters. It is designed to carry converged LAN and SAN traffic with the combination of assured bandwidth and granular Quality of Service (QoS).

The SX6710 with Virtual Protocol Interconnect (VPI), supporting both InfiniBand and Ethernet connectivity, provides the highest-performing and most flexible interconnect solution for PCI Express Gen3 servers. VPI simplifies system development by serving multiple fabrics with one hardware design, and simplifies today’s networks by enabling one platform to run both InfiniBand and Ethernet subnets on the same chassis.

The SX6000 switch family enables efficient computing with features such as static routing, adaptive routing, and advanced congestion management. These features ensure the maximum effective fabric bandwidth by eliminating congestion hot spots.


Product Brief (PDF)

  • Virtual Protocol Interconnect® (VPI) flexibility offers InfiniBand and Ethernet connectivity
  • Industry-leading switch platform in performance, power, and density
  • Highest ROI – designed for energy and cost savings
  • Ultra-low latency
  • Granular QoS for Cluster, LAN and SAN traffic
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion
  • Fabric Management for cluster and converged I/O applications

Performance

  • 36 FDR (56Gb/s) ports in a 1U switch
  • 4Tb/s aggregate switch throughput
  • 200ns switch latency

Optimized Design

  • 1+1 redundant & hot-swappable power
  • N+1 redundant & hot-swappable fans
  • AC and DC power supplies ordering option
  • 80 PLUS Gold and Energy Star certified power supplies
  • Dual-core x86 CPU

SX6005/SX6012 – 12-port 56Gb/s InfiniBand/VPI Switch Systems

The SX6005 and SX6012 switch systems provide the highest-performing fabric solutions in a 1RU half-width form factor by delivering up to 1.3Tb/s of non-blocking bandwidth to storage and embedded systems, with 200ns port-to-port latency. Built with Mellanox’s 6th-generation SwitchX®-2 InfiniBand switch device, these switches provide up to 56Gb/s full bidirectional bandwidth per port. These stand-alone switches are an ideal choice for smaller departmental or back-end clustering uses with high-performance needs, such as storage, database and GPGPU clusters.

The SX6012 with Virtual Protocol Interconnect (VPI), supporting both InfiniBand and Ethernet connectivity, provides the highest-performing and most flexible interconnect solution for PCI Express Gen3 servers. VPI simplifies system development by serving multiple fabrics with one hardware design, and simplifies today’s networks by enabling one platform to run both InfiniBand and Ethernet subnets on the same chassis.

The SX6000 switch family enables efficient computing with features such as static routing, adaptive routing, and advanced congestion management. These features ensure the maximum effective fabric bandwidth by eliminating congestion hot spots.

 

IPv6 Ready

SX6005 Product Brief (PDF)
SX6012 Product Brief (PDF)

  • Virtual Protocol Interconnect® (VPI) flexibility offers InfiniBand and Ethernet connectivity
  • Industry-leading switch platform in performance, power, and density
  • Highest ROI – designed for energy and cost savings
  • Ultra-low latency
  • Granular QoS for Cluster, LAN and SAN traffic
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion
  • Fabric Management for cluster and converged I/O applications

Performance

  • 12 FDR (56Gb/s) ports in a 1U switch
  • 1.3Tb/s switching capacity
  • FDR/FDR10 support for Forward Error Correction (FEC)
  • IBTA Specification 1.3 and 1.2.1 compliant
  • Quality of Service enforcement
  • Port Mirroring*
  • Adaptive routing*
  • Congestion control*
  • Reversible air flow

MANAGEMENT (SX6012 ONLY)

  • Integrated subnet manager agent (up to 648 nodes)
  • Fast and efficient fabric bring-up
  • Comprehensive chassis management
  • Mellanox API for 3rd party integration
  • Intuitive CLI and GUI for easy access
  • Can be enhanced with Mellanox’s Unified Fabric Manager (UFM™)

SX6015/SX6018 – 18-port 56Gb/s InfiniBand/VPI Switch Systems

The SX6015 and SX6018 switch systems provide the highest-performing fabric solution in a 1RU form factor by delivering 2Tb/s of non-blocking bandwidth with 200ns port-to-port latency. Built with Mellanox’s 6th-generation SwitchX®-2 InfiniBand switch device, these switches provide eighteen ports of 56Gb/s full bidirectional bandwidth. These stand-alone switches are an ideal choice for top-of-rack leaf connectivity or for building small to extremely large clusters.

The SX6018 with Virtual Protocol Interconnect (VPI), supporting both InfiniBand and Ethernet connectivity, provides the highest-performing and most flexible interconnect solution for PCI Express Gen3 servers. VPI simplifies system development by serving multiple fabrics with one hardware design, and simplifies today’s networks by enabling one platform to run both InfiniBand and Ethernet subnets on the same chassis.

The SX6000 switch family enables efficient computing with features such as static routing, adaptive routing, and advanced congestion management. These features ensure the maximum effective fabric bandwidth by eliminating congestion hot spots.

IPv6 Ready
SX6015 Product Brief (PDF)
SX6018 Product Brief (PDF)

  • Virtual Protocol Interconnect® (VPI) flexibility offers InfiniBand and Ethernet connectivity
  • Industry-leading switch platform in performance, power, and density
  • Highest ROI – designed for energy and cost savings
  • Ultra-low latency
  • Granular QoS for Cluster, LAN and SAN traffic
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion
  • Fabric Management for cluster and converged I/O applications

Performance

  • 18 FDR (56Gb/s) ports in a 1U switch
  • 2Tb/s switching capacity
  • FDR/FDR10 support for Forward Error Correction (FEC)
  • IBTA Specification 1.3 and 1.2.1 compliant
  • Quality of Service enforcement
  • Port Mirroring**
  • Adaptive routing**
  • Congestion control**
  • Up to 8 switch partitions**
  • InfiniBand to InfiniBand routing**
  • Redundant power supplies and fan drawers

MANAGEMENT (SX6018 ONLY)

  • Integrated subnet manager agent (up to 648 nodes)
  • Fast and efficient fabric bring-up
  • Comprehensive chassis management
  • Mellanox API for 3rd party integration
  • Intuitive CLI and GUI for easy access
  • Can be enhanced with Mellanox’s Unified Fabric Manager (UFM™)

SX6025/SX6036 – 36-port 56Gb/s InfiniBand/VPI Switch Systems

The SX6025 and SX6036 switch systems provide the highest-performing fabric solutions in a 1RU form factor by delivering 4.032Tb/s of non-blocking bandwidth to High-Performance Computing and Enterprise Data Centers, with 200ns port-to-port latency. Built with Mellanox’s latest SwitchX®-2 InfiniBand switch device, these switches provide up to 56Gb/s full bidirectional bandwidth per port. These stand-alone switches are an ideal choice for top-of-rack leaf connectivity or for building small to medium sized clusters. They are designed to carry converged LAN and SAN traffic with the combination of assured bandwidth and granular Quality of Service (QoS).

The SX6036 with Virtual Protocol Interconnect (VPI), supporting both InfiniBand and Ethernet connectivity, provides the highest-performing and most flexible interconnect solution for PCI Express Gen3 servers. VPI simplifies system development by serving multiple fabrics with one hardware design, and simplifies today’s networks by enabling one platform to run both InfiniBand and Ethernet subnets on the same chassis.

The SX6000 switch family enables efficient computing with features such as static routing, adaptive routing, and advanced congestion management. These features ensure the maximum effective fabric bandwidth by eliminating congestion hot spots.

IPv6 Ready
SX6025 Product Brief (PDF)
SX6036 Product Brief (PDF)

  • Virtual Protocol Interconnect® (VPI) flexibility offers InfiniBand and Ethernet connectivity
  • Industry-leading switch platform in performance, power, and density
  • Highest ROI – designed for energy and cost savings
  • Ultra-low latency
  • Granular QoS for Cluster, LAN and SAN traffic
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion
  • Fabric Management for cluster and converged I/O applications

  • 36 FDR (56Gb/s) ports in a 1U switch
  • 4.032Tb/s switching capacity
  • FDR/FDR10 support for Forward Error Correction (FEC)
  • IBTA Specification 1.3 and 1.2.1 compliant
  • Quality of Service enforcement
  • Port Mirroring
  • Adaptive routing
  • Congestion control
  • Reversible air flow
  • Redundant power supplies
  • Replaceable fan drawers

MANAGEMENT (SX6036 ONLY)

  • Integrated subnet manager agent (up to 648 nodes)
  • Fast and efficient fabric bring-up
  • Comprehensive chassis management
  • Mellanox API for 3rd party integration
  • Intuitive CLI and GUI for easy access
  • Can be enhanced with Mellanox’s Unified Fabric Manager (UFM™)

IS5022 – 8-port Non-blocking Remotely-managed 40Gb/s InfiniBand Switch System

The IS5022 remotely-managed switch system provides the highest-performing fabric solution in a 1U half-width form factor by delivering 640Gb/s of non-blocking bandwidth with 100ns port-to-port latency. Built with Mellanox’s 4th-generation InfiniScale® IV InfiniBand switch device, the IS5022 provides up to 40Gb/s full bidirectional bandwidth per port. The IS5022 is an ideal choice for smaller departmental or back-end clustering uses with high-performance needs, such as storage, database and GPGPU clusters. The IS5022 is designed to carry converged LAN and SAN traffic with the combination of assured bandwidth and granular Quality of Service (QoS).
Product Brief (PDF)

  • Industry-leading switch platform in performance, power, and density
  • Highest ROI – designed for energy and cost savings
  • Ultra-low latency
  • Granular QoS for Cluster, LAN and SAN traffic
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion

INFINIBAND

  • IBTA Specification 1.2.1 compliant
  • Integrated subnet manager agent
  • Adaptive routing
  • Congestion control
  • 256 to 4Kbyte MTU
  • 9 virtual lanes: 8 data + 1 management
  • 48K-entry linear forwarding database
  • Port Mirroring
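The “8 data + 1 management” virtual-lane split is what QoS enforcement builds on: InfiniBand service levels (SLs) are mapped onto the data VLs, while VL15 is reserved for subnet management packets. A minimal illustrative sketch follows; in a real fabric the SL-to-VL tables are programmed by the subnet manager, and the modulo policy here is purely a hypothetical stand-in.

```python
# Illustrative SL -> VL mapping for a switch with 8 data VLs (VL0-VL7).
# InfiniBand defines 16 service levels; VL15 is reserved for subnet
# management packets and never carries data traffic.

DATA_VLS = 8
MGMT_VL = 15  # management virtual lane (subnet management only)

def sl_to_vl(service_level: int) -> int:
    """Fold the 16 service levels onto the 8 available data VLs.
    Real fabrics use SM-programmed SL2VL tables; modulo is just a
    simple stand-in policy for illustration."""
    if not 0 <= service_level <= 15:
        raise ValueError("InfiniBand service levels are 0-15")
    return service_level % DATA_VLS

print([sl_to_vl(sl) for sl in range(16)])
```

Because management traffic has its own dedicated lane, congested data traffic can never starve fabric bring-up or monitoring.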

IS5023 – 18-port Non-blocking Remotely-managed 40Gb/s InfiniBand Switch System

The IS5023 remotely-managed switch system provides a cost-effective, high-performance fabric solution in a 1U form factor by delivering 1.44Tb/s of non-blocking bandwidth with 100ns port-to-port latency. Built with Mellanox’s 4th-generation InfiniScale® IV InfiniBand switch device, the IS5023 provides up to 40Gb/s full bidirectional bandwidth per port. This remotely-managed fixed edge switch is an ideal choice for top-of-rack leaf connectivity or for building small to medium-sized clusters. The IS5023 is designed to carry converged LAN and SAN traffic with the combination of assured bandwidth and granular Quality of Service (QoS).
Product Brief (PDF)

  • Industry-leading switch platform in performance, power, and density
  • Highest ROI – designed for energy and cost savings
  • Ultra-low latency
  • Granular QoS for Cluster, LAN and SAN traffic
  • Maximizes performance by removing fabric congestion

  • 1.44Tb/s switching capacity
  • Signal optimization for longer cable length
  • Quality of Service enforcement
  • Reversible air flow

INFINIBAND

  • IBTA Specification 1.2.1 compliant
  • Integrated subnet manager agent
  • Adaptive routing
  • Congestion control
  • 256 to 4Kbyte MTU
  • 9 virtual lanes: 8 data + 1 management
  • 48K-entry linear forwarding database
  • Port Mirroring

IS5024 – 36-port Non-blocking Remotely-managed 40Gb/s InfiniBand Switch System

The IS5024 remotely-managed switch system provides the highest-performing fabric solution in a 1U form factor by delivering 2.88Tb/s of non-blocking bandwidth with 100ns port-to-port latency. Built with Mellanox’s 4th-generation InfiniScale® IV InfiniBand switch device, the IS5024 provides up to 40Gb/s full bidirectional bandwidth per port. This remotely-managed fixed edge switch is an ideal choice for top-of-rack leaf connectivity or for building small to extremely large clusters. The IS5024 is designed to carry converged LAN and SAN traffic with the combination of assured bandwidth and granular Quality of Service (QoS).
Product Brief (PDF)

  • Industry-leading switch platform in performance, power, and density
  • Highest ROI – designed for energy and cost savings
  • Ultra-low latency
  • Granular QoS for Cluster, LAN and SAN traffic
  • Maximizes performance by removing fabric congestion

  • 2.88Tb/s switching capacity
  • Signal optimization for longer cable length
  • Quality of Service enforcement
  • Reversible air flow

INFINIBAND

  • IBTA Specification 1.2.1 compliant
  • Integrated subnet manager agent
  • Adaptive routing
  • Congestion control
  • 256 to 4Kbyte MTU
  • 9 virtual lanes: 8 data + 1 management
  • 48K entry linear forwarding data base
  • Port Mirroring

IS5025/IS5030/IS5035 – 36-port 40Gb/s InfiniBand Switch Systems

The IS5025, IS5030, and IS5035 switch systems provide the highest-performing fabric solutions in a 1RU form factor by delivering 2.88Tb/s of non-blocking bandwidth to High-Performance Computing and Enterprise Data Centers, with 100ns port-to-port latency. Built with Mellanox’s 4th generation InfiniScale® IV InfiniBand switch device, these switches provide up to 40Gb/s full bidirectional bandwidth per port. These stand-alone switches are an ideal choice for top-of-rack leaf connectivity or for building small to medium sized clusters. They are designed to carry converged LAN and SAN traffic with the combination of assured bandwidth and granular Quality of Service (QoS).

The IS5025, IS5030, and IS5035 enable efficient computing with features such as static routing, adaptive routing, and advanced congestion management. These features ensure the maximum effective fabric bandwidth by eliminating congestion hot spots. Whether used for parallel computation or as a converged fabric, the IS5000 family of switches provides the industry’s best traffic-carrying capacity.

IS5025 Product Brief (PDF)
IS5030 Product Brief (PDF)
IS5035 Product Brief (PDF)

  • Industry-leading switch platform in performance, power, and density
  • Highest ROI – designed for energy and cost savings
  • Ultra-low latency
  • Granular QoS for Cluster, LAN and SAN traffic
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion
  • Fabric Management for cluster and converged I/O applications

  • 2.88Tb/s switching capacity
  • Signal optimization for longer cable length
  • Quality of Service enforcement
  • Temperature sensors and voltage monitors
  • Reversible air flow
  • Redundant power supplies
  • Replaceable fan drawers
  • Fan speed controlled by management software

INFINIBAND

  • IBTA Specification 1.2.1 compliant
  • Integrated subnet manager agent
  • Adaptive routing
  • Congestion control
  • 256 to 4Kbyte MTU
  • 9 virtual lanes: 8 data + 1 management
  • 48K-entry linear forwarding database
  • Port Mirroring

MANAGEMENT (IS5030/IS5035 ONLY)

  • Fast and efficient fabric bring-up
  • Fabric-wide bandwidth verification
  • Comprehensive chassis management
  • Mellanox API for 3rd party integration
  • Intuitive CLI and GUI for easy access

SX6506 – 108-Port InfiniBand Director Switch

The SX6506 switch system provides the highest-performing fabric solution in a 6U form factor by delivering 12.1Tb/s of non-blocking bandwidth with sub-1µs port-to-port latency.

Built with Mellanox’s 5th-generation SwitchX® InfiniBand switch device, the SX6506 provides up to 108 ports of 56Gb/s full bidirectional bandwidth.

The SX6506 can scale as the number of nodes per cluster and the number of cores per node increase. This modular chassis switch provides an excellent price-performance ratio for medium to extremely large size clusters, along with the reliability and manageability expected from a director-class switch.

Product Brief (PDF)

  • Industry-leading switch platform in performance, power, and density
  • Unlimited scalability across storage, application and database servers
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion

  • 108 FDR (56Gb/s) ports in a 6U switch
  • 12.1Tb/s aggregate switching capacity
  • Ultra-low latency
  • Congestion control
  • Quality of Service enforcement
  • FDR/FDR10 support for Forward Error Correction (FEC)
  • N+N power supply

SX6512 – 216-Port InfiniBand Director Switch

The SX6512 switch system provides the highest-performing fabric solution in a 9U form factor by delivering 24.2Tb/s of non-blocking bandwidth with sub-1µs port-to-port latency.

Built with Mellanox’s 5th-generation SwitchX® InfiniBand switch device, the SX6512 provides up to 216 ports of 56Gb/s full bidirectional bandwidth.

The SX6512 can scale as the number of nodes per cluster and the number of cores per node increase. This modular chassis switch provides an excellent price-performance ratio for medium to extremely large size clusters, along with the reliability and manageability expected from a director-class switch.

Product Brief (PDF)

  • Industry-leading switch platform in performance, power, and density
  • Unlimited scalability across storage, application and database servers
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion

  • 216 FDR (56Gb/s) ports in a 9U switch
  • 24.2Tb/s aggregate switching capacity
  • Ultra-low latency
  • Congestion control
  • Quality of Service enforcement
  • FDR/FDR10 support for Forward Error Correction (FEC)
  • N+N power supply

SX6518 – 324-Port InfiniBand Director Switch

The SX6518 switch system provides the highest-performing fabric solution in a 16U form factor by delivering 36.3Tb/s of non-blocking bandwidth with sub-1µs port-to-port latency.

Built with Mellanox’s 5th-generation SwitchX® InfiniBand switch device, the SX6518 provides up to 324 ports of 56Gb/s full bidirectional bandwidth.

The SX6518 can scale as the number of nodes per cluster and the number of cores per node increase. This modular chassis switch provides an excellent price-performance ratio for medium to extremely large clusters, along with the reliability and manageability expected from a director-class switch.

Product Brief (PDF)

  • Industry-leading switch platform in performance, power, and density
  • Unlimited scalability across storage, application, and database servers
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion

  • 324 FDR (56Gb/s) ports in a 16U switch
  • 36.3 Tb/s aggregate switching capacity
  • Ultra-low latency
  • Congestion control
  • Quality of Service enforcement
  • FDR/FDR10 support for Forward Error Correction (FEC)
  • N+N power supply

SX6536 – 648-Port InfiniBand Director Switch

The SX6536 switch provides the highest performing fabric solution by delivering high bandwidth and low-latency to Enterprise Data Centers and High-Performance Computing environments in a 29U chassis. Networks built with the SX6536 can carry converged traffic with the combination of assured bandwidth and granular quality of service.

Built with Mellanox’s 5th-generation SwitchX® InfiniBand switch device, the SX6536 provides up to 56Gb/s (FDR) full bidirectional bandwidth per port. With up to 648 ports, this system is among the densest switching systems available. The SX6536 is a scalable platform that grows as the number of nodes per cluster and the number of cores per node increase. This modular chassis switch is an ideal choice for building medium to large clusters or for use as a core switch for very large clusters.
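The 648-port figure is exactly what a non-blocking two-tier fat tree built from 36-port switch elements yields: each of 36 leaves dedicates half its ports down and half up. A sketch of that arithmetic (the topology assumption is illustrative, not taken from the product brief):

```python
def two_tier_fat_tree_ports(radix: int) -> int:
    """Maximum end ports of a non-blocking two-tier fat tree built
    from switches of the given radix: `radix` leaf switches, each
    with radix/2 downlinks and radix/2 uplinks -> radix^2 / 2."""
    return radix * radix // 2

print(two_tier_fat_tree_ports(36))  # 648 -- matches the SX6536/CS7500 port count
```

The same construction explains why director switches in this class come in multiples of the 36-port building block.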

Product Brief (PDF)

  • Highest ROI – energy efficiency, cost savings and scalable high performance
  • High-performance fabric for parallel computation or I/O convergence
  • Modular scalability up to 648 ports
  • High-bandwidth, low-latency fabric for compute-intensive applications
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion
  • Fabric Management for cluster and converged I/O applications

  • 648 FDR (56Gb/s) ports in a 29U switch
  • 72.52Tb/s switching capacity
  • Ultra-low latency
  • FDR/FDR10 support for Forward Error Correction (FEC)
  • IBTA Specification 1.3 and 1.2.1 compliant
  • Quality of Service enforcement
  • N+N power supply

Management

  • Integrated subnet manager agent (up to 648 nodes)
  • Fast and efficient fabric bring-up
  • Comprehensive chassis management
  • Mellanox API for 3rd party integration
  • Intuitive CLI and GUI for easy access
  • Can be enhanced with Mellanox’s Unified Fabric Manager (UFM™)
  • Temperature sensors and voltage monitors
  • Fan speed controlled by management software

Ethernet Switches

SN2700 – 32-port Non-blocking 100GbE Open Ethernet Spine Switch System

The SN2700 switch provides the highest-density 100GbE switching solution for the growing demands of today’s data center environments.

The SN2700 switch is an ideal spine and top-of-rack (ToR) solution, allowing maximum flexibility, with port speeds spanning from 10Gb/s to 100Gb/s per port and port density that enables full rack connectivity to any server at any speed. The uplink ports allow a variety of blocking ratios that suit any application requirement.

Powered by the Spectrum ASIC and packed with 32 ports running at 100GbE, the SN2700 carries a whopping switching capacity of 6.4Tb/s with a landmark 9.52Bpps (billion packets per second) processing capacity in a compact 1RU form factor.
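The packets-per-second figure is consistent with line rate at minimum Ethernet frame size: on the wire, a 64-byte frame occupies 84 bytes once preamble/SFD and the inter-frame gap are included. A quick check (the frame-overhead constants are standard Ethernet values, not taken from the brief):

```python
# Verify 6.4 Tb/s of capacity against the quoted ~9.52 Bpps figure.
# A minimum-size Ethernet frame is 64 bytes, but each frame also costs
# 8 bytes of preamble/SFD and 12 bytes of inter-frame gap on the wire.
MIN_FRAME_ON_WIRE_BYTES = 64 + 8 + 12  # = 84 bytes per frame on the wire

def packets_per_second(capacity_bps: float) -> float:
    """Line-rate packet throughput at minimum frame size."""
    return capacity_bps / (MIN_FRAME_ON_WIRE_BYTES * 8)

pps = packets_per_second(6.4e12)
print(round(pps / 1e9, 2))  # ~9.52 billion packets per second
```

The same arithmetic on the SN2410’s 4Tb/s capacity reproduces its quoted 5.95Bpps.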

Product Brief (PDF)

  • Zero Packet Loss
  • True cut through latency
  • Lowest Power
  • Easily scales from one to thousands of nodes and switches
  • Arranged and Organized Data Center
    • Supports speeds of 10/25/40/50/56/100GbE
    • Easy deployment
    • Easy maintenance
  • Unprecedented Performance
    • Line rate performance on all ports at all packet sizes
    • Storage and server applications run faster
  • Software Defined Networking (SDN) support
  • Runs MLNX-OS; alternative operating systems supported over ONIE

  • Wire Speed Switching / Routing
    • 6.4Tb/s
    • 9.52B packets-per-second
  • High Density
    • 32 40/56/100GbE ports
    • Up to 64 10/25GbE ports, up to 64 50GbE ports
  • Lowest Latency
    • 300ns port-to-port at 100GbE
    • Flat latency across L2 and L3 forwarding
    • Cut-through latency between ports running at different speeds
  • Lowest Power
    • Under 7.5 watts per port
  • VM for running user applications
  • Integral Layer 2 and Layer 3 support
    • IPv4 and IPv6
  • Mellanox NEO Cloud Networking Orchestration and Management
    • Manage from 1 to 1,000s of nodes and switches
    • Centralized configuration and management of the data center
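The “true cut-through latency” claim above matters because a store-and-forward switch must receive an entire frame before it can begin transmitting it, while a cut-through switch forwards as soon as the header is parsed. A sketch of that serialization penalty (function name illustrative):

```python
# Sketch: serialization delay a store-and-forward switch must absorb
# before forwarding can begin -- the delay cut-through switching avoids.

def serialization_ns(frame_bytes: int, speed_gbps: float) -> float:
    # frame_bytes * 8 bits / (speed_gbps * 1e9 b/s), expressed in ns
    return frame_bytes * 8 / speed_gbps

print(serialization_ns(1500, 100))  # 1500B frame at 100GbE: 120.0 ns
print(serialization_ns(9000, 10))   # 9KB jumbo frame at 10GbE: 7200.0 ns
```

At 100GbE even a standard 1500-byte frame would add 120ns per hop, comparable to the switch’s entire 300ns port-to-port latency.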

SN2410 – 48-port 25GbE + 8-port 100GbE Open Ethernet Switch System

The SN2410 switch provides the highest-performance 100GbE top-of-rack switching solution for the growing demands of today’s data center environments.

The SN2410 switch is an ideal top of rack (ToR) solution, allowing maximum flexibility, with port speeds spanning from 10Gb/s to 100 Gb/s per port. Its optimized port configuration enables high-speed rack connectivity to any server at 10GbE or 25GbE speeds. The 100GbE uplink ports allow a variety of blocking ratios that suit any application requirement.
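To illustrate the blocking ratios mentioned above: with all 48 host ports at 25GbE and all 8 uplinks at 100GbE, the oversubscription works out as below (a sketch under the assumption that every port is fully populated at the listed speed; the function name is illustrative):

```python
# Sketch: oversubscription (blocking) ratio for a ToR switch.
# Assumption: all downlink and uplink ports fully populated.

def oversubscription(downlink_gbps: float, uplink_gbps: float) -> float:
    return downlink_gbps / uplink_gbps

down = 48 * 25   # 48 host-facing ports at 25GbE = 1200 Gb/s
up = 8 * 100     # 8 uplink ports at 100GbE = 800 Gb/s
print(f"{oversubscription(down, up)}:1")  # 1.5:1
```

Running fewer hosts, or hosts at 10GbE, brings the ratio at or below 1:1, i.e. non-blocking.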

Powered by the Spectrum ASIC and packed with 8 ports running at 100GbE and 48 ports running at 25GbE, the SN2410 carries a whopping switching capacity of 4Tb/s with a landmark 5.95Bpps processing capacity in a compact 1RU form factor.

icon_pdf Product Brief

  • Zero Packet Loss
  • True cut-through latency
  • Easy Scale from one to thousands of nodes and switches
  • Arranged and Organized Data Center
    • Supports speeds of 10/25/40/50/56/100GbE
    • Easy deployment
    • Easy maintenance
  • Unprecedented Performance
    • Line rate performance on all ports at all packet sizes
    • Storage and server applications run faster
  • Software Defined Networking (SDN) support
  • Runs MLNX-OS, with support for alternative operating systems via ONIE

  • Wire Speed Switching
    • 4Tb/s
    • 5.95B packets-per-second
  • High Density
    • 8 40/56/100GbE ports in 1RU
    • Up to 48 10/25GbE ports
  • Lowest Latency
    • 300nsec port-to-port
    • Flat latency across L2 and L3 forwarding
    • Cut-through latency between ports running at different speeds
  • Lowest Power
    • under TBD watts per port

SN2100 – 16-port Non-blocking 100GbE Open Ethernet Switch System

The SN2100 switch provides a high-density, side-by-side 100GbE switching solution which scales up to 64 25GbE ports in 1RU for the growing demands of today’s database, storage, and data center environments.

The SN2100 switch is an ideal spine and top of rack (ToR) solution, allowing maximum flexibility, with port speeds spanning from 10Gb/s to 100 Gb/s per port and port density that enables full rack connectivity to any server at any speed. The uplink ports allow a variety of blocking ratios that suit any application requirement.

Powered by the Spectrum ASIC and packed with 16 ports running at 100GbE, the SN2100 carries a whopping switching capacity of 3.2Tb/s with a landmark 4.8Bpps processing capacity in a compact 1RU form factor.

icon_pdf Product Brief

  • Zero Packet Loss
  • True cut-through latency
  • Lowest Power
  • Easy Scale from one to thousands of nodes and switches
  • Arranged and Organized Data Center
    • Supports speeds of 10/25/40/50/56/100GbE
    • Easy deployment
    • Easy maintenance
  • Unprecedented Performance
    • Line rate performance on all ports at all packet sizes
    • Storage and server applications run faster
  • Software Defined Networking (SDN) support
  • Side-by-side deployment in a standard rack
  • Runs MLNX-OS, with support for alternative operating systems via ONIE

  • Wire Speed Switching / Routing
    • 3.2Tb/s
    • 4.8B packets-per-second
  • High Density
    • 32 40/56/100GbE ports in 1RU
    • Up to 128 10/25GbE ports, up to 32 50GbE ports in 1RU
  • Lowest Latency
    • 300nsec for 100GbE port-to-port
    • Flat latency across L2 and L3 forwarding
    • Cut-through latency between ports running at different speeds
  • Lowest Power
    • under 7.5 watts per port
  • VM running user applications
  • Integral Layer 2 and Layer 3 support
    • IPv4 and IPv6
  • Mellanox NEO Cloud Networking Orchestration and Management
    • Manage from 1 to 1,000s of nodes and switches
    • Centralized configuration and management of the data center

SX1710 – 36-port Non-blocking 40/56GbE Open Ethernet Spine Switch System

With industry-leading density, power efficiency and low latency, the SX1710 is the first non-blocking SDN spine switch, providing an unmatched performance advantage while lowering operating expenses. The SX1710 switch system enables data center applications at the highest performance for the best return on investment.

The SX1710 SDN switch is the optimal spine switch with 36 ports of 40/56GbE, with split capability up to 64 ports of 10GbE. It provides non-blocking throughput between the rack and aggregation layer. Based on Mellanox’s SwitchX®-2 silicon and advanced hardware design, this switch packs 36 QSFP interfaces in an ultra-dense 1U form factor. The SX1710 features industry-leading latency of 230ns and power efficiency while providing optimal performance.

The SX1710 switch has a rich set of networking and application performance features that excel and enable Software Defined Networking in any data center, making this switch the perfect solution for your network, whether it is enterprise data center, financial services, Web 2.0, high performance computing or cloud computing applications.

icon_pdf Product Brief

  • Zero Packet Loss
  • True cut-through latency
  • Lowest Power
  • Easy Scale from one to thousands of nodes and switches
  • Arranged and Organized Data Center
    • Supports speeds of 10/40/56GbE
    • Easy deployment
    • Easy maintenance
  • Unprecedented Performance
    • Line rate performance on all ports at all packet sizes
    • Storage and server applications run faster
  • Software Defined Networking (SDN) support
  • Runs MLNX-OS, with support for alternative operating systems via ONIE
  • Virtual Protocol Interconnect (VPI) upgradable

  • Wire Speed Switching / Routing
    • 2.016Tb/s
    • 3B packets-per-second
  • High Density
    • 36 40/56GbE ports
    • Up to 64 10GbE ports
  • Lowest Latency
    • 230nsec for 40/56GbE
    • 250nsec for 10GbE
    • Cut-through latency between ports running at different speeds
  • Lowest Power
    • under 2.5 watts per port
  • VM running user applications
  • Integral Layer 2 and Layer 3 support
    • IPv4 and IPv6
  • Mellanox NEO Cloud Networking Orchestration and Management
    • Manage from 1 to 1,000s of nodes and switches
    • Centralized configuration and management of the data center

SX1410 – 48-port 10GbE + 12-port 40/56GbE SDN Switch System

With industry-leading density, power efficiency and low latency, the SX1410 is the first non-blocking top-of-rack SDN switch, providing an unmatched performance advantage while lowering capital and operational expenditures.

The SX1410 is the optimal top-of-rack switch with 48 ports of 10GbE and 12 uplink ports of 40/56GbE for non-blocking throughput between the rack and aggregation layer. Based on Mellanox’s SwitchX®-2 silicon and advanced hardware design, this switch packs 48 SFP+ and 12 QSFP interfaces in an ultra-dense 1U form factor. The SX1410 features industry-leading latency of 250ns and power efficiency while providing optimal performance for enterprise data center, financial services, Web 2.0, high performance computing and cloud computing applications.

icon_pdf Product Brief

  • Optimal ToR design
  • 48×10GbE host ports and 12×40/56GbE uplinks
  • Software Defined Networking support
  • Leading performance and scalability
  • Low latency (270ns)
  • Energy efficient
  • Virtual Protocol Interconnect (VPI)
  • Built-in L3 features
  • IPv6 Ready
  • IPv6 IPsec

  • High Density
    • Non-Blocking ToR for 48 10GbE ports
    • Up to 64 10GbE ports
  • Lowest Latency
    • 220nsec for 40GbE
    • 270nsec for 10GbE
  • Lowest Power
  • Control Plane resiliency
    • Quad core x86 CPU
    • 16GB SSD
    • 4GB DIMM
  • VM running user applications

SX1400 – 48-port 10GbE + 12-port 40/56GbE Non-blocking Open Ethernet ToR Switch System

With industry-leading density, power efficiency and low latency, the SX1400 is the first non-blocking top-of-rack SDN switch, providing an unmatched performance advantage while lowering operating expenses.

The SX1400 is the optimal top-of-rack switch with 48 ports of 10GbE and 12 uplink ports of 40/56GbE, for non-blocking throughput between the rack and aggregation layer. Based on Mellanox’s SwitchX®-2 silicon and advanced hardware design, this switch packs 48 SFP+ and 12 QSFP interfaces in an ultra-dense 1U form factor.

The SX1400 features industry leading latency of 250ns and power efficiency while providing optimal performance for enterprise data center, financial services, Web 2.0, high performance computing and cloud computing applications.

The SX1400 switch has a rich set of networking and application performance features that excel and enable Software Defined Networking in any data center, making this switch the perfect solution for your network, whether it is enterprise data center, financial services, Web 2.0, high performance computing or cloud computing applications.


icon_pdf Product Brief

  • Zero Packet Loss
  • True cut-through latency
  • Lowest Power
  • Easy Scale from one to thousands of nodes and switches
  • Arranged and Organized Data Center
    • Supports speeds of 10/40/56GbE
    • Easy deployment
    • Easy maintenance
  • Unprecedented Performance
    • Line rate performance on all ports at all packet sizes
    • Storage and server applications run faster
  • Software Defined Networking (SDN) support
  • Top-of-Rack optimization
  • Virtual Protocol Interconnect (VPI) upgradable

  • Wire Speed Switching / Routing
    • 1.15Tb/s
    • 1.71B packets-per-second
  • High Density
    • 12 40/56GbE ports
    • 48 10GbE ports
  • Lowest Latency
    • 230nsec for 40/56GbE
    • 250nsec for 10GbE
    • Cut-through latency between ports running at different speeds
  • Lowest Power
    • under 1.8 watts per port
  • VM running user applications
  • Integral Layer 2 and Layer 3 support
    • IPv4 and IPv6
  • Mellanox NEO Cloud Networking Orchestration and Management
    • Manage from 1 to 1,000s of nodes and switches
    • Centralized configuration and management of the data center

SX1012 – Half-Width 12-port Non-blocking 40/56GbE Open Ethernet Switch System

With industry-leading density, power efficiency and low latency, the SX1012 is the first non-blocking SDN compact spine switch, providing an unmatched performance advantage while lowering operating expenses. With its unique form factor, the SX1012 switch system enables data center applications at the highest performance for the best return on investment.

The SX1012 SDN switch is an optimal spine/top-of-rack switch with 12 ports of 40/56GbE, with split capability up to 48 ports of 10GbE. It provides non-blocking throughput between the rack and aggregation layer. Based on Mellanox’s SwitchX®-2 silicon and advanced hardware design, this switch packs 12 QSFP interfaces in an ultra-dense 1U, half-width form factor. The SX1012 features industry-leading latency of 230ns and power efficiency while providing optimal performance.

The SX1012 switch has a rich set of networking and application performance features that excel and enable Software Defined Networking in any data center, making this switch the perfect solution for your network, whether it is enterprise data center, financial services, Web 2.0, high performance computing or cloud computing applications.


icon_pdf Product Brief

  • Zero Packet Loss
  • True cut-through latency
  • Lowest Power
  • Easy Scale from one to thousands of nodes and switches
  • Arranged and Organized Data Center
    • Supports speeds of 10/40/56GbE
    • Easy deployment
    • Easy maintenance
  • Unprecedented Performance
    • Line rate performance on all ports at all packet sizes
    • Storage and server applications run faster
  • Software Defined Networking (SDN) support
  • Side-by-side deployment in a standard rack
  • Virtual Protocol Interconnect (VPI) upgradable

  • Wire Speed Switching / Routing
    • 672Gb/s
    • 1B packets-per-second
  • High Density
    • 24 40/56GbE ports in 1RU
    • Up to 96 10GbE ports in 1RU
  • Lowest Latency
    • 230nsec for 40/56GbE
    • 250nsec for 10GbE
    • Cut-through latency between ports running at different speeds
  • Lowest Power
    • under 4 watts per port
  • Integral Layer 2 and Layer 3 support
    • IPv4 and IPv6
  • Mellanox NEO Cloud Networking Orchestration and Management
    • Manage from 1 to 1,000s of nodes and switches
    • Centralized configuration and management of the data center

SX1016 – 64-port Non-blocking 10GbE Open Ethernet Switch System

With industry-leading density, power efficiency and low latency, the SX1016 is a non-blocking 10GbE SDN switch providing an unmatched performance advantage while lowering operating expenses.

The SX1016 is the optimal aggregation or top-of-rack switch with 64 ports of 10GbE, for non-blocking throughput between the rack and aggregation layer. Based on Mellanox’s SwitchX®-2 silicon and advanced hardware design, this switch packs 64 SFP+ interfaces in an ultra-dense 1U form factor. The SX1016 features industry-leading latency of 230ns and power efficiency while providing optimal performance.

The SX1016 switch has a rich set of networking and application performance features that excel and enable Software Defined Networking in any data center, making this switch the perfect solution for your network, whether it is enterprise data center, financial services, Web 2.0, high performance computing or cloud computing applications.


icon_pdf Product Brief

  • Zero Packet Loss
  • True cut-through latency
  • Lowest Power
  • Easy Scale from one to thousands of nodes and switches
  • Arranged and Organized Data Center
    • Supports speeds of 10GbE
    • Easy deployment
    • Easy maintenance
  • Unprecedented Performance
    • Line rate performance on all ports at all packet sizes
    • Storage and server applications run faster
  • Software Defined Networking (SDN) support
  • Virtual Protocol Interconnect (VPI) upgradable

  • Wire Speed Switching / Routing
    • 640Gb/s
    • 950M packets-per-second
  • High Density
    • 64 10GbE ports
  • Lowest Latency
    • 230nsec for 40/56GbE
    • 250nsec for 10GbE
    • Cut-through latency between ports running at different speeds
  • Lowest Power
    • under 1 watt per port
  • Integral Layer 2 and Layer 3 support
    • IPv4 and IPv6
  • Mellanox NEO Cloud Networking Orchestration and Management
    • Manage from 1 to 1,000s of nodes and switches
    • Centralized configuration and management of the data center

SX1024 / SX1024(52) – 48-port 10GbE + 12-port 40/56GbE Non-blocking Open Ethernet ToR Switch System

With industry-leading density, power efficiency and low latency, the SX1024 is the first non-blocking top-of-rack SDN switch, providing an unmatched performance advantage while lowering operating expenses.

The SX1024 is the optimal top-of-rack switch with 48 ports of 10GbE and 12 uplink ports of 40/56GbE, for non-blocking throughput between the rack and aggregation layer. Based on Mellanox’s SwitchX®-2 silicon and advanced hardware design, this switch packs 48 SFP+ and 12 QSFP interfaces in an ultra-dense 1U form factor. The SX1024 features industry-leading latency of 230ns and power efficiency while providing optimal performance.

The SX1024 switch has a rich set of networking and application performance features that excel and enable Software Defined Networking in any data center, making this switch the perfect solution for your network, whether it is enterprise data center, financial services, Web 2.0, high performance computing or cloud computing applications.


icon_pdf Product Brief

  • Zero Packet Loss
  • True cut-through latency
  • Lowest Power
  • Easy Scale from one to thousands of nodes and switches
  • Arranged and Organized Data Center
    • Supports speeds of 10/40/56GbE
    • Easy deployment
    • Easy maintenance
  • Unprecedented Performance
    • Line rate performance on all ports at all packet sizes
    • Storage and server applications run faster
  • Software Defined Networking (SDN) support
  • Top-of-Rack optimization
  • Virtual Protocol Interconnect (VPI) upgradable

  • Wire Speed Switching / Routing
    • 1.15Tb/s
    • 1.71B packets-per-second
  • High Density
    • 12 40/56GbE ports
    • 48 10GbE ports
  • Lowest Latency
    • 230nsec for 40/56GbE
    • 250nsec for 10GbE
    • Cut-through latency between ports running at different speeds
  • Lowest Power
    • under 1.5 watts per port
  • Integral Layer 2 and Layer 3 support
    • IPv4 and IPv6
  • Mellanox NEO Cloud Networking Orchestration and Management
    • Manage from 1 to 1,000s of nodes and switches
    • Centralized configuration and management of the data center

SX1036 – 36-port Non-blocking 40/56GbE Open Ethernet Spine Switch System

With industry-leading density, power efficiency and low latency, the SX1036 is the first non-blocking SDN spine switch, providing an unmatched performance advantage while lowering operating expenses. The SX1036 switch system enables data center applications at the highest performance for the best return on investment.

The SX1036 SDN switch is the optimal spine switch with 36 ports of 40/56GbE, with split capability up to 64 ports of 10GbE. It provides non-blocking throughput between the rack and aggregation layer. Based on Mellanox’s SwitchX®-2 silicon and advanced hardware design, this switch packs 36 QSFP interfaces in an ultra-dense 1U form factor. The SX1036 features industry-leading latency of 230ns and power efficiency while providing optimal performance.

The SX1036 switch has a rich set of networking and application performance features that excel and enable Software Defined Networking in any data center, making this switch the perfect solution for your network, whether it is enterprise data center, financial services, Web 2.0, high performance computing or cloud computing applications.


icon_pdf Product Brief

  • Zero Packet Loss
  • True cut-through latency
  • Lowest Power
  • Easy Scale from one to thousands of nodes and switches
  • Arranged and Organized Data Center
    • Supports speeds of 10/40/56GbE
    • Easy deployment
    • Easy maintenance
  • Unprecedented Performance
    • Line rate performance on all ports at all packet sizes
    • Storage and server applications run faster
  • Software Defined Networking (SDN) support
  • Virtual Protocol Interconnect (VPI) upgradable

  • Wire Speed Switching / Routing
    • 2.016Tb/s
    • 3B packets-per-second
  • High Density
    • 36 40/56GbE ports
    • Up to 64 10GbE ports
  • Lowest Latency
    • 230nsec for 40/56GbE
    • 250nsec for 10GbE
    • Cut-through latency between ports running at different speeds
  • Lowest Power
    • under 2.3 watts per port
  • Integral Layer 2 and Layer 3 support
    • IPv4 and IPv6
  • Mellanox NEO Cloud Networking Orchestration and Management
    • Manage from 1 to 1,000s of nodes and switches
    • Centralized configuration and management of the data center

Gateway Systems

SX6036G – 36-port Non-blocking Managed 56Gb/s InfiniBand to 40GbE Ethernet Gateway

The SX6036G is a high-performance, low-latency 56Gb/s FDR InfiniBand to 40Gb/s Ethernet gateway built with Mellanox’s 6th generation SwitchX®-2 InfiniBand switch device.

Virtual Protocol Interconnect (VPI), supporting both InfiniBand and Ethernet connectivity, provides the highest-performing and most flexible interconnect solution for PCI Express Gen3 servers. VPI simplifies system development by serving multiple fabrics with one hardware design, and simplifies today’s networks by enabling one platform to run both InfiniBand and Ethernet subnets on the same chassis.

With its high bandwidth, low latency and reduced overhead, InfiniBand is the ideal choice for speeding application performance while simultaneously consolidating network and I/O infrastructure. Combining InfiniBand and Ethernet into a single solution provides ideal rack backbone for next generation data centers.

Mellanox’s gateway system includes both InfiniBand and Ethernet switches on a single platform, eliminating the need for additional switches to connect to the gateway.

Using Mellanox’s gateway, a user can also benefit from a full end-to-end solution combined with Mellanox InfiniBand FDR 56Gb/s SX6000 switch series and Ethernet 40Gb/s SX1000 switch series.


icon_pdf Product Brief

  • Industry-leading gateway platform in performance, power, and density
  • High-performance connectivity to Ethernet-based services and resources
  • Designed for energy and cost savings
  • Quick and easy setup and management
  • Fabric Management for cluster and converged I/O applications

  • 36 56Gb/s ports in a 1U switch
  • Up to 4Tb/s aggregate switching capacity
  • 400ns latency between InfiniBand and Ethernet
  • Optional redundant power supplies and fan drawers

TX6000 – Long-Haul Solutions to Data Centers

The MetroDX TX6000 series extends Mellanox InfiniBand solutions from a single-location data center network to local and campus data centers and applications at distances of up to 1km.

Mellanox MetroDX enables connecting between data centers deployed across multiple geographically distributed sites, extending InfiniBand RDMA and Ethernet RoCE benefits beyond local data centers and storage clusters.

Mellanox’s MetroDX is the perfect cost-effective, low power, easily managed and scalable solution to enable today’s data centers and storage to run up to 1km over local and distributed fabrics, managed as a single unified network infrastructure.


 
icon_pdf Product Brief

  • Extends InfiniBand networks up to a 1km radius over dark fiber
  • Low cost, low power, long-haul solution over an InfiniBand fabric
  • Simple management
  • RDMA execution over a distant site

  • 16 Long-haul (40Gb/s) ports in a 1U system
  • Up to 640Gb/s long-haul aggregate data
  • 16 Downlink (56Gb/s) VPI ports
  • Compliant with IBTA 1.2.1 and 1.3
  • 1 Virtual Lane for QoS applications
  • Compliant with Mellanox LR4 QSFP+ 40Gb/s transceivers
  • Redundant power supplies and fan drawers

TX6100 – RDMA Long-Haul Solutions for the Campus

The MetroX TX6100 series extends Mellanox switch solutions from a single-location data center network to distances of up to 10km for local, campus and even metro applications.

Mellanox MetroX enables connecting between data centers deployed across multiple geographically distributed sites, extending InfiniBand RDMA and Ethernet RoCE benefits beyond local data centers and storage clusters.

Mellanox’s MetroX is the perfect cost-effective, low power, easily managed and scalable solution to enable today’s data centers and storage to run up to 10km over local and distributed fabrics, managed as a single unified network infrastructure.
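At long-haul distances, fiber propagation delay dominates switch latency. A sketch of the one-way delay for the reach points in the MetroX/MetroDX family (function name illustrative; assumes a typical single-mode fiber group index of about 1.468):

```python
# Sketch: one-way propagation delay over dark fiber.
# Assumption: typical single-mode fiber group index of ~1.468.

C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s
GROUP_INDEX = 1.468       # typical for single-mode fiber (assumed)

def propagation_us(distance_km: float) -> float:
    return distance_km / (C_KM_PER_S / GROUP_INDEX) * 1e6

for km in (1, 10, 40, 80):  # TX6000 / TX6100 / TX6240 / TX6280 reaches
    print(f"{km:>3} km: {propagation_us(km):6.1f} us one-way")
```

Roughly 5 µs per kilometer: an 80km TX6280 link carries close to 400 µs of one-way fiber delay, orders of magnitude above the nanosecond-scale switch latency.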


 
icon_pdf Product Brief

  • Extends RDMA networks up to a 10km radius over dark fiber
  • Low cost, low power, long-haul solution over an InfiniBand or Ethernet fabric
  • Simple management
  • RDMA execution over a distant site

  • 6 Long-haul (40Gb/s) ports in a 1U system
  • Up to 240Gb/s long-haul aggregate data
  • 6 Downlink (56Gb/s) VPI ports
  • Compliant with IBTA 1.2.1 and 1.3
  • 1 Virtual Lane for QoS applications
  • Compliant with Mellanox LR4 QSFP+ 40Gb/s transceivers
  • Redundant power supplies and fan drawers

TX6240 – RDMA Long-Haul Solutions for the Metro

The MetroX™ TX6240 series extends Mellanox’s InfiniBand and Ethernet RDMA solutions to metro applications. The TX6240 supports 2 long-haul ports running at 40Gb/s to a distance of up to 40km.

While Mellanox products have been traditionally deployed for their high-performance interconnect benefits within the data center, Mellanox’s MetroX solutions, implementing long-haul RDMA, enable connections between data centers deployed across multiple geographically distributed sites, extending the same world-leading interconnect benefits of Mellanox switches beyond local data centers and storage clusters.

Mellanox’s MetroX is the perfect cost-effective, low power, easily managed and scalable solution that enables today’s data centers and storage to run up to 40km over local and distributed fabrics, managed as a single unified network infrastructure.


icon_pdf Product Brief

  • Extends RDMA networks up to a 40km radius over dark fiber
  • Low cost, low power, long-haul solution over InfiniBand and Ethernet fabrics
  • Simple management

  • 2 Long-haul (40Gb/s) ports in a 2U system
  • Up to 80Gb/s long-haul aggregate data
  • 2 Downlink (56Gb/s) VPI ports
  • Compliant with IBTA 1.2.1 and 1.3
  • Supports full C-Band Tunable DWDM Line sides (SFP+)
  • Optional integrated EDFAs
  • Redundant power supplies and fan drawers

TX6280 – RDMA Long-Haul Solutions for the Metro

The MetroX TX6280 series extends Mellanox InfiniBand solutions from a single-location data center network to distances of up to 80km to campus and metro applications.

Mellanox MetroX enables connecting between data centers deployed across multiple geographically distributed sites, extending InfiniBand RDMA and Ethernet RoCE beyond local data centers and storage clusters.

Mellanox’s MetroX is the perfect cost-effective, low power, easily managed and scalable solution to enable today’s data centers and storage to run up to 80km over local and distributed fabrics, managed as a single unified network infrastructure.


icon_pdf Product Brief

  • Extends InfiniBand networks up to an 80km radius over dark fiber
  • Low cost, low power, long-haul solution over InfiniBand and Ethernet fabrics
  • Simple management
  • RDMA execution over a distant site

  • 1 Long-haul (40Gb/s) port in a 2U system
  • Up to 40Gb/s long-haul aggregate data
  • 1 Downlink (56Gb/s) VPI port
  • Compliant with IBTA 1.2.1 and 1.3
  • Supports full C-Band Tunable DWDM Line sides (SFP+)
  • Optional integrated EDFAs
  • Redundant power supplies and fan drawers
