Mellanox – Network Adapter Cards


ConnectX®-4 Single/Dual-Port Adapter supporting 100Gb/s with VPI

ConnectX-4 adapter cards with Virtual Protocol Interconnect (VPI), supporting EDR 100Gb/s InfiniBand and 100Gb/s Ethernet connectivity, provide the highest performance and most flexible solution for high-performance, Web 2.0, Cloud, data analytics, database, and storage platforms.

With the exponential growth of data being shared and stored by applications and social networks, the need for high-speed and high performance compute and storage data centers is skyrocketing.

ConnectX-4 provides exceptional high performance for the most demanding data centers, public and private clouds, Web 2.0 and Big Data applications, as well as High-Performance Computing (HPC) and Storage systems, enabling today’s corporations to meet the demands of the data explosion.


Product Brief

  • Highest performing silicon for applications requiring high bandwidth, low latency and high message rate
  • World-class cluster, network, and storage performance
  • Smart interconnect for x86, Power, ARM, and GPU-based compute and storage platforms
  • Cutting-edge performance in virtualized overlay networks (VXLAN and NVGRE)
  • Efficient I/O consolidation, lowering data center costs and complexity
  • Virtualization acceleration
  • Power efficiency
  • Scalability to tens-of-thousands of nodes

  • EDR 100Gb/s InfiniBand or 100Gb/s Ethernet per port
  • 10/20/25/40/50/56/100Gb/s speeds
  • 150M messages/second
  • Single and dual-port options available
  • Erasure Coding offload
  • T10-DIF Signature Handover
  • Virtual Protocol Interconnect (VPI)
  • Power8 CAPI support
  • CPU offloading of transport operations
  • Application offloading
  • Mellanox PeerDirect™ communication acceleration
  • Hardware offloads for NVGRE and VXLAN encapsulated traffic
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • Ethernet encapsulation (EoIB)
  • RoHS-R6
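
Because VPI lets each port run either InfiniBand or Ethernet, software typically discovers the active link layer at run time. The sketch below is an illustrative example, not taken from the product brief, that uses the standard libibverbs API to list the adapter's ports and report which protocol each is currently running; it assumes a Linux host with the RDMA user-space stack installed.

    /* Illustrative sketch (not vendor code): list the ports of each RDMA-capable
     * adapter with libibverbs and report whether each port is currently running
     * InfiniBand or Ethernet. Build with "cc vpi_ports.c -libverbs". */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices;
        struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
        if (!dev_list)
            return 1;

        for (int i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(dev_list[i]);
            struct ibv_device_attr dev_attr;

            if (!ctx)
                continue;
            if (ibv_query_device(ctx, &dev_attr) == 0) {
                for (int p = 1; p <= dev_attr.phys_port_cnt; p++) {
                    struct ibv_port_attr port_attr;
                    if (ibv_query_port(ctx, p, &port_attr))
                        continue;
                    printf("%s port %d: %s\n",
                           ibv_get_device_name(dev_list[i]), p,
                           port_attr.link_layer == IBV_LINK_LAYER_ETHERNET ?
                               "Ethernet" : "InfiniBand");
                }
            }
            ibv_close_device(ctx);
        }
        ibv_free_device_list(dev_list);
        return 0;
    }

The same program works unchanged whether a port is configured for EDR InfiniBand or for 100Gb/s Ethernet, which is the point of the VPI design.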

Connect-IB® Single/Dual-Port InfiniBand Host Channel Adapter Cards

Connect-IB adapter cards provide the highest-performing and most scalable interconnect solution for server and storage systems. High-Performance Computing, Web 2.0, Cloud, Big Data, Financial Services, Virtualized Data Center and Storage applications will achieve significant performance improvements, resulting in reduced completion time and lower cost per operation.

Connect-IB delivers leading performance with maximum bandwidth, low latency, and computing efficiency for performance-driven server and storage applications. Maximum bandwidth is delivered across PCI Express 3.0 x16 and two ports of FDR InfiniBand, supplying more than 100Gb/s of throughput together with consistent low latency across all CPU cores. Connect-IB also enables PCI Express 2.0 x16 systems to take full advantage of FDR, delivering at least twice the bandwidth of existing PCIe 2.0 solutions.

Connect-IB offloads protocol processing and data movement from the CPU to the interconnect, maximizing CPU efficiency and accelerating parallel and data-intensive application performance. Connect-IB supports new data operations, including non-contiguous memory transfers, which eliminate unnecessary data copy operations and CPU overhead. Additional application acceleration is achieved with a 4X improvement in message rate compared with previous generations of InfiniBand cards.
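
As an illustration of how such non-contiguous transfers look to software, the hedged sketch below posts a single send whose payload is gathered from two separate buffers through the standard libibverbs scatter/gather list. The queue pair and memory registrations (the names qp, hdr_mr and data_mr are hypothetical) are assumed to have been created beforehand.

    /* Hypothetical fragment: one send gathers two non-contiguous buffers, so the
     * adapter assembles the outgoing message without an intermediate CPU copy.
     * The queue pair and memory registrations are created elsewhere.
     * Build with -libverbs. */
    #include <stdio.h>
    #include <stdint.h>
    #include <infiniband/verbs.h>

    int post_gathered_send(struct ibv_qp *qp,
                           void *hdr, uint32_t hdr_len, struct ibv_mr *hdr_mr,
                           void *data, uint32_t data_len, struct ibv_mr *data_mr)
    {
        struct ibv_sge sge[2] = {
            { .addr = (uintptr_t)hdr,  .length = hdr_len,  .lkey = hdr_mr->lkey },
            { .addr = (uintptr_t)data, .length = data_len, .lkey = data_mr->lkey },
        };
        struct ibv_send_wr wr = {
            .wr_id      = 1,
            .sg_list    = sge,
            .num_sge    = 2,
            .opcode     = IBV_WR_SEND,
            .send_flags = IBV_SEND_SIGNALED,
        };
        struct ibv_send_wr *bad_wr = NULL;
        int rc = ibv_post_send(qp, &wr, &bad_wr);

        if (rc)
            fprintf(stderr, "ibv_post_send failed: %d\n", rc);
        return rc;
    }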

Product Brief

  • World-class cluster, network, and storage performance
  • Guaranteed bandwidth and low-latency services
  • I/O consolidation
  • Virtualization acceleration
  • Power efficient
  • Scales to tens-of-thousands of nodes

  • Greater than 100Gb/s over InfiniBand
  • Greater than 130M messages/sec
  • 1μs MPI ping latency (see the measurement sketch below)
  • PCI Express 3.0 x16
  • CPU offload of transport operations
  • Application offload
  • GPU communication acceleration
  • End-to-end internal data protection
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • RoHS-R6
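
The MPI ping latency cited above is conventionally measured with a ping-pong microbenchmark. The sketch below is a minimal, generic version of such a test, not Mellanox's own benchmark; absolute results depend on the fabric, the MPI library, and process placement.

    /* Minimal MPI ping-pong latency sketch. Run with two ranks, e.g.
     * "mpirun -np 2 ./pingpong". Reports the average one-way time for
     * zero-byte messages; results depend on fabric and MPI stack. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, iters = 10000;
        char byte = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(&byte, 0, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 0, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&byte, 0, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(&byte, 0, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("avg one-way latency: %.2f us\n",
                   (t1 - t0) / (2.0 * iters) * 1e6);

        MPI_Finalize();
        return 0;
    }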

Programmable ConnectX®-3 Pro Dual-Port Adapter Card with VPI

Mellanox programmable adapters provide users with the capability to program an FPGA attached to the ConnectX-3 Pro network adapter, taking advantage of ConnectX-3 Pro enhanced application acceleration and its high-speed network. Programmable ConnectX-3 Pro VPI adapter cards support InfiniBand and Ethernet connectivity with hardware offload engines. The attached FPGA and memory are accessible through the PCI Express Gen3 interface or the network interface for full flexibility. Mellanox programmable adapters can deliver a competitive advantage to companies using public and private clouds, telecom and enterprise data centers, high-performance computing, and more.

Modern data centers, public and private clouds, Web 2.0 infrastructures, telecommunications, and high-performance computing require the highest performance and maximum flexibility in order to reduce completion time and lower the cost per operation. The Programmable ConnectX-3 Pro VPI Adapter simplifies system development by serving multiple fabrics with one hardware design.

Product Brief

  • 40Gb/s FPGA as bump-on-the-wire for DoS attack prevention
  • FPGA on PCIe Gen3 x8 bus (up to 8GT/s) as application acceleration engine with real-time processing power
  • User application customization for encryption/decryption, deduplication offload and data compression acceleration
  • Flexible steering on PCIe achieving maximum performance
  • One design for InfiniBand, Ethernet (10/40/56GbE), or Data Center Bridging fabrics
  • World-class cluster, network, and storage performance
  • Cutting edge performance in virtualized overlay networks (VXLAN and NVGRE)
  • Guaranteed bandwidth and low-latency services
  • I/O consolidation
  • Virtualization acceleration
  • Power efficient
  • Scales to tens-of-thousands of nodes

ConnectX®-3 Pro Single/Dual-Port Adapter with Virtual Protocol Interconnect®

ConnectX-3 Pro adapter cards with Virtual Protocol Interconnect (VPI), supporting InfiniBand and Ethernet connectivity with hardware offload engines to Overlay Networks (“Tunneling”), provide the highest performing and most flexible interconnect solution for PCI Express Gen3 servers used in public and private clouds, enterprise data centers, and high performance computing.

Public and private cloud clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications will achieve significant performance improvements resulting in reduced completion time and lower cost per operation. ConnectX-3 Pro with VPI also simplifies system development by serving multiple fabrics with one hardware design.

The Open Compute Project (OCP) mission is to develop and specify the most cost-efficient, energy-efficient and scalable enterprise and Web 2.0 data centers. Mellanox ConnectX-3 Pro VPI adapter card delivers leading InfiniBand and Ethernet connectivity for performance-driven server and storage applications in Web 2.0, Enterprise Data Centers and Cloud environments. The OCP Mezzanine adapter form factor is designed to mate into OCP servers.

ConnectX-3 Pro VPI Product Brief

ConnectX-3 Pro VPI OCP Product Brief

  • One design for InfiniBand, Ethernet (10GbE, 40GbE), or Data Center Bridging fabrics
  • World-class cluster, network, and storage performance
  • Cutting edge performance in virtualized overlay networks (VXLAN and NVGRE)
  • Guaranteed bandwidth and low-latency services
  • I/O consolidation
  • Virtualization acceleration
  • Power efficient
  • Scales to tens-of-thousands of nodes

  • Virtual Protocol Interconnect
  • 1μs MPI ping latency
  • Up to 56Gb/s InfiniBand or 40 Gigabit Ethernet per port
  • Single- and Dual-Port options available
  • PCI Express 3.0 (up to 8GT/s)
  • CPU offload of transport operations
  • Application offload
  • GPU communication acceleration
  • Precision Clock Synchronization (see the PTP sketch below)
  • HW Offloads for NVGRE and VXLAN encapsulated traffic
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • Ethernet encapsulation (EoIB)
  • RoHS-R6
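
The Precision Clock Synchronization feature listed above refers to IEEE 1588 hardware timestamping, which Linux exposes as a PTP hardware clock (PHC). The sketch below is an illustrative example of reading that clock through the generic /dev/ptpN interface; the device index is an assumption (the actual one can be found with "ethtool -T <interface>"), and in practice a daemon such as ptp4l performs the actual synchronization.

    /* Hypothetical sketch: read a NIC's PTP hardware clock through the Linux
     * PHC interface. The /dev/ptp0 node is a placeholder; the real index
     * depends on the system. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <time.h>
    #include <unistd.h>

    /* Map an open /dev/ptpN file descriptor to a dynamic POSIX clock id. */
    #define FD_TO_CLOCKID(fd) ((clockid_t)((((unsigned int)~(fd)) << 3) | 3))

    int main(void)
    {
        int fd = open("/dev/ptp0", O_RDONLY);
        if (fd < 0) {
            perror("open /dev/ptp0");
            return 1;
        }

        struct timespec ts;
        if (clock_gettime(FD_TO_CLOCKID(fd), &ts) == 0)
            printf("PHC time: %lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
        else
            perror("clock_gettime");

        close(fd);
        return 0;
    }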

ConnectX®-3 Single/Dual-Port Adapter with VPI

ConnectX-3 adapter cards with Virtual Protocol Interconnect (VPI), supporting InfiniBand and Ethernet connectivity, provide the highest performing and most flexible interconnect solution for PCI Express Gen3 servers used in Enterprise Data Centers, High-Performance Computing, and Embedded environments. Clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications will achieve significant performance improvements, resulting in reduced completion time and lower cost per operation. ConnectX-3 with VPI also simplifies system development by serving multiple fabrics with one hardware design.

Product Brief

  • One adapter for FDR/QDR InfiniBand, 10/40 Gigabit Ethernet, or Data Center Bridging fabrics
  • World-class cluster, network, and storage performance
  • Guaranteed bandwidth and low-latency services
  • I/O consolidation
  • Virtualization acceleration
  • Power efficient
  • Scales to tens-of-thousands of nodes

  • Virtual Protocol Interconnect
  • 1μs MPI ping latency
  • Up to 56Gb/s InfiniBand or 40 Gigabit Ethernet per port
  • Single- and Dual-Port options available
  • PCI Express 3.0 (up to 8GT/s)
  • CPU offload of transport operations
  • Application offload
  • GPU communication acceleration
  • Precision Clock Synchronization
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • Fibre Channel encapsulation (FCoIB or FCoE)
  • Ethernet encapsulation (EoIB)
  • RoHS-R6

ConnectX®-4 EN Adapter Card Single/Dual-Port 100 Gigabit Ethernet Adapter

The ConnectX-4 EN Network Controller with 100Gb/s Ethernet connectivity provides the highest performance and most flexible solution for high-performance, Web 2.0, Cloud, data analytics, database, and storage platforms.

With the exponential growth of data being shared and stored by applications and social networks, the need for high-speed and high performance compute and storage data centers is skyrocketing.

ConnectX-4 EN provides exceptional high performance for the most demanding data centers, public and private clouds, Web 2.0 and Big Data applications, and Storage systems, enabling today’s corporations to meet the demands of the data explosion.

The Open Compute Project (OCP) mission is to develop and specify the most cost-efficient, energy-efficient and scalable enterprise and Web 2.0 data centers. Mellanox ConnectX-4 EN OCP adapter card delivers leading Ethernet connectivity for performance-driven server and storage applications in Web 2.0, Enterprise Data Centers and Cloud environments. The OCP Mezzanine adapter form factor is designed to mate into OCP servers.



ConnectX-4 Product Brief

ConnectX-4 for OCP Product Brief

  • Highest performing silicon for applications requiring high bandwidth, low latency and high message rate
  • World-class cluster, network, and storage performance
  • Smart interconnect for x86, Power, ARM, and GPU-based compute and storage platforms
  • Cutting-edge performance in virtualized overlay networks (VXLAN and NVGRE)
  • Efficient I/O consolidation, lowering data center costs and complexity
  • Virtualization acceleration
  • Power efficiency
  • Scalability to tens-of-thousands of nodes

  • 100Gb/s Ethernet per port
  • 10/20/25/40/50/56/100Gb/s speeds
  • Single and dual-port options available
  • Erasure Coding offload
  • T10-DIF Signature Handover
  • Power8 CAPI support
  • CPU offloading of transport operations
  • Application offloading
  • Mellanox PeerDirect™ communication acceleration
  • Hardware offloads for NVGRE and VXLAN encapsulated traffic
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • Ethernet encapsulation (EoIB)
  • RoHS-R6

ConnectX®-4 Lx EN Card

10/25/40/50 Gigabit Ethernet Adapter Cards supporting Multi-Host™ Technology, RDMA, Overlay Networks Encapsulation/Decapsulation and more

The ConnectX-4 Lx EN Network Controller with 10/25/40/50Gb/s Ethernet connectivity addresses virtualized infrastructure challenges, delivering best-in-class performance to demanding markets and applications. It provides true hardware-based I/O isolation with unmatched scalability and efficiency, making it the most cost-effective and flexible solution for Web 2.0, Cloud, data analytics, database, and storage platforms.

With the exponential increase in usage of data and the creation of new applications, the demand for the highest throughput, lowest latency, virtualization and sophisticated data acceleration engines continues to rise. ConnectX-4 Lx EN enables data centers to leverage the world’s leading interconnect adapter for increasing their operational efficiency, improving servers’ utilization, maximizing applications productivity, while reducing total cost of ownership (TCO).

ConnectX-4 Lx EN provides an unmatched combination of 10, 25, 40, and 50GbE bandwidth, sub-microsecond latency, and a 75 million packets per second message rate. It includes native hardware support for RDMA over Converged Ethernet, Ethernet stateless offload engines, Overlay Networks, and GPUDirect® Technology.
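
Because RoCE carries RDMA traffic over ordinary Ethernet/IP addressing, applications typically use the librdmacm connection manager rather than raw InfiniBand addressing. The sketch below is a hedged illustration of the first steps a RoCE client would take: resolving a destination IP address to an RDMA device and route. The address 192.0.2.10 and port 7471 are placeholders, not values from the product brief; build with -lrdmacm -libverbs.

    /* Hypothetical sketch: resolve an IP address and route with librdmacm,
     * the first step a RoCE client takes before creating a queue pair. */
    #include <stdio.h>
    #include <netdb.h>
    #include <rdma/rdma_cma.h>

    int main(void)
    {
        struct rdma_event_channel *ec = rdma_create_event_channel();
        struct rdma_cm_id *id = NULL;
        struct rdma_cm_event *ev = NULL;
        struct addrinfo *res = NULL;

        if (!ec || rdma_create_id(ec, &id, NULL, RDMA_PS_TCP))
            return 1;
        if (getaddrinfo("192.0.2.10", "7471", NULL, &res))
            return 1;

        /* Resolve the destination IP to a local RDMA device... */
        if (rdma_resolve_addr(id, NULL, res->ai_addr, 2000) == 0 &&
            rdma_get_cm_event(ec, &ev) == 0 &&
            ev->event == RDMA_CM_EVENT_ADDR_RESOLVED) {
            rdma_ack_cm_event(ev);
            /* ...then resolve the route before connecting. */
            if (rdma_resolve_route(id, 2000) == 0 &&
                rdma_get_cm_event(ec, &ev) == 0 &&
                ev->event == RDMA_CM_EVENT_ROUTE_RESOLVED) {
                rdma_ack_cm_event(ev);
                printf("route resolved on device %s\n",
                       ibv_get_device_name(id->verbs->device));
            }
        }

        freeaddrinfo(res);
        rdma_destroy_id(id);
        rdma_destroy_event_channel(ec);
        return 0;
    }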


Product Brief

  • Highest performing boards for applications requiring high bandwidth, low latency and high message rate
  • Industry leading throughput and latency for Web 2.0, Cloud and Big Data applications
  • Smart interconnect for x86, Power, ARM, and GPU-based compute and storage platforms
  • Cutting-edge performance in virtualized overlay networks
  • Efficient I/O consolidation, lowering data center costs and complexity
  • Virtualization acceleration
  • Power efficiency

  • 10/25/40/50Gb/s speeds
  • Single and dual-port options available
  • Erasure Coding offload
  • Virtualization
  • Low latency RDMA over Converged Ethernet
  • CPU offloading of transport operations
  • Application offloading
  • Mellanox PeerDirect™ communication acceleration
  • Hardware offloads for NVGRE, VXLAN and GENEVE encapsulated traffic
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • RoHS-R6

ConnectX®-4 Lx EN for Open Compute Project (OCP)

Single-Port 10/25/40/50 Gigabit Ethernet Adapters Supporting Multi-Host™ Technology, RDMA, Overlay Networks and More

ConnectX-4 Lx EN Network Controller with 10/25/40/50 Gb/s Ethernet interface delivers high-bandwidth, low latency and industry-leading Ethernet connectivity for Open Compute Project (OCP) server and storage applications in Web 2.0, Enterprise Data Centers and Cloud infrastructure.

With ConnectX-4 Lx EN, server and storage applications will achieve significant throughput and latency improvements resulting in faster access, real-time response and more virtual machines hosted per server. ConnectX-4 Lx EN for Open Compute Project (OCP) specification 2.0 improves network performance by increasing available bandwidth while decreasing the associated transport load on the CPU especially in virtualized server environments.

Moreover, ConnectX-4 Lx EN introduces Multi-Host technology, which enables an innovative rack design that achieves maximum CAPEX and OPEX savings without compromising network performance.


Product Brief

  • 10/25/40/50GbE connectivity for servers and storage
  • Open Compute Project form factor
  • Industry-leading throughput and low latency for Web access and storage performance
  • Maximizing data centers’ return on investment (ROI) with Multi-Host technology
  • Smart interconnect for x86, Power, ARM, and GPU-based compute and storage platform
  • Cutting-edge performance in virtualized Overlay Networks (VXLAN and NVGRE)
  • Efficient I/O consolidation, lowering data center costs and complexity
  • Virtualization acceleration
  • Power efficiency

  • Multi-Host technology
  • Connectivity to up to 4 independent hosts
  • OCP Specification 2.0 and 0.5, as applicable
  • 10/25/40/50Gb/s speeds
  • Virtualization
  • Low latency RDMA over Converged Ethernet
  • Hardware offloads for NVGRE and VXLAN encapsulated traffic
  • CPU offloading of transport operations
  • Application offloading
  • Mellanox PeerDirect™ communication acceleration
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • Erasure Coding offload
  • RoHS-R6

Programmable ConnectX®-3 Pro Adapter Card

Dual-Port Programmable Adapter Evaluation Board

Mellanox programmable adapters provide users with the capability to program an FPGA attached to the ConnectX-3 Pro network adapter device, taking advantage of ConnectX-3 Pro enhanced application acceleration and high speed network.

Modern data centers, public and private clouds, Web 2.0 infrastructures, telecommunications, and high-performance computing demand high performance and maximum flexibility in order to reduce completion time and lower the cost per operation. The Programmable ConnectX-3 Pro Adapter Card simplifies system development by serving multiple fabrics with one hardware design.


Product Brief

  • 10/40Gb/s FPGA as bump-on-the-wire
  • FPGA on PCIe Gen3 x8 bus (up to 8GT/s) for high speed FPGA configuration
  • Enabler for user application of per-packet encryption/decryption
  • Enabler for CPU offload applications customization based on direct PCIe access to the FPGA
  • One design for Ethernet (10/40GbE), or Data Center Bridging fabrics
  • World-class cluster, network, and storage performance
  • Cutting edge performance in virtualized overlay networks (VXLAN and NVGRE)
  • I/O consolidation
  • Virtualization acceleration
  • Scales to tens-of-thousands of nodes

ConnectX®-3 Pro EN Single/Dual-Port 10/40/56GbE Adapters w/ PCI Express 3.0

ConnectX-3 Pro EN 10/40/56GbE adapter cards with hardware offload engines for Overlay Networks (“Tunneling”) provide the highest performing and most flexible interconnect solution for PCI Express Gen3 servers used in public and private clouds, enterprise data centers, and high performance computing.

Using these cards, public and private cloud clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications will achieve significant performance improvements, resulting in reduced completion time and lower cost per operation.


ConnectX-3 Pro Product Brief

ConnectX-3 Pro 10GbE for OCP Product Brief

ConnectX-3 Pro 40GbE for OCP Product Brief

  • 10/40/56Gb/s connectivity for servers and storage
  • World-class cluster, network, and storage performance
  • Cutting edge performance in virtualized overlay networks (VXLAN and NVGRE)
  • Guaranteed bandwidth and low-latency services
  • I/O consolidation
  • Virtualization acceleration
  • Power efficient
  • Scales to tens-of-thousands of nodes

  • 1μs MPI ping latency
  • Up to 40/56GbE per port
  • Single- and Dual-Port options available
  • PCI Express 3.0 (up to 8GT/s)
  • CPU offload of transport operations
  • Application offload
  • Precision Clock Synchronization
  • HW Offloads for NVGRE and VXLAN encapsulated traffic
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • RoHS-R6

ConnectX®-3 EN Single/Dual-Port 10/40/56GbE Adapters w/ PCI Express 3.0

Mellanox ConnectX-3 EN 10/40/56GbE Network Interface Cards (NICs) with PCI Express 3.0 deliver high bandwidth and industry-leading Ethernet connectivity for performance-driven server and storage applications in Enterprise Data Centers, High-Performance Computing, and Embedded environments. Clustered databases, web infrastructure, and high-frequency trading are just a few applications that will achieve significant throughput and latency improvements, resulting in faster access, real-time response, and more users per server. ConnectX-3 EN improves network performance by increasing available bandwidth while decreasing the associated transport load on the CPU, especially in virtualized server environments.


The Open Compute Project (OCP) mission is to develop and specify the most cost-efficient, energy-efficient and scalable enterprise and Web 2.0 data centers. Mellanox ConnectX-3 EN 10GbE Open Compute Mezzanine adapter card delivers leading Ethernet connectivity for performance-driven server and storage applications in Web 2.0, Enterprise Data Centers and Cloud environments. The OCP Mezzanine adapter form factor is designed to mate into OCP servers.


ConnectX-3 EN Product Brief

  • 10/40/56Gb/s connectivity for servers and storage
  • Industry-leading throughput and latency performance
  • I/O consolidation
  • Virtualization acceleration
  • Software compatible with standard TCP/UDP/IP and iSCSI stacks

  • Single or Dual 10/40/56GbE ports
  • PCI Express 3.0 (up to 8GT/s)
  • Low Latency RDMA over Ethernet
  • Data Center Bridging support
  • T11.3 FC-BB-5 FCoE
  • TCP/IP stateless offload in hardware (see the sketch below)
  • Traffic steering across multiple cores
  • Hardware-based I/O virtualization
  • Intelligent interrupt coalescence
  • Advanced Quality of Service
  • RoHS-R6
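
The stateless offloads listed above (checksum, segmentation, and similar) are exposed to Linux through the standard ethtool interface. The sketch below is an illustrative example, not vendor code, that queries whether receive checksum offload is currently enabled using the legacy SIOCETHTOOL ioctl; the interface name eth0 is a placeholder.

    /* Hypothetical sketch: query whether RX checksum offload (one of the NIC's
     * stateless offloads) is enabled, via the legacy SIOCETHTOOL ioctl. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct ethtool_value ev = { .cmd = ETHTOOL_GRXCSUM };
        struct ifreq ifr;
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
        ifr.ifr_data = (char *)&ev;

        if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
            printf("rx-checksumming: %s\n", ev.data ? "on" : "off");
        else
            perror("SIOCETHTOOL");

        close(fd);
        return 0;
    }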
