Mellanox – Software


Mellanox provides a unique family of application accelerator products that, together with its InfiniBand and 10GigE switches, provide the highest-performing solutions on the market. Mellanox's application accelerator software solutions reduce latency, increase throughput, and offload CPU cycles, enhancing application performance while eliminating the need for large investments in hardware infrastructure.

Mellanox Application Accelerator Product Family

Mellanox has worked on hundreds of software projects with ISVs and other software companies such as NYSE Technologies, 29West, Oracle, Microsoft, Ansys, LSTC, ESI and many others. Building on this extensive software experience, Mellanox is able to provide added value on top of its hardware solutions.

The following software products are designed to accelerate the performance of applications running on scale-out data center fabrics.

  • Unstructured Data Accelerator (UDA)
  • Messaging Accelerator (VMA)
  • Linux SW/Drivers
  • Windows SW/Drivers

Mellanox Advantage

Mellanox Technologies is a leading supplier of end-to-end server and storage connectivity solutions to optimize data center performance and efficiency. Mellanox InfiniBand and Ethernet adapters, switches, and software are powering Fortune 500 data centers and the world's most powerful supercomputers. For the best in server and storage performance and scalability with the lowest TCO, Mellanox interconnect products are the solution.

Unstructured Data Accelerator (UDA)

Enterprise and research data sets are growing steeply in volume, velocity, and variety. Hadoop, one of the dominant Big Data frameworks, helps organizations store and analyze these growing data volumes.
The UDA plug-in software package provides a novel shuffle approach for Hadoop's MapReduce framework. RDMA-based networks, with their low latency and high bandwidth, make the most efficient shuffle provider for MapReduce. Compared to a 1GbE network, benchmark results show nearly double the performance of Hadoop® clusters using UDA with 10GbE networks, and quadruple the performance using FDR InfiniBand. UDA is a free software package, available under the Apache 2.0 License.
UDA is jointly developed by Mellanox and the Parallel Architecture and System Laboratory headed by Dr. Weikuan Yu at Auburn University.

Product Brief

  • Enhances Hadoop performance
  • Provides an efficient shuffle provider for the MapReduce framework
  • More than doubles CPU efficiency
  • Reduces disk operations: writes by 45% and reads by 15%
  • Open source

Messaging Accelerator (VMA)

Dramatically improves the performance of socket-based applications

Mellanox's Messaging Accelerator (VMA) boosts performance for message-based and streaming applications such as those found in financial services market data environments and Web2.0 clusters. The result is latency reduced by as much as 3x and application throughput increased by as much as 200% per server, compared to applications running on standard Ethernet or InfiniBand interconnect networks.

This solution lowers latency and increases transactions per second for a wide array of applications including online Web services, medical imaging, radar, and other data acquisition systems. VMA Open Source can improve the performance of any application that makes heavy use of multicast, UDP unicast, or TCP streaming and requires high packet-per-second rates, low data distribution latency, low CPU utilization, or increased application scalability.
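Because VMA intercepts standard socket calls, acceleration typically requires no source changes: the application is simply run with the VMA library preloaded. The following minimal UDP multicast receiver is an illustrative sketch only; the multicast group, port, binary name, and preload command in the comments are assumptions, not values taken from the VMA documentation.

```c
/* Minimal UDP multicast receiver using plain Berkeley sockets.
 * Illustrative sketch: group 224.1.1.1 and port 12345 are placeholders.
 * With VMA installed it would typically run unmodified, e.g. (assumed invocation):
 *   LD_PRELOAD=libvma.so ./mcast_recv                                       */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(12345);                          /* placeholder port */
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind"); return 1;
    }

    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("224.1.1.1");    /* placeholder group */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) < 0) {
        perror("setsockopt"); return 1;
    }

    char buf[2048];
    for (int i = 0; i < 10; i++) {                         /* receive a few datagrams */
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n < 0) { perror("recv"); break; }
        printf("received %zd bytes\n", n);
    }
    close(fd);
    return 0;
}
```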

License
VMA Open Source is made available under a dual license: GPLv2 and commercial. The software can be downloaded for free from GitHub at https://github.com/Mellanox/libvma/.


  • Drastically improves unicast and multicast application performance without application code changes
  • Increases throughput for small-packet messaging and streaming applications
  • Lowers overall communication latencies
  • Increases packet-per-second (PPS) rates
  • Improves CPU utilization

Management Software

Comprehensive Fabric Management Solution for Clustered Computing, Database, Storage and Cloud Computing

Mellanox’s comprehensive suite of management software provides an innovative application-centric approach to bridge the gap between servers, applications and fabric elements. Mellanox’s management solution allows users to manage small to extremely large fabrics as a set of inter-related business entities and enables fabric monitoring and performance optimization at the application-logical level rather than merely at the individual port or device level.

All Mellanox managed switches include the advanced embedded Mellanox Operating System (MLNX-OS®) or FabricIT management software, providing an embedded Subnet Manager (supporting up to 648 nodes) and chassis management through CLI/WebUI/SNMP and XML (REST) interfaces.

All switches can be further enhanced using Mellanox’s Unified Fabric Manager (UFM®) packages including fabric diagnostics, monitoring, provisioning and advanced features such as Congestion Manager and server virtualization support.

Unified Fabric Manager (UFM®) Software for Data Center Management

Mellanox's Unified Fabric Manager (UFM®) is a powerful platform for managing scale-out computing environments. UFM enables data center operators to monitor, efficiently provision, and operate the modern data center fabric. UFM eliminates the complexity of fabric management, provides deep visibility into traffic, and optimizes fabric performance.

Fabric Visibility & Control

UFM includes an advanced granular monitoring engine that provides real-time access to switch and host health and performance data, enabling:

  • Real-time identification of fabric-related errors and failures
  • Insight into fabric performance and potential bottlenecks
  • Preventive maintenance via granular threshold-based alerts
  • SNMP traps and scriptable actions
  • Correlation of monitored data to the application/service level, enabling quick and effective fabric analysis

Solve Traffic Bottlenecks

Fabric congestion is difficult to detect with traditional management tools, resulting in unnoticed congestion and fabric underutilization. UFM's unique congestion tracking feature quickly identifies traffic bottlenecks and congestion events spreading across the fabric. This feature enables accurate problem identification and quick resolution of performance issues:

  • Quickly identifies traffic issues, topology inefficiencies or non-optimal node placement
  • Allows the administrator to improve fabric topology and configuration
  • Enables increased performance and higher fabric utilization

Ease Fabric Deployment and Operations

UFM's central management console reduces the effort and complexity involved in fabric bring-up and day-to-day maintenance tasks. This significantly reduces downtime and makes UFM the ultimate management tool for the most demanding data center environments.

  • UFM's advanced fabric health diagnostic tools give the user a clear picture of fabric and link health, easing deployment and shortening maintenance windows
  • UFM's asset management capabilities enable effective tracking of fabric devices and ports, from the smallest clusters to the largest clusters with tens of thousands of nodes
  • Group operations such as switch firmware updates are enabled via a single mouse click
  • Failovers are handled seamlessly and are transparent to both the user and the applications running on the fabric

The SDN Approach

While other tools are device-oriented and involve local device logic, UFM uses an SDN architecture together with a service oriented approach to manage the fabric.

  • UFM’s intelligent end-to-end fabric policy engine correlates application defined needs to the underlying physical infrastructure and enables programmable configuration of routing policy, connectivity, and QoS across the fabric
  • UFM uses Mellanox advanced silicon capabilities to effectively control “managed” as well as “externally managed” devices in a central, programmable manner
  • UFM’s monitoring engine enables correlation of the monitored data and fabric events to the logical layer, providing the end-user valuable business-oriented information about the fabric in an easy to consume way

UFM’s SDN Model advantages:

  • Detachment of fabric logic from local device logic, enabling high flexibility in device deployment
  • Quick policy changes and quick remediation
  • Easy integration in cloud and dynamic environments which require service oriented logic
  • High level of SLA tracking and alerting

Integration with Existing Data Center Management Tools

UFM provides an open and extensible object model to describe data center infrastructure and conduct all relevant management actions. UFM’s API enables integration with leading job schedulers, cloud and cluster managers.
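As a rough sketch of what such an integration could look like, the snippet below issues an HTTP GET against a UFM REST endpoint using libcurl. The host name, credentials, and resource path are illustrative assumptions; the actual endpoints and authentication model are defined in the UFM API documentation.

```c
/* Hypothetical sketch: query fabric inventory through a UFM REST endpoint.
 * Host, credentials and path are placeholders, not documented values.
 * Build (assumption): cc ufm_query.c -lcurl -o ufm_query                  */
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    /* Placeholder UFM server and resource endpoint. */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://ufm.example.com/ufmRest/resources/systems");
    curl_easy_setopt(curl, CURLOPT_USERPWD, "admin:password");  /* placeholder */
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_BASIC);

    CURLcode rc = curl_easy_perform(curl);   /* response body goes to stdout */
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```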


Product Brief

  • Eliminates fabric congestion and hot spots
  • Simplifies the management of large or complex environments
  • Automates service provisioning on the fabric layer
  • Seamlessly manages workload migration scenarios
  • Tunes your fabric interconnect for highest performance
  • Provides preventive maintenance and “soft degradation” alerts
  • Quickly troubleshoots any connectivity problem
  • Integrates and streamlines fabric information into your IT systems

Mellanox NEO™

Cloud Networking Orchestration and Management Software

Mellanox NEO™ is a powerful platform for managing scale-out computing networks. Mellanox NEO™ enables data center operators to efficiently provision, monitor and operate the modern data center fabric.

Mellanox NEO™ serves as an interface to the fabric, extending the capabilities of existing tools into monitoring and provisioning the data center network. Mellanox NEO™ exposes an extensive set of REST APIs that allow access to fabric-related data and provisioning activities.

Mellanox NEO™ eliminates the complexity of fabric management. It automates the configuration of devices, provides deep visibility into traffic and health, and provides early detection of errors and failures.

 

Product Brief

  • Reduces complexity of fabric management
  • Provides in-depth visibility into traffic and health information
  • Network API supports integration, automation and SDN programmable fabrics
  • Provides historical health and performance graphs
  • Generates preventive maintenance and “soft degradation” alerts
  • Quickly troubleshoots topology and connectivity issues
  • Integrates and streamlines fabric information into your IT systems

FabricIT™

Integrated switch management solution

FabricIT™ is a comprehensive switch-based management software solution that provides optimal performance for cluster computing, enterprise data centers, and cloud computing over the Mellanox IS5000 switch family. The fabric management capabilities ensure the highest fabric performance, while the chassis management ensures easy provisioning and the longest switch uptime. With FabricIT EFM running on InfiniScale® IV powered fabrics, IT managers will see a higher return on their compute, storage, and networking infrastructure investment through higher CPU productivity, efficiency, and availability.

Switch Chassis Management

FabricIT chassis management software is included with every IS5000 series managed switch, enabling network administrators to monitor and diagnose the switch hardware. With local and remote configuration and management capabilities, chassis management provides critical system information including port status with event and error logs, CPU resources, and internal temperature with alarms. The chassis manager enables easy switch maintenance and high network availability.

Fabric Management

FabricIT EFM fabric management provides an intuitive, reliable, and scalable management solution for cluster and data center fabrics. Its modular design integrates the Subnet Manager (SM) with advanced features, simplifying cluster bring-up and node initialization through automatic discovery and configuration. Performance monitors measure the fabric characteristics to ensure the highest effective throughput.

Mellanox Advantage

Mellanox Technologies is a leading supplier of end-to-end server and storage connectivity solutions to optimize data center performance and efficiency. Mellanox InfiniBand adapters, switches, and software are powering Fortune 500 data centers and the world's most powerful supercomputers. For the best in server and storage performance and scalability with the lowest TCO, Mellanox interconnect products are the solution.

Product Brief

MLNX-OS®

Integrated Switch Management Solution

MLNX-OS is a comprehensive management software solution that provides optimal performance for cluster computing, enterprise data centers, and cloud computing over the Mellanox SwitchX™ switch family. The fabric management capabilities ensure the highest fabric performance, while the chassis management ensures the longest switch uptime. With MLNX-OS, IT managers will see a higher return on their compute and infrastructure investment through higher CPU productivity, driven by higher network throughput and availability.

Virtual Protocol Interconnect® (VPI)

VPI flexibility enables any standard networking, clustering, storage, and management protocol to seamlessly operate over any converged network leveraging a consolidated software stack. Each port can operate on InfiniBand, Ethernet, Data Center Bridging (DCB) fabrics and RDMA over Converged Ethernet (RoCE). VPI simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.

Complete Ethernet Stack

MLNX-OS introduces a complete Ethernet L2 and L3 protocol stack with unicast and multicast switching and routing capabilities, complemented with SDN attributes for maximizing the network administrator's control over network resources.

Switch Chassis Management

An embedded Subnet Manager (SM) and chassis management software are included with every managed switch, enabling network administrators to monitor and diagnose the switch hardware. With local and remote configuration and management capabilities, chassis management provides parameter information including port status with event and error logs, CPU resources, and internal temperature with alarms. The chassis manager ensures low switch maintenance and high network availability.

Ease of Management

MLNX-OS management software communication interfaces include CLI, GUI, SNMP, and an XML gateway. Licensed features can be activated via keys to enable the plug-ins.

Product Brief

Fabric Inspector

Plug-In fabric diagnostics solution

Fabric Inspector is a switch-based software plug-in that enhances Mellanox's Operating System (MLNX-OS™) management software with fabric diagnostic capabilities to ensure fabric health. Cluster management software must provide tools to help a network administrator bring up the network and optimize performance. Fabric Inspector includes a complete set of tools for fabric-wide diagnostics to check node-node and node-switch connectivity and to verify routes within the fabric.

Simplicity and Ease of Management

Fabric Inspector is a plug-and-play software module within the Mellanox Operating System (MLNX-OS) that displays and filters all identified systems and nodes within the fabric (adapters, switches). The display can be filtered by activity status, port type (HCA, switch or management), or port rate (link speed or link width). Moreover, Fabric Inspector helps assign meaningful names to GUIDs, enabling management of externally managed systems.

Mellanox Advantage

Mellanox Technologies is a leading supplier of end-to-end server and storage connectivity solutions to optimize data center performance and efficiency. Mellanox InfiniBand adapters, switches, and software are powering Fortune 500 data centers and the world’s most powerful supercomputers. The company offers innovative solutions that address a wide range of markets including HPC, enterprise data centers, cloud computing, Internet and Web 2.0.


Product Brief

FabricIT BridgeX Manager (BXM)

Efficient Management of Virtual I/O for InfiniBand in the Data Center

Virtualization and cloud computing require a new class of on-demand I/O service where traditional I/O with multiple storage and networking cards on a single server is not efficient or cost effective. The BridgeX® gateway improves data center efficiency by enabling network consolidation with Virtual I/O using ConnectX® InfiniBand adapters, allowing a server with a single physical adapter over a single cable to connect to an Ethernet LAN. IT managers can repurpose their servers dynamically with a single ConnectX InfiniBand card to create multiple virtual NICs (vNICs) based on user demand. The flexibility to dynamically repurpose servers is achieved using FabricIT BridgeX Manager (BXM) fabric management software.

FabricIT BXM is robust management software running on BridgeX gateways to manage I/O consolidation for cluster, cloud, and virtual environments. It simplifies connectivity from an efficient InfiniBand network to an Ethernet LAN. Supported key features include discovery of network virtualization hardware; creation, addition, and deletion of Virtual I/O; association of each Virtual I/O connection to network or storage; flexibility to repurpose the LAN ports on the gateway; security; I/O isolation; and redundancy. Standard CLI and Web GUI interfaces allow users the flexibility to manage, provision, and orchestrate network virtualization for the entire fabric with no changes to the physical servers, switches, and storage targets.

Product Brief

HPC-X

Mellanox HPC-X™ is a comprehensive software package that includes MPI, SHMEM and UPC communications libraries. HPC-X™ also includes various acceleration packages to improve both the performance and scalability of applications running on top of these libraries, including MXM (Mellanox Messaging), which accelerates the underlying send/receive (or put/get) messages, and FCA (Fabric Collectives Accelerations), which accelerates the underlying collective operations used by the MPI/PGAS languages. This full-featured, tested and packaged version of HPC software enables MPI, SHMEM and PGAS programming languages to scale to extremely large clusters by improving memory- and latency-related efficiencies, and assures that the communication libraries are fully optimized for the Mellanox interconnect solutions.

Mellanox HPC-X™ allows OEMs and system integrators to meet the needs of their end users by deploying the latest available software that takes advantage of the features and capabilities available in the most recent hardware and firmware changes.

Mellanox HPC-X™ Software Toolkit

To meet the needs of scientific research and engineering simulations, supercomputers are growing at an unrelenting rate. The Mellanox HPC-X Toolkit is a comprehensive MPI, SHMEM and UPC software suite for high performance computing environments. HPC-X provides enhancements to significantly increase the scalability and performance of message communications in the network. HPC-X enables you to rapidly deploy and deliver maximum application performance without the complexity and costs of licensed third-party tools and libraries.
Product Brief

  • Complete MPI, SHMEM, UPC package, including Mellanox MXM and FCA acceleration engines
  • Offload collectives communication from MPI process onto Mellanox interconnect hardware
  • Maximize application performance with underlying hardware architecture
  • Fully optimized for Mellanox InfiniBand and VPI interconnect solutions
  • Increase application scalability and resource efficiency
  • Multiple transport support including RC, DC and UD
  • Intra-node shared memory communication
  • Receive side tag matching
  • Native support for MPI-3

Fabric Collective Accelerator (FCA)

FCA is an MPI-integrated software package that utilizes CORE-Direct technology for implementing the MPI collective communications. FCA can be used with all major commercial and open-source MPI solutions in use for high-performance applications. FCA with CORE-Direct technology accelerates the MPI collectives runtime, increases CPU availability for the application, and allows computation to overlap with collective communication. FCA provides an efficient collective communication flow optimized to the job and topology. It also contains support for building runtime-configurable hierarchical collectives (HCOL) and supports multiple optimizations within a single collective algorithm.
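As a minimal usage sketch, the program below issues an MPI-3 nonblocking collective (MPI_Iallreduce) and overlaps it with other work; with FCA enabled, the collective itself can be accelerated without changing the application code. How the acceleration engines are enabled at launch time (environment variables or mpirun options) varies by release and is assumed, not shown.

```c
/* Minimal sketch of an MPI-3 nonblocking collective (MPI_Iallreduce).
 * The application code is plain MPI; any FCA/CORE-Direct acceleration
 * happens underneath the MPI library (launch options not shown here).   */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)rank, sum = 0.0;
    MPI_Request req;

    /* Start the reduction, overlap it with local work, then complete it. */
    MPI_Iallreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD, &req);
    /* ... computation that does not depend on 'sum' could run here ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("sum of ranks = %.0f\n", sum);

    MPI_Finalize();
    return 0;
}
```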
Product Brief

  • Offload collectives communication from MPI process onto Mellanox interconnect hardware
  • Efficient collectives communication flow optimized to job and topology
  • Significantly reduce MPI collectives runtime
  • Native support for MPI-3
  • Blocking and nonblocking collectives
  • Hierarchical communication algorithms (HCOL)
  • Multiple optimizations within a single collective algorithm
  • Increase CPU availability and efficiency for increased application performance
  • Seamless integration with MPI libraries and job schedulers

Mellanox Messaging Accelerator (MXM)

Mellanox Messaging Accelerator (MXM) provides enhancements to parallel communication libraries by fully utilizing the underlying networking infrastructure provided by Mellanox HCA/switch hardware. This includes a variety of enhancements that take advantage of Mellanox networking hardware including:

  • Multiple transport support including RC, DC and UD
  • Proper management of HCA resources and memory structures
  • Efficient memory registration
  • One-sided communication semantics
  • Connection management
  • Receive side tag matching
  • Intra-node shared memory communication

These enhancements significantly increase the scalability and performance of message communications in the network, alleviating bottlenecks within the parallel communication libraries.

Product Brief

HPC-X™ MPI – Message Passing Interface

Message Passing Interface (MPI) is a standardized, language-independent and portable message-passing system, and is the industry-standard specification for writing message-passing programs. HPC-X MPI is a high-performance implementation of Open MPI optimized to take advantage of the additional Mellanox acceleration capabilities, and it provides seamless integration with industry-leading commercial and open-source application software packages.
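A minimal, portable MPI example is shown below; it is plain MPI and is assumed to be built with the mpicc wrapper shipped in the HPC-X package (the toolchain and binary name are illustrative assumptions).

```c
/* Minimal portable MPI program: rank 0 sends one integer to rank 1.
 * Compiles against any MPI implementation, including HPC-X's Open MPI.  */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int msg = 42;
    if (size >= 2) {
        if (rank == 0) {
            MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* rank 0 -> 1 */
        } else if (rank == 1) {
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", msg);
        }
    }

    MPI_Finalize();
    return 0;
}
```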

HPC-X™ OpenSHMEM

The HPC-X™ OpenSHMEM programming library is a one-sided communications library that supports a unique set of parallel programming features including point-to-point and collective routines, synchronizations, atomic operations, and a shared memory paradigm used between the processes of a parallel programming application.

SHMEM (SHared MEMory) uses the PGAS model to allow processes to globally share variables: each process sees the same variable name, but keeps its own copy of the variable. Modification of another process's address space is then accomplished using put/get (or write/read) semantics. The availability of put/get operations, or one-sided communication, is one of the major differences between SHMEM and MPI (Message Passing Interface), which traditionally uses two-sided, send/receive semantics.
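The short OpenSHMEM sketch below illustrates the put model just described: the symmetric variable counter exists on every processing element (PE), and PE 0 writes directly into PE 1's copy with a one-sided put. The PE count and values are illustrative.

```c
/* Minimal OpenSHMEM sketch of one-sided put semantics.
 * 'counter' is symmetric: the same name exists on every PE, but each PE
 * owns its own copy. PE 0 writes into PE 1's copy with shmem_int_put.   */
#include <shmem.h>
#include <stdio.h>

int counter = 0;                          /* symmetric data object */

int main(void)
{
    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    if (npes >= 2 && me == 0) {
        int value = 123;
        shmem_int_put(&counter, &value, 1, 1);   /* write into PE 1's copy */
    }

    shmem_barrier_all();                  /* complete and expose the remote write */

    if (me == 1)
        printf("PE 1 sees counter = %d\n", counter);

    shmem_finalize();
    return 0;
}
```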

Product Brief

  • Provides a programming library for the shared memory communication model, extending the use of InfiniBand to SHMEM applications
  • Seamless integration with MPI libraries and job schedulers, allowing for a hybrid programming model
  • Maximum collective scalability through integration with the Mellanox Fabric Collective Accelerator (FCA)
  • High message rate performance through integration with the Mellanox Messaging Accelerator (MXM)

HPC-X™ UPC

Unified Parallel C (UPC) is an extension of the C programming language designed for high performance computing on large-scale parallel systems. The language provides a uniform programming model for shared and distributed memory hardware. The processor memory has a single shared, partitioned address space, where variables may be directly read and written by any processor, but each variable is physically associated with a single processor. UPC uses a Single Program Multiple Data (SPMD) model of computation in which the amount of parallelism is fixed at program startup time, typically with a single thread of execution per processor.

Mellanox HPC-X™ UPC is based on the Berkeley Unified Parallel C project. Berkeley UPC library includes an underlying communication conduit called GASNET, which works over the OpenFabrics RDMA for Linux stack (OFED™). Mellanox has optimized this GASNET layer with the inclusion of their Mellanox Messaging libraries (MXM) as well as Mellanox Fabric Collective Accelerations (FCA), providing an unprecedented level of scalability for UPC programs running over InfiniBand.

Product Brief

  • Provides a programming library for the shared memory communication model, extending the use of InfiniBand to Berkeley UPC applications
  • Seamless integration with MPI libraries and job schedulers, allowing for a hybrid programming model
  • Maximum collective scalability through integration with the Mellanox Fabric Collective Accelerator (FCA)
  • High message rate performance through integration with the Mellanox Messaging Accelerator (MXM)


Mellanox OpenFabrics Enterprise Distribution for Linux (MLNX_OFED)

Clustering using commodity servers and storage systems is seeing widespread deployment in large and growing markets such as high performance computing, data warehousing, online transaction processing, financial services, and large-scale Web 2.0 deployments. To enable distributed computing transparently and with maximum efficiency, applications in these markets require the highest I/O bandwidth and the lowest possible latency. These requirements are compounded with the need to support a large interoperable ecosystem of networking, virtualization, storage, and other applications and interfaces. OFED from the OpenFabrics Alliance (www.openfabrics.org) has been hardened through collaborative development and testing by major high-performance I/O vendors. Mellanox OFED (MLNX_OFED) is a Mellanox-tested and packaged version of OFED that supports two interconnect types, InfiniBand and Ethernet, using the same RDMA (remote DMA) and kernel-bypass APIs, known as OFED verbs. 10/20/40Gb/s InfiniBand and RoCE (based on the RDMA over Converged Ethernet standard) over 10/40GbE are supported with OFED by Mellanox to enable OEMs and system integrators to meet the needs of end users in these markets.
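As a minimal illustration of the verbs API mentioned above, the sketch below enumerates the RDMA-capable devices (InfiniBand or RoCE) visible through libibverbs; the build command in the comment is an assumption about the local toolchain.

```c
/* Minimal libibverbs sketch: list the RDMA devices exposed by the stack.
 * Build (assumption): cc list_devices.c -libverbs -o list_devices        */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num = 0;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list) {
        perror("ibv_get_device_list");
        return 1;
    }

    printf("found %d RDMA device(s)\n", num);
    for (int i = 0; i < num; i++)
        printf("  %s\n", ibv_get_device_name(list[i]));

    ibv_free_device_list(list);
    return 0;
}
```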
Product Brief

  • Virtual Protocol Interconnect (VPI) allows the Mellanox ConnectX adapter family to run InfiniBand and Ethernet traffic simultaneously on two ports
  • Single software stack that operates across all available Mellanox InfiniBand and Ethernet devices and configurations such as mem-free, SDR/DDR/QDR/FDR, 10/40GbE, and PCI Express modes
  • Support for HPC applications for scientific research, oil and gas exploration, car crash tests, benchmarking, etc. (e.g., Fluent, LS-DYNA)
  • Support for Data Center applications such as Oracle 11g/10g RAC, IBM DB2, Financial services applications such as IBM WebSphere LLM, Red Hat MRG, NYSE Data Fabric
  • Support for high-performance block storage applications utilizing RDMA benefits

Mellanox OFED for Windows

Windows OS host controller driver for Cloud, Storage and High-Performance Computing applications utilizing Mellanox's field-proven RDMA and Transport Offloads

The Mellanox Windows distribution includes software for database clustering, cloud, high performance computing, communications, and storage applications for servers and clients running different versions of Windows OS. This collection consists of drivers, protocols, and management tools packaged in simple, ready-to-install MSIs.
More detailed information on each package is provided in the documentation package available in the Related Documents section.

For ConnectX-3 and ConnectX-3 Pro drivers download WinOF.

For ConnectX-4 drivers download WinOF-2.

The Mellanox WinOF and WinOF-2 distribution provides the following benefits:

  • Virtual Protocol Interconnect (VPI): Running Ethernet and/or InfiniBand on the same Host Controller
  • Support Windows Azure Pack
  • Support traditional IP and Sockets based applications leveraging the benefits of RDMA
  • Support high-performance block storage applications utilizing RDMA benefits
  • Cloud and virtualization:
    • NVGRE Hardware offload (ConnectX-3 Pro and ConnectX-4)
    • SR-IOV
    • Function per-port (ConnectX-4)
  • NDK with SMB-Direct
  • NDv1 and v2 API support in user space
  • Support Teaming and High-Availability
  • Support for a variety of Windows Server and client OS versions

Mellanox FlexBoot

FlexBoot is a multiprotocol remote-boot technology that delivers unprecedented flexibility in how IT managers can provision or repurpose their data center servers. FlexBoot enables remote boot over InfiniBand or Ethernet using Boot over InfiniBand, Boot over Ethernet, or Boot over iSCSI (Bo-iSCSI). Combined with Virtual Protocol Interconnect (VPI) technologies available in ConnectX®-3/ConnectX®-3 Pro/ConnectX®-4 and Connect-IB® adapters, FlexBoot gives IT managers the flexibility to deploy servers with one adapter card into InfiniBand or Ethernet networks, with the ability to boot from LAN or remote storage targets. This technology is based on the Preboot Execution Environment (PXE) standard specification, and the FlexBoot software is based on the open source iPXE project (see www.ipxe.org).
Product Brief

  • Simplified boot image and configuration management
  • Rapid recovery from server and site failures
  • Support boot options from both InfiniBand and Ethernet
  • Remote boot support from iSCSI Storage (target) and Ethernet (LAN targets)
  • Reduction in Data Center costs
