Mellanox provides a unique family of application accelerator products that, together with its InfiniBand and 10GigE switches, provide the highest-performing solutions in the market. Mellanox’s application accelerator software solutions reduce latency, increase throughput, and offload CPU cycles, enhancing the performance of applications while eliminating the need for large investments in hardware infrastructure.
Mellanox Application Accelerator Product Family
Mellanox has worked on hundreds of software projects with ISVs and other software companies such as NYSE Technologies, 29West, Oracle, Microsoft, Ansys, LSTC, ESI and many others. Building on this extensive software experience, Mellanox is able to provide added value on top of its hardware solutions.
The following software products are designed to accelerate the performance of applications running on scale-out data center fabrics.
- Unstructured Data Accelerator (UDA)
- Messaging Accelerator (VMA)
- Linux SW/Drivers
- Windows SW/Drivers
Mellanox Advantage
Mellanox Technologies is a leading supplier of end-to-end server and storage connectivity solutions that optimize data center performance and efficiency. Mellanox InfiniBand and Ethernet adapters, switches, and software power Fortune 500 data centers and the world’s most powerful supercomputers. For the best in server and storage performance and scalability with the lowest TCO, Mellanox interconnect products are the solution.
Unstructured Data Accelerator (UDA)
Enterprise and research data sets are rising steeply in volume, velocity and variety. Hadoop, one of the dominant Big Data frameworks, helps organizations store and analyze data at this scale. The UDA plug-in software package provides a novel shuffle approach for Hadoop’s MapReduce framework. RDMA-based networks, with their low latency and high bandwidth, make the most efficient shuffle provider for MapReduce. Compared to a 1GbE network, benchmark results show nearly double the performance of Hadoop® clusters using UDA over 10GbE networks, and quadruple the performance over FDR InfiniBand. UDA is a free software package, available under the Apache 2.0 License. UDA is jointly developed by Mellanox and the Parallel Architecture and System Laboratory headed by Dr. Weikuan Yu at Auburn University.
Messaging Accelerator (VMA)
Dramatically improves performance of socket-based applications
Mellanox’s Messaging Accelerator (VMA) boosts performance for message-based and streaming applications such as those found in financial services market data environments and Web2.0 clusters. The result is a reduction in latency of as much as 300% and an increase in application throughput of as much as 200% per server, compared to applications running on standard Ethernet or InfiniBand interconnect networks. This solution lowers latency and increases transactions per second for a wide array of applications including online Web services, medical imaging, radar and other data acquisition systems. VMA Open Source can improve the performance of any application that makes heavy use of multicast, UDP unicast or TCP streaming and requires high packet-per-second rates, low data distribution latency, low CPU utilization or increased application scalability.
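As an illustration of the kind of code VMA accelerates, the sketch below is a minimal UDP multicast receiver written against the standard BSD sockets API; the multicast group, port and buffer size are arbitrary example values. Because VMA intercepts socket calls when preloaded (for example, LD_PRELOAD=libvma.so ./mcast_rx), the application itself needs no changes.

```c
/* Minimal UDP multicast receiver (sketch). Group, port and buffer size
 * are illustrative; error handling is omitted for brevity. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(12345);                 /* example port */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    /* Join an example multicast group on the default interface. */
    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("239.1.1.1");
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    char buf[2048];
    ssize_t n = recv(fd, buf, sizeof(buf), 0);    /* serviced in user space under VMA */
    printf("received %zd bytes\n", n);

    close(fd);
    return 0;
}
```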
Comprehensive Fabric Management Solution for Clustered Computing, Database, Storage and Cloud Computing
Mellanox’s comprehensive suite of management software provides an innovative application-centric approach to bridge the gap between servers, applications and fabric elements. Mellanox’s management solution allows users to manage small to extremely large fabrics as a set of inter-related business entities and enables fabric monitoring and performance optimization at the application-logical level rather than merely at the individual port or device level.
All Mellanox’s managed switches include the advanced embedded Mellanox Operating System (MLNX-OS®) or FabricIT management software providing an embedded Subnet Manager (supporting up to 648 nodes) and chassis management through CLI/WebUI/SNMP and XML (REST) interfaces.
All switches can be further enhanced using Mellanox’s Unified Fabric Manager (UFM®) packages including fabric diagnostics, monitoring, provisioning and advanced features such as Congestion Manager and server virtualization support.
Unified Fabric Manager (UFM®) Software for Data Center Management
Mellanox’s Unified Fabric Manager (UFM®) is a powerful platform for managing scale-out computing environments. UFM enables data center operators to monitor, efficiently provision, and operate the modern data center fabric. UFM eliminates the complexity of fabric management, provides deep visibility into traffic and optimizes fabric performance.
Fabric Visibility & Control UFM includes an advanced granular monitoring engine that provides real time access to health and performance, switch and host data, enabling:
Solve Traffic Bottlenecks
Fabric congestion is difficult to detect with traditional management tools, resulting in unnoticed congestion and fabric under-utilization. UFM’s unique congestion tracking feature quickly identifies traffic bottlenecks and congestion events spreading over the fabric, enabling accurate problem identification and quick resolution of performance issues.
Ease Fabric Deployment and Operations
UFM’s central management console reduces the effort and complexity involved in bring-up and day-to-day fabric maintenance tasks. This significantly reduces downtime and makes UFM the ultimate management tool for the most demanding data center environments.
The SDN Approach
While other tools are device-oriented and rely on local device logic, UFM uses an SDN architecture together with a service-oriented approach to manage the fabric, giving it advantages over device-level tools.
Integration with Existing Data Center Management Tools
UFM provides an open and extensible object model to describe data center infrastructure and conduct all relevant management actions. UFM’s API enables integration with leading job schedulers, cloud and cluster managers.
Mellanox NEO™
Cloud Networking Orchestration and Management Software
Mellanox NEO™ is a powerful platform for managing scale-out computing networks. Mellanox NEO™ enables data center operators to efficiently provision, monitor and operate the modern data center fabric. Mellanox NEO™ serves as an interface to the fabric, extending the capabilities of existing tools into monitoring and provisioning the data center network. Mellanox NEO™ exposes an extensive set of REST APIs that allow access to fabric-related data and provisioning activities. Mellanox NEO™ eliminates the complexity of fabric management: it automates the configuration of devices, provides deep visibility into traffic and health, and provides early detection of errors and failures.
FabricIT™
Integrated Switch Management Solution
FabricIT™ is a comprehensive switch-based management software solution that provides optimal performance for cluster computing, enterprise data centers, and cloud computing over the Mellanox IS5000 switch family. The fabric management capabilities ensure the highest fabric performance, while the chassis management ensures easy provisioning and the longest switch uptime. With FabricIT EFM running on InfiniScale® IV powered fabrics, IT managers will see a higher return on their compute, storage and networking infrastructure investment through higher CPU productivity, efficiency and availability.
Switch Chassis Management
FabricIT chassis management software is included with every IS5000 series managed switch, enabling network administrators to monitor and diagnose the switch hardware. With local and remote configuration and management capabilities, chassis management provides critical system information including port status with event and error logs, CPU resources, and internal temperature with alarms. The chassis manager enables easy switch maintenance and high network availability.
Fabric Management
FabricIT EFM fabric management provides an intuitive, reliable and scalable management solution for cluster and data center fabrics. Its modular design integrates the subnet manager (SM) with advanced features, simplifying cluster bring-up and node initialization through automatic discovery and configuration. Performance monitors measure the fabric characteristics to ensure the highest effective throughput.
Mellanox Advantage
Mellanox Technologies is a leading supplier of end-to-end server and storage connectivity solutions that optimize data center performance and efficiency. Mellanox InfiniBand adapters, switches, and software power Fortune 500 data centers and the world’s most powerful supercomputers. For the best in server and storage performance and scalability with the lowest TCO, Mellanox interconnect products are the solution.
MLNX-OS®
Integrated Switch Management Solution
MLNX-OS is a comprehensive management software solution that provides optimal performance for cluster computing, enterprise data centers, and cloud computing over the Mellanox SwitchX™ switch family. The fabric management capabilities ensure the highest fabric performance, while the chassis management ensures the longest switch uptime. With MLNX-OS, IT managers will see a higher return on their compute and infrastructure investment through higher CPU productivity, driven by higher network throughput and availability.
Virtual Protocol Interconnect® (VPI)
VPI flexibility enables any standard networking, clustering, storage, and management protocol to seamlessly operate over any converged network, leveraging a consolidated software stack. Each port can operate on InfiniBand, Ethernet, Data Center Bridging (DCB) fabrics and RDMA over Converged Ethernet (RoCE). VPI simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.
Complete Ethernet Stack
MLNX-OS introduces a complete Ethernet L2 and L3 protocol stack with unicast and multicast switching and routing capabilities, complemented by SDN attributes that maximize the network administrator’s control over network resources.
Switch Chassis Management
An embedded Subnet Manager (SM) and chassis management software are included with every managed switch, enabling network administrators to monitor and diagnose the switch hardware. With local and remote configuration and management capabilities, chassis management provides parameter information including port status with event and error logs, CPU resources, and internal temperature with alarms. The chassis manager ensures low switch maintenance and high network availability.
Ease of Management
MLNX-OS management software communication interfaces include a CLI, a GUI, SNMP and an XML gateway. Licensed features can be activated via keys to enable the plug-ins.
Fabric Inspector
Plug-In Fabric Diagnostics Solution
Fabric Inspector is a switch-based software plug-in that enhances Mellanox’s Operating System (MLNX-OS™) management software with fabric diagnostic capabilities to ensure fabric health. Cluster management software must provide tools to help a network administrator bring up the network and optimize performance. Fabric Inspector includes a complete set of tools for fabric-wide diagnostics to check node-to-node and node-to-switch connectivity and to verify routes within the fabric.
Simplicity and Ease of Management
Fabric Inspector is a plug-and-play software module within the Mellanox Operating System (MLNX-OS) that displays and filters all identified systems and nodes within the fabric (adapters, switches). The display can be filtered by activity status, port type (HCA, switch or management) or port rate (link speed or link width). Moreover, Fabric Inspector helps assign meaningful names to GUIDs, enabling management of externally managed systems.
Mellanox Advantage
Mellanox Technologies is a leading supplier of end-to-end server and storage connectivity solutions that optimize data center performance and efficiency. Mellanox InfiniBand adapters, switches, and software power Fortune 500 data centers and the world’s most powerful supercomputers. The company offers innovative solutions that address a wide range of markets including HPC, enterprise data centers, cloud computing, Internet and Web 2.0.
FabricIT BridgeX Manager (BXM)
Efficient Management of Virtual I/O for InfiniBand in the Data Center
Virtualization and cloud computing require a new class of on-demand I/O service, where traditional I/O with multiple storage and networking cards in a single server is neither efficient nor cost effective. The BridgeX® gateway improves data center efficiency by enabling network consolidation with virtual I/O using ConnectX® InfiniBand adapters, allowing a server with a single physical adapter and a single cable to connect to the Ethernet LAN. IT managers can dynamically repurpose their servers, using a single ConnectX InfiniBand card to create multiple virtual NICs (vNICs) based on user demand. This flexibility to dynamically repurpose servers is achieved using FabricIT BridgeX Manager (BXM) fabric management software. FabricIT BXM is robust management software running on BridgeX gateways that manages I/O consolidation for cluster, cloud, and virtual environments, and simplifies connectivity from an efficient InfiniBand network to the Ethernet LAN. Key supported features include discovery of network virtualization hardware; creation, addition, and deletion of virtual I/O; association of each virtual I/O connection with a network or storage target; the flexibility to repurpose the LAN ports on the gateway; security; I/O isolation; and redundancy. Standard CLI and Web GUI interfaces give users the flexibility to manage, provision, and orchestrate network virtualization for the entire fabric with no changes to the physical servers, switches, and storage targets.
Mellanox HPC-X™ is a comprehensive software package that includes MPI, SHMEM and UPC communications libraries. HPC-X™ also includes various acceleration packages to improve both the performance and scalability of applications running on top of these libraries, including MXM (Mellanox Messaging), which accelerates the underlying send/receive (or put/get) messages, and FCA (Fabric Collectives Accelerations), which accelerates the underlying collective operations used by the MPI/PGAS languages. This full-featured, tested and packaged version of HPC software enables MPI, SHMEM and PGAS programming languages to scale to extremely large clusters by improving memory- and latency-related efficiencies, and ensures that the communication libraries are fully optimized for Mellanox interconnect solutions.
Mellanox HPC-X™ allows OEMs and system integrators to meet the needs of their end users by deploying the latest available software that takes advantage of the features and capabilities introduced in the most recent hardware and firmware releases.
Mellanox HPC-X™ Software Toolkit
To meet the needs of scientific research and engineering simulations, supercomputers are growing at an unrelenting rate. The Mellanox HPC-X Toolkit is a comprehensive MPI, SHMEM and UPC software suite for high performance computing environments. HPC-X provides enhancements to significantly increase the scalability and performance of message communications in the network. HPC-X enables you to rapidly deploy and deliver maximum application performance without the complexity and costs of licensed third-party tools and libraries.
Fabric Collective Accelerator (FCA)
FCA is an MPI-integrated software package that uses CORE-Direct technology to implement MPI collective communications. FCA can be used with all major commercial and open-source MPI solutions in use for high-performance applications. FCA with CORE-Direct technology accelerates MPI collectives runtime, increases CPU availability to the application and allows overlap of communication and computation with collective operations. FCA enables an efficient collective communication flow optimized to the job and topology. It also contains support for building runtime-configurable hierarchical collectives (HCOLL) and supports multiple optimizations within a single collective algorithm.
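For context, the sketch below shows the kind of collective operation FCA targets: a simple MPI_Allreduce written against the standard MPI API. It is a generic example rather than Mellanox code; whether the collective is actually offloaded depends on the MPI library being built and configured with FCA.

```c
/* Minimal MPI_Allreduce example (sketch): every rank contributes one
 * integer and all ranks receive the global sum. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int local = rank + 1;   /* each rank's contribution */
    int global = 0;

    /* Collective call of the kind FCA accelerates when enabled in the MPI stack. */
    MPI_Allreduce(&local, &global, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum across all ranks = %d\n", global);

    MPI_Finalize();
    return 0;
}
```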
Mellanox Messaging Accelerator (MXM)
Mellanox Messaging Accelerator (MXM) provides enhancements to parallel communication libraries by fully utilizing the underlying networking infrastructure provided by Mellanox HCA and switch hardware. It includes a variety of enhancements that take advantage of Mellanox networking hardware; these enhancements significantly increase the scalability and performance of message communications in the network, alleviating bottlenecks within the parallel communication libraries.
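The sketch below illustrates the communication pattern these enhancements target: a non-blocking MPI ring exchange in which transfers can progress while the application computes. It uses only standard MPI calls; MXM sits underneath the MPI library and is not invoked directly by the application.

```c
/* Non-blocking ring exchange (sketch): each rank sends a buffer to its
 * neighbor while remaining free to compute until the transfers complete. */
#include <mpi.h>
#include <stdio.h>

#define N 1024

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double sendbuf[N], recvbuf[N];
    for (int i = 0; i < N; i++)
        sendbuf[i] = rank;

    int peer = (rank + 1) % size;   /* next rank in the ring */
    MPI_Request reqs[2];
    MPI_Irecv(recvbuf, N, MPI_DOUBLE, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... application computation could proceed here while the messages
     * progress in the communication library ... */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    if (rank == 0)
        printf("ring exchange complete on %d ranks\n", size);

    MPI_Finalize();
    return 0;
}
```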
HPC-X™ MPI – Message Passing Interface
Message Passing Interface (MPI) is a standardized, language-independent and portable message-passing system, and is the industry-standard specification for writing message-passing programs. HPC-X MPI is a high performance implementation of Open MPI optimized to take advantage of the additional Mellanox acceleration capabilities and also provides seamless integration with the industry leading commercial and open-source application software packages.
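A minimal MPI program, included below for illustration, exchanges a short message between two ranks using the standard point-to-point API; it is generic MPI code and assumes at least two ranks are launched (for example, mpirun -np 2 ./hello).

```c
/* Minimal MPI point-to-point example (sketch): rank 0 sends a string to
 * rank 1. Assumes the job is launched with at least two ranks. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char msg[64];
    if (rank == 0 && size > 1) {
        strcpy(msg, "hello from rank 0");
        MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(msg, sizeof(msg), MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received: %s\n", msg);
    }

    MPI_Finalize();
    return 0;
}
```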
HPC-X™ OpenSHMEM
The HPC-X™ OpenSHMEM programming library is a one-sided communications library that supports a unique set of parallel programming features including point-to-point and collective routines, synchronizations, atomic operations, and a shared-memory paradigm used between the processes of a parallel programming application.
SHMEM (SHared MEMory) uses the PGAS model to allow processes to globally share variables: each process sees the same variable name, but keeps its own copy of the variable. Modification of another process’s address space is then accomplished using put/get (or write/read) semantics. This ability to perform put/get operations, or one-sided communication, is one of the major differences between SHMEM and MPI (Message Passing Interface), which only uses two-sided, send/receive semantics.
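The sketch below illustrates these put semantics using the standard OpenSHMEM API: each processing element (PE) writes a value directly into a symmetric variable on its neighbor. It is a generic OpenSHMEM example rather than HPC-X-specific code.

```c
/* One-sided put with OpenSHMEM (sketch): each PE writes its id into the
 * symmetric variable 'target' on the next PE in a ring. */
#include <shmem.h>
#include <stdio.h>

long target;   /* global (symmetric) variable: exists on every PE */

int main(void)
{
    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    long value = me;
    target = -1;
    shmem_barrier_all();

    /* Write directly into the neighbor's address space: no matching receive. */
    shmem_long_put(&target, &value, 1, (me + 1) % npes);

    shmem_barrier_all();
    printf("PE %d received %ld from its neighbor\n", me, target);

    shmem_finalize();
    return 0;
}
```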
HPC-X™ UPC
Unified Parallel C (UPC) is an extension of the C programming language designed for high performance computing on large-scale parallel systems. The language provides a uniform programming model for shared and distributed memory hardware. The processor memory has a single shared, partitioned address space, where variables may be directly read and written by any processor, but each variable is physically associated with a single processor. UPC uses a Single Program Multiple Data (SPMD) model of computation in which the amount of parallelism is fixed at program startup time, typically with a single thread of execution per processor.
Mellanox HPC-X™ UPC is based on the Berkeley Unified Parallel C project. The Berkeley UPC library includes an underlying communication conduit called GASNet, which works over the OpenFabrics RDMA for Linux stack (OFED™). Mellanox has optimized this GASNet layer with its Mellanox Messaging library (MXM) as well as the Mellanox Fabric Collective Accelerations (FCA), providing an unprecedented level of scalability for UPC programs running over InfiniBand.
Mellanox OpenFabrics Enterprise Distribution for Linux (MLNX_OFED)
Clustering using commodity servers and storage systems is seeing widespread deployment in large and growing markets such as high performance computing, data warehousing, online transaction processing, financial services and large-scale Web 2.0 deployments. To enable distributed computing transparently and with maximum efficiency, applications in these markets require the highest I/O bandwidth and lowest possible latency. These requirements are compounded by the need to support a large interoperable ecosystem of networking, virtualization, storage, and other applications and interfaces. OFED from the OpenFabrics Alliance (www.openfabrics.org) has been hardened through collaborative development and testing by major high performance I/O vendors. Mellanox OFED (MLNX_OFED) is a Mellanox-tested and packaged version of OFED that supports two interconnect types, InfiniBand and Ethernet, using the same RDMA (remote DMA) and kernel-bypass APIs, called OFED verbs. 10/20/40Gb/s InfiniBand and RoCE (based on the RDMA over Converged Ethernet standard) over 10/40GbE are supported with OFED by Mellanox, enabling OEMs and system integrators to meet the needs of end users in these markets.
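As a small illustration of the verbs API that MLNX_OFED provides, the sketch below enumerates the local RDMA devices with libibverbs and queries basic attributes of the first one; it deliberately stops short of creating queue pairs and is intended only to show the flavor of the interface (link with -libverbs).

```c
/* Enumerate RDMA devices and query the first one via OFED verbs (sketch). */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num = 0;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num; i++)
        printf("device %d: %s\n", i, ibv_get_device_name(list[i]));

    struct ibv_context *ctx = ibv_open_device(list[0]);
    if (ctx) {
        struct ibv_device_attr attr;
        if (ibv_query_device(ctx, &attr) == 0)
            printf("max QPs: %d, max CQs: %d\n", attr.max_qp, attr.max_cq);
        ibv_close_device(ctx);
    }

    ibv_free_device_list(list);
    return 0;
}
```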
Mellanox OFED for Windows
Mellanox FlexBoot
FlexBoot is a multiprotocol remote boot technology that delivers unprecedented flexibility in how IT managers can provision or repurpose their data center servers. FlexBoot enables remote boot over InfiniBand or Ethernet using Boot over InfiniBand, Boot over Ethernet, or Boot over iSCSI (Bo-iSCSI). Combined with the Virtual Protocol Interconnect (VPI) technologies available in ConnectX®-3/ConnectX®-3 Pro/ConnectX®-4 and Connect-IB® adapters, FlexBoot gives IT managers the flexibility to deploy servers with one adapter card into InfiniBand or Ethernet networks, with the ability to boot from LAN or remote storage targets. This technology is based on the Preboot Execution Environment (PXE) standard specification, and FlexBoot software is based on the open source iPXE project (see www.ipxe.org).