NVIDIA MCX555A-ECAT 100Gb/s Single-Port QSFP28 InfiniBand Adapter PCIe 3.0 x16 ConnectX-5 Network Card
Product details:
| Item | Value |
|---|---|
| Brand name | Mellanox |
| Model number | MCX555A-ECAT |
| Document | CONNECTX-5 infiniband.pdf |

Payment & delivery terms:
| Term | Details |
|---|---|
| Min. order quantity | 1 piece |
| Price | Negotiable |
| Packaging details | Outer carton |
| Delivery time | Based on inventory |
| Payment terms | T/T |
| Supply ability | Supplied by project/batch |

Detail information:
| Product status | In stock | Application | Server |
|---|---|---|---|
| Condition | New and original | Type | Wired |
| Maximum speed | EDR and 100GbE | Ethernet connector | QSFP28 |
| Highlights | NVIDIA ConnectX-5 InfiniBand adapter, 100Gb/s QSFP28 network card, PCIe 3.0 x16 Mellanox card | | |
Product Description
A high-performance, low-latency 100Gb/s network adapter designed for HPC, AI, and cloud data centers. Advanced offloads, including NVMe over Fabrics, GPUDirect RDMA, and tag matching for MPI workloads, deliver industry-leading throughput and CPU efficiency.
The NVIDIA ConnectX-5 MCX555A-ECAT is a single-port 100Gb/s InfiniBand adapter card in a low-profile PCIe form factor. Leveraging the proven ConnectX-5 architecture, it delivers up to 100Gb/s throughput with sub-microsecond latency and a high message rate. The card supports both InfiniBand (up to EDR) and 100GbE, providing versatile connectivity for high-performance computing, storage, and virtualized environments.
Built with an embedded PCIe switch and advanced RDMA capabilities, the MCX555A-ECAT offloads critical communication tasks from the CPU — enabling higher application performance, lower power consumption, and reduced total cost of ownership. It is fully compatible with PCIe 3.0 x16 slots and supports a wide range of operating systems and acceleration frameworks.
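As a sketch of how applications typically discover and query such an adapter, the following minimal C program uses the standard libibverbs API (shipped with rdma-core or MLNX_OFED). It is an illustrative example under those assumptions, not vendor sample code; on the single-port MCX555A-ECAT, port 1 is the only port to query.

```c
/* Minimal sketch: enumerate RDMA devices and print port state via
 * libibverbs. Assumes rdma-core or MLNX_OFED is installed; link with
 * -libverbs. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list) {
        perror("ibv_get_device_list");
        return 1;
    }
    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(dev_list[i]);
        if (!ctx)
            continue;
        struct ibv_port_attr port_attr;
        /* The MCX555A-ECAT exposes a single port, numbered 1. */
        if (ibv_query_port(ctx, 1, &port_attr) == 0)
            printf("%s: state=%d active_width=%d active_speed=%d\n",
                   ibv_get_device_name(dev_list[i]),
                   port_attr.state, port_attr.active_width,
                   port_attr.active_speed);
        ibv_close_device(ctx);
    }
    ibv_free_device_list(dev_list);
    return 0;
}
```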
Key Features:
- Up to 100Gb/s connectivity per port (InfiniBand EDR / 100GbE)
- Single QSFP28 connector for optical or copper cables
- PCIe 3.0 x16 host interface (auto-negotiates to x8, x4, x2, x1)
- RDMA, send/receive semantics with hardware-based reliable transport
- Tag matching and rendezvous offloads for MPI and SHMEM
- NVMe over Fabrics (NVMe-oF) target offloads for efficient storage
- GPUDirect RDMA (PeerDirect) acceleration for GPU communication
- Hardware-based congestion control & adaptive routing support
- SR-IOV virtualization: up to 512 virtual functions (a minimal enablement sketch follows this list)
- RoHS compliant, low-profile form factor (tall bracket pre-installed, short bracket included)
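To illustrate the SR-IOV item above, here is a minimal sketch that enables virtual functions through the standard Linux `sriov_numvfs` sysfs attribute. The PCI address is a placeholder; SR-IOV must also be enabled in the server BIOS and adapter firmware, and the usable VF count should be confirmed against your firmware configuration.

```c
/* Sketch: enable SR-IOV virtual functions via the standard Linux sysfs
 * interface. The PCI address below is a placeholder; find yours with
 * lspci. Requires root and SR-IOV enabled in BIOS/firmware. */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/bus/pci/devices/0000:3b:00.0/sriov_numvfs"; /* placeholder */
    FILE *f = fopen(path, "w");
    if (!f) {
        perror("fopen");
        return 1;
    }
    /* If VFs are already allocated, write 0 first to reset, then the
     * desired count. We request a modest 8 of the up-to-512 supported. */
    if (fprintf(f, "8\n") < 0)
        perror("fprintf");
    fclose(f);
    return 0;
}
```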
The ConnectX-5 architecture integrates a range of hardware acceleration engines that reduce CPU intervention and improve application scalability:
- MPI Tag Matching & Rendezvous Offload: Offloads message matching and rendezvous protocol processing, dramatically improving MPI performance for HPC clusters.
- Out-of-Order RDMA with Adaptive Routing: Enables efficient use of multiple network paths while maintaining ordered completion semantics, maximizing fabric utilization.
- NVMe-oF Target Offloads: Allows NVMe storage systems to serve remote access with near-zero CPU overhead, ideal for disaggregated storage architectures.
- Dynamically Connected Transport (DCT): Provides extreme scalability for large compute and storage systems by eliminating connection setup overhead.
- ASAP2 Accelerated Switching & Packet Processing: Hardware offload for Open vSwitch (OVS) and overlay network tunneling (VXLAN, NVGRE, GENEVE).
- On-Demand Paging (ODP): Supports virtual memory paging for RDMA operations, simplifying application development (see the registration sketch after this list).
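As a sketch of what ODP looks like at the API level, this C fragment registers a memory region with the `IBV_ACCESS_ON_DEMAND` flag from libibverbs, so the HCA faults pages in as needed instead of pinning them up front. Whether registration succeeds depends on the driver and firmware reporting ODP support, so the error path is expected on systems without it.

```c
/* Sketch: register a memory region with On-Demand Paging (ODP).
 * Link with -libverbs; assumes at least one RDMA device is present. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) { fprintf(stderr, "no RDMA device\n"); return 1; }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) { fprintf(stderr, "device open failed\n"); return 1; }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    if (!pd) { fprintf(stderr, "PD allocation failed\n"); return 1; }
    size_t len = 1 << 20;                   /* 1 MiB region */
    void *buf = malloc(len);
    /* IBV_ACCESS_ON_DEMAND requests ODP; without it, pages are pinned. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_ON_DEMAND);
    if (!mr)
        perror("ibv_reg_mr (ODP may be unsupported here)");
    else {
        printf("ODP MR registered: lkey=0x%x\n", mr->lkey);
        ibv_dereg_mr(mr);
    }
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```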
Typical Use Cases:
- High-Performance Computing (HPC): Ideal for supercomputing clusters, MPI-based simulations, and scientific research workloads requiring low latency and high message rates.
- AI & Deep Learning Training: Combined with GPUDirect RDMA, enables fast GPU-to-GPU communication across nodes, accelerating training times (a registration sketch follows this list).
- NVMe-oF Storage Systems: Deploy as storage targets or initiators in NVMe over Fabrics environments for high-throughput, low-latency block storage access.
- Cloud & Virtualized Data Centers: SR-IOV and virtualization offloads support multi-tenant environments with guaranteed QoS and secure isolation.
- High-Frequency Trading (HFT): Ultra-low latency and hardware timestamping (IEEE 1588v2) meet the demands of financial services applications.
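To make the GPUDirect RDMA point concrete, here is a hedged sketch of the mechanism: with NVIDIA's peer-memory kernel module (nvidia-peermem) loaded, a buffer allocated by cudaMalloc can be passed straight to ibv_reg_mr, letting the NIC DMA to and from GPU memory without a host bounce buffer. Treat this as an illustration under those assumptions, not vendor sample code.

```c
/* Sketch: register GPU device memory for RDMA (GPUDirect RDMA).
 * Requires the nvidia-peermem module; link with -libverbs -lcudart. */
#include <stdio.h>
#include <cuda_runtime_api.h>
#include <infiniband/verbs.h>

int main(void)
{
    void *gpu_buf = NULL;
    size_t len = 1 << 20;                  /* 1 MiB of GPU memory */
    if (cudaMalloc(&gpu_buf, len) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) { fprintf(stderr, "no RDMA device\n"); return 1; }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) { fprintf(stderr, "device open failed\n"); return 1; }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    /* With nvidia-peermem loaded, the verbs stack can register the GPU
     * pointer directly; otherwise this call fails. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);
    printf(mr ? "GPU memory registered for RDMA\n"
              : "registration failed (is nvidia-peermem loaded?)\n");
    if (mr) ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    cudaFree(gpu_buf);
    return 0;
}
```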
The MCX555A-ECAT is designed for broad compatibility with NVIDIA Quantum InfiniBand switches as well as NVIDIA Spectrum and third-party 100GbE Ethernet switches. It supports both passive copper DACs and active optical cables via its QSFP28 port.
Operating Systems & Software Stacks:
- RHEL / CentOS, Ubuntu, Windows Server, FreeBSD, VMware ESXi
- OpenFabrics Enterprise Distribution (OFED) / WinOF-2
- NVIDIA HPC-X, OpenMPI, MVAPICH2, Intel MPI, Platform MPI
- Data Plane Development Kit (DPDK) for kernel bypass (a minimal startup sketch follows this list)
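As a sketch of the DPDK kernel-bypass path, the following minimal program initializes the Environment Abstraction Layer and counts the Ethernet ports DPDK can drive; on ConnectX-5 the upstream mlx5 poll-mode driver is the relevant backend. Build specifics (pkg-config name, hugepage setup) vary by distribution and are assumptions here.

```c
/* Sketch: minimal DPDK startup. Build against an installed DPDK,
 * e.g. via pkg-config libdpdk. The mlx5 PMD drives ConnectX-5. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
    /* The EAL parses its own flags (core mask, hugepages, PCI
     * allow-list, ...) from argv before the application runs. */
    if (rte_eal_init(argc, argv) < 0) {
        fprintf(stderr, "rte_eal_init failed\n");
        return 1;
    }
    printf("DPDK sees %u usable Ethernet port(s)\n",
           (unsigned)rte_eth_dev_count_avail());
    rte_eal_cleanup();
    return 0;
}
```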
Technical Specifications:
| Parameter | Specification |
|---|---|
| Model | MCX555A-ECAT |
| Form Factor | PCIe Low-Profile (14.2cm x 6.9cm without bracket), Tall bracket pre-installed, short bracket included |
| Port Speed & Type | 1x QSFP28, up to 100Gb/s InfiniBand (EDR) and 100GbE |
| Host Interface | PCI Express 3.0 x16 (compatible with x8, x4, x2, x1) |
| InfiniBand Support | IBTA 1.3 compliant, 100Gb/s EDR, FDR, QDR, DDR, SDR; 8 virtual lanes + VL15; 16 million I/O channels |
| Ethernet Support | 100GbE, 50GbE, 40GbE, 25GbE, 10GbE, 1GbE; IEEE 802.3cd, 802.3bj, 802.3by, 802.3ba, 802.3ae |
| RDMA Capabilities | RDMA over Converged Ethernet (RoCE), hardware reliable transport, out-of-order RDMA, atomic operations |
| Storage Offloads | NVMe over Fabrics target offload, iSER, SRP, NFS RDMA, SMB Direct, T10 DIF signature handover |
| Virtualization | SR-IOV (up to 512 virtual functions), VMware NetQueue, NPAR, PCIe Access Control Services (ACS) |
| CPU Offloads | TCP/UDP/IP stateless offload, LSO/LRO, checksum offload, RSS/TSS, VLAN/MPLS tag insertion/stripping |
| Overlay Networks | Hardware offload for VXLAN, NVGRE, GENEVE encapsulation/decapsulation |
| Management | NC-SI over MCTP, PLDM for monitor/control and firmware update, I2C, SPI, JTAG |
| Remote Boot | Remote boot over InfiniBand, Ethernet, iSCSI; UEFI, PXE support |
| Power Consumption | Not publicly specified; typical sub-20W range – please confirm for your system |
| Operating Temperature | 0°C to 55°C (typical environment) |
| Compliance | RoHS, REACH, FCC, CE, VCCI, ICES, RCM |
Note: Specifications derived from NVIDIA ConnectX-5 product documentation. For the latest details and firmware support, refer to official NVIDIA release notes.
Ordering Information:
| Ordering Part Number | Ports / Speed | Host Interface | Form Factor | Key Features |
|---|---|---|---|---|
| MCX555A-ECAT | 1x QSFP28, 100Gb/s | PCIe 3.0 x16 | Low-profile PCIe | Standard single-port, EDR InfiniBand / 100GbE |
| MCX556A-ECAT | 2x QSFP28, 100Gb/s | PCIe 3.0 x16 | Low-profile PCIe | Dual-port, EDR/100GbE |
| MCX556A-EDAT | 2x QSFP28, 100Gb/s | PCIe 4.0 x16 | Low-profile PCIe | ConnectX-5 Ex, enhanced PCIe Gen4 |
| MCX556M-ECAT-S25 | 2x QSFP28, 100Gb/s | 2x PCIe 3.0 x8 | Socket Direct | Dual-socket server connection via harness |
| MCX545B-ECAN | 1x QSFP28, 100Gb/s | PCIe 3.0 x16 | OCP 2.0 Type 1 | Open Compute Project form factor |
For OCP or Multi-Host variants, please contact sales. All cards are backward compatible with lower link speeds.
Key Benefits:
- Superior Application Performance: Hardware offloads for MPI, NVMe-oF, and overlays free CPU cores for business logic.
- Scalable RDMA Fabric: DCT, XRC, and out-of-order RDMA deliver linear scalability for thousands of nodes.
- GPU Acceleration Ready: GPUDirect RDMA enables direct memory access between GPUs and network adapters, eliminating CPU bottlenecks in AI clusters.
- Flexible Deployment: Single QSFP28 port simplifies cabling and is ideal for 100Gb/s leaf-spine architectures.
- Investment Protection: Support for both InfiniBand and Ethernet allows seamless transition between protocols as needs evolve.
Hong Kong Starsurge Group provides complete lifecycle support for NVIDIA ConnectX-5 adapters, including pre-sales configuration assistance, firmware update guidance, and warranty service. Our technical team can help with:
- Compatibility verification with your server and switch infrastructure
- Performance tuning for HPC or storage workloads
- Custom bracket options and bulk packaging requirements
- RMA processing and advanced replacement services
Contact our sales engineers for volume pricing and lead time information.
Handling & Installation Notes:
- Electrostatic Discharge (ESD): Always use ESD-safe practices when handling the adapter. Store it in anti-static packaging until installation.
- Cooling Requirements: Ensure adequate airflow in the server chassis to keep the card within its specified operating temperature range.
- Firmware Updates: Use NVIDIA's official firmware tools (MFT) and verify compatibility with your OS and driver version before updating.
- Cable Bending: Follow QSFP28 cable bend-radius guidelines to avoid signal degradation.
This is a Class A product. In a residential environment it may cause radio interference. Ensure proper shielding and grounding per local regulations.
Founded in 2008, Hong Kong Starsurge Group Co., Limited is a technology-driven provider of network hardware, IT services, and system integration solutions. Serving customers worldwide with products including network switches, NICs, wireless access points, controllers, and high-speed cabling, Starsurge combines deep technical expertise with a customer-first approach. The company supports industries such as government, healthcare, manufacturing, education, finance, and enterprise, offering IoT solutions, network management systems, custom software development, and multilingual global delivery. With a focus on reliable quality and responsive service, Starsurge helps clients build efficient, scalable, and dependable network infrastructure.
Compatibility Matrix:
| Component / System | Compatibility Status | Notes |
|---|---|---|
| NVIDIA Quantum InfiniBand Switches | Certified | Runs at EDR; interoperates with HDR fabrics at EDR speed with appropriate firmware |
| NVIDIA Spectrum Ethernet Switches | Certified | 100GbE, 50GbE, 25GbE modes supported |
| Third-party 100GbE switches | Compatible | Requires IEEE standards compliance; tested with major vendors |
| GPU servers (NVIDIA DGX, HGX) | Certified with GPUDirect | RDMA acceleration for multi-GPU communication |
| Storage arrays with NVMe-oF | Supported | Target offload enables efficient NVMe fabric access |
Deployment Checklist:
- ☑ Confirm server has an available PCIe 3.0 x16 (or x8) slot with adequate clearance (see the detection sketch after this checklist).
- ☑ Determine port count: single-port (MCX555A-ECAT) vs dual-port (MCX556A-ECAT).
- ☑ Choose cable type: passive copper DAC for short distances (≤5m) or optical for longer runs.
- ☑ Verify operating system and driver support (OFED, Windows, VMware).
- ☑ For GPU clusters, ensure GPUDirect RDMA compatibility with your GPU model and driver version.
- ☑ Check if tall or short bracket is required for your server chassis.
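As a quick way to work through the first checklist items, this sketch scans Linux sysfs for devices carrying Mellanox's PCI vendor ID (0x15b3). The ConnectX-5 device ID of 0x1017 (0x1019 for ConnectX-5 Ex) is quoted from memory; verify both against your own lspci output.

```c
/* Sketch: confirm the adapter is visible on the PCI bus by scanning
 * sysfs for Mellanox's vendor ID 0x15b3. Device IDs printed here
 * should be cross-checked against lspci (0x1017 is believed to be
 * ConnectX-5, 0x1019 ConnectX-5 Ex). */
#include <stdio.h>
#include <dirent.h>

static int read_hex(const char *path, unsigned *out)
{
    FILE *f = fopen(path, "r");
    if (!f) return -1;
    int rc = fscanf(f, "%x", out);  /* sysfs values look like "0x15b3" */
    fclose(f);
    return rc == 1 ? 0 : -1;
}

int main(void)
{
    DIR *d = opendir("/sys/bus/pci/devices");
    if (!d) { perror("opendir"); return 1; }
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.') continue;
        char path[512];
        unsigned vendor = 0, device = 0;
        snprintf(path, sizeof path,
                 "/sys/bus/pci/devices/%s/vendor", e->d_name);
        if (read_hex(path, &vendor) || vendor != 0x15b3) continue;
        snprintf(path, sizeof path,
                 "/sys/bus/pci/devices/%s/device", e->d_name);
        read_hex(path, &device);
        printf("Mellanox device at %s (device id 0x%04x)\n",
               e->d_name, device);
    }
    closedir(d);
    return 0;
}
```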
Related Products:
- NVIDIA MCX556A-ECAT – Dual-port 100Gb/s ConnectX-5 adapter
- NVIDIA MCX556A-EDAT – ConnectX-5 Ex with PCIe 4.0 support
- NVIDIA Quantum-2 QM9700 64-port 400Gb/s NDR InfiniBand switch
- Mellanox QSFP28 passive DAC cables (1m, 2m, 3m)
- NVIDIA Spectrum-4 SN5600 Ethernet switch (800GbE ports, breakout to lower speeds)
Related Documentation:
- NVIDIA ConnectX-5 InfiniBand Adapter Card User Manual
- RDMA over Converged Ethernet (RoCE) Deployment Guide
- GPUDirect RDMA Best Practices for AI Clusters
- NVMe over Fabrics with ConnectX-5 – Configuration Guide
- OFED Installation and Tuning Guide