NVIDIA ConnectX-6 InfiniBand Adapter MCX653106A-ECAT 200Gb/s Smart NIC
Product details:
| Brand name: | Mellanox |
|---|---|
| Model number: | MCX653106A-EEAT |
| Document: | connectx-6-infiniband.pdf |
Payment & delivery terms:
| Minimum order quantity: | 1 piece |
|---|---|
| Price: | Negotiable |
| Packaging details: | Outer carton |
| Delivery time: | Based on inventory |
| Payment terms: | T/T |
| Supply ability: | Supply by project/batch |
Detailed information:
| Product status: | In stock | Application: | Server |
|---|---|---|---|
| Interface type: | InfiniBand | Ports: | Dual |
| Maximum speed: | 100GbE | Type: | Wired |
| Condition: | New and original | Warranty period: | 1 year |
| Model: | MCX653106A-EEAT | Name: | Mellanox network card, original MCX653106A-ECAT ConnectX-6 100Gb/s dual-port QSFP56 Ethernet |
| Keyword: | Mellanox network card | | |
| Highlights: | NVIDIA ConnectX-6 InfiniBand adapter, 200Gb/s Smart NIC, Mellanox network card with warranty | | |
Product Description
200Gb/s Dual-Port Smart Adapter with In-Network Computing
The NVIDIA ConnectX-6 MCX653106A-ECAT delivers up to 200Gb/s bandwidth, sub-microsecond latency, and hardware offloads for HPC, AI, and hyperconverged storage. Featuring RDMA, NVMe-oF acceleration, block-level XTS-AES encryption, and PCIe 4.0, this dual-port QSFP56 InfiniBand adapter maximizes data center efficiency and scalability. Ideal for GPU clusters, ML training, and mission-critical networks.
The MCX653106A-ECAT is part of the NVIDIA ConnectX-6 InfiniBand adapter family, engineered for the most demanding workloads. It combines two QSFP56 ports capable of 200Gb/s InfiniBand or 200Gb/s Ethernet connectivity, offering hardware-based reliable transport, congestion control, and In-Network Computing engines. By offloading collective operations, MPI tag matching, and encryption from the host CPU, the adapter reduces CPU overhead and increases application performance in large-scale clusters. Enterprises, research labs, and hyperscale data centers rely on ConnectX-6 to build energy-efficient, low-latency fabrics.
- Up to 200Gb/s per port (HDR InfiniBand / 200GbE)
- Up to 215 million messages/sec
- Block-level XTS-AES 256/512-bit encryption, FIPS compliant
- Collective offloads, NVMe-oF target/initiator offloads
- PCIe Gen 4.0/3.0 x16 (dual-port support)
- SR-IOV with up to 1K VFs, ASAP2, Open vSwitch offload
- RoCE, XRC, DCT, On-Demand Paging, Adaptive Routing support
- Stand-up PCIe (low-profile), dual-port QSFP56
Built on NVIDIA's proven InfiniBand architecture, ConnectX-6 integrates In-Network Computing to accelerate MPI operations, deep learning frameworks, and storage protocols. The adapter supports Remote Direct Memory Access (RDMA) for zero-copy data transfers, bypassing the CPU and kernel. Hardware-based congestion control ensures predictable performance even under heavy load. Additionally, NVIDIA GPUDirect RDMA allows direct data exchange between GPU memory and the network adapter, slashing latency for AI training. With support for NVMe over Fabrics (NVMe-oF) offloads, the card reduces CPU utilization in storage arrays while enabling high-throughput, low-latency access to NVMe flash.
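The RDMA and GPUDirect features above depend on the host driver stack exposing the card as an RDMA device. The following is a minimal sketch, assuming a Linux host with the inbox or MLNX_OFED mlx5 driver; it lists the RDMA devices under /sys/class/infiniband with their negotiated port rates and checks whether a GPUDirect RDMA peer-memory module (nvidia_peermem, formerly nv_peer_mem) is loaded. Module names and sysfs value formats can vary by driver release.

```python
#!/usr/bin/env python3
"""Sketch: enumerate RDMA devices and check GPUDirect RDMA prerequisites.

Assumes a Linux host with an mlx5/OFED driver stack; paths and module names
may differ between releases.
"""
from pathlib import Path

IB_SYSFS = Path("/sys/class/infiniband")


def list_rdma_devices() -> None:
    """Print each RDMA device port with its link layer, rate, and state."""
    if not IB_SYSFS.is_dir():
        print("No RDMA devices found (is the mlx5/OFED driver loaded?)")
        return
    for dev in sorted(IB_SYSFS.iterdir()):
        for port in sorted((dev / "ports").iterdir()):
            rate = (port / "rate").read_text().strip()        # e.g. "100 Gb/sec (4X EDR)"
            state = (port / "state").read_text().strip()      # e.g. "4: ACTIVE"
            link = (port / "link_layer").read_text().strip()  # "InfiniBand" or "Ethernet"
            print(f"{dev.name} port {port.name}: {link}, {rate}, state {state}")


def gpudirect_module_loaded() -> bool:
    """GPUDirect RDMA needs a peer-memory kernel module; the name varies by driver."""
    modules = Path("/proc/modules").read_text()
    return any(name in modules for name in ("nvidia_peermem", "nv_peer_mem"))


if __name__ == "__main__":
    list_rdma_devices()
    print("GPUDirect RDMA peer-memory module loaded:", gpudirect_module_loaded())
```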
- High Performance Computing (HPC): Large-scale simulations, weather modeling, and computational fluid dynamics requiring low latency and high bandwidth.
- AI & Machine Learning Clusters: Distributed training of deep neural networks, leveraging GPUDirect and RDMA for maximum efficiency.
- NVMe-oF Storage Systems: Target or initiator offloads enable high-performance disaggregated storage with low CPU overhead.
- Hyperscale Data Centers: Virtualized environments with SR-IOV, overlay networks, and service chaining.
- Financial Services: Ultra-low latency trading infrastructure requiring deterministic performance.
The ConnectX-6 MCX653106A-ECAT is compatible with a wide range of servers, switches, and operating systems. It interoperates with NVIDIA Quantum InfiniBand switches (HDR 200Gb/s), as well as 200GbE Ethernet switches. The adapter supports standard PCIe slots (x16, x8, x4) and includes driver support for major OS platforms.
| Parameter | Specification |
|---|---|
| Product Model | MCX653106A-ECAT |
| Data Rate | 200Gb/s, 100Gb/s, 50Gb/s, 40Gb/s, 25Gb/s, 10Gb/s, 1Gb/s (InfiniBand & Ethernet) |
| Ports | 2x QSFP56 connectors |
| Host Interface | PCIe Gen 4.0 / 3.0 x16 (supports x8, x4, x2, x1 configurations) |
| Latency | Sub-microsecond (typically <0.8 µs) |
| Message Rate | Up to 215 Million messages/sec |
| Encryption | XTS-AES 256/512-bit, FIPS 140-2 compliance ready |
| Form Factor | PCIe low-profile stand-up (tall bracket mounted, short bracket included) |
| Dimensions (without bracket) | 167.65mm x 68.90mm |
| Power Consumption | Typical 22W (depending on traffic) |
| Virtualization | SR-IOV (1K VFs), VMware NetQueue, NPAR, ASAP2 flow offload |
| Management | NC-SI, MCTP over PCIe/SMBus, PLDM for firmware update & monitoring |
| Remote Boot | InfiniBand, iSCSI, PXE, UEFI |
| Operating Systems | RHEL, SLES, Ubuntu, Windows Server, FreeBSD, VMware vSphere, OFED stack |
| Ordering Part Number (OPN) | Ports | Max Speed | Host Interface | Key Differentiator |
|---|---|---|---|---|
| MCX653106A-ECAT | 2x QSFP56 | 100Gb/s (lower speeds supported) | PCIe 3.0/4.0 x16 | Dual-port 100Gb/s IB/Ethernet, advanced offloads, AES-XTS block encryption (confirm details against the NVIDIA datasheet); ideal for virtualization & storage |
| MCX653105A-HDAT | 1x QSFP56 | 200Gb/s | PCIe 3.0/4.0 x16 | Single-port 200Gb/s, crypto support |
| MCX653106A-HDAT | 2x QSFP56 | 200Gb/s | PCIe 3.0/4.0 x16 | Dual-port 200Gb/s full bandwidth, crypto offload |
| MCX653105A-ECAT | 1x QSFP56 | 100Gb/s | PCIe x16 | Single-port 100Gb/s, lower-cost entry |
| MCX653436A-HDAT (OCP 3.0) | 2x QSFP56 | 200Gb/s | PCIe 3.0/4.0 x16 | OCP 3.0 small form factor, dual-port |
- Maximized Application Performance: Hardware offloads for MPI, NVMe-oF, and encryption free up CPU cores for actual workloads.
- Future-Ready Bandwidth: PCIe 4.0 and 200Gb/s readiness ensures longevity in high-speed fabrics.
- In-Network Memory & Computing: Supports collective offloads and burst buffer, reducing data movement overhead.
- Trusted Security: Block-level AES-XTS encryption with FIPS compliance protects data at rest and in transit without a performance hit.
- Simplified Management: Broad OS and hypervisor support, with unified driver stack (OFED, WinOF-2).
Hong Kong Starsurge Group provides full technical support, warranty coverage, and RMA services for all NVIDIA ConnectX adapters. Our team of networking engineers assists with configuration, firmware updates, and performance tuning. We offer global shipping, bulk pricing for data center projects, and customized stock reservations. For volume orders, contact our sales team to receive tailored quotations and lead time details.
• Confirm the PCIe slot provides adequate power (up to 75W via slot; this adapter typically uses <25W).
• For liquid-cooled platforms, check compatibility with Intel Server System D50TNP if cold plate variant is needed (this OPN is standard air-cooled).
• Verify OS driver compatibility with the latest OFED or WinOF-2 stacks (a quick verification sketch follows this list).
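As a quick check for the first and last items above, the sketch below (assuming a Linux host) reads sysfs to confirm that any Mellanox/NVIDIA PCIe device (vendor ID 0x15b3) negotiated the expected Gen3/Gen4 x16 link and is bound to a driver such as mlx5_core. Exact sysfs value strings vary slightly between kernel versions.

```python
#!/usr/bin/env python3
"""Sketch: verify PCIe link negotiation and driver binding for Mellanox NICs.

Assumes Linux sysfs; 0x15b3 is the Mellanox/NVIDIA networking PCI vendor ID.
"""
from pathlib import Path

MELLANOX_VENDOR_ID = "0x15b3"

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = (dev / "vendor").read_text().strip()
    if vendor != MELLANOX_VENDOR_ID:
        continue
    if not (dev / "current_link_speed").exists():
        continue  # e.g. virtual functions without their own link attributes
    speed = (dev / "current_link_speed").read_text().strip()  # e.g. "16.0 GT/s PCIe" for Gen4
    width = (dev / "current_link_width").read_text().strip()  # e.g. "16"
    driver = (dev / "driver").resolve().name if (dev / "driver").exists() else "none"
    print(f"{dev.name}: link {speed} x{width}, driver {driver}")
    if width != "16":
        print("  warning: expected a x16 link; check slot wiring/bifurcation")
```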
Since 2008, Hong Kong Starsurge Group Co., Limited delivers enterprise-grade networking hardware, system integration, and IT services worldwide. As a trusted partner for NVIDIA networking products, Starsurge offers certified solutions for government, finance, healthcare, education, and hyperscale data centers. Our technical team ensures smooth deployment, from pre-sales architecture design to post-sales support. With a customer-first philosophy, we provide tailored, scalable infrastructure components including NICs, switches, cables, and end-to-end network solutions.
Global delivery · Multilingual support · OEM services available
| Component / Ecosystem | Supported | Notes |
|---|---|---|
| NVIDIA Quantum HDR Switches | ✓ Yes | 200Gb/s full fabric integration |
| Ethernet 200G/100G Switches | ✓ Yes | Requires compatible transceiver/FEC modes |
| GPUDirect RDMA | ✓ Yes | Supported with NVIDIA GPUs |
| VMware vSphere / ESXi | ✓ Certified | Native drivers, SR-IOV support |
| Windows Server 2019/2022 | ✓ Yes | WinOF-2 driver package |
| Linux Kernel & OFED | ✓ Full support | MLNX_OFED, inbox drivers |
- Confirm the required link speed: verify that dual-port 100Gb/s meets your cluster bandwidth plan; for 200Gb/s per port, consider the -HDAT OPN.
- Verify server PCIe slot: x16 physical, Gen 3 or Gen 4 recommended.
- Check the cable type: QSFP56 passive copper (up to 5m) or active optical cables for longer reach (see the sketch after this list).
- Ensure operating system drivers are available (OFED, WinOF).
- For encryption requirements: confirm if built-in block encryption is needed – MCX653106A-ECAT supports AES-XTS, but always confirm FIPS level with NVIDIA datasheet.
- Evaluate virtualization needs: SR-IOV, VXLAN offload, etc.
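For the cable-type item, a small hypothetical helper like the one below can map a required reach to a QSFP56 cable class using the rule of thumb from this checklist (passive DAC up to about 5 m, AOC or optics beyond). The function name and thresholds are illustrative, not taken from NVIDIA documentation; always confirm exact reach and FEC requirements against the cable vendor's datasheet.

```python
#!/usr/bin/env python3
"""Hypothetical helper: suggest a QSFP56 cable class from the required reach."""


def suggest_qsfp56_cable(reach_m: float) -> str:
    """Return a cable class suggestion based on the reach in meters."""
    if reach_m <= 5:
        return "Passive copper DAC (QSFP56)"
    if reach_m <= 100:
        return "Active optical cable (QSFP56 AOC)"
    return "Optical transceivers + structured fiber (check reach/FEC support)"


if __name__ == "__main__":
    for reach in (2, 15, 300):
        print(f"{reach} m -> {suggest_qsfp56_cable(reach)}")
```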







