MCX512A-ACAT Mellanox ConnectX-5 Dual-Port 10/25GbE SFP28 EN Adapter Card PCIe 3.0 x8

Product details:

Brand name: Mellanox
Model number: MCX512A-ACAT
Documentation: connectx-5-en-card.pdf

Payment & delivery terms:

Minimum order quantity: 1 piece
Price: Negotiable
Packaging details: Outer carton
Delivery time: Based on stock availability
Payment terms: T/T
Supply ability: Supply by project/batch
Contact us for the best price

Detail Information

Application: Server
Interface Type: (not specified)
Ports: Dual
Max Speed: 25GbE
Connector Type: SFP28
Type: Wired
Condition: New and original
Model: MCX512A-ACAT
Name: Mellanox Network Card MCX512A-ACAT ConnectX-5 EN Adapter Card 10/25GbE Dual-Port SFP28 PCIe
Keyword: Mellanox network card

Product Description

NVIDIA ConnectX-5 EN MCX512A-ACAT

Dual-port SFP28 25GbE Ethernet adapter card — delivering up to 25Gb/s per port, ultra-low latency, and advanced application offloads. Ideal for cloud, Web 2.0, storage, AI, and virtualization platforms requiring high bandwidth with exceptional CPU efficiency.

2x 25GbE ports · PCIe 3.0 x8 · RoCE support · SR-IOV up to 512 VFs · VXLAN/NVGRE/GENEVE offloads · NVMe-oF offloads · ASAP2 vSwitch offload · UEFI enabled
Product Overview

The NVIDIA ConnectX-5 EN MCX512A-ACAT is a dual-port 25GbE Ethernet adapter card designed for data centers that demand high throughput, low latency, and efficient server utilization. Built on the ConnectX-5 architecture, this adapter supports 25GbE, 10GbE, and 1GbE speeds, providing seamless migration from 10GbE to 25GbE infrastructure. With ultra-low latency, high message rate, and PCIe 3.0 x8 host interface, the MCX512A-ACAT delivers industry-leading performance for virtualized and bare-metal environments. Key capabilities include RoCE (RDMA over Converged Ethernet), SR-IOV virtualization with up to 512 Virtual Functions, ASAP2 accelerated switching and packet processing for vSwitch/vRouter offloads, NVMe over Fabric offloads, T10-DIF Signature Handover, comprehensive overlay network offloads (VXLAN, NVGRE, GENEVE), and UEFI support. This adapter is available in a low-profile PCIe form factor and is ideal for top-of-rack server connectivity.
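After installation, a quick way to confirm that the host sees the card is to look for PCI devices with the Mellanox vendor ID (0x15b3). The following is a minimal sketch for a Linux host using standard sysfs attributes; it is an illustration, not NVIDIA tooling, and the reported device ID will vary by model.

```python
# Minimal sketch: list Mellanox (vendor ID 0x15b3) PCI devices via Linux sysfs.
# Assumes a Linux host; the paths under /sys/bus/pci/devices are standard sysfs attributes.
from pathlib import Path

MELLANOX_VENDOR_ID = "0x15b3"  # NVIDIA/Mellanox PCI vendor ID

def find_mellanox_devices():
    devices = []
    for dev in Path("/sys/bus/pci/devices").iterdir():
        vendor = (dev / "vendor").read_text().strip()
        if vendor == MELLANOX_VENDOR_ID:
            device_id = (dev / "device").read_text().strip()
            devices.append((dev.name, device_id))
    return devices

if __name__ == "__main__":
    for addr, dev_id in find_mellanox_devices():
        print(f"{addr}: Mellanox device, PCI device ID {dev_id}")
```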

Key Features
Dual-Port 25GbE
Two SFP28 ports supporting 25GbE, 10GbE, and 1GbE speeds. Backward compatible with existing 10GbE infrastructure.
Ultra-Low Latency and High Message Rate
Sub-microsecond latency and high message rate for latency-sensitive applications like HFT and NVMe-oF.
RDMA over Converged Ethernet (RoCE)
Low-latency RDMA services over Layer 2 and Layer 3 networks for storage and compute workloads.
ASAP2 Accelerated Switching
Hardware offload of Open vSwitch (OvS) and vRouter data plane, achieving wire-speed performance while reducing CPU load.
NVMe over Fabric Offloads
Hardware-accelerated NVMe-oF target offloads enabling efficient NVMe storage access with near-zero CPU intervention.
SR-IOV Virtualization
Up to 512 Virtual Functions (VFs) and 8 Physical Functions per port, with guaranteed QoS and VM isolation.
Overlay Network Offloads
Hardware encapsulation and de-encapsulation for VXLAN, NVGRE, GENEVE, MPLS, and NSH tunnels.
Flexible Programmable Pipeline
Flexible parser and match-action tables enabling hardware offloads for current and future protocols.
Host Management and Remote Boot
NC-SI over MCTP, BMC interface, PLDM for monitoring and firmware update, PXE and UEFI remote boot.
Technology: ConnectX-5 Architecture

The ConnectX-5 EN ASIC delivers record-setting performance with advanced acceleration engines. Key technological innovations include:

  • PeerDirect (GPUDirect) – Eliminates unnecessary PCIe data copies between GPU and CPU, accelerating HPC, AI, and machine learning workloads.
  • Adaptive Routing on Reliable Transport – Enables out-of-order RDMA and adaptive routing for optimized fabric utilization.
  • Tag Matching and Rendezvous Offloads – Hardware offload of MPI tag matching and rendezvous protocol, reducing CPU overhead in HPC clusters.
  • Burst Buffer Offloads – Hardware acceleration for background checkpointing in large-scale simulations and ML training.
  • Embedded PCIe Switch – Supports up to 8 bifurcations, enabling host chaining and elimination of backend switches in storage racks.
  • On-Demand Paging (ODP) – Registration-free RDMA memory access, simplifying application development.
  • Extended Reliable Connected (XRC) and Dynamically Connected Transport (DCT) – Scales RDMA to tens of thousands of nodes.
  • T10-DIF Signature Handover – Hardware-based data integrity protection for storage workloads at wire speed.
Typical Deployments
Cloud and Web 2.0 Data Centers
High-density virtualization, overlay networks, and vSwitch offloads reduce CPU utilization while maintaining wire-speed 25GbE performance.
High-Performance Storage
NVMe-oF target offloads, T10-DIF, and RoCE enable high-performance block storage with sub-microsecond latency.
AI and Machine Learning Clusters
PeerDirect GPUDirect and adaptive routing accelerate distributed training workloads at 25GbE.
Telecommunications and NFV
ASAP2 vSwitch offloads and service chaining enable efficient Network Function Virtualization.
25GbE Data Center Migration
Seamlessly upgrade from 10GbE to 25GbE while maintaining backward compatibility with existing switches and cables.
Virtualized Server Environments
SR-IOV with up to 512 VFs enables dense VM deployments with guaranteed performance isolation.
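On Linux, the number of Virtual Functions exposed by the adapter is typically requested through the standard sriov_numvfs sysfs attribute, after SR-IOV has been enabled in the adapter firmware and server BIOS. A minimal sketch follows; the interface name enp3s0f0 and the VF count are placeholders.

```python
# Minimal sketch: request SR-IOV Virtual Functions via the standard Linux sysfs interface.
# Assumes SR-IOV is already enabled in the adapter firmware and server BIOS;
# "enp3s0f0" is a placeholder for your actual port name.
from pathlib import Path

IFACE = "enp3s0f0"   # hypothetical interface name
NUM_VFS = 8          # number of Virtual Functions to create (the card supports up to 512)

sriov_path = Path(f"/sys/class/net/{IFACE}/device/sriov_numvfs")
total_path = Path(f"/sys/class/net/{IFACE}/device/sriov_totalvfs")

max_vfs = int(total_path.read_text())
if NUM_VFS > max_vfs:
    raise SystemExit(f"Requested {NUM_VFS} VFs, but firmware currently exposes at most {max_vfs}")

# Writing 0 first clears any previously allocated VFs.
sriov_path.write_text("0")
sriov_path.write_text(str(NUM_VFS))
print(f"Created {NUM_VFS} VFs on {IFACE}")
```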
Compatibility and Ecosystem

The MCX512A-ACAT is compatible with a wide range of operating systems: RHEL/CentOS, Ubuntu, Windows Server, FreeBSD, VMware ESXi, and Citrix XenServer. It supports standard 25GbE SFP28 optics, passive DAC cables, active optical cables (AOC), and breakout configurations. The adapter integrates seamlessly with NVIDIA Spectrum switches and any standards-based 10GbE/25GbE infrastructure. Software support includes OFED (OpenFabrics Enterprise Distribution), DPDK, and WinOF-2 for Windows. UEFI support enables modern server boot environments.
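To confirm which driver and firmware revision a host is actually running (inbox mlx5_core or an OFED build), `ethtool -i` reports both. A minimal sketch, assuming ethtool is installed and using a placeholder interface name:

```python
# Minimal sketch: query driver and firmware version for a port with `ethtool -i`.
# Assumes ethtool is installed; "enp3s0f0" is a placeholder interface name.
import subprocess

IFACE = "enp3s0f0"

result = subprocess.run(["ethtool", "-i", IFACE], capture_output=True, text=True, check=True)
info = dict(
    line.split(":", 1)
    for line in result.stdout.splitlines()
    if ":" in line
)
print("driver:          ", info.get("driver", "").strip())
print("firmware-version:", info.get("firmware-version", "").strip())
```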

Technical Specifications
Model: MCX512A-ACAT
Form Factor: Low-profile PCIe add-in card. Ships with tall bracket mounted, short bracket included.
Ports: 2x SFP28 (25GbE / 10GbE / 1GbE)
Supported Speeds: 25GbE, 10GbE, 1GbE
Host Interface: PCIe 3.0 x8 (compatible with x16, x4, x2, x1; auto-negotiated)
Message Rate: Up to 200 million messages per second
Latency: Sub-microsecond (typical)
Virtualization: SR-IOV with up to 512 Virtual Functions, 8 Physical Functions per port
RoCE Support: Yes, RDMA over Converged Ethernet (RoCE)
Overlay Offloads: VXLAN, NVGRE, GENEVE, MPLS, NSH hardware encapsulation and de-encapsulation
vSwitch/vRouter Offloads: ASAP2, Open vSwitch (OvS) and vRouter data plane offload with flexible match-action tables
Storage Offloads: NVMe-oF target offloads, T10-DIF Signature Handover, SRP, iSER, NFS RDMA, SMB Direct
Enhanced Features: Tag matching, rendezvous offload, adaptive routing, burst buffer offload, embedded PCIe switch, ODP, XRC, DCT
CPU Offloads: TCP/UDP stateless offloads, LSO/LRO, checksum offload, RSS/TSS, HDS, VLAN/MPLS tag insertion/stripping
Management Interfaces: NC-SI over MCTP (SMBus/PCIe), BMC interface, PLDM (monitoring and firmware update), SDN eSwitch management, SPI, JTAG
Remote Boot: PXE, UEFI, iSCSI remote boot
UEFI Support: Yes, UEFI enabled (x86 and Arm platforms)
Power Consumption: Not publicly specified; please confirm before ordering
Operating Temperature: 0°C to 55°C (typical)
Standards: IEEE 802.3by (25GbE), 802.3ae (10GbE), 802.3az EEE, 802.1Qbb PFC, 802.1Qaz ETS, 802.1Qau QCN, IEEE 1588v2, PCIe Gen 3.0
RoHS: Compliant
Note: Specifications are based on NVIDIA public documentation. Please confirm exact details with sales for your order.
Selection Guide: ConnectX-5 EN Portfolio
OPN (Ordering Part Number) | Ports | Max Speed | Interface | Host Interface | Key Feature
MCX512A-ACAT | 2 | 25GbE | SFP28 | PCIe 3.0 x8 | Dual-port 25GbE, UEFI enabled, RoCE, ASAP2
MCX512A-ADAT | 2 | 25GbE | SFP28 | PCIe 3.0 x8 | ConnectX-5 Ex enhanced version
MCX512F-ACAT | 2 | 25GbE | SFP28 | PCIe 3.0 x16 | Enhanced host management
MCX516A-CCAT | 2 | 100GbE | QSFP28 | PCIe 3.0 x16 | Dual-port 100GbE for spine connectivity
MCX516A-CDAT | 2 | 100GbE | QSFP28 | PCIe 4.0 x16 | ConnectX-5 Ex 100GbE with PCIe 4.0
Why Choose MCX512A-ACAT from Starsurge
25GbE Ready
Future-proof your data center with 2.5x the bandwidth of 10GbE while maintaining backward compatibility.
Exceptional Message Rate
200 Mpps enables the highest packet processing density for vSwitch, NFV, and latency-sensitive applications.
Comprehensive Offloads
NVMe-oF, T10-DIF, ASAP2, and RoCE offloads dramatically reduce CPU utilization and improve application performance.
Global Logistics and Support
Hong Kong Starsurge offers competitive pricing, warranty support, and fast worldwide delivery.
Service and Support

Hong Kong Starsurge provides end-to-end support for NVIDIA/Mellanox adapters, including compatibility verification, firmware updates, and technical troubleshooting. Standard warranty aligns with NVIDIA's limited hardware warranty (1 year return-and-repair). Extended support options are available upon request. Our team can assist with driver installation, performance tuning, RoCE configuration, and integration into existing server, storage, and network environments.

Frequently Asked Questions
Q: What is the difference between MCX512A-ACAT and MCX512A-ADAT?
MCX512A-ACAT is the standard ConnectX-5 EN adapter. MCX512A-ADAT is the ConnectX-5 Ex enhanced performance version with additional optimizations. Both offer dual-port 25GbE.
Q: Does this card support RDMA over Converged Ethernet?
Yes. The ConnectX-5 EN fully supports RoCE (RDMA over Converged Ethernet) for low-latency memory access across the network, including RoCE over overlay networks.
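On a Linux host with the mlx5 driver loaded, RoCE capability can be confirmed by checking that the RDMA device's port reports an Ethernet link layer. A minimal sketch using standard sysfs paths, for illustration only:

```python
# Minimal sketch: list RDMA devices and their link layer via Linux sysfs.
# For RoCE, the link layer reported for the port should be "Ethernet".
# Assumes the mlx5 driver (inbox or OFED) is loaded.
from pathlib import Path

ib_root = Path("/sys/class/infiniband")
if not ib_root.exists():
    raise SystemExit("No RDMA devices found; is the mlx5/OFED driver loaded?")

for dev in ib_root.iterdir():
    for port in (dev / "ports").iterdir():
        link_layer = (port / "link_layer").read_text().strip()
        print(f"{dev.name} port {port.name}: link layer {link_layer}")
```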
Q: Can I use this adapter with a PCIe 4.0 slot?
The card is a PCIe 3.0 x8 device and works in PCIe 4.0 (or later) slots, where the link simply trains at PCIe 3.0 speeds. There is no performance penalty for 25GbE operation, as PCIe 3.0 x8 provides sufficient bandwidth for both ports.
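The bandwidth claim is easy to verify with back-of-the-envelope arithmetic: PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding, so an x8 link carries roughly 63 Gb/s per direction, comfortably above the 50 Gb/s line rate of two 25GbE ports (before PCIe protocol overhead). A small sketch of the calculation:

```python
# Back-of-the-envelope check that a PCIe 3.0 x8 link can carry two 25GbE ports.
PCIE3_GT_PER_LANE = 8.0          # giga-transfers per second per lane
ENCODING_EFFICIENCY = 128 / 130  # 128b/130b line encoding used by PCIe 3.0
LANES = 8

pcie_bandwidth_gbps = PCIE3_GT_PER_LANE * ENCODING_EFFICIENCY * LANES   # ~63 Gb/s per direction
ethernet_demand_gbps = 2 * 25                                           # two 25GbE ports at line rate

print(f"PCIe 3.0 x8 link bandwidth: {pcie_bandwidth_gbps:.1f} Gb/s per direction")
print(f"2x 25GbE line rate demand : {ethernet_demand_gbps} Gb/s")
print(f"Headroom                  : {pcie_bandwidth_gbps - ethernet_demand_gbps:.1f} Gb/s "
      "(before PCIe protocol overhead)")
```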
Q: What cables are compatible with 25GbE operation?
Standard SFP28 passive DAC cables (up to 5m), SFP28 active optical cables (AOC), 25GBASE-SR (850nm, up to 100m), and 25GBASE-LR (1310nm, up to 10km) are supported. For 10GbE operation, standard SFP+ cables and optics work as well.
Q: What is ASAP2 and how does it benefit my deployment?
ASAP2 (Accelerated Switching and Packet Processing) offloads Open vSwitch and vRouter data plane to hardware, achieving wire-speed performance while reducing CPU load by up to 10x in virtualized environments.
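The host-side toggles usually involved in ASAP2 OVS offload on Linux are switching the adapter's eSwitch to switchdev mode, enabling TC flower hardware offload on the uplink, and turning on hardware offload in Open vSwitch. The sketch below is illustrative only: the PCI address and interface name are placeholders, and the complete procedure (firmware settings, representor ports, OVS restart) is covered in the ASAP2 configuration guide listed under Related Guides.

```python
# Minimal sketch of the host-side toggles typically involved in ASAP2 OVS offload.
# PCI address and interface name are placeholders; consult NVIDIA's ASAP2 documentation
# for the complete procedure (firmware settings, representor ports, OVS restart).
import subprocess

PCI_ADDR = "0000:03:00.0"   # placeholder PCI address of the adapter port
IFACE = "enp3s0f0"          # placeholder interface name

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Switch the embedded eSwitch to switchdev mode so flows can be offloaded.
run(["devlink", "dev", "eswitch", "set", f"pci/{PCI_ADDR}", "mode", "switchdev"])

# Enable TC flower hardware offload on the uplink port.
run(["ethtool", "-K", IFACE, "hw-tc-offload", "on"])

# Tell Open vSwitch to use hardware offload for its datapath flows.
run(["ovs-vsctl", "set", "Open_vSwitch", ".", "other_config:hw-offload=true"])
```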
Q: Does this card support NVMe over Fabric?
Yes, the ConnectX-5 EN includes hardware offloads for NVMe-oF target, enabling efficient remote NVMe storage access with minimal CPU intervention.
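From the initiator side, a RoCE-connected NVMe-oF target is typically discovered with standard nvme-cli tooling over the RDMA transport. A minimal sketch, assuming nvme-cli and the nvme-rdma kernel module are available; the target address is a placeholder:

```python
# Minimal sketch: discover NVMe-oF subsystems over RDMA from an initiator host.
# Assumes the nvme-cli package and nvme-rdma kernel module are present;
# 192.0.2.10 is a placeholder target address, 4420 the conventional NVMe-oF port.
import subprocess

TARGET_ADDR = "192.0.2.10"
TARGET_PORT = "4420"

subprocess.run(["modprobe", "nvme-rdma"], check=True)
subprocess.run(
    ["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)
```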
Q: Is UEFI supported?
Yes, the MCX512A-ACAT includes UEFI support for both x86 and Arm server platforms.
Key Facts
Dual-port 25GbE SFP28 · 200 Mpps message rate · Sub-microsecond latency · RoCE supported · 512 SR-IOV Virtual Functions · VXLAN/NVGRE/GENEVE offload · PCIe 3.0 x8 · ASAP2 vSwitch offload · NVMe-oF target offloads · T10-DIF Signature Handover · PeerDirect GPUDirect · NC-SI BMC management · UEFI enabled
Compatibility Matrix
Operating Systems: RHEL/CentOS 7/8/9, Ubuntu 18.04+, Windows Server 2016/2019/2022, FreeBSD 12+, VMware ESXi 6.7/7.0/8.0, Citrix XenServer
Switches: NVIDIA Spectrum SN3000/SN3420/SN3700 series, Cisco Nexus 3000/9000, Arista 7000 series, Juniper QFX series, any standards-based 10/25GbE switch
Cables and Optics (25GbE): SFP28 passive DAC (up to 5m), SFP28 AOC, 25GBASE-SR (850nm, up to 100m), 25GBASE-LR (1310nm, up to 10km)
Cables and Optics (10GbE): SFP+ passive DAC, SFP+ AOC, 10GBASE-SR, 10GBASE-LR
Management Protocols: NC-SI, MCTP over PCIe/SMBus, PLDM for monitoring and firmware update, SDN eSwitch management
Buyer Checklist
  1. Confirm the server has an available PCIe x8 (or larger) slot, Gen 3.0 or higher (a link-verification sketch follows this checklist).
  2. Determine required cable type: passive DAC (short distance), active optical (medium distance), or optical transceivers (long distance) for 25GbE operation.
  3. Verify operating system driver availability from NVIDIA/Mellanox official site (latest OFED or inbox drivers).
  4. Ensure your switch supports 25GbE SFP28 ports (most modern top-of-rack switches do).
  5. For RoCE deployments, confirm switch support for DCB (PFC, ETS, ECN) and congestion notification.
  6. For NVMe-oF target offloads, verify your storage software stack compatibility.
  7. If upgrading from 10GbE, confirm existing SFP+ optics can be used at 10GbE mode on this adapter.
  8. For UEFI boot, verify server firmware compatibility.
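For item 1, the negotiated PCIe link can be read back from Linux sysfs once the card is installed; a healthy MCX512A-ACAT installation should report 8.0 GT/s (PCIe 3.0) at x8 width. A minimal sketch with a placeholder PCI address:

```python
# Minimal sketch: confirm the negotiated PCIe link speed and width for the adapter.
# "0000:03:00.0" is a placeholder PCI address; find yours with lspci or the
# device-scan sketch earlier on this page.
from pathlib import Path

PCI_ADDR = "0000:03:00.0"
dev = Path(f"/sys/bus/pci/devices/{PCI_ADDR}")

current_speed = (dev / "current_link_speed").read_text().strip()
current_width = (dev / "current_link_width").read_text().strip()
max_speed = (dev / "max_link_speed").read_text().strip()
max_width = (dev / "max_link_width").read_text().strip()

print(f"Negotiated link: {current_speed}, x{current_width}")
print(f"Card capability: {max_speed}, x{max_width}")
# A healthy MCX512A-ACAT installation should negotiate 8.0 GT/s (PCIe 3.0) at x8.
```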
Related Products
NVIDIA MCX516A-CCAT
Dual-port 100GbE adapter for spine connectivity and high-bandwidth uplinks.
NVIDIA SN3420 Switch
48x 25GbE + 12x 100GbE top-of-rack switch for leaf/spine fabrics.
NVIDIA LinkX SFP28 DAC Cables
Passive copper direct-attach cables for 25GbE connections up to 5 meters.
NVIDIA MCX512A-ADAT
ConnectX-5 Ex enhanced version for additional performance optimizations.
Related Guides
  • RoCE Deployment Guide for ConnectX-5 Series
  • 25GbE Migration: Best Practices from 10GbE to 25GbE
  • ASAP2 Open vSwitch Offload Configuration Guide
  • NVMe over Fabric with ConnectX-5 Best Practices
  • SR-IOV Configuration on VMware ESXi with Mellanox Adapters
About Hong Kong Starsurge Group

Hong Kong Starsurge Group Co., Limited has been a technology-driven provider of network hardware, IT services, and system integration since 2008, serving government, healthcare, manufacturing, finance, education, and enterprise clients worldwide. We deliver switches, NICs, wireless solutions, IoT systems, and custom software with multilingual support and global delivery. With a customer-first approach, Starsurge ensures reliable quality, responsive service, and tailored network infrastructure solutions.
