NVIDIA ConnectX-6 MCX653106A-HDAT 200Gb/s Dual-Port InfiniBand Smart Adapter
Product Details:
| Brand Name: | Mellanox |
|---|---|
| Model Number: | MCX653106A-HDAT-SP |
| Documentation: | connectx-6-infiniband.pdf |

Payment & Shipping Terms:
| Minimum Order Quantity: | 1 piece |
|---|---|
| Price: | Negotiable |
| Packaging Details: | Outer carton |
| Delivery Time: | Subject to stock |
| Payment Terms: | T/T |
| Supply Ability: | Supplied by project/batch |

Detailed Information
| Product Status: | In stock | Application: | Server |
|---|---|---|---|
| Condition: | New and original | Type: | Wired |
| Max Speed: | Up to 200Gb/s | Ethernet Connector: | QSFP56 |
| Model: | MCX653106A-HDAT | Name: | MCX653106A-HDAT-SP Mellanox 200GbE High-Speed Smart Network Card |
| Highlight: | NVIDIA ConnectX-6 InfiniBand adapter, Mellanox 200Gb/s network card, dual-port InfiniBand smart adapter |
Product Description
NVIDIA ConnectX-6 MCX653106A-HDAT
200Gb/s Dual-Port HDR InfiniBand Smart Adapter
Unlock extreme HPC and AI performance with NVIDIA In-Network Computing. This PCIe 4.0 x16 adapter delivers 215 million messages/sec and hardware-based acceleration for the most demanding data centers.
Product Overview
The NVIDIA ConnectX-6 MCX653106A-HDAT is a dual-port 200Gb/s InfiniBand and Ethernet smart adapter, designed as a cornerstone of the NVIDIA Quantum InfiniBand platform. It integrates advanced features like Remote Direct Memory Access (RDMA), NVMe over Fabrics (NVMe-oF) offloads, and block-level encryption to drastically reduce CPU overhead. By moving computation into the network fabric, this adapter enhances scalability and efficiency for high-performance computing, machine learning workloads, and hyperscale cloud infrastructures.
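For readers evaluating software integration: RDMA functionality is exposed on Linux through the standard verbs API. The snippet below is a minimal sketch (assuming rdma-core/libibverbs is installed and the mlx5 driver is loaded; it is generic verbs code, not vendor-specific) that lists RDMA devices and a few capability fields:

```c
/* Minimal sketch: enumerating RDMA-capable devices with libibverbs.
 * Assumes a Linux host with rdma-core installed; link with -libverbs.
 * Device names such as "mlx5_0" are typical for ConnectX adapters
 * but depend on the system. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr attr;
        if (!ibv_query_device(ctx, &attr))   /* returns 0 on success */
            printf("%s: %d port(s), max_qp=%d\n",
                   ibv_get_device_name(devs[i]),
                   attr.phys_port_cnt, attr.max_qp);

        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}
```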
Key Features
- Ultra-High Throughput: 200Gb/s connectivity per port; aggregate bandwidth is capped at 200Gb/s by the PCIe 4.0 x16 host interface.
- In-Network Computing: Hardware offloads for collective operations, MPI tag matching, and rendezvous protocol.
- Block-Level Encryption: XTS-AES 256/512-bit hardware encryption for FIPS-compliant data security.
- PCIe 4.0 Support: 16 GT/s link rate with full backward compatibility to PCIe 3.0/2.0/1.1.
- Message Rate: Up to 215 million messages per second for extreme small-packet performance.
- Storage Offloads: NVMe-oF target and initiator offloads, T10-DIF, and support for SRP, iSER, NFS RDMA.
- Virtualization: SR-IOV with up to 1K virtual functions and ASAP² (Accelerated Switching and Packet Processing) for OVS offload.
NVIDIA In-Network Computing Technology
ConnectX-6 integrates NVIDIA’s unique In-Network Computing engines, offloading collective communication operations (like MPI all-reduce) from the CPU to the network fabric. This drastically reduces latency and frees CPU cycles for application processing. Combined with RDMA and advanced memory mapping (UMR), the adapter enables GPU Direct RDMA and peer-to-peer GPU communication across the network, accelerating AI training clusters and complex simulations.
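To make the offload concrete: the sketch below is an ordinary MPI all-reduce in C. Application code like this is unchanged whether or not the fabric offloads the collective (e.g. via NVIDIA SHARP on Quantum switches); enabling offload is a matter of MPI library and fabric configuration. Values here are illustrative only:

```c
/* Illustrative sketch: an MPI all-reduce, the collective operation that
 * In-Network Computing can offload from the CPU to the network fabric.
 * Compile with mpicc; run with mpirun across the cluster. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank contributes one partial value, e.g. a local gradient norm. */
    double local = (double)(rank + 1);
    double global = 0.0;

    /* With fabric offload enabled, the reduction runs in the switches,
     * freeing host CPU cycles; the semantics are identical either way. */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f\n", global);

    MPI_Finalize();
    return 0;
}
```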
Typical Deployments
- High-Performance Computing (HPC): Large-scale clusters running weather simulation, computational fluid dynamics, and molecular dynamics.
- AI & Machine Learning: Distributed training of deep neural networks requiring high throughput and low latency.
- Enterprise Data Centers: NVMe-oF storage targets, database acceleration, and virtualized infrastructure.
- Hyperscale Cloud: Multi-tenant environments requiring hardware-based isolation and QoS.
- Liquid-Cooled Platforms: Compatible with Intel Server System D50TNP cold plate designs for high-density deployments.
Compatibility
System & CPU: x86, Power, Arm, GPU (with GPUDirect), and FPGA-based platforms.
Switches: Fully interoperable with NVIDIA Quantum InfiniBand switches up to 200Gb/s and standard Ethernet switches.
Cables: Direct-attach copper (DAC), active optical cables (AOC), and optical transceivers with QSFP56 connectors.
Technical Specifications
| Parameter | Details |
|---|---|
| Product Name | NVIDIA ConnectX-6 MCX653106A-HDAT |
| Supported Speeds | InfiniBand: HDR/HDR100/EDR/FDR/QDR/DDR/SDR (up to 200Gb/s); Ethernet: 200/100/50/40/25/10/1GbE |
| Network Ports | 2x QSFP56 |
| Host Interface | PCIe Gen 4.0/3.0 x16 (also supports x8, x4, x2, x1) |
| Message Rate | Up to 215 million messages/sec |
| InfiniBand Features | RDMA, XRC, DCT, ODP, Hardware Congestion Control, 16M I/O channels, 8 VLs + VL15 |
| Ethernet Offloads | RoCE, LSO/LRO, checksum offload, RSS/TSS, VXLAN/NVGRE/Geneve offload |
| Storage Offloads | NVMe-oF (target/initiator), T10-DIF, SRP, iSER, SMB Direct |
| Security | Hardware XTS-AES 256/512-bit block encryption, FIPS compliant |
| Management | NC-SI, MCTP over SMBus/PCIe, PLDM for Monitor/Firmware, I2C, JTAG |
| Dimensions | 167.65mm x 68.90mm (without brackets) |
| Regulatory | RoHS, ODCC compatible |
Note: Specifications are based on available documentation. For complete details, please confirm before ordering.
Selection Guide: MCX653106A-HDAT
This model is the dual-port QSFP56 variant in the PCIe stand-up form factor. It supports both InfiniBand and Ethernet at speeds up to 200Gb/s. For single-port needs, consider the MCX653105A-HDAT. For OCP 3.0 form factor, refer to the MCX653436A-HDAT series.
| Ports | Form Factor | OPN | Use Case |
|---|---|---|---|
| 2x QSFP56 | PCIe Stand-up | MCX653106A-HDAT | Dual-port 200Gb/s for high-availability HPC/AI nodes |
| 1x QSFP56 | PCIe Stand-up | MCX653105A-HDAT | Single-port 200Gb/s for standard compute |
| 2x QSFP56 | Socket Direct | MCX654106A-HCAT | Multi-socket server optimization |
Advantages of the ConnectX-6 MCX653106A-HDAT
- Future-Proof I/O: PCIe 4.0 readiness ensures bandwidth for next-gen CPUs and GPUs.
- Security by Default: On-board FIPS-compliant encryption eliminates the need for self-encrypting drives.
- Infrastructure Consolidation: One adapter supports both InfiniBand and Ethernet, simplifying inventory.
- Scalable Storage: Full NVMe-oF offloads reduce CPU load in disaggregated storage architectures.
Service & Support
Backed by Hong Kong Starsurge Group's experienced technical team, we provide:
- Pre-sales configuration assistance for your specific HPC or enterprise environment.
- Global shipping with tracking and secure packaging.
- Firmware update guidance and driver download support.
- Warranty and RMA services (terms may vary by region).
Frequently Asked Questions
Q: Is this card compatible with standard Ethernet switches?
A: Yes, the MCX653106A-HDAT is a VPI adapter that supports both InfiniBand and Ethernet and can operate at 200/100/50/40/25/10/1GbE. The port protocol is typically selected with NVIDIA's mlxconfig tool (LINK_TYPE parameter).
Q: Does it support NVIDIA GPUDirect?
A: Absolutely. It supports GPUDirect RDMA and PeerDirect for direct GPU-to-GPU communication over the network.
Q: What is the difference between MCX653106A-HDAT and MCX653106A-ECAT?
A: The -HDAT suffix indicates the high-speed variant supporting 200Gb/s, while -ECAT typically denotes a lower-speed (100Gb/s) version. Always verify with the ordering guide.
Q: Can I use this card in a PCIe Gen 3 slot?
A: Yes, it is backward compatible with PCIe Gen 3.0, but the Gen 3 x16 bus caps total throughput at roughly 126Gb/s raw (around 100Gb/s effective after protocol overhead), so the card cannot sustain dual-port line rate.
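As a rough check of that figure (using the standard 128b/130b PCIe line encoding; TLP/packet overhead reduces usable throughput further):

```latex
% Raw PCIe bandwidth per direction, 128b/130b line encoding
\begin{align*}
\text{Gen 3 x16:}\quad 16 \times 8\,\text{GT/s} \times \tfrac{128}{130} &\approx 126\ \text{Gb/s} \\
\text{Gen 4 x16:}\quad 16 \times 16\,\text{GT/s} \times \tfrac{128}{130} &\approx 252\ \text{Gb/s}
\end{align*}
```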
Installation Precautions
- Ensure adequate airflow; the adapter may require active cooling in dense environments.
- Use only QSFP56 modules and cables validated for 200Gb/s operation to avoid link instability.
- Check your motherboard's PCIe slot bifurcation support if using the Socket Direct variant.
- Confirm power budget: the card draws power from the PCIe slot; high-power optical modules may need additional power consideration.
About Hong Kong Starsurge Group Co., Limited
Founded in 2008, Hong Kong Starsurge Group is a technology-driven provider of network hardware, IT services, and system integration solutions. We serve customers worldwide with products including network switches, NICs, wireless access points, controllers, and related networking equipment. Our experienced sales and technical team supports industries such as government, healthcare, manufacturing, education, finance, and enterprise. With a customer-first approach, Starsurge focuses on reliable quality, responsive service, and tailored solutions that help clients build efficient, scalable, and dependable network infrastructure. We offer IoT solutions, network management systems, custom software development, multilingual support, and global delivery.
Key Facts at a Glance
Compatibility Matrix
| Component | Supported? | Notes |
|---|---|---|
| NVIDIA Quantum Switches | Yes | Full 200Gb/s HDR interoperability |
| PCIe Gen 4.0 Motherboards | Yes | Full 200Gb/s line rate |
| PCIe Gen 3.0 Motherboards | Yes | Aggregate limited to ~100Gb/s effective (Gen 3 x16 ceiling) |
| VMware vSphere | Yes | Drivers available |
| Liquid-Cooled Intel D50TNP | Special SKU | Cold plate version exists; confirm OPN |
Buyer Checklist
- Confirm server has a free PCIe 4.0 x16 slot (physical and electrical).
- Verify InfiniBand fabric speed: this card is HDR (200Gb/s) capable.
- Choose appropriate QSFP56 modules (SR, LR, or DAC) for your distance.
- Ensure power and cooling are adequate for high-speed operation.
- Check OS/driver support: RHEL, Ubuntu, Windows Server, etc.
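
For the last checklist item, a quick way to confirm driver and link state on Linux is to read the InfiniBand sysfs entries. A minimal sketch follows (the device name mlx5_0 and port number are assumptions; adjust to what your system enumerates):

```c
/* Minimal sketch: reading InfiniBand port state and rate from Linux
 * sysfs after driver installation. The device name "mlx5_0" is an
 * assumption; adjust to your system. */
#include <stdio.h>

static void show(const char *path)
{
    char buf[128];
    FILE *f = fopen(path, "r");
    if (!f) {
        printf("%s: not found (is the driver loaded?)\n", path);
        return;
    }
    if (fgets(buf, sizeof buf, f))
        printf("%s: %s", path, buf);   /* sysfs values end with '\n' */
    fclose(f);
}

int main(void)
{
    show("/sys/class/infiniband/mlx5_0/ports/1/state"); /* e.g. "4: ACTIVE" */
    show("/sys/class/infiniband/mlx5_0/ports/1/rate");  /* e.g. "200 Gb/sec (4X HDR)" */
    return 0;
}
```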