NVIDIA ConnectX-6 MCX653105A-HDAT 200Gb/s Single-Port InfiniBand Smart Adapter with Hardware Encryption & PCIe 4.0

Product Details:

Brand Name: Mellanox
Model Number: MCX653105A-HDAT
Document: connectx-6-infiniband.pdf

Payment & Shipping Terms:

Minimum Order Quantity: 1 piece
Price: Negotiable
Packaging Details: Outer carton
Delivery Time: Based on stock availability
Payment Terms: T/T
Supply Ability: By project/batch
Contact us for the best price.

Detailed Information

Product Status: In stock
Application: Server
Condition: New and original
Type: Wired
Maximum Speed: Up to 200Gb/s
Connector: QSFP56
Model: MCX653105A-HDAT
Highlights:
• NVIDIA ConnectX-6 InfiniBand adapter
• 200Gb/s PCIe 4.0 network card
• InfiniBand adapter with hardware encryption

Product Description

NVIDIA ConnectX-6 InfiniBand Adapter MCX653105A-HDAT

200Gb/s Single-Port HDR Smart Adapter with In-Network Computing & Hardware Encryption

The NVIDIA ConnectX-6 MCX653105A-HDAT delivers full 200Gb/s throughput on a single QSFP56 port, combining ultra-low latency, hardware offloads, and block-level XTS-AES encryption. Designed for HPC, AI clusters, and NVMe-oF storage, this PCIe 4.0 x16 adapter offloads collective operations, RDMA, and encryption from the CPU, maximizing application performance and scalability in demanding data center environments.

200Gb/s InfiniBand & Ethernet · Hardware Crypto (XTS-AES) · RDMA / GPUDirect · NVMe-oF Offloads
Product Overview

The MCX653105A-HDAT belongs to the NVIDIA ConnectX-6 InfiniBand adapter family, engineered for extreme performance in modern data centers. This single-port QSFP56 card supports up to 200Gb/s (HDR InfiniBand or 200GbE) with full hardware acceleration for RDMA, reliable transport, and In-Network Computing. By integrating collective operations offloads, MPI tag matching, and NVMe over Fabrics acceleration, the adapter significantly reduces CPU overhead while boosting fabric efficiency. Its built-in AES-XTS block-level encryption ensures data security without performance penalty, making it ideal for financial services, government research, and hyperscale cloud deployments.

Key Features
Port Speed: Up to 200Gb/s (HDR InfiniBand / 200GbE) on a single QSFP56 port
Message Rate: Up to 215 million messages/sec
Hardware Encryption: Block-level XTS-AES 256/512-bit, FIPS compliant
In-Network Computing: Collective offloads, NVMe-oF target/initiator offloads, burst buffer
Host Interface: PCIe Gen 4.0 / 3.0 x16 (backward compatible)
Virtualization & Offloads: SR-IOV (1K VFs), ASAP2, Open vSwitch offload, overlay tunnels
RDMA & GPUDirect: RoCE, XRC, DCT, On-Demand Paging, GPUDirect RDMA support
Form Factor: Stand-up PCIe low-profile; tall bracket pre-installed, short bracket included
Advanced Technology: In-Network Computing & Security

NVIDIA ConnectX-6 integrates In-Network Computing acceleration engines that offload critical datacenter operations from the host CPU. The MCX653105A-HDAT supports hardware-based reliable transport, adaptive routing, and congestion control, ensuring predictable performance in large-scale fabrics. Remote Direct Memory Access (RDMA) enables zero-copy data transfers, bypassing the OS kernel. With NVIDIA GPUDirect RDMA, GPU memory communicates directly with the network adapter, slashing latency for AI training and HPC simulations. Built-in block-level XTS-AES encryption (256/512-bit key) ensures data-in-transit and data-at-rest security with no CPU overhead, and the adapter is designed to meet FIPS 140-2 compliance requirements.
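
To make the RDMA path concrete, here is a minimal sketch using the standard libibverbs userspace API (shipped with rdma-core and MLNX_OFED) to enumerate RDMA devices and query port state. It is a generic illustration under the assumption that the driver stack is installed, not vendor sample code; device names such as mlx5_0 vary by system.

```c
/* Minimal libibverbs sketch: enumerate RDMA devices and query port 1.
 * Build: gcc query_port.c -o query_port -libverbs
 * Assumes rdma-core or MLNX_OFED is installed. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }
    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        if (!ctx)
            continue;
        struct ibv_port_attr attr;
        if (ibv_query_port(ctx, 1, &attr) == 0) {
            /* active_speed and active_width are encoded values; an
             * HDR link reports a higher speed code than EDR/FDR. */
            printf("%s: state=%d speed_code=%u width_code=%u\n",
                   ibv_get_device_name(list[i]),
                   attr.state, attr.active_speed, attr.active_width);
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(list);
    return 0;
}
```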

Typical Deployments
  • High Performance Computing (HPC): Large-scale simulations, weather forecasting, and computational fluid dynamics requiring 200Gb/s low-latency interconnect.
  • AI & Deep Learning Clusters: Distributed training with GPUDirect RDMA, maximizing throughput between GPU nodes.
  • NVMe-oF Storage Systems: High-performance disaggregated storage with full target/initiator offloads, reducing CPU utilization.
  • Hyperscale & Cloud Data Centers: Virtualized environments with SR-IOV, overlay networks, and hardware-accelerated encryption.
  • Financial Trading Platforms: Ultra-low latency deterministic networking for algorithmic trading.
Compatibility & Ecosystem

The ConnectX-6 MCX653105A-HDAT interoperates seamlessly with NVIDIA Quantum InfiniBand switches (HDR 200Gb/s), standard 200GbE switches, and a wide range of server platforms. It supports major operating systems and virtualization stacks, ensuring flexible integration into existing infrastructure.

Technical Specifications
Product Model: MCX653105A-HDAT
Data Rate: 200Gb/s, 100Gb/s, 50Gb/s, 40Gb/s, 25Gb/s, 10Gb/s, 1Gb/s (InfiniBand and Ethernet)
Ports & Connector: 1x QSFP56 (supports passive copper DACs, active optical cables, and optical transceivers)
Host Interface: PCIe Gen 4.0 x16 (also compatible with Gen 3.0 and 2.0; supports x8, x4, x2, x1 configurations)
Latency: Sub-microsecond (typically <0.7µs)
Message Rate: Up to 215 million messages per second
Encryption: XTS-AES 256/512-bit hardware offload, FIPS 140-2 ready
Form Factor: PCIe low-profile stand-up (tall bracket pre-installed, short bracket accessory included)
Dimensions (without bracket): 167.65mm x 68.90mm
Power Consumption: 22W-24W typical (depends on link utilization)
Virtualization: SR-IOV (up to 1K Virtual Functions), VMware NetQueue, NPAR, ASAP2 flow offload
Management & Monitoring: NC-SI, MCTP over PCIe/SMBus, PLDM (DSP0248, DSP0267), I2C, SPI flash
Remote Boot: InfiniBand, iSCSI, PXE, UEFI
Operating Systems: RHEL, SLES, Ubuntu, Windows Server, FreeBSD, VMware vSphere, OpenFabrics Enterprise Distribution (OFED), WinOF-2
Selection Guide – ConnectX-6 Adapter Variants

| Ordering Part Number (OPN) | Ports | Max Speed | Host Interface | Key Features |
|---|---|---|---|---|
| MCX653105A-HDAT | 1x QSFP56 | 200Gb/s | PCIe 3.0/4.0 x16 | Single-port, hardware crypto, full ConnectX-6 offloads, ideal for high-density servers |
| MCX653106A-HDAT | 2x QSFP56 | 200Gb/s (dual-port) | PCIe 3.0/4.0 x16 | Dual-port 200Gb/s with crypto, maximum bandwidth density |
| MCX653105A-ECAT | 1x QSFP56 | 100Gb/s | PCIe 3.0/4.0 x16 | Single-port 100Gb/s, cost-optimized for lower speed requirements |
| MCX653106A-ECAT | 2x QSFP56 | 100Gb/s (dual-port) | PCIe 3.0/4.0 x16 | Dual-port 100Gb/s, virtualization & storage offloads |
| MCX653436A-HDAT (OCP 3.0) | 2x QSFP56 | 200Gb/s | PCIe 3.0/4.0 x16 | OCP 3.0 small form factor, dual-port 200Gb/s |
Note: MCX653105A-HDAT includes a full hardware encryption engine (XTS-AES) and supports both InfiniBand and Ethernet protocols at up to 200Gb/s. For dual-port configurations, consider the MCX653106A-HDAT variant with two QSFP56 cages.
Why Choose MCX653105A-HDAT for Your Infrastructure
  • Full 200Gb/s Bandwidth: Single-port design delivers maximum throughput for compute nodes where high density per port is prioritized.
  • Hardware Security Built-in: XTS-AES block encryption without CPU overhead, meeting FIPS compliance for regulated industries.
  • Accelerated Storage & AI: NVMe-oF offloads and GPUDirect RDMA significantly boost performance for AI training and software-defined storage.
  • Future-Ready PCIe 4.0: Doubles interconnect bandwidth to the host, eliminating bottlenecks for 200Gb/s networking.
  • Simplified Management: Unified driver stack (OFED, WinOF-2) and broad OS compatibility reduce deployment complexity.
Service & Support

Hong Kong Starsurge Group provides expert technical support, warranty coverage, and global RMA services for all NVIDIA ConnectX adapters. Our network specialists assist with driver installation, performance tuning, and fabric integration. We offer flexible pricing, bulk quotes for data center projects, and fast worldwide shipping. For customized solutions, contact our sales team to discuss lead times and volume discounts.

Frequently Asked Questions
Q: What is the maximum speed supported by MCX653105A-HDAT?
A: It supports up to 200Gb/s (HDR) on the InfiniBand side and 200GbE on the Ethernet side, with full backward compatibility to lower speeds (100/50/40/25/10/1Gb/s).
Q: Does this adapter include hardware encryption?
A: Yes. MCX653105A-HDAT features built-in XTS-AES 256/512-bit block-level encryption offload, reducing CPU load and ensuring data security.
Q: Can I use this card in a PCIe 3.0 slot?
A: Absolutely. The adapter is backward compatible with PCIe 3.0 and 2.0. However, maximum throughput may be limited by the slot generation.
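As a rough sanity check (nominal PCIe rates, accounting only for 128b/130b line encoding and ignoring other protocol overhead):

```latex
\text{PCIe 3.0 x16: } 16 \times 8\,\text{GT/s} \times \tfrac{128}{130} \approx 126\,\text{Gb/s} \;<\; 200\,\text{Gb/s}
\qquad
\text{PCIe 4.0 x16: } 16 \times 16\,\text{GT/s} \times \tfrac{128}{130} \approx 252\,\text{Gb/s} \;>\; 200\,\text{Gb/s}
```

So a Gen 3 x16 slot caps usable host bandwidth at roughly 126Gb/s, while Gen 4 x16 leaves headroom above the 200Gb/s line rate.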
Q: Is GPUDirect RDMA supported?
A: Yes, the ConnectX-6 series fully supports NVIDIA GPUDirect RDMA, enabling direct GPU memory access across the network for AI and HPC workloads.
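For illustration only (not NVIDIA's official sample code), the sketch below shows the core GPUDirect RDMA pattern: registering memory allocated with cudaMalloc directly as an RDMA memory region via ibv_reg_mr. This succeeds only when the nvidia-peermem (formerly nv_peer_mem) kernel module is loaded; error-path cleanup is trimmed for brevity.

```c
/* GPUDirect RDMA sketch: register CUDA device memory with the NIC.
 * Assumes nvidia-peermem is loaded and ctx is an open ibv_context
 * for the ConnectX-6 port (see the earlier enumeration example). */
#include <cuda_runtime.h>
#include <infiniband/verbs.h>

struct ibv_mr *register_gpu_buffer(struct ibv_context *ctx, size_t bytes)
{
    void *gpu_buf = NULL;
    if (cudaMalloc(&gpu_buf, bytes) != cudaSuccess)
        return NULL;

    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    if (!pd)
        return NULL;

    /* With GPUDirect, ibv_reg_mr pins the GPU pages so the adapter
     * can DMA to and from them without staging through host memory. */
    return ibv_reg_mr(pd, gpu_buf, bytes,
                      IBV_ACCESS_LOCAL_WRITE |
                      IBV_ACCESS_REMOTE_READ |
                      IBV_ACCESS_REMOTE_WRITE);
}
```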
Q: What cable types are compatible with the QSFP56 port?
A: It supports passive copper DACs (up to 5m), active optical cables (AOC), and QSFP56 optical transceivers for longer distances.
Important Precautions
• Ensure adequate chassis airflow for the 200Gb/s adapter; use recommended cooling per server vendor guidelines.
• Verify the PCIe slot provides sufficient power: the adapter draws roughly 22-24W typical, well within the standard 75W slot budget.
• For liquid-cooled platforms: this standard air-cooled card is not a cold-plate variant; contact Starsurge if you require a liquid-cooled SKU.
• Always use QSFP56-rated cables or modules to achieve 200Gb/s performance.
• Confirm driver version compatibility with your OS and kernel before deployment.
About Hong Kong Starsurge Group

Since 2008, Hong Kong Starsurge Group Co., Limited has been a trusted provider of enterprise networking hardware, system integration, and IT services. As an authorized partner for NVIDIA networking solutions, Starsurge delivers genuine ConnectX adapters, switches, and cables to government, finance, healthcare, education, and hyperscale clients worldwide. Our experienced sales and technical teams ensure seamless deployment from pre-sales architecture to post-sales support, with a commitment to reliable quality and responsive service.

Global delivery · Multilingual support · Tailored OEM & integration services

Key Facts at a Glance
200Gb/s single-port
215M Msgs/sec
PCIe 4.0 x16
XTS-AES + FIPS
SR-IOV (1K VFs)
NVMe-oF & GPUDirect
Compatibility Matrix

| Component / Ecosystem | Support Status | Remarks |
|---|---|---|
| NVIDIA Quantum HDR InfiniBand Switches | ✓ Fully supported | 200Gb/s fabric, adaptive routing |
| 200GbE Switches (IEEE 802.3) | ✓ Compatible | Requires FEC modes per switch specification |
| GPUDirect RDMA | ✓ Yes | NVIDIA GPUs (Volta, Ampere, Hopper, etc.) |
| VMware vSphere 7.0/8.0 | ✓ Certified | Native drivers, SR-IOV support |
| Linux (RHEL, Ubuntu, SLES) | ✓ Full support | MLNX_OFED, inbox drivers available |
| Windows Server 2019/2022 | ✓ Supported | WinOF-2 driver package |
Buyer Checklist – Before Ordering MCX653105A-HDAT
  • [ ] Confirm required link speed: 200Gb/s single-port matches your node bandwidth requirements.
  • [ ] Check server PCIe slot: x16 physical slot, Gen 4 recommended for full 200Gb/s performance.
  • [ ] Select appropriate QSFP56 cables or transceivers (passive copper up to 5m, AOC, or optics).
  • [ ] Verify OS driver support (OFED version or inbox).
  • [ ] Ensure encryption compliance requirements are met (XTS-AES, FIPS).
  • [ ] Evaluate environmental cooling: high-speed adapters may require directed airflow.