NVIDIA ConnectX-7 VPI Adapter – Single-Port NDR 400Gb/s, PCIe 5.0, GPUDirect, RoCE – MCX75310AAS-NEAT
Product Details:
| Brand Name: | Mellanox |
|---|---|
| Model Number: | MCX75310AAS-NEAT (900-9X766-003N-SQ0) |
| Document: | Connectx-7 infiniband.pdf |
Payment & Shipping Terms:
| Minimum Order Quantity: | 1 piece |
|---|---|
| Price: | Negotiable |
| Packaging Details: | Outer carton |
| Delivery Time: | Based on stock availability |
| Payment Terms: | T/T |
| Supply Ability: | Supplied by project/batch |
Detailed Information
| Model Number: | MCX75310AAS-NEAT (900-9X766-003N-SQ0) | Ports: | Single port |
|---|---|---|---|
| Technology: | InfiniBand | Interface Type: | OSFP |
| Dimensions: | 16.7 cm × 6.9 cm | Origin: | India / Israel / China |
| Transmission Rate: | 400Gb/s NDR | Host Interface: | PCIe 5.0 x16 |
| Highlight: | NVIDIA ConnectX-7 network adapter, Single-Port NDR 400Gb/s PCIe card, Mellanox RoCE GPUDirect adapter |
Product Description
MCX755106AS‑HEAT | Dual-Port PCIe 5.0 Smart NIC
Accelerate AI, scientific computing, and enterprise cloud workloads with the NVIDIA ConnectX-7 family. The MCX755106AS-HEAT delivers up to 200Gb/s HDR InfiniBand or 200GbE Ethernet per port, in‑network computing engines, hardware‑level security, and ultra‑low latency, all over a PCIe 5.0 host interface.
The NVIDIA ConnectX-7 VPI adapter MCX755106AS-HEAT is a dual-port 200Gb/s smart network interface card designed for high-performance computing (HPC) clusters, AI factories, and enterprise data centers. Combining InfiniBand and Ethernet protocol support, it enables Remote Direct Memory Access (RDMA), GPUDirect Storage, and advanced in‑network computing engines such as SHARPv3 and rendezvous offload. With PCIe 5.0 host interface and hardware-based security accelerators, this adapter offloads the CPU, reduces TCO, and delivers consistent low-latency performance.
Ideal for organizations modernizing their IT infrastructure from edge to core, the ConnectX-7 family brings software-defined, hardware-accelerated networking, storage, and security — empowering scalable and secure solutions with minimal overhead.
ConnectX-7 integrates NVIDIA ASAP² (Accelerated Switch and Packet Processing) technology to deliver software-defined networking at line-rate without consuming CPU cores. Inline hardware engines handle encryption/decryption for IPsec, TLS, and MACsec, protecting data in motion from edge to core. For storage, built-in NVMe-oF offload and GPUDirect Storage enable direct data movement between storage and GPU memory, reducing latency and maximizing throughput. The adapter also supports advanced time synchronization (PTP with 12ns accuracy) and on‑demand paging (ODP) for registration‑free RDMA, making it ideal for disaggregated and memory-centric architectures.
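As a hedged illustration of how on‑demand paging changes RDMA memory registration, the C sketch below uses the standard libibverbs API to check the adapter's ODP capabilities and register a buffer with IBV_ACCESS_ON_DEMAND, so pages are faulted in by the HCA instead of being pinned up front. Device selection (first device) and the buffer size are illustrative, not tied to this product.

```c
/* Hedged sketch: register an On-Demand Paging (ODP) memory region with
 * libibverbs. Build with: gcc odp_demo.c -libverbs */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) { fprintf(stderr, "open failed\n"); return 1; }

    /* Confirm the HCA reports ODP support before relying on it. */
    struct ibv_device_attr_ex attr = {0};
    if (ibv_query_device_ex(ctx, NULL, &attr) == 0 &&
        (attr.odp_caps.general_caps & IBV_ODP_SUPPORT))
        printf("ODP supported by %s\n", ibv_get_device_name(devs[0]));

    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    size_t len = 1 << 20;                 /* 1 MiB demo buffer */
    void *buf = malloc(len);

    /* IBV_ACCESS_ON_DEMAND: pages are not pinned; the HCA page-faults
     * them in on first access (registration-free-style RDMA). */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE |
                                   IBV_ACCESS_ON_DEMAND);
    printf("ODP MR: %s (lkey 0x%x)\n", mr ? "registered" : "failed",
           mr ? mr->lkey : 0);

    if (mr) ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}
```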
- AI & Large Language Model (LLM) Clusters: High-speed interconnect for GPU servers, leveraging GPUDirect RDMA and SHARP collective offloads (see the GPUDirect registration sketch after this list).
- High-Performance Computing (HPC): 200Gb/s HDR InfiniBand fabric for MPI, OpenSHMEM, and scientific simulations.
- Hyperscale Cloud & SDN Data Centers: RoCEv2, overlay acceleration, and SR-IOV for multi-tenant virtualization.
- Enterprise Security Gateway: Inline MACsec/IPsec encryption for edge-to-core communications with hardware offload.
- Storage Systems: NVMe-oF/TCP offload, distributed storage platforms requiring ultra-low latency and high IOPS.
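For the GPU-cluster use case above, here is a hedged sketch of GPUDirect-style registration: with a recent CUDA driver (11.7 or later) and a current rdma-core, a CUDA allocation can be exported as a Linux dmabuf and registered with the NIC via ibv_reg_dmabuf_mr, letting RDMA target GPU memory without a host bounce buffer. The setup (device open, PD allocation) is elided and the helper name is hypothetical; this is one way to wire it up, not the only one.

```c
/* Hedged GPUDirect-RDMA sketch: export a CUDA buffer as a dmabuf and
 * register it with the HCA (requires CUDA >= 11.7 and an rdma-core
 * that provides ibv_reg_dmabuf_mr). Error handling abbreviated. */
#include <cuda.h>
#include <infiniband/verbs.h>
#include <stdio.h>

/* Assumes 'pd' was allocated with ibv_alloc_pd(), as in the ODP sketch. */
struct ibv_mr *register_gpu_buffer(struct ibv_pd *pd, size_t len)
{
    CUdeviceptr dptr;
    int dmabuf_fd = -1;

    cuInit(0);
    CUdevice dev; cuDeviceGet(&dev, 0);
    CUcontext ctx; cuDevicePrimaryCtxRetain(&ctx, dev);
    cuCtxSetCurrent(ctx);

    /* Allocate GPU memory to be exposed to the NIC. */
    cuMemAlloc(&dptr, len);

    /* Export the GPU address range as a dmabuf file descriptor. */
    if (cuMemGetHandleForAddressRange(&dmabuf_fd, dptr, len,
            CU_MEM_RANGE_HANDLE_TYPE_DMA_BUF_FD, 0) != CUDA_SUCCESS) {
        fprintf(stderr, "dmabuf export not supported on this stack\n");
        return NULL;
    }

    /* The NIC DMA-maps the GPU pages directly: RDMA reads/writes now
     * hit device memory with no staging through host RAM. */
    return ibv_reg_dmabuf_mr(pd, 0 /* offset */, len, 0 /* iova */,
                             dmabuf_fd,
                             IBV_ACCESS_LOCAL_WRITE |
                             IBV_ACCESS_REMOTE_READ |
                             IBV_ACCESS_REMOTE_WRITE);
}
```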
✅ Operating Systems: In-box drivers for Linux (RHEL, Ubuntu), Windows Server, VMware ESXi (SR-IOV), Kubernetes (CNI plugins).
✅ Protocols: InfiniBand (HDR/EDR), Ethernet (200GbE to 10GbE), RoCE, RoCEv2, iSCSI, NVMe‑oF, SRP, iSER, NFS over RDMA, SMB Direct.
✅ HPC Middleware: NVIDIA HPC-X, UCX, UCC, NCCL, OpenMPI, MVAPICH, MPICH, OpenSHMEM.
✅ Management: NC-SI, MCTP over PCIe/SMBus, PLDM, Redfish, SPDM, secure firmware update.
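As a small verification aid (a sketch, not vendor tooling), the libibverbs calls below enumerate the RDMA devices the in-box Linux driver exposes and report whether each port came up as InfiniBand or Ethernet, which is how a VPI adapter's current personality can be confirmed from software.

```c
/* Hedged sketch: list RDMA devices and each port's link layer
 * (InfiniBand vs Ethernet/RoCE). Build with: gcc -libverbs */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx) continue;

        struct ibv_device_attr dev_attr;
        ibv_query_device(ctx, &dev_attr);

        for (int p = 1; p <= dev_attr.phys_port_cnt; p++) {
            struct ibv_port_attr port_attr;
            if (ibv_query_port(ctx, p, &port_attr)) continue;
            printf("%s port %d: %s, state %d\n",
                   ibv_get_device_name(devs[i]), p,
                   port_attr.link_layer == IBV_LINK_LAYER_ETHERNET
                       ? "Ethernet (RoCE)" : "InfiniBand",
                   port_attr.state);
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}
```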
| Specification | Details |
|---|---|
| Product Model | MCX755106AS-HEAT (NVIDIA ConnectX-7 VPI) |
| Maximum Speed | InfiniBand HDR 200Gb/s; Ethernet up to 200GbE |
| Ports Configuration | Dual-port QSFP112 (the ConnectX-7 family also offers single-port variants) |
| Host Interface | PCIe 5.0 x16 (up to x32 lanes with bifurcation / Multi-Host) |
| Form Factor | PCIe HHHL (Half Height Half Length) – standard bracket |
| Protocol Support | InfiniBand (HDR/EDR) & Ethernet (200GbE/100GbE/50GbE/25GbE/10GbE) |
| RDMA | RoCE, RoCEv2, hardware reliable transport, DCT, XRC, On-Demand Paging (ODP) |
| Security Offload | Inline IPsec/TLS/MACsec (AES-GCM 128/256-bit), Secure Boot, Flash Encryption, Device Attestation |
| Storage Offload | NVMe-oF (TCP/Fabrics), NVMe/TCP, T10-DIF, block-level XTS-AES 256/512-bit |
| Timing & Sync | IEEE 1588v2 (PTP), 12ns accuracy, SyncE (G.8262.1), Configurable PPS, Time-triggered scheduling |
| Virtualization | SR-IOV, VirtIO acceleration, overlay offload (VXLAN, GENEVE, NVGRE) |
| Advanced Features | GPUDirect RDMA, GPUDirect Storage, SHARP offload, Adaptive Routing, Burst Buffer Offload |
| Management & Boot | UEFI, PXE, iSCSI boot, InfiniBand remote boot, PLDM, Redfish, SPDM, MCTP |
*Specifications are based on NVIDIA public documentation. Verify exact configuration for your system before ordering.
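To illustrate the Timing & Sync row above, here is a hedged sketch of reading the adapter's PTP hardware clock (PHC) on Linux. The device path /dev/ptp0 is an assumption and varies per system; the real index can be found with `ethtool -T <interface>`.

```c
/* Hedged sketch: read the NIC's PTP hardware clock (PHC) on Linux.
 * /dev/ptp0 is an assumption; find the real index with `ethtool -T`. */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Standard Linux idiom (as used by linuxptp) to turn a PHC file
 * descriptor into a dynamic POSIX clock id. */
#define CLOCKFD 3
#define FD_TO_CLOCKID(fd) ((clockid_t)((((unsigned int)~(fd)) << 3) | CLOCKFD))

int main(void)
{
    int fd = open("/dev/ptp0", O_RDONLY);
    if (fd < 0) { perror("open /dev/ptp0"); return 1; }

    struct timespec ts;
    if (clock_gettime(FD_TO_CLOCKID(fd), &ts) == 0)
        printf("PHC time: %lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);

    close(fd);
    return 0;
}
```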
| Model | Ports / Speed | Host Interface | Key Target |
|---|---|---|---|
| MCX755106AS-HEAT | 2‑port HDR 200Gb/s InfiniBand / 200GbE | PCIe 5.0 x16 | AI clusters, HPC, enterprise data centers |
| MCX75310AAS-NEAT | 1‑port NDR 400Gb/s InfiniBand (OSFP) | PCIe 5.0 x16 | High‑end AI, large‑scale HPC |
| OCP 3.0 variants | SFF / TSF with HDR/NDR | PCIe Gen5 | Open Compute Project servers |
- Ultra-low latency & high throughput: Hardware RDMA and in‑network computing minimize application tail latency.
- Unified fabric: One adapter supports both InfiniBand and Ethernet, simplifying inventory and deployment.
- Future-proof PCIe 5.0: 32 GT/s per lane, double the bandwidth of PCIe 4.0, removing I/O bottlenecks.
- Reduced TCO: Offloads CPU from networking, storage, and security tasks, enabling more efficient server utilization.
- AI-optimized: Native GPUDirect and SHARPv3 collective operations accelerate model training and inference.
Hong Kong Starsurge Group Co., Limited provides end‑to‑end support including pre-sales consulting, custom firmware configuration, and worldwide shipping. All ConnectX-7 adapters are backed by a 1-year warranty (extendable) and technical assistance from experienced network engineers. We offer multilingual support, RMA services, and fast replacement logistics to minimize downtime.
- Ensure PCIe slot provides sufficient power (75W via slot, no auxiliary power required for standard operation).
- Check physical clearance: HHHL form factor fits in most 1U/2U servers; OCP variants require corresponding mezzanine slot.
- For RoCE deployment, configure DCB (Priority Flow Control) and ECN on switches for lossless Ethernet, and match the host-side DSCP marking to the switch policy (see the sketch after these notes).
- Always update firmware to latest stable version to leverage security and performance enhancements.
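On the host side of the lossless-Ethernet note above, applications using the RDMA connection manager can tag their RoCE traffic so it lands in the traffic class the switches protect with PFC. The sketch below uses the standard rdma_set_option call; the ToS value 106 (DSCP 26 with ECT(0)) is an arbitrary example and must match your fabric's actual configuration.

```c
/* Hedged sketch: mark RoCE traffic with a DSCP value via the RDMA
 * connection manager. Build with: gcc -lrdmacm -libverbs */
#include <rdma/rdma_cma.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    struct rdma_event_channel *ec = rdma_create_event_channel();
    struct rdma_cm_id *id;

    if (rdma_create_id(ec, &id, NULL, RDMA_PS_TCP)) {
        perror("rdma_create_id");
        return 1;
    }

    /* DSCP rides in the upper six ToS bits; the low two are ECN.
     * 106 = DSCP 26 + ECT(0); substitute your fabric's value. */
    uint8_t tos = 106;
    if (rdma_set_option(id, RDMA_OPTION_ID, RDMA_OPTION_ID_TOS,
                        &tos, sizeof(tos)))
        perror("rdma_set_option");
    else
        printf("RoCE ToS set to %u (DSCP %u)\n", tos, tos >> 2);

    /* ... rdma_resolve_addr()/rdma_connect() would follow here ... */
    rdma_destroy_id(id);
    rdma_destroy_event_channel(ec);
    return 0;
}
```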
Founded in 2008, Hong Kong Starsurge Group Co., Limited is a technology-driven provider of network hardware, IT services, and system integration solutions. We serve customers worldwide with products including network switches, NICs, wireless access points, controllers, cables, and networking equipment. Our experienced sales and technical team supports industries such as government, healthcare, manufacturing, education, finance, and enterprise. With a customer-first approach, Starsurge focuses on reliable quality, responsive service, and tailored solutions — helping clients build efficient, scalable, and dependable network infrastructure.
We provide IoT solutions, network management systems, custom software development, multilingual support, and global delivery. Choose Starsurge as your trusted partner for NVIDIA networking solutions.