NVIDIA ConnectX-8 SuperNIC C8180 (900-9X81E-00EX-DT0) 800G AI Networking Adapter for Hyperscale GPU Clusters

Product Details:

Brand Name: Mellanox
Model Number: C8180 (900-9X81E-00EX-DT0)
Document: connectx-datasheet-connectx...1).pdf

Payment & Shipping Terms:

Minimum Order Quantity: 1 piece
Price: Negotiable
Packaging Details: Outer carton
Delivery Time: Based on stock availability
Payment Terms: T/T
Supply Ability: Supplied per project/batch
Contact us for the best price.

Detailed Information

Model Number: C8180 (900-9X81E-00EX-DT0)
Transmission Rate: 400GbE
Ports: Dual port
Functions: LACP, stackable support, VLAN
Technology: InfiniBand
Host Interface: PCIe Gen6 x16
Interface Type: QSFP112
Brand: Mellanox

Product Description

NVIDIA ConnectX-8 SuperNIC C8180
Model: 900-9X81E-00EX-DT0 | Highest‑performance 800G networking designed for massive‑scale AI factories and cloud data centers.

Optimized for hyperscale AI workloads, the ConnectX-8 SuperNIC delivers up to 800Gb/s bi-directional bandwidth, PCIe Gen6 host interface, and advanced telemetry-based congestion control. Built for both InfiniBand and Ethernet fabrics, it powers trillion-parameter GPU computing and agentic AI applications with unprecedented efficiency.

Product Overview

The NVIDIA ConnectX-8 SuperNIC (C8180, 900-9X81E-00EX-DT0) represents a generational leap in AI fabric acceleration. Supporting up to 800 gigabits per second (Gb/s) over InfiniBand or Ethernet, this adapter eliminates network bottlenecks in large-scale GPU clusters. With native PCIe Gen6 (up to 48 lanes) and advanced features such as NVIDIA GPUDirect RDMA, SHARP in-network computing, and programmable congestion control, the ConnectX-8 ensures maximum throughput and lowest latency for training, inference, and data-intensive HPC workloads. Its power-efficient design aligns with sustainable AI data center goals while enabling scaling beyond hundreds of thousands of GPUs.

Key Performance Features
  • 800Gb/s total bandwidth – supports 800/400/200/100 Gb/s InfiniBand speeds and 400/200/100/50/25 Gb/s Ethernet.
  • PCIe Gen6 host interface – up to 48 lanes, low overhead, and Multi-Host support for up to four hosts.
  • In-Network Computing – SHARPv3 for collective operations, MPI accelerations, rendezvous protocol offloads.
  • GPUDirect RDMA & Storage – direct GPU memory access and GPUDirect Storage for zero-copy I/O.
  • Advanced congestion control & telemetry – real-time flow optimization for AI tail latency.
  • Hardware security – secure boot, flash encryption, device attestation (SPDM 1.1), inline crypto (IPsec/MACsec/PSP).
Technology & Offloads
  • RDMA & RoCEv2 acceleration with programmable congestion control.
  • Ethernet Accelerated Switching and Packet Processing (ASAP²) for SDN/OVS offload.
  • Overlay network acceleration: VXLAN, GENEVE, NVGRE.
  • Stateless TCP offloads (LSO, LRO, GRO, TSS, RSS).
  • Precision Time Protocol (PTP) IEEE 1588v2 Class C, SyncE, PTM, time-triggered scheduling.
  • Burst buffer offloads, high-speed packet reordering.
Typical Deployments

The ConnectX-8 SuperNIC C8180 is purpose-built for next-generation AI fabrics and hyperscale cloud environments:

  • AI Factories & Large Language Model Clusters – trillion-parameter model training with 800G front-end and back-end networks.
  • High-Performance Computing (HPC) – SHARPv3 in-network reduction accelerates MPI collectives for scientific simulations.
  • GPU-Accelerated Cloud Data Centers – multi-tenant isolation, overlay offloads, and advanced QoS.
  • Enterprise AI Infrastructure – from inference farms to AI data platforms requiring deterministic low latency.
  • Storage & Converged Fabrics – GPUDirect Storage and RoCEv2 for NVMe-oF and distributed file systems.
Compatibility & Ecosystem

Seamless integration with NVIDIA networking platforms and major server OEMs. Validated software stacks include:

Software & Middleware
  • NVIDIA NCCL, HPC-X, DOCA UCC/UCCX
  • Open MPI, MVAPICH2
  • Linux distributions (RHEL, Ubuntu, SLES)
  • Windows Server with RDMA support
  • DPDK & VPP for telco/NFV
Hardware Platforms
  • NVIDIA DGX / HGX systems
  • PCIe Gen6 enabled servers (x86 / Arm / GPU-accelerated nodes)
  • Industry-standard OCP 3.0 TSFF and mezzanine designs
  • Compatible with 800G OSFP and 400G QSFP112 optics
Specifications
Product Model: C8180 (900-9X81E-00EX-DT0)
Maximum Bandwidth: 800 Gb/s
InfiniBand Speeds: 800 / 400 / 200 / 100 Gb/s
Ethernet Speeds: 400 / 200 / 100 / 50 / 25 Gb/s
Host Interface: PCIe Gen6 (up to 48 lanes), Multi-Host capable (up to 4 hosts)
Form Factors: PCIe HHHL 1P x OSFP, PCIe HHHL 2P x QSFP112, Dual ConnectX-8 Mezzanine, OCP 3.0 TSFF 1P x OSFP
Port Configuration: 1x 800G OSFP or 2x 400G; split into up to 8 logical ports
RDMA Support: RoCEv2, IBTA v1.7 compliant
MTU: 256 to 4096 bytes; messages up to 1GB
Security Features: Secure boot (hardware root of trust), flash encryption, SPDM 1.1, inline IPsec/MACsec/PSP
Timing & Sync: PTP IEEE 1588v2 Class C, SyncE (G.8262.1), PTM, PPS in/out
Management: NC-SI; MCTP over SMBus/PCIe; PLDM (DSP0248/0267/0218); SPI flash; JTAG
Network Boot: InfiniBand / Ethernet PXE, iSCSI, UEFI
Note: Some advanced features may require specific firmware versions or software licenses. Please confirm with your solution architect before ordering.
Selection Guide
SKU / Option | Port / Speed | Form Factor | Typical Use Case
C8180 – 900-9X81E-00EX-DT0 | 1x OSFP 800G (or split to 2x 400G / 8x 100G) | PCIe HHHL | AI training nodes, high-density GPU servers
Dual ConnectX-8 Mezzanine | 2x 400G QSFP112 | Proprietary mezzanine | NVIDIA HGX / OEM integrated systems
OCP 3.0 TSFF 1P | 1x OSFP 800G | OCP 3.0 SFF | Cloud-optimized OCP platforms
PCIe HHHL 2P | 2x 400G QSFP112 | PCIe HHHL | Dual-port high availability or multi-fabric
Why Choose ConnectX-8 SuperNIC
Unmatched AI Fabric Performance

800Gb/s line rate and advanced congestion management eliminate performance variability in multi-tenant AI clouds. Combined with SHARP in-network computing, this drastically reduces collective-operation time.

Future-Ready Infrastructure

PCIe Gen6 and support for both InfiniBand and Ethernet ensure investment protection for next-gen GPU architectures. Power-efficient design lowers TCO at scale.

Robust Security & Telemetry

Hardware root of trust, secure firmware updates, and telemetry-based flow control give operators confidence in production AI factories.

Seamless NVIDIA Ecosystem Integration

Native integration with NCCL, DOCA, and GPUDirect technologies reduces time-to-solution for AI researchers and data scientists.
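As a minimal illustration of that integration, the sketch below shows environment variables commonly used to steer NCCL traffic onto a ConnectX adapter before launching a distributed job. The device name mlx5_0, the GID index, and the train.py command are placeholders, not part of this listing; exact values depend on your fabric (InfiniBand vs. RoCEv2) and your NCCL/DOCA versions.

```python
import os
import subprocess

# Hypothetical settings for illustration only -- adjust device names and
# values to match your fabric, NCCL version, and cluster layout.
nccl_env = {
    "NCCL_IB_HCA": "mlx5_0",      # pin NCCL to a specific RDMA device (assumed name)
    "NCCL_IB_GID_INDEX": "3",     # GID index typically used on RoCEv2 fabrics
    "NCCL_NET_GDR_LEVEL": "SYS",  # allow GPUDirect RDMA across the system topology
    "NCCL_DEBUG": "INFO",         # print NCCL's transport/device selection decisions
}
os.environ.update(nccl_env)

# Launch the training job in the prepared environment (placeholder command).
subprocess.run(["python", "train.py"], check=True, env=os.environ)
```

With NCCL_DEBUG=INFO, the job log reports which HCA and transport NCCL actually selected, which is a quick way to confirm the SuperNIC is carrying the collective traffic.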

Service & Support from Hong Kong Starsurge

As an authorized channel partner, Starsurge provides global logistics, technical pre-sales consulting, and post-sales support for NVIDIA ConnectX-8 SuperNIC. Our services include:

  • Integration testing with your server/storage environment.
  • Firmware management and compatibility validation.
  • RMA and advance replacement options.
  • Custom cabling and transceiver bundling (OSFP, QSFP112, breakout cables).
  • Multilingual technical support (English, Chinese, and more).
Frequently Asked Questions
Q: What is the difference between ConnectX-8 SuperNIC and previous ConnectX-7?
A: ConnectX-8 doubles the peak bandwidth to 800Gb/s, upgrades host interface to PCIe Gen6 (up to 48 lanes), introduces enhanced telemetry congestion control, and improves in-network computing capabilities for trillion-parameter AI models.
Q: Can the C8180 operate in Ethernet mode only?
A: Yes. The ConnectX-8 SuperNIC supports both InfiniBand and Ethernet protocols. You can configure it as a standard 800G Ethernet NIC or native InfiniBand adapter depending on your fabric.
Q: Does it support split ports?
A: Yes. The single OSFP port can be split into up to 8 logical ports (e.g., 2x400G, 4x200G, 8x100G) with appropriate breakout cables, giving flexibility for diverse topologies.
Q: Is the card compatible with PCIe Gen5 servers?
A: Absolutely. While designed for PCIe Gen6, it is backward compatible with PCIe Gen5/Gen4 slots, though maximum host bandwidth may be limited by the slot generation.
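As a rough back-of-the-envelope illustration of that slot limit (raw per-direction signaling rates only, ignoring encoding and protocol overhead):

```python
# Approximate raw per-lane, per-direction PCIe throughput in Gb/s.
# Usable bandwidth is lower once encoding and protocol overhead are counted.
PCIE_GBPS_PER_LANE = {"Gen4": 16, "Gen5": 32, "Gen6": 64}

LANES = 16           # typical x16 slot
NIC_PORT_GBPS = 800  # ConnectX-8 total network bandwidth

for gen, per_lane in PCIE_GBPS_PER_LANE.items():
    slot_gbps = per_lane * LANES
    verdict = "can feed" if slot_gbps >= NIC_PORT_GBPS else "limits"
    print(f"{gen} x{LANES}: ~{slot_gbps} Gb/s raw -> {verdict} an 800G port")

# Expected output: Gen4 x16 (~256 Gb/s) and Gen5 x16 (~512 Gb/s) limit the
# port, while Gen6 x16 (~1024 Gb/s) has headroom for the full 800 Gb/s.
```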
Q: What security standards are supported?
A: Secure boot with hardware root of trust, flash encryption, SPDM 1.1 attestation, and inline IPsec, MACsec, and PSP crypto offloads.
Important Precautions
  • Ensure adequate airflow and cooling in high-density server chassis when using 800G optics (power dissipation ~25-30W typical).
  • Always use certified OSFP or QSFP112 optical/copper modules from NVIDIA or validated partners to guarantee signal integrity.
  • Firmware updates must follow NVIDIA release notes; unsupported firmware versions may cause performance degradation.
  • Multi-host configuration requires specific motherboard PCIe slot bifurcation support – please verify with your server vendor.
  • Some advanced features (e.g., SHARPv3, PTP Class C) require corresponding switch infrastructure (NVIDIA Quantum-3 or Spectrum-5 families).
About Hong Kong Starsurge Group

Hong Kong Starsurge Group Co., Limited is a technology-driven provider of network hardware, IT services, and system integration solutions. Founded in 2008, the company serves customers worldwide with products including network switches, NICs, wireless access points, controllers, cables, and related networking equipment. Backed by an experienced sales and technical team, Starsurge supports industries such as government, healthcare, manufacturing, education, finance, and enterprise. The company also offers IoT solutions, network management systems, custom software development, multilingual support, and global delivery. With a customer-first approach, Starsurge focuses on reliable quality, responsive service, and tailored solutions that help clients build efficient, scalable, and dependable network infrastructure.

For NVIDIA ConnectX-8 SuperNIC C8180 (900-9X81E-00EX-DT0) pricing, samples, or integration advice, please contact our networking specialists.

Key Facts – At a Glance
  • 800Gb/s bandwidth
  • PCIe Gen6 (up to 48 lanes)
  • InfiniBand & Ethernet dual-mode
  • GPUDirect RDMA / Storage
  • SHARP in-network computing
  • Secure boot & crypto offloads
  • PTP Class C / SyncE
  • Up to 8 split ports
Compatibility Matrix (Key Components)
Switch Platforms: NVIDIA Quantum-3 InfiniBand, Spectrum-5 Ethernet (800G capable)
Optical Transceivers: NVIDIA OSFP 800G DR8 / 2xFR4, QSFP112 400G SR4/DR4
GPU Servers: NVIDIA DGX H100/H200, Supermicro GPU X13, PowerEdge XE9680, HPE Cray XD
Operating Systems: Ubuntu 22.04/24.04, RHEL 9.x, Rocky Linux 9, Windows Server 2025 (RDMA)
Buyer Checklist – ConnectX-8 SuperNIC
  • ☐ Confirm PCIe slot type (PCIe Gen6 x16 or x32? For full 800G host bandwidth, at least PCIe 6.0 x16 recommended)
  • ☐ Verify thermal clearance and airflow direction in your chassis (passive heatsink or active fan required?)
  • ☐ Choose correct form factor: HHHL / OCP 3.0 / Mezzanine for your server.
  • ☐ Select compatible optics/cables: OSFP 800G or QSFP112 2x400G depending on port variant.
  • ☐ Ensure target switch supports 800G speed and required protocols (InfiniBand NDR or Ethernet 800G).
  • ☐ Check driver & firmware support: MLNX_OFED or NVIDIA DOCA version for your OS (see the verification sketch below).
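For that last checklist item, the minimal sketch below is one way to confirm the adapter is enumerated and to report the installed driver stack and port/firmware status. It assumes a Linux host with MLNX_OFED or DOCA-host installed; the wrapped commands (lspci, ofed_info, ibstat) are standard tools, but adapt the checks to your own environment.

```python
import shutil
import subprocess

def run(cmd):
    """Run a command if it is installed and return its output, else a note."""
    if shutil.which(cmd[0]) is None:
        return f"{cmd[0]}: not installed on this host"
    return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

# Is the ConnectX adapter enumerated on the PCIe bus?
# 0x15b3 is the Mellanox/NVIDIA networking vendor ID.
print(run(["lspci", "-d", "15b3:"]))

# Which MLNX_OFED / driver stack is installed?
print(run(["ofed_info", "-s"]))

# Link state, rate, and firmware version of each RDMA port.
print(run(["ibstat"]))
```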
Related Products
NVIDIA Quantum-3 Switch
64-port 800G InfiniBand switch for GPU clusters.
NVIDIA Spectrum-5 Ethernet Switch
51.2 Tbps, 800G AI optimized Ethernet fabric.
NVIDIA BlueField-3 DPU
Offloads storage & security for AI cloud data centers.
OSFP 800G DAC & AOC Cables
Short-reach copper and active optical cables for SuperNIC interconnection.
Related Guides (External Resources)
  • NVIDIA Networking Performance Tuning Guide for AI Fabrics
  • ConnectX-8 Adapter Cards Installation Manual (available upon request)
  • RoCEv2 Congestion Control Best Practices – White Paper
  • Understanding SHARPv3 In-Network Reduction for LLMs
