
HD-RS2100 Ver: 01/MGX

  • Form Factor & Dimensions: 2U rackmount server with dimensions of 438mm (W) x 87mm (H) x 900mm (D).

  • Processing Power: Features the NVIDIA GH200 Grace Hopper Superchip, pairing one Grace CPU with one Hopper GPU connected via NVLink-C2C, with a total TDP of up to 1000W.

  • Memory & Storage: Integrated 480GB LPDDR5X ECC memory (CPU) and up to 144GB HBM3e (GPU) with 4.9TB/s bandwidth, along with 4x 2.5" Gen5 NVMe drive bays and 2x M.2 PCIe Gen5 slots.

  • Power & Cooling: 1+1 redundant 2000W CRPS power supply (80-PLUS Titanium) with a cooling system featuring 3x 60mm and 4x 80mm high-speed fans.

  • Connectivity & Expansion: Offers 3x PCIe Gen5 x16 slots, 2x 10GbE LAN ports, 1x dedicated IPMI LAN port, and multiple USB 3.2 Gen1 ports.

  • Management & OS Support: Features ASPEED AST2600 BMC for remote management with IPMI, HTML5 KVM, and sensor monitoring, supporting Ubuntu 22.04.3, RHEL 9.3, and SUSE SLE 15 SP5 (aarch64).


Accelerate Time to Market for Generative AI and Beyond
Hexadata's NVIDIA MGX systems, featuring the latest NVIDIA GH200 Grace Hopper Superchip, offer an advanced solution for AI infrastructure and accelerated computing. These systems are optimized for the most demanding workloads in a compact 2U form factor and are designed for flexibility, performance, and scalability across current and future computing needs.

Next-Gen Performance : The NVIDIA GH200 Grace Hopper Superchip delivers extraordinary acceleration for AI, machine learning, and deep learning workloads, with a focus on large datasets and high memory demands.

By combining an NVIDIA Grace CPU and an NVIDIA Hopper GPU in a single system, the Grace Hopper architecture offers impressive memory bandwidth and ultra-low latency, delivering massive performance speedups.

Compact Yet Powerful 2U Form Factor : Despite its compact 2U form factor, the system offers the highest levels of performance, ideal for high-demand AI applications without taking up extensive data center space.

The integrated NVIDIA Hopper GPU enhances the system's AI compute power while fitting neatly into a compact chassis.

Modular Expansion for Future-Proofing : The modular bays allow for easy PCIe expansion, enabling support for both current and future GPU, DPU, and CPU technologies. This flexibility makes it an ideal solution for evolving AI workloads, ensuring your infrastructure stays ahead of the curve.
Users can adapt the system to new technologies, such as newer GPU models or additional high-speed networking cards, without the need for a complete system overhaul.

Optimized for Generative AI and Large-Scale Models : These systems are specially designed to handle large, complex models used in generative AI tasks such as natural language processing (NLP), image synthesis, and autonomous systems. The massive memory bandwidth and high-performance processing capabilities support the rapid iteration and refinement required by these advanced workloads.

Accelerated AI Workloads with CUDA Platform : With support for the NVIDIA CUDA platform, the system is optimized for AI and computational tasks across a broad range of industries, delivering consistent, reliable, and fast performance.
This setup allows seamless execution of AI models, improving training and inference times dramatically.
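
As an illustrative sketch (not vendor-supplied code), the snippet below shows the usual pattern for a CUDA-accelerated PyTorch workload on this class of system; the model and batch sizes are placeholders, and it assumes a PyTorch build with CUDA support for aarch64:

  import torch
  import torch.nn as nn

  # Select the Hopper GPU when the CUDA stack is available; otherwise fall back to the Grace CPU.
  device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
  print(f"Running on: {device}")

  # Placeholder model and batch; any CUDA-enabled framework workload follows the same pattern.
  model = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 1024)).to(device)
  batch = torch.randn(64, 4096, device=device)

  with torch.inference_mode():
      output = model(batch)
  print(output.shape)  # torch.Size([64, 1024])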

Enhanced Scalability and Clustering : The modular design allows for scale-out configurations, enabling users to link multiple systems for high-performance distributed computing. This is ideal for building large AI clusters to handle massive datasets and complex models.

High-speed networking ensures efficient data sharing across systems, facilitating collaboration and faster data processing.
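
As a minimal sketch of such a scale-out configuration, assuming PyTorch with the NCCL backend and a launcher such as torchrun that sets the rank and world-size environment variables (the head-node address below is a placeholder):

  import os
  import torch
  import torch.distributed as dist

  # Placeholder rendezvous settings; a launcher such as torchrun normally provides these.
  os.environ.setdefault("MASTER_ADDR", "10.0.0.1")  # hypothetical head-node address
  os.environ.setdefault("MASTER_PORT", "29500")

  # NCCL routes collectives over the GPUs and the high-speed fabric (e.g. InfiniBand).
  dist.init_process_group(backend="nccl",
                          rank=int(os.environ["RANK"]),
                          world_size=int(os.environ["WORLD_SIZE"]))

  tensor = torch.ones(1, device="cuda")
  dist.all_reduce(tensor)  # sums the tensor across every linked system
  print(f"rank {dist.get_rank()}: {tensor.item()}")
  dist.destroy_process_group()

Launched with torchrun --nnodes=<N> --nproc_per_node=1 on each system, every rank prints the total number of participating processes, confirming that the systems are linked.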

Use Cases :

Generative AI :
Accelerating the training of large generative models for AI-based content generation, chatbots, virtual assistants, and more.

Research and Scientific Computing :
Providing powerful computing for researchers working with complex simulations, data analysis, or genomic research.

Deep Learning :
Training deep learning models with high memory requirements, supporting fields like natural language processing, computer vision, and speech recognition.

Data Centers and Edge Computing :
Offering high-performance computing solutions for AI, cloud services, and edge environments while maintaining the flexibility to expand as technology evolves.

Power Efficiency
Automatic Fan Speed Control

Hexadata servers feature Automatic Fan Speed Control to achieve the best balance of cooling and power efficiency. Individual fan speeds are automatically adjusted according to temperature sensors placed strategically throughout the server.

Cold Redundancy
To take advantage of the fact that a PSU runs at greater power efficiency under higher load, Hexadata has introduced a power management feature called Cold Redundancy for servers with N+1 power supplies. When the total system load falls below 40%, the system automatically places one PSU into standby mode, resulting in a 10% improvement in efficiency.

Hardware Security
Optional TPM 2.0 Module
For hardware-based authentication, passwords, encryption keys, and digital certificates are stored in a TPM module to prevent unauthorized users from gaining access to your data. Hexadata TPM modules are available with either a Serial Peripheral Interface (SPI) or Low Pin Count (LPC) bus.

Hexadata Management Console
For management and maintenance of a server or a small cluster, users can use the Hexadata Management Console, which is preinstalled on each server. Once the servers are running, IT staff can perform real-time health monitoring and management on each server through the browser-based graphical user interface. In addition, the Hexadata Management Console also provides :

  • Support for standard IPMI specifications, allowing users to integrate services into a single platform through an open interface (see the sketch after this list)
  • Automatic event recording, which captures system behavior from 30 seconds before an event occurs, making it easier to determine subsequent actions
  • Support for remote KVM over IP, providing access to the BIOS of a remote system even during business hours
  • Support for remote media redirection, through which an entire operating system or patches can be deployed remotely without physical access to the servers
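
Because the console follows the standard IPMI specification, the BMC can also be queried with generic tooling. A minimal sketch, assuming the common ipmitool utility is installed; the BMC address and credentials are placeholders:

  import subprocess

  # Placeholder address and credentials for the dedicated IPMI LAN port.
  BMC_HOST, BMC_USER, BMC_PASS = "192.168.1.100", "admin", "password"

  def ipmi(*args: str) -> str:
      """Run an ipmitool command against the BMC over the lanplus interface."""
      cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
             "-U", BMC_USER, "-P", BMC_PASS, *args]
      return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

  # Read the sensor repository (temperatures, fan speeds, voltages) and the event log.
  print(ipmi("sdr", "list"))
  print(ipmi("sel", "list"))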

Why Choose Hexadata NVIDIA MGX Systems :
Exceptional Performance in a Small Form Factor : Experience top-tier AI performance in a 2U chassis, without compromising on power or scalability.
Future-Ready Infrastructure : Expand and adapt to future technology advancements with PCIe expansion capabilities, making the systems future-proof for evolving workloads.
Versatility Across Industries : The modular architecture and raw power of the systems enable use across various sectors, from research and development to industrial applications.

The Hexadata NVIDIA MGX systems with the NVIDIA GH200 Grace Hopper Superchip offer an ideal foundation for enterprises, research institutions, and AI-driven companies looking to build scalable, high-performance infrastructure for complex, cutting-edge workloads. These systems are designed not just for today’s AI needs but also for the advancements and challenges of tomorrow.

Stepping Further into the Era of AI and GPU-Accelerated HPC
Moving beyond pure CPU applications, the NVIDIA GH200 Grace Hopper Superchip combines an NVIDIA Grace CPU and an NVIDIA H100 Tensor Core GPU for giant-scale AI and HPC applications. Using NVIDIA NVLink-C2C technology, the two processors are unified on a single superchip, forming a powerful computational module. The coherent memory design leverages both high-speed HBM3 or HBM3e GPU memory and large-capacity LPDDR5X CPU memory. The superchip can also scale out over InfiniBand networking by adopting NVIDIA BlueField-3 DPUs or NICs, connecting systems at speeds of up to 100GB/s for ML and HPC workloads. The upcoming NVIDIA GH200 NVL32 can further improve deep learning and HPC workloads by connecting up to 32 superchips through the NVLink Switch System, which is built on NVLink switches and provides 900GB/s of bandwidth between any two superchips, making the most of the powerful compute chips and extended GPU memory.
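
On a running system, the GPU side of this coherent design can be inspected from a framework; a small illustrative check with PyTorch (the reported values depend on the exact GH200 configuration):

  import torch

  props = torch.cuda.get_device_properties(0)
  print(props.name)  # reports the Hopper GPU integrated in the superchip
  print(f"GPU memory: {props.total_memory / 2**30:.0f} GiB")  # HBM3/HBM3e capacity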
