Form Factor & Design: 8U rackmount chassis with dimensions 930 x 448 x 353.6mm, optimized for high-density deployments.
Compute Power: Supports dual AMD EPYC™ 9005/9004 processors with up to 500W TDP and 12+12 DIMM slots for DDR5 memory at speeds up to 6400 MT/s.
Extreme GPU Support: Integrated NVIDIA® HGX B200 platform with 8 GPUs and NVSwitch™ technology for maximum AI/compute workloads.
High-Speed Storage: 12 hot-swap 2.5" NVMe bays (PCIe 5.0) and dual internal M.2 slots supporting PCIe 3.0 or SATA.
Robust Connectivity: Dual 1GbE LAN ports (Intel® i350), dedicated IPMI management port, and extensive rear I/O including USB 3.2 and VGA.
Efficient Power & Cooling: 12 x 3000W 80 PLUS Titanium CRPS redundant power supplies and 33 PWM fans for optimal thermal performance.
The Hexadata HD-SW300 Version: SXM-B200/AA is a high-performance server engineered for high-demand applications in the fields of Generative AI, High Performance Computing (HPC), and Data Analytics. Built around the AMD EPYC 9005/9004 series processors and the NVIDIA HGX B200 8-GPU configuration, this server is designed to meet the growing demands for accelerated computing, AI model training, and inference. It represents a major leap forward in computational power, offering an all-in-one solution for enterprises and research institutions.
Processor Support: AMD EPYC 9005/9004 Series
AMD has made significant strides in processor design, and the HD-SW300 is powered by AMD's 5th Generation EPYC processors (9005 series), which leverage the SP5 platform. These processors are built on an advanced 3nm process with the AMD "Zen 5" and "Zen 5c" core architectures, providing significant improvements in energy efficiency and cost optimization. With up to 192 cores, higher frequencies, and larger caches, these processors provide ideal performance for cloud-native applications, general-purpose workloads, and highly specialized computational tasks.
Dual Socket SP5 (LGA 6096) configuration, offering scalability and optimal parallel processing.
Support for AMD 3D V-Cache™ technology, which stacks additional L3 cache on the die to accelerate cache-sensitive, compute-heavy workloads. These processors deliver robust multi-threaded performance, ensuring the HD-SW300 can tackle a variety of tasks, from traditional data center applications to next-generation AI-driven projects and deep learning environments.
PCIe Gen5 and DDR5 Memory Support:
The system supports PCIe Gen5, which provides 2x the data throughput of its predecessor. This significant increase in bandwidth is critical for ensuring the seamless movement of data between CPUs, GPUs, and storage devices, leading to faster processing and reduced bottlenecks.
With DDR5 memory, the system can handle higher bandwidths and lower latencies, providing an additional boost to performance, especially for the data-heavy workloads that are typical in AI and HPC environments.
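The bandwidth claims above can be checked with back-of-the-envelope arithmetic. The sketch below derives theoretical peaks from the spec numbers; the 12-channels-per-socket figure is an assumption inferred from the 12+12 DIMM layout, and real-world throughput will be lower than these ceilings.

```python
# Theoretical peak bandwidth figures for the HD-SW300's memory and I/O.
# Assumption: 12 DDR5 channels per socket (inferred from 12+12 DIMM slots).

def ddr5_peak_gbps(mt_per_s: int, channels: int, bus_bytes: int = 8) -> float:
    """Peak DDR5 bandwidth in GB/s: transfers/s x 8-byte bus x channel count."""
    return mt_per_s * bus_bytes * channels / 1000  # MT/s * bytes -> GB/s

def pcie5_x16_gbps() -> float:
    """PCIe Gen5 x16 unidirectional peak: 32 GT/s, 128b/130b encoding, 16 lanes."""
    return 32 * (128 / 130) / 8 * 16  # GT/s -> GB/s per direction

print(f"DDR5-6400 x 12 channels: {ddr5_peak_gbps(6400, 12):.1f} GB/s per socket")
print(f"PCIe 5.0 x16 link:       {pcie5_x16_gbps():.1f} GB/s per direction")
```

At DDR5-6400 this works out to roughly 614 GB/s of memory bandwidth per socket, and about 63 GB/s per direction on each PCIe Gen5 x16 link (double the 32 GT/s-lane rate of Gen4, as noted above).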
NVIDIA HGX B200 8-GPU:
The Hexadata HD-SW300 integrates the NVIDIA HGX B200, a powerful accelerator designed to take AI computing to new heights. It supports NVIDIA Blackwell Tensor Core GPUs, which are specifically engineered for AI and deep learning tasks.
The system is capable of housing up to 8 GPUs, with a staggering 1.4 terabytes (TB) of GPU memory and 64 terabytes per second (TB/s) of aggregate memory bandwidth. According to NVIDIA, this combination enables up to 15X faster real-time inference on trillion-parameter models while reducing energy consumption and cost by up to 12X compared to the previous GPU generation.
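The aggregate figures quoted above follow directly from the per-GPU numbers NVIDIA publishes for the B200 (roughly 180 GB of HBM3e and 8 TB/s per GPU); those per-GPU values are an assumption drawn from NVIDIA's public specs, not from this spec sheet.

```python
# Sanity-check the aggregate HGX B200 figures from per-GPU specs.
# Assumed per-GPU values (NVIDIA B200 public specs): 180 GB HBM3e, 8 TB/s.
GPUS = 8
HBM_PER_GPU_GB = 180   # assumed B200 HBM3e capacity per GPU
BW_PER_GPU_TBS = 8     # assumed B200 HBM3e bandwidth per GPU

total_mem_tb = GPUS * HBM_PER_GPU_GB / 1000
total_bw_tbs = GPUS * BW_PER_GPU_TBS
print(f"{total_mem_tb:.2f} TB total GPU memory")   # ~1.4 TB, as quoted
print(f"{total_bw_tbs} TB/s aggregate bandwidth")  # 64 TB/s, as quoted
```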
NVIDIA NVLink and NVSwitch technologies enable ultra-fast GPU-to-GPU bandwidth of 1,800GB/s, which is critical for parallel processing and the efficient execution of large AI models.
The HGX B200 supports advanced networking solutions, including NVIDIA Quantum-2 InfiniBand and Spectrum-X Ethernet, delivering speeds of up to 400Gb/s. These high-speed interconnects allow for low-latency, high-bandwidth data transfer between nodes, ensuring minimal delays in AI and HPC operations.
NVIDIA BlueField®-3 Data Processing Units (DPUs) provide cloud networking, composable storage, zero-trust security, and GPU compute elasticity. This makes the Hexadata platform ideal for hyperscale AI cloud environments, where dynamic resource allocation and scalability are essential.
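NVLink/NVSwitch bandwidth matters chiefly because multi-GPU training constantly exchanges gradients with collective operations such as all-reduce. The sketch below is a pure-Python stand-in for the ring all-reduce pattern, to illustrate the communication that these interconnects accelerate; real frameworks run this through NCCL over NVLink, not Python lists.

```python
# Illustrative ring all-reduce: each of n "ranks" (GPUs) starts with its own
# gradient vector; afterwards every rank holds the element-wise sum.
# Pure-Python simulation for illustration only -- no GPUs involved.

def ring_all_reduce(vectors):
    """Simulate ring all-reduce over n equal-length vectors (one per rank)."""
    n = len(vectors)
    length = len(vectors[0])
    assert length % n == 0, "vector length must divide evenly into n chunks"
    chunk = length // n
    bufs = [list(v) for v in vectors]  # per-rank working copy

    def span(c):
        return slice(c * chunk, (c + 1) * chunk)

    # Phase 1: reduce-scatter. At each step, rank i sends one chunk to rank
    # i+1, which adds it in. After n-1 steps, rank i holds the fully summed
    # chunk (i + 1) % n. Sends are snapshotted before applying, as in a real
    # simultaneous exchange.
    for step in range(n - 1):
        sends = [(i, (i - step) % n, bufs[i][span((i - step) % n)])
                 for i in range(n)]
        for i, c, data in sends:
            dst = bufs[(i + 1) % n]
            for k, v in enumerate(data):
                dst[c * chunk + k] += v

    # Phase 2: all-gather. Each finished chunk circulates around the ring
    # until every rank has every summed chunk.
    for step in range(n - 1):
        sends = [(i, (i + 1 - step) % n, bufs[i][span((i + 1 - step) % n)])
                 for i in range(n)]
        for i, c, data in sends:
            bufs[(i + 1) % n][span(c)] = data
    return bufs
```

Each rank sends and receives 2(n-1)/n of the vector in total, so the collective's wall-clock time is dominated by per-link bandwidth, which is exactly what the 1,800 GB/s NVLink figure addresses.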
The system is equipped with 12 x 2.5" Gen5 NVMe/SATA hot-swap bays for rapid and reliable storage performance, supporting high-performance NVMe drives for quick access to data. The platform also includes 2 x FHHL dual-slot PCIe Gen5 x16 slots and 8 x HHHL single-slot PCIe Gen5 x16 slots, allowing for flexibility in adding additional accelerators, storage, or networking cards.
Reliability and Power Efficiency:
The Hexadata HD-SW300 is designed with 12 x 3000W 80 PLUS Titanium redundant power supplies, ensuring exceptional power efficiency and reliability. This redundancy keeps the system operational even if individual power supplies fail, making it suitable for critical workloads. The 80 PLUS Titanium certification ensures the system consumes less power, providing a more energy-efficient solution compared to previous-generation systems.
Scalable and Flexible Design:
The 8U rackmount form factor ensures that the system is well-suited for deployment in data centers, where space is often a concern but high computational power is essential. The form factor allows for easy scaling, while the system's robust design ensures reliability and longevity in demanding environments.
Hexadata Management Console
For management and maintenance of a server or a small cluster, users can use the Hexadata Management Console, which is preinstalled on each server. Once the servers are running, IT staff can perform real-time health monitoring and management on each server through the browser-based graphical user interface. In addition, the Hexadata Management Console supports the hardware security features described below.
Hardware Security
TPM 2.0 Module
For hardware-based authentication, passwords, encryption keys, and digital certificates are stored in a TPM module to prevent unwanted users from gaining access to your data. Hexadata TPM modules are available with either a Serial Peripheral Interface (SPI) or Low Pin Count (LPC) bus.
Stack Your AI Rig with Hexadata HD-SW300 Version: SXM-B200/AA
The Hexadata HD-SW300 Version: SXM-B200 is an advanced AI and HPC platform that pushes the envelope of performance, efficiency, and scalability. By leveraging AMD's latest processors and NVIDIA's cutting-edge GPU technologies, the system is designed to accelerate workloads in AI, deep learning, data analytics, and HPC environments. With a comprehensive array of features, including PCIe Gen5, advanced networking, and massive GPU memory bandwidth, the Hexadata platform is well-suited for organizations aiming to stay at the forefront of AI innovation while ensuring cost-effective and energy-efficient operations.
This solution is built for businesses and research institutions seeking scalable infrastructure to support the most demanding AI, data analytics, and high-performance computing tasks.
AI Frameworks Compatible with Hexadata HD-SW300
The Hexadata HD-SW300, powered by NVIDIA GPUs, supports several AI frameworks that leverage its high-performance computing capabilities for deep learning and other AI tasks. Some of the most widely used AI frameworks supported on the HD-SW300 include:
TensorFlow
An open-source deep learning framework developed by Google, widely used for various machine learning and deep learning tasks. TensorFlow provides strong GPU support and benefits significantly from NVIDIA's CUDA and cuDNN libraries.
PyTorch
An open-source machine learning framework developed by Facebook's AI Research lab. PyTorch is known for its dynamic computational graph and ease of use, especially in research and prototyping.
Keras
An open-source neural network library written in Python that acts as a high-level API for TensorFlow. Keras simplifies the creation of deep learning models and is often used for rapid prototyping.
Apache MXNet
A deep learning framework designed for both efficiency and flexibility, maintained as an Apache Software Foundation project. It supports multiple programming languages and is optimized for scalability and performance.
Caffe
A deep learning framework developed by the Berkeley Vision and Learning Center (BVLC). Caffe is known for its speed and efficiency in training deep neural networks, particularly for computer vision tasks.
Chainer
A flexible and intuitive deep learning framework developed by Preferred Networks. Chainer is designed for researchers and allows for the easy construction of complex neural networks.
Microsoft Cognitive Toolkit (CNTK)
A deep learning framework developed by Microsoft, known for its scalability and performance in training deep learning models.
Theano
One of the earliest deep learning frameworks, developed by the Montreal Institute for Learning Algorithms (MILA) at the University of Montreal. While Theano is no longer actively developed, it laid the groundwork for many subsequent frameworks and still works with NVIDIA GPUs.
Torch
A scientific computing framework with wide support for machine learning algorithms, providing a Lua-based scripting language. Torch was one of the predecessors to PyTorch.
These frameworks utilize NVIDIA's CUDA parallel computing platform and programming model, along with cuDNN, a GPU-accelerated library for deep neural networks, to achieve optimal performance on NVIDIA GPUs. This support allows researchers and developers to leverage the system's powerful computational capabilities to train and deploy complex AI models efficiently.
Use Cases for Hexadata HD-SW300
The Hexadata HD-SW300, powered by NVIDIA HGX B200 GPUs based on the Blackwell architecture, is designed for high-performance computing, AI, and deep learning applications. Here are some use cases where the platform excels:
Deep Learning Training
Inference
Data Analytics
Scientific Computing
High-Performance Computing (HPC)
AI-Driven Applications
Enterprise AI
Virtualization and Cloud Computing
Edge Computing
The advanced architecture and large memory capacity of the Hexadata HD-SW300, powered by NVIDIA HGX B200 GPUs, make it suitable for these demanding use cases, enabling breakthroughs in AI, scientific research, and high-performance computing.