
HD-SW300 Version - SXM-B200/IA

Form Factor & Dimensions:

  • 8U rackmount chassis, sized 930 x 448 x 353.6mm (36.6'' x 17.6'' x 13.9'').

GPU & CPU Support:

  • NVIDIA® HGX B200 8-GPU system with NVSwitch™.

  • Dual-socket support for 5th and 4th Gen Intel® Xeon® Scalable Processors (up to 350W TDP).

High-Speed Storage:

  • 8 hot-swap 2.5" NVMe (PCIe 5.0 x4) bays via PCIe Switch + additional 4 hot-swap 2.5" SATA/NVMe drive bays.

Memory Capacity & Speed:

  • 32 DIMM slots supporting DDR5 RDIMM up to 5600 MT/s and maximum DIMM capacity up to 256GB per slot (RDIMM-3DS).

Network & Management:

  • Dual 1GbE ports (Intel i350) + dedicated IPMI LAN port for remote management via ASPEED AST2600.

Power & Cooling:

  • Redundant 6+6 CRPS power supplies, 3002.4W each, with 80 PLUS Titanium efficiency.

  • Cooling system includes 29x 80mm PWM fans and 4x 40mm PWM fans.


HD-SW300 Version - SXM-B200/IA

The Hexadata HD-SW300 Version: SXM-B200 is a revolutionary platform designed to push the boundaries of artificial intelligence (AI) and deep learning performance. At the core of this system is a blend of cutting-edge Intel processors, NVIDIA GPUs, and high-performance storage and networking technologies. The platform is specifically engineered for demanding workloads, including AI inference, high-performance computing (HPC), and large-scale data analytics.

Intel has made significant strides in processor performance, particularly with the 4th & 5th Gen Intel Xeon Scalable processors, which integrate specialized AI acceleration engines. These engines are designed to greatly enhance the performance of AI workloads, from deep learning to high-performance computing tasks. Furthermore, the system takes full advantage of PCIe Gen 5, offering doubled throughput compared to previous generations, resulting in vastly improved data transfer rates between components, such as GPUs and storage.
This powerful combination of Intel Xeon processors and NVIDIA GPUs, along with advanced storage and networking capabilities, allows the Hexadata HD-SW300 Version - SXM-B200/IA to excel in the most demanding AI and deep learning environments.

Intel’s Next-Generation Processor Performance:
The Hexadata HD-SW300 is powered by Dual Intel Xeon Scalable Processors (4th/5th Gen), which are specifically optimized for data center workloads, AI, and deep learning tasks.
These processors offer incredible performance, improved performance per watt, and enhanced AI acceleration through built-in AI engines. These engines are crucial for tasks such as training AI models and running real-time inference. The system also supports the Intel Xeon CPU Max Series, which integrates High Bandwidth Memory (HBM) for high-performance computing (HPC) workloads that are memory-bound. This allows for faster processing speeds and improved overall efficiency for large-scale data tasks.

PCIe Gen5 and DDR5 Memory Support:
The system supports PCIe Gen5, which provides 2x the data throughput of its predecessor. This significant increase in bandwidth is critical for ensuring the seamless movement of data between CPUs, GPUs, and storage devices, leading to faster processing and reduced bottlenecks.
With DDR5 memory, the system can handle higher bandwidths and lower latencies, providing an additional boost to performance, especially for data-heavy workloads that are typical in AI and HPC environments.
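As a back-of-the-envelope check on these bandwidth and capacity claims, the theoretical peaks can be computed from the link parameters. A sketch; the raw transfer rates and the 128b/130b encoding factor are standard published figures rather than values taken from this document:

```python
# Theoretical peak bandwidth estimates for PCIe Gen5 and DDR5-5600.

# PCIe Gen5: 32 GT/s per lane with 128b/130b line encoding.
GEN5_RAW_GTS = 32.0          # giga-transfers per second per lane
ENCODING = 128 / 130         # 128b/130b encoding efficiency
LANES = 16                   # a full x16 slot

pcie5_x16_gbps = GEN5_RAW_GTS * ENCODING * LANES / 8  # GB/s per direction
print(f"PCIe Gen5 x16: ~{pcie5_x16_gbps:.0f} GB/s per direction")

# DDR5-5600: 5600 MT/s over a 64-bit (8-byte) channel.
ddr5_channel_gbps = 5600e6 * 8 / 1e9   # GB/s per channel
print(f"DDR5-5600 per channel: ~{ddr5_channel_gbps:.1f} GB/s")

# 32 DIMM slots at up to 256 GB each, per the spec sheet above.
max_memory_tb = 32 * 256 / 1024
print(f"Maximum system memory: {max_memory_tb:.0f} TB")
```

Doubling a PCIe Gen4 x16 link's ~32 GB/s to ~63 GB/s per direction is what makes Gen5 significant for GPU-to-storage data paths.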

NVIDIA HGX B200 8-GPU:
The Hexadata HD-SW300 integrates the NVIDIA HGX B200, a powerful accelerator designed to take AI computing to new heights. It supports NVIDIA Blackwell Tensor Core GPUs, which are specifically engineered for AI and deep learning tasks.
The system can house up to 8 GPUs with a combined 1.4 terabytes (TB) of GPU memory and 64 terabytes per second (TB/s) of memory bandwidth. This combination enables up to 15x faster real-time inference on trillion-parameter models while reducing energy consumption and cost by up to 12x.
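Dividing the headline totals above by the eight GPUs gives the approximate per-GPU share. This is simple arithmetic on the figures quoted in this document; NVIDIA's official per-GPU numbers may be rounded differently:

```python
# Per-GPU share of the quoted HGX B200 aggregate figures.
NUM_GPUS = 8
TOTAL_MEMORY_TB = 1.4        # quoted total GPU memory
TOTAL_BW_TBS = 64.0          # quoted aggregate memory bandwidth, TB/s

mem_per_gpu_gb = TOTAL_MEMORY_TB * 1000 / NUM_GPUS
bw_per_gpu_tbs = TOTAL_BW_TBS / NUM_GPUS

print(f"~{mem_per_gpu_gb:.0f} GB of memory per GPU")
print(f"~{bw_per_gpu_tbs:.0f} TB/s of memory bandwidth per GPU")
```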
NVIDIA NVLink and NVSwitch technologies enable ultra-fast GPU-to-GPU bandwidth of 1,800 GB/s, which is critical for parallel processing and the efficient execution of large AI models.
The HGX B200 supports advanced networking solutions, including NVIDIA Quantum-2 InfiniBand and Spectrum-X Ethernet, delivering speeds of up to 400Gb/s. These high-speed interconnects allow for low-latency, high-bandwidth data transfer between nodes, ensuring minimal delays in AI and HPC operations.
NVIDIA BlueField®-3 Data Processing Units (DPUs) provide cloud networking, composable storage, zero-trust security, and GPU compute elasticity. This makes the Hexadata platform ideal for hyperscale AI cloud environments, where dynamic resource allocation and scalability are essential.
The system is equipped with 8 x 2.5" Gen5 NVMe/SATA and 2 x 2.5" SATA hot-swap bays for rapid and reliable storage performance, supporting high-performance NVMe drives for quick access to data.
The platform also includes 2 x FHHL dual-slot PCIe Gen5 x16 slots and 8 x HHHL single-slot PCIe Gen5 x16 slots, allowing for flexibility in adding additional accelerators, storage, or networking cards.

Reliability and Power Efficiency:
The Hexadata HD-SW300 is designed with 12 x 3000W (6+6 redundant) 80 PLUS Titanium power supplies, ensuring exceptional power efficiency and reliability. This redundancy keeps the system operational even through power interruptions or individual supply failures, making it suitable for critical workloads. The 80 PLUS Titanium certification ensures the system consumes less power, providing a more energy-efficient solution compared to previous-generation systems.
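As a rough illustration of what N+N redundancy means for the usable power budget (arithmetic on the figures above; actual derating and load-sharing policies vary by vendor and configuration):

```python
# Usable capacity of a 6+6 redundant power supply configuration.
TOTAL_PSUS = 12
ACTIVE_PSUS = 6              # 6+6: six carry the load, six are backup
PSU_WATTS = 3000

usable_kw = ACTIVE_PSUS * PSU_WATTS / 1000
total_kw = TOTAL_PSUS * PSU_WATTS / 1000
print(f"Usable power budget: {usable_kw:.0f} kW (of {total_kw:.0f} kW installed)")
```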

Scalable and Flexible Design:
The 8U rackmount form factor ensures that the system is well suited for deployment in data centers, where space is often a concern but high computational power is essential. The form factor allows for easy scaling, while the system's robust design ensures reliability and longevity in demanding environments.

Hexadata Management Console
For management and maintenance of a server or a small cluster, users can use the Hexadata Management Console, which is preinstalled on each server. Once the servers are running, IT staff can perform real-time health monitoring and management on each server through the browser-based graphical user interface. In addition, the Hexadata
Management Console also provides:

  • Support for the standard IPMI specification, which allows users to integrate services into a single platform through an open interface
  • Automatic event recording, which captures system behavior from 30 seconds before an event occurs, making it easier to determine subsequent actions
  • Remote KVM over IP, which provides access to the BIOS of a remote system at any time
  • Remote media redirection, through which entire operating systems and patches can be deployed remotely without physical access to the servers
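Because these monitoring and remote-management features rest on the standard IPMI interface, they can also be exercised from the command line with the open-source ipmitool utility. A hedged sketch; the BMC hostname and credentials below are placeholders, not defaults of this product:

```shell
# Query a server's BMC over the network via standard IPMI.
# Hostname and credentials are placeholders for illustration.
BMC_HOST=10.0.0.50
BMC_USER=admin
BMC_PASS=changeme

# Read all sensors (temperatures, fan speeds, voltages).
ipmitool -I lanplus -H "$BMC_HOST" -U "$BMC_USER" -P "$BMC_PASS" sensor list

# Dump the System Event Log, including entries recorded around a fault.
ipmitool -I lanplus -H "$BMC_HOST" -U "$BMC_USER" -P "$BMC_PASS" sel list

# Check the chassis power state remotely.
ipmitool -I lanplus -H "$BMC_HOST" -U "$BMC_USER" -P "$BMC_PASS" chassis status
```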

Hardware Security
TPM 2.0 Module
For hardware-based authentication, the passwords, encryption keys, and digital certificates are stored in a TPM module to prevent unwanted users from gaining access to your data. Hexadata TPM modules are available with either a Serial Peripheral Interface (SPI) or Low Pin Count (LPC) bus.
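On a Linux host, the presence and version of the TPM can be verified from standard sysfs paths, and its fixed properties queried with the tpm2-tools package. A sketch assuming a typical modern kernel with the TPM exposed as tpm0:

```shell
# Confirm a TPM 2.0 device is visible to the kernel
# (the file prints "2" for a TPM 2.0 part).
cat /sys/class/tpm/tpm0/tpm_version_major

# Query fixed properties such as manufacturer and firmware version
# (requires the tpm2-tools package).
tpm2_getcap properties-fixed
```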

Stack Your AI Rig with Hexadata HD-SW300 Version: SXM-B200/IA
The Hexadata HD-SW300 Version: SXM-B200 is an advanced AI and HPC platform that pushes the envelope of performance, efficiency, and scalability. By leveraging Intel's latest processors and NVIDIA's cutting-edge GPU technologies, the system is designed to accelerate workloads in AI, deep learning, data analytics, and HPC environments. With a comprehensive array of features, including PCIe Gen5, advanced networking, and high GPU memory bandwidth, the Hexadata platform is well-suited for organizations aiming to stay at the forefront of AI innovation while ensuring cost-effective and energy-efficient operations.
This solution is built for businesses and research institutions seeking scalable infrastructure to support the most demanding AI, data analytics, and high-performance computing tasks.

AI Framework compatible with Hexadata HD-SW300
The Hexadata HD-SW300, powered by NVIDIA GPUs, supports several AI frameworks that leverage its high-performance computing capabilities for deep learning and other AI tasks. Some of the most widely used AI frameworks supported on the Hexadata HD-SW300 include:

TensorFlow
An open-source deep learning framework developed by Google, widely used for various machine learning and deep learning tasks. TensorFlow provides strong GPU support and benefits significantly from NVIDIA's CUDA and cuDNN libraries.

PyTorch
An open-source machine learning framework developed by Facebook's AI Research lab. PyTorch is known for its dynamic computational graph and ease of use, especially in research and prototyping.

Keras 
An open-source neural network library written in Python that acts as a high-level API for TensorFlow. Keras simplifies the creation of deep learning models and is often used for rapid prototyping.

MXNet
A deep learning framework designed for both efficiency and flexibility, developed under the Apache Software Foundation. It supports multiple programming languages and is optimized for scalability and performance.

Caffe
A deep learning framework developed by the Berkeley Vision and Learning Center (BVLC). Caffe is known for its speed and efficiency in training deep neural networks, particularly for computer vision tasks.

Chainer
A flexible and intuitive deep learning framework developed by Preferred Networks. Chainer is designed for researchers and allows for the easy construction of complex neural networks.

Microsoft Cognitive Toolkit
A deep learning framework developed by Microsoft, known for its scalability and performance in training deep learning models.

Theano
One of the earliest deep learning frameworks, developed by the Montreal Institute for Learning Algorithms (MILA) at the University of Montreal. While Theano is no longer actively developed, it laid the groundwork for many subsequent frameworks and still works with NVIDIA GPUs.

Torch
A scientific computing framework with wide support for machine learning algorithms, providing a Lua-based scripting language. Torch was one of the predecessors to PyTorch.

These frameworks utilize NVIDIA's CUDA parallel computing platform and programming model, along with cuDNN, a GPU-accelerated library for deep neural networks, to achieve optimal performance on NVIDIA GPUs. This support allows researchers and developers to leverage the GPUs' powerful computational capabilities to train and deploy complex AI models efficiently.
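A common first step in any of these frameworks is selecting the GPU when CUDA is available and falling back to the CPU otherwise. A minimal sketch in PyTorch, guarded so it also runs in environments where PyTorch or CUDA is absent:

```python
# Pick a compute device: a CUDA GPU when PyTorch detects one, CPU otherwise.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
    backend = "pytorch"
except ImportError:
    # PyTorch is not installed; fall back to CPU-only execution.
    device = "cpu"
    backend = "none"

print(f"backend={backend}, device={device}")
```

Models and tensors are then moved to the chosen device (e.g. `model.to(device)`), so the same script runs unchanged on GPU-equipped servers and CPU-only development machines.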

Use Cases for Hexadata HD-SW300

The Hexadata HD-SW300, powered by NVIDIA B200 GPUs based on the Blackwell architecture, is designed for high-performance computing, AI, and deep learning applications. Here are some use cases where the B200 GPU excels:

Deep Learning Training

  • Large-Scale Model Training: The B200's massive memory capacity and computational power enable the training of large neural networks, such as GPT-3 or similar large language models, with faster convergence times.
  • Multi-GPU Training: Leveraging NVLink and NVSwitch technologies, multiple B200 GPUs can be interconnected to scale out training across numerous GPUs, improving throughput and efficiency.

Inference

  • Real-Time Inference: The B200 can handle real-time inference workloads for applications such as autonomous driving, medical diagnostics, and financial modeling due to its high throughput and low latency.
  • Batch Inference: Ideal for scenarios where large batches of data need to be processed simultaneously, such as in recommendation systems or image recognition tasks.

Data Analytics

  • Big Data Processing: The B200 can accelerate data analytics workloads, such as ETL (extract, transform, load) processes, enabling faster insights from large datasets.
  • Graph Analytics: It can efficiently handle graph-based analytics, useful in social network analysis, fraud detection, and recommendation systems.

Scientific Computing

  • Simulations: High-performance simulations in fields like physics, chemistry, and climate modeling can leverage the B200's computational capabilities to run more detailed and accurate simulations in less time.
  • Genomics: Accelerating genomic sequencing and analysis tasks, helping in research and clinical applications by reducing the time required for processing large datasets.

High-Performance Computing (HPC)

  • Molecular Dynamics: Speeding up molecular dynamics simulations used in drug discovery and materials science.
  • Finite Element Analysis (FEA): Enhancing the performance of FEA used in engineering applications for stress analysis, fluid dynamics, and structural analysis.

AI-Driven Applications

  • Natural Language Processing (NLP): Training and deploying advanced NLP models for applications like chatbots, translation services, and sentiment analysis.
  • Computer Vision: Enhancing the performance of computer vision tasks such as object detection, image segmentation, and facial recognition.

Enterprise AI

  • Business Intelligence: Accelerating the processing of large datasets for business intelligence and analytics, enabling quicker decision-making.
  • Recommendation Systems: Powering real-time recommendation engines for e-commerce, streaming services, and personalized content delivery.

Virtualization and Cloud Computing

  • AI-as-a-Service: Enabling cloud service providers to offer AI services on demand, utilizing the B200's capabilities to deliver high-performance AI applications to end-users.
  • Virtual Desktop Infrastructure (VDI): Supporting VDI deployments for data scientists and researchers who need access to powerful GPU resources for their workflows.

Edge Computing

  • Autonomous Systems: Providing the computational power needed for autonomous vehicles, drones, and robotics, which require real-time processing and decision-making capabilities.
  • Smart Cities: Enabling edge devices to perform complex AI tasks locally, reducing latency and bandwidth requirements for smart city applications like traffic management and surveillance.

The advanced architecture and large memory capacity of the NVIDIA B200 GPUs powering the Hexadata HD-SW300 make it suitable for these demanding use cases, enabling breakthroughs in AI, scientific research, and high-performance computing.
