NVIDIA DGX A100 Universal System 8-A100/40GB GPUs

Product Overview

Contact us for pricing

The NVIDIA DGX A100 is a 3rd-generation integrated universal AI system for AI infrastructure, delivering 5 petaFLOPS of AI performance in a single node. Featuring 8x NVIDIA A100 GPUs, 15TB of Gen4 NVMe SSD storage, 6x NVSwitches and 9x Mellanox ConnectX-6 200Gb/s network interfaces, the DGX A100 can be used for data analytics, inference and training, with the ability to partition its GPUs into as many as 56 isolated instances (via Multi-Instance GPU) shared between different users. The DGX A100 is elastic for both scale-up and scale-out computing.
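The "56 users" figure follows from NVIDIA Multi-Instance GPU (MIG), which can split each A100 into up to seven isolated instances. A minimal sketch of that arithmetic (the function names below are illustrative, not part of any NVIDIA API):

```python
# Illustrative sketch: where the "56 different users" figure comes from.
# Each A100 GPU supports up to 7 isolated MIG instances; the DGX A100
# contains 8 such GPUs. Values mirror the product description above.

NUM_GPUS = 8               # A100 GPUs in the DGX A100
MIG_INSTANCES_PER_GPU = 7  # maximum MIG instances per A100

def max_mig_instances(num_gpus: int, per_gpu: int) -> int:
    """Total isolated GPU instances available across the system."""
    return num_gpus * per_gpu

print(max_mig_instances(NUM_GPUS, MIG_INSTANCES_PER_GPU))  # 56
```

Each MIG instance has its own dedicated compute, memory, and cache slice, so workloads from different users run with hardware-level isolation rather than time-sharing a whole GPU.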

DGXA-2530A+P2CMI00



Direct Access to NVIDIA DGXperts

NVIDIA DGX A100 is more than a server; it's a complete hardware and software platform built upon the knowledge gained from the world's largest DGX proving ground, NVIDIA DGX SATURNV, and backed by thousands of DGXperts at NVIDIA. DGXperts are AI-fluent practitioners who offer prescriptive guidance and design expertise to help fast-track AI transformation. They've built a wealth of know-how and experience over the last decade to help maximize the value of your DGX investment. DGXperts help ensure that critical applications get up and running quickly, and stay running smoothly, for dramatically improved time to insight.

Fastest Time to Solution

NVIDIA DGX A100 features eight NVIDIA A100 Tensor Core GPUs, providing users with unmatched acceleration, and is fully optimized for NVIDIA CUDA-X™ software and the end-to-end NVIDIA data center solution stack. NVIDIA A100 GPUs bring a new precision, TF32, which works just like FP32 while providing 20X higher FLOPS for AI versus the previous generation, and no code changes are required to get this speedup. When using NVIDIA's automatic mixed precision, A100 offers an additional 2X boost to performance with just one additional line of code using FP16 precision. The A100 GPU also has class-leading memory bandwidth of 1.6 terabytes per second (TB/s), a greater-than-70% increase over the previous generation. Additionally, the A100 GPU has significantly more on-chip memory, including a 40MB Level 2 cache that is nearly 7X larger than the previous generation's, maximizing compute performance. DGX A100 also debuts the next generation of NVIDIA NVLink™, which doubles GPU-to-GPU direct bandwidth to 600 gigabytes per second (GB/s), almost 10X higher than PCIe Gen 4, and a new NVIDIA NVSwitch that is 2X faster than the last generation. This unprecedented power delivers the fastest time to solution, allowing users to tackle challenges that weren't possible or practical before.
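The interconnect claims above can be sanity-checked with simple arithmetic. A minimal sketch, assuming a PCIe Gen 4 x16 link at roughly 64 GB/s bidirectional as the comparison point (an assumed reference value, not stated in the text):

```python
# Sanity-checking the NVLink figures quoted above.
# NVLINK3_GBPS and NVLINK2_GBPS come from the text ("doubles ... to 600 GB/s");
# PCIE_GEN4_X16_GBPS is an assumed ~64 GB/s bidirectional reference.

NVLINK3_GBPS = 600       # per-GPU NVLink bandwidth on DGX A100
NVLINK2_GBPS = 300       # previous NVLink generation
PCIE_GEN4_X16_GBPS = 64  # assumed PCIe Gen 4 x16 bidirectional bandwidth

def ratio(a: float, b: float) -> float:
    """Speedup of bandwidth a relative to b."""
    return a / b

print(ratio(NVLINK3_GBPS, NVLINK2_GBPS))                   # 2.0  ("doubles")
print(round(ratio(NVLINK3_GBPS, PCIE_GEN4_X16_GBPS), 1))   # 9.4  ("almost 10X")
```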

The World’s Most Secure AI System for Enterprise

NVIDIA DGX A100 delivers the most robust security posture for your AI enterprise, with a multilayered approach that secures all major hardware and software components. Spanning the baseboard management controller (BMC), CPU board, GPU board, self-encrypting drives, and secure boot, DGX A100 has security built in, allowing IT to focus on operationalizing AI rather than spending time on threat assessment and mitigation.

 

Key Features

GPUs: 8x NVIDIA A100 Tensor Core GPUs
GPU Memory: 320GB total
Performance: 5 petaFLOPS AI; 10 petaOPS INT8
NVIDIA NVSwitches: 6
System Power Usage: 6.5kW max
CPU: Dual AMD Rome 7742, 128 cores total, 2.25 GHz (base), 3.4 GHz (max boost)
System Memory: 1TB
Networking: 8x Single-Port Mellanox ConnectX-6 VPI 200Gb/s HDR InfiniBand; 1x Dual-Port Mellanox ConnectX-6 VPI 10/25/50/100/200Gb/s Ethernet
Storage: OS: 2x 1.92TB M.2 NVMe drives; Internal: 15TB (4x 3.84TB) U.2 NVMe drives
Software: Ubuntu Linux OS

 


  • CPU Family

    AMD EPYC

  • CPU Manufacturer

    AMD

  • CPU Model

    7742

  • CPU Quantity (Maximum)

    2

  • CPU Series

    2nd Gen. AMD EPYC 7002

  • GPU Manufacturer

    NVIDIA

  • GPU Memory Sizes

    320GB

  • GPU Model

    NVIDIA A100

  • GPU Quantity

    8

  • Manufacturer

    NVIDIA

  • Memory (Maximum)

    1TB

  • Network Adapter

    1 x Dual-Port Mellanox ConnectX-6 VPI Adapter Card
    8 x Single-Port Mellanox ConnectX-6 VPI Adapter Card

  • Operating Temperature

    5°C to 30°C

  • Power Consumption

    6.5kW

Get in touch to discuss our range of solutions

+91 22 5002 3262


Test out any of our solutions at Boston Labs

To help our clients make informed decisions about new technologies, we have opened our research and development facilities to customers, and we actively encourage them to try the latest platforms using their own tools and, where necessary, alongside their existing hardware. Remote access is also available.

Contact us
