Meta has built an AI supercomputer using Nvidia technologies. The AI Research SuperCluster (RSC) is set to be the largest installation of Nvidia DGX A100 systems, using 760 of these devices to date. When complete, the computer is expected to deliver five exaflops of AI performance for Meta’s research team.
This AI supercomputer uses 6,080 GPUs in total across its 760 Nvidia DGX A100 systems, which serve as its compute nodes. The DGXs communicate via Nvidia’s InfiniBand fabric, and the cluster has a total of 132 petabytes of storage.
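The GPU total follows directly from the node count: each Nvidia DGX A100 houses eight A100 GPUs. A quick sanity check of the figures above (a back-of-the-envelope sketch, not from the article itself):

```python
# Sanity check: each Nvidia DGX A100 node contains 8 A100 GPUs,
# so 760 nodes should account for the reported 6,080-GPU total.
DGX_NODES = 760
GPUS_PER_DGX = 8  # per Nvidia's DGX A100 specification

total_gpus = DGX_NODES * GPUS_PER_DGX
print(total_gpus)  # 6080
```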
Compared to Meta’s previous infrastructure, the RSC can run the Nvidia Collective Communication Library (NCCL) over nine times faster, and the supercomputer can train large-scale Natural Language Processing (NLP) models in a third of the time.
The RSC has been designed to build better AI models, bringing together text, images and videos from many languages, with training already underway. These new models could, for example, help Meta identify harmful content in real time.
Meta claims that the RSC is one of the fastest AI supercomputers globally, with the build ongoing. In phase two of the build, which is due to be completed later this year, the number of GPUs will increase from 6,080 to 16,000, more than doubling the supercomputer’s AI training performance. When the build is complete, Meta believes it will be the fastest computer of its kind.
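The phase-two figures bear out the “more than doubling” claim. A rough check using only the endpoints reported above (an illustrative sketch; the article does not give this calculation):

```python
# Phase-two scale-up: GPU count grows from 6,080 to 16,000.
current_gpus = 6_080
phase_two_gpus = 16_000

scale_factor = phase_two_gpus / current_gpus
# Roughly a 2.6x increase in GPU count, consistent with
# "more than doubling" the AI training performance.
print(f"{scale_factor:.2f}x")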