AI Servers & Solutions

Transform your business with artificial intelligence, powered by the industry’s most comprehensive high-performance server portfolio.

Accelerate industry insights with Inspur AI.

Inspur is a leading AI solutions provider, developing innovative end-to-end solutions that help customers tackle high-performance workloads in the data center.

5 PetaFLOPS Compute Performance

Highest performance in the industry, up to 234% of previous-generation AI servers

63 World Records in AI Benchmarks

Top scores in the definitive MLPerf benchmark tests for artificial intelligence scenarios

10x Faster AI Model Training

Faster AI model and dataset training translates to >100x time savings per project

NVIDIA Select Elite Server Partner

Trusted AI hardware technology partner and leading global NVIDIA GPU server vendor

Inspur Full-Stack AI Capabilities

Inspur offers a comprehensive full stack of AI capabilities spanning application, algorithm, resource, and hardware platforms to meet the depth and breadth of our customers’ needs.

The Industry’s Most Complete AI Product Portfolio

Power your AI applications with a broad offering of servers and platforms that deliver industry-leading performance, acceleration, and efficiency for your training and inferencing workloads. Now supporting NVIDIA H100 Tensor Core GPUs.

NF5468M6
NVIDIA OVX Solution

4U 8x A100 GPU server for large-scale digital twin simulations and virtual modeling

NF5488A5
AI Training Server

4U 8x NVIDIA A100 GPU server with 2x AMD EPYC Rome processors, delivering 5 PFLOPS of AI performance

NF5468A5
AI Training Server

4U 8x NVIDIA A100 GPU server with PCIe Gen4 and AMD EPYC Rome processors

As a certified server partner of NVIDIA, the global leader in GPU technology, Inspur applies continuous innovation to develop the fastest, most powerful GPU servers on the market. Inspur GPU accelerated servers deliver features that let you achieve your goals, like training deep learning models and deriving AI insights, in hours, not days.

Now supporting NVIDIA H100 Tensor Core GPUs.

Extreme Acceleration

The latest and fastest GPUs, including the NVIDIA Tesla® V100 Tensor Core

Advanced Architectures

High-speed NVLink™ and NVSwitch™ GPU-to-GPU interconnect

Enhanced Power

Redundant, high-efficiency Titanium- and Platinum-rated power supplies

Flexible Topologies

Configurable for a variety of applications from AI training to edge computing

GPU Accelerated Servers

NVIDIA-Certified OVX Solution

Based on NF5468M6

4U 8x A100 GPUs to support large-scale digital twin simulations and virtual modeling within NVIDIA Omniverse Enterprise

NF5468A5

4U 8-GPU Server with AMD EPYC™

Cloud AI server with 8x NVIDIA A100 GPUs over PCIe Gen4 and 2x AMD EPYC™ processors, memory capacity up to 8TB

NF5488A5

4U 8-GPU Server with AMD EPYC™

4U 8x NVIDIA A100 GPUs over NVLink 3.0 interconnect, 2x AMD EPYC™ Rome processors, 5 petaFLOPS AI performance

NF5688M6

6U 8-GPU NVLink AI Server

Supports 8x 500W A100 GPUs with NVSwitch, up to 12 PCIe expansion cards, dual-width N20X, and air cooling

NF5468M6

4U 4-16 GPU AI Server

Up to 20x PCIe GPUs/accelerators in 4U; supports the latest NVIDIA A40 and A100 with flexible topologies

NF5280M6

2U 2-Socket General Purpose Compute Server

2x 3rd-Generation Intel® Xeon® Scalable processors, up to 13 PCIe expansion slots, 7 versatile configurations

NF5488M5-D

4U 8-GPU Server with Intel Xeon Scalable

4U 8x NVIDIA A100 GPUs over NVLink 3.0 interconnect, 2x 2nd-Generation Intel Xeon Scalable processors, HBM2e memory

NF5468M5

4U 8-16 GPU Server

4U 8x NVIDIA V100 Tensor Core GPUs or 16x Tesla P4 GPUs, for AI inference and edge computing

NE5260M5

2U 2-Socket Half-Depth Edge Server

Reliable, compact half-depth edge server for MEC, 5G/IoT, and AR/VR. Compute node with front I/O access, or head node for GPU expansion with 2x NVIDIA® Tesla® V100

GX4

2U 4-GPU AI Expansion

JBOG AI expansion with NVMe and flexible configuration, for compute and storage resource pooling

NF5280M5

2U 2-Socket 4-GPU Server

4x NVIDIA® V100 or T4 GPUs, 2x Intel® Xeon® Scalable Gold/Platinum processors, 24x 2.5” drive bays, 24x DDR4 DIMM slots

NVIDIA's Trusted Server Partner

Ian Buck (NVIDIA VP/GM of Tesla Data Center Business) describes Inspur’s role in turning NVIDIA’s cutting-edge GPU technology into servers and infrastructure that power the world’s data centers.

Keith Morris of NVIDIA discusses the extreme computational demands of AI and how partnering with Inspur and leveraging our hardware expertise enables innovative solutions that rise to the challenge.

Interested customers can request further details and a proof-of-concept (POC) solution demonstration.

Contact:

คุณฐนรัชส์ ชัยรัตนศักดา (Product Manager)
Email: Tanaratc@itgreen.co.th
Mobile: 089 698 2553

Distributor website