
In the 2U GPU server ESC4000 G4 with Intel Xeon processors, four double-deck GPU cards do the actual floating-point computing work. RAID mass storage with eight LFF SATA hard-disk bays, redundant power supplies and 8+3 PCIe slots round out the coherent equipment list. The fast server offers terrific computing performance, enormous scalability and high energy efficiency.

High-density computing is specifically about fitting more computing power into a single rack. The 2U ESC4000 G4 does this easily by accommodating up to four GPUs for extremely high power density. The server supports up to eight PCIe Gen3 x16 slots, which gives it high expandability and broad compatibility with many expansion cards: it works with a large number of GPU cards (NVIDIA Tesla and GRID as well as the AMD FirePro S series). In addition, there is a PCIe Gen3 x24 slot that, by means of a riser board, can also hold two low-profile HBAs or proprietary network cards with FDR and 10 GbE interfaces.

This ASUS server targets performance-intensive applications where high computing power, large data volumes, and scalable infrastructure are key. Primary markets include High Performance Computing (HPC), AI-based forecasting and analytics workloads, terabyte-scale data warehouses, and demanding virtualization environments. The storage and expansion design emphasizes scalability with up to six NVMe/SATA hot-swap drives (2x 2.5" + 4x 3.5") and four PCIe 5.0 x16 slots for dual-slot GPUs (or eight single-slot GPUs) for maximum acceleration. Networking includes dual Intel I350 GbE LAN ports and ASMB11-iKVM for remote management. Cooling features ASUS Thermal Radar 2.0 with multiple high-performance fans, optimized for up to 350 W TDP processors and GPU workloads.

This ASUS server targets intensive GPU workloads in AI training/inference, High Performance Computing (HPC), research and data analytics. Primary markets include demanding AI infrastructure requiring maximum compute with scalable GPU connectivity. The storage and expansion design is GPU-optimized, featuring six front hot-swap drives (2x 2.5" SATA/U.2 + 4x 3.5" SATA/U.2) and eight PCIe 5.0 slots (4x x16 + 4x x8 in x16 size) for up to four dual-slot GPUs with NVIDIA NVLink bridge support. Networking includes dual 1GbE Intel I350 ports, dedicated management LAN and OCP 3.0. Cooling uses 10 high-performance fans with liquid cooling support, designed for up to 400W TDP processors and GPU workloads.

The 4U GPU server from ASUS works with two Intel Xeon processors and can be equipped with up to 3 terabytes of RAM. The eight front-accessible drive bays connect directly to the SATA controller; optionally, a controller for fast SAS disks and SSDs is available. In addition, the ESC8000 G4 offers two M.2 interfaces for SSDs (SATA 6 Gbit/s & PCIe Gen3 x4 link, 22110/2280/2260) as well as two NVMe slots. (If the latter are used, the number of usable SATA drives drops from eight to six.) Up to eight of its ten PCIe expansion slots can take GPU cards; NVIDIA Tesla V100s, for example, are suitable. Fully equipped, the fast computing machine achieves around 112 teraflops of single-precision (FP32) floating-point performance. For comparison: a server without GPU support computes floating-point operations about 200 times slower.
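The quoted 112-teraflops figure can be sanity-checked with simple arithmetic. A minimal sketch, assuming NVIDIA's published peak of roughly 14 TFLOPS FP32 per Tesla V100 PCIe card (a vendor peak figure, not a measured benchmark):

```python
# Sanity check of the aggregate FP32 throughput quoted for the ESC8000 G4.
# Assumption: ~14 TFLOPS FP32 peak per NVIDIA Tesla V100 (PCIe variant),
# multiplied across the eight GPU-capable slots mentioned in the text.
TFLOPS_FP32_PER_V100 = 14.0
NUM_GPUS = 8

aggregate_tflops = NUM_GPUS * TFLOPS_FP32_PER_V100
print(f"Aggregate FP32: {aggregate_tflops:.0f} TFLOPS")  # → 112 TFLOPS
```

Eight cards at ~14 TFLOPS each land exactly on the ~112 TFLOPS stated above, which suggests the figure is a sum of per-card peak values rather than a measured result.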

This ASUS server targets performance-intensive applications where high computing power, large data volumes, and scalable infrastructure are key. Primary markets include High Performance Computing (HPC), AI training and inferencing, terabyte-scale data warehouses, and demanding virtualization environments. The storage and expansion design emphasizes scalability with up to eight 2.5" hot-swap NVMe drives and eight PCIe 5.0 x16 slots for dual-slot GPUs (up to 600W TDP per GPU) supporting NVIDIA® NVLink™ 2-Way/4-Way bridges. Networking includes a dedicated management port and optional OCP 3.0 support. Cooling features 5 system fans and 5 dedicated GPU fans with ASUS Thermal Radar 2.0, optimized for up to 350W TDP processors and extreme GPU workloads.

The 4U ASUS GPU server works with two AMD EPYC 9004 processors and can be equipped with up to 6 terabytes of RAM. The eight front-accessible drive bays connect directly to the SATA controller; an optional controller for fast SAS disks and SSDs is also available. The ESC8000A-E12 additionally offers an M.2 interface for SSDs (PCIe Gen3 x4 link, 22110/2280/2260). Up to eight of its ten PCIe expansion slots can take GPU cards; the NVIDIA H100, for example, is suitable. Fully equipped, the fast computing machine achieves around 480/240 teraflops (FP32/FP64).
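The 480/240 TFLOPS system totals imply a fixed per-card budget. A short sketch of that arithmetic, assuming the figures are aggregates over a full complement of eight accelerators (as the text suggests) rather than per-GPU values:

```python
# Implied per-GPU throughput behind the quoted ESC8000A-E12 system totals.
# Assumption: 480 TFLOPS FP32 / 240 TFLOPS FP64 are aggregates over
# eight installed GPU cards; these are catalog figures, not benchmarks.
system_fp32_tflops = 480.0
system_fp64_tflops = 240.0
num_gpus = 8

per_gpu_fp32 = system_fp32_tflops / num_gpus  # 60 TFLOPS FP32 per card
per_gpu_fp64 = system_fp64_tflops / num_gpus  # 30 TFLOPS FP64 per card
print(per_gpu_fp32, per_gpu_fp64)
```

The 2:1 FP32-to-FP64 ratio in the totals carries through to each card, consistent with datacenter GPUs whose double-precision rate is half the single-precision rate.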

The ESC8000A-E13 is a dual-socket server with AMD EPYC processors, designed for enterprise AI infrastructure with exceptional computing capabilities. It features accelerated GPU connections and a high-bandwidth fabric, supports up to eight 600 W dual-slot GPUs, and delivers scalable performance through configurable NVIDIA NVLink™ 2-way or 4-way bridges.

This ASUS server targets extreme GPU workloads in AI training/inference, High Performance Computing (HPC) and enterprise AI applications. Primary markets include generative AI, Large Language Models (LLM), scientific simulations and data-intensive analytics environments. The GPU-optimized design supports up to eight dual-slot GPUs (e.g. NVIDIA H200/H100/RTX PRO 6000 Blackwell) with PCIe 5.0 x16 connectivity and optional NVLink bridge support in the NVIDIA MGX architecture. Storage includes eight 2.5" NVMe hot-swap bays plus two M.2 PCIe 5.0 slots. Networking ranges from dual 10GbE (Intel X710-AT2) to dedicated 400GbE PCIe cards, complemented by ASMB11-iKVM remote management. Cooling with five system fans is designed for dual AMD EPYC 9005 processors (up to 500 W TDP) and GPUs with TDPs up to 600 W.
