ESC8000 and ESC4000 servers with NVIDIA L40S accelerate training, fine-tuning and inference workloads, delivering powerful performance to build and deploy AI models
KEY POINTS
- Faster procurement: ESC8000-E11, ESC4000-E11, ESC8000A-E12 and ESC4000A-E12 with ready-to-go GPU availability
- Better performance per dollar: ASUS servers with NVIDIA L40S deliver better performance in AI inferencing
- True AI-solutions provider: ASUS is a select NVIDIA OVX system provider, with expertise strengthened by proprietary ASUS AI software capabilities for LLMs
ASUS today announced that ESC8000 and ESC4000 servers with the latest NVIDIA® L40S GPUs are ready for order with greater availability and better performance per dollar. To transform with generative AI, enterprises need to deploy more compute resources at a larger scale — and ASUS offers eight-GPU and four-GPU NVIDIA L40S servers to accelerate training, fine-tuning and inference workloads, with powerful performance to build and deploy AI models.
In addition, ASUS is one of only a handful of NVIDIA OVX server system providers in the world, and we have developed our own innovative AI LLM technology to deliver comprehensive, true generative-AI solutions. Learn more about ASUS L40S solutions.
ASUS ESC8000 and ESC4000 servers with L40S available for rapid fulfilment
Enterprises today need computing infrastructure that delivers performance, scalability and reliability for data centers. ASUS offers the Intel-based ESC8000-E11 and ESC4000-E11, and the AMD-based ESC8000A-E12 and ESC4000A-E12, with up to eight NVIDIA L40S GPUs per server, providing faster time to AI deployment through quicker access to GPU availability and better performance per dollar for AI inferencing. These L40S GPU servers enable enterprises to confidently deploy hardware that securely and optimally runs modern accelerated workloads, and they are engineered with independent GPU and CPU airflow tunnels plus flexible modular storage and networking designs for scalability.
The NVIDIA L40S GPU, based on the Ada Lovelace architecture, is the most powerful universal GPU for the data center, delivering breakthrough multi-workload acceleration for large language model (LLM) inference and training, graphics and video applications. As the premier platform for multi-modal generative AI, the L40S GPU provides end-to-end acceleration for inference, training, graphics and video workflows to power the next generation of AI-enabled audio, speech, 2D, video, and 3D applications.
ASUS servers that will be validated by NVIDIA with L40S include:
- Intel: ESC8000-E11, ESC4000-E11 and ESC4000-E10
- AMD: ESC8000A-E12, ESC8000A-E11, ESC4000A-E12 and ESC4000A-E11
ASUS is a select NVIDIA OVX server system provider, with innovative AI technology for LLMs
Generative AI has set new benchmarks for 3D rendering. Proprietary ASUS AI LLM technology employs advanced algorithms to learn and iterate from data, generating intricate 3D models, landscapes and realistic textures that often surpass human capabilities. Once parameters are set, designers can watch as the AI offers myriad variations, saving invaluable time and refining artistic visions. Generative AI also helps non-professional users manipulate 3D elements easily, accelerating the realization of their ideas.
ASUS is a select NVIDIA OVX server system provider and an experienced, trusted AI-solutions provider, with the knowledge and capabilities to bridge technology chasms and deliver optimized solutions to customers.