AI Application

The generative AI wave sparked by ChatGPT continues to gain momentum. The Top 10 Strategic Technology Trends for 2024 released by the international research firm Gartner identifies generative AI as a technology that opens new possibilities, enabling people to accomplish tasks that were previously out of reach. Riding this trend, AI-related technologies and products are drawing growing attention, and AI servers that support large-scale data processing have become especially crucial.

Training AI models demands substantial computational resources and data storage. To deliver fast, efficient inference while supporting the training of large models and the processing of massive amounts of data, an AI server must be equipped with at least 6 to 8 GPU processors and expanded memory capacity. The design and structure of AI server chassis must therefore be upgraded accordingly, integrating server components more effectively and accommodating more components within limited space.

Chenbro's SR113 and SR115 upright convertible rack-mountable 4U server chassis are designed specifically for AI inference and deep-learning GPGPU servers, supporting up to 5 GPGPU cards so that AI inference servers can meet large-scale inference requirements with efficient computation. The liquid-cooling model of the SR115 is fitted with a liquid-cooling module whose excellent heat dissipation has been verified through testing, providing strong hardware support in a fully integrated AI inference server chassis.