Chenbro's SR113 and SR115 are convertible 4U chassis, deployable upright or rack-mounted, designed specifically for AI inference and deep-learning GPGPU servers and supporting up to five GPGPU cards. These chassis enable AI inference servers to deliver the efficient computation needed for large-scale inference workloads. The liquid-cooled SR115 model is fitted with a liquid-cooling module whose heat dissipation has been verified through testing, providing strong hardware support for a fully integrated AI inference server.