
WeRide Runs Its Driverless Robotaxis in the Hybrid Cloud with the WEKA® Data Platform

Overview

WEKA Fuels WeRide’s GPU Farm to Power a New Frontier of Safe, Reliable Driverless Transportation

Established in 2017, WeRide is pioneering the creation of Level 4 autonomous driving technologies to make transportation and mobility safer, more affordable, and more accessible. WeRide facilitates strategic alliances among artificial intelligence (AI) technology companies, car makers, and mobility service platform providers. The WeRide RoboTaxi, a joint venture of WeRide, Baiyun Taxi, and SCI (Guangzhou) Group in Guangzhou, China, operates the first robotaxi service open to the public in China.

The Challenge

WeRide Robotaxis ingest large video and image files collected over 2 million kilometers of driving distance, requiring multiple petabytes of high-throughput data processing daily. Data is annotated at the core, used to train an AI model on a cloud-based cluster, and fed back to an on-premises AI engine powered by a performance-intensive GPU farm.

The WeRide team needed a unified data management solution to manage hundreds of terabytes of data across edge, core, and cloud and to ensure its GPUs were fully and efficiently utilized, promoting greater sustainability and cost efficiency.

The new solution needed to:

Increase GPU and AI training model utilization and efficiency

Provide a hybrid cloud environment to reduce the on-prem data center footprint

Deliver flexibility through hardware-agnostic compatibility with commodity servers

Provide the best economic value for capacity planning and future performance

The Solution

The WEKA Data Platform Running on Commodity Servers and AWS

WeRide conducted an extensive cost analysis of various data management scenarios, including open-source solutions like HDFS to feed data to the GPU farm, but none of these approaches proved cost-effective. Ultimately, WeRide chose a hybrid cloud implementation using the WEKA Data Platform on commodity servers and Amazon Web Services (AWS).

WEKA’s two-tier architecture integrates seamlessly with on-premises servers and the cloud, presenting them as a unified platform. On-premises, WeRide manages hundreds of TBs of NVMe flash on the WEKA platform using commodity Intel x86-based servers from AMAX and Mellanox Ethernet network switches. It leverages the WEKA platform in the cloud to deliver high-bandwidth I/O to its GPU resources in AWS China. WeRide’s data infrastructure team credits its close collaboration with WEKA’s Customer Success team for its seamless transition to the AWS cloud.
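To illustrate what a unified namespace means for the training workflow, the following is a minimal sketch, not WeRide’s actual code: a PyTorch data loader that reads annotated driving frames from a shared POSIX mount. The mount path /mnt/shared/drive-frames and the .pt file layout are hypothetical assumptions; the point is that the same loader could feed either the cloud training cluster or the on-premises GPU farm, because both environments see the same filesystem.

```python
# Illustrative sketch only (hypothetical paths and file layout, not WeRide's code).
from pathlib import Path

import torch
from torch.utils.data import DataLoader, Dataset


class DriveFrameDataset(Dataset):
    """Loads pre-processed sensor frames saved as .pt tensor files.

    data_root is a hypothetical mount point for the shared filesystem;
    the same path would resolve whether the job runs on-prem or in the cloud.
    """

    def __init__(self, data_root: str = "/mnt/shared/drive-frames"):
        self.files = sorted(Path(data_root).glob("*.pt"))

    def __len__(self) -> int:
        return len(self.files)

    def __getitem__(self, idx: int):
        sample = torch.load(self.files[idx])  # assumed dict with "image" and "label"
        return sample["image"], sample["label"]


if __name__ == "__main__":
    # Multiple parallel workers keep high-bandwidth I/O flowing to the GPUs.
    loader = DataLoader(DriveFrameDataset(), batch_size=64,
                        num_workers=8, pin_memory=True)
    for images, labels in loader:
        pass  # training step would go here
```

Because the loader addresses data only through the shared mount, moving a training job between the on-premises GPU farm and AWS requires no changes to the data-access code.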

“We built a GPU farm that needed a high-speed data pipeline to feed it. We evaluated open-source solutions like HDFS and the public cloud. We chose WEKA for its ability to provide cost-effective, high-bandwidth I/O to our GPUs and for its product maturity, customer references, and stellar on-demand support.”

Paul Liu, Engineering Operations Lead

Outcomes

Accelerated AI product development across edge, core, and cloud

Improved GPU utilization, cost and energy efficiency

Public cloud integration for computing elasticity

Stellar collaboration with customer support

