
Meet Neysa

Accelerating AI Adoption at Scale

AREAS OF FOCUS
  • Cloud
REGION
  • Asia-Pacific

India’s fastest-growing AI Cloud provider eliminated storage bottlenecks and achieved enterprise-grade reliability with NeuralMesh™ by WEKA.

Growing Demands Outpaced Open-Source Capabilities

As AI workloads scaled, Neysa observed that different workload categories required different storage characteristics. Ceph continued to serve object storage and broader platform needs effectively. However, high-performance GPU clusters demanded tighter latency consistency, stronger isolation, and predictable throughput under bursty traffic patterns.

To meet these requirements, Neysa adopted a dual-layer approach, retaining open-source storage where appropriate while integrating a specialized high-performance storage layer optimized for distributed AI workloads.
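To make the dual-layer idea concrete, the minimal sketch below shows how a provisioning service might route each workload to a storage tier based on its behavior. The tier names, workload fields, and routing rules are illustrative assumptions, not Neysa’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical tier names, used only to illustrate the routing idea.
HIGH_PERFORMANCE_TIER = "neuralmesh"   # low-latency parallel layer for GPU clusters
GENERAL_PURPOSE_TIER = "ceph"          # object storage and broader platform needs


@dataclass
class WorkloadProfile:
    gpu_attached: bool        # does the job reserve GPUs?
    latency_sensitive: bool   # e.g. training loops with frequent checkpoints
    io_pattern: str           # "object", "mixed", or "parallel-file"


def select_storage_tier(profile: WorkloadProfile) -> str:
    """Route a workload to a storage tier based on how it behaves.

    GPU-attached jobs that are latency sensitive or drive parallel file I/O
    go to the high-performance layer; everything else stays on the
    general-purpose open-source layer.
    """
    if profile.gpu_attached and (
        profile.latency_sensitive or profile.io_pattern == "parallel-file"
    ):
        return HIGH_PERFORMANCE_TIER
    return GENERAL_PURPOSE_TIER


if __name__ == "__main__":
    training = WorkloadProfile(gpu_attached=True, latency_sensitive=True, io_pattern="parallel-file")
    archive = WorkloadProfile(gpu_attached=False, latency_sensitive=False, io_pattern="object")
    print(select_storage_tier(training))   # -> neuralmesh
    print(select_storage_tier(archive))    # -> ceph
```

The design point is simply that tier selection is driven by workload behavior rather than by a single default, which is the policy Das describes above.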

“Storage is not one-size-fits-all. Our goal is to align storage architecture with workload behavior, so GPUs remain productive and customers experience consistent performance.”

Anindya Das Co-founder and CTO, Neysa Networks

NeuralMesh by WEKA Delivers Performance, Flexibility, and Peace of Mind

Neysa evaluated multiple storage platforms against real AI workload characteristics—extreme parallelism, metadata-intensive pipelines, mixed IO profiles, and synchronized checkpoint bursts—rather than synthetic benchmarks or steady-state assumptions.
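As an illustration of one of those patterns, the sketch below shows a minimal synchronized checkpoint-burst microbenchmark: every worker writes a checkpoint shard at the same instant, and the slowest writer bounds the burst. The shard size, worker count, and output path are assumptions for illustration only; this is not Neysa’s actual test harness.

```python
import os
import tempfile
import time
from multiprocessing import Barrier, Process, Queue

# Illustrative parameters -- adjust shard size and worker count for a real test.
CHECKPOINT_MB = 64     # assumed checkpoint shard size per worker
NUM_WORKERS = 8        # assumed number of concurrent writers


def write_checkpoint(rank, target_dir, barrier, results):
    """Write one checkpoint shard; the barrier makes all workers start together."""
    data = os.urandom(CHECKPOINT_MB * 1024 * 1024)
    path = os.path.join(target_dir, f"ckpt_shard_{rank}.bin")
    barrier.wait()                       # synchronize the burst
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())             # make sure the data reaches storage
    results.put(time.perf_counter() - start)


if __name__ == "__main__":
    target_dir = tempfile.mkdtemp()      # point this at the filesystem under test
    barrier = Barrier(NUM_WORKERS)
    results = Queue()
    workers = [
        Process(target=write_checkpoint, args=(rank, target_dir, barrier, results))
        for rank in range(NUM_WORKERS)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    durations = sorted(results.get() for _ in range(NUM_WORKERS))
    total_gb = CHECKPOINT_MB * NUM_WORKERS / 1024
    print(f"slowest writer: {durations[-1]:.2f}s, "
          f"burst throughput ~ {total_gb / durations[-1]:.2f} GB/s")
```

Running the same burst against different mount points is one simple way to compare how storage layers hold up when many GPUs checkpoint at once.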

After detailed architecture reviews and production workload testing, Neysa selected NeuralMesh by WEKA as a complementary high-performance data layer purpose-built for GPU-intensive environments.

“NeuralMesh’s architecture is built specifically for GPU-driven, parallel AI workloads. It wasn’t about replacing Ceph. It was about adding a storage layer designed for deterministic behavior under GPU-driven pressure.”

Anindya Das Co-founder and CTO, Neysa Networks

Critical differentiators included:

  • Software-defined flexibility: NeuralMesh could run on both HPE and Dell hardware, preventing vendor lock-in.
  • Enterprise-grade support: WEKA provided the professional support structure Neysa needed to serve demanding customers with confidence.
  • Rapid deployment: When Neysa faced tight customer delivery timelines, WEKA worked with the team to repurpose existing Dell hardware and implemented the solution in just two days.
  • Performance for AI workloads: NeuralMesh delivered low, consistent latency under mixed read/write workloads and near-linear scaling on high-speed GPU networks, all from a unified file and object platform spanning the entire AI lifecycle.

The engineering engagement proved remarkably smooth: Neysa operated for six months without opening a single support ticket.

Enabling AI Innovation Without Limits

“NeuralMesh has been rock-solid, allowing our team to focus on building AI infrastructure rather than troubleshooting storage issues,” said Das. “That reliability gives us the confidence to take on our most demanding customers.”

Pioneering AI Infrastructure Innovation

As Neysa continues its rapid expansion, the company is piloting WEKA’s Axon technology, positioning itself as a lighthouse customer for cutting-edge storage innovation. WEKA provides the AI-ready storage foundation, while Neysa integrates compute, orchestration, observability, security, and cost control into a production-grade AI infrastructure system, enabling customers to move from idea to scaled deployment without compromise.

Accelerate AI Adoption at Scale

Learn more about Neysa and WEKA