Meet Siam AI
Powering Cultural AI Independence

How Siam AI Built Thailand’s First Sovereign Cloud to Compete Globally
Siam AI is Thailand’s first sovereign AI cloud provider, a groundbreaking achievement that has made it the country’s leader in AI services.
On the path to innovation, however, CEO Ratanaphon Wongnapachant and his team wanted to do more than build generic AI models. The company is committed to developing a truly culturally aware AI service that understands and supports Thailand’s people and businesses.
“When we founded Siam AI, our vision was to build Thailand’s first sovereign AI cloud and develop Thai-based large language models (LLMs) that truly understand our language, culture, and unique context.”
“We wanted to develop the LLMs ourselves, trained on our native language, rather than relying on foreign-developed models. Otherwise, we would have risked losing valuable data that could be used for cognitive development, an essential foundation for the country,” Mr. Ratanaphon explains.
With major partnerships for Thai LLM development, enterprise AI solutions, and tourism AI agents, Siam AI already serves enterprises, researchers, universities, and government agencies. To scale that vision and deliver the extreme performance its advanced AI models demand, however, the company needed highly performant, adaptive, and resilient infrastructure that could grow seamlessly and support cutting-edge NVIDIA accelerated computing.
The Challenge
Meeting Ambitious Requirements for End-to-End AI Performance Excellence
To support their growing innovations and expanding customer base, Siam AI established demanding infrastructure requirements:
- Multi-petabyte storage that could scale linearly with GPU growth
- Sustained millions of IOPS at sub-millisecond latency across 100+ clients simultaneously
- Fast and frequent checkpointing to maximize GPU productivity during training
- Rapid deployment to meet urgent market demand
- Performance, simplicity, and reliability without traditional storage architecture complexity and costs
These weren’t just technical specifications—they were the foundation needed to achieve their goals.
“To bring this vision to life, we needed an AI infrastructure platform that could keep pace with our ambition — delivering extreme performance, scalability, and efficiency at every layer.”
The Solution
A Strategic Choice: NeuralMesh by WEKA
After evaluating numerous AI storage options, Siam AI selected NeuralMesh as its storage foundation.
The collaboration between Siam AI and WEKA’s engineering and customer success teams ensured a seamless 54-node cluster deployment, demonstrating not only the power of WEKA’s technology but also the depth of the partnership and its post-deployment support. Siam AI’s innovation converged with WEKA’s technical excellence to deliver a world-class AI infrastructure built for performance and scale.
- Aggregate Performance: ~3 TB/s reads, ~1.5 TB/s writes
- Latency: <0.5 ms with millions of IOPS
- Client Scale: Seamless access for up to 140 GPU clients
“With WEKA, we were able to stand up a production-ready AI storage platform in just four days — something that would have normally taken weeks or even months,” explains CTO Nutthapong Chuaybumrung.
“The results have been extraordinary: linear scaling across multi-petabyte nodes, each GPU client driving up to 40 GB/s throughput and more than 2 million IOPS at sub-millisecond latency.”
NeuralMesh Axon: Ultra-fast Storage Without Adding Separate Infrastructure
While the 54-node NeuralMesh cluster powers Siam AI’s sovereign cloud at national scale, the team also set out to prove a complementary principle at the frontier of AI research: that maximum performance need not mean maximum footprint. The answer was NeuralMesh Axon by WEKA, a software-defined, converged storage fabric deployed directly onto Siam AI’s AI Lab cluster of 8 × NVIDIA DGX B200 servers.
Rather than expanding the data centre with a separate dedicated storage array, Siam AI’s architects identified a more elegant approach. The DGX B200 servers already house enterprise-grade NVMe drives across every node. Axon activates that latent capacity, pooling the NVMe resources across all eight nodes into a single, high-performance shared namespace, accessible cluster-wide, with zero additional hardware footprint, no separate management plane, and no disruption to active research workloads.
“Our design principle has always been to build systems that are architecturally elegant, not just capable,” explains Siam AI’s Head of IT Infrastructure. “Converging our storage and compute onto a unified fabric was the natural next step. We wanted every joule, every core, and every byte of NVMe capacity in those DGX nodes to contribute directly to research outcomes.”
Each DGX B200 backend contributes 30 CPU cores, 7 NVMe drives, and approximately 149 GB of RAM to the shared fabric, totalling 240 active cores, 56 drives, and 119.2 TB of raw NVMe capacity, all orchestrated by Axon as a single POSIX namespace mounted identically across every node. The R&D team’s training jobs run entirely on Kubernetes via standard Persistent Volume Claims, with no custom drivers, no bespoke integrations, and no specialist storage knowledge required. Axon makes the complexity invisible, leaving researchers free to focus entirely on science.
“NeuralMesh Axon allowed us to fully realise the architectural potential of our DGX cluster. Every core, every NVMe drive, every watt is contributing to research outcomes. The entire storage fabric was operational in under two days, Kubernetes training jobs mapped straight to it, and 146 days later we have not had to think about storage once. That is precisely the kind of infrastructure that lets a research team move at the speed of ideas.”
The Outcomes
Amplifying and Accelerating Thailand’s AI Ambitions
Siam AI has transformed its cloud services platform at every level, from national-scale sovereign cloud to frontier AI research.
By deploying NeuralMesh and NeuralMesh Axon, Siam AI has created a national AI infrastructure platform that delivers:
- Checkpointing in minutes, not hours, enabling faster recovery and iteration
- Higher GPU utilization, reducing idle time and improving ROI
- Reliable, low-latency performance for massive AI training jobs
- National impact, positioning Thailand as a regional leader in AI innovation
Mr. Ratanaphon affirms: “I would recommend NeuralMesh and NeuralMesh Axon by WEKA to any organization facing the same AI infrastructure challenges we encountered. Whether at national cloud scale or at the frontier of AI research, WEKA delivers the performance and reliability that lets our teams move at the speed of their ambitions.”
Key Achievements with NeuralMesh
Rapid Deployment
NeuralMesh production-ready in 4 days; NeuralMesh Axon operational in under 2 days, with zero disruption to active workloads.
Performance at Scale
Linear scaling across 54 nodes with no bottlenecks.
Peak Client Performance
Each GPU client achieves up to 40 GB/s reads and writes and over 2 million IOPS at <0.3 ms latency.
Optimized Infrastructure
Delivered in only 54 rack units, powered by 108 × 400 GbE connections.
Sustainability Leadership
Industry-leading performance per kilowatt and per terabyte across both deployments, with Axon eliminating the need for a separate storage tier entirely by activating the full capacity of existing DGX hardware.
Axon Converged Performance
128 GB/s peak aggregate read throughput, 465,000 IOPS per node, and 146 consecutive days of operation with zero alerts and zero storage incidents.
Building a National AI Infrastructure Platform
“Beyond raw performance, WEKA has given us the most balanced efficiency per terabyte and per kilowatt we’ve seen in the industry, allowing us to scale sustainably without compromise,” Mr. Nutthapong explains. “WEKA doesn’t just keep up with our most demanding AI workloads — it makes storage invisible, so our teams can focus on innovation, not infrastructure.”
“I would recommend NeuralMesh by WEKA to any organization facing the same AI infrastructure challenges we encountered. With WEKA, storage simply disappears from the equation, allowing our teams to focus on building AI innovation instead of wrestling with bottlenecks.”
Creating Real-World Impact
Siam AI is extending its platform to support emerging workloads and broader access across Thailand and beyond, including:
- High-performance inference at scale
- Local AI development environments for researchers and startups
- Sovereign AI initiatives for enterprises and government
- Next-generation use cases in healthcare, manufacturing, fintech, and smart cities
As one of only four Asian countries developing indigenous LLMs, alongside China, Japan, and South Korea, Thailand continues to strengthen its position in the global AI landscape.
With NeuralMesh and NeuralMesh Axon as its infrastructure foundation, providing performance, simplicity, and scale, Siam AI has transformed its platform to compete globally while serving locally. As Thailand rises in the global AI readiness rankings and Siam AI hosts one of Asia’s most advanced accelerated computing infrastructures, the foundation is set for continued innovation and leadership.
