Our Investment in RadixArk: Building the Open Infrastructure for AI

Today, we’re announcing our investment in RadixArk, an AI infrastructure company building the next generation of open source inference and training systems. WSJ covered the news exclusively here, and the founders Ying and Banghua share more about what this funding means here.
For years, the most advanced AI infrastructure has been concentrated inside a handful of frontier labs—complex systems built from scratch by small, highly specialized teams. This created a bottleneck: progress across the broader ecosystem depended on the release cycles of a few organizations with the resources to train and serve cutting-edge models.
That dynamic is now changing. Model capabilities are no longer the primary constraint. With advances like test-time reasoning, reinforcement learning, and the rise of high-quality open source models, developers have unprecedented access to frontier intelligence.
The bottleneck has shifted from building models to operating them. Developers are now constrained by their ability to control, adapt, and reliably serve a growing diversity of models at scale, across hardware, environments, and use cases. As a result, the ability to operate AI models is becoming a core part of product development for every company. We’re already seeing this emerge in leading AI-native companies like Decagon, where 80% of all workloads run through models trained in-house.
This shift creates a new opportunity to build foundational infrastructure for operating AI models—an open inference engine combined with flexible systems that give developers full control over how models are trained and deployed.
RadixArk is building that infrastructure.
Led by Ying, Banghua, and the creators and core maintainers of SGLang and Miles, the team is bringing frontier inference and post-training infrastructure into the open. SGLang has already become a leading open source inference engine, setting the standard for speed, quality, and reliability, with day-zero support for virtually every open model family and hardware provider. It is deployed across hundreds of thousands of GPUs and steered by a global community of thousands of contributors spanning hundreds of companies, universities, and research institutions, including Google, Microsoft, Oracle, LinkedIn, xAI, NVIDIA, AMD, and Intel.
The team has a long track record of creating and maintaining widely adopted projects from Vicuna to FastChat, and those open source principles are central to RadixArk’s mission. Their primary goal is to support the SGLang and Miles open source projects and grow those communities through dedicated investment. Over time, RadixArk will also work closely with its partners across the ecosystem to develop new commercial products that span the full lifecycle of model development, from training to reinforcement learning to large-scale inference.
We couldn’t be more excited to partner with RadixArk as they build the AI infrastructure layer for the next generation of engineers and researchers. If you want to work at the frontier of AI systems, come join us!