One Command, Done: Integrating Air API with a ClawHub Plugin
Hi, I’m CY Lee from the DevOps/SRE team. With the launch of Air API, we’ve been building out our internal infrastructure monitoring system. Along the way, we developed an OpenClaw plugin—and in this post, I’d like to walk you through what we built and why it matters. 🙂 Before We Start: If you’ve used OpenClaw for a while, you’ve probably experienced something like this at least once. The moment you try to connect an external model provider, you find yourself going through the
6 days ago


AI Infrastructure Is Bifurcating. Big Tech Is Spending $21 Billion.
This illustration was created with AI to support the explanation. A few days ago, Meta announced it was extending its AI cloud contract with CoreWeave through 2032, committing an additional $21 billion. Combined with the existing $14.2 billion agreement, that brings the total to over $35 billion locked in for GPU compute, years in advance. CoreWeave, as of the announcement date, became the fastest cloud company in history to reach $5 billion in ARR. The dollar fig
Apr 15


How Many Tokens Per Month Before Self-Hosting Your GPU Becomes Cheaper?
If you've been running an AI service for any length of time, you've probably hit this question at some point. "Is using an API actually the cheaper option? Or would it be better to just buy a GPU and run it ourselves?" As model performance converges, cost has become the decisive battleground. Teams at every scale are starting to run the numbers on which approach is actually cheaper for their usage volume — and the answer changes significantly depending on how much you're act
Apr 14
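The break-even question this post raises can be sketched with simple arithmetic: a pay-per-token API bill grows linearly with volume, while a self-hosted GPU is a flat monthly cost, so the crossover is just the monthly GPU cost divided by the per-token price. The figures below ($1,500/month for a GPU server, $0.50 per 1M tokens) are illustrative assumptions, not numbers from the article:

```python
def break_even_tokens_per_month(gpu_monthly_cost_usd, api_price_per_1m_tokens_usd):
    """Monthly token volume at which self-hosting matches the API bill.

    Below this volume the pay-per-token API is cheaper; above it, the
    flat-rate GPU wins (ignoring ops overhead and GPU utilization).
    """
    price_per_token = api_price_per_1m_tokens_usd / 1_000_000
    return gpu_monthly_cost_usd / price_per_token

# Hypothetical inputs: $1,500/month GPU server vs. $0.50 per 1M tokens.
threshold = break_even_tokens_per_month(1500, 0.50)
print(f"break-even at {threshold:,.0f} tokens/month")  # 3 billion tokens/month
```

In practice the real answer shifts with utilization, engineering time, and model size, which is exactly why the post argues the numbers change "significantly depending on how much you're actually" using.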


The Cheapest Way to Use Qwen
Across industries, job functions, and academia, more teams are building their own AI agent assistants and putting them to work. But the longer you run them, the harder it is to ignore one unavoidable reality: cost. An API invoice larger than your monthly subscription fee, quietly accumulating call by call, has become a familiar sight. AI agents don't call a model once per task. They call it tens or even hundreds of times per job -- planning, invoking tools, verifying results
Apr 10


Air API is Now Live
If you've ever tried serving an open-source AI model yourself, you know the pain. Setting up GPU infrastructure takes longer than choosing the model itself. Provisioning GPUs, configuring environments, scaling with traffic... the road to running a single model is way too long. Air API eliminates that entire process. It's a serverless API service for open-source AI models. No infrastructure to build. Just an API key to get started. Key Features 💡 OpenAI-Compatible Endpoint
Apr 9
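"OpenAI-compatible endpoint" in the announcement above means any OpenAI-style client can talk to the service by swapping the base URL and API key. As a minimal sketch, this builds the request an OpenAI-compatible client would send; the base URL and model name here are placeholders, not Air API's documented values:

```python
import json

# Hypothetical base URL; substitute the service's actual endpoint.
BASE_URL = "https://example.com/v1"

def build_chat_request(api_key, model, user_message):
    """Return the (url, headers, body) an OpenAI-compatible client sends
    to the chat completions route."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",  # standard OpenAI-style auth
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    })
    return url, headers, body

url, headers, body = build_chat_request("sk-demo", "some-open-model", "hello")
```

Because the wire format matches OpenAI's, existing SDKs can typically be redirected by setting their `base_url` option rather than writing requests by hand.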


Google's TurboQuant — The Era of Serving LLMs Without Expensive GPUs Is Getting Closer
Google’s TurboQuant reduces KV cache memory usage in LLM inference without sacrificing accuracy. Learn why 80GB GPUs were needed—and why mid-range GPUs may now be enough.
Mar 30