
AIEEV Launches “Air Cloud,” Connecting Edge GPUs Worldwide to Cut AI Inference Costs by 40%





Beta Testing Completed – Upgraded Core Features Enable Affordable, Large-Scale AI Inference

September 30, 2025 – Seoul, South Korea — Decentralized GPU AI cloud startup AIEEV has officially launched Air Cloud, a 100% distributed inference-focused AI cloud platform that connects idle GPUs from consumers and enterprises to deliver high-performance AI inference at a fraction of the cost of conventional clouds.


Air Cloud eliminates dependence on centralized data centers by linking global GPU and NPU resources to create an eco-friendly, low-cost, and scalable AI infrastructure. As AI adoption accelerates, global data-center power usage and infrastructure costs have surged.


Traditional public clouds rely on costly and energy-intensive data-center expansion, hitting clear scalability limits. Focusing specifically on the inference market, AIEEV addresses this challenge by interconnecting worldwide GPU edge nodes into a single unified cloud. The company has also integrated idle GPUs and personal LLM devices such as Mnemos from Dinothisia to further improve resource efficiency. This approach enables users to run and deploy AI models quickly and affordably—without needing specialized staff or complex configurations. After nine months of beta testing with more than 20 AI companies, AIEEV officially launched Air Cloud with several upgraded features:

  • Execution Optimization: Reduced deployment latency and improved workload processing efficiency

  • Cost Efficiency: Auto-scaling and smart scheduling to minimize idle GPU usage, reducing GPU instance costs by up to 40%

  • Secure Billing: Support for credit card, bank transfer, and international payments, plus long-term savings plans for enterprise users

  • Enhanced Security: Combination of crowdsourced and dedicated security nodes for stable, reliable operation


AIEEV aims to connect over 100,000 consumer GPU nodes worldwide by 2026, positioning Air Cloud as the most affordable and massively scalable AI inference platform. The service currently operates through its managed Air Cloud+ nodes, and AIEEV is expanding capacity by partnering with major Korean PC-café operators to utilize their idle GPUs. In addition, a user-participation program will allow individuals to connect unused PCs or devices to earn rewards, turning everyday users into contributors to the global AI infrastructure.


The company’s innovation has already been recognized both domestically and globally, having been selected for Samsung Electronics C-Lab, Daegu ABB Project, SK Telecom AI Startup Acceleration, and the Korean Ministry of SMEs and Startups’ TIPS Program. Internationally, AIEEV is part of the government-backed DeepTech GMEP initiative, collaborating with leading global enterprises through Plug and Play Tech Center in Silicon Valley.


Leveraging this foundation, AIEEV plans to accelerate its global expansion by exhibiting at CES 2026 in the United States. To celebrate the official launch, the company is offering a free Proof-of-Concept (PoC) program and a 50% discount promotion for new users, available via its website at www.aieev.com.

“AI infrastructure must go beyond speed and cost—it needs to guarantee reliability,” said Sejin Park, CEO of AIEEV. “Through its decentralized architecture, Air Cloud ensures continuous uptime even under failures, empowering customers to operate large-scale AI services efficiently and affordably.”





