Open-Source AI Wins: OpenAI Goes Apache 2.0, Superpowers Hits 94K Stars, and Xiaomi Reveals Its Secret Model

This week's open-source highlights: GPT-OSS marks OpenAI's first open weights since GPT-2, Superpowers becomes the most-starred AI coding framework, and Hunter Alpha was Xiaomi all along.

OpenAI shipped open weights under Apache 2.0. A shell-based skills framework became the fastest-growing AI coding tool on GitHub. And the mysterious “Hunter Alpha” model that topped OpenRouter’s charts turned out to be Xiaomi testing a trillion-parameter agent model in the wild.

Here’s what matters from the past week.

GPT-OSS: OpenAI’s First Open Weights Since GPT-2

After years of closing off model weights, OpenAI released gpt-oss-120b and gpt-oss-20b under the Apache 2.0 license. These are OpenAI’s first open-weight language models since GPT-2 in 2019.

The specifications:

gpt-oss-120b:

  • 117 billion total parameters
  • 5.1 billion active per token (mixture-of-experts)
  • Runs on a single 80GB GPU via native MXFP4 quantization
  • 128K context window
  • Near-parity with o4-mini on reasoning benchmarks

gpt-oss-20b:

  • 21 billion total parameters
  • 3.6 billion active per token
  • Runs on 16GB devices
  • Matches o3-mini on common benchmarks
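
The single-GPU claim survives a back-of-envelope check: at MXFP4's roughly 4 bits per parameter (ignoring the small per-block scale overhead), the 120B model's weights come in well under 80GB. A napkin estimate, not a deployment guide:

```python
# Back-of-envelope: do 117B parameters fit on one 80 GB GPU at ~4 bits each?
# MXFP4 adds small per-block scaling overhead, ignored here.
total_params = 117e9
bytes_per_param = 4 / 8        # 4 bits = 0.5 bytes
weight_gb = total_params * bytes_per_param / 1e9
print(weight_gb)  # 58.5
```

That leaves roughly 20GB of headroom for the KV cache and activations at long context.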

Both models were trained using reinforcement learning techniques from OpenAI’s frontier systems, including o3. On competition math (AIME 2024/2025), health queries (HealthBench), and tool calling (TauBench), gpt-oss-120b outperforms o4-mini.

The weights are on Hugging Face. Snowflake, Orange, and AI Sweden are already fine-tuning for on-premises deployment.

What This Means

OpenAI releasing genuine open-weight models under Apache 2.0 is a significant strategic shift. Whether this is competitive pressure from DeepSeek and Chinese open models, an attempt to commoditize the base layer while monetizing the frontier, or both—the result is that strong OpenAI-trained weights are now available for local inference and fine-tuning.

Superpowers: The Skills Framework With 94K Stars

Jesse Vincent’s Superpowers became the most-starred AI coding framework this month, crossing 94,000 stars and earning a place in Anthropic’s official plugin marketplace.

The framework doesn’t replace your AI coding agent—it teaches it methodology. Instead of agents jumping straight to code, Superpowers enforces a structured workflow:

  1. Socratic brainstorming — Explores alternatives, refines requirements through questions
  2. Isolated worktrees — Creates branches to protect main codebase
  3. Micro-task planning — Breaks work into 2-5 minute tasks with dependencies
  4. Subagent development — Parallel specialized agents (claimed 3-4x speedup)
  5. Test-driven development — RED-GREEN-REFACTOR with 85-95% coverage targets
  6. Systematic review — Automated spec compliance verification
  7. Clean completion — Documentation and merge options
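
Step 3 above amounts to building a dependency graph and dispatching tasks in topological order. A minimal sketch with invented task names (Superpowers' actual internals may differ):

```python
# Order hypothetical micro-tasks so each runs only after its dependencies.
# Task names are illustrative, not taken from Superpowers itself.
from graphlib import TopologicalSorter

tasks = {
    "write_schema": set(),
    "write_model": {"write_schema"},
    "write_api": {"write_model"},
    "write_tests": {"write_model"},
}

order = list(TopologicalSorter(tasks).static_order())
print(order)  # dependencies always precede their dependents
```

Independent tasks at the same depth ("write_api" and "write_tests" here) are exactly the ones a framework can hand to parallel subagents.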

The skills trigger automatically. Once installed, users report Claude working autonomously for hours without deviating from plans.

Vincent isn’t new to open source—he created Request Tracker, served as Perl 5’s release manager, and co-founded Keyboardio. The framework launched with Claude Code’s plugin system in October 2025 and grew from a few thousand stars to 94K in five months.

Why It Works

Superpowers treats AI agents like junior developers who need process guardrails. The result: code that ships rather than code that gets abandoned halfway through implementation.

MiMo-V2-Pro: Xiaomi’s Trillion-Parameter Secret

For weeks, a mysterious model called “Hunter Alpha” dominated OpenRouter’s usage charts. Speculation centered on DeepSeek secretly testing V4. The truth was stranger—it was Xiaomi.

MiMo-V2-Pro, now publicly available, has these specs:

  • Over 1 trillion total parameters
  • 42 billion active (roughly 3x larger than MiMo-V2-Flash)
  • 1 million token context window
  • Optimized specifically for agentic scenarios
  • $1/M input tokens, $3/M output tokens on OpenRouter
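
Trying it via OpenRouter is a standard chat-completions call. The sketch below only builds the request body; the slug "xiaomi/mimo-v2-pro" is a guess at the naming convention, so check OpenRouter's model list for the real one:

```python
# Build (not send) an OpenRouter chat-completions request body.
# The model slug "xiaomi/mimo-v2-pro" is assumed; verify on openrouter.ai/models.
import json

payload = {
    "model": "xiaomi/mimo-v2-pro",
    "messages": [
        {"role": "user", "content": "Plan the steps to refactor this module."}
    ],
    "max_tokens": 1024,
}
body = json.dumps(payload)
print(body)
```

POST that to `https://openrouter.ai/api/v1/chat/completions` with an `Authorization: Bearer <key>` header; the shape matches the OpenAI chat API.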

The benchmark positioning: #8 worldwide on Artificial Analysis Intelligence Index, #2 among Chinese models. On PinchBench (agentic evaluation), it scores 84.0—third globally.

The team is led by Luo Fuli, a former core contributor to DeepSeek who moved to Xiaomi in late 2025. That explains the architectural DNA—MiMo-V2-Pro shares design philosophy with DeepSeek’s efficiency-focused approach.

During stealth testing as Hunter Alpha, the model processed over 1 trillion tokens. Xiaomi essentially ran a public beta disguised as a mystery.

What This Means

Xiaomi—primarily known for smartphones and IoT devices—now has a trillion-parameter model competing with frontier labs. The “Hunter Alpha” stealth launch suggests Chinese hardware companies are taking AI foundation models seriously as a competitive differentiator.

Open SWE: LangChain’s Async Coding Agent

LangChain released Open SWE on March 17—the first open-source, async, cloud-hosted coding agent designed to work like another engineer on your team.

The workflow:

  1. Receives task from GitHub issue or custom UI
  2. Researches codebase
  3. Creates execution plan
  4. Writes code in isolated Daytona sandbox
  5. Runs tests and self-reviews
  6. Opens pull request when finished
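
As a toy illustration of that sequence (placeholder stages, not the real Open SWE agent), the flow is a linear pipeline where each stage would dispatch real tools:

```python
# Toy linear pipeline mirroring the six steps; each stage is a stub that
# a real agent would back with tools (GitHub API, sandbox, test runner).
def run_pipeline(issue_title):
    state = {"task": issue_title, "log": []}
    for stage in ("receive_task", "research_codebase", "create_plan",
                  "write_code_in_sandbox", "run_tests_and_review", "open_pr"):
        state["log"].append(stage)   # stand-in for real tool calls
    return state

result = run_pipeline("Fix flaky auth test")
print(result["log"][-1])  # open_pr
```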

Open SWE ships with ~15 curated tools covering shell execution, web fetching, Git operations, and integrations with Linear and Slack. The design draws on internal agent systems at Stripe, Ramp, and Coinbase.

The security model matters: every session runs in an isolated sandbox. No worrying about malicious commands reaching production systems.

Hugging Face State of Open Source: China at 41%

Hugging Face’s Spring 2026 report shows the ecosystem’s geographic shift:

  • 13 million users (up from ~8M a year ago)
  • 2 million+ public models
  • 500,000+ public datasets
  • Chinese models: 41% of downloads (vs 36.5% US)

DeepSeek R1’s January 2025 release drove much of the shift—frontier performance at lower cost with available weights attracted massive adoption.

Other findings: 30% of Fortune 500 companies now maintain verified Hugging Face accounts. The top 0.01% of models account for 49.6% of downloads. Robotics datasets grew 23x year-over-year.
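
That concentration figure is starker in absolute terms: 0.01% of 2 million public models is only about 200 repositories driving half of all downloads.

```python
# The top 0.01% of 2M+ public models is ~200 repos taking 49.6% of downloads.
total_models = 2_000_000
top_models = int(total_models * 0.0001)
print(top_models)  # 200
```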

The concentration isn’t surprising—most users want the best models, and a small number of projects dominate. But specialized communities around specific languages, domains, and modalities show sustained engagement despite modest download counts.

The Pattern

This week’s common thread: open source is no longer a compromise.

OpenAI’s GPT-OSS matches the company’s o-series mini models. Superpowers makes AI coding agents actually productive. MiMo-V2-Pro competes with Claude Opus. Open SWE provides enterprise-grade coding agents without vendor lock-in.

The “you need proprietary APIs for serious work” argument is weakening. GitHub now hosts 4.3 million AI-related repositories, with LLM-focused projects up 178% year-over-year.

What You Can Do

Try GPT-OSS-20B locally if you have 16GB of RAM. It’s competitive with o3-mini and runs on a MacBook or gaming PC.
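
One low-friction path, assuming you have Ollama installed and the 20B build is published under this tag (confirm the current name in the Ollama model library first):

```shell
# Pull and chat with the 20B model via Ollama; the "gpt-oss:20b" tag is
# an assumption, so verify it in the Ollama library before running.
ollama pull gpt-oss:20b
ollama run gpt-oss:20b "Explain mixture-of-experts in two sentences."
```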

Install Superpowers if you’re using Claude Code or similar. The structured workflow reduces wasted iterations.

Evaluate MiMo-V2-Pro for agent workloads. At $1/$3 per million tokens with 1M context, it’s cost-competitive for long-context agentic tasks.
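
To gauge that pricing, a hypothetical long-context agent turn with 800K input tokens and 50K output tokens comes to under a dollar:

```python
# Cost of one hypothetical call at the listed $1/$3 per million tokens.
input_tokens, output_tokens = 800_000, 50_000
cost = input_tokens / 1e6 * 1.00 + output_tokens / 1e6 * 3.00
print(f"${cost:.2f}")  # $0.95
```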

Deploy Open SWE for internal coding automation. The async architecture means tasks run overnight, with pull requests ready by morning.

The frontier keeps getting more accessible.