Nemotron 3 Super: The Open Model That Outperforms GPT-OSS

March 21, 2026 · nemotron moe agentic-ai

NVIDIA has unveiled Nemotron 3 Super, a Mixture-of-Experts model with 120B total parameters and 12B active per token that blends Mamba's linear-time sequence processing with Transformer attention, pre-trained in NVFP4 precision. This isn't just another open model; it's the first in the Nemotron 3 series to employ Latent MoE and multi-token prediction (MTP) layers, designed explicitly for multi-agent systems that require high-throughput reasoning.
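NVIDIA hasn't published reference code for these layers, but the 120B-total / 12B-active split comes from a general mechanism: each token is routed to only a few experts, so most expert weights sit idle on any given forward pass. Below is a minimal, generic top-k MoE layer in PyTorch as a sketch of that routing idea; it is not NVIDIA's Latent MoE, and all sizes are toy values.

```python
# Minimal top-k Mixture-of-Experts layer (generic sketch, not NVIDIA's
# Latent MoE). Each token is routed to k of E experts, so only ~k/E of
# the expert parameters do work per token, which is why total and
# active parameter counts diverge.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model: int, d_ff: int, n_experts: int, k: int):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Pick each token's top-k experts.
        scores = self.router(x)                     # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)  # (tokens, k)
        weights = F.softmax(weights, dim=-1)        # normalize over chosen experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            # Gather the tokens that routed to expert e (at most one slot each).
            token_sel, slot = (idx == e).nonzero(as_tuple=True)
            if token_sel.numel() == 0:
                continue
            out[token_sel] += weights[token_sel, slot].unsqueeze(-1) * expert(x[token_sel])
        return out

# Toy configuration: 8 experts, 1 active -> ~1/8 of expert params per token.
# The 120B/12B ratio in the article arises from the same mechanism at scale.
moe = TopKMoE(d_model=64, d_ff=256, n_experts=8, k=1)
print(moe(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```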

Unlike previous open models that traded performance for transparency, Nemotron 3 Super scores 36 on the Artificial Analysis Intelligence Index, outperforming gpt-oss-120b (33) and matching or exceeding many proprietary models in its size class. Because only 12B of its 120B parameters activate per token, the architecture enables 5x higher throughput in agentic workloads, making it ideal for complex reasoning chains, long-context document synthesis, and distributed agent coordination.
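To see why sparsity underpins that throughput claim, here is a back-of-envelope comparison using the common approximation of roughly 2 FLOPs per parameter per token for a forward pass. It ignores attention, the Mamba scan, routing overhead, and memory bandwidth, and the 5x figure itself is NVIDIA's, not something derived here.

```python
# Rough per-token compute for a sparse MoE vs. a hypothetical dense model
# of the same total size, using the standard ~2 FLOPs/parameter/token
# forward-pass estimate. Real throughput also depends on memory bandwidth,
# batching, and kernel efficiency.
total_params  = 120e9   # all experts resident in memory
active_params = 12e9    # parameters actually exercised per token

dense_flops  = 2 * total_params    # if all 120B were dense
sparse_flops = 2 * active_params   # only the routed experts run

print(f"active fraction: {active_params / total_params:.0%}")            # 10%
print(f"per-token FLOPs: {sparse_flops:.1e} sparse vs {dense_flops:.1e} dense")
```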

The model is now available on Microsoft Foundry, giving enterprise developers open weights alongside enterprise-grade deployment tooling, evaluation pipelines, and trust controls. This bridges a critical gap: high-performance AI without proprietary API lock-in or opaque training data.
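The announcement doesn't include client code. Assuming Foundry exposes the model through a standard serverless inference endpoint, a minimal call with the azure-ai-inference SDK might look like the sketch below; the endpoint URL, key, and whether Nemotron 3 Super is served through this exact interface are assumptions.

```python
# Sketch of a chat call against a Foundry serverless endpoint using the
# azure-ai-inference SDK. Endpoint and key are placeholders; confirm the
# actual values in your Foundry deployment before running.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-endpoint>.inference.ai.azure.com",  # hypothetical
    credential=AzureKeyCredential("<your-api-key>"),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a concise planning assistant."),
        UserMessage(content="Outline a three-step plan to summarize a 200-page report."),
    ],
)
print(response.choices[0].message.content)
```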

For practitioners at OtherU, this means you can now deploy state-of-the-art reasoning models without sacrificing transparency or control. Whether you’re building autonomous agents, multi-step planners, or retrieval-augmented systems, Nemotron 3 Super offers a compelling alternative to closed ecosystems — with open weights, optimized inference, and production-ready integration via Azure.

The gap between proprietary frontier models and open alternatives is narrowing fast. With Nemotron 3 Super, NVIDIA isn’t just competing — it’s redefining what open AI can achieve.