Maximize HPC clusters
with cloud-native agility
Run batch compute, data staging, and orchestration tools on secure, multi-tenant infrastructure you control.
Predictable economics
You pay for the hardware once. No per-minute billing, no per-seat licensing, no egress fees.
Cloud-native experience
API-first automation with Slurm and Flux. Self-service provisioning through web console, CLI, or API.
Secure multitenancy
Robust isolation between teams and organizations.
Powering the best HPC teams
using Oxide to modernize their stack
Traditional HPC vs. Oxide
Infrastructure that works for you
Run simulations, data pipelines, and research computing on dedicated hardware with cloud-native tooling.
Elastic compute on demand
Provision VMs programmatically. Scale capacity up or down without tickets.
Multi-tenant isolation
Isolate workloads per team or project, with quotas for granular control of utilization.
Standard interfaces
REST APIs, Terraform provider, and CLI. No proprietary orchestration layer to learn.
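As an illustration, provisioning a compute node through the Oxide Terraform provider might look like the sketch below. The attribute names and values here are assumptions for illustration; consult the provider documentation for the authoritative schema.

```hcl
terraform {
  required_providers {
    oxide = {
      source = "oxidecomputer/oxide"
    }
  }
}

# Hypothetical example: one HPC compute node. Field names and the
# project reference are illustrative, not guaranteed to match the
# current provider schema.
resource "oxide_instance" "compute_node" {
  project_id  = var.project_id
  name        = "hpc-node-01"
  description = "Slurm compute node"
  ncpus       = 16
  memory      = 68719476736 # 64 GiB, in bytes
}
```

Because the provider is ordinary Terraform, the same plan/apply workflow your team already uses for cloud resources applies to on-premises capacity.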
Compatible with your existing tools
Run interactive notebooks, databases, orchestration tools, and monitoring alongside your HPC cluster.
Plus broad compatibility
Oxide runs standard VMs, so your existing toolchain works out of the box.
Modernize your HPC
Talk to our team
Resources
Introduction to Oxide
An overview of the Oxide rack-scale computing system, from hardware components and networking to the integrated control plane API.
HPC Schedulers on Oxide
End-to-end Terraform and Ansible deployments for Slurm and Flux HPC job schedulers on Oxide infrastructure.
RFD 63: Network Architecture
The design behind Oxide's rack networking: IPv6 routing, Geneve encapsulation, and the programmable packet transformation engine powering tenant VPCs.