# The Case for Rust in Production Services
Rust occupies a unique position in the backend landscape. It delivers C-level performance—no garbage collection pauses, no virtual machine overhead—while providing memory safety guarantees that eliminate entire categories of production bugs. In microservices where p99 latency and resource efficiency directly translate to infrastructure cost, that combination is compelling.
At MediaFront, we reach for Rust when two conditions are true: the service sits in the critical path of user interactions, and performance headroom from Go or .NET is insufficient. For everything else, faster iteration speed in those languages wins.
## Why Rust's Memory Model Matters for Microservices
The biggest operational hazard in long-running services isn't bad logic — it's undefined behaviour in memory management: dangling pointers, use-after-free, data races in concurrent code. Garbage-collected languages avoid the memory errors at the cost of unpredictable stop-the-world pauses (and most still permit data races). Languages without GC (C, C++) avoid the pauses but leave memory safety entirely to the programmer.
Rust's ownership model is a third path: the compiler enforces memory safety at compile time through ownership rules and lifetimes. If safe Rust compiles, it is free of use-after-free and data races — by construction:
```rust
// The borrow checker rejects this class of bug at compile time
fn process(data: Vec<u8>) -> &[u8] { // ❌ error[E0106]: `data` is owned by the
    &data[0..4]                      //    function and dropped when it returns
}

// Correct: borrow instead of moving; lifetime elision ties the
// returned slice to the input slice
fn process(data: &[u8]) -> &[u8] { // ✅ Borrow, don't move
    &data[0..4]
}
```
In a microservice running for months under load, this matters. You get C-like performance without the class of production incidents that plague long-running C++ services.
## Fearless Concurrency with Tokio
Rust's async model, paired with the Tokio runtime, allows writing highly concurrent services that read sequentially:
```rust
use axum::{extract::Path, response::Json, routing::get, Router};
use tokio::net::TcpListener;

#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/products/:id", get(get_product))
        .route("/products", get(list_products))
        .layer(tower_http::trace::TraceLayer::new_for_http());

    let listener = TcpListener::bind("0.0.0.0:8080").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

// `Product`, `AppError`, and the `db` module are defined elsewhere in the service
async fn get_product(Path(id): Path<String>) -> Result<Json<Product>, AppError> {
    let product = db::find_product(&id).await?;
    Ok(Json(product))
}
```
Tokio's runtime multiplexes thousands of concurrent async tasks onto a small thread pool — typically one worker thread per CPU core. Unlike OS threads, which reserve megabytes of stack each, async tasks cost kilobytes to create. A single Rust/Tokio service can hold on the order of 100,000 concurrent connections on hardware where a thread-per-request Spring Boot service tops out around 2,000.
## The Axum Framework: Ergonomic Without Overhead
Axum is built on Hyper, the HTTP library behind much of the Rust ecosystem (including Linkerd's proxy), with a type-safe routing and handler system that adds no runtime dispatch overhead:
```rust
use axum::{
    extract::{Query, State},
    http::StatusCode,
    response::IntoResponse,
    Json,
};
use serde::Deserialize;

#[derive(Deserialize)]
struct ListQuery {
    page: Option<u32>,
    per_page: Option<u32>,
    category: Option<String>,
}

async fn list_products(
    State(db): State<AppState>,
    Query(params): Query<ListQuery>,
) -> impl IntoResponse {
    let page = params.page.unwrap_or(1).max(1);
    let per_page = params.per_page.unwrap_or(20).min(100);

    match db.products.list(page, per_page, params.category).await {
        Ok(products) => Json(products).into_response(),
        Err(e) => {
            tracing::error!("Failed to list products: {e}");
            StatusCode::INTERNAL_SERVER_ERROR.into_response()
        }
    }
}
```
Everything in this handler is checked at compile time: the State extractor requires AppState to implement Clone, Query requires ListQuery to implement Deserialize, and IntoResponse guarantees the return type converts to a valid HTTP response. There's no runtime reflection; routing and extraction are resolved entirely through the type system.
## Error Handling Without Exceptions
Rust's Result<T, E> type makes error handling explicit and composable. The ? operator propagates errors up the call stack without the boilerplate of checked exceptions or the opacity of unchecked ones:
```rust
use anyhow::Result;
use uuid::Uuid;

// `db`, `inventory`, `payments`, `fulfillment`, and `metrics` are service modules
async fn process_order(order_id: Uuid) -> Result<OrderConfirmation> {
    let order = db::find_order(order_id).await?;           // propagates if not found
    let inventory = inventory::check(&order.items).await?; // propagates if service down
    let payment = payments::charge(&order.payment).await?; // propagates if charge fails
    let confirmation = fulfillment::create(&order).await?;

    metrics::increment("orders.fulfilled");
    Ok(confirmation)
}
```
Each ? is a compile-time checked early return. Outside of explicit panics, there are no invisible error paths: every fallible call is marked, and the compiler refuses to let a Result go unhandled.
## Real-World Performance Benchmarks
On equivalent hardware (4 vCPU, 8GB RAM, PostgreSQL connection pool of 20):
| Service | Requests/sec | p50 | p99 | Memory RSS |
|---|---|---|---|---|
| Rust/Axum | 48,000 | 0.8ms | 3.2ms | 18MB |
| Go/net/http | 32,000 | 1.2ms | 5.8ms | 42MB |
| Node.js/Fastify | 14,000 | 2.1ms | 12ms | 95MB |
| .NET/ASP.NET Core | 28,000 | 1.4ms | 6.1ms | 110MB |
Rust's memory footprint is particularly striking: 18MB RSS for a production-grade HTTP service. In a Kubernetes cluster running 50 microservices, that difference across all services adds up to meaningful infrastructure savings.
## Where Rust Makes Sense (and Where It Doesn't)
Reach for Rust when:
- Service is in the critical path and latency directly affects user experience
- Resource efficiency matters (high-density containerised environments)
- The service will handle cryptography, media processing, or compute-intensive workloads
- Long-running stability is essential (financial systems, infrastructure tooling)
Stick with Go or .NET when:
- Fast iteration on business logic is the priority
- The team lacks Rust experience and the onboarding cost is high
- The service is a straightforward CRUD API with low traffic
Rust's learning curve is real. The borrow checker requires a mental model shift that takes weeks to internalise. The investment is justified when performance is non-negotiable — and disproportionate when it isn't.