#Rust#Axum#Web Development#Backend#API

Axum in 2026: Why It's Become the Go-To Rust Web Framework

webhani

According to recent Rust developer surveys, Axum has surpassed Actix-web as the most widely adopted Rust web framework. The reason isn't raw benchmark numbers — Actix-web still wins on peak throughput. Axum's appeal is its design philosophy: lean primitives, full Tower middleware compatibility, and type-safe request handling that integrates naturally with the broader async Rust ecosystem.

Here's what working with Axum looks like in practice.

Setup

# Cargo.toml
[dependencies]
axum = "0.8"
tower = "0.5"
tower-http = { version = "0.6", features = ["trace", "cors", "compression-gzip", "timeout"] }
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
sqlx = { version = "0.8", features = ["postgres", "runtime-tokio"] }
anyhow = "1"
tracing = "0.1"

Type-safe Routing and Extractors

Axum's handlers are plain async functions. Request data — path params, query strings, JSON bodies, shared state — comes in as typed extractors. If the types don't match the handler signature, it won't compile.

use axum::{
    routing::{get, post},
    Router,
    extract::{Path, Json, State},
    http::StatusCode,
};
use serde::{Deserialize, Serialize};
 
#[derive(Serialize, Deserialize, Clone)]
struct User {
    id: u64,
    name: String,
    email: String,
}
 
#[derive(Deserialize)]
struct CreateUserRequest {
    name: String,
    email: String,
}
 
async fn get_user(
    Path(id): Path<u64>,
    State(db): State<AppState>,
) -> Result<Json<User>, StatusCode> {
    db.find_user(id)
        .await
        .map(Json)
        .ok_or(StatusCode::NOT_FOUND)
}
 
async fn create_user(
    State(db): State<AppState>,
    Json(payload): Json<CreateUserRequest>,
) -> (StatusCode, Json<User>) {
    let user = db.create_user(payload.name, payload.email).await;
    (StatusCode::CREATED, Json(user))
}

No macro magic, no reflection — just function signatures that the compiler verifies.

Shared State

Application state (database pools, caches, config) is injected through the State extractor. The state type must implement Clone; wrap expensive fields in Arc so clones stay cheap across handler tasks (sqlx's PgPool is already reference-counted internally, so it can be cloned directly).

use std::sync::Arc;
use tokio::sync::RwLock;
 
#[derive(Clone)]
struct AppState {
    db_pool: sqlx::PgPool,
    cache: Arc<RwLock<std::collections::HashMap<u64, User>>>,
}
 
#[tokio::main]
async fn main() {
    let pool = sqlx::PgPool::connect("postgres://localhost/mydb")
        .await
        .expect("Failed to connect to database");
 
    let state = AppState {
        db_pool: pool,
        cache: Arc::new(RwLock::new(std::collections::HashMap::new())),
    };
 
    let app = Router::new()
        .route("/users/{id}", get(get_user))
        .route("/users", post(create_user))
        .with_state(state);
 
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000")
        .await
        .unwrap();
 
    axum::serve(listener, app).await.unwrap();
}

Tower Middleware Stack

This is where Axum's design pays off. Authentication, rate limiting, CORS, compression, distributed tracing — all come from the tower and tower-http ecosystem, not Axum-specific plugins.

use tower_http::{
    trace::TraceLayer,
    cors::{CorsLayer, Any},
    compression::CompressionLayer,
    timeout::TimeoutLayer,
};
use tower::ServiceBuilder;
use std::time::Duration;
 
let app = Router::new()
    .route("/users/{id}", get(get_user))
    .route("/users", post(create_user))
    .layer(
        ServiceBuilder::new()
            .layer(TimeoutLayer::new(Duration::from_secs(30)))
            .layer(TraceLayer::new_for_http())
            .layer(
                CorsLayer::new()
                    .allow_origin(Any)
                    .allow_methods(Any)
                    .allow_headers(Any),
            )
            .layer(CompressionLayer::new()),
    )
    .with_state(state);

Because Tower is a general async middleware abstraction, the same middleware works across Axum, Tonic (gRPC), and other Tower-compatible services. You don't rewrite middleware when you add a gRPC endpoint alongside your REST API.

Structured Error Handling

Implement IntoResponse on a custom error type to get clean, consistent error responses across all handlers.

use axum::{
    response::{IntoResponse, Response},
    http::StatusCode,
    Json,
};
use serde_json::json;
 
enum AppError {
    NotFound(String),
    Unauthorized,
    Internal(anyhow::Error),
}
 
impl IntoResponse for AppError {
    fn into_response(self) -> Response {
        let (status, message) = match self {
            AppError::NotFound(msg) => (StatusCode::NOT_FOUND, msg),
            AppError::Unauthorized => {
                (StatusCode::UNAUTHORIZED, "Unauthorized".to_string())
            }
            AppError::Internal(err) => {
                tracing::error!("Internal error: {:?}", err);
                (
                    StatusCode::INTERNAL_SERVER_ERROR,
                    "Internal server error".to_string(),
                )
            }
        };
 
        (status, Json(json!({ "error": message }))).into_response()
    }
}
 
async fn get_user(
    Path(id): Path<u64>,
    State(db): State<AppState>,
) -> Result<Json<User>, AppError> {
    db.find_user(id)
        .await
        .map(Json)
        .ok_or_else(|| AppError::NotFound(format!("User {} not found", id)))
}

This pattern keeps error handling logic in one place and makes handler return types self-documenting.

Axum vs Actix-web: Practical Comparison

| Aspect | Axum | Actix-web |
| --- | --- | --- |
| Adoption (2026 surveys) | Higher | High |
| Peak throughput | High | ~10-15% higher |
| Middleware | Tower ecosystem (shared) | actix-web specific |
| Learning curve | Moderate | Moderate to steep |
| gRPC compatibility | Tower (shared with Tonic) | Separate integration |
| Best for | General-purpose APIs | Maximum raw throughput |

For most production APIs, the performance difference between Axum and Actix-web is irrelevant — both are fast enough that the bottleneck will be the database or network, not the framework. The Tower middleware sharing story is more meaningful: one middleware investment works across your whole service mesh.

Loco: Rails-style Rust Framework

Worth mentioning: Loco is gaining traction as an opinionated full-stack Rust framework built on Axum. It brings convention-over-configuration for those who find raw Axum too low-level.

cargo install loco-cli
loco new my_app
cd my_app
cargo loco start

Loco bundles SeaORM, authentication scaffolding, task scheduling, and mailer support. If you're building a product (rather than a performance-critical service) and want to move fast in Rust, it's worth evaluating.

When to Choose Rust for Web

Rust for web isn't the right call in every situation. It pays off when:

  • Performance headroom matters: you need sub-millisecond latency or can't scale horizontally
  • Memory safety at the systems boundary: your service handles untrusted binary data or complex parsing
  • Long-lived services with strict resource constraints

For standard CRUD APIs with moderate traffic, Python (FastAPI) or TypeScript (Hono, Fastify) will ship faster and maintain at lower cognitive cost. Use Rust where its strengths are actually needed.

Takeaway

Axum in 2026 is the pragmatic default for Rust web development — not because it wins every benchmark, but because it fits naturally into the async Rust ecosystem, shares middleware with the rest of the Tower stack, and keeps handler code readable. Start with Axum, reach for Loco if you want more scaffolding, and switch to Actix-web only if profiling shows you need the extra throughput.