
Basic Concepts

Volt provides async programming for Zig through a simple async/await API backed by a high-performance work-stealing runtime. This page explains the core concepts you need to understand before writing async code.

The primary API for concurrent work in Volt uses two operations:

  • io.@"async"(func, args) — launch a function as an async task, returns a volt.Future(T)
  • future.@"await"(io) — wait for the result of an async operation
const volt = @import("volt");

fn app(io: volt.Io) !void {
    // Launch an async task
    var f = try io.@"async"(fetchUser, .{@as(u64, 42)});

    // Await the result
    const user = f.@"await"(io);
    _ = user;
}

fn fetchUser(id: u64) []const u8 {
    _ = id;
    return "Alice";
}

The function you pass to io.@"async" is wrapped in a lightweight task (~256 bytes) and scheduled on the work-stealing runtime. The Future(T) returned is a handle to that task’s result.

When you need to run several operations concurrently, launch them all before awaiting any:

fn app(io: volt.Io) !void {
    // Launch both concurrently
    var user_f = try io.@"async"(fetchUser, .{@as(u64, 42)});
    var posts_f = try io.@"async"(fetchPosts, .{@as(u64, 42)});

    // Await both results
    const user = user_f.@"await"(io);
    const posts = posts_f.@"await"(io);
    _ = user;
    _ = posts;
}

Both tasks run in parallel on the scheduler. The awaits block the calling task (not the thread) until each result is ready.

For spawning a dynamic number of tasks, use volt.Group:

fn app(io: volt.Io) !void {
    var group = volt.Group.init(io);

    // Spawn tasks into the group
    _ = group.spawn(processItem, .{@as(u32, 1)});
    _ = group.spawn(processItem, .{@as(u32, 2)});
    _ = group.spawn(processItem, .{@as(u32, 3)});

    // Wait for all tasks to complete
    group.wait();
}

Groups provide structured concurrency — all spawned tasks are logically scoped to the group. You can also cancel all remaining tasks with group.cancel().
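
Because spawning happens at runtime, the same pattern scales to any number of inputs. A minimal sketch, assuming the Group API shown above (the processAll wrapper and the items slice are illustrative, not part of Volt):

const volt = @import("volt");

fn processAll(io: volt.Io, items: []const u32) void {
    var group = volt.Group.init(io);
    // One task per input element -- the count is decided at runtime.
    for (items) |item| {
        _ = group.spawn(processItem, .{item});
    }
    group.wait(); // suspends this task until every spawned task finishes
}

fn processItem(n: u32) void {
    _ = n; // ... per-item work ...
}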

Individual futures can be cancelled:

fn app(io: volt.Io) !void {
    var f = try io.@"async"(longRunningTask, .{});

    // ... decide we don't need the result ...
    f.cancel(io);
}

The Runtime is the engine that drives futures to completion. It owns three subsystems: the scheduler, the I/O driver, and the blocking pool.

The scheduler manages a pool of worker threads. Task scheduling is built from four pieces:

  • A LIFO slot — hot-path shortcut for the most recently spawned task (temporal locality)
  • A local queue — 256-slot lock-free ring buffer for pending tasks
  • A global injection queue — mutex-protected overflow queue shared by all workers
  • A cooperative budget — 128 polls per tick, preventing any single task from starving others

When a worker runs out of local tasks, it steals from other workers’ queues. The steal target is chosen randomly to avoid contention. Idle workers park on a futex and are woken via a 64-bit bitmap with O(1) lookup using @ctz.
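
As an illustration of that lookup (a sketch, not Volt's actual internals), finding the lowest-numbered idle worker is a single @ctz on the bitmap:

fn firstIdleWorker(idle_bitmap: u64) ?u6 {
    if (idle_bitmap == 0) return null; // no parked workers to wake
    // @ctz counts trailing zero bits, i.e. the index of the lowest
    // set bit -- a single instruction on most targets.
    const idx: u6 = @truncate(@ctz(idle_bitmap));
    return idx;
}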

Platform-specific I/O multiplexing:

| Platform | Backend | Mechanism |
| --- | --- | --- |
| Linux 5.1+ | io_uring | Submission/completion queues in shared memory |
| macOS | kqueue | Edge-triggered event notification |
| Windows 10+ | IOCP | I/O Completion Ports |
| Linux (fallback) | epoll | Edge-triggered file descriptor polling |

The backend is auto-detected at startup. All backends present the same interface to the rest of the runtime.

CPU-intensive or blocking operations must not run on I/O worker threads (they would stall all async tasks on that worker). The blocking pool provides dedicated threads for this:

// Offload CPU-heavy work to the blocking pool
fn processData(io: volt.Io, data: []const u8) !Hash {
    var f = try io.concurrent(computeExpensiveHash, .{data});
    const result = try f.@"await"(io);
    return result;
}

The pool auto-scales up to 512 threads (configurable) and reclaims idle threads after 10 seconds.

There are two ways to start the runtime:

const std = @import("std");
const volt = @import("volt");

// Explicit pattern (recommended): create Io like an Allocator
pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    var io = try volt.Io.init(gpa.allocator(), .{
        .num_workers = 8,
        .max_blocking_threads = 128,
        .blocking_keep_alive_ns = 30 * std.time.ns_per_s,
    });
    defer io.deinit();

    try io.run(myApp);
}

// Convenience shorthand: zero-config, auto-detect everything
pub fn main() !void {
    try volt.run(myApp);
}

The Config fields:

| Field | Default | Description |
| --- | --- | --- |
| num_workers | 0 (auto) | Number of I/O worker threads. 0 means use CPU core count. |
| max_blocking_threads | 512 | Maximum threads in the blocking pool. |
| blocking_keep_alive_ns | 10s | How long idle blocking threads stay alive. |
| backend | null (auto) | Force a specific I/O backend. |

A task is a lightweight unit of concurrent work scheduled on the runtime. All task operations go through the io: volt.Io handle.

There are two primary ways to create tasks:

const volt = @import("volt");

pub fn main() !void {
    try volt.run(myApp);
}

fn myApp(io: volt.Io) !void {
    // 1. Async: launch a function as a concurrent task (most common)
    var f = try io.@"async"(myFunc, .{arg1, arg2});
    const result = f.@"await"(io);

    // 2. Concurrent: offload to the blocking thread pool (for CPU-bound work)
    var f2 = try io.concurrent(heavyCompute, .{data});
    const result2 = try f2.@"await"(io);
}
| Function | Use case | Runs on |
| --- | --- | --- |
| io.@"async"(fn, args) | General async work | Scheduler workers |
| io.concurrent(fn, args) | CPU-heavy or blocking I/O | Blocking pool threads (use .@"await"(io) for result) |

io.@"async" returns a volt.Future(T) — a handle to the async operation’s result:

pub fn main() !void {
    try volt.run(myApp);
}

fn myApp(io: volt.Io) !void {
    var f = try io.@"async"(compute, .{@as(i32, 5)});

    // Await the result (suspends this task, not the thread)
    const result = f.@"await"(io);
    _ = result;

    // Or cancel the operation
    // f.cancel(io);
}

When you have multiple concurrent tasks, use volt.Group for structured concurrency:

pub fn main() !void {
    try volt.run(myApp);
}

fn myApp(io: volt.Io) !void {
    var group = volt.Group.init(io);

    // Spawn tasks into the group
    _ = group.spawn(fetchUser, .{id});
    _ = group.spawn(fetchPosts, .{id});
    _ = group.spawn(fetchComments, .{id});

    // Wait for all tasks to complete
    group.wait();

    // Or cancel all remaining tasks
    // group.cancel();
}

Volt uses cooperative scheduling: tasks voluntarily yield control to the runtime by returning .pending from poll(). This is different from preemptive scheduling (like OS threads) where the scheduler forcibly interrupts execution.

Cooperative scheduling avoids the overhead of context switches, signal handling, and stack switching. But it comes with a responsibility: tasks must not block the worker thread.
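
To make the mechanism concrete, here is a conceptual sketch of a pollable future; the Poll and Countdown names are illustrative, not Volt's public API:

fn Poll(comptime T: type) type {
    return union(enum) {
        pending,
        ready: T,
    };
}

const Countdown = struct {
    remaining: u32,

    fn poll(self: *Countdown) Poll(u32) {
        if (self.remaining == 0) return .{ .ready = 0 };
        self.remaining -= 1;
        return .pending; // yield: the scheduler polls again after a wake
    }
};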

Bad patterns that block:

// BAD: Blocks the I/O worker thread
std.Thread.sleep(1_000_000_000);

// BAD: CPU-intensive loop without yielding
var i: u32 = 0;
var result: u64 = 0;
while (i < 1_000_000) : (i += 1) {
    result += expensiveComputation(i);
}

Good alternatives:

// GOOD: Async sleep (register with timer driver inside the runtime)
var slp = volt.time.sleep(volt.Duration.fromSecs(1));
// In an async context, register with the timer driver for automatic wakeup.
// For blocking contexts, use volt.time.blockingSleep(duration).
// GOOD: Offload to blocking pool
var f = try io.concurrent(batchCompute, .{data});
const result = try f.@"await"(io);

Even well-behaved tasks can accidentally starve others if they generate many sub-tasks. Volt enforces a cooperative budget of 128 polls per tick. After 128 polls, the worker forces the current task to yield, ensuring other tasks get CPU time.

This matches Tokio’s budget and is invisible to application code — you do not need to insert manual yield points.

When a task spawns a new child task, the child is placed in the worker’s LIFO slot for temporal locality (the child likely touches the same cache lines). However, the LIFO slot is capped at 3 consecutive polls per tick to prevent starvation of tasks in the FIFO queue.


Common pitfalls

1. Calling async methods without a runtime

Async operations require the io: volt.Io handle, which only exists inside a running runtime. The type system catches this at compile time.

// BAD: No runtime -- this won't compile
pub fn main() !void {
    var mutex = volt.sync.Mutex.init();
    mutex.lock(???); // No `io` handle available
}

// GOOD: Start a runtime first
pub fn main() !void {
    try volt.run(myApp);
}

fn myApp(io: volt.Io) !void {
    var mutex = volt.sync.Mutex.init();
    mutex.lock(io); // io handle available inside the runtime
    defer mutex.unlock();
}

2. Blocking the worker thread

std.Thread.sleep and CPU-heavy loops block the OS thread, starving all other tasks on that worker.

// BAD: Blocks the worker thread -- other tasks on this worker can't run
std.Thread.sleep(1_000_000_000);
// GOOD: Async sleep -- create and register with timer driver
var slp = volt.time.sleep(volt.Duration.fromSecs(1));
_ = slp; // Register with timer driver in async context
// GOOD: Offload CPU-heavy work to the blocking pool
var f = try io.concurrent(expensiveComputation, .{data});
const result = try f.@"await"(io);

3. Declaring futures as const

Futures are mutated when polled (.@"await" calls poll() internally). Declaring them as const prevents this.

// BAD: Won't compile -- @"await" mutates the future
const f = try io.@"async"(myFunc, .{});
const result = f.@"await"(io);
// GOOD: Use var so the future can be polled
var f = try io.@"async"(myFunc, .{});
const result = f.@"await"(io);

The Waker is the mechanism that connects async operations to the scheduler. When an I/O operation, timer, or sync primitive becomes ready, it calls waker.wake() to reschedule the waiting task.

The Waker is a lightweight callback (16 bytes) that reschedules a task when an async event fires. When an operation like mutex.lock(io) cannot complete immediately, the runtime stores a waker in the waiter struct. When the mutex is later unlocked, the waker fires and the task is rescheduled for polling. You never interact with wakers directly when using the convenience APIs like mutex.lock(io).
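
On 64-bit targets a waker can be as small as two pointers, which is where the 16-byte figure comes from. An illustrative layout (a sketch, not necessarily Volt's exact definition):

const Waker = struct {
    context: *anyopaque, // identifies the task to reschedule
    wakeFn: *const fn (*anyopaque) void, // enqueues the task on the scheduler

    fn wake(self: Waker) void {
        self.wakeFn(self.context);
    }
};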

For the full waker lifecycle, internal API, and detailed diagrams, see The Future Model.

A key design principle: waiter structs are embedded directly in the future, not heap-allocated. When you call mutex.lockFuture(), the returned LockFuture contains an intrusive list node. When the future is polled and the mutex is contended, that node is linked into the mutex’s waiter list — without any heap allocation.

This is why Volt achieves 282 B/op vs Tokio’s 1,868 B/op total across all benchmarks: the waiter storage is part of the future’s stack frame, not heap-allocated.
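
A conceptual sketch of the idea, reusing the illustrative Waker from above (the names are again illustrative, not Volt's exact types):

const WaiterNode = struct {
    next: ?*WaiterNode = null, // intrusive link into the mutex's waiter list
    waker: ?Waker = null, // stored when the future parks its task
};

const LockFuture = struct {
    node: WaiterNode = .{}, // lives inside the future itself; no heap allocation
    acquired: bool = false,
};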


For how Volt relates to Zig’s std.Io (in development, expected in 0.16) and the differences between the two approaches, see Stackless vs Stackful.


Every sync primitive and channel in Volt provides two tiers. Understanding the difference is critical because they have different runtime requirements.

These are pure atomic/lock-free operations that work anywhere — in main(), in a plain thread, in a library, without ever starting a runtime. They return immediately with a success/failure result.

const volt = @import("volt");

pub fn main() !void {
    // No volt.run(), no Io.init() -- just plain code.
    var mutex = volt.sync.Mutex.init();
    if (mutex.tryLock()) {
        defer mutex.unlock();
        // critical section
    }

    var sem = volt.sync.Semaphore.init(3);
    if (sem.tryAcquire(1)) {
        defer sem.release(1);
        // got permit
    }
}

Filesystem operations are also runtime-free — they are synchronous wrappers around OS calls:

// All of these work without a runtime
const data = try volt.fs.readFile(allocator, "config.json");
try volt.fs.writeFile("output.txt", "Hello!");
var file = try volt.fs.File.open("data.bin");
defer file.close();

Networking convenience functions (net.listen, TcpStream.tryRead, UdpSocket.tryRecv) also work without a runtime — they are non-blocking OS socket operations.

These take the io: volt.Io handle as a parameter and suspend the calling task (not the OS thread) until the operation completes. Because all Tier 2 operations require the explicit io parameter, the type system prevents misuse at compile time — you physically cannot call mutex.lock(io) without an io handle, and you cannot create an io handle without starting a runtime.

const volt = @import("volt");

pub fn main() !void {
    try volt.run(myApp); // Start the runtime, pass io to myApp
}

fn myApp(io: volt.Io) !void {
    var mutex = volt.sync.Mutex.init();

    // Blocking lock -- yields to scheduler if contended, returns when acquired
    mutex.lock(io);
    defer mutex.unlock();
    // critical section

    // Async task -- launch and await through the io handle
    // (must be var, not const: @"await" mutates the future)
    var f = try io.@"async"(myFunc, .{});
    const result = f.@"await"(io);
    _ = result;
}
A quick reference for which operations need a runtime:

| API | Runtime needed? | Notes |
| --- | --- | --- |
| mutex.tryLock(), sem.tryAcquire(), rwlock.tryReadLock() | No | Works anywhere |
| ch.trySend(), ch.tryRecv() | No | Works anywhere |
| fs.readFile(), File.open(), readDir() | No | Synchronous I/O — works anywhere |
| net.listen(), stream.tryRead(), stream.writeAll() | No | Blocking I/O — works anywhere |
| Duration, Instant, Instant.now() | No | Just time math |
| Mutex.init(), Semaphore.init(), Channel.init() | No | Just initialization |
| mutex.lock(io), sem.acquire(io, n), rwlock.readLock(io) | Yes | Convenience methods that take io and suspend the task (not the OS thread) until acquired |
| ch.send(io, val), ch.recv(io) | Yes | Convenience methods that take io and suspend the task until send/recv completes |
| io.@"async"(), io.concurrent() | Yes | Requires io: volt.Io — enforced by the type system at compile time |
| future.@"await"(io), future.cancel(io) | Yes (transitively) | Operates on Futures returned by io.@"async"() |
| volt.Group | Yes | Structured concurrency: .spawn(), .wait(), .cancel() |
| fs.readFileAsync(), AsyncFile | Yes | Requires runtime I/O backend |
| shutdown.Shutdown, signal.AsyncSignal | Yes | Requires runtime for async waiting |

Volt actually provides four levels of abstraction, not just two. Most developers only need the first two:

| Level | Example | When to use |
| --- | --- | --- |
| 1. Try (non-blocking) | mutex.tryLock(), ch.trySend(val) | Hot paths, polling loops, code that must not block. Works without a runtime. |
| 2. Convenience (async) | mutex.lock(io), ch.send(io, val) | Start here. Most async code uses this level. Takes io, yields the task. |
| 3. Future | mutex.lockFuture(), ch.sendFuture(val) | Manual future composition, combinators, custom scheduling logic. |
| 4. Waiter | ch.recvWait(&waiter) | Custom scheduler integration, raw intrusive list manipulation. Library authors only. |
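
As a sketch of Level 3, assuming (as the table suggests) that lockFuture() returns a future supporting the same .@"await"(io) as other futures; the lockExplicitly wrapper is illustrative:

const volt = @import("volt");

fn lockExplicitly(io: volt.Io, mutex: *volt.sync.Mutex) void {
    var lock_f = mutex.lockFuture(); // Level 3: obtain the future itself
    lock_f.@"await"(io); // then poll/compose it manually
    defer mutex.unlock();
    // critical section
}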
When to reach for which level:

| Situation | Use |
| --- | --- |
| Hot path, lock rarely contended | tryLock() |
| Must acquire, can yield | mutex.lock(io) |
| Polling a channel in a loop | tryRecv() |
| Waiting for the next message | ch.recv(io) |
| Checking semaphore availability | tryAcquire(n) |
| Rate limiting with backpressure | sem.acquire(io, n) |
| CLI tool, simple script | tryX() everywhere, no runtime needed |
| Server handling many connections | Use the runtime + io convenience APIs |
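
These levels compose. A common pattern, using only the tryLock() and lock(io) calls documented above (the withLock wrapper is illustrative):

const volt = @import("volt");

fn withLock(io: volt.Io, mutex: *volt.sync.Mutex) void {
    // Lock-free fast path first; suspend only under contention.
    if (!mutex.tryLock()) {
        mutex.lock(io); // contended: suspend this task until acquired
    }
    defer mutex.unlock();
    // critical section
}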

Summary of the key concepts:

| Concept | What it is |
| --- | --- |
| io.@"async" / .@"await" | Launch a function as a concurrent task, await its result. The primary API for async work. |
| volt.Future(T) | A handle to an async operation’s result. Supports .@"await"(io) and .cancel(io). |
| volt.Group | Structured concurrency: spawn multiple tasks, wait for all or cancel all. |
| io.concurrent | Offload CPU-bound or blocking work to the dedicated blocking pool. |
| Runtime | The engine: scheduler + I/O driver + blocking pool. Created via volt.run() or Io.init(). |
| Waker | Lightweight callback (16 bytes) that reschedules a task when an async event fires (internal). |
| Cooperative scheduling | Tasks yield voluntarily by returning .pending. Budget of 128 polls/tick prevents starvation. |
| Two-tier API | tryX() for non-blocking, x(io) for blocking convenience — available on all sync and channel types. |

Next: explore the Usage guides for detailed coverage of each module, or jump to the Cookbook for real-world patterns.