
Migration Guide

This guide walks through converting existing Zig code that uses blocking I/O and std.Thread to use Volt’s async runtime.

Blocking Model                        | Volt Model
--------------------------------------|---------------------------------------------
One OS thread per connection          | Many tasks share few worker threads
Thread blocks on syscall              | Task yields, worker runs other tasks
std.Thread.Mutex blocks the OS thread | volt.sync.Mutex yields to scheduler
16-64KB stack per thread              | 256-512 bytes per task (stackless future)
Thousands of concurrent connections   | Millions of concurrent tasks

The key insight: in Volt, “waiting” means the task is suspended and the worker thread is free to run other tasks. Nothing blocks.

Step 1: Wrap your entry point with volt.run()

Before (blocking):

pub fn main() !void {
    var server = try startServer(8080);
    defer server.stop();
    server.serve();
}

After (Volt):

const volt = @import("volt");

pub fn main() !void {
    try volt.run(server);
}

fn server(io: volt.Io) void {
    var listener = volt.net.listen("0.0.0.0:8080") catch return;
    defer listener.close();
    while (listener.tryAccept() catch null) |result| {
        _ = io.@"async"(handleClient, .{result.stream}) catch continue;
    }
}

volt.run() initializes the runtime (scheduler, I/O driver, timer wheel, blocking pool) and runs your function as the root task. When it returns, the runtime shuts down.

For more control over configuration:

pub fn main() !void {
    try volt.runWith(allocator, .{
        .num_workers = 4,
        .max_blocking_threads = 128,
    }, server);
}

Or manage the runtime manually:

pub fn main() !void {
    var io = try volt.Io.init(allocator, .{ .num_workers = 4 });
    defer io.deinit();
    try io.run(server);
}

Step 2: Replace std.Thread with io.@"async"

Before (OS threads):

var threads: [num_workers]std.Thread = undefined;
for (&threads) |*t| {
    t.* = try std.Thread.spawn(.{}, workerFn, .{shared_state});
}
for (threads) |t| t.join();

After (Volt tasks):

const volt = @import("volt");

fn runWorkers(io: volt.Io) void {
    // The future type depends on workerFn's signature; derive it with @TypeOf.
    var futures: [num_workers]@TypeOf(io.@"async"(workerFn, .{shared_state}) catch unreachable) = undefined;
    for (&futures) |*f| {
        f.* = io.@"async"(workerFn, .{shared_state}) catch return;
    }
    // Wait for all tasks by awaiting each future
    for (&futures) |*f| {
        _ = f.@"await"(io);
    }
}

Key differences:

  • Tasks are lightweight (~256 bytes) vs threads (~8MB default stack on Linux).
  • Tasks cooperatively yield; threads are preemptively scheduled by the OS.
  • @"async" returns a Future with @"await"(io), cancel(), and isDone().
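
A minimal sketch of that Future API in a hypothetical helper, reusing workerFn and shared_state from the example above; the exact signatures of cancel() and isDone() are assumed here, so treat this as illustrative:

fn spawnAndMaybeCancel(io: volt.Io) void {
    var fut = io.@"async"(workerFn, .{shared_state}) catch return;
    if (!fut.isDone()) {
        // Still running and no longer needed: request cancellation.
        fut.cancel();
        return;
    }
    // Already finished: collect the result.
    _ = fut.@"await"(io);
}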

Step 3: Replace std.Thread.Mutex with volt.sync.Mutex

Before (blocking mutex):

var mutex: std.Thread.Mutex = .{};
var shared: SharedState = .{};

fn worker() void {
    mutex.lock(); // Blocks the OS thread
    defer mutex.unlock();
    shared.update();
}

After (async-aware mutex):

var mutex = volt.sync.Mutex.init();
var shared: SharedState = .{};

fn worker() void {
    // Non-blocking attempt (preferred in hot paths)
    if (mutex.tryLock()) {
        defer mutex.unlock();
        shared.update();
        return;
    }
    // If contended, use the waiter-based API
    var waiter = volt.sync.mutex.Waiter.init();
    if (!mutex.lockWait(&waiter)) {
        // Task yields here -- worker thread runs other tasks
        // When the lock is released, waiter is woken
    }
    defer mutex.unlock();
    shared.update();
}

The tryLock() path is lock-free (CAS loop, no mutex). The waiter path only takes an internal mutex on the slow path. The key advantage: while your task waits for the lock, the worker thread is running other tasks instead of blocking.

For integration with the scheduler, use the sync convenience methods that take an Io handle:

fn worker(io: volt.Io) void {
    // Lock the mutex -- suspends the task until the lock is acquired
    mutex.lock(io);
    defer mutex.unlock();
    shared.update();
}

Step 4: Replace std.Thread.Condition with volt.sync.Notify

Before (blocking condition variable):

var mutex: std.Thread.Mutex = .{};
var cond: std.Thread.Condition = .{};
var ready = false;

fn producer() void {
    mutex.lock();
    ready = true;
    cond.signal();
    mutex.unlock();
}

fn consumer() void {
    mutex.lock();
    while (!ready) cond.wait(&mutex);
    mutex.unlock();
    // proceed
}

After (async Notify):

var notify = volt.sync.Notify.init();

fn producer() void {
    // Set your condition, then notify
    notify.notifyOne(); // Wake one waiter
    // or: notify.notifyAll(); // Wake all waiters
}

fn consumer() void {
    var waiter = volt.sync.notify.Waiter.init();
    notify.waitWith(&waiter);
    if (!waiter.isNotified()) {
        // Yield to scheduler -- will be woken by notifyOne/notifyAll
    }
    // proceed
}

Some operations are inherently blocking (DNS resolution, file I/O on some platforms, CPU-intensive computation). These must not run on async worker threads or they will starve other tasks.

Use the blocking pool via io.concurrent:

fn doResolve(io: volt.Io) !void {
    // Blocking DNS resolution on a separate OS thread
    const handle = try io.concurrent(struct {
        fn resolve(host: []const u8) !volt.net.Address {
            return volt.net.resolveFirst(std.heap.page_allocator, host, 443);
        }
    }.resolve, .{"example.com"});
    const addr = try handle.wait();
    _ = addr;
}

The blocking pool uses separate OS threads (up to max_blocking_threads, default 512) with idle timeout. Threads are spawned on demand and reclaimed after blocking_keep_alive_ns of inactivity.
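
A configuration sketch that sets both knobs, assuming blocking_keep_alive_ns lives in the same options struct as max_blocking_threads (allocator and server are from the earlier runWith example; the values are arbitrary):

pub fn main() !void {
    try volt.runWith(allocator, .{
        // Cap the blocking pool below the 512-thread default.
        .max_blocking_threads = 64,
        // Reclaim idle blocking threads after 10 seconds of inactivity.
        .blocking_keep_alive_ns = 10 * std.time.ns_per_s,
    }, server);
}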

Operation                                    | Blocking Pool? | Why
---------------------------------------------|----------------|------------------------------------------
DNS resolution (getaddrinfo)                 | Yes            | Blocking syscall
File I/O (no io_uring)                       | Yes            | Blocking syscall on some platforms
CPU-heavy computation (hashing, compression) | Yes            | Starves I/O workers
Memory allocation (large)                    | Maybe          | mmap can block
Sleep / delay                                | No             | Use volt.time.sleep() instead
Network I/O                                  | No             | Already async via kqueue/epoll/io_uring

If you were using a custom thread-safe queue:

Before:

var queue: ThreadSafeQueue(Task) = .{};
var mutex: std.Thread.Mutex = .{};

fn produce(item: Task) void {
    mutex.lock();
    defer mutex.unlock();
    queue.push(item);
}

After:

// Created once during setup (try and defer need a function body):
var ch = try volt.channel.bounded(Task, allocator, 1024);
defer ch.deinit();

fn produce(item: Task) void {
    switch (ch.trySend(item)) {
        .ok => {},
        .full => {}, // Handle backpressure
        .closed => {},
    }
}

fn consume() void {
    switch (ch.tryRecv()) {
        .value => |item| process(item),
        .empty => {},
        .closed => return,
    }
}

The Channel uses a Vyukov/crossbeam-style lock-free ring buffer. The trySend/tryRecv paths are fully lock-free (CAS on head/tail). Only the waiter lists use a mutex.
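
The migration checklist below also mentions the suspending variants channel.send(io, val) and channel.recv(io). A sketch of that style, assuming both return error unions (error handling and function names here are illustrative):

fn produceSuspending(io: volt.Io, item: Task) void {
    // Suspends the task (not the worker thread) while the channel is full.
    ch.send(io, item) catch return;
}

fn consumeSuspending(io: volt.Io) void {
    // Suspends until a value arrives or the channel is closed.
    const item = ch.recv(io) catch return;
    process(item);
}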

If you call std.Thread.sleep() or any blocking syscall directly from a task, the entire worker thread blocks. Other tasks on that worker stall.

// BAD: Blocks the worker thread
fn myTask() void {
    std.Thread.sleep(1_000_000_000); // 1 second -- DO NOT DO THIS
}

// GOOD: Use async sleep
fn myTask() void {
    volt.time.blockingSleep(volt.time.Duration.fromSecs(1));
}

// GOOD: Or offload to blocking pool
fn myTask(io: volt.Io) void {
    _ = io.concurrent(struct {
        fn work() void {
            std.Thread.sleep(1_000_000_000);
        }
    }.work, .{}) catch {};
}

When a task yields (returns .pending from a future), it may resume on a different worker thread. If it holds a std.Thread.Mutex, the unlock happens on a different thread than the lock — which is undefined behavior for OS mutexes.

Volt’s Mutex handles this correctly because it uses an internal lock-free state machine, not an OS mutex for the public API.
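
A sketch of the hazard for contrast (os_mutex and otherWork are placeholders; the await is the suspension point after which the task may be running on a different worker):

var os_mutex: std.Thread.Mutex = .{};

// BAD: an OS mutex held across a suspension point
fn badTask(io: volt.Io) void {
    os_mutex.lock();
    defer os_mutex.unlock(); // may execute on a different OS thread than lock() -- UB
    var fut = io.@"async"(otherWork, .{}) catch return;
    _ = fut.@"await"(io); // the task suspends here and can resume on another worker
}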

Channel and BroadcastChannel allocate heap memory for their ring buffers. Always deinit() them. Oneshot and Watch are zero-allocation and need no cleanup.

More workers means more contention on the global queue and more memory. Start with the default (CPU count) and measure before adding more.

  • Wrap entry point with volt.run() or volt.runWith()
  • Replace std.Thread.spawn with io.@"async" (returns a Future)
  • Replace thread.join() with f.@"await"(io)
  • Replace JoinHandle types with Future
  • Replace std.Thread.Mutex with volt.sync.Mutex (use mutex.lock(io) convenience)
  • Replace condition variables with volt.sync.Notify
  • Move blocking operations to io.concurrent (renamed from spawnBlocking)
  • Remove io.spawnFuture calls — use sync convenience methods instead
  • Remove volt.async_ops imports — use io.@"async" and direct patterns instead
  • Replace custom queues with volt.channel.bounded (use channel.send(io, val) / channel.recv(io))
  • Replace std.Thread.sleep with volt.time.blockingSleep
  • Add defer ch.deinit() for all Channel and BroadcastChannel instances
  • Run zig build test-all to verify correctness