# Migration Guide
This guide walks through converting existing Zig code that uses blocking I/O and std.Thread to use Volt’s async runtime.
## Conceptual Differences

| Blocking Model | Volt Model |
|---|---|
| One OS thread per connection | Many tasks share few worker threads |
| Thread blocks on syscall | Task yields, worker runs other tasks |
| `std.Thread.Mutex` blocks the OS thread | `volt.sync.Mutex` yields to scheduler |
| 16-64KB stack per thread | 256-512 bytes per task (stackless future) |
| Thousands of concurrent connections | Millions of concurrent tasks |
The key insight: in Volt, “waiting” means the task is suspended and the worker thread is free to run other tasks. Nothing blocks.
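To make that concrete, here is a minimal sketch spawning far more tasks than worker threads, reusing the `io.@"async"` API shown in the steps below (`handleOne` is an illustrative stand-in):

```zig
// Ten thousand concurrent tasks on a handful of worker threads:
// each task that waits is suspended, not its worker thread.
fn demo(io: volt.Io) void {
    var i: usize = 0;
    while (i < 10_000) : (i += 1) {
        _ = io.@"async"(handleOne, .{i}) catch break;
    }
}
```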
## Step 1: Wrap Your Entry Point

Before (blocking):

```zig
pub fn main() !void {
    var server = try startServer(8080);
    defer server.stop();
    server.serve();
}
```

After (Volt):

```zig
const volt = @import("volt");

pub fn main() !void {
    try volt.run(server);
}

fn server(io: volt.Io) void {
    var listener = volt.net.listen("0.0.0.0:8080") catch return;
    defer listener.close();

    while (listener.tryAccept() catch null) |result| {
        _ = io.@"async"(handleClient, .{result.stream}) catch continue;
    }
}
```

`volt.run()` initializes the runtime (scheduler, I/O driver, timer wheel, blocking pool) and runs your function as the root task. When it returns, the runtime shuts down.
For more control over configuration:
```zig
pub fn main() !void {
    try volt.runWith(allocator, .{
        .num_workers = 4,
        .max_blocking_threads = 128,
    }, server);
}
```

Or manage the runtime manually:

```zig
pub fn main() !void {
    var io = try volt.Io.init(allocator, .{ .num_workers = 4 });
    defer io.deinit();
    try io.run(server);
}
```

## Step 2: Replace std.Thread with io.@"async"
Before (OS threads):

```zig
var threads: [num_workers]std.Thread = undefined;
for (&threads) |*t| {
    t.* = try std.Thread.spawn(.{}, workerFn, .{shared_state});
}
for (threads) |t| t.join();
```

After (Volt tasks):

```zig
const volt = @import("volt");

fn runWorkers(io: volt.Io) void {
    var futures: [num_workers]@TypeOf(io.@"async"(workerFn, .{shared_state}) catch unreachable) = undefined;

    for (&futures) |*f| {
        f.* = io.@"async"(workerFn, .{shared_state}) catch return;
    }

    // Wait for all tasks to complete
    // (volt.Group offers an alternative for managing groups of tasks)
    for (&futures) |*f| {
        _ = f.@"await"(io);
    }
}
```

Key differences:
- Tasks are lightweight (~256 bytes) vs threads (~8MB default stack on Linux).
- Tasks cooperatively yield; threads are preemptively scheduled by the OS.
@"async"returns aFuturewith@"await"(io),cancel(), andisDone().
## Step 3: Replace std.Thread.Mutex with volt.sync.Mutex

Before (blocking mutex):

```zig
var mutex: std.Thread.Mutex = .{};
var shared: SharedState = .{};

fn worker() void {
    mutex.lock(); // Blocks the OS thread
    defer mutex.unlock();
    shared.update();
}
```

After (async-aware mutex):

```zig
var mutex = volt.sync.Mutex.init();
var shared: SharedState = .{};

fn worker() void {
    // Non-blocking attempt (preferred in hot paths)
    if (mutex.tryLock()) {
        defer mutex.unlock();
        shared.update();
        return;
    }

    // If contended, use the waiter-based API
    var waiter = volt.sync.mutex.Waiter.init();
    if (!mutex.lockWait(&waiter)) {
        // Task yields here -- worker thread runs other tasks.
        // When the lock is released, the waiter is woken.
    }
    defer mutex.unlock();
    shared.update();
}
```

The `tryLock()` path is lock-free (a CAS loop, no mutex). The waiter path only takes an internal mutex on the slow path. The key advantage: while your task waits for the lock, the worker thread is running other tasks instead of blocking.
### Using the Convenience API

For integration with the scheduler, use the sync convenience methods that take an `Io` handle:

```zig
// Lock the mutex -- suspends the task until the lock is acquired
mutex.lock(io);
defer mutex.unlock();
shared.update();
```

## Step 4: Replace std.Thread.Condition with volt.sync.Notify
Before (blocking condition variable):

```zig
var mutex: std.Thread.Mutex = .{};
var cond: std.Thread.Condition = .{};
var ready = false;

fn producer() void {
    mutex.lock();
    ready = true;
    cond.signal();
    mutex.unlock();
}

fn consumer() void {
    mutex.lock();
    while (!ready) cond.wait(&mutex);
    mutex.unlock();
    // proceed
}
```

After (async Notify):
```zig
var notify = volt.sync.Notify.init();

fn producer() void {
    // Set your condition, then notify
    notify.notifyOne(); // Wake one waiter
    // or: notify.notifyAll(); // Wake all waiters
}

fn consumer() void {
    var waiter = volt.sync.notify.Waiter.init();
    notify.waitWith(&waiter);
    if (!waiter.isNotified()) {
        // Yield to scheduler -- will be woken by notifyOne/notifyAll
    }
    // proceed
}
```
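Note that `Notify` carries no payload, so the condition itself still has to live somewhere. A minimal sketch pairing it with an atomic flag, assuming the `Waiter` API behaves as in the snippet above:

```zig
const std = @import("std");
const volt = @import("volt");

var ready = std.atomic.Value(bool).init(false);
var notify = volt.sync.Notify.init();

fn producer() void {
    ready.store(true, .release); // Set the condition first
    notify.notifyOne(); // Then wake a waiter
}

fn consumer() void {
    while (!ready.load(.acquire)) {
        var waiter = volt.sync.notify.Waiter.init();
        notify.waitWith(&waiter);
        if (!waiter.isNotified()) {
            // Task yields here until the producer calls notifyOne/notifyAll
        }
    }
    // Condition holds; proceed
}
```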
## Step 5: Handle Blocking Operations

Some operations are inherently blocking (DNS resolution, file I/O on some platforms, CPU-intensive computation). These must not run on async worker threads, or they will starve other tasks.
Use the blocking pool via `io.concurrent`:
```zig
fn doResolve(io: volt.Io) !void {
    // Blocking DNS resolution on a separate OS thread
    const handle = try io.concurrent(struct {
        fn resolve(host: []const u8) !volt.net.Address {
            return volt.net.resolveFirst(std.heap.page_allocator, host, 443);
        }
    }.resolve, .{"example.com"});

    const addr = try handle.wait();
    _ = addr;
}
```

The blocking pool uses separate OS threads (up to `max_blocking_threads`, default 512) with an idle timeout. Threads are spawned on demand and reclaimed after `blocking_keep_alive_ns` of inactivity.
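Both limits should be tunable at startup. A sketch, assuming `blocking_keep_alive_ns` is accepted alongside `max_blocking_threads` in the `runWith` options (only `max_blocking_threads` appears in the examples above; `allocator` and `server` are as in Step 1):

```zig
pub fn main() !void {
    try volt.runWith(allocator, .{
        .max_blocking_threads = 64, // Cap the blocking pool
        // Assumed option name, after the keep-alive described above:
        .blocking_keep_alive_ns = 10 * std.time.ns_per_s,
    }, server);
}
```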
## What Belongs in the Blocking Pool

| Operation | Blocking Pool? | Why |
|---|---|---|
| DNS resolution (getaddrinfo) | Yes | Blocking syscall |
| File I/O (no io_uring) | Yes | Blocking syscall on some platforms |
| CPU-heavy computation (hashing, compression) | Yes | Starves I/O workers |
| Memory allocation (large) | Maybe | mmap can block |
| Sleep / delay | No | Use `volt.time.sleep()` instead |
| Network I/O | No | Already async via kqueue/epoll/io_uring |
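For example, CPU-heavy hashing can be offloaded the same way as the DNS example above. A sketch, with `std.hash.Wyhash` standing in for any expensive computation:

```zig
fn hashLargeInput(io: volt.Io, data: []const u8) !u64 {
    // Run the CPU-bound hash on the blocking pool so the
    // I/O workers keep serving other tasks meanwhile
    const handle = try io.concurrent(struct {
        fn hash(bytes: []const u8) u64 {
            return std.hash.Wyhash.hash(0, bytes);
        }
    }.hash, .{data});

    return handle.wait();
}
```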
## Step 6: Replace Channels

If you were using a custom thread-safe queue:
Before:
```zig
var queue: ThreadSafeQueue(Task) = .{};
var mutex: std.Thread.Mutex = .{};

fn produce(item: Task) void {
    mutex.lock();
    defer mutex.unlock();
    queue.push(item);
}
```

After:

```zig
var ch = try volt.channel.bounded(Task, allocator, 1024);
defer ch.deinit();

fn produce(item: Task) void {
    switch (ch.trySend(item)) {
        .ok => {},
        .full => {}, // Handle backpressure
        .closed => {},
    }
}

fn consume() void {
    switch (ch.tryRecv()) {
        .value => |item| process(item),
        .empty => {},
        .closed => return,
    }
}
```

The `Channel` uses a Vyukov/crossbeam-style lock-free ring buffer. The `trySend`/`tryRecv` paths are fully lock-free (CAS on head/tail). Only the waiter lists use a mutex.
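The `trySend`/`tryRecv` calls above never suspend. If you want backpressure to pause the producer instead, the checklist below mentions `channel.send(io, val)` and `channel.recv(io)` convenience methods. A sketch, assuming they suspend the task on a full or empty channel and return an error once it is closed:

```zig
fn produceBlocking(io: volt.Io, item: Task) void {
    // Suspends this task while the channel is full (backpressure),
    // instead of returning .full like trySend does
    ch.send(io, item) catch return; // assumed to fail once closed
}

fn consumeBlocking(io: volt.Io) void {
    // Suspends while the channel is empty
    while (ch.recv(io)) |item| {
        process(item);
    } else |_| {
        // Channel closed and drained
    }
}
```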
## Common Pitfalls

### 1. Calling blocking code on async workers

If you call `std.Thread.sleep()` or any blocking syscall directly from a task, the entire worker thread blocks. Other tasks on that worker stall.
```zig
// BAD: Blocks the worker thread
fn myTask() void {
    std.Thread.sleep(1_000_000_000); // 1 second -- DO NOT DO THIS
}
```

```zig
// GOOD: Use async sleep
fn myTask() void {
    volt.time.blockingSleep(volt.time.Duration.fromSecs(1));
}
```

```zig
// GOOD: Or offload to blocking pool
fn myTask(io: volt.Io) void {
    _ = io.concurrent(struct {
        fn work() void {
            std.Thread.sleep(1_000_000_000);
        }
    }.work, .{}) catch {};
}
```

### 2. Holding locks across yield points
When a task yields (returns `.pending` from a future), it may resume on a different worker thread. If it holds a `std.Thread.Mutex`, the unlock happens on a different thread than the lock, which is undefined behavior for OS mutexes.
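A sketch of the hazard, using the `Future` API from Step 2 (`otherWork` is a stand-in; the `@"await"` call is the yield point):

```zig
var os_mutex: std.Thread.Mutex = .{};

// BAD: lock and unlock may run on different OS threads
fn myTask(io: volt.Io) void {
    os_mutex.lock();
    defer os_mutex.unlock(); // May execute on a different worker thread
    var f = io.@"async"(otherWork, .{}) catch return;
    _ = f.@"await"(io); // Yield point: the task can migrate workers here
}
```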
Volt's `Mutex` handles this correctly: its public API is built on an internal lock-free state machine rather than an OS mutex.
### 3. Forgetting to deinit channels

`Channel` and `BroadcastChannel` allocate heap memory for their ring buffers. Always `deinit()` them. `Oneshot` and `Watch` are zero-allocation and need no cleanup.
### 4. Using too many workers

More workers means more contention on the global queue and more memory. Start with the default (CPU count) and measure before adding more.
## Migration Checklist

- Wrap entry point with `volt.run()` or `volt.runWith()`
- Replace `std.Thread.spawn` with `io.@"async"` (returns a `Future`)
- Replace `thread.join()` with `f.@"await"(io)`
- Replace `JoinHandle` types with `Future`
- Replace `std.Thread.Mutex` with `volt.sync.Mutex` (use the `mutex.lock(io)` convenience)
- Replace condition variables with `volt.sync.Notify`
- Move blocking operations to `io.concurrent` (renamed from `spawnBlocking`)
- Remove `io.spawnFuture` calls; use sync convenience methods instead
- Remove `volt.async_ops` imports; use `io.@"async"` and direct patterns instead
- Replace custom queues with `volt.channel.bounded` (use `channel.send(io, val)` / `channel.recv(io)`)
- Replace `std.Thread.sleep` with `volt.time.blockingSleep`
- Add `defer ch.deinit()` for all `Channel` and `BroadcastChannel` instances
- Run `zig build test-all` to verify correctness