# Runtime

The `Io` handle is the primary entry point for Volt. It owns the runtime — the work-stealing scheduler, the blocking thread pool, and the I/O driver. Create it explicitly with `init`/`deinit`, like an `Allocator`.
## Explicit pattern (recommended)

Create `Io` explicitly — you control the allocator, configuration, and lifecycle:
```zig
const std = @import("std");
const volt = @import("volt");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    var io = try volt.Io.init(gpa.allocator(), .{
        .num_workers = 4,
        .max_blocking_threads = 128,
        .blocking_keep_alive_ns = 30 * std.time.ns_per_s,
    });
    defer io.deinit();

    try io.run(server);
}

fn server(io: volt.Io) void {
    // This runs inside the runtime.
    // All Volt APIs (net, sync, channel, time) are available here.
    _ = io;
}
```

This is the recommended pattern for production use. You get full control over memory (any `std.mem.Allocator` works) and can detect leaks with GPA.
## Zero-config shorthand

For quick scripts and prototyping, `volt.run()` creates an `Io` handle with sensible defaults (`page_allocator`, auto-detected worker count) and cleans up automatically:
```zig
const volt = @import("volt");

pub fn main() !void {
    try volt.run(server);
}

fn server(io: volt.Io) void {
    _ = io; // This runs inside the runtime.
}
```

For custom configuration without managing `Io` yourself, use `volt.runWith`:
```zig
try volt.runWith(gpa.allocator(), .{
    .num_workers = 4,
}, server);
```

## Config fields
Section titled “Config fields”| Field | Type | Default | Description |
|---|---|---|---|
num_workers | usize | 0 (auto) | I/O worker thread count. 0 means one per logical CPU. |
max_blocking_threads | usize | 512 | Upper limit on blocking pool threads. |
blocking_keep_alive_ns | u64 | 10 seconds | Idle blocking threads exit after this duration. |
backend | ?BackendType | null (auto) | Force a specific I/O backend (io_uring, kqueue, epoll, IOCP). |
When `num_workers` is `0`, the runtime queries the OS for the number of logical CPUs and creates one worker thread per core.
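For example, to pin the backend while keeping auto-detected workers (a sketch; `.io_uring` is inferred from the `BackendType` options in the table, and `allocator` is assumed to be in scope):

```zig
var io = try volt.Io.init(allocator, .{
    .num_workers = 0, // auto: one worker per logical CPU
    .backend = .io_uring, // force io_uring instead of auto-detection
});
defer io.deinit();
```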
## Advanced: Direct Runtime access

For advanced use cases (custom schedulers, library integration), you can access the underlying `Runtime` through `Io`:
```zig
const std = @import("std");
const volt = @import("volt");

pub fn main() !void {
    var io = try volt.Io.init(std.heap.page_allocator, .{
        .num_workers = 2,
    });
    defer io.deinit();

    try io.run(myApp);
}

fn myApp(io: volt.Io) !void {
    // Access the underlying runtime if needed
    const scheduler = io.runtime.getScheduler();
    _ = scheduler;
}
```

## Async primitives
Sync primitives accept the `io` handle directly for async acquisition. No manual future spawning is needed:
```zig
var mutex = volt.sync.Mutex.init();

// Acquire the mutex asynchronously -- suspends until lock is held
mutex.lock(io);
defer mutex.unlock();
```

The `io` handle lets the primitive yield to the scheduler when contended and resume the calling task when the resource becomes available.
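As a sketch of how this composes with tasks, two tasks can contend for shared state without blocking OS threads (the counter and task function are illustrative, not part of Volt's API):

```zig
var mutex = volt.sync.Mutex.init();
var counter: u64 = 0;

fn increment(io: volt.Io) void {
    // Suspends this *task* if the lock is held; the worker thread
    // moves on to other tasks in the meantime.
    mutex.lock(io);
    defer mutex.unlock();
    counter += 1;
}
```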
## Offloading blocking work

CPU-intensive or legacy blocking I/O should run on the blocking pool so the async workers stay responsive:
```zig
var f = try io.concurrent(computeHash, .{data});
const hash = try f.@"await"(io);
```

Blocking pool threads are created on demand (up to `max_blocking_threads`) and reclaimed after the keep-alive timeout.
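A fuller sketch, assuming a CPU-bound `computeHash` defined by the application (the function and handler names are illustrative; only `io.concurrent` and `@"await"` are Volt API):

```zig
const std = @import("std");
const volt = @import("volt");

// CPU-bound work: runs on a blocking-pool thread, not an async worker.
fn computeHash(data: []const u8) u64 {
    return std.hash.Wyhash.hash(0, data);
}

fn handler(io: volt.Io) !void {
    const data = "a large payload";
    // Offload; this async worker stays free to run other tasks.
    var f = try io.concurrent(computeHash, .{data});
    // Suspend this task until the blocking work finishes.
    const hash = try f.@"await"(io);
    std.debug.print("hash: {x}\n", .{hash});
}
```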
## Task spawning from within async context

Inside an async context (from functions passed to `io.run` or spawned futures), use the `io` handle:
```zig
const volt = @import("volt");

pub fn main() !void {
    try volt.run(myApp);
}

fn myApp(io: volt.Io) !void {
    // Spawn concurrent async tasks.
    // `@"async"` uses Zig's identifier quoting (`async` is a reserved keyword).
    var user_f = try io.@"async"(fetchUser, .{user_id});
    var posts_f = try io.@"async"(fetchPosts, .{user_id});

    // Await both results
    const user = user_f.@"await"(io);
    const posts = posts_f.@"await"(io);

    // Use results...
    _ = user;
    _ = posts;
}
```

## Available task functions
| Function | Returns | Description |
|---|---|---|
| `io.@"async"(func, args)` | `volt.Future(T)` | Spawn an async task; returns a future |
| `f.@"await"(io)` | `T` | Await a future's result |
| `io.concurrent(func, args)` | `!ConcurrentFuture(T)` | Run on the blocking thread pool; call `.@"await"(io)` for the result |
## Shutdown and cleanup

Call `io.deinit()` to shut down the runtime. This:
- Sets the shutdown flag (atomic store).
- Stops the blocking pool (joins idle threads, waits for active ones).
- Stops the scheduler (signals workers, joins threads, frees task memory).
- Frees the runtime allocation (if the `Io` handle owns it).
Always use `defer io.deinit()` immediately after `init` to guarantee cleanup even on error paths:

```zig
var io = try volt.Io.init(allocator, .{});
defer io.deinit();
```

For servers that need to drain in-flight requests before exiting, see Signals & Shutdown.
## Thread-local runtime access

Inside a runtime, the current `Runtime` pointer is stored in a thread-local variable. Access it with:
```zig
const runtime_mod = @import("volt").internal.runtime;

// Returns ?*Runtime -- null if not inside a runtime context.
const rt = runtime_mod.getRuntime();

// Panics if not inside a runtime context.
const rt2 = runtime_mod.runtime();
```

This is primarily useful for library code that needs to access the scheduler or blocking pool without threading the runtime through every function signature.
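For instance, library code might degrade gracefully when called outside a runtime context (a hypothetical helper; only `getRuntime` is from the snippet above):

```zig
const runtime_mod = @import("volt").internal.runtime;

/// Illustrative helper: true when called from inside a Volt runtime.
fn insideRuntime() bool {
    return runtime_mod.getRuntime() != null;
}
```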
## Architecture at a glance

```
main() thread
      |
      v
Io.init(allocator, config)
      |-- Scheduler (N worker threads, work-stealing deques)
      |-- BlockingPool (on-demand threads, up to max_blocking_threads)
      |-- I/O Driver (platform backend: io_uring / kqueue / epoll / IOCP)
      |
      v
io.run(myApp) --> @"async" --> @"await"
      |
      v
io.deinit()
```

Each worker thread runs a tight loop: poll the local deque, steal from siblings, check the global queue, poll I/O, advance timers. Tasks are stackless futures weighing approximately 256-512 bytes, enabling millions of concurrent tasks.
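The worker loop described above can be sketched as pseudocode (all names here are hypothetical, not Volt's actual internals):

```zig
// Pseudocode for one worker thread (illustrative names only).
while (!shutdown_flag.load(.acquire)) {
    if (local_deque.pop()) |task| { task.run(); continue; }
    if (stealFromSibling()) |task| { task.run(); continue; }
    if (global_queue.pop()) |task| { task.run(); continue; }
    io_driver.poll(next_timer_deadline); // block briefly on I/O readiness
    timers.advance();
}
```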