v1.0.0-zig0.15.2

Sync Primitives API

When multiple tasks need to share state or coordinate work, you reach for sync primitives. Volt provides six of them, each designed for a specific pattern you’ll hit in real applications. This page covers the three locking and limiting primitives:

  • Mutex — protect a shared resource (session cache, counters, connection state)
  • RwLock — let many readers proceed in parallel, block only for writes (config stores, routing tables)
  • Semaphore — limit concurrency to N (connection pools, rate limiters, batch size caps)

For coordination primitives (Notify, Barrier, OnceCell), see the Coordination API.

All primitives are async-aware: when contended, they yield to the scheduler instead of blocking the OS thread. They are also zero-allocation — waiter structs are embedded in futures, not heap-allocated. No deinit() needed.

const volt = @import("volt");

fn example(io: volt.Io) void {
    // Protect shared state with a Mutex (convenience -- takes io, suspends until acquired)
    var mutex = volt.sync.Mutex.init();
    {
        mutex.lock(io);
        defer mutex.unlock();
        // critical section -- only one task at a time
    }

    // Or non-blocking:
    if (mutex.tryLock()) {
        defer mutex.unlock();
        // critical section
    }

    // Limit concurrent DB connections with a Semaphore
    var pool = volt.sync.Semaphore.init(10); // 10 permits
    pool.acquire(io, 1); // suspends until permit available
    defer pool.release(1);
    // use one connection slot

    // Lazy-init a singleton with OnceCell
    var db = volt.sync.OnceCell(DbPool).init();
    const p = db.getOrInit(io, createPool); // first caller inits, rest get cached (~0.4ns)
    _ = p;
}

Each primitive provides four API tiers:

| Tier | Pattern | Behavior |
| --- | --- | --- |
| Non-blocking | tryX() | Returns immediately (lock-free CAS) |
| Convenience | x(io) | Takes the Io handle, suspends until complete |
| Async Future | xFuture() | Returns a Future for manual scheduling |
| Waiter | xWait(waiter) | Low-level; caller manages the waiter lifecycle |

The convenience tier (takes io) is recommended for most use cases. It suspends the calling task until the operation completes, keeping the code sequential and readable. The Future tier is for advanced patterns like spawning a lock acquisition as a separate task or composing with other futures.
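
As a quick side-by-side, here is the same Mutex acquisition expressed in each tier. This is an illustrative sketch: each block is independent, and how you poll or await the returned Future is runtime-specific and not shown here.

const volt = @import("volt");

fn tierExamples(io: volt.Io, mutex: *volt.sync.Mutex) void {
    // Tier 1 -- non-blocking: succeeds or fails immediately, never suspends.
    if (mutex.tryLock()) {
        defer mutex.unlock();
        // fast path: got the lock without waiting
    }

    // Tier 2 -- convenience: suspends this task until the lock is held.
    mutex.lock(io);
    mutex.unlock();

    // Tier 3 -- Async Future: acquisition as a composable value
    // (polling/awaiting it is not shown; see the lockFuture notes below).
    var fut = mutex.lockFuture();
    defer fut.deinit();

    // Tier 4 -- Waiter: lowest level; see the lockWait example below.
}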

Mutex

Mutual exclusion lock. Built on Semaphore(1).

const Mutex = volt.sync.Mutex;
var mutex = Mutex.init();

| Method | Signature | Description |
| --- | --- | --- |
| tryLock | fn tryLock(self: *Mutex) bool | Non-blocking lock attempt. Returns true if acquired. |
| unlock | fn unlock(self: *Mutex) void | Release the lock. Wakes the next waiter (FIFO). |
| lock | fn lock(self: *Mutex, io: volt.Io) void | Convenience: acquire the lock, suspending until available. Takes the Io handle. |
| lockFuture | fn lockFuture(self: *Mutex) LockFuture | Returns a Future that resolves when the lock is acquired. |
| lockWait | fn lockWait(self: *Mutex, waiter: *Waiter) bool | Waiter-based lock. Returns true if acquired immediately. |
| cancelLock | fn cancelLock(self: *Mutex, waiter: *Waiter) void | Remove a waiter from the queue. |
| tryLockGuard | fn tryLockGuard(self: *Mutex) ?MutexGuard | Non-blocking lock with RAII guard. |
| isLocked | fn isLocked(self: *const Mutex) bool | Debug: check if locked. |
| waiterCount | fn waiterCount(self: *Mutex) usize | Debug: number of queued waiters. |

The simplest way to acquire a mutex in an async task. Suspends the current task until the lock is available, then returns. The caller must call unlock() when done.

fn updateCounter(io: volt.Io, mutex: *volt.sync.Mutex, counter: *u64) void {
    mutex.lock(io); // suspends if contended
    defer mutex.unlock();
    counter.* += 1;
}
var future = mutex.lockFuture();
defer future.deinit();
// Output: void. Resolves when lock is acquired.
// Caller must call mutex.unlock() after use.

LockFuture implements the Future trait (pub const Output = void). Use lockFuture() when you need to spawn the lock acquisition as a separate task or compose it with other futures.

const Waiter = volt.sync.mutex.Waiter;
var waiter = Waiter.init();
waiter.setWaker(@ptrCast(&ctx), wakeFn);
if (mutex.lockWait(&waiter)) {
    // Acquired immediately
} else {
    // Queued -- yield and wait for wake
}
// After wake: waiter.isAcquired() == true
defer mutex.unlock();

if (mutex.tryLockGuard()) |guard| {
    var g = guard;
    defer g.deinit(); // Automatically unlocks
}

A counter shared across multiple tasks. Each task increments the counter under the mutex, ensuring no updates are lost. The convenience lock(io) method suspends until the lock is available.

const std = @import("std");
const volt = @import("volt");

const SharedCounter = struct {
    mutex: volt.sync.Mutex,
    value: u64,

    fn init() SharedCounter {
        return .{
            .mutex = volt.sync.Mutex.init(),
            .value = 0,
        };
    }

    /// Increment the counter by 1, returning the previous value.
    /// Uses lock(io) to suspend until the lock is acquired.
    fn increment(self: *SharedCounter, io: volt.Io) u64 {
        self.mutex.lock(io);
        defer self.mutex.unlock();
        const prev = self.value;
        self.value += 1;
        return prev;
    }

    /// Non-blocking read for contexts where you cannot suspend.
    fn tryGet(self: *SharedCounter) ?u64 {
        if (self.mutex.tryLock()) {
            defer self.mutex.unlock();
            return self.value;
        }
        return null;
    }
};

var counter = SharedCounter.init();

fn workerTask(io: volt.Io) void {
    const prev = counter.increment(io);
    std.log.info("counter was {d}, now {d}", .{ prev, prev + 1 });
}

Protecting a HashMap with a Mutex so multiple tasks can read and write cached values. The RAII guard ensures the lock is always released, even if lookup logic returns early.

const std = @import("std");
const volt = @import("volt");

const SessionCache = struct {
    mutex: volt.sync.Mutex,
    // The map itself is not thread-safe, so all access goes through the mutex.
    sessions: std.StringHashMap(SessionData),

    const SessionData = struct {
        user_id: u64,
        expires_at: i64,
    };

    fn init(allocator: std.mem.Allocator) SessionCache {
        return .{
            .mutex = volt.sync.Mutex.init(),
            .sessions = std.StringHashMap(SessionData).init(allocator),
        };
    }

    fn deinit(self: *SessionCache) void {
        self.sessions.deinit();
    }

    /// Look up a session by token. Returns null if not found or expired.
    fn lookup(self: *SessionCache, token: []const u8, now: i64) ?SessionData {
        // tryLockGuard returns an RAII guard that auto-unlocks on scope exit.
        if (self.mutex.tryLockGuard()) |guard| {
            var g = guard;
            defer g.deinit();
            const entry = self.sessions.get(token) orelse return null;
            if (entry.expires_at < now) return null; // Expired
            return entry;
        }
        return null; // Lock contended, caller can retry
    }

    /// Store a session. Uses lock(io) to suspend until lock is available.
    fn store(self: *SessionCache, io: volt.Io, token: []const u8, data: SessionData) !void {
        self.mutex.lock(io);
        defer self.mutex.unlock();
        try self.sessions.put(token, data);
    }
};

RwLock

Read-write lock. Multiple concurrent readers OR one exclusive writer. Built on Semaphore(MAX_READS).

const RwLock = volt.sync.RwLock;
var rwlock = RwLock.init();

| Method | Signature | Description |
| --- | --- | --- |
| tryReadLock | fn tryReadLock(self: *RwLock) bool | Non-blocking read lock. |
| readUnlock | fn readUnlock(self: *RwLock) void | Release read lock. |
| readLock | fn readLock(self: *RwLock, io: volt.Io) void | Convenience: acquire read lock, suspending until available. |
| readLockFuture | fn readLockFuture(self: *RwLock) ReadLockFuture | Returns a Future for read lock acquisition. |
| readLockWait | fn readLockWait(self: *RwLock, waiter: *ReadWaiter) bool | Waiter-based read lock. |
| cancelReadLock | fn cancelReadLock(self: *RwLock, waiter: *ReadWaiter) void | Cancel pending read lock. |
| tryWriteLock | fn tryWriteLock(self: *RwLock) bool | Non-blocking write lock. |
| writeUnlock | fn writeUnlock(self: *RwLock) void | Release write lock. |
| writeLock | fn writeLock(self: *RwLock, io: volt.Io) void | Convenience: acquire write lock, suspending until available. |
| writeLockFuture | fn writeLockFuture(self: *RwLock) WriteLockFuture | Returns a Future for write lock acquisition. |
| writeLockWait | fn writeLockWait(self: *RwLock, waiter: *WriteWaiter) bool | Waiter-based write lock. |
| cancelWriteLock | fn cancelWriteLock(self: *RwLock, waiter: *WriteWaiter) void | Cancel pending write lock. |
| tryReadLockGuard | fn tryReadLockGuard(self: *RwLock) ?ReadGuard | Non-blocking read lock with guard. |
| tryWriteLockGuard | fn tryWriteLockGuard(self: *RwLock) ?WriteGuard | Non-blocking write lock with guard. |
| isWriteLocked | fn isWriteLocked(self: *RwLock) bool | Debug: check if write-locked. |
| getReaderCount | fn getReaderCount(self: *RwLock) usize | Debug: number of active readers. |
| waitingReaders | fn waitingReaders(self: *RwLock) usize | Debug: queued reader count (O(n)). |
| waitingWriters | fn waitingWriters(self: *RwLock) usize | Debug: queued writer count (O(n)). |

The simplest way to acquire a read or write lock in an async task. Suspends the current task until the lock is available.

fn readConfig(io: volt.Io, rwlock: *volt.sync.RwLock, config: *const AppConfig) AppConfig {
    rwlock.readLock(io); // suspends if a writer is active
    defer rwlock.readUnlock();
    return config.*;
}

fn updateConfig(io: volt.Io, rwlock: *volt.sync.RwLock, config: *AppConfig, new: AppConfig) void {
    rwlock.writeLock(io); // suspends until all readers and writers release
    defer rwlock.writeUnlock();
    config.* = new;
}

Writers have natural priority: a queued writer drains semaphore permits toward zero, causing tryReadLock() to fail for new readers. Waiters are served FIFO, so the writer runs before any reader queued behind it.
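
A sketch of that scenario with three hypothetical tasks (readerA, writer, and readerB are illustrative names; how the tasks are spawned is up to your runtime and omitted here):

const volt = @import("volt");

// readerA is already holding a read lock when the writer arrives.
fn readerA(io: volt.Io, rwlock: *volt.sync.RwLock) void {
    rwlock.readLock(io);
    defer rwlock.readUnlock();
    // ... long-running read ...
}

// The writer queues behind readerA; while queued, it drains permits so new
// readers cannot sneak in ahead of it.
fn writer(io: volt.Io, rwlock: *volt.sync.RwLock) void {
    rwlock.writeLock(io); // suspends until readerA releases
    defer rwlock.writeUnlock();
    // ... exclusive update ...
}

// readerB arrives while the writer is queued: tryReadLock() fails fast instead
// of starving the writer. It can retry (or use readLock(io)) after the writer runs.
fn readerB(rwlock: *volt.sync.RwLock) void {
    if (!rwlock.tryReadLock()) return;
    defer rwlock.readUnlock();
    // ... read ...
}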

if (rwlock.tryReadLockGuard()) |guard| {
    var g = guard;
    defer g.deinit();
    // read shared data
}

if (rwlock.tryWriteLockGuard()) |guard| {
    var g = guard;
    defer g.deinit();
    // modify shared data
}

A configuration store where many tasks read settings frequently but an admin task writes updates infrequently. RwLock allows all readers to proceed in parallel — they only block when a writer is active.

const std = @import("std");
const volt = @import("volt");

const AppConfig = struct {
    max_connections: u32,
    request_timeout_ms: u64,
    feature_flags: u64,
    log_level: enum { debug, info, warn, err },
};

const ConfigStore = struct {
    rwlock: volt.sync.RwLock,
    config: AppConfig,

    fn init(defaults: AppConfig) ConfigStore {
        return .{
            .rwlock = volt.sync.RwLock.init(),
            .config = defaults,
        };
    }

    /// Read the current configuration. Multiple tasks can call this
    /// concurrently without blocking each other.
    fn read(self: *ConfigStore, io: volt.Io) AppConfig {
        self.rwlock.readLock(io);
        defer self.rwlock.readUnlock();
        return self.config;
    }

    /// Non-blocking read for contexts where you cannot suspend.
    fn tryRead(self: *ConfigStore) ?AppConfig {
        if (self.rwlock.tryReadLock()) {
            defer self.rwlock.readUnlock();
            return self.config;
        }
        return null; // A writer is active, caller can retry or yield
    }

    /// Check a single feature flag without copying the whole config.
    fn hasFeatureFlag(self: *ConfigStore, io: volt.Io, flag_bit: u6) bool {
        self.rwlock.readLock(io);
        defer self.rwlock.readUnlock();
        return (self.config.feature_flags & (@as(u64, 1) << flag_bit)) != 0;
    }

    /// Update the configuration. Blocks all readers until complete.
    /// Only one writer can hold the lock at a time.
    fn update(self: *ConfigStore, io: volt.Io, new_config: AppConfig) void {
        self.rwlock.writeLock(io);
        defer self.rwlock.writeUnlock();
        self.config = new_config;
    }

    /// Partially update: change only the timeout, leave everything else.
    fn setTimeout(self: *ConfigStore, io: volt.Io, timeout_ms: u64) void {
        self.rwlock.writeLock(io);
        defer self.rwlock.writeUnlock();
        self.config.request_timeout_ms = timeout_ms;
    }
};

var store = ConfigStore.init(.{
    .max_connections = 1000,
    .request_timeout_ms = 30_000,
    .feature_flags = 0,
    .log_level = .info,
});

// Task A, B, C, D (concurrent readers -- all proceed in parallel):
fn readerTask(io: volt.Io) void {
    const cfg = store.read(io);
    std.log.info("timeout = {d}ms, max_conn = {d}", .{
        cfg.request_timeout_ms,
        cfg.max_connections,
    });
}

// Admin task (exclusive writer -- blocks readers while active):
fn adminTask(io: volt.Io) void {
    store.update(io, .{
        .max_connections = 2000,
        .request_timeout_ms = 15_000,
        .feature_flags = 0b0000_0011, // Enable features 0 and 1
        .log_level = .warn,
    });
}

Semaphore

Counting semaphore. Limits concurrent access to a resource.

const Semaphore = volt.sync.Semaphore;
var sem = Semaphore.init(10); // 10 permits

| Method | Signature | Description |
| --- | --- | --- |
| tryAcquire | fn tryAcquire(self: *Semaphore, num: usize) bool | Non-blocking acquire (lock-free CAS). |
| acquire | fn acquire(self: *Semaphore, io: volt.Io, num: usize) void | Convenience: acquire permits, suspending until available. |
| acquireFuture | fn acquireFuture(self: *Semaphore, num: usize) AcquireFuture | Returns a Future that resolves when permits are acquired. |
| acquireWait | fn acquireWait(self: *Semaphore, waiter: *Waiter) bool | Waiter-based acquire. |
| release | fn release(self: *Semaphore, num: usize) void | Release permits. Wakes queued waiters. |
| cancelAcquire | fn cancelAcquire(self: *Semaphore, waiter: *Waiter) void | Cancel and return partial permits. |
| tryAcquirePermit | fn tryAcquirePermit(self: *Semaphore, num: usize) ?SemaphorePermit | Non-blocking acquire with RAII permit. |
| availablePermits | fn availablePermits(self: *const Semaphore) usize | Current available permits. |
| waiterCount | fn waiterCount(self: *Semaphore) usize | Debug: queued waiter count (O(n)). |

The simplest way to acquire permits in an async task. Suspends the current task until the requested permits are available.

fn useConnection(io: volt.Io, pool_sem: *volt.sync.Semaphore) void {
    pool_sem.acquire(io, 1); // suspends until permit available
    defer pool_sem.release(1);
    // use the connection
}
  • tryAcquire: Lock-free CAS loop. No mutex.
  • release: Always takes the mutex, serves waiters directly (direct handoff). Only surplus permits go to the atomic counter.
  • acquireWait: Lock-free fast path for full acquisition. If insufficient permits, locks mutex BEFORE CAS to close the race window.

Key invariant: permits never float in the atomic when waiters are queued.
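
The no-waiter path is easy to observe with just the non-blocking calls: with nobody queued, released permits simply return to the atomic counter. A minimal sketch (checkNoWaiterPath is an illustrative name):

const std = @import("std");
const volt = @import("volt");

fn checkNoWaiterPath() void {
    var sem = volt.sync.Semaphore.init(2);

    // Lock-free CAS path: no mutex is involved for a successful tryAcquire.
    std.debug.assert(sem.tryAcquire(2));
    std.debug.assert(!sem.tryAcquire(1)); // nothing left

    // No waiters are queued, so these are surplus permits:
    // they go straight back to the atomic counter.
    sem.release(2);
    std.debug.assert(sem.availablePermits() == 2);
}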

if (sem.tryAcquirePermit(1)) |permit| {
    var p = permit;
    defer p.deinit(); // Releases permits
    doWork();
}

SemaphorePermit methods: release(), forget() (leak the permit), deinit().
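
forget() is for handing the permit's lifetime to someone else: the guard stops tracking it, and the slot stays consumed until release() is called manually. A short sketch (handOffSlot is an illustrative name):

fn handOffSlot(sem: *volt.sync.Semaphore) void {
    if (sem.tryAcquirePermit(1)) |permit| {
        var p = permit;
        // Leak the permit: deinit() will no longer release it automatically.
        // Whoever now owns the slot must eventually call sem.release(1).
        p.forget();
    }
}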

A database connection pool that limits the number of concurrent connections. The semaphore acts as a gatekeeper: each task must acquire a permit before using a connection, and releases it when done.

const std = @import("std");
const volt = @import("volt");

const DbConnectionPool = struct {
    /// Controls how many tasks can hold a connection simultaneously.
    /// Each permit represents one available connection slot.
    semaphore: volt.sync.Semaphore,
    max_connections: usize,

    fn init(max_connections: usize) DbConnectionPool {
        return .{
            .semaphore = volt.sync.Semaphore.init(max_connections),
            .max_connections = max_connections,
        };
    }

    /// Acquire a connection slot. Suspends until one is available.
    fn getConnection(self: *DbConnectionPool, io: volt.Io) void {
        self.semaphore.acquire(io, 1);
    }

    /// Release the connection slot back to the pool.
    fn releaseConnection(self: *DbConnectionPool) void {
        self.semaphore.release(1);
    }

    /// Attempt to acquire a connection slot without waiting.
    fn tryGetConnection(self: *DbConnectionPool) ?volt.sync.semaphore.SemaphorePermit {
        return self.semaphore.tryAcquirePermit(1);
    }

    /// How many connections are currently available.
    fn availableConnections(self: *DbConnectionPool) usize {
        return self.semaphore.availablePermits();
    }

    /// How many tasks are waiting for a connection.
    fn waitingTasks(self: *DbConnectionPool) usize {
        return self.semaphore.waiterCount();
    }
};

var pool = DbConnectionPool.init(5); // Max 5 concurrent DB connections

fn handleRequest(io: volt.Io) !void {
    // Acquire a connection slot, suspending until available.
    pool.getConnection(io);
    defer pool.releaseConnection();

    // Use the connection for the duration of this scope.
    // Even if executeQuery returns an error, releaseConnection runs.
    try executeQuery();
}

fn executeQuery() !void {
    // ... actual database work ...
}

Limiting concurrent outbound HTTP requests to avoid overwhelming a downstream service.

const volt = @import("volt");

/// Allow at most 20 outbound requests in flight at once.
var request_limiter = volt.sync.Semaphore.init(20);

fn fetchFromUpstream(io: volt.Io, url: []const u8) ![]const u8 {
    // Acquire 1 permit. If 20 requests are already in flight,
    // this suspends the task until one finishes.
    request_limiter.acquire(io, 1);
    defer request_limiter.release(1);
    return doHttpGet(url);
}

fn doBulkFetch(io: volt.Io, urls: []const []const u8) !void {
    // Acquire 5 permits at once for a batch operation.
    // This reserves 5 of the 20 slots, leaving 15 for other tasks.
    request_limiter.acquire(io, 5);
    defer request_limiter.release(5);

    for (urls) |url| {
        _ = try doHttpGet(url);
    }
}

fn doHttpGet(url: []const u8) ![]const u8 {
    _ = url;
    // ... actual HTTP request ...
    return "";
}

For Notify, Barrier, OnceCell, thread safety details, and the full “Choosing the Right Primitive” guide, see the Coordination API.