Semaphore & Barrier
Semaphore
A counting semaphore limits how many tasks can access a resource concurrently. Initialize it with N permits; at most N tasks can hold permits at the same time. When all permits are consumed, subsequent acquirers are suspended until permits are released.
Initialization
```zig
const volt = @import("volt");

// Allow up to 10 concurrent database connections
var sem = volt.sync.Semaphore.init(10);
```
No allocator needed. Zero-allocation, no deinit required.
Non-blocking: tryAcquire / release
```zig
if (sem.tryAcquire(1)) {
    defer sem.release(1);
    // Got a permit -- do work
    try handleConnection(conn);
}
```
tryAcquire uses a lock-free CAS loop and never blocks. release always takes the internal mutex to check for queued waiters (this ensures no wakeups are lost).
You can acquire and release multiple permits at once:
```zig
// Acquire 3 permits for a batch operation
if (sem.tryAcquire(3)) {
    defer sem.release(3);
    try processBatch(items);
}
```
RAII permit guard
tryAcquirePermit returns an optional SemaphorePermit that releases automatically:
```zig
if (sem.tryAcquirePermit(1)) |p| {
    var permit = p;
    defer permit.deinit();
    try handleRequest(req);
}
```
Use permit.forget() to intentionally leak the permit (useful when transferring ownership):
```zig
var permit = sem.tryAcquirePermit(1).?;
// Transfer ownership to the connection object
connection.permit = permit;
permit.forget(); // Don't release on scope exit
```
Async: acquire(io, n)
acquire(io, n) acquires N permits asynchronously, yielding to the scheduler until they are available:
```zig
sem.acquire(io, 2);
defer sem.release(2);
// Got 2 permits -- do work
```
Pass the io: volt.Io handle so the semaphore can yield to the scheduler when permits are exhausted and resume the task when permits become available.
Advanced: acquireFuture(n)
For manual future composition or custom schedulers, acquireFuture(n) returns an AcquireFuture implementing the Future trait (Output = void):
```zig
var future = sem.acquireFuture(2);
// Poll through your scheduler...
// When future.poll() returns .ready, permits are held.
defer sem.release(2);
```
Low-level: acquireWait with explicit Waiter
```zig
var waiter = volt.sync.semaphore.Waiter.init(1);
waiter.setWaker(@ptrCast(&my_ctx), myWakeCallback);

if (!sem.acquireWait(&waiter)) {
    // Waiter queued. Yield to scheduler.
    // When woken, waiter.isComplete() will be true.
}
defer sem.release(1);
```
Cancellation
Cancel a pending acquisition. Any partially acquired permits are returned to the semaphore:
```zig
sem.cancelAcquire(&waiter);

// Or on the future:
future.cancel();
```
Diagnostics
```zig
sem.availablePermits(); // usize -- currently available permits
sem.waiterCount();      // usize -- queued waiters (O(n) list walk)
```
Common patterns
Rate limiting
```zig
// Limit to 100 concurrent requests
var rate_limiter = volt.sync.Semaphore.init(100);

fn handleIncoming(conn: TcpStream) void {
    if (rate_limiter.tryAcquire(1)) {
        defer rate_limiter.release(1);
        processRequest(conn);
    } else {
        conn.writeAll("429 Too Many Requests\r\n") catch {};
        conn.close();
    }
}
```
Connection pool
```zig
const POOL_SIZE = 20;
var pool_sem = volt.sync.Semaphore.init(POOL_SIZE);

fn getConnection(io: volt.Io) !*Connection {
    // Async wait for a connection slot
    pool_sem.acquire(io, 1);
    return pool.checkout();
}

fn releaseConnection(conn: *Connection) void {
    pool.checkin(conn);
    pool_sem.release(1);
}
```
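A caller would typically pair the two with defer so the slot is always returned, even on error. A minimal sketch; queryUser, User, and conn.query are illustrative placeholders, not part of Volt:

```zig
fn queryUser(io: volt.Io, id: u64) !User {
    const conn = try getConnection(io); // suspends until a pool slot is free
    defer releaseConnection(conn);      // always give the slot back
    // Hypothetical query API on the pooled connection:
    return conn.query(User, "SELECT * FROM users WHERE id = ?", .{id});
}
```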
Algorithm details
Volt’s semaphore uses a batch algorithm with direct handoff to prevent starvation (a simplified sketch follows this list):
- tryAcquire: Lock-free CAS loop (no mutex).
- release: Always takes the mutex. Serves queued waiters directly from the released amount. Only surplus permits go to the atomic counter.
- acquireWait: Lock-free CAS fast path for full acquisition. If insufficient permits, locks the mutex before draining remaining permits, eliminating the classic release-before-queue race.
Key invariant: permits never float in the atomic counter when waiters are queued.
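For intuition, here is a highly simplified sketch of that shape in plain Zig, using std.atomic.Value and std.Thread.Mutex. It illustrates the technique only and is not Volt's implementation; the struct name is made up and the waiter queue and wakeups are elided:

```zig
const std = @import("std");

const SemaphoreSketch = struct {
    permits: std.atomic.Value(usize),
    mutex: std.Thread.Mutex = .{},
    // A real implementation also keeps a FIFO of queued waiters (elided).

    fn tryAcquire(self: *SemaphoreSketch, n: usize) bool {
        // Lock-free CAS loop: succeed only while enough permits are visible.
        var cur = self.permits.load(.acquire);
        while (cur >= n) {
            cur = self.permits.cmpxchgWeak(cur, cur - n, .acquire, .monotonic) orelse return true;
        }
        return false;
    }

    fn release(self: *SemaphoreSketch, n: usize) void {
        // Always lock, so a concurrent waiter can never be missed.
        self.mutex.lock();
        defer self.mutex.unlock();

        // Direct handoff (elided): pop queued waiters and satisfy them
        // straight from `n`; whatever is left over is the surplus.
        const surplus = n; // real code: n minus the amount handed to waiters

        // Only the surplus reaches the atomic counter, so permits never
        // float there while waiters are queued.
        if (surplus > 0) _ = self.permits.fetchAdd(surplus, .release);
    }
};
```

The point of the sketch is the ordering: waiters are satisfied under the lock before anything is added back to the counter, which is what keeps permits from floating while waiters are queued.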
Barrier
A Barrier synchronizes N tasks at a common rendezvous point. All N tasks must arrive at the barrier before any can proceed past it.
Initialization
```zig
// Synchronize 4 worker tasks
var barrier = volt.sync.Barrier.init(4);
```
num_tasks must be greater than zero. No allocator needed.
Low-level: waitWith
```zig
var waiter = volt.sync.barrier.Waiter.init();
if (barrier.waitWith(&waiter)) {
    // This task was the LAST to arrive (the "leader").
    // All other waiters have been woken.
} else {
    // Waiting for other tasks. Yield to scheduler.
    // When woken, waiter.isReleased() is true.
}

// Check if this task was the leader
if (waiter.is_leader.load(.acquire)) {
    // Perform one-time post-barrier work
    consolidateResults();
}
```
The leader designation is useful for post-barrier work that should happen exactly once (aggregating results, printing summaries, etc.).
Async: wait(io)
wait(io) waits at the barrier asynchronously, yielding to the scheduler until all tasks have arrived:
```zig
const result = barrier.wait(io);
if (result.is_leader) {
    // This task was the last to arrive
    try publishResults();
}
```
Pass the io: volt.Io handle so the barrier can yield to the scheduler and resume the task when all participants arrive.
Advanced: waitFuture()
For manual future composition, waitFuture() returns a WaitFuture that resolves with a BarrierWaitResult:
```zig
var future = barrier.waitFuture();
// Poll through your scheduler...
// When future.poll() returns .ready, result is BarrierWaitResult
```
Reusability
Barriers automatically reset after all tasks pass through. This enables multiple synchronization rounds:
```zig
// Round 1
var w1 = volt.sync.barrier.Waiter.init();
_ = barrier.waitWith(&w1);
// ... all tasks proceed ...

// Round 2 (barrier automatically reset)
var w2 = volt.sync.barrier.Waiter.init();
_ = barrier.waitWith(&w2);
// ... all tasks proceed again ...
```
Each release increments a generation counter:
```zig
barrier.currentGeneration(); // 0, 1, 2, ...
```
Diagnostics
```zig
barrier.arrivedCount();      // tasks arrived at current barrier
barrier.totalTasks();        // N (configured task count)
barrier.waiterCount();       // tasks waiting (in the queue)
barrier.currentGeneration(); // number of times barrier has released
```
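These compose into a quick debug dump; a minimal sketch (the dumpBarrier helper and the use of std.log are illustrative, not part of the API):

```zig
const std = @import("std");
const volt = @import("volt");

fn dumpBarrier(b: *volt.sync.Barrier) void {
    std.log.debug("barrier: {d}/{d} arrived, {d} waiting, generation {d}", .{
        b.arrivedCount(),
        b.totalTasks(),
        b.waiterCount(),
        b.currentGeneration(),
    });
}
```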
Example: parallel computation phases
```zig
const NUM_WORKERS = 8;
var barrier = volt.sync.Barrier.init(NUM_WORKERS);

fn workerTask(worker_id: usize) void {
    // Phase 1: Compute partial results
    results[worker_id] = computePartial(worker_id);

    // Synchronize -- all workers must finish phase 1
    var waiter = volt.sync.barrier.Waiter.init();
    _ = barrier.waitWith(&waiter);

    // Phase 2: Use combined results
    if (waiter.is_leader.load(.acquire)) {
        // Leader merges partial results
        final_result = mergeResults(&results);
    }

    // Synchronize again before reading final_result
    var waiter2 = volt.sync.barrier.Waiter.init();
    _ = barrier.waitWith(&waiter2);

    // Phase 3: All workers use the final result
    applyResult(final_result, worker_id);
}
```
Important notes
- Once a task calls waitWith or polls a WaitFuture, the arrival count is incremented. Barrier waits cannot be cancelled: the arrival has already been committed.
- Waiter objects can be reset and reused across generations with waiter.reset() (see the sketch below).
- The barrier handles large task counts efficiently (tested with 100+ tasks in unit tests).
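For example, a long-lived worker can keep a single Waiter and reset it between rounds rather than constructing a new one each time. A minimal sketch using only the calls documented above; doRoundWork and the fixed three-round loop are illustrative:

```zig
var waiter = volt.sync.barrier.Waiter.init();
var round: usize = 0;
while (round < 3) : (round += 1) {
    _ = barrier.waitWith(&waiter); // arrive at the barrier for this generation
    doRoundWork(round);            // illustrative per-round work
    waiter.reset();                // ready for the next generation
}
```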