
# std:workers module

Stable

The std:workers module exposes the worker-pool model to scripts. It provides cross-worker shared state through workers.shared and an observable shutdown signal through workers.shutdown_signal. The pool itself is sized at process startup; std:workers does not expose pool construction or worker lifecycle to scripts.

| Property | Value |
| --- | --- |
| Namespace | `std` |
| Source | `src/lua/std/workers.rs` |
| Tests | In-source `#[cfg(test)] mod tests` in `workers.rs`; cross-worker behaviour exercised through `tests/std/collections.test.luau` |
| Stability | Stable |
| Mirror | tokio runtime — no `std::` equivalent |

## Syntax

```lua
local workers = require("std:workers")
```

The returned table exposes the module functions described below.

## Description

neoc runs scripts in a pool of independent Lua virtual machines — workers. Each worker is a self-contained interpreter: globals, locals, upvalues, and any userdata constructed inside it are private to that worker. The pool is sized at process startup and tasks dispatch across workers transparently. Script authors do not pick which worker they run on.

std:workers is the surface that lets a script reason about that model. The contract is intentionally narrow: only userdata that opts in can cross worker boundaries. Plain Lua tables, functions, threads, and arbitrary userdata cannot — they are bound to the VM that created them.
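
The boundary can be seen directly: sharing a plain table raises, while an opted-in userdata returns a worker-bound handle. A minimal sketch (the key names are illustrative):

```lua
local workers     = require("std:workers")
local collections = require("std:collections")

-- A plain table is bound to this VM; attempting to share it raises.
local ok, err = pcall(function()
    return workers.shared("app_config", { retries = 3 })
end)
-- ok is false; err mentions "userdata" (see Errors below).

-- An opted-in userdata crosses the boundary and yields a shared handle.
local hits = workers.shared("app_hits", collections.counter())
```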

For background on the worker model, see The sandboxing model.

## Module functions

### workers.shared

```lua
workers.shared(key: string, userdata: Sharable): Sharable
```

Registers or looks up a value in a process-global registry keyed by string.

- `key`: The string identifier under which the value is registered. Any worker calling with the same key receives a handle to the same shared state.
- `userdata`: On the first call for a given key, the userdata is registered and a handle to the shared state is returned. On subsequent calls, the userdata serves only as a type placeholder ("I'm expecting this type"): the stored handle is rewrapped for the calling worker and returned, and the placeholder is discarded.

The returned handle is bound to the calling worker; the underlying state lives outside any single worker. See Sharable types for the set of userdata types that opt into sharing.

### workers.shutdown_signal

```lua
workers.shutdown_signal(): ShutdownSignal
```

Returns a process-singleton signal that fires once when the process receives a shutdown request (Ctrl-C or SIGINT). Repeated calls return references to the same underlying signal.

#### ShutdownSignal methods

- `ShutdownSignal:is_fired()`: Returns `true` once the signal has fired, `false` otherwise. Non-blocking.

- `ShutdownSignal:wait()`: Resumes when the signal fires. Returns immediately if the signal has already fired. Safe to call from multiple workers and multiple coroutines concurrently.
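
For workers that prefer polling over blocking, `is_fired()` can gate a work loop. A sketch, where `do_one_unit_of_work` is a hypothetical application function:

```lua
local workers = require("std:workers")

local signal = workers.shutdown_signal()

-- Drain work, checking for a shutdown request between items
-- instead of parking the coroutine in wait().
while not signal:is_fired() do
    do_one_unit_of_work()  -- hypothetical application function
end

-- After the loop, wait() returns immediately: the signal has fired.
signal:wait()
```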

## Sharable types

A userdata type is sharable when it implements the cross-worker contract. The current set is:

| Type label | Constructor |
| --- | --- |
| `std:collections.map` | `(require("std:collections")).map()` |
| `std:collections.counter` | `(require("std:collections")).counter()` |
| `std:net.listener` | `(require("std:net")).listener(addr)` |

Other userdata types — including std:collections.mutex, lib:json Object and Array handles, json.null, database handles, and server handles — are not sharable. Passing one to workers.shared raises an error listing the currently sharable set, so scripts get a stable error catalogue rather than a silent failure.

When a new sharable type is added to the runtime, it appears in this catalogue automatically. Scripts can rely on the error message to discover the live set.

## Errors

All errors from workers.shared are raised, not returned via the (value, err) tuple. Misuse of this surface is a programming error rather than a recoverable condition. Wrap calls in pcall if a script needs to observe them.

| Trigger | Message contains |
| --- | --- |
| Plain table or scalar passed | `"userdata"` |
| Userdata that does not opt in | `"not a sharable userdata"` and the catalogue |
| Type mismatch on an existing key | `"already holds"` and both type labels |
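
A script can observe these messages by wrapping the call in `pcall`. A sketch of the type-mismatch case, assuming nothing is yet registered under the illustrative key `"state"`:

```lua
local workers     = require("std:workers")
local collections = require("std:collections")

-- Register a map, then ask for the same key as a counter.
workers.shared("state", collections.map())

local ok, err = pcall(function()
    return workers.shared("state", collections.counter())
end)
if not ok then
    -- err contains "already holds" plus both type labels.
    print(err)
end
```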

## Examples

### A cross-worker counter

The following example publishes an atomic counter under a fixed key. Any worker calling workers.shared with the same key sees the same atomic.

```lua
local workers     = require("std:workers")
local collections = require("std:collections")

local hits = workers.shared("global_hits", collections.counter())
hits:increment()
print(hits:get())
```

### A single bound socket fanned out to all workers

The following example binds one TCP listener and shares it across all workers. Each worker accepts against the same file descriptor.

```lua
local net     = require("std:net")
local workers = require("std:workers")
local hyper   = require("vnd:hyper")

local listener = workers.shared("http", net.listener("0.0.0.0:3000"))
hyper.serve(listener, function(req)
    return { status = 200, body = "hello" }
end)
```

### A shared map across workers

The following example shares a map keyed by string, allowing any worker to read and mutate session state under a stable key.

```lua
local workers     = require("std:workers")
local collections = require("std:collections")

-- Worker A — publish.
local sessions = workers.shared("sessions", collections.map())
sessions:set("user:42", "active")

-- Worker B — pick it up; mutations from A are visible.
local sessions = workers.shared("sessions", collections.map())
print(sessions:get("user:42"))  -- active
```

### Graceful shutdown

The following example blocks the main task until Ctrl-C, then closes a long-running server.

```lua
local workers = require("std:workers")
local hyper   = require("vnd:hyper")
local net     = require("std:net")

local server = hyper.serve(net.listener("0.0.0.0:3000"), function(req)
    return { status = 200, body = "ok" }
end)

workers.shutdown_signal():wait()
server:close()
server:wait()
```

## Acceptance

The following scenarios must hold. They are exercised by the in-source #[cfg(test)] mod tests block in workers.rs and by the cross-worker scenarios in tests/std/collections.test.luau.

  1. First-call publish, later-call lookup. workers.shared("k", X()) followed by workers.shared("k", X()) from any worker returns handles that observe each other's mutations.
  2. Type mismatch raises. Registering a map at key k and then calling workers.shared("k", counter()) raises an error containing "already holds" and both type labels.
  3. Plain table rejected. workers.shared("k", { a = 1 }) raises an error.
  4. Non-sharable userdata rejected. workers.shared("k", json.null), workers.shared("k", json.object()), and workers.shared("k", json.array()) each raise an error containing "not a sharable userdata" and the current catalogue. lib:json types are not registered as sharable.
  5. Catalogue reflects the live set. The error message from a non-sharable rejection lists every type currently registered as sharable, including any added at engine boot via register_sharable<T>().
  6. Shutdown signal idempotent wait. Calling workers.shutdown_signal():wait() multiple times after the signal has fired returns immediately each time.

## See also

- The sandboxing model — How the worker pool fits into the broader engine.
- std:collections — Sharable collection types.
- std:net — TCP listeners shared across workers.
- lib:json — JSON document handles. Note that Object and Array are not sharable across workers.