The tool registry
LLM agents actually need

Define tools once with Zod schemas. Get validated calls, LLM-readable errors, and one-line schema export to OpenAI, Anthropic, Gemini, or Vercel AI.

$ npm install toolwire zod
105 tests passing · 0 runtime dependencies · 4 provider adapters

Every agent project rewrites the same three things

Different code. Every project. Every framework. No standard.

01

JSON schema

Write a schema for each tool, in the format each provider expects — then keep it in sync with your code.

02

Input validation

Validate the LLM's arguments before passing them to your function — separately from the schema definition.

03

Error normalization

Turn handler errors into something the LLM can understand and retry — reinvented every single time.
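What 03 looks like in practice: a sketch of the hand-rolled normalizer these projects end up writing. The error shape, codes, and wording here are illustrative, not toolwire's.

```typescript
// A hand-rolled error normalizer: maps thrown errors to something an LLM
// can act on. Every project carries a slightly different variant of this,
// which is exactly the "no standard" problem.
type NormalizedError = { code: string; llmMessage: string; retryable: boolean }

function normalizeError(err: unknown): NormalizedError {
  if (err instanceof Error && err.name === 'AbortError') {
    return {
      code: 'TIMEOUT',
      llmMessage: 'The tool timed out. Try again with a narrower request.',
      retryable: true,
    }
  }
  const message = err instanceof Error ? err.message : String(err)
  return {
    code: 'EXECUTION',
    llmMessage: `The tool failed: ${message}. Do not retry with the same arguments.`,
    retryable: false,
  }
}
```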

It's not about line count

The raw version is nearly the same length — because the handlers are identical. The difference is what the raw version is missing.

| Feature | toolwire | Raw OpenAI API |
| --- | --- | --- |
| Per-tool timeout enforcement | built in | slow tool blocks forever |
| Retries with exponential backoff | built in | write your own |
| Output validation | Zod schema on return value | handler returns anything |
| Structured error codes | NOT_FOUND, TIMEOUT, VALIDATION… | plain strings you invent |
| LLM-readable error messages | llmMessage on every failure | whatever your catch block returns |
| Middleware (logging, auth, cache) | 3 lines to plug in | weave it through every handler |
| Provider format switching | reg.toAnthropic() etc. | rewrite all schemas |
| Hot-swap tools at runtime | reg.swap() | restart the process |
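To make the first row concrete, here is roughly what hand-rolling per-tool timeout enforcement takes. This is a sketch of the raw approach; toolwire's internals may differ.

```typescript
// Hand-rolled per-tool timeout: abort a signal after `ms` and race the
// handler against a rejecting promise, so even a handler that ignores
// the signal cannot block the agent loop forever.
async function withTimeout<T>(
  run: (signal: AbortSignal) => Promise<T>,
  ms: number,
): Promise<T> {
  const ctrl = new AbortController()
  const timer = setTimeout(() => ctrl.abort(), ms)
  const timeout = new Promise<never>((_, reject) =>
    ctrl.signal.addEventListener('abort', () => reject(new Error('TIMEOUT')), { once: true }),
  )
  try {
    return await Promise.race([run(ctrl.signal), timeout])
  } finally {
    clearTimeout(timer)
  }
}
```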

The invisible bug: schema drift

Without toolwire, your JSON schema (sent to the LLM) and your Zod validation (run in code) are two separate things that must stay in sync manually. They don't.

✗ Two sources of truth
// JSON schema sent to the LLM
const schema = {
  parameters: {
    properties: {
      city: { type: 'string' }
    },
    required: ['city']
  }
}

// Zod validation — written separately.
// Add a field? Update both. Easy to forget.
const args = validateCity(rawArgs)
✓ One source of truth
// One Zod schema. That's it.
const weatherTool = tool({
  name: 'get_weather',
  input: z.object({
    city: z.string().min(1),
  }),
  handler: async ({ city }) => { ... }
})

// JSON schema auto-computed from Zod.
// Validation runs from the same schema.
// Change one thing. Everything stays in sync.
The compounding problem: a raw implementation is ~380 lines for 4 tools. A real agent has 15–20 tools. Each one adds JSON schema, manual validation, and a router case — all disconnected. toolwire scales linearly: add a tool({...}) call to the array. Registry, middleware, error handling, and all four provider adapters scale for free.

Up in 30 seconds

Define a tool, create a registry, call it, export to your provider.

import { tool, registry } from 'toolwire'
import { z } from 'zod'

const searchWeb = tool({
  name: 'search_web',
  description: 'Search the web for current information',
  input: z.object({
    query: z.string().min(1).describe('The search query'),
    maxResults: z.number().int().min(1).max(20).default(5),
  }),
  handler: async ({ query, maxResults }) => {
    return await mySearchAPI(query, maxResults)
  },
  timeout: 10_000,
  retries: 2,
})

const reg = registry([searchWeb, readFile, writeFile])  // readFile, writeFile defined the same way

const tools = reg.toAnthropic()  // or toOpenAI(), toGemini(), toVercelAI()

const result = await reg.call(llmToolCall)

if (result.success) {
  messages.push({ role: 'tool', content: JSON.stringify(result.data) })
} else {
  messages.push({ role: 'tool', content: result.error.llmMessage })
}

Everything your agent loop needs

Built-in. No configuration required for the happy path.

🔒

Input + output validation

Zod schema validates arguments in. Optional output schema validates results out. Errors always include the full Zod issue tree.

🔄

Timeout + retries

Per-tool timeout with AbortSignal for cooperative cancellation. Exponential backoff retries on execution failures.

💬

LLM-readable errors

Every failure has an llmMessage field — formatted to tell the model exactly what went wrong and how to fix it.

🔌

Four provider adapters

Export tool schemas to OpenAI, Anthropic, Google Gemini, or Vercel AI SDK in a single method call.

Middleware

beforeCall / afterCall / onError hooks for logging, auth, caching, and tracing. Composable and chainable.

🔀

Hot-swap

Replace a tool at runtime without restarting the agent. Swap a slow tool for a cached version mid-run.

📁

Runtime discovery

Load tools from a directory of JS files or a remote JSON manifest. Build dynamic, self-extending agents.

🎯

Zero dependencies

Zod is a peer dependency. Everything else is built-in. No supply chain surprises.

Small surface. Full control.

Three entry points cover every use case.

tool(config) — Define a tool. Returns a frozen ToolDefinition.
const myTool = tool({
  name: 'my_tool',          // 1–64 chars: letters, digits, _ or -
  description: string,      // shown to the LLM — explain when to call this
  input: ZodSchema,         // validates LLM arguments
  output?: ZodSchema,       // optional — validates handler return value
  handler: async (input, context) => { ... },
  timeout?: number,         // ms, default 30_000
  retries?: number,         // extra attempts on execution failure, default 0
  annotations?: {
    readOnly?: boolean,
    destructive?: boolean,
    expensive?: boolean,
  },
})

The context object passed to every handler:

interface ToolContext {
  signal: AbortSignal // tied to the timeout
  attempt: number     // 0 = first try, 1 = first retry, …
}
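A handler can use both fields to cooperate with cancellation and observe retries. A sketch, with a hypothetical tool doing hypothetical work:

```typescript
// Matches the ToolContext shape above; the handler itself is illustrative.
interface ToolContext { signal: AbortSignal; attempt: number }

// A long-running handler that checks the signal between units of work, so
// the per-tool timeout can actually stop it, and reports the attempt number.
async function summarizeFiles({ paths }: { paths: string[] }, ctx: ToolContext) {
  const summaries: string[] = []
  for (const path of paths) {
    if (ctx.signal.aborted) throw new Error(`aborted on attempt ${ctx.attempt}`)
    summaries.push(`${path}: summarized`) // stand-in for real work
  }
  return summaries
}
```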
registry(tools, options?) — Create a ToolRegistry.
const reg = registry([searchWeb, readFile, writeFile], {
  defaultTimeout: 15_000,
})
reg.call(request) — Execute a tool call. Always resolves — never throws.
const result = await reg.call({ name: 'search_web', arguments: { query: 'hello' } })

if (result.success) {
  result.data        // validated return value
  result.durationMs  // wall time
} else {
  result.error.code       // error category (see table below)
  result.error.message    // developer-readable
  result.error.llmMessage // formatted for the LLM to retry
  result.error.retryable  // boolean
  result.error.issues     // Zod issue array (VALIDATION_* errors only)
}
| Code | When it fires | Retryable |
| --- | --- | --- |
| NOT_FOUND | Tool name not in registry | yes |
| DISABLED | Tool is currently disabled | no |
| VALIDATION_INPUT | Arguments fail Zod schema | yes |
| VALIDATION_OUTPUT | Return value fails output schema | no |
| TIMEOUT | Handler exceeded timeout | yes |
| EXECUTION | Handler threw, all retries exhausted | no |
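In an agent loop, these codes usually drive a small retry policy. A sketch: the ToolFailure shape mirrors the error fields documented above, but the policy itself is illustrative, not part of toolwire.

```typescript
// Decide what to feed back to the model after a failed tool call.
type ToolFailure = { code: string; llmMessage: string; retryable: boolean }

function nextToolMessage(failure: ToolFailure, attemptsLeft: number): string {
  if (failure.retryable && attemptsLeft > 0) {
    // NOT_FOUND, VALIDATION_INPUT, TIMEOUT: let the model correct itself.
    return `${failure.llmMessage} (${attemptsLeft} attempts left)`
  }
  // DISABLED, VALIDATION_OUTPUT, EXECUTION: tell it to stop calling this tool.
  return `${failure.llmMessage} Do not call this tool again for this task.`
}
```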
reg.swap / enable / disable / use / describe
reg.register(newTool)           // add a tool at runtime
reg.swap('search_web', v2Tool)  // hot-replace a registered tool
reg.disable('send_email')       // block a tool (returns DISABLED if called)
reg.enable('send_email')        // re-enable it
reg.use(middleware)             // add middleware
reg.list()                      // → ['search_web', 'read_file', ...]
reg.get('search_web')           // → ToolDefinition | undefined
reg.describe()                  // → human-readable summary for system prompts

ToolRegistry.fromDir / fromManifest — Load tools at startup from a directory or a remote manifest.
const reg = await ToolRegistry.fromDir('./tools/')
const reg = await ToolRegistry.fromManifest('https://tools.myco.com/manifest.json')

One definition. Every provider.

Each adapter reads the pre-computed JSON Schema from your tool definitions. No re-conversion. No format drift. Disabled tools are always excluded.

reg.toOpenAI()               // { type: 'function', function: { name, description, parameters } }
reg.toOpenAI({ strict: true })
reg.toAnthropic()            // { name, description, input_schema }
reg.toGemini()               // { functionDeclarations: [{ name, description, parametersJsonSchema }] }
reg.toVercelAI()             // { [name]: { description, parameters: ZodSchema } }

import { toOpenAI, toAnthropic } from 'toolwire'
toOpenAI([searchWeb, readFile])
| Provider | Method | Schema key | Wrapper |
| --- | --- | --- | --- |
| OpenAI | toOpenAI() | parameters | { type: "function", function: {...} } |
| Anthropic | toAnthropic() | input_schema | Direct object array |
| Google Gemini | toGemini() | parametersJsonSchema | { functionDeclarations: [...] } |
| Vercel AI SDK | toVercelAI() | Zod schema directly | Record<name, { description, parameters }> |
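The wrapper differences are easiest to see by hand-wrapping a single JSON Schema the way each adapter does. A sketch mirroring the shapes shown above, not toolwire's internals:

```typescript
// One tool's JSON Schema, wrapped in each provider's expected envelope.
const name = 'get_weather'
const description = 'Get current weather for a city'
const jsonSchema = {
  type: 'object',
  properties: { city: { type: 'string' } },
  required: ['city'],
}

const openaiTool    = { type: 'function', function: { name, description, parameters: jsonSchema } }
const anthropicTool = { name, description, input_schema: jsonSchema }
const geminiTools   = { functionDeclarations: [{ name, description, parametersJsonSchema: jsonSchema }] }
```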

Plug in observability in 3 lines

Hooks run before and after every tool call. Multiple middleware chain in order for beforeCall, reverse order for afterCall.

reg.use({
  name: 'logger',
  beforeCall: (toolName, args) => {
    console.log(`→ ${toolName}`, args)
    span.start(toolName)
  },
  afterCall: (toolName, args, result) => {
    span.end(toolName, result.durationMs)
    metrics.record(toolName, result.data)
  },
  onError: (toolName, args, failure) => {
    alerting.send(toolName, failure.error)
    // return a ToolResult here to recover silently
  },
})

reg
  .use({ name: 'auth',    beforeCall: checkToken })
  .use({ name: 'cache',   beforeCall: checkCache, afterCall: writeCache })
  .use({ name: 'metrics', afterCall: recordMetrics })
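The checkCache / writeCache hooks from that chain could look like this; the Map-based store and key scheme are illustrative, and how a cache hit short-circuits the call is up to your middleware contract.

```typescript
// A minimal cache for tool results, keyed on tool name + arguments.
const store = new Map<string, unknown>()
const keyOf = (tool: string, args: unknown) => `${tool}:${JSON.stringify(args)}`

// beforeCall hook: look up a previously cached result.
function checkCache(tool: string, args: unknown): unknown | undefined {
  return store.get(keyOf(tool, args))
}

// afterCall hook: record successful results for reuse.
function writeCache(tool: string, args: unknown, result: { success: boolean; data?: unknown }) {
  if (result.success) store.set(keyOf(tool, args), result.data)
}
```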

Types flow end-to-end

Input and output types are inferred from your Zod schemas. No manual generics.

import type { InferInput, InferOutput } from 'toolwire'

const greet = tool({
  name: 'greet',
  description: 'Greet someone',
  input: z.object({ name: z.string() }),
  output: z.object({ message: z.string() }),
  handler: async ({ name }) => ({ message: `Hello, ${name}!` }),
})

type GreetInput  = InferInput<typeof greet>   // { name: string }
type GreetOutput = InferOutput<typeof greet>  // { message: string }

Using Zod v3? Install the optional peer (npm install zod-to-json-schema); toolwire detects your Zod version automatically.