NodeLLM 1.15 introduces automated schema self-correction, enabling agents to recover from validation failures without manual intervention. This release also brings fine-grained middleware lifecycle control and hardened type safety.
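To give a feel for what "schema self-correction" means, here is a rough sketch of the underlying loop: validate the model's output, and on failure feed the validation errors back so the model repairs its own reply. The `ChatClient` shape and `chatWithRepair` helper below are illustrative assumptions, not NodeLLM's actual API; only zod is a real dependency here.

```ts
import { z } from "zod";

// Hypothetical client shape for illustration; NodeLLM's real API may differ.
interface ChatClient {
  chat(prompt: string): Promise<string>;
}

// The structured output we want the model to produce.
const Invoice = z.object({
  vendor: z.string(),
  totalCents: z.number().int(),
});

// Ask for JSON, validate it, and on failure feed the validation errors
// back so the model can correct its own output on the next attempt.
async function chatWithRepair(client: ChatClient, prompt: string, maxAttempts = 3) {
  let lastError = "";
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const reply = await client.chat(
      attempt === 0
        ? prompt
        : `${prompt}\n\nYour previous reply failed validation:\n${lastError}\nReturn corrected JSON only.`
    );
    let candidate: unknown;
    try {
      candidate = JSON.parse(reply);
    } catch (err) {
      lastError = `Not valid JSON: ${(err as Error).message}`;
      continue;
    }
    const parsed = Invoice.safeParse(candidate);
    if (parsed.success) return parsed.data;
    lastError = JSON.stringify(parsed.error.issues);
  }
  throw new Error(`Schema validation still failing after ${maxAttempts} attempts`);
}
```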
NodeLLM 1.14 reinforces our philosophy that agents don't require complex orchestration frameworks—they are just LLMs armed with tools. This release also brings first-class support for xAI (Grok) and the complete Mistral suite.
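A minimal sketch of that idea, with hypothetical types rather than NodeLLM's documented interface: an "agent" is just a loop that lets the model call plain functions until it answers in text.

```ts
// Illustrative shapes only; real NodeLLM types may differ.
type ToolCall = { name: string; args: Record<string, unknown> };
type ModelTurn = { text?: string; toolCalls?: ToolCall[] };

interface Model {
  chat(messages: { role: string; content: string }[]): Promise<ModelTurn>;
}

// Plain functions the model is allowed to call.
const tools: Record<string, (args: Record<string, unknown>) => Promise<string>> = {
  getWeather: async ({ city }) => `Sunny in ${String(city)}`,
};

// The whole "agent": call the model, run any tools it asks for,
// feed the results back, and stop when it replies in plain text.
async function runAgent(model: Model, question: string): Promise<string> {
  const messages = [{ role: "user", content: question }];
  for (;;) {
    const turn = await model.chat(messages);
    if (!turn.toolCalls?.length) return turn.text ?? "";
    for (const call of turn.toolCalls) {
      const tool = tools[call.name];
      const result = tool ? await tool(call.args) : `Unknown tool: ${call.name}`;
      messages.push({ role: "tool", content: result });
    }
  }
}
```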
Vercel AI SDK is the industry standard for shipping fast. NodeLLM takes a different approach: a backend-first LLM runtime with a middleware architecture, 540+ models across 7 providers, and enterprise-grade features.
NodeLLM 1.10 introduces a full middleware architecture for intercepting LLM requests, responses, tool executions, and errors. Build PII protection, cost guards, and custom pipelines—without changing your business logic.
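As a hedged sketch of the pattern (the `Middleware` signature and `execute` runner below are assumptions for illustration, not the documented NodeLLM interface), a PII scrubber and a cost guard sit between your code and the provider call, onion-style:

```ts
// Hypothetical middleware and context shapes for illustration only.
interface LLMContext {
  prompt: string;
  response?: string;
  costUsd?: number;
}
type Next = () => Promise<void>;
type Middleware = (ctx: LLMContext, next: Next) => Promise<void>;

// Redact obvious PII before the prompt leaves the process.
const piiGuard: Middleware = async (ctx, next) => {
  ctx.prompt = ctx.prompt.replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED-SSN]");
  await next();
};

// Fail loudly if a single request blows the per-call budget.
const costGuard: Middleware = async (ctx, next) => {
  await next();
  if ((ctx.costUsd ?? 0) > 0.5) throw new Error("Per-request cost budget exceeded");
};

// Run the middleware chain around the actual provider call.
async function execute(
  middlewares: Middleware[],
  ctx: LLMContext,
  call: (c: LLMContext) => Promise<void>
): Promise<void> {
  const run = (i: number): Promise<void> =>
    i < middlewares.length ? middlewares[i](ctx, () => run(i + 1)) : call(ctx);
  await run(0);
}

// Usage sketch:
// await execute([piiGuard, costGuard], { prompt: "..." }, async (ctx) => {
//   ctx.response = await provider.chat(ctx.prompt); // the real LLM call
//   ctx.costUsd = 0.002;                            // from usage metadata
// });
```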
Introducing @node-llm/monitor—a production-grade observability layer for LLM applications. Track costs, latency, token usage, and debug AI interactions with a built-in real-time dashboard.
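Roughly, an observability layer like this records one event per call. The `withMonitor` wrapper and event shape below are illustrative assumptions, not the package's actual exports; pricing is a placeholder.

```ts
// Illustrative event shape; the real @node-llm/monitor schema may differ.
interface LLMCallEvent {
  model: string;
  latencyMs: number;
  promptTokens: number;
  completionTokens: number;
  costUsd: number;
}

type ChatFn = (prompt: string) => Promise<{
  text: string;
  promptTokens: number;
  completionTokens: number;
}>;

// Wrap a chat function so every call emits a metrics event.
function withMonitor(model: string, chat: ChatFn, onEvent: (e: LLMCallEvent) => void): ChatFn {
  return async (prompt) => {
    const start = Date.now();
    const result = await chat(prompt);
    onEvent({
      model,
      latencyMs: Date.now() - start,
      promptTokens: result.promptTokens,
      completionTokens: result.completionTokens,
      // Placeholder rate; plug in real per-model pricing in practice.
      costUsd: (result.promptTokens + result.completionTokens) * 0.000002,
    });
    return result;
  };
}
```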
Testing AI systems is often frustrating and expensive. I’ve been working on a small utility to make testing LLM interactions a bit more predictable and secure—here is `@node-llm/testing`.
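To give a flavor of the approach (the `mockLLM` helper below is an illustrative sketch, not the package's actual API), the core trick is swapping the real provider for a deterministic stub so tests never hit a paid API:

```ts
// Illustrative stub; @node-llm/testing's real helpers may look different.
interface ChatClient {
  chat(prompt: string): Promise<string>;
}

// Returns canned replies in order and records every prompt it saw,
// keeping tests deterministic, fast, and free.
function mockLLM(replies: string[]): ChatClient & { prompts: string[] } {
  let i = 0;
  const prompts: string[] = [];
  return {
    prompts,
    async chat(prompt: string) {
      prompts.push(prompt);
      return replies[Math.min(i++, replies.length - 1)];
    },
  };
}

// Usage in a test (any runner):
// const llm = mockLLM(['{"sentiment":"positive"}']);
// const result = await classifyReview(llm, "Great product!");
// expect(llm.prompts[0]).toContain("Great product!");
```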