NodeLLM 1.10 introduces a full middleware architecture for intercepting LLM requests, responses, tool executions, and errors. Build PII protection, cost guards, and custom pipelines—without changing your business logic.
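To give a feel for the pattern, here is a minimal, self-contained sketch of onion-style middleware in TypeScript. The shapes, the `compose` helper, and the `redactPii` / `costGuard` middlewares are illustrative inventions for this post, not NodeLLM's actual API:

```ts
// Hypothetical request/response shapes; NodeLLM's real interfaces may differ.
interface LlmRequest { model: string; prompt: string; }
interface LlmResponse { text: string; costUsd: number; }

type Next = (req: LlmRequest) => Promise<LlmResponse>;
type Middleware = (req: LlmRequest, next: Next) => Promise<LlmResponse>;

// Fold the middleware list into one handler; the first middleware runs outermost.
function compose(middlewares: Middleware[], handler: Next): Next {
  return middlewares.reduceRight<Next>(
    (next, mw) => (req) => mw(req, next),
    handler,
  );
}

// Redact email addresses before the prompt ever leaves the process.
const redactPii: Middleware = (req, next) =>
  next({ ...req, prompt: req.prompt.replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, "[email]") });

// Reject any response whose reported cost exceeds a fixed per-request budget.
const costGuard: Middleware = async (req, next) => {
  const res = await next(req);
  if (res.costUsd > 0.05) throw new Error(`cost guard tripped: $${res.costUsd}`);
  return res;
};

// A stub handler standing in for the real provider call.
const callModel: Next = async (req) => ({ text: `echo: ${req.prompt}`, costUsd: 0.001 });

const handler = compose([redactPii, costGuard], callModel);
```

Because each middleware only sees a request and a `next` function, cross-cutting concerns stack without touching the code that actually calls the model.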
Introducing @node-llm/monitor: a production-grade observability layer for LLM applications. Track costs, latency, and token usage, and debug AI interactions with a built-in real-time dashboard.
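The core idea is a wrapper that times and counts every model call. The sketch below shows that idea with hypothetical names (`ChatFn`, `Metrics`, `withMonitoring`); @node-llm/monitor's real API may look different:

```ts
import { performance } from "node:perf_hooks";

// Hypothetical shapes, for illustration only.
interface ChatResult { text: string; inputTokens: number; outputTokens: number; }
type ChatFn = (prompt: string) => Promise<ChatResult>;

interface Metrics { calls: number; errors: number; totalMs: number; totalTokens: number; }

// Wrap any chat function so every call is timed and counted, even on failure.
function withMonitoring(chat: ChatFn, metrics: Metrics): ChatFn {
  return async (prompt) => {
    const start = performance.now();
    try {
      const res = await chat(prompt);
      metrics.totalTokens += res.inputTokens + res.outputTokens;
      return res;
    } catch (err) {
      metrics.errors += 1;
      throw err;
    } finally {
      metrics.calls += 1;
      metrics.totalMs += performance.now() - start;
    }
  };
}
```

A dashboard then only needs to read the accumulated `metrics` periodically; the wrapped function behaves identically to the original from the caller's point of view.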
Testing AI systems is often frustrating and expensive. I’ve been working on a small utility to make testing LLM interactions more predictable and secure: meet `@node-llm/testing`.
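The underlying technique is simple: swap the real client for a deterministic fake so tests never hit a paid API. This is a sketch of that idea, with illustrative names; `@node-llm/testing`'s actual helpers may be named differently:

```ts
// A deterministic stand-in for the real client: canned answers, no network, no bill.
type ChatFn = (prompt: string) => Promise<string>;

function fakeLlm(responses: Record<string, string>, fallback = "[no canned reply]"): ChatFn {
  return async (prompt) => responses[prompt] ?? fallback;
}

// Code under test accepts a ChatFn, so the fake drops in without changes.
async function summarize(chat: ChatFn, text: string): Promise<string> {
  return chat(`Summarize: ${text}`);
}

async function testSummarize(): Promise<void> {
  const chat = fakeLlm({ "Summarize: hello world": "hi" });
  const out = await summarize(chat, "hello world");
  if (out !== "hi") throw new Error(`unexpected summary: ${out}`);
}
```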
Prompt injection is the new SQL injection. This post explores the security architecture that AI agents will need in 2026, and how NodeLLM provides the necessary guardrails.
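As one small taste of what a guardrail can look like, here is a pattern screen for untrusted input. To be clear, this is an illustrative sketch, not NodeLLM's implementation, and pattern matching alone is trivially bypassable; it only makes sense as one layer among several:

```ts
// Common injection phrasings; real attackers will vary wording, so treat a
// pass here as "not obviously hostile", never as "safe".
const SUSPICIOUS: RegExp[] = [
  /ignore (all |any )?(previous|prior) instructions/i,
  /reveal .*system prompt/i,
  /you are now/i,
];

// Screen untrusted text (user input, retrieved documents) before it reaches the model.
function screenUntrustedInput(text: string): { ok: true } | { ok: false; reason: string } {
  for (const pattern of SUSPICIOUS) {
    if (pattern.test(text)) return { ok: false, reason: `matched ${pattern.source}` };
  }
  return { ok: true };
}
```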
Infrastructure doesn't stop at OpenAI. Learn how to extend NodeLLM to support proprietary gateways like Oracle Cloud's Generative AI Service using a clean, zero-dependency, interface-driven approach.
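An interface-driven provider might look like the sketch below. The `Provider` interface is a hypothetical stand-in for NodeLLM's real extension point, and the endpoint and payload shapes are placeholders, not Oracle's actual Generative AI API:

```ts
// Hypothetical extension point; NodeLLM's real provider interface may differ.
interface Provider {
  name: string;
  chat(model: string, prompt: string): Promise<string>;
}

// Zero-dependency implementation: nothing beyond the built-in fetch of Node 18+.
class CustomGatewayProvider implements Provider {
  name = "custom-gateway";

  constructor(private baseUrl: string, private apiKey: string) {}

  async chat(model: string, prompt: string): Promise<string> {
    const res = await fetch(`${this.baseUrl}/chat`, {
      method: "POST",
      headers: {
        "content-type": "application/json",
        authorization: `Bearer ${this.apiKey}`,
      },
      body: JSON.stringify({ model, prompt }),
    });
    if (!res.ok) throw new Error(`gateway error: HTTP ${res.status}`);
    const data = (await res.json()) as { text: string };
    return data.text;
  }
}
```

Because the gateway-specific details live entirely behind the interface, the rest of the application never learns which vendor is answering.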