displayName: Supports local models
description: >-
  The agent can route inference to a local model server (Ollama,
  llama.cpp, vLLM, LM Studio) instead of a hosted provider. Pairs
  with the Local Model Source provider sub-component (Layer 2).
appliesToNodeKinds:
  - AgentVersion
  - AgentCoreImpl
category: local-inference
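
For context, here is a minimal sketch of what routing inference to one of these local servers can look like. All four servers named above expose an OpenAI-compatible HTTP API; the base URL, default ports, model tag, and the `localChat` helper below are illustrative assumptions, not part of this component's definition.

```typescript
// Minimal sketch: send a chat completion to a local, OpenAI-compatible
// server instead of a hosted provider. Default ports (all configurable):
// Ollama :11434, llama.cpp server :8080, vLLM :8000, LM Studio :1234.
const LOCAL_BASE_URL =
  process.env.LOCAL_MODEL_URL ?? "http://localhost:11434/v1";

async function localChat(prompt: string): Promise<string> {
  const res = await fetch(`${LOCAL_BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // hypothetical local model tag
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`local model server returned ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Because the request shape is identical to a hosted provider's, an agent that already speaks the OpenAI chat-completions protocol can switch to local inference by changing only the base URL and model name.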