Agentic AI Atlas · ONNX Runtime

Tool overview

tool:onnx-runtime

Reference · live

ONNX Runtime overview

Cross-platform, high-performance ML inference engine for ONNX models. Runs on CPU, CUDA, DirectML, CoreML, ROCm, and other execution providers; supports quantization and graph optimisations. Used for deploying models trained in PyTorch, TensorFlow, or scikit-learn after export to the open ONNX interchange format.
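
As a minimal sketch of the workflow the description refers to (running an exported model through an execution provider with graph optimisations enabled), the Python snippet below uses the onnxruntime package. The model path, input shape, and provider order are illustrative assumptions, not part of this record.

import numpy as np
import onnxruntime as ort

# Hypothetical model file, previously exported from PyTorch, TensorFlow,
# or scikit-learn into the ONNX interchange format.
MODEL_PATH = "model.onnx"

# Enable the full set of graph optimisations before the session is built.
options = ort.SessionOptions()
options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL

# Execution providers are tried in order; if CUDA is not available on this
# machine, ONNX Runtime falls back to the CPU provider listed next.
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]

session = ort.InferenceSession(MODEL_PATH, sess_options=options, providers=providers)

# Assumed input shape for an image model; adjust to the exported graph.
input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: x})
print(outputs[0].shape)

The provider list ends with CPUExecutionProvider so the session still runs when no accelerator is present. The quantization mentioned in the description is exposed separately through the onnxruntime.quantization utilities (for example quantize_dynamic), which rewrite an ONNX model file with lower-precision weights.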

Tool · Outgoing edges: 8 · Incoming edges: 4

Attributes

  • displayName: ONNX Runtime
  • homepageUrl: https://onnxruntime.ai
  • kind: other

Outgoing edges

alternative_to (3)
  • tool:vllm · Tool · vLLM
  • tool:tensorrt · Tool · TensorRT
  • tool:triton-inference · Tool · Triton Inference Server
belongs_to_language (1)
  • language:cpp · Language · C++
tool_used_by (2)
  • skill-area:model-serving · SkillArea · Model Serving
  • skill-area:model-optimisation · SkillArea · Model Optimisation
used_for (2)
  • skill-area:model-serving · SkillArea · Model Serving
  • skill-area:ai-evaluation · SkillArea · AI Evaluation

Incoming edges

alternative_to (3)
  • tool:vllm · Tool · vLLM
  • tool:tensorrt · Tool · TensorRT
  • tool:triton-inference · Tool · Triton Inference Server
uses_tool (1)
  • specialization:ml-inference-serving · Specialization · ML Inference Serving

Related pages

No related wiki pages for this record.
