OpenInference
OpenInference is a set of conventions and plugins that is complementary to OpenTelemetry to
enable tracing of AI applications. OpenInference is natively supported
by arize-phoenix, but can be used with any OpenTelemetry-compatible backend as
well.
Specification
The OpenInference specification is edited in markdown files found in the spec directory. It's designed to
provide insight into the invocation of LLMs and the surrounding application context such as retrieval from vector stores
and the usage of external tools such as search engines or APIs. The specification is transport and file-format agnostic,
and is intended to be used in conjunction with other specifications such as JSON, ProtoBuf, and DataFrames.
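As a rough illustration of the conventions, spans carry flat, dot-delimited attributes describing an LLM invocation. The sketch below serializes a hypothetical LLM span as JSON; the attribute keys follow the naming style of the semantic-conventions packages, but the spec directory is the authoritative source for the exact names.

```python
import json

# Illustrative OpenInference-style attributes for a single LLM call.
# Treat the keys as examples of the flat naming convention, not as an
# exhaustive or authoritative list.
llm_span_attributes = {
    "openinference.span.kind": "LLM",
    "llm.model_name": "gpt-4o-mini",
    "input.value": "What is OpenInference?",
    "output.value": "A set of tracing conventions for AI applications.",
    "llm.token_count.prompt": 12,
    "llm.token_count.completion": 9,
}

# Because the spec is transport and file-format agnostic, the same flat
# attributes can be carried as JSON, protobuf fields, or DataFrame columns.
print(json.dumps(llm_span_attributes, indent=2))
```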
Instrumentation
OpenInference provides a set of instrumentations for popular machine learning SDKs and frameworks in a variety of
languages.
Python
Libraries
Package | Description |
---|---|
openinference-semantic-conventions | Semantic conventions for tracing of LLM Apps. |
openinference-instrumentation | Reusable utilities, decorators, configurations, and helpers for instrumentation. |
openinference-instrumentation-agno | OpenInference Instrumentation for Agno Agents. |
openinference-instrumentation-openai | OpenInference Instrumentation for OpenAI SDK. |
openinference-instrumentation-openai-agents | OpenInference Instrumentation for OpenAI Agents SDK. |
openinference-instrumentation-llama-index | OpenInference Instrumentation for LlamaIndex. |
openinference-instrumentation-dspy | OpenInference Instrumentation for DSPy. |
openinference-instrumentation-bedrock | OpenInference Instrumentation for AWS Bedrock. |
openinference-instrumentation-langchain | OpenInference Instrumentation for LangChain. |
openinference-instrumentation-mcp | OpenInference Instrumentation for MCP. |
openinference-instrumentation-mistralai | OpenInference Instrumentation for MistralAI. |
openinference-instrumentation-portkey | OpenInference Instrumentation for Portkey. |
openinference-instrumentation-guardrails | OpenInference Instrumentation for Guardrails. |
openinference-instrumentation-vertexai | OpenInference Instrumentation for VertexAI. |
openinference-instrumentation-crewai | OpenInference Instrumentation for CrewAI. |
openinference-instrumentation-haystack | OpenInference Instrumentation for Haystack. |
openinference-instrumentation-litellm | OpenInference Instrumentation for liteLLM. |
openinference-instrumentation-groq | OpenInference Instrumentation for Groq. |
openinference-instrumentation-instructor | OpenInference Instrumentation for Instructor. |
openinference-instrumentation-anthropic | OpenInference Instrumentation for Anthropic. |
openinference-instrumentation-beeai | OpenInference Instrumentation for BeeAI. |
openinference-instrumentation-google-genai | OpenInference Instrumentation for Google GenAI. |
openinference-instrumentation-google-adk | OpenInference Instrumentation for Google ADK. |
openinference-instrumentation-autogen-agentchat | OpenInference Instrumentation for Microsoft Autogen AgentChat. |
openinference-instrumentation-pydantic-ai | OpenInference Instrumentation for PydanticAI. |
openinference-instrumentation-smolagents | OpenInference Instrumentation for smolagents. |
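Conceptually, each instrumentation package wraps an SDK's calls so that every invocation is recorded as a span carrying OpenInference-style attributes. The toy sketch below imitates that wrapping pattern with a plain context manager and an in-memory span list; it is not the real API of any package above, only an illustration of what the instrumentors automate on top of OpenTelemetry.

```python
from contextlib import contextmanager

# In-memory stand-in for an OpenTelemetry span exporter.
recorded_spans = []

@contextmanager
def llm_span(model_name, prompt):
    # Build a span with OpenInference-style flat attributes and record it
    # when the wrapped call finishes (success or failure).
    span = {
        "openinference.span.kind": "LLM",
        "llm.model_name": model_name,
        "input.value": prompt,
    }
    try:
        yield span
    finally:
        recorded_spans.append(span)

def fake_completion(model_name, prompt):
    # Stand-in for an instrumented SDK call (e.g. a chat-completions request).
    with llm_span(model_name, prompt) as span:
        output = f"echo: {prompt}"
        span["output.value"] = output
        return output

result = fake_completion("demo-model", "hello")
print(result)  # echo: hello
```

A real instrumentor applies this wrapping automatically via monkey-patching or SDK hooks, so application code does not change.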
Span Processors
Normalize and convert data across other instrumentation libraries by adding span processors that unify data.
Package | Description |
---|---|
openinference-instrumentation-openlit | OpenInference Span Processor for OpenLIT traces. |
openinference-instrumentation-openllmetry | OpenInference Span Processor for OpenLLMetry (Traceloop) traces. |
Examples
Name | Description | Complexity Level |
---|---|---|
Agno | Agno agent examples | Beginner |
OpenAI SDK | OpenAI Python SDK, including chat completions and embeddings | Beginner |
MistralAI SDK | MistralAI Python SDK | Beginner |
VertexAI SDK | VertexAI Python SDK | Beginner |
LlamaIndex | LlamaIndex query engines | Beginner |
DSPy | DSPy primitives and custom RAG modules | Beginner |
Boto3 Bedrock Client | Boto3 Bedrock client | Beginner |
LangChain | LangChain primitives and simple chains | Beginner |
LiteLLM | A lightweight LiteLLM framework | Beginner |
LiteLLM Proxy | LiteLLM Proxy to log OpenAI, Azure, Vertex, Bedrock | Beginner |
Groq | Groq and AsyncGroq chat completions | Beginner |
Anthropic | Anthropic Messages client | Beginner |
BeeAI | Agentic instrumentation in the BeeAI framework | Beginner |
LlamaIndex + Next.js Chatbot | A fully functional chatbot using Next.js and a LlamaIndex FastAPI backend | Intermediate |
LangServe | A LangChain application deployed with LangServe using custom metadata on a per-request basis | Intermediate |
DSPy | A DSPy RAG application using FastAPI, Weaviate, and Cohere | Intermediate |
Haystack | A Haystack QA RAG application | Intermediate |
OpenAI Agents | OpenAI Agents with handoffs | Intermediate |
Autogen AgentChat | Microsoft Autogen Assistant Agent and Team Chat | Intermediate |
PydanticAI | PydanticAI agent examples | Intermediate |
JavaScript
Libraries
Package | Description |
---|---|
@arizeai/openinference-semantic-conventions | Semantic conventions for tracing of LLM Apps. |
@arizeai/openinference-core | Reusable utilities, configuration, and helpers for instrumentation. |
@arizeai/openinference-instrumentation-bedrock | OpenInference Instrumentation for AWS Bedrock. |
@arizeai/openinference-instrumentation-bedrock-agent-runtime | OpenInference Instrumentation for AWS Bedrock Agent Runtime. |
@arizeai/openinference-instrumentation-beeai | OpenInference Instrumentation for BeeAI. |
@arizeai/openinference-instrumentation-langchain | OpenInference Instrumentation for LangChain.js. |
@arizeai/openinference-instrumentation-mcp | OpenInference Instrumentation for MCP. |
@arizeai/openinference-instrumentation-openai | OpenInference Instrumentation for OpenAI SDK. |
@arizeai/openinference-vercel | OpenInference Support for Vercel AI SDK. |
@arizeai/openinference-mastra | OpenInference Support for Mastra. |
Examples
Name | Description | Complexity Level |
---|---|---|
OpenAI SDK | OpenAI Node.js client | Beginner |
BeeAI framework - ReAct agent | Agentic ReActAgent instrumentation in the BeeAI framework | Beginner
BeeAI framework - ToolCalling agent | Agentic ToolCallingAgent instrumentation in the BeeAI framework | Beginner
BeeAI framework - LLM | See how to run instrumentation only for the specific LLM module part in the BeeAI framework | Beginner
LlamaIndex Express App | A fully functional LlamaIndex chatbot with a Next.js frontend and a LlamaIndex Express backend, instrumented using openinference-instrumentation-openai | Intermediate
LangChain OpenAI | A simple script to call OpenAI via LangChain, instrumented using openinference-instrumentation-langchain | Beginner
LangChain RAG Express App | A fully functional LangChain chatbot that uses RAG to answer user questions. It has a Next.js frontend and a LangChain Express backend, instrumented using openinference-instrumentation-langchain | Intermediate
Next.js + OpenAI | A Next.js 13 project bootstrapped with create-next-app that uses OpenAI to generate text | Beginner
Java
Libraries
Package | Description |
---|---|
openinference-semantic-conventions | Semantic conventions for tracing of LLM Apps. |
openinference-instrumentation | Base instrumentation utilities. |
openinference-instrumentation-langchain4j | OpenInference Instrumentation for LangChain4j. |
openinference-instrumentation-springAI | OpenInference Instrumentation for Spring AI. |
Examples
Name | Description | Complexity Level |
---|---|---|
LangChain4j Example | Simple example using LangChain4j with OpenAI | Beginner |
Spring AI Example | Spring AI example with OpenAI and tool calling | Beginner |
Supported Destinations
OpenInference supports the following destinations as span collectors.
- ✅ Arize-Phoenix
- ✅ Arize
- ✅ Any OTEL-compatible collector
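Since exporting goes through standard OTLP, collectors can typically be targeted with the usual OpenTelemetry environment variables. The endpoint below is a sketch that assumes a locally running Phoenix instance on its default port; substitute your own collector's URL and protocol.

```shell
# Point any OpenTelemetry SDK at a span collector via standard OTLP
# environment variables (assumed local Phoenix default shown here).
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:6006"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
```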
Community
Join our community to connect with thousands of machine learning practitioners and LLM observability enthusiasts!
- 🌍 Join our Slack community.
- 💡 Ask questions and provide feedback in the #phoenix-support channel.
- 🌟 Leave a star on our GitHub.
- 🐞 Report bugs with GitHub Issues.
- 𝕏 Follow us on X.
- 🗺️ Check out our roadmap to see where we're heading next.