Build with AI Identity
Everything you need to add identity, policy enforcement, and forensic logging to your AI agents.
On this page
Quickstart
Get up and running with AI Identity in under 5 minutes. Follow these four steps to register your first agent, obtain an API key, route traffic through the gateway, and explore forensic logs.
1. Create an Agent
Register a new AI agent with a unique identity. Every agent gets a cryptographic fingerprint that follows it across every request.
curl -X POST https://ai-identity-gateway.onrender.com/v1/agents \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "support-bot",
"description": "Customer support assistant",
"allowed_models": ["gpt-4o", "claude-sonnet-4-20250514", "gemini-2.5-pro"]
}'

2. Get an API Key
Generate a scoped API key for your agent. Keys can be restricted by model, rate limit, and expiration date.
curl -X POST https://ai-identity-gateway.onrender.com/v1/agents/ag_abc123/keys \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "production-key",
"scopes": ["chat:completions", "embeddings"],
"rate_limit": 100,
"expires_in_days": 90
}'

3. Point Your Gateway
Replace your LLM provider's base URL with the AI Identity gateway. All requests are transparently proxied, with identity headers injected.
# Instead of calling your LLM provider directly:
# POST https://api.openai.com/v1/chat/completions
# POST https://api.anthropic.com/v1/messages
# Point ALL providers to the AI Identity gateway:
curl -X POST https://ai-identity-gateway.onrender.com/v1/chat/completions \
-H "Authorization: Bearer aid_sk_your_agent_key" \
-H "Content-Type: application/json" \
-d '{
"model": "claude-sonnet-4-20250514",
"messages": [
{"role": "user", "content": "Hello, world!"}
]
}'
# Works with gpt-4o, claude-sonnet-4-20250514, gemini-2.5-pro, etc.

4. Explore Forensics
Every request through the gateway is logged with a tamper-proof audit trail. Query the forensics API to see what your agents have been doing.
curl https://ai-identity-gateway.onrender.com/v1/agents/ag_abc123/logs \
-H "Authorization: Bearer YOUR_API_KEY" \
-G \
-d "limit=10" \
-d "since=2025-01-01T00:00:00Z"

Core Concepts
AI Identity is built around four pillars that together provide a comprehensive governance layer for autonomous AI agents.
Identity
Every AI agent gets a unique, verifiable identity. Identities are cryptographically signed and can be validated by any downstream service. Think of it as a passport for your AI — it proves who the agent is, who created it, and what it is allowed to do.
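The docs above don't specify the signature scheme, but the idea of a verifiable identity can be sketched with a simple signed document: the gateway signs the agent's identity fields, and any downstream service holding the verification key can confirm they haven't been altered. The field names and the HMAC construction below are illustrative assumptions, not the actual AI Identity wire format.

```python
import hashlib
import hmac
import json

# Illustrative only: a shared signing key. A real deployment would more
# likely use an asymmetric scheme (e.g. Ed25519) so downstream services
# only need a public key.
SIGNING_KEY = b"shared-secret-from-gateway"

def sign_identity(identity: dict) -> str:
    # Canonicalize the identity document before signing so field order
    # doesn't change the signature.
    payload = json.dumps(identity, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify_identity(identity: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_identity(identity), signature)

identity = {"agent_id": "ag_abc123", "name": "support-bot", "created_by": "org_123"}
sig = sign_identity(identity)
print(verify_identity(identity, sig))                          # True
print(verify_identity({**identity, "name": "evil-bot"}, sig))  # False
```

Any tampering with the identity fields invalidates the signature, which is what lets downstream services trust the "passport" without calling back to the gateway.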
Policy
Define fine-grained access policies per agent. Control which models an agent can call, what endpoints it can hit, rate limits, token budgets, and time-of-day restrictions. Policies are evaluated at the gateway before every request is forwarded.
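As a rough sketch of the checks described above, a gateway-side policy evaluation might look like the following. The policy field names here are illustrative assumptions, not the actual schema accepted by POST /v1/policies.

```python
from datetime import datetime, timezone

# Hypothetical policy shape covering model access, rate limits, and
# time-of-day restrictions. Field names are assumptions for illustration.
policy = {
    "allowed_models": ["gpt-4o", "claude-sonnet-4-20250514"],
    "rate_limit_per_minute": 100,
    "allowed_hours_utc": range(8, 20),
}

def evaluate(policy, model, requests_last_minute, now=None):
    """Return (allowed, reason) for a single incoming request."""
    now = now or datetime.now(timezone.utc)
    if model not in policy["allowed_models"]:
        return False, "model not allowed"
    if requests_last_minute >= policy["rate_limit_per_minute"]:
        return False, "rate limit exceeded"
    if now.hour not in policy["allowed_hours_utc"]:
        return False, "outside allowed hours"
    return True, "ok"

noon = datetime(2025, 1, 1, 12, tzinfo=timezone.utc)
print(evaluate(policy, "gpt-4o", 3, noon))    # (True, 'ok')
print(evaluate(policy, "llama-3", 3, noon))   # (False, 'model not allowed')
```

The key property is that every check runs before the request is forwarded upstream, so a denied request never reaches the LLM provider.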
Compliance
Automated compliance monitoring aligned with EU AI Act high-risk obligations, SOC 2, NIST AI RMF, and internal governance frameworks. Supports Article 9 risk management, Article 12 record-keeping, Article 13 transparency, and Article 14 human oversight requirements. Generates audit-ready reports and helps accelerate EU database registration and conformity assessments.
Forensics
Full request/response logging with tamper-proof audit trails. Every LLM call is captured with metadata including latency, token usage, model, and agent identity. Logs are immutable and can be exported for external review or incident investigation.
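One common way to make an audit trail tamper-evident is a hash chain: each log entry's hash covers the previous entry's hash, so altering any record breaks every hash after it. The sketch below illustrates the idea; it is not AI Identity's actual log format.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash for the first entry

def entry_hash(entry: dict, prev_hash: str) -> str:
    # Each hash covers both the entry and the previous hash, chaining
    # the records together.
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(entries):
    prev, chain = GENESIS, []
    for e in entries:
        prev = entry_hash(e, prev)
        chain.append((e, prev))
    return chain

def verify_chain(chain) -> bool:
    prev = GENESIS
    for entry, stored_hash in chain:
        if entry_hash(entry, prev) != stored_hash:
            return False
        prev = stored_hash
    return True

chain = build_chain([
    {"model": "gpt-4o", "tokens": 120},
    {"model": "gemini-2.5-pro", "tokens": 80},
])
print(verify_chain(chain))   # True
chain[0][0]["tokens"] = 999  # tamper with an earlier entry
print(verify_chain(chain))   # False
```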
Gateway Architecture
The AI Identity Gateway sits between your application and your LLM providers. It acts as a transparent proxy that injects identity headers, enforces policies, and logs every interaction — all without changing your existing code.
Your App → AI Identity Gateway → LLM Provider (OpenAI, Anthropic, Gemini, Cohere, Mistral, etc.)
                  │
                  ├─ Validate agent identity
                  ├─ Check policy (rate limit, model access, budget)
                  ├─ Inject audit headers (X-Agent-ID, X-Request-ID)
                  ├─ Log request metadata
                  └─ Forward to upstream provider

The gateway works with any LLM provider — OpenAI, Anthropic, Google Gemini, Cohere, Mistral, or any custom REST API. Simply change your base URL and use your AI Identity agent key instead of a provider key. The gateway routes to the correct upstream provider based on the model specified in the request.
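Model-based routing can be pictured as a simple prefix lookup from model name to upstream base URL. The prefixes below are a plausible sketch; the gateway's real routing table is internal and may differ.

```python
# Illustrative routing table: model-name prefix -> upstream base URL.
# These mappings are assumptions for the sketch, not the gateway's config.
UPSTREAMS = {
    "gpt-": "https://api.openai.com/v1",
    "claude-": "https://api.anthropic.com/v1",
    "gemini-": "https://generativelanguage.googleapis.com/v1beta",
}

def route(model: str) -> str:
    for prefix, base_url in UPSTREAMS.items():
        if model.startswith(prefix):
            return base_url
    raise ValueError(f"no upstream configured for model {model!r}")

print(route("gpt-4o"))                      # OpenAI upstream
print(route("claude-sonnet-4-20250514"))    # Anthropic upstream
```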
from openai import OpenAI
# Point the OpenAI client at the AI Identity gateway
client = OpenAI(
base_url="https://ai-identity-gateway.onrender.com/v1",
api_key="aid_sk_your_agent_key",
)
response = client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": "Summarize today's news"}],
)
print(response.choices[0].message.content)

from openai import OpenAI
# Same gateway, different model — Anthropic Claude
client = OpenAI(
base_url="https://ai-identity-gateway.onrender.com/v1",
api_key="aid_sk_your_agent_key",
)
response = client.chat.completions.create(
model="claude-sonnet-4-20250514",
messages=[{"role": "user", "content": "Explain quantum computing"}],
)
print(response.choices[0].message.content)

import OpenAI from "openai";
const client = new OpenAI({
baseURL: "https://ai-identity-gateway.onrender.com/v1",
apiKey: "aid_sk_your_agent_key",
});
// Works with any supported model — OpenAI, Anthropic, Gemini, etc.
const response = await client.chat.completions.create({
model: "gemini-2.5-pro", // or "gpt-4o", "claude-sonnet-4-20250514", etc.
messages: [{ role: "user", content: "Summarize today's news" }],
});
console.log(response.choices[0].message.content);

Integrations
AI Identity works with all major agent frameworks and LLM providers — OpenAI, Anthropic, Google Gemini, Cohere, Mistral, and more. Because the gateway uses the OpenAI-compatible API format, integration usually requires changing just one line — the base URL.
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
model="gpt-4o",
base_url="https://ai-identity-gateway.onrender.com/v1",
api_key="aid_sk_your_agent_key",
)
response = llm.invoke("What is AI Identity?")
print(response.content)

import os
os.environ["OPENAI_API_BASE"] = "https://ai-identity-gateway.onrender.com/v1"
os.environ["OPENAI_API_KEY"] = "aid_sk_your_agent_key"
from crewai import Agent, Task, Crew
researcher = Agent(
role="Researcher",
goal="Find the latest AI governance news",
backstory="You are an expert AI policy analyst.",
llm="gpt-4o",
)
task = Task(
description="Summarize recent developments in AI regulation",
expected_output="A concise briefing document",
agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
print(result)

from autogen import AssistantAgent, UserProxyAgent
config_list = [{
"model": "gpt-4o",
"base_url": "https://ai-identity-gateway.onrender.com/v1",
"api_key": "aid_sk_your_agent_key",
}]
assistant = AssistantAgent(
name="assistant",
llm_config={"config_list": config_list},
)
user_proxy = UserProxyAgent(
name="user",
human_input_mode="NEVER",
code_execution_config=False,
)
user_proxy.initiate_chat(
assistant, message="Explain how AI identity governance works."
)

# List your agents
curl https://ai-identity-gateway.onrender.com/v1/agents \
-H "Authorization: Bearer YOUR_API_KEY"
# Create a chat completion through the gateway (any provider)
curl -X POST https://ai-identity-gateway.onrender.com/v1/chat/completions \
-H "Authorization: Bearer aid_sk_your_agent_key" \
-H "Content-Type: application/json" \
-d '{
"model": "gemini-2.5-pro",
"messages": [{"role": "user", "content": "Hello!"}]
}'
# Check agent usage and forensic logs
curl https://ai-identity-gateway.onrender.com/v1/agents/ag_abc123/usage \
-H "Authorization: Bearer YOUR_API_KEY"

Authentication
AI Identity uses API key authentication. There are two types of keys: organization keys (for managing agents and viewing logs) and agent keys (for making LLM requests through the gateway).
Organization Keys
Prefixed with aid_org_. Used for management operations: creating agents, generating agent keys, viewing logs, and configuring policies. Keep these secure and never expose them in client-side code.
Agent Keys
Prefixed with aid_sk_. Scoped to a single agent. Used for making LLM requests through the gateway. Each key inherits the policies attached to its agent. Rotate these regularly.
# Organization-level operations
curl https://ai-identity-gateway.onrender.com/v1/agents \
-H "Authorization: Bearer aid_org_your_org_key"
# Agent-level LLM requests
curl -X POST https://ai-identity-gateway.onrender.com/v1/chat/completions \
-H "Authorization: Bearer aid_sk_your_agent_key" \
-H "Content-Type: application/json" \
-d '{"model": "claude-sonnet-4-20250514", "messages": [{"role": "user", "content": "Hi"}]}'

API Reference
Full interactive API documentation is available in two formats. Both are auto-generated from the OpenAPI spec and stay in sync with the latest deployed version.
ReDoc
Clean, readable API reference with request/response schemas, example payloads, and authentication details. Best for reading and understanding the API.
Swagger UI
Interactive API explorer where you can try endpoints directly from the browser. Best for testing and debugging.
The API follows RESTful conventions. All endpoints accept and return JSON, and timestamps are ISO 8601 in UTC. Pagination is cursor-based, using the limit and after query parameters.
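Cursor pagination means requesting pages of up to `limit` items and passing the last item's ID as `after` on the next request until a short page comes back. The sketch below shows that loop; `fetch_page` stands in for a real HTTP call to GET /v1/agents/:id/logs, stubbed here with in-memory data so the loop is runnable offline.

```python
# Fake server-side log store standing in for the forensics API.
LOGS = [{"id": f"log_{i}"} for i in range(7)]

def fetch_page(limit, after=None):
    # Stub for GET /v1/agents/:id/logs?limit=...&after=...
    start = next((i + 1 for i, e in enumerate(LOGS) if e["id"] == after), 0)
    return LOGS[start:start + limit]

def iter_logs(limit=3):
    """Yield every log entry by following the `after` cursor."""
    after = None
    while True:
        page = fetch_page(limit, after)
        yield from page
        if len(page) < limit:
            return  # a short page means we've reached the end
        after = page[-1]["id"]

print([e["id"] for e in iter_logs()])  # all 7 ids, in order
```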
# Agents
GET /v1/agents # List all agents
POST /v1/agents # Create a new agent
GET /v1/agents/:id # Get agent details
PATCH /v1/agents/:id # Update an agent
DELETE /v1/agents/:id # Delete an agent
# Keys
POST /v1/agents/:id/keys # Create an agent key
GET /v1/agents/:id/keys # List agent keys
DELETE /v1/agents/:id/keys/:key_id # Revoke a key
# Policies
GET /v1/policies # List policies
POST /v1/policies # Create a policy
PATCH /v1/policies/:id # Update a policy
# Forensics / Logs
GET /v1/agents/:id/logs # Get agent request logs
GET /v1/agents/:id/usage # Get usage summary
# Gateway (OpenAI-compatible)
POST /v1/chat/completions # Proxied chat completion
POST /v1/embeddings # Proxied embeddings