
agentgateway

Added Apr 24, 2026 · Agent & Tooling · Open Source

Tags: Workflow Automation · Rust · Docker · Large Language Model · Multi-Agent System · Model Context Protocol · AI Agents · Agent & Tooling · Model & Inference Framework · Protocol, API & Integration · Security & Privacy

The first complete connectivity solution for Agentic AI — a unified gateway for LLM access, MCP tool federation, and A2A agent-to-agent communication.

agentgateway is a Linux Foundation open-source project positioned as the connectivity layer for AI Agent infrastructure, delivering five core capabilities:

LLM Gateway — Unified OpenAI-compatible API proxying to OpenAI, Anthropic, Gemini, Bedrock and other major providers, with built-in load balancing, failover, budget control, and prompt enrichment.
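As a sketch of what multi-provider configuration could look like, extending the config shape from the Quick Start below — the second provider entry and its field values are illustrative assumptions, not confirmed agentgateway syntax:

```yaml
# Illustrative only: entries beyond the Quick Start example are assumptions.
llm:
  models:
    - name: gpt-4o              # primary model route
      provider: openAI
      params:
        model: gpt-4o
        apiKey: "$OPENAI_API_KEY"
    - name: claude-sonnet       # hypothetical second provider for failover
      provider: anthropic
      params:
        model: claude-3-5-sonnet
        apiKey: "$ANTHROPIC_API_KEY"
```

How failover, load balancing, and budget control are attached to such routes is not shown in this page's materials; consult the project docs for the exact schema.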

MCP Gateway — Tool federation based on Model Context Protocol, supporting stdio / HTTP / SSE / Streamable HTTP transports, OpenAPI integration, and OAuth authentication to securely expose distributed MCP Servers to LLMs.
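A hypothetical sketch of federating a stdio MCP server through the gateway — the `mcp`/`targets`/`stdio` field names here are assumptions for illustration, not confirmed syntax:

```yaml
# Hypothetical sketch: exposing a local stdio MCP server to LLM clients.
mcp:
  targets:
    - name: everything
      stdio:
        cmd: npx
        args: ["@modelcontextprotocol/server-everything"]
```

HTTP, SSE, and Streamable HTTP targets would be declared analogously, per the transports listed above.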

A2A Gateway — Secure agent-to-agent communication based on Google's A2A protocol, with capability discovery, modality negotiation, and task collaboration.

Inference Routing — Kubernetes Inference Gateway extension that intelligently routes to self-hosted models based on GPU utilization, KV cache, LoRA adapters, and queue depth.

Security & Governance — Multi-layer guardrails (regex / OpenAI Moderation / Bedrock Guardrails / Model Armor / custom webhooks), CEL-based fine-grained RBAC policy engine, JWT / API Key / OAuth authentication, rate limiting, TLS, and full OpenTelemetry observability.
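A CEL-based RBAC rule of the kind described might look like the following; the policy structure and attribute names (`jwt.sub`, `mcp.tool.name`) are illustrative assumptions, not documented agentgateway schema:

```yaml
# Hypothetical CEL RBAC policy; attribute names are assumptions.
policies:
  - name: read-only-tools
    rule: 'jwt.sub == "analyst" && mcp.tool.name.startsWith("read_")'
```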

The architecture is a multi-language layered design: a Rust data plane, a Go control plane, and a TypeScript UI, with a three-tier configuration system (Static / Local / XDS). Resource relationships use a "child-points-to-parent" design to optimize incremental updates. The gateway supports both standalone and Kubernetes deployment modes, ships with a built-in Web UI Playground, and runs cross-platform on Linux, macOS, and Windows.

Installation & Quick Start

curl -sL https://agentgateway.dev/install | bash

Minimal LLM proxy example:

export OPENAI_API_KEY='<your-api-key>'
cat > config.yaml << 'EOF'
llm:
  models:
    - name: gpt-3.5-turbo
      provider: openAI
      params:
        model: gpt-3.5-turbo
        apiKey: "$OPENAI_API_KEY"
EOF
agentgateway -f config.yaml
# UI: http://localhost:15000/ui
# Proxy: http://localhost:4000
curl -s http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"Say hello in one sentence."}]}'
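The same request can be made from any OpenAI-compatible client. A minimal Python sketch using only the standard library, assuming the gateway from the Quick Start is running on its default proxy port:

```python
import json
import urllib.request

# Local proxy endpoint started by `agentgateway -f config.yaml` above.
GATEWAY_URL = "http://localhost:4000/v1/chat/completions"

def build_payload(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Assemble an OpenAI-compatible chat completion request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt: str) -> str:
    """POST the prompt to the gateway and return the first choice's text."""
    req = urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Usage (requires the gateway to be running):
# print(chat("Say hello in one sentence."))
```

Because the gateway exposes the standard Chat Completions shape, official OpenAI SDKs pointed at the proxy's base URL should work the same way.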

Unconfirmed Information

  • Project creation date / first commit: not explicitly stated in README or docs
  • Specific A2A protocol version supported is not documented
  • K8s Inference Gateway integration described as "extensions", details TBD
  • MCP protocol version number not confirmed
  • Performance benchmark data (latency, throughput) not found in public materials
