A Kubernetes-based AI agent runtime platform released by McKinsey that codifies patterns for deploying, orchestrating, and evaluating agentic resources via CRDs, providing production-grade infrastructure for multi-agent systems. Currently in Technical Preview.
## Overview
ARK is an open-source AI Agent runtime platform released by McKinsey & Company (QuantumBlack), built on Kubernetes to provide production-grade infrastructure for agentic systems. The project is currently in Technical Preview & RFC stage under Apache License 2.0.
## Core Capabilities

### CRD Resource Management
Declarative management of agent resources via 11 Kubernetes CRDs:
| Resource Type | Function |
|---|---|
| Models | Configure and connect AI model providers |
| Agents | Create autonomous AI agents with specific capabilities and tools |
| Teams | Orchestrate multi-agent collaboration with coordination strategies |
| Queries | Execute prompts and manage conversations with agents/teams |
| Tools | Define custom tools and MCP tool references |
| MCPServers | Configure Model Context Protocol servers |
| Memory | Persistent storage for agent conversations and state |
| Evaluator | Services for evaluating and scoring agent performance |
| Evaluation | Define evaluation configurations and results |
| ExecutionEngine | Specialized runtimes for different agent frameworks |
| A2AServer | Agent-to-Agent communication services |
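Because these are ordinary Kubernetes custom resources, they can be declared as manifests and applied with `kubectl`. As an illustrative sketch only: the API group/version and field names below are assumptions inferred from the Python SDK example later in this document, not a confirmed schema.

```yaml
# Illustrative only: apiVersion and field names are assumed, not official schema.
apiVersion: ark.mckinsey.com/v1alpha1
kind: Agent
metadata:
  name: my-agent
spec:
  prompt: You are a helpful assistant.
  modelRef:
    name: gpt-4
```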
### Platform Features
- Provider-agnostic agent operations
- Standardized deployment patterns
- Transparent orchestration
- Built-in evaluation capabilities
- Extensible tool integration (MCP protocol support)
- Multi-agent coordination
- Custom headers support (request tracing, custom metadata)
### Built-in Services
- ark-api: REST API + A2A Gateway
- ark-dashboard: Web management interface
- ark-mcp: MCP server integration service
- ark-evaluator: Comprehensive evaluation and scoring service
- ark-broker: Memory storage with streaming support
- localhost-gateway: Local development gateway
- langchain-execution-engine: LangChain agent execution engine
## Installation & Usage

### Prerequisites
- Kubernetes cluster
- Node.js
- Helm
- kubectl
- Minimum 2 CPU and 4 GiB RAM
### Quick Start

```bash
# Install the CLI
npm install -g @agents-at-scale/ark

# Install ARK to the cluster
ark install

# Configure a default model
ark models create default

# Launch the dashboard
ark dashboard
```
### Python SDK

```bash
pip install ark-sdk
```

```python
from ark_sdk import ARKClientV1alpha1
from ark_sdk.models.agent_v1alpha1 import AgentV1alpha1, AgentV1alpha1Spec

client = ARKClientV1alpha1(namespace="default")

agent = AgentV1alpha1(
    metadata={"name": "my-agent"},
    spec=AgentV1alpha1Spec(prompt="Hello", modelRef={"name": "gpt-4"}),
)
created = client.agents.create(agent)
```
### Custom Execution Engine Development

```python
from typing import List

from ark_sdk import BaseExecutor, ExecutorApp, ExecutionEngineRequest, Message

class MyExecutor(BaseExecutor):
    def __init__(self):
        super().__init__("MyEngine")

    async def execute_agent(self, request: ExecutionEngineRequest) -> List[Message]:
        return [Message(role="assistant", content="Hello!")]

executor = MyExecutor()
app = ExecutorApp(executor, "MyEngine")
app.run(host="0.0.0.0", port=8000)
```
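The execution-engine contract (a request carrying conversation messages in, a list of assistant messages out) can be exercised without a cluster or the SDK installed. The sketch below mocks the request/response types with plain dataclasses; these stand-in class and field names are assumptions for illustration, not the real `ark_sdk` types.

```python
import asyncio
from dataclasses import dataclass, field
from typing import List

# Illustrative stand-ins for ark_sdk's request/response types;
# the real classes live in ark_sdk and these field names are assumed.
@dataclass
class Message:
    role: str
    content: str

@dataclass
class ExecutionEngineRequest:
    agent_name: str
    messages: List[Message] = field(default_factory=list)

async def execute_agent(request: ExecutionEngineRequest) -> List[Message]:
    # Echo-style engine: respond to the last user message.
    last = request.messages[-1].content if request.messages else ""
    return [Message(role="assistant", content=f"Echo: {last}")]

request = ExecutionEngineRequest("my-agent", [Message("user", "Hello")])
reply = asyncio.run(execute_agent(request))
print(reply[0].content)  # Echo: Hello
```

In the real engine, `execute_agent` would call the agent framework of your choice and `ExecutorApp` would expose it over HTTP.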
## Architecture

```
┌─────────────────────────────────────────────────────┐
│ ark-dashboard (Web UI)                              │
├─────────────────────────────────────────────────────┤
│ ark-api (REST API + A2A Gateway)                    │
├─────────────────────────────────────────────────────┤
│ Kubernetes Operator                                 │
│ (Go-based Controller + CRDs)                        │
├─────────────────────────────────────────────────────┤
│ Execution Engines │ MCP Servers │ Evaluators        │
├─────────────────────────────────────────────────────┤
│ ark-broker (Memory)                                 │
└─────────────────────────────────────────────────────┘
```
## Tech Stack

- Core Controller: Go (Kubernetes Operator, kubebuilder/controller-runtime)
- CLI Tool: Node.js/TypeScript (`@agents-at-scale/ark`)
- Python SDK: Python 3.9+ (`ark-sdk`)
- Execution Engine: FastAPI + uvicorn
- Package Management: Helm Charts
- Development Tools: DevSpace (hot reload)
## Use Cases
- Production-grade Agent deployment
- Multi-agent system orchestration
- Agent performance evaluation
- Tool integration via MCP protocol
- Observability (Langfuse integration)
- Enterprise AI infrastructure
## Important Notice

The project is currently in the Technical Preview stage: it may include incomplete or experimental features, and breaking changes may be made based on community feedback. See the official Disclaimer for details.