A microkernel-based, plugin-driven AI application development platform in Go, with unified LLM integration, Agent, RAG, MCP protocol support, and multi-tenant management.
Weave is an AI application development platform built in Go, centered on a microkernel and layered architecture. Its core design enables dynamic extension of business capabilities through a hot-pluggable plugin system: all plugins implement a unified interface and support runtime loading/unloading, independent namespaces, declarative route registration, and automatic documentation generation.
On the AI capability layer, Weave uses CloudWeGo Eino to unify LLM abstraction, supporting dynamic switching across OpenAI, Ollama, and ModelScope with streaming responses. It includes a complete AIChat service (with web frontend), an Agent framework (tool calling, task planning, memory management, workflow automation), and a RAG service (RedisSearch-based vector retrieval, multi-format document parsing, hybrid search, knowledge base versioning). It also integrates the MCP tool protocol via mark3labs/mcp-go for standardized tool access.
Platform infrastructure covers JWT dual-token authentication, CSRF protection, token bucket rate limiting, Zap structured logging, Prometheus + Grafana observability, multi-tenant isolation, and team role management. It supports MySQL/PostgreSQL/SQLite backends and offers three deployment modes: local development, single Docker instance, and Docker Compose multi-instance with Nginx load balancing.
Layered Architecture
```
Interface Layer (Controller + Router)
        ↓
Business Layer (Core Logic + Plugin System)
        ↓
Data Layer (Model + DB)
        ↓
Infrastructure Layer (Logging / Config / Security / Monitoring)
```
Core Capability Matrix
- Microkernel & Plugin System: Runtime environment, plugin lifecycle management (register/load/unload/dependency resolution/conflict detection), hot-pluggable, independent namespaces and route prefixes, unified Plugin interface with `GetRoutes()` declarative route registration, plugin development toolchain (scaffold + templates + examples)
- Unified LLM Service: OpenAI / Ollama / ModelScope multi-platform dynamic switching, streaming responses, context optimization, summary extraction, model caching
- AIChat Service: Multi-platform/multi-model/multimodal, streaming responses, conversation management, standalone web frontend
- Agent Service: Tool calling, task planning, short/long-term memory management, complex multi-turn dialogue, workflow automation, personalization
- RAG Service: RedisSearch vector retrieval, text/PDF/Markdown/Word multi-format documents, hybrid search (vector + keyword), custom Embedding models, knowledge base versioning
- MCP Protocol: MCP tool protocol integration via `cloudwego/eino-ext/components/tool/mcp` and `mark3labs/mcp-go`
- Security: JWT dual-token auth, CSRF protection, token bucket rate limiting, password hash storage, login history audit, unified error handling, optional HTTPS
- Observability: Zap structured logging, `/health` endpoint, Prometheus metrics collection, Grafana dashboards and alert rules
- Multi-tenant & Team Management: TenantID field-level isolation, team CRUD, owner/admin/member roles, ownership transfer
API Overview
- Base path: `/api/v1`
- Authentication: JWT Bearer Token
- CSRF: Non-GET/HEAD/OPTIONS/TRACE requests require `X-CSRF-Token` header + `XSRF-TOKEN` cookie
- Main endpoint groups: Auth (`/auth`), Users (`/api/v1/users`), Tools (`/api/v1/tools`), Plugins (`/api/v1/plugins`), Teams (`/api/v1/teams`), Audit Logs (`/api/v1/audit/logs`)
Quick Start
```bash
# Local development
git clone https://github.com/liaotxcn/Weave.git
cd Weave
make install && make run

# Docker single instance
docker build -t weave .
docker run -p 8081:8081 weave

# Docker Compose multi-instance (production recommended)
docker-compose up -d
```
Unconfirmed Information: No license identifier found on the repository page; no standalone documentation site discovered; the web/ frontend tech stack is unspecified; the API version is listed as 1.0.0, but no corresponding Git tags or releases were found.