
OpenJarvis

Added Apr 24, 2026
Category: Agent & Tooling
Open Source
Tags: Python · Workflow Automation · Large Language Models · Model Context Protocol · AI Agents · Agent Framework · CLI · Agent & Tooling · Other · Model Training & Inference · Protocol, API & Integration · Security & Privacy

A local-first personal AI agent framework from Stanford that enables offline agent orchestration, skill import, and trace-driven continuous learning through five composable primitives, supporting 10+ inference backends and four interaction modes.

OpenJarvis is a local-first personal AI agent framework developed by Stanford Hazy Research and SAIL, part of the Intelligence Per Watt research program. Its core design revolves around five composable primitives: Intelligence (model management), Agents (7+ agent types), Tools (search/calc/code/retrieval/MCP), Engine (hardware-aware inference runtime), and Learning (trace-driven automatic improvement with M1→M2→M3 distillation pipeline).
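The five primitives can be pictured as layers that compose into an agent, with traces feeding the Learning layer. A minimal sketch of that composition; the class and method names below are illustrative assumptions, not OpenJarvis's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of composable layers; names and methods are
# assumptions and do not reflect OpenJarvis's real interfaces.

@dataclass
class Intelligence:
    """Model management layer: holds the logical model to use."""
    model: str

@dataclass
class Tool:
    name: str
    run: Callable[[str], str]

@dataclass
class Engine:
    """Inference runtime; here a stub that echoes the prompt."""
    def generate(self, model: str, prompt: str) -> str:
        return f"[{model}] {prompt}"

@dataclass
class Agent:
    intelligence: Intelligence
    engine: Engine
    tools: dict[str, Tool] = field(default_factory=dict)
    traces: list[str] = field(default_factory=list)  # consumed by Learning

    def act(self, task: str) -> str:
        out = self.engine.generate(self.intelligence.model, task)
        self.traces.append(out)  # trace-driven learning would read these
        return out

agent = Agent(Intelligence("local-llm"), Engine())
print(agent.act("summarize my inbox"))  # → [local-llm] summarize my inbox
```

In this picture, swapping the model, the runtime, or the toolset means replacing one layer while the others stay untouched.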

All core functionality works offline through a unified InferenceEngine interface that supports 10+ local inference backends, including Ollama, vLLM, SGLang, llama.cpp, and MLX. Cloud APIs (OpenAI/Anthropic/Gemini/OpenRouter/MiniMax) are optional extensions. Built-in telemetry tracks GPU power, token cost, and latency, treating energy metrics as first-class alongside accuracy.
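A unified interface over many backends is typically an abstract base class with one implementation per backend, selected by name. A minimal sketch, using stubs in place of real backend calls; the class and method names are assumptions, not OpenJarvis's documented API:

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of a unified inference interface; the real
# InferenceEngine in OpenJarvis may look quite different.

class InferenceEngine(ABC):
    @abstractmethod
    def complete(self, prompt: str, max_tokens: int = 256) -> str: ...

class OllamaEngine(InferenceEngine):
    """Stub standing in for an Ollama-backed engine."""
    def complete(self, prompt, max_tokens=256):
        return f"ollama:{prompt[:max_tokens]}"

class VLLMEngine(InferenceEngine):
    """Stub standing in for a vLLM-backed engine."""
    def complete(self, prompt, max_tokens=256):
        return f"vllm:{prompt[:max_tokens]}"

def make_engine(backend: str) -> InferenceEngine:
    """Pick a backend by name; callers only see InferenceEngine."""
    engines = {"ollama": OllamaEngine, "vllm": VLLMEngine}
    return engines[backend]()

print(make_engine("ollama").complete("hello"))  # → ollama:hello
```

Because agents depend only on the abstract interface, switching from a local backend to a cloud API is a configuration change rather than a code change.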

The skill system follows the agentskills.io open standard, enabling import from Hermes Agent, OpenClaw, or any GitHub repo, with trace-history-based optimization and benchmarking. Four interaction modes are available: Desktop App (Tauri), Browser App, CLI, and Python SDK. jarvis serve launches an OpenAI-compatible FastAPI streaming service. Agent types span the full spectrum from simple chat and ReAct reasoning to deep research, morning digests, scheduled monitoring, and persistent operations.
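Because jarvis serve exposes an OpenAI-compatible API, any standard OpenAI-style client can talk to it. A sketch of the chat-completions request shape; the localhost URL and model name here are assumptions for illustration, not documented defaults:

```python
import json

# Standard OpenAI-compatible chat-completions request body.
# The base URL below is a hypothetical local default.
BASE_URL = "http://localhost:8000/v1"

def chat_request(model: str, user_msg: str, stream: bool = True) -> str:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
        "stream": stream,  # server streams tokens as SSE chunks
    }
    return json.dumps(payload)

body = chat_request("local-llm", "What's on my calendar?")
# POST this body to f"{BASE_URL}/chat/completions" with any HTTP
# client, or point the official openai SDK's base_url at BASE_URL.
```

This compatibility means existing tooling built for the OpenAI API can target a fully local OpenJarvis instance without modification.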

The layered architecture uses the Registry Pattern, with each layer independently replaceable through clean interfaces. The codebase is roughly 80.7% Python, 9.9% Rust, and 8.0% TypeScript, with the Rust extensions bound via PyO3. The framework supports Docker, systemd, and launchd deployment, with GPU-accelerated container images (including ROCm 7.2), and includes an A2A module for agent-to-agent communication.
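The Registry Pattern is commonly implemented with decorator-based registration, so that an implementation can be looked up and swapped by name. A minimal generic sketch of the idea, not OpenJarvis's internals:

```python
# Minimal decorator-based Registry Pattern; illustrative only.

class Registry:
    def __init__(self, kind: str):
        self.kind = kind
        self._items: dict[str, type] = {}

    def register(self, name: str):
        """Decorator that records a class under a string key."""
        def decorator(cls):
            self._items[name] = cls
            return cls
        return decorator

    def build(self, name: str, **kwargs):
        """Instantiate the registered class by name."""
        return self._items[name](**kwargs)

ENGINES = Registry("engine")

@ENGINES.register("echo")
class EchoEngine:
    def generate(self, prompt: str) -> str:
        return prompt

# Swapping a layer becomes a one-line change to the lookup key.
engine = ENGINES.build("echo")
print(engine.generate("ping"))  # → ping
```

Each layer holding its own registry is what makes the layers independently replaceable: callers name what they want and never import concrete classes directly.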

Primary languages: Python, Rust, TypeScript. Platforms: macOS and Linux (with ROCm GPU support); Windows support is unconfirmed. Licensed under Apache 2.0. Actively developed (597+ commits, 26 contributors); the desktop app is at an early stage (v0.0.1-rc1).

