An Organization-as-Code framework that treats AI agents as individuals with names, personalities, memories, and schedules, supporting hierarchical multi-agent collaboration, neuroscience-inspired memory systems, and six-engine multi-model orchestration.
At its core, AnimaWorks treats each AI agent as an individual who makes autonomous decisions and collaborates with its team through message-based coordination.
For multi-agent collaboration, the framework provides a Supervisor → subordinate hierarchy in which each agent runs in an isolated OS process, communicating via IPC and restarting automatically on failure. Agents operate autonomously 24/7 through a Heartbeat cycle (observe → plan → reflect), Cron scheduled tasks, and TaskExec, which accepts parallel task submissions and resolves their dependencies automatically.
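The "parallel task submission with automatic dependency resolution" mentioned above can be modeled as topological batching (Kahn's algorithm): tasks with no unmet dependencies form a batch that can run in parallel, then the next batch unlocks. This is a minimal sketch under that assumption; `resolve_batches` and the task names are illustrative, not AnimaWorks APIs.

```python
from collections import defaultdict

def resolve_batches(deps: dict[str, set[str]]) -> list[set[str]]:
    """Group tasks into batches; all tasks in a batch can run in parallel."""
    indegree = {task: len(d) for task, d in deps.items()}
    dependents: dict[str, set[str]] = defaultdict(set)
    for task, d in deps.items():
        for dep in d:
            dependents[dep].add(task)

    batches: list[set[str]] = []
    ready = {task for task, n in indegree.items() if n == 0}
    while ready:
        batches.append(ready)
        next_ready: set[str] = set()
        for done in ready:
            for task in dependents[done]:
                indegree[task] -= 1
                if indegree[task] == 0:
                    next_ready.add(task)
        ready = next_ready

    if sum(len(b) for b in batches) != len(deps):
        raise ValueError("dependency cycle detected")
    return batches

# 'report' depends on 'fetch' and 'clean', which can run concurrently.
print(resolve_batches({"fetch": set(), "clean": set(), "report": {"fetch", "clean"}}))
# -> [{'fetch', 'clean'}, {'report'}]
```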
The memory system is neuroscience-inspired, adopting a "library model" rather than stuffing all content into the context window: on message arrival, relevant memories are activated automatically via six-channel parallel search. Nightly consolidation converts episodic memories into knowledge (analogous to sleep-dependent memory consolidation), complemented by a three-stage forgetting mechanism (mark → merge → archive). The storage backend combines ChromaDB vector search and BM25 keyword search, fused with RRF (Reciprocal Rank Fusion); a Neo4j graph database is available as an optional extension.
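RRF, the fusion step named above, merges ranked lists by summing `1 / (k + rank)` per document across lists; `k = 60` is the conventional constant from the original RRF paper. The sketch below shows the general technique, not AnimaWorks' actual implementation; `rrf_fuse` and the document IDs are illustrative.

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge ranked lists: score(doc) = sum over lists of 1 / (k + rank)."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc_a", "doc_b", "doc_c"]  # semantic (ChromaDB-style) order
bm25_hits = ["doc_b", "doc_d", "doc_a"]    # keyword (BM25-style) order
print(rrf_fuse([vector_hits, bm25_hits]))
# -> ['doc_b', 'doc_a', 'doc_d', 'doc_c']
```

Note that a document ranked well in both lists (`doc_b`) outranks one ranked first in only one list, which is exactly why RRF is a robust way to combine heterogeneous retrievers.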
The framework provides six execution engine modes: Claude Agent SDK (S), Codex CLI (C), Cursor Agent CLI (D), Gemini CLI (G), LiteLLM Autonomous (A), and LiteLLM Basic (B). Each agent can be independently configured with different models, and Heartbeat/Cron/Inbox can use a low-cost background_model.
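A per-agent configuration along these lines might look like the fragment below. This is a hypothetical sketch: the key names (`agents`, `engine`, `model`, `background_model`) and model identifiers are illustrative assumptions, not the actual AnimaWorks schema.

```yaml
# Hypothetical per-agent configuration; keys and values are illustrative.
agents:
  alice:
    engine: S                      # Claude Agent SDK
    model: main-model-name
    background_model: cheap-model  # used for Heartbeat/Cron/Inbox
  bob:
    engine: G                      # Gemini CLI
    model: another-model-name
```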
The web dashboard is built on FastAPI + static SPA, offering real-time chat (SSE), voice chat, conference mode, Slack-style shared channels, organization overview, activity feeds, memory browser, and a 3D workspace, with support for 17 languages.
External integrations cover Slack (bidirectional sync), Discord, Gmail, LINE, and AWS. Voice support includes VOICEVOX/SBV2/ElevenLabs, and image generation supports NovelAI/fal.ai+Flux/Meshy. Operational aids include Usage Governor for budget management, cross-organization activity auditing, AI brainstorming, and industry-specific role templates. A built-in LoCoMo benchmark adapter is included for memory system evaluation.
The backend uses FastAPI + Uvicorn + APScheduler + Pydantic v2, with Argon2 + PyNaCl for security and containerization via Docker + docker-compose. The current release is v0.7.0 (Beta) and requires Python ≥ 3.12.
Unconfirmed information: whether the PyPI package has actually been published (a Trusted Publishing workflow is configured, but the README contains no pip install command); the author's background is not externally verified; the 3D Workspace tech stack is unspecified; there are no public production deployment cases; and LoCoMo benchmark scores are not reported.