A self-hostable AI research workspace for grounded chat, paper study, 206+ scientific skills, and 13-stage deep research execution.
InnoClaw is a full-pipeline AI workspace purpose-built for scientific research. It runs on a monolithic Next.js architecture with Drizzle ORM and SQLite for lightweight data persistence, and supports local development as well as Docker and Kubernetes deployment.
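The Drizzle + SQLite layer could look like the following sketch. This is illustrative only: the table name, columns, and the folder-to-workspace mapping are assumptions, not InnoClaw's actual schema.

```typescript
// Illustrative only: a minimal Drizzle ORM schema for SQLite that maps a
// workspace to a server-side folder (the persistent AI context described
// above). Names are assumptions, not the project's real schema.
import { sqliteTable, text, integer } from "drizzle-orm/sqlite-core";

export const workspaces = sqliteTable("workspaces", {
  id: integer("id").primaryKey({ autoIncrement: true }),
  name: text("name").notNull(),
  // Server-side folder backing this workspace's file browser and context.
  folderPath: text("folder_path").notNull(),
  createdAt: integer("created_at", { mode: "timestamp" }).notNull(),
});
```

Drizzle keeps the schema in plain TypeScript, which fits the lightweight SQLite-backed persistence the project describes.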
Core capabilities are organized around five modules. Workspace Management maps server-side folders to persistent AI contexts, with a built-in file browser supporting Markdown and PDF preview. RAG-Augmented Chat builds vector indexes from synced files and returns answers with inline source citations, while MAX mode summarizes context automatically to prevent overflow. Paper Study integrates five academic sources (ArXiv, PubMed, bioRxiv, Semantic Scholar, HuggingFace Daily Papers) with AI-driven query expansion, batch summarization, structured five-role paper discussions (moderator, skeptic, librarian, reproducer, scribe), and cross-disciplinary research ideation.
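The RAG flow (vector index over synced files, answers with inline source citations) can be sketched as a plain cosine-similarity retriever. The embedding step is elided and all names here are placeholders, not InnoClaw's internals.

```typescript
// Sketch of citation-aware retrieval: rank indexed file chunks by cosine
// similarity to a query vector and return the top-k with their sources.
// Embeddings are precomputed placeholders, not output of a real model.
interface Chunk {
  source: string;      // the synced file this chunk came from
  text: string;
  embedding: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function retrieve(query: number[], index: Chunk[], k: number): Chunk[] {
  return [...index]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```

The top-k chunks would then be injected into the prompt, with each `source` rendered as an inline citation beside the answer.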
The Scientific Skills System is the core differentiator. Based on the standardized Science Context Protocol (SCP) format, it ships 206+ built-in skills spanning eight domains: drug discovery (71), genomics (41), protein science (38), chemistry (24), physics & engineering (18), experimental automation (7), earth & environmental science (5), and literature mining (2). The Skill Creator Framework is a meta-skill for creating, evaluating, benchmarking, and validating custom skills.
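A skill in a protocol like SCP needs at minimum a name, a domain, and typed inputs. The actual SCP format is defined in the InternScience/scp repository and is not reproduced here; every field and the example skill below are purely hypothetical illustrations.

```typescript
// Hypothetical shape of a skill descriptor. The real Science Context
// Protocol format lives in the InternScience/scp repository; all fields
// below are illustrative assumptions.
interface SkillDescriptor {
  name: string;
  domain:
    | "drug-discovery" | "genomics" | "protein-science" | "chemistry"
    | "physics-engineering" | "experimental-automation"
    | "earth-environment" | "literature-mining";
  description: string;
  inputs: Record<string, string>; // parameter name -> type hint
}

// A made-up chemistry skill, for illustration only.
const exampleSkill: SkillDescriptor = {
  name: "smiles-to-descriptors",
  domain: "chemistry",
  description: "Compute molecular descriptors from a SMILES string.",
  inputs: { smiles: "string" },
};
```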
Deep Research implements a 13-stage experimental workflow with workflow-graph orchestration, five-role multi-agent collaboration, SSH remote execution, Slurm job scheduling, and real-time monitoring. The checkpoint mechanism pauses research at critical review points, with options to continue, modify, branch, or reject; Role Studio sends targeted instructions to individual worker agents. The AI agent system exposes 36 tool-calling capabilities, including bash execution, file operations, kubectl commands, and scientific skill invocation.
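The four checkpoint decisions can be sketched as a small resolver over a paused stage's state. Stage numbering and parameter shapes here are assumptions for illustration, not the project's actual data model.

```typescript
// Sketch of checkpoint review: a paused stage awaits one of four
// reviewer decisions. All names and shapes are assumptions.
type Decision = "continue" | "modify" | "branch" | "reject";

interface Checkpoint {
  stage: number;                     // 1..13 in the deep-research workflow
  params: Record<string, unknown>;
}

function resolve(
  cp: Checkpoint,
  decision: Decision,
  edits?: Record<string, unknown>,
): Checkpoint | null {
  switch (decision) {
    case "continue":
      return cp;                                          // resume as-is
    case "modify":
      return { ...cp, params: { ...cp.params, ...edits } };
    case "branch":
      // Fork: the original checkpoint stays live; this variant proceeds
      // with the edited parameters.
      return { ...cp, params: { ...cp.params, ...edits } };
    case "reject":
      return null;                                        // discard this line of work
  }
}
```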
The multi-LLM layer supports 9 providers with 26+ models (OpenAI GPT-5.2/4.1/o3-o4, Anthropic Opus 4/Sonnet 4/Haiku, Google Gemini 2.5/3/3.1, DeepSeek V3.2, Qwen3 235B/3.5 397B, Moonshot Kimi K2.5, SH-Lab Intern S1/S1 Pro, MiniMax 2.5, Zhipu GLM-5). The intelligent notes and automation module auto-generates summaries, FAQs, briefings, and timelines; Cron-scheduled tasks produce daily/weekly reports and run git sync, and HuggingFace/ModelScope dataset management supports resumable downloads.
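Cron-style scheduling of the daily/weekly reports can be sketched with a minimal matcher over the standard five cron fields. This toy version supports only `*` and plain numbers (no ranges, lists, or steps), and is an illustration, not InnoClaw's scheduler.

```typescript
// Minimal matcher for five-field cron expressions (minute, hour,
// day-of-month, month, day-of-week) supporting only "*" and plain
// numbers -- enough for schedules like "0 9 * * 1" (Mondays 09:00).
function cronMatches(expr: string, d: Date): boolean {
  const fields = expr.trim().split(/\s+/);
  if (fields.length !== 5) throw new Error("expected 5 cron fields");
  const now = [
    d.getMinutes(),
    d.getHours(),
    d.getDate(),
    d.getMonth() + 1, // cron months are 1-12
    d.getDay(),       // cron day-of-week: 0 = Sunday
  ];
  return fields.every((f, i) => f === "*" || Number(f) === now[i]);
}
```

A weekly-report task would fire on ticks where `cronMatches("0 9 * * 1", now)` is true.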
The execution layer connects to compute resources via Shell, Slurm, rjob, SSH, and kubectl, with security-hardening guidance in SECURITY.md. The SCP specification is maintained independently in the InternScience/scp repository, and the SCP Hub platform distributes external skills.
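The SSH + Slurm path amounts to composing an `sbatch` submission to run on a remote login node. A minimal sketch (host name, partition, and script path are example values, not InnoClaw's configuration):

```typescript
// Sketch: compose a Slurm submission command for execution on a remote
// host over SSH. Host, partition, and script path are illustrative.
interface SlurmJob {
  script: string;      // batch script path on the remote host
  partition?: string;
  gpus?: number;
}

function sbatchCommand(host: string, job: SlurmJob): string[] {
  const args = ["sbatch"];
  if (job.partition) args.push(`--partition=${job.partition}`);
  if (job.gpus) args.push(`--gres=gpu:${job.gpus}`);
  args.push(job.script);
  // Returning argv form avoids shell-quoting pitfalls when handed to
  // something like child_process.execFile(cmd[0], cmd.slice(1)).
  return ["ssh", host, ...args];
}
```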