A skill library for autonomous biomedical research AI Agents: 240 structured SKILL.md files covering bioinformatics, drug discovery, clinical medicine, and more, serving as the operating layer of LabOS on the OpenClaw Agent platform.
LabClaw is an AI Agent skill library jointly developed by Stanford's Le Cong Lab and Princeton's Mengdi Wang Lab, positioned as the operating layer of LabOS (Stanford-Princeton AI Co-Scientists). The project contains 240 production-ready SKILL.md files that encode modular tool-calling knowledge and workflow-orchestration guidance for AI Agents in structured Markdown.
The project adopts a skill-driven design philosophy, decoupling domain knowledge from tool usage strategies. Each skill file follows a unified structure (Overview → When to Use → Key Capabilities → Usage Examples), guiding Agents on when to use tools, how to invoke them, and what outputs to expect. Domain coverage spans seven areas:
- 🧬 Biology & Life Sciences (86 skills): bioinformatics, single-cell, genomics, proteomics, multi-omics, structural biology
- 💊 Pharma & Drug Discovery (36 skills): cheminformatics, molecular ML, docking, target research, pharmacology
- ⚙️ General & Data Science (54 skills): statistics, machine learning, data management, scientific writing, quality control
- 📚 Literature & Search (33 skills): academic search, biomedical databases, patents, grants, citation management
- 🏥 Medicine & Clinical (22 skills): clinical trials, precision medicine, oncology, infectious diseases, medical imaging
- 👁️ Vision & XR (5 skills): hand tracking, 3D pose estimation, segmentation, egocentric vision
- 📊 Visualization (4 skills): matplotlib, seaborn, plotly, publication-grade charts
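Concretely, a skill file following the unified structure described above might look like the minimal skeleton below. The skill name, tool, and section contents are illustrative placeholders, not files from the repository:

```markdown
# fastq-quality-check

## Overview
Wraps a read-level quality-control tool for raw sequencing data.

## When to Use
When the agent receives FASTQ files and needs QC metrics before
downstream alignment or assembly.

## Key Capabilities
- Per-base quality and GC-content summaries
- Adapter-contamination detection

## Usage Examples
Run QC on a single file and inspect the summary report.
```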
LabClaw is fundamentally a pure knowledge layer with no executable code; it depends on the OpenClaw Agent platform at runtime. The full library can be installed with a single command (install https://github.com/wu-yc/LabClaw), and individual skill subfolders can also be copied on demand. The project includes multiple tooluniverse-* skills compatible with MIMS-Harvard's ToolUniverse ecosystem, and can integrate with the LabOS XR execution layer for XR-assisted laboratory scenarios. Additionally, LabClaw can be deployed as a persistent autonomous Agent that continuously monitors instrument data, interprets multimodal signals, and triggers anomaly responses.
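Because the skills are plain Markdown, a runtime can load them with a few lines of parsing. The sketch below, in Python, shows one way an agent platform might split a SKILL.md into its four sections; the skill content and the `parse_skill` helper are hypothetical illustrations, not part of LabClaw or OpenClaw:

```python
import re

# Hypothetical SKILL.md content following the four-section template;
# the skill name and example command are illustrative.
SKILL_MD = """\
# example-skill

## Overview
Runs a toy sequence-statistics check.

## When to Use
When the agent needs basic QC on a short DNA sequence.

## Key Capabilities
- GC-content calculation

## Usage Examples
`gc_content ATGC` returns 0.5
"""

def parse_skill(text: str) -> dict:
    """Split a SKILL.md document into its '## '-level sections."""
    sections = {}
    # Split on second-level headings; parts[0] is the title block.
    for part in re.split(r"^## ", text, flags=re.MULTILINE)[1:]:
        heading, _, body = part.partition("\n")
        sections[heading.strip()] = body.strip()
    return sections

skill = parse_skill(SKILL_MD)
print(sorted(skill))
# → ['Key Capabilities', 'Overview', 'Usage Examples', 'When to Use']
```

A real runtime would add validation (e.g. rejecting files missing a required section), but the flat heading-to-body mapping is all an agent needs to decide when and how to invoke a tool.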
Unconfirmed information:
- Exact skill file count: GitHub states 240, the official site states 206+ (possibly due to different counting methods or update timing)
- OpenClaw runtime maturity
- Depth of LabOS integration (closed-loop API and data-flow details are not publicly documented)
- Role of sponsor K-Dense
- No publicly available user cases or evaluation reports yet