Local-first AI agent platform for desktop and CLI, featuring multi-model support, local RAG, multi-step agentic workflows, and MCP tool integration.
Askimo is a local-first AI agent platform available as both a desktop GUI (Compose Multiplatform) and a CLI (JLine3 + GraalVM native image).
Core Capabilities
- Multi-Model Support: OpenAI, Claude, Gemini, Grok, Ollama, LM Studio, LocalAI, Docker AI, and any OpenAI-compatible endpoint via custom base URL. Per-session model switching without context loss.
- Local RAG: Hybrid retrieval combining BM25 full-text search with vector semantic search. A built-in AI classifier automatically skips unnecessary retrievals to reduce latency and token consumption. Indexes local folders, files, and web URLs; supports source code, PDF, MS Office docs, emails, and text files. All data stays on-device.
- Plans (Multi-Step Agentic Workflows): Chains AI steps into form-driven automated pipelines. Each step has an independent role (researcher, analyst, strategist, writer), with context passed automatically between steps. Plans can be defined in custom YAML or generated from natural language. Built-in templates include cover letters, blog writing, competitive analysis, meeting notes, research reports, and email drafting. Results are exportable as PDF or Word.
- Script Runner: Execute AI-generated Python, Bash, and JavaScript scripts directly in chat. Python runs in an auto-managed virtualenv with automatic dependency installation.
- MCP Tool Integration: Connect to MCP-compatible servers via stdio or HTTP, with global, per-project, and ephemeral configuration scopes.
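The hybrid retrieval described above merges a lexical (BM25) ranking with a vector-similarity ranking. Askimo's actual fusion strategy is not documented; a common approach is reciprocal rank fusion (RRF), sketched below with hypothetical names as an illustration only.

```python
# Illustrative sketch of hybrid retrieval via reciprocal rank fusion (RRF).
# This is NOT Askimo's documented implementation; all names are hypothetical.

def rrf_fuse(bm25_ranked, vector_ranked, k=60):
    """Merge two ranked lists of document IDs into one hybrid ranking.

    Each document scores 1 / (k + rank) per list it appears in; the
    constant k dampens the influence of top-ranked outliers.
    """
    scores = {}
    for ranked in (bm25_ranked, vector_ranked):
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "b" ranks highly in both lists, so it wins the fused ranking.
bm25 = ["a", "b", "c"]
vec = ["b", "d", "a"]
print(rrf_fuse(bm25, vec))  # → ['b', 'a', 'd', 'c']
```

Documents found by both searches accumulate score from both lists, which is how the hybrid ranking rewards agreement between lexical and semantic matches.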
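The Plans engine above chains role-scoped steps and passes each step's output forward automatically. A minimal sketch of that pattern follows; the step roles match the ones named above, but `run_step`, the plan structure, and all other names are hypothetical stand-ins, not Askimo's YAML schema or real engine.

```python
# Minimal sketch of a multi-step plan with automatic context passing.
# The run_step stub and plan structure are hypothetical; the real
# engine drives an LLM per step from a YAML plan definition.

def run_step(role, prompt, context):
    # Stand-in for an LLM call: a real implementation would send the
    # prompt plus the accumulated context to the selected model.
    return f"[{role}] output for: {prompt} (given {len(context)} prior steps)"

def run_plan(steps, inputs):
    """Run steps in order, feeding each one every earlier step's output."""
    context = {}
    for step in steps:
        prompt = step["prompt"].format(**inputs)
        context[step["role"]] = run_step(step["role"], prompt, context)
    return context

plan = [
    {"role": "researcher", "prompt": "Gather sources on {topic}"},
    {"role": "analyst", "prompt": "Summarize findings on {topic}"},
    {"role": "writer", "prompt": "Draft a report on {topic}"},
]
result = run_plan(plan, {"topic": "local-first AI"})
```

Because `context` accumulates across the loop, the writer step sees both the researcher's and the analyst's outputs without any manual wiring between steps.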
Experience & Engineering Features
- Persistent sessions stored in local SQLite, auto-restored on restart.
- Vision support for image attachments, compatible with any multimodal model.
- On-device telemetry for token usage, cost estimation, and RAG performance metrics; no data is uploaded.
- Internationalization in 9 languages: English, Chinese (Simplified/Traditional), Japanese, Korean, French, Spanish, German, Portuguese, and Vietnamese.
- Attachment support directly from the project sidebar.
Architecture
Built with Kotlin (JDK 21+). AI integration layer based on LangChain4j, providing unified abstraction across providers. Desktop and CLI share the shared/ core logic module (providers, RAG, MCP, memory, tools, database, plans engine). Build system: Gradle Kotlin DSL. Code quality enforced via Detekt static analysis with custom rules. CI uses Dockerfile.runtime for container image builds.
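The unified provider abstraction that LangChain4j gives the Kotlin core can be illustrated, in schematic Python with hypothetical names, as a common chat interface plus a provider registry; keeping conversation state outside the model object is what makes per-session model switching without context loss possible.

```python
# Schematic illustration of a unified provider abstraction, similar in
# spirit to what LangChain4j provides Askimo's Kotlin core. All class
# and function names here are hypothetical.

class ChatModel:
    """Common interface every provider implements."""
    def chat(self, prompt: str) -> str:
        raise NotImplementedError

class OpenAiModel(ChatModel):
    def chat(self, prompt: str) -> str:
        return f"openai says: {prompt}"  # real code would call the API

class OllamaModel(ChatModel):
    def chat(self, prompt: str) -> str:
        return f"ollama says: {prompt}"  # real code would hit a local server

# A session holds its own history and can swap the model at any time,
# since the conversation state lives outside the model object.
PROVIDERS = {"openai": OpenAiModel, "ollama": OllamaModel}

def get_model(name: str) -> ChatModel:
    return PROVIDERS[name]()
```

Switching providers mid-session then reduces to calling `get_model` with a different key while the session's message history is left untouched.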
Use Cases
- Multi-model comparative evaluation
- Privacy-first local code/document Q&A
- Automated multi-stage research report generation
- One-click execution of AI-generated scripts
- Extending AI capabilities via the MCP protocol
- Fully offline, privacy-first AI environments
Unconfirmed Information
- Specific vector database selection is undisclosed (likely based on LangChain4j built-in solutions).
- No Hugging Face page or academic paper found.
- Author haiphucnguyen (Hai Nguyen) background unknown; 3 additional contributors.
- Full YAML schema for custom Plans requires consulting docs or source code.
- Official MCP server recommendations or marketplace page unconfirmed.