A cross-platform desktop AI productivity client that unifies access to multiple LLM providers, featuring side-by-side model comparison, knowledge base building, AI image generation, and MCP extension support.
Cherry Studio is an open-source cross-platform desktop AI productivity client maintained by the CherryHQ organization, built on Electron + Vite + TypeScript. It unifies cloud models from OpenAI, Anthropic Claude, Google Gemini, DeepSeek, and Qwen with local models via Ollama and LM Studio into a single interface, eliminating the friction of switching between multiple LLM services.
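The unification works because most cloud providers (OpenAI, DeepSeek, Qwen) expose OpenAI-compatible endpoints while local runtimes like Ollama use their own routes. A minimal sketch of such a provider abstraction, in TypeScript, might look like the following; the interface and class names are illustrative assumptions, not Cherry Studio's actual internals:

```typescript
// Illustrative sketch of a unified multi-provider chat abstraction.
// All names here are hypothetical, not Cherry Studio's real API.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatRequest {
  url: string;
  headers: Record<string, string>;
  body: unknown;
}

interface Provider {
  buildRequest(model: string, messages: ChatMessage[]): ChatRequest;
}

// Covers OpenAI and services with OpenAI-compatible APIs (DeepSeek, Qwen, ...).
class OpenAICompatibleProvider implements Provider {
  constructor(private baseUrl: string, private apiKey: string) {}
  buildRequest(model: string, messages: ChatMessage[]): ChatRequest {
    return {
      url: `${this.baseUrl}/chat/completions`,
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${this.apiKey}`,
      },
      body: { model, messages },
    };
  }
}

// Local Ollama daemon: no API key, different route (/api/chat).
class OllamaProvider implements Provider {
  constructor(private baseUrl = "http://localhost:11434") {}
  buildRequest(model: string, messages: ChatMessage[]): ChatRequest {
    return {
      url: `${this.baseUrl}/api/chat`,
      headers: { "Content-Type": "application/json" },
      body: { model, messages, stream: false },
    };
  }
}

// The chat UI can then treat every backend uniformly:
const providers: Record<string, Provider> = {
  openai: new OpenAICompatibleProvider("https://api.openai.com/v1", "YOUR_KEY"),
  ollama: new OllamaProvider(),
};

const req = providers.ollama.buildRequest("llama3", [
  { role: "user", content: "Hello" },
]);
console.log(req.url); // http://localhost:11434/api/chat
```

Adding a new backend then means writing one small adapter class rather than new UI code, which is the usual payoff of this pattern.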
Core capabilities include: 300+ pre-configured AI assistants (covering writing, coding, translation, etc.), side-by-side multi-model comparison, autonomous Agent mode, multi-format document parsing and Q&A (PDF/Office/images), knowledge base building from local files and web URLs, AI image generation, and MCP Server extension support. The UX features light/dark themes, transparent windows, full Markdown rendering, and a theme marketplace (cherrycss.com), with data backup via local storage and WebDAV.
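For the MCP extension point, most MCP-capable clients register external tool servers with a small command-based config. The shape below follows the common MCP client convention (a command plus arguments per server); Cherry Studio's exact settings schema may differ, and the path is a placeholder:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/notes"]
    }
  }
}
```

Each registered server exposes tools (here, filesystem access) that any configured model can invoke during a conversation.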
The project uses a pnpm monorepo structure with Biome and OxLint for code standards, Vitest + Playwright for testing, and Changesets for version management. The community edition is open-sourced under AGPL-3.0, while the enterprise edition provides a standalone backend service with unified model management, shared knowledge bases, access control, and private deployment. Cherry Studio currently supports Windows, macOS, and Linux, with HarmonyOS/Android/iOS on the roadmap.
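A pnpm monorepo of this kind is typically declared with a `pnpm-workspace.yaml` at the repository root; the package globs below are a generic sketch, not Cherry Studio's actual layout:

```yaml
# pnpm-workspace.yaml — illustrative layout, actual globs are assumptions
packages:
  - "packages/*"   # shared libraries
  - "apps/*"       # e.g. the Electron app
```

Biome/OxLint, Vitest + Playwright, and Changesets are then usually wired up as root-level scripts shared across every workspace package.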
Unconfirmed:
- Enterprise pricing is not public (contact bd@cherry-ai.com).
- The enterprise edition's open-source scope is described only as "partially open to customers," without specifics.
- Mobile versions have no release timeline.
- The pre-configured assistant count differs between GitHub (300+) and the website (1000+).
- A verified MCP Server list is not provided.