A memory-enabled AI companion chatbot optimized for local LLMs with Retrieval Augmented Generation (RAG) technology, supporting 12 languages and OpenAI API compatibility.
## One-Minute Overview
Loyal Elephie is an AI companion chatbot that combines a Next.js frontend with a Python backend, utilizing Retrieval Augmented Generation (RAG) technology to deliver a seamless chatting experience. It features controllable memory functionality, supports local LLMs, and is compatible with OpenAI APIs, making it ideal for users seeking a private AI assistant that can run locally.
Core Value: Enable your AI to have long-term memory capabilities while protecting privacy and operating locally.
## Quick Start
Installation Difficulty: Medium - requires configuring both the frontend and backend, setting up user accounts, and preparing an OpenAI-compatible API service
```bash
git clone https://github.com/v2rockets/Loyal-Elephie.git
cd Loyal-Elephie
cd frontend
npm install
cd ../backend
pip install -r requirements.txt
```
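The backend needs an OpenAI-compatible chat-completions endpoint to talk to. As a minimal sketch of what that wiring looks like (the endpoint URL and model name below are placeholders, not the project's actual configuration), a request body for such an endpoint can be assembled like this:

```python
import json

# Minimal sketch: the backend talks to any OpenAI-compatible
# /chat/completions endpoint. URL and model name are assumptions.
API_BASE = "http://localhost:8000/v1"  # e.g. a local llama.cpp server (assumption)
MODEL = "local-model"                  # placeholder model name

def build_chat_request(messages, temperature=0.7):
    """Build an OpenAI-style chat-completions request body."""
    return {"model": MODEL, "messages": messages, "temperature": temperature}

body = build_chat_request([{"role": "user", "content": "Hello, Elephie!"}])
print(json.dumps(body))
```

Any server that speaks this request format (llama.cpp's server, ExLlamaV2 behind an adapter, or a cloud provider) can slot in behind `API_BASE`.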
Is this suitable for me?
- ✅ Need a private AI assistant that can run locally
- ✅ Want the AI to remember conversations and provide personalized responses
- ✅ Use local language models for inference
- ❌ Don't have a Linux environment (Windows users need WSL)
- ❌ Require a simple deployment process
## Core Capabilities
### 1. Controllable Memory System - Make AI Your Second Brain
You decide which conversations to save and can edit the context as needed. Loyal Elephie remembers important information and references it in future conversations.
Actual Value: Your AI assistant remembers your preferences and important details, providing a coherent and personalized conversation experience.
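As an illustrative sketch of what "saving a conversation" can look like (the per-day Markdown file layout below is an assumption for illustration, not the project's actual storage format):

```python
from datetime import date
from pathlib import Path

def save_memory(note_dir: Path, text: str) -> Path:
    """Append a user-approved conversation snippet to today's memory note.
    The dated-Markdown layout here is an assumption, not the real schema."""
    note_dir.mkdir(parents=True, exist_ok=True)
    path = note_dir / f"{date.today().isoformat()}.md"
    with path.open("a", encoding="utf-8") as f:
        f.write(text.rstrip() + "\n\n")
    return path
```

Because the user chooses what gets passed to `save_memory`, nothing is persisted without explicit approval, which is the essence of "controllable" memory.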
### 2. Hybrid Search Technology - Efficient Information Retrieval
Combines ChromaDB vector search with the BM25 keyword algorithm for efficient retrieval, with special optimization for date-related queries.
Actual Value: Quickly surfaces relevant historical conversations and reference materials, improving the accuracy and relevance of AI responses.
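A common way to combine vector and keyword results is to normalize both score sets and take a weighted sum. The sketch below illustrates that general idea; the min-max fusion formula is an assumption, not necessarily the project's exact method:

```python
def hybrid_rank(vector_scores, bm25_scores, alpha=0.5):
    """Fuse two {doc_id: score} dicts via min-max normalization and a
    weighted sum, returning doc ids best-first. Illustrative only."""
    def norm(scores):
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0  # avoid division by zero for a single score
        return {d: (s - lo) / span for d, s in scores.items()}

    v, b = norm(vector_scores), norm(bm25_scores)
    fused = {d: alpha * v.get(d, 0.0) + (1 - alpha) * b.get(d, 0.0)
             for d in set(v) | set(b)}
    return sorted(fused, key=fused.get, reverse=True)
```

Raising `alpha` favors semantic (vector) matches; lowering it favors exact keyword matches, which matters for date-like query terms that embeddings handle poorly.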
### 3. Secure Web Access - Protect Your Privacy
A built-in login feature ensures that only authorized users can access the AI assistant.
Actual Value: Your private conversations remain protected from unauthorized access.
### 4. Optimized LLM Agent - Designed for Local Models
Uses lightweight XML syntax instead of function calling, is optimized for token efficiency, and works well with local inference engines such as llama.cpp or ExLlamaV2.
Actual Value: Runs efficiently on local hardware with reduced resource consumption while maintaining strong performance.
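As a rough illustration of the XML-tag approach (the tag names `search` and `reply` are hypothetical, not the project's actual schema): the model emits simple tags in plain text, and the agent extracts actions with a regular expression rather than relying on native function-calling support.

```python
import re

# Hypothetical action tags; the project's real tag names may differ.
ACTION_RE = re.compile(r"<(search|reply)>(.*?)</\1>", re.DOTALL)

def parse_actions(llm_output: str):
    """Extract (action, argument) pairs from the model's raw text output."""
    return [(m.group(1), m.group(2).strip()) for m in ACTION_RE.finditer(llm_output)]

print(parse_actions("<search>notes from last week</search>"))
```

Because any model that can copy a short XML pattern can drive this loop, no tool-calling fine-tune is required, which is what makes the design friendly to local models.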
### 5. Markdown Editor Integration - Seamless Knowledge Management
An optional connection to an online Markdown editor lets you view referenced documents during chat and pushes knowledge updates in real time after you edit your notes.
Actual Value: Easily manage reference materials and integrate your notes seamlessly with conversations.
## Tech Stack & Integration
- Development Languages: Python (backend), JavaScript/TypeScript (frontend)
- Main Dependencies: Next.js, ChromaDB, BM25, OpenAI-compatible API services
- Integration Method: API / full-stack application
## Maintenance Status
- Development Activity: Project is actively updated with multiple local LLM models tested
- Recent Updates: Recent addition of SilverBulletMd editor support
- Community Response: The UI is adapted from an open-source chat project, and community contributions continue to accumulate
## Documentation & Learning Resources
- Documentation Quality: Comprehensive
- Official Documentation: Included in the README
- Example Code: Deployment guides and configuration examples provided