A Streamlit application that lets users build RAG pipelines over their own data using natural language, creating ChatGPT-style chatbots without writing code.
One-Minute Overview#
RAGs is a Streamlit application that lets you create RAG (Retrieval Augmented Generation) pipelines using natural language. You can describe your task (e.g., "load this web page") and desired parameters (e.g., "retrieve X number of docs"), then configure and query a data-based AI assistant through a simple interface.
Core Value: Enables non-technical users to easily build professional RAG applications without programming knowledge.
Quick Start#
Installation Difficulty: Medium - requires a Python environment and an OpenAI API key; uses Poetry for dependency management
```shell
# Clone the project
git clone https://github.com/run-llama/rags.git
cd rags

# Create and activate a virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies
poetry install --with dev
```
Create a configuration file and add your API key: create a `.streamlit/secrets.toml` file in the project root:

```toml
openai_key = "<your OpenAI API key>"
```
Start the application:

```shell
streamlit run 1_🏠_Home.py
```
Is This Suitable for Me?#
- ✅ Knowledge base Q&A: Create intelligent Q&A systems based on company documents or product manuals
- ✅ Content analysis: Analyze web content and answer related questions
- ❌ Highly customized complex applications: This tool prioritizes ease of use over advanced customization
- ❌ Fully offline use: Requires the OpenAI API by default, cannot run completely offline
Core Capabilities#
1. Natural Language RAG Pipeline Building - No Code Required#
- Build complete RAG systems through simple natural language descriptions (e.g., "load this PDF and answer questions")
- Actual Value: Lowers the technical barrier, allowing business users to quickly build professional AI applications
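To make the description-to-pipeline idea concrete, here is a minimal sketch of mapping a natural-language task description onto pipeline settings. The names `PipelineConfig` and `config_from_description` are hypothetical, and the regex parsing is a toy stand-in: the actual app delegates this step to an LLM agent built on LlamaIndex.

```python
import re
from dataclasses import dataclass


@dataclass
class PipelineConfig:
    """Hypothetical RAG pipeline settings derived from a task description."""
    source: str = ""
    top_k: int = 2          # default number of chunks to retrieve
    chunk_size: int = 1024  # default text chunk size


def config_from_description(description: str) -> PipelineConfig:
    """Toy parser: pull a data source and parameters out of plain English."""
    config = PipelineConfig()
    url = re.search(r"https?://\S+", description)
    if url:
        config.source = url.group()
    top_k = re.search(r"retrieve (\d+)", description)
    if top_k:
        config.top_k = int(top_k.group(1))
    return config


cfg = config_from_description(
    "load https://example.com/manual.pdf and retrieve 5 docs per query"
)
print(cfg.source)  # https://example.com/manual.pdf
print(cfg.top_k)   # 5
```

The point of the sketch is the shape of the interaction: free-form text in, a structured, inspectable configuration out, which the user can then adjust before the pipeline is built.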
2. Smart Parameter Configuration - Auto-generated with Manual Adjustment#
- The system automatically generates RAG parameters (Top-K, chunk size, etc.) from your description, with support for manual adjustment
- Actual Value: Balances automation with fine-grained control, allowing optimization without understanding complex parameters
3. Multi-model Support - Flexible LLM and Embedding Model Selection#
- Supports various LLMs and embedding models, including OpenAI, Anthropic, Replicate, and Hugging Face
- Actual Value: Flexibly switch backend models based on cost, performance, and privacy requirements
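Swappable backends like this are commonly implemented with a registry or factory pattern. The sketch below is an assumption about the general technique, not RAGs' actual internals; `EchoLLM`, `MODEL_REGISTRY`, and `build_llm` are illustrative names, and the stub just echoes rather than calling a real provider API.

```python
from typing import Callable


class EchoLLM:
    """Stand-in model; a real backend would call OpenAI, Anthropic, etc."""

    def __init__(self, name: str):
        self.name = name

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] {prompt}"


# Hypothetical registry mapping model identifiers to constructors.
MODEL_REGISTRY: dict[str, Callable[[], EchoLLM]] = {
    "openai:gpt-4": lambda: EchoLLM("gpt-4"),
    "anthropic:claude": lambda: EchoLLM("claude"),
}


def build_llm(model_id: str) -> EchoLLM:
    """Look up and construct the requested backend."""
    if model_id not in MODEL_REGISTRY:
        raise ValueError(f"unknown model: {model_id}")
    return MODEL_REGISTRY[model_id]()


llm = build_llm("anthropic:claude")
print(llm.complete("hello"))  # [claude] hello
```

Because every backend exposes the same `complete` interface, switching providers is a one-line config change rather than a rewrite, which is what makes the cost/performance/privacy trade-off practical.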
4. Interactive Chat Interface - Intuitive Data Querying#
- Provides a standard chat interface for querying RAG agents in real time, with answers grounded in your data
- Actual Value: A natural user experience; no need to learn new tools to converse with your data
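Under the chat UI sits a simple request/response loop over an agent plus a running history. A minimal sketch, assuming a stubbed agent (`RAGAgentStub` is hypothetical; in the real app the agent is a LlamaIndex RAG agent and the UI is rendered by Streamlit):

```python
class RAGAgentStub:
    """Stand-in for a RAG agent: answers from a tiny in-memory 'index'."""

    def __init__(self, facts: dict[str, str]):
        self.facts = facts

    def query(self, question: str) -> str:
        # Naive keyword lookup in place of real retrieval + generation.
        for keyword, answer in self.facts.items():
            if keyword in question.lower():
                return answer
        return "I don't know based on the loaded data."


# Chat history as (role, message) pairs, like a chat transcript.
history: list[tuple[str, str]] = []

agent = RAGAgentStub({"refund": "Refunds are processed within 14 days."})
for question in ["What is the refund policy?"]:
    history.append(("user", question))
    history.append(("assistant", agent.query(question)))

print(history[-1][1])  # Refunds are processed within 14 days.
```

The history list is what lets the interface render the full conversation on each turn, the same pattern a Streamlit chat app keeps in session state.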
Technology Stack & Integration#
Development Language: Python

Key Dependencies:
- Streamlit: For building user interface
- LlamaIndex: Core RAG framework
- OpenAI API: Default LLM service
Integration Method: Application/Tool
Maintenance Status#
- Development Activity: Actively developed, built on LlamaIndex
- Recent Updates: Updated recently; a relatively new project
- Community Response: Has GitHub issues and Discord community support
Documentation & Learning Resources#
- Documentation Quality: Comprehensive
- Official Documentation: README.md
- Example Code: Includes installation and setup examples, plus detailed feature overviews