
GitHub Sentinel

Added Feb 22, 2026
Agent & Tooling
Open Source
Python · Workflow Automation · Docker · Large Language Models · AI Agents · Agent & Tooling · Model & Inference Framework · Developer Tools & Coding · Automation, Workflow & RPA

An AI Agent for the LLM era that automatically monitors GitHub project updates and Hacker News trends, generating intelligent reports via OpenAI/Ollama with multi-channel notifications and Docker deployment support.

Project Overview#

GitHub Sentinel is an open-source intelligent information retrieval and content mining tool designed for the LLM era. It acts as an automated "observer" that connects to the GitHub and Hacker News APIs, fetching the latest commits, pull requests, and issue discussions from subscribed repositories, as well as trending topics from the tech community. Using large language model capabilities, it transforms unstructured update streams into structured, readable project progress reports.

Core Problems Solved:

  • Information Overload: Addresses the pain point for developers, investors, or tech enthusiasts struggling to filter and follow updates from massive open-source projects
  • Automated Monitoring: Replaces manual periodic GitHub repository refreshing with 24/7 unattended progress tracking
  • Content Synthesis: Uses AI to summarize dry commit logs and issue threads into easy-to-understand reports

Use Cases:

  • Open-source project maintainers or users monitoring dependency updates
  • Tech investors or analysts tracking evolution trends in specific technology domains
  • Enterprise or team internal tech radar scanning
  • Individual developers mining trending tech topics via Hacker News

Core Features#

Subscription & Retrieval#

  • GitHub Subscription Management: Maintain a subscription list (subscriptions.json) from which commits, issues, and pull requests are fetched automatically
  • Multi-source Data Acquisition: Beyond GitHub, supports Hacker News trending topics and daily report mining
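
The exact schema of subscriptions.json isn't shown on this page; based on the description (a list of repositories to watch), a minimal sketch might look like the following, where the repository names are purely illustrative:

```json
[
    "langchain-ai/langchain",
    "ollama/ollama"
]
```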

AI Intelligence Analysis#

  • Multi-model Support: Supports OpenAI GPT series (e.g., gpt-4o-mini) and Ollama local models (e.g., Llama3) for content generation
  • Intelligent Report Generation: Automatically generates natural language project progress reports based on retrieved data
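
Conceptually, report generation boils down to packing the fetched updates into a summarization prompt for whichever model is configured. The sketch below illustrates that packing step; the function name and prompt wording are hypothetical, not taken from the project:

```python
def build_report_prompt(repo: str, commits: list[str], issues: list[str]) -> str:
    """Pack raw GitHub updates into a summarization prompt for the LLM."""
    lines = [f"Summarize the latest progress of {repo} as a concise report."]
    if commits:
        lines.append("Recent commits:")
        lines.extend(f"- {c}" for c in commits)
    if issues:
        lines.append("Open issues:")
        lines.extend(f"- {i}" for i in issues)
    return "\n".join(lines)
```

The resulting string would then be sent to the OpenAI API or to a local Ollama endpoint, depending on the `model_type` setting.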

Task Scheduling#

  • Scheduled Tasks: Supports a daemon process mode that executes retrieval tasks at a configured frequency (e.g., daily)
  • Progress Tracking: Records project progress over time
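
The configuration below exposes `progress_frequency_days` and `progress_execution_time` settings, which suggests scheduling logic along these lines. This is a minimal sketch of how the next run time could be computed from those two values, not the project's actual implementation:

```python
from datetime import datetime, timedelta


def next_run(now: datetime, execution_time: str, frequency_days: int = 1) -> datetime:
    """Return the next scheduled run at `execution_time` (HH:MM),
    spaced `frequency_days` apart, relative to `now`."""
    hour, minute = map(int, execution_time.split(":"))
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:
        # Today's slot has passed; wait for the next cycle.
        candidate += timedelta(days=frequency_days)
    return candidate
```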

Interaction & Interface#

  • Graphical Interface (GUI): Gradio-based web interface that lowers the barrier for non-technical users
  • Command Line Tool (CLI): Provides a command-line interaction mode suitable for script integration

Notification & Distribution#

  • Multi-channel Notifications: Supports Email (SMTP) and Slack Webhook for report delivery
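
Slack's incoming-webhook API expects a JSON body with a `text` field. A minimal sketch of the delivery step, using only the standard library (function names are illustrative, not from the project):

```python
import json
import urllib.request


def build_slack_payload(title: str, report: str) -> dict:
    """Wrap a generated report in Slack's incoming-webhook message format."""
    return {"text": f"*{title}*\n{report}"}


def send_to_slack(webhook_url: str, title: str, report: str) -> int:
    """POST the report to a Slack incoming webhook; returns the HTTP status."""
    body = json.dumps(build_slack_payload(title, report)).encode("utf-8")
    req = urllib.request.Request(
        webhook_url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Email delivery would follow the same pattern with smtplib and the SMTP settings from config.json.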

Installation & Deployment#

Requirements#

  • Python 3.10+
  • Docker (optional, recommended for production)
  • GitHub Personal Access Token (required)
  • OpenAI API Key or locally running Ollama instance (for AI features)

Installation Steps (Source Run)#

  1. Clone Repository:
git clone https://github.com/DjangoPeng/GitHubSentinel.git
cd GitHubSentinel
  2. Install Dependencies:
pip install -r requirements.txt
  3. Configure Environment:
  • Copy and modify config.json, filling in the GitHub token, email configuration, and LLM settings
  • Recommended: inject sensitive information via environment variables:
export GITHUB_TOKEN="github_pat_xxx"
export EMAIL_PASSWORD="your_password"
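
Letting environment variables override the values in config.json keeps secrets out of version control. A sketch of how such an override could work (the key names match the config shown below, but the loader itself is illustrative, not the project's code):

```python
import json
import os


def load_config(path: str = "config.json") -> dict:
    """Load config.json, letting environment variables override secrets."""
    with open(path, encoding="utf-8") as f:
        config = json.load(f)
    # Environment variables take precedence so secrets stay out of the file.
    token = os.getenv("GITHUB_TOKEN")
    if token:
        config.setdefault("github", {})["token"] = token
    password = os.getenv("EMAIL_PASSWORD")
    if password:
        config.setdefault("email", {})["password"] = password
    return config
```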

Docker Deployment#

  • Build image using build_image.sh (based on python:3.10-slim)
  • Mount configuration files or inject environment variables when running containers
  • Build process integrates validate_tests.sh for unit test verification
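
The page states only that build_image.sh builds on python:3.10-slim; the repository's actual Dockerfile isn't reproduced here. A sketch consistent with that description might look like this (paths and the entry command are assumptions):

```dockerfile
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Secrets and config are injected at run time, e.g.:
#   docker run -e GITHUB_TOKEN=... -v $(pwd)/config.json:/app/config.json sentinel
CMD ["python", "src/command_tool.py"]
```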

Usage#

Three Running Modes#

A. Command Line Mode:

python src/command_tool.py

B. Gradio Web Mode:

python src/gradio_server.py
# Access at http://localhost:7860

C. Background Daemon Mode:

./daemon_control.sh start   # Start
./daemon_control.sh status  # Check status
./daemon_control.sh stop    # Stop
./daemon_control.sh restart # Restart

Configuration Guide#

The project uses config.json for fine-grained control:

{
    "github": {
        "token": "your_github_token",
        "subscriptions_file": "subscriptions.json",
        "progress_frequency_days": 1,
        "progress_execution_time": "08:00"
    },
    "email": {
        "smtp_server": "smtp.exmail.qq.com",
        "smtp_port": 465,
        "from": "from_email@example.com",
        "password": "your_email_password",
        "to": "to_email@example.com"
    },
    "llm": {
        "model_type": "ollama",
        "openai_model_name": "gpt-4o-mini",
        "ollama_model_name": "llama3",
        "ollama_api_url": "http://localhost:11434/api/chat"
    },
    "report_types": [
        "github",
        "hacker_news_hours_topic",
        "hacker_news_daily_report"
    ],
    "slack": {
        "webhook_url": "your_slack_webhook_url"
    }
}

Architecture Highlights#

  • Modular Design: Supports multiple running modes (CLI, daemon process, and Web UI)
  • AI Agent Architecture: Integrates LLM for intelligent content analysis and report generation
  • Multi-data Source Integration: GitHub API + Hacker News API
  • Containerized Deployment: Docker support for environment consistency
  • CI/CD Ready: Includes unit testing framework (unittest, Mock) and validation scripts

Target Users#

  • Open-source enthusiasts
  • Individual developers
  • Investors
  • Users requiring high-frequency, large-scale information acquisition
