A lightweight yet complete LLM/Agent application development framework. It uses decorators to turn function signatures and docstrings into prompts, enabling type-safe LLM capabilities without writing function bodies. Features include multi-provider support, multimodal I/O, tool calling, streaming, API key load balancing, and Langfuse observability integration.
SimpleLLMFunc is a Python framework for LLM/Agent application development, built on the principles of "LLM as Function / Prompt as Code / Code as Doc". Developed by Jingzhe Ni and released under the MIT license.
## Key Features

### Decorator-Driven Development

- `@llm_function` — transforms an async function into an LLM-powered function with automatic prompt building, API calls, and response parsing
- `@llm_chat` — builds conversational agents with streaming and tool-calling support
- `@tool` — registers an async function as an LLM-callable tool, with multimodal return values
- `@tui` — ready-to-use Textual terminal UI
### Type Safety & Async-Native
- Python type hints + Pydantic models for type correctness
- Fully async design with native asyncio support for high-concurrency scenarios
- IDE-friendly with code completion and type checking
### Multimodal & Multi-Provider

- Supports `Text`, `ImgUrl`, and `ImgPath` multimodal input/output
- Compatible with OpenAI, DeepSeek, Anthropic Claude, Volcengine Ark, Baidu Qianfan, Ollama, vLLM, and more
### Production-Ready Capabilities
- API Key Load Balancing: distributes requests across multiple configured keys, using a min-heap to select the least-loaded key
- Rate Limiting: token-bucket algorithm to stay within provider rate limits
- Structured Logging: automatic `trace_id` tracking
- Observability: Langfuse integration
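The load-balancing and rate-limiting mechanisms above can be sketched in plain Python. This is an illustrative model only — the class and method names below are invented for the sketch and are not SimpleLLMFunc's actual API:

```python
import heapq
import time


class KeyPool:
    """Pick the least-loaded API key using a min-heap of (active_calls, key)."""

    def __init__(self, keys):
        self.heap = [(0, key) for key in keys]
        heapq.heapify(self.heap)

    def acquire(self):
        # Pop the key with the fewest in-flight calls, bump its count, re-insert
        load, key = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (load + 1, key))
        return key

    def release(self, key):
        # Decrement the finished key's load and restore the heap invariant
        for i, (load, k) in enumerate(self.heap):
            if k == key:
                self.heap[i] = (load - 1, k)
                heapq.heapify(self.heap)
                break


class TokenBucket:
    """Allow a request only when a token is available; refill at a fixed rate."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def try_acquire(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Because the heap always surfaces the key with the fewest in-flight calls, consecutive requests naturally spread across all configured keys.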
### Built-in Tools
- PyRepl: Subprocess-based persistent Python REPL
- SelfReference: Persistent Agent memory contract
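A subprocess-based persistent REPL like PyRepl can be modeled with the standard library alone. The sketch below is a simplified stand-in (single-line statements only), not the actual PyRepl implementation:

```python
import subprocess
import sys

# Worker loop run inside the child process: executes one statement per stdin
# line; the shared namespace `ns` keeps state alive between calls.
WORKER = """
import sys
ns = {}
for line in sys.stdin:
    try:
        exec(line, ns)
    except Exception as e:
        print("error:", e, flush=True)
    print("<done>", flush=True)
"""


class PersistentRepl:
    def __init__(self):
        self.proc = subprocess.Popen(
            [sys.executable, "-u", "-c", WORKER],
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            text=True,
        )

    def run(self, code: str) -> str:
        """Execute one line of code; return whatever it printed."""
        self.proc.stdin.write(code + "\n")
        self.proc.stdin.flush()
        out = []
        for line in self.proc.stdout:
            if line.strip() == "<done>":
                break
            out.append(line)
        return "".join(out)

    def close(self):
        self.proc.stdin.close()
        self.proc.wait()
```

Because the child process stays alive between calls, variables defined in one `run()` are visible in the next — the persistence contract an agent needs to iterate on code.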
## Typical Use Cases
| Scenario | Description |
|---|---|
| Data Processing & Analysis | Entity extraction, text classification, structured data extraction |
| Intelligent Assistants | Agents that search, calculate, and call APIs |
| Batch Data Processing | High-concurrency calls with async + asyncio.gather |
| Multimodal Content | Image analysis, image-text comparison |
| Rapid Prototyping | Functional composition for quick business logic validation |
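The batch-processing row above boils down to fanning coroutine calls out with `asyncio.gather`. A minimal stdlib-only sketch — here `classify` is a hypothetical stand-in for an `@llm_function`-decorated function, with a sleep in place of the real network call:

```python
import asyncio


async def classify(text: str) -> str:
    """Stand-in for an LLM call; real code would await the model here."""
    await asyncio.sleep(0.01)  # simulate network latency
    return "positive" if "great" in text else "neutral"


async def classify_batch(texts: list[str]) -> list[str]:
    # Launch all calls concurrently; gather returns results in input order
    return await asyncio.gather(*(classify(t) for t in texts))


results = asyncio.run(classify_batch(["great product", "arrived on time"]))
```

With real LLM calls the total wall time approaches that of the single slowest request rather than the sum, which is the point of the async-native design.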
## Quick Start
```shell
pip install SimpleLLMFunc
```

```python
import asyncio

from SimpleLLMFunc import llm_function, OpenAICompatible

# Load a provider/model configuration from a JSON file
llm = OpenAICompatible.load_from_json_file("provider.json")["your_provider"]["model"]


@llm_function(llm_interface=llm)
async def classify_sentiment(text: str) -> str:
    """
    Analyze the sentiment tendency of text.

    Args:
        text: Text to analyze

    Returns:
        Sentiment classification: 'positive', 'negative', or 'neutral'
    """
    pass  # Prompt as Code!


result = asyncio.run(classify_sentiment("This product is amazing!"))
print(result)
```
## Requirements
- Python >=3.12, <4.0
## Related Projects
- SimpleManus: an Agent framework by the same author, built on SimpleLLMFunc