
AutoAgents

Added: Feb 25, 2026
Category: Agent & Tooling
License: Open Source
Tags: Workflow Automation · Rust · Multi-Agent System · AI Agents · Agent Framework · Agent & Tooling · Developer Tools & Coding · Automation, Workflow & RPA

AutoAgents is a production-grade multi-agent framework built in Rust, supporting high-performance AI agent orchestration and execution across cloud, edge, and WASM environments with comprehensive tooling, memory management, and multi-agent communication.

Overview#

AutoAgents is a production-grade multi-agent framework developed by the Liquidos AI team. It is written entirely in Rust, currently at version v0.3.4, and dual-licensed under the MIT License / Apache License 2.0.

Core Capabilities#

Agent Execution Modes#

  • ReAct Executor: Supports reasoning-action loop pattern for complex task decomposition
  • Basic Executor: Straightforward single-pass execution for simple task scenarios
  • Streaming Responses: Streaming output for improved user experience
  • Structured Outputs: Supports structured output formats like JSON Schema
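The ReAct reasoning-action loop above can be sketched in plain Rust. This is a conceptual illustration only, with a stubbed-out "model" in place of an LLM call; all names are hypothetical and do not reflect AutoAgents' actual ReAct executor API.

```rust
// Minimal ReAct-style loop: the model alternates between choosing an
// action (tool call) and observing its result, until it emits a final
// answer or the step budget runs out.
enum Step {
    Action { tool: String, input: String },
    Finish { answer: String },
}

// Stand-in for an LLM call: decides the next step from the transcript.
fn decide(transcript: &str) -> Step {
    if transcript.contains("Observation:") {
        Step::Finish { answer: "4".to_string() }
    } else {
        Step::Action { tool: "calculator".to_string(), input: "2+2".to_string() }
    }
}

// Stand-in for the tool registry.
fn run_tool(tool: &str, input: &str) -> String {
    match (tool, input) {
        ("calculator", "2+2") => "4".to_string(),
        _ => format!("unknown tool: {tool}"),
    }
}

fn react(task: &str, max_steps: usize) -> Option<String> {
    let mut transcript = format!("Task: {task}\n");
    for _ in 0..max_steps {
        match decide(&transcript) {
            Step::Action { tool, input } => {
                let obs = run_tool(&tool, &input);
                transcript.push_str(&format!("Observation: {obs}\n"));
            }
            Step::Finish { answer } => return Some(answer),
        }
    }
    None // budget exhausted without a final answer
}

fn main() {
    assert_eq!(react("What is 2+2?", 5), Some("4".to_string()));
}
```

The key property is the bounded loop: a real executor would stream each thought/observation pair and enforce a step limit exactly as the `max_steps` guard does here.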

Tool System#

  • Derive Macros: Simplifies tool and agent definitions via #[tool] and #[agent] macros, reducing boilerplate
  • WASM Sandbox: Sandboxed WASM runtime for tool execution with memory safety and isolation guarantees
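Conceptually, a derive macro like `#[tool]` removes the boilerplate of implementing a tool trait by hand. The sketch below shows roughly what such a hand-written implementation looks like; the trait name, methods, and signatures are illustrative assumptions, not AutoAgents' actual generated code.

```rust
// A tool exposes a name, a description (for the LLM's tool-selection
// prompt), and a call method. A #[tool] macro would generate this impl
// from attributes instead of requiring it to be written by hand.
trait Tool {
    fn name(&self) -> &str;
    fn description(&self) -> &str;
    fn call(&self, input: &str) -> Result<String, String>;
}

struct WordCount;

impl Tool for WordCount {
    fn name(&self) -> &str { "word_count" }
    fn description(&self) -> &str { "Counts whitespace-separated words in the input." }
    fn call(&self, input: &str) -> Result<String, String> {
        Ok(input.split_whitespace().count().to_string())
    }
}

fn main() {
    let tool = WordCount;
    assert_eq!(tool.call("hello agent world").unwrap(), "3");
    println!("{}: {}", tool.name(), tool.description());
}
```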

Memory System#

  • Sliding Window Memory: Retains the most recent N messages in a configurable window, evicting the oldest first
  • Extensible Backends: Extensible storage backends with Qdrant vector store support
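The sliding-window idea is simple enough to show in a few lines of std-only Rust. This is an illustrative sketch of the pattern, not AutoAgents' actual memory API.

```rust
use std::collections::VecDeque;

// Sliding-window memory: keeps only the most recent `capacity` messages,
// evicting the oldest when full.
struct SlidingWindowMemory {
    capacity: usize,
    messages: VecDeque<String>,
}

impl SlidingWindowMemory {
    fn new(capacity: usize) -> Self {
        Self { capacity, messages: VecDeque::with_capacity(capacity) }
    }

    fn remember(&mut self, msg: impl Into<String>) {
        if self.messages.len() == self.capacity {
            self.messages.pop_front(); // evict oldest
        }
        self.messages.push_back(msg.into());
    }

    fn recall(&self) -> Vec<&str> {
        self.messages.iter().map(String::as_str).collect()
    }
}

fn main() {
    let mut mem = SlidingWindowMemory::new(2);
    mem.remember("first");
    mem.remember("second");
    mem.remember("third"); // "first" is evicted
    assert_eq!(mem.recall(), vec!["second", "third"]);
}
```

A vector-store backend such as Qdrant replaces the `VecDeque` with similarity search over embeddings, but exposes the same remember/recall shape to the agent.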

Multi-Agent Orchestration#

  • Typed Pub/Sub: Type-safe publish-subscribe communication
  • Environment: Agent lifecycle management and coordination
  • Actor Model: Ractor-based Actor model implementation for high concurrency and horizontal scaling
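Typed pub/sub means the message type of a topic is fixed at compile time, so a subscriber can never receive the wrong payload. The std-only sketch below illustrates the idea with `mpsc` channels; AutoAgents' real communication layer (built on the Ractor actor framework) is more elaborate, and these names are assumptions.

```rust
use std::sync::mpsc::{channel, Receiver, Sender};

// A topic parameterized over its message type: subscribing and
// publishing are both checked against T at compile time.
struct Topic<T: Clone> {
    subscribers: Vec<Sender<T>>,
}

impl<T: Clone> Topic<T> {
    fn new() -> Self {
        Self { subscribers: Vec::new() }
    }

    fn subscribe(&mut self) -> Receiver<T> {
        let (tx, rx) = channel();
        self.subscribers.push(tx);
        rx
    }

    fn publish(&self, msg: T) {
        for sub in &self.subscribers {
            let _ = sub.send(msg.clone()); // ignore disconnected subscribers
        }
    }
}

#[derive(Clone, Debug, PartialEq)]
struct TaskDone { agent: String }

fn main() {
    let mut topic: Topic<TaskDone> = Topic::new();
    let rx1 = topic.subscribe();
    let rx2 = topic.subscribe();
    topic.publish(TaskDone { agent: "coder".into() });
    assert_eq!(rx1.recv().unwrap().agent, "coder");
    assert_eq!(rx2.recv().unwrap().agent, "coder");
}
```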

LLM Provider Support#

Cloud Providers: OpenAI, OpenRouter, Anthropic, DeepSeek, xAI, Phind, Groq, Google, Azure OpenAI, MiniMax

Local Providers: Ollama, Mistral-rs, Llama-Cpp

Experimental: Burn, Onnx
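Supporting this many backends works because agents are written against a single provider abstraction, so swapping OpenAI for Ollama is a construction-time change. The trait below is a hypothetical sketch of that pattern, not the actual `autoagents-llm` API, with a fake echo provider standing in for a real backend.

```rust
// One trait over many backends: cloud and local providers implement the
// same interface, and agent code depends only on the trait.
trait LlmProvider {
    fn name(&self) -> &str;
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

// A fake provider for testing; a real one would call OpenAI, Ollama,
// llama.cpp, etc. over HTTP or FFI.
struct EchoProvider;

impl LlmProvider for EchoProvider {
    fn name(&self) -> &str { "echo" }
    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("echo: {prompt}"))
    }
}

// Agent-side code sees only the trait object.
fn ask(provider: &dyn LlmProvider, prompt: &str) -> String {
    provider.complete(prompt).unwrap_or_else(|e| format!("error: {e}"))
}

fn main() {
    let provider = EchoProvider;
    assert_eq!(ask(&provider, "hi"), "echo: hi");
}
```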

Platform Support#

  • Server
  • Edge devices
  • WASM (Browser)
  • Android

Extension Capabilities#

  • Speech Processing: Local TTS (Text-to-Speech) and STT (Speech-to-Text) support
  • Observability: OpenTelemetry tracing and metrics integration
  • MCP Integration: Model Context Protocol support

Typical Use Cases#

  1. Cloud AI Agent Services: Building high-concurrency, production-grade multi-agent systems
  2. Edge and Mobile Inference: Running local model-driven agents on Android or edge devices
  3. Secure Tool Execution: Running untrusted code or tools in WASM sandbox
  4. Coding Agent: Developing code assistants with file operation capabilities
  5. Rapid Prototyping: Quick workflow setup via Derive Macros and CLI

Module Structure#

AutoAgents/
├── crates/
│   ├── autoagents/              # Main library entry
│   ├── autoagents-core/         # Core Agent framework
│   ├── autoagents-protocol/     # Shared protocols and event types
│   ├── autoagents-llm/          # LLM provider abstraction and implementations
│   ├── autoagents-telemetry/    # OpenTelemetry integration
│   ├── autoagents-toolkit/      # Ready-to-use tool collection
│   ├── autoagents-mistral-rs/   # Mistral-rs local backend
│   ├── autoagents-llamacpp/     # LlamaCpp local backend
│   ├── autoagents-speech/       # TTS/STT speech processing
│   ├── autoagents-qdrant/       # Qdrant vector store integration
│   └── autoagents-derive/       # Procedural macros
├── examples/                    # Example code

Performance Characteristics#

  • Memory Efficient: Optimized memory usage for resource-constrained environments
  • Concurrent: Full async/await support for efficient concurrent task handling
  • Scalable: Horizontal scaling support for multi-agent coordination
  • Type Safe: Rust compile-time type guarantees reduce runtime errors
