Qwen-Agent
Qwen-Agent is an extensible, production-oriented agent framework for building interactive applications powered by Qwen family models (Qwen 3.0 or later recommended). It focuses on enabling models to follow instructions, call external tools, plan multi-step tasks, and maintain memory across interactions. The project ships ready-made components and example applications (Browser Assistant, Code Interpreter, Custom Assistant), making it straightforward to prototype and deploy rich LLM-based agents.
Key features
- Function / Tool Calling: Native support for model-driven function calls and parallel function call patterns; integrates with external tools and custom tool registrations.
- MCP (Model Context Protocol): Support for selecting and using MCP servers (memory, filesystem, sqlite, etc.) to provide modular context services.
- RAG (Retrieval-Augmented Generation): Built-in recipes and examples for fast RAG pipelines for long-document QA and retrieval workflows.
- Code Interpreter: Example and tooling for executing and interacting with code as part of agent flows (note: interpreter is not sandboxed — intended for local/test use only).
- Multi-model & Deploy Options: Works with DashScope model service or self-hosted OpenAI-compatible endpoints (vLLM, Ollama, etc.); provides parameter hooks for model API options.
- GUI: Gradio-based demo UI (requires Python 3.10+ when GUI extra is installed).
- Example-driven: Ships with numerous example scripts and notebooks (browser assistant, tool-call demos for Qwen3/Qwen3-Coder/QwQ-32B, MCP cookbooks, RAG examples).
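To make the MCP bullet above concrete, Qwen-Agent's examples pass MCP servers to an agent as a `mcpServers` entry in the tool list, alongside built-in tools referenced by name. The specific server commands and the `/tmp/workdir` path below are illustrative placeholders; which servers you run depends on what you have installed:

```python
# A tool list mixing MCP servers with a built-in tool. The npx packages and
# the filesystem path are example values, not prescriptions.
tools = [
    {
        'mcpServers': {
            'memory': {
                'command': 'npx',
                'args': ['-y', '@modelcontextprotocol/server-memory'],
            },
            'filesystem': {
                'command': 'npx',
                'args': ['-y', '@modelcontextprotocol/server-filesystem', '/tmp/workdir'],
            },
        }
    },
    'code_interpreter',  # built-in tool referenced by name
]
```

Each named server is launched as a subprocess and exposed to the model as a set of callable tools.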
Installation & quick start
Install the stable package from PyPI with the full feature set:

```shell
pip install -U "qwen-agent[gui,rag,code_interpreter,mcp]"
# or minimal: pip install -U qwen-agent
```

Or install from the development source:

```shell
git clone https://github.com/QwenLM/Qwen-Agent.git
cd Qwen-Agent
pip install -e ./'[gui,rag,code_interpreter,mcp]'
```

You then configure an LLM backend (DashScope or an OpenAI-compatible server), register tools, and create an agent (e.g., Assistant) to run a chat loop or launch the provided WebUI.
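Backend configuration is a plain dictionary passed to the agent. The two shapes below sketch the DashScope and self-hosted cases; the model names, endpoint URL, and port are assumed example values, not defaults:

```python
# Option A: DashScope-hosted model service.
# The API key can also come from the DASHSCOPE_API_KEY environment variable.
llm_cfg = {
    'model': 'qwen-max',        # example model name on DashScope
    'model_server': 'dashscope',
}

# Option B: self-hosted OpenAI-compatible endpoint (e.g. vLLM, Ollama).
llm_cfg_local = {
    'model': 'Qwen3-8B',                          # whatever name your server exposes
    'model_server': 'http://localhost:8000/v1',   # assumed local endpoint
    'api_key': 'EMPTY',
}
```

Either dictionary is then handed to an agent class such as `Assistant` via its `llm` argument.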
Architecture & extensibility
- Modular Components: Atomic LLM wrapper classes (inherit from BaseChatModel), Tools (BaseTool), and high-level Agents (Agent, Assistant) allow mixing and matching capabilities.
- Tool Registration: Custom tools are registered via decorator and expose a schema (parameters, description) so the agent can call them programmatically.
- Function Call Templates: Provides configurable function-call templates (e.g., nous template) and supports native API tool-call parsing when desired.
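The registration pattern above can be sketched without the framework installed. Qwen-Agent's real decorator is `register_tool` from `qwen_agent.tools.base` (with tools subclassing `BaseTool`); the library-free stand-in below only mimics that shape to show how a name, a parameter schema, and a `call` method fit together:

```python
import json

TOOL_REGISTRY = {}  # stand-in for the framework's internal registry


def register_tool(name):
    """Minimal stand-in for a registration decorator: maps a name to a tool class."""
    def wrap(cls):
        cls.name = name
        TOOL_REGISTRY[name] = cls
        return cls
    return wrap


@register_tool('word_count')
class WordCount:
    # The schema is what lets the model decide when and how to call the tool.
    description = 'Count the words in a piece of text.'
    parameters = [{'name': 'text', 'type': 'string',
                   'description': 'Text to analyse', 'required': True}]

    def call(self, params: str) -> str:
        # Tool arguments arrive as a JSON string produced by the model.
        text = json.loads(params)['text']
        return json.dumps({'words': len(text.split())})


# The agent looks the tool up by name and invokes it with a JSON payload:
result = TOOL_REGISTRY['word_count']().call(json.dumps({'text': 'hello agent world'}))
```

In the real framework, registration also makes the tool addressable from an agent's `function_list` by its string name.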
Use cases
- Chatbots / Browser assistants (BrowserQwen example)
- Document question-answering at scale (fast RAG and 1M-token workflows)
- Interactive code execution and data analysis (Code Interpreter example)
- Custom assistants integrating external services or local tools
Community & maintenance
- Repository created: 2023-09-22. The project is actively maintained and includes periodic updates and demos (examples for Qwen3 tool-call, Qwen3-VL, MCP cookbooks, etc.).
- License: Apache 2.0 (see repository header).
- Ecosystem links: PyPI package, docs, ReadTheDocs, and integrations with Hugging Face / ModelScope.
Notes & cautions
- The code interpreter executes code in the user's environment and is not sandboxed — avoid using it for untrusted or production tasks without containment.
- Model-specific behaviors (templates, parsing) may require configuration depending on the chosen Qwen model/version and model server.
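Such model-specific configuration typically lives under `generate_cfg` in the LLM config. The sketch below shows one plausible shape, selecting the nous-style function-call template mentioned above; the key names and values here are assumptions to verify against your Qwen-Agent version and model server:

```python
# Example generate_cfg knobs; which keys apply depends on the Qwen-Agent
# version and model server in use, so treat these values as assumptions.
llm_cfg = {
    'model': 'Qwen3-8B',                         # placeholder model name
    'model_server': 'http://localhost:8000/v1',  # assumed OpenAI-compatible server
    'api_key': 'EMPTY',
    'generate_cfg': {
        'fncall_prompt_type': 'nous',  # select the nous function-call template
    },
}
```

Alternatively, when the server performs native tool-call parsing, the template-based prompting can be bypassed in favour of the API's own function-call fields.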
For developers and teams building LLM-powered agents, Qwen-Agent offers a practical, example-rich framework to accelerate prototyping and deployment while integrating advanced features like RAG, MCP, and function calling.
