Hello-Agents: Building Intelligent Agents from Scratch
Overview
Hello-Agents is a comprehensive open-source educational project developed by the Datawhale community, providing a thorough guide to the principles and practice of intelligent agent systems. Launched as 2025 was proclaimed the 'Year of Agents' (following 2024's focus on large-scale model competition), the tutorial fills a critical gap in systematic, hands-on resources for agent development. Unlike process-driven software engineering approaches (e.g., Dify, Coze, n8n) that treat LLMs as backend processors, Hello-Agents emphasizes true AI-native agents, in which AI drives the core logic. The project guides learners across the entire spectrum, from foundational concepts to advanced implementations, enabling them to evolve from mere users of large language models (LLMs) into proficient builders of multi-agent applications.
Core Objectives and Structure
The tutorial is meticulously structured into five parts, ensuring a progressive learning path that balances theory and practice. This design makes it ideal for AI developers, software engineers, students, and self-learners with basic Python skills and LLM familiarity, without requiring deep expertise in algorithms or model training.
Part 1: Foundations of Agents and Language Models (Chapters 1-3)
This foundational section demystifies intelligent agents by defining their types, paradigms, and real-world applications. It traces the evolution from symbolic AI to LLM-driven agents, highlighting key milestones. Chapter 3 solidifies LLM essentials, including Transformers, prompting techniques, popular models (e.g., GPT series, Claude), and their inherent limitations like hallucination and context constraints. By the end, learners grasp the theoretical bedrock for agentic systems.
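As a small taste of the prompting techniques covered in Chapter 3, the sketch below sends a few-shot prompt through the OpenAI Python SDK; the model name and the sentiment-labeling task are illustrative assumptions, not examples taken from the book.

```python
# Minimal few-shot prompting sketch (illustrative; model name is an assumption).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system", "content": "You label the sentiment of a sentence as positive or negative."},
    # Few-shot examples steer the model toward the desired output format.
    {"role": "user", "content": "The battery died after an hour."},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": "The screen is gorgeous and setup took minutes."},
    {"role": "assistant", "content": "positive"},
    # The actual query.
    {"role": "user", "content": "Shipping was slow, but support resolved it quickly."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```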
Part 2: Building LLM-Based Agents (Chapters 4-7)
Here, the focus shifts to practical construction. Chapter 4 walks through implementing classic paradigms such as ReAct (Reasoning and Acting), Plan-and-Solve, and Reflection, using simple code snippets. Chapter 5 explores low-code platforms—Coze for bot creation, Dify for workflow orchestration, and n8n for automation—demonstrating rapid prototyping without deep coding. Chapter 6 delves into established frameworks like AutoGen for multi-agent orchestration, AgentScope for scalable simulations, and LangGraph for graph-based agent flows. The pinnacle, Chapter 7, empowers learners to develop their own agent framework (HelloAgents) from scratch, leveraging OpenAI's native APIs. This part emphasizes 'using wheels' (existing tools) alongside 'inventing wheels' (custom builds), with all code provided in the project's code directory for hands-on experimentation.
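To make the ReAct paradigm concrete, here is a minimal, self-contained sketch of the Thought/Action/Observation loop against OpenAI's chat API. It is not the book's HelloAgents framework: the single `calculator` tool, the stop marker, the prompt wording, and the model name are assumptions made for illustration.

```python
# Minimal ReAct loop sketch -- not the HelloAgents framework, just the core idea.
import re
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "Answer by interleaving lines of the form:\n"
    "Thought: <reasoning>\n"
    "Action: calculator[<python expression>]\n"
    "Wait for an Observation after each Action. "
    "Finish with: Final Answer: <answer>"
)

def calculator(expression: str) -> str:
    """Toy tool: evaluate an arithmetic expression (unsafe eval, demo only)."""
    try:
        return str(eval(expression, {"__builtins__": {}}, {}))
    except Exception as exc:
        return f"error: {exc}"

def react(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages  # model name is an assumption
        ).choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        if "Final Answer:" in reply:                      # reasoning is done
            return reply.split("Final Answer:")[-1].strip()
        match = re.search(r"Action: calculator\[(.+?)\]", reply)
        if match:                                         # act, then feed back the observation
            observation = calculator(match.group(1))
            messages.append({"role": "user", "content": f"Observation: {observation}"})
    return "No final answer within the step budget."

print(react("What is 17 * 23 + 4?"))
```

The same loop generalizes to Plan-and-Solve or Reflection by changing what the prompt asks the model to emit between tool calls.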
Part 3: Advanced Extensions (Chapters 8-12)
Building on the self-developed framework, this section introduces sophisticated techniques for robust agents. Chapter 8 covers memory systems (short-term vs. long-term), Retrieval-Augmented Generation (RAG), and storage solutions to mitigate LLM forgetfulness. Chapter 9 addresses context engineering, enabling sustained interactions through just-in-time context management and situational awareness. Chapter 10 dissects communication protocols such as MCP (Model Context Protocol), A2A (Agent-to-Agent), and ANP (Agent Network Protocol), facilitating seamless multi-agent collaboration. Chapter 11 provides a full pipeline for Agentic Reinforcement Learning, from Supervised Fine-Tuning (SFT) to Group Relative Policy Optimization (GRPO), including practical LLM training. Finally, Chapter 12 equips learners with evaluation metrics, benchmarks (e.g., GAIA, AgentBench), and frameworks for rigorously assessing agent performance.
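To give a flavor of the memory and RAG material in Chapter 8, the sketch below embeds a handful of remembered notes, retrieves the closest one to a query by cosine similarity, and injects it into the prompt. The embedding model name, the in-memory list standing in for a vector store, and the note texts are assumptions for illustration, not the book's implementation.

```python
# Minimal RAG sketch: embed notes, retrieve the closest one, answer with it as context.
import numpy as np
from openai import OpenAI

client = OpenAI()

notes = [  # stand-in for a long-term memory / vector store
    "The user prefers vegetarian restaurants.",
    "The user's home airport is SFO.",
    "The user is allergic to peanuts.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

note_vecs = embed(notes)

def retrieve(query: str) -> str:
    qv = embed([query])[0]
    scores = note_vecs @ qv / (np.linalg.norm(note_vecs, axis=1) * np.linalg.norm(qv))
    return notes[int(np.argmax(scores))]          # top-1 note by cosine similarity

def answer(query: str) -> str:
    context = retrieve(query)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # model name is an assumption
        messages=[
            {"role": "system", "content": f"Use this remembered fact if relevant: {context}"},
            {"role": "user", "content": query},
        ],
    )
    return resp.choices[0].message.content

print(answer("Book me dinner somewhere I'd like near the hotel."))
```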
Part 4: Integrated Case Studies (Chapters 13-15)
Theory meets application in these real-world projects. Chapter 13 builds an intelligent travel assistant using MCP for multi-agent coordination, integrating planning, booking, and personalization. Chapter 14 recreates a DeepResearch Agent for automated in-depth investigations, parsing complex queries into research pipelines. Chapter 15 constructs a 'Cyber Town' simulation, blending agents with game mechanics to model social dynamics, showcasing emergent behaviors in agent societies.
Part 5: Capstone and Future Outlook (Chapter 16)
The finale challenges learners to design a complete multi-agent application as a 'graduation project,' synthesizing all prior knowledge. It also peers into the future of agentic AI, discussing emerging trends like hybrid human-agent systems and ethical considerations.
Unique Features and Community Engagement
What sets Hello-Agents apart is its commitment to accessibility and collaboration. All content is freely available via online reading (GitBook-style), local setups, or PDF downloads (with subtle Datawhale watermarks to prevent commercialization). Accompanying resources include exercise problems, a Cookbook for advanced recipes, and a 'Community Contributions' section featuring extras like agent interview prep, Dify tutorials, and FAQs. Learners are encouraged to run, debug, and modify code, fostering deep understanding.
The project thrives on open contributions: report bugs via Issues, suggest improvements, submit PRs for content enhancements, or share personal projects. Future plans include an English version, bilingual video courses (with step-by-step implementations), and co-creation of expanded applications in Chapter 16.
Licensing and Acknowledgments
Released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, Hello-Agents ensures open access while protecting against misuse. Core contributors include Si Zhou Chen (project lead and primary author), Tao Sun (Chapter 9), Shu Fan Jiang (exercises), and experts such as Pei Lin Huang and Xin Min Zeng. Special thanks go to the broader Datawhale team and community bloggers for enriching the extras.
In summary, Hello-Agents isn't just a tutorial—it's a gateway to mastering agentic AI, empowering you to innovate in this rapidly evolving field. Dive in, build, and contribute to shape the future of intelligent systems.
