Strands Agents is a lightweight, flexible SDK that takes a model-driven approach to building and running AI agents. It lets developers create everything from basic conversational assistants to sophisticated autonomous, multi-agent workflows with minimal code, and it scales from local development to production deployment.

Key highlights include:

- Model-driven: the LLM plans and invokes tools; you define an agent in a few lines of code.
- Model-agnostic: built-in providers for Amazon Bedrock, Anthropic, Google Gemini, OpenAI, Ollama, and more, plus an extensible interface for custom providers.
- Rich tooling: Python-function tools via a simple decorator, hot reloading from a directory, and native MCP server support.
- Fully customizable, making it suitable for both prototyping and production environments.
To begin, ensure you have Python 3.10+ installed. Create a virtual environment and install the core package along with optional tools:
```shell
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
pip install strands-agents strands-agents-tools
```

For the default Amazon Bedrock model, configure AWS credentials and enable access to models like Claude 3.5 Sonnet in the us-west-2 region. Refer to the Quickstart Guide for other providers.
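If you have not configured AWS credentials yet, one common approach (among several; the key values below are placeholders, so substitute your own) is to set them via environment variables:

```shell
# Placeholder values -- replace with your own credentials.
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_DEFAULT_REGION="us-west-2"
```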
Create a simple agent with a calculator tool:
```python
from strands import Agent
from strands_tools import calculator

agent = Agent(tools=[calculator])
response = agent("What is the square root of 1764?")
print(response)  # Output: The square root of 1764 is 42.
```

This demonstrates the SDK's simplicity: define an agent, add tools, and invoke it with natural language queries.
Tools are Python functions decorated with @tool, allowing LLMs to understand and invoke them via docstrings.
```python
from strands import Agent, tool

@tool
def word_count(text: str) -> int:
    """Count the number of words in the given text."""
    return len(text.split())

agent = Agent(tools=[word_count])
response = agent("How many words are in 'Hello, world!'?")
```

For dynamic tool management, enable hot reloading from a directory:

```python
from strands import Agent

agent = Agent(load_tools_from_directory=True)  # Watches ./tools/
response = agent("Use tools from the directory to answer.")
```

The SDK natively supports MCP servers, enabling access to thousands of tools. Example with an AWS documentation MCP server:
```python
from strands import Agent
from strands.tools.mcp import MCPClient
from mcp import stdio_client, StdioServerParameters

aws_docs_client = MCPClient(
    lambda: stdio_client(StdioServerParameters(command="uvx", args=["awslabs.aws-documentation-mcp-server@latest"]))
)

with aws_docs_client:
    agent = Agent(tools=aws_docs_client.list_tools_sync())
    response = agent("Explain Amazon Bedrock and its Python usage.")
```

This allows agents to leverage external tools without custom implementations.
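To see why docstrings and type hints matter for tools: conceptually, a @tool-style decorator can turn a function's signature and docstring into a schema the model uses to decide when and how to call it. The sketch below is a self-contained illustration of that idea in plain Python, not the SDK's actual implementation; the schema format is a simplified assumption.

```python
import inspect

def describe_tool(func):
    """Build a simplified tool schema from a function's signature and docstring."""
    sig = inspect.signature(func)
    return {
        "name": func.__name__,
        "description": (func.__doc__ or "").strip(),
        "parameters": {
            # Map each parameter to its annotated type name, or "any" if untyped.
            name: getattr(param.annotation, "__name__", "any")
            for name, param in sig.parameters.items()
        },
    }

def word_count(text: str) -> int:
    """Count the number of words in the given text."""
    return len(text.split())

schema = describe_tool(word_count)
print(schema["name"])        # word_count
print(schema["parameters"])  # {'text': 'str'}
```

A schema like this is what gets serialized into the model's context, which is why well-written docstrings directly improve tool selection.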
Strands Agents supports a wide array of models through built-in providers. Configure and use them easily:
Amazon Bedrock:
```python
from strands import Agent
from strands.models import BedrockModel

bedrock_model = BedrockModel(model_id="anthropic.claude-3-5-sonnet-20240620-v1:0", temperature=0.3, streaming=True)
agent = Agent(model=bedrock_model)
agent("Discuss Agentic AI.")
```

Google Gemini:

```python
from strands.models.gemini import GeminiModel

gemini_model = GeminiModel(client_args={"api_key": "your_key"}, model_id="gemini-1.5-flash", params={"temperature": 0.7})
agent = Agent(model=gemini_model)
```

Ollama (Local):

```python
from strands.models.ollama import OllamaModel

ollama_model = OllamaModel(host="http://localhost:11434", model_id="llama3")
agent = Agent(model=ollama_model)
```

Other supported providers include Anthropic, Cohere, LiteLLM, llama.cpp, LlamaAPI, MistralAI, OpenAI, SageMaker, and Writer. Custom providers can be added via the SDK's extensible interface.
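The document does not spell out the provider interface, but the extensibility pattern it describes typically looks like the following self-contained sketch: an abstract base class that each backend implements, so agents can be written against the interface rather than a specific model. The names here (ModelProvider, generate, EchoModel) are illustrative assumptions, not the SDK's real API.

```python
from abc import ABC, abstractmethod

class ModelProvider(ABC):
    """Minimal provider interface: every backend implements generate()."""

    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class EchoModel(ModelProvider):
    """Toy provider that echoes the prompt -- stands in for a real LLM backend."""

    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"

# Code written against ModelProvider works with any conforming backend.
model = EchoModel()
print(model.generate("hello"))  # echo: hello
```

Swapping Bedrock for Gemini or Ollama in the examples above is the same move: each provider class conforms to a common interface, so the Agent doesn't care which backend it talks to.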
The optional strands-agents-tools package includes ready-to-use tools like calculators for quick testing. More tools are available on GitHub.
For real-time voice interactions, the SDK offers experimental bidirectional streaming with persistent connections. Supported models include Amazon Nova Sonic, Google Gemini Live, and OpenAI Realtime API.
```python
import asyncio

from strands.experimental.bidi import BidiAgent
from strands.experimental.bidi.models import BidiNovaSonicModel
from strands.experimental.bidi.io import BidiAudioIO
from strands_tools import calculator

async def main():
    model = BidiNovaSonicModel()
    agent = BidiAgent(model=model, tools=[calculator])
    audio_io = BidiAudioIO()
    await agent.run(inputs=[audio_io.input()], outputs=[audio_io.output()])

asyncio.run(main())
```

This enables interruptible, continuous conversations, configurable for audio rates, voices, and devices.
The SDK is licensed under Apache 2.0 and welcomes contributions; to report security issues or bugs, see the Contributing Guide. With 4341 stars on GitHub, it is gaining traction in the AI agent development community.