A2UI: Agent-to-User Interface

A2UI is an open-source declarative format and library from Google that lets agents generate updateable, framework-agnostic user interfaces as JSON. Clients map A2UI components to native widgets (Web, Flutter, etc.), enabling secure, incremental, and portable rendering of agent-generated UI. The project is at v0.8 (Public Preview).

Introduction

A2UI — Agent-to-User Interface

A2UI is a declarative open standard, with reference implementations, that lets AI agents (or systems built on LLMs) "speak UI" by sending structured JSON describing the intent and composition of a user interface. Instead of sending executable code from an agent to a client, A2UI describes a flat list of components and data bindings that a trusted client-side renderer maps to native UI widgets. This design reduces security risk while enabling rich, interactive, and incrementally updateable interfaces.
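
For illustration, a payload in the spirit of A2UI might look like the TypeScript sketch below. The exact schema is defined by the A2UI spec; the field names used here (id, type, children, dataModel) are assumptions chosen for readability, not the official format.

    // A hypothetical, simplified payload: a flat list of components referenced
    // by ID, plus a data model the components bind to. Field names are
    // illustrative, not the official A2UI schema.
    const illustrativePayload = {
      components: [
        { id: "root", type: "Card", children: ["greeting", "confirm"] },
        { id: "greeting", type: "Text", text: "Hi Ada, ready to book your table?" },
        { id: "confirm", type: "Button", label: "Confirm", action: "confirm_booking" },
      ],
      dataModel: { booking: { confirmed: false } },
    };

    // The agent serializes this object to JSON and sends it to the client;
    // no executable code ever crosses the agent-to-client boundary.
    console.log(JSON.stringify(illustrativePayload, null, 2));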

Key ideas
  • Declarative, data-first format: UI is represented as JSON, not executable code. Agents output component descriptors (e.g., card, button, text-field) and the client resolves them to concrete implementations.
  • Security-first: Clients maintain a catalog of pre-approved components and handle sandboxing and access controls. Agents can request components from the catalog but cannot run arbitrary code on the client.
  • LLM-friendly and incremental updates: A2UI uses a flat component list with ID references, which LLMs can generate or update incrementally for progressive rendering and responsive interactions (sketched after this list).
  • Framework-agnostic portability: The same A2UI payload can be rendered by web renderers (Lit, React), Flutter, or other native frameworks by mapping abstract component types to local widgets.
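
To make the flat-list-with-IDs idea concrete, the TypeScript sketch below shows one way a client could merge an incremental update that targets components by ID. The update shape and helper names are assumptions for illustration, not part of the published spec.

    // Hypothetical incremental update: the agent re-sends only the components
    // that changed, and the client merges them into its flat component map by ID.
    type ComponentNode = { id: string; type: string; [prop: string]: unknown };

    function applyUpdate(
      current: Map<string, ComponentNode>,
      updatedComponents: ComponentNode[],
    ): Map<string, ComponentNode> {
      const next = new Map(current);
      for (const component of updatedComponents) {
        next.set(component.id, component); // replace or insert by ID
      }
      return next;
    }

    // Example: the agent swaps a button label without resending the whole tree.
    const initial = new Map<string, ComponentNode>([
      ["confirm", { id: "confirm", type: "Button", label: "Confirm" }],
    ]);
    const updated = applyUpdate(initial, [
      { id: "confirm", type: "Button", label: "Confirmed" },
    ]);
    console.log(updated.get("confirm"));
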
Architecture / Flow
  1. Generation: An agent (e.g., driven by a large model) produces an A2UI Response JSON describing components, properties, and data bindings.
  2. Transport: The payload is sent to the client via a transport (A2A, AG UI, REST, etc.).
  3. Resolution: The client A2UI renderer parses the JSON and resolves component types against its trusted component catalog (and optional smart wrappers).
  4. Rendering: The renderer maps the abstract components to native controls and wires up events/data bindings to the host application (see the sketch after this list).
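
Steps 3 and 4 can be pictured as a small resolution loop: parse the payload, look up each component type in the trusted catalog, and refuse anything that is not pre-approved. The catalog and renderer shapes in this TypeScript sketch are illustrative, not the actual library API.

    // Illustrative resolution step: only component types registered in the
    // client-side catalog are rendered; unknown types are skipped, never executed.
    type RenderFn = (props: Record<string, unknown>) => void;

    const trustedCatalog = new Map<string, RenderFn>([
      ["Text", (props) => console.log("render text:", props.text)],
      ["Button", (props) => console.log("render button:", props.label)],
    ]);

    function resolveAndRender(payloadJson: string): void {
      const payload = JSON.parse(payloadJson) as {
        components: Array<{ id: string; type: string; [prop: string]: unknown }>;
      };
      for (const { id, type, ...props } of payload.components) {
        const render = trustedCatalog.get(type);
        if (!render) {
          console.warn(`Skipping "${id}": type "${type}" is not in the catalog`);
          continue;
        }
        render(props);
      }
    }

    resolveAndRender(JSON.stringify({
      components: [
        { id: "greeting", type: "Text", text: "Hello!" },
        { id: "evil", type: "ScriptTag", src: "https://example.com/x.js" }, // rejected
      ],
    }));
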
Use cases
  • Dynamic data collection: Agents create contextual forms (date pickers, sliders, conditional inputs) on the fly (an example payload follows this list).
  • Remote sub-agents: Specialized agents return UI payloads to be embedded in a parent conversation window.
  • Adaptive workflows & dashboards: Enterprise agents generate approval flows or visualizations that update as the conversation evolves.
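
As a concrete illustration of dynamic data collection, an agent answering "book a table" might respond with a small form payload such as the TypeScript sketch below; again, the component types and field names are assumptions, not the official schema.

    // Hypothetical contextual form generated mid-conversation: the agent asks
    // for a date and party size instead of parsing them from free text.
    const bookingForm = {
      components: [
        { id: "form", type: "Card", children: ["date", "party", "submit"] },
        { id: "date", type: "DatePicker", label: "Date", bind: "/booking/date" },
        { id: "party", type: "Slider", label: "Party size", min: 1, max: 10, bind: "/booking/partySize" },
        { id: "submit", type: "Button", label: "Find a table", action: "search_tables" },
      ],
      dataModel: { booking: { date: null, partySize: 2 } },
    };

    // When the user submits, the client reports the bound values back to the
    // agent as plain data, never as code.
    const userAction = {
      action: "search_tables",
      data: { booking: { date: "2025-10-01", partySize: 4 } },
    };
    console.log(bookingForm, userAction);
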
Implementations & Dependencies
  • Reference renderers: initial web (Lit) and Flutter support; more renderers planned (React, Jetpack Compose, SwiftUI).
  • Integrations: sample agent backends demonstrate using Gemini (or other LLMs) to produce A2UI payloads; transport and host frameworks are pluggable (see the sketch after this list).
  • The repository also includes samples (e.g., a restaurant finder demo), a widget builder, and client/server examples.
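
An agent backend in this setup boils down to prompting a model for JSON and validating the result before it leaves the server. The TypeScript sketch below abstracts the LLM call behind a placeholder function (callModel), so it is a shape sketch rather than a reproduction of the repo's Gemini samples.

    // Sketch of an agent backend that asks an LLM for UI as JSON and validates
    // it before sending it to the client over whatever transport is in use.

    // Placeholder for an LLM call (e.g., to Gemini); returns a canned response
    // here so the sketch stays self-contained.
    async function callModel(prompt: string): Promise<string> {
      console.log("prompt length:", prompt.length);
      return JSON.stringify({
        components: [{ id: "note", type: "Text", text: "Pick a date below." }],
        dataModel: {},
      });
    }

    const SYSTEM_PROMPT = `Respond ONLY with a JSON object of the form
    { "components": [ { "id": string, "type": string, ... } ], "dataModel": object }.
    Allowed component types: Card, Text, Button, DatePicker.`;

    async function generateUi(userRequest: string): Promise<object> {
      const raw = await callModel(`${SYSTEM_PROMPT}\nUser request: ${userRequest}`);
      const payload = JSON.parse(raw); // throws on malformed model output
      if (!Array.isArray(payload.components)) {
        throw new Error("Model did not return a component list");
      }
      return payload; // ship as plain JSON over A2A, REST, etc.
    }

    generateUi("Book a table for four").then((ui) => console.log(ui));
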
Getting started (high level)
  • Clone the repository, run the sample agent, build a renderer (e.g., Lit), and run the sample shell client.
  • Client developers register a component catalog and map A2UI types to native components, enforcing sandboxing/trust rules (sketched below).
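
Registering a catalog might look roughly like the following in a web client: each approved A2UI type name maps to a factory that builds a native DOM control. The registration helpers shown are a TypeScript sketch under that assumption, not the library's actual interface.

    // Hypothetical client-side catalog registration: each approved type maps to
    // a factory that produces a native control. Types outside this map never render.
    type WidgetFactory = (props: Record<string, unknown>) => HTMLElement;

    const catalog = new Map<string, WidgetFactory>();

    function registerComponent(type: string, factory: WidgetFactory): void {
      catalog.set(type, factory);
    }

    registerComponent("Text", (props) => {
      const el = document.createElement("p");
      el.textContent = String(props.text ?? "");
      return el;
    });

    registerComponent("Button", (props) => {
      const el = document.createElement("button");
      el.textContent = String(props.label ?? "OK");
      // User events flow back to the agent as structured data, not code.
      el.addEventListener("click", () => console.log("action:", props.action));
      return el;
    });
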
Roadmap & status
  • Current status: v0.8 (Public Preview) — spec and implementations functional but evolving.
  • Planned: spec stabilization to v1.0, additional official renderers (React, iOS, Android), more transports and integrations with agent frameworks.
Security & extensibility

A2UI intentionally places security responsibilities on the client: only pre-approved components from the catalog are rendered, and developers can create "Smart Wrappers" to connect existing secure components (e.g., sandboxed iframes) to the A2UI event/data model. This approach balances expressivity with the need to avoid executing untrusted agent-generated code in client contexts.
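
On the web, a Smart Wrapper might amount to a catalog component that hosts an existing sandboxed iframe and translates its postMessage traffic into the same action/data events the rest of the catalog emits. The class and message handling below are an illustrative TypeScript sketch of that idea, not the project's actual wrapper API.

    // Illustrative "smart wrapper": embeds an existing sandboxed component (an
    // iframe here) and bridges its messages into A2UI-style action events, so
    // the agent only ever sees structured data.
    class SandboxedChartWrapper {
      private frame = document.createElement("iframe");

      constructor(
        src: string,
        private onAction: (action: string, data: unknown) => void,
      ) {
        this.frame.src = src;
        // Lock the embedded content down; it cannot touch the host page directly.
        this.frame.setAttribute("sandbox", "allow-scripts");
        window.addEventListener("message", (event) => {
          if (event.source !== this.frame.contentWindow) return; // ignore others
          this.onAction("chart_interaction", event.data); // forward as data
        });
      }

      mount(parent: HTMLElement): void {
        parent.appendChild(this.frame);
      }
    }

    // Usage: the wrapper is registered in the catalog like any other component.
    const wrapper = new SandboxedChartWrapper(
      "https://example.com/chart.html",
      (action, data) => console.log("A2UI event:", action, data),
    );
    wrapper.mount(document.body);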

License

A2UI is released under the Apache 2.0 license and welcomes community contributions (renderers, samples, spec feedback).

Information

  • Website: github.com
  • Authors: Google
  • Published date: 2025/09/24