中文译文说明

这本 《Leptos Book》 单纯是一个 Demo, 用来展示如何在 ShanTou.University 这个域名下公开一本书(或文档),并让这本书公开、免费、持续发布与更新。

具体说明参见 在 ST.U 发布一本书。

《Leptos Book》 所有译文均由 Google Gemini 1.5 Pro 提供。

简介

本书旨在介绍 Leptos Web 框架。 它将逐步讲解构建应用程序所需的基本概念, 从一个简单的浏览器渲染应用程序开始,逐步构建一个具有服务器端渲染和 hydration 功能的全栈应用程序。

本指南假设你不了解细粒度响应式或现代 Web 框架的细节。它假设你熟悉 Rust 编程语言、HTML、CSS、DOM 和基本的 Web API。

Leptos 与 Solid (JavaScript) 和 Sycamore (Rust) 等框架最为相似。 它与 React (JavaScript)、Svelte (JavaScript)、Yew (Rust) 和 Dioxus (Rust) 等其他框架也有一些相似之处,因此了解其中一个框架也可能更容易理解 Leptos。

你可以在 Docs.rs 上找到 API 各个部分的更详细文档。

本书的源代码可在 此处 获取。 欢迎提交 PR 以修复拼写错误或进行澄清。

译注: 中文版本源代码在 leptos_cn

开始使用

开始使用 Leptos 有两种基本途径:

  1. 使用 Trunk 进行客户端渲染 (CSR) - 如果你只是想用 Leptos 创建一个快速响应的网站,或者与现有的服务器或 API 配合使用,这是一个很好的选择。在 CSR 模式下,Trunk 将你的 Leptos 应用程序编译为 WebAssembly (WASM),并在浏览器中运行,就像典型的 JavaScript 单页应用程序 (SPA) 一样。Leptos CSR 的优势包括更快的构建时间和更快的迭代开发周期,以及更简单的思维模型和更多的应用程序部署选项。CSR 应用程序也有一些缺点:与服务器端渲染方法相比,最终用户的初始加载时间较慢,并且 JS 单页应用程序模型常见的 SEO 挑战同样适用于 Leptos CSR 应用程序。另外请注意,在底层,Trunk 会使用一段自动生成的 JS 代码来加载 Leptos 的 WASM 包,因此客户端设备上必须启用 JS 才能使 CSR 应用程序正确显示。与所有软件工程一样,这里也需要权衡利弊。

  2. 使用 cargo-leptos 的全栈、服务器端渲染 (SSR) - 如果你希望用 Rust 同时驱动前端和后端,那么 SSR 是构建 CRUD 风格网站和自定义 Web 应用程序的绝佳选择。在 Leptos SSR 模式下,你的应用程序会先在服务器上渲染为 HTML 并发送到浏览器;然后,由 WebAssembly 为这些 HTML 接管交互逻辑,使你的应用程序变得可交互——这个过程称为“hydration”。在服务器端,Leptos SSR 应用程序与你选择的 Actix-web 或 Axum 服务器库紧密集成,因此你可以利用这些社区的 crates 来帮助构建你的 Leptos 服务器。 选择 Leptos SSR 路线的优势包括帮助你获得最佳的初始加载时间和最佳的 Web 应用程序 SEO 分数。SSR 应用程序还可以通过 Leptos 的一项称为“服务器函数”的功能极大地简化跨服务器/客户端边界的工作,该功能允许你从客户端代码透明地调用服务器上的函数(稍后将详细介绍此功能)。然而,全栈 SSR 并非完美无缺 - 缺点包括较慢的开发者迭代循环(因为在进行 Rust 代码更改时需要重新编译服务器和客户端),以及 hydration 带来的额外复杂性。

到本书结束时,你应该能够根据项目的需求,很好地了解需要做出哪些权衡,以及选择哪条路线 - CSR 还是 SSR。

在本书的第一部分,我们将从客户端渲染 Leptos 网站开始,并使用 Trunk 将我们的 JS 和 WASM 包提供给浏览器,构建响应式 UI。

我们将在本书的第二部分介绍 cargo-leptos,该部分将全面介绍如何在全栈 SSR 模式下使用 Leptos 的全部功能。

Note

如果你来自 JavaScript 世界,并且不熟悉客户端渲染 (CSR) 和服务器端渲染 (SSR) 等术语,那么理解它们之间区别的最简单方法是通过类比:

Leptos 的 CSR 模式类似于使用 React(或基于“信号”的框架,如 SolidJS),专注于生成客户端 UI,你可以将其与服务器上的任何技术栈一起使用。

使用 Leptos 的 SSR 模式类似于在 React 世界中使用全栈框架,如 Next.js(或 Solid 的“SolidStart”框架) - SSR 帮助你构建先在服务器上渲染、然后发送到客户端的网站和应用程序。SSR 可以帮助提高网站的加载性能和可访问性,还可以让同一个人更容易兼顾客户端和服务器端的工作,而无需在前端和后端的不同语言之间进行上下文切换。

Leptos 框架既可以在 CSR 模式下使用,仅用于制作 UI(如 React),也可以在全栈 SSR 模式下使用(如 Next.js),以便你可以使用一种语言(Rust)构建 UI 和服务器。

Hello World! Leptos CSR 开发环境搭建

首先,确保已安装 Rust 且为最新版本(如果需要说明,请参见此处)。

如果你尚未安装 “Trunk” 工具,可以通过在命令行中运行以下命令来安装它,以便运行 Leptos CSR 网站:

cargo install trunk

然后创建一个基本的 Rust 项目

cargo init leptos-tutorial

cd 进入你的新 leptos-tutorial 项目,并将 leptos 添加为依赖项

cargo add leptos --features=csr,nightly

或者,如果你使用的是稳定的 Rust 版本,可以省略 nightly

cargo add leptos --features=csr

在 nightly Rust 上启用 Leptos 的 nightly 特性,可以使用本书大部分内容所采用的“将信号 getter 和 setter 作为函数调用”的语法。

要使用 nightly Rust,你可以通过运行以下命令选择为所有 Rust 项目使用 nightly 版本

rustup toolchain install nightly
rustup default nightly

或者只针对此项目

rustup toolchain install nightly
cd <进入你的项目>
rustup override set nightly

更多详细信息请参见此处。

如果你更愿意在 Leptos 中使用稳定的 Rust 版本,你也可以这样做。在本指南和示例中,你只需使用 ReadSignal::get() 和 WriteSignal::set() 方法,而不是将信号 getter 和 setter 作为函数来调用。
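两种写法的差别大致如下(一个极简示意,假设整段代码运行在启用了 nightly 特性的 nightly 工具链上;在稳定版 Rust 上只有 .get()/.set() 这种方法形式可用):

let (count, set_count) = create_signal(0);

// nightly + `nightly` feature:信号本身可以像函数一样调用
let a = count();          // 读取
set_count(1);             // 写入

// 稳定版 Rust(nightly 上同样可用):显式的方法调用
let b = count.get();      // 读取
set_count.set(2);         // 写入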

确保你已添加 wasm32-unknown-unknown 目标,以便 Rust 可以将你的代码编译为 WebAssembly 以在浏览器中运行。

rustup target add wasm32-unknown-unknown

leptos-tutorial 目录的根目录下创建一个简单的 index.html 文件

<!DOCTYPE html>
<html>
  <head></head>
  <body></body>
</html>

并在你的 main.rs 中添加一个简单的 “Hello, world!”

use leptos::*;

fn main() {
    mount_to_body(|| view! { <p>"Hello, world!"</p> })
}

你的目录结构现在应该如下所示

leptos-tutorial
├── src
│   └── main.rs
├── Cargo.toml
├── index.html

现在从 leptos-tutorial 目录的根目录运行 trunk serve --open。 Trunk 应该会自动编译你的应用程序并在你的默认浏览器中打开它。 如果你对 main.rs 进行编辑,Trunk 将重新编译你的源代码并实时重新加载页面。

欢迎来到由 Leptos 和 Trunk 支持的 Rust 和 WebAssembly (WASM) UI 开发世界!

Note

如果你使用的是 Windows 系统,请注意 trunk serve --open 可能无法正常工作。 如果你在使用 --open 时遇到问题, 只需使用 trunk serve 并手动打开浏览器标签页即可。


在开始使用 Leptos 构建你的第一个真正的 UI 之前,你需要了解一些事情,这些事情可以帮助你更轻松地使用 Leptos。

Leptos 开发体验改进

你可以做一些事情来改进使用 Leptos 开发网站和应用程序的体验。 你可能需要花几分钟时间设置你的环境以优化你的开发体验,特别是如果你想跟随本书中的示例进行编码。

1) 设置 console_error_panic_hook

默认情况下,在浏览器中运行 WASM 代码时发生的 panic 只会在浏览器中抛出一个错误,并显示一条无用的消息,例如 Unreachable executed 以及指向 WASM 二进制文件的堆栈跟踪。

使用 console_error_panic_hook,你可以获得一个实际的 Rust 堆栈跟踪,其中包含 Rust 源代码中的一行。

设置非常简单:

  1. 在你的项目中运行 cargo add console_error_panic_hook
  2. 在你的 main 函数中,添加 console_error_panic_hook::set_once();
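结合前面的 “Hello, world!”,设置完成后的 main.rs 大致如下(仅作示意):

use leptos::*;

fn main() {
    // 在挂载应用之前注册 panic hook,
    // 之后 WASM 中的 panic 会在浏览器控制台打印出 Rust 的堆栈跟踪
    console_error_panic_hook::set_once();

    mount_to_body(|| view! { <p>"Hello, world!"</p> })
}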

如果不清楚,点击此处查看示例

现在,你应该在浏览器控制台中看到更好的 panic 消息!

2) 在 #[component] 和 #[server] 中进行编辑器自动补全

由于宏的性质(它们可以从任何内容扩展到任何内容,但前提是输入在那一刻完全正确),rust-analyzer 很难进行正确的自动补全和其他支持。

如果你在编辑器中使用这些宏时遇到问题,你可以明确告诉 rust-analyzer 忽略某些过程宏。 尤其是对于 #[server] 宏,它注释函数体但实际上并不转换函数体内的任何内容,这可能非常有用。

从 Leptos 0.5.3 版本开始,添加了对 #[component] 宏的 rust-analyzer 支持,但如果你遇到问题,你可能也希望将 #[component] 添加到宏忽略列表中(见下文)。 请注意,这意味着 rust-analyzer 不知道你的组件 props,这可能会在 IDE 中生成它自己的一组错误或警告。

VSCode settings.json

"rust-analyzer.procMacro.ignored": {
	"leptos_macro": [
        // optional:
		// "component",
		"server"
	],
}

VSCode with cargo-leptos settings.json:

"rust-analyzer.procMacro.ignored": {
	"leptos_macro": [
        // optional:
		// "component",
		"server"
	],
},
// 如果为 `ssr` 功能配置的代码显示为非活动状态,
// 你可能希望告诉 rust-analyzer 默认启用 `ssr` 功能
//
// 你也可以使用 `rust-analyzer.cargo.allFeatures` 来启用所有功能
"rust-analyzer.cargo.features": ["ssr"]

neovim with lspconfig:

require('lspconfig').rust_analyzer.setup {
  -- Other Configs ...
  settings = {
    ["rust-analyzer"] = {
      -- Other Settings ...
      procMacro = {
        ignored = {
            leptos_macro = {
                -- optional: --
                -- "component",
                "server",
            },
        },
      },
    },
  }
}

Helix, in .helix/languages.toml:

[[language]]
name = "rust"

[language-server.rust-analyzer]
config = { procMacro = { ignored = { leptos_macro = [
	# Optional:
	# "component",
	"server"
] } } }

Zed, in settings.json:

{
  // Other Settings ...
  "lsp": {
    "rust-analyzer": {
      "procMacro": {
        "ignored": [
          // optional:
          // "component",
          "server"
        ]
      }
    }
  }
}

SublimeText 3, under LSP-rust-analyzer.sublime-settings in Goto Anything... menu:

// 此处的设置将覆盖 "LSP-rust-analyzer/LSP-rust-analyzer.sublime-settings" 中的设置
{
  "rust-analyzer.procMacro.ignored": {
    "leptos_macro": [
      // optional:
      // "component",
      "server"
    ],
  },
}

3) 使用 Rust Analyzer 设置 leptosfmt(可选)

leptosfmt 是 Leptos view! 宏的格式化程序(你通常会在其中编写 UI 代码)。 因为 view! 宏启用了一种 'RSX'(类似于 JSX)风格的 UI 编写方式,所以 cargo-fmt 很难自动格式化 view! 宏内的代码。 leptosfmt 是一个解决格式化问题的 crate,它可以使你的 RSX 风格的 UI 代码看起来整洁漂亮!

leptosfmt 可以通过命令行或在代码编辑器中安装和使用:

首先,使用 cargo install leptosfmt 安装该工具。

如果你只想从命令行使用默认选项,只需从项目的根目录运行 leptosfmt ./**/*.rs 即可使用 leptosfmt 格式化所有 Rust 文件。

如果你希望将编辑器设置为使用 leptosfmt,或者希望自定义 leptosfmt 体验,请参阅 leptosfmt github 仓库的 README.md 页面 上的说明。

请注意,建议以工作区为单位为你的编辑器配置 leptosfmt,以获得最佳效果。

Leptos 社区和 leptos-* Crates

社区

在开始使用 Leptos 构建之前,最后一点说明:如果你还没有加入,欢迎加入 Leptos DiscordGithub 上不断壮大的社区。 我们的 Discord 频道尤其活跃和友好 - 我们很乐意在那里见到你!

Note

如果你在学习 Leptos 书籍的过程中发现某个章节或解释不清楚,请在 “docs-and-education” 频道中提及,或在 “help” 频道中提问,以便我们能够澄清问题并为其他人更新书籍。

随着你在 Leptos 旅程中的深入,如果你对“如何用 Leptos 做 'x'”有疑问,那么请在 Discord 的“help”频道中搜索是否有人问过类似的问题,或者随时提出你自己的问题 - 社区非常乐于助人,而且反应非常迅速。

Github 上的“Discussions”也是提问和关注 Leptos 公告的好地方。

当然,如果你在使用 Leptos 开发过程中遇到任何错误,或者想要提出功能请求(或者贡献错误修复/新功能),请在 Github issue tracker 上提交 issue。

Leptos-* Crates

社区已经构建了越来越多的 Leptos 相关 crates,这些 crates 将帮助你更快地提高 Leptos 项目的生产力 - 在 Github 上的 Awesome Leptos 仓库中查看基于 Leptos 构建并由社区贡献的 crates 列表。

如果你想找到最新、最热门的 Leptos 相关 crates,请查看 Leptos Discord 的“工具和库”部分。 在该部分中,有一些用于 Leptos view! 宏格式化程序的频道(在“leptosfmt”频道中);有一个用于实用程序库“leptos-use”的频道;另一个用于 UI 组件库“leptonic”的频道;以及一个“libraries”频道,在新的 leptos-* crates 进入 Awesome Leptos 上不断增长的 crates 和资源列表之前,会在那里进行讨论。

第一部分:构建用户界面

在本书的第一部分中,我们将介绍如何使用 Leptos 在客户端构建用户界面。 在底层,Leptos 和 Trunk 将打包一小段 JavaScript 代码,用于加载 Leptos UI,该 UI 已被编译为 WebAssembly,以驱动 CSR(客户端渲染)网站中的交互性。

第一部分将向你介绍构建由 Leptos 和 Rust 支持的响应式用户界面所需的基本工具。 到第一部分结束时,你应该能够构建一个快速的同步网站,该网站在浏览器中渲染,并且可以部署在任何静态网站托管服务上,例如 Github Pages 或 Vercel。

Info

为了充分利用本书,我们鼓励你跟随提供的示例进行编码。 在 入门Leptos DX 章节中,我们向你展示了如何使用 Leptos 和 Trunk 设置一个基本项目,包括在浏览器中处理 WASM 错误。 这个基本设置足以让你开始使用 Leptos 进行开发。

如果你更愿意使用功能更全面的模板来开始,该模板演示了如何设置你在真实的 Leptos 项目中会看到的一些基本内容,例如路由(将在本书后面介绍)、将 <Title> 和 <Meta> 标签注入页面头部以及其他一些细节,那么请随意使用 leptos-rs start-trunk 模板仓库来启动并运行。

start-trunk 模板要求你已安装 Trunk 和 cargo-generate,你可以通过运行 cargo install trunk 和 cargo install cargo-generate 来获取它们。

要使用该模板设置你的项目,只需运行

cargo generate --git https://github.com/leptos-community/start-csr

然后在新建的应用程序目录中运行

trunk serve --port 3000 --open

即可开始开发你的应用程序。 Trunk 服务器将在文件更改时重新加载你的应用程序,从而使开发相对无缝。

一个基础组件

那个“Hello, world!”是一个非常简单的例子。 让我们继续学习更像普通应用程序的内容。

首先,让我们编辑 main 函数,使其不再渲染整个应用程序,而只是渲染一个 <App/> 组件。 组件是大多数 Web 框架中组合和设计的基本单元,Leptos 也不例外。 从概念上讲,它们类似于 HTML 元素:它们代表 DOM 的一部分,具有独立的、定义的行为。 与 HTML 元素不同,它们采用 PascalCase 形式,因此大多数 Leptos 应用程序都将以 <App/> 组件之类的内容开头。

fn main() {
    leptos::mount_to_body(|| view! { <App/> })
}

现在让我们定义 <App/> 组件本身。 因为它相对简单, 我将先完整地展示它,然后逐行解释。

#[component]
fn App() -> impl IntoView {
    let (count, set_count) = create_signal(0);

    view! {
        <button
            on:click=move |_| {
                // 在稳定版本中,这是 set_count.set(3);
                set_count(3);
            }
        >
            "Click me: "
            // 在稳定版本中,这是 move || count.get();
            {move || count()}
        </button>
    }
}

组件签名

#[component]

与所有组件定义一样,这以 #[component] 宏开头。 #[component] 注释一个函数,以便它可以 在你的 Leptos 应用程序中用作组件。 我们将在接下来的几章中看到此宏的其他一些功能。

fn App() -> impl IntoView

每个组件都是具有以下特征的函数

  1. 它接受零个或多个任何类型的参数。
  2. 它返回 impl IntoView,这是一个不透明类型,包括 你可以从 Leptos view 中返回的任何内容。

组件函数参数被收集到一个由 view 宏根据需要构建的 props 结构体中。

组件主体

组件函数的主体是一个只运行一次的设置函数,而不是一个多次重新运行的渲染函数。 你通常会使用它来创建一些响应式变量,定义任何响应这些值变化而运行的副作用,以及描述用户界面。

let (count, set_count) = create_signal(0);

create_signal 创建一个信号,它是 Leptos 中响应式变化和状态管理的基本单元。 这将返回一个 (getter, setter) 元组。 要访问当前值,你将使用 count.get()(或者,在 nightly Rust 上,使用简写 count())。 要设置当前值,你将调用 set_count.set(...)(或者 set_count(...))。

.get() 会克隆值,.set() 会覆盖它。 在许多情况下,使用 .with() 或 .update() 更有效率;如果你现在想了解更多关于这些权衡的信息,请查看 ReadSignal 和 WriteSignal 的文档。
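下面是一个极简示意,展示 .with() 和 .update() 如何避免克隆(假设信号中存放的是一个 Vec):

let (items, set_items) = create_signal(vec![1, 2, 3]);

// .with() 以引用方式读取值,不会克隆整个 Vec
let len = items.with(|v| v.len());

// .update() 提供可变引用,在原地修改值
set_items.update(|v| v.push(4));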

视图

Leptos 使用类似 JSX 的格式通过 view 宏定义用户界面。

view! {
    <button
        // 使用 on: 定义事件监听器
        on:click=move |_| {
            set_count(3);
        }
    >
        // 文本节点用引号括起来
        "Click me: "
        // 代码块可以包含 Rust 代码
        {move || count()}
    </button>
}

这应该很容易理解:它看起来像 HTML,带有一个特殊的 on:click 来定义 click 事件监听器,一个格式化像 Rust 字符串的文本节点,然后是...

{move || count()}

无论那是什么。

人们有时会开玩笑说,他们在他们的第一个 Leptos 应用程序中使用的闭包比他们一生中使用的任何时候都多。 这很公平。 基本上,将一个函数传递给视图会告诉框架:“嘿,这是一个可能会改变的东西。”

当我们点击按钮并调用 set_count 时,count 信号会被更新。 这个 move || count() 闭包,它的值依赖于 count 的值,会重新运行, 框架会对那个特定的文本节点进行有针对性的更新,而不会触及应用程序中的任何其他内容。 这就是允许对 DOM 进行极其高效的更新的原因。

现在,如果你打开了 Clippy——或者你有一双特别敏锐的眼睛——你可能会注意到 这个闭包是多余的,至少在 nightly Rust 中是这样。 如果你在 nightly Rust 中使用 Leptos,信号已经是函数了,所以闭包是不必要的。 因此,你可以编写一个更简单的视图:

view! {
    <button /* ... */>
        "Click me: "
        // 与 {move || count()} 相同
        {count}
    </button>
}

记住——这非常重要——只有函数是响应式的。 这意味着 {count} 和 {count()} 在你的视图中做的事情非常不同。 {count} 传入的是一个函数,告诉框架每次 count 改变时都要更新视图。 {count()} 只访问一次 count 的值,并把一个 i32 传给视图,只渲染一次,非响应式。 你可以在下面的 CodeSandbox 中看到区别!

让我们做最后一个改变。 set_count(3) 对于点击处理程序来说是一个非常无用的操作。 让我们将“将此值设置为 3”替换为“将此值递增 1”:

move |_| {
    set_count.update(|n| *n += 1);
}

你可以在这里看到,虽然 set_count 只是设置值,但 set_count.update() 为我们提供了一个可变引用并在原地修改值。 两者都会触发我们 UI 中的响应式更新。

在整个教程中,我们将使用 CodeSandbox 来展示交互式示例。 将鼠标悬停在任何变量上以显示 Rust-Analyzer 详细信息 以及正在发生的事情的文档。 随意 fork 示例自己尝试!

实时示例

点击打开 CodeSandbox。

要在沙盒中显示浏览器,你可能需要点击 添加开发者工具 > 其他预览 > 8080。

CodeSandbox 源代码
use leptos::*;

// #[component] 宏将函数标记为可重用组件
// 组件是用户界面的构建块
// 它们定义了一个可重用的行为单元
#[component]
fn App() -> impl IntoView {
    // 在这里我们创建一个响应式信号
    // 并获取一个 (getter, setter) 对
    // 信号是框架中变化的基本单元
    // 我们稍后会详细讨论它们
    let (count, set_count) = create_signal(0);

    // `view` 宏是我们定义用户界面的方式
    // 它使用类似 HTML 的格式,可以接受某些 Rust 值
    view! {
        <button
            // on:click 将在每次 `click` 事件触发时运行
            // 每个事件处理程序都定义为 `on:{eventname}`

            // 我们能够将 `set_count` 移入闭包中
            // 因为信号是 Copy 和 'static
            on:click=move |_| {
                set_count.update(|n| *n += 1);
            }
        >
            // RSX 中的文本节点应该用引号括起来,
            // 就像普通的 Rust 字符串一样
            "Click me"
        </button>
        <p>
            <strong>"响应式: "</strong>
            // 你可以通过将 Rust 表达式括在大括号中来将它们作为值插入 DOM 中
            // 如果你传入一个函数,它将进行响应式更新
            {move || count()}
        </p>
        <p>
            <strong>"响应式简写: "</strong>
            // 信号是函数,所以我们可以移除包装闭包
            {count}
        </p>
        <p>
            <strong>"非响应式: "</strong>
            // 注意:如果你写 {count()},这将*不会*是响应式的
            // 它只是获取 count 的值一次
            {count()}
        </p>
    }
}

// 这个 `main` 函数是应用程序的入口点
// 它只是将我们的组件挂载到 <body> 上
// 因为我们将其定义为 `fn App`,所以我们现在可以在
// 模板中将其用作 <App/>
fn main() {
    leptos::mount_to_body(|| view! { <App/> })
}

view:动态类、样式和属性

到目前为止,我们已经了解了如何使用 view 宏来创建事件监听器,以及如何通过将函数(例如信号)传递到视图中来创建动态文本。

但是当然你可能还想更新用户界面中的其他内容。 在本节中,我们将了解如何动态更新类、样式和属性, 并且我们将介绍派生信号的概念。

让我们从一个应该很熟悉的简单组件开始:点击一个按钮来增加计数器。

#[component]
fn App() -> impl IntoView {
    let (count, set_count) = create_signal(0);

    view! {
        <button
            on:click=move |_| {
                set_count.update(|n| *n += 1);
            }
        >
            "Click me: "
            {move || count()}
        </button>
    }
}

到目前为止,这只是上一章中的示例。

动态类

现在假设我想动态更新此元素上的 CSS 类列表。 例如,假设我想在计数为奇数时添加类 red。 我可以使用 class: 语法来做到这一点。

class:red=move || count() % 2 == 1

class: 属性接受

  1. 冒号后面的类名 (red)
  2. 一个值,可以是 bool 或返回 bool 的函数

当值为 true 时,添加该类。 当值为 false 时,删除该类。 如果该值是一个访问信号的函数,则该类将在信号更改时进行响应式更新。

现在,每次我点击按钮时,文本应该在红色和黑色之间切换,因为数字在偶数和奇数之间切换。

<button
    on:click=move |_| {
        set_count.update(|n| *n += 1);
    }
    // class: 语法响应式地更新单个类
    // 在这里,当 `count` 为奇数时,我们将设置 `red` 类
    class:red=move || count() % 2 == 1
>
    "Click me"
</button>

如果你正在跟随,请确保进入你的 index.html 并添加如下内容:

<style>
  .red {
    color: red;
  }
</style>

某些 CSS 类名不能被 view 宏直接解析,尤其是当它们包含破折号、数字或其他字符的混合时。 在这种情况下,你可以使用元组语法:class=("name", value) 仍然直接更新单个类。

class=("button-20", move || count() % 2 == 1)

可以使用类似的 style: 语法直接更新单个 CSS 属性。

    let (x, set_x) = create_signal(0);
        view! {
            <button
                on:click={move |_| {
                    set_x.update(|n| *n += 10);
                }}
                // 设置 `style` 属性
                style="position: absolute"
                // 并使用 `style:` 切换单个 CSS 属性
                style:left=move || format!("{}px", x() + 100)
                style:background-color=move || format!("rgb({}, {}, 100)", x(), 100)
                style:max-width="400px"
                // 设置一个 CSS 变量供样式表使用
                style=("--columns", x)
            >
                "Click to Move"
            </button>
    }

动态属性

这同样适用于普通属性。 将纯字符串或原始值传递给属性,会赋予它一个静态值。 将函数(包括信号)传递给属性,会使其响应式地更新其值。 让我们在我们的视图中添加另一个元素:

<progress
    max="50"
    // 信号是函数,所以 `value=count` 和 `value=move || count.get()`
    // 是可以互换的。
    value=count
/>

现在每次我们设置计数时,不仅 <button>class 会被切换, 而且 <progress> 栏的 value 也会增加,这意味着我们的进度条会前进。

派生信号

让我们更深入一层,只是为了好玩。

你已经知道,我们只需将函数传递给 view 即可创建响应式界面。 这意味着我们可以轻松地更改我们的进度条。 例如,假设我们希望它移动速度快一倍:

<progress
    max="50"
    value=move || count() * 2
/>

但是想象一下,我们想在多个地方重用该计算。 你可以使用派生信号来做到这一点:一个访问信号的闭包。

let double_count = move || count() * 2;

/* 插入视图的其余部分 */
<progress
    max="50"
    // 我们在这里使用一次
    value=double_count
/>
<p>
    "Double Count: "
    // 在这里再次使用
    {double_count}
</p>

派生信号允许你创建响应式计算值,这些值可以在应用程序中的多个位置使用,并且开销最小。

注意:像这样使用派生信号意味着计算会在每次信号更改时(当 count() 更改时)运行一次,并在每次访问 double_count 时再运行一次;换句话说,运行两次。 这是一个非常便宜的计算,所以没关系。 我们将在后面的章节中介绍 memo,它旨在为昂贵的计算解决这个问题。
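作为预览,下面是一个极简示意,对比派生信号与 memo 的写法(create_memo 会在后面的章节中正式介绍):

// 派生信号:每次被访问时都会重新计算
let double_count = move || count() * 2;

// memo:缓存计算结果,只有当结果真正变化时才通知依赖它的视图
let double_count_memo = create_memo(move |_| count() * 2);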

高级主题:注入原始 HTML

view 宏支持一个额外的属性 inner_html,它可用于直接设置任何元素的 HTML 内容,并会清除你提供的任何其他子元素。 请注意,这不会对你提供的 HTML 进行转义。 你应确保它只包含受信任的输入,或者所有 HTML 实体都已转义,以防止跨站脚本 (XSS) 攻击。

let html = "<p>此 HTML 将被注入。</p>";
view! {
  <div inner_html=html/>
}

点击此处查看完整的 view 宏文档

实时示例

点击打开 CodeSandbox。

CodeSandbox 源代码
use leptos::*;

#[component]
fn App() -> impl IntoView {
    let (count, set_count) = create_signal(0);

    // “派生信号”是一个访问其他信号的函数
    // 我们可以使用它来创建依赖于
    // 一个或多个其他信号的值的响应式值
    let double_count = move || count() * 2;

    view! {
        <button
            on:click=move |_| {
                set_count.update(|n| *n += 1);
            }

            // class: 语法响应式地更新单个类
            // 在这里,当 `count` 为奇数时,我们将设置 `red` 类
            class:red=move || count() % 2 == 1
        >
            "Click me"
        </button>
        // 注意:像 <br> 这样的自闭合标签需要一个显式的 /
        <br/>

        // 每次 `count` 更改时,我们都会更新此进度条
        <progress
            // 静态属性的工作方式与 HTML 中相同
            max="50"

            // 将函数传递给属性
            // 响应式地设置该属性
            // 信号是函数,所以 `value=count` 和 `value=move || count.get()`
            // 是可以互换的。
            value=count
        ></progress>
        <br/>

        // 此进度条将使用 `double_count`
        // 所以它应该移动速度快一倍!
        <progress
            max="50"
            // 派生信号是函数,因此它们也可以
            // 响应式地更新 DOM
            value=double_count
        ></progress>
        <p>"Count: " {count}</p>
        <p>"Double Count: " {double_count}</p>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

组件和 Props

到目前为止,我们一直在单个组件中构建整个应用程序。这对于非常小的例子来说是可以的,但在任何实际的应用程序中,你都需要将用户界面分解成多个组件,这样你就可以将界面分解成更小、可重用、可组合的块。

让我们以进度条为例。假设你想要两个进度条而不是一个:一个每次点击前进一个刻度,一个每次点击前进两个刻度。

可以 通过创建两个 <progress> 元素来做到这一点:

let (count, set_count) = create_signal(0);
let double_count = move || count() * 2;

view! {
    <progress
        max="50"
        value=count
    />
    <progress
        max="50"
        value=double_count
    />
}

但是当然,这不能很好地扩展。如果你想添加第三个进度条,你需要再次添加这段代码。如果你想编辑它的任何内容,你需要编辑三次。

相反,让我们创建一个 <ProgressBar/> 组件。

#[component]
fn ProgressBar() -> impl IntoView {
    view! {
        <progress
            max="50"
            // 嗯... 我们将从哪里获得这个?
            value=progress
        />
    }
}

只有一个问题:progress 没有定义。它应该从哪里来?当我们手动定义所有内容时,我们只使用了局部变量名。现在我们需要一些方法将参数传递给组件。

组件 Props

我们使用组件属性或“props”来做到这一点。如果你使用过其他的前端框架,这可能是一个熟悉的想法。基本上,属性之于组件就像属性之于 HTML 元素:它们允许你将额外的信息传递给组件。

在 Leptos 中,你可以通过给组件函数添加额外的参数来定义 props。

#[component]
fn ProgressBar(
    progress: ReadSignal<i32>
) -> impl IntoView {
    view! {
        <progress
            max="50"
            // 现在可以了
            value=progress
        />
    }
}

现在我们可以在主要的 <App/> 组件的视图中使用我们的组件。

#[component]
fn App() -> impl IntoView {
    let (count, set_count) = create_signal(0);
    view! {
        <button on:click=move |_| { set_count.update(|n| *n += 1); }>
            "Click me"
        </button>
        // 现在我们使用我们的组件!
        <ProgressBar progress=count/>
    }
}

在视图中使用组件看起来很像使用 HTML 元素。你会注意到你可以很容易地分辨元素和组件之间的区别,因为组件总是有 PascalCase 的名称。你像传递 HTML 元素属性一样传递 progress prop。很简单。

响应式和静态 Props

你会注意到在整个例子中,progress 接受一个响应式的 ReadSignal<i32>,而不是一个普通的 i32。这非常重要

组件 props 没有附加任何特殊的含义。组件只是一个运行一次来设置用户界面的函数。告诉界面响应更改的唯一方法是传递一个信号类型。所以如果你有一个会随着时间变化的组件属性,比如我们的 progress,它应该是一个信号。
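作为对比,下面是一个极简示意(StaticProgressBar 是一个假想的组件名,仅用于说明):如果把 prop 声明为普通的 i32,组件只会在创建时读取一次这个值,之后即使父组件的数据变化,界面也不会更新。

#[component]
fn StaticProgressBar(progress: i32) -> impl IntoView {
    // progress 只是一个普通数值:组件函数只运行一次,
    // 因此这里渲染的是创建那一刻的值,之后不会再变化
    view! {
        <progress max="50" value=progress/>
    }
}

// 用法示意:<StaticProgressBar progress=count.get()/>
// 即使 count 之后发生变化,这个进度条也不会随之更新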

optional Props

现在 max 设置是硬编码的。让我们也把它作为一个 prop。但是让我们添加一个条件:让我们通过使用 #[prop(optional)] 注释组件函数的特定参数来使这个 prop 成为可选的。

#[component]
fn ProgressBar(
    // 将此 prop 标记为可选
    // 当你使用 <ProgressBar/> 时,你可以指定它也可以不指定
    #[prop(optional)]
    max: u16,
    progress: ReadSignal<i32>
) -> impl IntoView {
    view! {
        <progress
            max=max
            value=progress
        />
    }
}

现在,我们可以使用 <ProgressBar max=50 progress=count/>,或者我们可以省略 max 来使用默认值(即 <ProgressBar progress=count/>)。optional 的默认值是它的 Default::default() 值,对于 u16 来说是 0。对于进度条来说,最大值为 0 并不是很有用。

所以让我们给它一个特定的默认值。

default props

你可以使用 #[prop(default = ...)] 非常简单地指定一个不同于 Default::default() 的默认值。

#[component]
fn ProgressBar(
    #[prop(default = 100)]
    max: u16,
    progress: ReadSignal<i32>
) -> impl IntoView {
    view! {
        <progress
            max=max
            value=progress
        />
    }
}

泛型 Props

这很好。但我们从两个计数器开始,一个由 count 驱动,一个由派生信号 double_count 驱动。让我们通过使用 double_count 作为另一个 <ProgressBar/> 上的 progress prop 来重新创建它。

#[component]
fn App() -> impl IntoView {
    let (count, set_count) = create_signal(0);
    let double_count = move || count() * 2;

    view! {
        <button on:click=move |_| { set_count.update(|n| *n += 1); }>
            "Click me"
        </button>
        <ProgressBar progress=count/>
        // 添加第二个进度条
        <ProgressBar progress=double_count/>
    }
}

嗯... 这无法编译。应该很容易理解为什么:我们已经声明了 progress prop 接受 ReadSignal<i32>,而 double_count 不是 ReadSignal<i32>。正如 rust-analyzer 会告诉你的,它的类型是 || -> i32,即,它是一个返回 i32 的闭包。

有几种方法可以处理这个问题。一种是说:“好吧,我知道 ReadSignal 是一个函数,而且我知道闭包是一个函数;也许我可以接受任何函数?” 如果你很精通,你可能知道这两个都实现了 trait Fn() -> i32。所以你可以使用一个泛型组件:

#[component]
fn ProgressBar(
    #[prop(default = 100)]
    max: u16,
    progress: impl Fn() -> i32 + 'static
) -> impl IntoView {
    view! {
        <progress
            max=max
            value=progress
        />
        // 添加一个换行符以避免重叠
        <br/>
    }
}

这是一种编写此组件的完全合理的方式:progress 现在接受任何实现此 Fn() trait 的值。

泛型 props 也可以使用 where 子句指定,或者使用内联泛型,如 ProgressBar<F: Fn() -> i32 + 'static>。请注意,对 impl Trait 语法的支持是在 0.6.12 版本中发布的;如果收到错误消息,你可能需要 cargo update 以确保你使用的是最新版本。
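例如,用显式的泛型参数和 where 子句写出的等价形式大致如下(仅作示意,与上面的 impl Trait 版本等价):

#[component]
fn ProgressBar<F>(
    #[prop(default = 100)]
    max: u16,
    // 泛型参数 F 被用在 props 中,满足下文提到的要求
    progress: F,
) -> impl IntoView
where
    F: Fn() -> i32 + 'static,
{
    view! {
        <progress
            max=max
            value=progress
        />
        <br/>
    }
}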

泛型需要在组件 props 中的某个地方使用。这是因为 props 被构建到一个结构体中,所以所有泛型类型都必须在结构体中的某个地方使用。这通常可以通过使用可选的 PhantomData prop 来轻松实现。然后你可以使用表达类型的语法在视图中指定泛型:<Component<T>/>(而不是使用 turbofish 风格的 <Component::<T>/>)。

use std::marker::PhantomData;

#[component]
fn SizeOf<T: Sized>(#[prop(optional)] _ty: PhantomData<T>) -> impl IntoView {
    std::mem::size_of::<T>()
}

#[component]
pub fn App() -> impl IntoView {
    view! {
        <SizeOf<usize>/>
        <SizeOf<String>/>
    }
}

请注意,存在一些限制。例如,我们的视图宏解析器无法处理嵌套泛型,如 <SizeOf<Vec<T>>/>

into Props

还有另一种方法可以实现这一点,那就是使用 #[prop(into)]。此属性会自动对你作为 props 传递的值调用 .into(),这允许你轻松地传递具有不同值的 props。

在这种情况下,了解 Signal 类型会很有帮助。Signal 是一个枚举类型,可以表示任何种类的可读响应式信号。当你定义组件 API、希望在传入不同种类的信号时都能复用该组件时,它会很有用。当你希望既能接受静态值又能接受响应式值时,MaybeSignal 类型很有用。

#[component]
fn ProgressBar(
    #[prop(default = 100)]
    max: u16,
    #[prop(into)]
    progress: Signal<i32>
) -> impl IntoView
{
    view! {
        <progress
            max=max
            value=progress
        />
        <br/>
    }
}

#[component]
fn App() -> impl IntoView {
    let (count, set_count) = create_signal(0);
    let double_count = move || count() * 2;

    view! {
        <button on:click=move |_| { set_count.update(|n| *n += 1); }>
            "Click me"
        </button>
        // .into() 将 `ReadSignal` 转换为 `Signal`
        <ProgressBar progress=count/>
        // 使用 `Signal::derive()` 包装派生信号
        <ProgressBar progress=Signal::derive(double_count)/>
    }
}

可选泛型 Props

请注意,你不能为组件指定可选的泛型 props。让我们看看如果你尝试会发生什么:

#[component]
fn ProgressBar<F: Fn() -> i32 + 'static>(
    #[prop(optional)] progress: Option<F>,
) -> impl IntoView {
    progress.map(|progress| {
        view! {
            <progress
                max=100
                value=progress
            />
            <br/>
        }
    })
}

#[component]
pub fn App() -> impl IntoView {
    view! {
        <ProgressBar/>
    }
}

Rust 帮忙指出了错误

xx |         <ProgressBar/>
   |          ^^^^^^^^^^^ 无法推断函数 `ProgressBar` 上声明的类型参数 `F` 的类型
   |
help: 考虑指定泛型参数
   |
xx |         <ProgressBar::<F>/>
   |                     +++++

你可以使用 <ProgressBar<F>/> 语法(在 view 宏中没有 turbofish)在组件上指定泛型。在这里指定正确的类型是不可能的;闭包和函数通常是不可命名的类型。编译器可以用简写来显示它们,但你不能指定它们。

但是,你可以通过使用 Box<dyn _> 或 &dyn _ 来提供具体类型,从而解决这个问题:

#[component]
fn ProgressBar(
    #[prop(optional)] progress: Option<Box<dyn Fn() -> i32>>,
) -> impl IntoView {
    progress.map(|progress| {
        view! {
            <progress
                max=100
                value=progress
            />
            <br/>
        }
    })
}

#[component]
pub fn App() -> impl IntoView {
    view! {
        <ProgressBar/>
    }
}

因为 Rust 编译器现在知道了 prop 的具体类型,因此即使在 None 的情况下,它也知道它在内存中的大小,所以这可以很好地编译。

在这个特定的例子中,&dyn Fn() -> i32 会导致生命周期问题,但在其他情况下,它也可能是一个可行的选择。

记录组件

这是本书中最不必要、却又最重要的章节之一。严格来说,为你的组件及其 props 编写文档并不是必需的;但根据你的团队和应用程序的规模,它可能非常重要。而且这非常容易做到,并且会立即产生效果。

要记录组件及其 props,你可以简单地在组件函数和每个 prop 上添加文档注释:

/// 显示目标进度。
#[component]
fn ProgressBar(
    /// 进度条的最大值。
    #[prop(default = 100)]
    max: u16,
    /// 应该显示多少进度。
    #[prop(into)]
    progress: Signal<i32>,
) -> impl IntoView {
    /* ... */
}

这就是你需要做的所有事情。这些行为就像普通的 Rust 文档注释,除了你可以记录单个组件 props,而这对于 Rust 函数参数是无法做到的。

这将自动为你的组件、它的 Props 类型以及用于添加 props 的每个字段生成文档。在你把鼠标悬停在组件名称或 props 上、亲眼看到 #[component] 宏与 rust-analyzer 配合的效果之前,可能很难体会到这有多强大。

进阶主题:#[component(transparent)]

所有 Leptos 组件都返回 -> impl IntoView。但是,有些需要直接返回一些数据,而无需任何额外的包装。这些可以用 #[component(transparent)] 标记,在这种情况下,它们会完全返回它们返回的值,而渲染系统不会以任何方式转换它们。

这主要用于两种情况:

  1. 创建 <Suspense/><Transition/> 的包装器,它们返回一个透明的 suspense 结构,以便与 SSR 和 hydration 正确集成。
  2. leptos_router<Route/> 定义重构到单独的组件中,因为 <Route/> 是一个透明的组件,它返回一个 RouteDefinition 结构而不是一个视图。

通常,除非你正在创建属于这两种类别之一的自定义包装组件,否则你不需要使用透明组件。
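作为参考,第二种用法的一个极简示意如下(假设使用 leptos_router;UserList、UserRoutes 都是假想的名字):

use leptos::*;
use leptos_router::*;

// 一个普通的页面组件,仅用于演示
#[component]
fn UserList() -> impl IntoView {
    view! { <p>"Users"</p> }
}

// 透明组件:它原样返回 <Route/> 产生的 RouteDefinition,
// 渲染系统不会对返回值做任何包装,
// 因此它可以像 <Route/> 一样嵌入到 <Routes/> 中使用
#[component(transparent)]
fn UserRoutes() -> impl IntoView {
    view! {
        <Route path="/users" view=UserList/>
    }
}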

实时示例

点击打开 CodeSandbox.

CodeSandbox 源码
use leptos::*;

// 将不同的组件组合在一起是我们构建
// 用户界面的方式。在这里,我们将定义一个可重用的 <ProgressBar/>。
// 你将看到如何使用文档注释来记录组件
// 及其属性。

/// 显示目标进度。
#[component]
fn ProgressBar(
    // 将此标记为可选 prop。它将默认为其类型的默认值,即 0。
    #[prop(default = 100)]
    /// 进度条的最大值。
    max: u16,
    // 将对传递到 prop 的值运行 `.into()`。
    #[prop(into)]
    // `Signal<T>` 是几个响应式类型的包装器。
    // 在像这样的组件 API 中,它会很有帮助,我们
    // 可能想要接受任何类型的响应式值
    /// 应该显示多少进度。
    progress: Signal<i32>,
) -> impl IntoView {
    view! {
        <progress
            max={max}
            value=progress
        />
        <br/>
    }
}

#[component]
fn App() -> impl IntoView {
    let (count, set_count) = create_signal(0);

    let double_count = move || count() * 2;

    view! {
        <button
            on:click=move |_| {
                set_count.update(|n| *n += 1);
            }
        >
            "Click me"
        </button>
        <br/>
        // 如果你在 CodeSandbox 或带有
        // rust-analyzer 支持的编辑器中打开了此文件,请尝试将鼠标悬停在 `ProgressBar`、
        // `max` 或 `progress` 上以查看我们在上面定义的文档
        <ProgressBar max=50 progress=count/>
        // 让我们在这个进度条上使用默认的最大值
        // 默认值为 100,所以它应该移动得慢一半
        <ProgressBar progress=count/>
        // Signal::derive 从我们的派生信号创建一个 Signal 包装器
        // 使用 double_count 意味着它应该移动得快两倍
        <ProgressBar max=50 progress=Signal::derive(double_count)/>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

迭代

无论你是列出待办事项、显示表格还是显示产品图片,迭代项目列表都是 Web 应用程序中的常见任务。协调不断变化的项目集之间的差异也是框架需要妥善处理的最棘手的任务之一。

Leptos 支持两种不同的迭代项目模式:

  1. 对于静态视图:Vec<_>
  2. 对于动态列表:<For/>

使用 Vec<_> 的静态视图

有时你需要重复显示一个项目,但你所依据的列表并不经常更改。在这种情况下,重要的是要知道,你可以在视图中插入任何满足 IV: IntoView 的 Vec<IV>。换句话说,如果你可以渲染 T,你就可以渲染 Vec<T>。

let values = vec![0, 1, 2];
view! {
    // 这将只渲染 "012"
    <p>{values.clone()}</p>
    // 或者我们可以将它们包装在 <li> 中
    <ul>
        {values.into_iter()
            .map(|n| view! { <li>{n}</li>})
            .collect::<Vec<_>>()}
    </ul>
}

Leptos 还提供了一个 .collect_view() 辅助函数,允许你将任何 T: IntoView 的迭代器收集到 Vec<View> 中。

let values = vec![0, 1, 2];
view! {
    // 这将只渲染 "012"
    <p>{values.clone()}</p>
    // 或者我们可以将它们包装在 <li> 中
    <ul>
        {values.into_iter()
            .map(|n| view! { <li>{n}</li>})
            .collect_view()}
    </ul>
}

列表是静态的,并不意味着界面也必须是静态的。你可以将动态项目渲染为静态列表的一部分。

// 创建一个包含 5 个信号的列表
let length = 5;
let counters = (1..=length).map(|idx| create_signal(idx));

// 每个项目管理一个响应式视图
// 但列表本身永远不会改变
let counter_buttons = counters
    .map(|(count, set_count)| {
        view! {
            <li>
                <button
                    on:click=move |_| set_count.update(|n| *n += 1)
                >
                    {count}
                </button>
            </li>
        }
    })
    .collect_view();

view! {
    <ul>{counter_buttons}</ul>
}

也可以响应式地渲染 Fn() -> Vec<_>。但请注意,每当它发生变化时,这都会重新渲染列表中的每个项目。这是非常低效的!幸运的是,有一种更好的方法。
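例如,下面这个极简示意(假设 items 是一个 ReadSignal<Vec<i32>>)就是响应式地渲染 Fn() -> Vec<_>:它可以工作,但 items 每次变化都会重建所有 <li>:

view! {
    <ul>
        {move || items.get()
            .into_iter()
            .map(|n| view! { <li>{n}</li> })
            .collect_view()}
    </ul>
}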

使用 <For/> 组件进行动态渲染

<For/> 组件是一个带键的动态列表。它接受三个 props:

  • each:一个函数(例如信号),返回要迭代的项目 T
  • key:一个键函数,接受 &T 并返回一个稳定的、唯一的键或 ID
  • children:将每个 T 渲染成一个视图

key 是,嗯,关键。你可以在列表中添加、删除和移动项目。只要每个项目的键随着时间的推移保持稳定,框架就不需要重新渲染任何项目,除非它们是新增的,并且它可以非常有效地添加、删除和移动项目,因为它们会发生变化。这允许在列表更改时对其进行极其有效的更新,而只需最少的额外工作。

创建一个好的 key 可能有点棘手。你通常不想用索引作为键,因为索引并不稳定——如果你删除或移动项目,它们的索引就会发生变化。

但是,在生成每一行时为其生成一个唯一的 ID,并将其用作键函数的 ID,这是一个好主意。

查看下面的 <DynamicList/> 组件以获取示例。

实时示例

点击打开 CodeSandbox.

CodeSandbox 源码
use leptos::*;

// 迭代是大多数应用程序中非常常见的任务。
// 那么如何获取数据列表并在 DOM 中渲染它呢?
// 此示例将向你展示两种方法:
// 1) 对于大多数静态列表,使用 Rust 迭代器
// 2) 对于增长、收缩或移动项目的列表,使用 <For/>

#[component]
fn App() -> impl IntoView {
    view! {
        <h1>"Iteration"</h1>
        <h2>"Static List"</h2>
        <p>"Use this pattern if the list itself is static."</p>
        <StaticList length=5/>
        <h2>"Dynamic List"</h2>
        <p>"Use this pattern if the rows in your list will change."</p>
        <DynamicList initial_length=5/>
    }
}

/// 计数器列表,无法
/// 添加或删除任何计数器。
#[component]
fn StaticList(
    /// 此列表中要包含的计数器数量。
    length: usize,
) -> impl IntoView {
    // 创建以递增数字开头的计数器信号
    let counters = (1..=length).map(|idx| create_signal(idx));

    // 当你有一个不变的列表时,你可以
    // 使用普通的 Rust 迭代器来操作它
    // 并将其收集到 Vec<_> 中以将其插入 DOM
    let counter_buttons = counters
        .map(|(count, set_count)| {
            view! {
                <li>
                    <button
                        on:click=move |_| set_count.update(|n| *n += 1)
                    >
                        {count}
                    </button>
                </li>
            }
        })
        .collect::<Vec<_>>();

    // 请注意,如果 `counter_buttons` 是一个响应式列表
    // 并且它的值发生了变化,这将非常低效:
    // 每次列表更改时,它都会重新渲染每一行。
    view! {
        <ul>{counter_buttons}</ul>
    }
}

/// 允许你添加或
/// 删除计数器的计数器列表。
#[component]
fn DynamicList(
    /// 开始时的计数器数量。
    initial_length: usize,
) -> impl IntoView {
    // 此动态列表将使用 <For/> 组件。
    // <For/> 是一个带键的列表。这意味着每一行
    // 都有一个定义的键。如果键没有改变,则该行
    // 不会重新渲染。当列表发生变化时,只有
    // 对 DOM 进行最少数量的更改。

    // `next_counter_id` 将让我们生成唯一的 ID
    // 我们通过在每次
    // 创建计数器时简单地将 ID 加一来做到这一点
    let mut next_counter_id = initial_length;

    // 我们生成一个初始列表,如 <StaticList/> 中所示
    // 但这次我们将 ID 与信号一起包含在内
    let initial_counters = (0..initial_length)
        .map(|id| (id, create_signal(id + 1)))
        .collect::<Vec<_>>();

    // 现在我们将该初始列表存储在一个信号中
    // 这样,我们将能够随着时间的推移修改列表,
    // 添加和删除计数器,它将以响应式的方式发生变化
    let (counters, set_counters) = create_signal(initial_counters);

    let add_counter = move |_| {
        // 为新的计数器创建一个信号
        let sig = create_signal(next_counter_id + 1);
        // 将此计数器添加到计数器列表中
        set_counters.update(move |counters| {
            // 因为 `.update()` 为我们提供了 `&mut T`
            // 我们可以使用普通的 Vec 方法,如 `push`
            counters.push((next_counter_id, sig))
        });
        // 增加 ID,使其始终唯一
        next_counter_id += 1;
    };

    view! {
        <div>
            <button on:click=add_counter>
                "Add Counter"
            </button>
            <ul>
                // <For/> 组件在这里是中心
                // 这允许高效、关键的列表渲染
                <For
                    // `each` 接受任何返回迭代器的函数
                    // 这通常应该是信号或派生信号
                    // 如果它不是响应式的,只需渲染 Vec<_> 而不是 <For/>
                    each=counters
                    // 键对于每一行应该是唯一的和稳定的
                    // 使用索引通常是一个坏主意,除非你的列表
                    // 只能增长,因为在列表中移动项目
                    // 意味着它们的索引会发生变化,并且它们都会重新渲染
                    key=|counter| counter.0
                    // `children` 接收来自你的 `each` 迭代器的每个项目
                    // 并返回一个视图
                    children=move |(id, (count, set_count))| {
                        view! {
                            <li>
                                <button
                                    on:click=move |_| set_count.update(|n| *n += 1)
                                >
                                    {count}
                                </button>
                                <button
                                    on:click=move |_| {
                                        set_counters.update(|counters| {
                                            counters.retain(|(counter_id, _)| counter_id != &id)
                                        });
                                    }
                                >
                                    "Remove"
                                </button>
                            </li>
                        }
                    }
                />
            </ul>
        </div>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

使用 <For/> 迭代更复杂的数据

本章将更深入地介绍嵌套数据结构的迭代。它属于迭代的另一章,但如果你现在想坚持使用更简单的主题,请随时跳过它,稍后再回来。

问题

我刚才说过,除非键发生了变化,否则框架不会重新渲染任何行中的任何项目。这乍一看可能很有道理,但它很容易让你绊倒。

让我们考虑一个例子,其中我们行中的每一项都是某种数据结构。例如,假设这些项目来自某个 JSON 键和值数组:

#[derive(Debug, Clone)]
struct DatabaseEntry {
    key: String,
    value: i32,
}

让我们定义一个简单的组件,它将迭代这些行并显示每一行:

#[component]
pub fn App() -> impl IntoView {
	// 从一组三行开始
    let (data, set_data) = create_signal(vec![
        DatabaseEntry {
            key: "foo".to_string(),
            value: 10,
        },
        DatabaseEntry {
            key: "bar".to_string(),
            value: 20,
        },
        DatabaseEntry {
            key: "baz".to_string(),
            value: 15,
        },
    ]);
    view! {
		// 当我们点击时,更新每一行,
		// 将其值加倍
        <button on:click=move |_| {
            set_data.update(|data| {
                for row in data {
                    row.value *= 2;
                }
            });
			// 记录信号的新值
            logging::log!("{:?}", data.get());
        }>
            "Update Values"
        </button>
		// 迭代这些行并显示每个值
        <For
            each=data
            key=|state| state.key.clone()
            let:child
        >
            <p>{child.value}</p>
        </For>
    }
}

请注意这里的 let:child 语法。在上一章中,我们介绍了带有 children prop 的 <For/>。借助 let:child,我们实际上可以直接在 <For/> 组件的子级中接收这个值,而无需跳出 view 宏:上面的 let:child 与 <p>{child.value}</p> 的组合相当于

children=|child| view! { <p>{child.value}</p> }

当你点击“更新值”按钮时......什么也没有发生。或者更确切地说:信号已更新,新值已记录,但每行的 {child.value} 不会更新。

让我们看看:这是因为我们忘记添加闭包以使其具有响应式吗?让我们试试 {move || child.value}

...不。仍然没有。

问题在于:正如我所说,只有当键发生变化时,才会重新渲染每一行。我们已经更新了每一行的值,但没有更新任何行的键,所以没有任何内容重新渲染。如果你查看 child.value 的类型,它是一个普通的 i32,而不是一个响应式的 ReadSignal<i32> 或其他什么。这意味着即使我们用一个闭包将其包裹起来,此行中的值也永远不会更新。

我们有三种可能的解决方案:

  1. 更改 key,使其在数据结构发生更改时始终更新
  2. 更改 value,使其具有响应式
  3. 获取数据结构的响应式切片,而不是直接使用每一行

选项 1:更改键

只有当键发生变化时,才会重新渲染每一行。我们上面的行没有重新渲染,因为键没有改变。那么:为什么不强制更改键呢?

<For
	each=data
	key=|state| (state.key.clone(), state.value)
	let:child
>
	<p>{child.value}</p>
</For>

现在我们将键和值都包含在 key 中。这意味着只要行的值发生变化,<For/> 就会将其视为一个全新的行,并替换前一行。

优点

这很容易。我们可以通过在 DatabaseEntry 上派生 PartialEqEqHash 来使其更容易,在这种情况下,我们可以只使用 key=|state| state.clone()
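也就是说,可以写成下面这样(极简示意):

#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct DatabaseEntry {
    key: String,
    value: i32,
}

// 然后在 <For/> 中直接把整行用作键
// key=|state| state.clone()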

缺点

这是三种选择中效率最低的。 每当行的值发生变化时,它都会丢弃之前的 <p> 元素,并用一个全新的元素替换它。换句话说,它不是对文本节点进行细粒度的更新,而是在每次更改时重新渲染整行,其开销与该行 UI 的复杂程度成正比。

你还会注意到,我们最终会克隆整个数据结构,以便 <For/> 可以保存键的副本。对于更复杂的结构,这很快就会变成一个坏主意!

选项 2:嵌套信号

如果我们确实希望该值具有细粒度的响应式,一种选择是将每行的 value 包装在一个信号中。

#[derive(Debug, Clone)]
struct DatabaseEntry {
    key: String,
    value: RwSignal<i32>,
}

RwSignal<_> 是一个“读写信号”,它将 getter 和 setter 合并到一个对象中。我在这里使用它是因为它比单独的 getter 和 setter 更容易存储在结构体中。

#[component]
pub fn App() -> impl IntoView {
	// 从一组三行开始
    let (data, set_data) = create_signal(vec![
        DatabaseEntry {
            key: "foo".to_string(),
            value: create_rw_signal(10),
        },
        DatabaseEntry {
            key: "bar".to_string(),
            value: create_rw_signal(20),
        },
        DatabaseEntry {
            key: "baz".to_string(),
            value: create_rw_signal(15),
        },
    ]);
    view! {
		// 当我们点击时,更新每一行,
		// 将其值加倍
        <button on:click=move |_| {
            data.with(|data| {
                for row in data {
                    row.value.update(|value| *value *= 2);
                }
            });
			// 记录信号的新值
            logging::log!("{:?}", data.get());
        }>
            "Update Values"
        </button>
		// 迭代这些行并显示每个值
        <For
            each=data
            key=|state| state.key.clone()
            let:child
        >
            <p>{child.value}</p>
        </For>
    }
}

这个版本有效!如果你在浏览器的 DOM 检查器中查看,你会看到与之前的版本不同,在这个版本中只有单个文本节点被更新。将信号直接传递到 {child.value} 中是有效的,因为如果你将信号传递到视图中,信号确实会保持它们的响应性。

请注意,我将 set_data.update() 更改为 data.with().with() 是访问信号值的非克隆方式。在这种情况下,我们只更新内部值,而不更新值列表:因为信号维护它们自己的状态,我们实际上根本不需要更新 data 信号,所以这里使用不可变的 .with() 就可以了。

事实上,这个版本并没有更新 data,所以 <For/> 本质上是一个静态列表,如上一章所示,这可能只是一个普通的迭代器。但是,如果我们将来想要添加或删除行,<For/> 就会很有用。

优点

这是最有效的选择,并且与框架的其余心智模型直接吻合:随时间变化的值被包装在信号中,以便界面可以对它们做出响应。

缺点

如果你从 API 或你无法控制的其他数据源接收数据,并且你不想创建不同的结构体来将每个字段包装在信号中,那么嵌套的响应式可能会很麻烦。

选项 3:记忆切片

Leptos 提供了一个名为 create_memo 的原语,它创建一个派生计算,仅在其值发生变化时才触发响应式更新。

这允许你为较大数据结构的子字段创建响应式值,而无需将该结构体的字段包装在信号中。

大多数应用程序可以保持与初始(已损坏)版本相同,但 <For/> 将更新为:

<For
    each=move || data().into_iter().enumerate()
    key=|(_, state)| state.key.clone()
    children=move |(index, _)| {
        let value = create_memo(move |_| {
            data.with(|data| data.get(index).map(|d| d.value).unwrap_or(0))
        });
        view! {
            <p>{value}</p>
        }
    }
/>

你会注意到这里有一些区别:

  • 我们将 data 信号转换为枚举迭代器
  • 我们显式使用 children prop,以便更容易运行一些非 view 代码
  • 我们定义了一个 memo value 并在视图中使用它。这个 value 字段实际上并没有使用传递到每一行的 child。相反,它使用索引并返回到原始的 data 中以获取值。

现在,每次 data 发生变化时,每个 memo 都会重新计算。如果它的值发生了变化,它将更新它的文本节点,而不会重新渲染整个行。

优点

我们获得了与信号包装版本相同的细粒度响应性,而无需将数据包装在信号中。

缺点

<For/> 循环内设置这个逐行 memo 比使用嵌套信号要复杂一些。例如,你会注意到我们必须通过使用 data.get(index) 来防止 data[index] 发生 panic 的可能性,因为这个 memo 可能在行被删除后立即被触发重新运行一次。(这是因为每行的 memo 和整个 <For/> 都依赖于相同的 data 信号,并且依赖于相同信号的多个响应式值的执行顺序无法得到保证。)

还要注意,虽然 memo 会记住它们的响应式变化,但每次都需要重新运行相同的计算来检查值,因此嵌套的响应式信号对于此处的精确更新仍然更有效。

表单和输入

表单和表单输入是交互式应用程序的重要组成部分。在 Leptos 中与输入交互有两种基本模式,如果你熟悉 React、SolidJS 或类似的框架,你可能会认出它们:使用受控非受控输入。

受控输入

在“受控输入”中,框架控制输入元素的状态。在每个 input 事件上,它都会更新一个保存当前状态的本地信号,而该信号又会更新输入的 value 属性。

有两件重要的事情需要记住:

  1. input 事件在元素的(几乎)每次更改时触发,而 change 事件在(或多或少)你取消输入焦点时触发。你可能想要 on:input,但我们让你自由选择。
  2. value 特性(attribute)只设置输入的初始值,也就是说,它只在你开始输入之前更新输入框。之后,value 属性(property)才会继续更新输入框的当前值。由于这个原因,你通常想要设置 prop:value。(对于 <input type="checkbox"> 上的 checked 与 prop:checked 也是如此。)
let (name, set_name) = create_signal("Controlled".to_string());

view! {
    <input type="text"
        on:input=move |ev| {
            // event_target_value 是一个 Leptos 辅助函数
            // 它的功能与 JavaScript 中的 event.target.value 相同
            // 但它简化了在 Rust 中使其工作所需的一些类型转换
            set_name(event_target_value(&ev));
        }

        // `prop:` 语法允许你更新 DOM 属性,
        // 而不是属性。
        prop:value=name
    />
    <p>"Name is: " {name}</p>
}

为什么需要 prop:value

Web 浏览器是现存最普遍和最稳定的图形用户界面渲染平台。在它们存在的三十年中,它们还保持了令人难以置信的向后兼容性。不可避免地,这意味着存在一些怪癖。

一个奇怪的怪癖是 HTML 特性(attribute)和 DOM 元素属性(property)之间的区别:前者是从 HTML 中解析出来、也可以用 .setAttribute() 在 DOM 元素上设置的“attribute”;后者是解析后的 HTML 元素在 JavaScript 类表示形式中的字段,称为“property”。

<input value=...> 的情况下,设置 value 属性 被定义为设置输入的初始值,而设置 value 属性 设置其当前值。通过打开 about:blank 并在浏览器控制台中逐行运行以下 JavaScript,也许最容易理解这一点:

// 创建一个输入并将其附加到 DOM
const el = document.createElement("input");
document.body.appendChild(el);

el.setAttribute("value", "test"); // 更新输入
el.setAttribute("value", "another test"); // 再次更新输入

// 现在去输入框中输入:删除一些字符,等等。

el.setAttribute("value", "one more time?");
// 什么都没有改变。现在设置“初始值”没有任何作用

// 但是...
el.value = "But this works";

许多其他前端框架把特性(attribute)和属性(property)混为一谈,或者专门为 value 做了特殊处理,使其能被正确设置。也许 Leptos 也应该这样做;但就目前而言,我更愿意让用户最大程度地控制自己是在设置 attribute 还是 property,并尽我所能向人们讲清楚浏览器实际的底层行为,而不是把它掩盖起来。

非受控输入

在“非受控输入”中,浏览器控制输入元素的状态。我们不使用不断更新的信号来保存它的值,而是使用 NodeRef 在我们想要获取它的值时访问输入。

在这个例子中,我们只在 <form> 触发 submit 事件时通知框架。注意 leptos::html 模块的使用,它为每个 HTML 元素提供了一堆类型。

let (name, set_name) = create_signal("Uncontrolled".to_string());

let input_element: NodeRef<html::Input> = create_node_ref();

view! {
    <form on:submit=on_submit> // on_submit 在下面定义
        <input type="text"
            value=name
            node_ref=input_element
        />
        <input type="submit" value="Submit"/>
    </form>
    <p>"Name is: " {name}</p>
}

到目前为止,视图应该很容易理解。注意两件事:

  1. 与受控输入示例不同,我们使用 value(而不是 prop:value)。这是因为我们只是设置输入的初始值,并让浏览器控制其状态。(我们可以使用 prop:value 代替。)
  2. 我们使用 node_ref=... 来填充 NodeRef。(较旧的示例有时使用 _ref。它们是一回事,但 node_ref 具有更好的 rust-analyzer 支持。)

NodeRef 是一种响应式智能指针:我们可以使用它来访问底层的 DOM 节点。它的值将在元素渲染时设置。

let on_submit = move |ev: leptos::ev::SubmitEvent| {
    // 阻止页面重新加载!
    ev.prevent_default();

    // 在这里,我们将从输入中提取值
    let value = input_element()
        // 事件处理程序只能在视图
        // 被挂载到 DOM 后触发,因此 `NodeRef` 将是 `Some`
        .expect("<input> should be mounted")
        // `leptos::HtmlElement<html::Input>` 实现了 `Deref`
        // 到 `web_sys::HtmlInputElement`。
        // 这意味着我们可以调用 `HtmlInputElement::value()`
        // 来获取输入的当前值
        .value();
    set_name(value);
};

我们的 on_submit 处理程序将访问输入的值并使用它来调用 set_name。要访问存储在 NodeRef 中的 DOM 节点,我们可以简单地将其作为函数调用(或使用 .get())。这将返回 Option<leptos::HtmlElement<html::Input>>,但我们知道该元素已经挂载(否则你如何触发此事件!),因此在这里解包是安全的。

然后我们可以调用 .value() 从输入中获取值,因为 NodeRef 允许我们访问正确类型的 HTML 元素。

查看 web_sys 和 HtmlElement 的文档,以了解有关使用 leptos::HtmlElement 的更多信息。另请参阅本页末尾的完整 CodeSandbox 示例。

特殊情况:<textarea><select>

两个表单元素往往会以不同的方式引起一些混淆。

<textarea>

<input> 不同,<textarea> 元素不支持 value 属性。相反,它将其值作为纯文本节点接收在其 HTML 子级中。

在当前版本的 Leptos 中(实际上在 Leptos 0.1-0.6 中),创建动态子级会插入注释标记节点。如果你尝试使用它来显示动态内容,这可能会导致不正确的 <textarea> 渲染(以及 hydration 期间的问题)。

相反,你可以将非响应式的初始值作为子级传递,并使用 prop:value 来设置其当前值。(<textarea> 不支持 value 特性(attribute),但确实支持 value 属性(property)……)

view! {
    <textarea
        prop:value=move || some_value.get()
        on:input=/* etc */
    >
        /* 纯文本初始值,如果信号发生变化,则不会改变 */
        {some_value.get_untracked()}
    </textarea>
}

<select>

同样,<select> 元素可以通过 <select> 本身的 value 属性来控制,这将选择具有该值的任何 <option>

let (value, set_value) = create_signal(0i32);
view! {
  <select
    on:change=move |ev| {
      let new_value = event_target_value(&ev);
      set_value(new_value.parse().unwrap());
    }
    prop:value=move || value.get().to_string()
  >
    <option value="0">"0"</option>
    <option value="1">"1"</option>
    <option value="2">"2"</option>
  </select>
  // 一个循环选择选项的按钮
  <button on:click=move |_| set_value.update(|n| {
    if *n == 2 {
      *n = 0;
    } else {
      *n += 1;
    }
  })>
    "Next Option"
  </button>
}

受控与非受控表单 CodeSandbox

点击打开 CodeSandbox.

CodeSandbox 源码
use leptos::{ev::SubmitEvent, *};

#[component]
fn App() -> impl IntoView {
    view! {
        <h2>"Controlled Component"</h2>
        <ControlledComponent/>
        <h2>"Uncontrolled Component"</h2>
        <UncontrolledComponent/>
    }
}

#[component]
fn ControlledComponent() -> impl IntoView {
    // 创建一个信号来保存值
    let (name, set_name) = create_signal("Controlled".to_string());

    view! {
        <input type="text"
            // 每当输入发生变化时触发事件
            on:input=move |ev| {
                // event_target_value 是一个 Leptos 辅助函数
                // 它的功能与 JavaScript 中的 event.target.value 相同
                // 但它简化了在 Rust 中使其工作所需的一些类型转换
                set_name(event_target_value(&ev));
            }

            // `prop:` 语法允许你更新 DOM 属性,
            // 而不是属性。
            //
            // 重要提示:`value` *特性 (attribute)* 只设置
            // 初始值,直到你第一次进行更改。
            // `value` *属性 (property)* 才会设置当前值。
            // 这是 DOM 的一个怪癖;我并没有发明它。
            // 其他框架掩盖了这一点;我认为
            // 让你能够访问真正工作的浏览器
            // 更为重要。
            //
            // tl;dr:对表单输入使用 prop:value
            prop:value=name
        />
        <p>"Name is: " {name}</p>
    }
}

#[component]
fn UncontrolledComponent() -> impl IntoView {
    // 导入 <input> 的类型
    use leptos::html::Input;

    let (name, set_name) = create_signal("Uncontrolled".to_string());

    // 我们将使用 NodeRef 来存储对输入元素的引用
    // 这将在创建元素时填充
    let input_element: NodeRef<Input> = create_node_ref();

    // 在表单 `submit` 事件发生时触发
    // 这会将 <input> 的值存储在我们的信号中
    let on_submit = move |ev: SubmitEvent| {
        // 阻止页面重新加载!
        ev.prevent_default();

        // 在这里,我们将从输入中提取值
        let value = input_element()
            // 事件处理程序只能在视图
            // 被挂载到 DOM 后触发,因此 `NodeRef` 将是 `Some`
            .expect("<input> to exist")
            // `NodeRef` 为 DOM 元素类型实现了 `Deref`
            // 这意味着我们可以调用 `HtmlInputElement::value()`
            // 来获取输入的当前值
            .value();
        set_name(value);
    };

    view! {
        <form on:submit=on_submit>
            <input type="text"
                // 在这里,我们使用 `value` *属性* 只设置
                // 初始值,之后让浏览器维护
                // 状态。
                value=name

                // 在 `input_element` 中存储对此输入的引用
                node_ref=input_element
            />
            <input type="submit" value="Submit"/>
        </form>
        <p>"Name is: " {name}</p>
    }
}

// 这个 `main` 函数是应用程序的入口点
// 它只是将我们的组件挂载到 <body>
// 因为我们将其定义为 `fn App`,我们现在可以在
// 模板中将其用作 <App/>
fn main() {
    leptos::mount_to_body(App)
}

控制流

在大多数应用程序中,你有时需要做出决定:我应该渲染视图的这一部分吗?我应该渲染 <ButtonA/> 还是 <WidgetB/>?这就是控制流

一些技巧

在考虑如何使用 Leptos 来做到这一点时,记住以下几点很重要:

  1. Rust 是一种面向表达式的语言:像 if x() { y } else { z }match x() { ... } 这样的控制流表达式会返回它们的值。这使得它们对于声明式用户界面非常有用。
  2. 对于任何实现了 IntoView 的 T——换句话说,对于 Leptos 知道如何渲染的任何类型——Option<T> 和 Result<T, impl Error> 也都实现了 IntoView。正如 Fn() -> T 渲染一个响应式的 T 一样,Fn() -> Option<T> 和 Fn() -> Result<T, impl Error> 也是响应式的。
  3. Rust 有很多方便的辅助函数,比如 Option::map、Option::and_then、Option::ok_or、Result::map、Result::ok 和 bool::then,它们允许你以声明式的方式在几种不同的标准类型之间进行转换,而所有这些类型都可以被渲染。特别是,花时间阅读 Option 和 Result 的文档是提升 Rust 水平的最佳方法之一。
  4. 永远记住:要成为响应式的,值必须是函数。你会看到我在下面不断地将东西包装在一个 move || 闭包中。这是为了确保当它们依赖的信号发生变化时,它们能够实际重新运行,从而保持 UI 的响应性。

那又怎样?

为了把这些点联系起来:这意味着你实际上可以使用原生 Rust 代码实现大部分的控制流,而无需任何控制流组件或特殊知识。

例如,让我们从一个简单的信号和派生信号开始:

let (value, set_value) = create_signal(0);
let is_odd = move || value() & 1 == 1;

如果你不认识 is_odd 发生了什么,不要太担心。这只是通过对 1 进行按位 AND 来测试整数是否为奇数的一种简单方法。

我们可以使用这些信号和普通的 Rust 来构建大多数控制流。

if 语句

假设我想在数字为奇数时渲染一些文本,在数字为偶数时渲染其他一些文本。那么,这样如何?

view! {
    <p>
    {move || if is_odd() {
        "Odd"
    } else {
        "Even"
    }}
    </p>
}

if 表达式返回它的值,并且 &str 实现了 IntoView,所以 Fn() -> &str 实现了 IntoView,所以这... 就行了!

Option<T>

假设我们想在数字为奇数时渲染一些文本,在数字为偶数时什么也不渲染。

let message = move || {
    if is_odd() {
        Some("Ding ding ding!")
    } else {
        None
    }
};

view! {
    <p>{message}</p>
}

这很好用。如果我们愿意,我们可以使用 bool::then() 使它更短一些。

let message = move || is_odd().then(|| "Ding ding ding!");
view! {
    <p>{message}</p>
}

你甚至可以内联它,如果你愿意的话,虽然我个人有时喜欢通过将东西从 view 中拉出来获得更好的 cargo fmtrust-analyzer 支持。

match 语句

我们仍然只是在编写普通的 Rust 代码,对吧?所以你拥有 Rust 模式匹配的所有能力。

let message = move || {
    match value() {
        0 => "Zero",
        1 => "One",
        n if is_odd() => "Odd",
        _ => "Even"
    }
};
view! {
    <p>{message}</p>
}

为什么不呢?YOLO,对吧?

避免过度渲染

不要太 YOLO。

我们刚刚做的所有事情基本上都没问题。但是有一件事你应该记住并尽量小心。到目前为止,我们创建的每个控制流函数基本上都是一个派生信号:每次值发生变化时它都会重新运行。在上面的例子中,值在每次变化时都会从偶数切换到奇数,这很好。

但是考虑下面的例子:

let (value, set_value) = create_signal(0);

let message = move || if value() > 5 {
    "Big"
} else {
    "Small"
};

view! {
    <p>{message}</p>
}

这当然可以工作。但是如果你加上一条日志,你可能会感到惊讶:

let message = move || if value() > 5 {
    logging::log!("{}: rendering Big", value());
    "Big"
} else {
    logging::log!("{}: rendering Small", value());
    "Small"
};

当用户点击一个按钮时,你会看到类似这样的内容:

1: rendering Small
2: rendering Small
3: rendering Small
4: rendering Small
5: rendering Small
6: rendering Big
7: rendering Big
8: rendering Big
... 依此类推

每次 value 发生变化时,它都会重新运行 if 语句。这在响应性工作原理中是有道理的。但它有一个缺点。对于一个简单的文本节点,重新运行 if 语句并重新渲染没什么大不了的。但是想象一下它是这样的:

let message = move || if value() > 5 {
    <Big/>
} else {
    <Small/>
};

这会重新渲染 <Small/> 五次,然后无限重新渲染 <Big/>。如果它们正在加载资源、创建信号,或者仅仅是创建 DOM 节点,这就是不必要的工作。

<Show/>

<Show/> 组件就是答案。你给它传递一个 when 条件函数,一个在 when 函数返回 false 时显示的 fallback,以及在 whentrue 时渲染的子级。

let (value, set_value) = create_signal(0);

view! {
  <Show
    when=move || { value() > 5 }
    fallback=|| view! { <Small/> }
  >
    <Big/>
  </Show>
}

<Show/> 会记住 when 条件,因此它只渲染一次 <Small/>,并继续显示相同的组件,直到 value 大于 5;然后它渲染一次 <Big/>,并继续无限期地显示它,或者直到 value 小于 5 然后再次渲染 <Small/>

当使用动态 if 表达式时,这是一个避免重新渲染的有用工具。与往常一样,这会有一些开销:对于一个非常简单的节点(比如更新单个文本节点,或者更新一个类或属性),move || if ... 会更有效率。但是,如果渲染任何一个分支的成本都很高,那就使用 <Show/>

注意:类型转换

在本节中,最后还有一件重要的事情要说。

view 宏不会返回最通用的包装类型 View。相反,它返回的是 Fragment 或 HtmlElement<Input> 这样的具体类型。如果从条件的不同分支返回不同的 HTML 元素,这可能会有点烦人:

view! {
    <main>
        {move || match is_odd() {
            true if value() == 1 => {
                // 返回 HtmlElement<Pre>
                view! { <pre>"One"</pre> }
            },
            false if value() == 2 => {
                // 返回 HtmlElement<P>
                view! { <p>"Two"</p> }
            }
            // 返回 HtmlElement<Textarea>
            _ => view! { <textarea>{value()}</textarea> }
        }}
    </main>
}

这种强类型实际上非常强大,因为 HtmlElement 除了其他功能外,还是一个智能指针:每个 HtmlElement<T> 类型都为相应的底层 web_sys 类型实现了 Deref。换句话说,在浏览器中,你的 view 返回的是真正的 DOM 元素,你可以访问它们上的原生 DOM 方法。

但这在像这样的条件逻辑中可能会有点烦人,因为在 Rust 中你不能从条件的不同分支返回不同的类型。有两种方法可以让你摆脱这种情况:

  1. 如果你有多个 HtmlElement 类型,可以使用 .into_any() 将它们转换为 HtmlElement<AnyElement>
  2. 如果你有各种各样的视图类型,而不仅仅是 HtmlElement,可以使用 .into_view() 将它们转换为 View

以下是添加了转换的相同示例:

view! {
    <main>
        {move || match is_odd() {
            true if value() == 1 => {
                // 返回 HtmlElement<Pre>
                view! { <pre>"One"</pre> }.into_any()
            },
            false if value() == 2 => {
                // 返回 HtmlElement<P>
                view! { <p>"Two"</p> }.into_any()
            }
            // returns HtmlElement<Textarea>
            _ => view! { <textarea>{value()}</textarea> }.into_any()
        }}
    </main>
}

实时示例

点击打开 CodeSandbox.

CodeSandbox 源码
use leptos::*;

#[component]
fn App() -> impl IntoView {
    let (value, set_value) = create_signal(0);
    let is_odd = move || value() & 1 == 1;
    let odd_text = move || if is_odd() { Some("How odd!") } else { None };

    view! {
        <h1>"Control Flow"</h1>

        // 用于更新和显示值的简单 UI
        <button on:click=move |_| set_value.update(|n| *n += 1)>
            "+1"
        </button>
        <p>"Value is: " {value}</p>

        <hr/>

        <h2><code>"Option<T>"</code></h2>
        // 对于任何实现了 `IntoView` 的 `T`,
        // `Option<T>` 也实现了 `IntoView`

        <p>{odd_text}</p>
        // 这意味着你可以对它使用 `Option` 方法
        <p>{move || odd_text().map(|text| text.len())}</p>

        <h2>"Conditional Logic"</h2>
        // 你可以通过几种方式进行动态条件 if-then-else
        // 逻辑
        //
        // a. 函数中的 "if" 表达式
        //    这将在每次值发生变化时重新渲染,这使得它适用于轻量级 UI
        <p>
            {move || if is_odd() {
                "Odd"
            } else {
                "Even"
            }}
        </p>

        // b. 切换某种类
        //    这对于经常切换的元素来说很聪明,因为它不会破坏
        //    它在不同状态之间的状态
        //    (你可以在 `index.html` 中找到 `hidden` 类)
        <p class:hidden=is_odd>"Appears if even."</p>

        // c. <Show/> 组件
        //    这只会渲染一次 fallback 和子级,并且是惰性的,并且在
        //    需要时在它们之间切换。这使得它在很多情况下比 {move || if ...} 块更有效率
        <Show when=is_odd
            fallback=|| view! { <p>"Even steven"</p> }
        >
            <p>"Oddment"</p>
        </Show>

        // d. 因为 `bool::then()` 将 `bool` 转换为
        //    `Option`,你可以使用它来创建一个显示/隐藏切换
        {move || is_odd().then(|| view! { <p>"Oddity!"</p> })}

        <h2>"Converting between Types"</h2>
        // e. 注意:如果分支返回不同的类型,
        //    你可以使用
        //    `.into_any()`(对于不同的 HTML 元素类型)
        //    或 `.into_view()`(对于所有视图类型)在它们之间进行转换
        {move || match is_odd() {
            true if value() == 1 => {
                // <pre> 返回 HtmlElement<Pre>
                view! { <pre>"One"</pre> }.into_any()
            },
            false if value() == 2 => {
                // <p> 返回 HtmlElement<P>
                // 所以我们转换为更通用的类型
                view! { <p>"Two"</p> }.into_any()
            }
            _ => view! { <textarea>{value()}</textarea> }.into_any()
        }}
    }
}

fn main() {
    leptos::mount_to_body(App)
}

错误处理

在上一章中,我们看到你可以渲染 Option<T>:在 None 的情况下,它什么也不会渲染,而在 Some(T) 的情况下,它会渲染 T(也就是说,如果 T 实现了 IntoView)。你实际上可以使用 Result<T, E> 做一些非常类似的事情。在 Err(_) 的情况下,它什么也不会渲染。在 Ok(T) 的情况下,它会渲染 T

让我们从一个简单的组件开始,用于捕获数字输入。

#[component]
fn NumericInput() -> impl IntoView {
    let (value, set_value) = create_signal(Ok(0));

    // 当输入发生变化时,尝试从输入中解析一个数字
    let on_input = move |ev| set_value(event_target_value(&ev).parse::<i32>());

    view! {
        <label>
            "Type an integer (or not!)"
            <input type="number" on:input=on_input/>
            <p>
                "You entered "
                <strong>{value}</strong>
            </p>
        </label>
    }
}

每次你更改输入时,on_input 都会尝试将其值解析为 32 位整数 (i32),并将其存储在我们的 value 信号中,该信号是 Result<i32, _>。如果你输入数字 42,UI 将显示

You entered 42

但是如果你输入字符串 foo,它会显示

You entered

这不太好。它避免了我们使用 .unwrap_or_default() 或其他类似的东西,但如果我们可以捕获错误并对其进行处理,那就更好了。

你可以使用 <ErrorBoundary/> 组件来做到这一点。

Note

人们经常试图指出 <input type="number"> 会阻止你输入像 foo 这样的字符串,或任何其他不是数字的内容。这在某些浏览器中是正确的,但并非所有浏览器都如此!此外,还有各种各样的内容可以被输入到一个普通的数字输入框中,而这些内容并不是 i32:浮点数、大于 32 位的数字、字母 e 等等。可以告诉浏览器维护其中一些不变式,但浏览器的行为仍然会有所不同:自己进行解析很重要!

<ErrorBoundary/>

<ErrorBoundary/> 有点像我们在上一章中看到的 <Show/> 组件。如果一切正常——也就是说,如果一切都是 Ok(_)——它会渲染它的子级。但是,如果在这些子级中渲染了 Err(_),它将触发 <ErrorBoundary/>fallback

让我们在这个例子中添加一个 <ErrorBoundary/>

#[component]
fn NumericInput() -> impl IntoView {
    let (value, set_value) = create_signal(Ok(0));

    let on_input = move |ev| set_value(event_target_value(&ev).parse::<i32>());

    view! {
        <h1>"Error Handling"</h1>
        <label>
            "Type a number (or something that's not a number!)"
            <input type="number" on:input=on_input/>
            <ErrorBoundary
                // fallback 接收一个包含当前错误的信号
                fallback=|errors| view! {
                    <div class="error">
                        <p>"Not a number! Errors: "</p>
                        // 我们可以将错误列表渲染为字符串,如果我们愿意的话
                        <ul>
                            {move || errors.get()
                                .into_iter()
                                .map(|(_, e)| view! { <li>{e.to_string()}</li>})
                                .collect_view()
                            }
                        </ul>
                    </div>
                }
            >
                <p>"You entered " <strong>{value}</strong></p>
            </ErrorBoundary>
        </label>
    }
}

现在,如果你输入 42value 就是 Ok(42),你会看到

You entered 42

如果你输入 foovalue 就是 Err(_)fallback 将被渲染。我们选择将错误列表渲染为 String,因此你会看到类似这样的内容

Not a number! Errors:
- cannot parse integer from empty string

如果修复了错误,错误消息将消失,你用 <ErrorBoundary/> 包装的内容将再次出现。

实时示例

点击打开 CodeSandbox.

CodeSandbox 源码
use leptos::*;

#[component]
fn App() -> impl IntoView {
    let (value, set_value) = create_signal(Ok(0));

    // 当输入发生变化时,尝试从输入中解析一个数字
    let on_input = move |ev| set_value(event_target_value(&ev).parse::<i32>());

    view! {
        <h1>"Error Handling"</h1>
        <label>
            "Type a number (or something that's not a number!)"
            <input type="number" on:input=on_input/>
            // 如果在 <ErrorBoundary/> 内部渲染了 `Err(_)`,
            // 将显示 fallback。否则,将显示
            // <ErrorBoundary/> 的子级。
            <ErrorBoundary
                // fallback 接收一个包含当前错误的信号
                fallback=|errors| view! {
                    <div class="error">
                        <p>"Not a number! Errors: "</p>
                        // 我们可以将错误列表渲染为
                        // 字符串,如果我们愿意的话
                        <ul>
                            {move || errors.get()
                                .into_iter()
                                .map(|(_, e)| view! { <li>{e.to_string()}</li>})
                                .collect::<Vec<_>>()
                            }
                        </ul>
                    </div>
                }
            >
                <p>
                    "You entered "
                    // 因为 `value` 是 `Result<i32, _>`,
                    // 如果它是 `Ok`,它将渲染 `i32`,
                    // 如果它是 `Err`,它将渲染 nothing 并触发错误边界。
                    // 它是一个信号,因此当 `value` 发生变化时,它将动态更新
                    <strong>{value}</strong>
                </p>
            </ErrorBoundary>
        </label>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

父子组件通信

你可以将你的应用程序视为一个嵌套的组件树。每个组件都处理自己的本地状态并管理用户界面的一部分,因此组件往往是相对独立的。

但是,有时你需要在父组件与其子组件之间进行通信。例如,假设你定义了一个 <FancyButton/> 组件,它为 <button/> 添加了一些样式、日志记录或其他内容。你想在你的 <App/> 组件中使用 <FancyButton/>。但是你如何在两者之间进行通信呢?

将状态从父组件传递到子组件很容易。我们在 组件和 props 的材料中介绍了一些这方面的内容。基本上,如果你希望父组件与子组件通信,你可以传递一个 ReadSignal、一个 Signal,甚至一个 MaybeSignal 作为 prop。

但是反过来呢?子组件如何将有关事件或状态更改的通知发送回父组件?

在 Leptos 中,有四种基本的父子组件通信模式。

1. 传递一个 WriteSignal

一种方法是简单地将 WriteSignal 从父组件传递到子组件,并在子组件中更新它。这使你可以从子组件操作父组件的状态。

#[component]
pub fn App() -> impl IntoView {
    let (toggled, set_toggled) = create_signal(false);
    view! {
        <p>"Toggled? " {toggled}</p>
        <ButtonA setter=set_toggled/>
    }
}

#[component]
pub fn ButtonA(setter: WriteSignal<bool>) -> impl IntoView {
    view! {
        <button
            on:click=move |_| setter.update(|value| *value = !*value)
        >
            "Toggle"
        </button>
    }
}

这种模式很简单,但你应该小心使用它:传递 WriteSignal 会使你的代码难以推理。在这个例子中,当你阅读 <App/> 时,很明显你正在交出改变 toggled 的能力,但根本不清楚它何时或如何改变。在这个小的、局部的例子中很容易理解,但是如果你发现你在整个代码中都像这样传递 WriteSignal,你应该认真考虑这是否会让编写意大利面条式代码变得太容易。

2. 使用回调函数

另一种方法是将一个回调函数传递给子组件:例如 on_click

#[component]
pub fn App() -> impl IntoView {
    let (toggled, set_toggled) = create_signal(false);
    view! {
        <p>"Toggled? " {toggled}</p>
        <ButtonB on_click=move |_| set_toggled.update(|value| *value = !*value)/>
    }
}


#[component]
pub fn ButtonB(#[prop(into)] on_click: Callback<MouseEvent>) -> impl IntoView
{
    view! {
        <button on:click=on_click>
            "Toggle"
        </button>
    }
}

你会注意到,<ButtonA/> 被赋予了一个 WriteSignal 并决定如何改变它,而 <ButtonB/> 只是触发一个事件:改变发生在 <App/> 中。这样做的好处是将局部状态保持在局部,防止了意大利面条式修改的问题。但这也意味着修改该信号的逻辑需要存在于 <App/> 中,而不是 <ButtonB/> 中。这些是真正的权衡,而不是简单的对错选择。

注意我们使用 Callback<In, Out> 类型的方式。这基本上是一个围绕闭包 Fn(In) -> Out 的包装器,它也是 Copy 的,并且易于传递。

我们还使用了 #[prop(into)] 属性,以便我们可以将普通的闭包传递给 on_click。请参阅章节 “into Props” 了解更多详细信息。

2.1 使用闭包而不是 Callback

你可以直接使用 Rust 闭包 Fn(MouseEvent) 而不是 Callback

#[component]
pub fn App() -> impl IntoView {
    let (toggled, set_toggled) = create_signal(false);
    view! {
        <p>"Toggled? " {toggled}</p>
        <ButtonB on_click=move |_| set_toggled.update(|value| *value = !*value)/>
    }
}


#[component]
pub fn ButtonB<F>(on_click: F) -> impl IntoView
where
    F: Fn(MouseEvent) + 'static
{
    view! {
        <button on:click=on_click>
            "Toggle"
        </button>
    }
}

在这种情况下,代码非常相似。在更高级的用例中,使用闭包可能需要一些克隆,而使用 Callback 则不需要。

注意我们在这里为回调函数声明泛型类型 F 的方式。如果你感到困惑,请回顾一下关于组件的章节中的 泛型 props 部分。

3. 使用事件监听器

你实际上可以用稍微不同的方式编写选项 2。如果回调函数直接映射到原生 DOM 事件,你可以直接在 <App/>view 宏中使用组件的地方添加 on: 监听器。

#[component]
pub fn App() -> impl IntoView {
    let (toggled, set_toggled) = create_signal(false);
    view! {
        <p>"Toggled? " {toggled}</p>
        // 注意 on:click 而不是 on_click
        // 这与 HTML 元素事件监听器的语法相同
        <ButtonC on:click=move |_| set_toggled.update(|value| *value = !*value)/>
    }
}


#[component]
pub fn ButtonC() -> impl IntoView {
    view! {
        <button>"Toggle"</button>
    }
}

这让你在 <ButtonC/> 中编写的代码比在 <ButtonB/> 中少得多,并且仍然为监听器提供了一个正确类型的事件。这是通过为 <ButtonC/> 返回的每个元素添加一个 on: 事件监听器来实现的:在本例中,只有一个 <button>

当然,这只适用于你直接传递给组件中渲染的元素的实际 DOM 事件。对于不直接映射到元素的更复杂的逻辑(例如,你创建了 <ValidatedForm/> 并想要一个 on_valid_form_submit 回调函数),你应该使用选项 2。
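
例如,一个假设性的 <ValidatedForm/> 大致可以写成下面这样(校验逻辑被省略,这里只示意回调 prop 的形态):

use leptos::*;

#[component]
pub fn ValidatedForm<F>(
    /// 表单通过校验后调用的回调(假设的 API,仅作示意)
    on_valid_form_submit: F,
    children: Children,
) -> impl IntoView
where
    F: Fn(String) + 'static,
{
    view! {
        <form on:submit=move |ev| {
            ev.prevent_default();
            // 真正的校验逻辑在这里被省略
            on_valid_form_submit("validated data".to_string());
        }>
            {children()}
        </form>
    }
}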

4. 提供一个上下文

这个版本实际上是选项 1 的一个变体。假设你有一个深度嵌套的组件树:

#[component]
pub fn App() -> impl IntoView {
    let (toggled, set_toggled) = create_signal(false);
    view! {
        <p>"Toggled? " {toggled}</p>
        <Layout/>
    }
}

#[component]
pub fn Layout() -> impl IntoView {
    view! {
        <header>
            <h1>"My Page"</h1>
        </header>
        <main>
            <Content/>
        </main>
    }
}

#[component]
pub fn Content() -> impl IntoView {
    view! {
        <div class="content">
            <ButtonD/>
        </div>
    }
}

#[component]
pub fn ButtonD() -> impl IntoView {
    todo!()
}

现在 <ButtonD/> 不再是 <App/> 的直接子级,因此你不能简单地将你的 WriteSignal 传递给它的 props。你可以做一些有时被称为“prop drilling”的事情,在两者之间的每一层添加一个 prop:

#[component]
pub fn App() -> impl IntoView {
    let (toggled, set_toggled) = create_signal(false);
    view! {
        <p>"Toggled? " {toggled}</p>
        <Layout set_toggled/>
    }
}

#[component]
pub fn Layout(set_toggled: WriteSignal<bool>) -> impl IntoView {
    view! {
        <header>
            <h1>"My Page"</h1>
        </header>
        <main>
            <Content set_toggled/>
        </main>
    }
}

#[component]
pub fn Content(set_toggled: WriteSignal<bool>) -> impl IntoView {
    view! {
        <div class="content">
            <ButtonD set_toggled/>
        </div>
    }
}

#[component]
pub fn ButtonD(set_toggled: WriteSignal<bool>) -> impl IntoView {
    todo!()
}

这真是一团糟。<Layout/><Content/> 不需要 set_toggled;它们只是将其传递给 <ButtonD/>。但我需要声明三次这个 prop。这不仅烦人,而且难以维护:想象一下,我们添加了一个“half-toggled”选项,set_toggled 的类型需要更改为一个 enum。我们必须在三个地方更改它!

有没有办法跳过层级?

有!

4.1 上下文 API

你可以使用 provide_contextuse_context 来提供跳过层级的数据。上下文由你提供的数据类型(在本例中为 WriteSignal<bool>)标识,并且它们存在于一个自上而下的树中,该树遵循你的 UI 树的轮廓。在这个例子中,我们可以使用上下文来跳过不必要的 prop drilling。

#[component]
pub fn App() -> impl IntoView {
    let (toggled, set_toggled) = create_signal(false);

    // 与此组件的所有子组件共享 `set_toggled`
    provide_context(set_toggled);

    view! {
        <p>"Toggled? " {toggled}</p>
        <Layout/>
    }
}

// <Layout/> 和 <Content/> 省略
// 若要让这个版本正常工作,请删除它们对 set_toggled 的引用

#[component]
pub fn ButtonD() -> impl IntoView {
    // use_context 向上搜索上下文树,希望
    // 找到一个 `WriteSignal<bool>`
    // 在这种情况下,我使用 .expect() 因为我知道我提供了它
    let setter = use_context::<WriteSignal<bool>>()
        .expect("to have found the setter provided");

    view! {
        <button
            on:click=move |_| setter.update(|value| *value = !*value)
        >
            "Toggle"
        </button>
    }
}

<ButtonA/> 相同的注意事项也适用于此:传递 WriteSignal 应该谨慎行事,因为它允许你从代码的任意部分修改状态。但是,如果小心谨慎地进行,这可能是 Leptos 中最有效的全局状态管理技术之一:只需在你需要它的最高级别提供状态,并在你需要它的较低级别使用它。

请注意,这种方法没有性能方面的缺点。因为你传递的是一个细粒度的响应式信号,所以在更新它时,中间组件(<Layout/><Content/>什么也不会发生。你直接在 <ButtonD/><App/> 之间进行通信。事实上——这就是细粒度响应式的强大之处——你直接在 <ButtonD/> 中的按钮点击和 <App/> 中的单个文本节点之间进行通信。就好像这些组件本身根本不存在一样。而且,嗯... 在运行时,它们确实不存在。一直到底都只是信号和效果。

实时示例

点击打开 CodeSandbox.

CodeSandbox 源码
use leptos::{ev::MouseEvent, *};

// 这突出了子组件与父组件通信的四种不同方式:
// 1) <ButtonA/>:将 WriteSignal 作为子组件 props 之一传递,
//    供子组件写入和父组件读取
// 2) <ButtonB/>:将闭包作为子组件 props 之一传递,供
//    子组件调用
// 3) <ButtonC/>:向组件添加 `on:` 事件监听器
// 4) <ButtonD/>:提供一个在组件中使用的上下文(而不是 prop drilling)

#[derive(Copy, Clone)]
struct SmallcapsContext(WriteSignal<bool>);

#[component]
pub fn App() -> impl IntoView {
    // 只是一些用于切换 <p> 上三个类的信号
    let (red, set_red) = create_signal(false);
    let (right, set_right) = create_signal(false);
    let (italics, set_italics) = create_signal(false);
    let (smallcaps, set_smallcaps) = create_signal(false);

    // newtype 模式在这里不是*必需的*,但这是一个好习惯
    // 它避免了与其他可能的未来 `WriteSignal<bool>` 上下文的混淆
    // 并使其更容易在 ButtonC 中引用
    provide_context(SmallcapsContext(set_smallcaps));

    view! {
        <main>
            <p
                // class: 属性接受 F: Fn() -> bool,并且这些信号都实现了 Fn()
                class:red=red
                class:right=right
                class:italics=italics
                class:smallcaps=smallcaps
            >
                "Lorem ipsum sit dolor amet."
            </p>

            // 按钮 A:传递信号设置器
            <ButtonA setter=set_red/>

            // 按钮 B:传递一个闭包
            <ButtonB on_click=move |_| set_right.update(|value| *value = !*value)/>

            // 按钮 C:使用常规事件监听器
            // 像这样在组件上设置事件监听器会将其应用于
            // 组件返回的每个顶级元素
            <ButtonC on:click=move |_| set_italics.update(|value| *value = !*value)/>

            // 按钮 D 从上下文而不是 props 获取其设置器
            <ButtonD/>
        </main>
    }
}

/// 按钮 A 接收一个信号设置器并更新信号本身
#[component]
pub fn ButtonA(
    /// 单击按钮时将切换的信号。
    setter: WriteSignal<bool>,
) -> impl IntoView {
    view! {
        <button
            on:click=move |_| setter.update(|value| *value = !*value)
        >
            "Toggle Red"
        </button>
    }
}

/// 按钮 B 接收一个闭包
#[component]
pub fn ButtonB<F>(
    /// 单击按钮时将调用的回调。
    on_click: F,
) -> impl IntoView
where
    F: Fn(MouseEvent) + 'static,
{
    view! {
        <button
            on:click=on_click
        >
            "Toggle Right"
        </button>
    }

    // 只是一个注释:在普通函数中,ButtonB 可以接受 on_click: impl Fn(MouseEvent) + 'static
    // 并让你免于输入泛型
    // 组件宏实际上扩展为定义一个
    //
    // struct ButtonBProps<F> where F: Fn(MouseEvent) + 'static {
    //   on_click: F
    // }
    //
    // 这就是允许我们在组件调用中使用命名 props 的原因,
    // 而不是有序的函数参数列表
    // 如果 Rust 将来支持命名的函数参数,我们就可以放弃这个要求
}

/// 按钮 C 是一个虚拟按钮:它渲染一个按钮,但不处理
/// 它的点击。相反,父组件添加了一个事件监听器。
#[component]
pub fn ButtonC() -> impl IntoView {
    view! {
        <button>
            "Toggle Italics"
        </button>
    }
}

/// 按钮 D 与按钮 A 非常相似,但不是将设置器作为 prop 传递,
/// 而是从上下文中获取它
#[component]
pub fn ButtonD() -> impl IntoView {
    let setter = use_context::<SmallcapsContext>().unwrap().0;

    view! {
        <button
            on:click=move |_| setter.update(|value| *value = !*value)
        >
            "Toggle Small Caps"
        </button>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

组件子级

就像你可以将子级传递给 HTML 元素一样,将子级传递给组件也是很常见的。例如,假设我有一个 <FancyForm/> 组件,它增强了 HTML <form>。我需要某种方法来传递它的所有输入。

view! {
    <FancyForm>
        <fieldset>
            <label>
                "Some Input"
                <input type="text" name="something"/>
            </label>
        </fieldset>
        <button>"Submit"</button>
    </FancyForm>
}

在 Leptos 中,你如何做到这一点?基本上有两种方法可以将组件传递给其他组件:

  1. 渲染 props:返回视图的函数属性
  2. children prop:一个特殊的组件属性,包含你作为子级传递给组件的任何内容。

事实上,你已经在 <Show/> 组件中看到了这两者的实际应用:

view! {
  <Show
    // `when` 是一个普通的 prop
    when=move || value() > 5
    // `fallback` 是一个“渲染 prop”:一个返回视图的函数
    fallback=|| view! { <Small/> }
  >
    // `<Big/>`(以及这里的任何其他内容)
    // 将被赋予 `children` prop
    <Big/>
  </Show>
}

让我们定义一个接受一些子级和一个渲染 prop 的组件。

#[component]
pub fn TakesChildren<F, IV>(
    /// 接受一个函数(类型 F),该函数返回任何可以
    /// 转换为视图(类型 IV)的内容
    render_prop: F,
    /// `children` 接受 `Children` 类型
    children: Children,
) -> impl IntoView
where
    F: Fn() -> IV,
    IV: IntoView,
{
    view! {
        <h2>"Render Prop"</h2>
        {render_prop()}

        <h2>"Children"</h2>
        {children()}
    }
}

render_propchildren 都是函数,所以我们可以调用它们来生成相应的视图。children,特别是,是 Box<dyn FnOnce() -> Fragment> 的别名。(你不高兴我们将其命名为 Children 而不是那个吗?)

如果你在这里需要一个 FnFnMut,因为你需要多次调用 children,我们还提供了 ChildrenFnChildrenMut 别名。
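
例如,一个需要把子级渲染两次的组件就可以使用 ChildrenFn(组件名 RendersTwice 是假设的,仅作示意):

use leptos::*;

#[component]
pub fn RendersTwice(children: ChildrenFn) -> impl IntoView {
    view! {
        // `ChildrenFn` 实现了 `Fn`,因此可以被多次调用
        <div>{children()}</div>
        <div>{children()}</div>
    }
}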

我们可以像这样使用 <TakesChildren/> 组件:

view! {
    <TakesChildren render_prop=|| view! { <p>"Hi, there!"</p> }>
        // 这些被传递给 `children`
        "Some text"
        <span>"A span"</span>
    </TakesChildren>
}

操作子级

Fragment 类型基本上是一种包装 Vec<View> 的方法。你可以将其插入到视图中的任何位置。

但是你也可以直接访问这些内部视图来操作它们。例如,这里有一个组件,它接受它的子级并将它们转换成一个无序列表。

#[component]
pub fn WrapsChildren(children: Children) -> impl IntoView {
    // Fragment 有一个 `nodes` 字段,其中包含一个 Vec<View>
    let children = children()
        .nodes
        .into_iter()
        .map(|child| view! { <li>{child}</li> })
        .collect_view();

    view! {
        <ul>{children}</ul>
    }
}

像这样调用它将创建一个列表:

view! {
    <WrapsChildren>
        "A"
        "B"
        "C"
    </WrapsChildren>
}

实时示例

点击打开 CodeSandbox.

CodeSandbox 源码
use leptos::*;

// 通常,你希望将某种子视图传递给另一个
// 组件。有两种基本模式可以做到这一点:
// - “渲染 props”:创建一个接受函数的组件 prop,
//   该函数创建一个视图
// - `children` prop:一个特殊的属性,其中包含
//   在你的视图中作为组件的子级传递的内容,而不是作为
//   属性传递的内容

#[component]
pub fn App() -> impl IntoView {
    let (items, set_items) = create_signal(vec![0, 1, 2]);
    let render_prop = move || {
        // items.with(...) 通过应用一个函数,在不克隆值的情况下
        // 对该值做出响应。在这里,我们直接传入
        // `Vec<_>` 的 `len` 方法
        let len = move || items.with(Vec::len);
        view! {
            <p>"Length: " {len}</p>
        }
    };

    view! {
        // 此组件仅显示两种类型的子级,
        // 将它们嵌入到其他一些标记中
        <TakesChildren
            // 对于组件 props,你可以简写
            // `render_prop=render_prop` => `render_prop`
            // (这不适用于 HTML 元素属性)
            render_prop
        >
            // 这些看起来就像 HTML 元素的子级
            <p>"Here's a child."</p>
            <p>"Here's another child."</p>
        </TakesChildren>
        <hr/>
        // 此组件实际上会迭代并包装子级
        <WrapsChildren>
            <p>"Here's a child."</p>
            <p>"Here's another child."</p>
        </WrapsChildren>
    }
}

/// 在标记内显示 `render_prop` 和一些子级。
#[component]
pub fn TakesChildren<F, IV>(
    /// 接受一个函数(类型 F),该函数返回任何可以
    /// 转换为视图(类型 IV)的内容
    render_prop: F,
    /// `children` 接受 `Children` 类型
    /// 这是 `Box<dyn FnOnce() -> Fragment>` 的别名
    /// ... 你不高兴我们将其命名为 `Children` 而不是那个吗?
    children: Children,
) -> impl IntoView
where
    F: Fn() -> IV,
    IV: IntoView,
{
    view! {
        <h1><code>"<TakesChildren/>"</code></h1>
        <h2>"Render Prop"</h2>
        {render_prop()}
        <hr/>
        <h2>"Children"</h2>
        {children()}
    }
}

/// 将每个子级包装在 `<li>` 中并将它们嵌入到 `<ul>` 中。
#[component]
pub fn WrapsChildren(children: Children) -> impl IntoView {
    // children() 返回一个 `Fragment`,它有一个
    // `nodes` 字段,其中包含一个 Vec<View>
    // 这意味着我们可以迭代子级
    // 来创建新的东西!
    let children = children()
        .nodes
        .into_iter()
        .map(|child| view! { <li>{child}</li> })
        .collect::<Vec<_>>();

    view! {
        <h1><code>"<WrapsChildren/>"</code></h1>
        // 将我们包装的子级包装在 UL 中
        <ul>{children}</ul>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

无宏:视图构建器语法

如果你对到目前为止描述的 view! 宏语法感到满意,欢迎跳过本章。本节中描述的构建器语法始终可用,但并非必需。

出于某种原因,许多开发人员更喜欢避免使用宏。也许你不喜欢有限的 rustfmt 支持。(不过,你应该看看 leptosfmt,这是一个很棒的工具!)也许你担心宏对编译时间的影响。也许你更喜欢纯 Rust 语法的审美,或者你难以在类似 HTML 的语法和你的 Rust 代码之间进行上下文切换。或者,也许你希望在创建和操作 HTML 元素方面比 view 宏提供的更多灵活性。

如果你属于这些阵营中的任何一个,那么构建器语法可能适合你。

view 宏将类似 HTML 的语法扩展为一系列 Rust 函数和方法调用。如果你不想使用 view 宏,你可以简单地自己使用这种扩展语法。而且它实际上非常好!

首先,如果你愿意,你甚至可以放弃 #[component] 宏:组件只是一个创建视图的设置函数,因此你可以将组件定义为一个简单的函数调用:

pub fn counter(initial_value: i32, step: u32) -> impl IntoView { }

元素是通过调用与 HTML 元素同名的函数来创建的:

p()

你可以使用 .child() 将子级添加到元素中,它接受一个子级或一个实现 IntoView 类型的元组或数组。

p().child((em().child("Big, "), strong().child("bold "), "text"))

属性使用 .attr() 添加。这可以接受与你可以作为属性传递给视图宏的任何相同类型(实现 IntoAttribute 的类型)。

p().attr("id", "foo").attr("data-count", move || count().to_string())

类似地,class:prop:style: 语法直接映射到 .class().prop().style() 方法。
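
下面是一个最小示意(假设作用域里已经有一个 count 信号,并且像上面的例子一样导入了元素构建函数):

p()
    // class: 对应 .class(),值是一个返回 bool 的闭包
    .class("red", move || count() > 5)
    // style: 对应 .style()
    .style("font-weight", "bold")
    // prop: 对应 .prop()
    .prop("title", move || count().to_string())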

事件监听器可以使用 .on() 添加。leptos::ev 中的类型化事件可以防止事件名称中的拼写错误,并允许在回调函数中进行正确的类型推断。

button()
    .on(ev::click, move |_| set_count.update(|count| *count = 0))
    .child("Clear")

许多其他方法可以在 HtmlElement 文档中找到,包括一些在 view 宏中没有直接提供的方法。

如果你喜欢这种风格,所有这些加起来就是一个非常 Rust 的语法来构建功能齐全的视图。

/// 一个简单的计数器视图。
// 组件实际上只是一个函数调用:它运行一次以创建 DOM 和响应式系统
pub fn counter(initial_value: i32, step: u32) -> impl IntoView {
    let (count, set_count) = create_signal(0);
    div().child((
        button()
            // 在 leptos::ev 中找到的类型化事件
            // 1) 防止事件名称中的拼写错误
            // 2) 允许在回调中进行正确的类型推断
            .on(ev::click, move |_| set_count.update(|count| *count = 0))
            .child("Clear"),
        button()
            .on(ev::click, move |_| set_count.update(|count| *count -= 1))
            .child("-1"),
        span().child(("Value: ", move || count.get(), "!")),
        button()
            .on(ev::click, move |_| set_count.update(|count| *count += 1))
            .child("+1"),
    ))
}

这样做还有一个好处,那就是更加灵活:因为这些都是普通的 Rust 函数和方法,所以更容易在迭代器适配器等东西中使用它们,而无需任何额外的“魔法”:

// 获取一组属性名称和值
let attrs: Vec<(&str, AttributeValue)> = todo!();
// 你可以使用构建器语法将这些“扩展”到元素上,
// 这是视图宏无法实现的
let p = attrs
    .into_iter()
    .fold(p(), |el, (name, value)| el.attr(name, value));

性能说明

一个警告:view 宏在服务器端渲染(SSR)模式下应用了重大的优化,以显著提高 HTML 渲染性能(根据任何给定应用程序的特征,速度可提高 2-4 倍)。它通过在编译时分析你的 view 并将静态部分转换为简单的 HTML 字符串,而不是将它们扩展为构建器语法来做到这一点。

这意味着两件事:

  1. 构建器语法和 view 宏不应该混合,或者应该非常小心地混合:至少在 SSR 模式下,view 的输出应该被视为一个“黑盒子”,不能对其应用额外的构建器方法,否则会导致不一致。
  2. 使用构建器语法会导致 SSR 性能低于最佳水平。它绝不会很慢(无论如何,都值得运行你自己的基准测试),只是比经过 view 宏优化的版本慢一些。

响应式

Leptos 建立在一个细粒度的响应式系统之上,该系统旨在响应更改和响应式值,尽可能少地运行昂贵的副作用(例如在浏览器中渲染某些内容或发出网络请求)。

到目前为止,我们已经看到了信号的实际应用。这些章节将更深入地介绍,并看一下效果,这是故事的另一半。

使用信号

到目前为止,我们已经使用了一些 create_signal 的简单示例,它返回一个 ReadSignal getter 和一个 WriteSignal setter。

获取和设置

有四种基本的信号操作:

  1. .get() 克隆信号的当前值,并以响应式方式跟踪对该值的任何未来更改。
  2. .with() 接受一个函数,该函数通过引用 (&T) 接收信号的当前值,并跟踪任何未来更改。
  3. .set() 替换信号的当前值,并通知任何订阅者他们需要更新。
  4. .update() 接受一个函数,该函数接收信号当前值的 mutable 引用 (&mut T),并通知任何订阅者他们需要更新。(.update() 不返回闭包返回的值,但如果需要,你可以使用 .try_update();例如,如果你要从 Vec<_> 中删除一个项目并想要这个被删除的项目。)
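
例如,.try_update() 的一个最小示意:

let (names, set_names) = create_signal(vec!["Alice".to_string(), "Bob".to_string()]);

// 外层的 Option 表示更新是否成功执行,内层是闭包(这里的 pop)的返回值
let popped = set_names.try_update(|names| names.pop());
logging::log!("弹出了 {popped:?},剩下 {:?}", names.get());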

ReadSignal 作为函数调用是 .get() 的语法糖。将 WriteSignal 作为函数调用是 .set() 的语法糖。所以

let (count, set_count) = create_signal(0);
set_count(1);
logging::log!(count());

与以下代码相同

let (count, set_count) = create_signal(0);
set_count.set(1);
logging::log!(count.get());

你可能会注意到 .get().set() 可以用 .with().update() 来实现。换句话说,count.get()count.with(|n| n.clone()) 相同,而 count.set(1) 是通过 count.update(|n| *n = 1) 实现的。

但是当然,.get().set()(或者普通的函数调用形式!)是更好的语法。

然而,.with().update() 有一些非常好的用例。

例如,考虑一个保存 Vec<String> 的信号。

let (names, set_names) = create_signal(Vec::new());
if names().is_empty() {
	set_names(vec!["Alice".to_string()]);
}

从逻辑上讲,这很简单,但它隐藏了一些明显的低效之处。记住,names().is_empty()names.get().is_empty() 的语法糖,它克隆了值(它是 names.with(|n| n.clone()).is_empty())。这意味着我们克隆了整个 Vec<String>,运行 is_empty(),然后立即丢弃克隆。

同样,set_names 用一个全新的 Vec<_> 替换了该值。这很好,但我们不妨直接原地修改原始的 Vec<_>

let (names, set_names) = create_signal(Vec::new());
if names.with(|names| names.is_empty()) {
	set_names.update(|names| names.push("Alice".to_string()));
}

现在我们的函数只是通过引用获取 names 来运行 is_empty(),避免了克隆。

如果你打开了 Clippy,或者你目光敏锐,你可能会注意到我们可以让它更简洁:

if names.with(Vec::is_empty) {
	// ...
}

毕竟,.with() 只是接受一个通过引用获取值的函数。因为 Vec::is_empty 接受 &self,我们可以直接传入它,避免不必要的闭包。

有一些辅助宏可以使 .with().update() 更易于使用,尤其是在使用多个信号时。

let (first, _) = create_signal("Bob".to_string());
let (middle, _) = create_signal("J.".to_string());
let (last, _) = create_signal("Smith".to_string());

如果你想将这 3 个信号连接在一起而不需要不必要的克隆,你必须编写如下内容:

let name = move || {
	first.with(|first| {
		middle.with(|middle| last.with(|last| format!("{first} {middle} {last}")))
	})
};

这写起来很长很烦人。

相反,你可以使用 with! 宏同时获取所有信号的引用。

let name = move || with!(|first, middle, last| format!("{first} {middle} {last}"));

这与上面的展开相同。查看 with! 文档了解更多信息,以及相应的宏 update!with_value!update_value!
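
update! 的用法与之类似。下面是一个最小示意(信号名是假设的):

let (names, set_names) = create_signal(vec!["Alice".to_string()]);

// 等价于 set_names.update(|names| names.push("Bob".to_string()));
// 宏会在闭包内部用可变引用遮蔽同名变量
update!(|set_names| set_names.push("Bob".to_string()));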

使信号相互依赖

人们经常会问一些信号需要根据其他信号的值而改变的情况。有三种好方法可以做到这一点,还有一种不太理想但可以在可控情况下使用的方法。

好的选择

**1)B 是 A 的函数。**为 A 创建一个信号,为 B 创建一个派生信号或 memo 。

let (count, set_count) = create_signal(1); // A
let derived_signal_double_count = move || count() * 2; // B 是 A 的函数
let memoized_double_count = create_memo(move |_| count() * 2); // B 是 A 的函数  

有关何时使用派生信号或 memo 的指导,请参阅 create_memo 的文档

**2)C 是 A 和其他事物 B 的函数。**为 A 和 B 创建信号,为 C 创建派生信号或 memo 。

let (first_name, set_first_name) = create_signal("Bridget".to_string()); // A
let (last_name, set_last_name) = create_signal("Jones".to_string()); // B
let full_name = move || with!(|first_name, last_name| format!("{first_name} {last_name}")); // C 是 A 和 B 的函数

**3)A 和 B 是独立的信号,但有时同时更新。**当你调用更新 A 时,进行单独的调用来更新 B。

let (age, set_age) = create_signal(32); // A
let (favorite_number, set_favorite_number) = create_signal(42); // B
// 使用它来处理对 `Clear` 按钮的点击
let clear_handler = move |_| {
  // 同时更新 A 和 B
  set_age(0);
  set_favorite_number(0);
};

如果你真的必须...

**4) 创建一个效果,每当 A 发生变化时写入 B。**官方并不鼓励这样做,原因有以下几点:a) 它总是效率较低,因为这意味着每次 A 更新时,你都要完整地走两遍响应式流程。(你设置 A,这会导致这个效果运行,同时任何其他依赖于 A 的效果也会运行;然后你设置 B,这又会导致任何依赖于 B 的效果运行。)b) 它增加了你意外造成无限循环或让效果过度运行的可能性。这是一种来回乒乓式的响应式意大利面条代码,在 2010 年代初期很常见,我们正是试图通过读写分离、并且不鼓励在效果中写入信号来避免这种情况。

在大多数情况下,最好重写代码,使其基于派生信号或 memo 具有清晰的自上而下的数据流。但这并不是世界末日。

我故意在这里没有提供示例。阅读 create_effect 文档以了解它是如何工作的。

使用 create_effect 响应变化

我们已经走到这一步,而没有提到响应式系统的一半:效果。

响应性工作分为两部分:更新单个响应式值(“信号”)会通知依赖于它们的代码片段(“效果”)它们需要再次运行。响应式系统的这两部分是相互依赖的。没有效果,信号可以在响应式系统内更改,但永远无法以与外部世界交互的方式观察到。没有信号,效果只运行一次,因为没有可订阅的可观察值。效果实际上是响应式系统的“副作用”:它们的存在是为了将响应式系统与其外部的非响应式世界同步。

到目前为止,我们看到的整个响应式 DOM 渲染器背后隐藏着一个名为 create_effect 的函数。

create_effect 接受一个函数作为参数。它立即运行该函数。如果你在该函数内部访问任何响应式信号,它会向响应式运行时注册该效果依赖于该信号的事实。每当效果依赖的信号之一发生变化时,该效果就会再次运行。

let (a, set_a) = create_signal(0);
let (b, set_b) = create_signal(0);

create_effect(move |_| {
  // 立即打印“值:0”并订阅 `a`
  log::debug!("值:{}", a());
});

调用效果函数时,会传递一个参数,该参数包含它上次运行时返回的值。在初始运行时,这是 None
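
一个最小示意:利用这个参数,只在值与上一次不同时才做工作:

let (count, set_count) = create_signal(0);

create_effect(move |prev: Option<i32>| {
    let current = count();
    // `prev` 是上一次运行时返回的值;首次运行时为 None
    if prev != Some(current) {
        logging::log!("count 变为 {current}");
    }
    // 这里返回的值会在下一次运行时作为 `prev` 传入
    current
});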

默认情况下,效果不会在服务器上运行。这意味着你可以在效果函数中调用特定于浏览器的 API,而不会导致问题。如果你需要在服务器上运行效果,请使用 create_isomorphic_effect

自动跟踪和动态依赖

如果你熟悉像 React 这样的框架,你可能会注意到一个关键的区别。React 和类似的框架通常要求你传递一个“依赖数组”,一组显式变量,用于确定何时应该重新运行效果。

因为 Leptos 来自同步响应式编程的传统,所以我们不需要这个显式的依赖列表。相反,我们根据效果中访问的信号自动跟踪依赖关系。

这有两个效果(没有双关语)。依赖关系是:

  1. 自动:你不需要维护依赖列表,也不需要担心应该包含什么或不应该包含什么。框架只是跟踪哪些信号可能导致效果重新运行,并为你处理它。
  2. 动态:依赖列表在每次效果运行时都会被清除和更新。如果你的效果包含一个条件(例如),则只会跟踪当前分支中使用的信号。这意味着效果重新运行的次数绝对是最少的。

如果这听起来很神奇,如果你想深入了解自动依赖跟踪是如何工作的,请查看此视频。(抱歉音量有点低!)

效果作为零成本抽象

虽然效果并不是最严格意义上的“零成本抽象”(它们需要一些额外的内存,在运行时真实存在,等等),但在更高的层次上,相对于你在其中执行的任何昂贵的 API 调用或其他工作而言,效果可以视为零成本抽象:就你所声明的依赖而言,它们只会以绝对必要的最少次数重新运行。

想象一下,我正在创建某种聊天软件,我希望人们能够显示他们的全名,或者只是他们的名字,并在他们的名字改变时通知服务器:

let (first, set_first) = create_signal(String::new());
let (last, set_last) = create_signal(String::new());
let (use_last, set_use_last) = create_signal(true);

// 每当任何一个源信号发生变化时,
// 这都会将名字添加到日志中
create_effect(move |_| {
    log(
        if use_last() {
            format!("{} {}", first(), last())
        } else {
            first()
        },
    )
});

如果 use_lasttrue,则每当 firstlastuse_last 发生变化时,效果都应该重新运行。但是,如果我将 use_last 切换为 false,则 last 的更改永远不会导致全名更改。实际上,last 将从依赖列表中删除,直到 use_last 再次切换。如果我在 use_last 仍然为 false 的情况下多次更改 last,这将避免我们向 API 发送多个不必要的请求。

何时使用 create_effect,何时不使用?

效果旨在将响应式系统与其外部的非响应式世界同步,而不是在不同的响应式值之间同步。换句话说:使用效果从一个信号中读取值并将其设置到另一个信号中总是次优的。

如果你需要定义一个依赖于其他信号值的信号,请使用派生信号或 create_memo。在效果内部写入信号并不是世界末日,它不会导致你的计算机着火,但派生信号或 memo 始终更好——不仅因为数据流清晰,而且因为性能更好。

let (a, set_a) = create_signal(0);

// ⚠️ 不太好
let (b, set_b) = create_signal(0);
create_effect(move |_| {
    set_b(a() * 2);
});

// ✅ 太棒了!
let b = move || a() * 2;

如果你需要将一些响应式值与外部的非响应式世界同步——例如 Web API、控制台、文件系统或 DOM——在效果中写入信号是一种很好的方法。然而,在许多情况下,你会发现你实际上是在事件监听器或其他东西内部写入信号,而不是在效果内部。在这种情况下,你应该查看 leptos-use,看看它是否已经提供了一个响应式包装原语来做到这一点!
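
例如,下面这个最小示意把“与外部世界同步”(这里只是打印到控制台)放在事件监听器里,不需要额外的效果:

let (count, set_count) = create_signal(0);

view! {
    <button on:click=move |_| {
        // 在事件监听器中写入信号,并顺手与外部世界同步
        set_count.update(|n| *n += 1);
        logging::log!("clicked {} times", count.get_untracked());
    }>
        "Click"
    </button>
}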

如果你想了解更多关于何时应该以及何时不应该使用 create_effect 的信息,请查看此视频 以获得更深入的了解!

效果和渲染

我们已经设法在不提及效果的情况下走到了这一步,因为它们内置于 Leptos DOM 渲染器中。我们已经看到,你可以创建一个信号并将其传递到 view 宏中,并且每当信号发生变化时,它都会更新相关的 DOM 节点:

let (count, set_count) = create_signal(0);

view! {
    <p>{count}</p>
}

这是有效的,因为框架本质上创建了一个包装此更新的效果。你可以想象 Leptos 将此视图转换为如下内容:

let (count, set_count) = create_signal(0);

// 创建一个 DOM 元素
let document = leptos::document();
let p = document.create_element("p").unwrap();

// 创建一个效果来响应式地更新文本
create_effect(move |prev_value| {
    // 首先,访问信号的值并将其转换为字符串
    let text = count().to_string();

    // 如果这与先前值不同,则更新节点
    if prev_value != Some(text) {
        p.set_text_content(&text);
    }

    // 返回此值,以便我们可以记住下次更新
    text
});

每次更新 count 时,此效果都会重新运行。这就是允许对 DOM 进行响应式、细粒度更新的原因。

使用 watch 进行显式、可取消的跟踪

除了 create_effect,Leptos 还提供了一个 watch 函数,它可以用于两个主要目的:

  1. 通过显式传入一组要跟踪的值来分离跟踪和响应更改。
  2. 通过调用停止函数取消跟踪。

create_resource 一样,watch 接受第一个参数,它是响应式跟踪的,第二个参数则不是。每当其 deps 参数中的响应式值发生更改时,就会运行 callbackwatch 返回一个函数,可以调用该函数来停止跟踪依赖项。

let (num, set_num) = create_signal(0);

let stop = watch(
    move || num.get(),
    move |num, prev_num, _| {
        log::debug!("Number: {}; Prev: {:?}", num, prev_num);
    },
    false,
);

set_num.set(1); // > "数字:1;上一个:Some(0)"

stop(); // 停止观察

set_num.set(2); // (什么都没有发生)

实时示例

点击打开 CodeSandbox.

CodeSandbox 源码
use leptos::html::Input;
use leptos::*;

#[derive(Copy, Clone)]
struct LogContext(RwSignal<Vec<String>>);

#[component]
fn App() -> impl IntoView {
    // 这里只是创建一个可见的日志
    // 你可以忽略它...
    let log = create_rw_signal::<Vec<String>>(vec![]);
    let logged = move || log().join("\n");

    // newtype 模式在这里不是*必需的*,但这是一个好习惯
    // 它避免了与其他可能的未来 `RwSignal<Vec<String>>` 上下文的混淆
    // 并使其更容易引用
    provide_context(LogContext(log));

    view! {
        <CreateAnEffect/>
        <pre>{logged}</pre>
    }
}

#[component]
fn CreateAnEffect() -> impl IntoView {
    let (first, set_first) = create_signal(String::new());
    let (last, set_last) = create_signal(String::new());
    let (use_last, set_use_last) = create_signal(true);

    // 每当任何一个源信号发生变化时,
    // 这都会将名字添加到日志中
    create_effect(move |_| {
        log(if use_last() {
            with!(|first, last| format!("{first} {last}"))
        } else {
            first()
        })
    });

    view! {
        <h1>
            <code>"create_effect"</code>
            " Version"
        </h1>
        <form>
            <label>
                "First Name"
                <input
                    type="text"
                    name="first"
                    prop:value=first
                    on:change=move |ev| set_first(event_target_value(&ev))
                />
            </label>
            <label>
                "Last Name"
                <input
                    type="text"
                    name="last"
                    prop:value=last
                    on:change=move |ev| set_last(event_target_value(&ev))
                />
            </label>
            <label>
                "Show Last Name"
                <input
                    type="checkbox"
                    name="use_last"
                    prop:checked=use_last
                    on:change=move |ev| set_use_last(event_target_checked(&ev))
                />
            </label>
        </form>
    }
}

#[component]
fn ManualVersion() -> impl IntoView {
    let first = create_node_ref::<Input>();
    let last = create_node_ref::<Input>();
    let use_last = create_node_ref::<Input>();

    let mut prev_name = String::new();
    let on_change = move |_| {
        log("      listener");
        let first = first.get().unwrap();
        let last = last.get().unwrap();
        let use_last = use_last.get().unwrap();
        let this_one = if use_last.checked() {
            format!("{} {}", first.value(), last.value())
        } else {
            first.value()
        };

        if this_one != prev_name {
            log(&this_one);
            prev_name = this_one;
        }
    };

    view! {
        <h1>"Manual Version"</h1>
        <form on:change=on_change>
            <label>"First Name" <input type="text" name="first" node_ref=first/></label>
            <label>"Last Name" <input type="text" name="last" node_ref=last/></label>
            <label>
                "Show Last Name" <input type="checkbox" name="use_last" checked node_ref=use_last/>
            </label>
        </form>
    }
}

#[component]
fn EffectVsDerivedSignal() -> impl IntoView {
    let (my_value, set_my_value) = create_signal(String::new());
    // 不要这样做。
    /*let (my_optional_value, set_optional_my_value) = create_signal(Option::<String>::None);

    create_effect(move |_| {
        if !my_value.get().is_empty() {
            set_optional_my_value(Some(my_value.get()));
        } else {
            set_optional_my_value(None);
        }
    });*/

    // 这样做
    let my_optional_value =
        move || (!my_value.with(String::is_empty)).then(|| Some(my_value.get()));

    view! {
        <input prop:value=my_value on:input=move |ev| set_my_value(event_target_value(&ev))/>

        <p>
            <code>"my_optional_value"</code>
            " is "
            <code>
                <Show when=move || my_optional_value().is_some() fallback=|| view! { "None" }>
                    "Some(\""
                    {my_optional_value().unwrap()}
                    "\")"
                </Show>
            </code>
        </p>
    }
}

#[component]
pub fn Show<F, W, IV>(
    /// Show 包装的组件
    children: Box<dyn Fn() -> Fragment>,
    /// 返回一个布尔值的闭包,用于确定此内容是否运行
    when: W,
    /// 在 when 语句为 false 时返回渲染内容的闭包
    fallback: F,
) -> impl IntoView
where
    W: Fn() -> bool + 'static,
    F: Fn() -> IV + 'static,
    IV: IntoView,
{
    let memoized_when = create_memo(move |_| when());

    move || match memoized_when.get() {
        true => children().into_view(),
        false => fallback().into_view(),
    }
}

fn log(msg: impl std::fmt::Display) {
    let log = use_context::<LogContext>().unwrap().0;
    log.update(|log| log.push(msg.to_string()));
}

fn main() {
    leptos::mount_to_body(App)
}

插曲:响应式和函数

我们的一位核心贡献者最近对我说:“在开始使用 Leptos 之前,我从来没有这么频繁地使用闭包。” 这是真的。闭包是任何 Leptos 应用程序的核心。它有时看起来有点傻:

// 信号保存一个值,并且可以更新
let (count, set_count) = create_signal(0);

// 派生信号是一个访问其他信号的函数
let double_count = move || count() * 2;
let count_is_odd = move || count() & 1 == 1;
let text = move || if count_is_odd() {
    "odd"
} else {
    "even"
};

// 效果会自动跟踪它所依赖的信号
// 并在它们发生变化时重新运行
create_effect(move |_| {
    logging::log!("text = {}", text());
});

view! {
    <p>{move || text().to_uppercase()}</p>
}

到处都是闭包!

但为什么?

函数和 UI 框架

函数是每个 UI 框架的核心。这很有道理。创建用户界面基本上分为两个阶段:

  1. 初始渲染
  2. 更新

在 Web 框架中,框架进行某种初始渲染。然后它将控制权交还给浏览器。当某些事件触发(如鼠标单击)或异步任务完成(如 HTTP 请求完成)时,浏览器会唤醒框架来更新某些内容。框架运行某种代码来更新你的用户界面,然后再次休眠,直到浏览器再次唤醒它。

这里的关键词是“运行某种代码”。在任意时间点“运行某种代码”的自然方式——在 Rust 或任何其他编程语言中——是调用函数。事实上,每个 UI 框架都是基于一遍又一遍地重新运行某种函数:

  1. 像 React、Yew 或 Dioxus 这样的虚拟 DOM(VDOM)框架一遍又一遍地重新运行组件或渲染函数,以生成一个虚拟 DOM 树,该树可以与之前的结果进行协调以修补 DOM
  2. 像 Angular 和 Svelte 这样的编译框架将你的组件模板分为“创建”和“更新”函数,当它们检测到组件状态发生变化时,会重新运行更新函数
  3. 在像 SolidJS、Sycamore 或 Leptos 这样的细粒度响应式框架中, 定义了重新运行的函数

这就是我们所有组件正在做的事情。

以我们典型的 <SimpleCounter/> 示例的最简单形式为例:

#[component]
pub fn SimpleCounter() -> impl IntoView {
    let (value, set_value) = create_signal(0);

    let increment = move |_| set_value.update(|value| *value += 1);

    view! {
        <button on:click=increment>
            {value}
        </button>
    }
}

SimpleCounter 函数本身只运行一次。value 信号只创建一次。框架将 increment 函数作为事件监听器传递给浏览器。当你点击按钮时,浏览器会调用 increment,它通过 set_value 更新 value。这会更新在我们的视图中由 {value} 表示的单个文本节点。

闭包是响应式的关键。它们为框架提供了响应更改重新运行应用程序中最小可能单元的能力。

所以请记住两件事:

  1. 你的组件函数是一个设置函数,而不是一个渲染函数:它只运行一次。
  2. 为了使你的视图模板中的值具有响应性,它们必须是函数:要么是信号(实现 Fn 特征),要么是闭包。

测试你的组件

测试用户界面可能相对棘手,但确实很重要。本文将讨论测试 Leptos 应用程序的一些原则和方法。

1. 使用普通的 Rust 测试来测试业务逻辑

在许多情况下,将逻辑从你的组件中提取出来并单独测试是有意义的。对于一些简单的组件,没有特别的逻辑需要测试,但对于许多组件来说,值得使用一个可测试的包装类型,并在普通的 Rust impl 块中实现逻辑。

例如,与其像这样直接在组件中嵌入逻辑:

#[component]
pub fn TodoApp() -> impl IntoView {
    let (todos, set_todos) = create_signal(vec![Todo { /* ... */ }]);
    // ⚠️ 这很难测试,因为它嵌入在组件中
    let num_remaining = move || todos.with(|todos| {
        todos.iter().filter(|todo| !todo.completed).count()
    });
}

你可以将该逻辑提取到一个单独的数据结构中并对其进行测试:

pub struct Todos(Vec<Todo>);

impl Todos {
    pub fn num_remaining(&self) -> usize {
        self.0.iter().filter(|todo| !todo.completed).count()
    }
}

#[cfg(test)]
mod tests {
    #[test]
    fn test_remaining() {
        // ...
    }
}

#[component]
pub fn TodoApp() -> impl IntoView {
    let (todos, set_todos) = create_signal(Todos(vec![Todo { /* ... */ }]));
    // ✅ 这有一个与之关联的测试
    let num_remaining = move || todos.with(Todos::num_remaining);
}

一般来说,你的组件本身包含的逻辑越少,你的代码就越容易理解,也越容易测试。

2. 使用端到端(e2e)测试来测试组件

我们的 examples 目录中有几个示例,其中包含使用不同测试工具进行的广泛的端到端测试。

了解如何使用这些示例的最简单方法是查看测试示例本身:

使用 counterwasm-bindgen-test

这是一个相当简单的手动测试设置,它使用 wasm-pack test 命令。

示例测试

#[wasm_bindgen_test]
fn clear() {
    let document = leptos::document();
    let test_wrapper = document.create_element("section").unwrap();
    let _ = document.body().unwrap().append_child(&test_wrapper);

    mount_to(
        test_wrapper.clone().unchecked_into(),
        || view! { <SimpleCounter initial_value=10 step=1/> },
    );

    let div = test_wrapper.query_selector("div").unwrap().unwrap();
    let clear = test_wrapper
        .query_selector("button")
        .unwrap()
        .unwrap()
        .unchecked_into::<web_sys::HtmlElement>();

    clear.click();

    assert_eq!(
        div.outer_html(),
        // 这里我们生成一个微型响应式系统来渲染测试用例
        run_scope(create_runtime(), || {
            // 就好像我们用值 0 创建它一样,对吧?
            let (value, set_value) = create_signal(0);

            // 我们可以删除事件监听器,因为它们不会渲染到 HTML 中
            view! {
                <div>
                    <button>"Clear"</button>
                    <button>"-1"</button>
                    <span>"Value: " {value} "!"</span>
                    <button>"+1"</button>
                </div>
            }
            // 返回的视图是 HtmlElement<Div>,它是 DOM 元素的智能指针。所以我们仍然可以调用 .outer_html()
            .outer_html()
        })
    );
}

wasm-bindgen-testcounters

这个更发达的测试套件使用了一个 fixtures 系统来重构 counter 测试的手动 DOM 操作,并轻松地测试各种情况。

示例测试

use super::*;
use crate::counters_page as ui;
use pretty_assertions::assert_eq;

#[wasm_bindgen_test]
fn should_increase_the_total_count() {
    // 给定
    ui::view_counters();
    ui::add_counter();

    // 当
    ui::increment_counter(1);
    ui::increment_counter(1);
    ui::increment_counter(1);

    // 那么
    assert_eq!(ui::total(), 3);
}

Playwright 和 counters

这些测试使用常见的 JavaScript 测试工具 Playwright 在同一个例子上运行端到端测试,使用许多以前做过前端开发的人熟悉的库和测试方法。

示例测试

import { test, expect } from "@playwright/test";
import { CountersPage } from "./fixtures/counters_page";

test.describe("Increment Count", () => {
  test("should increase the total count", async ({ page }) => {
    const ui = new CountersPage(page);
    await ui.goto();
    await ui.addCounter();

    await ui.incrementCount();
    await ui.incrementCount();
    await ui.incrementCount();

    await expect(ui.total).toHaveText("3");
  });
});

使用 todo_app_sqlite 的 Gherkin/Cucumber 测试

你可以将任何你喜欢的测试工具集成到这个流程中。这个例子使用 Cucumber,一个基于自然语言的测试框架。

@add_todo
Feature: Add Todo

    Background:
        Given I see the app

    @add_todo-see
    Scenario: Should see the todo
        Given I set the todo as Buy Bread
        When I click the Add button
        Then I see the todo named Buy Bread

    # @allow.skipped
    @add_todo-style
    Scenario: Should see the pending todo
        When I add a todo as Buy Oranges
        Then I see the pending todo

这些操作的定义在 Rust 代码中。

use crate::fixtures::{action, world::AppWorld};
use anyhow::{Ok, Result};
use cucumber::{given, when};

#[given("I see the app")]
#[when("I open the app")]
async fn i_open_the_app(world: &mut AppWorld) -> Result<()> {
    let client = &world.client;
    action::goto_path(client, "").await?;

    Ok(())
}

#[given(regex = "^I add a todo as (.*)$")]
#[when(regex = "^I add a todo as (.*)$")]
async fn i_add_a_todo_titled(world: &mut AppWorld, text: String) -> Result<()> {
    let client = &world.client;
    action::add_todo(client, text.as_str()).await?;

    Ok(())
}

// 等等。

了解更多

请随时查看 Leptos 仓库中的 CI 设置,以了解更多关于如何在你的应用程序中使用这些工具的信息。所有这些测试方法都会定期针对实际的 Leptos 示例应用程序运行。

使用 async

到目前为止,我们只处理过同步用户界面:你提供一些输入,应用程序立即处理它并更新界面。这很好,但只是 Web 应用程序功能的一小部分。特别是,大多数 Web 应用程序必须处理某种异步数据加载,通常是从 API 加载某些内容。

众所周知,异步数据很难与代码的同步部分集成。Leptos 提供了一个跨平台的 spawn_local 函数,可以轻松运行 Future,但除此之外还有更多内容。

在本章中,我们将看到 Leptos 如何帮助你简化此过程。

使用 resource 加载数据

Resource 是一种响应式数据结构,它反映了异步任务的当前状态,允许你将异步 Future 集成到同步响应式系统中。你无需使用 .await 等待其数据加载,而是将 Future 转换为一个信号,如果它已解析则返回 Some(T),如果它仍在等待中则返回 None

你可以使用 create_resource 函数来做到这一点。它接受两个参数:

  1. 一个源信号,每当它发生变化时,都会生成一个新的 Future
  2. 一个获取器函数,它从该信号中获取数据并返回一个 Future

下面是一个例子

// 我们的源信号:一些同步的、本地状态
let (count, set_count) = create_signal(0);

// 我们的 resource
let async_data = create_resource(
    count,
    // 每次 `count` 发生变化时,这都会运行
    |value| async move {
        logging::log!("从 API 加载数据");
        load_data(value).await
    },
);

要创建一个只运行一次的 resource,你可以传递一个非响应式的、空的源信号:

let once = create_resource(|| (), |_| async move { load_data().await });

要访问该值,你可以使用 .get().with(|data| /* */)。这些方法的工作方式与信号上的 .get().with() 一样——get 克隆该值并返回它,with 对其应用一个闭包——但是对于任何 Resource<_, T>,它们总是返回 Option<T>,而不是 T:因为你的 resource 始终有可能仍在加载。

因此,你可以在视图中显示 resource 的当前状态:

let once = create_resource(|| (), |_| async move { load_data().await });
view! {
    <h1>"My Data"</h1>
    {move || match once.get() {
        None => view! { <p>"Loading..."</p> }.into_view(),
        Some(data) => view! { <ShowData data/> }.into_view()
    }}
}

Resource 还提供了一个 refetch() 方法,允许你手动重新加载数据(例如,响应按钮点击),以及一个 loading() 方法,该方法返回一个 ReadSignal<bool>,指示 resource 当前是否正在加载。
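
例如(一个最小示意,沿用上面的 once 资源):

let is_loading = once.loading();

view! {
    // 手动重新加载数据
    <button on:click=move |_| once.refetch()>"Reload"</button>
    <p>{move || if is_loading() { "Loading..." } else { "Idle." }}</p>
}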

实时示例

点击打开 CodeSandbox.

CodeSandbox 源码
use gloo_timers::future::TimeoutFuture;
use leptos::*;

// 这里我们定义一个异步函数
// 这可以是任何东西:网络请求、数据库读取等等。
// 这里,我们只是将一个数字乘以 10
async fn load_data(value: i32) -> i32 {
    // 模拟一秒钟的延迟
    TimeoutFuture::new(1_000).await;
    value * 10
}

#[component]
fn App() -> impl IntoView {
    // 这个计数是我们同步的、本地状态
    let (count, set_count) = create_signal(0);

    // create_resource 接受两个参数
    let async_data = create_resource(
        // 第一个是“源信号”
        count,
        // 第二个是加载器
        // 它以源信号的值作为参数
        // 并进行一些异步工作
        |value| async move { load_data(value).await },
    );
    // 每当源信号发生变化时,加载器都会重新加载

    // 你也可以创建只加载一次的 resource
    // 只需从源信号返回单元类型 () 即可
    // 这不依赖于任何东西:我们只加载一次
    let stable = create_resource(|| (), |_| async move { load_data(1).await });

    // 我们可以使用 .get() 访问 resource 值
    // 这将在 Future 解析之前以响应式方式返回 None
    // 并在解析后更新为 Some(T)
    let async_result = move || {
        async_data
            .get()
            .map(|value| format!("Server returned {value:?}"))
            // 此加载状态仅在首次加载之前显示
            .unwrap_or_else(|| "Loading...".into())
    };

    // resource 的 loading() 方法给了我们一个
    // 信号,指示它当前是否正在加载
    let loading = async_data.loading();
    let is_loading = move || if loading() { "Loading..." } else { "Idle." };

    view! {
        <button
            on:click=move |_| {
                set_count.update(|n| *n += 1);
            }
        >
            "Click me"
        </button>
        <p>
            <code>"stable"</code>": " {move || stable.get()}
        </p>
        <p>
            <code>"count"</code>": " {count}
        </p>
        <p>
            <code>"async_value"</code>": "
            {async_result}
            <br/>
            {is_loading}
        </p>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

<Suspense/>

在上一章中,我们展示了如何创建一个简单的加载屏幕,以便在资源加载时显示一些回退内容。

let (count, set_count) = create_signal(0);
let once = create_resource(count, |count| async move { load_a(count).await });

view! {
    <h1>"My Data"</h1>
    {move || match once.get() {
        None => view! { <p>"Loading..."</p> }.into_view(),
        Some(data) => view! { <ShowData data/> }.into_view()
    }}
}

但是,如果我们有两个 resources ,并且想要等待它们都加载完成怎么办?

let (count, set_count) = create_signal(0);
let (count2, set_count2) = create_signal(0);
let a = create_resource(count, |count| async move { load_a(count).await });
let b = create_resource(count2, |count| async move { load_b(count).await });

view! {
    <h1>"My Data"</h1>
    {move || match (a.get(), b.get()) {
        (Some(a), Some(b)) => view! {
            <ShowA a/>
            <ShowB b/>
        }.into_view(),
        _ => view! { <p>"Loading..."</p> }.into_view()
    }}
}

这并不算太糟糕,但有点烦人。如果我们可以反转控制流呢?

<Suspense/> 组件可以让我们做到这一点。你给它一个 fallback prop 和子级,其中一个或多个通常涉及从 resource 中读取数据。从 <Suspense/>“下”(即它的一个子级中)读取 resource 会将该 resource 注册到 <Suspense/>。如果它仍在等待资源加载,它会显示 fallback。当它们都加载完成后,它会显示子级。

let (count, set_count) = create_signal(0);
let (count2, set_count2) = create_signal(0);
let a = create_resource(count, |count| async move { load_a(count).await });
let b = create_resource(count2, |count| async move { load_b(count).await });

view! {
    <h1>"My Data"</h1>
    <Suspense
        fallback=move || view! { <p>"Loading..."</p> }
    >
        <h2>"My Data"</h2>
        <h3>"A"</h3>
        {move || {
            a.get()
                .map(|a| view! { <ShowA a/> })
        }}
        <h3>"B"</h3>
        {move || {
            b.get()
                .map(|b| view! { <ShowB b/> })
        }}
    </Suspense>
}

每当其中一个 resource 重新加载时,"Loading..." 回退内容将再次显示。

这种控制流的反转使得添加或删除单个 resource 变得更容易,因为你不需要自己处理匹配。它还解锁了服务器端渲染期间的一些巨大性能改进,我们将在后面的章节中讨论这些内容。

<Await/>

如果你只是想在渲染之前等待某个 Future 解析完成,你可能会发现 <Await/> 组件有助于减少样板代码。<Await/> 本质上是将一个带有源参数 || () 的 resource 与一个没有回退内容的 <Suspense/> 组合在一起。

换句话说:

  1. 它只轮询一次 Future,并且不响应任何响应式更改。
  2. Future 解析完成之前,它不会渲染任何内容。
  3. Future 解析完成后,它将其数据绑定到你选择的任何变量名,然后使用该变量在作用域内渲染其子级。

async fn fetch_monkeys(monkey: i32) -> i32 {
    // 也许这不需要是异步的
    monkey * 2
}
view! {
    <Await
        // `future` 提供要解析的 `Future`
        future=|| fetch_monkeys(3)
        // 数据绑定到你提供的任何变量名
        let:data
    >
        // 你通过引用接收数据,并可以在此处在你的视图中使用它
        <p>{*data} " little monkeys, jumping on the bed."</p>
    </Await>
}

实时示例

点击打开 CodeSandbox.

CodeSandbox 源码
use gloo_timers::future::TimeoutFuture;
use leptos::*;

async fn important_api_call(name: String) -> String {
    TimeoutFuture::new(1_000).await;
    name.to_ascii_uppercase()
}

#[component]
fn App() -> impl IntoView {
    let (name, set_name) = create_signal("Bill".to_string());

    // 每次 `name` 更改时,这都会重新加载
    let async_data = create_resource(
        name,
        |name| async move { important_api_call(name).await },
    );

    view! {
        <input
            on:input=move |ev| {
                set_name(event_target_value(&ev));
            }
            prop:value=name
        />
        <p><code>"name:"</code> {name}</p>
        <Suspense
            // 每当在 suspense“下”读取的 resource
            // 正在加载时,都会显示回退内容
            fallback=move || view! { <p>"Loading..."</p> }
        >
            // 子级将在初始时渲染一次,
            // 然后每当任何 resource 解析完成后都会渲染一次
            <p>
                "Your shouting name is "
                {move || async_data.get()}
            </p>
        </Suspense>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

<Transition/>

你可能会注意到,在 <Suspense/> 示例中,如果你不断重新加载数据,它会一直闪回 "Loading..."。有时这没问题;而在其他情况下,就可以使用 <Transition/>。

<Transition/> 的行为与 <Suspense/> 完全相同,但它不是每次都回退,而只是在第一次显示回退内容。在所有后续加载中,它会继续显示旧数据,直到新数据准备就绪。这对于防止闪烁效果以及允许用户继续与你的应用程序交互非常方便。

此示例显示了如何使用 <Transition/> 创建一个简单的选项卡式联系人列表。当你选择一个新选项卡时,它会继续显示当前联系人,直到新数据加载完成。这比不断回退到加载消息的用户体验要好得多。

实时示例

点击打开 CodeSandbox.

CodeSandbox 源码
use gloo_timers::future::TimeoutFuture;
use leptos::*;

async fn important_api_call(id: usize) -> String {
    TimeoutFuture::new(1_000).await;
    match id {
        0 => "Alice",
        1 => "Bob",
        2 => "Carol",
        _ => "User not found",
    }
    .to_string()
}

#[component]
fn App() -> impl IntoView {
    let (tab, set_tab) = create_signal(0);

    // 每次 `tab` 更改时,这都会重新加载
    let user_data = create_resource(tab, |tab| async move { important_api_call(tab).await });

    view! {
        <div class="buttons">
            <button
                on:click=move |_| set_tab(0)
                class:selected=move || tab() == 0
            >
                "Tab A"
            </button>
            <button
                on:click=move |_| set_tab(1)
                class:selected=move || tab() == 1
            >
                "Tab B"
            </button>
            <button
                on:click=move |_| set_tab(2)
                class:selected=move || tab() == 2
            >
                "Tab C"
            </button>
            {move || if user_data.loading().get() {
                "Loading..."
            } else {
                ""
            }}
        </div>
        <Transition
            // 回退内容将初始显示
            // 在后续重新加载中,当前子级将
            // 继续显示
            fallback=move || view! { <p>"Loading..."</p> }
        >
            <p>
                {move || user_data.get()}
            </p>
        </Transition>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

使用 Action 修改数据

我们已经讨论了如何使用 resource 加载 async 数据。Resource 会立即加载数据,并与 <Suspense/><Transition/> 组件紧密合作,以显示你的应用程序中是否正在加载数据。但是,如果你只是想调用一些任意的 async 函数并跟踪它的执行情况,该怎么办?

好吧,你总是可以使用 spawn_local。这允许你通过将 Future 交给浏览器(或者在服务器上,是 Tokio 或任何你正在使用的其他运行时)来在同步环境中生成一个 async 任务。但是你如何知道它是否仍在等待中?好吧,你可以设置一个信号来显示它是否正在加载,另一个信号来显示结果...
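
一个最小示意(沿用下文将要定义的 add_todo_request,仅演示 spawn_local 的调用位置):

view! {
    <button on:click=move |_| {
        // 在同步的事件监听器里启动一个 async 任务
        spawn_local(async {
            let id = add_todo_request("Buy milk").await;
            logging::log!("新待办事项的 id:{id}");
        });
    }>
        "Add"
    </button>
}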

所有这些都是真的。或者你可以使用最终的 async 原语:create_action

Action 和 resource 看起来很相似,但它们代表着根本不同的东西。如果你试图通过运行一个 async 函数来加载数据,无论是运行一次还是在其他值发生变化时运行,你可能想使用 create_resource。如果你试图偶尔运行一个 async 函数来响应用户点击按钮之类的事情,你可能想使用 create_action

假设我们有一些想要运行的 async 函数。

async fn add_todo_request(new_title: &str) -> Uuid {
    /* 在服务器上做一些添加新的待办事项的事情 */
}

create_action 接受一个 async 函数,该函数接受对单个参数的引用,你可以将其视为其“输入类型”。

输入始终是单个类型。如果要传入多个参数,可以使用结构体或元组。

// 如果只有一个参数,就使用它
let action1 = create_action(|input: &String| {
   let input = input.clone();
   async move { todo!() }
});

// 如果没有参数,则使用单元类型 `()`
let action2 = create_action(|input: &()| async { todo!() });

// 如果有多个参数,则使用元组
let action3 = create_action(
  |input: &(usize, String)| async { todo!() }
);

因为 action 函数接受一个引用,但 Future 需要具有 'static 生命周期,所以你通常需要克隆该值才能将其传递给 Future。这确实很尴尬,但它解锁了一些强大的功能,如乐观 UI。我们将在后面的章节中看到更多相关内容。

所以在这种情况下,我们创建 action 所需要做的就是

let add_todo_action = create_action(|input: &String| {
    let input = input.to_owned();
    async move { add_todo_request(&input).await }
});

我们将使用 .dispatch() 调用它,而不是直接调用 add_todo_action,如下所示

add_todo_action.dispatch("Some value".to_string());

你可以从事件监听器、超时或任何地方执行此操作;因为 .dispatch() 不是一个 async 函数,所以可以从同步上下文中调用它。

Action 提供对一些信号的访问,这些信号在你要调用的异步 action 和同步响应式系统之间进行同步:

let submitted = add_todo_action.input(); // RwSignal<Option<String>>
let pending = add_todo_action.pending(); // ReadSignal<bool>
let todo_id = add_todo_action.value(); // RwSignal<Option<Uuid>>

这使得跟踪请求的当前状态、显示加载指示器或基于提交将成功的假设进行“乐观 UI”变得很容易。

let input_ref = create_node_ref::<Input>();

view! {
    <form
        on:submit=move |ev| {
            ev.prevent_default(); // 不要重新加载页面...
            let input = input_ref.get().expect("input to exist");
            add_todo_action.dispatch(input.value());
        }
    >
        <label>
            "What do you need to do?"
            <input type="text"
                node_ref=input_ref
            />
        </label>
        <button type="submit">"Add Todo"</button>
    </form>
    // 使用我们的加载状态
    <p>{move || pending().then(|| "Loading...")}</p>
}

现在,有可能这一切看起来有点过于复杂,或者可能限制太多。我想在这里将 action 与 resource 一起包含进来,作为拼图中缺失的一块。在一个真实的 Leptos 应用程序中,你实际上最常将 action 与服务器函数 create_server_action<ActionForm/> 组件一起使用,以创建真正强大的渐进增强表单。所以如果这个原语对你来说似乎毫无用处... 不要担心!也许以后会有意义。(或者现在就查看我们的 todo_app_sqlite 示例。)

实时示例

点击打开 CodeSandbox.

CodeSandbox 源码
use gloo_timers::future::TimeoutFuture;
use leptos::{html::Input, *};
use uuid::Uuid;

// 这里我们定义一个异步函数
// 这可以是任何东西:网络请求、数据库读取等等。
// 将其视为一个修改:你运行的某个命令式异步操作,
// 而 resource 将是你加载的一些异步数据
async fn add_todo(text: &str) -> Uuid {
    _ = text;
    // 模拟一秒钟的延迟
    TimeoutFuture::new(1_000).await;
    // 假装这是一个帖子 ID 或其他东西
    Uuid::new_v4()
}

#[component]
fn App() -> impl IntoView {
    // action 接受一个带有一个参数的异步函数
    // 它可以是一个简单类型、一个结构体或 ()
    let add_todo = create_action(|input: &String| {
        // 输入是一个引用,但我们需要 Future 拥有它
        // 这很重要:我们需要克隆并移动到 Future 中
        // 这样它就有一个 'static 生命周期
        let input = input.to_owned();
        async move { add_todo(&input).await }
    });

    // action 提供了一堆同步的、响应式变量
    // 这些变量告诉我们关于 action 状态的不同信息
    let submitted = add_todo.input();
    let pending = add_todo.pending();
    let todo_id = add_todo.value();

    let input_ref = create_node_ref::<Input>();

    view! {
        <form
            on:submit=move |ev| {
                ev.prevent_default(); // 不要重新加载页面...
                let input = input_ref.get().expect("input to exist");
                add_todo.dispatch(input.value());
            }
        >
            <label>
                "What do you need to do?"
                <input type="text"
                    node_ref=input_ref
                />
            </label>
            <button type="submit">"Add Todo"</button>
        </form>
        <p>{move || pending().then(|| "Loading...")}</p>
        <p>
            "Submitted: "
            <code>{move || format!("{:#?}", submitted())}</code>
        </p>
        <p>
            "Pending: "
            <code>{move || format!("{:#?}", pending())}</code>
        </p>
        <p>
            "Todo ID: "
            <code>{move || format!("{:#?}", todo_id())}</code>
        </p>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

投影子级

在构建组件时,你可能偶尔会发现自己想要通过多层组件“投影”子级。

问题

考虑以下内容:

pub fn LoggedIn<F, IV>(fallback: F, children: ChildrenFn) -> impl IntoView
where
    F: Fn() -> IV + 'static,
    IV: IntoView,
{
    view! {
        <Suspense
            fallback=|| ()
        >
            <Show
				// 通过从资源中读取来检查用户是否已验证
                when=move || todo!()
                fallback=fallback
            >
				{children()}
			</Show>
        </Suspense>
    }
}

这很简单:当用户登录时,我们想显示 children。如果用户未登录,我们想显示 fallback。在我们等待结果的时候,我们只渲染 (),也就是什么也不渲染。

换句话说,我们想将 <LoggedIn/> 的子级通过 <Suspense/> 组件传递,以成为 <Show/> 的子级。这就是我所说的“投影”。

这无法编译。

error[E0507]: cannot move out of `fallback`, a captured variable in an `Fn` closure
error[E0507]: cannot move out of `children`, a captured variable in an `Fn` closure

这里的问题是 <Suspense/><Show/> 都需要能够多次构造它们的 children。第一次构造 <Suspense/> 的子级时,它会获取 fallbackchildren 的所有权,将它们移动到 <Show/> 的调用中,但随后它们将不可用于未来的 <Suspense/> 子级构造。

细节

可以随意跳到解决方案部分。

如果你想真正理解这里的问题,查看扩展后的 view 宏可能会有所帮助。这是一个清理后的版本:

Suspense(
    ::leptos::component_props_builder(&Suspense)
        .fallback(|| ())
        .children({
            // fallback 和 children 被移动到这个闭包中
            Box::new(move || {
                {
                    // fallback 和 children 在这里被捕获
                    leptos::Fragment::lazy(|| {
                        vec![
                            (Show(
                                ::leptos::component_props_builder(&Show)
                                    .when(|| true)
									// 但是 fallback 在这里被移动到 Show 中
                                    .fallback(fallback)
									// 并且 children 在这里被移动到 Show 中
                                    .children(children)
                                    .build(),
                            )
                            .into_view()),
                        ]
                    })
                }
            })
        })
        .build(),
)

所有组件都拥有自己的 props;所以在这种情况下,无法调用 <Show/>,因为它只捕获了对 fallbackchildren 的引用。

解决方案

然而,<Suspense/><Show/> 都接受 ChildrenFn,即它们的 children 应该实现 Fn 类型,这样它们就可以被多次调用,并且只使用一个不可变的引用。这意味着我们不需要拥有 childrenfallback;我们只需要能够传递它们的 'static 引用。

我们可以使用 store_value 原语来解决这个问题。这实质上是将一个值存储在响应式系统中,将所有权交给框架,换取一个引用,该引用与信号一样是 Copy'static 的,我们可以通过某些方法访问或修改该引用。

在这种情况下,它真的很简单:

pub fn LoggedIn<F, IV>(fallback: F, children: ChildrenFn) -> impl IntoView
where
    F: Fn() -> IV + 'static,
    IV: IntoView,
{
    let fallback = store_value(fallback);
    let children = store_value(children);
    view! {
        <Suspense
            fallback=|| ()
        >
            <Show
                when=|| todo!()
                fallback=move || fallback.with_value(|fallback| fallback())
            >
                {children.with_value(|children| children())}
            </Show>
        </Suspense>
    }
}

在顶层,我们将 fallbackchildren 都存储在 LoggedIn 拥有的响应式作用域中。现在我们可以简单地将这些引用向下传递到 <Show/> 组件中,并在那里调用它们。

最后一点说明

请注意,这是可行的,因为 <Show/><Suspense/> 只需要对其子级(.with_value 可以提供)的不可变引用,而不是所有权。

在其他情况下,你可能需要通过一个接受 ChildrenFn 的函数来投影拥有的 props,因此该函数需要多次调用。在这种情况下,你可能会发现 view 宏中的 clone: 帮助器很有用。

考虑这个例子

#[component]
pub fn App() -> impl IntoView {
    let name = "Alice".to_string();
    view! {
        <Outer>
            <Inner>
                <Inmost name=name.clone()/>
            </Inner>
        </Outer>
    }
}

#[component]
pub fn Outer(children: ChildrenFn) -> impl IntoView {
    children()
}

#[component]
pub fn Inner(children: ChildrenFn) -> impl IntoView {
    children()
}

#[component]
pub fn Inmost(name: String) -> impl IntoView {
    view! {
        <p>{name}</p>
    }
}

即使使用 name=name.clone(),也会出现以下错误

cannot move out of `name`, a captured variable in an `Fn` closure

它被捕获进需要多次运行的多层子级中,而且没有明显的办法在传入子级时克隆它。

在这种情况下,clone: 语法就派上用场了。调用 clone:name 会在将 name 移动到 <Inner/> 的子级之前克隆 name,这解决了我们的所有权问题。

view! {
	<Outer>
		<Inner clone:name>
			<Inmost name=name.clone()/>
		</Inner>
	</Outer>
}

由于 view 宏的不透明性,这些问题可能有点难以理解或调试。但总的来说,它们总是可以解决的。

全局状态管理

到目前为止,我们只处理过组件中的局部状态,并且我们已经了解了如何在父子组件之间协调状态。有时,人们会寻找更通用的全局状态管理解决方案,该解决方案可以在整个应用程序中使用。

一般来说,你不需要本章。典型的模式是将你的应用程序组合成组件,每个组件管理自己的局部状态,而不是将所有状态存储在全局结构中。但是,在某些情况下(例如主题设置、保存用户设置或在 UI 不同部分的组件之间共享数据),你可能希望使用某种全局状态管理。

三种最佳的全局状态方法是

  1. 使用路由器通过 URL 驱动全局状态
  2. 通过上下文传递信号
  3. 创建一个全局状态结构体,并使用 create_slice 创建指向它的镜头

选项 #1:URL 作为全局状态

在很多方面,URL 实际上是存储全局状态的最佳方式。它可以从你的树中的任何组件、任何位置访问。有一些原生 HTML 元素(如 <form><a>)专门用于更新 URL。而且它可以在页面重新加载和不同设备之间持久存在;你可以与朋友共享 URL,或者将其从手机发送到笔记本电脑,其中存储的任何状态都将被复制。

本教程的接下来几节将介绍路由器,我们将深入探讨这些主题。

但现在,我们将只关注选项 #2 和 #3。

选项 #2:通过上下文传递信号

父子组件通信部分,我们看到你可以使用 provide_context 将信号从父组件传递给子组件,并使用 use_context 在子组件中读取它。但 provide_context 可以在任何距离上工作。如果你想创建一个全局信号来保存某个状态片段,你可以在你提供它的组件的后代中的任何地方提供它并通过上下文访问它。

通过上下文提供的信号只会在读取它的地方引起响应式更新,而不会在两者之间的任何组件中引起更新,因此它即使在远处也能保持细粒度响应式更新的能力。

我们首先在应用程序的根部创建一个信号,并使用 provide_context 将其提供给它的所有子级和后代。

#[component]
fn App() -> impl IntoView {
    // 这里我们在根组件中创建一个信号,应用程序的任何地方都可以使用它。
    let (count, set_count) = create_signal(0);
    // 我们将把设置器传递给特定的组件,
    // 但通过上下文将计数本身提供给整个应用程序
    provide_context(count);

    view! {
        // SetterButton 允许修改计数
        <SetterButton set_count/>
        // 这些消费者只能读取它
        // 但是如果我们愿意,我们可以通过传递 `set_count` 来赋予他们写权限
        <FancyMath/>
        <ListItems/>
    }
}

<SetterButton/> 是我们已经写过好几次的计数器类型。 (如果你不明白我的意思,请参阅下面的沙盒。)

<FancyMath/><ListItems/> 都使用我们通过 use_context 提供的信号并对其进行处理。

/// 使用全局计数进行一些“花哨”数学运算的组件
#[component]
fn FancyMath() -> impl IntoView {
    // 这里我们使用 `use_context` 使用全局计数信号
    let count = use_context::<ReadSignal<u32>>()
        // 我们知道我们刚刚在父组件中提供了它
        .expect("there to be a `count` signal provided");
    let is_even = move || count() & 1 == 0;

    view! {
        <div class="consumer blue">
            "The number "
            <strong>{count}</strong>
            {move || if is_even() {
                " is"
            } else {
                " is not"
            }}
            " even."
        </div>
    }
}

请注意,同样的模式可以应用于更复杂的状态。如果你有多个想要独立更新的字段,你可以通过提供一些信号结构体来做到这一点:

#[derive(Copy, Clone, Debug)]
struct GlobalState {
    count: RwSignal<i32>,
    name: RwSignal<String>
}

impl GlobalState {
    pub fn new() -> Self {
        Self {
            count: create_rw_signal(0),
            name: create_rw_signal("Bob".to_string())
        }
    }
}

#[component]
fn App() -> impl IntoView {
    provide_context(GlobalState::new());

    // 等等。
}
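
任意后代组件随后都可以通过 expect_context 取出这个结构体,并且只订阅其中的某个字段(下面的 NameDisplay 是一个假设的示意,沿用上面的 GlobalState):

#[component]
fn NameDisplay() -> impl IntoView {
    // GlobalState 派生了 Copy,可以直接从上下文中取出来使用
    let state = expect_context::<GlobalState>();

    // 只读取 `name` 字段;`count` 的更新不会触发这里的更新
    view! { <p>"Name: " {move || state.name.get()}</p> }
}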

选项 #3:创建全局状态结构体和切片

你可能会觉得像这样将结构体的每个字段都包装在一个单独的信号中很麻烦。在某些情况下,创建一个具有非响应式字段的普通结构体,然后将其包装在一个信号中会很有用。

#[derive(Copy, Clone, Debug, Default)]
struct GlobalState {
    count: i32,
    name: String
}

#[component]
fn App() -> impl IntoView {
    provide_context(create_rw_signal(GlobalState::default()));

    // 等等。
}

但有一个问题:因为我们的整个状态都包装在一个信号中,所以更新一个字段的值会导致 UI 中仅依赖于另一个字段的部分发生响应式更新。

let state = expect_context::<RwSignal<GlobalState>>();
view! {
    <button on:click=move |_| state.update(|state| state.count += 1)>"+1"</button>
    <p>{move || state.with(|state| state.name.clone())}</p>
}

在这个例子中,点击按钮会导致 <p> 内部的文本被更新,再次克隆 state.name!因为信号是响应式的原子单元,所以更新信号的任何字段都会触发对其依赖的所有内容的更新。

有一种更好的方法。你可以使用 create_memocreate_slice(它使用 create_memo 但也提供了一个设置器)来获取细粒度的响应式切片。“记忆”一个值意味着创建一个新的响应式值,该值只有在它发生变化时才会更新。“记忆一个切片”意味着创建一个新的响应式值,该值只有在状态结构体的某个字段更新时才会更新。

在这里,我们不是直接从状态信号中读取,而是通过 create_slice 创建该状态的“切片”,并进行细粒度更新。每个切片信号仅在它访问的较大结构体的特定部分更新时才会更新。这意味着你可以创建一个单一的根信号,然后在不同的组件中获取它的独立的、细粒度的切片,每个切片都可以更新而不会通知其他切片更改。

/// 更新全局状态中计数的组件。
#[component]
fn GlobalStateCounter() -> impl IntoView {
    let state = expect_context::<RwSignal<GlobalState>>();

    // `create_slice` 让我们可以创建数据的一个“镜头”
    let (count, set_count) = create_slice(
        // 我们从 `state` 中获取一个切片
        state,
        // 我们的 getter 返回数据的“切片”
        |state| state.count,
        // 我们的 setter 描述了如何根据新值修改该切片
        |state, n| state.count = n,
    );

    view! {
        <div class="consumer blue">
            <button
                on:click=move |_| {
                    set_count(count() + 1);
                }
            >
                "Increment Global Count"
            </button>
            <br/>
            <span>"Count is: " {count}</span>
        </div>
    }
}

点击此按钮只会更新 state.count,因此如果我们在其他地方创建另一个仅获取 state.name 的切片,则点击该按钮不会导致该切片更新。这允许你结合自上而下的数据流和细粒度响应式更新的优点。

注意:这种方法有一些明显的缺点。信号和 memo 都需要拥有它们的值,因此 memo 需要在每次更改时克隆字段的值。在像 Leptos 这样的框架中管理状态的最自然方法始终是提供尽可能局部作用域和细粒度的信号,而不是将所有东西都提升到全局状态。但是,当你确实需要某种全局状态时,create_slice 可能是一个有用的工具。

实时示例

点击打开 CodeSandbox.

CodeSandbox 源码
use leptos::*;

// 到目前为止,我们只处理过组件中的局部状态
// 我们只了解了如何在父子组件之间进行通信
// 但是还有一些更通用的方法来管理全局状态
//
// 全局状态的三种最佳方法是
// 1. 使用路由器通过 URL 驱动全局状态
// 2. 通过上下文传递信号
// 3. 创建一个全局状态结构体,并使用 `create_slice` 创建指向它的镜头
//
// 选项 #1:URL 作为全局状态
// 本教程的接下来几节将介绍路由器。
// 所以现在,我们将只关注选项 #2 和 #3。

// 选项 #2:通过上下文传递信号
//
// 在像 React 这样的虚拟 DOM 库中,使用 Context API 来管理全局
// 状态是一个坏主意:因为整个应用程序都存在于一棵树中,所以改变
// 树中高层提供的某些值会导致整个应用程序重新渲染。
//
// 在像 Leptos 这样的细粒度响应式库中,情况并非如此。
// 你可以在应用程序的根部创建一个信号,并使用 provide_context() 将其传递给
// 其他组件。更改它只会导致在实际使用它的特定位置重新渲染,
// 而不会导致整个应用程序重新渲染。
#[component]
fn Option2() -> impl IntoView {
    // 这里我们在根组件中创建一个信号,应用程序的任何地方都可以使用它。
    let (count, set_count) = create_signal(0);
    // 我们将把设置器传递给特定的组件,
    // 但通过上下文将计数本身提供给整个应用程序
    provide_context(count);

    view! {
        <h1>"Option 2: Passing Signals"</h1>
        // SetterButton 允许修改计数
        <SetterButton set_count/>
        // 这些消费者只能读取它
        // 但是如果我们愿意,我们可以通过传递 `set_count` 来赋予他们写权限
        <div style="display: flex">
            <FancyMath/>
            <ListItems/>
        </div>
    }
}

/// 增加我们全局计数器的按钮。
#[component]
fn SetterButton(set_count: WriteSignal<u32>) -> impl IntoView {
    view! {
        <div class="provider red">
            <button on:click=move |_| set_count.update(|count| *count += 1)>
                "Increment Global Count"
            </button>
        </div>
    }
}

/// 使用全局计数进行一些“花哨”数学运算的组件
#[component]
fn FancyMath() -> impl IntoView {
    // 这里我们使用 `use_context` 使用全局计数信号
    let count = use_context::<ReadSignal<u32>>()
        // 我们知道我们刚刚在父组件中提供了它
        .expect("there to be a `count` signal provided");
    let is_even = move || count() & 1 == 0;

    view! {
        <div class="consumer blue">
            "The number "
            <strong>{count}</strong>
            {move || if is_even() {
                " is"
            } else {
                " is not"
            }}
            " even."
        </div>
    }
}

/// 显示从全局计数生成的项目列表的组件。
#[component]
fn ListItems() -> impl IntoView {
    // 再次使用 `use_context` 使用全局计数信号
    let count = use_context::<ReadSignal<u32>>().expect("there to be a `count` signal provided");

    let squares = move || {
        (0..count())
            .map(|n| view! { <li>{n}<sup>"2"</sup> " is " {n * n}</li> })
            .collect::<Vec<_>>()
    };

    view! {
        <div class="consumer green">
            <ul>{squares}</ul>
        </div>
    }
}

// 选项 #3:创建一个全局状态结构体
//
// 你可以使用此方法来构建一个单一的全局数据结构
// 来保存整个应用程序的状态,然后通过
// 使用 `create_slice` 或 `create_memo` 获取细粒度切片来访问它,
// 这样更改状态的一部分不会导致你的
// 应用程序中依赖于状态其他部分的部分发生更改。

#[derive(Default, Clone, Debug)]
struct GlobalState {
    count: u32,
    name: String,
}

#[component]
fn Option3() -> impl IntoView {
    // 我们将提供一个保存整个状态的单一信号
    // 每个组件将负责创建自己的“镜头”来访问它
    let state = create_rw_signal(GlobalState::default());
    provide_context(state);

    view! {
        <h1>"Option 3: Passing Signals"</h1>
        <div class="red consumer" style="width: 100%">
            <h2>"Current Global State"</h2>
            <pre>
                {move || {
                    format!("{:#?}", state.get())
                }}
            </pre>
        </div>
        <div style="display: flex">
            <GlobalStateCounter/>
            <GlobalStateInput/>
        </div>
    }
}

/// 更新全局状态中计数的组件。
#[component]
fn GlobalStateCounter() -> impl IntoView {
    let state = use_context::<RwSignal<GlobalState>>().expect("state to have been provided");

    // `create_slice` 让我们可以创建数据的一个“镜头”
    let (count, set_count) = create_slice(

        // 我们从 `state` 中获取一个切片
        state,
        // 我们的 getter 返回数据的“切片”
        |state| state.count,
        // 我们的 setter 描述了如何根据新值修改该切片
        |state, n| state.count = n,
    );

    view! {
        <div class="consumer blue">
            <button
                on:click=move |_| {
                    set_count(count() + 1);
                }
            >
                "Increment Global Count"
            </button>
            <br/>
            <span>"Count is: " {count}</span>
        </div>
    }
}

/// 更新全局状态中名称的组件。
#[component]
fn GlobalStateInput() -> impl IntoView {
    let state = use_context::<RwSignal<GlobalState>>().expect("state to have been provided");

    // 这个切片完全独立于我们在另一个组件中创建的 `count` 切片
    // 它们都不会导致另一个重新运行
    let (name, set_name) = create_slice(
        // 我们从 `state` 中获取一个切片
        state,
        // 我们的 getter 返回数据的“切片”
        |state| state.name.clone(),
        // 我们的 setter 描述了如何根据新值修改该切片
        |state, n| state.name = n,
    );

    view! {
        <div class="consumer green">
            <input
                type="text"
                prop:value=name
                on:input=move |ev| {
                    set_name(event_target_value(&ev));
                }
            />
            <br/>
            <span>"Name is: " {name}</span>
        </div>
    }
}
// 这个 `main` 函数是应用程序的入口点
// 它只是将我们的组件挂载到 <body> 上
// 因为我们将其定义为 `fn App`,所以我们现在可以在
// 模板中将其用作 <App/>
fn main() {
    leptos::mount_to_body(|| view! { <Option2/><Option3/> })
}

路由

基础知识

路由驱动着大多数网站。路由器( router )是对“给定这个 URL,页面上应该显示什么?”这个问题的答案。

URL 由许多部分组成。例如,URL https://my-cool-blog.com/blog/search?q=Search#results 由以下部分组成

  • 一个 协议https
  • 一个 域名my-cool-blog.com
  • 一个 路径/blog/search
  • 一个 查询(或 搜索):?q=Search
  • 一个 哈希#results

Leptos 路由器使用路径和查询(/blog/search?q=Search)。给定这个 URL 片段,应用程序应该在页面上渲染什么?

理念

在大多数情况下,路径应该驱动页面上显示的内容。从用户的角度来看,对于大多数应用程序,应用程序状态中的大多数主要更改都应该反映在 URL 中。如果你复制粘贴 URL 并在另一个选项卡中打开它,你应该会发现自己或多或少地处在同一个位置。

从这个意义上说,路由器实际上是你的应用程序的全局状态管理的核心。最重要的是,它驱动着页面上显示的内容。

路由器通过将当前位置映射到特定的组件来为你处理大部分工作。

定义路由

入门

路由器很容易上手。

首先,请确保你已将 leptos_router 包添加到你的依赖项中。与 leptos 一样,路由器依赖于激活 csrhydratessr 功能。例如,如果你要将路由器添加到客户端渲染的应用程序中,你将要运行

cargo add leptos_router --features=csr 

leptos_router 是一个单独的包,这一点很重要。这意味着路由器中的所有内容都可以在用户代码中定义。如果你想创建自己的路由器,或者不使用路由器,你完全可以这样做!

并从路由器中导入相关的类型,可以使用类似以下的内容

use leptos_router::{Route, RouteProps, Router, RouterProps, Routes, RoutesProps};

或者简单地使用

use leptos_router::*;

提供 <Router/>

路由行为由 <Router/> 组件提供。这通常应该在你的应用程序的根目录附近,包装应用程序的其余部分。

你不应该尝试在你的应用程序中使用多个 <Router/>。请记住,路由器驱动着全局状态:如果你有多个路由器,当 URL 发生变化时,由谁来决定做什么?

让我们从一个使用路由器的简单 <App/> 组件开始:

use leptos::*;
use leptos_router::*;

#[component]
pub fn App() -> impl IntoView {
  view! {
    <Router>
      <nav>
        /* ... */
      </nav>
      <main>
        /* ... */
      </main>
    </Router>
  }
}

定义 <Routes/>

<Routes/> 组件是你定义用户可以在你的应用程序中导航到的所有路由的地方。每个可能的路由都由一个 <Route/> 组件定义。

你应该将 <Routes/> 组件放置在你的应用程序中希望渲染路由的位置。<Routes/> 之外的所有内容都将出现在每个页面上,因此你可以将导航栏或菜单之类的内容留在 <Routes/> 之外。

use leptos::*;
use leptos_router::*;

#[component]
pub fn App() -> impl IntoView {
  view! {
    <Router>
      <nav>
        /* ... */
      </nav>
      <main>
        // 我们所有的路由都将出现在 <main> 内部
        <Routes>
          /* ... */
        </Routes>
      </main>
    </Router>
  }
}

通过使用 <Route/> 组件为 <Routes/> 提供子级来定义单个路由。<Route/> 接受一个 path 和一个 view。当当前位置与 path 匹配时,将创建并显示 view

path 可以包括

  • 一个静态路径(/users),
  • 以冒号开头的动态命名参数(/:id),
  • 和/或以星号开头的通配符(/user/*any

view 是一个返回视图的函数。任何没有 props 的组件都可以在这里工作,返回某个视图的闭包也可以。

<Routes>
  <Route path="/" view=Home/>
  <Route path="/users" view=Users/>
  <Route path="/users/:id" view=UserProfile/>
  <Route path="/*any" view=|| view! { <h1>"Not Found"</h1> }/>
</Routes>

view 接受一个 Fn() -> impl IntoView。如果一个组件没有 props,它可以直接传递到 view 中。在这种情况下,view=Home 只是 || view! { <Home/> } 的简写。

现在,如果你导航到 //users,你将获得主页或 <Users/>。如果你访问 /users/3/blahblah,你将获得用户配置文件或你的 404 页面(<NotFound/>)。在每次导航中,路由器都会确定应该匹配哪个 <Route/>,从而确定应该在 <Routes/> 组件定义的位置显示什么内容。

请注意,你可以按任何顺序定义你的路由。路由器会对每个路由进行评分,以查看它的匹配程度,而不是简单地尝试从上到下进行匹配。

够简单吧?

条件路由

leptos_router 基于你的应用程序中只有一个 <Routes/> 组件的假设。它使用它在服务器端生成路由,通过缓存计算的分支优化路由匹配,并渲染你的应用程序。

你不应该使用 <Show/><Suspense/> 之类的其他组件有条件地渲染 <Routes/>

// ❌ 不要这样做!
view! {
  <Show when=|| is_loaded() fallback=|| view! { <p>"Loading"</p> }>
    <Routes>
      <Route path="/" view=Home/>
    </Routes>
  </Show>
}

相反,你可以使用嵌套路由来渲染一次 <Routes/>,并有条件地渲染路由器出口:

// ✅ 改为这样做!
view! {
  <Routes>
    // 父路由
    <Route path="/" view=move || {
      view! {
        // 仅在数据加载完成后显示出口
        <Show when=|| is_loaded() fallback=|| view! { <p>"Loading"</p> }>
          <Outlet/>
        </Show>
      }
    }>
      // 嵌套子路由
      <Route path="/" view=Home/>
    </Route>
  </Routes>
}

如果这看起来很奇怪,不要担心!本书的下一节将介绍这种嵌套路由。

嵌套路由

我们刚刚定义了以下一组路由:

<Routes>
  <Route path="/" view=Home/>
  <Route path="/users" view=Users/>
  <Route path="/users/:id" view=UserProfile/>
  <Route path="/*any" view=NotFound/>
</Routes>

这里有一定的重复:/users/users/:id。这对于一个小型应用程序来说很好,但你可能已经知道它不能很好地扩展。如果我们可以嵌套这些路由,不是很好吗?

嗯... 你可以!

<Routes>
  <Route path="/" view=Home/>
  <Route path="/users" view=Users>
    <Route path=":id" view=UserProfile/>
  </Route>
  <Route path="/*any" view=NotFound/>
</Routes>

但是等等。我们刚刚微妙地改变了应用程序的功能。

下一节是本指南整个路由部分中最重要的部分之一。请仔细阅读,如果你有任何不明白的地方,请随时提问。

嵌套路由作为布局

嵌套路由是布局的一种形式,而不是路由定义的一种方法。

换句话说:定义嵌套路由的主要目的不是为了在输入路由定义中的路径时避免重复输入。它实际上是为了告诉路由器同时在页面上并排显示多个 <Route/>

让我们回顾一下我们的实际例子。

<Routes>
  <Route path="/users" view=Users/>
  <Route path="/users/:id" view=UserProfile/>
</Routes>

这意味着:

  • 如果我访问 /users,我会得到 <Users/> 组件。
  • 如果我访问 /users/3,我会得到 <UserProfile/> 组件(参数 id 设置为 3;稍后会详细介绍)

假设我改为使用嵌套路由:

<Routes>
  <Route path="/users" view=Users>
    <Route path=":id" view=UserProfile/>
  </Route>
</Routes>

这意味着:

  • 如果我访问 /users/3,该路径会匹配两个 <Route/><Users/><UserProfile/>
  • 如果我访问 /users,则该路径不匹配。

我实际上需要添加一个回退路由

<Routes>
  <Route path="/users" view=Users>
    <Route path=":id" view=UserProfile/>
    <Route path="" view=NoUser/>
  </Route>
</Routes>

现在:

  • 如果我访问 /users/3,该路径会匹配 <Users/><UserProfile/>
  • 如果我访问 /users,该路径会匹配 <Users/><NoUser/>

换句话说,当我使用嵌套路由时,每个 路径 可以匹配多个 路由:每个 URL 可以同时在同一页面上渲染由多个 <Route/> 组件提供的视图。

这可能与直觉相悖,但它非常强大,原因希望你在几分钟内就能看到。

为什么使用嵌套路由?

为什么要这么麻烦?

大多数 Web 应用程序都包含与布局不同部分相对应的导航级别。例如,在一个电子邮件应用程序中,你可能会有一个像 /contacts/greg 这样的 URL,它在屏幕左侧显示联系人列表,在屏幕右侧显示 Greg 的联系方式。联系人列表和联系方式应该始终同时出现在屏幕上。如果没有选择联系人,你可能希望显示一些说明性文字。

你可以使用嵌套路由轻松定义这一点

<Routes>
  <Route path="/contacts" view=ContactList>
    <Route path=":id" view=ContactInfo/>
    <Route path="" view=|| view! {
      <p>"Select a contact to view more info."</p>
    }/>
  </Route>
</Routes>

你甚至可以走得更深。假设你想为每个联系人的地址、电子邮件/电话以及你与他们的对话设置选项卡。你可以在 :id 内部添加另一组嵌套路由:

<Routes>
  <Route path="/contacts" view=ContactList>
    <Route path=":id" view=ContactInfo>
      <Route path="" view=EmailAndPhone/>
      <Route path="address" view=Address/>
      <Route path="messages" view=Messages/>
    </Route>
    <Route path="" view=|| view! {
      <p>"Select a contact to view more info."</p>
    }/>
  </Route>
</Routes>

Remix 网站的主页(React 路由器的创建者创建的 React 框架)如果你向下滚动,会有一个很好的可视化示例,其中包含三级嵌套路由:Sales > Invoices > an invoice.

<Outlet/>

父路由不会自动渲染它们的嵌套路由。毕竟,它们只是组件;它们不知道它们应该在哪里渲染它们的子级,而“把它放在父组件的末尾”并不是一个很好的答案。

相反,你可以使用 <Outlet/> 组件告诉父组件在哪里渲染任何嵌套组件。<Outlet/> 只渲染两件事之一:

  • 如果没有匹配的嵌套路由,它什么也不显示
  • 如果有一个匹配的嵌套路由,它会显示它的 view

就是这样!但重要的是要知道并记住这一点,因为它是一个常见的“为什么这不起作用?”的挫折来源。如果你没有提供一个 <Outlet/>,嵌套路由将不会被显示。

#[component]
pub fn ContactList() -> impl IntoView {
  let contacts = todo!();

  view! {
    <div style="display: flex">
      // 联系人列表
      <For each=contacts
        key=|contact| contact.id
        children=|contact| todo!()
      />
      // 嵌套子级,如果有的话
      // 不要忘记这个!
      <Outlet/>
    </div>
  }
}

重构路由定义

如果你不想的话,你不需要在一个地方定义所有路由。你可以将任何 <Route/> 及其子级重构到一个单独的组件中。

例如,你可以重构上面的例子,使用两个独立的组件:

#[component]
fn App() -> impl IntoView {
  view! {
    <Router>
      <Routes>
        <Route path="/contacts" view=ContactList>
          <ContactInfoRoutes/>
          <Route path="" view=|| view! {
            <p>"Select a contact to view more info."</p>
          }/>
        </Route>
      </Routes>
    </Router>
  }
}

#[component(transparent)]
fn ContactInfoRoutes() -> impl IntoView {
  view! {
    <Route path=":id" view=ContactInfo>
      <Route path="" view=EmailAndPhone/>
      <Route path="address" view=Address/>
      <Route path="messages" view=Messages/>
    </Route>
  }
}

第二个组件是 #[component(transparent)],这意味着它只返回它的数据,而不是视图:在这种情况下,它是一个 RouteDefinition 结构体,这是 <Route/> 返回的内容。只要它被标记为 #[component(transparent)],这个子路由就可以定义在你想要的任何地方,并作为组件插入到你的路由定义树中。

嵌套路由和性能

从概念上讲,所有这些都很不错,但再次强调——有什么大不了的?

性能。

在像 Leptos 这样的细粒度响应式库中,始终重要的是尽可能减少渲染工作。因为我们处理的是真实的 DOM 节点,而不是对虚拟 DOM 进行差异化处理,所以我们希望尽可能少地“重新渲染”组件。嵌套路由使得这变得非常容易。

想象一下我的联系人列表示例。如果我从 Greg 导航到 Alice 再到 Bob,然后返回 Greg,则每次导航时都需要更改联系信息。但 <ContactList/> 永远不应该重新渲染。这不仅可以节省渲染性能,还可以维护 UI 中的状态。例如,如果我在 <ContactList/> 的顶部有一个搜索栏,则从 Greg 导航到 Alice 再到 Bob 不会清除搜索内容。

实际上,在这种情况下,我们甚至不需要在联系人之间移动时重新渲染 <Contact/> 组件。路由器只会随着我们的导航而响应式地更新 :id 参数,从而允许我们进行细粒度更新。当我们在联系人之间导航时,我们将更新单个文本节点以更改联系人的姓名、地址等,而无需进行_任何_额外的重新渲染。

此沙盒包含本节和上一节中讨论的几个功能(如嵌套路由),以及本章其余部分将介绍的几个功能。路由器是一个如此集成的系统,以至于提供一个单独的示例是有意义的,所以如果你有任何不明白的地方,请不要感到惊讶。

实时示例

点击打开 CodeSandbox.

CodeSandbox 源码
use leptos::*;
use leptos_router::*;

#[component]
fn App() -> impl IntoView {
    view! {
        <Router>
            <h1>"Contact App"</h1>
            // 这个 <nav> 将显示在每个路由上,
            // 因为它在 <Routes/> 之外
            // 注意:我们可以只使用普通的 <a> 标签
            // 路由器将使用客户端导航
            <nav>
                <h2>"Navigation"</h2>
                <a href="/">"Home"</a>
                <a href="/contacts">"Contacts"</a>
            </nav>
            <main>
                <Routes>
                    // / 只有一个未嵌套的 "Home"
                    <Route path="/" view=|| view! {
                        <h3>"Home"</h3>
                    }/>
                    // /contacts 有嵌套路由
                    <Route
                        path="/contacts"
                        view=ContactList
                      >
                        // 如果没有指定 id,则回退
                        <Route path=":id" view=ContactInfo>
                            <Route path="" view=|| view! {
                                <div class="tab">
                                    "(Contact Info)"
                                </div>
                            }/>
                            <Route path="conversations" view=|| view! {
                                <div class="tab">
                                    "(Conversations)"
                                </div>
                            }/>
                        </Route>
                        // 如果没有指定 id,则回退
                        <Route path="" view=|| view! {
                            <div class="select-user">
                                "Select a user to view contact info."
                            </div>
                        }/>
                    </Route>
                </Routes>
            </main>
        </Router>
    }
}

#[component]
fn ContactList() -> impl IntoView {
    view! {
        <div class="contact-list">
            // 这是我们的联系人列表组件本身
            <div class="contact-list-contacts">
                <h3>"Contacts"</h3>
                <A href="alice">"Alice"</A>
                <A href="bob">"Bob"</A>
                <A href="steve">"Steve"</A>
            </div>

            // <Outlet/> 将显示嵌套的子路由
            // 我们可以将此出口放置在布局中的任何位置
            <Outlet/>
        </div>
    }
}

#[component]
fn ContactInfo() -> impl IntoView {
    // 我们可以使用 `use_params_map` 以响应式方式访问 :id 参数
    let params = use_params_map();
    let id = move || params.with(|params| params.get("id").cloned().unwrap_or_default());

    // 假设我们在这里从 API 加载数据
    let name = move || match id().as_str() {
        "alice" => "Alice",
        "bob" => "Bob",
        "steve" => "Steve",
        _ => "User not found.",
    };

    view! {
        <div class="contact-info">
            <h4>{name}</h4>
            <div class="tabs">
                <A href="" exact=true>"Contact Info"</A>
                <A href="conversations">"Conversations"</A>
            </div>

            // 这里的 <Outlet/> 是嵌套在
            // /contacts/:id 路由下的选项卡
            <Outlet/>
        </div>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

参数和查询

静态路径对于区分不同的页面很有用,但几乎每个应用程序都希望在某些时候通过 URL 传递数据。

你可以通过两种方式来做到这一点:

  1. 命名路由 参数,如 /users/:id 中的 id
  2. 命名路由 查询,如 /search?q=Foo 中的 q

由于 URL 的构建方式,你可以从_任何_ <Route/> 视图中访问查询。你可以从定义它们的 <Route/> 或其任何嵌套子级访问路由参数。

使用几个钩子访问参数和查询非常简单:

每一个都有一个类型化选项(use_queryuse_params)和一个非类型化选项(use_query_mapuse_params_map)。

非类型化版本保存一个简单的键值映射。要使用类型化版本,请在结构体上派生 Params 特征。

Params 是一个非常轻量级的特征,它通过对每个字段应用 FromStr,把扁平的字符串键值映射转换为结构体。由于路由参数和 URL 查询本身就是扁平结构,它远不如 serde 灵活;相应地,它给你的二进制文件增加的体积也更小。

use leptos::*;
use leptos_router::*;

#[derive(Params, PartialEq)]
struct ContactParams {
	id: usize
}

#[derive(Params, PartialEq)]
struct ContactSearch {
	q: String
}

注意:Params 派生宏位于 leptos::ParamsParams 特征位于 leptos_router::Params。如果你避免使用像 use leptos::*; 这样的全局导入,请确保你为派生宏导入了正确的宏。

如果你没有使用 nightly 功能,你会收到以下错误

no function or associated item named `into_param` found for struct `std::string::String` in the current scope

目前,支持 T: FromStrOption<T> 作为类型化参数需要一个 nightly 功能。你可以通过简单地将结构体更改为使用 q: Option<String> 而不是 q: String 来解决此问题。
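
例如,在稳定版 Rust 上,可以按照上面所说的,把前面的 ContactSearch 改写为使用 Option<String>:

#[derive(Params, PartialEq)]
struct ContactSearch {
	q: Option<String>
}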

现在我们可以在组件中使用它们了。想象一个既有参数又有查询的 URL,如 /contacts/:id?q=Search

类型化版本返回 Memo<Result<T, _>>。它是一个 Memo,因此它对 URL 的更改做出反应。它是一个 Result,因为需要从 URL 中解析参数或查询,并且可能有效也可能无效。

let params = use_params::<ContactParams>();
let query = use_query::<ContactSearch>();

// id: || -> usize
let id = move || {
	params.with(|params| {
		params.as_ref()
			.map(|params| params.id)
			.unwrap_or_default()
	})
};

非类型化版本返回 Memo<ParamsMap>。同样,它是一个 memo,以对 URL 的更改做出反应。ParamsMap 的行为与任何其他映射类型非常相似,它的 .get() 方法返回 Option<&String>

let params = use_params_map();
let query = use_query_map();

// id: || -> Option<String>
let id = move || {
	params.with(|params| params.get("id").cloned())
};

这可能会有点混乱:派生一个包装 Option<_>Result<_> 的信号可能需要几个步骤。但是这样做是值得的,原因有两个:

  1. 它是正确的,即它迫使你考虑这些情况,“如果用户没有为此查询字段传递值怎么办?如果他们传递了一个无效的值怎么办?”
  2. 它具有高性能。具体来说,当你导航到与同一个 <Route/> 匹配的不同路径时,只有参数或查询发生了变化,你可以在不重新渲染的情况下对应用程序的不同部分进行细粒度更新。例如,在我们联系人列表示例中,在不同联系人之间导航会对名称字段(最终是联系信息)进行有针对性的更新,而无需替换或重新渲染包装的 <Contact/>。这就是细粒度响应式的作用。

这与上一节的示例相同。路由器是一个如此集成的系统,以至于提供一个单独的示例来突出多个功能是有意义的,即使我们还没有解释所有功能。

实时示例

点击打开 CodeSandbox.

CodeSandbox 源码
use leptos::*;
use leptos_router::*;

#[component]
fn App() -> impl IntoView {
    view! {
        <Router>
            <h1>"Contact App"</h1>
            // 这个 <nav> 将显示在每个路由上,
            // 因为它在 <Routes/> 之外
            // 注意:我们可以只使用普通的 <a> 标签
            // 路由器将使用客户端导航
            <nav>
                <h2>"Navigation"</h2>
                <a href="/">"Home"</a>
                <a href="/contacts">"Contacts"</a>
            </nav>
            <main>
                <Routes>
                    // / 只有一个未嵌套的 "Home"
                    <Route path="/" view=|| view! {
                        <h3>"Home"</h3>
                    }/>
                    // /contacts 有嵌套路由
                    <Route
                        path="/contacts"
                        view=ContactList
                      >
                        // 如果没有指定 id,则回退
                        <Route path=":id" view=ContactInfo>
                            <Route path="" view=|| view! {
                                <div class="tab">
                                    "(Contact Info)"
                                </div>
                            }/>
                            <Route path="conversations" view=|| view! {
                                <div class="tab">
                                    "(Conversations)"
                                </div>
                            }/>
                        </Route>
                        // 如果没有指定 id,则回退
                        <Route path="" view=|| view! {
                            <div class="select-user">
                                "Select a user to view contact info."
                            </div>
                        }/>
                    </Route>
                </Routes>
            </main>
        </Router>
    }
}

#[component]
fn ContactList() -> impl IntoView {
    view! {
        <div class="contact-list">
            // 这是我们的联系人列表组件本身
            <div class="contact-list-contacts">
                <h3>"Contacts"</h3>
                <A href="alice">"Alice"</A>
                <A href="bob">"Bob"</A>
                <A href="steve">"Steve"</A>
            </div>

            // <Outlet/> 将显示嵌套的子路由
            // 我们可以将此出口放置在布局中的任何位置
            <Outlet/>
        </div>
    }
}

#[component]
fn ContactInfo() -> impl IntoView {
    // 我们可以使用 `use_params_map` 以响应式方式访问 :id 参数
    let params = use_params_map();
    let id = move || params.with(|params| params.get("id").cloned().unwrap_or_default());

    // 假设我们在这里从 API 加载数据
    let name = move || match id().as_str() {
        "alice" => "Alice",
        "bob" => "Bob",
        "steve" => "Steve",
        _ => "User not found.",
    };

    view! {
        <div class="contact-info">
            <h4>{name}</h4>
            <div class="tabs">
                <A href="" exact=true>"Contact Info"</A>
                <A href="conversations">"Conversations"</A>
            </div>

            // 这里的 <Outlet/> 是嵌套在
            // /contacts/:id 路由下的选项卡
            <Outlet/>
        </div>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

<A/> 组件

客户端导航与普通的 HTML <a> 元素完美配合。路由器添加了一个监听器,用于处理对 <a> 元素的每次点击,并尝试在客户端处理它,即无需再次往返服务器以请求 HTML。这就是让你可能熟悉的大多数现代 Web 应用程序能够快速进行“单页应用程序”导航的原因。

在以下几种情况下,路由器将放弃处理 <a> 点击

  • 对点击事件调用了 prevent_default()
  • 点击期间按住 MetaAltCtrlShift
  • <a> 具有 targetdownload 属性,或 rel="external"
  • 该链接的来源与当前位置不同

换句话说,路由器只会在它确信可以处理时才会尝试进行客户端导航,并且它会升级每个 <a> 元素以获得这种特殊行为。

这也意味着,如果你需要退出客户端路由,你可以轻松地做到这一点。例如,如果你有一个指向同一域上另一个页面的链接,但该页面不是你的 Leptos 应用程序的一部分,你只需使用 <a rel="external"> 来告诉路由器它无法处理。

路由器还提供了一个 <A> 组件,它可以完成两项额外的工作:

  1. 正确解析相对嵌套路由。使用普通 <a> 标签进行相对路由可能很棘手。例如,如果你有一个像 /post/:id 这样的路由,<A href="1"> 将生成正确的相对路由,但 <a href="1"> 可能不会(取决于它在你的视图中的位置。)<A/> 解析相对于它出现的嵌套路由路径的路由。
  2. 如果此链接是活动链接(即,它是指向你所在页面的链接),则将 aria-current 属性设置为 page。这有助于可访问性和样式设置。例如,如果你想为链接设置不同的颜色(如果它是指向你当前所在页面的链接),你可以使用 CSS 选择器匹配此属性。
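
举个例子,沿用本章联系人示例中的嵌套路由:在 /contacts 的布局组件里可以直接写相对链接,路由器会为指向当前页面的链接加上 aria-current="page",因此可以用 CSS 选择器 a[aria-current="page"] 来高亮当前项。下面是一个简短的示意:

view! {
    <nav>
        // 在 /contacts 布局中,"alice" 会被解析为 /contacts/alice
        <A href="alice">"Alice"</A>
        <A href="bob">"Bob"</A>
    </nav>
}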

以编程方式导航

你最常用的页面间导航方法应该是使用 <a><form> 元素,或者使用增强的 <A/><Form/> 组件。使用链接和表单进行导航是可访问性和优雅降级的最佳解决方案。

但是,有时你需要以编程方式导航,即调用一个可以导航到新页面的函数。在这种情况下,你应该使用 use_navigate 函数。

let navigate = leptos_router::use_navigate();
navigate("/somewhere", Default::default());

你几乎不应该做类似 <button on:click=move |_| navigate(/* ... */)> 的事情。出于可访问性的原因,任何导航的 on:click 都应该是 <a>

这里的第二个参数是一组 NavigateOptions,其中包括相对于当前路由解析导航的选项(如 <A/> 组件所做的那样),在导航堆栈中替换它,包含一些导航状态,并在导航时维护当前滚动状态。
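
例如,下面是一个小示意,展示如何在替换历史记录条目的同时保留滚动位置(具体字段名以 leptos_router 中 NavigateOptions 的文档为准):

use leptos_router::{use_navigate, NavigateOptions};

let navigate = use_navigate();
navigate(
    "/somewhere",
    NavigateOptions {
        // 替换导航堆栈中的当前条目,而不是新增一条
        replace: true,
        // 导航后不要重置滚动位置
        scroll: false,
        ..Default::default()
    },
);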

再一次,这与之前的例子相同。查看相关的 <A/> 组件,并查看 index.html 中的 CSS,以了解基于 ARIA 的样式。

实时示例

点击打开 CodeSandbox.

CodeSandbox 源码
use leptos::*;
use leptos_router::*;

#[component]
fn App() -> impl IntoView {
    view! {
        <Router>
            <h1>"Contact App"</h1>
            // 这个 <nav> 将显示在每个路由上,
            // 因为它在 <Routes/> 之外
            // 注意:我们可以只使用普通的 <a> 标签
            // 路由器将使用客户端导航
            <nav>
                <h2>"Navigation"</h2>
                <a href="/">"Home"</a>
                <a href="/contacts">"Contacts"</a>
            </nav>
            <main>
                <Routes>
                    // / 只有一个未嵌套的 "Home"
                    <Route path="/" view=|| view! {
                        <h3>"Home"</h3>
                    }/>
                    // /contacts 有嵌套路由
                    <Route
                        path="/contacts"
                        view=ContactList
                      >
                        // 如果没有指定 id,则回退
                        <Route path=":id" view=ContactInfo>
                            <Route path="" view=|| view! {
                                <div class="tab">
                                    "(Contact Info)"
                                </div>
                            }/>
                            <Route path="conversations" view=|| view! {
                                <div class="tab">
                                    "(Conversations)"
                                </div>
                            }/>
                        </Route>
                        // 如果没有指定 id,则回退
                        <Route path="" view=|| view! {
                            <div class="select-user">
                                "Select a user to view contact info."
                            </div>
                        }/>
                    </Route>
                </Routes>
            </main>
        </Router>
    }
}

#[component]
fn ContactList() -> impl IntoView {
    view! {
        <div class="contact-list">
            // 这是我们的联系人列表组件本身
            <div class="contact-list-contacts">
                <h3>"Contacts"</h3>
                <A href="alice">"Alice"</A>
                <A href="bob">"Bob"</A>
                <A href="steve">"Steve"</A>
            </div>

            // <Outlet/> 将显示嵌套的子路由
            // 我们可以将此出口放置在布局中的任何位置
            <Outlet/>
        </div>
    }
}

#[component]
fn ContactInfo() -> impl IntoView {
    // 我们可以使用 `use_params_map` 以响应式方式访问 :id 参数
    let params = use_params_map();
    let id = move || params.with(|params| params.get("id").cloned().unwrap_or_default());

    // 假设我们在这里从 API 加载数据
    let name = move || match id().as_str() {
        "alice" => "Alice",
        "bob" => "Bob",
        "steve" => "Steve",
        _ => "User not found.",
    };

    view! {
        <div class="contact-info">
            <h4>{name}</h4>
            <div class="tabs">
                <A href="" exact=true>"Contact Info"</A>
                <A href="conversations">"Conversations"</A>
            </div>

            // 这里的 <Outlet/> 是嵌套在
            // /contacts/:id 路由下的选项卡
            <Outlet/>
        </div>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

<Form/> 组件

链接和表单有时看起来完全无关。但事实上,它们的工作方式非常相似。

在纯 HTML 中,有三种方法可以导航到另一个页面:

  1. 链接到另一个页面的 <a> 元素:使用 GET HTTP 方法导航到其 href 属性中的 URL。
  2. 一个 <form method="GET">:使用 GET HTTP 方法导航到其 action 属性中的 URL,并将来自其输入的表单数据编码在 URL 查询字符串中。
  3. 一个 <form method="POST">:使用 POST HTTP 方法导航到其 action 属性中的 URL,并将来自其输入的表单数据编码在请求的正文中。

由于我们有一个客户端路由器,我们可以在不重新加载页面的情况下进行客户端链接导航,即无需完全往返服务器。我们也可以用同样的方式进行客户端表单导航,这是有道理的。

路由器提供了一个 <Form> 组件,它的工作方式类似于 HTML <form> 元素,但使用客户端导航而不是完整的页面重新加载。<Form/> 适用于 GETPOST 请求。使用 method="GET",它将导航到表单数据中编码的 URL。使用 method="POST",它将发出 POST 请求并处理服务器的响应。

<Form/> 为我们将在后面的章节中看到的一些组件(如 <ActionForm/><MultiActionForm/>)奠定了基础。但它本身也支持一些强大的模式。

例如,假设你想要创建一个搜索字段,在用户搜索时实时更新搜索结果,而无需重新加载页面,但也将搜索内容存储在 URL 中,以便用户可以复制粘贴它以与他人共享结果。

事实证明,我们迄今为止学到的模式使得这很容易实现。

async fn fetch_results() {
	// 一些获取我们搜索结果的异步函数
}

#[component]
pub fn FormExample() -> impl IntoView {
    // 对 URL 查询字符串的响应式访问
    let query = use_query_map();
	// 存储为 ?q= 的搜索
    let search = move || query().get("q").cloned().unwrap_or_default();
	// 由搜索字符串驱动的资源
	let search_results = create_resource(search, fetch_results);

	view! {
		<Form method="GET" action="">
			<input type="search" name="q" value=search/>
			<input type="submit"/>
		</Form>
		<Transition fallback=move || ()>
			/* 渲染搜索结果 */
		</Transition>
	}
}

每当你点击“提交”时,<Form/> 都会“导航”到 ?q={search}。但由于此导航是在客户端完成的,因此页面不会闪烁或重新加载。URL 查询字符串发生变化,这会触发 search 更新。因为 searchsearch_results 资源的源信号,所以这会触发 search_results 重新加载其资源。<Transition/> 会继续显示当前的搜索结果,直到新的搜索结果加载完成。当它们完成后,它将切换到显示新的结果。

这是一个很棒的模式。数据流非常清晰:所有数据都从 URL 流向资源,再流向 UI。应用程序的当前状态存储在 URL 中,这意味着你可以刷新页面或将链接发送给朋友,它将完全按照你的预期显示。一旦我们引入服务器端渲染,这种模式也将被证明是非常容错的:因为它在底层使用 <form> 元素和 URL,所以它实际上即使没有在客户端加载你的 WASM 也能很好地工作。

我们实际上可以更进一步,做一些聪明的事情:

view! {
	<Form method="GET" action="">
		<input type="search" name="q" value=search
			oninput="this.form.requestSubmit()"
		/>
	</Form>
}

你可能会注意到,此版本删除了“提交”按钮。相反,我们在输入框中添加了一个 oninput 属性。注意,这不是会监听 input 事件并运行 Rust 代码的 on:input;没有冒号的 oninput 只是一个普通的 HTML 属性,所以这个字符串实际上是一段 JavaScript。this.form 为我们提供了输入框所附加的表单。requestSubmit() 会触发 <form> 上的 submit 事件,这会被 <Form/> 捕获,就像我们点击了“提交”按钮一样。现在,表单会在每次输入时“导航”,使 URL(以及搜索内容)与用户输入的内容保持完全同步。

实时示例

点击打开 CodeSandbox.

CodeSandbox 源码
use leptos::*;
use leptos_router::*;

#[component]
fn App() -> impl IntoView {
    view! {
        <Router>
            <h1><code>"<Form/>"</code></h1>
            <main>
                <Routes>
                    <Route path="" view=FormExample/>
                </Routes>
            </main>
        </Router>
    }
}

#[component]
pub fn FormExample() -> impl IntoView {
    // 对 URL 查询的响应式访问
    let query = use_query_map();
    let name = move || query().get("name").cloned().unwrap_or_default();
    let number = move || query().get("number").cloned().unwrap_or_default();
    let select = move || query().get("select").cloned().unwrap_or_default();

    view! {
        // 读出 URL 查询字符串
        <table>
            <tr>
                <td><code>"name"</code></td>
                <td>{name}</td>
            </tr>
            <tr>
                <td><code>"number"</code></td>
                <td>{number}</td>
            </tr>
            <tr>
                <td><code>"select"</code></td>
                <td>{select}</td>
            </tr>
        </table>
        // <Form/> 将在每次提交时进行导航
        <h2>"Manual Submission"</h2>
        <Form method="GET" action="">
            // 输入名称决定查询字符串键
            <input type="text" name="name" value=name/>
            <input type="number" name="number" value=number/>
            <select name="select">
                // `selected` 将设置哪个开始时被选中
                <option selected=move || select() == "A">
                    "A"
                </option>
                <option selected=move || select() == "B">
                    "B"
                </option>
                <option selected=move || select() == "C">
                    "C"
                </option>
            </select>
            // 提交应该会导致客户端
            // 导航,而不是完全重新加载
            <input type="submit"/>
        </Form>
        // 这个 <Form/> 使用一些 JavaScript 在
        // 每次输入时提交
        <h2>"Automatic Submission"</h2>
        <Form method="GET" action="">
            <input
                type="text"
                name="name"
                value=name
                // 这个 oninput 属性将导致
                // 表单在每次输入到字段时提交
                oninput="this.form.requestSubmit()"
            />
            <input
                type="number"
                name="number"
                value=number
                oninput="this.form.requestSubmit()"
            />
            <select name="select"
                onchange="this.form.requestSubmit()"
            >
                <option selected=move || select() == "A">
                    "A"
                </option>
                <option selected=move || select() == "B">
                    "B"
                </option>
                <option selected=move || select() == "C">
                    "C"
                </option>
            </select>
            // 提交应该会导致客户端
            // 导航,而不是完全重新加载
            <input type="submit"/>
        </Form>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

插曲:样式

任何创建网站或应用程序的人很快都会遇到样式问题。对于小型应用程序,单个 CSS 文件可能足以设置用户界面的样式。但随着应用程序的增长,许多开发人员发现纯 CSS 越来越难以管理。

一些前端框架(如 Angular、Vue 和 Svelte)提供了将 CSS 范围限定到特定组件的内置方法,从而更容易管理整个应用程序的样式,而不会让用于修改一个小组件的样式产生全局影响。其他框架(如 React 或 Solid)不提供内置的 CSS 作用域,而是依赖生态系统中的库来为它们完成这项工作。Leptos 属于后者:框架本身对 CSS 没有任何看法,但提供了一些工具和原语,允许其他人构建样式库。

以下是一些为你的 Leptos 应用程序设置样式的不同方法,而不是纯 CSS。

TailwindCSS:实用优先的 CSS

TailwindCSS 是一个流行的实用优先的 CSS 库。它允许你通过使用内联实用程序类来设置应用程序的样式,并使用自定义 CLI 工具扫描你的文件中的 Tailwind 类名并将必要的 CSS 打包在一起。

这允许你编写如下组件:

#[component]
fn Home() -> impl IntoView {
    let (count, set_count) = create_signal(0);

    view! {
        <main class="my-0 mx-auto max-w-3xl text-center">
            <h2 class="p-6 text-4xl">"Welcome to Leptos with Tailwind"</h2>
            <p class="px-10 pb-10 text-left">"Tailwind will scan your Rust files for Tailwind class names and compile them into a CSS file."</p>
            <button
                class="bg-sky-600 hover:bg-sky-700 px-5 py-3 text-white rounded-lg"
                on:click=move |_| set_count.update(|count| *count += 1)
            >
                {move || if count() == 0 {
                    "Click me!".to_string()
                } else {
                    count().to_string()
                }}
            </button>
        </main>
    }
}

最初设置 Tailwind 集成可能有点复杂,但你可以查看我们关于如何在 客户端渲染的 trunk 应用程序服务器端渲染的 cargo-leptos 应用程序 中使用 Tailwind 的两个示例。cargo-leptos 还有一些 内置的 Tailwind 支持,你可以将其用作 Tailwind CLI 的替代方案。

Stylers:编译时 CSS 提取

Stylers 是一个编译时作用域 CSS 库,允许你在组件的主体中声明作用域 CSS。Stylers 将在编译时将此 CSS 提取到 CSS 文件中,然后你可以将其导入到你的应用程序中,这意味着它不会增加应用程序的 WASM 二进制文件大小。

这允许你编写如下组件:

use stylers::style;

#[component]
pub fn App() -> impl IntoView {
    let styler_class = style! { "App",
        #two{
            color: blue;
        }
        div.one{
            color: red;
            content: raw_str(r#"\hello"#);
            font: "1.3em/1.2" Arial, Helvetica, sans-serif;
        }
        div {
            border: 1px solid black;
            margin: 25px 50px 75px 100px;
            background-color: lightblue;
        }
        h2 {
            color: purple;
        }
        @media only screen and (max-width: 1000px) {
            h3 {
                background-color: lightblue;
                color: blue
            }
        }
    };

    view! { class = styler_class,
        <div class="one">
            <h1 id="two">"Hello"</h1>
            <h2>"World"</h2>
            <h2>"and"</h2>
            <h3>"friends!"</h3>
        </div>
    }
}

Stylance:用 CSS 文件编写的作用域 CSS

Stylers 允许你在 Rust 代码中内联编写 CSS,并在编译时提取它并限定其作用域。Stylance 允许你在组件旁边用 CSS 文件编写 CSS,将这些文件导入到你的组件中,并将 CSS 类的作用域限定到你的组件。

这与 trunkcargo-leptos 的实时重新加载功能配合良好,因为编辑的 CSS 文件可以在浏览器中立即更新。

use stylance::import_style;

import_style!(style, "app.module.scss");

#[component]
fn HomePage() -> impl IntoView {
    view! {
        <div class=style::jumbotron/>
    }
}

你可以直接编辑 CSS,而无需进行 Rust 重新编译。

.jumbotron {
  background: blue;
}

Styled:运行时 CSS 作用域

Styled 是一个运行时作用域 CSS 库,它与 Leptos 集成良好。它允许你在组件函数的主体中声明作用域 CSS,然后在运行时应用这些样式。

use styled::style;

#[component]
pub fn MyComponent() -> impl IntoView {
    let styles = style!(
      div {
        background-color: red;
        color: white;
      }
    );

    styled::view! { styles,
        <div>"This text should be red with white text."</div>
    }
}

欢迎贡献

Leptos 对你如何设置网站或应用程序的样式没有任何看法,但我们很乐意为你想创建的任何工具提供支持,以使这项工作变得更容易。如果你正在研究一种你想添加到此列表中的 CSS 或样式方法,请告诉我们!

元数据

到目前为止,我们渲染的所有内容都在 HTML 文档的 <body> 内部。这很有道理。毕竟,你在网页上看到的所有内容都位于 <body> 内部。

但是,有很多情况下,你可能希望使用与 UI 相同的响应式原语和组件模式来更新文档 <head> 中的内容。

这就是 leptos_meta 包的用武之地。

元数据组件

leptos_meta 提供了特殊的组件,让你可以将应用程序中任何地方的组件内部的数据注入到 <head> 中:

<Title/> 允许你从任何组件设置文档的标题。它还接受一个 formatter 函数,该函数可用于将相同的格式应用于其他页面设置的标题。因此,例如,如果你在 <App/> 组件中放入 <Title formatter=|text| format!("{text} — My Awesome Site")/>,然后在你的路由上放入 <Title text="Page 1"/><Title text="Page 2"/>,你将获得 Page 1 — My Awesome SitePage 2 — My Awesome Site

<Link/> 接受 <link> 元素的标准属性。

<Stylesheet/> 使用你提供的 href 创建一个 <link rel="stylesheet">

<Style/> 使用你传入的子级(通常是一个字符串)创建一个 <style>。你可以使用它在编译时从另一个文件中导入一些自定义 CSS <Style>{include_str!("my_route.css")}</Style>

<Meta/> 让你可以使用描述和其他元数据设置 <meta> 标签。
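
下面把上面几个组件放在一起,给出一个简单的示意(假设你已经把 leptos_meta 添加为依赖;样式表路径和描述文字都只是示例):

use leptos::*;
use leptos_meta::*;

#[component]
fn App() -> impl IntoView {
    // 注册 <head> 元数据所需的上下文,通常在根组件中调用一次
    provide_meta_context();

    view! {
        // 全站统一的标题格式
        <Title formatter=|text| format!("{text} — My Awesome Site")/>
        <Stylesheet href="/style.css"/>
        <Meta name="description" content="A page description for SEO."/>
        <main>/* ... */</main>
    }
}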

<Script/><script>

leptos_meta 还提供了一个 <Script/> 组件,在这里值得停顿一下。我们考虑过的所有其他组件都在 <head> 中注入了仅限 <head> 的元素。但是 <script> 也可以包含在 body 中。

有一种非常简单的方法可以确定你应该使用大写字母 S 的 <Script/> 组件还是小写字母 s 的 <script> 元素:<Script/> 组件将在 <head> 中渲染,而 <script> 元素将在你的用户界面 <body> 中你放置它的任何位置渲染,与其他普通 HTML 元素一起渲染。这些会导致 JavaScript 在不同的时间加载和运行,因此请使用适合你需求的任何一种。
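
一个简单的对比示意(这里的脚本地址只是一个假设的占位符):

view! {
    // 大写 S:由 leptos_meta 注入到文档的 <head> 中
    <Script src="https://example.com/analytics.js"/>
    // 小写 s:作为普通 HTML 元素,渲染在 <body> 中你放置它的位置
    <script>"console.log('inline script in the body');"</script>
}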

<Body/><Html/>

甚至还有一些元素旨在使语义 HTML 和样式更容易。<Html/> 让你可以从你的应用程序代码中设置 <html> 标签上的 langdir<Html/><Body/> 都有 class props,让你可以设置它们各自的 class 属性,这有时是 CSS 框架用于样式设置所需要的。

<Body/><Html/> 都有 attributes props,可以使用 attr: 语法在它们上面设置任意数量的附加属性:

<Html
	lang="he"
	dir="rtl"
	attr:data-theme="dark"
/>

元数据和服务器端渲染

现在,其中一些内容在任何场景下都很有用,但其中一些内容对搜索引擎优化(SEO)尤其重要。确保你拥有适当的 <title><meta> 标签等内容至关重要。现代搜索引擎爬虫确实会处理客户端渲染,即作为空 index.html 发送并完全在 JS/WASM 中渲染的应用程序。但它们更喜欢接收你的应用程序已渲染为实际 HTML 的页面,并在 <head> 中包含元数据。

这正是 leptos_meta 的用途。事实上,在服务器端渲染期间,它正是这样做的:收集你通过在整个应用程序中使用其组件声明的所有 <head> 内容,然后将其注入到实际的 <head> 中。

但我跑题了。我们还没有真正讨论服务器端渲染。下一章将讨论与 JavaScript 库的集成。然后我们将结束对客户端的讨论,并转到服务器端渲染。

与 JavaScript 集成:wasm-bindgenweb_sysHtmlElement

Leptos 提供了各种工具,让你无需离开框架的世界就可以构建声明式的 Web 应用程序。诸如响应式系统、componentview 宏以及路由器之类的东西使你无需直接与浏览器提供的 Web API 交互即可构建用户界面。并且它们让你直接在 Rust 中完成所有这些工作,这很棒——假设你喜欢 Rust。(如果你已经读到本书的这一部分,我们假设你喜欢 Rust。)

leptos-use 提供的奇妙实用程序集之类的生态系统 crate 可以让你走得更远,通过为许多 Web API 提供特定于 Leptos 的响应式包装器。

但是,在许多情况下,你需要直接访问 JavaScript 库或 Web API。本章可以提供帮助。

使用 wasm-bindgen 使用 JS 库

你的 Rust 代码可以编译为 WebAssembly (WASM) 模块并加载到浏览器中运行。但是,WASM 无法直接访问浏览器 API。相反,Rust/WASM 生态系统依赖于从你的 Rust 代码到托管它的 JavaScript 浏览器环境生成绑定。

wasm-bindgen crate 是该生态系统的核心。它提供了一个接口,用于使用注释标记 Rust 代码的各个部分,告诉它如何调用 JS,以及一个用于生成必要的 JS 粘合代码的 CLI 工具。你一直在不知不觉中使用它:trunkcargo-leptos 都在底层依赖 wasm-bindgen

如果你想从 Rust 中调用 JavaScript 库,你应该参考 wasm-bindgen 文档中关于 从 JS 导入函数 的部分。从 JavaScript 导入单个函数、类或值以在你的 Rust 应用程序中使用相对容易。

将 JS 库直接集成到你的应用程序中并不总是那么容易。特别是,任何依赖于像 React 这样的特定 JS 框架的库都可能难以集成。还应谨慎使用以某种方式操作 DOM 状态的库(例如,富文本编辑器):Leptos 和 JS 库都可能假设它们是应用程序状态的最终事实来源,因此你应该小心地分离它们的职责。

使用 web-sys 访问 Web API

如果你只需要访问一些浏览器 API 而无需引入单独的 JS 库,则可以使用 web_sys crate 来实现。它提供了浏览器提供的所有 Web API 的绑定,从浏览器类型和函数到 Rust 结构体和方法的 1:1 映射。

通常,如果你问“我如何使用 Leptos 执行 X?”,而这里的“执行 X”是指访问某个 Web API,那么一个好办法是先查找普通 JavaScript 的解决方案,再借助 web-sys 文档 将其翻译成 Rust。

阅读完本节后,你可能会发现 关于 web-syswasm-bindgen 指南章节 对于进一步阅读很有用。

启用功能

web_sys 的功能被大量分隔,以保持较低的编译时间。如果你想使用它的许多 API 之一,你可能需要启用一个功能才能使用它。

使用某一项所需的功能始终会在其文档中列出。例如,要使用 Element::get_bounding_client_rect,你需要启用 DomRect 和 Element 功能。

Leptos 已经启用了 一大堆 功能——如果所需的功能已在此处启用,则你无需在自己的应用程序中启用它。 否则,将其添加到你的 Cargo.toml 中,就可以开始了!

[dependencies.web-sys]
version = "0.3"
features = ["DomRect"]

但是,随着 JavaScript 标准的演进和 API 的编写,你可能希望使用技术上尚不完全稳定的浏览器功能,例如 WebGPUweb_sys 将遵循(可能经常更改的)标准,这意味着不保证稳定性。

为了使用它,你需要添加 RUSTFLAGS=--cfg=web_sys_unstable_apis 作为环境变量。 这可以通过将其添加到每个命令中来完成,也可以将其添加到存储库中的 .cargo/config.toml 中来完成。

作为命令的一部分:

RUSTFLAGS=--cfg=web_sys_unstable_apis cargo # ...

.cargo/config.toml 中:

[env]
RUSTFLAGS = "--cfg=web_sys_unstable_apis"

从你的 view 中访问原始 HtmlElement

框架的声明式风格意味着你不需要直接操作 DOM 节点来构建你的用户界面。 但是,在某些情况下,你希望直接访问表示视图一部分的底层 DOM 元素。本书关于 “不受控制的输入” 的部分介绍了如何使用 NodeRef 类型来实现此目的。

你可能会注意到 NodeRef::get 返回一个 Option<leptos::HtmlElement<T>>。这与 web_sys::HtmlElement 不是同一种类型,尽管它们是相关的。那么这个 HtmlElement<T> 类型是什么,你如何使用它呢?

概述

web_sys::HtmlElement 是浏览器 HTMLElement 接口的 Rust 等效项,该接口为所有 HTML 元素实现。它提供了对保证可用于任何 HTML 元素的一组最少函数和 API 的访问权限。然后,每个特定的 HTML 元素都有自己的元素类,该类实现额外的功能。 leptos::HtmlElement<T> 的目标是弥合视图中的元素与这些更具体的 JavaScript 类型之间的差距,以便你可以访问这些元素的特定功能。

这是通过使用 Rust Deref 特征来实现的,该特征允许你将 leptos::HtmlElement<T> 解引用为该特定元素类型 T 的适当类型的 JS 对象。
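
举一个小例子:通过 NodeRef 拿到 leptos::HtmlElement<Input>,再借助这条 Deref 链直接调用定义在 web_sys 类型上的方法(这个组件本身只是一个示意):

use leptos::{html::Input, *};

#[component]
fn FocusOnMount() -> impl IntoView {
    let input_ref = create_node_ref::<Input>();

    // 元素挂载之后,NodeRef 才会返回 Some
    create_effect(move |_| {
        if let Some(input) = input_ref.get() {
            // `focus()` 定义在 web_sys::HtmlElement 上,通过解引用链自动可用
            let _ = input.focus();
        }
    });

    view! { <input type="text" node_ref=input_ref/> }
}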

定义

理解这种关系涉及理解一些相关的特征。

以下内容简单地定义了 leptos::HtmlElement<T>T 中允许的类型,以及它如何链接到 web_sys

pub struct HtmlElement<El> where El: ElementDescriptor { /* ... */ }

pub trait ElementDescriptor: ElementDescriptorBounds { /* ... */ }

pub trait ElementDescriptorBounds: Debug {}
impl<El> ElementDescriptorBounds for El where El: Debug {}

// 这为 `leptos::{html, svg, math}::*` 中的每个元素实现
impl ElementDescriptor for leptos::html::Div { /* ... */ }

// 与此相同,解引用到相应的 `web_sys::Html*Element`
impl Deref for leptos::html::Div {
    type Target = web_sys::HtmlDivElement;
    // ...
}

以下内容来自 web_sys

impl Deref for web_sys::HtmlDivElement {
    type Target = web_sys::HtmlElement;
    // ...
}

impl Deref for web_sys::HtmlElement {
    type Target = web_sys::Element;
    // ...
}

impl Deref for web_sys::Element {
    type Target = web_sys::Node;
    // ...
}

impl Deref for web_sys::Node {
    type Target = web_sys::EventTarget;
    // ...
}

web_sys 使用长的解引用链来模拟 JavaScript 中使用的继承。 如果你在一种类型上找不到你要找的方法,请在解引用链中进一步查看。 leptos::html::* 类型都解引用到 web_sys::Html*Elementweb_sys::HtmlElement。 通过调用 element.method(),Rust 将根据需要自动添加更多解引用以调用正确的方法!

但是,有些方法具有相同的名称,例如 leptos::HtmlElement::styleweb_sys::HtmlElement::style。 在这种情况下,Rust 将选择需要最少解引用的方法,如果你直接从 NodeRef 获取元素,则为 leptos::HtmlElement::style。 如果你想改用 web_sys 方法,则可以使用 (*element).style() 手动解引用。

如果你想对从哪个类型调用方法进行更多控制,则为解引用链中所有类型都实现了 AsRef<T>,因此你可以明确说明你想要的类型。

另请参阅:wasm-bindgen 指南:web-sys 中的继承

克隆

web_sys::HtmlElement(以及扩展的 leptos::HtmlElement)实际上只存储对其影响的 HTML 元素的引用。 因此,调用 .clone() 实际上并不会创建一个新的 HTML 元素,它只是获取对同一个元素的另一个引用。 从其任何克隆中调用更改元素的方法将影响原始元素。

不幸的是,web_sys::HtmlElement 没有实现 Copy,因此你可能需要添加一堆克隆,尤其是在闭包中使用它时。 不过别担心,这些克隆很便宜!

转换

你可以通过 DerefAsRef 获取不太具体的类型,因此尽可能使用它们。 但是,如果你需要转换为更具体的类型(例如,从 EventTarget 转换为 HtmlInputElement),则需要使用 wasm_bindgen::JsCast 提供的方法(通过 web_sys::wasm_bindgen::JsCast 重新导出)。 你可能只需要 dyn_ref 方法。

use web_sys::wasm_bindgen::JsCast;
use web_sys::{HtmlInputElement, MouseEvent};

let on_click = |ev: MouseEvent| {
    // `dyn_ref` 返回 Option<&HtmlInputElement>,所以先把 EventTarget 绑定到局部变量上
    let target = ev.current_target().unwrap();
    let input: &HtmlInputElement = target.dyn_ref().unwrap();
    // 或者,只需使用现有的 `leptos::event_target_*` 函数
};

如果你好奇的话,请在此处查看 event_target_* 函数

leptos::HtmlElement

leptos::HtmlElement 添加了一些额外的便捷方法,以便于操作常用属性。 这些方法是为 构建器语法 构建的,因此它接受并返回 self。 你可以只执行 _ = element.clone().<method>() 来忽略它返回的元素 - 它仍然会影响原始元素,即使它看起来不像(请参阅上一节关于 克隆)!

以下是一些你可能想使用的方法,例如在事件监听器或 use: 指令中。

  • id覆盖 元素上的 id。
  • classes添加 元素的类。 你可以使用空格分隔的字符串指定多个类。 你还可以使用 class 有条件地添加单个类:不要使用此方法添加多个类。
  • attr:为元素设置一个 key=value 属性。
  • prop:在元素上设置一个属性:请参阅 此处属性和属性之间的区别
  • on:向元素添加事件监听器。 通过 leptos::ev::* 之一指定事件类型(它是所有小写的类型)。
  • child:将一个元素添加为该元素的最后一个子元素。
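
下面是一个在事件监听器中使用其中几个方法的小示意(div_ref、属性名和类名都只是假设的示例):

let div_ref = create_node_ref::<html::Div>();

let on_click = move |_| {
    if let Some(el) = div_ref.get() {
        // 构建器方法接受并返回 self;克隆只是复制引用,修改仍然作用于同一个元素
        _ = el.clone().attr("data-clicked", "true").classes("clicked highlighted");
    }
};

view! { <div node_ref=div_ref on:click=on_click>"Click me"</div> }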

也请查看 leptos::HtmlElement 的其余方法。如果它们都不符合你的要求,还可以查看 leptos-use。否则,你将不得不使用 web_sys API。

第一部分总结:客户端渲染

到目前为止,我们编写的所有内容几乎完全在浏览器中渲染。当我们使用 Trunk 创建应用程序时,它使用本地开发服务器提供服务。如果你构建它用于生产并部署它,它将由你正在使用的任何服务器或 CDN 提供服务。无论哪种情况,提供服务的都是一个 HTML 页面,其中包含

  1. 你的 Leptos 应用程序的 URL,该应用程序已编译为 WebAssembly (WASM)
  2. 用于初始化此 WASM blob 的 JavaScript 的 URL
  3. 一个空的 <body> 元素

当 JS 和 WASM 加载完成后,Leptos 会将你的应用程序渲染到 <body> 中。这意味着在 JS/WASM 加载并运行之前,屏幕上不会显示任何内容。这有一些缺点:

  1. 它会增加加载时间,因为在下载其他资源之前,用户的屏幕是空白的。
  2. 它不利于 SEO,因为加载时间更长,并且你提供的 HTML 没有有意义的内容。
  3. 对于由于某种原因无法加载 JS/WASM 的用户来说,它是坏掉的(例如,他们在火车上,在 WASM 完成加载之前刚进入隧道;他们使用的是不支持 WASM 的旧设备;他们由于某种原因关闭了 JavaScript 或 WASM;等等)

这些缺点适用于整个 Web 生态系统,尤其是 WASM 应用程序。

但是,根据你的项目要求,你可能可以接受这些限制。

如果你只是想部署你的客户端渲染网站,请跳到关于 “部署” 的章节——在那里,你将找到有关如何最好地部署你的 Leptos CSR 网站的说明。

但是,如果你想在 index.html 页面中返回的不仅仅是一个空的 <body> 标签,该怎么办?使用“服务器端渲染”!

关于这个主题可以(并且可能已经)写出整本书,但它的核心非常简单:在 SSR 中,你将返回一个初始 HTML 页面,该页面反映了你的应用程序或站点的实际起始状态,而不是返回一个空的 <body> 标签,这样在 JS/WASM 加载期间,以及直到它们加载完成,用户都可以访问纯 HTML 版本。

本书的第二部分,关于 Leptos SSR,将详细介绍这个主题!

第二部分:服务器端渲染

本书的第二部分是关于如何将你漂亮的 UI 变成全栈 Rust + Leptos 驱动的网站和应用程序。

正如你在上一章中读到的,使用客户端渲染的 Leptos 应用程序有一些限制——在接下来的几章中,你将看到我们如何克服这些限制,并从你的 Leptos 应用程序中获得最佳性能和 SEO。

Info

在服务器端使用 Leptos 时,你可以自由选择 Actix-web 或 Axum 集成——Leptos 的全部功能集在任何一个选项中都可用。

但是,如果你需要部署到与 WinterCG 兼容的运行时(如 Deno、Cloudflare 等),那么请选择 Axum 集成,因为此部署选项仅适用于服务器上的 Axum。最后,如果你想使用全栈 WASM/WASI 并部署到基于 WASM 的无服务器运行时,那么 Axum 也是你的首选。

注意:这是 Web 框架本身的限制,而不是 Leptos 的限制。

cargo-leptos 简介

到目前为止,我们只是在浏览器中运行代码,并使用 Trunk 来协调构建过程和运行本地开发过程。如果我们要添加服务器端渲染,我们还需要在服务器上运行我们的应用程序代码。这意味着我们需要构建两个独立的二进制文件,一个编译为本机代码并在服务器上运行,另一个编译为 WebAssembly (WASM) 并在用户的浏览器中运行。此外,服务器需要知道如何将此 WASM 版本(以及初始化它所需的 JavaScript)提供给浏览器。

这不是一项不可逾越的任务,但它增加了一些复杂性。为了方便起见和更好的开发体验,我们构建了 cargo-leptos 构建工具。cargo-leptos 基本上是为了协调你的应用程序的构建过程,在进行更改时处理服务器和客户端两部分的重新编译,并添加对 Tailwind、SASS 和测试等内容的内置支持。

入门非常简单。只需运行

cargo install cargo-leptos

然后要创建一个新项目,你可以运行以下任一命令

# 对于 Actix 模板
cargo leptos new --git leptos-rs/start

# 对于 Axum 模板
cargo leptos new --git leptos-rs/start-axum

确保你已添加 wasm32-unknown-unknown 目标,以便 Rust 可以将你的代码编译为 WebAssembly 以在浏览器中运行。

rustup target add wasm32-unknown-unknown

现在 cd 到你创建的目录并运行

cargo leptos watch

注意:请记住,Leptos 有一个 nightly feature,这些启动器都使用了它。如果你使用的是稳定的 Rust 编译器, 那没关系;只需从你的新 Cargo.toml 中删除每个 Leptos 依赖项中的 nightly feature,你就应该可以开始了。

你的应用程序编译完成后,你可以打开浏览器访问 http://localhost:3000 来查看它。

cargo-leptos 有很多额外的功能和内置工具。你可以 在其 README 了解更多信息。

但是,当你打开浏览器访问 localhost:3000 时,到底发生了什么呢?好吧,请继续阅读以找出答案。

页面加载的过程

在我们深入探讨之前,先进行高级概述可能会有所帮助。从你输入服务器端渲染的 Leptos 应用程序的 URL 到你点击按钮并增加计数器之间到底发生了什么?

我假设你在这里有一些关于互联网如何工作的基本知识,并且不会深入探讨 HTTP 或其他任何内容。相反,我将尝试展示 Leptos API 的不同部分如何映射到该过程的每个部分。

此描述还从你的应用程序正在为两个单独的目标编译的前提开始:

  1. 服务器版本,通常在 Actix 或 Axum 上运行,使用 Leptos ssr 功能编译
  2. 浏览器版本,使用 Leptos hydrate 功能编译为 WebAssembly (WASM)

cargo-leptos 构建工具用于协调为这两个不同目标编译应用程序的过程。

在服务器上

  • 你的浏览器向你的服务器发出对该 URL 的 GET 请求。此时,浏览器几乎不知道要渲染的页面。(“浏览器如何知道在哪里请求页面?”这个问题很有趣,但超出了本教程的范围!)
  • 服务器收到该请求,并检查它是否能够在该路径上处理 GET 请求。这就是 leptos_axumleptos_actix 中的 .leptos_routes() 方法的用途。当服务器启动时,这些方法会遍历你在 <Routes/> 中提供的路由结构,生成你的应用程序可以处理的所有可能路由的列表,并告诉服务器的路由器“对于这些路由中的每一个,如果你收到一个请求... 将其交给 Leptos。”
  • 服务器看到此路由可以由 Leptos 处理。因此它会渲染你的根组件(通常称为 <App/> 之类的东西),为它提供正在请求的 URL 以及其他一些数据,例如 HTTP 标头和请求元数据。
  • 你的应用程序在服务器上运行一次,构建将在该路由上渲染的组件树的 HTML 版本。(下一章将详细介绍 resource 和 <Suspense/>。)
  • 服务器返回此 HTML 页面,还注入有关如何加载已编译为 WASM 的应用程序版本的信息,以便它可以在浏览器中运行。

返回的 HTML 页面本质上是你的应用程序,“脱水”或“冻干”版本:它是 HTML,没有任何你添加的响应式或事件监听器。浏览器将通过添加响应式系统并将事件监听器附加到该服务器渲染的 HTML 来“重新水合”此 HTML 页面。因此,有两个功能标志适用于此过程的两部分:服务器端的 ssr 用于“服务器端渲染”,浏览器端的 hydrate 用于重新水合过程。
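
作为参考,上面提到的 .leptos_routes() 集成在 Axum 端大致长下面这样。这是一个高度简化的示意,省略了静态文件与错误处理;具体签名请以 leptos_axum 文档和官方模板为准,其中 App 假设是你在别处定义的根组件:

use axum::Router;
use leptos::*;
use leptos_axum::{generate_route_list, LeptosRoutes};

#[tokio::main]
async fn main() {
    let conf = get_configuration(None).await.unwrap();
    let leptos_options = conf.leptos_options;
    let addr = leptos_options.site_addr;

    // 遍历 <Routes/>,生成服务器可以处理的所有路由
    let routes = generate_route_list(App);

    let app = Router::new()
        // “对于这些路由中的每一个,把请求交给 Leptos 来渲染”
        .leptos_routes(&leptos_options, routes, App)
        .with_state(leptos_options);

    axum::Server::bind(&addr)
        .serve(app.into_make_service())
        .await
        .unwrap();
}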

在浏览器中

  • 浏览器从服务器接收此 HTML 页面。它会立即再次向服务器发起请求,开始加载运行应用程序的交互式客户端版本所需的 JS 和 WASM。
  • 同时,它会渲染 HTML 版本。
  • 当 WASM 版本加载完成后,它会执行与服务器相同的路由匹配过程。因为 <Routes/> 组件在服务器和客户端上是相同的,所以浏览器版本将读取 URL 并渲染与服务器已返回的页面相同的页面。
  • 在这个初始的“水合”阶段,你的应用程序的 WASM 版本不会重新创建构成你的应用程序的 DOM 节点。相反,它会遍历现有的 HTML 树,“拾取”现有元素并添加必要的交互性。

请注意,这里有一些权衡。在此水合过程完成之前,该页面将看起来是交互式的,但实际上不会响应交互。例如,如果你有一个计数器按钮,并在 WASM 加载完成之前点击它,则计数将不会增加,因为必要的事件监听器和响应式尚未添加。我们将在后面的章节中介绍一些构建“优雅降级”的方法。

客户端导航

下一步非常重要。想象一下,用户现在点击一个链接来导航到你的应用程序中的另一个页面。

浏览器将_不会_再次往返服务器,重新加载整个页面,就像它在纯 HTML 页面之间导航,或者使用服务器端渲染(例如使用 PHP)但没有客户端部分的应用程序时那样。

相反,你的应用程序的 WASM 版本将在浏览器中加载新页面,而无需从服务器请求另一个页面。本质上,你的应用程序会将自身从服务器加载的“多页应用程序”升级为浏览器渲染的“单页应用程序”。这产生了两种技术的最佳组合:由于服务器端渲染的 HTML,初始加载时间很快,并且由于客户端路由,辅助导航很快。

在以下章节中将描述的一些内容——例如服务器函数、resource 和 <Suspense/> 之间的交互——可能看起来过于复杂。你可能会问自己,“如果我的页面在服务器上被渲染为 HTML,为什么我不能在服务器上 .await 它?如果我可以直接在服务器函数中调用库 X,为什么我不能在我的组件中调用它?”原因很简单:为了实现从服务器端渲染到客户端渲染的升级,你的应用程序中的所有内容都必须能够在服务器或浏览器上运行。

当然,这不是创建网站或 Web 框架的唯一方法。但它是_最常见_的方法,而且我们碰巧认为这是一种很好的方法,可以为你的用户创造最流畅的体验。

异步渲染和 SSR“模式”

服务器端渲染仅使用同步数据的页面非常简单:你只需遍历组件树,将每个元素渲染为 HTML 字符串。但这是一个很大的警告:它没有回答我们应该如何处理包含异步数据的页面,即在客户端的 <Suspense/> 节点下渲染的内容。

当页面加载它需要渲染的异步数据时,我们应该怎么做?我们应该等待所有异步数据加载完成,然后一次渲染所有内容吗?(我们称之为“异步”渲染)我们应该走完全相反的方向,立即将我们拥有的 HTML 发送给客户端,并让客户端加载资源并填充它们吗?(我们称之为“同步”渲染)或者是否有一些中间解决方案可以以某种方式同时胜过它们?(提示:有。)

如果你曾经在线听过流媒体音乐或观看过视频,我确信你知道 HTTP 支持流式传输,允许单个连接一个接一个地发送数据块,而无需等待完整内容加载完成。你可能没有意识到浏览器也非常擅长渲染部分 HTML 页面。综上所述,这意味着你可以通过流式传输 HTML 来增强用户的体验:这是 Leptos 开箱即用支持的,根本无需配置。实际上,流式传输 HTML 的方法不止一种:你可以像视频帧一样按顺序流式传输构成你页面的 HTML 块,或者你可以流式传输它们... 好吧,乱序。

让我详细说明我的意思。

Leptos 支持所有主要的渲染包含异步数据的 HTML 的方法:

  1. 同步渲染
  2. 异步渲染
  3. 顺序流式传输
  4. 乱序流式传输(以及部分阻塞的变体)

同步渲染

  1. 同步:为任何 <Suspense/> 提供一个包含 fallback 的 HTML 外壳。使用 create_local_resource 在客户端加载数据,并在加载资源后替换 fallback
  • 优点:应用程序外壳出现得非常快:TTFB(首字节时间)很棒。
  • 缺点
    • 资源加载相对较慢;你需要等待 JS + WASM 加载完成后才能发出请求。
    • 无法在 <title> 或其他 <meta> 标签中包含来自异步资源的数据,这会损害 SEO 和社交媒体链接预览等内容。

如果你使用的是服务器端渲染,从性能的角度来看,同步模式几乎从来都不是你真正想要的。这是因为它错过了一个重要的优化。如果你在服务器端渲染期间加载异步资源,你实际上可以在服务器上开始加载数据。与其等待客户端接收 HTML 响应、加载其 JS + WASM、再意识到它需要资源并开始加载它们,不如在客户端首次发出请求时就在服务器上开始加载资源。从这个意义上说,在服务器端渲染期间,异步资源就像一个在服务器上开始加载并在客户端解析的 Future。只要资源实际上是可序列化的,这将始终导致更快的总加载时间。

这就是为什么 create_resource 默认要求资源数据可序列化,以及为什么你需要为任何不可序列化的异步数据显式使用 create_local_resource,因此这些数据只能在浏览器本身中加载。当你能够创建可序列化资源时创建本地资源始终是一种反优化。
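
用一个小示意对比两者(fetch_user 只是一个假设的加载函数;注释中的 load_chart_handle 同样是假设的):

use leptos::*;

// 假设的异步加载函数,返回可序列化的 String
async fn fetch_user(id: u32) -> String {
    format!("user {id}")
}

#[component]
fn UserProfile(user_id: ReadSignal<u32>) -> impl IntoView {
    // 可序列化的数据:SSR 期间可以在服务器上开始加载,并随响应流式传输给客户端
    let user = create_resource(move || user_id.get(), fetch_user);

    // 如果返回值不可序列化(例如持有某个 JS 对象的句柄),
    // 就必须改用 create_local_resource,数据只会在浏览器中加载:
    // let chart = create_local_resource(move || user_id.get(), load_chart_handle);

    view! {
        <Suspense fallback=|| "Loading...">
            {move || user.get().map(|name| view! { <p>{name}</p> })}
        </Suspense>
    }
}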

异步渲染

  1. async:在服务器上加载所有资源。等待所有数据加载完成,然后一次性渲染 HTML。
  • 优点:更好地处理元标签(因为你甚至在渲染 <head> 之前就知道异步数据)。由于异步资源开始在服务器上加载,因此完成加载速度比同步快。
  • 缺点:加载时间/TTFB 较慢:你需要等待所有异步资源加载完成后才能在客户端上显示任何内容。在所有内容加载完成之前,页面完全空白。

顺序流式传输

  1. 顺序流式传输:遍历组件树,渲染 HTML,直到遇到 <Suspense/>。将到目前为止你得到的所有 HTML 作为流中的一个块发送,等待 <Suspense/> 下访问的所有资源加载完成,然后将其渲染为 HTML 并继续遍历,直到遇到另一个 <Suspense/> 或页面末尾。
  • 优点:在数据准备好之前,至少显示_一些东西_,而不是空白屏幕。
  • 缺点
    • 加载外壳的速度比同步渲染(或乱序流式传输)慢,因为它需要在每个 <Suspense/> 处暂停。
    • 无法显示 <Suspense/> 的回退状态。
    • 在整个页面加载完成之前无法开始水合,因此页面的早期部分在暂停的块加载完成之前将不会是交互式的。

乱序流式传输

  1. 乱序流式传输:与同步渲染类似,为任何 <Suspense/> 提供一个包含 fallback 的 HTML 外壳。但在服务器上加载数据,并在解析时将其流式传输到客户端,并为 <Suspense/> 节点流式传输 HTML,该节点将被交换以替换回退内容。
  • 优点:结合了同步和**async**的优点。
    • 由于它立即发送整个同步外壳,因此初始响应/TTFB 很快
    • 由于资源开始在服务器上加载,因此总时间很快。
    • 能够显示回退加载状态并动态替换它,而不是为未加载的数据显示空白部分。
  • 缺点:需要启用 JavaScript 才能使暂停的片段按正确顺序显示。(这小段 JS 与包含渲染的 <Suspense/> 片段的 <template> 标签一起流式传输到 <script> 标签中,因此它不需要加载任何额外的 JS 文件。)
  1. 部分阻塞流式传输:当你页面上有多个独立的 <Suspense/> 组件时,“部分阻塞”流式传输很有用。通过在路由上设置 ssr=SsrMode::PartiallyBlocked 并根据视图中的阻塞资源来触发它。如果 <Suspense/> 组件之一读取一个或多个“阻塞资源”(见下文),则不会发送回退内容;相反,服务器将等待该 <Suspense/> 解析完成,然后在服务器上将回退内容替换为已解析的片段,这意味着它包含在初始 HTML 响应中,即使禁用了 JavaScript 或不支持 JavaScript 也会出现。其他 <Suspense/> 以乱序流式传输,类似于 SsrMode::OutOfOrder 默认值。

当你页面上有多个 <Suspense/>,并且一个比另一个更重要时,这很有用:想想一篇博客文章和评论,或者产品信息和评论。如果只有一个 <Suspense/>,或者每个 <Suspense/> 都从阻塞资源中读取,则它没有用处。在这些情况下,它是 async 渲染的一种较慢的形式。

  • 优点:如果在用户的设备上禁用了 JavaScript 或不支持 JavaScript,则有效。
  • 缺点
    • 初始响应时间比乱序慢。
    • 由于服务器上的额外工作,总体响应略有延迟。
    • 不显示回退状态。

使用 SSR 模式

因为它提供了性能特征的最佳组合,所以 Leptos 默认使用乱序流式传输。但是选择这些不同的模式真的很简单。你可以通过在你的一个或多个 <Route/> 组件上添加 ssr 属性来实现,就像在 ssr_modes 示例 中一样。

<Routes>
	// 我们将使用乱序流式传输和 `<Suspense/>` 加载主页
	<Route path="" view=HomePage/>

	// 我们将使用异步渲染加载帖子,以便它们可以在加载数据*后*设置
	// 标题和元数据
	<Route
		path="/post/:id"
		view=Post
		ssr=SsrMode::Async
	/>
</Routes>

对于包含多个嵌套路由的路径,将使用最严格的模式:即,如果即使单个嵌套路由请求 async 渲染,整个初始请求也将以 async 方式渲染。async 是最严格的要求,其次是顺序,然后是乱序。(如果你仔细想想,这可能是合理的。)

阻塞资源

任何晚于 0.2.5 的 Leptos 版本(即 git main 和 0.3.x 或更高版本)都引入了一个新的资源原语 create_blocking_resource。阻塞资源仍然像 Rust 中的任何其他 async/.await 一样异步加载;它不会阻塞服务器线程或任何东西。相反,在 <Suspense/> 下读取阻塞资源会阻止 HTML 返回任何内容,包括其初始同步外壳,直到该 <Suspense/> 解析完成。

现在从性能的角度来看,这并不理想。你的页面的任何同步外壳都不会加载,直到该资源准备就绪。但是,不渲染任何内容意味着你可以执行以下操作,例如在实际 HTML 的 <head> 中设置 <title><meta> 标签。这听起来很像 async 渲染,但有一个很大的区别:如果你有多个 <Suspense/> 部分,你可以阻塞其中_一个_,但仍然渲染一个占位符,然后流式传输另一个。

例如,想想一篇博客文章。为了 SEO 和社交分享,我肯定希望我的博客文章的标题和元数据出现在初始 HTML <head> 中。但我真的不关心评论是否已经加载;我想尽可能延迟加载它们。

使用阻塞资源,我可以执行以下操作:

#[component]
pub fn BlogPost() -> impl IntoView {
	let post_data = create_blocking_resource(/* 加载博客文章 */);
	let comments_data = create_resource(/* 加载博客评论 */);
	view! {
		<Suspense fallback=|| ()>
			{move || {
				post_data.with(|data| {
					view! {
						<Title text=data.title/>
						<Meta name="description" content=data.excerpt/>
						<article>
							/* 渲染帖子内容 */
						</article>
					}
				})
			}}
		</Suspense>
		<Suspense fallback=|| "Loading comments...">
			/* 在这里渲染评论数据 */
		</Suspense>
	}
}

第一个 <Suspense/>,包含博客文章的正文,将阻塞我的 HTML 流,因为它从阻塞资源中读取。元标签和其他等待阻塞资源的头部元素将在发送流之前渲染。

与以下路由定义相结合,该定义使用 SsrMode::PartiallyBlocked,阻塞资源将在服务器端完全渲染,从而使禁用 WebAssembly 或 JavaScript 的用户可以访问它。

<Routes>
	// 我们将使用乱序流式传输和 `<Suspense/>` 加载主页
	<Route path="" view=HomePage/>

	// 我们将使用异步渲染加载帖子,以便它们可以在加载数据*后*设置
	// 标题和元数据
	<Route
		path="/post/:id"
		view=Post
		ssr=SsrMode::PartiallyBlocked
	/>
</Routes>

第二个 <Suspense/>,包含评论,不会阻塞流。阻塞资源给了我优化页面 SEO 和用户体验所需的功能和粒度。

水合错误(以及如何避免它们)

一个思想实验

让我们尝试一个实验来测试你的直觉。打开你使用 cargo-leptos 进行服务器端渲染的应用程序。(如果你到目前为止一直在使用 trunk 来玩示例,为了这个练习,请去克隆一个 cargo-leptos 模板。)

在你的根组件的某个地方放置一个日志。(我通常称之为 <App/>,但任何东西都可以。)

#[component]
pub fn App() -> impl IntoView {
	logging::log!("我在哪里运行?");
	// ... 任何内容
}

让我们启动它

cargo leptos watch

你希望 我在哪里运行? 记录在哪里?

  • 在你运行服务器的命令行中?
  • 在你加载页面时的浏览器控制台中?
  • 两者都不是?
  • 两者都是?

试一试。

...

...

...

好了,下面就当已经做过剧透预警了。

你当然会注意到它在两个地方都记录了,假设一切按计划进行。实际上,它在服务器上记录了两次——第一次是在初始服务器启动期间,当 Leptos 渲染你的应用程序一次以提取路由树时,然后在你发出请求时第二次。每次重新加载页面时,我在哪里运行? 应该在服务器上记录一次,在客户端上记录一次。

如果你回想一下最后几节中的描述,希望这很有道理。你的应用程序在服务器上运行一次,它在那里构建一个 HTML 树,然后发送到客户端。在此初始渲染期间,我在哪里运行? 在服务器上记录。

一旦 WASM 二进制文件在浏览器中加载完成,你的应用程序将第二次运行,遍历同一个用户界面树并添加交互性。

这听起来像是一种浪费吗?从某种意义上说,确实如此。但减少这种浪费是一个真正困难的问题。这就是像 Qwik 这样的一些 JS 框架旨在解决的问题,尽管现在判断它与其他方法相比是否能带来净性能提升还为时过早。

错误的可能性

好的,希望所有这些都有意义。但它与本章标题“水合错误(以及如何避免它们)”有什么关系?

请记住,应用程序需要在服务器和客户端上运行。这会产生几组不同的潜在问题,你需要知道如何避免它们。

服务器和客户端代码之间的不匹配

创建错误的一种方法是在服务器发送的 HTML 和客户端渲染的内容之间创建不匹配。我认为这样做是相当困难的(至少从我收到的错误报告来看)。但想象一下,我做了这样的事情

#[component]
pub fn App() -> impl IntoView {
    let data = if cfg!(target_arch = "wasm32") {
        vec![0, 1, 2]
    } else {
        vec![]
    };
    data.into_iter()
        .map(|value| view! { <span>{value}</span> })
        .collect_view()
}

换句话说,如果它被编译为 WASM,它有三个项目;否则它是空的。

当我在浏览器中加载页面时,我什么也看不到。如果我打开控制台,我会看到一堆警告:

找不到 id 为 0-3 的元素,忽略它进行水合
找不到 id 为 0-4 的元素,忽略它进行水合
找不到 id 为 0-5 的元素,忽略它进行水合
找不到 id 为 _0-6c 的组件,忽略它进行水合
找不到 id 为 _0-6o 的组件,忽略它进行水合

在浏览器中运行的你的应用程序的 WASM 版本希望找到三个项目;但 HTML 中没有。

解决方案

你很少会故意这样做,但它可能会在服务器和浏览器中以某种方式运行不同的逻辑时发生。如果你看到这样的警告,并且你认为这不是你的错,那么更有可能是 <Suspense/> 或其他东西的错误。请随意在 GitHub 上打开 问题讨论 以获得帮助。

并非所有客户端代码都可以在服务器上运行

想象一下,你很高兴地导入了一个像 gloo-net 这样的依赖项,你已经习惯于在浏览器中使用它来发出请求,并在服务器端渲染的应用程序中的 create_resource 中使用它。

你可能会立即看到可怕的消息

panicked at 'cannot call wasm-bindgen imported functions on non-wasm targets'

哦,哦。

但当然,这很有道理。我们刚刚说过,你的应用程序需要在客户端和服务器上运行。

解决方案

有几种方法可以避免这种情况:

  1. 仅使用可以在服务器和客户端上运行的库。例如,reqwest 适用于在两种环境中发出 HTTP 请求。
  2. 在服务器和客户端上使用不同的库,并使用 #[cfg] 宏来区分它们。(点击这里查看示例。)
  3. 将仅限客户端的代码包装在 create_effect 中。因为 create_effect 仅在客户端运行,所以这可以是访问初始渲染不需要的浏览器 API 的有效方法。

例如,假设我想在信号发生变化时将某些内容存储在浏览器的 localStorage 中。

#[component]
pub fn App() -> impl IntoView {
    use gloo_storage::Storage;
	let storage = gloo_storage::LocalStorage::raw();
	logging::log!("{storage:?}");
}

这会恐慌,因为我无法在服务器端渲染期间访问 LocalStorage

但如果我把它包装在一个效果中...

#[component]
pub fn App() -> impl IntoView {
    use gloo_storage::Storage;
    create_effect(move |_| {
        let storage = gloo_storage::LocalStorage::raw();
		logging::log!("{storage:?}");
    });
}

就好了!这将在服务器上正确渲染,忽略仅限客户端的代码,然后在浏览器上访问存储并记录一条消息。

并非所有服务器代码都可以在客户端上运行

在浏览器中运行的 WebAssembly 是一个相当有限的环境。你无法访问文件系统或标准库可能习惯使用的许多其他东西。并非每个 crate 都可以编译为 WASM,更不用说在 WASM 环境中运行了。

特别是,你有时会看到有关 crate mio 的错误或 core 中缺少的东西。这通常表明你正在尝试将某些不能编译为 WASM 的内容编译为 WASM。如果你要添加仅限服务器的依赖项,你将希望在你的 Cargo.toml 中将它们标记为 optional = true,然后在 ssr 功能定义中启用它们。(查看其中一个模板 Cargo.toml 文件以查看更多详细信息。)

你可以使用 create_effect 来指定某些内容只能在客户端运行,而不能在服务器上运行。有没有办法指定某些内容只能在服务器上运行,而不能在客户端上运行?

实际上,有。下一章将详细介绍服务器函数的主题。(同时,你可以 在此处 查看它们的文档。)

与服务器协同工作

上一节描述了服务器端渲染的过程,使用服务器生成一个 HTML 版本的页面,该页面将在浏览器中变得有交互性。到目前为止,一切都“同构(isomorphic)”;换句话说,你的应用程序在客户端和服务器上具有“相同的(iso)形状(morphe)”。

译注: 引号中的内容比较难以说明,参看原文: So far, everything has been “isomorphic”; in other words, your app has had the “same (iso) shape (morphe)” on the client and the server.

但服务器的功能远不止渲染 HTML!事实上,服务器可以做很多你的浏览器_不能_做的事情,比如从 SQL 数据库读取和写入数据。

如果你习惯于构建 JavaScript 前端应用程序,你可能习惯于调用某种 REST API 来完成这种服务器端工作。如果你习惯于使用 PHP 或 Python 或 Ruby(或 Java 或 C# 或...)构建网站,那么这种服务器端工作是你的主要工作,而客户端交互性往往是事后才想到的。

使用 Leptos,你两者都可以做到:不仅使用相同的语言,不仅共享相同的类型,甚至在同一个文件中!

本节将讨论如何构建应用程序中独特的服务器端部分。

服务器函数

如果你正在创建的不仅仅是一个玩具应用程序,你的代码有时就需要在服务器上运行:从仅在服务器上可用的数据库读取或写入数据、使用你不想发送到客户端的库运行昂贵的计算、访问需要从服务器而不是客户端调用的 API(出于 CORS 的原因,或者因为它需要一个存储在服务器上、绝对不应该发送到用户浏览器的 API 密钥)。

传统上,这是通过将服务器和客户端代码分离,并设置诸如 REST API 或 GraphQL API 之类的东西来允许你的客户端获取和修改服务器上的数据来完成的。这很好,但它要求你在多个不同的地方编写和维护你的代码(用于获取的客户端代码,用于运行的服务器端函数),以及创建第三件事来管理,即两者之间的 API 契约。

Leptos 是众多引入服务器函数概念的现代框架之一。服务器函数有两个关键特征:

  1. 服务器函数与你的组件代码位于同一位置,因此你可以按功能组织你的工作,而不是按技术组织。例如,你可能有一个“暗模式”功能,应该在多个会话中保留用户的暗/亮模式首选项,并在服务器端渲染期间应用,这样就不会出现闪烁。这需要一个需要在客户端交互的组件,以及一些需要在服务器上完成的工作(设置一个 cookie,甚至可能将用户存储在数据库中)。传统上,此功能最终可能会分布在代码中的两个不同位置,一个在你的“前端”,一个在你的“后端”。使用服务器函数,你可能只会在一个 dark_mode.rs 中同时编写它们,然后忘记它。
  2. 服务器函数是同构的,即它们可以从服务器或浏览器调用。这是通过为两个平台生成不同的代码来完成的。在服务器上,服务器函数只是运行。在浏览器中,服务器函数的正文被替换为一个存根,该存根实际上向服务器发出一个获取请求,将参数序列化到请求中,并从响应中反序列化返回值。但在任何一端,都可以简单地调用该函数:你可以创建一个将数据写入数据库的 add_todo 函数,并简单地从浏览器中按钮上的点击处理程序中调用它!

使用服务器函数

其实,我挺喜欢这个例子的。它会是什么样子的呢?实际上,它非常简单。

// todo.rs

#[server(AddTodo, "/api")]
pub async fn add_todo(title: String) -> Result<(), ServerFnError> {
    let mut conn = db().await?;

    match sqlx::query("INSERT INTO todos (title, completed) VALUES ($1, false)")
        .bind(title)
        .execute(&mut conn)
        .await
    {
        Ok(_row) => Ok(()),
        Err(e) => Err(ServerFnError::ServerError(e.to_string())),
    }
}

#[component]
pub fn BusyButton() -> impl IntoView {
	view! {
        <button on:click=move |_| {
            spawn_local(async {
                add_todo("So much to do!".to_string()).await;
            });
        }>
            "Add Todo"
        </button>
	}
}

你会立即注意到这里有几件事:

  • 服务器函数可以使用仅限服务器的依赖项,例如 sqlx,并且可以访问仅限服务器的资源,例如我们的数据库。
  • 服务器函数是 async 的。即使它们只在服务器上执行同步工作,函数签名仍然需要是 async 的,因为从浏览器调用它们必须是异步的。
  • 服务器函数返回 Result<T, ServerFnError>。同样,即使它们只在服务器上执行不会失败的工作,也是如此,因为 ServerFnError 的变体包括在发出网络请求的过程中可能出错的各种情况。
  • 服务器函数可以从客户端调用。看看我们的点击处理程序。这段代码会在客户端运行。但它可以调用 add_todo 函数(使用 spawn_local 来运行 Future),就像它是一个普通的异步函数一样:
move |_| {
	spawn_local(async {
		add_todo("So much to do!".to_string()).await;
	});
}
  • 服务器函数是用 fn 定义的顶级函数。与事件监听器、派生信号和 Leptos 中的大多数其他内容不同,它们不是闭包!作为 fn 调用,它们无法访问你的应用程序的响应式状态或任何未作为参数传入的内容。再说一遍,这很有道理:当你向服务器发出请求时,服务器无法访问客户端状态,除非你显式发送它。(否则,我们必须序列化整个响应式系统,并在每次请求时通过网络发送它,这虽然可以为经典 ASP 提供一段时间服务,但这是一个非常糟糕的主意。)
  • 服务器函数的参数和返回值都需要使用 serde 进行序列化。同样,希望这很有道理:虽然函数参数通常不需要序列化,但从浏览器调用服务器函数意味着序列化参数并通过 HTTP 发送它们。

关于定义服务器函数的方式,还有几点需要注意。

  • 服务器函数是通过使用 #[server] 来注释顶级函数来创建的,该函数可以在任何地方定义。
  • 我们为宏提供了一个类型名称。类型名称在内部用作一个容器来保存、序列化和反序列化参数。
  • 我们为宏提供了一个路径。这是我们将在服务器上挂载服务器函数处理程序的路径的前缀。(请参阅 ActixAxum 的示例。)
  • 你需要将 serde 作为依赖项,并启用 derive 功能,以便宏正常工作。你可以使用 cargo add serde --features=derive 轻松地将其添加到 Cargo.toml 中。

服务器函数 URL 前缀

你可以为服务器函数定义一个特定的 URL 前缀。这是通过为 #[server] 宏提供一个可选的第二个参数来完成的。如果未指定,URL 前缀默认为 /api。以下是一些示例:

#[server(AddTodo)]         // 将使用默认的 URL 前缀 `/api`
#[server(AddTodo, "/foo")] // 将使用 URL 前缀 `/foo`

服务器函数编码

默认情况下,服务器函数调用是一个 POST 请求,它将参数作为 URL 编码的表单数据序列化到请求体中。(这意味着服务器函数可以从 HTML 表单中调用,我们将在以后的章节中看到。)但也支持其他几种方法。我们可以选择为 #[server] 宏提供另一个参数来指定备用编码:

#[server(AddTodo, "/api", "Url")]
#[server(AddTodo, "/api", "GetJson")]
#[server(AddTodo, "/api", "Cbor")]
#[server(AddTodo, "/api", "GetCbor")]

这四个选项使用 HTTP 动词和编码方法的不同组合:

名称 | 方法 | 请求编码 | 响应编码
Url(默认) | POST | URL 编码 | JSON
GetJson | GET | URL 编码 | JSON
Cbor | POST | CBOR | CBOR
GetCbor | GET | URL 编码 | CBOR

换句话说,你有两个选择:

  • GET 还是 POST?这对浏览器或 CDN 缓存等内容有影响;虽然 POST 请求不应该被缓存,但 GET 请求可以被缓存。
  • 纯文本(使用 URL/表单编码发送的参数,作为 JSON 发送的结果)还是二进制格式(CBOR,编码为 base64 字符串)?

但请记住:Leptos 将为你处理此编码和解码的所有细节。当你使用服务器函数时,它看起来就像调用任何其他异步函数一样!

为什么不用 PUTDELETE?为什么用 URL/表单编码,而不是 JSON?

这些都是合理的问题。许多 Web 都是建立在 REST API 模式之上的,这些模式鼓励使用语义 HTTP 方法(如 DELETE)从数据库中删除项目,并且许多开发人员习惯于以 JSON 格式向 API 发送数据。

我们默认使用带有 URL 编码数据的 POSTGET 的原因是对 <form> 的支持。无论好坏,HTML 表单都不支持 PUTDELETE,也不支持发送 JSON。这意味着,如果你使用除 GETPOST 请求之外的任何带有 URL 编码数据的请求,它只有在 WASM 加载完成后才能工作。正如我们将在后面的章节中看到的那样,这并不总是一个好主意。

支持 CBOR 编码是出于历史原因;早期版本的服务器函数使用一种 URL 编码,该编码不支持嵌套对象(如结构体或向量)作为服务器函数的参数,而 CBOR 则支持。但请注意,CBOR 形式遇到了与 PUTDELETE 或 JSON 相同的问题:如果你的应用程序的 WASM 版本不可用,它们将无法优雅地降级。

服务器函数端点路径

默认情况下,将生成一个唯一的路径。你可以选择定义一个特定的端点路径,用于 URL 中。这是通过为 #[server] 宏提供一个可选的第四个参数来完成的。Leptos 将通过连接 URL 前缀(第二个参数)和端点路径(第四个参数)来生成完整路径。 例如,

#[server(MyServerFnType, "/api", "Url", "hello")]

将在 /api/hello 处生成一个接受 POST 请求的服务器函数端点。

我可以将相同的服务器函数端点路径与多个编码一起使用吗?

不可以。不同的服务器函数必须具有唯一的路径。#[server] 宏会自动生成唯一的路径,但如果你选择手动指定完整路径,则需要小心,因为服务器会按路径查找服务器函数。

关于安全性的重要说明

服务器函数是一项很酷的技术,但记住这一点非常重要。**服务器函数不是魔法;它们是定义公共 API 的语法糖。**服务器函数的_主体_永远不会公开;它只是你的服务器二进制文件的一部分。但是服务器函数是一个公开可访问的 API 端点,它的返回值只是一个 JSON 或类似的 blob。除非它是公开的,或者你已实施了适当的安全程序,否则不要从服务器函数中返回信息。这些程序可能包括对传入请求进行身份验证、确保适当的加密、限制访问速率等等。

将服务器函数与 Leptos 集成

到目前为止,我所说的一切实际上都与框架无关。(事实上,Leptos 服务器函数 crate 也已经集成到 Dioxus 中!)服务器函数只是一种定义类似函数的 RPC 调用的方法,它依赖于 HTTP 请求和 URL 编码等 Web 标准。

但在某种程度上,它们也提供了我们迄今为止故事中最后缺失的原语。因为服务器函数只是一个普通的 Rust 异步函数,所以它与我们之前讨论过的异步 Leptos 原语可以完美集成。因此,你可以轻松地将你的服务器函数与应用程序的其余部分集成(这个列表后面给出一个简短的示意):

  • 创建调用服务器函数以从服务器加载数据的 resource
  • <Suspense/><Transition/> 下读取这些资源,以便在数据加载时启用流式 SSR 和回退状态。
  • 创建调用服务器函数以在服务器上修改数据的 action
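
结合上一节的 AddTodo 例子,这三点放在一起大致是下面这个样子(get_todos 是一个假设的、返回待办列表的服务器函数):

#[component]
pub fn TodoApp() -> impl IntoView {
    // 一个 action,包装上一节定义的 AddTodo 服务器函数
    let add_todo = create_server_action::<AddTodo>();

    // 一个 resource,调用(假设的)get_todos 服务器函数来加载数据;
    // 以 add_todo.version() 作为源信号,这样每次提交成功后都会重新加载
    let todos = create_resource(move || add_todo.version().get(), |_| get_todos());

    view! {
        <button on:click=move |_| {
            add_todo.dispatch(AddTodo { title: "So much to do!".to_string() });
        }>
            "Add Todo"
        </button>
        <Transition fallback=move || view! { <p>"Loading..."</p> }>
            {move || todos.get().map(|todos| format!("{todos:?}"))}
        </Transition>
    }
}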

本书的最后一节将通过介绍使用渐进增强的 HTML 表单来运行这些服务器操作的模式来使这一点更加具体。

但在接下来的几章中,我们将实际看一下你可能想用你的服务器函数做什么的一些细节,包括与 Actix 和 Axum 服务器框架提供的强大提取器集成的最佳方法。

提取器

我们在上一章中看到的服务器函数展示了如何在服务器上运行代码,并将其与你在浏览器中渲染的用户界面集成。但它们并没有展示太多关于如何充分利用你的服务器的潜力。

服务器框架

我们称 Leptos 为“全栈”框架,但“全栈”始终是一个不当用词(毕竟,它从不意味着从浏览器到你所在电力公司的所有内容)。对我们来说,“全栈”意味着你的 Leptos 应用程序可以在浏览器中运行,可以在服务器上运行,并且可以将两者集成在一起,将每个环境中可用的独特功能融合在一起;正如我们到目前为止在本书中看到的那样,浏览器上的按钮点击可以驱动服务器上的数据库读取,两者都写在同一个 Rust 模块中。但 Leptos 本身不提供服务器(或数据库、操作系统、固件或电缆...)

相反,Leptos 为两个最流行的 Rust Web 服务器框架提供了集成,分别是 Actix Web (leptos_actix) 和 Axum (leptos_axum)。我们已经构建了与每个服务器的路由器的集成,这样你就可以使用 .leptos_routes() 简单地将你的 Leptos 应用程序插入到现有的服务器中,并轻松处理服务器函数调用。

如果你还没有看过我们的 ActixAxum 模板,现在是查看它们的好时机。

使用提取器

Actix 和 Axum 处理程序都建立在相同的强大的提取器理念之上。提取器从 HTTP 请求中“提取”类型化数据,让你可以轻松访问特定于服务器的数据。

Leptos 提供了 extract 帮助程序函数,让你可以使用与每个框架的处理程序非常相似的便捷语法,直接在你的服务器函数中使用这些提取器。

Actix 提取器

leptos_actix 中的 extract 函数 接受一个处理程序函数作为参数。处理程序遵循与 Actix 处理程序类似的规则:它是一个异步函数,接收将从请求中提取的参数并返回一些值。处理程序函数接收提取的数据作为其参数,并可以在 async move 块的主体内对它们进行进一步的 async 工作。它将你返回的任何值返回到服务器函数中。

use serde::Deserialize;

#[derive(Deserialize, Debug)]
struct MyQuery {
    foo: String,
}

#[server]
pub async fn actix_extract() -> Result<String, ServerFnError> {
    use actix_web::dev::ConnectionInfo;
    use actix_web::web::{Data, Query};
    use leptos_actix::extract;

    let (Query(search), connection): (Query<MyQuery>, ConnectionInfo) = extract().await?;
    Ok(format!("search = {search:?}\nconnection = {connection:?}",))
}

Axum 提取器

leptos_axum::extract 函数的语法非常相似。

use serde::Deserialize;

#[derive(Deserialize, Debug)]
struct MyQuery {
    foo: String,
}

#[server]
pub async fn axum_extract() -> Result<String, ServerFnError> {
    use axum::{extract::Query, http::Method};
    use leptos_axum::extract;

    let (method, query): (Method, Query<MyQuery>) = extract().await?;

    Ok(format!("{method:?} and {query:?}"))
}

这些是从服务器访问基本数据的相对简单的示例。但你可以使用提取器来访问标头、cookie、数据库连接池等内容,使用完全相同的 extract() 模式。

Axum extract 函数仅支持状态为 () 的提取器。如果你需要一个使用 State 的提取器,你应该使用 extract_with_state。这需要你提供状态。你可以通过使用 Axum FromRef 模式扩展现有的 LeptosOptions 状态来做到这一点,该模式在渲染和服务器函数期间使用自定义处理程序将状态作为上下文提供。

use axum::extract::FromRef;

/// 派生 FromRef 以允许状态中的多个项目,使用 Axum 的
/// SubStates 模式。
#[derive(FromRef, Debug, Clone)]
pub struct AppState{
    pub leptos_options: LeptosOptions,
    pub pool: SqlitePool
}

点击这里查看在自定义处理程序中提供上下文的示例

Axum 状态

Axum 的依赖注入的典型模式是提供一个 State,然后可以在你的路由处理程序中提取它。Leptos 通过上下文提供了自己的依赖注入方法。上下文通常可以用来代替 State 来提供共享的服务器数据(例如,数据库连接池)。

let connection_pool = /* 一些共享状态 */;

let app = Router::new()
    .leptos_routes_with_context(
        &app_state,
        routes,
        move || provide_context(connection_pool.clone()),
        App,
    )
    // 等等。

然后可以在你的服务器函数中使用简单的 use_context::<T>() 访问此上下文。
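
下面是一个极简的示意(ConnectionPool 是假设的类型,代表你通过 provide_context 提供的连接池;GetUserCount 也只是示例名字):

use leptos::*;

// 假设的连接池类型;放入上下文的值需要实现 Clone
#[derive(Clone)]
struct ConnectionPool;

#[server(GetUserCount, "/api")]
pub async fn get_user_count() -> Result<u32, ServerFnError> {
    // 从上下文中取出共享状态
    let pool = use_context::<ConnectionPool>()
        .ok_or_else(|| ServerFnError::ServerError("缺少连接池".into()))?;
    // ……用 pool 查询数据库并返回结果
    let _ = pool;
    Ok(0)
}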

如果你需要在服务器函数中使用 State——例如,如果你有一个现有的 Axum 提取器需要 State——那么也可以使用 Axum 的 FromRef 模式和 extract_with_state。本质上,你需要通过上下文和 Axum 路由器状态来提供状态:

#[derive(FromRef, Debug, Clone)]
pub struct MyData {
    pub value: usize,
    pub leptos_options: LeptosOptions,
}

let app_state = MyData {
    value: 42,
    leptos_options,
};

// 使用路由构建我们的应用程序
let app = Router::new()
    .leptos_routes_with_context(
        &app_state,
        routes,
        {
            let app_state = app_state.clone();
            move || provide_context(app_state.clone())
        },
        App,
    )
    .fallback(file_and_error_handler)
    .with_state(app_state);

// ... 
#[server] 
pub async fn uses_state() -> Result<(), ServerFnError> {
    let state = expect_context::<MyData>();
    let SomeStateExtractor(data) = extract_with_state(&state).await?;
    todo!()
}

关于数据加载模式的说明

因为 Actix 和(尤其是)Axum 建立在单次往返 HTTP 请求和响应的理念之上,所以你通常会在应用程序的“顶部”(即在你开始渲染之前)附近运行提取器,并使用提取的数据来确定应该如何渲染。在你渲染 <button> 之前,你会加载你的应用程序可能需要的所有数据。并且任何给定的路由处理程序都需要知道该路由需要提取的所有数据。

但 Leptos 集成了客户端和服务器,并且能够使用来自服务器的新数据刷新你的 UI 的小部分,而无需强制完全重新加载所有数据,这一点很重要。因此,Leptos 喜欢将数据加载“下推”到你的应用程序中,尽可能靠近你的用户界面的叶子节点。当你点击 <button> 时,它可以只刷新它需要的数据。这正是服务器函数的用途:它们让你可以粒度地访问要加载和重新加载的数据。

extract() 函数允许你通过在你的服务器函数中使用提取器来组合这两种模式。你可以访问路由提取器的全部功能,同时将需要提取的内容分散到你的各个组件中。这使得重构和重新组织路由变得更容易:你不需要预先指定路由需要的所有数据。

响应和重定向

提取器提供了一种简单的方法来访问服务器函数内部的请求数据。Leptos 还提供了一种使用 ResponseOptions 类型(请参阅 ActixAxum 的文档)和 redirect 帮助函数(请参阅 ActixAxum 的文档)来修改 HTTP 响应的方法。

ResponseOptions

ResponseOptions 在初始服务器渲染响应期间和任何后续服务器函数调用期间通过上下文提供。它允许你轻松地设置 HTTP 响应的状态码,或向 HTTP 响应添加标头,例如设置 cookie。

#[server(TeaAndCookies)]
pub async fn tea_and_cookies() -> Result<(), ServerFnError> {
	use actix_web::{cookie::Cookie, http::header, http::header::HeaderValue, http::StatusCode};
	use leptos_actix::ResponseOptions;

	// 从上下文中提取 ResponseOptions
	let response = expect_context::<ResponseOptions>();

	// 设置 HTTP 状态码
	response.set_status(StatusCode::IM_A_TEAPOT);

	// 在 HTTP 响应中设置一个 cookie
	let mut cookie = Cookie::build("biscuits", "yes").finish();
	if let Ok(cookie) = HeaderValue::from_str(&cookie.to_string()) {
		response.insert_header(header::SET_COOKIE, cookie);
	}

	Ok(())
}

redirect

对 HTTP 响应的一种常见修改是重定向到另一个页面。Actix 和 Axum 集成提供了一个 redirect 函数来简化此操作。redirect 只需设置 HTTP 状态码 302 Found 并设置 Location 标头即可。

以下是一个来自我们的 session_auth_axum 示例 的简化示例。

#[server(Login, "/api")]
pub async fn login(
    username: String,
    password: String,
    remember: Option<String>,
) -> Result<(), ServerFnError> {
	// 从上下文中提取数据库池和身份验证提供程序
    let pool = pool()?;
    let auth = auth()?;

	// 检查用户是否存在
    let user: User = User::get_from_username(username, &pool)
        .await
        .ok_or_else(|| {
            ServerFnError::ServerError("User does not exist.".into())
        })?;

	// 检查用户是否提供了正确的密码
    match verify(password, &user.password)? {
		// 如果密码正确...
        true => {
			// 登录用户
            auth.login_user(user.id);
            auth.remember_user(remember.is_some());

			// 并重定向到主页
            leptos_axum::redirect("/");
            Ok(())
        }
		// 如果不正确,则返回错误
        false => Err(ServerFnError::ServerError(
            "Password does not match.".to_string(),
        )),
    }
}

然后可以从你的应用程序中使用此服务器函数。此 redirect 与渐进增强的 <ActionForm/> 组件配合良好:如果没有 JS/WASM,服务器响应将因状态码和标头而重定向。使用 JS/WASM,<ActionForm/> 将检测服务器函数响应中的重定向,并使用客户端导航重定向到新页面。

渐进增强(和优雅降级)

我在波士顿开车已经大约十五年了。如果你不了解波士顿,让我告诉你:马萨诸塞州拥有一些世界上最激进的司机(和行人!)我学会了实践有时被称为“防御性驾驶”的东西:假设当你在交叉路口有路权时,有人即将在你面前突然转向,准备好随时有行人过马路,并相应地驾驶。

“渐进增强”是网页设计的“防御性驾驶”。或者实际上,那是“优雅降级”,尽管它们是同一枚硬币的两个面,或者说是同一个过程,只是方向不同。

在这种情况下,渐进增强 意味着从一个简单的 HTML 网站或应用程序开始,该网站或应用程序适用于访问你的页面的任何用户,并逐渐使用其他功能层对其进行增强:用于样式的 CSS、用于交互性的 JavaScript、用于 Rust 驱动的交互性的 WebAssembly;如果可用,则根据需要使用特定的 Web API 以获得更丰富的体验。

优雅降级 意味着当增强堆栈的某些部分不可用时,能够优雅地处理故障。以下是一些你的用户在你的应用程序中可能遇到的故障来源:

  • 他们的浏览器不支持 WebAssembly,因为它需要更新。
  • 他们的浏览器无法支持 WebAssembly,因为浏览器更新仅限于较新的操作系统版本,而这些版本无法在其设备上安装。(说你呢,Apple。)
  • 出于安全或隐私原因,他们已关闭 WASM。
  • 出于安全或隐私原因,他们已关闭 JavaScript。
  • 他们的设备不支持 JavaScript(例如,某些辅助功能设备仅支持 HTML 浏览)
  • 因为他们走到室外并在 WASM 完成加载之前丢失了 WiFi,所以 JavaScript(或 WASM)从未到达他们的设备。
  • 他们在加载初始页面后踏上地铁车厢,后续导航无法加载数据。
  • ... 等等。

如果其中一项成立,你的应用程序还有多少能正常工作?两项呢?三项呢?

如果答案类似于“95%……好吧,然后是 90%……好吧,然后是 75%”,那就是优雅降级。如果答案是“除非一切正常,否则我的应用程序只会显示一个空白屏幕”,那就是……“快速的计划外解体”(换句话说,彻底崩溃)。

优雅降级对于 WASM 应用程序尤其重要, 因为 WASM 是在浏览器中运行的四种语言(HTML、CSS、JS、WASM)中最新的、最不可能被支持的。

幸运的是,我们有一些工具可以提供帮助。

防御性设计

有一些实践可以帮助你的应用程序更优雅地降级:

  1. 服务器端渲染。 没有 SSR,你的应用程序在没有加载 JS 和 WASM 的情况下根本无法工作。在某些情况下,这可能是合适的(想想在登录后受保护的内部应用程序),但在其他情况下,它只是坏了。
  2. 原生 HTML 元素。 使用可以完成你想要的事情的 HTML 元素,而无需额外的代码:<a> 用于导航(包括页面内的哈希值)、<details> 用于手风琴、<form> 用于在 URL 中保存信息等。
  3. URL 驱动的状态。 你的全局状态越多存储在 URL 中(作为路由参数或查询字符串的一部分),在服务器端渲染期间可以生成的页面就越多,并且可以通过 <a><form> 更新,这意味着不仅导航而且状态更改都可以在没有 JS/WASM 的情况下工作。
  4. SsrMode::PartiallyBlockedSsrMode::InOrder 乱序流式传输需要少量内联 JS,但如果 1) 连接在响应中途断开或 2) 客户端设备不支持 JS,则可能会失败。异步流式传输将提供一个完整的 HTML 页面,但只有在所有资源加载完成后才会提供。顺序流式传输会更快地开始显示页面的各个部分,按自上而下的顺序。“部分阻塞”SSR 通过在服务器上替换从阻塞资源读取的 <Suspense/> 片段来构建乱序流式传输。这会略微增加初始响应时间(因为 O(n) 字符串替换工作),以换取更完整的初始 HTML 响应。对于“更重要”和“不太重要”内容之间有明显区别的情况,例如博客文章与评论,或产品信息与评论,这是一个不错的选择。如果你选择阻塞所有内容,则实际上你已经重新创建了异步渲染。
  5. 依赖 <form>。最近 <form> 有点复兴,这并不奇怪。<form> 以易于增强的方式管理复杂的 POST 或 GET 请求的能力使其成为优雅降级的强大工具。例如,<Form/> 章节 中的示例在没有 JS/WASM 的情况下也能很好地工作:因为它使用 <form method="GET"> 在 URL 中保存状态,所以它通过发出普通的 HTTP 请求来使用纯 HTML,然后逐步增强以使用客户端导航代替。

框架还有一个我们尚未看到的功能,它建立在表单的这一特性之上,用于构建强大的应用程序:<ActionForm/>

<ActionForm/>

<ActionForm/> 是一个特殊的 <Form/>,它接受一个服务器操作,并在表单提交时自动调度它。这允许你直接从 <form> 调用服务器函数,即使没有 JS/WASM。

过程很简单:

  1. 使用 #[server] 定义一个服务器函数(参见 服务器函数)。
  2. 使用 create_server_action 创建一个操作,指定你定义的服务器函数的类型。
  3. 创建一个 <ActionForm/>,在 action prop 中提供服务器操作。
  4. 将命名参数作为具有相同名称的表单字段传递给服务器函数。

注意: <ActionForm/> 仅适用于服务器函数的默认 URL 编码的 POST 编码,以确保作为 HTML 表单的优雅降级/正确行为。

#[server(AddTodo, "/api")]
pub async fn add_todo(title: String) -> Result<(), ServerFnError> {
    todo!()
}

#[component]
fn AddTodo() -> impl IntoView {
	let add_todo = create_server_action::<AddTodo>();
	// 保存从服务器返回的最新值
	let value = add_todo.value();
	// 检查服务器是否返回了错误
	let has_error = move || value.with(|val| matches!(val, Some(Err(_))));

	view! {
		<ActionForm action=add_todo>
			<label>
				"Add a Todo"
				// `title` 与 `add_todo` 的 `title` 参数匹配
				<input type="text" name="title"/>
			</label>
			<input type="submit" value="Add"/>
		</ActionForm>
	}
}

真的就这么简单。使用 JS/WASM,你的表单将在没有页面重新加载的情况下提交,将其最近提交的内容存储在操作的 .input() 信号中,将其待处理状态存储在 .pending() 中,等等。(如果需要,请参阅 Action 文档以进行复习。)如果没有 JS/WASM,你的表单将在页面重新加载时提交。如果你调用一个 redirect 函数(来自 leptos_axumleptos_actix),它将重定向到正确的页面。默认情况下,它会重定向回你当前所在的页面。HTML、HTTP 和同构渲染的强大功能意味着你的 <ActionForm/> 可以正常工作,即使没有 JS/WASM。
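
例如,下面这个小示意沿用上文的 AddTodo,利用 action 的 .pending() 信号在提交期间显示提示(假设项目已引入 leptos_router,<ActionForm/> 即来自该 crate;仅作说明,并非权威写法):

use leptos::*;
use leptos_router::ActionForm;

#[component]
pub fn AddTodoWithStatus() -> impl IntoView {
    let add_todo = create_server_action::<AddTodo>();
    // 提交进行中时为 true
    let pending = add_todo.pending();

    view! {
        <ActionForm action=add_todo>
            <input type="text" name="title"/>
            <input type="submit" value="Add"/>
        </ActionForm>
        // 仅在提交进行中时显示提示
        {move || pending().then(|| view! { <p>"正在添加……"</p> })}
    }
}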

客户端验证

因为 <ActionForm/> 只是一个 <form>,所以它会触发一个 submit 事件。你可以在 on:submit 中使用 HTML 验证或你自己的客户端验证逻辑。只需调用 ev.prevent_default() 即可防止提交。

FromFormData 特征在这里可能会有所帮助,用于尝试从提交的表单中解析你的服务器函数的数据类型。

let on_submit = move |ev| {
	let data = AddTodo::from_event(&ev);
	// 愚蠢的验证示例:如果待办事项是“nope!”,则不执行
	if data.is_err() || data.unwrap().title == "nope!" {
		// ev.prevent_default() 将阻止表单提交
		ev.prevent_default();
	}
};

复杂输入

如果服务器函数的参数是包含嵌套可序列化字段的结构体,则应使用 serde_qs 的索引表示法。

use leptos::*;
use leptos_router::*;

#[derive(serde::Serialize, serde::Deserialize, Debug, Clone)]
struct HeftyData {
    first_name: String,
    last_name: String,
}

#[component]
fn ComplexInput() -> impl IntoView {
    let submit = Action::<VeryImportantFn, _>::server();

    view! {
      <ActionForm action=submit>
        <input type="text" name="hefty_arg[first_name]" value="leptos"/>
        <input
          type="text"
          name="hefty_arg[last_name]"
          value="closures-everywhere"
        />
        <input type="submit"/>
      </ActionForm>
    }
}

#[server]
async fn very_important_fn(
    hefty_arg: HeftyData,
) -> Result<(), ServerFnError> {
    assert_eq!(hefty_arg.first_name.as_str(), "leptos");
    assert_eq!(hefty_arg.last_name.as_str(), "closures-everywhere");
    Ok(())
}

部署

部署 Web 应用程序的方式和开发人员一样多,这里不可能一一列举。不过,在部署应用程序时,有一些有用的提示值得记住。

一般建议

  1. 请记住:始终部署在 --release 模式下构建的 Rust 应用程序,而不是调试模式。这对性能和二进制文件大小都有巨大影响。
  2. 还要在发布模式下进行本地测试。该框架在发布模式下应用了某些在调试模式下不应用的优化,因此此时可能会出现错误。(如果你的应用程序的行为不同,或者你确实遇到了错误,那么很可能是一个框架级别的错误,你应该打开一个 GitHub 问题并提供重现步骤。)
  3. 请参阅“优化 WASM 二进制文件大小”一章,以获取更多技巧,以进一步改善你的 WASM 应用程序在首次加载时的交互时间指标。

我们要求用户提交他们的部署设置,以帮助完成本章。我将在下面引用它们,但你可以 在此处 阅读完整帖子。

部署客户端渲染的应用程序

如果你一直在构建一个仅使用客户端渲染的应用程序,使用 Trunk 作为开发服务器和构建工具,那么这个过程非常简单。

trunk build --release

trunk build 将在 dist/ 目录中创建许多构建工件。将 dist 发布到网上的某个地方应该是部署你的应用程序所需的全部内容。这应该与部署任何 JavaScript 应用程序非常相似。

我们创建了几个示例存储库,展示了如何设置 Leptos CSR 应用程序并将其部署到各种托管服务。

注意:Leptos 不认可使用任何特定的托管服务——你可以随意使用任何支持静态站点部署的服务。

示例:

Github Pages

将 Leptos CSR 应用程序部署到 Github Pages 是一件很简单的事情。首先,转到你的 Github 仓库的设置,然后点击左侧菜单中的“页面”。在页面的“构建和部署”部分,将“来源”更改为“Github Actions”。然后将以下内容复制到文件 .github/workflows/gh-pages-deploy.yml

Example

name: Release to Github Pages

on:
  push:
    branches: [main]
  workflow_dispatch:

permissions:
  contents: write # 允许写入 gh-pages 分支。
  pages: write
  id-token: write

# 只允许一个并发部署,跳过正在进行的运行和最新排队的运行之间排队的运行。
# 但是,不要取消正在进行的运行,因为我们希望允许这些生产部署完成。
concurrency:
  group: "pages"
  cancel-in-progress: false

jobs:
  Github-Pages-Release:

    timeout-minutes: 10

    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4 # 检出代码库

      # 安装 Rust Nightly 工具链,包括 Clippy 和 Rustfmt
      - name: Install nightly Rust
        uses: dtolnay/rust-toolchain@nightly
        with:
          components: clippy, rustfmt

      - name: Add WASM target
        run: rustup target add wasm32-unknown-unknown

      - name: lint
        run: cargo clippy & cargo fmt


      # 如果使用 tailwind...
      # - name: Download and install tailwindcss binary
      #   run: npm install -D tailwindcss && npx tailwindcss -i <INPUT/PATH.css> -o <OUTPUT/PATH.css>  # 运行 tailwind


      - name: Download and install Trunk binary
        run: wget -qO- https://github.com/trunk-rs/trunk/releases/download/v0.18.4/trunk-x86_64-unknown-linux-gnu.tar.gz | tar -xzf-

      - name: Build with Trunk
        # "${GITHUB_REPOSITORY#*/}" 计算为存储库的名称
        # 使用 --public-url something 将允许 trunk 修改所有 href 路径,例如从 favicon.ico 到 repo_name/favicon.ico 。
        # 这对于将站点部署到 username.github.io/repo_name 的 github pages 是必要的,并且所有文件都必须作为 favicon.ico 相对请求。
        # 如果我们跳过 public-url 选项,href 路径将改为请求 username.github.io/favicon.ico,这显然会返回错误 404 未找到。
        run: ./trunk build --release --public-url "${GITHUB_REPOSITORY#*/}"


      # 部署到 gh-pages 分支
      # - name: Deploy 🚀
      #   uses: JamesIves/github-pages-deploy-action@v4
      #   with:
      #     folder: dist


      # 使用 Github 静态页面部署

      - name: Setup Pages
        uses: actions/configure-pages@v4
        with:
          enablement: true
          # token:

      - name: Upload artifact
        uses: actions/upload-pages-artifact@v2
        with:
          # 上传 dist 目录
          path: './dist'

      - name: Deploy to GitHub Pages 🚀
        id: deployment
        uses: actions/deploy-pages@v3

有关部署到 Github Pages 的更多信息,请参阅此处的示例仓库

Vercel

步骤 1:设置 Vercel

在 Vercel Web UI 中...

  1. 创建一个新项目
  2. 确保
    • “构建命令” 留空,并启用覆盖
    • “输出目录” 更改为 dist(这是 Trunk 构建的默认输出目录),并启用覆盖

步骤 2:为 GitHub Actions 添加 Vercel 凭据

注意:预览和部署操作都需要在 GitHub secrets 中设置你的 Vercel 凭据

  1. 通过转到“帐户设置”>“令牌”并创建一个新令牌来获取你的 Vercel 访问令牌 - 保存该令牌以在下面的子步骤 5 中使用。

  2. 使用 npm i -g vercel 命令安装 Vercel CLI,然后运行 vercel login 登录到你的帐户。

  3. 在你的文件夹中,运行 vercel link 创建一个新的 Vercel 项目;在 CLI 中,你将被问到“链接到现有项目吗?” - 回答是,然后输入你在步骤 1 中创建的名称。将为你创建一个新的 .vercel 文件夹。

  4. 在生成的 .vercel 文件夹中,打开 project.json 文件并保存“projectId”和“orgId”以用于下一步。

  5. 在 GitHub 中,转到仓库的“设置”>“密钥和变量”>“操作”,并将以下内容添加为 仓库密钥

    • 将你的 Vercel 访问令牌(来自子步骤 1)保存为 VERCEL_TOKEN 密钥
    • .vercel/project.json 添加“projectID”作为 VERCEL_PROJECT_ID
    • .vercel/project.json 添加“orgId”作为 VERCEL_ORG_ID

有关完整说明,请参阅 “如何在 Vercel 中使用 Github Actions”

步骤 3:添加 Github Action 脚本

最后,你只需从下方或 示例仓库的 .github/workflows/ 文件夹 中复制粘贴这两个文件——一个用于部署,一个用于 PR 预览——到你的 Github 工作流文件夹中,然后,在你的下一次提交或 PR 时,部署将自动进行。

生产部署脚本:vercel_deploy.yml

Example

name: 发布到 Vercel

on:
  push:
    branches:
      - main

env:
  CARGO_TERM_COLOR: always
  VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
  VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID }}

jobs:
  Vercel-Production-Deployment:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - name: git-checkout
        uses: actions/checkout@v3

      - uses: dtolnay/rust-toolchain@nightly
        with:
          components: clippy, rustfmt

      - uses: Swatinem/rust-cache@v2

      - name: 设置 Rust
        run: |
          rustup target add wasm32-unknown-unknown
          cargo clippy
          cargo fmt --check

      - name: 下载并安装 Trunk 二进制文件
        run: wget -qO- https://github.com/trunk-rs/trunk/releases/download/v0.18.2/trunk-x86_64-unknown-linux-gnu.tar.gz | tar -xzf-

      - name: 使用 Trunk 构建
        run: ./trunk build --release

      - name: 安装 Vercel CLI
        run: npm install --global vercel@latest

      - name: 拉取 Vercel 环境信息
        run: vercel pull --yes --environment=production --token=${{ secrets.VERCEL_TOKEN }}

      - name: 部署到 Vercel 并显示 URL
        id: deployment
        working-directory: ./dist
        run: |
          vercel deploy --prod --token=${{ secrets.VERCEL_TOKEN }} >> $GITHUB_STEP_SUMMARY
          echo $GITHUB_STEP_SUMMARY

预览部署脚本:vercel_preview.yml

Example

# 有关 Vercel 操作的更多信息,请参阅:
# https://github.com/amondnet/vercel-action

name: Leptos CSR Vercel 预览

on:
  pull_request:
    branches: [ "main" ]
  workflow_dispatch:

env:
  CARGO_TERM_COLOR: always
  VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
  VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID }}

jobs:
  fmt:
    name: Rustfmt
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@nightly
        with:
          components: rustfmt
      - name: 强制格式化
        run: cargo fmt --check

  clippy:
    name: Clippy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@nightly
        with:
          components: clippy
      - uses: Swatinem/rust-cache@v2
      - name: Lint
        run: cargo clippy -- -D warnings

  test:
    name: 测试
    runs-on: ubuntu-latest
    needs: [fmt, clippy]
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@nightly
      - uses: Swatinem/rust-cache@v2
      - name: 运行测试
        run: cargo test

  build-and-preview-deploy:
    runs-on: ubuntu-latest
    name: 构建和预览

    needs: [test, clippy, fmt]

    permissions:
      pull-requests: write

    environment:
      name: preview
      url: ${{ steps.preview.outputs.preview-url }}

    steps:
      - name: git-checkout
        uses: actions/checkout@v4

      - uses: dtolnay/rust-toolchain@nightly

      - uses: Swatinem/rust-cache@v2

      - name: 构建
        run: rustup target add wasm32-unknown-unknown

      - name: 下载并安装 Trunk 二进制文件
        run: wget -qO- https://github.com/trunk-rs/trunk/releases/download/v0.18.2/trunk-x86_64-unknown-linux-gnu.tar.gz | tar -xzf-

      - name: 使用 Trunk 构建
        run: ./trunk build --release

      - name: 预览部署
        id: preview
        uses: amondnet/[email protected]
        with:
          vercel-token: ${{ secrets.VERCEL_TOKEN }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
          vercel-org-id: ${{ secrets.VERCEL_ORG_ID }}
          vercel-project-id: ${{ secrets.VERCEL_PROJECT_ID }}
          github-comment: true
          working-directory: ./dist

      - name: 显示已部署 URL
        run: |
          echo "已部署的应用程序 URL:${{ steps.preview.outputs.preview-url }}" >> $GITHUB_STEP_SUMMARY

有关更多信息,请参阅 此处的示例仓库

Spin - 无服务器 WebAssembly

另一种选择是使用 Spin 等无服务器平台。虽然 Spin 是开源的,你可以在自己的基础设施上运行它(例如在 Kubernetes 内部),但在生产环境中开始使用 Spin 最简单的方法是使用 Fermyon Cloud。

首先按照 此处的说明 安装 Spin CLI,并为你的 Leptos CSR 项目创建一个 Github 仓库(如果你还没有这样做)。

  1. 打开“Fermyon Cloud”>“用户设置”。如果你尚未登录,请选择“使用 GitHub 登录”按钮。

  2. 在“个人访问令牌”中,选择“添加令牌”。输入名称“gh_actions”并单击“创建令牌”。

  3. Fermyon Cloud 将显示该令牌;单击复制按钮将其复制到剪贴板。

  4. 进入你的 Github 仓库,打开“设置”>“密钥和变量”>“操作”,并将 Fermyon 云令牌添加到“存储库密钥”中,使用变量名“FERMYON_CLOUD_TOKEN”

  5. 将以下 Github Actions 脚本(如下)复制并粘贴到你的 .github/workflows/<SCRIPT_NAME>.yml 文件中

  6. 激活“预览”和“部署”脚本后,Github Actions 现在将在拉取请求时生成预览,并在更新到“主”分支时自动部署。

生产部署脚本:spin_deploy.yml

Example

# 有关 Fermyon Cloud 所需的设置说明,请参阅:
# https://developer.fermyon.com/cloud/github-actions
# 供参考,请参阅:
# https://developer.fermyon.com/cloud/changelog/gh-actions-spin-deploy
# 对于 Fermyon gh 操作本身,请参阅:
# https://github.com/fermyon/actions

name: 发布到 Spin Cloud

on:
  push:
    branches: [main]
  workflow_dispatch:

permissions:
  contents: read
  id-token: write

# 仅允许一个并发部署,跳过正在运行的运行和最新排队的运行之间排队的运行。
# 但是,不要取消正在进行的运行,因为我们希望允许这些生产部署完成。
concurrency:
  group: "spin"
  cancel-in-progress: false

jobs:
  Spin-Release:

    timeout-minutes: 10

    environment:
      name: production
      url: ${{ steps.deployment.outputs.app-url }}

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4 # repo checkout

      # 安装 Rust Nightly 工具链,包括 Clippy 和 Rustfmt
      - name: 安装 nightly Rust
        uses: dtolnay/rust-toolchain@nightly
        with:
          components: clippy, rustfmt

      - name: 添加 WASM 和 WASI 目标
        run: rustup target add wasm32-unknown-unknown && rustup target add wasm32-wasi

      - name: lint
        run: cargo clippy & cargo fmt


      # 如果使用 tailwind...
      # - name: 下载并安装 tailwindcss 二进制文件
      #   run: npm install -D tailwindcss && npx tailwindcss -i <INPUT/PATH.css> -o <OUTPUT/PATH.css>  # 运行 tailwind


      - name: 下载并安装 Trunk 二进制文件
        run: wget -qO- https://github.com/trunk-rs/trunk/releases/download/v0.18.2/trunk-x86_64-unknown-linux-gnu.tar.gz | tar -xzf-

      - name: 使用 Trunk 构建
        run: ./trunk build --release


      # 安装 Spin CLI 并部署

      - name: 设置 Spin
        uses: fermyon/actions/spin/setup@v1
        with:
          plugins:

      - name: 构建和部署
        id: deployment
        uses: fermyon/actions/spin/deploy@v1
        with:
          fermyon_token: ${{ secrets.FERMYON_CLOUD_TOKEN }}
          key_values: |-
            # abc=xyz
            # foo=bar
          variables: |-
            # password=${{ secrets.SECURE_PASSWORD }}
            # apikey=${{ secrets.API_KEY }}

      # 创建一条显式消息以显示已部署应用程序的 URL,以及在作业图中显示
      - name: 已部署的 URL
        run: |
          echo "已部署的应用程序 URL:${{ steps.deployment.outputs.app-url }}" >> $GITHUB_STEP_SUMMARY

预览部署脚本:spin_preview.yml

Example

# 有关 Fermyon Cloud 所需的设置说明,请参阅:
# https://developer.fermyon.com/cloud/github-actions
# 对于 Fermyon gh 操作本身,请参阅:
# https://github.com/fermyon/actions
# 具体来说:
# https://github.com/fermyon/actions?tab=readme-ov-file#deploy-preview-of-spin-app-to-fermyon-cloud---fermyonactionsspinpreviewv1

name: 在 Spin Cloud 上预览

on:
  pull_request:
    branches: ["main", "v*"]
    types: ['opened', 'synchronize', 'reopened', 'closed']
  workflow_dispatch:

permissions:
  contents: read
  pull-requests: write

# 仅允许一个并发部署,跳过正在运行的运行和最新排队的运行之间排队的运行。
# 但是,不要取消正在进行的运行,因为我们希望允许这些生产部署完成。
concurrency:
  group: "spin"
  cancel-in-progress: false

jobs:
  Spin-Preview:

    timeout-minutes: 10

    environment:
      name: preview
      url: ${{ steps.preview.outputs.app-url }}

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4 # repo checkout

      # 安装 Rust Nightly 工具链,包括 Clippy 和 Rustfmt
      - name: 安装 nightly Rust
        uses: dtolnay/rust-toolchain@nightly
        with:
          components: clippy, rustfmt

      - name: 添加 WASM 和 WASI 目标
        run: rustup target add wasm32-unknown-unknown && rustup target add wasm32-wasi

      - name: lint
        run: cargo clippy & cargo fmt


      # 如果使用 tailwind...
      # - name: 下载并安装 tailwindcss 二进制文件
      #   run: npm install -D tailwindcss && npx tailwindcss -i <INPUT/PATH.css> -o <OUTPUT/PATH.css>  # 运行 tailwind


      - name: 下载并安装 Trunk 二进制文件
        run: wget -qO- https://github.com/trunk-rs/trunk/releases/download/v0.18.2/trunk-x86_64-unknown-linux-gnu.tar.gz | tar -xzf-

      - name: 使用 Trunk 构建
        run: ./trunk build --release


      # 安装 Spin CLI 并部署

      - name: 设置 Spin
        uses: fermyon/actions/spin/setup@v1
        with:
          plugins:

      - name: 构建和预览
        id: preview
        uses: fermyon/actions/spin/preview@v1
        with:
          fermyon_token: ${{ secrets.FERMYON_CLOUD_TOKEN }}
          github_token: ${{ secrets.GITHUB_TOKEN }}
          undeploy: ${{ github.event.pull_request && github.event.action == 'closed' }}
          key_values: |-
            # abc=xyz
            # foo=bar
          variables: |-
            # password=${{ secrets.SECURE_PASSWORD }}
            # apikey=${{ secrets.API_KEY }}

      - name: 显示已部署 URL
        run: |
          echo "已部署的应用程序 URL:${{ steps.preview.outputs.app-url }}" >> $GITHUB_STEP_SUMMARY

请参阅 此处的示例仓库

部署全栈 SSR 应用程序

可以将 Leptos 全栈 SSR 应用程序部署到任意数量的服务器或容器托管服务。将 Leptos SSR 应用程序投入生产的最简单方法可能是使用 VPS 服务,并在 VM 中本地运行 Leptos(有关更多详细信息,请参阅此处)。或者,你可以将你的 Leptos 应用程序容器化,并在任何托管或云服务器上的 PodmanDocker 中运行它。

有许多不同的部署设置和托管服务,而且总的来说,Leptos 本身与你使用的部署设置无关。考虑到部署目标的多样性,本页将依次介绍:创建 Containerfile、云部署、部署到无服务器运行时,以及正在完善 Leptos 支持的平台。

注意:Leptos 不认可使用任何特定的部署方法或托管服务。

创建一个 Containerfile

人们部署使用 cargo-leptos 构建的全栈应用程序最流行的方式是使用支持通过 Podman 或 Docker 构建进行部署的云托管服务。这是一个示例 Containerfile / Dockerfile,它基于我们用于部署 Leptos 网站的示例。

Debian

# 从包含 Rust nightly 的构建环境开始
FROM rustlang/rust:nightly-bullseye as builder

# 如果你使用的是稳定版,请改用此版本
# FROM rust:1.74-bullseye as builder

# 安装 cargo-binstall,这使得安装其他
# cargo 扩展(如 cargo-leptos)变得更容易
RUN wget https://github.com/cargo-bins/cargo-binstall/releases/latest/download/cargo-binstall-x86_64-unknown-linux-musl.tgz
RUN tar -xvf cargo-binstall-x86_64-unknown-linux-musl.tgz
RUN cp cargo-binstall /usr/local/cargo/bin

# 安装 cargo-leptos
RUN cargo binstall cargo-leptos -y

# 添加 WASM 目标
RUN rustup target add wasm32-unknown-unknown

# 创建一个 /app 目录,所有内容最终都将位于其中
RUN mkdir -p /app
WORKDIR /app
COPY . .

# 构建应用程序
RUN cargo leptos build --release -vv

FROM debian:bookworm-slim as runtime
WORKDIR /app
RUN apt-get update -y \
  && apt-get install -y --no-install-recommends openssl ca-certificates \
  && apt-get autoremove -y \
  && apt-get clean -y \
  && rm -rf /var/lib/apt/lists/*

# -- 注意:将二进制文件名从“leptos_start”更新为与 Cargo.toml 中的应用程序名称匹配 --
# 将服务器二进制文件复制到 /app 目录
COPY --from=builder /app/target/release/leptos_start /app/

# /target/site 包含我们的 JS/WASM/CSS 等。
COPY --from=builder /app/target/site /app/site

# 如果在运行时需要 Cargo.toml,请复制它
COPY --from=builder /app/Cargo.toml /app/

# 设置任何所需的 env 变量并
ENV RUST_LOG="info"
ENV LEPTOS_SITE_ADDR="0.0.0.0:8080"
ENV LEPTOS_SITE_ROOT="site"
EXPOSE 8080

# -- 注意:将二进制文件名从“leptos_start”更新为与 Cargo.toml 中的应用程序名称匹配 --
# 运行服务器
CMD ["/app/leptos_start"]

Alpine

# 从包含 Rust nightly 的构建环境开始
FROM rustlang/rust:nightly-alpine as builder

RUN apk update && \
    apk add --no-cache bash curl npm libc-dev binaryen

RUN npm install -g sass

RUN curl --proto '=https' --tlsv1.2 -LsSf https://github.com/leptos-rs/cargo-leptos/releases/latest/download/cargo-leptos-installer.sh | sh

# 添加 WASM 目标
RUN rustup target add wasm32-unknown-unknown

WORKDIR /work
COPY . .

RUN cargo leptos build --release -vv

FROM rustlang/rust:nightly-alpine as runner

WORKDIR /app

COPY --from=builder /work/target/release/leptos_start /app/
COPY --from=builder /work/target/site /app/site
COPY --from=builder /work/Cargo.toml /app/

EXPOSE $PORT
ENV LEPTOS_SITE_ROOT=./site

CMD ["/app/leptos_start"]

阅读更多:Leptos 应用程序的 gnumusl 构建文件

云部署

部署到 Fly.io

部署 Leptos SSR 应用程序的一种选择是使用 Fly.io 之类的服务,该服务采用 Leptos 应用程序的 Dockerfile 定义,并在快速启动的微型虚拟机中运行它;Fly 还提供各种存储选项和托管数据库以用于你的项目。以下示例将展示如何部署一个简单的 Leptos 入门应用程序,只是为了让你入门;如有需要,请参阅此处以了解有关在 Fly.io 上使用存储选项的更多信息

首先,在你的应用程序的根目录中创建一个 Dockerfile,并使用建议的内容(如上)填充它;确保将 Dockerfile 示例中的二进制文件名更新为你的应用程序的名称,并根据需要进行其他调整。

此外,确保你已安装 flyctl CLI 工具,并在 Fly.io 上设置了一个帐户。要在 MacOS、Linux 或 Windows WSL 上安装 flyctl,请运行:

curl -L https://fly.io/install.sh | sh

如果你遇到问题,或者要安装到其他平台,请参阅此处的完整说明

然后登录 Fly.io

fly auth login

并使用以下命令手动启动你的应用程序

fly launch

flyctl CLI 工具将引导你完成将你的应用程序部署到 Fly.io 的过程。

Note

默认情况下,Fly.io 会在一段时间后自动停止没有流量进入的机器。虽然 Fly.io 的轻量级虚拟机启动速度很快,但如果你想最大限度地减少 Leptos 应用程序的延迟并确保它始终能够快速响应,请进入生成的 fly.toml 文件并将 min_machines_running 从默认值 0 更改为 1。

有关更多详细信息,请参阅 Fly.io 文档中的此页面

如果你希望使用 Github Actions 来管理你的部署,你将需要通过 Fly.io Web UI 创建一个新的访问令牌。

转到“帐户”>“访问令牌”,创建一个名为“github_actions”之类的令牌;然后进入你的项目的 Github 仓库,单击“设置”>“秘密和变量”>“操作”,创建一个名为“FLY_API_TOKEN”的“新存储库秘密”,并把刚才生成的 Fly.io 访问令牌保存进去。

要生成一个用于部署到 Fly.io 的 fly.toml 配置文件,你必须首先从项目源目录中运行以下命令

fly launch --no-deploy

以创建一个新的 Fly 应用程序并将其注册到服务中。Git 提交你的新 fly.toml 文件。

要设置 Github Actions 部署工作流,请将以下内容复制到 .github/workflows/fly_deploy.yml 文件中:

Example

# 有关更多详细信息,请参阅:https://fly.io/docs/app-guides/continuous-deployment-with-github-actions/

name: 部署到 Fly.io
on:
  push:
    branches:
      - main
jobs:
  deploy:
    name: 部署应用程序
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: superfly/flyctl-actions/setup-flyctl@master
      - name: 部署到 fly
        id: deployment
        run: |
          flyctl deploy --remote-only | tail -n 1 >> $GITHUB_STEP_SUMMARY
        env:
          FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}

下次提交到你的 Github main 分支时,你的项目将自动部署到 Fly.io。

请参阅 此处的示例仓库

Railway

另一个云部署提供商是 Railway。 Railway 与 GitHub 集成以自动部署你的代码。

有一个自以为是的社区模板可以让你快速入门:

在 Railway 上部署

该模板已设置 renovate 以保持依赖项最新,并支持 GitHub Actions 在部署之前测试你的代码。

Railway 有一个免费套餐,不需要信用卡,而且由于 Leptos 需要的资源很少,该免费套餐应该可以使用很长时间。

请参阅 此处的示例仓库

部署到无服务器运行时

Leptos 支持部署到 FaaS(函数即服务)或“无服务器”运行时(如 AWS Lambda),以及 WinterCG 兼容的 JS 运行时(如 Deno 和 Cloudflare)。请注意,与虚拟机或容器类型的部署相比,无服务器环境确实对 SSR 应用程序可用的功能有一些限制(请参阅下面的注释)。

AWS Lambda

借助 Cargo Lambda 工具,Leptos SSR 应用程序可以部署到 AWS Lambda。leptos-rs/start-aws 提供了一个使用 Axum 作为服务器的入门模板仓库;那里的说明可以适用于你使用 Leptos+Actix-web 服务器。入门仓库包括一个用于 CI/CD 的 Github Actions 脚本,以及有关设置你的 Lambda 函数和获取云部署所需凭据的说明。

但是,请记住,某些本机服务器功能不适用于 Lambda 等 FaaS 服务,因为环境在不同请求之间不一定一致。特别是,'start-aws' 文档 指出,“由于 AWS Lambda 是一个无服务器平台,因此你需要更加小心地管理长期存在的状态。写入磁盘或使用状态提取器(state extractors)在不同请求之间无法可靠地工作。相反,你需要一个数据库或其他微服务,你可以从 Lambda 函数中查询它们。”

要记住的另一个因素是函数即服务的“冷启动”时间——根据你的用例和你使用的 FaaS 平台,这可能满足也可能不满足你的延迟要求;你可能需要始终保持一个函数运行以优化你的请求的速度。

Deno 和 Cloudflare Workers

目前,Leptos-Axum 支持在 Javascript 托管的 WebAssembly 运行时(如 Deno、Cloudflare Workers 等)中运行。此选项需要对你的源代码设置进行一些更改(例如,在 Cargo.toml 中,你必须使用 crate-type = ["cdylib"] 定义你的应用程序,并且必须为 leptos_axum 启用“wasm”功能)。Leptos HackerNews JS-fetch 示例 演示了所需的修改,并展示了如何在 Deno 运行时中运行应用程序。此外,leptos_axum crate 文档 是为 JS 托管的 WASM 运行时设置你自己的 Cargo.toml 文件时的有用参考。

虽然 JS 托管的 WASM 运行时的初始设置并不繁琐,但要记住更重要的限制是,由于你的应用程序将在服务器和客户端上编译为 WebAssembly (wasm32-unknown-unknown),因此你必须确保你应用程序中使用的 crate 都与 WASM 兼容;根据你的应用程序的要求,这可能是也可能不是一个障碍,因为并非 Rust 生态系统中的所有 crate 都支持 WASM。

如果你愿意接受 WASM 服务器端的限制,那么现在开始的最佳方式是查看官方 Leptos Github 仓库中 使用 Deno 运行 Leptos 的示例

正在完善 Leptos 支持的平台

部署到 Spin 无服务器 WASI(使用 Leptos SSR)

服务器端的 WebAssembly 最近一直在蓬勃发展,开源无服务器 WebAssembly 框架 Spin 的开发人员正在努力原生支持 Leptos。虽然 Leptos-Spin SSR 集成仍处于早期阶段,但有一个你可以尝试的有效示例。

让 Leptos SSR 和 Spin 协同工作的完整说明可在 Fermyon 博客上的一篇文章 中找到,或者,如果你想跳过文章并直接开始玩一个有效的入门仓库,请参阅此处

部署到 Shuttle.rs

一些 Leptos 用户询问了使用对 Rust 友好的 Shuttle.rs 服务部署 Leptos 应用程序的可能性。不幸的是,Leptos 目前尚未得到 Shuttle.rs 服务的官方支持。

但是,Shuttle.rs 的工作人员致力于在未来获得 Leptos 支持;如果你想了解这项工作的最新状态,请关注 此 Github 问题

此外,已经做出了一些努力来让 Shuttle 与 Leptos 协同工作,但到目前为止,部署到 Shuttle 云仍然无法按预期工作。如果你想自己调查或贡献修复程序,这项工作可在此处获得:用于 Shuttle.rs 的 Leptos Axum 入门模板

优化 WASM 二进制文件大小

部署 Rust/WebAssembly 前端应用程序的主要缺点之一是将 WASM 文件拆分成更小的块以动态加载比拆分 JavaScript 包要困难得多。在 Emscripten 生态系统中已经有一些实验,例如 wasm-split,但目前还没有办法拆分和动态加载 Rust/wasm-bindgen 二进制文件。这意味着需要加载整个 WASM 二进制文件后,你的应用程序才能进行交互。由于 WASM 格式是为流式编译而设计的,因此与 JavaScript 文件相比,WASM 文件每千字节的编译速度要快得多。(要深入了解,你可以阅读 Mozilla 团队的这篇精彩文章,了解流式 WASM 编译。)

尽管如此,将最小的 WASM 二进制文件发送给用户仍然很重要,因为它会减少他们的网络使用量并使你的应用程序尽快进行交互。

那么有哪些实际步骤呢?

要做的事情

  1. 确保你正在查看发布版本。(调试版本要大得多。)
  2. 为 WASM 添加一个发布配置文件,该配置文件针对大小进行优化,而不是速度。

例如,对于 cargo-leptos 项目,你可以将此添加到你的 Cargo.toml 中:

[profile.wasm-release]
inherits = "release"
opt-level = 'z'
lto = true
codegen-units = 1

# ....

[package.metadata.leptos]
# ....
lib-profile-release = "wasm-release"

这将针对大小对你的发布版本的 WASM 进行超优化,同时保持你的服务器版本针对速度进行优化。(对于没有服务器考虑的纯客户端渲染应用程序,只需使用 [profile.wasm-release] 块作为你的 [profile.release]。)

  3. 始终在生产环境中提供压缩的 WASM。WASM 往往压缩得很好,通常会缩小到其未压缩大小的 50% 以下,并且为从 Actix 或 Axum 提供的静态文件启用压缩非常简单。

  4. 如果你使用的是 nightly Rust,你可以使用相同的配置文件重新构建标准库,而不是使用随 wasm32-unknown-unknown 目标一起分发的预构建标准库。

为此,请在你的项目的 .cargo/config.toml 中创建一个文件

[unstable]
build-std = ["std", "panic_abort", "core", "alloc"]
build-std-features = ["panic_immediate_abort"]

请注意,如果你也将此用于 SSR,则将应用相同的 Cargo 配置文件。你需要明确指定你的目标:

[build]
target = "x86_64-unknown-linux-gnu" # 或其他任何内容

还要注意,在某些情况下,不会设置 cfg 功能 has_std,这可能会导致某些依赖项的构建错误,这些依赖项会检查 has_std。你可以通过添加以下内容来修复由此导致的任何构建错误:

[build]
rustflags = ["--cfg=has_std"]

你需要在 Cargo.toml[profile.release] 中添加 panic = "abort"。请注意,这会将相同的 build-std 和恐慌设置应用于你的服务器二进制文件,这可能不是你想要的。这里可能需要进一步探索。

  5. WASM 二进制文件大小的另一个来源可能是 serde 的序列化/反序列化代码。Leptos 默认使用 serde 来序列化和反序列化通过 create_resource 创建的资源。你可以尝试启用 miniserde 或 serde-lite 功能,它们允许你改用这些 crate 进行序列化和反序列化;两者都只实现了 serde 功能的一个子集,但通常针对大小而不是速度进行优化。

要避免的事情

有些 crate 往往会增加二进制文件的大小。例如,具有默认功能的 regex crate 会为 WASM 二进制文件增加约 500kb(主要是因为它必须引入 Unicode 表数据!)。在大小敏感的环境中,你可能会考虑一般避免使用正则表达式,甚至放弃并调用浏览器 API 来使用内置的正则表达式引擎。(这就是 leptos_router 在需要正则表达式的少数情况下所做的事情。)
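
作为一个带假设前提的示意(假设项目已经依赖 js_sys,且这段代码只会在浏览器/WASM 环境中运行),可以像下面这样调用浏览器内置的正则引擎,而不必把 regex crate 编译进 WASM:

use js_sys::RegExp;

// 仅在浏览器环境中有效:创建并使用 JS 内置的 RegExp 对象
fn looks_like_email(input: &str) -> bool {
    let re = RegExp::new(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", "");
    re.test(input)
}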

通常,Rust 对运行时性能的承诺有时与对小型二进制文件的承诺不一致。例如,Rust 会对泛型函数进行单态化,这意味着它会为它调用的每个泛型类型创建一个不同的函数副本。这比动态调度快得多,但会增加二进制文件的大小。Leptos 尝试非常谨慎地平衡运行时性能和二进制文件大小的考虑;但你可能会发现,编写使用许多泛型的代码往往会增加二进制文件的大小。例如,如果你有一个在其主体中包含大量代码的泛型组件,并使用四种不同的类型调用它,请记住编译器可能包含该代码的四个副本。重构以使用具体的内部函数或辅助函数通常可以保持性能和人体工程学,同时减小二进制文件的大小。
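
下面是这种“具体内部函数”重构的一个简化示意(名称均为虚构,仅用来说明思路):泛型外壳只做类型转换,大块逻辑放进只会被编译一次的具体函数。

// 泛型外壳:很小,即使被多种类型单态化,膨胀也有限
fn render_label(text: impl Into<String>) -> String {
    // 具体的内部函数:大块逻辑只编译一次
    fn inner(text: String) -> String {
        // ……想象这里有大量只依赖 String 的代码……
        format!("<label>{text}</label>")
    }
    inner(text.into())
}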

最后的想法

请记住,在服务器端渲染的应用程序中,JS 包大小/WASM 二进制文件大小仅影响_一_件事:首次加载时的交互时间。这对良好的用户体验非常重要:没有人希望点击一个按钮三次而它什么也不做,因为交互式代码仍在加载——但这并不是唯一重要的指标。

特别值得记住的是,流式传输单个 WASM 二进制文件意味着所有后续导航几乎都是瞬时的,仅取决于任何额外的数据加载。正是因为你的 WASM 二进制文件没有进行代码拆分,导航到新路由才不需要像几乎所有 JavaScript 框架那样加载额外的 JS/WASM。这是在自我安慰吗?也许。或者,这可能只是两种方法之间的一种诚实的权衡!

始终抓住机会优化应用程序中唾手可得的成果。并且在做出任何英勇的努力之前,始终在真实环境下使用真实的用户网络速度和设备来测试你的应用程序。

Guide: Islands

Leptos 0.5 introduces the new experimental-islands feature. This guide will walk through the islands feature and core concepts, while implementing a demo app using the islands architecture.

The Islands Architecture

The dominant JavaScript frontend frameworks (React, Vue, Svelte, Solid, Angular) all originated as frameworks for building client-rendered single-page apps (SPAs). The initial page load is rendered to HTML, then hydrated, and subsequent navigations are handled directly in the client. (Hence “single page”: everything happens from a single page load from the server, even if there is client-side routing later.) Each of these frameworks later added server-side rendering to improve initial load times, SEO, and user experience.

This means that by default, the entire app is interactive. It also means that the entire app has to be shipped to the client as JavaScript in order to be hydrated. Leptos has followed this same pattern.

You can read more in the chapters on server-side rendering.

But it’s also possible to work in the opposite direction. Rather than taking an entirely-interactive app, rendering it to HTML on the server, and then hydrating it in the browser, you can begin with a plain HTML page and add small areas of interactivity. This is the traditional format for any website or app before the 2010s: your browser makes a series of requests to the server and returns the HTML for each new page in response. After the rise of “single-page apps” (SPA), this approach has sometimes become known as a “multi-page app” (MPA) by comparison.

The phrase “islands architecture” has emerged recently to describe the approach of beginning with a “sea” of server-rendered HTML pages, and adding “islands” of interactivity throughout the page.

Additional Reading

The rest of this guide will look at how to use islands with Leptos. For more background on the approach in general, check out some of the articles below:

Activating Islands Mode

Let’s start with a fresh cargo-leptos app:

cargo leptos new --git leptos-rs/start

I’m using Actix because I like it. Feel free to use Axum; there should be approximately no server-specific differences in this guide.

I’m just going to run

cargo leptos build

in the background while I fire up my editor and keep writing.

The first thing I’ll do is to add the experimental-islands feature in my Cargo.toml. I need to add this to both leptos and leptos_actix:

leptos = { version = "0.5", features = ["nightly", "experimental-islands"] }
leptos_actix = { version = "0.5", optional = true, features = [
  "experimental-islands",
] }

Next I’m going to modify the hydrate function exported from src/lib.rs. I’m going to remove the line that calls leptos::mount_to_body(App) and replace it with

leptos::leptos_dom::HydrationCtx::stop_hydrating();

Each “island” we create will actually act as its own entrypoint, so our hydrate() function just says “okay, hydration’s done now.”

Okay, now fire up your cargo leptos watch and go to http://localhost:3000 (or wherever).

Click the button, and...

Nothing happens!

Perfect.

Note

The starter templates include use app::*; in their hydrate() function definitions. Once you've switched over to islands mode, you are no longer using the imported main App function, so you might think you can delete this. (And in fact, Rust lint tools might issue warnings if you don't!)

However, this can cause issues if you are using a workspace setup. We use wasm-bindgen to independently export an entrypoint for each function. In my experience, if you are using a workspace setup and nothing in your frontend crate actually uses the app crate, those bindings will not be generated correctly. See this discussion for more.

Using Islands

Nothing happens because we’ve just totally inverted the mental model of our app. Rather than being interactive by default and hydrating everything, the app is now plain HTML by default, and we need to opt into interactivity.

This has a big effect on WASM binary sizes: if I compile in release mode, this app is a measly 24kb of WASM (uncompressed), compared to 355kb in non-islands mode. (355kb is quite large for a “Hello, world!” It’s really just all the code related to client-side routing, which isn’t being used in the demo.)

When we click the button, nothing happens, because our whole page is static.

So how do we make something happen?

Let’s turn the HomePage component into an island!

Here was the non-interactive version:

#[component]
fn HomePage() -> impl IntoView {
    // Creates a reactive value to update the button
    let (count, set_count) = create_signal(0);
    let on_click = move |_| set_count.update(|count| *count += 1);

    view! {
        <h1>"Welcome to Leptos!"</h1>
        <button on:click=on_click>"Click Me: " {count}</button>
    }
}

Here’s the interactive version:

#[island]
fn HomePage() -> impl IntoView {
    // Creates a reactive value to update the button
    let (count, set_count) = create_signal(0);
    let on_click = move |_| set_count.update(|count| *count += 1);

    view! {
        <h1>"Welcome to Leptos!"</h1>
        <button on:click=on_click>"Click Me: " {count}</button>
    }
}

Now when I click the button, it works!

The #[island] macro works exactly like the #[component] macro, except that in islands mode, it designates this as an interactive island. If we check the binary size again, this is 166kb uncompressed in release mode; much larger than the 24kb totally static version, but much smaller than the 355kb fully-hydrated version.

If you open up the source for the page now, you’ll see that your HomePage island has been rendered as a special <leptos-island> HTML element which specifies which component should be used to hydrate it:

<leptos-island data-component="HomePage" data-hkc="0-0-0">
  <h1 data-hk="0-0-2">Welcome to Leptos!</h1>
  <button data-hk="0-0-3">
    Click Me:
    <!-- <DynChild> -->11<!-- </DynChild> -->
  </button>
</leptos-island>

The typical Leptos hydration keys and markers are only present inside the island, because only the island is hydrated.

Using Islands Effectively

Remember that only code within an #[island] needs to be compiled to WASM and shipped to the browser. This means that islands should be as small and specific as possible. My HomePage, for example, would be better broken apart into a regular component and an island:

#[component]
fn HomePage() -> impl IntoView {
    view! {
        <h1>"Welcome to Leptos!"</h1>
        <Counter/>
    }
}

#[island]
fn Counter() -> impl IntoView {
    // Creates a reactive value to update the button
    let (count, set_count) = create_signal(0);
    let on_click = move |_| set_count.update(|count| *count += 1);

    view! {
        <button on:click=on_click>"Click Me: " {count}</button>
    }
}

Now the <h1> doesn’t need to be included in the client bundle, or hydrated. This seems like a silly distinction now; but note that you can now add as much inert HTML content as you want to the HomePage itself, and the WASM binary size will remain exactly the same.

In regular hydration mode, your WASM binary size grows as a function of the size/complexity of your app. In islands mode, your WASM binary grows as a function of the amount of interactivity in your app. You can add as much non-interactive content as you want, outside islands, and it will not increase that binary size.

Unlocking Superpowers

So, this 50% reduction in WASM binary size is nice. But really, what’s the point?

The point comes when you combine two key facts:

  1. Code inside #[component] functions now only runs on the server.
  2. Children and props can be passed from the server to islands, without being included in the WASM binary.

This means you can run server-only code directly in the body of a component, and pass it directly into the children. Certain tasks that take a complex blend of server functions and Suspense in fully-hydrated apps can be done inline in islands.

We’re going to rely on a third fact in the rest of this demo:

  3. Context can be passed between otherwise-independent islands.

So, instead of our counter demo, let’s make something a little more fun: a tabbed interface that reads data from files on the server.

Passing Server Children to Islands

One of the most powerful things about islands is that you can pass server-rendered children into an island, without the island needing to know anything about them. Islands hydrate their own content, but not children that are passed to them.

As Dan Abramov of React put it (in the very similar context of RSCs), islands aren’t really islands: they’re donuts. You can pass server-only content directly into the “donut hole,” as it were, allowing you to create tiny atolls of interactivity, surrounded on both sides by the sea of inert server HTML.

In the demo code included below, I added some styles to show all server content as a light-blue “sea,” and all islands as light-green “land.” Hopefully that will help picture what I’m talking about!

To continue with the demo: I’m going to create a Tabs component. Switching between tabs will require some interactivity, so of course this will be an island. Let’s start simple for now:

#[island]
fn Tabs(labels: Vec<String>) -> impl IntoView {
    let buttons = labels
        .into_iter()
        .map(|label| view! { <button>{label}</button> })
        .collect_view();
    view! {
        <div style="display: flex; width: 100%; justify-content: space-between;">
            {buttons}
        </div>
    }
}

Oops. This gives me an error

error[E0463]: can't find crate for `serde`
  --> src/app.rs:43:1
   |
43 | #[island]
   | ^^^^^^^^^ can't find crate

Easy fix: let’s cargo add serde --features=derive. The #[island] macro wants to pull in serde here because it needs to serialize and deserialize the labels prop.

Now let’s update the HomePage to use Tabs.

#[component]
fn HomePage() -> impl IntoView {
	// these are the files we’re going to read
    let files = ["a.txt", "b.txt", "c.txt"];
	// the tab labels will just be the file names
	let labels = files.iter().copied().map(Into::into).collect();
    view! {
        <h1>"Welcome to Leptos!"</h1>
        <p>"Click any of the tabs below to read a recipe."</p>
        <Tabs labels/>
    }
}

If you take a look in the DOM inspector, you’ll see the island is now something like

<leptos-island
  data-component="Tabs"
  data-hkc="0-0-0"
  data-props='{"labels":["a.txt","b.txt","c.txt"]}'
></leptos-island>

Our labels prop is getting serialized to JSON and stored in an HTML attribute so it can be used to hydrate the island.

Now let’s add some tabs. For the moment, a Tab island will be really simple:

#[island]
fn Tab(index: usize, children: Children) -> impl IntoView {
    view! {
        <div>{children()}</div>
    }
}

Each tab, for now, will just be a <div> wrapping its children.

Our Tabs component will also get some children: for now, let’s just show them all.

#[island]
fn Tabs(labels: Vec<String>, children: Children) -> impl IntoView {
    let buttons = labels
        .into_iter()
        .map(|label| view! { <button>{label}</button> })
        .collect_view();
    view! {
        <div style="display: flex; width: 100%; justify-content: space-around;">
            {buttons}
        </div>
        {children()}
    }
}

Okay, now let’s go back into the HomePage. We’re going to create the list of tabs to put into our tab box.

#[component]
fn HomePage() -> impl IntoView {
    let files = ["a.txt", "b.txt", "c.txt"];
    let labels = files.iter().copied().map(Into::into).collect();
	let tabs = move || {
        files
            .into_iter()
            .enumerate()
            .map(|(index, filename)| {
                let content = std::fs::read_to_string(filename).unwrap();
                view! {
                    <Tab index>
                        <h2>{filename.to_string()}</h2>
                        <p>{content}</p>
                    </Tab>
                }
            })
            .collect_view()
    };

    view! {
        <h1>"Welcome to Leptos!"</h1>
        <p>"Click any of the tabs below to read a recipe."</p>
        <Tabs labels>
            <div>{tabs()}</div>
        </Tabs>
    }
}

Uh... What?

If you’re used to using Leptos, you know that you just can’t do this. All code in the body of components has to run on the server (to be rendered to HTML) and in the browser (to hydrate), so you can’t just call std::fs; it will panic, because there’s no access to the local filesystem (and certainly not to the server filesystem!) in the browser. This would be a security nightmare!

Except... wait. We’re in islands mode. This HomePage component really does only run on the server. So we can, in fact, just use ordinary server code like this.

Is this a dumb example? Yes! Synchronously reading from three different local files in a .map() is not a good choice in real life. The point here is just to demonstrate that this is, definitely, server-only content.

Go ahead and create three files in the root of the project called a.txt, b.txt, and c.txt, and fill them in with whatever content you’d like.

Refresh the page and you should see the content in the browser. Edit the files and refresh again; it will be updated.

You can pass server-only content from a #[component] into the children of an #[island], without the island needing to know anything about how to access that data or render that content.

This is really important. Passing server children to islands means that you can keep islands small. Ideally, you don’t want to slap an #[island] around a whole chunk of your page. You want to break that chunk out into an interactive piece, which can be an #[island], and a bunch of additional server content that can be passed to that island as children, so that the non-interactive subsections of an interactive part of the page can be kept out of the WASM binary.

Passing Context Between Islands

These aren’t really “tabs” yet: they just show every tab, all the time. So let’s add some simple logic to our Tabs and Tab components.

We’ll modify Tabs to create a simple selected signal. We provide the read half via context, and set the value of the signal whenever someone clicks one of our buttons.

#[island]
fn Tabs(labels: Vec<String>, children: Children) -> impl IntoView {
    let (selected, set_selected) = create_signal(0);
    provide_context(selected);

    let buttons = labels
        .into_iter()
        .enumerate()
        .map(|(index, label)| view! {
            <button on:click=move |_| set_selected(index)>
                {label}
            </button>
        })
        .collect_view();
// ...

And let’s modify the Tab island to use that context to show or hide itself:

#[island]
fn Tab(index: usize, children: Children) -> impl IntoView {
    let selected = expect_context::<ReadSignal<usize>>();
    view! {
        <div style:display=move || if selected() == index {
            "block"
        } else {
            "none"
        }>
// ...

Now the tabs behave exactly as I’d expect. Tabs passes the signal via context to each Tab, which uses it to determine whether it should be open or not.

That’s why in HomePage, I made let tabs = move || a function, and called it like {tabs()}: creating the tabs lazily this way meant that the Tabs island would already have provided the selected context by the time each Tab went looking for it.

Our complete tabs demo is about 220kb uncompressed: not the smallest demo in the world, but still about a third smaller than the counter button! Just for kicks, I built the same demo without islands mode, using #[server] functions and Suspense, and it was 429kb. So again, this was about a 50% savings in binary size. And this app includes quite minimal server-only content: remember that as we add additional server-only components and pages, this 220kb will not grow.

Overview

This demo may seem pretty basic. It is. But there are a number of immediate takeaways:

  • 50% WASM binary size reduction, which means measurable improvements in time to interactivity and initial load times for clients.
  • Reduced HTML page size. This one is less obvious, but it’s true and important: HTML generated from #[component]s doesn’t need all the hydration IDs and other boilerplate added.
  • Reduced data serialization costs. Creating a resource and reading it on the client means you need to serialize the data, so it can be used for hydration. If you’ve also read that data to create HTML in a Suspense, you end up with “double data,” i.e., the same exact data is both rendered to HTML and serialized as JSON, increasing the size of responses, and therefore slowing them down.
  • Easily use server-only APIs inside a #[component] as if it were a normal, native Rust function running on the server—which, in islands mode, it is!
  • Reduced #[server]/create_resource/Suspense boilerplate for loading server data.

Future Exploration

The experimental-islands feature included in 0.5 reflects work at the cutting edge of what frontend web frameworks are exploring right now. As it stands, our islands approach is very similar to Astro (before its recent View Transitions support): it allows you to build a traditional server-rendered, multi-page app and pretty seamlessly integrate islands of interactivity.

There are some small improvements that will be easy to add. For example, we can do something very much like Astro's View Transitions approach:

  • add client-side routing for islands apps by fetching subsequent navigations from the server and replacing the HTML document with the new one
  • add animated transitions between the old and new document using the View Transitions API
  • support explicit persistent islands, i.e., islands that you can mark with unique IDs (something like persist:searchbar on the component in the view), which can be copied over from the old to the new document without losing their current state

There are other, larger architectural changes that I’m not sold on yet.

Additional Information

Check out the islands PR, roadmap, and Hackernews demo for additional discussion.

Demo Code

use leptos::*;
use leptos_router::*;

#[component]
pub fn App() -> impl IntoView {
    view! {
        <Router>
            <main style="background-color: lightblue; padding: 10px">
                <Routes>
                    <Route path="" view=HomePage/>
                </Routes>
            </main>
        </Router>
    }
}

/// Renders the home page of your application.
#[component]
fn HomePage() -> impl IntoView {
    let files = ["a.txt", "b.txt", "c.txt"];
    let labels = files.iter().copied().map(Into::into).collect();
    let tabs = move || {
        files
            .into_iter()
            .enumerate()
            .map(|(index, filename)| {
                let content = std::fs::read_to_string(filename).unwrap();
                view! {
                    <Tab index>
                        <div style="background-color: lightblue; padding: 10px">
                            <h2>{filename.to_string()}</h2>
                            <p>{content}</p>
                        </div>
                    </Tab>
                }
            })
            .collect_view()
    };

    view! {
        <h1>"Welcome to Leptos!"</h1>
        <p>"Click any of the tabs below to read a recipe."</p>
        <Tabs labels>
            <div>{tabs()}</div>
        </Tabs>
    }
}

#[island]
fn Tabs(labels: Vec<String>, children: Children) -> impl IntoView {
    let (selected, set_selected) = create_signal(0);
    provide_context(selected);

    let buttons = labels
        .into_iter()
        .enumerate()
        .map(|(index, label)| {
            view! {
                <button on:click=move |_| set_selected(index)>
                    {label}
                </button>
            }
        })
        .collect_view();
    view! {
        <div
            style="display: flex; width: 100%; justify-content: space-around;\
            background-color: lightgreen; padding: 10px;"
        >
            {buttons}
        </div>
        {children()}
    }
}

#[island]
fn Tab(index: usize, children: Children) -> impl IntoView {
    let selected = expect_context::<ReadSignal<usize>>();
    view! {
        <div
            style:background-color="lightgreen"
            style:padding="10px"
            style:display=move || if selected() == index {
                "block"
            } else {
                "none"
            }
        >
            {children()}
        </div>
    }
}

Appendix: How does the Reactive System Work?

You don’t need to know very much about how the reactive system actually works in order to use the library successfully. But it’s always useful to understand what’s going on behind the scenes once you start working with the framework at an advanced level.

The reactive primitives you use are divided into three sets:

  • Signals (ReadSignal/WriteSignal, RwSignal, Resource, Trigger) Values you can actively change to trigger reactive updates.
  • Computations (Memos) Values that depend on signals (or other computations) and derive a new reactive value through some pure computation.
  • Effects Observers that listen to changes in some signals or computations and run a function, causing some side effect.

Derived signals are a kind of non-primitive computation: as plain closures, they simply allow you to refactor some repeated signal-based computation into a reusable function that can be called in multiple places, but they are not represented in the reactive system itself.

All the other primitives actually exist in the reactive system as nodes in a reactive graph.
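
As a minimal sketch (assuming the nightly function-call syntax used throughout this book), the practical difference looks like this: the derived signal is just a closure you call, while the memo is a graph node that caches its value and only notifies subscribers when that value actually changes.

use leptos::*;

fn derived_vs_memo() {
    let (count, set_count) = create_signal(0);

    // Derived signal: a plain closure, recomputed every time it is called;
    // it is not a node in the reactive graph.
    let double = move || count() * 2;

    // Memo: a real graph node that only recomputes when `count` changes.
    let double_memo = create_memo(move |_| count() * 2);

    set_count(1);
    assert_eq!(double(), 2);
    assert_eq!(double_memo(), 2);
}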

Most of the work of the reactive system consists of propagating changes from signals to effects, possibly through some intervening memos.

The assumption of the reactive system is that effects (like rendering to the DOM or making a network request) are orders of magnitude more expensive than things like updating a Rust data structure inside your app.

So the primary goal of the reactive system is to run effects as infrequently as possible.

Leptos does this through the construction of a reactive graph.

Leptos’s current reactive system is based heavily on the Reactively library for JavaScript. You can read Milo’s article “Super-Charging Fine-Grained Reactivity” for an excellent account of its algorithm, as well as fine-grained reactivity in general—including some beautiful diagrams!

The Reactive Graph

Signals, memos, and effects all share three characteristics:

  • Value They have a current value: either the signal’s value, or (for memos and effects) the value returned by the previous run, if any.
  • Sources Any other reactive primitives they depend on. (For signals, this is an empty set.)
  • Subscribers Any other reactive primitives that depend on them. (For effects, this is an empty set.)

In reality then, signals, memos, and effects are just conventional names for one generic concept of a “node” in a reactive graph. Signals are always “root nodes,” with no sources/parents. Effects are always “leaf nodes,” with no subscribers. Memos typically have both sources and subscribers.
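
A purely conceptual sketch of such a node might look like the following; these are not Leptos’s actual internal types, just the three shared characteristics written down as a struct.

use std::any::Any;

type NodeId = usize;

struct ReactiveNode {
    // Value: the signal’s value, or the previous return value of a memo/effect
    value: Option<Box<dyn Any>>,
    // Sources: the nodes this one depends on (empty for signals)
    sources: Vec<NodeId>,
    // Subscribers: the nodes that depend on this one (empty for effects)
    subscribers: Vec<NodeId>,
}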

Simple Dependencies

So imagine the following code:

// A
let (name, set_name) = create_signal("Alice");

// B
let name_upper = create_memo(move |_| name.with(|n| n.to_uppercase()));

// C
create_effect(move |_| {
	log!("{}", name_upper());
});

set_name("Bob");

You can easily imagine the reactive graph here: name is the only signal/origin node, the create_effect is the only effect/terminal node, and there’s one intervening memo.

A   (name)
|
B   (name_upper)
|
C   (the effect)

Splitting Branches

Let’s make it a little more complex.

// A
let (name, set_name) = create_signal("Alice");

// B
let name_upper = create_memo(move |_| name.with(|n| n.to_uppercase()));

// C
let name_len = create_memo(move |_| name.with(|n| n.len()));

// D
create_effect(move |_| {
	log!("len = {}", name_len());
});

// E
create_effect(move |_| {
	log!("name = {}", name_upper());
});

This is also pretty straightforward: a single source signal (name/A) divides into two parallel tracks: name_upper/B and name_len/C, each of which has an effect that depends on it.

 __A__
|     |
B     C
|     |
E     D

Now let’s update the signal.

set_name("Bob");

We immediately log

len = 3
name = BOB

Let’s do it again.

set_name("Tim");

The log should show

name = TIM

len = 3 does not log again.

Remember: the goal of the reactive system is to run effects as infrequently as possible. Changing name from "Bob" to "Tim" will cause each of the memos to re-run. But they will only notify their subscribers if their value has actually changed. "BOB" and "TIM" are different, so that effect runs again. But both names have the length 3, so they do not run again.

Reuniting Branches

One more example, of what’s sometimes called the diamond problem.

// A
let (name, set_name) = create_signal("Alice");

// B
let name_upper = create_memo(move |_| name.with(|n| n.to_uppercase()));

// C
let name_len = create_memo(move |_| name.with(|n| n.len()));

// D
create_effect(move |_| {
	log!("{} is {} characters long", name_upper(), name_len());
});

What does the graph look like for this?

 __A__
|     |
B     C
|     |
|__D__|

You can see why it's called the “diamond problem.” If I’d connected the nodes with straight lines instead of bad ASCII art, it would form a diamond: two memos, each of which depend on a signal, which feed into the same effect.

A naive, push-based reactive implementation would cause this effect to run twice, which would be bad. (Remember, our goal is to run effects as infrequently as we can.) For example, you could implement a reactive system such that signals and memos immediately propagate their changes all the way down the graph, through each dependency, essentially traversing the graph depth-first. In other words, updating A would notify B, which would notify D; then A would notify C, which would notify D again. This is both inefficient (D runs twice) and glitchy (D actually runs with the incorrect value for the second memo during its first run.)

Solving the Diamond Problem

Any reactive implementation worth its salt is dedicated to solving this issue. There are a number of different approaches (again, see Milo’s article for an excellent overview).

Here’s how ours works, in brief.

A reactive node is always in one of three states:

  • Clean: it is known not to have changed
  • Check: it is possible it has changed
  • Dirty: it has definitely changed

Updating a signal marks that signal Dirty, and marks all of its descendants Check, recursively. Any of its descendants that are effects are added to a queue to be re-run.

    ____A (DIRTY)___
   |               |
B (CHECK)    C (CHECK)
   |               |
   |____D (CHECK)__|

Now those effects are run. (All of the effects will be marked Check at this point.) Before re-running its computation, the effect checks its parents to see if they are dirty. So

  • D goes to B and checks if it is Dirty.
  • But B is also marked Check. So B does the same thing:
    • B goes to A, and finds that it is Dirty.
    • This means B needs to re-run, because one of its sources has changed.
    • B re-runs, generating a new value, and marks itself Clean
    • Because B is a memo, it then checks its prior value against the new value.
    • If they are the same, B returns "no change." Otherwise, it returns "yes, I changed."
  • If B returned “yes, I changed,” D knows that it definitely needs to run and re-runs immediately before checking any other sources.
  • If B returned “no, I didn’t change,” D continues on to check C (see process above for B.)
  • If neither B nor C has changed, the effect does not need to re-run.
  • If either B or C did change, the effect now re-runs.

Because the effect is only marked Check once and only queued once, it only runs once.

If the naive version was a “push-based” reactive system, simply pushing reactive changes all the way down the graph and therefore running the effect twice, this version could be called “push-pull.” It pushes the Check status all the way down the graph, but then “pulls” its way back up. In fact, for large graphs it may end up bouncing back up and down and left and right on the graph as it tries to determine exactly which nodes need to re-run.
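
To make that marking-and-checking concrete, here is a toy sketch of the Clean/Check/Dirty bookkeeping. Everything in it (NodeState, Node, mark_dirty, mark_check, source_changed) is illustrative only; it is not Leptos's actual internals, and effect queueing is omitted for brevity.

#[derive(Clone, Copy, PartialEq)]
enum NodeState {
    Clean,
    Check,
    Dirty,
}

struct Node {
    state: NodeState,
    sources: Vec<usize>,     // the parents this node reads from
    subscribers: Vec<usize>, // the children that read from this node
    changed: bool,           // did this node's value change in the current update?
}

// "Push" phase: setting a signal marks it Dirty and all of its descendants Check.
fn mark_dirty(nodes: &mut Vec<Node>, id: usize) {
    // a new update begins: clear the per-update change flags
    for node in nodes.iter_mut() {
        node.changed = false;
    }
    nodes[id].state = NodeState::Dirty;
    for sub in nodes[id].subscribers.clone() {
        mark_check(nodes, sub);
    }
}

fn mark_check(nodes: &mut Vec<Node>, id: usize) {
    if nodes[id].state == NodeState::Clean {
        nodes[id].state = NodeState::Check;
        for sub in nodes[id].subscribers.clone() {
            mark_check(nodes, sub);
        }
    }
}

// "Pull" phase: before re-running, an effect (or memo) asks each of its sources
// whether it really changed; a Check source recursively asks its own sources.
fn source_changed(nodes: &mut Vec<Node>, id: usize) -> bool {
    match nodes[id].state {
        // already settled earlier in this update: reuse the recorded answer
        NodeState::Clean => nodes[id].changed,
        // a signal that was set: it always counts as changed
        NodeState::Dirty => {
            nodes[id].state = NodeState::Clean;
            nodes[id].changed = true;
            true
        }
        // a memo: it changed only if at least one of its sources changed
        // (a real memo would also re-run here and compare old vs. new values)
        NodeState::Check => {
            let sources = nodes[id].sources.clone();
            let changed = sources.into_iter().any(|s| source_changed(nodes, s));
            nodes[id].state = NodeState::Clean;
            nodes[id].changed = changed;
            changed
        }
    }
}

In the diamond example, mark_dirty(A) marks B, C, and D Check; when the queued effect D runs, it pulls on B and C, each of which pulls on A exactly once, so D re-runs at most once.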

Note this important trade-off: Push-based reactivity propagates signal changes more quickly, at the expense of over-re-running memos and effects. Remember: the reactive system is designed to minimize how often you re-run effects, on the (accurate) assumption that side effects are orders of magnitude more expensive than this kind of cache-friendly graph traversal happening entirely inside the library’s Rust code. The measurement of a good reactive system is not how quickly it propagates changes, but how quickly it propagates changes without over-notifying.

Memos vs. Signals

Note that signals always notify their children; i.e., a signal is always marked Dirty when it updates, even if its new value is the same as the old value. Otherwise, we’d have to require PartialEq on signals, and this is actually quite an expensive check on some types. (For example, it would add an unnecessary equality check to something like some_vec_signal.update(|n| n.pop()), even though it’s clear that the value has in fact changed.)

Memos, on the other hand, check whether they change before notifying their children. They only run their calculation once, no matter how many times you .get() the result, but they run whenever their signal sources change. This means that if the memo’s computation is very expensive, you may actually want to memoize its inputs as well, so that the memo only re-calculates when it is sure its inputs have changed.
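
As a small illustration of the difference (a sketch assuming use leptos::*;, the log! macro from leptos::logging, and the nightly call syntax used elsewhere in this chapter):

let (num, set_num) = create_signal(0);
let is_even = create_memo(move |_| num() % 2 == 0);

// depends on the signal directly: re-runs on *every* set_num call,
// even when the new value equals the old one
create_effect(move |_| log!("signal: {}", num()));

// depends on the memo: re-runs only when is_even actually changes
create_effect(move |_| log!("memo: {}", is_even()));

set_num(2); // signal effect runs; memo effect does not (0 and 2 are both even)
set_num(2); // signal effect runs again; memo effect still does not
set_num(3); // both run: is_even flipped from true to false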

Memos vs. Derived Signals

All of this is cool, and memos are pretty great. But most actual applications have reactive graphs that are quite shallow and quite wide: you might have 100 source signals and 500 effects, but no memos or, in rare cases, three or four memos between the signal and the effect. Memos are extremely good at what they do: limiting how often they notify their subscribers that they have changed. But as this description of the reactive system should show, they come with overhead in three forms:

  1. A PartialEq check, which may or may not be expensive.
  2. Added memory cost of storing another node in the reactive system.
  3. Added computational cost of reactive graph traversal.

In cases in which the computation itself is cheaper than this reactive work, you should avoid “over-wrapping” with memos and simply use derived signals. Here’s a great example in which you should never use a memo:

let (a, set_a) = create_signal(1);
// none of these make sense as memos
let b = move || a() + 2;
let c = move || b() % 2 == 0;
let d = move || if c() { "even" } else { "odd" };

set_a(2);
set_a(3);
set_a(5);

Even though memoizing would technically save an extra calculation of d between setting a to 3 and 5, these calculations are themselves cheaper than the reactive algorithm.

At the very most, you might consider memoizing the final node before running some expensive side effect:

let text = create_memo(move |_| {
    d()
});
create_effect(move |_| {
    engrave_text_into_bar_of_gold(&text());
});

Appendix: The Life Cycle of a Signal

Three questions commonly arise at the intermediate level when using Leptos:

  1. How can I connect to the component lifecycle, running some code when a component mounts or unmounts?
  2. How do I know when signals are disposed, and why do I get an occasional panic when trying to access a disposed signal?
  3. How is it possible that signals are Copy and can be moved into closures and other structures without being explicitly cloned?

The answers to these three questions are closely inter-related, and are each somewhat complicated. This appendix will try to give you the context for understanding the answers, so that you can reason correctly about your application's code and how it runs.

The Component Tree vs. The Decision Tree

Consider the following simple Leptos app:

use leptos::logging::log;
use leptos::*;

#[component]
pub fn App() -> impl IntoView {
    let (count, set_count) = create_signal(0);

    view! {
        <button on:click=move |_| set_count.update(|n| *n += 1)>"+1"</button>
        {move || if count() % 2 == 0 {
            view! { <p>"Even numbers are fine."</p> }.into_view()
        } else {
            view! { <InnerComponent count/> }.into_view()
        }}
    }
}

#[component]
pub fn InnerComponent(count: ReadSignal<usize>) -> impl IntoView {
    create_effect(move |_| {
        log!("count is odd and is {}", count());
    });

    view! {
        <OddDuck/>
        <p>{count}</p>
    }
}

#[component]
pub fn OddDuck() -> impl IntoView {
    view! {
        <p>"You're an odd duck."</p>
    }
}

All it does is show a counter button, and then one message if it's even, and a different message if it's odd. If it's odd, it also logs the values in the console.

One way to map out this simple application would be to draw a tree of nested components:

App 
|_ InnerComponent
   |_ OddDuck

Another way would be to draw the tree of decision points:

root
|_ is count even?
   |_ yes
   |_ no

If you combine the two together, you'll notice that they don't map onto one another perfectly. The decision tree slices the view we created in InnerComponent into three pieces, and combines part of InnerComponent with the OddDuck component:

DECISION            COMPONENT           DATA    SIDE EFFECTS
root                <App/>              (count) render <button>
|_ is count even?   <InnerComponent/>
   |_ yes                                       render even <p>
   |_ no                                        start logging the count 
                    <OddDuck/>                  render odd <p> 
                                                render odd <p> (in <InnerComponent/>!)

Looking at this table, I notice the following things:

  1. The component tree and the decision tree don't match one another: the "is count even?" decision splits <InnerComponent/> into three parts (one that never changes, one if even, one if odd), and merges one of these with the <OddDuck/> component.
  2. The decision tree and the list of side effects correspond perfectly: each side effect is created at a specific decision point.
  3. The decision tree and the tree of data also line up. It's hard to see with only one signal in the table, but unlike a component, which is a function that can include multiple decisions or none, a signal is always created at a specific line in the tree of decisions.

Here's the thing: The structure of your data and the structure of side effects affect the actual functionality of your application. The structure of your components is just a convenience of authoring. You don't care, and you shouldn't care, which component rendered which <p> tag, or which component created the effect to log the values. All that matters is that they happen at the right times.

In Leptos, components do not exist. That is to say: You can write your application as a tree of components, because that's convenient, and we provide some debugging tools and logging built around components, because that's convenient too. But your components do not exist at runtime: Components are not a unit of change detection or of rendering. They are simply function calls. You can write your whole application in one big component, or split it into a hundred components, and it does not affect the runtime behavior, because components don't really exist.

The decision tree, on the other hand, does exist. And it's really important!

The Decision Tree, Rendering, and Ownership

Every decision point is some kind of reactive statement: a signal or a function that can change over time. When you pass a signal or a function into the renderer, it automatically wraps it in an effect that subscribes to any signals it contains, and updates the view accordingly over time.

This means that when your application is rendered, it creates a tree of nested effects that perfectly mirrors the decision tree. In pseudo-code:

// root
let button = /* render the <button> once */;

// the renderer wraps an effect around the `move || if count() ...`
create_effect(|_| {
    if count() % 2 == 0 {
        let p = /* render the even <p> */;
    } else {
        // the user created an effect to log the count
        create_effect(|_| {
            log!("count is odd and is {}", count());
        });

        let p1 = /* render the <p> from OddDuck */;
        let p2 = /* render the second <p> */ 

        // the renderer creates an effect to update the second <p>
        create_effect(|_| {
            // update the content of the <p> with the signal
            p2.set_text_content(count.get());
        });
    }
})

Each reactive value is wrapped in its own effect to update the DOM, or run any other side effects of changes to signals. But you don't need these effects to keep running forever. For example, when count switches from an odd number back to an even number, the second <p> no longer exists, so the effect to keep updating it is no longer useful. Instead of running forever, effects are canceled when the decision that created them changes. In other words, and more precisely: effects are canceled whenever the effect that was running when they were created re-runs. If they were created in a conditional branch, and re-running the effect goes through the same branch, the effect will be created again: if not, it will not.

From the perspective of the reactive system itself, your application's "decision tree" is really a reactive "ownership tree." Simply put, a reactive "owner" is the effect or memo that is currently running. It owns effects created within it, they own their own children, and so on. When an effect is going to re-run, it first "cleans up" its children, then runs again.

So far, this model is shared with the reactive system as it exists in JavaScript frameworks like S.js or Solid, in which the concept of ownership exists to automatically cancel effects.

What Leptos adds is that we add a second, similar meaning to ownership: a reactive owner not only owns its child effects, so that it can cancel them; it also owns its signals (memos, etc.) so that it can dispose of them.

Ownership and the Copy Arena

This is the innovation that allows Leptos to be usable as a Rust UI framework. Traditionally, managing UI state in Rust has been hard, because UI is all about shared mutability. (A simple counter button is enough to see the problem: You need both immutable access to set the text node showing the counter's value, and mutable access in the click handler, and every Rust UI framework is designed around the fact that Rust is designed to prevent exactly that!) Using something like an event handler in Rust traditionally relies on primitives for communicating via shared memory with interior mutability (Rc<RefCell<_>>, Arc<Mutex<_>>) or for sharing memory by communicating via channels, either of which often requires explicit .clone()ing to be moved into an event listener. This is kind of fine, but also an enormous inconvenience.

Leptos has always used a form of arena allocation for signals instead. A signal itself is essentially an index into a data structure that's held elsewhere. It's a cheap-to-copy integer type that does not do reference counting on its own, so it can be copied around, moved into event listeners, etc. without explicit cloning.
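
For example (a sketch, not tied to the app above): the same WriteSignal handle can be moved into several event listeners without a single .clone(), precisely because it is just a cheap Copy index.

let (count, set_count) = create_signal(0);

// both closures take their own copy of `set_count`; no Rc, RefCell, or .clone() needed
let increment = move |_| set_count.update(|n| *n += 1);
let reset = move |_| set_count.set(0);

view! {
    <button on:click=increment>"+1"</button>
    <button on:click=reset>"Reset"</button>
    <p>{count}</p>
}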

Instead of Rust lifetimes or reference counting, the life cycles of these signals are determined by the ownership tree.

Just as all effects belong to an owning parent effect, and the children are canceled when the owner reruns, so too all signals belong to an owner, and are disposed of when the parent reruns.

In most cases, this is completely fine. Imagine that in our example above, <OddDuck/> created some other signal that it used to update part of its UI. In most cases, that signal will be used for local state in that component, or maybe passed down as a prop to another component. It's unusual for it to be hoisted up out of the decision tree and used somewhere else in the application. When the count switches back to an even number, it is no longer needed and can be disposed.

However, this means there are two possible issues that can arise.

Signals can be used after they are disposed

The ReadSignal or WriteSignal that you hold is just an integer: say, 3 if it's the 3rd signal in the application. (As always, the reality is a bit more complicated, but not much.) You can copy that number all over the place and use it to say, "Hey, get me signal 3." When the owner cleans up, the value of signal 3 will be invalidated; but the number 3 that you've copied all over the place can't be invalidated. (Not without a whole garbage collector!) That means that if you push signals back "up" the decision tree, and store them somewhere conceptually "higher" in your application than they were created, they can be accessed after being disposed.

If you try to update a signal after it was disposed, nothing bad really happens. The framework will just warn you that you tried to update a signal that no longer exists. But if you try to access one, there's no coherent answer other than panicking: there is no value that could be returned. (There are try_ equivalents to the .get() and .with() methods that will simply return None if a signal has been disposed).
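
If you do end up holding a possibly disposed signal, defensive access looks roughly like this (a sketch; maybe_disposed is a hypothetical ReadSignal<i32> that may have outlived its owner, and log! comes from leptos::logging):

if let Some(value) = maybe_disposed.try_get() {
    log!("still alive: {}", value);
} else {
    log!("signal was already disposed; skipping");
}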

Signals can be leaked if you create them in a higher scope and never dispose of them

The opposite is also true, and comes up particularly when working with collections of signals, like an RwSignal<Vec<RwSignal<_>>>. If you create a signal at a higher level, and pass it down to a component at a lower level, it is not disposed until the higher-up owner is cleaned up.

For example, if you have a todo app that creates a new RwSignal<Todo> for each todo, stores it in an RwSignal<Vec<RwSignal<Todo>>>, and then passes it down to a <Todo/>, that signal is not automatically disposed when you remove the todo from the list, but must be manually disposed, or it will "leak" for as long as its owner is still alive. (See the TodoMVC example for more discussion.)

This is only an issue when you create signals, store them in a collection, and remove them from the collection without manually disposing of them as well.
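
Here is a sketch of what "manually disposing" can look like when removing an item from such a collection. The Todo struct, its id field, and the remove_todo helper are illustrative only; see the TodoMVC example for a complete version.

struct Todo {
    id: usize,
    // ...
}

fn remove_todo(todos: RwSignal<Vec<RwSignal<Todo>>>, target_id: usize) {
    // take the inner signal out of the list first...
    let mut removed = None;
    todos.update(|list| {
        if let Some(index) = list.iter().position(|t| t.with(|t| t.id == target_id)) {
            removed = Some(list.remove(index));
        }
    });
    // ...then dispose of it, so it doesn't leak until the owner of `todos` is cleaned up
    if let Some(inner) = removed {
        inner.dispose();
    }
}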

Connecting the Dots

The answers to the questions we started with should probably make some sense now.

Component Life-Cycle

There is no component life-cycle, because components don't really exist. But there is an ownership lifecycle, and you can use it to accomplish the same things:

  • before mount: simply running code in the body of a component will run it "before the component mounts"
  • on mount: create_effect runs a tick after the rest of the component, so it can be useful for effects that need to wait for the view to be mounted to the DOM.
  • on unmount: You can use on_cleanup to give the reactive system code that should run while the current owner is cleaning up, before running again. Because an owner is around a "decision," this means that on_cleanup will run when your component unmounts: if something can unmount, the renderer must have created an effect that's unmounting it!
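
Putting those three bullets together, a minimal sketch (assuming use leptos::*; and log! from leptos::logging) might look like this:

#[component]
fn LifecycleDemo() -> impl IntoView {
    // "before mount": code in the component body runs as soon as the function is called
    log!("setting up");

    // "on mount": effects run a tick after the component body, once the view exists
    create_effect(move |_| {
        log!("the view has been mounted");
    });

    // "on unmount": runs when the owner that created this component cleans up
    on_cleanup(move || {
        log!("cleaning up");
    });

    view! { <p>"Lifecycle demo"</p> }
}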

Issues with Disposed Signals

Generally speaking, problems can only arise here if you are creating a signal lower down in the ownership tree and storing it somewhere higher up. If you run into issues here, you should instead "hoist" the signal creation up into the parent, and then pass the created signals down—making sure to dispose of them on removal, if needed!

Copy signals

The whole system of Copyable wrapper types (signals, StoredValue, and so on) uses the ownership tree as a close approximation of the life-cycle of different parts of your UI. In effect, it parallels the Rust language's system of lifetimes based on blocks of code with a system of lifetimes based on sections of UI. This can't always be perfectly checked at compile time, but overall we think it's a net positive.

Introduction

This book is intended as an introduction to the Leptos Web framework. It will walk through the fundamental concepts you need to build applications, beginning with a simple application rendered in the browser, and building toward a full-stack application with server-side rendering and hydration.

The guide doesn’t assume you know anything about fine-grained reactivity or the details of modern Web frameworks. It does assume you are familiar with the Rust programming language, HTML, CSS, the DOM, and basic Web APIs.

Leptos is most similar to frameworks like Solid (JavaScript) and Sycamore (Rust). There are some similarities to other frameworks like React (JavaScript), Svelte (JavaScript), Yew (Rust), and Dioxus (Rust), so knowledge of one of those frameworks may also make it easier to understand Leptos.

You can find more detailed docs for each part of the API at Docs.rs.

The source code for the book is available here. PRs for typos or clarification are always welcome.

Getting Started

There are two basic paths to getting started with Leptos:

  1. Client-side rendering (CSR) with Trunk - a great option if you just want to make a snappy website with Leptos, or work with a pre-existing server or API. In CSR mode, Trunk compiles your Leptos app to WebAssembly (WASM) and runs it in the browser like a typical Javascript single-page app (SPA). The advantages of Leptos CSR include faster build times and a quicker iterative development cycle, as well as a simpler mental model and more options for deploying your app. CSR apps do come with some disadvantages: initial load times for your end users are slower compared to a server-side rendering approach, and the usual SEO challenges that come along with using a JS single-page app model apply to Leptos CSR apps as well. Also note that, under the hood, an auto-generated snippet of JS is used to load the Leptos WASM bundle, so JS must be enabled on the client device for your CSR app to display properly. As with all software engineering, there are trade-offs here you'll need to consider.

  2. Full-stack, server-side rendering (SSR) with cargo-leptos - SSR is a great option for building CRUD-style websites and custom web apps if you want Rust powering both your frontend and backend. With the Leptos SSR option, your app is rendered to HTML on the server and sent down to the browser; then, WebAssembly is used to instrument the HTML so your app becomes interactive - this process is called 'hydration'. On the server side, Leptos SSR apps integrate closely with your choice of either Actix-web or Axum server libraries, so you can leverage those communities' crates to help build out your Leptos server. The advantages of taking the SSR route with Leptos include helping you get the best initial load times and optimal SEO scores for your web app. SSR apps can also dramatically simplify working across the server/client boundary via a Leptos feature called "server functions", which lets you transparently call functions on the server from your client code (more on this feature later). Full-stack SSR isn't all rainbows and butterflies, though - disadvantages include a slower developer iteration loop (because you need to recompile both the server and client when making Rust code changes), as well as some added complexity that comes along with hydration.

By the end of the book, you should have a good idea of which trade-offs to make and which route to take - CSR or SSR - depending on your project's requirements.

In Part 1 of this book, we'll start with client-side rendering Leptos sites and building reactive UIs using Trunk to serve our JS and WASM bundle to the browser.

We’ll introduce cargo-leptos in Part 2 of this book, which is all about working with the full power of Leptos in its full-stack, SSR mode.

Note

If you're coming from the Javascript world and terms like client-side rendering (CSR) and server-side rendering (SSR) are unfamiliar to you, the easiest way to understand the difference is by analogy:

Leptos' CSR mode is similar to working with React (or a 'signals'-based framework like SolidJS), and focuses on producing a client-side UI which you can use with any tech stack on the server.

Using Leptos' SSR mode is similar to working with a full-stack framework like Next.js in the React world (or Solid's "SolidStart" framework) - SSR helps you build sites and apps that are rendered on the server then sent down to the client. SSR can help to improve your site's loading performance and accessibility as well as make it easier for one person to work on both client- and server-side without needing to context-switch between different languages for frontend and backend.

The Leptos framework can be used either in CSR mode to just make a UI (like React), or you can use Leptos in full-stack SSR mode (like Next.js) so that you can build both your UI and your server with one language: Rust.

Hello World! Getting Set up for Leptos CSR Development

First up, make sure Rust is installed and up-to-date (see here if you need instructions).

If you don’t have it installed already, you can install the "Trunk" tool for running Leptos CSR sites by running the following on the command-line:

cargo install trunk

And then create a basic Rust project

cargo init leptos-tutorial

cd into your new leptos-tutorial project and add leptos as a dependency

cargo add leptos --features=csr,nightly

Or you can leave off nightly if you're using stable Rust

cargo add leptos --features=csr

Using nightly Rust and the nightly feature in Leptos enables the function-call syntax for signal getters and setters that is used in most of this book.

To use nightly Rust, you can either opt into nightly for all your Rust projects by running

rustup toolchain install nightly
rustup default nightly

or only for this project

rustup toolchain install nightly
cd <into your project>
rustup override set nightly

See here for more details.

If you’d rather use stable Rust with Leptos, you can do that too. In the guide and examples, you’ll just use the ReadSignal::get() and WriteSignal::set() methods instead of calling signal getters and setters as functions.

Make sure you've added the wasm32-unknown-unknown target so that Rust can compile your code to WebAssembly to run in the browser.

rustup target add wasm32-unknown-unknown

Create a simple index.html in the root of the leptos-tutorial directory

<!DOCTYPE html>
<html>
  <head></head>
  <body></body>
</html>

And add a simple “Hello, world!” to your main.rs

use leptos::*;

fn main() {
    mount_to_body(|| view! { <p>"Hello, world!"</p> })
}

Your directory structure should now look something like this

leptos_tutorial
├── src
│   └── main.rs
├── Cargo.toml
├── index.html

Now run trunk serve --open from the root of the leptos-tutorial directory. Trunk should automatically compile your app and open it in your default browser. If you make edits to main.rs, Trunk will recompile your source code and live-reload the page.

Welcome to the world of UI development with Rust and WebAssembly (WASM), powered by Leptos and Trunk!

Note

If you are using Windows, note that trunk serve --open may not work. If you have issues with --open, simply use trunk serve and open a browser tab manually.


Now before we get started building your first real UIs with Leptos, there are a couple of things you might want to know to help make your experience with Leptos just a little bit easier.

Leptos Developer Experience Improvements

There are a couple of things you can do to improve your experience of developing websites and apps with Leptos. You may want to take a few minutes and set up your environment to optimize your development experience, especially if you want to code along with the examples in this book.

1) Set up console_error_panic_hook

By default, panics that happen while running your WASM code in the browser just throw an error in the browser with an unhelpful message like Unreachable executed and a stack trace that points into your WASM binary.

With console_error_panic_hook, you get an actual Rust stack trace that includes a line in your Rust source code.

It's very easy to set up:

  1. Run cargo add console_error_panic_hook in your project
  2. In your main function, add console_error_panic_hook::set_once();

If this is unclear, click here for an example.
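
For instance, in the "Hello, world!" app from earlier, the whole setup is one extra line (a sketch, assuming you have already run cargo add console_error_panic_hook):

use leptos::*;

fn main() {
    // gives panics a readable Rust stack trace in the browser console
    console_error_panic_hook::set_once();
    mount_to_body(|| view! { <p>"Hello, world!"</p> })
}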

Now you should have much better panic messages in the browser console!

2) Editor Autocompletion inside #[component] and #[server]

Because of the nature of macros (they can expand from anything to anything, but only if the input is exactly correct at that instant), it can be hard for rust-analyzer to do proper autocompletion and other support.

If you run into issues using these macros in your editor, you can explicitly tell rust-analyzer to ignore certain proc macros. For the #[server] macro especially, which annotates function bodies but doesn't actually transform anything inside the body of your function, this can be really helpful.

Starting in Leptos version 0.5.3, rust-analyzer support was added for the #[component] macro, but if you run into issues, you may want to add #[component] to the macro ignore list as well (see below). Note that this means that rust-analyzer doesn't know about your component props, which may generate its own set of errors or warnings in the IDE.

VSCode settings.json:

"rust-analyzer.procMacro.ignored": {
	"leptos_macro": [
        // optional:
		// "component",
		"server"
	],
}

VSCode with cargo-leptos settings.json:

"rust-analyzer.procMacro.ignored": {
	"leptos_macro": [
        // optional:
		// "component",
		"server"
	],
},
// if code that is cfg-gated for the `ssr` feature is shown as inactive,
// you may want to tell rust-analyzer to enable the `ssr` feature by default
//
// you can also use `rust-analyzer.cargo.allFeatures` to enable all features
"rust-analyzer.cargo.features": ["ssr"]

neovim with lspconfig:

require('lspconfig').rust_analyzer.setup {
  -- Other Configs ...
  settings = {
    ["rust-analyzer"] = {
      -- Other Settings ...
      procMacro = {
        ignored = {
            leptos_macro = {
                -- optional: --
                -- "component",
                "server",
            },
        },
      },
    },
  }
}

Helix, in .helix/languages.toml:

[[language]]
name = "rust"

[language-server.rust-analyzer]
config = { procMacro = { ignored = { leptos_macro = [
	# Optional:
	# "component",
	"server"
] } } }

Zed, in settings.json:

{
  // Other Settings ...
  "lsp": {
    "rust-analyzer": {
      "procMacro": {
        "ignored": [
          // optional:
          // "component",
          "server"
        ]
      }
    }
  }
}

SublimeText 3, under LSP-rust-analyzer.sublime-settings in Goto Anything... menu:

// Settings in here override those in "LSP-rust-analyzer/LSP-rust-analyzer.sublime-settings"
{
  "rust-analyzer.procMacro.ignored": {
    "leptos_macro": [
      // optional:
      // "component",
      "server"
    ],
  },
}

3) Set up leptosfmt With Rust Analyzer (optional)

leptosfmt is a formatter for the Leptos view! macro (inside of which you'll typically write your UI code). Because the view! macro enables an 'RSX' (like JSX) style of writing your UIs, cargo-fmt has a harder time auto-formatting your code that's inside the view! macro. leptosfmt is a crate that solves your formatting issues and keeps your RSX-style UI code looking nice and tidy!

leptosfmt can be installed and used via the command line or from within your code editor:

First, install the tool with cargo install leptosfmt.

If you just want to use the default options from the command line, just run leptosfmt ./**/*.rs from the root of your project to format all the Rust files using leptosfmt.

If you wish to set up your editor to work with leptosfmt, or if you wish to customize your leptosfmt experience, please see the instructions available on the leptosfmt github repo's README.md page.

Just note that it's recommended to set up your editor with leptosfmt on a per-workspace basis for best results.

The Leptos Community and leptos-* Crates

The Community

One final note before we get to building with Leptos: if you haven't already, feel free to join the growing community on the Leptos Discord and on Github. Our Discord channel in particular is very active and friendly - we'd love to have you there!

Note

If you find a chapter or an explanation that isn't clear while you're working your way through the Leptos book, just mention it in the "docs-and-education" channel or ask a question in "help" so we can clear things up and update the book for others.

As you get further along in your Leptos journey and find that you have questions about "how to do 'x' with Leptos", then search the Discord "help" channel to see if a similar question has been asked before, or feel free to post your own question - the community is quite helpful and very responsive.

The "Discussions" on Github are also a great place for asking questions and keeping up with Leptos announcements.

And of course, if you run into any bugs while developing with Leptos or would like to make a feature request (or contribute a bug fix / new feature), open up an issue on the Github issue tracker.

Leptos-* Crates

The community has built a growing number of Leptos-related crates that will help you get productive with Leptos projects more quickly - check out the list of crates built on top of Leptos and contributed by the community on the Awesome Leptos repo on Github.

If you want to find the newest, up-and-coming Leptos-related crates, check out the "Tools and Libraries" section of the Leptos Discord. In that section, there are channels for the Leptos view! macro formatter (in the "leptosfmt" channel); there's a channel for the utility library "leptos-use"; another channel for the UI component library "leptonic"; and a "libraries" channel where new leptos-* crates are discussed before making their way into the growing list of crates and resources available on Awesome Leptos.

Part 1: Building User Interfaces

In the first part of the book, we're going to look at building user interfaces on the client-side using Leptos. Under the hood, Leptos and Trunk are bundling up a snippet of Javascript which will load up the Leptos UI, which has been compiled to WebAssembly to drive the interactivity in your CSR (client-side rendered) website.

Part 1 will introduce you to the basic tools you need to build a reactive user interface powered by Leptos and Rust. By the end of Part 1, you should be able to build a snappy synchronous website that's rendered in the browser and which you can deploy on any static-site hosting service, like Github Pages or Vercel.

Info

To get the most out of this book, we encourage you to code along with the examples provided. In the Getting Started and Leptos DX chapters, we showed you how to set up a basic project with Leptos and Trunk, including WASM error handling in the browser. That basic setup is enough to get you started developing with Leptos.

If you'd prefer to get started using a more full-featured template which demonstrates how to set up a few of the basics you'd see in a real Leptos project, such as routing (covered later in the book), injecting <Title> and <Meta> tags into the page head, and a few other niceties, then feel free to utilize the leptos-rs start-trunk template repo to get up and running.

The start-trunk template requires that you have Trunk and cargo-generate installed, which you can get by running cargo install trunk and cargo install cargo-generate.

To use the template to set up your project, just run

cargo generate --git https://github.com/leptos-community/start-csr

then run

trunk serve --port 3000 --open

in the newly created app's directory to start developing your app. The Trunk server will reload your app on file changes, making development relatively seamless.

A Basic Component

That “Hello, world!” was a very simple example. Let’s move on to something a little more like an ordinary app.

First, let’s edit the main function so that, instead of rendering the whole app, it just renders an <App/> component. Components are the basic unit of composition and design in most web frameworks, and Leptos is no exception. Conceptually, they are similar to HTML elements: they represent a section of the DOM, with self-contained, defined behavior. Unlike HTML elements, they are in PascalCase, so most Leptos applications will start with something like an <App/> component.

fn main() {
    leptos::mount_to_body(|| view! { <App/> })
}

Now let’s define our <App/> component itself. Because it’s relatively simple, I’ll give you the whole thing up front, then walk through it line by line.

#[component]
fn App() -> impl IntoView {
    let (count, set_count) = create_signal(0);

    view! {
        <button
            on:click=move |_| {
                // on stable, this is set_count.set(3);
                set_count(3);
            }
        >
            "Click me: "
            // on stable, this is move || count.get();
            {move || count()}
        </button>
    }
}

The Component Signature

#[component]

Like all component definitions, this begins with the #[component] macro. #[component] annotates a function so it can be used as a component in your Leptos application. We’ll see some of the other features of this macro in a couple chapters.

fn App() -> impl IntoView

Every component is a function with the following characteristics

  1. It takes zero or more arguments of any type.
  2. It returns impl IntoView, which is an opaque type that includes anything you could return from a Leptos view.

Component function arguments are gathered together into a single props struct which is built by the view macro as needed.

The Component Body

The body of the component function is a set-up function that runs once, not a render function that reruns multiple times. You’ll typically use it to create a few reactive variables, define any side effects that run in response to those values changing, and describe the user interface.

let (count, set_count) = create_signal(0);

create_signal creates a signal, the basic unit of reactive change and state management in Leptos. This returns a (getter, setter) tuple. To access the current value, you’ll use count.get() (or, on nightly Rust, the shorthand count()). To set the current value, you’ll call set_count.set(...) (or set_count(...)).

.get() clones the value and .set() overwrites it. In many cases, it’s more efficient to use .with() or .update(); check out the docs for ReadSignal and WriteSignal if you’d like to learn more about those trade-offs at this point.
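
For instance, with a String signal (a quick sketch):

let (name, set_name) = create_signal("Alice".to_string());

// .get() would clone the whole String just to read one character;
// .with() borrows the value instead
let first_letter = move || name.with(|n| n.chars().next());

// .set() would replace the whole String; .update() mutates it in place
set_name.update(|n| n.push('!'));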

The View

Leptos defines user interfaces using a JSX-like format via the view macro.

view! {
    <button
        // define an event listener with on:
        on:click=move |_| {
            set_count(3);
        }
    >
        // text nodes are wrapped in quotation marks
        "Click me: "
        // blocks can include Rust code
        {move || count()}
    </button>
}

This should mostly be easy to understand: it looks like HTML, with a special on:click to define a click event listener, a text node that’s formatted like a Rust string, and then...

{move || count()}

whatever that is.

People sometimes joke that they use more closures in their first Leptos application than they’ve ever used in their lives. And fair enough. Basically, passing a function into the view tells the framework: “Hey, this is something that might change.”

When we click the button and call set_count, the count signal is updated. This move || count() closure, whose value depends on the value of count, reruns, and the framework makes a targeted update to that one specific text node, touching nothing else in your application. This is what allows for extremely efficient updates to the DOM.

Now, if you have Clippy on—or if you have a particularly sharp eye—you might notice that this closure is redundant, at least if you’re in nightly Rust. If you’re using Leptos with nightly Rust, signals are already functions, so the closure is unnecessary. As a result, you can write a simpler view:

view! {
    <button /* ... */>
        "Click me: "
        // identical to {move || count()}
        {count}
    </button>
}

Remember—and this is very important—only functions are reactive. This means that {count} and {count()} do very different things in your view. {count} passes in a function, telling the framework to update the view every time count changes. {count()} accesses the value of count once, and passes an i32 into the view, rendering it once, unreactively. You can see the difference in the CodeSandbox below!

Let’s make one final change. set_count(3) is a pretty useless thing for a click handler to do. Let’s replace “set this value to 3” with “increment this value by 1”:

move |_| {
    set_count.update(|n| *n += 1);
}

You can see here that while set_count just sets the value, set_count.update() gives us a mutable reference and mutates the value in place. Either one will trigger a reactive update in our UI.

Throughout this tutorial, we’ll use CodeSandbox to show interactive examples. Hover over any of the variables to show Rust-Analyzer details and docs for what’s going on. Feel free to fork the examples to play with them yourself!

Live example

Click to open CodeSandbox.

To show the browser in the sandbox, you may need to click Add DevTools > Other Previews > 8080.

CodeSandbox Source
use leptos::*;

// The #[component] macro marks a function as a reusable component
// Components are the building blocks of your user interface
// They define a reusable unit of behavior
#[component]
fn App() -> impl IntoView {
    // here we create a reactive signal
    // and get a (getter, setter) pair
    // signals are the basic unit of change in the framework
    // we'll talk more about them later
    let (count, set_count) = create_signal(0);

    // the `view` macro is how we define the user interface
    // it uses an HTML-like format that can accept certain Rust values
    view! {
        <button
            // on:click will run whenever the `click` event fires
            // every event handler is defined as `on:{eventname}`

            // we're able to move `set_count` into the closure
            // because signals are Copy and 'static
            on:click=move |_| {
                set_count.update(|n| *n += 1);
            }
        >
            // text nodes in RSX should be wrapped in quotes,
            // like a normal Rust string
            "Click me"
        </button>
        <p>
            <strong>"Reactive: "</strong>
            // you can insert Rust expressions as values in the DOM
            // by wrapping them in curly braces
            // if you pass in a function, it will reactively update
            {move || count()}
        </p>
        <p>
            <strong>"Reactive shorthand: "</strong>
            // signals are functions, so we can remove the wrapping closure
            {count}
        </p>
        <p>
            <strong>"Not reactive: "</strong>
            // NOTE: if you write {count()}, this will *not* be reactive
            // it simply gets the value of count once
            {count()}
        </p>
    }
}

// This `main` function is the entry point into the app
// It just mounts our component to the <body>
// Because we defined it as `fn App`, we can now use it in a
// template as <App/>
fn main() {
    leptos::mount_to_body(|| view! { <App/> })
}

view: Dynamic Classes, Styles and Attributes

So far we’ve seen how to use the view macro to create event listeners and to create dynamic text by passing a function (such as a signal) into the view.

But of course there are other things you might want to update in your user interface. In this section, we’ll look at how to update classes, styles and attributes dynamically, and we’ll introduce the concept of a derived signal.

Let’s start with a simple component that should be familiar: click a button to increment a counter.

#[component]
fn App() -> impl IntoView {
    let (count, set_count) = create_signal(0);

    view! {
        <button
            on:click=move |_| {
                set_count.update(|n| *n += 1);
            }
        >
            "Click me: "
            {move || count()}
        </button>
    }
}

So far, this is just the example from the last chapter.

Dynamic Classes

Now let’s say I’d like to update the list of CSS classes on this element dynamically. For example, let’s say I want to add the class red when the count is odd. I can do this using the class: syntax.

class:red=move || count() % 2 == 1

class: attributes take

  1. the class name, following the colon (red)
  2. a value, which can be a bool or a function that returns a bool

When the value is true, the class is added. When the value is false, the class is removed. And if the value is a function that accesses a signal, the class will reactively update when the signal changes.

Now every time I click the button, the text should toggle between red and black as the number switches between even and odd.

<button
    on:click=move |_| {
        set_count.update(|n| *n += 1);
    }
    // the class: syntax reactively updates a single class
    // here, we'll set the `red` class when `count` is odd
    class:red=move || count() % 2 == 1
>
    "Click me"
</button>

If you’re following along, make sure you go into your index.html and add something like this:

<style>
  .red {
    color: red;
  }
</style>

Some CSS class names can’t be directly parsed by the view macro, especially if they include a mix of dashes and numbers or other characters. In that case, you can use a tuple syntax: class=("name", value) still directly updates a single class.

class=("button-20", move || count() % 2 == 1)

Dynamic Styles

Individual CSS properties can be directly updated with a similar style: syntax.

let (x, set_x) = create_signal(0);
view! {
    <button
        on:click={move |_| {
            set_x.update(|n| *n += 10);
        }}
        // set the `style` attribute
        style="position: absolute"
        // and toggle individual CSS properties with `style:`
        style:left=move || format!("{}px", x() + 100)
        style:background-color=move || format!("rgb({}, {}, 100)", x(), 100)
        style:max-width="400px"
        // Set a CSS variable for stylesheet use
        style=("--columns", x)
    >
        "Click to Move"
    </button>
}

Dynamic Attributes

The same applies to plain attributes. Passing a plain string or primitive value to an attribute gives it a static value. Passing a function (including a signal) to an attribute causes it to update its value reactively. Let’s add another element to our view:

<progress
    max="50"
    // signals are functions, so `value=count` and `value=move || count.get()`
    // are interchangeable.
    value=count
/>

Now every time we set the count, not only will the class of the <button> be toggled, but the value of the <progress> bar will increase, which means that our progress bar will move forward.

Derived Signals

Let’s go one layer deeper, just for fun.

You already know that we create reactive interfaces just by passing functions into the view. This means that we can easily change our progress bar. For example, suppose we want it to move twice as fast:

<progress
    max="50"
    value=move || count() * 2
/>

But imagine we want to reuse that calculation in more than one place. You can do this using a derived signal: a closure that accesses a signal.

let double_count = move || count() * 2;

/* insert the rest of the view */
<progress
    max="50"
    // we use it once here
    value=double_count
/>
<p>
    "Double Count: "
    // and again here
    {double_count}
</p>

Derived signals let you create reactive computed values that can be used in multiple places in your application with minimal overhead.

Note: Using a derived signal like this means that the calculation runs once per signal change (when count() changes) and once per place we access double_count; in other words, twice. This is a very cheap calculation, so that’s fine. We’ll look at memos in a later chapter, which were designed to solve this problem for expensive calculations.

Advanced Topic: Injecting Raw HTML

The view macro provides support for an additional attribute, inner_html, which can be used to directly set the HTML contents of any element, wiping out any other children you’ve given it. Note that this does not escape the HTML you provide. You should make sure that it only contains trusted input or that any HTML entities are escaped, to prevent cross-site scripting (XSS) attacks.

let html = "<p>This HTML will be injected.</p>";
view! {
  <div inner_html=html/>
}

Click here for the full view macros docs.

Live example

Click to open CodeSandbox.

CodeSandbox Source
use leptos::*;

#[component]
fn App() -> impl IntoView {
    let (count, set_count) = create_signal(0);

    // a "derived signal" is a function that accesses other signals
    // we can use this to create reactive values that depend on the
    // values of one or more other signals
    let double_count = move || count() * 2;

    view! {
        <button
            on:click=move |_| {
                set_count.update(|n| *n += 1);
            }

            // the class: syntax reactively updates a single class
            // here, we'll set the `red` class when `count` is odd
            class:red=move || count() % 2 == 1
        >
            "Click me"
        </button>
        // NOTE: self-closing tags like <br> need an explicit /
        <br/>

        // We'll update this progress bar every time `count` changes
        <progress
            // static attributes work as in HTML
            max="50"

            // passing a function to an attribute
            // reactively sets that attribute
            // signals are functions, so `value=count` and `value=move || count.get()`
            // are interchangeable.
            value=count
        ></progress>
        <br/>

        // This progress bar will use `double_count`
        // so it should move twice as fast!
        <progress
            max="50"
            // derived signals are functions, so they can also
            // reactively update the DOM
            value=double_count
        ></progress>
        <p>"Count: " {count}</p>
        <p>"Double Count: " {double_count}</p>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

Components and Props

So far, we’ve been building our whole application in a single component. This is fine for really tiny examples, but in any real application you’ll need to break the user interface out into multiple components, so you can break your interface down into smaller, reusable, composable chunks.

Let’s take our progress bar example. Imagine that you want two progress bars instead of one: one that advances one tick per click, one that advances two ticks per click.

You could do this by just creating two <progress> elements:

let (count, set_count) = create_signal(0);
let double_count = move || count() * 2;

view! {
    <progress
        max="50"
        value=count
    />
    <progress
        max="50"
        value=double_count
    />
}

But of course, this doesn’t scale very well. If you want to add a third progress bar, you need to add this code another time. And if you want to edit anything about it, you need to edit it in triplicate.

Instead, let’s create a <ProgressBar/> component.

#[component]
fn ProgressBar() -> impl IntoView {
    view! {
        <progress
            max="50"
            // hmm... where will we get this from?
            value=progress
        />
    }
}

There’s just one problem: progress is not defined. Where should it come from? When we were defining everything manually, we just used the local variable names. Now we need some way to pass an argument into the component.

Component Props

We do this using component properties, or “props.” If you’ve used another frontend framework, this is probably a familiar idea. Basically, properties are to components as attributes are to HTML elements: they let you pass additional information into the component.

In Leptos, you define props by giving additional arguments to the component function.

#[component]
fn ProgressBar(
    progress: ReadSignal<i32>
) -> impl IntoView {
    view! {
        <progress
            max="50"
            // now this works
            value=progress
        />
    }
}

Now we can use our component in the main <App/> component’s view.

#[component]
fn App() -> impl IntoView {
    let (count, set_count) = create_signal(0);
    view! {
        <button on:click=move |_| { set_count.update(|n| *n += 1); }>
            "Click me"
        </button>
        // now we use our component!
        <ProgressBar progress=count/>
    }
}

Using a component in the view looks a lot like using an HTML element. You’ll notice that you can easily tell the difference between an element and a component because components always have PascalCase names. You pass the progress prop in as if it were an HTML element attribute. Simple.

Reactive and Static Props

You’ll notice that throughout this example, progress takes a reactive ReadSignal<i32>, and not a plain i32. This is very important.

Component props have no special meaning attached to them. A component is simply a function that runs once to set up the user interface. The only way to tell the interface to respond to changes is to pass it a signal type. So if you have a component property that will change over time, like our progress, it should be a signal.

optional Props

Right now the max setting is hard-coded. Let’s take that as a prop too. But let’s add a catch: let’s make this prop optional by annotating the particular argument to the component function with #[prop(optional)].

#[component]
fn ProgressBar(
    // mark this prop optional
    // you can specify it or not when you use <ProgressBar/>
    #[prop(optional)]
    max: u16,
    progress: ReadSignal<i32>
) -> impl IntoView {
    view! {
        <progress
            max=max
            value=progress
        />
    }
}

Now, we can use <ProgressBar max=50 progress=count/>, or we can omit max to use the default value (i.e., <ProgressBar progress=count/>). The default value on an optional is its Default::default() value, which for a u16 is going to be 0. In the case of a progress bar, a max value of 0 is not very useful.

So let’s give it a particular default value instead.

default props

You can specify a default value other than Default::default() pretty simply with #[prop(default = ...)].

#[component]
fn ProgressBar(
    #[prop(default = 100)]
    max: u16,
    progress: ReadSignal<i32>
) -> impl IntoView {
    view! {
        <progress
            max=max
            value=progress
        />
    }
}

Generic Props

This is great. But we began with two progress bars, one driven by count, and one by the derived signal double_count. Let’s recreate that by using double_count as the progress prop on another <ProgressBar/>.

#[component]
fn App() -> impl IntoView {
    let (count, set_count) = create_signal(0);
    let double_count = move || count() * 2;

    view! {
        <button on:click=move |_| { set_count.update(|n| *n += 1); }>
            "Click me"
        </button>
        <ProgressBar progress=count/>
        // add a second progress bar
        <ProgressBar progress=double_count/>
    }
}

Hm... this won’t compile. It should be pretty easy to understand why: we’ve declared that the progress prop takes ReadSignal<i32>, and double_count is not ReadSignal<i32>. As rust-analyzer will tell you, its type is || -> i32, i.e., it’s a closure that returns an i32.

There are a couple ways to handle this. One would be to say: “Well, I know that a ReadSignal is a function, and I know that a closure is a function; maybe I could just take any function?” If you’re savvy, you may know that both these implement the trait Fn() -> i32. So you could use a generic component:

#[component]
fn ProgressBar(
    #[prop(default = 100)]
    max: u16,
    progress: impl Fn() -> i32 + 'static
) -> impl IntoView {
    view! {
        <progress
            max=max
            value=progress
        />
        // Add a line-break to avoid overlap
        <br/>
    }
}

This is a perfectly reasonable way to write this component: progress now takes any value that implements this Fn() trait.

Generic props can also be specified using a where clause, or using inline generics like ProgressBar<F: Fn() -> i32 + 'static>. Note that support for impl Trait syntax was released in 0.6.12; if you receive an error message you may need to cargo update to ensure that you are on the latest version.
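
For example, the where-clause version of the same component might look like this (a sketch, equivalent to the impl Trait form above):

#[component]
fn ProgressBar<F>(
    #[prop(default = 100)]
    max: u16,
    progress: F,
) -> impl IntoView
where
    F: Fn() -> i32 + 'static,
{
    view! {
        <progress
            max=max
            value=progress
        />
        <br/>
    }
}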

Generics need to be used somewhere in the component props. This is because props are built into a struct, so all generic types must be used somewhere in the struct. This is often easily accomplished using an optional PhantomData prop. You can then specify a generic in the view using the syntax for expressing types: <Component<T>/> (not with the turbofish-style <Component::<T>/>).

#[component]
fn SizeOf<T: Sized>(#[prop(optional)] _ty: PhantomData<T>) -> impl IntoView {
    std::mem::size_of::<T>()
}

#[component]
pub fn App() -> impl IntoView {
    view! {
        <SizeOf<usize>/>
        <SizeOf<String>/>
    }
}

Note that there are some limitations. For example, our view macro parser can’t handle nested generics like <SizeOf<Vec<T>>/>.

into Props

There’s one more way we could implement this, and it would be to use #[prop(into)]. This attribute automatically calls .into() on the values you pass as props, which allows you to easily pass props with different values.

In this case, it’s helpful to know about the Signal type. Signal is an enumerated type that represents any kind of readable reactive signal. It can be useful when defining APIs for components you’ll want to reuse while passing different sorts of signals. The MaybeSignal type is useful when you want to be able to take either a static or reactive value.

#[component]
fn ProgressBar(
    #[prop(default = 100)]
    max: u16,
    #[prop(into)]
    progress: Signal<i32>
) -> impl IntoView
{
    view! {
        <progress
            max=max
            value=progress
        />
        <br/>
    }
}

#[component]
fn App() -> impl IntoView {
    let (count, set_count) = create_signal(0);
    let double_count = move || count() * 2;

    view! {
        <button on:click=move |_| { set_count.update(|n| *n += 1); }>
            "Click me"
        </button>
        // .into() converts `ReadSignal` to `Signal`
        <ProgressBar progress=count/>
        // use `Signal::derive()` to wrap a derived signal
        <ProgressBar progress=Signal::derive(double_count)/>
    }
}

Optional Generic Props

Note that you can’t specify optional generic props for a component. Let’s see what would happen if you try:

#[component]
fn ProgressBar<F: Fn() -> i32 + 'static>(
    #[prop(optional)] progress: Option<F>,
) -> impl IntoView {
    progress.map(|progress| {
        view! {
            <progress
                max=100
                value=progress
            />
            <br/>
        }
    })
}

#[component]
pub fn App() -> impl IntoView {
    view! {
        <ProgressBar/>
    }
}

Rust helpfully gives the error

xx |         <ProgressBar/>
   |          ^^^^^^^^^^^ cannot infer type of the type parameter `F` declared on the function `ProgressBar`
   |
help: consider specifying the generic argument
   |
xx |         <ProgressBar::<F>/>
   |                     +++++

You can specify generics on components with a <ProgressBar<F>/> syntax (no turbofish in the view macro). Specifying the correct type here is not possible; closures and functions in general are unnameable types. The compiler can display them with a shorthand, but you can’t specify them.

However, you can get around this by providing a concrete type using Box<dyn _> or &dyn _:

#[component]
fn ProgressBar(
    #[prop(optional)] progress: Option<Box<dyn Fn() -> i32>>,
) -> impl IntoView {
    progress.map(|progress| {
        view! {
            <progress
                max=100
                value=progress
            />
            <br/>
        }
    })
}

#[component]
pub fn App() -> impl IntoView {
    view! {
        <ProgressBar/>
    }
}

Because the Rust compiler now knows the concrete type of the prop, and therefore its size in memory even in the None case, this compiles fine.

In this particular case, &dyn Fn() -> i32 will cause lifetime issues, but in other cases, it may be a possibility.

Documenting Components

This is one of the least essential but most important sections of this book. It’s not strictly necessary to document your components and their props. It may be very important, depending on the size of your team and your app. But it’s very easy, and bears immediate fruit.

To document a component and its props, you can simply add doc comments on the component function, and each one of the props:

/// Shows progress toward a goal.
#[component]
fn ProgressBar(
    /// The maximum value of the progress bar.
    #[prop(default = 100)]
    max: u16,
    /// How much progress should be displayed.
    #[prop(into)]
    progress: Signal<i32>,
) -> impl IntoView {
    /* ... */
}

That’s all you need to do. These behave like ordinary Rust doc comments, except that you can document individual component props, which can’t be done with Rust function arguments.

This will automatically generate documentation for your component, its Props type, and each of the fields used to add props. It can be a little hard to understand how powerful this is until you hover over the component name or props and see the power of the #[component] macro combined with rust-analyzer here.

Advanced Topic: #[component(transparent)]

All Leptos components return -> impl IntoView. Some, though, need to return some data directly without any additional wrapping. These can be marked with #[component(transparent)], in which case the value they return is passed through as-is, without the rendering system transforming it in any way.

This is mostly used in two situations:

  1. Creating wrappers around <Suspense/> or <Transition/>, which return a transparent suspense structure to integrate with SSR and hydration properly.
  2. Refactoring <Route/> definitions for leptos_router out into separate components, because <Route/> is a transparent component that returns a RouteDefinition struct rather than a view.

In general, you should not need to use transparent components unless you are creating custom wrapping components that fall into one of these two categories.
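To make the second case concrete, here is a hedged sketch of what a transparent route wrapper might look like (the UserRoutes, UserList, and UserProfile names are made up; <Route/> comes from leptos_router, as mentioned in point 2 above):

use leptos::*;
use leptos_router::*;

// hypothetical leaf views, just for illustration
#[component]
fn UserList() -> impl IntoView {
    view! { <p>"Users"</p> }
}

#[component]
fn UserProfile() -> impl IntoView {
    view! { <p>"Profile"</p> }
}

// Because <Route/> is transparent and returns a RouteDefinition,
// this wrapper must be transparent too; it can then be dropped into
// a <Routes/> tree like any other <Route/>.
#[component(transparent)]
fn UserRoutes() -> impl IntoView {
    view! {
        <Route path="/users" view=UserList>
            <Route path=":id" view=UserProfile/>
        </Route>
    }
}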

Live example

Click to open CodeSandbox.

CodeSandbox Source
use leptos::*;

// Composing different components together is how we build
// user interfaces. Here, we'll define a reusable <ProgressBar/>.
// You'll see how doc comments can be used to document components
// and their properties.

/// Shows progress toward a goal.
#[component]
fn ProgressBar(
    // Marks this as an optional prop. It will default to the default
    // value of its type, i.e., 0.
    #[prop(default = 100)]
    /// The maximum value of the progress bar.
    max: u16,
    // Will run `.into()` on the value passed into the prop.
    #[prop(into)]
    // `Signal<T>` is a wrapper for several reactive types.
    // It can be helpful in component APIs like this, where we
    // might want to take any kind of reactive value
    /// How much progress should be displayed.
    progress: Signal<i32>,
) -> impl IntoView {
    view! {
        <progress
            max={max}
            value=progress
        />
        <br/>
    }
}

#[component]
fn App() -> impl IntoView {
    let (count, set_count) = create_signal(0);

    let double_count = move || count() * 2;

    view! {
        <button
            on:click=move |_| {
                set_count.update(|n| *n += 1);
            }
        >
            "Click me"
        </button>
        <br/>
        // If you have this open in CodeSandbox or an editor with
        // rust-analyzer support, try hovering over `ProgressBar`,
        // `max`, or `progress` to see the docs we defined above
        <ProgressBar max=50 progress=count/>
        // Let's use the default max value on this one
        // the default is 100, so it should move half as fast
        <ProgressBar progress=count/>
        // Signal::derive creates a Signal wrapper from our derived signal
        // using double_count means it should move twice as fast
        <ProgressBar max=50 progress=Signal::derive(double_count)/>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

Iteration

Whether you’re listing todos, displaying a table, or showing product images, iterating over a list of items is a common task in web applications. Reconciling the differences between changing sets of items can also be one of the trickiest tasks for a framework to handle well.

Leptos supports two different patterns for iterating over items:

  1. For static views: Vec<_>
  2. For dynamic lists: <For/>

Static Views with Vec<_>

Sometimes you need to show an item repeatedly, but the list you’re drawing from does not often change. In this case, it’s important to know that you can insert any Vec<IV> where IV: IntoView into your view. In other words, if you can render T, you can render Vec<T>.

let values = vec![0, 1, 2];
view! {
    // this will just render "012"
    <p>{values.clone()}</p>
    // or we can wrap them in <li>
    <ul>
        {values.into_iter()
            .map(|n| view! { <li>{n}</li>})
            .collect::<Vec<_>>()}
    </ul>
}

Leptos also provides a .collect_view() helper function that allows you to collect any iterator of T: IntoView into Vec<View>.

let values = vec![0, 1, 2];
view! {
    // this will just render "012"
    <p>{values.clone()}</p>
    // or we can wrap them in <li>
    <ul>
        {values.into_iter()
            .map(|n| view! { <li>{n}</li>})
            .collect_view()}
    </ul>
}

The fact that the list is static doesn’t mean the interface needs to be static. You can render dynamic items as part of a static list.

// create a list of 5 signals
let length = 5;
let counters = (1..=length).map(|idx| create_signal(idx));

// each item manages a reactive view
// but the list itself will never change
let counter_buttons = counters
    .map(|(count, set_count)| {
        view! {
            <li>
                <button
                    on:click=move |_| set_count.update(|n| *n += 1)
                >
                    {count}
                </button>
            </li>
        }
    })
    .collect_view();

view! {
    <ul>{counter_buttons}</ul>
}

You can render a Fn() -> Vec<_> reactively as well. But note that every time it changes, this will rerender every item in the list. This is quite inefficient! Fortunately, there’s a better way.
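For reference, here is roughly what that reactive-but-inefficient version looks like (a sketch, not from the book); the <For/> component in the next section is the better way:

let (items, set_items) = create_signal(vec![1, 2, 3]);

view! {
    <button on:click=move |_| {
        set_items.update(|items| {
            let next = items.len() + 1;
            items.push(next);
        });
    }>
        "Add Item"
    </button>
    <ul>
        // the closure reruns whenever `items` changes,
        // recreating every <li> from scratch
        {move || items.get()
            .into_iter()
            .map(|n| view! { <li>{n}</li> })
            .collect_view()}
    </ul>
}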

Dynamic Rendering with the <For/> Component

The <For/> component is a keyed dynamic list. It takes three props:

  • each: a function (such as a signal) that returns the items T to be iterated over
  • key: a key function that takes &T and returns a stable, unique key or ID
  • children: renders each T into a view

key is, well, the key. You can add, remove, and move items within the list. As long as each item’s key is stable over time, the framework does not need to rerender any of the items, unless they are new additions, and it can very efficiently add, remove, and move items as they change. This allows for extremely efficient updates to the list as it changes, with minimal additional work.

Creating a good key can be a little tricky. You generally do not want to use an index for this purpose, as it is not stable—if you remove or move items, their indices change.

But it’s a great idea to do something like generating a unique ID for each row as it is generated, and using that as an ID for the key function.

Check out the <DynamicList/> component below for an example.

Live example

Click to open CodeSandbox.

CodeSandbox Source
use leptos::*;

// Iteration is a very common task in most applications.
// So how do you take a list of data and render it in the DOM?
// This example will show you the two ways:
// 1) for mostly-static lists, using Rust iterators
// 2) for lists that grow, shrink, or move items, using <For/>

#[component]
fn App() -> impl IntoView {
    view! {
        <h1>"Iteration"</h1>
        <h2>"Static List"</h2>
        <p>"Use this pattern if the list itself is static."</p>
        <StaticList length=5/>
        <h2>"Dynamic List"</h2>
        <p>"Use this pattern if the rows in your list will change."</p>
        <DynamicList initial_length=5/>
    }
}

/// A list of counters, without the ability
/// to add or remove any.
#[component]
fn StaticList(
    /// How many counters to include in this list.
    length: usize,
) -> impl IntoView {
    // create counter signals that start at incrementing numbers
    let counters = (1..=length).map(|idx| create_signal(idx));

    // when you have a list that doesn't change, you can
    // manipulate it using ordinary Rust iterators
    // and collect it into a Vec<_> to insert it into the DOM
    let counter_buttons = counters
        .map(|(count, set_count)| {
            view! {
                <li>
                    <button
                        on:click=move |_| set_count.update(|n| *n += 1)
                    >
                        {count}
                    </button>
                </li>
            }
        })
        .collect::<Vec<_>>();

    // Note that if `counter_buttons` were a reactive list
    // and its value changed, this would be very inefficient:
    // it would rerender every row every time the list changed.
    view! {
        <ul>{counter_buttons}</ul>
    }
}

/// A list of counters that allows you to add or
/// remove counters.
#[component]
fn DynamicList(
    /// The number of counters to begin with.
    initial_length: usize,
) -> impl IntoView {
    // This dynamic list will use the <For/> component.
    // <For/> is a keyed list. This means that each row
    // has a defined key. If the key does not change, the row
    // will not be re-rendered. When the list changes, only
    // the minimum number of changes will be made to the DOM.

    // `next_counter_id` will let us generate unique IDs
    // we do this by simply incrementing the ID by one
    // each time we create a counter
    let mut next_counter_id = initial_length;

    // we generate an initial list as in <StaticList/>
    // but this time we include the ID along with the signal
    let initial_counters = (0..initial_length)
        .map(|id| (id, create_signal(id + 1)))
        .collect::<Vec<_>>();

    // now we store that initial list in a signal
    // this way, we'll be able to modify the list over time,
    // adding and removing counters, and it will change reactively
    let (counters, set_counters) = create_signal(initial_counters);

    let add_counter = move |_| {
        // create a signal for the new counter
        let sig = create_signal(next_counter_id + 1);
        // add this counter to the list of counters
        set_counters.update(move |counters| {
            // since `.update()` gives us `&mut T`
            // we can just use normal Vec methods like `push`
            counters.push((next_counter_id, sig))
        });
        // increment the ID so it's always unique
        next_counter_id += 1;
    };

    view! {
        <div>
            <button on:click=add_counter>
                "Add Counter"
            </button>
            <ul>
                // The <For/> component is central here
                // This allows for efficient, keyed list rendering
                <For
                    // `each` takes any function that returns an iterator
                    // this should usually be a signal or derived signal
                    // if it's not reactive, just render a Vec<_> instead of <For/>
                    each=counters
                    // the key should be unique and stable for each row
                    // using an index is usually a bad idea, unless your list
                    // can only grow, because moving items around inside the list
                    // means their indices will change and they will all rerender
                    key=|counter| counter.0
                    // `children` receives each item from your `each` iterator
                    // and returns a view
                    children=move |(id, (count, set_count))| {
                        view! {
                            <li>
                                <button
                                    on:click=move |_| set_count.update(|n| *n += 1)
                                >
                                    {count}
                                </button>
                                <button
                                    on:click=move |_| {
                                        set_counters.update(|counters| {
                                            counters.retain(|(counter_id, _)| counter_id != &id)
                                        });
                                    }
                                >
                                    "Remove"
                                </button>
                            </li>
                        }
                    }
                />
            </ul>
        </div>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

Iterating over More Complex Data with <For/>

This chapter goes into iteration over nested data structures in a bit more depth. It belongs here with the other chapter on iteration, but feel free to skip it and come back if you’d like to stick with simpler subjects for now.

The Problem

I just said that the framework does not rerender any of the items in one of the rows, unless the key has changed. This probably makes sense at first, but it can easily trip you up.

Let’s consider an example in which each of the items in our row is some data structure. Imagine, for example, that the items come from some JSON array of keys and values:

#[derive(Debug, Clone)]
struct DatabaseEntry {
    key: String,
    value: i32,
}

Let’s define a simple component that will iterate over the rows and display each one:

#[component]
pub fn App() -> impl IntoView {
	// start with a set of three rows
    let (data, set_data) = create_signal(vec![
        DatabaseEntry {
            key: "foo".to_string(),
            value: 10,
        },
        DatabaseEntry {
            key: "bar".to_string(),
            value: 20,
        },
        DatabaseEntry {
            key: "baz".to_string(),
            value: 15,
        },
    ]);
    view! {
		// when we click, update each row,
		// doubling its value
        <button on:click=move |_| {
            set_data.update(|data| {
                for row in data {
                    row.value *= 2;
                }
            });
			// log the new value of the signal
            logging::log!("{:?}", data.get());
        }>
            "Update Values"
        </button>
		// iterate over the rows and display each value
        <For
            each=data
            key=|state| state.key.clone()
            let:child
        >
            <p>{child.value}</p>
        </For>
    }
}

Note the let:child syntax here. In the previous chapter we introduced <For/> with a children prop. We can actually create this value directly in the children of the <For/> component, without breaking out of the view macro: the let:child combined with <p>{child.value}</p> above is the equivalent of

children=|child| view! { <p>{child.value}</p> }

When you click the Update Values button... nothing happens. Or rather: the signal is updated, the new value is logged, but the {child.value} for each row doesn’t update.

Let’s see: is that because we forgot to add a closure to make it reactive? Let’s try {move || child.value}.

...Nope. Still nothing.

Here’s the problem: as I said, each row is only rerendered when the key changes. We’ve updated the value for each row, but not the key for any of the rows, so nothing has rerendered. And if you look at the type of child.value, it’s a plain i32, not a reactive ReadSignal<i32> or something. This means that even if we wrap a closure around it, the value in this row will never update.

We have three possible solutions:

  1. change the key so that it always updates when the data structure changes
  2. change the value so that it’s reactive
  3. take a reactive slice of the data structure instead of using each row directly

Option 1: Change the Key

Each row is only rerendered when the key changes. Our rows above didn’t rerender, because the key didn’t change. So: why not just force the key to change?

<For
	each=data
	key=|state| (state.key.clone(), state.value)
	let:child
>
	<p>{child.value}</p>
</For>

Now we include both the key and the value in the key. This means that whenever the value of a row changes, <For/> will treat it as if it’s an entirely new row, and replace the previous one.

Pros

This is very easy. We can make it even easier by deriving PartialEq, Eq, and Hash on DatabaseEntry, in which case we could just key=|state| state.clone().
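For instance, a sketch of that simpler version (assuming we add those derives to DatabaseEntry):

#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct DatabaseEntry {
    key: String,
    value: i32,
}

// the whole (cloned) row is now the key, so any change to `value`
// produces a "new" key and replaces the row
<For
    each=data
    key=|state| state.clone()
    let:child
>
    <p>{child.value}</p>
</For>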

Cons

This is the least efficient of the three options. Every time the value of a row changes, it throws out the previous <p> element and replaces it with an entirely new one. Rather than making a fine-grained update to the text node, in other words, it really does rerender the entire row on every change, and this is expensive in proportion to how complex the UI of the row is.

You’ll notice we also end up cloning the whole data structure so that <For/> can hold onto a copy of the key. For more complex structures, this can become a bad idea fast!

Option 2: Nested Signals

If we do want that fine-grained reactivity for the value, one option is to wrap the value of each row in a signal.

#[derive(Debug, Clone)]
struct DatabaseEntry {
    key: String,
    value: RwSignal<i32>,
}

RwSignal<_> is a “read-write signal,” which combines the getter and setter in one object. I’m using it here because it’s a little easier to store in a struct than separate getters and setters.

#[component]
pub fn App() -> impl IntoView {
	// start with a set of three rows
    let (data, set_data) = create_signal(vec![
        DatabaseEntry {
            key: "foo".to_string(),
            value: create_rw_signal(10),
        },
        DatabaseEntry {
            key: "bar".to_string(),
            value: create_rw_signal(20),
        },
        DatabaseEntry {
            key: "baz".to_string(),
            value: create_rw_signal(15),
        },
    ]);
    view! {
		// when we click, update each row,
		// doubling its value
        <button on:click=move |_| {
            data.with(|data| {
                for row in data {
                    row.value.update(|value| *value *= 2);
                }
            });
			// log the new value of the signal
            logging::log!("{:?}", data.get());
        }>
            "Update Values"
        </button>
		// iterate over the rows and display each value
        <For
            each=data
            key=|state| state.key.clone()
            let:child
        >
            <p>{child.value}</p>
        </For>
    }
}

This version works! And if you look in the DOM inspector in your browser, you’ll see that unlike in the previous version, in this version only the individual text nodes are updated. Passing the signal directly into {child.value} works, as signals do keep their reactivity if you pass them into the view.

Note that I changed the set_data.update() to a data.with(). .with() is the non-cloning way of accessing a signal’s value. In this case, we are only updating the internal values, not updating the list of values: because signals maintain their own state, we don’t actually need to update the data signal at all, so the immutable .with() is fine here.
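As a quick illustration of the difference (a sketch, using the data signal from the example above):

// `.get()` clones the whole Vec out of the signal...
let cloned_rows: Vec<DatabaseEntry> = data.get();

// ...while `.with()` borrows it in place, without cloning
let row_count = data.with(|rows| rows.len());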

In fact, this version doesn’t update data, so the <For/> is essentially a static list as in the last chapter, and this could just be a plain iterator. But the <For/> is useful if we want to add or remove rows in the future.

Pros

This is the most efficient option, and fits directly with the rest of the mental model of the framework: values that change over time are wrapped in signals so the interface can respond to them.

Cons

Nested reactivity can be cumbersome if you’re receiving data from an API or another data source you don’t control, and you don’t want to create a different struct wrapping each field in a signal.

Option 3: Memoized Slices

Leptos provides a primitive called create_memo, which creates a derived computation that only triggers a reactive update when its value has changed.

This allows you to create reactive values for subfields of a larger data structure, without needing to wrap the fields of that structure in signals.

Most of the application can remain the same as the initial (broken) version, but the <For/> will be updated to this:

<For
    each=move || data().into_iter().enumerate()
    key=|(_, state)| state.key.clone()
    children=move |(index, _)| {
        let value = create_memo(move |_| {
            data.with(|data| data.get(index).map(|d| d.value).unwrap_or(0))
        });
        view! {
            <p>{value}</p>
        }
    }
/>

You’ll notice a few differences here:

  • we convert the data signal into an enumerated iterator
  • we use the children prop explicitly, to make it easier to run some non-view code
  • we define a value memo and use that in the view. This value field doesn’t actually use the child being passed into each row. Instead, it uses the index and reaches back into the original data to get the value.

Every time data changes, now, each memo will be recalculated. If its value has changed, it will update its text node, without rerendering the whole row.

Pros

We get the same fine-grained reactivity of the signal-wrapped version, without needing to wrap the data in signals.

Cons

It’s a bit more complex to set up this memo-per-row inside the <For/> loop rather than using nested signals. For example, you’ll notice that we have to guard against the possibility that the data[index] would panic by using data.get(index), because this memo may be triggered to re-run once just after the row is removed. (This is because the memo for each row and the whole <For/> both depend on the same data signal, and the order of execution for multiple reactive values that depend on the same signal isn’t guaranteed.)

Note also that while memos memoize their reactive changes, the same calculation does need to re-run to check the value every time, so nested reactive signals will still be more efficient for pinpoint updates here.

Forms and Inputs

Forms and form inputs are an important part of interactive apps. There are two basic patterns for interacting with inputs in Leptos, which you may recognize if you’re familiar with React, SolidJS, or a similar framework: using controlled or uncontrolled inputs.

Controlled Inputs

In a "controlled input," the framework controls the state of the input element. On every input event, it updates a local signal that holds the current state, which in turn updates the value prop of the input.

There are two important things to remember:

  1. The input event fires on (almost) every change to the element, while the change event fires (more or less) when you unfocus the input. You probably want on:input, but we give you the freedom to choose.
  2. The value attribute only sets the initial value of the input, i.e., it only updates the input up to the point that you begin typing. The value property continues updating the input after that. You usually want to set prop:value for this reason. (The same is true for checked and prop:checked on an <input type="checkbox">.)

let (name, set_name) = create_signal("Controlled".to_string());

view! {
    <input type="text"
        on:input=move |ev| {
            // event_target_value is a Leptos helper function
            // it functions the same way as event.target.value
            // in JavaScript, but smooths out some of the typecasting
            // necessary to make this work in Rust
            set_name(event_target_value(&ev));
        }

        // the `prop:` syntax lets you update a DOM property,
        // rather than an attribute.
        prop:value=name
    />
    <p>"Name is: " {name}</p>
}

Why do you need prop:value?

Web browsers are the most ubiquitous and stable platform for rendering graphical user interfaces in existence. They have also maintained an incredible backwards compatibility over their three decades of existence. Inevitably, this means there are some quirks.

One odd quirk is that there is a distinction between HTML attributes and DOM element properties, i.e., between something called an “attribute” which is parsed from HTML and can be set on a DOM element with .setAttribute(), and something called a “property” which is a field of the JavaScript class representation of that parsed HTML element.

In the case of an <input value=...>, setting the value attribute is defined as setting the initial value for the input, and setting the value property sets its current value. It may be easiest to understand this by opening about:blank and running the following JavaScript in the browser console, line by line:

// create an input and append it to the DOM
const el = document.createElement("input");
document.body.appendChild(el);

el.setAttribute("value", "test"); // updates the input
el.setAttribute("value", "another test"); // updates the input again

// now go and type into the input: delete some characters, etc.

el.setAttribute("value", "one more time?");
// nothing should have changed. setting the "initial value" does nothing now

// however...
el.value = "But this works";

Many other frontend frameworks conflate attributes and properties, or create a special case for inputs that sets the value correctly. Maybe Leptos should do this too; but for now, I prefer giving users the maximum amount of control over whether they’re setting an attribute or a property, and doing my best to educate people about the actual underlying browser behavior rather than obscuring it.

Uncontrolled Inputs

In an "uncontrolled input," the browser controls the state of the input element. Rather than continuously updating a signal to hold its value, we use a NodeRef to access the input when we want to get its value.

In this example, we only notify the framework when the <form> fires a submit event. Note the use of the leptos::html module, which provides a bunch of types for every HTML element.

let (name, set_name) = create_signal("Uncontrolled".to_string());

let input_element: NodeRef<html::Input> = create_node_ref();

view! {
    <form on:submit=on_submit> // on_submit defined below
        <input type="text"
            value=name
            node_ref=input_element
        />
        <input type="submit" value="Submit"/>
    </form>
    <p>"Name is: " {name}</p>
}

The view should be pretty self-explanatory by now. Note two things:

  1. Unlike in the controlled input example, we use value (not prop:value). This is because we’re just setting the initial value of the input, and letting the browser control its state. (We could use prop:value instead.)
  2. We use node_ref=... to fill the NodeRef. (Older examples sometimes use _ref. They are the same thing, but node_ref has better rust-analyzer support.)

NodeRef is a kind of reactive smart pointer: we can use it to access the underlying DOM node. Its value will be set when the element is rendered.

let on_submit = move |ev: leptos::ev::SubmitEvent| {
    // stop the page from reloading!
    ev.prevent_default();

    // here, we'll extract the value from the input
    let value = input_element()
        // event handlers can only fire after the view
        // is mounted to the DOM, so the `NodeRef` will be `Some`
        .expect("<input> should be mounted")
        // `leptos::HtmlElement<html::Input>` implements `Deref`
        // to a `web_sys::HtmlInputElement`.
        // this means we can call `HtmlInputElement::value()`
        // to get the current value of the input
        .value();
    set_name(value);
};

Our on_submit handler will access the input’s value and use it to call set_name. To access the DOM node stored in the NodeRef, we can simply call it as a function (or using .get()). This will return Option<leptos::HtmlElement<html::Input>>, but we know that the element has already been mounted (how else did you fire this event!), so it's safe to unwrap here.

We can then call .value() to get the value out of the input, because NodeRef gives us access to a correctly-typed HTML element.

Take a look at web_sys and HtmlElement to learn more about using a leptos::HtmlElement. Also see the full CodeSandbox example at the end of this page.

Special Cases: <textarea> and <select>

Two form elements tend to cause some confusion, in different ways.

<textarea>

Unlike <input>, the <textarea> element does not support a value attribute. Instead, it receives its value as a plain text node in its HTML children.

In the current version of Leptos (in fact in Leptos 0.1-0.6), creating a dynamic child inserts a comment marker node. This can cause incorrect <textarea> rendering (and issues during hydration) if you try to use it to show dynamic content.

Instead, you can pass a non-reactive initial value as a child, and use prop:value to set its current value. (<textarea> doesn’t support the value attribute, but does support the value property...)

view! {
    <textarea
        prop:value=move || some_value.get()
        on:input=/* etc */
    >
        /* plain-text initial value, does not change if the signal changes */
        {some_value.get_untracked()}
    </textarea>
}

<select>

The <select> element can likewise be controlled via a value property on the <select> itself, which will select whichever <option> has that value.

let (value, set_value) = create_signal(0i32);
view! {
  <select
    on:change=move |ev| {
      let new_value = event_target_value(&ev);
      set_value(new_value.parse().unwrap());
    }
    prop:value=move || value.get().to_string()
  >
    <option value="0">"0"</option>
    <option value="1">"1"</option>
    <option value="2">"2"</option>
  </select>
  // a button that will cycle through the options
  <button on:click=move |_| set_value.update(|n| {
    if *n == 2 {
      *n = 0;
    } else {
      *n += 1;
    }
  })>
    "Next Option"
  </button>
}

Controlled vs uncontrolled forms CodeSandbox

Click to open CodeSandbox.

CodeSandbox Source
use leptos::{ev::SubmitEvent, *};

#[component]
fn App() -> impl IntoView {
    view! {
        <h2>"Controlled Component"</h2>
        <ControlledComponent/>
        <h2>"Uncontrolled Component"</h2>
        <UncontrolledComponent/>
    }
}

#[component]
fn ControlledComponent() -> impl IntoView {
    // create a signal to hold the value
    let (name, set_name) = create_signal("Controlled".to_string());

    view! {
        <input type="text"
            // fire an event whenever the input changes
            on:input=move |ev| {
                // event_target_value is a Leptos helper function
                // it functions the same way as event.target.value
                // in JavaScript, but smooths out some of the typecasting
                // necessary to make this work in Rust
                set_name(event_target_value(&ev));
            }

            // the `prop:` syntax lets you update a DOM property,
            // rather than an attribute.
            //
            // IMPORTANT: the `value` *attribute* only sets the
            // initial value, until you have made a change.
            // The `value` *property* sets the current value.
            // This is a quirk of the DOM; I didn't invent it.
            // Other frameworks gloss this over; I think it's
            // more important to give you access to the browser
            // as it really works.
            //
            // tl;dr: use prop:value for form inputs
            prop:value=name
        />
        <p>"Name is: " {name}</p>
    }
}

#[component]
fn UncontrolledComponent() -> impl IntoView {
    // import the type for <input>
    use leptos::html::Input;

    let (name, set_name) = create_signal("Uncontrolled".to_string());

    // we'll use a NodeRef to store a reference to the input element
    // this will be filled when the element is created
    let input_element: NodeRef<Input> = create_node_ref();

    // fires when the form `submit` event happens
    // this will store the value of the <input> in our signal
    let on_submit = move |ev: SubmitEvent| {
        // stop the page from reloading!
        ev.prevent_default();

        // here, we'll extract the value from the input
        let value = input_element()
            // event handlers can only fire after the view
            // is mounted to the DOM, so the `NodeRef` will be `Some`
            .expect("<input> to exist")
            // `NodeRef` implements `Deref` for the DOM element type
            // this means we can call `HtmlInputElement::value()`
            // to get the current value of the input
            .value();
        set_name(value);
    };

    view! {
        <form on:submit=on_submit>
            <input type="text"
                // here, we use the `value` *attribute* to set only
                // the initial value, letting the browser maintain
                // the state after that
                value=name

                // store a reference to this input in `input_element`
                node_ref=input_element
            />
            <input type="submit" value="Submit"/>
        </form>
        <p>"Name is: " {name}</p>
    }
}

// This `main` function is the entry point into the app
// It just mounts our component to the <body>
// Because we defined it as `fn App`, we can now use it in a
// template as <App/>
fn main() {
    leptos::mount_to_body(App)
}

Control Flow

In most applications, you sometimes need to make a decision: Should I render this part of the view, or not? Should I render <ButtonA/> or <WidgetB/>? This is control flow.

A Few Tips

When thinking about how to do this with Leptos, it’s important to remember a few things:

  1. Rust is an expression-oriented language: control-flow expressions like if x() { y } else { z } and match x() { ... } return their values. This makes them very useful for declarative user interfaces.
  2. For any T that implements IntoView—in other words, for any type that Leptos knows how to render—Option<T> and Result<T, impl Error> also implement IntoView. And just as Fn() -> T renders a reactive T, Fn() -> Option<T> and Fn() -> Result<T, impl Error> are reactive.
  3. Rust has lots of handy helpers like Option::map, Option::and_then, Option::ok_or, Result::map, Result::ok, and bool::then that allow you to convert, in a declarative way, between a few different standard types, all of which can be rendered. Spending time in the Option and Result docs in particular is one of the best ways to level up your Rust game.
  4. And always remember: to be reactive, values must be functions. You’ll see me constantly wrap things in a move || closure, below. This is to ensure that they actually rerun when the signal they depend on changes, keeping the UI reactive.

So What?

To connect the dots a little: this means that you can actually implement most of your control flow with native Rust code, without any control-flow components or special knowledge.

For example, let’s start with a simple signal and derived signal:

let (value, set_value) = create_signal(0);
let is_odd = move || value() & 1 == 1;

If you don’t recognize what’s going on with is_odd, don’t worry about it too much. It’s just a simple way to test whether an integer is odd by doing a bitwise AND with 1.

We can use these signals and ordinary Rust to build most control flow.

if statements

Let’s say I want to render some text if the number is odd, and some other text if it’s even. Well, how about this?

view! {
    <p>
    {move || if is_odd() {
        "Odd"
    } else {
        "Even"
    }}
    </p>
}

An if expression returns its value, and a &str implements IntoView, so a Fn() -> &str implements IntoView, so this... just works!

Option<T>

Let’s say we want to render some text if it’s odd, and nothing if it’s even.

let message = move || {
    if is_odd() {
        Some("Ding ding ding!")
    } else {
        None
    }
};

view! {
    <p>{message}</p>
}

This works fine. We can make it a little shorter if we’d like, using bool::then().

let message = move || is_odd().then(|| "Ding ding ding!");
view! {
    <p>{message}</p>
}

You could even inline this if you’d like, although personally I sometimes like the better cargo fmt and rust-analyzer support I get by pulling things out of the view.
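Inlined, that would look something like this:

view! {
    <p>{move || is_odd().then(|| "Ding ding ding!")}</p>
}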

match statements

We’re still just writing ordinary Rust code, right? So you have all the power of Rust’s pattern matching at your disposal.

let message = move || {
    match value() {
        0 => "Zero",
        1 => "One",
        n if is_odd() => "Odd",
        _ => "Even"
    }
};
view! {
    <p>{message}</p>
}

And why not? YOLO, right?

Preventing Over-Rendering

Not so YOLO.

Everything we’ve just done is basically fine. But there’s one thing you should remember and try to be careful with. Each one of the control-flow functions we’ve created so far is basically a derived signal: it will rerun every time the value changes. In the examples above, where the value switches from even to odd on every change, this is fine.

But consider the following example:

let (value, set_value) = create_signal(0);

let message = move || if value() > 5 {
    "Big"
} else {
    "Small"
};

view! {
    <p>{message}</p>
}

This works, for sure. But if you added a log, you might be surprised

let message = move || if value() > 5 {
    logging::log!("{}: rendering Big", value());
    "Big"
} else {
    logging::log!("{}: rendering Small", value());
    "Small"
};

As a user clicks a button, you’d see something like this:

1: rendering Small
2: rendering Small
3: rendering Small
4: rendering Small
5: rendering Small
6: rendering Big
7: rendering Big
8: rendering Big
... ad infinitum

Every time value changes, it reruns the if statement. This makes sense, with how reactivity works. But it has a downside. For a simple text node, rerunning the if statement and rerendering isn’t a big deal. But imagine it were like this:

let message = move || if value() > 5 {
    <Big/>
} else {
    <Small/>
};

This rerenders <Small/> five times, then <Big/> infinitely. If they’re loading resources, creating signals, or even just creating DOM nodes, this is unnecessary work.

<Show/>

The <Show/> component is the answer. You pass it a when condition function, a fallback to be shown if the when function returns false, and children to be rendered if when is true.

let (value, set_value) = create_signal(0);

view! {
  <Show
    when=move || { value() > 5 }
    fallback=|| view! { <Small/> }
  >
    <Big/>
  </Show>
}

<Show/> memoizes the when condition, so it only renders its <Small/> once, continuing to show the same component until value is greater than five; then it renders <Big/> once, continuing to show it until value drops below five again, at which point it renders <Small/> once more.

This is a helpful tool to avoid rerendering when using dynamic if expressions. As always, there's some overhead: for a very simple node (like updating a single text node, or updating a class or attribute), a move || if ... will be more efficient. But if it’s at all expensive to render either branch, reach for <Show/>.

Note: Type Conversions

There’s one final thing it’s important to say in this section.

The view macro doesn’t return the most-generic wrapping type View. Instead, it returns things with types like Fragment or HtmlElement<Input>. This can be a little annoying if you’re returning different HTML elements from different branches of a conditional:

view! {
    <main>
        {move || match is_odd() {
            true if value() == 1 => {
                // returns HtmlElement<Pre>
                view! { <pre>"One"</pre> }
            },
            false if value() == 2 => {
                // returns HtmlElement<P>
                view! { <p>"Two"</p> }
            }
            // returns HtmlElement<Textarea>
            _ => view! { <textarea>{value()}</textarea> }
        }}
    </main>
}

This strong typing is actually very powerful, because HtmlElement is, among other things, a smart pointer: each HtmlElement<T> type implements Deref for the appropriate underlying web_sys type. In other words, in the browser your view returns real DOM elements, and you can access native DOM methods on them.
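For example (a sketch, assuming client-side rendering): a single-element view! gives you a concrete HtmlElement, and you can call native web_sys methods on it through Deref:

// the view macro returns a strongly-typed element here
let el: HtmlElement<html::Input> = view! { <input type="text" value="hello"/> };

// `HtmlElement<Input>` derefs to `web_sys::HtmlInputElement`,
// so native DOM methods like `.value()` are available directly
logging::log!("input value: {}", el.value());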

But it can be a little annoying in conditional logic like this, because you can’t return different types from different branches of a condition in Rust. There are two ways to get yourself out of this situation:

  1. If you have multiple HtmlElement types, convert them to HtmlElement<AnyElement> with .into_any()
  2. If you have a variety of view types that are not all HtmlElement, convert them to Views with .into_view().

Here’s the same example, with the conversion added:

view! {
    <main>
        {move || match is_odd() {
            true if value() == 1 => {
                // returns HtmlElement<Pre>
                view! { <pre>"One"</pre> }.into_any()
            },
            false if value() == 2 => {
                // returns HtmlElement<P>
                view! { <p>"Two"</p> }.into_any()
            }
            // returns HtmlElement<Textarea>
            _ => view! { <textarea>{value()}</textarea> }.into_any()
        }}
    </main>
}

Live example

Click to open CodeSandbox.

CodeSandbox Source
use leptos::*;

#[component]
fn App() -> impl IntoView {
    let (value, set_value) = create_signal(0);
    let is_odd = move || value() & 1 == 1;
    let odd_text = move || if is_odd() { Some("How odd!") } else { None };

    view! {
        <h1>"Control Flow"</h1>

        // Simple UI to update and show a value
        <button on:click=move |_| set_value.update(|n| *n += 1)>
            "+1"
        </button>
        <p>"Value is: " {value}</p>

        <hr/>

        <h2><code>"Option<T>"</code></h2>
        // For any `T` that implements `IntoView`,
        // so does `Option<T>`

        <p>{odd_text}</p>
        // This means you can use `Option` methods on it
        <p>{move || odd_text().map(|text| text.len())}</p>

        <h2>"Conditional Logic"</h2>
        // You can do dynamic conditional if-then-else
        // logic in several ways
        //
        // a. An "if" expression in a function
        //    This will simply re-render every time the value
        //    changes, which makes it good for lightweight UI
        <p>
            {move || if is_odd() {
                "Odd"
            } else {
                "Even"
            }}
        </p>

        // b. Toggling some kind of class
        //    This is smart for an element that's going to
        //    toggled often, because it doesn't destroy
        //    it in between states
        //    (you can find the `hidden` class in `index.html`)
        <p class:hidden=is_odd>"Appears if even."</p>

        // c. The <Show/> component
        //    This only renders the fallback and the child
        //    once, lazily, and toggles between them when
        //    needed. This makes it more efficient in many cases
        //    than a {move || if ...} block
        <Show when=is_odd
            fallback=|| view! { <p>"Even steven"</p> }
        >
            <p>"Oddment"</p>
        </Show>

        // d. Because `bool::then()` converts a `bool` to
        //    `Option`, you can use it to create a show/hide toggled
        {move || is_odd().then(|| view! { <p>"Oddity!"</p> })}

        <h2>"Converting between Types"</h2>
        // e. Note: if branches return different types,
        //    you can convert between them with
        //    `.into_any()` (for different HTML element types)
        //    or `.into_view()` (for all view types)
        {move || match is_odd() {
            true if value() == 1 => {
                // <pre> returns HtmlElement<Pre>
                view! { <pre>"One"</pre> }.into_any()
            },
            false if value() == 2 => {
                // <p> returns HtmlElement<P>
                // so we convert into a more generic type
                view! { <p>"Two"</p> }.into_any()
            }
            _ => view! { <textarea>{value()}</textarea> }.into_any()
        }}
    }
}

fn main() {
    leptos::mount_to_body(App)
}

Error Handling

In the last chapter, we saw that you can render Option<T>: in the None case, it will render nothing, and in the Some(T) case, it will render T (that is, if T implements IntoView). You can actually do something very similar with a Result<T, E>. In the Err(_) case, it will render nothing. In the Ok(T) case, it will render the T.

Let’s start with a simple component to capture a number input.

#[component]
fn NumericInput() -> impl IntoView {
    let (value, set_value) = create_signal(Ok(0));

    // when input changes, try to parse a number from the input
    let on_input = move |ev| set_value(event_target_value(&ev).parse::<i32>());

    view! {
        <label>
            "Type an integer (or not!)"
            <input type="number" on:input=on_input/>
            <p>
                "You entered "
                <strong>{value}</strong>
            </p>
        </label>
    }
}

Every time you change the input, on_input will attempt to parse its value into a 32-bit integer (i32), and store it in our value signal, which is a Result<i32, _>. If you type the number 42, the UI will display

You entered 42

But if you type the string foo, it will display

You entered

This is not great. It saves us using .unwrap_or_default() or something, but it would be much nicer if we could catch the error and do something with it.

You can do that, with the <ErrorBoundary/> component.

Note

People often try to point out that <input type="number"> prevents you from typing a string like foo, or anything else that's not a number. This is true in some browsers, but not in all! Moreover, there are a variety of things that can be typed into a plain number input that are not an i32: a floating-point number, a larger-than-32-bit number, the letter e, and so on. The browser can be told to uphold some of these invariants, but browser behavior still varies: Parsing for yourself is important!

<ErrorBoundary/>

An <ErrorBoundary/> is a little like the <Show/> component we saw in the last chapter. If everything’s okay—which is to say, if everything is Ok(_)—it renders its children. But if there’s an Err(_) rendered among those children, it will trigger the <ErrorBoundary/>’s fallback.

Let’s add an <ErrorBoundary/> to this example.

#[component]
fn NumericInput() -> impl IntoView {
    let (value, set_value) = create_signal(Ok(0));

    let on_input = move |ev| set_value(event_target_value(&ev).parse::<i32>());

    view! {
        <h1>"Error Handling"</h1>
        <label>
            "Type a number (or something that's not a number!)"
            <input type="number" on:input=on_input/>
            <ErrorBoundary
                // the fallback receives a signal containing current errors
                fallback=|errors| view! {
                    <div class="error">
                        <p>"Not a number! Errors: "</p>
                        // we can render a list of errors as strings, if we'd like
                        <ul>
                            {move || errors.get()
                                .into_iter()
                                .map(|(_, e)| view! { <li>{e.to_string()}</li>})
                                .collect_view()
                            }
                        </ul>
                    </div>
                }
            >
                <p>"You entered " <strong>{value}</strong></p>
            </ErrorBoundary>
        </label>
    }
}

Now, if you type 42, value is Ok(42) and you’ll see

You entered 42

If you type foo, value is Err(_) and the fallback will render. We’ve chosen to render the list of errors as a String, so you’ll see something like

Not a number! Errors:
- cannot parse integer from empty string

If you fix the error, the error message will disappear and the content you’re wrapping in an <ErrorBoundary/> will appear again.

Live example

Click to open CodeSandbox.

CodeSandbox Source
use leptos::*;

#[component]
fn App() -> impl IntoView {
    let (value, set_value) = create_signal(Ok(0));

    // when input changes, try to parse a number from the input
    let on_input = move |ev| set_value(event_target_value(&ev).parse::<i32>());

    view! {
        <h1>"Error Handling"</h1>
        <label>
            "Type a number (or something that's not a number!)"
            <input type="number" on:input=on_input/>
            // If an `Err(_)` had been rendered inside the <ErrorBoundary/>,
            // the fallback will be displayed. Otherwise, the children of the
            // <ErrorBoundary/> will be displayed.
            <ErrorBoundary
                // the fallback receives a signal containing current errors
                fallback=|errors| view! {
                    <div class="error">
                        <p>"Not a number! Errors: "</p>
                        // we can render a list of errors
                        // as strings, if we'd like
                        <ul>
                            {move || errors.get()
                                .into_iter()
                                .map(|(_, e)| view! { <li>{e.to_string()}</li>})
                                .collect::<Vec<_>>()
                            }
                        </ul>
                    </div>
                }
            >
                <p>
                    "You entered "
                    // because `value` is `Result<i32, _>`,
                    // it will render the `i32` if it is `Ok`,
                    // and render nothing and trigger the error boundary
                    // if it is `Err`. It's a signal, so this will dynamically
                    // update when `value` changes
                    <strong>{value}</strong>
                </p>
            </ErrorBoundary>
        </label>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

Parent-Child Communication

You can think of your application as a nested tree of components. Each component handles its own local state and manages a section of the user interface, so components tend to be relatively self-contained.

Sometimes, though, you’ll want to communicate between a parent component and its child. For example, imagine you’ve defined a <FancyButton/> component that adds some styling, logging, or something else to a <button/>. You want to use a <FancyButton/> in your <App/> component. But how can you communicate between the two?

It’s easy to communicate state from a parent component to a child component. We covered some of this in the material on components and props. Basically if you want the parent to communicate to the child, you can pass a ReadSignal, a Signal, or even a MaybeSignal as a prop.

But what about the other direction? How can a child send notifications about events or state changes back up to the parent?

There are four basic patterns of parent-child communication in Leptos.

1. Pass a WriteSignal

One approach is simply to pass a WriteSignal from the parent down to the child, and update it in the child. This lets you manipulate the state of the parent from the child.

#[component]
pub fn App() -> impl IntoView {
    let (toggled, set_toggled) = create_signal(false);
    view! {
        <p>"Toggled? " {toggled}</p>
        <ButtonA setter=set_toggled/>
    }
}

#[component]
pub fn ButtonA(setter: WriteSignal<bool>) -> impl IntoView {
    view! {
        <button
            on:click=move |_| setter.update(|value| *value = !*value)
        >
            "Toggle"
        </button>
    }
}

This pattern is simple, but you should be careful with it: passing around a WriteSignal can make it hard to reason about your code. In this example, it’s pretty clear when you read <App/> that you are handing off the ability to mutate toggled, but it’s not at all clear when or how it will change. In this small, local example it’s easy to understand, but if you find yourself passing around WriteSignals like this throughout your code, you should really consider whether this is making it too easy to write spaghetti code.

2. Use a Callback

Another approach would be to pass a callback to the child: say, on_click.

#[component]
pub fn App() -> impl IntoView {
    let (toggled, set_toggled) = create_signal(false);
    view! {
        <p>"Toggled? " {toggled}</p>
        <ButtonB on_click=move |_| set_toggled.update(|value| *value = !*value)/>
    }
}


#[component]
pub fn ButtonB(#[prop(into)] on_click: Callback<MouseEvent>) -> impl IntoView
{
    view! {
        <button on:click=on_click>
            "Toggle"
        </button>
    }
}

You’ll notice that whereas <ButtonA/> was given a WriteSignal and decided how to mutate it, <ButtonB/> simply fires an event: the mutation happens back in <App/>. This has the advantage of keeping local state local, preventing the problem of spaghetti mutation. But it also means the logic to mutate that signal needs to exist up in <App/>, not down in <ButtonB/>. These are real trade-offs, not a simple right-or-wrong choice.

Note the way we use the Callback<In, Out> type. This is basically a wrapper around a closure Fn(In) -> Out that is also Copy and makes it easy to pass around.

We also used the #[prop(into)] attribute so we can pass a normal closure into on_click. Please see the chapter "into Props" for more details.
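For reference, a sketch of how a component might invoke the Callback it receives (FancyInput is a made-up name; .call() comes from the Callable trait, which should be in scope via use leptos::*;):

#[component]
pub fn FancyInput(#[prop(into)] on_change: Callback<String>) -> impl IntoView {
    view! {
        <input type="text"
            // forward the current text to the parent via the callback
            on:input=move |ev| on_change.call(event_target_value(&ev))
        />
    }
}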

2.1 Use Closure instead of Callback

You can use a Rust closure Fn(MouseEvent) directly instead of Callback:

#[component]
pub fn App() -> impl IntoView {
    let (toggled, set_toggled) = create_signal(false);
    view! {
        <p>"Toggled? " {toggled}</p>
        <ButtonB on_click=move |_| set_toggled.update(|value| *value = !*value)/>
    }
}


#[component]
pub fn ButtonB<F>(on_click: F) -> impl IntoView
where
    F: Fn(MouseEvent) + 'static
{
    view! {
        <button on:click=on_click>
            "Toggle"
        </button>
    }
}

The code is very similar in this case. In more advanced use cases, using a closure might require some cloning, compared to using a Callback.

Note the way we declare the generic type F here for the callback. If you’re confused, look back at the generic props section of the chapter on components.

3. Use an Event Listener

You can actually write Option 2 in a slightly different way. If the callback maps directly onto a native DOM event, you can add an on: listener directly to the place you use the component in your view macro in <App/>.

#[component]
pub fn App() -> impl IntoView {
    let (toggled, set_toggled) = create_signal(false);
    view! {
        <p>"Toggled? " {toggled}</p>
        // note the on:click instead of on_click
        // this is the same syntax as an HTML element event listener
        <ButtonC on:click=move |_| set_toggled.update(|value| *value = !*value)/>
    }
}


#[component]
pub fn ButtonC() -> impl IntoView {
    view! {
        <button>"Toggle"</button>
    }
}

This lets you write way less code in <ButtonC/> than you did for <ButtonB/>, and still gives a correctly-typed event to the listener. This works by adding an on: event listener to each element that <ButtonC/> returns: in this case, just the one <button>.

Of course, this only works for actual DOM events that you’re passing directly through to the elements you’re rendering in the component. For more complex logic that doesn’t map directly onto an element (say you create <ValidatedForm/> and want an on_valid_form_submit callback) you should use Option 2.
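
For instance, a hypothetical <ValidatedForm/> along those lines might run its own validation and only then invoke the parent's callback. A rough sketch (the component, its single field, and the non-empty validation rule are all just illustrative):

#[component]
pub fn ValidatedForm(
    #[prop(into)] on_valid_form_submit: Callback<String>,
) -> impl IntoView {
    let (value, set_value) = create_signal(String::new());
    view! {
        <form on:submit=move |ev| {
            ev.prevent_default();
            // only notify the parent if our (illustrative) validation passes
            if !value.with(String::is_empty) {
                on_valid_form_submit.call(value.get());
            }
        }>
            <input
                type="text"
                prop:value=value
                on:input=move |ev| set_value(event_target_value(&ev))
            />
            <button type="submit">"Submit"</button>
        </form>
    }
}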

4. Providing a Context

This version is actually a variant on Option 1. Say you have a deeply-nested component tree:

#[component]
pub fn App() -> impl IntoView {
    let (toggled, set_toggled) = create_signal(false);
    view! {
        <p>"Toggled? " {toggled}</p>
        <Layout/>
    }
}

#[component]
pub fn Layout() -> impl IntoView {
    view! {
        <header>
            <h1>"My Page"</h1>
        </header>
        <main>
            <Content/>
        </main>
    }
}

#[component]
pub fn Content() -> impl IntoView {
    view! {
        <div class="content">
            <ButtonD/>
        </div>
    }
}

#[component]
pub fn ButtonD() -> impl IntoView {
    todo!()
}

Now <ButtonD/> is no longer a direct child of <App/>, so you can’t simply pass your WriteSignal to its props. You could do what’s sometimes called “prop drilling,” adding a prop to each layer between the two:

#[component]
pub fn App() -> impl IntoView {
    let (toggled, set_toggled) = create_signal(false);
    view! {
        <p>"Toggled? " {toggled}</p>
        <Layout set_toggled/>
    }
}

#[component]
pub fn Layout(set_toggled: WriteSignal<bool>) -> impl IntoView {
    view! {
        <header>
            <h1>"My Page"</h1>
        </header>
        <main>
            <Content set_toggled/>
        </main>
    }
}

#[component]
pub fn Content(set_toggled: WriteSignal<bool>) -> impl IntoView {
    view! {
        <div class="content">
            <ButtonD set_toggled/>
        </div>
    }
}

#[component]
pub fn ButtonD(set_toggled: WriteSignal<bool>) -> impl IntoView {
    todo!()
}

This is a mess. <Layout/> and <Content/> don’t need set_toggled; they just pass it through to <ButtonD/>. But I need to declare the prop in triplicate. This is not only annoying but hard to maintain: imagine we add a “half-toggled” option and the type of set_toggled needs to change to an enum. We have to change it in three places!

Isn’t there some way to skip levels?

There is!

4.1 The Context API

You can provide data that skips levels by using provide_context and use_context. Contexts are identified by the type of the data you provide (in this example, WriteSignal<bool>), and they exist in a top-down tree that follows the contours of your UI tree. In this example, we can use context to skip the unnecessary prop drilling.

#[component]
pub fn App() -> impl IntoView {
    let (toggled, set_toggled) = create_signal(false);

    // share `set_toggled` with all children of this component
    provide_context(set_toggled);

    view! {
        <p>"Toggled? " {toggled}</p>
        <Layout/>
    }
}

// <Layout/> and <Content/> omitted
// To work in this version, drop their references to set_toggled

#[component]
pub fn ButtonD() -> impl IntoView {
    // use_context searches up the context tree, hoping to
    // find a `WriteSignal<bool>`
    // in this case, I .expect() because I know I provided it
    let setter = use_context::<WriteSignal<bool>>()
        .expect("to have found the setter provided");

    view! {
        <button
            on:click=move |_| setter.update(|value| *value = !*value)
        >
            "Toggle"
        </button>
    }
}

The same caveats apply to this as to <ButtonA/>: passing a WriteSignal around should be done with caution, as it allows you to mutate state from arbitrary parts of your code. But when done carefully, this can be one of the most effective techniques for global state management in Leptos: simply provide the state at the highest level you’ll need it, and use it wherever you need it lower down.

Note that there are no performance downsides to this approach. Because you are passing a fine-grained reactive signal, nothing happens in the intervening components (<Layout/> and <Content/>) when you update it. You are communicating directly between <ButtonD/> and <App/>. In fact—and this is the power of fine-grained reactivity—you are communicating directly between a button click in <ButtonD/> and a single text node in <App/>. It’s as if the components themselves don’t exist at all. And, well... at runtime, they don’t. It’s just signals and effects, all the way down.

Live example

Click to open CodeSandbox.

CodeSandbox Source
use leptos::{ev::MouseEvent, *};

// This highlights four different ways that child components can communicate
// with their parent:
// 1) <ButtonA/>: passing a WriteSignal as one of the child component props,
//    for the child component to write into and the parent to read
// 2) <ButtonB/>: passing a closure as one of the child component props, for
//    the child component to call
// 3) <ButtonC/>: adding an `on:` event listener to a component
// 4) <ButtonD/>: providing a context that is used in the component (rather than prop drilling)

#[derive(Copy, Clone)]
struct SmallcapsContext(WriteSignal<bool>);

#[component]
pub fn App() -> impl IntoView {
    // just some signals to toggle three classes on our <p>
    let (red, set_red) = create_signal(false);
    let (right, set_right) = create_signal(false);
    let (italics, set_italics) = create_signal(false);
    let (smallcaps, set_smallcaps) = create_signal(false);

    // the newtype pattern isn't *necessary* here but is a good practice
    // it avoids confusion with other possible future `WriteSignal<bool>` contexts
    // and makes it easier to refer to it in ButtonD
    provide_context(SmallcapsContext(set_smallcaps));

    view! {
        <main>
            <p
                // class: attributes take F: Fn() -> bool, and these signals all implement Fn()
                class:red=red
                class:right=right
                class:italics=italics
                class:smallcaps=smallcaps
            >
                "Lorem ipsum sit dolor amet."
            </p>

            // Button A: pass the signal setter
            <ButtonA setter=set_red/>

            // Button B: pass a closure
            <ButtonB on_click=move |_| set_right.update(|value| *value = !*value)/>

            // Button C: use a regular event listener
            // setting an event listener on a component like this applies it
            // to each of the top-level elements the component returns
            <ButtonC on:click=move |_| set_italics.update(|value| *value = !*value)/>

            // Button D gets its setter from context rather than props
            <ButtonD/>
        </main>
    }
}

/// Button A receives a signal setter and updates the signal itself
#[component]
pub fn ButtonA(
    /// Signal that will be toggled when the button is clicked.
    setter: WriteSignal<bool>,
) -> impl IntoView {
    view! {
        <button
            on:click=move |_| setter.update(|value| *value = !*value)
        >
            "Toggle Red"
        </button>
    }
}

/// Button B receives a closure
#[component]
pub fn ButtonB<F>(
    /// Callback that will be invoked when the button is clicked.
    on_click: F,
) -> impl IntoView
where
    F: Fn(MouseEvent) + 'static,
{
    view! {
        <button
            on:click=on_click
        >
            "Toggle Right"
        </button>
    }

    // just a note: in an ordinary function ButtonB could take on_click: impl Fn(MouseEvent) + 'static
    // and save you from typing out the generic
    // the component macro actually expands to define a
    //
    // struct ButtonBProps<F> where F: Fn(MouseEvent) + 'static {
    //   on_click: F
    // }
    //
    // this is what allows us to have named props in our component invocation,
    // instead of an ordered list of function arguments
    // if Rust ever had named function arguments we could drop this requirement
}

/// Button C is a dummy: it renders a button but doesn't handle
/// its click. Instead, the parent component adds an event listener.
#[component]
pub fn ButtonC() -> impl IntoView {
    view! {
        <button>
            "Toggle Italics"
        </button>
    }
}

/// Button D is very similar to Button A, but instead of passing the setter as a prop
/// we get it from the context
#[component]
pub fn ButtonD() -> impl IntoView {
    let setter = use_context::<SmallcapsContext>().unwrap().0;

    view! {
        <button
            on:click=move |_| setter.update(|value| *value = !*value)
        >
            "Toggle Small Caps"
        </button>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

Component Children

It’s pretty common to want to pass children into a component, just as you can pass children into an HTML element. For example, imagine I have a <FancyForm/> component that enhances an HTML <form>. I need some way to pass all its inputs.

view! {
    <FancyForm>
        <fieldset>
            <label>
                "Some Input"
                <input type="text" name="something"/>
            </label>
        </fieldset>
        <button>"Submit"</button>
    </FancyForm>
}

How can you do this in Leptos? There are basically two ways to pass components to other components:

  1. render props: properties that are functions that return a view
  2. the children prop: a special component property that includes anything you pass as a child to the component.

In fact, you’ve already seen these both in action in the <Show/> component:

view! {
  <Show
    // `when` is a normal prop
    when=move || value() > 5
    // `fallback` is a "render prop": a function that returns a view
    fallback=|| view! { <Small/> }
  >
    // `<Big/>` (and anything else here)
    // will be given to the `children` prop
    <Big/>
  </Show>
}

Let’s define a component that takes some children and a render prop.

#[component]
pub fn TakesChildren<F, IV>(
    /// Takes a function (type F) that returns anything that can be
    /// converted into a View (type IV)
    render_prop: F,
    /// `children` takes the `Children` type
    children: Children,
) -> impl IntoView
where
    F: Fn() -> IV,
    IV: IntoView,
{
    view! {
        <h2>"Render Prop"</h2>
        {render_prop()}

        <h2>"Children"</h2>
        {children()}
    }
}

render_prop and children are both functions, so we can call them to generate the appropriate views. children, in particular, is an alias for Box<dyn FnOnce() -> Fragment>. (Aren't you glad we named it Children instead?)

If you need a Fn or FnMut here because you need to call children more than once, we also provide ChildrenFn and ChildrenFnMut aliases.

We can use the component like this:

view! {
    <TakesChildren render_prop=|| view! { <p>"Hi, there!"</p> }>
        // these get passed to `children`
        "Some text"
        <span>"A span"</span>
    </TakesChildren>
}

Manipulating Children

The Fragment type is basically a way of wrapping a Vec<View>. You can insert it anywhere into your view.

But you can also access those inner views directly to manipulate them. For example, here’s a component that takes its children and turns them into an unordered list.

#[component]
pub fn WrapsChildren(children: Children) -> impl IntoView {
    // Fragment has `nodes` field that contains a Vec<View>
    let children = children()
        .nodes
        .into_iter()
        .map(|child| view! { <li>{child}</li> })
        .collect_view();

    view! {
        <ul>{children}</ul>
    }
}

Calling it like this will create a list:

view! {
    <WrapsChildren>
        "A"
        "B"
        "C"
    </WrapsChildren>
}

Live example

Click to open CodeSandbox.

CodeSandbox Source
use leptos::*;

// Often, you want to pass some kind of child view to another
// component. There are two basic patterns for doing this:
// - "render props": creating a component prop that takes a function
//   that creates a view
// - the `children` prop: a special property that contains content
//   passed as the children of a component in your view, not as a
//   property

#[component]
pub fn App() -> impl IntoView {
    let (items, set_items) = create_signal(vec![0, 1, 2]);
    let render_prop = move || {
        // items.with(...) reacts to the value without cloning
        // by applying a function. Here, we pass the `len` method
        // on a `Vec<_>` directly
        let len = move || items.with(Vec::len);
        view! {
            <p>"Length: " {len}</p>
        }
    };

    view! {
        // This component just displays the two kinds of children,
        // embedding them in some other markup
        <TakesChildren
            // for component props, you can shorthand
            // `render_prop=render_prop` => `render_prop`
            // (this doesn't work for HTML element attributes)
            render_prop
        >
            // these look just like the children of an HTML element
            <p>"Here's a child."</p>
            <p>"Here's another child."</p>
        </TakesChildren>
        <hr/>
        // This component actually iterates over and wraps the children
        <WrapsChildren>
            <p>"Here's a child."</p>
            <p>"Here's another child."</p>
        </WrapsChildren>
    }
}

/// Displays a `render_prop` and some children within markup.
#[component]
pub fn TakesChildren<F, IV>(
    /// Takes a function (type F) that returns anything that can be
    /// converted into a View (type IV)
    render_prop: F,
    /// `children` takes the `Children` type
    /// this is an alias for `Box<dyn FnOnce() -> Fragment>`
    /// ... aren't you glad we named it `Children` instead?
    children: Children,
) -> impl IntoView
where
    F: Fn() -> IV,
    IV: IntoView,
{
    view! {
        <h1><code>"<TakesChildren/>"</code></h1>
        <h2>"Render Prop"</h2>
        {render_prop()}
        <hr/>
        <h2>"Children"</h2>
        {children()}
    }
}

/// Wraps each child in an `<li>` and embeds them in a `<ul>`.
#[component]
pub fn WrapsChildren(children: Children) -> impl IntoView {
    // children() returns a `Fragment`, which has a
    // `nodes` field that contains a Vec<View>
    // this means we can iterate over the children
    // to create something new!
    let children = children()
        .nodes
        .into_iter()
        .map(|child| view! { <li>{child}</li> })
        .collect::<Vec<_>>();

    view! {
        <h1><code>"<WrapsChildren/>"</code></h1>
        // wrap our wrapped children in a UL
        <ul>{children}</ul>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

No Macros: The View Builder Syntax

If you’re perfectly happy with the view! macro syntax described so far, you’re welcome to skip this chapter. The builder syntax described in this section is always available, but never required.

For one reason or another, many developers would prefer to avoid macros. Perhaps you don’t like the limited rustfmt support. (Although, you should check out leptosfmt, which is an excellent tool!) Perhaps you worry about the effect of macros on compile time. Perhaps you prefer the aesthetics of pure Rust syntax, or you have trouble context-switching between an HTML-like syntax and your Rust code. Or perhaps you want more flexibility in how you create and manipulate HTML elements than the view macro provides.

If you fall into any of those camps, the builder syntax may be for you.

The view macro expands an HTML-like syntax to a series of Rust functions and method calls. If you’d rather not use the view macro, you can simply use that expanded syntax yourself. And it’s actually pretty nice!

First off, if you want you can even drop the #[component] macro: a component is just a setup function that creates your view, so you can define a component as a simple function call:

pub fn counter(initial_value: i32, step: u32) -> impl IntoView { }

Elements are created by calling a function with the same name as the HTML element:

p()

You can add children to the element with .child(), which takes a single child or a tuple or array of types that implement IntoView.

p().child((em().child("Big, "), strong().child("bold "), "text"))

Attributes are added with .attr(). This can take any of the same types that you could pass as an attribute into the view macro (types that implement IntoAttribute).

p().attr("id", "foo").attr("data-count", move || count().to_string())

Similarly, the class:, prop:, and style: syntaxes map directly onto .class(), .prop(), and .style() methods.
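
For example (a small sketch, assuming the same imports as the counter example below, i.e. the element functions from leptos::html and the typed events from leptos::ev):

let (value, set_value) = create_signal(String::new());

// `class:`, `prop:`, and `style:` from the view macro become plain method calls
let text_input = input()
    .attr("type", "text")
    // like `class:empty=move || value.with(String::is_empty)`
    .class("empty", move || value.with(String::is_empty))
    // like `prop:value=value`
    .prop("value", value)
    // like `style:border-color="red"`
    .style("border-color", "red")
    .on(ev::input, move |ev| set_value(event_target_value(&ev)));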

Event listeners can be added with .on(). Typed events found in leptos::ev prevent typos in event names and allow for correct type inference in the callback function.

button()
    .on(ev::click, move |_| set_count.update(|count| *count = 0))
    .child("Clear")

Many additional methods can be found in the HtmlElement docs, including some methods that are not directly available in the view macro.

All of this adds up to a very Rusty syntax to build full-featured views, if you prefer this style.

/// A simple counter view.
// A component is really just a function call: it runs once to create the DOM and reactive system
pub fn counter(initial_value: i32, step: u32) -> impl IntoView {
    let (count, set_count) = create_signal(0);
    div().child((
        button()
            // typed events found in leptos::ev
            // 1) prevent typos in event names
            // 2) allow for correct type inference in callbacks
            .on(ev::click, move |_| set_count.update(|count| *count = 0))
            .child("Clear"),
        button()
            .on(ev::click, move |_| set_count.update(|count| *count -= 1))
            .child("-1"),
        span().child(("Value: ", move || count.get(), "!")),
        button()
            .on(ev::click, move |_| set_count.update(|count| *count += 1))
            .child("+1"),
    ))
}

This also has the benefit of being more flexible: because these are all plain Rust functions and methods, it’s easier to use them in things like iterator adapters without any additional “magic”:

// take some set of attribute names and values
let attrs: Vec<(&str, AttributeValue)> = todo!();
// you can use the builder syntax to “spread” these onto the
// element in a way that’s not possible with the view macro
let p = attrs
    .into_iter()
    .fold(p(), |el, (name, value)| el.attr(name, value));

Performance Note

One caveat: the view macro applies significant optimizations in server-side rendering (SSR) mode to improve HTML rendering performance (think 2-4x faster, depending on the characteristics of any given app). It does this by analyzing your view at compile time and converting the static parts into simple HTML strings, rather than expanding them into the builder syntax.

This means two things:

  1. The builder syntax and view macro should not be mixed, or should only be mixed very carefully: at least in SSR mode, the output of the view should be treated as a “black box” that can’t have additional builder methods applied to it without causing inconsistencies.
  2. Using the builder syntax will result in less-than-optimal SSR performance. It won’t be slow, by any means (and it’s worth running your own benchmarks in any case), just slower than the view-optimized version.

Reactivity

Leptos is built on top of a fine-grained reactive system, designed to run expensive side effects (like rendering something in a browser, or making a network request) as infrequently as possible in response to changing reactive values.

So far we’ve seen signals in action. These chapters will go into a bit more depth, and look at effects, which are the other half of the story.

Working with Signals

So far we’ve used some simple examples of create_signal, which returns a ReadSignal getter and a WriteSignal setter.

Getting and Setting

There are four basic signal operations:

  1. .get() clones the current value of the signal and tracks any future changes to the value reactively.
  2. .with() takes a function, which receives the current value of the signal by reference (&T), and tracks any future changes.
  3. .set() replaces the current value of the signal and notifies any subscribers that they need to update.
  4. .update() takes a function, which receives a mutable reference to the current value of the signal (&mut T), and notifies any subscribers that they need to update. (.update() doesn’t return the value returned by the closure, but you can use .try_update() if you need to, for example if you’re removing an item from a Vec<_> and want the removed item back; see the sketch just after this list.)

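For example, here's how .try_update() can hand back the item removed from a Vec<_>, as mentioned in point 4 (a small sketch):

let (names, set_names) = create_signal(vec!["Alice".to_string(), "Bob".to_string()]);

// .update() would discard the value returned by `pop`; .try_update() returns it,
// wrapped in an extra Option that is None if the signal has already been disposed
let removed = set_names.try_update(|names| names.pop());
logging::log!("removed {:?}", removed.flatten());
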
Calling a ReadSignal as a function is syntax sugar for .get(). Calling a WriteSignal as a function is syntax sugar for .set(). So

let (count, set_count) = create_signal(0);
set_count(1);
logging::log!("{}", count());

is the same as

let (count, set_count) = create_signal(0);
set_count.set(1);
logging::log!("{}", count.get());

You might notice that .get() and .set() can be implemented in terms of .with() and .update(). In other words, count.get() is identical to count.with(|n| n.clone()), and count.set(1) is equivalent to count.update(|n| *n = 1).

But of course, .get() and .set() (or the plain function-call forms!) are much nicer syntax.

However, there are some very good use cases for .with() and .update().

For example, consider a signal that holds a Vec<String>.

let (names, set_names) = create_signal(Vec::new());
if names().is_empty() {
	set_names(vec!["Alice".to_string()]);
}

In terms of logic, this is simple enough, but it’s hiding some significant inefficiencies. Remember that names().is_empty() is sugar for names.get().is_empty(), which clones the value (it’s names.with(|n| n.clone()).is_empty()). This means we clone the whole Vec<String>, run is_empty(), and then immediately throw away the clone.

Likewise, set_names replaces the value with a whole new Vec<_>. This is fine, but we might as well just mutate the original Vec<_> in place.

let (names, set_names) = create_signal(Vec::new());
if names.with(|names| names.is_empty()) {
	set_names.update(|names| names.push("Alice".to_string()));
}

Now our function simply takes names by reference to run is_empty(), avoiding that clone.

And if you have Clippy on, or if you have sharp eyes, you may notice we can make this even neater:

if names.with(Vec::is_empty) {
	// ...
}

After all, .with() simply takes a function that takes the value by reference. Since Vec::is_empty takes &self, we can pass it in directly and avoid the unnecessary closure.

There are some helper macros that make .with() and .update() easier to use, especially when working with multiple signals.

let (first, _) = create_signal("Bob".to_string());
let (middle, _) = create_signal("J.".to_string());
let (last, _) = create_signal("Smith".to_string());

If you wanted to concatenate these 3 signals together without unnecessary cloning, you would have to write something like:

let name = move || {
	first.with(|first| {
		middle.with(|middle| last.with(|last| format!("{first} {middle} {last}")))
	})
};

Which is very long and annoying to write.

Instead, you can use the with! macro to get references to all the signals at the same time.

let name = move || with!(|first, middle, last| format!("{first} {middle} {last}"));

This expands to the same thing as above. Take a look at the with! docs for more info, and the corresponding macros update!, with_value! and update_value!.
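
The update! macro works the same way for setters. A small sketch, assuming update! expands to nested .update() calls in the same way with! expands to nested .with() calls:

let (names, set_names) = create_signal(vec!["Alice".to_string()]);

// expands to roughly `set_names.update(|set_names| set_names.push(...))`,
// so inside the closure `set_names` is bound to the `&mut Vec<String>` itself
update!(|set_names| set_names.push("Bob".to_string()));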

Making signals depend on each other

Often people ask about situations in which some signal needs to change based on some other signal’s value. There are three good ways to do this, and one that’s less than ideal but okay under controlled circumstances.

Good Options

1) B is a function of A. Create a signal for A and a derived signal or memo for B.

let (count, set_count) = create_signal(1); // A
let derived_signal_double_count = move || count() * 2; // B is a function of A
let memoized_double_count = create_memo(move |_| count() * 2); // B is a function of A  

For guidance on whether to use a derived signal or a memo, see the docs for create_memo

2) C is a function of A and some other thing B. Create signals for A and B and a derived signal or memo for C.

let (first_name, set_first_name) = create_signal("Bridget".to_string()); // A
let (last_name, set_last_name) = create_signal("Jones".to_string()); // B
let full_name = move || with!(|first_name, last_name| format!("{first_name} {last_name}")); // C is a function of A and B

3) A and B are independent signals, but sometimes updated at the same time. When you make the call to update A, make a separate call to update B.

let (age, set_age) = create_signal(32); // A
let (favorite_number, set_favorite_number) = create_signal(42); // B
// use this to handle a click on a `Clear` button
let clear_handler = move |_| {
  // update both A and B
  set_age(0);
  set_favorite_number(0);
};

If you really must...

4) Create an effect to write to B whenever A changes. This is officially discouraged, for several reasons:

  a) It will always be less efficient: every time A updates, you do two full trips through the reactive process. (You set A, which causes the effect to run, as well as any other effects that depend on A. Then you set B, which causes any effects that depend on B to run.)
  b) It increases your chances of accidentally creating things like infinite loops or over-rerunning effects. This is the kind of ping-ponging, reactive spaghetti code that was common in the early 2010s, and that we try to avoid with patterns like read-write segregation and discouraging writing to signals from effects.

In most situations, it’s best to rewrite things such that there’s a clear, top-down data flow based on derived signals or memos. But this isn’t the end of the world.

I’m intentionally not providing an example here. Read the create_effect docs to figure out how this would work.

Responding to Changes with create_effect

We’ve made it this far without having mentioned half of the reactive system: effects.

Reactivity works in two halves: updating individual reactive values (“signals”) notifies the pieces of code that depend on them (“effects”) that they need to run again. These two halves of the reactive system are inter-dependent. Without effects, signals can change within the reactive system but never be observed in a way that interacts with the outside world. Without signals, effects run once but never again, as there’s no observable value to subscribe to. Effects are quite literally “side effects” of the reactive system: they exist to synchronize the reactive system with the non-reactive world outside it.

Hidden behind the whole reactive DOM renderer that we’ve seen so far is a function called create_effect.

create_effect takes a function as its argument. It immediately runs the function. If you access any reactive signal inside that function, it registers the fact that the effect depends on that signal with the reactive runtime. Whenever one of the signals that the effect depends on changes, the effect runs again.

let (a, set_a) = create_signal(0);
let (b, set_b) = create_signal(0);

create_effect(move |_| {
  // immediately prints "Value: 0" and subscribes to `a`
  log::debug!("Value: {}", a());
});

The effect function is called with an argument containing whatever value it returned the last time it ran. On the initial run, this is None.

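For example, here's a short sketch that uses that previous value to log only when the count actually changed:

let (count, set_count) = create_signal(0);

create_effect(move |prev: Option<i32>| {
    let current = count();
    // `prev` is None on the first run, then Some(whatever we returned last time)
    if prev != Some(current) {
        logging::log!("count changed to {current}");
    }
    // the return value here is handed back to us on the next run
    current
});
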
By default, effects do not run on the server. This means you can call browser-specific APIs within the effect function without causing issues. If you need an effect to run on the server, use create_isomorphic_effect.

Auto-tracking and Dynamic Dependencies

If you’re familiar with a framework like React, you might notice one key difference. React and similar frameworks typically require you to pass a “dependency array,” an explicit set of variables that determine when the effect should rerun.

Because Leptos comes from the tradition of synchronous reactive programming, we don’t need this explicit dependency list. Instead, we automatically track dependencies based on which signals are accessed within the effect.

This has two effects (no pun intended). Dependencies are:

  1. Automatic: You don’t need to maintain a dependency list, or worry about what should or shouldn’t be included. The framework simply tracks which signals might cause the effect to rerun, and handles it for you.
  2. Dynamic: The dependency list is cleared and updated every time the effect runs. If your effect contains a conditional (for example), only signals that are used in the current branch are tracked. This means that effects rerun the absolute minimum number of times.

If this sounds like magic, and if you want a deep dive into how automatic dependency tracking works, check out this video. (Apologies for the low volume!)

Effects as Zero-Cost-ish Abstraction

While they’re not a “zero-cost abstraction” in the most technical sense—they require some additional memory use, exist at runtime, etc.—at a higher level, from the perspective of whatever expensive API calls or other work you’re doing within them, effects are a zero-cost abstraction. They rerun the absolute minimum number of times necessary, given how you’ve described them.

Imagine that I’m creating some kind of chat software, and I want people to be able to display their full name, or just their first name, and to notify the server whenever their name changes:

let (first, set_first) = create_signal(String::new());
let (last, set_last) = create_signal(String::new());
let (use_last, set_use_last) = create_signal(true);

// this will add the name to the log
// any time one of the source signals changes
create_effect(move |_| {
    log(
        if use_last() {
            format!("{} {}", first(), last())
        } else {
            first()
        },
    )
});

If use_last is true, the effect should rerun whenever first, last, or use_last changes. But if I toggle use_last to false, a change in last will never cause the full name to change. In fact, last will be removed from the dependency list until use_last toggles again. This saves us from sending multiple unnecessary requests to the API if I change last multiple times while use_last is still false.

To create_effect, or not to create_effect?

Effects are intended to synchronize the reactive system with the non-reactive world outside, not to synchronize between different reactive values. In other words: using an effect to read a value from one signal and set it in another is always sub-optimal.

If you need to define a signal that depends on the value of other signals, use a derived signal or create_memo. Writing to a signal inside an effect isn’t the end of the world, and it won’t cause your computer to light on fire, but a derived signal or memo is always better—not only because the dataflow is clear, but because the performance is better.

let (a, set_a) = create_signal(0);

// ⚠️ not great
let (b, set_b) = create_signal(0);
create_effect(move |_| {
    set_b(a() * 2);
});

// ✅ woo-hoo!
let b = move || a() * 2;

If you need to synchronize some reactive value with the non-reactive world outside—like a web API, the console, the filesystem, or the DOM—writing to a signal in an effect is a fine way to do that. In many cases, though, you’ll find that you’re really writing to a signal inside an event listener or something else, not inside an effect. In these cases, you should check out leptos-use to see if it already provides a reactive wrapping primitive to do that!

If you’re curious for more information about when you should and shouldn’t use create_effect, check out this video for a more in-depth consideration!

Effects and Rendering

We’ve managed to get this far without mentioning effects because they’re built into the Leptos DOM renderer. We’ve seen that you can create a signal and pass it into the view macro, and it will update the relevant DOM node whenever the signal changes:

let (count, set_count) = create_signal(0);

view! {
    <p>{count}</p>
}

This works because the framework essentially creates an effect wrapping this update. You can imagine Leptos translating this view into something like this:

let (count, set_count) = create_signal(0);

// create a DOM element
let document = leptos::document();
let p = document.create_element("p").unwrap();

// create an effect to reactively update the text
create_effect(move |prev_value| {
    // first, access the signal’s value and convert it to a string
    let text = count().to_string();

    // if this is different from the previous value, update the node
    if prev_value.as_ref() != Some(&text) {
        p.set_text_content(&text);
    }

    // return this value so we can memoize the next update
    text
});

Every time count is updated, this effect will rerun. This is what allows reactive, fine-grained updates to the DOM.

Explicit, Cancelable Tracking with watch

In addition to create_effect, Leptos provides a watch function, which can be used for two main purposes:

  1. Separating tracking and responding to changes by explicitly passing in a set of values to track.
  2. Canceling tracking by calling a stop function.

Like create_resource, watch takes a first argument, which is reactively tracked, and a second, which is not. Whenever a reactive value in its deps argument is changed, the callback is run. watch returns a function that can be called to stop tracking the dependencies.

let (num, set_num) = create_signal(0);

let stop = watch(
    move || num.get(),
    move |num, prev_num, _| {
        log::debug!("Number: {}; Prev: {:?}", num, prev_num);
    },
    false,
);

set_num.set(1); // > "Number: 1; Prev: Some(0)"

stop(); // stop watching

set_num.set(2); // (nothing happens)

Live example

Click to open CodeSandbox.

CodeSandbox Source
use leptos::html::Input;
use leptos::*;

#[derive(Copy, Clone)]
struct LogContext(RwSignal<Vec<String>>);

#[component]
fn App() -> impl IntoView {
    // Just making a visible log here
    // You can ignore this...
    let log = create_rw_signal::<Vec<String>>(vec![]);
    let logged = move || log().join("\n");

    // the newtype pattern isn't *necessary* here but is a good practice
    // it avoids confusion with other possible future `RwSignal<Vec<String>>` contexts
    // and makes it easier to refer to it
    provide_context(LogContext(log));

    view! {
        <CreateAnEffect/>
        <pre>{logged}</pre>
    }
}

#[component]
fn CreateAnEffect() -> impl IntoView {
    let (first, set_first) = create_signal(String::new());
    let (last, set_last) = create_signal(String::new());
    let (use_last, set_use_last) = create_signal(true);

    // this will add the name to the log
    // any time one of the source signals changes
    create_effect(move |_| {
        log(if use_last() {
            with!(|first, last| format!("{first} {last}"))
        } else {
            first()
        })
    });

    view! {
        <h1>
            <code>"create_effect"</code>
            " Version"
        </h1>
        <form>
            <label>
                "First Name"
                <input
                    type="text"
                    name="first"
                    prop:value=first
                    on:change=move |ev| set_first(event_target_value(&ev))
                />
            </label>
            <label>
                "Last Name"
                <input
                    type="text"
                    name="last"
                    prop:value=last
                    on:change=move |ev| set_last(event_target_value(&ev))
                />
            </label>
            <label>
                "Show Last Name"
                <input
                    type="checkbox"
                    name="use_last"
                    prop:checked=use_last
                    on:change=move |ev| set_use_last(event_target_checked(&ev))
                />
            </label>
        </form>
    }
}

#[component]
fn ManualVersion() -> impl IntoView {
    let first = create_node_ref::<Input>();
    let last = create_node_ref::<Input>();
    let use_last = create_node_ref::<Input>();

    let mut prev_name = String::new();
    let on_change = move |_| {
        log("      listener");
        let first = first.get().unwrap();
        let last = last.get().unwrap();
        let use_last = use_last.get().unwrap();
        let this_one = if use_last.checked() {
            format!("{} {}", first.value(), last.value())
        } else {
            first.value()
        };

        if this_one != prev_name {
            log(&this_one);
            prev_name = this_one;
        }
    };

    view! {
        <h1>"Manual Version"</h1>
        <form on:change=on_change>
            <label>"First Name" <input type="text" name="first" node_ref=first/></label>
            <label>"Last Name" <input type="text" name="last" node_ref=last/></label>
            <label>
                "Show Last Name" <input type="checkbox" name="use_last" checked node_ref=use_last/>
            </label>
        </form>
    }
}

#[component]
fn EffectVsDerivedSignal() -> impl IntoView {
    let (my_value, set_my_value) = create_signal(String::new());
    // Don't do this.
    /*let (my_optional_value, set_optional_my_value) = create_signal(Option::<String>::None);

    create_effect(move |_| {
        if !my_value.get().is_empty() {
            set_optional_my_value(Some(my_value.get()));
        } else {
            set_optional_my_value(None);
        }
    });*/

    // Do this
    let my_optional_value =
        move || (!my_value.with(String::is_empty)).then(|| my_value.get());

    view! {
        <input prop:value=my_value on:input=move |ev| set_my_value(event_target_value(&ev))/>

        <p>
            <code>"my_optional_value"</code>
            " is "
            <code>
                <Show when=move || my_optional_value().is_some() fallback=|| view! { "None" }>
                    "Some(\""
                    {my_optional_value().unwrap()}
                    "\")"
                </Show>
            </code>
        </p>
    }
}

#[component]
pub fn Show<F, W, IV>(
    /// The components Show wraps
    children: Box<dyn Fn() -> Fragment>,
    /// A closure that returns a bool that determines whether this thing runs
    when: W,
    /// A closure that returns what gets rendered if the when statement is false
    fallback: F,
) -> impl IntoView
where
    W: Fn() -> bool + 'static,
    F: Fn() -> IV + 'static,
    IV: IntoView,
{
    let memoized_when = create_memo(move |_| when());

    move || match memoized_when.get() {
        true => children().into_view(),
        false => fallback().into_view(),
    }
}

fn log(msg: impl std::fmt::Display) {
    let log = use_context::<LogContext>().unwrap().0;
    log.update(|log| log.push(msg.to_string()));
}

fn main() {
    leptos::mount_to_body(App)
}

Interlude: Reactivity and Functions

One of our core contributors said to me recently: “I never used closures this often until I started using Leptos.” And it’s true. Closures are at the heart of any Leptos application. It sometimes looks a little silly:

// a signal holds a value, and can be updated
let (count, set_count) = create_signal(0);

// a derived signal is a function that accesses other signals
let double_count = move || count() * 2;
let count_is_odd = move || count() & 1 == 1;
let text = move || if count_is_odd() {
    "odd"
} else {
    "even"
};

// an effect automatically tracks the signals it depends on
// and reruns when they change
create_effect(move |_| {
    logging::log!("text = {}", text());
});

view! {
    <p>{move || text().to_uppercase()}</p>
}

Closures, closures everywhere!

But why?

Functions and UI Frameworks

Functions are at the heart of every UI framework. And this makes perfect sense. Creating a user interface is basically divided into two phases:

  1. initial rendering
  2. updates

In a web framework, the framework does some kind of initial rendering. Then it hands control back over to the browser. When certain events fire (like a mouse click) or asynchronous tasks finish (like an HTTP request finishing), the browser wakes the framework back up to update something. The framework runs some kind of code to update your user interface, and goes back to sleep until the browser wakes it up again.

The key phrase here is “runs some kind of code.” The natural way to “run some kind of code” at an arbitrary point in time—in Rust or in any other programming language—is to call a function. And in fact every UI framework is based on rerunning some kind of function over and over:

  1. virtual DOM (VDOM) frameworks like React, Yew, or Dioxus rerun a component or render function over and over, to generate a virtual DOM tree that can be reconciled with the previous result to patch the DOM
  2. compiled frameworks like Angular and Svelte divide your component templates into “create” and “update” functions, rerunning the update function when they detect a change to the component’s state
  3. in fine-grained reactive frameworks like SolidJS, Sycamore, or Leptos, you define the functions that rerun

That’s what all our components are doing.

Take our typical <SimpleCounter/> example in its simplest form:

#[component]
pub fn SimpleCounter() -> impl IntoView {
    let (value, set_value) = create_signal(0);

    let increment = move |_| set_value.update(|value| *value += 1);

    view! {
        <button on:click=increment>
            {value}
        </button>
    }
}

The SimpleCounter function itself runs once. The value signal is created once. The framework hands off the increment function to the browser as an event listener. When you click the button, the browser calls increment, which updates value via set_value. And that updates the single text node represented in our view by {value}.

Closures are key to reactivity. They provide the framework with the ability to rerun the smallest possible unit of your application in response to a change.

So remember two things:

  1. Your component function is a setup function, not a render function: it only runs once.
  2. For values in your view template to be reactive, they must be functions: either signals (which implement the Fn traits) or closures.

Testing Your Components

Testing user interfaces can be relatively tricky, but really important. This chapter will discuss a couple of principles and approaches for testing a Leptos app.

1. Test business logic with ordinary Rust tests

In many cases, it makes sense to pull the logic out of your components and test it separately. For some simple components, there’s no particular logic to test, but for many it’s worth using a testable wrapping type and implementing the logic in ordinary Rust impl blocks.

For example, instead of embedding logic in a component directly like this:

#[component]
pub fn TodoApp() -> impl IntoView {
    let (todos, set_todos) = create_signal(vec![Todo { /* ... */ }]);
    // ⚠️ this is hard to test because it's embedded in the component
    let num_remaining = move || todos.with(|todos| {
        todos.iter().filter(|todo| !todo.completed).count()
    });
}

You could pull that logic out into a separate data structure and test it:

pub struct Todos(Vec<Todo>);

impl Todos {
    pub fn num_remaining(&self) -> usize {
        self.0.iter().filter(|todo| !todo.completed).count()
    }
}

#[cfg(test)]
mod tests {
    #[test]
    fn test_remaining() {
        // ...
    }
}

#[component]
pub fn TodoApp() -> impl IntoView {
    let (todos, set_todos) = create_signal(Todos(vec![Todo { /* ... */ }]));
    // ✅ this has a test associated with it
    let num_remaining = move || todos.with(Todos::num_remaining);
}

In general, the less of your logic is wrapped into your components themselves, the more idiomatic your code will feel and the easier it will be to test.

2. Test components with end-to-end (e2e) testing

Our examples directory has several examples with extensive end-to-end testing, using different testing tools.

The easiest way to see how to use these is to take a look at the test examples themselves:

wasm-bindgen-test with counter

This is a fairly simple manual testing setup that uses the wasm-pack test command.

Sample Test

#[wasm_bindgen_test]
fn clear() {
    let document = leptos::document();
    let test_wrapper = document.create_element("section").unwrap();
    let _ = document.body().unwrap().append_child(&test_wrapper);

    mount_to(
        test_wrapper.clone().unchecked_into(),
        || view! { <SimpleCounter initial_value=10 step=1/> },
    );

    let div = test_wrapper.query_selector("div").unwrap().unwrap();
    let clear = test_wrapper
        .query_selector("button")
        .unwrap()
        .unwrap()
        .unchecked_into::<web_sys::HtmlElement>();

    clear.click();

    assert_eq!(
        div.outer_html(),
        // here we spawn a mini reactive system to render the test case
        run_scope(create_runtime(), || {
            // it's as if we're creating it with a value of 0, right?
            let (value, set_value) = create_signal(0);

            // we can remove the event listeners because they're not rendered to HTML
            view! {
                <div>
                    <button>"Clear"</button>
                    <button>"-1"</button>
                    <span>"Value: " {value} "!"</span>
                    <button>"+1"</button>
                </div>
            }
            // the view returned an HtmlElement<Div>, which is a smart pointer for
            // a DOM element. So we can still just call .outer_html()
            .outer_html()
        })
    );
}

wasm-bindgen-test with counters

This more developed test suite uses a system of fixtures to refactor the manual DOM manipulation of the counter tests and easily test a wide range of cases.

Sample Test

use super::*;
use crate::counters_page as ui;
use pretty_assertions::assert_eq;

#[wasm_bindgen_test]
fn should_increase_the_total_count() {
    // Given
    ui::view_counters();
    ui::add_counter();

    // When
    ui::increment_counter(1);
    ui::increment_counter(1);
    ui::increment_counter(1);

    // Then
    assert_eq!(ui::total(), 3);
}

Playwright with counters

These tests use the common JavaScript testing tool Playwright to run end-to-end tests on the same example, using a library and testing approach familiar to many who have done frontend development before.

Sample Test

import { test, expect } from "@playwright/test";
import { CountersPage } from "./fixtures/counters_page";

test.describe("Increment Count", () => {
  test("should increase the total count", async ({ page }) => {
    const ui = new CountersPage(page);
    await ui.goto();
    await ui.addCounter();

    await ui.incrementCount();
    await ui.incrementCount();
    await ui.incrementCount();

    await expect(ui.total).toHaveText("3");
  });
});

Gherkin/Cucumber Tests with todo_app_sqlite

You can integrate any testing tool you’d like into this flow. This example uses Cucumber, a testing framework based on natural language.

@add_todo
Feature: Add Todo

    Background:
        Given I see the app

    @add_todo-see
    Scenario: Should see the todo
        Given I set the todo as Buy Bread
        When I click the Add button
        Then I see the todo named Buy Bread

    # @allow.skipped
    @add_todo-style
    Scenario: Should see the pending todo
        When I add a todo as Buy Oranges
        Then I see the pending todo

The definitions for these actions are defined in Rust code.

use crate::fixtures::{action, world::AppWorld};
use anyhow::{Ok, Result};
use cucumber::{given, when};

#[given("I see the app")]
#[when("I open the app")]
async fn i_open_the_app(world: &mut AppWorld) -> Result<()> {
    let client = &world.client;
    action::goto_path(client, "").await?;

    Ok(())
}

#[given(regex = "^I add a todo as (.*)$")]
#[when(regex = "^I add a todo as (.*)$")]
async fn i_add_a_todo_titled(world: &mut AppWorld, text: String) -> Result<()> {
    let client = &world.client;
    action::add_todo(client, text.as_str()).await?;

    Ok(())
}

// etc.

Learning More

Feel free to check out the CI setup in the Leptos repo to learn more about how to use these tools in your own application. All of these testing methods are run regularly against actual Leptos example apps.

Working with async

So far we’ve only been working with synchronous user interfaces: you provide some input, the app immediately processes it and updates the interface. This is great, but it covers only a tiny subset of what web applications do. In particular, most web apps have to deal with some kind of asynchronous data loading, usually loading something from an API.

Asynchronous data is notoriously hard to integrate with the synchronous parts of your code. Leptos provides a cross-platform spawn_local function that makes it easy to run a Future, but there’s much more to it than that.
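
For example, here's a minimal sketch of kicking off a Future from an event handler with spawn_local (load_data here is just a stand-in for any async call that returns a String):

let (data, set_data) = create_signal(None::<String>);

let load = move |_| {
    // spawn_local runs the Future on the current thread
    // (in the browser, that means on the main thread)
    spawn_local(async move {
        let loaded = load_data().await;
        set_data(Some(loaded));
    });
};

view! {
    <button on:click=load>"Load"</button>
    <p>{move || data().unwrap_or_else(|| "Not loaded yet".into())}</p>
}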

In this chapter, we’ll see how Leptos helps smooth out that process for you.

Loading Data with Resources

A Resource is a reactive data structure that reflects the current state of an asynchronous task, allowing you to integrate asynchronous Futures into the synchronous reactive system. Rather than waiting for its data to load with .await, you transform the Future into a signal that returns Some(T) if it has resolved, and None if it’s still pending.

You do this by using the create_resource function. This takes two arguments:

  1. a source signal, which will generate a new Future whenever it changes
  2. a fetcher function, which takes the data from that signal and returns a Future

Here’s an example:

// our source signal: some synchronous, local state
let (count, set_count) = create_signal(0);

// our resource
let async_data = create_resource(
    count,
    // every time `count` changes, this will run
    |value| async move {
        logging::log!("loading data from API");
        load_data(value).await
    },
);

To create a resource that simply runs once, you can pass a non-reactive, empty source signal:

let once = create_resource(|| (), |_| async move { load_data().await });

To access the value you can use .get() or .with(|data| /* */). These work just like .get() and .with() on a signal—get clones the value and returns it, with applies a closure to it—but for any Resource<_, T>, they always return Option<T>, not T, because it’s always possible that your resource is still loading.

So, you can show the current state of a resource in your view:

let once = create_resource(|| (), |_| async move { load_data().await });
view! {
    <h1>"My Data"</h1>
    {move || match once.get() {
        None => view! { <p>"Loading..."</p> }.into_view(),
        Some(data) => view! { <ShowData data/> }.into_view()
    }}
}

Resources also provide a refetch() method that allows you to manually reload the data (for example, in response to a button click) and a loading() method that returns a ReadSignal<bool> indicating whether the resource is currently loading or not.
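
For example, a brief sketch reusing the once resource from the snippet above:

let once = create_resource(|| (), |_| async move { load_data().await });

// `loading()` hands us a ReadSignal<bool> we can read reactively in the view
let is_loading = once.loading();

view! {
    <p>{move || if is_loading() { "Loading..." } else { "Idle." }}</p>
    // `refetch()` reruns the fetcher manually, for example in response to a click
    <button on:click=move |_| once.refetch()>"Reload"</button>
}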

Live example

Click to open CodeSandbox.

CodeSandbox Source
use gloo_timers::future::TimeoutFuture;
use leptos::*;

// Here we define an async function
// This could be anything: a network request, database read, etc.
// Here, we just multiply a number by 10
async fn load_data(value: i32) -> i32 {
    // fake a one-second delay
    TimeoutFuture::new(1_000).await;
    value * 10
}

#[component]
fn App() -> impl IntoView {
    // this count is our synchronous, local state
    let (count, set_count) = create_signal(0);

    // create_resource takes two arguments after its scope
    let async_data = create_resource(
        // the first is the "source signal"
        count,
        // the second is the loader
        // it takes the source signal's value as its argument
        // and does some async work
        |value| async move { load_data(value).await },
    );
    // whenever the source signal changes, the loader reloads

    // you can also create resources that only load once
    // just return the unit type () from the source signal
    // that doesn't depend on anything: we just load it once
    let stable = create_resource(|| (), |_| async move { load_data(1).await });

    // we can access the resource values with .get()
    // this will reactively return None before the Future has resolved
    // and update to Some(T) when it has resolved
    let async_result = move || {
        async_data
            .get()
            .map(|value| format!("Server returned {value:?}"))
            // This loading state will only show before the first load
            .unwrap_or_else(|| "Loading...".into())
    };

    // the resource's loading() method gives us a
    // signal to indicate whether it's currently loading
    let loading = async_data.loading();
    let is_loading = move || if loading() { "Loading..." } else { "Idle." };

    view! {
        <button
            on:click=move |_| {
                set_count.update(|n| *n += 1);
            }
        >
            "Click me"
        </button>
        <p>
            <code>"stable"</code>": " {move || stable.get()}
        </p>
        <p>
            <code>"count"</code>": " {count}
        </p>
        <p>
            <code>"async_value"</code>": "
            {async_result}
            <br/>
            {is_loading}
        </p>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

<Suspense/>

In the previous chapter, we showed how you can create a simple loading screen to show some fallback while a resource is loading.

let (count, set_count) = create_signal(0);
let once = create_resource(count, |count| async move { load_a(count).await });

view! {
    <h1>"My Data"</h1>
    {move || match once.get() {
        None => view! { <p>"Loading..."</p> }.into_view(),
        Some(data) => view! { <ShowData data/> }.into_view()
    }}
}

But what if we have two resources, and want to wait for both of them?

let (count, set_count) = create_signal(0);
let (count2, set_count2) = create_signal(0);
let a = create_resource(count, |count| async move { load_a(count).await });
let b = create_resource(count2, |count| async move { load_b(count).await });

view! {
    <h1>"My Data"</h1>
    {move || match (a.get(), b.get()) {
        (Some(a), Some(b)) => view! {
            <ShowA a/>
            <ShowB b/>
        }.into_view(),
        _ => view! { <p>"Loading..."</p> }.into_view()
    }}
}

That’s not so bad, but it’s kind of annoying. What if we could invert the flow of control?

The <Suspense/> component lets us do exactly that. You give it a fallback prop and children, one or more of which usually involves reading from a resource. Reading from a resource “under” a <Suspense/> (i.e., in one of its children) registers that resource with the <Suspense/>. If it’s still waiting for resources to load, it shows the fallback. When they’ve all loaded, it shows the children.

let (count, set_count) = create_signal(0);
let (count2, set_count2) = create_signal(0);
let a = create_resource(count, |count| async move { load_a(count).await });
let b = create_resource(count2, |count| async move { load_b(count).await });

view! {
    <h1>"My Data"</h1>
    <Suspense
        fallback=move || view! { <p>"Loading..."</p> }
    >
        <h2>"My Data"</h2>
        <h3>"A"</h3>
        {move || {
            a.get()
                .map(|a| view! { <ShowA a/> })
        }}
        <h3>"B"</h3>
        {move || {
            b.get()
                .map(|b| view! { <ShowB b/> })
        }}
    </Suspense>
}

Every time one of the resources is reloading, the "Loading..." fallback will show again.

This inversion of the flow of control makes it easier to add or remove individual resources, as you don’t need to handle the matching yourself. It also unlocks some massive performance improvements during server-side rendering, which we’ll talk about during a later chapter.

<Await/>

If you’re simply trying to wait for some Future to resolve before rendering, you may find the <Await/> component helpful in reducing boilerplate. <Await/> essentially combines a resource whose source argument is || () with a <Suspense/> that has no fallback.

In other words:

  1. It only polls the Future once, and does not respond to any reactive changes.
  2. It does not render anything until the Future resolves.
  3. After the Future resolves, it binds its data to whatever variable name you choose and then renders its children with that variable in scope.

async fn fetch_monkeys(monkey: i32) -> i32 {
    // maybe this didn't need to be async
    monkey * 2
}
view! {
    <Await
        // `future` provides the `Future` to be resolved
        future=|| fetch_monkeys(3)
        // the data is bound to whatever variable name you provide
        let:data
    >
        // you receive the data by reference and can use it in your view here
        <p>{*data} " little monkeys, jumping on the bed."</p>
    </Await>
}

Live example

Click to open CodeSandbox.

CodeSandbox Source
use gloo_timers::future::TimeoutFuture;
use leptos::*;

async fn important_api_call(name: String) -> String {
    TimeoutFuture::new(1_000).await;
    name.to_ascii_uppercase()
}

#[component]
fn App() -> impl IntoView {
    let (name, set_name) = create_signal("Bill".to_string());

    // this will reload every time `name` changes
    let async_data = create_resource(
        // the source signal: the resource reloads whenever `name` changes
        name,
        // the loader: takes the source signal's value and does the async work
        |name| async move { important_api_call(name).await },
    );

    view! {
        <input
            on:input=move |ev| {
                set_name(event_target_value(&ev));
            }
            prop:value=name
        />
        <p><code>"name:"</code> {name}</p>
        <Suspense
            // the fallback will show whenever a resource
            // read "under" the suspense is loading
            fallback=move || view! { <p>"Loading..."</p> }
        >
            // the children will be rendered once initially,
            // and then again whenever any of the resources resolves
            <p>
                "Your shouting name is "
                {move || async_data.get()}
            </p>
        </Suspense>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

<Transition/>

You’ll notice in the <Suspense/> example that if you keep reloading the data, it keeps flickering back to "Loading...". Sometimes this is fine. For other times, there’s <Transition/>.

<Transition/> behaves exactly the same as <Suspense/>, but instead of falling back every time, it only shows the fallback the first time. On all subsequent loads, it continues showing the old data until the new data are ready. This can be really handy to prevent the flickering effect, and to allow users to continue interacting with your application.
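In the earlier two-resource example, this is just a drop-in swap of the component name (a sketch reusing a and b from above):

view! {
    <Transition
        fallback=move || view! { <p>"Loading..."</p> }
    >
        // after the first load, the old <ShowA/> and <ShowB/> stay on
        // screen until the new values of `a` and `b` are ready
        {move || a.get().map(|a| view! { <ShowA a/> })}
        {move || b.get().map(|b| view! { <ShowB b/> })}
    </Transition>
}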

This example shows how you can create a simple tabbed contact list with <Transition/>. When you select a new tab, it continues showing the current contact until the new data loads. This can be a much better user experience than constantly falling back to a loading message.

Live example

Click to open CodeSandbox.

CodeSandbox Source
use gloo_timers::future::TimeoutFuture;
use leptos::*;

async fn important_api_call(id: usize) -> String {
    TimeoutFuture::new(1_000).await;
    match id {
        0 => "Alice",
        1 => "Bob",
        2 => "Carol",
        _ => "User not found",
    }
    .to_string()
}

#[component]
fn App() -> impl IntoView {
    let (tab, set_tab) = create_signal(0);

    // this will reload every time `tab` changes
    let user_data = create_resource(tab, |tab| async move { important_api_call(tab).await });

    view! {
        <div class="buttons">
            <button
                on:click=move |_| set_tab(0)
                class:selected=move || tab() == 0
            >
                "Tab A"
            </button>
            <button
                on:click=move |_| set_tab(1)
                class:selected=move || tab() == 1
            >
                "Tab B"
            </button>
            <button
                on:click=move |_| set_tab(2)
                class:selected=move || tab() == 2
            >
                "Tab C"
            </button>
            {move || if user_data.loading().get() {
                "Loading..."
            } else {
                ""
            }}
        </div>
        <Transition
            // the fallback will show initially
            // on subsequent reloads, the current child will
            // continue showing
            fallback=move || view! { <p>"Loading..."</p> }
        >
            <p>
                {move || user_data.get()}
            </p>
        </Transition>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

Mutating Data with Actions

We’ve talked about how to load async data with resources. Resources immediately load data and work closely with <Suspense/> and <Transition/> components to show whether data is loading in your app. But what if you just want to call some arbitrary async function and keep track of what it’s doing?

Well, you could always use spawn_local. This allows you to just spawn an async task in a synchronous environment by handing the Future off to the browser (or, on the server, Tokio or whatever other runtime you’re using). But how do you know if it’s still pending? Well, you could just set a signal to show whether it’s loading, and another one to show the result...
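That hand-rolled version might look something like the sketch below, where save_data stands in for any hypothetical async call:

// signals we maintain by hand around `spawn_local`
let (pending, set_pending) = create_signal(false);
let (result, set_result) = create_signal(None::<String>);

let save = move |input: String| {
    set_pending(true);
    spawn_local(async move {
        // `save_data` is hypothetical: any async call would do here
        let value = save_data(input).await;
        set_result(Some(value));
        set_pending(false);
    });
};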

All of this is true. Or you could use the final async primitive: create_action.

Actions and resources seem similar, but they represent fundamentally different things. If you’re trying to load data by running an async function, either once or when some other value changes, you probably want to use create_resource. If you’re trying to occasionally run an async function in response to something like a user clicking a button, you probably want to use create_action.

Say we have some async function we want to run.

async fn add_todo_request(new_title: &str) -> Uuid {
    /* do some stuff on the server to add a new todo */
}

create_action takes an async function that takes a reference to a single argument, which you could think of as its “input type.”

The input is always a single type. If you want to pass in multiple arguments, you can do it with a struct or tuple.

// if there's a single argument, just use that
let action1 = create_action(|input: &String| {
   let input = input.clone();
   async move { todo!() }
});

// if there are no arguments, use the unit type `()`
let action2 = create_action(|input: &()| async { todo!() });

// if there are multiple arguments, use a tuple
let action3 = create_action(
  |input: &(usize, String)| async { todo!() }
);

Because the action function takes a reference but the Future needs to have a 'static lifetime, you’ll usually need to clone the value to pass it into the Future. This is admittedly awkward but it unlocks some powerful features like optimistic UI. We’ll see a little more about that in future chapters.

So in this case, all we need to do to create an action is

let add_todo_action = create_action(|input: &String| {
    let input = input.to_owned();
    async move { add_todo_request(&input).await }
});

Rather than calling add_todo_action directly, we’ll call it with .dispatch(), as in

add_todo_action.dispatch("Some value".to_string());

You can do this from an event listener, a timeout, or anywhere; because .dispatch() isn’t an async function, it can be called from a synchronous context.

Actions provide access to a few signals that synchronize between the asynchronous action you’re calling and the synchronous reactive system:

let submitted = add_todo_action.input(); // RwSignal<Option<String>>
let pending = add_todo_action.pending(); // ReadSignal<bool>
let todo_id = add_todo_action.value(); // RwSignal<Option<Uuid>>

This makes it easy to track the current state of your request, show a loading indicator, or do “optimistic UI” based on the assumption that the submission will succeed.

let input_ref = create_node_ref::<Input>();

view! {
    <form
        on:submit=move |ev| {
            ev.prevent_default(); // don't reload the page...
            let input = input_ref.get().expect("input to exist");
            add_todo_action.dispatch(input.value());
        }
    >
        <label>
            "What do you need to do?"
            <input type="text"
                node_ref=input_ref
            />
        </label>
        <button type="submit">"Add Todo"</button>
    </form>
    // use our loading state
    <p>{move || pending().then(|| "Loading...")}</p>
}

Now, there’s a chance this all seems a little over-complicated, or maybe too restricted. I wanted to include actions here, alongside resources, as the missing piece of the puzzle. In a real Leptos app, you’ll actually most often use actions alongside server functions, create_server_action, and the <ActionForm/> component to create really powerful progressively-enhanced forms. So if this primitive seems useless to you... Don’t worry! Maybe it will make sense later. (Or check out our todo_app_sqlite example now.)

Live example

Click to open CodeSandbox.

CodeSandbox Source
use gloo_timers::future::TimeoutFuture;
use leptos::{html::Input, *};
use uuid::Uuid;

// Here we define an async function
// This could be anything: a network request, database read, etc.
// Think of it as a mutation: some imperative async action you run,
// whereas a resource would be some async data you load
async fn add_todo(text: &str) -> Uuid {
    _ = text;
    // fake a one-second delay
    TimeoutFuture::new(1_000).await;
    // pretend this is a post ID or something
    Uuid::new_v4()
}

#[component]
fn App() -> impl IntoView {
    // an action takes an async function with single argument
    // it can be a simple type, a struct, or ()
    let add_todo = create_action(|input: &String| {
        // the input is a reference, but we need the Future to own it
        // this is important: we need to clone and move into the Future
        // so it has a 'static lifetime
        let input = input.to_owned();
        async move { add_todo(&input).await }
    });

    // actions provide a bunch of synchronous, reactive variables
    // that tell us different things about the state of the action
    let submitted = add_todo.input();
    let pending = add_todo.pending();
    let todo_id = add_todo.value();

    let input_ref = create_node_ref::<Input>();

    view! {
        <form
            on:submit=move |ev| {
                ev.prevent_default(); // don't reload the page...
                let input = input_ref.get().expect("input to exist");
                add_todo.dispatch(input.value());
            }
        >
            <label>
                "What do you need to do?"
                <input type="text"
                    node_ref=input_ref
                />
            </label>
            <button type="submit">"Add Todo"</button>
        </form>
        <p>{move || pending().then(|| "Loading...")}</p>
        <p>
            "Submitted: "
            <code>{move || format!("{:#?}", submitted())}</code>
        </p>
        <p>
            "Pending: "
            <code>{move || format!("{:#?}", pending())}</code>
        </p>
        <p>
            "Todo ID: "
            <code>{move || format!("{:#?}", todo_id())}</code>
        </p>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

Projecting Children

As you build components you may occasionally find yourself wanting to “project” children through multiple layers of components.

The Problem

Consider the following:

#[component]
pub fn LoggedIn<F, IV>(fallback: F, children: ChildrenFn) -> impl IntoView
where
    F: Fn() -> IV + 'static,
    IV: IntoView,
{
    view! {
        <Suspense
            fallback=|| ()
        >
            <Show
                // check whether user is verified
                // by reading from the resource
                when=move || todo!()
                fallback=fallback
            >
                {children()}
            </Show>
        </Suspense>
    }
}

This is pretty straightforward: when the user is logged in, we want to show children. If the user is not logged in, we want to show fallback. And while we’re waiting to find out, we just render (), i.e., nothing.

In other words, we want to pass the children of <LoggedIn/> through the <Suspense/> component to become the children of the <Show/>. This is what I mean by “projection.”

This won’t compile.

error[E0507]: cannot move out of `fallback`, a captured variable in an `Fn` closure
error[E0507]: cannot move out of `children`, a captured variable in an `Fn` closure

The problem here is that both <Suspense/> and <Show/> need to be able to construct their children multiple times. The first time you construct <Suspense/>’s children, it would take ownership of fallback and children to move them into the invocation of <Show/>, but then they're not available for future <Suspense/> children construction.

The Details

Feel free to skip ahead to the solution.

If you want to really understand the issue here, it may help to look at the expanded view macro. Here’s a cleaned-up version:

Suspense(
    ::leptos::component_props_builder(&Suspense)
        .fallback(|| ())
        .children({
            // fallback and children are moved into this closure
            Box::new(move || {
                {
                    // fallback and children captured here
                    leptos::Fragment::lazy(|| {
                        vec![
                            (Show(
                                ::leptos::component_props_builder(&Show)
                                    .when(|| true)
                                    // but fallback is moved into Show here
                                    .fallback(fallback)
                                    // and children is moved into Show here
                                    .children(children)
                                    .build(),
                            )
                            .into_view()),
                        ]
                    })
                }
            })
        })
        .build(),
)

All components own their props; so the <Show/> in this case can’t be called because it only has captured references to fallback and children.

Solution

However, both <Suspense/> and <Show/> take ChildrenFn, i.e., their children should implement the Fn type so they can be called multiple times with only an immutable reference. This means we don’t need to own children or fallback; we just need to be able to pass 'static references to them.

We can solve this problem by using the store_value primitive. This essentially stores a value in the reactive system, handing ownership off to the framework in exchange for a reference that is, like signals, Copy and 'static, which we can access or modify through certain methods.
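As a rough sketch of that API (separate from the LoggedIn example), a StoredValue is Copy and can be read by reference from several closures at once:

// hand ownership of the String to the reactive system
let greeting = store_value("Hello!".to_string());

// `StoredValue` is Copy, so both closures can capture it
let length = move || greeting.with_value(|s| s.len());
let shouted = move || greeting.with_value(|s| s.to_uppercase());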

In this case, it’s really simple:

#[component]
pub fn LoggedIn<F, IV>(fallback: F, children: ChildrenFn) -> impl IntoView
where
    F: Fn() -> IV + 'static,
    IV: IntoView,
{
    let fallback = store_value(fallback);
    let children = store_value(children);
    view! {
        <Suspense
            fallback=|| ()
        >
            <Show
                when=|| todo!()
                fallback=move || fallback.with_value(|fallback| fallback())
            >
                {children.with_value(|children| children())}
            </Show>
        </Suspense>
    }
}

At the top level, we store both fallback and children in the reactive scope owned by LoggedIn. Now we can simply move those references down through the other layers into the <Show/> component and call them there.

A Final Note

Note that this works because <Show/> and <Suspense/> only need an immutable reference to their children (which .with_value can provide), not ownership.

In other cases, you may need to project owned props through a function that takes ChildrenFn and therefore needs to be called more than once. In this case, you may find the clone: helper in the view macro helpful.

Consider this example:

#[component]
pub fn App() -> impl IntoView {
    let name = "Alice".to_string();
    view! {
        <Outer>
            <Inner>
                <Inmost name=name.clone()/>
            </Inner>
        </Outer>
    }
}

#[component]
pub fn Outer(children: ChildrenFn) -> impl IntoView {
    children()
}

#[component]
pub fn Inner(children: ChildrenFn) -> impl IntoView {
    children()
}

#[component]
pub fn Inmost(name: String) -> impl IntoView {
    view! {
        <p>{name}</p>
    }
}

Even with name=name.clone(), this gives the error

cannot move out of `name`, a captured variable in an `Fn` closure

It’s captured through multiple levels of children that need to run more than once, and there’s no obvious way to clone it into the children.

In this case, the clone: syntax comes in handy. Calling clone:name will clone name before moving it into <Inner/>’s children, which solves our ownership issue.

view! {
    <Outer>
        <Inner clone:name>
            <Inmost name=name.clone()/>
        </Inner>
    </Outer>
}

These issues can be a little tricky to understand or debug, because of the opacity of the view macro. But in general, they can always be solved.

Global State Management

So far, we’ve only been working with local state in components, and we’ve seen how to coordinate state between parent and child components. But sometimes people look for a more general solution for global state management that can work throughout an application.

In general, you do not need this chapter. The typical pattern is to compose your application out of components, each of which manages its own local state, not to store all state in a global structure. However, there are some cases (like theming, saving user settings, or sharing data between components in different parts of your UI) in which you may want to use some kind of global state management.

The three best approaches to global state are

  1. Using the router to drive global state via the URL
  2. Passing signals through context
  3. Creating a global state struct and creating lenses into it with create_slice

Option #1: URL as Global State

In many ways, the URL is actually the best way to store global state. It can be accessed from any component, anywhere in your tree. There are native HTML elements like <form> and <a> that exist solely to update the URL. And it persists across page reloads and between devices; you can share a URL with a friend or send it from your phone to your laptop and any state stored in it will be replicated.

The next few sections of the tutorial will be about the router, and we’ll get much more into these topics.

But for now, we'll just look at options #2 and #3.

Option #2: Passing Signals through Context

In the section on parent-child communication, we saw that you can use provide_context to pass a signal from a parent component to a child, and use_context to read it in the child. But provide_context works across any distance. If you want to create a global signal that holds some piece of state, you can provide it and access it via context anywhere in the descendants of the component where you provide it.

A signal provided via context only causes reactive updates where it is read, not in any of the components in between, so it maintains the power of fine-grained reactive updates, even at a distance.

We start by creating a signal in the root of the app and providing it to all its children and descendants using provide_context.

#[component]
fn App() -> impl IntoView {
    // here we create a signal in the root that can be consumed
    // anywhere in the app.
    let (count, set_count) = create_signal(0);
    // we'll pass the setter to specific components,
    // but provide the count itself to the whole app via context
    provide_context(count);

    view! {
        // SetterButton is allowed to modify the count
        <SetterButton set_count/>
        // These consumers can only read from it
        // But we could give them write access by passing `set_count` if we wanted
        <FancyMath/>
        <ListItems/>
    }
}

<SetterButton/> is the kind of counter we’ve written several times now. (See the sandbox below if you don’t understand what I mean.)

<FancyMath/> and <ListItems/> both consume the signal we’re providing via use_context and do something with it.

/// A component that does some "fancy" math with the global count
#[component]
fn FancyMath() -> impl IntoView {
    // here we consume the global count signal with `use_context`
    let count = use_context::<ReadSignal<u32>>()
        // we know we just provided this in the parent component
        .expect("there to be a `count` signal provided");
    let is_even = move || count() & 1 == 0;

    view! {
        <div class="consumer blue">
            "The number "
            <strong>{count}</strong>
            {move || if is_even() {
                " is"
            } else {
                " is not"
            }}
            " even."
        </div>
    }
}

Note that this same pattern can be applied to more complex state. If you have multiple fields you want to update independently, you can do that by providing some struct of signals:

#[derive(Copy, Clone, Debug)]
struct GlobalState {
    count: RwSignal<i32>,
    name: RwSignal<String>
}

impl GlobalState {
    pub fn new() -> Self {
        Self {
            count: create_rw_signal(0),
            name: create_rw_signal("Bob".to_string())
        }
    }
}

#[component]
fn App() -> impl IntoView {
    provide_context(GlobalState::new());

    // etc.
}
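Any component below <App/> can then take the struct out of context and use its fields independently; a minimal sketch (the Counter component here is just for illustration):

#[component]
fn Counter() -> impl IntoView {
    // `GlobalState` is Copy, so we can take it out of context by value
    let state = expect_context::<GlobalState>();

    view! {
        // updating `count` does not notify anything that only reads `name`
        <button on:click=move |_| state.count.update(|n| *n += 1)>
            "+1"
        </button>
        <p>{move || state.name.get()}</p>
    }
}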

Option #3: Create a Global State Struct and Slices

You may find it cumbersome to wrap each field of a structure in a separate signal like this. In some cases, it can be useful to create a plain struct with non-reactive fields, and then wrap that in a signal.

#[derive(Clone, Debug, Default)]
struct GlobalState {
    count: i32,
    name: String
}

#[component]
fn App() -> impl IntoView {
    provide_context(create_rw_signal(GlobalState::default()));

    // etc.
}

But there’s a problem: because our whole state is wrapped in one signal, updating the value of one field will cause reactive updates in parts of the UI that only depend on the other.

let state = expect_context::<RwSignal<GlobalState>>();
view! {
    <button on:click=move |_| state.update(|state| state.count += 1)>"+1"</button>
    <p>{move || state.with(|state| state.name.clone())}</p>
}

In this example, clicking the button will cause the text inside <p> to be updated, cloning state.name again! Because signals are the atomic unit of reactivity, updating any field of the signal triggers updates to everything that depends on the signal.

There’s a better way. You can take fine-grained, reactive slices by using create_memo or create_slice (which uses create_memo but also provides a setter). “Memoizing” a value means creating a new reactive value which will only update when it changes. “Memoizing a slice” means creating a new reactive value which will only update when some field of the state struct updates.
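For a read-only slice, create_memo alone is enough; a sketch reusing the RwSignal<GlobalState> from above:

let state = expect_context::<RwSignal<GlobalState>>();

// this memo reruns whenever `state` changes, but only notifies its
// subscribers when the computed `name` value actually changes
let name = create_memo(move |_| state.with(|state| state.name.clone()));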

Here, instead of reading from the state signal directly, we create “slices” of that state with fine-grained updates via create_slice. Each slice signal only updates when the particular piece of the larger struct it accesses updates. This means you can create a single root signal, and then take independent, fine-grained slices of it in different components, each of which can update without notifying the others of changes.

/// A component that updates the count in the global state.
#[component]
fn GlobalStateCounter() -> impl IntoView {
    let state = expect_context::<RwSignal<GlobalState>>();

    // `create_slice` lets us create a "lens" into the data
    let (count, set_count) = create_slice(
        // we take a slice *from* `state`
        state,
        // our getter returns a "slice" of the data
        |state| state.count,
        // our setter describes how to mutate that slice, given a new value
        |state, n| state.count = n,
    );

    view! {
        <div class="consumer blue">
            <button
                on:click=move |_| {
                    set_count(count() + 1);
                }
            >
                "Increment Global Count"
            </button>
            <br/>
            <span>"Count is: " {count}</span>
        </div>
    }
}

Clicking this button only updates state.count, so if we create another slice somewhere else that only takes state.name, clicking the button won’t cause that other slice to update. This allows you to combine the benefits of a top-down data flow and of fine-grained reactive updates.

Note: There are some significant drawbacks to this approach. Both signals and memos need to own their values, so a memo will need to clone the field’s value on every change. The most natural way to manage state in a framework like Leptos is always to provide signals that are as locally-scoped and fine-grained as they can be, not to hoist everything up into global state. But when you do need some kind of global state, create_slice can be a useful tool.

Live example

Click to open CodeSandbox.

CodeSandbox Source
use leptos::*;

// So far, we've only been working with local state in components
// We've only seen how to communicate between parent and child components
// But there are also more general ways to manage global state
//
// The three best approaches to global state are
// 1. Using the router to drive global state via the URL
// 2. Passing signals through context
// 3. Creating a global state struct and creating lenses into it with `create_slice`
//
// Option #1: URL as Global State
// The next few sections of the tutorial will be about the router.
// So for now, we'll just look at options #2 and #3.

// Option #2: Pass Signals through Context
//
// In virtual DOM libraries like React, using the Context API to manage global
// state is a bad idea: because the entire app exists in a tree, changing
// some value provided high up in the tree can cause the whole app to render.
//
// In fine-grained reactive libraries like Leptos, this is simply not the case.
// You can create a signal in the root of your app and pass it down to other
// components using provide_context(). Changing it will only cause rerendering
// in the specific places it is actually used, not the whole app.
#[component]
fn Option2() -> impl IntoView {
    // here we create a signal in the root that can be consumed
    // anywhere in the app.
    let (count, set_count) = create_signal(0);
    // we'll pass the setter to specific components,
    // but provide the count itself to the whole app via context
    provide_context(count);

    view! {
        <h1>"Option 2: Passing Signals"</h1>
        // SetterButton is allowed to modify the count
        <SetterButton set_count/>
        // These consumers can only read from it
        // But we could give them write access by passing `set_count` if we wanted
        <div style="display: flex">
            <FancyMath/>
            <ListItems/>
        </div>
    }
}

/// A button that increments our global counter.
#[component]
fn SetterButton(set_count: WriteSignal<u32>) -> impl IntoView {
    view! {
        <div class="provider red">
            <button on:click=move |_| set_count.update(|count| *count += 1)>
                "Increment Global Count"
            </button>
        </div>
    }
}

/// A component that does some "fancy" math with the global count
#[component]
fn FancyMath() -> impl IntoView {
    // here we consume the global count signal with `use_context`
    let count = use_context::<ReadSignal<u32>>()
        // we know we just provided this in the parent component
        .expect("there to be a `count` signal provided");
    let is_even = move || count() & 1 == 0;

    view! {
        <div class="consumer blue">
            "The number "
            <strong>{count}</strong>
            {move || if is_even() {
                " is"
            } else {
                " is not"
            }}
            " even."
        </div>
    }
}

/// A component that shows a list of items generated from the global count.
#[component]
fn ListItems() -> impl IntoView {
    // again, consume the global count signal with `use_context`
    let count = use_context::<ReadSignal<u32>>().expect("there to be a `count` signal provided");

    let squares = move || {
        (0..count())
            .map(|n| view! { <li>{n}<sup>"2"</sup> " is " {n * n}</li> })
            .collect::<Vec<_>>()
    };

    view! {
        <div class="consumer green">
            <ul>{squares}</ul>
        </div>
    }
}

// Option #3: Create a Global State Struct
//
// You can use this approach to build a single global data structure
// that holds the state for your whole app, and then access it by
// taking fine-grained slices using `create_slice` or `create_memo`,
// so that changing one part of the state doesn't cause parts of your
// app that depend on other parts of the state to change.

#[derive(Default, Clone, Debug)]
struct GlobalState {
    count: u32,
    name: String,
}

#[component]
fn Option3() -> impl IntoView {
    // we'll provide a single signal that holds the whole state
    // each component will be responsible for creating its own "lens" into it
    let state = create_rw_signal(GlobalState::default());
    provide_context(state);

    view! {
        <h1>"Option 3: Passing Signals"</h1>
        <div class="red consumer" style="width: 100%">
            <h2>"Current Global State"</h2>
            <pre>
                {move || {
                    format!("{:#?}", state.get())
                }}
            </pre>
        </div>
        <div style="display: flex">
            <GlobalStateCounter/>
            <GlobalStateInput/>
        </div>
    }
}

/// A component that updates the count in the global state.
#[component]
fn GlobalStateCounter() -> impl IntoView {
    let state = use_context::<RwSignal<GlobalState>>().expect("state to have been provided");

    // `create_slice` lets us create a "lens" into the data
    let (count, set_count) = create_slice(
        // we take a slice *from* `state`
        state,
        // our getter returns a "slice" of the data
        |state| state.count,
        // our setter describes how to mutate that slice, given a new value
        |state, n| state.count = n,
    );

    view! {
        <div class="consumer blue">
            <button
                on:click=move |_| {
                    set_count(count() + 1);
                }
            >
                "Increment Global Count"
            </button>
            <br/>
            <span>"Count is: " {count}</span>
        </div>
    }
}

/// A component that updates the name in the global state.
#[component]
fn GlobalStateInput() -> impl IntoView {
    let state = use_context::<RwSignal<GlobalState>>().expect("state to have been provided");

    // this slice is completely independent of the `count` slice
    // that we created in the other component
    // neither of them will cause the other to rerun
    let (name, set_name) = create_slice(
        // we take a slice *from* `state`
        state,
        // our getter returns a "slice" of the data
        |state| state.name.clone(),
        // our setter describes how to mutate that slice, given a new value
        |state, n| state.name = n,
    );

    view! {
        <div class="consumer green">
            <input
                type="text"
                prop:value=name
                on:input=move |ev| {
                    set_name(event_target_value(&ev));
                }
            />
            <br/>
            <span>"Name is: " {name}</span>
        </div>
    }
}
// This `main` function is the entry point into the app
// It just mounts our component to the <body>
// Because we defined it as `fn App`, we can now use it in a
// template as <App/>
fn main() {
    leptos::mount_to_body(|| view! { <Option2/><Option3/> })
}

Routing

The Basics

Routing drives most websites. A router is the answer to the question, “Given this URL, what should appear on the page?”

A URL consists of many parts. For example, the URL https://my-cool-blog.com/blog/search?q=Search#results consists of

  • a scheme: https
  • a domain: my-cool-blog.com
  • a path: /blog/search
  • a query (or search): ?q=Search
  • a hash: #results

The Leptos Router works with the path and query (/blog/search?q=Search). Given this piece of the URL, what should the app render on the page?

The Philosophy

In most cases, the path should drive what is displayed on the page. From the user’s perspective, for most applications, most major changes in the state of the app should be reflected in the URL. If you copy and paste the URL and open it in another tab, you should find yourself more or less in the same place.

In this sense, the router is really at the heart of the global state management for your application. More than anything else, it drives what is displayed on the page.

The router handles most of this work for you by mapping the current location to particular components.

Defining Routes

Getting Started

It’s easy to get started with the router.

First things first, make sure you’ve added the leptos_router package to your dependencies. Like leptos, the router relies on activating a csr, hydrate, or ssr feature. For example, if you’re adding the router to a client-side rendered app, you’ll want to run

cargo add leptos_router --features=csr 

It’s important that the router is a separate package from leptos itself. This means that everything in the router can be defined in user-land code. If you want to create your own router, or use no router, you’re completely free to do that!

And import the relevant types from the router, either with something like

use leptos_router::{Route, RouteProps, Router, RouterProps, Routes, RoutesProps};

or simply

use leptos_router::*;

Providing the <Router/>

Routing behavior is provided by the <Router/> component. This should usually be somewhere near the root of your application, wrapping the rest of the app.

You shouldn’t try to use multiple <Router/>s in your app. Remember that the router drives global state: if you have multiple routers, which one decides what to do when the URL changes?

Let’s start with a simple <App/> component using the router:

use leptos::*;
use leptos_router::*;

#[component]
pub fn App() -> impl IntoView {
  view! {
    <Router>
      <nav>
        /* ... */
      </nav>
      <main>
        /* ... */
      </main>
    </Router>
  }
}

Defining <Routes/>

The <Routes/> component is where you define all the routes to which a user can navigate in your application. Each possible route is defined by a <Route/> component.

You should place the <Routes/> component at the location within your app where you want routes to be rendered. Everything outside <Routes/> will be present on every page, so you can leave things like a navigation bar or menu outside the <Routes/>.

use leptos::*;
use leptos_router::*;

#[component]
pub fn App() -> impl IntoView {
  view! {
    <Router>
      <nav>
        /* ... */
      </nav>
      <main>
        // all our routes will appear inside <main>
        <Routes>
          /* ... */
        </Routes>
      </main>
    </Router>
  }
}

Individual routes are defined by providing children to <Routes/> with the <Route/> component. <Route/> takes a path and a view. When the current location matches path, the view will be created and displayed.

The path can include

  • a static path (/users),
  • dynamic, named parameters beginning with a colon (/:id),
  • and/or a wildcard beginning with an asterisk (/user/*any)

The view is a function that returns a view. Any component with no props works here, as does a closure that returns some view.

<Routes>
  <Route path="/" view=Home/>
  <Route path="/users" view=Users/>
  <Route path="/users/:id" view=UserProfile/>
  <Route path="/*any" view=|| view! { <h1>"Not Found"</h1> }/>
</Routes>

view takes a Fn() -> impl IntoView. If a component has no props, it can be passed directly into the view. In this case, view=Home is just a shorthand for || view! { <Home/> }.
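If a component does take props, wrap it in a closure instead (the ShowTodo component and its id prop here are hypothetical):

<Routes>
  // no props: pass the component directly
  <Route path="/" view=Home/>
  // with props: use a closure returning a view
  // (`ShowTodo` and its `id` prop are hypothetical)
  <Route path="/latest" view=|| view! { <ShowTodo id=42/> }/>
</Routes>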

Now if you navigate to / or to /users you’ll get the home page or the <Users/>. If you go to /users/3 or /blahblah you’ll get a user profile or your 404 page (<NotFound/>). On every navigation, the router determines which <Route/> should be matched, and therefore what content should be displayed where the <Routes/> component is defined.

Note that you can define your routes in any order. The router scores each route to see how good a match it is, rather than simply trying to match them top to bottom.

Simple enough?

Conditional Routes

leptos_router is based on the assumption that you have one and only one <Routes/> component in your app. It uses this to generate routes on the server side, optimize route matching by caching calculated branches, and render your application.

You should not conditionally render <Routes/> using another component like <Show/> or <Suspense/>.

// ❌ don't do this!
view! {
  <Show when=|| is_loaded() fallback=|| view! { <p>"Loading"</p> }>
    <Routes>
      <Route path="/" view=Home/>
    </Routes>
  </Show>
}

Instead, you can use nested routing to render your <Routes/> once, and conditionally render the router outlet:

// ✅ do this instead!
view! {
  <Routes>
    // parent route
    <Route path="/" view=move || {
      view! {
        // only show the outlet if data have loaded
        <Show when=|| is_loaded() fallback=|| view! { <p>"Loading"</p> }>
          <Outlet/>
        </Show>
      }
    }>
      // nested child route
      <Route path="/" view=Home/>
    </Route>
  </Routes>
}

If this looks bizarre, don’t worry! The next section of the book is about this kind of nested routing.

Nested Routing

We just defined the following set of routes:

<Routes>
  <Route path="/" view=Home/>
  <Route path="/users" view=Users/>
  <Route path="/users/:id" view=UserProfile/>
  <Route path="/*any" view=NotFound/>
</Routes>

There’s a certain amount of duplication here: /users and /users/:id. This is fine for a small app, but you can probably already tell it won’t scale well. Wouldn’t it be nice if we could nest these routes?

Well... you can!

<Routes>
  <Route path="/" view=Home/>
  <Route path="/users" view=Users>
    <Route path=":id" view=UserProfile/>
  </Route>
  <Route path="/*any" view=NotFound/>
</Routes>

But wait. We’ve just subtly changed what our application does.

The next section is one of the most important in this entire routing section of the guide. Read it carefully, and feel free to ask questions if there’s anything you don’t understand.

Nested Routes as Layout

Nested routes are a form of layout, not a method of route definition.

Let me put that another way: The goal of defining nested routes is not primarily to avoid repeating yourself when typing out the paths in your route definitions. It is actually to tell the router to display multiple <Route/>s on the page at the same time, side by side.

Let’s look back at our practical example.

<Routes>
  <Route path="/users" view=Users/>
  <Route path="/users/:id" view=UserProfile/>
</Routes>

This means:

  • If I go to /users, I get the <Users/> component.
  • If I go to /users/3, I get the <UserProfile/> component (with the parameter id set to 3; more on that later)

Let’s say I use nested routes instead:

<Routes>
  <Route path="/users" view=Users>
    <Route path=":id" view=UserProfile/>
  </Route>
</Routes>

This means:

  • If I go to /users/3, the path matches two <Route/>s: <Users/> and <UserProfile/>.
  • If I go to /users, the path is not matched.

I actually need to add a fallback route

<Routes>
  <Route path="/users" view=Users>
    <Route path=":id" view=UserProfile/>
    <Route path="" view=NoUser/>
  </Route>
</Routes>

Now:

  • If I go to /users/3, the path matches <Users/> and <UserProfile/>.
  • If I go to /users, the path matches <Users/> and <NoUser/>.

When I use nested routes, in other words, each path can match multiple routes: each URL can render the views provided by multiple <Route/> components, at the same time, on the same page.

This may be counter-intuitive, but it’s very powerful, for reasons you’ll hopefully see in a few minutes.

Why Nested Routing?

Why bother with this?

Most web applications contain levels of navigation that correspond to different parts of the layout. For example, in an email app you might have a URL like /contacts/greg, which shows a list of contacts on the left of the screen, and contact details for Greg on the right of the screen. The contact list and the contact details should always appear on the screen at the same time. If there’s no contact selected, maybe you want to show a little instructional text.

You can easily define this with nested routes

<Routes>
  <Route path="/contacts" view=ContactList>
    <Route path=":id" view=ContactInfo/>
    <Route path="" view=|| view! {
      <p>"Select a contact to view more info."</p>
    }/>
  </Route>
</Routes>

You can go even deeper. Say you want to have tabs for each contact’s address, email/phone, and your conversations with them. You can add another set of nested routes inside :id:

<Routes>
  <Route path="/contacts" view=ContactList>
    <Route path=":id" view=ContactInfo>
      <Route path="" view=EmailAndPhone/>
      <Route path="address" view=Address/>
      <Route path="messages" view=Messages/>
    </Route>
    <Route path="" view=|| view! {
      <p>"Select a contact to view more info."</p>
    }/>
  </Route>
</Routes>

The main page of the Remix website, a React framework from the creators of React Router, has a great visual example if you scroll down, with three levels of nested routing: Sales > Invoices > an invoice.

<Outlet/>

Parent routes do not automatically render their nested routes. After all, they are just components; they don’t know exactly where they should render their children, and “just stick it at the end of the parent component” is not a great answer.

Instead, you tell a parent component where to render any nested components with an <Outlet/> component. The <Outlet/> simply renders one of two things:

  • if there is no nested route that has been matched, it shows nothing
  • if there is a nested route that has been matched, it shows its view

That’s all! But it’s important to know and to remember, because it’s a common source of “Why isn’t this working?” frustration. If you don’t provide an <Outlet/>, the nested route won’t be displayed.

#[component]
pub fn ContactList() -> impl IntoView {
  let contacts = todo!();

  view! {
    <div style="display: flex">
      // the contact list
      <For each=contacts
        key=|contact| contact.id
        children=|contact| todo!()
      />
      // the nested child, if any
      // don’t forget this!
      <Outlet/>
    </div>
  }
}

Refactoring Route Definitions

You don’t need to define all your routes in one place if you don’t want to. You can refactor any <Route/> and its children out into a separate component.

For example, you can refactor the example above to use two separate components:

#[component]
fn App() -> impl IntoView {
  view! {
    <Router>
      <Routes>
        <Route path="/contacts" view=ContactList>
          <ContactInfoRoutes/>
          <Route path="" view=|| view! {
            <p>"Select a contact to view more info."</p>
          }/>
        </Route>
      </Routes>
    </Router>
  }
}

#[component(transparent)]
fn ContactInfoRoutes() -> impl IntoView {
  view! {
    <Route path=":id" view=ContactInfo>
      <Route path="" view=EmailAndPhone/>
      <Route path="address" view=Address/>
      <Route path="messages" view=Messages/>
    </Route>
  }
}

This second component is a #[component(transparent)], meaning it just returns its data, not a view: in this case, it's a RouteDefinition struct, which is what the <Route/> returns. As long as it is marked #[component(transparent)], this sub-route can be defined wherever you want, and inserted as a component into your tree of route definitions.

Nested Routing and Performance

All of this is nice, conceptually, but again—what’s the big deal?

Performance.

In a fine-grained reactive library like Leptos, it’s always important to do the least amount of rendering work you can. Because we’re working with real DOM nodes and not diffing a virtual DOM, we want to “rerender” components as infrequently as possible. Nested routing makes this extremely easy.

Imagine my contact list example. If I navigate from Greg to Alice to Bob and back to Greg, the contact information needs to change on each navigation. But the <ContactList/> should never be rerendered. Not only does this save on rendering performance, it also maintains state in the UI. For example, if I have a search bar at the top of <ContactList/>, navigating from Greg to Alice to Bob won’t clear the search.

In fact, in this case, we don’t even need to rerender the <Contact/> component when moving between contacts. The router will just reactively update the :id parameter as we navigate, allowing us to make fine-grained updates. As we navigate between contacts, we’ll update single text nodes to change the contact’s name, address, and so on, without doing any additional rerendering.

This sandbox includes a couple features (like nested routing) discussed in this section and the previous one, and a couple we’ll cover in the rest of this chapter. The router is such an integrated system that it makes sense to provide a single example, so don’t be surprised if there’s anything you don’t understand.

Live example

Click to open CodeSandbox.

CodeSandbox Source
use leptos::*;
use leptos_router::*;

#[component]
fn App() -> impl IntoView {
    view! {
        <Router>
            <h1>"Contact App"</h1>
            // this <nav> will show on every route,
            // because it's outside the <Routes/>
            // note: we can just use normal <a> tags
            // and the router will use client-side navigation
            <nav>
                <h2>"Navigation"</h2>
                <a href="/">"Home"</a>
                <a href="/contacts">"Contacts"</a>
            </nav>
            <main>
                <Routes>
                    // / just has an un-nested "Home"
                    <Route path="/" view=|| view! {
                        <h3>"Home"</h3>
                    }/>
                    // /contacts has nested routes
                    <Route
                        path="/contacts"
                        view=ContactList
                      >
                        // /contacts/:id renders the selected contact's info
                        <Route path=":id" view=ContactInfo>
                            <Route path="" view=|| view! {
                                <div class="tab">
                                    "(Contact Info)"
                                </div>
                            }/>
                            <Route path="conversations" view=|| view! {
                                <div class="tab">
                                    "(Conversations)"
                                </div>
                            }/>
                        </Route>
                        // if no id specified, fall back
                        <Route path="" view=|| view! {
                            <div class="select-user">
                                "Select a user to view contact info."
                            </div>
                        }/>
                    </Route>
                </Routes>
            </main>
        </Router>
    }
}

#[component]
fn ContactList() -> impl IntoView {
    view! {
        <div class="contact-list">
            // here's our contact list component itself
            <div class="contact-list-contacts">
                <h3>"Contacts"</h3>
                <A href="alice">"Alice"</A>
                <A href="bob">"Bob"</A>
                <A href="steve">"Steve"</A>
            </div>

            // <Outlet/> will show the nested child route
            // we can position this outlet wherever we want
            // within the layout
            <Outlet/>
        </div>
    }
}

#[component]
fn ContactInfo() -> impl IntoView {
    // we can access the :id param reactively with `use_params_map`
    let params = use_params_map();
    let id = move || params.with(|params| params.get("id").cloned().unwrap_or_default());

    // imagine we're loading data from an API here
    let name = move || match id().as_str() {
        "alice" => "Alice",
        "bob" => "Bob",
        "steve" => "Steve",
        _ => "User not found.",
    };

    view! {
        <div class="contact-info">
            <h4>{name}</h4>
            <div class="tabs">
                <A href="" exact=true>"Contact Info"</A>
                <A href="conversations">"Conversations"</A>
            </div>

            // <Outlet/> here is the tabs that are nested
            // underneath the /contacts/:id route
            <Outlet/>
        </div>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

Params and Queries

Static paths are useful for distinguishing between different pages, but almost every application wants to pass data through the URL at some point.

There are two ways you can do this:

  1. named route params like id in /users/:id
  2. named route queries like q in /search?q=Foo

Because of the way URLs are built, you can access the query from any <Route/> view. You can access route params from the <Route/> that defines them or any of its nested children.

Accessing params and queries is pretty simple with a couple of hooks:

  • use_query or use_query_map
  • use_params or use_params_map

Each of these comes with a typed option (use_query and use_params) and an untyped option (use_query_map and use_params_map).

The untyped versions hold a simple key-value map. To use the typed versions, derive the Params trait on a struct.

Params is a very lightweight trait to convert a flat key-value map of strings into a struct by applying FromStr to each field. Because of the flat structure of route params and URL queries, it’s significantly less flexible than something like serde; it also adds much less weight to your binary.

use leptos::*;
use leptos_router::*;

#[derive(Params, PartialEq)]
struct ContactParams {
    id: usize
}

#[derive(Params, PartialEq)]
struct ContactSearch {
    q: String
}

Note: The Params derive macro is located at leptos::Params, and the Params trait is at leptos_router::Params. If you avoid using glob imports like use leptos::*;, make sure you’re importing the right one for the derive macro.

If you are not using the nightly feature, you will get the error

no function or associated item named `into_param` found for struct `std::string::String` in the current scope

At the moment, supporting both T: FromStr and Option<T> for typed params requires a nightly feature. You can fix this by simply changing the struct to use q: Option<String> instead of q: String.
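In other words, the stable-friendly version of the search struct looks like this:

#[derive(Params, PartialEq)]
struct ContactSearch {
    // `Option<String>` is supported on stable; a missing `q` is simply `None`
    q: Option<String>
}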

Now we can use them in a component. Imagine a URL that has both params and a query, like /contacts/:id?q=Search.

The typed versions return Memo<Result<T, _>>. It’s a Memo so it reacts to changes in the URL. It’s a Result because the params or query need to be parsed from the URL, and may or may not be valid.

let params = use_params::<ContactParams>();
let query = use_query::<ContactSearch>();

// id: || -> usize
let id = move || {
    params.with(|params| {
        params.as_ref()
            .map(|params| params.id)
            .unwrap_or_default()
    })
};

The untyped versions return Memo<ParamsMap>. Again, it’s a Memo, so it reacts to changes in the URL. ParamsMap behaves a lot like any other map type, with a .get() method that returns Option<&String>.

let params = use_params_map();
let query = use_query_map();

// id: || -> Option<String>
let id = move || {
    params.with(|params| params.get("id").cloned())
};

This can get a little messy: deriving a signal that wraps an Option<_> or Result<_> can involve a couple steps. But it’s worth doing this for two reasons:

  1. It’s correct, i.e., it forces you to consider the cases, “What if the user doesn’t pass a value for this query field? What if they pass an invalid value?”
  2. It’s performant. Specifically, when you navigate between different paths that match the same <Route/> with only params or the query changing, you can get fine-grained updates to different parts of your app without rerendering. For example, navigating between different contacts in our contact-list example does a targeted update to the name field (and eventually contact info) without needing to replace or rerender the wrapping <Contact/>. This is what fine-grained reactivity is for.

This is the same example from the previous section. The router is such an integrated system that it makes sense to provide a single example highlighting multiple features, even if we haven’t explained them all yet.

Live example

Click to open CodeSandbox.

CodeSandbox Source
use leptos::*;
use leptos_router::*;

#[component]
fn App() -> impl IntoView {
    view! {
        <Router>
            <h1>"Contact App"</h1>
            // this <nav> will show on every route,
            // because it's outside the <Routes/>
            // note: we can just use normal <a> tags
            // and the router will use client-side navigation
            <nav>
                <h2>"Navigation"</h2>
                <a href="/">"Home"</a>
                <a href="/contacts">"Contacts"</a>
            </nav>
            <main>
                <Routes>
                    // / just has an un-nested "Home"
                    <Route path="/" view=|| view! {
                        <h3>"Home"</h3>
                    }/>
                    // /contacts has nested routes
                    <Route
                        path="/contacts"
                        view=ContactList
                      >
                        // /contacts/:id renders the selected contact's info
                        <Route path=":id" view=ContactInfo>
                            <Route path="" view=|| view! {
                                <div class="tab">
                                    "(Contact Info)"
                                </div>
                            }/>
                            <Route path="conversations" view=|| view! {
                                <div class="tab">
                                    "(Conversations)"
                                </div>
                            }/>
                        </Route>
                        // if no id specified, fall back
                        <Route path="" view=|| view! {
                            <div class="select-user">
                                "Select a user to view contact info."
                            </div>
                        }/>
                    </Route>
                </Routes>
            </main>
        </Router>
    }
}

#[component]
fn ContactList() -> impl IntoView {
    view! {
        <div class="contact-list">
            // here's our contact list component itself
            <div class="contact-list-contacts">
                <h3>"Contacts"</h3>
                <A href="alice">"Alice"</A>
                <A href="bob">"Bob"</A>
                <A href="steve">"Steve"</A>
            </div>

            // <Outlet/> will show the nested child route
            // we can position this outlet wherever we want
            // within the layout
            <Outlet/>
        </div>
    }
}

#[component]
fn ContactInfo() -> impl IntoView {
    // we can access the :id param reactively with `use_params_map`
    let params = use_params_map();
    let id = move || params.with(|params| params.get("id").cloned().unwrap_or_default());

    // imagine we're loading data from an API here
    let name = move || match id().as_str() {
        "alice" => "Alice",
        "bob" => "Bob",
        "steve" => "Steve",
        _ => "User not found.",
    };

    view! {
        <div class="contact-info">
            <h4>{name}</h4>
            <div class="tabs">
                <A href="" exact=true>"Contact Info"</A>
                <A href="conversations">"Conversations"</A>
            </div>

            // <Outlet/> here is the tabs that are nested
            // underneath the /contacts/:id route
            <Outlet/>
        </div>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

The <A/> Component

Client-side navigation works perfectly fine with ordinary HTML <a> elements. The router adds a listener that handles every click on an <a> element and tries to handle it on the client side, i.e., without doing another round trip to the server to request HTML. This is what enables the snappy “single-page app” navigations you’re probably familiar with from most modern web apps.

The router will bail out of handling an <a> click in a number of situations:

  • the click event has had prevent_default() called on it
  • the Meta, Alt, Ctrl, or Shift keys were held during click
  • the <a> has a target or download attribute, or rel="external"
  • the link has a different origin from the current location

In other words, the router will only try to do a client-side navigation when it’s pretty sure it can handle it, and it will upgrade every <a> element to get this special behavior.

This also means that if you need to opt out of client-side routing, you can do so easily. For example, if you have a link to another page on the same domain, but which isn’t part of your Leptos app, you can just use <a rel="external"> to tell the router it isn’t something it can handle.
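For example (the /blog path here is just an illustration):

view! {
    // the router leaves this link alone, so the browser
    // performs a normal full-page navigation
    <a rel="external" href="/blog">"Read the blog"</a>
}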

The router also provides an <A> component, which does two additional things:

  1. Correctly resolves relative nested routes. Relative routing with ordinary <a> tags can be tricky. For example, if you have a route like /post/:id, <A href="1"> will generate the correct relative route, but <a href="1"> likely will not (depending on where it appears in your view.) <A/> resolves routes relative to the path of the nested route within which it appears.
  2. Sets the aria-current attribute to page if this link is the active link (i.e., it’s a link to the page you’re on). This is helpful for accessibility and for styling. For example, if you want to set the link a different color if it’s a link to the page you’re currently on, you can match this attribute with a CSS selector.

Your most-used methods of navigating between pages should be with <a> and <form> elements or with the enhanced <A/> and <Form/> components. Using links and forms to navigate is the best solution for accessibility and graceful degradation.

On occasion, though, you’ll want to navigate programmatically, i.e., call a function that can navigate to a new page. In that case, you should use the use_navigate function.

let navigate = leptos_router::use_navigate();
navigate("/somewhere", Default::default());

You should almost never do something like <button on:click=move |_| navigate(/* ... */)>. Any on:click that navigates should be an <a>, for reasons of accessibility.

The second argument here is a set of NavigateOptions, which includes options to resolve the navigation relative to the current route as the <A/> component does, replace it in the navigation stack, include some navigation state, and maintain the current scroll state on navigation.
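
As a rough sketch of what that looks like (field names follow the description above; check the NavigateOptions docs for your Leptos version), you might replace the current history entry like this:

use leptos_router::{use_navigate, NavigateOptions};

// must be called somewhere under a <Router/>, e.g. inside an event handler
fn go_home_replacing_history() {
    let navigate = use_navigate();
    navigate(
        "/",
        NavigateOptions {
            // overwrite the current history entry instead of pushing a new one
            replace: true,
            // leave resolve, scroll, and state at their defaults
            ..Default::default()
        },
    );
}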

Once again, this is the same example. Check out the relative <A/> components, and take a look at the CSS in index.html to see the ARIA-based styling.

Live example

Click to open CodeSandbox.

CodeSandbox Source
use leptos::*;
use leptos_router::*;

#[component]
fn App() -> impl IntoView {
    view! {
        <Router>
            <h1>"Contact App"</h1>
            // this <nav> will show on every route,
            // because it's outside the <Routes/>
            // note: we can just use normal <a> tags
            // and the router will use client-side navigation
            <nav>
                <h2>"Navigation"</h2>
                <a href="/">"Home"</a>
                <a href="/contacts">"Contacts"</a>
            </nav>
            <main>
                <Routes>
                    // / just has an un-nested "Home"
                    <Route path="/" view=|| view! {
                        <h3>"Home"</h3>
                    }/>
                    // /contacts has nested routes
                    <Route
                        path="/contacts"
                        view=ContactList
                      >
                        // /contacts/:id shows info for one contact
                        <Route path=":id" view=ContactInfo>
                            <Route path="" view=|| view! {
                                <div class="tab">
                                    "(Contact Info)"
                                </div>
                            }/>
                            <Route path="conversations" view=|| view! {
                                <div class="tab">
                                    "(Conversations)"
                                </div>
                            }/>
                        </Route>
                        // if no id specified, fall back
                        <Route path="" view=|| view! {
                            <div class="select-user">
                                "Select a user to view contact info."
                            </div>
                        }/>
                    </Route>
                </Routes>
            </main>
        </Router>
    }
}

#[component]
fn ContactList() -> impl IntoView {
    view! {
        <div class="contact-list">
            // here's our contact list component itself
            <div class="contact-list-contacts">
                <h3>"Contacts"</h3>
                <A href="alice">"Alice"</A>
                <A href="bob">"Bob"</A>
                <A href="steve">"Steve"</A>
            </div>

            // <Outlet/> will show the nested child route
            // we can position this outlet wherever we want
            // within the layout
            <Outlet/>
        </div>
    }
}

#[component]
fn ContactInfo() -> impl IntoView {
    // we can access the :id param reactively with `use_params_map`
    let params = use_params_map();
    let id = move || params.with(|params| params.get("id").cloned().unwrap_or_default());

    // imagine we're loading data from an API here
    let name = move || match id().as_str() {
        "alice" => "Alice",
        "bob" => "Bob",
        "steve" => "Steve",
        _ => "User not found.",
    };

    view! {
        <div class="contact-info">
            <h4>{name}</h4>
            <div class="tabs">
                <A href="" exact=true>"Contact Info"</A>
                <A href="conversations">"Conversations"</A>
            </div>

            // <Outlet/> here is the tabs that are nested
            // underneath the /contacts/:id route
            <Outlet/>
        </div>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

The <Form/> Component

Links and forms sometimes seem completely unrelated. But, in fact, they work in very similar ways.

In plain HTML, there are three ways to navigate to another page:

  1. An <a> element that links to another page: Navigates to the URL in its href attribute with the GET HTTP method.
  2. A <form method="GET">: Navigates to the URL in its action attribute with the GET HTTP method and the form data from its inputs encoded in the URL query string.
  3. A <form method="POST">: Navigates to the URL in its action attribute with the POST HTTP method and the form data from its inputs encoded in the body of the request.

Since we have a client-side router, we can do client-side link navigations without reloading the page, i.e., without a full round-trip to the server and back. It makes sense that we can do client-side form navigations in the same way.

The router provides a <Form> component, which works like the HTML <form> element, but uses client-side navigations instead of full page reloads. <Form/> works with both GET and POST requests. With method="GET", it will navigate to the URL encoded in the form data. With method="POST" it will make a POST request and handle the server’s response.

<Form/> provides the basis for some components like <ActionForm/> and <MultiActionForm/> that we’ll see in later chapters. But it also enables some powerful patterns of its own.

For example, imagine that you want to create a search field that updates search results in real time as the user searches, without a page reload, but that also stores the search in the URL so a user can copy and paste it to share results with someone else.

It turns out that the patterns we’ve learned so far make this easy to implement.

async fn fetch_results(search: String) {
    // some async function to fetch our search results for `search`
}

#[component]
pub fn FormExample() -> impl IntoView {
    // reactive access to URL query strings
    let query = use_query_map();
    // search stored as ?q=
    let search = move || query().get("q").cloned().unwrap_or_default();
    // a resource driven by the search string
    let search_results = create_resource(search, fetch_results);

    view! {
        <Form method="GET" action="">
            <input type="search" name="q" value=search/>
            <input type="submit"/>
        </Form>
        <Transition fallback=move || ()>
            /* render search results */
        </Transition>
    }
}

Whenever you click Submit, the <Form/> will “navigate” to ?q={search}. But because this navigation is done on the client side, there’s no page flicker or reload. The URL query string changes, which triggers search to update. Because search is the source signal for the search_results resource, this triggers search_results to reload its resource. The <Transition/> continues displaying the current search results until the new ones have loaded. When they are complete, it switches to displaying the new results.

This is a great pattern. The data flow is extremely clear: all data flows from the URL to the resource into the UI. The current state of the application is stored in the URL, which means you can refresh the page or text the link to a friend and it will show exactly what you’re expecting. And once we introduce server rendering, this pattern will prove to be really fault-tolerant, too: because it uses a <form> element and URLs under the hood, it actually works really well without even loading your WASM on the client.

We can actually take it a step further and do something kind of clever:

view! {
	<Form method="GET" action="">
		<input type="search" name="q" value=search
			oninput="this.form.requestSubmit()"
		/>
	</Form>
}

You’ll notice that this version drops the Submit button. Instead, we add an oninput attribute to the input. Note that this is not on:input, which would listen for the input event and run some Rust code. Without the colon, oninput is the plain HTML attribute. So the string is actually a JavaScript string. this.form gives us the form the input is attached to. requestSubmit() fires the submit event on the <form>, which is caught by <Form/> just as if we had clicked a Submit button. Now the form will “navigate” on every keystroke or input to keep the URL (and therefore the search) perfectly in sync with the user’s input as they type.

Live example

Click to open CodeSandbox.

CodeSandbox Source
use leptos::*;
use leptos_router::*;

#[component]
fn App() -> impl IntoView {
    view! {
        <Router>
            <h1><code>"<Form/>"</code></h1>
            <main>
                <Routes>
                    <Route path="" view=FormExample/>
                </Routes>
            </main>
        </Router>
    }
}

#[component]
pub fn FormExample() -> impl IntoView {
    // reactive access to URL query
    let query = use_query_map();
    let name = move || query().get("name").cloned().unwrap_or_default();
    let number = move || query().get("number").cloned().unwrap_or_default();
    let select = move || query().get("select").cloned().unwrap_or_default();

    view! {
        // read out the URL query strings
        <table>
            <tr>
                <td><code>"name"</code></td>
                <td>{name}</td>
            </tr>
            <tr>
                <td><code>"number"</code></td>
                <td>{number}</td>
            </tr>
            <tr>
                <td><code>"select"</code></td>
                <td>{select}</td>
            </tr>
        </table>
        // <Form/> will navigate whenever submitted
        <h2>"Manual Submission"</h2>
        <Form method="GET" action="">
            // input names determine query string key
            <input type="text" name="name" value=name/>
            <input type="number" name="number" value=number/>
            <select name="select">
                // `selected` determines which option starts as selected
                <option selected=move || select() == "A">
                    "A"
                </option>
                <option selected=move || select() == "B">
                    "B"
                </option>
                <option selected=move || select() == "C">
                    "C"
                </option>
            </select>
            // submitting should cause a client-side
            // navigation, not a full reload
            <input type="submit"/>
        </Form>
        // This <Form/> uses some JavaScript to submit
        // on every input
        <h2>"Automatic Submission"</h2>
        <Form method="GET" action="">
            <input
                type="text"
                name="name"
                value=name
                // this oninput attribute will cause the
                // form to submit on every input to the field
                oninput="this.form.requestSubmit()"
            />
            <input
                type="number"
                name="number"
                value=number
                oninput="this.form.requestSubmit()"
            />
            <select name="select"
                onchange="this.form.requestSubmit()"
            >
                <option selected=move || select() == "A">
                    "A"
                </option>
                <option selected=move || select() == "B">
                    "B"
                </option>
                <option selected=move || select() == "C">
                    "C"
                </option>
            </select>
            // submitting should cause a client-side
            // navigation, not a full reload
            <input type="submit"/>
        </Form>
    }
}

fn main() {
    leptos::mount_to_body(App)
}

Interlude: Styling

Anyone creating a website or application soon runs into the question of styling. For a small app, a single CSS file is probably plenty to style your user interface. But as an application grows, many developers find that plain CSS becomes increasingly hard to manage.

Some frontend frameworks (like Angular, Vue, and Svelte) provide built-in ways to scope your CSS to particular components, making it easier to manage styles across a whole application without letting styles meant for one small component have a global effect. Other frameworks (like React or Solid) don’t provide built-in CSS scoping, but rely on libraries in the ecosystem to do it for them. Leptos is in this latter camp: the framework itself has no opinions about CSS at all, but provides a few tools and primitives that allow others to build styling libraries.

Here are a few different approaches to styling your Leptos app, other than plain CSS.

TailwindCSS: Utility-first CSS

TailwindCSS is a popular utility-first CSS library. It allows you to style your application by using inline utility classes, with a custom CLI tool that scans your files for Tailwind class names and bundles the necessary CSS.

This allows you to write components like this:

#[component]
fn Home() -> impl IntoView {
    let (count, set_count) = create_signal(0);

    view! {
        <main class="my-0 mx-auto max-w-3xl text-center">
            <h2 class="p-6 text-4xl">"Welcome to Leptos with Tailwind"</h2>
            <p class="px-10 pb-10 text-left">"Tailwind will scan your Rust files for Tailwind class names and compile them into a CSS file."</p>
            <button
                class="bg-sky-600 hover:bg-sky-700 px-5 py-3 text-white rounded-lg"
                on:click=move |_| set_count.update(|count| *count += 1)
            >
                {move || if count() == 0 {
                    "Click me!".to_string()
                } else {
                    count().to_string()
                }}
            </button>
        </main>
    }
}

It can be a little complicated to set up the Tailwind integration at first, but you can check out our two examples of how to use Tailwind with a client-side-rendered trunk application or with a server-rendered cargo-leptos application. cargo-leptos also has some built-in Tailwind support that you can use as an alternative to Tailwind’s CLI.

Stylers: Compile-time CSS Extraction

Stylers is a compile-time scoped CSS library that lets you declare scoped CSS in the body of your component. Stylers will extract this CSS at compile time into CSS files that you can then import into your app, which means that it doesn’t add anything to the WASM binary size of your application.

This allows you to write components like this:

use stylers::style;

#[component]
pub fn App() -> impl IntoView {
    let styler_class = style! { "App",
        #two{
            color: blue;
        }
        div.one{
            color: red;
            content: raw_str(r#"\hello"#);
            font: "1.3em/1.2" Arial, Helvetica, sans-serif;
        }
        div {
            border: 1px solid black;
            margin: 25px 50px 75px 100px;
            background-color: lightblue;
        }
        h2 {
            color: purple;
        }
        @media only screen and (max-width: 1000px) {
            h3 {
                background-color: lightblue;
                color: blue
            }
        }
    };

    view! { class = styler_class,
        <div class="one">
            <h1 id="two">"Hello"</h1>
            <h2>"World"</h2>
            <h2>"and"</h2>
            <h3>"friends!"</h3>
        </div>
    }
}

Stylance: Scoped CSS Written in CSS Files

Stylers lets you write CSS inline in your Rust code, extracts it at compile time, and scopes it. Stylance allows you to write your CSS in CSS files alongside your components, import those files into your components, and scope the CSS classes to your components.

This works well with the live-reloading features of trunk and cargo-leptos because edited CSS files can be updated immediately in the browser.

import_style!(style, "app.module.scss");

#[component]
fn HomePage() -> impl IntoView {
    view! {
        <div class=style::jumbotron/>
    }
}

You can edit the CSS directly without causing a Rust recompile.

.jumbotron {
  background: blue;
}

Styled: Runtime CSS Scoping

Styled is a runtime scoped CSS library that integrates well with Leptos. It lets you declare scoped CSS in the body of your component function, and then applies those styles at runtime.

use styled::style;

#[component]
pub fn MyComponent() -> impl IntoView {
    let styles = style!(
      div {
        background-color: red;
        color: white;
      }
    );

    styled::view! { styles,
        <div>"This text should be red with white text."</div>
    }
}

Contributions Welcome

Leptos has no opinions on how you style your website or app, but we’re very happy to provide support to any tools you’re trying to create to make it easier. If you’re working on a CSS or styling approach that you’d like to add to this list, please let us know!

Metadata

So far, everything we’ve rendered has been inside the <body> of the HTML document. And this makes sense. After all, everything you can see on a web page lives inside the <body>.

However, there are plenty of occasions where you might want to update something inside the <head> of the document using the same reactive primitives and component patterns you use for your UI.

That’s where the leptos_meta package comes in.

Metadata Components

leptos_meta provides special components that let you inject data from inside components anywhere in your application into the <head>:

<Title/> allows you to set the document’s title from any component. It also takes a formatter function that can be used to apply the same format to the titles set by other pages. So, for example, if you put <Title formatter=|text| format!("{text} — My Awesome Site")/> in your <App/> component, and then <Title text="Page 1"/> and <Title text="Page 2"/> on your routes, you’ll get Page 1 — My Awesome Site and Page 2 — My Awesome Site.
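
As a small sketch of how those pieces fit together in a root component:

use leptos::*;
use leptos_meta::{provide_meta_context, Title};

#[component]
fn App() -> impl IntoView {
    // leptos_meta needs its context provided once, near the root
    provide_meta_context();
    view! {
        // the formatter set here applies to every <Title text=.../> below it
        <Title formatter=|text| format!("{text} — My Awesome Site")/>
        // ... routes; an individual page then just renders <Title text="Page 1"/>
    }
}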

<Link/> takes the standard attributes of the <link> element.

<Stylesheet/> creates a <link rel="stylesheet"> with the href you give.

<Style/> creates a <style> with the children you pass in (usually a string). You can use this to import some custom CSS from another file at compile time: <Style>{include_str!("my_route.css")}</Style>.

<Meta/> lets you set <meta> tags with descriptions and other metadata.

<Script/> and <script>

leptos_meta also provides a <Script/> component, and it’s worth pausing here for a second. All of the other components we’ve considered inject <head>-only elements in the <head>. But a <script> can also be included in the body.

There’s a very simple way to determine whether you should use a capital-S <Script/> component or a lowercase-s <script> element: the <Script/> component will be rendered in the <head>, and the <script> element will be rendered wherever you put it in the <body> of your user interface, alongside other normal HTML elements. These cause JavaScript to load and run at different times, so use whichever is appropriate to your needs.
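
For example, a rough sketch (the script URL and log message are made up):

use leptos::*;
use leptos_meta::Script;

#[component]
fn Scripts() -> impl IntoView {
    view! {
        // collected by leptos_meta and rendered into the document <head>
        <Script src="https://example.com/analytics.js"/>
        // a plain <script> element stays right here, inside the <body>
        <script>"console.log('running from the body');"</script>
    }
}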

<Body/> and <Html/>

There are even a couple elements designed to make semantic HTML and styling easier. <Html/> lets you set the lang and dir on your <html> tag from your application code. <Html/> and <Body/> both have class props that let you set their respective class attributes, which is sometimes needed by CSS frameworks for styling.
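
For example, a minimal sketch (the class names are made up, and this assumes provide_meta_context has already been called at the root of the app):

use leptos::*;
use leptos_meta::{Body, Html};

#[component]
fn ThemedShell() -> impl IntoView {
    view! {
        // sets attributes on the real <html> and <body> from application code
        <Html lang="en" class="dark"/>
        <Body class="app-body"/>
    }
}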

<Body/> and <Html/> both also have attributes props which can be used to set any number of additional attributes on them via the attr: syntax:

<Html
	lang="he"
	dir="rtl"
	attr:data-theme="dark"
/>

Metadata and Server Rendering

Now, some of this is useful in any scenario, but some of it is especially important for search-engine optimization (SEO). Making sure you have things like appropriate <title> and <meta> tags is crucial. Modern search engine crawlers do handle client-side rendering, i.e., apps that are shipped as an empty index.html and rendered entirely in JS/WASM. But they prefer to receive pages in which your app has been rendered to actual HTML, with metadata in the <head>.

This is exactly what leptos_meta is for. And in fact, during server rendering, this is exactly what it does: collect all the <head> content you’ve declared by using its components throughout your application, and then inject it into the actual <head>.

But I’m getting ahead of myself. We haven’t actually talked about server-side rendering yet. The next chapter will talk about integrating with JavaScript libraries. Then we’ll wrap up the discussion of the client side, and move on to server-side rendering.

Integrating with JavaScript: wasm-bindgen, web_sys and HtmlElement

Leptos provides a variety of tools to allow you to build declarative web applications without leaving the world of the framework. Things like the reactive system, component and view macros, and router allow you to build user interfaces without directly interacting with the Web APIs provided by the browser. And they let you do it all directly in Rust, which is great—assuming you like Rust. (And if you’ve gotten this far in the book, we assume you like Rust.)

Ecosystem crates like the fantastic set of utilities provided by leptos-use can take you even further, by providing Leptos-specific reactive wrappers around many Web APIs.

Nevertheless, in many cases you will need to access JavaScript libraries or Web APIs directly. This chapter can help.

Using JS Libraries with wasm-bindgen

Your Rust code can be compiled to a WebAssembly (WASM) module and loaded to run in the browser. However, WASM does not have direct access to browser APIs. Instead, the Rust/WASM ecosystem depends on generating bindings from your Rust code to the JavaScript browser environment that hosts it.

The wasm-bindgen crate is at the center of that ecosystem. It provides both an interface for marking parts of Rust code with annotations telling it how to call JS, and a CLI tool for generating the necessary JS glue code. You’ve been using this without knowing it all along: both trunk and cargo-leptos rely on wasm-bindgen under the hood.

If there is a JavaScript library that you want to call from Rust, you should refer to the wasm-bindgen docs on importing functions from JS. It is relatively easy to import individual functions, classes, or values from JavaScript to use in your Rust app.

It is not always easy to integrate JS libraries into your app directly. In particular, any library that depends on a particular JS framework like React may be hard to integrate. Libraries that manipulate DOM state in some way (for example, rich text editors) should also be used with care: both Leptos and the JS library will probably assume that they are the ultimate source of truth for the app’s state, so you should be careful to separate their responsibilities.

Accessing Web APIs with web-sys

If you just need to access some browser APIs without pulling in a separate JS library, you can do so using the web_sys crate. This provides bindings for all of the Web APIs provided by the browser, with 1:1 mappings from browser types and functions to Rust structs and methods.

In general, if you’re asking “how do I do X with Leptos?” where X is accessing some Web API, looking up a vanilla JavaScript solution and translating it to Rust using the web-sys docs is a good approach.

After this section, you might find the wasm-bindgen guide chapter on web-sys useful for additional reading.

Enabling features

web_sys is heavily feature-gated to keep compile times low. If you would like to use one of its many APIs, you may need to enable a feature to use it.

The features required to use an item are always listed in its documentation. For example, to use Element::get_bounding_client_rect, you need to enable the DomRect and Element features.

Leptos already enables a whole bunch of features - if the required feature is already enabled here, you won't have to enable it in your own app. Otherwise, add it to your Cargo.toml and you’re good to go!

[dependencies.web-sys]
version = "0.3"
features = ["DomRect"]
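
As a rough sketch of how this plays out in practice (not from the book), here is a component that measures a <div> after it mounts; the get_bounding_client_rect call relies on the Element and DomRect features mentioned above:

use leptos::*;

#[component]
fn Measured() -> impl IntoView {
    let node_ref = create_node_ref::<html::Div>();

    // effects only run in the browser, so this is also safe under SSR
    create_effect(move |_| {
        if let Some(el) = node_ref.get() {
            // `get_bounding_client_rect` comes from `web_sys::Element` via Deref
            let rect = el.get_bounding_client_rect();
            logging::log!("{} x {}", rect.width(), rect.height());
        }
    });

    view! { <div node_ref=node_ref>"Measure me"</div> }
}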

However, as the JavaScript standard evolves and APIs are being written, you may want to use browser features that are technically not fully stable yet, such as WebGPU. web_sys will follow the (potentially frequently changing) standard, which means that no stability guarantees are made.

In order to use this, you need to add RUSTFLAGS=--cfg=web_sys_unstable_apis as an environment variable. You can do this either by adding it to every command or by adding it to .cargo/config.toml in your repository.

As part of a command:

RUSTFLAGS=--cfg=web_sys_unstable_apis cargo # ...

In .cargo/config.toml:

[env]
RUSTFLAGS = "--cfg=web_sys_unstable_apis"

Accessing raw HtmlElements from your view

The declarative style of the framework means that you don’t need to directly manipulate DOM nodes to build up your user interface. However, in some cases you want direct access to the underlying DOM element that represents part of your view. The section of the book on “uncontrolled inputs” showed how to do this using the NodeRef type.

You may notice that NodeRef::get returns an Option<leptos::HtmlElement<T>>. This is not the same type as a web_sys::HtmlElement, although they are related. So what is this HtmlElement<T> type, and how do you use it?

Overview

web_sys::HtmlElement is the Rust equivalent of the browser’s HTMLElement interface, which is implemented for all HTML elements. It provides access to a minimal set of functions and APIs that are guaranteed to be available for any HTML element. Each particular HTML element then has its own element class, which implements additional functionality. The goal of leptos::HtmlElement<T> is to bridge the gap between elements in your view and these more specific JavaScript types, so that you can access the particular functionality of those elements.

This is implemented using the Rust Deref trait to allow you to dereference a leptos::HtmlElement<T> to the appropriately-typed JS object for that particular element type T.

Definition

Understanding this relationship involves understanding some related traits.

The following simply defines what types are allowed inside the T of leptos::HtmlElement<T> and how it links to web_sys.

pub struct HtmlElement<El> where El: ElementDescriptor { /* ... */ }

pub trait ElementDescriptor: ElementDescriptorBounds { /* ... */ }

pub trait ElementDescriptorBounds: Debug {}
impl<El> ElementDescriptorBounds for El where El: Debug {}

// this is implemented for every single element in `leptos::{html, svg, math}::*`
impl ElementDescriptor for leptos::html::Div { /* ... */ }

// same with this, derefs to the corresponding `web_sys::Html*Element`
impl Deref for leptos::html::Div {
    type Target = web_sys::HtmlDivElement;
    // ...
}

The following is from web_sys:

impl Deref for web_sys::HtmlDivElement {
    type Target = web_sys::HtmlElement;
    // ...
}

impl Deref for web_sys::HtmlElement {
    type Target = web_sys::Element;
    // ...
}

impl Deref for web_sys::Element {
    type Target = web_sys::Node;
    // ...
}

impl Deref for web_sys::Node {
    type Target = web_sys::EventTarget;
    // ...
}

web_sys uses long deref chains to emulate the inheritance used in JavaScript. If you can't find the method you're looking for on one type, take a look further down the deref chain. The leptos::html::* types all deref into web_sys::Html*Element or web_sys::HtmlElement. When you call element.method(), Rust will automatically add as many derefs as needed to call the correct method!

However, some methods have the same name, such as leptos::HtmlElement::style and web_sys::HtmlElement::style. In these cases, Rust will pick the one that requires the fewest derefs, which is leptos::HtmlElement::style if you're getting an element straight from a NodeRef. If you wish to use the web_sys method instead, you can manually deref with (*element).style().
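
A minimal sketch of that disambiguation, assuming an element pulled out of a NodeRef<Div>:

use leptos::*;
use leptos::html::Div;

fn paint(el: HtmlElement<Div>) {
    // without an explicit deref, Rust picks `leptos::HtmlElement::style`,
    // the builder method, because it requires the fewest derefs
    let el = el.style("background-color", "lightblue");

    // explicitly deref to reach `web_sys::HtmlElement::style()`, which
    // returns the element's CssStyleDeclaration instead
    let _declaration = (*el).style();
}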

If you want to have even more control over which type you are calling a method from, AsRef<T> is implemented for all types that are part of the deref chain, so you can explicitly state which type you want.

See also: The wasm-bindgen Guide: Inheritance in web-sys.

Clones

web_sys::HtmlElement (and by extension leptos::HtmlElement) only stores a reference to the HTML element it affects. Therefore, calling .clone() doesn't actually make a new HTML element; it simply gets another reference to the same one. Calling methods that change the element from any of its clones will affect the original element.

Unfortunately, web_sys::HtmlElement does not implement Copy, so you may need to add a bunch of clones, especially when using it in closures. Don't worry though: these clones are cheap!

Casting

You can get less specific types through Deref or AsRef, so use those when possible. However, if you need to cast to a more specific type (e.g. from an EventTarget to an HtmlInputElement), you will need to use the methods provided by wasm_bindgen::JsCast (re-exported through web_sys::wasm_bindgen::JsCast). You'll probably only need the dyn_ref method.

use web_sys::wasm_bindgen::JsCast;
use web_sys::{HtmlInputElement, MouseEvent};

let on_click = |ev: MouseEvent| {
    // bind the owned `EventTarget` first so that `dyn_ref` can borrow it
    let target = ev.current_target().unwrap();
    let input = target.dyn_ref::<HtmlInputElement>().unwrap();
    // or, just use the existing `leptos::event_target_*` functions
};

See the event_target_* functions here, if you're curious.

leptos::HtmlElement

The leptos::HtmlElement adds some extra convenience methods to make it easier to manipulate common attributes. These methods were built for the builder syntax, so each of them takes and returns self. You can just do _ = element.clone().<method>() to ignore the element it returns - it'll still affect the original element, even though it doesn't look like it (see the previous section on Clones)!

Here are some of the common methods you may want to use, for example in event listeners or use: directives.

  • id: overwrites the id on the element.
  • classes: adds the classes to the element. You can specify multiple classes with a space-separated string. You can also use class to conditionally add a single class: do not add multiple with this method.
  • attr: sets a key=value attribute to the element.
  • prop: sets a property on the element: see the distinction between properties and attributes here.
  • on: adds an event listener to the element. Specify the event type through one of leptos::ev::* (it's the ones in all lowercase).
  • child: adds an element as the last child of the element.

Take a look at the rest of the leptos::HtmlElement methods too. If none of them fit your requirements, also take a look at leptos-use. Otherwise, you’ll have to use the web_sys APIs.
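
As a quick sketch of the builder methods above used in an event listener (the class and attribute names are made up):

use leptos::*;

#[component]
fn Highlightable() -> impl IntoView {
    let div_ref = create_node_ref::<html::Div>();

    view! {
        <div node_ref=div_ref>
            <button on:click=move |_| {
                if let Some(el) = div_ref.get() {
                    // each method takes and returns `self`; we only care about
                    // the side effects on the underlying element
                    _ = el.classes("highlighted").attr("data-highlighted", "true");
                }
            }>
                "Highlight"
            </button>
        </div>
    }
}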

Wrapping Up Part 1: Client-Side Rendering

So far, everything we’ve written has been rendered almost entirely in the browser. When we create an app using Trunk, it’s served using a local development server. If you build it for production and deploy it, it’s served by whatever server or CDN you’re using. In either case, what’s served is an HTML page with

  1. the URL of your Leptos app, which has been compiled to WebAssembly (WASM)
  2. the URL of the JavaScript used to initialize this WASM blob
  3. an empty <body> element

When the JS and WASM have loaded, Leptos will render your app into the <body>. This means that nothing appears on the screen until JS/WASM have loaded and run. This has some drawbacks:

  1. It increases load time, as your user’s screen is blank until additional resources have been downloaded.
  2. It’s bad for SEO, as load times are longer and the HTML you serve has no meaningful content.
  3. It’s broken for users for whom JS/WASM don’t load for some reason (e.g., they’re on a train and just went into a tunnel before WASM finished loading; they’re using an older device that doesn’t support WASM; they have JavaScript or WASM turned off for some reason; etc.)

These downsides apply across the web ecosystem, but especially to WASM apps.

However, depending on the requirements of your project, you may be fine with these limitations.

If you just want to deploy your Client-Side Rendered website, skip ahead to the chapter on "Deployment" - there, you'll find directions on how best to deploy your Leptos CSR site.

But what do you do if you want to return more than just an empty <body> tag in your index.html page? Use “Server-Side Rendering”!

Whole books could be (and probably have been) written about this topic, but at its core, it’s really simple: rather than returning an empty <body> tag, with SSR, you'll return an initial HTML page that reflects the actual starting state of your app or site, so that while JS/WASM are loading, and until they load, the user can access the plain HTML version.

Part 2 of this book, on Leptos SSR, will cover this topic in some detail!

Part 2: Server Side Rendering

The second part of the book is all about how to turn your beautiful UIs into full-stack Rust + Leptos powered websites and applications.

As you read in the last chapter, there are some limitations to using client-side rendered Leptos apps - over the next few chapters, you'll see how we can overcome those limitations and get the best performance and SEO out of your Leptos apps.

Info

When working with Leptos on the server side, you're free to choose either the Actix-web or the Axum integration - the full feature set of Leptos is available with either option.

If, however, you need to deploy to a WinterCG-compatible runtime like Deno, Cloudflare, etc., then choose the Axum integration, as this deployment option is only available with Axum on the server. Lastly, if you'd like to go full-stack WASM/WASI and deploy to WASM-based serverless runtimes, then Axum is your go-to choice here too.

NB: this is a limitation of the web frameworks themselves, not Leptos.

Introducing cargo-leptos

So far, we’ve just been running code in the browser and using Trunk to coordinate the build process and run a local development server. If we’re going to add server-side rendering, we’ll need to run our application code on the server as well. This means we’ll need to build two separate binaries, one compiled to native code and running the server, the other compiled to WebAssembly (WASM) and running in the user’s browser. Additionally, the server needs to know how to serve this WASM version (and the JavaScript required to initialize it) to the browser.

This is not an insurmountable task but it adds some complication. For convenience and an easier developer experience, we built the cargo-leptos build tool. cargo-leptos basically exists to coordinate the build process for your app, handling recompiling the server and client halves when you make changes, and adding some built-in support for things like Tailwind, SASS, and testing.

Getting started is pretty easy. Just run

cargo install cargo-leptos

And then to create a new project, you can run either

# for an Actix template
cargo leptos new --git leptos-rs/start

or

# for an Axum template
cargo leptos new --git leptos-rs/start-axum

Make sure you've added the wasm32-unknown-unknown target so that Rust can compile your code to WebAssembly to run in the browser.

rustup target add wasm32-unknown-unknown

Now cd into the directory you’ve created and run

cargo leptos watch

Note: Remember that Leptos has a nightly feature, which each of these starters use. If you're using the stable Rust compiler, that’s fine; just remove the nightly feature from each of the Leptos dependencies in your new Cargo.toml and you should be all set.

Once your app has compiled you can open up your browser to http://localhost:3000 to see it.

cargo-leptos has lots of additional features and built-in tools. You can learn more in its README.

But what exactly is happening when you open your browser to localhost:3000? Well, read on to find out.

The Life of a Page Load

Before we get into the weeds it might be helpful to have a higher-level overview. What exactly happens between the moment you type in the URL of a server-rendered Leptos app, and the moment you click a button and a counter increases?

I’m assuming some basic knowledge of how the Internet works here, and won’t get into the weeds about HTTP or whatever. Instead, I’ll try to show how different parts of the Leptos APIs map onto each part of the process.

This description also starts from the premise that your app is being compiled for two separate targets:

  1. A server version, often running on Actix or Axum, compiled with the Leptos ssr feature
  2. A browser version, compiled to WebAssembly (WASM) with the Leptos hydrate feature

The cargo-leptos build tool exists to coordinate the process of compiling your app for these two different targets.

On the Server

  • Your browser makes a GET request for that URL to your server. At this point, the browser knows almost nothing about the page that’s going to be rendered. (The question “How does the browser know where to ask for the page?” is an interesting one, but out of the scope of this tutorial!)
  • The server receives that request, and checks whether it has a way to handle a GET request at that path. This is what the .leptos_routes() methods in leptos_axum and leptos_actix are for. When the server starts up, these methods walk over the routing structure you provide in <Routes/>, generating a list of all possible routes your app can handle and telling the server’s router “for each of these routes, if you get a request... hand it off to Leptos.”
  • The server sees that this route can be handled by Leptos. So it renders your root component (often called something like <App/>), providing it with the URL that’s being requested and some other data like the HTTP headers and request metadata.
  • Your application runs once on the server, building up an HTML version of the component tree that will be rendered at that route. (There’s more to be said here about resources and <Suspense/> in the next chapter.)
  • The server returns this HTML page, also injecting information on how to load the version of your app that has been compiled to WASM so that it can run in the browser.

The HTML page that’s returned is essentially your app, “dehydrated” or “freeze-dried”: it is HTML without any of the reactivity or event listeners you’ve added. The browser will “rehydrate” this HTML page by adding the reactive system and attaching event listeners to that server-rendered HTML. Hence the two feature flags that apply to the two halves of this process: ssr on the server for “server-side rendering”, and hydrate in the browser for that process of rehydration.

In the Browser

  • The browser receives this HTML page from the server. It immediately goes back to the server to begin loading the JS and WASM necessary to run the interactive, client side version of the app.
  • In the meantime, it renders the HTML version.
  • When the WASM version has loaded, it does the same route-matching process that the server did. Because the <Routes/> component is identical on the server and in the client, the browser version will read the URL and render the same page that was already returned by the server.
  • During this initial “hydration” phase, the WASM version of your app doesn’t re-create the DOM nodes that make up your application. Instead, it walks over the existing HTML tree, “picking up” existing elements and adding the necessary interactivity.

Note that there are some trade-offs here. Before this hydration process is complete, the page will appear interactive but won’t actually respond to interactions. For example, if you have a counter button and click it before WASM has loaded, the count will not increment, because the necessary event listeners and reactivity have not been added yet. We’ll look at some ways to build in “graceful degradation” in future chapters.

Client-Side Navigation

The next step is very important. Imagine that the user now clicks a link to navigate to another page in your application.

The browser will not make another round trip to the server, reloading the full page as it would for navigating between plain HTML pages or an application that uses server rendering (for example with PHP) but without a client-side half.

Instead, the WASM version of your app will load the new page, right there in the browser, without requesting another page from the server. Essentially, your app upgrades itself from a server-loaded “multi-page app” into a browser-rendered “single-page app.” This yields the best of both worlds: a fast initial load time due to the server-rendered HTML, and fast secondary navigations because of the client-side routing.

Some of what will be described in the following chapters—like the interactions between server functions, resources, and <Suspense/>—may seem overly complicated. You might find yourself asking, “If my page is being rendered to HTML on the server, why can’t I just .await this on the server? If I can just call library X in a server function, why can’t I call it in my component?” The reason is pretty simple: to enable the upgrade from server rendering to client rendering, everything in your application must be able to run either on the server or in the browser.

This is not the only way to create a website or web framework, of course. But it’s the most common way, and we happen to think it’s quite a good way, to create the smoothest possible experience for your users.

Async Rendering and SSR “Modes”

Server-rendering a page that uses only synchronous data is pretty simple: you just walk down the component tree, rendering each element to an HTML string. But “only synchronous data” is a pretty big caveat: it doesn’t answer the question of what we should do with pages that include asynchronous data, i.e., the sort of stuff that would be rendered under a <Suspense/> node on the client.

When a page loads async data that it needs to render, what should we do? Should we wait for all the async data to load, and then render everything at once? (Let’s call this “async” rendering) Should we go all the way in the opposite direction, just sending the HTML we have immediately down to the client and letting the client load the resources and fill them in? (Let’s call this “synchronous” rendering) Or is there some middle-ground solution that somehow beats them both? (Hint: There is.)

If you’ve ever listened to streaming music or watched a video online, I’m sure you realize that HTTP supports streaming, allowing a single connection to send chunks of data one after another without waiting for the full content to load. You may not realize that browsers are also really good at rendering partial HTML pages. Taken together, this means that you can actually enhance your users’ experience by streaming HTML: and this is something that Leptos supports out of the box, with no configuration at all. And there’s actually more than one way to stream HTML: you can stream the chunks of HTML that make up your page in order, like frames of a video, or you can stream them... well, out of order.

Let me say a little more about what I mean.

Leptos supports all the major ways of rendering HTML that includes asynchronous data:

  1. Synchronous Rendering
  2. Async Rendering
  3. In-Order streaming
  4. Out-of-Order Streaming (and a partially-blocked variant)

Synchronous Rendering

  1. Synchronous: Serve an HTML shell that includes fallback for any <Suspense/>. Load data on the client using create_local_resource, replacing fallback once resources are loaded.
  • Pros: App shell appears very quickly: great TTFB (time to first byte).
  • Cons
    • Resources load relatively slowly; you need to wait for JS + WASM to load before even making a request.
    • No ability to include data from async resources in the <title> or other <meta> tags, hurting SEO and things like social media link previews.

If you’re using server-side rendering, the synchronous mode is almost never what you actually want, from a performance perspective. This is because it misses out on an important optimization. If you’re loading async resources during server rendering, you can actually begin loading the data on the server. Rather than waiting for the client to receive the HTML response, load its JS + WASM, realize it needs the resources, and only then begin loading them, server rendering can actually begin loading the resources when the client first makes the request. In this sense, during server rendering an async resource is like a Future that begins loading on the server and resolves on the client. As long as the resources are actually serializable, this will always lead to a faster total load time.

This is why create_resource requires resource data to be serializable by default, and why you need to explicitly use create_local_resource for any async data that is not serializable and should therefore only be loaded in the browser itself. Creating a local resource when you could create a serializable resource is always a deoptimization.
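
To make the difference concrete, here is a rough sketch (load_score is a hypothetical stand-in for a real API call) of the same data loaded both ways:

use leptos::*;

async fn load_score(id: i32) -> i32 {
    // imagine a real network or database call here
    id * 10
}

#[component]
fn Scores() -> impl IntoView {
    let (id, _set_id) = create_signal(1);

    // serializable: starts loading during server rendering, and the resolved
    // value is sent to the client along with the HTML
    let server_friendly = create_resource(move || id.get(), load_score);

    // browser-only: the fetcher never runs on the server, so loading can't
    // start until JS + WASM have loaded in the browser
    let client_only = create_local_resource(move || id.get(), load_score);

    view! {
        <Suspense fallback=|| "Loading...">
            {move || server_friendly.get()}
            {move || client_only.get()}
        </Suspense>
    }
}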

Async Rendering

  1. async: Load all resources on the server. Wait until all data are loaded, and render HTML in one sweep.
  • Pros: Better handling for meta tags (because you know async data even before you render the <head>). Faster complete load than synchronous because async resources begin loading on server.
  • Cons: Slower load time/TTFB: you need to wait for all async resources to load before displaying anything on the client. The page is totally blank until everything is loaded.

In-Order Streaming

  1. In-order streaming: Walk through the component tree, rendering HTML until you hit a <Suspense/>. Send down all the HTML you’ve got so far as a chunk in the stream, wait for all the resources accessed under the <Suspense/> to load, then render it to HTML and keep walking until you hit another <Suspense/> or the end of the page.
  • Pros: Rather than a blank screen, shows at least something before the data are ready.
  • Cons
    • Loads the shell more slowly than synchronous rendering (or out-of-order streaming) because it needs to pause at every <Suspense/>.
    • Unable to show fallback states for <Suspense/>.
    • Can’t begin hydration until the entire page has loaded, so earlier pieces of the page will not be interactive until the suspended chunks have loaded.

Out-of-Order Streaming

  1. Out-of-order streaming: Like synchronous rendering, serve an HTML shell that includes fallback for any <Suspense/>. But load data on the server, streaming it down to the client as it resolves, and streaming down HTML for <Suspense/> nodes, which is swapped in to replace the fallback.
  • Pros: Combines the best of synchronous and async.
    • Fast initial response/TTFB because it immediately sends the whole synchronous shell
    • Fast total time because resources begin loading on the server.
    • Able to show the fallback loading state and dynamically replace it, instead of showing blank sections for un-loaded data.
  • Cons: Requires JavaScript to be enabled for suspended fragments to appear in the correct order. (This small chunk of JS is streamed down in a <script> tag alongside the <template> tag that contains the rendered <Suspense/> fragment, so it does not need to load any additional JS files.)
  1. Partially-blocked streaming: “Partially-blocked” streaming is useful when you have multiple separate <Suspense/> components on the page. It is triggered by setting ssr=SsrMode::PartiallyBlocked on a route, and by depending on blocking resources within the view. If one of the <Suspense/> components reads from one or more “blocking resources” (see below), the fallback will not be sent; rather, the server will wait until that <Suspense/> has resolved and then replace the fallback with the resolved fragment on the server, which means that it is included in the initial HTML response and appears even if JavaScript is disabled or not supported. Other <Suspense/> fragments stream in out of order, similar to the SsrMode::OutOfOrder default.

This is useful when you have multiple <Suspense/> on the page, and one is more important than the other: think of a blog post and comments, or product information and reviews. It is not useful if there’s only one <Suspense/>, or if every <Suspense/> reads from blocking resources. In those cases it is a slower form of async rendering.

  • Pros: Works if JavaScript is disabled or not supported on the user’s device.
  • Cons
    • Slower initial response time than out-of-order.
    • Marginally slower overall response due to additional work on the server.
    • No fallback state shown.

Using SSR Modes

Because it offers the best blend of performance characteristics, Leptos defaults to out-of-order streaming. But it’s really simple to opt into these different modes. You do it by adding an ssr property onto one or more of your <Route/> components, like in the ssr_modes example.

<Routes>
	// We’ll load the home page with out-of-order streaming and <Suspense/>
	<Route path="" view=HomePage/>

	// We'll load the posts with async rendering, so they can set
	// the title and metadata *after* loading the data
	<Route
		path="/post/:id"
		view=Post
		ssr=SsrMode::Async
	/>
</Routes>

For a path that includes multiple nested routes, the most restrictive mode will be used: i.e., if even a single nested route asks for async rendering, the whole initial request will be rendered async. async is the most restricted requirement, followed by in-order, and then out-of-order. (This probably makes sense if you think about it for a few minutes.)

Blocking Resources

Any Leptos versions later than 0.2.5 (i.e., git main and 0.3.x or later) introduce a new resource primitive with create_blocking_resource. A blocking resource still loads asynchronously like any other async/.await in Rust; it doesn’t block a server thread or anything. Instead, reading from a blocking resource under a <Suspense/> blocks the HTML stream from returning anything, including its initial synchronous shell, until that <Suspense/> has resolved.

Now from a performance perspective, this is not ideal. None of the synchronous shell for your page will load until that resource is ready. However, rendering nothing means that you can do things like set the <title> or <meta> tags in your <head> in actual HTML. This sounds a lot like async rendering, but there’s one big difference: if you have multiple <Suspense/> sections, you can block on one of them but still render a placeholder and then stream in the other.

For example, think about a blog post. For SEO and for social sharing, I definitely want my blog post’s title and metadata in the initial HTML <head>. But I really don’t care whether comments have loaded yet or not; I’d like to load those as lazily as possible.

With blocking resources, I can do something like this:

#[component]
pub fn BlogPost() -> impl IntoView {
	let post_data = create_blocking_resource(/* load blog post */);
	let comments_data = create_resource(/* load blog comments */);
	view! {
		<Suspense fallback=|| ()>
			{move || {
				post_data.with(|data| {
					view! {
						<Title text=data.title/>
						<Meta name="description" content=data.excerpt/>
						<article>
							/* render the post content */
						</article>
					}
				})
			}}
		</Suspense>
		<Suspense fallback=|| "Loading comments...">
			/* render comments data here */
		</Suspense>
	}
}

The first <Suspense/>, with the body of the blog post, will block my HTML stream, because it reads from a blocking resource. Meta tags and other head elements awaiting the blocking resource will be rendered before the stream is sent.

Combined with the following route definition, which uses SsrMode::PartiallyBlocked, the blocking resource will be fully rendered on the server side, making it accessible to users who disable WebAssembly or JavaScript.

<Routes>
	// We’ll load the home page with out-of-order streaming and <Suspense/>
	<Route path="" view=HomePage/>

	// We'll load the posts with async rendering, so they can set
	// the title and metadata *after* loading the data
	<Route
		path="/post/:id"
		view=Post
		ssr=SsrMode::PartiallyBlocked
	/>
</Routes>

The second <Suspense/>, with the comments, will not block the stream. Blocking resources gave me exactly the power and granularity I needed to optimize my page for SEO and user experience.

Hydration Bugs (and how to avoid them)

A Thought Experiment

Let’s try an experiment to test your intuitions. Open up an app you’re server-rendering with cargo-leptos. (If you’ve just been using trunk so far to play with examples, go clone a cargo-leptos template just for the sake of this exercise.)

Put a log somewhere in your root component. (I usually call mine <App/>, but anything will do.)

#[component]
pub fn App() -> impl IntoView {
	logging::log!("where do I run?");
	// ... whatever
}

And let’s fire it up

cargo leptos watch

Where do you expect where do I run? to log?

  • In the command line where you’re running the server?
  • In the browser console when you load the page?
  • Neither?
  • Both?

Try it out.

...

...

...

Okay, consider the spoiler alerted.

You’ll notice of course that it logs in both places, assuming everything goes according to plan. In fact on the server it logs twice—first during the initial server startup, when Leptos renders your app once to extract the route tree, then a second time when you make a request. Each time you reload the page, where do I run? should log once on the server and once on the client.

If you think about the description in the last couple sections, hopefully this makes sense. Your application runs once on the server, where it builds up a tree of HTML which is sent to the client. During this initial render, where do I run? logs on the server.

Once the WASM binary has loaded in the browser, your application runs a second time, walking over the same user interface tree and adding interactivity.

Does that sound like a waste? It is, in a sense. But reducing that waste is a genuinely hard problem. It’s what some JS frameworks like Qwik are intended to solve, although it’s probably too early to tell whether it’s a net performance gain as opposed to other approaches.

The Potential for Bugs

Okay, hopefully all of that made sense. But what does it have to do with the title of this chapter, which is “Hydration bugs (and how to avoid them)”?

Remember that the application needs to run on both the server and the client. This generates a few different sets of potential issues you need to know how to avoid.

Mismatches between server and client code

One way to create a bug is by creating a mismatch between the HTML that’s sent down by the server and what’s rendered on the client. It’s actually fairly hard to do this unintentionally, I think (at least judging by the bug reports I get from people.) But imagine I do something like this

#[component]
pub fn App() -> impl IntoView {
    let data = if cfg!(target_arch = "wasm32") {
        vec![0, 1, 2]
    } else {
        vec![]
    };
    data.into_iter()
        .map(|value| view! { <span>{value}</span> })
        .collect_view()
}

In other words, if this is being compiled to WASM, it has three items; otherwise it’s empty.

When I load the page in the browser, I see nothing. If I open the console I see a bunch of warnings:

element with id 0-3 not found, ignoring it for hydration
element with id 0-4 not found, ignoring it for hydration
element with id 0-5 not found, ignoring it for hydration
component with id _0-6c not found, ignoring it for hydration
component with id _0-6o not found, ignoring it for hydration

The WASM version of your app, running in the browser, expects to find three items; but the HTML has none.

Solution

It’s pretty rare that you do this intentionally, but it could happen from somehow running different logic on the server and in the browser. If you’re seeing warnings like this and you don’t think it’s your fault, it’s much more likely that it’s a bug with <Suspense/> or something. Feel free to go ahead and open an issue or discussion on GitHub for help.

Not all client code can run on the server

Imagine you happily import a dependency like gloo-net that you’ve been using to make requests in the browser, and use it in a create_resource in a server-rendered app.

You’ll probably instantly see the dreaded message

panicked at 'cannot call wasm-bindgen imported functions on non-wasm targets'

Uh-oh.

But of course this makes sense. We’ve just said that your app needs to run on the client and the server.

Solution

There are a few ways to avoid this:

  1. Only use libraries that can run on both the server and the client. reqwest, for example, works for making HTTP requests in both settings.
  2. Use different libraries on the server and the client, and gate them using the #[cfg] macro, as sketched after this list. (Click here for an example.)
  3. Wrap client-only code in create_effect. Because create_effect only runs on the client, this can be an effective way to access browser APIs that are not needed for initial rendering.
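For option 2, here’s a minimal sketch of what that #[cfg] gating might look like. The fetch_data function, the choice of reqwest and gloo-net, and the ssr feature flag are illustrative assumptions; adapt them to your own crates and feature names.

#[cfg(feature = "ssr")]
async fn fetch_data(url: &str) -> Result<String, String> {
    // compiled only for the server build: use a server-friendly client
    // like reqwest (assumed to be an ssr-only, optional dependency)
    let res = reqwest::get(url).await.map_err(|e| e.to_string())?;
    res.text().await.map_err(|e| e.to_string())
}

#[cfg(not(feature = "ssr"))]
async fn fetch_data(url: &str) -> Result<String, String> {
    // compiled only for the browser build: use a browser crate like gloo-net
    let res = gloo_net::http::Request::get(url)
        .send()
        .await
        .map_err(|e| e.to_string())?;
    res.text().await.map_err(|e| e.to_string())
}

Either way, the rest of your code can call fetch_data without caring which implementation was compiled in.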

As an example of option 3, say that I want to store something in the browser’s localStorage whenever a signal changes.

#[component]
pub fn App() -> impl IntoView {
    use gloo_storage::Storage;
    let storage = gloo_storage::LocalStorage::raw();
    logging::log!("{storage:?}");
}

This panics because I can’t access LocalStorage during server rendering.

But if I wrap it in an effect...

#[component]
pub fn App() -> impl IntoView {
    use gloo_storage::Storage;
    create_effect(move |_| {
        let storage = gloo_storage::LocalStorage::raw();
        logging::log!("{storage:?}");
    });
}

It’s fine! This will render appropriately on the server, ignoring the client-only code, and then access the storage and log a message in the browser.

Not all server code can run on the client

WebAssembly running in the browser is a pretty limited environment. You don’t have access to a file-system or to many of the other things the standard library may be used to having. Not every crate can even be compiled to WASM, let alone run in a WASM environment.

In particular, you’ll sometimes see errors about the crate mio or missing things from core. This is generally a sign that you are trying to compile something to WASM that can’t be compiled to WASM. If you’re adding server-only dependencies, you’ll want to mark them optional = true in your Cargo.toml and then enable them in the ssr feature definition. (Check out one of the template Cargo.toml files to see more details.)

You can use create_effect to specify that something should only run on the client, and not on the server. Is there a way to specify that something should run only on the server, and not on the client?

In fact, there is. The next chapter will cover the topic of server functions in some detail. (In the meantime, you can check out their docs here.)

Working with the Server

The previous section described the process of server-side rendering, using the server to generate an HTML version of the page that will become interactive in the browser. So far, everything has been “isomorphic”; in other words, your app has had the “same (iso) shape (morphe)” on the client and the server.

But a server can do a lot more than just render HTML! In fact, a server can do a whole bunch of things your browser can’t, like reading from and writing to a SQL database.

If you’re used to building JavaScript frontend apps, you’re probably used to calling out to some kind of REST API to do this sort of server work. If you’re used to building sites with PHP or Python or Ruby (or Java or C# or...), this server-side work is your bread and butter, and it’s the client-side interactivity that tends to be an afterthought.

With Leptos, you can do both: not only in the same language, not only sharing the same types, but even in the same files!

This section will talk about how to build the uniquely-server-side parts of your application.

Server Functions

If you’re creating anything beyond a toy app, you’ll need to run code on the server all the time: reading from or writing to a database that only runs on the server, running expensive computations using libraries you don’t want to ship down to the client, accessing APIs that need to be called from the server rather than the client for CORS reasons or because you need a secret API key that’s stored on the server and definitely shouldn’t be shipped down to a user’s browser.

Traditionally, this is done by separating your server and client code, and by setting up something like a REST API or GraphQL API to allow your client to fetch and mutate data on the server. This is fine, but it requires you to write and maintain your code in multiple separate places (client-side code for fetching, server-side functions to run), as well as creating a third thing to manage, which is the API contract between the two.

Leptos is one of a number of modern frameworks that introduce the concept of server functions. Server functions have two key characteristics:

  1. Server functions are co-located with your component code, so that you can organize your work by feature, not by technology. For example, you might have a “dark mode” feature that should persist a user’s dark/light mode preference across sessions, and be applied during server rendering so there’s no flicker. This requires a component that needs to be interactive on the client, and some work to be done on the server (setting a cookie, maybe even storing a user in a database.) Traditionally, this feature might end up being split between two different locations in your code, one in your “frontend” and one in your “backend.” With server functions, you’ll probably just write them both in one dark_mode.rs and forget about it.
  2. Server functions are isomorphic, i.e., they can be called either from the server or the browser. This is done by generating code differently for the two platforms. On the server, a server function simply runs. In the browser, the server function’s body is replaced with a stub that actually makes a fetch request to the server, serializing the arguments into the request and deserializing the return value from the response. But on either end, the function can simply be called: you can create an add_todo function that writes to your database, and simply call it from a click handler on a button in the browser!

Using Server Functions

Actually, I kind of like that example. What would it look like? It’s pretty simple, actually.

// todo.rs

#[server(AddTodo, "/api")]
pub async fn add_todo(title: String) -> Result<(), ServerFnError> {
    let mut conn = db().await?;

    match sqlx::query("INSERT INTO todos (title, completed) VALUES ($1, false)")
        .bind(title)
        .execute(&mut conn)
        .await
    {
        Ok(_row) => Ok(()),
        Err(e) => Err(ServerFnError::ServerError(e.to_string())),
    }
}

#[component]
pub fn BusyButton() -> impl IntoView {
    view! {
        <button on:click=move |_| {
            spawn_local(async {
                add_todo("So much to do!".to_string()).await;
            });
        }>
            "Add Todo"
        </button>
    }
}

You’ll notice a couple things here right away:

  • Server functions can use server-only dependencies, like sqlx, and can access server-only resources, like our database.
  • Server functions are async. Even if they only did synchronous work on the server, the function signature would still need to be async, because calling them from the browser must be asynchronous.
  • Server functions return Result<T, ServerFnError>. Again, even if they only do infallible work on the server, this is true, because ServerFnError’s variants include the various things that can be wrong during the process of making a network request.
  • Server functions can be called from the client. Take a look at our click handler. This is code that will only ever run on the client. But it can call the function add_todo (using spawn_local to run the Future) as if it were an ordinary async function:
move |_| {
	spawn_local(async {
		add_todo("So much to do!".to_string()).await;
	});
}
  • Server functions are top-level functions defined with fn. Unlike event listeners, derived signals, and most everything else in Leptos, they are not closures! As fn calls, they have no access to the reactive state of your app or anything else that is not passed in as an argument. And again, this makes perfect sense: When you make a request to the server, the server doesn’t have access to client state unless you send it explicitly. (Otherwise we’d have to serialize the whole reactive system and send it across the wire with every request, which—while it served classic ASP for a while—is a really bad idea.)
  • Server function arguments and return values both need to be serializable with serde. Again, hopefully this makes sense: while function arguments in general don’t need to be serialized, calling a server function from the browser means serializing the arguments and sending them over HTTP.

There are a few things to note about the way you define a server function, too.

  • Server functions are created by using the #[server] macro to annotate a top-level function, which can be defined anywhere.
  • We provide the macro a type name. The type name is used internally as a container to hold, serialize, and deserialize the arguments.
  • We provide the macro a path. This is a prefix for the path at which we’ll mount a server function handler on our server. (See examples for Actix and Axum.)
  • You’ll need to have serde as a dependency with the derive feature enabled for the macro to work properly. You can easily add it to Cargo.toml with cargo add serde --features=derive.

Server Function URL Prefixes

You can optionally define a specific URL prefix to be used in the definition of the server function. This is done by providing an optional 2nd argument to the #[server] macro. If you don’t specify one, the URL prefix defaults to /api. Here are some examples:

#[server(AddTodo)]         // will use the default URL prefix of `/api`
#[server(AddTodo, "/foo")] // will use the URL prefix of `/foo`

Server Function Encodings

By default, the server function call is a POST request that serializes the arguments as URL-encoded form data in the body of the request. (This means that server functions can be called from HTML forms, which we’ll see in a future chapter.) But there are a few other methods supported. Optionally, we can provide another argument to the #[server] macro to specify an alternate encoding:

#[server(AddTodo, "/api", "Url")]
#[server(AddTodo, "/api", "GetJson")]
#[server(AddTodo, "/api", "Cbor")]
#[server(AddTodo, "/api", "GetCbor")]

The four options use different combinations of HTTP verbs and encoding methods:

Name          | Method | Request     | Response
------------- | ------ | ----------- | --------
Url (default) | POST   | URL encoded | JSON
GetJson       | GET    | URL encoded | JSON
Cbor          | POST   | CBOR        | CBOR
GetCbor       | GET    | URL encoded | CBOR

In other words, you have two choices:

  • GET or POST? This has implications for things like browser or CDN caching; while POST requests should not be cached, GET requests can be.
  • Plain text (arguments sent with URL/form encoding, results sent as JSON) or a binary format (CBOR, encoded as a base64 string)?

But remember: Leptos will handle all the details of this encoding and decoding for you. When you use a server function, it looks just like calling any other asynchronous function!

Why not PUT or DELETE? Why URL/form encoding, and not JSON?

These are reasonable questions. Much of the web is built on REST API patterns that encourage the use of semantic HTTP methods like DELETE to delete an item from a database, and many devs are accustomed to sending data to APIs in the JSON format.

The reason we use POST or GET with URL-encoded data by default is the <form> support. For better or for worse, HTML forms don’t support PUT or DELETE, and they don’t support sending JSON. This means that if you use anything but a GET or POST request with URL-encoded data, it can only work once WASM has loaded. As we’ll see in a later chapter, this isn’t always a great idea.

The CBOR encoding is supported for historical reasons; an earlier version of server functions used a URL encoding that didn’t support nested objects like structs or vectors as server function arguments, which CBOR did. But note that the CBOR forms encounter the same issue as PUT, DELETE, or JSON: they do not degrade gracefully if the WASM version of your app is not available.

Server Functions Endpoint Paths

By default, a unique path will be generated. You can optionally define a specific endpoint path to be used in the URL. This is done by providing an optional 4th argument to the #[server] macro. Leptos will generate the complete path by concatenating the URL prefix (2nd argument) and the endpoint path (4th argument). For example,

#[server(MyServerFnType, "/api", "Url", "hello")]

will generate a server function endpoint at /api/hello that accepts a POST request.

Can I use the same server function endpoint path with multiple encodings?

No. Different server functions must have unique paths. The #[server] macro automatically generates unique paths, but you need to be careful if you choose to specify the complete path manually, as the server looks up server functions by their path.

An Important Note on Security

Server functions are a cool technology, but it’s very important to remember: server functions are not magic; they’re syntax sugar for defining a public API. The body of a server function is never made public; it’s just part of your server binary. But the server function is a publicly accessible API endpoint, and its return value is just a JSON or similar blob. Do not return information from a server function unless it is public, or you've implemented proper security procedures. These procedures might include authenticating incoming requests, ensuring proper encryption, rate limiting access, and more.
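As a sketch of what such a check might look like, you can guard a server function before it returns anything sensitive. The AuthSession type, its current_user field, and the function name here are hypothetical stand-ins for whatever auth state your app provides via context.

#[server(GetSecretNotes, "/api")]
pub async fn get_secret_notes() -> Result<Vec<String>, ServerFnError> {
    // hypothetical auth state, provided via context by your server integration
    let auth = use_context::<AuthSession>()
        .ok_or_else(|| ServerFnError::ServerError("auth missing".into()))?;

    if auth.current_user.is_none() {
        // refuse to return private data to unauthenticated requests
        return Err(ServerFnError::ServerError("not logged in".into()));
    }

    // ... only now read the user’s notes from the database and return them
    Ok(vec![])
}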

Integrating Server Functions with Leptos

So far, everything I’ve said is actually framework agnostic. (And in fact, the Leptos server function crate has been integrated into Dioxus as well!) Server functions are simply a way of defining a function-like RPC call that leans on Web standards like HTTP requests and URL encoding.

But in a way, they also provide the last missing primitive in our story so far. Because a server function is just a plain Rust async function, it integrates perfectly with the async Leptos primitives we discussed earlier. So you can easily integrate your server functions with the rest of your applications:

  • Create resources that call the server function to load data from the server
  • Read these resources under <Suspense/> or <Transition/> to enable streaming SSR and fallback states while data loads.
  • Create actions that call the server function to mutate data on the server

The final section of this book will make this a little more concrete by introducing patterns that use progressively-enhanced HTML forms to run these server actions.

But in the next few chapters, we’ll actually take a look at some of the details of what you might want to do with your server functions, including the best ways to integrate with the powerful extractors provided by the Actix and Axum server frameworks.

Extractors

The server functions we looked at in the last chapter showed how to run code on the server, and integrate it with the user interface you’re rendering in the browser. But they didn’t show you much about how to actually use your server to its full potential.

Server Frameworks

We call Leptos a “full-stack” framework, but “full-stack” is always a misnomer (after all, it never means everything from the browser to your power company.) For us, “full stack” means that your Leptos app can run in the browser, and can run on the server, and can integrate the two, drawing together the unique features available in each; as we’ve seen in the book so far, a button click on the browser can drive a database read on the server, both written in the same Rust module. But Leptos itself doesn’t provide the server (or the database, or the operating system, or the firmware, or the electrical cables...)

Instead, Leptos provides integrations for the two most popular Rust web server frameworks, Actix Web (leptos_actix) and Axum (leptos_axum). We’ve built integrations with each server’s router so that you can simply plug your Leptos app into an existing server with .leptos_routes(), and easily handle server function calls.

If you haven’t seen our Actix and Axum templates, now’s a good time to check them out.

Using Extractors

Both Actix and Axum handlers are built on the same powerful idea of extractors. Extractors “extract” typed data from an HTTP request, allowing you to access server-specific data easily.

Leptos provides extract helper functions to let you use these extractors directly in your server functions, with a convenient syntax very similar to handlers for each framework.

Actix Extractors

The extract function in leptos_actix is an async function that you await inside your server function. The type you ask it for follows the same rules as an Actix extractor: anything that can be extracted from the request in an Actix handler (including a tuple of several extractors at once) can be extracted here. You can then do further async work on that data inside the body of the server function, and return a value out of the server function as usual.

use serde::Deserialize;

#[derive(Deserialize, Debug)]
struct MyQuery {
    foo: String,
}

#[server]
pub async fn actix_extract() -> Result<String, ServerFnError> {
    use actix_web::dev::ConnectionInfo;
    use actix_web::web::Query;
    use leptos_actix::extract;

    let (Query(search), connection): (Query<MyQuery>, ConnectionInfo) = extract().await?;
    Ok(format!("search = {search:?}\nconnection = {connection:?}",))
}

Axum Extractors

The syntax for the leptos_axum::extract function is very similar.

use serde::Deserialize;

#[derive(Deserialize, Debug)]
struct MyQuery {
    foo: String,
}

#[server]
pub async fn axum_extract() -> Result<String, ServerFnError> {
    use axum::{extract::Query, http::Method};
    use leptos_axum::extract;

    let (method, query): (Method, Query<MyQuery>) = extract().await?;

    Ok(format!("{method:?} and {query:?}"))
}

These are relatively simple examples accessing basic data from the server. But you can use extractors to access things like headers, cookies, database connection pools, and more, using the exact same extract() pattern.
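For instance, here’s a sketch of reading a request header with the same pattern; the user_agent function name is just for illustration, and HeaderMap is an ordinary Axum extractor.

#[server]
pub async fn user_agent() -> Result<String, ServerFnError> {
    use axum::http::header::{HeaderMap, USER_AGENT};
    use leptos_axum::extract;

    // extract the full header map, then read one header out of it
    let headers: HeaderMap = extract().await?;
    Ok(headers
        .get(USER_AGENT)
        .and_then(|value| value.to_str().ok())
        .unwrap_or("unknown")
        .to_string())
}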

The Axum extract function only supports extractors for which the state is (). If you need an extractor that uses State, you should use extract_with_state. This requires you to provide the state. You can do this by extending the existing LeptosOptions state using the Axum FromRef pattern, and by providing the state as context during render and server functions with custom handlers.

use axum::extract::FromRef;

/// Derive FromRef to allow multiple items in state, using Axum’s
/// SubStates pattern.
#[derive(FromRef, Debug, Clone)]
pub struct AppState{
    pub leptos_options: LeptosOptions,
    pub pool: SqlitePool
}

Click here for an example of providing context in custom handlers.

Axum State

Axum's typical pattern for dependency injection is to provide a State, which can then be extracted in your route handler. Leptos provides its own method of dependency injection via context. Context can often be used instead of State to provide shared server data (for example, a database connection pool).

let connection_pool = /* some shared state here */;

let app = Router::new()
    .leptos_routes_with_context(
        &app_state,
        routes,
        move || provide_context(connection_pool.clone()),
        App,
    )
    // etc.

This context can then be accessed with a simple use_context::<T>() inside your server functions.
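For example, here’s a minimal sketch of a server function that pulls a sqlx::SqlitePool out of context, assuming a pool was provided as above. The todo_count name and the todos table are hypothetical.

#[server]
pub async fn todo_count() -> Result<i64, ServerFnError> {
    // the pool provided via `provide_context` in the router setup
    let pool = use_context::<sqlx::SqlitePool>()
        .ok_or_else(|| ServerFnError::ServerError("missing connection pool".into()))?;

    let (count,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM todos")
        .fetch_one(&pool)
        .await
        .map_err(|e| ServerFnError::ServerError(e.to_string()))?;

    Ok(count)
}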

If you need to use State in a server function—for example, if you have an existing Axum extractor that requires State—that is also possible using Axum's FromRef pattern and extract_with_state. Essentially you'll need to provide the state both via context and via Axum router state:

#[derive(FromRef, Debug, Clone)]
pub struct MyData {
    pub value: usize,
    pub leptos_options: LeptosOptions,
}

let app_state = MyData {
    value: 42,
    leptos_options,
};

// build our application with a route
let app = Router::new()
    .leptos_routes_with_context(
        &app_state,
        routes,
        {
            let app_state = app_state.clone();
            move || provide_context(app_state.clone())
        },
        App,
    )
    .fallback(file_and_error_handler)
    .with_state(app_state);

// ...

#[server]
pub async fn uses_state() -> Result<(), ServerFnError> {
    use leptos_axum::extract_with_state;

    let state = expect_context::<MyData>();
    let SomeStateExtractor(data) = extract_with_state(&state).await?;
    // TODO: do something with `data`
    Ok(())
}

A Note about Data-Loading Patterns

Because Actix and (especially) Axum are built on the idea of a single round-trip HTTP request and response, you typically run extractors near the “top” of your application (i.e., before you start rendering) and use the extracted data to determine how that should be rendered. Before you render a <button>, you load all the data your app could need. And any given route handler needs to know all the data that will need to be extracted by that route.

But Leptos integrates both the client and the server, and it’s important to be able to refresh small pieces of your UI with new data from the server without forcing a full reload of all the data. So Leptos likes to push data loading “down” in your application, as far towards the leaves of your user interface as possible. When you click a <button>, it can refresh just the data it needs. This is exactly what server functions are for: they give you granular access to data to be loaded and reloaded.

The extract() functions let you combine both models by using extractors in your server functions. You get access to the full power of route extractors, while decentralizing knowledge of what needs to be extracted down to your individual components. This makes it easier to refactor and reorganize routes: you don’t need to specify all the data a route needs up front.

Responses and Redirects

Extractors provide an easy way to access request data inside server functions. Leptos also provides a way to modify the HTTP response, using the ResponseOptions type (see docs for Actix or Axum) and the redirect helper function (see docs for Actix or Axum).

ResponseOptions

ResponseOptions is provided via context during the initial server rendering response and during any subsequent server function call. It allows you to easily set the status code for the HTTP response, or to add headers to the HTTP response, e.g., to set cookies.

#[server(TeaAndCookies)]
pub async fn tea_and_cookies() -> Result<(), ServerFnError> {
    use actix_web::{
        cookie::Cookie,
        http::{header, header::HeaderValue, StatusCode},
    };
    use leptos_actix::ResponseOptions;

    // pull ResponseOptions from context
    let response = expect_context::<ResponseOptions>();

    // set the HTTP status code
    response.set_status(StatusCode::IM_A_TEAPOT);

    // set a cookie in the HTTP response
    let cookie = Cookie::build("biscuits", "yes").finish();
    if let Ok(cookie) = HeaderValue::from_str(&cookie.to_string()) {
        response.insert_header(header::SET_COOKIE, cookie);
    }

    Ok(())
}

redirect

One common modification to an HTTP response is to redirect to another page. The Actix and Axum integrations provide a redirect function to make this easy to do. redirect simply sets an HTTP status code of 302 Found and sets the Location header.

Here’s a simplified example from our session_auth_axum example.

#[server(Login, "/api")]
pub async fn login(
    username: String,
    password: String,
    remember: Option<String>,
) -> Result<(), ServerFnError> {
    // pull the DB pool and auth provider from context
    let pool = pool()?;
    let auth = auth()?;

    // check whether the user exists
    let user: User = User::get_from_username(username, &pool)
        .await
        .ok_or_else(|| {
            ServerFnError::ServerError("User does not exist.".into())
        })?;

    // check whether the user has provided the correct password
    match verify(password, &user.password)? {
        // if the password is correct...
        true => {
            // log the user in
            auth.login_user(user.id);
            auth.remember_user(remember.is_some());

            // and redirect to the home page
            leptos_axum::redirect("/");
            Ok(())
        }
        // if not, return an error
        false => Err(ServerFnError::ServerError(
            "Password does not match.".to_string(),
        )),
    }
}

This server function can then be used from your application. This redirect works well with the progressively-enhanced <ActionForm/> component: without JS/WASM, the server response will redirect because of the status code and header. With JS/WASM, the <ActionForm/> will detect the redirect in the server function response, and use client-side navigation to redirect to the new page.

Progressive Enhancement (and Graceful Degradation)

I’ve been driving around Boston for about fifteen years. If you don’t know Boston, let me tell you: Massachusetts has some of the most aggressive drivers (and pedestrians!) in the world. I’ve learned to practice what’s sometimes called “defensive driving”: assuming that someone’s about to swerve in front of you at an intersection when you have the right of way, preparing for a pedestrian to cross into the street at any moment, and driving accordingly.

“Progressive enhancement” is the “defensive driving” of web design. Or really, that’s “graceful degradation,” although they’re two sides of the same coin, or the same process, from two different directions.

Progressive enhancement, in this context, means beginning with a simple HTML site or application that works for any user who arrives at your page, and gradually enhancing it with layers of additional features: CSS for styling, JavaScript for interactivity, WebAssembly for Rust-powered interactivity; using particular Web APIs for a richer experience if they’re available and as needed.

Graceful degradation means handling failure gracefully when parts of that stack of enhancement aren’t available. Here are some sources of failure your users might encounter in your app:

  • Their browser doesn’t support WebAssembly because it needs to be updated.
  • Their browser can’t support WebAssembly because browser updates are limited to newer OS versions, which can’t be installed on the device. (Looking at you, Apple.)
  • They have WASM turned off for security or privacy reasons.
  • They have JavaScript turned off for security or privacy reasons.
  • JavaScript isn’t supported on their device (for example, some accessibility devices only support HTML browsing)
  • The JavaScript (or WASM) never arrived at their device because they walked outside and lost WiFi.
  • They stepped onto a subway car after loading the initial page and subsequent navigations can’t load data.
  • ... and so on.

How much of your app still works if one of these holds true? Two of them? Three?

If the answer is something like “95%... okay, then 90%... okay, then 75%,” that’s graceful degradation. If the answer is “my app shows a blank screen unless everything works correctly,” that’s... rapid unscheduled disassembly.

Graceful degradation is especially important for WASM apps, because WASM is the newest and least-likely-to-be-supported of the four languages that run in the browser (HTML, CSS, JS, WASM).

Luckily, we’ve got some tools to help.

Defensive Design

There are a few practices that can help your apps degrade more gracefully:

  1. Server-side rendering. Without SSR, your app simply doesn’t work without both JS and WASM loading. In some cases this may be appropriate (think internal apps gated behind a login) but in others it’s simply broken.
  2. Native HTML elements. Use HTML elements that do the things that you want, without additional code: <a> for navigation (including to hashes within the page), <details> for an accordion, <form> to persist information in the URL, etc.
  3. URL-driven state. The more of your global state is stored in the URL (as a route param or part of the query string), the more of the page can be generated during server rendering and updated by an <a> or a <form>, which means that not only navigations but state changes can work without JS/WASM. (See the sketch after this list.)
  4. SsrMode::PartiallyBlocked or SsrMode::InOrder. Out-of-order streaming requires a small amount of inline JS, but can fail if 1) the connection is broken halfway through the response or 2) the client’s device doesn’t support JS. Async streaming will give a complete HTML page, but only after all resources load. In-order streaming begins showing pieces of the page sooner, in top-down order. “Partially-blocked” SSR builds on out-of-order streaming by replacing <Suspense/> fragments that read from blocking resources on the server. This adds marginally to the initial response time (because of the O(n) string replacement work), in exchange for a more complete initial HTML response. This can be a good choice for situations in which there’s a clear distinction between “more important” and “less important” content, e.g., blog post vs. comments, or product info vs. reviews. If you choose to block on all the content, you’ve essentially recreated async rendering.
  5. Leaning on <form>s. There’s been a bit of a <form> renaissance recently, and it’s no surprise. The ability of a <form> to manage complicated POST or GET requests in an easily-enhanced way makes it a powerful tool for graceful degradation. The example in the <Form/> chapter, for example, would work fine with no JS/WASM: because it uses a <form method="GET"> to persist state in the URL, it works with pure HTML by making normal HTTP requests and then progressively enhances to use client-side navigations instead.
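To make points 3 and 5 a little more concrete, here’s a minimal sketch of URL-driven state using a plain GET <Form/>, along the lines of the example in the <Form/> chapter. The Search component and the q query parameter are illustrative.

use leptos::*;
use leptos_router::*;

#[component]
fn Search() -> impl IntoView {
    // the query string is the source of truth for this piece of state
    let query = use_query_map();
    let q = move || query.with(|q| q.get("q").cloned().unwrap_or_default());

    view! {
        // a plain GET form updates the URL (and therefore the state),
        // so this works before JS/WASM loads and enhances afterwards
        <Form method="GET" action="">
            <input type="search" name="q" value=q/>
            <input type="submit" value="Search"/>
        </Form>
        <p>"Searching for: " {q}</p>
    }
}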

There’s one final feature of the framework that we haven’t seen yet, and which builds on this characteristic of forms to build powerful applications: the <ActionForm/>.

<ActionForm/>

<ActionForm/> is a specialized <Form/> that takes a server action, and automatically dispatches it on form submission. This allows you to call a server function directly from a <form>, even without JS/WASM.

The process is simple:

  1. Define a server function using the #[server] macro (see Server Functions.)
  2. Create an action using create_server_action, specifying the type of the server function you’ve defined.
  3. Create an <ActionForm/>, providing the server action in the action prop.
  4. Pass the named arguments to the server function as form fields with the same names.

Note: <ActionForm/> only works with the default URL-encoded POST encoding for server functions, to ensure graceful degradation/correct behavior as an HTML form.

#[server(AddTodo, "/api")]
pub async fn add_todo(title: String) -> Result<(), ServerFnError> {
    todo!()
}

#[component]
fn AddTodo() -> impl IntoView {
    let add_todo = create_server_action::<AddTodo>();
    // holds the latest *returned* value from the server
    let value = add_todo.value();
    // check if the server has returned an error
    let has_error = move || value.with(|val| matches!(val, Some(Err(_))));

    view! {
        <ActionForm action=add_todo>
            <label>
                "Add a Todo"
                // `title` matches the `title` argument to `add_todo`
                <input type="text" name="title"/>
            </label>
            <input type="submit" value="Add"/>
        </ActionForm>
    }
}

It’s really that easy. With JS/WASM, your form will submit without a page reload, storing its most recent submission in the .input() signal of the action, its pending status in .pending(), and so on. (See the Action docs for a refresher, if you need.) Without JS/WASM, your form will submit with a page reload. If you call a redirect function (from leptos_axum or leptos_actix) it will redirect to the correct page. By default, it will redirect back to the page you’re currently on. The power of HTML, HTTP, and isomorphic rendering mean that your <ActionForm/> simply works, even with no JS/WASM.

Client-Side Validation

Because the <ActionForm/> is just a <form>, it fires a submit event. You can use either HTML validation, or your own client-side validation logic in an on:submit. Just call ev.prevent_default() to prevent submission.

The FromFormData trait can be helpful here, for attempting to parse your server function’s data type from the submitted form.

let on_submit = move |ev| {
    let data = AddTodo::from_event(&ev);
    // silly example of validation: if the todo is "nope!", nope it
    if data.is_err() || data.unwrap().title == "nope!" {
        // ev.prevent_default() will prevent form submission
        ev.prevent_default();
    }
};
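You can then attach the handler to the form itself. This sketch assumes your Leptos version forwards on: listeners on a component to the element it renders, and it reuses the add_todo action and on_submit handler from the snippets above.

view! {
    // <ActionForm/> still renders a plain <form>, so the submit
    // event fires as usual and can be intercepted for validation
    <ActionForm action=add_todo on:submit=on_submit>
        <label>
            "Add a Todo"
            <input type="text" name="title"/>
        </label>
        <input type="submit" value="Add"/>
    </ActionForm>
}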

Complex Inputs

Server function arguments that are structs with nested serializable fields should use the indexing notation supported by serde_qs.

use leptos::*;
use leptos_router::*;

#[derive(serde::Serialize, serde::Deserialize, Debug, Clone)]
struct HeftyData {
    first_name: String,
    last_name: String,
}

#[component]
fn ComplexInput() -> impl IntoView {
    let submit = Action::<VeryImportantFn, _>::server();

    view! {
      <ActionForm action=submit>
        <input type="text" name="hefty_arg[first_name]" value="leptos"/>
        <input
          type="text"
          name="hefty_arg[last_name]"
          value="closures-everywhere"
        />
        <input type="submit"/>
      </ActionForm>
    }
}

#[server]
async fn very_important_fn(
    hefty_arg: HeftyData,
) -> Result<(), ServerFnError> {
    assert_eq!(hefty_arg.first_name.as_str(), "leptos");
    assert_eq!(hefty_arg.last_name.as_str(), "closures-everywhere");
    Ok(())
}

Deployment

There are as many ways to deploy a web application as there are developers, let alone applications. But there are a couple useful tips to keep in mind when deploying an app.

General Advice

  1. Remember: Always deploy Rust apps built in --release mode, not debug mode. This has a huge effect on both performance and binary size.
  2. Test locally in release mode as well. The framework applies certain optimizations in release mode that it does not apply in debug mode, so it’s possible for bugs to surface at this point. (If your app behaves differently or you do encounter a bug, it’s likely a framework-level bug and you should open a GitHub issue with a reproduction.)
  3. See the chapter on "Optimizing WASM Binary Size" for additional tips and tricks to further improve the time-to-interactive metric for your WASM app on first load.

We asked users to submit their deployment setups to help with this chapter. I’ll quote from them below, but you can read the full thread here.

Deploying a Client-Side-Rendered App

If you’ve been building an app that only uses client-side rendering, working with Trunk as a dev server and build tool, the process is quite easy.

trunk build --release

trunk build will create a number of build artifacts in a dist/ directory. Publishing dist somewhere online should be all you need to deploy your app. This should work very similarly to deploying any JavaScript application.

We've created several example repositories which show how to set up and deploy a Leptos CSR app to various hosting services.

Note: Leptos does not endorse the use of any particular hosting service - feel free to use any service that supports static site deploys.

Examples:

Github Pages

Deploying a Leptos CSR app to Github pages is a simple affair. First, go to your Github repo's settings and click on "Pages" in the left side menu. In the "Build and deployment" section of the page, change the "source" to "Github Actions". Then copy the following into a file such as .github/workflows/gh-pages-deploy.yml

Example

name: Release to Github Pages

on:
  push:
    branches: [main]
  workflow_dispatch:

permissions:
  contents: write # for committing to gh-pages branch.
  pages: write
  id-token: write

# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
  group: "pages"
  cancel-in-progress: false

jobs:
  Github-Pages-Release:

    timeout-minutes: 10

    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4 # repo checkout

      # Install Rust Nightly Toolchain, with Clippy & Rustfmt
      - name: Install nightly Rust
        uses: dtolnay/rust-toolchain@nightly
        with:
          components: clippy, rustfmt

      - name: Add WASM target
        run: rustup target add wasm32-unknown-unknown

      - name: lint
        run: cargo clippy & cargo fmt


      # If using tailwind...
      # - name: Download and install tailwindcss binary
      #   run: npm install -D tailwindcss && npx tailwindcss -i <INPUT/PATH.css> -o <OUTPUT/PATH.css>  # run tailwind


      - name: Download and install Trunk binary
        run: wget -qO- https://github.com/trunk-rs/trunk/releases/download/v0.18.4/trunk-x86_64-unknown-linux-gnu.tar.gz | tar -xzf-

      - name: Build with Trunk
        # "${GITHUB_REPOSITORY#*/}" evaluates into the name of the repository
        # using --public-url something will allow trunk to modify all the href paths like from favicon.ico to repo_name/favicon.ico .
        # this is necessary for github pages where the site is deployed to username.github.io/repo_name and all files must be requested
        # relatively as favicon.ico. if we skip public-url option, the href paths will instead request username.github.io/favicon.ico which
        # will obviously return error 404 not found.
        run: ./trunk build --release --public-url "${GITHUB_REPOSITORY#*/}"


      # Deploy to gh-pages branch
      # - name: Deploy 🚀
      #   uses: JamesIves/github-pages-deploy-action@v4
      #   with:
      #     folder: dist


      # Deploy with Github Static Pages

      - name: Setup Pages
        uses: actions/configure-pages@v4
        with:
          enablement: true
          # token:

      - name: Upload artifact
        uses: actions/upload-pages-artifact@v2
        with:
          # Upload dist dir
          path: './dist'

      - name: Deploy to GitHub Pages 🚀
        id: deployment
        uses: actions/deploy-pages@v3

For more on deploying to Github Pages see the example repo here

Vercel

Step 1: Set Up Vercel

In the Vercel Web UI...

  1. Create a new project
  2. Ensure
    • The "Build Command" is left empty with Override on
    • The "Output Directory" is changed to dist (which is the default output directory for Trunk builds) and the Override is on

Step 2: Add Vercel Credentials for GitHub Actions

Note: Both the preview and deploy actions will need your Vercel credentials setup in GitHub secrets

  1. Retrieve your Vercel Access Token by going to "Account Settings" > "Tokens" and creating a new token - save the token to use in sub-step 5, below.

  2. Install the Vercel CLI using the npm i -g vercel command, then run vercel login to log in to your account.

  3. Inside your folder, run vercel link to create a new Vercel project; in the CLI, you will be asked to 'Link to an existing project?' - answer yes, then enter the name you created in step 1. A new .vercel folder will be created for you.

  4. Inside the generated .vercel folder, open the project.json file and save the "projectId" and "orgId" for the next step.

  5. Inside GitHub, go the repo's "Settings" > "Secrets and Variables" > "Actions" and add the following as Repository secrets:

    • save your Vercel Access Token (from sub-step 1) as the VERCEL_TOKEN secret
    • from the .vercel/project.json add "projectId" as VERCEL_PROJECT_ID
    • from the .vercel/project.json add "orgId" as VERCEL_ORG_ID

For full instructions see "How can I use Github Actions with Vercel"

Step 3: Add Github Action Scripts

Finally, you're ready to simply copy and paste the two files - one for deployment, one for PR previews - from below or from the example repo's .github/workflows/ folder into your own github workflows folder - then, on your next commit or PR, deploys will occur automatically.

Production deployment script: vercel_deploy.yml

Example

name: Release to Vercel

on:
  push:
    branches:
      - main

env:
  CARGO_TERM_COLOR: always
  VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
  VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID }}

jobs:
  Vercel-Production-Deployment:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - name: git-checkout
        uses: actions/checkout@v3

      - uses: dtolnay/rust-toolchain@nightly
        with:
          components: clippy, rustfmt
      - uses: Swatinem/rust-cache@v2
      - name: Setup Rust
        run: |
          rustup target add wasm32-unknown-unknown
          cargo clippy
          cargo fmt --check

      - name: Download and install Trunk binary
        run: wget -qO- https://github.com/trunk-rs/trunk/releases/download/v0.18.2/trunk-x86_64-unknown-linux-gnu.tar.gz | tar -xzf-

      - name: Build with Trunk
        run: ./trunk build --release

      - name: Install Vercel CLI
        run: npm install --global vercel@latest

      - name: Pull Vercel Environment Information
        run: vercel pull --yes --environment=production --token=${{ secrets.VERCEL_TOKEN }}

      - name: Deploy to Vercel & Display URL
        id: deployment
        working-directory: ./dist
        run: |
          vercel deploy --prod --token=${{ secrets.VERCEL_TOKEN }} >> $GITHUB_STEP_SUMMARY
          echo $GITHUB_STEP_SUMMARY

Preview deployments script: vercel_preview.yml

Example

# For more info re: vercel action see:
# https://github.com/amondnet/vercel-action

name: Leptos CSR Vercel Preview

on:
  pull_request:
    branches: [ "main" ]

  workflow_dispatch:

env:
  CARGO_TERM_COLOR: always
  VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
  VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID }}

jobs:
  fmt:
    name: Rustfmt
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@nightly
        with:
          components: rustfmt
      - name: Enforce formatting
        run: cargo fmt --check

  clippy:
    name: Clippy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@nightly
        with:
          components: clippy
      - uses: Swatinem/rust-cache@v2
      - name: Linting
        run: cargo clippy -- -D warnings

  test:
    name: Test
    runs-on: ubuntu-latest
    needs: [fmt, clippy]
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@nightly
      - uses: Swatinem/rust-cache@v2
      - name: Run tests
        run: cargo test

  build-and-preview-deploy:
    runs-on: ubuntu-latest
    name: Build and Preview

    needs: [test, clippy, fmt]

    permissions:
      pull-requests: write

    environment:
      name: preview
      url: ${{ steps.preview.outputs.preview-url }}

    steps:
      - name: git-checkout
        uses: actions/checkout@v4

      - uses: dtolnay/rust-toolchain@nightly
      - uses: Swatinem/rust-cache@v2
      - name: Build
        run: rustup target add wasm32-unknown-unknown

      - name: Download and install Trunk binary
        run: wget -qO- https://github.com/trunk-rs/trunk/releases/download/v0.18.2/trunk-x86_64-unknown-linux-gnu.tar.gz | tar -xzf-

      - name: Build with Trunk
        run: ./trunk build --release

      - name: Preview Deploy
        id: preview
        uses: amondnet/[email protected]
        with:
          vercel-token: ${{ secrets.VERCEL_TOKEN }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
          vercel-org-id: ${{ secrets.VERCEL_ORG_ID }}
          vercel-project-id: ${{ secrets.VERCEL_PROJECT_ID }}
          github-comment: true
          working-directory: ./dist

      - name: Display Deployed URL
        run: |
          echo "Deployed app URL: ${{ steps.preview.outputs.preview-url }}" >> $GITHUB_STEP_SUMMARY

See the example repo here for more.

Spin - Serverless WebAssembly

Another option is using a serverless platform such as Spin. Although Spin is open source and you can run it on your own infrastructure (e.g., inside Kubernetes), the easiest way to get started with Spin in production is to use the Fermyon Cloud.

Start by installing the Spin CLI using the instructions here, and creating a Github repo for your Leptos CSR project, if you haven't done so already.

  1. Open "Fermyon Cloud" > "User Settings". If you’re not logged in, choose the Login With GitHub button.

  2. In the “Personal Access Tokens” section, choose “Add a Token”. Enter the name “gh_actions” and click “Create Token”.

  3. Fermyon Cloud displays the token; click the copy button to copy it to your clipboard.

  4. Go into your Github repo and open "Settings" > "Secrets and Variables" > "Actions" and add the Fermyon cloud token to "Repository secrets" using the variable name "FERMYON_CLOUD_TOKEN"

  5. Copy and paste the following Github Actions scripts (below) into your .github/workflows/<SCRIPT_NAME>.yml files

  6. With the 'preview' and 'deploy' scripts active, Github Actions will now generate previews on pull requests & deploy automatically on updates to your 'main' branch.

Production deployment script: spin_deploy.yml

Example

# For setup instructions needed for Fermyon Cloud, see:
# https://developer.fermyon.com/cloud/github-actions

# For reference, see:
# https://developer.fermyon.com/cloud/changelog/gh-actions-spin-deploy

# For the Fermyon gh actions themselves, see:
# https://github.com/fermyon/actions

name: Release to Spin Cloud

on:
  push:
    branches: [main]
  workflow_dispatch:

permissions:
  contents: read
  id-token: write

# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
  group: "spin"
  cancel-in-progress: false

jobs:
  Spin-Release:

    timeout-minutes: 10

    environment:
      name: production
      url: ${{ steps.deployment.outputs.app-url }}

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4 # repo checkout

      # Install Rust Nightly Toolchain, with Clippy & Rustfmt
      - name: Install nightly Rust
        uses: dtolnay/rust-toolchain@nightly
        with:
          components: clippy, rustfmt

      - name: Add WASM & WASI targets
        run: rustup target add wasm32-unknown-unknown && rustup target add wasm32-wasi

      - name: lint
        run: cargo clippy & cargo fmt


      # If using tailwind...
      # - name: Download and install tailwindcss binary
      #   run: npm install -D tailwindcss && npx tailwindcss -i <INPUT/PATH.css> -o <OUTPUT/PATH.css>  # run tailwind


      - name: Download and install Trunk binary
        run: wget -qO- https://github.com/trunk-rs/trunk/releases/download/v0.18.2/trunk-x86_64-unknown-linux-gnu.tar.gz | tar -xzf-


      - name: Build with Trunk
        run: ./trunk build --release


      # Install Spin CLI & Deploy

      - name: Setup Spin
        uses: fermyon/actions/spin/setup@v1
        # with:
        # plugins:


      - name: Build and deploy
        id: deployment
        uses: fermyon/actions/spin/deploy@v1
        with:
          fermyon_token: ${{ secrets.FERMYON_CLOUD_TOKEN }}
          # key_values: |-
          #   abc=xyz
          #   foo=bar
          # variables: |-
          #   password=${{ secrets.SECURE_PASSWORD }}
          #   apikey=${{ secrets.API_KEY }}

      # Create an explicit message to display the URL of the deployed app, as well as in the job graph
      - name: Deployed URL
        run: |
          echo "Deployed app URL: ${{ steps.deployment.outputs.app-url }}" >> $GITHUB_STEP_SUMMARY

Preview deployment script: spin_preview.yml

Example

# For setup instructions needed for Fermyon Cloud, see:
# https://developer.fermyon.com/cloud/github-actions


# For the Fermyon gh actions themselves, see:
# https://github.com/fermyon/actions

# Specifically:
# https://github.com/fermyon/actions?tab=readme-ov-file#deploy-preview-of-spin-app-to-fermyon-cloud---fermyonactionsspinpreviewv1

name: Preview on Spin Cloud

on:
  pull_request:
    branches: ["main", "v*"]
    types: ['opened', 'synchronize', 'reopened', 'closed']
  workflow_dispatch:

permissions:
  contents: read
  pull-requests: write

# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
  group: "spin"
  cancel-in-progress: false

jobs:
  Spin-Preview:

    timeout-minutes: 10

    environment:
      name: preview
      url: ${{ steps.preview.outputs.app-url }}

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4 # repo checkout

      # Install Rust Nightly Toolchain, with Clippy & Rustfmt
      - name: Install nightly Rust
        uses: dtolnay/rust-toolchain@nightly
        with:
          components: clippy, rustfmt

      - name: Add WASM & WASI targets
        run: rustup target add wasm32-unknown-unknown && rustup target add wasm32-wasi

      - name: lint
        run: cargo clippy & cargo fmt


      # If using tailwind...
      # - name: Download and install tailwindcss binary
      #   run: npm install -D tailwindcss && npx tailwindcss -i <INPUT/PATH.css> -o <OUTPUT/PATH.css>  # run tailwind


      - name: Download and install Trunk binary
        run: wget -qO- https://github.com/trunk-rs/trunk/releases/download/v0.18.2/trunk-x86_64-unknown-linux-gnu.tar.gz | tar -xzf-


      - name: Build with Trunk
        run: ./trunk build --release


      # Install Spin CLI & Deploy

      - name: Setup Spin
        uses: fermyon/actions/spin/setup@v1
        # with:
        # plugins:


      - name: Build and preview
        id: preview
        uses: fermyon/actions/spin/preview@v1
        with:
          fermyon_token: ${{ secrets.FERMYON_CLOUD_TOKEN }}
          github_token: ${{ secrets.GITHUB_TOKEN }}
          undeploy: ${{ github.event.pull_request && github.event.action == 'closed' }}
          # key_values: |-
          #   abc=xyz
          #   foo=bar
          # variables: |-
          #   password=${{ secrets.SECURE_PASSWORD }}
          #   apikey=${{ secrets.API_KEY }}


      - name: Display Deployed URL
        run: |
          echo "Deployed app URL: ${{ steps.preview.outputs.app-url }}" >> $GITHUB_STEP_SUMMARY

See the example repo here.

Deploying a Full-Stack SSR App

It's possible to deploy Leptos fullstack, SSR apps to any number of server or container hosting services. The simplest way to get a Leptos SSR app into production might be to use a VPS service and run Leptos natively in a VM (see here for more details). Alternatively, you could containerize your Leptos app and run it in Podman or Docker on any colocated or cloud server.

There are a multitude of different deployment setups and hosting services, and in general, Leptos itself is agnostic to the deployment setup you use. With this diversity of deployment targets in mind, on this page we will go over creating a Containerfile, cloud deployments, deploying to serverless runtimes, and platforms that are working on Leptos support.

Note: Leptos does not endorse the use of any particular method of deployment or hosting service.

Creating a Containerfile

The most popular way for people to deploy full-stack apps built with cargo-leptos is to use a cloud hosting service that supports deployment via a Podman or Docker build. Here’s a sample Containerfile / Dockerfile, which is based on the one we use to deploy the Leptos website.

Debian

# Get started with a build env with Rust nightly
FROM rustlang/rust:nightly-bullseye as builder

# If you’re using stable, use this instead
# FROM rust:1.74-bullseye as builder

# Install cargo-binstall, which makes it easier to install other
# cargo extensions like cargo-leptos
RUN wget https://github.com/cargo-bins/cargo-binstall/releases/latest/download/cargo-binstall-x86_64-unknown-linux-musl.tgz
RUN tar -xvf cargo-binstall-x86_64-unknown-linux-musl.tgz
RUN cp cargo-binstall /usr/local/cargo/bin

# Install cargo-leptos
RUN cargo binstall cargo-leptos -y

# Add the WASM target
RUN rustup target add wasm32-unknown-unknown

# Make an /app dir, which everything will eventually live in
RUN mkdir -p /app
WORKDIR /app
COPY . .

# Build the app
RUN cargo leptos build --release -vv

FROM debian:bookworm-slim as runtime
WORKDIR /app
RUN apt-get update -y \
  && apt-get install -y --no-install-recommends openssl ca-certificates \
  && apt-get autoremove -y \
  && apt-get clean -y \
  && rm -rf /var/lib/apt/lists/*

# -- NB: update binary name from "leptos_start" to match your app name in Cargo.toml --
# Copy the server binary to the /app directory
COPY --from=builder /app/target/release/leptos_start /app/

# /target/site contains our JS/WASM/CSS, etc.
COPY --from=builder /app/target/site /app/site

# Copy Cargo.toml if it’s needed at runtime
COPY --from=builder /app/Cargo.toml /app/

# Set any required env variables and
ENV RUST_LOG="info"
ENV LEPTOS_SITE_ADDR="0.0.0.0:8080"
ENV LEPTOS_SITE_ROOT="site"
EXPOSE 8080

# -- NB: update binary name from "leptos_start" to match your app name in Cargo.toml --
# Run the server
CMD ["/app/leptos_start"]

Alpine

# Get started with a build env with Rust nightly
FROM rustlang/rust:nightly-alpine as builder

RUN apk update && \
    apk add --no-cache bash curl npm libc-dev binaryen

RUN npm install -g sass

RUN curl --proto '=https' --tlsv1.2 -LsSf https://github.com/leptos-rs/cargo-leptos/releases/latest/download/cargo-leptos-installer.sh | sh

# Add the WASM target
RUN rustup target add wasm32-unknown-unknown

WORKDIR /work
COPY . .

RUN cargo leptos build --release -vv

FROM rustlang/rust:nightly-alpine as runner

WORKDIR /app

COPY --from=builder /work/target/release/leptos_start /app/
COPY --from=builder /work/target/site /app/site
COPY --from=builder /work/Cargo.toml /app/

EXPOSE $PORT
ENV LEPTOS_SITE_ROOT=./site

CMD ["/app/leptos_start"]

Read more: gnu and musl build files for Leptos apps.

Cloud Deployments

Deploy to Fly.io

One option for deploying your Leptos SSR app is to use a service like Fly.io, which takes a Dockerfile definition of your Leptos app and runs it in a quick-starting micro-VM; Fly also offers a variety of storage options and managed DBs to use with your projects. The following example will show how to deploy a simple Leptos starter app, just to get you up and going; see here for more about working with storage options on Fly.io if and when required.

First, create a Dockerfile in the root of your application and fill it in with the suggested contents (above); make sure to update the binary names in the Dockerfile example to the name of your own application, and make other adjustments as necessary.

Also, ensure you have the flyctl CLI tool installed, and have an account set up at Fly.io. To install flyctl on MacOS, Linux, or Windows WSL, run:

curl -L https://fly.io/install.sh | sh

If you have issues, or for installing to other platforms see the full instructions here

Then login to Fly.io

fly auth login

and manually launch your app using the command

fly launch

The flyctl CLI tool will walk you through the process of deploying your app to Fly.io.

Note

By default, Fly.io will auto-stop machines that don't have traffic coming to them after a certain period of time. Although Fly.io's lightweight VMs start up quickly, if you want to minimize the latency of your Leptos app and ensure it's always swift to respond, go into the generated fly.toml file and change the min_machines_running to 1 from the default of 0.

See this page in the Fly.io docs for more details.

If you would prefer to use Github Actions to manage your deployments, you will need to create a new access token via the Fly.io web UI.

Go to "Account" > "Access Tokens" and create a token named something like "github_actions", then add the token to your Github repo's secrets by going into your project's Github repo, then clicking "Settings" > "Secrets and Variables" > "Actions" and creating a "New repository secret" with the name "FLY_API_TOKEN".

To generate a fly.toml config file for deployment to Fly.io, you must first run the following from within the project source directory

fly launch --no-deploy

to create a new Fly app and register it with the service. Git commit your new fly.toml file.

To set up the Github Actions deployment workflow, copy the following into a .github/workflows/fly_deploy.yml file:

Example

# For more details, see: https://fly.io/docs/app-guides/continuous-deployment-with-github-actions/

name: Deploy to Fly.io
on:
  push:
    branches:
      - main
jobs:
  deploy:
    name: Deploy app
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: superfly/flyctl-actions/setup-flyctl@master
      - name: Deploy to fly
        id: deployment
        run: |
          flyctl deploy --remote-only | tail -n 1 >> $GITHUB_STEP_SUMMARY
        env:
          FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}

On the next commit to your Github main branch, your project will automatically deploy to Fly.io.

See the example repo here.

Railway

Another provider for cloud deployments is Railway. Railway integrates with GitHub to automatically deploy your code.

There is an opinionated community template that gets you started quickly:

Deploy on Railway

The template has renovate setup to keep dependencies up to date and supports GitHub Actions to test your code before a deploy happens.

Railway has a free tier that does not require a credit card, and with how little resources Leptos needs that free tier should last a long time.

See the example repo here.

Deploy to Serverless Runtimes

Leptos supports deploying to FaaS (Function as a Service) or 'serverless' runtimes such as AWS Lambda as well as WinterCG-compatible JS runtimes such as Deno and Cloudflare. Just be aware that serverless environments do place some restrictions on the functionality available to your SSR app when compared with VM or container type deployments (see notes, below).

AWS Lambda

With a little help from the Cargo Lambda tool, Leptos SSR apps can be deployed to AWS Lambda. A starter template repo using Axum as the server is available at leptos-rs/start-aws; the instructions there can be adapted for you to use a Leptos+Actix-web server as well. The starter repo includes a Github Actions script for CI/CD, as well as instructions for setting up your Lambda functions and getting the necessary credentials for cloud deployment.

However, please keep in mind that some native server functionality does not work with FaaS services like Lambda because the environment is not necessarily consistent from one request to the next. In particular, the 'start-aws' docs state that "since AWS Lambda is a serverless platform, you'll need to be more careful about how you manage long-lived state. Writing to disk or using a state extractor will not work reliably across requests. Instead, you'll need a database or other microservices that you can query from the Lambda function."

The other factor to bear in mind is the 'cold-start' time for functions as a service - depending on your use case and the FaaS platform you use, this may or may not meet your latency requirements; you may need to keep one function running at all times to optimize the speed of your requests.

Deno & Cloudflare Workers

Currently, Leptos-Axum supports running in Javascript-hosted WebAssembly runtimes such as Deno, Cloudflare Workers, etc. This option requires some changes to the setup of your source code (for example, in Cargo.toml you must define your app using crate-type = ["cdylib"] and the "wasm" feature must be enabled for leptos_axum). The Leptos HackerNews JS-fetch example demonstrates the required modifications and shows how to run an app in the Deno runtime. Additionally, the leptos_axum crate docs are a helpful reference when setting up your own Cargo.toml file for JS-hosted WASM runtimes.

While the initial setup for JS-hosted WASM runtimes is not onerous, the more important restriction to keep in mind is that since your app will be compiled to WebAssembly (wasm32-unknown-unknown) on the server as well as the client, you must ensure that the crates you use in your app are all WASM-compatible; this may or may not be a deal-breaker depending on your app's requirements, as not all crates in the Rust ecosystem have WASM support.

If you're willing to live with the limitations of WASM server-side, the best place to get started right now is by checking out the example of running Leptos with Deno in the official Leptos Github repo.

Platforms Working on Leptos Support

Deploy to Spin Serverless WASI (with Leptos SSR)

WebAssembly on the server has been gaining steam lately, and the developers of the open source serverless WebAssembly framework Spin are working on natively supporting Leptos. While the Leptos-Spin SSR integration is still in its early stages, there is a working example you may wish to try out.

The full set of instructions to get Leptos SSR & Spin working together are available as a post on the Fermyon blog, or if you want to skip the article and just start playing around with a working starter repo, see here.

Deploy to Shuttle.rs

Several Leptos users have asked about the possibility of using the Rust-friendly Shuttle.rs service to deploy Leptos apps. Unfortunately, Leptos is not officially supported by the Shuttle.rs service at the moment.

However, the folks at Shuttle.rs are committed to getting Leptos support in the future; if you would like to keep up-to-date on the status of that work, keep an eye on this Github issue.

Additionally, some effort has been made to get Shuttle working with Leptos, but to date, deploys to the Shuttle cloud are still not working as expected. That work is available here, if you would like to investigate for yourself or contribute fixes: Leptos Axum Starter Template for Shuttle.rs.

Optimizing WASM Binary Size

One of the primary downsides of deploying a Rust/WebAssembly frontend app is that splitting a WASM file into smaller chunks to be dynamically loaded is significantly more difficult than splitting a JavaScript bundle. There have been experiments like wasm-split in the Emscripten ecosystem but at present there’s no way to split and dynamically load a Rust/wasm-bindgen binary. This means that the whole WASM binary needs to be loaded before your app becomes interactive. Because the WASM format is designed for streaming compilation, WASM files are much faster to compile per kilobyte than JavaScript files. (For a deeper look, you can read this great article from the Mozilla team on streaming WASM compilation.)

Still, it’s important to ship the smallest WASM binary to users that you can, as it will reduce their network usage and make your app interactive as quickly as possible.

So what are some practical steps?

Things to Do

  1. Make sure you’re looking at a release build. (Debug builds are much, much larger.)
  2. Add a release profile for WASM that optimizes for size, not speed.

For a cargo-leptos project, for example, you can add this to your Cargo.toml:

[profile.wasm-release]
inherits = "release"
opt-level = 'z'
lto = true
codegen-units = 1

# ....

[package.metadata.leptos]
# ....
lib-profile-release = "wasm-release"

This will hyper-optimize your release build's WASM for size, while keeping your server build optimized for speed. (For a pure client-rendered app without server considerations, just use the [profile.wasm-release] block as your [profile.release].)

  3. Always serve compressed WASM in production. WASM tends to compress very well, typically shrinking to less than 50% of its uncompressed size, and it’s trivial to enable compression for static files being served from Actix or Axum (see the sketch below).
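For example, here is a minimal, hypothetical sketch of on-the-fly compression with Axum and tower-http (it assumes tower-http's "fs" and "compression-gzip" features are enabled in your Cargo.toml, and that your static files live in the default cargo-leptos output directory; adjust the names and paths to your own setup):

use axum::Router;
use tower_http::{compression::CompressionLayer, services::ServeDir};

// Sketch only: serve the WASM/JS bundle produced by cargo-leptos and
// compress responses (including the .wasm file) on the fly.
fn compressed_static_router() -> Router {
    Router::new()
        .nest_service("/pkg", ServeDir::new("target/site/pkg"))
        .layer(CompressionLayer::new())
}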

  4. If you’re using nightly Rust, you can rebuild the standard library with this same profile rather than the prebuilt standard library that’s distributed with the wasm32-unknown-unknown target.

To do this, create a file in your project at .cargo/config.toml

[unstable]
build-std = ["std", "panic_abort", "core", "alloc"]
build-std-features = ["panic_immediate_abort"]

Note that if you're using this with SSR too, the same Cargo profile will be applied. You'll need to explicitly specify your target:

[build]
target = "x86_64-unknown-linux-gnu" # or whatever

Also note that in some cases, the cfg flag has_std will not be set, which may cause build errors with some dependencies that check for has_std. You may fix these errors by adding:

[build]
rustflags = ["--cfg=has_std"]

And you'll need to add panic = "abort" to [profile.release] in Cargo.toml. Note that this applies the same build-std and panic settings to your server binary, which may not be desirable. Some further exploration is probably needed here.

  5. One of the sources of binary size in WASM binaries can be serde serialization/deserialization code. Leptos uses serde by default to serialize and deserialize resources created with create_resource. You might try experimenting with the miniserde and serde-lite features, which allow you to use those crates for serialization and deserialization instead; each only implements a subset of serde’s functionality, but typically optimizes for size over speed.

Things to Avoid

There are certain crates that tend to inflate binary sizes. For example, the regex crate with its default features adds about 500kb to a WASM binary (largely because it has to pull in Unicode table data!). In a size-conscious setting, you might consider avoiding regexes in general, or even dropping down and calling browser APIs to use the built-in regex engine instead. (This is what leptos_router does on the few occasions it needs a regular expression.)
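If you do need the occasional pattern match in the browser, one option (sketched below under the assumption that you only target a JS host and have js-sys as a dependency; the pattern and function name are purely illustrative) is to call out to the engine's built-in RegExp rather than compiling the regex crate into your binary:

use js_sys::RegExp;

// Sketch only: the browser's regex engine does the work,
// so no Unicode tables end up in our WASM binary.
fn looks_like_email(input: &str) -> bool {
    let re = RegExp::new(r"^[^@\s]+@[^@\s]+$", "");
    re.test(input)
}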

In general, Rust’s commitment to runtime performance is sometimes at odds with a commitment to a small binary. For example, Rust monomorphizes generic functions, meaning it creates a distinct copy of the function for each generic type it’s called with. This is significantly faster than dynamic dispatch, but increases binary size. Leptos tries to balance runtime performance with binary size considerations pretty carefully; but you might find that writing code that uses many generics tends to increase binary size. For example, if you have a generic component with a lot of code in its body and call it with four different types, remember that the compiler could include four copies of that same code. Refactoring to use a concrete inner function or helper can often maintain performance and ergonomics while reducing binary size.
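As a small illustration of that refactoring pattern (the names here are made up), the generic outer function forwards to a non-generic inner function, so the large body is monomorphized only once no matter how many types call it:

// The outer function stays generic and ergonomic; the inner one is compiled once.
fn render_label(text: impl AsRef<str>) -> String {
    fn inner(text: &str) -> String {
        // imagine a large body here; it exists once in the binary, not once per type
        format!("<label>{text}</label>")
    }
    inner(text.as_ref())
}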

A Final Thought

Remember that in a server-rendered app, JS bundle size/WASM binary size affects only one thing: time to interactivity on the first load. This is very important to a good user experience: nobody wants to click a button three times and have it do nothing because the interactive code is still loading — but it's not the only important measure.

It’s especially worth remembering that streaming in a single WASM binary means all subsequent navigations are nearly instantaneous, depending only on any additional data loading. Precisely because your WASM binary is not bundle split, navigating to a new route does not require loading additional JS/WASM, as it does in nearly every JavaScript framework. Is this copium? Maybe. Or maybe it’s just an honest trade-off between the two approaches!

Always take the opportunity to optimize the low-hanging fruit in your application. And always test your app under real circumstances with real user network speeds and devices before making any heroic efforts.

Guide: Islands

Leptos 0.5 introduces the new experimental-islands feature. This guide will walk through the islands feature and core concepts, while implementing a demo app using the islands architecture.

The Islands Architecture

The dominant JavaScript frontend frameworks (React, Vue, Svelte, Solid, Angular) all originated as frameworks for building client-rendered single-page apps (SPAs). The initial page load is rendered to HTML, then hydrated, and subsequent navigations are handled directly in the client. (Hence “single page”: everything happens from a single page load from the server, even if there is client-side routing later.) Each of these frameworks later added server-side rendering to improve initial load times, SEO, and user experience.

This means that by default, the entire app is interactive. It also means that the entire app has to be shipped to the client as JavaScript in order to be hydrated. Leptos has followed this same pattern.

You can read more in the chapters on server-side rendering.

But it’s also possible to work in the opposite direction. Rather than taking an entirely-interactive app, rendering it to HTML on the server, and then hydrating it in the browser, you can begin with a plain HTML page and add small areas of interactivity. This is the traditional format for any website or app before the 2010s: your browser makes a series of requests to the server and returns the HTML for each new page in response. After the rise of “single-page apps” (SPA), this approach has sometimes become known as a “multi-page app” (MPA) by comparison.

The phrase “islands architecture” has emerged recently to describe the approach of beginning with a “sea” of server-rendered HTML pages, and adding “islands” of interactivity throughout the page.

Additional Reading

The rest of this guide will look at how to use islands with Leptos. For more background on the approach in general, check out some of the articles below:

Activating Islands Mode

Let’s start with a fresh cargo-leptos app:

cargo leptos new --git leptos-rs/start

I’m using Actix because I like it. Feel free to use Axum; there should be approximately no server-specific differences in this guide.

I’m just going to run

cargo leptos build

in the background while I fire up my editor and keep writing.

The first thing I’ll do is to add the experimental-islands feature in my Cargo.toml. I need to add this to both leptos and leptos_actix:

leptos = { version = "0.5", features = ["nightly", "experimental-islands"] }
leptos_actix = { version = "0.5", optional = true, features = [
  "experimental-islands",
] }

Next I’m going to modify the hydrate function exported from src/lib.rs. I’m going to remove the line that calls leptos::mount_to_body(App) and replace it with

leptos::leptos_dom::HydrationCtx::stop_hydrating();

Each “island” we create will actually act as its own entrypoint, so our hydrate() function just says “okay, hydration’s done now.”
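For reference, here is a sketch of what the modified hydrate() might look like, assuming the default start template's src/lib.rs (your feature gates and imports may differ):

use wasm_bindgen::prelude::wasm_bindgen;

#[wasm_bindgen]
pub fn hydrate() {
    // no more mount_to_body(App): each #[island] registers its own entrypoint,
    // so we just mark top-level hydration as finished
    leptos::leptos_dom::HydrationCtx::stop_hydrating();
}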

Okay, now fire up your cargo leptos watch and go to http://localhost:3000 (or wherever).

Click the button, and...

Nothing happens!

Perfect.

Note

The starter templates include use app::*; in their hydrate() function definitions. Once you've switched over to islands mode, you are no longer using the imported main App function, so you might think you can delete this. (And in fact, Rust lint tools might issue warnings if you don't!)

However, this can cause issues if you are using a workspace setup. We use wasm-bindgen to independently export an entrypoint for each function. In my experience, if you are using a workspace setup and nothing in your frontend crate actually uses the app crate, those bindings will not be generated correctly. See this discussion for more.

Using Islands

Nothing happens because we’ve just totally inverted the mental model of our app. Rather than being interactive by default and hydrating everything, the app is now plain HTML by default, and we need to opt into interactivity.

This has a big effect on WASM binary sizes: if I compile in release mode, this app is a measly 24kb of WASM (uncompressed), compared to 355kb in non-islands mode. (355kb is quite large for a “Hello, world!” It’s really just all the code related to client-side routing, which isn’t being used in the demo.)

When we click the button, nothing happens, because our whole page is static.

So how do we make something happen?

Let’s turn the HomePage component into an island!

Here was the non-interactive version:

#[component]
fn HomePage() -> impl IntoView {
    // Creates a reactive value to update the button
    let (count, set_count) = create_signal(0);
    let on_click = move |_| set_count.update(|count| *count += 1);

    view! {
        <h1>"Welcome to Leptos!"</h1>
        <button on:click=on_click>"Click Me: " {count}</button>
    }
}

Here’s the interactive version:

#[island]
fn HomePage() -> impl IntoView {
    // Creates a reactive value to update the button
    let (count, set_count) = create_signal(0);
    let on_click = move |_| set_count.update(|count| *count += 1);

    view! {
        <h1>"Welcome to Leptos!"</h1>
        <button on:click=on_click>"Click Me: " {count}</button>
    }
}

Now when I click the button, it works!

The #[island] macro works exactly like the #[component] macro, except that in islands mode, it designates this as an interactive island. If we check the binary size again, this is 166kb uncompressed in release mode; much larger than the 24kb totally static version, but much smaller than the 355kb fully-hydrated version.

If you open up the source for the page now, you’ll see that your HomePage island has been rendered as a special <leptos-island> HTML element which specifies which component should be used to hydrate it:

<leptos-island data-component="HomePage" data-hkc="0-0-0">
  <h1 data-hk="0-0-2">Welcome to Leptos!</h1>
  <button data-hk="0-0-3">
    Click Me:
    <!-- <DynChild> -->11<!-- </DynChild> -->
  </button>
</leptos-island>

The typical Leptos hydration keys and markers are only present inside the island; only the island is hydrated.

Using Islands Effectively

Remember that only code within an #[island] needs to be compiled to WASM and shipped to the browser. This means that islands should be as small and specific as possible. My HomePage, for example, would be better broken apart into a regular component and an island:

#[component]
fn HomePage() -> impl IntoView {
    view! {
        <h1>"Welcome to Leptos!"</h1>
        <Counter/>
    }
}

#[island]
fn Counter() -> impl IntoView {
    // Creates a reactive value to update the button
    let (count, set_count) = create_signal(0);
    let on_click = move |_| set_count.update(|count| *count += 1);

    view! {
        <button on:click=on_click>"Click Me: " {count}</button>
    }
}

Now the <h1> doesn’t need to be included in the client bundle, or hydrated. This seems like a silly distinction now; but note that you can now add as much inert HTML content as you want to the HomePage itself, and the WASM binary size will remain exactly the same.

In regular hydration mode, your WASM binary size grows as a function of the size/complexity of your app. In islands mode, your WASM binary grows as a function of the amount of interactivity in your app. You can add as much non-interactive content as you want, outside islands, and it will not increase that binary size.

Unlocking Superpowers

So, this 50% reduction in WASM binary size is nice. But really, what’s the point?

The point comes when you combine two key facts:

  1. Code inside #[component] functions now only runs on the server.
  2. Children and props can be passed from the server to islands, without being included in the WASM binary.

This means you can run server-only code directly in the body of a component, and pass it directly into the children. Certain tasks that take a complex blend of server functions and Suspense in fully-hydrated apps can be done inline in islands.

We’re going to rely on a third fact in the rest of this demo:

  3. Context can be passed between otherwise-independent islands.

So, instead of our counter demo, let’s make something a little more fun: a tabbed interface that reads data from files on the server.

Passing Server Children to Islands

One of the most powerful things about islands is that you can pass server-rendered children into an island, without the island needing to know anything about them. Islands hydrate their own content, but not children that are passed to them.

As Dan Abramov of React put it (in the very similar context of RSCs), islands aren’t really islands: they’re donuts. You can pass server-only content directly into the “donut hole,” as it were, allowing you to create tiny atolls of interactivity, surrounded on both sides by the sea of inert server HTML.

In the demo code included below, I added some styles to show all server content as a light-blue “sea,” and all islands as light-green “land.” Hopefully that will help picture what I’m talking about!

To continue with the demo: I’m going to create a Tabs component. Switching between tabs will require some interactivity, so of course this will be an island. Let’s start simple for now:

#[island]
fn Tabs(labels: Vec<String>) -> impl IntoView {
    let buttons = labels
        .into_iter()
        .map(|label| view! { <button>{label}</button> })
        .collect_view();
    view! {
        <div style="display: flex; width: 100%; justify-content: space-between;">
            {buttons}
        </div>
    }
}

Oops. This gives me an error

error[E0463]: can't find crate for `serde`
  --> src/app.rs:43:1
   |
43 | #[island]
   | ^^^^^^^^^ can't find crate

Easy fix: let’s cargo add serde --features=derive. The #[island] macro wants to pull in serde here because it needs to serialize and deserialize the labels prop.

Now let’s update the HomePage to use Tabs.

#[component]
fn HomePage() -> impl IntoView {
    // these are the files we’re going to read
    let files = ["a.txt", "b.txt", "c.txt"];
    // the tab labels will just be the file names
    let labels = files.iter().copied().map(Into::into).collect();
    view! {
        <h1>"Welcome to Leptos!"</h1>
        <p>"Click any of the tabs below to read a recipe."</p>
        <Tabs labels/>
    }
}

If you take a look in the DOM inspector, you’ll see the island is now something like

<leptos-island
  data-component="Tabs"
  data-hkc="0-0-0"
  data-props='{"labels":["a.txt","b.txt","c.txt"]}'
></leptos-island>

Our labels prop is getting serialized to JSON and stored in an HTML attribute so it can be used to hydrate the island.

Now let’s add some tabs. For the moment, a Tab island will be really simple:

#[island]
fn Tab(index: usize, children: Children) -> impl IntoView {
    view! {
        <div>{children()}</div>
    }
}

Each tab, for now, will just be a <div> wrapping its children.

Our Tabs component will also get some children: for now, let’s just show them all.

#[island]
fn Tabs(labels: Vec<String>, children: Children) -> impl IntoView {
    let buttons = labels
        .into_iter()
        .map(|label| view! { <button>{label}</button> })
        .collect_view();
    view! {
        <div style="display: flex; width: 100%; justify-content: space-around;">
            {buttons}
        </div>
        {children()}
    }
}

Okay, now let’s go back into the HomePage. We’re going to create the list of tabs to put into our tab box.

#[component]
fn HomePage() -> impl IntoView {
    let files = ["a.txt", "b.txt", "c.txt"];
    let labels = files.iter().copied().map(Into::into).collect();
    let tabs = move || {
        files
            .into_iter()
            .enumerate()
            .map(|(index, filename)| {
                let content = std::fs::read_to_string(filename).unwrap();
                view! {
                    <Tab index>
                        <h2>{filename.to_string()}</h2>
                        <p>{content}</p>
                    </Tab>
                }
            })
            .collect_view()
    };

    view! {
        <h1>"Welcome to Leptos!"</h1>
        <p>"Click any of the tabs below to read a recipe."</p>
        <Tabs labels>
            <div>{tabs()}</div>
        </Tabs>
    }
}

Uh... What?

If you’re used to using Leptos, you know that you just can’t do this. All code in the body of components has to run on the server (to be rendered to HTML) and in the browser (to hydrate), so you can’t just call std::fs; it will panic, because there’s no access to the local filesystem (and certainly not to the server filesystem!) in the browser. This would be a security nightmare!

Except... wait. We’re in islands mode. This HomePage component really does only run on the server. So we can, in fact, just use ordinary server code like this.

Is this a dumb example? Yes! Synchronously reading from three different local files in a .map() is not a good choice in real life. The point here is just to demonstrate that this is, definitely, server-only content.

Go ahead and create three files in the root of the project called a.txt, b.txt, and c.txt, and fill them in with whatever content you’d like.

Refresh the page and you should see the content in the browser. Edit the files and refresh again; it will be updated.

You can pass server-only content from a #[component] into the children of an #[island], without the island needing to know anything about how to access that data or render that content.

This is really important. Passing server children to islands means that you can keep islands small. Ideally, you don’t want to slap an #[island] around a whole chunk of your page. You want to break that chunk out into an interactive piece, which can be an #[island], and a bunch of additional server content that can be passed to that island as children, so that the non-interactive subsections of an interactive part of the page can be kept out of the WASM binary.

Passing Context Between Islands

These aren’t really “tabs” yet: they just show every tab, all the time. So let’s add some simple logic to our Tabs and Tab components.

We’ll modify Tabs to create a simple selected signal. We provide the read half via context, and set the value of the signal whenever someone clicks one of our buttons.

#[island]
fn Tabs(labels: Vec<String>, children: Children) -> impl IntoView {
    let (selected, set_selected) = create_signal(0);
    provide_context(selected);

    let buttons = labels
        .into_iter()
        .enumerate()
        .map(|(index, label)| view! {
            <button on:click=move |_| set_selected(index)>
                {label}
            </button>
        })
        .collect_view();
// ...

And let’s modify the Tab island to use that context to show or hide itself:

#[island]
fn Tab(index: usize, children: Children) -> impl IntoView {
    let selected = expect_context::<ReadSignal<usize>>();
    view! {
        <div style:display=move || if selected() == index {
            "block"
        } else {
            "none"
        }>
// ...

Now the tabs behave exactly as I’d expect. Tabs passes the signal via context to each Tab, which uses it to determine whether it should be open or not.

That’s why in HomePage, I made let tabs = move || a function, and called it like {tabs()}: creating the tabs lazily this way meant that the Tabs island would already have provided the selected context by the time each Tab went looking for it.

Our complete tabs demo is about 220kb uncompressed: not the smallest demo in the world, but still about a third smaller than the counter button! Just for kicks, I built the same demo without islands mode, using #[server] functions and Suspense, and it was 429kb. So again, this was about a 50% savings in binary size. And this app includes quite minimal server-only content: remember that as we add additional server-only components and pages, this 220kb will not grow.

Overview

This demo may seem pretty basic. It is. But there are a number of immediate takeaways:

  • 50% WASM binary size reduction, which means measurable improvements in time to interactivity and initial load times for clients.
  • Reduced HTML page size. This one is less obvious, but it’s true and important: HTML generated from #[component]s doesn’t need all the hydration IDs and other boilerplate added.
  • Reduced data serialization costs. Creating a resource and reading it on the client means you need to serialize the data, so it can be used for hydration. If you’ve also read that data to create HTML in a Suspense, you end up with “double data,” i.e., the same exact data is both rendered to HTML and serialized as JSON, increasing the size of responses, and therefore slowing them down.
  • Easily use server-only APIs inside a #[component] as if it were a normal, native Rust function running on the server—which, in islands mode, it is!
  • Reduced #[server]/create_resource/Suspense boilerplate for loading server data.

Future Exploration

The experimental-islands feature included in 0.5 reflects work at the cutting edge of what frontend web frameworks are exploring right now. As it stands, our islands approach is very similar to Astro (before its recent View Transitions support): it allows you to build a traditional server-rendered, multi-page app and pretty seamlessly integrate islands of interactivity.

There are some small improvements that will be easy to add. For example, we can do something very much like Astro's View Transitions approach:

  • add client-side routing for islands apps by fetching subsequent navigations from the server and replacing the HTML document with the new one
  • add animated transitions between the old and new document using the View Transitions API
  • support explicit persistent islands, i.e., islands that you can mark with unique IDs (something like persist:searchbar on the component in the view), which can be copied over from the old to the new document without losing their current state

There are other, larger architectural changes that I’m not sold on yet.

Additional Information

Check out the islands PR, roadmap, and Hackernews demo for additional discussion.

Demo Code

use leptos::*;
use leptos_router::*;

#[component]
pub fn App() -> impl IntoView {
    view! {
        <Router>
            <main style="background-color: lightblue; padding: 10px">
                <Routes>
                    <Route path="" view=HomePage/>
                </Routes>
            </main>
        </Router>
    }
}

/// Renders the home page of your application.
#[component]
fn HomePage() -> impl IntoView {
    let files = ["a.txt", "b.txt", "c.txt"];
    let labels = files.iter().copied().map(Into::into).collect();
    let tabs = move || {
        files
            .into_iter()
            .enumerate()
            .map(|(index, filename)| {
                let content = std::fs::read_to_string(filename).unwrap();
                view! {
                    <Tab index>
                        <div style="background-color: lightblue; padding: 10px">
                            <h2>{filename.to_string()}</h2>
                            <p>{content}</p>
                        </div>
                    </Tab>
                }
            })
            .collect_view()
    };

    view! {
        <h1>"Welcome to Leptos!"</h1>
        <p>"Click any of the tabs below to read a recipe."</p>
        <Tabs labels>
            <div>{tabs()}</div>
        </Tabs>
    }
}

#[island]
fn Tabs(labels: Vec<String>, children: Children) -> impl IntoView {
    let (selected, set_selected) = create_signal(0);
    provide_context(selected);

    let buttons = labels
        .into_iter()
        .enumerate()
        .map(|(index, label)| {
            view! {
                <button on:click=move |_| set_selected(index)>
                    {label}
                </button>
            }
        })
        .collect_view();
    view! {
        <div
            style="display: flex; width: 100%; justify-content: space-around;\
            background-color: lightgreen; padding: 10px;"
        >
            {buttons}
        </div>
        {children()}
    }
}

#[island]
fn Tab(index: usize, children: Children) -> impl IntoView {
    let selected = expect_context::<ReadSignal<usize>>();
    view! {
        <div
            style:background-color="lightgreen"
            style:padding="10px"
            style:display=move || if selected() == index {
                "block"
            } else {
                "none"
            }
        >
            {children()}
        </div>
    }
}

Appendix: How does the Reactive System Work?

You don’t need to know very much about how the reactive system actually works in order to use the library successfully. But it’s always useful to understand what’s going on behind the scenes once you start working with the framework at an advanced level.

The reactive primitives you use are divided into three sets:

  • Signals (ReadSignal/WriteSignal, RwSignal, Resource, Trigger): values you can actively change to trigger reactive updates.
  • Computations (Memos): values that depend on signals (or other computations) and derive a new reactive value through some pure computation.
  • Effects: observers that listen to changes in some signals or computations and run a function, causing some side effect.

Derived signals are a kind of non-primitive computation: as plain closures, they simply allow you to refactor some repeated signal-based computation into a reusable function that can be called in multiple places, but they are not represented in the reactive system itself.

All the other primitives actually exist in the reactive system as nodes in a reactive graph.
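To put those primitives side by side, here’s a hedged fragment in the style of the examples below (it assumes the usual leptos imports and a running reactive runtime):

// signal: a root value you set directly
let (count, set_count) = create_signal(0);
// computation: a memo derived from the signal; a real node in the reactive graph
let double = create_memo(move |_| count() * 2);
// derived signal: just a closure, not a node in the graph
let is_even = move || count() % 2 == 0;
// effect: a leaf node that observes changes and causes a side effect
create_effect(move |_| log!("double = {}, even = {}", double(), is_even()));
set_count(1);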

Most of the work of the reactive system consists of propagating changes from signals to effects, possibly through some intervening memos.

The assumption of the reactive system is that effects (like rendering to the DOM or making a network request) are orders of magnitude more expensive than things like updating a Rust data structure inside your app.

So the primary goal of the reactive system is to run effects as infrequently as possible.

Leptos does this through the construction of a reactive graph.

Leptos’s current reactive system is based heavily on the Reactively library for JavaScript. You can read Milo’s article “Super-Charging Fine-Grained Reactivity” for an excellent account of its algorithm, as well as fine-grained reactivity in general—including some beautiful diagrams!

The Reactive Graph

Signals, memos, and effects all share three characteristics:

  • Value: either the signal’s current value, or (for memos and effects) the value returned by the previous run, if any.
  • Sources: any other reactive primitives they depend on. (For signals, this is an empty set.)
  • Subscribers: any other reactive primitives that depend on them. (For effects, this is an empty set.)

In reality then, signals, memos, and effects are just conventional names for one generic concept of a “node” in a reactive graph. Signals are always “root nodes,” with no sources/parents. Effects are always “leaf nodes,” with no subscribers. Memos typically have both sources and subscribers.

Simple Dependencies

So imagine the following code:

// A
let (name, set_name) = create_signal("Alice");

// B
let name_upper = create_memo(move |_| name.with(|n| n.to_uppercase()));

// C
create_effect(move |_| {
	log!("{}", name_upper());
});

set_name("Bob");

You can easily imagine the reactive graph here: name is the only signal/origin node, the create_effect is the only effect/terminal node, and there’s one intervening memo.

A   (name)
|
B   (name_upper)
|
C   (the effect)

Splitting Branches

Let’s make it a little more complex.

// A
let (name, set_name) = create_signal("Alice");

// B
let name_upper = create_memo(move |_| name.with(|n| n.to_uppercase()));

// C
let name_len = create_memo(move |_| name.with(|n| n.len()));

// D
create_effect(move |_| {
	log!("len = {}", name_len());
});

// E
create_effect(move |_| {
	log!("name = {}", name_upper());
});

This is also pretty straightforward: a single source signal (name/A) divides into two parallel tracks: name_upper/B and name_len/C, each of which has an effect that depends on it.

 __A__
|     |
B     C
|     |
E     D

Now let’s update the signal.

set_name("Bob");

We immediately log

len = 3
name = BOB

Let’s do it again.

set_name("Tim");

The log should show

name = TIM

len = 3 does not log again.

Remember: the goal of the reactive system is to run effects as infrequently as possible. Changing name from "Bob" to "Tim" will cause each of the memos to re-run. But they will only notify their subscribers if their value has actually changed. "BOB" and "TIM" are different, so that effect runs again. But both names have a length of 3, so the length effect does not run again.

Reuniting Branches

One more example, of what’s sometimes called the diamond problem.

// A
let (name, set_name) = create_signal("Alice");

// B
let name_upper = create_memo(move |_| name.with(|n| n.to_uppercase()));

// C
let name_len = create_memo(move |_| name.with(|n| n.len()));

// D
create_effect(move |_| {
	log!("{} is {} characters long", name_upper(), name_len());
});

What does the graph look like for this?

 __A__
|     |
B     C
|     |
|__D__|

You can see why it's called the “diamond problem.” If I’d connected the nodes with straight lines instead of bad ASCII art, it would form a diamond: two memos, each of which depends on the same signal, both feeding into the same effect.

A naive, push-based reactive implementation would cause this effect to run twice, which would be bad. (Remember, our goal is to run effects as infrequently as we can.) For example, you could implement a reactive system such that signals and memos immediately propagate their changes all the way down the graph, through each dependency, essentially traversing the graph depth-first. In other words, updating A would notify B, which would notify D; then A would notify C, which would notify D again. This is both inefficient (D runs twice) and glitchy (D actually runs with the incorrect value for the second memo during its first run.)

Solving the Diamond Problem

Any reactive implementation worth its salt is dedicated to solving this issue. There are a number of different approaches (again, see Milo’s article for an excellent overview).

Here’s how ours works, in brief.

A reactive node is always in one of three states:

  • Clean: it is known not to have changed
  • Check: it is possible it has changed
  • Dirty: it has definitely changed

Updating a signal marks that signal Dirty and marks all its descendants Check, recursively. Any of its descendants that are effects are added to a queue to be re-run.

    ____A (DIRTY)___
   |               |
B (CHECK)    C (CHECK)
   |               |
   |____D (CHECK)__|

Now those effects are run. (All of the effects will be marked Check at this point.) Before re-running its computation, the effect checks its parents to see if they are dirty.

  • So D goes to B and checks if it is Dirty.
  • But B is also marked Check. So B does the same thing:
    • B goes to A, and finds that it is Dirty.
    • This means B needs to re-run, because one of its sources has changed.
    • B re-runs, generating a new value, and marks itself Clean.
    • Because B is a memo, it then checks its prior value against the new value.
    • If they are the same, B returns "no change." Otherwise, it returns "yes, I changed."
  • If B returned “yes, I changed,” D knows that it definitely needs to run and re-runs immediately before checking any other sources.
  • If B returned “no, I didn’t change,” D continues on to check C (see process above for B.)
  • If neither B nor C has changed, the effect does not need to re-run.
  • If either B or C did change, the effect now re-runs.

Because the effect is only marked Check once and only queued once, it only runs once.
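To make the marking phase concrete, here is a toy, standalone sketch of the Clean/Check/Dirty states applied to the diamond graph above. This is not Leptos’s actual implementation, just an illustration of the marking rule:

#[derive(Clone, Copy, PartialEq, Debug)]
enum Mark {
    Clean,
    Check,
    Dirty,
}

struct Node {
    mark: Mark,
    subscribers: Vec<usize>, // indices of the nodes that depend on this one
}

struct Graph {
    nodes: Vec<Node>,
}

impl Graph {
    // updating a signal marks it Dirty and all of its descendants Check
    fn mark_dirty(&mut self, id: usize) {
        self.nodes[id].mark = Mark::Dirty;
        for sub in self.nodes[id].subscribers.clone() {
            self.mark_check(sub);
        }
    }

    fn mark_check(&mut self, id: usize) {
        // a node that is already Check (or Dirty) has been visited; stopping here
        // is why the effect D is only marked and queued once
        if self.nodes[id].mark == Mark::Clean {
            self.nodes[id].mark = Mark::Check;
            for sub in self.nodes[id].subscribers.clone() {
                self.mark_check(sub);
            }
        }
    }
}

fn main() {
    // the diamond: A feeds B and C, which both feed the effect D
    let mut graph = Graph {
        nodes: vec![
            Node { mark: Mark::Clean, subscribers: vec![1, 2] }, // A (signal)
            Node { mark: Mark::Clean, subscribers: vec![3] },    // B (memo)
            Node { mark: Mark::Clean, subscribers: vec![3] },    // C (memo)
            Node { mark: Mark::Clean, subscribers: vec![] },     // D (effect)
        ],
    };
    graph.mark_dirty(0);
    for (i, node) in graph.nodes.iter().enumerate() {
        println!("node {i}: {:?}", node.mark);
    }
}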

If the naive version was a “push-based” reactive system, simply pushing reactive changes all the way down the graph and therefore running the effect twice, this version could be called “push-pull.” It pushes the Check status all the way down the graph, but then “pulls” its way back up. In fact, for large graphs it may end up bouncing back up and down and left and right on the graph as it tries to determine exactly which nodes need to re-run.

Note this important trade-off: Push-based reactivity propagates signal changes more quickly, at the expense of over-re-running memos and effects. Remember: the reactive system is designed to minimize how often you re-run effects, on the (accurate) assumption that side effects are orders of magnitude more expensive than this kind of cache-friendly graph traversal happening entirely inside the library’s Rust code. The measurement of a good reactive system is not how quickly it propagates changes, but how quickly it propagates changes without over-notifying.

Memos vs. Signals

Note that signals always notify their children; i.e., a signal is always marked Dirty when it updates, even if its new value is the same as the old value. Otherwise, we’d have to require PartialEq on signals, and this is actually quite an expensive check on some types. (For example, an equality check on something like some_vec_signal.update(|n| n.pop()) would be wasted work, when it’s clear the value has in fact changed.)

Memos, on the other hand, check whether they change before notifying their children. They only run their calculation once, no matter how many times you .get() the result, but they run whenever their signal sources change. This means that if the memo’s computation is very expensive, you may actually want to memoize its inputs as well, so that the memo only re-calculates when it is sure its inputs have changed.
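As a hedged sketch of that last point (the names and the “expensive” computation are made up), memoizing a cheap intermediate value keeps the expensive memo from recomputing on every edit:

// raw input: changes on every keystroke
let (text, set_text) = create_signal(String::new());
// cheap memo: only notifies subscribers when the word count actually changes
let word_count = create_memo(move |_| text.with(|t| t.split_whitespace().count()));
// expensive memo: now re-runs only when word_count changes, not on every edit
let expensive_stats = create_memo(move |_| {
    // pretend this is a costly computation over the word count
    (0..word_count()).map(|n| n * n).sum::<usize>()
});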

Memos vs. Derived Signals

All of this is cool, and memos are pretty great. But most actual applications have reactive graphs that are quite shallow and quite wide: you might have 100 source signals and 500 effects, but no memos or, in rare cases, three or four memos between the signal and the effect. Memos are extremely good at what they do: limiting how often they notify their subscribers that they have changed. But as this description of the reactive system should show, they come with overhead in three forms:

  1. A PartialEq check, which may or may not be expensive.
  2. Added memory cost of storing another node in the reactive system.
  3. Added computational cost of reactive graph traversal.

In cases in which the computation itself is cheaper than this reactive work, you should avoid “over-wrapping” with memos and simply use derived signals. Here’s a great example in which you should never use a memo:

let (a, set_a) = create_signal(1);
// none of these make sense as memos
let b = move || a() + 2;
let c = move || b() % 2 == 0;
let d = move || if c() { "even" } else { "odd" };

set_a(2);
set_a(3);
set_a(5);

Even though memoizing would technically save an extra calculation of d between setting a to 3 and 5, these calculations are themselves cheaper than the reactive algorithm.

At the very most, you might consider memoizing the final node before running some expensive side effect:

let text = create_memo(move |_| {
    d()
});
create_effect(move |_| {
    engrave_text_into_bar_of_gold(&text());
});

Appendix: The Life Cycle of a Signal

Three questions commonly arise at the intermediate level when using Leptos:

  1. How can I connect to the component lifecycle, running some code when a component mounts or unmounts?
  2. How do I know when signals are disposed, and why do I get an occasional panic when trying to access a disposed signal?
  3. How is it possible that signals are Copy and can be moved into closures and other structures without being explicitly cloned?

The answers to these three questions are closely inter-related, and are each somewhat complicated. This appendix will try to give you the context for understanding the answers, so that you can reason correctly about your application's code and how it runs.

The Component Tree vs. The Decision Tree

Consider the following simple Leptos app:

use leptos::logging::log;
use leptos::*;

#[component]
pub fn App() -> impl IntoView {
    let (count, set_count) = create_signal(0);

    view! {
        <button on:click=move |_| set_count.update(|n| *n += 1)>"+1"</button>
        {move || if count() % 2 == 0 {
            view! { <p>"Even numbers are fine."</p> }.into_view()
        } else {
            view! { <InnerComponent count/> }.into_view()
        }}
    }
}

#[component]
pub fn InnerComponent(count: ReadSignal<usize>) -> impl IntoView {
    create_effect(move |_| {
        log!("count is odd and is {}", count());
    });

    view! {
        <OddDuck/>
        <p>{count}</p>
    }
}

#[component]
pub fn OddDuck() -> impl IntoView {
    view! {
        <p>"You're an odd duck."</p>
    }
}

All it does is show a counter button, and then one message if it's even, and a different message if it's odd. If it's odd, it also logs the values in the console.

One way to map out this simple application would be to draw a tree of nested components:

App 
|_ InnerComponent
   |_ OddDuck

Another way would be to draw the tree of decision points:

root
|_ is count even?
   |_ yes
   |_ no

If you combine the two together, you'll notice that they don't map onto one another perfectly. The decision tree slices the view we created in InnerComponent into three pieces, and combines part of InnerComponent with the OddDuck component:

DECISION            COMPONENT           DATA    SIDE EFFECTS
root                <App/>              (count) render <button>
|_ is count even?   <InnerComponent/>
   |_ yes                                       render even <p>
   |_ no                                        start logging the count 
                    <OddDuck/>                  render odd <p> 
                                                render odd <p> (in <InnerComponent/>!)

Looking at this table, I notice the following things:

  1. The component tree and the decision tree don't match one another: the "is count even?" decision splits <InnerComponent/> into three parts (one that never changes, one if even, one if odd), and merges one of these with the <OddDuck/> component.
  2. The decision tree and the list of side effects correspond perfectly: each side effect is created at a specific decision point.
  3. The decision tree and the tree of data also line up. It's hard to see with only one signal in the table, but unlike a component, which is a function that can include multiple decisions or none, a signal is always created at a specific line in the tree of decisions.

Here's the thing: The structure of your data and the structure of side effects affect the actual functionality of your application. The structure of your components is just a convenience of authoring. You don't care, and you shouldn't care, which component rendered which <p> tag, or which component created the effect to log the values. All that matters is that they happen at the right times.

In Leptos, components do not exist. That is to say: You can write your application as a tree of components, because that's convenient, and we provide some debugging tools and logging built around components, because that's convenient too. But your components do not exist at runtime: Components are not a unit of change detection or of rendering. They are simply function calls. You can write your whole application in one big component, or split it into a hundred components, and it does not affect the runtime behavior, because components don't really exist.

The decision tree, on the other hand, does exist. And it's really important!

The Decision Tree, Rendering, and Ownership

Every decision point is some kind of reactive statement: a signal or a function that can change over time. When you pass a signal or a function into the renderer, it automatically wraps it in an effect that subscribes to any signals it contains, and updates the view accordingly over time.

This means that when your application is rendered, it creates a tree of nested effects that perfectly mirrors the decision tree. In pseudo-code:

// root
let button = /* render the <button> once */;

// the renderer wraps an effect around the `move || if count() ...`
create_effect(|_| {
    if count() % 2 == 0 {
        let p = /* render the even <p> */;
    } else {
        // the user created an effect to log the count
        create_effect(|_| {
            log!("count is odd and is {}", count());
        });

        let p1 = /* render the <p> from OddDuck */;
        let p2 = /* render the second <p> */;

        // the renderer creates an effect to update the second <p>
        create_effect(|_| {
            // update the content of the <p> with the signal
            p2.set_text_content(count.get());
        });
    }
})

Each reactive value is wrapped in its own effect to update the DOM, or run any other side effects of changes to signals. But you don't need these effects to keep running forever. For example, when count switches from an odd number back to an even number, the second <p> no longer exists, so the effect to keep updating it is no longer useful. Instead of running forever, effects are canceled when the decision that created them changes. In other words, and more precisely: effects are canceled whenever the effect that was running when they were created re-runs. If they were created in a conditional branch, and re-running the effect goes through the same branch, the effect will be created again: if not, it will not.

From the perspective of the reactive system itself, your application's "decision tree" is really a reactive "ownership tree." Simply put, a reactive "owner" is the effect or memo that is currently running. It owns effects created within it, they own their own children, and so on. When an effect is going to re-run, it first "cleans up" its children, then runs again.

So far, this model is shared with the reactive system as it exists in JavaScript frameworks like S.js or Solid, in which the concept of ownership exists to automatically cancel effects.

What Leptos adds is that we add a second, similar meaning to ownership: a reactive owner not only owns its child effects, so that it can cancel them; it also owns its signals (memos, etc.) so that it can dispose of them.

Ownership and the Copy Arena

This is the innovation that allows Leptos to be usable as a Rust UI framework. Traditionally, managing UI state in Rust has been hard, because UI is all about shared mutability. (A simple counter button is enough to see the problem: You need both immutable access to set the text node showing the counter's value, and mutable access in the click handler, and every Rust UI framework is designed around the fact that Rust is designed to prevent exactly that!) Using something like an event handler in Rust traditionally relies on primitives for communicating via shared memory with interior mutability (Rc<RefCell<_>>, Arc<Mutex<_>>) or for shared memory by communicating via channels, either of which often requires explicit .clone()ing to be moved into an event listener. This is kind of fine, but also an enormous inconvenience.

Leptos has always used a form of arena allocation for signals instead. A signal itself is essentially an index into a data structure that's held elsewhere. It's a cheap-to-copy integer type that does not do reference counting on its own, so it can be copied around, moved into event listeners, etc. without explicit cloning.
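As a toy illustration of that idea (this is not Leptos’s real runtime, just the shape of it), a “signal” can be a Copy index into a thread-local arena that actually owns the values:

use std::any::Any;
use std::cell::RefCell;

thread_local! {
    // the arena that actually owns the values
    static ARENA: RefCell<Vec<Box<dyn Any>>> = RefCell::new(Vec::new());
}

// the handle you pass around: just an index, so it's Copy
#[derive(Clone, Copy)]
struct ToySignal {
    index: usize,
}

fn create_toy_signal<T: 'static>(value: T) -> ToySignal {
    ARENA.with(|arena| {
        let mut arena = arena.borrow_mut();
        arena.push(Box::new(value));
        ToySignal { index: arena.len() - 1 }
    })
}

impl ToySignal {
    fn get<T: Clone + 'static>(&self) -> T {
        ARENA.with(|arena| {
            arena.borrow()[self.index]
                .downcast_ref::<T>()
                .expect("used a ToySignal with the wrong type")
                .clone()
        })
    }
}

fn main() {
    let count = create_toy_signal(0_i32);
    let also_count = count; // Copy: no .clone(), no Rc, no lifetimes
    println!("{}", also_count.get::<i32>());
}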

Instead of Rust lifetimes or reference counting, the life cycles of these signals are determined by the ownership tree.

Just as all effects belong to an owning parent effect, and the children are canceled when the owner reruns, so too all signals belong to an owner, and are disposed of when the parent reruns.

In most cases, this is completely fine. Imagine that in our example above, <OddDuck/> created some other signal that it used to update part of its UI. In most cases, that signal will be used for local state in that component, or maybe passed down as a prop to another component. It's unusual for it to be hoisted up out of the decision tree and used somewhere else in the application. When the count switches back to an even number, it is no longer needed and can be disposed.

However, this means there are two possible issues that can arise.

Signals can be used after they are disposed

The ReadSignal or WriteSignal that you hold is just an integer: say, 3 if it's the 3rd signal in the application. (As always, the reality is a bit more complicated, but not much.) You can copy that number all over the place and use it to say, "Hey, get me signal 3." When the owner cleans up, the value of signal 3 will be invalidated; but the number 3 that you've copied all over the place can't be invalidated. (Not without a whole garbage collector!) That means that if you push signals back "up" the decision tree, and store them somewhere conceptually "higher" in your application than they were created, they can be accessed after being disposed.

If you try to update a signal after it was disposed, nothing bad really happens. The framework will just warn you that you tried to update a signal that no longer exists. But if you try to access one, there's no coherent answer other than panicking: there is no value that could be returned. (There are try_ equivalents to the .get() and .with() methods that will simply return None if a signal has been disposed).
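For example, a hedged sketch (the helper name here is made up) of reading a possibly-disposed signal without panicking:

use leptos::*;

// returns None instead of panicking if the signal's owner has already been cleaned up
fn read_if_alive(count: ReadSignal<i32>) -> Option<i32> {
    count.try_get()
}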

Signals can be leaked if you create them in a higher scope and never dispose of them

The opposite is also true, and comes up particularly when working with collections of signals, like an RwSignal<Vec<RwSignal<_>>>. If you create a signal at a higher level, and pass it down to a component at a lower level, it is not disposed until the higher-up owner is cleaned up.

For example, if you have a todo app that creates a new RwSignal<Todo> for each todo, stores it in an RwSignal<Vec<RwSignal<Todo>>>, and then passes it down to a <Todo/>, that signal is not automatically disposed when you remove the todo from the list, but must be manually disposed, or it will "leak" for as long as its owner is still alive. (See the TodoMVC example for more discussion.)

This is only an issue when you create signals, store them in a collection, and remove them from the collection without manually disposing of them as well.
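A hedged sketch of that manual disposal (the Todo type and helper function are made up for illustration):

use leptos::*;

#[derive(Clone)]
struct Todo {
    text: String,
}

// Removing the todo from the collection is not enough: without dispose(),
// the inner signal lives as long as the owner that created it.
fn remove_todo(todos: RwSignal<Vec<RwSignal<Todo>>>, index: usize) {
    let mut removed = None;
    todos.update(|list| removed = Some(list.remove(index)));
    if let Some(todo_signal) = removed {
        todo_signal.dispose();
    }
}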

Connecting the Dots

The answers to the questions we started with should probably make some sense now.

Component Life-Cycle

There is no component life-cycle, because components don't really exist. But there is an ownership lifecycle, and you can use it to accomplish the same things:

  • before mount: simply running code in the body of a component will run it "before the component mounts"
  • on mount: create_effect runs a tick after the rest of the component, so it can be useful for effects that need to wait for the view to be mounted to the DOM.
  • on unmount: You can use on_cleanup to give the reactive system code that should run while the current owner is cleaning up, before running again. Because an owner is around a "decision," this means that on_cleanup will run when your component unmounts: if something can unmount, the renderer must have created an effect that's unmounting it!
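Putting those three together in a hedged sketch component (names made up, using only the hooks mentioned above):

use leptos::logging::log;
use leptos::*;

#[component]
fn LifecycleDemo() -> impl IntoView {
    // "before mount": plain code in the component body runs immediately
    log!("creating <LifecycleDemo/>");

    // "on mount": an effect runs a tick later, once the view has been built
    create_effect(|_| log!("<LifecycleDemo/> has been mounted"));

    // "on unmount": runs when the owner that rendered this view is cleaned up
    on_cleanup(|| log!("<LifecycleDemo/> is being cleaned up"));

    view! { <p>"Lifecycle demo"</p> }
}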

Issues with Disposed Signals

Generally speaking, problems can only arise here if you are creating a signal lower down in the ownership tree and storing it somewhere higher up. If you run into issues here, you should instead "hoist" the signal creation up into the parent, and then pass the created signals down—making sure to dispose of them on removal, if needed!

Copy signals

The whole system of Copyable wrapper types (signals, StoredValue, and so on) uses the ownership tree as a close approximation of the life-cycle of different parts of your UI. In effect, it parallels the Rust language's system of lifetimes based on blocks of code with a system of lifetimes based on sections of UI. This can't always be perfectly checked at compile time, but overall we think it's a net positive.