The Future of Native HTML Templating and Data Binding

EisenbergEffect
15 min read · Aug 7, 2023

One of the longest-running requests for the Web Platform is the ability to have native templating and data binding features directly in HTML. For at least two decades, innovative developers have been building libraries and frameworks to make up for this platform limitation. In this article, I’d like to take some time to share the current work happening in W3C Community Groups and with browser vendors to see if we can finally bring this capability to the platform.

Before we get started, I want to note that nothing I show here is a Web Standard yet and many things don’t have formal specs or completed proposals. My goal is to peel back the veneer of the standards process a bit, show you a few exciting things that are being explored, and help bring to light some of the complexities that surround this space.

NOTE: For some additional background, please see my previous articles “2023 State of Web Components” and “Web Components 2023 Spring Update”. Even though the proposals below are independent of Web Components, they are related to the overall story. If you need an HTML or DOM refresher, please also have a read through “A Few DOM Reminders”.

Moving Parts

There are so many complex moving parts to the HTML templating puzzle, each one with its own set of challenges. In this article, we’ll take a look at three areas that are currently under investigation: DOM Parts, template syntax, and reactivity.

DOM Parts

In May and July of 2023, the members of the W3C Web Components CG and browser vendors met to discuss the DOM Parts proposal.

A “DOM Part” is a cacheable representation of a part of the DOM that can be updated in a performant way. It could be a NodePart representing a single HTML node, an AttributePart representing an HTML attribute value, or a ChildNodePart, representing a range of child nodes. Templating engines today use various techniques to tag nodes, attributes, and content ranges that they need to insert and update with dynamic data. The DOM Parts proposal would enable a standard, platform optimized mechanism for marking, collecting, cloning, and batch updating parts. It is designed to improve performance and reduce JavaScript for all frameworks, libraries, and apps that adopt it.
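
To ground what a DOM Part would replace, here is a rough sketch of the marker-and-walk technique many templating engines implement for themselves today: the engine compiles its syntax into comment markers, walks each cloned tree to find them, and records those locations for later updates. The marker name and the code below are purely illustrative.

// Illustrative only: the kind of manual comment-marker bookkeeping that
// DOM Parts aims to standardize and optimize.
const template = document.createElement("template");
template.innerHTML = "<h1><!--marker--></h1>";

// Clone an instance of the template.
const fragment = template.content.cloneNode(true);

// Walk the clone looking for the comment markers left by the compiler.
const walker = document.createTreeWalker(fragment, NodeFilter.SHOW_COMMENT);
const markers = [];
let node;
while ((node = walker.nextNode())) {
  if (node.data === "marker") markers.push(node);
}

// Each marker is a location the engine can later fill with dynamic content.
markers[0].replaceWith("Hello, DOM Parts!");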

Use Cases

In the May CG meeting, Google’s Justin Fagnani laid out the basic use cases that are being used to drive the feature design:

  • Template-based Client-side Rendering — Client-side rendering needs the ability to instantiate a template, find its DOM Parts, and update them. An ideal API would allow a library to instantiate a template and locate its parts with a single, platform-optimized call, improving performance by eliminating the need for the library to implement its own tagging system or perform superfluous DOM tree walks.
  • Server-Side Rendering (SSR) with Hydration/Continuation — Client-side hydration of previously SSR’d HTML needs the ability to find DOM Parts in live HTML content and connect them to data or behavior. Today, many frameworks either have to recreate the DOM when they hydrate from SSR output or go through an expensive upgrade process on existing DOM. A browser standard would greatly improve performance and simplify the process for library authors. It would also help to unify the output patterns of different SSR frameworks.
  • Deferred SSR — In more complex applications, while rendering on the server, data may not be ready to render during the typical request flow. In this scenario, a library renders markers into the DOM, which it later updates with the correct data or components once the dependencies become available (see the sketch after this list). DOM Parts could provide a standard API for this pattern, reducing library size and complexity.
  • Declarative Custom Elements — One of the big picture goals of the Web Components effort is to eventually have fully-declarative custom elements, without the need for JavaScript. In order to make this possible, a standard, ergonomic template syntax is needed, including control flow for conditionals and loops, as well as the ability for markers to hold binding expressions. DOM Parts could provide the low-level infrastructure needed to mark nodes, associate expressions, and update DOM efficiently.
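
To illustrate the deferred SSR pattern mentioned above, here is a rough sketch of the kind of manual placeholder swap libraries perform today, which DOM Parts could standardize. The fetchUserData function and the marker text are hypothetical.

// Illustrative only: render a placeholder marker now, swap in real content
// later, once the data dependency resolves.
const container = document.body;
const marker = document.createComment("deferred:user-card");
container.append(marker);

fetchUserData().then((user) => {
  const card = document.createElement("section");
  card.textContent = `${user.name} <${user.email}>`;
  marker.replaceWith(card);
});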

The Imperative API

The example below is based on combining the original proposal with updates from the latest proposal and recent CG discussions, which resulted in this PR.

NOTE: Yes, this is just about as hot off the presses as it can get.

Imagine we have the following template with our own framework syntax (more on standard syntax below):

<template>
  <section>
    <h1 id="name">{name}</h1>
    Email: <a id="link" href="mailto:{email}">{email}</a>
  </section>
</template>

We could parse this HTML to remove our expressions, turning it into something like this:

<template>
  <section>
    <h1 id="name"></h1>
    Email: <a id="link"></a>
  </section>
</template>

Then, we could create DOM Part primitives to represent all the places where our templating engine needs to make updates:

// Associate the template content with the document.
const fragment = document.adoptNode(template.content);

// Locate the nodes we want to dynamically update.
const name = fragment.getElementById("name");
const link = fragment.getElementById("link");

// New API to get the root part.
const root = fragment.getPartRoot();

// New APIs to create/associate parts with the root and their nodes.
const namePart = new ChildNodePart(root, name);
const emailPart = new ChildNodePart(root, link);
const emailAttrPart = new AttributePart(root, link, "href", ["mailto"]);

// Update the parts with data.
namePart.value = "John Doe";
emailPart.value = "john@doe.org";
emailAttrPart.values = ["john@doe.org"];

// Commit the changes to the nodes.
namePart.commit();
emailPart.commit();
emailAttrPart.commit();

// Add the content to the document.
document.body.appendChild(fragment);

Then the resulting DOM will look like this:

<section>
  <h1 id="name">John Doe</h1>
  Email: <a id="link" href="mailto:john@doe.org">john@doe.org</a>
</section>

The proposal example above shows how basic parts can be created, associated with nodes, and updated, with controlled timing of the actual DOM commit. You can probably imagine how a templating library could parse its own syntax, automatically create the relevant parts, and handle updating them for you.
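
To make that concrete, here is a rough sketch, using the proposed APIs above (which may still change), of how a library might automate part creation for its own syntax. The createPartsFor helper, the data-part convention, and the data object are hypothetical stand-ins for whatever markers a library’s compiler emits.

// Hypothetical helper: create parts for elements the library's compiler
// marked with data-part="expression", using the proposed APIs shown above.
function createPartsFor(fragment) {
  const root = fragment.getPartRoot();
  const parts = new Map();

  for (const node of fragment.querySelectorAll("[data-part]")) {
    parts.set(node.dataset.part, new ChildNodePart(root, node));
  }

  return parts;
}

// Usage: set each part's value from a data object, then commit.
const data = { name: "John Doe", email: "john@doe.org" };
const parts = createPartsFor(fragment);

for (const [expression, part] of parts) {
  part.value = data[expression];
  part.commit();
}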

REMINDER: The example above is an aggregate of the early and revised proposals plus feedback from recent CG meetings and in-review GitHub PRs to proposals. It’s an approximation of what things could look like that shows the core concepts. The details will likely be different. For example, the AttributePart API is still being heavily debated with various options under consideration.

To further assist templating engines, the API for rapidly instantiating templates along with their associated parts looks like this:

// Your chosen library parses its syntax into a DocumentFragment 
// and creates the standard parts along with their associated directives,
// using code similar to the example above.
const { fragment, directives } = parseWithLibrary(template);

// Later, the library uses the browser's APIs to create
// as many instances as needed.
const instance = fragment.cloneNode(true); // clones the nodes and parts
const root = instance.getPartRoot(); // gets the root part
const parts = root.getParts(); // gets the associated parts

// Update the parts.
for (let i = 0; i < parts.length; ++i) {
  // framework directive sets/commits values internally
  directives[i].applyBehavior(parts[i]);
}

ASIDE: Hopefully, you can start to see the layering of features here. First, there’s the ability to create and update parts. Then, there’s the ability to have the platform instantiate templates and parts for you, provided that you had a way to parse out the parts beforehand. When we look at syntax below, we’ll then layer on the ability of the browser to parse out the parts as well.

When working on Web Standards, it’s critically important to come up with incremental value on the way to big value; otherwise, the project can be too expensive or risky for browser vendors to take on all at once. This also lets library authors ease adoption for their users without forcing them to rewrite as much. For example, the two sets of APIs described above could be used by virtually every modern view engine today without breaking any consumer. Even the declarative syntax described below could be used in this way if a view library wanted to compile its own syntax into the browser’s target syntax, which of course could be done at build time. The goal is that, at each incremental point, adopting libraries are able to reduce their size and improve their performance.

Open Questions and Challenges

While the above API may seem well-defined, there are a lot of open questions and some interesting challenges to work through.

Here are a few things the CG discussed, and aims to continue discussing in the future:

  • What should these APIs be called? The term “part” is already overloaded.
  • Should the API have an explicit commit or is it fine to just set values?
  • If introducing explicit batching, should the batch apply its changes in the order they were added to the batch or in tree order? There are performance implications either way.
  • If a part update can trigger JavaScript code via a mutation event or a cascading series of changes, then how and to what extent can updates be optimized automatically by the browser?
  • Could the DOM Parts API be shipped more incrementally? For example, could batching be added at a later time?
  • What is the best way to handle attribute parts? They are unique in that an attribute value is singular in the DOM but in templating scenarios can be broken down into static and dynamic parts.
  • Should we introduce the explicit idea of a PropertyPart and an EventPart or should that be handled more generically by a CallbackPart? Could callback part scenarios be better handled by other proposals, such as custom attributes/behaviors/enhancements?

One of the ways we’re trying to answer these questions is through prototyping in an actual browser. The really cool thing is that the Chrome team already has a working version of most of the above and is testing it with various Google libraries and frameworks, including Lit, Angular, and other internal projects. We’re very enthusiastic about this process and are looking forward to seeing how it progresses and what we will learn.

NOTE: If you are a framework or library author, feel free to reach out to me and I’ll try to figure out how we can get you connected into the process so you can explore how these APIs might be used by your library/framework.

Template Syntax

Template syntax is, at minimum, a way of declaring DOM Parts. It may become more than that (we hope to enable full binding and control flow), but it must be at least that in its first version in order to enable the core SSR use cases described for DOM Parts.

Part of what makes this tricky is that the syntax needs to:

  • Work in the document.body for Declarative Shadow DOM (DSD) and other SSR scenarios.
  • Be ergonomic for developers to code by hand for the future of Declarative Custom Elements and general client-side templating.

Historically, the W3C Technical Architecture Group (TAG) has indicated that it does not want to introduce new HTML parser modes. This means that any document.body syntax has to function without changing the parser rules.

What?!

Ok, there is a way to accomplish this, which is what this proposal outlines. Here’s an approximation of what the same template above would look like using HTML processing instructions as outlined in the original proposal…

<section>
  <h1 id="name"><?child-node-part?><?/child-node-part?></h1>
  Email:
  <?node-part?><a id="link"><?child-node-part?><?/child-node-part?></a>
</section>

The above syntax fulfills the requirements of SSR Hydration and Deferred SSR by providing a declarative syntax that the browser can optimize, without introducing a new parser mode. However, there’s a big problem with this. It’s not the slightest bit ergonomic, which is a problem for direct client-side templating (not library generated) and for the Declarative Custom Elements scenario.

How do we solve this?

As it turns out, the TAG’s prohibition on new parser modes, while it applies to the document, doesn’t necessarily apply to template elements, since they already run in a special mode. This means that the above syntax could be used in the document, specifically for DSD scenarios, but a more ergonomic second syntax could be introduced for templates, enabling view libraries and Declarative Custom Elements.

Uh…yeah. That’s also a problem.

Two syntaxes for the same thing and needing to pick between them based on whether you are targeting the document or a template: that is not good.

For this reason, the CG is seeking to revisit the document mode question with the W3C TAG. There is no technical reason why a new mode couldn’t be introduced. That said, based on our latest CG discussions, we think we can introduce the below syntax without changing the parser itself. If we can reach a consensus on this, it would enable us to have a single, ergonomic syntax that meets all the requirements and works everywhere.

The Current Proposal

Assuming that we can introduce a new non-parser-changing mode, we have started to discuss a basic, extensible micro syntax for templating in HTML:

The initial idea is that we would use {{}} to mark out parts, and something like {{#}}...{{/}} to denote ranges and parts with default values. If this looks familiar, it’s because it is explicitly and gratefully borrowed directly from the popular handlebars/mustache libraries.

We propose pairing this syntax with something like a parseparts attribute on elements, which tells the HTML parser that, after it parses the HTML using its normal rules and adds the nodes and attributes to the DOM, it should make the appropriate conversion to parts and store those parts with the parent part root.

The syntax is still being experimented with. Minimally, it needs to be able to:

  • Clearly mark out the three types of parts.
  • Provide a way for an “expression” to be specified.
  • Provide a way for a “default value” or range to be specified.

The expression is needed for dynamic updates on the client-side. The default value is needed both to represent the result of an SSR’d value for the part as well as to enable the HTML engine to remain unblocked on main document rendering when runtime data is not yet present.

Keeping with the simple example from above, if our server wanted to respond with HTML, including server rendered values, and also include the runtime binding expressions for the client-side, we’d have something like this:

<section parseparts>
  <h1>{{# name}}John Doe{{/}}</h1>
  Email:
  <a href="mailto:{{# email}}john@doe.org{{/}}">
    {{# email}}john@doe.org{{/}}
  </a>
</section>

The browser would immediately, without running any JavaScript, render the HTML like this:


<section>
  <h1>John Doe</h1>
  Email: <a href="mailto:john@doe.org">john@doe.org</a>
</section>

The parts, with their expression metadata (e.g. name, email, etc.), would be available via the APIs shown above, and client-side state/behavior could easily be attached to the existing parts once the JavaScript has loaded, without the need to re-render, re-create, or re-walk the DOM.
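
As a rough sketch of what that attach-on-load step might look like, assuming the SSR’d document exposes the same part APIs shown earlier and that each part carries its expression metadata (the metadata property name and the data object below are hypothetical):

// Hypothetical hydration step: find the existing parts, connect data,
// and commit, without re-rendering or re-walking the DOM.
const data = { name: "John Doe", email: "john@doe.org" };
const root = document.getPartRoot(); // assumes a document-level part root

for (const part of root.getParts()) {
  const expression = part.metadata; // hypothetical: "name", "email", ...

  if (part instanceof AttributePart) {
    part.values = [data[expression]];
  } else {
    part.value = data[expression];
  }

  part.commit();
}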

REMINDER: HTML like {{# name}}John Doe{{/}} is not something you would write by hand because this shows the combined output of a server process that includes both the live template binding expression and the server-rendered value of that expression. When authoring your HTML, you would just write {{name}} and your SSR library would expand that out to the above when it interpolates the actual data as part of the HTML response stream. That gives the browser everything it needs to render your SSR content immediately and it gives your client library everything it needs to continue where the server left off without repeating work.

NOTE: At this point, there isn’t a defined syntax for the binding expression itself. In fact, all the syntax I’m showing you is experimental. We are currently trying to work through what an extensible system would look like. This would enable libraries to bring their own expression language or metadata in the short-term and then the browser to add one later once the expression language details are worked out.

In the future, once the reactivity model, expression syntax, and block rendering (conditionals, lists, etc.) are all worked out, we could add a new mode attribute that adds the additional behavior of setting up bindings against the parts and automatically handling updates. This would preserve the extensibility while also enabling developers to opt into the fully declarative standard.

If the TAG is willing to allow the new parseparts mode, I believe we can make these APIs and capabilities happen. There is great cross-browser consensus forming now and prototype implementations are already being tested.

I am deeply grateful for the particular investment that both WebKit and Chrome have already made in this effort. Dear TAG, please generously consider the tremendous value this would bring to the web community and ecosystem.

Reactivity

The third major piece is reactivity. In some ways this is ahead of all the other pieces, and at the same time it is much farther behind.

Let me explain…

If you are using Custom Elements today, you can configure observedAttributes and the browser will invoke a callback on your custom element any time one of its observed attributes changes. There is also MutationObserver, which can detect attribute changes on any element. So, in this sense, the browser already has some basic reactivity around attributes in place.
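
For reference, here is what that existing attribute-level reactivity looks like today. These are standard APIs; the element name is just an example.

// Custom elements: the browser calls back when an observed attribute changes.
class MyCounter extends HTMLElement {
  static observedAttributes = ["count"];

  attributeChangedCallback(name, oldValue, newValue) {
    this.textContent = `Count: ${newValue}`;
  }
}
customElements.define("my-counter", MyCounter);

// MutationObserver: detect attribute changes on any element in a subtree.
const observer = new MutationObserver((records) => {
  for (const record of records) {
    console.log(`${record.attributeName} changed`);
  }
});
observer.observe(document.body, { attributes: true, subtree: true });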

But this doesn’t meet many needs. For a general-purpose binding system, reactivity is needed beyond custom elements and beyond the DOM. It is needed for arbitrary objects and values, which may or may not be associated with a DOM node.

Signals

Across many libraries and frameworks, there seems to be a growing consensus that signals work well as a general-purpose reactivity primitive. Signals are not new; they have been widely used on the web since at least 2010 when Knockout was introduced. Even at that time, signals weren’t a new idea, as they had been used widely in native application development for years.

A signal can be understood in terms of the following three characteristics:

  • It holds a single piece of state.
  • It emits a write signal whenever its held state changes.
  • It emits a read signal whenever its held state is read.

In code, we might have something like this:

// Create a signal to hold the number 0.
const count = Signal(0);

// Read the count and output it to the console.
console.log(`The count is ${count()}.`); // count() emits a read signal.

// Update the state held by the signal.
count.set(1); // count.set() emits a write signal.

// Read the count and output it to the console.
console.log(`The count is ${count()}.`); // count() emits a read signal.

With this basic primitive in place, one could observe expressions that are based on any number of signals and react to them. A common example of this is an “effect” function that runs some code, observes signals accessed from within the code, and then re-runs the code when any signal values change.

// Run the callback function; observe signal reads to capture dependencies.
effect(() => element.innerText = count());
// Automatically re-run the callback when a count write signal is received.
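
To make the mechanics concrete, here is a minimal sketch of how a signal/effect pair along these lines could be implemented. This is purely illustrative, not a proposed API; it simply matches the call style used in the examples above.

// Track the effect currently being run so signal reads can register it.
let currentEffect = null;

function Signal(initialValue) {
  let value = initialValue;
  const subscribers = new Set();

  // Reading the signal registers the running effect as a dependency.
  const read = () => {
    if (currentEffect) subscribers.add(currentEffect);
    return value;
  };

  // Writing the signal re-runs every effect that previously read it.
  read.set = (newValue) => {
    value = newValue;
    for (const fn of subscribers) fn();
  };

  return read;
}

function effect(fn) {
  const run = () => {
    currentEffect = run;
    try {
      fn();
    } finally {
      currentEffect = null;
    }
  };
  run();
}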

The Signal primitive can be used directly or under the hood on more complex models. For example, using decorators to metaprogram signals, we can imagine something like this:

// Define a reactive person class based on signals.
class Person {
  @signal accessor firstName;
  @signal accessor lastName;

  get fullName() {
    return `${this.firstName} ${this.lastName}`;
  }

  constructor(firstName, lastName) {
    this.firstName = firstName;
    this.lastName = lastName;
  }
}

// Create an instance of the model.
const p = new Person("John", "Doe");

// Bind the model's fullName to the DOM.
effect(() => element.innerText = p.fullName);

p.firstName = "Jim"; // update the model
// The DOM is automatically updated.

Custom collections can also be built where every slot in an array or map is a signal, a technique pioneered by Starbeam.
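
As a minimal sketch of that idea (not Starbeam’s actual implementation), using the hypothetical Signal and effect APIs from the examples above:

// A collection where every slot is backed by its own signal, so any effect
// that reads a slot re-runs when that slot changes.
class SignalArray {
  #slots = [];

  get(index) {
    return this.#slots[index]?.(); // reading emits a read signal
  }

  set(index, value) {
    if (this.#slots[index]) {
      this.#slots[index].set(value); // writing emits a write signal
    } else {
      this.#slots[index] = Signal(value);
    }
  }
}

const list = new SignalArray();
list.set(0, "first");

// Re-runs whenever slot 0 changes.
effect(() => console.log(list.get(0)));

list.set(0, "updated");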

Native Signals

With a growing consensus around signals and so many successful applications over at least two decades across the industry, the obvious question becomes: why don’t we just add a core Signal type to the JavaScript language?

In answer to that question, I have begun pursuing this idea in TC39 with the help of Daniel Ehrenberg, a long-time TC39 member. We are just at the beginning of this process, so if you are a library or framework author, please reach out to me. I want to collect your use cases and thoughts on this approach. If we can design something that is broadly usable across libraries and frameworks, this could be a huge win for everyone.

Signal Integration with the DOM and DOM Parts

With signals as a core primitive, we would then have a tool to enable the above HTML template syntax to bind to data by connecting signals to DOM Parts automatically. But so much more would be possible if we wanted to pursue deeper platform integration. For example, the core DOM APIs could be signal enabled. Here are a few examples:

// Automatically update the attribute when the signal changes.
element.setAttribute("some-attr", signal);

// Automatically update the text when the signal changes.
textNode.data = signal;

// Automatically update classes when the signal changes.
element.className = signal;

Like all standards, platform reactivity will boil down to a combination of consensus and implementor willingness. I’m hopeful that we’re at a place where enough libraries and frameworks would be willing to use a platform signal capability. If we can establish that, we could really move the web forward.

Wrapping Up

As with all my articles, I hope this has been insightful and valuable to you. The Web has a long history, one that we’re all part of. Its future is important to all of us, so we must continue to grow its capabilities, but do so with great care. Getting HTML Templating and Data Binding right is a difficult challenge, but one that will be transformative if we can work together to make it happen. We’ve made huge progress in just the last few months, with interest from multiple browsers and at least one working prototype. I’m extremely hopeful for the future.

If you enjoyed this look into Web Standards work, you might want to check out my Web Component Engineering course. I’d also love it if you would subscribe to this blog, subscribe to my YouTube channel, or follow me on Twitter. Your support greatly helps me continue writing and bringing this kind of content to the broader community. Thank you!
