Thinking Outside the DOM: Concepts and Setup

Christian Johansen

If I were to name one thing that most JavaScript code bases struggle with, it would be tight coupling in general, and coupling to the DOM in particular. Tight coupling causes headaches for developers and makes the code difficult to unit test.

In this two-part series, I will give you some hints on how to achieve loosely coupled code, and walk you through an example of how to detach your code from the DOM. In this first installment, I’ll introduce you to the problems of tightly coupled code, and we’ll walk through a real-world situation where we can apply the concepts discussed: the validation of a form.

What is Coupling?

In many applications the code interacts with a variety of external APIs. In web applications, we interact with the DOM API, possibly the network (through XMLHttpRequest), JSON or XML for data exchange, and many others. On a conceptual level, these concerns are strictly separated from each other.

If the REST API your app interacts with makes a few structural changes, it’s reasonable that you’ll need to update the code interacting with the REST service. It isn’t reasonable that this requires changes in the UI rendering code. Yet, very often it does. And when that happens, you have what is called “tight coupling”.

Loose coupling is the opposite of tight coupling. In a loosely coupled system, changed network requirements do not cause changes in the rendering code. A revamped CSS stylesheet and new rules for class names do not cause changes in the data serialization code. This means fewer problems, and a code base that is easier to reason about.
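As a tiny illustration of the difference, consider parsing a server payload and presenting it to the user. The function and property names below (parseUser, greeting, attributes) are hypothetical, made up for this sketch:

```javascript
// Tightly coupled: parsing the wire format and building the display
// string happen in one function, so a change to either the payload
// shape or the presentation forces an edit here.
function showUserCoupled(json) {
  var user = JSON.parse(json);
  return 'Hello, ' + user.attributes.name + '!';
}

// Loosely coupled: one function owns the wire format, another owns
// the presentation. A change to the REST payload touches only parseUser.
function parseUser(json) {
  var user = JSON.parse(json);
  return {name: user.attributes.name};
}

function greeting(user) {
  return 'Hello, ' + user.name + '!';
}

var json = JSON.stringify({attributes: {name: 'Chris'}});
console.log(greeting(parseUser(json))); // "Hello, Chris!"
```

Both versions produce the same output; the difference shows up the day one of the two concerns changes.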

Now that I’ve given you some context, let’s take a look at what this means in practice.

Form Validation

Form validation is perhaps the deadest horse you could ever beat with a JavaScript stick. It’s one of the oldest use cases for JavaScript, and has been solved by open source libraries a gazillion times, not to mention the introduction of HTML5 attributes such as required and pattern. Yet, new libraries still pop up, indicating that:

  1. We aren’t creating the right abstractions, resulting in a constant need to rewrite.
  2. JavaScript developers really enjoy reinventing the wheel (and releasing the result as open source software).

I can’t really help with the latter, but I sure hope to cast some light on the former, even if I have contributed to the mess that’s already out there myself.

Form validation is “close” to the DOM in many ways. We’re testing a set of expectations against the current state of a form, and then we’re reporting back to the user by making changes to the DOM. However, if we take a step back, we can easily imagine some relevant use cases that involve the DOM to a lesser degree:

  • Sending validation reports to an analytics system to gain understanding of how to improve the site design
  • Validating data fetched over the network
  • Validating data from files dragged to the browser
  • Outputting validation messages using libraries such as React

Even if the DOM is heavily involved, there are multiple factors that vary:

  • When is the validation triggered? When the onsubmit event is fired? onblur? onchange? Programmatically through JavaScript code?
  • Error reporting form-wide, or per field? Both?
  • Error reporting markup details may vary a lot
  • Error reporting needs may be different depending on context

Tying the input-validate-output cycle tightly together will make it hard to account for all imaginable combinations of these things. If you plan ahead really well, you can make a pretty flexible solution, but I guarantee you that someone will show up with a use case that breaks the camel’s back. Believe me, I’ve been down this road before, falling into every ditch along the way.

As if this wasn’t enough, consider the fact that many kinds of validation rules depend on more than one field. How do we solve those situations? The answer can be found by first analyzing what we need to accomplish, and then deciding how best to do it:

  • Reading data from a form (DOM-centric)
  • Validating data against a set of rules (pure business logic)
  • Outputting validation results (possibly DOM-centric)

Additionally, we’ll need a thin layer of code that combines the pieces and triggers validation at the desired times. There may be more aspects to consider as well, but as long as we’re able to implement these as orthogonal concerns, we should be able to layer onto this abstraction with relative ease.

Validating Data

The core of any validation library is its set of validation functions. These functions should be applicable to any data, not just form elements. After all, the only thing that differentiates enforcing that the name field in a form is required from enforcing that the name property of an object is present is how we access the value. The validation logic itself is the same. For this reason it would be wise to design the validator functions to work with pure data, and then provide different mechanisms for extracting the values to run through the validator separately. This would also mean that our unit tests can use plain JavaScript objects, which is nice and easy to do.

What input should our validators expect? We’ll need to specify rules for individual fields (as well as compound rules, more on that later), and it’ll be very helpful to associate contextual error messages with each check. So something like:

var checkName = required("name", "Please enter your name");

The required function returns a function that will inspect the data it’s given, looking for the name property. It could be called like this:

var result = checkName({name: 'Chris'});

If the data provided to the function passes the check, it returns undefined. If it fails, the function returns an object describing the problem:

// returns {id: "name", msg: "Please enter your name"}

This data can be used “on the other end”, e.g. to render messages onto a form.

To implement this function, let’s formulate a test:

describe('required', function () {
  it('does not allow required fields to be blank', function () {
    var rule = required('name', 'Name cannot be blank');

    assert.equals(rule({}), {
      id: 'name',
      msg: 'Name cannot be blank'
    });
  });
});
The function checks for a non-empty value:

function required(id, msg) {
  return function (data) {
    if (data[id] === null ||
        data[id] === undefined ||
        data[id] === ''
    ) {
      return {id: id, msg: msg};
    }
  };
}

While calling individual validation functions is neat, our primary use case is to validate a full form. To do that we will use another function which will take a set of rules (as produced by various validator functions) and match them up against a dataset. The result will be an array of errors. If the array is empty, the validation was successful. So, we might have something like this:

var rules = [
  required("name", "Please enter your name"),
  required("email", "Please enter your email")
];

var data = {name: "Christian"};

// [{id: "email", messages: ["Please enter your email"]}]
var errors = enforceRules(rules, data);

Notice that the resulting messages property is an array because enforceRules may encounter multiple rules failing for the same property. Therefore, we must account for multiple error messages per property name.

This looks like a reasonable design: it’s straightforward, has no external dependencies, and makes no assumptions about where the data comes from, or where the result is going. Let’s attempt an implementation. We’ll start with a test:

describe('enforceRules', function () {
  it('does not allow required fields to be blank', function () {
    var rules = [required('name', 'Name cannot be blank')];

    assert.equals(enforceRules(rules, {}), [
      {id: 'name', messages: ['Name cannot be blank']}
    ]);
  });
});

This test describes the planned design well: there’s an array of rules, an object with data, and an array of errors as the result. The function has no side effects. This is the kind of design that has a chance of surviving changing requirements.

After a few more tests, you might end up with an implementation of enforceRules that looks like the following:

function enforceRules(rules, data) {
  var tmp = {};

  function addError(errors, error) {
    if (!tmp[error.id]) {
      tmp[error.id] = {id: error.id};
      tmp[error.id].messages = [];
      errors.push(tmp[error.id]);
    }
    tmp[error.id].messages.push(error.msg);
  }

  return rules.reduce(function (errors, rule) {
    var error = rule(data);
    if (error) {
      addError(errors, error);
    }
    return errors;
  }, []);
}
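To see the message accumulation in action, here is a self-contained sketch. It repeats required and enforceRules from above, and adds a hypothetical ad-hoc email rule written as a plain function, since a rule is just a function from data to an optional error:

```javascript
// required and enforceRules as defined above, repeated so this
// snippet runs on its own.
function required(id, msg) {
  return function (data) {
    if (data[id] === null || data[id] === undefined || data[id] === '') {
      return {id: id, msg: msg};
    }
  };
}

function enforceRules(rules, data) {
  var tmp = {};

  function addError(errors, error) {
    if (!tmp[error.id]) {
      tmp[error.id] = {id: error.id};
      tmp[error.id].messages = [];
      errors.push(tmp[error.id]);
    }
    tmp[error.id].messages.push(error.msg);
  }

  return rules.reduce(function (errors, rule) {
    var error = rule(data);
    if (error) {
      addError(errors, error);
    }
    return errors;
  }, []);
}

// A rule is just a function, so an ad-hoc check can sit next to the
// built-in validators. This email rule is made up for illustration.
var rules = [
  required('email', 'Please enter your email'),
  function (data) {
    if (!data.email || data.email.indexOf('@') < 0) {
      return {id: 'email', msg: 'Please enter a valid email'};
    }
  }
];

console.log(enforceRules(rules, {}));
// [{id: 'email', messages: ['Please enter your email',
//                           'Please enter a valid email']}]
```

Both rules fail on the empty object, yet the result contains a single entry for email with both messages collected in its messages array.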

At this point, we have a system in place where implementing new validators is pretty straightforward. As an example, regular expression tests are pretty common in form validators, and one could be implemented like so:

function pattern(id, re, msg) {
  return function (data) {
    if (data[id] && !re.test(data[id])) {
      return {id: id, msg: msg};
    }
  };
}

It’s important to note that this validator is designed to pass if the data in question is empty or non-existent. If it failed in that case, the validator would implicitly also act as a required check. Since we already have that check in a standalone version, it’s better to allow the user of the API to combine the two to suit their needs.
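Here is a quick check of that pass-through behavior, repeating the pattern function so the snippet stands alone. The email regex and message are just examples:

```javascript
function pattern(id, re, msg) {
  return function (data) {
    if (data[id] && !re.test(data[id])) {
      return {id: id, msg: msg};
    }
  };
}

var checkEmail = pattern('email', /@/, 'Please enter a valid email');

console.log(checkEmail({}));                 // undefined: missing data passes
console.log(checkEmail({email: 'chris'}));   // {id: 'email', msg: 'Please enter a valid email'}
console.log(checkEmail({email: 'c@e.com'})); // undefined: matching data passes
```

Pairing this rule with required('email', ...) in the same rules array covers both the missing and the malformed case, while keeping each concern in its own validator.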

In case you want to see the code created so far in action and play with it, take a look at this codepen.

Conclusions


In this first part, we discussed a problem common to many form validation libraries: tightly coupled code. I described the drawbacks that come with tightly coupled code, and showed how to create validation functions that don’t exhibit this issue.

In the next installment, I’ll introduce you to compound validators, and the other orthogonal concerns: gathering data from HTML forms and reporting errors back to the user. Finally, I’ll put it all together to have a full visual example that you can play with.