Composable End-to-End tests for React Apps
So, you’ve finally arrived at the place where you must write an end-to-end test suite for your React project. You’ve probably looked at tools like Selenium, or even driving headless Chrome directly, but aren’t sure if you’re willing to buy into them. You should know that you aren’t alone: many developers face the same difficult choice.
Before we start exploring what I’m about to prescribe, I’d like to outline the reasons why I’m currently dissatisfied with the ecosystem. These are likely the same points that led you here.
1. Expensive and complex setup
Lots and lots of libraries out there are incredibly bulky and complex to set up. Mature libraries especially are victims of this, due to the high number of use-cases they have to support, plus the general lack of tooling available at their onset (Selenium especially). Many issue their own DSL or some other opaque medium for interacting with the browser. If you’re driving a headless browser with lots of `$` or XML, then something isn’t right.
2. Generally sluggish and hard to scale
End-to-end tests are already painful to write, but why do they have to be so doggone slow? Obviously there are costs to starting up and driving a web browser, but new technologies like headless Chrome should help expedite execution. Clearly there’s room for improvement here.
3. Painful API with no way to compose workflows
What do you do if, before anything else, you have to log in to an application before running any test whatsoever? Right now, at least as far as I’m aware, it’s either difficult or impossible to compose tests out of various behaviors. I, personally, would love to define a library of user-interactions and compose them on-the-fly at test time.
It was these challenges (and a few more) that led me to develop Navalia, an open-source project for painlessly driving, scaling, and automating headless browsers. As of this writing it only supports Google Chrome, but I will slowly be adding more browsers over time.
Let’s take a look at how this tool can help us with tests.
The Setup
You’re probably already using Jest for your React project, so you’ll be happy to know that we’ll be using Jest to write our test suite! The only thing you’ll really need is to install Navalia:
# Yarn
$ yarn add navalia

# If using npm
$ npm install --save-dev navalia
And that’s it! If you want to go a step further you could also install Chrome Canary as it has better overall support for headless.
The Navalia API
Navalia has a dead-simple API: all actions return promises, making them extremely easy to chain. It’s also written in TypeScript, meaning capable editors can pick up its type definitions and give you nice type-ahead suggestions and more.
Standing up Chrome is pretty simple:
import { Chrome } from 'navalia';

const chrome = new Chrome();
Once that’s done you can easily begin to interact with a browser, and call `done` when you’re finished to close it:
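A minimal sketch (the URL here is just an example, and the method names follow the ones mentioned in this post):

```javascript
import { Chrome } from 'navalia';

const chrome = new Chrome();

// Visit a page, then shut the browser down
chrome.goto('https://example.com')
  .then(() => chrome.done());
```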
Now let’s spice it up with a little user interaction and use `async`/`await` for better overall readability:
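Something along these lines (the docs URL, search selector, and screenshot path are my assumptions):

```javascript
import { Chrome } from 'navalia';

const chrome = new Chrome();

async function searchDocs() {
  // Visit the Navalia docs and search for a method name
  await chrome.goto('https://joelgriffith.github.io/navalia/');
  await chrome.type('input[type="search"]', 'exists');

  // We could also capture the result for verification:
  // await chrome.screenshot('/tmp/search-results.png');

  return chrome.done();
}

searchDocs();
```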
Here we’re going to visit the Navalia docs page and type in some text to search for. We could even `screenshot` the result to verify, but for now let’s move on to tests!
The React App
Below is the source for the React application I’ve written to demonstrate testing against. It’s a small login page where we ask for a username and password, and show help messages when the user clicks “submit”. Today we’ll just test a few things:
- That both inputs exist, as well as a submit button.
- That an empty username input shows a message when clicking submit.
- That an empty password input shows a message when clicking submit.
- That the error messages are dismissible.
- Clicking “submit” dismisses either of these when the input is valid.
Here’s the source-code of the application itself for those who are curious:
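(The original embed isn’t shown here; the sketch below is my reconstruction of a component matching the behaviors above — names and markup are assumptions, but the `data-test` hooks are the ones targeted later.)

```javascript
import React, { Component } from 'react';

class Login extends Component {
  state = { username: '', password: '', errors: {} };

  handleChange = (field) => (event) =>
    this.setState({ [field]: event.target.value });

  handleSubmit = (event) => {
    event.preventDefault();
    // Recompute errors on every submit: valid input clears old messages
    const errors = {};
    if (!this.state.username) errors.username = 'Username is required';
    if (!this.state.password) errors.password = 'Password is required';
    this.setState({ errors });
  };

  dismissError = (field) => () => {
    const errors = { ...this.state.errors };
    delete errors[field];
    this.setState({ errors });
  };

  renderError(field) {
    if (!this.state.errors[field]) return null;
    return (
      <p data-test={`error-${field}`} onClick={this.dismissError(field)}>
        {this.state.errors[field]} (click to dismiss)
      </p>
    );
  }

  render() {
    return (
      <form onSubmit={this.handleSubmit}>
        <input
          data-test="username"
          value={this.state.username}
          onChange={this.handleChange('username')}
        />
        {this.renderError('username')}
        <input
          data-test="password"
          type="password"
          value={this.state.password}
          onChange={this.handleChange('password')}
        />
        {this.renderError('password')}
        <button data-test="submit" type="submit">Submit</button>
      </form>
    );
  }
}

export default Login;
```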
The Tests
By now you should be somewhat comfortable with Navalia’s API, and familiar with React/Jest. To start off on our testing journey, here are the beginnings of our test suite:
Setup:
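Assuming the app is served locally (the URL is yours to change), a starting point looks like:

```javascript
import { Chrome } from 'navalia';

describe('Login form', () => {
  let chrome;

  // Each test gets a fresh browser…
  beforeEach(() => {
    chrome = new Chrome();
  });

  // …and closes it when finished
  afterEach(() => chrome.done());

  it('renders both inputs and a submit button', async () => {
    await chrome.goto('http://localhost:3000');
    expect(await chrome.exists('[data-test="username"]')).toBe(true);
    expect(await chrome.exists('[data-test="password"]')).toBe(true);
    expect(await chrome.exists('[data-test="submit"]')).toBe(true);
  });
});
```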
As of now we’re just testing that each input even exists, using the `exists` method. You’ll notice I’m using `data-test` attributes as selectors, which I think is a good practice as it separates your classes (which have style implications) from your tests (which have use-case implications). You can always target whatever selector you’d like, but selecting by class can quickly become an anti-pattern in larger projects.
Let’s dig in a bit further with some user-interaction.
Asserting errors appear when the form is invalid:
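For instance (the error selector is an assumption about your markup):

```javascript
it('shows an error when username is empty', async () => {
  await chrome.goto('http://localhost:3000');
  await chrome.click('[data-test="submit"]');
  expect(await chrome.exists('[data-test="error-username"]')).toBe(true);
});
```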
Here we assert that clicking submit without any input fails with an error message. Could you have even imagined this was possible in just 3–4 lines of code?!? What makes it even sweeter is the ability to compose these actions.
Action Composition:
With that now in place we can compose further upon it:
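One way to build that library of interactions is to make every action a plain function that takes a `chrome` instance and returns its promise; larger workflows then fall out of smaller ones. The helper names below are my own:

```javascript
// Each action takes a `chrome` instance and returns a promise,
// so actions can be sequenced and reused freely
const fillUsername = (chrome, username) =>
  chrome.type('[data-test="username"]', username);

const fillPassword = (chrome, password) =>
  chrome.type('[data-test="password"]', password);

const submit = (chrome) => chrome.click('[data-test="submit"]');

// A composed workflow: fill out the whole form, then submit it
const login = async (chrome, username, password) => {
  await fillUsername(chrome, username);
  await fillPassword(chrome, password);
  return submit(chrome);
};
```

Any test can now say `await login(chrome, 'name', 'pass')` instead of repeating three interactions.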
📖 The Story so far
We’ve pretty effectively solved the first few pain-points of E2E testing (setup and API), but we haven’t talked much about speed. Right now each assertion is fairly costly, as we start and close a live browser for every `it`. This is probably acceptable for a few quick tests, but what happens when you begin to write more elaborate suites such as this:
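Imagine something along these lines, still booting a fresh browser in every test (details are illustrative):

```javascript
import { Chrome } from 'navalia';

const url = 'http://localhost:3000';

describe('Login form', () => {
  let chrome;

  beforeEach(() => { chrome = new Chrome(); });
  afterEach(() => chrome.done());

  it('renders a username input', async () => {
    await chrome.goto(url);
    expect(await chrome.exists('[data-test="username"]')).toBe(true);
  });

  it('renders a password input', async () => {
    await chrome.goto(url);
    expect(await chrome.exists('[data-test="password"]')).toBe(true);
  });

  it('shows an error when username is empty', async () => {
    await chrome.goto(url);
    await chrome.click('[data-test="submit"]');
    expect(await chrome.exists('[data-test="error-username"]')).toBe(true);
  });

  // …five more tests in the same shape: the submit button, the empty-password
  // error, dismissing each error, and submitting with valid input — 8 in total
});
```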
Running this suite (even though it’s only 8 tests) costs nearly 10 seconds:
Test Suites: 1 passed, 1 total
Tests: 8 passed, 8 total
Snapshots: 0 total
Time: 9.161s // OUCH
Ran all test suites.
Yuck. This problem only gets worse as you begin to write more complex tests that span pages and API interactions. This issue prevents us from scaling, or even using, E2E tests at all.
Thankfully Navalia has a great way of handling this.
💪 Load-balancing Chrome
On top of the `Chrome` export, Navalia also exports a “browser-balancer”. This module can balance work across multiple browser instances, and even across tabs within a browser, making for a much, much faster test run. The API is just as straightforward as `Chrome`’s, and all of your test code largely remains the same.
Setup
const { Navalia } = require('navalia');
const navalia = new Navalia();
Very similar to `Chrome`, we simply construct a new instance to begin using for queries. To register work against this balancer, you simply call `run` with a callback. This function is called immediately if there are available resources, and queued if not.
Basic Example:
navalia.run((chrome) => chrome.goto(...));
`navalia.run` returns a `Promise`, which resolves after our function returns, making it easy to orchestrate further. In our case we merely want to send a signal that the test has completed. Our prior assertions change a bit, but not a lot.
Assertions with a Navalia instance
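The earlier username-error assertion, rewritten against the balancer, might look like this (selectors remain assumptions about your markup):

```javascript
const { Navalia } = require('navalia');

const navalia = new Navalia();

it('shows an error when username is empty', () =>
  navalia.run(async (chrome) => {
    await chrome.goto('http://localhost:3000');
    await chrome.click('[data-test="submit"]');
    expect(await chrome.exists('[data-test="error-username"]')).toBe(true);
  }));
```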
Since the `chrome` argument here is identical to the one used prior, all the compositions you write still remain valid, making the switch-over quite trivial.
Now that we have all of this in place, the test run drops a good chunk of time, since browser startup and shutdown are factored out:
Test Suites: 1 passed, 1 total
Tests: 8 passed, 8 total
Snapshots: 0 total
Time: 4.11s
Ran all test suites.
But we can do more…
🎉 Concurrency FTW
Jest exposes a `concurrent` method on `it` which tells Jest not to block tests on prior assertions. Since all of our assertions are self-contained (meaning no side-effects), and we have the ability to compose, we can take full advantage of this feature. The change is super trivial: `it` => `it.concurrent`.
Moving over to `it.concurrent` shaves off quite a bit:
Test Suites: 1 passed, 1 total
Tests: 8 passed, 8 total
Snapshots: 0 total
Time: 1.739s, estimated 2s
Ran all test suites.
I’ll wait for that to sink in…
This concurrency, on top of the load-balancing, cut our suite’s run time by more than 5x compared to where we started. Even better: each of these `run` executions gets its own Chrome context, meaning things like cookies and other storage mechanisms start empty, so there’s no bleed-through between tests.
Here’s the suite with all of the functionality discussed (written without composed functions so every step is explicit):
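A sketch of what that final suite looks like (URL and selectors remain assumptions about your app):

```javascript
const { Navalia } = require('navalia');

const navalia = new Navalia();
const url = 'http://localhost:3000'; // wherever your app is served

describe('Login form', () => {
  it.concurrent('renders a username input', () =>
    navalia.run(async (chrome) => {
      await chrome.goto(url);
      expect(await chrome.exists('[data-test="username"]')).toBe(true);
    }));

  it.concurrent('renders a password input', () =>
    navalia.run(async (chrome) => {
      await chrome.goto(url);
      expect(await chrome.exists('[data-test="password"]')).toBe(true);
    }));

  it.concurrent('renders a submit button', () =>
    navalia.run(async (chrome) => {
      await chrome.goto(url);
      expect(await chrome.exists('[data-test="submit"]')).toBe(true);
    }));

  it.concurrent('shows an error when username is empty', () =>
    navalia.run(async (chrome) => {
      await chrome.goto(url);
      await chrome.type('[data-test="password"]', 'hunter2');
      await chrome.click('[data-test="submit"]');
      expect(await chrome.exists('[data-test="error-username"]')).toBe(true);
    }));

  it.concurrent('shows an error when password is empty', () =>
    navalia.run(async (chrome) => {
      await chrome.goto(url);
      await chrome.type('[data-test="username"]', 'joel');
      await chrome.click('[data-test="submit"]');
      expect(await chrome.exists('[data-test="error-password"]')).toBe(true);
    }));

  it.concurrent('shows no errors when the form is valid', () =>
    navalia.run(async (chrome) => {
      await chrome.goto(url);
      await chrome.type('[data-test="username"]', 'joel');
      await chrome.type('[data-test="password"]', 'hunter2');
      await chrome.click('[data-test="submit"]');
      expect(await chrome.exists('[data-test="error-username"]')).toBe(false);
      expect(await chrome.exists('[data-test="error-password"]')).toBe(false);
    }));

  // …plus the two error-dismissal tests, for 8 in total
});
```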
🛣 The Road Ahead
I hope this has you considering Navalia as your browser-automation library, as I intend to grow it by leaps and bounds over time. You can read more about it at the doc-site, star it on GitHub, or file an issue/feature request. Currently the focus is getting the Chrome API “just right” before moving on to other vendors. There’s also the possibility of breaking changes, but hopefully not many at this point (and they’re captured in the CHANGELOG).
Thanks for reading, and happy testing!