Zeit Now v2 workflow

A few tips on how to use Zeit Now v2 serverless with TypeScript and a little bit of testing.

I like Zeit Now v1, and now there is something even more powerful - Zeit Now v2. I have to admit, at first I was skeptical, but after writing a GitHub bot using Zeit I am excited. Now v2 has hugely shifted how I think about my code and how the deployment process works.

While writing the bot, I tried to use TypeScript (mostly to avoid writing tests), and in this blog post I will show a couple of tricks that got my development workflow into overdrive.

Basics

So let's start with a GitHub hook that will receive events from our GitHub App installation. We should write a server to ... wait, stop! No, we should write the event handler, don't worry about servers. So here is our file hooks/gh/index.ts that should receive the events

hooks/gh/index.ts
import { IncomingMessage, ServerResponse } from 'http'
module.exports = async (req: IncomingMessage, res: ServerResponse) => {
  // handle incoming request
}

Note: you will probably need to install TypeScript and the Node type definitions with npm i -D typescript @types/node and initialize tsconfig.json with npx tsc --init.
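
For reference, a minimal tsconfig.json along these lines should be enough to get going - the exact compiler options below are my own starting point, not something the builder requires:

tsconfig.json
{
  "compilerOptions": {
    "target": "es2017",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true
  }
}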

So, how will our hook get to the cloud? All we need is to map each file we are interested in deploying to a builder. There are static builders for serving HTML, PHP builders, Docker builders, full Express server builders, etc. But we are only interested in the default Node builder. We need TypeScript support, so we will need the canary version (as of February 2019) and an existing tsconfig.json.

now.json
{
  "version": 2,
  "builds": [
    {
      "src": "hooks/gh/index.ts",
      "use": "@now/node@canary"
    }
  ]
}

Every time we run the command now from the terminal, it goes through the files in the builds list (and src can be a wildcard, mind blown!), builds a new lambda if there are file changes, and deploys it to the cloud at a new immutable URL. So a single command in a monorepo can produce hundreds of separate deploys - and each deploy is super fast, because Now smartly computes what has changed for each lambda.

If we have a hundred separate lambdas, how do we provide uniform API endpoints? We can define a routing structure on top of individual deploys. For now, we are just using the file paths as the endpoints by default. For example, our hook will be accessible at a URL like https://folder-name-aoesid9xn.now.sh/hooks/gh.
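
When we do want nicer endpoints, routes can be declared right in now.json next to the builds. Here is a sketch of what that could look like - the wildcard src and the /webhook route are illustrations of what is possible, not something this particular deploy needs:

now.json
{
  "version": 2,
  "builds": [
    {
      "src": "hooks/**/*.ts",
      "use": "@now/node@canary"
    }
  ],
  "routes": [
    {
      "src": "/webhook",
      "dest": "/hooks/gh/index.ts"
    }
  ]
}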

Micro

So our request handler needs to decode the input body, perform its magic, and then respond. For simplicity, I will use Zeit micro.

hooks/gh/index.ts
import { IncomingMessage, ServerResponse } from 'http'
const { json, send } = require('micro')

module.exports = async (req: IncomingMessage, res: ServerResponse) => {
  try {
    const data = await json(req)
    // do our stuff
    send(res, 200)
  } catch (e) {
    console.error(e.message)
    return send(res, 400)
  }
}

We set the secret environment variables and can deploy the event handler to the Now cloud. We can always follow the logs to see what is going on

now logs -f https://folder-name-aoesid9xn.now.sh
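
As for the secret environment variables, here is a sketch of how they could be wired up (the secret and variable names below are made up for illustration): create the secret once with the now CLI and reference it from now.json using the @ prefix.

# the secret name gh-webhook-secret is just an example
now secrets add gh-webhook-secret "super-secret-value"

and the relevant part of now.json

{
  "env": {
    "GITHUB_WEBHOOK_SECRET": "@gh-webhook-secret"
  }
}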

Local development

But what about local development? Now v2 is fast enough to keep deploying code changes, doing something on GitHub and receiving events - but that's not the best way to develop code. We need to work locally.

There is micro-dev, which wraps the single event handler with an actual server, hot code reloading, etc. To use it with TypeScript we need ts-node and an intermediate file to register the .ts Node hook.

local/gh-hook.js
require('ts-node').register({
  transpileOnly: true
})
module.exports = require('../hooks/gh')

We can start local development with a script in package.json

package.json
{
  "scripts": {
    "local:gh-hook": "micro-dev local/gh-hook.js"
  }
}

Here is micro-dev in action

$ npm run local:gh-hook

> [email protected] local:gh-hook /my-folder
> micro-dev local/gh-hook.js


   ┌──────────────────────────────────────────────────┐
   │                                                  │
   │   Micro is running!                              │
   │                                                  │
   │   • Local:            http://localhost:3000      │
   │   • On Your Network:  http://10.130.4.201:3000   │
   │                                                  │
   │   Copied local address to clipboard!             │
   │                                                  │
   └──────────────────────────────────────────────────┘

Perfect, a local server with hot reloading. But we need to get a couple of real events from GitHub to work with.

ngrok

Here is where ngrok comes in. We have a team account with a reserved subdomain. So I can start npm run local:gh-hook and then from another terminal run ngrok http -subdomain=my-folder-bot 3000. This requires an authenticated ngrok CLI, but it works immediately.
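
In other words, local development is just two terminals running side by side (the subdomain here is an example name standing in for the one we reserved):

# terminal 1 - local server with hot reloading
npm run local:gh-hook
# terminal 2 - public tunnel to port 3000
ngrok http -subdomain=my-folder-bot 3000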

Now I have a permanent external domain that GitHub can call with events: https://my-folder-bot.ngrok.io/webhook, and it gets to my local event handler. And here is a cool thing: ngrok starts a local dashboard, where I can see each request, replay it, copy and save it into a JSON file.

inspect request in the ngrok dashboard

Cypress

If we have the request JSON bodies, we can install the Cypress test runner and use it as an API tester with a GUI. Just copy a request from GitHub and save it as a JSON fixture file.

cypress/fixtures/pr-opened.json
{
  "action": "opened",
  "number": 3376,
  "pull_request": {
    ...
  }
}
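
To point Cypress at the local micro-dev server, we can set the baseUrl in cypress.json so that relative URLs like /webhook in the tests resolve against it (port 3000 matches the micro-dev output above; adjust if yours differs):

cypress.json
{
  "baseUrl": "http://localhost:3000"
}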

Here is a typical test where we load the fixture, use it as the request body, and assert that the response contains the expected result

cypress/integration/spec.js
/// <reference types="Cypress" />
context('Pull requests', () => {
  it('finds issues mentioned', function () {
    // fixture fixes issue 3353
    cy.fixture('pr-opened')
      .then(data => cy.request('/webhook', data))
      .its('body')
      .should('equal', 'handled opened pull request for issues 3353')
  })
})

The test passes and we can inspect each request and response in the Cypress Command Log

Cypress request inspection

So we capture test data using ngrok, write simple, focused code as individual functions, and let now package and deploy the lambdas to the cloud. No need to worry about servers, complex stacks of middleware, etc.

Note: ⚠️ adding Cypress as a dev dependency includes it in the lambda, slowing down the deploy. I have not found a way to exclude it yet.