I found a functional way to build Express’s middleware

Using functional design is the new cool thing. This is especially true when talking about JavaScript. Functional programming can make complex code much simpler and much shorter. I am going to highlight some code that I wrote for my Packt Publishing video course, The Complete Guide to Node.js.

I want to note first that this is not a perfect apples-to-apples comparison. It is also not a judgment on the code written for Express. Express was not built to be functional and cannot be faulted for not using functional ideas. I just want to highlight a functional way of accomplishing a similar task.

The code

We are going to look specifically at middleware in Express, focusing on the implementation of next. Middleware takes a stack of functions that need to run on a request: some middleware needs to run on every request, and some only on specific requests. This means that any function that is not the final item in the chain needs to continue the chain, which is done by calling the next function. If you have not used Express middleware, I recommend reading Express's documentation.
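If it has been a while, this is roughly what a standard Express middleware chain looks like (a minimal sketch for reference, not code taken from Express itself):

const express = require('express');
const app = express();

// runs on every request, then hands control to the next function in the chain
app.use((req, res, next) => {
  console.log(`${req.method} ${req.url}`);
  next();
});

// the final handler for GET / does not call next()
app.get('/', (req, res) => {
  res.send('Our Web Application');
});

app.listen(8080);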

Let’s look at how Express does this on GitHub. The code we will look at is in lib/router/index.js, specifically the handle function, starting at line 178. Again, I will point out that the Express version does more than just handle the middleware stack.

Here we can see that there are 12 ifs inside the while loop that walks over each item in the middleware stack. The good thing is that these ifs are not nested; if they were, it would be unreadable. Even so, it is a little difficult to quickly grok what is happening. Go ahead, jump in and see how long it takes you to work out what is going on.

My code

This is not production-ready code, but it shows a different way of approaching the same problem. Let's look at the code first and then discuss what is happening.

const url = require('url');

var routes = [];

var registerRoute = (method, url, fn) => {
  routes.push({ method: method, url: url, fn: fn });
};

var routeMatch = (route, url) => {
  return route === url || route === undefined;
};

var methodMatch = (routeMethod, method) => {
  return routeMethod === method || routeMethod === undefined;
};

var isError = (fn) => fn.length === 3;
var isNormal = (fn) => fn.length === 2;

var mapToRouteMatch = (reqUrl, reqMethod) => {
  return (route) => {
    return routeMatch(route.url, reqUrl) &&
      methodMatch(route.method, reqMethod);
  };
};

var handleRequest = (req, res) => {
  var matchedRoutes = routes
    .filter((route) => isNormal(route.fn))
    .filter(mapToRouteMatch(url.parse(req.url).pathname, req.method));
  try {
    matchedRoutes.some((route) => route.fn(req, res));
  } catch (e) {
    let errorRoutes = routes
      .filter((route) => isError(route.fn))
      .filter(mapToRouteMatch(url.parse(req.url).pathname, req.method));
    errorRoutes.some((route) => route.fn(req, res, e));
  }
};

module.exports.registerRoute = registerRoute;
module.exports.handleRequest = handleRequest;

This is 42 lines that completely implement routing middleware much like Express's. The routes array is the core data structure, and registerRoute simply pushes an object describing each route onto it. In fact, if you wanted to, you could change registerRoute's interface to make it even more like Express.

There are then four small matching functions (routeMatch, methodMatch, isError, and isNormal). They are each one line of code, so I won't spend time discussing them. Next there is a higher-order function, mapToRouteMatch. It takes a URL and a method and combines the results of the matching functions. This lets a route match on the method, the URL, both, or neither, which gives us the flexibility to run a piece of middleware on every request or only on one specific route.

One quick aside: mapToRouteMatch is really just partial application. It fixes the URL and method and returns a new function that expects a route, which it receives for each element when it is applied over the routes array.
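To make that concrete, here is a tiny illustration using the functions defined above (the route objects are made up):

// fix the request's URL and method once...
var matchesGetRoot = mapToRouteMatch('/', 'GET');

// ...then test each route object against them
matchesGetRoot({method: 'GET', url: '/', fn: (req, res) => true});           // true
matchesGetRoot({method: 'GET', url: '/error', fn: (req, res) => true});      // false
matchesGetRoot({method: undefined, url: undefined, fn: (req, res) => true}); // true, matches everything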

Finally we get to the core handling in handleRequest. Thinking functionally, there is a clear way to work out which pieces of middleware to run on a request: filter! We already have functions that can narrow the array down to just the middleware that takes two parameters (req and res) and matches the current URL and method. After that we run some over the results. some calls each function in order and stops as soon as one of them returns true, which is exactly the behavior we want: any function that is the final one in the chain simply returns true.
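If some is unfamiliar, here is a small standalone example of the short-circuiting behavior we are relying on:

var stack = [
  () => { console.log('logging middleware'); return false; }, // keep going
  () => { console.log('final handler');      return true;  }, // stop here
  () => { console.log('never runs');         return false; }
];

stack.some((fn) => fn()); // logs "logging middleware" then "final handler"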

This is all wrapped in a try/catch. If there is an error, we catch it, find each piece of error middleware, and execute it with the error. Let's see how to actually use this now.

Here is a simple server with two endpoints and six total middleware.

const http = require('http'),
      routes = require('./routes.js');

// middleware functions and route handlers
var log = (req, res) => {
  console.log(`${req.method} ${req.url}`);
  return false;
};

var poweredBy = (req, res) => {
  res.setHeader('X-Powered-By', 'ejosh.co/de');
  return false;
};

var index = (req, res) => {
  res.write("<html><head><title>Page</title></head>" +
    "<body><h1>Our Web Application</h1>" +
    "</body></html>");
  res.end();
  return true;
};

var createError = (req, res) => {
  throw new Error('this will always throw');
};

var defaultRoute = (req, res) => {
  res.end();
  return true;
};

var errorRoute = (req, res, err) => {
  res.write(err.message);
  res.end();
  return true;
};

routes.registerRoute(undefined, undefined, log);
routes.registerRoute(undefined, undefined, poweredBy);
routes.registerRoute('GET', '/', index);
routes.registerRoute('GET', '/error', createError);
routes.registerRoute(undefined, undefined, errorRoute);
routes.registerRoute(undefined, undefined, defaultRoute);

var server = http.createServer();
server.on('request', routes.handleRequest);
server.listen(8081, '127.0.0.1');

We are using the built-in HTTP server and requiring the code we just looked at as routes.js. The six middleware functions should look really familiar if you have ever written Express middleware. The main difference is that there is no next function: return false to continue processing and true to stop.

Next is the section where the routes are registered. This is more explicit than Express: every call has to pass all three parameters (method, URL, and function). Passing undefined for the method or URL means the middleware matches every request. Remember, order matters.

Finally, everything is wired up by starting the HTTP server and setting handleRequest as the request listener.

As you can see, we now have a functioning Express-like router and application in about 90 lines of code. The router does not have all of the features of Express, but hopefully it is clear where those features could be added. For example, regular expressions could be supported by another matching function used inside routeMatch. Ultimately, the main advantage is that finding and running the correct middleware has been simplified down to two filters followed by a short-circuiting some. This lets us keep the actual logic in very small, simple functions.
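For instance, here is a sketch of how routeMatch could be taught to accept a regular expression (illustrative only, not part of the code above):

// routes registered with a RegExp, e.g. registerRoute('GET', /^\/users\/\d+$/, fn),
// would now match URLs like /users/42
var routeMatch = (route, url) => {
  if (route instanceof RegExp) {
    return route.test(url);
  }
  return route === url || route === undefined;
};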

The Secret of Functional Programming in JavaScript

I just finished an amazing book by Luis Atencio, Functional Programming in JavaScript. It is published by Manning, where you can purchase it.

If you have been reading my blog, you will have seen that I have been trying to push myself further into the functional paradigm. I was classically trained as an object-oriented programmer, much like most of the programming world. The idea of functional programming initially just seemed weird, something that people talked about but never really implemented. Still, I was drawn to the compactness of the code and the ease with which someone can reason about what is happening.

I purchased this book as a MEAP (Manning Early Access Program), which means I received the eBook chapters as they were written, but I find it difficult to finish technical books in a digital format; I still prefer the physical book. The book has been out since the summer, but I have only now finished it. I highly recommend it to anyone wanting to learn more about functional programming.

The book starts off with the common arguments for using functional programming, much like every other article on the subject. I would not count that as a detraction, as Luis makes a good case for functional programming.

The book really becomes great in the second part, Get Functional. It takes a simple problem and shows the difference between building an imperative version and a functional version. This part finishes with one of the best descriptions of a monad I have ever read. Monads are one of those topics with a lot of poor explanations out on the Internet (I may one day add to that pile). There is a wonderful Douglas Crockford quote about monads.

In addition to it being useful, it is also cursed and the curse of the monad is that once you get the epiphany, once you understand – “oh that’s what it is” – you lose the ability to explain it to anybody.
Douglas Crockford

Luis, though, does not fall to this curse and nails both theory and application.

The final part of the book deals with performance and functional reactive programming. The book notes, correctly, that the value of functional programming is not in any performance gained but in writing clean, easy-to-understand code. Luis does a good job of wrapping up the core concepts and leaving the reader with somewhere to go next.

My next book is already purchased and sitting on my desk: Functional Reactive Programming from Manning. It is directly related, as it is one of the books recommended in Functional Programming in JavaScript.

Trecco – my First iOS app

I have not posted in a while, but I have been busy. This is a recurring theme with my blog: I write a string of posts and then get caught up writing code. Back to the post at hand.

Over the holidays I spent some time learning Swift and writing an iOS app: Trecco. It will record a voice note, use IBM's Watson to transcribe the note, and then save it to a Trello board as a card. The name Trecco comes from recording a note and saving it to Trello.

Trecco App on iPhone

I recommend that you download and use it, especially if you use Trello. Furthermore, I am using this opportunity to create a new book! While I do not have a title yet, the book will focus on building a Swift iOS app from the ground up, starting from an empty folder and going all the way through submitting it to the App Store. Every part will be covered.

I am hoping to release a sample chapter and video so you can get a feel for what the book will be like.

12 Days of Posts: Day 12

Technically this one is late; I did not account for all the things I would have to do on Christmas Eve. Picking a subject and writing around 200 words on it every day for eleven days was more difficult than I expected. I had chosen some subjects ahead of time, but three were pretty much day-of selections.

I wanted these posts to be about the things that I learned or focused on in 2015. The main thing I tried to do this year was program more functionally. The more I learn about functional programming, the more I like the way it makes me think, especially from a testing point of view. When writing code it is easy to create a lot of little edge cases and then have to write tests for all of them. If I change how I think about the problem, I can usually cut through all of that and end up with a simpler codebase. Functional programming pushes me to do that.

The other thing I have embraced is Docker. I love the ability to have my application run exactly the same in development as in production. In my mind this is another simplification of development, like functional programming. Once I have it working exactly how I want in development, there is really no work to get it running anywhere else.

This last year has been about writing and developing simpler software. I am not even close to where I think I can be, and in 2016 I plan on continuing this. Hopefully you will come along with me as you read my blog!

12 Days of Posts: Day 11 – explicit dependencies

Today we are going to talk about a subject that relates to Docker and how projects are set up. A few years ago, managing a software project was much more difficult. Any piece of software that is more than a few lines of code relies on other software, usually written by someone else, and managing those versions (if the software was even versioned!) was hard. In .NET, things went into the GAC, so you had to prep your environment before deployment. PHP was whatever you downloaded and threw into your project, and JavaScript was essentially the same; I remember downloading jQuery by hand for many projects. I will concede that many of these methods were not the best ways to do it even at the time.

Things started to get better. Python has virtualenv, which lets developers isolate entire dependency stacks, so you can work on the same project with different dependencies on the same machine. .NET has NuGet, and in addition to embracing NuGet, Microsoft has started to break large monolithic dependencies into small packages; look no further than ASP.NET 5 to see this in action.

Software projects are now composable. This is a key concept that I feel the entire industry forgot about for a long time. Unix has a philosophy of small, sharp tools: create a tool that does one thing and does it well. We seem to have come full circle back to this idea. We do not want large libraries that try to do everything; we want one tool that does something well.

I have one aside to this: pin your dependencies! I see many projects (I am looking at you, Node.js projects) where dependencies are loosely defined, for example a dependency referencing anything over version 2 when the latest version is 4. If you built your project with version 2.2.0, then explicitly define that in your dependencies. Small rant over.
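For example, in a package.json (the package names here are made up), the first entry will happily pull in whatever newer release is available, while the second always installs the exact build you tested against:

{
  "dependencies": {
    "loosely-defined-library": ">=2.0.0",
    "pinned-library": "2.2.0"
  }
}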

One of the reasons I have been doing so much Docker work is that I feel it is the current step in this idea. Docker is a virtualenv that is a little larger in scope. I know the idea of containerization is not new; the tooling, and by extension the ease, is new though. We have made development environments a commodity: we can wrap an entire production stack, download it to a new system, and just start it up. I am excited to see where we as a community will go next.

12 Days of Posts: Day 10 – functional state with Redux

Yesterday we finished the post with questions about how to store and use state. All applications have state and have to manage it; otherwise they would be static documents. This becomes even more difficult when we try to make our application functional, because functional applications try to minimize state changes as much as possible. How do we do this in a functional library like React?

The best way to visualize functional state is to imagine a series of actions carrying facts. Each fact is a small, immutable piece of data. We had a simple example last post about a list of items: an action with a fact would be "here is a new item", where the action is adding an item and the fact is the actual item. An application, then, is just a stream of actions with facts.

This idea lets us reason about state using a reducer function. A reducer is a way to summarize or aggregate a list of data. In our example, each action is run through a reducer, which then summarizes what the state should be. This means the list is built up as the application runs.
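Here is a minimal sketch of that idea in plain JavaScript (the action type and the items are just for illustration):

// an action is a small fact: "here is a new item"
var addItem = (item) => ({type: 'ADD_ITEM', item: item});

// the reducer folds each action into the state without mutating anything
var itemsReducer = (state, action) => {
  if (action.type === 'ADD_ITEM') {
    return state.concat([action.item]);
  }
  return state;
};

var actions = [addItem('milk'), addItem('eggs')];
var state = actions.reduce(itemsReducer, []); // ['milk', 'eggs']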

A great benefit of this is that we can store each action with its data and have a complete picture of what happened. We can easily recreate an exact state we want to test, or even play back errors.

When using React, there is a great library that implements state in exactly this way: Redux.
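Plugging the reducer from the sketch above into Redux looks something like this (assuming the redux package is installed):

var Redux = require('redux');

var store = Redux.createStore(itemsReducer, []); // [] is the initial state

store.subscribe(() => console.log(store.getState()));

store.dispatch(addItem('milk')); // ['milk']
store.dispatch(addItem('eggs')); // ['milk', 'eggs']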

My hope is that I have explained this well enough. If not, here is a great video series from the creator of Redux that goes more in-depth.

12 Days of Posts: Day 9 – thinking in React

We are going to continue our journey into the functional paradigm. In many posts I have made it clear that I like React, which leads us to the question: why?

In the simplest terms, React is functional. It is best utilized when functional ideas are used to design and build components. This forces most programmers to think differently. To think in React, as it were.

What makes React functional? Functional programming gets its name from mathematics. The definition of a function in mathematics is:

a function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output.

This means that we should build React components in such a way that they will only have one output for each input. This may seem simple, but becomes harder the more we think about it.

Imagine it is a few years ago and we are programming in jQuery (not that there's anything wrong with that). We have a list of items and a few different ways someone can add to it. Someone adds a new item; what is the output of the list? It depends. It depends on what has happened before, because the application state is held in the list itself. We cannot say with any certainty that an input will map to one output.

With React we just build a component that renders a list. The component does not worry about where the data comes from or how someone can modify the list. If 3 items are passed in, 3 items are rendered. This is functional.
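Here is a sketch of such a list component (assuming React 0.14+ stateless components and JSX; the names are illustrative):

// a stateless component: the same props always produce the same output
var ItemList = (props) => (
  <ul>
    {props.items.map((item) => <li key={item}>{item}</li>)}
  </ul>
);

// ReactDOM.render(<ItemList items={['one', 'two', 'three']} />, mountNode);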

The core idea of React is to build composable components. We start with very simple components which can then be put together to build larger and larger components until we have an entire application. Each one of these components should be functional in nature: they should not hold state and should only render what is handed to them. This shrinks the mental model needed to work on each separate component.

There is a good question to be asked here. If components do not hold state, how do we have an actual application? Does state not have to change? If no state changes, did we not just create a static HTML page? All good questions, and we will look at them tomorrow.

Here is a presentation I gave at a local meetup where I cover thinking in React.

12 Days of Posts: Day 8 – functional vs imperative

I would say that I am like most programmers, in that I have been trained to program in an imperative way. There is nothing wrong with this.

There is another programming paradigm: functional. If you have only programmed imperatively, functional programming forces you to reason about your application differently. It relies on the concepts of immutable data and avoiding side effects.

I am not making this post to declare a winner between these two. I feel they both have their places in this world. I do feel very strongly that knowing both is strictly better than just knowing one of them. Especially if the only one you know is imperative.

Let's look at a short functional vs imperative exercise. We will use JavaScript for this example; the language itself is almost a mixture of functional and imperative ideas thrown together. The main concept we will look at is not modifying state.

We will compare JavaScript's array methods slice and splice. slice is functional in that it does not modify the array when executed; splice is not, because it does modify the array. This may not seem like much, but it is a very important distinction.

The imperative thought is that we need to remove elements from the array, which is exactly what splice does. Functionally, we instead create a new array with only the elements we need, which has no side effects.

Here is the example in code:

function Slice() {
  var array = [1, 2, 3, 4, 5];
  console.log(array.slice(0, 3)); // [1, 2, 3]
  console.log(array.slice(0, 3)); // [1, 2, 3] - the array is never modified
  console.log(array.slice(0, 3)); // [1, 2, 3]
}

function Splice() {
  var array = [1, 2, 3, 4, 5];
  console.log(array.splice(0, 3)); // [1, 2, 3] - these elements are removed
  console.log(array.splice(0, 3)); // [4, 5]    - the array keeps shrinking
  console.log(array.splice(0, 3)); // []
}

console.log('This is functional');
Slice();
console.log('This is not');
Splice();

The example is also available on jsFiddle.

12 Days of Posts: Day 7 – why use Vagrant

We will continue covering system-related content in this post. Not that long ago it was very difficult to have a development system match a production system; making your local system match any other build was essentially impossible. There were steps you could take, but there were always small differences, and those differences could introduce issues that were very difficult to track down. The cliché, "it works for me", comes from this.

Docker is a great start at mitigating this. In fact, I would say a properly designed Docker setup goes almost all of the way toward fixing those differences. The next step past Docker is using something like Vagrant.

Vagrant is essentially a wrapper around VirtualBox (and other providers) that automates the creation and provisioning of virtual machines. It allows us to have, for all intents and purposes, a version of production on our local machine.

Are you using Ubuntu 14.04 in production? Use Ubuntu 14.04 in Vagrant and provision it with the same tools and configuration. At that point, as far as your application is concerned, the two environments are exactly the same, and you will not run into new or strange bugs caused by differences between them.

If you look over my post history (pushing five years already!), hopefully you will see growth in my projects. This is something I have really tried to embrace because I have personally been bitten by these issues. My last project included a way to run it completely locally in Vagrant, in addition to being able to push it to the cloud. I also plan on updating some of the more popular projects to use Vagrant as well.

12 Days of Posts: Day 6 – registering services with Registrator

Today we are continuing the topic of system orchestration. When using service discovery, you need to be able to register your services so they can be discovered. This is difficult in any case, but even more so with Docker.

For the most part I subscribe to the one-process-per-container paradigm. If we need a service registration process running alongside the container's main process, that immediately breaks one process per container. On top of that, every container you spin up has to be a custom container; you can no longer just grab the official Docker image of anything.

This is where Registrator comes in. Registrator automatically registers your containers as services with several service registries, Consul being one of them. It does this by listening for container start and stop events, which lets us bring up containers and have them automatically added to Consul.

I recommend going through the quickstart; you can have this running in just a few minutes.