12 Days of Posts: Day 11 – explicit dependencies

Today we are going to talk about a subject that relates to Docker and how projects are set up. A few years ago, managing a software project was more difficult. Every piece of software that is more than a few lines of code relies on other software, usually written by someone else. Managing the versions (if the software was even versioned!) was very difficult. In .NET, dependencies went into the GAC, so you had to prep your environment before deployment. PHP dependencies were whatever you downloaded and threw into your project. JavaScript was essentially the same; I remember downloading jQuery by hand for many projects. I will concede that many of these were not the best methods even at the time.

Things started to get better. Python has virtualenv, which allows developers to isolate entire dependency stacks; you can work on the same project with different dependencies on the same machine. .NET has NuGet. In addition to embracing NuGet, Microsoft has started to break large monolithic dependencies into small packages. Look no further than ASP.NET 5 to see this in action.

Software projects are now composable. This is a key concept that I feel the entire industry forgot about for a long time. Unix has a philosophy of small, sharp tools: create a small tool that does one thing and does it well. We seem to have come full circle back to this idea. We do not want large libraries that try to do everything; we want one tool that does something well.

I have one aside to this. Pin your dependencies! I see many projects (I am looking at you, Node.js projects) where dependencies are loosely defined. For example, a dependency referencing anything over version 2 when the latest version is 4. If you built your project with version 2.2.0, then explicitly define that in your dependencies. Small rant over.
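To make the rant concrete, here is a hypothetical package.json fragment (the package names are made up) showing the difference between a loose range and a pinned version:

```json
{
  "dependencies": {
    "loose-library": ">=2.0.0",
    "pinned-library": "2.2.0"
  }
}
```

The first entry will happily pull in version 3 or 4 on a fresh install; the second will always give you the exact version you built and tested against.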

One of the reasons why I have been doing so much Docker work is that I feel it is the next step in this idea. Docker is a virtualenv that is a little larger in scope. I know that the idea of containerization is not new; the tooling, and by extension the ease of use, is new, though. We have made development environments a commodity. We can wrap an entire production stack, download it to a new system, and just start it up. I am excited to see where we as a community will go next.

12 Days of Posts: Day 10 – functional state with Redux

Yesterday we finished the post with questions about how to store and use state. All applications have state and have to manage it; otherwise they would be static documents. This becomes even more difficult if we try to make our application functional, since functional applications try to minimize state changes as much as possible. How do we do this in an application built with a functional library like React?

The best way to visualize what functional state looks like is to imagine a series of actions with facts. Each fact is a small, immutable piece of data. We had a simple example last post about a list of items. An action with a fact would be, “here is a new item”. The action would be adding an item, and the fact would be the actual item. An application, then, is just a stream of actions with facts.

This idea allows us to reason about state using the idea of a reducer function. A reducer function is a way to summarize or aggregate a list of data. In our example, each action will be run through a reducer, which will then summarize what the state should be. This means that the list is being built as the application runs.
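To make this concrete, here is a minimal sketch in plain JavaScript; the action shapes and names here are mine, not Redux's API:

```javascript
// Each action is a small immutable fact: what happened, plus its data.
var actions = [
  { type: 'ADD_ITEM', item: 'milk' },
  { type: 'ADD_ITEM', item: 'eggs' },
  { type: 'REMOVE_ITEM', item: 'milk' }
];

// The reducer takes the current state and one action and returns new state.
// It never mutates the state it is given; it always builds a new list.
function listReducer(state, action) {
  switch (action.type) {
    case 'ADD_ITEM':
      return state.concat([action.item]);
    case 'REMOVE_ITEM':
      return state.filter(function (item) { return item !== action.item; });
    default:
      return state;
  }
}

// The current state is just the stream of actions folded through the reducer.
var state = actions.reduce(listReducer, []);
console.log(state); // ['eggs']
```

Notice that the whole history is still sitting in the actions array; the list itself is derived from it.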

A great benefit of this is that we can now store each action with its data and have a complete picture of what happened. We can easily create an exact state we want to test or even play back errors.

When using React there is a great library that we can use that implements state in this way. It is called Redux.

My hope is that I have explained this enough. If not, here is a great video series that goes more in-depth with the creator of Redux.

12 Days of Posts: Day 9 – thinking in React

We are going to continue our journey into the functional paradigm. In many posts I have made it clear that I like React, which leads us to the question: why?

In the simplest terms, React is functional. It is best utilized when functional ideas are used to design and build components. This forces most programmers to have to think differently. Think in React, as it were.

What makes React functional? Functional programming gets its name from mathematics. The definition of a function in mathematics is:

a function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output.

This means that we should build React components in such a way that they will only have one output for each input. This may seem simple, but becomes harder the more we think about it.

Imagine it is a few years ago and we are programming in jQuery (not that there’s anything wrong with that). We have a list of items and a few different ways that someone can add to the list. Someone adds a new item; what is the output of the list? It depends. It depends on what has happened before, because application state is held in the list itself. We cannot state with any certainty that an input will map to one output.

With React we will just build a component that renders a list. The component will not worry about where the data comes from or how someone can modify the list. If 3 items are passed in, that is what will be rendered. This is functional.
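As a sketch of that idea, here is a “component” written as a pure function of its input. I am using plain JavaScript that returns a markup string (rather than JSX) to keep it self-contained; the names are mine:

```javascript
// A functional component: a pure function from props to output.
// Same items in, same markup out -- no hidden state anywhere.
function ItemList(props) {
  var listItems = props.items.map(function (item) {
    return '<li>' + item + '</li>';
  });
  return '<ul>' + listItems.join('') + '</ul>';
}

console.log(ItemList({ items: ['a', 'b', 'c'] }));
// <ul><li>a</li><li>b</li><li>c</li></ul>
```

Call it a thousand times with the same three items and you get the same output a thousand times.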

The core idea of React is to build composable components. This means that we start with very simple components which can then be put together to build larger and larger components until we have an entire application. Each one of these components should be functional in nature. They should not hold state and only render what is handed to them. This reduces the size of the mental model needed to work on each separate component.
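Composition then falls out naturally. Here is a sketch in plain JavaScript (again returning markup strings instead of JSX; all names are mine) of a larger component built from two smaller ones:

```javascript
// Two small stateless components, each a pure function of its input.
function Title(props) {
  return '<h1>' + props.text + '</h1>';
}
function Count(props) {
  return '<p>' + props.items.length + ' items</p>';
}

// A larger component composed from the smaller ones. It holds no state;
// it only renders what is handed to it.
function Summary(props) {
  return '<div>' + Title({ text: props.title }) + Count({ items: props.items }) + '</div>';
}

console.log(Summary({ title: 'Groceries', items: ['milk', 'eggs'] }));
// <div><h1>Groceries</h1><p>2 items</p></div>
```

Each piece can be reasoned about, and tested, completely on its own.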

There is a good question to be asked. If all components do not hold state, how do we have an actual application? Does state not have to be changed? If no state is changed, did we not just create a static HTML page? All good questions. We will look at them tomorrow.

Here is a presentation I gave at a local meetup where I cover thinking in React.

12 Days of Posts: Day 8 – functional vs imperative

I would say that I am like most programmers, in that I have been trained to program in an imperative way. There is nothing wrong with this.

There is another programming paradigm: functional programming. If you have only programmed imperatively, functional programming forces you to reason about your application differently. It relies on the concepts of immutable data and avoiding side effects.

I am not making this post to declare a winner between these two. I feel they both have their places in this world. I do feel very strongly that knowing both is strictly better than just knowing one of them. Especially if the only one you know is imperative.

Let’s look at a short functional vs imperative exercise. We will use JavaScript for this example; the language itself is almost a mixture of functional and imperative ideas thrown together. The main concept we will look at here is not modifying state.

We will compare JavaScript’s array methods slice and splice. slice is functional in that it does not modify the array when executed. splice is not, because it modifies the array. This may not seem like much, but it is a very important distinction.

Imperatively the thought is that we need to remove an element from the array. That is exactly what splice does. Functionally we will create a new array with the elements that we need. There are no side effects by doing this.

Here is the example in code:

function Slice() {
  var array = [1, 2, 3, 4, 5];
  console.log(array.slice(0, 3)); //[1, 2, 3]
  console.log(array.slice(0, 3)); //[1, 2, 3]
  console.log(array.slice(0, 3)); //[1, 2, 3] -- the array is never modified
}

function Splice() {
  var array = [1, 2, 3, 4, 5];
  console.log(array.splice(0, 3)); //[1, 2, 3]
  console.log(array.splice(0, 3)); //[4, 5]
  console.log(array.splice(0, 3)); //[] -- the array has been emptied
}

console.log('This is functional');
Slice();
console.log('This is not');
Splice();

Here is the example on jsFiddle:

12 Days of Posts: Day 7 – why use Vagrant

We will continue covering system-related content in this post. Not that long ago it was very difficult to make a development system match a production system. The task of making your local system match any other build was essentially impossible. There were steps you could take, but there were always small differences. These differences could introduce issues, and those issues would then be very difficult to track down. The cliché, “it works for me”, comes from this.

Docker is a great start at mitigating this. In fact, I would say a properly designed Docker setup goes almost all of the way toward fixing those differences. The next step past Docker is using something like Vagrant.

Vagrant is essentially a wrapper around VirtualBox (and other providers) that automates the creation and provisioning of virtual machines. Vagrant allows us to have, for all intents and purposes, a version of production on our local machine.

Are you using Ubuntu 14.04 in production? Use Ubuntu 14.04 for Vagrant. Provision it with the same tools and configurations. At this point, to your application it is exactly the same. You are far less likely to run into new or strange bugs caused by differences between environments.
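As a sketch, a minimal Vagrantfile for that setup might look like the following; the box name, forwarded port, and provision script path are assumptions about your project:

```ruby
# Vagrantfile -- a minimal sketch, not a production configuration.
Vagrant.configure("2") do |config|
  # Use the same OS as production (Ubuntu 14.04 here).
  config.vm.box = "ubuntu/trusty64"

  # Forward the app's port so we can browse it from the host.
  config.vm.network "forwarded_port", guest: 80, host: 8080

  # Provision with the same script production uses.
  config.vm.provision "shell", path: "provision.sh"
end
```

A `vagrant up` from the directory holding this file builds and provisions the machine in one step.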

If you look over my post history (pushing 5 years already!), hopefully you will see a growth in my projects. This is something that I have really tried to embrace because I personally have been bitten by these issues. My last project included a way to completely run it locally in Vagrant, in addition to being able to push to the cloud. I also plan on updating some of the more popular projects to utilize Vagrant as well.

12 Days of Posts: Day 6 – registering services with Registrator

Today we are continuing the topic of system orchestration. When using service discovery, you need to be able to register your services for them to be discovered. This is difficult in any case, but even more so with Docker.

For the most part I subscribe to the one-process-per-container paradigm. If we need a service registration process that runs alongside the container’s main process, then that immediately breaks one process per container. In addition, every container you spin up will need to be a custom container; you won’t be able to just grab the official Docker image of anything.

This is where Registrator comes in. Registrator automatically registers your container as a service with a few service registries, Consul being one. Registrator does this by listening for container start and stop events. It allows us to bring up containers and have them automatically added to Consul.
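Based on the Registrator quickstart, starting it alongside a Consul agent looks roughly like this; the Consul address is an assumption about your setup:

```shell
# Registrator watches the Docker event stream via the mounted socket and
# registers each container's published ports with the Consul agent.
docker run -d --name=registrator --net=host \
  --volume=/var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator consul://localhost:8500
```

From then on, any container started with published ports shows up in Consul without the container knowing anything about it.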

I recommend going through the quickstart. You can have this going in just a few minutes.

12 Days of Posts: Day 5 – Service discovery with Consul

We are going to move away from JavaScript for a few days. The last few months I have been making quite a few posts about Docker, Ansible, and Vagrant. The main reason for this is because I recently moved my blog from a Linux server that was set up by hand to some Docker containers that are automatically configured.

In this post I will touch on Consul. Consul is a service discovery tool. Service discovery allows us to decouple the creation and linking of Docker containers. In my setup I felt that my containers were too dependent on each other and on the Docker Compose definition. This makes scaling horizontally very difficult. In addition, adding new services, say another web server for proxying, is harder than it should be.

This is where Consul comes in. When a container comes up, it can query Consul and find all the web servers it needs to know about. The container can also be alerted when a new container is created and react to it.

Unfortunately I do not have any real examples yet, other than that you can play around with Consul in Docker with this command:

docker run -d --name=consul --net=host gliderlabs/consul-server -bootstrap
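Once that container is up, you can poke at Consul’s HTTP API; for example, asking the catalog for every registered instance of a service (“web” here is a placeholder service name):

```shell
# List all registered instances of the "web" service from the local agent.
curl http://localhost:8500/v1/catalog/service/web
```

The response is a JSON array of nodes and addresses, which is exactly what a proxy container would consume.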

If you cannot tell, I am working on some new posts that will deal with Consul and Docker, so stay tuned.

12 Days of Posts: Day 4 – using call and apply

We have seen some ways that we can take advantage of JavaScript’s functional roots. This theme will continue for this post as we will look at call and apply.

call and apply are very similar. Both methods execute the function, set the this value, and pass in parameters. call takes each parameter explicitly defined, while apply takes an array of parameters. Here is an example using our trusty add function and whatIsThis.

function add(x, y){
  return x + y;
}

function whatIsThis() {
  console.log(this);
}

console.log(add.call(undefined, 1, 2));
//returns 3
console.log(add.apply(undefined, [1, 2]));
//returns 3
console.log(whatIsThis.call({prop: 'test'}, null));
//logs Object {prop: "test"}
console.log(whatIsThis.apply({prop: 'test'}, null));
//logs Object {prop: "test"}

We can see the differences in execution, but the result will be exactly the same.

These examples are arbitrary so let’s build a little less arbitrary example. Let’s imagine that we have a comparing object that has a function to make a comparison. We want to make it so that the comparison function can be changed at a later point in time. We can use apply to make this easy. Here is that example.

var newCompare = function(a, b) { return a > b; };

var comparer = {
  compareFunc: function() {
    return false;
  },
  compare: function compare(a, b) {
    return this.compareFunc.apply(this, [a, b]);
  }
};

console.log(comparer.compare(5,1));
//returns false
comparer.compareFunc = newCompare;
console.log(comparer.compare(5,1));
//returns true

The compare function just executes another function, which means that we can change it out whenever we need to. The best part is that the new function does not need to know about the current object, as it will have all of its context passed to it by apply.

I have a real world example of this in my Webcam library. The library has a draw function, but we can change it out whenever we want. The new function will have access to a canvas, the 2d context, the 3d context, and the video stream. We can see the code on line 130 in video.js.

Here are all the examples in jsFiddle.

12 Days of Posts: Day 3 – using bind with a function

We are already at day 3. We are continuing with JavaScript and building on what we have already covered. We are going to cover the bind method of the function object.

bind is used to define what this is inside of a function. It will return a new function (there is that ‘functions are first-class objects’ idea again!) and not modify the current function. Let’s look at an example. Here is a simple function:

function whatIsThis(){
  console.log(this);
}
whatIsThis();
//this will log the window object

The default this value for a function called this way (in non-strict browser code) will be the window object. We will now bind a new value for this using bind:

var newBoundWhatIsThis = whatIsThis.bind({property: 'Test'});
newBoundWhatIsThis();
//this will log Object {property: "Test"}

We have changed the value of this with bind. This will allow us to build functions that can operate on many different objects as long as the objects all have the same properties. Here is an example.

var josh = {name: 'Josh'};
var brian = {name: 'Brian'};
function outputName(){
  console.log(this.name);
}

outputName.bind(josh)();
outputName.bind(brian)();

Here outputName can operate on both objects.

Finally, we will use bind to partially apply parameters. This is the same concept as the last post. Any other parameters passed into bind will be prepended to the list of arguments when the function is executed later. Here are the examples from the last post, but this time with bind.

var boundAddOne = add.bind(window, 1);
boundAddOne(5);
//will return 6

var boundAddTen = add.bind(undefined, 10);
boundAddTen(5);
//will return 15

The add function does not do anything with this, so it does not really matter what we bind it to. We then pass in the first parameter so that a new function is returned that accepts the remaining parameter.

Here are all the examples on jsFiddle

12 Days of Posts: Day 2 – Partial application of a function

This day’s post is a continuation of the last post. In the last post we covered that a function can return a function, but we did not cover any use cases. This post will discuss a very useful one.

We have an add function:

function add(x, y){
  return x + y;
}

We can partially apply one of the parameters and return a new function. To reinforce what this means we will look at an example.

function partiallyApply(param, func){
  return function(nextParam){
    return func(nextParam, param);
  }
}

var addOne = partiallyApply(1, add);

The variable addOne holds the add function with one of its parameters partially applied (y is now always 1). When we execute partiallyApply we fix one parameter and return a new function, which allows us to supply the other parameter later. This means we can do this:

addOne(5);
//returns 6
addOne(10);
//returns 11
var addTen = partiallyApply(10, add);
addTen(5);
//returns 15

What good is this? Well, we are essentially passing in parameters at two separate points in time. I had a problem where I needed an index for an item in a list and the event object. The issue was that I did not have these two things at the same time; the event object is only created in response to an event. I partially applied the index first, which returned a function that would accept the event object. I have the code in the react_components.js file in my Where to eat repository.

The next post will continue the JavaScript theme.