Trecco – My First iOS App

I have not posted in a while, but I have been busy. I find that this is a recurring theme with my blog: I make a string of posts and then get caught up writing code. Back to the post at hand.

Over the holidays I spent some time learning Swift and writing an iOS app. Trecco is the app that I built. It records a voice note, uses IBM’s Watson to transcribe it, and then saves the transcription to a Trello board as a card. The name Trecco comes from recording a note and saving it to Trello.

Trecco App on iPhone

I do recommend that you download and use it, especially if you use Trello. Furthermore, I am using this opportunity to create a new book! While I do not have a title yet, the book will focus on building a Swift iOS app from the ground up. That means starting from an empty folder and going all the way through submitting the app to the App Store. Every part will be covered.

I am hoping to release a sample chapter and video, so you can get a feel for what the book will be like.

12 Days of Posts: Day 12

Technically this one is late. I did not account for all the things I would have to do on Christmas Eve. Picking a subject and writing around 200 words on it every day for eleven days was more difficult than I expected. I had chosen some subjects ahead of time, but three were pretty much day-of selections.

I wanted these posts to be about the things that I learned or focused on in 2015. The main thing I tried to do this year was program more functionally. The more I learn about functional programming, the more I like the way it makes you think. This is especially true from a testing point of view. When writing code I can easily create a lot of little edge cases, and then I have to write tests for all of them. If I change how I think about the problem, I can usually cut through all of that and end up with a simpler codebase. I think functional programming does that.

The other thing I have embraced is Docker. I love the ability to have my application run exactly the same in development as in production. In my mind this is another simplification of development, much like functional programming. Once I have it working exactly how I want in development, there is really no work to get it working anywhere else.

This last year has been about writing and developing simpler software. I am not even close to where I think I can be. In 2016 I plan on continuing this. Hopefully you will come along with me as you read my blog!

12 Days of Posts: Day 11 – explicit dependencies

Today we are going to talk about a subject that relates to Docker and how projects are set up. A few years ago, managing a software project was more difficult. Any piece of software that is more than a few lines of code relies on other software, usually written by someone else. Managing the versions (if the software was even versioned!) was very difficult. In .NET, things went into the GAC, so you had to prep your environment before deployment. PHP was whatever you downloaded and threw in your project. JavaScript was essentially the same; I remember downloading jQuery by hand for many projects. I will concede that many of these were not the best methods even at the time.

Things started to get better. Python has virtualenv, which allows developers to isolate entire dependency stacks, so you can work on the same project with different dependencies on the same machine. .NET has NuGet. In addition to embracing NuGet, Microsoft has started to break large monolithic dependencies into small packages. Look no further than ASP.NET 5 to see this in action.

Software projects are now composable. This is a key concept that I feel the entire industry forgot about for a long time. Unix has a philosophy of small, sharp tools: create a small tool that does one thing and does it well. We seem to have come full circle back to this idea. We do not want large libraries that try to do everything; we want small tools that each do one thing well.

I have one aside to this: pin your dependencies! I see many projects, Node.js projects in particular, where dependencies are loosely defined. For example, a dependency referencing anything above version 2 when the latest version is 4. If you built your project against version 2.2.0, then explicitly define that in your dependencies. Small rant over.
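
To make the rant concrete, here is a hypothetical package.json fragment (the package names are made up for illustration). The first range accepts anything at or above version 2, including 3 and 4; the caret range accepts any compatible 2.x release; the last entry pins the exact version the project was built against:

{
  "dependencies": {
    "some-library": ">=2.0.0",
    "another-library": "^2.2.0",
    "pinned-library": "2.2.0"
  }
}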

One of the reasons I have been doing so much Docker work is that I feel it is the current step in this idea. Docker is like a virtualenv with a larger scope. I know the idea of containerization is not new, but the tooling, and by extension the ease of use, is new. We have made development environments a commodity. We can wrap up an entire production stack, download it to a new system, and just start it up. I am excited to see where we as a community go next.

12 Days of Posts: Day 10 – functional state with Redux

Yesterday we finished the post with questions about how to store and use state. All applications have state and have to manage it; otherwise they would be static documents. This becomes even more difficult if we try to make our application functional, because functional applications try to minimize state changes as much as possible. How do we do this in a functional application built with something like React?

The best way to visualize functional state is to imagine a series of actions with facts. Each fact is a small, immutable piece of data. We had a simple example last post about a list of items. An action with a fact would be, “here is a new item”: the action is adding an item, and the fact is the actual item. An application, then, is just a stream of actions with facts.

This lets us reason about state using a reducer function. A reducer function is a way to summarize or aggregate a list of data. In our example, each action is run through a reducer, which then summarizes what the state should be. This means that the list is built up as the application runs.
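
Here is a minimal sketch of that idea in plain JavaScript, no Redux yet. The action shape ({type, item}) is just something I made up for illustration; the point is that the current state is nothing more than a reduction over every action that has happened:

function itemsReducer(state, action) {
  // Given the state so far and one action, return the next state.
  if (action.type === 'ADD_ITEM') {
    return state.concat([action.item]); // new array, no mutation
  }
  return state;
}

var actions = [
  { type: 'ADD_ITEM', item: 'milk' },
  { type: 'ADD_ITEM', item: 'eggs' }
];

// The state is just the stream of actions run through the reducer.
var state = actions.reduce(itemsReducer, []);
console.log(state);
// logs ["milk", "eggs"]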

A great benefit of this is that we can now store each action with its data and have a complete picture of what happened. We can easily recreate an exact state we want to test or even play back errors.

When using React, there is a great library that implements state in exactly this way: Redux.
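
Using Redux itself is only a small step from the earlier sketch. This assumes Redux is installed; the reducer is the same idea, with a default state added because Redux calls the reducer once with an undefined state to initialize the store:

var Redux = require('redux');

function itemsReducer(state, action) {
  state = state || []; // Redux initializes with undefined state
  if (action.type === 'ADD_ITEM') {
    return state.concat([action.item]);
  }
  return state;
}

var store = Redux.createStore(itemsReducer);

// Dispatching an action is how new facts enter the stream.
store.dispatch({ type: 'ADD_ITEM', item: 'milk' });
store.dispatch({ type: 'ADD_ITEM', item: 'eggs' });

console.log(store.getState());
// logs ["milk", "eggs"]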

My hope is that I have explained this well enough. If not, here is a great video series from the creator of Redux that goes more in-depth.

12 Days of Posts: Day 9 – thinking in React

We are going to continue our journey into the functional paradigm. In many posts I have made it clear that I like React, which leads us to the question: why?

In the simplest terms, React is functional. It is best utilized when functional ideas are used to design and build components. This forces most programmers to think differently; to think in React, as it were.

What makes React functional? Functional programming gets its name from mathematics. The mathematical definition of a function is:

a function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output.

This means that we should build React components in such a way that they will only have one output for each input. This may seem simple, but it becomes harder the more we think about it.

Imagine a few years ago, when we were programming in jQuery (not that there’s anything wrong with that). We have a list of items and a few different ways that someone can add to the list. Someone adds a new item; what is the output of the list? It depends. It depends on what has happened before. Application state is held in the list itself. We cannot say with any certainty that an input will map to one output.

With React we just build a component that renders a list. The component does not worry about where the data comes from or how someone can modify the list. If three items are passed in, three items are rendered. This is functional.
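
As a sketch, such a component can look roughly like this (using a stateless functional component, available since React 0.14; the items prop and the root element are assumptions made for the example):

// Same items in, same markup out. No internal state, no side effects.
function ItemList(props) {
  return (
    <ul>
      {props.items.map(function(item) {
        return <li key={item}>{item}</li>;
      })}
    </ul>
  );
}

// Whoever owns the data decides what gets passed in.
ReactDOM.render(
  <ItemList items={['one', 'two', 'three']} />,
  document.getElementById('root')
);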

The core idea of React is to build composable components. We start with very simple components, which can then be put together to build larger and larger components until we have an entire application. Each one of these components should be functional in nature. They should not hold state and should only render what is handed to them. This cuts down the size of the mental model needed to work on each separate component.

There are some good questions to be asked here. If no component holds state, how do we have an actual application? Does state not have to change? If no state changes, did we not just create a static HTML page? All good questions. We will look at them tomorrow.

Here is a presentation I gave at a local meetup where I cover thinking in React.

12 Days of Posts: Day 8 – functional vs imperative

I would say that I am like most programmers, in that I have been trained to program in an imperative way. There is nothing wrong with this.

There is another programming paradigm: functional. If you have only programmed imperatively, functional programming forces you to reason about your application differently. It relies on the concepts of immutable data and avoiding side effects.

I am not making this post to declare a winner between these two. I feel they both have their places in this world. I do feel very strongly that knowing both is strictly better than just knowing one of them, especially if the only one you know is imperative.

Let’s look at a short functional vs. imperative exercise. We will use JavaScript for this example; the language itself is almost a mixture of functional and imperative ideas thrown together. The main concept we will look at here is not modifying state.

We will compare JavaScript’s array methods slice and splice. slice is functional in that it does not modify the array when executed. splice is not, because it does modify the array. This may not seem like much, but it is a very important distinction.

Imperatively, the thought is that we need to remove elements from the array, and that is exactly what splice does. Functionally, we create a new array containing only the elements that we need. There are no side effects in doing this.

Here is the example in code:

function Slice() {
  var array = [1, 2, 3, 4, 5];
  // slice returns a new array each time; the original is never touched.
  console.log(array.slice(0, 3));
  console.log(array.slice(0, 3));
  console.log(array.slice(0, 3));
}

function Splice() {
  var array = [1, 2, 3, 4, 5];
  // splice removes the elements from the array, so each call sees less data.
  console.log(array.splice(0, 3));
  console.log(array.splice(0, 3));
  console.log(array.splice(0, 3));
}

console.log('This is functional');
Slice();
console.log('This is not');
Splice();

Here is the example on jsFiddle:
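
To tie this back to removing an element, here is one more small sketch of my own. The imperative version mutates the array in place with splice; the functional version builds a new array with filter and leaves the original alone:

var imperative = [1, 2, 3, 4, 5];
imperative.splice(imperative.indexOf(3), 1); // mutates the array
console.log(imperative);
// logs [1, 2, 4, 5]

var functional = [1, 2, 3, 4, 5];
var withoutThree = functional.filter(function(x) { return x !== 3; }); // new array
console.log(withoutThree);
// logs [1, 2, 4, 5]
console.log(functional);
// logs [1, 2, 3, 4, 5] (untouched)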

12 Days of Posts: Day 7 – why use Vagrant

We will continue covering system-related content in this post. Not that long ago it was very difficult to have a development system match a production system. Making your local system match any other build was essentially impossible. There were steps you could take, but there were always small differences. Those differences could introduce issues, and those issues would then be very difficult to track down. The cliché, “it works for me”, comes from this.

Docker is a great start at mitigating this. In fact, I would say a properly designed Docker setup goes almost all of the way toward fixing those differences. The next step past Docker is using something like Vagrant.

Vagrant is essentially a wrapper around VirtualBox (and other providers) that automates the creation and provisioning of virtual machines. It allows us to have, for all intents and purposes, a version of production on our local machine.

Are you using Ubuntu 14.04 for production? Use Ubuntu 14.04 for Vagrant. Provision it with the same tools and configurations. At this point, to your application it is exactly the same. You will never run into any new or strange bugs because of a difference between environments.

If you look over my post history (pushing five years already!), hopefully you will see growth in my projects. This is something I have really tried to embrace because I have personally been bitten by these issues. My last project included a way to run it completely locally in Vagrant, in addition to being able to push it to the cloud. I also plan on updating some of my more popular projects to use Vagrant as well.

12 Days of Posts: Day 6 – registering services with Registrator

Today we are continuing the topic of system orchestration. When using service discovery, you need to be able to register your services in order for them to be discovered. This is difficult in any case, but even more so with Docker.

For the most part I subscribe to the one-process-per-container paradigm. If we need a service registration process running alongside the container’s main process, that immediately breaks one process per container. In addition, every container you spin up would need to be a custom container; you would not be able to just grab the official Docker image of anything.

This is where Registrator comes in. Registrator automatically registers your containers as services with one of several service registries, Consul being one of them. It does this by listening for container start and stop events. This lets us bring up containers and have them automatically added to Consul.
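
To give a sense of how little is involved, the quickstart boils down to running the Registrator container pointed at your Consul instance, something along these lines (double-check the flags against the current documentation):

docker run -d \
  --name=registrator \
  --net=host \
  --volume=/var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator:latest \
  consul://localhost:8500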

I recommend going through the quickstart. You can have this going in just a few minutes.

12 Days of Posts: Day 5 – Service discovery with Consul

We are going to move away from JavaScript for a few days. Over the last few months I have been making quite a few posts about Docker, Ansible, and Vagrant. The main reason is that I recently moved my blog from a Linux server that was set up by hand to Docker containers that are automatically configured.

In this post I will touch on Consul. Consul is a service discovery tool. Service discovery allows us to decouple the creation and linking of Docker containers. In my setup I felt that my containers were too dependent on each other and on the docker-compose definition. This makes scaling horizontally very difficult. In addition, adding new services, say another web server for proxying, is harder than it should be.

This is where Consul comes in. When a container comes up, it can query Consul and find all the web servers it needs to know about. The container can also be alerted when a new container is created and react to it.
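
As a rough sketch of what that query can look like (this is not from my setup; it assumes Consul’s HTTP API is reachable on its default port 8500 and a service registered under the name web):

var http = require('http');

// Ask Consul's catalog for every instance of the 'web' service.
http.get('http://localhost:8500/v1/catalog/service/web', function(res) {
  var body = '';
  res.on('data', function(chunk) { body += chunk; });
  res.on('end', function() {
    JSON.parse(body).forEach(function(service) {
      // ServiceAddress can be empty, in which case the node address is used.
      console.log((service.ServiceAddress || service.Address) + ':' + service.ServicePort);
    });
  });
});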

Unfortunately I do not have a real deployment to show yet, but you can play around with Consul in Docker with this command.

docker run -d --name=consul --net=host gliderlabs/consul-server -bootstrap

If you cannot tell, I am working on some new posts that will deal with Consul and Docker, so stay tuned.

12 Days of Posts: Day 4 – using call and apply

We have seen some ways that we can take advantage of JavaScript’s functional roots. This theme continues in this post as we look at call and apply.

call and apply are very similar. Both methods execute the function, set the this value, and pass in parameters. call takes each parameter explicitly, while apply takes an array of parameters. Here is an example using our trusty add function and whatIsThis.

function add(x, y){
  return x + y;
}

function whatIsThis() {
  console.log(this);
}

console.log(add.call(undefined, 1, 2));
// logs 3
console.log(add.apply(undefined, [1, 2]));
// logs 3
whatIsThis.call({prop: 'test'});
// logs {prop: "test"}
whatIsThis.apply({prop: 'test'});
// logs {prop: "test"}

We can see the differences in execution, but the result will be exactly the same.

These examples are arbitrary, so let’s build a slightly less arbitrary one. Imagine that we have a comparer object with a function that makes a comparison. We want the comparison function to be swappable at a later point in time. We can use apply to make this easy. Here is that example.

var newCompare = function(a, b) { return a > b; };

var comparer = {
  compareFunc: function() {
    return false;
  },
  compare: function compare(a, b) {
    // Delegate to whatever compareFunc currently is, keeping this as the context.
    return this.compareFunc.apply(this, [a, b]);
  }
};

console.log(comparer.compare(5, 1));
// logs false
comparer.compareFunc = newCompare;
console.log(comparer.compare(5, 1));
// logs true

The compare function just executes another function, which means we can swap it out whenever we need to. The best part is that the new function does not need to know about the current object, as it will have all of its context passed to it by apply.

I have a real-world example of this in my Webcam library. The library has a draw function that we can change out whenever we want. The new function will have access to the canvas, the 2d context, the 3d context, and the video stream. We can see the code on line 130 in video.js.

Here are all the examples in jsFiddle.