Working for Hashicorp

Today is my first day working for Hashicorp!

I've been helping out with various parts of Hashicorp's open source software for a little while now, just on my own time late at night or over weekends. Having the opportunity to do what I love full time for an awesome company is an absolute blessing.

Hashicorp makes DevOps tooling. We are the company behind software like Vagrant, Packer, Serf, Consul, and Terraform.

I couldn't ask for a better team to work with. I haven't met everyone on our 6-person roster yet, but I'm excited to get started collaborating.

I hope to really find my stride here and become much more involved in the community, the vision, and the code itself. If you haven't already, check out Hashicorp and the projects we work on at hashicorp.com.

 
The blogless year: What I've been up to

It has been quite a while since I've posted anything here - almost an entire year, which doesn't normally happen. Things have been pretty busy, though: I got married, changed jobs, and found plenty of trouble along the way. There was a 3-month period where life was happening and I did almost nothing in the open source world, which is clearly visible in my GitHub profile activity stream.

However, I did work on a few interesting things in technology over the last year that I think are worth mentioning.

Oaf

Oaf is the Ruby framework I developed and mentioned in my previous post. Although it hasn't seen any major popularity, the tool has definitely been helpful to me on a number of occasions when I wanted to throw together a quick mock API that actually performed some function. As an example, I used it to prototype an orchestration REST API that wrapped Fabric; I think I wrote about 50 lines of bash scripts to make what I wanted happen. I definitely don't regret the time I spent making Oaf, especially given the continuous testing lessons I learned from it.

yum-rocket

I played around with parallelizing YUM downloads in RHEL6 with a small plugin called yum-rocket. It substituted urllib in place of URLGrabber to allow parallel downloads, at the cost of two things: single-file progress indication and HTTP keepalives. I didn't care much about per-file progress indication, since I could still see how many of the total files had been downloaded. The loss of keepalive connections, however, was interesting to observe: even with yum-rocket downloading on as many threads as possible, the original URLGrabber interface sometimes performed better. I think it really came down to which mirrors were selected for the particular session.

yum-rocket would also pull from multiple source repositories, which in some cases got around QoS/bandwidth limits that remote hosts enforce per connection. At times this proved extremely helpful, especially in conjunction with the yum-fastestmirror plugin, which essentially let you download from the fastest N mirrors concurrently. There were definitely cases where this hurt performance, though: if you have a local mirror with everything you need but have configured yum-rocket to span up to 5 mirrors, you will probably get worse performance than just sticking to the one local mirror. All in all, it was a good learning experience in plugin systems for interpreted languages.
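yum-rocket itself is a Python plugin for YUM, but the fan-out idea behind it is easy to sketch. Below is a rough illustration in Go (with hypothetical mirror URLs, not the plugin's actual code) of downloading a set of packages with a bounded pool of concurrent workers:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
)

// fetchAll downloads the given URLs using up to `workers` concurrent fetchers.
func fetchAll(urls []string, workers int) {
	jobs := make(chan string)
	var wg sync.WaitGroup

	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for url := range jobs {
				resp, err := http.Get(url)
				if err != nil {
					fmt.Println("error:", url, err)
					continue
				}
				// Discard the body here; a real downloader would write to disk.
				n, _ := io.Copy(io.Discard, resp.Body)
				resp.Body.Close()
				fmt.Printf("fetched %s (%d bytes)\n", url, n)
			}
		}()
	}

	for _, u := range urls {
		jobs <- u
	}
	close(jobs)
	wg.Wait()
}

func main() {
	// Hypothetical mirror URLs, for illustration only.
	fetchAll([]string{
		"https://mirror1.example.com/pkg-a.rpm",
		"https://mirror2.example.com/pkg-b.rpm",
		"https://mirror3.example.com/pkg-c.rpm",
	}, 2)
}
```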

slide.sh

This one was really me going off on a tangent with a little idea I had: creating slide decks in my terminal. At the time, I worked for Cisco Systems, and while my role there didn't ordinarily have me creating slide decks, I loathed PowerPoint so much that I made a shell script to display some text, pause, and advance to the next slide when I was ready. I used it a few times at Cisco, and it worked well. After that initial success, I added things like horizontal rules, colored text, in-slide pauses, and more. I presented with slide.sh at PuppetConf 2013, which went well, despite spilling over into GitHub's timeslot. Sorry, dudes.

pakrat

pakrat is a YUM repository snapshotting tool, built (almost exclusively) for use with RHEL6.

For a while, Keith Chambers and I were really into snapshotting package repositories as a way of version controlling an entire operating system. RightScale was doing it pretty much the way we wanted to, so we made a quick-and-dirty prototype using a curl script with a few loops in it. It worked surprisingly well; even when we didn't check on it for a few months, it was still chugging along, doing its thing, whenever we came back to it. I decided to make a more formal implementation, although I made a few mistakes in doing so. My first mistake was insisting on using the existing YUM libraries. This worked great on RHEL distributions, but outside of RHEL it was pretty much impossible to use, and now that RHEL6 and its version of the YUM libraries will soon be obsolete, the library makes little sense to use anywhere else. The other mistakes were more minor, things like the command-line syntax. Nonetheless, pakrat was another good learning experience, and it actually had a fairly decent progress indication subsystem.

serf

Since early this year, I have been helping @hashicorp implement some features in their peer discovery/orchestration framework known as serf. The Hashicorp folks (both @armon and @mitchellh) are just awesome to work with. I helped with the key rotation feature, among other things. The time I've spent working on serf and learning from the team has definitely not been misspent.

go-otp

While working on serf, @armon and I discussed the idea of using "one-time pads" as a way of ensuring perfect forward secrecy during Serf's key exchange. The final design ended up a little different, but the conversation led me to create go-otp, a very small reference implementation of the OTP concept and how it can be applied in golang.
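For anyone unfamiliar with the concept, here is a minimal sketch of a one-time pad in Go. This is not the go-otp API, just the underlying idea: XOR each byte of the message against a random pad of equal length, keep the pad secret, and never reuse it.

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// xorPad XORs msg against pad byte-for-byte. XOR is its own inverse, so the
// same function both encrypts and decrypts. The pad must be at least as long
// as the message, truly random, and used only once.
func xorPad(msg, pad []byte) []byte {
	out := make([]byte, len(msg))
	for i := range msg {
		out[i] = msg[i] ^ pad[i]
	}
	return out
}

func main() {
	msg := []byte("meet at dawn")

	pad := make([]byte, len(msg))
	if _, err := rand.Read(pad); err != nil {
		panic(err)
	}

	ciphertext := xorPad(msg, pad)
	plaintext := xorPad(ciphertext, pad)
	fmt.Printf("%x\n%s\n", ciphertext, plaintext)
}
```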

columnize

Also while working on serf, I cleaned up some of the command-line output by writing a simple little golang library called columnize. It takes a list of strings and produces neatly aligned columns of output without relying on tabs (\t). I've used it in a number of side projects, and it is currently in use in both serf and consul.
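Usage looks roughly like this, if memory serves: rows are pipe-delimited strings, and the output comes back space-aligned.

```go
package main

import (
	"fmt"

	"github.com/ryanuber/columnize"
)

func main() {
	// Each string is one row; columns are separated by "|".
	rows := []string{
		"NAME | ADDRESS | STATUS",
		"node1 | 10.0.0.1 | alive",
		"node2 | 10.0.0.2 | left",
	}

	// SimpleFormat aligns the columns with spaces rather than tabs.
	fmt.Println(columnize.SimpleFormat(rows))
}
```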

go-semver

go-semver is a library providing easy access to semantic versioning logic in golang. It allows you to parse version numbers and compare them.
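The comparison part is the interesting bit: version components have to be compared numerically rather than lexically, so that 1.2.10 sorts after 1.2.9. The sketch below is not go-semver's actual API, just an illustration of that logic, assuming plain major.minor.patch versions with no pre-release tags:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// compare returns -1, 0, or 1 if a is less than, equal to, or greater than b.
// It assumes both inputs are well-formed "major.minor.patch" strings.
func compare(a, b string) int {
	as := strings.SplitN(a, ".", 3)
	bs := strings.SplitN(b, ".", 3)
	for i := 0; i < 3; i++ {
		ai, _ := strconv.Atoi(as[i])
		bi, _ := strconv.Atoi(bs[i])
		switch {
		case ai < bi:
			return -1
		case ai > bi:
			return 1
		}
	}
	return 0
}

func main() {
	fmt.Println(compare("1.2.10", "1.2.9")) // 1: numeric, not lexical
	fmt.Println(compare("0.9.0", "1.0.0"))  // -1
}
```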

ruby-aptly

I spent about 6 months working with Ubuntu as a base operating system, since the product I was working on at the time ran OpenStack and the enterprise support was best for Ubuntu. I really hated using .deb packages and found the standard tooling nauseating, but thankfully a smart guy, @smira, realized the same thing and did something about it with aptly. While still young, aptly offers some impressive features, including repository snapshots and snapshot merging. I worked briefly with @smira on aptly, and eventually created a quick Ruby wrapper around its CLI called ruby-aptly. While not the most elegant thing I've written, it made writing higher-level logic around how to compose our release of Debian packages much easier than it could have been.

go-glob

While developing a new project in golang, I found it odd that POSIX basic (non-ERE) regular expressions were not implemented anywhere that I could find. This didn't bother me until I wanted to provide a command-line interface that allowed passing in a string glob using the '*' character as a wildcard. There were a few IO-related packages with functionality close to what I wanted, but no exact matches. I ended up implementing just the globbing part in a new golang package called go-glob.
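The globbing part itself is a small amount of logic. Here is a rough sketch of the idea (not go-glob's exact implementation): split the pattern on '*', anchor the first and last chunks to the ends of the subject, and require the middle chunks to appear in order between them.

```go
package main

import (
	"fmt"
	"strings"
)

// globMatch reports whether subj matches pattern, where '*' matches any
// (possibly empty) run of characters.
func globMatch(pattern, subj string) bool {
	parts := strings.Split(pattern, "*")
	if len(parts) == 1 {
		return pattern == subj // no wildcard: exact match only
	}

	prefix, suffix := parts[0], parts[len(parts)-1]
	if len(subj) < len(prefix)+len(suffix) ||
		!strings.HasPrefix(subj, prefix) ||
		!strings.HasSuffix(subj, suffix) {
		return false
	}

	// Middle chunks must appear, in order, between the prefix and suffix.
	middle := subj[len(prefix) : len(subj)-len(suffix)]
	for _, part := range parts[1 : len(parts)-1] {
		idx := strings.Index(middle, part)
		if idx < 0 {
			return false
		}
		middle = middle[idx+len(part):]
	}
	return true
}

func main() {
	fmt.Println(globMatch("node-*-west", "node-42-west")) // true
	fmt.Println(globMatch("node-*", "gateway-1"))         // false
}
```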

go-license

During development of another project, I found the need to scan the license text of different software packages and guess what license type the text described. I also saw a need for a standardized set of string identifiers for license types, which just doesn't exist right now. I ended up creating go-license in golang to satisfy both use cases.
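The guessing does not need to be sophisticated to be useful. A naive sketch of the approach (with hypothetical identifiers and phrases, not go-license's actual implementation) is to normalize the text and look for a phrase that is distinctive to each license:

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical identifier-to-phrase fingerprints, for illustration only.
var fingerprints = map[string]string{
	"MIT":        "permission is hereby granted, free of charge",
	"Apache-2.0": "apache license, version 2.0",
	"MPL-2.0":    "mozilla public license, version 2.0",
}

// guessLicense returns a license identifier and whether any fingerprint matched.
func guessLicense(text string) (string, bool) {
	// Normalize whitespace and case before matching.
	normalized := strings.ToLower(strings.Join(strings.Fields(text), " "))
	for id, phrase := range fingerprints {
		if strings.Contains(normalized, phrase) {
			return id, true
		}
	}
	return "", false
}

func main() {
	id, ok := guessLicense("Permission is hereby granted, free of charge, to any person ...")
	fmt.Println(id, ok) // MIT true
}
```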

huck

After making a career change and joining a team of engineers who think purely in cloud concepts, I saw a fairly common pattern in some of the things we were doing that could be standardized. So I spent a week of nights and weekends creating huck, a pretty small Ruby framework for tossing messages into a queue and executing logic based on them on the remote end. I went back and forth on whether to open source the framework, since there are probably better ways to implement some of huck's use cases, but I decided to release it anyway, because a framework is better than re-implementing something five different times with different sets of scripts that all do something similar.

So all in all, summer 2013 to summer 2014 was a pretty okay year in code for me, but I still think I can do better, and hopefully I can prove that this year. I have been hacking away on a new project that I plan to open source sometime in the coming months, and there will likely be more small golang libraries to support it as I develop it further.

 
Measuring the Quality of Code

What do you think about when you hear the phrase "code quality"? For me, it used to be mostly about the reliability of the code's operation, which is partly accurate but leaves much to be assumed. When considering a new project for non-experimental use, you would probably want answers to these three questions:

Does it work reliably?

Setting up a development environment and getting an application to work as expected once is normally pretty easy (as it should be). Determining whether a program will exhibit the same expected behavior under any number of other circumstances is a bit harder. A consumer of any application will want to know two things:

  • Does it work for most normal use cases?
  • Does it work for my edge cases?

Writing unit tests is a good way to take care of "most normal use cases" in an automated, repeatable way that is visible to any interested consumer without any footwork on their part. By including typical sunny-day (positive) test cases as well as negative test cases for as much of the code as possible, it becomes easier for consumers to determine whether the application in question meets their requirements.
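As a small illustration, here is what positive and negative cases might look like in a Go test. The parseSize helper and the config package are made up for the example (and included only so the snippet is self-contained):

```go
package config

import (
	"fmt"
	"strconv"
	"strings"
	"testing"
)

// parseSize is a tiny, hypothetical helper that converts strings like "10MB"
// into a byte count.
func parseSize(s string) (int64, error) {
	if !strings.HasSuffix(s, "MB") {
		return 0, fmt.Errorf("unsupported size format: %q", s)
	}
	n, err := strconv.ParseInt(strings.TrimSuffix(s, "MB"), 10, 64)
	if err != nil {
		return 0, fmt.Errorf("invalid size %q: %w", s, err)
	}
	return n * 1024 * 1024, nil
}

func TestParseSize(t *testing.T) {
	// Sunny-day case: well-formed input yields the expected byte count.
	got, err := parseSize("10MB")
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if want := int64(10 * 1024 * 1024); got != want {
		t.Fatalf("expected %d, got %d", want, got)
	}

	// Negative case: malformed input must return an error, not a bogus value.
	if _, err := parseSize("ten megabytes"); err == nil {
		t.Fatal("expected an error for malformed input")
	}
}
```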

Is it well tested?

Tests passing is a very good sign, but without taking a look at the code doing the testing, it doesn't really tell you much other than "something executed, and it worked out". The next step beyond unit testing is code coverage, which is gaining popularity in many projects. Some unit testing frameworks have code coverage built right in, and most others have a module that implements it. Essentially, it gives you a general idea of how much of the code is exercised during testing.

There are two useful ways to look at code coverage. The first, and most obvious, is a generated percentage, which tells you at a glance how much of the code is executed during your tests. The second is a generated report, which could be XML, JSON, or, if your test suite supports it, HTML. I find the HTML report easiest to digest, especially if it contains clickable links to any given class or include and a way of visualizing, line by line, whether the code was executed.
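In a Go project, for example, both views come from the standard toolchain:

```sh
# Run the tests and record a coverage profile
go test -coverprofile=coverage.out ./...

# The at-a-glance percentage, per function and in total
go tool cover -func=coverage.out

# The line-by-line HTML report
go tool cover -html=coverage.out
```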

Code coverage in the high 90s, or even 100%, does not come easily. I found that code that performs a fork() or similar is not easy to cover, since the code under test is then running in a separate child process that will likely not report coverage back to the parent. This means that the coverage rating indicated by your test suite is not always 100% accurate.

In multi-process or multi-threaded applications, the results from workers are easy to test if they are returned to main() (or its equivalent), and even testing their side effects is generally manageable. Some unit test frameworks allow you to claim coverage with annotations, even if the framework would otherwise mark a block of code as not covered. The reason this exists is clear, but I would argue that it is better to let coverage sit below 100% and expose the amount of code that is not truly considered in the line-for-line metrics.

Is it maintainable?

Code complexity is completely ignored by unit tests. It does not matter how complex or unreadable the code is. If it works, the tests pass.

Tests passing is a very machine-centric way of determining code quality. The fact of the matter is that humans write the code being tested, so how do you determine human readability and comprehension? If there is a 300-line method that forks twice and depends on the side effects of other bits of code in the application, how long would it take a new developer to fully understand what that method accomplishes and what its dependencies are, versus five or six smaller methods that accomplish the same thing?

Code Climate is an awesome approach to surfacing coding practices for open source applications. It measures things like the number of lines in a method, how many conditions each method contains, the number of nested loops, assignments, and several other factors to determine the overall complexity of any given code block. It also detects code repetition, which makes it very easy to tell when something needs to be broken out into a function. Finally, it rolls all of this up into an easy-to-understand "GPA" of sorts (0.0 through 4.0) that indicates at a glance what the overall complexity of the application looks like.

Having readable and comprehensible code is critical to the maintainability of an application, and often determines whether people submit patches or simply open bugs in hopes that the maintainer will fix them.

Exposing code quality metrics

There are a number of ways to expose the quality metrics of a given application. Some online quality and testing tools even provide a small badge image that can be embedded on a project's main web page to show the current metrics, which is great if you have an open source project with a publicly accessible home page or project namespace. Some examples include:

  • Travis CI
  • Coveralls
  • Code Climate

Exposing code quality metrics is like maintaining a front lawn - having a thick, lush green carpet across the yard shows passersby that it is cared for and enjoyed by its maintainer. In the same way, having unit tests that pass along with a high rating in both code coverage and code climate says something about the quality of a project that a good README file just can't.

Example Application

I recently released a small open source application called Oaf, which demonstrates all of the code quality tools mentioned in this post. Below are example "badges", which are a more-or-less live representation of build status, code coverage, and code climate:

[Build Status badge] [Coverage Status badge] [Code Climate badge]

You can also click on the above badges to view test output and coverage / climate trending.

A few bottom lines

  • Using metrics like build status, code coverage, and code climate can give you an idea of the state of code in an application.
  • Perfect ratings don't always mean better code.