The Great Migration of 2016

It has been a long time since I've blogged for fun. A lot has changed and a lot has remained.

It is my goal to start writing more about what I'm up to, as much as an archive for my kids, family, and friends to read as a way to flex the writing muscles. Most posts will continue on the nerdy theme of computers and programming. However, I do plan to write up summaries of activities involving family and friends, for when nostalgia or curiosity about a time in life comes up.

Initially, though, I will start to write up a series of posts about how I'm consolidating as much as possible of my digital life into servers and services that I run, and using Emacs to interact with those services as much as possible.

Blog migration

I was a happy Wordpress user and developer when blogging first became a "thing". Cranking out plugins helped pay the bills out of college and blogging about technical things is ostensibly why I got a job at 2600Hz in 2010.

There are certainly ways to interact with Wordpress installations via Emacs, but in the end I wasn't happy with them and wanted something more streamlined. Through some series of events, I came across Nikola and appreciated the minimalist nature of the default installation, the ease with which I migrated existing posts from Wordpress, and the ability to manage posts using Emacs' org-mode, which I've made a conscious effort to learn this year as well.

Git migration

Part of the appeal is that I can now put my posts, as they're static files, into version control. I've set up Gogs on my server and am transitioning my personal repos to it (and off of GitHub). I also have the full power of the command line (grep, awk, sed, etc.) to work with my blog's corpus.

Going forward

I'm excited by the prospects of these (and other) changes. My goal has been to reduce the applications I use with regularity to two: Emacs and a browser. The more I can accomplish in Emacs, the less friction there is to me getting things done, which is part of why I'm so excited. Emacs is a tool that has gotten out of my way to the point that I don't even think about most keybindings I use. Emacs has become a natural extension of my thought process, and as long as my fingers can keep up with my mind, there's no impedance from my editor.

Hopefully this is the restart of my blog; no excuses aside from laziness now!

Better Blogging via Emacs

Finally trying to figure out how to get back into writing a little more consistently. Since I spend so much time in Emacs, blogging from Emacs might help in that endeavor. To that end, I've installed org2blog, following a blog post on setting it up. One caveat: you'll need to download xml-rpc.el and add (require 'xml-rpc) to your .emacs file.

Using ibrowse to POST form data

It is not immediately obvious how to use ibrowse to send an HTTP POST request with form data (perhaps to simulate a web form post). It turns out it's pretty simple:

    ibrowse:send_req(URI, [{"Content-Type", "application/x-www-form-urlencoded"}], post, FormData)

Where URI is the endpoint you want to send the request to and FormData is an iolist() of URL-encoded values (e.g. "foo=bar&fizz=buzz"). There's obviously a lot more that can be done, but for a quick snippet, this is pretty sweet.
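A slightly fuller sketch, for the curious: ibrowse replies with a four-tuple on success (note that it returns the status code as a string, e.g. "200"). The URL and function name here are illustrative, not from a real project:

```erlang
%% Sketch: POST form data and pattern-match ibrowse's reply.
%% Assumes the ibrowse application is started; the URL is hypothetical.
post_form(FormData) ->
    Headers = [{"Content-Type", "application/x-www-form-urlencoded"}],
    case ibrowse:send_req("http://localhost:8000/submit", Headers, post, FormData) of
        {ok, "200", _RespHeaders, RespBody} ->
            {ok, RespBody};
        {ok, Status, _RespHeaders, _RespBody} ->
            {error, {unexpected_status, Status}};
        {error, _Reason}=Error ->
            Error
    end.
```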

Emulating Webmachine's {halt, StatusCode} in Cowboy

At 2600Hz, we recently converted our REST webserver from Mochiweb/Webmachine to Cowboy, with cowboy\_http\_rest giving us a comparable API to process our REST requests with. One feature that was missing, however, was an equivalent to Webmachine's {halt, StatusCode} return. While there has been chatter about adding this to cowboy\_http\_rest, we've got a function that emulates the behaviour pretty well (this is cleaned up a bit from our actual function, removing project-specific details).

    -spec halt/4 :: (#http_req{}, integer(), iolist(), #state{}) -> {'halt', #http_req{}, #state{}}.
    halt(Req0, StatusCode, RespContent, State) ->
        {ok, Req1} = cowboy_http_req:set_resp_body(RespContent, Req0),
        {ok, Req2} = cowboy_http_req:reply(StatusCode, Req1),
        {halt, Req2, State}.

Obviously you can omit setting the response body if you don't plan to return one.
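For a sense of how this gets used, here's a hedged sketch of calling halt/4 from a cowboy\_http\_rest callback; validate/1 is a hypothetical helper standing in for whatever request checking you do:

```erlang
%% Sketch: short-circuit malformed_request/2 with a 400 and a JSON body.
%% validate/1 is a placeholder for real request validation.
malformed_request(Req, State) ->
    case validate(Req) of
        ok ->
            {false, Req, State};
        {error, Msg} ->
            halt(Req, 400, ["{\"error\":\"", Msg, "\"}"], State)
    end.
```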

CouchDB/BigCouch Bulk Insert/Update

While writing a bulk importer for Crossbar, I took a look at squeezing some performance out of BigCouch for the actual inserting of documents into the database. The first time I pushed all the documents into BigCouch at once, performance was poor, so I went digging around for ideas on how to improve the insertions. Reading up on the High Performance Guide for CouchDB (which BigCouch is API-compliant with), I started to play with chunking my inserts to get better overall execution time. Note: the following are very unscientific results, but I think they're fairly instructive of what one might expect.

    Docs Per Insertion    Elapsed Time (ms)
    26618                 107176
    1000                  8325
    1500                  5679
    2000                  3087
    2500                  1644

Based on the CouchDB guide, I decided not to pursue this further, as dropping insertion time by nearly two orders of magnitude was good enough for me! I may have to bake this into the platform natively. For those interested in the Erlang code, it is pretty simple. Taking a list of documents to save, use lists:split/2 to try to split the list. By catching the error, we know the list is shorter than our threshold and can save the remaining list to BigCouch. Otherwise, lists:split/2 chunks our list into one piece for saving and one for recursing back into the function. Since we don't really care about the results of couch\_mgr:save\_docs/2, we could wrap the calls in the second clause of the case in a spawn to speed this up (relative to the calling process).

    -spec save_bulk_rates/1 :: (wh_json:json_objects()) -> no_return().
    save_bulk_rates(Rates) ->
        case catch(lists:split(?MAX_BULK_INSERT, Rates)) of
            {'EXIT', _} ->
                %% fewer than ?MAX_BULK_INSERT docs left; save the remainder
                couch_mgr:save_docs(?WH_RATES_DB, Rates);
            {Save, Cont} ->
                couch_mgr:save_docs(?WH_RATES_DB, Save),
                save_bulk_rates(Cont)
        end.

Life Update

Updated the blog to run 3.3.1; lots of cobwebs around these parts. Hopefully I can be more proactive in blogging about things going on at work, and perhaps start to write about what I'm up to personally (not that I have much of that right now). Maybe my Google stats will jump over the 0.3 hits I average! Dare to dream!

cURL stripping newlines from your CSV or other file?

I'm in the process of writing a REST endpoint for uploading CSVs to Crossbar as part of our communications platform at 2600hz. Not wanting to invoke the full REST client interface, I generally use cURL to send the HTTP requests. Today, however, I had quite the time figuring out why my CSV files were being stripped of their newline characters. The initial invocation:

    $> curl http://localhost:8000/v1/path/to/upload -H "Content-Type: text/csv" -X POST -d @file.csv

Walking through the code, from where I was processing the CSV down to the webserver handling the connection itself, looking for who was stripping the newlines, I determined the data was arriving sans newlines, so I checked cURL's man pages for what might be amiss. I quickly found that the -d option treats the file as ASCII, and although the docs don't explicitly say so, this option strips the newlines. The fix is to use the --data-binary flag so cURL doesn't touch the file before sending it to the server.
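For the record, a sketch of the corrected invocation (same illustrative URL and filename as above):

```shell
# --data-binary posts file.csv byte-for-byte, preserving newlines;
# plain -d/--data treats the input as ASCII and strips CR/LF.
curl http://localhost:8000/v1/path/to/upload \
     -H "Content-Type: text/csv" \
     -X POST \
     --data-binary @file.csv
```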

Cron and infinite loops do not mix

More "expert" code time! From the "expert":

Please put this script in a cron to run every minute

    while true; do
      rsync -a server:remote_dir local_dir
      sleep $freq
    done

local\_dir is going to be really, really, really up to date after a few minutes…the server crash will be epic. Perhaps we should write a script to find and kill these rogue processes and run it every minute too, but stagger it with the other cron…
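If the sync really must run from cron every minute, a safer shape is one sync per invocation, guarded so a slow run can't pile up behind itself. A sketch, with an illustrative lock path and the same placeholder rsync endpoints as above:

```shell
#!/bin/sh
# One sync per cron invocation; flock -n exits immediately if the
# previous run still holds the lock, so processes never stack up.
flock -n /tmp/mirror.lock rsync -a server:remote_dir local_dir
```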