Erlang, Primes, and Sieves Again (Lazy edition)

I was working on the Project Euler problems, specifically problem 7, and decided to implement a prime number generator building on my previous implementation based on the Melissa O'Neill paper. The problem says that, given that the 6th prime number is 13, find the 10,001st prime. Since we don't have a known upper bound to pass to the primes:queue/1 function, I created a lazy sieve that returns the next prime, its position, and an iterator (reusing the insert_prime/2 and adjust/2 helpers from that implementation). The code:

    -export([lazy_sieve/0]).

    lazy_sieve() ->
        Table = insert_prime(2, skew_kv:empty()),
        [ { 2, 1 } | fun() -> lazy_sieve({ 3, 2 }, Table) end ].

    lazy_sieve({ X, Pos }, Table) ->
        {NextComposite, _Value} = skew_kv:min(Table),
        case NextComposite =< X of
            true -> lazy_sieve({X+1, Pos}, adjust(Table, X));
            _Else -> [ {X, Pos} | fun() -> lazy_sieve({X+1, Pos+1}, insert_prime(X, Table)) end]
        end.

This can be used thusly:

    1>[{Prime1, Pos1} | Next1] = primes:lazy_sieve().
    % Prime1 = 2, Pos1 = 1, Next1 = fun()
    2>[{Prime2, Pos2} | Next2] = Next1().
    % Prime2 = 3, Pos2 = 2, Next2 = fun()
    3>[{Prime3, Pos3} | Next3] = Next2().
    % Prime3 = 5, Pos3 = 3, Next3 = fun()

This makes defining a function to find the nth prime trivial:

    -export([nth/1]).

    nth(N) ->
        nth(lazy_sieve(), N).

    nth([ {Prime, N} | _Next], N) -> Prime;
    nth([ {_Prime, _Pos} | Next], N) ->
        nth(Next(), N).
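
A quick sanity check against the problem statement (which tells us the 6th prime is 13) should give:

    1> primes:nth(6).
    13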

Lazy evaluation shows up twice: in the main sieve and in the value stored for each key in the queue. And the performance of nth is pretty good too (the left number is the time in microseconds, the right number is the 10,001st prime):

    9> timer:tc(primes, nth, [10001]).
    {1635839,104743} % 1.6 seconds
    10> timer:tc(primes, queue, [104743]).
    {1788997, % 1.7 seconds to generate the full list of primes
     [2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,
      79,83,89,97,101,103|...]}

So it takes 1.6 seconds to find the 10,001st prime and 1.7 seconds to generate the list of primes up to the 10,001st prime.

Erlang, Primes, and the Sieve of Eratosthenes

Working through Project Euler again, using Erlang to do so, I've found quite a few problems that deal with primes. It is important, then, to have a library of functions that makes generating and validating primes easy. A typical method for generating primes is the Sieve of Eratosthenes. As discussed in the wiki article, Melissa O'Neill has shown that the commonly cited Haskell implementation is not a faithful sieve, and she presents a couple of versions that are more true to the algorithm and more performant. I took the priority-queue implementation she describes on page 7 of the PDF, used a skew heap implementation I found as the priority queue, modified it slightly to handle a Key and a Value parameter, and away I went. Here's the implementation I came up with:

    %% primes.erl
    -module(primes).

    -export([queue/1]).

    queue(N) ->
        sieve_queue(lists:seq(2, N)).

    sieve_queue([]) ->
        [];
    sieve_queue([X|XS]) ->
        Table = insert_prime(X, skew_kv:empty()),
        [X | sieve_queue(XS, Table)].

    insert_prime(P, Table) ->
        skew_kv:insert(P*P, from(P*P, P), Table).

    sieve_queue([], _Table) ->
        [];
    sieve_queue([X|XS], Table) ->
        {NextComposite, _Value} = skew_kv:min(Table),
        case  NextComposite =< X of
            true -> sieve_queue(XS, adjust(Table, X));
            _Else -> [X | sieve_queue(XS, insert_prime(X, Table))]
        end.

    adjust(Table, X) ->
        {N, [Nprime | NS]} = skew_kv:min(Table),
        case N =< X of
            true ->
                T = skew_kv:delete_min(Table),
                T2 = skew_kv:insert(Nprime, NS(), T),
                adjust(T2, X);
            _Else -> Table
        end.

    %% from http://www.erlang.org/cgi-bin/ezmlm-cgi?4:mss:177:khdaceipfabbicmifdhf
    %% a lazy list that starts at K and increments by Inc
    from(K, Inc) ->
        [K|fun()-> from(K+Inc, Inc) end].

I tried to keep variable names similar to the O'Neill implementation, but I have modified it a bit to use lazy lists, Erlang-style. I'm not familiar with Haskell, but I believe its lists are lazy by default, whereas Erlang needs an explicit construct for lazy lists. Fortunately, it isn't terribly hard. So, the code! I named the function queue because I was also implementing prime number generators using other methods from the paper (the unfaithful sieve and a sieve with a map instead of a queue), which is why you call the function with:

    1> primes:queue(10).
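    % should return the primes up to 10: [2,3,5,7]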

I don't think I'll expound on the why, as O'Neill does a nice job explaining it in her paper, so I'll focus on the how and leave it to the reader to learn the why. My insert_prime function differs from the O'Neill insertprime because mine has to account for Erlang-style lazy lists. My assumption is that the map call in O'Neill's version creates a lazy list, so not all of the mappings are computed up front, since XS is a potentially huge list of numbers. I observed that the lists created by the map call are arithmetic progressions that start at p*p and increase by p. My simple lazy list that accomplishes this is from/2, which I adapted from a Joe Armstrong suggestion. Let's take a quick look at from/2. We pass in a number K and an increment Inc and get back a list with K at the head and a fun as the tail. The fun wraps a call to from/2, which acts as the iterator for the lazy list. So evaluating from/2 in the shell gives:

    1> primes:from(2, 2).
    [2|#Fun]

Here's an example showing the iterator in action:

    57> [K|F1] = primes:from(2, 2).
    [2|#Fun]
    58> [K2|F2] = F1().
    [4|#Fun]
    59> [K3|F3] = F2().
    [6|#Fun]
    60>

This lazy list is stored in the queue as the value for the key P*P (see insert_prime/2). The lazy list is then used in adjust/2 when we want to compare the smallest key in the queue to the current number we're evaluating. The pattern match in adjust/2 extracts the K into Nprime and the fun into NS from the from/2 call earlier. Calling NS() produces the next entry in the lazy list, [K+Inc | fun()], which adjust/2 inserts back into the queue as the value. Again, for the why, consult the O'Neill paper. Otherwise, the implementation is roughly the same. There are some improvements and optimizations that could be made, I'm sure, but for a rough draft it is fairly speedy, uses minimal memory, and for Project Euler it is more than adequate in producing the primes I need to solve the problems.
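
The skew_kv module itself isn't shown here. As a rough sketch (and not the exact module I used, which was a skew heap I found and lightly modified), a key/value skew heap exposing just the four calls primes.erl relies on might look like:

    %% skew_kv.erl -- illustrative sketch only; the real module may differ
    -module(skew_kv).

    -export([empty/0, insert/3, min/1, delete_min/1]).

    empty() -> empty.

    %% insert a {Key, Value} pair by merging in a singleton heap
    insert(Key, Value, Heap) ->
        merge({Key, Value, empty, empty}, Heap).

    %% smallest key and its value; the sieve never calls this on an empty heap
    min({Key, Value, _Left, _Right}) -> {Key, Value}.

    delete_min({_Key, _Value, Left, Right}) -> merge(Left, Right).

    %% skew merge: keep the smaller root, merge the other heap into its
    %% right child, then swap the children
    merge(Heap, empty) -> Heap;
    merge(empty, Heap) -> Heap;
    merge({K1, V1, L1, R1}, {K2, _, _, _} = H2) when K1 =< K2 ->
        {K1, V1, merge(H2, R1), L1};
    merge(H1, H2) ->
        merge(H2, H1).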

Red-black Trees in Erlang

Working through Chris Okasaki's "Purely Functional Data Structures", I couldn't find an Erlang version of the red-black tree implementation Chris shows on page 28. The code, also available as a pastie and as .erl and .hrl downloads (with more of my comments):

    %% redblack.hrl
    -record(node, {color, data, left, right}).

    %% redblack.erl
    %% Adapted from Okasaki "Purely Functional Data Structures" p. 28
    member(_X, undefined) ->
        false;
    member(X, #node{left=A, data=Y}) when X < Y ->
        member(X, A);
    member(X, #node{data=Y, right=B}) when X > Y ->
        member(X, B);
    member(_X, _S) ->
        true.

    insert(X, undefined) ->
        #node{color=black, data=X};
    insert(X, S) ->
        % to do recursive anonymous functions, pass the created fun in as 2nd parameter
        Ins = fun(undefined, _F) ->
                      #node{color=red, data=X}; % insert the new data (X) as a red node
                 (#node{color=Color, left=A, data=Y, right=B}, F) when X < Y->
                      balance(Color, F(A, F), Y, B);
                 (#node{color=Color, left=A, data=Y, right=B}, F) when X > Y->
                      balance(Color, A, Y, F(B, F));
                 (Node, _F) ->
                      Node
              end,
        #node{left=A, data=Y, right=B} = Ins(S, Ins),
        #node{color=black, left=A, data=Y, right=B}.

    %% detect Black->Red->Red Patterns and balance the situation
    balance(black, #node{color=red, left=#node{color=red, left=A, data=X, right=B}, data=Y, right=C}, Z, D) ->
        #node{color=red, left=#node{color=black, left=A, data=X, right=B}, data=Y, right=#node{color=black, left=C, data=Z, right=D}};
    balance(black, #node{color=red, left=A, data=X, right=#node{color=red, left=B, data=Y, right=C}}, Z, D) ->
        #node{color=red, left=#node{color=black, left=A, data=X, right=B }, data=Y, right=#node{color=black, left=C, data=Z, right=D}};
    balance(black, A, X, #node{color=red, left=#node{color=red, left=B, data=Y, right=C}, data=Z, right=D}) ->
        #node{color=red, left=#node{color=black, left=A, data=X, right=B }, data=Y, right=#node{color=black, left=C, data=Z, right=D}};
    balance(black, A, X, #node{color=red, left=B, data=Y, right=#node{color=red, left=C, data=Z, right=D}}) ->
        #node{color=red, left=#node{color=black, left=A, data=X, right=B }, data=Y, right=#node{color=black, left=C, data=Z, right=D}};
    balance(Color, Left, Data, Right) ->
        #node{color=Color, left=Left, data=Data, right=Right}.

So, I'm going to go through this, mainly to prove to myself that I kinda get it, and because Ben will complain I didn't dumb it down enough for him. The record in redblack.hrl defines the node structure, comparable to a struct in C.

Member

The clauses of member/2 define the member function. Each member(...) -> ... is a function clause, essentially a case statement on steroids. The first clause says that if the tree is undefined, then whatever X is, it is not a member of undefined, so return false. The second clause unpacks the current node ( #node{left=A, data=Y} ) and assigns the values of left and data to A and Y, respectively (I have tried to maintain Chris's variable names as much as possible, though I think Erlang's record syntax complements the Standard ML syntax found in the book). The clause then compares Y, the node's data, to the query X, and if X < Y, checks the left branch (the variable A) of the node. The third clause is nearly identical, except it handles X > Y and checks the right branch (B) for X's existence. The fourth and final clause handles a match (because when X is neither greater than nor less than Y, it must be equal), returning true. Simple enough to follow and grok.

Insertion

When I read several different articles online concerning red-black trees, the insertion routine sounded quite complex to me. I was quite pleased when I read the algorithm in the book, as it crystallized my thoughts and allowed me to grok the action. The first clause of the insert function handles the empty (or undefined) tree, creating and returning a black node (since the root of the tree must be black). The second clause takes the new data X and a tree S and creates a new function, stored in the variable Ins. Now, calling an anonymous function recursively is tricky, and I will explain why this part varies slightly from the text. Since there's no way to refer to the function (as it isn't named), we have to pass the anonymous function to itself as a second parameter, which is what happens in the Ins(S, Ins) call at the end of insert/2. So when you see F(A, F), mentally think Ins(A).

The first parameter to Ins() is our current node in the tree. If it is undefined, we insert our new data as a red node. Why? That is discussed at length elsewhere, but it boils down to it being easier to fix a red violation than a black violation (look up the constraints a red-black tree has on it, and the violations will make more sense). The next two clauses unpack the node and make our less-than/greater-than comparison like a normal BST. What's different is that we make the appropriate recursive insert call (remember F(A, F)? That inserts X somewhere into the left side of the node, and F(B, F) inserts X somewhere into the right side) and pass the result to the balance function. The final clause handles a match between X and Y and returns the node and its branches unmodified.

Balance

Why do we need a balance function? In case the data being inserted into the tree is more ordered than we may have expected. A vanilla BST with inputs [3,2,1] will be a glorified list. We force the situation by inserting a red node into the tree, which can create a situation where the parent of the newly inserted red node is also red. This violates the constraint that a red node's children must be black nodes. There are four possible combinations of this violation:

  1. (B)-Left->(Red)-Left->(Red)
  2. (B)-Left->(Red)-Right->(Red)
  3. (B)-Right->(Red)-Left->(Red)
  4. (B)-Right->(Red)-Right->(Red)

Each of those possibilities maps to the first four clauses of the balance function. The fifth clause passes a node back unchanged. So let's look at the first clause. This clause is activated when a black node's left child is red, and that red node's left child is also red.

          B3
         /   \
       R2     *
      /   \
    R1     *

So we unpack the three nodes and return the three nodes rotated to remove the red violation.

    Variable   Meaning           Value
    A          R1's left node    undefined
    X          R1's data         1
    B          R1's right node   undefined
    Y          R2's data         2
    C          R2's right node   undefined
    Z          B3's data         3
    D          B3's right node   undefined

Unpacking from the pattern match, we then pack up a new tree structure according to the clause's definition:

    #node{color=red, left=#node{color=black, left=A, data=X, right=B}, data=Y, right=#node{color=black, left=C, data=Z, right=D}};

This essentially replaces the black node having a red child and red grandchild with a red node that has two black nodes for children. The next illustration shows the result of the balance call:

          R2
         /   \
       B1    B3

Now, the bubbling up of the red node may cause a red violation with the parent and grandparent of R2, which will be handled as the recursion of Ins() unwinds. When the initial Ins(S, Ins) call returns, the resulting node is the root of the new tree, which we then unpack into the data, left, and right fields. We then force the root node to be black by rebuilding it from the unpacked parts with color=black. So there you have it. Simplified, with a lot of things glossed over/ignored, but this is my port of the red-black code listing on page 28. Looking forward to progressing through the book more and increasing my data structure- and Erlang-fu. Further fun: Read Chris's reflections 10 years after publishing his book.
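
The listing above omits the -module and -export lines; assuming member/2 and insert/2 are exported from a redblack module, a quick shell session exercising the port might look like:

    1> T = lists:foldl(fun redblack:insert/2, undefined, [3, 2, 1, 5, 4]).
    % the shell prints the resulting tree term here
    2> redblack:member(4, T).
    true
    3> redblack:member(6, T).
    false

Inserting [3,2,1] in order is exactly the case the balance function exists for, so this also exercises the rebalancing clauses.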

Quiero decir español mejor

I want to learn Spanish, and while I wish I could just move somewhere and be immersed, that is not feasible at this juncture. So I am reaching out to the open source world to help me along.

The first step was to get a flashcard-like system that would provide me with a way to test my vocabulary and keep track of my progress. To that end, I have installed Mnemosyne. Read all about it because it is a pretty groovy program for flashcard learning. I then downloaded and installed the four Spanish card packages available through the site. The site has many languages supported, as well as a variety of other topics to study (including Swedish road signs). This gives me my base of vocabulary and expressions.

Second on the list of tools is the Firefox plugin ImTranslator. This is a great plugin for doing on-the-fly translations. So when I browse a site in Spanish, or come across something I want to know how to say in Spanish, this plugin gets me going in the right direction.

Using the default media player in Ubuntu, Rhythmbox, and its ability to play online radio stations, I've searched for and subscribed to several Spanish music and news stations. So now I get to hear music, news, and commercials from several different Spanish cities (Madrid and Barcelona stations currently, though I plan to add some Central and South American stations since that's where I'll likely travel to first).

And, for some extra vocab, I signed up with the Spanish Word-a-Day mailing list. I've really enjoyed the emails as they have a word, pronunciation guide, synonyms, the word in a sentence, and usually some Spanish trivia, like a joke, expression, or conjugation table for a verb/tense.

The most important piece, however, is actually conversing with native speakers, and I am lucky to have a Mexican restaurant across the street with several native speakers who I've gotten to know. Any other tools you suggest? I think my next one will be finding folks who speak both English and Spanish and chatting with them via GChat or Skype.

Symfony/Propel Memory Issues

I have a bulk import process that needs running nightly. Currently there are around 4,200 "rows" to process, which actually can encompass many tables so row is not entirely appropriate. The problem is that the script poops out after ~200 "rows" with memory limit errors. While increasing the memory limit is do-able, I am not interested in that as a solution currently. First, I recorded some numbers to benchmark where the script was in memory consumption. The first number is the "row", the peak is how much memory the script has allocated, and the mem is the delta in memory allocation per row.

    0  peak: 41,103,408  mem: 29,884,416
    1  peak: 42,440,264  mem: 1,310,720
    2  peak: 43,613,848  mem: 1,310,720
    3  peak: 43,893,960  mem: 262,144
    4  peak: 44,223,040  mem: 262,144
    5  peak: 44,896,296  mem: 786,432
    6  peak: 45,671,560  mem: 786,432
    7  peak: 45,865,952  mem: 0
    8  peak: 46,418,272  mem: 786,432
    9  peak: 47,917,888  mem: 1,310,720
    10 peak: 48,566,312  mem: 786,432

After the initialization phase, memory allocation increases fairly steadily, and it is not long until the 128M memory limit is reached. This is unacceptable, as I know the deltas for some "rows" should be much closer to 0, since nothing is imported for the majority of rows. My first solution was to disable logging:

    sfConfig::set('sf_logging_enabled', FALSE);

The initial memory allocation was decreased, but the running deltas remained higher than expected. Second, I inserted a ton of unset() calls in the various functions. This dropped my deltas a little:

    0  peak: 30,611,472  mem: 20,971,520
    1  peak: 31,969,760  mem: 1,310,720
    2  peak: 32,180,112  mem: 262,144
    3  peak: 32,333,216  mem: 262,144
    4  peak: 32,525,144  mem: 262,144
    5  peak: 32,656,616  mem: 0
    6  peak: 32,863,736  mem: 262,144
    7  peak: 33,103,264  mem: 262,144
    8  peak: 33,455,544  mem: 262,144
    9  peak: 33,754,288  mem: 262,144
    10 peak: 33,984,976  mem: 262,144

But allocation still killed the import before it could complete. Browsing through various sites, I discovered Propel had a hard time cleaning up circular references, which meant PHP couldn't garbage-collect that memory. However, to combat this, Propel 1.3 offers a static method disableInstancePooling that allowed me to override Propel's desire to keep instances around. Adding:

    Propel::disableInstancePooling();

to the beginning of the import gave me these results:

    0  peak: 26,569,632  mem: 17,301,504
    1  peak: 28,582,152  mem: 2,097,152
    2  peak: 30,455,352  mem: 1,835,008
    3  peak: 30,536,176  mem: 0
    4  peak: 30,536,176  mem: 0
    5  peak: 31,517,088  mem: 1,048,576
    6  peak: 31,534,152  mem: 0
    7  peak: 31,552,120  mem: 0
    8  peak: 31,589,632  mem: 0
    9  peak: 31,695,504  mem: 0
    10 peak: 31,695,504  mem: 0

Now new memory was allocated only when the import was actually doing something of significance. In fact, watching the import proceed with the deltas displayed, I could observe memory usage decreasing at times, prolonging the life of the script by orders of magnitude. Whereas before the script was processing ~200 "rows", it now processes the whole batch (currently 4,237 "rows") in one go. As the number of importable "rows" increases, I know I won't be butting up against memory limits for some time.

Letter to Representative Goodlatte Concerning the Commerce Clause

A letter to Congressman Goodlatte, concerning the invasive way the Commerce Clause is invoked to pry into the private lives of Americans.

Congressman Goodlatte, I would like to bring to your attention the cases of Wickard v Filburn (http://en.wikipedia.org/wiki/Wickard_v._Filburn) and Gonzales v Raich (http://en.wikipedia.org/wiki/Gonzales_v._Raich), wherein the Supreme Court found "that Congress can regulate purely INTRASTATE activity that is not itself “commercial,” in that it is not produced for sale, if it concludes that failure to regulate that class of activity would undercut the regulation of the interstate market in that commodity" (emphasis mine).

Congress has seemingly confused economics with commerce. Setting prices is not Congress' job. Setting quotas, to be met or not exceeded, is not Congress' job. Telling an American what they can and cannot produce, and in what quantities, for personal use, is not the job of Congress! Taking the Filburn case further, one can see it being applied to any number of self-sufficiency initiatives one may follow, like home gardening or personal solar/wind/hydro power. As we can see with HR 875 and S 425, regulating home gardening is already being proposed. I am glad to see that no Republicans are supporting it, but 41 Democrats are (HR 875, that is). As home power generation technology becomes more affordable and effective at displacing power drawn off the grid, how long until power companies join together to have Congress regulate how much energy a person can generate locally?

Congressman, this trend worries me, as I hope it worries you. We need strongly worded legislation that reins in Congress' ability to use the Commerce Clause as carte blanche to stick its nose in the economy. Congress has the power to regulate commerce between the Several States, NOT in the private lives of Americans, no matter how much Congress "thinks" it may affect the national economy. Please respond by letting me know how you are personally going to rein in Congress' power (something any true Republican would actually do, even though it reins in their own "power"). I would like to work with you on drafting this legislation so that it protects the individual's ability to take care of themselves and their families without Congress influencing those decisions.

Sincerely, James Aimonetti

iwlagn: Microcode SW error detected. Restarting 0x82000000.

If you have gotten the above error in your syslog, you probably cannot connect to wireless APs that require WPA passphrases, but can connect to ones that use WEP or no security. While searching for solutions, I found some that suggested controlling the power of the card (for temperature-related issues), but those did not address my issue. Fortunately, the fix for my problem was fairly simple. I am on Ubuntu Jaunty, with the 2.6.28-15 kernel. The resolution came when I looked in Synaptic at the installed linux-restricted-modules and found I had two instances from older kernels still installed. Purging those completely from the system and then retrying the connection to my WPA2-enabled AP succeeded. Give it a shot.

Converting a site to use Cachefly for static content

I recently needed to move static content from a live site to a Cachefly account. Rather than go through the directories looking for the resources (js/css/images) I needed to ftp, I thought, "Man, this sure sounds like it could be automated". The first step was to collect a list of items that needed ftping to Cachefly. I know what you're saying: "Use find!" In case Ben is reading this, find "searches for files in a directory hierarchy" (that's from find's man page, Ben). I wanted to separate the resources out, so I ran three different invocations. For javascripts and css, the invocations were nearly identical:

    find . -name '*.js' > js.libs
    find . -name '*.css' > css.libs

Images were a little trickier. Most of the images are static content, but some are user-generated, likely to change or be removed. These do not go up to the CDN (at least for now). The user-generated content is located under one directory (call it /images/usergen), so we simply need to exclude it from find's search.

    find -path '*images/usergen*' -prune -o -path . -iname '*.gif' -o -iname '*.jpg' -o -iname '*.png' > image.files

The important parts:

  • -path '*images/usergen*' -prune

    Remove any found items that contain images/usergen in the path name.

  • -o -path .

    Search within the current directory (the root of the project).

  • -iname '*.gif' -o -iname '*.jpg' -o -iname '*.png'

    Match, case-insensitively (-iname instead of -name), any files ending in gif, jpg, or png.

We are then left with three files, each line of which contains the path, relative to the project root, of a resource I want to upload. I created a simple PHP script to upload the images to Cachefly, maintaining the pathing, so an image with relative path /images/header/header_left.png would now be accessible at instance.cachefly.com/images/header/header_left.png.

The images are now up on the CDN, so now we need our code to point there as well. Fortunately, most of the resources were prepended with a domain (stored in the global $live_site). So the src attribute of an image, for instance, would be src="<?= $live_site ?>/images/header/header_left.png". Creating a $cachefly_site global, we now only need to find lines in our code that have the basic layout of "stuff...$live_site...stuff...png", where stuff is (.*) in regex land.

So we utilize two commands, find and egrep: find locates the files we want and egrep searches the found files for a regex that matches the resources in the code. First, we build the regex. We know a couple of elements that need to be present, and one that should not be present. Needed are live_site and a resource extension (js/css/jpg/png/gif); not needed is the "images/usergen" path, as it points to user-generated content. So the regex becomes:

    'live_site([^images/usergen])+.+(png|gif|jpg|css|js)'

This is the arg for egrep (the -l switch means print the file names that have a match, rather than the lines of a file that match):

    egrep -lr 'live_site([^images/usergen])+.+(png|gif|jpg|css|js)'

Now we need to tell egrep what files to search using find:

    find . -name "*.php" -exec egrep -lr 'live_site([^images/usergen])+.+(png|gif|jpg|css|js)' {} \;

We then store this list of files into a shell variable:

    export FILES=`find . -name "*.php" -exec egrep -lr 'live_site([^images/usergen])+.+(png|gif|jpg|css|js)' {} \;`

Now that we have the files we need, we can search and replace $live_site with $cachefly_site for resources. The go-to command for search and replace is sed. Generically, the sed command looks like this:

    sed -i 's/search/replace/g' FILE

We actually have two issues though. Due to the nature of the code, we have to account for the $live_site variable being passed in via the global keyword. So not only are we searching for resource files, but we also have to add $cachefly_site to the global lines to make sure $cachefly_site is defined within the function where output is generated. Searching and replacing resource files is pretty easy:

    sed -i '/live_site.+\|js\|css\|gif\|png\|jpg/s/live_site/cachefly_site/g' $FILES

$FILES, of course, came from our find/egrep call earlier. There is one catch to the regex used here. It is actually of a different generic form than mentioned above:

    sed -i '/contains/s/search/replace/g' FILE

With this format, we put a condition on whether to replace text: the regex in the "contains" portion must match before the search and replace is performed on that line. So our sed above says that if the line contains live_site, followed by anything, ending in one of the listed resources (\| means OR), then replace live_site with cachefly_site. I left off the $ since it's common to both variables. Running the sed command replaces everything nicely, but when we reload the page, we see notices about $cachefly_site being undefined and resources being pulled from the host and not Cachefly. So we need to handle the global importing. This one is a little trickier because we are not really replacing live_site with cachefly_site, but appending it to the list of imported globals. So a line like

    global $foo, $bar, $live_site, $baz;

becomes

    global $foo, $bar, $live_site, $cachefly_site, $baz;

The other trick is that the global line should not already contain $cachefly_site. We don't need that redundancy. So, without further ado, the sed:

    sed -i '/global.*live_site.*\(cachefly_site\)\{0\}/s/live_site/live_site,\$cachefly_site/g' $FILES

The "contains" portion matches the keyword global, followed by stuff, followed by live\_site followed by stuff, with cachefly\_site appearing exactly 0 times (denoted by \{0\}). This ensures we only replace live\_site when cachefly\_site is not in the line already. The "search" portion is easy; search for live\_site. The replace portion replaces live\_site with live\_site,$cachefly\_site. This takes into account when live\_site is followed by a comma or semi-colon so we don't get syntax errors. And that is basically how I converted a site to use cachefly for static content.

Re-assert The Federal Government's Role As An Agent Of the Several States

A template for a resolution for you to send to your state legislature requiring the Federal Government to rein itself back within its Constitutional constraints and cease imposing its will on the States. Remember, the Federal Government is an agent of the States, not the other way around.

  • WHEREAS, the Tenth Amendment to the Constitution of the United States reads as follows: "The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people"; and
  • WHEREAS, the Tenth Amendment defines the total scope of federal power as being that specifically granted by the Constitution of the United States and no more; and
  • WHEREAS, the scope of power defined by the Tenth Amendment means that the federal government was created by the states specifically to be an agent of the states; and
  • WHEREAS, today, in 2009, the states are demonstrably treated as agents of the federal government; and
  • WHEREAS, many federal laws are directly in violation of the Tenth Amendment to the Constitution of the United States; and
  • WHEREAS, the Tenth Amendment assures that we, the people of the United States of America and each sovereign state in the Union of States, now have, and have always had, rights the federal government may not usurp; and
  • WHEREAS, Article IV, section 4, United States Constitution, says in part, "The United States shall guarantee to every State in this Union a Republican Form of Government", and the Ninth Amendment states that "The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people"; and
  • WHEREAS, the United States Supreme Court has ruled in New York v. United States, 112 S. Ct. 2408 (1992), that Congress may not simply commandeer the legislative and regulatory processes of the states; and
  • WHEREAS, a number of proposals from previous administrations and some now pending from the present administration and from Congress may further violate the Constitution of the United States.

THEREFORE - Be it resolved by the House of Representatives of the State of <STATE>, the Senate concurring, that:

  1. That the State of <STATE> hereby claims sovereignty under the Tenth Amendment to the Constitution of the United States over all powers not otherwise enumerated and granted to the federal government by the Constitution of the United States.
  2. That this Resolution serves as notice and demand to the federal government, as our agent, to cease and desist, effective immediately, mandates that are beyond the scope of these constitutionally delegated powers.
  3. That all compulsory federal legislation that directs states to comply under threat of civil or criminal penalties or sanctions or requires states to pass legislation or lose federal funding be prohibited or repealed.
  4. That the Secretary of State of the State of <STATE> transmit copies of this resolution to the President of the United States, the President of the United States Senate, the Speaker of the United States House of Representatives, the Speaker of the House and the President of the Senate of each state's legislature and each Member of Congress from the State of <STATE>.

Replace "State of <STATE>" with your state or commonwealth and send it away. Or create this as a petition, gather signatures, then present it to your legislators. Take back your state from the Federal bureaucrats.

The Argument For Guns

I am not a gun-toting fellow by any means. Typically I have waffled over whether to ban some guns or leave them all available to private citizens. I know it is a slippery slope when talking bans, and I don't trust future legislators not to take advantage of a current legislator's good intentions in limiting a subsection of firearms. That said, I think it is always good to refresh your position with a well-articulated essay in support of your thoughts. And while I probably will not be toting any machine guns or bazookas, I think a handgun and some training might be time and money well spent. We'll see. Until then, here's the post that I feel articulates a good reason for the 2nd Amendment to be left unabridged by any gun laws:

The Gun is Civilization
Maj. L. Caudill USMC (Ret)

Human beings only have two ways to deal with one another: reason and force. If you want me to do something for you, you have a choice of either convincing me via argument, or force me to do your bidding under threat of force. Every human interaction falls into one of those two categories, without exception. Reason or force, that's it.

In a truly moral and civilized society, people exclusively interact through persuasion. Force has no place as a valid method of social interaction, and the only thing that removes force from the menu is the personal firearm, as paradoxical as it may sound to some.

When I carry a gun, you cannot deal with me by force. You have to use reason and try to persuade me, because I have a way to negate your threat or employment of force. The gun is the only personal weapon that puts a 100-pound woman on equal footing with a 220-pound mugger, a 75-year old retiree on equal footing with a 19-year old gang banger, and a single guy on equal footing with a carload of drunk guys with baseball bats. The gun removes the disparity in physical strength, size, or numbers between a potential attacker and a defender.

There are plenty of people who consider the gun as the source of bad force equations. These are the people who think that we'd be more civilized if all guns were removed from society, because a firearm makes it easier for a [armed] mugger to do his job. That, of course, is only true if the mugger's potential victims are mostly disarmed either by choice or by legislative fiat–it has no validity when most of a mugger's potential marks are armed. People who argue for the banning of arms ask for automatic rule by the young, the strong, and the many, and that's the exact opposite of a civilized society. A mugger, even an armed one, can only make a successful living in a society where the state has granted him a force monopoly.

Then there's the argument that the gun makes confrontations lethal that otherwise would only result in injury. This argument is fallacious in several ways. Without guns involved, confrontations are won by the physically superior party inflicting overwhelming injury on the loser. People who think that fists, bats, sticks, or stones don't constitute lethal force watch too much TV, where people take beatings and come out of it with a bloody lip at worst. The fact that the gun makes lethal force easier works solely in favor of the weaker defender, not the stronger attacker. If both are armed, the field is level. The gun is the only weapon that's as lethal in the hands of an octogenarian as it is in the hands of a weight lifter. It simply wouldn't work as well as a force equalizer if it wasn't both lethal and easily employable.

When I carry a gun, I don't do so because I am looking for a fight, but because I'm looking to be left alone. The gun at my side means that I cannot be forced, only persuaded. I don't carry it because I'm afraid, but because it enables me to be unafraid. It doesn't limit the actions of those who would interact with me through reason, only the actions of those who would do so by force. It removes force from the equation…and that's why carrying a gun is a civilized act.