<p>urgetopunt technologies blog &middot; John Parker &lt;jparker@urgetopunt.com&gt; &middot; <a href="http://urgetopunt.com/">http://urgetopunt.com/</a> &middot; last updated 2015-02-23</p>
<h2><a href="http://urgetopunt.com/vim/2014/02/19/run-scripts-from-vim.html">Execute Ruby or Python Scripts from Vim</a> (2014-02-19)</h2>
<p>While developing Ruby scripts in Vim I frequently want to execute the current script. This is easy to do by calling <code>:!ruby %</code> in Vim, but that’s rather a lot of typing. With a Vim mapping I shortened this:</p>
<script src="https://gist.github.com/jparker/9104192.js?file=rx.vim"></script><p>Now I just have to hit <code>\rx</code> in Vim to execute the current buffer. (NB: I’m using the default Vim leader <code>\</code>.) Of course, sometimes I want to pass arguments to the script on the command line. This was easy to accomplish by omitting the carriage return at the end of the mapping:</p>
<script src="https://gist.github.com/jparker/9104192.js?file=re.vim"></script><p>Now I can hit <code>\re</code>, and before the script executes, I can enter the desired arguments and hit enter to execute the contents of the buffer.</p>
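<p>For reference, in case the embedded gists don’t render, the two mappings amount to something like the following sketch (my reconstruction from the description above, assuming the default backslash leader):</p>

```vim
" \rx runs the current buffer through ruby immediately.
" \re omits the trailing <CR>, leaving the command line open so
" arguments can be appended before hitting enter.
nnoremap <Leader>rx :!ruby %<CR>
nnoremap <Leader>re :!ruby %<Space>
```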
<p>Recently I started working with Python, and I wanted the same ability. My first iteration defined new mappings for Python:</p>
<script src="https://gist.github.com/jparker/9104192.js?file=px.vim"></script><p>This gave me <code>\pe</code> and <code>\px</code> mappings similar to the originals, but it placed an awkward load on my right pinky finger on a <span class="caps">QWERTY</span> keyboard layout. What I really wanted was to continue using the <code>\rx</code> shortcut I’d been using for years. This called for a function.</p>
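<p>The function in the gist below dispatches on the buffer’s filetype; here is a sketch of the idea (my own approximation, not the gist verbatim):</p>

```vim
" Look up an interpreter for the current filetype, prompt for any
" extra arguments, then run the buffer through it.
function! RunFile()
  let l:interp = get({'ruby': 'ruby', 'python': 'python'}, &filetype, '')
  if empty(l:interp)
    echoerr 'No interpreter defined for filetype: ' . &filetype
    return
  endif
  let l:args = input('Arguments: ')
  execute '!' . l:interp . ' % ' . l:args
endfunction

nnoremap <Leader>rx :call RunFile()<CR>
```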
<script src="https://gist.github.com/jparker/9104192.js?file=RunFile.vim"></script><p>This version introduces a slight change to my workflow. Now when I hit <code>\rx</code> Vim prompts me for any additional arguments I want to pass to the script on execution — this is what <code>\re</code> and <code>\pe</code> did before. The original behavior of <code>\rx</code> and <code>\px</code> is gone. This means an extra carriage return when I’m executing the script without arguments, but it’s easier to type.</p>
<h2><a href="http://urgetopunt.com/ruby/2013/05/17/base64-strict-encode64.html">Base64.strict_encode64</a> (2013-05-17)</h2>
<p>Ruby 1.9 introduced a nice addition to the Base64 module: <code>Base64.strict_encode64</code>. Whereas <code>Base64.encode64</code> prettifies its output with newlines, <code>Base64.strict_encode64</code> yields output without any superfluous line feeds.</p>
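<p>A quick standalone illustration of the difference:</p>

```ruby
require 'base64'

data = 'a' * 100

wrapped = Base64.encode64(data)        # inserts "\n" every 60 output characters
strict  = Base64.strict_encode64(data) # a single line, no trailing newline

wrapped.include?("\n")         # => true
strict.include?("\n")          # => false
strict == wrapped.delete("\n") # => true, same encoding minus the line feeds
```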
<script src="https://gist.github.com/jparker/5603097.js"></script><p>This is a nice feature if you find yourself in need of <span class="caps">RFC</span> 4648-compliant output. You need this, for example, if you are generating policy documents for a form which <a href="http://aws.amazon.com/articles/1434">uploads directly to Amazon S3</a>. In such a scenario, instead of sending <code>#gsub</code> to the output of <code>encode64</code> to strip out line feeds you can simply call <code>strict_encode64</code>.</p>Running RSpec Within Vim2013-02-12T00:00:00+00:00http://urgetopunt.com/rspec/vim/2013/02/12/run-rspec-within-vim.html<p>Below are a handful of mappings and a function I use to run various incantations of <a href="https://www.relishapp.com/rspec">RSpec</a> from within <a href="http://www.vim.org">Vim</a>.</p>
<script src="https://gist.github.com/jparker/4772677.js"></script><p>When a spec file is open in the current buffer I can do the following:</p>
<table class="table table-bordered">
<tr><th>Mapping</th><th>Description</th></tr>
<tr><td><code>\rl</code></td><td>Run the example that includes the current line, e.g., <code>rspec spec/.../foo_spec.rb -l N</code>. (If the current line is an <code>it</code> block, only that example is run. If the current line is a <code>describe</code> block, all examples within the context are run — I’m particularly fond of this feature of RSpec.)</td></tr>
<tr><td><code>\rf</code></td><td>Run the entire spec file, e.g., <code>rspec spec/.../foo_spec.rb</code>.</td></tr>
<tr><td><code>\rd</code></td><td>Run all the spec files in the current spec file’s directory, e.g., <code>rspec spec/...</code>.</td></tr>
<tr><td><code>\rs</code></td><td>Run the entire spec suite, e.g., <code>rspec spec</code>.</td></tr>
</table>
<p>As the mappings are configured, the first three run <code>rspec</code> with the documentation format. The last one runs <code>rspec</code> with the progress (dots) format. I find the former nice when I’m running a small number of examples and the latter preferable when I’m running a large number of examples.</p>
<h2><a href="http://urgetopunt.com/rails/2012/11/29/has-secure-password-bcrypt-cost.html">Lowering BCrypt cost with has_secure_password</a> (2012-11-29)</h2>
<p>One of the strengths of an algorithm like <a href="http://en.wikipedia.org/wiki/Bcrypt">BCrypt</a> for storing encrypted passwords lies in the fact that it is relatively slow and can readily be made slower. This makes brute-force attacks time-prohibitive. The <a href="https://rubygems.org/gems/bcrypt-ruby">bcrypt-ruby</a> gem gives you easy access to this cost factor so you can slow down encryption as needed. The default cost is 10. This provides good security for encrypting user passwords, but if your Rails application depends on users being signed in, you may find this default cost has a substantial impact on the performance of your integration tests. Both <a href="https://github.com/plataformatec/devise">Devise</a> and <a href="https://github.com/binarylogic/authlogic">Authlogic</a> provide hooks into their BCrypt interfaces which allow you to easily change the cost, and this can come in very handy during testing.</p>
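<p>With Devise, for example, the cost lives behind the <code>stretches</code> setting; a test-friendly configuration might look like this (a sketch along the lines of Devise’s generated initializer, not necessarily the gist verbatim):</p>

```ruby
# config/initializers/devise.rb
Devise.setup do |config|
  # Keep the secure default cost in production; drop it in the test
  # environment so signing in during integration tests is cheap.
  config.stretches = Rails.env.test? ? 1 : 10
end
```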
<script src="https://gist.github.com/4062626.js?file=devise.rb"></script><script src="https://gist.github.com/4062626.js?file=authlogic.rb"></script><p>If your authentication needs are simple and you have instead opted to use Rails’ <a href="http://api.rubyonrails.org/classes/ActiveModel/SecurePassword/ClassMethods.html#method-i-has_secure_password">SecurePassword</a>, you will find that, at least as of Rails 3.2.9, there is no obvious way to lower the cost factor. However, if you’re willing to live with a little monkey patching, you can achieve the same results.</p>
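<p>One way to patch it, sketched from memory rather than copied from the gist: redefine bcrypt-ruby’s default cost in the test environment, which <code>has_secure_password</code> falls back to when no explicit cost is given.</p>

```ruby
# config/initializers/bcrypt.rb (a sketch; assumes has_secure_password
# calls BCrypt::Password.create without an explicit :cost option)
if Rails.env.test?
  require 'bcrypt'

  module BCrypt
    class Engine
      remove_const :DEFAULT_COST
      DEFAULT_COST = 4 # 4 is the minimum cost bcrypt accepts
    end
  end
end
```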
<script src="https://gist.github.com/4062626.js?file=bcrypt.rb"></script><p>Is it worth doing? Here are before-and-after measurements from an application I’m currently working on. It is an internal application for a client. There are no guest features, which means every single feature depends on a user having first signed in.</p>
<script src="https://gist.github.com/4062626.js?file=gistfile1.txt"></script><p>Before the change the integration tests ran in 43 seconds; after the change they run in 27 seconds. That’s roughly a 37 percent speed-up. I’ll take it.</p>
<h2><a href="http://urgetopunt.com/rails/2012/11/12/validate-password-presence-has-secure-password.html">Password validation and has_secure_password</a> (2012-11-12)</h2>
<p><strong><span class="caps">UPDATE</span> 2012-11-29:</strong> Actually, it appears this will be <a href="https://github.com/rails/rails/pull/6215">a moot point in Rails 4</a>.</p>
<p>If the authentication needs of your Rails application are simple enough, third-party authentication libraries like <a href="https://github.com/plataformatec/devise">Devise</a>, <a href="https://github.com/binarylogic/authlogic">Authlogic</a> or <a href="https://github.com/thoughtbot/clearance">Clearance</a> can introduce considerable, unneeded overhead. Don’t get me wrong. All three libraries are well-written, well-maintained and popular enough that most support is little more than a Google search away. Nevertheless, if your needs are simple, you may find that Rails’ <a href="http://api.rubyonrails.org/classes/ActiveModel/SecurePassword/ClassMethods.html#method-i-has_secure_password">SecurePassword</a> does everything you need with a minimum of fuss.</p>
<p>One of the handy things <code>has_secure_password</code> does for you is establish minimal validation rules for your password — the password must be confirmed (by providing <code>password</code> and <code>password_confirmation</code> attributes) and the <code>password_digest</code> attribute must not be blank. These validation rules are the absolute minimum you need to ensure users aren’t created with blank passwords, but you will almost certainly need to augment them to provide a secure, user-friendly experience for your application.</p>
<p>For starters, if you leave <code>password</code> and <code>password_confirmation</code> blank, validation will fail, but the validation errors might not be where you expect them.</p>
<script src="https://gist.github.com/4062658.js?file=gistfile1.rb"></script><p>The password is indeed invalid, but the error message “can’t be blank” is attached to the <code>password_digest</code> attribute. The digest is the encrypted hash in which the password is stored. Its contents are never meant to be presented to the user. Only the <code>password</code> and <code>password_confirmation</code> virtual attributes matter. You probably don’t want to tell the user the password digest can’t be blank, but rather that the password itself can’t be blank. When displaying error messages you will have to jump through extra hoops to ensure that the relevant error message is displayed meaningfully.</p>
<p>Of course, you will most likely want to add some additional validations to your password field anyway if you intend to do any password vetting. While you’re at it, why not ensure the password isn’t blank?</p>
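<p>Something along these lines (a sketch; the minimum length of 8 is an arbitrary choice, not a recommendation):</p>

```ruby
class User < ActiveRecord::Base
  has_secure_password

  # Validating the virtual password attribute directly ensures the
  # "can't be blank" and length errors land on :password rather than
  # on :password_digest.
  validates :password, presence: true, length: { minimum: 8 }
end
```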
<script src="https://gist.github.com/4062658.js?file=user.rb"></script><p>Now if a user is created with a blank password, validation will fail and the “can’t be blank” error will show up on the <code>password</code> attribute itself. The password will also have a minimum length. Throw in some <code>validates_format_of</code> goodness if you must oblige users to use more than one character class in their password (upper-case, digits, punctuation). (Of course, obscure gibberish <a href="http://xkcd.com/936/">isn’t all it’s cracked up to be</a>.)</p>
<h2><a href="http://urgetopunt.com/rails/s3/cloudfront/2012/03/23/upload-assets-to-s3.html">Rake Task to Upload Assets to S3 for Cloudfront</a> (2012-03-23)</h2>
<p>In an application serving static assets from <a href="http://aws.amazon.com/cloudfront">Cloudfront</a>, I’m using <a href="http://fog.io">Fog</a> and the following Rake task to upload precompiled assets and remove stale ones.</p>
<script src="https://gist.github.com/2176147.js?file=assets.rake"></script><p>The task depends on <code>assets:clean</code> and <code>assets:precompile</code>, so each time it runs <code>public/assets</code> is cleaned out and the assets are recompiled. The task then calculates the etag (MD5 checksum) of each file, compares it to the etag of the file on S3, and, if it’s different, copies the asset up. If the etags are the same, it skips the file. Finally, after uploading everything, the task runs through the contents of the asset bucket, and removes any files that didn’t also exist in <code>public/assets</code> on the local machine. This assumes the bucket in question is only being used to serve assets for the current application. <strong>Do not use this task as is if you are using the bucket to serve additional content!</strong></p>
<p>As a sanity check, the task aborts before making any changes if <code>public/assets</code> is empty.</p>
<p>This task also takes advantage of Rake’s command-line arguments feature to let you run the task in “no-op” mode. In this mode, local assets are still cleaned and precompiled, but changes to S3 are reported rather than carried out. To run it in no-op mode, append <code>[noop]</code> (really, anything in brackets) to the task name on invocation:</p>
<pre><code>$ rake assets:upload[noop] # runs in no-op mode
$ rake assets:upload # runs in dangerous mode
</code></pre>
<h2><a href="http://urgetopunt.com/rspec/2012/02/20/rails-source-annotations-rspec.html">Rails Source Annotations and RSpec</a> (2012-02-20)</h2>
<p>By default Rails’ <a href="https://github.com/rails/rails/blob/master/railties/lib/rails/source_annotation_extractor.rb">source annotation</a> rake tasks (<code>notes</code> and its more specific children <code>notes:todo</code>, <code>notes:fixme</code>, etc.) only search the <code>app</code>, <code>config</code>, <code>lib</code>, <code>scripts</code> and <code>test</code> directories of your application. I use <a href="http://rspec.info">RSpec</a>, and on short notice, dumping this file into <code>lib/tasks</code> of my application was the best I could come up with to add <code>spec</code> to the annotation search path.</p>
<script src="https://gist.github.com/1872367.js?file=annotations.rake"></script><p>(I know some people consider littering your code with notes to your future self [or future replacement] to investigate and fix things to be a smell that could indicate you’re lazy and/or bad at prioritizing. They may well be right, but until I retrain myself, this helps.)</p>
<h2><a href="http://urgetopunt.com/rspec/2012/01/31/rspec-spork-ignoring-focus-filter.html">RSpec + Spork Ignoring Filters</a> (2012-01-31)</h2>
<p>I’m posting this as a reminder to myself and as Google fodder to raise awareness of <a href="https://github.com/sporkrb/spork/issues/166">this discussion</a>.</p>
<p>Most of the Rails projects I have in active development use <a href="http://rspec.info">RSpec</a> for testing. I also use <a href="https://github.com/sporkrb/spork">Spork</a> to preload the Rails environment, allowing the tests to run more quickly. When I’m actively working on a specific example, particularly relatively slow-running <a href="https://www.relishapp.com/rspec/rspec-rails/docs/request-specs/request-spec">request specs</a>, I’ll often use the <code>:focus</code> tag to filter out the specs I don’t need to run. I have the following set up in my RSpec <code>configure</code> block:</p>
<pre><code># spec/spec_helper.rb
RSpec.configure do |config|
  config.treat_symbols_as_metadata_keys_with_true_values = true
  config.filter_run focus: true
  config.run_all_when_everything_filtered = true
end
</code></pre>
<p>Then I tag the spec I’m working on with <code>:focus</code> like so:</p>
<pre><code># spec/requests/some_feature_spec.rb
describe 'SomeFeature' do
  it 'successfully does awesome stuff', :focus do
    # test awesome behavior
  end
end
</code></pre>
<p>I then go to work implementing the feature, periodically checking the window running RSpec to observe my progress towards getting the feature working as described.</p>
<p>At some point recently — apparently after upgrading to RSpec 2.8 — I noticed the <code>:focus</code> tag being ignored. When I’d save my changes, instead of the one focused example being run, the entire spec file was being run. On a slow-running request spec, this could be annoying, especially if I wanted to scroll through the <code>log/test.log</code> file to debug exactly what was happening in the database as the log output was cluttered with unrelated examples.</p>
<p>After spending some time composing suitable Google-fu to find reports of similar problems I ran across <a href="https://github.com/sporkrb/spork/issues/166">#166</a> on Spork’s Github issue tracker. The problem seems to rest in RSpec 2.8 somewhere, and the fix (or, at the very least, workaround) is relatively simple: add <code>--tag focus</code> to the <code>.rspec</code> file at the root of your project.</p>
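<p>With the workaround in place, the <code>.rspec</code> file ends up containing something like this (contents assumed from the description above, alongside whatever other options you already use):</p>

```
--tag focus
```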
<p>(As an added reminder, don’t forget to set <code>run_all_when_everything_filtered</code> to true in your <code>RSpec.configure</code> block to ensure all your specs are eligible for running when nothing is tagged with <code>:focus</code>.)</p>
<h2><a href="http://urgetopunt.com/2012/01/13/timecop-rspec-negative-runtimes.html">Negative Test Suite Runtimes, or Don’t Forget to Call Timecop.return</a> (2012-01-13)</h2>
<p><strong>tl;dr</strong> When using <a href="https://github.com/jtrupiano/timecop">Timecop</a>, always remember to turn it off at the end of the test.</p>
<p>I post this in the hopes that I’m not the only one to make this mistake and someone else might actually benefit from my stupidity.</p>
<p>It’s a sad truth, but in my experience, when googling, googling again and googling some more for a solution to a problem turns up nary a post from someone having the same problem, it’s usually because <a href="https://en.wikipedia.org/wiki/User_error#PEBKAC"><span class="caps">PEBKAC</span></a> (I’m an idiot). For several days I’ve been noticing in frustration that the RSpec suite on one of my applications was periodically reporting negative runtimes, e.g.,</p>
<div class="code"><pre><code>Finished in -537150.87202 seconds
329 examples, 0 failures
</code></pre></div>
<p>This consistently happened when running the entire test suite, but seemed sporadic when running a subset of the test suite. Adding to my confusion, at just about the same time that this problem appeared I had upgraded to RSpec 2.8, which fixed an <a href="https://github.com/guard/guard-rspec/issues/61">issue</a> that popped up when using <a href="https://github.com/sporkrb/spork">Spork</a>. I grudgingly pushed the matter to the back of my mind since, with the test suite running sufficiently fast, there were higher priority tasks. But today I found myself specifically wanting to benchmark the test suite, and these negative runtimes just wouldn’t do. So I poked around deeper.</p>
<p>I generated a fresh Rails application using the same <a href="http://github.com/jparker/rails-templates">template</a> I’d used to generate the offending application. Running the new application’s nigh empty test suite reported accurate (positive) runtimes, so I started looking for additions/modifications made to <code>spec/spec_helper.rb</code> to see where things diverged. Nothing seemed significant. I then checked <code>Gemfile</code> to see what gems had been added to the offending application. <a href="https://github.com/jtrupiano/timecop">Timecop</a>! Of course, if something was going to interfere with runtime calculations, freezing Time would probably do it. I grepped for Timecop in my spec files and found it in use in only one, but damningly, it was mentioned only once. I was calling <code>Timecop.freeze</code> from a <code>before</code> block, but I was not restoring Time by calling <code>Timecop.return</code> from a corresponding <code>after</code> block.</p>
<div class="code"><pre><code>face.send(:palm)</code></pre></div>
<p>I added the missing <code>after</code> block. Runtime reports were accurate once again. Unicorns! Rainbows! Chocolate milk!</p>
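<p>A belt-and-braces alternative is a global hook in <code>spec_helper.rb</code>, so a forgotten <code>Timecop.return</code> can never leak into later examples (a sketch, assuming the Timecop gem’s API):</p>

```ruby
# spec/spec_helper.rb
RSpec.configure do |config|
  # Timecop.return is harmless when time isn't frozen, so running it
  # after every example is a cheap safety net.
  config.after(:each) { Timecop.return }
end
```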
<p>As an aside, if you use <a href="http://rspec.info">RSpec</a> and <a href="https://github.com/sporkrb/spork">Spork</a>, and had found yourself adding the following to your <code>spec_helper.rb</code>:</p>
<div class="code"><pre><code>Spork.each_run do
  $rspec_start_time = Time.now
end
</code></pre></div>
<p>You can remove that kludge after you upgrade to RSpec 2.8. Yay!</p>
<h2><a href="http://urgetopunt.com/2011/10/01/guard-factory-girl.html">Factory Girl and Guard</a> (2011-10-01)</h2>
<p>I’ve kept <a href="https://github.com/thoughtbot/factory_girl">factory_girl</a> in my testing toolkit for some time now, and recently, I started using <a href="https://github.com/guard/guard">guard</a> to run my tests automatically as I make changes. I wanted guard to run the appropriate model, controller and request specs when I change a particular factory. Guard doesn’t know what to do with factory files by default, so I added the following to my Guardfile:</p>
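<p>The rule amounts to something like the following sketch (the spec paths are assumptions based on common conventions; the embedded gist has the real version):</p>

```ruby
# Guardfile
require 'active_support/inflector' # for String#singularize

guard 'rspec' do
  # When a factory file changes, run the specs most likely to use it.
  watch(%r{^spec/factories/(.+)\.rb$}) do |m|
    ["spec/models/#{m[1].singularize}_spec.rb",
     "spec/controllers/#{m[1]}_controller_spec.rb",
     "spec/requests/#{m[1]}_spec.rb"]
  end
end
```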
<script src="https://gist.github.com/1256677.js?file=Guardfile"></script><p>By convention, my factories are named after the plural form of the model name, and the files live in <code>spec/factories</code>, so, for example, my User factories are defined in <code>spec/factories/users.rb</code>. I require <code>active_support/inflector</code> at the top of the Guardfile because I need access to <code>String#singularize</code> to convert the plural factory name (also used in the name of controller and request specs) into the singular model name. Then I call <code>#watch</code> with the pattern matching the factory files and use the name of the factory to build the array of specs that need to be run.</p>
<h2><a href="http://urgetopunt.com/puppet/2011/09/14/puppet-ruby19.html">SSL Gotcha with Puppet and Ruby 1.9.2</a> (2011-09-14)</h2>
<p>I’ve been preparing to build up a new <a href="http://puppet.puppetlabs.com">Puppet</a> installation, and as I’ve been deploying applications running on Ruby 1.9.2 lately, I thought it would be preferable to run Puppet under Ruby 1.9.2 as well. I’ve only begun the work, and I did run into a strange gotcha with <span class="caps">SSL</span>, but I seem to have gotten past it.</p>
<p>My test installation consists of two identical <a href="http://www.debian.org">Debian 6.0</a> virtual machines running inside <a href="http://www.virtualbox.org">VirtualBox</a> on a Mac. I started with a bare minimal install and ensured name resolution was working properly on both machines. The puppet master returns its hostname as “puppet.local”; the client returns its hostname as “client.local”. There are entries for both of these hostnames in <code>/etc/hosts</code> on each virtual machine, and they are able to ping each other by name.</p>
<p>Next, I ran the following bootstrap script on each machine to install Ruby 1.9.2-p290 and the <a href="http://rubygems.org/gems/puppet">puppet gem</a>.</p>
<div class="code"><pre><code>#!/bin/bash
set -e
PATH=/sbin:/usr/sbin:/usr/local/bin:/bin:/usr/bin
useradd -M -r puppet
apt-get update
apt-get dist-upgrade
apt-get install -y build-essential zlib1g-dev libssl-dev libreadline-dev git-core curl ntp
git clone git://github.com/sstephenson/ruby-build.git /root/ruby-build
cd /root/ruby-build
sh ./install.sh
ruby-build 1.9.2-p290 /usr/local
gem install puppet --no-ri --no-rdoc
</code></pre></div>
<p>Then, on the machine designated to become the puppet master, I performed the following to bring the puppet master daemon up.</p>
<div class="code"><pre><code># puppet master --genconfig > /etc/puppet.conf
# mkdir -p /etc/puppet/manifests
# touch /etc/puppet/manifests/site.pp
# puppet master --no-daemonize --verbose --debug
</code></pre></div>
<p>Finally I ran the puppet agent on the client expecting the client to send a certificate signing request to the puppet master which I could then sign to authorize the client.</p>
<div class="code"><pre><code># puppet agent --no-daemonize --verbose --debug --onetime
</code></pre></div>
<p>Instead, the puppet agent bailed out with the following error message:</p>
<div class="code"><pre><code>err: Could not request certificate: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed. This is often because the time is out of sync on the server or client
Exiting; failed to retrieve certificate and waitforcert is disabled
</code></pre></div>
<p>Gah! The puppet master never even logged the connection — the agent aborted immediately because it was unable to verify the certificate authority provided by the puppet master. The error message suggests clocks being out of sync is a common cause of such problems, but I verified the clocks on both machines were perfectly in sync. In fact, the bootstrap script installs ntp for just this purpose.</p>
<p>(The clock synchronization problem stems from the fact that the puppet master’s CA certificate is only valid between two dates determined when the certificate is generated. If the clock on the client is, for whatever reason, set ahead of the certificate’s expiration date or behind its start date, the client will reject the certificate. You can check the validity dates by running the following command on the puppet master:</p>
<div class="code"><pre><code># openssl x509 -text -noout -in /etc/puppet/ssl/certs/ca.pem | grep -A2 Validity
</code></pre></div>
<p>In my case, the current time on the puppet client was well within the valid date range of the puppet master’s CA.)</p>
<p>It turns out, this is a <a href="http://projects.puppetlabs.com/issues/9084">known issue</a> with the way Ruby 1.9.2 handles <span class="caps">SSL</span> certificate validations. One option is to downgrade the puppet client to Ruby 1.8.7. The puppet agent works fine, but this is not a desirable solution since I started the project with the goal of deploying application servers running Ruby 1.9.2. Another possibility is to install Ruby 1.8.7, initialize the puppet client and then upgrade to Ruby 1.9.2 after the initial <span class="caps">SSL</span> verification has taken place and <code>/etc/puppet/ssl</code> on the client has been populated. That’s reasonably easy to build into the bootstrap script, but it’s hardly desirable. And I suspect I’d just run into the same problem again if I had to regenerate the puppet master’s CA certificate for any reason.</p>
<p>Wouldn’t it be nice if someone had already figured out how to enable Ruby 1.9.2 to accept the puppet master’s CA certificate? <a href="http://groups.google.com/group/puppet-users/msg/72bf694d4e2f3012">Someone has</a>. It turns out, all Ruby 1.9.2 needs is to find a copy of the puppet master CA certificate (or a symlink to it) in OpenSSL’s certificates directory with a filename corresponding to a hash of the certificate subject.</p>
<p>The first step is to copy <code>/etc/puppet/ssl/certs/ca.pem</code> from the puppet master to the puppet client. (This is one of those tasks that should be easily added to the bootstrap script I will use to bring up new puppet clients.) Next, a symlink to that file needs to be added to the OpenSSL certificates directory. This is the <code>certs</code> directory which resides underneath the directory returned by running <code>openssl version -d</code>. On Debian this is <code>/usr/lib/ssl/certs</code> (which is really just a symlink to <code>/etc/ssl/certs</code>). Finally, create the symlink to <code>ca.pem</code> using OpenSSL to calculate the hash value for the symlink name.</p>
<div class="code"><pre><code># ln -s /etc/puppet/ssl/certs/ca.pem /usr/lib/ssl/certs/$(openssl x509 -hash -noout -in /etc/puppet/ssl/certs/ca.pem).0
</code></pre></div>
<p>After performing these steps, running <code>puppet agent</code> for the first time (or again, but after <code>/etc/puppet</code> has been removed on the client) under Ruby 1.9.2 works like a charm. This is as far as I’ve gotten with this project so far. I don’t know if there are any more gotchas with running Puppet under Ruby 1.9.2, but this one stumped me for long enough that I thought it was worth sharing a solution.</p>
<h2><a href="http://urgetopunt.com/lion/2011/09/06/dashboard-workspace-lion.html">Restoring Old Dashboard Visibility in OS X Lion</a> (2011-09-06)</h2>
<p>After a couple weeks working in OS X Lion, I’m generally underwhelmed and occasionally irritated. A few behavioral changes in particular were so annoying that the only positive thing I could say about them is that Apple had the good sense to let me undo them. (I wonder how long that will last.)</p>
<p><strong>Problem: The <a href="http://en.wikipedia.org/wiki/Dashboard_(software)">Dashboard</a> no longer overlays the active workspace.</strong> In Snow Leopard, activating the Dashboard caused it to overlay the active <a href="http://en.wikipedia.org/wiki/Spaces_(software)">workspace</a>. Wherever there was a gap between widgets, the underlying workspace was visible. This was useful for quickly accessing the calendar or calculator widgets while leaving an application window visible for reference. In Lion, the Dashboard is treated as a workspace of its own. When it appears it replaces the active workspace, leaving the application windows completely obscured.</p>
<p><strong>Solution: Don’t treat Dashboard as a space.</strong> Go to System Preferences → Mission Control and uncheck the box labeled “Show Dashboard as a space”.</p>
<p><strong>Problem: The scrolling direction on the touchpad or mouse is unintuitive.</strong> Lion introduced the concept of “natural” scrolling. Whereas before, sliding your fingers down the mouse or touchpad would scroll the page down, in Lion, sliding your fingers down scrolls the page up. Apple describes this as “Content tracks finger movement”, which is accurate enough, and in fact, it matches the behavior you see on most touch devices including the iPhone and iPad. The problem is, unlike an iPhone or iPad, when I’m using a touchpad or mouse, I’m not <em>touching</em> the content. Natural scrolling feels natural on an iPhone, but I’ve got <a href="http://en.wikipedia.org/wiki/Scroll_wheel">fifteen years of experience</a> telling me when I scroll my finger down on a mouse, the indicator on the scrollbar will also scroll down, and therefore the content will slide up. Lion tried to undo that, and I wasn’t grateful.</p>
<p><strong>Solution: Disable natural scrolling.</strong> Go to System Preferences → Mouse → Point & Click and uncheck the box labeled “Scroll direction: natural”. You may have to do the same thing for your trackpad by going to System Preferences → Trackpad → Scroll & Zoom (note that for the trackpad it’s the “Scroll & Zoom” tab, not the “Point & Click” tab).</p>
<p>I also had problems with trackpad swipe navigation on Google Chrome. I’ve <a href="/2011/08/24/trackpad-swipe-chrome-lion.html">written about the solution already</a>.</p>
<h2><a href="http://urgetopunt.com/2011/08/24/trackpad-swipe-chrome-lion.html">Swipe Navigation in Google Chrome in OS X Lion</a> (2011-08-24)</h2>
<p>After installing <a href="http://www.google.com/chrome">Google Chrome</a> on a Mac running <a href="http://www.apple.com/macosx">OS X Lion</a> I was saddened to find myself no longer able to navigate back and forth through the history by swiping three fingers left and right on the trackpad. A little Googling led me to <a href="http://www.google.com/support/forum/p/Chrome/thread?tid=36701a9aeca0d08c&hl=en">this helpful nugget</a>. Going to System Preferences → Trackpad → More Gestures and changing the value for “Swipe between pages” to “Swipe with two or three fingers” restored this helpful navigation gesture in Google Chrome. (If you use <a href="http://www.apple.com/safari">Safari</a>, swiping with two fingers yields the same results, but it’s dressed up with pretty animations.)</p>
<h2><a href="http://urgetopunt.com/rubygem/2011/05/24/gem-template.html">Bootstrapping a Minimal RubyGem</a> (2011-05-24)</h2>
<p>These are the steps I currently follow when starting a new project to be distributed as a RubyGem. Let’s assume the new RubyGem will be named “wozziegoggle”.</p>
<ol>
<li>Bootstrap the project using <a href="http://gembundler.com/">Bundler</a>:<br />
<div class="code"><pre><code>$ bundle gem wozziegoggle
</code></pre></div></li>
<li>I’m using <a href="http://relishapp.com/rspec">RSpec</a> and <a href="http://mocha.rubyforge.org">Mocha</a> on new projects, so I add development gem dependencies for each.<br />
<div class="code"><pre><code># wozziegoggle.gemspec
Gem::Specification.new do |s|
  ...
  s.add_development_dependency 'rspec', '~>2.6.0'
  s.add_development_dependency 'mocha', '~>0.9.12'
end
</code></pre></div></li>
<li>I also create a basic <code>spec_helper.rb</code> file that configures RSpec and requires needed libraries.<br />
<div class="code"><pre><code># spec/spec_helper.rb
spec_dir = File.dirname(__FILE__)
lib_dir = File.expand_path(File.join(spec_dir, '..', 'lib'))
$:.unshift(lib_dir)
$:.uniq!
RSpec.configure do |config|
  config.mock_with :mocha
end
require 'mocha'
require 'wozziegoggle'
</code></pre></div></li>
<li>The RSpec rake task will also be needed, so I make changes to the <code>Rakefile</code>. I’ll also add a task which spawns <span class="caps">IRB</span> with the gem libraries preloaded.<br />
<div class="code"><pre><code># Rakefile
require 'rspec/core/rake_task'
RSpec::Core::RakeTask.new
task :default => :spec
desc 'Start IRB with preloaded environment'
task :console do
  exec 'irb', "-I#{File.join(File.dirname(__FILE__), 'lib')}", '-rwozziegoggle'
end
</code></pre></div></li>
<li>I usually use <a href="http://www.zenspider.com/ZSS/Products/ZenTest/">ZenTest</a>, specifically <code>autotest</code>, so that the specs can be automatically re-run as I’m developing. A discover file is needed to make sure there are sane default mappings.<br />
<div class="code"><pre><code># autotest/discover.rb
Autotest.add_discover {'rspec2'}
</code></pre></div></li>
<li>Finally, I’ll create a <code>.rspec</code> (née <code>spec/spec.opts</code>) file with any options I want <code>rspec</code> to be run with.<br />
<div class="code"><pre><code># .rspec
--colour
</code></pre></div></li>
</ol>Capistrano, Ruby 1.9 and SSH Gateways — Resolved!2011-05-05T00:00:00+00:00http://urgetopunt.com/capistrano/2011/05/05/capistrano-ruby19-ssh-gateway-resolved.html<p>A few months ago <a href="/capistrano/2011/01/16/capistrano-ruby19-ssh-gateway.html">I posted</a> about inconsistent performance when running <a href="https://github.com/capistrano/capistrano/wiki">capistrano</a> under <a href="http://www.ruby-lang.org">Ruby 1.9</a> when <span class="caps">SSH</span> traffic was routed through a gateway host. This problem appears to have been solved in a <a href="https://github.com/net-ssh/net-ssh-gateway/commit/b448fe7da9ade93b798d812fb0c89d6fd2f7659c">recent commit</a> to net-ssh-gateway (kudos to Mat Trudel).</p>
<p>I discovered this when installing capistrano 2.6.0. Reading through the updates I saw some promising references to a thread deadlocking issue that existed under Ruby 1.9. After installing the new version of capistrano (and net-ssh-gateway) I re-ran the tests described in my <a href="/capistrano/2011/01/16/capistrano-ruby19-ssh-gateway.html">previous post</a>, and sure enough, the improvement is outstanding!</p>
<table class="table table-bordered">
<tr><th></th><th>real</th><th>user</th><th>sys</th></tr>
<tr><th>min</th><td>2.75s</td><td>0.78s</td><td>0.22s</td></tr>
<tr><th>max</th><td>7.17s</td><td>1.51s</td><td>0.43s</td></tr>
<tr><th>mean</th><td>3.76s</td><td>0.91s</td><td>0.26s</td></tr>
<tr><th>stddev</th><td>1.22s</td><td>0.14s</td><td>0.04s</td></tr>
<tr><th>median</th><td>3.33s</td><td>0.86s</td><td>0.24s</td></tr>
</table>
<p>(Full runtime data are <a href="https://gist.github.com/958172">also available</a>.)</p>
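<p>The summary statistics above (min/max/mean/stddev/median) can be reproduced from the raw timing data with a short Ruby function along these lines — a sketch of the idea, not the exact code from the linked gist:</p>

```ruby
# Summarize a list of runtimes (in seconds) the way the tables in these
# posts do. A sketch -- the linked gist may differ in detail.
def summarize(samples)
  sorted = samples.sort
  mean   = samples.inject(:+) / samples.size.to_f
  var    = samples.inject(0.0) { |sum, x| sum + (x - mean)**2 } / samples.size
  mid    = sorted.size / 2
  median = sorted.size.odd? ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2.0
  { :min => sorted.first, :max => sorted.last,
    :mean => mean, :stddev => Math.sqrt(var), :median => median }
end
```

<p>Feeding it the hundred <code>real</code> values from a run yields the figures shown in the table.</p>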
<p>Congratulations and my gratitude are due to the team of people presently maintaining capistrano, net-ssh-gateway, etc.</p>Parsing Google Data XML with Nokogiri2011-03-08T00:00:00+00:00http://urgetopunt.com/nokogiri/2011/03/08/parse-xml-with-nokogiri.html<p>I recently started working on a project which needs to consume Google’s <a href="http://code.google.com/googleapps/domain/shared_contacts/gdata_shared_contacts_api_reference.html">Shared Contacts <span class="caps">API</span></a>. I decided to use <a href="http://nokogiri.org/">Nokogiri</a> to parse the <span class="caps">XML</span> feeds, but I ran into a perplexing problem when using <code>#xpath</code> to retrieve specific elements from the <span class="caps">XML</span> document. I wanted to retrieve all of the <code>entry</code> tags (there were five in the sample document) under the <code>feed</code> tag. Searching for <code>//feed/entry</code> using <code>#xpath</code> failed, but searching for <code>feed entry</code> using <code>#css</code> worked.</p>
<script src="https://gist.github.com/860430.js?file=gistfile1.rb"></script><p>While experimenting to figure out the problem I noticed that not all XPath searches failed. For example searching for email addresses within the contact feed using <code>//gd:email</code> returned the correct number of elements. A bit of googling turned up this <a href="http://stackoverflow.com/questions/1157138/how-can-i-get-nokogiri-to-parse-and-return-an-xml-document">article on Stack Overflow</a>. Commenter <a href="http://stackoverflow.com/users/23921/pesto">Pesto</a> pointed out that, when using <code>#xpath</code>, you must use the fully qualified <span class="caps">XML</span> namespaces, i.e., <code>//xmlns:feed/xmlns:entry</code>.</p>
<script src="https://gist.github.com/860439.js?file=gistfile1.rb"></script><p>I didn’t catch on at the time, but that’s why <code>//gd:email</code> worked — it included the namespace.</p>Adding a Header to an Existing S3 Object2011-02-14T00:00:00+00:00http://urgetopunt.com/s3/2011/02/14/right-aws-add-header-s3-object.html<p>Two-thirds of the way through a substantial bulk upload of objects to <a href="http://aws.amazon.com/s3">S3</a> I realized I had forgotten to add a header to the objects I was uploading. Specifically, I wanted each object to have a Content-Disposition header to coax browsers into saving objects with a specific filename rather than displaying them inline or saving them with the long, unwieldy key name that had been generated for the objects.</p>
<p>I certainly wouldn’t want to upload the files all over again just to add a header. Thankfully it’s easy enough to add the header without resending the content of the file by simply moving the key onto itself, i.e., move the object but let the source key and the destination key be identical. Using the <a href="http://github.com/rightscale/right_aws">right_aws</a> gem this is accomplished easily enough:<br />
<script src="https://gist.github.com/826376.js?file=gistfile1.rb"></script></p>
<p>The above snippet adds a Content-Disposition header to the object with key <code>foo.txt</code>. Browsers respecting the Content-Disposition header would try to download the file and save it with the name <code>bar.txt</code> rather than trying to display it inline. This comes in handy if your key names tend to run long because you must embed additional information in them but would rather the user did not have to put up with that when downloading the object.</p>Useful Objects in the Rails Console2011-02-04T00:00:00+00:00http://urgetopunt.com/rails/2011/02/04/rails-console-tricks.html<p>The Rails console is a wonderful place to be if you need to feel things out in a Rails application. Playing with your models in the console is easy. They are all just <em>there</em>. But two non-model functions I frequently find myself wanting to play with are view helpers and route helpers. They are there too, but they are abstracted behind objects whose names I’m constantly forgetting. I’m documenting them here in the hopes that it’ll help me remember, or at least make their names easy to find the next time I inevitably forget.</p>
<p>Route helpers are available on the <code>app</code> object:<br />
<script src="https://gist.github.com/811802.js?file=gistfile1.rb"></script></p>
<p>View helpers are available on the <code>helper</code> object:<br />
<script src="https://gist.github.com/811814.js?file=gistfile1.rb"></script></p>
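<p>For example, in a hypothetical app with a <code>users</code> resource, a console session might look like this (the routes and arguments are illustrative, not from a real app):</p>

```ruby
# Route helpers live on `app` (an integration-session object):
app.root_path                   # => "/"
app.users_path                  # => "/users"
app.user_path(User.first)       # e.g. "/users/1"

# View helpers live on `helper`:
helper.number_to_currency(100)  # => "$100.00"
helper.link_to 'Home', app.root_path
```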
<p>These helpers go back at least as far as Rails 2.1.</p>Rails 3 and RightAws 2.0.02011-02-01T00:00:00+00:00http://urgetopunt.com/2011/02/01/rails3-right-aws.html<p>If you intend to use <a href="http://github.com/rightscale/right_aws">right_aws</a> for interacting with <a href="http://aws.amazon.com/s3">S3</a> or other <span class="caps">AWS</span> services you will need, for the time being, to use the git <span class="caps">HEAD</span> rather than the gem. At the time of writing, the most recent gem release, 2.0.0, included some core extensions which were incompatible with similar extensions added by ActiveSupport.</p>
<p>I encountered the problem in a Rails 3 application which used <a href="http://github.com/plataformatec/devise">Devise</a> for authentication. After adding right_aws to the Gemfile, my tests suddenly started failing with the following error:</p>
<div class="code"><pre><code>wrong constant name Devise/sessionsController (ActionController::RoutingError)
</code></pre></div>
<p>After some poking around I discovered <a href="http://groups.google.com/group/plataformatec-devise/browse_thread/thread/d726823ce778597f">this thread</a> which suggested right_aws was defining a version of <code>String#camelize</code> which was incompatible with the version defined by ActiveSupport (ActionController was obviously expecting <code>Devise::SessionsController</code>). The Rightscale team were aware of <a href="https://github.com/rightscale/right_aws/issues/issue/28">the issue</a> and have already committed a fix, but the new gem has not yet been released.</p>
<p>Until it is, you can do one of two things:</p>
<ol>
<li>Update your Gemfile to install right_aws directly from the git repository like so (the dependency on right_http_connection is required, as the as-yet-unreleased version of right_aws depends on the as-yet-unreleased version of right_http_connection):<br />
<script src="https://gist.github.com/805756.js?file=Gemfile"></script></li>
<li>Continue using the 2.0.0 gem, but modify <code>config/application.rb</code> as below:<br />
<script src="https://gist.github.com/805762.js?file=application.rb"></script></li>
</ol>
<p>The latter suggestion comes from <a href="https://github.com/rightscale/right_aws/issues/issue/28#issue/28/comment/581903">a comment</a> <a href="http://www.koziarski.net/">Koz</a> added to the issue discussion. The <code>ActiveSupport::CoreExtensions</code> module is defined before any other gems are loaded (before the call to <code>Bundler.require</code>) which I suppose ends up signaling to right_aws not to define the conflicting version of <code>String#camelize</code>.</p>
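<p>For reference, the relevant part of <code>config/application.rb</code> in the second workaround looks roughly like this — a sketch of the approach described above, not the linked gist verbatim:</p>

```ruby
# config/application.rb (sketch)
require File.expand_path('../boot', __FILE__)
require 'rails/all'

# Define the module before Bundler.require loads right_aws, signaling
# right_aws 2.0.0 to skip its own conflicting String#camelize.
module ActiveSupport
  module CoreExtensions
  end
end

# Gems are loaded only after the constant above exists.
Bundler.require(:default, Rails.env) if defined?(Bundler)
```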
<p>Both of these suggestions worked for me on an application using Rails 3.0.3 (and Devise 1.2.rc) on Ruby 1.9.2p136.</p>Using RVM from Launchd Scripts on OS X2011-01-28T00:00:00+00:00http://urgetopunt.com/rvm/osx/2011/01/28/rvm-os-x-launchd.html<p>I use <a href="/osx/2009/08/30/launchd-for-cron-jobs.html">Launchd</a> to run some scripts on a Mac with cron-like regularity. (OS X provides cron, but Launchd is apparently the <em>preferred</em> approach.) Many of those scripts are written in <a href="http://www.ruby-lang.org/">Ruby</a>, and I’m trying to migrate them to Ruby 1.9.2. OS X has Ruby 1.8.7p174 installed by default. I use <a href="http://rvm.beginrescueend.com"><span class="caps">RVM</span></a> to manage other versions including 1.9.2. The scripts are executable and start with a “magic shebang” line of <code>#!/usr/bin/env ruby</code>. Run from my shell, where <span class="caps">RVM</span> is configured and 1.9.2 is my default interpreter, everything just works. When run from Launchd, however, <span class="caps">RVM</span> is not configured. The system ruby is used instead, and as I don’t generally install any gems for the system Ruby, and since some of the scripts aren’t even compatible with Ruby 1.8, things blow up.</p>
<p>To work around this, I installed a wrapper as <code>$HOME/bin/_rvmruby</code> that sets up <span class="caps">RVM</span> and then execs ruby:</p>
<script src="https://gist.github.com/800166.js?file=_rvmruby.sh"></script><p>It requires you to specify the version of Ruby you want. You can even include the gemset if you want. A little extra logic could get it to use a sensible default, but I’m too lazy.</p>
<p>Next I modify the plist file for the “cron” job to use the wrapper:</p>
<script src="https://gist.github.com/800172.js?file=com.urgetopunt.wozziegoggle.plist"></script><p>The array of strings provided to <code>ProgramArguments</code> is the command and arguments you want Launchd to run. Obviously replace <code>PATH_TO_WRAPPER</code>, <code>RUBY_VERSION</code> and <code>PATH_TO_RUBY_SCRIPT</code> with the actual path to <code>_rvmruby</code>, the version of Ruby you want to use and the path to the script you want to run, respectively.</p>Gem Dependencies for Devise2011-01-28T00:00:00+00:00http://urgetopunt.com/devise/2011/01/28/devise-gem-dependencies.html<p>I recently worked with <a href="https://github.com/plataformatec/devise">devise</a> in a Rails application for the first time. Using Rails 3.0.3, Bundler 1.0.9 and either Devise 1.1.5 or 1.2.RC I found I had to manually add dependencies for hpricot and ruby_parser to my Gemfile in order to run the <code>devise:views</code> generator. Without those two gems explicitly declared as dependencies, running the generator produced empty views under <code>RAILS_ROOT/app/views/devise</code> and yielded the following errors:</p>
<div class="code"><pre><code>$ rails g devise:views
Required dependency hpricot not found!
Run "gem install hpricot" to get it.
...
</code></pre></div>
<p>And then after adding hpricot to the Gemfile:</p>
<div class="code"><pre><code>$ rails g devise:views
Required dependency ruby_parser not found!
Run "gem install ruby_parser" to get it.
...
</code></pre></div>
<p>As I wasn’t relying on those gems for anything else in the application, I only added them to the <code>development</code> group:</p>
<div class="code"><pre><code># Gemfile
group :development do
  gem 'hpricot'
  gem 'ruby_parser'
end
</code></pre></div>
<p>With those dependencies declared running the generator works as expected.</p>Capistrano, Ruby 1.9 and SSH Gateways2011-01-16T00:00:00+00:00http://urgetopunt.com/capistrano/2011/01/16/capistrano-ruby19-ssh-gateway.html<p><strong><span class="caps">UPDATE</span> 2011-05-05 16:17 <span class="caps">PST</span></strong>: It looks like this has <a href="/capistrano/2011/05/05/capistrano-ruby19-ssh-gateway-resolved.html">been resolved</a>!</p>
<p><strong><span class="caps">UPDATE</span> 2011-01-16 16:26 <span class="caps">PST</span></strong>: I’ve updated the data collection <a href="https://gist.github.com/782206">gist</a> to show conversion to <span class="caps">CSV</span> in case that’s useful. I’ve also included the <a href="https://gist.github.com/782305">code</a> I used to calculate the statistics for each data set.</p>
<p>I use <a href="http://www.ruby-lang.org/">Ruby</a> at work for systems administration tasks on a daily basis. One particularly useful tool in my arsenal is <a href="https://github.com/capistrano/capistrano/wiki">Capistrano</a>. However, as I began to transition my daily tasks to Ruby 1.9 I noticed Capistrano was running slowly. Tasks that normally took on the order of a couple seconds could take tens of seconds and sometimes even minutes. I do not yet know what the root cause of the problem is, and I’m not even sure anyone else is having such a problem. After some hours of Googling, I have yet to see a single page describing anything remotely like this. I’m posting this in case anyone else has noticed something similar.</p>
<p>The problem appears to only occur when I am using an ssh gateway, defined in my <code>Capfile</code> like so:</p>
<script src="https://gist.github.com/782200.js?file=Capfile"></script><p>With that line, Capistrano connects via ssh to host.example.com first and from there connects via ssh to the target host. (Without the gateway defined, Capistrano will connect via ssh directly to the target host.) With an ssh gateway defined in my <code>Capfile</code> tasks run under Ruby 1.9.2 are painfully slow. I collected some statistics quantifying the problem.</p>
<p>I started with the following dead simple Capistrano task:</p>
<script src="https://gist.github.com/782203.js?file=Capfile"></script><p>This task runs the <code>uptime</code> command on five different servers running Debian Linux. I ran the task 100 times in succession on a 2.6 GHz Core 2 Duo MacBook Pro running OS X 10.6.6 using <a href="http://www.rubyenterpriseedition.com/">Ruby Enterprise Edition 2010.02</a> and Ruby 1.9.2p136 both installed via <a href="http://rvm.beginrescueend.com/"><span class="caps">RVM</span></a>. I ran the tests with and without an ssh gateway. In each case I had <a href="http://rubygems.org/gems/capistrano/versions/2.5.19">capistrano 2.5.19</a>, <a href="http://rubygems.org/gems/net-ssh/versions/2.0.24">net-ssh 2.0.24</a> and <a href="http://rubygems.org/gems/net-ssh-gateway/versions/1.0.1">net-ssh-gateway 1.0.1</a> installed. I timed each run.</p>
<script src="https://gist.github.com/782206.js?file=gistfile1.sh"></script><p>The raw data for all the runs are available <a href="https://gist.github.com/782156">as a gist</a>. I used <a href="https://gist.github.com/782305">this</a> code to calculate the stats. Below is the analysis of those figures. First, running Ruby Enterprise Edition through an ssh gateway:</p>
<table class="table table-bordered">
<tr><th></th><th>real</th><th>user</th><th>sys</th></tr>
<tr><th>min</th><td>2.50s</td><td>0.50s</td><td>0.15s</td></tr>
<tr><th>max</th><td>3.93s</td><td>0.60s</td><td>0.21s</td></tr>
<tr><th>mean</th><td>2.61s</td><td>0.56s</td><td>0.17s</td></tr>
<tr><th>stddev</th><td>0.17s</td><td>0.02s</td><td>0.01s</td></tr>
<tr><th>median</th><td>2.56s</td><td>0.56s</td><td>0.17s</td></tr>
</table>
<p>And running the same task under Ruby Enterprise Edition without an ssh gateway:</p>
<table class="table table-bordered">
<tr><th></th><th>real</th><th>user</th><th>sys</th></tr>
<tr><th>min</th><td>2.54s</td><td>0.47s</td><td>0.13s</td></tr>
<tr><th>max</th><td>3.13s</td><td>0.62s</td><td>0.18s</td></tr>
<tr><th>mean</th><td>2.61s</td><td>0.51s</td><td>0.14s</td></tr>
<tr><th>stddev</th><td>0.11s</td><td>0.02s</td><td>0.01s</td></tr>
<tr><th>median</th><td>2.58s</td><td>0.51s</td><td>0.14s</td></tr>
</table>
<p>And now the same task under Ruby 1.9.2 without an ssh gateway:</p>
<table class="table table-bordered">
<tr><th></th><th>real</th><th>user</th><th>sys</th></tr>
<tr><th>min</th><td>2.29s</td><td>0.64s</td><td>0.21s</td></tr>
<tr><th>max</th><td>3.35s</td><td>0.72s</td><td>0.23s</td></tr>
<tr><th>mean</th><td>2.41s</td><td>0.66s</td><td>0.21s</td></tr>
<tr><th>stddev</th><td>0.11s</td><td>0.01s</td><td>0.00s</td></tr>
<tr><th>median</th><td>2.42s</td><td>0.66s</td><td>0.21s</td></tr>
</table>
<p>And finally the same task under Ruby 1.9.2 through an ssh gateway:</p>
<table class="table table-bordered">
<tr><th></th><th>real</th><th>user</th><th>sys</th></tr>
<tr><th>min</th><td>3.54s</td><td>0.70s</td><td>0.23s</td></tr>
<tr><th>max</th><td><strong>279.17s</strong></td><td>1.83s</td><td>1.11s</td></tr>
<tr><th>mean</th><td><strong>66.80s</strong></td><td>0.96s</td><td>0.43s</td></tr>
<tr><th>stddev</th><td><strong>58.61s</strong></td><td>0.22s</td><td>0.18s</td></tr>
<tr><th>median</th><td><strong>50.64s</strong></td><td>0.92s</td><td>0.39s</td></tr>
</table>
<p>So <span class="caps">REE</span> with and without a gateway and 1.9.2 without a gateway each took about 2.5 seconds to complete the task. That seems reasonable to me. But cripes, the average runtime for that task under 1.9.2 with a gateway was 66.80 seconds, and the peak runtime was 279.17 seconds! The shortest runtime was an almost forgivable 3.54 seconds. If you look at the <a href="https://gist.github.com/782156#file_cap_ruby192_gw.csv">data</a> for that run you will see the substantial swings in runtime. The standard deviation was a moody 58.61 seconds. With a median runtime of 50.64 seconds, it would seem most of the runs are shorter than average, but there are several exceptionally slow runs. Fourteen runs took two minutes or longer.</p>
<p>Again, apart from knowing it has something to do with using an ssh gateway, I haven’t identified the actual problem yet. If you know what I might be doing wrong, I’d love to know. If you don’t know what’s going on, but you’re having a similar problem, well, at least you’re not alone. In the meanwhile, I’ve modified my <code>Capfile</code> to let me conditionally enable the ssh gateway:</p>
<script src="https://gist.github.com/782209.js?file=Capfile"></script><p>This allows me to continue using Ruby 1.9 for daily systems administration tasks while I’m on a trusted network. When working from an untrusted network, I can throw <code>CAP_SSH_GATEWAY</code> on the end (and switch to Ruby 1.8 — which is easy with <a href="http://rvm.beginrescueend.com/"><span class="caps">RVM</span></a>).</p>Creating and submitting forms with jQuery in Firefox2010-08-29T00:00:00+00:00http://urgetopunt.com/jquery/firefox/2010/08/29/jquery-form-submit-firefox.html<p>There’s an interesting gotcha when creating and submitting forms using <a href="http://jquery.com/">jQuery</a> in <a href="http://www.mozilla.org/firefox/">Firefox</a>. Firefox silently fails to submit forms that have not yet been attached to the <span class="caps">DOM</span>. For example, consider the following scenario:</p>
<script src="http://gist.github.com/555800.js?file=gistfile1.js"></script><p>Clicking on a link with <span class="caps">CSS</span> class “destroy” in <a href="http://www.apple.com/safari/">Safari</a> or <a href="http://www.google.com/chrome/">Chrome</a> works as expected — a form is created and submitted. In Firefox, nothing seems to happen. To get it working in Firefox, the form must be attached to the <span class="caps">DOM</span>, e.g., using <code>appendTo()</code>:</p>
<script src="http://gist.github.com/555807.js?file=gistfile1.js"></script><p>With the addition of <code>f.appendTo($('body'))</code>, clicking on a link with <span class="caps">CSS</span> class “destroy” submits in Firefox (as well as Safari and Chrome). I scratched my head on this for a while before stumbling across <a href="http://api.jquery.com/submit/#comment-45454172">balpha’s comment on the jQuery <span class="caps">API</span> documentation for <code>submit()</code></a>.</p>Parallels Desktop and the /usr/local/lib dependency2010-06-24T00:00:00+00:00http://urgetopunt.com/2010/06/24/paralells-usr-local-lib-dependency.html<p><strong><span class="caps">UPDATE</span> 2010-06-30:</strong> I added a missing backslash (<code>\</code>) to escape a space in the pathname given in the <code>ln -s</code> invocation at the end of the article.</p>
<p>If you use <a href="http://www.parallels.com/">Parallels Desktop</a> on a Mac, be aware that it may install dependencies in <code>/usr/local/lib</code>.</p>
<p>After a couple months of not needing access to any virtual machines, I recently tried to fire up Parallels on my Mac only to find that it would crash immediately. I checked Console.app for possible error messages and discovered the following single message:</p>
<div class="code"><pre><code>([0x0-0x5ba5ba].com.parallels.desktop.console[84395]) Exited with exit code: 9
</code></pre></div>
<p>It was repeated every time I tried to launch Parallels. I started searching the mighty intertubes for a clue, and ran across a <a href="http://forum.parallels.com/printthread.php?t=31231">lengthy thread</a> on the Parallels Forum about the same problem with Parallels 3.0 (I’m using 4.0). Down towards the bottom of the first page user “wa6vvv” mentions that Parallels may create a symlink for <code>libprl_sdk.dylib</code> in <code>/usr/local/lib</code>. I spun up a backup and sure enough <code>libprl_sdk.dylib</code> was a symlink to a file buried deep under <code>/Library/Parallels/Parallels Service.app</code>.</p>
<p>If you don’t have backups (but you really, <em>really</em> should) the symlink is still easy enough to restore via the command line:</p>
<div class="code"><pre><code>$ ln -s /Library/Parallels/Parallels\ Service.app/Contents/Frameworks/ParallelsVirtualizationSDK.framework/Versions/Current/Libraries/libprl_sdk.dylib /usr/local/lib/libprl_sdk.dylib
</code></pre></div>
<p>(Note that I replaced “4.0” with “Current” in the command above. That should make the command portable if you’re having the same problem but with a different version of Parallels.)</p>
<p>After restoring the symlink Parallels is once again working. All is right with the world.</p>Why I use Linode2010-06-16T00:00:00+00:00http://urgetopunt.com/hosting/2010/06/16/linode-plug.html<p>I’m a very satisfied <a href="http://www.linode.com/">Linode</a> customer. Their control panel is powerful, they offer console access via <span class="caps">SSH</span> and <a href="http://www.gnu.org/software/screen/">Screen</a>, and they recently added an on-site backup solution which works well. The hardware has proven reliable as long as I’ve been with them. Their data center offerings have terrific geographic diversity. And they’ve always offered a high bang-to-buck ratio in terms of memory, storage and <span class="caps">CPU</span>.</p>
<p>So when they did <a href="http://blog.linode.com/2010/06/16/linode-turns-7-big-ram-increase/">this</a> it seemed as good a time as any to add to the public praise for the terrific service they provide.</p>Faking POSTs to S3 in Cucumber2010-06-10T00:00:00+00:00http://urgetopunt.com/cucumber/rails/2010/06/10/cucumber-fake-s3.html<p>I’ve been developing an application that allows users to upload files to <a href="http://aws.amazon.com/s3/">Amazon S3</a>. The files will be on the order of 100 MB each, and they will not require post-processing by the application server.</p>
<p>Since uploads will be relatively time-consuming, it would be preferable to upload the files directly to S3 rather than uploading to the application server and then sending them to S3 in a background job. The procedure for direct uploads is <a href="http://developer.amazonwebservices.com/connect/entry.jspa?externalID=1434">well-documented</a> on the <span class="caps">AWS</span> developer community site. The application renders the upload form which POSTs to S3, and upon successful completion of the upload, S3 responds with a redirect: a <code>303 See Other</code> status and a location header. The <span class="caps">URL</span> in the location header is provided by the application in a hidden input field on the upload form.</p>
<p>Development of the application is being driven with <a href="http://cukes.info/">Cucumber</a> and <a href="http://github.com/jnicklas/capybara">Capybara</a>. In general I don’t want scenarios to actually communicate with S3. It would be slow, and it would prevent scenarios from running without a network connection. Other than one specially-tagged scenario validating the actual interaction with S3, I’d like the POSTs to S3 and the resulting redirects to be stubbed out.</p>
<p><a href="http://github.com/chrisk/fakeweb">FakeWeb</a> is a wonderful utility for stubbing out web requests. You can configure it to intercept <code>Net::HTTP</code> requests to specific URLs (or even URLs matching a regular expression) and provide fixed responses. In this case, I wanted to intercept POSTs to <code>http://s3.amazonaws.com/BUCKET_NAME</code> and respond with the expected <code>303</code> redirect (throughout the rest of this article, BUCKET_NAME should be replaced with the name of the actual S3 bucket being used).</p>
<p>Normally I would configure FakeWeb in a <code>Before</code> hook like so:</p>
<div class="code"><pre><code># features/support/hooks.rb
Before('@upload') do
  token = Token.first
  FakeWeb.register_uri(:post, 'http://s3.amazonaws.com/BUCKET_NAME',
    :status => [303, 'See Other'],
    :location => "http://example.com/tokens/#{token.to_param}/upload_complete")
end
</code></pre></div>
<p>The <code>:location</code> value is the <span class="caps">URL</span> to which the user should be redirected once the upload completes. However, a <code>Before</code> hook wouldn’t work for me because the redirect <span class="caps">URL</span> was a member action for a resource (<code>Token.first</code>) which is created during the scenario (and thus after any <code>Before</code> hooks are run). So FakeWeb was instead configured from a step definition:</p>
<div class="code"><pre><code># features/step_definitions/asset_steps.rb
When /^S3 uploads are stubbed out$/ do
  token = Token.first
  FakeWeb.register_uri(:post, 'http://s3.amazonaws.com/BUCKET_NAME',
    :status => [303, 'See Other'],
    :location => "http://example.com/tokens/#{token.to_param}/upload_complete")
end
</code></pre></div>
<p>Now by adding “When S3 uploads are stubbed out” to the scenario the expected FakeWeb configuration is added to the environment.</p>
<p>Sadly, after doing this, the scenario didn’t work as expected. Instead of the <span class="caps">POST</span> to S3 being intercepted, the form seemed to be posting to <code>/BUCKET_NAME</code> on the application server, causing a routing error to be raised. I verified that the form had the full <span class="caps">URL</span> to S3 in the <code>action</code> attribute (the form for uploading directly to S3 is complicated enough that I actually wrote a view spec for it). So why was the Cucumber scenario POSTing the form locally?</p>
<p>After a bit of googling I ran across <a href="http://groups.google.com/group/fakeweb-users/browse_thread/thread/c361f0382299093b/830542b4cc08338f">this thread</a> on the fakeweb-users mailing list. It seems <a href="http://github.com/brynary/webrat">Webrat</a> was ignoring the host component when doing a form submission. I’m using <a href="http://github.com/jnicklas/capybara">Capybara</a> instead of Webrat, but I wondered if the same thing might be happening here. A quick look at the section “Calling remote servers” in the <a href="http://github.com/jnicklas/capybara/blob/master/README.rdoc">Capybara <span class="caps">README</span></a> reveals that, indeed, the default driver in Capybara — rack-test — does not support calling out to remote URLs.</p>
<p>So I couldn’t use FakeWeb to stub out the S3 form submission. Looking back at <a href="http://groups.google.com/group/fakeweb-users/browse_thread/thread/c361f0382299093b/830542b4cc08338f">the fakeweb-users thread</a>, one of the suggestions was to create a special route which responds to requests for <code>/BUCKET_NAME</code>. This felt kludgy, but I really wanted to move forward.</p>
<p>Creating a whole controller just to handle a test felt excessive, so instead I opted to play with <a href="http://weblog.rubyonrails.org/2008/12/17/introducing-rails-metal">Rails Metal</a>. I generated a stub named S3Stub and configured it to handle POSTs to <code>/BUCKET_NAME</code>.</p>
<div class="code"><pre><code># app/metal/s3_stub.rb
class S3Stub
  def self.call(env)
    if Rails.env.cucumber? && env['PATH_INFO'] =~ %r{^/BUCKET_NAME} && env['REQUEST_METHOD'] == 'POST'
      request = Rack::Request.new(env)
      [303, {'Location' => request.params['success_action_redirect']}, ['See Other']]
    else
      [404, {'Content-Type' => 'text/html'}, ['Not Found']]
    end
  end
end
</code></pre></div>
<p>As configured, this handler will respond to requests for <code>/BUCKET_NAME</code>, but it will only do so when running through Cucumber (<code>Rails.env.cucumber?</code> is true) and only for <span class="caps">HTTP</span> <span class="caps">POST</span> requests (<code>env['REQUEST_METHOD']</code> is <span class="caps">POST</span>). In production these requests will return the expected response <code>404 Not Found</code>. Using Metal keeps the handler lightweight (no controller, no additional routes), and I can even forego looking up the Token for the redirect <span class="caps">URL</span>, instead extracting the <span class="caps">URL</span> from the actual request, just as S3 would do.</p>
<p>(As an aside, S3 actually lets you work with individual buckets using two different URLs — <code>http://BUCKET_NAME.s3.amazonaws.com</code> and <code>http://s3.amazonaws.com/BUCKET_NAME</code>. In general, either way would work, but for the specific problem addressed above, you <strong>must</strong> use the latter <span class="caps">URL</span>. The former would result in Cucumber scenarios posting to <code>/</code> on the test server. You probably don’t want to stub out requests to your application’s root <span class="caps">URL</span>.)</p>
<p>I admit, right now I am not fond of the way this is set up. It’s definitely a kludge, but it was a relatively simple approach, and it works. Still, if there is an accepted best practice for this problem — or just a better way to handle it — I’d love to know.</p>
<div class="code"><pre><code>$ for ((i=0; $i<1024; i++)); do
> echo > /dev/tcp/localhost/$i && echo "tcp/$i is alive"
> done 2>/dev/null
tcp/22 is alive
tcp/80 is alive
...
</code></pre></div>
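<p>For comparison, the same sweep translates readily to Ruby with <code>TCPSocket</code>. This is a sketch, not part of the talk — the loopback host, port range, and half-second timeout are my own choices:</p>

```ruby
require 'socket'
require 'timeout'

# Returns true if a TCP connection to host:port succeeds within `wait` seconds.
def port_open?(host, port, wait = 0.5)
  Timeout.timeout(wait) do
    TCPSocket.new(host, port).close
    true
  end
rescue SystemCallError, Timeout::Error
  # ECONNREFUSED, EHOSTUNREACH, timeouts, etc. all mean "not alive" here.
  false
end

(1...1024).each do |port|
  puts "tcp/#{port} is alive" if port_open?('127.0.0.1', port)
end
```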
<p>I didn’t know Bash supported C-style for loops, so I’m glad to have learned of them. But I’m not quite sure how to feel about the magic <code>/dev/tcp</code> “files” — that seems potentially useful but so very perverse. Search for “/dev/tcp” in the Bash man page to read more about it (<code>/dev/udp</code> is also supported).</p>Rails 2.3, named_scope, destroy_all and callback confusion2010-04-30T00:00:00+00:00http://urgetopunt.com/2010/04/30/named-scope-destroy-all-callback.html<h3>Background</h3>
<p>I was recently working on a Rails 2.3.5 application when I found myself stumped on some perplexing behavior that arose when using a named scope, the <code>destroy_all</code> method and an after destroy callback fired by an ActiveRecord observer. Let’s begin with the models. There are Asset and User models. The assets table’s columns include the integer <code>file_size</code> and timestamp <code>delete_after</code>. There is a named scope, <code>expired</code>, which returns all assets for which the value of <code>delete_after</code> is in the past. Assets belong to users.</p>
<div class="code"><pre><code>class Asset < ActiveRecord::Base
belongs_to :user, :counter_cache => true
named_scope :expired, lambda { {:conditions => ['delete_after < ?', Time.now]} }
end
# == Schema information
# ...
# user_id :integer not null
# file_size :integer default(0), not null
# delete_after :datetime
# ...
</code></pre></div>
<p>Users have many assets. The users table has a large integer column, <code>current_disk_usage</code>, which holds the denormalized sum of the file sizes of the assets belonging to each user. There is an instance method, <code>#recalculate_disk_usage!</code>, which triggers the User instance to update <code>current_disk_usage</code> by recalculating the sum of the user’s assets.</p>
<div class="code"><pre><code>class User < ActiveRecord::Base
has_many :assets
def recalculate_disk_usage!
update_attribute(:current_disk_usage, assets.sum(:file_size))
end
end
# == Schema information
# ...
# current_disk_usage :integer(12) default(0), not null
# ...
</code></pre></div>
<p>There is an AssetObserver class which triggers <code>User#recalculate_disk_usage!</code> any time an asset record is created, updated or destroyed. (The implementation shown below is inefficient, and one workaround to the problem discussed in this article was to make the observer a bit gentler on the database. That version can be found at the bottom of the article.)</p>
<div class="code"><pre><code>class AssetObserver < ActiveRecord::Observer
def after_create(asset)
asset.user.recalculate_disk_usage! unless asset.file_size.zero?
end
def after_update(asset)
asset.user.recalculate_disk_usage! if asset.file_size_changed?
end
def after_destroy(asset)
asset.user.recalculate_disk_usage!
end
end
</code></pre></div>
<h3>The problem</h3>
<p>The application in question must periodically purge asset records which have expired (where <code>delete_after</code> is in the past). This is accomplished with a Rake task which sends <code>destroy_all</code> to the <code>Asset.expired</code> named scope.</p>
<div class="code"><pre><code>task :purge_expired => :environment do
Asset.expired.destroy_all
end
</code></pre></div>
<p>And this is where the confusion set in. Whenever I manually created, updated or destroyed an asset the asset observer kicked in as expected and the user’s current disk usage was updated accordingly. Whenever I ran the <code>purge_expired</code> Rake task, the correct assets were destroyed, but the user’s current disk usage would get set to zero. If I manually created, updated or destroyed another asset the current disk usage would once again be set to the correct, up-to-date value. It was time to see what <span class="caps">SQL</span> was actually being run, so I opened up the log file.</p>
<div class="code"><pre><code>DELETE FROM "assets" WHERE "id" = 2
SELECT sum("assets".file_size) AS sum_file_size FROM "assets" WHERE (("assets".delete_after < '2010-04-30 06:41:20.755388') AND ("assets".user_id = 5))
UPDATE "users" SET "updated_at" = '2010-04-30 06:41:21.189777', "current_disk_usage" = 0 WHERE "id" = 5
</code></pre></div>
<p><em>J’accuse!</em> The <code>DELETE</code> statement on the first line removes the record from the assets table, and the <code>UPDATE</code> on the last line shows the user’s disk usage being set to zero. But what’s up with the <code>SELECT</code> in the middle? It’s calculating the sum of the file sizes of the user’s assets, but it’s limited the sum to those assets which have expired — look at the first half of the <code>WHERE</code> clause. For some reason the conditions imposed by the <code>Asset.expired</code> named scope are filtering into the call to <code>user.assets.sum(:file_size)</code>. I took a look at the implementation of <code>ActiveRecord::Base.destroy_all</code> for a clue.</p>
<div class="code"><pre><code># File activerecord/lib/active_record/base.rb, line 876
def destroy_all(conditions = nil)
find(:all, :conditions => conditions).each { |object| object.destroy }
end
</code></pre></div>
<p>Not really an obvious answer, but it helped to steer my thinking. <code>Asset.expired</code> returns an instance of <code>ActiveRecord::NamedScope::Scope</code>. Sending <code>destroy_all</code> to that scope calls <code>ActiveRecord::Base.find(:all)</code> which returns an array of <code>Asset</code> instances. We then iterate over that array, sending <code>#destroy</code> to each asset. For each asset, after <code>#destroy</code> runs, <code>AssetObserver#after_destroy(asset)</code> is called which calls <code>asset.user.recalculate_disk_usage!</code> which in turn calls <code>asset.user.assets.sum(:file_size)</code>.</p>
<h3>A workaround</h3>
<p>Okay, but I still didn’t know why the <code>expired</code> named scope conditions at the beginning of the process were showing up again when calling <code>user.assets</code> near the end of the process. However I had a hunch the call to <code>ActiveRecord::Base.find</code> had something to do with it, so to test this hunch I modified the Rake task, removing the call to <code>destroy_all</code> and instead iterating over the scope directly.</p>
<div class="code"><pre><code>task :purge_expired => :environment do
Asset.expired.each { |asset| asset.destroy }
end
</code></pre></div>
<p>So what happens when I run the Rake task now?</p>
<div class="code"><pre><code>DELETE FROM "assets" WHERE "id" = 12
SELECT sum("assets".file_size) AS sum_file_size FROM "assets" WHERE ("assets".user_id = 5)
UPDATE "users" SET "updated_at" = '2010-04-30 18:13:41.050527', "current_disk_usage" = 1696416 WHERE "id" = 5
</code></pre></div>
<p><em>Et voilà!</em> I still don’t know exactly what <code>destroy_all</code>’s use of <code>ActiveRecord::Base.find</code> does that leaks the scope’s conditions into later stages of the destruction process. Bypassing <code>destroy_all</code> by iterating over the named scope directly seems to fix the problem, and to be honest, while the original version with <code>destroy_all</code> looked more like idiomatic Rails, the use of <code>each</code> feels more like idiomatic Ruby, i.e., purer. I think this may be a bug, but I’d need to get deeper into the guts of Rails’ traversal across the association to be sure. I’ve only encountered this problem on Rails 2.3. I haven’t tested the Rails 3 release candidate yet.</p>
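<p>One mental model for how the conditions could tag along (plain Ruby, purely illustrative — this is not Rails source): while a scoped call runs, its conditions sit on a class-level stack, and any query issued against the same model inside that window — including one fired from an observer callback — inherits them.</p>

```ruby
# Illustrative scope stack, mimicking the shape of with_scope. The class
# and method names here are hypothetical, not ActiveRecord's.
class ScopeStack
  @conditions = []

  class << self
    attr_reader :conditions

    # Push conditions for the duration of a block, the way with_scope does.
    def with_scope(condition)
      conditions.push(condition)
      yield
    ensure
      conditions.pop
    end

    # Any query built while the stack is non-empty inherits it.
    def where_clause
      conditions.join(' AND ')
    end
  end
end

queries = []
ScopeStack.with_scope("delete_after < now()") do
  # ...imagine a destroy callback firing another query mid-scope...
  queries << "SELECT sum(file_size) FROM assets WHERE #{ScopeStack.where_clause}"
end
queries.first
# => "SELECT sum(file_size) FROM assets WHERE delete_after < now()"
```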
<h3>A different workaround</h3>
<p>But wait, there’s more. As I mentioned earlier, the original implementation of <code>AssetObserver</code> isn’t efficient. Here’s the problem: each time an asset record is destroyed, the database is issued both <code>SELECT SUM</code> and <code>UPDATE</code> statements. The latter is unavoidable, but is the former strictly necessary? ActiveRecord provides models with <code>#increment!</code> and <code>#decrement!</code> instance methods. They’d normally be used to increment or decrement a counter by one, but they both take optional second arguments which are the amount by which the counter (<code>current_disk_usage</code>) should be changed. When the observer’s callback is triggered, the observed object (in this case, an asset) is passed as an argument to the callback method. Since the callback is acting on a single object at a time, instead of telling the user to calculate the sum of all remaining assets’ file sizes, we could just increment or decrement the current disk usage by the file size of the asset being changed (or in the case of updates, by the difference between the new and old file sizes). Here’s a simple version of how that would look:</p>
<div class="code"><pre><code>class AssetObserver < ActiveRecord::Observer
def after_save(asset)
if asset.file_size_changed?
difference = asset.file_size_change.last - asset.file_size_change.first
asset.user.increment!(:current_disk_usage, difference)
end
end
def after_destroy(asset)
asset.user.decrement!(:current_disk_usage, asset.file_size)
end
end
</code></pre></div>
<p>By removing the <code>SELECT SUM</code> call we certainly won’t hit the database as hard for each asset being removed, so perhaps this is a more correct way of doing things.</p>
<p>As it happens, the application this problem comes from has very light database use as it is. With only a couple hundred users, each with a few dozen assets, usually not more than a couple dozen assets total expiring each week, and a reliably long window of inactivity each night (asset expiration happens once per day), peppering the database with a few dozen <code>SELECT SUM</code> statements isn’t a big deal. The use of <code>User#recalculate_disk_usage!</code> also has the benefit of being self-healing: if, somehow, the current disk usage value for a user were to become corrupted, it would be automatically fixed the next time <code>#recalculate_disk_usage!</code> was called. If instead of simply triggering the user’s recalculation of disk usage, <code>AssetObserver</code> becomes responsible for the actual values being added or subtracted from <code>current_disk_usage</code>, there may be implications for possible data corruption (although that should be pretty well mitigated by the asset activities being wrapped in a single database transaction).</p>Updating Firmware on Sharp BD-HP20U Blu-ray Player2010-03-09T00:00:00+00:00http://urgetopunt.com/2010/03/09/sharp-aquos-blu-ray-firmware.html<p>Updating the firmware on a Sharp BD-HP20U Blu-ray player using a <span class="caps">USB</span> stick and a Mac was not entirely straightforward for me. The process is a bit brittle, so I’m documenting it in the hopes that I (and others) won’t waste so much time dealing with it next time around.</p>
<p><a href="http://www.sharpusa.com/">Sharp</a> distributes the firmware update as a <span class="caps">ZIP</span> archive. After you download and unpack it you will have a file with a <code>.RVP</code> extension, e.g., <code>HP20U118.RVP</code>. This file needs to be placed on the <span class="caps">USB</span> stick which will then be attached to the Blu-ray player, but the player will be finicky about the condition of the <span class="caps">USB</span> device.</p>
<p>The filesystem on the <span class="caps">USB</span> stick must be FAT16 — the player will not read FAT32. Disk Utility.app only seems able to do FAT32, so I had to drop into the shell and use <code>/sbin/newfs_msdos</code> to format the device. Plug the stick into your Mac and launch Terminal.app. From the terminal make sure the <span class="caps">USB</span> stick is unmounted (replace <code>/dev/diskX</code> with the actual device file of the <span class="caps">USB</span> stick):</p>
<div class="code"><pre><code>$ diskutil umountDisk /dev/diskX</code></pre></div>
<p>Next, format the <span class="caps">USB</span> stick with a FAT16 filesystem (again, replace <code>/dev/diskX</code> as appropriate):</p>
<div class="code"><pre><code>$ newfs_msdos -F 16 -v FIRMWARE /dev/diskX</code></pre></div>
<p>The <code>-F 16</code> option specifies the FAT16 filesystem, and <code>-v FIRMWARE</code> assigns the label “<span class="caps">FIRMWARE</span>” to the new filesystem. You can use any 1-11 character string that adheres to <span class="caps">DOS</span> file-naming rules instead of “<span class="caps">FIRMWARE</span>”. Whatever you use will end up being all uppercase regardless of how you enter it. See the <code>newfs_msdos</code> man page for more information.</p>
<p>Now remount the <span class="caps">USB</span> stick:</p>
<div class="code"><pre><code>$ diskutil mount /dev/diskX</code></pre></div>
<p>This will mount the <span class="caps">USB</span> stick on <code>/Volumes/FIRMWARE</code> (the mount point is derived from the volume label).</p>
<p>Now copy the <code>.RVP</code> file onto the <span class="caps">USB</span> stick (replace <code>/path/to/HP20U118.RVP</code> with the actual path to whatever firmware version you downloaded):</p>
<div class="code"><pre><code>$ cp /path/to/HP20U118.RVP /Volumes/FIRMWARE</code></pre></div>
<p>Quickly ensure that there are <strong>no</strong> other files on the <span class="caps">USB</span> stick other than the <code>.RVP</code> file you just copied over:</p>
<div class="code"><pre><code>$ ls -a /Volumes/FIRMWARE</code></pre></div>
<p>Besides the firmware update, there may be several hidden files. They must be removed before the device can be used by the Blu-ray player. If there are any, <em>any</em> other files on the disk besides the firmware update itself, the Blu-ray player will refuse to run the update, complaining that there are other files on the disk.</p>
<div class="code"><pre><code>$ rm -rvf /Volumes/FIRMWARE/.[a-zA-Z0-9_]*</code></pre></div>
<p>This command will verbosely remove any hidden files. If for any reason there are any regular files (files whose names do not begin with a dot) besides the firmware update, they should be removed as well.</p>
<p>Finally, unmount the <span class="caps">USB</span> device:</p>
<div class="code"><pre><code>$ diskutil umountDisk /dev/diskX</code></pre></div>
<p>When this completes the <span class="caps">USB</span> device can be safely unplugged from your Mac. Plug it into your Blu-ray player and follow the operating manual instructions for applying firmware updates. At the time of this writing the operating manual for the BD-HP20U is <a href="http://www.sharpusa.com/downloads/archives/product_manuals/dvd_man_BDHP20U.pdf">available here</a>.</p>Pondering Pickled Patterns2010-03-07T00:00:00+00:00http://urgetopunt.com/cucumber/2010/03/07/anchor-capture-model.html<p><a href="http://github.com/ianwhite/pickle">Pickle</a> is what made <a href="http://cuke.info/">Cucumber</a> make sense to me for integration testing. I had a hard time coping with the idea of spending so much time going between nearly English features and huge swaths of regular expressions. Pickle cuts down on a lot of the regular expression drudgery by defining some generic steps for working with models. If you haven’t tried Pickle out yet, I highly recommend it. <a href="http://railscasts.com/episodes/186-pickle-with-cucumber">Railscast 186</a> gives a great overview.</p>
<p>In addition to the generic step definitions Pickle provides, it also gives you access to the powerful regular expressions that drive those steps. I just spent some time hung up on one of those regexps, <code>capture_model</code>.</p>
<p>In the feature I was creating a labeled account and then visiting the users page for that account:</p>
<div class="code"><pre><code>Given an account: "spectre" exists
When I go to the users page for the account: "spectre"
</code></pre></div>
<p>Then I was using <code>capture_model</code> in a pattern in <code>features/support/paths.rb</code> like so:</p>
<div class="code"><pre><code>case page_name
# ...
when /the users page for #{capture_model}/
account = model($1)
users_path(:subdomain => account.subdomain)
# ...
end
</code></pre></div>
<p>Unfortunately, every time I ran the feature, <code>#model</code> would return <code>nil</code>. After banging my head into this for a while I finally determined that <code>#capture_model</code> was capturing an empty pattern, so I was passing <code>""</code> to <code>#model</code>.</p>
<p>I printed out the regexp returned by <code>#capture_model</code> to see what was going on (brace yourself):</p>
<div class="code"><pre><code>/((?:(?:)|(?:(?:a|an|another|the|that) )?(?:(?:(?:(?:first|last|(?:\\d+(?:st|nd|rd|th))) )?(?:account|user))|(?:(?:account|user)(?::? \"(?:[^\\\"]|\\.)*\")))))/
</code></pre></div>
<p>The regular expression is built up programmatically at runtime based on the models which currently exist in your application. (As you can see above, in the application I was working on I only had Account and User models defined). The regexp is long and looks a little like Lisp code that’s been tweaked, but if you look closely you’ll probably come to realize (faster than I did) that it will happily match an empty string. In fact, in the pattern I was using in <code>paths.rb</code>, <code>#capture_model</code> was matching an empty string and the rest of the line was being thrown away. Unfortunately that part — <code>the account: "spectre"</code> — was the most important part…</p>
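<p>The failure is easy to reproduce with a much smaller stand-in pattern (hypothetical, and hugely simplified from what Pickle actually generates) — the giveaway is the empty first alternative inside the capture group:</p>

```ruby
# Simplified stand-in for #capture_model: note the empty first alternative.
capture_model = '((?:)|(?:the )?account(?:: "[^"]*")?)'

line = 'the users page for the account: "spectre"'

# Unanchored, the empty alternative matches immediately, capturing ""
# and silently discarding the rest of the line.
unanchored = Regexp.new("the users page for #{capture_model}").match(line)
unanchored[1] # => ""

# Anchored, the engine must consume the whole suffix, which forces the
# non-empty alternative to do the work.
anchored = Regexp.new("the users page for #{capture_model}$").match(line)
anchored[1] # => 'the account: "spectre"'
```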
<p>Realizing where I was going wrong I anchored my pattern to the end of the line:</p>
<div class="code"><pre><code>when /the users page for #{capture_model}$/
</code></pre></div>
<p>Woo! Now the line only matches when everything between "the users page for " and the end of the line can be matched by <code>#capture_model</code>. With this addition, the regexp captured the model label <code>account: "spectre"</code> and <code>#model</code> finally returned the record. <em>Allons-y!</em></p>
<p>The moral of the story: anchor your patterns whenever possible. It’s generally good practice if only to make sure you’re acting on the correct data.</p>How Do You Run Puppet with Ruby Enterprise Edition?2010-02-28T00:00:00+00:00http://urgetopunt.com/puppet/2010/02/28/running-puppet-with-ree.html<p><strong><span class="caps">UPDATE</span> 2010-03-11:</strong> Jeff McCune notes in the comments that this issue is <a href="http://projects.reductivelabs.com/issues/3363">now documented</a> in the Puppet issue tracker. It is apparently a bug in Ruby Enterprise Edition, and it can be worked around by running puppetmaster as a Passenger/Rack application through Apache, cutting <span class="caps">REE</span> out of the <span class="caps">SSL</span> transaction and thereby bypassing the issue.</p>
<p>Sadly, the short answer is, “So far, I don’t.”</p>
<p>As I’ve posted before, I install <a href="http://www.rubyenterpriseedition.com/">Ruby Enterprise Edition</a> and <a href="http://www.modrails.com/">Passenger</a> on my application servers. Installation is easy and scriptable, which makes it a task I’d prefer to automate.</p>
<p>I’ve also recently started evaluating both <a href="http://wiki.opscode.com/display/chef/Home">Chef</a> and <a href="http://reductivelabs.com/trac/puppet/wiki">Puppet</a> for use in automating server configuration. Both Chef and Puppet are written in <a href="http://www.ruby-lang.org/">Ruby</a> (one of the reasons I picked those two for evaluation). This post isn’t about deciding between the two. They both have satisfied users and rightly so. They both have pros and cons. For me, choosing Puppet came down to the fact that I was able to get up to speed with it more easily than I was with Chef. (In my opinion, Chef has a lot of potential, but at this stage development is so rapid, I felt it would be easy for my crack sysadmin team of one to get left behind).</p>
<p>So eventually I arrived at the thought, if I was going to be using Puppet to deploy <span class="caps">REE</span> to all my servers, it may as well be running on the Puppet master as well. In fact, why bother with <a href="http://www.ubuntu.com/">Ubuntu’s</a> Ruby 1.8.6 packages at all?</p>
<p>So I started with two simple virtual machines running 32-bit Ubuntu 8.04 with up-to-date patches — one to be the Puppet master and one to be the client. I installed <span class="caps">REE</span> 1.8.7-2010.01 as well as the <a href="http://rubygems.org/gems/puppet">puppet 0.25.4</a> and <a href="http://rubygems.org/gems/facter">facter 1.5.7</a> gems. I created some Puppet recipes and got ready to start my first Puppet run. I launched <code>puppetmasterd</code> on the server, and all appeared to be well. I then launched <code>puppetd</code> on the client. This is where it stopped being fun.</p>
<p>As soon as the client connected, <code>puppetmasterd</code> crashed somewhere in <code>/opt/ruby-enterprise/lib/ruby/1.8/i686-linux/openssl.so</code> with <code>undefined symbol: sk_x509_num</code>. Curiously, only <code>puppetmasterd</code> crashed. The instance of <code>puppetd</code> on the client only complained that the server sent a bum response. I tried installing <span class="caps">REE</span> through <code>dpkg</code> instead of building from source. Nothing worked, and unfortunately, I couldn’t find any mention of this problem on the web.</p>
<p>As a control, I proceeded to install the OS Ruby packages (<code>ruby1.8</code>, <code>libruby1.8</code>, <code>ruby1.8-dev</code> and <code>libopenssl-ruby1.8</code> among others). I then installed Rubygems from source, the puppet and facter gems and fired up <code>puppetmasterd</code> (from the newly-installed puppet gem). Finally I launched <code>puppetd</code> on the client once more. Success!</p>
<p>At this point the client was still running Puppet with <span class="caps">REE</span>. Only the Puppet master had the OS Ruby packages installed. So where did I go wrong? My guess is I should be able to get the Puppet master using <span class="caps">REE</span>, but I’ll need to rebuild it with different build flags to make sure <code>openssl.so</code> has everything it needs. Of course if you already know what I did wrong, please feel free to leave a comment…</p>
<p>Next time: I’ll be writing about the hoops I had to jump through to get a secondary gem package provider working in Puppet alongside the existing provider. (Hint: the <code>GEM_HOME</code> environment variable is set when Puppet installs gems).</p>Parallels Desktop, Debian, Cloning VMs and Networking2010-02-27T00:00:00+00:00http://urgetopunt.com/2010/02/27/parallels-clone-debian-network.html<p>If you clone a <a href="http://www.parallels.com/">Parallels</a> virtual machine running <a href="http://www.debian.org/">Debian</a> or <a href="http://www.ubuntu.com/">Ubuntu</a> you may find that the clone comes up sans networking. I encountered this problem reproducibly with VMs running Debian 5.0, Ubuntu 8.04 and Ubuntu 9.10. While <code>eth0</code> exists and works on the source VM, it does not exist on the clones which instead have <code>eth1</code>. Apparently this is a <a href="http://forum.parallels.com/showthread.php?t=31427">known issue</a> with Debian-based distributions.</p>
<p>To get the network up on the cloned VMs edit the file <code>/etc/network/interfaces</code>, replacing all references to <code>eth0</code> with <code>eth1</code> (or whatever the interface ends up being named on the clone). Save your changes and restart networking.</p>
<p>I was a little perplexed by this when it happened, but given how easy it is to fix and how behind I am on my work, I just don’t care.</p>Hoptoad Notifier v2 and Older Versions of Rails2009-11-13T00:00:00+00:00http://urgetopunt.com/2009/11/13/hoptoad-2-with-older-rails-versions.html<p><strong><span class="caps">UPDATE</span> 2015-02-23:</strong> Hoptoad is now <a href="http://airbrake.io">Airbrake</a>.</p>
<p><strong><span class="caps">UPDATE</span> 2009-11-18:</strong> It appears the issue <a href="http://help.hoptoadapp.com/discussions/problems/401-nomethoderror-undefined-method-to_hash-for-cgisession0x2b73dc4b9c60">has been fixed</a> (though I haven’t had a chance to try it out myself).</p>
<p>(I’m documenting this because I had a hard time googling it.)</p>
<p>With a <a href="http://robots.thoughtbot.com/post/238327967/new-hoptoad-api-and-development-error-tracking">new Hoptoad <span class="caps">API</span></a> around the corner comes a new version of the <a href="http://github.com/thoughtbot/hoptoad_notifier">hoptoad_notifier</a> plugin. I’ve been using <a href="http://hoptoadapp.com/">Hoptoad</a> for about a year, and I’ve been generally pleased with it. Though I have no particular plans for taking advantage of the <span class="caps">API</span> in the foreseeable future, I figured it would be a good idea to start upgrading from 1.2.x to 2.0.x in case there are any gotchas. If you are using Hoptoad on an application that is running a pre-2.3 version of Rails, there are gotchas.</p>
<p>Among the data the notifier sends to Hoptoad when an exception occurs is the contents of the user’s session. Older versions of the notifier tried to convert the session object to a Hash using <code>session#to_hash</code>, but if the session object didn’t respond to <code>#to_hash</code> (and it doesn’t in Rails 2.2 and earlier) it resorted to sending the <code>@data</code> instance variable contained within the session object. In 1.2.x versions of the notifier you can see this going on in <a href="http://github.com/thoughtbot/hoptoad_notifier/blob/v1.2.4/lib/hoptoad_notifier.rb#L293-295"><code>lib/hoptoad_notifier.rb</code></a>. In the newer version of the notifier, this logic has been moved to <a href="http://github.com/thoughtbot/hoptoad_notifier/blob/v2.0.2/lib/hoptoad_notifier/catcher.rb#L40"><code>lib/hoptoad_notifier/catcher.rb</code></a>. It calls <code>session#to_hash</code> blindly which raises <code>NoMethodError</code> in Rails 2.2 and earlier.</p>
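<p>The defensive version of that conversion is straightforward — something along these lines (a sketch; the method and class names are mine, not the notifier’s):</p>

```ruby
# Prefer #to_hash when the session supports it (Rails 2.3+); otherwise
# fall back to the @data instance variable that pre-2.3 sessions carry.
def session_data(session)
  if session.respond_to?(:to_hash)
    session.to_hash
  else
    session.instance_variable_get(:@data)
  end
end

# Stand-in for a pre-2.3 session object that lacks #to_hash:
class LegacySession
  def initialize(data)
    @data = data
  end
end

session_data(LegacySession.new('user_id' => 42)) # => {"user_id"=>42}
session_data('user_id' => 42)                    # => {"user_id"=>42}
```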
<p>After spending too much time figuring out what was going on, I stumbled upon <a href="http://help.hoptoadapp.com/discussions/problems/401-nomethoderror-undefined-method-to_hash-for-cgisession0x2b73dc4b9c60">a discussion</a> about this very problem. As of this writing it appears the developers know what the problem is and are working to resolve it. Until then, hold off on version 2 of the hoptoad_notifier plugin if you are on Rails 2.2 or earlier.</p>Snow Leopard, Ruby & PostgreSQL: A Cautionary Tale2009-11-12T00:00:00+00:00http://urgetopunt.com/2009/11/12/64-bit-ruby-postgres.html<p>My development machine is a Mac running Snow Leopard in 64-bit mode with <a href="http://www.postgresql.org/">PostgreSQL</a> (via <a href="http://www.macports.org/">MacPorts</a>) and <a href="http://www.rubyenterpriseedition.com/">Ruby Enterprise Edition</a> (via <a href="http://rvm.beginrescueend.com/">Ruby Version Manager</a>). I recently upgraded my <span class="caps">REE</span> installation to version 1.8.7-2009.10 only to discover that the ruby-pg gem was no longer able to establish connections to PostgreSQL. In the error message I saw the words <code>pg-0.8.0/lib/pg.bundle: no matching architecture</code>, performed a jaunty shrug (I was in a cheery mood) and charged merrily into the task of reinstalling the ruby-pg gem in 64-bit mode.</p>
<p>(At this point a clever person would have paused to consider the fact that neither PostgreSQL nor the ruby-pg gem had changed at this point. A clever person would have taken a moment to double check the way the new version of <span class="caps">REE</span> was installed before wasting close to an hour that could have been better spent in the pub. I am not a clever person.)</p>
<p>I set about reinstalling the ruby-pg gem with no configuration options only to meet with verbose failure which began with:</p>
<div class="code"><pre><code>In file included from compat.c:16:
compat.h:38:2: error: #error PostgreSQL client version too old, requires 7.3 or later.
In file included from compat.c:16:
compat.h:69: error: conflicting types for ‘PQconnectionNeedsPassword’
/opt/local/include/postgresql83/libpq-fe.h:266: error: previous declaration of ‘PQconnectionNeedsPassword’ was here
compat.h:70: error: conflicting types for ‘PQconnectionUsedPassword’
/opt/local/include/postgresql83/libpq-fe.h:267: error: previous declaration of ‘PQconnectionUsedPassword’ was here
</code></pre></div>
<p>I tried again with <code>ARCHFLAGS="-arch x86_64"</code> but to no avail. I tried <code>ARCHFLAGS="-arch i386"</code>. Skunked again. Version too old? How could that be? The only version of PostgreSQL I’d ever installed on this machine was 8.3.8. I verified the installed version and that <code>pg_config</code> was in my path:</p>
<div class="code"><pre><code>% port installed postgresql\*
The following ports are currently installed:
postgresql83 @8.3.8_0 (active)
postgresql83-server @8.3.8_0 (active)
% pg_config | grep VERSION
VERSION = PostgreSQL 8.3.8
</code></pre></div>
<p>Finally my feeble brain caught up with obvious reality. The build process was working with files from PostgreSQL 8.3.8 (see the path to <code>libpq-fe.h</code> in the error message), but it was unable to read symbols from the PostgreSQL libraries. It was trying to read a 64-bit library as if it were a 32-bit library. I took a closer look at the Ruby and PostgreSQL binaries:</p>
<div class="code"><pre><code>% rvm use ree
<i> Now using ree 1.8.7 2009.10 </i>
% file $(whence ruby)
/Users/jparker/.rvm/ree-1.8.7-2009.10/bin/ruby: Mach-O executable i386
% file $(whence pg_config)
/opt/local/lib/postgresql83/bin/pg_config: Mach-O 64-bit executable x86_64
</code></pre></div>
<p>PostgreSQL was indeed built in 64-bit mode, but <span class="caps">REE</span> had somehow been built in 32-bit mode. I rebuilt <span class="caps">REE</span> explicitly telling it to build in 64-bit mode:</p>
<div class="code"><pre><code>% export ARCHFLAGS="-arch x86_64"
% rvm install ree
</code></pre></div>
<p>After verifying the new <span class="caps">REE</span> binary was indeed 64-bit, I took another shot at rebuilding the ruby-pg gem. It worked perfectly. God’s in his heaven. All is right with the world.</p>
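<p>As an aside, you can also check a Ruby binary’s bitness from inside Ruby itself, without reaching for <code>file</code> — a quick sketch:</p>

```ruby
# Integer#size is the number of bytes in a machine word:
# 4 on a 32-bit build, 8 on a 64-bit build.
puts(1.size == 8 ? '64-bit' : '32-bit')
```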
<p>Having opened and closed windows a number of times while trying to fix the problem, I’m unable to verify the state of the environment in which I originally upgraded <span class="caps">REE</span>, but I suspect I had a tainted <code>ARCHFLAGS</code> environment variable left over from an earlier task. The moral of the story is:</p>
<ol>
<li>When building software always make sure your environment is clean.</li>
<li>Build everything 64-bit or nothing 64-bit.</li>
<li>Before wasting prime pub hours fixing something that isn’t broken (ruby-pg), consider what pieces have changed (<span class="caps">REE</span>) and investigate them first.</li>
</ol>Better Conditional Sum in Google Spreadsheet2009-11-06T00:00:00+00:00http://urgetopunt.com/googledocs/2009/11/06/google-spreadsheet-sumif.html<p>In the <a href="/googledocs/2009/05/09/google-spreadsheet-filter-sum.html">original article</a> I described using <code>SUM()</code> and <code>FILTER()</code> in a <a href="http://docs.google.com/">Google Docs</a> spreadsheet to calculate the sum over a subset of cells within a column. Turns out, there’s a better way:</p>
<div class="code"><pre><code>
=SUMIF('Worksheet name'!E:E, "food", 'Worksheet name'!D:D)
</code></pre></div>
<p>In this version, the <code>SUMIF()</code> function combines the behavior of the <code>SUM()</code> and <code>FILTER()</code> functions I was using before. The first argument is the column to be compared to the filter value, the second argument is the filter value itself (or the cell address containing the filter value) and the third argument is the column over which the sum is to be calculated. So in the above example, in column D of the worksheet named “Worksheet name” all cells for which the corresponding cell in column E contains the value “food” are selected, and the sum of those selected cells is returned.</p>
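<p>The semantics translate directly into other languages, which can help when sanity-checking a formula. In Ruby, with hypothetical data standing in for columns D and E:</p>

```ruby
# Each row is [amount (column D), category (column E)].
rows = [[100, 'food'], [250, 'rent'], [40, 'food']]

# SUMIF(E:E, "food", D:D): filter on one column, total the other.
total = rows.select { |_amount, category| category == 'food' }
            .sum { |amount, _category| amount }
total # => 140
```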
<p>In addition to being more succinct and easier to read, the use of <code>SUMIF()</code> has the added benefit of returning <code>0</code> if there are no rows matching the filter. Using <code>SUM()</code> and <code>FILTER()</code> instead returns <code>#N/A</code>.</p>Using Passenger with rvm2009-09-27T00:00:00+00:00http://urgetopunt.com/2009/09/27/passenger-with-rvm.html<p><strong><span class="caps">UPDATE</span> 2010-04-03:</strong> docgecko notes in the comments below that more recent versions of <span class="caps">RVM</span> now provide the <code>passenger_ruby</code> command for using <span class="caps">RVM</span>-installed versions of Ruby with Phusion Passenger. This article has been updated with the newer instructions, but <a href="http://rvm.beginrescueend.com/integration/passenger/">RVM’s Passenger instructions</a> are more likely to be up to date. Thanks for the update, docgecko.</p>
<p>I’ve recently started using <a href="http://rvm.beginrescueend.com/">Ruby Version Manager</a> (aka rvm) to manage multiple ruby versions on my workstation. So far I’m quite pleased with the ease of use. (My only gripe so far is that the <code>rvm</code> command is uncomfortably close to <code>rm</code> — it’s only a matter of time before I shoot myself in the foot).</p>
<p>There is one gotcha. In production I use <a href="http://www.modrails.com/">Phusion Passenger</a> in conjunction with <a href="http://www.rubyenterpriseedition.com/">Ruby Enterprise Edition</a>. I would like to use the same environment in development. However, when installing the passenger gem into an rvm-managed ruby installation, the path to the ruby binary (which you need to plug somewhere into your Apache config) will not work. It will have an incorrect gem path, and in my case, that resulted in passenger failing to spin up because it was unable to load the fastthread gem.</p>
<div class="code"><pre><code>/Users/jparker/.rvm/ruby-enterprise-1.8.6-20090610/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- fastthread (LoadError)
from /Users/jparker/.rvm/ruby-enterprise-1.8.6-20090610/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
from /Users/jparker/.rvm/gems/ruby-enterprise/1.8.6/gems/passenger-2.2.5/lib/phusion_passenger/utils.rb:28
from /Users/jparker/.rvm/gems/ruby-enterprise/1.8.6/gems/passenger-2.2.5/bin/passenger-spawn-server:53:in `require'
from /Users/jparker/.rvm/gems/ruby-enterprise/1.8.6/gems/passenger-2.2.5/bin/passenger-spawn-server:53
</code></pre></div>
<p>The solution is <a href="http://rvm.beginrescueend.com/integration/passenger/">very clearly printed</a> on the rvm web site. Instead of passing the path to the actual ruby binary to the <code>PassengerRuby</code> directive, pass in <code>RVM_ROOT/bin/passenger_ruby</code> (replacing <code>RVM_ROOT</code> with the root of your rvm installation [usually <code>$HOME/.rvm</code>]).</p>
<div class="code"><pre><code># Don't do this
PassengerRuby /Users/jparker/.rvm/ruby-enterprise-1.8.6-20090610/bin/ruby
# Do this instead
PassengerRuby /Users/jparker/.rvm/bin/passenger_ruby</code></pre></div>
<p>The latter is a shell script which execs the actual ruby binary after setting an appropriate <code>GEM_HOME</code> and <code>GEM_PATH</code> (among other things).</p>Using ERB in YAML Configuration File2009-09-12T00:00:00+00:00http://urgetopunt.com/rails/2009/09/12/yaml-config-with-erb.html<p>A while back in <a href="http://railscasts.com/episodes/85-yaml-configuration-file">Railscast #85</a> Ryan Bates demonstrated how to add a <span class="caps">YAML</span>-based configuration file to a Rails application. You start with the configuration file — say <code>RAILS_ROOT/config/app_config.yml</code> — containing your configuration data:</p>
<div class="code"><pre><code># config/app_config.yml
development:
  key1: development value 1
test:
  key1: test value 1
production:
  key1: production value 1
</code></pre></div>
<p>And then you load the file from an initializer — say <code>RAILS_ROOT/config/initializers/load_config.rb</code> — containing the following:</p>
<div class="code"><pre><code># config/initializers/load_config.rb
APP_CONFIG = YAML.load_file("#{Rails.root}/config/app_config.yml")[Rails.env]
</code></pre></div>
<p>And from then on a Hash named <code>APP_CONFIG</code> will be available throughout your application containing the configuration specific to the environment in which your application is running, i.e., development, test or production.</p>
<p>But what if you want to dynamically configure one or more values in your configuration file? Other <span class="caps">YAML</span> files loaded by Rails such as fixture files or <code>database.yml</code> are processed through <span class="caps">ERB</span> before being loaded. Wouldn’t it be nice to be able to do the same in your application configuration file?</p>
<div class="code"><pre><code># config/app_config.yml
development:
  key1: <%= # ruby code ... %>
test:
  key1: <%= # ruby code ... %>
production:
  key1: <%= # ruby code ... %>
</code></pre></div>
<p>As it is, Rails will not process those <span class="caps">ERB</span> snippets, but you can change that with one small tweak to your initializer:</p>
<div class="code"><pre><code># config/initializers/load_config.rb
APP_CONFIG = YAML.load(ERB.new(File.read("#{Rails.root}/config/app_config.yml")).result)[Rails.env]
</code></pre></div>
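The effect of the ERB-then-YAML pipeline in that one-liner can be demonstrated with a self-contained snippet (the config string here is illustrative, not from a real app):

```ruby
require 'erb'
require 'yaml'

# Stands in for File.read("#{Rails.root}/config/app_config.yml").
raw = <<~YAML
  development:
    key1: <%= 2 * 21 %>
YAML

# ERB#result evaluates the <%= %> snippet first; YAML then parses the
# expanded string, so the loaded value is the computed integer, not the
# literal ERB tag.
config = YAML.load(ERB.new(raw).result)
config['development']['key1'] # => 42
```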
<p>Now, instead of loading the file directly, <span class="caps">YAML</span> loads the string returned by <code>ERB#result</code>, which contains the contents of <code>app_config.yml</code> after the <span class="caps">ERB</span> snippets have been evaluated.</p>Launchd for Cron Jobs2009-08-30T00:00:00+00:00http://urgetopunt.com/osx/2009/08/30/launchd-for-cron-jobs.html<p>Although cron is nominally supported on OS X, the preferred alternative seems to be <a href="http://developer.apple.com/MacOsX/launchd.html">launchd</a>. It is often used to run jobs at startup (much like an init script), but it also has configuration options to produce cron-like scheduling. For more information, check out the <code>launchd(8)</code> and <code>launchd.plist(5)</code> man pages, and pay particular attention to the <code>StartCalendarInterval</code> attribute.</p>
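For reference, a minimal <code>StartCalendarInterval</code> stanza (an illustrative fragment, not the plist from my actual setup) that fires every day at 2:30 AM looks like:

```xml
<!-- Runs the job daily at 02:30; keys that are omitted act as
     wildcards, much like fields in a crontab. -->
<key>StartCalendarInterval</key>
<dict>
    <key>Hour</key>
    <integer>2</integer>
    <key>Minute</key>
    <integer>30</integer>
</dict>
```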
<p>This is a plist file I’m using to get launchd to periodically run a <a href="http://github.com/jparker/dotfiles/blob/master/bin/backup.rb">script</a> which pulls down copies of database dumps from remote servers and then uploads them to <a href="http://aws.amazon.com/s3/">S3</a>:</p>
<script src="http://gist.github.com/177658.js" type="text/javascript"></script><p>The file lives in <code>$HOME/Library/LaunchAgents/com.urgetopunt.backup.plist</code>, and it was loaded into launchd by running the following command (no root privileges necessary):</p>
<div class="code"><pre><code>$ launchctl load -w $HOME/Library/LaunchAgents/com.urgetopunt.backup.plist</code></pre></div>
<p>Crontabs are still, in my opinion, much easier to deal with than <span class="caps">XML</span> documents, but launchd does offer certain advantages for desktop platforms. If my Mac is asleep when a cron job is scheduled to run, that run is missed. If launchd determines that a run was missed because the system was asleep, it runs the job when the system wakes up. If multiple consecutive runs are missed, it only runs the job once to get caught up.</p>Fibonacci Sequence in a Hash2009-08-13T00:00:00+00:00http://urgetopunt.com/ruby/2009/08/13/fibonacci-hash.html<p>Being able to instantiate a Hash that calculates Fibonacci numbers is just another reason I like Ruby…</p>
<div class="code"><pre><code>fib = Hash.new {|h,n| h[n] = h[n-1] + h[n-2] }
fib[0] = 0
fib[1] = 1
fib[11] # => 89
fib[12] # => 144
fib[101] # => 573147844013817084101
</code></pre></div>
<p>It may not be as compact as what Perl 6 promises, but it’s a lot more legible.</p>
<div class="code"><pre><code># Based on Larry's and Damian's Perl6 talk at OSCON 2009
@fib = 0,1...&[+]
</code></pre></div>OSCON Highlights2009-07-25T00:00:00+00:00http://urgetopunt.com/oscon/2009/07/25/oscon-highlights.html<p><span class="caps">OSCON</span> is over. While there were a number of interesting sessions, the following two struck me as particularly interesting.</p>
<h3><a href="http://en.oreilly.com/oscon2009/public/schedule/detail/8062">7 Principles of <span class="caps">API</span> Design</a></h3>
<p><a href="http://en.oreilly.com/oscon2009/public/schedule/speaker/4710">Damian Conway’s</a> <span class="caps">API</span> design tutorial used Perl for the examples, but the ideas can reasonably be applied to most other languages as well. The talk was organized around Arthur C. Clark’s third <a href="http://en.wikipedia.org/wiki/Clarke's_three_laws">law of prediction</a>:</p>
<blockquote>
<p>Any sufficiently advanced technology is indistinguishable from magic.</p>
</blockquote>
<p>The idea is to develop software modules that are so clever they can be useful with a minimal interface or even no interface at all. A good example of this is Perl’s <a href="http://perldoc.perl.org/strict.html"><code>strict</code></a> pragma which does everything expected just by adding <code>use strict</code>.</p>
<p>The seven principles follow:</p>
<ol>
<li><strong>Do one thing well.</strong> Keep methods small and tightly focused.</li>
<li><strong>Design by coding.</strong> Designing APIs around expected usage leads to intuitive, easy-to-use APIs.</li>
<li><strong>Evolve by subtraction.</strong> Squash needless complexity whenever you can. Find better defaults.</li>
<li><strong>Declarative beats imperative.</strong> Let users say “what” rather than “how”.</li>
<li><strong>Preserve the metadata.</strong> If a module knows something useful at one point, it should remember that information later on.</li>
<li><strong>Leverage the familiar.</strong> The easiest interface to understand and use is one you are already using.</li>
<li><strong>Best code is no code at all.</strong> Modules that can be used with very little conscious effort can be a delight to use.</li>
</ol>
<h3><a href="http://en.oreilly.com/oscon2009/public/schedule/detail/8856">The <span class="caps">HTML</span> 5 Experiment</a></h3>
<p><a href="http://en.oreilly.com/oscon2009/public/schedule/speaker/49932">Bruce Lawson</a> discussed the future with <span class="caps">HTML</span> 5. All of the major browser players — Microsoft (IE), Mozilla (Firefox), Apple (Safari), Opera (Opera) and Google (Chrome) — support some subset of the features of <span class="caps">HTML</span> 5 already, so you can start playing with it today. <span class="caps">HTML</span> 5 is (mostly) a superset of <span class="caps">HTML</span> 4, which means <span class="caps">HTML</span> 5 pages will usually degrade with some semblance of grace. Because it is so common for page structures to include distinct areas for headers (not to be confused with <code>HEAD</code>), footers and navigation menus, <span class="caps">HTML</span> 5 has <code>header</code>, <code>footer</code> and <code>nav</code> tags. Unlike a <code>div</code> with a specific id attribute, these tags have semantic meaning that can be used by page readers. Forms in <span class="caps">HTML</span> 5 have a whole slew of features that previously could only be achieved with a lot of Javascript. Among these features are common validations (required fields, valid email formats, valid numbers, etc.), an <code>autofocus</code> attribute and a calendar picker.</p>
<p>Kurt von Finck and the “hacker business model”. What if, instead of specifically asking employees to put in 7.5 hours a day, you asked them to put in 75 hours every two weeks? It’s an interesting idea. While many programmers do their best work with fixed-width, evenly-spaced work shifts, others thrive on binge coding. Perpetual binging is unmaintainable, but with nice, long breaks between sprints, it can be quite productive.</p>
<p>J. Chris Anderson presented <a href="http://opensourcebridge.org/sessions/109">Deploying to the Edge from CouchDB</a>. A good overview of <a href="http://couchdb.apache.org/">CouchDB</a> which seems to get more and more attention these days. CouchDB is not a traditional relational database but rather a document-based database. It’s written in <a href="http://www.erlang.org/">Erlang</a> and communication is based on <span class="caps">HTTP</span> and <span class="caps">JSON</span>. Among its most appealing features is its capacity for distributed and even embedded operation. It has enticing potential for offline applications.</p>
<p>Scott Becker spoke about <a href="http://opensourcebridge.org/sessions/139">Agile Javascript Testing</a>. Informative presentation on a topic on which I really need to get up to speed. He demonstrated <a href="http://github.com/nathansobo/screw-unit/tree/master">Screw.Unit</a>, a <span class="caps">BDD</span> framework inspired by <a href="http://rspec.info/">RSpec</a>. He also demonstrated <a href="http://github.com/relevance/blue-ridge/tree/master">Blue Ridge</a>, a Rails plugin which combines several Javascript-testing tools (including Screw.Unit) and paves the way for integrating Javascript tests into the rest of the test suite for Rails applications.</p>
<p>There were some promising sessions in the afternoon schedule, but, tragically work beckoned and I spent the better part of the afternoon on the computer.</p>The Importance of Being Specific2009-06-05T00:00:00+00:00http://urgetopunt.com/autotest/2009/06/05/importance-of-being-specific.html<p>Do not do this in your <code>.autotest</code> file.</p>
<div class="code"><pre><code>Autotest.add_hook :initialize do |at|
at.add_exception 'vendor'
nil
end
</code></pre></div>
<p>Instead, be specific.</p>
<div class="code"><pre><code>Autotest.add_hook :initialize do |at|
at.add_exception %r{^vendor/}
nil
end
</code></pre></div>
<p>One of the benefits of <a href="http://www.zenspider.com/ZSS/Products/ZenTest/">autotest</a> is that it can save you time. It’s not saving you time when you spend 15 minutes trying to figure out why it isn’t running any of the tests from <code>vendor_controller_test.rb</code>. Be good to your tools, and they will be good to you. Tell them <strong>exactly</strong> what it is you’d like to do, and they will be happy to oblige. Read the <a href="http://zentest.rubyforge.org/ZenTest">documentation</a> about a feature when you use it — then maybe you’ll realize that a <a href="http://zentest.rubyforge.org/ZenTest/Autotest.html#M000043">method</a> actually expects a Regexp as an argument.</p>
<p>If you don’t read the documentation, you risk being a tool… Like me.</p>Refactoring Views for Clarity2009-05-28T00:00:00+00:00http://urgetopunt.com/views/2009/05/28/refactoring-views-for-clarity.html<p>I recently found myself doing some work on the views of an application I developed more than 18 months ago. As seems so often the case when looking at something I wrote long ago, I found myself somewhat dissatisfied with the way some of the code had been written. The partials I was looking at were <span class="caps">DRY</span> but long. In Ruby I’ve learned to extoll the virtues of shorter, more focused methods, and in Rails, by extension, I think these same virtues apply to smaller, more focused views.</p>
<p>The controller I was working on handled the usual <a href="http://api.rubyonrails.org/classes/ActionController/Resources.html#M000501">RESTful actions</a> for a resource named <code>billable</code>, but in addition to the usual suspects, the billable resource also had several custom collection actions named <code>day</code>, <code>week</code> and <code>month</code>. As you might guess from their names, they returned collections of billables by day, week and month, respectively. The base views were simple — they displayed a title and rendered a <code>_billables</code> partial which displayed a table summarizing the collection. The <code>_billables</code> partial was my primary concern as it had grown too long. (See the original code <a href="#before">below</a>).</p>
<p>I decided to address the problem by breaking the <code>_billables</code> partial up. I created <code>_head</code> and <code>_foot</code> partials to display the header and footer sections of the table, and I created a <code>_billable</code> (singular) partial which displayed a row for individual billables. I opted to shed some of the original approach’s DRYness by replacing the one call to <code>#render</code> in each view with two calls for the <code>_head</code> and <code>_foot</code> partials and a third call for the <code>_billable</code> partial for the entire collection. (See these changes <a href="#after">below</a>).</p>
<p>While I am not necessarily wed to the details of this new approach, I do think the result is a net gain. There are more files, more lines of code and there’s more duplication. But I believe everything is clearer. I also believe the duplication I’ve added is unlikely to add maintenance headaches — it’s hard to imagine it changing in any way that wouldn’t require significant updates with or without the duplication. (As long as I am using a table to display the collection, I will have a header, footer and main body).</p>
<p><a name="before"></a>These are the partials I started with:</p>
<script src="http://gist.github.com/119137.js"></script><p><a name="after"></a>These are the partials I ended with:</p>
<script src="http://gist.github.com/119140.js"></script>Conditional Sum in Google Spreadsheet2009-05-09T00:00:00+00:00http://urgetopunt.com/googledocs/2009/05/09/google-spreadsheet-filter-sum.html<p>It’s the simplest things I always forget. Hopefully I’ll remember this article next time. When traveling to a conference I record and categorize my expenses in a <a href="http://docs.google.com/">Google Docs</a> spreadsheet, usually creating a new worksheet for each trip. I have a separate worksheet which contains a cumulative summary of each category of expense. This is the formula I use to keep the running total for specific categories.</p>
<div class="code"><pre><code>
=SUM(FILTER('Worksheet name'!D2:D; 'Worksheet name'!E2:E="food"))
</code></pre></div>
<p>The <span class="caps">SUM</span> function is self-explanatory. The <span class="caps">FILTER</span> function selects all cells from a given range which match particular conditions. The range of cells to choose from is the first argument. Subsequent arguments describe the conditions. In the above example we select all cells from column D from row 2 onwards for which the value of the corresponding cell in column E equals “food”.</p>
<p><strong><span class="caps">UPDATE</span></strong>: As is so often the case when I learn something, there is a <a href="/googledocs/2009/11/06/google-spreadsheet-sumif.html">better way</a>.</p>RailsConf Day Four2009-05-07T00:00:00+00:00http://urgetopunt.com/railsconf/railsconf09/2009/05/07/rails-conf-day-four.html<p>I attended <a href="http://en.oreilly.com/rails2009/public/schedule/detail/8739">HTTP’s Best-Kept Secret: Caching</a> with Ryan Tomayko for the early morning session. Ryan gave a brief overview of different types of <span class="caps">HTTP</span> caching (client, shared proxy and gateway), eventually focusing on the server-side gateway caching. This form of caching still sends data over the wire, but allows you to avoid hitting Rails entirely, or else — using something like <a href="http://en.wikipedia.org/wiki/HTTP_ETag">ETags</a> — allows you to minimize the amount of work done within Rails to generate the content. It sounds like <span class="caps">HTTP</span> caching in its current form is still a bit awkward to handle when the content of the response varies based on session state, e.g., whether or not the user is logged in.</p>
<p>Mid morning session found me at <a href="http://en.oreilly.com/rails2009/public/schedule/detail/7485">When to Tell Your Kids About Presentation Caching</a> with Matthew Deiters. This covered some of the same material as the previous caching talk, but Matthew focused more on minimizing the amount of data sent over the wire. In addition to client-side caching, he covered some general tips on reducing the size of the server responses. Tips included reducing the number of resources (e.g., use Rails’ asset caching functionality to condense multiple javascript/<span class="caps">CSS</span> files into fewer [but larger] files) and reducing the size of resources through minification (obfuscation) and compression (e.g., Apache’s mod_deflate).</p>
<p>My late morning session was <a href="http://en.oreilly.com/rails2009/public/schedule/detail/6967">It’s Not Always Sunny in the Clouds: Lessons Learned</a> with Mike Subelsky. Mike described some of the surprises and challenges he encountered over the past year working with <a href="http://aws.amazon.com/ec2/">Amazon EC2</a>. He’s still a fan of the power and convenience introduced by cloud computing, but he’s developed a healthy respect for the complications and expenses it introduces over the old tangible, colocated server route. Turnkey provisioning rocks, but it involves a lot of work.</p>RailsConf Day Three2009-05-06T00:00:00+00:00http://urgetopunt.com/railsconf/railsconf09/2009/05/06/rails-conf-day-three.html<p>My early morning session was <a href="http://en.oreilly.com/rails2009/public/schedule/detail/7935">Using metric_fu to Make Your Rails Code Better</a> with Jake Scruggs. Aside from <a href="http://eigenclass.org/hiki.rb?rcov">rcov</a> and the stats rake task in Rails I haven’t yet spent much time studying code metrics. Jake touched on a number of different tools available today, each of which looked quite interesting and each of which is (or can be) used within <a href="http://metric-fu.rubyforge.org/">metric_fu</a>. The metric_fu rake tasks generate reports which can provide useful information on where you might need to focus your refactoring efforts by identifying potential problem spots like overly complex methods and repetitious code. I’ve seen the announcements about metric_fu, but hadn’t taken a close look at it until this session. I find I’m eager to try it out but dreading what I will discover.</p>
<p>In the late morning I attended <a href="http://en.oreilly.com/rails2009/public/schedule/detail/7591">Are You Taking Things Too Far?</a> with Michael Koziarski. The basic message of this session is pretty easy to extract from the title, and it’s one that I’ve heard in several sessions this week — don’t be overly dogmatic. Rails has introduced plenty of conventions that, in general, make our jobs easier, but it’s important to remember that, sometimes, the conventional way may not be the best way. Sometimes fiercely sticking to convention can mean writing more code than you really need that requires more maintenance than you have time for and which expresses your intent less obviously than a simpler, less conventional approach. Conventions are good, but sometimes you need to be willing to break with them.</p>
<p>Early afternoon session probably would have been interesting, but I had to catch up with work (coffee in Las Vegas isn’t cheap).</p>
<p>For the mid afternoon session I attended <a href="http://en.oreilly.com/rails2009/public/schedule/detail/7847">Working Effectively with Legacy Rails Code</a> with Pat Maddox and BJ Clark. This was a particularly appealing talk, as it’s a challenge I’ve spent a lot of time wrestling with recently — taming Rails code I wrote before I’d learned many of Rails’ most helpful features. One of their suggestions was to keep an eye out for occasions where complex, semi-redundant code can be abstracted into a “mini framework” that you can mix-in as needed, and, as a corollary, keep an eye out for occasions where convoluted code may be unnecessary because Rails already provides a facility to do what needs to be done. They also announced their <a href="http://refactorsquad.com/">new blog</a>.</p>
<p>Late afternoon session <a href="http://en.oreilly.com/rails2009/public/schedule/detail/8519">%w(map reduce).first — A Tale About Rabbits, Latency and Slim Crontabs</a> with Paolo Negri. Paulo gave an interesting overview of how and why to use <a href="http://www.rabbitmq.com/">RabbitMQ</a> from Ruby. This is one of those times where the topic is fascinating, but I really don’t have any immediate application for it.</p>
<p>The <a href="http://en.oreilly.com/rails2009/public/schedule/detail/8482">evening keynote</a> with Bob Martin was a particular treat and one of the most entertaining I’ve attended in years. The value of testing was the core of the talk. He presented the three rules of true test-driven development:</p>
<ol>
<li>You are not allowed to write a single line of production code until you have written a failing test.</li>
<li>You are not allowed to write a single line of additional test code once you have a failing test.</li>
<li>You are not allowed to write a single line of production code beyond what is needed to make the failing test pass.</li>
</ol>
<p>Following those rules to the letter requires extraordinary discipline, but he asked us to think what our development world would be like if we followed through. Tests eliminate fear and risk — if you have the tests, you can refactor the production code with confidence. Ruby (and Rails) can spare itself the fate of Smalltalk (which he said basically died of hubris) by three things:</p>
<ol>
<li>Professional discipline — specifically, writing well-tested code.</li>
<li>Professional humility — not encouraging an adversarial relationship with other languages.</li>
<li>Professional responsibility — not shying away from fixing dirty coding problems when they are discovered.</li>
</ol>
<p>“Uncle Bob” is a marvelous presenter and showman in general, but one comment near the end struck me as particularly funny:</p>
<blockquote>
<p>[When Smalltalk died, its programmers] had to start writing Java, and it nearly killed them.</p>
</blockquote>RailsConf Day Two2009-05-05T00:00:00+00:00http://urgetopunt.com/railsconf/railsconf09/2009/05/05/rails-conf-day-two.html<p>Early morning session was <a href="http://en.oreilly.com/rails2009/public/schedule/detail/8497">The Even Darker Art of Rails Engines</a> by James Adam. Good overview of Rails Engines as they are implemented in Rails 2.3, including caveats of some of the shortcomings. One issue is that when application routes and engine routes collide, the engine route takes precedence. James considers this a <a href="https://rails.lighthouseapp.com/projects/8994/tickets/2592-plugin-routes-override-application-routes">bug</a> (I’m inclined to agree). Another gotcha is migrations. Migrations in an engine are not visible unless they are copied into the top-level migrations directory, but this is tricky because the version number/timestamp on a migration in the application may collide with that of a migration in the engine. Finally, public assets that are bundled with an engine must be copied into the application’s public directory in order to be visible to the web server. Overall, engines have a lot of potential as well as room for improvement. As things stand, they are reasonably easy to deal with across in-house applications, but it becomes much more complicated when they are published by third parties. It’s also worth noting that Rails 3 promises to have mountable applications based on Rails Engines and Merb Slices.</p>
<p>Late morning session was <a href="http://en.oreilly.com/rails2009/public/schedule/detail/8004">In Praise of Non-Fixtured Data</a> by Kevin Barnes. It was interesting to see some of the options for avoiding fixtures that have arisen although for day-to-day work I have already switched over to <a href="http://thoughtbot.com/projects/factory_girl">Factory Girl</a>. I doubt it’s rational, but the idea of data generators being added directly to the model class bothers me. That’s one of the reasons I stick with Factory Girl.</p>
<p>Early afternoon session was “The Future of Deployment: A Killer Panel” with Marc-André Cournoyer, Christian Neukirchen, Ryan Tomayko, Adam Wiggins, Blake Mizerany and James Lindenbaum. The panel members represented different layers of the deployment stack including <a href="http://code.macournoyer.com/thin/">Thin</a>, <a href="http://rack.rubyforge.org/">Rack</a>, <a href="http://tomayko.com/src/rack-cache/">Rack::Cache</a>, <a href="http://www.sinatrarb.com/">Sinatra</a> and <a href="http://www.heroku.com/">Heroku</a>. Discussion centered around how the different components came into being, how they’ve come to interact and how they might need to develop for the future. If I spent more time working with complex deployment issues, I probably would have gotten more out of this session, but as it is my needs are quite simple.</p>
<p>Mid afternoon session was <a href="http://en.oreilly.com/rails2009/public/schedule/detail/7035">I Rock, I Suck, I Am — Jumpstart Your Journey to Agile</a> by Davis W. Frank. Davis went over some practices and guidelines that helped him adapt to the Agile workflow. Sometimes it seems that Agile pays off the same way that <span class="caps">REST</span> does — it’s a simple, well-structured way to break down complex tasks.</p>
<p>Late afternoon session was Obie Fernandez’s <a href="http://en.oreilly.com/rails2009/public/schedule/detail/7721">Blood, Sweat and Rails</a>. An interesting talk about some of the perils of launching a formal consultancy. Dealing with contract lawyers and keeping up with collections are serious drawbacks.</p>RailsConf Day One2009-05-04T00:00:00+00:00http://urgetopunt.com/railsconf/railsconf09/2009/05/04/rails-conf-day-one.html<p>The tutorial day for <a href="http://www.railsconf.com/">RailsConf 2009</a> has drawn to a close. I spent the morning in <a href="http://en.oreilly.com/rails2009/public/schedule/detail/7763">Running the Show: Configuration Management with Chef</a> by Edd Dumbill. The first half of the tutorial was a bit awkward when the live demo failed to get off the ground. While I don’t feel I walked away with any concrete training, I did come away with enough appreciation of <a href="http://wiki.opscode.com/display/chef/Home">Chef</a> that I’m looking forward to learning and using it to rein in the increasing number of nearly identical systems I find myself managing.</p>
<p>In the afternoon I attended Joe O’Brien’s and Jim Weirich’s <a href="http://en.oreilly.com/rails2009/public/schedule/detail/7786">Testing, Design and Refactoring</a>. The first half of the tutorial consisted of presentations by both Jim and Joe. Part of Jim’s talk included the material from his “Writing Modular Code” talk from <a href="http://scotlandonrails.com/">Scotland on Rails</a>. Joe’s portion covered some of the general methods of <a href="http://refactoring.com/">refactoring</a> and why you might use them. One concept that struck a chord with me was the concept (or goal) of simplicity. Simple code has the following characteristics:</p>
<ol>
<li>Passes tests, i.e., the code works</li>
<li>No duplication</li>
<li>Clearly expresses intent</li>
<li>Minimal number of classes and methods</li>
</ol>
<p>And those characteristics are in order of importance. If the code doesn’t work, it doesn’t matter whether or not it’s clear. (I’m a little fuzzy about the order of duplication and clarity in that list though. If eliminating duplication results in loss of clarity, that may not be a net win.)</p>Scotland on Rails 20092009-04-01T00:00:00+00:00http://urgetopunt.com/2009/04/01/scotland-on-rails.html<p><a href="http://scotlandonrails.com/">Scotland on Rails 2009</a> was held at the University of Edinburgh 26-28 March. I confess that when I registered and made travel plans I regarded the conference as a good excuse to visit Scotland but not much more. It ended up being one of the finest conferences I have attended in a while. Good assortment of sessions, fascinating speakers, a good venue and superlative organization made the conference as valuable an experience as it was enjoyable. The conference organizers deserve a lot of credit for the job they did.</p>
<p>For me, the highlight sessions included:</p>
<ul>
<li>Marcel Molina’s opening keynote — fascinating and humorous insight into the early development of Rails.</li>
<li>Jim Weirich’s session “Building Blocks of Modularity” — turned out to actually be a thought provoking “Grand Unified Theory of Software Development”.</li>
<li>Joe O’Brien’s and Jim Weirich’s “Ruby Code Review” — shined a light on taking simple steps to rein in untested code (and earned a good many guffaws in the process).</li>
</ul>
<p>But, yeah, it was still a good excuse to visit <a href="http://picasaweb.google.com/johncparker/ScotlandIreland2009?feat=directlink">Scotland</a>.</p>Connascence and Software Development2009-03-27T00:00:00+00:00http://urgetopunt.com/2009/03/27/sor-connascence.html<p>This morning at <a href="http://scotlandonrails.com/">Scotland on Rails</a> <a href="http://onestepback.org/">Jim Weirich</a> gave a fascinating talk covering a software development pattern called <em>connascence</em>. His slides are <a href="http://github.com/jimweirich/presentation">available on Github</a>.</p>
<p>Connascence describes a kind of coupling within your code. Jim covered several forms:</p>
<ul>
<li>Connascence of Name (CoN)</li>
<li>Connascence of Position (CoP)</li>
<li>Connascence of Meaning (CoM)</li>
<li>Contranascence (CN)</li>
<li>Connascence of Algorithm (CoA)</li>
<li>Connascence of Timing (CoT)</li>
</ul>
<p>In addition to the various forms of connascence he described two rules for dealing with connascence in your software.</p>
<ol>
<li>Rule of Locality — As the distance between software elements increases, use weaker forms of connascence.</li>
<li>Rule of Degree — Convert high degrees of connascence into weaker forms of connascence.</li>
</ol>
<p>His slides have good examples of each of these forms, but consider a quick example. If a method has several — say 4 or 5 — positional parameters, it has a high degree of connascence of position. Over time, if parameters have to be added, removed or just reordered, this code will prove brittle because any code that calls the method must also be updated. The situation can be improved by replacing the positional parameters with a single hash parameter. The hash keys introduce connascence of name but eliminate the concern over position, somewhat decoupling the method definition from the code that calls it.</p>
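<p>A minimal Ruby sketch of that refactoring (the method and field names here are illustrative, not taken from Jim’s slides):</p>

```ruby
# High connascence of position: every caller must remember the exact
# argument order, so adding or reordering parameters breaks them all.
def create_user(name, email, admin, active)
  { name: name, email: email, admin: admin, active: active }
end

create_user('Ada', 'ada@example.com', true, false)

# Weaker coupling: a single options hash trades connascence of position
# for connascence of name -- callers depend on key names, not order.
def create_user_opts(opts)
  { name: opts[:name], email: opts[:email],
    admin: opts[:admin], active: opts[:active] }
end

# Arguments may now appear in any order.
create_user_opts(email: 'ada@example.com', active: false, admin: true, name: 'Ada')
```

<p>Per the Rule of Degree, the second form also degrades more gracefully: adding a new option later leaves existing call sites untouched.</p>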
<p>I haven’t yet attended a talk by Jim that wasn’t enlightening (I got started with Ruby because of his “Ruby for Java Programmers” talk at <span class="caps">OSCON</span> 2005), and this talk was no different. Seeing these patterns isolated and explained helps me with the ongoing goal of achieving greater discipline as a software developer. Thanks, Jim.</p>Hidden Files and the Finder2009-03-23T00:00:00+00:00http://urgetopunt.com/2009/03/23/hidden-files-finder.html<p>If you are working on a Mac and have an unfortunate run-in with a mistyped <code>rm -r</code> command, you may find yourself in need of restoring files, including those elusive dot files which don’t show up in the OS X Finder by default. If you’ve been using <a href="http://www.apple.com/macosx/features/timemachine.html">Time Machine</a> to backup your data, doing restores is easy, but before you can restore hidden files, you have to tell the Finder to display them.</p>
<p>The following commands will do just that:</p>
<div class="code"><pre><code>$ defaults write com.apple.finder AppleShowAllFiles TRUE
$ killall Finder
</code></pre></div>
<p>After the Finder has restarted, you can go into Time Machine and restore hidden files to your heart’s content. When you’re done, if you want to turn display of hidden files back off (they do clutter up Finder windows and the Desktop), the reverse is intuitive:</p>
<div class="code"><pre><code>$ defaults write com.apple.finder AppleShowAllFiles FALSE
$ killall Finder
</code></pre></div>
<p>Voilà.</p>
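<p>If you flip this setting often, the two steps can be wrapped in a small toggle script. This is just a sketch: it assumes a POSIX shell and guards on the <code>defaults</code> command so it does nothing on non-macOS systems.</p>

```shell
#!/bin/sh
# Toggle the Finder's display of hidden files (macOS only).

next_value() {
  # Flip the stored TRUE/FALSE flag; treat anything else as FALSE.
  if [ "$1" = "TRUE" ]; then echo FALSE; else echo TRUE; fi
}

if command -v defaults >/dev/null 2>&1; then
  # Read the current setting, defaulting to FALSE if it has never been set.
  current=$(defaults read com.apple.finder AppleShowAllFiles 2>/dev/null || echo FALSE)
  defaults write com.apple.finder AppleShowAllFiles "$(next_value "$current")"
  killall Finder
fi
```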