Sass for the Non-Ruby User

Posted by: TomS on November 17, 2012 @ 10:05 am

A while back, I had heard about Sass (Syntactically Awesome Style Sheets), and I was intrigued by the ability to apply higher-level re-use concepts to CSS.  However, when I checked out their website and saw the dependency on Ruby, I was a bit turned off.  I don’t often use Ruby, and I wasn’t quite ready to plod through the install just for some efficiencies in CSS authoring.  And thus, Sass was quickly forgotten.

More recently, I attended Barcamp Philly 2012, and saw John Riviello‘s talk on using Sass, Compass, and Lint.  My curiosity returned, and now with a free weekend, I wanted to document the process of getting Sass working with only minimal messing around with Ruby.  Let’s call this Sass for the Non-Ruby User.

As a side-note, I was running Ubuntu when I did this, but the instructions are somewhat OS independent and should work fine anywhere you can run bash.

Step 1: Install Ruby

Really, there’s no other way around this.  If you want to use Sass, you have to install Ruby.  It’s not required as a run-time dependency, but at the very least you will need it at build time to parse your Sass and generate the resulting CSS files.

I’ve got nothing against Ruby; I just don’t use it often, so my goal here is to get Ruby installed and working with only minimal steps.  Going to an OS’s package management solution is always an option (yum, rpm, apt, etc.), but I decided to check out RVM, the Ruby Version Manager.  One of the nice things about this approach is that it installs Ruby only in your home directory.  It’s not a system-wide install, so for trying things out, it’s pretty localized.  It also allows you to manage having multiple versions of Ruby installed.

To get started, you download the rvm installer.  The installer will unpack rvm to your ~/.rvm directory and also configure your shell profiles to load a number of rvm commands into your shell.  You will need to either relaunch your shell or source the rvm script that is added at ~/.rvm/scripts/rvm.  I use the GNOME terminal, so I also had to configure the terminal to “Run command as login shell” as described here.

The simple script below will download and run the installer.

curl -L | bash -s stable

The first thing you’ll want to do is run the following command.  It will list any additional dependencies that are required on your operating system to use Ruby with rvm.

rvm requirements

Read the output of this and run any necessary commands.  For me on Ubuntu I had to run:

sudo apt-get install build-essential openssl libreadline6 libreadline6-dev curl git-core zlib1g zlib1g-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt-dev autoconf libc6-dev ncurses-dev automake libtool bison subversion pkg-config

Next you’ll want to install a version of Ruby.  As of the time I wrote this post, 1.9.3 was the latest stable build.

rvm install 1.9.3

This takes a while, but this one line will download and compile Ruby 1.9.3.

Next up, specify that you want to use the Ruby version you just installed.

rvm use 1.9.3

This will adjust the path of your shell to use the Ruby version specified.  You should see a line stating: Using /home/tom/.rvm/gems/ruby-1.9.3-p327.

You can also verify that this worked by running

which ruby


You will need to re-run the rvm use command each time you launch your shell, or you may add it to your shell profile to be run at startup.
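If you go the profile route, a minimal sketch is below, assuming rvm was installed to the default ~/.rvm location.  Append these lines to ~/.bash_profile or ~/.bashrc (whichever your shell actually reads):

```shell
# Load the rvm shell functions (the installer adds a similar line itself)
source "$HOME/.rvm/scripts/rvm"
# Select Ruby 1.9.3 on every new shell, silencing the confirmation message
rvm use 1.9.3 > /dev/null
```

With that in place, `which ruby` should point into ~/.rvm in every new terminal.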

Step 2: Install Sass

This is where it gets pretty easy.  Ruby has a gem system, similar to the package managers available for most operating systems.  To install Sass, just run

gem install sass
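You can confirm the install worked by asking the new commands for their versions:

```shell
# Both commands come from the sass gem; either failing suggests
# the gem did not install or your shell is using the wrong Ruby
gem list sass
sass --version
```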

Step 3: Use Sass in Watch Mode (for development)

Browsers can’t read Sass files, but Sass will do the job of converting these files into plain old CSS files for you.  Running a command every time you want this done can be a drag during development, though.  Sass has a ‘watch’ mode that watches a directory of Sass files for changes and converts them to CSS files in another directory.  Follow the steps below to set up a quick example of this.

#make a directory for the sass files to live in
mkdir scss
#make a directory for the css files to live in
mkdir css
#start up sass in watch mode, looking for changes in sass and outputting to css
sass --watch scss:css

Now, in another terminal window or editor, create example.scss in the scss directory.  Use the example content below.

/* example.scss */
#navbar {
  width: 80%;
  height: 23px;

  ul { list-style-type: none; }
  li {
    float: left;
    a { font-weight: bold; }
  }
}

After saving the file, you should see the plain CSS file example.css in your css directory, with the content below.

/* example.scss */
#navbar {
  width: 80%;
  height: 23px; }
  #navbar ul {
    list-style-type: none; }
  #navbar li {
    float: left; }
    #navbar li a {
      font-weight: bold; }
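
Nesting is only one of Sass’s re-use features.  Variables and mixins are two more that speak to the higher-level re-use the language promises.  Here’s a small illustrative sketch; the selector names, colors, and mixin are my own inventions, not anything from the example above:

```scss
/* Variables hold values you want to define once and reuse */
$primary-color: #336699;
$navbar-height: 23px;

/* A mixin bundles up declarations (with optional arguments)
   so they can be included in any rule */
@mixin rounded($radius: 4px) {
  -webkit-border-radius: $radius;
  -moz-border-radius: $radius;
  border-radius: $radius;
}

#navbar {
  height: $navbar-height;
  background-color: $primary-color;
  @include rounded(6px);
}
```

Change $primary-color in one place, and every rule that references it updates on the next compile.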


Step 4: Integrate Sass into your Build

It’s probably not the best idea to run the watch command in a production environment, though.  For build and deploy purposes, it’s best to run Sass once on your final scss files to create the CSS files and deploy them.  The command is nearly identical to the watch command above; just omit the watch option and add an update option.
sass --update scss:css

And that is pretty much it. Just add this command into your build tool of choice, and you’ve got Sass up and running.
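As an aside, for production builds you may also want to experiment with Sass’s --style flag, which controls the formatting of the generated CSS:

```shell
# Compile once with minified output; other accepted styles are
# nested (the default), compact, and expanded
sass --update --style compressed scss:css
```

Compressed output strips whitespace and comments, which is usually what you want for deployed stylesheets.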

Installing Nexus on Ubuntu 11.10 and Tomcat 6 Part 2

Posted by: TomS on April 15, 2012 @ 8:58 pm

In Part 1 of this post, I covered performing a basic installation of Nexus in a Tomcat servlet container on Ubuntu 11.10.  Now I’m going to cover the details of configuring Nexus.

Nexus is a repository that can be used with tools such as Maven to index and store artifacts for dependency management across projects.  I like to use the Nexus/Maven combo for three main reasons:

  1. Storing Versions of My Own Artifacts – I try to do all my Java builds in Maven these days.  This means that all my projects always have a standard lifecycle and release procedure.  I can commit my code to SVN, but it’s also nice to have a repository to store my compiled artifacts.  Nexus fills this role, allowing me to deploy all versioned projects to a central repository accessible from all of my machines.  Additionally, the repository makes all these versioned artifacts available as dependencies for any of the other projects I work on.
  2. Proxying Other Repositories – When you work with Java Maven projects, you’ll get pretty used to Maven “downloading the internet” the first time you build a particular project on a machine.  This is Maven fetching all the dependencies and transitive dependencies required to build your project.  It might take a little while, but it’s far better than trying to manage all those dependencies yourself.  This step often depends on external repositories outside of my control, however.  The external repositories may go down (though rarely), and it is common for these repositories to eventually remove old versions of artifacts or reorganize their layouts.  Proxying a repository means that my Maven builds will always fetch artifacts from my Nexus repository first.  If Nexus can’t find the artifact, it will download it from the external repository, but it will store it locally so future requests do not depend on the external repository.  This means that the artifacts I use never go away, and the process is much faster, since in most cases I only need to retrieve artifacts from my own repository.
  3. Hosting 3rd Party Artifacts – There are times when I use a jar that isn’t readily available in a Maven repository somewhere.  I try to avoid just adding those jars to SVN.  Instead, I’ll upload the artifacts to Nexus, and then I can access them as needed via standard Maven builds.
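
To route builds through Nexus, Maven needs a mirror entry in ~/.m2/settings.xml.  A minimal sketch is below; the hostname is a placeholder, and the path assumes the default “public” group that Nexus of this era exposed, which aggregates the hosted and proxy repositories:

```xml
<!-- ~/.m2/settings.xml — nexus.example.local is a placeholder hostname -->
<settings>
  <mirrors>
    <mirror>
      <id>nexus</id>
      <!-- Send every repository request through Nexus -->
      <mirrorOf>*</mirrorOf>
      <url>http://nexus.example.local/nexus/content/groups/public</url>
    </mirror>
  </mirrors>
</settings>
```

With the mirror in place, every dependency request flows through Nexus, so the proxying described above happens transparently.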


Installing Nexus on Ubuntu 11.10 and Tomcat 6 Part 1

Posted by: TomS on April 10, 2012 @ 7:37 am

I’ve used Maven and Nexus at work quite extensively, and since I’ve become familiar with the tooling, I’ll never go back.  When using a mature language like Java that has an expansive ecosystem, dependency management is a must.  Maven has its rough edges, but it does get the job done, and I’ve always been pleasantly surprised with how easy it is to use Nexus and how little maintenance it really requires.

I’m planning on doing a bit more Java development at home, and I’d like to have my own Nexus repository hosted on my internal network.  Part 2 of this article will cover configuring Nexus and the benefits of using it.  In this part of the article, I’ll walk through the steps for setting up and configuring Nexus from a base Ubuntu 11.10 Server installation.

For my purposes, I would like the end result of my installation to be that I can go to a DNS name on my internal network and interact with Nexus, so I’ll be doing some additional configuration with Apache Web Server to make this happen.

Creating a Digital Identity with OpenID and WordPress

Posted by: TomS on April 16, 2011 @ 4:05 pm

OpenID is an open standard for a distributed system that allows users to authenticate with a single identifier on sites across the internet.  For a while now, OpenID has promised to become the tool that lets internet users log in to all sites using a single account, and recently, with many of the big web companies (Google, AOL, Yahoo, MyOpenID) becoming OpenID providers, and many smaller sites starting to support OpenID authentication, OpenID is coming into its own.  Yes, it’s still fragmented, and yes, there are many sites that still don’t use it, but things are getting better, and for me, there’s enough value in it now that I want to use my blog as my OpenID for my internet persona.

I have a couple of unique requirements for what I’m trying to do, so let me set that up first.  I have a public online persona that I use for this blog and other sites related to running and technology.  I have no illusions of privacy.  I am sure anyone who is determined enough can find out plenty of personal information from my activity, but in general, most people who come to this site are looking for the content I post.  I would rather not broadcast my personal information to all those people, so I try to keep my public online accounts separate from my personal ones.

That being said, it’s a pain to manage multiple logins and passwords, log in and out of sites, and so on.  OpenID can be really useful for this task, and that’s what I’m trying to do: use my WordPress blog as my digital identity for public web activity and convert as many accounts over to it as possible.  BUT, I don’t always want to remember a second password for my public persona, so I’d still like to be able to log in with my private personal OpenID, without broadcasting it to the world.

So here are my requirements:

  • Set up my blog as an OpenID provider.
  • When authenticating at my blog, be able to log in using OpenID authentication from another provider (i.e., the OpenID I use for my personal activity).
    • I don’t necessarily want to do OpenID delegation here, since it will publicly broadcast my other OpenID.
  • Be able to manage my user account settings on my blog, so I can switch between other OpenID providers I use to authenticate.
    • This gives me portability in the future if I decide to switch OpenID providers.

As usual, WordPress already has all the tools I need available in its extensive plugin library.  Here are the steps I followed to get this up and running.

IBM’s Watson takes on Jeopardy Champions

Posted by: TomS on February 6, 2011 @ 11:58 am

You may have seen a few commercials for it recently, but next week, from Feb. 14 to Feb. 16, Jeopardy! will be airing a face-off pitting all-time greats Ken Jennings and Brad Rutter against a research computer built by IBM called Watson.  I make no claim to be an expert, but from some brief exposure in college, I have a general feel for how hard it is for computers to perform natural language processing just to understand human language.  IBM is taking this one step further by pairing advanced language processing with search algorithms to try to create the ultimate Jeopardy! champion.  From some of the videos I’ve seen online, it’s pretty impressive, and the showdown next week should be fairly entertaining.

The Wall Street Journal has a good article giving a very simple explanation of how Watson actually works, and the forthcoming book will likely provide even more details on the story of Watson.  I’ll be tuning into Jeopardy! next week to see how things go.  My personal prediction is that Brad Rutter will take the win, but both Ken and Watson will put in good showings.  Watson’s Achilles’ heel will likely be a few botched categories and questions that put him in a hole, but it’s exciting to see how much progress IBM has made with technologies like these by taking up the IBM Jeopardy! challenge.

A practice round of Watson playing is available on YouTube and embedded below.

Apple, Video Games, and Disruptive Markets

Posted by: TomS on January 28, 2011 @ 7:36 am

At lunch the other day, the conversation turned to Apple, and one of my co-workers posed the question, “Why hasn’t Apple released a video game system yet?”  At the time, I was playing Angry Birds on another co-worker’s iPhone, so I waved the iPhone at him and responded, “They have.”  He of course responded that that isn’t really what he meant, but I thought a bit more about what Apple has done with its gaming strategy up until now, and they are actually positioned surprisingly well to pull in a huge chunk of the video game market over the next few years.

Before I dive into the details, most of my argument is based on Clayton Christensen’s ideas around disruptive innovation and low-end disruption.  In a nutshell, Christensen theorizes that most disruptive innovation occurs when established firms neglect certain market segments because they offer too low a margin to entice the incumbents.  Innovators enter the low-end segments and the incumbents do not react, but over time, the entrants overshoot the needs of the low-end markets and begin to pick up additional market segments.  Left unchecked, the entrants eventually overtake the market, driving the incumbents out completely.  It’s a pattern that has repeated itself throughout history, and Apple may be repeating it again with the growing library of games it distributes on the App Store.

Setting up a New SVN Repository

Posted by: TomS on January 9, 2011 @ 12:44 pm

It’s been a while since I’ve had to set up a new SVN repository on my home network, so now that it’s time, I figured I’d write a quick post documenting the steps along the way.

For those who may be unfamiliar with it, SVN is a commonly used version control system.  It allows users to manage changes to source code over time so that a user can quickly restore old versions, analyze change sets, and control how change sets are applied to a set of source code.  SVN is likely the de facto open source centralized version control system.  It grew up from the older CVS project, but it should be noted that it is a centralized version control system.  There are a number of distributed version control systems out there, such as git, Mercurial, and Bazaar, which are gaining adoption very quickly and provide a number of benefits for projects with many users.  For my home use, I’m the only user, and I’m familiar with the ins and outs of SVN, so I’ve stuck with it.

SVN is also one of the most well-documented open source projects out there, primarily because of the excellent work that goes into the SVN Book.  The online content is freely available, and it’s essentially the same content that is published in the O’Reilly reference.  It is updated often and is very comprehensive.  If you have any questions about SVN, I’d start there.

Now that I’ve gotten that out of the way, on to the content.  I want to set up an SVN repository for a new project I’m working on.  There are a number of ways in which you can choose to organize SVN, but I generally follow the one-project-per-repo model.  I use Apache and WebDav to connect to my SVN repos so that I can access them directly over http.  Assuming you already have SVN and Apache installed, there are really three quick parts to the setup: create the SVN repo, configure the WebDav connection, and set up the SVN repo.

Create the SVN Repo

To create the SVN repo, you’ll make use of the svnadmin command line tool.  It provides a number of SVN maintenance functions, including the create-repository functionality.  I usually create the repository as root, and then, after it is created, update the permissions so that the web user is the owner and the group is the SVN group.  This allows both the Apache and SVN processes access to the files.  The commands needed to set this up are below.

#create the new SVN repo
sudo svnadmin create /storage/svn/myNewRepo/

#change permissions on the new repo so that Apache and SVN can access it
sudo chown -R www-data:svn /storage/svn/myNewRepo/

Configure the WebDav Connection

Creating the WebDav interface is also just as easy.  You’ll create a simple Location entry in your Apache configuration which defines the parameters needed for users and the SVN connection.  For me, I put the configuration in the default website on my SVN server (on Ubuntu its /etc/apache2/sites-available/default), and I use the Apache AuthUserFile to control users of the repo.  I have set up user accounts for other repos, so I’m just going to hook into the existing file.
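If the password file referenced by AuthUserFile doesn’t exist yet, you can create it and add users with Apache’s htpasswd tool.  A quick sketch, where the usernames are just examples:

```shell
# -c creates the file; use it only for the very first user,
# since it truncates an existing file
sudo htpasswd -c /etc/apache2/passwords myuser
# Subsequent users are added without -c
sudo htpasswd /etc/apache2/passwords anotheruser
```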

The location information in my Apache file is shown below.  I add it directly to the default VirtualHost listing for the server.

<Location /svn/myNewRepo>
   DAV svn
   SVNPath /storage/svn/myNewRepo
   AuthType Basic
   AuthName "Subversion Repository"
   AuthUserFile /etc/apache2/passwords
   Require valid-user
</Location>

From there, it’s just a quick Apache restart with sudo /etc/init.d/apache2 restart, and then you should be able to access the repository over HTTP by going to /svn/myNewRepo on your server in a browser.  You should see a simple page listing the repository information: revision 0 as the version number and an empty directory, since we haven’t added any content yet.

Setup the SVN Repo

Your SVN repository is now ready to use.  Most repositories follow the general trunk/branches/tags structure, so to make sure things are working, I’m going to create those directories using the svn command line client.  There are a number of ways to do this, but I’m just going to create one directory at a time directly on the server.  The commands look like this.

#create the standard directories (yourserver below is a placeholder for your server's hostname)
svn mkdir --message "Creating the basic SVN structure for the project" http://yourserver/svn/myNewRepo/trunk
svn mkdir --message "Creating the basic SVN structure for the project" http://yourserver/svn/myNewRepo/branches
svn mkdir --message "Creating the basic SVN structure for the project" http://yourserver/svn/myNewRepo/tags

And that is all that’s needed.  You can verify that it worked by going to the svn repo’s web address again and verifying that the list of folders has updated.  Typically you would then check out the SVN trunk and then start checking in your code.
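
From here, the typical workflow is to check out the trunk and commit code into it.  A quick sketch, again using a placeholder hostname and an illustrative directory name:

```shell
# Check out the trunk into a local working copy
svn checkout http://yourserver/svn/myNewRepo/trunk myNewRepo
cd myNewRepo
# Schedule new files for addition, then commit them to the repository
svn add src/
svn commit --message "Initial import"
```

You’ll be prompted for the username and password configured in the Apache AuthUserFile.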

Installing Django 1.2.3 on Ubuntu 10.10

Posted by: TomS on December 16, 2010 @ 2:23 am

I’ve been toying around with some small ideas for websites, and I’ve been looking for some straightforward frameworks that would allow me to quickly prototype a site that could still be used, at least for moderate levels of traffic, in a production environment.  I decided to check out Django, a Python based web framework that supports a lot of out-of-the-box functionality and is easily extensible.

This post outlines the steps I took to install Django on my test web server, which is currently running Ubuntu 10.10 (Maverick Meerkat).


As is often the case, I like to configure my test system so that it is as close as possible to the production environment I eventually deploy to.  In many cases, that means deviating from Ubuntu’s distribution repositories and installing some packages from source.  In the case of Django, there is an appliance version of Ubuntu that supports Django, but nothing official in the repositories.  So as usual, I’m back to installing my own version.  Before I started, I laid out a few of the requirements I would like to achieve.

  • Use the latest stable version of Django (1.2.3)
  • Install Django in a shared location, but Django sites should be independent.
  • Each Django site should be in its own Apache VirtualHost, so as not to disturb the other sites I have running on my test server.
  • Each Django site should be easily maintained in source control and should not include any major artifacts from the Django library.  This will make the application portable, and will be helpful when I move it to other servers, such as production.

I based most of my work on an article I found online, but made some adjustments along the way to suit my needs, so I’m posting my steps for anyone else who might be following the same path.

Adding a Dynamic Sidebar and Dynamic Menus to a WordPress Theme

Posted by: TomS on July 14, 2010 @ 10:42 pm

In a previous post, I covered creating a simple WordPress theme, which forms the basis for the theme I use on this site.  That theme creation guide covered only the very basics of setting up a WordPress theme.  I’ve since upgraded to WordPress 3.0, which introduces a number of new features, including Dynamic Menus.  In this post, I’ll cover taking advantage of the dynamic menus API, as well as “widgetizing” my theme, which allows me to configure which widgets will show up in the sidebars on my site.

My updated theme is available below for reference.


WordPress 3.0 Took the Plunge

Posted by: TomS on July 8, 2010 @ 7:57 am

My backup system(s) for my WordPress blog have been running for well over a week now, and the upgrade notice in WordPress finally got the better of me.  I went ahead and took the plunge, running the automated upgrade to WordPress 3.0.

The upgrade was easier than expected.  One click to start the upgrade, a few more to confirm that I REALLY wanted to upgrade, and then the process was underway.  Grand total, I think it took me maybe a minute and a half.

Everything seems stable so far (I did try it out first on my home test machine).  I’m not taking advantage of any of the new features yet, other than the improved interface.  Pretty soon, I’ll update my WordPress theme to take advantage of the new dynamic menu API, as well as catch up with older feature sets by widgetizing my theme.  Further down the road, I may look into some custom post types, especially for putting together race reports and recaps.  Expect some new posts sometime soon detailing how I rolled those feature sets into my theme.

The other big feature in WordPress 3.0 is the MU multi-user setup, where multiple blogs can be run from one installation.  I don’t have a huge need for that these days, but it’s a great accomplishment that WordPress now has that in the core.

Other than that, it’s just business as usual here.  Anyone reading the blog really shouldn’t see any changes, but WordPress 3.0 does look like a solid, polished release, and it looks like it’ll be a solid platform for the WordPress team to continue improving their app.  I’ll leave you with a video detailing the new features in WordPress 3.0.