How I created my perfect programming environment
By Tom Parkin
One of my favourite things about the Linux desktop experience is how modular and flexible it is. If the attractive GUIs offered by comprehensive desktop environments such as GNOME or KDE don't meet your needs, it is relatively simple to swap them out for something different. Given enough time and appropriate motivation it is possible to create a completely bespoke desktop.
In order to support my day-to-day work as a software engineer I have done exactly this, replacing the GUI on my Fedora development box with something dedicated to my requirements. In this article I will discuss the components that make up my programming desktop, and how they help me do my job more efficiently.
What I want from my Linux desktop
I spend most of my time working with a text editor, a compiler, and a command prompt. Ideally, my desktop should be optimised for the most efficient access to these core tools. Beyond that, my desktop should keep out of my way: to program effectively I need to be able to concentrate.
To fulfil these basic requirements, I have developed my desktop around the following guiding principles:
- I should not have to take my hands from the keyboard. Because so many of my interactions with the computer are through the keyboard, it makes sense to drive my desktop environment using the same interface.
- I should not be disturbed. Notifications, pop-up dialogues, and CPU-stealing animations are not welcome. Anything that keeps me from thinking about my work is to be eliminated wherever possible.
- I should not be kept waiting. Partially this is about removing distractions, since having to wait for the computer to catch up with my commands is a distraction in itself. Beyond that, however, a fast and responsive interface is important to remove the delay between thought and action: if I've just had a good idea I want to try out, I don't want to be waiting a few seconds for a terminal to start up.
Building my desktop
In order to meet my requirements for a programming desktop, I have broken the desktop down into a number of components. The most obvious are the window manager, and the applications. To create a truly streamlined environment, I have also developed or integrated a number of tools which complement my basic desktop. These include scripts for workflow management, wrappers to remove the rough edges from some tools I use, and some "fit and finish" utilities to complete the package.
The Window Manager
The fundamental choice in designing a desktop is that of the window manager.
For my programming desktop, I use a very small and simple window manager called dwm. dwm is a tiling window manager along the lines of xmonad, wmii, ion, and awesome. Although the tiling paradigm takes a little getting used to, I find it a great fit for programming tasks as it makes it easy for me to manage lots of terminal windows.
In addition to the benefits of the tiling layout, dwm boasts a number of other attractive features. Firstly, it is entirely keyboard driven, which means that I can start new applications, close windows, change tiling layouts and switch between virtual desktops without needing to touch the mouse. Secondly, it is very small and sleek (around 2000 lines of code in total), which means it starts quickly and offers no bells or whistles to distract me from my work. Finally, in a particularly hacker-centric design decision, dwm is configured entirely by editing its config.h header file and recompiling. What better advertisement could there be for a true programming environment?
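The edit-and-rebuild cycle this implies is mercifully short. Assuming the dwm source is checked out under ~/src/dwm (adjust the path to taste), it amounts to something like:

cd ~/src/dwm
vim config.h          # tweak keybindings, colours and layout rules
make clean && make    # rebuild dwm with the new configuration
sudo make install     # install it; restart dwm to pick up the changes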
The Applications
No desktop is complete without applications, and no programmer's toolkit would be complete without an editor! Here I am much more conventional than in my window manager choice. All my editing needs are met by the venerable vim. Vim starts up fast, is very configurable, and offers many powerful commands to help me get the most out of my keystrokes. In conjunction with vim, I use ctags and cscope to help navigate source trees. The latter are made easily accessible via an alias in my ~/.bashrc:
alias mktags='ctags -R && cscope -Rb'
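With the databases built, vim can jump straight to a definition from the command line. The project path below is only an example:

cd ~/src/myproject    # any source tree (hypothetical path)
mktags                # builds the tags file and the cscope.out database
vim -t main           # open the file defining main() at the definition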
My desktop application requirements are rounded off with a combination of mutt for email access and Firefox for web browsing. Firefox is somewhat customised by means of the excellent vimperator extension, which allows me to drive Firefox from the keyboard.
Managing workflow
The window manager and applications are only the building blocks of a productive working environment. In order to make the desktop work for me, I have created a number of tools specific to my workflow. When programming, this workflow is broadly as follows: check out a sandbox from revision control; make some modifications; test those modifications; and check the resulting code back in. The only bits I am really interested in, however, are the making and testing of changes. The rest is just an overhead of doing the interesting work.
Happily, the Linux command line makes it easy to reduce the burden of this overhead via scripting. I have developed three scripts I use on a daily basis to manage the sandboxes I'm working on:
- freshen automates the process of checking out and optionally building a sandbox. This is great for getting up-to-date code and for testing changes in a clean sandbox.
- workroot automates setting up a sandbox for use. This is largely a matter of exporting variables our build system expects to see, but also extends to setting up some convenience variables in the environment to make navigating the tree easier.
- stale automates the process of deleting old sandboxes from my hard drive once I've finished with them. Since our build involves creating a rootfs for an embedded device, certain parts of the sandbox have root ownership, and it is nice to hide any "sudo rm -rf ./*" calls in a carefully audited script rather than performing these operations by hand!
Although freshen, workroot and stale form an important part of my working environment, the inner details of how they do what they do are rather project-specific and unlikely to be of wider interest. As such, and in the interests of brevity, I won't provide code listings for these scripts here.
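That said, a purely illustrative sketch may help convey the flavour of workroot. The variable names and paths below are invented for the example rather than taken from my real build system:

# workroot (illustrative sketch) -- prime the shell environment for a sandbox
# Must be sourced so the exports persist:  . workroot /path/to/sandbox
SANDBOX=${1:-$PWD}

# Variables a build system might expect to see (names are hypothetical)
export BUILD_ROOT=$SANDBOX
export CROSS_COMPILE=arm-linux-

# Convenience variables for navigating the tree
export SRC=$SANDBOX/src
export ROOTFS=$SANDBOX/rootfs

cd $SANDBOX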
Removing rough edges
In addition to the scripts I've developed to support my daily workflow, I have also developed various wrappers which make certain programming tools more convenient. The main bugbear in this department is CVS, whose lacklustre diff and status output obscures a lot of potentially useful information. To improve matters I use shell scripts and aliases to mold the raw output from CVS into something more palatable. My cvs diff wrapper, cvsdiff, pipes output from the cvs diff command through a colorising script and a pager. Similarly, my cvs status wrapper parses the output of the cvs status command to display it in a more readable format.
cvsdiff code
cvsdiff utilises the fantastic colordiff project to display nicer diff output. Since the script for this is so short I define it as an alias in my ~/.bashrc file. Note the use of the -R argument to GNU less. This instructs less to pass control characters through in "raw" mode, meaning the color output from colordiff is preserved.
alias cvsdiff='cvs diff -u 2>&1 | grep -v "^\(?\|cvs\)" | colordiff | less -R'
cvstatus code
cvstatus uses awk (or gawk on my Fedora machine) to parse the verbose output of the cvs status command. The gawk code is wrapped in a simple bit of shell script to allow the easy passing of command line arguments. I install this script in ~/bin, which is added to my $PATH in ~/.bashrc.
#!/bin/sh
CARGS="vf"
VERBOSE=0
FULLPATH=0

while getopts $CARGS opt
do
    case $opt in
        v) VERBOSE=1;;
        f) FULLPATH=1;;
    esac
done

cvs status 2>&1 | awk -v verbose=$VERBOSE -v fullpath=$FULLPATH '
    function printline(path, status, working_rev, repos_rev, tag) {
        # truncate path if necessary
        if (!fullpath) {
            plen = length(path);
            if (plen >= 30) {
                path = sprintf("-%s", substr(path, (plen-30+2)));
            }
        }
        printf("%-30s %-25s %-15s %-15s %-20s\n", path, status, working_rev, repos_rev, tag);
    }
    BEGIN {
        do_search=0;
        do_print=1;
        state=do_search;
        printline("Path", "Status", "Working rev", "Repository rev", "Tag");
        printline("----", "------", "-----------", "--------------", "---");
    }
    # Track directories
    /Examining/ { dir=$4; }
    # Handle unknown files
    /^\?/ {
        fn=$2;
        status="Unknown";
        wrev="??";
        rrev="??";
        tag="No tag";
        state=do_print;
    }
    # For known files capture the filename, status, revision info and tag
    /^File/ {
        status=$0;
        gsub(/^.*Status: /, "", status);
        if (status ~ /Locally Removed/) {
            fn=$4;
        } else {
            fn=$2;
        }
        if ( (verbose && status ~ /Up-to-date/) || status !~ /Up-to-date/ ) {
            state=do_print;
        }
    }
    /Working revision/ { wrev=$3; }
    /Repository revision/ { rrev=$3; }
    /Sticky Tag/ { tag=$3; }
    # Print handling
    (/\?/ || /Sticky Options/ || /======/) && state == do_print {
        path = sprintf("%s/%s", dir, fn);
        printline(path, status, wrev, rrev, tag);
        state=do_search;
    }
'
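Once cvstatus is executable and on the $PATH, running it from anywhere inside a checkout is enough; the flags are optional:

cvstatus       # list only files that are not up to date
cvstatus -v    # include up-to-date files in the listing as well
cvstatus -f    # show full paths instead of truncating long ones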
Fit and finish
The final components of my programming desktop provide some of the functionality typically found in graphical file managers such as Nautilus, Dolphin or Thunar.
For quick and easy exploration of directory hierarchies, I use tree. Although there are much better tools for finding specific files in a directory structure, tree excels in presenting an overall view by means of intelligent indentation and coloured output.
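A couple of flags are usually all it needs; for example:

tree -C -L 2 src/    # coloured output, limited to two directory levels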
In order to conveniently mount hot-pluggable media such as USB flash drives, I use the pmount and pumount wrapper utilities. These have been developed to allow an unprivileged user to mount a local volume, and are much more user-friendly than manually messing about with sudo.
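Given a flash drive that appears as, say, /dev/sdb1 (the device name will vary), usage is as simple as:

pmount /dev/sdb1 usbstick    # mounts the volume at /media/usbstick
pumount /media/usbstick      # unmounts it again when finished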
Finally, I use a simple script of my own devising to make mounting and unmounting network shares more convenient. This allows me to hide the differences between different network shares behind a common interface. My script is based around a per-share configuration file which describes the share to be mounted. Currently CIFS and sshfs shares are supported. Since the script's job is to handle mounting volumes, I named it mountie.
mountie configuration
The configuration file format for mountie follows the INI "token = value" syntax used by e.g. the Samba project. Valid configuration tokens are as follows:
- "type" defines the protocol to use when connecting to the server
- "user" defines the username to use when authenticating to the server
- "host" defines the remote server to connect to
- "path" defines the path on the remote server to mount
- "mount" defines the local mount point for the network share
For example:
type = sshfs
user = tom
host = fileserver.site.internal
path = /export/media/shared
mount = ~/fileserver
mountie code
The bash script for mountie itself is as follows:
#!/bin/bash
#
# mountie
#
# Mount remote filesystems
#
ACTION=mount
HOST=
REMOTE_PATH=
MOUNTPOINT=

log() { echo "$@"; }
err() { log "$@" 1>&2; false; }
die() { err "$@"; exit 1; }

# $1 -- user
# $2 -- host
# $3 -- path
# $4 -- mount point
sshfs_do_mount() { mkdir -p ${4} && sshfs -o nonempty ${1}@${2}:${3} ${4}; }
sshfs_do_umount() { fusermount -u ${4}; }
cifs_do_mount() {
    mkdir -p ${4} && \
        sudo -p "[sudo] $(whoami)'s password: " mount -t cifs \\\\${2}\\${3} ${4} -o username=${1};
}
cifs_do_umount() {
    mkdir -p ${4} && \
        sudo -p "[sudo] $(whoami)'s password: " umount ${4};
}

# $1 -- config file path
config_get_type() { grep "type" $1 | cut -d"=" -f2; }
config_get_user() { grep "user" $1 | cut -d"=" -f2; }
config_get_host() { grep "host" $1 | cut -d"=" -f2; }
config_get_path() { grep "path" $1 | cut -d"=" -f2 | tr -d " "; }
config_get_mountpoint() { grep "mount" $1 | cut -d"=" -f2; }

show_usage() {
    log "Usage: $(basename $0) [-uh] <config file> [config file ...]"
}

#
# Entry point
#
while getopts "uh" opt
do
    case $opt in
        u) ACTION=umount ;;
        h) show_usage; exit 0 ;;
        *) die "Unknown option" ;;
    esac
done
shift $((OPTIND-1))

if test -z "$1"
then
    show_usage
    exit 0
fi

for config in $@
do
    if test -f $config
    then
        $(config_get_type $config)_do_$ACTION \
            $(config_get_user $config) \
            $(config_get_host $config) \
            $(config_get_path $config) \
            $(config_get_mountpoint $config) || \
            die "Failed to mount $(config_get_host $config):$(config_get_path $config)"
    else
        err "Cannot locate configuration file $config"
    fi
done
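With a configuration file such as the example above saved somewhere sensible (the path below is simply illustrative), mounting or unmounting a share becomes a one-liner:

mountie ~/.mountie/fileserver.conf       # mount the share described in the file
mountie -u ~/.mountie/fileserver.conf    # unmount it again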
Conclusions
I've developed my programming desktop to remove distractions, increase efficiency, and support my workflow. This has been achieved by combining many excellent GUI and command line tools. Where my work has demanded a more specialist tool than the free software ecosystem provides, I have been able to harness the scripting abilities of Bash and gawk to create my own.
Although my programming desktop works well for the majority of what I do, it isn't the only desktop I use. On the contrary, there are several applications which are ill served by dwm's tiling paradigm, especially those using the "many floating toolbox windows" UI design pattern, such as the Gimp, Dia or OpenOffice. When I find myself needing to use such applications, or even when I fancy something with a bit more graphical bling than dwm offers, I sometimes use a GNOME or Xfce desktop instead.
An article like this one tends to present its subject as though it were a complete and finished work, the reproduction of which can be intimidating to contemplate. Rest assured, however, that my desktop wasn't conceived that way. Instead, I've developed this environment over time in an evolutionary manner, gradually removing irritations and inefficiencies. I fully expect it will change again in the future, and I look forward to the new tools I might discover and the new scripts I will write to make my life ever easier. Most of all, I hope the ideas I've presented here give you some inspiration for sculpting your own perfect environment.
Tom Parkin has been fascinated by the inner workings of digital technologies ever since his father brought home a VIC-20 sometime in the mid-eighties. Having spent most of his childhood breaking computers in a variety of inventive ways, he decided to learn how to fix them again, a motivation which led him to undertake an MEng degree in Electronic Systems Engineering in 2000. Since graduating he has pursued a career in embedded software engineering, and now feels that he has probably been responsible for more working computers than broken ones.
Tom was introduced to Linux when a friend lent him a thick stack of Mandriva installation CDs, and he has been using Open Source software ever since. Like most Linux users, Tom has tried many different distributions but is currently settled with Fedora at work and Crunchbang on his home machine.
When not tinkering with computers and Linux, Tom enjoys exploring the great outdoors on bike or on foot, and making music.