...making Linux just a little more fun!
Benjamin A. Okopnik (ben at linuxgazette.net)
Wed Mar 29 00:50:24 PST 2006
Hi, Gang -
Anyone interested in hearing Bruce Perens do his "Open Source State of the Union" speech in Boston (4/5/2006), please let me know and I'll arrange a press pass for you (it would be a Very Nice Thing if you sent in a conference report as a result.) Do note that this usually requires a recently-published article with your name on it.
In fact, if you'd like to attend any other industry conferences, the same process applies. I will ask that you do a little research in those cases and send me the contact info for the group that's running the particular con you're interested in.
Bob van der Poel (bvdp at uniserve.com)
Sun Feb 26 18:31:54 PST 2006
Following up on a previous discussion
Just to follow up on this... I have it working perfectly now. A few fellows on the ALSA mailing list gave me a hand, and we found out that for some reason the OSS sound stuff was being loaded and then ALSA was ignored (still, I don't see why a cold plug did work). But the simple solution was to delete or rename the kernel module "usb-midi.ko.gz" so that the system can't find it. Works like a damn.
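A gentler alternative, for anyone who'd rather not delete the module file outright: most modprobe setups can blacklist a module so it's never auto-loaded. A minimal sketch, assuming a modprobe.d-style configuration (the exact file name and path vary by distro):
```
# Keep the OSS usb-midi module from being auto-loaded (run as root).
# The file name below is an assumption; any file under /etc/modprobe.d/
# (or a line in /etc/modprobe.conf) should work the same way.
echo "blacklist usb-midi" >> /etc/modprobe.d/local-blacklist
```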
Brian Sydney Jathanna (briansydney at gmail.com)
Tue Mar 14 18:18:19 PST 2006
Hi all,
I was just wondering if there is a way to enter the default inputs at the command line automatically when a program asks for user input. E.g., while compiling the kernel, it asks for a whole lot of interaction where you just keep pressing Enter. The command 'yes' is not an option in this case, because at some stages the default input would be a 'no'. Help appreciated, and thanks in advance.
[Thomas] - Sure. You can use 'yes' of course -- just tell it not to actually print any characters:
yes "" | make oldconfig
That will just feed into each stage an effective "return key press" that accepts whatever the default is.
[[Ben]] - Conversely, you can use a 'heredoc' - a shell mechanism designed for just that reason:
```
program <<!
yes
no
maybe

!
```
In the above example, "program" will receive four lines via STDIN: 'yes<Enter>', 'no<Enter>', 'maybe<Enter>', and (from the blank line) the <Enter> key all by itself.
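A quick self-contained illustration: 'ask.sh' below is a hypothetical stand-in for any program that reads four answers from STDIN.
```
#!/bin/sh
# ask.sh - hypothetical stand-in for a program that prompts for four answers
read first; read second; read third; read fourth
echo "answers: '$first' '$second' '$third' '$fourth'"
```
Feeding it the heredoc above (the blank line supplies the bare <Enter>):
```
$ sh ask.sh <<!
> yes
> no
> maybe
>
> !
answers: 'yes' 'no' 'maybe' ''
```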
sam luther (smartysam2003 at yahoo.co.in)
Sat Mar 18 12:32:22 PST 2006
I want to develop C code to transfer files from one PC to another over the parallel ports (DB-25 on my machine) of Linux machines. I have the 18-wire cable ready, but I'm not sure about the connections, or how to open and control the parallel port. Please help... advice, sample code, and relevant material would be very helpful. Thanks.
[Thomas] - Well, if this is Linux -> Linux, set up PLIP.
(Searching linuxgazette.net for 'PLIP' will yield all the answers you need.) Having done that, you can set up NFS, sshfs, or scp in the normal way.
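For the impatient, a minimal PLIP bring-up might look like this; the interface name, addresses, and the assumption that plip is built as a module are illustrative, not from Thomas:
```
# On machine A (run as root; assumes the plip driver is built as a module):
modprobe plip
ifconfig plip0 192.168.7.1 pointopoint 192.168.7.2 up

# On machine B:
modprobe plip
ifconfig plip0 192.168.7.2 pointopoint 192.168.7.1 up

# Then, from machine A:
ping 192.168.7.2
```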
[Jimmy] - Hmm. Advice, sample code, relevant material...
http://linuxgazette.net/issue95/pramode.html
http://linuxgazette.net/118/deak.html
http://linuxgazette.net/122/sreejith.html
http://linuxgazette.net/112/radcliffe.html
... are just some of the articles published in previous issues of LG that discuss doing various different things with the parallel port. The article in issue 122 also has diagrams, which may be of help.
Brian Sydney Jathanna (briansydney at gmail.com)
Tue Mar 21 14:44:57 PST 2006
Is there a way to modify the command line of a running process? Looking at /proc/<pid>/cmdline, it appears to be read-only, even for the root user. It would be helpful to add options or change arguments to a running command if this were possible. Is there a way around it? Thanks in advance.
[Thomas] - I don't see how. Most applications parse their command-line options only once, at init time. This means you would have to effectively restart the application.
However, if said application can also take its options from a config file, then sending it a HUP signal might help you -- provided the application supports putting those command-line options in the config file.
[[Ben]] - Not that I can make a totally authoritative statement on this, but I agree with Thomas: when the process is running, what you have is a memory image of a program _after_ it has finished parsing all the command-line options and doing all the branching and processing dependent on them; there's no reason for those command-line parsing mechanisms to be active at that point. Applications that need this kind of functionality - e.g., Apache - use the mechanism that Thomas describes to achieve it, with a "downtime" of only a tiny fraction of a second between runs.
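In miniature, the pattern Thomas and Ben describe looks something like this; the script name, config path, and variable are all made up for illustration:
```
#!/bin/sh
# toyd.sh - a toy "daemon" that rereads its (hypothetical) config on SIGHUP
CONF=/tmp/toyd.conf
load() { . "$CONF" 2>/dev/null; echo "loaded: GREETING=$GREETING"; }
trap load HUP     # re-run load() whenever we receive SIGHUP
load              # initial parse, analogous to command-line parsing at init
while :; do sleep 1; done
```
Edit /tmp/toyd.conf, then 'kill -HUP <pid>': the script picks up the new settings without restarting, which is what Apache does on a grander scale.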
manne neelima (manne_neelima at yahoo.com)
Wed Mar 22 12:08:12 PST 2006
I have a question about the rm command. Would you please tell me how to remove all the files except a certain Folder in Unix?
Thanks in Advance
Neelima
[Thomas] - Given that the 'rm' command refuses to remove directories anyway (unless it is used with the '-r' flag), I suspect your question is:
"How can I exclude some file from being removed?"
... to which the answer is:
"It depends on the file -- what the name is, whether it has an 'extension', etc."
Now, since you've provided next to nothing about any of that, here's a contrived example. Let's assume you had a directory "/tmp/foo" with some files in it:
```
[n6tadam at workstation foo]$ ls
a b c
```
Now let's add another directory inside it, with some files of its own:
```
[n6tadam at workstation foo2]$ ls
c d e f g
```
Let's now assume you only wanted to remove the files in foo:
```
[n6tadam at workstation foo]$ rm -i /tmp/foo/*
```
That would remove the files 'a', 'b', and 'c'. It would not remove "foo2", since that's a non-empty directory.
Of course, the answer to your question is really one of globbing. I don't know what kind of files you want removed, but for situations such as this, the find(1) command works "better":
```
find /tmp/foo -type f -print0 | xargs -0 rm
```
... which would remove all files in /tmp/foo _recursively_, which means the files in /tmp/foo/foo2 would also be removed. Whoops. Note that earlier, with the "rm -i *" command, the glob only expands the top level, as it ought to. You can limit find's visibility, such that it will only remove the files from /tmp/foo and nowhere else:
```
find /tmp/foo -maxdepth 1 -type f -print0 | xargs -0 rm
```
Let's now assume you didn't want to remove file "b" from /tmp/foo, but everything else. You could do:
```
find /tmp/foo -maxdepth 1 -type f -not -name 'b' -print0 | xargs -0 rm
```
... etc.
[Ben] - Well, first off, Unix doesn't have "Folders"; I presume that you're talking about directories. Even so, the 'rm' command doesn't do that - there's no "selector" mechanism built into it except the (somewhat crude) "-i" (interactive) option. However, you can use shell constructs or other methods to create a list of files to be supplied as an argument list to 'rm', which would do what you want - or you could process the file list and use a conditional operator to execute 'rm' based on some factor. Here are a few examples:
```
# Delete all files except those that end in 'x', 'y', or 'z'
rm *[^xyz]

# Delete only the subdirectories in the current dir
for f in *; do [ -d "$f" ] && rm -rf "$f"; done

# Delete all except regular files
find /bin/* ! -type f -exec /bin/rm -f {} \;
```
[[Ben]] - Whoops - forgot to clean up after experimenting. :) That last should, of course, be
find * ! -type f -exec /bin/rm -f {} \;
[[[Francis]]] - My directory ".evil" managed to survive those last few. And my file "(surprise" seemed to cause a small problem...
[[[[Ben]]]] - Heh. Too right; I got too focused on the "except for" nature of the problem. Of course, if I wanted to do a real search-and-destroy mission, I'd do something like
su -c 'chattr -i file1 file2 file3; rm -rf `pwd`; chattr +i *'
[evil grin]
No problems with '(surprise', '.evil', or anything else.
[[[[[Thomas]]]]] - ... that won't work across all filesystems, though.
[[[[[[Ben]]]]]] - OK, here's something that will:
1) Copy off the required files.
2) Throw away the hard drive, CD, whatever the medium.
3) Copy the files back to an identical medium.
There, a nice portable solution. What, you have more objections? :)
Note that I said "if *I* wanted to do a real search-and-destroy mission". I use ext3 almost exclusively, so It Works For Me. To anyone who wants to include vfat, reiserfs, pcfs, iso9660, and malaysian_crack_monkey_of_the_week_fs, I wish the best of luck and a good supply of their tranquilizer of choice.
[[[Francis]]] - (Of course, you knew that.
[[[[Ben]]]] - Actually, I'd missed it while playing around with the other stuff, so I'm glad you were there to back me up.
[[[Francis]]] - But if the OP cares about *really* all-bar-one, it's worth keeping in mind that "*" tends not to match /^./, and also that "*" is rarely as useful as "./*", at least when you don't know what's in the directory. I suspect this isn't their primary concern, though.)
[[[[Ben]]]] - Well, './*' won't match dot-files any better than '*' will, although you can always futz around with the 'dotglob' setting:
```
# Blows away everything in $PWD
(GLOBIGNORE=1; rm *)
```
...but you knew that already. :)
[Martin] - Try this Perl script... This one deletes all hidden files apart from the ones in the hash.
```
#!/usr/bin/perl -w
use strict;
use File::Path;

# These are the files you want to keep.
my %keepfiles = (
    ".aptitude"            => 1,
    ".DCOPserver_Anne__0"  => 1,
    ".DCOPserver_Anne_:0"  => 1,
    ".gconf"               => 1,
    ".gconfd"              => 1,
    ".gnome2"              => 1,
    ".gnome2_private"      => 1,
    ".gnupg"               => 1,
    ".kde"                 => 1,
    ".kderc"               => 1,
    ".config"              => 1,
    ".local"               => 1,
    ".mozilla"             => 1,
    ".mozilla-thunderbird" => 1,
    ".qt"                  => 1,
    ".xmms"                => 1,
    ".bashrc"              => 1,
    ".prompt"              => 1,
    ".gtk_qt_engine_rc"    => 1,
    ".gtkrc-2.0"           => 1,
    ".bash_profile"        => 1,
    ".ICEauthority"        => 1,
    ".hushlogin"           => 1,
    ".bash_history"        => 1,
);

my $inputDir = ".";
opendir(DIR, $inputDir) || die("Can't open $inputDir: $!\n");
while (my $file = readdir(DIR)) {
    next if ($file =~ /^\.\.?$/);    # skip . and ..
    next if ($file !~ /^\./);        # skip unless it begins with .
    next if ($keepfiles{$file});     # carry on if it's a file you wanna keep
    # Else wipe it.
    # print STDERR "I would delete $inputDir/$file\n";
    # You should probably test for the outcome of these operations...
    if (-d $inputDir . "/" . $file) {
        # rmdir($inputDir . "/" . $file);
        print STDERR "Deleting Dir $file\n";
        rmtree($inputDir . "/" . $file);
    } else {
        print STDERR "Deleting File $file\n";
        unlink($inputDir . "/" . $file);
    }
}
closedir(DIR);
```
[[Ben]] - All that, and a module, and 'opendir' too? Wow.
perl -we'@k{qw/.foo .bar .zotz/}=();for(<.*>){unlink if -f&&!exists$k{$_}}'
:)
Ramon van Alteren (ramon at vanalteren.nl)
Wed Feb 8 15:44:14 PST 2006
Hi All,
I've recently built a 9 TB NAS for our server park out of 24 SATA disks and two 3ware 9550SX controllers. Works like a charm, except....... NFS.
We export the storage using nfs version 3 to our servers. Writing onto the local filesystem on the NAS works fine, copying over the network with scp and the like works fine as well.
However writing to a mounted nfs-share at a different machine truncates files at random sizes which appear to be multiples of 16K. I can reproduce the same behaviour with a nfs-share mounted via the loopback interface.
Following is output from a test-case:
On the server in /etc/exports:
/data/tools 10.10.0.0/24(rw,async,no_root_squash) 127.0.0.1/8 (rw,async,no_root_squash)
Kernel:
```
Linux spinvis 2.6.14.2 #1 SMP Wed Feb 8 23:58:06 CET 2006 i686 Intel(R) Xeon(TM) CPU 2.80GHz GenuineIntel GNU/Linux
```
Similar behaviour is observed with gentoo-sources-2.6.14-r5, same options. The relevant NFS config symbols:
```
CONFIG_NFS_FS=y
CONFIG_NFS_V3=y
CONFIG_NFS_V3_ACL=y
# CONFIG_NFS_V4 is not set
# CONFIG_NFS_DIRECTIO is not set
CONFIG_NFSD=y
CONFIG_NFSD_V2_ACL=y
CONFIG_NFSD_V3=y
CONFIG_NFSD_V3_ACL=y
# CONFIG_NFSD_V4 is not set
CONFIG_NFSD_TCP=y
# CONFIG_ROOT_NFS is not set
CONFIG_NFS_ACL_SUPPORT=y
CONFIG_NFS_COMMON=y
```
And the test run:
```
#root at cl36 ~ 20:29:44 > mount
10.10.0.80:/data/tools on /root/tools type nfs (rw,intr,lock,tcp,nfsvers=3,addr=10.10.0.80)
#root at cl36 ~ 20:29:56 > for i in `seq 1 30`; do dd count=1000 if=/dev/zero of=/root/tools/test.tst; ls -la /root/tools/test.tst; rm /root/tools/test.tst; done
1000+0 records in
1000+0 records out
dd: closing output file `/root/tools/test.tst': No space left on device
-rw-r--r-- 1 root root 163840 Feb  8 20:30 /root/tools/test.tst
1000+0 records in
1000+0 records out
dd: closing output file `/root/tools/test.tst': No space left on device
-rw-r--r-- 1 root root  98304 Feb  8 20:30 /root/tools/test.tst
1000+0 records in
1000+0 records out
dd: closing output file `/root/tools/test.tst': No space left on device
-rw-r--r-- 1 root root  98304 Feb  8 20:30 /root/tools/test.tst
1000+0 records in
1000+0 records out
dd: closing output file `/root/tools/test.tst': No space left on device
-rw-r--r-- 1 root root 131072 Feb  8 20:30 /root/tools/test.tst
1000+0 records in
1000+0 records out
dd: closing output file `/root/tools/test.tst': No space left on device
-rw-r--r-- 1 root root 163840 Feb  8 20:30 /root/tools/test.tst
```
<similar thus snipped>
I've so far found this, which seems to indicate that RAID + LVM + complex storage plus 4KSTACKS can cause problems. However, I can't find the 4KSTACKS symbol anywhere in my config.
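For anyone who wants to check their own kernel, something along these lines will do (the paths assume a distro-installed config and/or the in-kernel config option):
```
# Look for the 4K-stacks option in the running kernel's config:
grep 4KSTACKS /boot/config-$(uname -r) 2>/dev/null
# ...or, if the kernel was built with /proc/config.gz support:
zcat /proc/config.gz 2>/dev/null | grep 4KSTACKS
# No output means the symbol isn't present in that config at all.
```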
For those wondering.... no it's not out of space:
10.10.0.80:/data/tools  9.0T  204G  8.9T  3% /root/tools
Any help would be much appreciated.......
Forgot to mention:
There's nothing in syslog in any of the cases (loopback mount, remote-machine mount, or on the server).
We're using reiserfs 3, in case you're wondering. It's a RAID-50 machine, based on two 4.55 TB RAID-50 arrays handled by the hardware controllers.
The two raid-50 arrays are "glued" together using LVM2:
```
--- Volume group ---
VG Name               data-vg
System ID
Format                lvm2
Metadata Areas        2
Metadata Sequence No  2
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                1
Open LV               1
Max PV                0
Cur PV                2
Act PV                2
VG Size               9.09 TB
PE Size               4.00 MB
Total PE              2384134
Alloc PE / Size       2359296 / 9.00 TB
Free  PE / Size       24838 / 97.02 GB
VG UUID               dyDpX4-mnT5-hFS9-DX7P-jz63-KNli-iqNFTH

--- Physical volume ---
PV Name               /dev/sda1
VG Name               data-vg
PV Size               4.55 TB / not usable 0
Allocatable           yes (but full)
PE Size (KByte)       4096
Total PE              1192067
Free PE               0
Allocated PE          1192067
PV UUID               rfOtx3-EIRR-iUx7-uCSl-h9kE-Sfgu-EJCHLR

--- Physical volume ---
PV Name               /dev/sdb1
VG Name               data-vg
PV Size               4.55 TB / not usable 0
Allocatable           yes
PE Size (KByte)       4096
Total PE              1192067
Free PE               24838
Allocated PE          1167229
PV UUID               5U0F3v-ZUag-pRcA-FHvo-OJeD-1q9g-IthGQg

--- Logical volume ---
LV Name                /dev/data-vg/data-lv
VG Name                data-vg
LV UUID                0UUEX8-snHA-dYc8-0qLL-OSXP-kjoa-UyXtdI
LV Write Access        read/write
LV Status              available
# open                 2
LV Size                9.00 TB
Current LE             2359296
Segments               2
Allocation             inherit
Read ahead sectors     0
Block device           253:3
```
[Kapil] - I haven't used NFS in such a massive context so this may not be the right question.
[[Ramon]] - Doesn't matter. I remember once explaining why I was still at work, and with what problem, to the guy cleaning our office (because he asked). He asked one question which put me on the right track to solving the issue... in half an hour, after banging my head against it for two days ;-)
Not intending to compare you to a cleaner... sometimes it helps a lot if you get questions from a different mindset.
[Kapil] - Have you tested what happens with the user-space NFS daemon?
[[Ramon]] - Urhm, not clear what you mean... Tested in what sense?
[[[Kapil]]] - Well, I noticed that you had compiled in the kernel NFS daemon. So I assumed that you were using the kernel NFS daemon rather than the user-space one. Does your problem persist when you use the user-space daemon?
[[[[Ramon]]]] - Thanx, no it doesn't.
[[[Kapil]]] - The reason I ask is that it may be the kernel NFS daemon that is leading to too many indirections for the kernel stack to handle.
[[[[Ramon]]]] - That appears to be the case. I'm writing a mail to lkml & nfs list right now to report on the bug.
[[[[[Ramon]]]]] - Turned out it did, but with a higher threshold. As soon as we started testing with files > 100Mb the same behaviour came up again.
It turned out to be another bug related to reiserfs.
For those interested, more data is here and here.
In short: although the reiserfs FAQ says it supports filesystems up to 16 TB with the default options (4K blocks), it actually supports only 8 TB. It doesn't fail outright, though, and appears to work correctly when using the local filesystem; the problems start showing up when using NFS.
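Those numbers line up with a quick back-of-the-envelope check, on the assumption (mine, not verified against the reiserfs source) that the limit comes from a 32-bit block counter with 4K blocks:
```
# 4K blocks, 32-bit block numbers:
echo $(( 2**32 * 4096 / 2**40 ))   # => 16 (TB, all 32 bits usable)
echo $(( 2**31 * 4096 / 2**40 ))   # => 8  (TB, if the top bit can't be used)
```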
I fixed the problem by using another filesystem. Based on comments on the nfs mailing list and the excellent article in last month's Linux Gazette, we switched to JFS.
So far it's holding up very well; we haven't seen the problems reappear with files in excess of 1 GB. Thanx for the help, though.
fahad saeed (fahadsaeed11 at yahoo.co.in)
Thu Feb 9 20:22:37 PST 2006
Hello,
I am a Linux enthusiast, and I would like to be on the answer blog team. How may this be possible? I have one Linux article to my credit.
http://new.linuxfocus.org/English/December2005/article390.html
Regards
FAHAD SAEED
[Ben] -
Hi, Saeed -
Yep, I saw that - very good article and excellent work! Congratulations to your entire team.
You're welcome to join the Linux Gazette Answer Gang; simply go to http://lists.linuxgazette.net/mailman/listinfo/tag and sign up. As soon as you're approved, you can start participating.
[Kapil] -
Hello,
On Fri, 10 Feb 2006, fahad saeed wrote:
> <html><div style='background-color:'><DIV class=RTE>
> <P>Thankyou all for the warm welcome.I will try to be of use.</P>
> <P>Kind Regards</P>
I somehow managed to read that but please don't send HTML mail!
I started reading your article.
Since I'm only learning the ropes with wireless, could you send me a URL explaining how the wireless ad-hoc network links are set up? Just to give you an idea of how clueless I am, I didn't get beyond "iwconfig wifi mode ad-hoc".
Regards,
Kapil.
P.S. It's great to see such close neighbours here on TAG.
[Kapil] -
Dear Fahad Saeed,
First of all you again sent HTML mail. You need to configure your webmail account to send plain text mail or you will invite the wrath of Thomas and Heather [or Kat] (who edit these mails for inclusion in LG) upon you. This may just be a preference setting but do it right away!
On Fri, 10 Feb 2006, fahad saeed wrote:
> the command you entered is not very right i suppose.The cards are
> usually configured( check it with ifconfig) as ath0 etc.The command
> must be (assuming that the card configured is 'displayed' as ath0)
> iwconfig ath0 mode ad-hoc.You can also do the same if u change the
> entry in ifcfg-ath0 to ad-hoc.
Actually, I just used the "named-interface" feature to call my wireless interface "wifi". So "iwconfig wifi mode ad-hoc" is the same as "iwconfig eth0 mode ad-hoc" on my system.
> Hope this helps.I didnt get your question exactly.Please let me know
> what exactly are you trying to do and looking so that i can be of
> more specific help.
My question was about what one does after this. Specifically, is the IP address of the interface configured statically, via a command like "ifconfig eth0 192.168.0.14", and if so, what is the netmask setting?
Just to be completely clear. I have not managed to get two laptops to communicate with each other using ad-hoc mode. Both sides say the (wireless) link is up, but I couldn't get them to send IP packets to each other. I have only managed to get an IP link when there is a common access point (hub).
The problem is that getting wireless cards to work with Linux has been such a complicated issue in the past that most HOWTOs spend a lot of time explaining how to download and compile the relevant kernel modules and load the firmware. The authors are probably exhausted by the time they get to the details of setting up networking :)
[[Ben]] - That's something I'd really like to learn to do myself. I've thought about it on occasion, and it always seemed like a doable thing - but I never got any further than that, so a description of the actual setup would be a really cool thing.
When I read about those $100 laptops that Negroponte et al are cranking out, I pictured a continent-wide wireless fabric of laptops stretching across, say, Africa - with some sort of a clever NAT and bandwidth-metering setup on each machine where any host within reach of an AP becomes a "relay station" accessible to everyone on that WAN. Yeah, it would be dead slow if there were only a few hosts within reach of APs... but the capabilities of that kind of system would be awesome.
I must say that, in this case, "the Devil is not as bad as he's painted". My experience of wireless cards under Linux has consisted of:
1) Find the source on the Net and download it;
2) Unpack the archive in /usr/src/modules;
3) 'make; make install' the module and add it to the list in "/etc/modules".
Oh, and run 'make; make install' every time you recompile the kernel. Not what I'd call terribly difficult.
[[[Martin]]] - When my Dad had a wireless card, it wasn't that simple...
It was a Netgear something-or-other USB - I don't think we ever got it to work in Linux in the end.
It's not a problem now, though, as he is using Ethernet over Power, and it's working miles better in both Windows and Linux.
[[[[Ben]]]] - Hmm, strange. Netgear and Intel are the only network hardware with which I haven't had any problems - whatever the OS. Well, different folks, different experiences...
[[[Saeed]]] - I would have to agree with Kapil here. Yes, the configuration process is sometimes extremely difficult. As Benjamin portrayed it, it seems pretty easy, and in theory it is; but when done practically, it is not that straightforward. The main problem is that the available drivers are for different chipsets. The vendors do not care about the chipsets, and change them without changing the product ID. It happened in our case with the WMP11, if I remember correctly.
Obviously, once you get the correct combination of drivers, kernel, and chipset, it is straightforward.
The lab setup that we did at UET Lahore required the cards to work in ad-hoc mode. We used the madwifi drivers. Now, as you may know, there is a beacon problem in the madwifi drivers, and the ad-hoc mode itself does not work reliably. The mode that we implemented was ad-hoc mode with cluster-head routing; in simple words, it meant that one of the PCs was configured to be in Master mode, with a bunch of PCs around it. It would have been really cool if we could have gotten it to work in 'pure ad-hoc' mode; nevertheless, it served the lab's purposes.
[[[Jason]]] - Huh. Weird. I've got an ad-hoc network set up with my PC and my sister's laptop, and it "just works". The network card in my PC is a Netgear PCI MA311, with the "Prism 2.5" chipset. ("orinoco_pci" is the name of the driver module.)
```
:r!lspci | grep -i prism
0000:00:0e.0 Network controller: Intersil Corporation Prism 2.5 Wavelan chipset (rev 01)
```
The stanza in /etc/network/interfaces is:
```
auto eth1
iface eth1 inet static
    address 10.42.42.1
    netmask 255.255.255.0
    wireless-mode ad-hoc
    wireless-essid jason
    wireless-key XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XX
```
FWIW, the commands I was using to bring up the interface before I switched to Debian were:
```
/sbin/ifconfig eth1 10.42.42.1 netmask 255.255.255.0
/usr/sbin/iwconfig eth1 mode Ad-Hoc
/usr/sbin/iwconfig eth1 essid "jason"
/usr/sbin/iwconfig eth1 key "XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XX"
```
I have dnsmasq providing DHCP and DNS. The laptop is running Windows 2000 with some off-brand PCMCIA wifi card.
[[[[Kapil]]]] - I get it. While the link layer can be set up in ad-hoc mode by the cards using some WEP/WPA mechanism, there has to be some other mechanism to fix the IP addresses and netmasks. For example, one could use static IP addresses on each of the machines, as Fahad Saeed (or Jason) has done. Of course, in a server-less network one doesn't want one machine running a DHCP server. Everything is clear.
I will try this out the next time I have access to two laptops...
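For the record, a minimal two-laptop recipe along the lines discussed above; the interface name, ESSID, channel, and addresses are all assumptions, so adjust to taste:
```
# Laptop A (run as root; the interface may be eth1, ath0, wlan0, ...):
iwconfig eth1 mode Ad-Hoc essid testnet channel 6
ifconfig eth1 192.168.0.14 netmask 255.255.255.0 up

# Laptop B:
iwconfig eth1 mode Ad-Hoc essid testnet channel 6
ifconfig eth1 192.168.0.15 netmask 255.255.255.0 up

# Then, from laptop B:
ping 192.168.0.14
```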
[[Saeed]] - First of all, I am sorry about the HTML format thing.
Dear Kapil, what is your card's chipset type, and which drivers did you use to configure it under Linux?
[[[Kapil]]] - The card is the Intel one and it works with the driver ipw2200 that is built into the default kernel along with the firmware from the Intel site.
In fact, I have had absolutely no complaints about getting the card to work with wireless access stations with/without encryption and/or authentication. But all these modes are "Managed" modes.
I once tried to get the laptop to communicate with another laptop (a Mac) in Ad-Hoc mode and was entirely unsuccessful. So when you said that you managed to get an Ad-Hoc network configured for your Lab, I thought you had used the cards in Ad-Hoc mode.
However, from your previous mail it appears that you configured one card in "Master" mode and the remaining cards in "Managed" mode, so that is somehow different. In particular, I don't think the card I have can be set up in "Master" mode at all.
[[[[Saeed]]]] - You got it right; we did use the Master-mode setup in the lab for demonstration purposes, because it was more reliable than the pure ad-hoc mode, and it served the lab's purposes. However, we did use ad-hoc mode in the lab, and it worked fine; it just wasn't reliable enough for the lab demonstrations, which have to work reliably at all times.
Talkback: Discuss this article with The Answer Gang
Kat likes to tell people she's one of the youngest people to have learned to program using punchcards on a mainframe (back in '83); but the truth is that since then, despite many hours in front of various computer screens, she's a computer user rather than a computer programmer.
When away from the keyboard, her hands have been found full of knitting needles, various pens, henna, red-hot welding tools, upholsterer's shears, and a pneumatic scaler.