...making Linux just a little more fun!
clarjon1 [clarjon1 at gmail.com]
Thu, 7 Dec 2006 08:55:31 -0500
While browsing through the archives, I noticed the discussion about that date, and wondered, why do people get so worked up about this? I mean, is it fear of the unknown?

NOTE: could this be worked into an article, I wonder... Let me know!
Anyways, so outta curiosity, I decided to set my PC's date to a few minutes before the accursed moment... So, grabbed a livecd, and set the date/time to Jan 19, 2038, at ten after three in the morning. And then booted. When the clock turned over to that date (in the Freespire livecd), it just complained about an invalid time and set the time/date to the first day of 2000. DSL didn't do anything like that; it just whined about the files being in the future, and the date was Jan 19 1969.

I tried something like this in '99, to see what the fuss over the Y2K bug was about. I KNEW by then that there would be no prob; I did it just out of curiosity. Has anyone else tried this?
Steve Brown [steve.stevebrown at gmail.com]
Thu, 7 Dec 2006 14:31:06 +0000
On 07/12/06, clarjon1 <clarjon1 at gmail.com> wrote:
> Has anyone else tried this?
No, but I remember (just) the furore about the Y2K bug - was it really so long ago?
I spent a lot of time explaining to various associates that yes, their microwave ovens and washing machines would still work; and no, they didn't actually know the time - it had to be set - and did it really matter if the date on them was wrong anyway.
It has always baffled me that people seem to believe that computers have some kind of sixth sense and are really just malicious entities in beige boxes that can sense their environment.
clarjon1 [clarjon1 at gmail.com]
Thu, 7 Dec 2006 09:47:47 -0500
On 12/7/06, Steve Brown <steve.stevebrown at gmail.com> wrote: [...]
> It has always baffled me that people seem to believe that computers
> have some kind of sixth sense and are really just malicious entities
> in beige boxes that can sense their environment.
Umm, mine definitely does...
Anyways, no harm to the data, so if my computer were to somehow magically make it safely to the date in question, I don't have to worry about my data. By the way, the calendar programs don't have much trouble with dates on or after the date in question.
Benjamin A. Okopnik [ben at linuxgazette.net]
Thu, 7 Dec 2006 19:24:56 -0500
On Thu, Dec 07, 2006 at 08:55:31AM -0500, clarjon1 wrote:
> While browsing through the archives, I noticed the discussion about
> that date, and wondered, why do people get so worked up about this? I
> mean, is it fear of the unknown?
> NOTE: could this be worked into an article, I wonder... Let me know!
As much as I like the idea of new articles, I'm afraid that there isn't much juice in this one - other than correcting the (very common) misunderstanding that you seem to share.
Here's the Y2K problem in short:
#!/bin/sh
echo "Current year (enter 2 digits only):"
read year
left=`expr 100 - $year`
echo "There are $left years left until 2000!"
(Works OK from 1900 to 1999 - although those steam-powered computers were a real pain back then - but think about what would happen if you entered the last two digits of the current year: enter '06', and it cheerfully reports that there are 94 years left until 2000.)
The problem had to do with stupid^Wprogrammers who didn't think far enough ahead. Computers, contrary to popular belief, don't sneak over and look at the dates in the paper calendar hanging on your wall; they just do what you tell them to do (instead of what you want them to do. Shame, that.) The 2038 "problem" is a bit less of a problem (although, unfortunately, the amount of dumbth in the Universe appears to be a constant - or perhaps growing); it's supposed to happen in programs (C, mostly) that used a 4-byte signed integer for date calculations. Similar to the above shell script, the scenario is that someone tries to do a date calculation and the number of seconds since the epoch overflows the max_int value. Nuclear war instantly ensues and wipes out all life on earth. Example (prepare to see all sorts of horrors!):
#!/usr/bin/perl -wl
use Date::Manip;

$signed_4_byte_int_max = 2147483647;
# Date at epoch plus above number of seconds
print &ParseDateString("epoch $signed_4_byte_int_max");
$signed_4_byte_int_max++;
print &ParseDateString("epoch $signed_4_byte_int_max");

ben at Fenrir:/tmp$ ./date_foo
2038011822:14:07
2038011822:14:08
Hmm, nothing blew up. Darn it... I wanted some fireworks!!!
The answer is, of course, that we don't use a 4-byte int any longer - and haven't for a while. It's 8 bytes now, which puts off the total annihilation for at least a couple more days.
[[[ As it turned out later, I was mistaken - on my AMD-64 laptop, everything works just fine, but on most 32-bit machines, things do indeed get quite crunchy at that date and time. Here's a test script that will actually show the effect:
#!/usr/bin/perl -w
# Created by Ben Okopnik on Wed Dec 13 17:47:56 CST 2006
# Shows the Unix "death boundary"; if the date "wraps around", you're vulnerable
use POSIX;

$ENV{TZ} = "GMT";
print ctime 2147483641 + $_ for 0 .. 10;
-- Ben ]]]
Note that the IBM PC has a 'boom' date of some time in 2116 (unsigned 32-bit int), and WinNT has one set for 2184 (64-bit int, but uses 100ns as an increment.) The Mac is supposed to be OK until 29940... be sure to set an alarm now, or you might miss it.
http://computer.howstuffworks.com/question75.htm
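[[[ A quick sanity check on two of those dates - this is just a sketch, and the 1980 (DOS-style) epoch for the PC clock is an assumption that makes the arithmetic land on 2116:

#!/usr/bin/perl -w
# Rough rollover horizons: a counter wraps after 2^bits ticks.
use strict;

sub years_until_rollover {
    my ( $bits, $tick_seconds ) = @_;
    return ( 2**$bits ) * $tick_seconds / ( 86400 * 365.25 );
}

# Unix: signed 32 bits of seconds - only 2^31 of them are in the future
printf "Unix, signed 32-bit:  1970 + %.0f years\n", years_until_rollover( 31, 1 );
# PC clock: unsigned 32 bits of seconds from a 1980 epoch
printf "PC, unsigned 32-bit:  1980 + %.0f years\n", years_until_rollover( 32, 1 );

This prints "1970 + 68 years" (i.e., 2038) and "1980 + 136 years" (i.e., 2116). ]]]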
> Anyways, so outta curiosity, I decided to set my PC's date to a few
> minutes before the accursed moment... So, grabbed a livecd, and set
> the date/time to Jan 19, 2038, at ten after three in the morning. And
> then booted. When the clock turned over to that date (in the Freespire
> livecd), it just complained about an invalid time and set the time/date
> to the first day of 2000. DSL didn't do anything like that; it just
> whined about the files being in the future, and the date was Jan 19 1969.
> I tried something like this in '99, to see what the fuss over the Y2K
> bug was about. I KNEW by then that there would be no prob; I did it just
> out of curiosity. Has anyone else tried this?
When I was down in the Caribbean, I worked as a CIO for an insurance company that used a piece of garbage^W^W^Wcustom-written software package to run all their operations. They most certainly ran into the Y2K bug, with pretty horrendous results: the first time they entered a claim for a policy that expired in 2000, the entire claims database became a time bomb. That is, as long as you didn't read that line or anything after it, the database was just fine... but the first time that someone did, the entire database became *completely unreadable*. This also meant that all the backups going back to the original entry creation date were useless (although I managed to extract from backup, via a long iterative process, everything before that point; not very useful, unfortunately, since the recent claims were the ones that mattered the most.)
I spent a week of 8-hour days trying to find a solution, since the claims department was completely dead in the water; in the end, I managed to locate one of the original developers of Clipper (this being the language and the DB format that had been used for the app), and that genial soul very nicely gave me the actual byte-by-byte layout for the DB. I then spent some time with a calculator and a binary editor (Norton's DiskEdit, IIRC), and managed to locate and empty that field - at which point the damn thing came back to life.
I immediately fixed the entire codebase to use "required_date-$some_large_fixed_number" in any field that required a date, and added it back in for any date-reading routines; then, I told these people in the strongest terms that they needed to get rid of that entire thing - database and all - and replace it with an actual, tested piece of software: something that hadn't been written in the age of dinosaurs. When I left that position shortly thereafter, they were definitely scrambling for a replacement.
Lew Pitcher [lpitcher at sympatico.ca]
Thu, 7 Dec 2006 19:50:22 -0500
On Thursday 07 December 2006 19:24, Benjamin A. Okopnik wrote:
> On Thu, Dec 07, 2006 at 08:55:31AM -0500, clarjon1 wrote:
> > While browsing through the archives, I noticed the discussion about
> > that date, and wondered, why do people get so worked up about this? I
> > mean, is it fear of the unknown?
[snip]
> (Works OK from 1900 to 1999 - although those steam-powered computers
> were a real pain back then - but think about what would happen if you
> entered the last two digits of the current year.)
>
> The problem had to do with stupid^Wprogrammers who didn't think far
> enough ahead.
Being one of those stupid^Wprogrammers, I take exception (mildly) to your characterization. The Y2K problem was very real to us (I write code for banking applications), with a very real and understandable history.
Think back to the beginning of MSDOS, and the famous (pseudo)quote that "640K ought to be enough for anybody". Now, think back further, to a time when even the biggest, most expensive computers (the ones that banks and insurance companies used in the 1970s, for instance) rarely had more than 128K of memory, and less than a gigabyte of hard disk. Think back to those corporate penny pinchers who determined that an additional 32K of memory would cost more than a year's salary for a programmer.
Now let's do some math: 2 bytes of additional space for the century digits of the date multiplied by 1 million customer records that each carry one date = 2 million bytes of additional space required to store the century.
That's roughly equivalent to the yearly salary of 30 programmers.
In 1970, the trade off of spending the cost of the memory or paying a programmer for 30 years was obvious: you pay the programmer, and to heck with the memory. You'll get 30 years worth of work from the programmer, but the memory is an unnecessary expense.
And so, you wound up with just the bare minimum data being kept, and a 30-year time bomb being set.
Of course, with each passing year, the time bomb gets one year closer to exploding. But then, in 1975, it's "we have 25 years to fix this". In 1980, "we have 20 years - a whole generation - to fix this". In 1990, "we have 10 years to fix this. It's not worth it. There are more important things to spend our money on."
Finally, in 1995, the bean counters say "What about 5 year mortgages?" And then the panic sets in.
Remember, institutional computing is roughly 40 years old. We deal with programs that are almost as old as we are (usually older ;-) ) on an ongoing basis. You don't throw away costly code that works - you continue to use it until it breaks. And Y2K broke a lot of code. Or would have, if we didn't fix it in time.
> Computers, contrary to popular belief, don't sneak over
> and look at the dates in the paper calendar hanging on your wall; they
> just do what you tell them to do (instead of what you want them to do.
> Shame, that.)
Yah. Pity that. At least it keeps us professionals employed <grin>
> The 2038 "problem" is a bit less of a problem (although, unfortunately,
> the amount of dumbth in the Universe appears to be a constant - or
> perhaps growing); it's supposed to happen in programs (C, mostly) that
> used a 4-byte signed integer for date calculations.
Similar to, but not the same as, the Y2K problem, where we used 2 bytes to carry the date. Of course, we had our overflow in 2000, while Unixish systems will overflow in 2038.
Hopefully, by 2038, we will have moved onward to 8-byte integers for date calculations, and we won't get an overflow until well after the heat-death of the universe. We've got 30 years to fix this. Does that situation ring a bell? <grin>
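[[[ A rough check of that claim, assuming a signed 64-bit count of seconds:

#!/usr/bin/perl -w
# Horizon of a signed 64-bit seconds counter, in years past the epoch
use strict;

my $max_signed_64 = 2**63 - 1;
printf "%.3g years\n", $max_signed_64 / ( 86400 * 365.25 );

That prints about 2.92e+11 - some 292 billion years, or over twenty times the current age of the universe; not quite the heat death, but close enough for scheduling purposes. ]]]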
[snip]
> When I was down in the Caribbean, I worked as a CIO for an insurance
> company that used a piece of garbage^W^W^Wcustom-written software
> package to run all their operations. They most certainly ran into the
> Y2K bug, with pretty horrendous results: the first time they entered a
> claim for a policy that expired in 2000, the entire claims database
> became a time bomb.
In early January 2000, I overheard two office workers discussing the "Y2K fallout". It appears that their PC-based accounting software stopped being useful, because it thought that the current date was sometime in the 1980s. They were running (limping, actually) by faking the dates manually on all the reports, and keeping accounts adjusted by hand. I don't know if they got it fixed or not, but obviously they didn't originally think that they would be affected by "the Y2K bug". OTOH, my employer had only a minor hiccup or two, and no systems failed or needed manual workarounds. But then again, we'd spent about 6 years fixing the Y2K problems in order to make that transition go as smoothly as it did.
[snip]
I just hope that this generation of managers learned something from Y2K, and won't make the same bean counter "it costs too much" mistake. Next time it might just cost too much, and the lights may just go out for a while.
Benjamin A. Okopnik [ben at linuxgazette.net]
Thu, 7 Dec 2006 20:53:27 -0500
On Thu, Dec 07, 2006 at 07:50:22PM -0500, Lew Pitcher wrote:
> On Thursday 07 December 2006 19:24, Benjamin A. Okopnik wrote:
> >
> > The problem had to do with stupid^Wprogrammers who didn't think far
> > enough ahead.
>
> Being one of those stupid^Wprogrammers, I take exception (mildly) to your
> characterization.
Lew, since you appear to be desperately clawing for the mantle of a stupid programmer, I won't deny you the pleasure. Otherwise, I'd point out your patently false misparsing of what I wrote - a tactic commonly practiced by trolls (but you'd never do that, right?)
> The Y2K problem was very real to us (I write code for
> banking applications), with a very real and understandable history.
Whether it was real or not has nothing to do with the shortsightedness of the people involved. Do you have anything that contradicts this point?
> Now let's do some math:
> 2 bytes of additional space for the century digits of the date multiplied by 1
> million customer records that each carry one date = 2 million bytes of
> additional space required to store the century.
>
> That's roughly equivalent to the yearly salary of 30 programmers.
No, instead, let's do some math this way: the "corporate penny pinchers" ponied up the amount of money that they were told they were required to spend for functionality X. However, at some point, some short-sighted programmers said "Hey! We could save disk space/memory if we chopped off the century!" - and the penny-pinchers could now happily "unjustify" spending that money. Otherwise, the math would go like this: "Gosh, we can only fit 500,000 customers into the resources that we have. Guess we'd better buy more if we want to grow!"
The "reasons" you present above are similar to saying "Hey! We could save the money we spend on shoes, socks, and even pants by chopping off everyone's legs below the knee!" The fact that the statement is true does not make it advisable - in either case.
> In 1970, the trade off of spending the cost of the memory /or/ paying a
> programmer for 30 years was obvious: you pay the programmer, and to heck with
> the memory. You'll get 30 years worth of work from the programmer, but the
> memory is an unnecessary expense.
"Sure, we've made some compromises in the safety of this car - sure, it'll explode and kill you at some point in the future - but it's significantly cheaper!"
I assume that you can see the fallacy in the above argument. Is there a reason - other than your personal involvement in the problem - that you can't see it in the similar argument that you're making?
> Remember, institutional computing is roughly 40 years old. We deal with
> programs that are almost as old as we are (usually older ;-) ) on an ongoing
> basis. You don't throw away costly code that works - you continue to use it
> until it breaks. And Y2K broke a lot of code. Or would have, if we didn't fix
> it in time.
In fact, most of it didn't get fixed - the US government, for one, has well over a billion lines of COBOL source for which there aren't nearly enough programmers to even attempt a significant percentage of fixes. In most cases, people coped by a variety of methods; since they expected it, the impact was less. But a thought experiment tells me that people died as a result, and that a hell of a lot of less-important things were severely thrown out of whack.
> > Computers, contrary to popular belief, don't sneak over
> > and look at the dates in the paper calendar hanging on your wall; they
> > just do what you tell them to do (instead of what you want them to do.
> > Shame, that.)
>
> Yah. Pity that. At least it keeps us professionals employed <grin>
[raised eyebrows] You have a point. There'd be no need for programmers if DWIM really worked.
> Hopefully, by 2038, we will have moved onward to 8-byte integers for date
> calculations, and we won't get an overflow until well after the heat-death of
> the universe. We've got 30 years to fix this. Does that situation ring a
> bell? <grin>
Sure. Note that I explicitly didn't say that programmers were any more far-sighted today than in the past.
> [snip]
>
> I just hope that this generation of managers learned something from Y2K, and
> won't make the same bean counter "it costs too much" mistake. Next time it
> might just cost too much, and the lights may just go out for a while.
And that is one chorus that I'll happily sing alongside of you.
...however, today's horoscope says "$ben{cynicism} = $INT_MAX". Therefore, I don't expect it to happen. :\
Lew Pitcher [lpitcher at sympatico.ca]
Thu, 7 Dec 2006 22:34:15 -0500
On Thursday 07 December 2006 20:53, Benjamin A. Okopnik wrote:
> Lew, since you appear to be desperately clawing for the mantle of a
> stupid programmer,
That's a strange characterization, to be sure, based solely on your own typography.
> I won't deny you the pleasure. Otherwise, I'd point
> out your patently false misparsing of what I wrote - a tactic commonly
> practiced by trolls (but you'd never do that, right?)
Hmmm... and I guess that an ad-hominum attack is a tactic commonly practiced by those who cannot summon real facts to support their arguments (but you'd never do that, right?)
> > The Y2K problem was very real to us (I write code for
> > banking applications), with a very real and understandable history.
>
> Whether it was real or not has nothing to do with the shortsightedness
> of the people involved. Do you have anything that contradicts this
> point?
Unfortunately, that wasn't the point you made. Your statement was about programmers, not "the people involved". Many of us programmers pointed out the idiocy of these policies frequently, and received (and still do receive) exactly the excuses I wrote about.
[snip]
> No, instead, let's do some math this way: the "corporate penny
> pinchers" ponied up the amount of money that they were told they were
> required to spend for functionality X. However, at some point, some
> short-sighted programmers said "Hey! We could save disk space/memory if
> we chopped off the century!"
Again, your characterization is false to the facts. Having been there, and made the arguments to use 4-digit dates, and having got the response that "the user isn't willing to pay the additional cost for something that works with 2 digit dates", I can't agree that the problem lay with "short-sighted programmers".
> - and the penny-pinchers could now happily
> "unjustify" spending that money. Otherwise, the math would go like this:
> "Gosh, we can only fit 500,000 customers into the resources that we
> have. Guess we'd better buy more if we want to grow!"
Ever buy in bulk because you might need it sometime? On a limited budget? With no guarantee that you'll actually require the bulk?
Not that I agree with the arguments, but that's not how corporate accountants see money. They see it as something to be spent sparingly; just enough to cover immediate needs and no more.
> The "reasons" you present above are similar to saying "Hey! We could > save the money we spend on shoes, socks, and even pants by chopping off > everyone's legs below the knee!" The fact that the statement is true > does not make it advisable - in either case.
Hyperbole doesn't suit you. Your example would have been more believable if it were less over the top.
[snip]
> In fact, most of it didn't get fixed - the US government, for one, has
> well over a billion lines of COBOL source for which there aren't nearly
> enough programmers to even attempt a significant percentage of fixes. In
> most cases, people coped by a variety of methods; since they expected
> it, the impact was less. But a thought experiment tells me that people
> died as a result, and that a hell of a lot of less-important things were
> severely thrown out of whack.
OK, so the US government still has long outstanding issues. But, did your bank fail? (It could have, had we not fixed the problem there.) Did your mortgage suddenly inflate?
> [raised eyebrows] You have a point. There'd be no need for programmers
> if DWIM really worked.
Sure there would. Who do you think (and you should know) makes DWIM work? Yes, 90% of the programming would morph into DWIM programming, but there still would be a remainder that can't be solved with DWIM.
> > > The 2038 "problem" is a bit less of a problem (although, unfortunately,
> > > the amount of dumbth in the Universe appears to be a constant - or
> > > perhaps growing); it's supposed to happen in programs (C, mostly) that
> > > used a 4-byte signed integer for date calculations.
[snip]
> > We've got 30 years to fix this. Does that situation ring a
> > bell? <grin>
>
> Sure. Note that I explicitly didn't say that programmers were any more
> far-sighted today than in the past.
In the Open Source world, this gets fixed fast. In the closed source world, it doesn't. Corporations love their cost/benefit analysis, and right now, the costs outweigh the benefits. I'm of the opinion that Open Source is the right way to go, not just because of the social and technical benefits, but because it bypasses the cost/benefit trap that prevents so many good things from happening.
> > [snip]
> >
> > I just hope that this generation of managers learned something from
> > Y2K, and won't make the same bean counter "it costs too much" mistake.
> > Next time it might just cost too much, and the lights may just go out
> > for a while.
>
> And that is one chorus that I'll happily sing alongside of you.
>
> ...however, today's horoscope says "$ben{cynicism} = $INT_MAX".
> Therefore, I don't expect it to happen. :\
Neither do I. More is the pity.
Benjamin A. Okopnik [ben at linuxgazette.net]
Fri, 8 Dec 2006 00:06:24 -0500
On Thu, Dec 07, 2006 at 10:34:15PM -0500, Lew Pitcher wrote:
> That's a strange characterization, to be sure, based solely on your own
> typography.
Why, no. My choice of symbols was clear to anyone familiar with standard email usage: '^W' means "delete the last word", and implies "I'm going to change the original way that I was going to express this (although I'm going to note my previous approach.)" The only rational way to interpret what I wrote was that I was going to talk about someone - I didn't say who [1] - being stupid, but changed my mind and went on to talk about programmers who didn't think far enough ahead.
[1] This doesn't exclude programmers from being stupid, by any means - but neither does it make them the direct target of that adjective.
> > I won't deny you the pleasure. Otherwise, I'd point
> > out your patently false misparsing of what I wrote - a tactic commonly
> > practiced by trolls (but you'd never do that, right?)
>
> Hmmm... and I guess that an ad-hominum attack is a tactic commonly practiced
> by those who cannot summon real facts to support their arguments (but you'd
> never do that, right?)
I've never used an 'ad-hominum attack' in my life - although I may have, at some time or another (nowhere in this exchange, for certain; I just don't see anything worth an actual argument here) used an _ad hominem_ argument. Your revision of what I wrote, though, is an extant - rather than imagined - red herring. If you didn't understand what I wrote and jumped to a wrong conclusion, you have only yourself to blame; if you did understand it and decided to distort it for the purpose of creating an argument and finding offense where none existed - [shrug] feel free to chase it to your heart's content; I'm getting mildly bored with it already.
In fact, I'm done with it now.
Rick Moen [rick at linuxmafia.com]
Fri, 8 Dec 2006 13:09:56 -0800
Quoting Benjamin A. Okopnik (ben at linuxgazette.net):
> I've never used an 'ad-hominum attack' in my life - although I may
> have, at some time or another (nowhere in this exchange, for certain;
> I just don't see anything worth an actual argument here) used an _ad
> hominem_ argument.
In fact (he says, attempting to gracefully exit from Lew's sudden bout of Silly Season), the term "ad hominem" -- in full form, "argumentum ad hominem" -- is frequently misused in debased casual form to mean "personal attack", suggesting that what's wrong with it is that it's mean and unpleasant.
Which completely misses the point of the true "argumentum ad hominem" concept.
It's a notion from formal rhetoric, being the name of one of the classic logical fallacies: It denotes any attempt to cast doubt on a speaker's point by deprecating the speaker's personal qualities _when those aren't relevant_ to the particular point under discussion.
Thus, impugning someone's personal characteristics isn't necessarily argumentum ad hominem. It qualifies as such a fallacy only if those qualities are, in fact, irrelevant to the discussion.
Consider: Let's say I assert to the crowd that you're a Commie. (Yeah, Mike, I know.) Is that argumentum ad hominem? It depends. If you'd been making some argument to the crowd and asserted that the reason they should believe what you're saying is that you're a conservative Republican, then a showing that you're actually a Communist is a relevant rebuttal, and in no way argumentum ad hominem.
Thus, that fallacy is a sub-category of non-sequitur argument.
Neil Youngman [neil.youngman at youngman.org.uk]
Fri, 8 Dec 2006 21:55:55 +0000
On or around Friday 08 December 2006 01:53, Benjamin A. Okopnik reorganised a bunch of electrons to form the message:
> Lew, since you appear to be desperately clawing for the mantle of a
> stupid programmer, I won't deny you the pleasure. Otherwise, I'd point
> out your patently false misparsing of what I wrote - a tactic commonly
> practiced by trolls (but you'd never do that, right?)
I'd parse it the same way as Lew, but hey I'm a programmer ;-)
> The "reasons" you present above are similar to saying "Hey! We could > save the money we spend on shoes, socks, and even pants by chopping off > everyone's legs below the knee!" The fact that the statement is true > does not make it advisable - in either case.
I'd say they were closer to "Hey look, we can make our cars ten times as expensive as Fords by engineering them to last 30 years" (a 1970s British built Ford would be lucky to last 10 years).
Engineering of all kinds is a trade-off, and maybe it was short-sighted to expect a 1970s COBOL program to have been superseded by 2000; but, unlike your example, it's not obviously unreasonable, except with hindsight.
> > In 1970, the trade off of spending the cost of the memory /or/ paying a
> > programmer for 30 years was obvious: you pay the programmer, and to heck
> > with the memory. You'll get 30 years worth of work from the programmer,
> > but the memory is an unnecessary expense.
>
> "Sure, we've made some compromises in the safety of this car - sure,
> it'll explode and kill you at some point in the future - but it's
> significantly cheaper!"
Show me a car without safety compromises and I'll show you something that makes a main battle tank look unsafe.
> Sure. Note that I explicitly didn't say that programmers were any more
> far-sighted today than in the past.
Nor managers, bean counters, politicians, ...
> > I just hope that this generation of managers learned something from Y2K,
> > and won't make the same bean counter "it costs too much" mistake. Next
> > time it might just cost too much, and the lights may just go out for a
> > while.
>
> And that is one chorus that I'll happily sing alongside of you.
>
> ...however, today's horoscope says "$ben{cynicism} = $INT_MAX".
> Therefore, I don't expect it to happen. :\
A significant percentage of this generation have learnt that "the lights didn't go out, so obviously there was no need to spend all that money to keep them on".
If you don't read the risks list (http://catless.ncl.ac.uk/Risks/) you really should.
Neil
Neil Youngman [ny at youngman.org.uk]
Fri, 8 Dec 2006 23:30:12 +0000
On or around Thursday 07 December 2006 13:55, clarjon1 reorganised a bunch of electrons to form the message:
> Anyways, so outta curiosity, I decided to set my PC's date to a few
> minutes before the accursed moment... So, grabbed a livecd, and set
> the date/time to Jan 19, 2038, at ten after three in the morning. And
> then booted. When the clock turned over to that date (in the Freespire
> livecd), it just complained about an invalid time and set the time/date
> to the first day of 2000. DSL didn't do anything like that; it just
> whined about the files being in the future, and the date was Jan 19 1969.
> I tried something like this in '99, to see what the fuss over the Y2K
> bug was about. I KNEW by then that there would be no prob; I did it just
> out of curiosity. Has anyone else tried this?
If you want an article about this, how about one concentrating on the limitations of different test strategies? As a minimum, it should cover whether a test for Y2K bugs, based on a bunch of OS utilities in an operating system whose native date format doesn't roll over until 2038, is a sufficient basis for declaring that all the world's computer systems - including those running aircraft, nuclear power plants, financial systems, etc., programmed in a range of different languages, on a range of different operating systems, with a range of potential date formats - have no significant issues with dates rolling over in 2000.
Benjamin A. Okopnik [ben at linuxgazette.net]
Fri, 8 Dec 2006 19:30:17 -0500
On Fri, Dec 08, 2006 at 09:55:55PM +0000, Neil Youngman wrote:
> On or around Friday 08 December 2006 01:53, Benjamin A. Okopnik reorganised a
> bunch of electrons to form the message:
> >
> > "Sure, we've made some compromises in the safety of this car - sure,
> > it'll explode and kill you at some point in the future - but it's
> > significantly cheaper!"
>
> Show me a car without safety compromises and I'll show you something that
> makes a main battle tank look unsafe.
Battle tanks are unsafe, inherently and by design; they're probably the most unsafe vehicle that humans have ever built. Their mission is to provide a mobile platform that delivers maximum damage to the enemy in the shortest amount of time from the longest possible distance. "Safety" is no more than an incidental concern in that: an error of a second or less will expose a tank to the other side's fire - and no tank ever built will stand up to an APDS (Sabot) round moving at a mile per second. So, you're right: there's no point in comparing car safety against tank safety.
On the other hand, in the 1970s, the NHTSA (National Highway Traffic Safety Administration) had developed a car which would keep the occupants safe in a 60mph crash (no airbags; the results came from good seatbelts and well-designed crumple zones, IIRC). It also averaged better than 25mpg, which was pretty darn good for a small car at that time. The last of these prototypes was destroyed in the late 70s/early 80s because the then-head of the NHTSA announced, speaking as God from on high, that Big Cars Were Safer - research and proof to the contrary notwithstanding.
Clearly, there's no such thing as "without safety compromises" - but "guaranteed to explode at some point" is far beyond any rational compromise.
> > ...however, today's horoscope says "$ben{cynicism} = $INT_MAX".
> > Therefore, I don't expect it to happen. :\
>
> A significant percentage of this generation have learnt that "the lights
> didn't go out, so obviously there was no need to spend all that money to keep
> them on".
>
> If you don't read the risks list (http://catless.ncl.ac.uk/Risks/) you really
> should.
[Nod] I've been a subscriber for a number of years. Lots of good brainpower on that list, and an excellent resource for anyone who wants to learn to think well in terms of security, safety planning, and risk estimation. (PGN can be funny as hell, too - particularly if you appreciate subtle and sophisticated humor.)
Benjamin A. Okopnik [ben at linuxgazette.net]
Fri, 8 Dec 2006 19:50:44 -0500
On Fri, Dec 08, 2006 at 01:09:56PM -0800, Rick Moen wrote:
> Consider: Let's say I assert to the crowd that you're a Commie.
> (Yeah, Mike, I know.) Is that argumentum ad hominem? It depends. If
> you'd been making some argument to the crowd and asserted that the
> reason they should believe what you're saying is that you're a
> conservative Republican, then a showing that you're actually a
> Communist is a relevant rebuttal, and in no way argumentum ad hominem.
Aaaaargh! You compared me to a Republican - you, you, you... ad-hominemiser! (ad-hominemeister? Ad-hominemaker? Ad-homeboy?)
Can I invoke Godwin's Law yet, or is that stretching things too far?
> Thus, that fallacy is a sub-category of non-sequitur argument.
Sample, 1 each, provided immediately above the previous sentence.
Jason Creighton [jcreigh at gmail.com]
Fri, 8 Dec 2006 19:48:21 -0700
On Thu, Dec 07, 2006 at 08:55:31AM -0500, clarjon1 wrote:
> While browsing through the archives, I noticed the discussion about
> that date, and wondered, why do people get so worked up about this? I
> mean, is it fear of the unknown?
> NOTE: could this be worked into an article, I wonder... Let me know!
>
> Anyways, so outta curiosity, I decided to set my PC's date to a few
> minutes before the accursed moment... So, grabbed a livecd, and set
> the date/time to Jan 19, 2038, at ten after three in the morning. And
> then booted. When the clock turned over to that date (in the Freespire
> livecd), it just complained about an invalid time and set the time/date
> to the first day of 2000. DSL didn't do anything like that; it just
> whined about the files being in the future, and the date was Jan 19 1969.
> I tried something like this in '99, to see what the fuss over the Y2K
> bug was about. I KNEW by then that there would be no prob; I did it just
> out of curiosity. Has anyone else tried this?
I don't really see the 2038 thing as a problem...hopefully, everybody will be using 64-bit processors with (presumably) a 64-bit time_t by then. However, systems always stay in place longer than you want, so I expect that somebody is going to get burned.
I never really understood the whole Y2K thing in the first place. It seems that if you've got two bytes to store a year, you could just store it as an unsigned 16-bit integer (assuming 8-bit bytes) and be fine for the next 65536 years, give or take. Heck, even with one byte, you could still store 1900-2155.
Or even if you've got some crazy number of bits per byte, it seems to me that you can do better than just storing two digits in whatever character set you happen to be using. But all of this is so obvious, there must be some compelling reason why it wouldn't work, otherwise it would have been done.
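[[[ A minimal sketch of the one-byte scheme Jason describes - the year stored as an offset from 1900, which covers 1900-2155; the field and offset here are illustrative, not from any real system:

#!/usr/bin/perl -w
use strict;

my $year   = 2038;
my $stored = pack 'C', $year - 1900;      # one byte: years since 1900
my $back   = 1900 + unpack 'C', $stored;  # decode it again
printf "stored in %d byte, read back as %d\n", length($stored), $back;

The catch, as the replies below point out, is that this is a binary field rather than two printable digits. ]]]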
Benjamin A. Okopnik [ben at linuxgazette.net]
Fri, 8 Dec 2006 22:17:33 -0500
On Fri, Dec 08, 2006 at 07:48:21PM -0700, Jason Creighton wrote:
> Or even if you've got some crazy number of bits per byte, it seems to me
> that you can do better than just storing two digits in whatever
> character set you happen to be using. But all of this is so obvious,
> there must be some compelling reason why it wouldn't work, otherwise it
> would have been done.
It's not that storing two digits as a binary number instead of (say) an ASCII representation wouldn't work; the problem is that it would require a conversion routine every time you wanted to store it, and a reverse conversion routine every time you wanted to display it. Programmers tend to rebel against that kind of thing.
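[[[ A sketch of the trade-off Ben describes, under the assumption of a one-byte binary year field (purely illustrative):

#!/usr/bin/perl -w
use strict;

# Storing: printable digits have to be converted to one binary byte...
sub store_year { pack 'C', $_[0] }

# ...and displaying converts the binary byte back to printable digits.
sub display_year { sprintf '%02d', unpack 'C', $_[0] }

my $field = store_year(99);
printf "on disk: %d byte; displayed: %s\n", length($field), display_year($field);

Two ASCII digits, by contrast, can be read, stored, and printed as-is - no conversion code to write, test, or get wrong. ]]]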
Lew Pitcher [lpitcher at sympatico.ca]
Fri, 8 Dec 2006 22:20:14 -0500
On Friday 08 December 2006 21:48, Jason Creighton wrote: [snip]
> Or even if you've got some crazy number of bits per byte, it seems to me
> that you can do better than just storing two digits in whatever
> character set you happen to be using. But all of this is so obvious,
> there must be some compelling reason why it wouldn't work, otherwise it
> would have been done.
You are right. There is (was) no reason why any of those ideas wouldn't work.
The problem mostly wasn't how to store a 4 digit year in place of a 2 digit year. Mostly, the problem was the sheer number of changes that needed to be made to use any one of those schemes. Yes, you can take one program and make a simple(ish) change like that. Multiply that by millions of programs and billions of data stores, and you've got a really big job.

And, yes, it was a mistake to have used a 2 digit year in the first place. It doesn't matter who made the mistake; the fact that it went uncorrected for so long made the "Y2K fix" even more expensive than it needed to be.
Lew Pitcher [lpitcher at sympatico.ca]
Fri, 8 Dec 2006 22:45:49 -0500
On Friday 08 December 2006 22:17, Benjamin A. Okopnik wrote:
> It's not that storing two digits as a binary number instead of (say) an
> ASCII representation wouldn't work; the problem is that it would require
> a conversion routine every time you wanted to store it, and a reverse
> conversion routine every time you wanted to display it. Programmers tend
> to rebel against that kind of thing.
Much of the time, we would have been happy for a display character representation (EBCDIC, of course) of the 2 digit date, because that would give us the space to put up a 3 digit date in BCD (COBOL PIC S9(3) COMP-3 aka "packed decimal") or a 4 digit date (COBOL PIC 9(4) COMP aka "binary halfword") in the same space. When you have a million record masterfile with a fixed layout, or fixed-layout records being passed between programs, any change that retains the layout structure is the preferred change. The conversions between binary or packed decimal and display were built into most of the languages we had to work with (COBOL, mostly, but some SAS and EASYTRIEVE, and even SQL).
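[[[ For readers who've never met packed decimal: it stores two decimal digits per byte, with a sign nibble at the end. Here's a sketch of how a 3-digit value fits into the same two bytes as a 2-digit display year - the value is illustrative:

#!/usr/bin/perl -w
use strict;

# Pack 138 (years since 1900, i.e. 2038) COMP-3 style: the digits 1, 3, 8
# followed by the 'C' (positive) sign nibble -> bytes 0x13 0x8C
my $value  = 138;
my $packed = pack 'H4', sprintf '%03dC', $value;
printf "bytes: %s (%d bytes)\n",
    join( ' ', map { sprintf '%02X', $_ } unpack 'C*', $packed ),
    length $packed;

That prints "bytes: 13 8C (2 bytes)" - a 3-digit year in the space of a 2-digit display year. ]]]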
The difficulties really built up in two places: accommodating "special" techniques (like using a single byte to hold the 2 digit year in unsigned BCD), and in the interoperability testing. Once you've made the changes in a couple of hundred programs (and that's the size of some of the projects I worked on), you had to make sure that the output from one program still worked as input to the next. And then you had to write and test all the data conversion programs, because all that data was stored with 2 digit dates, and the stored data had to be converted to 4 digit dates.
It got even more complicated when dates weren't identified as dates. Typically we'd find
    05  CQA-FIELD-7             PIC XX.

and have to guess from the logic and the inputs whether or not the defined field was a date or not. Sometimes, you'd get

    05  CQA-FIELD-7.
        07  CQA-FIELD-7A        PIC X.
        07  CQA-FIELD-7B        PIC X.

and elsewhere

    IF CQA-FIELD-7A IS GREATER THAN '5'
    THEN
        PERFORM F777-SOME-LOGIC
    ELSE
        PERFORM F778-SOME-OTHER-LOGIC

and have to determine that this logic was a 'date correction' process.
Try that with code that's not been documented (but has been changed) in 20 years. Not easy. Many 3rd-party program analysis tools were bought (and vendors made lots of money) just to track down where and how dates were used, even through such obfuscated code.
And, yes, we who had to fix such code did not appreciate the brilliance of the programmers who wrote such code.
Neil Youngman [ny at youngman.org.uk]
Sat, 9 Dec 2006 09:28:46 +0000
On or around Saturday 09 December 2006 00:30, Benjamin A. Okopnik reorganised a bunch of electrons to form the message:
> Clearly, there's no such thing as "without safety compromises" - but
> "guaranteed to explode at some point" is far beyond any rational
> compromise.
Equally (to take us back to the original point), accepting that the cost of a multi-million-pound system built in 1970 with an expected life span of 10 years can't be doubled to handle the slight chance that some of the code might still be around in 40 years does seem to me to be an entirely rational compromise. Your argument seems to be that the two are more or less equivalent. I disagree. The Y2K situation seems closer to expecting a 1970s car to still be running safely in 2000 without much more work than occasionally changing the oil and replacing the tyres.
Benjamin A. Okopnik [ben at linuxgazette.net]
Sat, 9 Dec 2006 10:20:18 -0500
On Sat, Dec 09, 2006 at 09:28:46AM +0000, Neil Youngman wrote:
> Equally (to take us back to the original point), accepting that the cost of a
> multi-million-pound system built in 1970 with an expected life span of 10 years
                                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Can you actually produce some evidence to back that up, Neil? I'd be very interested in seeing it, because it completely contradicts my experience and understanding.
> The Y2K situation seems closer to expecting a 1970s car to still be
> running safely in 2000 without much more work than occasionally changing the
> oil and replacing the tyres.
That analogy fails immediately, I'm afraid. Software doesn't wear out from usage; cars do.
Neil Youngman [ny at youngman.org.uk]
Sat, 9 Dec 2006 16:41:50 +0000
On or around Saturday 09 December 2006 15:20, Benjamin A. Okopnik reorganised a bunch of electrons to form the message:
> Can you actually produce some evidence to back that up, Neil? I'd be
> very interested in seeing it, because it completely contradicts my
> experience and understanding.
That was based on my memory of Lew's arguments. At the time I couldn't find the exact email and looking at it now, it seems that what he wrote and what I remembered don't match that closely. The basic point remains that for software that wasn't expected to be running in 2000, the extra cost of storage for 4 digit dates was sufficiently high that 2 digit dates were a reasonable compromise.
Does your experience tell you that people expected these systems to still be running in 2000? My understanding is that people didn't expect these programs to still be around, but of course I wasn't in computing back then and my impressions may be wrong.
> > The Y2K situation seems closer to expecting a 1970s car to still be
> > running safely in 2000 without much more work than occasionally changing
> > the oil and replacing the tyres.
>
> That analogy fails immediately, I'm afraid. Software doesn't wear out
> from usage; cars do.
Only if you take the analogy very literally. Cars and software both have to be used and maintained in accordance with their design limitations. With cars you replace parts that have worn out or proven defective. With software you replace parts where the design assumptions prove faulty or requirements change or you find a bug.
Benjamin A. Okopnik [ben at linuxgazette.net]
Sat, 9 Dec 2006 18:22:03 -0500
On Fri, Dec 08, 2006 at 10:45:49PM -0500, Lew Pitcher wrote:
> Much of the time, we would have been happy for a display character
> representation (EBCDIC, of course) of the 2 digit date, because that
[laugh] Y'know, I hesitated over that 'ASCII' for a second - I was going to write 'EBCDIC', but decided that most people wouldn't know what the hell that was.
> would give us the space to put up a 3 digit date in BCD (COBOL PIC S9(3)
> COMP-3 aka "packed decimal") or a 4 digit date (COBOL PIC 9(4) COMP aka
> "binary halfword") in the same space. When you have a million record
> masterfile with a fixed layout, or fixed-layout records being passed between
> programs, any change that retains the layout structure is the preferred
> change.
Yep. It's an example of the "preserve source data/format as long as possible" rule.
> The difficulties really built up in two places: accommodating "special"
> techniques (like using a single byte to hold the 2 digit year in unsigned
> BCD), and in the interoperability testing. Once you've made the changes in a
> couple of hundred programs (and that's the size of some of the projects /I/
> worked on), you had to make sure that the output from one program still
> worked as input to the next. And then you had to write and test all the data
> conversion programs, because all that data was /stored/ with 2 digit dates,
> and the stored data had to be converted to 4 digit dates.
>
> It got even more complicated when dates weren't identified /as/ dates.
> Typically we'd find
>
>     05  CQA-FIELD-7             PIC XX.
>
> and have to guess from the logic and the inputs whether or not the defined
> field was a date or not. Sometimes, you'd get
>
>     05  CQA-FIELD-7.
>         07  CQA-FIELD-7A        PIC X.
>         07  CQA-FIELD-7B        PIC X.
>
> and elsewhere
>
>     IF CQA-FIELD-7A IS GREATER THAN '5'
>     THEN
>         PERFORM F777-SOME-LOGIC
>     ELSE
>         PERFORM F778-SOME-OTHER-LOGIC
>
> and have to determine that this logic was a 'date correction' process.
Chasing that kind of thing down is the job from hell. Sure, it wouldn't be that bad if you saw something as obvious as a bunch of labels for time units... but if the date isn't stored in an obvious, human-readable representation or a standard date structure, even the subroutines for it aren't going to be much of a clue. I've had to do that kind of backtracing with that insurance company package - true, there weren't hundreds of programs involved (only several dozen), but it was a huge pain nonetheless.
> Try /that/ with code that's not been documented (but /has/ been changed) in 20
> years. Not easy.
That being exactly the situation that I was dealing with. The previous guy they had there made very, very crude fixes - he'd rip out hunks of code that he didn't understand and put in a few lines that did what he wanted... and when he found out that what he ripped out broke something else, he'd rip out another hunk of code, and... lather, rinse, repeat. Better yet, the people who coded the thing originally thought that "$a", "$b", and "$c" (and maybe, in extreme emergencies, "$d") comprised the entire set of allowable variable names. I suspect that they lived in dread of the day when they would be forced to use a variable name like "$index" or "$time" - their souls would be damned to eternal torment thereby.
Oh, and - in their world - only wimps used comments. They were true, manly men, and far too proud to do so.
> Many 3rd-party program analysis tools were bought (and
> vendors made /lots/ of money) just to track down where and how dates were
> used, even through such obfuscated code.
>
> And, yes, we who had to fix such code did not appreciate the brilliance of
> the programmers who /wrote/ such code.
Yeah. I have had moments when I wanted to beat certain programmers to death with a copy of "Literate Programming" (if it wasn't heavy enough by itself, I'd have happily added Knuth's magnum opus.) More so in the past than now, though: these days, I originate more code and fix less.
Benjamin A. Okopnik [ben at linuxgazette.net]
Sat, 9 Dec 2006 19:20:52 -0500
On Sat, Dec 09, 2006 at 04:41:50PM +0000, Neil Youngman wrote:
> On or around Saturday 09 December 2006 15:20, Benjamin A. Okopnik reorganised
> a bunch of electrons to form the message:
> > On Sat, Dec 09, 2006 at 09:28:46AM +0000, Neil Youngman wrote:
[ snip ]
> > > system built in 1970 with an expected life span of 10 years
> > >                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> >
> > Can you actually produce some evidence to back that up, Neil? I'd be
> > very interested in seeing it, because it completely contradicts my
> > experience and understanding.
>
> That was based on my memory of Lew's arguments. At the time I couldn't find
> the exact email and looking at it now, it seems that what he wrote and what I
> remembered don't match that closely. The basic point remains that for
> software that wasn't expected to be running in 2000, the extra cost of
> storage for 4 digit dates was sufficiently high that 2 digit dates were a
> reasonable compromise.
And my basic point remains as well: clients expect the software that's written for them to keep working forever. I have never encountered one who asked "how long is this software going to last?", nor a programmer who can give any sort of rational estimate except in the cases where, as Lew described, there's a built-in timed explosive (since the factors that determine the end of usefulness of that program generally *don't exist* at the time that it's written.)
There's a built-in trap for the programmers, as well: the compromises they made at the time are invisible; the explosion, when it comes, is huge - and there's only one obvious place to point fingers. Add in the bean counters involved in those decisions (which, to underscore my original point, could not have been made without the programmers' complicity) who will now blink, look innocent, and say "how could we have known anything like that would happen? It's a programming decision, of course!", and you can hear the jaws of the trap slamming shut (as well as the screams of the programmers whose sensitive bits that trap just crushed.)
> Does your experience tell you that people expected these systems to still be
> running in 2000?
They expect to pass them on to endless future generations as heirlooms. Reality is not an issue; they're paying money, and they want good value. After all, they've still got the gold watch from their grandfather's time - programs can't be any more complex than that!
> My understanding is that people didn't expect these programs
> to still be around, but of course I wasn't in computing back then and my
> impressions may be wrong.
It's not really about computing, Neil; it's about perceptions of people who are paying for "bespoke" software and their understanding of what they're paying for.
> > > The Y2K situation seems closer to expecting a 1970s car to still be
> > > running safely in 2000 without much more work than occasionally changing
> > > the oil and replacing the tyres.
> >
> > That analogy fails immediately, I'm afraid. Software doesn't wear out
> > from usage; cars do.
>
> Only if you take the analogy very literally. Cars and software both have to be
> used and maintained in accordance with their design limitations. With cars
> you replace parts that have worn out or proven defective. With software you
> replace parts where the design assumptions prove faulty or requirements
> change or you find a bug.
Again, that's the reality. The buyers' perception is something quite different from that. Most car owners don't expect to (and don't) maintain their cars - that's just a simple fact. How often do people change their belts and hoses, for example? Does anyone here, on this list, actually know how often you should do so? (As it happens, I do - but most people, by far, would have not the slightest idea. The answer is every 60 months or 60,000 miles, at least for the average American car.)
Jimmy ORegan [joregan at gmail.com]
Sun, 10 Dec 2006 11:48:21 +0000
On 09/12/06, Lew Pitcher <lpitcher at sympatico.ca> wrote:
> change. The conversions between binary or packed decimal and display were
> built into most of the languages we had to work with (COBOL, mostly, but some
> SAS and EASYTRIEVE, and even SQL).
I just looked at slashdot: "Grab your COBOL Coding Forms (http://www.csis.ul.ie/cobol/Course/Resources/pics/CodingForm.jpg) and head on over to comp.lang.cobol (http://groups.google.com/group/comp.lang.cobol), kids! Yesterday was Grace Hopper's 100th birthday ( http://news.com.com/100+years+of+Grace+Hopper/2100-1007_3-6142101.html), and many are still singing the praises (http://opinion.zdnet.co.uk/comment/0,1000002138,39285061,00.htm) of her Common Business-Oriented Language."
Neil Youngman [ny at youngman.org.uk]
Sun, 10 Dec 2006 18:29:00 +0000
On or around Sunday 10 December 2006 00:20, Benjamin A. Okopnik reorganised a bunch of electrons to form the message:
> And my basic point remains as well: clients expect the software that's
> written for them to keep working forever. I have never encountered one
> who asked "how long is this software going to last?", nor a programmer
> who can give any sort of rational estimate except in the cases where,
> as Lew described, there's a built-in timed explosive (since the factors
> that determine the end of usefulness of that program generally *don't
> exist* at the time that it's written.)
OK, you're talking about users' expectations; I'm talking about theoretical situations. I expect we can agree that in nearly all cases 2 digit years were used without sufficient thought. I was making a theoretical objection to the implication of your exploding car metaphor that it could never be justified. There may have been cases where it was done on the sort of reasoned trade off I'm thinking of, but I don't claim to know of any. I expect they were rare to non-existent.
It's one of those arguments that probably wasn't worth the verbiage expended, but hey, I'll have another beer from the TAG bar and argue about something else ;-)
> There's a built-in trap for the programmers, as well: the compromises
> they made at the time are invisible; the explosion, when it comes, is
> huge - and there's only one obvious place to point fingers. Add in the
> bean counters involved in those decisions (which, to underscore my
> original point, could not have been made without the programmers'
> complicity) who will now blink, look innocent, and say "how could we
> have known anything like that would happen? It's a programming
> decision, of course!", and you can hear the jaws of the trap slamming
> shut (as well as the screams of the programmers whose sensitive bits
> that trap just crushed.)
All those originally involved have of course retired, or been promoted to a level way above taking any kind of responsibility for anything by then ;-)
> It's not really about computing, Neil; it's about perceptions of
> people who are paying for "bespoke" software and their understanding of
> what they're paying for.
I've been lucky enough to mostly work for tech-savvy clients who could mostly understand the implications of decisions. I'm luckier than most, I guess.
> Again, that's the reality. The buyers' perception is something quite
> different from that. Most car owners don't expect to (and don't)
> maintain their cars - that's just a simple fact. How often do people
> change their belts and hoses, for example? Does anyone here, on this
> list, actually know how often you should do so? (As it happens, I do -
> but most people, by far, would have not the slightest idea. The answer
> is every 60 months or 60,000 miles, at least for the average American
> car.)
5 years or 60,000 miles is exactly right for my Honda. I think most car owners do expect to pay for maintenance, even if they don't understand exactly what the garage is doing. Or do American car owners not expect to service their car regularly?
Benjamin A. Okopnik [ben at linuxgazette.net]
Tue, 12 Dec 2006 13:30:29 -0600
On Sun, Dec 10, 2006 at 06:29:00PM +0000, Neil Youngman wrote:
> OK, you're talking about users' expectations; I'm talking about theoretical
> situations. I expect we can agree that in nearly all cases 2 digit years were
> used without sufficient thought. I was making a theoretical objection to the
> implication of your exploding car metaphor that it could never be justified.
> There may have been cases where it was done on the sort of reasoned trade off
> I'm thinking of, but I don't claim to know of any. I expect they were rare to
> non-existent.
Heck, in theoretical situations, it's not a problem. In theory, if I could get a car that was clearly and explicitly and definitely denoted as "GOING TO EXPLODE AT $DATE!", and the date was far enough off and the price was very low, I'd certainly buy it - but I'd set three different alarms, and see if (when the time came) I could hack it not to explode, assuming it still had value to me.
In practice, however, I would never buy such a thing. If the engineers who designed it couldn't remove the flaw that would make it explode, why would I trust their estimates of when it would do so?
> It's one of those arguments that probably wasn't worth the verbiage expended,
> but hey, I'll have another beer from the TAG bar and argue about something
> else ;-)
[grin] Hey, maybe it's got little or no value to us - especially if you've been through this argument before, as I'm sure a number of us have. However, it's worth exposing decision processes like these to our readers, who may not have heard of this before; it's similar to studying history in order to avoid making those mistakes all over again.
Those who cannot remember the past are condemned to repeat it. [ ... ] This is the condition of children and barbarians, in which instinct has learned nothing from experience. -- George Santayana, The Life of Reason, 1905
> > There's a built-in trap for the programmers, as well: the compromises > > they made at the time are invisible; the explosion, when it comes, is > > huge - and there's only one obvious place to point fingers. Add in the > > bean counters involved in those decisions (which, to underscore my > > original point, could not have been made without the programmers' > > complicity) who will now blink, look innocent, and say "how could we > > have known anything like that would happen? It's a programming > > decision, of course!", and you can hear the jaws of the trap slamming > > shut (as well as the screams of the programmers whose sensitive bits > > that trap just crushed.) > > All those originally involved have of course retired, or been promoted to a > level way above taking any kind of responsibility for anything by then ;-)
Dilbert isn't a cartoon - it's a factual chronicle of office life.
> > It's not really about computing, Neil; it's about perceptions of > > people who are paying for "bespoke" software and their understanding of > > what they're paying for. > > I've been lucky enough to work mostly for tech-savvy clients who could > generally understand the implications of decisions. I'm luckier than most, I guess.
Wow - I do have to say that you've definitely been lucky, then. I've certainly had a number of well-educated clients, and dealing with them is usually easy and even fun - they hire me for a task that they've defined, I do it and get paid, and life is good. But quite often, I have to do a huge amount of educating in order to simply get the client to understand what they need - or to define what benefit they will receive from what I do, no matter how mysterious it is to them.
Simple example: a client gives me, e.g., a sales brochure and says "put it on the Web". I find them a host, negotiate a contract for the service/space/bandwidth/features they need, create a rational layout for the content, reformat it and split it into separate pages, scan and clean up and format the images, create the CSS and the HTML and link all the files properly, upload the files, check to make sure that it all looks good in various browsers, and ping the client to say "it's done; here's my bill." The response - unless they already know what the process involves, or unless I've educated them about it - can be "It still looks exactly like what I gave you - you didn't do anything except copy it!"
> 5 years or 60,000 miles is exactly right for my Honda. I think most car owners > do expect to pay for maintenance, even if they don't understand exactly what > the garage is doing. Or do American car owners not expect to service their > cars regularly?
If you ask most mechanics here, the answer will be a loud (and usually profane) "NO!" There are, of course, exceptions - but that's not the norm.
John Karns [johnkarns at gmail.com]
Wed, 13 Dec 2006 17:15:13 -0500
On 12/7/06, Benjamin A. Okopnik <ben at linuxgazette.net> wrote:
> "Sure, we've made some compromises in the safety of this car - sure, > it'll explode and kill you at some point in the future - but it's > significantly cheaper!" > > I assume that you can see the fallacy in the above argument. Is there a > reason - other than your personal involvement in the problem - that you > can't see it in the similar argument that you're making?
Well just off the top of my head I can think of two instances where this kind of thing has happened.
Remember the Ford Pinto back in the 1970's? They were known for their tendency to burst into flames when rear-ended in an accident. As I remember, it turned out that the problem had been evident in the company's crash tests, but they had done a statistical/economic analysis showing that the projected losses from expected liability lawsuits would work out to be less costly than redesigning the car (or something close to that), so they made the decision to go ahead with production and ignore any concerns about loss of life. As I recall, the issue was clearly exposed in the news media, something that I feel might not happen in today's climate, with U.S. news media companies being much less inclined to offer any opposition to U.S. corporate financial interests.
The 2nd example is a bit different in nature, but is also very illustrative of corporate management decisions being biased strongly by the profit motive to the exclusion of any reasonable concerns for public safety. Sometime during that same period (60's / 70's), Monsanto knowingly dumped PCBs into a river at one of their southern U.S. factory sites. The management was made aware that the fish in the stream were being killed off in massive quantities, to the point of extermination. They were presented with one or more written studies, but decided to hide it, because the profits were just too lucrative to pass up. Like the Ford Pinto issue, it eventually caught up with them, but I tend to think that it's more the exception than the rule.
I've seen it myself in the corporate work place. Quality is very often a low priority. The penalty of the increased maintenance cost of doing the job poorly is a hidden cost at the time that the planning / programming is taking place. The overriding tendency is to strongly prioritize the short term goals rather than spend the extra time and effort (i.e., $$) to do it right the first time. This is not to say that it's necessarily universally true in the corporate world, but my opinion is that the scales are heavily tilted in that direction.
Benjamin A. Okopnik [ben at linuxgazette.net]
Thu, 14 Dec 2006 08:07:23 -0600
[ Completely off-topic for TAG, of course - but that's what we do here. It's all fun. ]
On Wed, Dec 13, 2006 at 05:15:13PM -0500, John Karns wrote:
> Well just off the top of my head I can think of two instances where > this kind of thing has happened.
[smile] There are thousands more, as well. My contention wasn't that things like this don't happen; it's that they shouldn't.
> Remember the Ford Pinto back in the 1970's?
[ snip ]
Oh, I certainly do. In my estimation, the motivations underlying both the Pinto debacle and the Y2K problem are remarkably similar.
> As I recall, the issue was clearly exposed in the news media, something > that I feel might not happen in today's climate, with U.S. news media > companies being much less inclined to offer any opposition to U.S. > corporate financial interests.
Oh, I can't agree with that one at all. Yes, big money is a powerful force; yes, the mainstream news media is (with very rare exceptions) venal and sleazy. Today, however, the formula is completely different from those days of the past: there are so many eyes watching (many of them behind video cameras), so many people blogging, and so much social recognition in being "the firstest with the mostest" that keeping something like this hidden is nearly impossible - and once it's out, it's news (and therefore a hot commodity, to be hawked as quickly as possible before the competition gets hold of it.) I mean, Bill Gates is unable to keep internal, tOP-sEKrIt, decoder-ring-only Micr0s0ft memos private - what chance does Detroit's once-powerful-but-now-sucking-hind-tit auto industry have? A billion dollars means nothing if it can't be directed at the target - and it can't, since the target is everybody and anybody, at least anybody who's just overheard a juicy rumor and has access to the Net.
> The 2nd example is a bit different in nature, but is also very > illustrative of corporate management decisions being biased strongly > by the profit motive to the exclusion of any reasonable concerns for > public safety. Sometime during that same period (60's / 70's), > Monsanto knowingly dumped PCBs into a river at one of their southern > U.S. factory sites.
I recall hearing about it at some point in the past. Looking it up now brings back all the horrific stuff I'd heard then: it went on for over 40 years, Monsanto had known all about it (they'd researched PCBs back in the 30s but blamed the effects on "contaminated samples".) They also buried over 10 million pounds of PCBs in the Anniston landfill.
> The management was made aware that the fish in > the stream were being killed off in massive quantities, to the point > of extermination. They were presented with one or more written > studies, but decided to hide it, because the profits were just too > lucrative to pass up. Like the Ford Pinto issue, it eventually caught > up with them, but I tend to think that it's more the exception than > the rule.
The Bush administration (*screw* /nihil de mortuis nisi bonum/, anyway) has tried to bury this one - perhaps in the same landfill, and with as much toxic effect - but:
http://www.ewg.org/reports/anniston/release_15APR02.php?print_version=1
http://www.foxriverwatch.com/nrda/bush_record.html
http://www.commondreams.org/headlines02/0101-02.htm
http://www.ethicalinvesting.com/monsanto/news/10074.htm
...and several thousand other major sites contain all the sordid details - including the payoffs to a large number of the top guns in the administration. As I said - you just can't hide this kind of stuff any more.
David Brin once described a future society ("Kiln People") with a pretty good solution to this kind of garbage.
Kaolin wasn't kidding about the Henchman Law. When first introduced, it soon turned into the quickest way for a fellow to retire early - by tattling on his boss. Whistle-blower prizes grew bigger as one white-collar scam after another collapsed, feeding half of the resulting fines back into new rewards, enticing even more trusted lieutenants, minions, and right-hand men to blab away. [ ... ] The implacable logic of the Prisoner's Dilemma triggered collapse of one conspiracy after another as informers became public heroes, accelerating the rush for publicity and treasure.

Not a bad image at all. I will say that there's not much of an enforcement mechanism for this yet, barring class action lawsuits (which, as far as I'm concerned, can bring financial compensation but NOT justice. Justice would be mass murder trials for the Monsanto executives involved.) But at least now the information is out, and the possibility exists.
> I've seen it myself in the corporate work place. Quality is very > often a low priority. The penalty of the increased maintenance cost > of doing the job poorly is a hidden cost at the time that the planning > / programming is taking place. The overriding tendency is to strongly > prioritize the short term goals rather than spend the extra time and > effort (i.e., $$) to do it right the first time. This is not > to say that it's necessarily universally true in the corporate world, > but my opinion is that the scales are heavily tilted in that > direction.
I agree. There's no such thing as "no compromise" in business - value is all about negotiation and a meeting of the minds - but some compromises should never be made. This, to me, is what a professional code of ethics is all about.
John Karns [johnkarns at gmail.com]
Thu, 14 Dec 2006 16:31:03 -0500
On 12/14/06, Benjamin A. Okopnik <ben at linuxgazette.net> wrote:
> Oh, I can't agree with that one at all. Yes, big money is a powerful > force; yes, the mainstream news media is (with very rare exceptions) > venal and sleazy. Today, however, the formula is completely different > from those days of the past: there are so many eyes watching (many of > them behind video cameras), so many people blogging, and so much social > recognition in being "the firstest with the mostest" that keeping > something like this hidden is nearly impossible - and once it's out, > it's news (and therefore a hot commodity, to be hawked as quickly as > possible before the competition gets hold of it.)
And that's one of the very important social benefits of the internet. In fact, I'm of the opinion that these 2nd stream news channels have assumed the role of the check / balance in our "democratic" system that the mainstream media has abandoned. Of course there are always exceptions, I'm painting in broad strokes here, for the sake of discussion.
Nevertheless, the channel(s) where the alternative media has its strength is still much more limited in terms of the percentage of the population that is reached, to say nothing of the credibility factor - i.e., although I myself put very little credibility in the mainstream media, I'm an iconoclast. Unfortunately, I tend to think that doesn't apply to the majority of people, especially in this country, and joe citizen just won't believe it until he sees it headlined in the NYT or his local newspaper. For some reason I have the impression that the situation is not quite as bleak outside the U.S., particularly in Western Europe.
> I recall hearing about it at some point in the past. Looking it up now > brings back all the horrific stuff I'd heard then: it went on for over > 40 years, Monsanto had known all about it (they'd researched PCBs back > in the 30s but blamed the effects on "contaminated samples".) They also > buried over 10 million pounds of PCBs in the Anniston landfill.
Yeah, that's the incident. Rather incredible, it was.
> The Bush administration (*screw* /nihil de mortuis nisi bonum/, anyway) > has tried to bury this one - perhaps in the same landfill, and with as > much toxic effect - but: > > http://www.ewg.org/reports/anniston/release_15APR02.php?print_version=1 > http://www.foxriverwatch.com/nrda/bush_record.html > http://www.commondreams.org/headlines02/0101-02.htm > http://www.ethicalinvesting.com/monsanto/news/10074.htm
Some interesting links there, thanks.
[snip]
> Not a bad image at all.
Interesting twist, that it would bring financial compensation. However, in reality, I think that some things will remain outside of the realm of monetary compensation, this among them. Indeed if money were to be in the picture, it would arouse my suspicion.
[snipped]
> I agree. There's no such thing as "no compromise" in business - value is > all about negotiation and a meeting of the minds - but some compromises > should never be made. This, to me, is what a professional code of ethics > is all about.
... or a code of ethics on any level (i.e., personal). IMO, the string of financial scandals over the past 20 years has really signalled a change of social values in the U.S. It may come off as naïve, but the behavior of exploiting society for that kind of personal gain is ultimately unproductive for those doing the exploiting too, not just for the exploited. A case in point is the refusal of the U.S. "establishment" to take action to address the global warming environmental crisis. I can only draw the conclusion that they would prefer to die with as much wealth as they can manage to accumulate, sacrificing the planet in the process, rather than make any adjustment to their lifestyle. It boggles the mind, and borders on perversity.
http://www.google.com/url?sa=t&ct=res&cd=1 [...]
or
kinda makes you wonder.
Benjamin A. Okopnik [ben at linuxgazette.net]
Thu, 21 Dec 2006 20:09:42 -0600
[ Much as I like talking about this stuff, I think this'll be my last post here about this topic. John, please do feel free to contact me about it off-list. ]
On Thu, Dec 14, 2006 at 04:31:03PM -0500, John Karns wrote:
> On 12/14/06, Benjamin A. Okopnik <ben at linuxgazette.net> wrote:
[ venal newsmedia vs. the social media scene ]
> And that's one of the very important social benefits of the internet. > In fact, I'm of the opinion that these 2nd stream news channels have > assumed the role of the check / balance in our "democratic" system > that the mainstream media has abandoned. Of course there are always > exceptions, I'm painting in broad strokes here, for the sake of > discussion.
Strongly agreed. There are also the internal checks and balances in the blogosphere - there are all sorts of viewpoints, and popular appeal sometimes counts for a lot more than accuracy ([cough]Michelle Malkin[cough]), but an essentially open venue for public discussion is, overall, a huge positive influence and an incredibly powerful force for positive change.
> Nevertheless, the channel(s) where the alternative media has its > strength is still much more limited in terms of the percentage of the > population that is reached, to say nothing of the credibility factor - > i.e., although I myself put very little credibility in the mainstream > media, I'm an iconoclast. Unfortunately, I tend to think that doesn't > apply to the majority of people, especially in this country, and joe > citizen just won't believe it until he sees it headlined in the NYT or > his local newspaper. For some reason I have the impression that the > situation is not quite as bleak outside the U.S., particularly in > Western Europe.
John, in some ways, we're very much on the same page (although I don't think either of us is related to Congressman Foley... whoops, was that my out-loud voice?) - although I have just a little more hope and belief in the average Joe. I think that, after the torrent of lies from the current administration - and after those lies were supported, or should we say colluded in, by the mainstream media - there's been a lot of disillusionment. These days, the supporters of the administration are either desperately screaming their heads off and trying to hammer their now-empty message into the ears of anyone who will listen, or quietly (contrary to their previous habits) not engaging in the arguments that they know they'll lose, or - most hopeful of all - running for the fences and claiming that they've always been Democrats (or, in those cases where honesty actually manages to gain a foothold, admitting that they're switching due to their revulsion with the GOP.) As Bob Dylan once sang, "the times, they are a-changin'."
Not to the degree that I'd like to see - not nearly enough to right the evil done by the people currently in power, not enough to restore the rights that have been taken away, not enough to honestly admit that the Iraq debacle is lost and was lost the minute this country swallowed the BushCo lies and accepted a permanent lien against its freedom in the name of "security" - but at least there's movement in the right direction. I can only hope that it continues, and gets stronger. (And I wish to hell Nancy Pelosi would call for impeachment and a war crimes trial... but I'm almost certain that it's not going to happen.)
> Interesting twist, that it would bring financial compensation. > However, in reality, I think that some things will remain outside of > the realm of monetary compensation, this among them. Indeed if money > were to be in the picture, it would arouse my suspicion.

I can't really see why. If the accusation is false, then the money won't be paid out. As Neal Stephenson noted in 'Zodiac', toxic crime is hard to hide.
No chemical crime is perfect. Chemical reactions have inputs and outputs and there's no way to make those outputs disappear. You can try to eliminate them with another chemical reaction, but that's going to have outputs also. You can try to hide them, but they have this way of escaping. The only rational choice is not to be a chemical crook in the first place. Become a chemical crook and you're betting your future on the hope that there aren't any chemical detectives gunning for you. That assumption isn't true anymore.

The amounts of toxins involved in commercial production are huge; otherwise, they're simply not worth the risk to the company. It should be very easy to determine whether someone is a toxic criminal or not, and I don't see that money is any kind of a negative influence in that equation; in fact, one of the most successful branches of research psychology today (I can't recall the name for the moment, but "Nature" had an article on strong altruism that was backed by the method) uses financial incentives to expose interesting facets of human behavior.
> ... or a code of ethics on any level (i.e., personal).
Oh, boy. This is where my cynicism comes in. Cometh The Rant.
<rant mode="on" type="strong">
I have seen, over the course of my life, an ongoing and persistent effort - which has, unfortunately, been very effective - to turn real education into a kindergarten where the inmates are humored in their belief that they're receiving an education. Note that, e.g., philosophy - the science that deals with the basis of understanding, morality, and ethics, the things that determine to what extent a human being can tell right from wrong - is in disrepute as the province of "eggheads", and is at best held to be "impractical".
Once you sell a man on that, you can sell him on anything. Killing others for no reason of his own (i.e., not in self-defense)? No problem; just invoke "WMDs", or "world security", or even - the abuse of this once-meaningful term makes me sick at heart - "patriotism", and off they'll go to kill and to die, by the thousands. No one dares to ask for proof of WMDs, or what the exact factors are that affect (and the exact meaning of) "world security", or what patriotism has to do with murdering thousands of Iraqi civilians and torturing helpless prisoners; they just do it, because The People in Authority said to. I mean, after all, we have no idea of what the right thing is; they're in charge, so they surely must!
Clearly, only impractical eggheads need to be able to think. And ethics, beyond a few rare flashes in the dark, is just some stupid thing that stops you from making money and makes you a sucker. Right?
</rant>
> IMO, the > string of financial scandals over the past 20 years has really > signalled a change of social values in the U.S. It may come off as > naïve, but the behavior of exploiting society for that kind of > personal gain is ultimately unproductive for those doing the > exploiting too, not just for the exploited. A case in point is the > refusal of the U.S. "establishment" to take action to address the > global warming environmental crisis. I can only draw the conclusion > that they would prefer to die with as much wealth as they can manage > to accumulate, sacrificing the planet in the process, rather than > make any adjustment to their lifestyle. It boggles the mind, and > borders on perversity.
Not at all. Consider what I've said above. If no issue of ethics or morality obtains, then there's nothing wrong with genocide, or wiping out entire classes of animals, or poisoning the oceans, just as long as you make your pile. It's not a failure of ethics if "ethics" is just a meaningless word.