Editorials - 2003/2004/2006
Read past "pronouncements" from this site's main page.
Note: While every reasonable effort is made to ensure that these articles are factually correct, please bear in mind that the opinions expressed here are just that - opinions.
I (don't) want my IPTV
Microsoft & SPF - A New Trend?
SPF - Spam Protection Factor?
Linux Desktop: One Year On
Step on the Small Guy
Living with Sobig
Shoot the Messenger
Anti-spammer's Spam Re. Spam
Spam, spam, everywhere...
A Small Victory
Love and Commitment
Squealing on Snort
Making money in an Open Source world
"Dis-integration" is the key
Linux myths that never die
I (don't) want my IPTV
April 12, 2006
Multicast to the edge of the network, (IPTV), is a solution in search of a requirement. MoD, (Media on Demand), is what consumers really want -- a virtual "IPTivo", if you will -- not a multitude of broadcast or cable channels multicast into the home.
Just ask my wife. She rents TV series from Blockbuster/Netflix and watches Comedy Central clips on her computer or iPod. If I weren't so cheap, we'd have Tivo, and we'd probably never watch anything in "real time". The point is that we mostly want different content on different devices, not the same content on multiple devices at the same time.
It does make sense to multicast content to caching servers "near the edge", but there isn't much benefit in multicasting to the home. It just adds another level of complexity to the delivery, and it forces the consumer to cache content they want to view later. It also tends to restrict the supported hardware, or at least require upgrades or lock-ins, which I guess is why corporations are willing to spend millions on the IPTV model.
I'm confident consumers will ultimately reject the "old-school" IPTV model in favor of "Any content, Any device, Any time". A nice feature of MoD is that the protocols required are already available, (SIP, RTP, RTSP, MPEG, etc.), and the "concept" is already widely deployed, (ever watched a QuickTime/Real/Windows Media clip on CNN.com?).
MoD would benefit mightily from multicasting in the core of the network to caching media servers near the edge. This would cost less than building a "multicast to the home" infrastructure, and would ultimately provide more utility to the consumer.
Bring it on...
Microsoft & SPF - A New Trend?
May 31, 2004
I'm pleased to see that Microsoft and the author of SPF have hammered out an agreement to support each other's Mail Authentication schemes.
Microsoft's support for SPF, and SPF's support for Microsoft's Caller ID, represent one of the few instances of Microsoft truly working with, and contributing to, the Open Source community in a way that should prove mutually beneficial.
Let's hope it's the start of a trend.
SPF - Spam Protection Factor?
March 30, 2004
Those who follow my rants will probably know that I'm a big proponent of the idea of verifying Sending Mailservers using the DNS. I am pleased to announce that just such a system is finally gathering momentum.
SPF, or "Sender Policy Framework", uses supplementary DNS TXT records to indicate what mailservers are allowed to send mail for a particular domain. It took me all of about 10 minutes to set up the appropriate records for my own domain using the handy wizard on the SPF website. There are also Perl modules available that can be used to implement SPF verification in spam filtering programs. If a critical mass of domains adopt SPF, (and big names like AOL are already on board), we will (finally) have a simple and effective means of rejecting mail from unauthorized servers.
I'm not so naive as to believe that this will eliminate spam, but it will certainly help cut down on the deluge of spam and email-borne viruses emanating from residential computers.
Linux Desktop: One Year On
December 31, 2003
As it's been a little over a year now since I switched to Linux on my desktop, I thought it appropriate to do a year-end review of my experiences to date.
What follows is a selection of my observations, in no particular order:
The Browser Wars
About three months ago, I switched browsers from standard Mozilla to Mozilla Firefox and have been very pleased with the results. Firefox supports the space-saving toolbar layout I preferred in Internet Explorer, the lack of which was my main gripe with Mozilla. Pop-up handling is much better than in IE, and having this feature integrated properly into the browser with "sane" default settings is far preferable to using some of the questionable "add-ons" available for IE. I also feel reasonably confident that Mozilla's default security settings protect me from most common scripting attacks, without requiring me to disable scripting for all but a handful of laboriously configured sites, (which seems to me to be the only plausible solution when using IE).
Unlike older versions of Mozilla or Netscape, sites look correct and behave properly. Getting all the appropriate plug-ins installed and configured has been a bit of a chore, but most media types are now supported. The big exception is Apple's QuickTime format. This is probably my biggest disappointment with Linux, since I like to watch movie trailers. In fairness to the Open Source movement, I place the blame for this omission squarely on Steve Jobs' shoulders. By his own admission he "loves open source", but clearly not enough to release a plug-in for his company's principal media format.
When I started using OpenOffice instead of Microsoft Office, it struck me as a "good enough but not great" solution. It was slow, and I still had to use Microsoft formats almost exclusively, since nobody else on the planet seemed willing to commit to using open document formats. Fortunately, the company I work for has since embraced OpenOffice 1.1 formats, so I no longer have to save documents in multiple formats. The 1.1 release is also significantly faster than the 1.0 release for most operations, (though still not as fast as the MS equivalents). In addition, OpenOffice's improved MS compatibility, and the increasing acceptance that there are alternatives to Microsoft-only applications, mean I no longer feel obliged to check documents against the MS-equivalent application before distributing them.
While on the OpenOffice/Microsoft Office subject, when reviewing the software on my wife's Mac, we discovered that the copy of Microsoft Word we had was an illegal bootleg that had been copied onto the machine at some point in the past by an over-zealous system admin. So we removed it. Without Word, the version of Mac OS 9 on that machine became almost worthless. The cost of installing Word legitimately, (assuming we wouldn't lie and claim that we were teachers or students), was several hundred dollars. For a machine where we only need to read or write Word documents maybe once or twice a week, this was just too steep a price. The solution? We added a Yellow Dog Linux partition to that machine and installed OpenOffice 1.1. I've since configured OpenOffice on the Mac to read and save MS formats by default, since we mostly use that machine to communicate with friends and family who are still stuck in a Microsoft-only world. With OpenOffice and a better browser installed, the Mac rarely gets booted into Mac OS any more - it's almost always Linux.
Painting a Better Picture
One of my original gripes after switching to Linux was that a handful of Windows programs I used regularly had no satisfactory Linux equivalent. Since then, I've been able to get Paint Shop Pro, (my main problem-application), working to my satisfaction using the Windows emulation available from Wine. I also have several other handy Windows tools and games running under Wine. With these loose ends tied up, I almost never have any reason to boot Windows. Looking at my Paint Shop Pro problem from another angle, I'm pleased to report that the newer releases of The GIMP are easier to use, so I'm sure that with a little practice I could abandon Paint Shop Pro entirely. I've also found that applications such as ImageMagick, XnView and gnuplot are very useful, particularly for batch-processing images and graphs.
Life in the Slow Lane
One aspect of Linux that I love is the fact that I can decide when and what to upgrade. Originally, I installed Red Hat 7.3. For a while I kept this up to date using the Red Hat Network. Lately, however, I've just been updating bits and pieces as I see fit, (replacing stock Mozilla with Firefox, updating OpenOffice, etc.). I'm still using a "stock" Red Hat kernel, but many of the non-kernel programs that come as part of the O/S have been updated. At some point I would like to move to a more recent distribution, but I like the fact that I can choose when to move, and I am not penalized for "sitting on the fence". With proprietary software, I often feel that there are deliberate decisions made not to support newer software on older Operating Systems. With Open Source, this restriction rarely applies: in cases where software doesn't run on an older O/S, it's almost always for valid technical reasons, rather than simply because of vendor efforts to generate more sales. It's entirely possible that I'll stick with my Red Hat 7.3 patchwork until I get a new machine. Or maybe I won't. At least that decision is mine.
The Final Word
My switch to Linux hasn't been achieved entirely without headaches, but I have been impressed with the overall ease of use and the breadth of functionality that is available. I'm not sure if Linux has quite reached the level of maturity where it would be the right solution for everyone, but over the past year it has certainly proven to be the right solution for me.
With regard to the likelihood of Microsoft ever convincing me to part with Linux, I think the following Charlton Heston quote from an NRA rally sums up my feelings nicely: "I have only five words for you: From my cold, dead hands."
Step on the Small Guy
September 9, 2003
Road Runner has implemented a spam-blocking policy that prevents large blocks of "residential" IP address space from sending mail to Road Runner's customers. This includes an indeterminate number of legitimate domains owned by individuals and small businesses.
How do I know this? -- "jfitz.com" is one of the domains that is being blocked. Ironic. And annoying.
I'm amazed at the level of ignorance that exists, even among people who claim to be "experts", about the extent to which large ISPs block IP address space. Let's be clear: All major ISPs either refuse to accept mail from certain blocks of IP address space, or, at the very least, flag such mail as spam. Note that I'm not just talking about the IP addresses of known spammers; I mean indiscriminate blocking of "residential" IP address space. Frankly, I understand why they do this: If they didn't, their customers would be overwhelmed with spam; not tens of extra mails, but hundreds, or possibly even thousands, of additional spam messages per customer each week.
However, this indiscriminate blocking hurts small, legitimate domains like my own, and, by extension, damages the reliability of the entire mail system. Clearly, we need to tackle spam, but indiscriminate blocking is not the way to go. What follows are three steps that I believe would go a long way toward solving the problem:
1) Stronger government oversight of the Domain Name System, (DNS), with stiff penalties for abuse. From a legal and governmental standpoint, domain registration should be treated with much the same gravity as an application for a driver's license. Fake driver's licenses do exist, and plenty of people drive without a license, but the licensing system itself is generally accepted as being reliable, because the application procedure is strict and the penalties for abusing the system are severe.
2) From a technical standpoint, the security extensions to the Domain Name System (DNSSEC) need widespread adoption. These extensions make it significantly harder to hijack or forge domain name information.
3) Finally, the DNS needs to be expanded so that any type of network based service can be authenticated through the system. The obvious, (and pressing), example of a "network based service" in need of authentication is outbound mail. Fortunately, the DNS was designed with expansion in mind, and proposals for authenticating outbound mail servers already exist.
With a trusted and reliable DNS, and DNS-authenticated outbound mail, a receiving mail server can ask the DNS if an IP address that is trying to send it mail is allowed to send mail for the email address contained in the MAIL FROM portion of the message. If it is, then the acceptability of the mail can be based on our degree of trust in the sender's domain, rather than on the "shifting sands" of some arbitrary IP address blacklist. If the IP address is not tied to the sender's domain, then the mail can be rejected outright.
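To make that concrete, here's a rough sketch in Python of the kind of lookup a receiving mailserver could perform, (a simplified illustration using the third-party dnspython package - it understands only bare "ip4:" mechanisms, and the function names are my own invention):

    import ipaddress
    import dns.exception
    import dns.resolver

    def sender_policy(domain):
        """Return the domain's SPF-style TXT record, or None if absent."""
        try:
            for rr in dns.resolver.resolve(domain, "TXT"):
                txt = b"".join(rr.strings).decode()
                if txt.startswith("v=spf1"):
                    return txt
        except dns.exception.DNSException:
            pass
        return None

    def may_send(ip, mail_from_domain):
        """True if the domain's policy lists an ip4: block containing ip."""
        policy = sender_policy(mail_from_domain)
        if policy is None:
            return False  # no policy published; treat as unauthorized
        addr = ipaddress.ip_address(ip)
        for term in policy.split():
            if term.startswith("ip4:"):
                if addr in ipaddress.ip_network(term[4:], strict=False):
                    return True
        return False

A real implementation would follow whichever of the existing proposals wins out and handle the full mechanism set; the sketch just shows how cheap the lookup itself is.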
With this level of authentication in place, ISPs could drop their indiscriminate blocks and instead use the DNS to focus on blocking known spammers.
Living with Sobig
September 3, 2003
Tracking Sobig's growth has been an interesting exercise. But when the volume of bogus mail hit 6000+ per day, with no sign of letting up, the bandwidth drain it was causing forced me to take more drastic measures.
I now pre-filter incoming packets to my mail server on port 25. When my system detects a likely Sobig mail, it terminates the connection, sending TCP resets to both ends of the connection. This makes it difficult for me to estimate how bad the Sobig situation really is, (because my mail filters are no longer registering the rejection, and the new filtering logs are skewed by retransmit attempts), but, because I'm now cutting off the connection before the main "payload" gets through, I've cut bandwidth utilization by about 75%.
To implement the filtering I use Snort, compiled with the "flexresp" option. Since I'm not particularly interested in extensive Intrusion Detection, I've disabled almost all predefined rules and preprocessors and I run Snort in non-promiscuous mode and without packet-logging.
The Snort "flexresp" code isn't 100% reliable, (and neither are the signatures I coded into my Snort rules for that matter), so I still depend on my mail filters to reject the occasional mail that makes it through this first line of defense.
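For the curious, the rules involved look roughly like this, (the attachment name shown is one of Sobig.F's known filenames, but the signature here is illustrative rather than my exact rule):

    # Reset any inbound SMTP connection carrying a tell-tale
    # Sobig.F attachment name. Needs Snort built with the
    # --enable-flexresp configure option.
    alert tcp any any -> $HOME_NET 25 \
        (msg:"Probable Sobig.F mail"; content:"your_details.pif"; \
        resp:rst_all; sid:1000001;)

Snort can then be started with something like "snort -p -N -c sobig.conf": "-p" turns off promiscuous mode and "-N" disables packet logging, per the setup described above.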
This exercise demonstrates some of the strengths of Open Source. This is not a particularly well integrated or "pretty" solution. However, it is extremely effective. It was easy to implement, (at least as easy as can be expected when implementing low-level packet filtering), and it required no changes to existing software. It was also free and required no obnoxious reboots.
Strictly speaking, my server could have handled the ongoing load that Sobig created without too much trouble, but the depressing prevalence of Sobig leads me to believe that getting this framework in place now will prove invaluable when the next virus strikes. The arrival of a virus with Sobig's virulence, but with smarter social engineering and better self-mutating features, seems inevitable. In fact I would suspect that social engineering may supplant, (or enhance), exploitation of known security holes, making it significantly more difficult to detect and reject the next virus based on patterns that are unique to "typical" exploitation code.
Needless to say, the results could be devastating, regardless of any advance protection measures we take.
Shoot the Messenger
August 22, 2003
As a result of Sobig.F, I'm blocking about 2000 additional email messages a day. Maybe 1800 of these are actual Sobig.F messages. The other 200 or so are "helpful" auto-generated notices from mail systems.
Of those 200, maybe 180 are from mail scanners that are detecting the virus and sending out a rejection notice to the "sender". Needless to say, this is idiotic, since the sender is forged. However, at least these rejection notices have predictable patterns, so they can be rejected by my mail server also.
The other 20 or so messages are really annoying. These relate to things like full mailboxes or out-of-the-office autoreplies and many of them have no discernable pattern. These get through, only to suffer the wrath of my delete key.
Two conclusions from all of this:
- MX, (or RMX), records for outbound mailservers are a good idea.
- There are a large number of postmasters in the world who need to be put out of our misery, (or, at the very least, need instruction on how to run a mail server).
Anti-spammer's Spam Re. Spam
July 23, 2003
I found it interesting to read an eWeek article which (shockingly) informed me that I don't want spam, and that a US Senator has a survey to prove it.
What was interesting wasn't so much the content, it was the fact that I knew what the article was about before I read it. My foreknowledge came as a result of a piece of Unsolicited Commercial Email (UCE) from ePrivacy Group, the company that prepared the Senator's survey and one that lists an impressive array of security and privacy "experts" on its board.
Needless to say, the UCE from ePrivacy (reproduced below) was a pretty thinly veiled attempt to flog the company's warez, or at least to raise the profile of the company's website in my mind.
I guess their UCE was successful in one sense -- the profile of "ePrivacy" has been raised in my spam-blocking tools.
What follows is a copy of ePrivacy's informative invitation to visit their website for more details:
To: "Editors & Writers" <firstname.lastname@example.org>
Organization: ePrivacy Group
From: "Vincent Schiavone" <email@example.com>
Subject: Sen. Schumer Spam Press Conference: 11AM 7/23/03
Date: Tue, 22 Jul 2003 21:56:10 -0400
I thought you might be interested in the press conference that Senator Schumer has called for tomorrow, Wednesday, July 23, at the Capitol, Room SC-4. The Senator will release a new national survey showing that email users overwhelmingly favor a federal do-not-spam list. The survey, conducted by ePrivacy Group and the Ponemon Institute, also shows that almost 80% of consumers want a federal law banning spam.
Other key findings indicate current solutions to stop unwanted email, such as filtering and opt-out mechanisms, are not working. Many consumers spend 30 minutes or more each day just dealing with spam. On the hot topic of spoofed email, over 60% of persons surveyed had received fake or spoofed email from a trusted brand, with many reporting that such messages contained pornography, a computer virus, or a false message.
I will be joining Schumer to detail the survey. A summary of the findings will be made available. Electronic copies will be available from ePrivacy Group's web site shortly after noon tomorrow.
CEO and President
Spam, spam, everywhere...
June 19, 2003
Note: An extended version of this article also appeared on NewsForge.
It seems like spam is in the news every day lately, and, frankly, some of the proposed "solutions" seem either completely hare-brained, or worse than the problem itself.
I'd like to reiterate a relatively modest proposal I made over a year ago:- require legitimate DNS MX records for all outgoing mail servers.
This proposal could be enacted into law without requiring any initial technological changes whatsoever. Over time, mailservers could be configured to reject mail that comes from mailservers that have no MX record, (many servers already do reject mail on this basis). Furthermore, since the MX record would now be tied to the legal owner of the domain in question, additional filtering could be done to reject mail from servers that are owned by known spammers. In the longer term, this would decrease the complexity, and increase the accuracy, of mail filtering software.
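The receiving side of the check is almost trivial, (a Python sketch using the third-party dnspython package; the function name is my own):

    import dns.exception
    import dns.resolver

    def domain_has_mx(domain):
        """Does the sender's domain publish at least one MX record?"""
        try:
            dns.resolver.resolve(domain, "MX")  # raises if no MX exists
            return True
        except dns.exception.DNSException:
            return False

A mailserver can run this at SMTP time against the MAIL FROM domain and refuse the message on the spot when the lookup fails.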
Another attraction of this proposal is that enforcement is not difficult. No army of "men in black" is needed to chase down the lawbreakers - if you choose not to register your mailservers, or repeatedly send spam from those you do, then nobody will accept your mail.
Simple as that.
A Small Victory
June 11, 2003
For the first time in a number of years, I recently went two full days, (actually closer to three), without a single piece of spam making it through my spam filters.
I had been using SpamAssassin for quite a while, which did a good job, but I found it increasingly difficult to keep the rules updated, so I wrote my own set of scripts. Over the past few weeks I've been honing the rules to try and block spam more effectively, while minimizing the use of lists of good/bad addresses and domains.
It still ain't perfect, but two spam-free days is a good start.
Love and Commitment
March 22, 2003
By his own admission, Steve Jobs "loves" open source. Al Gore, (who just joined Apple's board), is "impressed by the company's commitment to open source".
What I want to know is, how come, with all this "love and commitment" flying around, Apple still hasn't produced native QuickTime plug-ins for Linux?
Show us the love, Steve.
Squealing on Snort
March 5, 2003
I was interested to read what Marty Roesch had to say about the recent vulnerability found in the open source intrusion detection system, Snort:
"It's nasty. You don't have to target the box running Snort; you just have to throw the attack on the network, and the box will just receive it because it's doing its job."
This is exactly the kind of disclosure that the security community craves; no fudge, no finger-pointing, just hard fact.
Marty's honesty is refreshing. What impresses me most, though, is that Marty is not only the creator of Snort, he is also the president of Sourcefire, a company he founded to sell solutions based on Snort. If he were just another writer of open source code in his spare time, we could take a "so what?" view of his honesty. However, the fact is that he has "bet the farm" on Snort, and still he has the balls to call it like it is.
There are countless corporations, large and small, open source and proprietary, who would do well to take a leaf out of Marty's book when it comes to discussing limitations and flaws in their own products.
Marty, I applaud your forthrightness and hope it pays off for you and your company.
Making money in an Open Source world
February 15, 2003
Ignoring the digs, (and, God knows, it's hard not to have a dig at Microsoft), I found this article pretty insightful. It points to a place where proprietary software can flourish in an increasingly open source world.
The key, (without artificially or illegally manipulating the market), is to innovate, then move on before open source commoditizes you. In fact, I'd go one step further and suggest open-sourcing innovations just before the general open source community "catches up", thereby increasing the likelihood of maintaining service contracts. Many of the current Microsoft "cash-cows" are prime candidates - Windows, Office, SQLServer, etc. Face reality: these products are commodities. Give them away and move on.
If that seems to suggest brutally tight timescales for recouping investment costs, that's because it does require brutally tight timescales - give me something I want to pay for.
Innovate or die.
"Dis-integration" is the key
January 27, 2003
I had been planning to write this piece for several months now, and was finally prompted to put pen to (virtual) paper by a slightly unusual manifestation of the problem I intended to analyze - more on that later.
In a sentence, the point I want to make is this: One of the primary benefits of GNU/Linux, *BSD and Free Software in general is that they are NOT integrated solutions.
At first this argument may seem counter-intuitive. It certainly flies in the face of one of the core messages that vendors of proprietary solutions have been trying to sell us for over twenty years:- namely, that fully integrated solutions are cheaper, more reliable and yield higher productivity. To an engineer, this argument for integration just doesn't make sense. "Componentization", ultimately leading to "commoditization", is a core value in any engineering endeavor where reliability at a reasonable price is a key concern. A typical "physical" invention arrives in this world as a highly-customized and highly-integrated prototype. Engineers decompose, (or "dis-integrate", if you will), this prototype into simpler constituent parts that can be manufactured to a measurable specification at a reasonable price. As the body of manufactured items using a particular commoditized part grows, the cost of that part generally decreases, while the reliability of the part, and the efficiency of its manufacturing process, increases.
To put the success of this engineering principle in perspective, something as simple as a standardized nut and bolt, which we can buy for pennies a pound in any hardware store, would have been considered a marvel of precision engineering at the beginning of the Industrial Revolution.
In a sense, this engineering principle is merely a specific manifestation of more general Darwinian principles - "parts" evolve, and the fittest "parts" survive.
Which brings us to software. Why shouldn't these principles also apply to software? Why not, indeed:- the only argument I can see against their application is economic rather than scientific, and the economic argument I have in mind decidedly favors the manufacturer at the expense of the consumer.
The economic "problem" with a software "part" is that the specification by which it is measured is, for all practical purposes, indistinguishable from the "part" itself. In other words, the ideal specification for a software "part" is its source code, which is in itself the part. And, worse yet, (from the economic perspective), there is no subsequent manufacturing cost.
Going back to my nut and bolt example: I can give you a nut and bolt, you can design a product which uses my nut and bolt, and I can then sell you tens of thousands of nuts and bolts, which you will gladly pay for as long as I am the cheapest manufacturer of nuts and bolts that meet your product's specification. With software, however, as soon as I give you the "sample" nut and bolt required to reliably design and test your product, you effectively have everything you need - there is no economic motivation to buy any more nuts and bolts.
The ironic thing is that from an engineering perspective this economic "problem" represents the ultimate efficiency. The specification by which we measure the part is perfect, because the specification IS the part, and the cost of manufacture immediately falls to zero, because as soon as we have the specification, we have the part, and all the parts we will ever need.
"Integration" is the guise under which we have been deluded into accepting a non-functioning, (or, at best, barely- functioning), "band-aid" solution for this economic problem. By tightly integrating software "parts" into larger, more complex solutions, the specification of each "part" can be tightly held, which allows the software manufacturer to reap economic reward from the sale of the overall "solution".
Unfortunately, in the process we have been sacrificing one of humankind's finest engineering achievements.
Having worked extensively with Free Software for some time now, the engineer in me sees how "right" the model is when viewed purely from an engineering perspective. I am free to choose the parts I need to build my own small contributions to this technological evolution, and, in the source code, I have the ultimate specification of the workings for each part I use. As each part gets reused or reworked for a new purpose, or even just inspected to determine its usability, the part will either evolve to become more reliable and/or more functional, or it will wither because it is not suited to the purpose for which it was designed. Some parts will die, some will prosper, and some will even be resurrected as new uses are found. Darwin would approve.
Vendors of proprietary software tout the integrated solution as more elegant and more cost effective while scoffing at the somewhat haphazard nature of the Free Software development model, where individual parts with limited functionality are pulled together to perform more complex tasks. What they fail to see is that the Free Software model is a better evolutionary "fit" - limited, clearly specified functional components have a better long-term chance of survival than some sprawling piece of software with dubious specification. We should be lauding the quirky, patchwork nature of Free Software development, not laughing at it.
I have no easy solution to the economic conundrum that Free Software poses, but I don't see this as a legitimate excuse to dismiss it as unworkable - it is too valuable. In the end, the success of Free Software is inevitable, so we may as well tackle the economic challenges head on, rather than attempting to paper over them.
At the outset, I said that an unusual manifestation of this problem had prompted me to write this piece. That manifestation is described in this article. In a nutshell, it talks about the unexpected consequences that recent attacks on a certain company's database engine have had on seemingly unrelated systems. Much of this "fallout" is almost certainly a direct result of efforts to integrate disparate proprietary systems, without a clear understanding of the specification of each system. I'm not for a minute suggesting that Free Software in its current state would have completely mitigated these undesirable results, but the discrete and open specification that Free Software encourages offers a much better long-term solution to this kind of problem. I strongly believe that this is just one more nail in the coffin of the dinosaur that is the monolithic, "integrated" software solution.
Linux myths that never die
January 3, 2003
As a long-time Microsoft Windows user who recently switched almost exclusively to Linux, I'd like to share a few observations on the transition.
Rather than writing an exhaustive feature comparison, I'm going to look at a few common (and incredibly persistent) myths about Linux, comparing the myth with my own experience. I emphasize that this is not a technical analysis of Windows/Linux pros and cons - it's a purely subjective study based on my personal experiences with hardware and software that I use every day.
For the Linux faithful my observations will probably read like old news, but these myths are so ingrained in the Windows culture that I think these points bear repeating.
So, in no particular order...
Myth: Linux support for power management is second-rate.
Fact: For me, the most important aspect of power management is the "suspend" function on my laptop. I've found the Linux suspend function works flawlessly, and suspend/resume operations are much faster than under Windows 2K. For easy access, I added a "suspend" button to the taskbar beside the "lock"/"log-out" buttons.
Myth: Only techno-geeks can keep Linux software up to date.
Fact: Red Hat's Update Agent updates all my Red Hat software with the click of a few buttons. I get emailed notifications when updates are available. I decide when and what I want to upgrade, which suits me just fine.
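For the command-line inclined, the same agent can be driven from a shell, (from memory, so check the man page for the details):

    # List the available updates, then apply them all (run as root).
    up2date -l
    up2date -u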
Myth: A switch to Linux means all my Windows "stuff" will be lost.
Fact: I installed Linux on a separate hard disk partition. The Linux boot manager, (GRUB), allows me to boot either Windows or Linux. When I boot Linux, all my old Windows drives are mounted and fully accessible, (I mount them as /win/C, /win/D, etc. so things are easy to find). I installed my Windows fonts on Linux using the graphical font manager, so documents look pretty much as they did under Windows. I haven't had any problems opening my Microsoft Office documents using OpenOffice, (though I confess that I don't use many advanced MS Office features - your mileage may vary if you're a true Microsoft-techie). I use samba to mount remote Windows drives, so I haven't needed to switch O/S on my file servers.
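The mounts themselves are just ordinary /etc/fstab entries along these lines, (device names and filesystem types will obviously vary from machine to machine):

    # Windows partitions, mounted where I can find them
    /dev/hda1   /win/C   vfat   defaults   0 0
    /dev/hda5   /win/D   vfat   defaults   0 0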
Myth: Linux does not support a wide range of devices.
Fact: I use DVD, CD, wireless networking, wireless keyboard and mouse, Rio MP3 player and various other USB devices on my laptop. On my desktop I use scanners, printers, cameras and a TV card. I've had no problems getting any device to work. In some cases the drivers are not available on the installation CDs, so a little "googling" has been required to find what I need. I suspect that this has more to do with Microsoft's monopoly and dubious licensing practices than any failure on the part of the Linux community. I have occasionally had to do "make; make install" operations from the command line, but, frankly, this is not as scary or technically demanding as certain people might have you believe.
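For the record, those dreaded command-line builds usually amount to nothing more than this, run from the unpacked source directory:

    ./configure     # adapt the build to your system
    make            # compile
    make install    # install, usually as root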
The bottom line is that most things I need to do on a day-to-day basis I can do as well, or better, with Linux. And, needless to say, the TCO myth isn't even worth talking about.
Happy New Year to all,