Scott Lowe’s post this morning echoes something I learned a few weeks ago myself: Windows Server 2008 was released as SP1. And I had the same thought as him: WTF.
For this we have to thank all of those organizations that have chosen arbitrary releases as the first time they’ll touch new software. From operating systems to disk array firmware, it seems that a lot of system administrators will only start looking at a new version once it’s had a service pack or patch set released for it. And as a result we now have software vendors gaming the system by releasing first versions as SP1.
I’d like to share with you all a little secret: all software has bugs. Service packs introduce new and potentially buggy features and drivers along with all the fixes for known problems. Heck, sometimes even the fixes need fixes, because the service pack didn’t get as much testing as the beta for the OS itself. As such, waiting for an arbitrary release means that you trade time in the lifecycle of the software for the perception of stability. The time spent waiting for SP1 would be better used developing quality test environments, and quality patch management infrastructure. Then you’d be able to tell for yourself if SP1 is actually any better than the original release was, deploy it promptly when you decide it’s good, and deploy fixes efficiently when you need them. Facts & strategies, versus hearsay & mythology.
As for Microsoft gaming the system, I don’t blame them at all. Organizations have decided to be completely arbitrary about software evaluation; why can’t vendors do the same?
I remember years ago when I heard an IBM pronouncement that there would be no more .0 releases. All new products would be released as .1 for the same reason.
It didn’t last, though; we’ve got a number of .0 IBM products in house. Although at this point it does get confusing with all the dot levels — major release, minor release, fixpack, etc. We’ve got a number of products where the version number is almost as long as the product name, e.g. v6.0.2.19.