While computer professionals have not found a comprehensive cure for the Y2K disease, they have developed many partial solutions that, taken together, can render the Y2K bug impotent, with brute-force software remediation by overseas services held in reserve as a backup.
People keep asking me how we could have been so shortsighted as not to have foreseen the problem known as the millennium, or year 2000, bug. This question implies that we should have caught the problem at its source and not let it get to the stage where so many people are in a near panic in dealing with it. The next most frequent question posed to me is even more vexing: “You know all about computers. Why don’t you just apply a generalized fix and end this all now?” Such a fix has come to be known as a “silver bullet.” My reply is: “Do you believe in magic? I don’t. The Y2K bug involves so many different types of computers and software applications that producing a silver bullet to eradicate it is beyond my imagination.” Apart from magic, however, there are clever and insightful techniques that information technology professionals are using to control the millennium bug. We will call them “silver pellets.”
In addition to a variety of silver-pellet-type solutions, there is a completely different approach to controlling the Y2K bug that is more akin to brute force: Hire a sufficient number of computer analysts and programmers to fix all the deficient software. This can be ruinously expensive, and in most places in the United States, there is a shortage of qualified people to actually staff such an effort. However, many qualified programmers who can fill the need are available overseas, and the costs of this labor are much lower, so remote programmers have offered another approach to solving the Y2K problem.
The scope of the Y2K problem is massive. In our computer-pervasive society, the millennium bug has been found to inhabit all four types of computers on which we depend: personal computers, telecommunications devices with embedded processors, microcontrollers in “smart” appliances, and mainframe and other shared computers. Personal computers now inhabit most offices and more than half the homes in North America. Our cars, ovens, and televisions are saturated with embedded processors. Everywhere we look we see smart appliances and devices that connect to them [see “Y2K Up Close,” The World & I, May 1999, p. 170]. Furthermore, the interconnectedness of computers at all levels–from embedded microprocessors to PCs to mainframe computers–means that the ill effects of the bug could spread readily, even into commerce and the Internet’s vast array of information sources.
Although the Y2K bug is related to how a computer tells time, using both hardware and software, Y2K bug fixes are almost always achieved through a change in software. Given the hundreds of computer languages and myriad dialects used to program computers, it is hardly surprising that there is no universal silver bullet to kill the bug.
In 1997–98, as concerns about the threat of the Y2K problem were growing, many companies and government agencies still refrained from addressing it. The common wisdom of commentators at the time was that no silver bullet solution had emerged or was likely to, and that those who waited to start addressing the problem would face impossible deadlines, exorbitant costs, and manpower shortages in trying to correct it. Now, five months away from Y2K day, as this is being written, the situation is not nearly as bleak as had earlier been predicted, and considerable benefits have even been realized through dealing with the problem. Each of the Y2K silver pellets tackles a particular segment of the overall complex issue and, within that segment, applies clever programming techniques either to solve that part of the problem or to greatly simplify its manual resolution. Let’s look at some representative examples.
One of the daunting challenges of having computers on everyone’s desktop is that many of them are not Y2K-ready. The census of PCs in America is now over 60 million. According to a survey completed in 1998, 97 percent of PCs made before 1997, and 47 percent made in 1997, would not be able to make the transition from 1999 to 2000 unless the date/time were manually reset. Compound this with the realization that most knowledge workers are dependent on their personal computers and we see that it is likely to take some substantial effort to assess the Y2K-compliance status of each and every personal computer.
Clearly, a most useful silver pellet would be a tool that could perform an automated test of network-attached personal computers. One such tool is offered by ON Technology. Its product, called ON Command CCM, can check the Y2K compliance of hardware (the BIOS) on multiple PCs simultaneously from a central administrative system. ON Command CCM can also be used to automatically reset local-area network-attached PCs to 1/1/00 (January 1, 2000) without end-user intervention. In this way, the PC system can then be tested for compliance.
Merely asserting that a PC is Y2K-compliant after performing an upgrade is not sufficient. I strongly advise that each computer be tested and verified to be Y2K-compliant. This means testing the system hardware (BIOS and the real-time clock). Even though the effort to upgrade and test one computer’s hardware for compliance may require less than half an hour, to do this across an entire enterprise with thousands of PCs is no small task. ON Technology’s silver pellet is thus a useful innovation.
Even when the PC becomes Y2K-compliant in regard to its internal date/time processing, its local and networked applications software may still need to be upgraded. ON’s tools are handy for this task as well. With them, it is possible to use an efficient, centralized approach to distributing Y2K-enhanced applications software, such as is available from Oracle or SAP R/3. Again, we are taking advantage of local-area network connectivity, with its speed of transmission and software-controlled automated management, to replace catch-as-catch-can software administration. Interestingly, tools such as these can also be used productively after the Y2K hubbub is behind us; PCs will always need testing for effective functioning on specific vulnerable dates, such as leap day 2000 (February 29, 2000), or for other purposes.
If you have only one PC to test for compliance of its hardware (real-time clock and BIOS), however, you can use a product from Computer Experts called Millennium Bug Toolkit, which is available over the Internet at www.computerexperts.co.uk/pc2000. One of the nice features of this silver pellet is that it works from the floppy disk drive and does not interact with your hard disk during the testing procedure. In this way, it protects your data and software during testing; not all competitor products do.
By the millions
It is commonplace nowadays for corporations and governments to maintain large bodies of their own custom-written applications software, written mainly in the COBOL language for a mainframe environment. One such company presently completing its Y2K-compliance project has been reviewing and remediating some 21 million lines of programming code. This company is using another type of silver pellet to identify and analyze its software programs that fail the Y2K test. To appreciate the size of this task, consider that 21 million lines of code would take some 400,000 pages to print. Missing even one date occurrence can lead to a failed application. Platinum Technology’s TransCentury Analysis Tools handle more than 150 date formats and use the power of the mainframe itself to break the 21 million lines of code into manageable units. Platinum has a complementary product, Calendar Routines, that can automatically generate replacement software to fix noncompliant date logic; its FileAge product can simulate dates after December 31, 1999, for use in testing the changed application code.
Economies of scale
With thousands of personal computers to fix, millions of lines of code to scan, and a hard-and-fast deadline of January 1, 2000, the real Y2K problem is one of managing a multitude of checks, upgrades, and tests. Not only must programmers carefully manage their own work, but upper management must also be extremely thorough and careful in managing the total repair job. The real management problem arises not because a particular instance of the date problem is so difficult to fix but because of the sheer number of bug occurrences that must be found and repaired–and all without error. The problem of locating all of the bug occurrences is what concerns me the most.
Another silver pellet aims not only to speed up bug-fix management but also to link the programmer’s task progress with status reports to upper management. Turnkey 2000 of San Jose, California, has such a tool called Unravel 2000, which automates up to 90 percent of a programmer’s work. It can convert noncompliant software code to be Y2K-compliant and can generate project-management reports of the changes. This tool makes it possible for managers to cut conversion costs by performing assessments in mid-project and to update project schedules and change priorities as needed.
Cleverness and brute force
What is the challenge in bringing 21 million lines of code up to Y2K-compliance standards? From surveys, we know that noncompliant date faults occur, on average, with a frequency of one per 1,000 lines of code (about one for every 20 pages of code). This works out to 21,000 faults, and the Y2K-repair effort is aimed at them. If a programmer can find, fix, and test one date fault in half a day’s time on average, then we are looking at about 10,000 days of work. If the company has 20 programmers dedicated full time to the Y2K-remediation task, then it will take 500 days (slightly more than two entire working years for the 20 people) to complete this brute-force solution.
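The arithmetic behind this estimate can be checked in a few lines of Python; the exact values come out slightly higher than the rounded figures quoted above:

```python
# Reproducing the brute-force estimate with the article's assumed figures.
lines_of_code = 21_000_000
faults = lines_of_code // 1_000        # one date fault per 1,000 lines
days_of_work = faults * 0.5            # half a programmer-day per fault
programmers = 20
calendar_days = days_of_work / programmers

print(faults)          # 21000 faults
print(days_of_work)    # 10500.0 programmer-days ("about 10,000")
print(calendar_days)   # 525.0 working days, just over two working years
```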
What if it were possible to adopt an approach that avoids changing the software at all? If this could work, then a tremendous load could be lifted. Approaches of this sort have proven successful in many cases. Understanding how they work requires some background. Let’s start by defining the year 2000 computer problem as a discrepancy between the external, four-digit dates used by people and the internal, two-digit dates used by computers. The discrepancy can be expressed by two statements: “2000 is greater than 1999 (2000 > 1999),” and “00 is less than 99 (00 < 99).” A noncompliant computer knows only to drop or add the first two digits when dates pass across the divider between internal and external dates. Thus, its internal dates are stuck in the twentieth century, in a loop between 1900 and 1999.
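A few lines of Python (an illustration of the two-digit arithmetic, not any particular system’s code) show the discrepancy in action:

```python
# Four-digit years compare correctly across the century boundary...
assert 2000 > 1999

# ...but their two-digit internal forms do not: "00" sorts before "99".
assert not (0 > 99)

# A legacy program computing an interval from two-digit years goes negative:
years_elapsed = 0 - 99   # year "00" minus year "99"
print(years_elapsed)     # -99, instead of the correct answer, 1
```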
The clever trick called program encapsulation, which can avoid changing all of the software, shifts each date when it crosses the divide from the external, four-digit form to the internal, two-digit form or vice versa. If, for example, we subtract 28 years from both 2000 and 1999, we see that 1972 > 1971, and after dropping the first two digits of each year, that 72 > 71. In the reverse path the computer would first convert 72 and 71 to 1972 and 1971, then add 28 to each of those. With this date shift built in, the noncompliant software can continue to work without change. Programmers have selected 28, or multiples of 28, as the preferred date-shift increment because the pattern of the days of the week (Monday, Tuesday, …), the days of the month (1, 2, …), and the leap years repeats identically every 28 years. Given that 2000 is a leap year but 1900 and 2100 are not, it may appear that this technique can in principle be used for all years whose external representation falls within 1901–2099. That window is further reduced, however, to external dates of 1929–2099 by constraints on the date-shifted internal representation (1929 − 28 = 1901).
With program encapsulation, the computer’s clock is operating in its own world 28 years earlier than real time. Everything internal to the boundary is shifted by –28 years on input and +28 years on output. Data files also are taken inside the time boundary–by shifting all the year values on the file by minus 28 years. One has to be careful with this approach that outputs are properly adjusted by +28 years. Usually, this is not a difficult task.
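As a minimal sketch of the idea (in Python, with illustrative function names; real encapsulation utilities wrap legacy COBOL applications, not Python), the date shift at the boundary looks like this:

```python
from datetime import date

SHIFT = 28  # years; 28 preserves both day-of-week and leap-year patterns

def to_internal(external):
    """Shift an external date back 28 years before the legacy code sees it."""
    return external.replace(year=external.year - SHIFT)

def to_external(internal):
    """Shift a legacy-code result forward 28 years on the way out."""
    return internal.replace(year=internal.year + SHIFT)

# January 1, 2000, enters the legacy code as January 1, 1972...
assert to_internal(date(2000, 1, 1)) == date(1972, 1, 1)
# ...and leap day 2000 maps to leap day 1972, on the same day of the week.
assert to_internal(date(2000, 2, 29)) == date(1972, 2, 29)
assert date(2000, 2, 29).weekday() == date(1972, 2, 29).weekday()
# The round trip restores the external date exactly.
assert to_external(to_internal(date(1999, 12, 31))) == date(1999, 12, 31)
```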
Thus, program encapsulation is one of the easiest methods of correcting the Y2K bug, because it simply changes the frame of reference for time. Of course, this silver pellet assumes that the software runs correctly over the corresponding span of time in the twentieth century. Seeing this technique, some have commented that it only postpones the day of reckoning. But the shift can be reapplied with 56 years, 84 years, or any other multiple of 28, extending its applicability ad infinitum.
A process patent covering the concept of program encapsulation dating from 1995 is held by the original developers, Turn of the Century Solution, LP. Anyone using the method is required to obtain a license. Seven software developers have licensed the process and offer program encapsulation utilities for all major platforms.
A closely related technique, data encapsulation, handles the time shift inside programs rather than outside them as in program encapsulation. With data encapsulation, new code to shift the data forward and back is inserted at every input or output statement in the programs–the disadvantage being that many programs have to be changed and recompiled. The advantage, however, is that vast repositories of data do not have to be expanded to handle four-digit years. This process was developed by Paul O’Neil of Raytheon and is in the public domain.
The full significance of program encapsulation is that it avoids the necessity of performing tests with advancing dates. Since the software is not changed to handle dates in two different centuries, it does not need to be tested with dates in two different centuries. Given that, for medium- and large-scale software application systems, advanced date testing is usually 50 percent of the personnel effort, eliminating this task makes program encapsulation a significant silver pellet.
Clearly there is a shortage of computer programmers in the United States. Companies trying to make their systems Y2K-compliant need to dedicate programmers, analysts, managers, and a certain proportion of their information technology infrastructure to the remediation task. However, these companies also have to maintain normal operations such as payroll, accounts receivable, and general ledger. With the fixed deadline of January 1, 2000, and the exigencies of normal maintenance, striking a balance has been difficult. Because the Y2K-remediation task is finite, it has been simpler to contract it to other companies. But America has no surplus personnel resources available. Enterprising groups like Trigent Software of Southborough, Massachusetts, have specialized in filling the need for Y2K personnel by contracting with offshore programmers in countries such as India, Pakistan, and the Philippines. Mexico’s Softtek is working with Ernst & Young LLP to provide its “nearshore” programmers for large software development projects. In these countries, there is both the skilled labor force and sufficient familiarity with the English language to be able to read and write technical documentation.
The work performed by remote programmers follows the same approach used in the United States:
- find the instances where dates are used;
- document their location;
- analyze their criticality;
- devise a fix and reprogram using an agreed-upon convention (e.g., to expand the year code from two to four digits);
- carry out testing to validate the changes; and
- report the status of remediation results to the management.
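The first step, locating every place dates are used, is essentially a pattern scan. The toy Python scanner below illustrates the idea on a scrap of COBOL; the identifier heuristic and the sample record are invented for illustration, and real tools such as Platinum’s recognize more than 150 date formats:

```python
import re

# Toy heuristic: flag identifiers containing DATE, YY, or YR.
DATE_NAME = re.compile(r"\b\w*(DATE|YY|YR)\w*\b", re.IGNORECASE)

def find_date_lines(source):
    """Return (line number, line) pairs that mention a date-like identifier."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if DATE_NAME.search(line):
            hits.append((lineno, line.strip()))
    return hits

cobol = """\
01 EMP-RECORD.
   05 EMP-NAME     PIC X(30).
   05 HIRE-DATE    PIC 9(6).
   05 BIRTH-YY     PIC 99.
"""
for lineno, line in find_date_lines(cobol):
    print(lineno, line)   # flags the HIRE-DATE and BIRTH-YY lines
```

Each flagged line would then be documented, analyzed for criticality, and queued for repair, exactly as the steps above describe.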
The first four stages of repair take about 30 percent of the total effort. Testing takes about 50 percent, and reporting requires the remaining 20 percent.
Even if some stateside Y2K-remediation projects have the programming and analysis talent, they often skimp on the testing phase. Generating test data can be both difficult and tedious, and programmers are known to be highly optimistic about the outcome of their creativity. As a result, testing is given too little emphasis, and this can spell the ruination of a Y2K project. With remote programmers, testing is often more exhaustive because they are happy to get the relatively high-paying work and want to please their U.S. customers. For these offshore firms, their Y2K projects are golden opportunities to show what they can do. They realize that if they do a good job, there is likely to be more work from these sources in the future.
One of the substantial benefits of this approach is cost savings for American companies hiring offshore programmers, whose wages are about a tenth of what they are in the United States. Through the use of the Internet and satellite-based communications facilities, information officers are beaming software that needs repair across the globe to a waiting cadre of technical specialists. By using remote programmers, the time- and manpower-intensive manual labor of correcting hundreds of thousands of lines of computer code is being done at reduced prices by burgeoning offshore industries.
The silver lining
The enormous effort to reach Y2K-compliance standards is starting to pay a substantial bonus. Because the efforts are mostly managerial, it should be no surprise that most of the benefits are in that domain. With personal computers so pervasive and software and hardware proliferating so rapidly, successful Y2K-compliance projects have put many companies in a position, for the first time, to know and keep current on the full inventory of their computers and software. Some of the silver pellets described in this article are being used to track all of a company’s software and to provide upgrades as new versions are released. Because testing and validation of Y2K compliance are so critical, Y2K tools were acquired for this purpose. But these tools will also be used for non-Y2K testing, so the end result will be more thorough testing of future systems and their applications. Our mainframes are getting the cleaning of their lives; cobwebs in software libraries are being swept away after years of accumulation. Having more current systems, better tested, with better management of both central and distributed information technology assets, may not be too much to pay for being Y2K-compliant.
After all is said and done, January 1, 2000, is not the last of our worries about processing dates by computer. The next date to be concerned about is February 29, 2000, which is the first leap day in a century year since 1600. The simplest rule in programming for leap years is to add an extra day when the year is evenly divisible by four. Since computers were not around in 1900, that century’s exception is no problem for real-time work, and 2100, the next exception, is still over 100 years away. So the simplest rule works correctly in 2000. What we have to be concerned with is an algorithm that implements the 100-year exception (century years are not leap years) but omits the 400-year exception (years divisible by 400 are). December 31, 2000, may produce another surprise, as it is the 366th day of the year. If a program tells time by counting days from a fixed point, it will be incorrect if the programmer forgets that 2000 is a leap year. In a little over a year, we’ll see if anyone got caught on this one.
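The three variants of the leap-year rule discussed here fit in a few lines of Python; note how the half-corrected rule, with only the 100-year exception, gets 2000 wrong:

```python
def is_leap_simple(year):
    """Simplest rule: every year divisible by 4 is a leap year."""
    return year % 4 == 0

def is_leap_century_only(year):
    """Adds the 100-year exception but omits the 400-year exception."""
    return year % 4 == 0 and year % 100 != 0

def is_leap_gregorian(year):
    """The full Gregorian rule, with both exceptions."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 2000 is a leap year; the simple rule gets it right by accident,
# while the half-corrected rule wrongly treats it as a common year.
print(is_leap_simple(2000))        # True
print(is_leap_century_only(2000))  # False -- the bug described above
print(is_leap_gregorian(2000))     # True
print(is_leap_gregorian(1900), is_leap_gregorian(2100))  # False False
```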
More ominous is September 9, 2001, at precisely 1:46:39 a.m. universal time. This is a special date for UNIX systems. Unlike classical mainframes or personal computers, UNIX computers are programmed to tell time by counting the seconds from a fixed point: midnight, January 1, 1970. On September 9, 2001, the counter reaches 999,999,999. The significance of this number is that programmers often use such a number as the code for end-of-file. Thus, on September 9, 2001, UNIX programs may mysteriously end prematurely or give erroneous results.
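Assuming Python’s standard library for the check, the moment in question falls out of a one-line epoch calculation:

```python
from datetime import datetime, timezone

# UNIX time counts seconds from midnight UTC, January 1, 1970.
moment = datetime.fromtimestamp(999_999_999, tz=timezone.utc)
print(moment)   # 2001-09-09 01:46:39+00:00
```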
Well, with computers it is always something. I’m glad we have a good stock of silver pellets and remote programmers to call on again.