Silver Pellets and Remote Programmers

While computer professionals have not found a comprehensive cure for the Y2K disease, they have developed many partial solutions that, taken together, can render the Y2K bug impotent, with brute-force software remediation by overseas services held in reserve as a backup.

People keep asking me how we could have been so shortsighted as not to have foreseen the problem known as the millennium, or year 2000, bug. This question implies that we should have caught the problem at its source and not let it get to the stage where so many people are in a near panic in dealing with it. The next most frequent question posed to me is even more vexing: “You know all about computers. Why don’t you just apply a generalized fix and end this all now?” Such a fix has come to be known as a “silver bullet.” My reply is: “Do you believe in magic? I don’t. The Y2K bug involves so many different types of computers and software applications that producing a silver bullet to eradicate it is beyond my imagination.” Apart from magic, however, there are clever and insightful techniques that information technology professionals are using to control the millennium bug. We will call them “silver pellets.”


In addition to a variety of silver-pellet-type solutions, there is a completely different approach to controlling the Y2K bug that is more akin to brute force: Hire a sufficient number of computer analysts and programmers to fix all the deficient software. This can be ruinously expensive, and in most places in the United States, there is a shortage of qualified people to actually staff such an effort. However, many qualified programmers who can fill the need are available overseas, and the costs of this labor are much lower, so remote programmers have offered another approach to solving the Y2K problem.

The scope of the Y2K problem is massive. In our computer-pervasive society, the millennium bug has been found to inhabit all four types of computers on which we depend: personal computers, telecommunications devices with embedded processors, microcontrollers in “smart” appliances, and mainframe and other shared computers. Personal computers now inhabit most offices and more than half the homes in North America. Our cars, ovens, and televisions are saturated with embedded processors. Everywhere we look we see smart appliances and devices that connect to them [see “Y2K Up Close,” The World & I, May 1999, p. 170]. Furthermore, the interconnectedness of computers at all levels – from embedded microprocessors to PCs to mainframe computers – means that the ill effects of the bug could spread readily, even into commerce and the Internet’s vast array of information sources.

Although the Y2K bug is related to how a computer tells time, using both hardware and software, Y2K bug fixes are almost always achieved through a change in software. Given the hundreds of computer languages and myriad dialects used to program computers, it is hardly surprising that there is no universal silver bullet to kill the bug.

In 1997–98, as concerns about the threat of the Y2K problem were growing, many companies and government agencies still refrained from addressing it. The common wisdom of commentators at the time was that no silver bullet solution had emerged or was likely to, and that those who waited to start addressing the problem would face impossible deadlines, exorbitant costs, and manpower shortages in trying to correct it. Now, five months away from Y2K day, as this is being written, the situation is not nearly as bleak as had earlier been predicted, and considerable side benefits have even been realized from dealing with the problem. Each of the Y2K silver pellets tackles a particular segment of the overall complex issue and, within that segment, applies clever programming techniques either to solve that part of the problem or to greatly simplify its manual resolution. Let’s look at some representative examples.

PCs everywhere

One of the daunting challenges of having computers on everyone’s desktop is that many of them are not Y2K-ready. The census of PCs in America is now over 60 million. According to a survey completed in 1998, 97 percent of PCs made before 1997, and 47 percent made in 1997, would not be able to make the transition from 1999 to 2000 unless the date/time were manually reset. Compound this with the fact that most knowledge workers depend on their personal computers, and it is clear that assessing the Y2K-compliance status of each and every personal computer will take substantial effort.

Clearly, a most useful silver pellet would be a tool that could perform an automated test of network-attached personal computers. One such tool is offered by ON Technology. Its product, called ON Command CCM, can check the Y2K compliance of hardware (the BIOS) on multiple PCs simultaneously from a central administrative system. ON Command CCM can also be used to automatically reset local-area network-attached PCs to 1/1/00 (January 1, 2000) without end-user intervention. In this way, the PC system can then be tested for compliance.

Merely asserting that a PC is Y2K-compliant after performing an upgrade is not sufficient. I strongly advise that each computer be tested and verified to be Y2K-compliant. This means testing the system hardware (BIOS and the real-time clock). Even though the effort to upgrade and test one computer’s hardware for compliance may require less than half an hour, to do this across an entire enterprise with thousands of PCs is no small task. ON Technology’s silver pellet is thus a useful innovation.

Even when the PC becomes Y2K-compliant in regard to its internal date/time processing, its local and networked applications software may still need to be upgraded. ON’s tools are also handy for this task. With them, it is possible to use an efficient, centralized approach to distributing Y2K-enhanced applications software, such as is available from Oracle or SAP R/3. Again, we are taking advantage of the local-area network connectivity with its speed of transmission and software-controlled automated management to replace a “catch as one can” software administration. Interestingly, tools such as these can also be used productively after the Y2K hubbub is behind us; PCs will always need testing for effective functioning on specific vulnerable dates such as leap year 2000 (February 29, 2000) or for other purposes.

If you have only one PC to test for compliance of its hardware (real-time clock and BIOS), however, you can use a product from Computer Experts called Millennium Bug Toolkit, which is available over the Internet at www.computerexperts.co.uk/pc2000. One of the nice features of this silver pellet is that it works from the floppy disk drive and does not interact with your hard disk during the testing procedure. In this way, it protects your data and software during testing; not all competitor products do.

By the millions

It is commonplace nowadays for corporations and governments to maintain large bodies of their own custom-written applications software, written mainly in the COBOL language for a mainframe environment. One such company presently completing its Y2K-compliance project has been reviewing and remediating some 21 million lines of programming code. This company is using another type of silver pellet to identify and analyze its software programs that fail the Y2K test. To appreciate the size of this task, consider that 21 million lines of code takes some 400,000 pages to print. Missing even one date occurrence can lead to a failed application. Platinum Technology’s TransCentury Analysis Tools handles more than 150 date formats and uses the power of the mainframe itself to break up the 21 million lines of code into manageable units. Platinum has a complementary product, Calendar Routines, that can automatically generate replacement software to fix noncompliant date logic; its FileAge product can simulate dates after December 31, 1999, for use in testing the changed application code.
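To make the scanning step concrete, here is a minimal sketch, in Python rather than a commercial mainframe tool, of the kind of pattern matching such analysis products automate. The file paths and the handful of date-format heuristics are hypothetical; real products such as Platinum’s recognize far more formats than these.

```python
import re
import sys

# Hypothetical patterns for a few common two-digit date layouts. Real analysis
# tools handle well over a hundred formats; this sketch only illustrates the
# idea of flagging lines for a programmer to review.
PATTERNS = [
    re.compile(r"\b\d{2}/\d{2}/\d{2}\b"),                # literal MM/DD/YY dates
    re.compile(r"PIC\s+9\(6\)", re.IGNORECASE),          # 6-digit numeric fields (often YYMMDD)
    re.compile(r"\b\w+-(YR|YY|DATE)\b", re.IGNORECASE),  # suspicious field names
]

def scan(path):
    """Print every line of a source file that matches one of the heuristics."""
    hits = 0
    with open(path, errors="replace") as src:
        for lineno, line in enumerate(src, start=1):
            if any(p.search(line) for p in PATTERNS):
                hits += 1
                print(f"{path}:{lineno}: {line.rstrip()}")
    return hits

if __name__ == "__main__":
    total = sum(scan(path) for path in sys.argv[1:])
    print(f"{total} suspect line(s) found")
```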

Economies of scale

With thousands of personal computers to fix, millions of lines of code to scan, and a hard-and-fast deadline of January 1, 2000, the real Y2K problem is one of managing a multitude of checks, upgrades, and tests. Not only must programmers carefully manage their own work, but upper management must also be extremely thorough and careful in managing the total repair job. The real management problem arises not because a particular instance of the date problem is so difficult to fix but because of the sheer number of bug occurrences that must be found and repaired – and all without error. The problem of locating all of the bug occurrences is what concerns me the most.

Another silver pellet aims not only to speed up bug-fix management but also to link the programmer’s task progress with status reports to upper management. Turnkey 2000 of San Jose, California, has such a tool called Unravel 2000, which automates up to 90 percent of a programmer’s work. It can convert noncompliant software code to be Y2K-compliant and can generate project-management reports of the changes. This tool makes it possible for managers to cut conversion costs by performing assessments in mid-project and to update project schedules and change priorities as needed.

Cleverness and brute force


What is the challenge in bringing 21 million lines of code up to Y2K-compliance standards? From surveys, we know that noncompliant date faults occur, on average, with a frequency of one per 1,000 lines of code (about one for every 20 pages of code). This works out to 21,000 faults, and the Y2K-repair effort is aimed at them. If a programmer can find, fix, and test one date fault in half a day’s time on average, then we are looking at some 10,500 days of work. If the company has 20 programmers dedicated full time to the Y2K-remediation task, then it will take about 525 days (slightly more than two entire working years for the 20 people) to complete this brute-force solution.
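The arithmetic behind those estimates is simple enough to lay out explicitly; the snippet below merely restates the figures quoted above.

```python
# Back-of-the-envelope remediation effort, using the figures quoted above.
lines_of_code = 21_000_000
faults = lines_of_code // 1_000              # one date fault per 1,000 lines -> 21,000
days_per_fault = 0.5                         # find, fix, and test one fault in half a day
programmer_days = faults * days_per_fault    # 10,500 programmer-days

team_size = 20
calendar_days = programmer_days / team_size          # 525 working days
working_days_per_year = 250
years = calendar_days / working_days_per_year        # about 2.1 working years

print(faults, programmer_days, calendar_days, round(years, 1))
# 21000 10500.0 525.0 2.1
```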

What if it were possible to adopt an approach that would avoid changing the software at all? If this could work, then a tremendous load could be lifted. Approaches of this sort have proven successful in many cases. Understanding how they work requires some background. Let’s start by defining the year 2000 computer problem as a discrepancy between the external, four-digit dates used by people and the internal, two-digit dates used by computers. The discrepancy can be expressed by the two statements: “2000 is greater than 1999 (2000 > 1999),” and “00 is less than 99 (00 < 99).” A noncompliant computer only knows to drop or add the first two digits when dates pass across the divider between internal dates and external dates. Thus, its internal dates are stuck in the twentieth century, in a loop between 1900 and 1999.
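A few lines of code make the discrepancy concrete. This is only an illustration of the comparison going wrong, not a model of any particular noncompliant system:

```python
# The heart of the Y2K discrepancy: with four digits the comparison is right,
# with only the stored two digits it is backwards.
external_ok = 2000 > 1999             # True  -- people's view of the dates
internal_bug = int("00") > int("99")  # False -- the computer's two-digit view
print(external_ok, internal_bug)

# A program that ages or sorts records on the two-digit year therefore
# treats "00" as 1900 and files the year 2000 before 1999, not after it.
print(sorted(["99", "00"]))           # ['00', '99']
```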

The clever trick called program encapsulation, which can avoid changing all of the software, shifts each date when it crosses the divide from external, four-digit form to internal, two-digit form or vice versa. If, for example, we subtract 28 years from both 2000 and 1999, we see that 1972 > 1971, and after dropping the first two digits of each year, that 72 > 71. In the reverse path the computer would first convert 72 and 71 to 1972 and 1971, then add 28 to each of those. With this date shift built in, the noncompliant software can continue to work without change. Programmers have selected 28, or multiples of 28, as the preferred date-shift increment because the cycles of days of the week (Monday, Tuesday, …), days of the month (1, 2, …), and leap years repeat identically every 28 years. Given that 2000 is a leap year but 1900 and 2100 are not, it may appear that this technique can in principle be used for all years whose external representation falls within 1901–2099. That window is further reduced, however, to external dates of 1929–2099 by constraints on the date-shifted internal representation (1929 minus 28 = 1901).
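The 28-year repetition is easy to check. The short verification below, a sketch for illustration only, confirms that every day of 2000 falls on the same weekday as the corresponding day of 1972, and that both years are leap years:

```python
import calendar
from datetime import date

# Spot-check the 28-year rule: for every day of 2000, the date shifted back
# 28 years (to 1972) falls on the same day of the week, and both years are
# leap years, so the calendars line up exactly.
assert calendar.isleap(2000) and calendar.isleap(1972)

d = date(2000, 1, 1)
while d.year == 2000:
    shifted = d.replace(year=d.year - 28)
    assert d.weekday() == shifted.weekday(), (d, shifted)
    d = date.fromordinal(d.toordinal() + 1)

print("every day of 2000 matches the weekday of the same day in 1972")
```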

With program encapsulation, the computer’s clock is operating in its own world 28 years earlier than real time. Everything internal to the boundary is shifted by –28 years on input and +28 years on output. Data files also are taken inside the time boundary–by shifting all the year values on the file by minus 28 years. One has to be careful with this approach that outputs are properly adjusted by +28 years. Usually, this is not a difficult task.

Thus, program encapsulation is one of the easiest methods of correcting the Y2K bug: it simply changes the frame of reference for time. Of course, this silver pellet assumes that the software runs correctly for the relevant span of time in the twentieth century. Seeing this technique, some have commented that it only postpones the day of reckoning. But the shift can be applied again in 28 years, or taken as 56, 84, or any other multiple of 28 from the start, so its applicability can be extended ad infinitum.

A process patent covering the concept of program encapsulation dating from 1995 is held by the original developers, Turn of the Century Solution, LP. Anyone using the method is required to obtain a license. Seven software developers have licensed the process and offer program encapsulation utilities for all major platforms.

A closely related technique, data encapsulation, handles the time shift inside programs rather than outside programs as in program encapsulation. With data encapsulation, new code to shift the data forward and back is inserted at every input or output statement in the programs – the disadvantage being that many programs have to be changed and recompiled. The advantage, however, is that vast repositories of data do not have to be expanded to handle four-digit years. This process was developed by Paul O’Neil of Raytheon and is in the public domain.
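As a rough illustration of the idea (not O’Neil’s actual code), the shims below shift a hypothetical two-digit year field back 28 years on the way in and forward 28 years on the way out, leaving both the record format and the old date logic untouched:

```python
SHIFT = 28  # years

def read_record(line):
    """Hypothetical input shim: shift the two-digit year back 28 years as the
    record crosses into the program, so the old date logic keeps working."""
    name, yy = line.rstrip().split(",")     # e.g. "SMITH,00"
    shifted = (int(yy) - SHIFT) % 100       # 00 -> 72, 99 -> 71
    return name, f"{shifted:02d}"

def write_record(name, yy):
    """Hypothetical output shim: undo the shift on the way back out."""
    restored = (int(yy) + SHIFT) % 100
    return f"{name},{restored:02d}"

name, internal_year = read_record("SMITH,00")
print(internal_year)                        # '72' -- safely inside the window
print(write_record(name, internal_year))    # 'SMITH,00'
```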

The full significance of program encapsulation is that it avoids the necessity of performing tests with advancing dates. Since the software is not changed to handle dates in two different centuries, it does not need to be tested with dates in two different centuries. Given that, for medium- and large-scale software application systems, advanced date testing is usually 50 percent of the personnel effort, eliminating this task makes program encapsulation a significant silver pellet.

Remote programmers

Clearly there is a shortage of computer programmers in the United States. Companies trying to make their systems Y2K-compliant need to dedicate programmers, analysts, managers, and a certain proportion of their information technology infrastructure to the remediation task. However, these companies also have to maintain normal operations such as payroll, accounts receivable, and general ledger. With the fixed deadline of January 1, 2000, and the exigencies of normal maintenance, striking a balance has been difficult. Because the Y2K-remediation task is finite, it has been simpler to contract it to other companies. But America has no surplus personnel resources available. Enterprising groups like Trigent Software of Southborough, Massachusetts, have specialized in filling the need for Y2K personnel by arranging with offshore programmers in countries such as India, Pakistan, and the Philippines. Mexico’s Softtek is working with Ernst and Young, LLP, to provide its “nearshore” programmers for large software development projects. In these countries, there is both the skilled labor force and sufficient familiarity with the English language to be able to read and write technical documentation.

The work performed by remote programmers follows the same approach used in the United States:

  1. find the instances where dates are used;
  2. document their location;
  3. analyze their criticality;
  4. devise a fix and reprogram using an agreed-upon convention (e.g., to expand the year code from two to four digits);
  5. carry out testing to validate the changes; and
  6. report the status of remediation results to the management.

The first four stages of repair take about 30 percent of the total effort. Testing takes about 50 percent, and reporting requires the remaining 20 percent.
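As a concrete example of step 4, one common convention expands a two-digit year to four digits using a pivot window. The sketch below is illustrative only; the pivot value of 30 is an assumption, and each project agrees on its own convention:

```python
PIVOT = 30  # assumption: two-digit years below 30 are read as 20xx, the rest as 19xx

def expand_year(yy):
    """Expand a two-digit year to four digits using a fixed pivot window."""
    yy = int(yy)
    return 2000 + yy if yy < PIVOT else 1900 + yy

for sample in ("99", "00", "05", "29", "30"):
    print(sample, "->", expand_year(sample))
# 99 -> 1999, 00 -> 2000, 05 -> 2005, 29 -> 2029, 30 -> 1930
```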

Even if some stateside Y2K-remediation projects have the programming and analysis talent, they often skimp on the testing phase. Generating test data can be both difficult and tedious, and programmers are known to be highly optimistic about the outcome of their creativity. As a result, testing is given too little emphasis, and this can spell ruin for a Y2K project. With remote programmers, testing is often more exhaustive because they are happy to get the relatively high-paying work and want to please their U.S. customers. For these offshore firms, Y2K projects are golden opportunities to show what they can do. They realize that if they do a good job, there is likely to be more work from these sources in the future.

One of the substantial benefits of this approach is cost savings for American companies hiring offshore programmers, whose wages are about a tenth of what they are in the United States. Through the use of the Internet and satellite-based communications facilities, information officers are beaming software that needs repair across the globe to a waiting cadre of technical specialists. By using remote programmers, the time- and manpower-intensive manual labor of correcting hundreds of thousands of lines of computer code is being done at reduced prices by burgeoning offshore industries.

The silver lining

The enormous effort to reach Y2K-compliance standards is starting to pay a substantial bonus. Because the efforts are mostly managerial, it should be no surprise that most of the benefits are in that domain. With personal computers so pervasive and software and hardware proliferating so rapidly, successful Y2K-compliance projects have put many companies in a position, for the first time, to know and keep current on the full inventory of their computers and software. Some of the silver pellets described in this article are being used to track all of a company’s software and to provide upgrades as new versions are released. Because testing and validation of Y2K compliance are so critical, Y2K tools were acquired for this purpose. But these tools will also be used for non-Y2K testing, so the end result will be more thorough testing of future systems and their applications. Our mainframes are getting the cleaning of their lives; cobwebs in software libraries are being swept away after years of accumulation. Having more current systems, better tested, and with better management of both central and distributed information technology assets may not be too much to pay for being Y2K-compliant.

After all is said and done, January 1, 2000, is not the last of our worries about processing dates by computer. The next date to be concerned about is February 29, 2000, which is the first leap day in a century year since 1600. The simplest rule in programming for leap years is to add an extra day when the year is evenly divisible by four. Since computers were not around in 1900, that exception is no problem for real-time work, and 2100, the next exception, is still over 100 years away. So the simplest rule works correctly in 2000. What we have to be concerned with is an algorithm that handles the 100-year exception (no leap day in years divisible by 100) but omits the 400-year exception (the leap day returns in years divisible by 400); such a program would wrongly skip February 29, 2000. December 31, 2000, may produce another surprise, as it is the 366th day of the year. If a program tells time by counting days from a fixed point, then it will be incorrect if the programmer forgets that 2000 is a leap year. In a little over a year, we’ll see if anyone got caught on this one.
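For the record, here is the full Gregorian rule alongside the two shortcuts discussed above, showing exactly where each one goes wrong:

```python
def is_leap_naive(year):
    """The 'simplest rule': every fourth year. Right for 2000, wrong for 1900 and 2100."""
    return year % 4 == 0

def is_leap_hundred_only(year):
    """Handles the 100-year exception but forgets the 400-year one: wrong for 2000."""
    return year % 4 == 0 and year % 100 != 0

def is_leap(year):
    """The full Gregorian rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

for y in (1900, 2000, 2100):
    print(y, is_leap_naive(y), is_leap_hundred_only(y), is_leap(y))
# 1900 True False False
# 2000 True False True   <- the 100-year-only rule misses the leap day in 2000
# 2100 True False False
```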

More ominous is September 9, 2001, at precisely 1:46:39 a.m. This is a special date for UNIX systems. Unlike classical mainframes or personal computers, UNIX computers are programmed to tell time by counting the seconds from a fixed point: midnight, January 1, 1970. On September 9, 2001, the counter reaches 999,999,999. The significance of this number is that programmers often use such a number as the code for the end-of-file. Thus, on September 9, 2001, UNIX programs may mysteriously end prematurely or give erroneous results.
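The date is easy to check by counting seconds from the UNIX epoch; the short calculation below reproduces it (the sentinel-value concern is as described above):

```python
from datetime import datetime, timedelta, timezone

# Count 999,999,999 seconds forward from the UNIX epoch
# (midnight, January 1, 1970, UTC).
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
print(epoch + timedelta(seconds=999_999_999))
# 2001-09-09 01:46:39+00:00

# A program that treats 999999999 as a sentinel (for example, an end-of-file
# code) will hit that value when the clock reaches the moment printed above.
```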

Well, with computers it is always something. I’m glad we have a good stock of silver pellets and remote programmers to call on again.

Ubiquitous computer leads way to improved accessibility


THE WORLD is zooming ahead in technology, and people with disabilities ask to be taken along on the journey.

Propelling the global craft is the computer, an instrument of enormous interest to many in the estimated 15 per cent of the Canadian population with disabilities.

“As computers increased in their prominence, it was always something cited as the great solution because the potential is there to have a great range of accessibility,” says Dr. Graham Strong, director of the Centre for Sight Enhancement at the University of Waterloo’s School of Optometry.

Realizing that potential requires industry to build the accessibility in when it devises its products.

“For people who are vision-impaired,” says Dr. Strong, “they have to wait while someone figures out how the technology can be adapted.”

That said, Dr. Strong and his colleagues at the Ontario Rehabilitation Technology Consortium are themselves working on new devices that will adapt the tools we take for granted. For example:

A spectacle-mounted autofocus telescope allows for swift, accurate, hands-free focusing for any viewing distance, something extremely useful for those with visual impairments and limited manipulative abilities. “Just by dropping their chin slightly, the person can automatically focus on the approaching bus, even though it’s moving,” Dr. Strong says.

New optical-character-recognition technology will extend the ability of people with low vision to work with scanning, faxing, photocopying and printing.


An electronic video telescope can enlarge the person’s view and offer a more highly contrasted image. “You have the ability to control light if it’s presented in a poor way, such as if you’re looking at somebody standing in front of a window,” Dr. Strong says.

These are specialized devices created expressly for a population with disabilities, of course. When it comes to keeping up with advances in the mainstream, says Bill Bennett, head of the technology transfer unit for the ORTC in Toronto, the task is nearly impossible.

“We really feel we need to have some touchstones or checks while the technology is developing,” says Bennett, who adds that a number of accommodations have been made by computer-industry leaders.

Sometimes advances for the general population can leave others trailing, as when early, keystroke-driven Internet access was superseded by graphical-interface systems that left the visually impaired behind. Conversely, when web sites build in an option for keystroke access, the Internet again becomes a valuable tool for this population.

“If you have mobility problems, difficulty getting out, the telecommunications tools can act as a replacement for going somewhere and that is significant,” Bennett says. “Also, people can work electronically with text without revealing their disability.”

The idea that adapting products for people with disabilities is always a costly process is a key misconception.

Mary Frances Laughton, chief of the federal assistive-devices program office in Ottawa, notes that “with a 15-cent chip put into a telephone, it will speak to you.” Such a feature would allow the visually-impaired to have services like call display in a usable manner.

The alternative device bought separately for the telephone, Laughton adds, costs $300.

Moreover, it is not just people with disabilities who benefit from such gestures. In the case of something like the graphical interface model now used in much computer software, many people have older computers that must rely on keystroking just as blind users do.

“When people are told what the issue is, I would say 98 per cent of the people do the retrofitting,” Laughton says. “It’s not that people aren’t willing to do it.”

Still, technological advances must always be examined carefully. Dr. Strong gives the example of talking traffic signals that tell the blind to cross the street or not. While that sounds helpful, safety may be compromised since the computer knows what colour the light is but not, as a dog would, whether a car is speeding through the red signal anyway.

“Technology is never a cure-all for everybody,” Laughton concludes, “but technology is going a long way to give people a lot more freedom than before.”

How to talk your way into a computer

The Globe and Mail

LAST week, the long-distance phone company Sprint began offering its customers an automated voice-recognition service: Punch in your access code, then just say “phone home” or “phone office,” and a computer does the rest.

Limited voice-recognition products have been around for years, but they’ve had problems with accuracy and limited vocabularies.

But with recent improvements in the power of microprocessors, in the software behind voice recognition and in the quality of microphones, we can expect to see more of the technology.

Among the new applications are video games, educational software, interactive cable television, desktop publishing, word processing, electronic mail and just about anything else to which you can apply computer power.

The arrival of dictation software, which can convert your spoken words into printed text, resulted directly from the development of microphones that can cancel out background noise. If you’re in a noisy office, factory or playroom and the computer can’t differentiate between your spoken commands and the sound of screaming in the background, the system isn’t going to work.

Voice-recognition systems such as those made by IBM can be divided into two basic categories.

The first is the small-vocabulary system, which is called speaker-independent. That means it will understand about 1,000 words as they are spoken by almost anybody. So a video-game player might be able to say, “Pick up the sword, grab the gold, blow out the candle and jump through the window on the right.” The system will obey.

Small-vocabulary systems are useful for jobs in which the language is precise and specific to a task at hand. With a home banking system, for example, you would say, “Pay Visa $1,400 from chequing,” or “Deposit $1,237 to savings.”

In these systems, the acoustic signal – the sound of your voice – comes in to an analog signal chip, which converts it to the digital language of computers.

That digital voiceprint is compared to a composite voiceprint of the way thousands of people have uttered the sounds in a single word.

So if the word is “four,” the computer charts the consonant sound at the beginning, the vowel sound in the middle and the consonant sound at the end, comparing them all to its library of sounds and words.

These small-vocabulary systems have high accuracy rates; they can be relied on to understand 99.5 per cent of what they hear. They can even decipher the speech of people who talk fast or speak with accents or have head colds.
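To illustrate the comparison step in the simplest possible terms, the toy sketch below matches a made-up feature vector against stored “voiceprint” templates by distance. It is not how IBM’s recognizers actually work, only a cartoon of the idea:

```python
import math

# Toy template matching, only to illustrate the comparison step described
# above; real recognizers use far richer acoustic models. The "voiceprints"
# here are invented feature vectors, one per vocabulary word.
TEMPLATES = {
    "four": [0.9, 0.2, 0.7],
    "five": [0.4, 0.8, 0.1],
    "nine": [0.3, 0.3, 0.9],
}

def recognize(features):
    """Return the vocabulary word whose template is closest to the input."""
    return min(TEMPLATES, key=lambda word: math.dist(features, TEMPLATES[word]))

print(recognize([0.85, 0.25, 0.65]))   # 'four'
```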


Large-vocabulary systems are trickier. First, they tend to be speaker-dependent, which means you must train them to recognize your voice. To do this, IBM’s large-vocabulary system asks the user to read Mark Twain’s A Ghost Story, which takes about an hour. That story was picked because it is acoustically juicy, using many of the sounds in the English language in both their usual and unusual combinations.

After listening to your recital, the system spends about four hours building a mathematical model of your voice. There are more than 100 ways you can speak a long e, for example, depending on what sounds come before or after it.

Once the training is done, the system is ready to take dictation from you, at a speed of up to 70 words a minute.

IBM’s version works by listening to combinations of three words. If you start a sentence with a word that sounds like “there” it will print “there” on your computer screen because, according to its mathematical model, that is the most likely spelling of that sound at the beginning of a sentence.

But if your second word is “parking,” the system changes the spelling of the first word to “they’re” because that is the most likely word to precede “parking” when it’s a verb.

If your third word is “spot,” the system will change the spelling of the phrase to “their parking spot.” IBM’s system can back up as far as five words to calculate the most likely context.
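A toy sketch conveys the flavor of this context-driven revision. The scores below are invented for the “their parking spot” example and bear no relation to IBM’s actual statistical model:

```python
# Toy illustration of revising a spelling as more context arrives, loosely
# following the "their parking spot" example above; the scores are made up.
SCORES = {
    ("there",   "<start>", None):           0.6,  # most likely alone at sentence start
    ("they're", "<start>", "parking"):      0.7,  # best guess once "parking" is heard
    ("their",   "<start>", "parking spot"): 0.8,  # revised again after "spot"
}

def best_spelling(prev_word, following_words):
    """Pick the most likely spelling given the words heard so far."""
    context = " ".join(following_words) if following_words else None
    candidates = {s: p for (s, prev, ctx), p in SCORES.items()
                  if prev == prev_word and ctx == context}
    return max(candidates, key=candidates.get) if candidates else "there"

print(best_spelling("<start>", []))                   # there
print(best_spelling("<start>", ["parking"]))          # they're
print(best_spelling("<start>", ["parking", "spot"]))  # their
```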

The large-vocabulary IBM system, which now runs on late-model personal computers, can handle about 32,000 words and most of the combinations that can be made with them. Other systems, created primarily for research, have libraries of more than 100,000 words but require larger computers.

IBM expects to shrink its voice-recognition software down to credit-card size by this summer, which means that by next Christmas you can expect to see hand-held devices that convert spoken words into text.

Europeans push computer plan


European physicists, looking enviously across the Atlantic at the $638-million high-speed computing initiative proposed by the Bush Administration, are pushing for an even more ambitious European effort. Last week, a working group of the European Commission, chaired by CERN director Carlo Rubbia, laid out a proposal for a high-speed computer network spanning the continent, and massive investment in the development of a European supercomputer industry. Total cost: about $1.4 billion a year over the next decade, half from government and half from industry.

Europe has a long way to go to rival the United States and Japan in supercomputing, however. Although Europe represents 30% of the $2.6-billion world market for supercomputers, not a single European company manufactures the machines. And that, says Rubbia, is “an unacceptable situation.”

It might seem a bit late to play catch-up, but Rubbia argues that Europe has a window of opportunity because high-performance computing is at a watershed. Current machines are capable of several gigaflops. (A flop is essentially one calculation per second.) The next generation will be teraflops machines, capable of a trillion flops. That will require completely new approaches to hardware and software, which could be developed in Europe.

The report, drawn up by 18 high-level users of supercomputers, outlines a five-stage program. First would be an effort to encourage the use of existing supercomputers. That’s where the new pan-European high-speed network comes in. Existing links are relatively slow and fragmented within individual countries. Rubbia would like to see a multi-megabaud backbone to create what he calls “a European high-performance computing community” and position Europe to build the next generation of gigabaud links. While that is going on, manufacturers should “vigorously” pursue advanced machines, while programmers concentrate on “the inventive development of novel software.” Basic research will be needed “to raise the competitive level of European industry.” And education and training – even at the high school level – should be stepped up to ensure that Europe’s scientists become aware of the potential of high-performance computing.

As for funding, the Rubbia report says spending – currently about $150 million for “advanced architectures and their application” – should increase gradually to about 1 billion European Currency Units a year by 1995. (One ecu is currently worth about $1.40.) But it does not say exactly where that funding should come from. Rubbia took the easy route: “We are scientists and engineers, calling attention to the needs rather than suggesting a clear financial strategy of how to solve these problems.”

The working group unveiled its proposal to the European Commission last week, and it got a favorable reception. Filippo Maria Pandolfi, vice president of the commission, hinted that Rubbia’s proposals fit well with the future plans of Directorate-General XIII, which is responsible for telecommunications, information industries, and innovation, and which commissioned the report. In 1992 the directorate will reassess priorities under its third Framework program. That will involve concentrating resources in specific areas, Pandolfi said, and supercomputing is likely to be one of them.

Does Europe really need its own supercomputer industry? Rubbia and other members of the working group stressed the benefits that supercomputers bring to science, engineering, and everyday life. But they were less specific on the benefits of building, rather than buying, the capability. “It is just inconceivable to buy everything from abroad,” said Rubbia. Pierre Perrier of Dassault Aviation stated baldly that “without a supercomputer industry, Europe would return to the second world. It would not be part of the first world.”

Computer woes still few and far between

 

A computer glitch that pushed the year back to 1900 on computer screens throughout D.C. government was considered a “nonevent” and has been fixed, officials said yesterday.

Employees with access to such databases as payroll or tax and revenue were first confronted Saturday with the incorrect date on a security window.

Once they moved to a new window by entering their user name and password, the system ran smoothly, said Henry Debnam, chief computer technician for the office of Chief Financial Officer Valerie Holt.

“It was a nonevent as far as we were concerned,” he said.

Mr. Debnam did not inform all government employees or the public, he said, because technicians solved the problem in the amount of time notification would have taken. Yesterday morning, the date on computer screens read Jan. 5, 2000 after workers finished their repairs Tuesday night.

Officials said the problem did not slow city operations.

They could not estimate how many workers ran into the problem, though the number was limited to those with the proper security clearance.

“From Day One we have said that we would probably have some very minor date-change problems, but that residents would continue to receive the full range of government services,” said Mayor Anthony A. Williams.

Five days into the new year, only reports of minor bumps were coming in from around the region.

“Our planning paid off,” said Bonnie Pfoutz, who headed Arlington County’s computer effort.

However, the year-2000 bug crashed more than 800 slot machines at three Delaware racetracks in the days leading up to Jan. 1.

It also could be responsible for a malfunctioning computer at Crossland High School in Prince George’s County that keeps records for one-third of the public schools.


Several staff members within the school system did not return calls yesterday requesting information about a memo explaining the computer’s year-2000 problems.

During computer testing on Saturday, classrooms at Woodley Hills Elementary School in Alexandria remained dark and the heating system malfunctioned.

But computers weren’t to blame for this trouble; rather, it was “a suicidal squirrel that tried to party like it was 1999,” according to a report to the Fairfax County command center from Bob Ross of the school district’s year-2000 team.

“The poor squirrel either got into the circuit breaker or the electric transformer and got zapped,” Mr. Ross said. “The only Y2K fatality in the county was the squirrel.”

Michael Cady, director of information technology services for Prince George’s County, said he has encountered only one technical problem, which took about 10 minutes to fix.

“We expect maybe some other minor things going on, but that’s the extent of our glitches,” he said. “I call it a burp.”

In Montgomery County, the year-2000 project office requires every agency to check in twice a day – before 9 a.m. and again by 3 p.m. – to report any problems.

Officials reported only one error on Tuesday, which they said was not tied to the year-2000 bug. Nine public schools had trouble with the system that controls temperatures. Technicians managed to manually override it to compensate.

“Nobody’s called in a Y2K problem to us,” said Sonny Segal, chief of Montgomery County’s year-2000 efforts. He said workers having trouble signing on or printing documents are calling his office, only to find out that their glitches have nothing to do with the rollover to the new year.

“We did have calls reporting trouble logging on or slow computers,” said Charles Grammick of the Fairfax County school system’s year-2000 team. “But none of the problems was caused by Y2K.”

“Before, I was worried if I would have a job,” he added. “I am still here, and it’s great. Now I am wondering who is going to pay for the ulcer.”

The year-2000 computer problem stems from a cost-saving shortcut years ago in which software programmers devoted only two spaces in a date field to designate the year. That older software assumes the year always will begin with the digits 19.

Technicians feared that if they didn’t carefully reprogram and test affected systems – and replace calendar-sensitive computer chips embedded in some equipment – the computers would shut down or malfunction when they “read” the digits 00 as meaning 1900 and not 2000.