Silver Pellets and Remote Programmers

While computer professionals have not found a comprehensive cure for the Y2K disease, they have developed many partial solutions that, taken together, can render the Y2K bug impotent, with brute-force software remediation by overseas services held in reserve as a backup.

People keep asking me how we could have been so shortsighted as not to have foreseen the problem known as the millennium, or year 2000, bug. This question implies that we should have caught the problem at its source and not let it get to the stage where so many people are in a near panic in dealing with it. The next most frequent question posed to me is even more vexing: “You know all about computers. Why don’t you just apply a generalized fix and end this all now?” Such a fix has come to be known as a “silver bullet.” My reply is: “Do you believe in magic? I don’t. The Y2K bug involves so many different types of computers and software applications that producing a silver bullet to eradicate it is beyond my imagination.” Apart from magic, however, there are clever and insightful techniques that information technology professionals are using to control the millennium bug. We will call them “silver pellets.”


In addition to a variety of silver-pellet-type solutions, there is a completely different approach to controlling the Y2K bug that is more akin to brute force: Hire a sufficient number of computer analysts and programmers to fix all the deficient software. This can be ruinously expensive, and in most places in the United States, there is a shortage of qualified people to actually staff such an effort. However, many qualified programmers who can fill the need are available overseas, and the costs of this labor are much lower, so remote programmers have offered another approach to solving the Y2K problem.

The scope of the Y2K problem is massive. In our computer-pervasive society, the millennium bug has been found to inhabit all four types of computers on which we depend: personal computers, telecommunications devices with embedded processors, microcontrollers in “smart” appliances, and mainframe and other shared computers. Personal computers now inhabit most offices and more than half the homes in North America. Our cars, ovens, and televisions are saturated with embedded processors. Everywhere we look we see smart appliances and devices that connect to them [see “Y2K Up Close,” The World & I, May 1999, p. 170]. Furthermore, the interconnectedness of computers at all levels – from embedded microprocessors to PCs to mainframe computers – means that the ill effects of the bug could spread readily, even into commerce and the Internet’s vast array of information sources.

Although the Y2K bug is related to how a computer tells time, using both hardware and software, Y2K bug fixes are almost always achieved through a change in software. Given the hundreds of computer languages and myriad dialects used to program computers, it is hardly surprising that there is no universal silver bullet to kill the bug.

In 1997–98, as concerns about the threat of the Y2K problem were growing, many companies and government agencies still refrained from addressing it. The common wisdom of commentators at the time was that no silver bullet solution had emerged or was likely to, and that those who waited to start addressing the problem would face unmeetable deadlines, exorbitant costs, and manpower shortages in trying to correct it. Now, five months away from Y2K day, as this is being written, the situation is not nearly as bleak as had earlier been predicted, and considerable benefits have even been realized through dealing with the problem. Each of the Y2K silver pellets tackles a particular segment of the overall complex issue and, within that segment, applies clever programming techniques to either solve that part of the problem or greatly simplify its manual resolution. Let’s look at some representative examples.

PCs everywhere

One of the daunting challenges of having computers on everyone’s desktop is that many of them are not Y2K-ready. The census of PCs in America is now over 60 million. According to a survey completed in 1998, 97 percent of PCs made before 1997, and 47 percent made in 1997, would not be able to make the transition from 1999 to 2000 unless the date/time were manually reset. Compound this with the realization that most knowledge workers are dependent on their personal computers, and we see that it is likely to take substantial effort to assess the Y2K-compliance status of each and every personal computer.

Clearly, a most useful silver pellet would be a tool that could perform an automated test of network-attached personal computers. One such tool is offered by ON Technology. Its product, called ON Command CCM, can check the Y2K compliance of hardware (the BIOS) on multiple PCs simultaneously from a central administrative system. ON Command CCM can also be used to automatically reset local-area network-attached PCs to 1/1/00 (January 1, 2000) without end-user intervention. In this way, the PC system can then be tested for compliance.

Merely asserting that a PC is Y2K-compliant after performing an upgrade is not sufficient. I strongly advise that each computer be tested and verified to be Y2K-compliant. This means testing the system hardware (BIOS and the real-time clock). Even though the effort to upgrade and test one computer’s hardware for compliance may require less than half an hour, to do this across an entire enterprise with thousands of PCs is no small task. ON Technology’s silver pellet is thus a useful innovation.

Even when the PC becomes Y2K-compliant in regard to its internal date/time processing, its local and networked applications software may still need to be upgraded. ON’s tools are also handy for this task. With them, it is possible to use an efficient, centralized approach to distributing Y2K-enhanced applications software, such as is available from Oracle or SAP R/3. Again, we are taking advantage of local-area network connectivity, with its speed of transmission and software-controlled automated management, to replace “catch as one can” software administration. Interestingly, tools such as these can also be used productively after the Y2K hubbub is behind us; PCs will always need testing for effective functioning on specific vulnerable dates such as leap day 2000 (February 29, 2000) or for other purposes.

If you have only one PC to test for compliance of its hardware (real-time clock and BIOS), however, you can use a product from Computer Experts called Millennium Bug Toolkit, which is available over the Internet. One of the nice features of this silver pellet is that it works from the floppy disk drive and does not interact with your hard disk during the testing procedure. In this way, it protects your data and software during testing; not all competitor products do.

By the millions

It is commonplace nowadays for corporations and governments to maintain large bodies of their own custom-written applications software, written mainly in the COBOL language for a mainframe environment. One such company presently completing its Y2K-compliance project has been reviewing and remediating some 21 million lines of programming code. This company is using another type of silver pellet to identify and analyze its software programs that fail the Y2K test. To appreciate the size of this task, consider that 21 million lines of code takes some 400,000 pages to print. Missing even one date occurrence can lead to a failed application. Platinum Technology’s TransCentury Analysis Tools handles more than 150 date formats and uses the power of the mainframe itself to break up the 21 million lines of code into manageable units. Platinum has a complementary product, Calendar Routines, that can automatically generate replacement software to fix noncompliant date logic; its FileAge product can simulate dates after December 31, 1999, for use in testing the changed application code.
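
To give a flavor of what such scanning tools do, here is a minimal sketch in Python (my own illustration, not Platinum’s code) that flags source lines containing fields that look like two-digit dates; the two patterns shown stand in for the 150-plus formats a real tool recognizes:

    import re

    # Illustrative patterns only; a production scanner knows 150+ formats.
    TWO_DIGIT_YEAR_FORMATS = [
        re.compile(r"\b\d{2}/\d{2}/(\d{2})\b"),   # MM/DD/YY, e.g. 12/31/99
        re.compile(r"\b\d{2}-\d{2}-(\d{2})\b"),   # DD-MM-YY, e.g. 31-12-99
    ]

    def flag_suspect_lines(source):
        """Return (line number, text) pairs that may hold two-digit dates."""
        return [(n, line.strip())
                for n, line in enumerate(source.splitlines(), 1)
                if any(p.search(line) for p in TWO_DIGIT_YEAR_FORMATS)]

    sample = "MOVE '12/31/99' TO WS-EXPIRY.\nADD 1 TO WS-COUNTER."
    print(flag_suspect_lines(sample))   # flags only the first line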

Economies of scale

With thousands of personal computers to fix, millions of lines of code to scan, and a hard and fast time deadline of January 1, 2000, the real Y2K problem is one of managing a multitude of checks, upgrades, and tests. Not only must programmers carefully manage their own work, but also upper management must be extremely thorough and careful in managing the total repair job. The real management problem arises not because a particular instance of the date problem is so difficult to fix but because of the sheer number of bug occurrences that must be found and repaired – and all without error. The problem of locating all of the bug occurrences is what concerns me the most.

Another silver pellet aims not only to speed up bug-fix management but also to link the programmer’s task progress with status reports to upper management. Turnkey 2000 of San Jose, California, has such a tool called Unravel 2000, which automates up to 90 percent of a programmer’s work. It can convert noncompliant software code to be Y2K-compliant and can generate project-management reports of the changes. This tool makes it possible for managers to cut conversion costs by performing assessments in mid-project and to update project schedules and change priorities as needed.

Cleverness and brute force


What is the challenge in bringing 21 million lines of code up to Y2K-compliance standards? From surveys, we know that noncompliant date faults occur, on average, with a frequency of one per 1,000 lines of code (about one for every 20 pages of code). This works out to 21,000 faults, and the Y2K-repair effort is aimed at them. If a programmer can find, fix, and test one date fault in half a day’s time on average, then we are looking at about 10,500 days of work. If the company has 20 programmers dedicated full time to the Y2K-remediation task, then it will take 525 days (slightly more than two entire working years for the 20 people) to complete this brute-force solution.

What if it were possible to adopt an approach that would avoid changing the software at all? If this could work, then a tremendous load could be lifted. Approaches of this sort have proven successful in many cases. Understanding how they work requires some background. Let’s start by defining the year 2000 computer problem as a discrepancy between the external, four-digit dates used by people and the internal, two-digit dates used by computers. The discrepancy can be expressed by the two statements: “2000 is greater than 1999 (2000 > 1999),” and “00 is less than 99 (00 < 99).” A noncompliant computer only knows to drop or add the first two digits when dates pass across the divider between internal dates and external dates. Thus, its internal dates are stuck in the twentieth century, in a loop between 1900 and 1999.
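
A minimal sketch in Python (my illustration, not anyone’s production code) makes the discrepancy concrete:

    # The same pair of years compares correctly with four digits
    # but incorrectly with two.
    print(2000 > 1999)   # True  -- the external, human view
    print(0 > 99)        # False -- the internal, two-digit view
    print(0 - 99)        # -99: an age or interval computation goes
                         # negative instead of yielding 1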

The clever trick called program encapsulation, which can avoid changing all of the software, shifts each date when it crosses the divide from external, four-digit to internal, two-digit form, or vice versa. If, for example, we subtract 28 years from both 2000 and 1999, we see that 1972 > 1971, and after dropping the first two digits of each year, that 72 > 71. In the reverse path the computer would first convert 72 and 71 to 1972 and 1971, then add 28 to each of those. With this date shift built in, the noncompliant software can continue to work without change. Programmers have selected 28, or multiples of 28, as the preferred date-shift increment because the pattern of the cycles of days of the week (Monday, Tuesday, …) and days of the month (1, 2, …) and the leap year relationship repeat identically every 28 years. Given that 2000 is a leap year but 1900 and 2100 are not, it may appear that this technique can in principle be used for all years whose external representation falls within 1901–2099. That window is further reduced, however, to external dates of 1929–2099 by constraints on the date-shifted internal representation (1929 – 28 = 1901).
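
Here is a minimal sketch of the idea in Python (my illustration, not any licensed encapsulation utility), with a check that a 28-year shift preserves weekdays and leap years:

    import datetime

    SHIFT = 28  # weekday and leap-year patterns repeat every 28 years

    def to_internal(year4):
        """External four-digit year -> shifted two-digit internal year."""
        return (year4 - SHIFT) % 100

    def to_external(year2):
        """Shifted two-digit internal year -> external four-digit year."""
        return 1900 + year2 + SHIFT

    # Ordering is restored: 2000 > 1999 becomes 72 > 71 internally.
    print(to_internal(2000), to_internal(1999))       # 72 71
    print(to_external(72), to_external(71))           # 2000 1999

    # January 1 falls on the same weekday in 1972 and 2000 (a Saturday) ...
    print(datetime.date(1972, 1, 1).weekday() ==
          datetime.date(2000, 1, 1).weekday())        # True

    # ... and 1972, like 2000, is a leap year.
    print(datetime.date(1972, 2, 29))                 # a valid date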

With program encapsulation, the computer’s clock is operating in its own world, 28 years earlier than real time. Everything internal to the boundary is shifted by –28 years on input and +28 years on output. Data files also are taken inside the time boundary – by shifting all the year values on the file by minus 28 years. One has to be careful with this approach that outputs are properly adjusted by +28 years. Usually, this is not a difficult task.

Thus, program encapsulation is one of the easiest methods of correcting the Y2K bug, because it simply changes the frame of reference for time. Of course, this silver pellet assumes that the software runs correctly for the relevant span of time in the twentieth century. Seeing this technique, some have commented that it only postpones the day of reckoning. But if the shift is reapplied in 28 years, again subtracting and adding the offset, its applicability is extended; similarly with 84 years and other multiples of 28, ad infinitum.

A process patent covering the concept of program encapsulation dating from 1995 is held by the original developers, Turn of the Century Solution, LP. Anyone using the method is required to obtain a license. Seven software developers have licensed the process and offer program encapsulation utilities for all major platforms.

A closely related technique, data encapsulation, handles the time shift inside programs rather than outside them, as in program encapsulation. With data encapsulation, new code to shift the data forward and back is inserted at every input or output statement in the programs – the disadvantage being that many programs have to be changed and recompiled. The advantage, however, is that vast repositories of data do not have to be expanded to handle four-digit years. This process was developed by Paul O’Neil of Raytheon and is in the public domain.
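
In the same spirit, a minimal Python sketch (my illustration, not O’Neil’s actual code) shows the shift inserted at the input and output statements, while the stored two-digit years stay untouched:

    SHIFT = 28

    def read_year(yy_on_file):
        """Inserted at each input statement: shift the stored two-digit
        year back so in-program comparisons stay correct."""
        return (yy_on_file - SHIFT) % 100

    def write_year(yy_in_program):
        """Inserted at each output statement: undo the shift so the
        file format never changes."""
        return (yy_in_program + SHIFT) % 100

    y1999, y2000 = read_year(99), read_year(0)
    print(y2000 > y1999)        # True: 72 > 71, ordering restored
    print(write_year(y2000))    # 0 -- the data repository is untouched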

The full significance of program encapsulation is that it avoids the necessity of performing tests with advancing dates. Since the software is not changed to handle dates in two different centuries, it does not need to be tested with dates in two different centuries. Given that, for medium- and large-scale software application systems, advanced date testing is usually 50 percent of the personnel effort, eliminating this task makes program encapsulation a significant silver pellet.

Remote programmers

Clearly there is a shortage of computer programmers in the United States. Companies trying to make their systems Y2K-compliant need to dedicate programmers, analysts, managers, and a certain proportion of their information technology infrastructure to the remediation task. However, these companies also have to maintain normal operations such as payroll, accounts receivable, and general ledger. With the fixed deadline of January 1, 2000, and the exigencies of normal maintenance, striking a balance has been difficult. Because the Y2K-remediation task is finite, it has been simpler to contract it out to other companies. But America has no surplus personnel resources available. Enterprising groups like Trigent Software of Southborough, Massachusetts, have specialized in filling the need for Y2K personnel by arranging with offshore programmers in countries such as India, Pakistan, and the Philippines. Mexico’s Softtek is working with Ernst & Young LLP to provide its “nearshore” programmers for large software development projects. In these countries, there is both the skilled labor force and sufficient familiarity with the English language to be able to read and write technical documentation.

The work performed by remote programmers follows the same approach used in the United States:

  1. find the instances where dates are used;
  2. document their location;
  3. analyze their criticality;
  4. devise a fix and reprogram using an agreed-upon convention (e.g., to expand the year code from two to four digits, as sketched after this list);
  5. carry out testing to validate the changes; and
  6. report the status of remediation results to the management.
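
As an illustration of stage 4, here is a minimal Python sketch of the common windowing convention for expanding two-digit years; the pivot value of 30 is a hypothetical project choice, not a standard:

    PIVOT = 30  # hypothetical convention: 00-29 -> 2000s, 30-99 -> 1900s

    def expand_year(yy):
        """Expand a stored two-digit year to four digits using the
        pivot window agreed on for the whole remediation project."""
        return (2000 if yy < PIVOT else 1900) + yy

    assert expand_year(99) == 1999
    assert expand_year(0) == 2000
    assert expand_year(29) == 2029
    assert expand_year(30) == 1930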

The first four stages of repair take about 30 percent of the total effort. Testing takes about 50 percent, and reporting requires the remaining 20 percent.

Even if some stateside Y2K-remediation projects have the programming and analysis talent, they often skimp on the testing phase. Generating test data can be both difficult and tedious, and programmers are known to be highly optimistic about the outcome of their creativity. As a result, testing is given too little emphasis, and this can spell the ruination of a Y2K project. With remote programmers, testing is often more exhaustive, because they are happy to get the relatively high-paying work and want to please their U.S. customers. For these offshore firms, Y2K projects are golden opportunities to show what they can do. They realize that if they do a good job, there is likely to be more work from these sources in the future.

One of the substantial benefits of this approach is cost savings for American companies hiring offshore programmers, whose wages are about a tenth of what they are in the United States. Through the use of the Internet and satellite-based communications facilities, information officers are beaming software that needs repair across the globe to a waiting cadre of technical specialists. By using remote programmers, the time- and manpower-intensive manual labor of correcting hundreds of thousands of lines of computer code is being done at reduced prices by burgeoning offshore industries.

The silver lining

The enormous effort to reach Y2K-compliance standards is starting to pay a substantial bonus. Because the efforts are mostly managerial, it should be no surprise that most of the benefits are in that domain. With personal computers so pervasive and software and hardware proliferating so rapidly, successful Y2K-compliance projects have put many companies in a position, for the first time, to know and keep current on the full inventory of their computers and software. Some of the silver pellets described in this article are being used to track all of a company’s software and to provide upgrades as new versions are released. Because testing and validation of Y2K compliance are so critical, Y2K tools were acquired for this purpose. But these tools will also be used for non-Y2K testing, so the end result will be more thorough testing of future systems and their applications. Our mainframes are getting the cleaning of their lives; cobwebs in software libraries are being swept away after years of accumulation. Having more current systems, better tested, and with better management of both central and distributed information technology assets may not be too much to pay for being Y2K-compliant.

After all is said and done, January 1, 2000, is not the last of our worries about processing dates by computer. The next date to be concerned about is February 29, 2000, which is the first leap day in a century year since 1600. The simplest rule in programming for leap years is to add an extra day when the year is evenly divisible by four. Since computers were not around in 1900, this exception is no problem for real-time work, and 2100, the next exception, is still over 100 years away. So the simplest rule works correctly in 2000. What we have to be concerned with is an algorithm formulated with the 100-year exception but not the 400-year exception; such a program would wrongly skip the leap day in 2000. December 31, 2000, may produce another surprise, as it is the 366th day of the year. If a program tells time by counting days from a fixed point, then it will be incorrect if the programmer forgets that 2000 is a leap year. In a little over a year, we’ll see if anyone got caught on this one.
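
To make the distinction concrete, here is a minimal sketch in Python (my illustration) of the three formulations:

    def is_leap_simple(year):
        """The simplest rule: every fourth year. Right in 2000,
        wrong in 1900 and 2100."""
        return year % 4 == 0

    def is_leap_half_fixed(year):
        """The 100-year exception without the 400-year exception:
        wrongly skips February 29, 2000."""
        return year % 4 == 0 and year % 100 != 0

    def is_leap(year):
        """The full Gregorian rule."""
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    print(is_leap_simple(2000), is_leap_half_fixed(2000), is_leap(2000))
    # True False True -- only the half-fixed rule gets 2000 wrong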

More ominous is September 9, 2001, at precisely 1:46:39 a.m. This is a special date for UNIX systems. Unlike classical mainframes or personal computers, UNIX computers are programmed to tell time by counting the seconds from a fixed point: midnight, January 1, 1970. On September 9, 2001, the counter reaches 999,999,999. The significance of this number is that programmers often use such a number as the code for the end-of-file. Thus, on September 9, 2001, UNIX programs may mysteriously end prematurely or give erroneous results.
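
The date and time are easy to check by counting forward from the UNIX epoch; a quick sketch in Python (my illustration):

    import datetime

    # Midnight, January 1, 1970, UTC -- the fixed point UNIX counts from.
    epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
    moment = epoch + datetime.timedelta(seconds=999999999)
    print(moment)   # 2001-09-09 01:46:39+00:00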

Well, with computers it is always something. I’m glad we have a good stock of silver pellets and remote programmers to call on again.

How to Choose the Right Paint Sprayer

Paint sprayers are incredibly useful tools for hard-to-reach areas or large surfaces. They give a fine finish and paint very quickly, especially if you are using the right paint type for your sprayer. There is a multitude of sprayer types, and to choose the best paint sprayer for you, you must find out a few things:

  1. Type of paint/stain – some paint sprayers can’t use certain paints: e.g., an HVLP paint sprayer can’t handle thick latex paint.
  2. The size of the surface – paint sprayers range from cup sprayers, which are the cheapest and are used on small projects, to conventional air sprayers, which are used for larger projects because they create overspray.
  3. Volume – there is no use wasting your money on a huge sprayer when all you have to paint is a doorknob. The same goes for the reverse – it will be tedious to paint the side of a house with a cup sprayer. Consider how much paint you are going to use before you buy a paint sprayer.
  4. Last but not least, the budget – you can expect the cheapest budget models to clog up, and the expensive ones to deliver a lot of power. That said, if you are going to use the paint sprayer once a year to paint a small surface, you don’t need the most expensive unit; but if you paint often, cheap units that clog will waste a lot of your time and may not be worth the money you saved.

Which paint sprayer is best for you



  • Cup sprayer – small, cheap, and the easiest to use. It draws paint from a cup, so it suits small to medium projects, and it runs off a standard outlet.
  • Conventional air sprayer – offers a fine finish by compressing air and releasing it with paint. Expect a lot of wasted paint due to overspray. Best used on medium exteriors and in car painting.
  • Airless sprayer – one of the most versatile sprayers; it puts paint under high pressure, handles both thin stains and thick paints, and minimizes overspray.
  • HVLP sprayer – high volume, low pressure. Offers minimal overspray and is used for fine projects that do not involve thick latex paints, as these paints clog this sprayer.

Finding the Right Paint Sprayer online

The easiest way to buy a paint sprayer online is to go to an online shop (like eBay or Amazon) and head straight to the ‘home and garden’ section.


Then, unless you have a specific paint sprayer in mind, you can just browse for whatever catches your eye. Otherwise, just type in the type of paint sprayer you are looking for (airless, cup, etc.).

Before jumping into buying the paint sprayer, check that the seller provides a warranty and has positive reviews. Because a paint sprayer can be relatively expensive, these steps are a necessary precaution to avoid scams.

If you are painting hard-to-reach areas, consider purchasing a nozzle compatible with your model of paint sprayer. If you are not sure about the model, ask the seller.

Overall, a paint sprayer can make your painting task a lot easier, saving you time and pain. A sprayer is also very useful when combined with the right tip for accessing difficult-to-paint areas (like corners).

To summarize the differences from sprayer to sprayer:

  • Small and medium projects that need a certain level of precision – conventional air sprayer.
  • Small projects that need precision – HVLP or cup sprayer. These sprayers are slow, and on large projects they will be too slow for you to work efficiently.
  • If you are undertaking a large job, which will require lots of paint, the best paint sprayer for you would be an airless paint sprayer or another industrial paint sprayer.

When choosing a paint sprayer, also consider your budget, the size of your project, and the paint type and amount you will use, to achieve the best result when painting.

Sleeping on air: install a Select Comfort mattress for a good night’s rest (Do-it-Yourself)

There’s nothing more restful than sleeping on a mattress that’s just the right firmness for you, but the mattress that comes standard in your RV is often not as firm or as soft as you may want. That was the case with the mattress in our new trailer, so it wasn’t long before we replaced it with a Select Comfort air mattress, which can be adjusted to any firmness.

Select Comfort Air Mattress

A Select Comfort mattress is not your father’s air mattress. It looks like any other luxury mattress, and the pressure is adjustable simply by pressing the buttons on the remote control. We opted for the dual-chamber model because each side of the bed can be adjusted for pressure independently of the other side.

An air mattress has two other benefits for RVers. First, it weighs much less than a conventional mattress, and second, that lighter weight makes it easier to lift the bed platform for access to the underbed storage area.

Installing the mattress

Installing a new conventional mattress is a matter of out with the old and in with the new. However, a unit like the Select Comfort is a bit more complex to set up.

A Select Comfort mattress requires some tool-free assembly and only takes about 30 minutes to set up. Its parts include a fabric mattress base and a quilted, zippered top, a 120-volt AC air pump, air hoses and one or two remote controls for the single- or dual-chamber models.

There needs to be space for the air pump near the head of the bed, so the air hoses can reach the mattress, and an AC receptacle for plugging in the pump cord. There may be room to keep the pump in the storage space under the bed, but it’s more complicated if the bed is in a slideout. The pump should ideally be in a place where you don’t have to move it each time you use the slideout, and where it’s not necessary to remove the air hoses each time. Our bed is in a slide, so we installed the pump under the bed.

We brought the slide in and out several times with the mattress platform raised and observed what happened under the bed. The foot of our bed moves in and out on casters, and the only thing that moves underneath is a plywood platform beneath the head of the bed. The space between the platform and the underside of the bed would easily accommodate the pump, which measures 5 inches high, 9 1/2 inches wide, and 8 inches deep. The platform was the best location for the pump because it could safely ride in and out with the slideout.

Routing the air hoses was another matter, and we opted to have the hoses come up through the plywood mattress platform. That made it a short run from the pump to the mattress-attachment points near the head of the bed. Hose extensions are available if needed.

We used a sabre saw to cut a square hole in the plywood at the head of the bed to accommodate the hoses and cords for the controls. We made it slightly larger so we could reach through to grasp the hoses and the remote controls to pull them up, and the cut-out piece serves as a removable lid. On one edge of the lid, we cut a notch large enough to accommodate the two air hoses and the cords for the two remotes. The lid is supported with a 1- by 2-inch cleat along two sides of the hole. The cleats are screwed to the underside of the plywood, and extend into the hole about an inch to make a lip for the lid to rest on.

The weight of the pump would probably hold it in place during travel, but just to be certain we anchored it to the underbed platform with mounting tape. Assembling the mattress is simply a matter of placing the components in the base, zipping on the top and attaching the snap-on air hoses.

With our setup, the pump power cord is in the path of the slideout when it’s retracted. On travel days, we unplug the cord and lay it on the bed before retracting the slide. So we won’t forget to do this, we added “unplug pump” to the checklist we always use when breaking camp.

The controls can be kept on nightstands or shelves near the bed. Although we have a shelf on each side of the bed, the remote unit would take up too much space on the tiny shelf. Mounting tape was used to secure a remote on the wall on each side of the bed. Due to the mattress sliding a bit during travel, a flannel sheet was stapled to the plywood platform to give it some “tooth” and help keep the mattress in place. Folding the edges of the sheet under the sides and stapling the undersides worked well.

Because of atmospheric changes, the air mattress will become harder as you travel to higher altitudes and softer when you go lower. If your travels involve a dramatic change in altitude, plan on adjusting the mattress pressure as needed.


Select Comfort air mattresses are available in all standard sizes, in addition to a short queen. The size of our mattress is 76 inches long instead of the standard 80 inches.

Our Model 4000 short queen-size mattress cost $749, and there’s no shipping charge. For an added layer of comfort, an optional pillowtop that covers the entire mattress is available. Select Comfort mattresses come with a 20-year limited warranty.


It’s wonderful to be able to adjust a mattress to the firmness you want. More importantly, if you have any muscular aches and pains, the Select Comfort may help to alleviate them. Because it’s an air mattress, it distributes your weight more evenly and conforms to your body shape, no matter what position you’re sleeping in. We’ve enjoyed many comfortable nights’ sleep on our air mattress; the hardware is solid and reliable, and we can highly recommend it.

Really cookin’

Outfit your kitchen just like a professional chef, but don’t get burned by price — it may cost $20,000 or more

In simpler times, remodeling a kitchen meant repainting the wood cabinets and laying vinyl tile on the floor. Today, a kitchen overhaul might include custom cabinetry, granite countertops, and restaurant-quality appliances — to the tune of tens of thousands of dollars.


Nearly a third of all homeowners plan to remodel their kitchen or bath in the near future, making kitchen makeovers one of the top home projects, according to the National Association of the Remodeling Industry.

As advances in technology and design make it increasingly possible to own the type of stove, refrigerator or dishwasher formerly found only in the finest restaurants, it has become fun to focus on the kitchen — especially if funds are not an issue.

“This gourmet-kitchen trend has been strong for the past five years in Seattle, because of the strong economy,” says Bill Bevis, a distributor of Wolf, Gaggenau, Five Star and other high-end kitchen appliance manufacturers. “It’s not at all unusual to see a 25-year-old techie buying a gourmet (read: $8,500) Wolf range. These people need the high-performance range to go with the high-performance car.”

“Fifty percent of customers (of high-end products) are gourmets who want to cook better,” Bevis adds. “The other 50 percent are buying purely for style, for bragging rights.”

Ovens/gas ranges


When it comes to impressive kitchen equipment, ranges and ovens certainly take the cake.

Most of the major high-end manufacturers are including such features as standard convection ovens, automatic burner relighting and simmering capability, as well as options ranging from built-in griddles and grills to stainless steel ventilation hoods.

The price tags, of course, are as hefty as the list of features.

A popular range from Wolf measures 48 inches wide and can be outfitted with a griddle, a grill and even a stainless-steel wok ring. On oversized burners, the temperature is regulated by moving the pan from the center to the side, instead of by lowering a flame. The burners automatically reignite if the flame goes out.

It boasts two ovens — one of which is fan-forced convection.

“The hood completes the look,” says Bevis. “It’s really for customers who are interested in how it looks.”

Without options, the manufacturer’s suggested retail price is $8,359.

Similar in style and performance is a 60-inch range from Dynasty ($10,000), which includes two full-size ovens and a host of range options. “They include a charbroiler, a griddle, a dedicated wok burner and conventional burners,” says John Mitchell, vice president and general manager of Tri State Distributors. “Most of my customers are serious cooks. The people who buy for style might buy noncommercial products that have the look but not necessarily the capability. But we’re definitely seeing a lot of younger people, many Microsoft executives. When economies are good, people buy to last.”

Even in a booming economy, time is precious, and many of the newer products are geared for reducing the time and hassle involved in meal preparation.

“People are finding that it is hectic to fit everything into the schedule, and in order to get meals cooked and eaten after work and before bedtime, we need to cook food as fast as possible,” says Bevis.


Refrigerators

Because they take up so much space, get used so frequently and consume the lion’s share of energy in a house, refrigerators are an important kitchen purchase. Many of the recent models have stainless steel exteriors, side-by-side refrigerators and freezers, and precise temperature control that varies in different areas.

A popular model is the 42-inch-wide, stainless steel Viking Professional refrigerator, which costs about $5,500. “The stainless look is very elegant, and it’s energy efficient — that’s the trend coming from Europe,” says Robert Neal, a Viking representative who works at Monarch.

Laura Gardner, an architect who recently remodeled the kitchen in her Montlake-neighborhood home, was particularly interested in finding a refrigerator that used less than the normal amount of electricity.

“I did a lot of research on the Internet and found a company in California that makes an energy-efficient fridge, but it wasn’t very nice looking,” she says. She ended up with an Amana model.


Dishwashers

The concept of energy efficiency is a strong selling point for dishwashers, as well. Manufacturers of several newer models claim that they save water and energy, and that they’re quieter than their predecessors.

The Asko Dishwashing System is advertised to clean and dry better, use less water and energy, and last longer. What’s more, it’s quieter than traditional models and holds 20 percent more dishes and utensils.

Miele, with its G879 SCVi dishwasher, boasts a “Novotronic console” integrated into the door’s edge to select wash programs, and in company literature states that “a century of Miele technological advances … have greatly enhanced our customers’ lives.”



Cabinets

No kitchen is complete without cabinets, and apparently those storage areas are finally getting their due.

Gary Potter of Potter Construction in Seattle sees “a trend toward furniture-quality cabinetry” (in kitchens). “When you can see the kitchen from the living room, you want the kitchen cabinets to look more elegant.”

In her recent kitchen remodel, architect Laura Gardner found that the 1927 home’s original cabinets “weren’t even salvageable.”

After much deliberation, she and her husband decided to have cabinets custom built. They ended up with maple stained with yellow aniline dye, which provides a vivid color but doesn’t cover the wood’s natural grain.

“We might have paid double what we would have (for off-the-rack models), but we got a lot more room and high quality,” Gardner says. “It makes sense to get good quality cabinets.”

Nevertheless, Gardner, who in addition to her own home has designed remodels for many clients, cautions, “I think a lot of kitchens are really overdesigned for the amount of time spent in them. There seems to be a push to make it look like a restaurant kitchen, and I don’t think that’s right for everyone. I think people really have to think about the way they live.”

Ubiquitous computer leads way to improved accessibility


THE WORLD is zooming ahead in technology, and people with disabilities ask to be taken along on the journey.

Propelling the global craft is the computer, an instrument of enormous interest to many in the estimated 15 per cent of the Canadian population with disabilities.

“As computers increased in their prominence, it was always something cited as the great solution because the potential is there to have a great range of accessibility,” says Dr. Graham Strong, director of the Centre for Sight Enhancement at the University of Waterloo’s School of Optometry.

Realizing that potential requires industry to build accessibility in when it devises its products.

“For people who are vision-impaired,” says Dr. Strong, “they have to wait while someone figures out how the technology can be adapted.”

That said, Dr. Strong and his colleagues at the Ontario Rehabilitation Technology Consortium are themselves working on new devices that will adapt the tools we take for granted. For example:

A spectacle-mounted autofocus telescope allows for swift, accurate, hands-free focusing for any viewing distance, something extremely useful for those with visual impairments and limited manipulative abilities. “Just by dropping their chin slightly, the person can automatically focus on the approaching bus, even though it’s moving,” Dr. Strong says.

New optical-character-recognition technology will extend the ability of people with low vision to work with scanning, faxing, photocopying and printing.


An electronic video telescope can enlarge the person’s view and offer a more highly contrasted image. “You have the ability to control light if it’s presented in a poor way, such as if you’re looking at somebody standing in front of a window,” Dr. Strong says.

These are specialized devices created expressly for a population with disabilities, of course. When it comes to keeping up with advances in the mainstream, says Bill Bennett, head of the technology transfer unit for the ORTC in Toronto, the task is nearly impossible.

“We really feel we need to have some touchstones or checks while the technology is developing,” says Bennett, who adds that a number of accommodations have been made by computer-industry leaders.

Sometimes advances for the general population can leave others trailing, as when early, keystroke-driven Internet access was superseded by graphical-interface systems that left the visually impaired behind. Conversely, when web sites provide an outlet for keystroke access, the Internet again becomes a valuable tool for this population.

“If you have mobility problems, difficulty getting out, the telecommunications tools can act as a replacement for going somewhere and that is significant,” Bennett says. “Also, people can work electronically with text without revealing their disability.”

The idea that adapting products for people with disabilities is always a costly process is a key misconception.

Mary Frances Laughton, chief of the federal assistive-devices program office in Ottawa, notes that “with a 15-cent chip put into a telephone, it will speak to you.” Such a feature would allow the visually-impaired to have services like call display in a usable manner.

The alternative device bought separately for the telephone, Laughton adds, costs $300.

Moreover, it is not just people with disabilities who benefit from such gestures. In the case of something like the graphical interface model now used in much computer software, many people have older computers that must rely on keystroking just as blind users do.

“When people are told what the issue is, I would say 98 per cent of the people do the retrofitting,” Laughton says. “It’s not that people aren’t willing to do it.”

Still, technological advances must always be examined carefully. Dr. Strong gives the example of talking traffic signals that tell the blind to cross the street or not. While that sounds helpful, safety may be compromised since the computer knows what colour the light is but not, as a dog would, whether a car is speeding through the red signal anyway.

“Technology is never a cure-all for everybody,” Laughton concludes, “but technology is going a long way to give people a lot more freedom than before.”

How to talk your way into a computer

The Globe and Mail

LAST week, the long-distance phone company Sprint began offering its customers an automated voice-recognition service: Punch in your access code, then just say “phone home” or “phone office,” and a computer does the rest.

Limited voice-recognition products have been around for years, but they’ve had problems with accuracy and with small vocabularies.

But with recent improvements in the power of microprocessors, in the software behind voice recognition and in the quality of microphones, we can expect to see more of the technology.

Among the new applications are video games, educational software, interactive cable television, desktop publishing, word processing, electronic mail and just about anything else to which you can apply computer power.

The arrival of dictation software, which can convert your spoken words into printed text, resulted directly from the development of microphones that can cancel out background noise. If you’re in a noisy office, factory or playroom and the computer can’t differentiate between your spoken commands and the sound of screaming in the background, the system isn’t going to work.

Voice-recognition systems such as those made by IBM can be divided into two basic categories.

The first is the small-vocabulary system, which is called speaker-independent. That means it will understand about 1,000 words as they are spoken by almost anybody. So a video-game player might be able to say, “Pick up the sword, grab the gold, blow out the candle and jump through the window on the right.” The system will obey.

Small-vocabulary systems are useful for jobs in which the language is precise and specific to a task at hand. With a home banking system, for example, you would say, “Pay Visa $1,400 from chequing,” or “Deposit $1,237 to savings.”
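
Because every valid utterance fits one of a handful of templates, mapping the recognized words onto actions is straightforward. Here is a minimal sketch in Python (my own illustration; the templates and action names are hypothetical):

    import re

    # Each template in the task's small grammar maps to one action.
    COMMANDS = [
        (re.compile(r"pay (\w+) \$?([\d,]+) from (\w+)"), "pay_from"),
        (re.compile(r"deposit \$?([\d,]+) to (\w+)"), "deposit_to"),
    ]

    def interpret(utterance):
        """Map a recognized phrase onto the banking action it names."""
        text = utterance.lower()
        for pattern, action in COMMANDS:
            match = pattern.fullmatch(text)
            if match:
                return action, match.groups()
        return None  # out of grammar: ask the speaker to repeat

    print(interpret("Pay Visa $1,400 from chequing"))
    # ('pay_from', ('visa', '1,400', 'chequing'))
    print(interpret("Deposit $1,237 to savings"))
    # ('deposit_to', ('1,237', 'savings'))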

In these systems, the acoustic signal – the sound of your voice – comes in to an analog signal chip, which converts it to the digital language of computers.

That digital voiceprint is compared to a composite voiceprint of the way thousands of people have uttered the sounds in a single word.

So if the word is “four,” the computer charts the consonant sound at the beginning, the vowel sound in the middle and the consonant sound at the end, comparing them all to its library of sounds and words.

These small-vocabulary systems have high accuracy rates; they can be relied on to understand 99.5 per cent of what they hear. They can even decipher the speech of people who talk fast or speak with accents or have head colds.


Large-vocabulary systems are trickier. First, they tend to be speaker-dependent, which means you must train them to recognize your voice. To do this, IBM’s large-vocabulary system asks the user to read Mark Twain’s A Ghost Story, which takes about an hour. That story was picked because it is acoustically juicy, using many of the sounds in the English language in both their usual and unusual combinations.

After listening to your recital, the system spends about four hours building a mathematical model of your voice. There are more than 100 ways you can speak a long e, for example, depending on what sounds come before or after it.

Once the training is done, the system is ready to take dictation from you, at a speed of up to 70 words a minute.

IBM’s version works by listening to combinations of three words. If you start a sentence with a word that sounds like “there” it will print “there” on your computer screen because, according to its mathematical model, that is the most likely spelling of that sound at the beginning of a sentence.

But if your second word is “parking,” the system changes the spelling of the first word to “they’re” because that is the most likely word to precede “parking” when it’s a verb.

If your third word is “spot,” the system will change the spelling of the phrase to “their parking spot.” IBM’s system can back up as far as five words to calculate the most likely context.
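
Here is a toy sketch in Python of that revision process (my illustration; the trigram scores are invented, whereas a real system derives them from large bodies of training text):

    # Invented trigram scores: how strongly each spelling fits its
    # three-word context (toy numbers for illustration only).
    TRIGRAM_SCORES = {
        ("<start>", "there", "parking"): 0.01,
        ("<start>", "they're", "parking"): 0.60,
        ("<start>", "their", "parking"): 0.05,
        ("they're", "parking", "spot"): 0.02,
        ("their", "parking", "spot"): 0.70,
    }

    HOMOPHONES = ["there", "they're", "their"]

    def best_spelling(before, candidates, after):
        """Pick the candidate whose three-word context scores highest."""
        return max(candidates,
                   key=lambda w: TRIGRAM_SCORES.get((before, w, after), 0.0))

    def revise(candidates, second, third):
        """Re-pick the first word once two further words are known."""
        return max(candidates,
                   key=lambda w: TRIGRAM_SCORES.get((w, second, third), 0.0))

    # With "parking" as the next word, "they're" is the best first guess ...
    print(best_spelling("<start>", HOMOPHONES, "parking"))   # they're

    # ... but once "spot" arrives, the phrase becomes "their parking spot".
    print(revise(HOMOPHONES, "parking", "spot"))             # their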

The large-vocabulary IBM system, which now runs on late-model personal computers, can handle about 32,000 words and most of the combinations that can be made with them. Other systems, created primarily for research, have libraries of more than 100,000 words but require larger computers.

IBM expects to shrink its voice-recognition software down to credit-card size by this summer, which means that by next Christmas you can expect to see hand-held devices that convert spoken words into text.

Europeans push computer plan


European physicists, looking enviously across the Atlantic at the $638-million high-speed computing initiative proposed by the Bush Administration, are pushing for an even more ambitious European effort. Last week, a working group of the European Commission, chaired by CERN director Carlo Rubbia, laid out a proposal for a high-speed computer network spanning the continent, and massive investment in the development of a European supercomputer industry. Total cost: about $1.4 billion a year over the next decade, half from government and half from industry.

Europe has a long way to go to rival the United States and Japan in supercomputing, however. Although Europe represents 30% of the $2.6-billion world market for supercomputers, not a single European company manufactures the machines. And that, says Rubbia, is “an unacceptable situation.”

It might seem a bit late to play catch-up, but Rubbia argues that Europe has a window of opportunity because high-performance computing is at a watershed. Current machines are capable of several gigaflops. (A flop is essentially one calculation per second.) The next generation will be teraflops machines, capable of a trillion calculations per second. That will require completely new approaches to hardware and software, which could be developed in Europe.

The report, drawn up by 18 high-level users of supercomputers, outlines a five-stage program. First would be an effort to encourage the use of existing supercomputers. That’s where the new pan-European high-speed network comes in. Existing links are relatively slow and fragmented within individual countries. Rubbia would like to see a multi-megabaud backbone to create what he calls “a European high-performance computing community” and position Europe to build the next generation of gigabaud links. While that is going on, manufacturers should “vigorously” pursue advanced machines, while programmers concentrate on “the inventive development of novel software.” Basic research will be needed “to raise the competitive level of European industry.” And education and training – even at the high school level – should be stepped up to ensure that Europe’s scientists become aware of the potential of high-performance computing.

As for funding, the Rubbia report says spending – currently about $150 million for “advanced architectures and their application” – should increase gradually to about 1 billion European Currency Units a year by 1995. (One ecu is currently worth about $1.40.) But it does not say exactly where that funding should come from. Rubbia took the easy route: “We are scientists and engineers, calling attention to the needs rather than suggesting a clear financial strategy of how to solve these problems.”

The working group unveiled its proposal to the European Commission last week, and it got a favorable reception. Filippo Maria Pandolfi, vice president of the commission, hinted that Rubbia’s proposals fit well with future plans of Directorate-General XIII, which is responsible for telecommunications, information industries, and innovation, and which commissioned the report. In 1992 the directorate will reassess priorities under its third Framework program. That will involve concentrating resources in specific areas, Pandolfi said, and supercomputing is likely to be one of them.

Does Europe really need its own supercomputer industry? Rubbia and other members of the working group stressed the benefits that supercomputers bring to science, engineering, and everyday life. But they were less specific on the benefits of building, rather than buying, the capability. “It is just inconceivable to buy everything from abroad,” said Rubbia. Pierre Perrier of Dassault Aviation stated baldly that “without a supercomputer industry, Europe would return to the second world. It would not be part of the first world.”

Computer woes still few and far between


A computer glitch that pushed the year back to 1900 on computer screens throughout D.C. government was considered a “nonevent” and has been fixed, officials said yesterday.

Employees with access to such databases as payroll or tax and revenue were first confronted Saturday with the incorrect date on a security window.

Once they moved to a new window by entering their user name and password, the system ran smoothly, said Henry Debnam, chief computer technician for the office of Chief Financial Officer Valerie Holt.

“It was a nonevent as far as we were concerned,” he said.

Mr. Debnam did not inform all government employees or the public, he said, because technicians solved the problem in less time than notification would have taken. Yesterday morning, the date on computer screens read Jan. 5, 2000, after workers finished their repairs Tuesday night.

Officials said the problem did not slow city operations.

They could not estimate how many workers ran into the problem, though the number was limited to those with the proper security clearance.

“From Day One we have said that we would probably have some very minor date-change problems, but that residents would continue to receive the full range of government services,” said Mayor Anthony A. Williams.

Five days into the new year, only reports of minor bumps were coming in from around the region.

“Our planning paid off,” said Bonnie Pfoutz, who headed Arlington County’s computer effort.

However, the year-2000 bug crashed more than 800 slot machines at three Delaware racetracks in the days leading up to Jan. 1.

It also could be responsible for a malfunctioning computer at Crossland High School in Prince George’s County that keeps records for one-third of the public schools.


Several staff members within the school system did not return calls yesterday requesting information about a memo explaining the computer’s year-2000 problems.

During computer testing on Saturday, classrooms at Woodley Hills Elementary School in Alexandria remained dark and the heating system malfunctioned.

But computers weren’t to blame for this trouble; rather, it was “a suicidal squirrel that tried to party like it was 1999,” according to a report to the Fairfax County command center from Bob Ross of the school district’s year-2000 team.

“The poor squirrel either got into the circuit breaker or the electric transformer and got zapped,” Mr. Ross said. “The only Y2K fatality in the county was the squirrel.”

Michael Cady, director of information technology services for Prince George’s County, said he has encountered only one technical problem, which took about 10 minutes to fix.

“We expect maybe some other minor things going on, but that’s the extent of our glitches,” he said. “I call it a burp.”

In Montgomery County, the year-2000 project office requires every agency to check in twice a day – before 9 a.m. and again by 3 p.m. – to report any problems.

Officials reported only one error on Tuesday, which they said was not tied to the year-2000 bug. Nine public schools had trouble with the system that controls temperatures. Technicians managed to manually override it to compensate.

“Nobody’s called in a Y2K problem to us,” said Sonny Segal, chief of Montgomery County’s year 2000 efforts. He said workers having trouble signing on or printing documents are calling his office, only to find out that their glitches have nothing to do with the rollover to the new year.

“We did have calls reporting trouble logging on or slow computers,” said Charles Grammick of the Fairfax County school system’s year-2000 team. “But none of the problems was caused by Y2K.”

“Before, I was worried if I would have a job,” he added. “I am still here, and it’s great. Now I am wondering who is going to pay for the ulcer.”

The year-2000 computer problem stems from a cost-saving shortcut years ago in which software programmers devoted only two spaces in a date field to designate the year. That older software assumes the year always will begin with the digits 19.

Technicians feared that if they didn’t carefully reprogram and test affected systems – and replace calendar-sensitive computer chips embedded in some equipment – the computers would shut down or malfunction when they “read” the digits 00 as meaning 1900 and not 2000.