Many of you NAGlings might remember a time when people went into what was once described by the New York Times as “a mild, blind panic” over the much-reported Y2K bug. According to the many rumours, reports and silly stories floating around, many believed that once the clock and calendar ticked their way into the 21st century, the world and its computer systems would fall apart at the seams and we’d be plunged into darkness. Well, we’re still here and the internet is still working. Let’s see what all the fuss was about.


The issue that created the environment for the Y2K bug started way, way back before personal computers were a thing. See, in the past we had computers that would interpret information using punched cards, where the holes punched through the card would represent commands and instructions and stored memory for the machine. Punched cards themselves are hundreds of years old – the first recorded use of them for programming a mechanical device was in the textile industry, operating huge looms that would weave and interleave fabric into patterns determined by the cards. Although not binary in nature, the cards had elements of binary mathematics that would later be incorporated into machine code by IBM in the twentieth century.


The more well-known Jacquard fabric looms, designed by smooth-faced Joseph Marie Jacquard in 1801, still operate today in some museums in France. In modern times, the “Loom of Fate” in the movie Wanted is similar in operation to a Jacquard loom, but it’s merely a fictional device, as it doesn’t appear to use a punched card system. If it did, however, that would probably have been (spoiler!) the way in which Sloan directed the machine to give kill orders. The cards that Jacquard created were very basic – they were simply patterns for the machine to follow – but they could encode over a million different design variations.

“Basic” may not be the right word though. The above picture was made by a Jacquard loom in 1839, using 24,000 punched cards. Clearly, Jacquard knew what his machines were capable of.

However, their usefulness was limited to the textile industry and to making copies of things. The machines had to be pretty big to run the card systems and would have needed an entire warehouse to allow for all the different combinations. While many of the cards were duplicates, loading in a new set was a major hassle and took just over a day. You couldn’t debug these cards either – once they were on the loom, that was the first point at which you’d find out whether the designs were right or not. Despite their flaws, however, they were insanely useful. The punched card systems were improved by clean-shaven Russian inventor Semen Korsakov (Hey! No sniggering at the back!) in 1832. His cards had eight columns lettered A to H and 32 digits to program information into. This was the first use of punched cards to store information other than pretty carpet designs.


Korsakov’s cards had holes punched into them and would be laid out flat on a table. The cards themselves were thick and allowed the pins, which were attached to a block of wood, to fall through the holes and indicate a data match. Korsakov’s invention was essentially a search engine, useful for large libraries that needed to find particular volumes and their locations very quickly. It was never put into production and he never patented it – in (possibly) the first example of applying the GNU license to anything, he distributed the designs to the public and allowed people to build their own systems and improve on them. Later on, it would be none other than a balding Charles Babbage, the father of the modern programmable computer, who used a variation of Korsakov’s design in his Analytical Engine, designed but never finished in 1837.


IBM-manufactured punched-card systems

Punched cards were used in many, many devices. They were used to operate fairground machines and player organs, among other things. However, they were still being used to control machines, as opposed to Korsakov’s use of them as information storage. They were perfected for both uses by Herman Hollerith, an American inventor with a fabulous moustache, in time for the 1890 Census of the United States. Not only did the cards store information (eventually up to eighty columns of digits), they also had early examples of a header, noting the contents of the information stored on the card. Hollerith later patented his system and started a company called the Tabulating Machine Company (TMC). TMC produced both the cards and the machines that would read data off them and produce results in the form of more punched cards.

TMC later merged with three other companies to form the Computing-Tabulating-Recording Company (CTR) in 1911. CTR made use of punched cards in a range of systems, including automatic meat slicers, printers, small looms, weighing scales, time-keeping systems, employee attendance records (coining the terms “clock out” and “punch out” for the end of a shift) and record-keeping. In 1924, CTR took on the name of a little-known subsidiary it had created in Canada – International Business Machines.

Even IBM’s logo harks back to the card systems they produced in the early days.

Hollerith’s death in 1929, at the age of 69, brought with it some tough times for the punched card standard. Because of limited space, Hollerith had determined that he could get away with using just six digits to represent the date the card was made or when the data was entered into the system. The format mm/dd/yy is still used for shorthand dates today, and it was used in all of TMC’s, CTR’s and IBM’s products. It’s often debated whether Hollerith knew about a flaw in his design that would haunt people using his system decades later, choosing to take it to his grave to spite Thomas Watson, IBM’s CEO.

The early computers built in the 1940s used punched cards as the basis for their memory storage. Many of IBM’s products had been adopted for governmental use by that time, and they used punched cards with two digits representing the year. The habit of dropping two digits to save space continued into the early 60s, with companies like IBM moving into electronic mass storage measured in kilobytes. With such a limited amount of storage space and memory, programs were written for the machines using the same techniques applied to punched cards, dropping the two leading digits of the year – the “1” and the “9” of the century. Because everyone assumed the programs wouldn’t be used for more than five years before being replaced, nothing was done about it.
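To see why those two missing digits mattered, here’s a minimal sketch – hypothetical code, not from any real system – of the kind of date arithmetic those programs relied on:

```python
def years_between(start_yy, end_yy):
    # Naive subtraction on two-digit years, as many early programs did,
    # silently assuming both dates fall within the same century.
    return end_yy - start_yy

# A record created in 1975, read in 1999: the sum works out fine.
print(years_between(75, 99))  # 24

# The same record read in the year 2000, now stored as "00": negative.
print(years_between(75, 0))   # -75
```

Anything built on that subtraction – interest calculations, ages, expiry checks – quietly goes wrong the moment the year wraps from 99 to 00.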


IBM’s 704 still used cards, but was rather effective.

Towards the seventies, everything was running off programs written to be as small as possible, two-digit years included: the space programme, audio recording systems, UNIX, even the chips inside programmable calculators, made by a small upstart called Intel, founded in 1968. By that time, no-one was thinking about the future, only worrying about whether the Russians would wake up on the wrong side of the bed one morning, or if the Beatles were somehow simultaneously murdered.

This continued right up until the early 90s, when programmers got a wake-up call and resolved to start fixing it. This was a little ironic, since the issue was first noted by Bob Bemer in 1958, when he factored future dates into his genealogical software. Despite twenty years of warning everyone about it, Bemer was ignored. When Internet Mail was standardised and ratified in 1989, programmer Erik Naggum was vocal about the recommendation to use four digits to represent the year.

A Y2K conversion centre in Japan, with corporations helping programmers to turn their software systems over to compatibility with the new year.

It was a widespread issue, but implementing a fix wasn’t so easy. Most systems running programs descended from punched card conventions would simply roll the date back to 1900, displaying it as “00”, which would have broken banking systems and programs that couldn’t handle a result of “00”. For those systems, it would have been the equivalent of dividing by zero, causing a complete crash. It would have affected the stock markets as well, had Wall Street not changed to a UNIX-based trading system before the turn of the century. The results for systems that couldn’t yet be taken offline were particularly worrying to those who knew about it – in Japan, a nuclear power plant set off an alarm signalling that it was overheating. In Sheffield, UK, incorrect test results were sent to 154 pregnant women testing for Down’s Syndrome. Japan’s NTT DoCoMo found that people receiving SMSes had them deleted before they could read them. In the US, slot machines at a horse racing track in Delaware stopped working two seconds after the beginning of the new century.

Other programs and virtual environments were a little different. UNIX got over the problem by storing the date as a fixed starting point (1 January 1970) plus the number of seconds that had passed since then. Microsoft Excel incorrectly recognised 1900 as a leap year – and to this day it still does, to keep compatibility with spreadsheets set up before the year 2000. JavaScript’s date functions returned the year as an offset from 1900 (so the year 2000 came back as 100), requiring hundreds of thousands of web developers to rewrite their code to use four-digit years. And this was only a small subset of the systems due to crash, prompting the entire software industry to re-check its code and update programs written for mission-critical business use.
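The UNIX seconds-since-1970 approach can be sketched like this, using Python’s standard datetime module purely for illustration:

```python
from datetime import datetime, timezone

# UNIX stores a single count of seconds since the epoch (1 Jan 1970 UTC)
# and derives the full, four-digit-year date from it on demand.
epoch_seconds = 946_684_800  # seconds from 1970-01-01 to 2000-01-01 UTC
dt = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
print(dt.isoformat())  # 2000-01-01T00:00:00+00:00
```

Because the year is computed rather than stored, there is no two-digit field to wrap around – the catch, as the 2038 problem shows, is the size of the counter itself.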


If time progressed into the year 2000 but the New York Stock Exchange rolled over to 1900, it would have meant another market crash. That’s how serious the situation was. Luckily, thanks to the internet, Usenet and the swathe of tech-focused magazines, the message eventually spread to the right people and everything mission-critical was saved before the countdown, and before people with loosened morals conceived Millennium babies to be born in September. Some systems continue to operate without being updated: the Voyager 1 and 2 probes continue to beam back data, despite not having their software updated to reflect the new dates.

It wasn’t the end, though. As is always the case with Murphy’s law, when you fix something, something else breaks. 32-bit UNIX systems and software will have to deal with their own Y2K issue in 2038, when the signed 32-bit integer counting seconds since 1970 runs out of room and rolls over. We also had a minor altercation with the year 2010, when some systems ran into glitches from code that misread the new decade when 2010 began. There may be more, but we’re not far enough along to figure out when they’re going to happen.
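The 2038 rollover point is easy to compute – a quick sketch, again in Python for illustration:

```python
from datetime import datetime, timezone

# A signed 32-bit counter tops out at 2**31 - 1 seconds past the epoch.
max_32bit = 2**31 - 1
print(datetime.fromtimestamp(max_32bit, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00 – one second later, a 32-bit time_t wraps
# around to -2**31, which lands back in December 1901.
```

Modern systems dodge the problem by widening the counter to 64 bits, which pushes the rollover billions of years into the future.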

Regardless, though, we’ll be ready. The Y2K bug caused enough consternation and trouble to make programmers aware of the limitations of their systems and this has had a side effect of improving documentation and identifying known issues in a system. Because of the good (and bad) it did the world, it deserves today’s title of “Oldie But Goodie”.

Discuss this in the forums: Linky