The Clicking Computers: How Relay Machines Launched the Digital Revolution

Imagine a time when the word "computer" referred not to the sleek, silent rectangles that fill our pockets and desktops, but to a massive machine filled with clicking, whirring metal parts. A time when computation was a physical process, measured not in gigahertz and petabytes, but in the mechanical motion of thousands of tiny switches. This was the era of the relay computer—the first generation of machines that truly earned the name "computer."

Though largely forgotten today, relay computers played a crucial role in launching the digital revolution that has transformed our world. From the 1930s through the early 1950s, these electromechanical behemoths were the most advanced information processing devices on the planet. They helped the Allies win World War II, cracked enemy codes, and solved problems in physics and mathematics that were previously intractable. And perhaps most importantly, they demonstrated the incredible potential of programmable computation—a concept that would go on to reshape science, business, culture, and daily life in ways the early pioneers could hardly imagine.

So what exactly were relay computers, and how did they work? Let's start with the basic building block: the relay. An electromagnetic relay is a simple device consisting of an electromagnet, a movable armature, and one or more sets of contacts. When current flows through the electromagnet, it generates a magnetic field that pulls the armature, causing the contacts to touch and create a closed circuit. When the current stops, the armature springs back, opening the circuit.

Relays had been used in telephone exchanges and other electrical systems since the 19th century, but in the 1930s, a few visionary engineers realized they could be used for something entirely different: computation. By arranging relays into precise configurations, they could be made to perform logic operations like AND, OR, and NOT. These logic gates could then be combined into more complex circuits, creating a physical instantiation of Boolean algebra. With enough relays wired together in the right way, one could build a machine that automatically stepped through a sequence of these logical operations: in other words, a computer.
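To make that concrete, here is a minimal sketch in Python of relay contacts acting as logic gates. It illustrates the principle rather than any historical machine's design: each relay is reduced to a function that reports whether current can flow through one of its contacts for a given coil state, and the gates are just series and parallel combinations of those contacts.

```python
# A relay reduced to its essentials: the coil state determines whether a
# contact passes current. This is an illustration of the principle, not a
# model of any historical machine.

def relay_no(coil: bool) -> bool:
    """Normally-open contact: closed only while the coil is energized."""
    return coil

def relay_nc(coil: bool) -> bool:
    """Normally-closed contact: opens while the coil is energized."""
    return not coil

def and_gate(a: bool, b: bool) -> bool:
    # Two normally-open contacts in series: current flows only if both coils are energized.
    return relay_no(a) and relay_no(b)

def or_gate(a: bool, b: bool) -> bool:
    # Two normally-open contacts in parallel: current flows if either coil is energized.
    return relay_no(a) or relay_no(b)

def not_gate(a: bool) -> bool:
    # A single normally-closed contact inverts its coil's state.
    return relay_nc(a)

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    """One binary digit of addition built from the gates above: sum is XOR, carry is AND."""
    sum_bit = or_gate(and_gate(a, not_gate(b)), and_gate(not_gate(a), b))
    carry = and_gate(a, b)
    return sum_bit, carry

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            print(int(a), "+", int(b), "->", tuple(map(int, half_adder(a, b))))
```

Stack enough of these adders and sequencing circuits together and you have an arithmetic unit, which is exactly the leap the pioneers described below made.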

The first person to put this idea into practice was a German engineer named Konrad Zuse. Working in near-total isolation in his parents' apartment in Berlin, Zuse spent the late 1930s building a series of increasingly sophisticated machines, the Z1, Z2, and Z3, that culminated in a full-fledged relay computer. The Z1, completed in 1938, was a purely mechanical prototype rather than a practical machine, but it introduced several key concepts that would become standard in later computers, including the use of a binary system for representing data and a floating-point format for handling non-integer numbers.

The Z2, finished in 1940, replaced the Z1's mechanical arithmetic unit with around 600 relays (while keeping its mechanical memory) and could perform basic arithmetic operations at a rate of about 5 Hz. But it was the Z3, completed in 1941, that really pushed the boundaries of what was possible with relay technology. With over 2,000 relays, the Z3 was the first programmable, fully automatic computer. It had a 22-bit word length, a clock frequency of about 5 Hz, and a memory of 64 words. Programs were punched onto discarded 35 mm film stock and fed into the machine one instruction at a time.
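The Z3's 22-bit word is usually described as one sign bit, a 7-bit exponent, and a 14-bit fraction with an implied leading one. The short Python sketch below decodes a word under those assumptions; the exponent encoding and the handling of zero are simplified here, so treat it as an illustration of the layout rather than a faithful reconstruction of Zuse's format.

```python
# Decode a 22-bit floating-point word laid out as the Z3's format is
# commonly described: 1 sign bit, 7-bit exponent, 14-bit fraction with an
# implied leading one. Exponent encoding and zero handling are simplified.

def decode_word(word: int) -> float:
    sign = (word >> 21) & 0x1    # bit 21: sign
    exp = (word >> 14) & 0x7F    # bits 14-20: exponent
    frac = word & 0x3FFF         # bits 0-13: fraction

    if exp >= 64:                # treat the 7-bit exponent as signed
        exp -= 128

    mantissa = 1.0 + frac / 2**14   # implied leading one: value is 1.frac in binary
    value = mantissa * 2.0**exp
    return -value if sign else value

# Example: sign 0, exponent 3, fraction 0.25 -> 1.25 * 2**3 = 10.0
word = (0 << 21) | (3 << 14) | (0b01 << 12)
print(decode_word(word))  # 10.0
```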

The Z3 was destroyed by Allied bombing in 1943, and Zuse's work remained virtually unknown outside Germany for years. Similar ideas, however, were taking shape independently elsewhere. In the United States, a team at Bell Labs led by mathematician George Stibitz was working on a machine to perform complex number arithmetic, which was needed for analyzing the behavior of telephone networks. Completed in 1939, Stibitz's Complex Number Calculator (CNC) used over 400 relays to implement addition, subtraction, multiplication, and division of complex numbers.
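The arithmetic the CNC mechanized is easy to state on paper: every complex operation breaks down into a short, fixed sequence of real additions, subtractions, multiplications, and divisions, exactly the kind of step-by-step procedure a bank of relays can be wired to carry out. The sketch below shows the textbook decomposition; it is not a model of the Bell Labs circuitry.

```python
# Complex arithmetic reduced to real operations, the way a relay machine
# would have to carry it out. Numbers are (real, imaginary) pairs; this is
# the textbook decomposition, not a model of the CNC's circuits.

def c_add(a, b):
    (ar, ai), (br, bi) = a, b
    return (ar + br, ai + bi)

def c_mul(a, b):
    (ar, ai), (br, bi) = a, b
    # (ar + ai*i)(br + bi*i) = (ar*br - ai*bi) + (ar*bi + ai*br)*i
    return (ar * br - ai * bi, ar * bi + ai * br)

def c_div(a, b):
    (ar, ai), (br, bi) = a, b
    d = br * br + bi * bi           # multiply through by the conjugate of b
    return ((ar * br + ai * bi) / d, (ai * br - ar * bi) / d)

print(c_mul((3, 4), (1, -2)))  # (11, -2)
print(c_div((3, 4), (1, -2)))  # (-1.0, 2.0)
```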

The CNC was a special-purpose device, but it demonstrated the potential of relay technology for more general computation. Meanwhile, at Harvard, Howard Aiken had been designing a far more ambitious machine since his days as a graduate student there. Built by IBM to Aiken's design and operated for the U.S. Navy during the war, the Harvard Mark I was a behemoth of a computer that weighed over 10,000 pounds and contained more than 750,000 components, including some 3,000 relays.

The Mark I was a decimal computer, meaning it represented numbers using decimal digits rather than binary. It had 72 registers for storing numbers, each holding 23 decimal digits plus a sign. Instructions were fed into the machine on punched paper tape, and output was printed on electric typewriters or punched onto cards. The machine could perform three additions or subtractions per second, and a multiplication took about six seconds.
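To see what decimal operation means in practice, the sketch below adds two numbers one digit at a time with carry propagation, which is roughly the job the Mark I's electromechanical counters performed. The 23-digit register width matches the description above, but the sign handling and overflow behavior are simplifications, not a model of the actual hardware.

```python
# Decimal addition one digit at a time with carry propagation, roughly what
# the Mark I's counters did. Register width is fixed at 23 digits to match
# its accumulators; signs and overflow are ignored for simplicity.

WIDTH = 23

def to_digits(n: int) -> list[int]:
    """A non-negative integer as 23 decimal digits, least significant first."""
    return [(n // 10**i) % 10 for i in range(WIDTH)]

def add_registers(a: list[int], b: list[int]) -> list[int]:
    result, carry = [], 0
    for da, db in zip(a, b):
        s = da + db + carry
        result.append(s % 10)
        carry = s // 10
    return result               # a real machine would flag a leftover carry as overflow

def from_digits(digits: list[int]) -> int:
    return sum(d * 10**i for i, d in enumerate(digits))

total = add_registers(to_digits(123456789), to_digits(987654321))
print(from_digits(total))  # 1111111110
```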

Despite its impressive size and complexity, the Mark I was still an electromechanical device, relying on physical motion to perform computations. This made it inherently slow and error-prone compared to the electronic computers that would follow. But during World War II, it was among the most powerful general-purpose computing machines available to the Allies, and it played a vital role in the war effort.

The Mark I was used extensively by the U.S. Navy for calculating artillery firing tables, which required solving complex differential equations to account for factors like wind speed, air resistance, and the rotation of the Earth. It also performed crucial calculations for the Manhattan Project, including modeling the implosion-type nuclear weapon that was eventually dropped on Nagasaki.

Other relay computers followed, including the Bell Labs Models II, III, IV, and V, built between 1943 and 1947, and the Harvard Mark II, a relay machine completed shortly after the war. (The IBM Automatic Sequence Controlled Calculator, or ASCC, was simply IBM's name for the Harvard Mark I itself.) In the U.K., a team at Bletchley Park took a different route, building the Heath Robinson and Colossus code-breaking machines, which relied increasingly on electronic valves rather than relays. And in the Soviet Union, a group led by Sergei Lebedev later built the MESM (Small Electronic Calculating Machine), which combined vacuum tubes with relays.

But as the 1940s drew to a close, it was becoming clear that the days of the relay computer were numbered. Electronic switching elements, first vacuum tubes and later transistors, promised far greater speed and reliability than any electromechanical system could provide. In 1946, the ENIAC (Electronic Numerical Integrator and Computer) debuted at the University of Pennsylvania, and the age of electronic computing had begun.

Still, the relay computers had laid the groundwork for everything that would follow. By demonstrating the feasibility and power of programmable computation, they opened up a vast new frontier of possibilities. And many of the concepts and techniques developed for relay machines, such as binary arithmetic, floating-point number formats, and modular hardware design, would carry over into the electronic era and beyond.

Today, the clicking and clacking of relays has long since been replaced by the silent hum of integrated circuits. But the pioneering work of Zuse, Stibitz, Aiken, and their contemporaries remains as relevant as ever. In a world increasingly mediated by digital technology, it's easy to forget just how far we've come in a few short generations. The relay computers may seem primitive by modern standards, but they were a critical first step on the long road to the astonishing devices we now take for granted.

As a digital technology expert, I believe it's important to study and celebrate these early machines, not just as historical curiosities, but as a reminder of the ingenuity, determination, and vision that drove the digital revolution forward. The relay computers may be long gone, but their legacy lives on in every smartphone, laptop, and smart device we use today. They were the clicking computers that launched a new era of human progress—and we owe them a debt of gratitude that can never be fully repaid.