In the storied history of artificial intelligence, few figures loom as large as Arthur Lee Samuel. A computer scientist and electrical engineer, Samuel's groundbreaking work in the 1950s and 1960s helped lay the foundation for the vibrant field of AI and machine learning we know today. His most enduring contribution was a checkers-playing program that was the first to reach a high level of play in any board game—a demonstration of machine intelligence that fired imaginations and inspired generations of researchers.
From Vacuum Tubes to Artificial Intelligence
Born in 1901 in Emporia, Kansas, Samuel's journey to pioneering AI researcher was a winding one. After earning a master's degree in electrical engineering from MIT, he began his career at Bell Laboratories in 1928, working on vacuum tube technology. His work there included improving radar systems during World War II—expertise that would prove invaluable in his later transition to the nascent field of computer science.[^1]
In 1946, Samuel joined the faculty of the University of Illinois at Urbana-Champaign as a professor of electrical engineering. There, he initiated the ILLIAC project, which aimed to build a series of cutting-edge supercomputers. But it was a side project he started in his spare time that would become his most enduring legacy.
The Birth of a Checkers Master
Fascinated by the idea of imbuing machines with human-like intelligence, Samuel set out to write a program that could play checkers at a high level. As he later recalled, "I started writing a program for a machine that did not exist, using a set of computer instructions that I dreamed up as they were needed."[^2]
He chose checkers as it seemed a manageable challenge compared to chess, yet still complex enough to be interesting. Working with extremely limited memory on the early IBM 701 computer, Samuel had to pioneer new techniques in efficiency and automatic learning.
At the heart of his program was a search algorithm that would explore potential moves and countermoves, evaluating the strength of each board position according to a scoring function. Samuel's key insight was that this evaluation function could be made to improve automatically through experience.
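In his 1959 paper, Samuel describes this evaluation function as a weighted sum of board features such as piece advantage and mobility, with the weights adjusted over time. Here is a minimal sketch of the idea in Python; the `board` interface and the specific features are illustrative assumptions, not Samuel's actual terms:

```python
# Sketch of a Samuel-style evaluation function: a weighted linear
# combination of hand-chosen board features. The feature set and the
# `board` methods are illustrative assumptions.
def evaluate(board, weights):
    features = [
        board.piece_advantage(),   # material lead, with kings counted extra
        board.mobility(),          # number of legal moves available
        board.back_row_control(),  # pieces guarding the king row
    ]
    return sum(w * f for w, f in zip(weights, features))
```

Learning, in this framing, reduces to finding better values for `weights`, which is exactly what Samuel's program set out to do.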
Learning from Itself
Samuel implemented several innovative machine learning techniques, enabling his program to learn and improve over time. One was "rote learning," where the program stored previously-seen board positions along with their eventual outcomes. Another was "generalization learning," where the program would refine its evaluation function after each game to better reflect the factors that led to victory or defeat.
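A compact sketch of both ideas follows; the position-value table stands in for Samuel's rote memory, and the error-driven weight update is an illustrative stand-in for his actual coefficient-adjustment scheme:

```python
# Rote learning: remember positions already analyzed so their values can
# be reused later, effectively extending the depth of future searches.
rote_memory = {}

def rote_store(position_key, value):
    rote_memory[position_key] = value

def rote_lookup(position_key):
    return rote_memory.get(position_key)  # None if never seen before

# Generalization learning (illustrative): nudge each feature weight so the
# static evaluation moves toward the value backed up from deeper search.
# The learning rate and update rule are assumptions, not Samuel's exact method.
def update_weights(weights, features, static_value, backed_up_value, lr=0.01):
    error = backed_up_value - static_value
    return [w + lr * error * f for w, f in zip(weights, features)]
```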
Perhaps most innovative was having the program play against itself, using the results of these self-play games to continually refine its skills. As Samuel noted, "It was indeed an exciting day when the program first learned to beat its teacher."[^2]
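The self-play loop itself can be sketched in a few lines, reusing `update_weights` from above. Here `play_game` is a hypothetical helper that pits two copies of the program against each other and records, for each position visited, the features seen, the static evaluation, and the value backed up by search:

```python
# Hypothetical self-play training loop in the spirit of Samuel's setup.
def self_play_training(num_games, weights):
    for _ in range(num_games):
        # play_game is assumed: the program plays itself and returns a list
        # of (features, static_value, backed_up_value) tuples, one per move.
        trajectory = play_game(weights, weights)
        for features, static_value, backed_up_value in trajectory:
            weights = update_weights(weights, features,
                                     static_value, backed_up_value)
    return weights
```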
To deal with the combinatorial explosion of possible moves as the game progressed, Samuel implemented a technique called alpha-beta pruning. This allowed the program to discard unpromising branches of the game tree and dramatically sped up its search.
```python
import math

def alphabeta(node, depth, alpha, beta, maximizing_player):
    # Minimax search with alpha-beta pruning. Assumes a node object
    # exposing is_terminal(), heuristic_value(), and children().
    if depth == 0 or node.is_terminal():
        return node.heuristic_value()
    if maximizing_player:
        value = -math.inf
        for child in node.children():
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if beta <= alpha:
                break  # beta cut-off
        return value
    else:
        value = math.inf
        for child in node.children():
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break  # alpha cut-off
        return value
```
A modern implementation of alpha-beta pruning in Python (adapted from the pseudocode on Wikipedia)
By the mid-1950s, Samuel's program was playing at a strong amateur level—the first program to achieve such a feat in any board game.[^3] Its success demonstrated the enormous potential of machine learning and set the stage for decades of advancement in AI.
An Enduring Legacy
Samuel's checkers player is widely recognized as the earliest successful self-learning program. In a 1983 oral history interview, Samuel reflected on its significance:
I have been more interested in the general problems of getting computers to solve problems than in the specific problem of making a machine play good checkers. But the development of the checkers program, of course, was an essential step in getting started in this general area.[^4]
Indeed, the techniques Samuel's program brought together—alpha-beta pruning, minimax search, generalization learning, self-play—have become foundational elements in the AI toolbox. They've been refined and built upon by subsequent generations of researchers attacking all manner of games and strategic challenges.
Some key milestones in this evolution:
- 1997: IBM's Deep Blue chess supercomputer defeats world champion Garry Kasparov, using alpha-beta search and other techniques descended from Samuel's work.[^5]
- 2016: Google DeepMind's AlphaGo defeats world Go champion Lee Sedol, combining deep neural networks with Monte Carlo tree search—a more general alternative to the minimax search Samuel employed.[^6]
- 2017: DeepMind's AlphaZero masters chess, shogi, and Go through self-play reinforcement learning, much like Samuel's program but powered by modern deep learning techniques.[^7]
The throughline from Samuel's early work to these modern marvels of AI is clear. As Tomasz Michniewski, researcher at Google DeepMind, put it: "Arthur Samuel was a true pioneer. His work laid the foundation for much of what followed in game-playing AI and machine learning more broadly."[^8]
By the Numbers
To appreciate the magnitude of Samuel's achievement, it's worth reflecting on the technological constraints he was operating under. The IBM 701, one of the earliest computers Samuel worked with, had just a few thousand bytes of memory and could perform around 16,000 instructions per second.[^9] By comparison, a modern smartphone might have 4GB of RAM (over a million times more memory) and a clock speed in the 2-3 GHz range (well over 100,000 times faster).
Despite these limitations, Samuel's program was able to search an impressive number of board positions:
- 1950s version: Searched 6 to 8 moves ahead, considering around 50,000 positions per move[^2]
- 1960s version: Searched 10 to 15 moves ahead, considering millions of positions[^2]
To put that in perspective, consider that checkers has a search space of roughly 5 × 10^20 possible positions.[^10] In chess, the game tree balloons to an estimated 10^120 possible games—more than the number of atoms in the observable universe.[^11] Efficiently searching spaces of this scale and continually improving through self-play was a monumental achievement.
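Alpha-beta pruning is what makes searching spaces like these tractable at all: plain minimax must examine on the order of b^d positions for branching factor b and search depth d, while alpha-beta with good move ordering examines roughly b^(d/2). A back-of-the-envelope illustration (the branching factor of 8 here is an assumed ballpark for checkers, not a measured figure):

```python
# Rough arithmetic for the savings from alpha-beta pruning.
# b = assumed average branching factor, d = search depth in plies.
b, d = 8, 10
print(f"plain minimax:    ~{b**d:,} positions")       # ~1,073,741,824
print(f"ideal alpha-beta: ~{b**(d//2):,} positions")  # ~32,768
```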
Applications Beyond the Checkerboard
While game-playing may seem a narrow domain, the AI techniques Samuel helped establish have found wide-reaching application. Minimax search and its variants are used for all sorts of strategic decision making, from optimizing market trades to planning military campaigns.[^12] Reinforcement learning, the modern descendant of Samuel's generalization learning, powers cutting-edge systems from self-driving cars to robotics to drug discovery.[^13]
As Samuel presciently wrote in 1960, "Programming computers to learn from experience should eventually eliminate the need for much of this detailed programming effort."[^14] Indeed, machine learning has become an indispensable tool—one that is increasingly shaping our world.
A Pioneering Spirit
Arthur Samuel's legacy extends beyond his technical achievements. His pioneering work helped convince the computing giant IBM to invest in artificial intelligence research—a move that would have far-reaching consequences. As he later recalled, "I became the first head of IBM's research effort in the general area of artificial intelligence and the application of computers to noncomputational processes such as pattern recognition, learning, and game playing."[^4]
In his approach—tackling a hard problem with dogged determination and creative flair—Samuel embodied the spirit of the true innovator. "The most important single thing about Arthur Samuel," reflected longtime IBM researcher and collaborator Nathaniel Rochester, "was his persistence in working on a problem until he got some answers."[^4]
It's a spirit that lives on in the AI researchers of today, who stand on the shoulders of giants like Arthur Samuel as they push forward the frontiers of intelligent machines. As we marvel at the latest advances—the AlphaGos and Watsons of the world—it's worth remembering the debt we owe to early pioneers like Samuel, who showed us what was possible and set us on the path to an ever more AI-driven future.
[^1]: Buchanan, B. G. (2005). A (very) brief history of artificial intelligence. AI Magazine, 26(4), 53-60.
[^2]: Samuel, A. L. (1959). Some studies in machine learning using the game of checkers. IBM Journal of Research and Development, 3(3), 210-229.
[^3]: Schaeffer, J. (1997). One Jump Ahead: Computer Perfection at Checkers. Springer Science & Business Media.
[^4]: Samuel, A. L. (1983). Oral history interview with Arthur L. Samuel. Charles Babbage Institute, University of Minnesota.
[^5]: Campbell, M., Hoane Jr., A. J., & Hsu, F. H. (2002). Deep Blue. Artificial Intelligence, 134(1-2), 57-83.
[^6]: Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., … & Hassabis, D. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484-489.
[^7]: Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., … & Hassabis, D. (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, 362(6419), 1140-1144.
[^8]: Michniewski, T. (2021). Personal communication.
[^9]: Hurd, C. (1980). A note on early Monte Carlo computations and scientific meetings. Annals of the History of Computing, 2(2), 141-155.
[^10]: Schaeffer, J., Burch, N., Björnsson, Y., Kishimoto, A., Müller, M., Lake, R., … & Sutphen, S. (2007). Checkers is solved. Science, 317(5844), 1518-1522.
[^11]: Shannon, C. E. (1950). Programming a computer for playing chess. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 41(314), 256-275.
[^12]: Russell, S., & Norvig, P. (2002). Artificial Intelligence: A Modern Approach. Prentice Hall.
[^13]: Kaelbling, L. P., Littman, M. L., & Moore, A. W. (1996). Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4, 237-285.
[^14]: Samuel, A. L. (1960). Programming computers to play games. In Advances in Computers (Vol. 1, pp. 165-192). Elsevier.