Dynamic random access memory (DRAM) has transformed computing by providing an affordable way to equip systems with large, high-speed memory capacities. But what is DRAM, and how did it enable the personal computing revolution? Let's unpack everything you need to know about this vital computer memory technology!
First off – DRAM stands for dynamic random access memory, and the full name already gives some clues about how it works. DRAM stores data in an integrated circuit containing capacitors and transistors arranged in a grid of memory cells. It's dynamic because the data in it needs to be continuously refreshed – more on that later!
To understand DRAM's significance, we have to go back to the days before it existed, when magnetic core memory dominated the computing landscape in the 1960s. Core memory sounds like something from science fiction: arrays of tiny ferrite rings with wires threaded through them, storing bits magnetically. But it came with some major drawbacks…
Limitations of Magnetic Core Memory
- Bulky physical size and heavy weight
- Significant power consumption and heating
- Low densities and capacities (kilobits level)
- Slow read/write times, in the microsecond range
- Challenging and expensive manufacturing process
Enter Robert Dennard at IBM in 1966, who conceived a dramatically different way to build memory, using just a single transistor and capacitor to store each bit. This compact one-transistor, one-capacitor (1T1C) design meant DRAM memory cells took up far less space on silicon chips than hulking magnetic cores. Dennard's breakthrough invention paved the way for orders of magnitude greater densities.
The Density Advantage of DRAM
| Year | Maximum DRAM density |
|---|---|
| 1970 | 1 kb per chip |
| 1975 | 16 kb per chip |
| 1980 | 64 kb per chip |
| 1985 | 256 kb per chip |
| 1990 | 1 Mb per chip |
As you can see from the table above, DRAM densities grew rapidly in the first decades after its invention, exponentially increasing the memory capacity that could fit on a chip. Where magnetic cores measured data in kilobits, DRAM had zoomed to megabits by 1990.
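Just how fast is that growth? Here's a quick back-of-the-envelope calculation in Python using the endpoints of the table above (assuming binary units, i.e. 1 kb = 2^10 bits and 1 Mb = 2^20 bits):

```python
import math

# DRAM density milestones from the table above, in bits per chip
# (assuming binary prefixes: 1 kb = 2**10 bits, 1 Mb = 2**20 bits)
densities = {1970: 2**10, 1990: 2**20}

growth = densities[1990] / densities[1970]       # 1024x in 20 years
doublings = math.log2(growth)                    # 10 doublings
years_per_doubling = (1990 - 1970) / doublings   # 2.0 years

print(f"{growth:.0f}x growth = {doublings:.0f} doublings")
print(f"roughly one doubling every {years_per_doubling:.1f} years")
# → 1024x growth = 10 doublings
# → roughly one doubling every 2.0 years
```

In other words, DRAM capacity doubled about every two years during this period – right in line with the famous Moore's law trajectory.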
So how does DRAM pull off this magic? Let's look at what happens behind the scenes…
Inside a DRAM Memory Cell
DRAM stores data in the form of electrical charge on a capacitor, which can be in either a charged or discharged state to represent a 1 or 0 bit value. Each memory cell contains one access transistor that acts like a switch to control reading from or writing to the capacitor.
The capacitor's charge leaks away within milliseconds, so each DRAM cell must be refreshed – its charge read and rewritten – many times per second before the data fades. This refresh operation happens automatically in the background. When you access data, sense amplifiers detect the small voltage differences caused by the remaining charge. Pretty neat!
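A toy model can make the refresh idea concrete. The sketch below is a deliberately simplified simulation, not how real silicon works: the leak rate and sense threshold are made-up illustrative numbers, and a single float stands in for the capacitor's charge.

```python
LEAK_PER_MS = 0.10      # fraction of charge lost per millisecond (illustrative)
SENSE_THRESHOLD = 0.5   # sense amp reads >= threshold as a 1 (illustrative)

class DramCell:
    """Toy 1T1C cell: one float stands in for the capacitor's charge."""

    def __init__(self):
        self.charge = 0.0

    def write(self, bit: int) -> None:
        self.charge = 1.0 if bit else 0.0

    def leak(self, ms: int) -> None:
        # Charge decays exponentially while the cell sits idle.
        self.charge *= (1.0 - LEAK_PER_MS) ** ms

    def read(self) -> int:
        # The sense amplifier compares residual charge to a threshold...
        bit = 1 if self.charge >= SENSE_THRESHOLD else 0
        # ...and since reading drains the cell, the value is written back.
        self.write(bit)
        return bit

    def refresh(self) -> None:
        # A refresh is simply a read plus the automatic write-back.
        self.read()

cell = DramCell()
cell.write(1)
cell.leak(5)            # 0.9**5 ≈ 0.59 charge left: still reads as 1
print(cell.read())      # → 1 (and the read restores full charge)
cell.leak(10)           # wait too long without a refresh...
print(cell.read())      # → 0 (charge fell below threshold: the bit is lost)
```

The final read illustrates exactly why DRAM is "dynamic": skip the periodic refresh for too long, and stored bits quietly decay to zero.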
Of course, further shrinking the components introduces challenges like row-hammer, where accessing one location can disturb neighboring cells. But through five decades of incremental developments, DRAM has continuously delivered ever-higher densities and faster speeds.
Unleashing Personal Computing
This combination of affordability, capacity and performance was exactly what early personal computers needed in the 1970s and 80s. Desktop models like the Apple II and IBM PC used DRAM to provide enough working memory for applications like spreadsheets and word processors.
With data no longer confined to slow magnetic tape or disks, users could enjoy responsive programs that ran and switched between tasks rapidly. DRAM enabled PCs to become general purpose machines useful for creativity, business, and entertainment.
So next time your PC boots up, remember it's DRAM silently powering things behind the scenes! Decades after its inception, this technology still plays a crucial role in making modern computing possible. Not bad for a memory design based on tiny capacitors!