Think of technology fifty years ago, and you probably don’t think of anything as being “technology” at all, although it is of course an incredibly broad term that could encompass any tool anyone has used since the dawn of history. Still, if you were to picture technology as it was back in 1965, you’d probably imagine huge banks of computers in research laboratories, tape reels whirring backwards and forwards, and a level of IT that, from the perspective of 2015, would seem almost impossibly primitive.
It’s fascinating to think, then, that the last half century of technology has revolved around an article first published on April 19th, 1965, in the pages of Electronics Magazine. You might think I’m being a touch hyperbolic: how could one small magazine article shape the future of information technology over the course of decades?
The answer lies in the specifics of that particular article, entitled “Cramming More Components onto Integrated Circuits”. Not a blockbuster title by any stretch of the imagination, but as your mother always told you, it’s what’s inside that counts. Inside the article, Intel co-founder Gordon Moore detailed his view of the future of circuitry, giving birth to what’s become known as “Moore’s Law”.
Strictly speaking, Moore’s Law isn’t a law at all, but an observation about the rate of increase in the number of transistors on an integrated circuit. It’s usually stated as a doubling of transistor counts every two years, although in his original article Moore pegged the period closer to a year. If you’re particularly keen on staring straight at history, you can read the original article here.
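That doubling is simple exponential arithmetic, and you can sketch it in a few lines of code. As an illustration only (the starting figures below are my own example, not numbers from Moore’s article):

```python
def projected_transistors(start_count, start_year, target_year,
                          doubling_period_years=2):
    """Project a transistor count forward, assuming a doubling every
    `doubling_period_years` years -- the usual statement of Moore's Law."""
    doublings = (target_year - start_year) / doubling_period_years
    return start_count * 2 ** doublings

# Example: start from 2,300 transistors in 1971 (roughly the Intel 4004)
# and project forward to 2015 at a two-year doubling rate.
print(round(projected_transistors(2300, 1971, 2015)))  # about 9.6 billion
```

Of course, real chips never tracked the curve this neatly, but the sketch shows why even a modest-sounding doubling period piles up into billions of transistors within a few decades.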
Transistors don’t sound terribly exciting unless you’re an engineer, and the article itself is rather dry reading if you’re not. Still, Moore’s observation had a profound impact on the way IT evolved over the following decades. While in many ways he was only extrapolating a trend that was already evident, the public idea of “Moore’s Law” (a label Moore himself wasn’t too happy with for some time) became something of a self-fulfilling prophecy: it signalled to everyone in the technology field that they’d have to compete at this level in order to stay relevant. As such, it spurred all sorts of research en route to fulfilling the “law”, which is why, for example, the keyfob that unlocks your car has far more computing power than the room-sized machines Moore was writing about back in 1965.
From Moore’s original article, which talked in theoretical terms of cramming up to 65,000 components onto a quarter-inch semiconductor, we’ve moved to today, when high-end processors cram in billions of transistors. Moore’s Law in its classical form held observably for decades, although more recent developments have raised questions about how long it’s physically feasible to keep iterating along those lines. Certainly, processor companies are keen to keep Moore’s Law ticking along, with specific plans to halve today’s 14nm process (that’s nanometres; just try thinking that small) down to a tiny 7nm, and perhaps beyond.