About a decade ago there was a major change in the way commodity microprocessors were designed. Until then, chipmakers had ridden Moore's Law to build ever bigger and faster processors, with the emphasis on single-threaded performance; internal clock frequencies were expected to exceed 6 GHz within a few years. Around 2004 the design emphasis shifted to multi-core, and clock frequencies actually dropped. The shift to multi-core coincided with two other developments: the rise of Linux and a renewed Web. This confluence of mostly unrelated trends paved the way for today's prevalent theme of concurrent computing on the cloud.
To properly understand these developments we need to start with semiconductor scaling: something happened to Moore's Law around 2004, and we need to understand that first. We'll then apply those principles to get a perspective on what's happening today, in 2014. To that end, this article is divided into two parts: