Multicore

Semiconductor scaling and concurrent clouds - Part II

II. The era of concurrency

In Part I we discussed the technical secrets of semiconductor scaling that have kept Moore's Law going to this day. We learnt about such things as channel length scaling, high-K, leakage current, finFET, and timing closure. In particular, we saw that as channel length is aggressively scaled down, dynamic power dissipation goes up as the CUBE of the clock frequency.
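The cube relationship is worth making concrete. Dynamic power follows P = C·V²·f, and because the supply voltage must scale roughly in step with clock frequency, the effective result is P ∝ f³. A back-of-the-envelope sketch (the constants below are purely illustrative, not real process parameters):

```python
def dynamic_power(f_ghz, c_eff=1.0, volts_per_ghz=0.4):
    """Dynamic power (arbitrary units) for a given clock frequency.

    Models P = C * V^2 * f, assuming supply voltage V scales
    linearly with frequency f. c_eff and volts_per_ghz are
    illustrative constants, not real process parameters.
    """
    v = volts_per_ghz * f_ghz  # assumption: V scales with f
    return c_eff * v * v * f_ghz

# Doubling the clock gives roughly 8x the dynamic power:
ratio = dynamic_power(4.0) / dynamic_power(2.0)
print(round(ratio))  # 8
```

This is why the race past 6 GHz stalled: a modest bump in clock speed costs a cubic bump in heat to dissipate.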

We will now apply these semiconductor insights to understand the shift to multi-core commodity processors, and the emergence of concurrent computing on the cloud.

Semiconductor scaling and concurrent clouds - Part I

About a decade back there was a major change in the way commodity microprocessors were designed. Until then, Moore's Law had meant baking ever bigger and faster processor chips, with the emphasis on single-threaded performance. Processor internal clock frequencies were expected to exceed 6 GHz within a few years. Around 2004 the design emphasis changed to multi-core, and clock frequencies actually dropped. The shift to multi-core coincided with two other streams - the rise of Linux, and a renewed Web. This confluence of mostly unrelated developments paved the way for today's prevalent theme of concurrent computing on the cloud.

To properly understand these developments we need to start with semiconductor scaling. Something happened to Moore's Law around 2004, and we need to understand that first. We'll then apply those principles to gain a perspective on what's happening today, in 2014. To that end this article is divided into two parts:

Part I: The secrets of Moore's Law

Part II: The era of concurrency