Tag: moores

Related blog posts
  • Popularity 21
    2015-5-29 18:40
    1635 reads
    0 comments
    Last April, Moore's Law celebrated its 50th anniversary. No one seems to be sure what Moore's Law is. Moore originally predicted that the number of transistors on a chip would double every year; later he amended that to two years. Some now claim the law sets an 18-month rate. Others say the law predicts that system performance will double every 18 months (or two years, depending on who is making the claim).

    The mainstream press is clueless about the "law." Tom Friedman, of The World is Flat fame, seems to think it really is a law, rather than an observation and an aspiration. Gordon Moore's 1965 paper showed it to be a historical pattern rather than a law of nature. And it is an aspiration, since leading semiconductor vendors use it as a guide for their products. For instance, Intel plans for a "tick" (process shrink) or a "tock" (new microarchitecture) every 12-18 months.

    I can't find what process geometry was used in 1965 when Moore made his prediction, but remember that few ICs had more than dozens of transistors at the time. I thought maybe the biggest part at the time was the 74181 ALU, which implemented about 75 gates, but it appears that this part didn't come out until five years later. Given that we've gone from a handful of transistors to billions on an IC, Moore's prediction certainly was remarkable and prescient.

    Is it coming to an end? For 30 years naysayers have been claiming that process scaling is doomed, but the incredible engineers designing these parts continue to astound.

    Gordon Moore thinks the law will peter out in the next decade or so. Intel thinks it's good for at least another decade. Yet in 2003 Intel predicted an end to the law shortly after hitting the anticipated 16 nm node. Today it seems we're on track for 10 nm fairly soon, with 7 nm not far off. (There is some marketing fuzz in the definition of "node," and it has been a long time since that term had much to do with the size of a transistor.)

    In a sense Moore's Law, or at least many of the benefits it gives, ended years ago around the 90 nm node, when Dennard scaling fell apart. In 1974 Robert Dennard noted that as geometries shrink we get all sorts of goodness, like higher clock rates and lower power. Many of the benefits he described no longer come with increasing densities. Today the law does give us more transistors per unit area, which translates into better performance and a lower cost per gate. However, some believe that 20 nm broke even the cost model.

    A silicon atom is about 0.25 nm in diameter, and gate oxide thicknesses are measured in a handful of atoms. The scales are truly amazing, and to think of us manipulating things on an atomic scale is breathtaking. The technology is astounding. And yet it's commonplace; most everyone in the developed world has a device built with 28 nm or smaller processes.

    How many transistors are in your pocket?

    A few months ago I bought a 128 GB thumb drive for $45. 128 GB means a trillion bits of information. Who could resist tearing something like that apart? It has two memory chips, so each chip stores half a trillion bits. Memory vendors don't say much about their technology, but I haven't seen MLC flash that stores more than three bits per cell, so each of these chips may have over 100 billion transistors, and that assumes there's no logic for row/column addressing or other overhead. Samsung touts its V-NAND devices, which go 3D, but it's frustratingly hard to get much deep technical information about those parts.
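    As a rough sanity check of that estimate, the arithmetic can be sketched in a few lines of Python. The two-chip layout and the three-bits-per-cell figure come from the teardown above; treating each cell as roughly one transistor and ignoring addressing logic are simplifying assumptions, not facts from the teardown.

```python
# Back-of-the-envelope check of the thumb-drive transistor estimate.
# Assumptions: 128 GB capacity, two flash dies, 3 bits stored per cell,
# roughly one transistor per cell (addressing/overhead logic ignored).
capacity_bits = 128 * 10**9 * 8     # ~1.0e12 bits: "a trillion bits"
bits_per_chip = capacity_bits / 2   # two memory chips on the board
cells_per_chip = bits_per_chip / 3  # 3 bits per cell (TLC-style flash)

print(f"bits per chip:  {bits_per_chip:.2e}")   # ~5.1e11, half a trillion bits
print(f"cells per chip: {cells_per_chip:.2e}")  # ~1.7e11, over 100 billion
```

    Even with this crude model, the result lands comfortably above the 100-billion-transistor figure quoted above.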
    Xilinx went 2.5D with their interposer arrangement for FPGAs a few years ago.

    I suspect Moore's Law will have a healthy future. Memory devices are leading the way into 3D architectures. Processors might follow, though it's hard to imagine how the somewhat random interconnects CPUs require between layers will work. Interposer approaches will become more common for high-end (read "very expensive") devices, but I doubt that will scale into consumer-style parts.

    Regardless, a half-century of exponential improvement in a technology is unprecedented in all of human history. My hat is off to all of the device engineers who have made this happen.
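    For a sense of what that half-century of doublings implies, here is a small sketch. The doubling periods are the ones debated at the top of the post; the starting count of "dozens" of transistors is an illustrative assumption, not a figure from the article.

```python
# Growth implied by different claimed doubling periods over 50 years,
# starting from "dozens" of transistors. Purely illustrative numbers.
START_TRANSISTORS = 50  # assumed "dozens"

for years_per_doubling in (1.0, 1.5, 2.0):
    doublings = 50 / years_per_doubling
    factor = 2 ** doublings
    print(f"every {years_per_doubling} yr: {doublings:.1f} doublings, "
          f"~{factor:.1e}x, -> ~{START_TRANSISTORS * factor:.1e} transistors")
```

    The two-year rate is the one that lines up with going from dozens of transistors to billions, which matches Moore's amended prediction.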
  • Popularity 26
    2013-8-11 15:37
    3818 reads
    0 comments
    I usually just roll my eyes and move on when I hear an end-of-the-world story. However, I heard a message at the Design Automation Conference (DAC) a few months ago that still has me thinking. People have been talking about the end of Moore's Law for some time, but those discussions became a lot more urgent and heated at DAC in June.

    Many reasons have been postulated as to why Moore's Law might end, including not being able to overcome some physical limitation, perhaps a design issue that prevents the whole chip from being powered up at the same time. More recently the matter of cost has been raised: it may become so expensive to design a chip at the next node that nobody will be able to afford it. The concern is that, with fewer design starts using the latest technologies and lower chip volumes, manufacturers would not invest in wafer fabs for the next technology.

    I am not sure I fully get behind any of these arguments, but if we do stop making these advances, what really happens? Is there no room for innovation if monolithically integrated devices cannot get more complicated? I am sure that some companies will be affected by this "crisis," since their commercial lead rests on being ahead of the design and fabrication curve rather than on having the best design. Such an end may well transform our industry, but then we cannot expect the ride we have been on for 50 years to continue without some kind of change.

    Robert Colwell, who works for DARPA, said at DAC that the end of Moore's Law would be a US national security threat. This is based on the assertion that if the US does not stay ahead of the rest of the world in computing power and associated technologies, the rest of the world will become as capable as the US and be able to do things without the US government finding out, and it will be able to find out what the US is planning to do. Similar assertions can be, and are, made about weapons, of course.

    My first reaction is a political one. Why can we not spend more time getting along with people, so that this is just not an issue we care about? OK, so I am idealistic, and I understand that some people may not think this is realistic or pragmatic.

    Does innovation die when we cannot create more complex devices? I hope this is not true. I hope that we would find ways to use our knowledge and capabilities in better, more effective ways, exploring different architectures where today we have simply accepted the existing ones because that is easier and faster. What about biological computing, or computers that operate more like the brain, rather than just accepting that binary arithmetic is the way to go?

    So, what happens if the whole world has equal access to technology? Does stability depend on one country having a bigger stick than everyone else?

    Brian Bailey, EE Times