Tag: law

Related blog posts
  • Popularity: 21
    2015-5-29 18:40
    1617 reads
    0 comments
Last April, Moore's Law celebrated its 50th anniversary.

No one seems to be sure what Moore's Law is. Moore originally predicted the number of transistors on a chip would double every year; later he amended that to two years. Some now claim the law sets an 18-month rate. Others say the law predicts that system performance will double every 18 months (or two years, depending on who is making the claim).

The mainstream press is clueless about the "law." Tom Friedman, of The World is Flat fame, seems to think it really is a law, rather than an observation and an aspiration. Gordon Moore's 1965 paper showed it to be a historical pattern rather than a law of nature. And it's an aspiration in that leading semiconductor vendors use it as a guide for their products. For instance, Intel plans for a "tick" (process shrink) or a "tock" (new microarchitecture) every 12-18 months.

I can't find what process geometry was used in 1965 when Moore made his prediction, but remember that few ICs had more than dozens of transistors at the time. I thought the biggest part of that era might have been the 74181 ALU, which implemented about 75 gates, but it appears that part didn't come out until five years later. Given that we've gone from a handful of transistors to billions on an IC, Moore's prediction certainly was remarkable and prescient.

Is it coming to an end? For 30 years naysayers have been claiming that process scaling is doomed, but the incredible engineers designing these parts continue to astound.

Gordon Moore thinks the law will peter out in the next decade or so. Intel thinks it's good for at least another decade. Yet in 2003 Intel predicted an end to the law shortly after hitting the anticipated 16 nm node. Today it seems we're on track for 10 nm fairly soon, with 7 nm not far off. (There is some marketing fuzz in the definition of "node," and it has been a long time since that term had much to do with the size of a transistor.)
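The competing doubling periods compound very differently over 50 years. A quick back-of-envelope sketch (the starting count and the list of periods are illustrative assumptions, not Moore's actual 1965 figures):

```python
# Back-of-envelope: how a fixed doubling cadence compounds over 50 years.
# The starting count and doubling periods are illustrative assumptions.

def transistors(start_count, years, doubling_period_years):
    """Project a transistor count forward under a fixed doubling cadence."""
    return start_count * 2 ** (years / doubling_period_years)

start = 64   # hypothetical mid-1960s chip with a few dozen transistors
span = 50    # 1965 to 2015

for period in (1.0, 1.5, 2.0):
    count = transistors(start, span, period)
    print(f"doubling every {period} years -> {count:.3g} transistors")
```

At the two-year cadence the hypothetical 64-transistor chip grows to roughly 2 billion transistors, the right order of magnitude for a 2015 processor; the one-year cadence overshoots by many orders of magnitude, which is why the quoted period matters so much.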
In a sense Moore's Law, or at least many of the benefits it brings, ended years ago around the 90 nm node when Dennard scaling fell apart. In 1974 Robert Dennard noted that as geometries shrink we get all sorts of goodness, like higher clock rates and lower power. Many of the benefits he described no longer come with increasing densities. Today the law does give us more transistors per unit area, which translates into better performance and a lower cost per gate. However, some believe that 20 nm broke even the cost model.

A silicon atom is about 0.25 nm in diameter. Gate oxide thicknesses are measured in a handful of atoms. The scales are truly amazing, and to think of us manipulating things on an atomic scale is breathtaking. The technology is astounding. And yet it's commonplace; most everyone in the developed world has a device built with 28 nm or smaller processes.

How many transistors are in your pocket?

A few months ago I bought a 128 GB thumb drive for $45. 128 GB means about a trillion bits of information. Who could resist tearing something like that apart? It has two memory chips, so each chip stores a half-trillion bits. Memory vendors don't say much about their technology, but I haven't seen MLC flash storing more than three bits per cell. That means each of these chips may have over 100 billion transistors, and that assumes there's no logic for row/column addressing or other overhead. Samsung touts its V-NAND devices, which go 3D, but it's frustratingly hard to get deep technical information about those parts. Xilinx went 2.5D with its interposer arrangement for FPGAs a few years ago.

I suspect Moore's Law will have a healthy future. Memory devices are leading the way into 3D architectures. Processors might follow, though it's hard to imagine how the somewhat random interconnects CPUs require between layers will work. Interposer approaches will become more common for high-end (read "very expensive") devices, but I doubt that will scale into consumer-style parts.
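The thumb-drive arithmetic is easy to sanity-check. A minimal sketch, assuming two dies and three bits per cell as described above (decimal gigabytes, as storage vendors use):

```python
# Sanity-check the 128 GB thumb-drive arithmetic from the post.
GB = 10 ** 9                         # storage vendors use decimal gigabytes

capacity_bits = 128 * GB * 8         # about 1.02 trillion bits
bits_per_chip = capacity_bits // 2   # two memory dies were found inside
cells_per_chip = bits_per_chip // 3  # assuming 3 bits stored per cell

print(f"total bits:     {capacity_bits:,}")
print(f"bits per chip:  {bits_per_chip:,}")
print(f"cells per chip: {cells_per_chip:,}")
```

Even at three bits per cell, each die needs well over 100 billion storage cells, before counting any addressing or control logic.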
Regardless, a half-century of exponential improvement in a technology is unprecedented in human history. My hat is off to all of the device engineers who have made this happen.
  • Popularity: 16
    2015-3-19 22:43
    1168 reads
    0 comments
As I mentioned in an earlier blog, my chum David Ashton from Australia tells me that his local Microchip Technology representative includes a "Joke of the Month" in his communications.

David often forwards these jokes to me. I particularly like the funny ones (LOL). The last one was titled Murphy's Laws, Part 1, which sparked my blog Expanding Murphy's Law.

Well, I just received an email from David with the subject line: Murphy's Laws, Part 2 (or "Part Deux" for the French speakers amongst our number). The contents of this email were as follows:

Murphy's Laws: Part 2
1. You can never tell which way the train went by looking at the track.
2. Logic is a systematic method of coming to the wrong conclusion with confidence.
3. All great discoveries are made by mistake.
4. Nothing ever gets built on schedule or within budget.
5. A meeting is an event at which minutes are kept and hours are lost.
6. A failure will not appear in a unit until after it has passed final inspection.
7. Some people manage by the book, even though they don't know who wrote the book or even which book.
8. The primary function of the design engineer is to make things difficult for the fabricator and impossible for the serviceman.
9. If there is a possibility of several things going wrong, the one that will cause the most damage will be the one to go wrong.
10. Matter will be damaged in direct proportion to its value.

Several of these brought a wry smile to my face, but the one that really struck a chord was #7. How about you? Have you had any real-life experiences that directly map onto one of these "laws"? And can you suggest any additional items we should add to the list?
  • Popularity: 24
    2015-2-11 20:57
    1129 reads
    0 comments
My friend David Ashton from Australia tells me that his local Microchip Technology representative includes a "Joke of the Month" in his communications. David was kind enough to share this month's chuckle with me as follows:

Murphy's Law: Part 1
1. Nothing is as easy as it looks.
2. Everything takes longer than you think.
3. Anything that can go wrong will go wrong.
4. If there is a worse time for something to go wrong, it will happen then.
5. If anything simply cannot go wrong, it will anyway.
6. If you perceive that there are four possible ways in which a procedure can go wrong, and circumvent these, then a fifth way, unprepared for, will promptly develop.
7. Left to themselves, things tend to go from bad to worse.
8. If everything seems to be going well, you have obviously overlooked something.
9. It is impossible to make anything foolproof because fools are so ingenious.
10. The light at the end of the tunnel is only the light of an oncoming train.

Now, I always thought that the namesake of Murphy's law was Capt. Ed Murphy, a development engineer from the Wright Field Aircraft Lab. But I was just reading this Wikipedia entry, which notes that: "The perceived perversity of the universe has long been a subject of comment, and precursors to the modern version of Murphy's law are not hard to find."

With regard to Capt. Ed Murphy's claim to fame, Wikipedia goes on to say: "According to the book A History of Murphy's Law by author Nick T. Spark, differing recollections years later by various participants make it impossible to pinpoint who first coined the saying Murphy's law."

Now, do you recall my blog The 10 Commandments of Electronics? If so, you will remember that we had a lot of fun augmenting it with our own offerings. So let's do the same for Murphy's law -- can you think of any additional items that should be added to the list above?
  • Popularity: 31
    2013-8-11 14:59
    4945 reads
    1 comment
Usually, when I hear a doomsday story, I just roll my eyes and move on. However, I heard a message at the Design Automation Conference (DAC) this year that still has me thinking. People have been talking about the end of Moore's Law for some time, but those discussions became a lot more urgent and heated at DAC in June.

Many reasons have been postulated as to why Moore's Law might end, including our not being able to overcome some physical limitation, perhaps a design issue that prevents the whole chip from being powered up at the same time. More recently the matter of cost has been raised: it may become so expensive to design a chip at the next node that nobody will be able to afford it. The concern is that, with fewer design starts using the latest technologies and lower chip volumes, manufacturers would then not invest in wafer fabs for the next technology.

I am not sure I fully get behind any of these arguments, but if we do stop making these advances, what really happens? Is there no room for innovation if monolithically integrated devices cannot get more complicated? I am sure that some companies will be affected by this "crisis," as their commercial lead is contingent on being ahead of the design and fabrication curve rather than on having the best design. Such an end may well transform our industry, but then we cannot expect the ride we have been on for 50 years to continue without some kind of change.

Robert Colwell, who works for DARPA, said at DAC that the end of Moore's Law would be a US national security threat. This is based on the assertion that if the US does not stay ahead of the rest of the world in terms of computing power and associated technologies, then the rest of the world will become as capable as the US, able to do things without the US government finding out, and able to find out what the US is planning to do. Similar assertions can be, and are, made in terms of weapons, of course.
My first reaction is a political one. Why can we not spend more time getting along with people, so that this is simply not an issue we care about? OK, so I am idealistic, and I understand that some people may not think this is realistic or pragmatic.

Does innovation die when we cannot create more complex devices? I hope not. I hope that we would find ways to use our knowledge and capabilities in better, more efficient ways, exploring different architectures where today we have simply accepted the ones in existence because that is easier and faster. What about biological computing, or computers that operate more like the brain, rather than just accepting that binary arithmetic is the way to go?

So, what happens if the whole world has equal access to technology? Does stability depend on one country having a bigger stick than everyone else?

Brian Bailey
EE Times
Related resources
  • E-coins required: 3
    Date: 2019-12-24 17:06
    Size: 77.19KB
    Uploader: rdg1993
    Abstract: This application note starts by explaining Faraday's Law. The note then shows how a voltmeter and connecting wires serve as a positional sensor. A rail-to-rail I/O comparator and a few simple components form an amplitude detector that compares the sine wave amplitude with a DC voltage. The MAX985 rail-to-rail I/O comparator is featured in the design.
    Maxim > App Notes > General engineering topics > Measurement circuits > Miscellaneous circuits
    Keywords: Faraday's Law, voltmeter, position sensor, amplitude detector, sine wave amplitude
    Sep 22, 2003, APPLICATION NOTE 2238: Position sensor exploits Faraday's Law
    "Consider two resistors R1 (1 kΩ) and R2 (3 kΩ), connected in parallel as shown in Figure 1. A time-varying magnetic field H, increasing linearly with time, ..."
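Faraday's Law, which the application note builds on, says the EMF induced around a loop equals the negative rate of change of the magnetic flux through it. A minimal numeric sketch with made-up values (the loop area and ramp rate are illustrative assumptions, not taken from the Maxim note):

```python
# Faraday's Law: EMF = -dPhi/dt, where Phi = B * A for a uniform field B
# perpendicular to a loop of area A. All values below are illustrative.

def induced_emf(area_m2, dB_dt_tesla_per_s, turns=1):
    """EMF induced in a loop by a uniformly ramping magnetic field."""
    return -turns * area_m2 * dB_dt_tesla_per_s

area = 0.01   # a 10 cm x 10 cm loop
ramp = 2.0    # field increasing linearly at 2 T/s

print(f"induced EMF: {induced_emf(area, ramp):.3f} V")
```

Because the field ramps linearly, the induced EMF is constant (-0.02 V here) for as long as the ramp lasts, which is what lets a plain voltmeter and its connecting leads act as a position sensor.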