Most electronic, mechanical, and thermal engineers think about how to keep the temperature of their IC or printed circuit board below some maximum allowable value. Others worry more about the overall enclosure, which can range from a self-contained package such as a DVR to a standard rack of boards and power supplies. Basic techniques for getting heat away from an IC, board, or enclosure involve one or more of heat sinks, heat spreaders (PC-board copper), heat pipes, cold plates, and fans; a design can sometimes move up to more-active cooling approaches, including air conditioning or embedded pipes with liquid flow.

That's all well and good, but obviously not good enough for the megawatts of a "hyperscale" data center. (If you are not sure what a hyperscale data center is, there's a good explanation here.) While there is no apparent formal standard on the minimum power dissipation needed to be considered hyperscale, you can be sure it's in the hundreds-of-kilowatts to megawatts range. But where does all that heat go? Where is the "away" to which the heat is sent? If you're cooling a large data center, that "away" is hard to get to, and doesn't necessarily want to take all the heat you are dissipating.

A recent market study from BSRIA offered some insight into hyperscale data-center cooling options and trends. I saw a story on the report in the November issue of Cabling Installation & Maintenance, a publication which gives great real-world perspective on the nasty details of actually running all those network cables, building codes, cabling standards, and more. (After looking through this magazine, you'll never again casually say "it's no big deal, it's just an RJ-45 connector.")

BSRIA summarized its report with a four-quadrant graph (below) of techniques versus data-center temperatures to clarify what is feasible and what is coming on strong. Among the options are reducing dissipation via variable-speed drives and modular DC supplies, cooling techniques from liquid cooling to adiabatic evaporative cooling, and allowing a rise in server-inlet temperature. The graph also shows the growth potential versus the investment level required for each approach; apparently, adiabatic/evaporative cooling is the "rising star."

Cooling approaches for hyperscale data centers encompass basic dissipation reduction, liquid cooling, and adiabatic/evaporative cooling, according to this analysis from BSRIA Ltd, with the latter a "rising star."

When you are worried about cooling your corner of a PC board, it's easy to forget that it's not enough to succeed at that goal; you have to think of the next person who will have to deal with the heat you so nicely spirited away. That's why I am often wary of PC-board heat spreaders, unless the design has been thermally modeled "across the board," so to speak: they move the heat from your IC to the next one, and so make that engineer's thermal headache more difficult.

Although I know I won't be involved with the design of such hyperscale cooling, I need to learn more about thermal principles, including adiabatic/evaporative cooling. It still hurts that, a very long time ago, when I was told to take an engineering course on "thermal physics" to learn the basics, I was misled. It turned out the course was about the personal thermal lives of atoms and molecules, and had nothing at all to do with "thermal" as engineers need to know it: heat, heat flow, thermal modeling, temperature rise, cooling techniques, and more.
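For the record, the first-order version of the "thermal" that engineers need is mostly Ohm's-law-style arithmetic on thermal resistances. A minimal worked sketch, with all values invented purely for illustration:

$$T_J = T_A + P \cdot \theta_{JA}$$

Assume a part dissipating $P = 5\,\mathrm{W}$ with a junction-to-ambient thermal resistance of $\theta_{JA} = 20\,^{\circ}\mathrm{C/W}$ in $T_A = 50\,^{\circ}\mathrm{C}$ ambient air:

$$T_J = 50 + (5 \times 20) = 150\,^{\circ}\mathrm{C}$$

That result sits right at the absolute-maximum junction rating of many ICs, which shows how quickly a thermal budget disappears, and why heat sinks and airflow (which lower $\theta_{JA}$) matter so much.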
In contrast, "thermal physics" is what Einstein used in one his is five 1905 papers, " Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen " ("On the Motion of Small Particles Suspended in a Stationary Liquid, as Required by the Molecular Kinetic Theory of Heat"), but hey, he wasn't worried about cooling a hot component!