Tag: cooling

Related blog posts
  • Popularity: 28
    2016-1-30 12:25
    1947 reads
    0 comments
    Most electronic, mechanical, and thermal engineers think about how to keep the temperature of their IC or printed circuit board below some maximum allowable value. Others are more worried about the overall enclosure, which can range from a self-contained package such as a DVR to a standard rack of boards and power supplies. (A quick numeric sketch of that junction-temperature arithmetic appears at the end of this post.)

    Basic techniques for getting heat out of an IC, board, or enclosure involve one or more of heat sinks, heat spreaders (PC-board copper), heat pipes, cold plates, and fans; sometimes the design must move up to more-active cooling approaches, including air conditioning or embedded pipes with liquid flow. That's all well and good, but obviously not good enough for the megawatts of a "hyperscale" data center. (If you are not sure what a hyperscale data center is, there's a good explanation here.) While there is no apparent formal standard on the minimum power dissipation needed to be considered hyperscale, you can be sure it's in the hundreds-of-kilowatts to megawatts range.

    But where does all that heat go? Where is the "away" to which the heat is sent? If you're cooling a large data center, that "away" is hard to get to, and doesn't necessarily want to take all the heat you are dissipating.

    A recent market study from BSRIA offered some insight into hyperscale data-center cooling options and trends. I saw a story on the report in the November issue of Cabling Installation & Maintenance, a publication which gives a great real-world perspective on the nasty details of actually running all those network cables, building codes, cabling standards, and more. (After looking through this magazine, you'll never casually say "it's no big deal, it's just an RJ-45 connector" again.)

    BSRIA summarized their report with a four-quadrant graph (below) of techniques versus data-center temperatures to clarify what is feasible and what is coming on strong. Among the options are reducing dissipation via variable-speed drives and modular DC supplies, cooling techniques from liquid cooling to adiabatic/evaporative cooling, and allowing a rise in server-inlet temperature. The graph also shows the growth potential versus the investment level required for each approach; apparently, adiabatic/evaporative cooling is the "rising star."

    Cooling approaches for hyperscale data centers encompass basic dissipation reduction, liquid cooling, and adiabatic/evaporative cooling, according to this analysis from BSRIA Ltd, with the latter a "rising star."

    When you are worried about cooling your corner of a PC board, it's easy to forget that it's not enough to succeed at that goal; you have to think of the next person who will have to deal with the heat you so nicely spirited away. That's why I am often wary of PC-board heat spreaders unless the design has been thermally modeled "across the board," so to speak: they move the heat from your IC to the next one, and so make that person's thermal headache worse.

    Although I know I won't be involved in the design of such hyperscale cooling, I need to learn more about thermal principles, including adiabatic/evaporative cooling. It still hurts that, a very long time ago, when I was told to take an engineering course on "thermal physics" to learn the basics, I was misled. It turned out the course was about the personal thermal lives of atoms and molecules, and had nothing at all to do with "thermal" as engineers need to know it: heat, heat flow, thermal modeling, temperature rise, cooling techniques, and more.
    That sort of "thermal physics" is what Einstein used in one of his five 1905 papers, "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen" ("On the Motion of Small Particles Suspended in a Stationary Liquid, as Required by the Molecular Kinetic Theory of Heat"), but hey, he wasn't worried about cooling a hot component!
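    To make the board-level side of this concrete, here is a minimal back-of-the-envelope sketch of the junction-temperature arithmetic that heat sinks and heat spreaders are meant to improve. The part ratings and thermal-resistance values are hypothetical, chosen only for illustration.

```python
# Back-of-the-envelope junction-temperature check (illustrative values only).
# Steady-state estimate: Tj = Ta + P * theta_ja, where theta_ja is the total
# junction-to-ambient thermal resistance in degrees C per watt.

def junction_temp(t_ambient_c: float, power_w: float, theta_ja_c_per_w: float) -> float:
    """Estimate the steady-state junction temperature in degrees C."""
    return t_ambient_c + power_w * theta_ja_c_per_w

# Hypothetical example: a 3 W device in a 40 degC enclosure, 125 degC junction limit.
no_heatsink   = junction_temp(40.0, 3.0, 35.0)  # bare package, theta_ja = 35 C/W
with_heatsink = junction_temp(40.0, 3.0, 12.0)  # heat sink + airflow, theta_ja = 12 C/W

print(f"Without heat sink: {no_heatsink:.0f} degC")   # 145 degC: over the limit
print(f"With heat sink:    {with_heatsink:.0f} degC") # 76 degC: comfortably below it
```

    The same series-resistance bookkeeping is what makes the "where does the heat go" question matter: lowering the resistance from junction to board only helps if the board, the enclosure, and ultimately the room can pass that heat along.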
  • Popularity: 20
    2012-5-11 15:40
    2379 reads
    1 comment
    By now, you have most likely heard that Embedded Systems Design magazine had its last print issue. That occasion is especially poignant for me, because so much of my career (20-plus years) has been tangled up with the magazine in general, and the Programmer's Toolbox column in particular.

    Some folks have been blessed (or cursed) with careers that are "linear": they start one job, stay with it, move up the ladder, and retire happy. Mine hasn't been that way. It's taken some unexpected twists and turns, some more pleasant than others. Not all of those directions have had anything whatever to do with embedded systems. I thought, however, that you might enjoy hearing about the ones that did. But first, I need to set the stage with a little background.

    Giant brains

    I've been involved with computers for a long time. How long? Here's a hint: the textbook for my first computer science class, in 1956, was entitled Giant Brains, or Machines That Think. Back then, the notion of "micro brains" wasn't even a blip on anyone's radar. Computers were, and we assumed always would be, monster, power-hungry machines that filled large rooms with glass walls, raised floors, and over-engineered cooling systems.

    The computer room of 1960 felt more like a cathedral than a place of science, and it had its share of mysterious icons, rituals, a small army of acolytes, and a hierarchy of priesthood, from floor supervisors to managers to that highest of all high priests, the systems administrator. This was my world for the next decade or so.

    Not that I actually got to enter the computer room, of course. That privilege was reserved for the anointed. We mere engineers and scientists were not welcome. Heck, it was two years before I even saw the computer, and that was from outside those glass walls, looking in. My only contacts with it were the "keypunch girls" who punched my card decks, and the clerk behind the counter who accepted my jobs and returned their results. If, on rare occasions, I interacted with the priesthood, it was in hushed and reverent tones, and with a proper air of respect. I resisted the urge to genuflect.

    Now, when you consider that the purpose of the computer was, after all, to help us scientists and engineers solve our problems, it may seem hard to understand why we customers were treated so shabbily. The explanation has to do with money and bureaucracy. In those days, computer time was expensive: $600 per hour. That's in 1960 dollars, when a Coke cost a nickel, gasoline 25 cents per gallon, and that $600 would pay my salary for six weeks. So the priesthood tended to guard the computer jealously. To them, we were not so much valued customers as necessary evils to be tolerated, however grudgingly.

    The systems administrator was not judged on how many problems were solved, but on his ability to keep the computer backlog down. Backlog, as in the number of jobs waiting in the queue. The easiest way to keep the backlog down was simply to deny access to the job queue, or to abort jobs on the flimsiest of excuses. In one shop, I had jobs rejected because the card deck had too many rubber bands around it. Other times, too few. One computer group actually issued written guidelines for how many rubber bands should be used per inch. The only problem was that the computer operators didn't follow their own guidelines, so a deck they returned to me was likely to be rejected on the next turnaround.

    Despite the oppressive, Big Brother environment, we got exciting things done.
    We did, after all, help Neil and Buzz walk on the Moon. What's more, it was in this environment that I learned my craft and developed techniques that I still use today. I did, however, take one thing away from the experience: a deep and abiding hatred of systems administrators.

    First personal computers

    During those oppressive years, I found a glimmer of hope and a glimpse into the future. I discovered that not all computers had to be large, nor limited in access. Around 1961, I gained access to what we'd now call a personal computer. The Royal McBee LGP-30 was about the size of a desk. A vacuum-tube machine, it had a grand total of 15 flip-flops. Its only memory was a 4K magnetic drum; all the data, even the machine registers, resided there. Bits marched to and from the drum in serial fashion. "Bulk storage" was rolls of paper tape.

    As primitive as the LGP-30 was, it offered important advantages over the Giant Brains. First, I didn't have to beg for permission to use it. No accountant or systems administrator stood behind me, tapping his foot. And though the computer was shared among our team, I had virtually unlimited access to it. Saving computer clock cycles was no longer an issue. Most importantly, I could use the LGP-30 interactively. I'd sit down at its console, a modified electric typewriter, and type. Answers came back in seconds, on the same page. I didn't have to learn and use machine language; the LGP-30 sported a primitive interpreter. If you think of it as a 1960 version of an Apple, with paper tape instead of tape cassettes, you won't be far from wrong.

    Using that computer, I formulated a philosophy that I've held ever since: having virtually unlimited, interactive access to a small computer is infinitely preferable to needing an act of Congress, and a wade through a hierarchy of bureaucrats, to get batch-mode access to a big one. Limited though a small computer might be, a "turnaround" time measured in seconds trumps one measured in hours or days.

    I got a lot of problems solved with that old computer, but my main take-away was a dream. Someday, I vowed, I'd have a computer of my very own. I wouldn't have to justify my use of it to anyone. Its only job would be to sit on my desk, waiting for me to give it something to do. And if I chose to use it frivolously, inefficiently, or not at all, that would be entirely up to me.