Tag: current

Related posts
  • Popularity 5
    2024-7-9 12:20
    554 reads
    0 comments
Overview

The SiPM readout test system has reached a reasonable level of maturity. The original SiPM model was chosen by the project team; after studying the subject for some time, I now have a fair understanding of why that choice was made. The project now plans to spin off an additional branch on top of the current work, and that branch will need a different SiPM model. Based on my own understanding, this post is a first attempt at working out how to choose the SiPM model to be adopted.

The current SiPM model

The SiPM currently in use is the Hamamatsu S14161-6050HS-04, a 4x4 SiPM array with 16 channels in total. Its electrical specifications are shown in Figure 1.

Figure 1: Specifications of the S14161-series SiPM.

Selection guidance for the new project

The fundamental design goal of the derived project is to improve the coincidence timing resolution. Which aspects of the current system cannot be improved through circuit design alone and instead call for a different SiPM model? Two factors stand out: a higher photon detection efficiency (PDE) and a lower dark current. A new SiPM model cannot fundamentally transform the coincidence resolution; it can only optimize within the existing physical constraints. As Figure 1 shows, the current SiPM has a 50 µm pixel (microcell) size, a 6 mm single-channel size, and a dark current of 2.5 µA typical and 7.5 µA maximum.

Starting from those two guidelines, raising the PDE and lowering the dark current, one can look for a larger pixel size and a smaller channel size. Keeping the other specifications unchanged, this reasoning leads to the candidate part number S14161-4075HS-04 (reading the numbering pattern as a 4 mm channel with a 75 µm pixel). However, no such model in the S14161 series can be found on Hamamatsu's public website.

Understanding the SiPM specifications

Two questions remain about how to read these specifications. First, is the dark current in Figure 1 quoted for the whole array or for a single pixel? Second, is the gain in Figure 1 the gain of the whole array, of a single channel, or of a single pixel? From the MPPC background material on Hamamatsu's website, the gain is calculated as:

Gain = Cp × Vov / q

where Cp is the microcell capacitance, Vov is the overvoltage, and q is the electron charge. Since this formula is defined per microcell, the gain in Figure 1 can presumably be read as the single-pixel gain for the model in question.

Other SiPM considerations

From our inquiries, Hamamatsu appears quite willing to produce customized MPPCs for specific customers. Beyond enlarging the pixel, are there other factors that could serve the goal of raising the PDE? Hamamatsu's website also gives the formula for PDE:

PDE = FF × QE × AP

On the right-hand side, the first term is the fill factor, the second is the quantum efficiency, and the last is the avalanche probability. In theory, a larger pixel size raises the PDE mainly through a higher fill factor, since less of each channel's area is lost to the dead zones between microcells; that is why the guidance above starts by moving from a 50 µm to a 75 µm pixel.

So what is the additional factor explored here? It has nothing to do with the quantum efficiency; it again concerns the fill factor, but at the package level rather than inside a single channel. As shown in Figure 2, the package drawing in the device datasheet reveals a fairly wide "channel" gap between neighboring SiPM channels.

Figure 2: Package drawing of the S14161-6050HS-04.

As the drawing shows, there is a 0.2 mm gap between channels, which is enormous compared with the 50 µm or 75 µm pixel size. This gap necessarily lowers the PDE of the array as a whole, so one idea is to shrink it as far as possible to improve the overall device PDE (a back-of-envelope estimate of the loss, along with a check of the gain formula, appears at the end of this post). The only worry is that the gap may be a limitation of Hamamatsu's packaging process: across all their array devices this dimension is a fixed 0.2 mm, and as Figure 3 shows, the competing part carries the same 0.2 mm figure.

Figure 3: Package drawing of an ON Semi (SensL) SiPM.

As Figure 3 shows, a SiPM array is really just several single-channel SiPM dies packaged side by side. Could one instead package a full, undiced piece of wafer material, of similar overall size, in place of the individually diced dies? That would eliminate the oversized inter-channel gaps; the material would then be divided into the same channels internally by "electrical isolation." That is one line of thought.
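As a rough cross-check of the two formulas above, the sketch below plugs in illustrative numbers. The 150 fF microcell capacitance is an assumed value, chosen only because it reproduces a gain of about 2.5e6 at a 2.7 V overvoltage; it is not a Hamamatsu specification. The geometry numbers come from the package drawing discussed above.

```python
# Back-of-envelope SiPM estimates. Cp is an assumed illustrative value,
# not a figure from the Hamamatsu datasheet.

Q_E = 1.602e-19      # electron charge, C

# Gain = Cp * Vov / q, defined per microcell
cp = 150e-15         # assumed microcell capacitance, F
vov = 2.7            # overvoltage, V
gain = cp * vov / Q_E
print(f"single-microcell gain: {gain:.2e}")          # ~2.5e6

# Package-level fill-factor penalty from the 0.2 mm inter-channel gap
channel = 6.0        # active channel width, mm
gap = 0.2            # inter-channel gap, mm
fraction = (channel / (channel + gap)) ** 2          # active-area fraction
print(f"active-area fraction: {fraction:.1%}")       # ~93.7%
print(f"PDE lost to gaps alone: {1 - fraction:.1%}") # ~6.3%
```

Even if the 0.2 mm gap turns out to be a hard packaging limit, this estimate puts a number on what the "electrical isolation" idea could recover: on the order of 6% of the array-level PDE.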
  • Popularity 27
    2016-4-29 18:04
    1550 reads
    0 comments
With all the attention given to low-power design and products, we tend to forget that there's another world out there with power levels many orders of magnitude greater, and working with them is a radically different world in every respect. A recent article in IEEE Spectrum, "Inside the Lab That Pushes Supergrid Circuit Breakers to the Limit," was a dramatic reminder of the engineering challenges of anything having to do with the electric grid's high-tension power lines in the megavolt/kiloamp regime. Next time you hear engineers moan that their design has to "sip electrons," perhaps you should suggest they read this article; they'll likely stop moaning soon after.

Forget everything you think you know about "electricity" when you are at these levels. The article focuses on the testing of circuit breakers for these power lines, and it's truly another world. There's no need for me to reiterate the article; you can read it yourself. The author's detailed description of what a circuit breaker must do and deal with at these levels is astonishing: we're in the world of plasmas, gas quenching, switching of kiloamps in milliseconds, mechanical contact issues that you won't be able to anticipate, and more. Every aspect of the test setup had to be custom built, as there are no off-the-shelf fixtures for this kind of work. The test facility even had to build a special generating and storage system to supply the power for the tests. Every decision and action requires careful, deliberate thinking and risk assessment.

I'm fascinated by engineers who are not only at the extremes of design along one or more performance or operational parameters, but must also devise and build ways to test their designs. Sometimes, as in the case of the power-grid circuit breakers, a set of operational features works in their favor: the tests are reasonably close to final conditions, they can be repeated as needed under carefully controlled conditions, and changes can be made and the devices retested.

However, not all tests involving systems at high power levels (whether electrical, chemical, or mechanical) have this characteristic. Often, the test process is so complex and difficult to set up or execute that any test/modify/retest cycle is too expensive or time-consuming. The implications of this point were clearly explained in the excellent book "Apollo: The Race to the Moon," where an "interlude chapter" steps back and provides a big-picture overview of the differences between aircraft test and big-rocket test. The authors paraphrase Joe Shea, the Apollo program manager, as saying it didn't matter if they tested the Saturn V rocket six times instead of four, or eight instead of six: statistically, the extra successes (if they were successes) would be meaningless; all they would do is use up pieces of precious, costly hardware (and time and dollars) that could have been used for real missions. A quick sketch of that argument follows below.
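Here is a minimal illustration of Shea's point (my own arithmetic, not from the book), using the standard "rule of three": after n consecutive successes with zero failures, an approximate 95% upper confidence bound on the true failure probability is 3/n.

```python
# Why a few extra all-up tests buy so little statistical confidence:
# after n consecutive successes and no failures, the "rule of three"
# gives ~3/n as a 95% upper confidence bound on the failure probability.

for n_tests in (4, 6, 8):
    upper_bound = 3.0 / n_tests
    print(f"{n_tests} flawless tests -> true failure rate could still be "
          f"as high as ~{upper_bound:.0%}")
```

Going from four flawless tests to eight still leaves the statistically plausible failure rate enormous; only flying real missions accumulates meaningful numbers, which is exactly the argument.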
There are other cases when a project can't be fully tested. The recently published book "The Right Kind of Crazy," about the Mars Curiosity rover mission, discussed the implications of the obvious: many aspects of the design of space-exploration systems must be simulated, assessed, and analyzed to an extreme, because there is no way to replicate some of the real operating conditions. For the Mars mission, one such topic was the parachute that slowed the lander so the "sky crane" could hover and lower the rover to the Martian surface. You can replicate the extreme cold and vacuum of space, but how do you test deploying that parachute at hundreds of miles per hour in the Martian atmosphere and then using retrorockets to stabilize the platform in a low-g environment? The answer is that you can't.

What's your experience with higher voltages and currents? What's the highest power level for which you have had to design? What was your biggest surprise or most memorable example of culture shock?
  • Popularity 21
    2014-11-23 21:47
    1526 reads
    0 comments
When you need to measure the current through a load such as a motor, there are several obvious options: use a Hall-effect device, a transformer (for AC only, of course), or an in-line series sense resistor (often called a "shunt resistor," though the term is misleading, since the resistor is not bypassing the load). Conceptually, the sense resistor is an attractive option: it is inexpensive, can in principle be placed anywhere in the load line, and produces a voltage output based on Ohm's law, V = I × R. What could be simpler?

As with so many other engineering topics, what looks simple at first actually involves tradeoffs and conflicts. That's true for the sense resistor as well, with issues such as the resistor's value, location, and physical installation. Two application notes (found in the References section) make this quite clear. Using a low-value sense resistor is a common technique for measuring load current, but applying even this simple component in such a basic, straightforward role has its subtleties and tradeoffs.

Consider the most obvious parameter: the resistor value. A higher-value resistor develops a larger voltage drop for a given current, and that larger voltage is easier to use as a feedback signal. Whether you digitize it or use it in analog form, the higher voltage provides greater noise immunity and better resolution.

However, that larger value also means increased voltage drop between the rail and common (often referred to as "ground," even when it is not a true earth ground) and less voltage for the load, reducing system performance and efficiency. Further, the resistor itself dissipates power, which wastes available power and means there is more heat to remove from the system; dissipation in the tens of watts is fairly common. Finally, the sense resistor sits within the load-control loop and so affects the loop dynamics, stability, and performance, since it is in that loop but not part of the "real load" the system is driving.

Balancing these factors, most sense resistors are chosen with sub-ohm values, in the milliohm or even sub-milliohm range, to minimize IR drop, I²R self-heating, and load disruption. The corresponding voltage across the resistor is usually about 1 V full scale, meaning that the sensing circuit needs to be designed for good analog response and resolution at relatively low levels.

The low resistance value also has a ripple effect on design, layout, and physical configuration in a way that many engineers may not be used to considering, especially if most of their experience is with resistances in the more familiar kilohm range. At milliohm and lower values for the sense resistor itself, the associated resistances of the PC board, the solder connections (if any), and the placement of the voltage sense-lead pickoffs become significant. Even a few centimeters of PC-board track between the sense resistor and the sense-circuit input may be a significant fraction of the sense-resistor value. There's also the temperature coefficient of resistance of the PC board's copper to consider: ΔR/R₀ = αΔT, where α ≈ 0.00386 per °C for copper. The short sketch below puts numbers on both effects.
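As a rough illustration (my own example numbers, not taken from the TI or ADI application notes), here is the arithmetic for a hypothetical 2 mΩ shunt carrying 20 A, with a short copper trace between the shunt and the sense pickoff point:

```python
# Rough numbers for a hypothetical 2 mOhm shunt carrying 20 A.

I_LOAD = 20.0          # load current, A
R_SENSE = 0.002        # sense resistor, ohms

v_sense = I_LOAD * R_SENSE       # Ohm's law: V = I * R
p_sense = I_LOAD**2 * R_SENSE    # I^2*R self-heating
print(f"sense voltage: {v_sense*1e3:.0f} mV, dissipation: {p_sense:.1f} W")

# Resistance of 2 cm of 1 mm wide, 1-oz (35 um) copper trace; if the
# sense pickoff sits at the far end, this trace carries the load current
# and adds directly to the measured resistance.
RHO_CU = 1.68e-8                 # copper resistivity, ohm*m
length, width, thickness = 0.02, 1e-3, 35e-6   # meters
r_trace = RHO_CU * length / (width * thickness)
print(f"trace resistance: {r_trace*1e3:.1f} mOhm "
      f"({r_trace/R_SENSE:.0%} of the shunt itself)")

# Copper tempco: dR/R0 = alpha * dT, alpha ~ 0.00386 per deg C
ALPHA_CU = 0.00386
delta_t = 40.0                   # example temperature rise, deg C
print(f"trace resistance drift over {delta_t:.0f} C: {ALPHA_CU*delta_t:.1%}")
```

With these numbers the trace is several times the shunt's own resistance and drifts by double-digit percentages with temperature, which is exactly why the application notes dwell on Kelvin (four-wire) connections made directly at the shunt's sense pads.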
I also wonder: if you are interviewing a candidate for a power-related or analog-circuit role, would a good place to start be to ask about an apparently simple subject such as current-sensing options, especially the use of current-sensing resistors? You could then dive deeper into topics such as the pros and cons of high-side versus low-side sensing, or techniques for isolated or differential sensing (often needed, especially with high-side sensing). Perhaps we'll look at those issues in future columns.

What's been your experience with current sensing and sense resistors? Are there other topics that you assumed would be simple, only to be surprised when you dug deeper, did the math, and looked at the topologies?

References
"Choosing the Right Sense Resistor Layout," Texas Instruments.
"Optimize High-Current Sensing Accuracy by Improving Pad Layout of Low-Value Shunt Resistors," Analog Devices.
  • Popularity 20
    2014-11-12 17:13
    1677 reads
    0 comments
Teeth may soon be repaired without the drill, so dreaded by some, thanks to a new technique developed at King's College London.

It is called electrically accelerated and enhanced re-mineralization (EAER), and the inventors claim that it accelerates the natural movement of calcium and phosphate minerals into the damaged tooth. The technique employs a two-step process: first, the damaged area of enamel in the tooth is prepared, and then tiny electric current pulses are used to move the minerals needed for repair into the repair site. It is currently in development, and its developers suggest it could be available for general use in about three years.

To those of us with an interest in non-volatile (NV) memory, the process strikes a chord. The movement of material by electro-migration is a bane to developers of some types of emerging non-volatile memory, and the very driving force for others. The building of links, usually conducting, by electro-migration-driven movement, by electro-chemical means (e.g., plating), by electro-crystallization, or by electric field, in a manner that can be reversed, is now part of the emerging NV memory mix. Those same effects can also appear as reliability problems, shortening the life of some types of memory device as well as other solid-state devices.

The similarity between this emerging dentistry and NV memory technology is illustrated in the two cross sections shown in the figure below. On the right side is a cross section of a generic NV memory with the active material between two electrodes (green). The conducting link required to write the memory to one of its logic states grows from one electrode toward the other.

On the left side of the figure is the cross section of a tooth undergoing repair with the electric pulses. The repaired tooth material grows into the active material. The active material (shown in pink in the tooth cross section) is what must be applied as part of the preparation process described as the first step. It will also need a sort of electrode in contact with it in order to apply the electric pulses. Unlike most non-volatile memory, the material deposited as the repair does not have to be conducting, but the pink preparation material will need to be (unless the process of moving the material is electric-field driven, in which case it could be dielectric and more insulating). In the figure, we have assumed that a ground link to complete the circuit is made through the body and gum.

Clearly, the dentists and their patients do not require the growth to be reversed, so perhaps, in an electronics-industry analogy, this process is more accurately classified as one-time programmable memory (PROM). In any case, it is certainly an interesting development.

A new company named Reminova, based in Perth, Scotland, has been set up to commercialize the research and is seeking private investment to develop the EAER technology. It is the first company to emerge from the King's College London Dental Innovation and Translation Centre, which was set up to take novel technologies and turn them into new products and practices.

Ron Neale is an independent electrical/electronic manufacturing professional.
  • Popularity 25
    2013-12-5 22:07
    1605 reads
    0 comments
We have many choices for energy-harvesting sources, such as localized heat or vibration, but one of the most pervasive possibilities is to grab some of the stray RF field energy that is all around us, from low frequencies into the gigahertz range. Hey, if you don't use it, it truly will go to waste: it will be absorbed by any materials in its path (causing imperceptible but widespread heating) or dissipated into space (perhaps continuing forever on its journey at 3 × 10⁸ meters/second).

That's why a recent development at Duke University looks interesting. It involves the use of highly engineered metamaterials to build a 900 MHz-to-DC transducer, shown in the figure below. The researchers say their device achieves 37% efficiency. That's impressive, especially when you consider that typical solar cells reach around 15% and (for obvious reasons) are not usable 24/7.

Duke engineering students Alexander Katko (left) and Allen Hawkes show a waveguide containing a single power-harvesting metamaterial cell, which provides enough energy to power the attached green LED.

I have a quibble with the Duke University press release about this development. In trying to make the concept tangible to the audience, the writer says that, by using the metamaterial cells in series (see image below), the device was able to produce an output of 7.3 V, which is higher than that of a standard USB port.

This five-cell metamaterial array developed by Duke engineers converts stray microwave energy, as from a WiFi hub, into more than 7 V with an efficiency of 36.8%, comparable to that of a solar cell.

Even though that is factually correct, there's the implication that this metamaterial panel can act as a USB charger or similar power source. That would be nice. But anyone reading this column knows that, even though it could deliver that voltage, the current level would be low; there isn't that much RF energy passing through the capture field. The full technical paper in Applied Physics Letters shows that the researchers produced about 100 mA into a 70-Ω load, which is very impressive.
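It's worth running the quick numbers behind that quibble (my own arithmetic, using only the figures quoted above):

```python
# Quick check on the Duke numbers quoted above.
v_out = 7.3     # reported output voltage, V
r_load = 70.0   # reported load, ohms

i_out = v_out / r_load          # Ohm's law
p_out = v_out**2 / r_load       # delivered power
print(f"current: {i_out*1e3:.0f} mA, power: {p_out:.2f} W")  # ~104 mA, ~0.76 W

# Compare with a basic USB charger (5 V at 1 A):
p_usb = 5.0 * 1.0
print(f"fraction of a 5 W USB charger: {p_out/p_usb:.0%}")   # ~15%
```

A fraction of a watt is real power, but the headline 7.3 V figure by itself says nothing about charging capability; it is the available field energy that sets the limit.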
The 7.3 V tag made me think about the love/hate relationship engineers have with voltage and current, and thus with energy and power (the rate at which energy is delivered). Sometimes the specific value needed is determined by the laws of physics. If you want to ionise a gas (as in a neon tube) or jump a spark gap, you'll need several thousand volts but little current. When you want to do real work, such as driving a motor, you'll want more current to deliver the power, at a higher voltage to reduce I²R losses and increase overall efficiency. By contrast, the voltage and current needed for a smartphone are dictated by ICs designed for very low voltage, due to the imperative of low power consumption. In general, low-single-digit voltages are tough to work with efficiently, not because of resistive losses, but because unavoidable diode drops of 0.6 to 0.8 V can take a big bite out of your available source voltage.

How do you choose the voltage and current values to use? As in most engineering situations, the answer is clear: it depends. For some situations, such as gas ionisation or the smartphone, you have little choice; the numbers are dictated by physics, available components, or industry standards. In other cases, the engineer has the flexibility to choose (within limits). It's a matter of finding the voltage/current combination that works best in terms of power delivery, system efficiency, available components, safety requirements (which kick in at different levels), and cost.

Have you had to analyse and select operating voltage and current levels? How did you come to a decision on balancing the unavoidable trade-offs?