Tag: storage

Related posts
  • Popularity 20
    2014-11-14 20:53
    2168 reads
    0 comments
    PCIe-attached flash storage has been used to increase the performance of high-end servers for years, notably pioneered by Fusion-io. Today, NVM Express (NVMe) is becoming the preferred interface for flash storage and the all-flash datacenter. As NVMe adoption increases, other elements of usability will allow this technology to extend its reach beyond the datacenter, including its use as a boot device.

    In order to increase the usability of NVMe devices, the ability to boot from an NVMe device is an important step. For enterprise devices, boot performance and timing aren't critical: since these products are intended to be up 24/7, boot time is not a major concern. However, while NVMe is being introduced first in the enterprise space, its sights are set on the client and mobile space (hence the emphasis on the M.2 connector), and in those product segments boot time is critical. Thus, UNH-IOL, in partnership with the NVM Express Organization, has added an OS boot interoperability test to the NVMe Interoperability Test Suite.

    Although the test is optional today, in the future it will likely become a mandatory part of the NVMe Integrators List qualification. Booting from an NVMe device requires users to use UEFI instead of BIOS. What is UEFI? More information can be found at UEFI.org. For our purposes, think of UEFI as a modern replacement for BIOS. While NVMe has had driver support in Linux and Windows for some time, only recently was NVMe support added to UEFI. BIOS support for NVMe is nonexistent, hence the need to use UEFI. With a UEFI NVMe driver, a system can be configured to boot from an NVMe device: the UEFI Boot Manager is configured to look for the OS on the NVMe device, and during the bring-up sequence the UEFI driver hands off to the OS driver (either Linux or Windows).

    If you have access to an NVMe device and a system that supports UEFI, you can follow our test procedure to boot from the NVMe device.
    (See PDF of Test 1.5 in the NVMe Interoperability Test Suite.)

    Again, boot testing will be offered à la carte, or as an FYI test within the Interop Test Plan, at the third NVMe Plugfest the week of November 10, hosted by the UNH-IOL. We encourage those attending to participate in this testing, since it will be mandatory at future plugfests.

    David Woolf
    UNH-IOL NVMe Consortium Lead
  • Popularity 22
    2014-3-28 14:52
    1938 reads
    0 comments
    In his article Preserving Data Books From Yesteryear, Aubrey Kagan related how he painstakingly scanned data books and made the information searchable. That's great, as long as he has the hardware and software to read the files. If someone finds his designs 200 years on and wonders how they worked, will his digitized data books and schematics be of any help?

    In 1998, I wrote an article for Test & Measurement World called "Arrange Test Benches More Efficiently," for which Kagan sent me a photo of how he used a computer monitor arm to hold an oscilloscope. You can read the PDF of the article in the link above because I still have my print copy to scan, as does Kagan. In fact, I have every print edition of TMW from August 1992 through to the end in 2011. Those print copies could outlast all of the electronic versions that are now on EDN. In 200 years, someone will know how electronic products were tested, because reading them won't require machine intervention.

    Aubrey Kagan used this computer monitor arm to hold his oscilloscope in 1998.

    Last week, I read an article in IEEE Spectrum about the movies and how they've been stored for the last century: on film. The beauty of film is that it can last 100 years, and all you need is a film projector. But movies are going digital, at least in the short term. For the long term, the article says, film is still the best option.

    On Saturday, March 8, The Boston Globe posted an article about how people are giving up on safe-deposit boxes for storing documents and valuables. They're scanning documents and storing them in the cloud; valuables such as jewellery are going into home safes. A local bank reported that safe-deposit boxes are in less demand. But the article noted that when one cloud service went out of business, the company gave just 24 hours' notice for people to download their documents. Many were gone forever unless other copies remained.

    These instances of storage made me think about how test and calibration data are stored. Granted, you probably don't need to keep test data for 100 years, but you may need a way to gain access to it many years after your company started (or stopped) producing a product. The big problem with storing any kind of data for the long haul is: will there be a machine that can read the data? Systems and file formats change over time, and you may find yourself transferring data from one old format to another. I've done that more than once. For example, I once converted documents from MultiMate to MS Word. Both formats use the .doc file extension, but they are incompatible.

    If you have engineering or manufacturing records that are more than perhaps 10 years old, can you still read the data? If yes, then for how much longer? What about documents that were originally stored as hard copies? Do you scan them and upload them to the company network? If you destroy the paper records, you'd better make sure that you'll be able to read the scans on some machine and that they're stored in more than one place. Hard drives are disasters waiting to happen. Therein lies the beauty of hard-copy records.

    Remember, portions of the Dead Sea Scrolls survived in a cave for centuries. Once they were recovered, people were able to read them because the language had survived. They required no machine to see the words, though having digital images means we can see the artifacts from anywhere. Will the original scrolls outlast their digital copies?

    The Dead Sea Scrolls survived for centuries in a cave, and they just may outlive their digital images.

    Of course, digitized test data is, at least in the near term, far more valuable than paper records. You can analyse digital data and discover things about designs and their manufacture that are impossible to achieve with paper copies. You can also easily keep copies of the data in more than one place in case of a disaster.

    How long will it take for this iPad to become nothing more than a lighted serving tray?
    Martin Rowe is Senior Technical Editor at EE Times.
  • Popularity 24
    2013-8-22 19:29
    1875 reads
    1 comment
    The changes in the way we do computing and where our data gets stored seem inevitable. With what we read these days about cloud computing, the changes will take place whether we like it or not. Perhaps I am an old fuddy-duddy, but I have been resisting the calls to put my data online. It just does not make sense to me, for reasons of convenience and cost.

    I can buy a 1-terabyte disc for quite a bit under $100 these days, and I can make it accessible to all of my devices while at home, and through a web interface when I want to access my data from anywhere else. For another $100, I can protect that data by putting it in a RAID array. Admittedly, if I have a fire in my house, I could lose the data, and I would love to have an offsite disc at a friend's house to give me that extra layer of protection. Unfortunately, the software is not in place to make that easy. So, for about $400 total, I can have one terabyte of reasonably secure storage, and assuming a disc life of five years, that equates to under $100 per year, plus a little bit for electricity to run them.

    To compare, I looked at cloud storage solutions. I am using Rackspace as an example because they have clear pricing; I have no idea whether they are the cheapest or best solution out there. They charge 10¢ per gigabyte per month. So, for one terabyte, it's $100 per month: over 10 times the cost. And in the place I use it the most, at home, it's going to be more than 10 times slower. But that is not the end of the cost equation; you also have to pay 12¢ per gigabyte to access your data. So, assuming I only access a quarter of my files once per month, that is another $30 per month.

    What do I do that requires this amount of storage? I am a photographer, and my library of photos exceeds one terabyte. Who knows how often my Lightroom application would access files when I start up? OK, so let's say I have lost all of my marbles and decide to put my photos in the cloud. What about security and privacy?

    Let me start with privacy. While I don't believe that I take any photos that are illegal, what if they are and I don't know it? Have I given up any rights by placing them with a third party? We've recently heard about the sweeping spying carried out by the US government through the National Security Agency's programs, and evidence of foreign countries gaining access to secure data within this country. I am sure that none of this is particularly unique to the US; it's going on everywhere, it just gets talked about more openly here. Could it be that governments are really the ones who want cloud storage to become ubiquitous, so that they have access to more people's private data?

    And lastly, there are issues of security. I am sure that Rackspace is a stable company that is not going to disappear overnight, but that may be a concern with some of the lower-cost providers. Rackspace also says that all data is written to three separate disks on three different servers, each of which has a dual power supply. Sounds nice and secure, but what happens if the data gets corrupted or lost? What liability do they have? Well, I couldn't find an answer to that anywhere on the website, but I wouldn't mind betting that they limit their liability to the cost of the storage.

    So, am I painting an overly negative picture of cloud storage? I would love to be persuaded that this might eventually be a good thing to do. What concerns do you have?

    Brian Bailey
    EE Times
  • Popularity 28
    2011-3-14 16:38
    2293 reads
    0 comments
    This is going to be a different kind of article for me. As I was planning it, I considered several possible topics. I couldn't decide which one to cover, so I decided to cover them all. You may find some of them incomplete; I'm giving you my thoughts as I'm having them. I welcome your thoughts and comments.

    Vector/Matrix class redux

    Whenever I set out to (re)create a vector/matrix math package, I always wrestle with the decision: should I include separate functions for the special case where the vectors are dimensioned 3, and the matrices, 3×3? Clearly, classes that can handle general, n-dimensional vectors and m×n matrices can also handle the case m = n = 3. On the other hand, it's certain that if we know the dimensions of the objects in advance, the code will be a little faster and more compact.

    While it's important to be able to deal with general forms of the mathematical entities called vectors and matrices, there's a good argument for the specialized functions as well. We live, after all, in a three-dimensional universe (unless your field is string theory). Most of the computations I do, in space mechanics, rocket and aircraft dynamics, robotics, and dynamic simulations, involve the motions of real, physical bodies in this three-dimensional space. It's the discipline called dynamics, whose math is defined by Newton's laws of motion.

    The mathematical entities called vectors and matrices weren't exactly invented specifically to deal with physics problems, but they might as well have been. The simplifications that result from their use in physical problems are so profound, it's hard to over-emphasize the point.

    That being the case, I'm always tempted to define special computer functions to process 3-d vectors and matrices. Usually I succumb to the temptation. When I was developing the C++ classes called Vector and Matrix, I wrestled with the same decision. For the sake of brevity, I chose to omit the special 3-d case.
    For the column, I was focused more on showing you the principles than on defining a production-quality library of functions. However, the argument for the special case is even stronger than usual when defining C++ classes. That's because my general-case solutions required dynamic memory allocation.

    If you declare all named objects statically, you can move most of the memory allocations to the initialization phase of the program. We might do this, for example, when writing flight software for a satellite or missile. But if you make use of those lovely overloaded operators, you can't avoid the creation of temporary objects, and the overhead to construct and destruct them.

    Finally, there's the issue of the vector cross product, which only works for 3-vectors. In our n-vector class, I had to include tests to make sure that both vectors were, in fact, dimensioned 3. That's not a problem in the special case. (Optional for extra credit: some of us gurus in the know, know that the cross product isn't really limited to 3-vectors. There's a 4-d cross product as well. If you're "one of us," drop me a line. Say the magic word, "quaternion," and I'll send you the Secret Decoder Ring Tone.)

    If we know in advance that all the vectors will be dimensioned 3, and the matrices, 3×3, the double dose of dynamic allocation goes away. The situation with respect to matrices is even nicer. If you followed the creation of the Matrix class, you'll recall that we struggled with the issue of allocating matrices with arbitrary dimensions. Because C/C++ doesn't support conformant arrays, we can't just declare the array data in the form:

    double x[n][m];

    Instead, we had to resort to defining a single-dimensioned array and generating our own code to index into it. This isn't a problem for the 3×3 case. We simply declare the array:

    double x[3][3];

    and let the compiler use its own indexing, presumably optimized out the gazoo.
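The indexing trade-off described above can be sketched in a few lines. This is an illustration under hypothetical names (MatrixN, Matrix3), not the column's actual Matrix class:

```cpp
#include <cstddef>
#include <vector>

// General case: dimensions known only at run time, so storage is one
// flat, dynamically allocated array and we compute the index ourselves.
struct MatrixN {
    std::size_t rows, cols;
    std::vector<double> data;
    MatrixN(std::size_t r, std::size_t c) : rows(r), cols(c), data(r * c, 0.0) {}
    double& at(std::size_t i, std::size_t j) { return data[i * cols + j]; }
};

// Special case: dimensions fixed at 3x3. The compiler generates the
// indexing, and no heap allocation is needed at all.
struct Matrix3 {
    double x[3][3] = {};
    double& at(std::size_t i, std::size_t j) { return x[i][j]; }
};
```

With MatrixN, every element access costs a multiply-and-add computed at run time; with Matrix3, the compiler can fold the offsets directly into the generated code.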
    Recently, I had occasion to do a lot of computations in the 3-d universe. So I succumbed to the temptation and wrote a new C++ class called ThreeVec. As the name implies, the class is specialized to the case of 3-d vectors. I'm going to show it to you, but in a rather unconventional way. Instead of walking you through all the design decisions yet again, I'm simply going to post the files on Embedded.com. I think you'll find the differences between the n-vector and 3-vector cases straightforward enough that you won't need me leading you by the hand. Please let me know if you need more help.

    Do something ...

    For the same project, I also need a class for 3×3 matrices. I'll tell you about it soon, but first I have a confession to make. I have a character flaw (No! Really?). I'm basically an optimizing sort of fellow. Before I do anything, I tend to look at the options, weigh them carefully, and choose the one that's, in some sense, optimal. I'm the guy who tries to guess which lane I should be in at a traffic light. There's no malice involved, no jamming my way in front of someone else. But given a choice, I'll choose the option that's most likely to get me where I'm going sooner, with fewer hassles.

    This tendency to optimize can be both a blessing and a curse: a blessing in that the code I generate is usually pretty good, a curse because it takes me longer to produce it. Because I'm looking for the best solution, I can never keep up with the software wizard who simply codes the first thought that comes to him.

    In another lifetime, I built and raced midget cars. My kids also raced, and I was their chief mechanic. Naturally, optimizing the route to the front is a good habit for a driver, and I was pretty good at it. But optimization is also good when deciding how best to tune or modify the car. When it came time to make a modification, I'd find myself spending considerable time trying to optimize the design.
    The design optimization process goes something like this:

    while (1) {
        if (option(a) > option(b)) return a;
        if (option(b) > option(a)) return b;
    }

    The problem, of course, is that there's no predicate for equality. If I can't find a clear reason to choose one option over the other, I'm paralyzed into inaction, stuck in an infinite loop.

    This sometimes happened when I was planning a new modification. I'd sit in the garage, holding the parts I could use, and literally weighing my options. I'd look at the parts in one hand and see advantages in using them, but also disadvantages. I'd look at the other and see a different set of pros and cons. I could find no clear winner.

    Fortunately, my decision-making system had a priority interrupt. Not hearing any work being done, my wife would open the door behind me. She'd shout, "Do something, even if it's wrong!!!" and slam the door again. This interrupt was usually enough to jog me out of the loop.
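For readers who can't wait for the posted files, here is a minimal sketch of what a specialized 3-vector class like the ThreeVec described above might look like. This is my illustration under that name, not the author's actual code:

```cpp
// A fixed-size 3-vector: no dynamic memory allocation, no temporaries
// on the heap, and no run-time dimension checks.
struct ThreeVec {
    double x, y, z;
};

ThreeVec operator+(const ThreeVec& a, const ThreeVec& b) {
    return {a.x + b.x, a.y + b.y, a.z + b.z};
}

double dot(const ThreeVec& a, const ThreeVec& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Cross product: well-defined for 3-vectors, so unlike the general
// n-vector class, no dimension test is needed.
ThreeVec cross(const ThreeVec& a, const ThreeVec& b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}
```

Because the dimensions are fixed at compile time, the cross product needs no guard clause at all, which is exactly the simplification argued for above.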
相关资源
  • 所需E币: 1
    时间: 2023-6-28 11:13
    大小: 372.04KB
    上传者: 张红川
    STM32USBMassStorage学习资料.pdf
  • 所需E币: 1
    时间: 2022-7-23 11:24
    大小: 1.22MB
    上传者: Argent
    CompactFlashDataStorage
  • 所需E币: 0
    时间: 2022-3-4 20:42
    大小: 337.63KB
    上传者: samewell
    KuduStorageforFastAnalyticsonFastData.pdf
  • 所需E币: 5
    时间: 2019-12-26 00:34
    大小: 365.96KB
    上传者: rdg1993
    MassStorage协议有关文档……
  • 所需E币: 5
    时间: 2020-1-6 12:05
    大小: 646.91KB
    上传者: quw431979_163.com
       ToshibaNANDFlashStorageProducts……
  • 所需E币: 4
    时间: 2020-1-6 12:13
    大小: 100.25KB
    上传者: 微风DS
    MemoriesandStorageDevices……
  • 所需E币: 4
    时间: 2020-1-6 12:42
    大小: 80.32KB
    上传者: 238112554_qq
    StratixGXinStorageApplications……
  • 所需E币: 5
    时间: 2019-12-24 23:03
    大小: 69.94KB
    上传者: 16245458_qq.com
    P89LPC900微控制器系列中的精简版ISP,它可以用来擦除和编程微控制器的代码闪存。AN10342UsingLPC900codeFlashasdatastorageRev.01―13December2004ApplicationnoteDocumentinformationInfoContentKeywordsLPC900,FlashDatastorage,nonvolatilememoryAbstractTheP89LPC900familyofmicrocontrollershaveInApplicationProgrammingLite,whichcanbeusedtoeraseandreprogramthecodeFlashofthemicrocontroller.PhilipsSemiconductorsAN10342UsingLPC900codeFlashasDatastorageRevisionhistoryRevDateDescription0120041213……
  • 所需E币: 4
    时间: 2019-12-24 19:07
    大小: 47.52KB
    上传者: 二不过三
    摘要:一个安全的USB“垂死的喘息”关机需要一个本地的储能电容,电源设备,并延长运作有序关闭。一个5VUSB“垂死的喘息”应用程序中的电路提供电源失效输出一个2.88ms的预警时间。特色的MAX1970/MAX1972降压与PFO的监管。Maxim>AppNotes>BatteryManagementPower-SupplyCircuitsKeywords:dyinggasp,USB,power-failwarning,power-failoutput,PFO,PFOthreshold,DC-DCconverter,DC-DCconverterefficiency,Dec18,2001energystoragecapacitor,step-downregulatorAPPLICATIONNOTE891Dual1.4MHzSynchronousBuckRegulatorSupportsUSBDyingGaspApplicationsAbstract:AsafeUSB"dyinggasp"shutdownrequiresalocalenergystoragecapacitortopowerthedeviceandprolongoperationfororderlyshutdown.Acircuitfora5VUSB"dyinggasp"applicationprovidesapower-failoutputwarningtimeof2.88ms.TheMAX1970/MAX1972step-downregulatorswithPFOarefeatured.Manyportableuniversalserialbus(USB)devicesarepoweredfromtheUSBport.Mostofthesedevicesneedapowerfailwarningto……
  • 所需E币: 3
    时间: 2020-1-9 18:08
    大小: 512.58KB
    上传者: 238112554_qq
    MN_Data_Storage_Management_Module_Interface_User_Guide……
  • 所需E币: 5
    时间: 2020-1-10 12:04
    大小: 359.08KB
    上传者: 16245458_qq.com
    ls_wai_mass_storage,ls_wai_mass_storage……
  • 所需E币: 4
    时间: 2020-1-15 09:47
    大小: 505.14KB
    上传者: 2iot
    DigitalStorageOscilloscopeDigitalStorageOscilloscopeThomasGrocuttApril2000AcknowledgementsIwouldliketothankmyprojectsupervisorMilosKolarforhistime,help,andencouragement,PhilipCupittforhispreviousworkandforprovingthebasicconcepts,andthetechnicalstaffintheengineeringdepartment,particularlyIanHutchinsonandAnthonyMcFarlane.IwouldalsoliketothanktheprogrammingteamthatcreatedMSOffice97,formakingthewritingofthisreportso…challenging.iSummaryTheobjectivewastodesignandbuildalowcost,highperformance,dualchannelDigitalStorageOscilloscope(DSO).Costswereminimisedcomparedtoconventional,commercialDSO’sbyutilisingapersonalcomputertoprovideboththedisplayfunctionsandthemajorityofthecontrolfunctions.Theremainingc……
  • 所需E币: 3
    时间: 2020-1-6 14:15
    大小: 315.93KB
    上传者: quw431979_163.com
    theseshorttkyttleight-bituniversalregistersfeaturemultiplexedinputs/outputstoachievefulleight-bitdatahandlingina20-pinpackage.twofunction-selectinputsandtwooutput-controlinputscanbeusedtochoosethemodesofcoperationlistedinthefunctiontable.……