Tag: hardware

Related blog posts
  • Popularity 23
    2015-10-30 21:33
    1590 reads
    0 comments
    I was just chatting with the folks from Bitfusion.io, who are on a mission to "bring supercomputing to the masses." Of course, when they say this, they don't mean the unwashed masses like yours truly (LOL).

    These days, a lot of scientific, medical, and technological development relies on humongous amounts of compute power. If we visualize the technology company landscape as a pyramid, there are relatively few mega-companies perched on the pinnacle. These companies have the resources to implement massive supercomputers and to throw PhD computer scientists at their applications (figuratively, not literally) in order to wrest the most out of the hardware.

    Meanwhile, the middle of the pyramid is occupied by the aforementioned "masses" -- medium-sized companies conceiving incredibly innovative ideas that rely on computing power if they are ever to see the light of day.

    It's now possible to create a very respectable computing environment that boasts a mix of multiple CPUs, GPUs, FPGAs, and so forth, and to do so without breaking the bank. Alternatively, it's possible to run one's applications in the cloud on a baffling and bewildering mixture of computing resources.

    The real trick is to get one's applications to take full advantage of the resources that are available. The other trick is to be able to determine the most advantageous mix of resources for a particular application in terms of both performance and price.

    The folks from Bitfusion say they have the answer to both of these problems. We start with Bitfusion Boost, which is an abstraction layer that runs as an agent in the operating system and that conceptually sits between your application(s) and any hardware supported by OpenCL (if you're wondering what that hardware inventory looks like, there's a little device-discovery sketch at the end of this post).

    Bitfusion Boost intercepts and profiles application code at runtime and redirects function calls that can be accelerated to the best available hardware -- CPU, GPU, or FPGA.

    This really is very impressive. The result is to unlock hardware acceleration -- to transparently and automatically accelerate existing applications, sometimes by orders of magnitude, without requiring an application rewrite. You can literally be running your application and monitoring its performance, and then "turn on" Bitfusion Boost -- whilst your application is still running -- and watch the performance numbers rocket up.

    But that's not what I wanted to talk about (LOL). One of the things the folks at Bitfusion are passionate about is measuring things -- that's how they can tell how well Bitfusion Boost is doing. Now, they've made this technology available to the rest of us in the form of the Bitfusion Profiler. To get started with Bitfusion Profiler, users access a complete Linux environment directly from their browser and use Bitfusion's Workload Builder to install programs and utilities and write scripts, just as they would on a home system or server. Bitfusion also provides detailed documentation and pre-configured sample workloads that users can employ as a starting point for their own experiments.

    Bitfusion Profiler quickly evaluates application performance across a variety of hardware and software configurations. It automatically detects limitations and determines whether a particular application might benefit from a larger memory footprint, multiple sockets, more cores, larger disk drives, or even a different cloud provider. Then Bitfusion Profiler suggests optimal configurations, helping users determine which instance types are fastest and which offer the best value.
Bitfusion Profiler is free and available in public beta as of today (click here to get started). Enterprise customers who wish to install Bitfusion Profiler directly onto their own infrastructure can do so for a fee (contact info@bitfusion.io for more details).
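    For readers who haven't played with OpenCL, here's a minimal device-discovery sketch (my own illustration, not Bitfusion's code) showing how a program can ask the OpenCL runtime what's available -- CPUs, GPUs, and accelerators such as FPGA boards. This inventory is the raw material an abstraction layer like Boost has to choose from when it redirects a function call.

        /* Minimal OpenCL device discovery (illustrative; error checks
         * mostly omitted). Build on Linux with something like:
         *     gcc discover.c -lOpenCL -o discover */
        #include <stdio.h>
        #include <CL/cl.h>

        int main(void)
        {
            cl_platform_id platforms[8];
            cl_uint num_platforms = 0;
            clGetPlatformIDs(8, platforms, &num_platforms);

            for (cl_uint p = 0; p < num_platforms && p < 8; p++) {
                cl_device_id devices[16];
                cl_uint num_devices = 0;
                if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                                   16, devices, &num_devices) != CL_SUCCESS)
                    continue;  /* this platform has no usable devices */

                for (cl_uint d = 0; d < num_devices && d < 16; d++) {
                    char name[256];
                    cl_device_type type;
                    clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                                    sizeof(name), name, NULL);
                    clGetDeviceInfo(devices[d], CL_DEVICE_TYPE,
                                    sizeof(type), &type, NULL);

                    /* A dispatcher would pick the "best" of these for each
                     * accelerable call; here we simply list them. */
                    printf("%s (%s)\n", name,
                           (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
                           (type & CL_DEVICE_TYPE_ACCELERATOR) ? "accelerator" :
                           "CPU");
                }
            }
            return 0;
        }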
  • Popularity 12
    2015-5-22 17:58
    1339 reads
    0 comments
    A lot of my friends have a microcontroller (MCU) background and have only a vague idea as to what an FPGA is and what it does.

    When pressed, they might say something like "You can configure an FPGA to do different things," but they really have no clue as to what's inside an FPGA or how one might be used in a design.

    Similarly, they have typically heard about hardware description languages (HDLs) like Verilog and VHDL; they understand that hardware design engineers use these languages to capture the design intent; but... that's typically about the extent of their knowledge.

    Actually, it's only when you try to explain all of this stuff that you realize just how confusing it can be to the uninitiated. Like the fact that things can happen at the same time in the physical hardware world, so we hardware designers tend to tell people that traditional programming languages like C/C++ are inherently sequential in nature, while our HDLs are concurrent.

    But what does this really mean? HDL source code is just that -- source code. It's only when we feed it into a tool like a simulator that it acts in a concurrent manner, or when we pass it to a synthesis tool that generates a configuration file that is loaded into an FPGA that then functions in a concurrent manner. And how about the fact that the simulator and synthesis tools themselves will be written in conventional programming languages like C/C++, which we just said were inherently sequential in nature? (There's a tiny sketch at the end of this post showing how a sequential program can model concurrent hardware.)

    The thing is that we hardware guys and gals know how all this works "in our bones," and we don't understand why MCU guys and software weenies don't (LOL).

    Anyway, the reason I'm waffling on about this here is that I've just written a couple of articles:

    * The MCU guy's guide to FPGAs: The hardware
    * The MCU guy's guide to FPGAs: The software

    It would be great if you could find the time to bounce over there, give these a read, and then let me know if I've answered most of the questions or confused the issue further. Also, please let me know if there's anything you think I've missed out.
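    As promised, here's a toy sketch in C (my own illustration, nothing to do with any real simulator's internals) of the standard trick an HDL simulator uses: evaluate everything using the pre-clock-edge values first, then commit all of the registers at once. This is how a sequential program ends up modeling concurrent hardware -- in this case, two registers that swap their contents on every clock edge.

        /* Toy "simulator" sketch: two registers swap values concurrently
         * on each clock edge, modeled in sequential C with a two-phase
         * (evaluate-then-commit) update. */
        #include <stdio.h>

        int main(void)
        {
            int a = 0, b = 1;   /* current register values */

            for (int cycle = 1; cycle <= 4; cycle++) {
                /* Phase 1: evaluate every "always block" using only the
                 * values from before the clock edge. */
                int a_next = b;
                int b_next = a;

                /* Phase 2: commit all registers simultaneously. Writing
                 * the naive sequential a = b; b = a; would clobber the
                 * old value of a -- which is precisely the difference
                 * between sequential and concurrent semantics. */
                a = a_next;
                b = b_next;

                printf("after clock %d: a=%d b=%d\n", cycle, a, b);
            }
            return 0;
        }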
  • Popularity 19
    2015-5-5 08:28
    1535 reads
    0 comments
    A few weeks ago, the Wall Street Journal reported that Andy Rubin, creator of Android, has raised $48 million to launch Playground Global LLC to provide support and advice to tech startups making devices for consumers and companies (see http://www.wsj.com/articles/android-creator-andy-rubin-launching-playground-global-1428353398). This follows other recent announcements from Y Combinator (YC) (http://www.ycombinator.com), arguably the most successful accelerator program, which was started in 2005 for software startups. Y Combinator announced a partnership with Bolt (https://www.bolt.io/) to provide hardware startups with critical support in design, prototyping, and manufacturing.

    Why is this important? Just as the launch of Y Combinator in 2005 filled a gap at the earliest stage, where VCs didn't normally invest, its foray into hardware startups begins to address a similar gap in the hardware startup space. Until recently, crowdfunding (e.g., Kickstarter) offered the only viable source of funding for startups looking to build new products. As the Internet of Things (IoT) brings together hardware, software, cloud, security, and communication technologies, Silicon Valley is becoming more interested in hardware companies as a way to take advantage of new opportunities. In 2013, US VC funding for hardware startups was US$848 million (Wall Street Journal, March 2014), twice the amount invested in 2012. The early accelerator programs in the hardware space are likely to be rewarded handsomely in the coming years as entrepreneurs all over the world develop new "smart" products in areas as diverse as wearables, healthcare, home automation, and smart cities.

    What changed for hardware startups? The ecosystem is now in place to support the rapid growth of companies building products that leverage hardware as a key component.

    First, the cost and difficulty of developing and launching hardware-enabled products and services have dropped drastically in the last few years for many reasons: rapid prototyping (e.g., 3D printing), lean manufacturing techniques, an explosion in open source communities around hardware designs, easier product distribution channels, and the separation of design and manufacturing (i.e., the contract manufacturing and fabless semiconductor industry models).

    Second, the continued miniaturization of devices, driven by rapid growth in semiconductor technologies, together with advances in cloud computing, communication technologies, and analytics, has created opportunities for innovation that did not exist just a few years ago.

    Finally, the funding and mentoring part of the ecosystem is also now in place, with more entrepreneurs, big-name accelerator programs, and VCs looking past software to hardware-enabled products and services as the next growth areas.
  • Popularity 16
    2012-3-9 18:32
    1976 reads
    2 comments
    I just finished reading this book by Mohit Arora titled The Art of Hardware Architecture. Although this book is primarily targeted at the designers of ASICs / ASSPs / SoCs, quite a few of the topics are also applicable to FPGA designers; for example, Chapter 6: The Art of Pipelining and Chapter 7: Handling Endianness. As an aside, if you aren't sure what we mean by Endianness, then you really need to read this book (grin). In the meantime, check out the Wikipedia page on this topic.

    This is an unusual book in a number of respects. For example, there are a lot of books out there that purport to talk about design techniques, but when you get right down to the "nitty-gritty," many of them turn out to be largely theoretical in nature. By comparison, The Art of Hardware Architecture is firmly focused on describing and solving real-world problems using tried-and-true techniques.

    Another way in which this book is unusual is that it doesn't cover as comprehensive a range of design problems as you might expect; instead, Mohit has selected a collection of topics that are (a) of interest to a lot of designers and (b) ones he obviously understands very well indeed; he then walks us through these problem areas and discusses the ways in which the issues can be addressed. The contents list is as follows:

    * Chapter 1: The World of Metastability
    * Chapter 2: Clocks and Resets
    * Chapter 3: Handling Multiple Clocks
    * Chapter 4: Clock Dividers
    * Chapter 5: Low Power Design
    * Chapter 6: The Art of Pipelining
    * Chapter 7: Handling Endianness
    * Chapter 8: De-bouncing Techniques
    * Chapter 9: Design Guidelines for EMC Performance

    The more detailed sub-contents for some of these topics go quite deep. For example, the sub-contents list for Chapter 2 consumes an entire page.

    A couple of these topics immediately grabbed my attention (Pipelining, Endianness, De-bouncing...). I've been doing this stuff for years, but I still wanted to see what Mohit had to say about it (which I take as being a good sign).

    Take the chapter on De-bouncing Techniques, for example. After describing the behaviour of a switch and the different types of switches, Mohit describes a variety of de-bouncing techniques, such as RC de-bouncing, hardware de-bouncers, and software de-bouncing. These are followed by De-bouncing Guidelines and De-bouncing Multiple Inputs. Some readers will already know this stuff, but the ones who aren't familiar with this topic will really learn a lot. (I couldn't resist tacking a little software de-bounce sketch of my own onto the end of this review.)

    One small point is that I really feel the book could have done with a more thorough proof-read / copy-edit. There are a lot of minor "gotchas," such as the last word in the very first paragraph, which currently reads "...how to minimise its effort." (This should read "...how to minimise its effect.") All of the "gotchas" I saw are really minor, but this sort of thing really bugs some readers. (Sorry Mohit, you should have asked me to copy-edit the book for you.)

    Actually, while I'm thinking about it, you might be interested to know that Mohit has a dedicated author website (www.aroramohit.com) that provides a load of information about himself and his book. Readers can also interact with Mohit via blogs and other social media on this site.

    The other negative point is the price of $129, which some will feel is rather steep for what Amazon shows as being 236 pages (strange to relate, there are only 221 pages in my copy, and that includes the index).
    Having said this (I've said it before and I'll say it again), if being a logic / hardware design engineer is what you do, and if reading this book will make you better at your job, then $129 may well be a small price to pay... I'll tell you what... I have my copy sitting here in front of me... I have more books than I know what to do with, and I'm no longer designing ASICs, so post a comment explaining why owning this book would change your life and why I should give it to you above all others... and whoever makes me laugh (or cry) the hardest can have my copy. (I'm up to my ears in alligators fighting fires without a canoe at the moment, so I'll make my final decision a week or so after I've posted this review.)
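    And here's that promised software de-bounce sketch in C: the classic counter-based approach (my own illustration of the general idea, not code from the book; the sample rate and threshold are made-up values you'd tune for your particular switch). The raw input has to hold the same value for N consecutive samples before the de-bounced output is allowed to change.

        /* Counter-based software de-bounce sketch (illustrative only;
         * handles a single input). Call debounce() at a fixed rate,
         * e.g. once per millisecond from a timer interrupt. */
        #include <stdbool.h>

        #define DEBOUNCE_COUNT 10   /* samples input must be stable (~10 ms) */

        bool debounce(bool raw_input)
        {
            static bool stable_state = false;  /* last accepted value */
            static int  counter = 0;

            if (raw_input == stable_state) {
                counter = 0;                   /* input agrees with output */
            } else if (++counter >= DEBOUNCE_COUNT) {
                stable_state = raw_input;      /* stable long enough; accept */
                counter = 0;
            }
            return stable_state;
        }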
  • Popularity 13
    2011-3-18 12:33
    2056 reads
    0 comments
    Two months after an NSA official said there is no longer any such thing as "secure" computing, pioneers of the field of cryptography gathered for their annual panel discussion at the RSA Conference in San Francisco.

    A few months ago, Debora Plunkett, director of the Information Assurance Directorate (IAD) at the U.S. National Security Agency (NSA), made headlines when she told attendees at a cyber security forum that there is "no such thing as 'secure' anymore."

    What Plunkett meant, according to Dickie George, technical director of the NSA's IAD, is that there has been a paradigm shift in network and computer security: rather than focusing all efforts on keeping intruders out, the reality of today's world forces security teams to assume that adversaries can and do access their networks.

    While keeping intruders out is still the primary objective, George said during the annual Cryptographers' Panel at the RSA Conference 2011 in San Francisco, monitoring today's networks requires keeping a vigilant eye out within for uncharacteristic or "inappropriate" behavior.

    "If you assume they haven't been [compromised], you are setting yourself up for a shock," George said.

    George and fellow panelists, including Ronald Rivest, the Viterbi professor of electrical engineering and computer science at MIT, said cryptography remains the best tool available for ensuring network security. But they noted that cryptography has its limitations.

    "Cryptography provides the tools, but I think the problem we are facing is the rash of technology development," Rivest said. "We keep building fences, but the universe keeps growing."

    Adi Shamir, professor of computer science at Israel's Weizmann Institute of Science, noted that the two biggest network security issues of the past year -- the WikiLeaks controversy and the Stuxnet computer worm attack that reportedly damaged as many as one fifth of Iran's nuclear centrifuges -- could not have been prevented with cryptography.

    "It's interesting to me that the two biggest attacks of the last year had nothing to do with cryptography," Shamir said.

    But though they acknowledged that cryptography has its limitations, the panelists, pioneers in the field, also emphasized that ongoing cryptography research is still of great value.

    Martin Hellman, professor emeritus of electrical engineering at Stanford University, pointed to the work done by security technology firm Cryptography Research Inc. in identifying the threat of differential power analysis attacks as an example of the tangible value of ongoing research in the field.

    "There are attacks yet to be found," George said.

    Whitfield Diffie, a visiting professor at the University of London's Royal Holloway College and a visiting scholar at Stanford, defined the first phase of cryptography's existence as the period from roughly 1915, when the first Enigma machine was created, until the February 2005 release of the NSA's Suite B set of cryptographic algorithms. The application of secure computing only existed for about half of that roughly 90-year period, Diffie noted, suggesting that there is plenty of room for continued research.

    Despite the gravity of the topic, the panelists found time for a few laughs. In a separate Q&A session held after the Cryptographers' Panel, Shamir added that he was not convinced that embedding cryptographic elements within semiconductors is the solution to the security issue.
"I'm not convinced that a security mechanism embedded on chips is going to make the situation much better," Shamir said, adding that a Trojan horse that makes it onto a computer is going to record keystrokes regardless of whether the security is in the hardware, the software, or both.   "My only hope is that the Russian Trojans on my computer and the Chinese Trojans on my computer will fight each other and block each other ," Shamir joked.      Dylan McGrath EE Times
Related resources