Tackling the risks of software/hardware homogeneity

2014-9-18 17:44 · Category: Consumer Electronics

I have long been worried about the security dangers of the computer industry's increasing reliance on common development tools, hardware architectures, software building blocks, and operating systems. And finally, there is someone--several someones, in fact--as concerned as I am. And they have done something about it.

 

"They" are the authors of a recently published technical paper on "Enhancing security by diversifying instruction sets," and include Kanad Sinha, Vasileios Kemerlis, Vasilis Pappas, Simha Sethumadhavan, and Angelos Keromytis, all of Columbia University. In their paper’s opening paragraphs they are clear what the security danger is.

 

"Many large-scale security attacks are facilitated by the lack of diversity in computer systems," they write. "Today many computers have the same hardware and software configuration, e.g., Windows on x86 and Android on ARM, and consequently suffer from the same vulnerabilities.

 

"Even worse, security protection mechanisms, as well as their workarounds, are also invariable. This homogeneity allows an attacker to prepare malicious binaries, and deploy them quickly and profitably on a large number of victim systems."

 

This homogeneity, they claim, results in what they call a "computing oligoculture," which provides fertile ground for malware in our highly networked world. "These attacks are made possible not only due to the poor security awareness on part of the users," they write, "but also because of the large number of systems sharing the same kind of hardware/software stack, which in turn allows attackers to monetize such 'write once, run anywhere' exploits.”

 

They characterize current approaches to securing such systems as “patch and pray” in which protections are created after the fact to deal with symptoms instead of dealing with the root cause: the existence of so many systems using the same hardware and software building blocks, allowing attackers to deploy exploits en masse with minimum effort and expense.

 

I agree with their "patch and pray" characterization and believe it applies not only to our connected PCs but also to smartphones and to much of what is described as the Internet of Things, especially in the consumer market. It applies not only to most of the software fixes we have evolved since the late 90s but also to more recent hardware security add-ons such as the TrustZone IP block ARM Ltd. developed for its licensees and the Trusted Execution Environment proposed by Intel, AMD and others.

 

Much like Alexander the Great, who dealt with the Ancient Greek world's complex and unsolvable Gordian Knot by slicing through it with his sword, the authors of the Columbia University paper have come up with a conceptually equally direct solution - reverse the trend toward platform homogeneity with diversification - and they have, I think, come up with a way of doing it that will not adversely affect existing solutions.

 

"Just like in biological systems, where diversity within an ecosystem prevents large populations from going extinct due to a single pathogen," they write, "the diversification of systems can make them more robust by making them invulnerable to generic attack payloads and infection vectors."

 

However, with common software and common tools in use everywhere, and with single hardware architectures dominating whole market segments of millions of users, we are already well down the road toward complete computing homogeneity.

 

To at least blunt the edges of this pervasive software and hardware homogeneity, the Columbia University authors propose achieving diversification through instruction set randomization (ISR).

 

"We propose native hardware support for diversifying systems by enabling instruction-set randomization (ISR), which aims to provide a unique random ISA for every deployed system," they write. "For instance, the opcode 0xa may denote the XOR operation on one machine, but may be invalid on another. Software implementations are too slow (70% to 400% slowdowns) and more importantly, are insecure because they use much weaker encryption schemes and can be turned off."

 

They have implemented this technique in a prototype by encrypting the instruction stream with AES and decrypting it before execution. Additionally, the AES encryption key is kept secret to prevent the development of unauthorized binaries.
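The install-time-encrypt, fetch-time-decrypt flow can be sketched as follows. The paper uses AES; because AES is not in the Python standard library, this sketch substitutes a SHA-256 counter-mode keystream as a stand-in cipher, and the key size and instruction bytes are illustrative only:

```python
import hashlib

# Sketch of "encrypt the instruction stream, decrypt before execution".
# The actual design uses AES; a SHA-256-based keystream (counter mode)
# stands in for it here so the sketch needs only the standard library.
def keystream(key: bytes, length: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt_instructions(code: bytes, key: bytes) -> bytes:
    """Install time: encrypt the instruction stream for this machine."""
    return bytes(c ^ k for c, k in zip(code, keystream(key, len(code))))

decrypt_instructions = encrypt_instructions    # XOR keystream is symmetric

key = bytes(16)                                # per-machine secret key (hypothetical value)
code = bytes([0x0A, 0x01, 0x0F])               # raw instruction bytes (made up)
installed = encrypt_instructions(code, key)    # what ships and sits on disk
fetched = decrypt_instructions(installed, key) # decrypted just before execution
assert fetched == code                         # the right key recovers the code
```

An attacker who cannot read the per-machine key sees only the encrypted stream, so injected plaintext instructions decrypt to garbage on fetch.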

 

“A particularly striking feature of the ISR solution is that very simple modifications in hardware enable broad security coverage,” they write. “For instance, we protect against all kinds of binary code-injection attacks.”

 

In their well-thought-out description of their hardware solution, they go into detail on the system-level, architectural, and microarchitectural design choices that allowed them to reduce the performance overhead of ISR to zero with strategic microarchitectural optimizations.

 

They also outline possible deployment and key management techniques - how keys are embedded in hardware, how they are accessed, and so on - within the framework of established distribution models.

 

In a prototype of the ISR approach synthesized for use at the 32 nanometer process node, they report that adding the logic necessary for instruction set randomization - an extra 16-entry 128-bit RAM and associated circuitry for storing keys in the instruction translation look-aside buffer (ITLB) - resulted in an 81 percent increase in the size of the ITLB. "This is manageable, however, since the actual core is orders of magnitude bigger than the ITLB," they write.
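Very loosely, that key-carrying ITLB can be modeled in software as a small translation cache whose entries hold a decryption key alongside the address mapping. Only the 16-entry count comes from the paper; the 4 KiB page size, LRU policy, and key values below are assumptions for illustration:

```python
from collections import OrderedDict

# Loose software model of the paper's ITLB extension: each entry carries
# a 128-bit decryption key alongside the address translation. Only the
# 16-entry count comes from the paper; page size, replacement policy,
# and key values are assumptions.
PAGE_SHIFT = 12                                # 4 KiB pages (assumption)
PAGE_MASK = (1 << PAGE_SHIFT) - 1

class KeyedITLB:
    def __init__(self, entries=16):
        self.entries = entries
        self.cache = OrderedDict()             # vpn -> (pfn, key128)

    def fill(self, vpn, pfn, key128):
        if len(self.cache) >= self.entries:
            self.cache.popitem(last=False)     # evict least recently used
        self.cache[vpn] = (pfn, key128)

    def lookup(self, vaddr):
        vpn = vaddr >> PAGE_SHIFT
        if vpn not in self.cache:
            return None                        # miss: walk page table, refetch key
        self.cache.move_to_end(vpn)            # LRU update on hit
        pfn, key = self.cache[vpn]
        paddr = (pfn << PAGE_SHIFT) | (vaddr & PAGE_MASK)
        return paddr, key                      # fetch unit decrypts with this key

tlb = KeyedITLB()
tlb.fill(vpn=0x1, pfn=0x80, key128=0xDEADBEEF)
assert tlb.lookup(0x1234) == (0x80234, 0xDEADBEEF)   # hit: translation plus key
assert tlb.lookup(0x5000) is None                    # miss: no key available
```

Keeping the key on the translation path is what lets the fetch unit decrypt every instruction without a separate key lookup, which is how the design keeps the runtime overhead near zero.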

 

What I think will make this a viable approach is that the researchers have been careful to detail how this can be deployed as a complement to current approaches, rather than as a replacement.

 

"ISR is by no means a panacea against all security problems," they write. "In addition to the diversification of instruction sets that we propose in this paper, we assume the use of contemporary defense mechanisms - address space lay-out randomization (ASLR) and software-only diversification solutions for mitigating return-oriented programming (ROP) attacks. "

 

They also assume that support for hardware-aided protection exists in processors in the form of NX bits, virtualization (AMD-V, VT-x), trusted execution (Intel TXT, ARM TrustZone), etc.

 

"Our solution is agnostic and orthogonal to such technologies that behave homogeneously in all deployments," they write. "However, note that diversification can make them more effective by raising the bar for an attacker that studies their deployment and tries to bypass them. Instruction set diversification is a missing piece that will nicely complement such mechanisms."

 

They believe ISR provides a foundation for further security improvements. One such direction is offloading key management to hardware so that the requirement of a trusted OS can be done away with, thus evicting thousands of lines of software code from the sphere of trust. Another is to extend this encryption to data items, which could open the way for truly obfuscated and practical "dark execution", particularly significant today when programs execute on remote clouds.

 

Given the amount of groundwork the Columbia researchers have put into this elegant solution to hardware/software homogeneity and its attendant security dangers, I would like to think that it, or something like it, will have some chance of being adopted widely.

 

But I'm not optimistic. I have been concerned about this increasing lack of diversity in most embedded, mobile, and desktop systems since the late 1990s and early 2000s, when the previous relative isolation of many computers within proprietary networks was replaced with a common Internet protocol. The resulting plague of viruses, hackers, and security breaches to the then ubiquitous Windows OS almost brought Microsoft to its knees.

 

As noted in “Securing Android on nextgen embedded IoT & mobile apps,” we are in a similar situation in Android mobile phones and in the many wireless sensor and machine-to-machine networks that make up the emerging Internet of Things. And this is made more serious by the drive toward open source software and hardware with its perceived economic benefits for companies.

 

I understand the near-term business economics that drive companies to focus on the immediate bottom line and make the safe short-term decisions that have resulted in such homogeneity. But the immediate dividend of such "patch and pray" strategies is canceled out by the long-term costs we are all having to absorb in terms of continually upgraded security software and hardware additions that increase the cost of the silicon.

 

I hope the situation is now serious enough that the computer industry will have as much foresight as the scientists who created the Green Revolution in agriculture in the late 1980s and early 1990s. Then, agronomists had developed new crop strains that were more productive and able to feed the earth's growing population, displacing less productive local varieties.

 

But even as the crop homogenization was occurring, many agronomists warned that it could create a monoculture situation in which a plant virus or a bug could emerge that would wipe out not just the local plants, but much of the world's output of critical foodstuffs virtually overnight.

 

So far, almost 25 years later, their fears have by and large not been realized. But that has not prevented the creation of several international efforts to build seed banks of the hardier genetic strains that have been supplanted, just in case they are needed.

 

There have been previous attempts at such diversification solutions, some using instruction set randomization, but most have failed to take hold, either because they had fundamental flaws or because they required too much reworking of the existing infrastructure to be practical.

 

One problem with previous ISR approaches was that because vendors were so wedded to specific hardware solutions, most such attempts were software-based. That meant they were also open to security problems and could be bypassed by motivated - and street-smart - hackers, who are as knowledgeable about the generally available tools, such as dynamic rewriting tools, as the developers. Using the same tools used to randomize the instruction set, the hackers could simply turn the randomization off or find a workaround. What also probably killed such early attempts at ISR is that, when implemented in software, it resulted in considerable slowdown, a no-no in most markets, where speed is of the essence.

 

But in the same way they came up with a structure that allows ISR to be deployed without disrupting existing after-the-fact solutions, the Columbia authors have paid close attention to previous ISR-based diversification efforts and avoided making the same mistakes.

 

"By rooting diversification in hardware we cleanly sidestep these problems," they write. "First, and most important, unlike software implementations, the attacker cannot simply "turn off" hardware-rooted randomization. Second, as long as the cryptographic keys are kept secret, the attacker has a practically impossible task of figuring out the ISA of the target. Finally, using intelligent microarchitectural optimizations we are able to practically reduce the overheads of ISR to near zero."

 

With such a well-thought-out game plan as the Columbia authors present, I am guardedly optimistic that the computer industry will finally have the same foresight that agronomists did in addressing the potential threat to the world's food crops. But given the mindless drive toward hardware and software homogenization I have seen so far, I will not be surprised if it, like other such efforts, is totally ignored.
