Tag: software liability

Related blog posts
  • Popularity: 12
    2012-1-3 16:35
    1,355 reads
    0 comments
    In 2010, a man won $1.5 million in a lawsuit against a tablesaw vendor. The fellow had removed all of the safety gear and was doing an unbelievably dangerous cut, and the saw cut back. He won, at least in part, because a different vendor offers technology that senses flesh and stops the blade before serious harm can result. The court felt the saw was defective since it didn't incorporate this flesh-detecting technology. Now the CPSC is considering mandating that all new tablesaws be safe no matter how dangerous the operator chooses to be.

    The woodworking world is in an uproar. Some feel the Feds should take action; others decry the increasing presence of the government in our lives. I think none of the arguments are particularly relevant, since the vendors now know they are at risk in court. It doesn't take a rocket scientist to see that sooner or later, and probably sooner, they'll voluntarily add smarts to make saws safer. Or rather, to protect the vendors from lawsuits. That seems a better outcome than yet more laws and the bureaucracy needed to enforce them.

    Recently Poul-Henning Kamp, writing in ACM Queue, proposed a new set of laws to protect users from dangers from software. It comes in three clauses, the first two of which I discussed a few weeks ago. Here's the third:

    Clause 2. In any other case, you are liable for whatever damage your software causes when used normally.

    I completely agree with the sentiment expressed here. But, going back to the tablesaw discussion, here in the USA I believe the courts will increasingly be called on to deal with the repercussions of software failures. Do we need a law? Most accidents (if one cares to call them that) result from a series of problems, not a single failure. A bug in the code, a weakness in the hardware, operator error and other factors generally combine to cause damage. Are we wise enough to write a law that somehow sorts all of that out?

    I completely agree with Mr. Kamp that software today is in crisis. It's the most complex engineered artifact in history. The processes used to develop it are often ones that are known to be a problem, and the resulting bugs and security issues are largely avoidable. Ironically, in no other industry can one get away with shipping known-defective products.

    Will that state of affairs last? When the lawsuits start flying you can be sure management will take action. Consider what will likely happen: the litigation will cause the tech world to waken, at least in part, from its software slumber. Publicly-traded companies will be compelled to list in the risks section of their annual reports: "another potential risk area is that we have decided to use poor software development processes, and potential lawsuits could put us out of business." Companies are required to list those risks, and it's inconceivable to me that any CEO would tolerate a statement of that sort.

    Product liability laws already exist, designed to give a plaintiff a route through the courts to address injury or costs associated with defective products. Does software need its own, special, law? There's a lot of debate about the nature of software, and some feel it is different from products one can hold and feel. But in the context of embedded systems, I think it's clear that the firmware is an innate component of a product; without it, the device is roughly as feature-rich as a brick. Remove the firmware and all of the features, the things the customer paid for, disappear.

    Ultimately, I think fear of litigation will be the force that causes management to demand better code. The legal wrangling will be cataclysmic, but it will disappear once the software improves.
  • Popularity: 13
    2011-12-22 18:15
    1,379 reads
    0 comments
    In this ACM Queue article, Poul-Henning Kamp recently proposed a new set of laws to protect users from harm caused by software.

    Mr. Kamp first quotes an excerpt from Ken Thompson's Turing Award lecture: "You can't trust code that you did not totally create yourself." That quote may have been appropriate in 1983 when Mr. Thompson delivered the lecture, but it is patently absurd today. We do trust a lot of code. Whether it's the code in your microwave or that which injects fuel into your car's engine, we confidently expect software to work. It generally does.

    The article proposes a law written in three clauses. Let's take them apart one at a time.

    Clause 0. Consult criminal code to see if any intentionally caused damage is already covered.

    This clause is supposed to cover malfeasance such as that demonstrated by Stuxnet, but what does "intentional" mean? (I hate to sound like a lawyer or a particular ex-president, but it will be lawyers arguing the cases.) Suppose a doctor ignored important advances in medicine for years. Generally he gets away with this, but then a patient suffers harm or dies. I have no doubt a court would find him guilty of malpractice, and would consider his actions intentional. Ignoring advances in his field is an intentional action, mirroring the old dictum "failing to make a decision is itself a decision."

    One could make a pretty persuasive argument that, since the use of a sloppy development process is always intentional (because we decide what we'll do and how we'll do it), the inevitable results (bugs) are in effect intentional. Further, there's a wealth of literature demonstrating how poor processes translate into defects, so the developers, or their bosses, are in a position analogous to the doctor who didn't avail himself of the latest techniques. In fact, there are papers in the literature that have called certain development strategies "professional malpractice."

    This is not to say that the only standard is perfection. The best of doctors lose patients. It means any team whose processes aren't high quality could be liable. And, honestly, that could ultimately be a good thing! As Mr. Kamp humorously writes: "...any pundits and lobbyists they could afford would spew their dire predictions that 'this law will mean the end of computing as we all know it!' To which my considered answer would be: 'Yes, please! That was exactly the idea.'"

    Then there's the second clause:

    Clause 1. If you deliver software with complete and buildable source code and a licence that allows disabling any functionality or code by the licensee, then your liability is limited to a refund.

    This homage to open source is naïve. The vast majority of our users don't know the difference between a zero and a one. They simply haven't the skills to understand the source, let alone modify it to disable some functionality. Even those of us who can have neither the time nor the toolchains. We're surrounded by software, millions and millions of lines of the stuff. The trust that Ken Thompson eschewed is the only way we can practically cope with this web of technology.

    If Novartis supplied me with a mass spectrometer and the chemical formulation for my blood pressure medicine, I still would not have the skill to test it for purity. In this supremely complex world we rely on the expertise of others. The fact that the medicine's chemical structure is printed on the instructions that accompany the pills does not absolve the manufacturer of liability for a bad batch.

    Suppose the regenerative brakes in your hybrid car fail due to a software error, but the car did come with a disc containing the source code. Does the automaker get a pass? Five people dead, perhaps, and the only liability is the price of the vehicle?

    In my next column, I'll deal with the third clause.