Tag: network

Related posts
Related blog posts
  • Popularity: 23
    2016-2-5 19:12
    1446 reads
    0 comments
Jakob Engblom shared an intriguing thought:

"One thing that seems to be happening today with the whole 'IoT' move and the ubiquity of cellular communications (at least in the rich world) is that 'in the field SW upgrade' is becoming easier and easier to do. Systems that used to be isolated can now be connected, and thus updated if the manufacturer wants to. But this creates a whole new set of questions: for how long do you support a deployed system with updates? Are you obliged to roll out updates when you find security holes? Do customers accept that you update their systems? Who controls the data from these systems?

"A friend of mine spent a ton of money on a Tesla car recently, and he had some interesting stories about its update process. That car is apparently updated by Tesla whenever the manufacturer feels like doing it. Consumers have little choice, and user interfaces do change over time. He might come down in the morning and realize that some element of the car GUI has changed, and he has to figure out how to do something with this particular iteration, just as with a mobile phone or computer OS, where we are pretty used to this process. It is a bold new world of connected embedded, and scary things will happen." (Source: Personal communication)

Jakob's comments raise a slew of questions.

I had a network router a few years ago, and one day a feature just disappeared – an unsolicited midnight firmware upgrade downgraded the product. Suppose a customer had selected the router specifically for that feature, and a year or three after the purchase it suddenly no longer performs as advertised? My understanding of US law is that he could sue for redress, but no one will do that for a $100 device.

In effect the company was like a thief in the night, sneaking into my office and furtively making off with the goods.

Then there's the liability problem. Suppose you don't do a firmware upgrade to patch a security flaw that surfaces years after delivery, and a hacker attacks, causing grief. It could be argued you delivered a defective product and chose not to remove the bug(s). If firmware upgrades are possible, does this imply they are mandatory? Must product support go on forever?

Adding an Internet connection to a device changes, I think, the nature of the product in a very significant way. A modern oscilloscope, for instance, is a very complex bit of equipment. Firmware upgrades via, say, a USB stick are pretty straightforward. But if a TCP/IP stack is added, the instrument could become a threat vector against a network. What responsibility does the manufacturer have years down the road to patch vulnerabilities?

One of my scopes boots Windows... and it's rumored that some versions of that OS might actually have imperfect security. My Windows 8.1 computer gets a never-ending stream of updates from Microsoft. Should all of those be applied – on practically a weekly basis – to a 'net-connected scope?

And what happens when the version of Windows in the equipment loses Microsoft's support? How culpable is the scope manufacturer then?

Commenters will undoubtedly write "just use Linux." As I write this on January 19, the first page of Slashdot has a story entitled "Serious Linux Kernel Vulnerability Patched." Linux might reduce the incidence of problems, but it won't eliminate them.

Whether it's a Linux- or Windows-based system, an OS upgrade can take a lot of time. How frustrated will users be when they need to quickly probe a few nodes, and the scope tells them to wait for an hour while an upgrade takes place?

On the upside, being able to download new code means the customer can get new features for free. It might be nice to advertise "sometimes you might get some cool new feature, but we have no idea what that might be!" Does that rather vague promise compensate for the business risk of litigation over unpatched problems?

These are issues far above our pay grades. You can be sure the legal profession will be soaked in cash as litigation creates case law. Before that happens, zillions of IoT devices will be designed and deployed. I wonder how many companies, having no legal guidance on these matters today, will be retroactively liable when problems surface?

What's your take?
  • Popularity: 14
    2015-2-27 21:04
    2253 reads
    0 comments
In a recently released study, Hewlett-Packard's Fortify on Demand research team reports that the Internet of Things in the home is not just insecure, it is a Frankenbeast. In a blog post commenting on the report, Daniel Miessler, Principal Security Architect with Fortify on Demand, wrote:

"Five years ago, we decided that mobile was the real place to be. So everyone started building mobile apps while ignoring everything we've learned from securing web and thick-client applications. And now we have the Internet of Things (IoT). If we continued in this trend we'd have a new space that ignores the security lessons from mobile, but it's actually much worse than that."

He describes it as a Frankenbeast of technology (figure) that links network, application, mobile, and cloud technologies together into a single ecosystem that seems to be taking on the worst security characteristics of each.

Figure: The HP study reveals that virtually every aspect of Internet of Things connectivity in the home is insecure.

According to the study (https://www.hpfod.com/docs/InternetOfThings.pdf), 100 percent of the devices they examined that were used in home security contained significant vulnerabilities, including password security, encryption, and authentication issues. In the ten security systems they tested, along with their cloud and mobile application components, they found that none of the systems required the use of a strong password and 100 percent of the systems failed to offer two-factor authentication. Some of the common and easily avoidable security issues they found included:

Insufficient authorization: All systems, including their cloud-based web interfaces and mobile interfaces, failed to require passwords of sufficient complexity and length, with most requiring only a six-character alphanumeric password. All systems also lacked the ability to lock out accounts after a certain number of failed attempts.

Insecure interfaces: All cloud-based web interfaces tested exhibited security concerns enabling a potential attacker to gain account access through account harvesting, which exploits three application flaws: account enumeration, a weak password policy, and lack of account lockout. Similarly, five of the ten systems tested exhibited account-harvesting concerns in their mobile application interfaces, exposing consumers to similar risks.

Privacy concerns: All systems collected some form of personal information such as name, address, date of birth, phone number, and even credit card numbers. Exposure of this personal information is of concern given the account-harvesting issues across all systems. It is also worth noting that the use of video is a key feature of many home security systems, with viewing available via mobile applications and cloud-based web interfaces. The privacy of video images from inside the home becomes an added concern.

Lack of transport encryption: While all systems implemented transport encryption such as SSL/TLS, many of the cloud connections remain vulnerable to attack (e.g., the POODLE attack). Properly configured transport encryption is especially important since security is the primary function of these systems.

"The biggest takeaway is the fact that we were able to brute force against all 10 systems," said Miessler, "meaning they had the trifecta of fail (enumerable usernames, weak password policy, and no account lockout), meaning we could gather and watch home video remotely.

"With complex systems like IoT, breaking security is often all about chaining smaller vulnerabilities together, and that's what we saw when looking at these home security systems. We can expect to see more of the same across the IoT space precisely because of the complexity of merging network, application, mobile, and cloud components into one system."
  • Popularity: 26
    2014-5-2 16:18
    1728 reads
    0 comments
The industry is quite obsessed with security these days, particularly as it transitions from traditional, standalone devices to the design of connected, networked systems that are "always on."

But sometimes it's the people on the inside, not the outside, who unwittingly present the biggest security threat.

I am a Certified Ethical Hacker, which basically means I get paid by companies to hack into their networks. My company, Digital Locksmiths, was hired by a manufacturing firm in 2011 to attempt to expose any security weaknesses that might be lurking in the ether.

A company's external infrastructure -- including web servers, domain name servers, email servers, VPN access points, perimeter firewalls, and any other applications publicly accessible from the Internet -- is typically considered the primary target of security attacks. So that's where we start.

Our methods include cracking passwords and eavesdropping, as well as using keystroke loggers, sniffers, denial-of-service attacks, and remote-control tools. In this case, I tried attacking the firewall systems with every trick in our digital lock-picker's toolkit, but to no avail: the network was locked tight, so to speak.

So I told myself, "Screw it. I'm going in."

You see, companies with an impenetrable wall against external attacks are often surprisingly open to insider threats. Hackers are able to expose these vulnerabilities by exploiting one simple fact: most people will respond in a highly predictable way to a particular situation.

First, I did a little recon on Google Earth and Street View to familiarize myself with the physical perimeter of the company's building and grounds. Since the character I was playing that day was "me," the walking stereotype of the friendly guy next door, I put on my usual garb: a pair of good jeans and a button-down shirt.

I hopped into my truck and drove over to the facility. Doing my best to look sheepish, I walked into the front lobby and approached the receptionist: "This is really embarrassing, and I don't usually ask for this type of favor, but I wonder if I could use your washroom? I knew I'd regret ordering that super-sized drink!"

She smiled -- always a good sign -- and buzzed me in. Once I was inside the men's room and had confirmed it was unoccupied, I quickly yanked two USB keys out of my pocket and dropped one on top of the metal toilet-paper holder in each stall. I gave myself a thumbs-up in the mirror, strolled back to the lobby, and flashed the receptionist a big smile as I walked out the door.

I drove back to my office and sat down in front of my computer to wait. I knew that as soon as someone plugged one of my USB keys into a computer, a program on the flash drive would auto-run and open a remote connection to my computer.

This would give me instant access and the ability to "pass the hash." Note that I'm not talking about the good ol' college days here; what I was doing was taking the encrypted credentials of the computer's owner and passing them to the company's own server, mimicking a real, normal login.

In a short time, my computer sprang to life: with the ability now to log into the company's network, I was poised to unleash all kinds of mayhem -- from extracting user names and passwords, to opening and interacting with files on the compromised system, to taking screenshots of current activity on a user's desktop.

Needless to say, company management was horrified to learn how easily I had hacked into their system, simply by exploiting the fact that people tend to react the same way in certain situations.

My "Big Gulp" ruse was a success because, by and large, people are inclined to be helpful. And it's true: curiosity does kill the cat. Nine times out of ten, a person who finds a random USB stick will wonder what's on the thing and plug it in to find out.

In fact, my backup plan, should my men's-room story have failed, was to toss a USB stick somewhere prominent in the parking lot.

This episode underscores the fact that security involves more than just protecting a company's network firewall. Internal threats are real -- and they aren't all necessarily the work of a disgruntled employee.

Employees need to understand that security threats can be triggered in numerous ways, and they need to be trained to guard against threats that may be masquerading as something perfectly innocuous -- like the guy next door. A simple policy like mandating only one type of USB device for internal use might have prevented me from gaining access to the network in this case.

Companies also need to recognize when they have a problem -- and the sooner they know, the better their chances of minimizing the harm done. The good news is that most enterprises have an enormous amount of data scattered throughout firewall, application, router, and log sources that is useful for determining what is going on within their networks. The bad news is that all too few know how to aggregate that data and put it to use.

Security professionals need to put in place the technologies and processes that give them access to security logs, along with some type of log management to extract the information required to keep the infrastructure secure.

Better yet, they can employ a Security Information and Event Management (SIEM) system to gather and correlate data, as well as a process to integrate security data with identity and access information. That way, in our hacking incident, a number of alerts would have been fired off to security managers long before any proprietary data was accessed.

While it's true that security threats have become more menacing, remember that security defenses have also become more powerful.

Terry Cutler is a Certified Ethical Hacker and co-founder of Digital Locksmiths Inc., an IT security and data-defense firm based in Montreal. He serves as the company's Chief Technology Officer. He specializes in the anticipation, recognition, and prevention of security breaches.

This article originally appeared on IFSEC Global.
  • Popularity: 21
    2013-12-17 19:14
    1836 reads
    0 comments
You've surely had your own a-ha moment. The Merriam-Webster dictionary defines it as "a moment of sudden realisation, inspiration, insight, recognition, or comprehension." But I'm taking it one step further: a-ha! with an exclamation point. This is a more dramatic realisation, the moment when you discover a great truth, when something that was complicated or unpredictable suddenly becomes clear. As engineers, I'm sure we've had many.

One of my first a-ha!s came in grade school. I was a hobbyist, and I enjoyed building things from parts bought at the local Radio Shack, even though I didn't know why they worked. I had hoarded quite the collection of resistors, capacitors, tubes, and speakers. I could read the colour bands on a resistor to get its value, and I had rudimentary soldering skills, but I didn't know how to design anything.

One day, my older brother, who had been a Navy technician, explained Ohm's Law to me. A lightning bolt ignited in my head. You mean there's a relationship between voltage, current, and resistance? That makes a lot of sense. And the world became a little bit more understandable. Building a circuit became a little less about pleasing the electron gods and a little more like engineering.

I've had several of these moments since then. Calculus was a key ingredient in many. Though I knew formulas from high school physics, calculus enabled me to derive them. As a ham radio operator, I knew how to calculate the resonant frequency of an LC network, but I didn't know why that was the resonant frequency. With calculus, I was able to derive it myself (the derivation is sketched below), and I discovered why the same mathematics explains the resonant frequency of a pendulum or a spring and mass. Calculus showed that the current through a capacitor is proportional to the first derivative of the voltage. Suddenly, first-order differential equations explained time-domain and frequency-domain phenomena. This was another a-ha! moment.

I've had many since then. Boolean logic explained digital circuits to me; they were no longer a mystery. A more recent a-ha! moment came when I learned how WCDMA worked. In retrospect, all these things seem obvious, but I can recall the very day each of these a-ha! moments arrived.

Science and engineering aren't the only subjects that have produced these moments. An Economist article about international trade led the reader through a simplified two-party, two-industry model, where one party had a productivity advantage in both industries and the other had inferior productivity in both. Much to my surprise, the simple arithmetic of the example showed that, if the two could trade, together they produced and acquired more goods than they could have without trade (the arithmetic of such an example is sketched below). Until I had done the math, I had assumed trade was only advantageous if each party had an absolute advantage in some industry. This was the principle of comparative advantage, and another a-ha! moment for me. I remember the day I did that surprising arithmetic.

What about you? I'd like to hear about your a-ha! moments.

Larry Desjardin, Consultant
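For readers who want to retrace the two calculations mentioned above, here are minimal sketches; the trade numbers are illustrative values chosen for easy arithmetic, not the ones from the Economist article.

First, the LC resonance. Writing Kirchhoff's voltage law around an ideal LC loop and combining the two element laws gives a second-order equation whose oscillation frequency is the familiar resonance formula:

```latex
% KVL around the loop: L di/dt = -v_C, and the capacitor law: i = C dv_C/dt.
\[
  L\frac{di}{dt} = -v_C, \qquad i = C\frac{dv_C}{dt}
  \;\Longrightarrow\;
  \frac{d^2 v_C}{dt^2} = -\frac{1}{LC}\,v_C .
\]
% A cosine satisfies this, fixing the natural (resonant) frequency:
\[
  v_C(t) = V_0\cos(\omega_0 t), \qquad
  \omega_0 = \frac{1}{\sqrt{LC}}, \qquad
  f_0 = \frac{1}{2\pi\sqrt{LC}} .
\]
```

Second, the comparative-advantage arithmetic. Suppose party A makes a unit of cloth in 1 hour or a unit of wine in 2 hours, while party B needs 6 hours for cloth and 3 for wine, so A is absolutely better at both. With 12 hours each and no trade, splitting time evenly, A makes 6 cloth and 3 wine while B makes 1 cloth and 2 wine: 7 cloth and 5 wine in total. If instead B specialises entirely in wine (4 units) and A spends 8 hours on cloth and 4 on wine (8 cloth, 2 wine), the pair ends up with 8 cloth and 6 wine: more of both goods, even though A is better at everything.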
  • Popularity: 18
    2013-2-14 21:16
    1836 reads
    0 comments
Before the Internet began turning into a means of distributing mass entertainment and infotainment, there was a clear delineation between two types of network operation. One type included the real-time, deterministic networks that served the specialised needs of industrial/machine control and factory automation. The other included the asynchronous networks of business and personal computing, where the need was for reliable delivery, and determinism and precisely timed delivery were secondary considerations. This is changing rapidly.

Many of the early factory automation and machine control networks operated according to the same rules of real-time, deterministic operation as the devices they linked. Starting as ad hoc and mainly proprietary solutions, out of this came a variety of so-called controller area network standards such as CANopen, ControlNet, DeviceNet, Modbus, Profibus, and the various fieldbuses.

At the same time, network communications between non-real-time systems in business and data processing began to standardise around LAN-based topologies such as Ethernet, which used something very similar to the current TCP/IP internetworking protocol suite. Unlike the controller area networks, the TCP/IP protocol is probabilistic and asynchronous in nature. It was, and by and large still remains, a network environment where the aim is to guarantee delivery, but not within any particular, or even predictable, time frame. Even with the development of enhancements such as the Network Time Protocol (NTP) for clock synchronisation between computer systems, it is certainly not within the fine-grained microsecond and millisecond deterministic boundary conditions most embedded control operations need.

But Ethernet and the TCP/IP suite have one advantage that the real-time network protocols did not: they are ubiquitous. That ubiquity provided two things embedded developers needed: 1) a network protocol that, if engineered correctly, would work in a variety of environments and allow communications between them, and 2) a protocol that was low in cost to implement because of the resultant economies of scale.

Starting in the late 1990s and through most of the last decade, there have been increasing efforts to adapt the Ethernet protocol to the real-time, deterministic requirements of embedded control applications. Initially it was just a matter of physically configuring stations or nodes into a closed Ethernet segment and limiting the number of nodes until response times were within the required deterministic bounds. This way the time it took for each node to do the time-consuming handshaking that ensures delivery could be minimised and, more importantly, predicted.

Even more fine-grained were early attempts to take the underlying TCP/IP protocol apart and use only portions of it, such as UDP, which does away with the time-consuming process of acknowledging receipt to guarantee delivery, in a closed environment to handle transmissions in a much more deterministic way (a socket-level sketch of this trade-off appears at the end of this post).

But most useful to embedded developers was the creation in 2002 of IEEE 1588 and its precision clock synchronisation protocol, the Precision Time Protocol (PTP). Using it as the starting point, many of the industrial control network protocols have adapted elements of PTP, and out of that has emerged what is called Industrial Ethernet.

Joining Ethernet in adapting to a networking environment that requires more and more real-time deterministic performance have been any number of other specifications and standards. One example is the FireWire serial interconnect specification, which started its life as an alternative to the Universal Serial Bus as a way of networking various peripheral and storage functions to a main server or desktop computer.

Such adaptations to real time and determinism are coming just in time for the broader Internet, which is now increasingly being used as the means by which to deliver real-time video transmissions to Internet-enabled TVs, smartphones, and tablets. For example, virtually every significant Ethernet router/switch manufacturer has incorporated the IEEE 1588 standard into its hardware layer. Also, the 1588-based IEEE 802.1 Audio/Video Bridging enhancement to the Ethernet standard is being widely used to deliver time-synchronised, low-latency audio and video while retaining 100% compatibility with Ethernet.

Further contributing to this trend is Synchronous Ethernet (SyncE). Taking a slightly different approach to making the TCP/IP suite more deterministic, this ITU-T standard facilitates the transfer of clock signals over the Ethernet physical layer. It is being used widely in applications such as cellular networks, IPTV, and VoIP, not to mention Internet-backbone access technologies such as passive optical networks, which require something more than traditional Ethernet protocols.

Also not immune to the need for a more deterministic TCP/IP protocol is the new segment of embedded design activity involved in bringing wirelessly connected M2M and Internet-of-Things devices into homes and buildings via new IPv6 protocol extensions such as 6LoWPAN. Ironically, one way developers are looking to satisfy this need for more deterministic operation is the same UDP subset that so occupied embedded developers' attention in the late 1990s.

I do not see this trend towards the incorporation of real-time determinism into the broader Internet slowing down. Soon, the deterministic modifications to the Ethernet servers and routers that are the backbone of the Internet will move out of the physical layer, where they now reside, and into the protocol layer. This trend will be driven not by how humans use the network, but by the needs of the many more embedded and distributed sensors that will be connected. Where present protocol standards reflect the response times of humans (one to several seconds), the Internet of the future will be mainly populated and used by devices with response times in the microseconds to milliseconds. Even now, in the average home, the ratio of devices to humans is on the order of 10:1. In the near future it is virtually certain that most of those devices will be connected.

We live in exciting times. After graduating from Columbia University in the 1970s, I came into the electronics industry only a few years after the introduction of the microprocessor. In each of the decades since, I thought the one I was in was the most exciting ever and that nothing would top it. This decade is no different and has not disappointed me yet. I can't wait to see what the next few years in this new and exciting segment of the industry will bring.
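As promised above, here is a socket-level sketch of why UDP appeals when determinism matters: a datagram is handed to the network with no connection setup, no acknowledgement, and no retransmission, so per-message latency stays small and, more importantly, predictable. The address, port, and record layout are illustrative assumptions, not anything from the article.

```python
# Minimal sketch of fire-and-forget UDP telemetry; the address, port, and
# payload layout are hypothetical placeholders.
import socket
import struct
import time

SENSOR_SINK = ("192.0.2.10", 5005)   # placeholder address (TEST-NET-1)

def send_reading(sock: socket.socket, sensor_id: int, value: float) -> None:
    # Fixed-size record: sensor id, measured value, and a send timestamp.
    payload = struct.pack("!Idd", sensor_id, value, time.time())
    # sendto() queues the datagram and returns: no handshake and no ACK wait,
    # which is exactly the handshaking overhead the article says developers
    # stripped out for closed, deterministic deployments.
    sock.sendto(payload, SENSOR_SINK)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_reading(sock, sensor_id=7, value=23.5)
```

The flip side is that delivery is not guaranteed, which is why this approach was confined to closed environments where loss could be bounded or tolerated.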
Related resources