Archives For cybersecurity

[ image: censorship ad ]

Policy wonks, like most people, tend to think of IT as a magical black box which takes requests, does something, and makes their computers do what they want, or at least something close to it. So it’s not really surprising to see Ronan Farrow and Shamila Chaudhary rail in Foreign Policy against major cybersecurity companies for enabling dictators to block internet content, with allegations that show how poorly they understand what these companies do and how virtually all of their products work. You see, blaming a tech company for censorship is a bit like blaming a car manufacturer for drunk drivers. Certainly, their tools are intended to block content, but they’re not designed to filter all undesirables from a centralized location to which a dictator can submit a request. They’re meant to analyze and block traffic coming from malicious sources to prevent malware, and any time you can analyze and stop traffic, you can abuse that ability and start blocking legitimate sites just because you don’t like them or the people who run them.

Most of the software they cited is meant to secure corporate networks, and if it can no longer stop or scan data, it’s pretty much useless because it can’t do threat identification or mitigation. Websense does filter content and uses a centralized database cluster to push its site classifications to customers, so, as Farrow and Chaudhary noted, it was able to change a few things to help mitigate its abuse by authoritarians. But McAfee and others are in a tougher spot because they’ve simply sold a software license to network admins. Other than virus and botnet definitions, there’s not much they can control from a central location, and trying to shame a company for selling tools made for something entirely different puts it in a position in which it would be very hard to defend its actions to someone convinced that it can just flip a switch and end the digital reign of tyranny across the world. And it’s even worse when the first reactions to articles about the abuse of their wares blame them for just being greedy.

On top of that, it’s not exactly hard to write your own filters and deep packet inspection tools. It’s just difficult to scale them for millions of users, but that’s nothing out of an authoritarian’s reach. As they spend billions on security and control, surely they could divert a couple of million to build a capable system of their own. In fact, the Great Firewall of China is mostly home-grown and uses the country’s ISPs to scan incoming and outgoing traffic on a daily basis to find what to block. It sounds like a powerful indictment to point out that the Chinese use Cisco routers in their system, but it’s not as if they outsourced the task of pinging and blocking Tor nodes to the company. To be perfectly fair in charging tech companies with aiding and abetting censorship, you’d have to be talking about search engines that agree to modify their functionality to get a toehold in markets ruled over by authoritarians, who will find someone to censor searches if not the company trying to expand. Bottom line: dictators will find a way to censor what they want to censor. If they use network monitoring security tools to do it, the blame still rests with them.


[ image: bad idea ]

Recently, computers at two power plants were found to have been infected by three viruses that came from compromised USB drives. All three were easily detectable by up-to-date antivirus software, and both infections were easily preventable had the plant operators followed the simplest cybersecurity procedures. If our infrastructure were ever the victim of a powerful cyberattack, the exploits’ success wouldn’t be so much a testament to the skills of the hackers as an indictment of the shoddy practices of those who simply don’t understand how to secure critical systems and don’t care to learn. Very few attacks we see out in the wild are truly brand new and sophisticated like Stuxnet, Duqu, Flame, Gauss, and Red October. Most target unpatched, poorly secured systems with easily exploitable administrator accounts or out-of-date servers and database engines, attacks on which have been all but automated by simple PHP scripts. If you’re wondering how Anonymous can topple site after site during an op, now you know.

For example, take the pillaging of Stratfor. How did Anons get into their system? By using easily crackable default passwords and reading databases that were never encrypted. What about the huge data leak from Sony in which hundreds of thousands of accounts were compromised? An unpatched server provided a back door. Periodic leaks of credit card numbers from point-of-sale systems you find at local bars and restaurants? Out-of-date operating systems exposing admin accounts to external systems, as is typical in the industry. The ability to get into AT&T users’ accounts just by typing the right URL? A total absence of security checks on the company’s sites, checks that should’ve been tested before the sites ever went live. I think you get the point. Keep up with the virus definitions, patches, and updates, test your software, don’t let external systems run as administrators on your network, and don’t stick random USB drives into mission-critical computers. If you don’t follow these elementary practices, you are, quite frankly, begging to be infected and hacked, and considering that we basically live on the web today, that’s just reckless.
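To see just how low the bar is for some of these attacks, consider default passwords. An attacker doesn’t need to crack anything; a short list of factory logins is often enough. Here’s a minimal sketch in Python, where the credential list and the toy login check are invented for illustration:

```python
import hashlib

# A handful of factory-default credentials of the kind attackers try first.
# (Hypothetical list for illustration.)
DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("admin", "123456"),
]

def try_defaults(check_login):
    """Return the first default pair the target accepts, or None."""
    for user, pw in DEFAULT_CREDENTIALS:
        if check_login(user, pw):
            return user, pw
    return None

# A stand-in for a system whose admin never changed the shipped password.
stored = {"admin": hashlib.sha256(b"admin").hexdigest()}

def check_login(user, pw):
    return stored.get(user) == hashlib.sha256(pw.encode()).hexdigest()

hit = try_defaults(check_login)  # falls on the very first guess
```

No exploit, no zero-day, just a four-entry loop. That is the level of sophistication many of these breaches actually required.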


[ image: future stealth bomber ]

Danger Room’s editor-at-large Noah Shachtman generally writes interesting pieces that aren’t a chore to follow, but when taking up the problem of securing unmanned drones as more and more cyber weapon platforms are deployed, he ended up writing a rather disjointed post that invokes computer science in a way that just doesn’t make sense. The short version is that securing any unmanned weapons system is impossible due to the Halting Problem, and that the task of actually auditing what we can know about them is extremely expensive and time-consuming. I’ll give him the latter but definitely not the former, since he invokes the concept incorrectly and tries to tie it to a scenario where it doesn’t really apply. Here’s how he tries to explain it…

It’d be great if someone could simply write some sort of universal software checker that sniffs out any program’s potential flaws. One small problem: Such a checker can’t exist. As computer science pioneer Alan Turing showed in 1936, it’s impossible to write a program that can tell if another will run forever, given a particular input. That’s asking the checker to make a logical contradiction: Stop if you’re supposed to run for eternity.

The logic here simply does not follow. Turing was trying to determine whether there can be unsolvable problems in computer science, and focused on whether one could tell if a program would run forever, given unlimited execution time and resources. When you try, your algorithm ends up with logical contradictions on both possible results. But that’s a theoretical program with infinite resources and time; surely no program in the real world can really run forever or have as much memory as it wants, right? Exactly. It can’t, because it would eventually be killed by the operating system to prevent a crash, or crash the computer by hogging too many resources. That places a limit on the number of states in which the software can be, and the system itself will only allow certain inputs. And this means that it’s logical, and far more feasible, to focus on testing the software for what you know could happen on the system it calls home.
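To make that concrete, here’s a hedged sketch in Python. While the general Halting Problem is undecidable, a program modeled as a deterministic machine with finitely many reachable states can be decided by simple cycle detection, which is exactly the kind of bounded analysis real checkers rely on. The state-machine model and the bound are assumptions for illustration:

```python
def halts_within_finite_states(step, state, max_states=10_000):
    """Decide halting for a program modeled as a deterministic state machine.

    `step(state)` returns the next state, or None when the program halts.
    Because real software has only finitely many reachable states,
    revisiting a state proves an infinite loop -- no undecidability involved.
    """
    seen = set()
    while state is not None:
        if state in seen:
            return False           # revisited a state: it loops forever
        seen.add(state)
        if len(seen) > max_states:
            raise RuntimeError("state space larger than assumed bound")
        state = step(state)
    return True                    # reached a halting state

# A counter that halts once it reaches 5...
assert halts_within_finite_states(lambda s: None if s >= 5 else s + 1, 0)
# ...and one that cycles 0 -> 1 -> 0 forever.
assert not halts_within_finite_states(lambda s: (s + 1) % 2, 0)
```

The undecidability only bites when the state space is unbounded, which no deployed weapons system, or any other real machine, actually has.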

We can definitely do that when it comes to security with frameworks like Metasploit and Core Impact, which can throw an entire library’s worth of known and potential exploits at a program and see if it breaks or yields to attackers. New hacks get added to the frameworks as time goes on, and you can keep pounding away at your digital gates to see if anything breaks. While Shachtman had a few minutes to read about the Halting Problem, he seems to have missed that there are a few software checkers in existence and they do a decent job of making sure software doesn’t have a lot of glaring vulnerabilities. They can’t check programs for correctness and completion, but that’s ok because we don’t need them to. Knowing whether a given input would cause a program to run forever or stop won’t tell you much about its vulnerabilities, especially since we know that no real-world program can run forever or accept infinite inputs.
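The core idea behind such checkers fits in a few lines: hammer a program with semi-random inputs and treat any crash that isn’t a graceful rejection as a finding. The toy parser below and its planted bug are invented for illustration; real frameworks do the same thing at a much larger scale with curated exploit libraries:

```python
import random

def parse_length_prefixed(data: bytes) -> bytes:
    """A toy parser with a planted bug: it trusts the length byte."""
    if not data:
        raise ValueError("empty input")
    length = data[0]
    payload = data[1:1 + length]
    if len(payload) != length:
        # Bug: on truncated input we index past the end instead of erroring.
        return data[1 + length]    # raises IndexError -- the "crash"
    return payload

def fuzz(parser, rounds=1000, seed=42):
    """Throw random blobs at the parser and collect unexpected crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            parser(blob)
        except ValueError:
            pass                   # graceful rejection, as designed
        except Exception as exc:   # anything else is a finding
            crashes.append((blob, type(exc).__name__))
    return crashes

findings = fuzz(parse_length_prefixed)  # the planted bug surfaces quickly
```

Note that nothing here needs to decide whether the parser halts on every conceivable input; it only needs to observe misbehavior on inputs the system could actually receive.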

And just as programs that have to run on real computers don’t have infinite resources or accept infinite inputs, real hackers can’t execute an infinite number of attacks. So the concern deemed impossible to address with nothing less than the Halting Problem actually does have a practical solution, and theoretical computer science concepts are being mixed in with very mundane security issues that are tackled every day. I’m not sure Shachtman knows that when he goes off into this theoretical realm, he’s talking about infinities and securing all software from all attacks for all time, not about a comprehensive model for testing unmanned combat systems against known and potential exploits identified by researchers and engineers. At the end of the day, this is what DARPA is trying to accomplish, and nothing in computing prevents them from making it happen. If it did, we wouldn’t have antivirus suites, spam filters, or exploit frameworks…


When writing about the inherent difficulties of a large cyber-broadside against a country’s critical infrastructure, I focused on the fact that different SCADA machines and computer networks have different implementations, and hacks would require a different approach for each one. But the problem is that about 11 million devices in roughly 52 countries are now linked together by a platform called the Niagara Framework, which abstracts the APIs for all kinds of machines into common code objects. The result is the ability to remotely control every device hooked up to a network implementing Niagara, from elevator doors, to video cameras, to secure vaults, all monitored via servers run by Niagara’s maker. Now, this is all nice and neat, but bundling everything together so that updates are sent to every customer as they come out, and every action can be monitored and tracked to help ensure clients’ security, poses a major problem. It creates a single point of failure for thousands of client companies and millions of their assets, and if that point isn’t adequately secured, you’re in trouble. So as more and more people invested in Niagara and linked their devices to its custom development tools, they were betting that Niagara was safe. You can probably see where I’m headed with this, right?

Well, Niagara turned out to be somewhat easily hackable, and this weakness represents a very real threat to the security of hospitals, factories, and power plants. Basically, the sins committed by Tridium, the creator of Niagara, include leaving clients’ configuration files, which store data like database passwords, access control settings, machine keys, and other credentials needed by applications at runtime, far too easily accessible, and encrypting passwords with an outdated hash. These are hardly Stratfor-type errors, and the average Joe or Jane is unlikely to hijack a Niagara server, but someone who knows how to hack would be able to do some serious damage fairly quickly. Even worse, such issues are very common among well known security lapses. Contrary to popular opinion, very few Anonymous hacks happen due to the sheer cunning of LulzSec/AntiSec members. Although they certainly have some very determined and very creative minds at their disposal, hacktivists usually exploit old and well known vulnerabilities: SQL injection, poorly secured configuration files, easily guessable passwords, or employee e-mail accounts secured by passwords hashed with old algorithms that can be broken relatively easily by brute force. Sadly, this sort of thing is all too common and has many infosec experts proclaiming that security doesn’t exist in the IT world.

That statement is, of course, hyperbolic, because we do have tools and encryption standards that simply can’t be broken in practical terms. While even the SHA-2 family of functions and the mighty AES cipher aren’t invulnerable, the amount of time and processing power it would take to break them poses a challenge even to the NSA, one of the world’s most experienced and best funded intelligence agencies, much less a rogue hacker. So why is the web so riddled with sites that use the outdated MD5 or SHA-0, if they use any hashing at all? Why do so few sites even bother with a salted hash to make things even a little difficult for their attackers? There’s a wide range of libraries implementing SHA-2 for programmers, and you can easily find another powerful and effective encryption standard, PGP, in open source development kits, or at least buy tools to encrypt your sensitive data. Why not use them? Well, sophisticated encryption adds computational overhead and can be a drain on performance. For a massive website with many users, reducing lag is simply a higher priority than security, and they’d rather spend tens of thousands on new servers than on new certificates. Even if we were to accept and forgive that, however, I would think that Tridium’s executives would’ve taken security a lot more seriously, since they have a little more to protect than blog posts or internal e-mails…
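The difference between the sloppy and the sane approach is a few lines of code with any standard library. Here’s a minimal sketch in Python; for real systems you’d also want a deliberately slow KDF like PBKDF2 or bcrypt on top of this, and the “salt$digest” storage format is invented for illustration:

```python
import hashlib
import secrets

def weak_hash(password: str) -> str:
    # Unsalted MD5: identical passwords produce identical hashes,
    # so precomputed rainbow tables crack them wholesale.
    return hashlib.md5(password.encode()).hexdigest()

def salted_hash(password: str, salt: bytes = None) -> str:
    # Per-user random salt + SHA-256, stored together as "salt$digest".
    salt = salt if salt is not None else secrets.token_bytes(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt.hex() + "$" + digest

def verify(password: str, stored: str) -> bool:
    salt_hex, _ = stored.split("$")
    return salted_hash(password, bytes.fromhex(salt_hex)) == stored

# Two users pick the same password: the unsalted hashes betray the
# match instantly, while the salted ones reveal nothing.
assert weak_hash("hunter2") == weak_hash("hunter2")
assert salted_hash("hunter2") != salted_hash("hunter2")
assert verify("hunter2", salted_hash("hunter2"))
```

The salt costs sixteen random bytes per user and one extra concatenation per login, which makes the performance excuse for skipping it rather thin.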

[ illustration by Aurich Lawson/Ars Technica ]


So we’ve already seen how some of the more vocal pronouncements about cyber warfare were overhyped by those who think that hackers are nearly omnipotent, and thankfully, more and more skeptics with a good idea of how computers actually work are being published in major outlets. One of the promoters of the idea of cyber warfare as asymmetrical military engagement, Foreign Policy, now has two dueling posts on the subject, one of which puts current examples of cyber war in proper context, and one which tries to spin every act of digital malfeasance as an act of war. Obviously, you know where I stand on the issue and can probably guess that I find few faults with the skeptical column. It does underplay intelligence collection on the web and recurring problems with phishing and whaling for classified information, something which does have a very real impact on military affairs and planning, but otherwise, it’s very well done and researched. By contrast, its doppelganger seems to mix digital spies, activist DDoS attacks, and what seem to be actual military operations using a computer virus into one huge and scary melting pot of digital gloom and doom.

[ image: robot vs. fish ]

We can’t assume that every major DDoS attack is being executed as an act of war, because it’s not. For a long time, these attacks were used to hold certain sites for ransom, and occasionally, what looks like an attack is a programming error which triggers internal applications to send way too much data over the wire. Over the last year, it’s also become a form of protest, a means to voice one’s displeasure with the powers that be and do at least something to demonstrate that they’re not invulnerable. So yes, some DDoS attacks could be political in nature, but they’re hardly effective weapons. Take a look at the reality behind the attack on Estonia, which the nation’s prime minister compared to a military blockade of government institutions…

The well-wired country found itself at the receiving end of a massive distributed denial-of-service attack that emanated from up to 85,000 hijacked computers and lasted three weeks. The attacks reached a peak on May 9th, when 58 Estonian websites were attacked [ simultaneously ] and the online services of Estonia’s largest bank were taken down… It was a nuisance and an emotional strike on the country, but the bank’s network was not even penetrated; it went down for 90 minutes one day and two hours the next.

Would you really claim that an attack that made one major bank’s online dashboard unavailable for three and a half hours over two days was a successful military operation? A similar DDoS attack on Twitter, credited to a group of Russian hackers who wanted to silence a Georgian blogger, also used to get a lot of traction when a cyber warfare drum needed to be beaten, but the outage lasted just a few hours and did nothing to silence or dissuade the targeted blogger. Contrast that with a much more serious incident, when hackers working for a Chinese government project snooped through Google’s servers for political dissidents’ e-mail. That was a careful, expert attack for political purposes, but it was a matter of internal surveillance rather than an attempt to attack the company or the nation that company called home. So far, the only real example of a successful, well executed act of cyber warfare is the Stuxnet worm. It was written by experts, targeted one specific system to sabotage another nation’s nuclear program, and seems to have achieved its intended goal. The supposed work of a Russian hacker squad applying its own version of Stuxnet to an Illinois water utility actually turned out to be nothing more than a manufacturer’s employee updating the SCADA software from Russia, but thanks to the heated rhetoric about cyber war, it was assumed to be a sinister attack until shown otherwise.

As I said in the previous post on the subject, cyber warfare is nowhere near as effective or simple as we’re told again and again by the media, politicians, and self-proclaimed security experts who spread gloom and doom so they can sell their services after driving up demand for them. Counting every DDoS attack and every questionable use of a computer as a precursor to cyber warfare diverts our focus from securing what’s really, truly important, misleading those in charge into treating every computer virus as seriously as an active nuclear warhead ready to go off without warning, rather than prioritizing their assets and developing cost- and time-effective measures to avoid easily discoverable and exploitable flaws in the key nodes of their networks. No system will ever be unhackable or invulnerable, but it can be greatly reinforced at the most important points and surrounded by honeynets and powerful firewalls that filter incoming traffic through tools that can estimate the probability that a request is malicious. To do that, we need to be sober about the threats we face rather than chasing down every DDoS protest or rumor of a Stuxnet 2.0 co-opted by vicious hackers working for a special ops team with wild abandon while thinking it makes us safer.

[ illustration by Andre Kutscherauer ]


Generally, if you work with technology for a living, you notice that people have two extreme reactions to all new electronic devices. The first is surprise that they can do anything beyond their expected functions, like gasping when a smartphone browsing the web offers to call a number one clicks. The other is a belief that the new device can do pretty much anything and everything under the sun, transcending mere bits, bytes, and circuitry, and becoming indistinguishable from magic. Unfortunately for us, those now terrified of cyberwarfare seem to have the second reaction, and if you want to know just how paranoid they can get, check out what former bureaucrat and current security consultant Richard Clarke says about the possibilities of a huge cyber-offensive in his attempt at a non-fictional adaptation of a Tom Clancy novel. Unbeknownst to IT experts, computers have suddenly gained the power not only to tear through any security measure, but also to overcome incompatibilities between proprietary software packages and operating systems, as well as air gaps, while hackers are now supposed to level the playing field for nations with small militaries with their 1337 techno-wizardry.

While this notion grabs eyeballs, sells books and magazines, and scares the living daylights out of politicians who saw the flick Live Free or Die Hard one too many times, it’s total rubbish. Cyber-espionage is a very real thing, and it does happen all the time when hackers, or even computer science students recruited by a military, adopt common hacker tricks to peer into classified networks. They use social engineering, a widely available packet sniffer like Wireshark, or a custom-built one based on the open source library on which Wireshark was built, attempt spear phishing and whaling against employees of major security contractors, or look for any gap in secured networks that may lead to valuable intelligence. But there’s a huge gap between the very real threats posed by hackers looking for information much the same way Anonymous’ AntiSec collective carried out a lot of its operations, and being able to just flip a switch and bring one of the largest, best armed, and most wired nations on the planet to its knees in just fifteen minutes, as Clarke prophesies. A review of his book on a top security news blog accurately alludes to the Book of Revelation when describing his visions…

Chinese hackers take down the Pentagon’s networks, trigger explosions at oil refineries, release chlorine gas from chemical plants, disable air traffic control, cause trains to crash into each other, delete all data, including offsite backups, held by the federal reserve and major banks, [and] then plunge the country into darkness by taking down the power grid from coast-to-coast. Thousands die immediately. Cities run out of food, ATMs shut down, looters take to the streets.

He forgot cats and dogs living together, and the seven-headed, ten-horned beast ridden by the bejeweled and purple-robe-clad Whore of Babylon leading Satan’s digital forces in a charge across Megiddo, but not bad as far as apocalyptic fantasies go. The problem is that none of this can happen unless the entire nation runs on one massive command and control system accessible via the web. Considering that the main software package in your office has trouble talking to that of another company, much less every company in your industry, you can probably see the flaw in this logic. Try to bring down power grids across the country. You can’t. They work in disparate blocks using different SCADA machines which are made by different manufacturers and run different software. The now infamous Stuxnet worm targeted only a single system, Siemens Step 7, and looked for only one type of instruction to disrupt; if that instruction takes a different argument type in Step 8, the worm is rendered impotent. True, there are vulnerabilities in many of those SCADA machines, because the manufacturers often didn’t bother fixing them and their customers don’t want to update for fear that their perfectly calibrated systems may break, costing tens of millions in repairs and downtime, but the sheer variety and number of them makes a one-size-fits-all attack impossible.

Even though it was found that thousands of SCADA machines are not really air-gapped, they were made by different vendors, have different vulnerabilities, and represent only a tiny fraction of all the SCADA machines in use right now. An army of thousands of hackers working around the clock couldn’t do even a tiny fraction of the damage Clarke envisions, simply because the technology they’re attacking is so disparate and varied. To hit banking systems and empty out ATMs, they would need to attack the massive international funds exchange entities responsible for standardizing inter-bank communications, no easy task by any means. To disable GPS, they would need to take down dozens of military-operated and tracked satellites, and to take down air traffic systems, they would need to disable tens of thousands of radar towers across the nation, also run by a wide variety of software and hardware. I really don’t think Clarke and those who quote his hyperbole realize just how vast our wired infrastructures are and how many millions of targets, many of them air-gapped and really difficult to exploit, would need to be hit simultaneously to do serious or lasting damage in a very short span of time. And when the hackers actually bump into decent security and honeynets, they’ll need hours if not a full day or two to find the appropriate zero-day exploit to continue their attack. Again, this isn’t simple stuff.

Sure, it’s scary when AntiSec rummages through the web and takes down the websites of the CIA and FBI, but you have to keep in mind that most of the sites hit by Anonymous members were targeted with a social DDoS tool which simply overwhelms web servers rather than actually destroying databases or interfering with a site’s business logic on the backend. Big enough websites are pretty much impossible to shut down with this method because their enormous networks can simply absorb the attacks, and tearing down the online posters of major government agencies in no way compromises the data they actually keep classified on the internal networks they use in their daily work. The sites hit by hackers who do steal valuable information either used very lax security or didn’t update their security tools against new threats, and the hacks were the result of their complacency. For well-maintained and well-updated sites, a hack isn’t a simple matter of using a new script the way a lock picker would select a different tool; it’s a slow and steady research project in which the gap is found by trial and error rather than a simple brute-force attack. No network and no device will ever be 100% safe and secure, but neither is every network an easy target for government hackers on a mission.
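A back-of-envelope model shows why sheer capacity settles the matter. A volumetric DDoS only competes for a server’s request-handling capacity; it never touches the data behind the site. All the numbers below are invented for illustration:

```python
def availability(capacity_rps, legit_rps, attack_rps):
    """Fraction of legitimate requests still answered when a server
    sheds excess load at random. The flood contends for capacity in
    requests per second; it doesn't penetrate anything."""
    total = legit_rps + attack_rps
    if total <= capacity_rps:
        return 1.0                 # attack fully absorbed
    return capacity_rps / total

# A small agency site drowns under the flood...
small = availability(capacity_rps=2_000, legit_rps=500, attack_rps=98_000)
# ...while a large network with plenty of spare capacity shrugs it off.
large = availability(capacity_rps=500_000, legit_rps=50_000, attack_rps=98_000)
```

In this toy scenario the small site serves about two percent of its real traffic while the big network serves all of it, and in neither case has a single byte of stored data been compromised.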


In what will sound like an old Yakov Smirnoff joke, DARPA wants your computer to watch you. Literally. Every move you make and everything you do with data will be monitored, recorded, then analyzed and dissected so the computer can use your behavior for authentication. Don’t feel quite like yourself one day? You can’t use the machine, since your irregular behavior locks you out of the system. Pretty nifty, right? Just one question, though. How exactly is this going to work with any degree of accuracy? Humans are by nature hard to predict, and recording their typical habits in no way leads to an individualized, secure authentication model, because different people can have very similar habits, and any system based on a basic statistical analysis of human patterns has to allow for a certain degree of variation; otherwise, a curt reply on a rough day could trigger an account lockout. This idea seems to follow the new DARPA pattern of collecting an enormous amount of data, then using it to predict complex soft metrics, something that we really can’t do.

My guess is that different factors like eye scans and commonly used words will be used as neurons in an artificial neural network. Then, when a person whose habits were monitored interacts with the system, all the interactions will be used as inputs and run through the network on an ongoing basis to determine whether they fit the established patterns. What happens if you’re just having an off day? Trouble at home? Maybe you got a ticket on your way to work? Maybe the traffic was particularly horrendous that day and you’re still seething over some random twit who decided to cut through two lanes of freeway traffic doing 25 under the speed limit right under your nose, almost making you rear-end him? Well, the neural network probably won’t like that and will interfere as you try to work, threatening to turn your simmering anger into full blown fury and triggering another system that observes your outward behavior to flag you as a security risk. Just watching what you tend to type, how much you type, where your eyes are directed, and so on and so forth, isn’t a very good indicator of how you’re going to behave in the future, and it doesn’t capture anything all that individual, especially if you’re doing fairly repetitive tasks on a computer as part of your daily routine. It’s just data for a massive data dump.
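To illustrate the dilemma, here’s a toy sketch in Python of the statistical profiling involved: a user’s history fixes a mean and standard deviation, and any session outside a z-score cutoff gets flagged. The numbers and cutoff are invented; the point is that the cutoff is either loose enough to pass other people with similar habits, or tight enough to lock you out on a bad day:

```python
import statistics

def fits_profile(samples, observation, z_cutoff=2.0):
    """Flag an observation as anomalous if it sits more than z_cutoff
    standard deviations from the user's recorded mean behavior."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    z = abs(observation - mu) / sigma
    return z <= z_cutoff

# A user's typical typing speed over two weeks (words per minute,
# invented numbers for illustration).
history = [62, 65, 63, 64, 61, 66, 63, 62, 64, 65]

assert fits_profile(history, 64)       # an ordinary day passes
assert not fits_profile(history, 40)   # a distracted day locks you out
```

Notice that any colleague who also types in the low sixties sails straight through this “individualized” check, while the legitimate user having a rough morning trips it.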

And this is not to mention that judging your every interaction with a computer on even an hour-by-hour basis is going to take an immense amount of computing power, since your actions have to be recorded and sent to the equivalent of a small server farm to be constantly run through the neural network. Since we’re talking about the mad science arm of the military, this system’s target user would be the Defense Department, which has millions of uniformed service members, employees, and contractors to keep track of. The system would have to analyze billions of actions every day, nonstop. It’s not impossible, but it would be very expensive to maintain, and as we’ve just discussed, the results it would provide are dubious at best. It may be tempting to see the patterns of data it will generate as extremely informative and revealing, but they’re not. Since it has to deal with humans, anything other than a major anomaly within the entire system will get lost in the noise, and seemingly personally identifiable computer usage habits will be homogenized into something so generic that it will apply to an entire subset of computer users rather than just one. With any other approach, the network would constantly go off with false alarms, and thousands of people would be locked out on a regular basis after an occasional sneeze sets off the system’s hair trigger, which DARPA would find unacceptable.

We need to remember that with today’s computers we can easily record enormous amounts of data and then crunch through it faster than an entire army of humans. But that doesn’t mean the data we collect has to yield some profound insight we can tease out for predictive purposes. We have to focus not on what we can measure, but on why we’re measuring it and what factors are involved. We can mine oceans of data for some big and surprising factoids, like, say, that 88% of users don’t use a feature the site owners thought would be huge. And that’s really it. From this data, we can’t predict that tweaking the feature in certain ways will attract three times as many users as it had when the data was reviewed, because personal preference varies greatly, and something new, completely outside the range of what you collect, may be what really captures your users’ time and attention. And sure, we can use certain data to help solve very straightforward prognostication problems, but only when they involve few factors, are very narrowly defined, and are based on solid data points we can express as hard facts such as numbers, text, or true/false values. Beyond that, we’re engaging in what is really more or less just informed speculation that could be spot on, or a textbook example of GIGO.

[ illustration via Popular Science ]


The foreign policy wonk blog Best Defense is making the case that the inevitable wind-down of cybersecurity hysteria in the media, after the news splash made by revelations about the Stuxnet virus, needs to settle into a permanent, balanced attitude. Basically, the media and politicians are really good at overreacting and then forgetting an important issue once it’s cleansed out of the news cycle, and we need a middle ground: aware, but not overly paranoid that we’re going to get hit with another malicious horror that turns our machines against us. Sounds great, but it’s kind of vague and cryptic. On a scale of one to ten, with one being completely calm and ten being tear-your-hair-out paranoid, how freaked out should we be? A five? A three? A six? While I’m not an expert on guesstimating appropriate panic levels about security issues, what I can add is that making the kind of malware that can strike real world targets is very, very hard, and we shouldn’t be terrified of a viral infestation of our power plants and grids, because it takes a lot of time and effort to execute such an attack.

One of the things that really set Stuxnet apart from other viruses was that it targeted specific SCADA machines and showed great familiarity with Siemens Step 7 software at a very low level. And while that made it very scary, it also gave it very limited potency. This malware is less like a cluster bomb and more like a surgeon’s knife, and like any surgical tool, it was designed with a very specific purpose in mind. Gaining equally intimate familiarity with another set of software tools designed to control other SCADA machines may require such exhaustive rewrites of something Stuxnet-like that we’re no longer dealing with the same worm, and over the time it will take to develop it, who knows what new security patches will be applied to operating systems and the targeted software? Getting a worm onto a machine isn’t such an easy task anymore. With a lot of users very aware, if not paranoid, about strange files in their junk mail folders, and operating system warnings popping up every time something potentially compromising happens, you have to rely on privilege escalation attacks and the users’ own bad judgment, hence the prevalence of phishing and the somewhat more elaborate spear phishing attacks to circumvent passwords and user permission managers.

Now, there are obviously other ways of getting into systems, which rely on lax security around a wi-fi connection, or on just physically spying on what people are doing to get a password or plug in a USB drive with a viral payload, but the point is that hacking into systems today is like trying to hit a moving target. It’s not a trivial task if you encounter even a modicum of what’s considered basic security nowadays and the users faithfully keep their machines updated. That said, industrial machinery is actually updated very infrequently, because if it’s working, applying an update carries what seems like an unnecessary risk. Even the most reliable vendor will eventually stumble and something will go wrong, so why take a chance, right? That leaves unpatched SCADA machines exposed for a very long time, giving potential attackers a long window during which they can get very familiar with how the machines work, how they communicate with the software, and at what points communication can be seamlessly intercepted, altered, and sent back to the machinery, triggering it to miss a crucial cycle or exceed some acceptable bound. So essentially, another Stuxnet is possible and big industrial machinery is a likely target, but the next worm will take a while to develop, will target a specific system, and we can thwart it with regular updates, redundant systems, and good security protocols.
