Archives For military

Yes, Pacific Rim is a loud popcorn movie best viewed with your brain operating at half capacity, just enjoying the show without asking any questions. And that’s exactly what makes it fun. This may shock film snobs and critics who review Oscar bait, but not every movie in theaters needs to be an epic character drama that explores the fundamental issues with existence and the human condition, or brutally catalogs a bloody genocide while repeatedly beating its viewers over the head with heavy-handed questions about morals, ethics, free will, and what lurks within us all. At the same time though, big budget Hollywood spectacles with thin plots are usually outsourced to Michael Bay, or to directors who emulate his style, latching on to formulas that even the writers of Adam Sandler and Ben Stiller movies would find too flimsy and groan-inducing, then viciously drilling them into your eyes to a soundtrack of explosions. Pacific Rim was thankfully made by Guillermo del Toro and easily avoids this trap by being a simple and very straightforward little tribute to the giant robot vs. giant monster anime many twenty-somethings watched as kids.

But that said, there’s something just not right about humanoid robots brawling with giant beasts sent from another world through an undersea portal called The Breach. Jaegers might deliver a knockout punch to a 30 story Kaiju or pound one over the head with a container ship to give the monster a hell of a concussion, but the mechanics just don’t quite work. Kaijus are fleshy, which means they’re more flexible and heal minor cuts and scrapes quickly. A Jaeger, by comparison, would be made of relatively brittle metal alloys and have to be refurbished after every fight, making it extremely expensive and labor-intensive to operate. When the Kaijus appear every six months or so, as they did at the beginning of the war, the cost can be managed. But as the giant brutes keep getting bigger and bigger, and start appearing as often as once a week, resources are quickly going to run dry, and building ever more Jaegers would become very difficult. No wonder the bureaucrats who run the world in Pacific Rim want to shut down this once promising program in favor of a wall to keep the Kaijus out. They can’t afford it anymore.

Of course one also wonders how they got the Jaegers to be bipedal at such a scale. Walking on two legs is very computationally expensive for a machine that’s as big as a high rise, and even a small bump in the road could send these robots falling, and falling badly. Not only that, but legs give the Kaijus excellent points of attack: the ankles and the knees. To truly make their punches count, the Jaeger pilots have to get their robots to behave just like a human fighter and put the core and hips into the blow. Punching in a basic one-two sequence, the weight would swing from leg to leg, so a counter-attack from a Kaiju aimed at the thigh or the side of the knee could send a million tons of robot down hard, with its head lined up for a finishing blow from above. You can see the same idea in mixed martial arts disciplines which use stomps and side-knees in a clinch to shift an opponent’s weight so you can topple him and get full mount for a well placed elbow, or a swift hammer fist to the side of the head. Jaegers would simply not be flexible enough to survive this sort of assault in the real world. Far less brittle and more coordinated humans aren’t either, at least not without some training or a whole lot of mass to counteract the impacts.

For better fight mechanics, I would have designed Jaegers to look more like sumo wrestlers. An extremely wide base, either on tracks or hovering with the aid of nuclear-powered jet engines, no legs, and stuffed with ranged weaponry to soften up a Kaiju as it charges. Large, thick, heavy arms with huge claws would pummel the monsters at close range, and its barrel-like core would spin naturally, so tipping it over or even getting it off-balance would be a Herculean task, even for the fat Category 4 Kaiju which attacks Hong Kong in the movie’s second act. Its hull could be made of something flexible like Kevlar to make it tougher for a Kaiju to bite through and to dissipate a good deal of the force generated by a direct hit. One could even imagine it pulling off a complicated sequence just by rotating around its axis. For example, it could hit a Kaiju with an enormous left hook starting about 30 degrees left of center, keep spinning until it can follow the punch with a right elbow at between 60 and 120 degrees right of center, then come back with a left hammer fist and a right hook, using the hits on the Kaiju to redirect its momentum.

And while we’re redesigning the Jaegers, we should ask why they can’t be piloted remotely. We can control drones halfway across the world in real time, and all of the infrastructure to pull off a similar feat with a giant robot seems to be in place in the film. To minimize lag, the pilots should be stationed at the base from which their Jaegers are launched, but they wouldn’t have to be in the robot itself. Their brain-machine interfaces with their co-pilots and with their machine would be implemented as an abstraction over the kernel of the Jaeger’s operating system anyway, so the pilots could fight, lose, and be ready to fight again as soon as a new machine is ready to go. It’s actually kind of a no-brainer that allows them to switch tactics, pushing the Jaegers further and taking risks that could kill them if they were in the actual robot but win the day in the end. There would also be a huge psychological boost from seeing a Kaiju on a big screen in a bunker instead of up close and personal, its fangs tearing through the cockpit and rattling the robot around. Yes, it’s not as heroic or dangerous, but it’s much more militarily effective and politically beneficial.
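Just how much does keeping the pilots near the launch site buy you? A quick back-of-envelope sketch of round-trip signal lag (the distances and the fiber slowdown factor here are my assumptions, not anything from the film):

```python
C = 299_792_458  # speed of light in vacuum, m/s; signals in fiber travel ~2/3 of that

def round_trip_ms(distance_km: float, medium_factor: float = 0.66) -> float:
    """Round-trip signal delay in milliseconds over a given one-way distance."""
    one_way_m = distance_km * 1000
    return 2 * one_way_m / (C * medium_factor) * 1000

# Piloting from halfway around the world vs. from a base 50 km from the fight:
print(f"{round_trip_ms(20_000):.0f} ms")  # ~202 ms, enough to lose an exchange of blows
print(f"{round_trip_ms(50):.1f} ms")      # ~0.5 ms, imperceptible
```

Even before adding processing overhead, a fifth of a second of pure light-speed lag is why the pilots belong at the launch base rather than in a command center on another continent.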

But then again, all of this is based on the idea that Jaegers make for the best front-line defense when a Kaiju attacks. That’s not necessarily true. We know the Kaijus can be killed by nukes, but the proposition of turning the world’s most populated coastlines into radioactive deserts is a tough sell, and actually doing that would kill food production and give the Kaijus a beachhead from which they could mount assaults further and further inland. However, launching a very large kinetic kill vehicle from orbit, basically a huge spike dropped from a satellite, could hit a Kaiju with roughly the same yield as a 300 kiloton nuclear warhead without all the radiation. Currently we can’t build and launch weapons like this because they violate the Outer Space Treaty, but when there’s an angry horde of aliens that can flatten a city block with each step rampaging on Earth and all of the nations unite in building and deploying Jaegers, I’m sure exceptions could be made and the current spacefaring powers could launch a system of satellites ready to drive a super-heated alloy slug into a Kaiju at hypersonic speeds at a moment’s notice. Should that somehow fail and some time need to be bought for another shot, Jaegers can corral the beast into the kill zone.
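For scale, here’s the back-of-envelope arithmetic on that 300 kiloton figure, using nothing but kinetic energy; the 10 km/s impact speed is my assumption:

```python
TNT_J_PER_KT = 4.184e12  # joules per kiloton of TNT equivalent

def slug_mass_kg(yield_kt: float, impact_speed_m_s: float) -> float:
    """Mass a kinetic slug needs to match a given yield, from E = 1/2 m v^2."""
    energy_j = yield_kt * TNT_J_PER_KT
    return 2 * energy_j / impact_speed_m_s ** 2

mass = slug_mass_kg(300, 10_000)
print(f"{mass:,.0f} kg")  # 25,104,000 kg, about 25,000 metric tons
```

Since the energy scales with the square of velocity, a realistically launchable slug would either deliver a far smaller punch or have to come in considerably faster than 10 km/s, which is why a whole constellation of them would be needed.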

This is how you would fight a Kaiju in the real world. Orbiting KKV launchers that can fire off an exceptionally engineered slug at the planet below at a moment’s notice, drone bomber swarms, and giant mobile weapon platforms known as Jaegers, remotely piloted as a last line of defense against the nightmarish beasts. Pacific Rim’s spectacle is great for a live action anime movie, a solid tribute to the genre, and it creates tension by putting the main characters in real danger in the maws of the Kaiju, but if we were to translate any of it to the real world, it would be a militarily unsustainable strategy with little chance of actually working. The only worse strategy would be a giant wall to keep the monsters out, i.e. the Wall of Life being built in the movie, but it seems like the competent commanders in the Pacific Rim universe were all on leave throughout the war and this is why the world has been stuck with worse and worse ideas for fighting the alien titans. But hey, how mad can you be at a movie’s plot holes if it lets you mentally design giant robots and a swarm of global space-based defenses to fight aliens the size of an office block?


[ image: paranoia ]

Recently, word got out that a major defense contractor has been working on Riot, an application that tracks people across the web to figure out what they’re doing and give its users some idea of their routines. In the demonstration video, an employee is accurately placed as a morning gym rat who can be found on a treadmill at 6 am, should anyone want to ambush him with a warrant or start trailing him for one reason or another. Sounds kind of creepy, huh? It’s a massive computer system that knows where people are and who their friends are, and gives faceless agents of various three letter entities a deep look into their lives. But of course there’s a caveat to how scary Riot really is, and that caveat should worry you, the average internet user, a lot more than anything that can be done by Riot. For all its predictive and tracking abilities, Riot can only use public data, data you shared with social media sites and which can be read with an RSS feed. So the efficacy of Riot essentially depends on what its target voluntarily shares rather than on a backdoor into his digital life.
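Inferring a routine from that kind of public data takes barely any code. Here’s a toy sketch with made-up check-ins of the sort described in the demo; nothing here reflects how Riot actually works internally:

```python
from collections import Counter
from datetime import datetime

# Hypothetical public check-ins scraped from social media: (timestamp, venue)
checkins = [
    ("2013-02-04 06:05", "gym"), ("2013-02-05 06:10", "gym"),
    ("2013-02-06 06:02", "gym"), ("2013-02-06 12:30", "deli"),
    ("2013-02-07 06:08", "gym"), ("2013-02-07 18:45", "bar"),
]

def routine(checkins):
    """Tally venue visits by hour of day; the most frequent (hour, venue)
    pair is the best time and place to find the target."""
    tally = Counter()
    for stamp, venue in checkins:
        hour = datetime.strptime(stamp, "%Y-%m-%d %H:%M").hour
        tally[(hour, venue)] += 1
    return tally.most_common(1)[0]

(hour, venue), visits = routine(checkins)
print(f"Best bet: {venue} around {hour}:00 ({visits} sightings)")
# Best bet: gym around 6:00 (4 sightings)
```

A simple frequency count over voluntarily published timestamps is all it takes to place someone on a treadmill at 6 am, which is exactly why the caveat above matters more than the app itself.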

Don’t want yourself targeted by Riot or whatever Riot 2.0 is being cooked up? Keep as much as you can off Facebook and make sure you and your friends stay on top of your current security settings. Turn off any automatic geolocation services on your smartphones and on your favorite social media sites and clients, and don’t check in on any of them. This would make you virtually invisible to the application. You’d be little more than an occasional blip on the radar, which isn’t all that easy to decipher. Now, if Riot were able to crack your passwords or install a backdoor into your social media accounts and your phone, then you’d have to start worrying. But what I saw in the demos is a sales pitch for an automated way to do something many intelligence agency analysts can already do by hand, one reliant on internet-savvy but security-naive users to do much of the data mining on themselves, handing over their lives via Facebook and Twitter.

If anything, the leaked video shows how easy it is for those who live on the web to expose a lot more than they think they’re exposing to the outside world, that is if they’re even aware of how freely they release intimate details about their lives and daily routines to complete strangers. And of course, those who are mindful of how much data is being collected on them, and of how easily an overlooked security setting can let information meant solely for friends and family spill into the wider social media world, will take care not to expose themselves the way Raytheon’s test subject did, rendering the use of this app to find potential terrorists and spies rather moot. The digital medium allows for all sorts of interesting cat and mouse games, and false trails can cover a spy’s tracks, leading analysts to dead ends and making them waste hours on wild goose chases as they try to establish routines and patterns from fictional data being fed into social media sites on a daily basis. And this is why Riot is likely still in its proof of concept stage…

[ illustration by Sven Prim ]


[ image: future stealth bomber ]

Danger Room’s editor at large Noah Shachtman generally writes interesting pieces that aren’t a chore to follow, but when taking up the problem of securing unmanned drones while more and more cyber weapon platforms are being deployed, he ended up writing a rather disjointed post that invokes computer science in a way that just doesn’t make sense. The short version is that securing any unmanned weapons system is impossible due to the Halting Problem, and that the task of actually auditing what we can know about them is extremely expensive and time-consuming. I’ll give him the latter but definitely not the former, since he invokes the concept incorrectly and tries to tie it to a scenario where it doesn’t really apply. Here’s how he tries to explain it…

It’d be great if someone could simply write some sort of universal software checker that sniffs out any program’s potential flaws. One small problem: Such a checker can’t exist. As computer science pioneer Alan Turing showed in 1936, it’s impossible to write a program that can tell if another will run forever, given a particular input. That’s asking the checker to make a logical contradiction: Stop if you’re supposed to run for eternity.

The logic here simply does not follow. Turing was trying to determine if there can be unsolvable problems in computer science and focused on whether it’s possible to tell if a program would run forever, given unlimited execution time and resources. When you do that, your algorithm ends up with logical contradictions on both possible results. But that’s a theoretical program with infinite resources and time; certainly no program in the real world can really run forever or have as much memory as it wants, right? Exactly. It can’t, because it would eventually be killed by the operating system to prevent a crash, or crash the computer by hogging too many resources. That places a limit on the number of states the software can be in, and the system itself will only allow certain inputs. And this means that it’s logical, and far more feasible, to focus on testing the software for what you know could happen on the system it calls home.
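In fact, once the state space is finite, termination becomes decidable by brute force: just run the program and watch for a repeated state. A minimal sketch (the `step`/`state` framing is my own simplification for illustration, not anything from real verification tools):

```python
def runs_forever(step, initial_state, max_states=10_000):
    """Decide termination for a program whose entire state fits in a hashable
    value, possible precisely because the state space is bounded.
    `step` returns the next state, or None when the program halts."""
    seen = set()
    state = initial_state
    while state is not None:
        if state in seen:           # revisited a state: deterministic, so it cycles forever
            return True
        seen.add(state)
        if len(seen) > max_states:  # state space bigger than the declared bound
            raise RuntimeError("state space exceeded the declared bound")
        state = step(state)
    return False                    # reached the halt state

# A program that counts down to zero halts...
assert runs_forever(lambda n: n - 1 if n > 0 else None, 5) is False
# ...while one that cycles through values mod 3 never does.
assert runs_forever(lambda n: (n + 1) % 3, 0) is True
```

The undecidability result only bites when the state space is unbounded; a real weapons platform with fixed memory and a fixed input vocabulary is, in principle, exhaustively checkable, just expensively so.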

We can definitely do that when it comes to security with frameworks like Metasploit and IMPACT, which can throw an entire library’s worth of known and potential exploits at a program and see if it breaks or yields to attackers. New hacks get added to the frameworks as time goes on, and you can keep pounding away at your digital gates to see if anything breaks. While Shachtman had a few minutes to read about the Halting Problem, he seems to have missed that there are a number of software checkers in existence, and they do a decent job of making sure software doesn’t have a lot of glaring vulnerabilities. They can’t check programs for correctness and completion, but that’s ok because we don’t need them to. Knowing whether a given input would cause a program to run forever or stop won’t tell you much about its vulnerabilities, especially since we know that no real world program can really run forever or accept infinite inputs.

And just as programs that have to run on real computers don’t have infinite resources or accept infinite inputs, real hackers can’t execute an infinite number of attacks. So the concern that was deemed impossible to address with nothing less than the Halting Problem actually does have a practical solution, and theoretical computer science concepts are being mixed in with very mundane security issues that are tackled every day. I’m not sure if Shachtman knows that when he goes off into this theoretical realm, he’s talking about infinities and securing all software from all attacks for all time, not about a comprehensive model for testing unmanned combat systems against known and potential exploits identified by researchers and engineers. At the end of the day, this is what DARPA is trying to accomplish, and nothing in computing prevents them from making it happen. If it did, we wouldn’t have antivirus suites, spam filters, or exploit frameworks…


[ image: X-47B takeoff ]

Human Rights Watch has seen the future of warfare and they don’t like it, not one bit. It’s pretty much inevitable that machines will be doing more and more fighting because they’re cheap, and when one of them is destroyed by enemy fire, no one has to lose a father or a mother. Another one will be rolled off the assembly line and thrown into the fray. But the problem, according to a lengthy report by HRW, is that robots couldn’t tell civilians from enemy combatants during a war, and so humans should be the ones deciding who gets killed and who doesn’t. Being able to distinguish civilians from hostiles is absolutely crucial because most wars fought today are asymmetric and often involve complex, loosely affiliated groups which move through a civilian population and recruit civilians, or so-called "non-state actors," to join them. How do you tell the difference, especially when you’re just a collection of circuits running code?

Just as HRW warns in its grandly titled report, robots left to make all the decisions could easily turn into indiscriminate killers, butchering everyone in sight with no human accountable for their actions, because one could always blame a bug, or a lack of testing in real world situations, for what could all too easily become a war crime. But considering that humans have a hard time telling who is on whose side in Afghanistan, and faced the same problem in Iraq, barely keeping the country together until the population decided to come down hard on the worst of the sectarian militias, how well would a robot fare? HRW may be asking for an impossible goal here: to make a robot better at telling civilians apart from combatants than humans who spend years learning to do just that. Of course as a computer person, I’m intrigued by the idea, but the only viable possibility I see is to keep the entire population under constant surveillance, log their every movement, word, keystroke, and nervous tic, and parse the resulting oceans of data for patterns.

But how would that look? Excuse us, mind if we wire your building as if we’re shooting a reality show, install spyware on your computer, and tap your phones to record everything you say and do so our supercomputer doesn’t tell a drone to lob a 1,000 pound warhead through your living room window? Something tells me that’s not a viable plan, and even then, mistakes could easily be made by both humans and robots since our intra-cultural interactions are very complex and hard to interpret with certainty. And again, we already spy on people and still mistakes are made, so it’s doubtful this technique would help, especially when we consider just how much data would come pouring in. Really, it all comes down to the fact that war is terrible and people get killed in armed conflicts. Mistakes can and will inevitably be made, robots or no robots, and asking a nation looking to automate its mechanized infantry and air force to keep on risking humans is like yelling into the wind. The only way civilians will be spared is if wars are prevented, but preventing wars is a task at which we’ve been spectacularly failing for thousands of years…


[ image: UFO city ]

Please pardon the lack of posts. Things have been rather hectic on and off, and the news from the usual sources has been rather slow, reporting on experiments and ideas which I’ve written about before in their previous incarnations, or ones that seem to be of little interest to virtually anyone outside the field in question. But I did come across something from Ray Villard that gave me a good idea for a post. Basically, Ray explores the question of whether UFO sightings were culprits in accidents and finds that cases of mistaken identity can certainly cause you to crash a car or make a military pilot do something risky with his jet, but overall, you don’t have to worry about an alien spacecraft running you off the road or out of the sky. This is all old news of course, but the incident mentioned in his opening paragraphs, regarding a pilot who crashed his plane in a spirited pursuit of a UFO likely to have been a weather balloon, is noteworthy because it lets me try to address a very common and often hard to counter claim made by many ufologists.

A while ago, a small group of former high ranking Air Force officers claimed that UFOs regularly showed up during nuclear tests, occasionally disabling the warheads, something a lot of ardent conspiracy theorists and ufologists took as concrete proof of the long-standing idea that nuclear weapons attract the aliens who come to Earth. Having military personnel talk about having no idea what was in the sky above them, or recall chasing down bizarre objects which they could not identify and which their commanders seemed very reluctant to discuss, if they discussed the objects at all, sounds like a slam dunk to a UFO believer. If anyone would know what was in the skies, it should be the Air Force, and if it doesn’t know, it must be an alien, right? There’s no way that crazy people are flying bombers and interceptors, and operating radar stations on such a massive scale that hundreds of honorably discharged specialists and career officers will come forward to talk about their UFO sightings. And they’re right. There aren’t. But the issue is not a question of whether someone not entirely sane serves in the military. It’s military secrecy.

The defense establishment has a lot of secrets, and these secrets are stratified. If you have top secret clearance while your colleague has a secret one, you know things he or she doesn’t, and you’re not allowed to say anything about a top secret level project except to those with the same exact clearance as you. This is important because clearances can also be project specific, which means that two officers with top secret clearance may actually not be cleared to know about an extremely important project, or only one of them may be involved with it but not allowed to say anything about his work to his counterpart. Getting pretty tangled, isn’t it? Usually, this is done to minimize potential leaks because the fewer people who know about a critical project which has to stay in the shadows, the fewer people can spill any details, and if they do, it’s easier to track down who talked and to whom. And during the cold war, the golden days of UFO sightings, very classified, compartmentalized work was constantly happening at military bases.

Former military pilots, specialists, and officers talking about UFOs aren’t crazy or poorly trained; they simply didn’t know what they saw or why, because they weren’t allowed to know. Spy plane prototypes, highly experimental detectors, and new weapons systems flew across an impressive swath of the country in total secrecy, and whoever detected them, with no clue what a bizarre object like that was doing in the air, was unlikely to have the clearances to find out what it actually was. The same trend continues today, so even as the number of clearances grows, there are still few people who can accurately connect the dots on today’s black projects, ones likely to involve very oddly shaped robotic craft that have been mistaken for UFOs by the public when being trucked from base to base, even when they were already known to exist and had their own Wikipedia pages for years. Just imagine what’s happening behind closed doors at the infamous Area 51 base, the birthplace of the world’s most advanced military jets. How many experimental planes are flying in the skies today, and how many are so secret that only a room full of people are allowed to know about them? How many have been spotted as UFOs?


[ image: Jason mask ]

Here’s a fun fact for you. If you zap someone with a powerful enough magnetic field, you could change that person’s behavior, and not always for the best. In fact, you could even zap someone into a state of cold, callous sociopathy if you know where to aim, at least for a short while. Yes, the effects do wear off, but it seems perfectly plausible that the same effect could be harnessed and prolonged by a chemical cocktail, and we’ve long known that behavior can be altered with the right tools. So of course conspiracy theorists around the world were wondering if sinister military officers or politicians with little concern for their fellow humans would start injecting some people with a psychopath-killer-in-a-syringe serum and setting them loose on a battlefield to do unspeakable evil, acting as shock troops before or during an invasion. The answer is twofold. In theory, yes, they could. In practice, the results would vary widely, could easily backfire, and we already have plenty of sociopaths available for building a small army of shock troops. Just ask the Pakistani ISI if you’re curious, and while you’re at it, ask how well it’s worked out for them…

Basically, the issue here is that there are limits to how much you can change someone’s behavior, as well as for how long. In the article above, the subject feels less empathetic and less inhibited, but his psychopathy only extends to taking more risks in a video game and pocketing an uncollected tip, which he promptly pays back after returning to normal. His comparison point is a special forces soldier who had extensive training and whose skills were honed in real wars. This doesn’t tell us much because military training is a major variable that’s overlooked in such stories. How likely is our non-military test subject to injure or kill someone in a real fight? Probably not very, and here is why. If you ever take a martial arts class, you’ll spend the first few weeks apologizing if you do manage to land a punch on your sparring partner, while the instructors yell at you for going far too easy on your blows and tackles. You’ll shy away from jabs, and your natural instinct will be to flinch or fall back when attacked, not to calmly stand your ground. Humans are social creatures and tend to be averse to hurting each other in the vast majority of cases.

True, we can be induced into hurting others with money or threats, and we do know how to train someone not to shy away from fights and to overcome the natural aversion to real violence. But the experimental subject in question appears to have never had any combat training or martial arts background. He may be less averse to getting into a fight because his impulse control was radically lowered, but chances are that he’ll run for it if he picks a fight with someone who’s able to hold his own, or when he realizes that he’s about to get hurt. Likewise, he’s unlikely to punch as hard or as accurately as someone who’s had some real training. All in all, he may be a major menace to unwatched tips in a bar and in Grand Theft Auto, but he’s most probably not a threat to flesh and blood humans. His former special forces friend? Absolutely, but he seems to have no need to be zapped into an emotionally detached state and has his impulses pretty well under control. On top of that, were we to just zap or drug a random person into psychopathic malice, there’s simply no telling whether he would turn on his friends and handlers, a chance no evil, self-respecting mastermind of the New World Order would want to take.

And that brings us back to the very real problem of an abundance of psychopaths willing to do a dirty job for someone willing to pay. Just look at what happened in Afghanistan during and soon after the Soviet occupation. The mujahedeen trained to fight a guerrilla war against the Red Army, as well as to become proxy shock troops for the ISI in a potential war with India, were not given drugs or magnetic bursts to the brain. They were recruited based on their religious convictions, trained to channel their loathing for the occupying infidels into violence, and let loose on Soviet troops. No artificial inducement or neural intervention was even needed. Today, they quite regularly turn on their former handlers, kill people who displease them with near impunity and absolutely zero questions or moral qualms, and have generally proved to be a far bigger threat and liability than an asymmetric military asset. Considering that real psychopaths are so dangerous, why create an entire army of them with experimental chemicals or magnetic beams? If indiscriminate murder is your goal, fully automated robots are the easiest way to go, not average people or soldiers just out of basic training with their impulse control drugged and zapped out of existence…


[ image: 300 ]

While reporting on cyberwarfare and information security has been getting better and better as of late, there are still some articles that posit baffling ideas about how to prevent a massive cyber attack launched by a government. The strange idea in question this time has a good starting point, but ends up imagining cyber attacks as one would a conventional siege, somewhat reminiscent of the Battle of Thermopylae. Rather than envisioning an attack from the cloud able to hit a target out of the blue, it tries to portray network topologies as a kind of unseen battlefield on which one side can gain an advantage by exploiting the landscape…

Cyberspace depends on a physical infrastructure of computers and fiber, and this physical infrastructure is located on national territory or subject to national jurisdiction. Cyberspace is a hierarchy of networks, at the top of which a small number of companies carry the bulk of global traffic over the Internet “backbone.” International traffic, including attacks, enters the United States over this “backbone.” The backbone is a choke point, relatively easy to defend, and something that the NSA is already intimately familiar with (as are the other major powers that engage in signals intelligence). Sit at the boundary of the backbone and U.S. jurisdiction, monitor and intercept malware, and attacks can be blocked.

Technically yes, you can use the main switches where the fiber stretching across the oceans reaches your shores and have a deep packet inspector check the headers of incoming packets to flag anything suspicious. But this really only works for relatively straightforward attacks and can easily be avoided. If you’re trying to inject a worm or a virus into a research lab’s computer, you’ll have to get through an anti-virus system which will scan your malware and compare its bytes to as many virus and worm signatures in its database as it reasonably can. With the sheer amount of malware out there today, these tools are good at stopping existing infections and their mutant versions. However, brand new attacks require reverse engineering and being run in a simulated environment to be identified. This is how Flame and Gauss went undetected for years, and they were most likely not even spread via the web but with infected flash drives, meaning that efforts to stop them with packet inspection would’ve been absolutely useless.
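The core limitation is easy to see in a toy signature scanner; the byte signatures below are made up for illustration, and real engines also use heuristics and emulation, but the principle holds:

```python
# Hypothetical byte signatures for known malware families (made-up values).
SIGNATURES = {
    "trojan_a": b"\x4d\x5a\x90\x00\xde\xad",
    "worm_x":   b"\xca\xfe\xba\xbe\x13\x37",
}

def scan(payload: bytes) -> list[str]:
    """Flag a payload only if it contains a known signature.
    A brand new attack, by definition, matches nothing in the database."""
    return [name for name, sig in SIGNATURES.items() if sig in payload]

known = b"header" + b"\xca\xfe\xba\xbe\x13\x37" + b"trailer"
novel = b"header" + b"\x00\x11\x22\x33\x44\x55" + b"trailer"

assert scan(known) == ["worm_x"]  # known threat: caught
assert scan(novel) == []          # zero-day: sails right through
```

Signature matching, whether in an anti-virus suite or a deep packet inspector at the backbone, can only recognize what someone has already dissected, which is exactly why Flame and Gauss roamed free for years.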

A deep packet inspector sitting at the MAE-East or MAE-West exchange points (or IXPs) would have to work like an anti-virus suite to do what the author is proposing, so it could stop someone from downloading an obvious virus or bit of spyware from a server in another nation, or deny an odd stream of packets from China or Iran thought to be malicious, but it’s not a choke point in any conventional sense. IXPs are not in the business of being traffic cops, so having them take on that role could have serious diplomatic repercussions, and aggressive filtering could have all sorts of nasty downstream effects on the ISPs connected to them. Considering that trying to flag traffic by country could be foiled by proxies and IP spoofing, and that complex new attacks would easily slip by an IXP-based anti-virus system, all the effort may not be worth it in the long run, simply causing glitches for users trying to watch Netflix or read the news on foreign websites, all while trying to prevent threats users can easily manage on their own.

So if creating IXP choke points would do little to stop the kind of complex attacks for which they’d be needed, why has there been so much talk about the Pentagon treating the internet as a top national security concern and trying to secure networks across America, or at the very least, being on call should anything go wrong? Why is the Secretary of Defense telling businesspeople that he views cybersecurity as the country’s biggest new challenge and has the Air Force on the job? My guess would be that some organizations and businesses simply haven’t been investing the time and attention security demands and now see the DOD as the perfect, cost-effective way to secure their networks, even though they could thwart attacks and counter-hack on their own without getting the military on the case, perhaps not even realizing that they’re giving it a Sisyphean task. If they know they’re targets, the best thing for them to do is to secure their networks and be aggressive about hiring infosec experts, not call in the cavalry and expect it to stop a real threat from materializing, since it simply can’t perform such miracles…


playground soldier

You might remember the bitter Israeli joke I used in a previous post about Haredi Jews’ complete lack of any desire to participate in their own nation’s future. It goes something like this: a third of the country works, a third fights in the military, and a third pays all the taxes; unfortunately, it’s all the same third. Things have progressed somewhat since this joke was coined, and there are a lot fewer religious fundamentalists shirking their military duties by claiming a religious exemption, similar to the American "conscientious objector" clause, and staying home on a state stipend, reading the Torah for the hundredth time. Unfortunately, this means that there are now a lot of fundamentalists in the IDF, and that doesn’t bode well for the Israeli women who are very, very quickly rising through the ranks and thriving in the military’s primarily secular structure. The new Haredi recruits seem determined to maintain separation of genders at all costs…

If the pressure to avoid sin in the military has always been an onus on women, more recently it’s transferred to men. Like Boianjiu’s recruits, many religious men are taught that they must steer clear of certain dangers, such as being touched by a woman, hearing a woman sing, and looking at women. As more women advance into positions of power, or just generally spread out among various units, these actions are harder and harder for men to avoid.

Perhaps one of the most ridiculous manifestations of this is a growing refusal to allow a female instructor to correct a man’s posture during combat training. Not hearing women sing or simply avoiding the sight of a woman can be excused as quirks, utterly asinine quirks that take several phrases in the Torah to unwarranted and unthinking extremes, but quirks nonetheless. But when these recruits refuse to learn how to handle their weapons or assume a proper stance during a live ammo exercise because a woman is in charge of their training, we’re venturing into that rare category of lunacy so extreme that it’s dangerous to the lunatic and everyone around him. It would almost be better if they had just kept mooching off the government and shirking their responsibilities as they did before, because now, instead of trying to live in a post-1600 AD world, they’re trying to make the IDF bow to their whims regardless of what that does to combat readiness.

Not only are the Haredis a force for social unrest in Israel, and not only do they refuse to work in the knowledge-based economy the country has spent many billions trying to create, they’re now trying to turn their nation’s military into one of their yeshivas, but with uniforms and guns. Can we finally let this example of religious fundamentalism going too far, again and again, teach us a few glaring lessons about why we shouldn’t praise religious extremists as devoted pillars of their communities, and why we can’t allow them free rein in politics and modern society? For the last 60-plus years, Israel clothed, fed, sheltered, and defended its fundamentalists, letting them do as they wished and granting them every exception and stipend they demanded. What did the state get in return? Hardcore religious fanatics who ridicule and shun the society that enabled their cushy existence, demanding ever more money, power, and concessions. And the only word I can possibly think of to describe shameful behavior like that is parasitic.


future stealth bomber

Military planners in the U.S. are fretting about the F-35’s success, and for good reason, since the jets are slated to be the backbone of all military aviation for the next half century; over those 50 years, more than a trillion dollars will be spent building, testing, and maintaining a vast fleet of them. Most of the technology they use is brand new and has never quite been put together this way before, so there are bound to be bugs aplenty in the initial stages. But since a lot of technology tends to be seen as black magic by politicians and reporters, there are angry rumbles about why the jets aren’t performing up to par and freakouts about the 10 million lines of C++ code across the three software blocks (which we’d call versions in the civilian world). It’s an impressive number, and it’s great that software running on jets is getting the attention it really deserves for once, but what exactly does that number mean, and is it really that important?

First and foremost, just to set the proper mood, allow me to share one of my professors’ favorite lines about programming: software is a lot like sausage. Just as good sausage is delicious but you wouldn’t want to see exactly how it’s made firsthand, software can be impressive and useful, but you may not want to look into its source code. I have never seen new technology come off the line or be deployed completely successfully and error-free the first time out in the field, and I have never met or heard of someone who has. Were you to look into any sizeable company’s audit of where its tech projects are going, you’d see countless lamentations about how nothing works right, everything is slow and broken, and the systems just won’t do what users want them to do. It’s the natural state of new development. Oftentimes people aren’t even sure what they want to do, but the minute you show them a proposal, they know it’s not what you just demonstrated, and you end up going through iteration after iteration to finally nail down the requirements.

In lower-visibility projects, this iterative chaos with a method behind it tends to fly well under the radar, but the F-35 is anything but low visibility. Were we to hear nothing but stellar reports of its perfect performance the first time out, I’d be concerned, since to me it would say that it was not being properly tested or that something was being covered up to make the jet look good. So when I hear hyperventilation about the Block III software and its 8.5 million lines of code, it’s almost like music to my ears, since it tells me that this crucial part of the process is really being taken seriously. At the same time, we could do without the focus on the number of lines of code, because it’s good for conveying scale and little else. Much of this code is probably going to be very routine, meant to get and set data. A relatively small portion of it is going to handle that data in complex ways, and a good chunk is going to be responsible for optimizing performance-critical pieces, which can always be tweaked once we know the key functions and methods work. It’s those bits we need to worry about, rather than whispering "10 million lines" on the way to the fainting couch.

Now, I don’t have access to the F-35 source, otherwise I would obviously not be writing about it, but these hunches are based on my experience with large applications. Most of a huge piece of code has to be CRUD because it needs to pull in so much data from all the relevant parts of the system to make complex choices. The same thing is happening with the F-35: it has to pull in all sorts of data from its sensors, balance it against the pilot’s input, and run its options against the complex logic that tells it what the plane can and cannot do, so I’d be happy to defend my educated guess in the comments. With that in mind, freaking out about the size of Block III makes it rather easy for journalists to sound like they researched the software issues, but in reality, it’s a meaningless stat that obscures the real concern. Lockheed could lighten its code blocks with machine learning, but the military doesn’t yet trust robots to make decisions with little or no human input, so the code has to be deterministic rather than probabilistic. We could talk about what that means too, but we’d just be getting into the gritty details of software design…
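To make the CRUD-versus-logic split concrete, here is a cartoon of the shape such code tends to take. Every name, number, and limit here is invented; this is not the F-35's actual logic, just an illustration of how most of the code shuttles sensor data around while a small, fully deterministic core makes the decisions.

```python
from dataclasses import dataclass

# Hypothetical sketch: routine data-shuttling (the "CRUD") versus a small
# deterministic decision core. All values are invented for illustration.

@dataclass
class SensorFrame:
    """Routine data container: just gets and sets values, no decisions."""
    airspeed_kts: float
    altitude_ft: float
    g_load: float

MAX_G = 9.0  # hypothetical airframe limit

def clamp_pilot_g(frame: SensorFrame, requested_g: float) -> float:
    """Deterministic decision core: the same inputs always yield the same
    output, which is exactly the auditable behavior a learned, probabilistic
    controller can't guarantee."""
    # At low airspeed, allow only half the structural limit (invented rule).
    ceiling = MAX_G if frame.airspeed_kts > 300 else MAX_G / 2
    return min(requested_g, ceiling)

# Fast and slow flight regimes produce predictable, repeatable limits.
assert clamp_pilot_g(SensorFrame(450, 20000, 1.0), 12.0) == 9.0
assert clamp_pilot_g(SensorFrame(200, 5000, 1.0), 12.0) == 4.5
```

The point of the sketch is the proportions: the dataclass plumbing is the bulk of the line count, while the few lines of `clamp_pilot_g` are where the real risk, and the real testing effort, live.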


dead cyber spy

Nowadays, if you hack into a company’s servers, the company might hack you right back. No, it won’t wipe your hard drive or infect you with a virus, of course; the goal is to figure out who you are and what you’re after, primarily because some of the most advanced hacks over the past few years have been cases of industrial and military espionage. And this is where legal wonks argue that the government should step in, lest a company launch a retaliatory cyberattack only to find that its target is actually a foreign intelligence agency. Case in point: Google. After a very sophisticated attack on its servers coming from China, and a messy international incident which saw a heated back-and-forth between the Chinese Communist Party and the company, the tech titan hacked back and found that its attackers were targeting defense and other tech companies with menacingly complex scripts. The group, dubbed the Elderwood Gang, is still at it.

Their easy access to zero-day exploits and the coordination required to pull off their favorite type of attack point to backing from someone who can afford to employ highly skilled programmers and wants to spy on foreign defense and tech contractors, trying to steal blueprints, e-mail, and source code. Basically, what I’m trying to say is that the prevailing rumor paints the Elderwood Gang as part of the Chinese cyber-army long suspected of stealing classified documents from the U.N. and a lot of First World military contractors and government agencies via spyware. As the vast majority of the wired world knows, the United States isn’t exactly a hacking lightweight, and it more than likely deploys some very sophisticated spyware and malware of its own. So, say the legal wonks mentioned above, have the Air Force and the NSA tackle sophisticated hackers, not the companies that find themselves riddled with foreign spyware. That spyware could have come from a Facebook game someone was playing at work, trying to steal PayPal logins, or it might be a worm from another government, and hacking its authors back would provoke an international incident that would have to escalate all the way up to the military. But is that a workable approach?

No, not really. The fact is that the vast majority of infections are trying to steal financial information and/or turn your computer into a bot for DDoS attacks. Not only that, but the malware kits used to make viruses and worms are exploitable too. Only a tiny sliver of all the nasty stuff you might catch surfing random sites without some very heavy-duty firewalls and strict privacy and browser settings is actually complex malware from a nation state, and even then, you’d have to be a very highly visible defense or tech company, since these attacks tend to come via whaling (which is like spear-phishing, but targeted at high-level executives) and compromised industry message boards, blogs, and forums. Small fry don’t hold the spies’ interest for long, so it’s really the Lockheed Martins, EADS’, and Northrop Grummans of the world that should be worried, but considering their cozy relationships with the militaries of their home states, they can always escalate things when they need to. And since all this is being done in secret, I highly doubt that a foreign intelligence agency hacked in retaliation will cry foul. That would just be an admission of guilt and the start of a major diplomatic clusterscrew.

Were we to start reporting hack attempt after hack attempt and infection after infection, we’d swamp the cybersecurity experts at the NSA and the Air Force so quickly that they’d be buried under a massive backlog of things to investigate within weeks, while the torrents of reports kept on coming. Antivirus makers already run vast databases, 24/7/365, that can identify who was infected with what kind of virus and how to remove it, and they can keep up with 99.9% of infections out in the wild. Considering that they’re the primary discoverers of cyber weapons in use, they’re more than up to the job, and they can do it without defense establishments getting involved in their daily work. And when we take into account the sheer number of random trojans and worms out there, a hacked company has a 99.9% chance of pinging a random hacker crew rather than something as threatening as the Elderwood Gang or as sophisticated as Flame or Stuxnet, and even then, no one on the other end will make a peep, because doing so would be a lot worse than keeping quiet and letting the retaliating businesses get away with it. Treaties and tens of billions in trade may be at stake, so it’s best to just let the accusations die down and resume the spying later. So if you get hacked, go ahead and hack back. You’re not going to start any wars by doing it.
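The core of what those antivirus databases do can be sketched in a few lines: hash a suspicious file and look the digest up in a catalog of known malware. The catalog entries and file contents below are invented for the example; real engines layer heuristics and behavioral analysis on top of this kind of signature matching, but the lookup itself is this simple.

```python
import hashlib

# Toy signature database: maps SHA-256 digests of known-bad files to names.
# The sample "malware" and its label are made up for illustration.
KNOWN_BAD = {
    hashlib.sha256(b"totally-legit-flash-update.exe").hexdigest():
        "Trojan.FakeUpdater",
}

def identify(file_bytes: bytes) -> str:
    """Hash the file and check it against the signature database."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return KNOWN_BAD.get(digest, "clean (or not yet catalogued)")

# A catalogued sample is identified instantly by its hash...
assert identify(b"totally-legit-flash-update.exe") == "Trojan.FakeUpdater"
# ...while anything not in the database falls through as presumed clean.
assert identify(b"vacation-photos.zip") == "clean (or not yet catalogued)"
```

The fall-through case is the important one: signature lookups handle the 99.9% of known junk automatically, which is exactly why the rare, uncatalogued nation-state tool is what slips past them, and why vendors hunting those outliers end up being the ones who discover new cyber weapons.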
