Archives For technology

[image: sleeping cell phone]

It seems that George Dvorsky and I will never see eye to eye on AI matters. We couldn’t agree on some key things when we were on two episodes of Science For The People when it was still called Skeptically Speaking, and after his recent attempt at dispelling popular myths about what artificial intelligence is and how it may endanger us, I don’t see a reason to break with tradition. It’s not that Dvorsky is completely wrong in what he says, but like many pundits fascinated with bleeding edge technology, he ascribes abilities and a certain sentience to computers that they simply don’t have, and borrows from vague Singularitarianism, which uses grandiose terms with seemingly no fixed definitions for what they mean. The result is a muddled list which has some valid points and does provide valuable information, though not for the reasons actually specified, as some fundamental problems are waved off as if they don’t matter. Articles like this are why I’m doing the open source AI project, which I swear is being worked on in my spare time, although that’s been a bit hard to come by as I was navigating a professional roller coaster recently. But while the pace of my code review has slowed, I still have time to be a proper AI skeptic.

The very first problem with Dvorsky’s attempt at myth busting comes when he tackles the very first “myth”: that we won’t create AI with human-like intelligence. His argument? We made machines that can beat humans at certain games and which can trade stocks faster than us. If that’s all there is to human intelligence, that’s pretty deflating. We’ve succeeded in writing some apps and neural networks which we trained to be extremely good at a task which requires a lot of repetition, and the strategies for which lie in very fixed domains where there are a few, really well defined correct answers, which is why we built computers in the first place. They automate repetitive tasks during which our attention and focus can drift and cause errors. So it’s not that surprising that we can build a search engine that can look up an answer faster than the typical human will remember it, or a computer that can play a board game by keeping track of enough probabilities with each move to beat a human champion. Make those machines do something a neural network in their software has not been trained to do and watch them fail. But a human is going to figure out the new task and train him or herself to do it until it’s second nature.
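
To make that concrete, here’s a minimal sketch, a hypothetical toy example in plain numpy rather than anything from the systems Dvorsky cites: a tiny neural network trained to approximate y = x² on inputs between 0 and 1 does fine inside that domain, then falls flat the moment it’s asked about inputs it never saw.

```python
# Hypothetical toy example: a small network interpolates well inside its
# training domain and extrapolates badly outside of it.
import numpy as np

rng = np.random.default_rng(42)

# Training data: the narrow, fixed domain the network is allowed to see.
x = rng.uniform(0.0, 1.0, (200, 1))
y = x ** 2

# One hidden tanh layer, trained with plain batch gradient descent.
w1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
w2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

for _ in range(5000):
    h = np.tanh(x @ w1 + b1)                 # forward pass
    pred = h @ w2 + b2
    err = pred - y                           # gradient of mean squared error
    g_w2 = h.T @ err / len(x); g_b2 = err.mean(0)
    g_h = err @ w2.T * (1 - h ** 2)          # backprop through tanh
    g_w1 = x.T @ g_h / len(x); g_b1 = g_h.mean(0)
    for p, g in ((w1, g_w1), (b1, g_b1), (w2, g_w2), (b2, g_b2)):
        p -= 0.5 * g                         # fixed learning rate

def predict(v):
    return (np.tanh(np.array([[v]]) @ w1 + b1) @ w2 + b2).item()

print("in-domain     x=0.5:", round(predict(0.5), 3), "(true 0.25)")  # close
print("out-of-domain x=3.0:", round(predict(3.0), 3), "(true 9.0)")   # way off
```

The network never “understood” squaring; it fit a curve over the sliver of the world it was shown, which is exactly why the game-playing and stock-trading systems excel at their one task and nothing else.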

For all the gung-ho quotes from equally enthusiastic laypeople with only tangential expertise in the subject matter, and the typical Singularitarian mantras that brains are just meat machines, throwing around the term “human-like intelligence” while scientists still struggle to define what it means to be intelligent in the first place is not even an argument. It’s basically a typical techie’s rough day on the job: listening to clients debate their big ideas, simply assuming that with enough elbow grease, what they want can be done, without realizing that their requests are only loosely tethered to reality and that they’re just regurgitating the promotional fluff they read on some tech blogs. And besides, none of the software Dvorsky so approvingly cites appeared ex nihilo; there were people who wrote it and tested it, so to say that software beat a person at a particular task isn’t even what happened. People wrote software to beat other people at certain tasks. All that’s happening with the AI part is that they used well understood math and data structures to avoid writing too much code, and let the software itself guess its way to better performance. To just neglect the programmers like that is like praising a puck for getting into a net past a goalie while forgetting to mention that oh yeah, there was a team that lined up the shot and got it in.
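
Here’s a minimal sketch of that “guess its way to better performance” idea, using simple random hill climbing rather than any particular production algorithm. Notice that humans define both the task and what counts as better before the machine “learns” anything; everything in this example is invented for illustration.

```python
# Hypothetical sketch: the machine blindly tweaks numbers and keeps a
# tweak only when a human-defined score says it helped.
import random

random.seed(7)
TARGET = [3.0, -1.5, 0.25]    # the "task", chosen by a person

def score(params):
    # Lower is better: squared distance from the human-defined target.
    return sum((p - t) ** 2 for p, t in zip(params, TARGET))

params = [0.0, 0.0, 0.0]
best = score(params)
for _ in range(10000):
    guess = [p + random.gauss(0, 0.1) for p in params]   # random tweak
    s = score(guess)
    if s < best:                                         # keep only improvements
        params, best = guess, s

print(params)   # ends up near TARGET, because a person defined "better"
```

Swap the toy scoring function for “won the board game” or “made money on the trade” and you have the outline of the systems being celebrated, with the programmers’ fingerprints on every piece.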

Because Dvorsky fails to get this fundamental part of where we are with AI, looking at fancy calculators and an advanced search engine, then imagining HAL 9000 and Skynet as the next logical steps for straightforward probabilistic algorithms, the rest of his myths are philosophical what-ifs instead of the definitive facts he presents them to be. Can someone write a dangerous AI that we might have to fear or that may turn against us? Sure. But will it be so smart that we’ll be unable to shut it down if we have to, as he claims? Probably not. Just like the next logical step for your first rocket to make it into orbit is not a fully functioning warp drive — which may or may not be feasible in the first place, and if it is, unlikely to be anything like shown in science fiction — an AI system today is on track to be a glorified calculator, search engine, and workflow supervisor. In terms of original and creative thought, it’s a tool to extend a human’s abilities by crunching the numbers on speculative ideas, but little else. There’s a reason why computer scientists are not writing philosophical treatises in droves about artificial intelligence co-existing with lesser things of flesh and bone, while pundits, futurists, and self-proclaimed AI experts churn out vast papers passionately debating the contents of vague PopSci Tech section articles, after all…

[image: woman with barcode]

If you live in the U.S. and still watch certain TV channels, you may be forgiven for thinking that if you don’t know your FICO score, or lack apps and services to notify you of every slight change within moments, you may as well give up on actually owning or renting anything without a massive pile of cash sitting in a bank. Cutting through the commercial hyperbole, there’s a bit of truth to that in a country where borrowing is high and saving is low. Lenders need an objective and quick way to figure out how likely you are to repay them, and one company called Fair Isaac has long claimed it owns an equation to predict exactly that, based on your history of making timely payments and other factors that seem important. The end result is a quick, three digit number that seems to speak volumes. But is it objective in an age where getting laid off as automation or outsourcing claims your job, or a dire medical problem, can instantly land you in a world of financial pain and ruin? Probably not. No matter how you look at it, the FICO score has some pretty significant shortcomings, but fixing them could actually get really, really ugly…
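
To see what owning such an equation implies, here’s a purely hypothetical sketch of a weighted scoring model. The real FICO formula is a trade secret; the weights below are only the rough category breakdown FICO has publicized, and the arithmetic mapping them onto a score is invented for illustration.

```python
# Hypothetical toy credit score: weighted factors mapped onto 300-850.
WEIGHTS = {
    "payment_history": 0.35,   # rough, publicized category weightings
    "amounts_owed":    0.30,
    "history_length":  0.15,
    "new_credit":      0.10,
    "credit_mix":      0.10,
}

def toy_score(factors):
    """Blend factor ratings (0.0 worst .. 1.0 best) into a 300-850 number."""
    blended = sum(WEIGHTS[k] * factors[k] for k in WEIGHTS)
    return round(300 + blended * 550)

steady = {"payment_history": 0.95, "amounts_owed": 0.8,
          "history_length": 0.6, "new_credit": 0.7, "credit_mix": 0.7}
# Same person after a layoff or medical bill wrecks their payment history:
battered = dict(steady, payment_history=0.40)

print(toy_score(steady))    # 741
print(toy_score(battered))  # 636 -- one bad stretch drags a third of the weight
```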

For a few years, credit rating agencies have been toying with the idea of using social media as an additional barometer of your creditworthiness, particularly Facebook and LinkedIn, trying to find a correlation between your online contacts and the odds of a default. In some cases, you can make fairly accurate predictions. A senior manager at a very large corporation whose contacts on professional social networks are all high powered business people, with a resume full of big numbers and grand accomplishments, is probably not going to stop paying for his new BMW, or buy a new house and skip town. But what about a hardworking college student with a couple of stoner friends who never amounted to much still listed in her Facebook contacts? You may as well flip a coin, because if you’re deciding the worth of a person only by the company he or she keeps, not only does that open the door to discrimination, it removes the applicant’s agency by holding friends’ failures, real or imagined, over this person’s head. Yes, this student may default and fall behind. But she could also be determined to build up a great credit score no matter the personal cost and pay in full, on time, every time, while working her way to adulthood.

Now, as scary as the attempts to base your credit rating on that of your friends sound, they’ve got nothing on China’s grand plan to develop a social score for its citizens, one that goes far beyond the humble creditworthiness rating and all the way into meddling in their personal lives and political beliefs. Not only do you need to have a great history of on time payments to qualify for loans or ownership of private property, but you must also demonstrate that you’re a productive citizen who is loyal to the party. Buying video games penalizes you while buying diapers rewards you. Your friends started posting sarcastic, Soviet-style jokes about the Communist Party? Well, you really didn’t want to buy a new house or get a new car, did you now? Oh, you did? Too bad. Probably shouldn’t be friends with unpatriotic dissidents then. You can see where this is going. Imagine a similar score in the U.S. used by the NSA and FBI to assign one’s likelihood of becoming some sort of criminal or terrorist, their less than airtight statistical models used to justify searches and seizures of random individuals whose personal choices and behavior matter less and less than the choices and behaviors of their social group. It’s like a dystopian sci-fi tale coming to life.

Really, there should be a limit to how much data we collect and use, and people should be allowed to opt out of collection processes they think can be abused. Maybe a credit rating agency does want to create a financial product for people who want to use their friends to vouch for them. It would be their choice to see how it pans out. But if it’s using the same kind of research on applicants for new lines of credit who have not consented to this process, it needs to be heavily punished, so that violating the rules costs much more than just complying with them. Just because we are fully capable of quickly and easily creating the tools for an Orwellian society doesn’t mean that we have to enable tyranny by algorithm and pretend that because computers are making decisions based on data they’re collecting, it’s all objective and above board. People program all of these sites, people collect and organize this information, and people write the algorithms that will crunch it and render a verdict. And people are often biased and hypocritically judgmental. If we let their biases hide in lines of code watching our every move and encouraging us to be little model citizens, like the Chinese plan does, the consequences will be extremely dire.

[image: designer v. developer]

After some huffing, puffing, and fussing around with GitHub pages and wikis, I can finally bring you the promised first installment of my play-along-at-home AI project, in which there’s no code to review just yet, but a high level explanation of how it will be implemented. It’s nothing fancy, but that’s kind of the point. Simple, easy to follow modules are easier to deal with and debug, so that’s the direction in which I’m headed: they’ll be snapped on top of cloud service interfaces which will provide the on-demand computing resources required as the system ramps up. There are also explanations for some of the choices I’m making when there are several decent implementation options for a particular feature set; some of these choices more or less come down to personal preference, while others have long-view reasons to definitely pick one option over another.
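
As a rough illustration of what I mean by simple modules snapped on top of cloud service interfaces, here’s a minimal sketch with entirely hypothetical names, not the project’s actual code: each module talks to an abstract compute backend, so swapping local execution for an on-demand cloud service means changing one line, not rewriting the module.

```python
# Hypothetical sketch of modules decoupled from their compute backends.
from abc import ABC, abstractmethod

class ComputeBackend(ABC):
    @abstractmethod
    def run(self, task_name: str, payload: dict) -> dict: ...

class LocalBackend(ComputeBackend):
    def run(self, task_name, payload):
        # Runs in-process; fine for development and debugging.
        return {"task": task_name, "result": sum(payload.get("values", []))}

class CloudBackend(ComputeBackend):
    def run(self, task_name, payload):
        # Placeholder: a real version would call a cloud API here, letting
        # the provider supply on-demand resources as the system ramps up.
        raise NotImplementedError("wire up a cloud provider of your choice")

class SummationModule:
    """A deliberately tiny module: one job, easy to follow and debug."""
    def __init__(self, backend: ComputeBackend):
        self.backend = backend

    def process(self, values):
        return self.backend.run("sum", {"values": list(values)})

module = SummationModule(LocalBackend())   # swap in CloudBackend() later
print(module.process([1, 2, 3]))           # {'task': 'sum', 'result': 6}
```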

In the next update, there will be database designs and SQL, which may look like overkill for a framework to run some ANNs, particularly when there are hundred line Python scripts that run them without a hiccup. But remember that for what’s being built, ANNs are just one component, so the overhead goes toward managing where the data lives and securing it, because if I’ve learned anything about security, it’s that if it’s not baked in from the start but layered on top after all the functionality has been completed, you end up with only one layer of defense that may be easily pierced by exploiting a vulnerability out of your control. Inputs may not get sanitized with proper care, your framework package for CSRF prevention might not have been updated, and without a security model to put up some roadblocks between a hacker and your data, you may as well have not bothered. Likewise, there’s going to be a fair amount of code and resources to define the ANNs’ inputs and outputs so we can actually harness them to do useful things.
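
As a tiny preview of what baking security in looks like at the lowest level, here’s a minimal sketch using Python’s built-in sqlite3 and a hypothetical table: parameterized queries treat user input as data rather than as SQL, so an injection attempt accomplishes nothing even if some outer layer failed to sanitize it.

```python
# Minimal sketch of parameterized queries as an inner layer of defense.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE networks (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO networks (name) VALUES ('vision-ann')")

def find_network(user_supplied_name):
    # The ? placeholder means hostile input like "x' OR '1'='1" is
    # treated as a literal string, never as executable SQL.
    cur = conn.execute("SELECT id, name FROM networks WHERE name = ?",
                       (user_supplied_name,))
    return cur.fetchall()

print(find_network("vision-ann"))      # [(1, 'vision-ann')]
print(find_network("x' OR '1'='1"))    # [] -- the injection goes nowhere
```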

There’s nothing more wasteful than reinventing the wheel. We’ve been using wheels for 5,000 or so years, and pretty much everything we could’ve done to them, we have, so when we’re pretty sure we’ve found the optimal way of doing something, we invoke this expression to mark a totally useless, repetitive endeavor. But here’s the thing about thinking you’re done perfecting even a simple design: you get stuck doing things one way for so long that you lose the ability to see completely new approaches that can really improve something you thought was ideal, and yes, that literally includes the wheels on a car, as a recent concept video from Goodyear illustrates. It shows a 3D printed spherical wheel for autonomous vehicles that provides better grip and traction, and makes the dreaded task of parallel parking a breeze because cars can just effortlessly move sideways into their spots. If I saw those cars on the road in Santa Monica later today, my only complaint would be that they were a day late in getting there. Really, the whole concept seems that obviously superior.

Again, this is what you get when you approach a seemingly solved problem with a fresh look: a completely new solution that could prove better than today’s supposedly optimal one. It’s part of the reason why STEM students perform the same experiments and try to build the same structures — physical and digital — as the students before them. Not only do they learn how the solution they’ll typically use evolved, but there’s always the chance that someone for whom the problem in question is still new and the solution is a blank slate will spot a new way to handle it and, in the process, create a new standard solution. And while living on the bleeding edge is an exciting prospect and our knowledge grows on the foundations laid by previous generations, it really isn’t a waste of time and money to inspect those foundations and see if we can replace a few sections with something better and sturdier, maybe leading to new discoveries in sub-branches that seemed to have hit a dead end. You might end up with the same exact answers 99.9% of the time, but the 0.1% of the time you come up with something different may be more than worth it, giving you a stronger, more nimble wheel, literally and metaphorically…

[image: curious bot]

Defense contractor Raytheon is working on robots that can talk to each other by tweaking how machine learning is commonly done. Out with top-down algorithmic instructions, in with neural network collaboration and delegation across numerous machines. Personally, I think this is not just a great idea but a fantastic one, so much so that I ended up writing my thesis on it and had some designs and code lying around for a proof of concept. Sadly, it’s been a few years and I got side-tracked by work, my eventual cross-country move, and other pedestrian concerns. But all that time, this idea just kept nagging me, and so after reading about Raytheon’s thoughts on networked robotics, I decided to dust off my old project and build it anew with modern tools in a series of posts, laying out not just the core concepts but the details of the implementation. Yes, there’s going to be a lot of in-depth discussion about code, but I’ll do my best to keep it easy to follow and discuss, whether you’re a seasoned professional coder, or just byte-curious.

All right, all right, that’s enough with all the groaning, I design and write software for a living, not pack comedy clubs in West Hollywood. And before you write any software, you have to lay out a few basic goals for what you want it to do. First and foremost, this project should be flexible and easily expandable, because all we know is that we’re going to have neural networks that will run machines with inputs and outputs, and we want to tie them to a common terminology we can invoke when calling them. Secondly, it should be easily scalable and ready for the cloud, where all it takes to ramp it up is tweaking a few settings on the administration screen. Thirdly, it should be capable of accepting and executing custom rules for making sure the digital representations of the robots in the system are valid on the fly. And finally, it should allow for custom interfaces to different machines inhabiting the real world, or at least get close enough to providing a generic way to talk to real world entities. Sounds pretty ambitious, I know, but hey, if you’re going to be dealing with artificial intelligence, why not try to see just how far you can take an idea?
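
To sketch how the second and third goals could hang together, here’s a hypothetical registry where custom validation rules are added at runtime and checked whenever a robot’s digital representation enters the system. Every name here is a placeholder for illustration, not the project’s actual design.

```python
# Hypothetical sketch: on-the-fly validation of robot representations.
from dataclasses import dataclass, field

@dataclass
class RobotRecord:
    name: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

class RobotRegistry:
    def __init__(self):
        self.rules = []     # custom validation rules, registered at runtime
        self.robots = {}

    def add_rule(self, description, check):
        self.rules.append((description, check))

    def register(self, robot):
        failures = [d for d, check in self.rules if not check(robot)]
        if failures:
            raise ValueError(f"{robot.name} rejected: {failures}")
        self.robots[robot.name] = robot

registry = RobotRegistry()
registry.add_rule("must have at least one input",  lambda r: len(r.inputs) > 0)
registry.add_rule("must have at least one output", lambda r: len(r.outputs) > 0)

registry.register(RobotRecord("rover-1", ["camera"], ["wheels"]))  # passes
try:
    registry.register(RobotRecord("brick-1", [], []))              # fails
except ValueError as err:
    print(err)
```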

Before we proceed though, I’d like to tackle the obvious question of why one would want to dive into a project like this on a skeptical pop sci blog. Well, for the last few years, artificial intelligence has figured in popular science news as some sort of dark magic able to create utopias and ruin economies by making nearly half of all jobs obsolete in mere decades, courtesy of writers who can’t fact check the claims they quote and use to build elaborate scenarios of the future. But even if you don’t dive into the code and experiment with it yourself, you’ll get a good idea of what AI actually is and isn’t. Then, the next time you read some clickbait laying out preposterous claims about how robots will take over the world and enslave us as we remain oblivious to it, you can recall that AI isn’t a digital ghost from sci-fi comic books, waiting to turn on a humanity it comes to resent like the hateful supercomputer of I Have No Mouth and I Must Scream, but something you’ve seen diagrammed and rendered in code you can run on your very own computer, thanks to an odd little pop sci blog, and feel accordingly unimpressed with the cheap sensationalism. So with that in mind, here’s your chance to stop worrying and learn to understand your future machine overlords.

Here’s how this project is going to work. Each new post in this series is going to point to a GitHub wiki entry with code and details, keeping the code and in-depth analysis in the same place while the posts here give the high level overview. This way, if you prefer to stick to very high level overviews, that’s what you get to see first, because as I’ve been told by so many bloggers who specialize in popular science and technology, big blocks of math and code are guaranteed to scare off an audience. But if the details intrigue you and you want a better look under the hood, it’s only a link away, and even if it looks scary at first, I really would encourage you to click on it and try to see how much you can follow along. Meanwhile, you’ll still get your dose of skeptical and scientific content in between, so don’t think Weird Things is about to turn into a comp sci blog the whole time this project is underway. After all, after long days of dealing with code and architectural designs, even someone who can’t imagine doing anything else will need a break from talking about computers and writing even more code for public review…

[image: x47b takeoff]

The peaceniks at Amnesty International have been worried about killer robots for a while, so as the international community convenes in Geneva to talk about weapons of the future, they once again launched a media blitz about what they see as an urgent need to ban killer robots. In the future they envision, merciless killer bots mow down soldiers and civilians alike with virtually no human intervention, kind of like in the opening scene of the Robocop remake. In an age when vast global trade empires with far too much to lose by fighting each other use their soldiers and war machines to tackle far-flung “low intensity conflicts,” in military wonk parlance, where telling a civilian apart from a combatant is no easy feat, Amnesty International raises an important issue to consider. If we build robots to kill, there’s bound to be a time when they’ll make a decision in error and end someone’s life when they shouldn’t have. Who will be held responsible? Was it a bug or a feature that it killed who it did? Could we prevent similar incidents in the future?

Having seen machines take on the role of perfect bad guys in countless sci-fi tales, I can’t shake the feeling that a big part of the objections to autonomous armed robots comes from the innate anxiety at the idea of being killed because some lines of code ruled you a target. It’s an uneasy feeling even for someone who works with computers every day. Algorithms are way too often buggy and screw up edge cases way too easily. Programmers rushing to meet a hard deadline will sometimes cut corners to make something work, then never go back to fix it. They mean to, but as new projects start and time gets away from them, an update breaks their code and bugs emerge seemingly out of nowhere. Ask a roomful of programmers to raise their hands if they’ve done this at least a few times in their careers, and almost all of them will. The few who don’t are lying. When this is a bug in a game or a mobile app, it’s seldom a big deal. When it’s code deployed in an active war zone, it’s going to become a major problem very quickly.
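
For a trivial, hypothetical example of the kind of corner that gets cut, consider a function that averages sensor readings and was never tested against the one input nobody thought to try:

```python
# Hypothetical edge-case bug: fine on every "normal" input, fatal on [].
def average_rushed(readings):
    return sum(readings) / len(readings)     # divides by zero on empty input

def average_fixed(readings):
    if not readings:                         # the edge case, handled
        return 0.0
    return sum(readings) / len(readings)

print(average_fixed([]))                     # 0.0
try:
    print(average_rushed([]))
except ZeroDivisionError as err:
    print("rushed version crashed:", err)    # in a weapon system, far worse
```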

Even worse, imagine bugs in the robots’ security systems. Shoddy encryption, or the lack of it, was once exploited to capture live video feeds from drones on patrol. Poorly secured APIs meant to talk to the robot mid-action could be hijacked to turn the killer bot against its handlers, and as seen in pretty much every movie ever, this turn of events never has a good ending. Even good, secure APIs might not stay that way, because cybersecurity is a very lopsided game in which all the cards are heavily stacked in the hackers’ favor. Security experts need to execute perfectly on every patch, update, and code change to keep their machines safe. Hackers only need to take advantage of a single slip-up or bug to gain access and do their dirty work. This is why security for killer robots’ systems could never be perfect, and the only thing their creators could do is make the machines extremely hard to hack with strict coding standards and constantly updated secure connections to their base stations, and include a way to quickly reset or destroy them when they do get hacked.
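
One of those hardening measures might look something like this minimal sketch: every command to the machine is signed with a key shared with the base station, so an attacker who can reach the API but lacks the key can’t issue orders. This is a simplification for illustration only; a real system would also need replay protection, key rotation, and that reset-or-destroy failsafe.

```python
# Minimal sketch of authenticating commands with an HMAC signature.
import hashlib
import hmac

SHARED_KEY = b"base-station-secret"    # hypothetical, provisioned out-of-band

def sign(command: bytes) -> str:
    return hmac.new(SHARED_KEY, command, hashlib.sha256).hexdigest()

def accept(command: bytes, signature: str) -> bool:
    # Constant-time comparison, so response timing can't leak the signature.
    return hmac.compare_digest(sign(command), signature)

cmd = b"RETURN_TO_BASE"
good_sig = sign(cmd)
print(accept(cmd, good_sig))           # True: signed by the base station
print(accept(b"OPEN_FIRE", good_sig))  # False: signature doesn't match
```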

Still, all of this isn’t necessarily an argument against killer robots. It’s a reminder of how serious the challenges of making them are, and they had better be heeded, because no matter how much it may pain pacifist groups and think tanks, these weapons are coming. While they’ll inevitably kill civilians in war zones, in the mind of a general, so do flesh and blood soldiers, and if those well trained humans, with all the empathy and complex reasoning skills being human entails, cannot get it right all the time, what hope do robots have? Plus, to paraphrase the late General Patton, you don’t win wars by dying for your country but by making the other side die for theirs, and what better way to do that than by substituting your live troops with machinery you don’t mind losing nearly as much in combat? I’ve covered the “ideal” scenario for how all this would work back in the early days of this blog, and in subsequent years, the technology to make it all possible isn’t just growing ever more advanced, it’s practically already here. From a military standpoint, it would make little sense to just throw it all away to continue risking human lives in war zones.

And here’s another thing to think about when envisioning a world where killer robots making life or death decisions dominate the battlefield. Only advanced countries could afford to build robot armies and deploy them instead of humans in conflict. Third World states would have no choice but to rely on flesh and blood soldiers, meaning that one side loses thousands of lives fighting a vast, expendable metal swarm armed with high tech weaponry able to outflank any human-held position before its defenders even have time to react. How easy would it be to start wars when soldiers no longer need to be put at risk and the other side either won’t have good enough robots or must put humans on the front lines? If today all it takes to send thousands into combat is saying that they volunteered and their sacrifice won’t be in vain, how quickly will future chicken hawks vote to send the killer bots to settle disputes, often in nations where only humans will be capable of fighting back, all but assuring the robots’ swift tactical victory?

[image: voodoo doll]

In another edition of people-can-be-awful news, following last week’s post about why it’s indeed best not to feed trolls, it’s time to talk about online harassment and what to do about it. It seems that some 72 social activist groups are asking the Department of Education to police what they see as harassing and hate speech on a geo-fenced messaging app, arguing that because said geo-fence includes college campuses, it’s the colleges’ job to deal with it. Well, I suppose it must be the start of windmill tilting season somewhere, and now a government agency will have to do something to appease well-intentioned activists in whose minds computers are magic that, with the right lines of code, can make racists, sexists, and stalkers go away. Except all of them will simply reappear on another social media platform and keep being terrible people, since the only thing censoring them changes is the venue on which they’ll spew their hatred or harass their victims. Of course this is to be expected, because the internet is built to work like that.

Now look, I completely understand how unpleasant it is to have terrible things said about you or done to you on the web, and how it affects you in real life. As a techie who lives on the web, I’ve had these sorts of things happen to me firsthand. However, the same part of me that knows full well that the internet is in fact serious business, contrary to the old joke, also understands that a genuine attempt to police it is doomed to failure. Since the communication protocols used by all software on the internet are built to be extremely dynamic and robust, there’s always a way to circumvent censorship, confuse tracking, and defeat blacklists. This is what happens when a group of scientists builds a network to share classified information. Like it or not, as long as there is electricity and an internet connection, people will get online, and some of these people will be terrible. For all the great things the internet brought us, it also gave us a really good look at how many people are mediocre and hateful, in stark contrast to most techno-utopian dreams.

So keeping in mind that some denizens of the web will always be awful human beings who give exactly zero shits about anyone else or what effect their invective has on others, and that there will never be a social media platform free of them no matter how hard we try, what should their targets do about it? Well, certainly not ask a government agency to step in. With social media’s reach and influence as powerful as it is today, and the fact that it’s free to use, we’ve gotten lost in dreamy manifestos about access to Twitter, Facebook, Snapchat, and yes, the dreaded Yik Yak, being a fundamental human right to speak truth to power and find a supportive community. But allowing free and unlimited use of social media is not some sort of internet mandate. It’s run by private companies, many of them not very profitable, hoping to create an ecosystem in which a few ads or add-on services will make them some money by being middlemen in your everyday interactions with your meatspace and internet friends. If we stop using these services when the users we’re dealing with through them are being horrible to us, we do real damage.

But wait a minute, isn’t refusing to use the social media platform on which you’ve been hit with waves and waves of hate speech, harassment, and libel just letting the trolls win? In a way, maybe. At the same time though, their victory will leave them simply talking to other trolls with whom pretty much no one wants to deal, including the company that runs the platform. If Yik Yak develops a reputation as the social app where you go to get abused, who will want to use it? And if no one wants to use it, what reason is there for the company to waste millions giving racist, misogynist, and bigoted trolls their own little social network? Consider the case of Chatroulette. Started with the intent of giving random internet users a face to go with a screen name and connecting them with people they’d never otherwise meet, the site was almost destroyed by the sheer amount of male nudity. Way too many users had negative experiences and never logged on again, associating it with crude, gratuitous nudity, so much so that it’s still shorthand for being surprised by an unwelcome erect penis on cam. Even after installing filters and controls banning tens of thousands of users every day, it’s still not the site it used to be, or that its creator actually envisioned it becoming.

With that in mind, why try to compel politicians and bureaucrats to unmask and prosecute users for saying offensive things on the web, many of which will no doubt be found to be protected by their freedom of speech rights? That’s right, remember that free speech doesn’t mean freedom to say only things you personally approve of, or find tolerable. Considering that hate speech is legal, having slurs or rumors about you in your feed is very unlikely to be a criminal offense. You can be far more effective by doing nothing and letting the trolls fester, their favorite social platform for abusing others becoming their own personal hell where other trolls, out of targets, turn on them to get their kicks. Sure, many trolls just do it for the lulz with few hard feelings towards you. Until it’s them being doxxed, or flooded with unwanted pizzas, or swatted, or seeing their nudes on a site for other trolls’ ridicule. No matter how hard you try, they won’t be any less awful to you, so let them be awful to each other until they kill the community that allows them to flourish and the company that created and maintained it, and allow their innate awfulness to be their undoing.

[image: fable troll]

Every internet community has them, and many have been killed by them. They crave two things most of all: attention and a platform to broadcast whatever comes to mind, and every time they appear, you can safely bet that someone will admonish users engaging with them not to feed a troll, as per the common axiom. But what if, just to propose something crazy here, there are reasons to talk to them, downvote them, and otherwise show your displeasure, because an appropriate amount of push back will finally solidify the message that they’re not wanted? They could either leave or give up on their trollish ways. Either way, it would be an improvement. So, following this hypothesis, a small group at a Bay Area college collected 42 million comments from huge gaming, political, and news sites, with a grand total of 114 million votes spanning as many as 1.8 million unique users, to figure out once and for all if you can downvote trolls into oblivion or force them to productively contribute. Unfortunately, the answer is a pretty definitive no.

After creating an artificial neural network to gauge whether comments deserved an upvote or a downvote, using the actual discussion threads as a training set, the researchers followed users’ comment histories to see how feedback from others affected them over time. They found that users who were ignored simply stopped participating, which seems quite logical. It’s simply a waste of time and effort to shout into the digital aether with no feedback. But when the computer followed the trolls, the data showed that even withering negativity had pretty much no effect on what they posted or how much. Their comments didn’t change, and they did not seem to care at all about the community’s opinions of them. If they wanted to antagonize people, they kept right on doing it. True, we could say that not every person who provokes a flood of negativity in response is a troll. Some of the political sites used in the sample are extremely partisan, so any deviation from the party line can provoke a dog pile. But by the same token, while not every maligned comment is trollish, most trollish comments are maligned, so the idea still holds.
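
For a feel of the setup, here’s a toy sketch of the same idea with invented data and deliberately crude features. The actual study worked from real comment threads, far richer text signals, and millions of examples; nothing below comes from the paper itself.

```python
# Toy sketch: logistic regression predicting net upvotes from crude features.
import numpy as np

rng = np.random.default_rng(0)

# Features per comment: [all-caps ratio, insult-word score, length/1000]
X = rng.random((500, 3))
# Invented ground truth: shouting and insults attract downvotes.
y = (X[:, 0] * 0.5 + X[:, 1] * 2.0 < 1.0).astype(float)   # 1 = net upvoted

w = np.zeros(3); b = 0.0
for _ in range(2000):                        # plain gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * X.T @ g / len(y)
    b -= 0.1 * g.mean()

troll = np.array([0.9, 0.8, 0.1])            # loud, insulting, short
polite = np.array([0.1, 0.0, 0.3])
for c in (troll, polite):
    print(round(1 / (1 + np.exp(-(c @ w + b))), 3))   # predicted upvote odds
```

Following a user over time then just means running their comment history through the model and watching whether the predicted scores drift, which is how the researchers could track trolls at a scale no human moderator could match.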

With this in mind, how do we police trolls? Not feeding them does seem to be the best strategy, but considering how many of us suffer from SIWOTI syndrome — and yes, I’m not an exception to this by any stretch of the imagination since half this blog is a manifestation of it — and will not let trollish things go, it’s not always feasible. This means that shadow banning is actually by far the most effective technique to deal with problematic users. Because they won’t know they’re in their own little sandbox invisible to everyone else, their attempts to garner attention are always ignored, so they get bored and leave. Of course this method isn’t foolproof, but a well designed and run community will quickly channel even repeat offenders into the shadow banned abyss to be alone with their meanderings. In short, according to science, the best thing we can do to put a stop to trolling is to aggressively ignore the trolls, as paradoxical as that sounds at first blush.
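
For anyone curious what aggressively ignoring someone looks like in code, here’s a minimal hypothetical sketch: the banned user’s comments are stored and echoed back to them as usual, but silently filtered from everyone else’s feed.

```python
# Hypothetical sketch of shadow banning: the troll sees their own posts,
# nobody else ever does.
comments = []
shadow_banned = {"troll_42"}

def post(author, text):
    comments.append((author, text))

def visible_feed(viewer):
    # Normal comments are visible to all; a shadow banned author's
    # comments are visible only to that author.
    return [(a, t) for a, t in comments
            if a not in shadow_banned or a == viewer]

post("alice", "Interesting study!")
post("troll_42", "you are all sheep")
print(visible_feed("troll_42"))   # both posts: the troll suspects nothing
print(visible_feed("alice"))      # only alice's post: the abyss in action
```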

[image: amazon boxes]

It’s been a few months since the NYT savaged Amazon’s work environment in the national press, to several stammering professions of utter bewilderment from Bezos. We’ve heard little since, but just as it seemed that most of the unpleasant attention had died down, something bizarre happened to bring the article back into the spotlight. Amazon’s new chief of PR decided to very publicly hit the newspaper with detailed criticisms of its coverage as if the story were still fresh. As you may expect, the head editor of the Times did not take it lightly and posted very stern rebuttals to the rebuttals, and the two are likely to go back and forth on the topic for a while, while the rest of us are left to figure out exactly how bad a place Amazon is to work. Personally, I have not heard any good things about working there, and the consensus I’ve found basically says that if you’re willing to bite the bullet and suffer for two years, you’ll come out with a resume booster to find a job where you can actually enjoy what you do while working saner hours.

Amusingly enough, many internet commenters reacted to these sorts of discussions with close to the same scorn they reserve for the wealthy who feel they need affluenza therapy. Does it really matter whether 20-somethings making six figures are or aren’t happy with how their boss treats them? They’re making bank while people who loathe their jobs, and whose bosses are so cruel it seems like there’s a management competition in sadism, work sunup to sundown for a wage that still makes them prioritize rent and food over long overdue basic car maintenance. In some ways, I can understand that attitude. IT definitely pays well, and in many places there are so many jobs for someone with a computer science degree and a few years of experience that receiving multiple offers in the same day is not uncommon. As they say in Eastern Europe, it would be a sin to complain about a fruitful computer science career, especially when your job title has the words “senior” or “lead” in it. But that said, I will now proceed to commit that exact sin.

For many programmers, insane hours aren’t just expected, they’re required. If you don’t put in your eight to ten hours a day, then go home and spend another four to five hours studying during your first few years on the job, you’re going to struggle and find that your contract isn’t renewed. The lack of sleep and subsistence on caffeine, adrenaline, and electronic music are not just badges of honor, but the price of admission to the club. And now, on top of working around the clock, a lot of employers want to know what code you’re publishing in open source repositories, and to what programming groups you belong. You’re expected to live, sleep, breathe, eat, and cough comp sci to have a fruitful career that allows you to advance past the sweatshop setting. Suffer through it with a stiff upper lip and you’ll be given a reward. More work. But in a cozy office with snacks, game rooms, free coffee and even booze — all to keep you in the office longer — along with at least some creative freedom about how to set up the code structure for your project.

Just like doctors, lawyers, and architects, techies have to run a professional gauntlet before the salary fairy finally deems you worthy, waves her wand, and puts a smile on your face when you see your paycheck, along with the money you saved while spending all your time at work. That’s your reward for all the blood, sweat, and tears. And trust me, when you see the complex pieces of code you wrote roar to life and be relied on by thousands of people, that’s more or less the exact moment you’ll either realize it was all totally worth every minute of frustration and exhaustion and you’re in love with what you do, or that the people who just pulled this off only to celebrate by doing it all over again must be completely insane, and should be swiftly committed to the nearest mental health facility. If it sounds like IT is very pro-hazing, it is, because we want to ensure that those willing to put in the hard work, and who have the tenacity to solve problems that seem like a real life hex placed on machinery by a dark wizard, are the ones who get rewarded, not people whose only job skill is to show up on time and look busy for enough of the day.

And that brings us back to Amazon. Since a lot of programmers expect a long grind until they land that coveted spot in a startup-like atmosphere, there are a lot of companies which gleefully abuse this expectation to run modern day white collar sweatshops. You’re shoved in a cubicle, assigned a mountain of tasks, and told to hurry up. If you have a technical boss, all he wants to know is when the code will be finished. If you have a non-technical boss, he’ll watch you for signs of slacking off so he can have a disciplinary talk with you, because, unable to manage the product, he manages the people. And after being whipped into a crazy, unsustainable pace, you deliver someone else’s vision, then are told to do the same thing again even faster. This is not only how all the stories the NYT quoted paint Amazon, it’s exactly how Amazon, Microsoft, IBM, and IT at large banks and insurance companies work: by the sweatshop system. Working for them is just one long career-beginning hazing that never really ends, and most IT people simply accept it as the way their world works, sharing their time at a sweatshop as a battlefield story.

We are not upset about it; we just know that companies like Amazon only care about speed and scale, and can afford the golden shackles with which to chain enough warm bodies to computers to crank out the required code, and we make our employment decisions with this in mind. For many techies, a company that will chew them up and spit them out, but looks good to one of the countless tech recruiters out there when highlighted in an online resume, is a means to the kind of job they really want. Sure, you’ll find stories of programmers rebelling because they can’t wear jeans and t-shirts to the office, or tales of on-site catered meals on demand and massages, but that’s a tiny minority of all techies, primarily in California’s tech hubs. Most programmers wear a selection of outfits best fit for Jake from State Farm and spend their days in a cube farm, while game rooms with pool tables, consoles, and free booze for coders whose work isn’t just acknowledged in passing, like a long lost uncle’s stint in jail, are things they read about between coding sessions. To them, Amazon isn’t a particularly cruel or bruising employer. It’s a typical one.

[image: math is logical]

When you live in a world filled with technology, you’re living with the products of millions of lines of code, both low and high level. There’s code in your car’s digital controls and in all your appliances, and sprawling software systems, with which yours truly has more than just a passing familiarity, are more often than not behind virtually every decision made about you by banks, potential bosses, hospitals, and even law enforcement. And it’s that last decision maker that warrants the highest scrutiny and the most worry, because proprietary code is making decisions that can very literally end your life without actually being audited and examined for potential flaws. Buggy software in forensic labs means that actual criminals may go free while innocent bystanders are sentenced to decades, if not life, in jail, or even to death row, so criminal defense attorneys are now arguing that putting evidence in a black box to get a result is absurd, and want a real audit of at least one company’s software. Sadly, their requests have so far been denied by the courts for a really terrible reason: that the company is allowed to protect its code from the competition.

Instead of opening up its source code, the company in question, Cybergenetics, simply says its methods are mathematically sound and peer reviewed, so that should be the end of the discussion as far as justice is concerned. So far, the courts seem to agree, arguing that revealing the code will force the company to reveal the trade secrets it’s entitled to keep. And while it’s unlikely that Cybergenetics is doing anything willfully malicious or avoiding an audit for some sort of sinister reason, the logic of saying that because its methodology seems sound, the code implementing it should be beyond reproach, is fatally flawed. Just because you know a great deal about how something should be done doesn’t mean that you won’t make a mistake, one that may completely undermine your entire operation. Just consider the Heartbleed bug in the open source OpenSSL library. Even though anyone could’ve reviewed the code, a bug undermining the very security the software was supposed to offer was missed for years, despite the fact that the methodology behind OpenSSL’s approach to security was quite mathematically sound.
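
To illustrate how sound math can coexist with broken code, here’s a deliberately contrived example. The statistic is textbook-simple, the fraction of matching positions between two DNA profiles, yet an off-by-one error in the loop quietly ignores the final marker, and no peer review of the methodology alone would ever see it.

```python
# Contrived example: correct method on paper, wrong answer in code.
def match_fraction_buggy(profile_a, profile_b):
    matches = 0
    for i in range(len(profile_a) - 1):   # BUG: skips the last position
        if profile_a[i] == profile_b[i]:
            matches += 1
    return matches / len(profile_a)

def match_fraction_correct(profile_a, profile_b):
    return sum(a == b for a, b in zip(profile_a, profile_b)) / len(profile_a)

profile = "AGTCCA"
print(match_fraction_buggy(profile, profile))    # 0.833... for IDENTICAL profiles
print(match_fraction_correct(profile, profile))  # 1.0, what the math promised
```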

So what could Cybergenetics not want to share with the world? Well, knowing what I’ve had the chance to learn about code meant to process DNA sequences, I can offer several educated guesses. One of the most problematic things about processing genetic data is quantity. It simply takes a lot of time and processing power to accurately read and compare DNA sequences, and that means a lot of money goes solely to letting your computers crunch data. The faster you can read and compare genetic data, the lower your customers’ costs, the more orders you can take and fulfill on time, and the higher your profit margins. What the code in question could reveal is how its programmers are trying to optimize it, tweaking things like data types, memory usage, and mathematical shortcuts to get better performance out of it. All of these are perfectly valid trade secrets, and knowing how they do what they do could easily give the competition a very real leg up on developing even faster and better algorithms. But these optimizations are also a perfect place in the code for evidence-compromising bugs to hide. It’s a real conundrum.
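
As a hypothetical example of the sort of optimization worth guarding, and of where a bug could lurk inside it, consider packing DNA bases into two bits apiece so whole sequences compare as single integers instead of character by character:

```python
# Hypothetical sketch: 2-bit packing of DNA bases for fast comparison.
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack(seq: str) -> int:
    bits = 0
    for base in seq:
        bits = (bits << 2) | CODE[base]   # 2 bits per base
    return bits

def identical(seq_a: str, seq_b: str) -> bool:
    # One integer comparison instead of a per-character loop.
    return len(seq_a) == len(seq_b) and pack(seq_a) == pack(seq_b)

print(identical("ACGT", "ACGT"))   # True
print(identical("ACGT", "ACGA"))   # False
# The hazard: drop the length check and "AAC" matches "C", because
# leading A's pack to leading zeros. Exactly the kind of subtle,
# evidence-compromising bug a code audit would catch and a review of
# the underlying math never would.
```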

It’s one thing if you’re running a company which provides advanced data warehousing or code obfuscation services, where a bug in your code doesn’t result in someone going to jail. But if a wrong result on your end can cost even one innocent person a quarter century behind bars, an argument centered around your financial viability as a business just doesn’t cut it. Perhaps the patent system could help keep this software safe from being pilfered by competitors who couldn’t compete otherwise, while still keeping the code accessible and easy for the relevant experts to review. Otherwise, if we let commercial considerations dictate how we review one of the most important types of forensic evidence, criminal defense attorneys have an easy way to do what they do best and raise reasonable doubt by repeating that the method of matching is top secret and barred from review solely to protect the company’s revenue stream. Or they can ask the jury how they would feel if an algorithm no one is allowed to review, lest it compromise its creators’ bank accounts, decided their ultimate fate in a complicated criminal case.