Archives For technology

x47b takeoff

The peaceniks at Amnesty International have been worried about killer robots for a while, so as the international community convenes in Geneva to talk about weapons of the future, they once again launched a media blitz about what they see as an urgent need to ban killer robots. In the future they envision, merciless killer bots mow down soldiers and civilians alike with virtually no human intervention, kind of like in the opening scene of the Robocop remake. In an age when vast global trade empires with far too much to lose by fighting each other instead use their soldiers and war machines to tackle far-flung low intensity conflicts, as military wonks call them, where telling a civilian apart from a combatant is no easy feat, Amnesty International raises an important issue to consider. If we build robots to kill, there’s bound to be a time when they’ll make a decision in error and end someone’s life when they shouldn’t have. Who will be held responsible? Was it a bug or a feature that the robot killed who it did? Could we prevent similar incidents in the future?

Having seen machines take on the role of perfect bad guys in countless sci-fi tales, I can’t shake the feeling that a big part of the objections to autonomous armed robots comes from the innate anxiety at the idea of being killed because some lines of code ruled you a target. It’s an uneasy feeling even for someone who works with computers every day. Algorithms are way too often buggy and screw up edge cases way too easily. Programmers rushing to meet a hard deadline will sometimes cut corners to make something work, then never go back to fix it. They mean to, but as new projects start and time gets away from them, an update breaks their code and bugs emerge seemingly out of nowhere. Ask a roomful of programmers to raise their hands if they’ve done this at least a few times in their careers, and almost all of them will. The few who don’t are lying. When this is a bug in a game or a mobile app, it’s seldom a big deal. When it’s code deployed in an active war zone, it’s going to become a major problem very quickly.

Even worse, imagine bugs in the robots’ security systems. Shoddy encryption, or the lack of it, was once exploited to capture live video feeds from drones on patrol. Poorly secured APIs meant to talk to the robot mid-action could be hijacked to turn the killer bot against its handlers, and as seen in pretty much every movie ever, this turn of events never has a good ending. Even good, secure APIs might not stay that way because cybersecurity is a very lopsided game in which all the cards are heavily stacked in the hackers’ favor. Security experts need to execute perfectly on every patch, update, and code change to keep their machines safe. Hackers only need to take advantage of a single slip-up or bug to gain access and do their dirty work. This is why security for killer robots’ systems could never be perfect, and the only things its creators could do are make the machine extremely hard to hack with rigorously audited code and constantly updated secure connections to its base station, and include a way to quickly reset or destroy it when it does get hacked.
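
To make that asymmetry a little more concrete, here’s a minimal sketch in Python of the kind of message authentication a robot’s command link would need at the bare minimum. Everything here is a hypothetical illustration, not any real system’s design; the point is how many small details have to be exactly right, every time, for the defenders to win.

```python
import hmac
import hashlib
import os
import time

# Hypothetical shared secret, provisioned to both the robot and its
# base station before deployment. Leak this one key and everything
# built on top of it fails at once -- the defender's "perfect game."
SECRET_KEY = os.urandom(32)

def sign_command(command: bytes, key: bytes = SECRET_KEY) -> bytes:
    """Attach a timestamp and an HMAC tag to an outgoing command."""
    stamped = str(time.time()).encode() + b"|" + command
    tag = hmac.new(key, stamped, hashlib.sha256).hexdigest().encode()
    return tag + b"|" + stamped

def verify_command(message: bytes, key: bytes = SECRET_KEY,
                   max_age: float = 5.0) -> bytes | None:
    """Return the command if its tag and timestamp check out, else None."""
    try:
        tag, timestamp, command = message.split(b"|", 2)
    except ValueError:
        return None  # malformed traffic is dropped outright
    expected = hmac.new(key, timestamp + b"|" + command,
                        hashlib.sha256).hexdigest().encode()
    # compare_digest avoids timing side channels an attacker could exploit
    if not hmac.compare_digest(tag, expected):
        return None
    if time.time() - float(timestamp) > max_age:
        return None  # stale message: a crude guard against replays
    return command
```

And even this toy scheme only narrows the attack surface: leak the key, skew the clocks, or botch a single update on either end, and the whole thing unravels, which is precisely the lopsided game described above.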

Still, all of this isn’t necessarily an argument against killer robots. It’s a reminder of how serious the challenges of making them are, and those challenges had better be heeded because no matter how much it may pain pacifist groups and think tanks, these weapons are coming. While they’ll inevitably kill civilians in war zones, in the mind of a general, so do flesh and blood soldiers, and if well trained humans with all the empathy and complex reasoning skills being human entails cannot get it right all the time, what hope do robots have? Plus, to paraphrase the late General Patton, you don’t win wars by dying for your country but by making someone else die for theirs, and what better way to do that than by substituting machinery you don’t mind losing nearly as much for your live troops in combat? I covered the “ideal” scenario for how all this would work back in the early days of this blog, and in subsequent years, the technology to make it all possible isn’t just growing ever more advanced, it’s practically already here. From a military standpoint, it would make little sense to throw it all away just to keep risking human lives in war zones.

And here’s another thing to think about when envisioning a world where killer robots making life or death decisions dominate the battlefield. Only advanced countries could afford to build robot armies and deploy them instead of humans in conflict. Third World states would have no choice but to rely on flesh and blood soldiers, meaning that one side loses thousands of lives fighting a vast, expendable metal swarm armed with high tech weaponry able to outflank any human-held position before its defenders even have time to react. How easy would it be to start wars when soldiers no longer need to be put at risk and the other side either lacks good enough robots or must put humans on the front lines? If today all it takes to send thousands into combat is saying that they volunteered and their sacrifice won’t be in vain, how quickly will future chicken hawks vote to send the killer bots to settle disputes, often in nations where only humans will be capable of fighting back, all but assuring the robots’ swift tactical victory?

voodoo doll

In another edition of people-can-be-awful news following last week’s post about why it’s indeed best not to feed trolls, it’s time to talk about online harassment and what to do about it. It seems that some 72 social activist groups are asking the Department of Education to police what they see as harassing and hate speech on a geo-fenced messaging app, arguing that because said geo-fence includes college campuses, it’s the colleges’ job to deal with it. Well, I suppose it must be the start of windmill tilting season somewhere, and now a government agency will have to do something to appease well-intentioned activists in whose minds computers are magic, and the right lines of code can make racists, sexists, and stalkers go away. Except that all of them will simply reappear on another social media platform and keep being terrible people, since the only thing censoring them changes is the venue on which they’ll spew their hatred or harass their victims. Of course this is to be expected, because the internet is built to work like that.

Now look, I completely understand how unpleasant it is to have terrible things said about you or done to you on the web, and how it affects you in real life. As a techie who lives on the web, I’ve had these sorts of things happen to me firsthand. However, the same part of me that knows full well that the internet is in fact serious business, contrary to the old joke, also understands that a genuine attempt to police it is doomed to failure. Since the communication protocols underlying all software using the internet are built to be extremely dynamic and robust, there’s always a way to circumvent censorship, confuse tracking, and defeat blacklists. This is what happens when a group of scientists builds a network to share classified information. Like it or not, as long as there is electricity and an internet connection, people will get online, and some of those people will be terrible. For all the great things the internet brought us, it also gave us a really good look at how many people are mediocre and hateful, in stark contrast to most techno-utopian dreams.

So keeping in mind that some denizens of the web will always be awful human beings who give exactly zero shits about anyone else or what effect their invective has on others, and that there will never be a social media platform free of them no matter how hard we try, what should their targets do about it? Well, certainly not ask a government agency to step in. With social media’s reach and influence as powerful as it is today, and the fact that it’s free to use, we’ve gotten lost in dreamy manifestos declaring access to Twitter, Facebook, Snapchat, and yes, the dreaded Yik Yak, a fundamental human right to speak truth to power and find a supportive community. But allowing free and unlimited use of social media is not some sort of internet mandate. It’s run by private companies, many of them not very profitable, hoping to create an ecosystem in which a few ads or add-on services will make them some money by being middlemen in your everyday interactions with your meatspace and internet friends. If we stop using these services when the users we’re dealing with through them are being horrible to us, we do real damage.

But wait a minute, isn’t abandoning the social media platform on which you’ve been hit with waves and waves of hate speech, harassment, and libel just letting the trolls win? In a way, maybe. At the same time though, their victory will leave them simply talking to other trolls with whom pretty much no one wants to deal, including the company that runs the platform. If Yik Yak develops a reputation as the social app where you go to get abused, who will want to use it? And if no one wants to use it, what reason is there for the company to waste millions giving racist, misogynist, and bigoted trolls their own little social network? Consider the case of Chatroulette. Started with the intent of giving random internet users a face to go with a screen name and connecting them with people they’d never otherwise meet, it was almost destroyed by the sheer amount of male nudity. Way too many users had negative experiences and never logged on again, associating it with crude, gratuitous nudity, so much so that it’s still shorthand for being surprised by an unwelcome erect penis on cam. Even after installing filters and controls, and banning tens of thousands of users every day, it’s still not the site it used to be, or the site its creator envisioned.

With that in mind, why try to compel politicians and bureaucrats to unmask and prosecute users for saying offensive things on the web, many of which will no doubt be found to be protected by their freedom of speech rights? That’s right, remember that free speech doesn’t mean freedom to say only things you personally approve of, or find tolerable. Considering that hate speech is legal, having slurs or rumors about you in your feed is very unlikely to be a criminal offense. You can be far more effective by doing nothing and letting the trolls fester, as their favorite social platform for abusing others becomes their own personal hell, where other trolls, out of targets, turn on them to get their kicks. Sure, many trolls just do it for the lulz, with few hard feelings towards you. Until it’s them being doxxed, or flooded with unwanted pizzas, or swatted, or seeing their nudes on a site for other trolls’ ridicule. No matter how hard you try, they won’t be any less awful to you, so let them be awful to each other until they kill the community that allows them to flourish and the company that created and maintained it, and let their innate awfulness be their undoing.

fable troll

Every internet community has them, and many have been killed by them. They crave two things most of all: attention and a platform to broadcast whatever comes to mind, and every time they appear, you can safely bet that someone will admonish users engaging with them not to feed the trolls, as per the common axiom. But what if, just to propose something crazy here, there are reasons to talk to them, downvote them, and otherwise show your displeasure, because an appropriate amount of pushback will finally solidify the message that they’re not wanted? They could either leave or give up their trollish ways. Either way, it would be an improvement. So, following this hypothesis, a small group at a Bay Area college collected 42 million comments from huge gaming, political, and news sites, with a grand total of 114 million votes spanning as many as 1.8 million unique users, to figure out once and for all whether you can downvote trolls into oblivion or force them to contribute productively. Unfortunately, the answer is a pretty definitive no.

After creating an artificial neural network to gauge whether comments deserved an upvote or a downvote, using the actual discussion threads as a training set, the researchers decided to follow users’ comment histories to see how feedback from others affected them over time. They found that users who were ignored simply stopped participating, which seems quite logical: it’s simply a waste of time and effort to shout into the digital aether with no feedback. But when the computer followed the trolls, the data showed that even withering negativity had pretty much no effect on what they posted or how much. Their comments didn’t change, and they did not seem to care at all about the community’s opinions of them. If they wanted to antagonize people, they kept right on doing it. True, not every person who provokes a flood of negativity in response is a troll. Some of the political sites used in the sample are extremely partisan, so any deviation from the party line can provoke a dog pile. But by the same token, while not every maligned comment is trollish, most trollish comments are maligned, so the idea still holds.
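
For the curious, the general shape of that kind of setup is easy to sketch. Below is a deliberately tiny, hypothetical version in Python using scikit-learn, with invented comments and votes; the study’s actual model and features were far more sophisticated, but the principle of learning community feedback from discussion threads is the same.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented stand-in data: comment text paired with whether the
# community's net vote on it was positive (1) or negative (0).
comments = [
    "Great point, thanks for the sources!",
    "You're all sheep, wake up.",
    "Interesting take, though I disagree about the framing.",
    "lol this site is garbage and so are you",
]
net_votes = [1, 0, 1, 0]

# Bag-of-words features feeding a simple classifier stand in here
# for the study's neural network; the principle is the same.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, net_votes)

# Following a user's comment history, prediction by prediction,
# shows whether community feedback changes what they post over time.
print(model.predict(["Here's a link to the actual paper."]))
```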

With this in mind, how do we police trolls? Not feeding them does seem to be the best strategy, but considering how many of us suffer from SIWOTI syndrome — and yes, half this blog is a manifestation of it, so I’m no exception by any stretch of the imagination — and will not let trollish things go, it’s not always feasible. This means that shadow banning is actually by far the most effective technique for dealing with problematic users. Because they won’t know they’re in their own little sandbox, invisible to everyone else, their attempts to garner attention are always ignored, so they get bored and leave. Of course this method isn’t foolproof, but a well designed and run community will quickly channel even repeat offenders into the shadow banned abyss to be alone with their meanderings. In short, according to science, the best thing we can do to put a stop to trolling is to aggressively ignore trolls, as paradoxical as that sounds at first blush.
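
Mechanically, shadow banning is almost embarrassingly simple, which is part of its appeal. Here’s a minimal sketch with hypothetical data structures, just to show where the trick lives:

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    text: str

@dataclass
class Thread:
    comments: list[Comment] = field(default_factory=list)
    shadow_banned: set[str] = field(default_factory=set)

    def post(self, author: str, text: str) -> None:
        # Posting always "succeeds" -- the banned user sees no error,
        # which is exactly what keeps them from evading the ban.
        self.comments.append(Comment(author, text))

    def visible_to(self, viewer: str) -> list[Comment]:
        # Everyone sees normal comments; a shadow banned user also
        # sees their own, so the thread looks perfectly healthy to them.
        return [c for c in self.comments
                if c.author not in self.shadow_banned or c.author == viewer]

thread = Thread()
thread.shadow_banned.add("troll42")
thread.post("alice", "Nice write-up.")
thread.post("troll42", "Everyone here is an idiot.")
print([c.text for c in thread.visible_to("bob")])      # troll's post hidden
print([c.text for c in thread.visible_to("troll42")])  # troll sees it fine
```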

amazon boxes

It’s been a few months since the NYT savaged Amazon’s work environment in the national press, to several stammering professions of utter bewilderment from Bezos. We’ve heard little since, but just as it seemed that most of the unpleasant attention had died down, something bizarre happened to bring the article back into the spotlight. Amazon’s new chief of PR decided to very publicly hit the newspaper with detailed criticisms of its coverage, as if the story were still fresh. As you may expect, the head editor of the Times did not take it lightly and posted very stern rebuttals to the rebuttals, and the two are likely to go back and forth on the topic for a while, leaving the rest of us to figure out exactly how bad a place Amazon is to work. Personally, I have not heard any good things about working there, and the consensus I’ve found basically says that if you’re willing to bite the bullet and suffer for two years, you’ll come out with a resume booster and can find a job where you can actually enjoy what you do while working saner hours.

Amusingly enough, many internet commenters reacted to these sorts of discussions with close to the same scorn they reserve for the wealthy who feel they need affluenza therapy. Does it really matter whether 20-somethings making six figures are or aren’t happy with how their boss treats them? They’re making bank while people who loathe their jobs, and whose bosses are so cruel it seems like there’s a management competition in sadism, work sunup to sundown for a wage that still makes them prioritize rent and food over long overdue basic car maintenance. In some ways, I can understand that attitude. IT definitely pays well, and in many places there are so many jobs for someone with a computer science degree and a few years of experience that receiving multiple offers in the same day is not uncommon. As they say in Eastern Europe, it would be a sin to complain about a fruitful computer science career, especially when your job title has the word “senior” or “lead” in it. But that said, I will now proceed to commit that exact sin.

For many programmers, insane hours aren’t just expected, they’re required. If you don’t put in your eight to ten hours a day, then go home and spend another four to five hours studying during your first few years on the job, you’re going to struggle and find that your contract isn’t renewed. The lack of sleep and the subsistence on caffeine, adrenaline, and electronic music are not just badges of honor, but the price of admission to the club. And now, on top of working around the clock, a lot of employers want to know what code you’re publishing in open source repositories and to what programming groups you belong. You’re expected to live, sleep, breathe, eat, and cough comp sci to have a fruitful career that allows you to advance past the sweatshop setting. Suffer through it with a stiff upper lip and you’ll be given a reward. More work. But in a cozy office with snacks, game rooms, free coffee, and even booze — all to keep you in the office longer — along with at least some creative freedom over how to set up the code structure for your project.

Just like doctors, lawyers, and architects, techies have to run a professional gauntlet before the salary fairy finally deems you worthy, waves her wand, and puts a smile on your face when you see your paycheck, along with the money you saved while spending all your time at work. That’s your reward for all the blood, sweat, and tears. And trust me, when you see the complex pieces of code you wrote roar to life and get relied on by thousands of people, that’s more or less the exact moment you’ll either realize it was all totally worth every minute of frustration and exhaustion and you’re in love with what you do, or decide that the people who just pulled this off only to celebrate by doing it all over again must be completely insane and should be swiftly committed to the nearest mental health facility. If it sounds like IT is very pro-hazing, it is, because we want to ensure that those willing to put in the hard work, and who have the tenacity to solve problems that seem like a real life hex placed on machinery by a dark wizard, are the ones who get rewarded, not people whose only job skill is to show up on time and look busy for enough of the day.

And that brings us back to Amazon. Since a lot of programmers expect a long grind until they land that coveted spot in a startup-like atmosphere, there are a lot of companies which gleefully abuse this expectation to run a modern day white collar sweatshop. You’re shoved into a cubicle, assigned a mountain of tasks, and told to hurry up. If you have a technical boss, all he wants to know is when the code will be finished. If you have a non-technical boss, he’ll watch you for signs of slacking off so he can have a disciplinary talk with you because, unable to manage the product, he manages the people. And after being whipped into a crazy, unsustainable pace, you deliver someone else’s vision, then are told to do the same thing again even faster. This is not only how all the stories the NYT quoted paint Amazon, it’s exactly how Amazon, Microsoft, IBM, and IT at large banks and insurance companies work: on the sweatshop system. Working for them is just one long career-beginning hazing that never really ends, and most IT people simply accept it as the way their world works, sharing their time at a sweatshop as a battlefield story.

We are not upset about it; we just know that companies like Amazon only care about speed and scale, and can afford the golden shackles with which to chain enough warm bodies to computers to crank out the required code, so we make our employment decisions with this in mind. For many techies, a company that will chew them up and spit them out, but looks good to one of the countless tech recruiters out there when highlighted in an online resume, is a means to the kind of job they really want. Sure, you’ll find stories of programmers rebelling because they can’t wear jeans and t-shirts to the office, or tales of on-site catered meals on demand and massages, but that’s a tiny minority of all techies, primarily in California’s tech hubs. Most programmers wear a selection of outfits best fit for Jake from State Farm and spend their days in a cube farm; game rooms with pool tables, consoles, and free booze, at companies where a coder’s work isn’t just acknowledged in passing like a long lost uncle’s stint in jail, are things they read about between coding sessions. To them, Amazon isn’t a particularly cruel or bruising employer. It’s a typical one.

math is logical

When you live in a world filled with technology, you’re living with the products of millions of lines of code, both low and high level. There’s code in your car’s digital controls and in all your appliances, and sprawling software systems, with which yours truly has more than just a passing familiarity, are more often than not behind virtually every decision made about you by banks, potential bosses, hospitals, and even law enforcement. And it’s that last decision maker that warrants the highest scrutiny and the most worry, because proprietary code is making decisions that can very literally end your life without ever being audited and examined for potential flaws. Buggy software in forensic labs means that actual criminals may go free while innocent bystanders are sentenced to decades, if not life, in jail or sent to death row, so criminal defense attorneys are now arguing that putting evidence into a black box to get a result is absurd, and want a real audit of at least one company’s software. Sadly, their requests have so far been denied by the courts for a really terrible reason: that the company is allowed to protect its code from the competition.

Instead of opening up its source code, the company in question, Cybergenetics, simply says its methods are mathematically sound and peer reviewed, and that should be the end of the discussion as far as justice is concerned. So far, the courts seem to agree, arguing that revealing the code would force the company to reveal trade secrets it’s entitled to keep. And while it’s unlikely that Cybergenetics is doing anything willfully malicious or avoiding an audit for some sort of sinister reason, the logic of saying that because the methodology seems sound, the code implementing it should be beyond reproach is fatally flawed. Just because you know a great deal about how something should be done doesn’t mean you won’t make a mistake, one that may completely undermine your entire operation. Just consider the Heartbleed bug in the open source OpenSSL library. Even when anyone could’ve reviewed the code, a bug undermining the very security the software was supposed to offer was missed for years, even though the methodology behind OpenSSL’s approach to security was quite mathematically sound.
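
To see how a sound method can still hide an evidence-compromising bug, consider this toy Python echo service, loosely modeled on Heartbleed’s mechanics. It’s an invented illustration, not OpenSSL’s or Cybergenetics’ actual code; the protocol’s math is fine, but the implementation trusts a length field it shouldn’t:

```python
# A shared scratch buffer, reused across requests to avoid allocations --
# a perfectly reasonable-sounding optimization.
SCRATCH = bytearray(1024)

def handle_echo(payload: bytes, claimed_len: int) -> bytes:
    """Echo the payload back; the protocol says claimed_len == len(payload)."""
    SCRATCH[:len(payload)] = payload
    # Bug: we trust claimed_len instead of len(payload). If a client
    # claims more than it sent, we leak whatever a *previous* request
    # left behind in the buffer -- the heart of Heartbleed.
    return bytes(SCRATCH[:claimed_len])

handle_echo(b"secret DNA profile for case #1234", 33)
leak = handle_echo(b"hi", 40)
print(leak)  # b'hi' followed by 38 bytes of someone else's data
```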

So what could Cybergenetics not want to share with the world? Well, knowing what I’ve had the chance to learn about code meant to process DNA sequences, I can provide several educated guesses. One of the most problematic things with processing genetic data is quantity. It simply takes a lot of time and processing power to accurately read and compare DNA sequences, and that means a lot of money goes solely to letting your computers crunch data. The faster you can read and compare genetic data, the lower your customers’ costs, the more orders you can take and fulfill on time, and the higher your profit margins. What the code in question could reveal is how its programmers are trying to optimize it, tweaking things like data types, memory usage, and mathematical shortcuts to get better performance out of it. All of these are perfectly valid trade secrets, and knowing how they do what they do could easily give the competition a very real leg up on developing even faster and better algorithms. But these optimizations are also a perfect place in the code for evidence-compromising bugs to hide. It’s a real conundrum.
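
For a flavor of the kind of optimization that’s both a legitimate trade secret and a cozy hiding spot for bugs, here’s a classic trick: packing DNA bases into two bits apiece so comparisons run as fast integer operations. This is a generic illustration of the approach, in Python, not a guess at Cybergenetics’ actual code:

```python
# Map each base to two bits so a 32-base sequence fits in one 64-bit int.
BASE_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq: str) -> int:
    """Pack a DNA string into an integer, two bits per base."""
    packed = 0
    for base in seq:
        packed = (packed << 2) | BASE_BITS[base]
    return packed

def count_mismatches(a: str, b: str) -> int:
    """Compare two equal-length sequences in a handful of int operations."""
    diff = pack(a) ^ pack(b)  # XOR leaves nonzero bit pairs at mismatches
    mismatches = 0
    while diff:
        if diff & 0b11:
            mismatches += 1
        diff >>= 2
    return mismatches

print(count_mismatches("ACGTACGT", "ACGTACGA"))  # 1
```

Note the fragility: an off-by-one in the shift, or an ambiguous base that isn’t one of the four letters (real data has more symbols than A, C, G, and T), quietly changes match counts, which is exactly the kind of bug only a real audit would catch.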

It’s one thing if you’re running a company which provides advanced data warehousing or code obfuscation services, where a bug in your code doesn’t result in someone going to jail. But if a wrong result on your end can cost even one innocent person a quarter century behind bars, an argument centered on your financial viability as a business just doesn’t cut it. Perhaps the patent system could help keep this software safe from being pilfered by competitors who can’t compete otherwise, while still keeping the code accessible and easy for the relevant experts to review. Otherwise, if we let commercial considerations into how we review one of the most important types of forensic evidence, criminal defense attorneys have an easy way to do what they do best and raise reasonable doubt by repeating that the method of matching is top secret and barred from review solely to protect the company’s revenue stream. Or ask the jury how they would feel if an algorithm no one is allowed to review, lest it compromise its creators’ bank accounts, decided their ultimate fate in a complicated criminal case.

microtree in glass

How about we run through a few basic statistics about our effects on the world around us? Over the last hundred years or so, we paved nearly 11.2 million miles of roads, built 845,000 dams to divert over a third of all rivers on the planet, consumed over a billion gallons of water, generated and then used 142,000 terawatt-hours of electricity, and belched 33 billion tons of greenhouse gases into the atmosphere. The only things that impact Earth more than human industrialization are supervolcanic eruptions and massive asteroid impacts, which is why environmentalists have been thinking about a bold plan to somehow mark half the planet as conservation areas. While you might think that there’s no place where humans can’t thrive, the fact of the matter is that an amazingly large percentage of Earth isn’t all that welcoming to humans, or practical to settle in the long run. We are still tropical creatures who like mild, warm climates and want access to the world’s oceans, which is why 44% of us live in coastal areas rather than deserts and tundras. As well adapted to this planet as we are, we’re really not as spread out as we often think.

Even more interestingly, we’re converging more and more into megacities like Shanghai, Tokyo, Mumbai, New York, and Los Angeles. More than half the global population now calls cities home, and the trend is very likely to continue in a post-industrial economy where efficiency is king and geographic hubs for many professions are still very important. What’s more, the new trend toward automated vertical farming, which reduces costs and water use and eliminates the need for pesticides, would free up millions upon millions of acres of land currently used for growing our crops. Sure, not all farming can be done indoors, and livestock raised for consumption will either still need to be raised the old-fashioned way, or we’d need to create synthetic meat that’s palatable to most people. We may never live in cities contained within skyscrapers for maximum efficiency, but there are a lot of demographic projections saying that 80% of us will be living far closer together on average than we do today, in massive, sprawling cities, and we’re already making the necessary preparations. So while at first glance it may seem odd to set aside half of all land as a nature preserve, maybe, just maybe, it will be possible in some 35 years…

grumpy cat

Some days I read stories about machine learning being deployed to fight crime, exoskeletons helping the paralyzed walk again, or supercomputers modeling new spacecraft, and feel very lucky to be in my current profession. Computers changed the world, and the discipline behind making these computers work is based on the egalitarian concept of tinkering. You need electricity and a little bit of money to get started, true, but the path from wanting to build something useful to doing it has never been more straightforward or shorter. Anyone with enough dedication can make something from scratch, even without formal training, though training is highly recommended for those who want to become professionals. And then, other days, I read about things like Peeple, the app that lets you review other humans, currently valued at $7.6 million, and groan that what people like me do is both helping the world and slowly ruining it by letting awful ideas like this spawn into existence with little effort. Because there’s no way this can possibly end well…

Consider that out of a hundred people who read something online, only one might respond or somehow interact with the content. People are not going to go through the effort of creating usernames, passwords, and e-mail or social media verification unless they are really motivated to do so. And when are people most motivated? When they’re upset or expecting a reward in return for their trouble. When a business is in the news for ugly misdeeds, it’s pretty much a given that the first thing to happen will be angry torrents of one-star Yelp reviews, which the admins then have to clean up. It’s not going to be any different with people. And whereas businesses are just legal entities that can be re-branded or run by someone new, which would give them the benefit of the doubt, a person is a person, and reviews about him or her will be around for years, no matter whether this person has turned over a new leaf, or whether the reviews of past bad behavior are legitimate complaints, a misunderstanding, or just malicious, and it’s likely that negativity will quickly trump whatever positive feedback the apps encourage.

As an example, take last year’s flash in the smartphone app pan, Lulu, which allowed women to rate men as sexual partners. Negative reviews vastly outnumbered the positive ones, and while the app’s goal may have been helping women avoid selfish partners and bad dates, it turned into a place for women to complain about men they didn’t like. I’m sure the same exact app made for men to rate women would have the same results. For Peeple to really be any different would require human beings to fundamentally change how they interact with each other. And to add to the unpleasantness of dealing with judgmental, demanding, and hypersensitive people in the real world, all their unfiltered, nasty remarks now have a megaphone and are searchable by future romantic partners, landlords, and employers who have only these strangers’ opinions as their introduction to you. Have the creators of Peeple or Lulu thought about whether it would be better for all of us if someone could type in a name and instantly see our sexual history and a laundry list of opinions and complaints about us from friends and strangers alike, on top of everything that was already made public about our lives through social media? Have they thought about the potential for abuse?

We live at a time when revenge porn and social media have turned leaked sex tapes and nudes into quaint mishaps, and you have to develop a strategy for dealing with your most intimate details surfacing in an enormous data dump of millions of other people’s most intimate details and fantasies. Isn’t that a sign that we’ve taken this social media thing far enough? When banks are mulling the idea of giving you loans based on your friends’ social media profiles, and employers are poking around your tweets and Instagram pictures, do you need to give malicious hackers or exploitative friends an additional way to take advantage of you? Even worse, consider that a third of all reviews on the web are likely to be fake, and imagine a future where you have to buy a positive review bundle to offset the nastiness said about you on Peeple, or make up a small horde of really, really satisfied and vocal sexual partners on a Lulu follow-up, which would be inevitable once a people-rating app catches on. The bottom line is that apps that let you rate people like products are a textbook example of how being able to do something doesn’t mean you should, least of all without a second thought about the potential consequences of what you’re unleashing on the world.

little smartphone

Every few years, we seem to get paroxysms of warnings about how our smartphones are going to give us cancer one day. Despite being grounded in junk science, they cause a stir because a few people with the right credentials claiming that something we use every day is killing us is a good way to get a lot of attention very quickly. And with large contingents of people all too ready and willing to believe that a few cells in a lab are a good proxy for the human body, and that Big Telco is just the next Big Tobacco in waiting, the City of Berkeley accomplished a feat of quixotic justice that San Francisco and the state of Maine once failed to secure, trying to force all stores that sell phones within the city’s limits to carry a vague, scary warning about cell phones emitting radiation, one implying that users may be at risk of something malignant unless they go through their phone’s manual to find a safe way to use it while shielding their fragile bodies. No scientific work dealing with in vivo studies says this, but hey, there’s pandering to be done, so a little something like, say, the medical community disagreeing with you shouldn’t get in the way.

Really, it’s not often that siding with a large industry trade group, such as the CTIA, which fought in court to stop the mandate, is the scientifically correct thing to do. Usually trade groups will jump on a junk science bandwagon in a heartbeat if it benefits them, twisting facts to suit the desire for higher profits, as in the case of the anti-GMO lobby, for example. But in this rare case, the CTIA’s objections really did have the science on their side, and it would’ve been a far more interesting case had the science actually been invoked. Despite having the ability to prove that the City of Berkeley was simply subscribing to Luddism and anti-scientific fallacies to cast cell phones as evil, cancer-emitting boxes of death, the modern equivalents of a pack of cigarettes in the 1950s, the group decided to argue that the mandate merely violated its members’ free speech rights. Please join me for a minute of facepalming at this legal equivalent of snatching defeat from the jaws of victory. It’s yet another example of why court decisions should be inadmissible in debates about science.

But hold on, you might say, what’s so bad about the City of Berkeley giving its citizens what they wanted? After all, shouldn’t people be free to make their own informed decisions, and doesn’t this disclaimer only give them the tools to make up their minds after considering both sides? Well, yes, that would be the case in a scientifically hyperliterate utopia, or when there’s a real debate about an issue in the scientific community. But there’s a reason we don’t slap labels on the astronomy books sold at Barnes and Noble warning readers that they contain descriptions of the theory of heliocentrism and multiple references to the Big Bang, or on a medical book to warn readers that it does not consider the theory of the four humors and miasmas alongside germ theory. There are no current scientific debates about whether the universe is static, whether the Earth orbits the sun, or whether microorganisms invading our bodies are the origin of disease. Why would we want to give the public erroneous information just because a special interest group really, really wanted to shout its ill-informed ideas no matter what the experts actually told them?

Make no mistake, this is not about a really lefty, anti-establishment city defying corporate villains in court as a victory for the little guy, as the Luddite lobby spins it. It’s not about helping a public at risk make up its own mind on a case by case basis. This is about promoting misinformation that a small but vocal group of technophobes believes to be true in order to scare others the same way, using the city to do the dirty legal work. This time they managed to get lucky because the trade group defending the science abdicated its responsibility and wandered off into the tenuous lands of free speech, where factual standards are non-existent unless you’re lying to damage careers or implying that someone innocent committed a crime while obviously knowing he or she didn’t. All the labeling and warnings the anti-science activists really want aren’t about giving people some sort of valuable information they desperately need, but about putting propaganda right in front of their faces through court-assisted arm twisting, which is why we shouldn’t so much be laughing and joking about these activists as actively pointing out what they are and publicly opposing them.

[ illustration by Eric Motang ]

android mind

For those who are convinced that one day we can upload our minds to a computer and emulate the artificial immortality of Ultron in the finest traditions of comic book science, there are a number of planned experiments which claim to have the potential to digitally reanimate brains from very thorough maps of neuron connections. They’re based on Ray Kurzweil’s theory of the mind: we are simply the sum total of the neural network in our brains, and if we can capture it, we can build a viable digital analog that should think, act, and sound like us. Basically, the general plot of last year’s Johnny Depp flop Transcendence wasn’t built around something a room of studio writers dreamed up over a very productive lunch, but on a very real idea which some people are taking seriously enough to use it to plan the fate of their bodies and minds after death. Those who are dying are now finding some comfort in the idea that they can be brought back to life should any of these experiments succeed, reuniting with the loved ones they’re leaving behind.

In both industry and academia, it can be really easy to forget that the bleeding edge technology you study and promote can have a very real effect on very real people’s lives. Cancer patients, those with debilitating injuries that will drastically shorten their lives, and people whose genetics conspired to make their bodies fail them are starting to make decisions based on the promises spread by the media on behalf of self-styled tech prophets. For years, I’ve been writing posts and articles explaining exactly why many of these promises are poorly formed ideas that lack the requisite understanding of the problems they claim to solve. And that is still very much the case, as neuroscientist Michael Hendricks felt compelled to detail for MIT Technology Review in response to the New York Times feature on whole brain emulation. His argument is a solid one, based on an actual attempt to emulate the brain of an organism we understand inside and out and have mapped from its skin down to the individual codon: the humble nematode worm.

Essentially, Hendricks says that to digitally emulate the brain of a nematode, we need to realize that its mind still runs on thousands of constant, ongoing chemical reactions in addition to the flows of electrical pulses through its neurons. We don’t know how to model them or the exact effect they have on the worm’s cognition, and even with the entire, immaculately accurate connectome at hand, he’s still missing a great deal of the information needed to start emulating its brain. But why should we need all the information, you ask? Can’t we just build a proper artificial neural network reflecting the nematode connectome and fire it up? After all, if we know how information will navigate its brain and what all the neurons do, couldn’t we have something up and running? To build on Hendricks’ argument that the structure of the brain itself is only a part of what makes individuals who they are and how they work, allow me to add that this is simply not how a digital neural network is supposed to function, despite being constantly compared to our neurons.

Artificial neural networks are mechanisms for implementing a mathematical formula for learning an unfamiliar task in the language of propositional logic. In essence, you define the problem space and the expected outcomes, then allow the network to weigh the inputs and guess its way to an acceptable solution. You could say that’s how our brains work too, but you’d be wrong. There are parts of our brain that deal with high level logic, like the prefrontal cortex, which helps you make decisions about what to do in certain situations, that is, deals with executive functions. But unlike in artificial neural networks, there are countless chemical reactions involved, reactions which warp how the information is being processed. Being hungry, sleepy, tired, aroused, sick, happy, and so on, and so forth, can make the same set of connections produce different outputs from very similar inputs. Ever had the experience of helping a friend with something until one day, fed up with being constantly pestered for help, you started a fight and ended the friendship? Humans do that. Social animals do that. Computers never could.
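
To make the contrast concrete, here’s roughly what “weigh the inputs and guess its way to an acceptable solution” means in practice: a minimal feedforward network learning XOR by gradient descent, sketched in Python with NumPy. Notice what’s absent: no hunger, no fatigue, no neurotransmitters, just the same fixed formula nudging weights on every pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Define the problem space and the expected outcomes (here, XOR)...
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# ...then let the network weigh the inputs and guess its way there.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # Forward pass: identical inputs always flow the same way.
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)
    # Backward pass: nudge every weight downhill on squared error.
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0, keepdims=True)

print(out.round(2).ravel())  # should settle near [0, 1, 1, 0]
```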

You see, your connectome doesn’t implement propositional calculus; it’s a constantly changing infrastructure for exchanging signals, deeply affected by training, injury, your overall health, your memories, and the complex flow of neurotransmitters between neurons. If you brought me a connectome, even for a tiny nematode, and told me to set up an artificial neural network that captures these relationships, I’m sure it would be possible to draw up something in a bit of custom code, but what exactly would the result be? How do I encode plasticity? How do we define each neuron’s statistical weight if we’re missing the chemical reactions affecting it? Is there a variation in the neurotransmitters we’d have to simulate as well, and if so, what would it be, and to which neurotransmitters would it apply? It’s like trying to rebuild a city with only the road map, no buildings, people, cars, trucks, or businesses included, then expecting artificial traffic patterns to recreate all the dynamics of the city whose road map you digitized, with pretty much no room for entropy, because entropy could easily break down the simulation over time. You would be simultaneously running the neural network and training it, something it’s really not meant to do.
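
Here’s that problem in code form: a hypothetical loader that turns a wiring diagram into a simulation immediately hits parameters the connectome simply doesn’t contain. Everything below marked “unknown” is invented to show the gap, not real nematode data.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 302  # neurons in the C. elegans connectome

# The wiring diagram tells us WHO connects to WHOM...
connectome = rng.random((N, N)) < 0.05  # stand-in adjacency map

# ...but not any of this, which the simulation cannot run without:
synaptic_weights = np.where(connectome, 1.0, 0.0)  # unknown: guessed as 1.0
neurotransmitters = {}   # unknown: excitatory? inhibitory? modulatory?
plasticity_rule = None   # unknown: how do weights change with experience?
chemical_state = None    # unknown: thousands of ongoing reactions

def step(activity: np.ndarray) -> np.ndarray:
    # With every unknown defaulted away, this "brain" is just matrix
    # math, and nothing guarantees its behavior resembles the worm's.
    return np.tanh(synaptic_weights.T @ activity)

activity = rng.random(N)
for _ in range(100):
    activity = step(activity)
```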

The bottom line here is that synthetic minds, even ones capable of hot-swapping newly trained networks in place of existing ones, are not going to be the same as organic ones. What a great deal of transhumanists refuse to accept is that the substrate in which computing is done — and they will define what the mind does as computing — actually matters, because it allows information to flow at different rates and in different ways than another substrate would. We can put something from a connectome into a computer, but what comes out will not be what we put in; it will be something new, something different, because we put just a part of it into a machine and naively expected the code to make up for all the gaps. And that’s the best case scenario, with a nematode and 302 neurons. Humans have 86 billion. Even if the majority of these neurons don’t need to be emulated, the point is that whatever problems you have with a virtual nematode brain will be almost nine orders of magnitude worse in a virtual human one, as added size and complexity create new problems. In short, whole brain emulation as a means to digital immortality may work in comic books, but definitely not in the real world.

roaring hulk

Since the dawn of the web, there have been shock jocks and people on a quest to see who can post the most extreme content without crossing the line into depraved criminality. Then, with an enormous wave of social media companies and our ever-expanding access to broadband and fast mobile networks, the distance between saying or doing something very regrettable and a massive backlash that can go global has never been shorter. An ill-thought-out tweet can be devastating to one’s life and career, and we’re all still getting used to this scary reality, making a lot of mistakes along the way. Every bad decision, questionable blog post, and tone-deaf article zooms around the world within minutes to feed one of the online media’s most reliable sources of all those sweet, juicy, ad-price-hiking page views: the outraged response. Just consider last year’s meteoric rise of the outrage click, with a fresh, new scandal for each and every day, and, once we count non-celebrities and the world outside current events, many more beyond that.

This year, the outrage machine isn’t slowing down one bit. If anything, it’s picked up steam as a vast array of popular blogs and news sites stand ready to pounce on every Twitter war and every botched interview and social media post. But as the rage keeps on coming, there’s a slow, sure trickle of think pieces asking if we’re ever going to get tired of it, and if it’s the result of opening a digital Pandora’s box. After all, once you give people a diet of nothing but outrage, they should, in theory, become largely immune to it, right? We have the same issue when it comes to caring about and empathizing with something that leaves a large number of victims in its wake, a well known and thoroughly studied phenomenon called the scope-severity paradox. It comes down to a limit on how many things we can process at once and how much emotion we can invest in each and every case brought to our attention. Our empathetic and cognitive abilities start fading quickly when we’re overwhelmed, so logically, someday, we’ll be completely outraged out.

In fact, it would be really interesting to see and compare the traffic from popular outrage articles and social media posts over the last few years to chart the duration and size of each fury spike. There are publicly available tools for researchers to gather Twitter and Facebook activity, but a glimpse at that data alone wouldn’t tell the full story. We’d need closely held traffic data from all the major media sources with more than a million views a day, including comment counts, likes, shares, and links, as well as additional controls for small cliques inflating comment counts in debates, regular outrageaholics, and whether the pieces are one-offs or the entire outlet traffics solely in outrage and scandal. Only then would we actually have a clue as to whether the internet will in fact get sick of the steady drumbeat of the outrage machine. At the same time, I think we can make several predictions about what we’re likely to find, because while the speed and the medium are new to us, how we collect and sometimes manufacture outrage for the public is rather old hat.
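
As a rough sketch of what charting those fury spikes might look like, here’s some hypothetical Python with pandas; the traffic numbers, column names, and spike threshold are all invented, since no outlet actually shares this data.

```python
import pandas as pd

# Hypothetical hourly pageview data for articles tagged as outrage pieces.
traffic = pd.DataFrame({
    "article_id": [1] * 6 + [2] * 6,
    "hour": list(range(6)) * 2,
    "views": [100, 4000, 9000, 3000, 500, 120,
              200, 800, 1200, 900, 400, 150],
})

baseline = 300  # invented threshold separating a spike from normal traffic

def spike_stats(group: pd.DataFrame) -> pd.Series:
    spike = group[group["views"] > baseline]
    return pd.Series({
        "spike_hours": len(spike),            # duration of the fury
        "peak_views": group["views"].max(),   # size of the fury
        "total_views": group["views"].sum(),
    })

print(traffic.groupby("article_id")[["views"]].apply(spike_stats))
```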

First off, it’s unlikely that internet outrage will ever be dethroned as a key traffic builder, since we sure love to form angry mobs, and it’s simply too easy to throw some red meat to the mobs just waiting to form. Likewise, it should be noted that amid this outrage there are instances of actual, brutal, noteworthy injustice that must be swiftly, vocally, and publicly addressed to make things right again. As bizarre as it sounds, sometimes an angry mob can actually do some good and contribute to fixing a problem. If anything, we do want the Outrage Machine around for the instances where we can use its power for good rather than evil, chaos, and PC wars. Secondly, people are going to keep participating in whipping up media outrage and escalating it because they’ll want to be part of an angry mob, and nowadays, they don’t even have to physically grab torches and pitchforks. Tweets and Facebook posts will more than suffice. With the barrier to a virtual riot as low as a click, many will find it hard to resist basking in moral superiority.

Finally, let’s just admit that there are writers whose bread and butter now relies on getting involved in some sort of scandal, so their outrage will get posted and promoted day in, day out, in the hope that one or two of their pieces of outrage clickbait go viral and get them the page views, attention, and vitriolic feedback they need to keep their careers going. If online outrage starts to die down as a genre, it’s going to be a very slow death with periodic spasms that make it seem as if it has risen from the dead once again. It’s too easy to generate, too easy to escalate, and way too easy to let it consume you, and it feeds the urge of many to see others in a situation that gives them a chance to gloat and compare themselves favorably to the disgraced schmucks. At the same time, there is a very real danger that constant outrage will sever our connection to how our much less dramatic world really works, and bury the incidents where public outrage is almost a required civic duty among the trivial and inconsequential. And that would be sad indeed.