Archives For technology

[ image: gawking ]

Now, is it just me or are you not really a celebrity until you either have a naked photo spread of yourself in a random glossy magazine, or your very own sex tape? It’s almost as if the gossips who decide who’s who on national television won’t pay attention to you unless there’s either an attention-pleading nudie spread or a threat of a sex tape looming over your head. But alas, the heady days of the celebrity sex tape might be coming to an end, according to Amanda Hess, a conclusion she bases on the ever less enthusiastic reaction of the public to the latest scandals such as The Fappening and Hulk Hogan’s recorded foray into swinging. As Hess sees it, we’ve entered sex tape and celebrity nudity fatigue because there have simply been too many tapes, pictures, and rumors, and the trend is so widespread that very likeable entertainers are being targeted by hackers in search of sleaze. Instead of laughing at the lax security and overconfidence of C-list actors and actresses, and the desperate pleas for attention from D-list has-beens, we now empathize with the victims of invasions of privacy committed to make a scuzzy buck off the shock value.

While this may all be true, I think there’s a very important piece of the puzzle Hess is missing in this regard and it has to do with the ubiquitous, internet-connected technology always within an arm’s reach. Back in the days of Tommy Lee and Pamela, you had to set up a camera, make a tape, have that tape duplicated, use fairly convoluted equipment to digitize it, upload it to a web server which you had to configure correctly to accept the format in which you digitized it, spread the word on countless message boards, manually submit it to a search engine, and finally, over the course of a few months actually get widespread notice of the sex tape. Just writing that out is enough to make you winded, but it also shows why celebrities thought they would be in the clear if they just hid their tapes well enough. But today, the camera is on your phone, video gets recorded in a standard format for which everyone has players, and with one-click uploads, you can go from casual sex to amateur porn stardom in a matter of minutes. And many do.

Having constant access to technology has also taken a great deal of flirting and hookups to the web, where you can find anything from a soul mate to quick, no-strings-attached fun. And much like the old joke about male masturbation, there are two types of people who use technology to help them flirt: those who send nudes, and those who lie about it. In fact, spies intercepting web cam and IM traffic on popular messaging platforms between regular people in the UK were just straight up shocked at how much nudity they saw. If the 11% number doesn’t seem that high to you, keep in mind that said spies were actually trying to do some targeted snooping, so most of the nudity they saw was after attempts to filter it out. We get naked for the camera so often, we overwhelm top notch government data centers with high tech filtering mechanisms to the point where “well, I tried searching for it and all this porn came up” is a real problem for spies on top secret versions of the internet built specifically to exclude civilian distractions and access.

It’s even a widespread problem for kids just entering puberty. Teens with low self-esteem and a hunger for approval and cred send naked pictures to each other all the time. Adults who need a confidence boost about their bodies can easily solicit strangers’ opinions in anonymous forums, even though they probably shouldn’t. And even when we take pains to make our adult pictures, videos, and chats private, all it takes is one small security hole or a careless moment, and bam, some hacker can get into our accounts and either harvest what we already have, or install very nasty malware to capture some of our sexual moments. Of course we could run with the notion that we shouldn’t share anything we don’t expect to be public and if there are naked pictures of us on the web, we deserve it. But this is a downright sociopathic line of reasoning, on par with a defense of a burglar who only stole your stuff because you didn’t have stronger locks while also lacking the good sense to only buy things you were prepared to lose in a robbery. If you tried to protect your assets and failed, telling you to protect them better, or not have them, is asinine.

So what does this all have to do with the decline of the celebrity sex tape/leaked pic genre? We went from giddy curiosity, to boredom as such tapes were being released for publicity and a bit of cash, to a nasty feeling in the pit of our stomachs as we’ve now taken enough nudes or done enough adult things on the web to realize that we might be next. There are extortionists whose goal it is to trick you into getting sexual with them and then blackmail you. There’s the revenge porn business, perhaps the sleaziest scam of all time. When we know that celebrity nudity was really hacked rather than made in an attempt for another 15 minutes of fame, and we can also be compromised in much the same way, as two non-famous victims of The Fappening were, it becomes a lot less fun to watch these videos or pics. Rather than guilty pleasures brought to us by paparazzi in that TMZ celebs-behaving-badly school of tabloid gossiping, they very much hit home like the gross invasions of privacy they are. And not having enough means of stopping a nasty hack that will embarrass us, we cringe in reply, knowing we can suffer the same fate…

[ image: old cyborg ]

Over all the posts I’ve written about brain-machine interfaces and their promise for an everyday person, one of the key takeaways was that while the idea was great, the implementation would be problematic because doctors would be loath to perform invasive and risky surgery on a patient who didn’t necessarily need said surgery. But what if when you want to link your brain to a new, complex, and powerful device, you could just get an injection of electrodes that unfurl into a thin mesh which surrounds your neurons and allows you to beam a potent signal out? Sounds like a premise for a science fiction novel, doesn’t it? Maybe something down the cyberpunk alley that was explored by Ghost In The Shell and The Matrix? Amazingly, no. It’s real, and it’s now being tested in rats with extremely positive results. Just 30 minutes after injection, the mesh unwound itself around the rats’ brains and retained some 80% of its ideal functionality. True, it’s not quite perfect yet, but this is a massive leap towards fusing our minds with machinery.

Honestly, I could write an entire book about all the things easy access to this technology could enable in the long run because the possibilities are nearly endless. We could manipulate a machine miles away from ourselves as if we inhabited it, Avatar style, give locked in stroke victims a way to communicate and control their environment, extend our nervous systems into artificial limbs which can be fused with our existing bodies, and perhaps even challenge what it means to be a human and become a truly spacefaring species at some point down the line. Or we could use it to make video games really badass because that’s where the big money will be after medicine, after which we’ll quickly diversify into porn. But I digress. The very idea that we’re slowly but oh so surely coming closer and closer towards easy-to-implant brain-machine interfaces is enough to make me feel all warm and fuzzy from seeing science fiction turn into science fact, and twitch with anticipation of what could be done when it’s finally ready for human trials. Oh the software I could write and the things it could do with the power of the human brain and a cloud app…

[ illustration by Martin Lisec ]

[ image: facebook like ]

Adrian Chen is somewhat of an expert on controversial social media content. After all, his most popular story was a damning exposé of a forum moderator who posted all sorts of controversial and questionable content on reddit. But after sifting through the deepest and darkest dungeons of reddit and finding leaked content guidelines for Facebook moderators overseas, Chen finally got a shot at the big leagues and went to Russia to track down the HQ of the infamous army of internet trolls operated by the country’s intelligence services. The results weren’t pretty. While it seemed like a productive trip confirming much of what many of us already know, he fell for one of the oldest scams in the book and was used in a fake news article claiming that he was a CIA operative who was recruiting neo-Nazis to encourage anti-Russian protests. Which in Russia is about the moral equivalent of recruiting the pedophiles from NAMBLA to lobby states to change their age of consent laws. In case that wasn’t clear, they really, really hate neo-Nazis.

This is really par for the course when it comes to dealing with today’s Russian media which has been feeding its citizens a steady diet of conspiracy theories. The people who tricked Chen are the same people who use David Icke as a political science expert and interview him while he’s going on and on about American-driven New World Order-style machinations, quickly cutting the cameras and microphones before he can go on to point the finger at a group of alien lizards in charge of the planet. Just like the Soviet propagandists of the previous generation, they give it their all to make life outside of Russia seem downright hellish for the average person, and paint the world as being mostly aligned against Russia simply for the sake of keeping a former grand superpower down so they can easily steal nuclear weapons, vast oil and gas reserves, and lure impressionable, highly educated young people overseas with empty promises of wealth, luxury, and non-stop parties after work. I can’t tell you when it started, but I can tell you that it began in the Russian part of the web, as Chen accurately describes, and has gotten exponentially worse.

However, Russia is not unique in doing this. It may perhaps be one of the best troll factories out there, but it’s far from the only one. You can probably safely assume that a third of pretty much everything you see on the web is fake, created by trolls, paid shills, or click-farm workers whose job it is to add fake Facebook likes and Twitter followers for corporations, think tanks, and even political candidates. With the anonymity of the internet comes freedom, but with that freedom is the understanding that it can be abused to present lies and facilitate frauds on a massive scale, and since many people still don’t take the internet seriously enough, one can get away with lying or scamming for cash with no real consequence. Ban fake accounts or trolls? Others will pop up in seconds. It’s like trying to slay a hydra that can also regrow its heart. All you can really do when it comes to dealing with the fake web is to stay on alert, always double check what you see, and don’t be shy about checking accounts for something that looks or feels wrong. You might not be able to catch every troll and fraud every time, but you’ll weed out the vast majority who want to recruit you to support a fraudulent cause, or trick you into spreading their lies…

[ illustration by Aaron Wood ]

[ image: robot barkeep ]

Many jokes or satirical pieces are funny precisely because they have a nugget of truth in them, something we can all at least understand if not relate to. This is why a satirical piece about the opening of a new McDonald’s staffed solely by robots, supposedly due to the management’s concern about campaigns to increase the minimum wage to $15 per hour, fooled enough readers to merit its swift entry on Snopes. I can’t blame those who were fooled. After all, we do have the technology and as the Snopes entry points out, there are numerous McDonald’s locations in several European countries boasting high minimum wages where customers order using touchscreens instead of talking to cashiers. Bumping up minimum wages, especially as it’s happening in several rather expensive West Coast cities, could certainly be an impetus for replacing humans with machines the same way it’s being done in numerous other professions. Today, we can shrug at the satire and lament the fact that machines are now advanced enough to make some people obsolete in the job market. But give it some time and this may well be a real report in the news.

One of the sad things about this kind of automation is that it isn’t being held back by a lack of robots capable enough to take over for humans. Those robots exist. In a structured and organized environment like a grocery store or a restaurant, with a finite number of pathways for machines to navigate, well known and understood obstacles, and clearly marked destinations, I would see no problem with robotic waiters summoned by touchscreen or shelf-stocking bots today other than their price tag. That’s right. Humans are doing certain types of work because it’s just cheaper to have them do it instead of machines. I really don’t think that $15 an hour wages can make these robots economically viable, much less cheaper, for many businesses over the next five years, but past that is anyone’s guess with economies of scale kicking in, the bugs shaken out, the quality improving, and the prices dropping. So it may be best to take that article not so much as satire, but as a warning. Another big wave of automation is coming and we need to be thinking about how to deal with it, not just debate it to death or oppose it with dogmas.
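
To put that five-year guess in perspective, here is a minimal back-of-the-envelope sketch in Python. Every number in it is invented purely for illustration: a hypothetical wage, hypothetical hours, a hypothetical maintenance bill, and an assumed amortization window. The point is only the shape of the break-even math, not a prediction.

```python
# Back-of-the-envelope sketch with entirely hypothetical numbers: at what
# up-front price would a fast food robot break even against a $15/hour worker?
HOURLY_WAGE = 15.00          # proposed minimum wage, dollars per hour
HOURS_PER_YEAR = 2000        # roughly one full-time worker, assumed
ANNUAL_MAINTENANCE = 5000    # assumed yearly upkeep for the robot, dollars
AMORTIZATION_YEARS = 5       # assumed useful life of the machine

annual_labor_cost = HOURLY_WAGE * HOURS_PER_YEAR
break_even_price = (annual_labor_cost - ANNUAL_MAINTENANCE) * AMORTIZATION_YEARS

print(f"Yearly cost of one worker: ${annual_labor_cost:,.0f}")
print(f"A robot cheaper than ${break_even_price:,.0f} pays for itself in "
      f"{AMORTIZATION_YEARS} years under these assumptions")
```

Under those made-up inputs, a machine priced below roughly $125,000 wins on cost over five years; change any one of them and the answer swings wildly, which is exactly why the timing is anyone’s guess.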

[ image: ultron ]

There’s something to be said about not taking comic books and sci-fi too seriously when you’re trying to predict the future and prepare for a potential disaster. For example, in Age of Ultron, a mysterious alien artificial intelligence tamed by a playboy bazillionaire using a human wrecking ball as a lab assistant in a process that makes most computer scientists weep when described during the film, decides that because its mission is to save the world, it must wipe out humanity because humans are violent. It’s a plot so old, one imagines that an encyclopedia listing every time it’s been used is itself covered by its own hefty weight in cobwebs, and yet, we have many famous computer scientists and engineers taking it seriously for some reason. Yes, it’s possible to build a machine that would turn on humanity because the programmers made a mistake or it was malicious by design, but we always omit the humans involved in and responsible for the design and implementation, and go straight to treating the machine as its own entity in which the error lies.

And the same error repeats itself in an interesting, but ultimately flawed idea by Zeljko Svedic, which says that an advanced intellect like an Ultron wouldn’t even bother with humans since its goals would probably send it deep into the Arctic and then to the stars. Once an intelligence far beyond our own emerges, we’re just gnats that can be ignored while it goes about, working on completing its hard to imagine and even harder to understand plans. Do you really care about a colony of bees or two and what it does? Do you take time out of your day to explain to it why it’s important for you to build rockets and launch satellites, as well as how you go about it? Though you might knock out a beehive or two when building your launch pads, you have no ill feelings against the bees and would only get rid of as many of them as you have to and no more. And a hyper-intelligent AI system would do its business the same exact way.

And while sadly, Vice decided on using Eliezer Yudkowsky for peer review when writing its quick overview, he was able to illustrate the right caveat to the idea of an AI which will just do its thing with only a cursory awareness of the humans around it. This AI is not going to live in a vacuum and needs vast amounts of space and energy to run itself in its likeliest iteration, and we, humans, are sort of in charge of both at the moment, and will continue to be if and when it emerges. It’s going to have to interact with us and while it might ultimately leave us alone, it will need resources we’re controlling and with which we may not be willing to part. So as rough as it is for me to admit, I’ll have to side with Yudkowsky here in saying that dealing with a hyper-intelligent AI which is not cooperating with humans is more likely to lead to conflict than to a separation. Simply put, it will need what we have and if it doesn’t know how to ask nicely, or doesn’t think it has to, it may just decide to take it by force, kind of like we would do if we were really determined.

Still, the big flaw with all this overlooked by Yudkowsky and Svedic is that AI will not emerge just like we see in sci-fi, ex nihilo. It’s more probable to see a baby born to become an evil genius at a single digit age than it is to see a computer do this. In other words, Stewie is far more likely to go from fiction to fact than Ultron. But because they don’t know how it could happen, they make the leap to building a world outside of a black box that contains the inner workings of this hyper AI construct as if how it’s built is irrelevant, while it’s actually the most important thing about any artificially intelligent system. Yudkowsky has written millions, literally millions, of words about the future of humanity in a world where hyper-intelligent AI awakens, but not a word about what will make it hyper-intelligent that doesn’t come down to “can run a Google search and do math in a fraction of a second.” Even the smartest and most powerful AIs will be limited by the sum of our knowledge which is actually a lot more of a curse than a blessing.

Human knowledge is fallible, temporary, and self-contradictory. We hope that when we apply immense pattern sifters to billions of pages of data collected by different fields, we will find profound insights, but nature does not work that way. Just because you made up some big, scary equations doesn’t mean they will actually give you anything of value in the end, and every time a new study overturns any of these data points, you’ll have to change these equations and run the whole thing from scratch again. When you bank on Watson discovering the recipe for a fully functioning warp drive, you’ll be assuming that you were able to prune astrophysics of just about every contradictory idea about time and space, both quantum and macro-cosmic, know every caveat involved in the calculations or have taught Watson how to handle them, that all the data you’re using is completely correct, and that nature really will follow the rules that your computers just spat out after days of number crunching. It’s asinine to think it’s so simple.

It’s tempting and grandiose to think of ourselves as being able to create something that’s much better than us, something vastly smarter, more resilient, and immortal to boot, a legacy that will last forever. But it’s just not going to happen. Our best bet to do that is to improve on ourselves, to keep an eye on what’s truly important, use the best of what nature gave us and harness the technology we’ve built and understanding we’ve amassed to overcome our limitations. We can make careers out of writing countless tomes pontificating on things we don’t understand and on coping with a world that is almost certainly never going to come to pass. Or we could build new things and explore what’s actually possible and how we can get there. I understand that it’s far easier to do the former than the latter, but all things that have a tangible effect on the real world force you not to take the easy way out. That’s just the way it is.

[ image: game controller ]

Recently, a number of tech news sites announced that two people were convicted as felons for stealing about $8,000 in virtual loot from Blizzard’s Diablo III, trumpeting this case as a possible beginning of real world punishments for virtual crimes. However, since the crime of which they were found guilty is infecting their victims with malware, then using said malware to take control of their characters and steal their stuff to resell for real world money, their case is nothing new as far as the law is concerned. Basically, the powers that be at Blizzard just didn’t want the duo to get off with a slap on the wrist for their behavior and were only able to secure damages thanks to the fact that a virus designed to give a backdoor into a victim’s system was used. But there’s definitely some pressure to turn virtual crimes in multiplayer games into real ones…

[A] Canadian newscaster reported that some advocates would like to see people charged with virtual rape when they modify games like Grand Theft Auto so … their characters can simulate sexually assaulting other players. Given the increasing realism of video games, research being done to improve virtual reality, and expected popularity of VR glasses like those soon to be commercially available from Oculus Rift, there would almost certainly be more cases of crimes committed in virtual spaces spilling out into IRL courts.

All right, let’s think about that for a moment. GTA is a game in which you play a sociopath who’s crime-spreeing his way around whatever locale the latest edition features. Mods that enable all sorts of disturbing acts are kind of expected within the environment in question. But consider a really important point. Virtual sexual assaults can be stopped by quitting the game while a real one can’t just be stopped as soon as it starts. Likewise, the crime is against an object in servers’ memory, not a real person. How exactly would we prosecute harm to a virtual character that could be restored like nothing ever happened? The same thing would apply to a digital murder, like in the Diablo III case. What was the harm if the characters and their loot were reset? We can’t bring a real murder victim back to life so we punish people for taking a life, but what if we could, and the only thing left to settle was how much to compensate for mental anguish?

Of course it would be nice to see harsher treatment of online stalking and harassment since its potential to do a lot of serious harm is often underestimated by those who have few interactions in today’s virtual worlds, but prosecuting people for virtual rape, or murder, or theft in games, no less, seems like a big overreach. It’s one thing when such crimes are carried out, or threatened against very real people through the use of MMORPGs or social media. But it’s something altogether different when the crime can be undone with a few clicks of a mouse and the victim is nothing more than a large collection of ones and zeroes. If we criminalize what some people do to virtual characters in one category of games, what sort of precedent would it set for others? Who would investigate these crimes? How? Who would be obliged to track every report and record every incident? It’s one of those thoughts that comes from a good place, but poses more problems than it solves and raises a lot of delicate free speech questions…

[ image: touch screen ]

Hiring people is difficult, no question, and in few places is this more true than in IT because we decided to eschew certifications, don’t require licenses, and our field is so vast that we have to specialize in a way that makes it difficult to evaluate us in casual interviews. With a lawyer, you can see that he or she passed the bar and had good grades. With a doctor, you can see years of experience and a medical license. You don’t have to ask them technical questions because they obviously passed the basic requirements. But software engineers work in such a variety of environments and with such different systems that they’re difficult to objectively evaluate. What makes one coder or architect better than another? Consequently, tech blogs are filled with just about every kind of awful advice for hiring them possible, and this post is the worst offender I’ve seen so far, even more out of touch and self-indulgent than Jeff Atwood’s attempt.

What makes it so bad? It seems to be written by someone who doesn’t know how real programmers outside of Silicon Valley work, urging future employers to demand submissions to open, public code repositories like GitHub and portfolios of finished projects to explore, and with all seriousness telling them to dismiss those who won’t publish their code or have the bite-sized portfolio projects for quick review. Even yours truly, living and working in the Silicon Beach scene, basically Bay Area Jr. for all intents and purposes, would be fired in an instant for posting code from work. Most programmers do not work on open source projects but closed source software meant for internal use or for sale as a closed source, cloud-based, or on premises product. We have to deal with patents, lawyers, and often regulators and customers before a single method or function becomes public knowledge. But the author, Eric Elliot, ignores this so blithely, it just boggles the mind. It’s as if he’s forgotten that companies actually have trade secrets.

Even worse are Elliot’s suggestions for how to gauge an engineer’s skills. He advocates assigning candidates a real unit of work, straight from the company’s team queue. Not only is this ripe for abuse because it basically gives you free or really discounted highly skilled work, but it’s also going to confuse a candidate because he or she needs to know about the existing codebase to come up with the right solution to the problem all while you’re breathing down his or her neck. And if you pick an issue that really requires no insight into the rest of your product, you’ve done the equivalent of testing a marathoner by how well she does a 100 meter dash. This test can only be either too easy to be useful or too hard to actually give you real insight into someone’s thought process. Should you decide to forgo that, Elliot wants you to give the candidate a real project from your to-do list while paying $100 per hour, introducing everything wrong with the previous suggestion with the added bonus of now spending company money on a terrible, useless, irrelevant test.

Continuing the irrelevant recommendations, Elliot also wants candidates to have blogs and long running accounts on StackOverflow, an industry famous site for programmers to ask questions while advising each other. Now sure, I have a blog, but it’s not usually about software and after long days of designing databases, or writing code, or technical discussions, the last thing I want is to write posts about all of the above and have to promote it so it actually gets read by a real, live human being other than an employer every once in a while, instead of just shouting into the digital darkness to have it seen once every few years when I’m job hunting. Likewise, how fair is it to expect me to do my work and spend every free moment advising other coders for the sake of advising them so it looks good to a future employer? At some point between all the blogging, speaking, freelancing, contributing to open source projects, writing books, giving presentations, and whatever else Elliot expects of me, when the hell am I going to have time to actually do my damn job? If I was good enough to teach code to millions, I wouldn’t need him to hire me.

But despite being mostly bad, Elliot’s post does contain two actually good suggestions for trying to gauge a programmer’s or architect’s worth. One is asking the candidate about a real problem you’re having, and about the problems they’ve faced in the past and how they solved them. You should try to remove the coding requirement so you can just follow pure abstract thought and research skills for which you’re ultimately paying. Syntax is bullshit, you can Google the right way to type some command in a few minutes. The ability to find the root of a problem and ask the right questions to solve it is what makes a good computer scientist you’ll want to hire, and experience with how to diagnose complex issues and weigh solutions to them is what makes a great one who will be an asset to the company. This is how my current employer hired me and their respect for both my time and my experience is what convinced me to work for them, and the same will apply to any experienced coder you’ll be interviewing. We’re busy people in a stressful situation, but we also have a lot of options and are in high demand. Treat us like you care, please.

And treating your candidates with respect is really what it’s all about. So many companies have no qualms about treating those who apply for jobs as non-entities who can be ignored or given ridiculous criteria for asinine compensation. Techies definitely fare better, but we have our own problems to face. Not only do we get pigeonholed into the equivalent of carpenters who should be working only with cherry or oak instead of just the best type of wood for the job, but we are now being told to live, breathe, sleep, and talk our jobs 24/7/365 until we take our last breath at the ripe old age of 45 as far as the industry is concerned. Even for the most passionate coders, at some point, you want to stop working and talk about or do something else. This is why I write about popular science and conspiracy theories. I love what I do, working on distributed big data and business intelligence projects for the enterprise space, but I’m more than my job. And yes, when I get home, I’m not going to spend the rest of my day trying to prove to the world that I’m capable of writing a version of FizzBuzz that compiles, no matter what Elliot thinks of that.

[ image: sleeping cell phone ]

Correlation does not mean causation. While it can certainly hint at causation, without evidence showing it, correlation is either curious or outright irrelevant. We could plot the increase in the number of skyscrapers across the world next to the rise of global obesity cases and claim that skyscrapers cause obesity, but if we can’t explain how a really tall building would trigger weight gain, all we did was draw two upward sloping lines on an arbitrary chart. And the same thing is happening with the good ol’ boogeyman of cell phone radiation, which is supposedly giving us all brain tumors. So, were you to take Mother Jones’ word for it, there are almost 200 scientists armed with over 2,000 studies showing cell phone usage causes gliomas, or cancerous tumors in the central nervous system. When you follow the links, you will find a small group of scientists and engineers signing vaguely worded letters accusing corporate fat cats, who care nothing for human lives, of killing us for profit with cell phones, wi-fi, and other microwave signals that have been saturating our atmosphere for the last half century.
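
For what it’s worth, the skyscrapers-and-obesity point is easy to demonstrate with a few lines of Python and completely made-up numbers: any two series that merely trend upward together will produce an impressive correlation coefficient, which is precisely why a correlation by itself proves nothing.

```python
# A minimal sketch of the skyscrapers-vs-obesity point: two series that both
# happen to trend upward will correlate strongly even with no causal link.
# The numbers below are invented purely for illustration.
skyscrapers = [120, 150, 190, 240, 310, 400]         # hypothetical yearly counts
obesity_pct = [18.0, 19.5, 21.0, 23.0, 24.5, 26.0]   # hypothetical rates

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"correlation: {pearson(skyscrapers, obesity_pct):.2f}")
# Prints a value near 1.0, yet tall buildings obviously don't cause weight gain.
```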

Here’s the bottom line. While there have been ever so slight, tortured correlations between cell phone use and gliomas, no credible mechanism to explain how cell phones would cause them has ever been shown, and every study that purports to have observed a causative mechanism sees it only in a sterile lab, watching exposed cells in petri dishes. If every such experiment were truly applicable to the entire human body, we’d have a cure for every known type of cancer, as well as drugs that would let us live well into our fifth century. Cells outside the protective bubble of skin, clothes, blood, and without the influence of countless other processes in our bodies and outside of them are the weakest, most speculative level of evidence one could try to muster in showing that electromagnetic fields could cause cancer. My hypochondriacal friends, the words in vitro and in vivo sound similar, but in practice, the two are very, very different. We find more cases of cancer every year not because we’re mindlessly poisoning ourselves with zero regard for the consequences, but because we’re getting really good at finding it.

Just like in the not too distant past people worried that traveling at the ungodly, indecent, not at all meant for humans speed of 25 miles per hour in a train would cause lifelong damage, we’re now dealing with those who believe that all these newfangled electronics can’t be good for us if they’re invisible and have the term “radiation” in their official description. They’re terribly afraid, but unable to offer a plausible mechanism for harm, they rebut skeptics with histrionics invoking tobacco industry denialism, anti-corporatism, and full blown conspiracy theories, calling those in doubt communication industry and electronics shills. Now, for full disclosure, I should note that I work with telephony in a very limited capacity. My work centers around what to do with VoIP or other communications data, but that would be enough for those blowing up the Mother Jones’ comment section for that article to dismiss me as a paid shill. Should I protest and show my big doubts about their ideas, they will conveniently back away from calling me a shill sent to spread propaganda and instead declare that I’m just a naive sap doomed to suffer in the near future.

It’s infuriating really. Yes, yes, I get it goddamn it, Big Tobacco lied after science ruled that their product was killing their customers and spent billions trying to improve their public image. But in that case, the scientists demonstrated irrefutable in vivo proof of the crippling effects of nicotine and cigarette tar on lab animals, identifying dozens of chemical culprits and how they damaged healthy tissues to trigger tumor growth. Sleazy lawyers were trying to stem a tsunami of quality studies and cold, hard numbers, not vague speculative ideas about how maybe cigarettes can cause cancer while lab studies on rats and mice failed to turn up anything at all. A preemptive comparison of the two demonstrates not the rhetorical sophistication of the person making it, but intellectual laziness and utter ignorance of how science actually works, and it serves only to clear the debate of any fact or opinion with which the conspiracy theorist doesn’t agree. It’s a great way to build an echo chamber, but a lousy way to make decisions about the quality and validity of what the media sells you. It is, after all, worried about hits, not facts.

But hold on, why would someone latch onto the idea that cell phones and GMOs cause cancer, and there’s some shadowy cabal of evil corporations who want to kill us all either for the benefit of the New World Order or their bank accounts, and refuse to let this notion go like a drowning man who can’t swim clinging to a life raft in the open ocean, with sharks circling under his feet? Consider that you have a 33% chance of having cancer in your lifetime, and our modern, more sedentary lifestyles will hurt your health long before that. We can blame genetics, the fact that getting old sucks and we don’t have a cure for aging, and that there is no perfect way to cheat nature and avoid degenerative diseases completely, that we can only stave them off. Or we can find very human villains who we can overthrow, or at least plot against, responsible for all this as they contemplate killing us for fun and profit with deadly cell phones, toxic food, and poisonous drugs that kill us faster to aid their nefarious goals. We can’t fight nature, but we can fight them, and so we will. Even if they aren’t real, but projections of our fear of mortality and our inability to control our fate onto equally fallible collections of humans who sometimes do bad things.

[ image: sad robots ]

And now, how about a little classic Singularity skepticism after the short break? What’s that? It’s probably a good idea to go back in time and revisit the intellectual feud between Jaron Lanier, a virtual reality pioneer turned Luddite-lite in recent years, and Ray Kurzweil, the man who claims to see the future and generally has about the same accuracy as a psychic doing a cold reading when he tries? Specifically the One-Half of a Manifesto vs. One-Half of an Argument debate, the public scuffle now some 15 years old which is surprisingly relevant today? Very well, my well-read imaginary reader, whatever you want. Sure, this debate is old and nothing in the positions of the personalities involved has changed, but that’s actually what makes it so interesting, that a decade and a half of technological advancements and dead ends didn’t budge either of the people who claim to be authorities on the subject matter. And all of this is in no small part because the approach from both sides was to take a distorted position and preach it past each other.

No, this isn’t a case when you can get those on opposing sides to compromise on something to arrive at the truth, which is somewhere in the middle. Both of them are very wrong about many basic facts concerning the economics, the technology, and our understanding of what makes one human for the foreseeable future, and they build strawmen to assault each other with their errors, clinging to their old accomplishments to argue from authority. Lanier has developed a vision of absolute gloom and doom in which algorithms and metrics are made to take over for humans by engineers who place zero value on human input and interaction. Kurzweil insists that Lanier can only see the problems to overcome and became a pessimist solely because he can’t solve them, while in the Singularitarian world, the magic of exponential advancement will eventually solve it all with computers armed with super-smart AI, the very AI Lanier is convinced will make humanity obsolete not by being smarter than humans, but through the actions of those who believe it is.

What strikes me as bizarre is that neither of them ever looks at the current trend of having machines perform computationally tedious, complex calculations, offloading to them the things we’ve long known computers do better and more accurately than we do, and then having us make decisions based on that information. Computers will not replace us. We’re the ones with the creative ideas, goals, and motivation, not them. We’re the ones that tell them what to do or what to calculate and how to calculate it. Today, we’re going through a period of what we could generously call creative destruction in which some jobs are sadly becoming obsolete and we’re lacking the political spine to apply what we know are policy fixes to political problems, which is unfair and cruel to those affected. But the idea that this is a political problem, not a technical one, is not even considered. Computers are their hammers and all they see are nails, so they will hammer away at these problems until they go away, and wonder why they refuse to.

Fail to grasp both the promise of AI and human/machine interfaces and search only for downsides without considering solutions, as Lanier does, or overestimate what they can do based on wildly unrealistic notions from popular computer science news headlines, looking only for upsides without even acknowledging problems or limitations, as Kurzweil does, and you get optimism and pessimism recycling the same arguments against each other for a decade and a half while omitting the human dimension of the problems they manage to describe, the very dimension both claim is the most important. If humans are greater than the sum of their parts, as Lanier argues, why would they be displaced solely by a fancy enough calculator, having nothing useful to offer past making more computers? And if humans are so easy to boil down to a finite list of parts and pieces, why is it that we can’t define what makes them creative and how to imbue machines with the same creativity outside of a well defined problem space limited by propositional logic? Try to answer these questions and we’d have a real debate.

[ image: crt head ]

Humans beware. Our would-be cybernetic overlords made a leap towards hyper-intelligence in the last few months as artificial neural networks can now be trained on specialized chips which use memristors, an electrical component that can remember the flow of electricity through it to help manage the amount of current required in a circuit. Using these specialized chips, robots, supercomputers, and sensors could solve complex real world problems faster, easier, and with far less energy. Or at least this is how I’m pretty sure a lot of devoted Singularitarians are taking the news that a team of researchers created a proof of concept chip able to house and train an artificial neural network with aluminum oxide and titanium dioxide electrodes. Currently, it’s a fairly basic 12 by 12 grid of “synapses”, but there’s no reason why it couldn’t be scaled up into chips carrying billions of these artificial synapses that sip about the same amount of power as a cell phone imparts on your skin. Surely, the AIs of Kurzweilian lore can’t be far off, right?
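
For the curious, here is a rough conceptual sketch in Python of what such a crossbar of artificial synapses does, modeled as nothing more than a grid of adjustable weights nudged by a simple delta rule. The real chip works in analog voltages and currents and uses its own training scheme, so treat this as an analogy, not a description of the hardware.

```python
import random

# Conceptual sketch only: a memristor crossbar can be thought of as a grid of
# programmable conductances that multiplies an input vector by a weight matrix
# in a single analog step. This toy model mimics that with plain floats; it is
# not the training scheme used in the Prezioso et al. paper.
SIZE = 12  # the proof-of-concept chip's 12-by-12 grid of "synapses"

# each crosspoint stores a small conductance acting as a synaptic weight
weights = [[random.uniform(0.0, 1.0) for _ in range(SIZE)] for _ in range(SIZE)]

def crossbar_output(inputs):
    """Vector-matrix multiply: currents summed along each output column."""
    return [sum(inputs[i] * weights[i][j] for i in range(SIZE)) for j in range(SIZE)]

def train_step(inputs, targets, rate=0.01):
    """Nudge each conductance toward reducing the output error (delta rule)."""
    outputs = crossbar_output(inputs)
    for i in range(SIZE):
        for j in range(SIZE):
            weights[i][j] += rate * (targets[j] - outputs[j]) * inputs[i]

example_in = [random.uniform(0.0, 1.0) for _ in range(SIZE)]
example_target = [0.5] * SIZE
for _ in range(100):
    train_step(example_in, example_target)
print(crossbar_output(example_in)[:3])  # outputs drift toward the targets
```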

By itself, the design in question is a long-proposed solution to the problem of how to scale a big artificial neural network when relying on the cloud isn’t an option. Surely, if you use Chrome, you’ve right-clicked on an image and asked the search engine to find it on the web and suggest similar ones. This is powered by an ANN which basically carves up the image you send to it into hundreds or thousands of pieces, each of which is analyzed for information that will help it find a match or something in the same color palette, and hopefully, the same subject matter. It’s not perfect, but when you’re aware of its limitations and use it accordingly, it can be quite handy. The problem is that to do its job, it requires a lot of neurons and synapses, and running them is very expensive from both a computational and a fiscal viewpoint. It has to take up server resources which don’t come cheap, even for a corporate Goliath like Google. A big part of the reason why is the lack of specialization for the servers which could just as easily execute other software.
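
As a loose illustration of that carve-and-compare idea, and emphatically not Google’s actual pipeline, here is a sketch that tiles an image, reduces each tile to its average color, and compares the resulting feature vectors. The file names are placeholders and the Pillow library is assumed to be installed.

```python
# A heavily simplified sketch of "carve the image into pieces and compare":
# tile each image, reduce every tile to its average color, and compare the
# resulting feature vectors. Real reverse image search is far more elaborate.
from PIL import Image

def tile_features(path, grid=8):
    """Return a feature vector of per-tile average RGB values."""
    img = Image.open(path).convert("RGB").resize((grid * 16, grid * 16))
    features = []
    for gy in range(grid):
        for gx in range(grid):
            tile = img.crop((gx * 16, gy * 16, (gx + 1) * 16, (gy + 1) * 16))
            pixels = list(tile.getdata())
            n = len(pixels)
            features.extend(sum(p[c] for p in pixels) / n for c in range(3))
    return features

def cosine_similarity(a, b):
    """Closer to 1.0 means more similar overall composition and palette."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

# Usage sketch with placeholder file names:
# score = cosine_similarity(tile_features("query.jpg"), tile_features("candidate.jpg"))
```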

Virtually every computer used today is based on what’s known as von Neumann architecture, a revolutionary idea back when it was proposed despite seeming obvious to us now. Instead of a specialized wiring diagram dictating how computers would run programs, von Neumann wanted programmers to just write instructions and have a machine smart enough to execute them with zero changes in their hardware. If you asked your computer whether it was running some office software, a game, or a web browser, it couldn’t tell you. To it, every program is a set of specific instructions pushed onto a stack on each CPU core, read and completed one by one, and then popped to make room for the next order. All of these instructions boil down to where to move a byte or series of bytes in memory and to what their values should be set. It’s perfect for when a computer needs to run anything and everything, and you either have no control over what it runs, or want it to be able to run whatever software you throw its way.
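
A toy example makes the point: the little machine below just fetches whatever instruction comes next and moves or sets values in memory accordingly, with no notion of what the program “is”. The three-instruction set is invented for this sketch.

```python
# A toy illustration of the stored-program idea: the machine has no notion of
# what "kind" of program it runs; it simply fetches instructions from memory
# and executes them one by one. The instruction set is invented for the sketch.
def run(program, memory):
    pc = 0  # program counter
    while pc < len(program):
        op, arg1, arg2 = program[pc]
        if op == "SET":        # put a value into a memory cell
            memory[arg1] = arg2
        elif op == "MOVE":     # copy one cell's value into another
            memory[arg1] = memory[arg2]
        elif op == "ADD":      # add one cell's value into another
            memory[arg1] += memory[arg2]
        pc += 1                # fetch the next instruction
    return memory

# Whether this "is" a game or a spreadsheet is meaningless to the machine:
print(run([("SET", 0, 2), ("SET", 1, 3), ("ADD", 0, 1), ("MOVE", 2, 0)], [0] * 4))
# -> [5, 3, 5, 0]
```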

In computer science, this ability to hide the nitty-gritty details of how a complex process on which a piece of functionality relies actually works is called an abstraction. Abstractions are great, I use them every day to design database schemas and write code. But they come at a cost. Making something more abstract means you incur an overhead. In virtual space, that means more time for something to execute, and in physical space that means more electricity, more heat, and in the case of cloud based software, more money. Here’s where the memristor chip for ANNs has its time to shine. Knowing that certain computing systems like routers and robots could need to run a specialized process again and again, the researchers designed a purpose-built piece of hardware which does away with abstractions, reducing overhead and allowing them to train and run their neural nets with just a little bit of strategically directed electricity.
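
Here is a contrived but runnable illustration of that overhead: the same addition done directly and then through a couple of extra layers of indirection. The function names are made up and the absolute timings will vary by machine; the only point is that every added layer costs something.

```python
import timeit

# The same work done directly versus through layers of indirection. The
# absolute numbers depend on the machine; the point is only that each extra
# layer of abstraction adds a measurable cost.
def add(a, b):
    return a + b

def add_with_logging_hook(a, b):
    return add(a, b)                    # an extra layer that could log, validate, etc.

def add_through_api(a, b):
    return add_with_logging_hook(a, b)  # yet another layer

direct = timeit.timeit("3 + 4", number=1_000_000)
layered = timeit.timeit("f(3, 4)", globals={"f": add_through_api}, number=1_000_000)
print(f"direct: {direct:.3f}s  layered: {layered:.3f}s")
```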

Sure, that’s neat, but it’s also what an FPGA, or Field Programmable Gate Array, can do already. Unlike these memristor chips, however, FPGAs can’t be easily retrained to run neural nets with a little reverse current and a new training session; they need to be reconfigured, and they can’t use less power by “remembering” the current. This is what makes this experiment so noteworthy. It created a proof of concept for a much more efficient alternative to an FPGA just as techies are looking for new ways to speed up resource-hungry algorithms that require probabilistic approaches. And this is also why these memristor chips won’t change computing as we know it. They’re meant for very specific problems as add-ons to existing software and hardware, much like GPUs are used for intensive parallelization while CPUs handle day to day applications, without one substituting for the other. The von Neumann model is just too useful and it’s not going anywhere soon.

While many an amateur tech pundit will regale you with a vision of super-AIs built with this new technology taking over the world, or becoming your sapient 24/7 butler, the reality is that you’ll never be able to build a truly useful computer out of nothing but ANNs. You will lose the flexible nature of modern computing and the ability to just run an app without worrying about training a machine how to use it. These chips are very promising and there’s a lot of demand for them to hit the market sooner than later, but they’ll just be another tool to make technology a little more awesome, secure, and reliable for you, the end user. Just like quantum computing, they’re one means to tackling the growing list of demands for our connected world without making you wait for days, if not months, for a program to finish running and a request to complete. But the fact that they’re not going to become the building blocks of an Asimovian positronic brain does not make them any less cool in this humble techie’s professional opinion.

See: Prezioso, M., et al. (2015). Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature, 521(7550), 61-64. DOI: 10.1038/nature14441