Archives For technology


Generally, when skeptics or popular science writers talk about medicine and money, it’s to ward off something one could call an argument ad shillium, or rejecting scientific studies outright with declarations that anyone who sticks up for doctors and pharmaceutical companies over the hot and trendy snake oil salesperson of the month must be a paid shill. Shilling certainly happens in both the real world and online, but when one’s argument rests on basic science, money is not a topic relevant to the conversation. However, that doesn’t mean it’s not important when new ideas come along and gain some serious traction. Case in point, Theranos, a company which a lot of people rightly suspect could shake up healthcare in the United States by offering dozens of blood tests using just a drop of blood at your corner pharmacy, is facing a barrage of questions as to how exactly its tests work and seems to be unwilling to tell anyone about its lab on a chip.

Ordinarily, this is where an experienced skeptic would look for signs of quackery. Useless tests, pseudoscientific mumbo-jumbo on the website, avoidance of the FDA, and special pleading for the enigmatic technology which offers vague benefits that don’t run afoul of the agency’s rules for the sale of pharmaceuticals and medical devices. But that’s not the case with Theranos. In fact, the company recently got a nod from the FDA to continue its work and is seeking approval of its technology and testing methods, and scientists who have tried to parse how it can test for so many things with so little blood say that it’s more than likely upgrading old technology into a new, compact toolkit. There’s no voodoo or snake oil here, just good old fashioned science and faster, better computers and machinery. Furthermore, the fees for each test are posted openly, and they’re a lot less than what’s offered by its competitors, whose pricing is opaque at best.

So if there’s nothing amiss at Theranos, why all the secrecy? Well, after many millions spent on research, development, and testing, the company wants to expand significantly, and if it shares how it does what it does with the world, especially if it’s just an overhaul of existing methodology with better machinery, its competitors can quickly catch up and limit its growth. I’m sure it’s also trying to avoid getting patent trolled and bogged down in expensive litigation, more than likely of the frivolous, made-to-line-lawyers’-pockets variety, since there’s no shortage of people with an abandoned medical testing device patent from which a troll can manufacture an infringement or two and file in East Texas. Perhaps this is unfair to scientists, and to some degree to patients who may want a second opinion after Theranos’ tests show something alarming, but this is the result of setting up a healthcare system with opaque pricing and strict regulation, and of littering the technology world with legal minefields made of easy to obtain, vaguely worded frivolous patents.


Now, I don’t mean to alarm you, but if Boeing is serious about its idea for the fusion powered jet engine and puts it into a commercial airplane in the near future more or less as it is now, you’re probably going to be killed when it’s turned on as the plane gets ready to taxi. How exactly your life will end is a matter of debate, really. The most obvious way is being poisoned by a shower of stray neutrons and electrons emanating from the fusion process, and by the fissile shielding, which would absorb some of the neutrons and start a chain reaction much like in a commercial fission plant, but with basically nothing between you and the radiation. If you want to know exactly what that would do to your body, and want to lose sleep for a few days, simply do a search — and for the love of all things Noodly not an image search, anything but that — for Hisashi Ouchi. Another way would be a swift crash landing after the initial reaction gets the plane airborne but just can’t continue consistently enough to stay in the air. A third involves electrical components fried by a steady radioactive onslaught giving out mid-flight. I could go on and on, but you get the point.

Of course this assumes that Boeing would actually build such a jet engine, which is pretty much impossible without some absolutely amazing breakthroughs in physics and material sciences, and a subsequent miniaturization of all those huge leaps into something that will fit into commercial jet engines. While you’ve seen something the size of a NYC or San Francisco studio apartment on the side of each wing on planes that routinely cross oceans, that’s not nearly enough space for even one component of Boeing’s fusion engine. It would be like planning to stuff one of the very first computers into a Raspberry Pi back in 1952, when we theoretically knew that we should be able to do it someday, but had no idea how. We know that fusion should work. It’s basically the predominant high energy reaction in the universe. But we just can’t scale it down until we figure out how to negotiate turbulent plasma streams and charged particles repelling each other in the early stages of ignition. Right now, we can mostly recoup the energy from the initial laser bursts, but we’re still far off from breaking even on the whole system, much less generating more power.

Even in ten years there wouldn’t be lasers powerful enough to start fusion with enough net gain to send a jet down a runway. The most compact and energetic fission reactors today are used by submarines and icebreakers, but they’re twice the size of even the biggest jet engines, with weights measured in thousands of tons. Add between 1,000 pounds and a ton of uranium-238 for the fissile shielding and the laser assembly, and you’re quickly looking at close to ten times the maximum takeoff weight of the largest twin-engine aircraft ever built. Even if you could travel in time and bring back the technology for all this to work, your plane could not land at any airport in existence. Just taxiing onto the runway would crush the tarmac. Landing would tear it to shreds as the plane drove straight through solid ground. And of course, it would rain all sorts of radioactive particles over its flight path. If chemtrails weren’t just a conspiracy theory for people who don’t know what contrails are, I’d take them over a fusion-fission jet engine, and I’m pretty closely acquainted with the fallout from Chernobyl, having lived in Ukraine when it happened.
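Just how lopsided those numbers are is easy to sanity check. Here’s a minimal sketch in which every figure is a rough assumption of mine rather than engineering data: a compact naval-style reactor on the order of 1,500 short tons, up to a ton of shielding per engine, and a maximum takeoff weight of roughly 352,000 kg, in the ballpark of the biggest twin-engine jets flying today.

```python
# Back-of-the-envelope check of the weight argument above.
# All figures are rough assumptions, not engineering data.

TON_KG = 907  # one US short ton in kilograms

reactor_weight_kg = 1_500 * TON_KG  # one compact fission reactor
shielding_kg = 1 * TON_KG           # upper bound on U-238 shielding
engine_count = 2

total_kg = engine_count * (reactor_weight_kg + shielding_kg)
mtow_kg = 352_000  # assumed max takeoff weight of a large twinjet

ratio = total_kg / mtow_kg
print(f"Engines alone would weigh {total_kg:,} kg, about {ratio:.1f}x MTOW")
```

Even with these charitable placeholders, the reactors alone outweigh the entire aircraft several times over before the airframe, fuel, or a single passenger enters the picture.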

So the question hanging in the air is why Boeing would patent an engine that can’t work without sci-fi technology. Partly, as noted by Ars in the referenced story, it shows just how easy it is for corporate entities with lots of lawyers to get purely speculative defensive patents. Knowing how engineers who design jet engines work, I’m betting that they understand full well that this is just another fanciful take on nuclear jet propulsion, which was briefly explored in the 1950s when the dream was nuclear powered everything. We’re also entertaining the idea of using small nuclear reactors for interplanetary travel which could ideally fit into an aircraft engine, though they lack all the necessary oomph for producing constant, powerful thrust. But one day, all of this, or even a few key components, could actually combine to produce safe, efficient nuclear power at almost any scale and be adopted into a viable jet engine design for a plane that would need to refuel a few times per year at most. Boeing wants to be able to exploit such designs while protecting its technology from patent trolls, so it seems likely that it nabbed this patent just in case, as a plan for a future that might never come, but needs to be protected should it actually arrive.

[ illustration by Adam Kop ]


Gawker really has it out for reddit and has for years. Blithely ignoring the many millions of users who’ll browse everything from makeup tips and funny pictures of animals, to relationship advice and startup ideas, engaging in perfectly civil exchanges of stories and perspectives, every post they publish goes after a small, seedy underbelly of the enormous site and pretends that every single subreddit is full of nothing but racists, bigots, misogynists, and trolls. From the very same site which slut-shamed a punchline of a politician it didn’t like, published celebrity revenge porn, and in general behaves like the TMZ of social media, recently came a high and mighty treatise from a man whose poor soul can’t bear to enjoy a million programmers trading tips on a site which can’t shut down recurring white supremacist forums with thousands of subscribers. Right. As all of us who’ve spent any time on the internet know, deleting something from the web once means it has vanished forever, never to return. It’s not like the white supremacists of reddit just set up new subs and new alts every time they get banned or their subreddits get shut down. Oh wait, they do.

Really, not only does Gawker seem to be willfully oblivious to how large websites for sharing user-generated content work, which is suspicious enough, but it also used this sudden moral epiphany as a prelude to its conspiracy theory about Pao’s ouster as CEO. As usual, I wouldn’t trust Nick Denton to report that two plus two still equals four without fact-checking it on my own, so my very strong recommendation would be to take this reddit bashing as simply one more hypocritical salvo at a site he uses as a punching bag and a repository of scandals when his existing well runs dry. Directing users to the very worst of an enormous set of forums just to pretend that the entire community is like that, or priming readers to go into the site looking for an evil bigoted misogynist to fight, only sets them up for a terrible experience. Jezebel will tell you a swarm of MRAs trawl the site looking for any excuse to post something awful about women, and yes, you’ll find a few every once in a while. What it conveniently omits is that they will quickly get voted down into oblivion, their offending comments requiring action on your part to view.

This pattern applies to homophobes, racists, and every other kind of bigot. Among hundreds of millions of voices, the statistical probability of running into some user with regressive or hateful opinions he or she is proud to voice is high enough to be a certainty. There are simply way too many people surfing the site to avoid it. However, they’re either a punchline or a subject of very vocal derision among the biggest, most trafficked, and most visible subreddits, which is why the average reddit MRA, or white supremacist, or homophobe, has to stick to small communities in which he could preach to his choir. He’ll be run out of any other one. Could reddit delete these evil subreddits then? If they’re aware of what they’re currently being called, yes. But think about it from the following perspective: why should they? They’ll just come back. Hate is like a zombie, its only urge is to perpetuate itself through assaults on the rest of us. Deleting a subreddit that’s dedicated to insulting women like r/redpill is not going to make the misogynists within suddenly have an epiphany and recant their tracts on why women should be abused and manipulated. It will just give them another annoyance in life to blame on women. Same idea with racists.

Sure, we can employ armies of moderators in the Philippines who are getting PTSD from trying to fight humanity’s darkest impulses on the web, and keep hitting the delete button. Then, we’d pat ourselves on the back for creating “safe spaces” with the mentally scarring work of a digital day labor sweatshop that will have to continue in perpetuity to keep them that way, and pretend that after sanitizing a few big sites we now live in a post-racial, gender-equal, sex-positive world where the sun is always shining and the clouds are a fluffy virgin white. This is what Europe has done with its criminal statutes against racist and bigoted speech. It still has just as many racists and bigots as ever, and its policies still encourage the subtle but constant segregation between natives and immigrants advocated by very popular right wing parties. Censoring hate speech is not doing Europe any favors, and it won’t do any for reddit, or even Americans at large. When we let those with regressive, archaic, and downright repugnant viewpoints speak their minds, they will never be able to claim the mantle of free speech martyrs speaking truth to power.

They will just self-identify as people with whom we don’t want to associate and let the hate filling their minds speak for itself. It won’t be “safe” or “respectful,” but it will let us know exactly where we stand as a society in regards to race, gender, and sexual attitudes. We do need a mechanism to police the most egregious, threatening, and out of control hatred and prevent it from turning into real world violence, but beyond that, we can either censor our way into deluding ourselves that we’ve done away with bigotry and hate, or choose to face the harsh truth. We can choose to be mad at reddit for not playing whack-a-mole with its worst members, or we can be happy that among the tens of millions of members, these tens of thousands are pariahs whose fanatical hatred is mocked, downvoted, and chased from subreddits they try to infest, limited to the very fringes where they’re constantly ostracized. And we can even use the hateful content they generate as a perfect counterpoint to the raving ex-girlfriend’s best friend’s cousin’s uncle on Facebook preaching that there’s no such thing as racism anymore with a few links showing the racists he claims don’t exist celebrating behaviors he claims are no more…


Imagine that every time you had to buy a lock for your house, you had to send a key to some far off government office which could use it to enter your house at any time. Whoever it sent would not be required to have a warrant, or may have obtained one in a secret procedure you’d have no right to challenge, or even talk about with others, and could make copies of anything you own, liable to be used against you in whatever investigation sent him there. And what if a greedy or desperate government clerk in charge of people’s keys sold them to gangs of thieves who now have access to your house, or if the government mandated that all locks should be easy to pick for its agents, since a key sent in by a person could be fake or misplaced? Sounds like the plot of a dystopian novel in which a dictator tries to consolidate newly found power, doesn’t it? And when questioned, could you not see this despot justifying such overreach by claiming it was for your protection and would only be used for catching and convicting the worst sort of violent and perverted criminals?

Well, a similar situation is currently happening in the tech world as governments demand that a system designed to keep your private data secure from prying eyes comes with a backdoor for spooks and cops. The data about your comings and goings, your searches for directions, your medical data, your browsing habits, your credit card information and sensitive passwords, they want it all to be accessible at the click of a button to stop all manner of evildoers. Just listen to a passionate plea from a New York District Attorney designed to make you think that encryption is only for the criminally malevolent mastermind trying to escape well-deserved justice…

This defendant’s appreciation of the safety that the iOS 8 operating system afforded him is surely shared by […] defendants in every jurisdiction in America charged with all manner of crimes, including rape, kidnapping, robbery, promotion of child pornography, larceny, and presumably by those interested in committing acts of terrorism. Criminal defendants across the nation are the principal beneficiaries of iOS 8, and the safety of all American communities is imperiled by it.

Wow, terrorists, pedophiles, rapists, kidnappers, and more, all in one sentence. If only he had found some way to work in illegal immigrants, we could have won a game of Paranoia Bingo. Notably missing from his list of principal beneficiaries of better encryption, however, are those trying to keep their banking and credit card information safe from the very defendants he’s so very keen on prosecuting. Who, by the way, vastly outnumber the defendants for whom some sort of an encryption defeating backdoor would be a huge boon for committing more crimes. If your primary goal is to stop crime, you should not be asking for a technical solution which would very quickly become the primary means of committing more of it. Computers will not understand the difference between a spy trying to catch a terrorist sleeper cell and a carder trying to get some magnetic stripe data for a shopping spree with someone else’s money. A backdoor that works for the former will work exactly the same way for the latter, and no amount of scaremongering, special pleading, and threats from the technically illiterate will ever change that fact.
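That symmetry is worth spelling out. Below is a deliberately weak toy cipher (repeating-key XOR, which is nothing like real cryptography) with hypothetical names, just to illustrate that an escrowed key works identically no matter whose hands it ends up in:

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR with a repeating key is symmetric: the same call both
    # encrypts and decrypts. A toy only; never use this for real data.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

user_key = b"my-secret-key"
escrow_key = user_key  # the copy held by the far off government office

message = b"banking password: hunter2"
ciphertext = xor_cipher(message, user_key)

# An investigator with the escrowed key can read the message...
assert xor_cipher(ciphertext, escrow_key) == message

# ...and so can a carder who stole the escrow database, because
# the math cannot tell a warrant from a heist.
stolen_key = escrow_key
assert xor_cipher(ciphertext, stolen_key) == message
```

The same point holds for any scheme where a decryption capability exists outside the owner’s control: the capability itself has no idea who is invoking it.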


Now, is it just me or are you not really a celebrity until you either have a naked photo spread of yourself in a random glossy magazine, or your very own sex tape? It’s almost as if the gossips who decide who’s who on national television won’t pay attention to you unless there’s either an attention-pleading nudie spread or a threat of a sex tape looming over your head. But alas, the heady days of the celebrity sex tape might be coming to an end, according to Amanda Hess, a conclusion she bases on the ever less enthusiastic reaction of the public to the latest scandals such as The Fappening and Hulk Hogan’s recorded foray into swinging. As Hess sees it, we’ve entered sex tape and celebrity nudity fatigue because there have simply been too many tapes, pictures, and rumors, and the trend is so widespread that very likeable entertainers are now targeted by hackers in search of sleaze. Instead of laughing at the lax security and overconfidence of C-list actors and actresses, and the desperate pleas for attention from D-list has-beens, we are now empathizing with the victims of invasions of privacy done to make a scuzzy buck off the shock value.

While this may all be true, I think there’s a very important piece of the puzzle Hess is missing in this regard and it has to do with the ubiquitous, internet-connected technology always within an arm’s reach. Back in the days of Tommy Lee and Pamela, you had to set up a camera, make a tape, have that tape duplicated, use fairly convoluted equipment to digitize it, upload it to a web server which you had to configure correctly to accept the format in which you digitized it, spread the word on countless message boards, manually submit it to a search engine, and finally, over the course of a few months actually get widespread notice of the sex tape. Just writing that out would be enough to make you winded, but also shows why celebrities thought they would be in the clear if they just hid their tapes well enough. But today, the camera is on your phone, video gets recorded in a standard format for which everyone has players, and with one-click uploads, you can go from casual sex to amateur porn stardom in a matter of minutes. And many do.

Having constant access to technology has also taken a great deal of flirting and hook ups to the web, where you can find anyone from a soul mate to quick, no-strings-attached fun. And much like the old joke about male masturbation, there are two types of people who use technology to help them flirt: those who send nudes, and those who lie about it. In fact, spies intercepting web cam and IM traffic on popular messaging platforms between regular people in the UK were just straight up shocked at how much nudity they saw. If the 11% figure they cited doesn’t seem that high to you, keep in mind that said spies were actually trying to do some targeted snooping, so most of the nudity they saw came after attempts to filter it out. We get naked for the camera so often, we overwhelm top notch government data centers with high tech filtering mechanisms to the point where “well, I tried searching for it and all this porn came up” is a real problem for spies on top secret versions of the internet built specifically to exclude civilian distractions and access.

It’s even a widespread problem for kids just entering puberty. Teens with low self-esteem and a hunger for approval and cred send naked pictures to each other all the time. Adults who need a confidence boost about their bodies can easily solicit strangers’ opinions in anonymous forums, even though they probably shouldn’t. And even when we take pains to make our adult pictures, videos, and chats private, all it takes is one small security hole or a careless moment, and bam, some hacker can get into our accounts and either harvest what we already have, or install very nasty malware to capture some of our sexual moments. Of course we could run with the notion that we shouldn’t share anything we don’t expect to be public, and that if there are naked pictures of us on the web, we deserve it. But this is a downright sociopathic line of reasoning, on par with defending a burglar on the grounds that you didn’t have stronger locks, while also scolding you for lacking the good sense to only buy things you were prepared to lose in a robbery. If you tried to protect your assets and failed, telling you to protect them better, or not have them, is asinine.

So what does this all have to do with the decline of the celebrity sex tape/leaked pic genre? We went from giddy curiosity, to boredom as such tapes were being released for publicity and a bit of cash, to a nasty feeling in the pit of our stomachs as we’ve now taken enough nudes or done enough adult things on the web to realize that we might be next. There are extortionists whose goal it is to trick you into getting sexual with them and then blackmail you. There’s the revenge porn business, perhaps the sleaziest scam of all time. When we know that celebrity nudity was really hacked rather than made in an attempt for another 15 minutes of fame, and we can also be compromised in much the same way, as two non-famous victims of The Fappening were, it becomes a lot less fun to watch these videos or pics. Rather than guilty pleasures brought to us by paparazzi in that TMZ celebs-behaving-badly school of tabloid gossiping, they very much hit home like the gross invasions of privacy they are. And not having enough means of stopping a nasty hack that will embarrass us, we cringe in reply, knowing we can suffer the same fate…


Over all the posts I’ve written about brain-machine interfaces and their promise for an everyday person, one of the key takeaways was that while the idea was great, the implementation would be problematic because doctors would be loath to perform invasive and risky surgery on a patient who didn’t necessarily need said surgery. But what if, when you want to link your brain to a new, complex, and powerful device, you could just get an injection of electrodes that unfurl into a thin mesh which surrounds your neurons and allows you to beam a potent signal out? Sounds like a premise for a science fiction novel, doesn’t it? Maybe something down the cyberpunk alley explored by Ghost In The Shell and The Matrix? Amazingly, no. It’s real, and it’s now being tested in rats with extremely positive results. Just 30 minutes after injection, the mesh unwound itself around the rats’ brains and retained some 80% of its ideal functionality. True, it’s not quite perfect yet, but this is a massive leap towards fusing our minds with machinery.

Honestly, I could write an entire book about all the things easy access to this technology could enable in the long run because the possibilities are almost truly endless. We could manipulate a machine miles away from ourselves as if we inhabited it, Avatar style, give locked in stroke victims a way to communicate and control their environment, extend our nervous systems into artificial limbs which can be fused with our existing bodies, and perhaps even challenge what it means to be a human and become a truly space faring species at some point down the line. Or we could use it to make video games really badass, because that’s where the big money will be after medicine, after which we’ll quickly diversify into porn. But I digress. The very idea that we’re slowly but oh so surely coming closer and closer to easy to implant brain-machine interfaces is enough to make me feel all warm and fuzzy from seeing science fiction turn into science fact, and twitch with anticipation of what could be done when it’s finally ready for human trials. Oh the software I could write and the things it could do with the power of the human brain and a cloud app…

[ illustration by Martin Lisec ]


Adrian Chen is somewhat of an expert on controversial social media content. After all, his most popular story was a damning expose of a forum moderator who posted all sorts of controversial and questionable content on reddit. But after sifting through the deepest and darkest dungeons of reddit and finding leaked content guidelines for Facebook moderators overseas, Chen finally got a shot at the big leagues and went to Russia to track down the HQ of the infamous army of internet trolls operated by the country’s intelligence services. The results weren’t pretty. While it seemed like a productive trip confirming much of what many of us already know, he fell for one of the oldest scams in the book and was used in a fake news article claiming that he was a CIA operative who was recruiting neo-Nazis to encourage anti-Russian protests. Which in Russia is about the moral equivalent of recruiting the pedophiles from NAMBLA to lobby states to change their age of consent laws. In case that wasn’t clear, they really, really hate neo-Nazis.

This is really par for the course when it comes to dealing with today’s Russian media, which has been feeding its citizens a steady diet of conspiracy theories. The people who tricked Chen are the same people who use David Icke as a political science expert, interviewing him while he goes on and on about American-driven New World Order-style machinations, then quickly cutting the cameras and microphones before he can point the finger at a group of alien lizards in charge of the planet. Just like the Soviet propagandists of the previous generation, they give it their all to make life outside of Russia seem downright hellish for the average person, and paint the world as being mostly aligned against Russia simply for the sake of keeping a former grand superpower down so they can easily steal nuclear weapons, vast oil and gas reserves, and lure impressionable, highly educated youth overseas with empty promises of wealth, luxury, and non-stop parties after work. I can’t tell you when it started, but I can tell you that it began in the Russian part of the web, as Chen accurately describes, and has gotten exponentially worse.

However, Russia is not unique in doing this. It may perhaps be one of the best troll factories out there, but it’s far from the only one. You can probably safely assume that a third of pretty much everything you see on the web is fake, created by trolls, paid shills, or click-farm workers whose job it is to add fake Facebook likes and Twitter followers for corporations, think tanks, and even political candidates. With the anonymity of the internet comes freedom, but with that freedom comes the understanding that it can be abused to present lies and facilitate frauds on a massive scale, and since many people still don’t take the internet seriously enough, one can get away with lying or scamming for cash with no real consequences. Ban fake accounts or trolls? Others will pop up in seconds. It’s like trying to slay a hydra that can also regrow its heads. All you can really do when it comes to dealing with the fake web is stay on alert, always double check what you see, and don’t be shy about scrutinizing accounts that look or feel wrong. You might not be able to catch every troll and fraud every time, but you’ll weed out the vast majority who want to recruit you to support a fraudulent cause, or trick you into spreading their lies…

[ illustration by Aaron Wood ]


Many jokes or satirical pieces are funny precisely because they have a nugget of truth in them, something we can all at least understand if not relate to. This is why a satirical piece about the opening of a new McDonald’s staffed solely by robots due to the management’s concern about campaigns to increase minimum wage to $15 per hour, fooled enough readers to merit its swift entry on Snopes. I can’t blame those who were fooled. After all, we do have the technology and as the Snopes entry points out, there are numerous McDonald’s locations in several European countries boasting high minimum wages where customers order using touchscreens instead of talking to cashiers. Bumping up minimum wages, especially as it’s happening in several rather expensive West Coast cities, could certainly be an impetus for replacing humans with machines the same way it’s being done in numerous other professions. Today, we can shrug at the satire and lament the fact that machines are now advanced enough to make some people obsolete in the job market. But give it some time and this may well be a real report in the news.

One of the sobering things about this kind of automation is that it isn’t being held back by a lack of robots capable enough to take over for humans. Those are already being built. In a structured and organized environment like a grocery store or a restaurant, with a finite number of pathways for machines to navigate, well known and understood obstacles, and clearly marked destinations, I would see no problem with robotic waiters summoned by touchscreen or shelf stocking bots today, other than their price tag. That’s right. Humans are doing certain types of work because it’s just cheaper to have them do it instead of machines. I really don’t think that $15 an hour wages can make these robots economically viable, much less cheaper, for many businesses over the next five years, but past that is anyone’s guess with economies of scale kicking in, the bugs shaken out, the quality improving, and the prices dropping. So it may be best to take that article not so much as satire, but as a warning. Another big wave of automation is coming and we need to be thinking about how to deal with it, not just debate it to death or oppose it with dogmas.
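To see why the five year horizon is the cautious part of that prediction, consider a crude break-even sketch. Every number below is a hypothetical placeholder of mine, not an industry figure:

```python
# Crude break-even model: one robot replacing one full-time worker.
# All figures are made-up assumptions for illustration only.

wage_per_hour = 15.0
hours_per_year = 2_000  # one full-time worker
labor_cost_per_year = wage_per_hour * hours_per_year  # $30,000

robot_price = 250_000            # hypothetical purchase price
robot_upkeep_per_year = 20_000   # maintenance, power, and software

# Years until the robot's savings cover its price tag
annual_savings = labor_cost_per_year - robot_upkeep_per_year
break_even_years = robot_price / annual_savings
print(f"Break-even after about {break_even_years:.0f} years")
```

With these placeholders the machine takes a quarter century to pay for itself, which is why it stays on the drawing board. But halve the price and the upkeep through economies of scale and the same arithmetic gives about six years, and suddenly the satire starts reading like a business plan.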


There’s something to be said about not taking comic books and sci-fi too seriously when you’re trying to predict the future and prepare for a potential disaster. For example, in Age of Ultron, a mysterious alien artificial intelligence tamed by a playboy bazillionaire using a human wrecking ball as a lab assistant in a process that makes most computer scientists weep when described during the film, decides that because its mission is to save the world, it must wipe out humanity because humans are violent. It’s a plot so old, one imagines that an encyclopedia listing every time it’s been used is itself covered by its own hefty weight in cobwebs, and yet, we have many famous computer scientists and engineers taking it seriously for some reason. Yes, it’s possible to build a machine that would turn on humanity because the programmers made a mistake or it was malicious by design, but we always omit the humans involved and responsible for designs and implementation and go straight to the machine as its own entity wherein lies the error.

And the same error repeats itself in an interesting, but ultimately flawed, idea from Zeljko Svedic, which holds that an advanced intellect like an Ultron wouldn’t even bother with humans since its goals would probably send it deep into the Arctic and then to the stars. Once an intelligence far beyond our own emerges, we’re just gnats that can be ignored while it goes about working on completing its hard to imagine and even harder to understand plans. Do you really care about a colony of bees or two and what they do? Do you take time out of your day to explain to them why it’s important for you to build rockets and launch satellites, as well as how you go about it? Though you might knock out a beehive or two when building your launch pads, you have no ill feelings toward the bees and would only get rid of as many of them as you have to and no more. And a hyper-intelligent AI system would go about its business the same exact way.

And while, sadly, Vice decided to use Eliezer Yudkowsky for peer review when writing its quick overview, he was able to illustrate the right caveat to an AI which will just do its thing with only a cursory awareness of the humans around it. This AI would not live in a vacuum; in its likeliest iteration it would need vast amounts of space and energy to run itself, and we humans are sort of in charge of both at the moment, and will continue to be if and when it emerges. It’s going to have to interact with us, and while it might ultimately leave us alone, it will need resources we control and with which we may not be willing to part. So as rough as it is for me to admit, I’ll have to side with Yudkowsky here in saying that dealing with a hyper-intelligent AI which is not cooperating with humans is more likely to lead to conflict than to a separation. Simply put, it will need what we have, and if it doesn’t know how to ask nicely, or doesn’t think it has to, it may just decide to take it by force, kind of like we would if we were really determined.

Still, the big flaw in all this, overlooked by both Yudkowsky and Svedic, is that AI will not emerge ex nihilo, just like we see in sci-fi. It’s more probable to see a baby born to become an evil genius at a single digit age than it is to see a computer do this. In other words, Stewie is far more likely to go from fiction to fact than Ultron. But because they don’t know how it could happen, they make the leap to building a world around a black box that contains the inner workings of this hyper-intelligent construct, as if how it’s built is irrelevant, when it’s actually the most important thing about any artificially intelligent system. Yudkowsky has written millions, literally millions, of words about the future of humanity in a world where hyper-intelligent AI awakens, but not a word about what will make it hyper-intelligent that doesn’t come down to “can run a Google search and do math in a fraction of a second.” Even the smartest and most powerful AIs will be limited by the sum of our knowledge, which is actually a lot more of a curse than a blessing.

Human knowledge is fallible, temporary, and self-contradictory. We hope that when we turn immense pattern sifters loose on billions of pages of data collected by different fields, we will find profound insights, but nature does not work that way. Just because you made up some big, scary equations doesn’t mean they will actually give you anything of value in the end, and every time a new study overturns any of their data points, you’ll have to change those equations and run the whole thing from scratch again. When you bank on Watson discovering the recipe for a fully functioning warp drive, you’re assuming that you were able to prune astrophysics of just about every contradictory idea about time and space, both quantum and macro-cosmic, that you know every caveat involved in the calculations or have built ways to handle them into Watson, that all the data you’re using is completely correct, and that nature really will follow the rules your computers spat out after days of number crunching. It’s asinine to think it’s that simple.

It’s tempting and grandiose to think of ourselves as being able to create something that’s much better than us, something vastly smarter, more resilient, and immortal to boot, a legacy that will last forever. But it’s just not going to happen. Our best bet is to improve on ourselves, to keep an eye on what’s truly important, to use the best of what nature gave us, and to harness the technology we’ve built and the understanding we’ve amassed to overcome our limitations. We can make careers out of writing countless tomes pontificating on things we don’t understand and on coping with a world that is almost certainly never going to come to pass. Or we could build new things and explore what’s actually possible and how we can get there. I understand that it’s far easier to do the former than the latter, but all things that have a tangible effect on the real world force you not to take the easy way out. That’s just the way it is.

game controller

Recently, a number of tech news sites announced that two people were convicted as felons for stealing about $8,000 in virtual loot from Blizzard’s Diablo III, trumpeting this case as a possible beginning of real world punishments for virtual crimes. However, since the crime of which they were found guilty was infecting their victims with malware, then using said malware to take control of their characters and steal their stuff to resell for real world money, their case is nothing new as far as the law is concerned. Basically, the powers that be at Blizzard just didn’t want the duo to get off with a slap on the wrist for their behavior, and were only able to secure damages thanks to the fact that a virus designed to open a backdoor into a victim’s system was used. But there’s definitely some pressure to turn virtual crimes in multiplayer games into real ones…

[A] Canadian newscaster reported that some advocates would like to see people charged with virtual rape when they modify games like Grand Theft Auto so … their characters can simulate sexually assaulting other players. Given the increasing realism of video games, research being done to improve virtual reality, and expected popularity of VR glasses like those soon to be commercially available from Oculus Rift, there would almost certainly be more cases of crimes committed in virtual spaces spilling out into IRL courts.

All right, let’s think about that for a moment. GTA is a game in which you play a sociopath crime-spreeing his way around whatever locale the latest edition features. Mods that enable all sorts of disturbing acts are kind of expected within the environment in question. But consider a really important point: a virtual sexual assault can be stopped by quitting the game, while a real one can’t just be stopped as soon as it starts. Likewise, the crime is against an object in servers’ memories, not a real person. How exactly would we prosecute harm to a virtual character that could be restored like nothing ever happened? The same thing applies to a digital murder, like in the Diablo III case. What was the harm if the characters and their loot were reset? We can’t bring a real murder victim back to life, so we punish people for taking a life, but what if we could, and the only question left to settle was how much to compensate for mental anguish?

Of course it would be nice to see harsher treatment of online stalking and harassment, since their potential to do a lot of serious harm is often underestimated by those who have few interactions in today’s virtual worlds, but prosecuting people for virtual rape, or murder, or theft, in games no less, seems like a big overreach. It’s one thing when such crimes are carried out, or threatened, against very real people through MMORPGs or social media. It’s something altogether different when the crime can be undone with a few clicks of a mouse and the victim is nothing more than a large collection of ones and zeroes. If we criminalize what some people do to virtual characters in one category of games, what sort of precedent would it set for others? Who would investigate these crimes? How? Who would be obliged to track every report and record every incident? It’s one of those thoughts that comes from a good place, but poses more problems than it solves and raises a lot of delicate free speech questions…