Archives For computer science


In another edition of people-can-be-awful news following last week’s post about why it’s indeed best not to feed trolls, it’s time to talk about online harassment and what to do about it. It seems that some 72 social activist groups are asking the Department of Education to police what they see as harassment and hate speech on a geo-fenced messaging app, arguing that because said geo-fence includes college campuses, it’s the colleges’ job to deal with it. Well, I suppose it must be the start of windmill tilting season somewhere, and now a government agency will have to do something to appease well-intentioned activists in whose minds computers are magic, and the right lines of code can make racists, sexists, and stalkers go away. Except that all of them will simply reappear on another social media platform and keep being terrible people, since the only thing censoring them changes is the venue on which they’ll spew their hatred or harass their victims. Of course this is to be expected, because the internet is built to work like that.

Now look, I completely understand how unpleasant it is to have terrible things said about you or done to you on the web and how it affects you in real life. As a techie who lives on the web, I’ve had these sorts of things happen to me firsthand. However, the same part of me that knows full well that the internet is in fact serious business, contrary to the old joke, also understands that a genuine attempt to police it is doomed to failure. Since the communication protocols used by all software on the internet are built to be extremely dynamic and robust, there’s always a way to circumvent censorship, confuse tracking, and defeat blacklists. This is what happens when a group of scientists builds a network to share classified information. Like it or not, as long as there is electricity and an internet connection, people will get online, and some of these people will be terrible. For all the great things the internet brought us, it also gave us a really good look at how many people are mediocre and hateful, in stark contrast to most techno-utopian dreams.

So keeping in mind that some denizens of the web will always be awful human beings who give exactly zero shits about anyone else or what effect their invective has on others, and that there will never be a social media platform free of them no matter how hard we try, what should their targets do about it? Well, certainly not ask a government agency to step in. With social media’s reach and influence as powerful as it is today, and the fact that it’s free to use, we’ve gotten lost in dreamy manifestos about access to Twitter, Facebook, Snapchat, and yes, the dreaded Yik Yak, being a fundamental human right to speak truth to power and find a supportive community. But allowing free and unlimited use of social media is not some sort of internet mandate. It’s run by private companies, many of them not very profitable, hoping to create an ecosystem in which a few ads or add-on services will make them some money by being middlemen in your everyday interactions with your meatspace and internet friends. If we stop using these services when the users with whom we’re dealing through them are being horrible to us, we do real damage.

But wait a minute, isn’t not using the social media platform on which you’ve been hit with waves and waves of hate speech, harassment, and libel just letting the trolls win? In a way, maybe. At the same time though, their victory will leave them simply talking to other trolls with whom pretty much no one wants to deal, including the company that runs the platform. If Yik Yak develops a reputation as the social app where you go to get abused, who will want to use it? And if no one wants to use it, what reason is there for the company to waste millions giving racist, misogynist, and bigoted trolls their own little social network? Consider the case of Chatroulette. Started with the intent of giving random internet users a face to go with a screen name and connecting them with people they’d never otherwise meet, it was almost destroyed by the sheer amount of male nudity. Way too many users had negative experiences and never logged on again, associating it with crude, gratuitous nudity, so much so that it’s still shorthand for being surprised by an unwelcome erect penis on cam. Even after installing filters and controls, and banning tens of thousands of users every day, it’s still not the site it used to be, or that its creator actually envisioned it becoming.

With that in mind, why try to compel politicians and bureaucrats to unmask and prosecute users for saying offensive things on the web, many of which will no doubt be found to be protected by their freedom of speech rights? That’s right, remember that free speech doesn’t mean freedom to say only things you personally approve of, or find tolerable. Considering that hate speech is legal, having slurs or rumors about you in your feed is very unlikely to be a criminal offense. You can be far more effective by doing nothing and letting the trolls fester, as their favorite social platform for abusing others becomes their own personal hell where other trolls, out of targets, turn on them to get their kicks. Sure, many trolls just do it for the lulz with few hard feelings towards you. Until it’s them being doxxed, or flooded with unwanted pizzas, or swatted, or seeing their nudes on a site for other trolls’ ridicule. No matter how hard you try, they won’t be any less awful to you, so let them be awful to each other until they kill the community that allows them to flourish and the company that created and maintained it, and let their innate awfulness be their undoing.


When you live in a world filled with technology, you’re living with the products of millions of lines of code, both low and high level. There’s code in your car’s digital controls and in all your appliances, and sprawling software systems, with which yours truly has more than just a passing familiarity, are more often than not behind virtually every decision made about you by banks, potential bosses, hospitals, and even law enforcement. And it’s that last decision maker that warrants the highest scrutiny and the most worry, because proprietary code is making decisions that can very literally end your life without actually being audited and examined for potential flaws. Buggy software in forensic labs means that actual criminals may go free while innocent bystanders are sentenced to decades, if not life, in jail, or to death row, so criminal defense attorneys are now arguing that putting evidence in a black box to get a result is absurd, and want a real audit of at least one company’s software. Sadly, their requests have so far been denied by the courts for a really terrible reason: that the company is allowed to protect its code from the competition.

Instead of opening up its source code, the company in question, Cybergenetics, simply says its methods are mathematically sound and peer reviewed, so that should be the end of the discussion as far as justice is concerned. So far, the courts seem to agree, arguing that revealing code will force the company to reveal the trade secrets it’s entitled to keep. And while it’s unlikely that Cybergenetics is doing anything willfully malicious or avoiding an audit for some sort of sinister reason, the logic of saying that because their methodology seems sound, the code implementing it should be beyond reproach is fatally flawed. Just because you know a great deal about how something should be done doesn’t mean that you won’t make a mistake, one that may completely undermine your entire operation. Just consider the Heartbleed bug in the open source OpenSSL library. Even though anyone could’ve reviewed the code, a bug undermining the security the software was supposed to offer was missed for years, despite the fact that the methodology behind OpenSSL’s approach to security was quite mathematically sound.
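To see how a textbook-sound design can be undone by one sloppy line, here’s a toy “heartbeat” echo in Python, a grossly simplified analogy to the Heartbleed flaw rather than anything resembling OpenSSL’s actual C code, where the protocol is fine but the implementation trusts the length the client claims instead of the length of the payload it actually received:

    # Toy echo service in the spirit of Heartbleed: the design is sound,
    # the one-line implementation mistake is what leaks memory.
    MEMORY = bytearray(b"PAYLOAD!" + b"secret session key: hunter2 " * 2)

    def heartbeat(payload: bytes, claimed_len: int) -> bytes:
        MEMORY[:len(payload)] = payload
        # BUG: should be capped at len(payload); instead we read past the
        # payload and return whatever happens to sit next to it in memory.
        return bytes(MEMORY[:claimed_len])

    print(heartbeat(b"hi", 40))   # echoes "hi" plus 38 bytes of leaked memory

No amount of peer review of the underlying handshake math would catch that; only someone reading the code would.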

So what could Cybergenetics not want to share with the world? Well, knowing what I’ve had the chance to learn about code meant to process DNA sequences, I can offer several educated guesses. One of the most problematic things about processing genetic data is quantity. It simply takes a lot of time and processing power to accurately read and compare DNA sequences, and that means a lot of money goes solely to letting your computers crunch data. The faster you can read and compare genetic data, the lower your customers’ costs, the more orders you can take and fulfill on time, and the higher your profit margins. What the code in question could reveal is how its programmers are trying to optimize it, tweaking things like data types, memory usage, and mathematical shortcuts to get better performance out of it. All of these are perfectly valid trade secrets, and knowing how they do what they do could easily give the competition a very real leg up on developing even faster and better algorithms. But these optimizations are also a perfect place in the code for evidence-compromising bugs to hide. It’s a real conundrum.
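Purely as an illustration, and as a hypothetical snippet of my own rather than Cybergenetics’ code, here’s the kind of innocent-looking shortcut I mean. Comparing sequences in fixed-size chunks is cache-friendly and easy to parallelize, but this version silently drops the trailing bases that don’t fill a whole chunk, quietly inflating the match score:

    # Hypothetical "optimized" similarity score that compares sequences in
    # fixed-size chunks for speed, but ignores the leftover tail bases.
    CHUNK = 64

    def similarity_fast(seq_a, seq_b):
        length = min(len(seq_a), len(seq_b))
        matches = 0
        compared = 0
        for start in range(0, length - CHUNK + 1, CHUNK):   # bug: tail is skipped
            matches += sum(a == b for a, b in zip(seq_a[start:start + CHUNK],
                                                  seq_b[start:start + CHUNK]))
            compared += CHUNK
        return matches / compared if compared else 0.0

    # Two 100-base reads that disagree only in the final 36 bases: every
    # mismatch lives in the dropped tail, so the score comes out perfect.
    print(similarity_fast("A" * 100, "A" * 64 + "T" * 36))   # prints 1.0

A reviewer looking only at the methodology would never notice that the last few dozen bases of every read are being thrown away.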

It’s one thing if you’re running a company which provides advanced data warehousing or code obfuscation services, where a bug in your code doesn’t result in someone going to jail. But if a wrong result on your end can cost even one innocent person a quarter century behind bars, an argument centered around your financial viability as a business just doesn’t cut it. Perhaps the patent system could help keep this software safe from being pilfered by competitors who won’t be able to compete otherwise, while still keeping the code accessible and easy to review by the relevant experts. Otherwise, if we let commercial considerations dictate how we review one of the most important types of forensic evidence, criminal defense attorneys have an easy way to do what they do best and raise reasonable doubt by repeating that the method of matching is top secret and is barred from review solely to protect the company’s revenue stream. Or by asking the jury how they would feel if an algorithm no one is allowed to review, lest it compromise its creators’ bank accounts, decided their ultimate fate in a complicated criminal case.


Whenever I write a post about why you can’t just plug a human brain or a map of it into a future computer and expect to get a working mind as a result, two criticisms inevitably arrive in my inbox and on social media. The first says that I’m simply not giving enough credit to a future computer science lab, because the complexity of a task hasn’t stopped us before and it certainly won’t stop us again. The second points to a computer simulation, such as the recent successful attempt to recreate a second of human brain activity, and says it’s proof that all we need is just a little more computing oomph before we can create a digital replica of the human brain. The first criticism is a red herring because it claims that laying out how severely many proponents of this idea underestimate the size and scope of the problem is the equivalent of saying that it’s simply too hard to do, while the actual argument is that brains don’t work like computers, and making computers work more like brains can only get you so far. The second criticism, however, deserves a more in-depth explanation because it’s based on a very hard to spot mistake…

You see, we can simulate how neurons work fairly accurately based on what we know about all the chemical reactions and electrical pulses in their immediate environment. We can even link a lot of them together and see how they’ll react to virtual environments to test our theories of the basic mechanics of the human brain and generate new questions to answer in the lab. But this isn’t the same thing as emulating the human brain. If you read carefully, the one second model didn’t actually consider how the brain is structured or wired. It was a brute force test to see just how much power it should take for a typical modern computer architecture to model the human brain. And even if we provide a detailed connectome map, we’ll just have a simulated snapshot frozen in time, giving us mathematical descriptions of how electrical pulses travel. We could use that to identify interesting features and network topologies, but we can’t run it forward, change it in response to new stimuli at random, and expect that a virtual mind resembling that of the test subject whose brain was used would suddenly come to life and communicate with us.
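For a sense of what “simulating how neurons work” means in practice, here’s a crude sketch of a leaky integrate-and-fire unit, a deliberately stripped-down model with made-up constants that is far simpler than what serious simulation projects use. The membrane voltage leaks toward a resting value, input current pushes it up, and crossing a threshold counts as a spike:

    # A crude leaky integrate-and-fire neuron with arbitrary constants.
    # We can simulate the electrical behavior of millions of these, but that
    # still gives us physics, not a mind.
    def simulate(currents, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
        v = v_rest
        spike_times = []
        for step, i_in in enumerate(currents):
            v += dt * (-(v - v_rest) + i_in) / tau   # leak toward rest + input
            if v >= v_thresh:                        # threshold crossed: spike
                spike_times.append(step)
                v = v_reset                          # reset after the spike
        return spike_times

    # Feed a constant input current for 200 time steps and list the spikes.
    print(simulate([20.0] * 200))

Wire billions of these together with the right weights and delays and you have a brain simulation in the sense researchers mean it: a model of electrical activity, not a recreated person.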


There’s something to be said about not taking comic books and sci-fi too seriously when you’re trying to predict the future and prepare for a potential disaster. For example, in Age of Ultron, a mysterious alien artificial intelligence, tamed by a playboy bazillionaire using a human wrecking ball as a lab assistant in a process that makes most computer scientists weep when described during the film, decides that because its mission is to save the world, it must wipe out humanity because humans are violent. It’s a plot so old, one imagines that an encyclopedia listing every time it’s been used is itself buried under a hefty layer of cobwebs, and yet, we have many famous computer scientists and engineers taking it seriously for some reason. Yes, it’s possible to build a machine that would turn on humanity because the programmers made a mistake or it was malicious by design, but we always omit the humans responsible for its design and implementation and go straight to treating the machine as its own entity, as if that’s where the error lies.

And the same error repeats itself in an interesting, but ultimately flawed, idea by Zeljko Svedic, which says that an advanced intellect like an Ultron wouldn’t even bother with humans, since its goals would probably send it deep into the Arctic and then to the stars. Once an intelligence far beyond our own emerges, we’re just gnats that can be ignored while it goes about completing its hard to imagine and even harder to understand plans. Do you really care about a colony of bees or two and what they do? Do you take time out of your day to explain to them why it’s important for you to build rockets and launch satellites, as well as how you go about it? Though you might knock out a beehive or two when building your launch pads, you have no ill feelings toward the bees and would only get rid of as many of them as you have to and no more. And a hyper-intelligent AI system would go about its business the exact same way.

And while, sadly, Vice decided on using Eliezer Yudkowsky for peer review when writing its quick overview, he was able to point out the right caveat to an AI which will just do its thing with only a cursory awareness of the humans around it. This AI is not going to live in a vacuum, and in its likeliest iteration it will need vast amounts of space and energy to run itself, and we, humans, are sort of in charge of both at the moment, and will continue to be if and when it emerges. It’s going to have to interact with us, and while it might ultimately leave us alone, it will need resources we’re controlling and with which we may not be willing to part. So as rough as it is for me to admit, I’ll have to side with Yudkowsky here in saying that dealing with a hyper-intelligent AI which is not cooperating with humans is more likely to lead to conflict than to a separation. Simply put, it will need what we have, and if it doesn’t know how to ask nicely, or doesn’t think it has to, it may just decide to take it by force, kind of like we would do if we were really determined.

Still, the big flaw with all of this, overlooked by Yudkowsky and Svedic, is that AI will not emerge the way we see in sci-fi, ex nihilo. It’s more probable that a baby will be born an evil genius before reaching double digits than that a computer will do this. In other words, Stewie is far more likely to go from fiction to fact than Ultron. But because they don’t know how it could happen, they make the leap to building a world around a black box that contains the inner workings of this hyper AI construct, as if how it’s built is irrelevant, when it’s actually the most important thing about any artificially intelligent system. Yudkowsky has written millions, literally millions, of words about the future of humanity in a world where hyper-intelligent AI awakens, but not a word about what will make it hyper-intelligent that doesn’t come down to “can run a Google search and do math in a fraction of a second.” Even the smartest and most powerful AIs will be limited by the sum of our knowledge, which is actually a lot more of a curse than a blessing.

Human knowledge is fallible, temporary, and self-contradictory. We hope that when we set immense pattern sifters loose on billions of pages of data collected by different fields, we will find profound insights, but nature does not work that way. Just because you made up some big, scary equations doesn’t mean they will actually give you anything of value in the end, and every time a new study overturns any of these data points, you’ll have to change those equations and run the whole thing from scratch again. When you bank on Watson discovering the recipe for a fully functioning warp drive, you’re assuming that you were able to prune astrophysics of just about every contradictory idea about time and space, both quantum and macro-cosmic, that you know every caveat involved in the calculations or have built the handling of them into Watson, that all the data you’re using is completely correct, and that nature really will follow the rules that your computers just spat out after days of number crunching. It’s asinine to think it’s so simple.

It’s tempting and grandiose to think of ourselves as being able to create something that’s much better than us, something vastly smarter, more resilient, and immortal to boot, a legacy that will last forever. But it’s just not going to happen. Our best bet to do that is to improve on ourselves, to keep an eye on what’s truly important, use the best of what nature gave us and harness the technology we’ve built and understanding we’ve amassed to overcome our limitations. We can make careers out of writing countless tomes pontificating on things we don’t understand and on coping with a world that is almost certainly never going to come to pass. Or we could build new things and explore what’s actually possible and how we can get there. I understand that it’s far easier to do the former than the latter, but all things that have a tangible effect on the real world force you not to take the easy way out. That’s just the way it is.


Hiring people is difficult, no question, and in few places is this more true than in IT, because we’ve decided to eschew certifications, don’t require licenses, and work in a field so vast that we have to specialize in ways that make it difficult to evaluate us in casual interviews. With a lawyer, you can see that he or she passed the bar and had good grades. With a doctor, you can see years of experience and a medical license. You don’t have to ask them technical questions because they obviously passed the basic requirements. But software engineers work in such a variety of environments and with such different systems that they’re difficult to objectively evaluate. What makes one coder or architect better than another? Consequently, tech blogs are filled with just about every kind of awful advice for hiring them possible, and this post is the worst offender I’ve seen so far, even more out of touch and self-indulgent than Jeff Atwood’s attempt.

What makes it so bad? It appears to be written by someone who doesn’t know how real programmers outside of Silicon Valley work, urging future employers to demand submissions to open, public code repositories like GitHub and portfolios of finished projects to explore, and with all seriousness telling them to dismiss those who won’t publish their code or don’t have bite-sized portfolio projects for quick review. Even yours truly, living and working in the Silicon Beach scene, basically Bay Area Jr. for all intents and purposes, would be fired in an instant for posting code from work. Most programmers do not work on open source projects but on closed source software meant for internal use or for sale as a closed source, cloud-based, or on-premises product. We have to deal with patents, lawyers, and often regulators and customers before a single method or function becomes public knowledge. But the author, Eric Elliot, ignores this so blithely, it just boggles the mind. It’s as if he’s forgotten that companies actually have trade secrets.

Even worse are Elliot’s suggestions for how to gauge an engineer’s skills. He advocates assigning a real unit of work, straight from the company’s team queue. Not only is this ripe for abuse, because it basically gets you free or deeply discounted highly skilled work, but it’s also going to confuse a candidate who needs to know about the existing codebase to come up with the right solution to the problem, all while you’re breathing down his or her neck. And if you pick an issue that requires no insight into the rest of your product, you’ve done the equivalent of testing a marathoner by how well she does a 100 meter dash. This test will either be too easy to be useful or too hard to actually give you real insight into someone’s thought process. Should you decide to forgo that, Elliot wants you to give the candidate a real project from your to-do list while paying $100 per hour, introducing everything wrong with the previous suggestion with the added bonus of now spending company money on a terrible, useless, irrelevant test.

Continuing the irrelevant recommendations, Elliot also wants candidates to have blogs and long running accounts on StackOverflow, an industry-famous site where programmers ask questions and advise each other. Now sure, I have a blog, but it’s not usually about software, and after long days of designing databases, writing code, and holding technical discussions, the last thing I want is to write posts about all of the above and then promote them so they actually get read by a real, live human being other than an employer every once in a while, instead of just shouting into the digital darkness to have it seen once every few years when I’m job hunting. Likewise, how fair is it to expect me to do my work and spend every free moment advising other coders for the sake of advising them so it looks good to a future employer? At some point between all the blogging, speaking, freelancing, contributing to open source projects, writing books, giving presentations, and whatever else Elliot expects of me, when the hell am I going to have time to actually do my damn job? If I were good enough to teach code to millions, I wouldn’t need him to hire me.

But despite being mostly bad, Elliot’s post does contain two actually good suggestions for trying to gauge a programmer’s or architect’s worth. One is asking the candidate about a real problem you’re having, and about problems from their past and how they solved them. You should try to remove the coding requirement so you can just follow the pure abstract thought and research skills for which you’re ultimately paying. Syntax is bullshit, you can Google the right way to type some command in a few minutes. The ability to find the root of a problem and ask the right questions to solve it is what makes a good computer scientist you’ll want to hire, and experience with how to diagnose complex issues and weigh solutions to them is what makes a great one who will be an asset to the company. This is how my current employer hired me, and their respect for both my time and my experience is what convinced me to work for them, and the same will apply to any experienced coder you’ll be interviewing. We’re busy people in a stressful situation, but we also have a lot of options and are in high demand. Treat us like you care, please.

And treating your candidates with respect is really what it’s all about. So many companies have no qualms about treating those who apply for jobs as non-entities who can be ignored or given ridiculous criteria for asinine compensation. Techies definitely fare better, but we have our own problems to face. Not only do we get pigeonholed into the equivalent of carpenters who should be working only with cherry or oak instead of just the best type of wood for the job, but we are now being told to live, breathe, sleep, and talk our jobs 24/7/365 until we take our last breath at the ripe old age of 45, as far as the industry is concerned. Even for the most passionate coders, at some point, you want to stop working and talk about or do something else. This is why I write about popular science and conspiracy theories. I love what I do, working on distributed big data and business intelligence projects for the enterprise space, but I’m more than my job. And yes, when I get home, I’m not going to spend the rest of my day trying to prove to the world that I’m capable of writing a version of FizzBuzz that compiles, no matter what Elliot thinks of that.
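For the record, the dreaded exercise in question amounts to about ten lines, which is exactly why writing yet another version of it on my own time proves nothing:

    # The entire infamous FizzBuzz screening exercise.
    for n in range(1, 101):
        if n % 15 == 0:
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)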


Every summer, there’s always something in my inbox about going to college, or back to it, for an undergraduate degree in computer science. Lots of people want to become programmers. It’s one of the few in-demand fields that keeps growing and growing with few limits, where a starting salary allows for comfortable student loan repayments and a quick path to savings, and you’re often creating something new, which keeps things fun and exciting. Working in IT when you’re fresh out of college and living on your own can be a very rewarding experience. Hell, if I had it to do all over again, I’d have gone to grad school sooner, but it’s true that I’m rather biased. When the work starts getting too stale or repetitive, there’s the luxury of just taking your skill set elsewhere after calling recruiters and telling them that you need a change of scenery, and there are so many people working on new projects that you can always get involved in building something from scratch. Of course all this comes with a catch. Computer science is notoriously hard to study and competitive. Most of the people who take first year classes will fail them and never earn a degree.

Although, some are asking nowadays, do you really even need a degree? Programming is a lot like art. If you have a degree in fine arts, have a deep grasp of history, and can debate the pros and cons of particular techniques, that’s fantastic. But if you’re just really good at making art that sells with little to no formal training, are you any less of an artist than someone with a B.A. or an M.A. with a focus on the art you’re creating? You might not know what Medieval artisans would have called your approach back in the day, or what steps you’re missing, but frankly, who gives a damn if the result is in demand and the whole thing just works? This idea underpins the efforts of tech investors who go out of their way to court teenagers into trying to create startups in the Bay Area, telling them that college is for chumps who can’t run a company, and betting what seems like a lot of money to teens right out of high school that one of their projects will become the next Facebook, or Uber, or Google. It’s a pure numbers game in which those whose money is burning a hole in their pockets are looking for lower risk and higher returns, and these talented teens need a lot less startup cash than experienced adults.

This isn’t outright exploitation; the young programmers will definitely get something out of all of this, and were this an apprenticeship program, it would be a damn good one. However, the sad truth is that fewer than 1 out of 10 of their ideas will succeed, and this success will typically involve a sale to one of the larger companies in the Bay rather than a corporate behemoth they control. In the next few years, nearly all of them will work in typical jobs or consult, and it’s there that the lack of formal grounding they could only really get in college is going to be felt most acutely. You could learn everything about programming and software architecture on your own, true. But college will help by pointing out what you don’t even know you don’t know but should. Getting solid guidance in how to flesh out your understanding of computing is definitely worth the tuition, and the money they’ll make now can go a long way towards paying it. Understanding only basic scalability, how to keep prototypes working for real life customers, and quick deployment limits them to the fairly rare IT organizations which go into and out of business at a breakneck pace.

Here’s the point of all this. If you’re considering a career in computer science and see features about teenagers supposedly becoming millionaires writing apps and not bothering with college, and decide that if they can do it, you can too, don’t. These are talented kids given opportunities few will have in a very exclusive programming enclave in which they will spend many years. If a line of code looks like gibberish to you, you need college, and the majority of the jobs that will be available to you will require it as a prerequisite to even get an interview. Despite what you’re often told in tech headlines, most successful tech companies are run by people in their 30s and 40s rather than ambitious college dropouts for whom all of Silicon Valley opened its wallet to great fanfare, and when those companies do B2B sales, you’re going to need architects with graduate degrees and seasoned leadership with a lot of experience in their clients’ industry to create a stable business. Just like theater students dream of Hollywood, programmers often dream of the Valley. Both dreams have very similar outcomes.


When we moved to LA to pursue our non-entertainment related dreams, we decided that when you’re basically trying to live out your fantasies, you might as well try to fulfill all of them. So we soon found ourselves at a shelter, looking at a relatively small, grumpy wookie who wasn’t quite sure what to make of us. Over the next several days we got used to each other and he showed us that underneath the gruff exterior was a fun-loving pup who just wanted some affection and attention, along with belly rubs. Lots and lots of belly rubs. We gave him a scrub down, a trim at the groomers’, changed his name to Seamus because frankly, he looked like one, and took him home. Almost a year later, he’s very much a part of our family, and one of our absolute favorite things about him is how smart and affectionate he turned out to be. We don’t know what kind of a mix he is, but his parents must have been very intelligent breeds, and while I’m sure there are dogs smarter than him out there, he’s definitely no slouch when it comes to brainpower.

And living with a sapient non-human made me think quite a bit about artificial intelligence. Why would we consider something or someone intelligent? Well, because Seamus is clever, he has an actual personality instead of just reflexive reactions to food, water, and opportunities to mate, which, sadly, isn’t an option for him anymore thanks to a little snip snip at the shelter. If I throw treats his way to lure him somewhere he doesn’t want to go and he’s seen this trick before, his reaction is just to look at me and take a step back. Not every treat will do either. If it’s not chewy and gamey, he wants nothing to do with it. He’s very careful with whom he’s friendly, and after a past as a stray, he’s always ready to show other dogs how tough he can be when they stare too long or won’t leave him alone. Finally, from the scientific standpoint, he can pass the mirror test, and when he gets bored, he plays with his toys and raises a ruckus so we play with him too. By most measures, we would call him an intelligent entity and definitely treat him like one.

When people talk about biological intelligence being different from the artificial kind, they usually refer to something they can’t quite put their fingers on, which immediately gives Singularitarians room to dismiss their objections as “vitalism” and unnecessary to address. But that’s not right at all, because the thing on which non-Singularitarians often can’t put their finger is personality, an intricate, messy response to the environment that involves more than meeting needs or following a routine. Seamus might want a treat, but he wants this kind of treat, and he knows he needs to shake or sit to be allowed to have it, and if he doesn’t get it, he will voice both his dismay and frustration, reactions to something he sees as unfair in the environment around him which he now wants to correct. And not all of his reactions are food related. He’s excited to see us after we’ve left him alone for a little while, and he misses us when we’re gone. My laptop, on the other hand, couldn’t give less of a damn whether I’m home or not.

No problem, say Singularitarians, we’ll just give computers goals and motivations so they can come up with a personality and certain preferences! Hell, we can give them reactions you could confuse for emotions too! After all, if it walks like a duck and quacks like a duck, who cares if it’s a biological duck or a cybernetic one if you can’t tell the difference? And it’s true, you could just build a robotic copy of Seamus, including mimicking his personality, and say that you’ve built an artificial intelligence as smart as a clever dog. But why? What’s the point? How is this using a piece of technology meant for complex calculations and logical flows for its actual purpose? Why go to all this trouble to recreate something we already have for machines that don’t need it? There’s nothing divinely special in biological intelligence, but to dismiss it as just another set of computations you can mimic with some code is reductionist to the point of absurdity, an exercise in behavioral mimicry for the sake of achieving… what exactly?

So many people all over the news seem so wrapped up in imagining AIs that have a humanoid personality and act the way we would, warning us about the need to align their morals, ethics, and value systems with ours, but how many of them ask why we would even want to try to build them? When we have problems that could be efficiently solved by computers, let’s program the right solutions or teach them the parameters of the problem so they can solve it in a way which yields valuable insights for us. But what problem do we solve by trying to create something able to pass for human for a little while and then having to raise it so it won’t get mad at us and decide to nuke us into a real world version of Mad Max? Personally, I’m not the least bit worried about the AI boogeymen from the sci-fi world becoming real. I’m more worried about a curiosity which gets built for no other reason than to show it can be done being programmed to get offended or even violent, because that’s how we can get, turning a cold, logical machine into a wreck of unpredictable pseudo-emotions that could end up with its creators being maimed or killed.


Humans beware. Our would-be cybernetic overlords made a leap towards hyper-intelligence in the last few months, as artificial neural networks can now be trained on specialized chips which use memristors, electrical components that can remember the flow of electricity through them to help manage the amount of current required in a circuit. Using these specialized chips, robots, supercomputers, and sensors could solve complex real world problems faster, easier, and with far less energy. Or at least this is how I’m pretty sure a lot of devoted Singularitarians are taking the news that a team of researchers created a proof of concept chip able to house and train an artificial neural network with aluminum oxide and titanium dioxide electrodes. Currently, it’s a fairly basic 12 by 12 grid of “synapses”, but there’s no reason why it couldn’t be scaled up into chips carrying billions of these artificial synapses that sip about the same amount of power as a cell phone imparts on your skin. Surely, the AIs of Kurzweilian lore can’t be far off, right?
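For the curious, here’s a rough sketch, with made-up numbers, of why a memristor grid is such a natural fit for a neural network: treat each device’s conductance as a synaptic weight, and Ohm’s law plus Kirchhoff’s current law perform a 12 by 12 layer’s multiply-and-accumulate step essentially for free, with the activation applied afterwards:

    import numpy as np

    # Toy model of a 12x12 memristor crossbar: conductances stand in for
    # trained synaptic weights, input voltages are applied to the rows, and
    # the currents collected on the columns are the weighted sums.
    rng = np.random.default_rng(42)
    G = rng.uniform(0.0, 1.0, size=(12, 12))   # conductances = weights
    V = rng.uniform(-1.0, 1.0, size=12)        # input voltages on the rows

    I = V @ G                     # I_j = sum_i V_i * G_ij, done by the physics
    output = np.tanh(I)           # activation applied off-chip
    print(output.round(3))

On the real chip, that matrix multiplication isn’t computed in the usual sense at all; it simply happens as current flows through the grid, which is where the energy savings come from.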

By itself, the design in question is a long-proposed solution to the problem of how to scale a big artificial neural network when relying on the cloud isn’t an option. Surely, if you use Chrome, you’ve right-clicked on an image and had the search engine find it on the web and suggest similar ones. This is powered by an ANN which basically carves up the image you send it into hundreds or thousands of pieces, each of which is analyzed for information that will help it find a match or something in the same color palette, and hopefully, the same subject matter. It’s not perfect, but when you’re aware of its limitations and use it accordingly, it can be quite handy. The problem is that to do its job, it requires a lot of neurons and synapses, and running them is very expensive from both a computational and a fiscal viewpoint. It has to take up server resources which don’t come cheap, even for a corporate Goliath like Google. A big part of the reason why is the lack of specialization in the servers, which could just as easily execute other software.

Virtually every computer used today is based on what’s known as von Neumann architecture, a revolutionary idea back when it was proposed despite seeming obvious to us now. Instead of a specialized wiring diagram dictating how computers would run programs, von Neumann wanted programmers to just write instructions and have a machine smart enough to execute them with zero changes in its hardware. If you asked your computer whether it was running some office software, a game, or a web browser, it couldn’t tell you. To it, every program is just a stream of specific instructions fetched from memory by each CPU core, decoded and completed one by one before moving on to the next order. All of these instructions boil down to where to move a byte or series of bytes in memory and to what values they should be set. It’s perfect for when a computer could run anything and everything, and you either have no control over what it runs, or want it to be able to run whatever software you throw its way.
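Here’s a toy version of that idea, nothing like a real CPU’s pipelines and caches, but it shows the essential trick: the program is just data sitting in the same memory as everything else, and the machine blindly fetches and executes whatever the program counter points at:

    # A toy von Neumann machine: program and data share one memory, and the
    # CPU has no idea what "program" it's running, only what the current
    # instruction tells it to do.
    def run(memory):
        pc = 0                          # program counter
        regs = [0] * 4                  # a few general-purpose registers
        while True:
            op, a, b = memory[pc]       # fetch
            pc += 1
            if op == "load":            # decode and execute
                regs[a] = memory[b]
            elif op == "add":
                regs[a] += regs[b]
            elif op == "store":
                memory[b] = regs[a]
            elif op == "halt":
                return memory

    # Add the values in cells 5 and 6, store the result in cell 7.
    memory = [
        ("load", 0, 5),
        ("load", 1, 6),
        ("add", 0, 1),
        ("store", 0, 7),
        ("halt", 0, 0),
        2, 3, None,
    ]
    print(run(memory)[7])   # 5

Swap in a different list of instructions and the exact same loop runs a completely different program, which is the whole point of the architecture.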

In computer science, this ability to hide the nitty-gritty details of how a complex process on which a piece of functionality relies actually works is called an abstraction. Abstractions are great, I use them every day to design database schemas and write code. But they come at a cost. Making something more abstract means you incur an overhead. In virtual space, that means more time for something to execute, and in physical space that means more electricity, more heat, and in the case of cloud based software, more money. Here’s where the memristor chip for ANNs has its time to shine. Knowing that certain computing systems like routers and robots could need to run a specialized process again and again, the researchers designed a purpose-built piece of hardware which does away with abstractions, reducing overhead and allowing them to train and run their neural nets with just a little bit of strategically directed electricity.

Sure, that’s neat, but it’s also what an FPGA, or Field Programmable Gate Array, can do already. Unlike these memristor chips, however, FPGAs can’t be retrained to run new neural nets with a little reverse current and a new training session; they need to be re-configured, and they can’t use less power by “remembering” the current. This is what makes this experiment so noteworthy. It created a proof of concept for a much more efficient alternative to an FPGA at a time when techies are looking for new ways to speed up resource-hungry algorithms that require probabilistic approaches. And this is also why these memristor chips won’t change computing as we know it. They’re meant for very specific problems as add-ons to existing software and hardware, much like GPUs are used for intensive parallelization while CPUs handle day to day applications, without one substituting for the other. The von Neumann model is just too useful and it’s not going anywhere soon.

While many an amateur tech pundit will regale you with a vision of super-AIs built with this new technology taking over the world, or becoming your sapient 24/7 butler, the reality is that you’ll never be able to build a truly useful computer out of nothing but ANNs. You would lose the flexible nature of modern computing and the ability to just run an app without worrying about training a machine how to use it. These chips are very promising and there’s a lot of demand for them to hit the market sooner rather than later, but they’ll just be another tool to make technology a little more awesome, secure, and reliable for you, the end user. Just like quantum computing, they’re one means of tackling the growing list of demands for our connected world without making you wait for days, if not months, for a program to finish running and a request to complete. But the fact that they’re not going to become the building blocks of an Asimovian positronic brain does not make them any less cool in this humble techie’s professional opinion.

See: Prezioso, M., et al. (2015). Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature, 521(7550), 61–64. DOI: 10.1038/nature14441


A while ago, I wrote about some futurists’ ideas of robot brothels and conscious, self-aware sex bots capable of entering a relationship with a human, and why marriage to an android is unlikely to become legal. Short version? I wouldn’t be surprised if there are sex bots for rent in a wealthy first world country’s red light district, but robot-human marriages are a legal dead end. Basically, it comes down to two factors. First, a robot, no matter how self-aware or seemingly intelligent, is not a living thing capable of giving consent. It could easily be programmed to do what its owner wants it to do, and in fact this seems to be the primary draw for those who consider themselves technosexuals. Unlike another human, robots are not looking for companionship, they were built to be companions. Second, and perhaps most important, is that anatomically correct robots are often used as surrogates for contact with humans, imbued with human features by an owner who is either intimidated or easily hurt by the complexities of typical human interaction.

You don’t have to take my word on the latter. Just consider this interview with an iDollator — the term sometimes used by technosexuals to identify themselves — in which he more or less confirms everything I said word for word. He buys and has relationships with sex dolls because a relationship with a woman just doesn’t really work out for him. He’s too shy to make a move, gets hurt when he makes what many of us consider classic dating mistakes, and rather than trying to navigate the emotional landscape of a relationship, he simply avoids trying to build one. It’s little wonder he’s so attached to his dolls. He has projected all his fantasies and desires onto a pair of pliant objects that can provide him with some sexual satisfaction and will never say no, or demand any kind of compromise or emotional concern from him beyond their upkeep. Using them, he went from a perpetual third wheel in relationships to having a bisexual wife and girlfriend, a very common fantasy that has a very mixed track record with flesh and blood humans, because those pesky emotions get in the way as boundaries and rules have to be firmly established.

Now, I understand this might come across as judgmental, although it’s really not meant to be an indictment of iDollators, and it’s entirely possible that my biases are in play here. After all, who am I to potentially pathologize the decisions of iDollators, as a married man who never even considered the idea of synthetic companionship as an option, much less a viable one at that? At the same time, I think we could objectively argue that the benefits of marriage wouldn’t work for relationships between humans and robots. One of the main benefits of marriage is the transfer of property between spouses. Robots would be property, virtual extensions of the will of the humans who bought and programmed them. They would be useful in making the wishes of the human on his or her deathbed known, but that’s about it. Inheriting the human’s other property would be the equivalent of a house getting to keep a car, a bank account, and the insurance payout as far as the law is concerned. More than likely, the robot would be auctioned off or transferred to the next of kin as a belonging of the deceased, and very likely re-programmed.

And here’s another caveat. All of this is based on the idea of advancements in AI we aren’t even sure will be made, applied to sex bots. We know that their makers want to give them some basic semblance of a personality, but how successful they’ll be is a very open question. Being able to change the robot’s mood and general personality on a whim would still be a requirement for any potential buyer, as we see with iDollators, and without autonomy, we can’t even think of granting legal personhood to even a very sophisticated synthetic intelligence. That would leave sex bots as objects of pleasure and relationship surrogates, perhaps useful in therapy or to replace human sex workers and combat human trafficking. Personally, considering the cost of upkeep of a high end sex bot and the level of expertise and infrastructure required, I’m still not seeing sex bots as solving the ethical and criminal issues involved with semi-legal or criminalized prostitution, especially in the developing world. To human traffickers, their victims’ lives are cheap and those being exploited are just useful commodities for paying clients, especially wealthy ones.

So while we could safely predict that they will emerge and become quite complex and engaging over the coming decades, they’re unlikely to be anything more than a niche product. They won’t be legally viable spouses, and very seldom the first choice of companion. They won’t help stem the horrors of human trafficking until they become extremely cheap and convenient. They might be a useful therapy tool where human sexual surrogates can’t do their work, or a way for some tech-savvy entrepreneurs sitting on a small pile of cash to make some quick money. But they will not change human relationships in profound ways as some futurists like to predict, and there may well be a limit to how well they can interact with us. Considering our history and biology, it’s a safe bet that our partners will almost always be other humans and robots will almost always be things we own. Oh, they could be wonderful, helpful things to which we’ll have emotional attachments in the same way we’d be emotionally attached to a favorite pet, but ultimately, just our property.

[ illustration by Michael O ]


Quantum computers are slowly but surely arriving, and while they won’t be able to create brand new synthetic intelligences where modern computers have failed, or even be faster for most tasks typical users need to execute, they’ll be very useful in certain key areas of computing as we know it today. These machines aren’t being created as a permanent replacement for your laptop, but to solve what are known as BQP problems, which will help your existing devices and their direct descendants run more securely and efficiently route torrents of data through the digital clouds. In computational complexity theory, BQP problems are decision problems that can be solved in polynomial time, with a bounded probability of error, when superposition and quantum entanglement are an option for the device. Or to translate that into English, binary, yes/no problems that we could solve pretty efficiently if we could use quantum phenomena. The increase in speed comes not from making faster CPUs or GPUs, or creating ever larger clusters of them, but from implementing brand new logical paradigms in your programs. And to make that easier, a new language was created.

In classical computing, if we wanted to do factorization, we would create our algorithms, then call on them with an input, or a range of inputs if we wanted to parallelize the calculations. So in high level languages you’d create a function or a method using the inputs as arguments, then call it when you need it. But on a quantum computer, you’d be building a circuit made of qubits to read your input and make a decision, then collecting the output of the circuit and carrying on. If you wanted to do your factorization on a quantum computer — and trust me, you really, really do — then you would be using Shor’s algorithm, which gets a quantum circuit to run through countless possible results and pick out the answer you wanted with a specialized function for this task. How should you best set up a quantum circuit so you can treat it like any other method or function in your programs? It’s a pretty low level task that can get really hairy. That’s where Quipper comes in handy, helping you build a quantum circuit and know what to expect from it, abstracting just enough of the nitty-gritty to keep you focused on the big picture logic of what you’re doing.
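To make that division of labor concrete, here’s a sketch, with illustrative names of my own, of Shor’s recipe written as ordinary code, where the period-finding step (the only part a quantum circuit actually speeds up) is a painfully slow classical stand-in you would replace with a call to your circuit:

    from math import gcd
    from random import randrange

    def find_period(a, n):
        # Stand-in for the quantum circuit: brute-force the order of a mod n.
        # This is the step Shor's algorithm makes exponentially faster.
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_factor(n):
        """Classical skeleton of Shor's algorithm for an odd composite n."""
        while True:
            a = randrange(2, n)
            if gcd(a, n) > 1:
                return gcd(a, n)        # lucky guess already shares a factor
            r = find_period(a, n)
            if r % 2 == 1:
                continue                # need an even period, try another a
            y = pow(a, r // 2, n)
            if y == n - 1:
                continue                # trivial square root, try another a
            return gcd(y - 1, n)        # non-trivial factor of n

    print(shor_factor(15))   # prints 3 or 5

Everything around the circuit stays plain, classical programming, which is exactly the sort of seam a language like Quipper is meant to manage.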

It’s an embedded language, meaning that what it does is implemented on top of a host language, with Quipper scripts translated into the host’s own code before being compiled down to something the machine it runs on can understand. In Quipper’s case the underlying host language is Haskell, which explains why so much of its syntax looks a lot like Haskell, with the exception of the types that define the quantum circuits you’re trying to build. Although Haskell never really got that much traction in a lot of applications and the developer community is not exactly vast, I can certainly see Quipper being used to create cryptographic systems or quantum routing protocols for huge data centers, kind of like Erlang is used by many telecommunications companies to route call and texting data around their networks. It also invites the idea of creating quantum circuitry in other languages, like a QuantumCircuit class in C#, Python, or Java, or maybe a quantum_ajax() function call in PHP along with a QuantumSession object. And that is the real importance of the initiative by Quipper’s creators. It’s taking that step to add quantum logic to our computing.
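And a hypothetical QuantumCircuit class along those lines doesn’t have to wait for real hardware to be sketched out. Here’s a toy Python version that just simulates a tiny register with linear algebra, purely my own illustration of what such an API might feel like to call, not any existing library:

    import numpy as np

    class QuantumCircuit:
        """Toy circuit that simulates a small qubit register with matrices."""
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

        def __init__(self, num_qubits):
            self.n = num_qubits
            self.state = np.zeros(2 ** num_qubits, dtype=complex)
            self.state[0] = 1.0                         # start in |00...0>

        def apply(self, gate, target):
            # Build the full operator: the gate on the target qubit,
            # identity on every other qubit.
            op = np.array([[1.0]])
            for q in range(self.n):
                op = np.kron(op, gate if q == target else np.eye(2))
            self.state = op @ self.state

        def measure(self, shots=1000):
            probs = np.abs(self.state) ** 2
            outcomes = np.random.choice(len(probs), size=shots, p=probs)
            values, counts = np.unique(outcomes, return_counts=True)
            return {format(int(v), f"0{self.n}b"): int(c)
                    for v, c in zip(values, counts)}

    # Put the first qubit of a two-qubit register into superposition.
    qc = QuantumCircuit(2)
    qc.apply(QuantumCircuit.H, target=0)
    print(qc.measure())   # roughly half '00', half '10'

Swap the simulated linear algebra for calls to actual quantum hardware and the calling code wouldn’t have to change, which is the kind of abstraction Quipper and its hypothetical cousins are reaching for.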

Maybe one day quantum computers will direct secure traffic between vast data centers, giving programmers an API adopted as a common library in the languages they use, so it’s easy for a powerful web application to securely process large amounts of data with only a few lines of code calling on a quantum algorithm to scramble passwords and session data, or query far off servers with less lag — if those servers don’t implement that functionality on lower layers of the OSI Model already. It could train and run vast convolutional neural networks for OCR, swiftly digitizing entire libraries’ worth of books, notes, and handwritten documents with far fewer errors than modern systems, and help you manage unruly terabytes of photos spread across a social networking site or a home network by identifying similar images for tagging and organization. If we kept going, we could probably think of a thousand more uses for injecting quantum logic into our digital lives. And in this process, Quipper would be our jump-off point, a project which shows how easily we could wrap the weird world of quantum mechanics into a classical program to reap the benefits of the results. It’s a great idea and, hopefully, a sign of big things to come.