Archives For computer science

[ image: ultron ]

There’s something to be said about not taking comic books and sci-fi too seriously when you’re trying to predict the future and prepare for a potential disaster. For example, in Age of Ultron, a mysterious alien artificial intelligence tamed by a playboy bazillionaire using a human wrecking ball as a lab assistant in a process that makes most computer scientists weep when described during the film, decides that because its mission is to save the world, it must wipe out humanity because humans are violent. It’s a plot so old, one imagines that an encyclopedia listing every time it’s been used would itself be buried under a hefty weight of cobwebs, and yet, we have many famous computer scientists and engineers taking it seriously for some reason. Yes, it’s possible to build a machine that would turn on humanity because the programmers made a mistake or it was malicious by design, but we always omit the humans responsible for its design and implementation and go straight to treating the machine as its own entity in which the error lies.

And the same error repeats itself in an interesting, but ultimately flawed idea by Zeljko Svedic, which says that an advanced intellect like an Ultron wouldn’t even bother with humans since its goals would probably send it deep into the Arctic and then to the stars. Once an intelligence far beyond our own emerges, we’re just gnats that can be ignored while it goes about completing its hard to imagine and even harder to understand plans. Do you really care about a colony of bees or two and what they do? Do you take time out of your day to explain to them why it’s important for you to build rockets and launch satellites, as well as how you go about it? Though you might knock out a beehive or two when building your launch pads, you have no ill feelings against the bees and would only get rid of as many of them as you have to and no more. And a hyper-intelligent AI system would do its business the same exact way.

And while sadly, Vice decided on using Eliezer Yudkowsky for peer review when writing its quick overview, he was able to point out the right caveat to an AI which will just do its thing with only a cursory awareness of the humans around it. This AI is not going to live in a vacuum and will need vast amounts of space and energy to run itself in its likeliest iteration, and we, humans, are sort of in charge of both at the moment, and will continue to be if and when it emerges. It’s going to have to interact with us, and while it might ultimately leave us alone, it will need resources we’re controlling and with which we may not be willing to part. So as rough as it is for me to admit, I’ll have to side with Yudkowsky here in saying that dealing with a hyper-intelligent AI which is not cooperating with humans is more likely to lead to conflict than to a separation. Simply put, it will need what we have, and if it doesn’t know how to ask nicely, or doesn’t think it has to, it may just decide to take it by force, kind of like we would do if we were really determined.

Still, the big flaw with all this overlooked by Yudkowsky and Svedic is that AI will not emerge just like we see in sci-fi, ex nihilo. It’s more probable to see a baby born to become an evil genius at a single digit age than it is to see a computer do this. In other words, Stewie is far more likely to go from fiction to fact than Ultron. But because they don’t know how it could happen, they make the leap to building a world around a black box that contains the inner workings of this hyper AI construct as if how it’s built is irrelevant, while it’s actually the most important thing about any artificially intelligent system. Yudkowsky has written millions, literally millions, of words about the future of humanity in a world where hyper-intelligent AI awakens, but not a word about what will make it hyper-intelligent that doesn’t come down to “can run a Google search and do math in a fraction of a second.” Even the smartest and most powerful AIs will be limited by the sum of our knowledge, which is actually a lot more of a curse than a blessing.

Human knowledge is fallible, temporary, and self-contradictory. We hope that when we apply immense pattern sifters to billions of pages of data collected by different fields, we will find profound insights, but nature does not work that way. Just because you made up some big, scary equations doesn’t mean they will actually give you anything of value in the end, and every time a new study overturns any of these data points, you’ll have to change these equations and run the whole thing from scratch again. When you bank on Watson discovering the recipe for a fully functioning warp drive, you’re assuming that you were able to prune astrophysics of just about every contradictory idea about time and space, both quantum and macro-cosmic, that you know every caveat involved in the calculations or have built how to handle them into Watson, that all the data you’re using is completely correct, and that nature really will follow the rules that your computers just spat out after days of number crunching. It’s asinine to think it’s so simple.

It’s tempting and grandiose to think of ourselves as being able to create something that’s much better than us, something vastly smarter, more resilient, and immortal to boot, a legacy that will last forever. But it’s just not going to happen. Our best bet to do that is to improve on ourselves, to keep an eye on what’s truly important, use the best of what nature gave us and harness the technology we’ve built and understanding we’ve amassed to overcome our limitations. We can make careers out of writing countless tomes pontificating on things we don’t understand and on coping with a world that is almost certainly never going to come to pass. Or we could build new things and explore what’s actually possible and how we can get there. I understand that it’s far easier to do the former than the latter, but all things that have a tangible effect on the real world force you not to take the easy way out. That’s just the way it is.

[ image: touch screen ]

Hiring people is difficult, no question, and in few places is this more true than in IT because we decided to eschew certifications, don’t require licenses, and our field is so vast that we have to specialize in a way that makes it difficult to evaluate us in casual interviews. With a lawyer, you can see that he or she passed the bar and had good grades. With a doctor, you can see years of experience and a medical license. You don’t have to ask them technical questions because they obviously passed the basic requirements. But software engineers work in such a variety of environments and with such different systems that they’re difficult to objectively evaluate. What makes one coder or architect better than another? Consequently, tech blogs are filled with just about every kind of awful advice for hiring them possible, and this post is the worst offender I’ve seen so far, even more out of touch and self-indulgent than Jeff Atwood’s attempt.

What makes it so bad? It’s written by someone who doesn’t seem to know how real programmers outside of Silicon Valley work, urging future employers to demand submissions to open, public code repositories like GitHub and portfolios of finished projects to explore, and with all seriousness telling them to dismiss those who won’t publish their code or don’t have the bite-sized portfolio projects for quick review. Even yours truly, living and working in the Silicon Beach scene, basically Bay Area Jr. for all intents and purposes, would be fired for posting code from work in an instant. Most programmers do not work on open source projects but on closed source software meant for internal use or for sale as a closed source, cloud-based, or on-premises product. We have to deal with patents, lawyers, and often regulators and customers before a single method or function becomes public knowledge. But the author, Eric Elliot, ignores this so blithely, it just boggles the mind. It’s as if he’s forgotten that companies actually have trade secrets.

Even worse are Elliot’s suggestions for how to gauge an engineer’s skills. He advocates assigning a real unit of work, straight from the company’s team queue. Not only is this ripe for abuse because it basically gets you free or heavily discounted highly skilled work, but it’s also going to confuse a candidate because he or she needs to know about the existing codebase to come up with the right solution to the problem, all while you’re breathing down his or her neck. And if you pick an issue that really requires no insight into the rest of your product, you’ve done the equivalent of testing a marathoner by how well she does a 100 meter dash. This test can only be either too easy to be useful or too hard to actually give you a real insight into someone’s thought process. Should you decide to forgo that, Elliot wants you to give the candidate a real project from your to-do list while paying $100 per hour, introducing everything wrong with the previous suggestion with the added bonus of now spending company money on a terrible, useless, irrelevant test.

Continuing the irrelevant recommendations, Elliot also wants candidates to have blogs and long running accounts on StackOverflow, an industry-famous site where programmers ask questions and advise each other. Now sure, I have a blog, but it’s not usually about software, and after long days of designing databases, or writing code, or technical discussions, the last thing I want is to write posts about all of the above and have to promote them so they actually get read by a real, live human being other than an employer every once in a while, instead of just shouting into the digital darkness so it’s seen once every few years when I’m job hunting. Likewise, how fair is it to expect me to do my work and spend every free moment advising other coders for the sake of advising them so it looks good to a future employer? At some point between all the blogging, speaking, freelancing, contributing to open source projects, writing books, giving presentations, and whatever else Elliot expects of me, when the hell am I going to have time to actually do my damn job? If I were good enough to teach code to millions, I wouldn’t need him to hire me.

But despite being mostly bad, Elliot’s post does contain two actually good suggestions for trying to gauge a programmer’s or architect’s worth. One is asking the candidate about a real problem you’re having, and about the problems they’ve solved in the past. You should try to remove the coding requirement so you can just follow the pure abstract thought and research skills for which you’re ultimately paying. Syntax is bullshit, you can Google the right way to type some command in a few minutes. The ability to find the root of a problem and ask the right questions to solve it is what makes a good computer scientist you’ll want to hire, and experience with how to diagnose complex issues and weigh solutions to them is what makes a great one who will be an asset to the company. This is how my current employer hired me, and their respect for both my time and my experience is what convinced me to work for them, and the same will apply to any experienced coder you’ll be interviewing. We’re busy people in a stressful situation, but we also have a lot of options and are in high demand. Treat us like you care, please.

And treating your candidates with respect is really what it’s all about. So many companies have no qualms about treating those who apply for jobs as non-entities who can be ignored or given ridiculous criteria for asinine compensation. Techies definitely fare better, but we have our own problems to face. Not only do we get pigeonholed into the equivalent of carpenters who should be working only with cherry or oak instead of just the best type of wood for the job, but we are now being told to live, breathe, sleep, and talk our jobs 24/7/365 until we take our last breath at the ripe old age of 45, as far as the industry is concerned. Even for the most passionate coders, at some point, you want to stop working and talk about or do something else. This is why I write about popular science and conspiracy theories. I love what I do, working on distributed big data and business intelligence projects for the enterprise space, but I’m more than my job. And yes, when I get home, I’m not going to spend the rest of my day trying to prove to the world that I’m capable of writing a version of FizzBuzz that compiles, no matter what Elliot thinks of that.
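For anyone lucky enough to have never sat through one, FizzBuzz is about as basic as screening exercises get. A minimal C# version might look something like this:

```csharp
using System;

class FizzBuzz
{
    static void Main()
    {
        // Print 1 through 100, swapping in "Fizz" for multiples of 3,
        // "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.
        for (int i = 1; i <= 100; i++)
        {
            if (i % 15 == 0) Console.WriteLine("FizzBuzz");
            else if (i % 3 == 0) Console.WriteLine("Fizz");
            else if (i % 5 == 0) Console.WriteLine("Buzz");
            else Console.WriteLine(i);
        }
    }
}
```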

[ image: late night ]

Every summer, there’s always something in my inbox about going to college, or back to it, for an undergraduate degree in computer science. Lots of people want to become programmers. It’s one of the few in-demand fields that keeps growing and growing with few limits, where a starting salary allows for comfortable student loan repayments and a quick path to savings, and you’re often creating something new, which keeps things fun and exciting. Working in IT when you’ve just left college and live on your own can be a very rewarding experience. Hell, if I did it all over again, I’d have gone to grad school sooner, but it’s true that I’m rather biased. When the work starts getting too stale or repetitive, there’s the luxury of just taking your skill set elsewhere after calling recruiters and telling them that you need a change of scenery, and there are so many people working on new projects that you can always get involved in building something from scratch. Of course all this comes with a catch. Computer science is notoriously hard to study and competitive. Most of the people who take first year classes will fail them and never earn a degree.

Although, some are saying nowadays, do you really even need a degree? Programming is a lot like art. If you have a degree in fine arts, have a deep grasp of history, and can debate the pros and cons of particular techniques, that’s fantastic. But if you’re just really good at making art that sells with very little to no formal training, are you any less of an artist than someone with a B.A. or an M.A. with a focus on the art you’re creating? You might not know what Medieval artisans might have called your approach back in the day, or what steps you’re missing, but frankly, who gives a damn if the result is in demand and the whole thing just works? This idea underpins the efforts of tech investors who go out of their way to court teenagers into trying to create startups in the Bay Area, telling them that college is for chumps who can’t run a company, betting what seems like a lot of money to teens right out of high school that one of their projects will become the next Facebook, or Uber, or Google. It’s a pure numbers game in which those whose money is burning a hole in their pockets are looking for lower risk to achieve higher returns, and these talented teens need a lot less startup cash than experienced adults.

This isn’t outright exploitation; the young programmers will definitely get something out of all of this, and were this an apprenticeship program, it would be a damn good one. However, the sad truth is that less than 1 out of 10 of their ideas will succeed, and this success will typically involve a sale to one of the larger companies in the Bay rather than a corporate behemoth they control. In the next few years, nearly all of them will work in typical jobs or consult, and it’s there that the lack of formal grounding they could only really get in college is going to be felt more acutely. You could learn everything about programming and software architecture on your own, true. But a college will help you by pointing out what you don’t even know you don’t know but should. Getting solid guidance in how to flesh out your understanding of computing is definitely worth the tuition, and the money they’ll make now can go a long way towards paying it. Understanding only basic scalability, how to keep prototypes working for real life customers, and quick deployment limits them to the fairly rare IT organizations which go into and out of business at breakneck pace.

Here’s the point of all this. If you’re considering a career in computer science and see features about teenagers supposedly becoming millionaires writing apps and not bothering with college, and decide that if they can do it, you can too, don’t. These are talented kids given opportunities few will have in a very exclusive programming enclave in which they will spend many years. If a line of code looks like gibberish to you, you need college, and the majority of the jobs that will be available to you will require it as a prerequisite to even get an interview. Despite what you’re often told in tech headlines, most successful tech companies are run by people in their 30s and 40s rather than ambitious college dropouts for whom all of Silicon Valley opened their wallets to great fanfare, and when those companies do B2B sales, you’re going to need some architects with graduate degrees and seasoned leadership with a lot of experience in their clients’ industry to create a stable business. Just like theater students dream of Hollywood, programmers often dream of the Valley. Both dreams have very similar outcomes.

[ image: seamus ]

When we moved to LA to pursue our non-entertainment related dreams, we decided that when you’re basically trying to live out your fantasies, you might as well try to fulfill all of them. So we soon found ourselves at a shelter, looking at a relatively small, grumpy wookie who wasn’t quite sure what to make of us. Over the next several days we got used to each other and he showed us that underneath the gruff exterior was a fun-loving pup who just wanted some affection and attention, along with belly rubs. Lots and lots of belly rubs. We gave him a scrub down, a trim at the groomers’, changed his name to Seamus because frankly, he looked like one, and took him home. Almost a year later, he’s very much a part of our family, and one of our absolute favorite things about him is how smart and affectionate he turned out to be. We don’t know what kind of a mix he is, but his parents must have been very intelligent breeds, and while I’m sure there are dogs smarter than him out there, he’s definitely no slouch when it comes to brainpower.

And living with a sapient non-human made me think quite a bit about artificial intelligence. Why would we consider something or someone intelligent? Well, Seamus is clever; he has an actual personality instead of just reflexive reactions to food, water, and chances to mate, which, sadly, is not an option for him anymore thanks to a little snip snip at the shelter. If I throw treats his way to lure him somewhere he doesn’t want to go and he’s seen this trick before, his reaction is just to look at me and take a step back. Not every treat will do either. If it’s not chewy and gamey, he wants nothing to do with it. He’s very careful with whom he’s friendly, and after a past as a stray, he’s always ready to show other dogs how tough he can be when they stare too long or won’t leave him alone. Finally, from the scientific standpoint, he can pass the mirror test, and when he gets bored, he plays with his toys and raises a ruckus so we play with him too. By most measures, we would call him an intelligent entity and definitely treat him like one.

When people talk about biological intelligence being different from the artificial kind, they usually refer to something they can’t quite put their fingers on, which immediately gives Singularitarians room to dismiss their objections as “vitalism” and unnecessary to address. But that’s not right at all, because that thing on which non-Singularitarians often can’t put their finger is personality, an intricate, messy process of responding to the environment that involves more than meeting needs or following a routine. Seamus might want a treat, but he wants this kind of treat and he knows he needs to shake or sit to be allowed to have it, and if he doesn’t get it, he will voice both his dismay and frustration, reactions to something he sees as unfair in the environment around him which he now wants to correct. And not all of his reactions are food related. He’s excited to see us after we’ve left him alone for a little while and he misses us when we’re gone. My laptop, on the other hand, couldn’t give less of a damn whether I’m home or not.

No problem, say Singularitarians, we’ll just give computers goals and motivations so they could come up with a personality and certain preferences! Hell, we can give them reactions you could confuse for emotions too! After all, if it walks like a duck and quacks like a duck, who cares if it’s a biological duck or a cybernetic one if you can’t tell the difference? And it’s true, you could just build a robotic copy of Seamus, including mimicking his personality, and say that you’ve built an artificial intelligence as smart as a clever dog. But why? What’s the point? How is this putting a piece of technology meant for complex calculations and logical flows to its intended use? Why go to all this trouble to recreate something we already have for machines that don’t need it? There’s nothing divinely special in biological intelligence, but to dismiss it as just another form of doing a set of computations you can simply mimic with some code is reductionist to the point of absurdity, an exercise in behavioral mimicry for the sake of achieving… what exactly?

So many people all over the news seem so wrapped up in imagining AIs that have a humanoid personality and act the way we would, warning us about the need to align their morals, ethics, and value systems with ours, but how many of them ask why we would even want to try to build them? When we have problems that could be efficiently solved by computers, let’s program the right solutions or teach them the parameters of the problem so they can solve it in a way which yields valuable insights for us. But what problem do we solve by trying to create something able to pass for human for a little while and then having to raise it so it won’t get mad at us and decide to nuke us into a real world version of Mad Max? Personally, I’m not the least bit worried about the AI boogeymen from the sci-fi world becoming real. I’m more worried about a curiosity which gets built for no other reason than to show it can be done being programmed to get offended or even violent because that’s how we can get, and turning a cold, logical machine into a wreck of unpredictable pseudo-emotions that could end up with its creators being maimed or killed.

[ image: crt head ]

Humans beware. Our would-be cybernetic overlords made a leap towards hyper-intelligence in the last few months, as artificial neural networks can now be trained on specialized chips which use memristors, electrical components that can remember the flow of electricity through them to help manage the amount of current required in a circuit. Using these specialized chips, robots, supercomputers, and sensors could solve complex real world problems faster, easier, and with far less energy. Or at least this is how I’m pretty sure a lot of devoted Singularitarians are taking the news that a team of researchers created a proof of concept chip able to house and train an artificial neural network built with aluminum oxide and titanium dioxide memristors. Currently, it’s a fairly basic 12 by 12 grid of “synapses”, but there’s no reason why it couldn’t be scaled up into chips carrying billions of these artificial synapses that sip about the same amount of power as a cell phone imparts on your skin. Surely, the AIs of Kurzweilian lore can’t be far off, right?

By itself, the design in question is a long-proposed solution to the problem of how to scale a big artificial neural network when relying on the cloud isn’t an option. If you use Chrome, you’ve probably right clicked on an image and asked the search engine to find it on the web and suggest similar ones. This is powered by an ANN which basically carves up the image you send to it into hundreds or thousands of pieces, each of which is analyzed for information that will help it find a match or something in the same color palette, and hopefully, the same subject matter. It’s not perfect, but when you’re aware of its limitations and use it accordingly, it can be quite handy. The problem is that to do its job, it requires a lot of neurons and synapses, and running them is very expensive from both a computational and a fiscal viewpoint. It has to take up server resources which don’t come cheap, even for a corporate Goliath like Google. A big part of the reason why is the lack of specialization for the servers, which could just as easily execute other software.
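To give a rough sense of what “carving up an image” buys you, here’s a toy C# sketch, emphatically not a neural network and nothing like Google’s actual pipeline: it splits a grayscale image into tiles, averages each tile into a crude feature vector, and compares two vectors with cosine similarity. The names and tile counts are made up purely for illustration.

```csharp
using System;

static class ToyImageMatcher
{
    // Split a grayscale image (0..255 values) into an n x n grid of tiles and
    // average each tile's brightness, producing a crude n*n feature vector.
    public static double[] TileFeatures(byte[,] pixels, int n)
    {
        int h = pixels.GetLength(0), w = pixels.GetLength(1);
        var features = new double[n * n];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                features[(y * n / h) * n + (x * n / w)] += pixels[y, x];
        // Rough normalization by average tile size so image dimensions matter less.
        double tilePixels = (double)(h * w) / (n * n);
        for (int i = 0; i < features.Length; i++) features[i] /= tilePixels;
        return features;
    }

    // Cosine similarity: closer to 1.0 means the two "images" look more alike.
    public static double CosineSimilarity(double[] a, double[] b)
    {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.Sqrt(na) * Math.Sqrt(nb) + 1e-9);
    }

    static void Main()
    {
        // Two tiny 4x4 synthetic "images": one dark on the left, one dark on top.
        var left = new byte[4, 4];
        var top = new byte[4, 4];
        for (int y = 0; y < 4; y++)
            for (int x = 0; x < 4; x++)
            {
                left[y, x] = (byte)(x < 2 ? 30 : 220);
                top[y, x] = (byte)(y < 2 ? 30 : 220);
            }
        Console.WriteLine(CosineSimilarity(TileFeatures(left, 2), TileFeatures(top, 2)));
    }
}
```

The real system replaces those crude tile averages with features the network learns on its own, which is exactly the part that eats up all those neurons, synapses, and server resources.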

Virtually every computer used today is based on what’s known as von Neumann architecture, a revolutionary idea back when it was proposed despite seeming obvious to us now. Instead of a specialized wiring diagram dictating how computers would run programs, von Neumann wanted programmers to just write instructions and have a machine smart enough to execute them with zero changes in their hardware. If you asked your computer whether it was running some office software, a game, or a web browser, it couldn’t tell you. To it, every program is a stream of specific instructions fetched from memory by each CPU core, decoded and completed one by one, each making room for the next order. All of these instructions boil down to where to move a byte or series of bytes in memory and to what values they should be set. It’s perfect for when a computer could run anything and everything, and you’ll either have no control over what it runs, or want it to be able to run whatever software you throw its way.
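To make that fetch-and-execute idea concrete, here’s a toy sketch of a von Neumann style machine in C#. The four-instruction set is invented purely for illustration; the point is that program and data share one memory, and the machine has no idea what it’s running.

```csharp
using System;

// A toy von Neumann machine: program and data live in the same memory,
// and the CPU just fetches, decodes, and executes whatever is there.
class ToyMachine
{
    // Made-up opcodes for illustration only.
    const int LOAD = 0, ADD = 1, STORE = 2, HALT = 3;

    static void Run(int[] memory)
    {
        int pc = 0;          // program counter
        int accumulator = 0; // a single register

        while (true)
        {
            int opcode = memory[pc];        // fetch
            int operand = memory[pc + 1];
            pc += 2;

            switch (opcode)                 // decode and execute
            {
                case LOAD:  accumulator = memory[operand]; break;
                case ADD:   accumulator += memory[operand]; break;
                case STORE: memory[operand] = accumulator; break;
                case HALT:  return;
                default: throw new InvalidOperationException("Unknown opcode " + opcode);
            }
        }
    }

    static void Main()
    {
        // "Program": load memory[8], add memory[9], store the sum in memory[10], halt.
        int[] memory = { LOAD, 8, ADD, 9, STORE, 10, HALT, 0, 2, 3, 0 };
        Run(memory);
        Console.WriteLine(memory[10]); // prints 5
    }
}
```

Whether those instructions came from a spreadsheet, a game, or a browser makes no difference to the loop, which is both the beauty of the model and the overhead a purpose-built chip gets to skip.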

In computer science, this ability to hide the nitty-gritty details of how a complex process on which a piece of functionality relies actually works is called an abstraction. Abstractions are great, I use them every day to design database schemas and write code. But they come at a cost. Making something more abstract means you incur an overhead. In virtual space, that means more time for something to execute, and in physical space that means more electricity, more heat, and in the case of cloud based software, more money. Here’s where the memristor chip for ANNs has its time to shine. Knowing that certain computing systems like routers and robots could need to run a specialized process again and again, the researchers designed a purpose-built piece of hardware which does away with abstractions, reducing overhead, and allowing them to train and run their neural nets with just a little bit of strategically directed electricity.

Sure, that’s neat, but it’s also what an FPGA, or Field Programmable Gate Array, can do already. Unlike these memristor chips, however, FPGAs can’t be easily retrained to run neural nets with a little reverse current and a new training session; they need to be re-configured, and they can’t use less power by “remembering” the current. This is what makes this experiment so noteworthy. It created a proof of concept for a much more efficient take on the FPGA at a time when techies are looking for a new way to speed up resource-hungry algorithms that require probabilistic approaches. And this is also why these memristor chips won’t change computing as we know it. They’re meant for very specific problems as add-ons to existing software and hardware, much like GPUs are used for intensive parallelization while CPUs handle day to day applications, without one substituting for the other. The von Neumann model is just too useful and it’s not going anywhere soon.

While many an amateur tech pundit will regale you with a vision of super-AIs built with this new technology taking over the world, or becoming your sapient 24/7 butler, the reality is that you’ll never be able to build a truly useful computer out of nothing but ANNs. You would lose the flexible nature of modern computing and the ability to just run an app without worrying about training a machine how to use it. These chips are very promising and there’s a lot of demand for them to hit the market sooner rather than later, but they’ll just be another tool to make technology a little more awesome, secure, and reliable for you, the end user. Just like quantum computing, they’re one means of tackling the growing list of demands for our connected world without making you wait for days, if not months, for a program to finish running and a request to complete. But the fact that they’re not going to become the building blocks of an Asimovian positronic brain does not make them any less cool in this humble techie’s professional opinion.

See: Prezioso, M., et al. (2015). Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature, 521(7550), 61-64. DOI: 10.1038/nature14441

[ image: plaything ]

A while ago, I wrote about some futurists’ ideas of robot brothels and conscious, self-aware sex bots capable of entering a relationship with a human, and why marriage to an android is unlikely to become legal. Short version? I wouldn’t be surprised if there are sex bots for rent in a wealthy first world country’s red light district, but robot-human marriages are a legal dead end. Basically, it comes down to two factors. First, a robot, no matter how self-aware or seemingly intelligent, is not a living thing capable of giving consent. It could easily be programmed to do what its owner wants it to do, and in fact this seems to be the primary draw for those who consider themselves technosexuals. Unlike another human, robots are not looking for companionship, they were built to be companions. Second, and perhaps most important, is that anatomically correct robots are often used as surrogates for contact with humans and are imparted with human features by an owner who is either intimidated or easily hurt by the complexities of typical human interaction.

You don’t have to take my word on the latter. Just consider this interview with an iDollator — the term sometimes used by technosexuals to identify themselves — in which he more or less confirms everything I said word for word. He buys and has relationships with sex dolls because a relationship with a woman just doesn’t really work out for him. He’s too shy to make a move, gets hurt when he makes what many of us consider classic dating mistakes, and rather than trying to navigate the emotional landscape of a relationship, he simply avoids trying to build one. It’s little wonder he’s so attached to his dolls. He projected all his fantasies and desires onto a pair of pliant objects that can provide him with some sexual satisfaction and will never say no, or demand any kind of compromise or emotional concern from him beyond their upkeep. Using them, he went from a perpetual third wheel in relationships to having a bisexual wife and girlfriend, a very common fantasy that has a very mixed track record with flesh and blood humans because those pesky emotions get in the way as boundaries and rules have to be firmly established.

Now, I understand this might come across as judgmental, although it’s really not meant to be an indictment against iDollators, and it’s entirely possible that my biases are in play here. After all, who am I to potentially pathologize the decisions of an iDollator, as a married man who never even considered the idea of synthetic companionship as an option, much less a viable one at that? At the same time, I think we could objectively argue that the benefits of marriage wouldn’t work for relationships between humans and robots. One of the main benefits of marriage is the transfer of property between spouses. Robots would be property, virtual extensions of the will of the humans who bought and programmed them. They would be useful in making the wishes of the human on his or her deathbed known, but that’s about it. Inheriting the human’s other property would be the equivalent of a house getting to keep a car, a bank account, and the insurance payout as far as the law is concerned. More than likely, the robot would be auctioned off or transferred to the next of kin as a belonging of the deceased, and very likely re-programmed.

And here’s another caveat. All of this is based on the idea of advancements in AI we aren’t even sure will be made, applied to sex bots. We know that their makers want to give them some basic semblance of a personality, but how successful they’ll be is a very open question. Being able to change the robot’s mood and general personality on a whim would still be a requirement for any potential buyer, as we see with iDollators, and without autonomy, we can’t even think of granting any legal personhood to even a very sophisticated synthetic intelligence. That would leave sex bots as objects of pleasure and relationship surrogates, perhaps useful in therapy or to replace human sex workers and combat human trafficking. Personally, considering the cost of upkeep of a high end sex bot and the level of expertise and infrastructure required, I’m still not seeing sex bots as solving the ethical and criminal issues involved with semi-legal or criminalized prostitution, especially in the developing world. To human traffickers, their victims’ lives are cheap and those being exploited are just useful commodities for paying clients, especially wealthy ones.

So while we could safely predict that they will emerge and become quite complex and engaging over the coming decades, they’re unlikely to be anything more than a niche product. They won’t be legally viable spouses and very seldom the first choice of companion. They won’t help stem the horrors of human trafficking until they become extremely cheap and convenient. They might be a useful therapy tool where human sexual surrogates can’t do their work, or a way for some tech-savvy entrepreneurs sitting on a small pile of cash to make some quick money. But they will not change human relationships in profound ways as some futurists like to predict, and there might well be a limit to how well they can interact with us. Considering our history and biology, it’s a safe bet that our partners will almost always be other humans and robots will almost always be things we own. Oh, they could be wonderful, helpful things to which we’ll have emotional attachments in the same way we’d be emotionally attached to a favorite pet, but ultimately, just our property.

[ illustration by Michael O ]

[ image: quantum chip ]

Quantum computers are slowly but surely arriving, and while they won’t be able to create brand new synthetic intelligences where modern computers have failed, or even be faster for most tasks typical users will need to execute, they’ll be very useful in certain key areas of computing as we know it today. These machines aren’t being created as a permanent replacement for your laptop, but to solve what are known as BQP problems, which will help your existing devices and their direct descendants run more securely and efficiently route torrents of data from the digital clouds. In computational complexity theory, BQP problems are decision problems that could be solved in polynomial time, with a bounded chance of error, when using superposition and quantum entanglement is an option for the device. Or to translate that to English, binary, yes/no problems that we could solve pretty efficiently if we could use quantum phenomena. The increase in speed comes not from making faster CPUs or GPUs, or creating ever larger clusters of them, but from implementing brand new logical paradigms into your programs. And to make that easier, a new language was created.

In classical computing, if we wanted to do factorization, we would create our algorithms, then call on them with an input, or a range of inputs if we wanted to parallelize the calculations. So in high level languages you’d create a function or a method using the inputs as arguments, then call it when you need it. But in a quantum computer, you’d be building a circuit made of qubits to read your input and make a decision, then collecting the output of the circuit and carrying on. If you wanted to do your factorization on a quantum computer — and trust me, you really, really do — then you would be using Shor’s algorithm, which gets a quantum circuit to sift through countless possible results and pick out the answer you wanted with a function specialized for this task. How should you best set up a quantum circuit so you can treat it like any other method or function in your programs? It’s a pretty low level task that can get really hairy. That’s where Quipper comes in handy, by helping you build a quantum circuit and know what to expect from it, abstracting just enough of the nitty-gritty to keep you focused on the big picture logic of what you’re doing.
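For contrast, here’s the classical call-a-function model the paragraph starts with, using plain trial division in C#. It has nothing to do with Shor’s algorithm beyond attacking the same problem, and it slows to a crawl on the huge semiprimes used in cryptography, which is precisely why the quantum version is so tempting.

```csharp
using System;
using System.Collections.Generic;

static class ClassicalFactoring
{
    // Plain trial division: define the algorithm once, call it whenever needed.
    public static List<long> Factor(long n)
    {
        var factors = new List<long>();
        for (long d = 2; d * d <= n; d++)
        {
            while (n % d == 0)
            {
                factors.Add(d);
                n /= d;
            }
        }
        if (n > 1) factors.Add(n); // whatever is left over is prime
        return factors;
    }

    static void Main()
    {
        Console.WriteLine(string.Join(" x ", Factor(2015))); // 5 x 13 x 31
    }
}
```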

It’s an embedded language, meaning that the implementation of what it does is handled by a host language, which translates its scripts into its own code before turning them into something the machine it runs on can understand. In Quipper’s case, the underlying host language is Haskell, which explains why so much of its syntax is a lot like Haskell, with the exception of the types that define the quantum circuits you’re trying to build. Although Haskell never really got that much traction in a lot of applications and the developer community is not exactly vast, I can certainly see Quipper being used to create cryptographic systems or quantum routing protocols for huge data centers, kind of like Erlang is used by many telecommunications companies to route call and texting data around their networks. It also raises the idea that one could envision creating quantum circuitry in other languages, like a QuantumCircuit class in C#, Python, or Java, or maybe a quantum_ajax() function call in PHP along with a QuantumSession object. And that is the real importance of the initiative by Quipper’s creators. It’s taking that step to add quantum logic to our computing.
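To make that thought experiment a little more concrete, here’s roughly what such a wrapper might look like in C#. Every type and method below is hypothetical, invented for illustration rather than taken from any real quantum SDK; the point is simply that the circuit hides behind an ordinary class the rest of the program can call.

```csharp
using System;

// Hypothetical sketch: a classical program treating a quantum circuit like any
// other object it can configure, run, and read results from. None of these
// types exist in a real library.
public interface IQuantumBackend
{
    // Runs a described circuit some number of times ("shots") and
    // returns the measured bit strings as integers.
    int[] Execute(string circuitDescription, int shots);
}

public class QuantumCircuit
{
    private readonly IQuantumBackend backend;
    private readonly string description;

    public QuantumCircuit(IQuantumBackend backend, string description)
    {
        this.backend = backend;
        this.description = description;
    }

    // From the caller's point of view this is just another method call;
    // the quantum weirdness stays hidden behind the abstraction.
    public int[] Run(int shots = 1024) => backend.Execute(description, shots);
}

class CircuitDemo
{
    static void Main()
    {
        IQuantumBackend backend = null; // would come from a real device or simulator
        var periodFinder = new QuantumCircuit(backend, "shor-period-finding(n=15)");
        // int[] samples = periodFinder.Run(); // post-process classically to recover factors
        Console.WriteLine("Circuit wrapped and ready (illustration only).");
    }
}
```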

Maybe, one day, quantum computers will direct secure traffic between vast data centers, giving programmers an API adopted as a common library in the languages they use, so it’s easy for a powerful web application to securely process large amounts of data with only a few lines of code calling on a quantum algorithm to scramble passwords and session data, or query far off servers with less lag — if those servers don’t implement that functionality on lower layers of the OSI Model already. It could train and run vast convolutional neural networks for OCR, swiftly digitizing entire libraries worth of books, notes, and handwritten documents with far fewer errors than modern systems, and help you manage unruly terabytes of photos spread across a social networking site or a home network by identifying similar images for tagging and organization. If we kept going, we could probably think of a thousand more uses for injecting quantum logic into our digital lives. And in this process, Quipper would be our jumping-off point, a project which shows how easily we could wrap the weird world of quantum mechanics into a classical program to reap the benefits of the results. It’s a great idea and, hopefully, a sign of big things to come.

[ image: server connections ]

One of the most frequently invoked caricatures of computer illiteracy involves some enraged senior citizen demanding that something he finds offensive or objectionable be deleted from the internet, because we all know that once something is out on the web, it’s out there until there are no more humans left anywhere. This is actually kind of cool. We’re a civilization that’s leaving a detailed, minute by minute account of who we are, what we did, and how we did it, mistakes and flaws included, in real time, and barring some calamity, hundreds of years from now, there could well be a real web archaeologist looking at your Facebook profile as part of a study. But that’s also kind of scary to EU bureaucrats, so they’re arguing for a kind of right to be forgotten for the web, a delete-by date for every piece of content out there. This way, if you say or do something stupid when you’re young, it won’t come back to bite you in your future career or social interactions. It seems like a good, and very helpful idea. Too bad it’s pretty much technically impossible.

Sure, you or someone else could delete a certain file on cue from a server. But the web isn’t run on just one server, and all major sites nowadays run in a cloud, which means that their data leads a nomadic life and has been replicated hundreds if not thousands of times over, and not only for caching and backups, but also for the purposes of anycasting. Without anycasting, getting your data from the cloud could be a miserable experience, because if you’re in LA and the server that hosts your data is in, say, Sydney, there’s going to be a lot of latency as it travels through an underwater fiber pipe thousands of miles long. But if the closest data center is in Palo Alto, there will be a lot less territory for the data to cover and you’ll get it much faster. Though this means that the same compromising picture, or post, or e-mail is living in both data centers. And on their backups. And in their caches. Oh, and all the other "edge servers" in all the other data centers used by the website’s cloud, directly or through third party arrangements.

Additionally, marking each piece of data with a self-destruct feature is very problematic. If data can be marked for deletion, it could easily be un-marked, and knowing that all data now has a use-by timestamp will mean a lot of very painful and expensive changes for the databases and the data centers expected to support this functionality. Putting a price tag of a few billion dollars on this sort of rewiring is probably very optimistic, and even then, it’s a certainty that a hacker could disable the self-destruct mechanism and keep your data forever. Likewise, what if you do want to keep a certain picture or e-mail forever for its sentimental value and lose track of it? Will you still be able to stumble on it years later and relive the precious moment? Yes, embarrassing stuff living on the web for the foreseeable future and beyond is a big deal, but there is a purely non-technical solution to it. Think twice before posting, and understand that everybody has done an embarrassing thing or two hundred in the past, and will continue to do them in the future.
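For what it’s worth, the technical core of a delete-by date is trivial to sketch; the hard part is getting every copy of the data, on every server, to honor it. A hypothetical C# model, with made-up field names, shows how little stands between a well-behaved server and one that simply ignores the flag:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical record with a "use-by" timestamp; the field names are made up.
class StoredItem
{
    public string Content { get; set; }
    public DateTime ExpiresUtc { get; set; }
}

static class ExpiryDemo
{
    // A well-behaved server filters out expired items before serving them...
    static IEnumerable<StoredItem> ServeHonest(IEnumerable<StoredItem> items) =>
        items.Where(i => i.ExpiresUtc > DateTime.UtcNow);

    // ...while a rogue or hacked one just skips the check entirely.
    static IEnumerable<StoredItem> ServeRogue(IEnumerable<StoredItem> items) => items;

    static void Main()
    {
        var items = new List<StoredItem>
        {
            new StoredItem { Content = "embarrassing photo", ExpiresUtc = DateTime.UtcNow.AddYears(-1) },
            new StoredItem { Content = "current post",       ExpiresUtc = DateTime.UtcNow.AddYears(1) }
        };
        Console.WriteLine(ServeHonest(items).Count()); // 1
        Console.WriteLine(ServeRogue(items).Count());  // 2
    }
}
```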

In five to ten years, we’ll have been living online for roughly two decades and will have seen generation after generation enmesh themselves in social media with mixed results. Barring something far too alarming to ignore, like current proud and vocal bigotry, someone’s past missteps shouldn’t be held against them. We’ll eventually forget that the pictures or posts or e-mails are even there, and when we unearth them again, we’ll be dealing with a totally different person more often than not, so we can laugh them off as old mistakes not worth rehashing, because that’s exactly what they are. The current legal gnashing of teeth about the eternal life of digital information is coming up because this is all new to the middle aged lawyers and senior judges who have been used to being able to hide and forget their youthful indiscretions, and to being unable to find out anything of potential shock value about someone’s past without digging for it on purpose. Generations used to a life in public are almost bound to have a very different, much more forgiving view.

[ image: tron police ]

When four researchers decided to see what would happen when robots issue speeding tickets and the impact it might have on the justice system, they found out two seemingly obvious things about machines. First, robots make binary decisions so if you’re over the speed limit, you get no leeway or second chances. Second, robots are not smart enough to take into account all of the little nuances that a police officer notes when deciding whether to issue a ticket or not. And here lies the value of this study. Rather than trying to figure out how to get computers to write tickets and determine when to write them, something we already know how to do, the study showed that computers would generate significantly more tickets than human law enforcement, and that even the simplest human laws are too much for our machines to handle without many years of training and very complex artificial neural networks to understand what’s happening and why, because a seemingly simple and straightforward task turned out to be anything but simple.

Basically, here’s what the legal scholars involved say, in example form. Imagine you’re speeding down an empty highway at night. You’re sober, alert, in control, and a cop sees you coming and knows you’re speeding. You notice her, hit the brakes, and slow down to an acceptable 5 to 10 miles per hour over the speed limit. Chances are that she’ll let you keep going because you are not being a menace to anyone, and the sight of another car, especially a police car, is enough to relieve your mild case of lead foot. Try doing that on a crowded road during rush hour and you’ll more than likely be stopped, especially if you’re aggressively passing or riding bumpers. Robots will issue you a ticket either way because they don’t really track or understand your behavior or the danger you may pose to others, while another human can make a value judgment. Yes, this means that the law isn’t being properly enforced 100% of the time, but that’s ok because it’s not as important to enforce as, say, laws against robbery or assault. Those laws take priority.
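The gap between the two is easy to see in code. The robot’s rule is a single comparison, while even a crude caricature of the officer’s judgment drags in context; the factors and thresholds below are invented purely to illustrate the point, not taken from the study.

```csharp
using System;

static class TicketDemo
{
    // The robot's rule: purely binary, no leeway, no context.
    static bool RobotIssuesTicket(double speed, double limit) => speed > limit;

    // A crude caricature of an officer's judgment; the factors and
    // thresholds here are made up for illustration only.
    static bool OfficerIssuesTicket(double speed, double limit, bool heavyTraffic,
                                    bool drivingAggressively, bool slowedWhenSeen)
    {
        double over = speed - limit;
        if (over <= 0) return false;
        if (heavyTraffic && (drivingAggressively || over > 10)) return true;
        // Empty road, mild speeding, driver corrected themselves: let it go.
        if (!heavyTraffic && slowedWhenSeen && over <= 10) return false;
        return over > 15;
    }

    static void Main()
    {
        // Late night, empty highway, driver slows to 8 mph over after spotting the cruiser.
        Console.WriteLine(RobotIssuesTicket(73, 65));                        // True
        Console.WriteLine(OfficerIssuesTicket(73, 65, false, false, true));  // False
    }
}
```

And even this toy version quietly assumes someone already measured traffic density, aggression, and the driver’s reaction, which is exactly the nuance the study says machines don’t get for free.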

Even though this study is clearly done with lawyers in mind, there is a lot for the comp sci crowd to dissect also, and it brings into focus the amazing complexity behind a seemingly mundane, if not outright boring activity and the challenge it poses to AI models. If there’s such a rich calculus of philosophical and social cues and decisions behind something like writing a speeding ticket, just imagine how incredibly more nuanced something like tracking potential terrorists half a world away becomes when we break it down on a machine level. We literally need to create a system with a personality, compassion, and discipline at the same time, in other words, a walking pile of stark contradictions, just like us. And then, we’d need to teach it to find the balance between the need to be objective and decisive, and compassionate and thoughtful, depending on the context of the situation in question. We, who do this our entire lives, have problems with that. How do we get robots to develop such self-contradictory complexity in the form of probabilistic code?

Consider this anecdote. Once upon a time, yours truly and his wife were sitting in a coffee shop after a busy evening and talking about one thing or another. Suddenly, there was a tap on the glass window to my left, and I turned around to see a young, blonde girl with two friends in tow pressing her open palm against the glass. On her palm, she wrote in black marker "hi 5." So of course I high-fived her through the glass, much to her and her friends’ delight, and they skipped off down the street. Nothing about that encounter or our motivations makes logical sense to any machine whatsoever. Yet, I’m sure you can think of reasons why it took place and propose why the girl and her friends were out collecting high fives through glass windows, or why I decided to play along, and why others might not have. But this requires situational awareness on a scale we’re not exactly sure how to create, collecting so much information that it probably requires a small data center to process with recursive neural networks weighing hundreds of factors.

And that is why we are so far from AI as seen in sci-fi movies. We underestimate the complexity of the world around us because we had the benefit of evolving to deal with it. Computers had no such advantage and must start from scratch. If anything, they have a handicap, because all the humans who are supposed to program them work at such high levels of cognitive abstraction that it takes them a very long time to even describe their process, much less elaborate on each and every factor influencing it. After all, how would you explain how to disarm someone wielding a knife to someone who doesn’t even know what a punch is, much less how to throw one? How do you try to teach urban planning to someone who doesn’t understand what a car is and what it’s built to do? And just when we think we’ve found something nice and binary yet complex enough to have real world implications to teach our machines, like writing speeding tickets, we suddenly find out that there was a small galaxy of things we just took for granted in the back of our minds…

[ image: gnu ]

In the world of software, disparaging a certain tech stack could quickly become a slight only one notch less offensive than insulting someone’s mother. Hey, if you spent many years working with the same technologies day in, day out, and a random stranger came along to mock everything you’re doing as useless and irrelevant with a snide smirk, you’d be offended too. The only thing that makes for more flame war fuel on tech blogs than trying to rule which programming stack is better is attacking an entire realm of ecosystems, most popularly Microsoft’s .NET and the open source community’s top technologies. And a founder of StackExchange and expert tech blogger Jeff Atwood managed to do exactly that when discussing his new commenting system startup. I generally like Atwood’s technical commentary because he brings a lot of depth into the debates he starts, but when he gets it wrong, he gets it spectacularly wrong. To borrow from Minchin, in for a penny, in for a pound I suppose, and the results can be downright shocking.

Examples include his belief in the unbelievable stat that over 90% of programmers can’t write a trivial script you learn how to write on CodeAcademy within your first two hours of programming, his suggestion for an absurd and condescending interview process that would last for months in an industry where two weeks of active job hunting will get you multiple offers, and his gloom and doom description of the current state of the .NET/C# ecosystem and where it’s headed. Now, I’m going to proactively admit that yes, I have a dog in this fight because most of my work is in .NET and most of my apps are written in C# using Visual Studio. However, I also write Javascript, I had experimented with Python and MySQL, I’m no stranger to Linux, and I do believe that yes, there really is no such thing as the best language or the best tech stack because each stack was built to tackle different problems and for different environments so it’s best to pick and choose based on the problem and the tools you have available rather than search for The One True Stack.

With the disclosure out of the way, let’s get back to Atwood and his first major complaint about the .NET ecosystem: licensing. True, Microsoft does like to have many editions of the same big, important product with numerous licensing schemes. But they’re not that hard to figure out. Put together a list of the features you’ll need and get a team headcount. Then use the version that supports all the features you want (no sense in paying for features you’ll never use), and get a licensing scheme that covers everybody on your team. If this is Atwood’s idea of hyper-complex, tax code level accounting horror, one wonders how he buys a computer or a car. Customizing a private cloud is just as involved of an endeavor even with an open source stack. No, you won’t have this licensing exercise with open source tools, but the day or two you’ll save in requirements planning will be used to configure the tools you download to work the way you need, and to load the additional set of tools you’ll need to manage the tools you just downloaded. That’s the trade-off.

You see, open source software is great, but it does come with a hidden cost. It may be solid and it might be free, but more often than not, it will rely on other open source projects or components which may or may not work as advertised and may or may not be updated on time. And as many programmers will tell you, the more dependencies in your project, the greater the odds that one of them might break and bring the whole thing down. For a smaller project, you might save a whole lot of money. On a big project, the risk may be too great. But hey, at least open source is free to download and use unlike those Microsoft tools, right? And according to Atwood, an open source project in .NET is just too hard and expensive to be run by someone in another country, a lone, gifted programmer in Central Asia or South America, right? Actually no. You can get all the tools with virtually all the functionality you need right now, free. Microsoft gives them away as Express editions and you can mix them into a full, open source home development environment. If you’re a student with a .edu e-mail, you can download professional editions for free as well.

So if Visual Studio Express editions are free, you can store and manage your code in the cloud for free, SQL Server Express is free, and the only thing you might have to pay for is IIS (which comes with Windows 7 Pro for a small price hike when you buy your computer), how is the LAMP stack (Linux running Apache with MySQL and PHP/Python) the great equalizer for developers across the world? Because Apache is free and instead of IIS they’d have to use Visual Studio’s built-in Cassini development server for web apps? There’s no cost barrier to .NET. If you’re so inclined, you can even get it on Linux using Mono and a free IDE. Microsoft makes money from a developer using the .NET stack when that developer is working for a mid-size business or a huge enterprise. Otherwise, you can be up and running with it in an afternoon for the low, low price of absolutely nothing at all but your bandwidth costs, for which your ISP would already bill you even if you used that time to watch cat videos on YouTube instead of coding.

Hold on though, Atwood has one more complaint. Open source tools are all about sharing, and that means you have more options, even if half of them are useless. His words, not mine. In the world of .NET, on the other hand, sharing code and major patches to core libraries just isn’t the warm and communal experience he wants it to be. Right. Because .NET was designed to be extended for new functionality or for on the fly patches to existing behaviors, there are more than enough such extension libraries on GitHub, and you’ll also find plenty of choices if you want some open source goodness in your C# code, be it through Git or NuGet. And what about all the broken, obsolete, and useless patches and scripts Atwood cites as a strength of all open source tools? Is he really saying that the number of choices is good enough in and of itself? I don’t want to sift through 56 patches and libraries to find the one I want. I just want to find the one that’ll fit my needs. If half my choices are useless, aren’t I better off with half the choices? And would any developer be in the wrong for not wanting to nuke core libraries under these conditions, when an extension is a much safer way to go and can be done away with without any consequences?
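For anyone who hasn’t worked in the ecosystem, this is the mechanism being referenced: C# extension methods let you bolt new behavior onto existing types, including core library types, without touching their source. A minimal example:

```csharp
using System;

// Extension methods add behavior to a type you don't own, here System.String,
// without modifying or "nuking" the core library itself.
public static class StringExtensions
{
    public static string Shout(this string text) =>
        string.IsNullOrEmpty(text) ? text : text.ToUpperInvariant() + "!";
}

class ExtensionDemo
{
    static void Main()
    {
        // Called as if it were a built-in method on string.
        Console.WriteLine("hello world".Shout()); // HELLO WORLD!
    }
}
```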

Now, none of this is meant to convince you to raise the Microsoft flag and throw away the LAMP stack you know and love. If that’s what works for you, awesome, keep at it. But please don’t fall for the Microsoft-is-Beelzebub meme and assume that your tools are the only tools that can do the job, or that Atwood’s recitation of the .NET-is-evil talking points is valid just because he’s a former .NET developer, because as you can see, he’s wrong on most points. Despite what you’d hear, .NET can be open source friendly and is moving that way, and if you’re starting out, you’re not stuck with Java or Python/Ruby/PHP as your only free choices. You too can try .NET to get a good idea of how massive, complex enterprise tools are often built, just like I’m happy to create a VM with Linux and play around with PyCharm to get a feeling for how quickly you can get things running with Python. Microsoft will not send Vinny with a lead pipe to your house to kneecap you for using Express development tools and then posting your code to GitHub. In fact, it wants you to do exactly that. Just like the custodians of Ruby and Python want you to do the same…