Archives For computer science

[ image: plaything ]

A while ago, I wrote about some futurists’ ideas of robot brothels and conscious, self-aware sex bots capable of entering a relationship with a human, and why marriage to an android is unlikely to become legal. Short version? I wouldn’t be surprised if there are sex bots for rent in a wealthy first world country’s red light district, but robot-human marriages are a legal dead end. Basically, it comes down to two factors. First, a robot, no matter how self-aware or seemingly intelligent, is not a living thing capable of giving consent. It could easily be programmed to do what its owner wants it to do, and in fact this seems to be the primary draw for those who consider themselves technosexuals. Unlike another human, robots are not looking for companionship; they were built to be companions. Second, and perhaps most important, anatomically correct robots are often used as surrogates for contact with humans, given human features by an owner who is either intimidated or easily hurt by the complexities of typical human interaction.

You don’t have to take my word on the latter. Just consider this interview with an iDollator — the term sometimes used by technosexuals to identify themselves — in which he more or less confirms everything I said word for word. He buys and has relationships with sex dolls because a relationship with a woman just doesn’t really work out for him. He’s too shy to make a move, gets hurt when he makes what many of us consider classic dating mistakes, and rather than trying to navigate the emotional landscape of a relationship, he simply avoids trying to build one. It’s little wonder he’s so attached to his dolls. He projects all his fantasies and desires onto a pair of pliant objects that can provide him with some sexual satisfaction and will never say no, or demand any kind of compromise or emotional concern from him beyond their upkeep. Using them, he went from being a perpetual third wheel in relationships to having a bisexual wife and girlfriend, a very common fantasy that has a very mixed track record with flesh and blood humans because those pesky emotions get in the way as boundaries and rules have to be firmly established.

Now, I understand this might come across as judgmental, although it’s really not meant to be an indictment against iDollators, and it’s entirely possible that my biases are in play here. After all, who am I to potentially pathologize the decisions of iDollators as a married man who never even considered the idea of synthetic companionship as an option, much less a viable one at that? At the same time, I think we could objectively argue that the benefits of marriage wouldn’t work for relationships between humans and robots. One of the main benefits of marriage is the transfer of property between spouses. Robots would be property, virtual extensions of the will of humans who bought and programmed them. They would be useful in making the wishes of the human on his or her deathbed known, but that’s about it. Inheriting the human’s other property would be the equivalent of a house getting to keep a car, a bank account, and the insurance payout as far as the law is concerned. More than likely, the robot would be auctioned off or transferred to the next of kin as a belonging of the deceased, and very likely re-programmed.

And here’s another caveat. All of this is based on the idea of advancements in AI we aren’t even sure will be made, applied to sex bots. We know that their makers want to give them some basic semblance of a personality, but how successful they’ll be is a very open question. Being able to change the robot’s mood and general personality on a whim would still be a requirement for any potential buyer, as we see with iDollators, and without autonomy, we can’t even think of granting legal personhood to even a very sophisticated synthetic intelligence. That would leave sex bots as objects of pleasure and relationship surrogates, perhaps useful in therapy or to replace human sex workers and combat human trafficking. Personally, considering the cost of upkeep of a high end sex bot and the level of expertise and infrastructure required, I’m still not seeing sex bots as solving the ethical and criminal issues involved with semi-legal or criminalized prostitution, especially in the developing world. To human traffickers, their victims’ lives are cheap and those being exploited are just useful commodities for paying clients, especially wealthy ones.

So while we could safely predict that they will emerge and become quite complex and engaging over the coming decades, they’re unlikely to be anything more than a niche product. They won’t be legally viable spouses and will very seldom be the first choice of companion. They won’t help stem the horrors of human trafficking until they become extremely cheap and convenient. They might be a useful therapy tool where human sexual surrogates can’t do their work, or a way for some tech-savvy entrepreneurs sitting on a small pile of cash to make some quick money. But they will not change human relationships in profound ways as some futurists like to predict, and there might well be a limit to how well they can interact with us. Considering our history and biology, it’s a safe bet that our partners will almost always be other humans and robots will almost always be things we own. Oh, they could be wonderful, helpful things to which we’ll have emotional attachments in the same way we’d be emotionally attached to a favorite pet, but ultimately, they’ll be just our property.

[ illustration by Michael O ]


[ image: quantum chip ]

Quantum computers are slowly but surely arriving, and while they won’t be able to create brand new synthetic intelligences where modern computers have failed, or even be faster for most tasks typical users will need to execute, they’ll be very useful in certain key areas of computing as we know it today. These machines aren’t being created as a permanent replacement for your laptop but to solve what are known as BQP problems, which will help your existing devices and their direct descendants run more securely and efficiently route torrents of data from the digital clouds. In computational complexity theory, BQP problems are decision problems that can be solved in polynomial time, with a bounded probability of error, when using superposition and quantum entanglement is an option for the device. Or to translate that into English: binary, yes/no problems that we could solve pretty efficiently if we could use quantum phenomena. The increase in speed comes not from making faster CPUs or GPUs, or creating ever larger clusters of them, but from implementing brand new logical paradigms in your programs. And to make that easier, a new language was created.
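To make the quantum resources in question a little more concrete, here’s a quick sketch of superposition and entanglement simulated with plain linear algebra in Python. It’s my illustration, not anything from a real quantum machine: a Hadamard gate puts one qubit into superposition, and a CNOT entangles it with a second one.

```python
# A minimal sketch, simulating the resources BQP algorithms exploit --
# superposition and entanglement -- as state-vector math in numpy.
import numpy as np

# A 2-qubit register is a vector of 4 complex amplitudes; start in |00>.
state = np.array([1, 0, 0, 0], dtype=complex)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: creates superposition
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # flips qubit 1 if qubit 0 is set

# Put qubit 0 into superposition, then entangle qubit 1 with it.
state = CNOT @ (np.kron(H, np.eye(2)) @ state)

# Amplitudes are now ~0.707 for |00> and |11>, zero for the rest:
# measuring either qubit instantly fixes the other.
print(np.round(state, 3))
```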

In classical computing, if we wanted to do factorization, we would create our algorithms, then call on them with an input, or a range of inputs if we wanted to parallelize the calculations. So in high level languages you’d create a function or a method using the inputs as arguments, then call it when you need it. But in a quantum computer, you’d be building a circuit made of qubits to read your input and make a decision, then collecting the output of the circuit and carrying on. If you wanted to do your factorization on a quantum computer — and trust me, you really, really do — you would use Shor’s algorithm, which gets a quantum circuit to run through countless possible results and pick out the answer you want with a function specialized for this task. How should you best set up a quantum circuit so you can treat it like any other method or function in your programs? It’s a pretty low level task that can get really hairy. That’s where Quipper comes in handy, helping you build a quantum circuit and know what to expect from it, abstracting away just enough of the nitty-gritty to keep you focused on the big picture logic of what you’re doing.
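And to show the division of labor, here’s a rough Python sketch of Shor’s algorithm in which the quantum circuit’s one job — finding the period of a^x mod N — is brute-forced classically as a stand-in. Everything around that single subroutine is ordinary classical code, which is why treating the circuit like just another function is so appealing. Again, this is my sketch rather than anything from Quipper itself.

```python
# Sketch of the classical scaffolding around Shor's algorithm. The one step a
# quantum circuit actually accelerates -- period finding -- is brute-forced
# here as a stand-in for the real circuit.
from math import gcd
from random import randrange

def find_period(a, N):
    """Smallest r > 0 with a^r = 1 (mod N). This is the quantum circuit's job."""
    x, r = a % N, 1
    while x != 1:
        x, r = (x * a) % N, r + 1
    return r

def shor(N):
    while True:
        a = randrange(2, N)
        if gcd(a, N) > 1:                  # lucky guess shares a factor with N
            return gcd(a, N), N // gcd(a, N)
        r = find_period(a, N)
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            p = gcd(pow(a, r // 2) - 1, N)
            if 1 < p < N:
                return p, N // p           # otherwise, try another a

print(shor(15))   # (3, 5) or (5, 3)
```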

It’s an embedded language, meaning that the implementation of what it does is handled by an interpreter that translates the scripts into its own code before turning them into something the machine it runs on can understand. In Quipper’s case the underlying host language is Haskell, which explains why so much of its syntax looks a lot like Haskell, with the exception of the types that define the quantum circuits you’re trying to build. Although Haskell never really got that much traction in a lot of applications and its developer community is not exactly vast, I can certainly see Quipper being used to create cryptographic systems or quantum routing protocols for huge data centers, kind of like Erlang is used by many telecommunications companies to route call and texting data around their networks. It also invites the idea that one could envision creating quantum circuitry in other languages, like a QuantumCircuit class in C#, Python, or Java, or maybe a quantum_ajax() function call in PHP along with a QuantumSession object. And that is the real importance of the initiative by Quipper’s creators. It’s taking that first step to add quantum logic to our computing.
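Since the paragraph above imagines a QuantumCircuit class in Python, here’s a hypothetical sketch of what one might look like, with a state-vector simulation standing in for real hardware. No such library is implied; it’s just the flavor of API a Quipper-style abstraction could take in a mainstream language.

```python
# Hypothetical QuantumCircuit class -- an invented example, not a real library.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

class QuantumCircuit:
    def __init__(self, n):
        self.n = n
        self.state = np.zeros(2 ** n, dtype=complex)
        self.state[0] = 1.0                    # start in |00...0>

    def _apply(self, gate, target):
        # Expand the 1-qubit gate to the full register with Kronecker products.
        op = np.array([[1.0]])
        for q in range(self.n):
            op = np.kron(op, gate if q == target else np.eye(2))
        self.state = op @ self.state

    def h(self, q):
        self._apply(H, q)
        return self                            # allow chained calls

    def measure(self):
        probs = np.abs(self.state) ** 2
        return np.random.choice(2 ** self.n, p=probs)

# Usage: a one-qubit coin flip -- superposition collapses to 0 or 1.
print(QuantumCircuit(1).h(0).measure())
```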

Maybe, one day, quantum computers will direct secure traffic between vast data centers, giving programmers an API adopted as a common library in the languages they use, so it’s easy for a powerful web application to securely process large amounts of data with only a few lines of code calling on a quantum algorithm to scramble passwords and session data, or query far off servers with less lag — if those servers don’t implement that functionality on lower layers of the OSI Model already. They could train and run vast convolutional neural networks for OCR, swiftly digitizing entire libraries worth of books, notes, and handwritten documents with far fewer errors than modern systems, and help you manage unruly terabytes of photos spread across a social networking site or a home network by identifying similar images for tagging and organization. If we kept going, we could probably think of a thousand more uses for injecting quantum logic into our digital lives. And in this process, Quipper would be our jump-off point, a project which shows how easily we could wrap the weird world of quantum mechanics into a classical program and reap the benefits from the results. It’s a great idea and, hopefully, a sign of big things to come.


[ image: server connections ]

One of the most frequently invoked caricatures about computer illiteracy involves some enraged senior citizen demanding that something he finds offensive or objectionable be deleted from the internet, because we all know that once something is out on the web, it’s out there until there are no more humans left anywhere. This is actually kind of cool. We’re a civilization that’s leaving a detailed, minute by minute account of who we are, what we did, and how we did it, mistakes and flaws included, in real time, and barring some calamity, hundreds of years from now, there could well be a real web archaeologist looking at your Facebook profile as part of a study. But that’s also kind of scary to EU bureaucrats, so they’re arguing for a kind of right to be forgotten for the web, a delete-by date for every piece of content out there. This way, if you say or do something stupid when you’re young, it won’t come back to bite you in your future career or social interactions. It seems like a good, and very helpful idea. Too bad it’s pretty much technically impossible.

Sure, you or someone else could delete a certain file on cue from a server. But the web isn’t run on just one server, and all major sites nowadays run in a cloud, which means that their data leads a nomadic life and has been replicated hundreds if not thousands of times over, not only for caching and backups, but also for the purposes of anycasting. Without anycasting, getting your data from the cloud could be a miserable experience, because if you’re in LA and the server that hosts your data is in, say, Sydney, there’s going to be a lot of latency as it travels through an underwater fiber pipe thousands of miles long. But if the closest data center is in Palo Alto, there will be a lot less territory for the data to cover and you’ll get your data much faster. Though this means that the same compromising picture, or post, or e-mail is living in both data centers. And on their backups. And in their caches. Oh, and all the other "edge servers" in all the other data centers used by the website’s cloud, directly or through third party arrangements.
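Here’s a toy model of the problem, mine and nothing like real CDN code: anycast can serve you the nearest copy precisely because the same object lives in every data center, so a true delete has to chase down every replica, cache, and backup.

```python
# Toy illustration: the same object is replicated everywhere so anycast can
# serve the closest copy, which makes "delete it from the internet" a fan-out
# across every replica -- miss one and the object survives.
latency_ms = {"sydney": 160, "frankfurt": 145, "palo_alto": 12}

# Every data center holds its own copy of the same compromising photo.
replicas = {dc: {"photo_123": "<image bytes>"} for dc in latency_ms}

def fetch(obj_id):
    nearest = min(latency_ms, key=latency_ms.get)   # what anycast achieves
    return replicas[nearest].get(obj_id)

def delete_everywhere(obj_id):
    for dc in replicas:                             # and caches, and backups...
        replicas[dc].pop(obj_id, None)

print(fetch("photo_123") is not None)   # True, served from Palo Alto
delete_everywhere("photo_123")
print(fetch("photo_123"))               # None, but only if every copy is gone
```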

Additionally, marking each piece of data with a self-destruct feature is very problematic. If data can be marked for deletion, it can easily be un-marked, and making sure all data now carries a use-by timestamp will mean a lot of very painful and expensive changes for the databases and the data centers expected to support this functionality. Putting a price tag of a few billion dollars on this sort of rewiring is probably very optimistic, and even then, it’s a certainty that a hacker could disable the self-destruct mechanism and keep your data forever. Likewise, what if you do want to keep a certain picture or e-mail forever for its sentimental value and lose track of it? Will you still be able to stumble on it years later and relive the precious moment? Yes, embarrassing stuff staying on the web for the foreseeable future and beyond is a big deal, but there is a purely non-technical solution to it. Think twice before posting, and understand that everybody has done an embarrassing thing or two hundred in the past, and will continue to do them in the future.
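In case the fragility of the scheme isn’t obvious, here’s the delete-by date reduced to a sketch, with a made-up record layout; note that undoing the self-destruct takes exactly one line.

```python
# The proposed "delete-by date" boiled down: a use-by timestamp on a record.
# Anything that can be marked for deletion can be unmarked just as easily.
import time

record = {
    "content": "embarrassing post",
    "expires_at": time.time() + 5 * 365 * 24 * 3600,  # self-destruct in ~5 years
}

def is_visible(rec):
    return rec["expires_at"] is None or time.time() < rec["expires_at"]

# One hacker, scraper, or sloppy database migration later...
record["expires_at"] = None      # the self-destruct is gone for good
print(is_visible(record))        # True, indefinitely
```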

In five to ten years, we’ll have been living online for roughly two decades and seen generation after generation enmesh themselves in social media with mixed results. Barring something far too alarming to ignore, like proud and vocal bigotry, someone’s past missteps shouldn’t be held against them. We’ll eventually forget that the pictures or posts or e-mails are even there, and when we unearth them again, we’ll be dealing with a totally different person more often than not, so we can laugh them off as old mistakes not worth rehashing, because that’s exactly what they are. The current legal gnashing of teeth about the eternal life of digital information is coming up because this is all new to the middle aged lawyers and senior judges who have been used to being able to hide and forget their youthful indiscretions, and to being unable to find out anything of potential shock value about someone’s past without digging for it on purpose. Generations used to a life in public are almost bound to have a very different, much more forgiving view.


[ image: tron police ]

When four researchers decided to see what would happen if robots issued speeding tickets, and what impact that might have on the justice system, they found out two seemingly obvious things about machines. First, robots make binary decisions, so if you’re over the speed limit, you get no leeway or second chances. Second, robots are not smart enough to take into account all of the little nuances that a police officer notes when deciding whether to issue a ticket or not. And herein lies the value of this study. Rather than trying to figure out how to get computers to write tickets and determine when to write them, something we already know how to do, the study showed that computers would generate significantly more tickets than human law enforcement, and that even the simplest human laws are too much for our machines to handle without many years of training and very complex artificial neural networks to understand what’s happening and why, because a seemingly simple and straightforward task turned out to be anything but.

Basically, here’s what the legal scholars involved say, in example form. Imagine you’re speeding down an empty highway at night. You’re sober, alert, in control, and a cop sees you coming and knows you’re speeding. You notice her, hit the brakes, and slow down to an acceptable 5 to 10 miles per hour over the speed limit. Chances are that she’ll let you keep going, because you are not being a menace to anyone and the sight of another car, especially a police car, is enough to relieve your mild case of lead foot. Try doing that on a crowded road during rush hour and you’ll more than likely be stopped, especially if you’re aggressively passing or riding bumpers. Robots will issue you a ticket either way, because they don’t really track or understand your behavior or the danger you may pose to others, while another human can make a value judgment. Yes, this means that the law isn’t being properly enforced 100% of the time, but that’s ok, because it’s not as important to enforce as, say, laws against robbery or assault. Those laws take priority.
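In code terms — my sketch, not the researchers’ model — the contrast looks something like this. The robot’s rule fits on one line, while the officer’s judgment needs context the camera never captures, and even these few extra inputs barely scratch the surface of what she actually weighs.

```python
# The study's contrast in miniature: a binary robo-rule versus a value
# judgment that weighs context a speed camera never sees.
def robot_ticket(speed, limit):
    return speed > limit                    # no leeway, no second chances

def human_ticket(speed, limit, heavy_traffic, slowed_when_seen, aggressive):
    if speed <= limit + 10 and slowed_when_seen and not heavy_traffic:
        return False                        # mild lead foot, no menace: let it go
    if heavy_traffic and aggressive:
        return True                         # an actual danger to others
    return speed > limit + 10               # otherwise, only flagrant speeding

# Empty highway at night; the driver slows to limit+8 on seeing the cruiser.
print(robot_ticket(73, 65))     # True: a ticket, every single time
print(human_ticket(73, 65, heavy_traffic=False,
                   slowed_when_seen=True, aggressive=False))   # False: a pass
```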

Even though this study is clearly done with lawyers in mind, there is a lot for the comp sci crowd to dissect as well, and it brings into focus the amazing complexity behind a seemingly mundane, if not outright boring, activity and the challenge it poses to AI models. If there’s such a rich calculus of philosophical and social cues and decisions behind something like writing a speeding ticket, just imagine how much more nuanced something like tracking potential terrorists half a world away becomes when we break it down on a machine level. We would literally need to create a system with a personality, compassion, and discipline at the same time, in other words, a walking pile of stark contradictions, just like us. And then, we’d need to teach it to find the balance between the need to be objective and decisive, and compassionate and thoughtful, depending on the context of the situation in question. We, who do this our entire lives, have problems with that. How do we get robots to develop such self-contradictory complexity in the form of probabilistic code?

Consider this anecdote. Once upon a time, yours truly and his wife were sitting in a coffee shop after a busy evening, talking about one thing or another. Suddenly, there was a tap on the glass window to my left, and I turned around to see a young, blonde girl with two friends in tow pressing her open palm against the glass. On her palm, she had written in black marker "hi 5." So of course I high-fived her through the glass, much to her and her friends’ delight, and they skipped off down the street. Nothing about that encounter or our motivations makes logical sense to any machine whatsoever. Yet, I’m sure you can think of reasons why it took place and propose why the girl and her friends were out collecting high fives through glass windows, or why I decided to play along, and why others might not have. But this requires situational awareness on a scale we’re not exactly sure how to create, collecting so much information that it would probably require a small data center to process with recursive neural networks weighing hundreds of factors.

And that is why we are so far from AI as seen in sci-fi movies. We underestimate the complexity of the world around us because we had the benefit of evolving to deal with it. Computers had no such advantage and must start from scratch. If anything, they have a handicap, because all the humans who are supposed to program them work at such high levels of cognitive abstraction that it takes them a very long time to even describe their process, much less enumerate each and every factor influencing it. After all, how would you explain how to disarm someone wielding a knife to someone who doesn’t even know what a punch is, much less how to throw one? How do you teach urban planning to someone who doesn’t understand what a car is and what it’s built to do? And just when we think we’ve found something nice and binary, yet complex enough to have real world implications, to teach our machines, like writing speeding tickets, we suddenly find out that there was a small galaxy of things we just took for granted in the back of our minds…


[ image: gnu ]

In the world of software, disparaging a certain tech stack can quickly become a slight only one notch less offensive than insulting someone’s mother. Hey, if you spent many years working with the same technologies day in, day out, and a random stranger came along to mock everything you’re doing as useless and irrelevant with a snide smirk, you’d be offended too. The only thing that makes for more flame war fuel on tech blogs than trying to rule on which programming stack is better is attacking an entire realm of ecosystems, most popularly Microsoft’s .NET and the open source community’s top technologies. And StackExchange co-founder and expert tech blogger Jeff Atwood managed to do exactly that when discussing his new commenting system startup. I generally like Atwood’s technical commentary because he brings a lot of depth to the debates he starts, but when he gets it wrong, he gets it spectacularly wrong. To borrow from Minchin, in for a penny, in for a pound I suppose, and the results can be downright shocking.

Examples include his belief in the unbelievable stat that over 90% of programmers can’t write a trivial script you learn how to write on Codecademy within your first two hours of programming, his suggestion of an absurd and condescending interview process that would last for months in an industry where two weeks of active job hunting will get you multiple offers, and his gloom and doom description of the current state of the .NET/C# ecosystem and where it’s headed. Now, I’m going to proactively admit that yes, I have a dog in this fight, because most of my work is in .NET and most of my apps are written in C# using Visual Studio. However, I also write Javascript, I’ve experimented with Python and MySQL, I’m no stranger to Linux, and I do believe that yes, there really is no such thing as the best language or the best tech stack, because each stack was built to tackle different problems and for different environments, so it’s best to pick and choose based on the problem and the tools you have available rather than search for The One True Stack.
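For reference, the trivial script in question is of FizzBuzz caliber, the screening problem Atwood himself helped make famous; in Python, the whole thing is about ten lines:

```python
# FizzBuzz: print 1..100, but "Fizz" for multiples of 3, "Buzz" for multiples
# of 5, and "FizzBuzz" for multiples of both.
for i in range(1, 101):
    if i % 15 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)
```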

With the disclosure out of the way, let’s get back to Atwood and his first major complaint about the .NET ecosystem: licensing. True, Microsoft does like to have many editions of the same big, important product with numerous licensing schemes. But they’re not that hard to figure out. Put together a list of the features you’ll need and get a team headcount. Then use the version that supports all the features you want (no sense in paying for features you’ll never use), and get a licensing scheme that covers everybody on your team. If this is Atwood’s idea of hyper-complex, tax code level accounting horror, one wonders how he buys a computer or a car. Customizing a private cloud is just as involved an endeavor, even with an open source stack. No, you won’t have this licensing exercise with open source tools, but the day or two you’ll save in requirements planning will be spent configuring the tools you download to work the way you need, and loading the additional set of tools you’ll need to manage the tools you just downloaded. That’s the trade-off.

You see, open source software is great, but it does come with a hidden cost. It may be solid and it might be free, but more often than not, it will rely on other open source projects or components which may or may not work as advertised and may or may not be updated on time. And as many programmers will tell you, the more dependencies in your project, the greater the odds that one of them might break and bring the whole thing down. For a smaller project, you might save a whole lot of money. On a big project, the risk may be too great. But hey, at least open source is free to download and use, unlike those Microsoft tools, right? And according to Atwood, an open source project in .NET is just too hard and expensive to be run by someone in another country, a lone, gifted programmer in Central Asia or South America, right? Actually, no. You can get all the tools with virtually all the functionality you need right now, free. Microsoft gives them away as Express editions, and you can mix them into a full, open source home development environment. If you’re a student with a .edu e-mail, you can download professional editions for free as well.

So if Visual Studio Express editions are free, you can store and manage your code in the cloud for free, SQL Server Express is free, and the only thing you might have to pay for is IIS (which comes with Windows 7 Pro for a small price hike when you buy your computer), how is the LAMP stack (Linux running Apache with MySQL and PHP/Python) the great equalizer for developers across the world? Because Apache is free and instead of IIS they’d have to use Visual Studio’s built-in Cassini development server for web apps? There’s no cost barrier to .NET. If you’re so inclined, you can even get it on Linux using Mono and a free IDE. Microsoft makes money from a developer using the .NET stack when the developer is working for a mid-size business or a huge enterprise. Otherwise, you can be up and running with it in an afternoon for the low, low price of absolutely nothing at all but your bandwidth costs, for which your ISP would already bill you even if you used that time to watch cat videos on YouTube instead of coding.

Hold on though, Atwood has one more complaint. Open source tools are all about sharing, and that means you have more options, even if half of them are useless. His words, not mine. In the world of .NET, on the other hand, sharing code and major patches to core libraries just isn’t the warm and communal experience he wants it to be. Right. Except .NET was designed to be extended for new functionality or for on the fly patches to existing behaviors, there are more than enough such extension libraries on GitHub, and you’ll also find plenty of choices if you want some open source goodness in your C# code, be it through Git or NuGet. And what about all the broken, obsolete, and useless patches and scripts Atwood cites as a strength of all open source tools? Is he really saying that the number of choices is good enough in and of itself? I don’t want to sift through 56 patches and libraries to find the one I want. I just want to find the one that’ll fit my needs. If half my choices are useless, aren’t I better off with half the choices? And would any developer be in the wrong for not wanting to nuke core libraries under these conditions, when an extension is a much safer way to go and can be done away with without any consequences?

Now, none of this is meant to convince you to raise the Microsoft flag and throw away the LAMP stack you know and love. If that’s what works for you, awesome, keep at it. But please don’t fall for the Microsoft-is-Beelzebub meme and assume that your tools are the only tools that can do the job, or that Atwood’s recitation of the .NET-is-evil talking points is valid just because he’s a former .NET developer, because as you can see, he’s wrong on most points. Despite what you’d hear, .NET can be open source friendly and is moving that way, and if you’re starting out, you’re not stuck with Java or Python/Ruby/PHP as your only free choices. You too can try .NET to get a good idea of how massive, complex enterprise tools are often built, just like I’m happy to create a VM with Linux and play around with PyCharm to get a feel for how quickly you can get things running with Python. Microsoft will not send Vinny with a lead pipe to your house to kneecap you for using Express development tools and then posting your code to GitHub. In fact, it wants you to do exactly that. Just like the custodians of Ruby and Python want you to do the same…


[ image: troll ]

Maybe it’s just me, but the older I get, the more it seems that high school actually gives teens a fairly accurate sneak peek into adult life, minus the rent/mortgage, bills, and the fear of getting fired with or without cause, of course. High school drama seems like a pretty good description of what currently dominates once immensely popular and influential skeptical blogs after the nerd gender wars that followed Elevatorgate, as many small skeptical conferences are quickly reduced to confrontations about sexism, politics, and gender rather than discussions of all those science and skeptical inquiry topics they were meant to facilitate. But while I get plenty of gender and political correctness discussions from big name skeptical bloggers, my tech reading has remained quite clear of them. Well, until now, when the Donglegate incident lit up the feeds of numerous tech blogs and unleashed the fury of the internet on two companies.

Unlike the basic premise of Elevatorgate, where there was something to discuss and some good points to be made before the problem metastasized into what it has, Donglegate fits the definition of a tempest in a teapot to a T. At a conference for programmers working with Python, a popular open source scripting language, tech evangelist (basically a marketer/salesperson whose job is to explain why his or her company’s flagship products are the best thing since both sliced bread and perforated toilet paper) Adria Richards overheard two guys behind her making an off-color joke about "forking a repository" followed by one about "big dongles." So she did what seasoned pros do in situations like this and asked them if they wouldn’t mind knocking it off because there was a presentation underway. Oh wait, then we wouldn’t have Donglegate, my bad. I meant she took a picture of them and publicly shamed them on Twitter for making dirty nerd puns, with links to the conference’s policy asking attendees to keep their humor audience-appropriate.

From there things quickly got ridiculous. One of the guys in question was fired, and Richards wrote an amazingly hyperbolic post declaring that she felt compelled to shame them for every little girl out there who may never learn how to program because "the ass clowns behind me would make it impossible for her to do so," concluding with "Yesterday the future of programming was on the line and I made myself heard," which unleashed the fury of the internet. She was also shown the door at her company as commercially painful DDoS attacks kept coming for two days and her bosses most likely lost their confidence in her PR skills. And as the sour cherry on top, there was the usual assortment of rape and death threats from trolls who are attracted to these dramas much like vultures are attracted by the stench of putrid, rotting flesh, giving Richards a shot at the moral high ground, saying that brogrammers couldn’t stand to see a woman in tech stand up for herself, and, apparently, for every woman and girl in the field or considering going into it.

But of course no manufactroversy would be complete without a kicker, and here it is. Richards herself is no stranger to dirty nerd puns, having used one herself on her work Twitter account a short time before the conference. By her logic, someone should’ve spoken up against it for all the boys whom dirty minds like hers will discourage from pursuing the profession. Why, if we let people make off-color jokes, they will be too offended to study, constantly in fear that the women in tech will just make jokes about their penises. </sarcasm> In the real world, people will make sexual jokes all the time and yes, a lot of them will make them at inappropriate times. The way to deal with this fact of life is to accept it and to tell the offending parties to knock it off when they cross the line, rather than rush to appoint oneself the savior of one’s industry’s future. As a man in the tech world, I’d be lying if I told you I’ve never heard women in IT make all sorts of off-color puns about "multitasking" and "mounting drives." And yet I survived to code another day, mostly because like all adults, I’ve heard plenty of stuff like this since middle school.

Women going into IT are going to find that their problems with the industry will be institutional in nature, not a matter of potentially overhearing dongle jokes. Graybeards who subtly imply or not so subtly declare that the programming world is not meant for women, or hiring managers who have free rein to hire whoever they think is most attractive rather than most qualified, are the big issues that those who want to ensure little girls can easily become programmers, if they so choose, have to battle. If someone can’t handle a cheesy penis pun or a joke implying coitus of the sort you can see in just about every other Super Bowl commercial, this person is going to have a tough time in any job or any social circle outside of a fundamentalist religious group. If knowing that dirty jokes about the profession they want to take up exist is enough to make them abandon said profession, to me it’s a sign of a pathologically sheltered childhood rather than a real issue with the industry. It’s a downright inane argument that Richards was standing up for the future of women programmers, and its self-serving nature is even more infuriating because it glosses over real problems.

I feel that it does a great disservice to women programmers when we’re told to treat them like a delicate bouquet of flowers instead of simply treating them as equals, paying them equal wages, and promoting them based on their merits as professionals. The women in IT I know want to be successful by doing something big and important, by cranking out highly visible projects. Why should we scramble to protect them from potentially overhearing a childish dirty joke and carry them to the finish line so we can hit the desired metric of female CIOs and CTOs, or architects? Isn’t that downright disrespectful to them? Why not just stay out of their way and let women in IT accomplish what they want to accomplish? It’s really not that hard to assume that a qualified professional sitting across from you or next to you can excel regardless of gender. I’m not ridiculing Richards’ behavior because I think there are no issues for women in IT or any other STEM field. I’m ridiculing it because I have too much respect for women to think that a dongle or forking joke will deter them from following their programming dreams.


[ image: circuit boards ]

A few years ago, when theoretical physicist Michio Kaku took on the future of computing in his thankfully short-lived Big Think series, I pointed out the many things he got wrong. Most of them weren’t pedantic little issues either; they were a fundamental misunderstanding of not only the existing computing arsenal deployed outside academia, but the business of technology itself. So when the Future Tense blog put up a post from highly decorated computer expert Sethuraman Panchanathan purporting to answer the question of what comes after computer chips, a serious and detailed answer should’ve been expected. And there was one. Only it wasn’t a reply to the question that was asked. It was a breezy overview of brain-machine interfaces. Just like Kaku’s venture into the future of computing in response to a question clearly asked by someone whose grasp of computing is sketchy at best, Panchanathan’s answer was a detour that avoided what should’ve been done instead: an explanation of why the question was not even wrong.

Every computing technology not based on living things, a somewhat esoteric topic in the theory of computation we once covered, will rely on some form of a computer chip. It’s currently one of the most efficient ways we’ve found of working with binary data, and it’s very unlikely that we will be abandoning integrated circuitry and compact chips anytime soon. We might fiddle around with how they work on the inside, making them probabilistic, or building them out of exotic materials, or even modifying them to read quantum fluctuations as well as electron pulses, but there isn’t a completely new approach to computing that’s poised to completely replace the good old chip in the foreseeable future. Everything Panchanathan mentions is based on integrating the signals from neurons with currents running through computer chips. Even cognitive computing for future AI models relies on computer chips. And why shouldn’t it? The chips give us lots of bang for our buck, so asking "what comes after them" doesn’t make a whole lot of sense.

If computer chips weren’t keeping up with our computing demands and could not be modified to do so due to some law of physics or chemistry standing in the way, this question would be pretty logical, just like asking how we’ll store data when our typical spinning disk hard drives can’t read or write fast enough to keep up with data center demands and create unacceptable lag. But in the case of aging hard drive technology, we have good answers like RAID configurations and a new generation of solid state drives, because these are real problems for which we had to find real solutions. Computer chips aren’t a future bottleneck. In fact, they’re the very engine of a modern computer, and we’d have to heavily add on to the theory of computation to even consider devices that don’t function like computer chips or whose job couldn’t be done by them. Honestly, I’m at a complete loss as to what these devices could be and how they could work. Probably the most novel idea I found was using chemical reactions to create logic gates, but that’s trying to improve a computer chip’s function and design, not outright replace it as the question implies.

Maybe we’re going a little too far with this. Maybe the person asking the question really wanted to know about designs that will replace today’s CMOS chips, not challenge computation as most of us in the field know it. Then he could’ve talked about boron-enriched diamond, graphene, or graphene-molybdenum disulfide chips rather than future applications of computer chips in what are quite exciting areas of computer science all by themselves. But that’s the problem with a bad question from someone who doesn’t know the topic. We don’t know what’s really being asked and can’t give a proper answer. The fact that it originally came from a popular science and tech discussion, though, makes answering it a political proposition. If instead of an answer you explain that the entire premise is wrong, you risk coming across as patronizing and making the topic way too complex for those whose expertise is not in your field. That may be why Panchanathan took a shot at it, though I really wish he had tried to educate the person asking the question instead…


[ image: math love ]

Sometimes it’s hard to decide whether an article asking about the role of computers in research is simply click bait that lures readers to disagree and boost views, or a legitimate question that a writer is trying to investigate. In this case, an article on Wired about a future of math focused ever more on computer proofs and algorithms asks whether computers are steamrolling over all human mathematicians because they can calculate so much so quickly, then answers itself with notes on how easily code can be buggy and proofs of complex theorems can go wrong. Maybe the only curious note is that of an eccentric mathematician at Rutgers who credits his computers as co-authors on his papers, and his polar opposite, an academic who eschews programming to such an extent that he delegates problems requiring code to his students, thinking it’s not worth his time to bother learning the new technology. It’s a quirky study in contrast, but little else.

But aside from the obvious answers and problems with the initial questions, a few things jumped out at me. I’m not a mathematician by any stretch of the imagination. My software deals with the applied world. But nevertheless, I’m familiar with how to write code in general, and when a mathematical proof that takes 50,000 lines of code is being discussed, my first thought is how you could possibly have that much code to test one problem. The entire approach seems bizarre for what sounds like an application of graph theory that shouldn’t take more than a few functions to implement, especially in a higher level language. And this is not counting the 300 pages of the proof’s dissection, which again seems like tackling the problem with a flood of data rather than a basic understanding of the solution’s roots. In this case, the computer seemed like it was aiding and abetting a throw-everything-and-the-kitchen-sink-at-it methodology, and that’s not good.
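To show what I mean by a few functions, here’s an exhaustive verification of a classic small combinatorial claim, that the Ramsey number R(3,3) is 6, in about a dozen lines of Python. It’s not the proof the article discusses, just a sense of scale for what brute-force checking can look like in a high level language.

```python
# Verify R(3,3) = 6 by brute force: every 2-coloring of K6's edges contains
# a monochromatic triangle, while some 2-coloring of K5's edges avoids one.
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    edges = list(combinations(range(n), 2))
    color = dict(zip(edges, coloring))
    return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
               for a, b, c in combinations(range(n), 3))

def every_coloring_has_one(n):
    n_edges = n * (n - 1) // 2
    return all(has_mono_triangle(n, c) for c in product((0, 1), repeat=n_edges))

print(every_coloring_has_one(6))   # True:  K6 always contains one
print(every_coloring_has_one(5))   # False: K5 has a coloring that avoids it
```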

When you use computers to generate vast reams of data where a solution may be hiding, or just record what they said after running a program you designed, you might get the right answer. The catch is that you’re never going to be sure unless you can solve the problem itself, or come very close to the real answer and just need the computer to scale up your calculations and fill in most of the decimal places you know need to be there. After all, computers were designed for doing repetitive, well-defined work that would take humans far too long to do and in which missing an insignificant detail would quickly throw everything off by the end. They are not thinking machines, and they rely on a programmer really knowing what’s going on under the hood to be truly useful in the academic field. Otherwise, mathematics could end up with 300 pages and 50,000 lines of code for one paper and two pages of computer printouts for another. And both extremes would get us nowhere pretty fast without a human who knows how to tackle the real problem…


[ image: vintage social media ]

Computer scientist David Gelernter wants you to know that the age of search engines and most of the web as we know it is coming to an end. Soon, we’ll be able to filter search results with custom settings, look at what happened in the news or to our friends in the past, and have new updates ready on demand, forming a constant stream of relevant, indexed data for our use. It’s probably just me, but why do these predictions sound so familiar? Oh yeah, because Gelernter just described Google advanced search and Facebook’s Timeline and called it "timestream," in a transparent or downright naive effort to make it sound as if he weren’t ripping off Facebook’s now two year old concept. Not only that, but his entire op-ed is so trippy, larded with buzzwords, and dense with jargon he invents on the spur of the moment, complete with a photo of a very esoteric schematic on the back of a napkin, that it takes two or three passes through the whole thing to understand what he’s actually saying. The whole effort just comes off as ridiculous.

Gelernter’s primary claim to fame is that he contributed to an obscure part of parallel computing which deals with how objects are accessed when you’re… oh, forget it. Unless you’re doing chip architecture or writing compilers, it probably won’t matter much to you. His second claim to fame is that he survived an attack by the Unabomber. From there on in, his list of accomplishments is more tenuous. He claims to have foreseen the world wide web, but considering that some ideas of it have been around since ARPANET went public, that’s not really that huge of a feat. His day to day work today consists of writing partisan op-eds in neoconservative publications, declaring all higher education to be a failure because Obama was elected, advocating homeschooling as the only viable education option, decrying that liberal intellectuals are destroying America with zero elaboration as to how, and advising the Lifeboat Foundation, a group of transhumanists in search of a way to save the Earth through technology. In the interests of full disclosure, I should mention that I was once offered a spot on their advisory board. I declined.

Ok, so Gelernter is now a pundit. Why is he trying to repackage Timeline as his brainchild? You see, he’s trying to launch a venture based around "lifestreaming" which, again, is basically very much like jamming your favorite RSS feeds into Timeline. And for that, he’s taken to a new trend in tech circles. Much like non-fiction authors write articles summarizing the thesis of their books, aspiring techies are writing op-eds about the Next Big Thing, hinting that they’re ahead of those trying to do the same thing, ideally summarizing their approach in a strained new buzzword like "long data" or "timestreaming," with the hope that they’ll be noticed by someone who would want to fund their efforts, or at least introduce them to someone who might be interested. It’s a rather common marketing technique for many tech savvy companies and it does attract eyeballs, but if you really dig under 99% of these op-eds, you see nothing new or exciting. Just your standard issue buzzword salad. Though in Gelernter’s case, it’s trippy enough to be entertaining, despite the sad fact that it’s being written by someone who should really know better.


[ image: android chip ]

There’s been a blip in the news cycle I’ve been meaning to dive into, but lately, more and more projects have been getting in the way of a steady writing schedule, and there are only so many hours in the day. So what’s the blip? Well, professional tech prophet and the public face of the Singularity as most of us know it, Ray Kurzweil, has a new gig at Google. His goal? To use stats to create an artificial intelligence that will handle web searches and explore the limits of how one could use statistics and inference to teach a synthetic mind. Unlike many of his prognostications about where technology is headed, this project is actually on very sound ground because we’re using search engines more and more to find what we want, and we do it based on the same type of educated guessing that machine learning can tackle quite well. And that’s why instead of what you’ve probably come to expect from me when Kurzweil embarks on a mission, you’ll get a small preview of the problems an artificially intelligent search engine will eventually face.

Machine learning and artificial neural networks are all the rage in the press right now because lots and lots of computing power can now run the millions of simulations required to train rather complex and elaborate behaviors in a relatively short amount of time. Watson couldn’t be built a few decades ago, when artificial neural networks were being mathematically formalized, because we simply didn’t have the technology we do today. Today’s cloud storage ideas require roughly the same kind of computational might as an intelligent system, and the thinking goes that if you pair the two, you’ll not only have your data available anywhere with an internet connection, but you’ll also have a digital assistant to fetch you what you need without having to browse through a myriad of folders. Hence, systems like Watson and Siri, and now, whatever will come out of the joint Google-Kurzweil effort, and these functional AI prototypes are good at navigating context with a probabilistic approach, which successfully models how we think about the world.

So far so good, right? If we’re looking for something like "auto mechanics in Random, AZ," your search assistant living in the cloud would know to look at the relevant business listings, and if a lot of these listings link to reviews, it would assume that reviews are an important part of such a search result and bring them over as well. Knowing that reviews are important, it would likely do what it can to read through the reviews and select the mechanics with the most positive reviews that really read as if they were written by actual customers, parsing the text and looking for any telltale signs of sockpuppeting, like too many superlatives or a rash of users in what seems like a strangely short time window compared to the rest of the reviews. You get good results, some warnings about who to avoid, the AI did its job, you’re happy, the search engine is happy, and a couple of dozen tech reporters write gushing articles about this Wolfram Alpha Mark 2. But what if, just what if, you were to search for something scientific, something that brings up lots and lots of manufactroversies, like evolution, climate change, or sexual education materials? The AI isn’t going to have the tools to give you the most useful or relevant recommendations there.
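Those telltale signs translate into code fairly directly. Here’s a deliberately crude sketch, my own, of the two checks described above; a real system would lean on far more sophisticated language models and fraud signals, but the shape of the logic is the same.

```python
# Crude sockpuppet heuristics: too many superlatives, or a rash of reviews
# crammed into a strangely short time window.
SUPERLATIVES = {"best", "amazing", "perfect", "greatest", "incredible"}

def superlative_density(reviews):
    words = [w.strip(".,!").lower() for r in reviews for w in r["text"].split()]
    return sum(w in SUPERLATIVES for w in words) / max(len(words), 1)

def burst_rate(reviews):
    days = sorted(r["day"] for r in reviews)
    span = (max(days) - min(days)) or 1
    return len(days) / span              # reviews per day over the active span

def looks_sockpuppeted(reviews):
    return superlative_density(reviews) > 0.15 or burst_rate(reviews) > 5

reviews = [{"text": "Best mechanic ever! Amazing, perfect service!", "day": 100},
           {"text": "Incredible! The greatest shop in town, best prices!", "day": 101}]
print(looks_sockpuppeted(reviews))       # True: flag for a closer look
```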

First off, there’s only so much that knowing context will do. For the AI, any page discussing the topic is valid, so a creationist website savaging evolution with unholy fury and a barrage of very, very carefully mined quotes designed to look respectable to the novice reader, and the archives at Talk Origins, have the same validity unless a human tells it to prioritize scientific content over religious misrepresentations. Likewise, sites discussing healthy adult sexuality, sites going off in their condemnations of monogamy, and sites decrying any sexual activity before marriage as an amoral indulgence of the emotionally defective, are all the same to an AI without human input. I shudder to think of the kind of mess trying to accommodate a statistical approach here can make. Yes, we could say that if a user lives in what we know to be a socially conservative area, place a marked emphasis on the prudish and religious side of things, and if a user is in a moderate or a liberal area, show a gradient of sound science and alternative views on sexuality. Statistically, it makes sense. In the big picture, it perpetuates socio-political echo chambers.

And that introduces a moral dilemma Google and Kurzweil will have to face. Today’s search bar takes in your input, finds what look like good matches, and spits them out in pages. Good? Bad? Moral? Immoral? Scientifically valid? Total crackpottery? You, the human, will decide. Having an intelligent search assistant, however, places at least some of the responsibility for trying to filter out or flag bad or heavily biased information on the technology involved, and if the AI is way too accommodating to the user, it will simply perpetuate misinformation and propaganda. If it’s a bit too confrontational, or follows a version of the Golden Mean fallacy, it will be seen as defective by users who don’t like to step outside of their bubble too much, or those who’d like their AI to be a little more opinionated and put up an intellectual challenge. Hey, no one said that indexing and curating all human knowledge will be easy and that it won’t require making a stand on what gets top billing when someone tries to dive into your digital library. And here, no amount of machine learning and statistical analysis will save your thinking search engine…
