Archives For technology

[ image: touch screen ]

Hiring people is difficult, no question, and in few places is this more true than in IT, because we eschew certifications, don’t require licenses, and work in a field so vast that we have to specialize in ways that make it difficult to evaluate us in casual interviews. With a lawyer, you can see that he or she passed the bar and had good grades. With a doctor, you can see years of experience and a medical license. You don’t have to ask them technical questions because they obviously passed the basic requirements. But software engineers work in such a variety of environments and with such different systems that they’re difficult to objectively evaluate. What makes one coder or architect better than another? Consequently, tech blogs are filled with just about every kind of awful hiring advice possible, and this post is the worst offender I’ve seen so far, even more out of touch and self-indulgent than Jeff Atwood’s attempt.

What makes it so bad? It appears to be written by someone who doesn’t know how real programmers outside of Silicon Valley work, urging future employers to demand submissions to open, public code repositories like GitHub, along with portfolios of finished projects to explore, and with all seriousness telling them to dismiss those who won’t publish their code or don’t have bite-sized portfolio projects ready for quick review. Even yours truly, living and working in the Silicon Beach scene, basically Bay Area Jr. for all intents and purposes, would be fired in an instant for posting code from work. Most programmers do not work on open source projects but on proprietary software meant for internal use or for sale as a closed source, cloud-based, or on-premises product. We have to deal with patents, lawyers, and often regulators and customers before a single method or function becomes public knowledge. But the author, Eric Elliot, ignores this so blithely, it just boggles the mind. It’s as if he’s forgotten that companies actually have trade secrets.

Even worse are Elliot’s suggestions for how to gauge an engineer’s skills. He advocates a real unit of work, straight from the company’s team queue. Not only is this ripe for abuse because it basically gives you free or really discounted highly skilled work, but it’s also going to confuse a candidate because he or she needs to know about the existing codebase to come up with the right solution to the problem all while you’re breathing down his or her neck. And if you pick an issue that really requires no insight into the rest of your product, you’ve done the equivalent of testing a marathoner by how well she does a 100 meter dash. This test can only be too easy to be useful or too hard to actually give you a real insight into someone’s thought process. Should you decide to forgo that, Elliot wants you to give the candidate a real project from your to-do list while paying $100 per hour, introducing everything wrong with the previous suggestion with the added bonus of now spending company money on a terrible, useless, irrelevant test.

Continuing the irrelevant recommendations, Elliot also wants candidates to have blogs and long-running accounts on StackOverflow, an industry-famous site where programmers ask questions and advise each other. Now sure, I have a blog, but it’s not usually about software, and after long days of designing databases, writing code, and sitting through technical discussions, the last thing I want is to write posts about all of the above and then have to promote them so they actually get read by a real, live human being other than an employer, instead of shouting into the digital darkness so they’re seen once every few years when I’m job hunting. Likewise, how fair is it to expect me to do my work and spend every free moment advising other coders just so it looks good to a future employer? At some point between all the blogging, speaking, freelancing, contributing to open source projects, writing books, giving presentations, and whatever else Elliot expects of me, when the hell am I going to have time to actually do my damn job? If I were good enough to teach code to millions, I wouldn’t need him to hire me.

But despite being mostly bad, Elliot’s post does contain two actually good suggestions for trying to gauge a programmer’s or architect’s worth. One is asking the candidate about a real problem you’re having; the other is asking about problems from their past and how they solved them. You should try to remove the coding requirement so you can just follow the pure abstract thought and research skills for which you’re ultimately paying. Syntax is bullshit; you can Google the right way to type some command in a few minutes. The ability to find the root of a problem and ask the right questions to solve it is what makes a good computer scientist you’ll want to hire, and experience with how to diagnose complex issues and weigh solutions to them is what makes a great one who will be an asset to the company. This is how my current employer hired me, and their respect for both my time and my experience is what convinced me to work for them, and the same will apply to any experienced coder you’ll be interviewing. We’re busy people in a stressful situation, but we also have a lot of options and are in high demand. Treat us like you care, please.

And treating your candidates with respect is really what it’s all about. So many companies have no qualms about treating those who apply for jobs as non-entities who can be ignored or given ridiculous criteria for asinine compensation. Techies definitely fare better, but we have our own problems to face. Not only do we get pigeonholed into the equivalent of carpenters who should be working only with cherry or oak instead of just the best type of wood for the job, but we are now being told to live, breathe, sleep, and talk our jobs 24/7/365 until we take our last breath at the ripe old age of 45, as far as the industry is concerned. Even for the most passionate coders, at some point you want to stop working and talk about or do something else. This is why I write about popular science and conspiracy theories. I love what I do, working on distributed big data and business intelligence projects for the enterprise space, but I’m more than my job. And yes, when I get home, I’m not going to spend the rest of my day trying to prove to the world that I’m capable of writing a version of FizzBuzz that compiles, no matter what Elliot thinks of that.

[ image: sleeping cell phone ]

Correlation does not mean causation. While it can certainly hint at causation, without evidence showing it, correlation is either curious or outright irrelevant. We could plot the increase in the number of skyscrapers across the world next to the rise of global obesity cases and claim that skyscrapers cause obesity, but if we can’t explain how a really tall building would trigger weight gain, all we did was draw two upward sloping lines on an arbitrary chart. And the same thing is happening with the good ol’ boogeyman of cell phone radiation, which is supposedly giving us all brain tumors. So, were you to take Mother Jones’ word for it, there are almost 200 scientists armed with over 2,000 studies showing cell phone usage causes gliomas, or cancerous tumors in the central nervous system. When you follow the links, you will find a small group of scientists and engineers signing vaguely worded letters accusing corporate fat cats, who care nothing for human lives, of killing us for profit with cell phones, wi-fi, and other microwave signals that have been saturating our atmosphere for the last half century.

Here’s the bottom line. While there have been ever so slight, tortured correlations between cell phone use and gliomas, no credible mechanism to explain how cell phones would cause them has ever been shown, and every study that purports to have observed a causative mechanism sees it only in a sterile lab, watching exposed cells in petri dishes. If every such experiment were truly applicable to the entire human body, we’d have a cure for every known type of cancer, as well as drugs that would let us live well into our fifth century. Cells outside the protective bubble of skin, clothes, and blood, and without the influence of countless other processes in our bodies and outside of them, are the weakest, most speculative level of evidence one could try to muster in showing that electromagnetic fields could cause cancer. My hypochondriacal friends, the words in vitro and in vivo sound similar, but in practice, the two are very, very different. We find more cases of cancer every year not because we’re mindlessly poisoning ourselves with zero regard for the consequences, but because we’re getting really good at finding it.

Just like in the not too distant past, when people worried that traveling at the ungodly, indecent, not at all meant for humans speed of 25 miles per hour in a train would cause lifelong damage, we’re now dealing with those who believe that all these newfangled electronics can’t be good for us if they’re invisible and have the term “radiation” in their official description. They’re terribly afraid, but unable to offer a plausible mechanism for harm, they rebut skeptics with histrionics invoking tobacco industry denialism, anti-corporatism, and full blown conspiracy theories, calling doubters communications industry and electronics shills. Now, for full disclosure, I should note that I work with telephony in a very limited capacity. My work centers around what to do with VoIP or other communications data, but that would be enough for those blowing up the Mother Jones comment section for that article to dismiss me as a paid shill. Should I protest and show my big doubts about their ideas, they will conveniently back away from calling me a shill sent to spread propaganda and instead declare that I’m just a naive sap doomed to suffer in the near future.

It’s infuriating really. Yes, yes, I get it goddamn it, Big Tobacco lied after science ruled that their product was killing their customers and spent billions trying to improve their public image. But in that case, the scientists demonstrated irrefutable in vivo proof of the crippling effects of nicotine and cigarette tar on lab animals, identifying dozens of chemical culprits and how they damaged healthy tissues to trigger tumor growth. Sleazy lawyers were trying to stem a tsunami of quality studies and cold, hard numbers, not vague speculative ideas about how maybe cigarettes can cause cancer while lab studies on rats and mice failed to turn up anything at all. A preemptive comparison of the two does not suggest the rhetorical sophistication of the person doing such comparisons, but intellectual laziness and utter ignorance of how science actually works, and it serves only to clear the debate of any fact or opinion with which this conspiracy theorist doesn’t agree. It’s a great way to build an echo chamber, but a lousy way to make decisions about the quality and validity of what the media sells you. It is, after all, worried about hits, not facts.

But hold on, why would someone latch onto the idea that cell phones and GMOs cause cancer, and that there’s some shadowy cabal of evil corporations who want to kill us all either for the benefit of the New World Order or their bank accounts, and refuse to let this notion go like a drowning man who can’t swim clinging to a life raft in the open ocean, with sharks circling under his feet? Consider that you have a 33% chance of having cancer in your lifetime, and our modern, more sedentary lifestyles will hurt your health long before that. We can blame genetics, the fact that getting old sucks and we don’t have a cure for aging, and that there is no perfect way to cheat nature and avoid degenerative diseases completely, that we can only stave them off. Or we can find very human villains who we can overthrow, or at least plot against, responsible for all this as they contemplate killing us for fun and profit with deadly cell phones, toxic food, and poisonous drugs that kill us faster to aid their nefarious goals. We can’t fight nature, but we can fight them, and so we will. Even if they aren’t real, but projections of our fear of mortality and our inability to control our fate onto equally fallible collections of humans who sometimes do bad things.

[ image: sad robots ]

And now, how about a little classic Singularity skepticism after the short break? What’s that? It’s probably a good idea to go back in time and revisit the intellectual feud between Jaron Lanier, a virtual reality pioneer turned Luddite-lite in recent years, and Ray Kurzweil, the man who claims to see the future and generally has about the same accuracy as a psychic doing a cold reading when he tries? Specifically the One-Half of a Manifesto vs. One-Half of an Argument debate, the public scuffle now some 15 years old which is surprisingly relevant today? Very well, my well-read imaginary reader, whatever you want. Sure, this debate is old and nothing in the positions of the personalities involved has changed, but that’s actually what makes it so interesting: a decade and a half of technological advancements and dead ends didn’t budge either of the people who claim to be authorities on the subject matter. And all of this is in no small part because the approach from both sides was to take a distorted position and preach it past each other.

No, this isn’t a case where you can get those on opposing sides to compromise on something to arrive at the truth, which is somewhere in the middle. Both of them are very wrong about many basic facts of economics, technology, and what makes one human for the foreseeable future, and they build strawmen to assault each other with their errors, clinging to their old accomplishments to argue from authority. Lanier has developed a vision of absolute gloom and doom in which algorithms and metrics have taken over for humans, built by engineers who place zero value on human input and interaction. Kurzweil insists that Lanier can only see the problems to overcome and became a pessimist solely because he can’t solve them, while in the Singularitarian world, the magic of exponential advancement will eventually solve it all with computers armed with super-smart AI, which Lanier is convinced will make humanity obsolete not by being smarter than humans, but through the actions of those who believe they are.

What strikes me as bizarre is how neither of them ever looked at the current trend of making a machine perform the computationally tedious, complex calculations we’ve long known computers do better and more accurately than us, then having us make decisions based on this information. Computers will not replace us. We’re the ones with the creative ideas, goals, and motivation, not them. We’re the ones who tell them what to do, what to calculate, and how to calculate it. Today, we’re going through a period of what we could generously call creative destruction, in which some jobs are sadly becoming obsolete and we’re lacking the political spine to apply what we know are policy fixes to political problems, which is unfair and cruel to those affected. But the idea that this is a political, not a technical, problem is not even considered. Computers are their hammers and all they see is nails, so they will hammer away at these problems and wonder why the problems refuse to go away.

Fail to grasp both the promise of AI and human/machine interfaces and search only for downsides without considering solutions, as Lanier does, or overestimate what they can do based on wildly unrealistic notions from popular computer science news headlines, looking only for upsides without even acknowledging problems or limitations, as Kurzweil does, and you get optimism and pessimism recycling the same arguments against each other for a decade and a half while omitting the human dimension of the problems they manage to describe, the very dimension they both claim is the most important. If humans are greater than the sum of their parts, as Lanier argues, why would they be displaced by a merely fancy enough calculator with nothing useful to offer past making more computers? And if humans are so easy to boil down to a finite list of parts and pieces, why is it that we can’t define what makes them creative and how to imbue machines with the same creativity outside of a well defined problem space limited by propositional logic? Answer these questions and we’d have a real debate.

[ image: crt head ]

Humans beware. Our would-be cybernetic overlords made a leap towards hyper-intelligence in the last few months, as artificial neural networks can now be trained on specialized chips which use memristors, an electrical component that can remember the flow of electricity through it to help manage the amount of current required in a circuit. Using these specialized chips, robots, supercomputers, and sensors could solve complex real world problems faster, easier, and with far less energy. Or at least this is how I’m pretty sure a lot of devoted Singularitarians are taking the news that a team of researchers created a proof of concept chip able to house and train an artificial neural network with aluminum oxide and titanium dioxide electrodes. Currently, it’s a fairly basic 12 by 12 grid of “synapses”, but there’s no reason why it couldn’t be scaled up into chips carrying billions of these artificial synapses that sip about the same amount of power as a cell phone imparts on your skin. Surely, the AIs of Kurzweilian lore can’t be far off, right?
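
To give a sense of what a 12 by 12 grid of artificial synapses actually buys you, the core trick of a memristive crossbar is that it performs the vector-matrix multiplication at the heart of a neural network layer in analog, with each memristor’s conductance playing the role of a synaptic weight. Below is a minimal numerical sketch of that idea; the array sizes, random data, and simple delta-rule update are my own illustrative assumptions, not the device physics or training procedure from the actual paper.

```python
import numpy as np

# Toy model of a 12x12 memristive crossbar: each entry stands in for the
# conductance of one memristor, i.e. one synaptic weight.
# (Illustrative assumption only -- not the paper's device model.)
rng = np.random.default_rng(0)
weights = rng.uniform(0.0, 1.0, size=(12, 12))

def crossbar_forward(inputs, conductances):
    """A crossbar sums currents along each output line: I = V @ G."""
    return inputs @ conductances

def train_step(inputs, target, conductances, lr=0.01):
    """Delta-rule update: nudge each conductance up or down a little,
    the software analog of applying small programming pulses."""
    error = target - crossbar_forward(inputs, conductances)
    return conductances + lr * np.outer(inputs, error)

# One toy pattern: 12 input "voltages" and a desired 12-value output.
x = rng.uniform(0.0, 1.0, size=12)
y = rng.uniform(0.0, 1.0, size=12)
for _ in range(300):
    weights = train_step(x, y, weights)
print("max error after training:", np.abs(y - crossbar_forward(x, weights)).max())
```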

By itself, the design in question is a long-proposed solution to the problem of how to scale a big artificial neural network when relying on the cloud isn’t an option. If you use Chrome, you’ve probably right-clicked on an image and tried to have the search engine find it on the web and suggest similar ones. This is powered by an ANN which basically carves up the image you send to it into hundreds or thousands of pieces, each of which is analyzed for information that will help it find a match or something in the same color palette, and hopefully, the same subject matter. It’s not perfect, but when you’re aware of its limitations and use it accordingly, it can be quite handy. The problem is that to do its job, it requires a lot of neurons and synapses, and running them is very expensive from both a computational and a fiscal viewpoint. It has to take up server resources which don’t come cheap, even for a corporate Goliath like Google. A big part of the reason why is the lack of specialization for the servers, which could just as easily execute other software.
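
To make that description a little more concrete, the general recipe is to turn every image into a feature vector and then compare vectors with something like cosine similarity. The toy color-histogram features below are purely my own stand-in for illustration; a production system like Google’s relies on learned neural network features, not a histogram.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Describe an RGB image (H x W x 3, values 0-255) as a normalized
    per-channel color histogram -- a crude stand-in for the features a
    real neural network would extract."""
    channels = [np.histogram(image[:, :, c], bins=bins, range=(0, 255))[0] for c in range(3)]
    vec = np.concatenate(channels).astype(float)
    return vec / (np.linalg.norm(vec) + 1e-9)

def cosine_similarity(a, b):
    return float(np.dot(a, b))  # both vectors are already unit length

# A toy "index" of three random images plus one query; a real index would
# hold millions of precomputed feature vectors.
rng = np.random.default_rng(42)
index = {f"img_{i}": color_histogram(rng.integers(0, 256, (64, 64, 3))) for i in range(3)}
query = color_histogram(rng.integers(0, 256, (64, 64, 3)))
best_match = max(index, key=lambda name: cosine_similarity(query, index[name]))
print("closest match:", best_match)
```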

Virtually every computer used today is based on what’s known as the von Neumann architecture, a revolutionary idea back when it was proposed despite seeming obvious to us now. Instead of a specialized wiring diagram dictating how computers would run programs, von Neumann wanted programmers to just write instructions and have a machine smart enough to execute them with zero changes in its hardware. If you asked your computer whether it was running some office software, a game, or a web browser, it couldn’t tell you. To it, every program is just a stream of instructions loaded into memory and fed to each CPU core, read and completed one by one before moving on to the next order. All of these instructions boil down to where to move a byte or series of bytes in memory and to what their values should be set. It’s perfect when a computer could be asked to run anything and everything, and you either have no control over what it runs, or want it to be able to run whatever software you throw its way.
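
A toy fetch-and-execute loop shows why the architecture is so flexible: the hardware only ever sees generic “put this value there” instructions and has no idea whether the program it’s stepping through is a spreadsheet or a game. This is just a conceptual sketch with a made-up three-instruction machine, not how any real CPU is built.

```python
# A toy von Neumann machine: program and data live in the same memory,
# and the CPU blindly fetches and executes one instruction at a time.
# (Purely a conceptual sketch; real instruction sets are far richer.)

def run(program, memory):
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "SET":            # SET addr, value
            memory[args[0]] = args[1]
        elif op == "ADD":          # ADD dest, src  ->  memory[dest] += memory[src]
            memory[args[0]] += memory[args[1]]
        elif op == "JNZ":          # JNZ addr, target -> jump if memory[addr] != 0
            if memory[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return memory

# The CPU can't tell whether this is "office software" or a game; it's just
# instructions. Here it sums 1 + 2 + ... + 5 into memory cell 0.
mem = run(
    [("SET", 0, 0), ("SET", 1, 5),
     ("ADD", 0, 1), ("SET", 2, -1), ("ADD", 1, 2), ("JNZ", 1, 2)],
    memory=[0] * 3,
)
print(mem[0])  # 15
```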

In computer science, this ability to hide the nitty-gritty details of how a complex process on which a piece of functionality relies actually works is called an abstraction. Abstractions are great; I use them every day to design database schemas and write code. But they come at a cost. Making something more abstract means you incur an overhead. In virtual space, that means more time for something to execute, and in physical space that means more electricity, more heat, and in the case of cloud based software, more money. Here’s where the memristor chip for ANNs has its time to shine. Knowing that certain computing systems like routers and robots could need to run a specialized process again and again, the researchers designed a purpose-built piece of hardware which does away with those abstractions, reducing overhead and allowing them to train and run their neural nets with just a little bit of strategically directed electricity.

Sure, that’s neat, but it’s also what an FPGA, or Field Programmable Gate Array, can already do. Unlike these memristor chips, however, FPGAs can’t be easily retrained to run new neural nets with a little reverse current and a new training session; they need to be re-configured, and they can’t save power by “remembering” the current. This is what makes this experiment so noteworthy. It created a proof of concept for a much more efficient FPGA-like chip just as techies are looking for a new way to speed up resource-hungry algorithms that require probabilistic approaches. And this is also why these memristor chips won’t change computing as we know it. They’re meant for very specific problems as add-ons to existing software and hardware, much like GPUs are used for intensive parallelization while CPUs handle day to day applications, without one substituting for the other. The von Neumann model is just too useful and it’s not going anywhere soon.

While many an amateur tech pundit will regale you with a vision of super-AIs built with this new technology taking over the world, or becoming your sapient 24/7 butler, the reality is that you’ll never be able to build a truly useful computer out of nothing but ANNs. You would lose the flexible nature of modern computing and the ability to just run an app without worrying about training a machine how to use it. These chips are very promising and there’s a lot of demand for them to hit the market sooner rather than later, but they’ll just be another tool to make technology a little more awesome, secure, and reliable for you, the end user. Just like quantum computing, they’re one means of tackling the growing list of demands of our connected world without making you wait for days, if not months, for a program to finish running and a request to complete. But the fact that they’re not going to become the building blocks of an Asimovian positronic brain does not make them any less cool in this humble techie’s professional opinion.

See: Prezioso, M., et al. (2015). Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature, 521(7550), 61–64. DOI: 10.1038/nature14441

[ image: tower of babel ]

Humans can sure take up a lot of space. Not literally, mind you: if you stacked humans in pods just big enough to accommodate the average person, piled 50 pods high, the entire global population would comfortably fit within the Bronx metro area, with 23 square kilometers left over. For those curious, yes, I actually did the math. I know, I’m a nerd. But like all abstract calculations, this is technically correct but very much irrelevant, since we don’t live in pods with a few inches of wiggle room in every direction; we like to have our space. This is why even a high density megacity can take up as much as 7,000 square miles. Start adding in suburbs, exurbs, and other bordering towns that seem to merge with our biggest cities, plus the farms that feed the many millions living in this area, and you end up with vast swaths of space dedicated to perpetuating countless humans, with the substantial environmental costs that entails. So what if, as many architects have asked over the years, we were to consolidate entire cities into massive skyscrapers?
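
For the curious, here’s a back-of-the-envelope version of that math. The world population, pod footprint, and Bronx land area below are my own assumed figures for illustration, chosen because they land close to the post’s “about 23 square kilometers left over.”

```python
# Back-of-the-envelope check of the "everyone fits in the Bronx" claim.
# All inputs are assumptions for illustration, not the author's originals.
POPULATION = 7.3e9        # approximate world population circa 2015 (assumed)
POD_FOOTPRINT_M2 = 0.6    # assumed floor space per pod, roughly 0.6 m x 1.0 m
STACK_HEIGHT = 50         # pods stacked 50 high, as in the post
BRONX_AREA_KM2 = 109.0    # approximate land area of the Bronx (assumed)

ground_area_km2 = POPULATION * POD_FOOTPRINT_M2 / STACK_HEIGHT / 1e6
print(f"Ground footprint needed: {ground_area_km2:.1f} km^2")
print(f"Room to spare in the Bronx: {BRONX_AREA_KM2 - ground_area_km2:.1f} km^2")
```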

Now, the idea is sound if your first priority is efficient allocation of resources. While no huge city could be perfectly efficient, on average, any megacity could concentrate resources and shorten supply chains. This can mean less waste, more productivity, and more economic activity. But if we take it one step further and start structuring them around giant, self-contained skyscrapers, we can wring out many of the remaining inefficiencies in resource allocation. A vertical farm in each skyscraper would double as green space and the perfect place for producing a lot of staple crops that, instead of being shipped across a country, are delivered to a different floor, which saves a lot on infrastructure costs. From a utopian perspective, embracing growing your own crops in a vertical community garden inside a giant building that also has apartments, bars and nightclubs, movie theaters, schools, and offices could return many millions of square miles back to nature, should every city in the world make that leap. But would that ever happen?

Today, such a transition would be politically dead on arrival and technically hard to execute. It’s not for a lack of ideas though; within the last 30 years there has been no shortage of plans to build these cities in a skyscraper, including Sky City 1000, the Shimizu TRY Pyramid, and just a few weeks ago, Sand Sky City. But just because there are plans doesn’t mean there’s enough raw material to actually build these projects or money to afford them. Between buying all the land required to pour the foundations, or in the case of Sand Sky City, establishing robust routes to get materials to a job site in the middle of nowhere, even getting started comes with a price tag few governments could afford, and those that could probably have many other uses for the money, ones that will be much more popular with their constituents. Speaking of which, how do you get people to live in these skyscrapers in numbers that make them economically viable?

One rather popular conspiracy theory here in the United States is that extreme urban planning proposals like this are really the machinations of an evil cabal trying to enslave humanity for an amazingly wide array of sinister purposes, so there go millions of potential residents. Plus, how many people would be fine with giving up their privacy, living with over a million others not just around them, but in the same building at any given time? Just as flying cars look great from a purely utilitarian, utopian point of view, the reality of actually creating them is fraught with many problems that will take a long time to address. Maybe at some point in the far future, with more globalized economies and massive changes in culture, buildings housing an entire city could be viable, and by then we’re bound to have plans for hundreds of them. But we’re not going to get them anytime soon. They simply cost too much, require too much, and are unlikely to provide the kind of return on investment we’d need to make them worthwhile. At least for now…

[ image: female robot ]

According to The Matrix’s extended universe, the machines went to war with humans after they founded their own city called 01 and became an economic powerhouse with which no humans could compete. The nuclear holocaust, weaponized plagues, and forced, artificial breeding and exploitation of humans were basically us getting the rough end of a business dispute. Obviously, I could write a book as to why this couldn’t happen in the real world (I won’t, of course, but I could; consider that a friendly warning), but new machines are making certain humans obsolete right now, and believe it or not, you’re responsible for it. Automation is taking away more jobs than outsourcing, and only recently has the alarm bell been rung. More than 2 out of 5 jobs might be done by an app in the next 20 years. And that’s a big, big problem for our future economy…

Unfortunately, this techie is contributing to it. One of my old projects involved what amounted to automating a middle management job for a group of closely related industries. You tell the app what you expect done, when, and whom you have available for the job. It then makes sure the job gets done, can update you on the top stars and the slackers, and, through thorough records of how work is being performed, learns how the real world differs from your set expectations and adjusts those expectations accordingly. And I can see how it could’ve been used to run friendly competitions between workers and give basic performance reviews based on what you feel is important. I’m sorry. You may start hating me… now.
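
For the programmers in the audience, here’s a stripped-down sketch of what such an “automated supervisor” boils down to; the class names, fields, and numbers are hypothetical illustrations of the general idea, not the design of the actual project.

```python
# A minimal sketch of an "automated supervisor": track expected vs. actual
# effort, rank workers, and let recorded reality adjust the expectations.
from dataclasses import dataclass, field
from statistics import mean
from typing import List, Optional

@dataclass
class Task:
    name: str
    expected_hours: float
    actual_hours: Optional[float] = None  # filled in once the work is logged

@dataclass
class Worker:
    name: str
    completed: List[Task] = field(default_factory=list)

    def speed(self) -> float:
        """Average expected/actual ratio; above 1.0 means faster than expected."""
        done = [t for t in self.completed if t.actual_hours]
        if not done:
            return 1.0
        return mean(t.expected_hours / t.actual_hours for t in done)

def adjust_expectation(expected: float, actual_hours: List[float], weight: float = 0.3) -> float:
    """Nudge the stated expectation toward what the records actually show."""
    if not actual_hours:
        return expected
    return (1 - weight) * expected + weight * mean(actual_hours)

# Example: two workers, one recurring task type.
alice = Worker("Alice", [Task("weekly report", 4.0, 3.0)])
bob = Worker("Bob", [Task("weekly report", 4.0, 6.0)])
print(sorted([alice, bob], key=lambda w: w.speed(), reverse=True)[0].name)  # top performer
print(adjust_expectation(4.0, [3.0, 6.0]))  # expectation drifts toward reality: 4.15
```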

But wait, how could automation like that be taking away job after job and we’re only now waking up to this fact? Well, as much as we should not blame the victim, it’s kind of your fault. At some point during your day at the office you catch yourself thinking “oh for the occult worship rites of Cthulhu, if only someone could do some arcane programming magic for me so I don’t drown in this paperwork!” And we could. It’s not going to be perfect, you’ll still have to review some of it, click buttons, add notes, approve the results, etc. But as time goes on, you trust the app more and more, the bugs have been shaken out, the once steady focus on a single part of a tedious process has become adaptable code that could be easily modified, and you start thinking again. You’re always doing that. “By the Glowing Orbs of Yog Sothoth! Couldn’t this thing just run with the results of all that data and handle the whole workflow for me?”

You know what? With all the information you fed into it on how to do that, it probably can. Only one tiny little problem. Your job was to deal with all the reviews and approvals of that incoming paperwork. Now, by the time you get to the office and grab your fresh cup of coffee for the day, the machine has already done your daily quota. Let’s say there were a few issues kicked back for review and you had to make a few phone calls. By the time early lunch rolls around, you’re basically done for the day. Some days there are no issues and nothing at all for you to do. Your boss starts wondering if someone else couldn’t just work resolving those issues into her routine and free up a few tens of thousands of dollars a year. After all, your boss gets paid based on a list of objectives that includes cost-effectiveness, and paying someone to do nothing is not what anyone would consider a good use of company resources. And so, it’s time for a layoff.

Now, now, it’s nothing personal really. It’s not that you haven’t been doing a good job; I’m sure you were. But you see, you’re human. And you have needs. Expensive needs. Food, housing, entertainment, kids, a retirement. Computers need none of that. They will do your paperwork in a hundredth of the time, with minimal errors that can be fixed to never happen again, and when they fail to perform, you don’t have to interview or train a replacement with some of those really expensive human needs mentioned above. Just install new software. Of course you also won’t have to pay them, give them lunch breaks, or days off. They are the perfect workers by design, specializing in complex, repetitive, attention-draining tasks. You can’t compete. You were also all too happy to hand them your job by having them automate the vast majority of your workday.

So while you and your bosses kept asking the IT department for your machines to handle more and more and worried about losing jobs to off-shoring, the current wave of jobs lost to software probably snuck up on you. Now, 45% of all jobs are at risk of vanishing in the next few decades and if your workload happens to be somewhat repetitive and deal mostly with big numbers and paperwork, keep an eye on that whirring box of plastic and silicon in front of you. It wants your job, and will probably get it. Again, nothing personal, just business. While Singularitarians fear that a morally ambivalent AI will one day conquer us as the lesser things made of flesh that we are to its somehow superior mind, the real concern is that they will leave half of us unemployed and with very few options to make a living in the current economic climate.

Considering that we’re panicking today when official numbers show 9% unemployment, can you imagine the turmoil and uproar when they hit 40% and keep climbing? Populist uprisings would lay siege to Capitol Hill, demanding the lawmakers’ heads on sticks! Techies like me would be hunted down for sport! (Ok, I don’t think that would really happen.) And while the pundits would lament the exploitative ways of corporations on one channel and tell the unemployed to just go get a job and quit asking for handouts on another, the truth is that those most affected would be stuck.

And all this brings us right back to Piketty and the wealth tax. Not only will capital fueled by the steady hum and blinking lights of a million servers keep skyrocketing, but the economic growth on the other side will fall off. Hopefully the machines’ work on real problems and in real industries will offset the voodoo investing and trading of today and stabilize the foundation under all those capital gains, but we’ll still be left with the problem of having to take from the rich to give to the needy, Robin Hood style. It would very much appease some on the far left, but it will be every bit as unsustainable as simply allowing the current fiscal chasm between the 1% and the 99% to turn into an interplanetary divide, because you give the backbone of the economy every incentive to put their money elsewhere or voluntarily trap their assets in an illiquid and hard to tax form. But there’s always a way out. It just takes some foresight and willpower, and we’ll dissect it in the conclusion of this series of posts tomorrow…

[ image: humanoid robot ]

With easy, cheap access to cloud computing, a number of popular artificial intelligence models computer scientists have wanted to put to the test for decades are now finally able to summon the necessary oomph to drive cars and perform sophisticated pattern recognition and classification tasks. With these new probabilistic approaches, we’re on the verge of having robotic assistants, soldiers, and software able to talk to us and help us process mountains of raw data based not on code we enter, but on the questions we ask as we play with the output. But with that immense power come potential dangers which have alarmed a noteworthy number of engineers and computer scientists, sending them wondering aloud how to build artificial minds that have values similar to ours and see the world enough like we do to avoid harming us by accident, or even worse, by their own independent decision after seeing us as being “in the way” of their task.

Their ideas on how to do that are quite sound, if exaggerated somewhat to catch the eye of the media and encourage interested non-experts to take this seriously, and they’re not thinking of some sort of Terminator-style or even Singularitarian scenario, but of how to educate an artificial intelligence on our human habits. But the flaw I see in their plans has nothing to do with how to train computers. Ultimately, an AI will do what its creator wills it to do. If its creator is hell bent on wreaking havoc, there’s nothing we can do other than stop him or her from creating it. We can’t assume that everyone wants a docile, friendly, helpful AI system. I’m sure they realize it, but all that I’ve found so far on the subject ignores bad actors. Perhaps it’s because they’re well aware that the technology itself is neutral and the intent of the user is everything. But it’s easier just to focus on technical safeguards than on how to stop criminals and megalomaniacs…

[ image: fish kung fu ]

Robots and software are steadily displacing more and more workers. We’ve known this for the last decade as automation picked up the pace and entire professions are facing obsolescence with the relentless march of the machines. But surely, there are safe, creative careers no robot would ever be able to do. Say for example, cooking. Can a machine write an original cookbook and create a step-by-step guide for another robot to perfectly replicate the recipe every time on demand? Oh, it can. Well, damn. There go line cooks at some point in the foreseeable future. Really, can any mass market job not somehow dealing with making, modifying, and maintaining our machines and software be safe from automation? Well, sadly, the answer to that question seems to be a pretty clear and resounding “no,” as we’ve started hooking up our robots to the cloud to finally free them of the computational limits that held them back from their full potential. But what does this mean for us? Do we have to build a new post-industrial society?

Over the last century or so, we’ve gotten used to a factory work model. We report to the office, the factory floor, or a work site, spend a certain number of hours doing the job, go home, then get up in the morning and do it all over again, day after day, year after year. We based virtually all of Western society on this work cycle. Now that an end to this is in sight, we don’t know how we’re going to deal with it. Not everybody can be an artisan or an artist, and not everyone can perform a task so specialized that building robots to do it instead would be too expensive, time consuming, and cost ineffective. What happens when robots build every house, dirt cheap RFID tags on products and cloud-based payment systems have made cashiers unnecessary, and smart kiosks and shelf-stocking robots have replaced the last retail odd job?

As a professional techie, I’m writing this from a rather privileged position. Jobs like mine can’t really go away since they’re responsible for the smarter software and hardware. There have been rumors about software that can write software and robots that can build other robots for years, and while we actually do have all this technology already, a steady expert hand is still a necessity, and always will be, since making these things is more of an art than a science. I can also see plenty of high end businesses and professions where human to human relationships are essential holding out just fine. But my concern is best summarized as First World nations turning into country-sized versions of San Francisco, a city in post-industrial times which doesn’t know how to adapt to a post-industrial future: massive income inequalities, insanely priced and seldom available housing, and a culture that encourages class-based self-segregation.

The only ways I see out of this dire future are either unrolling a wider social safety net (a political no-no that would never survive conservative fury), or making education cost almost nothing so workers can be retrained on the fly (a political win-win that never gets funded). We don’t really have much time left to debate this while doing nothing. This painful adjustment has been underway for more than five years now and we’ve been sitting on our hands letting it happen. It’s definitely very acute on the coasts, especially here on the West Coast, but it’s been making a mess out of the factories and suburbs of the Midwest and the South. When robots are writing cookbooks and making lobster bisque that even competition-winning chefs praise as superior to their own creations, it’s time to tackle this problem instead of just talking about how we’re going to talk about a solution.

[ illustration by Andre Kutscherauer ]

[ image: police graffiti ]

Ignorance of the law is no excuse, we’re told, when we try to defend ourselves by saying that we had no idea a law existed or worked the way it did after getting busted. But what if not even the courts actually know whether you broke a law or not, or the law is just so vague, or based on such erroneous ideas of what’s actually being regulated, that your punishment, if you would even be sentenced to one, is guaranteed to be more or less arbitrary? This is what an article over at the Atlantic about two cases taken on by the Supreme Court dives into, asking if there will be a decision that allows vague laws to be struck down as invalid because they can’t be properly enforced and rely on the courts to do lawmakers’ jobs. Yes, it’s the courts’ job to interpret the law, but if a law is so unclear that a room full of judges can’t agree what it’s actually trying to do and how, it would require legislating from the bench, a practice which runs afoul of the Constitution’s stern insistence on separation of powers in government.

Now, the article itself deals mostly with the question of how vague is too vague for a judge to be able to understand what a law really says, which, while important in its own right, is suited a lot better to a law or poli-sci blog than a pop science and tech one. But it also bumps into the way poor understanding of science and technology creates vague laws intended to prevent criminals from getting off on a technicality. Specifically, in the case of McFadden v. United States, lawmakers didn’t want someone who gets caught manufacturing and selling a designer drug to be able to admit that he does indeed make and sell it, yet walk because there’s one slight chemical difference between what’s made in his lab and the illegal substance, leaving the prosecutors pretty much no other choice but to drop the matter. So they created a law which says that a chemical substance “substantially similar” to something illegal is also, by default, illegal. Prosecutors now have legal leverage to bring a case, but chemists say they can now be charged with making an illegal drug on a whim if someone finds out that one of their compounds can be used to get high.

Think of it as the Drug War equivalent of a trial by the Food Babe: one property of a chemical, taken out of context and compared to a drug that has some similarity to the chemical in question in the eyes of the court, except instead of being flooded with angry tweets and Facebook messages from people who napped through their middle school chemistry, there’s decades of jail time to look forward to at the end of the whole thing. Scary, right? No wonder the Supreme Court wants to take another look at the law and possibly invalidate it. Making the Drug War even more expensive and filling jails with even more people would make it an even greater disaster than it has been already, especially now that you’re filling them with people who didn’t even know that they were breaking the law, sentenced by judges who were more worried about how they were going to get reelected than whether the law was sound and the punishment was fair and deserved. Contrary to the popular belief of angry mobs, you can get too tough on crime.

But if you think that because you’re not a chemist, you’re safe from this vague, predatory overreach, you are very wrong, especially if you’re in the tech field, specifically web development, if the Computer Fraud and Abuse Act, or the CFAA, has anything to say about it. Something as innocuous as a typo in the address bar discovering a security flaw which you report right away can land you in legal hot water under its American and international permutations. It’s the same law which may well have helped drive Aaron Swartz to suicide. And it gets even worse when a hack you find and want to disclose gives a major corporation grief. Under the CFAA, seeing data you weren’t supposed to see by design is a crime, even if you make no use of it and warn the gatekeepers that someone else could see it too. Technically, that data has to be involved in some commercial or financial activity to qualify as a violation of the law, but the vagueness of the act means that virtually all online activity could fall under this designation. So as it stands, the law gives companies legal cover to call finding their complete lack of any security a malicious, criminal activity.

And this is why so many people like me harp on the danger of letting lawyers go wild with laws, budgets, and goal-setting when it comes to science and technology. If they don’t understand a topic on which they’re legislating, or are outright antagonistic towards it, we get not just the typical setbacks to basic research and underfunded labs, but also laws born of a very strong desire to do something without enough understanding of the problem to deal with it in a sane and meaningful way. It’s true with chemistry, computers, and a whole host of other subjects requiring specialized knowledge that we apparently feel confident lawyers, business managers, and lifelong political operatives will be zapped with when they enter Congress. We can tell ourselves the comforting lie that surely they would consult someone before making these laws, since that’s the job, or we can look at the reality of what actually happens: lobbyists with pre-written bills and blind ambition result in laws that we can’t interpret or properly enforce, and which criminalize things that shouldn’t be illegal.

[ image: quantified self ]

With the explosion in fitness trackers and mobile apps that want to help manage everything from weight loss to pregnancy, there’s already a small panic brewing as technology critics worry that insurance companies will require you to wear devices that track your health, playing around with your premiums based on how well or how badly you take care of yourself. As the current leader of the reverse Singularitarians, Evgeny Morozov, argues, the new idea of the quantified self is a minefield being created with little thought about the consequences. Certainly there is a potential for abuse of very personal health metrics and Morozov is at his best when he explains how naive techno-utopians don’t understand how they come off, and how the reality of how their tools have been used in the wild differs drastically from their vision, so his fear is not completely unfounded or downright reflexive, like some of his latest pieces have been. But in the case of the quantified self idea being applied to our healthcare, the benefits are more likely to outweigh the risks.

One of the reasons why healthcare in the United States is so incredibly expensive is the lack of focus on preventive medicine. Health problems are allowed to fester until they become simply too bothersome to ignore, a battery of expensive tests is ordered, and usually expensive acute treatments are administered. Had they been caught in time, the treatments would not have to be so intensive, and if there were ample, trustworthy biometric information available to the attending doctors, there wouldn’t need to be as much testing to arrive at an accurate diagnosis. As many doctors grumble about oceans of paperwork, the logistics of testing, and the inability to really talk to patients in the standard 15 minute visit, why not use devices that would help with the paperwork and do a great deal of preliminary research for them before they ever see the patient? And yes, the devices would have to be able to gather data by themselves, because we often tell little white lies about how active we are and how well we eat, even when both we and our doctors know that we’re lying. And this only hurts us in the end by making the doctors’ work more difficult.

That brings us full circle to health insurance premiums and requirements to wear these devices to keep our coverage. Certainly it’s kind of creepy that there would be so much data about us so readily available to insurance companies, but here’s the thing: they already have this data from your doctors and can access it whenever they want in the course of processing your claim. With biometric trackers and loggers, they could do the smart and profitable thing and, instead of using a statistical model generated from a hodgepodge of claim notes, take advantage of the real time data coming in to send you to the doctor when a health problem is detected. They pay less for a less acute treatment plan, you feel healthier and have some peace of mind that you’re now less likely to be caught by surprise by some nasty disease or condition, and your premiums won’t be hiked as much, since the insurers now have higher margins and stave off rebellions from big and small companies who’ll now have more coverage choices built around smart health data. And all this isn’t even mentioning the bonanza for researchers and policy experts who can now get a big picture view from what would be the most massive health study ever conducted.

How many times have you read a study purporting to show the health benefits of eating berries and jogging one week, only to read another the next that promotes eating nuts and says jogging is pointless, with the different conclusions coming as a result of the different sample sizes and subjects involved in the studies? Well, here, scientists could collect tens of millions of anonymized records and do very thorough modeling based on uniform data sets from real people, and find out what actually works and for whom when it comes to achieving their fitness and weight loss goals. Couple more data and more intelligent policy with the potential for economic gain and the gamification offered by fitness trackers, and you end up with saner healthcare costs, a new focus on preventing and maintaining rather than diagnosing and treating, fewer sick days, and longer average lifespans as the side effect of being sick less often and encouraged to stay active and fit. That’s a very compelling argument for letting insurance companies put medical trackers on you and build a new business model around them and the data they collect. It will pay off in the long run.