

Stop me if you’ve heard any of this before. As computers keep getting faster and more powerful and robots keep advancing at a breakneck pace, most human jobs will become obsolete. But instead of simply being pink-slipped, humans will get brand new jobs which pay better and leave them plenty of free time to enjoy the products of our civilization’s robotic workforce, create, and invent. It’s a futuristic dream that’s been around for almost a century in one form or another, and it has been given an update in the latest issue of Wired. Robots will take our jobs and we should welcome it because we’ll eliminate grunt work in favor of more creative pursuits, say today’s tech prophets, and in a way they’re right. Automation is one of the biggest reasons why a lot of people can’t go out and get jobs that were once plentiful, and why companies are bringing in more revenue with far fewer workers. Machines have effectively eliminated millions of jobs.

When we get to the second part of this techno-utopian prediction, however, things aren’t exactly rosy. Yes, new and higher-paying jobs have emerged, especially in IT, but they’re closed to a lot of people who simply don’t have the skills to do them, or for whom no position exists in their geographical vicinity. Automation doesn’t just mean that humans get bumped up from an obsolete job; it means there are fewer jobs overall for humans. And when it comes to positions where dealing with reams of paperwork and mundane office tasks is the order of the day, filling them with computers and robots eliminates the internships college students and young grads use to build up a resume and get their feet in the door. They’re now stuck in a Catch-22 where they can’t get experience, and more education only puts them further behind, thanks to a machine. I’m going to go out on a limb and say that this is not what the techno-utopians had in mind.

Of course humans will have to move up into more abstract and creative jobs where robots have no hope of ever competing with them; otherwise the economy will collapse as automated factory after automated factory churns out trillions of dollars worth of goods that no one can buy, since some 70% of the population no longer has a job. And at 70% unemployment, every last horrible possibility that sends societal collapse survivalists screaming themselves awake at night has a high enough chance of happening that yours truly would also start seriously considering gun hoarding and food stockpiling as really good hobbies. Basically, failing to adjust to the growing cybernetic sector of the workforce simply isn’t an option. No company, no matter how multinational, could eliminate so many positions with no replacements in sight without eventually feeling the economic pain as it hits maximum market saturation and can go no further, because no one can buy its wares.

But all this good news aside, just because we’ll have time to adjust to an ever more automated economy, and feel the need to do so, doesn’t mean that the transition will be easy or that no one will be left behind. Without a coordinated effort by wealthy nations to change the incentives they give their companies and educational institutions, we’ll be forced to ride out a series of massive recessions in which millions of jobs are shed, relatively few are replaced, and the job markets are slowly rebuilt around new careers, because a large chunk of the jobs lost have been handed off to machines or made obsolete by an industry’s contraction after the crisis. And this means that when facing the machine takeover of the economy, we have two realistic choices. The first is to adapt by taking action now and bringing education and economic incentives in line with what the post-industrial markets are likely to become. The second is to try to ride out the coming storm, adapting in a very economically painful, ad hoc manner through cyclical recessions. Contrary to what we’re being told, the new, post-machine jobs won’t just naturally appear on their own…


The UK’s Royal Academy of Engineering recently published a report on the social and legal implications of using more and more automated systems in our daily lives. Among its main questions: who should be held responsible if one of these systems malfunctions with a lethal outcome? Is the machine its own entity that’s learning and making bad decisions as it does? Are the coders and designers to blame for its glitches? Do we need to separate machines from the humans who build them, and if so, when? Can you haul a robot to court and charge it with negligent homicide or manslaughter?


Admittedly, when it comes to dealing with computer systems, my approach doesn’t linger in the realm of the theoretical. In fact, I’m not a huge fan of reports and essays like this because there’s a very fine line after which ideas about future technologies become philosophical navel-gazing and we lose focus on how these systems will be designed, developed, tested and implemented. Practical brainstorming sessions identify problems and come up with creative solutions that are then critiqued for feasibility, and they give us new software, updates to systems we’re outgrowing, and hardware that meets our demands. Theoretical philosophy tends to inspire transhumanism and Singularitarianism.

So, from a practical standpoint, who should be responsible if an automated system makes an error and kills someone? Any automated system has to use a software package to function, and any software is a set of instructions carried out according to predetermined rules. You can update those rules as much as you’d like, but every time the software needs to carry out an operation, it will do so according to the rules given to it by human developers. And when software makes mistakes, it does so because it trips up in the logic built into it, or lacks a rule to deal with something new and throws an exception. That means any malfunction has to be attributed to the system’s development team, and they’re the ones who have to fix it.
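To make that concrete, here’s a minimal sketch in Python of what “a set of instructions carried out according to predetermined rules” looks like, and how a case no human anticipated ends in an exception rather than in machine judgment. Every name here is invented for illustration, not taken from any real system:

```python
# Hypothetical illustration: an automated controller is just rules
# written by humans, and an input no rule covers ends in an error,
# not in independent machine judgment.

class UnhandledConditionError(Exception):
    """Raised when no human-written rule covers the input."""

# The "predetermined rules": each condition maps to a developer-chosen action.
RULES = {
    "obstacle_ahead": "brake",
    "lane_clear": "proceed",
    "low_fuel": "route_to_station",
}

def decide(condition: str) -> str:
    """Carry out an operation strictly according to the rules given above."""
    try:
        return RULES[condition]
    except KeyError:
        # The system isn't "thinking" about the novel case; it simply has
        # no instruction for it, so responsibility for the gap traces back
        # to the development team.
        raise UnhandledConditionError(f"no rule for {condition!r}")

print(decide("obstacle_ahead"))  # "brake"
try:
    decide("black_ice")
except UnhandledConditionError as err:
    print(err)                   # no rule for 'black_ice'
```

However sophisticated the rule set gets, the failure mode is the same: either a rule encodes the wrong logic, or no rule exists at all, and both trace back to the humans who wrote them.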

Computers can’t think for themselves. In fact, they never think, and require designers and developers to do it for them. This is why we humans install cutoff switches, emergency overrides and manual controls. With every new technology, it’s our job to know that we may have to jump in, take over from the machines, and solve a problem ourselves. To allow a computer complete and absolute control of anything and everything, without an option for human intervention, is just begging for trouble. In the worst case scenario, the end result is unintentional suicide by robot. This is why you’re not seeing too many drivers warm up to the persistent ideas for self-driving cars, and why people like me frown on vehicles that don’t let you turn off a feature that can be a bother in an otherwise great vehicle. It’s not because we’re technophobes. It’s because we know full well how machines can fail where we could make the right decision and avoid an accident.
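As a sketch of that design principle, here’s roughly what a cutoff switch might look like in code, again in Python and again with every name invented for illustration: the automation follows its programmed rules only until a human flips the override, at which point the human decision wins unconditionally.

```python
# Hypothetical sketch of the cutoff-switch principle: automation acts
# only until a human takes over, and the human's command always wins.

from typing import Optional

class Controller:
    def __init__(self) -> None:
        self.human_override = False  # the "cutoff switch"

    def engage_override(self) -> None:
        """Human jumps in; the automation stands down."""
        self.human_override = True

    def command(self, autopilot_action: str,
                manual_action: Optional[str] = None) -> str:
        if self.human_override and manual_action is not None:
            return manual_action     # human intervention wins
        return autopilot_action      # otherwise follow the programmed rules

car = Controller()
print(car.command("maintain_speed"))            # automation in charge
car.engage_override()
print(car.command("maintain_speed", "brake"))   # human takes over
```

The point of the pattern is that the escape hatch is built in from day one; a system designed without that `engage_override` path is the one that’s begging for trouble.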

But wait a second, you may ask, what about AI? What happens in the future, when computers can think for themselves? As I’ve written numerous times, the hurdles to actual, unaided, creative problem solving from a box of plastic and silicon are immense. Any AI system will have to be developed in small stages, over many decades of work, and considering how real software is developed, it will be designed for a very specialized set of tasks. No BA or developer should be crazy enough to suddenly decide to plunk billions of dollars and decades of time into a real-world application that does anything and everything under the sun. That’s the IT equivalent of a Sisyphean task. At best, it would be a needless technical curiosity with a list of maintenance tasks that would keep the project gushing red ink.

Experimenting with AI software is great for academia, but don’t think it will be developed anytime soon or that it will be plugged into the world’s technical infrastructure on completion. Most likely, academic developments in creative computing will be spun off to solve very task-specific problems because, remember, the goal of software design is to make a system that solves a certain problem, not an OmniApp. And that means any future systems that try to think for themselves, within parameters given to them by their development teams, will come with a lot of documentation, and will be mapped and scrutinized so thoroughly that we’ll know how to deal with them.

Contrary to what theoreticians seem to worry, building automated systems in the real world is a very intensive process in which things like artificial consciousness wouldn’t just slip by during the QA process and suddenly emerge out of nowhere. The algorithms for it would have to be purposefully built in from the start and come with a set of business rules and functional requirements. And those are a lot more specific than just “build a self-aware machine that tells great knock-knock jokes and runs the planet’s energy grids.”
