Archives For technological singularity

[ image: giant robot ]

Personally, I’m a big fan of Ray Villard’s columns because he writes about the same kind of stuff that gets dissected on this blog and the kind of stuff I like to read. Since most of it is wonderfully and wildly speculative, I seldom find something with which to really disagree. But his latest foray into futurism, inspired by Cambridge University’s Centre for the Study of Existential Risk and its project to assess the danger artificial intelligence poses to us, is an exception to this rule. Roughly speaking, Ray takes John Good’s idea of humans designing robots better at making new robots than humans are, and runs with it to its darkest adaptations in futurist lore. His endgame? Galaxies ruled not by "thinking meat" but by immortal machinery which surpassed its squishy creators and built civilizations that dominated their home worlds and beyond. The cosmos, it seems, is destined to be in the cold, icy grip of intelligent machinery rather than a few clever space-faring species.

To cut straight to the heart of the matter, the notion that we’ll build robots better at making new and different robots than us is not an objective one. We can certainly build machines that have more efficient approaches and can mass-produce their new designs faster than we could. But when it comes to a nebulous notion like "better," we have to ask in what way. Over the last century, we’ve gotten really good at measuring how well we do in tasks like math, pattern recognition, or logic. With concrete answers to most problems in these categories, it’s fairly straightforward to administer a test heavily emphasizing these skills and compare the scores across the general populace. When dealing with things like creativity or social skills, matters are much harder to measure, and it’s easy to end up treating inconsequential things as if they were make-or-break metrics, or to give up on measuring them at all. And the difficulty only goes up when we consider context.

We can complicate the matter even further when we start taking who’s judging into account. To judges who aren’t very creative people and never have been, some robots’ designs might seem like feats beyond the limits of the human imagination. To a panel of artists and professional designers, a machine’s effort at creating other robots might seem nifty but predictable, or far too specialized for a particular task to be useful in more than one context. To a group of engineers, the ability to design just-for-the-job robots might seem like just the right mix of creativity and utility, even though they’d question whether this isn’t simply a wasteful design. If you’re starting to get fuzzy on this hypothetical design-by-machine concept, don’t worry. You’re supposed to be, since grading designs without very specific guidelines is basically a matter of personal taste and opinion, one where trying to inject objective criteria doesn’t help in the least. And yet the Singularitarians who run with Good’s idea expect us to assume that this will be an easy win for the machines.

This unshakable belief that computers are somehow destined to surpass us in all things as they get faster and have bigger hard drives is at the core of the Singularitarianism that gives us these dramatic visions of organic obsolescence and machine domination of the galaxy. But it’s wrong from the ground up because it equates processing power and complexity of programming with a number of cognitive abilities which can’t be objectively measured for our entire species. Humans are no match for machinery if we have to do millions of mathematical calculations or read a few thousand books in a matter of days. Machines are stronger, faster, and immune to things that’ll kill us in a heartbeat. But once we get past measuring FLOPS, upload rates, and spec sheets on industrial robots, how can we argue that robots will be more imaginative than us? How do we explain how they’ll get there in more than a few Singularitarian buzzwords that mean nothing in the world of computer science? We don’t even know what makes a human creative in a useful or appreciable way. How would we train a computer to replicate a feat we don’t understand?

[ illustration by Chester Chien ]


[ image: gynoid ]

The mindset of a Singularitarian is an interesting one. It’s certainly very optimistic, countering a lot of criticisms of their ideas by declaring that surely, someone will solve them with the mighty and omnipotent technology of the future, technology that pre-Singularity primitives like us won’t even be able to conceive of because we don’t understand their mythology of exponential growth in scientific sophistication. And it also holds some very strange ideas about computers, casting them as useful and powerful tools, our potential overlords, rogue agents to be tamed like pets, and new homes for our brains after our bodies are past their use-by date, all at the same time. Now, I’m not exactly surprised by this because the original concept of the Singularity, as detailed in a paper by Vernor Vinge, is pretty much all over the place, so overlap and conflicting opinions are pretty much inevitable as everyone tries to define what the Singularity really is and when it will arrive, generally settling on vague, almost meaningless cliches for the press.

But what does surprise me is how brazenly Singularitarians embrace the idea of a future where computers can and will do it all just by having more processing power or more efficient CPUs, on display in this H+ Magazine review of a transhumanist guide. While ruminating, in Q&A format, on the awesome things we’ll get to do with infinite technological prowess, the book’s author blithely dismisses the notion of using advanced cyborg technology for space exploration. According to him, we’ll have so much computing power available that we could simulate anything we wanted, making the notion of space exploration obsolete. In the words of Wolfgang Pauli, this isn’t even wrong. We have a lot of computational power available today, through a cloud or by assembling immense supercomputers with many thousands of cores and algorithms which can distribute the work to squeeze the most processing power out of them. All that power means squat, though, if it’s not used wisely, for instance, when it’s thrown at simulating things we know too little about to simulate.
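To be fair, spreading that work across cores really is the well-understood part. Here’s a minimal sketch of what such distribution looks like, using Python’s standard library; simulate_patch and its inputs are hypothetical stand-ins for a real planetary model, not anyone’s actual code:

```python
# A minimal sketch of farming simulation work out across CPU cores.
# simulate_patch is a hypothetical stand-in for a real planetary model.
from multiprocessing import Pool

def simulate_patch(params):
    """Pretend to model one patch of a planetary surface."""
    temperature, pressure = params
    # Placeholder arithmetic; a real model would integrate actual physics.
    return temperature * 0.9 + pressure * 0.1

if __name__ == "__main__":
    # A grid of made-up input conditions to distribute among the workers.
    grid = [(210.0 + i, 600.0 + j) for i in range(100) for j in range(100)]
    with Pool() as pool:  # one worker process per available core by default
        results = pool.map(simulate_patch, grid)
    print(f"simulated {len(results)} patches")
```

Adding cores makes the placeholder arithmetic finish sooner, but no amount of parallelism tells us whether the model bears any resemblance to the real Mars or Titan.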

How can we simulate Mars or Titan, and use those simulations as viable models for exploration, if we’re still not sure of their exact composition and natural processes? Look at the models we had for alien solar systems in the 1970s and how little resemblance they bear to what we’re actually seeing by exploring the cosmos. Instead of organizing into neat groups and orbits which look like slightly elongated circles, exoplanets are all over the place. We didn’t even think a Hot Jupiter was possible until we saw one, and even then, it took us years to confirm that they really do exist. And after all that, we also found that they appear to be rather common, making our solar system an outlier. Now, this may all change with new observations, of course, but the point is that we can’t simulate what we don’t know, and the only way to know is to go, look, experiment, and repeat the findings. Raw computing power is no substitute for a real world research program or genuine space exploration done by humans and machines.

The scary thing about this proposal though is that I’ve heard very similar views casually echoed by members of the Singularity Institute as well, and mentioned by transhumanists around the web while they disparage the future of human spaceflight. I’m a firm believer that if anything could qualify as a Singularity, it would be augmented humans living and working in space and carrying out complex engineering and scientific missions beyond Earth orbit. Considering what long term stays in microgravity and cosmic radiation do to the human body, augmentation of our future astronauts is just downright logical, especially because it could be put to great use, after it proves its worth, to help stroke and trauma victims regain control of their bodies, or to give them new limbs which would become permanent parts of them, not just prosthetics. Rather than run with the idea, however, a number of Singularitarians prefer to believe that magical computers endowed with powerful enough CPUs will just do everything for them, even their scientific research. That’s just intellectually lazy and a major disservice to their goal of merging with machines.

[ illustration by Oliver Wetter ]


[ image: Crysis cyborg ]

Ray Kurzweil, the tech prophet reporters love to quote when it comes to our coming immortality courtesy of incredible machines being invented as we speak, despite his rather sketchy track record of predicting long term tech trends, has a new book laying out the blueprint for reverse-engineering the human mind. You see, in Kurzweilian theories, being able to map out the human brain means that we’ll be able to create a digital version of it, doing away with the neurons and replacing them with their digital equivalents while preserving your unique sense of self. His new ideas are definitely a step in the right direction and are much improved from his original notions of mind uploading, the ones that triggered many a back and forth with the Singularity Institute’s fellows and fans on this blog. Unfortunately, as reviewers astutely note, his conception of how a brain works on a macro scale is still simplistic to a glaring fault, so instead of a theory of how an artificial mind based on our brains should work, he presents vague, hopeful overviews.

Here’s the problem. Using fMRI, we can identify what parts of the brain seem to be involved in a particular process. If we see a certain cortex light up every time we’re testing a very specific skill in every test subject, it’s probably a safe bet that this cortex has something to do with the skill in question. However, we can’t really say with 100% certainty that this cortex is responsible for this skill because it doesn’t work in a vacuum. There are tens of billions of neurons in the brain, and at any given time, the vast majority of them are doing something. It would seem bizarre to take the sort of skin-deep look that fMRI can offer and draw sweeping conclusions without taking the constantly buzzing brain cells around an active area into account. How involved are they? How deep does a particular thought process go? What other nodes are involved? How much of that activity is noise and how much is signal? We’re just not sure. Neurons are so numerous and so active that tracing the entire connectome is a daunting task, especially when we consider that every connectome is unique, albeit with very general similarities across species.

We know enough to point to areas we think play key roles, but we also know that areas can and do overlap, which means we don’t necessarily have the full picture of how the brain carries out complex processes. But that doesn’t give Kurzweil pause as he boldly tries to explain how a computer would handle some sort of classification or behavioral task, arguing that since the brain can be separated into sections, it should behave in much the same way. And since a brain and a computer could tackle the problem in a similar manner, he continues, we could swap out a certain part of the brain and replace it with a computer analog. This is how you would tend to go about doing something so complex in a sci-fi movie based on speculative articles about the inner workings of the brain, but certainly not how you’d actually do it in the real world, where brains are messy structures that evolved to be good at cognition, not to be compartmentalized machines with discrete problem-solving functions for each module. Just because they’ve been presented as such on a regular basis over the last few years doesn’t mean they are.

Reverse-engineering the brain would be an amazing feat and there’s certainly a lot of excellent neuroscience being done. But if anything, this new research shows how complex the mind really is and how erroneous it is to simply assume that an fMRI blotch tells us the whole story. Those who actually do the research and study cognition certainly understand the caveats in the basic maps of brain function used today, but a lot of popular, high profile neuroscience writers simply go for broke with bold, categorical statements about which part of the brain does what and how we could manipulate or even improve it, citing just a few still speculative studies in support. Kurzweil is no different. Backed with papers which describe something he can use in support of his view of the human brain as being just an imperfect analog computer defined by the genome, he gives his readers the impression that we know a lot more than we really do and can take steps beyond those we can realistically take. But then again, keep in mind that Kurzweil’s goal is to make it to the year 2045, when he believes computers will make humans immortal, and at 64, he’s certainly acutely aware of his own mortality and needs to stay optimistic about his future…


[ image: microcosm ]

If yours truly is supposedly a wet blanket when it comes to the long awaited and near imminent technocratic utopia of the post-Singularity world, Evgeny Morozov is a torrent from a firehose as he delivers one verbal body blow after another to the technocrats so vocally praised at TED conferences, and to the TED concept itself, railing on their abuse of technobabble with the same scorched earth approach I reserve for post-modernist woo. Using a slim e-book praising the idea of autocratic technocracy by two disciples of the Kurzweilian Singularity Gospel as a jump-off point, he tries to tear through the pompous overuse of unnecessary jargon and present the book and its promotion by TED as evidence that the technocrats who run it think they’re better than you. Front and center is this quote he chose to highlight…

Using technology to deliberate on matters of national importance, deliver public services, and incorporate citizen feedback may ultimately be a truer form of direct participation than a system of indirect representation and infrequent elections. Democracy depends on the participation of crowds, but doesn’t guarantee their wisdom. We cannot be afraid of technocracy when the alternative is the futile populism of Argentines, Hungarians, and Thais masquerading as democracy.

And you know what? I think I agree with his point. Even if we look past the double-speak in this proposal, which says that citizen feedback should be incorporated into a technocracy and yet discards it as quite possibly useless at the same time, we’re still left with the disturbing thesis that citizens living in a sovereign state and paying taxes for its upkeep shouldn’t have much of a say in how it’s run. Instead, they should just sit back and let the “smart people” — you can just imagine the author resisting the urge to add “to use a term the simpletons can understand” in the final draft — handle it. Seriously? No, no, back it up, back it in, where do we even begin with this mess? For the love of FSM’s left meatball, are these really the kind of people with whom my profession associates on a regular basis nowadays? Do people really think that those in the T part of STEM tend to agree with this outlook on democracy? I’d certainly hope not.

We could go out on a limb and say that yes, swaths of the voting public can be very ignorant and the choices they make can create gridlock in government. We could even argue further that the stagnation of the political process in the United States today is a real consequence of partisan zealots electing only the most hard-line politicians into power, so that rather than dealing with the pressing issues of the day, we’re blasted with the 24/7 blame game and political horse races. The partisans want new roads but they want someone else to pay for them; they want social safety nets if they’re disabled or laid off thanks to outsourcing, but they want to pay less in taxes to fund them while balking at the debt incurred by catering to their whims; and they want to cut crucial R&D budgets because they won’t even educate themselves on how little is actually spent on them. Yes, all of this is horrible, but how would taking away their right to vote make things any better, or ensure that the supposedly well-meaning technocrats at the top of the suggested New Technical Order won’t be ignorant about something crucial as well and make very poor decisions as a result, all the while smugly certain that they really know best?

Now, lest you think that the haughty Silicon Valley types who run ventures like TED to spread the message of the future as the inevitable Singularitarian utopia, in which the technocrats know all and see all, aren’t really as condescending as they come off, allow me to present the following snippet. It’s the TED organizers’ response to the question of why they wanted to publish short e-books rather than full tomes which advance a complete idea in sufficient detail to be dissected by those who actually make and study technology, rather than by buzzword-spewing think tank fellows who give themselves weighty titles with the word “technology” in them and have zero experience with anything technical outside their word processors and kitchen appliances.

When they launched their publishing venture, the TED organizers dismissed any concern that their books’ slim size would be dumbing us down. “Actually, we suspect people reading TED Books will be trading up rather than down. They’ll be reading a short, compelling book instead of browsing a magazine or doing crossword puzzles. Our goal is to make ideas accessible in a way that matches modern attention spans.”

There you have it folks. Elaborations on fluffy technobabble are simply too much for your puny little attention span. Why, you might just wander off like Nicholas Carr when he tries to read a book, so they’ll just talk down to you… err… I mean “engage” you at the level they think you can manage to grasp. Were they to actually detail their ideas down to how they see them being implemented, they might have to defend them from criticisms levied by people who actually understand how technology works and have spent time outside the recursive hype chamber of Silicon Valley. While disciplines like medicine, history, and physics have to deal with post-modernist pretension, the tech world’s curse is these vain, self-absorbed, arrogant, condescending meanderings of those who think that if only everyone did what they told them to do, the world would be a better place. It’s one thing to challenge ideas to which you’re opposed and advocate solutions based on your research. But demanding the power to do as you wish while deeming those you’d govern incapable of making good decisions just makes you a power-hungry dictator to those outside your circle of like-minded sycophants.


Nowadays, it seems like Ray Kurzweil is one of the most exciting people in tech, apparently warranting a big write-up of his predictions in Time Magazine and, despite his nearly religious view of technology, a place among the world’s most influential living atheists. And so, once in a while, we’re treated to a look at how well his predictions actually fared, often by those who’ve done very little research into the major disconnect between his seemingly successful predictions and reality. One of the latest iterations of almost suspiciously subtle praise for Kurzweil’s powers of prognostication, from TechVert’s JD Rucker, is a perfect example of exactly that, presenting an infographic with a track record of someone who seems to have nothing less than precognitive powers when it comes to the world of high tech. Though if you manage to catch the attribution at the bottom of the graphic itself, you’ll find that its source is none other than Ray and, once again, he’s giving a very, very generous reinterpretation of his predictions and omitting the myriad details he actually got wrong.

Remember when, last year, the much less lenient judges at IEEE Spectrum decided to put his predictions in their proper place and evaluate how what he actually said compares to what he claims he said when grading his own predictions in retrospect? Even when simply quoting obvious trends, his record tends to be quite mediocre: he starts out with a reasonable idea, such as that more and more computing will be mobile and web access will be close to ubiquitous, then starts adding in statements about brain implants, intelligent and web-enabled clothing, and other personal fantasies which are decades away from practical use, if they’ll ever be mass marketed in the first place. Then, he goes back and revises his own claims, as shown by the link above, claiming that he never actually said that computers as we know them would vanish by 2010 even though in his TED presentation he said it in pretty much those exact words. Along the way, he also held that by 2009, we would’ve adopted intelligent highways with self-piloting cars. Google’s autonomous vehicle guided by sensors and GPS is still just an experiment, and highways don’t manage their own traffic, unless a sign telling you about an accident or the travel time to an exit counts as high tech traffic management.

So were you to do a cold reading of technology’s future à la Kurzweil, just think big, make lots of claims, and if you get something rather obvious right, just forget all the other stuff you added to your prediction, and you too can be cited as a pioneer and visionary in fields you actually know little to nothing about, and said to have the kind of uncanny accuracy that makes everything you say compelling. You know, kind of like astrologers whose random wild guesses get edited down to just the vaguely right ones, graded with a lot of leeway whenever they manage to claim an accurate prediction. And hey, maybe your own evaluation of your own predictive powers can also be cited by particularly lazy writers as they gush about the fantastic world of tomorrow you’ve been promising in your countless speeches and articles. The speeches and articles with which you make a good chunk of cash by hawking everything from alkaline water and vitamins, for those who want to live long enough to see the Singularity, to classes at your own little university of futurism. Why study to be a real expert in AI or computing when you can just play one on TV and in the press? If anything, the pay is a lot better when you just talk about the promise of AI and intelligent machines rather than try to build them…


Since this blog is probably best known for its skeptical view of the strain of cyber-utopianism being promoted by professional technocrat, and apparently one of the world’s top atheists, Ray Kurzweil, it seems that I have to somehow note his appearance in Time Magazine and point out the numerous flaws in treating him like an immensely influential technology prophet with his finger on the pulse of the world of computer science. And it’s unfortunate that so many reporters seem to take him so seriously, because almost half a century ago, he was experimenting with some really cool machines and over the next few decades, came up with some interesting stuff. But for a while now, he’s been coasting on his own coattails, making grand pronouncements about areas of computer science in which he was never involved, and the reporters who profile him seem to think that if he could make a music machine in 1965, it must mean that he knows where the AI world is headed, forgetting that being an expert in one area of computer science doesn’t make you an expert in another. And so we’re treated to a breezy recitation of Kurzweil’s greatest hits which glosses over the numerous problems with his big plan for the world of 2045 with the good, old exponential advancement numerology that he loves to cite so often.

Again, there’s really nothing new here if you’re familiar with Kurzweil’s brand of transhumanism, just the same promises of mind uploading and digital immortality on the date predicted by the exponential chart that far too many devoted Singularitarians embrace with the same zeal as post-modernists latching onto every concept with the word "quantum" in it. Never mind that mind uploading would require the kind of mind-body dualism based on the religious concept of a soul rather than sound science, and that even if it were possible, there would be huge adjustments involved in the process. Never mind that no matter whether Kurzweil takes a thousand vitamins a day, his body will simply fall apart by 125 because evolution does not produce humans who aren’t subject to senescence. Never mind that new inventions can backfire or never find an outlet, and that the tech industry has been overpromising the benefits of what computers can do for nearly 50 years, always assuring us that the next generation of electronics would give us a nearly perfect world. Never mind that by now, more scholarly Singularitarians are trying to rein in Kurzweil’s hype while politely pointing out to whom we may want to listen instead. And never mind that Ray has a miserable record when it comes to predicting future trends in computing and technology, and constantly changes what he said after the fact to give everyone the impression that he actually knows what he says he does. We’re told that every challenge to his dream world of immortal humans who swap minds between machines is easily met by the march of tech progress, which will quickly add up to grant him his fantasies at just the right moment.

There’s really something borderline religious about Kurzweil’s approach to technology. He’s embraced it as his savior and his weapon for cheating death, and his devotion runs so deep, he even says that any threats from new technology could be countered with more and better technology. But technology is just a tool, the means to an end, not an end in and of itself. It’s not something to be tamed and worshipped like an elusive or mysterious creature that works in bizarre ways, and it doesn’t work on a schedule to give you what you want. It is what you make of it, and there are problems it can’t overcome because we don’t know the solutions. Sure, being able to live for hundreds of years sounds great. But all the medical technology in the world won’t help a researcher who doesn’t know why we age, exactly what needs to be fixed, or how to sufficiently and safely slow the aging process. Those kinds of discoveries aren’t made on schedule because they’re broad and very open-ended. Just saying that we’ll get there in 2030 because a chart you drew based on Moore’s Law, which was a marketing gimmick of Intel’s rather than an actual law of physics or technology, says so, is ridiculous to say the least. It’s like a theologian trying to calculate the day of the Rapture by digging through the Bible or Nostradamus’ quatrains. You can’t just commit scientists and researchers to work according to an arbitrary schedule so they can help you deal with your midlife crisis and come to terms with your own mortality. And yet, that’s exactly what Ray does, substituting confidence and technobabble for substance and attracting way too many reporters who just regurgitate what he says into their articles and call him a genius.

Here’s what will likely happen by 2045. We might live a little longer thanks to prosthetics and maybe artificial organs and valves which will replace some of our natural entrails when they start going out of order with age, and hopefully, better diet and exercise. We’ll have very basic AI which we’ll use to control large processes and train using genetic algorithms and artificial neural networks. We may even have a resurgence in space travel and be wondering about sending cyborgs into space for long term missions. We’ll probably have new and very interesting inventions and gadgets that we’ll consider vital for our future. But we’ll still inhabit our bodies, we’ll still die, and we’ll still find answers to our biggest and most important problems when we find them, not according to a schedule assembled by a tech shaman. Meanwhile, I’ll be an old fogey who used to write about how Singularitarians are getting way ahead of themselves, and I’ll have to face my own upcoming end as best I can, without dreaming of some magical technology that will swoop from the sky and save me because imaginary scientists are supposed to come up with it before I die and ensure my immortality. All I’ll be able to do is live out my life as best I can and try to do as many things as I want to do, hopefully leaving some kind of legacy for those who’ll come after me, or maybe a useful gadget or two with which they can also tinker. And if we manage to figure out how to stop aging before I’m gone for good, terrific. But I won’t bet on it.

[ illustration by Martin Lisec ]


Long time readers might remember the first (and so far only) megalomaniacal mad genius to appear on this blog, Dr. Steel. Oh sure, there’s been a fair share of megalomaniacs in the comments here and there, but few of those could qualify for the genius part of the title, so I’m omitting them from the count. But I digress. You see, part of Dr. Steel’s propaganda strategy involved public service announcements both to regular folks and to his minions, including this little gem predicting the imminent arrival of the Technological Singularity, in which he does his best impression of Ray Kurzweil if Ray had ambitions of global conquest. And frankly, I think it might be worth recruiting him as a spokesman for the Singularity Institute. The man’s basically Kurzweil 3.0.

Oh sure, there will be some glitches to overcome, like the whole notion of mind uploading being biologically unfounded and technically implausible, the inability of the human body to survive past 125 years no matter how many vitamins you take due to the way it’s built, the promising but nascent state of life extension, the very probable cognitive dissonance between real and virtual worlds which would make post-Singularity life in a computer very messy, the fact that the bold predictions being made about the future of technology are so often wrong, and of course, the very likely slew of problems we tend to fail to consider for new inventions, problems that will ensure that the groundbreaking technology we’re talking about never quite lives up to most of our grand ideas for it. But hey, it’s catchy and convincing, right? And it’s what helped Ray build a business out of the entire concept of one day maybe achieving immortality through the vaporware he exalts…


There’s an important note to make about transhumanism and Singularitarians. Despite the fact that the two ideas get tied at the hip by bloggers and reporters, because Kurzweil, the man they turn to in regard to both, embraces both concepts with equal enthusiasm, one doesn’t need to be a Singularitarian to be a transhumanist. A major focus for the former is the supposedly inevitable advancement of artificial intelligence to a superhuman level, a notion revered in the canon of the Singularity as a kind of Rapture during which clever machines take over from their human masters and remake the world in their own image. Since these clever machines are to be much smarter than their former human masters, a fair bit of Singularitarian bandwidth gets devoted to the idea of how to make sure that the coming machine overlords are friendly and like working with humans, often resulting in papers that frankly don’t make much sense to yours truly, for reasons covered previously. Yes, we don’t want runaway machines deciding that we really don’t need all that much electricity or water, but we’re probably not going to have to worry about random super-smart computers raising an army to dispose of us.

Keeping in mind the Singularitarian thought process, let’s take a look at a more general-level post written by someone most of my long time readers will likely recognize, the Singularity Institute’s Michael Anissimov. It’s basically a rumination on the challenges of corralling the coming superhuman intelligence explosion and as it floats off into the hypothetical future, it manages to hit all the high notes I’m used to hearing about AI and the kind of awkward shoehorning of evolution into technology we often get from pop sci evangelists. Right off the bat, Michael recycles a canard about human prehistory we now know to be rather inaccurate, framing a rise in intelligence in modern humans as our key to domination over all those other hominid species tens of thousands of years ago and trying to present us as the next Neanderthals who will eventually face far smarter and much more competitive superhuman robots who can outthink us in a millisecond…

Intelligence is the most powerful force in the universe that we know of, obviously the creation of a higher form of intelligence/power would represent a tremendous threat/opportunity to the lesser intelligences that come before it, and whose survival depends on the whims of the greater [form of] intelligence/power. The same thing happened with humans and the “lesser” hominids that we eliminated on the way to becoming the number one species on the planet.

Actually, about that. Modern humans didn’t so much eliminate all our competitors when we slowly made it up to the Middle East from North Africa and spread to Europe and Asia after the Toba eruption as outcompete them and interbreed with them, since we were close enough biologically to hybridize. In Europe, modern humans didn’t slay the Neanderthals and push them out to the Atlantic where they eventually died of starvation and low birth rates, as the popular narrative goes. We’re actually part Neanderthal, and they, by the way, weren’t hulking brutes of limited intelligence but quite clever hunters who showed signs of having rituals and appreciated tools and basic decorations. Modern humans seem to be more creative and curious, qualities a simple side by side comparison between us and our extinct or absorbed evolutionary cousins wouldn’t show as signs of super-intelligence, and we had a more varied diet, which was beneficial during harsh times. And as we move away from the typical half-guesses popularized around a century ago, we should be getting an appreciation of how complex and multifaceted cognition actually is, and of the fact that our intelligence isn’t measured in discrete levels that determined which hominids lived and which died. Just like all new branches of the tree of life, humans as we know them are hybrid creatures representing a long period of evolutionary churn.

So where does this leave the narrative of the next big leap in intelligence, superhuman machinery? Well, not on very firm ground. I’ve consistently asked for definitions of superhumanly intelligent machines and all of them seem to come down to doing everything humans do but faster, which seems like a better way to judge the intelligence of a clichéd "genius" on TV than actual cognitive skill. How fast you can solve a puzzle isn’t an indication of how smart you are. That’s demonstrated by whether you can solve the puzzle at all. I know there are some tasks with which I tend to slow down and take my time to make sure I get them done right. Does it mean that someone who performs the exact same task just as well in half the time is twice as smart as I am, even if we come up with the exact same results? According to some Singularitarians, yes. And what role does creativity play in all this? Some humans are highly inventive and constantly brimming with ideas. Others couldn’t even guess where to start modifying the dullest and simplest piece of paperwork. But somehow, say Singularitarians, a future computer array will have all that covered, and its creativity can take very sinister turns, turns that read as if they were lifted out of a Stephen King novel. In his post, Michael quotes oft-cited theorist Stephen Omohundro on the potentially nefarious nature of goal-driven AI…

Surely no harm could come from building a chess-playing robot, could it? In this paper we argue that such a robot will indeed be dangerous unless it is designed very carefully. Without special precautions, it will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else’s safety. These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.

Pardon me, but who the hell is building a non-military robot that refuses to shut itself off and tries to act like a virus, randomly making copies of itself? That’s not a chess-playing robot! That’s a freaking berserker drone on a rampage, requiring a violent intervention to stop! And here’s the thing: since I actually write code, I know that without me specifying how to avoid being shut down, robots can be turned off with a simple switch. For a machine to resist being turned off, it would have to modify its BIOS settings and programmatically override all commands for it to shut down. And since all the actions the robot was assigned, or learned via ANNs or some sort of genetic algorithms designed to gauge its performance at a task, take place at the application layer, a layer which interfaces with the hardware through a kernel with various drivers and sees the actual body of the robot as a series of abstractions, it wouldn’t even know about the BIOS settings without us telling it how to go and access them. I’d be a lot more afraid of the programmer than of the robot in Omohundro’s scenario, since this kind of coding could easily get people killed with no AI involved. So if anything, the example of a nefarious bot we’re given above is actually backwards. Without special instructions allowing it to resist human actions, we could always turn the machine off and do whatever we want with it.
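To make that concrete, here’s a minimal sketch, my own hypothetical Python, not anything from Omohundro’s paper, of a control loop running at the application layer. The only "resistance" it can muster is the signal handler a programmer explicitly gave it, and nothing in it survives a SIGKILL or a flipped power switch:

```python
# A hypothetical robot control loop. It can react to a polite shutdown
# request (SIGTERM) only because we explicitly registered a handler;
# SIGKILL and a pulled power cord bypass application code entirely.
import signal
import sys
import time

def on_sigterm(signum, frame):
    # Even this much "resistance" had to be deliberately programmed in.
    print("asked to shut down, exiting cleanly")
    sys.exit(0)

signal.signal(signal.SIGTERM, on_sigterm)

while True:
    # Stand-in for the robot's chess-playing duties. The loop sees its
    # hardware only through OS abstractions; it has no notion of the
    # BIOS, the power supply, or the switch on the wall.
    time.sleep(1.0)
```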

I’ve said it before and I’ll say it again. Artificial intelligence is not going to simply transcend human thought on a schedule, and the very best case scenario we can expect is a helpful supercomputer like GERTY, seen in the movie Moon. As its components were pieced together, we’d know how it was made and what it could do. Even the most sophisticated ANNs and genetic algorithms would still require training and could be analyzed after every iteration to see how things were coming along. All this talk of AGIs just deciding to modify their source code, only because they suddenly could by virtue of some unnamed future mechanisms, ignores most of the basic fundamentals of what code is, what code does, and how it’s ultimately compiled into executables. To make all this even more bizarre, Omohundro is an expert in computer science, yet what I’ve seen of his AGI-related work throws up red flag after red flag for me. It’s great that he was one of the developers of *Lisp and tried to merge functional languages and OOP in Sather, but that wasn’t exactly in the recent past, and what he says about artificial intelligence sounds more like his wish list from the late 1980s and early 1990s than how, and towards what, the field is actually progressing today. And it may be worth considering that black boxes containing whatever technology you need, magically, with no documentation or engineers to consult about their basics, aren’t a great premise for a paper on AI, especially when you’re trying to look into the near future.
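Since genetic algorithms get invoked so often as the road to runaway self-improvement, here’s a minimal, hypothetical example of one in Python, evolving a trivial bit string; notice how every generation can be paused, logged, and dissected, which is a far cry from a machine silently rewriting itself:

```python
# A deliberately tiny genetic algorithm: evolve a string of 20 ones.
# Every generation is fully inspectable; nothing here mutates its own code.
import random

TARGET = [1] * 20  # the made-up "task" the population is evolving toward

def fitness(genome):
    # Count the positions that already match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    # Nothing stops us from pausing right here and examining every genome.
    print(f"gen {generation:3d} best fitness {fitness(best)}/{len(TARGET)}")
    if fitness(best) == len(TARGET):
        break
    parents = population[:10]
    # Keep the best genome as-is (elitism) so progress never regresses.
    population = [best] + [mutate(random.choice(parents)) for _ in range(49)]
```

The trainer decides what gets selected, what gets mutated, and when to stop; the "creativity" never escapes the loop we wrote around it.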

[ illustration from a poster by Zelia ]


Personally, I’m not a fan of the whole religion-as-a-virus argument set forth by Craig James, for the very simple reason that he tries to boil a rather complex psychological and societal phenomenon down into a simple algorithm with a few fixed inputs and an output that’s almost always described as negative. Yes, there certainly are major, all too often violent downsides to religious fervor, but there’s more to religion than just a simple human impulse or an edict from an authority telling people what to believe and how. Of course, none of this means that there’s any validity to deities, but to simply call people’s beliefs a virus is reaching too far, even for an accommodationist-basher like myself. This is why, when, in the spirit of his approach to taking on religion, James decides to go Singularitarian and argue that once we’re immortal, faith will be obsolete, I have to call for a time out, both for oversimplifying why people join and stay in religious movements, and for giving some of Ray Kurzweil’s overly bold and often inaccurate predictions a shout-out in the service of atheism.

[ image: old man ]

Let’s start from the beginning. James’ thesis is that religion is like an entity which fulfills certain human needs and survives by mutating to appeal to our urge to feel special and mitigate our fears of death. Sounds fine so far, but what is he forgetting? If you think back to every introductory psychology class you’ve ever had, Maslow’s hierarchy of needs was always mentioned, and one important part of that hierarchy is belonging. People all too often join religious groups not so much because they have an unshakable faith in a certain deity or a polytheistic tradition, but because everyone around them belongs to the same churches, mosques, temples, covens, and what have you. We’re social mammals and a sense of community is important to us, which could explain one of the reasons religion evolved as a codified form of natural behaviors and radiated into all of its different forms. Were you to look at a map of religious affiliations, you’d find regions of Christians, Muslims, or Buddhists based on geopolitics and cultural hubs rather than a mish-mash of random religions scattered all across the world. A lot of the faithful go to church because it’s expected of them by members of their community, and in places where church attendance is considered less important, fewer people go.

Another important issue is that we’re biologically inclined towards a vague feeling of belief in something greater than ourselves, which I would suppose is one of the brain’s adaptations to living as a social mammal. Having to work with others to achieve big goals requires one to view oneself as part of something much bigger and more important than one’s immediate needs and wants. Now, we’re not predisposed towards any particular faith per se; that’s something that’s usually up to the community around us or a community that we’d like to join, but the level of predisposition towards belief can be changed by brain surgery and by aggressive, intensive, nonstop indoctrination, often conducted for very selfish reasons. Simply put, we’re wired for some kind of belief that there’s more to the world than just us, and we have a need to propagate our views and opinions because we often end up investing so much time and effort into them. Again, this doesn’t make any religious view more valid than another, and certainly doesn’t mean that the view in question is even correct, since much of religion is built on strong personal opinions and confirmation biases rather than reproducible evidence. If we want evidence we can test in a lab and answers to big questions about nature, we have science. But there is that nagging sense of wanting to be a part of something big, a sense that has to be satisfied.

For atheists, neither desire has gone away. We try to form communities and gather into groups that share the same ideas, and we see ourselves as part of a vast universe, privileged to be here by chance and evolution. I would argue that we have good reason for how we see ourselves in the grand scheme of things and plenty of evidence to back up our position. But the point is that we still need to satisfy our basic needs to join a community and play a part in something bigger than ourselves. Even if one day we manage to live as long as we want and never have to fear death, these urges won’t go away and we’ll find something to replace existing religions. We may turn the idea of ancient astronauts and alien gods into a new, mainstream faith, although I’m really hoping that we won’t adopt the Scientology variant of it. We would also have to deal with those who would refuse to do whatever it would take to become immortal, protesting the very idea as an abomination, since their religion tells them we have to die at some point. But no, religion won’t go away just because we may one day have the privilege of unlimited lifespans through cutting edge medicine and technology.

Also, contrary to what James says, the odds of the first immortal being alive today are infinitesimal to nil. Life extension will thrive eventually, and declaring it dead on arrival is premature at best, but the only place where humanity is even close to unlimited life is in Kurzweil’s fantasies and numerological charts, so invoking his ideas for some sort of rhetorical blow to religion is simply not sound in any way, shape, or form. Actually, it’s countering religious tenets with almost pseudo-religious techno-utopianism based on wishful thinking and a belief that technology will solve all of our problems according to a timeline we find convenient. Really, there’s a reason why a number of prominent transhumanists are pulling back from Ray and his prognostications, and James’ education in computer science should’ve rung a few alarm bells when he read the books…


The Singularity Institute’s media director, Michael Anissimov, is apparently fed up with transhumanists whose desperate focus on turning into immortal robots was recently satirized on prime time TV, and wrote the kind of lengthy and detailed rebuke to their worldview you might easily expect from me on his blog, even citing a key point about the future of cyborgs I’ve been emphasizing to the Kurzweilian crowd. I wouldn’t say I’m all that surprised that Michael and I see the situation in very similar ways, because where transhumanists and I tend to disagree is on implementation details rather than overall principle, and I’m not going to claim that this post is some sort of sign of a schism between Singularitarians and Kurzweilian transhumanists, because it isn’t an official position paper from The Institute, just Michael’s opinion. But I can’t resist pointing back to what may be attempts to cool overzealous disciples of the Nerd Rapture from the scholarly side of the movement.

As mentioned before, big name transhumanists are very politely distancing themselves from Kurzweil, and as they sing praises to the man seen as the Singularitarian-in-Chief by the media, they also revise his claims and provide far more realistic and academic overviews of his more outlandish sound bites, like his notion of reverse-engineering the human brain in a million lines of code. And now, after reading an op-ed by one very enthusiastic and overly optimistic transhumanist, Michael suddenly left his usual script of saying that while a myriad of problems still have to be resolved for complex technologies to work, scientists and institutions are aware of them and are trying to fix them, and let loose with a stern dressing down of those who down enough supplements to ensure a happy and healthy life for a mature bull elephant in hopes of living forever once they become superhuman. His advice? Work out. Go for a run or a hike. Sign up for cryonics if you want to. An almost worshipful reliance on technology to solve all your problems is unrealistic until we can build nanotech that will manipulate our bodies on a molecular level. Though it’s very unlikely we could ever manipulate living organisms, or anything else, on a strictly molecular level with nanotechnology, due to the limits imposed on us by physics and the cost of making trillions of such complex machines, the rest is all sound advice.

While we should also note that nanotechnology would be just the first step in radical human enhancements, and that the kind of enhancements we’re talking about may remain science fiction for almost a century, this highly grounded strain of transhumanism being voiced more and more often sounds encouraging. Sure, I still have my doubts about a general artificial intelligence system and really don’t see why we should build one, but this is miles better than the typical proclamations I was seeing on a regular basis two years ago, saturated with an obsession with reaching digital immortality by 2045 and fury at those who note why mind uploading won’t be a viable means of achieving that goal. If George and Michael keep this up, I could end up finding less and less material for those classic WoWT-goes-to-town-on-Singularitarians posts. But I’d be willing to mark that down as a positive, especially after having my posts included in Singularitarian debates along the way…

[ illustration by John Liberto ]
