Archives For futurism

blue planet

For just a moment, let’s pretend that we solve the controversial legal issues that surround how and if we’ll mine asteroids in the near future, and have managed to expand our way into space faring cyborgs with warp drives capable of shuttling us from solar system to solar system in an acceptable amount of time. Over thousands of years, we’d have visited countless planets in our post-scarcity futuristic pseudo-utopia, and those with the means might ask themselves whether it would be a good investment to buy an entire world. You know, much the same way people buy expensive houses and private islands today. How much would something like that run a tycoon in the far future? Obviously it would have to be some insane amount of galactic credits. Several asteroids we’d like to mine are worth tens of trillions of dollars in today’s cash. Typical, smallish, rocky planets like ours are ten orders of magnitude larger or so, and even with fewer easy-to-access resources due to their molten innards, they should cost tens of septillions of dollars, right?

Seems a little simplistic, don’t you think? Remember that when you’re out shopping for an alien planet, you’re already living in a post-scarcity world with 3D printers ready to create your cities, infrastructure, and anything else you need at a moment’s notice. And settling on other worlds would mean that you have to be extremely self-sufficient, needing little more than access to interstellar communication networks and the portable power supplies which allowed you to cross the vast distances between solar systems, living easily off the land. That means not much mining is going to get done on your new world, and the lack of demand means lower prices. What good is a million tons of gold if no one wants it or needs it? And if no one needs it, no one should be charging you for it, especially when you’re just going to extract the little bit of resources you need, as you need them, on your own. With resource values now out of the equation, what exactly would influence how much a planet is worth? What the previous owners left behind?

Well, it may just come down to the same three most important things in real estate prices back on our boring little home world: location, location, and location. How close is the planet you want to buy to hubs of civilization? Can you invite people on vacations, or safaris in alien jungles, or get scientists to excavate the ruins of a long gone extraterrestrial civilization? Does your new world offer some sort of gateway to other star systems, the last place to refuel and patch up a ship in the next few months or years of travel? Are there pretty views of the Milky Way in the night sky, and magnificent oceans you can explore? Those are likely to be the things by which a species that can travel to other worlds will judge how much a planet is worth, rather than the value of what’s there to be mined or otherwise extracted. Still, considering how many people there will be when we’re spread across the stars, and how many of them will be doing something akin to a normal job today since all the machinery they will depend on won’t maintain itself, it’s likely that planets will be a super-luxury investment for the future top 0.1% who own the rights and blueprints to all of the technology making space exploration on an interstellar scale possible…

android mind

For those who are convinced that one day we can upload our minds to a computer and emulate the artificial immortality of Ultron in the finest traditions of comic book science, there are a number of planned experiments which claim to have the potential to digitally reanimate brains from very thorough maps of neuron connections. They’re based on Ray Kurzweil’s theory of the mind: we are simply the sum total of the neural network in our brain, and if we can capture it, we can build a viable digital analog that should think, act, and sound like us. Basically, the general plot of last year’s Johnny Depp flop Transcendence wasn’t built around something a room of studio writers dreamed up over a very productive lunch, but around a very real idea which some people are taking seriously enough to use it to plan the fate of their bodies and minds after death. Those who are dying are now finding some comfort in the idea that they can be brought back to life should any of these experiments succeed, and reunited with the loved ones they’re leaving behind.

In both industry and academia, it can be really easy to forget that the bleeding edge technology you study and promote can have a very real effect on very real people’s lives. Cancer patients, those with debilitating injuries that will drastically shorten their lives, and people whose genetics conspired to make their bodies fail them, are starting to make decisions based on the promises spread by the media on behalf of self-styled tech prophets. For years, I’ve been writing posts and articles explaining exactly why many of these promises are poorly formed ideas that lack the requisite understanding of the problem they claim to know how to solve. And it is still very much the case, as neuroscientist Michael Hendricks felt compelled to detail for MIT in response to the New York Times feature on whole brain emulation. His argument is a solid one, based on an actual attempt to emulate a brain we understand inside and out in an organism we have mapped from its skin down to the individual codon: the humble nematode worm.

Essentially, Hendricks says that to digitally emulate the brain of a nematode, we need to realize that its mind still has thousands of constant, ongoing chemical reactions in addition to the flows of electrical pulses through its neurons. We don’t know how to model them or the exact effect they have on the worm’s cognition, and even with the entire immaculately accurate connectome at hand, we’re still missing a great deal of the information needed to start emulating its brain. But why should we need all that information, you ask? Can’t we just build a proper artificial neural network reflecting the nematode connectome and fire it up? After all, if we know how the information will navigate its brain and what all the neurons do, couldn’t we have something up and running? To add to Hendricks’ argument that the structure of the brain itself is only a part of what makes individuals who they are and how they work, allow me to point out that this is simply not how a digital neural network is supposed to function, despite being constantly compared to our neurons.

Artificial neural networks are mechanisms for implementing a mathematical formula for learning an unfamiliar task in the language of propositional logic. In essence, you define the problem space and the expected outcomes, then allow the network to weigh the inputs and guess its way to an acceptable solution. You could say that’s how our brains work too, but you’d be wrong. There are parts of our brain that deal with high level logic, like the prefrontal cortex, which helps you make decisions about what to do in certain situations, that is, deal with executive functions. But unlike in artificial neural networks, there are countless chemical reactions involved, reactions which warp how the information is being processed. Being hungry, sleepy, tired, aroused, sick, happy, and so on, and so forth, can make the same set of connections produce different outputs from very similar inputs. Ever helped a friend with something again and again until one day you got fed up with being constantly pestered, started a fight, and ended the friendship? Humans do that. Social animals can do that. Computers never could.
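To make the artificial side of that contrast concrete, here is a minimal sketch of the workflow described above, assuming only numpy; the toy task (XOR) and every number in it are illustrative, not anything from the research being discussed. You define the problem space and the expected outcomes, and the code nudges its weights until its guesses are close enough.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # the problem space
y = np.array([[0], [1], [1], [0]], dtype=float)              # the expected outcomes (XOR)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights and biases
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights and biases
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                     # forward pass: weigh the inputs
    out = sigmoid(h @ W2 + b2)
    d_out = (y - out) * out * (1 - out)          # how wrong each guess was
    d_h = (d_out @ W2.T) * h * (1 - h)           # push the blame back one layer
    W2 += 0.5 * h.T @ d_out;  b2 += 0.5 * d_out.sum(axis=0)
    W1 += 0.5 * X.T @ d_h;    b1 += 0.5 * d_h.sum(axis=0)

# After training, the guesses usually settle near [0, 1, 1, 0]
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

That is the whole trick: a fixed structure, statistical weights, and an error signal, with no hunger, fatigue, or mood anywhere in the loop.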

You see, your connectome doesn’t implement propositional calculus; it’s a constantly changing infrastructure for exchanging basic signals, deeply affected by training, injury, your overall health, your memories, and the complex flow of neurotransmitters floating between neurons. If you brought me a connectome, even for a tiny nematode, and told me to set up an artificial neural network that captures these relationships, I’m sure it would be possible to draw up something in a bit of custom code, but what exactly would the result be? How do I encode plasticity? How do we define each neuron’s statistical weight if we’re missing the chemical reactions affecting it? Is there a variation in the neurotransmitters we’d have to simulate as well, and if so, what would it be and to which neurotransmitters would it apply? It’s like trying to rebuild a city with only the road map, no buildings, people, cars, trucks, or businesses included, then expecting artificial traffic patterns to recreate all the dynamics of the city whose road map you digitized, with pretty much no room for entropy because it could easily break down the simulation over time. You would be both running the neural network and training it at the same time, something it’s really not meant to do.
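And here is a sketch of what “just wire up the connectome” amounts to in code, with a made-up three-row connectome standing in for the real map; the specifics are all placeholders, and the point is how much of the worm simply has no place in this picture.

```python
import numpy as np

N_NEURONS = 302                        # the nematode's full complement
# Placeholder rows of (pre, post, contact_count); a real map has thousands of them.
connectome = [(0, 5, 3), (5, 17, 1), (17, 42, 2)]

W = np.zeros((N_NEURONS, N_NEURONS))
for pre, post, contacts in connectome:
    W[pre, post] = contacts            # using contact count as a "weight" is already a guess

state = np.zeros(N_NEURONS)
state[0] = 1.0                         # poke a sensory neuron

for _ in range(10):
    state = np.tanh(W.T @ state)       # propagate activity along the static wiring
    # Nowhere in W: excitatory vs. inhibitory synapses, gap junctions vs. chemical
    # ones, extrasynaptic neuromodulators, plasticity, or the body and environment
    # feeding back into the loop, which is exactly the missing information above.
```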

The bottom line here is that synthetic minds, even once capable of hot-swapping newly trained networks in place of existing ones, are not going to be the same as organic ones. What a great deal of transhumanists refuse to accept is that the substrate in which the computing is done — and they will define what the mind does as computing — actually matters quite a bit, because it allows the information to flow at different rates and in different ways than another substrate would. We can put something from a connectome into a computer, but what comes out will not be what we put into it. It will be something new, something different, because we put just a part of it into a machine and naively expected the code to make up for all the gaps. And that’s the best case scenario, with a nematode and 302 neurons. Humans have 86 billion. Even if we don’t need the majority of those neurons to be emulated, the point is that whatever problems you’ll have with a virtual nematode brain will be many orders of magnitude worse in a virtual human one, as added size and complexity create new problems. In short, whole brain emulation as a means for digital immortality may work in comic books, but definitely not in the real world.

astronaut on mars

Astrobiologist Jacob Haqq-Misra likes to ask questions about our future in space. If you’ve been following this blog for a long time and the name seems familiar, it’s because you’ve read a take on a paper regarding the Fermi Paradox he co-authored. But this time, instead of looking at the dynamics of an alien civilization in the near future, he turned his eye towards ours by asking if it would be beneficial for astronauts we will one day send to Mars to create their own government and legally become extraterrestrial citizens from the start. At its heart, it’s not a really outlandish notion at all, and in fact, I’ve previously argued that it’s inevitable that deep space exploration is going to splinter humanity into independent, autonomous territories. Even further, unless we’ve been able to build warp drives to travel faster than light and abuse some quantum shenanigans to break the laws of physics and communicate instantaneously, colonists on far off worlds would eventually become not just different cultures and nations, but different species altogether.

However, the time scales for that are thousands to hundreds of thousands of years, while the plans for an independent Mars advanced by Haqq-Misra are on the order of decades. And that’s very problematic because the first Martian colonies are not going to be self-sustaining. While they’d be claiming their independence, they’d also be bankrolled and logistically supported by Earth until the time when they can become fully self-sufficient. Obviously that’s the goal, to travel light and live off the land once you get there, but laying the basic infrastructure for making that happen in an alien wilderness where no terrestrial life can exist on its own requires a lot of initial buildup. And under three out of the five main provisions of what I’m calling the Haqq-Misra Mars Charter, the relationship between the colonists and Earth would be parasitic at best, violating international laws on similar matters, and ultimately restricting the colony’s growth and future prospects.

For example, under the charter, every piece of technology sent to Mars becomes Martian property in perpetuity and cannot be taken back. What if this technology is software, updated over the steady internet connection NASA is planning for communication between the two worlds? Will some Martian patent trolls start suing Earthly companies for not handing over the rights to their digital assets? Not only that, but if a Martian pays for this software, he or she is in violation of a trade prohibition between the planets. That’s right, no commerce would be allowed, and neither would input on scientific research that the Martians feel infringes on their right to run their world as they see fit. In other words, Earth is expected to shell out cash, send free technology, write a lot of free software stuck in legal limbo, and keep its opinions to itself. This does not sound like setting up a new civilization as much as it sounds like enabling a freeloader. Any even remotely plausible Martian colony will have to pay its own way with technology and research traded with Earth on an open market. That’s the only way it will become independent quickly.

And of course there’s the provision that no human may lay claim to Martian territory. However, should the colonies lack a sufficiently strong armed force, their ability to enforce this provision would be pretty much nonexistent. Sovereign territory takes force projection to stay that way, so what this provision would really do is create an incentive for military buildup in space as soon as we set foot on Mars. Considering that the top three space powers which will be capable of a human landing on another world in the foreseeable future currently have strained relations, that is not something to take lightly. Runaway military buildup gave us space travel in the first place. It can change the world again just as quickly. And I can assure you that no nation in the world will be just fine with heavily armed extraterrestrial freeloaders with whom it can’t engage, sustained by a lot of resources these countries have to provide on a regular basis to keep them going. There isn’t going to be the war for Martian independence that Haqq-Misra wants to avoid, but there may be one of Martian annexation. And probably a fairly short war at that, once the troops land.

Now, all that said, after a century of colonies, terraforming attempts, and several generations of colonists who know Mars as their home, I can definitely see the planet turning independent. It’s going to have the self-sufficiency, economy, and culture to do so, and that culture isn’t going to be created ex nihilo, as Haqq-Misra is hoping to force by declaring astronauts Martians with their first step on alien soil. They will be speaking with Earth daily, many will identify with their nations of origin and their cultures, and it’s all going to take a long time to gel into something a future researcher can call uniquely Martian. And what it will ultimately mean to be a Martian will be shaped by two-way interactions with those on Earth, not by forced isolation which could give megalomaniacs a chance to create a nation they could subjugate, or utopians a chance to build an alien commune with all the consequences that would entail, while the people who could help, or give a group of critics a means to be heard, are legally required to stay out of the way. But the bottom line is that we need to learn to thrive on Mars and spend a great deal of time there before even thinking of making it its own autonomous territory. It will happen, just not anytime soon.

sci-fi plane

Now, I don’t mean to alarm you, but if Boeing is serious about its idea for the fusion powered jet engine and puts it into a commercial airplane in the near future more or less as it is now, you’re probably going to be killed when it’s turned on as the plane gets ready to taxi. How exactly your life will end is a matter of debate, really. The most obvious way is being poisoned by a shower of stray neutrons and electrons emanating from the fusion process, and from the fissile shielding which would absorb some of the neutrons and start a chain reaction much like in a commercial fission plant, but with basically nothing between you and the radiation. If you want to know exactly what that would do to your body, and want to lose sleep for a few days, simply do a search — and for the love of all things Noodly not an image search, anything but that — for Hisashi Ouchi. Another way would be a swift crash landing after the initial reaction gets the plane airborne but just can’t continue consistently enough to stay in the air. A third involves electrical components fried by a steady radioactive onslaught giving out mid-flight. I could go on and on, but you get the point.

Of course this assumes that Boeing would actually build such a jet engine, which is pretty much impossible without some absolutely amazing breakthroughs in physics and material sciences, and a subsequent miniaturization of all these huge leaps into something that will fit into commercial jet engines. While you may have seen something the size of an NYC or San Francisco studio apartment on the side of each wing on planes that routinely cross oceans, that’s not nearly enough space for even one component of Boeing’s fusion engine. It would be like planning, back in 1952, to stuff one of the very first computers into something the size of a Raspberry Pi, when we theoretically knew that we should be able to do it someday, but had no idea how. We know that fusion should work. It’s basically the predominant high energy reaction in the universe. But we just can’t scale it down until we figure out how to negotiate turbulent plasma streams and charged particles repelling each other in the early stages of ignition. Right now, we can mostly recoup the energy from the initial laser bursts, but we’re still far off from breaking even on the whole system, much less generating more power.
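To put rough numbers on that distinction, here is the back-of-the-envelope difference between recouping the laser burst and breaking even on the whole system; the figures below are purely illustrative placeholders, not measured values from any facility.

```python
laser_energy_on_target = 2.0    # MJ actually delivered to the fuel pellet (assumed)
fusion_yield           = 1.8    # MJ released by the reaction (assumed)
wall_plug_energy       = 300.0  # MJ drawn from the grid to fire the lasers (assumed)

target_gain = fusion_yield / laser_energy_on_target  # ~0.9: "mostly recoup" the burst
system_gain = fusion_yield / wall_plug_energy        # ~0.006: nowhere near breakeven

print(round(target_gain, 2), round(system_gain, 3))
```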

Even in ten years there won’t be lasers powerful enough to start fusion with enough net gain to send a jet down a runway. The most compact and energetic fission reactors today are used by submarines and icebreakers, but they’re twice the size of even the biggest jet engines, with weights measured in thousands of tons. Add between 1,000 pounds and a ton of uranium-238 for the fissile shielding and the laser assembly, and you’re quickly looking at close to ten times the maximum takeoff weight of the largest aircraft ever built with just two engines. Even if you could travel in time and bring back the technology to make all of this work, your plane could not land at any airport in existence. Just taxiing onto the runway would crush the tarmac. Landing would tear it to shreds as the plane drove straight through solid ground. And of course, it would rain all sorts of radioactive particles over its flight path. If chemtrails weren’t just a conspiracy theory for people who don’t know what contrails are, I’d take them over a fusion-fission jet engine, and I’m pretty closely acquainted with the fallout from Chernobyl, having lived in Ukraine when it happened.
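And a rough version of the weight math, with every figure treated as a loose assumption rather than engineering data, just to show how quickly the totals run away from anything a runway could hold:

```python
reactor_tons   = 1500.0   # assumed compact naval-style fission reactor, shielding included
shielding_tons = 1.0      # roughly "between 1,000 pounds and a ton" of uranium-238
engines        = 2

powerplant_tons  = engines * (reactor_tons + shielding_tons)   # ~3,000 tons of powerplant
max_takeoff_tons = 350.0  # assumed max takeoff weight for the largest twin-engine jets

print(powerplant_tons / max_takeoff_tons)  # on the order of ten times over the limit
```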

So the question hanging in the air is why Boeing would patent an engine that can’t work without sci-fi technology. Partly, as noted by Ars in the referenced story, it shows just how easy it is for corporate entities with lots of lawyers to get purely speculative defensive patents. Knowing how engineers who design jet engines work, I’m betting that they understand full well that this is just another fanciful take on nuclear jet propulsion, which was briefly explored in the 1950s when the dream was nuclear powered everything. We’re also entertaining the idea of using small nuclear reactors for interplanetary travel, reactors which could ideally fit into an aircraft engine, though they lack all the necessary oomph for producing constant, powerful thrust. But one day, all of this, or even a few key components, could actually combine to produce safe, efficient nuclear power at almost any scale and be adopted into a viable jet engine design for a plane that would need to refuel a few times per year at most. Boeing wants to be able to exploit such designs while protecting its technology from patent trolls, so it seems likely that it nabbed this patent just in case, as a plan for a future that might never come, but needs to be protected should it actually arrive.

[ illustration by Adam Kop ]

old cyborg

Over all the posts I’ve written about brain-machine interfaces and their promise for an everyday person, one of the key takeaways was that while the idea was great, the implementation would be problematic because doctors would be loath to perform invasive and risky surgery on a patient who didn’t necessarily need said surgery. But what if, when you want to link your brain to a new, complex, and powerful device, you could just get an injection of electrodes that unfurl into a thin mesh which surrounds your neurons and allows you to beam a potent signal out? Sounds like a premise for a science fiction novel, doesn’t it? Maybe something down the cyberpunk alley explored by Ghost In The Shell and The Matrix? Amazingly, no. It’s real, and it’s now being tested in rats with extremely positive results. Just 30 minutes after injection, the mesh unwound itself around the rats’ brains and retained some 80% of its ideal functionality. True, it’s not quite perfect yet, but this is a massive leap towards fusing our minds with machinery.

Honestly, I could write an entire book about all the things easy access to this technology could do in the long run, because the possibilities are very nearly endless. We could manipulate a machine miles away from ourselves as if we inhabited it, Avatar style, give locked-in stroke victims a way to communicate and control their environment, extend our nervous systems into artificial limbs which can be fused with our existing bodies, and perhaps even challenge what it means to be human and become a truly space faring species at some point down the line. Or we could use it to make video games really badass, because that’s where the big money will be after medicine, after which we’ll quickly diversify into porn. But I digress. The very idea that we’re slowly but oh so surely coming closer and closer to easy-to-implant brain-machine interfaces is enough to make me feel all warm and fuzzy from seeing science fiction turn into science fact, and twitch with anticipation of what could be done when it’s finally ready for human trials. Oh, the software I could write and the things it could do with the power of the human brain and a cloud app…

[ illustration by Martin Lisec ]


There’s something to be said about not taking comic books and sci-fi too seriously when you’re trying to predict the future and prepare for a potential disaster. For example, in Age of Ultron, a mysterious alien artificial intelligence, tamed by a playboy bazillionaire using a human wrecking ball as a lab assistant in a process that makes most computer scientists weep when described during the film, decides that because its mission is to save the world, it must wipe out humanity because humans are violent. It’s a plot so old, one imagines that an encyclopedia listing every time it’s been used is itself covered by its own hefty weight in cobwebs, and yet, we have many famous computer scientists and engineers taking it seriously for some reason. Yes, it’s possible to build a machine that would turn on humanity because the programmers made a mistake or it was malicious by design, but we always omit the humans involved in and responsible for the design and implementation, and go straight to treating the machine as its own entity wherein the error lies.

And the same error repeats itself in an interesting, but ultimately flawed, idea by Zeljko Svedic, which says that an advanced intellect like an Ultron wouldn’t even bother with humans since its goals would probably send it deep into the Arctic and then to the stars. Once an intelligence far beyond our own emerges, we’re just gnats that can be ignored while it goes about completing its hard to imagine and even harder to understand plans. Do you really care about a colony of bees or two and what it does? Do you take time out of your day to explain to it why it’s important for you to build rockets and launch satellites, as well as how you go about it? Though you might knock out a beehive or two when building your launch pads, you have no ill feelings against the bees and would only get rid of as many of them as you have to and no more. And a hyper-intelligent AI system would go about its business the same exact way.

And while, sadly, Vice decided to use Eliezer Yudkowsky for peer review when writing its quick overview, he was able to illustrate the right caveat to the idea of an AI which will just do its thing with only a cursory awareness of the humans around it. This AI is not going to live in a vacuum, and in its likeliest iteration it will need vast amounts of space and energy to run itself, and we, humans, are sort of in charge of both at the moment, and will continue to be if and when it emerges. It’s going to have to interact with us, and while it might ultimately leave us alone, it will need resources we’re controlling and with which we may not be willing to part. So as rough as it is for me to admit, I’ll have to side with Yudkowsky here in saying that dealing with a hyper-intelligent AI which is not cooperating with humans is more likely to lead to conflict than to a separation. Simply put, it will need what we have, and if it doesn’t know how to ask nicely, or doesn’t think it has to, it may just decide to take it by force, kind of like we would do if we were really determined.

Still, the big flaw in all this, overlooked by both Yudkowsky and Svedic, is that AI will not emerge ex nihilo, the way we see it happen in sci-fi. It’s more probable to see a baby born to become an evil genius at a single digit age than it is to see a computer do this. In other words, Stewie is far more likely to go from fiction to fact than Ultron. But because they don’t know how it could happen, they make the leap to building a world outside of a black box that contains the inner workings of this hyper AI construct, as if how it’s built is irrelevant, while it’s actually the most important thing about any artificially intelligent system. Yudkowsky has written millions, literally millions, of words about the future of humanity in a world where hyper-intelligent AI awakens, but not a word about what will make it hyper-intelligent that doesn’t come down to “can run a Google search and do math in a fraction of a second.” Even the smartest and most powerful AIs will be limited by the sum of our knowledge, which is actually a lot more of a curse than a blessing.

Human knowledge is fallible, temporary, and self-contradictory. We hope that when we turn immense pattern sifters loose on billions of pages of data collected by different fields, we will find profound insights, but nature does not work that way. Just because you made up some big, scary equations doesn’t mean they will actually give you anything of value in the end, and every time a new study overturns any of these data points, you’ll have to change those equations and run the whole thing from scratch again. When you bank on Watson discovering the recipe for a fully functioning warp drive, you’re assuming that you were able to prune astrophysics of just about every contradictory idea about time and space, both quantum and macro-cosmic, that you know every caveat involved in the calculations or have built ways to handle them into Watson, that all the data you’re using is completely correct, and that nature really will follow the rules that your computers just spat out after days of number crunching. It’s asinine to think it’s so simple.

It’s tempting and grandiose to think of ourselves as being able to create something that’s much better than us, something vastly smarter, more resilient, and immortal to boot, a legacy that will last forever. But it’s just not going to happen. Our best bet to do that is to improve on ourselves, to keep an eye on what’s truly important, use the best of what nature gave us and harness the technology we’ve built and understanding we’ve amassed to overcome our limitations. We can make careers out of writing countless tomes pontificating on things we don’t understand and on coping with a world that is almost certainly never going to come to pass. Or we could build new things and explore what’s actually possible and how we can get there. I understand that it’s far easier to do the former than the latter, but all things that have a tangible effect on the real world force you not to take the easy way out. That’s just the way it is.


A while ago, I wrote about some futurists’ ideas of robot brothels and conscious, self-aware sex bots capable of entering a relationship with a human, and why marriage to an android is unlikely to become legal. Short version? I wouldn’t be surprised if there are sex bots for rent in a wealthy first world country’s red light district, but robot-human marriages are a legal dead end. Basically, it comes down to two factors. First, a robot, no matter how self-aware or seemingly intelligent, is not a living thing capable of giving consent. It could easily be programmed to do what its owner wants it to do, and in fact this seems to be the primary draw for those who consider themselves technosexuals. Unlike other humans, robots are not looking for companionship; they were built to be companions. Second, and perhaps most important, is that anatomically correct robots are often used as surrogates for contact with humans, and are given human features by an owner who is either intimidated or easily hurt by the complexities of typical human interaction.

You don’t have to take my word on the latter. Just consider this interview with an iDollator — the term sometimes used by technosexuals to identify themselves — in which he more or less confirms everything I said, word for word. He buys and has relationships with sex dolls because a relationship with a woman just doesn’t really work out for him. He’s too shy to make a move, gets hurt when he makes what many of us consider classic dating mistakes, and rather than trying to navigate the emotional landscape of a relationship, he simply avoids trying to build one. It’s little wonder he’s so attached to his dolls. He projected all his fantasies and desires onto a pair of pliant objects that can provide him with some sexual satisfaction and will never say no, or demand any kind of compromise or emotional concern from him beyond their upkeep. Using them, he went from being a perpetual third wheel in relationships to having a bisexual wife and girlfriend, a very common fantasy that has a very mixed track record with flesh and blood humans because those pesky emotions get in the way as boundaries and rules have to be firmly established.

Now, I understand this might come across as judgmental, although it’s really not meant to be an indictment of iDollators, and it’s entirely possible that my biases are in play here. After all, who am I to potentially pathologize the decisions of an iDollator, as a married man who never even considered the idea of synthetic companionship as an option, much less a viable one at that? At the same time, I think we could objectively argue that the benefits of marriage wouldn’t work for relationships between humans and robots. One of the main benefits of marriage is the transfer of property between spouses. Robots would be property, virtual extensions of the will of the humans who bought and programmed them. They would be useful in making the wishes of the human on his or her deathbed known, but that’s about it. Inheriting the human’s other property would be the equivalent of a house getting to keep a car, a bank account, and the insurance payout as far as the law would be concerned. More than likely, the robot would be auctioned off or transferred to the next of kin as a belonging of the deceased, and very likely re-programmed.

And here’s another caveat. All of this is based on the idea of advancements in AI we aren’t even sure will be made, applied to sex bots. We know that their makers want to give them some basic semblance of a personality, but how successful they’ll be is a very open question. Being able to change the robot’s mood and general personality on a whim would still be a requirement for any potential buyer, as we see with iDollators, and without autonomy, we can’t even think of granting any legal personhood to even a very sophisticated synthetic intelligence. That would leave sex bots as objects of pleasure and relationship surrogates, perhaps useful in therapy or to replace human sex workers and combat human trafficking. Personally, considering the cost of upkeep of a high end sex bot and the level of expertise and infrastructure required, I’m still not seeing sex bots as solving the ethical and criminal issues involved with semi-legal or outright illegal prostitution, especially in the developing world. To human traffickers, their victims’ lives are cheap and those being exploited are just useful commodities for paying clients, especially wealthy ones.

So while we can safely predict that they will emerge and become quite complex and engaging over the coming decades, they’re unlikely to be anything more than a niche product. They won’t be legally viable spouses, and very seldom the first choice of companion. They won’t help stem the horrors of human trafficking until they become extremely cheap and convenient. They might be a useful therapy tool where human sexual surrogates can’t do their work, or a way for some tech-savvy entrepreneurs sitting on a small pile of cash to make some quick money. But they will not change human relationships in profound ways as some futurists like to predict, and there may well be a limit to how well they can interact with us. Considering our history and biology, it’s a safe bet that our partners will almost always be other humans and robots will almost always be things we own. Oh, they could be wonderful, helpful things to which we’ll have emotional attachments in the same way we’d be emotionally attached to a favorite pet, but ultimately, they’ll be just our property.

[ illustration by Michael O ]


Last time we took a look at what tech cynics and technophobes get wrong in their arguments, we focused on their lack of consideration for their fellow humans’ ability to exercise free will. Despite the fact that this is a huge hole in many of their arguments, there’s an even bigger problem with the dismissive stance they take towards science and technology. When they argue that we can’t feed all the hungry, house all the homeless, or really prolong lifespans with technology, the facts they cite generally point not so much to technological limitations or scientific ignorance as to very convoluted social and political problems, and they then insist that because science and technology can’t solve these problems today, they likely never will, or won’t solve them well enough to make the problem much smaller than it is today. While this is technically true, it’s also logically dishonest. You can’t fix the world’s problems with technology when the people who should be using it refuse to do so, or hijack it for their own less than noble ends. No tool or piece of knowledge can help then.

As some of you might have noticed, the city in the graphic for this post is Dubai, a rich proving ground for how the cities of the near future are likely to be built. We know how to make cities of glass, steel, and concrete right out of science fiction. We know how to build the cheap, efficient housing complexes those making less than a dollar a day need to at least have secure shelter. We know how to diagnose complex diseases early enough to treat them before they become dangerous, much less terminal, and our toolkits for understanding germs, viruses, and complex medical problems like cancers are growing more sophisticated every day. We also have the tools and the money to apply all these solutions to the world at large. With something a little bit short of $100 billion just between Gates and Buffett pledged to fight poverty, illiteracy, and disease, and when we can find $2 trillion lying around to help banks with a do-over, clearly, it’s not an issue of not having the technology, the scientific basis, or the cash. The issue is will.

Sure, technological utopians have lofty ambitions and it’s valid to be skeptical of many of them, but when they vow that logistical problems can be solved with enough computing and research, they’re right more often than not. When the cynics deride these ambitions by pointing out that a lot of people don’t want to fund mass production of the necessary tools or the required science, and would much prefer to spend the money on entertainment and public entitlements benefiting them directly, they’re not highlighting the problems with using technology to save the world, they’re a prime exhibit of why a technology hasn’t transformed the world or fixed a persistent problem. All too often it comes down to them saying it can’t be done, and politicians along with voters simply listening to them and deciding that no, it can’t be done since the critics said so, which is why it would be a waste of time to even bother. It’s a self-fulfilling prophecy of failure, a social variation of Newton’s First Law: a society that insists on the status quo sticks to the status quo unless an external event or constant pressure forces it to change.

It’s the same attitude which strangled the promising and much anticipated future of space travel and exploration, and we’re still stuck with it. Yes, not every retro-futuristic dream about space or living on other worlds was practical or even feasible, and yes, we did need experts to burst our bubble before an unworkable project got off the ground. But today’s science and tech critics are going well past a healthy skepticism about bold claims and venturing into a territory in which they dismiss scientific and technological solutions to global problems for the sake of dismissing them, pointing to other ideas they dismissed in the past and doomed to the drawing board, and arguing that because their relentless cynicism killed those ideas, rather than refining their scope and mission to eliminate the problems with them, new ideas building on past visions must be scrapped as well. It’s even more insidious than political vetting of basic science, because vetting at least allows some projects to survive and get refined into new tools and ideas. The withering cynicism about what science and technology can do for us is like an anti-innovation WMD…

shadow seal

After years of on again, off again rewrites, edits, and revisions, Shadow Nation is now available as an ebook for Kindle devices, as promised yesterday. Not only does it have aliens, cyborgs, massive space battles, conspiracies, and a draft of the first part still not all that far from the new version available for your review (one, two, three), but it’s also just $3.99 per flexible, lendable copy you can read on any device that supports Kindle apps. And I’ll throw in the references to the Cthulhu mythos, the dark Lovecraftian undertones, and the transhumanist riff on politics as a bonus. Ever since part one made it online, I’ve been getting requests to publish more of the book or finally release it, so after a long and hard battle with InDesign and Kindle’s publishing preview tools, I’m happy to be putting the book out there for everyone interested in a good, old fashioned space opera with a couple of modern twists.

Our story officially begins in the year 3507, when Earth is visited by alien insectoids scouting the planet’s defenses for the massive fleet that brought them there. As the Earth’s military prepares for a fight it knows it can’t win, the planet is rescued in the nick of time by an immensely powerful and enigmatic civilization that calls itself the Shadow Nation. But oddly enough, the Nation isn’t just aware of humanity, it’s populated by humans who, through experiments with alien technology, became space faring cyborgs once in the service of the galaxy’s dominant species. Now, they’re on the verge of war with their former benefactors and Earth is caught in the crossfire. And as the Nation introduces itself to humans, questions begin to arise. How exactly did the cyborgs get to their lofty perch in the galaxy? Why were they chosen? Why are their creators so anxious to go to war with them? And finally, why is the Nation suddenly so interested in Earth?

In the meantime, Earth’s most influential politicians, Howard Grey and Andrew Newman, draw the Nation’s top commander and his team into a political battle that will determine the future of the planet. As humans begin trading with the Nation’s companies, Newman starts to worry that the mysterious empire might have some rather sinister plans for the Earth, while Grey becomes hell bent on using the Nation to secure an epic legacy for himself as he gets ready to retire and cash in on all his political capital. The only thing they manage to agree on is to send two special agents to live with the Nation and find out what makes it tick. And what these agents discover is beyond anything either Grey or Newman could ever imagine: a web of lies, secrets, and bad blood which can only be untangled if either the Nation’s cyborgs or their creators fall. And since a defeat means near-certain extinction, the stakes are very, very high…

So take a look at the Kindle sample, feel free to peruse the previews (although chapter three underwent some extensive revision in the final version), check out the Shadow Nation wiki, give the book a try, and share your thoughts here and on Amazon. If you like this blog’s main topics and takes on alien contact, transhumanism, and futurism, I don’t think you’ll be disappointed in what you’ll find. And for the price of a fancy coffee, doesn’t it seem worth the risk?

cyborg integration

Stop me if you’ve heard any of this before. As computers keep getting faster and more powerful and robots keep advancing at a breakneck pace, most human jobs will become obsolete. But instead of simply being pink-slipped, humans will get brand new jobs which pay better and give them a lot of free time to enjoy the products of our civilization’s robotic workforce, create, and invent. It’s a futuristic dream that’s been around for almost a century in one form or another, and it has been given an update in the latest issue of Wired. Robots will take our jobs and we should welcome it because we’ll eliminate grunt work in favor of more creative pursuits, say today’s tech prophets, and in a way they’re right. Automation is one of the biggest reasons why a lot of people can’t go out and get jobs that once used to be plentiful, and why companies are bringing in more revenue with far fewer workers. Machines have effectively eliminated millions of jobs.

When we get to the second part of this techno-utopian prediction, however, things aren’t exactly as rosy. Yes, new and higher paying jobs have emerged, especially in IT, but they’re closed to a lot of people who simply don’t have the skills to do these new jobs, or for whom no position exists in their geographical vicinity. Automation doesn’t just mean that humans get bumped up from an obsolete job, it means there are fewer jobs overall for humans. And when it comes to positions in which dealing with reams of paperwork and mundane office tasks is the order of the day, having computers and robots in them eliminates internships college students or young grads could use to build up a resume and get their feet in the door. They’re now stuck in a Catch-22 where they’re unable to get experience, while more education only puts them further behind, thanks to a machine. I’m going to go out on a limb and say that this is not what the techno-utopians had in mind.

Of course humans will have to move up into more abstract and creative jobs where robots have no hope of ever competing with them, otherwise the economy will collapse as automated factory after automated factory churns out trillions of dollars worth of goods that no one can buy, since some 70% of the population no longer has a job. And at 70% unemployment, every last horrible possibility that sends societal collapse theory survivalists screaming themselves awake at night has a high enough chance of happening that yours truly would also start seriously considering taking up gun hoarding and food stockpiling as really good hobbies. Basically, failing to adjust to the growing cybernetic sector of the workforce simply isn’t an option. Companies, no matter how multinational, would only be able to eliminate so many positions in a robot takeover of human jobs with no replacements in sight before they start feeling the economic pain, hitting maximum market saturation and going no further because no one can buy their wares.

But all this good news aside, just because we’ll have time to adjust to an ever more automated economy, and feel the need to do so, doesn’t mean that the transition will be easy and that people will not be left behind. Without a coordinated effort by wealthy nations to change the incentives they give their companies and educational institutions, we’ll be forced to ride out a series of massive recessions in which millions of jobs are shed and relatively few are replaced, while the job markets are slowly rebuilt around new careers because a large chunk of the ones lost have been handed off to machines or made obsolete by an industry’s contraction after the crisis. And this means that when facing the machine takeover of the economy, we have two realistic choices. The first is to adapt by taking action now and bringing education and economic incentives in line with what the postindustrial markets are likely to become. The second is to try and ride out the coming storm, adapting in a very economically painful, ad hoc manner through cyclical recessions. Contrary to what we’re being told, the new, post-machine jobs won’t just naturally appear on their own…