
sci-fi plane

Now, I don’t mean to alarm you, but if Boeing is serious about its idea for a fusion-powered jet engine and puts it into a commercial airplane in the near future more or less as it stands, you’re probably going to be killed when it’s turned on as the plane gets ready to taxi. How exactly your life will end is really a matter of debate. The most obvious way is being poisoned by a shower of stray neutrons and electrons emanating from the fusion process and from the fissile shielding, which would absorb some of those neutrons and start a chain reaction much like the one in a commercial fission plant, but with basically nothing between you and the radiation. If you want to know exactly what that would do to your body, and want to lose sleep for a few days, simply do a search — and for the love of all things Noodly, not an image search, anything but that — for Hisashi Ouchi. Another way would be a swift crash landing after the initial reaction gets the plane airborne but just can’t continue consistently enough to keep it in the air. A third involves electrical components, fried by a steady radioactive onslaught, giving out mid-flight. I could go on and on, but you get the point.

Of course this assumes that Boeing would actually build such a jet engine, which is pretty much impossible without some absolutely amazing breakthroughs in physics and material sciences, and a subsequent miniaturization of all these huge leaps into something that will fit into commercial jet engines. While you’ve seen engines the size of a NYC or San Francisco studio apartment hanging off each wing of planes that routinely cross oceans, that’s not nearly enough space for even one component of Boeing’s fusion engine. It would be like planning, back in 1952, to shrink one of the very first room-sized computers into a Raspberry Pi, when we theoretically knew we should be able to do it someday but had no idea how. We know that fusion should work. It’s basically the predominant high energy reaction in the universe. But we just can’t scale it down until we figure out how to negotiate turbulent plasma streams and charged particles repelling each other in the early stages of ignition. Right now, we can mostly recoup the energy from the initial laser bursts, but we’re still far off from breaking even on the whole system, much less generating more power.
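To make the break-even point concrete: fusion researchers talk about a gain factor Q, the ratio of fusion energy released to energy supplied, and the catch is which energy input you count. The sketch below is purely illustrative — every number in it is a made-up placeholder, not a measurement from any real facility — but it shows why recouping the laser burst is a long way from powering anything:

```python
# Toy illustration of fusion gain accounting (all figures hypothetical).
# Q computed against laser energy on target is the "scientific" break-even;
# Q computed against wall-plug electricity is what a practical power
# source, let alone a jet engine, would actually have to beat.

def gain(energy_out_mj: float, energy_in_mj: float) -> float:
    """Gain factor Q = energy released / energy supplied."""
    return energy_out_mj / energy_in_mj

laser_on_target_mj = 2.0   # assumed laser energy delivered to the fuel pellet
wall_plug_mj = 300.0       # assumed electricity drawn to charge and fire the lasers
fusion_yield_mj = 2.5      # assumed fusion energy released by the pellet

q_target = gain(fusion_yield_mj, laser_on_target_mj)  # just over 1: bursts recouped
q_wall_plug = gain(fusion_yield_mj, wall_plug_mj)     # far below 1: whole system loses

print(f"Q (on target) = {q_target:.2f}, Q (wall plug) = {q_wall_plug:.4f}")
```

With these placeholder numbers the pellet gives back a bit more than the lasers put into it, while the system as a whole returns well under one percent of the electricity it consumed — which is the gap the paragraph above is pointing at.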

Even in ten years there wouldn’t be lasers powerful enough to start fusion with enough net gain to send a jet down a runway. The most compact and energetic fission reactors today are used by submarines and icebreakers, but they’re twice the size of even the biggest jet engines, with weights measured in thousands of tons. Add between 1,000 pounds and a ton of uranium-238 for the fissile shielding, plus the laser assembly, and you’re quickly looking at close to ten times the maximum takeoff weight of the largest twin-engine aircraft ever built. Even if you could travel through time and bring back the technology to make all this work, your plane couldn’t land at any airport in existence. Just taxiing onto the runway would crush the tarmac. Landing would tear it to shreds as the plane drove straight through solid ground. And of course, it would rain all sorts of radioactive particles over its flight path. If chemtrails weren’t just a conspiracy theory for people who don’t know what contrails are, I’d take them over a fusion-fission jet engine, and I’m pretty closely acquainted with the fallout from Chernobyl, having lived in Ukraine as it happened.
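The weight argument above is easy to sanity-check with back-of-the-envelope arithmetic. The reactor and laser masses below are rough assumptions in the spirit of the paragraph, not engineering figures, and the takeoff weight is only a ballpark for a large twin-engine airliner:

```python
# Back-of-envelope mass check (all inputs are illustrative assumptions).
TONNE = 1000.0  # kilograms

compact_fission_reactor = 2000 * TONNE  # assumed naval-style reactor plant, per engine
u238_shielding = 1.0 * TONNE            # roughly "between 1,000 pounds and a ton"
laser_assembly = 50 * TONNE             # assumed mass of the ignition laser stack
engines = 2

propulsion_mass = engines * (compact_fission_reactor + u238_shielding + laser_assembly)
max_takeoff_weight = 350 * TONNE        # ballpark MTOW of the biggest twinjets

print(f"Propulsion alone: {propulsion_mass / TONNE:,.0f} t "
      f"vs. MTOW of {max_takeoff_weight / TONNE:,.0f} t "
      f"({propulsion_mass / max_takeoff_weight:.1f}x over the limit)")
```

Even with charitable guesses, the propulsion hardware alone comes out an order of magnitude past the heaviest takeoff weight any runway was built for, before adding the airframe, fuel, or passengers.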

So the question hanging in the air is why Boeing would patent an engine that can’t work without sci-fi technology. Partly, as noted by Ars in the referenced story, it shows just how easy it is for corporate entities with lots of lawyers to get purely speculative defensive patents. Knowing how engineers who design jet engines work, I’m betting that they understand full well that this is just another fanciful take on nuclear jet propulsion, which was briefly explored in the 1950s when the dream was nuclear-powered everything. We’re also entertaining the idea of using small nuclear reactors for interplanetary travel, which could ideally fit into an aircraft engine, though they lack the necessary oomph for producing constant, powerful thrust. But one day, all of this, or even a few key components, could actually combine to produce safe, efficient nuclear power at almost any scale and be adopted into a viable jet engine design for a plane that would need to refuel a few times per year at most. Boeing wants to be able to exploit such designs while protecting its technology from patent trolls, so it seems likely that it nabbed this patent just in case, as a plan for a future that might never come, but needs to be protected should it actually arrive.

[ illustration by Adam Kop ]

old cyborg

Over all the posts I’ve written about brain-machine interfaces and their promise for the everyday person, one of the key takeaways was that while the idea was great, the implementation would be problematic because doctors would be loath to perform invasive and risky surgery on a patient who didn’t strictly need said surgery. But what if, when you wanted to link your brain to a new, complex, and powerful device, you could just get an injection of electrodes that unfurl into a thin mesh which surrounds your neurons and allows you to beam a potent signal out? Sounds like a premise for a science fiction novel, doesn’t it? Maybe something down the cyberpunk alley explored by Ghost In The Shell and The Matrix? Amazingly, no. It’s real, and it’s now being tested in rats with extremely positive results. Just 30 minutes after injection, the mesh unwound itself around the rats’ brains and retained some 80% of its ideal functionality. True, it’s not quite perfect yet, but this is a massive leap towards fusing our minds with machinery.

Honestly, I could write an entire book about all the things easy access to this technology could do in the long run, because the possibilities are almost truly endless. We could manipulate a machine miles away from ourselves as if we inhabited it, Avatar style, give locked-in stroke victims a way to communicate and control their environment, extend our nervous systems into artificial limbs which can be fused with our existing bodies, and perhaps even challenge what it means to be human and become a truly spacefaring species at some point down the line. Or we could use it to make video games really badass, because that’s where the big money will be after medicine, after which we’ll quickly diversify into porn. But I digress. The very idea that we’re slowly but oh so surely coming closer to easy-to-implant brain-machine interfaces is enough to make me feel all warm and fuzzy from seeing science fiction turn into science fact, and twitch with anticipation of what could be done when it’s finally ready for human trials. Oh, the software I could write and the things it could do with the power of the human brain and a cloud app…

[ illustration by Martin Lisec ]

ultron

There’s something to be said about not taking comic books and sci-fi too seriously when you’re trying to predict the future and prepare for a potential disaster. For example, in Age of Ultron, a mysterious alien artificial intelligence, tamed by a playboy bazillionaire using a human wrecking ball as a lab assistant in a process that makes most computer scientists weep when described during the film, decides that because its mission is to save the world, it must wipe out humanity because humans are violent. It’s a plot so old, one imagines that an encyclopedia listing every time it’s been used is itself buried under its own hefty weight in cobwebs, and yet we have many famous computer scientists and engineers taking it seriously for some reason. Yes, it’s possible to build a machine that would turn on humanity because the programmers made a mistake or because it was malicious by design, but we always omit the humans involved in, and responsible for, its design and implementation, and go straight to treating the machine as its own entity wherein lies the error.

And the same error repeats itself in an interesting, but ultimately flawed idea by Zeljko Svedic, which says that an advanced intellect like an Ultron wouldn’t even bother with humans, since its goals would probably send it deep into the Arctic and then to the stars. Once an intelligence far beyond our own emerges, we’re just gnats that can be ignored while it goes about completing its hard to imagine and even harder to understand plans. Do you really care about a colony of bees or two and what it does? Do you take time out of your day to explain to it why it’s important for you to build rockets and launch satellites, as well as how you go about it? Though you might knock out a beehive or two when building your launch pads, you have no ill feelings toward the bees and would only get rid of as many of them as you have to and no more. And a hyper-intelligent AI system would go about its business the exact same way.

And while, sadly, Vice decided on using Eliezer Yudkowsky for peer review when writing its quick overview, he was able to illustrate the right caveat to the idea of an AI which will just do its thing with only a cursory awareness of the humans around it. This AI is not going to live in a vacuum; in its likeliest iteration it will need vast amounts of space and energy to run itself, and we, humans, are sort of in charge of both at the moment, and will continue to be if and when it emerges. It’s going to have to interact with us, and while it might ultimately leave us alone, it will need resources we control and with which we may not be willing to part. So as rough as it is for me to admit, I’ll have to side with Yudkowsky here in saying that dealing with a hyper-intelligent AI which is not cooperating with humans is more likely to lead to conflict than to a separation. Simply put, it will need what we have, and if it doesn’t know how to ask nicely, or doesn’t think it has to, it may just decide to take it by force, kind of like we would do if we were really determined.

Still, the big flaw in all this, overlooked by both Yudkowsky and Svedic, is that AI will not emerge ex nihilo, just like we see in sci-fi. It’s more probable to see a baby born to become an evil genius at a single digit age than it is to see a computer do this. In other words, Stewie is far more likely to go from fiction to fact than Ultron. But because they don’t know how it could happen, they make the leap to building a world around a black box that contains the inner workings of this hyper AI construct, as if how it’s built is irrelevant, when it’s actually the most important thing about any artificially intelligent system. Yudkowsky has written millions, literally millions, of words about the future of humanity in a world where hyper-intelligent AI awakens, but not a word about what will make it hyper-intelligent that doesn’t come down to “can run a Google search and do math in a fraction of a second.” Even the smartest and most powerful AIs will be limited by the sum of our knowledge, which is actually a lot more of a curse than a blessing.

Human knowledge is fallible, temporary, and self-contradictory. We hope that when we apply immense pattern sifters to billions of pages of data collected by different fields, we will find profound insights, but nature does not work that way. Just because you made up some big, scary equations doesn’t mean they will actually give you anything of value in the end, and every time a new study overturns any of these data points, you’ll have to change those equations and run the whole thing from scratch again. When you bank on Watson discovering the recipe for a fully functioning warp drive, you’re assuming that you were able to prune astrophysics of just about every contradictory idea about time and space, both quantum and macro-cosmic, that you know every caveat involved in the calculations or have built ways to handle them into Watson, that all the data you’re using is completely correct, and that nature really will follow the rules your computers just spat out after days of number crunching. It’s asinine to think it’s so simple.

It’s tempting and grandiose to think of ourselves as being able to create something that’s much better than us, something vastly smarter, more resilient, and immortal to boot, a legacy that will last forever. But it’s just not going to happen. Our best bet to do that is to improve on ourselves, to keep an eye on what’s truly important, use the best of what nature gave us and harness the technology we’ve built and understanding we’ve amassed to overcome our limitations. We can make careers out of writing countless tomes pontificating on things we don’t understand and on coping with a world that is almost certainly never going to come to pass. Or we could build new things and explore what’s actually possible and how we can get there. I understand that it’s far easier to do the former than the latter, but all things that have a tangible effect on the real world force you not to take the easy way out. That’s just the way it is.

plaything

A while ago, I wrote about some futurists’ ideas of robot brothels and conscious, self-aware sex bots capable of entering a relationship with a human, and why marriage to an android is unlikely to become legal. Short version? I wouldn’t be surprised if there are sex bots for rent in a wealthy first world country’s red light district, but robot-human marriages are a legal dead end. Basically, it comes down to two factors. First, a robot, no matter how self-aware or seemingly intelligent, is not a living thing capable of giving consent. It could easily be programmed to do whatever its owner wants it to do, and in fact this seems to be the primary draw for those who consider themselves technosexuals. Unlike other humans, robots are not looking for companionship; they were built to be companions. Second, and perhaps most important, is that anatomically correct robots are often used as surrogates for contact with humans, imparted with human features by an owner who is either intimidated or easily hurt by the complexities of typical human interaction.

You don’t have to take my word on the latter. Just consider this interview with an iDollator — the term sometimes used by technosexuals to identify themselves — in which he more or less confirms everything I said, word for word. He buys and has relationships with sex dolls because relationships with women just don’t really work out for him. He’s too shy to make a move, gets hurt when he makes what many of us consider classic dating mistakes, and rather than trying to navigate the emotional landscape of a relationship, he simply avoids trying to build one. It’s little wonder he’s so attached to his dolls. He has projected all his fantasies and desires onto a pair of pliant objects that can provide him with some sexual satisfaction and will never say no, or demand any kind of compromise or emotional concern from him beyond their upkeep. Using them, he went from being a perpetual third wheel in relationships to having a bisexual wife and girlfriend, a very common fantasy that has a very mixed track record with flesh and blood humans because those pesky emotions get in the way as boundaries and rules have to be firmly established.

Now, I understand this might come across as judgmental, although it’s really not meant to be an indictment of iDollators, and it’s entirely possible that my biases are in play here. After all, who am I to pathologize the decisions of iDollators, as a married man who never even considered the idea of synthetic companionship as an option, much less a viable one at that? At the same time, I think we could objectively argue that the benefits of marriage wouldn’t work for relationships between humans and robots. One of the main benefits of marriage is the transfer of property between spouses. Robots would be property, virtual extensions of the will of the humans who bought and programmed them. They would be useful in making the wishes of a human on his or her deathbed known, but that’s about it. Having them inherit the human’s other property would be the equivalent of a house getting to keep a car, a bank account, and the insurance payout as far as the law is concerned. More than likely, the robot would be auctioned off, or transferred to the next of kin as a belonging of the deceased, and very likely re-programmed.

And here’s another caveat. All of this is based on the idea of advancements in AI we aren’t even sure will be made being applied to sex bots. We know that their makers want to give them some basic semblance of a personality, but how successful they’ll be is a very open question. Being able to change the robot’s mood and general personality on a whim would still be a requirement for any potential buyer, as we see with iDollators, and without autonomy, we can’t even think of granting any legal personhood to even a very sophisticated synthetic intelligence. That would leave sex bots as objects of pleasure and relationship surrogates, perhaps useful in therapy or to replace human sex workers and combat human trafficking. Personally, considering the cost of upkeep of a high end sex bot and the level of expertise and infrastructure required, I’m still not seeing sex bots solving the ethical and criminal issues involved with semi-legal or illegalized prostitution, especially in the developing world. To human traffickers, their victims’ lives are cheap and those being exploited are just useful commodities for paying clients, especially wealthy ones.

So while we could safely predict that sex bots will emerge and become quite complex and engaging over the coming decades, they’re unlikely to be anything more than a niche product. They won’t be legally viable spouses, and very seldom the first choice of companion. They won’t help stem the horrors of human trafficking until they become extremely cheap and convenient. They might be a useful therapy tool where human sexual surrogates can’t do their work, or a way for some tech-savvy entrepreneurs sitting on a small pile of cash to make some quick money. But they will not change human relationships in the profound ways some futurists like to predict, and there may well be a limit to how well they can interact with us. Considering our history and biology, it’s a safe bet that our partners will almost always be other humans, and robots will almost always be things we own. Oh, they could be wonderful, helpful things to which we’ll have emotional attachments in the same way we’d be emotionally attached to a favorite pet, but ultimately, they’ll be just our property.

[ illustration by Michael O ]

dubai_600

Last time we took a look at what tech cynics and technophobes get wrong in their arguments, we focused on their lack of consideration for their fellow humans’ ability to exercise free will. Despite the fact that this is a huge hole in many of their arguments, there’s an even bigger problem with the dismissive stance they take towards science and technology. When they argue that we can’t feed all the hungry, house all the homeless, or really prolong lifespans with technology, the facts they cite generally point not so much to technological limitations or scientific ignorance as to very convoluted social and political problems, and they then insist that because science and technology can’t solve those problems today, they likely never will, or won’t solve them well enough to make the problems much smaller than they are today. While this argument is true, it’s also logically dishonest. You can’t fix the world’s problems with technology when the people who should be using it refuse to do so, or hijack it for their own less than noble ends. No tool or piece of knowledge can help then.

As some of you might have noticed, the city in the graphic for this post is Dubai, a rich proving ground for how the cities of the near future are likely to be built. We know how to make cities of glass, steel, and concrete right out of science fiction. We know how to build the cheap, efficient housing complexes those making less than a dollar a day need to at least have secure shelter. We know how to diagnose complex diseases early enough to treat them before they become dangerous, much less terminal, and our toolkits for understanding germs, viruses, and complex medical problems like cancers are growing more sophisticated every day. We also have the tools and the money to apply all these solutions to the world at large. With something a little short of $100 billion pledged just between Gates and Buffett to fight poverty, illiteracy, and disease, and when we can find $2 trillion lying around to help banks with a do-over, clearly, it’s not an issue of not having the technology, the scientific basis, or the cash. The issue is will.

Sure, technological utopians have lofty ambitions and it’s valid to be skeptical of many of them, but when they vow that logistical problems can be solved with enough computing and research, they’re right more often than not. When the cynics deride these ambitions by pointing out that a lot of people don’t want to fund mass production of the necessary tools or the required science, and would much prefer to spend the money on entertainment and public entitlements benefiting them directly, they’re not highlighting the problems with using technology to save the world; they’re a prime exhibit of why a technology hasn’t transformed the world or fixed a persistent problem. All too often it comes down to them saying it can’t be done, and politicians along with voters simply listening and deciding that no, it can’t be done since the critics said so, which is why it would be a waste of time to even bother. It’s a self-fulfilling prophecy of failure, a social variation of Newton’s First Law: a society that insists on the status quo sticks to the status quo unless an external event or constant pressure forces it to change.

It’s the same attitude which strangled the promising and much anticipated future of space travel and exploration, and we’re still stuck with it. Yes, not every retro-futuristic dream about space or living on other worlds was practical or even feasible, and yes, we did need experts to burst our bubble before an unworkable project got off the ground. But today’s science and tech critics are going well past healthy skepticism about bold claims and venturing into territory in which they dismiss scientific and technological solutions to global problems for the sake of dismissing them. They point to other ideas they dismissed in the past and doomed to the drawing board, and say that because their relentless cynicism killed those ideas, rather than refining their scopes and missions to eliminate the problems with them, new ideas building on past visions must be scrapped as well. It’s even more insidious than political vetting of basic science, because vetting at least allows some projects to survive and get refined into new tools and ideas. The withering cynicism about what science and technology can do for us is like an anti-innovation WMD…

shadow seal

After years of on again, off again rewrites, edits, and revisions, Shadow Nation is now available as an ebook for Kindle devices on Amazon.com, as promised yesterday. Not only does it have aliens, cyborgs, massive space battles, conspiracies, and a draft of the first part still not all that far from the new version available for your review (one, two, three), but it’s also just $3.99 per flexible, lend-able copy you can read on any device that supports Kindle apps. And I’ll throw in the references to the Cthulhu mythos, the dark Lovecraftian undertones, and the transhumanist riff on politics as a bonus. Ever since part one made it online, I’ve been getting requests to publish more of the book or finally release it, so after a long and hard battle with InDesign and Kindle’s publishing preview tools, I’m happy to be putting the book out there for everyone interested in a good, old fashioned space opera with a couple of modern twists.

Our story officially begins in the year 3507, when Earth is visited by alien insectoids scouting the planet’s defenses for the massive fleet that brought them there. As the Earth’s military prepares for a fight it knows it can’t win, the planet is rescued in the nick of time by an immensely powerful and enigmatic civilization that calls itself the Shadow Nation. But oddly enough, the Nation isn’t just aware of humanity, it’s populated by humans who, through experiments with alien technology, became spacefaring cyborgs once in the service of the galaxy’s dominant species. Now they’re on the verge of war with their former benefactors, and Earth is caught in the crossfire. And as the Nation introduces itself to humans, questions begin to arise. How exactly did the cyborgs get to their lofty perch in the galaxy? Why were they chosen? Why are their creators so anxious to go to war with them? And finally, why is the Nation suddenly so interested in Earth?

In the meantime, Earth’s most influential politicians, Howard Grey and Andrew Newman, pull the Nation’s top commander and his team into a political battle that will determine the future of the planet. As humans begin trading with the Nation’s companies, Newman starts to worry that the mysterious empire might have some rather sinister plans for the Earth, while Grey becomes hell-bent on using the Nation to secure an epic legacy for himself as he gets ready to retire and cash in on all his political capital. The only thing they manage to agree on is to send two special agents to live with the Nation and find out what makes it tick. And what these agents discover is beyond anything either Grey or Newman could ever imagine: a web of lies, secrets, and bad blood which can only be untangled if either the Nation’s cyborgs or their creators fall. And since a defeat means near-certain extinction, the stakes are very, very high…

So take a look at the Kindle sample, feel free to peruse the previews (although chapter three underwent some extensive revision in the final version), check out the Shadow Nation wiki, give the book a try, and share your thoughts here and on Amazon. If you like this blog’s main topics and takes on alien contact, transhumanism, and futurism, I don’t think you’ll be disappointed in what you’ll find. And for the price of a fancy coffee, doesn’t it seem worth the risk?

cyborg integration

Stop me if you’ve heard any of this before. As computers keep getting faster and more powerful and robots keep advancing at a breakneck pace, most human jobs will become obsolete. But instead of simply being pink-slipped, humans will get brand new jobs which pay better and give them a lot of free time to enjoy the products of our civilization’s robotic workforce, create, and invent. It’s a futuristic dream that’s been around for almost a century in one form or another, and it has been given an update in the latest issue of Wired. Robots will take our jobs and we should welcome it, because we’ll eliminate grunt work in favor of more creative pursuits, say today’s tech prophets, and in a way they’re right. Automation is one of the biggest reasons why a lot of people can’t go out and get jobs that once used to be plentiful, and why companies are bringing in more revenue with far fewer workers. Machines have effectively eliminated millions of jobs.

When we get to the second part of this techno-utopian prediction, however, things aren’t exactly as rosy. Yes, new and higher paying jobs have emerged, especially in IT, but they’re closed to a lot of people who simply don’t have the skills to do them, or for whom no position exists in their geographical vicinity. Automation doesn’t just mean that humans get bumped up from an obsolete job; it means there are fewer jobs overall for humans. And when it comes to positions in which dealing with reams of paperwork and mundane office tasks is the order of the day, having computers and robots fill them eliminates the internships college students or young grads could use to build up a resume and get their feet in the door. They’re now stuck in a Catch-22 where they’re unable to get experience, and more education only puts them further behind, thanks to a machine. I’m going to go out on a limb and say that this is not what the techno-utopians had in mind.

Of course humans will have to move up into more abstract and creative jobs where robots have no hope of ever competing with them; otherwise the economy will collapse as automated factory after automated factory churns out trillions of dollars worth of goods that no one can buy, since some 70% of the population no longer has a job. And at 70% unemployment, every last horrible possibility that sends societal collapse theory survivalists screaming themselves awake at night has a high enough chance of happening that yours truly would also start seriously considering taking up gun hoarding and food stockpiling as really good hobbies. Basically, failing to adjust to the growing cybernetic sector of the workforce simply isn’t an option. No company, no matter how multinational, could eliminate so many positions with no replacements in sight without starting to feel the economic pain as it hits maximum market saturation and can grow no further, because no one can buy its wares.

But all this good news aside, just because we’ll have time to adjust to an ever more automated economy, and feel the need to do so, doesn’t mean that the transition will be easy or that people will not be left behind. Without a coordinated effort by wealthy nations to change the incentives they give their companies and educational institutions, we’ll be forced to ride out a series of massive recessions in which millions of jobs are shed, relatively few are replaced, and the job markets are slowly rebuilt around new careers, because a large chunk of the ones lost will have been handed off to machines or made obsolete by an industry’s contraction after the crisis. And this means that when facing the machine takeover of the economy, we have two realistic choices. The first is to adapt by taking action now and bringing education and economic incentives in line with what the postindustrial markets are likely to become. The second is to try and ride out the coming storm, adapting in a very economically painful, ad hoc manner through cyclical recessions. Despite what we’re being told, the new, post-machine jobs won’t just naturally appear on their own…

sleepy telecommuter

Long time Weird Things readers have met tech skeptic Evgeny Morozov several times over the last year, and while I usually welcome his contrarian and pragmatic take on tech evangelism, his recent article at Future Tense seems to have gone somewhat astray. While trying to list all the ways in which telecommuting made work/life balance worse for many, he ended up showing how telecommuting can fail when the bosses don’t know how to manage it and the workers don’t get the reasoning behind it. Now, this isn’t to say that working from home is for everyone or that every job can be done via a computer. Some people need the discipline of the office, and the professional customs of certain industries demand face time. But a lot of tasks can be done in a home office, and not having a daily commute saves money for both employers and employees. With fewer on-site workers, companies can save on office space. With less driving, workers save on gas.

But according to Morozov, telecommuters are putting in more hours, are more likely to be single, implying they don’t have families, and their bosses end up either micromanaging them or unsure what to do with remote subordinates. Therefore, he continues, rather than being the wave of the future, letting us better manage work and play time, telecommuting is being abused to make us work a lot more, and its results are mixed at best for employers. I would be inclined to agree with this, at least in part, if every example he provided for his conclusion didn’t show that those involved just lunged into telecommuting with little thought or preparation. For example, his anecdote of a big government office failing at telecommuting highlighted an interesting bit of managerial double-speak that’s quite revealing. Supervisors didn’t know how to evaluate finished work, and quality was slipping. How would they know quality was slipping if they didn’t know how to evaluate the work, and why were there no guidelines on how to judge the work being done remotely? Sounds like a glaring management oversight of a key issue. And it only gets worse from there.

The now telecommuting employees, used to strict workdays, punching in and out, and filling out time sheet after time sheet based on the hours defined by their positions, didn’t know whether they were putting in enough hours. But putting in the hours isn’t what telecommuting is about. It’s about getting a task done up to spec and on time. If you’re done early, good job. Take five and vacuum, or watch a little TV as a reward, or go on a quick jog to get yourself amped up for the next thing on your to-do list. Remote work is supposed to help get things done efficiently and keep morale up by getting workers out of that most wretched invention of the 20th century: the cubicle. It’s not a way to cram more hours into the workday. Humans can only do so much quality work in a day, so trying to make them do more is simply not going to work out. For example, programmers can typically write decent code for about six hours. After that, code quality goes down because we’ve spent most of our workday staring at code, screenshots, hexadecimals, and test results. Making us write code for another four is just going to give you crappy code that needs to be fixed.

I’m sure you see where this is going. If you see telecommuting as a way to wring more hours out of the day, you are doing it wrong. If you see working from home as sitting behind a desk for X hours, you are doing it wrong. Working remotely is not having a cubicle away from the office; it’s a completely different mindset, one that prizes completion of projects over face time in a cube. Yes, it’s really easy for managers who started their careers when PCs were still new in the business world to use the ass-in-the-chair metric, but it’s a lousy metric for anything other than employee attendance. These managers are the ones who install spyware and micromanage telecommuters because they can’t accept that they hired grown adults who should be able to be responsible in how they use their time and get work done. It’s a very 1950s and 1960s way to run an office, but it’s pervasive because, frankly, it’s easy and familiar. It’s not that telecommuting’s promise failed; it’s that a whole lot of companies never got the hang of how to do it and ended up with remote workers they don’t know how to manage.


While studying what effect cell division has on cancer risk, a team of scientists engineered mice that produced excess levels of a protein called BubR1 and got results that seem way too promising at first blush. Not only were the engineered mice a third less likely to develop lung and skin cancers after exposure to potent carcinogens than control animals, but they had twice the endurance, lived 15% longer, and were less than half as likely to develop a fatal cancer. So what’s the catch? Well, there is none. It’s as if over-expression of BubR1 is a magical elixir of good health and longevity. This doesn’t mean that the protein couldn’t become our most potent weapon against cancer with enough study, or that it must have some hidden side effect, though one is entirely possible since too little BubR1 in humans is associated with premature aging and some forms of cancer, but it is a signal to proceed with optimistic caution.

Mice may have a lot of genetic similarities to humans, but they are a different species, so what works well in mice may not work as well in humans. Likewise, if we really wanted to be sure of the results, we’d have to test them on thousands of humans over decades, which is a massive undertaking in logistics alone. And since testing the protein modifications in humans would be such a major effort, the researchers need to know exactly how BubR1 does all the wonderful things it does, breaking down its role reaction by reaction and testing each factor on its own. The work may take decades to complete, but if it holds up, we may have found a way to extend and improve our lives in a humble protein. Combined with other ongoing work, there’s some very real science behind extending human lifespans and modifying our genomes for the better. I just hope we don’t get carried away, and that we treat editorials presenting BubR1, gene therapy on a massive scale, and cell reprogramming technology as just around the corner with the healthy skepticism they deserve, since the research is by no means complete…

See: Baker, D., et al. (2012). Increased expression of BubR1 protects against aneuploidy and cancer and extends healthy lifespan. Nature Cell Biology. DOI: 10.1038/ncb2643


Ray Kurzweil, the tech prophet reporters love to quote about our coming immortality courtesy of incredible machines being invented as we speak, despite his rather sketchy track record of predicting long-term tech trends, has a new book laying out a blueprint for reverse-engineering the human mind. You see, in Kurzweilian theory, being able to map out the human brain means that we’ll be able to create a digital version of it, doing away with the neurons and replacing them with their digital equivalents while preserving your unique sense of self. His new ideas are definitely a step in the right direction and much improved from his original notions of mind uploading, the ones that triggered many a back and forth with the Singularity Institute’s fellows and fans on this blog. Unfortunately, as reviewers astutely note, his conception of how a brain works on a macro scale is still simplistic to a glaring fault, so instead of a theory of how an artificial mind based on our brains should work, he presents vague, hopeful overviews.

Here’s the problem. Using fMRI, we can identify which parts of the brain seem to be involved in a particular process. If we see a certain cortex light up every time we test a very specific skill in every test subject, it’s probably a safe bet that this cortex has something to do with the skill in question. However, we can’t say with 100% certainty that this cortex is responsible for the skill, because it doesn’t work in a vacuum. There are close to a hundred billion neurons in the brain, and at any given time, the vast majority of them are doing something. It would seem bizarre to take the skin-deep look that fMRI offers and draw sweeping conclusions without taking the constantly buzzing brain cells around an active area into account. How involved are they? How deep does a particular thought process go? What other nodes are involved? How much of that activity is noise and how much is signal? We’re just not sure. Neurons are so numerous and so active that tracing the entire connectome is a daunting task, especially when we consider that every connectome is unique, albeit with very general similarities across species.

We know enough to point to areas we think play key roles, but we also know that those areas can and do overlap, which means we don’t necessarily have the full picture of how the brain carries out complex processes. But that doesn’t give Kurzweil pause as he boldly tries to explain how a computer would handle some sort of classification or behavioral task, arguing that since the brain can be separated into sections, it should behave in much the same way. And since a brain and a computer could tackle the problem in a similar manner, he continues, we could swap out a certain part of the brain and replace it with a computer analog. This is how you would tend to go about something so complex in a sci-fi movie based on speculative articles about the inner workings of the brain, but certainly not how you’d actually do it in the real world, where brains are messy structures that evolved to be good at cognition, not compartmentalized machines with discrete problem-solving functions for each module. Just because they’ve regularly been presented as such over the last few years doesn’t mean they are.

Reverse-engineering the brain would be an amazing feat, and there’s certainly a lot of excellent neuroscience being done. But if anything, this new research shows how complex the mind really is and how erroneous it is to simply assume that an fMRI blotch tells us the whole story. Those who actually do the research and study cognition certainly understand the caveats in the basic maps of brain function used today, but a lot of popular, high-profile neuroscience writers simply go for broke with bold, categorical statements about which part of the brain does what and how we could manipulate or even improve it, citing just a few still-speculative studies in support. Kurzweil is no different. Backed by papers that describe something he can use in support of his view of the human brain as an imperfect analog computer defined by the genome, he gives his readers the impression that we know a lot more than we really do and can take steps beyond those we can realistically take. But then again, keep in mind that Kurzweil’s goal is to make it to the year 2045, when he believes computers will make humans immortal, and at 64, he’s certainly acutely aware of his own mortality and needs to stay optimistic about his future…