Archives For technology

humanoid robot

With easy, cheap access to cloud computing, a number of popular artificial intelligence models computer scientists have wanted to put to the test for decades are finally able to summon the necessary oomph to drive cars and perform sophisticated pattern recognition and classification tasks. With these new probabilistic approaches, we’re on the verge of having robotic assistants, soldiers, and software able to talk to us and help us process mountains of raw data based not on code we enter, but on the questions we ask as we play with the output. But with that immense power come potential dangers which have alarmed a noteworthy number of engineers and computer scientists, sending them wondering aloud how to build artificial minds that hold values similar to ours and see the world enough like we do to avoid harming us by accident, or even worse, by their own independent decision after coming to see us as “in the way” of their task.

Their ideas on how to do that are quite sound, if exaggerated somewhat to catch the eye of the media and encourage interested non-experts to take this seriously, and they’re not thinking of some sort of Terminator-style or even Singularitarian scenario, but of how to educate an artificial intelligence on our human habits. But the flaw I see in their plans has nothing to do with how to train computers. Ultimately, an AI will do what its creator wills it to do. If its creator is hell-bent on wreaking havoc, there’s nothing we can do other than stop him or her from creating it. We can’t assume that everyone wants a docile, friendly, helpful AI system. I’m sure they realize this, but everything I’ve found so far on the subject ignores bad actors. Perhaps it’s because they’re well aware that the technology itself is neutral and the intent of the user is everything. But it’s easier to focus on technical safeguards than on how to stop criminals and megalomaniacs…

fish kung fu

Robots and software are steadily displacing more and more workers. We’ve known this for the last decade as automation picked up the pace and entire professions began facing obsolescence under the relentless march of the machines. But surely there are safe, creative careers no robot could ever take over. Say, for example, cooking. Can a machine write an original cookbook and create a step-by-step guide for another robot to perfectly replicate the recipe every time, on demand? Oh, it can. Well, damn. There go line cooks at some point in the foreseeable future. Really, can any mass market job not somehow dealing with making, modifying, and maintaining our machines and software be safe from automation? Sadly, the answer to that question seems to be a pretty clear and resounding “no,” as we’ve started hooking up our robots to the cloud to finally free them of the computational limits that held them back from their full potential. But what does this mean for us? Do we have to build a new post-industrial society?

Over the last century or so, we’ve gotten used to a factory work model. We report to the office, the factory floor, or a work site, spend a certain number of hours doing the job, go home, then get up in the morning and do it all over again, day after day, year after year. We’ve based virtually all of Western society on this work cycle. Now that an end to it is in sight, we don’t know how we’re going to deal with that. Not everybody can be an artisan or an artist, and not everyone can perform a task so specialized that building robots to do it instead would be too expensive and time consuming to be worth it. What happens when robots build every house, dirt cheap RFID tags on products and cloud-based payment systems have made cashiers unnecessary, and smart kiosks and shelf-stocking robots have replaced the last retail odd job?

As a professional techie, I’m writing this from a rather privileged position. Jobs like mine can’t really go away since they’re responsible for the smarter software and hardware. There have been rumors for years about software that can write software and robots that can build other robots, and while we actually do have all this technology already, a steady expert hand is still a necessity, and always will be, since making these things is more of an art than a science. I can also see plenty of high end businesses and professions where human to human relationships are essential holding out just fine. But my concern is best summarized as First World nations turning into country-sized versions of San Francisco, a city that doesn’t know how to adapt to a post-industrial future: massive income inequality, insanely priced and seldom available housing, and a culture that encourages class-based self-segregation.

The only ways I see out of this dire future are either unrolling a wider social safety net (a political no-no that would never survive conservative fury), or making education cost almost nothing so workers can be retrained on the fly (a political win-win that never gets funded). We don’t really have much time to debate this and do nothing. This painful adjustment has been underway for more than five years now and we’ve been sitting on our hands letting it happen. It’s most acute on the coasts, especially here on the West Coast, but it’s been making a mess of factories and suburbs in the Midwest and the South as well. When robots are writing cookbooks and making lobster bisque that even competition-winning chefs praise as superior to their own creations, it’s time to tackle this problem instead of just talking about how we’re going to talk about a solution.

[ illustration by Andre Kutscherauer ]

police graffiti

Ignorance of the law is no excuse, we’re told, when we try to defend ourselves after getting busted by saying that we had no idea a law existed or worked the way it did. But what if not even the courts actually know whether you broke a law, or the law is so vague, or based on such erroneous ideas of what’s actually being regulated, that your punishment, if you’re sentenced to one at all, is guaranteed to be more or less arbitrary? This is what an article over at the Atlantic about two cases taken on by the Supreme Court dives into, asking if there will be a decision that allows vague laws to be struck down as invalid because they can’t be properly enforced and rely on the courts to do lawmakers’ jobs. Yes, it’s the courts’ job to interpret the law, but if a law is so unclear that a room full of judges can’t agree on what it’s actually trying to do and how, enforcing it would require legislating from the bench, a practice which runs afoul of the Constitution’s stern insistence on separation of powers in government.

Now, the article itself deals mostly with the question of how vague is too vague for a judge to be unable to understand what the law really says, which, while important in its own right, is suited a lot better to a law or poli-sci blog than a pop science and tech one. But it also bumps into the problem of poor understanding of science and technology creating vague laws intended to prevent criminals from getting off on a technicality. Specifically, in the case of McFadden v. United States, lawmakers didn’t want someone caught manufacturing and selling a designer drug to be able to admit that he does indeed make and sell it, yet walk away because one slight chemical difference between what’s made in his lab and the banned substance technically puts him within the law, leaving prosecutors pretty much no choice but to drop the matter. So they created a law which says that a chemical substance “substantially similar” to something illegal is also, by default, illegal. Prosecutors now have legal leverage to bring a case, but chemists say they could be charged with making an illegal drug on a whim if someone finds out that a compound they work with can be used to get high.

Think of it as the Drug War equivalent of a trial by the Food Babe. One property of a chemical, taken out of context, compared to a drug that has some similarity to the chemical in question in the eyes of the court, but instead of being flooded with angry tweets and Facebook messages from people who napped through middle school chemistry, there are decades of jail time to look forward to at the end of the whole thing. Scary, right? No wonder the Supreme Court wants to take another look at the law and possibly invalidate it. Making the Drug War even more expensive and filling jails with even more people would make it an even greater disaster than it already is, especially when you’re filling them with people who didn’t even know they were breaking the law, sentenced by judges more worried about how they were going to get reelected than whether the law was sound and the punishment fair and deserved. Contrary to the popular belief of angry mobs, you can get too tough on crime.

But if you think that because you’re not a chemist you’re safe from this vague, predatory overreach, you are very wrong, especially if you’re in the tech field, specifically web development, if the Computer Fraud and Abuse Act, or the CFAA, has anything to say about it. Something as innocuous as a typo in the address bar uncovering a security flaw which you report right away can land you in legal hot water under its American and international permutations. It’s the same law which may well have helped drive Aaron Swartz to suicide. And it gets even worse when a hack you find and want to disclose gives a major corporation grief. Under the CFAA, seeing data you weren’t supposed to see by design is a crime, even if you make no use of it and warn the gatekeepers that someone else could see it too. Technically, that data has to be involved in some commercial or financial activity to qualify as a violation of the law, but the vagueness of the act means that virtually all online activity could fall under this designation. So as it stands, the law gives companies legal cover to call finding their complete lack of any security a malicious, criminal activity.

And this is why so many people like me harp on the danger of letting lawyers go wild with laws, budgets, and goal-setting when it comes to science and technology. If they don’t understand a topic on which they’re legislating, or are outright antagonistic towards it, we get not just the typical setbacks to basic research and underfunded labs, but also laws born of a very strong desire to do something without enough understanding of the problem to deal with it in a sane and meaningful way. It’s true of chemistry, computers, and a whole host of other subjects requiring specialized knowledge we apparently feel confident that lawyers, business managers, and lifelong political operatives will be zapped with when they enter Congress. We can tell ourselves the comforting lie that surely they would consult someone before making these laws since that’s the job, or we can look at the reality of what actually happens: lobbyists with pre-written bills and blind ambition produce laws that we can’t interpret or properly enforce, and which criminalize things that shouldn’t be illegal.

quantified self

With the explosion in fitness trackers and mobile apps that want to help manage everything from weight loss to pregnancy, there’s already a small panic brewing as technology critics worry that insurance companies will require you to wear devices that track your health, playing around with your premiums based on how well or how badly you take care of yourself. As the current leader of the reverse Singularitarians, Evgeny Morozov, argues, the new idea of the quantified self is a minefield being created with little thought about the consequences. Certainly there is a potential for abuse of very personal health metrics and Morozov is at his best when he explains how naive techno-utopians don’t understand how they come off, and how the reality of how their tools have been used in the wild differs drastically from their vision, so his fear is not completely unfounded or downright reflexive, like some of his latest pieces have been. But in the case of the quantified self idea being applied to our healthcare, the benefits are more likely to outweigh the risks.

One of the reasons why healthcare in the United States is so incredibly expensive is the lack of focus on preventive medicine. Health problems are allowed to fester until they become simply too bothersome to ignore, a battery of expensive tests is ordered, and usually expensive acute treatments are administered. Had those problems been caught in time, the treatments would not have to be so intensive, and if ample, trustworthy biometric information were available to the attending doctors, there wouldn’t need to be as much testing to arrive at an accurate diagnosis. Since many doctors grumble about oceans of paperwork, the logistics of testing, and the inability to really talk to patients in the standard 15 minute visit, why not use devices that would help with the paperwork and do a great deal of preliminary research for them before they ever see the patient? And yes, the devices would have to be able to gather data by themselves, because we often tell little white lies about how active we are and how well we eat, even when both we and our doctors know that we’re lying. That only hurts us in the end by making the doctors’ work more difficult.

That brings us full circle to health insurance premiums and requirements to wear these devices to keep our coverage. Certainly it’s kind of creepy that there would be so much data about us so readily available to insurance companies, but here’s the thing: they already have this data from your doctors and can access it whenever they want in the course of processing your claim. With biometric trackers and loggers, they could do the smart and profitable thing and, instead of using a statistical model generated from a hodgepodge of claim notes, take advantage of the real time data coming in to send you to the doctor when a health problem is detected. They pay less for a less acute treatment plan, you feel healthier and have some peace of mind that you’re now less likely to be caught by surprise by some nasty disease or condition, and your premiums won’t be hiked as much since the insurers now have higher margins and can stave off rebellions from big and small companies who’ll now have more coverage choices built around smart health data. And all this isn’t even mentioning the bonanza for researchers and policy experts who could get a big picture view from what would be the most massive health study ever conducted.

How many times have you read a study touting the health benefits of eating berries and jogging one week, only to read another that promotes eating nuts and says jogging is pointless, with the different conclusions coming as a result of different sample sizes and subjects involved in the studies? Well, here, scientists could collect tens of millions of anonymized records and do very thorough modeling based on uniform data sets from real people, and find out what actually works and for whom when it comes to achieving fitness and weight loss goals. Couple more data and more intelligent policy with the potential for economic gain and the gamification offered by fitness trackers, and you end up with saner healthcare costs, a new focus on preventing and maintaining rather than diagnosing and treating, fewer sick days, and longer average lifespans as a side effect of being sick less often and being encouraged to stay active and fit. That’s a very compelling argument for letting insurance companies put medical trackers on you and build a new business model around them and the data they collect. It will pay off in the long run.
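As a rough sketch of what that kind of modeling could look like, here’s a toy example in Python; the field names and the handful of records are hypothetical stand-ins for the uniform, anonymized data sets described above, and a real analysis would obviously involve far more variables and far better statistics.

from collections import defaultdict
from statistics import mean

records = [  # hypothetical anonymized tracker records with uniform fields
    {"daily_steps": 3200, "weight_change_kg": +1.4},
    {"daily_steps": 7400, "weight_change_kg": -0.8},
    {"daily_steps": 11800, "weight_change_kg": -2.1},
    # ...tens of millions more in a real data set
]

def activity_bucket(steps):
    # Crude grouping by average daily step count.
    return "low" if steps < 5000 else "moderate" if steps < 10000 else "high"

by_activity = defaultdict(list)
for r in records:
    by_activity[activity_bucket(r["daily_steps"])].append(r["weight_change_kg"])

for level, changes in by_activity.items():
    print(level, round(mean(changes), 2), "kg average change, n =", len(changes))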

server rack

Yes, I know, it’s been a while since my last post, but life has a way of getting in the way of steady, regular blogging. And of course there’s still the work on Project X on the horizon, which will affect what happens to Weird Things, but more on that in due time. Today’s topic is one I heavily debated with myself before addressing, because it’s been a near constant drumbeat in the news and the coverage has been almost overwhelmingly tilted towards setting the outrage dial all the way to 11 and tearing the knob off. I’m talking about the family of NSA surveillance programs for monitoring the internet and intercepting immense amounts of traffic and metadata, of course. As the revelations have been dropped on a regular schedule, the outrage keeps getting louder. In the techie media, the most prominent reaction is "how could they?" According to online activists, the internet exists for the free exchange of ideas and as a way to speak truth to power when need be, so the NSA’s snooping is a violation of the principles on which the internet was built.

Unfortunately, that’s just a soothing fantasy we tell ourselves today. Originally, the internet was developed as a means to exchange information between military researchers, and Tor, the go-to tool for at least partial online anonymity (unless you get a nasty virus), was developed by the U.S. Navy to hide the tell-tale signs of electronic eavesdropping via onion routing until it was spun off by the EFF. And while the web was meant to share scientific data for CERN over a very user unfriendly network at the time, it was given its near-ubiquity by big companies which didn’t adopt the technology and write browsers out of the goodness of their hearts and a desire to make the world into one big, global family, but because they wanted to make money. The internet was built to make classified and complex research easier, tamed for profit, and is delivered via a vast infrastructure worth many billions, operated by massive businesses firmly within the grasp of a big government agency. It was never meant for world peace, anonymity, and public debate.

Now, don’t get me wrong, it’s great that we can give political dissidents voices and promote ideas for peace and cooperation across the world at nearly the speed of light. We should be doing as much of that as possible. But my point is that this is not the primary function of the system, even if this is what cyber-anarchists and idealistic start-up owners in the Bay Area tell you. It’s a side effect. So when massive companies give data flying through the web to spy agencies on request, and even accept payment for it, we’re seeing the entities that built the system using it to further their own goals, and to comply with the orders of governments that have the power to bring them down if they want. It’s not fair, but picking a fight with the NSA is kind of like declaring that you’re going to play chicken with a nuclear aircraft carrier while paddling a canoe. At best, they’ll be amused. At worst, they’ll sink you with nary an effort. Wikipedia can encrypt all of its traffic as a form of protest, but a) the NSA really doesn’t care about how many summaries of comic book character plot lines you read, and b) if it suddenly starts caring, it’ll find a way to spy on you. It’s basically the agency’s job, and we’ve known it’s been doing that since 2006.

For all the outrage about the NSA, we need to focus on the most important problems with what’s going on. We have an agency which snoops on everyone and everything, passively storing data to use if you catch its attention and it decides you merit a deep dive into a database that holds every significant electronic communication you’ve had for the last decade or so. This is great if you’re trying to catch spies or would-be terrorists (but come on, people, more than likely spies, based on the infrastructure being brought into focus), but it also runs against the rights to due process and protection from warrantless, suspicionless searches and seizures. Blaming the legal departments of Microsoft, Google, and Yahoo for complying with official orders is useless, and pretending that an information exchange network built to make money and maintained by a consortium of profit-minded groups is somehow a bastion of freedom being corrupted by the evil maws of the U.S. government just seems hopelessly naive. Americans don’t like to think of their country as a global hegemon just doing what global hegemons do and using its might to secure its interests. They like to think of it as having a higher calling. For them, reality bites.

But again, the sad truth is that this is exactly what’s going on. While transparency activists loose their fury and anger in the media and on the web, realpolitik is relentlessly brutal, treating entire nations exactly like pawns on a chessboard. For all the whistleblowing of the past five years, not that much of the leaked information was really that shocking. It just confirmed our fears that the world is run by big egos, that cooperation is few and far between, and that as one nation aims to become the next global hegemon, the current one is preparing for a siege and quietly readying a vast array of resources to maintain its dominance, if not economic, then military and political. On top of that, rather than being elected or asked to rise into its current position, it chose to police much of the planet and now finds itself stuck where it doesn’t want to be. We know all this, and a great deal of it is taught in history class nowadays. We just don’t really want to deal with it, and the fits of rage towards corporations and government agencies somehow corrupting the system they built for power and profit seem to be our reaction to having to confront these facts after the last whistle was blown. Sadly, we don’t get the world we want, we get the one we really build.

censorship ad

Policy wonks, like most people, tend to think of IT as a magical black box which takes requests, does something, and makes their computers do what they want, or at least somewhat close to it. And so it’s not really surprising to see Ronan Farrow and Shamila Chaudhary rail against major cybersecurity companies at Foreign Policy for enabling dictators to block internet content, with allegations that show how poorly they understand what these companies do and how virtually all of the products they make work. You see, blaming a tech company for censorship is kind of like blaming a car manufacturer for drunk drivers. Certainly their tools are intended to block content, but they’re not designed to filter all undesirables from a centralized location to which a dictator can submit a request. They’re meant to analyze and block traffic coming from malicious sources to prevent malware, and any time you can analyze and stop traffic, you can abuse that ability and start blocking legitimate sites just because you don’t like them or the people who run them.

Most of the software they cited is meant to secure corporate networks, and if it can no longer stop or scan data, it’s pretty much useless because it can’t do threat identification or mitigation. Websense does filter content and uses a centralized database cluster to push how it classifies sites to its customers, so, as Farrow and Chaudhary noted, it was able to change a few things to help mitigate its abuse by authoritarians. But McAfee and others are in a tougher spot because they’ve simply sold a software license to network admins. Other than virus and botnet definitions, there’s not much they can control from a central location, and trying to shame a company for selling tools made for something entirely different puts it in a position in which it would be very hard to defend its actions to someone convinced that it can just flip a switch and end the digital reign of tyranny across the world. And it’s even worse when the first reactions to articles about the abuse of their wares blame them for just being greedy.

On top of that, it’s not exactly hard to write your own filters and deep packet inspection tools. It’s just difficult to scale them for millions of users, but that’s nothing out of the authoritarians’ reach. As they spend billions on security and control, surely they could divert a couple of million to build a capable system of their own. In fact, the Great Firewall of China is mostly home-grown and uses the country’s ISPs to scan incoming and outgoing traffic on a daily basis to find what to block. It sounds like a powerful indictment to point out that the Chinese use Cisco routers in their system, but it’s not as if they outsourced the task of pinging and blocking Tor nodes to the company. To fairly charge tech companies with aiding and abetting censorship, you’d have to be talking about search engines that agree to modify their functionality to get a toehold in markets ruled over by authoritarians who would simply get someone else to censor searches if not the company trying to expand. Bottom line: dictators will find a way to censor what they want to censor. If they use network monitoring security tools to do it, the blame still rests with them.
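To show just how low that technical bar really is, here’s a minimal Python sketch of the kind of naive hostname blocklist that sits at the conceptual core of most filtering; the domain names and the response strings are hypothetical placeholders, and a real national firewall adds scale and traffic parsing rather than any deep sophistication.

# A minimal sketch of hostname-based filtering, assuming a hypothetical blocklist.
BLOCKED_DOMAINS = {"example-dissident-blog.org", "forbidden-news.example"}

def is_blocked(hostname: str) -> bool:
    # Match the listed domain itself and any of its subdomains.
    hostname = hostname.lower().rstrip(".")
    return any(hostname == d or hostname.endswith("." + d) for d in BLOCKED_DOMAINS)

def handle_request(hostname: str) -> str:
    # Stand-in for what a filtering proxy would return to the client.
    return "451 Unavailable For Legal Reasons" if is_blocked(hostname) else "200 OK"

print(handle_request("news.forbidden-news.example"))  # -> 451 Unavailable For Legal Reasons
print(handle_request("weather.example.com"))          # -> 200 OK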

quantum chip

Quantum computers are slowly but surely arriving, and while they won’t be able to create brand new synthetic intelligences where modern computers have failed, or even be faster for most tasks typical users will need to execute, they’ll be very useful in certain key areas of computing as we know it today. These machines aren’t being created as a permanent replacement for your laptop, but to solve what are known as BQP problems, which will help your existing devices and their direct descendants run more securely and efficiently route torrents of data from the digital clouds. In computational complexity theory, BQP problems are decision problems that can be solved in polynomial time, with a bounded probability of error, when superposition and quantum entanglement are an option for the device. Or to translate that to English: binary, yes/no problems that we could solve pretty efficiently if we could use quantum phenomena. The increase in speed comes not from making faster CPUs or GPUs, or creating ever larger clusters of them, but from implementing brand new logical paradigms in your programs. And to make that easier, a new language was created.

In classical computing, if we wanted to do factorization, we would create our algorithms, then call on them with an input, or a range of inputs if we wanted to parallelize the calculations. So in high level languages you’d create a function or a method using the inputs as arguments, then call it when you need it. But on a quantum computer, you’d be building a circuit made of qubits to read your input and make a decision, then collecting the output of the circuit and carrying on. If you wanted to do your factorization on a quantum computer (and trust me, you really, really do), you would use Shor’s algorithm, which gets a quantum circuit to run through countless possible results and pick out the answer you wanted with a specialized function for this task. So how should you set up a quantum circuit so you can treat it like any other method or function in your programs? It’s a pretty low level task that can get really hairy. That’s where Quipper comes in handy, helping you build a quantum circuit and know what to expect from it, abstracting just enough of the nitty-gritty to keep you focused on the big picture logic of what you’re doing.
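To make the “circuit you call like a function” idea concrete, here’s a minimal sketch in plain Python rather than Quipper; the function names and the single-qubit state-vector simulation are my own illustration, on the assumption that one simulated qubit is enough to show the prepare-gate-measure pattern.

import random
from math import sqrt

def new_qubit():
    # One qubit as a pair of amplitudes for |0> and |1>, starting in |0>.
    return [1.0, 0.0]

def hadamard(state):
    # The Hadamard gate puts |0> into an equal superposition of |0> and |1>.
    a, b = state
    h = 1 / sqrt(2)
    return [h * (a + b), h * (a - b)]

def measure(state):
    # Probability of reading 0 is the squared amplitude; collapse to a classical bit.
    return 0 if random.random() < state[0] ** 2 else 1

def quantum_coin_flip():
    # The whole "circuit": prepare, apply a gate, measure -- then call it like any function.
    return measure(hadamard(new_qubit()))

print([quantum_coin_flip() for _ in range(10)])  # a random-looking string of 0s and 1s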

It’s an embedded language, meaning that what it does is handled by an interpreter that translates the scripts into its host language’s code before that is turned into something the machine it runs on can understand. In Quipper’s case, the underlying host language is Haskell, which explains why so much of its syntax looks a lot like Haskell, with the exception of the types that define the quantum circuits you’re trying to build. Although Haskell never really got that much traction in a lot of applications and the developer community is not exactly vast, I can certainly see Quipper being used to create cryptographic systems or quantum routing protocols for huge data centers, kind of like Erlang is used by many telecommunications companies to route call and texting data around their networks. It also raises the idea that one could envision creating quantum circuitry in other languages, like a QuantumCircuit class in C#, Python, or Java, or maybe a quantum_ajax() function call in PHP along with a QuantumSession object. And that is the real importance of the initiative by Quipper’s creators: it’s taking that step to add quantum logic to our computing.
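Purely as an illustration of that last thought (the class, its methods, and the simulated backend below are invented for this sketch, not part of Quipper or any existing library), such a QuantumCircuit wrapper in Python might look something like this, with a tiny two-qubit simulator standing in for real hardware.

import random
from math import sqrt

class QuantumCircuit:
    """Hypothetical wrapper: build a two-qubit circuit, then run it like any other object."""
    def __init__(self):
        # Amplitudes for the basis states |00>, |01>, |10>, |11>.
        self.state = [1.0, 0.0, 0.0, 0.0]

    def h0(self):
        # Hadamard on the first qubit.
        s = 1 / sqrt(2)
        a00, a01, a10, a11 = self.state
        self.state = [s * (a00 + a10), s * (a01 + a11), s * (a00 - a10), s * (a01 - a11)]
        return self

    def cnot(self):
        # Flip the second qubit when the first is 1; this entangles the pair.
        a00, a01, a10, a11 = self.state
        self.state = [a00, a01, a11, a10]
        return self

    def measure(self):
        # Sample a basis state according to the squared amplitudes.
        r, total = random.random(), 0.0
        for i, amp in enumerate(self.state):
            total += amp * amp
            if r < total:
                return format(i, "02b")
        return "11"

# A Bell pair: the two measured bits always come out correlated ("00" or "11").
print([QuantumCircuit().h0().cnot().measure() for _ in range(5)])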

Maybe one day quantum computers will direct secure traffic between vast data centers, giving programmers an API adopted as a common library in the languages they use, so it’s easy for a powerful web application to securely process large amounts of data with only a few lines of code calling on a quantum algorithm to scramble passwords and session data, or to query far off servers with less lag (if those servers don’t implement that functionality on lower layers of the OSI Model already). They could train and run vast convolutional neural networks for OCR, swiftly digitizing entire libraries’ worth of books, notes, and handwritten documents with far fewer errors than modern systems, and help you manage unruly terabytes of photos spread across a social networking site or a home network by identifying similar images for tagging and organization. If we kept going, we could probably think of a thousand more uses for injecting quantum logic into our digital lives. And in this process, Quipper would be our jumping-off point, a project which shows how easily we could wrap the weird world of quantum mechanics into a classical program and reap the benefits of the results. It’s a great idea and, hopefully, a sign of big things to come.

surveillance camera array

On the one hand, I am only somewhat surprised by recent revelations about exactly how much we’re being watched on the internet by the NSA; the bigger surprise for me is that they couldn’t get data from Twitter. Considering that the agency is building an immense data center in Utah and works with tech companies on a regular basis, is it really that astonishing that it’s browsing through our communications metadata on a regular basis? We all suspected this was the case, so if anything, the current furor is almost a required reaction of anger and hurt at finding out that what we always thought was happening, and really didn’t want to be happening, actually is. The question is what to do now, in the PRISM-aware world. Citizens know they’re being caught up in the dragnet when they’re just going about their day, foreign companies are afraid of the NSA spying on them via the advanced cloud technology the United States sells across the globe, and China can sit back and laugh off American reports of its hacking and spying on the web as hypocrisy.

Another fun fact is that Americans are actually split on how they feel about the NSA’s snooping, with a majority of 56% saying that privacy is an acceptable casualty in trying to catch terrorists. It might also be telling that the split hasn’t changed much since 2006 and that it breaks down along distinct partisan lines, with liberals and conservatives flip-flopping on the issue depending on which party holds the White House. So while the press is incensed and investigative reporters are falling all over themselves to talk about PRISM, the American people are shrugging it off by party affiliation. I would expect everyone to carry on as normal, because if Facebook and Google didn’t see a mass exodus of accounts by now, it’s very unlikely they will. Plus, the NSA isn’t reading all the e-mail in your inbox. It just has a record of you e-mailing someone at a given time, and if you are in the United States, your phone number and e-mail should be crossed out in its system, until, of course, a secret court order grants an analyst access to the whole e-mail.

Even the slowdown in purchases of American high tech gear is likely to be temporary, because much of what we’re hearing from other countries is an almost mandatory response to the revelations about PRISM. In reality, many of the countries buying these tech products have very extensive spy networks of their own and engage in cyber-espionage on a daily basis. It’s the pot calling the kettle black, and it’s likely that the rumors of tech companies giving the NSA back door access into their servers are just not true. There are a number of ways to supply data to the NSA and a number of ways the NSA could’ve gotten the data itself. I’m not going to speculate how in this post because a) I don’t know the agency’s exact capabilities, b) there are people from both defense contractors and military agencies reading this blog whom I’d just annoy with my speculation, and c) most of the scenarios are probably much worse than having the companies simply play ball when a court order comes down and an incredibly powerful agency is knocking on their door.

Now, none of this means this isn’t a big deal. But what it does signal is that the country which dominates the world in the tech field and serves as the key node in the global communications grid has been crying wolf about cyberwarfare and espionage while actively waging it. We started to suspect this when Stuxnet was discovered, suspected it even more strongly when all of its ingenious siblings like Flame and Duqu floated into the spotlight, had a good idea that the United States was publicly holding back when reports of its potential in cyberwarfare drills with allied nations started surfacing, and with PRISM, we now know it for a fact. On the one hand, it’s bad news because your privacy is now compromised not only by bad security or very lax internal policies of web giants, but by the government as well. On the other, we know that we’re hardly defenseless in the cyber realm and will fight and spy right back. Make of these facts what you will. It’s not like we can put this genie back in its virtual bottle anyway…

approach to mars

According to Wired’s laundry list of technical and political issues with getting humans to Mars by the year 2030 or so, exploring another planet many millions of miles away won’t be Apollo 2.0 in many ways. It will be an order of magnitude more expensive per launch, require 30 months for a round trip, and need to be financed, overseen, and executed by an international group that will include space agencies and ambitious aerospace companies with plans and launch vehicles of their own. And yet the designs being drawn up sound remarkably like Apollo on steroids. We’re basically working with the same basic mission plans we had in the 1980s, with a few workarounds for handling fuel and oxygen. Come on, folks, this is another planet. It’s not just a status symbol, and we don’t need to rush there just to say we went. Really, we don’t. Flag planting is great for propaganda and PR purposes, but it’s disastrous for long term exploration, which needs to be a very boring, consistent, and, yes, expensive effort. We need a better plan than this.

Now, as much as this blog will support my assertion that I’m all about space exploration and will go as far as to advocate augmenting humans to travel into deep space (which led to numerous arguments with the Singularity Institute’s fellows), we don’t have to go to Mars as soon as we’re able to launch. It’s been there for 4.5 billion years. It’s not going anywhere for at least another five billion, and we owe it to ourselves to do it right. This is why instead of sending a much bigger capsule or an updated ISS for a 30 month round trip, we need to send inflatable, rotating space stations powered by small nuclear reactors. Instead of landers, we need to send self-assembling habitats. Instead of going to Mars to stick a flag into the ground, collect rocks, and do some very brief and limited experiments to look for traces of organic compounds, we need to commit to an outright colonization effort, and we need to test the basics on the Moon before we go. We won’t fulfill our dreams of roaming the stars and living on alien worlds if we don’t get this right.

Yes, it sounds downright crazy to propose something like that, especially in today’s political climate. And it is. But at the risk of repeating myself, when we have trillions for banks to erase their bad bets from the books and nothing to aid the paltry budgets of space agencies or labs working on the technology of the future, the issue isn’t money. It’s priorities, vision, and will, and today’s politicians have the first one skewed, and more often than not either lack the other two, or envision our society going backwards as if that were a good thing. And we can keep right on placating ourselves by saying that we’ll at least get to roam around the solar system a bit like we did once, but that’s not how we should be exploring space. We know it’s not. If you want to really reach out into space, you go in for the long term with your eye on the spin-offs and benefits that will rain down from massive, ambitious, integrated projects that try to do what’s never been done before, not by reinventing the wheel, but by attaching said wheel to a new airplane.

newlyweds

There’s been a bit of a splash made by a new study which says that meeting your spouse online could mean a longer, happier marriage, and confirms that far from being the last refuge of lonely shut-ins, online dating is now one of the top ways to meet your mate. Now, the numbers do bear this conclusion out. Out of a representative sample of 19,131 people, the researchers found that a couple that met online is 28% less likely to divorce than couples who met offline, and that the happiest marriages start with a meeting in MMORPGs and on social networks. However, and you knew this was coming, the differences are statistically significant but far from huge, and there are several caveats to taking the findings too much to heart, caveats which result directly from the study’s design. Basically, the researchers collected some demographic information from subjects married between 2005 and 2012, asked how each subject met his or her spouse and how happy the marriage seems, then looked for any statistically notable trends to emerge.

Here’s what the data mining found. A smidgen more than a third of the subjects (35%) married a person they met online. Half of those meetings happened on a dating site, usually eHarmony or Match.com, which each claim a quarter of these dating site meetups. So if you’re looking to get into a serious relationship or get married, those sites are probably a very good bet. Likewise, a few very interesting data points jump out from the results. The better educated and more gainfully employed you are, the more likely you are to meet a serious partner online. Those who earn at least $75,000 per year and have a college education account for some 57% of relationships that started online. Oddly enough, those with graduate degrees have the lowest share of marriages to partners they met online, under 15% of the total. The data doesn’t show why, but I would be interested in figuring this out. Why is this finding so worthy of attention? Because it may have a connection to so-called leisure inequality and tell us more about why online dating has grown so much in the last decade or so. But I digress. Now, what about that marriage satisfaction?

Well, again, the numbers do show that people who married their online friends report a better marriage, especially those who met playing online games or on social networks (which could or could not include dating sites; the paper isn’t specific on this). On a scale of 1 to 7, with 1 being the equivalent of "I’m ready to file for divorce this second" and 7 being "this marriage is perfect," these subjects reported an average satisfaction score of 5.72, which is pretty damn good. But the most miserable married couples, those who met offline in a bar or through a blind date, still post a very respectable 5.35 average. Yes, the online couples are happy and they’re happier than many other couples, but not by leaps and bounds. Could you really tell the difference between 5.35 happy and 5.72 happy when general contentment sits around 3.5? If we indulge in paraphrasing Futurama, these researchers are technically correct, the best kind of correct in science. But practically, they just found close to 20,000 happy couples, a third of whom just so happened to have met online and gotten married in a certain time range.
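For a rough sense of why a gap like that can be statistically significant yet practically modest, here’s a back-of-the-envelope sketch in Python; the group sizes and the roughly 1.2-point standard deviation are assumptions I’m making for illustration, not figures from the paper.

from math import sqrt

# Assumed inputs: the reported means plus guessed group sizes and score spread.
mean_online, mean_offline = 5.72, 5.35
n_online, n_offline = 6700, 12400   # hypothetical split of the ~19,000 subjects
sd = 1.2                            # assumed standard deviation of the 1-7 scores

diff = mean_online - mean_offline
standard_error = sqrt(sd**2 / n_online + sd**2 / n_offline)
z = diff / standard_error           # a huge z-score, so easily "significant"
cohens_d = diff / sd                # effect size: only about a third of a standard deviation

print(f"difference = {diff:.2f}, z = {z:.1f}, Cohen's d = {cohens_d:.2f}")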

And that brings us to the biggest caveat with this study. Only 8% of these subjects are divorced, which is both a lot lower than the national average and only indicative of the short term trend. If you look at marriages 10 years out, rather than the seven covered by this survey, the odds of a divorce are about 30% or so. Go 20 years out and the odds increase to 48% on the high end. The sample here just hasn’t been married long enough, and it’s probably a safe assumption that a lot of the subjects were caught in the early phases of their marriages. But the goal of getting married generally tends to be staying married for life, which means roughly half a century, going by typical life expectancy figures. The researchers are, in a sense, catching people a mile or two into a marathon, when a whole lot usually hasn’t happened yet and the biggest bumps in the road are still ahead, getting a general thumbs up from some 92% of the respondents, and splitting hairs about who gave the most enthusiastic thumbs up. True, this doesn’t mean that there’s a problem with a marriage to someone you met online, and yes, maybe these couples are happier. But it’s too soon to tell.

Likewise, we should also point out that marriage rates keep falling and the domestic partner has slowly been becoming the new spouse. After witnessing messy divorces and being confronted by general antipathy for marriage from many sides, a lot of people who would’ve already tied the knot are deciding to forgo the whole affair altogether. Now, this could mean that what the survey captured is a trend of people who do get married staying together longer and being happier while more and more of their peers opt out of married life, balancing out the high divorce rate over the next decade or so, but this is just an idea after looking at the data. Marriage as we are used to it in the modern world is changing. It’s becoming less commonplace, it increasingly involves those who are more financially secure, and alternative households are becoming the new norm. So in light of all these changes, maybe the better question to ask is not what makes for a happier marriage, but what makes for happy long term relationships, or at least what today’s long term relationship looks like from an academic standpoint. Work in that area is only beginning…

See: Cacioppo, J., et al. (2013). Marital satisfaction and break-ups differ across on-line and off-line meeting venues. PNAS. DOI: 10.1073/pnas.1222447110

[ photo illustration by Carlos Zangheri/flickr ]