space designs

Whenever you see interstellar ships in fiction, they’re almost always immense, something close to the size of an aircraft carrier. There are a lot of good reasons for that. Traveling between the stars requires immense amounts of energy, so you’ll need reactors to generate it all, or a huge set of solar sails to keep going, plus shielding from those reactors and from cosmic debris. And because you’re not going to be able to easily diagnose and fix problems light years away from mission control, you’re going to need a crew, which in turn needs living quarters, supplies, and the means to generate and renew air, food, and water. Accelerating all that mass to relativistic velocities is going to be very difficult with anything short of fusion reactors and antimatter, and even then you’re going to be dealing with drag from dust and microscopic debris littered across the universe. Since trying to bend space and time is still only a vaguely theoretical endeavor at best, we’ve come to see interstellar travel as something that’s probably a) best done by machines, b) going to require long periods of planning and waiting, and c) very unlikely to happen in our lifetimes anyway.

Enter billionaire investor Yuri Milner with a $100 million plan to create a proof of concept for an amazing mission to Alpha Centauri that will take only 20 years and be powered by a laser that sounds like something Bond would be assigned to destroy before a genius villain bent on world conquest finishes its construction. To make it happen, he’s going to take a hatchet to the conventional view of an interstellar mission and slash anything that can slow it down. Fuel and power generation? Gone. Crews? Gone. Dust shields? Gone. The only things left are batteries, one solar sail, and a two megapixel camera worse than what you’d find on the cheapest phones you could buy today. In other words, he’s going to create what would be the world’s fastest Razr flip phone and shoot it into space with a multi-megawatt laser. On paper, it seems like a pretty sound plan. Such a huge jolt to a solar sail on a spacecraft weighing a mere few hundred grams would accelerate it very, very effectively, and since it’s such a simple, small device, not much on it can really go wrong, so you don’t need elaborate rescue scenarios or an adventurous crew of experts on board.
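The numbers behind that jolt are easy to sketch. Here’s a minimal back-of-the-envelope calculation in Python; the 100 gigawatt beam and one gram craft are illustrative assumptions in the spirit of the proposal, not official mission specs:

```python
# Back-of-the-envelope photon-pressure math for a laser-driven sail.
# The beam power and craft mass below are assumed, ballpark figures.

C = 299_792_458.0  # speed of light, m/s

def sail_acceleration(beam_power_w: float, mass_kg: float,
                      reflectivity: float = 1.0) -> float:
    """Acceleration from radiation pressure on a sail.

    A perfectly reflective sail gets twice the photon momentum flux:
    F = (1 + reflectivity) * P / c, so a = F / m.
    """
    force = (1.0 + reflectivity) * beam_power_w / C
    return force / mass_kg

beam = 100e9   # 100 GW laser (assumption)
mass = 0.001   # 1 gram spacecraft plus sail (assumption)
a = sail_acceleration(beam, mass)

target_v = 0.2 * C          # a fifth of light speed, the oft-quoted goal
burn_time = target_v / a    # ignoring relativistic corrections and beam divergence

print(f"acceleration: {a:,.0f} m/s^2 (~{a/9.81:,.0f} g)")
print(f"time under the beam to reach 0.2c: {burn_time:.0f} s")
```

Even with generous rounding, a gram-scale craft under a beam like that pulls tens of thousands of g and reaches a fifth of light speed in minutes, which is exactly why slashing mass is so tempting.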

Unfortunately, the devil is in the details, his preferred hiding spot. One of the biggest problems any interstellar probe would face is collisions with the high energy particles and dust that make up the interplanetary and interstellar mediums. For a fairly hefty ship, this dust and debris wouldn’t be much of a problem until it gets up to half the speed of light, and most particles wouldn’t even register until it’s going 0.95c, far beyond anything Milner expects from his device. However, that assumes a hefty ship rather than a cell phone sized box hurled into deep space. Going by the generally accepted calculations, which account for the fact that the energy of each impact grows with the square of the probe’s velocity as it accelerates, the dust would erode a very painful 20 kg of shielding material. While the math works for accelerating less than a kilogram of spaceship to a significant percentage of the speed of light, it also says that this probe would be shredded into grain sized particles before it even leaves the solar system, since the interplanetary medium it has to traverse as it gains velocity is much denser than what waits beyond. To borrow a phrase, Milner’s gonna need a bigger ship.
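To see why mass matters so much here, consider the energy of a single dust impact, which grows with the square of velocity, and faster still once relativistic corrections kick in. A quick sketch in Python; the grain mass is an assumed order-of-magnitude figure for a large interstellar grain, not a measured value:

```python
# Impact energy of a dust grain hitting a relativistic probe.
# Uses the relativistic kinetic energy (gamma - 1) * m * c^2.
import math

C = 299_792_458.0  # speed of light, m/s

def impact_energy_j(grain_mass_kg: float, speed_fraction_c: float) -> float:
    """Relativistic kinetic energy of a grain at a given fraction of c."""
    beta = speed_fraction_c
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return (gamma - 1.0) * grain_mass_kg * C * C

grain = 1e-17  # kg, an assumed large-ish interstellar dust grain
for frac in (0.05, 0.10, 0.20):
    e = impact_energy_j(grain, frac)
    print(f"{frac:.2f}c -> {e * 1000:.2f} mJ per grain")
```

Doubling the speed roughly quadruples each grain’s punch, so a craft with no shielding budget at all pays the full price of every hit.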

But all that said, if we set our sights on interplanetary travel with larger, crewed ships and build lasers capable of powering their solar sails to the outer solar system and back, this project could really pay off over the long term. Imagine launching inflatable space stations with massive sails that surf our lasers to their destinations, ride them for a slingshot around nearby worlds, and make their way back to Earth. The only problem one could see in this scenario is a political fight over a laser that would put today’s best military technology to shame and be capable of vaporizing satellites innocently orbiting in its path, but that’s a completely different sort of problem than the one we’re trying to solve here. When it comes to interstellar travel, however, a powerful laser and solar sails just aren’t going to be enough. Intuitively, it seems like a no-brainer that the smaller the craft, the faster and farther it can go, while in reality, you’re pretty much doomed without enough heft to counter the rigors of relativistic flight. At least until we invent force fields and can really test them out using Milner’s ultra-lightweight probe…


brainwashed humanoids

Back in the day, we covered the fear of cell phones and wi-fi promoted by people who are very, very confident that whatever electromagnetic waves they put out must cause cancer, or a host of other really, really nasty problems. But it’s not only the alt health crowd that’s terrified of cell phone technology and its emissions; there are numerous conspiracy theories centered around how seemingly benign cell signals are used for mind control or subliminal intelligence gathering. One recent theory alleges that cell towers in Tampa have been hijacked by a sinister group of DARPA operatives with a mission to do something very vague and scary to Floridians, according to a whistleblower who might have once worked for them, as these theories so often seem to go. What the sinister plan is exactly, he doesn’t know, but he cites mind control as an important component, claiming that the cell towers broadcast at a frequency that resonates in the same range as the human mind. Now, this is far from the only such theory floating around the conspiracy internet, but it’s such a textbook example that if we’re going to fact check one, it might as well be our model. And right out of the gate, it’s off to a really, really bad start…

Virtually everything presented to convince us of DARPA’s nefarious plot rests on e-mails from a whistleblower named Paul Batcho, who at some point held the DOE’s equivalent of a top secret clearance and worked at Los Alamos. Nothing bizarre or suspicious in that. More than a million people hold some sort of security clearance nowadays, so it really isn’t much of a stretch to see Princeton alums with PhDs in computer science working for the government with a high level security clearance. In fact, it often happens to comp sci people from top notch colleges, since they do research funded by government agencies that can often deal with secret information related to weapons, intelligence, and infrastructure. What was suspicious, however, was seeing bizarre, disjointed ramblings in the quoted e-mails making claims that even a cursory Google search would quickly flag as ridiculous. For example, to give the notion that cell towers are broadcasting telepathic or mind-altering waves a patina of plausibility, the theory says that human thoughts resonate at 450 MHz, and that this is the secret reason why the FCC bans radio stations from broadcasting in the 400 MHz to 700 MHz frequency range.

To borrow from a commercial, that’s not how this works, that’s not how any of this works. First off, the most active human brain waves peak at around 40 Hz, which is more than ten million times slower than their supposed 450 MHz resonance frequency. Secondly, resonating with these brain waves has no effect on humans because they are artifacts of electro-chemical reactions in our brains. Disrupting them requires direct magnetic or electrical stimulation precisely targeted at the area you want to affect. You can’t just broadcast signals willy-nilly and expect a major effect on the population receiving them. That may have worked in Kingsman, but in reality the best you’ll do is maybe give someone hypersensitive a very mild headache. Maybe. Finally, it’s true that radio stations aren’t allowed by the FCC to broadcast in the 400 MHz to 700 MHz range. Want to know who is? Astronomers, satellites, and aircraft navigation stations. Wouldn’t airplanes flying overhead and radio astronomy dishes mess with our minds and be called out in the cited e-mails? And why would this frequency band be considered in Europe for public safety organizations? Seriously, who is this alarmingly ignorant whistleblower anyway?
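In fact, the mismatch takes one line of arithmetic to expose, using the two frequencies quoted above:

```python
# Gamma-band brain activity peaks around 40 Hz; the theory's magic
# cell tower frequency is 450 MHz. How close a "resonance" is that?
brain_hz = 40.0
claimed_hz = 450e6  # 450 MHz

ratio = claimed_hz / brain_hz
print(f"the claimed frequency is {ratio:,.0f} times higher")
```

Eleven million and change, which is about as far from a resonant match as a tuba is from a dog whistle.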

And that brings us to Dr. Paul Batcho. Did you notice that I really tried to avoid attributing any of this conspiracy theory’s claims directly to him? That’s because after a few minutes of digging, it’s pretty apparent that a man named Paul Batcho exists, that he has a doctorate in comp sci from a respected, prestigious institution, and that he does mathematical research. But he’s not a scientist affiliated with DARPA and he didn’t work for the DOE in Los Alamos. Instead, he’s a trader who designs high performance trading software as an SVP at Citi. His only link to this whole thing is that he lives in Tampa, so what I’ve seen leads me to believe that someone is using his identity to advance his or her own conspiracy theory under the guise of a supposed whistleblower who looks like he fits the bill. After all, he seems like a fairly typical middle aged guy you can picture working in a nondescript office with computers, and he has a background in computer science you can legitimately highlight. But since he’s presenting papers on how to build a better trading algorithm, and his LinkedIn profile strangely omits both a clearance that would be a big asset on his resume and any DOD work (done with a DOE clearance, for some reason) which would also be a big plus to most employers, his DARPA whistleblower status seems highly, highly doubtful.

So let’s recap. We have a theory which claims that cell towers are trying to control the minds of Floridians, rips off many comics and books for its main story, uses a buzzword salad that doesn’t get a single thing about the human brain right, and is based on a claim just one minute of searching shows to be completely wrong. On top of that, the supposed source appears to be a person in no way affiliated with this theory, working as a high profile technical expert in an industry where random, misplaced, rambling e-mails accusing the government of brainwashing people, or reading their minds using the same frequencies as radio telescopes and airplanes, would be a huge image problem, and someone who should know how to anonymously leak very important classified information, considering that many with lesser skill sets have done exactly that. Notice how a real whistleblower with explosive revelations like Snowden leaked top secret documents. He set up encrypted channels to contact serious reporters rather than sending screeds addressed to the entities he held responsible for evil doings to random websites. So if you live or work in or near Tampa, don’t worry. Your cell phone and local towers aren’t out to take over your brain. But do a search for your name once in a while. You never know who may be using it or why…

[ illustration by Lun-acy ]


printed moonbase

Hotel owner and space tourism pioneer Robert Bigelow has a pretty fervent belief that alien life is out there, that it’s intelligent, and that it may be visiting Earth. While most people would make little of the first two ideas, the third, especially his story of supposedly running into a UFO in the middle of the Southwest, prompted many journalists covering his aerospace company to put in plenty of jokes at his expense. As a result, every time an in depth profile of Bigelow and his big plans in Earth’s orbit and beyond appears, there’s an inordinate amount of skepticism injected into discussions of sober and eminently reasonable plans. Yeah, sure, we’re going to trust the guy who thinks aliens are visiting our planet to make space stations and bases on other worlds, it’ll be great, right? Well, actually yeah, sure, let’s have him do exactly that. Creating a very cheap, convenient way to put up self-contained interlocking habitats built to absorb radiation and the swift blows from micrometeorites that ding rigid metal spacecraft is a fantastic endeavor, and having the first direct application of this technology on the Moon makes a whole lot more sense than a flag-planting mission to Mars, which works much better as a logical extension of that effort.

See, the problem with simply skipping ahead to a Mars mission because we’ve already been to the Moon back in the day is that you’re not actually building an infrastructure for future missions that go farther and farther. This increases the cost, because you can’t piggyback on assets already in orbit and deeper in space, and vastly increases the risk, because if things go wrong, there’s no nearby outpost to which you can retreat and survive until someone rescues you, so your escape plans far from home will be very limited. Considering that the Moon is the perfect dress rehearsal for a mission to another planet right in our cosmic backyard, and a very convenient place from which to launch bigger and bigger craft into deep space thanks to its shallow gravity well, going back before we set our sights on Mars isn’t a crazy plan at all. If anything, it’s much, much more conservative and reasonable than anything being dictated to NASA right now. The same thing applies to the design and execution of the inflatable modules. Bigelow didn’t design them himself; he bought the technology, patents, and methods from companies contracted for NASA-backed programs to build exactly what SpaceX just launched to the ISS today.

With all this in mind, can we please stop wondering if Bigelow and his investors and supporters are crazy and overly ambitious? The technology they use was originally created by companies which have been launching things into space for the last 50 years, has been tested over the last three decades, easily survived several launches into orbit, and is designed for a space exploration strategy that’s been kicked around since the 1960s, one based on the slow-and-steady, one-step-at-a-time principle rather than jumping straight into the far, far more complicated world of interplanetary human spaceflight. As of today, we have both reusable rockets and inflatable space habitats, proofs of concept for everything Bigelow would really like to accomplish, and the only things missing are monetary support and political will. We can’t just look at proven, functioning, mature technology and shrug our shoulders in skepticism solely because the guy has a UFO story he likes to tell. Here’s someone who wants to finish an amazing undertaking NASA started and has the tools to do it. We should be helping him rather than constantly reminding everyone that he’s a little eccentric when it comes to astrobiology.


sad calvin

Being an older millennial is both a blessing and a curse. On the positive side, you’re still young enough to see the logic in some of the changes in education and workplaces that send hordes of baby boomers complaining about lazy, entitled young people who do nothing these days, all too often from their work cubicles, on social platforms said lazy do-nothings built into billion-plus dollar empires. On the negative side, you get to hear all the grumbling and flak directed at you as well, even though you’re now a colleague on the same level rather than an office temp. It’s as if just mentioning that media buzzword sends countless people close to 40 and older into a rant whose sole purpose is to tell you how much life sucks, how work is supposed to be awful, how you must work day and night for a reward that might never come, and damn it, will you get some polo shirts and khakis instead of coming into the office in jeans? This is a business, not a hippie commune where you can wear comfortable clothes and be treated like a person! Why, in the old days you came to work in a suit, had a two martini lunch, and slept in your office to ever get ahead in life and make something of yourself, you lazy, good for nothing little twerp…

And so it goes, we’re told. It’s called work because it’s supposed to be grinding, tedious, and altogether terrible. You put in your eight hours, or however long the boss needs you, then you go home to your family where you can wear what you want, say what you want, and enjoy being at a place other than work. You’ll rise up slowly over the years, pay your dues, and get noticed by your strict, no-nonsense, but secretly warm and kind boss who watches your every move and slowly mentors you to take over when he retires. And on the day of his retirement party, he delivers a long, heartfelt speech about how far you’ve come, how he truly considers you the absolute best replacement for him, and how one day you’ll go far. Then he’ll invite you back to his house so you can have an adventurous night with his ex-supermodel trophy wife because hey, you’ve earned it, champ. If we’re going to indulge in retelling nostalgic corporate fairy tales, we may as well spice them up, right? But to those whose work is non-stop thanks to ubiquitous smartphones, who must compete with either overseas versions of us or robots for every job, and whose bosses see us as nothing but an expense as our tenures grow, all this old timey advice is pretty much just a tone deaf fairy tale that completely ignores the real world for a cozy fantasy.

Like with many nostalgic themes in America today, all too many people view work through rose colored glasses, and pretend not only that the magical workplace I satirized exists today, but that it ever existed at all and that they worked there. But you and I both know it really isn’t true. How? Because when I was a little kid, I watched you come home, drop all your bags, slump into a chair, and bitterly complain about how you got passed over for that big new promotion in favor of some brown noser or the boss’ spoiled kids, how your benefits became a lot more expensive while giving you way less, how you’ve always dreamed of doing something, anything else with your life, telling me to go to college so I could grow up and actually like what I’d end up doing for a living. What, did you really think that we weren’t watching and listening when we were little, and that your grousing, depression, divorces, and eruptions of hatred for toiling at some cubicle farm where you were just a cog in a large, faceless machine with little reward weren’t going to make any impression on us as we were growing up? And now we’re lazy, entitled, rotten little bastards because we’re following your directive to find something that makes us feel at least somewhat alive in between sleep and home, and actually say that’s what we want?

Kids don’t grow up in a vacuum. We didn’t get the idea that we were supposed to do something meaningful with our lives out of thin air. It was taught to us from the day we could understand a simple sentence from your mouth. We’ve seen firsthand what an awful work life does to family, home, and health, so telling us to accept it as a fact of life and succumb to the misery is really a self-righteous way to tell us to give up and take life’s beatings. Kind of like you did. Except in an environment where not only are you treated as an overpriced non-person, you don’t even get a good, old fashioned load of bullshit from corporate assuring you that’s not the case anymore. It boggles the mind that so many people make a commitment to an employer, with whom they’ll spend more time than with their families in many cases, build a relationship with it much like a marriage, and after a while decide that it’s totally fine to have an abusive spouse that just uses you to get what it wants, and that’s how life just is. Almost three fourths of the workforce is basically suffering from an economic equivalent of battered spouse syndrome, and that hurts both employers and employees. But too many employers simply don’t care and won’t fix it.

So you can choose to view millennials’ complaints about being made to do useless busy work, or a proposal to allow jeans and comfortable shoes in the office, as spoiled brats being bratty. That’s your choice. Or you could choose to consider why you’re demanding a four year degree for an educated and fairly expensive human being to make copies, fetch coffee, and do something by hand for weeks on end instead of writing a program that could do it in minutes. Dismissing very vocal and recurrent complaints is a comfortable spot to occupy. You don’t have to critically reevaluate a method you’ve always followed, or make any changes. You can be lazy and leave that cube farm, plagued with resource misallocation and workers who’ve long forgotten how to feign caring about what they do, as it is, telling yourself and others that it’s a fantasy right out of an episode of Mad Men, complete with mentorships (which have long been phased out), opportunities for all (which have been outsourced or automated), and an almost fatherly concern by those in charge for the comfort of their employees (which was deemed too expensive), and not a slow, steady descent into irrelevance. Fixing real problems is much more expensive, and so we’re all simply told that the bugs are not bugs at all but actually features, and great ones at that…



When you hear the word tenure, you’re more than likely imagining the countless groans from a certain subset of pundits who think that it’s a magical ticket for an academic to do nothing and keep his or her job no matter what. Of course, the reality is not quite as dire. Tenure has never, ever been a guaranteed job for life with no requirements. In fact, tenure is a reward for a really productive academic bringing in millions in grants, paying his or her own salary as well as for a whole lot of lab equipment and graduate students. You have to be really good at both research and raising money to even have a shot at it, and once you’re tenured, you can still be fired for doing bad science, or for any other offense pretty much any of us would consider reasonable grounds for termination. Really, the only two things tenure grants you are a reprieve from a committee laying you off for being merely really good instead of exceptionally amazing, and the right not to be fired on the spot, but only after a hearing to decide whether any of your offenses were actually worthy of termination and whether you’re a target of retaliation, political malfeasance, or discrimination.

Sadly, for the politicians who have made up their minds that tenure is just a way for scientists to bilk a few million taxpayers in their states while doing nothing useful, and to abuse the law to make it illegal to fire them for this egregious abuse of public funds, there’s plenty of popular support to simply do away with it. In Wisconsin, Governor Walker did just that by giving public colleges the power to fire any academic for any reason they could portray as remotely plausible. Instead of any guaranteed due process, a political appointee can simply decide that the research being done by the scientist “needs redirection or modification,” or that it’s not in the budget, and that’s that. Obviously, academics are upset, and one public college had to spend $9 million to keep at least some of the researchers it had so it could hold on to $18 million in grant money. But what happens if the scientists UW-Madison kept still feel threatened that if Walker or one of his appointees really doesn’t like what their experiments uncovered, or gets upset that a paper challenges a partisan orthodoxy to which they’re particularly attached, suddenly the program is just way too expensive and needs “realignment,” meaning that the academic is no longer needed?

Sure, there are definitely professors who abuse their tenure and use their perch to indulge in a variety of unsavory conspiracy theories, but changing or even removing tenure to punish them simply isn’t worth it, because it creates a precedent in which important but unpopular speech all too easily gets silenced. Researchers and academics need to be intellectually independent, not beholden to their colleges and the political beliefs of the people who run them, fearing retaliation in response to unflattering scientific findings. If the owner of a sugar company can dismantle a lab where scientists were testing how excess sugar consumption can cause diabetes, that’s a huge blow to both science and public policy. Conversely, should UC-Berkeley dismiss the notorious AIDS denialist Peter Duesberg, it would lose his promising work on cancer genetics. And when the University of Colorado finally decided to get rid of the human Godwin that is Ward Churchill, his incendiary essays weren’t the reason; it was his plagiarism and academic fraud that got him fired. If anything, his dismissal proves that the system works and that tenured academics aren’t immune from investigation and punishment if their science is even somewhat suspect.

I agree that we need to be firing subpar scientists, frauds, and do-nothings, but we already do that when they’re tenured. Forcing good scientists to depend on the will of politicians and the mood of special interests which thrive in partisan echo chambers, under the excuse of punishing those supposedly invincible bad apples, would turn the scientific process into a groupthink exercise. If you can be fired simply for not parroting a party line or offending a powerful donor, why risk the trouble? This is how think tanks do their “research,” not universities, and if we want to fulfill the crude stereotype of colleges being propaganda mills, there’s no better way to do that than to do away with tenure. But then again, I really don’t think that those pushing for tenure’s repeal have much interest in independent scientists challenging long accepted dogmas or diving into really controversial topics. They like their science pliant and rigged to produce data they agree with so their worldview never has to change. And they’re doing the equivalent of telling scientists that they’ve got awfully nice tenure and such well funded and well run labs, and wouldn’t it be a real shame if something happened to all that should their next study be… disagreeable.


human heart

Transplants save lives. Without them, thousands of patients every year with failing or damaged vital organs would have little hope of survival. But the sad truth is that modern transplants, with all the advances in organ preservation and post-operative care that decades of clinical practice and research have given us, are still far from perfect. Putting one person’s living heart into another person’s chest isn’t like swapping engines between two cars of the same make and model. It’s more like trying to infiltrate a crucial post in a massive organization. Another heart is seen as a foreign object to be attacked by the recipient’s immune system and destroyed, requiring a lifetime of immunosuppressive drug treatments which leave him or her vulnerable to a whole host of health problems and make treating those problems really complicated. And that’s if an acceptable donor can even be found, which isn’t the case for almost half the patients waiting for a new organ. Hence, there’s been a lot of interest in creating artificial organs for permanent implantation. They would solve the donor problem and require no immune suppression, but at the same time, we’re not sure how well we truly understand how these vital organs really work.

Since we are talking about vital organs after all, it’s going to take a long time for perfect robotic versions of hearts, lungs, kidneys, and livers to be ready for standard clinical use, but people in dire straits are dying now. In a perfect world, we’d just use their stem cells to 3D print them a new organ and implant it weeks after determining that a transplant is necessary. This approach has the same benefits as artificial organs. Since you’re the donor, there’s no waiting for a good match because your tissues are simply being put back into your own body, and since your body knows these tissues, it shouldn’t attack them. Plus, if we grow an organ from your own cells, we know it’s not going to be different from the one it’s replacing, so we don’t have to guess whether we built it right. But growing organs is difficult, and the science has to progress in baby steps. We know how to get the collagen scaffolding on which to grow the right tissues, and we can grow them into what oh so tantalizingly looks like a complete organ, but one that’s still immature and not quite ready to go into a patient’s body. However, researchers are getting close. A recent experiment in Massachusetts resulted in an immature, weak, but definitely working, beating heart that looked healthy.

But why couldn’t they produce a mature, working organ? Well, the problem lies in how many of the donor’s cells they could harvest into a useful form. If they can figure out a way to get more out of the samples they collect, they could grow a heart viable for implantation and start the process of approving a very early clinical trial on a patient likely to do well after an experimental transplant. The good news is that the research to find these new methods is currently chugging along, and scientists are working to better and better simulate the conditions of the human bodies in which these cells will grow. Mature hearts are still years away, but there are definitely promising avenues and experimental data showing that we’re heading in the right direction. To think that just a decade ago, all of this was still science fiction, and science so bleeding edge that an informed skeptic could’ve been excused for saying that growing individualized organs couldn’t be done in the foreseeable future. Despite the modern attitude that science takes forever to go from interesting ideas to everyday reality, when dealing with matters as complex as crafting an important organ for just one patient, science is actually moving at a breakneck pace.


sleeping cell phone

It seems that George Dvorsky and I will never see eye to eye on AI matters. We couldn’t agree on some key things when we were on two episodes of Science For The People back when it was still called Skeptically Speaking, and after his recent attempt at dispelling popular myths about what artificial intelligence is and how it may endanger us, I don’t see a reason to break with tradition. It’s not that Dvorsky is completely wrong in what he says, but like many pundits fascinated with bleeding edge technology, he ascribes abilities and a certain sentience to computers that they simply don’t have, and borrows from a vague Singularitarianism which uses grandiose terms with seemingly no fixed definitions. The result is a muddled list which has some valid points and does provide valuable information, but not for the reasons actually specified, as some fundamental problems are waved off as if they don’t matter. Articles like this are why I’m doing the open source AI project, which I swear is still being worked on in my spare time, although that’s been a bit hard to come by as I was navigating a professional roller coaster recently. But while the pace of my code review has slowed, I still have time to be a proper AI skeptic.

The very first problem with Dvorsky’s attempt at myth busting comes with his take on the very first “myth”: that we won’t create AI with human-like intelligence. His argument? We made machines that can beat humans at certain games and trade stocks faster than us. If that’s all there is to human intelligence, that’s pretty deflating. We’ve succeeded in writing some apps and neural networks trained to be extremely good at tasks which require a lot of repetition, and whose strategies lie in very fixed domains with a few, really well defined correct answers, which is why we built computers in the first place. They automate repetitive tasks during which our attention and focus can drift and cause errors. So it’s not that surprising that we can build a search engine that can look up an answer faster than the typical human will remember it, or a computer that can play a board game by keeping track of enough probabilities with each move to beat a human champion. Make those machines do something a neural network in their software has not been trained to do and watch them fail. A human, by contrast, will figure out the new task and train him or herself to do it until it’s second nature.
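
To make the point concrete, here’s a toy sketch of my own (deliberately tiny, and not anything Dvorsky cites): a single perceptron happily learns the AND function, a task with a few well defined correct answers, but the exact same training procedure can never learn XOR, a task that falls outside its fixed domain no matter how long you train it.

```python
# A single perceptron trained by the classic error-correction rule.
# It masters AND, a linearly separable task, but the same procedure
# cannot master XOR, which no single perceptron can represent.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge the weights toward the correct answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

and_model = train_perceptron(AND)
xor_model = train_perceptron(XOR)

and_acc = sum(and_model(*x) == t for x, t in AND) / 4  # learns perfectly
xor_acc = sum(xor_model(*x) == t for x, t in XOR) / 4  # can never hit 4/4
```

The “AI part” here is just well understood math guessing its way to better weights, and the moment the task steps outside what the architecture can represent, no amount of training helps.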

For all the gung-ho quotes from equally enthusiastic laypeople with only tangential expertise in the subject matter, and the typical Singularitarian mantras that brains are just meat machines, throwing around the term “human-like intelligence” while scientists still struggle to define what it means to be intelligent in the first place is not even an argument. It’s basically a typical techie’s rough day on the job: listening to clients debate their big ideas and simply assume that with enough elbow grease, what they want can be done, without realizing that their requests are only loosely tethered to reality and they’re just regurgitating the promotional fluff they read on some tech blogs. And besides, none of the software Dvorsky so approvingly cites appeared ex nihilo; there were people who wrote it and tested it, so to say that software beat a person at a particular task isn’t even what happened. People wrote software to beat other people at certain tasks. All that’s happening with the AI part is that they used well understood math and data structures to avoid writing too much code and let the software guess its way to better performance. To neglect the programmers like that is like praising a puck for getting into the net past a goalie while forgetting to mention that oh yeah, there was a team that lined up the shot and got it in.

Failing to grasp this fundamental part of where we are with AI, looking at fancy calculators and advanced search engines, then imagining HAL 9000 and Skynet as the next logical steps for straightforward probabilistic algorithms, turns the rest of the myths into philosophical what-ifs instead of the definitive facts Dvorsky presents them to be. Can someone write a dangerous AI that we might have to fear or that may turn against us? Sure. But will it be so smart that we’ll be unable to shut it down if we have to, as he claims? Probably not. Just like the next logical step for your first rocket to make it into orbit is not a fully functioning warp drive — which may or may not be feasible in the first place, and if it is, unlikely to be anything like what’s shown in science fiction — an AI system today is on track to be a glorified calculator, search engine, and workflow supervisor. In terms of original and creative thought, it’s a tool to extend a human’s abilities by crunching the numbers on speculative ideas, but little else. There’s a reason why computer scientists are not writing countless philosophical treatises on artificial intelligence co-existing with lesser beings of flesh and bone, while pundits, futurists, and self-proclaimed AI experts churn out vast papers passionately debating the contents of vague PopSci Tech section articles after all…


Ontario, Canada is going to try something new to help people get their lives back on track by becoming ground zero for a pilot program for a universal basic income. Though the province has not released any details, and the whole thing may still get scrapped, it shows that there’s some flirting with the concept from governments eager for a new way to tackle poverty. Essentially, a universal basic income is exactly what it sounds like. Every household gets a certain sum of cash simply for existing, meant to cover some basic needs. Anything above that is up to you, and whatever other income you earn gets added on top of your UBI stipend, however you’re earning that money. Think of it as an efficient way to make sure your citizens don’t simply starve to death, end up homeless and destitute, or see crime as their only option for survival. But it’s not exactly a perfect system, and considering the pretext for its advocacy in Europe, and now in the United States and Canada, it does seem to carry a certain sense of desperation, a positive spin on the admission that the officials implementing it ran out of ideas for job growth.

Now, I can just hear conservative pundits having conniptions on the subject. Money for nothing from the government? How would anyone be motivated to work, to study, or to do something that isn’t watching TV and playing video games if all their basic needs are already met? And while it may be easy to dismiss these concerns by saying that no one should be forced to get a good education and a job at the gunpoint of starvation, it’s also impossible to deny that there’s going to be a group of people who use it as an excuse to do nothing whatsoever with their lives because they no longer have to in order to survive. On the other hand, considering that you will always have those only interested in putting in the absolute minimum effort required, going out of one’s way to base policies affecting everyone on the most efficient way to punish them is not just myopic, it harms those who genuinely need a hand up. There are numerous surveys and accounts showing that people who desperately want to escape poverty but can’t are simply not planning for the future because they feel like they don’t have one, and every inconvenience can quickly turn into a budget-crippling disaster. UBI may be their ticket to the middle class.

Pulling yourself up by your bootstraps is easier when you can afford them in the first place, and knowing that you will have some money for the basics and to cover emergencies will allow the beneficiaries in poverty to start saving, get a financial plan together, and have confidence that they’re not one bad day away from becoming homeless. When they can see a future, they can follow a plan to make it into something better. The way many nations provide assistance yanks the necessary safety nets away the minute those receiving them start climbing out of poverty, rather than providing incentives to keep going and a net to prevent them from falling back in, and gives no flexibility in deciding how the assistance money is spent, even if the recipients can prove that the current package isn’t going to help them get ahead. Just giving them cash will allow them to do what they need to do: reallocate a token sum meant for food to fix a car so they can get to work, or get a laptop so they can attend online job training classes to earn more money. But again, using it to combat slow job growth and stagnant wages does seem like treating a symptom rather than curing the disease, and while it will help the poor, the question is how much.

Sadly, outsourcing and automation have made a lot of people basically obsolete, and instead of helping them adapt to the new way of things, we’ve made learning the new skills they’ll need to compete prohibitively expensive. Now, instead of addressing what really seems to be a problem in how we educate our workforce and how we plan for the future, UBI advocates are saying that a stipend should help because, let’s face it, when half the world may be struggling to find a job by the year 2035, we might as well give in and accept that keeping people off the streets and away from starvation is a necessary budgetary evil. But if we use UBI as a crutch, wouldn’t we then hand a convenient excuse to colleges that refuse to implement apprenticeship programs or participate in job training programs for snobbish, self-centered reasons, and to companies that refuse to drop a four year degree as a requirement for even getting an interview despite not needing their workers to hold that degree 74% of the time? Go ahead, take whatever classes you want and pile on those student loans. It’s fine if we don’t train the next generation and outsource the jobs for which we don’t want to pay higher wages. It’s no biggie. They’ll just get a UBI check. Enabling the acceptance of the status quo like that may end up doing a great deal of harm.


Social science gets a bad rap because not only does it sometimes make us confront some very ugly truths about human nature, its studies can be very difficult to reproduce, so much so that a project undertaking to do just that found it couldn’t get the same results as more than half of the papers it tried to replicate. But ironically enough, an effort to replicate the replication did not succeed either. For those having trouble following this, let’s recap. Researchers trying to figure out how many social science papers can be reproduced didn’t conduct a study others were able to reproduce themselves. That’s a disaster on a meta level, but apparently, it’s more or less to be expected given the subject matter, measurement biases, and flaws involved. In a study challenging the supposedly abysmal replication rate reported by the effort known as the Replication Project, it quickly becomes evident that the guidelines by which the tested studies failed were simply too rigid, even going so far as to neglect clearly stated uncertainty and error margins, and to perform some experiments using different methods than the papers they were trying to replicate.

Had the Replication Project simply followed the studies carefully and included the papers’ error bars when comparing the final results, it would have found over 70% of the replication attempts successful. That may still not sound great, with more than one in four experiments not really panning out a second time, but that’s the wrong way to think about it. Social sciences are trying to measure very complicated things and won’t get the same answer every time. There will be lots and lots of noise until we uncover a signal, and that’s really what science does. Where a quantification-minded curmudgeon sees failed replication attempts, a scientist sees failures that can serve as lessons in what not to do for future experimental designs. It would’ve been great to see the much desired 92% successful replication rate the Replication Project set as the benchmark, but that number reduced the complexity of doing bleeding edge science, which often needs to get it wrong before it gets it right, to the equivalent of answering questions on an unpleasantly thorough pop quiz. Add the facts that the project’s researchers refused to account for something as simple as error bars when rendering their final judgments, and that once in a while they neglected to follow the designs they were testing, and it’s difficult to trust them.
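
To see how much the judging criterion matters, consider a small sketch with numbers I invented for illustration (they are not from the actual project): a replication whose point estimate misses the original’s, but whose error bars overlap it, fails a strict point-match test yet passes an error-bar-aware one.

```python
# Two ways to judge whether a replication "succeeded". The effect sizes
# and standard errors below are hypothetical, purely for illustration.

def strict_match(orig, repl, tolerance=0.05):
    # Strict criterion: the point estimates must nearly coincide.
    return abs(orig["effect"] - repl["effect"]) <= tolerance

def error_bar_match(orig, repl, z=1.96):
    # Error-bar-aware criterion: do the 95% confidence intervals overlap?
    o_lo, o_hi = orig["effect"] - z * orig["se"], orig["effect"] + z * orig["se"]
    r_lo, r_hi = repl["effect"] - z * repl["se"], repl["effect"] + z * repl["se"]
    return o_lo <= r_hi and r_lo <= o_hi

original    = {"effect": 0.42, "se": 0.10}  # hypothetical original study
replication = {"effect": 0.25, "se": 0.08}  # hypothetical replication

strict  = strict_match(original, replication)     # fails: points differ
overlap = error_bar_match(original, replication)  # passes: CIs overlap
```

The same pair of studies gets called a failure or a success depending on nothing but the rule used to compare them, which is exactly the complaint leveled at the project’s too-rigid guidelines.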

Where does this leave us? Well, there is a replication problem in the social sciences, so much so that studies claiming to be able to measure it are themselves flawed and difficult to replicate. There are constant debates about which study got it right and which didn’t, and we can choose to see this as a huge problem we have to tackle to save the discipline. Or we can remember that this back and forth about how well certain studies hold up over time, and whose paper got it wrong and whose got it right, is exactly what we want to see in a healthy scientific field. The last thing we want is researchers not calling out studies they see as flawed, because we’re trying to find out how people think and how societies work, not hit an ideal replication benchmark. It’s part of that asinine, self-destructive trend of quantifying the quality out of modern scientific papers by measuring a bunch of simple, irrelevant, or tangential metrics to determine the worth of the research being done, and it really needs to stop. Look, we definitely want lots of papers we can replicate at the end of the day. But far more important than that, we want to see researchers giving it their best, most honest, most thorough try, and if they fail to prove something, or we can’t easily replicate their findings, that could be even more important than a positive, repeatable result.

Nowadays, when severe weather strikes, the news immediately starts asking if global warming was responsible for the event they just covered, which is generally the wrong question to ask in the first place. Global warming itself is not going to trigger a particular storm system; rather, it’s going to meddle with storms’ frequency and severity depending on your regional climate, because the world is a very big and complicated place, and a worldwide temperature rise of one degree will affect different places on Earth in different ways. This is what allows deniers to say that one glacier melting slower than another, or changing shape, means global warming isn’t happening because they should all melt, ignoring that a glacier’s shape, location, and composition plays a huge role in how it will behave. So what should we expect when we look at an event in one region of the world defined by a very particular kind of storm system: tornado outbreaks in a country where there are entire seasons during which they’re very likely to happen? There’s bound to be an uptick in how many tornadoes happen and how powerful they get, right?

Just like everything else in science, the answer isn’t quite cut and dried. While the typical number of outbreaks held roughly steady at about 20 per year over the last 60 years, the average number of tornadoes per outbreak rose by 50%, as, interestingly enough, did the variance per outbreak. In short, we can’t find a change in the number of outbreaks, and the ones that spawn fewer tornadoes grew less intense over more than half a century, while the more intense ones have gotten really extreme, with far more tornadoes. Rather than increasing in a straight line, the number of tornadoes born from storm system to storm system now swings wildly. It’s an interesting result, though not a completely bizarre one. After all, tornadoes require a precise sequence of events to happen, and North America is one of the few places where warm, moist air from the sea and cold, dry air from the Arctic can collide across vast swaths of land, forming the powerful supercells that can spawn them, so if global warming is having any effect on them whatsoever, making tornado outbreaks more inconsistent as more energy gets dumped into typical regional weather patterns over decades is definitely not out of the question.
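
To see how the number of outbreaks can stay flat while the mean and variance of tornadoes per outbreak both climb, here’s a toy sketch with counts I made up for illustration (they are not NOAA data):

```python
# Two hypothetical eras with the SAME number of outbreaks per sample,
# but a higher mean and much higher variance in tornadoes per outbreak
# in the later era -- flat outbreak counts, wilder swings.

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    # Population variance: average squared deviation from the mean.
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Invented tornadoes-per-outbreak counts, ten outbreaks in each era.
era_early = [5, 6, 7, 8, 6, 7, 5, 8, 6, 7]     # steady, modest outbreaks
era_late  = [4, 5, 18, 6, 22, 5, 19, 4, 6, 7]  # same outbreak count,
                                               # extreme swings

m_early, m_late = mean(era_early), mean(era_late)
v_early, v_late = variance(era_early), variance(era_late)
```

With numbers shaped like these, the outbreak count per era is identical, yet the average tornadoes per outbreak rises by roughly half and the variance explodes, which is the pattern the study describes.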

Since the research is limited to NOAA reports for the United States, it’s prudent to ask about an uptick in tornadoes in Canada, which also has a Tornado Alley, because a border isn’t going to suddenly stop a storm system to fill out customs forms and turn it away for lacking government issued identification to enter another nation. But there’s a bit of a controversy over whether that’s happening, because while on paper there are more tornadoes, scientists are hedging their bets by noting that they’re often happening in less populated regions and are finally being spotted more often and detected more accurately, so no one is sure what the baseline was over the years. If those areas were much more heavily populated, as their counterparts are in the U.S., there would be better tracking and a more definitive answer. And all this brings us back to our original question of whether global warming is fueling tornadoes. The answer seems to be that it’s too early to tell, but over the last 60 years, the more violent swings in tornado outbreaks point to it as a very plausible culprit. As always, the more data we have, the more complete our picture, but the first impression is that when weather turns violent, excess heat in our atmosphere can make an already bad storm even more extreme…
