woman with barcode

If you live in the U.S. and still watch certain TV channels, you may be forgiven for thinking that if you don’t know your FICO score, or lack apps and services to notify you of its every slight change within moments, you may as well give up on owning or renting anything without a massive pile of cash sitting in a bank. Cutting through the commercial hyperbole, there’s a bit of truth to that in a country where borrowing is high and saving is low. Lenders need an objective and quick way to figure out how likely you are to repay them, and one company, Fair Isaac, has long claimed it owns an equation to predict exactly that based on your history of making timely payments and other factors that seem important. The end result is a quick, three-digit number that seems to speak volumes. But is it objective in an age when a layoff as automation or outsourcing claims one’s job, or a dire medical problem, can instantly land you in a world of financial pain and ruin? Probably not. No matter how you look at it, the FICO score has some pretty significant shortcomings, but fixing them could actually get really, really ugly…

For a few years, credit rating agencies have been toying with the idea of using social media, particularly Facebook and LinkedIn, as an additional barometer of your creditworthiness, trying to find a correlation between your online contacts and your odds of a default. In some cases, you can make fairly accurate predictions. A senior manager at a very large corporation whose contacts on professional social networks are all high-powered business people, and whose resume is full of big numbers and grand accomplishments, is probably not going to stop paying for his new BMW or buy a new house and skip town. But what about a hardworking college student with a couple of stoner friends who never amounted to much still listed in her Facebook contacts? You may as well flip a coin, because if you’re deciding the worth of a person only by the company he or she keeps, you not only open the door to discrimination, but also strip that applicant of agency by holding friends’ failures, real or imagined, over this person’s head. Yes, this student may default and fall behind. But she could also be determined to build up a great credit score no matter the personal cost and pay in full, on time, every time, while working her way to adulthood.

Now, as scary as the attempts to base your credit rating on that of your friends sound, they’ve got nothing on China’s grand plan to develop a social score for its citizens that goes far beyond the humble creditworthiness rating and all the way into meddling in their personal lives and political beliefs. Not only do you need to have a great history of on-time payments to qualify for loans or ownership of private property, but you must also demonstrate that you’re a productive citizen who is loyal to the party. Buying video games penalizes you while buying diapers rewards you. Your friends started posting sarcastic, Soviet-style jokes about the Communist Party? Well, you really didn’t want to buy a new house or get a new car, did you now? Oh, you did? Too bad. Probably shouldn’t be friends with unpatriotic dissidents then. You can see where this is going. Imagine a similar score in the U.S. used by the NSA and FBI to assign one’s likelihood of becoming some sort of criminal or terrorist, their less than airtight statistical models used to justify searches and seizures of random individuals whose personal choices and behavior matter less and less than the choices and behaviors of their social group. It’s like a dystopian sci-fi tale coming to life.

Really, there’s a limit to how much data we should be collecting and using, and people should be allowed to opt out of collection processes they think can be abused. Maybe a credit rating agency does want to create a financial product for people who want to use their friends to vouch for them. It would be their choice to see how it pans out. But if it’s using the same kind of research on line-of-credit applicants who have not consented to this process, it needs to be punished heavily enough that violating the rules costs much more than just complying with them. Just because we are fully capable of quickly and easily creating the tools for an Orwellian society doesn’t mean that we have to enable tyranny by algorithm and pretend that because computers are making some decision based on data they’re collecting, it’s all objective and above board. People program all of these sites, people collect and organize this information, and people write the algorithms that will crunch it and render a verdict. And people are often biased and hypocritically judgmental. If we let their biases hide in lines of code watching our every move and encouraging us to be little model citizens, like the Chinese plan does, the consequences will be extremely dire.


designer v. developer

After some huffing, puffing, and fussing around with GitHub pages and wikis, I can finally bring you the promised first installment of my play-along-at-home AI project. There’s no code to review just yet, but there is a high level explanation of how it will be implemented. It’s nothing fancy, but that’s kind of the point. Simple, easy to follow modules are easier to deal with and debug, so that’s the direction in which I’m headed as they’re snapped on top of cloud service interfaces that will provide the on-demand computing resources required as the project ramps up. There are also explanations for some of the choices I’m making when there are several decent implementation options for a particular feature set; some of those choices more or less come down to personal preference, while for others there are long-view reasons to definitely pick one option over another.

In the next update, there will be database designs and SQL, which may look like overkill for a framework to run some ANNs, particularly when there are hundred-line Python scripts that run them without a hiccup. But remember that for what’s being built, ANNs are just one component, so the overhead goes toward managing where the data lives and securing it, because if I’ve learned anything about security, it’s that if it’s not baked in from the start but layered on top after all the functionality has been completed, you end up with only one layer of defense that may be easily pierced by exploiting a vulnerability out of your control. Inputs may not get sanitized with proper care, your framework package for CSRF prevention might not have been updated, and without a security model to put up some roadblocks between a hacker and your data, you may as well not have bothered. Likewise, there’s going to be a fair amount of code and resources to define the ANNs’ inputs and outputs so we can actually harness them to do useful things.
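To make the input sanitization point concrete, here’s a minimal Python sketch of one of those baked-in defenses: parameterized queries. The table and column names are invented for illustration, not the project’s actual schema.

```python
import sqlite3

# Invented schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE networks (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO networks (name) VALUES (?)", ("vision-ann",))

def find_network(user_input):
    # The ? placeholder makes the driver treat the value strictly as data,
    # so a hostile string can never be executed as SQL.
    cur = conn.execute("SELECT id, name FROM networks WHERE name = ?",
                       (user_input,))
    return cur.fetchall()

# A classic injection attempt simply matches nothing...
print(find_network("x'; DROP TABLE networks; --"))  # → []
# ...while a legitimate lookup works as expected.
print(find_network("vision-ann"))                   # → [(1, 'vision-ann')]
```

Compare that to gluing strings together into a query, where the same hostile input would have dropped the table: the defense lives in the data layer itself rather than being bolted on afterward.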


There’s nothing more wasteful than reinventing the wheel. We’ve been using wheels for 5,000 or so years and have done pretty much everything to them that we could have, so when we’re pretty sure we found the optimal way of doing something, we invoke this expression to mark a totally useless, repetitive endeavor. But here’s the thing about thinking you’re done perfecting even a simple design: you get stuck doing things one way for so long that you lose the ability to see completely new approaches that can really improve something you thought was ideal, and yes, that literally includes the wheels on a car, as a recent concept video from Goodyear illustrates: a 3D printed spherical wheel for autonomous vehicles providing better grip and traction, and making the dreaded task of parallel parking a breeze because cars can just effortlessly move sideways into their spots. If I saw those cars on the road in Santa Monica later today, I’d say they were a day too late in getting there. Really, the whole concept seems obviously superior.

Again, this is what you get when you approach a seemingly solved problem with a fresh look: a completely new solution that could prove better than the supposedly optimal one we have today. It’s part of the reason why STEM students perform the same experiments and try to build the same structures — physical and digital — as the students before them. Not only do they learn how the solution they’ll typically use evolved, but there’s always the chance that someone for whom the problem in question is still new, and the solution a blank slate, will spot a new way to handle it and, in the process, create a new standard solution. And while living on the bleeding edge is an exciting prospect and our knowledge grows on the foundations laid by previous generations, it really isn’t a waste of time and money to inspect those foundations and see if we can replace a few sections with something better and sturdier, maybe leading to new discoveries in sub-branches that seemed to have hit a dead end. You might end up with the same exact answers 99.9% of the time, but the 0.1% of the time you come up with something different may be more than worth it, giving you a stronger, more nimble wheel, literally and metaphorically…


rage and fury

While my play-along-at-home AI project hit a little snag as I’m still experimenting with the newest cross-platform version of a framework that might not be ready for prime time just yet, why don’t we take a look at the huge controversy surrounding the open journal PLOS ONE and why, no matter how it all happened, the fact that it did is alarming? If you don’t know, the journal has been savaged on social media by scientists for publishing a Chinese study on the dexterity of the human hand with an explicit reference to God in the abstract. Some reactions have been so over the top that an ardent creationist watching from the sidelines could collect the outraged quotes and use them in a presentation on how scientists get reflexively incensed when anyone brings up God because they’re all evil atheists who can’t even bear to hear him invoked. But at its core, the scientists’ outrage has less to do with the content of the paper than with how badly broken the peer review mechanism is in a world of publish-every-day-or-perish, where the tenure committee that decides your fate scoffs at anything less than 100 papers in journals…

For what it’s worth, the paper’s writers say that they flubbed their translation into English and a reference to “the Creator” was really supposed to say evolutionary nature. I’m not sure that’s true because while on the surface China is an atheistic country, there are plenty of Christians who live there, and the rest of the paper’s English seems perfectly fine and proper. The capitalized reference seems too deliberate to be a mistake, almost as if someone snuck it in on purpose and the team is now covering for this investigator by faking sudden bouts of Engrish in a paper that doesn’t actually suffer from any. Obviously, there’s no prohibition against a scientist being religious and conducting exemplary research. Francis Collins is a devout Evangelical whose free time was spent preaching Templeton’s gospel of accommodationism, but his work with the Human Genome Project is critical in modern biology. Ken Miller is a devoted Catholic, but he’s tirelessly kept creationism out of classrooms in the Midwest and separates his personal beliefs from his scientific work and advocacy. And that’s what all scientists of faith try to do: maintain a separation between religion and work in the public eye, and when they fail, an editor should be there to review the paper and point that out before publishing it for public consumption.

So that’s what the fuming on social media is all about: the lack of editorial oversight. Scientists who wanted to submit their research to PLOS ONE, or already have, are now worried that it will be considered a junk journal and their time and effort publishing there will be wasted. Not only that, but they’re worried about the quality of the papers they cited from the journal as well, since an editorial failure during peer review means that outright fraud can go undetected and require other scientists to take huge professional risks to uncover. Since peer review is supposed to keep a junk paper out of a good journal by pointing out every design flaw, obvious bias, cherry-picked result, and inconsistency that signals fraud or incompetence, and it’s the only mechanism that exists to do so before publication, any sign that the reviewers and editors are asleep at the wheel, or only going through the motions, is incredibly alarming to scientists. Yet, at the same time, I can sort of understand why this kind of thing happens. Reviewers are the gatekeepers of what qualifies as scientific literature and their job is to give scientists hell. But they’re not paid for it, their work for journals is not appreciated very much, and despite their crucial role in the scientific process, the fact of the matter is that they’re volunteers doing a thankless task out of a sense of duty.

While the popular perception of a scientist is that research is a cushy gig, the reality is that the majority of scientists are overworked, underpaid, and expected to hand out their time for free in the service of highly profitable journals that charge an arm and a leg to publish scientists’ own content. Any person, no matter how passionate or excited about his or her work, is not going to be extremely motivated and exceedingly thorough under these circumstances. Until we start to properly appreciate reviewers for their work and reward them for it, and until colleges finally realize that it’s dangerous and ridiculous to encourage scientists to write ten papers where one good one would’ve sufficed, mistakes like PLOS ONE’s are just going to keep happening, with the review taking place on social media rather than being done by reviewers and editors, as it should’ve been. We can’t expect quality from peer review in the future if we’re not willing to make the task logistically reasonable and professionally appreciated, much like we shouldn’t expect to walk into any used car dealership and drive off in a brand new Ferrari for the price of an old Kia. Like with so many things in life, you get what you pay for when someone has to work for your benefit.


hot magnetar

Fast radio bursts, or FRBs, are quickly becoming one of the most interesting things out there in deep space, and the more we study them, the more strange questions they raise. In less than a year, the media declared them to be alien broadcasts and, a few days later, just random flukes, while actual scientists confirmed not only that they’re very real, but that they’re coming from as far as six billion light years away and shed light on matters of cosmological significance. But for all the new and ever more detailed observations, we still have little clue what’s causing them, and my favorite theory, involving some really extreme physics, might turn out to be flawed according to a new paper which finally has some hard data about the objects causing the FRBs. You see, any theory involving a cataclysmic event emitting one of these bursts means that the signal can come from a particular location only once because the object that created it was destroyed, but apparently, that’s not what we’re seeing. In fact, the same object can generate multiple, intermittent FRBs, meaning that despite their energy, their source is still very much there.

After studying a single burst called FRB 121102, astronomers around the world saw that it was repeating. There was no regular pattern, but it definitely recurred ten times according to what’s known as the dispersion measure: the smearing of the signal caused by its path through the dust and gas of space on its way to us, which, as recently mentioned, confirmed that we are able to weigh the universe correctly. Armed with the knowledge that the signal is repeating, the team’s focus then shifted to identifying what could create such powerful bursts and live to do it again, and then nine more times. Well, the researchers found the burst while doing a survey of pulsars, still active neutron stars belching death beams and radio signals as they cool and settle down, and one particular type of neutron star seems to fit the bill as an FRB progenitor: a magnetar. It’s a neutron star with a magnetic field so powerful that it could brick your electronics and erase the data on your credit cards from 120,000 miles away. The most powerful magnets ever built have less than a hundred millionth of that strength, and the planet’s magnetic field is a quintillionth of it. And when magnetars undergo a quake, we can feel it from 50,000 light years away.
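Since the dispersion measure does the heavy lifting in matching those bursts to one source, here’s a quick sketch of the cold-plasma delay formula behind it. The DM used is the commonly quoted ~557 pc cm⁻³ for FRB 121102, and the band edges are illustrative, not the survey’s actual receiver setup.

```python
# Cold-plasma dispersion: lower radio frequencies arrive later, and the
# size of that lag defines the dispersion measure (DM).
K_DM_MS = 4.149  # dispersion constant, in ms * GHz^2 / (pc cm^-3)

def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """How far the low-frequency edge of a burst lags behind the high one."""
    return K_DM_MS * dm * (f_lo_ghz ** -2 - f_hi_ghz ** -2)

# Across an illustrative 1.2-1.5 GHz band, a DM of 557 pc cm^-3 smears the
# burst out by roughly half a second:
print(f"{dispersion_delay_ms(557.0, 1.2, 1.5):.0f} ms")  # → 578 ms
```

Ten bursts from ten different objects would be astronomically unlikely to share the same DM, which is why a matching value is such strong evidence of a single, surviving source.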

Ultimately, the team thinks that FRBs are magnetic aftershocks of these magnetar quakes. The energy from the quake itself is too small for us to easily detect, but the powerful magnetic fields are disrupted enough to emit a scream across time and space when they reconnect. Consider that neutron stars are like an incredibly tightly packed coil, with the mass of our sun crammed into a sphere 15 to 20 miles across, surface temperatures in the millions of degrees, and an internal one soaring to over 1.8 billion at the core. The units of measurement don’t even matter at this point because the numbers are just so huge. A quake that causes just a millimeter crack in the crust registers as a magnitude 23 on the Richter scale. The biggest possible natural earthquake can’t exceed a 9.2, and the scale itself is logarithmic, meaning that an almost invisible motion of a magnetar’s surface can easily unleash 10 trillion times the energy our planet can at its worst. It seems like this stellar monster can definitely produce a burst that seems apocalyptic, then turn around and do it again with ease. As awesome as neutron star collapse theories of FRBs were, distant, quaking magnetars seem to be a much more solid candidate for their origins.
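To put numbers on that logarithmic gap, here’s a back-of-the-envelope sketch. One caveat: a whole magnitude step is a tenfold jump in measured amplitude, but closer to a 10^1.5 jump in released energy under the Gutenberg-Richter relation, so the trillions-scale comparison above tracks the amplitude ratio, while the energy ratio is more extreme still.

```python
# Back-of-the-envelope arithmetic on the magnitude gap quoted above.
magnetar_quake = 23.0     # a millimeter crack in a magnetar's crust
biggest_earthquake = 9.2  # the quoted ceiling for natural earthquakes

gap = magnetar_quake - biggest_earthquake
amplitude_ratio = 10 ** gap       # tenfold per magnitude step: ~6.3e13
energy_ratio = 10 ** (1.5 * gap)  # Gutenberg-Richter energy scaling: ~5.0e20

print(f"amplitude ratio: {amplitude_ratio:.1e}")  # → amplitude ratio: 6.3e+13
print(f"energy ratio:    {energy_ratio:.1e}")     # → energy ratio:    5.0e+20
```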


alien astronaut

Looking for aliens is hard, we all know that. Considering just how much space we need to scan while knowing full well that we’re looking for the equivalent of a needle in all the hay on Earth, a lot of proposals have been put forward to try and narrow down the slice of the sky on which we need to focus to make the task a little more manageable. We’ve tried looking for a planet-sized semaphore and think we might have found one, evidence of asteroid mining, and even gravity waves from relativistic rockets, but the amount of effort is still immense. Now, two researchers from a Canadian university say they have a much better plan to find aliens. Simply assume that a distant alien species is looking for us in much the same way we are looking for them, and would want to contact us with lasers and radio broadcasts when they detect Earth. This means we would only need to focus on planets from which we know extraterrestrial astronomers could spot us in the middle of a transit event, much like Kepler spots exoplanets transiting their stars. Seems fairly straightforward, right? Yes, it does. Actually, it seems way, way too straightforward.

First and foremost, this approach limits us to a sliver of the sky and pins its hopes on a curious, intelligent species interested in its place in the cosmos and looking for other intelligent life. It’s a huge leap which involves so many coincidences and so much good luck panning out in our favor that it should raise at least a few hackles. After all, even if intelligent life is plentiful and there are countless aliens that want to explore space and talk to other life forms, there’s the matter of when these species evolve and develop the technology to act on their curiosity, as well as when other species will evolve and develop theirs. A thousand-year mismatch seems virtually inevitable because on cosmic and evolutionary time scales, it’s a blink of an eye, and that’s the amount of time in which an entire civilization can rise and fall to be replaced by another. There are societies that lasted far longer than that, true, but they’re the exception rather than the rule, and as civilizations rise and fall, their priorities change. There may be a long window for a guild of alien astronomers to scan the skies and a very short one for another species to respond.

Another big problem with this assumption is the idea that aliens would be curious, or care that another intelligent species is out there. We often project our aspirations onto hypothetical species and picture them as hyper-literate space-faring explorers seeking out other life. In reality, quick surveys right here on Earth will show you that even a supposedly curious species like us places an extremely low priority on SETI research. In fact, countless people think it’s just a huge waste of time and money, and to assume that there will be no aliens who could find us and contact us but choose not to because they honestly don’t give a damn would be a big mistake. Hell, they may be religious zealots who believe that looking for other intelligent life is a mortal sin. Waiting for them to send a signal to us from a narrow patch of the cosmos would be a fruitless exercise in wishful thinking. And that’s kind of what this proposal really is: wishful thinking when it comes to alien life, hoping that they’re out there, watching, listening, and trying to reach us because it’s what we’d really like them to do. As nice as it would be if that were truly the case, the universe is not known for coddling our personal desires. If we want to find alien life, it’ll take a while…


curious bot

Defense contractor Raytheon is working on robots that can talk to each other by tweaking how machine learning is commonly done. Out with top-down algorithmic instructions, in with neural network collaboration and delegation across numerous machines. Personally, I think this is not just a great idea but a fantastic one, so much so that I ended up writing my thesis on it and had some designs and code lying around for a proof of concept. Sadly, it’s been a few years and I got sidetracked by work, my eventual cross-country move, and other pedestrian concerns. But all that time, this idea just kept nagging me, and so after reading about Raytheon’s thoughts on networked robotics, I decided to dust off my old project and build it anew with modern tools in a series of posts, laying out not just the core concepts but the details of the implementation. Yes, there’s going to be a lot of in-depth discussion about code, but I’ll do my best to keep it easy to follow and discuss, whether you’re a seasoned professional coder, or just byte-curious.

All right, all right, that’s enough with all the groaning, I design and write software for a living, not pack comedy clubs in West Hollywood. And before you write any software, you have to lay out a few basic goals for what you want it to do. First and foremost, this project should be flexible and easily expandable, because all we know is that we’re going to have neural networks representing machines with inputs and outputs, and we want to tie them to a consistent terminology we can invoke when calling them. Secondly, it should be easily scalable and ready for the cloud, where all it takes to ramp it up is tweaking a few settings on the administration screen. Thirdly, it should be capable of accepting and executing custom rules for making sure the digital representations of the robots in the system are valid on the fly. And finally, it should allow for custom interfaces to different machines inhabiting the real world, or at least get close enough to providing a generic way to talk to real world entities. Sounds pretty ambitious, I know, but hey, if you’re going to be dealing with artificial intelligence, why not try to see just how far you can take an idea?
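The third goal — pluggable validation of robots’ digital representations — can be sketched in a few lines of Python. Every class and method name here is invented for illustration; this is the shape of the idea, not the project’s actual API.

```python
from abc import ABC, abstractmethod

class ValidationRule(ABC):
    """A custom, swappable check run against a robot's digital twin."""
    @abstractmethod
    def is_valid(self, robot: "RobotModel") -> bool: ...

class HasIORule(ValidationRule):
    # A robot model is only useful if its network has inputs and outputs.
    def is_valid(self, robot):
        return bool(robot.inputs) and bool(robot.outputs)

class RobotModel:
    """Digital representation of a machine tied to a neural network."""
    def __init__(self, name, inputs, outputs):
        self.name, self.inputs, self.outputs = name, inputs, outputs

class Registry:
    """Accepts robots only if every registered rule passes, on the fly."""
    def __init__(self, rules):
        self.rules, self.robots = rules, {}

    def register(self, robot):
        if all(rule.is_valid(robot) for rule in self.rules):
            self.robots[robot.name] = robot
            return True
        return False

registry = Registry([HasIORule()])
print(registry.register(RobotModel("rover", ["camera"], ["wheels"])))  # → True
print(registry.register(RobotModel("brick", [], [])))                  # → False
```

Because the rules are objects rather than hard-coded checks, new constraints can be dropped in without touching the registry itself, which is exactly the flexibility the first goal calls for.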

Before we proceed though, I’d like to tackle the obvious question of why one would want to dive into a project like this on a skeptical pop sci blog. Well, for the last few years artificial intelligence has figured in popular science news as some sort of dark magic able to create utopias and ruin economies by making nearly half of all jobs obsolete in mere decades, covered by writers who can’t fact check the claims they quote and use to build elaborate scenarios of the future. But even if you don’t dive into the code and experiment with it yourself, you’ll get a good idea of what AI actually is and isn’t. Then, the next time you read some clickbait laying out preposterous claims about how robots will take over the world and enslave us as we remain oblivious to it, you can recall that AI isn’t a digital ghost from sci-fi comic books waiting to turn on a humanity it comes to resent, like the hateful supercomputer of I Have No Mouth and I Must Scream, but something you’ve seen diagrammed and rendered in code you can run on your very own computer on an odd little pop sci blog, and feel accordingly unimpressed with the cheap sensationalism. So with that in mind, here’s your chance to stop worrying and learn to understand your future machine overlords.

Here’s how this project is going to work. Each new post in this series is going to point to a GitHub wiki entry with code and details, to keep the code and in-depth analysis in the same place, while the posts here give the high level overview. This way, if you prefer to stick to very high level basic overviews, that’s what you get to see first because, as I’ve been told by so many bloggers who specialize in popular science and technology, big blocks of math and code are guaranteed to scare off an audience. But if the details intrigue you and you want a better look under the hood, it’s only a link away, and even if it looks scary at first, I really would encourage you to click on it and see how much you can follow along. Meanwhile, you’ll still get your dose of skeptical and scientific content in between, so don’t think Weird Things is about to turn into a comp sci blog the whole time this project is underway. After all, after long days of dealing with code and architectural designs, even someone who can’t imagine doing anything else will need a break from talking about computers and writing even more code for public review…


global warming sea rise

In his ongoing fight against the cognitive dissonance of working to downsize the very entity that pays his salary, and against NOAA’s research on global warming, Rep. Lamar Smith wants government scientists to turn over more and more papers that will supposedly prove that they’re faking their research for political gain, refusing to accept even the remote possibility that maybe there is at least a sliver of a fragment of a chance that they’re being completely honest and accurate. But this is what happens when a politician has every incentive to cry conspiracy and refuse to give experts the benefit of the doubt. Still, the first rule of a good public fishing expedition is to have at least a modicum of plausible deniability when called out on it, and rather than back off even a little bit, in an effort to keep appeasing his most zealous anti-science constituents, Smith filed a list of keywords he wants NOAA to use when searching for documents he wants to see as part of his “investigation,” keywords which blatantly show him digging really hard for out of context “proof” of a conspiracy theory repeated almost daily in the echo chambers of conservative radio shows.

Basically, anything mentioning President Obama, the talks in Paris, ocean buoys, or the UN is to be turned over for quote-mining to create the next e-mail scandal, because to Smith’s rabidly denialist supporters, the previous one was a smoking gun ignored by the media, when in reality, the press obsessed over the e-mails for months and barely covered the hearings in which they were shown to be nothing more than cherry-picking for incriminating quotes. If anyone wanted to give Smith the benefit of the doubt, his latest demands are proof that he is only interested in partisan spectacle and conspiracy-mongering. Just like rabid creationists who cling to the same old, long debunked canards, global warming denialists will continue to regurgitate the same old cherry-picked charts, cite the same non-scandals of their own invention, and pretend that they haven’t been shown wrong by everyone and their grandma twice already. To them, the issue of global warming is not a scientific one; it’s a plot by a sinister global elite, the New World Order, to strip them of their freedoms and property. Politicians like Smith are either cynically exploiting these hysterical fears, or falling for them. And I’m really not sure which is worse…


counting the days

Nowadays not only is there an app for whatever you want to do, but it can count how many times or how intensely you do it. It’s all part of the marketing pitch for the idea of The Quantified Self, an easy to follow, real-time analysis of your habits and patterns which should ideally help you be a better you, with seemingly objective progress tracking showing how well you’re doing. However, does quantifying everything you do mean that you’re improving your stats at the expense of the happiness of doing the tasks being measured? Jordan Etkin from Duke University thinks it may, after experimenting on 105 students and seeing them report getting less joy out of doing simple tasks that were being measured than out of just doing them. We know that enjoyment depends on an individual’s balance of external and intrinsic motivation to do something, and the experiments set out to measure just how much knowing that one is being quantified affects the intrinsic rewards of doing something, so we’d know how to decide when it’s important to quantify something without destroying people’s motivation for performing the task in the first place.

From a simple coding standpoint, it’s easy to record a data point to a persistent store. You can even do it behind the scenes in a way that doesn’t detract from an app’s typical functionality. But what are you going to do with that data? Why is it useful? If you can’t think of a reason why you should store it, the correct approach is to ignore it. In much the same way, Etkin measured the number of fish shapes colored by students, or the number of steps they took, while paying the same token sum to the measured experimental group and the free-to-do-whatever control group performing the same tasks. In effect, she placed an additional burden on one group with quantification, because the group coloring shapes had to log every finished one, while the walking group had to check their pedometers. Normal, even fun activities turned into, well, work. More shapes got drawn and more steps were taken, but enjoyment scores were lower. A follow-up experiment measuring reading in a work-oriented way and just for fun saw the same pattern. When quantified, more done equals less enjoyed.
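That first point — quietly recording a data point every time a feature is used, without changing what the feature does — can be sketched in a few lines of Python. All the names here are invented for illustration.

```python
import time
from functools import wraps

EVENT_LOG = []  # stand-in for a real persistent store

def quantified(event_name):
    """Wrap a function so every call logs an event behind the scenes."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)  # the app behaves exactly as before
            EVENT_LOG.append({"event": event_name, "at": time.time()})
            return result
        return wrapper
    return decorator

@quantified("shape_colored")
def color_shape(shape):
    return f"colored {shape}"

color_shape("fish")
color_shape("fish")
print(len(EVENT_LOG))  # → 2
```

The measurement costs the developer almost nothing, which is exactly why the “but should we?” question gets skipped so often.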

In some ways, this is common sense. Take something intrinsically fun and turn it into something measured, analyzed, and dissected, and it’s a lot less appealing. This is why people with really, really serious cooking chops and talent may never want to become professional chefs: their outlet for stress would become work tied to paying the bills. But if you give them the proper external incentive, like total creative freedom over the menu, or a high enough salary to quit their current jobs at a profit, their perspective may change. What Etkin did was confirm the need for a motivation to measure something, because if students who read more, or drew more, or walked more steps got bigger payments, the process would be a lot more fun since they’d get to look forward to being rewarded for the additional effort of measuring and logging data. Same as people trying to lose weight by logging their calories and exercise, or factory workers moving as quickly as possible to crank out more widgets while getting paid on a per widget basis.

So don’t buy that FitBit because you’re curious about how many steps you take; buy it because you want to take 20% more steps than you usually do for a week, then reward yourself with something you wanted to buy when you hit that goal. And if you’re a manager who wants to see an increase in your employees’ productivity, don’t just measure them and reprimand them when the numbers miss your goal. Give them something to look forward to, like a company lunch, a night out, or free snacks in the office kitchen. Otherwise, science shows you’re not going to get much out of them, and considering the research on why people hate their jobs and want to quit, you’d actually be giving them a good reason to start calling recruiters and plotting their escape from your cube farm. Sure, Etkin’s study seems fairly obvious at first blush, but it’s downright maddening how many people don’t actually understand how to effectively quantify a task, especially in the workplace. Programmers are asked all the time to track this or that simply because a data point can be captured. Maybe after reading about Etkin’s work, the people making these requests will think twice about why they’re measuring what they are…


creepy mannequins

Every psychology class mentions Stanley Milgram’s famous experiment on how far people can be pushed to execute horrific orders, and it has since been the template for today’s studies of what it takes to awaken our inner sociopath in an otherwise normally functioning brain. We already know that enough money will make you reconsider the natural human aversion to harming others, especially if you don’t actually have to see the pain you inflict firsthand. But what actually goes on in the brains of those following orders or inducements to hurt someone? Are they suffering some internal crisis when they harm others, are they simply pushing the button with no sense of agency of their own, or is something more complicated going on? To find out, European researchers repeated Milgram’s experiment with several important modern twists. They added buttons, a tone that played when a button was pressed, and EEG readings of the electrical activity inside the participants’ brains while they were doing their part.

Now, Milgram’s research was inspired by the excuses of Nazis at Nuremberg who defended themselves by saying they were simply following orders, so his tests focused on how orders are delivered and the subsequent reactions, making verbal commands a key part of the setup. In this follow-up, how orders were delivered didn’t matter, only the fact that an order was issued, so the researchers played a tone after participants pressed a button they were told to press. If the subjects were making conscious decisions and sticking to them, previous research suggested, the tone would seem to come notably faster after the button press than if they were simply acting on auto-pilot. We’re not sure why this happens, but accidental events seem to be processed more slowly than intentional ones, which is why gauging the subjects’ subjective sense of how quickly the tone followed their requested or voluntary actions was a crucial part of the experiment. Some participants were free to choose to apply a small electric “shock” to an anonymous victim, take away £20 from him or her, or press a button that did nothing, serving as the control group. Others were simply told which buttons to push by the researchers.

What they found was quite interesting. First and foremost, the group told what to do reported a longer delay between pressing the button and hearing the tone, exactly as expected. This meant that taking orders made them feel less in control of their actions, their brains evaluating what just happened as involuntary despite the fact that it required their agency to be carried out. Secondly, a thorough analysis of their EEG patterns, using activity known as event-related potential, or ERP, to gauge the cognitive load of an action in response to a stimulus, showed that they processed their decisions significantly less than the control group. In other words, ordering people to perform a task makes them feel as if they’re not actually the ones doing it, and they give the task and its consequences less thought. Revealingly, topographical maps of the neural activity support this notion: the areas where you’d find the prefrontal cortex, the seat of decision-making, showed the most activation in both groups, but were a lot dimmer in the participants who took orders. As scary as it sounds, it seems our brains might just be wired to follow orders with less thought and care than we give our own choices. Why? We’ll need more studies to find out, but I’d bet it has to do with us evolving as a social species rather than as loners.
