Archives For science

experimental plant

Several years ago, scientists at the sustainable farming research center Rothamsted decided to splice a gene from peppermint into wheat to help ward off aphid infestations. You see, when hungry adult aphids decide it’s time for a snack, the essential oil given off by peppermint mimics a danger signal for the insect. Imagine trying to bite into your sandwich just as a fire alarm goes off over your head with no end in sight. That’s exactly what happens to aphids, and the thought was that this ability could be spliced into wheat to reduce pesticide use while increasing yield. It should also be noted that Rothamsted is a non-profit; the research initiative was its own, and no commercial venture was involved in any way, shape, or form. Sadly, the test crops failed to live up to expectations and deter aphids with the pheromone they produced, EβF. Another big, important note here is that despite the scary name, this is a naturally occurring pheromone you will find in the peppermint oil recommended by virtually every organic grower out there.

Of course, noting the minor nature of the genetic modification involved, the total lack of a profit motive on the part of a highly respected research facility, the sustainability-driven thinking which motivated the experiment, and the fact that the desired aphid repellent was derived from a very well known, natural source, anti-GMO activists decided that they wanted to destroy test crops in more mature stages of the research anyway because GMOs are bad. No, wait, that was just the excuse. Scientists planting GMO plants? They obviously want to kill people to put money in Monsanto’s pockets with evil Frankenfoods. With the experiment failing, they’re probably celebrating that all those farmers trying to protect their wheat lost a potential means of doing so, and that they won’t have to drive to the research plots in the middle of the night to set everything on fire. The group which planned to carry out this vandalism, like many other anti-GMO organizations, lacks any solid or scientifically valid reason to fear these crops, and was acting based solely on its paranoia.

Indeed, anti-GMO activism is basically the climate change denial of the left. It revolves around a fear of change and bases itself on fear-mongering, repeating one debunked assertion after another ad nauseam, with no interest in debate and even less in actually getting educated about the topic at hand. While anti-GMO zealots rush to condemn any Big Ag study showing no identifiable issues with GMO consumption over any criticism they can manage, real or imagined, with no study ever being good enough, they cling to horrifically bad papers created by scientists specifically trying to pander to their fears, scientists who threaten to preemptively sue any critics who might ruin the launch party for their anti-GMO polemics. Had Big Ag scientists done anything remotely like that, the very same people singing praises to Séralini would have demanded their heads on the chopping block. Hell, they only need to know that scientists work in the industry to declare them part of a genocidal New World Order conspiracy. But you see, because these activists are driven by fear and paranoia, to them it’s ok to sabotage the very safety experiments they demanded, ensuring that scientists can’t do their research, while praising junk pseudoscience meant to bilk them.

alpha centauri bb

Carbon is a great element for kick-starting life thanks to its uncanny ability to form reactive, but still stable molecules perfect for creating proteins, amino acids, and even the backbone of DNA and RNA, or their functional equivalents. And yet, according to those who argue that the reason we exist is that the universe is somehow fine-tuned for us, or that life exists as a random, one in a trillion chance, it shouldn’t even be here. You see, when the first stars started fusing hydrogen into helium-4 deep in their searing cores, the resulting helium atoms should have combined into beryllium-8, which decays so quickly that there should have been virtually no chance for another helium atom to combine with it to form carbon-12, the isotope which accounts for 98.9% of all carbon in the known universe and makes life possible. According to astronomer Fred Hoyle, whose misuse of the anthropic principle has been used to justify many an anti-evolutionary screed, since carbon based life exists, there must be a mechanism by which this beryllium bottleneck is resolved, and the clue to this mechanism must lie in the conditions under which stars fuse helium.

You see, when atoms fuse into a new element, the newly formed nucleus has to land at one of its natural, stable energy levels; otherwise, the combined energy of the protons and neutrons, along with the energy of their kinetic motion, will prevent the fusion. Hoyle’s insight was that carbon-12 must have an excited state whose energy closely matches the combined energy of a beryllium-8 nucleus and a passing helium-4 atom, so that the fusion is resonantly enhanced and can happen within beryllium-8’s fleeting lifetime, with the excited nucleus then settling down into the stable carbon-12 ground state. Imagine rolling magnetic spheres down a hill, and as these magnets roll, they collide. Some will hit each other with just enough energy to keep rolling as a single unit and absorb new spheres they run into, others combine, then break apart, or just roll on their own. The angle, the force of impact, and the speed and masses of the spheres all have to be right for them to join, and when they do, they’ll have to stay that way long enough to settle down. This is quantum resonance in a nutshell, and it’s what made carbon-12 possible.
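In modern terms, the resonance Hoyle predicted is the key step of the triple-alpha process: the combined rest energy of a beryllium-8 and a helium-4 nucleus sits roughly 7.37 MeV above the carbon-12 ground state, and the excited state Hoyle called for was later found just above that threshold. A sketch of the chain, with the commonly quoted measured values:

```latex
% The triple-alpha process, with the resonance Hoyle predicted
\begin{align*}
{}^{4}\mathrm{He} + {}^{4}\mathrm{He} &\rightleftharpoons {}^{8}\mathrm{Be}
  &&\text{(decays back in } \sim 10^{-16}\,\mathrm{s}\text{)}\\
{}^{8}\mathrm{Be} + {}^{4}\mathrm{He} &\longrightarrow {}^{12}\mathrm{C}^{*}
  &&\text{(Hoyle state at } \approx 7.65\,\mathrm{MeV}\text{)}\\
{}^{12}\mathrm{C}^{*} &\longrightarrow {}^{12}\mathrm{C} + 2\gamma
  &&\text{(decay to the stable ground state)}
\end{align*}
```

The narrow gap between the 7.37 MeV threshold and the 7.65 MeV Hoyle state is exactly what the thermal kinetic energy inside a helium-burning core can bridge.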

But while this is all well and good, especially for us carbon based lifeforms, where does Hoyle’s discovery leave us in regard to the question of whether the universe was fine-tuned for life? If we assume that only carbon based life is possible, and that the only life that could exist is what exists today, the argument makes sense. However, those assumptions don’t hold. Even if there were no quantum resonance between helium-4, beryllium-8, and carbon-12 in the earliest stars from which the first atoms of organic molecules were spawned, the first stars were massive, and it’s a reasonable guess that when they went supernova, they would have created carbon, silicon, and metals like aluminium and titanium. All four elements can be useful in creating molecules which can form the chemical backbones of living organisms. In fact, it’s entirely possible that we could one day find alien life based on silicon, and that in some corner of the galaxy there are microbes with genomes wound around a titanium scaffold. Life does not have to exist as we know it, and only as we know it. We didn’t have to exist either; it’s just lucky for us that we did.

When creationists try to come up with the probability that life exactly the way we understand it, or have at least observed it to exist, came out the way it has, against all other probabilities, they are bound to get ridiculous odds against us being here. But what they’re really doing is calculating the probability of a reaction-for-reaction, mutation-for-mutation, event-for-event repeat of the entire history of life on Earth, all 4 billion years of it, based on the self-absorbed and faulty assumption that because we’re here, there must be a reason why that’s the case. The idea that there was no real predisposition towards modern humans evolving in Africa, or that life could exist without abundant carbon-12 to help bind its molecules, is just something they cannot accept, because the notion that our universe created us by accident and that we could be gone in the blink of a cosmic eye, replaced by something unlike ourselves in every way, is just too scary for them. They simply don’t know how to deal with not being somehow special, or with nature not really being interested in whether they exist or not, just as it hasn’t been for at least 13.8 billion years…

paper crowd

Amazon’s Mechanical Turk lets you assign menial, yet attention-intensive tasks to actual human beings, despite the name’s ambiguity, and those humans want to be paid consistently and fairly for their efforts. This is why in March of last year, they launched the Dynamo platform, which allows them to warn each other about bad clients who were stingy or unreasonable. The brainchild of Stanford PhD student Niloufar Salehi, who wanted to study digital labor rights, it came about in large part because many of those stingy, unfair clients were academics. With small budgets for surveys and for preparing complex machine learning algorithms, researchers were often paying an insultingly token sum to the workers they recruited, something Dynamo’s rules and guidelines for ethical academic requests argue hurts the quality of their research by limiting their labor pool to the truly desperate and ill-qualified.

It’s hard to know what’s worse, the fact that we give so little funding to researchers that they have to rely on strangers willing to work for scraps, or that academics are fine with the notion of paying the equivalent of prison odd job wages to their remote assistants. Part of the problem is that the issues are interdependent. Many academics can’t afford to pay more and still meet their targets for sufficient survey responses or machine learning algorithms’ training set sizes. Turkers most qualified for the job can’t afford to accept less than 10 cents a minute, which doesn’t sound like much until you realize that 15,000 units of work taking half an hour each come out to $45,000 or so, a hefty chunk of many grad students’ budgets. Something’s gotta give, and without more money from universities and states, which is highly unlikely, academics will either keep underpaying the crowds they recruit, or end up doing less ambitious research, if not less research in general…
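The arithmetic behind that figure is simple enough to check; a quick sketch using the rate and task counts quoted above:

```python
# Cost of a crowdsourced study at the minimum rate qualified Turkers accept
rate_per_minute = 0.10      # 10 cents a minute
minutes_per_unit = 30       # each unit of work takes half an hour
units = 15_000              # survey responses or labeled training examples

cost_per_unit = rate_per_minute * minutes_per_unit   # $3.00 per unit
total_cost = cost_per_unit * units

print(f"${total_cost:,.0f}")  # $45,000
```

Even at a rate most of us would consider insulting, an ambitious study quickly outgrows a typical grad student budget.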

happy alarm

In a quote frequently attributed to John Lennon, a boy was asked what he wanted to be when he grew up, and he replied that he wanted to be happy. He was then told that he did not understand the question, to which he retorted that the person asking him didn’t understand life. And he’s right: we all want to be happy. That’s especially true at work, where most of us will spend nearly a third of our waking hours and deal with countless stresses big and small on a daily basis, seemingly for nothing more than a paycheck. Work should be interesting and give us some sense of worth and purpose, but 70% of all workers are apathetic about, or outright hate, their jobs, which clearly means whatever your bosses are doing to make you happy simply isn’t working. Though I’m sort of making a big assumption that your bosses are even trying to make you happy, much less care that you exist, or that they think they need to worry about whether you like the job they have you doing. And that, objectively, is perhaps the most worrisome part of it all…

You see, social scientists and doctors have long figured out what makes you happy, why it is in the interest of every company’s bottom line to keep employees happy, and how your perpetual case of the Mondays could be eliminated, or at least severely reduced. Most American workers, as we can see from the statistics, are dealing with the stress of being at a job they dislike, which increases their levels of cortisol, a stress hormone that hardens arteries and increases the odds of having a heart attack. If they’re not there yet, the prolonged stress also causes a host of very unpleasant issues like irregular sleep, disordered eating, anxiety, and depression. In fact, close to a quarter of the American workforce is depressed, which is estimated to cost over $23 billion per year in lost productivity. We also know exactly why people hate their jobs, and contrary to what many business owners think, it has nothing to do with employees being greedy and lazy; it’s usually terrible management policy and the feeling of being utterly disposable and irrelevant.

People who are unemployed for a year or more are almost as likely to be depressed as working stiffs, and their odds of being diagnosed with depression go up by nearly 2% every time they double their time out of work. So while a bad job can make people miserable, not having one is every bit as bad, if not worse. And these are just the numbers for one year of unemployment, so what lies beyond that could be far scarier, since every trend shows mental health suffers without work or purpose, and physical health quickly deteriorates as well. This leaves us stuck in an odd dilemma. We know that people need to, and want to, work, and we know full well that when they hate their jobs, their performance lags, as does their health, forming a vicious cycle of bad work and disengagement contributing to poor health, worse work, and more disaffection on the job. It seems obvious that something should be done to address this, yet for the last 15 years, there has been no change in the stats. Why? The short answer? Terrible management.

One of this blog’s earliest posts explored experiments in which scientists confirmed that a group often chooses a leader based on little more than bravado, overlooking actual results. In follow-up experiments, we even saw mathematical evidence that companies would be better off randomly assigning their managers instead of promoting them the way they do now. Managers also tend to think they’re a lot better than they actually are, while in reality, half the workforce has put in a two week notice specifically because of their bosses, and despite often giving themselves very high praise, managers are almost as disengaged as their employees, with 65% of them simply going through the motions of another day. Go back to the most frequent reasons why people are not happy at work. Half of them are about being micromanaged, left in the dark, and treated like a disposable widget rather than a person. Workers are primed to see themselves as less valuable, if not useless, and we know that negative priming leads to terrible performance. Tell people they should just feel lucky you don’t fire them, and you’ve effectively set them up for failure.

Think about your own worst bosses. They never hesitated to tell you that you were wrong, or to look down on you, or to watch over your shoulder because they had no trust in you, and they treated all the inevitable slip-ups or errors, even the ones you immediately caught and corrected yourself, as justification for watching you like a hawk, right? Or if not, did they simply never talk to you about anything, merely dropping off more work and expecting it to be done silently? Combine those daily putdowns with a constant threat of being outsourced simply to save a dollar, being shoved into an open office where you have no personal space or privacy and face constant distractions, on top of a lack of any career progression path in sight, and tell me that’s a job even those who live to work would find engaging. As many organizations grow, managers dissociate from the people they are managing, seeing them as little more than numbers on a spreadsheet, because that’s what they are in their daily list of things to do. This breeds disengagement, which breeds frustration, which causes talented employees to flee for greener pastures.

Keeping one’s employees happy should not be one of those HBR think pieces that makes your executive team “ooh” and “ahh” in a meeting where you run through PowerPoint slides showing how much money you’re losing to turnover, depression, and bad management. It should be the top priority of middle managers and supervisors, because happy employees work harder, show loyalty and dedication, and help recruit more good talent. Yes, spending on benefits like catered lunches, gym memberships, better healthcare, easy access to daycare, or flexible time off policies sounds exorbitant, I know, and many businesses can’t afford all of that. But showing employees that you care, listening to them, and treating them with respect pays off as the engaged employees become more productive and dedicated. In a knowledge economy, there’s no excuse for the employee-employer relationship to be much like one between a master and an indentured servant. It should be a business partnership with benefits for both parties extending well beyond “here’s your paycheck, now get to work.” The science says so, and besides, when you’re a manager, isn’t keeping employees motivated and productive your top priority?

rainbow flag splash

Last year, a study conducted by poli sci grad student Michael LaCour showed that just a simple conversation with a canvasser who talked to people about marriage equality and then identified as gay was enough to sway minds towards the acceptance of same sex marriage. This was an odd result, because people don’t tend to change their views on things like homosexuality after a brief conversation with a stranger, no matter how polite the stranger was. However, the data in the paper was very convincing, and it seemed entirely possible that the people surveyed hadn’t given much thought to marriage equality and, upon meeting a gay person who didn’t fit the toxic stereotype propagated by the far right, wanted to seem supportive to meet social expectations, or might’ve even been swayed off the fence towards equality. After all, the data was there, and it looked so convincing and perfect. In fact, it looked a little too perfect, particularly when it came to just how many people seemed open to talking to strangers who randomly showed up at their doors, and how inhumanly consistent their voiced opinions had been over time. It was just… off.

When doing a social sciences experiment, the biggest stumbling block is the response rate and how small it usually is. Back in my undergrad days, I remember freezing my tail end off trying to gather some responses for a survey on urban development in the middle of an Ohio winter and collecting just ten useful responses in three hours. But LaCour was armed with money and, unlike me, was able to pay up to $100 for each respondent’s time, so he was able to enroll 10,000 or so people with a 12% response rate. Which is a problem, because his budget would have had to be over $1 million, a lot more than he actually had, and a 12% response rate on the first try simply doesn’t happen. Attempts to replicate it yielded less than a 1% response rate even when there was money involved. Slowly but surely, as another researcher and his suspicious colleagues looked deeper, signs of fraud mounted until the conclusion was inescapable. The data was a sham. Its stability and integrity looked so fantastically sound because no study was actually done.
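The red flags are easy to re-run; a quick sanity check using the figures quoted above (treating the $100 as the per-respondent payment LaCour claimed he could make):

```python
# Why LaCour's enrollment figures didn't add up
enrolled = 10_000         # respondents he claimed to enroll
response_rate = 0.12      # his claimed 12% response rate
payment = 100             # up to $100 per respondent

households_contacted = enrolled / response_rate  # doors that had to be knocked on
max_budget = enrolled * payment                  # payments alone, before any overhead

print(round(households_contacted), max_budget)
```

Roughly 83,000 contacted households and a seven-figure payment budget, for a study run by a grad student: either number alone should have raised eyebrows.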

New York Magazine has the details on how exactly the study came undone, and some parts of the story, held up in the comments as proof of universities’ supposed grand Marxist-homosexual conspiracy to turn education into anti-capitalist and pro-gay propaganda, as one is bound to expect, actually shine a light on why it took so long for the fraud to be discovered. It’s easy to just declare that researchers didn’t look at the study too closely because they wanted it to be true, that finding some empirical proof that sitting a homophobe down with a well dressed and successful gay person for half an hour would solve social ills was so tempting to accept that no one wanted to question it. Easy, but wrong. If you’ve ever spent time with academics or tried to become one in grad school, you’d know that the reason it took exceptional tenacity to track down and expose LaCour’s fraud is that scientists, by and large, are no longer paid to check, review, and replicate others’ work. Their incentive is to generate new papers and secure grants to pay for their labs and administrators’ often outrageous salaries, and that’s it.

Scientists have always lived by the paradigm of “publish or perish,” the idea that if you publish a constant stream of quality work in good journals, your career continues, and once you stop, you are no longer relevant or necessary and should quit. But nowadays, the pressure to publish to get tenure and secure grants is so strong that the number of papers on which you have a byline more or less seals your future. Forget doing five or six good papers a year; no one really cares how good they were unless they’re Nobel Prize worthy, and you’re now expected to have a hundred publications or more when you’re being considered for tenure. Quality has lost to quantity. It’s one of the big reasons why I decided not to pursue a PhD despite having the grades and more than enough desire to do research. When my only incentives would be to churn out volume and try to hit up DARPA or the USAF for grant money against another 800 voices as loud and every bit as desperate to keep their jobs as mine, how could I possibly focus on quality and do bigger, more ambitious projects based on my own work and current literature?

And this is not limited to engineering and the hard sciences; social science has the same problems as well. Peer review is done on a volunteer basis, papers can coast through without any critical oversight, fraud can go unnoticed and fester for years, and all academic administrators want to do is keep pushing scientists to churn out more papers at a faster and faster rate. Scientists are moving so quickly that they’re breaking things, and should they decide to slow down and fix one of the things that’s been broken, they get denied tenure and tossed aside. Likewise, those who bring in attention and money, and whose research gets into top tier journals no matter how, get a lot of political pull, and fact checking their research not only interferes with the designated job of cranking out new papers in bulk, it also draws ire from the star scientists in question and their benefactors in the administration, which can cost the fact checkers their careers. You could not build a better environment to bury fraud than today’s research institutions unless you started to normalize bribes and political lobbyists commissioning studies to back their agendas.

So scientists didn’t skip checking LaCour’s work because they wanted to root for gay marriage with all their hearts after being brainwashed by some radical leftist cabal in the 1960s; they didn’t check his work because their employers give them every possible incentive not to, unless they stumble into it while working on the same exact questions, which is what actually happened in Broockman’s case when he found the evidence of fraud. And what makes this case so very, very harmful is that I doubt LaCour is such a staunch supporter of gay rights that he committed the fraud he did in the name of marriage and social equality. He just wanted to secure his job and did it by any means he thought necessary. Did he give any thought to how his dishonesty impacts the world outside of academia? Unlikely. How one’s work affects the people outside one’s ivory tower is very important, especially nowadays, when scientists are seen as odd, not quite human creatures isolated from everyday reality by an alarming majority of those exposed to their work, and will be faulted en masse for their colleagues’ shortcomings or dishonesty.

Now, scientists are well aware of the problem I’ve been detailing, and there is a lot of talk about some sort of post-publication peer review, or even making peer review compensated work with the express purpose of weeding out bad papers and fraud, not just something done by volunteers in their spare time. But that’s like trying to cure cancer by treating just the metastatic tumors rather than with aggressive resection and chemotherapy. Instead of measuring the volume of papers a scientist has published, we need to develop metrics for quality. How many labs found the same results? How much new research sprang from these findings, based not only on direct citation count, but on citations of the research which cites the original work? We need to reward not the ability to write a lot of papers, but ambition, scale, and accuracy. When scientists know that a big project and a lot of follow-up work confirming their results is the only way to get tenure, they will be very hesitant to pull off brazen frauds, since thorough peer review would now be one of a scientist’s most important tasks rather than an afterthought in the hunt for more bylines…
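As a toy illustration of the second metric proposed above, here is a hypothetical sketch that counts both direct citations and the citations earned by the citing papers; the function name, the weighting, and the tiny graph are all invented for the example:

```python
# Hypothetical "second-order impact" metric: direct citations plus
# the citations earned by the papers that cite the original work.
def second_order_impact(cites: dict[str, list[str]], paper: str) -> int:
    """cites maps a paper ID to the list of paper IDs that cite it."""
    direct = cites.get(paper, [])
    downstream = sum(len(cites.get(p, [])) for p in direct)
    return len(direct) + downstream

# Toy citation graph: A is cited by B and C; B is in turn cited by D and E.
graph = {"A": ["B", "C"], "B": ["D", "E"]}
print(second_order_impact(graph, "A"))  # 2 direct + 2 downstream = 4
```

A paper that merely pads a byline scores low on a measure like this, while one that seeds a line of follow-up research keeps accumulating credit through the work it spawned.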

axion model

Not that long ago, I wrote an open letter to the Standard Model, the theoretical, in the scientific sense of the word, framework that describes the structure and behavior of the particles that make up the universe as we know it. While this letter noted many of its successes, especially the confirmation of the Higgs boson, it referred to the need for it to somehow be broken for the world of physics to move forward, citing hints of something that lies beyond it. Considering that it was a pretty vague reference, I thought it would be a good idea to revisit it and elaborate on why we need something beyond the Standard Model to explain the universe. Yes, part of the problem has to do with the transition between quantum and classical states, which we are still trying to understand, but the bigger problem is the vast chasm between the masses of each and every particle covered by the model and the mass scale at which gravity takes over from the quantum world, responsible for the cosmos as we know it on a macro scale.

So why is the Higgs some 20 orders of magnitude too light to help explain the gap between the behavior of quantum particles and the odd gravitational entities that we’re pretty sure make up the fabric of space and time? Well, the answer is that we really don’t know. There are a few ideas, and one in vogue right now gives new life to a nearly 40 year old hypothesis of a particle known as the axion. The thought is that low mass particles with no charge nudged the mass of the Higgs into what it is today during the period of extremely rapid inflation right after the Big Bang, creating the gap we see now, rather than the Higgs having come into existence at its current mass of 125 GeV, never gaining or losing those 5 vanity giga-electron-volts that health and fitness magazines for subatomic particles are obsessed with. A field of axions could slightly warp space and time, making all sorts of subtle changes that cumulatively have a big effect on the universe, which also makes them great candidates for dark matter.
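For a rough sense of that gap, take the Planck mass as the scale where gravity takes over; using it as the comparison point is the standard framing of the hierarchy problem, and it gives a figure in the same ballpark as the one quoted above:

```python
import math

# The hierarchy problem in one line: the gap between the measured Higgs
# mass and the Planck scale, where quantum gravity should dominate.
higgs_mass_gev = 125          # measured Higgs boson mass, ~125 GeV
planck_mass_gev = 1.22e19     # Planck mass expressed in GeV

orders_of_magnitude = math.log10(planck_mass_gev / higgs_mass_gev)
print(round(orders_of_magnitude))  # ≈ 17 orders of magnitude
```

Whether you call it 17 orders or "some 20," nothing in the Standard Model explains why the Higgs sits so absurdly far below the scale of gravity.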

All right, so people have been predicting the existence of axions for decades, and they seem to fill in so many blank spots in cosmology so well that they might be the next biggest thing in all of physics. But do they actually exist? Well, they might. We think some may have been spotted in anomalous X-ray emissions from the sun, though not every expert agrees, and there are a few experiments hunting for stronger evidence of them. Should we find unequivocal proof that they exist just as the equations predict they should, with the right mass and charge, one could argue it would be a discovery even bigger than that of the Higgs, because it would solve three massive problems in cosmology and quantum mechanics in one swoop. But until we do, we’re still stuck with the alarming thought that after the LHC ramps up to full power, it may not show us a new particle or evidence of new physics, and future colliders may never have the oomph to cover the enormous void between the Standard Model and gravity. And this is why it would be so great if we detect axions, or if the LHC manages to break particle physics as we know it…

spider attack

Are you a religious fundamentalist who despises modern science as the root of all evil? Do you think vaccines will give your children autism or allow them to become pawns of a sinister global cabal bent on world domination through population control? Do you believe that cancer is cured by prayer and sacred herbs instead of clinically proven surgery and chemotherapy? Do trials of engineered viruses capable of controlling malignant tumors make you fear the coming Rapture as man plays God? Do you want to protect your children from this unholy progress and stop a future in which we might become space-faring cyborgs with indefinite lifespans? Well, do I have great news for you! Only two states in America won’t let you claim religious exemptions when it comes to decisions about the medical well-being of your children, so you could readily neglect, pray, and fear-monger all you want as long as you say you’re doing it for religious reasons, and should your child die or fall gravely ill, you might not even be prosecuted, unlike a secularist.

Noted atheist, scientist, and author, Jerry Coyne is extremely unhappy with the current situation regarding religious exemption laws. By his logic, it’s more or less an excuse to fatally neglect, or even kill children with few or no consequences and sets up a different legal standard for theists than secularists and atheists, which means that these exemptions need to be struck down. Not even someone who loves playing Devil’s advocate could really argue here. Our society is set up to give everyone equal representation under the law and while this doesn’t happen in practice, I would think that any law which allows you to get out of jail for cruelty to children because you’re very sincere in your belief that God personally told you that little Timmy or Susie didn’t need any surgery or medication, while someone who doesn’t play the same card can lose custody rights, do serious time, and even face the death penalty, is asinine to the point of being offensive.

It’s a national shame that we allow religion to be an excuse for something we seem to all agree is beyond the pale, and it needs to stop. People should be allowed to worship as they wish and are certainly entitled to voice their religious views regardless how offensive we find them since freedom of speech should also allow for freedom to offend. But one’s right to religious practice needs to stop where the health and well-being of others begins, doubly so when the others are not old enough to make their own decisions or understand the harm that may be inflicted by an authority figure they love and trust. And again, the double standard that allows one to declare a fervent religious belief to escape prosecution that’s considered fair and appropriate for equally guilty offenders who did not make such claims, turns religious freedom into religious privileges, something that American fundamentalists convinced themselves to be entitled to but should not exist under the law. People of faith are being mocked and subjected to legal bullying, we’re told, as the very same oppressed people of faith routinely get away with negligent homicide.

Even worse, the very same fundamentalists, and those who grovel to them, constantly accuse atheists and secularists, the ones who will actually face the consequences of ignorantly malicious parenting, by the way, of not loving their children enough because their worldview holds that all humans are just flesh, blood, and chemistry. What they’ll conveniently leave out is that large fundamentalist families often have large broods not because they just so love children that they can’t stop, but because “it’s their duty to raise soldiers for Christ,” which means having child after child and keeping them locked away from modernity so they’ll emerge from their Quiverfull cocoon oblivious to any other worldview. No wonder they panic when they see Muslim immigrants having high birth rates. It was their strategy to crowd out the secularists by sheer numbers, and now they have competition from equally zealous imams! And I suppose, when to fundamentalists their kids are just arrows in a quiver, they can maintain their purity in the eyes of their faith and just add another arrow should one be broken by their negligence…

late night

Every summer, there’s always something in my inbox about going to college, or back to it, for an undergraduate degree in computer science. Lots of people want to become programmers. It’s one of the few in-demand fields that keeps growing and growing with few limits, where a starting salary allows for comfortable student loan repayments and a quick path to savings, and you’re often creating something new, which keeps things fun and exciting. Working in IT when you’ve just left college and live alone can be a very rewarding experience. Hell, if I did it all over again, I’d have gone to grad school sooner, though it’s true that I’m rather biased. When the work starts getting too stale or repetitive, there’s the luxury of simply taking your skill set elsewhere after calling recruiters and telling them you need a change of scenery, and there are so many people working on new projects that you can always get involved in building something from scratch. Of course, all this comes with a catch. Computer science is notoriously hard to study and competitive. Most of the people who take first year classes will fail them and never earn a degree.

Although, some are saying nowadays, do you really even need a degree? Programming is a lot like art. If you have a degree in fine arts, have a deep grasp of history, and can debate the pros and cons of particular techniques, that’s fantastic. But if you’re just really good at making art that sells with little to no formal training, are you any less of an artist than someone with a B.A. or an M.A. focused on the art you’re creating? You might not know what Medieval artisans would have called your approach back in the day, or what steps you’re missing, but frankly, who gives a damn if the result is in demand and the whole thing just works? This idea underpins the efforts of tech investors who go out of their way to court teenagers into trying to create startups in the Bay Area, telling them that college is for chumps who can’t run a company, betting what seems like a lot of money to teens right out of high school that one of their projects will become the next Facebook, or Uber, or Google. It’s a pure numbers game in which those whose money is burning a hole in their pockets are looking for lower risk to achieve higher returns, and these talented teens need a lot less startup cash than experienced adults.

This isn’t outright exploitation; the young programmers will definitely get something out of all of this, and were this an apprenticeship program, it would be a damn good one. However, the sad truth is that fewer than 1 out of 10 of their ideas will succeed, and that success will typically involve a sale to one of the larger companies in the Bay rather than a corporate behemoth they control. In the next few years, nearly all of them will work in typical jobs or consult, and it’s there that the lack of formalism they could only really get in college will be felt most acutely. You could learn everything about programming and software architecture on your own, true. But a college will help by pointing out what you don’t yet know you need to learn. Getting solid guidance in how to flesh out your understanding of computing is definitely worth the tuition, and the money they make now can go a long way towards paying for it. Understanding only basic scalability, how to keep prototypes working for real life customers, and quick deployment limits them to the fairly rare IT organizations that go into and out of business at breakneck pace.

Here’s the point of all this. If you’re considering a career in computer science and see features about teenagers supposedly becoming millionaires writing apps and not bothering with college, and decide that if they can do it, you can too, don’t. These are talented kids given opportunities few will have in a very exclusive programming enclave in which they will spend many years. If a line of code looks like gibberish to you, you need college, and the majority of the jobs that will be available to you will require it as a prerequisite to even get an interview. Despite what you’re often told in tech headlines, most successful tech companies are run by people in their 30s and 40s rather than ambitious college dropouts for whom all of Silicon Valley opened its wallets to great fanfare, and when those companies do B2B sales, you’re going to need some architects with graduate degrees and seasoned leadership with a lot of experience in their clients’ industry to create a stable business. Just like theater students dream of Hollywood, programmers often dream of the Valley. Both dreams have very similar outcomes.

seamus

When we moved to LA to pursue our non-entertainment related dreams, we decided that when you’re basically trying to live out your fantasies, you might as well try to fulfill all of them. So we soon found ourselves at a shelter, looking at a relatively small, grumpy wookie who wasn’t quite sure what to make of us. Over the next several days we got used to each other and he showed us that underneath the gruff exterior was a fun-loving pup who just wanted some affection and attention, along with belly rubs. Lots and lots of belly rubs. We gave him a scrub down, a trim at the groomers’, changed his name to Seamus because frankly, he looked like one, and took him home. Almost a year later, he’s very much a part of our family, and one of our absolute favorite things about him is how smart and affectionate he turned out to be. We don’t know what kind of a mix he is, but his parents must have been very intelligent breeds, and while I’m sure there are dogs smarter than him out there, he’s definitely no slouch when it comes to brainpower.

And living with a sapient non-human made me think quite a bit about artificial intelligence. Why would we consider something or someone intelligent? Well, because Seamus is clever, he has an actual personality instead of just reflexive reactions to food, water, and opportunities to mate, which, sadly, is no longer an option for him thanks to a little snip snip at the shelter. If I throw treats his way to lure him somewhere he doesn’t want to go and he’s seen this trick before, his reaction is just to look at me and take a step back. Not every treat will do either. If it’s not chewy and gamey, he wants nothing to do with it. He’s very careful with whom he’s friendly, and after a past as a stray, he’s always ready to show other dogs how tough he can be when they stare too long or won’t leave him alone. Finally, from the scientific standpoint, he can pass the mirror test, and when he gets bored, he plays with his toys and raises a ruckus so we play with him too. By most measures, we would call him an intelligent entity and definitely treat him like one.

When people talk about biological intelligence being different from the artificial kind, they usually refer to something they can’t quite put their fingers on, which immediately gives Singularitarians room to dismiss their objections as “vitalism” and therefore unnecessary to address. But that’s not right at all, because the thing on which non-Singularitarians often can’t put their finger is personality, an intricate, messy process of responding to the environment that involves more than meeting needs or following a routine. Seamus might want a treat, but he wants this kind of treat and he knows he will need to shake or sit to be allowed to have it, and if he doesn’t get it, he will voice both his dismay and frustration, reactions to something he sees as unfair in the environment around him and now wants to correct. And not all of his reactions are food related. He’s excited to see us after we’ve left him alone for a little while, and he misses us when we’re gone. My laptop, on the other hand, couldn’t give less of a damn whether I’m home or not.

No problem, say Singularitarians, we’ll just give computers goals and motivations so they can come up with a personality and certain preferences! Hell, we can give them reactions you could confuse for emotions too! After all, if it walks like a duck and quacks like a duck, who cares if it’s a biological duck or a cybernetic one if you can’t tell the difference? And it’s true, you could just build a robotic copy of Seamus, including a mimicry of his personality, and say that you’ve built an artificial intelligence as smart as a clever dog. But why? What’s the point? How does that put a piece of technology meant for complex calculations and logical flows to its intended use? Why go to all this trouble to recreate something we already have, for machines that don’t need it? There’s nothing divinely special in biological intelligence, but to dismiss it as just another set of computations you can mimic with some code is reductionist to the point of absurdity, an exercise in behavioral mimicry for the sake of achieving… what exactly?
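To make that point concrete, here’s a deliberately naive sketch of what such behavioral mimicry often boils down to in practice. Everything here is hypothetical, the class name, the “preferences,” the canned reactions; it’s a lookup table wearing a personality costume, with nothing behind it that actually wants anything.

```python
class RoboSeamus:
    """A fake 'personality': every preference and mood is a hard-coded lookup."""

    def __init__(self):
        # A fixed table standing in for a real dog's tastes, not learned from a life.
        self.treat_reactions = {"chewy_gamey": "accept", "dry_biscuit": "refuse"}
        self.lures_seen = set()

    def offer_treat(self, treat, as_lure=False):
        # "Wariness" here is just set membership, not distrust born of experience.
        if as_lure and treat in self.lures_seen:
            return "look at you and step back"
        if as_lure:
            self.lures_seen.add(treat)
        return self.treat_reactions.get(treat, "sniff and walk away")

bot = RoboSeamus()
print(bot.offer_treat("chewy_gamey"))                 # accept
print(bot.offer_treat("chewy_gamey", as_lure=True))   # accept (first lure works)
print(bot.offer_treat("chewy_gamey", as_lure=True))   # look at you and step back
```

The quacking may be indistinguishable from the real duck’s for a while, but there’s no internal life here to align or reason with, which is exactly the gap the duck test papers over.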

So many people all over the news seem so wrapped up in imagining AIs that have a humanoid personality and act the way we would, warning us about the need to align their morals, ethics, and value systems with ours, but how many of them ask why we would even want to try to build them? When we have problems that could be efficiently solved by computers, let’s program the right solutions or teach them the parameters of the problem so they can solve it in a way which yields valuable insights for us. But what problem do we solve by trying to create something able to pass for human for a little while, then having to raise it so it won’t get mad at us and decide to nuke us into a real world version of Mad Max? Personally, I’m not the least bit worried about the AI boogeymen from the sci-fi world becoming real. I’m more worried about a curiosity built for no other reason than to show it can be done being programmed to get offended or even violent, because that’s how we can get, turning a cold, logical machine into a wreck of unpredictable pseudo-emotions that could end up with its creators maimed or killed.

atom

Dear Standard Model, we need to talk. Now, now, don’t get the wrong idea. It’s not that you are not doing your job well, in fact the exact opposite is what we want to address. It may sound odd that a number of scientists are getting frustrated when they can’t seem to break you, but look at the situation from their angle. For physics to take a huge leap forward, it needs to outgrow you, much like general relativity was the next iteration of Newtonian physics, and like neo-Darwinian synthesis combined genetics and natural selection for evolutionary research to advance in new and meaningful directions. But before we can start working on your eventual replacement, we’ll need to discover your shortfalls, something outside of your predictive power. And right now, the sad truth is that we can’t. We’re desperately stuck and are looking for a way out.

The last attempt focused on particles with an exotic quark content, neutral B mesons, and their extremely rare decay into a muon/anti-muon pair, a matter/anti-matter pair of the electron’s husky cousins. The idea was to smash particles together and show enough such pairs forming out of the debris to exceed your predicted rate. Sadly, that refused to happen. Not only were the decays in ranges described by you, but so far within them that we can’t even hint at possibly breaking you with another attempt. All hopes are now on the huge power boost to the Large Hadron Collider to maybe, just maybe, create a decay path or a particle debris cloud you can’t explain, giving scientists a peek at what lies beyond the world in your framework, and possible solutions to the paradoxes and mysteries that still exist. Although you’re supremely helpful and were one of the biggest scientific triumphs of the last century, now you’re actually holding us back.
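To give a sense of just how comfortably “within the ranges” a result like this lands, here’s a back-of-the-envelope comparison. The figures are approximate published values for the Bs meson’s branching fraction to a muon pair, a Standard Model prediction of roughly 3.66 in a billion against a combined CMS and LHCb measurement of roughly 2.8 in a billion; treat the exact numbers as illustrative, not authoritative.

```python
# Back-of-the-envelope check of how well a measurement agrees with a prediction.
# Values are approximate published figures for the Bs -> mu+ mu- branching
# fraction; they are illustrative, not authoritative.

sm_prediction = 3.66e-9   # Standard Model predicted branching fraction
measured      = 2.8e-9    # rough combined CMS + LHCb central value
uncertainty   = 0.7e-9    # rough experimental uncertainty

# The "pull": how many standard deviations separate measurement and prediction.
pull = abs(sm_prediction - measured) / uncertainty
print(f"deviation from the Standard Model: {pull:.1f} sigma")
```

A deviation this small, comfortably under the two sigma physicists shrug at and nowhere near the five sigma needed to claim a discovery, is exactly the frustration: the measurement agrees with the prediction too well to point anywhere new.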

Again, this isn’t a grudge. We like you and we’ll still have work for you. But science can’t simply coast on what it has already accomplished; it must find answers to questions that still loom long after a discovery is made, or better yet, are introduced by a discovery. Regardless of what all those misguided postmodernist sophists preach, science thrives on disproving itself and finding out that an axiom is actually wrong or woefully incomplete. Overthrowing and improving existing theories or introducing brand new ones is how we advance and what wins Nobel Prizes. And we won’t hold ourselves back just because you won’t break today or even tomorrow. There will come a day when we pass your limitations and the media across the world declares that the hunt for your successor is on. Because you see, we know there has to be something more lying beneath you, there has to be, so we can explain the anomalies with which bleeding edge work is peppered. And we will break you to find it. Nothing personal. It’s just science.