[image: axion model]

Not that long ago, I wrote an open letter to the Standard Model, the theoretical, in the scientific sense of the word, framework that describes the structure and behavior of the particles that make up the universe as we know it. While this letter confirmed many of its successes, especially the confirmation of the Higgs boson, it referred to the need for the model to somehow be broken for the world of physics to move forward, giving us knowledge of something that lies beyond it. Considering that this was a pretty vague reference, I thought it would be a good idea to revisit it and elaborate on why we need something beyond the Standard Model to explain the universe. Yes, part of the problem has to do with the transition between quantum and classical states, which we are still trying to understand, but the bigger problem is the vast chasm between the masses of each and every particle covered by the model and the mass scale at which gravity takes over from the quantum world, the scale responsible for the cosmos as we know it on a macro level.

So why is the Higgs some 17 orders of magnitude too light to help explain the gap between the behavior of quantum particles and the odd gravitational entities that we’re pretty sure make up the fabric of space and time? Well, the answer is that we really don’t know. There are a few ideas, and one in vogue right now gives new life to a nearly 40-year-old hypothesis about a particle known as an axion. The thought is that low-mass particles with no charge nudged the mass of the Higgs into what it is today during the period of extremely rapid inflation right after the Big Bang, creating the gap we see now, rather than the Higgs coming into existence at its current mass of 125 GeV and never gaining or losing those 5 vanity giga-electron-volts the health and fitness magazines for subatomic particles are obsessed with. A field of axions could slightly warp space and time, making all sorts of subtle changes that cumulatively have a big effect on the universe, which also makes them great candidates for dark matter.
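For a rough sense of the mismatch, here’s a back-of-the-envelope ratio of the Planck scale, where gravity should become as strong as the other quantum forces, to the measured Higgs mass; the numbers are the standard textbook values, not anything specific to the axion proposal:

$$\frac{M_{\text{Planck}}}{m_{H}} \approx \frac{1.2 \times 10^{19}\ \text{GeV}}{125\ \text{GeV}} \approx 10^{17}$$

In other words, without something like an axion field doing the nudging, we have no accepted explanation for why the Higgs sits a full seventeen orders of magnitude below the scale where quantum gravity should kick in.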

All right, so people have been predicting the existence of axions for decades, and they seem to fill in so many blank spots in cosmology so well that they might be the next big thing in all of physics. But do they actually exist? Well, they might. We think some may have been found in anomalous X-ray emissions from the sun, though not every expert agrees, and there are a few experiments hunting for stronger evidence of them. Should we find unequivocal proof that they exist just as the equations predict they should, with the right mass and charge, one could argue we would have a discovery even bigger than that of the Higgs, because it would solve three massive problems in cosmology and quantum mechanics in one swoop. But until we do, we’re still stuck with the alarming thought that even after the LHC ramps up to full power, it may not show us a new particle or evidence of new physics, and future colliders would never have the oomph to cover the enormous void between the Standard Model and gravitational particles. And this is why it would be so great if we detect axions, or if the LHC manages to break particle physics as we know it…


[image: surreal woman]

Sunlight, to borrow from an old saying, makes a terrific disinfectant. While over the last several weeks we’ve been exposed to more coverage of the Duggars and their Quiverfull cult than most of us would like, the upside has been that the media’s glare has finally given those who believe that the fundamentalist lifestyle is all about betterment through faith and teaching kids a higher moral standard a peek at religious zealotry’s ugly underbelly. Abuses routinely covered up and silenced, dysfunction re-branded as normality, and scientific illiteracy wrapped into vapid technobabble that comes to the inescapable conclusion that you are a dirty, amoral, disgusting wreck of a human being who should be listening to the fundamentalists tell you how every facet of your life should be lived. This obsession with controlling others is why they loathe people learning about cosmology, evolution, and human sexuality from actual scientific data. A message of broadening one’s horizons and taking control of one’s life doesn’t reserve a place for petty, self-appointed tyrants who think they’re special enough to get direct orders from God.

But if the fundamentalists close their ears and scream really, really loudly when confronted with facts they don’t like, what do they actually learn? Well, the muckrakers at Gawker got a hold of one of the Advanced Training Institute’s Wisdom Booklets about sex, written by a cult preacher with a long and colorful history of aiding and abetting sexual abuse in his flock, and went to town mocking it, as is their custom. The temptation to simply mock this booklet is perfectly understandable, because it’s a work of abject inanity which sounds as if it was written by an exceptionally guilt-ridden preteen who only recently found out about the anatomical differences between men and women, was left alone with the internet for a few weeks, and then proceeded to write down every wild, off-the-wall idea about human reproduction that came to mind with no filter whatsoever. Realizing that a senior citizen with a family is behind it only makes things worse, especially when it’s full of asinine assertions like this, posing as legitimate medical research…

Doctors have discovered that the seed of the man is an alien substance to the woman. It triggers responses similar to those of an allergic reaction. A woman who has a husband is able to develop “immunity” to this reaction; however, a promiscuous woman’s immune system becomes confused and unable to distinguish alien substances. This confusion is a key to the development of cancer.

Relax, reading this as a non-fundamentalist and exhaling expletives under your breath is pretty much the exact reaction those of us who actually took a science class and remained conscious during it should have. No, there’s no way you can get cancer from semen. It is possible to catch one of the carcinogenic strains of HPV through unprotected sex and then develop cancer, but connecting that to the booklet’s claim is a stretch made simply for the sake of sounding scientific. And even if exposure to semen could be carcinogenic in and of itself, wouldn’t the humble condom eliminate the risk for all those unwed hedonists? Of course, not only do fundamentalist-driven, abstinence-only sex ed materials treat reliable prophylactic measures as if they either don’t exist or never work, but groups associated with them actually want those measures banned, because when people know about them, they make their own choices on how to plan their families. And the zealots can’t have that. No, they need you to get married, quick, and start popping out soldiers for your deity, no matter the consequences of doing so under their control, because the alternative is to give you real freedom.

According to the cult that spawned the Duggars and many more families like them, you are not supposed to be free to make your own choices in life. Should you have sex before marriage, or even engage in some heavy petting, you’re dirty and used, unworthy of love or real relationships. Their obsession with your purity would be considered a genuine pathology, with a real DSM-5 diagnosis to go along with it, if we weren’t so accommodating of anything claimed to be a religious belief. People this obsessed with sex, who’s having it, and in what position, don’t need to be placated or reasoned with; they need to be seen by a mental health professional. At the same time, I understand why they have conniptions when a set of genitals does something it’s not supposed to in their minds. After all, they deny themselves a healthy sex life and commit to relationships in which power is allocated by arbitrary translations of an ancient book, and the non-anointed must do the bidding of those who claim to be anointed without question.

Stuck in a world where everything is a sin, they imagine life outside of it to be an endless buffet of consequence-free base pleasures while they mortgage their lives on the tenuous premise of some sort of divine reward when they shed their mortal coils. But the more they’re tempted to quit their faith, or even question it, and the less able they feel to do so, the more they lash out with portrayals of those not like them as dirty, sinful, and used up. And those of us who refuse to subscribe to fundamentalism, while being compared to worn-out, beat-up bikes, to chewed-up gum on the sidewalk, and to the flea-infested rats that carried the Black Death on Satan’s evil orders, are supposed to fawn over these under-educated would-be theocrats and praise their “superior morals” in return. Then, when we predictably fail to be grateful for being rhetorically defecated on and voice our complaints, we’re decried in the media as “angry atheists who don’t realize what’s good for them.” At least these self-appointed moral guardians are finally being exposed for what they are, and the inanity they preach is being dissected with mockery.


[image: ultron]

There’s something to be said about not taking comic books and sci-fi too seriously when you’re trying to predict the future and prepare for a potential disaster. For example, in Age of Ultron, a mysterious alien artificial intelligence, tamed by a playboy bazillionaire using a human wrecking ball as a lab assistant in a process that makes most computer scientists weep when it’s described during the film, decides that because its mission is to save the world, it must wipe out humanity, because humans are violent. It’s a plot so old, one imagines that an encyclopedia listing every time it’s been used is itself covered in cobwebs under its own hefty weight, and yet we have many famous computer scientists and engineers taking it seriously for some reason. Yes, it’s possible to build a machine that would turn on humanity because the programmers made a mistake or because it was malicious by design, but we always omit the humans responsible for the design and implementation, and go straight to treating the machine as its own entity in which the error lies.

And the same error repeats itself in an interesting, but ultimately flawed, idea by Zeljko Svedic, which says that an advanced intellect like an Ultron wouldn’t even bother with humans, since its goals would probably send it deep into the Arctic and then to the stars. Once an intelligence far beyond our own emerges, we’re just gnats that can be ignored while it goes about completing its hard-to-imagine and even harder-to-understand plans. Do you really care about a colony of bees or two and what they do? Do you take time out of your day to explain to them why it’s important for you to build rockets and launch satellites, or how you go about it? Though you might knock out a beehive or two when building your launch pads, you have no ill feelings toward the bees and would only get rid of as many of them as you have to and no more. A hyper-intelligent AI system, the argument goes, would do its business the same exact way.

And while, sadly, Vice decided to use Eliezer Yudkowsky for peer review when writing its quick overview, he was able to illustrate the right caveat to an AI that will just do its thing with only a cursory awareness of the humans around it. This AI is not going to live in a vacuum; in its likeliest iteration it will need vast amounts of space and energy to run itself, and we humans are sort of in charge of both at the moment, and will continue to be if and when it emerges. It’s going to have to interact with us, and while it might ultimately leave us alone, it will need resources we control and with which we may not be willing to part. So as rough as it is for me to admit, I’ll have to side with Yudkowsky here in saying that dealing with a hyper-intelligent AI which is not cooperating with humans is more likely to lead to conflict than to a separation. Simply put, it will need what we have, and if it doesn’t know how to ask nicely, or doesn’t think it has to, it may just decide to take it by force, kind of like we would do if we were really determined.

Still, the big flaw in all of this, overlooked by both Yudkowsky and Svedic, is that AI will not emerge ex nihilo, just like we see in sci-fi. It’s more probable for a baby to be born and become an evil genius at a single-digit age than for a computer to do this. In other words, Stewie is far more likely to go from fiction to fact than Ultron. But because they don’t know how it could happen, they make the leap to building a world around a black box that contains the inner workings of this hyper-AI construct, as if how it’s built is irrelevant, when it’s actually the most important thing about any artificially intelligent system. Yudkowsky has written millions, literally millions, of words about the future of humanity in a world where hyper-intelligent AI awakens, but not a word about what will make it hyper-intelligent that doesn’t come down to “can run a Google search and do math in a fraction of a second.” Even the smartest and most powerful AIs will be limited by the sum of our knowledge, which is actually a lot more of a curse than a blessing.

Human knowledge is fallible, temporary, and self-contradictory. We hope that when we apply immense pattern sifters to billions of pages of data collected by different fields, we will find profound insights, but nature does not work that way. Just because you made up some big, scary equations doesn’t mean they will actually give you anything of value in the end, and every time a new study overturns any of these data points, you’ll have to change those equations and run the whole thing from scratch again. When you bank on Watson discovering the recipe for a fully functioning warp drive, you’re assuming that you were able to prune astrophysics of just about every contradictory idea about time and space, both quantum and macro-cosmic, that you know every caveat involved in the calculations or have built the means of handling them into Watson, that all the data you’re using is completely correct, and that nature really will follow the rules your computers spat out after days of number crunching. It’s asinine to think it’s so simple.

It’s tempting and grandiose to think of ourselves as being able to create something that’s much better than us, something vastly smarter, more resilient, and immortal to boot, a legacy that will last forever. But it’s just not going to happen. Our best bet to do that is to improve on ourselves, to keep an eye on what’s truly important, use the best of what nature gave us and harness the technology we’ve built and understanding we’ve amassed to overcome our limitations. We can make careers out of writing countless tomes pontificating on things we don’t understand and on coping with a world that is almost certainly never going to come to pass. Or we could build new things and explore what’s actually possible and how we can get there. I understand that it’s far easier to do the former than the latter, but all things that have a tangible effect on the real world force you not to take the easy way out. That’s just the way it is.


[image: game controller]

Recently, a number of tech news sites announced that two people were convicted as felons for stealing about $8,000 in virtual loot from Blizzard’s Diablo III, trumpeting the case as a possible beginning of real-world punishments for virtual crimes. However, since the crime of which they were found guilty was infecting their victims with malware, then using said malware to take control of their characters and steal their stuff to resell for real-world money, their case is nothing new as far as the law is concerned. Basically, the powers that be at Blizzard just didn’t want the duo to get off with a slap on the wrist for their behavior, and were only able to secure damages thanks to the fact that a virus designed to open a backdoor into a victim’s system was used. But there’s definitely some pressure to turn virtual crimes in multiplayer games into real ones…

[A] Canadian newscaster reported that some advocates would like to see people charged with virtual rape when they modify games like Grand Theft Auto so … their characters can simulate sexually assaulting other players. Given the increasing realism of video games, research being done to improve virtual reality, and expected popularity of VR glasses like those soon to be commercially available from Oculus Rift, there would almost certainly be more cases of crimes committed in virtual spaces spilling out into IRL courts.

All right, let’s think about that for a moment. GTA is a game in which you play a sociopath who’s crime-spreeing his way around whatever locale the latest edition features. Mods that enable all sorts of disturbing acts are kind of expected within the environment in question. But consider a really important point. A virtual sexual assault can be stopped by quitting the game, while a real one can’t just be stopped as soon as it starts. Likewise, the crime is against an object in servers’ memory, not a real person. How exactly would we prosecute harm to a virtual character that could be restored like nothing ever happened? The same thing would apply to a digital murder, as in the Diablo III case. What was the harm if the characters and their loot were reset? We can’t bring a real murder victim back to life, so we punish people for taking a life, but what if we could, and could simply settle the question of how much to compensate for mental anguish?

Of course it would be nice to see harsher treatment of online stalking and harassment, since their potential to do a lot of serious harm is often underestimated by those who have few interactions in today’s virtual worlds, but prosecuting people for virtual rape, or murder, or theft, in games no less, seems like a big overreach. It’s one thing when such crimes are carried out, or threatened, against very real people through the use of MMORPGs or social media. But it’s something altogether different when the crime can be undone with a few clicks of a mouse and the victim is nothing more than a large collection of ones and zeroes. If we criminalize what some people do to virtual characters in one category of games, what sort of precedent would it set for others? Who would investigate these crimes? How? Who would be obliged to track every report and record every incident? It’s one of those thoughts that comes from a good place, but it poses more problems than it solves and raises a lot of delicate free speech questions…


[image: spider attack]

Are you a religious fundamentalist who despises modern science as the root of all evil? Do you think vaccines will give your children autism or allow them to become pawns of a sinister global cabal bent on world domination through population control? Do you believe that cancer is cured by prayer and sacred herbs instead of clinically proven surgery and chemotherapy? Do trials of engineered viruses capable of controlling malignant tumors make you fear the coming Rapture as man plays God? Do you want to protect your children from this unholy progress and stop a future in which we might become space-faring cyborgs with indefinite lifespans? Well, do I have great news for you! Only two states in America won’t let you claim religious exemptions when it comes to decisions about the medical well-being of your children, so you could readily neglect, pray, and fear-monger all you want as long as you say you’re doing it for religious reasons, and should your child die or fall gravely ill, you might not even be prosecuted, unlike a secularist.

Noted atheist, scientist, and author Jerry Coyne is extremely unhappy with the current state of religious exemption laws. By his logic, they’re more or less an excuse to fatally neglect, or even kill, children with few or no consequences, and they set up a different legal standard for theists than for secularists and atheists, which means these exemptions need to be struck down. Not even someone who loves playing Devil’s advocate could really argue here. Our society is set up to give everyone equal treatment under the law, and while this doesn’t always happen in practice, I would think that any law which allows you to get out of jail for cruelty to children because you’re very sincere in your belief that God personally told you that little Timmy or Susie didn’t need any surgery or medication, while someone who doesn’t play the same card can lose custody rights, do serious time, and even face the death penalty, is asinine to the point of being offensive.

It’s a national shame that we allow religion to be an excuse for something we all seem to agree is beyond the pale, and it needs to stop. People should be allowed to worship as they wish, and they are certainly entitled to voice their religious views regardless of how offensive we find them, since freedom of speech should also allow for freedom to offend. But one’s right to religious practice needs to stop where the health and well-being of others begins, doubly so when those others are not old enough to make their own decisions or understand the harm that may be inflicted by an authority figure they love and trust. And again, the double standard that allows one to declare a fervent religious belief to escape a prosecution considered fair and appropriate for equally guilty offenders who make no such claims turns religious freedom into religious privilege, something American fundamentalists have convinced themselves they’re entitled to, but which should not exist under the law. People of faith are being mocked and subjected to legal bullying, we’re told, as the very same oppressed people of faith routinely get away with negligent homicide.

Even worse, the very same fundamentalists, and those who grovel to them, constantly bombard us with the idea that atheists and secularists, the ones who will actually face the consequences of ignorantly malicious parenting by the way, don’t love their children enough because their worldview holds that all humans are just flesh, blood, and chemistry. What they conveniently leave out is that fundamentalist families often have large broods not because they just love children so much that they can’t stop, but because “it’s their duty to raise soldiers for Christ,” which means having child after child and keeping them locked away from modernity so they’ll emerge from their Quiverfull cocoon oblivious to any other worldview. No wonder they panic when they see Muslim immigrants having high birth rates. It was their strategy to crowd out the secularists by sheer numbers, and now they have competition from equally zealous imams! And I suppose, when to fundamentalists their kids are just arrows in a quiver, they can maintain their purity in the eyes of their faith and just add another arrow should one be broken by their negligence…


[image: touch screen]

Hiring people is difficult, no question, and in few places is this more true than in IT, because we’ve decided to eschew certifications, we don’t require licenses, and our field is so vast that we have to specialize in ways that make it difficult to evaluate us in casual interviews. With a lawyer, you can see that he or she passed the bar and had good grades. With a doctor, you can see years of experience and a medical license. You don’t have to ask them technical questions because they obviously passed the basic requirements. But software engineers work in such a variety of environments and with such different systems that they’re difficult to objectively evaluate. What makes one coder or architect better than another? Consequently, tech blogs are filled with just about every kind of awful advice for hiring them possible, and this post is the worst offender I’ve seen so far, even more out of touch and self-indulgent than Jeff Atwood’s attempt.

What makes it so bad? It was written by someone who doesn’t seem to know how real programmers outside of Silicon Valley work, urging future employers to demand submissions to open, public code repositories like GitHub, along with portfolios of finished projects to explore, and telling them in all seriousness to dismiss those who won’t publish their code or don’t have bite-sized portfolio projects ready for quick review. Even yours truly, living and working in the Silicon Beach scene, basically Bay Area Jr. for all intents and purposes, would be fired for posting code from work in an instant. Most programmers do not work on open source projects but on closed source software meant for internal use or for sale as a closed source, cloud-based, or on-premises product. We have to deal with patents, lawyers, and often regulators and customers before a single method or function becomes public knowledge. But the author, Eric Elliot, ignores this so blithely, it just boggles the mind. It’s as if he’s forgotten that companies actually have trade secrets.

Even worse are Elliot’s suggestions for how to gauge an engineer’s skills. He advocates assigning a real unit of work, straight from the company’s team queue. Not only is this ripe for abuse, because it basically gets you free or heavily discounted highly skilled work, but it’s also going to confuse a candidate, who needs to know the existing codebase to come up with the right solution to the problem, all while you’re breathing down his or her neck. And if you pick an issue that requires no insight into the rest of your product, you’ve done the equivalent of testing a marathoner on how well she runs a 100 meter dash. This test can only be too easy to be useful or too hard to give you any real insight into someone’s thought process. Should you decide to forgo that, Elliot wants you to give the candidate a real project from your to-do list while paying $100 per hour, introducing everything wrong with the previous suggestion with the added bonus of now spending company money on a terrible, useless, irrelevant test.

Continuing the irrelevant recommendations, Elliot also wants candidates to have blogs and long-running accounts on StackOverflow, a site famous in the industry as a place for programmers to ask questions and advise each other. Now sure, I have a blog, but it’s not usually about software, and after long days of designing databases, or writing code, or technical discussions, the last thing I want is to write posts about all of the above and have to promote them so they actually get read by a real, live human being other than an employer every once in a while, instead of just shouting into the digital darkness to have it seen once every few years when I’m job hunting. Likewise, how fair is it to expect me to do my work and spend every free moment advising other coders for the sake of advising them so it looks good to a future employer? At some point between all the blogging, speaking, freelancing, contributing to open source projects, writing books, giving presentations, and whatever else Elliot expects of me, when the hell am I going to have time to actually do my damn job? If I were good enough to teach code to millions, I wouldn’t need him to hire me.

But despite being mostly bad, Elliot’s post does contain two actually good suggestions for trying to gauge a programmer’s or architect’s worth. One is asking the candidate about a real problem you’re having, along with problems from their past and how they solved them. You should try to remove the coding requirement so you can just follow the pure abstract thought and research skills for which you’re ultimately paying. Syntax is bullshit; you can Google the right way to type some command in a few minutes. The ability to find the root of a problem and ask the right questions to solve it is what makes a good computer scientist you’ll want to hire, and experience in diagnosing complex issues and weighing solutions to them is what makes a great one who will be an asset to the company. This is how my current employer hired me, and their respect for both my time and my experience is what convinced me to work for them, and the same will apply to any experienced coder you’ll be interviewing. We’re busy people in a stressful situation, but we also have a lot of options and are in high demand. Treat us like you care, please.

And treating your candidates with respect is really what it’s all about. So many companies have no qualms about treating those who apply for jobs as non-entities who can be ignored or given ridiculous criteria for asinine compensation. Techies definitely fare better, but we have our own problems to face. Not only do we get pigeonholed into the equivalent of carpenters who should be working only with cherry or oak, instead of just the best type of wood for the job, but we are now being told to live, breathe, sleep, and talk our jobs 24/7/365 until we take our last breath at the ripe old age of 45, as far as the industry is concerned. Even for the most passionate coders, at some point, you want to stop working and talk about or do something else. This is why I write about popular science and conspiracy theories. I love what I do, working on distributed big data and business intelligence projects for the enterprise space, but I’m more than my job. And yes, when I get home, I’m not going to spend the rest of my day trying to prove to the world that I’m capable of writing a version of FizzBuzz that compiles, no matter what Elliot thinks of that.
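For readers outside the industry, FizzBuzz really is as trivial as that jab implies; it’s a classic screening exercise meant only to prove a candidate can write any working code at all. A minimal sketch in Python, assuming the usual rules (multiples of 3 print “Fizz”, multiples of 5 print “Buzz”, multiples of both print “FizzBuzz”):

```python
# Classic FizzBuzz: print 1 through 100, substituting words for
# multiples of 3 and 5. The whole point is that it takes minutes.
for n in range(1, 101):
    if n % 15 == 0:      # divisible by both 3 and 5
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)
```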


[image: alien bacteria]

We’re using far too many antibiotics. That has been the cry from the FDA and the WHO for the last several years, as more and more antibiotic-resistant strains are found after they’ve colonized or killed patients. Of course, these bacteria aren’t completely immune to our arsenal of drugs; they’re just harder to kill with certain antibiotics or require different ones. But a rather small yet unsettling number have required doctors to use every last antibacterial weapon available to even make a dent in their populations. There’s not much we can do, because in effect, we’re fighting evolution. The more antibiotics we throw at the bacteria, the more chances we give resistant strains to survive and thrive. Doctors are starting to prescribe less, and the pressure on farmers to stop prophylactic use of antibiotics is mounting, but we’re still overdoing it, and the problem is growing and in need of some very creative new solutions.

Enter a genetic engineering technique known as CRISPR-Cas9, which replaces DNA sequences that short snippets of RNA are encoded to identify with ones provided by scientists. It’s not new by any means, but this is the first time it has been used in an evolutionary experiment intended to stem the rise of antibiotic resistance. Israeli researchers essentially gave a bacterial colony immunity to a virus, but at the cost of deleting genes which gave it antibiotic resistance. The bacteria happily propagated the immunity as they grew, while maintaining the new weaknesses to antibiotics which were only marginally effective on them before. There was a real advantage for the bacteria in propagating this new mutation, because the virus to which they were now immune was lethal, acting as the greater selective pressure, while susceptibility to antibiotics just wasn’t an important factor, so the bacteria acted like they got a fair deal.
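To see why that trade works in the bacteria’s favor, here’s a toy simulation of the selective pressures involved; the growth and kill rates are made-up illustrative numbers, not figures from the paper:

```python
import random

# Toy model: two genotypes competing under constant phage exposure.
# "resistant" keeps its antibiotic resistance but dies to the phage;
# "edited" trades resistance away for phage immunity.
# All rates below are illustrative assumptions, not data from the study.
PHAGE_KILL_RATE = 0.4  # chance per generation a non-immune cell dies
GROWTH_RATE = 2        # offspring per surviving cell

populations = {"resistant": 1000, "edited": 1000}

for generation in range(10):
    survivors = {}
    for genotype, count in populations.items():
        if genotype == "edited":
            survivors[genotype] = count  # immune to the phage
        else:
            # binomial survival under phage attack
            survivors[genotype] = sum(
                1 for _ in range(count) if random.random() > PHAGE_KILL_RATE
            )
    populations = {g: n * GROWTH_RATE for g, n in survivors.items()}

total = sum(populations.values())
for genotype, count in populations.items():
    print(f"{genotype}: {count / total:.1%} of population")
# With the lethal phage as the dominant pressure, the edited,
# antibiotic-susceptible genotype quickly takes over the colony.
```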

Even better, the edits were made by a specially engineered virus, meaning you could, in theory, just infect bacteria-prone surfaces with it and demolish their antibiotic resistance, right? Well, yes, that would be possible. However, the researchers worry that new antibiotic-resistant mutations can still evolve, and that there’s no way to prevent the bacteria’s genetic drift from accepting the genes for viral immunity while holding on to existing antibiotic resistance mechanisms. But the technique is still useful for reducing the number of resistant bacteria, or for targeting strains with very well known resistance mechanisms so doctors can use existing antibiotics. Ultimately, what would help the most is more research into new antibiotics, curtailing their use in doctors’ offices for viral infections regardless of the patients’ complaints, and eliminating the preventative use of antibiotics on farm animals. Still, research like this can help us identify new resistant strains and give us a fighting chance to slow them down while we find new ways to fight them.

See: Yosef, I., et al. (2015). Temperate and lytic bacteriophages programmed to sensitize and kill antibiotic-resistant bacteria. PNAS. DOI: 10.1073/pnas.1500107112


[image: late night]

Every summer, there’s always something in my inbox about going to college, or back to it, for an undergraduate degree in computer science. Lots of people want to become programmers. It’s one of the few in-demand fields that keeps growing and growing with few limits, where a starting salary allows for comfortable student loan repayments and a quick path to savings, and you’re often creating something new, which keeps things fun and exciting. Working in IT when you’ve just left college and live alone can be a very rewarding experience. Hell, if I did it all over again, I’d have gone to grad school sooner, though it’s true that I’m rather biased. When the work starts getting too stale or repetitive, there’s the luxury of just taking your skill set elsewhere after calling recruiters and telling them that you need a change of scenery, and there are so many people working on new projects that you can always get involved in building something from scratch. Of course, all this comes with a catch. Computer science is notoriously hard to study and competitive. Most of the people who take first-year classes will fail them and never earn a degree.

Although, some are asking nowadays, do you really even need a degree? Programming is a lot like art. If you have a degree in fine arts, have a deep grasp of history, and can debate the pros and cons of particular techniques, that’s fantastic. But if you’re just really good at making art that sells with little to no formal training, are you any less of an artist than someone with a B.A. or an M.A. focused on the art you’re creating? You might not know what Medieval artisans would have called your approach back in the day, or what steps you’re missing, but frankly, who gives a damn if the result is in demand and the whole thing just works? This idea underpins the efforts of tech investors who go out of their way to court teenagers into trying to create startups in the Bay Area, telling them that college is for chumps who can’t run a company, betting what seems like a lot of money to teens right out of high school that one of their projects will become the next Facebook, or Uber, or Google. It’s a pure numbers game in which those whose money is burning a hole in their pockets are looking for lower risk and higher returns, and these talented teens need a lot less startup cash than experienced adults.

This isn’t outright exploitation; the young programmers will definitely get something out of all of this, and were this an apprenticeship program, it would be a damn good one. However, the sad truth is that fewer than 1 out of 10 of their ideas will succeed, and success will typically mean a sale to one of the larger companies in the Bay rather than a corporate behemoth they control. In the next few years, nearly all of them will work typical jobs or consult, and that’s when the lack of formalism they could only really get in college will be felt more acutely. You could learn everything about programming and software architecture on your own, true. But a college will help you by pointing out what you don’t even know you don’t know, but should. Getting solid guidance in fleshing out your understanding of computing is definitely worth the tuition, and the money they make now can go a long way toward paying it. Understanding only basic scalability, how to keep prototypes working for real-life customers, and quick deployment limits them to the fairly rare IT organizations which go into and out of business at breakneck pace.

Here’s the point of all this. If you’re considering a career in computer science, see features about teenagers supposedly becoming millionaires by writing apps and not bothering with college, and decide that if they can do it, you can too, don’t. These are talented kids given opportunities few will have, in a very exclusive programming enclave in which they will spend many years. If a line of code looks like gibberish to you, you need college, and the majority of the jobs that will be available to you will require a degree as a prerequisite to even get an interview. Despite what you’re often told in tech headlines, most successful tech companies are run by people in their 30s and 40s rather than by ambitious college dropouts for whom all of Silicon Valley opened its wallet to great fanfare, and when those companies do B2B sales, you’re going to need architects with graduate degrees and seasoned leadership with a lot of experience in their clients’ industries to create a stable business. Just like theater students dream of Hollywood, programmers often dream of the Valley. Both dreams have very similar outcomes.
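As for that gibberish test, here’s the kind of line I mean, an example of my own choosing, a perfectly ordinary bit of Python that reads as noise without some grounding in the language:

```python
# A dictionary comprehension with a conditional filter: routine to a
# working programmer, opaque to a newcomer.
word_lengths = {w: len(w) for w in "the quick brown fox".split() if len(w) > 3}
print(word_lengths)  # {'quick': 5, 'brown': 5}
```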


[image: seamus]

When we moved to LA to pursue our non-entertainment related dreams, we decided that when you’re basically trying to live out your fantasies, you might as well try to fulfill all of them. So we soon found ourselves at a shelter, looking at a relatively small, grumpy wookie who wasn’t quite sure what to make of us. Over the next several days we got used to each other and he showed us that underneath the gruff exterior was a fun-loving pup who just wanted some affection and attention, along with belly rubs. Lots and lots of belly rubs. We gave him a scrub down, a trim at the groomers’, changed his name to Seamus because frankly, he looked like one, and took him home. Almost a year later, he’s very much a part of our family, and one of our absolute favorite things about him is how smart and affectionate he turned out to be. We don’t know what kind of a mix he is, but his parents must have been very intelligent breeds, and while I’m sure there are dogs smarter than him out there, he’s definitely no slouch when it comes to brainpower.

And living with a sapient non-human made me think quite a bit about artificial intelligence. Why would we consider something or someone intelligent? Well, because Seamus is clever, he has an actual personality instead of just reflexive reactions to food, water, and chances to mate, which, sadly, are not an option for him anymore thanks to a little snip-snip at the shelter. If I throw treats his way to lure him somewhere he doesn’t want to go and he’s seen this trick before, his reaction is just to look at me and take a step back. Not every treat will do, either. If it’s not chewy and gamey, he wants nothing to do with it. He’s very careful with whom he’s friendly, and after a past as a stray, he’s always ready to show other dogs how tough he can be when they stare too long or won’t leave him alone. Finally, from the scientific standpoint, he can pass the mirror test, and when he gets bored, he plays with his toys and raises a ruckus so we play with him too. By most measures, we would call him an intelligent entity and definitely treat him like one.

When people talk about biological intelligence being different from the artificial kind, they usually refer to something they can’t quite put their fingers on, which immediately gives Singularitarians room to dismiss their objections as “vitalism” unnecessary to address. But that’s not right at all, because the thing on which non-Singularitarians often can’t put their finger is personality, an intricate, messy response to the environment that involves more than meeting needs or following a routine. Seamus might want a treat, but he wants this kind of treat, and he knows he will need to shake or sit to be allowed to have it, and if he doesn’t get it, he will voice both his dismay and frustration, reactions to something he sees as unfair in the environment around him and now wants to correct. And not all of his reactions are food-related. He’s excited to see us after we’ve left him alone for a little while, and he misses us when we’re gone. My laptop, on the other hand, couldn’t give less of a damn whether I’m home or not.

No problem, say Singularitarians, we’ll just give computers goals and motivations so they can come up with a personality and certain preferences! Hell, we can give them reactions you could confuse for emotions too! After all, if it walks like a duck and quacks like a duck, who cares if it’s a biological duck or a cybernetic one if you can’t tell the difference? And it’s true, you could just build a robotic copy of Seamus, including a mimicry of his personality, and say that you’ve built an artificial intelligence as smart as a clever dog. But why? What’s the point? How does this use a piece of technology meant for complex calculations and logical flows for its actual purpose? Why go to all this trouble to recreate something we already have, for machines that don’t need it? There’s nothing divinely special in biological intelligence, but to dismiss it as just another set of computations you can mimic with some code is reductionist to the point of absurdity, an exercise in behavioral mimicry for the sake of achieving… what exactly?

So many people all over the news seem wrapped up in imagining AIs that have humanoid personalities and act the way we would, warning us about the need to align their morals, ethics, and value systems with ours, but how many of them ask why we would even want to try to build them? When we have problems that could be efficiently solved by computers, let’s program the right solutions, or teach the machines the parameters of the problem so they can solve it in a way which yields valuable insights for us. But what problem do we solve by trying to create something able to pass for human for a little while, and then having to raise it so it won’t get mad at us and decide to nuke us into a real-world version of Mad Max? Personally, I’m not the least bit worried about the AI boogeymen of the sci-fi world becoming real. I’m more worried about a curiosity built for no other reason than to show it can be done being programmed to get offended, or even violent, because that’s how we can get, turning a cold, logical machine into a wreck of unpredictable pseudo-emotions that could end up maiming or killing its creators.


[image: despondent man]

Blaming the web and porn for the demise of courtship and proper masculinity has become one of the favorite hobby horses of old pundits who have just entered the hey-you-kids-get-off-my-lawn phase of life and want to seem scientific about it. Unlike the proper manly men of the middle of the last century, we waste our lives playing video games, watching porn, and not having kids to hand off to our wives to raise soon enough. Now, back in the days when Philip Zimbardo was coming of age, that’s when real men roamed the Earth, aimlessly wandering around on a bike, climbing trees, playing soccer, and feverishly masturbating to lingerie catalogs when mom and dad weren’t looking, growing up to have full, rich lives of children and fulfilling marriages. That is, if you ignore the 50% divorce rate and numerous societal ills caused by a complete lack of sex ed in any way, shape, or form. Sounds like an asinine thesis? Well, it is, but sadly that’s exactly what two TED talks, a study, and a book by Zimbardo claim. Men today, they argue, barely qualify as men.

You know a study about masculinity is bad when Slate’s Amanda Hess, a writer one would easily place among those echoing the meme of porn turning men into evil, lusty beings if we go by her past articles, swiftly decimates both the premise and the methodology behind it. The study itself was an exercise in cherry-picking so textbook that it’s no wonder the only media outlets eager to cover it were tabloids. Zimbardo created an arbitrary definition of porn addiction, asked enough young men about their video game and porn habits to hit his target numbers, and went on to declare that we’re suffering through an epidemic of porn-addicted gamers clogging up the gene pool. Even worse, he primed his subjects with his conclusions, first embedding the idea that young men today are useless shells of what they were back in the day, then asking them to more or less agree with negative stereotypes of said young men, who had just been declared worthless by someone in a lab coat. This is not a study. This is punditry masquerading as data, with even less objectivity than a political talking head on prime time news would feign.

Still, there is a small germ of an interesting question buried in this pseudoscientific hatchet job, and that’s the question of how porn affects young men, since access to it is easier than it has ever been, and it can be found at younger and younger ages. Do men indeed have wildly unrealistic ideas about sex, as is so often claimed? Do they really suffer from penis envy at an alarming rate that’s somehow different from anything throughout recorded history? You won’t find these questions actually researched by Zimbardo; they’re merely taken as truisms not to be doubted, which really is worse than doing no study at all. He doesn’t care about the science or the data; all he’s interested in is promoting his self-glorifying, I-hate-young-people-with-penises thesis. Like daytime quacks such as Dr. Oz or Dr. Phil, old fogeys with scientific credentials need to be ignored by their would-be audience and excoriated by actual experts. Instead of helping us understand the world around us better, like scientists should, they’re opting for a cheap, easy minute of fame by regurgitating nasty stereotypes that make other geezers feel warm and fuzzy inside. That is not what real men do. Real men actually do their jobs and try to find out the truth.
