Archives For media


If you were to listen to today’s newspapers, blogs provide nothing but sensationalism and rehashes of other blogs, and are generally run by rather untrustworthy people sitting at their kitchen tables in their underpants, chasing whatever brings in the big hits. Yes, all major newspapers now feature blogs on their sites, but don’t tell their editors that, because all too many of them seem completely unaware of the fact as they boast about the need for newspapers to do the long-form investigative work that seldom gets done anywhere else, and use this to justify keeping a quickly failing business model afloat through paywalls and lawsuits. That’s why it was very odd to see a case against a news clipping service essentially argue that readers need no more than the clipping provides, and that giving away the lead of an article renders the rest of it irrelevant to the public, which is why the clipping service should have to pay the papers.

Now, it’s true that newspapers are sometimes the only ones with the resources to send reporters on complex assignments and work on stories that take months to yield a huge article that shines new light on something we thought we knew, or exposes a case we want to know more about. Since newspaper ownership is now more of a prestige symbol than a viable business, profits can be sacrificed for the PR value of the resulting story. But PR doesn’t pay the bills, and the barriers to investigating big stories keep getting lower and lower. If you’re a professional blogger, you can get a good chunk of your research done with Skype, Google, Twitter, and Facebook, and when you do need to go out and physically track someone down for answers, airfare can certainly be justified since you can work from your laptop anywhere with a wi-fi hotspot. You’ll get a well-researched story, it will cost you less, and it will make you money in ad revenue.

But instead of learning from bloggers how to work more efficiently, newspapers are sticking to a dead-tree-and-ink model and mounting paywall after paywall to protect content that, by their own argument, people don’t even need to read past the first paragraph or two. And that makes me wonder why anyone should read them until a huge story comes along. Why print all that paper? Why bother with good, old-fashioned column inches instead of simply going all digital with an on-demand print option? The big papers are already doing that with e-readers, so why not kill the trees, cut the prices, and get bloggers in on the act, learning from them how to attract hits and make the best use of their time and resources? Not Nick Denton style, mind you, but more like Ars or Wired, who are in the tech game and absolutely get it despite being owned by the dinosaur Conde Nast, which also happens to have made a winning choice in buying Reddit. If there’s so much content that isn’t worth reading past a few paragraphs, why waste time and money trying to get paid for it?



Skeptics and vocal atheists across the web fumed when Newsweek published a cover story that proclaimed the afterlife to be real based on the firsthand account of a neurosurgeon who nearly lost his bout with meningitis. His tale hardly differs from ones we’ve heard many times before from a wide variety of patients who had one foot in the grave and were revived: lush greenery and white fluffy clouds leading to a wonderful and peaceful place, a companion of some sort for what looked like a guided tour of Heaven; all the pieces are there. Such consistency is used by the faithful to argue that there must be an afterlife. How else could the stories be so consistent and feature the same elements? If the patients were simply hallucinating as their brains were slowly but surely shutting down, wouldn’t their experiences be radically different? And aren’t a number of them extremely difficult to explain with what we know about how the brain functions?

It’s not as if people can sense when they’re about to die and are constantly bombarded with a description of how they should ascend to Heaven for eternal peace and rest. Wait a minute, wait a minute… They can and they are. So wouldn’t it make sense that so many near-death accounts of an ascension to an afterlife follow the same pattern because the patients who remember their alleged journey to the great beyond are told day in and day out how this pattern should go? Most of the tales we get come from the Western world and have a very heavy Judeo-Christian influence coloring them. There’s also a rather odd prevalence of ascents to Heaven in these accounts, and cases of people describing torment or something like Hell, while certainly not unheard of in the literature, are exceedingly rare. This either means that much of humanity is good and can look forward to a blissful afterlife, or that most people experience a natural high before death, so they feel peaceful and at ease, dreaming of Heaven, while others still feel pain and see Hell.

And this is where Occam’s Razor has to come into play. The second explanation, while not very comforting or marketable to believers who still doubt the idea of an afterlife, makes the fewest and most probable assumptions, and is therefore more likely to be true in the absence of a stronger case for a genuine Heaven. We tend to choose the afterlife version of the story because we’re all fundamentally scared of death, and no amount of arguing that death is natural, or that it simply has to happen and there’s nothing we can do about it, makes this fear any smaller. The stories give us hope that we won’t simply cease to exist one day. But whereas believers are satisfied by anecdotal tales, skeptics feel that we deserve more than spoon-fed hope. If an afterlife exists, we want to know for sure. We want empirical data. And that’s why trying to sell a story that tickles those who already believe, or want to believe in the worst way, is so rage-inducing to so many skeptics. We need truth and facts to deal with the real world, not truths that people want to hear and facts they can discard at will when they don’t match their fantasy.


At Slate, political blogger David Weigel decided to play media mythbuster and publicly clarify Rick Santorum’s instant punch line of a quote about "smart people" not supporting what he sees as the true conservative movement. And he’s right that Santorum was trying to be bitterly, obnoxiously sarcastic, and was really decrying liberal paternalism rather than saying that there’s no such thing as a smart conservative. Even Santorum’s disdain for colleges can’t really come to the rescue of those who desperately wanted to catch him in a Freudian slip, because his loathing for post-secondary education is based on the 1960s stereotype of colleges as communist havens where the evil, godless reds recruited political sleeper cells. What we can say about his argument that conservatives must resist leftist snobs who want to tell them what to do is that it’s revealingly hypocritical: while he decries liberal paternalism, he very forcefully pushes for rightist paternalism and lashes out at libertarians for not following his lead.

Basically, according to him, liberals telling you what to do is evil because they hate families, and children, and little puppies, and grandma, and apple pie, and they’re sinners constantly mad at God. On the other hand, conservatives publicly declaring what positions are appropriate for married couples during sex, how to run your household, and whom you can date, love, marry, or divorce is perfectly fine because those declarations fall in line with Santorum’s ideology, and you’d better get those listening ears out and pay attention, or the terrorists and gays win as America descends into a bisexual, multi-species orgy while Sharia law rules the land. How that would work, since under Sharia law the punishments for premarital sex and homosexual behavior are extreme, to put it mildly, is left for listeners to imagine in a cold sweat. But details and self-awareness are really not Santorum’s strong suits. If they were, he’d at least pick whether gays or Muslims are the bigger threat, and he wouldn’t blatantly advocate doing the exact same thing he opposes from the other side of the ideological divide. The fact that he can’t do that is scary.


A recently trumpeted paper on astrobiology did some very interesting modeling in a search for places on Mars where some very tough terrestrial microorganisms could survive, and came to a very surprising conclusion. It appears that some 3.2% of the red planet could be habitable by volume, which would make it friendlier to life than our seemingly idyllic world, a world which has been populated with countless living things for billions of years. Now, considering that the Martian surface is about as inhospitable to life as it gets, constantly bathed in radiation potent enough to kill even the most radiation-resistant creatures we know to exist, all of this habitable alien real estate is underground, where the deadly rays can’t reach and the temperature and pressure are just right for liquid water to flow through porous rock. Good news, right? If we just dig enough, a future robot, or better yet, a human astrobiologist, should be able to find honest-to-goodness little aliens.

Yes, little green germs aren’t exactly the little green men of classic science fiction, but hey, at least they’d be real extraterrestrial organisms, and we’d know for a fact that we’re not alone in the universe. If life could arise on two planets in the same solar system, and might be swimming under miles of ice on a moon that looks like a better and better candidate for alien habitation every day, surely the entire universe is teeming with all sorts of living things, right? Hold that thought. One of the big caveats of using these models as a definitive guide for alien hunting is the lack of detail. In their zeal to report a sensational story, most pop sci outlets just repeated the great statistic and used it as a tie-in to Curiosity’s upcoming mission to track down where exactly Martian microbes would settle into a nice colony to call home. But the simulations merely looked at how far down into the red planet’s caves and rocks we could go and still find possible traces of liquid water. The question of an active, frequently stirred and replenished nutrient base for life to function was only briefly mentioned in the paper’s notes for future research, despite being the second main prerequisite for habitability.

Of course it’s perfectly fine for a scientific paper to focus on just one narrow question and leave tangents for a team interested in building on its work. It’s only frustrating when a premise is obviously flimsy or comes out of left field, and all the important details are waved off as something for others to refine. In this case, though, the pop sci news circuit neglected to mention that the authors only set out to see how far Martian rovers could keep following the water, as per NASA’s strategy for finding life on the red planet, and instead reported the results as one big, definitive model showing that Mars is actually more habitable than Earth by volume. All the paper really says is that under the Martian surface, liquid water should be quite plentiful if we extrapolate some models of our own subterranean conditions and ecology to our diminutive, red, desert cousin in the inner solar system, and it does a fairly thorough job of establishing the reasoning behind this conclusion. The leap from where we could find water on Mars to declaring that the typically monolithic block known as "scientists" estimates that the caverns of Mars hold three times the habitable territory by volume of Earth was simply sensationalistic exaggeration. We don’t know how truly hospitable to life Mars really is.
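To see what "more habitable by volume" actually means, here’s a back-of-the-envelope sketch. The 3.2% figure is the paper’s headline number; the roughly 1% figure for Earth is the companion estimate that circulated with the coverage and is an assumption on my part, as are the thresholds of this toy comparison. The point is that the comparison is between fractions of each planet’s volume, not absolute volumes:

```python
from math import pi

# Mean planetary radii in km (standard values)
R_MARS, R_EARTH = 3389.5, 6371.0

# Habitable volume fractions: 3.2% for Mars per the paper's headline figure;
# the ~1% for Earth is the commonly reported companion estimate, assumed here.
F_MARS, F_EARTH = 0.032, 0.01

def volume(radius_km):
    """Volume of a sphere, good enough for a back-of-the-envelope planet."""
    return 4 / 3 * pi * radius_km ** 3

# Mars "wins" on the fraction of its volume that could host life...
print(f"Fraction ratio: {F_MARS / F_EARTH:.1f}x")
# ...but since Mars is much smaller, not on absolute habitable volume.
print(f"Absolute ratio: {F_MARS * volume(R_MARS) / (F_EARTH * volume(R_EARTH)):.2f}x")
```

With these inputs, Mars comes out roughly three times ahead as a fraction but only about half of Earth in absolute habitable volume, which is exactly the kind of nuance the headlines dropped.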

But all that said, Mars is a very promising target for extraterrestrial microbes, and the curtain of radiation which makes life nearly impossible on its surface will actually aid in our search for them. As noted in the reference below, leaving our equipment to soak up the powerful UV rays for a few hours would sterilize it, so any biota found in caverns or after digging several dozen feet into the red soil would be extremely likely to be native rather than forward contamination from our own world. And yes, that means we absolutely should go there and devote as many resources as possible to making a walk on Mars a reality. Of course the R&D involved won’t only benefit astrobiologists, since the necessary reactors, self-sustaining habitats, and treatments to combat the damage caused by constant exposure to radiation could generate tens of billions in revenues and profits for the companies involved in putting together the mission’s toolkits, if they channel them into mass market products ranging from medical devices to infrastructure. Actually, come to think of it, maybe one of the best things we could do for the world’s fragile economy is to go on a hunt for some little green germs and test all the pop sci friendly astrobiology papers like this one on the actual surface of another planet. We’ve tried just about everything else at this point and it doesn’t seem to be working, so why not think outside the box for a bit?

See: Jones, E., Lineweaver, C., & Clarke, J. (2011). An extensive phase space for the potential Martian biosphere. Astrobiology. DOI: 10.1089/ast.2011.0660


Back in September, news outlets worldwide reported the results of a paper which claimed that a supercomputer had a knack for predicting revolutions and key global events, able to pick up on the events of Tahrir Square in Cairo and even get a fix on Osama bin Laden’s location. After reviewing the paper in question, I quickly got a strong vibe of many previous projects that tried to use computing to predict the future, projects a lot like Nexus 7, an attempt to mine reams of correlated data for predictive markers. Amazingly, after decades of failure to do that, there are still computer scientists who believe that all they really need is more data, and then they’ll find what they want. As I wrote before about such attempts, more data simply cannot yield accurate predictions, and the supposed success of the supercomputer in question is actually a retroactive look at speculation, followed by the claim that because negative sentiment about Mubarak was widespread in Egypt, and because rumors of bin Laden hiding out in Pakistan persisted for years, the supercomputer effectively predicted both. This is essentially what economist Tim Harford astutely called the God Complex in a relevant TED presentation.

Now, let’s say that the supercomputer in question was given a set of events like the sudden chain of extreme protests in the Middle East, which saw over a dozen people self-immolate in front of government offices, and spat out a chain of events for the Arab Spring, predicting the toppling of the autocrats in Tunisia and Egypt, the civil war in Libya, and the assassination attempt on Yemen’s Saleh. That would be an impressive result, and the methodology used to arrive at those conclusions would certainly merit further study. However, I am not aware of any computer coming up with such results. In fact, the paper’s model simply reflected all the buzz about the growing protest movements in Egypt, and pinpointed the FATA region of Pakistan as bin Laden’s hiding spot, not even close to where he was actually found, simply echoing the pundits who said that FATA was home to Taliban groups and al Qaeda elements which would be happy to harbor him and very loath to cooperate with any authorities looking for him, no matter what those authorities offered in return. This means that we’re not looking at a predictive model but at a news aggregator which knows how to search for a few preset keywords in the articles it’s fed and arrive at a general “mood” of the media.

As an attitude barometer, this machine is fairly effective. But as a predictive model? Not even close. You can even build the same kind of model at home and see its shortfalls for yourself. Simply make a list of negative words like “autocratic,” “tyrannical,” “aggressive,” and “outcry,” a list of positive words like “approval,” “cheers,” “welcomed,” and “helpful,” and a list of neutral words like “consensus,” “mediation,” “satisfied,” and “relaxed,” then build them into a script that parses a news article and identifies those words. Then, have the script tally how many words fell into each category, giving each category a simple score: 1 for positive, 0 for neutral, and -1 for negative. Average the scores together to get a number between -1 and 1, and assign it to the article. Likewise, identify the cities and countries from which the news comes (virtually always listed in the header of a wire service release) so you can map the location. Finally, assign a location flag and a color between green and red with which to mark your article on a map. Keep scanning article after article until you have a lot of data points, connections, and red and green flags; this step may take you a while unless you have a supercomputer. Then, after you’re all done, take a look at your map and try to predict the next war, revolution, and scientific breakthrough.
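For the code-inclined, here’s what that recipe looks like as a minimal Python sketch. The word lists are the ones from the paragraph above; the sample article, the dateline format, and the color cutoffs are my own illustrative assumptions, not anything from the paper:

```python
import re

# Word lists from the recipe above; a real model would need far larger lexicons.
NEGATIVE = {"autocratic", "tyrannical", "aggressive", "outcry"}
POSITIVE = {"approval", "cheers", "welcomed", "helpful"}
NEUTRAL = {"consensus", "mediation", "satisfied", "relaxed"}
LEXICON = NEGATIVE | POSITIVE | NEUTRAL

def score_article(text):
    """Average +1/0/-1 over every lexicon word found, yielding a tone in [-1, 1]."""
    words = re.findall(r"[a-z]+", text.lower())
    hits = [1 if w in POSITIVE else -1 if w in NEGATIVE else 0
            for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def dateline(text):
    """Pull the city and country from a wire-style 'CITY, Country (Agency)' header."""
    match = re.match(r"([A-Z][A-Za-z ]+), ([A-Za-z ]+) \(", text)
    return match.groups() if match else (None, None)

def flag_color(score):
    """Map the tone score onto a red-to-green flag for the map; cutoffs are arbitrary."""
    return "red" if score < -0.25 else "green" if score > 0.25 else "yellow"

article = "CAIRO, Egypt (Newswire) - An outcry erupted against the autocratic regime..."
tone = score_article(article)
print(dateline(article), round(tone, 2), flag_color(tone))
# Feed it thousands of articles, plot the flags, and try to spot the next revolution.
```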

Kind of a challenge, isn’t it? How accurate do you think you’d be? And keep in mind that you need an extremely well-balanced base of news sources. Your map after a few thousand Fox News articles and roughly the same number of AlterNet articles is bound to look very different, since reporting biases influence word choices, and your entire model runs on those bias-affected words. A world pictured by writers on the far right is rather different from the world pictured by those on the far left. Which one would you choose as the more reliable model? Do you trust your own worldviews, and those of your news sources, to be as impartial as possible and to balance out every bit of spin and bias, no matter how slight, by sheer quantity? It would also be interesting to include foreign-language sources and note what they say. Come to think of it, this might actually be a very interesting experiment to conduct, and it might tell us even more about the state of the press at any given time. Just don’t use the results to try to predict what will happen over the next year. Many sages have tried and failed, and for good reason. A mutation of post hoc ergo propter hoc is very limited in what it can offer an aspiring soothsayer, so if you really want to be one, I suggest cold reading. It’s about as effective and requires a lot less coding and a lot less math.

See: Leetaru, K. (2011). Culturomics 2.0: Forecasting large-scale human behavior using global news media tone in time and space. First Monday, 16(9).


Virtually every culture, sub-culture, and profession has its own method of posturing about how big and oh so very, very important you and your work are: what’s basically known as a penis-waving contest. Just as male primates are thought to intimidate each other with their erections to win mating rights, with female preferences, it seems, placing size limits on those erections, humans use their job titles, cars, and media mentions to boast about how successful they are, winning access to more resources or positions of authority in their social group, at least to the extent that we let them before calling them boastful or obnoxious. Pundits like to brag about how many shows they do on a weekly, if not daily, basis. Lawyers proudly mention any high-profile case on which they’ve worked. Scientists have the impact factors of the journals in which they publish, and the more citations they have, the logic goes, the better and more prestigious their work. So if a study finding a correlation between violent video games and violence is frequently brought up, well, it must be better than a more obscure one that found nothing, and more valuable in a court case. Right? Actually, no. It isn’t.

One would think that trained scientists, who know that the true value of work is in the data, wouldn’t pay nearly as much attention to impact factors as the university quants who use them to judge the worth of a scientific effort. I’m sure you’ll remember how many scientists complained about complicated experiments being reduced to a couple of meaningless numbers during the tenure decision process, and how many physicists wrote brief rants on my posts dissecting arXiv studies about impact factors not being what they once were. So why would a trio of psychologists argue that a legal brief backed by more popularly cited studies on the alleged link between violent video games and aggression is better than one backed by lesser-known ones? It’s essentially an argumentum ad populum in academese. Could it be that the psychologists in question, Craig Anderson, Brad Bushman, and Deana Pollard Sacks, who always tend to find that video games and porn are evil and make people more aggressive, are trying to raise their profiles a bit with a few media mentions? The media loves to jump on a controversial topic, and they certainly study a few of them. No problem there, but that doesn’t mean they can wave around a paper-thin conclusion as if they had some sort of scientific evidence of relevance to the case, which is exactly what they seem to be trying to do. Their argument boils down to this: the legal brief arguing that violent video games cause aggression was signed by more people who published some sort of study in a peer-reviewed journal than the counter-brief, and is therefore the better argument.

Do I really even need to point out why this argument is flawed? Shouldn’t I just leave it at that and let this huge and glaring fallacy remain self-evident? We’re talking about 69 people who did a study on the link between video games and aggression, signed a document saying that video games can make people aggressive, and managed to publish in a top-tier journal at some point in time. Because those 69 people were rounded up by a lawyer and signed his brief, there’s now a strong link between video games and violent behavior? Say, I seem to remember psychologists who published in respected journals signing on to the Satanic ritual abuse cases concocted by hoaxers to make money from Christian fundamentalists. And there were more than a paltry 69 of them signing just one legal brief. Does this mean we were wrong, and that there’s now solid, empirical evidence of Satanic ritual abuse and its harm based on how many times papers on it were cited by others, even if the citations were to show that the conclusions were erroneous? Yes, having your paper cited and then dissected actually adds to the impact factor the same way as having it cited as the basis for follow-up or similar work. An impact factor only tells us how much of a splash a paper made, and in the case our trio of psychologists is making, it only extends to the journals. We don’t even know how widely the papers themselves were cited in other scientific literature, or in what context. We just know they were published in good journals.
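For reference, the standard two-year journal impact factor is nothing but a citation ratio, which is exactly why it can’t distinguish a citation that builds on a paper from one that takes it apart. In the usual JCR-style definition:

$$\mathrm{IF}_{Y} = \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}}$$

where $C_{Y}(y)$ is the number of citations received in year $Y$ by items the journal published in year $y$, and $N_{y}$ is the number of citable items the journal published in year $y$. Every citation counts the same, whether it’s a follow-up or a takedown.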

We also can’t rule out the influence of the lawyers themselves. A lawyer on the hunt for a few self-appointed experts in a controversial field is probably not going to solicit perfectly objective advice. He will find those who are willing to agree and ask their friends as well. Plus, a brief worded with enough conditions and qualifiers is easy to sign. Yes, sure, depending on the person’s mood, the content of the game, lifestyle and upbringing, and how much Red Bull a player had, some aggression may persist after playing a violent game, fine. Does this mean that video games make you aggressive and there’s now evidence that they’re harmful to all players? No. Studies on several dozen freshmen at a university and the signatures of 69 people who at some point studied the topic an iron-clad case do not make. And so what if people stay a little aggressive for a few hours after playing Grand Theft Auto or Halo? Rates of violent crime are down across the board, both adult and juvenile, so whatever aggressive feelings a few people might have in a lab or at home don’t exactly seem to be spilling out into the street. If the premise here were true, we’d expect to see spikes in violent crime with each new edition of a violent blockbuster. We don’t. So why do we insist on making a big deal out of it? To me it seems like just another case of old fogeyism in action, coupled with a plea for media attention…

[ illustration by Olly Moss for Wired Magazine ]


Nowadays, it seems like Ray Kurzweil is one of the most exciting people in tech, warranting a big write-up of his predictions in Time Magazine and, despite his nearly religious view of technology, a spot on a list of the world’s most influential living atheists. And so, every once in a while, we’re treated to a look at how well his predictions actually fared, often by those who’ve done very little research into the major disconnect between his seemingly successful predictions and reality. One of the latest iterations of this almost suspiciously subtle praise for Kurzweil’s powers of prognostication, from TechVert’s JD Rucker, is a perfect example: an infographic with the track record of someone who seems to have nothing less than precognitive powers when it comes to the world of high tech. But if you manage to catch the attribution at the bottom of the graphic itself, you’ll find that its source is none other than Ray himself, and once again, he’s giving a very, very generous reinterpretation to his predictions and omitting the myriad details he actually got wrong.

Remember last year, when the much less lenient judges at IEEE Spectrum decided to put his predictions in their proper place and evaluate what he actually said against what he claims he said when grading his own predictions in retrospect? Even when simply quoting obvious trends, his record tends to be quite mediocre: he starts out with a reasonable idea, such as that more and more computing will be mobile and web access will be close to ubiquitous, then starts adding in statements about brain implants, intelligent and web-enabled clothing, and other personal fantasies which are decades away from practical use, if they’ll ever be mass marketed in the first place. Then he goes back and revises his own claims, as shown by the link above, insisting that he never actually said that computers as we know them would vanish by 2010, even though in his TED presentation he said it in pretty much those exact words. Along the way, he also held that by 2009 we would’ve adopted intelligent highways with self-piloting cars. Google’s autonomous vehicle, guided by sensors and GPS, is still just an experiment, and highways don’t manage their own traffic, unless a sign telling you about an accident or the travel time to an exit counts as high tech traffic management.

So if you want to do a cold reading of technology’s future a la Kurzweil, just think big, make lots of claims, and if you get something rather obvious right, just forget all the other stuff you added to your prediction, and you too can be cited as a pioneer and visionary in fields you actually know little to nothing about, and said to have the kind of uncanny accuracy that makes everything you say compelling. You know, kind of like astrologers whose random wild guesses are edited down to just the vaguely right ones, given a lot of leeway, whenever they manage to get something they consider an accurate prediction. And hey, maybe your own evaluation of your own predictive powers will also be cited by particularly lazy writers as they gush about the fantastic world of tomorrow you’ve been promising in your countless speeches and articles. The speeches and articles with which you make a good chunk of cash hawking everything from alkaline water and vitamins, for those who want to live long enough to see the Singularity, to classes at your own little university of futurism. Why study to be a real expert in AI or computing when you can just play one on TV and in the press? If anything, the pay is a lot better when you just talk about the promise of AI and intelligent machines rather than try to build them…


Last week’s big story, in case you somehow missed it, was a very lengthy, deep, and richly detailed article by journalist Lawrence Wright on the apostasy of former Scientologist celebrity Paul Haggis. At nearly 24,600 words, spanning some 28 pages, it’s epic even by The New Yorker’s standards, but every word is worth it, and if you haven’t read the piece already, I highly suggest you do, because you will learn quite a bit about how those who run the Church of Scientology actually work and why so many celebrities participate. Scientology is a very bizarre beast to me, probably because, watching the development of the ancient astronaut theory and its rising popularity on The (Alternative) History Channel, as well as UFOlogists’ search for a benevolent alien sage who wants to save us from ourselves, I’m starting to think that we’ll see far more religious movements based around a belief in alien visitation. And any future alien religion will have to work very hard to make sure that no one associates it with Scientology, the cult that made enemies of Anonymous and WikiLeaks with its near-constant hysterics, and whose practices prompted an FBI investigation into human trafficking.

Since I don’t have the luxury of writing a third of a book on this blog, and it seems very hard to add to such a thorough and well-researched effort, I thought I’d just share a few observations for discussion. The first is that very little of what Wright writes is actually new; it’s just been brought together in the narratives of several highly visible members of the cult. Scientology’s business model of providing what is basically armchair therapy mixed with technobabble from the pulp-quality sci-fi Hubbard usually churned out was already documented, and the investigations into its mistreatment of dissenting members and of the Sea Org, who are essentially slave labor for the organization, were covered and brought to national attention by a newspaper in Florida; just take a look at the last link above for reference. The only really new things are an elaboration of Scientology’s history and the FBI’s hard look into the fate of Sea Org members, whose mistreatment should appall anyone, especially in the modern Western world where indentured servitude and slavery are illegal. What else can we call arrangements in which someone is expected to give up his or her freedom for "one billion years," work from dawn until dusk for laughably small wages, and wait on Scientologists hand and foot? The very idea that professing a particular faith all of a sudden makes this fine is absolutely insane.

Secondly, those who run the Church of Scientology are terrible liars. Just about every basic lie detection class will teach you that one of the tell-tale signs that a suspect has something to hide is twitchiness, which usually takes one of two forms. The first is a total lack of cooperation and furious demands to be left alone. The other is the exact opposite: a furious appeal to innocence in which the suspect showers you with whatever he insists absolves him of all responsibility. Scientologists rush to shower anyone around them with affidavits insisting that nothing bad ever happened or could ever happen in their cult at the slightest complaint, and send Tommy Davis, their hyperbole-spewing spokesdragon, to scream about how anyone who dares to say that his cult is anything but perfect must be an escaped lunatic with a vendetta against a pristine, virginally innocent group of charming, beautiful, and wonderful people who’d never hurt a fly. That, ladies and germs, is what we call a major red flag. When you have as much control and can exert as much pressure on your members as Davis’s and Miscavige’s lieutenants can, and have far more money in the bank than common sense, of course you can produce a whole lot of affidavits and sworn agreements. What would be impressive is if the group responded to an allegation reasonably instead of with the usual screaming and legal fits.

Thirdly, the current higher-ups of Scientology seem to be utter lunatics, from the abusive and bombastic cult leader, Miscavige, to his main spokesperson, Davis, to their small horde of loyal sycophants. They’ve spent a whole lot of time as Scientologists and lived in cocoons where every criticism of the cult was censored or simply dismissed as the inane ramblings of the disgruntled. And that’s really what a lot of cults are about. They want you to unplug from the outside world and dedicate all your time (and money) to the group. You must conform and you must defend your group from any criticism, otherwise you’re a traitor who doesn’t appreciate what the cult did for you when you were lost and didn’t quite know where to turn. This is especially true for Scientology’s vaunted celebrities, many of whom owe their careers to other Scientologists they’ve met, and who were often lured in by the cult’s promise of jump-starting those careers, provided that they also take their auditing classes and donate money every chance they get. So what can we expect when the long-term cultists take the reins, become the public faces of their organization, and get exposed to some real criticism for the first time? I’m not surprised that Davis and Miscavige basically flip out and foam at the mouth with accusations of psychosis and conspiracy, flinging around forged documents which supposedly prove their deceased leader’s tall tales of breaking up black magic circles and suffering horrible wounds which he treated with alien magic.

Finally, there’s one more thing we can take from Wright’s piece. If you feel lost and unsure of what to do with your life, and someone offers you purpose and direction in exchange for money, telling you that he holds the secret to fulfilling your potential and all you have to do is follow him, run the other way. In the end, it’s not worth it. You can be happy without living in a cult and wasting your hard-earned money on enriching its heads, and if you really want to find a charitable cause to which to donate your cash, there are plenty of organizations that could use your money for something a lot better than stuffing their wallets and building slave camps out in the desert. If you find yourself surrounded almost entirely by people from the same group, people who rush to change your mind the minute you say something critical of the group to which all of you belong, or who cut off those who decide not to participate in that group any longer, you need to think about making some new friends.


While today’s scientists who actively participate in skeptical movements and run blogs covering more than just their own areas of research wonder which experts should promote the sciences to the general public and to those who fund research through government organizations, they’re also not thrilled with popular scientists who cross the lines of their competence. One of the experts frequently shown on what remains of The Science Channel after writing several books about radical ideas in bleeding edge physics, Michio Kaku, has done just that in declaring that human evolution has ended, earning the blistering fury of PZ in the process. I have to say, though, the fury is not without good justification, because Kaku seems to know awfully little about human evolution, and about the fact that it’s actually speeding up, insisting that our civilization has virtually ended the natural selection that’s supposed to keep us evolving despite the fact that just last year, the web was abuzz with a recently discovered case of significant natural selection in humans.

Now, I could just refer you to a biologist for a list of reasons why Kaku is wrong and leave it at that, but that would miss a bigger issue with his repetition of this canard about our biological future. The notion of the static human who has pretty much domesticated himself and is left with nowhere to go but down appears constantly in science fiction and among the amateur techies flocking to Kurzweil-styled transhumanists, who tell them that either merging with machines or transcending our physical bodies is "the next step in our evolution," and that we’re essentially destined to become immortal as soon as the technology gets here. If you remember a certain sci-fi show that went on way too long past its expiration date, Stargate SG-1, you’ll probably recall its habit of using transcendence to immortality via highly evolved psychic powers in episode after episode, even using it to bring characters back from the dead. And we certainly can’t forget the New Age woo devotees who flock by the thousands to hear post-modernist cranks coo about "the spiritual evolution of humanity" while liberally peppering what amounts to nonsense with trendy, sciency-sounding buzzwords, chanting "quantum" as if they were Zen Buddhists reciting mantras during an intense meditation session.

Of course, I could cite other examples of this trope rearing its head in pop culture, but you probably see where this is headed. Human evolution’s supposed end is a very popular mistake, and like many urban legends, its constant, uncritical repetition has ingrained it in a whole lot of minds, even those of scientists who don’t really follow biology or didn’t pay much attention to it during their schooling. All too often, the media forgets that scientists actually have very, very narrow areas of expertise, and that the broad labels we give them often cover a whole lot more than their actual research. A scientist we call a marine biologist might spend her entire career studying two species of squid, and one we call a theoretical astrophysicist could work only on the behavior of accretion disks around black holes for the next decade. But because they’re scientists, journalists and editors like to assume, they must be really, really smart and able to give us a valid opinion on everything. It’s basically an inversion of the falsus in uno, falsus in omnibus fallacy: we assume that because someone like Kaku has a fair bit of weight in the world of exotic physics, he must also know a lot about human evolution, or be a good authority on artificial intelligence and cyborgs, which, by the way, he’s not. So really, I’m not surprised to see a random pop sci canard better suited for a show on whatever the Sci-Fi Channel wants to call itself nowadays come from a scientist asked a question out of his depth. Disappointed, but not surprised.


Since this blog is probably best known for its skeptical view of the strain of cyber-utopianism promoted by professional technocrat, and apparently one of the world’s top atheists, Ray Kurzweil, it seems that I have to somehow note his appearance in Time Magazine and point out the numerous flaws in treating him like an immensely influential technology prophet with his finger on the pulse of the world of computer science. It’s unfortunate that so many reporters take him so seriously, because almost half a century ago he was experimenting with some really cool machines, and over the next few decades he came up with some interesting stuff. But for a while now, he’s been coasting on his reputation, making grand pronouncements about areas of computer science in which he was never involved, and the reporters who profile him seem to think that if he could make a music machine in 1965, he must know where the AI world is headed, forgetting that being an expert in one area of computer science doesn’t make you an expert in another. And so we’re treated to a breezy recitation of Kurzweil’s greatest hits, one which glosses over the numerous problems with his big plan for the world of 2045 with the good old exponential advancement numerology he loves to cite so often.

Again, there’s really nothing new here if you’re familiar with Kurzweil’s brand of transhumanism, just the same promises of mind uploading and digital immortality on the date predicted by the exponential chart that far too many devoted Singularitarians embrace with the same zeal as post-modernists subscribing to every concept with the word "quantum" in it. Never mind that mind uploading would require the kind of mind-body dualism based on the religious concept of a soul rather than sound science, and that even if it were possible, there would be huge adjustments involved in the process. Never mind that no matter how many vitamins Kurzweil takes a day, his body will simply fall apart by 125 because evolution does not produce humans who aren’t subject to senescence. Never mind that new inventions can backfire or never find an outlet, and that the tech industry has been overpromising the benefits of computers for nearly 50 years, always assuring us that the next generation of electronics would give us a nearly perfect world. Never mind that by now, more scholarly Singularitarians are trying to rein in Kurzweil’s hype while politely pointing out whom we might want to listen to instead. And never mind that Ray has a miserable record when it comes to predicting future trends in computing and technology and constantly revises what he said after the fact to give everyone the impression that he actually knows what he claims he does. We’re told that every challenge to his dream world of immortal humans who swap minds between machines is easily met by the march of tech progress, which will quickly add up to grant him his fantasies at just the right moment.

There’s really something borderline religious about Kurzweil’s approach to technology. He’s embraced it as his savior and his weapon for cheating death, and his devotion runs so deep that he even says any threats from new technology can be countered with more and better technology. But technology is just a tool, the means to an end, not an end in and of itself. It’s not something to be tamed and worshipped like an elusive or mysterious creature that works in bizarre ways, and it doesn’t work on a schedule to give you what you want. It is what you make of it, and there are problems it can’t overcome because we don’t know the solutions. Sure, being able to live for hundreds of years sounds great. But all the medical technology in the world won’t help a researcher who doesn’t know why we age, exactly what needs to be fixed, or how to sufficiently and safely slow the aging process. Those kinds of discoveries aren’t made on schedule because they’re broad and very open-ended. Just saying that we’ll get there in 2030 because a chart you drew based on Moore’s Law, which was a marketing gimmick of Intel’s rather than an actual law of physics or technology, says so, is ridiculous to say the least. It’s like a theologian trying to calculate the day of the Rapture by digging through the Bible or Nostradamus’s quatrains. You can’t just commit scientists and researchers to work according to an arbitrary schedule so they can help you deal with your midlife crisis and come to terms with your own mortality. And yet, that’s exactly what Ray does, substituting confidence and technobabble for substance and attracting way too many reporters who just regurgitate what he says into their articles and call him a genius.
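To make the objection concrete, here is the entire intellectual content of such a chart as a few lines of Python. Every constant below is an illustrative assumption on my part, not Kurzweil’s actual model; the point is that the "prediction" is nothing but the inputs fed back to you:

```python
# A toy version of the exponential extrapolation criticized above: pick a
# doubling period, project a curve, and read a milestone date off it.
BASE_YEAR = 2010
BASE_OPS = 1e15        # assumed compute available at the base year, in ops/sec
DOUBLING_YEARS = 2.0   # a Moore's-Law-style doubling period
TARGET_OPS = 1e18      # one popular (and contested) guess at brain-scale compute

def projected_ops(year):
    """Project compute forward on a fixed exponential curve."""
    return BASE_OPS * 2 ** ((year - BASE_YEAR) / DOUBLING_YEARS)

year = BASE_YEAR
while projected_ops(year) < TARGET_OPS:
    year += 1
print(f"Brain-scale compute 'arrives' in {year}")  # prints 2030 with these inputs
# The chart can't tell you whether the curve holds, whether ops/sec is even the
# right metric, or whether raw compute buys you mind uploading at all.
```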

Here’s what will likely happen by 2045. We might live a little longer thanks to prosthetics, and maybe artificial organs and valves which will replace some of our natural entrails as they start failing with age, and hopefully, better diet and exercise. We’ll have very basic AI which we’ll use to control large processes and train using genetic algorithms and artificial neural networks. We may even have a resurgence in space travel and be wondering about sending cyborgs into space for long-term missions. We’ll probably have new and very interesting inventions and gadgets that we’ll consider vital for our future. But we’ll still inhabit our bodies, we’ll still die, and we’ll still find answers to our biggest and most important problems when we find them, not according to a schedule assembled by a tech shaman. Meanwhile, I’ll be an old fogey who used to write about how Singularitarians are getting way ahead of themselves, facing my own upcoming end as best I can, without dreaming of some magical technology that will swoop from the sky and save me because imaginary scientists are supposed to come up with it before I die and ensure my immortality. All I’ll be able to do is live out my life as best I can and try to do as many things as I want to do, hopefully leaving some kind of legacy for those who come after me, or maybe a useful gadget or two with which they can also tinker. And if we manage to figure out how to stop aging before I’m gone for good, terrific. But I won’t bet on it.

[ illustration by Martin Lisec ]
