Friday, February 22, 2013

Polishing Rabbits and Passing Off Squirrels—Andrew Zolli on Jonah Lehrer

Andrew Zolli, the Executive Director and curator of PopTech, as well as the co-author of Resilience, sent me a very thoughtful reflection in response to my earlier post on Jonah Lehrer and his recent apology. He had tried to post it as a comment on my post, but ran up against Blogger's comment length limits. So with Andrew's consent, I am posting it below. I think that Andrew's points are excellent. (Note: I have been an invited speaker at PopTech, both on the stage and to the Fellows program.)
At this point, the whole sad L'Affaire d'Lehrer has been dissected into a finely-ground powder, and everyone has assigned Jonah appropriate culpability, including Jonah himself.  What I find of more lasting interest is a systemic issue which Chris touches on glancingly, above: 
We live in a media moment that massively encourages and rewards the pulling of proverbial rabbits out of hats—storytelling that culminates in a counterintuitive fact about human beings and their nature.  It's sort of "Sudoku storytelling", in which the reader is presented with a confusing storyline, and the author presents a rubric and reassembles the elements so that the pieces snap into place in a clean and satisfying way.  This kind of writing gives the reader a little positive jolt, a sense that they've been let in on some secret wisdom that decodes part of the human condition. (That "snapping into place" phenomenon—it's what makes a joke with a good punchline work too: you know it's coming, and you can't quite see how it will resolve itself, and then *wham*—there it is! The same is true for get-rich-quick schemes.) 
These are the kinds of pieces—not just books, but blog pieces, and other forms of writing—that go "viral." Our appetite for such secret wisdom is so strong that passing them along actually raises the social capital of the *forwarder*, not just the author. (This is what Twitter was made for, I believe.) 
And this is *exactly* the kind of content that beleaguered mainstream editors often push writers, particularly talented writers, to produce—not nuanced tomes with confidence intervals attached to data, including examples of counterfactuals and copious footnotes—but snappy, highly "applicable," linear narratives (with counterintuitive endings!) that sacrifice complexity for accessibility. (As one editor put it to me: "You wanna write that other shit? Go to a university press!") 
And it's not just editors—these are the kinds of books that command significant advances, that backlist, that build the author's speaking fees, that get them bylined articles in prominent magazines, and tv appearances—a whole edifice that, most of the time, ends up with the "talent" becoming a not-terribly-intellectual-public-intellectual. (By the way, it's not just science writers … business gurus in particular are often peddlers of pure horseshit, yet find an insatiable appetite for their nonsense. Because if there's one thing human beings find even more interesting than ourselves, it's how to make a buck off of some other clueless rube.) 
Of course, the big problem is that there really isn't an endless supply of rabbits to pull out of hats. And not all rabbits are of first quality—sometimes, we have to "polish the rabbit," so to speak. And that's how I believe Jonah (whom I know personally, though not well) got into this predicament—being overly committed to the rabbit production line. So you start to reuse your rabbits, then you try to pass off second quality rabbits by making them look all the more surprising. And then you're panicked to discover you're passing off squirrels. 
Oddly enough, the rabbit-out-of-the-hat counterintuitive ending is actually Jonah's story, which is why his downfall itself went viral. You think this guy is just blessed with preternatural explanatory talent, but it turns out, "the 'Imagine' guy was making up his own quotes!" It's a joke! And a punchline! Love it! Instant schadenfreude! Have you heard? Pass it on! 
I am not excusing Jonah for his mistakes, which are significant. I think it's an honor to be held to a high standard, and he failed that standard, more than once. Worse, he had (and has) the abundant and enviable talent not to fail. And there should be real consequences for his having done so. 
Yet I also think we ought to be careful in making him a cautionary tale for a civilization drowning in its own bullshit. He was unprofessional, but he was also responding to perverse incentives and societal norms in our public square that we collectively bolster, if not passively tolerate, by our own consumption habits. 
For me, I'm trying to become more mindful of my own bullshitological contributions—which are, I'm sure, greater than I'd care to admit. I'm also finding myself reflecting on how we might make the system itself better, with fewer incentives for bad behavior, and better rewards for good behavior. 
Because, while I'm sure there is some intrinsic character in all of us, it's also true that incentives draw forth aspects of that character, which then can come to publicly define us. (I can be fairly charitably-minded until someone cuts me off in traffic; fortunately for me, my utterances thereafter are not part of the public record.) 
So here's my concluding truism: Piling on Jonah is like jumping on a trampoline: fun for a while, but it won't take us very far. Better to think about how we can springboard to a better place for everyone. 
I know it's not counter-intuitive enough. I guess I'll never make it in this business. 

Monday, February 18, 2013

How Much BAM for the Buck, and Other Thoughts on the Brain Activity Map Project

Today's New York Times reports that the Obama administration is considering a massive, partly government-funded project to map the human brain, the Brain Activity Map (BAM!) Project, inspired by the success of the Human Genome Project.

Let me start by saying that I am all in favor of more research in neuroscience, because there is certainly a lot we don't know about how the brain works. While to outsiders like Ray Kurzweil it may look like progress is coming in leaps and bounds, and backing up the mind's hard drive is therefore a calculable number of years away, from the inside the effort to understand the brain often seems to zigzag from new idea to cool finding to neat technology without a clear forward trajectory. I am also a big fan of George Church, a genius and visionary of molecular biology who is one of the driving forces behind the new plan. (I even once co-taught a course on cognitive genetics at Harvard with George's wife, the geneticist Ting Wu.) But before we all jump on this bandwagon, let's discuss the pros and cons—based on what has been said publicly so far (mainly in the Times article, which was prefigured by a Neuron article by Church and several others published last June).

Per the Times, the project is expected to cost "billions of dollars" and last 10 years. Its goals are to "advance the knowledge of the brain's billions of neurons and gain greater insights into perception, actions, and, ultimately, consciousness." So far, so good—basic science. Some also hope that the project will "develop the technology essential to understanding diseases like Alzheimer's and Parkinson's, as well as to find new therapies for a variety of mental illnesses." That's certainly possible, though I cannot think of any treatments for mental illness or brain disease that have been derived from previous maps of the brain or knowledge of its activity patterns. Perhaps this is just an argument that we need better maps. Finally, "the project holds the potential of paving the way for advances in artificial intelligence." Certainly also possible, but I think AI has been doing pretty well lately by ignoring brain architecture and going with whatever algorithms work on computer hardware to produce intelligent-seeming behavior.

The Times account is short on details of what precisely is being proposed, which has led some people to think that the idea is to map every connection and the firing activity of every neuron in (at least) one human brain, or to make more maps of the functions of brain regions using neuroimaging techniques. But the Neuron article by the Brain Activity Map proponents makes it clear that, last June at least, the idea was to start with small circuits in very small organisms, where it may soon be possible to record from every participating neuron at once, and to work up to larger circuits and larger organisms. All these maps would record "the patterns and sequences of neuronal firing by all neurons" in the relevant circuit or brain, so they would be much more detailed, in both space and time, than any existing databases. A Drosophila brain might be done in ten years, a mouse neocortex in fifteen. The entire human brain would be a more distant goal. And of course there would be ethical issues to be surfaced and solved along the way to that ultimate step.

There are a lot of things to like about this ambition. Although we already have lots of maps of the brain, none of them approaches the spatial resolution of a neuron-by-neuron map, with one exception: the structural connectome of the worm C. elegans. The main source of our knowledge about how neurons represent information, carry out computations, and communicate with other neurons is still the single-cell recording, a technique developed about half a century ago. Such methods are based on inserting tiny electrodes in or near living neurons, and have obvious limitations, not least their inability to scale to full circuits or brain regions. Recording entire circuits in action would be a fantastic achievement and probably would lead to all sorts of ancillary benefits for advancing brain research, some foreseeable and some not. And perhaps more neuroscientists would be able to find jobs along the way!

But there are some considerations on the other side of the ledger, too. One that should not be underestimated is the opportunity cost; always, but especially nowadays, it would be a mistake to imagine that the funding for a new, large project will appear out of thin air. If the BAM goes forward, other areas are likely to get less funding, and other neuroscience and behavioral science projects will likely be among the first to be reduced. Moreover, a single mega-project is likely to supplant many smaller projects. Is our neuroscience money best spent on one project costing, say, $5 billion, or instead a thousand projects of $5 million each, or ten thousand projects with $500K budgets? Gary Marcus has a suggestion for five $1 billion projects. Which funding strategy is likely to result in more important discoveries, as viewed from the perspective of the next generation of scientists looking back? Maybe the BAM, but maybe not. The answer is hardly obvious to me. The big project is concrete and tangible, with milestones in the near future. The net effect of the tinkering of ten thousand labs with comparatively small budgets is harder to conceive of, but might turn out to be much larger.

One reason to be suspicious of the potential return-on-investment of a massive BAM project is that it's being sold by comparing it to the Human Genome Project (HGP), with a claim that the HGP produced $141 in economic activity for every $1 the government spent on it. President Obama cited this figure in his State of the Union Address. That's a return of fourteen thousand percent! Can that be right? If so, it would mean that about $800 billion in economic activity has been generated by that one government "investment." It turns out that this claim comes from a Battelle report (which is cited by the BAM advocates in their Neuron article) that was sponsored by a company that makes equipment used in life science research.
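The arithmetic behind those numbers is easy to check. A minimal sketch, taking the report's $141-per-$1 claim at face value and treating the spending figure as a rough assumption:

```python
# Sanity-checking the claimed Human Genome Project payoff.
# The $141-per-$1 figure is the one cited in the post; the spending
# figure below is a rough assumption for illustration.
activity_per_dollar = 141

# Return = (payoff - cost) / cost, as a percentage.
return_pct = (activity_per_dollar - 1) / 1 * 100
print(f"Claimed return: {return_pct:,.0f}%")

# If roughly $5.7 billion (in total) was spent, the implied
# economic activity is on the order of $800 billion:
spending_billions = 5.7
total_activity = spending_billions * activity_per_dollar
print(f"Implied activity: ${total_activity:,.0f} billion")
```

The point of the exercise is only that the headline ratio and the headline total are the same claim stated two ways; neither number independently corroborates the other.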

I find this figure hard to believe, not to say preposterous. Does it really represent net economic activity, or does it fail to account for activity displaced from other spheres? And was all that economic activity the best that could have been undertaken, or was it activity encouraged by the pursuit of grant funding and other non-market incentives? What if the same amount of government money had been spent funding lots of individual genetics researchers instead, or other biology researchers, or other science entirely? The certainty with which these sorts of analyses are presented makes it hard to see counterfactual alternatives, but they lurk everywhere. At a minimum the $800B value must rest on a lot of assumptions, and the specific assumptions made probably have a large impact on the value that comes out of the analysis.

To be clear: I think the genome project was a great scientific idea, I suspect that it has produced a lot of benefits, and I am personally happy it was done. I just don't think it should be oversold. As Richard Feynman pointed out in his famous "Cargo Cult" speech, public support for research will eventually erode if it is sold with outrageous-sounding claims or promises of early benefits.

But suppose it is true that the Human Genome Project was the single best thing the U.S. government ever spent its money on—sorry, "investment it ever made"—the government's version of buying Apple stock for $5 and selling at $700. Should we expect similar returns from the next big science project? Or should we expect to see the economic return and gains in knowledge achieved by the average of the big science projects that the government has funded over the past decades? The abandoned supercollider, the war on cancer, the cancelled breeder reactor, and I am sure many others fade from memory—and certainly never get mentioned—when we are told about the 141X ROI of the genome project (worthy as it was). An analysis that looked at all the comparable projects rather than just the all-time outlier might come to a different projection of the likely value of the BAM. We might still expect a positive return, but without the 141X (or whatever the true value is), it will have a tougher time competing with other priorities, or with other ways of parceling out neuroscience funding.

Europe has thrown its weight behind the single mega-project approach, with an effort to simulate an entire brain at a cost of over 1 billion Euros. Regardless of the (questionable) merit of this idea, perhaps the U.S. should play a different strategy in the competition for research glory by letting a thousand flowers bloom rather than planting one ginormous tree. Indeed, such a contrarian approach may have value precisely because of the limits of the mapmaking approach to understanding the brain.

Forty years ago, single-cell neurophysiologist Horace Barlow famously proposed that "a description of that activity of a single nerve cell which is transmitted to and influences other nerve cells and of a nerve cell's response to such influences from other cells, is a complete enough description for functional understanding of the nervous system." The BAM Project seems to be a plan to create exactly this sort of description, but at a much larger scale. But as David Marr explained in his 1982 book Vision, and as Hilary Putnam also suggested in his 1973 Cognition article "Reductionism and the Nature of Psychology," there are several other levels of explanation that are equally important in reaching a "functional understanding" of how the brain works. The representations, algorithms, and computational functions of the brain and its circuits, as well as the relationship of the brain to the organism and its environment and niche, are just as important as a map that shows how the neurons are wired up and how they send signals to one another.

Again, it is not that a BAM would have no value. I would personally be fascinated to see its results, and those results might well help us to crack the problem of how higher-level properties emerge out of agglomerations of lower-level events (which the psychologist Stephen Kosslyn, a founder of cognitive neuroscience, proposed as one of the hardest problems in social science). But the sheer size of a full BAM project might focus our attention and hopes on the BAM as the be-all and end-all of neuroscience, and distract the field from devoting energy to those other levels. Cognitive scientist Mark Changizi has eloquently argued, in fact, that the massive project we ought to be pursuing is a map of the "teleome," his coinage for the suite of functions and abilities that the nervous system was designed by evolution to perform. Without knowing more about function, it will be hard to understand the BAM's results, and perhaps even harder to build the EU's whole-brain computer simulation. As the proposal moves forward, I hope the decision-makers keep in mind that maps, while incredibly useful tools, don't give answers to every important question.

Tuesday, February 12, 2013

What Has Been Forgotten About Jonah Lehrer

Today the science writer Jonah Lehrer made his first extended public remarks since he resigned his various positions and his publisher withdrew his third book last summer. The venue was a Knight Foundation conference in Miami. Lehrer gave a short speech about decision making, focusing on his own bad decisions and how he plans to prevent them from recurring in the future. To my surprise, the foundation, which supports "journalistic excellence," seems to have paid Lehrer $20,000 for his appearance.

As is well known, Lehrer first got into trouble last year when it was revealed that his new blog at the New Yorker incorporated much material that he had previously published, including in his old column at the Wall Street Journal. This led to a suspension of his blogging privileges. Then various investigations showed that he had not only "self-plagiarized" (a lazy and exploitative practice) but also plagiarized the work of others, and perhaps worst of all embellished and fabricated quotes from his interview subjects (most prominently Bob Dylan) and other sources. The New Yorker finally let him go, as did Wired. He completely ceased tweeting, Facebooking, or updating his website.

At first I felt bad about Jonah Lehrer's problems. He seemed like a nice person. When I published a fairly negative review of his third book, Imagine: How Creativity Works, in the New York Times, he was up on his blog with a reply, titled "On Bad Reviews," in a matter of hours. I wrote my own strong rebuttal and posted it a couple of days later. The next day, Lehrer emailed me proposing that he interview me by email about the issues I had raised, for publication on his blog. We did the interview, which took several weeks to complete. After various delays, caused by the suspension and then cancellation of his blog, the interview was finally published at the Creativity Post website. I was pleasantly surprised that Lehrer bothered to engage my criticism, and then to ask me directly how I thought he (and other science writers) could improve their practices. I was a bit upset when he tried to block the final publication of the interview, which was supposed to happen (coincidentally) the day after he departed the New Yorker, but the Creativity Post editors managed to convince him to change his mind.

When the allegations of plagiarism and fabrication came out, the story became one of "greatest science writer of his generation makes unthinkable mistakes," and the analysis was mostly psychoanalysis of Lehrer's motives or of the media culture. Entirely lost was the fact that Jonah Lehrer was never a very good science writer. He seemed not to fully understand the science he was trying to explain; his explanations were inaccurate, overblown, and often just plain wrong, usually in the direction of giving his readers counterintuitive thrills and challenging their settled beliefs. You can read my review and the various parts of my exchange with him that are linked above for detailed explanations of why I make this claim. Others have made similar points too, for example Isaac Chotiner at the New Republic and Tim Requarth and Meehan Crist at The Millions. But the tenor of many critics last year was "he committed unforgivable journalistic sins and should be punished for them, but he still got the science right." There was a clear sense that one had nothing to do with the other.

In my opinion, the fabrications and the scientific misunderstanding are actually closely related. The fabrications tended to follow a pattern of perfecting the stories and anecdotes that Lehrer, like almost all successful science writers nowadays, used to illustrate his arguments. Had he used only words Bob Dylan actually said, and only the true facts about Dylan's 1960s songwriting travails, the story wouldn't have been as smooth. It's human nature to be more convinced by concrete stories than by abstract statistics and ideas, so the convincingness of Lehrer's science writing came from the brilliance of his stories, characters, and quotes. Those are the elements that people process fluently and remember long after the details of experiments and analyses fade.

After the Dylan episode, others found more examples of how Lehrer did this. I think one of the clearest was Seth Mnookin's analysis of Lehrer's retelling of psychologist Leon Festinger's famous original story of "cognitive dissonance," based on Festinger's experience of infiltrating a doomsday cult in 1954. Of the moments after an expected civilization-destroying cataclysm failed to start, Festinger wrote, "Midnight had passed and nothing had happened ... But there was little to see in the reactions of the people in that room. There was no talking, no sound. People sat stock still, their faces seemingly frozen and expressionless." Lehrer narrated the same event as follows: "When the clock read 12:01 and there were still no aliens, the cultists began to worry. A few began to cry. The aliens had let them down." Do you see the difference? Lehrer's version is more dramatic: people worry, they cry, they feel let down. It's more human. Each one of these little errors or fabrications makes the story work a little bit better, makes it match our expectations more closely, and thus gives it greater influence on our beliefs.

So by cutting exactly these corners in his writing, Lehrer was able to mask the fact that his conclusions were facile or erroneous, and his prose earned him a reputation for being much more authoritative than he was. Who was harmed by all of this? Writers who were trying to do with correct understanding and real quotes and stories what Lehrer did with his "material," for one. And certainly his editors, publishers, and anyone else who paid money for his halo and his drawing power. But readers most of all, since they were told things about how nature works that simply weren't true. Not just what Bob Dylan said and when he said it, but what it has to do with creativity, neuroscience, and everything else.

Jonah Lehrer gave a talk today that was more interesting than I expected. He acknowledged his mistakes and said he was trying to erect operating procedures and safeguards to make sure his own arrogance stays in check in the future. He said some things that were hard to believe, such as his claim that he has a poster in his office of Bob Dylan by Milton Glaser (a graphic artist also misquoted by Lehrer), and that he flinches every time he sees it. Does he really flinch every time? Hasn't habituation or inattention taken care of that by now?

I actually think Lehrer might be able to return to writing successfully, because he has the technical skills, and he is obviously a very intelligent and energetic person. But he should take the time not only to protect himself against his tendency to fabricate and plagiarize, but also to learn the basics of journalistic practice and ethics, to learn how to think clearly about science and facts, and above all to commit himself to the truth. Then maybe he will have something valuable to tell us.

Monday, February 11, 2013

Six Big Problems With "Why Can Some Kids Handle Pressure ..."

Surely how kids handle pressure is an important and interesting question. And surely how we perform in pressure situations has a lot to do with our genes. But the recent New York Times article "Why Can Some Kids Handle Pressure While Others Fall Apart?" by Po Bronson and Ashley Merryman is shot through with the most basic mistakes in science writing about behavior genetics. This makes me sad, because I have liked the authors' previous books, and because I think it is quite possible to communicate research on genetics accurately for an intelligent general audience. Here, unfortunately, they appear to have taken no note of what has happened in behavior genetics in the past 5–10 years, which ought to have been a prerequisite for this piece. A few examples:

Exaggerated claims: "One particular gene, referred to as the COMT gene, could to a large degree explain why one child is more prone to be a worrier, while another may be unflappable" [emphasis added]. In reality, what kind of COMT gene you have, if it is relevant, is an extremely minor influence by itself on how much you worry. The particular variant of the COMT gene being discussed here is very common, and like all other common genetic variants, it has never been shown to have a large, or even medium-sized, influence on any behavioral traits.

Cherrypicking the study with the most dramatic results: "Other research has found that those with the slow-acting enzymes have higher IQs, on average. One study of Beijing schoolchildren calculated the advantage to be 10 IQ points." In 2013 it should be regarded as journalistic malpractice to write things like this when the average of all the studies on this gene and IQ shows the effect to be, at best, a tiny fraction of 10 IQ points. In an analysis that included almost 10,000 subjects from two countries, in fact, a team of colleagues and I found virtually no evidence of any effect of COMT on IQ.
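Why the single most dramatic study in a literature reliably overstates the average effect can be shown with a toy simulation. All numbers here are invented for illustration, not taken from the COMT literature:

```python
import random

random.seed(42)

# Toy model: a variant's true effect on IQ is small (1 point), but
# each study estimates it with sampling noise (sd = 3 points here;
# both numbers are invented for illustration).
TRUE_EFFECT = 1.0
NOISE_SD = 3.0
N_STUDIES = 20

estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_STUDIES)]

mean_estimate = sum(estimates) / len(estimates)
most_dramatic = max(estimates)

print(f"Average across studies:      {mean_estimate:.1f} IQ points")
print(f"Most dramatic single study:  {most_dramatic:.1f} IQ points")
# The maximum of many noisy estimates sits well above the true effect,
# so quoting only the most dramatic study guarantees exaggeration.
```

This is the "winner's curse" in miniature: averaging across studies recovers something near the truth, while the headline-grabbing outlier does not.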

Idealizing your favorite study: "In other words, the exam was a perfect, real world experiment for studying the effects of genetics on high-stakes competition." In reality, there are no "perfect" experiments, and the one Bronson and Merryman report on had only 779 subjects, which might seem like a lot, but is almost certainly too small to learn anything reliable about genetic effects. About 100 times more participants are needed to really answer these questions.
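The sample-size point can be made with a standard back-of-the-envelope power calculation, using Fisher's z approximation for a correlation. The assumed effect size (a variant explaining 0.1% of variance) is my illustrative choice, not a figure from the study:

```python
from math import atanh
from statistics import NormalDist

def n_required(r, alpha=0.05, power=0.80):
    """Approximate sample size to detect a correlation r in a
    two-sided test, via Fisher's z transformation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return ((z_alpha + z_beta) / atanh(r)) ** 2 + 3

# Suppose the variant explains 0.1% of the variance in the outcome
# (r^2 = 0.001, r ~ 0.032) -- an assumed, but typical, magnitude for
# a common genetic variant.
r = 0.001 ** 0.5

print(round(n_required(r)))              # thousands, at nominal p < .05
print(round(n_required(r, alpha=5e-8)))  # tens of thousands, at
                                         # genome-wide significance
```

Even at an ordinary p < .05 threshold the required sample runs to thousands, roughly ten times the 779 subjects in the study; at the stricter thresholds genetics now demands, or for smaller effects, the multiplier grows toward the hundredfold figure mentioned above.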

Labeling genes with behaviors and pretending that possessing a genetic variant makes you a particular type of lucky or unlucky person: The two variants of the COMT gene are labeled "warrior" and "worrier" (for the different responses to stress they supposedly cause people to have—get it??), and then people are in turn labeled as Warriors or Worriers based on their genotypes. That's tantamount to calling the variants of APOE the "Doofus" and "Genius" genes because one makes you more likely to develop Alzheimer's disease while the other offers some protection against dementia. No, wait, it's not, because APOE has a highly significant effect on Alzheimer risk that has been replicated over and over by independent researchers, but COMT's links to the behaviors discussed in this article are smaller and more tenuous. Later we are told that the Worriers' "genetically blessed working memory and attention advantage kicked in. And their experience meant they didn't melt under the pressure of their genetic curse." I thought we gave up on this kind of superficial genes-as-personality-types-and-blessings-or-curses kind of science writing years ago.

Contradicting your own point: "... we are all Warriors or Worriers ... In truth, because we all get one COMT gene from our father and one from our mother, about half of all people inherit one of each gene variation, so they have a mix of the enzymes and are somewhere in between the Warriors and the Worriers." (Is anyone else reminded of the camp 1970s film "The Warriors," about gangs that roam the New York City subways?) We can't all be one type or the other if half of us are both. And incidentally, the pattern of 25%-50%-25% of the three genotypes does not arise only because we get one allele from each parent. It also depends on the frequency of the two variants being about 50% each in the population, which it happens to be in the case of this COMT polymorphism.
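The genotype arithmetic here is the familiar Hardy-Weinberg expansion: with two alleles at frequencies p and q = 1 - p, the genotypes occur at p², 2pq, and q², so heterozygotes make up half the population only when p is near 0.5. A minimal sketch:

```python
def genotype_frequencies(p):
    """Hardy-Weinberg genotype proportions at a two-allele locus,
    where p is the frequency of one allele and q = 1 - p the other.
    Returns (homozygote_1, heterozygote, homozygote_2)."""
    q = 1 - p
    return p * p, 2 * p * q, q * q

# With both COMT variants near 50% frequency, about half of people
# carry one of each (the "Warrior/Worrier" mix):
print(genotype_frequencies(0.5))   # (0.25, 0.5, 0.25)

# But the 25/50/25 split is not automatic: at p = 0.9 only 18% are
# heterozygotes, even though everyone still gets one allele from
# each parent.
print(genotype_frequencies(0.9))
```

Getting one allele from each parent guarantees the p², 2pq, q² structure; it is the roughly equal allele frequencies that make the middle term come out to one half.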

Pretending that what has been known for generations is a new discovery: "Stress turns out to be far more complicated than we've assumed ... short-term stress can actually help people perform ..." And later: "It may be difficult to believe ... that stress can benefit your performance." But psychology textbooks have long taught that the level of arousal for optimal performance is moderate, with too much arousal or too little leading to lower performance. This is called the Yerkes-Dodson Law, and it was originally proposed in 1908. Perhaps worth a mention?

The article makes much of findings that "those with Worrier-genes can still handle incredible stress." This would only be surprising if COMT had such a strong effect that it could determine what kind of person you are. But COMT doesn't have that effect. It's surprising when someone with the genotype for brown eyes has blue eyes instead, because the relevant genes almost completely determine the phenotype. It's not surprising that people with one of hundreds or thousands of genes that make one susceptible to stress turn out to be able to handle themselves just fine.

If the authors were conversant with—and showed concern for—the relevant literature and the background science, they would not have made these mistakes. I understand that they are writers, not researchers, but people who write about research for the public have a simple obligation to communicate not just good stories, but reliable facts.