Friday, October 4, 2013

Why Malcolm Gladwell Matters (And Why That's Unfortunate)

Malcolm Gladwell, the New Yorker writer and perennial bestselling author, has a new book out. It's called David and Goliath: Misfits, Underdogs, and the Art of Battling Giants. I reviewed it (PDF) in last weekend's edition of The Wall Street Journal. (Other reviews have appeared in The Atlantic, The New York Times, The Guardian, and The Millions, to name a few.) Even though the WSJ editors kindly gave me about 2500 words to go into depth about the book, there were many things I did not have space to discuss or elaborate on. This post contains some additional thoughts about Malcolm Gladwell, David and Goliath, the general modus operandi of his writing, and how he and others conceive of what he is doing.

I noticed some interesting reactions to my review. Some people said I was a jealous hater. One even implied that as a cognitive scientist (rather than a neuroscientist) I somehow lacked the capacity or credibility to criticize anyone's logic or adherence to evidence. A more serious response, of which I saw several instances, came from people who said in essence "Why do you take Gladwell so seriously—it's obvious he is just an entertainer." For example, here's Jason Kottke:
I enjoy Gladwell's writing and am able to take it with the proper portion of salt ... I read (and write about) most pop science as science fiction: good for thinking about things in novel ways but not so great for basing your cancer treatment on. 
The Freakonomics blog reviewer said much the same thing:
... critics have primarily focused on whether the argument they think Gladwell is making is valid. I am going to argue that this approach misses the fact that the stories Gladwell tells are simply well worth reading.
I say good for you to everyone who doesn't take Gladwell seriously. But the reason I take him seriously is that I take him and his publisher at their word. On their face, many of the assertions and conclusions in Gladwell's books are clearly meant to describe lawful regularities about the way human mental life and the human social world work. And this has always been the case with his writing.

In The Tipping Point (2000), Gladwell wrote of sociological regularities and even coined new ones, like "The Law of the Few." Calling patterns of behavior "laws" is a basic way of signaling that they are robust empirical regularities. Laws of human behavior aren't as mathematically precise as laws of physics, but asserting one is about the strongest claim that can be made in social science. To say something is a law is to say that it applies with (near) universality and can be used to predict, in advance, with a fair degree of certainty, what will happen in a situation. It says this is truth you can believe in, and act on to your benefit.

A blurb from the publisher of David and Goliath avers: "The author of Outliers explores the hidden rules governing relationships between the mighty and the weak, upending prevailing wisdom as he goes." A hidden rule is a counterintuitive, causal mechanism behind the workings of the world. If you say you are exploring hidden rules that govern relationships, you are promising to explicate social science. But we don't have to take the publisher's word for it. Here's the author himself, in the book, stating one of his theses:
The fact of being an underdog changes people in ways that we often fail to appreciate. It opens doors, and creates opportunities and educates and permits things that might otherwise have seemed unthinkable.
The emphasis on changes is in the original (at least in the version of the quote I saw on Gladwell's Facebook page). In an excerpt published in The Guardian, he wrote, "If you take away the gift of reading, you create the gift of listening." I added the emphasis on create to highlight the fact that Gladwell is here claiming a causal rule about the mind and brain, namely that having dyslexia causes one to become a better listener (something he says made superlawyer David Boies so successful).

I've gone on at length with these examples because I think they also run counter to another point I have seen made about Gladwell's writings recently: That he does nothing more than restate the obvious or banal. I couldn't disagree more here. Indeed, to his credit, what he writes about is the opposite of trivial. If Gladwell is right in his claims, we have all been acting unethically by watching professional football, and the sport will go the way of dogfighting, or at best boxing. If he is right about basketball, thousands of teams have been employing bad strategies for no good reason. If he is right about dyslexia, the world would literally be a worse place if everyone were able to learn how to read with ease, because we would lose the geniuses that dyslexia (and other "desirable difficulties") create. If he was right about how beliefs and fads spread through social networks in The Tipping Point, consumer marketing would have changed greatly in the years since. Actually, it did: firms spent great effort trying to find "influentials" and buy their influence, even though there was never good causal evidence that this would work. (See Duncan Watts's brilliant book Everything is Obvious, Once You Know the Answer, reviewed here, to understand why.) If Gladwell is right, also in The Tipping Point, about how much news anchors can influence our votes by deploying their smiles for and against their preferred candidates, then democracy as we know it is a charade (and not for the reasons usually given, but for the completely unsupported reason that subliminal persuaders can create any electoral results they want). And so on. These ideas are far from obvious, self-evident, or trivial. They do have the property of engaging a hindsight bias, of triggering a pleasurable rush of counterintuition, of seeming correct once you have learned about them. But an idea that people feel like they already knew is much different from an idea people really did know all along.

Janet Maslin's New York Times review of David and Goliath begins by succinctly stating the value proposition that Gladwell's work offers to his readers:
The world becomes less complicated with a Malcolm Gladwell book in hand. Mr. Gladwell raises questions — should David have won his fight with Goliath? — that are reassuringly clear even before they are answered. His answers are just tricky enough to suggest that the reader has learned something, regardless of whether that’s true.
(I would only add that the world becomes not just less complicated but better, which leaves the reader a little bit happier about life.) In a recent interview with The Guardian, Gladwell as much as agreed: "If my books appear to a reader to be oversimplified, then you shouldn't read them: you're not the audience!"

I don't think the main flaw is oversimplification (though that is a problem: Einstein was right when he—supposedly—advised that things be made as simple as possible, but no simpler). As I wrote in my own review, the main flaw is a lack of logic and proper evidence in the argumentation. But consider what Gladwell's quote means. He is saying that if you understand his topics enough to see what he is doing wrong, then you are not the reader he wants. At a stroke he has said that anyone equipped to properly review his work should not be reading it. How convenient! Those who are left are only those who do not think the material is oversimplified.

Who are those people? They are the readers who will take Gladwell's laws, rules, and causal theories seriously; they will tweet them to the world, preach them to their underlings and colleagues, write them up in their own books and articles (David Brooks relied on Gladwell's claims more than once in his last book), and let them infiltrate their own decision-making processes. These are the people who will learn to trust their guts (Blink), search out and lavish attention and money on fictitious "influencers" (The Tipping Point), celebrate neurological problems rather than treat them (David and Goliath), and fail to pay attention to talent and potential because they think personal triumph results just from luck and hard work (Outliers). It doesn't matter if these are misreadings or imprecise readings of what Gladwell is saying in these books—they are common readings, and I think they are more common among exactly those readers Gladwell says are his audience.

Not backing down, Gladwell said on the Brian Lehrer show that he really doesn't care about logic, evidence, and truth—or that he thinks discussions of the concerns of "academic research" in the sciences (i.e., logic, evidence, and truth) are "inaccessible" to his lowly readers:
I am a story-teller, and I look to academic research … for ways of augmenting story-telling. The reason I don’t do things their way is because their way has a cost: it makes their writing inaccessible. If you are someone who has as their goal ... to reach a lay audience ... you can't do it their way.
In this and another quote, from his interview in The Telegraph, about what readers "are indifferent to," the condescension and arrogance are in full view:
And as I’ve written more books I’ve realised there are certain things that writers and critics prize, and readers don’t. So we’re obsessed with things like coherence, consistency, neatness of argument. Readers are indifferent to those things. 
Note, incidentally, that he mentions coherence, consistency, and neatness. But not correctness, or proper evidence. Perhaps he thinks that these are highfalutin cares for writers and critics, or perhaps he is some kind of postmodernist for whom they don't even exist in any cognizable form. In any case, I do not agree with Gladwell's implication that accuracy and logic are incompatible with entertainment. If anyone could make accurate and logical discussion of science entertaining, it is Malcolm Gladwell.

Perhaps ... perhaps I am the one who is naive, but I was honestly very surprised by these quotes. I had thought Gladwell was inadvertently misunderstanding the science he was writing about, and making sincere mistakes in the service of coming up with ever more "Gladwellian" insights to serve his audience. But according to his own account, he knows exactly what he is doing, and not only that, he thinks it is the right thing to do. Is there no sense of ethics that requires more fidelity to truth, especially when your audience is so vast—and, by your own admission, so benighted—as to need oversimplification and to be unmoved by little things like consistency and coherence? I think a higher ethic of communication should apply here, not a lower standard.

This brings me back to the question of why Gladwell matters so much. Why am I, an academic who is supposed to be keeping his head down and toiling away on inaccessible stuff, spending so much time reading his interviews, reviewing his book, and writing this blog post? What Malcolm Gladwell says matters because, whether academics like it or not, he is incredibly influential.

As Gladwell himself might put it: "We tend to think that people who write popular books don't have much influence. But we are wrong." Sure, Gladwell has huge sales figures and is said to command big speaking fees, and his TED talks are among the most watched. But James Patterson has huge sales too, and he isn't driving public opinion or belief. I know Gladwell has influence for multiple reasons. One is that even highly educated people in leadership positions in academia—a field where I have experience—are sometimes more familiar with and more likely to cite Gladwell's writings than those of the top scholars in their own fields, even when those top scholars have put their ideas into trade-book form as Gladwell does.

Another data point: David and Goliath has only been out for a few days, but already there's an article online about its "business lessons." A sample assertion:
Gladwell proves that not only do many successful people have dyslexia, but that they have become successful in large part because of having to deal with their difficulty. Those diagnosed with dyslexia are forced to explore other activities and learn new skills that they may have otherwise pursued. 
Of course this is nonsense—there is no "proof" of anything in this book, much less a proof that dyslexia causes success. I wonder if the author of this article even has an idea what proper evidence in support of these assertions would be, or if he knows that these kinds of assertions cannot be "proved."

One final indicator of Malcolm Gladwell's influence—and I'll be upfront and say this is an utterly non-scientific and imprecise methodology—suggests why he matters. I Googled the phrases "Malcolm Gladwell proved" and "Malcolm Gladwell showed" and compared the results to those for the similar phrases "Steven Pinker proved" and "Steven Pinker showed" (adding in the results of redoing the Pinker search with the incorrect "Stephen"). I chose Steven Pinker not because he is an academic, but because he has published a lot of bestselling books and widely read essays and is considered a leading public intellectual, like Gladwell. Pinker is surely much more influential than most other academics. It just so happens that he published a critical review of Gladwell's previous book—but this also is an indicator of the fact that Pinker chooses to engage the public rather than just his professional colleagues. The results, in total number of hits:

Gladwell: proved 5300, showed 19200 = 24500 total
Pinker: proved 9, showed 625 = 634 total

So the total influence ratio as measured by this crude technique is 24500/634, or over 38-to-1 in favor of Gladwell. I wasn't expecting it to be nearly this high myself. (Interestingly, those "influenced" by Pinker are only 9/634, or 1.4% likely to think he "proved" something as opposed to the arguably more correct "showed" it. Gladwell's influencees are 5300/24500 or 21.6% likely to think their influencer "proved" something.) Refining the searches, adding "according to Gladwell" versus "according to Pinker" and so on will change the numbers, but I doubt enough corrections will significantly redress a 38:1 difference.
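For anyone who wants to reproduce the arithmetic, here is a minimal Python sketch using only the hit counts quoted above; rerunning the searches today would of course return different counts, so treat this as an illustration of the calculation rather than a live measurement.

```python
# A minimal sketch of the back-of-the-envelope comparison described above.
# The hit counts are the ones quoted in the post; they are the only inputs.

gladwell = {"proved": 5300, "showed": 19200}
pinker = {"proved": 9, "showed": 625}

gladwell_total = sum(gladwell.values())  # 24,500
pinker_total = sum(pinker.values())      # 634

print(f"Influence ratio: {gladwell_total / pinker_total:.1f} to 1")           # ~38.6 to 1
print(f"Gladwell 'proved' share: {gladwell['proved'] / gladwell_total:.1%}")  # ~21.6%
print(f"Pinker 'proved' share: {pinker['proved'] / pinker_total:.1%}")        # ~1.4%
```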

When someone with this much influence on what people seem to really believe (as indexed by my dashed-off method) says that he is just a storyteller who merely uses research to "augment" the stories—who places the stories first and the science in a supporting role, rather than the other way around—he's essentially placing his work in the category of inspirational books like The Secret. As Dan Simons and I noted in a New York Times essay, such books sprinkle in references and allusions to science as a rhetorical strategy. Accessorizing an otherwise inconsistent or incoherent story-based argument with pieces of science is profitable because references to science are touchpoints that help readers maintain their default instinct to believe what they are being told: when readers see "science," they can suppress any skepticism that might be bubbling up in response to the inconsistencies and contradictions.

In his Telegraph interview, Gladwell again played down the seriousness of his own ideas: "The mistake is to think these books are ends in themselves. My books are gateway drugs – they lead you to the hard stuff." And David and Goliath does cite scholarly works, books and journal articles, and journalism, in its footnotes and endnotes. But I wonder how many of its readers will follow those links, as compared to the number who will take its categorical claims at face value. And of those that do follow the links, how many will realize that many of the most important links are missing?

This leads to my last topic, the psychology experiment Gladwell deploys in David and Goliath to explain what he means by "desirable difficulties." The difficulties he talks about are serious challenges, like dyslexia or the death of a parent during one's childhood. But the experiment is a 40-person study on Princeton students who solved three mathematical reasoning problems presented in either a normal typeface or a difficult-to-read typeface. Counterintuitively, the group that read in a difficult typeface scored higher on the reasoning problems than the group that read in a normal typeface.

In my review, I criticized Gladwell for describing this experiment at length without also mentioning that a replication attempt with a much larger and more representative sample of subjects did not find an advantage for difficult typefaces. One of the original study's authors wrote to me to argue that his effect is robust when the test questions are at an appropriate level of difficulty for the participants in the experiment, and that his effect has in fact been replicated “conceptually” by other researchers. However, I cannot find any successful direct replications—repetitions of the experiment that use the same methods and get the same results—and direct replication is the evidence that I believe is most relevant.

This may be an interesting controversy for cognitive psychologists, but it's not the point here. The point is that Gladwell says absolutely nothing about the controversy over whether this effect is reliable. All he does is cite the original 2007 study of 40 subjects and rest his case. Even those who have been hooked by his prose and look to the endnotes of this chapter for a new fix will find no sources for the "hard stuff"—e.g., the true state of the science of "desirable difficulty"—that he claims to be promoting. And if the hard stuff has value, why does Gladwell not wade into it himself and let it inform his writing? When discussing the question of how to pick the right college, why not discuss the intriguing research debate over whether going to an elite school really adds economic value (over going to a lesser-ranked school) for those people who get admitted to both? Or, when discussing dyslexia, instead of claiming it is a gift to society, how about devoting the space to a serious consideration of the hypothesis that this kind of early-life difficulty jars the course of development, adding uncertainty (increasing the chances of both success and failure, though probably not in equal proportions) rather than directionality? There was so much more he could have done with these fascinating and important topics.

But at least the difficulty of finding a simple experiment to serve as a metaphor might have jarred Gladwell into realizing that the connection between the typeface effect, however robust it might turn out to be, and the effect of a neurological condition or loss of a parent, is in fact just metaphorical. There is no relevant nexus between reading faint type and losing a parent at an early age, and pretending there is just loosens the threads of logic to the point of breaking. But perhaps Gladwell already knows this. After all, in his Telegraph interview, he said readers don't care about stuff like consistency and coherence; only critics and writers do.

I can certainly think of one gifted writer with a huge audience who doesn't seem to care that much. I think the effect is the propagation of a lot of wrong beliefs among a vast audience of influential people. And that's unfortunate.

Tuesday, October 1, 2013

The Part Before the Colon: Is There a Trend Toward Cleverer Journal Article Titles?

I joined the Society for Personality and Social Psychology last year, even though I am not a social psychologist, because I had to in order to give an invited talk at a pre-conference session of the annual SPSP meeting, which was held in New Orleans. I had a good time, despite having a bad headache during most of my visit. Social psychologists give lots of interesting talks, they tend to be social, and they also dress better than cognitive psychologists and neuroscientists. It was also fun to see which ones made a visit to the casino across the street from the conference hotel.

As an SPSP member, I now receive their flagship journal every month: Personality and Social Psychology Bulletin (PSPB—academics love to refer to journals with acronyms). One of the best parts of the journal, to a non-specialist like me, is the article titles. In psychology, as in many areas of science, there are different strategies for a good title. One is to concisely state the main finding of the paper or the main theoretical claim (occasionally formulated as a question rather than a statement). Another is to precede that kind of title with a clever quip, allusion, pun, or other phrase that grabs attention and orients the (potential) reader towards some aspect of the research you want to emphasize or that makes the work stand out. That is the part before the colon.

An example of this latter strategy is the 1999 article that Dan Simons and I published in Perception. The title was "Gorillas in our midst: Sustained inattentional blindness for dynamic events." (Thanks to M.J. Wraga, a fellow postdoc in the Harvard psychology department at the time, for suggesting the part before the colon.) If you are a real black belt in journal article writing, you can be like Dan Gilbert and combine a statement of the main finding and a clever quip into a single phrase, as in his wonderful 1993 article (with two co-authors) "You Can't Not Believe Everything You Read." If there were a best-title award, this would surely be in the running. At least it's one of my favorites.

I think all kinds of titles can be good, if they are done well. There seems to be a trend toward more clever titles, at least during my time in psychology and social science. Consider the latest issue of PSPB (volume 39, number 10). Here are the article titles, just the parts before the colon:

1. "Show Me the Money"
2. Losing One's Cool
3. Changing Me to Keep You
4. Never Let Them See You Cry
5. Gender Bias in Leader Evaluations
6. Getting It On Versus Getting It Over With
7. The Things You Do For Me
8. "I Know Your Pain"
9. How Large Are Actor and Partner Effects of Personality on Relationship Satisfaction?
10. Touch as an Interpersonal Emotion Regulation Process in Couples' Daily Lives

I classify seven out of ten articles (all but #5, #9, and #10) as following the clever title strategy. That seems like a lot more than I used to see. To hastily test this intuition, I looked at the tables of contents for the same journal 10, 20, and 30 volumes ago, using issue 10 in 2003 and the final issues of 1993 and 1983 (since there were fewer than ten issues per volume then). There seems to have been a sharp increase:

2013: 70%   (7 out of 10)
2003: 10%   (1 out of 10: "The Good, the Bad, and the Healthy")
1993: 0%     (0 out of 11)
1983: 17%   (2 out of 12: "You Just Can't Count on Things Any More" and "Lonely at the Top")

Coincidentally, I received the latest issue of Clinical Psychological Science (volume 1, number 4; TOC apparently not online yet) today as well. It also has ten articles, and none of them have clever parts before the colon in their titles. Maybe clinical psychologists and their subject matter just aren't as funny.

Of course, this is hardly a serious statistical analysis of the phenomenon, and the quippy titles might have just coalesced at random in this particular issue, or this journal might have editors who encourage this kind of title. I should also say that I perceive the trend to exist in other areas besides social psychology. But I have heard it argued that this trend towards cleverer titles—if it really exists!—is a deleterious one, since it puts pressure on authors to come up with clever titles, and makes reviewers and editors and journalists expect to see them, and therefore it may distort the entire research endeavor towards work that can be summed up in not just the proverbial "25 words or less" but in the much higher standard of "10 very clever words or less." I have no strong belief as to whether all this is happening, or in what fields of study, but perhaps it's something to think about.

If someone does the research and writes a journal article on this, they are welcome to use the title "In 25 Words or Less: The Effect of Trends Toward Clever Pre-Colon Article Titles on the Content and Quality of Research." Just make sure to cite this blog entry, or come up with a catchier title yourself.

PS: I am fully prepared to be told that someone else has already said all this, or even done the research relating title catchiness to citation counts or other metrics. I have anticipated this in my other article, "Leap Before You Look: The Surprising Value of Writing Blog Entries Without Doing Your Research First."

Thursday, September 12, 2013

Similarities Between Rolf Dobelli's Book and Ours


Rolf Dobelli, a Swiss writer, published a book called The Art of Thinking Clearly earlier this year with HarperCollins in the U.S. Its original German edition was a #1 bestseller, and it has sold over one million copies worldwide.

In perusing Mr. Dobelli’s book, we noticed several familiar-sounding passages. On closer examination, we found five instances of unattributed material that is either reproduced verbatim or closely paraphrased from text and arguments in our book, The Invisible Gorilla (Crown, 2010). They are listed at the end of this note.

Nassim Taleb (author of The Black Swan and other books) has also publicly noted similarities between his work and material in Mr. Dobelli’s book. We have also become aware of a similarity between material in Being Wrong by Kathryn Schulz and material in Mr. Dobelli’s book.

We sent a letter to Mr. Dobelli and his publishers noting our concern about these five passages. Mr. Dobelli replied to us privately and posted the following text on his website (since removed):
“I received two letters claiming inadequate attributions or citations in the book 'The Art of Thinking Clearly.' Some of the claims are true, some false. For the ones that are true, I take full responsibility. I will work closely with the publishers of my book that the corrections are put in effect as quickly as possible.”
In the interest of transparency, we have decided to post the list of similar passages here. We understand that Mr. Dobelli will be identifying the changes he intends to make to his book on his website as well.

— Christopher Chabris & Daniel Simons


Passages with overlap between The Invisible Gorilla by Christopher Chabris and Daniel Simons, and The Art of Thinking Clearly by Rolf Dobelli (portions of greatest similarity between the books are highlighted in red):

1.   Chabris/Simons: In April 2006, rising waters made a ford through the start of the Avon River temporarily impassable, so it was closed and markers were put on both sides. Every day during the two weeks following the closure, one or two cars drove right past the warning signs and into the river. These drivers apparently were so focused on their navigation displays that they didn’t see what was right in front of them. [pp. 41–42]
Dobelli: After heavy rains in the south of England, a river in a small village overflowed its banks. The police closed the ford, the shallow part of the river where vehicles cross, and diverted traffic. The crossing stayed closed for two weeks, but each day at least one car drove past the warning sign and into the rushing water. The drivers were so focused on their car’s navigation systems that they didn’t notice what was right in front of them. [p. 263; opening paragraph of Chapter 88]

2.   Chabris/Simons: The “Nun Bun” was a cinnamon pastry whose twisty rolls eerily resembled the nose and jowls of Mother Teresa. It was found in a Nashville coffee shop in 1996, but was stolen on Christmas in 2005. [p. 155]
Dobelli: The “Nun Bun” was a cinnamon pastry whose markings resembled the nose and jowls of Mother Teresa. It was found in a Nashville coffee shop in 1996 but was stolen on Christmas in 2005. [p. 310]

3.   Chabris/Simons: “Our Lady of the Underpass” was another appearance by the Virgin Mary, this time in the guise of a salt stain under Interstate 94 in Chicago that drew huge crowds and stopped traffic for months in 2005. Other cases include Hot Chocolate Jesus, Jesus on a shrimp tail dinner, Jesus in a dental x-ray, and Cheesus (a Cheeto purportedly shaped like Jesus). [p. 155]
      Dobelli: “Our Lady of the Underpass” was another appearance by the Virgin Mary, this time as a salt stain under Interstate 94 in Chicago in 2005. Other cases include Hot Chocolate Jesus, Jesus on a shrimp tail dinner, Jesus in a dental X-ray, and a Cheeto shaped like Jesus. [p. 310]
4.   Chabris/Simons: In other words, almost immediately after you see an object that looks anything like a face, your brain treats it like a face and processes it differently than other objects. [p. 156]
      Dobelli: As soon as an object looks like a face, the brain treats it like a face—this is very different from other objects. [p. 310]

5.   The paragraph in Dobelli is a condensation and paraphrase of a longer passage and argument appearing in The Invisible Gorilla:
      Chabris/Simons: It may come as a surprise, then, to learn that talking to a passenger in your car is not nearly as disruptive as talking on a cell phone. In fact, most of the evidence suggests that talking to a passenger has little or no effect on driving ability.40
            Talking to a passenger could be less problematic for several reasons. First, it’s simply easier to hear and understand someone right next to you than someone on a phone, so you don’t need to exert as much effort just to keep up with the conversation. Second, the person sitting next to you provides another set of eyes—a passenger might notice something unexpected on the road and alert you, a service your cell-phone conversation partner can’t provide. The most interesting reason for this difference between cell-phone conversation partners and passengers has to do with the social demands of conversations. When you converse with the other people in your car, they are aware of the environment you are in. Consequently, if you enter a challenging driving situation and stop speaking, your passengers will quickly deduce the reason for your silence. There’s no social demand for you to keep speaking because the driving context adjusts the expectations of everyone in the car about social interaction. When talking on a cell phone, though, you feel a strong social demand to continue the conversation despite difficult driving conditions because your conversation partner has no reason to expect you to suddenly stop and start speaking. These three factors, in combination, help to explain why talking on a cell phone is particularly dangerous when driving, more so than many other forms of distraction. [p. 26]
      Dobelli: And, if instead of phoning someone, you chat with whomever is in the passenger seat? Research found no negative effects. First, face-to-face conversations are much clearer than phone conversations, that is, your brain must not work so hard to decipher the messages. Second, your passenger understands that if the situation gets dangerous, the chatting will be interrupted. That means you do not feel compelled to continue the conversation. Third, your passenger has an additional pair of eyes and can point out dangers. [pp. 353–354]

Thursday, August 22, 2013

Should Poker Be (A Tiny Bit) More Like Chess?

There are similarities between tournament poker and tournament chess, and many serious chess players, including some grandmasters, have taken up poker with success. Poker is a much, much richer game, however, because there is more variance in outcomes when weaker and stronger players face each other. Thousands of poker players pay $1000, $1500, or even $10,000 to enter poker tournaments, routinely creating multimillion-dollar prize pools. Most chess tournaments, except for invitational events reserved for the very top players in a country or the world, also charge entry fees, but tournament chess doesn't have enough variance to get people to put up even $1000 (in today's dollars) to play. So it's a good thing that poker isn't more like chess in the way that stronger chess players are very likely to win against weaker ones.

But as I was reading the August 21 issue of Card Player magazine recently, I stumbled across a discussion that got me thinking that poker still needs some improving. In a column called "The Rules Guy" (which is not yet available online) I read the following:

The Rules Guy: Props to Antonio Esfandiari. TRG salutes Antonio Esfandiari for saying "You're both out of line" to Jungleman (Dan Cates) and Scott Seiver after their intense verbal altercation on a Party Poker Premier League VI broadcast. A calming voice can, well, work magic.
What happened, in a nutshell, was that Cates had broken the rules by acting out of turn several times at the table. Acting out of turn means betting, folding, or doing other things you normally do when it is your turn to bet, but doing them before it is your turn. This is bad because it gives the players who are supposed to act before you information about your hand strength and intentions that they aren't supposed to have. Therefore it can help those players, and also hurt other players. It can also be a way of colluding with others at the table, which is obviously a fundamental no-no. Seiver called out Cates on his repeated out-of-turn acting, Cates said something in response, Seiver said "it's like actual cheating," Cates used the f-word, and it went on from there.

At some point, according to the article, Esfandiari said, "You're both out of line. You're [Cates] out of line for acting out of turn; you're [Seiver] out of line for attacking him." He is portrayed as the level-headed hero of the whole episode and gets "props" from The Pseudonymous Rules Guy.

When I was at the World Series of Poker this past June, I played in a $1500 buy-in no limit hold'em tournament. I was doing pretty well at my first table, but then the table broke and I was moved to a table of mostly younger players, plus one very well-known pro: Phil Laak, who is a close friend of Antonio Esfandiari. (They appear together on the ESPN broadcasts and even co-hosted an entire series about prop betting a few years back.) Laak was three seats to my right, and he was acting just like he acts on all the televised poker events that love to show him. He was hamming it up, saying crazy and clever things, acting alternately bored and intensely interested, jumping up to take an occasional picture with a fan, making friends with everyone, and so on.

Laak was in the last hand before the dinner break. The clock had run down, so everyone was free to go if they wanted to, but I stayed at the table to see the hand play out. Laak was heads-up against the player to his left (two seats to my right). There was much betting, and after the river card was dealt Laak went all-in. His opponent started thinking about this major decision of whether to call or fold. He had enough chips to call without being knocked out, but a lot of chips were at stake.

By this point Esfandiari had come over to our table. I don't know if he was playing in the same tournament, another tournament, or what, but he came to talk to Laak about plans for the dinner break. The two of them were talking, while Laak was in the hand, even before Laak had made his final all-in bet. This seems bad to me. Why should any player who is in a hand be allowed to say anything to anyone else while the hand is going on? But it got worse.

As Laak's opponent, who as far as I could tell did not know Laak personally, or at least was not great friends with him, was thinking over his decision, Esfandiari leaned over him and said something like "Hurry up and fold, we want to go eat!" Those probably weren't his exact words; I didn't write them down. But he clearly spoke to the player whose turn it was to act, and clearly spoke to him about one of the actions he was contemplating. He didn't just say "hurry up"—he mentioned folding too.

Now, I don't think Esfandiari knew what Laak's hand was. Perhaps he was just goofing around because he was hungry and wanted to go and eat. But I wouldn't be surprised if he was also trying to help Laak just a little bit, perhaps unconsciously, by throwing Laak's opponent off his train of thought, or by sowing doubt about the right play to make.

As it happened, the guy didn't seem bothered by Esfandiari and didn't complain about him. He eventually called, only to find that Laak had made a straight flush on the river. In retrospect, he should have taken the "advice" and folded.

Regardless, I found Esfandiari's actions appalling. I didn't say anything, not being an expert in poker rules and etiquette, and not being involved in the hand, but I thought that if this was legal, it was a very big difference from chess. In a chess tournament, you aren't allowed to do anything close to that. If a friend of Magnus Carlsen walked up to Carlsen's opponent and said "just resign already" while he was thinking about his next move ... I cannot imagine what would happen, since it's so far outside the realm of possibility. Garry Kasparov used to be criticized severely for making faces during games in reaction to his opponent's moves. This is orders of magnitude worse.

Esfandiari may have been right about Cates and Seiver (though I think repeatedly acting out of turn is worse than getting pissed off at someone for repeatedly acting out of turn), but I think he was wrong to say a single word to Laak, or especially Laak's opponent, while their hand was in progress. It doesn't matter that he's a famous pro, or that he's the all-time biggest money winner in tournament poker, or that he's considered to be a nice guy. Let's keep poker different from chess in all the ways that matter for its popularity, but let's make it more like chess by enacting or enforcing rules that help each player, amateur and pro alike, make their decisions by themselves, in peace.

Friday, February 22, 2013

Polishing Rabbits and Passing Off Squirrels—Andrew Zolli on Jonah Lehrer

Andrew Zolli, the Executive Director and curator of PopTech, as well as the co-author of Resilience, sent me a very thoughtful reflection in response to my earlier post on Jonah Lehrer and his recent apology. He had tried to post it as a comment on my post, but ran up against Blogger's comment length limits. So with Andrew's consent, I am posting it below. I think that Andrew's points are excellent. (Note: I have been an invited speaker at PopTech, both on the stage and to the Fellows program.)
At this point, the whole sad L'Affaire d'Lehrer has been dissected into a finely-ground powder, and everyone has assigned Jonah appropriate culpability, including Jonah himself.  What I find of more lasting interest is a systemic issue which Chris touches on glancingly, above: 
We live in a media moment that massively encourages and rewards the pulling of proverbial rabbits out of hats—storytelling that culminates in a counterintuitive fact about human beings and their nature.  It's sort of "Sudoku storytelling", in which the reader is presented with a confusing storyline, and the author presents a rubric and reassembles the elements in a way that snaps the pieces into place in a clean and satisfying way.  This kind of writing gives the reader a little positive jolt, a sense that they've been let in on some secret wisdom that decodes part of the human condition. (That "snapping into place" phenomenon—it's what makes a joke with a good punchline work too - you know it's coming, and you can't quite see how it will resolve itself, and then *wham*—there it is! The same is true for get-rich-quick-schemes.) 
These are the kinds of pieces—not just books, but blog pieces, and other forms of writing—that go "viral." Our appetite for such secret wisdom is so strong that passing them along actually raises the social capital of the *forwarder*, not just the author. (This is what Twitter was made for, I believe.) 
And this is *exactly* the kind of content that beleaguered mainstream editors often push writers, particularly talented writers, to produce—not nuanced tomes with confidence intervals attached to data, including examples of counterfactuals and copious footnotes—but snappy, highly "applicable," linear narratives (with counterintuitive endings!) that sacrifice complexity for accessibility. (As one editor put it to me: "You wanna write that other shit? Go to a university press!") 
And its not just editors—these are the kinds of books that command significant advances, that backlist, that build the author's speaking fees, that get them bylined articles in prominent magazines, and tv appearances—a whole edifice that, most of the time, ends up with the "talent" becoming a not-terribly-intellectual-public-intellectual. (By the way, it's not just science writers … business gurus in particular are often peddlers of pure horseshit, yet find a insatiable appetite for their nonsense. Because if there's one thing human beings find even more interesting than ourselves, it's how to make a buck off of some other clueless rube.) 
Of course, the big problem is that there really aren't an endless supply of rabbits to pull out of hats. And not all rabbits are of first quality—sometimes, we have to "polish the rabbit," so to speak. And that's how I believe Jonah (whom I know personally, though not well) got into this predicament—being overly committed to the rabbit production line. So you start to reuse your rabbits, then you try to pass off second quality rabbits by making them look all the more surprising. And then you're panicked to discover you're passing off squirrels. 
Oddly enough, the rabbit-out-of-the-hat counterintuitive ending is actually Jonah's story, which is why his downfall itself went viral. You think this guy is just blessed with preternatural explanatory talent, but it turns out, "the 'Imagine' guy was making up his own quotes!" It's a joke! And a punchline! Love it! Instant schadenfreude! Have you heard? Pass it on! 
I am not excusing Jonah for his mistakes, which are significant. I think it's an honor to be held to a high standard, and he failed that standard, more than once. Worse, he had (and has) the abundant and enviable talent not to fail. And there should be real consequences for his having done so. 
Yet I also think we ought to be careful in making him a cautionary tale for a civilization drowning in its own bullshit. He was unprofessional, but he was also responding to perverse incentives and societal norms in our public square that we collectively bolster, if not passively tolerate, by our own consumption habits. 
For me, I'm trying to become more mindful of my own bullshitological contributions—which are, I'm sure greater than I'd care to admit. I'm also finding myself reflecting on how we might make the system itself better, with fewer incentives for bad behavior, and better rewards for good behavior. 
Because, while I'm sure there is some intrinsic character in all of us, it's also true that incentives draw forth aspects of that character, which then can come to publicly define us. (I can be fairly charitably-minded until someone cuts me off in traffic; fortunately for me, my utterances thereafter are not part of the public record.) 
So here's my concluding truism: Piling on Jonah is like jumping on a trampoline: fun for a while, but it won't take us very far. Better to think about how we can springboard to a better place for everyone. 
I know it's not counter-intuitive enough. I guess I'll never make it in this business. 

Monday, February 18, 2013

How Much BAM for the Buck, and Other Thoughts on the Brain Activity Map Project

Today's New York Times reports that the Obama administration is considering a massive, partly government-funded project to map the human brain, the Brain Activity Map (BAM!) Project, inspired by the success of the Human Genome Project.

Let me start by saying that I am all in favor of more research in neuroscience, because there is certainly a lot we don't know about how the brain works. While to outsiders like Ray Kurzweil it may look like progress is coming in leaps and bounds, and backing up the mind's hard drive is therefore a calculable number of years away, from the inside the effort to understand the brain often seems to zigzag from new idea to cool finding to neat technology without a clear forward trajectory. I am also a big fan of George Church, a genius and visionary of molecular biology who is one of the driving forces behind the new plan. (I even once co-taught a course on cognitive genetics at Harvard with George's wife, the geneticist Ting Wu.) But before we all jump on this bandwagon, let's discuss the pros and cons—based on what has been said publicly so far (mainly in the Times article, which was prefigured by a Neuron article by Church and several others published last June).

Per the Times, the project is expected to cost "billions of dollars" and last 10 years. Its goals are to "advance the knowledge of the brain's billions of neurons and gain greater insights into perception, actions, and, ultimately, consciousness." So far, so good—basic science. Some also hope that the project will "develop the technology essential to understanding diseases like Alzheimer's and Parkinson's, as well as to find new therapies for a variety of mental illnesses." That's certainly possible, though I cannot think of any treatments for mental illness or brain disease that have been derived from previous maps of the brain or knowledge of its activity patterns. Perhaps this is just an argument that we need better maps. Finally, "the project holds the potential of paving the way for advances in artificial intelligence." Certainly also possible, but I think AI has been doing pretty well lately by ignoring brain architecture and going with whatever algorithms work on computer hardware to produce intelligent-seeming behavior.

The Times account is short on details of what precisely is being proposed, which has led some people to think that the idea is to map every connection and the firing activity of every neuron in (at least) one human brain, or to make more maps of the functions of brain regions using neuroimaging techniques. But the Neuron article by the Brain Activity Map proponents makes it clear that, last June at least, the idea was to start with small circuits in very small organisms, where it may soon be possible to record from every participating neuron at once, and to work up to larger circuits and larger organisms. All these maps would record "the patterns and sequences of neuronal firing by all neurons" in the relevant circuit or brain, so they would be much more detailed, in both space and time, than any existing databases. A drosophila brain might be done in ten years, a mouse neocortex in fifteen. The entire human brain would be a more distant goal. And of course there would be ethical issues to be surfaced and solved along the way to that ultimate step.

There are a lot of things to like about this ambition. Although we already have lots of maps of the brain, none of them (with the single exception of the structural connectome of the C. elegans worm) approaches the spatial resolution of a neuron-by-neuron map. The main source of our knowledge about how neurons represent information, carry out computations, and communicate with other neurons is still the single-cell recording, a technique developed about half a century ago. Such methods are based on inserting tiny electrodes in or near living neurons, and have obvious limitations, not least their inability to scale to full circuits or brain regions. Recording entire circuits in action would be a fantastic achievement and probably would lead to all sorts of ancillary benefits for advancing brain research, some foreseeable and some not. And perhaps more neuroscientists would be able to find jobs along the way!

But there are some considerations on the other side of the ledger, too. One that should not be underestimated is the opportunity cost; always, but especially nowadays, it would be a mistake to imagine that the funding for a new, large project will appear out of thin air. If the BAM goes forward, other areas are likely to get less funding, and other neuroscience and behavioral science projects will likely be among the first to be reduced. Moreover, a single mega-project is likely to supplant many smaller projects. Is our neuroscience money best spent on one project costing, say, $5 billion, or instead a thousand projects of $5 million each, or ten thousand projects with $500K budgets? Gary Marcus has a suggestion for five $1 billion projects. Which funding strategy is likely to result in more important discoveries, as viewed from the perspective of the next generation of scientists looking back? Maybe the BAM, but maybe not. The answer is hardly obvious to me. The big project is concrete and tangible, with milestones in the near future. The net effect of the tinkering of ten thousand labs with comparatively small budgets is harder to conceive of, but might turn out to be much larger.

One reason to be suspicious of the potential return-on-investment of a massive BAM project is that it's being sold by comparing it to the Human Genome Project (HGP), with a claim that the HGP produced $141 in economic activity for every $1 the government spent on it. President Obama cited this figure in his State of the Union Address. That's a return of fourteen thousand percent! Can that be right? If so, it would mean that about $800 billion in economic activity has been generated by that one government "investment." It turns out that this claim comes from a Battelle report (which is cited by the BAM advocates in their Neuron article) that was sponsored by a company that makes equipment used in life science research.
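As a quick sanity check on those figures, the implied spending can be backed out from the numbers quoted above. The sketch below uses only the 141:1 ratio and the roughly $800 billion total; the implied spending figure is a derivation for illustration, not a number taken from the report.

```python
# Back-of-the-envelope check of the HGP return-on-investment claim discussed above.
# Only the 141:1 ratio and the ~$800 billion total come from the post; the implied
# spending figure is derived here, not quoted from the Battelle report.

roi_ratio = 141          # $141 of claimed economic activity per $1 of government spending
total_activity = 800e9   # roughly $800 billion in claimed economic activity

implied_spending = total_activity / roi_ratio
print(f"Implied government spending: ${implied_spending / 1e9:.1f} billion")  # ~$5.7 billion

net_return_percent = (roi_ratio - 1) * 100
print(f"Net return: {net_return_percent:,}%")  # 14,000%, i.e., 'fourteen thousand percent'
```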

I find this figure hard to believe, not to say preposterous. Does it really represent net economic activity, or does it include activity merely displaced from other spheres? And was all that economic activity the best that could have been done, or was it activity encouraged by the pursuit of grant funding and other non-market incentives? What if the same amount of government money had been spent funding lots of individual genetics researchers instead, or on other biology researchers, or other science entirely? The certainty with which these sorts of analyses are presented makes it hard to see counterfactual alternatives, but they lurk everywhere. At a minimum, the $800B value must rest on a lot of assumptions, and the specific assumptions made probably have a large impact on the value that comes out of the analysis.

To be clear: I think the genome project was a great scientific idea, I suspect that it has produced a lot of benefits, and I am personally happy it was done. I just don't think it should be oversold. As Richard Feynman pointed out in his famous "Cargo Cult" speech, public support for research will eventually erode if it is sold with outrageous-sounding claims or promises of early benefits.

But suppose it is true that the Human Genome Project was the single best thing the U.S. government ever spent its money on—sorry, "investment it ever made"—the government's version of buying Apple stock for $5 and selling at $700. Should we expect similar returns from the next big science project? Or should we expect to see the economic return and gains in knowledge achieved by the average of the big science projects that the government has funded over the past decades? The abandoned supercollider, the war on cancer, the cancelled breeder reactor, and I am sure many others fade from memory—and certainly never get mentioned—when we are told about the 141X ROI of the genome project (worthy as it was). An analysis that looked at all the comparable projects rather than just the all-time outlier might come to a different projection of the likely value of the BAM. We might still expect a positive return, but without the 141X (or whatever the true value is), it will have a tougher time competing with other priorities, or with other ways of parceling out neuroscience funding.

Europe has thrown its weight behind the single mega-project approach, with an effort to simulate an entire brain at a cost of over 1 billion euros. Regardless of the (questionable) merit of this idea, perhaps the U.S. should play a different strategy in the competition for research glory by letting a thousand flowers bloom rather than planting one ginormous tree. Indeed, such a contrarian approach may have value precisely because of the limits of the mapmaking approach to understanding the brain.

Forty years ago, single-cell neurophysiologist Horace Barlow famously proposed that "a description of that activity of a single nerve cell which is transmitted to and influences other nerve cells and of a nerve cell's response to such influences from other cells, is a complete enough description for functional understanding of the nervous system." The BAM Project seems to be a plan to create exactly this sort of description, but at a much larger scale. But as David Marr explained in his 1982 book Vision, and as Hilary Putnam also suggested in his 1973 Cognition article "Reductionism and the Nature of Psychology," there are several other levels of explanation that are equally important in reaching a "functional understanding" of how the brain works. The representations, algorithms, and computational functions of the brain and its circuits, as well as the relationship of the brain to the organism and its environment and niche, are just as important as a map that shows how the neurons are wired up and how they send signals to one another.

Again, it is not that a BAM would have no value. I would personally be fascinated to see its results, and those results might well help us to crack the problem of how higher-level properties emerge out of agglomerations of lower-level events (which the psychologist Stephen Kosslyn, a founder of cognitive neuroscience, proposed as one of the hardest problems in social science). But the sheer size of a full BAM project might focus our attention and hopes on the BAM as the be-all and end-all of neuroscience, and distract the field from devoting energy to those other levels. Cognitive scientist Mark Changizi has eloquently argued, in fact, that the massive project we ought to be pursuing is a map of the "teleome," his coinage for the suite of functions and abilities that the nervous system was designed by evolution to perform. Without knowing more about function, it will be hard to understand the BAM's results, and perhaps even harder to build the EU's whole-brain computer simulation. As the proposal moves forward, I hope the decision-makers keep in mind that maps, while incredibly useful tools, don't give answers to every important question.

Tuesday, February 12, 2013

What Has Been Forgotten About Jonah Lehrer

Today the science writer Jonah Lehrer made his first extended public remarks since he resigned his various positions and his publisher withdrew his third book last summer. The venue was a Knight Foundation conference in Miami. Lehrer gave a short speech about decision making, focusing on his own bad decisions and how he plans to prevent them from recurring in the future. To my surprise, the foundation, which supports "journalistic excellence," seems to have paid Lehrer $20,000 for his appearance.

As is well known, Lehrer first got into trouble last year when it was revealed that his new blog at the New Yorker incorporated much material that he had previously published, including in his old column at the Wall Street Journal. This led to a suspension of his blogging privileges. Then various investigations showed that he had not only "self-plagiarized" (a lazy and exploitative practice) but also plagiarized the work of others, and perhaps worst of all embellished and fabricated quotes from his interview subjects (most prominently Bob Dylan) and other sources. The New Yorker finally let him go, as did Wired. He completely ceased tweeting, Facebooking, or updating his website.

At first I felt bad about Jonah Lehrer's problems. He seemed like a nice person. When I published a fairly negative review of his third book, Imagine: How Creativity Works, in the New York Times, he was up on his blog with a reply, titled "On Bad Reviews," in a matter of hours. I wrote my own strong rebuttal and posted it a couple of days later. The next day, Lehrer emailed me proposing that he interview me by email about the issues I had raised, for publication on his blog. We did the interview, which took several weeks to complete. After various delays, caused by the suspension and then cancellation of his blog, the interview was finally published at the Creativity Post website. I was pleasantly surprised that Lehrer bothered to engage my criticism, and then to ask me directly how I thought he (and other science writers) could improve their practices. I was a bit upset when he tried to block the final publication of the interview, which was supposed to happen (coincidentally) the day after he departed the New Yorker, but the Creativity Post editors managed to convince him to change his mind.

When the allegations of plagiarism and fabrication came out, the story became one of "greatest science writer of his generation makes unthinkable mistakes," and the analysis was mostly psychoanalysis of Lehrer's motives or of the media culture. Entirely lost was the fact that Jonah Lehrer was never a very good science writer. He seemed not to fully understand the science he was trying to explain; his explanations were inaccurate, overblown, and often just plain wrong, usually in the direction of giving his readers counterintuitive thrills and challenging their settled beliefs. You can read my review and the various parts of my exchange with him that are linked above for detailed explanations of why I make this claim. Others have made similar points too, for example Isaac Chotiner at the New Republic and Tim Requarth and Meehan Crist at The Millions. But the tenor of many critics last year was "he committed unforgivable journalistic sins and should be punished for them, but he still got the science right." There was a clear sense that one had nothing to do with the other.

In my opinion, the fabrications and the scientific misunderstanding are actually closely related. The fabrications tended to follow a pattern of perfecting the stories and anecdotes that Lehrer -- like almost all successful science writers nowadays -- used to illustrate his arguments. Had he used only words Bob Dylan actually said, and only the true facts about Dylan's 1960s songwriting travails, the story wouldn't have been as smooth. It's human nature to be more convinced by concrete stories than by abstract statistics and ideas, so the persuasive force of Lehrer's science writing came from the brilliance of his stories, characters, and quotes. Those are the elements that people process fluently and remember long after the details of experiments and analyses fade.

After the Dylan episode, others found more examples of how Lehrer did this. I think one of the clearest was Seth Mnookin's analysis of Lehrer's retelling of psychologist Leon Festinger's famous original story of "cognitive dissonance," based on Festinger's experience of infiltrating a doomsday cult in 1954. Of the moments after an expected civilization-destroying cataclysm failed to start, Festinger wrote, "Midnight had passed and nothing had happened ... But there was little to see in the reactions of the people in that room. There was no talking, no sound. People sat stock still, their faces seemingly frozen and expressionless." Lehrer narrated the same event as follows: "When the clock read 12:01 and there were still no aliens, the cultists began to worry. A few began to cry. The aliens had let them down." Do you see the difference? Lehrer's version is more dramatic: people worry, they cry, they feel let down. It's more human. Each one of these little errors or fabrications makes the story work a little bit better, makes it match our expectations more closely, and thus gives it greater influence on our beliefs.

So by cutting exactly these corners in his writing, Lehrer was able to mask the fact that his conclusions were facile or erroneous, and his prose earned him a reputation for being much more authoritative than he was. Who was harmed by all of this? Writers who were trying to do with correct understanding and real quotes and stories what Lehrer did with his "material," for one. And certainly his editors, publishers, and anyone else who paid money for his halo and his drawing power. But readers most of all, since they were told things about how nature works that simply weren't true. Not just what Bob Dylan said and when he said it, but what it has to do with creativity, neuroscience, and everything else.

Jonah Lehrer gave a talk today that was more interesting than I expected. He acknowledged his mistakes and said he was trying to put operating procedures and safeguards in place to make sure his own arrogance stays in check in the future. He said some things that were hard to believe, such as his claim that he has a poster in his office of Bob Dylan by Milton Glaser (a graphic artist also misquoted by Lehrer), and that he flinches every time he sees it. Does he really flinch every time? Hasn't habituation or inattention taken care of that by now?

I actually think Lehrer might be able to return to writing successfully, because he has the technical skills, and he is obviously a very intelligent and energetic person. But he should take the time not only to protect himself against his tendency to fabricate and plagiarize, but also to learn the basics of journalistic practice and ethics, to learn how to think clearly about science and facts, and above all to commit himself to the truth. Then maybe he will have something valuable to tell us.

Monday, February 11, 2013

Six Big Problems With "Why Can Some Kids Handle Pressure ..."

Surely how kids handle pressure is an important and interesting question. And surely how we perform in pressure situations has a lot to do with our genes. But the recent New York Times article "Why Can Some Kids Handle Pressure While Others Fall Apart?" by Po Bronson and Ashley Merryman is shot through with the most basic mistakes in science writing about behavior genetics. This makes me sad, because I have liked the authors' previous books, and because I think it is quite possible to communicate research on genetics accurately for an intelligent general audience. Here, unfortunately, they appear to have taken no note of what has happened in behavior genetics in the past 5–10 years, which ought to have been a prerequisite for this piece. A few examples:

Exaggerated claims: "One particular gene, referred to as the COMT gene, could to a large degree explain why one child is more prone to be a worrier, while another may be unflappable" [emphasis added]. In reality, which variant of the COMT gene you have, if it matters at all, has only a very minor influence by itself on how much you worry. The particular variant of the COMT gene being discussed here is very common, and like all other common genetic variants, it has never been shown to have a large, or even medium-sized, influence on any behavioral trait.


Cherrypicking the study with the most dramatic results: "Other research has found that those with the slow-acting enzymes have higher IQs, on average. One study of Beijing schoolchildren calculated the advantage to be 10 IQ points." In 2013 it should be regarded as journalistic malpractice to write things like this when the average across all the studies of this gene and IQ shows the effect to be, at best, a tiny fraction of 10 IQ points. In an analysis that included almost 10,000 subjects from two countries, in fact, my colleagues and I found virtually no evidence of any effect of COMT on IQ.
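To put that in perspective, here is a back-of-the-envelope conversion (my own illustrative numbers, not figures from the article or from any particular study). Suppose a single common variant explained a full 0.1% of the variance in IQ. The corresponding correlation is r = sqrt(0.001) ≈ 0.032, which for two equal-sized genotype groups works out to a standardized difference of d = 2r/sqrt(1 − r²) ≈ 0.063, or about 0.063 × 15 ≈ 0.95 IQ points. A 10-point gap would require one variant to explain roughly 10% of the variance all by itself, which no common variant comes remotely close to doing.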


Idealizing your favorite study: "In other words, the exam was a perfect, real world experiment for studying the effects of genetics on high-stakes competition." In reality, there are no "perfect" experiments, and the one Bronson and Merryman report on had only 779 subjects, a sample that might seem large but is almost certainly too small to reveal anything reliable about the effect of a single genetic variant. About 100 times more participants are needed to really answer these questions.
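To see why, here is a rough power calculation, written as a minimal Python sketch under my own assumptions (a variant accounting for about 0.01% of the variance in the outcome, two equal genotype groups, the conventional 5% significance level and 80% power); the effect size is illustrative, not a number taken from the study.

# Rough sample-size sketch for detecting a single common variant's effect on a trait.
# Assumptions (mine, for illustration): the variant explains about 0.01% of the
# variance (R^2 = 0.0001), two equal-sized genotype groups, alpha = 0.05, power = 0.80.
from math import sqrt
from scipy.stats import norm

r2 = 0.0001                       # assumed variance explained by the variant
r = sqrt(r2)                      # implied point-biserial correlation
d = 2 * r / sqrt(1 - r2)          # Cohen's d for two equal-sized groups

z_alpha = norm.ppf(1 - 0.05 / 2)  # two-sided test at alpha = 0.05
z_beta = norm.ppf(0.80)           # 80% power

n_per_group = 2 * (z_alpha + z_beta) ** 2 / d ** 2
print(f"d = {d:.3f}, per group = {n_per_group:,.0f}, total = {2 * n_per_group:,.0f}")
# With these assumptions the total comes out near 78,000 -- roughly 100 times
# the 779 subjects in the study the article describes.

This is just the textbook two-group approximation; a more realistic model with three genotypes and statistical covariates would change the details but not the order of magnitude.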


Labeling genes with behaviors and pretending that possessing a genetic variant makes you a particular type of lucky or unlucky person: The two variants of the COMT gene are labeled "warrior" and "worrier" (for the different responses to stress they supposedly cause people to have—get it??), and people are in turn labeled as Warriors or Worriers based on their genotypes. That's tantamount to calling the variants of APOE the "Doofus" and "Genius" genes because one makes you more likely to develop Alzheimer's disease while the other offers some protection against dementia. No, wait, it's not, because APOE has a highly significant effect on Alzheimer's risk that has been replicated over and over by independent researchers, whereas COMT's links to the behaviors discussed in this article are smaller and more tenuous. Later we are told that the Worriers' "genetically blessed working memory and attention advantage kicked in. And their experience meant they didn't melt under the pressure of their genetic curse." I thought we gave up on this kind of superficial genes-as-personality-types-and-blessings-or-curses science writing years ago.


Contradicting your own point: "... we are all Warriors or Worriers ... In truth, because we all get one COMT gene from our father and one from our mother, about half of all people inherit one of each gene variation, so they have a mix of the enzymes and are somewhere in between the Warriors and the Worriers." (Is anyone else reminded of the camp 1970s film "The Warriors," about gangs that roam the New York City subways?) We can't all be one type or the other if half of us are both. And incidentally, the 25%-50%-25% pattern of the three genotypes does not arise only because we get one allele from each parent. It also depends on the two variants each having a frequency of about 50% in the population, which happens to be the case for this COMT polymorphism.
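For readers who want the arithmetic spelled out, that 25%-50%-25% split is just the Hardy-Weinberg proportions with both alleles at about 50% frequency: with p = q = 0.5, the expected genotype frequencies are p² = 0.25, 2pq = 0.50, and q² = 0.25. If one allele instead had a frequency of 0.7, the split would be 49%-42%-9%, and nowhere near half the population would be "in between." (These are textbook numbers, not anything specific to this study.)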


Pretending that what has been known for generations is a new discovery: "Stress turns out to be far more complicated than we've assumed ... short-term stress can actually help people perform ..." And later: "It may be difficult to believe ... that stress can benefit your performance." But psychology textbooks have long taught that the level of arousal for optimal performance is moderate, with too much arousal or too little leading to lower performance. This is called the Yerkes-Dodson Law, and it was originally proposed in 1908. Perhaps worth a mention?


The article makes much of findings that "those with Worrier-genes can still handle incredible stress." This would only be surprising if COMT had such a strong effect that it could determine what kind of person you are. But COMT doesn't have that effect. It's surprising when someone with the genotype for brown eyes has blue eyes instead, because the relevant genes almost completely determine the phenotype. It's not surprising when people who carry one of the hundreds or thousands of genetic variants that each contribute a little to stress susceptibility turn out to be able to handle themselves just fine.


If the authors were conversant with—and showed concern for—the relevant literature and the background science, they would not have made these mistakes. I understand that they are writers, not researchers, but people who write about research for the public have a simple obligation to communicate not just good stories, but reliable facts.