Friday, March 28, 2014

"Data Journalism" on College ROI at FiveThirtyEight: Where's theCritical Thinking?

A website called PayScale recently published a "College ROI Report" that purports to calculate the return on investment (ROI) of earning a Bachelor's degree from each of about 900 American colleges and universities. I found out about this report from an article on Nate Silver's new FiveThirtyEight website. The article appears under a banner called "DataLab," implying that it is an example of the new "data journalism" that Silver and his site are all about. Unfortunately, the article contains approximately zero critical thinking about the meaning of the PayScale report, its data sources, and its conclusions.

PayScale did a lot of number-crunching (read all about it here), but the computation resulted in two key numbers for each institution: (1) the cost of getting an undergraduate degree, taking into account factors like financial aid and time to graduation; and (2) the expected total earnings of a graduate over the next twenty years. The first one can be figured out from public data sources. The second one came from a survey by PayScale (more on this later). The ROI for a college was calculated by subtracting #1 from #2, and then further subtracting the expected total earnings of a person who skipped college and worked for 24–26 years instead (which happens to be about $1.1 million). The table produced by PayScale thus purports to show how much you would get back—in monetary income—on the "investment" of obtaining a degree from any particular college or university.
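The subtraction PayScale describes is simple enough to sketch. Here is a minimal illustration of that arithmetic; the function name and the specific dollar figures are my own invented examples, not PayScale's actual data (except the roughly $1.1 million high-school-graduate baseline mentioned above):

```python
# Sketch of PayScale's ROI arithmetic as described above.
# The $1.1M baseline is the approximate figure cited in the text;
# the school-specific numbers below are made up for illustration.

def payscale_roi(total_degree_cost, grad_20yr_earnings,
                 hs_grad_earnings=1_100_000):
    """ROI = expected 20-year graduate earnings, minus the cost of
    the degree, minus what a high-school graduate would earn over
    the same span plus the college years (~24-26 years)."""
    return grad_20yr_earnings - total_degree_cost - hs_grad_earnings

# Hypothetical school: $120k net degree cost, $2.0M 20-year earnings.
roi = payscale_roi(120_000, 2_000_000)
print(roi)  # 780000
```

Note that nothing in this computation depends on who the students are, which is exactly the problem discussed below.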

Indeed, PayScale says that "This measure is useful for high school seniors evaluating their likely financial return from attending and graduating college." But this is simply not true. As I read the FiveThirtyEight article on the PayScale report, I was waiting for them to point out the reasons why, but they never did. The only critical comments were about incorporating the effects of student debt.

What are the problems with the PayScale analysis? First of all, it only makes sense to speak of the comparative return on an investment when the investors have a choice of what to invest in. If every person could choose to attend any college (and to graduate from it and get a full-time job), or to skip college entirely, then it would be meaningful to ask which choice maximizes return. This is what we do when calculating a financial ROI: we try to figure out whether investing in stocks versus bonds, or one mutual fund versus another, or one business opportunity versus another, will be more profitable. But colleges have admissions requirements, so not everyone can go to whatever college he or she wants. Colleges select their students as much as students select their colleges. And in fact, the people who attend different colleges can be very different, and they can be even more different from the people who don't attend college at all.

This means that the Return in this "ROI" depends on much more than the Investment. It also depends on who is doing the investing. In fact, it is far from trivial to figure out the true ROI of going to Harvard versus Vanderbilt versus Wayland Baptist versus Nicholls State versus not attending college at all. To figure this out, you would have to control in the analysis for all the characteristics that make students at different colleges different from one another, and different from students who don't go to college. Factors like cognitive ability, ambition, work habits, parental income and education, where the students grew up and went to high school, what grades they got, and many others are likely to be important. In fact, those other factors could be so important that they might wind up explaining more of the variation in income between people than is explained by going to college—let alone which particular college people go to.

Even controlling for data we might be able to obtain, like the average SAT score and parental income of students who attend each college, would not completely solve the problem, because there could be factors that we can't measure that have an important effect. Only by randomly assigning students to different colleges (or to entering the workforce directly after high school) would we get an estimate of the true ROI (measured in money—which of course leaves aside all the other benefits one might get from college that don't show up in your next twenty years of paychecks).

Of course this ideal experiment won't ever happen, but clever researchers have tried to approximate it by doing things like looking at students who were accepted to both a higher-ranked and a lower-ranked school, and then comparing those who enroll in the higher-ranked one to those who enroll in the lower-ranked one. Since all the students in this analysis got into both schools, the problem of different schools having different students is mitigated. (Not erased entirely, though: for example, people who deliberately attend lower-ranked schools might be doing so because of financial circumstances, or their college experience may differ because they are likely to start out above average in ability and preparation for the school they attend, as compared to those who choose higher-ranked schools.)

FiveThirtyEight said nothing about this fundamental logical problem with the entire PayScale exercise. Nor did it address the other flaws in the analysis and presentation of the data.

It could have also asked about the confidence intervals around the ROI estimates provided by PayScale. When you give only point estimates (exact values that represent just the mean or median of a distribution) and proceed to rank them, you create the appearance of a world where every distinction matters—that the school ranked #1 really has a higher ROI than #2, which is higher than #3, and so on. PayScale's methodology page says, "the 90% confidence interval on the 20 year median pay is ±5%" (but 10% for "elite schools" and "small liberal arts schools or schools where a majority of undergraduates complete a graduate degree"). The narrowness of these intervals is a bit hard to believe, as is their uniformity (how does every school in a category get the same confidence interval?). Why not just put the school-specific confidence intervals into the report, so that it is obvious that, for example, school #48 (Yale) is probably not significantly higher in ROI than, say, school #69 (Lehigh), but is probably lower in ROI than school #6 (Georgia Tech)?
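The point about overlapping intervals can be made concrete. In this sketch the pay figures are invented, and I simply apply the uniform ±5% interval PayScale describes; if two schools' intervals overlap, ranking one above the other is not obviously meaningful:

```python
# Illustrative check of whether two schools' 90% confidence
# intervals on 20-year median pay overlap. Pay figures are invented;
# the ±5% width is the figure PayScale's methodology page reports.

def interval(median_pay, pct=0.05):
    """PayScale-style symmetric interval: median pay ± pct."""
    return (median_pay * (1 - pct), median_pay * (1 + pct))

def overlaps(a, b):
    """True if intervals a and b share any common ground."""
    return a[0] <= b[1] and b[0] <= a[1]

school_a = interval(1_500_000)  # (1425000.0, 1575000.0)
school_b = interval(1_450_000)  # (1377500.0, 1522500.0)
print(overlaps(school_a, school_b))  # True: the ranking of A over B
                                     # may be statistical noise
```

Two schools separated by a few percent in median pay will almost always overlap under a ±5% interval, which is why a strict #1-through-#900 ordering overstates what the data can support.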

It's hard to have much confidence in these confidence intervals anyhow, since we don't know how many people PayScale surveys at each college to make the income calculations (which will be the critical drivers of the variability in ROI). Many of the colleges are small; how reliable can the estimates of what their graduates will earn be? And are the surveys of college graduates unbiased with respect to what field the graduates work in? Or, for example, do engineers and teachers tend to respond to these surveys more than, say, baristas and consultants? The unemployed and under-employed are not included; this will have the effect of inflating the apparent ROI of schools whose graduates tend, for whatever reasons, not to have full-time jobs. PayScale says that non-cash compensation and investment income are not included, which might bias down the reported ROI of graduates of elite schools who go into financial careers.

Finally, perhaps FiveThirtyEight could have looked at whether the schools that stand out at either end of the distribution happen to be smaller than the ones in the middle. Ohio State, Florida State, et al. have so many students, drawn from such a broad distribution of ability and other personal traits, that they should be expected to have "ROI" values nearer to the middle of the overall distribution of universities than should small colleges, which through pure chance (having, by luck, more high- or low-income graduates) are more likely to land in the top or bottom thirds of the list. Some degree of mean reversion may be expected, so the rankings of PayScale will lose some predictive value for future ROIs, especially in the case of small schools.
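This small-school effect is easy to demonstrate by simulation. In the sketch below, every school draws graduates from the exact same income distribution (all parameters invented for illustration), yet the smallest schools still dominate both extremes of the ranking purely through sampling noise:

```python
# Simulation of the small-school effect: 50 small schools (30 grads)
# and 50 large schools (3,000 grads), all drawing graduate incomes
# from the same distribution. Parameters are invented for illustration.
import random

random.seed(0)

def school_mean(n_grads, mu=60_000, sigma=20_000):
    """Mean income of a random sample of n_grads graduates."""
    return sum(random.gauss(mu, sigma) for _ in range(n_grads)) / n_grads

schools = [("small", 30)] * 50 + [("large", 3000)] * 50
results = sorted((school_mean(n), size) for size, n in schools)

top10 = [size for _, size in results[-10:]]
bottom10 = [size for _, size in results[:10]]
print("small schools in top 10:", top10.count("small"))
print("small schools in bottom 10:", bottom10.count("small"))
```

Even with no real differences between schools, the top and bottom of the list fill up with small schools, because a mean over 30 graduates is far noisier than a mean over 3,000.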

The comments I have made all concern the underlying PayScale report, but I think it is FiveThirtyEight that has not upheld the best standards of "data journalism." If that term is to have any meaning, it can't simply refer to "journalism" that consists of the passing along of other people's flawed "data" (especially when those people are producing and promoting the data for commercial purposes). Nate Silver earned his reputation, and that of his FiveThirtyEight brand, largely by calling out—and improving on—just this kind of simplistic and misleading analysis. It's sad to see his "data journalism organization" no longer criticizing superficiality, but instead promoting it.

Postscripts: 3/29/14: After I first posted this piece, I realized three things. First, I hadn't mentioned mean reversion originally, so I added it in. But it's a minor issue compared to the others. Second, I didn't make it clear that notwithstanding what I wrote above, I am 100% in favor of more good data journalism. I agree with Nate Silver and others that journalists (and everyone!) should be more aware of the data that exists to answer questions, how to gather data that has not already been compiled, how to think about data, and so on. A great example of silly data-ignorant journalism is the series of articles the New York Post has been running on the "epidemic" of suicides and suspicious deaths in the financial industry. The proper question to start with is whether there is an epidemic, or even a significant excess over normal variation, as opposed to a set of coincidences that would be expected to happen every so often. Perhaps there is an epidemic, but I am skeptical. The Post (and other outlets that have reported on these deaths) skips right over this crucial threshold issue. Maybe FiveThirtyEight could address it and teach its readers about the danger of jumping to conclusions after seeing nonexistent patterns in noise. Third, and finally, I should have mentioned that FiveThirtyEight has on board some people who really do know how to think seriously about data (and do it much better than I do), such as the economist Emily Oster. I hope Emily's influence will spread throughout the organization. 3/30/14: I removed text in the original version that asked whether outliers like hedge fund managers had their incomes included in PayScale's calculations. They won't have too much influence, regardless, because PayScale is reporting medians, not means. My apologies for the inadvertent error. 4/5/14: I changed the number of colleges included from 1310 to "about 900."
There are 1310 entries in Payscale's table, but many colleges are listed more than once if they have different tuition options (e.g. state resident versus non-resident). 4/7/14: I added links to the Krueger & Dale (and Dale & Krueger) economics papers that tried to estimate the returns from attending more selective/elite colleges. I knew about these papers when I wrote the initial post, but had forgotten who the authors were.

Friday, October 4, 2013

Why Malcolm Gladwell Matters (And Why That's Unfortunate)

Malcolm Gladwell, the New Yorker writer and perennial bestselling author, has a new book out. It's called David and Goliath: Misfits, Underdogs, and the Art of Battling Giants. I reviewed it (PDF) in last weekend's edition of The Wall Street Journal. (Other reviews have appeared in The Atlantic, The New York Times, The Guardian, and The Millions, to name a few.) Even though the WSJ editors kindly gave me about 2500 words to go into depth about the book, there were many things I did not have space to discuss or elaborate on. This post contains some additional thoughts about Malcolm Gladwell, David and Goliath, the general modus operandi of his writing, and how he and others conceive of what he is doing.

I noticed some interesting reactions to my review. Some people said I was a jealous hater. One even implied that as a cognitive scientist (rather than a neuroscientist) I somehow lacked the capacity or credibility to criticize anyone's logic or adherence to evidence. A more serious response, of which I saw several instances, came from people who said in essence "Why do you take Gladwell so seriously—it's obvious he is just an entertainer." For example, here's Jason Kottke:
I enjoy Gladwell's writing and am able to take it with the proper portion of salt ... I read (and write about) most pop science as science fiction: good for thinking about things in novel ways but not so great for basing your cancer treatment on. 
The Freakonomics blog reviewer said much the same thing:
... critics have primarily focused on whether the argument they think Gladwell is making is valid. I am going to argue that this approach misses the fact that the stories Gladwell tells are simply well worth reading.
I say good for you to everyone who doesn't take Gladwell seriously. But the reason I take him seriously is that I take him and his publisher at their word. On their face, many of the assertions and conclusions in Gladwell's books are clearly meant to describe lawful regularities about the way human mental life and the human social world work. And this has always been the case with his writing.

In The Tipping Point (2000), Gladwell wrote of sociological regularities and even coined new ones, like "The Law of the Few." Calling patterns of behavior "laws" is a basic way of signaling that they are robust empirical regularities. Laws of human behavior aren't as mathematically precise as laws of physics, but asserting one is about the strongest claim that can be made in social science. To say something is a law is to say that it applies with (near) universality and can be used to predict, in advance, with a fair degree of certainty, what will happen in a situation. It says this is truth you can believe in, and act on to your benefit.

A blurb from the publisher of David and Goliath avers: "The author of Outliers explores the hidden rules governing relationships between the mighty and the weak, upending prevailing wisdom as he goes." A hidden rule is a counterintuitive, causal mechanism behind the workings of the world. If you say you are exploring hidden rules that govern relationships, you are promising to explicate social science. But we don't have to take the publisher's word for it. Here's the author himself, in the book, stating one of his theses:
The fact of being an underdog changes people in ways that we often fail to appreciate. It opens doors, and creates opportunities and educates and permits things that might otherwise have seemed unthinkable.
The emphasis on changes is in the original (at least in the version of the quote I saw on Gladwell's Facebook page). In an excerpt published in The Guardian, he wrote, "If you take away the gift of reading, you create the gift of listening." I added the emphasis on create to highlight the fact that Gladwell is here claiming a causal rule about the mind and brain, namely that having dyslexia causes one to become a better listener (something he says made superlawyer David Boies so successful).

I've gone on at length with these examples because I think they also run counter to another point I have seen made about Gladwell's writings recently: That he does nothing more than restate the obvious or banal. I couldn't disagree more here. Indeed, to his credit, what he writes about is the opposite of trivial. If Gladwell is right in his claims, we have all been acting unethically by watching professional football, and the sport will go the way of dogfighting, or at best boxing. If he is right about basketball, thousands of teams have been employing bad strategies for no good reason. If he is right about dyslexia, the world would literally be a worse place if everyone were able to learn how to read with ease, because we would lose the geniuses that dyslexia (and other "desirable difficulties") create. If he was right about how beliefs and fads spread through social networks in The Tipping Point, consumer marketing would have changed greatly in the years since. Actually, it did: firms spent great effort trying to find "influentials" and buy their influence, even though there was never good causal evidence that this would work. (See Duncan Watts's brilliant book Everything is Obvious, Once You Know the Answer—reviewed here—to understand why.) If Gladwell is right, also in The Tipping Point, about how much news anchors can influence our votes by deploying their smiles for and against their preferred candidates, then democracy as we know it is a charade (and not for the reasons usually given, but for the completely unsupported reason that subliminal persuaders can create any electoral results they want). And so on. These ideas are far from obvious, self-evident, or trivial. They do have the property of engaging a hindsight bias, of triggering a pleasurable rush of counterintuition, of seeming correct once you have learned about them. But an idea that people feel like they already knew is much different from an idea people really did know all along.

Janet Maslin's New York Times review of David and Goliath begins by succinctly stating the value proposition that Gladwell's work offers to his readers:
The world becomes less complicated with a Malcolm Gladwell book in hand. Mr. Gladwell raises questions — should David have won his fight with Goliath? — that are reassuringly clear even before they are answered. His answers are just tricky enough to suggest that the reader has learned something, regardless of whether that’s true.
(I would only add that the world becomes not just less complicated but better, which leaves the reader a little bit happier about life.) In a recent interview with The Guardian, Gladwell as much as agreed: "If my books appear to a reader to be oversimplified, then you shouldn't read them: you're not the audience!"

I don't think the main flaw is oversimplification (though that is a problem: Einstein was right when he—supposedly—advised that things be made as simple as possible, but no simpler). As I wrote in my own review, the main flaw is a lack of logic and proper evidence in the argumentation. But consider what Gladwell's quote means. He is saying that if you understand his topics enough to see what he is doing wrong, then you are not the reader he wants. At a stroke he has said that anyone equipped to properly review his work should not be reading it. How convenient! Those who are left are only those who do not think the material is oversimplified.

Who are those people? They are the readers who will take Gladwell's laws, rules, and causal theories seriously; they will tweet them to the world, preach them to their underlings and colleagues, write them up in their own books and articles (David Brooks relied on Gladwell's claims more than once in his last book), and let them infiltrate their own decision-making processes. These are the people who will learn to trust their guts (Blink), search out and lavish attention and money on fictitious "influencers" (The Tipping Point), celebrate neurological problems rather than treat them (David and Goliath), and fail to pay attention to talent and potential because they think personal triumph results just from luck and hard work (Outliers). It doesn't matter if these are misreadings or imprecise readings of what Gladwell is saying in these books—they are common readings, and I think they are more common among exactly those readers Gladwell says are his audience.

Not backing down, Gladwell said on the Brian Lehrer show that he really doesn't care about logic, evidence, and truth—or that he thinks discussions of the concerns of "academic research" in the sciences (i.e., logic, evidence, and truth) are "inaccessible" to his lowly readers:
I am a story-teller, and I look to academic research … for ways of augmenting story-telling. The reason I don’t do things their way is because their way has a cost: it makes their writing inaccessible. If you are someone who has as their goal ... to reach a lay audience ... you can't do it their way.
In this and another quote, from his interview in The Telegraph, about what readers "are indifferent to," the condescension and arrogance are in full view:
And as I’ve written more books I’ve realised there are certain things that writers and critics prize, and readers don’t. So we’re obsessed with things like coherence, consistency, neatness of argument. Readers are indifferent to those things. 
Note, incidentally, that he mentions coherence, consistency, and neatness. But not correctness, or proper evidence. Perhaps he thinks that these are highfalutin cares for writers and critics, or perhaps he is some kind of postmodernist for whom they don't even exist in any cognizable form. In any case, I do not agree with Gladwell's implication that accuracy and logic are incompatible with entertainment. If anyone could make accurate and logical discussion of science entertaining, it is Malcolm Gladwell.

Perhaps ... perhaps I am the one who is naive, but I was honestly very surprised by these quotes. I had thought Gladwell was inadvertently misunderstanding the science he was writing about, and making sincere mistakes in the service of coming up with ever more "Gladwellian" insights to serve his audience. But according to his own account, he knows exactly what he is doing, and not only that, he thinks it is the right thing to do. Is there no sense of ethics that requires more fidelity to truth, especially when your audience is so vast—and, by your own admission, so benighted—as to need oversimplification and to be unmoved by little things like consistency and coherence? I think a higher ethic of communication should apply here, not a lower standard.

This brings me back to the question of why Gladwell matters so much. Why am I, an academic who is supposed to be keeping his head down and toiling away on inaccessible stuff, spending so much time on reading his interviews, reviewing his book, and writing this blog post? What Malcolm Gladwell says matters because, whether academics like it or not, he is incredibly influential.

As Gladwell himself might put it: "We tend to think that people who write popular books don't have much influence. But we are wrong." Sure, Gladwell has huge sales figures and is said to command big speaking fees, and his TED talks are among the most watched. But James Patterson has huge sales too, and he isn't driving public opinion or belief. I know Gladwell has influence for multiple reasons. One is that even highly-educated people in leadership positions in academia—a field where I have experience—are sometimes more familiar with and more likely to cite Gladwell's writings than those of the top scholars in their own fields, even when those top scholars have put their ideas into trade-book form like Gladwell does.

Another data point: David and Goliath has only been out for a few days, but already there's an article online about its "business lessons." A sample assertion:
Gladwell proves that not only do many successful people have dyslexia, but that they have become successful in large part because of having to deal with their difficulty. Those diagnosed with dyslexia are forced to explore other activities and learn new skills that they may have otherwise pursued. 
Of course this is nonsense—there is no "proof" of anything in this book, much less a proof that dyslexia causes success. I wonder if the author of this article even has an idea what proper evidence in support of these assertions would be, or if he knows that these kinds of assertions cannot be "proved."

One final indicator of Malcolm Gladwell's influence—and I'll be upfront and say this is an utterly non-scientific and imprecise methodology—that suggests why he matters. I Googled the phrases "Malcolm Gladwell proved" and "Malcolm Gladwell showed" and compared the results to the similar "Steven Pinker proved" and "Steven Pinker showed" (adding in the results of redoing the Pinker search with the incorrect "Stephen"). I chose Steven Pinker not because he is an academic, but because he has published a lot of bestselling books and widely-read essays and is considered a leading public intellectual, like Gladwell. Pinker is surely much more influential than most other academics. It just so happens that he published a critical review of Gladwell's previous book—but this also is an indicator of the fact that Pinker chooses to engage the public rather than just his professional colleagues. The results, in total number of hits:

Gladwell: proved 5300, showed 19200 = 24500 total
Pinker: proved 9, showed 625 = 634 total

So the total influence ratio as measured by this crude technique is 24500/634, or over 38-to-1 in favor of Gladwell. I wasn't expecting it to be nearly this high myself. (Interestingly, those "influenced" by Pinker are only 9/634, or 1.4% likely to think he "proved" something as opposed to the arguably more correct "showed" it. Gladwell's influencees are 5300/24500 or 21.6% likely to think their influencer "proved" something.) Refining the searches, adding "according to Gladwell" versus "according to Pinker" and so on will change the numbers, but I doubt enough corrections will significantly redress a 38:1 difference.

When someone with this much influence on what people seem to really believe (as indexed by my dashed-off method) says that he is just a storyteller who just uses research to "augment" the stories—who places the stories first and the science in a supporting role, rather than the other way around—he's essentially placing his work in the category of inspirational books like The Secret. As Dan Simons and I noted in a New York Times essay, such books sprinkle in references and allusions to science as a rhetorical strategy. Accessorizing your otherwise inconsistent or incoherent story-based argument with pieces of science is a profitable rhetorical strategy because references to science are crucial touchpoints that help readers maintain their default instinct to believe what they are being told. They help because when readers see "science" they can suppress any skepticism that might be bubbling up in response to the inconsistencies and contradictions.

In his Telegraph interview, Gladwell again played down the seriousness of his own ideas: "The mistake is to think these books are ends in themselves. My books are gateway drugs – they lead you to the hard stuff." And David and Goliath does cite scholarly works, books and journal articles, and journalism, in its footnotes and endnotes. But I wonder how many of its readers will follow those links, as compared to the number who will take its categorical claims at face value. And of those that do follow the links, how many will realize that many of the most important links are missing?

This leads to my last topic, the psychology experiment Gladwell deploys in David and Goliath to explain what he means by "desirable difficulties." The difficulties he talks about are serious challenges, like dyslexia or the death of a parent during one's childhood. But the experiment is a 40-person study on Princeton students who solved three mathematical reasoning problems presented in either a normal typeface or a difficult-to-read typeface. Counterintuitively, the group that read in a difficult typeface scored higher on the reasoning problems than the group that read in a normal typeface.

In my review, I criticized Gladwell for describing this experiment at length without also mentioning that a replication attempt with a much larger and more representative sample of subjects did not find an advantage for difficult typefaces. One of the original study's authors wrote to me to argue that his effect is robust when the test questions are at an appropriate level of difficulty for the participants in the experiment, and that his effect has in fact been replicated “conceptually” by other researchers. However, I cannot find any successful direct replications—repetitions of the experiment that use the same methods and get the same results—and direct replication is the evidence that I believe is most relevant.

This may be an interesting controversy for cognitive psychologists, but it's not the point here. The point is that Gladwell says absolutely nothing about the controversy over whether this effect is reliable. All he does is cite the original 2007 study of 40 subjects and rest his case. Even those who have been hooked by his prose and look to the endnotes of this chapter for a new fix will find no sources for the "hard stuff"—e.g., the true state of the science of "desirable difficulty"—that he claims to be promoting. And if the hard stuff has value, why does Gladwell not wade into it himself and let it inform his writing? When discussing the question of how to pick the right college, why not discuss the intriguing research that debates whether going to an elite school really adds economic value (over going to a lesser-ranked school) for those people who get admitted to both? Or, when discussing dyslexia, instead of claiming it is a gift to society, how about devoting the space to a serious consideration of the hypothesis that this kind of early life difficulty jars the course of development, adding uncertainty (increasing the chances of both success and failure, though probably not in equal proportions) rather than directionality? There was so much more he could have done with these fascinating and important topics.

But at least the difficulty finding a simple experiment to serve as metaphor might have jarred Gladwell into realizing that the connection between the typeface effect, however robust it might turn out to be, and the effect of a neurological condition or loss of a parent, is in fact just metaphorical. There is no relevant nexus between reading faint type and losing a parent at an early age, and pretending there is just loosens the threads of logic to the point of breaking. But perhaps Gladwell already knows this. After all, in his Telegraph interview, he said readers don't care about stuff like consistency and coherence, only critics and writers do.

I can certainly think of one gifted writer with a huge audience who doesn't seem to care that much. I think the effect is the propagation of a lot of wrong beliefs among a vast audience of influential people. And that's unfortunate.

Tuesday, October 1, 2013

The Part Before the Colon: Is There a Trend Toward Cleverer Journal Article Titles?

I joined the Society for Personality and Social Psychology last year, even though I am not a social psychologist, because I had to in order to give an invited talk at a pre-conference session of the annual SPSP meeting, which was held in New Orleans. I had a good time, despite having a bad headache during most of my visit. Social psychologists give lots of interesting talks, they tend to be social, and they also dress better than cognitive psychologists and neuroscientists. It was also fun to see which ones made a visit to the casino across the street from the conference hotel.

As an SPSP member, I now receive their flagship journal every month: Personality and Social Psychology Bulletin (PSPB—academics love to refer to journals with acronyms). One of the best parts of the journal, to a non-specialist like me, is the article titles. In psychology, as in many areas of science, there are different strategies for a good title. One is to concisely state the main finding of the paper or the main theoretical claim (occasionally formulated as a question rather than a statement). Another is to precede that kind of title with a clever quip, allusion, pun, or other phrase that grabs attention and orients the (potential) reader towards some aspect of the research you want to emphasize or that makes the work stand out. That is the part before the colon.

An example of this latter strategy is the 1999 article that Dan Simons and I published in Perception. The title was "Gorillas in our midst: Sustained inattentional blindness for dynamic events." (Thanks to M.J. Wraga, a fellow postdoc in the Harvard psychology department at the time, for suggesting the part before the colon.) If you are a real black belt in journal article writing, you can be like Dan Gilbert and combine both a statement of the main finding and a clever quip into one phrase, as in his wonderful 1993 article (with two co-authors) "You Can't Not Believe Everything You Read." If there were a best-title award, this would surely be in the running. At least it's one of my favorites.

I think all kinds of titles can be good, if they are done well. There seems to be a trend toward more clever titles, at least during my time in psychology and social science. Consider the latest issue of PSPB (volume 39, number 10). Here are the article titles, just the parts before the colon:

1. "Show Me the Money"
2. Losing One's Cool
3. Changing Me to Keep You
4. Never Let Them See You Cry
5. Gender Bias in Leader Evaluations
6. Getting It On Versus Getting It Over With
7. The Things You Do For Me
8. "I Know Your Pain"
9. How Large Are Actor and Partner Effects of Personality on Relationship Satisfaction?
10. Touch as an Interpersonal Emotion Regulation Process in Couples' Daily Lives

I classify seven out of ten articles (all but #5, #9, and #10) as following the clever title strategy. That seems like a lot more than I used to see. To hastily test this intuition, I looked at the tables of contents for the same journal 10, 20, and 30 volumes ago, using issue 10 in 2003 and the final issue in 1993 and 1983 (since there were fewer than ten issues per volume then). There seems to have been a sharp increase:

2013: 70%  (7 out of 10)
2003: 10%  (1 out of 10: "The Good, the Bad, and the Healthy")
1993: 0%   (0 out of 11)
1983: 17%  (2 out of 12: "You Just Can't Count on Things Any More" and "Lonely at the Top")
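The percentages above are just simple proportions of each issue's table of contents. A minimal sketch of the tally, using the informal counts reported here:

```python
# Tally of "clever pre-colon title" proportions by issue year.
# Counts are the informal classifications reported in the post.
counts = {
    2013: (7, 10),
    2003: (1, 10),
    1993: (0, 11),
    1983: (2, 12),
}

for year, (clever, total) in sorted(counts.items(), reverse=True):
    pct = 100 * clever / total
    print(f"{year}: {pct:.0f}% ({clever} out of {total})")
```

A real analysis would, of course, need many more issues per decade and independent raters for the "clever" judgment.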

Coincidentally, I received the latest issue of Clinical Psychological Science (volume 1, number 4; TOC apparently not online yet) today as well. It also has ten articles, and none of them have clever parts before the colon in their titles. Maybe clinical psychologists and their subject matter just aren't as funny.

Of course, this is hardly a serious statistical analysis of the phenomenon, and the quippy titles might have just coalesced at random in this particular issue, or this journal might have editors who encourage this kind of title. I should also say that I perceive the trend to exist in other areas besides social psychology. But I have heard it argued that this trend towards cleverer titles—if it really exists!—is a deleterious one, since it puts pressure on authors to come up with clever titles, and makes reviewers and editors and journalists expect to see them, and therefore it may distort the entire research endeavor towards work that can be summed up in not just the proverbial "25 words or less" but in the much higher standard of "10 very clever words or less." I have no strong belief as to whether all this is happening, or in what fields of study, but perhaps it's something to think about.

If someone does the research and writes a journal article on this, they are welcome to use the title "In 25 Words or Less: The Effect of Trends Toward Clever Pre-Colon Article Titles on the Content and Quality of Research." Just make sure to cite this blog entry, or come up with a catchier title yourself.

PS: I am fully prepared to be told that someone else has already said all this, or even done the research relating title catchiness to citation counts or other metrics. I have anticipated this in my other article, "Leap Before You Look: The Surprising Value of Writing Blog Entries Without Doing Your Research First."

Thursday, September 12, 2013

Similarities Between Rolf Dobelli's Book and Ours


Rolf Dobelli, a Swiss writer, published a book called The Art of Thinking Clearly earlier this year with HarperCollins in the U.S. The book’s original German edition was a #1 bestseller, and the book has sold over one million copies worldwide.

In perusing Mr. Dobelli’s book, we noticed several familiar-sounding passages. On closer examination, we found five instances of unattributed material that is either reproduced verbatim or closely paraphrased from text and arguments in our book, The Invisible Gorilla (Crown, 2010). They are listed at the end of this note.

Nassim Taleb (author of The Black Swan and other books) has also publicly noted similarities between his work and material in Mr. Dobelli’s book. We have also become aware of a similarity between material in Being Wrong by Kathryn Schulz and material in Mr. Dobelli’s book.

We sent a letter to Mr. Dobelli and his publishers noting our concern about these five passages. Mr. Dobelli replied to us privately and posted the following text on his website (since removed):
“I received two letters claiming inadequate attributions or citations in the book 'The Art of Thinking Clearly.' Some of the claims are true, some false. For the ones that are true, I take full responsibility. I will work closely with the publishers of my book that the corrections are put in effect as quickly as possible.”
In the interest of transparency, we have decided to post the list of similar passages here. We understand that Mr. Dobelli will be identifying the changes he intends to make to his book on his website as well.

— Christopher Chabris & Daniel Simons


Passages with overlap between The Invisible Gorilla by Christopher Chabris and Daniel Simons, and The Art of Thinking Clearly by Rolf Dobelli (portions of greatest similarity between the books are highlighted in red):

1.   Chabris/Simons: In April 2006, rising waters made a ford through the start of the Avon River temporarily impassable, so it was closed and markers were put on both sides. Every day during the two weeks following the closure, one or two cars drove right past the warning signs and into the river. These drivers apparently were so focused on their navigation displays that they didn’t see what was right in front of them. [pp. 41–42]
Dobelli: After heavy rains in the south of England, a river in a small village overflowed its banks. The police closed the ford, the shallow part of the river where vehicles cross, and diverted traffic. The crossing stayed closed for two weeks, but each day at least one car drove past the warning sign and into the rushing water. The drivers were so focused on their car’s navigation systems that they didn’t notice what was right in front of them. [p. 263; opening paragraph of Chapter 88]

2.   Chabris/Simons: The “Nun Bun” was a cinnamon pastry whose twisty rolls eerily resembled the nose and jowls of Mother Teresa. It was found in a Nashville coffee shop in 1996, but was stolen on Christmas in 2005. [p. 155]
Dobelli: The “Nun Bun” was a cinnamon pastry whose markings resembled the nose and jowls of Mother Teresa. It was found in a Nashville coffee shop in 1996 but was stolen on Christmas in 2005. [p. 310]

3.   Chabris/Simons: “Our Lady of the Underpass” was another appearance by the Virgin Mary, this time in the guise of a salt stain under Interstate 94 in Chicago that drew huge crowds and stopped traffic for months in 2005. Other cases include Hot Chocolate Jesus, Jesus on a shrimp tail dinner, Jesus in a dental x-ray, and Cheesus (a Cheeto purportedly shaped like Jesus). [p. 155]
      Dobelli: “Our Lady of the Underpass” was another appearance by the Virgin Mary, this time as a salt stain under Interstate 94 in Chicago in 2005. Other cases include Hot Chocolate Jesus, Jesus on a shrimp tail dinner, Jesus in a dental X-ray, and a Cheeto shaped like Jesus. [p. 310]

4.   Chabris/Simons: In other words, almost immediately after you see an object that looks anything like a face, your brain treats it like a face and processes it differently than other objects. [p. 156]
      Dobelli: As soon as an object looks like a face, the brain treats it like a face—this is very different from other objects. [p. 310]

5.   The paragraph in Dobelli is a condensation and paraphrase of a longer passage and argument appearing in The Invisible Gorilla:
      Chabris/Simons: It may come as a surprise, then, to learn that talking to a passenger in your car is not nearly as disruptive as talking on a cell phone. In fact, most of the evidence suggests that talking to a passenger has little or no effect on driving ability.40
            Talking to a passenger could be less problematic for several reasons. First, it’s simply easier to hear and understand someone right next to you than someone on a phone, so you don’t need to exert as much effort just to keep up with the conversation. Second, the person sitting next to you provides another set of eyes—a passenger might notice something unexpected on the road and alert you, a service your cell-phone conversation partner can’t provide. The most interesting reason for this difference between cell-phone conversation partners and passengers has to do with the social demands of conversations. When you converse with the other people in your car, they are aware of the environment you are in. Consequently, if you enter a challenging driving situation and stop speaking, your passengers will quickly deduce the reason for your silence. There’s no social demand for you to keep speaking because the driving context adjusts the expectations of everyone in the car about social interaction. When talking on a cell phone, though, you feel a strong social demand to continue the conversation despite difficult driving conditions because your conversation partner has no reason to expect you to suddenly stop and start speaking. These three factors, in combination, help to explain why talking on a cell phone is particularly dangerous when driving, more so than many other forms of distraction. [p. 26]
      Dobelli: And, if instead of phoning someone, you chat with whomever is in the passenger seat? Research found no negative effects. First, face-to-face conversations are much clearer than phone conversations, that is, your brain must not work so hard to decipher the messages. Second, your passenger understands that if the situation gets dangerous, the chatting will be interrupted. That means you do not feel compelled to continue the conversation. Third, your passenger has an additional pair of eyes and can point out dangers. [pp. 353–354]

Thursday, August 22, 2013

Should Poker Be (A Tiny Bit) More Like Chess?

There are similarities between tournament poker and tournament chess, and many serious chess players, including some grandmasters, have taken up poker with success. Poker is a much, much richer game, however, because there is more variance in outcomes when weaker and stronger players face each other. Thousands of poker players pay $1000, $1500, or even $10,000 to enter poker tournaments, routinely creating multimillion-dollar prize pools. Most chess tournaments, except for invitational events reserved for the very top players in a country or the world, also charge entry fees, but tournament chess doesn't have enough variance to get people to put up even $1000 (in today's dollars) to play. So it's a good thing that poker isn't more like chess in the way that stronger chess players are very likely to win against weaker ones.

But as I was reading the August 21 issue of Card Player magazine recently, I stumbled across a discussion that got me thinking that poker still needs some improving. In a column called "The Rules Guy" (which is not yet available online) I read the following:

The Rules Guy: Props to Antonio Esfandiari. TRG salutes Antonio Esfandiari for saying "You're both out of line" to Jungleman (Dan Cates) and Scott Seiver after their intense verbal altercation on a Party Poker Premier League VI broadcast. A calming voice can, well, work magic.
What happened, in a nutshell, was that Cates had broken the rules by acting out of turn several times at the table. Acting out of turn means betting, folding, or doing other things you normally do when it is your turn to bet, but doing them before it is your turn. This is bad because it gives the players who are supposed to act before you information about your hand strength and intentions that they aren't supposed to have. Therefore it can help those players, and also hurt other players. It can also be a way of colluding with others at the table, which is obviously a fundamental no-no. Seiver called out Cates on his repeated out-of-turn acting, Cates said something in response, Seiver said "it's like actual cheating," Cates used the f-word, and it went on from there.

At some point, according to the article, Esfandiari said, "You're both out of line. You're [Cates] out of line for acting out of turn; you're [Seiver] out of line for attacking him." He is portrayed as the level-headed hero of the whole episode and gets "props" from The Pseudonymous Rules Guy.

When I was at the World Series of Poker this past June, I played in a $1500 buy-in no limit hold'em tournament. I was doing pretty well at my first table, but then the table broke and I was moved to a table of mostly younger players, plus one very well-known pro: Phil Laak, who is a close friend of Antonio Esfandiari. (They appear together on the ESPN broadcasts and even co-hosted an entire series about prop betting a few years back.) Laak was three seats to my right, and he was acting just like he acts on all the televised poker events that love to show him. He was hamming it up, saying crazy and clever things, acting alternately bored and intensely interested, jumping up to take an occasional picture with a fan, making friends with everyone, and so on.

Laak was in the last hand before the dinner break. The clock had run down, so everyone was free to go if they wanted to, but I stayed at the table to see the hand play out. Laak was heads-up against the player to his left (two seats to my right). There was much betting, and after the river card was dealt Laak went all-in. His opponent started thinking about this major decision of whether to call or fold. He had enough chips to call without being knocked out, but a lot of chips were at stake.

By this point Esfandiari had come over to our table. I don't know if he was playing in the same tournament, another tournament, or what, but he came to talk to Laak about plans for the dinner break. The two of them were talking, while Laak was in the hand, even before Laak had made his final all-in bet. This seems bad to me. Why should any player who is in a hand be allowed to say anything to anyone else while the hand is going on? But it got worse.

As Laak's opponent, who as far as I could tell did not know Laak personally, or at least was not great friends with him, was thinking over his decision, Esfandiari leaned over him and said something like "Hurry up and fold, we want to go eat!" Those probably weren't his exact words; I didn't write them down. But he clearly spoke to the player whose turn it was to act, and clearly spoke to him about one of the actions he was contemplating. He didn't just say "hurry up"—he mentioned folding too.

Now I don't think Esfandiari knew what Laak's hand was. Perhaps he was just goofing around because he was hungry and wanted to go and eat. But I wouldn't be surprised if he was also trying to help Laak just a little bit, perhaps unconsciously, by throwing Laak's opponent off his train of thought, or by sowing doubt about the right play to make.

As it happened, the guy didn't seem bothered by Esfandiari and didn't complain about him. He eventually called, only to find that Laak had made a straight flush on the river. In retrospect, he should have taken the "advice" and folded.

Regardless, I found Esfandiari's actions appalling. I didn't say anything, not being an expert in poker rules and etiquette, and not being involved in the hand, but I thought that if this was legal, it was a very big difference from chess. In a chess tournament, you aren't allowed to do anything close to that. If a friend of Magnus Carlsen walked up to Carlsen's opponent and said "just resign already" while he was thinking about his next move ... I cannot imagine what would happen, since it's so far outside the realm of possibility. Garry Kasparov used to be criticized severely for making faces during games in reaction to his opponent's moves. This is orders of magnitude worse.

Esfandiari may have been right about Cates and Seiver (though I think repeatedly acting out of turn is worse than getting pissed off at someone for repeatedly acting out of turn), but I think he was wrong to say a single word to Laak, or especially Laak's opponent, while their hand was in progress. It doesn't matter that he's a famous pro, or that he's the all-time biggest money winner in tournament poker, or that he's considered to be a nice guy. Let's keep poker different from chess in all the ways that matter for its popularity, but let's make it more like chess by enacting or enforcing rules that help each player, amateur and pro alike, make their decisions by themselves, in peace.

Friday, February 22, 2013

Polishing Rabbits and Passing Off Squirrels—Andrew Zolli on Jonah Lehrer

Andrew Zolli, the Executive Director and curator of PopTech, as well as the co-author of Resilience, sent me a very thoughtful reflection in response to my earlier post on Jonah Lehrer and his recent apology. He had tried to post it as a comment on my post, but ran up against Blogger's comment length limits. So with Andrew's consent, I am posting it below. I think that Andrew's points are excellent. (Note: I have been an invited speaker at PopTech, both on the stage and to the Fellows program.)
At this point, the whole sad L'Affaire d'Lehrer has been dissected into a finely-ground powder, and everyone has assigned Jonah appropriate culpability, including Jonah himself.  What I find of more lasting interest is a systemic issue which Chris touches on glancingly, above: 
We live in a media moment that massively encourages and rewards the pulling of proverbial rabbits out of hats—storytelling that culminates in a counterintuitive fact about human beings and their nature.  It's sort of "Sudoku storytelling", in which the reader is presented with a confusing storyline, and the author presents a rubric and reassembles the elements in a way that snaps the pieces into place in a clean and satisfying way.  This kind of writing gives the reader a little positive jolt, a sense that they've been let in on some secret wisdom that decodes part of the human condition. (That "snapping into place" phenomenon—it's what makes a joke with a good punchline work too - you know it's coming, and you can't quite see how it will resolve itself, and then *wham*—there it is! The same is true for get-rich-quick-schemes.) 
These are the kinds of pieces—not just books, but blog pieces, and other forms of writing—that go "viral." Our appetite for such secret wisdom is so strong that passing them along actually raises the social capital of the *forwarder*, not just the author. (This is what Twitter was made for, I believe.) 
And this is *exactly* the kind of content that beleaguered mainstream editors often push writers, particularly talented writers, to produce—not nuanced tomes with confidence intervals attached to data, including examples of counterfactuals and copious footnotes—but snappy, highly "applicable," linear narratives (with counterintuitive endings!) that sacrifice complexity for accessibility. (As one editor put it to me: "You wanna write that other shit? Go to a university press!") 
And it's not just editors—these are the kinds of books that command significant advances, that backlist, that build the author's speaking fees, that get them bylined articles in prominent magazines, and tv appearances—a whole edifice that, most of the time, ends up with the "talent" becoming a not-terribly-intellectual-public-intellectual. (By the way, it's not just science writers … business gurus in particular are often peddlers of pure horseshit, yet find an insatiable appetite for their nonsense. Because if there's one thing human beings find even more interesting than ourselves, it's how to make a buck off of some other clueless rube.) 
Of course, the big problem is that there really aren't an endless supply of rabbits to pull out of hats. And not all rabbits are of first quality—sometimes, we have to "polish the rabbit," so to speak. And that's how I believe Jonah (whom I know personally, though not well) got into this predicament—being overly committed to the rabbit production line. So you start to reuse your rabbits, then you try to pass off second quality rabbits by making them look all the more surprising. And then you're panicked to discover you're passing off squirrels. 
Oddly enough, the rabbit-out-of-the-hat counterintuitive ending is actually Jonah's story, which is why his downfall itself went viral. You think this guy is just blessed with preternatural explanatory talent, but it turns out, "the 'Imagine' guy was making up his own quotes!" It's a joke! And a punchline! Love it! Instant schadenfreude! Have you heard? Pass it on! 
I am not excusing Jonah for his mistakes, which are significant. I think it's an honor to be held to a high standard, and he failed that standard, more than once. Worse, he had (and has) the abundant and enviable talent not to fail. And there should be real consequences for his having done so. 
Yet I also think we ought to be careful in making him a cautionary tale for a civilization drowning in its own bullshit. He was unprofessional, but he was also responding to perverse incentives and societal norms in our public square that we collectively bolster, if not passively tolerate, by our own consumption habits. 
For me, I'm trying to become more mindful of my own bullshitological contributions—which are, I'm sure, greater than I'd care to admit. I'm also finding myself reflecting on how we might make the system itself better, with fewer incentives for bad behavior, and better rewards for good behavior. 
Because, while I'm sure there is some intrinsic character in all of us, it's also true that incentives draw forth aspects of that character, which then can come to publicly define us. (I can be fairly charitably-minded until someone cuts me off in traffic; fortunately for me, my utterances thereafter are not part of the public record.) 
So here's my concluding truism: Piling on Jonah is like jumping on a trampoline: fun for a while, but it won't take us very far. Better to think about how we can springboard to a better place for everyone. 
I know it's not counter-intuitive enough. I guess I'll never make it in this business. 

Monday, February 18, 2013

How Much BAM for the Buck, and Other Thoughts on the Brain Activity Map Project

Today's New York Times reports that the Obama administration is considering a massive, partly government-funded project to map the human brain, the Brain Activity Map (BAM!) Project, inspired by the success of the Human Genome Project.

Let me start by saying that I am all in favor of more research in neuroscience, because there is certainly a lot we don't know about how the brain works. While to outsiders like Ray Kurzweil it may look like progress is coming in leaps and bounds, and backing up the mind's hard drive is therefore a calculable number of years away, from the inside the effort to understand the brain often seems to zigzag from new idea to cool finding to neat technology without a clear forward trajectory. I am also a big fan of George Church, a genius and visionary of molecular biology who is one of the driving forces behind the new plan. (I even once co-taught a course on cognitive genetics at Harvard with George's wife, the geneticist Ting Wu.) But before we all jump on this bandwagon, let's discuss the pros and cons—based on what has been said publicly so far (mainly in the Times article, which was prefigured by a Neuron article by Church and several others published last June).

Per the Times, the project is expected to cost "billions of dollars" and last 10 years. Its goals are to "advance the knowledge of the brain's billions of neurons and gain greater insights into perception, actions, and, ultimately, consciousness." So far, so good—basic science. Some also hope that the project will "develop the technology essential to understanding diseases like Alzheimer's and Parkinson's, as well as to find new therapies for a variety of mental illnesses." That's certainly possible, though I cannot think of any treatments for mental illness or brain disease that have been derived from previous maps of the brain or knowledge of its activity patterns. Perhaps this is just an argument that we need better maps. Finally, "the project holds the potential of paving the way for advances in artificial intelligence." Certainly also possible, but I think AI has been doing pretty well lately by ignoring brain architecture and going with whatever algorithms work on computer hardware to produce intelligent-seeming behavior.

The Times account is short on details of what precisely is being proposed, which has led some people to think that the idea is to map every connection and the firing activity of every neuron in (at least) one human brain, or to make more maps of the functions of brain regions using neuroimaging techniques. But the Neuron article by the Brain Activity Map proponents makes it clear that, last June at least, the idea was to start with small circuits in very small organisms, where it may soon be possible to record from every participating neuron at once, and to work up to larger circuits and larger organisms. All these maps would record "the patterns and sequences of neuronal firing by all neurons" in the relevant circuit or brain, so they would be much more detailed, in both space and time, than any existing databases. A drosophila brain might be done in ten years, a mouse neocortex in fifteen. The entire human brain would be a more distant goal. And of course there would be ethical issues to be surfaced and solved along the way to that ultimate step.

There are a lot of things to like about this ambition. Although we already have lots of maps of the brain, none of them (but one—the structural connectome of the C. elegans worm) approach the spatial resolution of a neuron-by-neuron map. The main source of our knowledge about how neurons represent information, carry out computations, and communicate with other neurons is still the single-cell recording, a technique developed about half a century ago. Such methods are based on inserting tiny electrodes in or near living neurons, and have obvious limitations, not least their inability to scale to full circuits or brain regions. Recording entire circuits in action would be a fantastic achievement and probably would lead to all sorts of ancillary benefits for advancing brain research, some foreseeable and some not. And perhaps more neuroscientists would be able to find jobs along the way!

But there are some considerations on the other side of the ledger, too. One that should not be underestimated is the opportunity cost; always, but especially nowadays, it would be a mistake to imagine that the funding for a new, large project will appear out of thin air. If the BAM goes forward, other areas are likely to get less funding, and other neuroscience and behavioral science projects will likely be among the first to be reduced. Moreover, a single mega-project is likely to supplant many smaller projects. Is our neuroscience money best spent on one project costing, say, $5 billion, or instead a thousand projects of $5 million each, or ten thousand projects with $500K budgets? Gary Marcus has a suggestion for five $1 billion projects. Which funding strategy is likely to result in more important discoveries, as viewed from the perspective of the next generation of scientists looking back? Maybe the BAM, but maybe not. The answer is hardly obvious to me. The big project is concrete and tangible, with milestones in the near future. The net effect of the tinkering of ten thousand labs with comparatively small budgets is harder to conceive of, but might turn out to be much larger.
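The funding alternatives above are just different ways of dividing the same pot; a quick sketch, assuming a purely illustrative $5 billion total:

```python
# Three ways of dividing a hypothetical $5B neuroscience budget,
# as in the alternatives considered above.
total = 5_000_000_000  # illustrative figure only

strategies = {
    "one mega-project": 1,
    "a thousand mid-size projects": 1_000,
    "ten thousand small labs": 10_000,
}

for name, n_projects in strategies.items():
    per_project = total / n_projects
    print(f"{name}: {n_projects:>6,} x ${per_project:,.0f}")
```

The arithmetic is trivial; the hard question is which allocation maximizes discoveries per dollar, and that is exactly what we don't know.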

One reason to be suspicious of the potential return-on-investment of a massive BAM project is that it's being sold by comparing it to the Human Genome Project (HGP), with a claim that the HGP produced $141 in economic activity for every $1 the government spent on it. President Obama cited this figure in his State of the Union Address. That's a return of fourteen thousand percent! Can that be right? If so, it would mean that about $800 billion in economic activity has been generated by that one government "investment." It turns out that this claim comes from a Battelle report (which is cited by the BAM advocates in their Neuron article) that was sponsored by a company that makes equipment used in life science research.

I find this figure hard to believe, not to say preposterous. Does it really represent net economic activity, or does it account for activity displaced from other spheres, and was all that economic activity the best activity that could have been done, or was it activity that pursuit of grant funding and other non-market incentives encouraged? What if the same amount of government money had been spent in funding lots of individual genetics researchers instead, or on other biology researchers, or other science entirely? The certainty with which these sorts of analyses are presented makes it hard to see counterfactual alternatives, but they lurk everywhere. At a minimum the $800B value must rest on a lot of assumptions, and the specific assumptions made probably have a large impact on the value that comes out of the analysis.
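As a back-of-envelope check, working only from the figures quoted above (the 141:1 ratio and the roughly $800 billion in claimed activity), the implied federal spending and the percentage return follow directly:

```python
# Back-of-envelope check on the HGP ROI claim, using only the
# figures quoted in the post: a 141:1 ratio and ~$800B in activity.
ratio = 141               # claimed dollars of activity per dollar spent
claimed_activity = 800e9  # approximately $800 billion

implied_spending = claimed_activity / ratio
net_return_pct = (ratio - 1) * 100  # net return as a percentage

print(f"implied government spending: ${implied_spending / 1e9:.1f}B")
print(f"net return: {net_return_pct:,.0f}%")
```

The numbers are at least internally consistent (spending in the mid-single-digit billions, a return of about fourteen thousand percent); the question raised here is whether the $800 billion figure itself means what the report says it means.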

To be clear: I think the genome project was a great scientific idea, I suspect that it has produced a lot of benefits, and I am personally happy it was done. I just don't think it should be oversold. As Richard Feynman pointed out in his famous "Cargo Cult" speech, public support for research will eventually erode if it is sold with outrageous-sounding claims or promises of early benefits.

But suppose it is true that the Human Genome Project was the single best thing the U.S. government ever spent its money on—sorry, "investment it ever made"—the government's version of buying Apple stock for $5 and selling at $700. Should we expect similar returns from the next big science project? Or should we expect to see the economic return and gains in knowledge achieved by the average of the big science projects that the government has funded over the past decades? The abandoned supercollider, the war on cancer, the cancelled breeder reactor, and I am sure many others fade from memory—and certainly never get mentioned—when we are told about the 141X ROI of the genome project (worthy as it was). An analysis that looked at all the comparable projects rather than just the all-time outlier might come to a different projection of the likely value of the BAM. We might still expect a positive return, but without the 141X (or whatever the true value is), it will have a tougher time competing with other priorities, or with other ways of parceling out neuroscience funding.

Europe has thrown its lot behind the single mega-project approach, with an effort to simulate an entire brain at a cost of over 1 billion Euros. Regardless of the (questionable) merit of this idea, perhaps the U.S. should play a different strategy in the competition for research glory by letting a thousand flowers bloom rather than planting one ginormous tree. Indeed, such a contrarian approach may have value precisely because of the limits of the mapmaking approach to understanding the brain.

Forty years ago, single-cell neurophysiologist Horace Barlow famously proposed that "a description of that activity of a single nerve cell which is transmitted to and influences other nerve cells and of a nerve cell's response to such influences from other cells, is a complete enough description for functional understanding of the nervous system." The BAM Project seems to be a plan to create exactly this sort of description, but at a much larger scale. But as David Marr explained in his 1982 book Vision, and as Hilary Putnam also suggested in his 1973 Cognition article "Reductionism and the Nature of Psychology," there are several other levels of explanation that are equally important in reaching a "functional understanding" of how the brain works. The representations, algorithms, and computational functions of the brain and its circuits, as well as the relationship of the brain to the organism and its environment and niche, are just as important as a map that shows how the neurons are wired up and how they send signals to one another.

Again, it is not that a BAM would have no value. I would personally be fascinated to see its results, and those results might well help us to crack the problem of how higher-level properties emerge out of agglomerations of lower-level events (which the psychologist Stephen Kosslyn, a founder of cognitive neuroscience, proposed as one of the hardest problems in social science). But the sheer size of a full BAM project might focus our attention and hopes on the BAM as the be-all and end-all of neuroscience, and distract the field from devoting energy to those other levels. Cognitive scientist Mark Changizi has eloquently argued, in fact, that the massive project we ought to be pursuing is a map of the "teleome," his coinage for the suite of functions and abilities that the nervous system was designed by evolution to perform. Without knowing more about function, it will be hard to understand the BAM's results, and perhaps even harder to build the EU's whole-brain computer simulation. As the proposal moves forward, I hope the decision-makers keep in mind that maps, while incredibly useful tools, don't give answers to every important question.