Stanford Medical School Professor Misrepresents What I Wrote (but I Kind Of Understand Where He’s Coming From)
This story is kinda complicated. The basic facts are simple, but the issues behind them are complicated.
The simple part is the basic story, which goes something like this:
– In 2020, a study was done at Stanford–a survey of covid exposure–that I publicly criticized: I wrote that this study had statistical problems, that its analysis did not adjust for uncertainty in the false positive rate of the test they were using, and that they did something wrong by not making use of the statistical expertise that is readily available at Stanford.
– In 2023, one of the people involved in that study wrote a long post about that study and its aftermath. In that post, my comments from 2020 on that study were misrepresented–hence the title of the present post.
– Just last week someone pointed me to the 2023 post. I was unhappy to have been misrepresented, so I emailed the author of the post. He didn’t respond–I don’t take that personally: he’s very busy, the holiday season is coming, the post came out over a year ago (I only heard about it the other day), and my complaint concerns only one small paragraph in a long post–so, just to correct the record, I’m posting this here.
The more complicated, and interesting, part involves the distinction between evidence and truth. It’s something we’ve talked about before–indeed, I published a short article on the topic back in 2020 using this very example!–and here it comes up again, so here goes:
And now, the details:
Stanford professor Jay Bhattacharya wrote about a covid study from 2020 that he was involved in, which attracted some skepticism at the time:
Some serious statisticians also weighed in with negative reviews. Columbia University’s Andrew Gelman posted a hyperbolic blog that we should apologize for releasing the study. He incorrectly thought we had not accounted for the possibility of false positives. He later recanted that harsh criticism but wanted us to use an alternative method of characterizing the uncertainty around our estimates.
On the plus side, I appreciate that he characterizes me as a serious statistician. He also called my post “hyperbolic.” He doesn’t actually link to it, so I’ll give the link here so you can make your own judgment. The title of that post is, “Concerns with that Stanford study of coronavirus prevalence,” and I don’t think it’s hyperbolic at all! But that’s just a matter of opinion on Bhattacharya’s part, so I can’t say it’s wrong.
He does have two specific statements there that are wrong, however:
1. It’s not true that I “incorrectly thought their study had not accounted for the possibility of false positives.” In my post, I explicitly recognized that their analysis accounted for the possibility of false positives. What I wrote is that they were “focusing on the point estimates of specificity” (specificity = 1 – false positive rate), and so did not properly account for uncertainty in the false positive rate. I did not say they had not accounted for the possibility of false positives. (A small numerical illustration of why this uncertainty matters appears below.)
2. I never “recanted that harsh criticism.” What I wrote in my post is that their article “does not provide strong evidence that the rate of people in Santa Clara county exposed by that date was as high as claimed.” But I also wrote, “I’m not saying that the claims in the above-linked paper are wrong . . . The Bendavid et al. study is problematic if it is taken as strong evidence for those particular estimates, but it’s valuable if it’s considered as one piece of information that’s part of a big picture that remains uncertain.” And I clarified, “When I wrote that the authors of the article owe us all an apology, I didn’t mean they owed us an apology for doing the study, I meant they owed us an apology for avoidable errors in the statistical analysis that led to overconfident claims. But, again, let’s not make the opposite mistake of using uncertainty as a way to affirm a null hypothesis.”
I do not see this as a “recanting,” nor did I recant at any later time, but I’m fine if Bhattacharya or anyone else wants to quote me directly to make clear that at no time did I ever say that their substantive claims were false; I only said that the data offered in that study did not supply strong evidence.
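To make the statistical point in item 1 concrete, here is a minimal simulation sketch in Python. The numbers are made up for illustration (they are not the actual data from the study, and this is not the model from the published analysis); the point is just to show how uncertainty in the test’s specificity propagates into the estimate of exposure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers for illustration only (not the actual study data):
# a survey finds 50 positives out of 3330 tests; the test's specificity
# was checked on 401 known-negative samples (399 tested negative) and its
# sensitivity on 85 known-positive samples (78 tested positive).
n_tests, n_pos = 3330, 50
spec_neg, spec_n = 399, 401
sens_pos, sens_n = 78, 85

raw_rate = n_pos / n_tests  # about a 1.5% raw positive rate

# Correction using point estimates only (the "focusing on the point
# estimates of specificity" approach):
spec_hat = spec_neg / spec_n
sens_hat = sens_pos / sens_n
prev_point = (raw_rate - (1 - spec_hat)) / (sens_hat + spec_hat - 1)

# Now propagate the uncertainty in specificity and sensitivity, using
# Beta posteriors from the validation counts (flat priors).  For
# simplicity this ignores the binomial uncertainty in raw_rate itself.
n_sims = 100_000
spec = rng.beta(spec_neg + 1, spec_n - spec_neg + 1, size=n_sims)
sens = rng.beta(sens_pos + 1, sens_n - sens_pos + 1, size=n_sims)
prev = (raw_rate - (1 - spec)) / (sens + spec - 1)
prev = np.clip(prev, 0.0, 1.0)  # prevalence can't be negative

lo, hi = np.quantile(prev, [0.025, 0.975])
print(f"point-estimate prevalence: {prev_point:.3f}")
print(f"95% interval with validation uncertainty: [{lo:.3f}, {hi:.3f}]")
```

With numbers like these, the point-estimate correction gives a prevalence near 1%, but once the validation uncertainty is propagated, the interval stretches down to essentially zero: a false positive rate only slightly higher than its point estimate could account for nearly all of the observed positives. That is the sense in which a study like this, taken alone, need not provide strong evidence for a particular exposure rate.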
This misrepresentation bothers me. I’d hate for people to think that I’d incorrectly thought they had not accounted for the possibility of false positives, or that I’d recanted my criticism. Again I emphasize that my criticism was statistical and involved the quantification of uncertainty; it was not a claim on my part that the percentage of people who’d been exposed to covid was X, Y, or Z.
The good news is that this post from Bhattacharya appeared in 2023 and I only heard about it the other day, so I guess it did not get wide circulation. Maybe more people will see this correction than the original post! In any case, I’m glad to have the opportunity to correct the record.
Stanford contagion
I have no problem with Bhattacharya making arguments about covid epidemiology and policy–two topics he’s thought a lot about. It’s completely reasonable for him and his colleagues to say that their original study was inconclusive but that it was consistent with their larger message.
Bhattacharya also writes:
In the end, Stanford’s leadership undermined public and scientific confidence in the results of the Santa Clara study. Given this history, members of the public could be forgiven if they wonder whether any Stanford research can be trusted.
He doesn’t fully follow up on this point, but I think he’s right.
A few weeks before the above-discussed covid study came out, Stanford got some press when law professor Richard Epstein published something through Stanford’s Hoover Institution predicting that U.S. covid deaths would max out at 500, a prediction he later updated to 5000 (see here for details). I’ve never met Epstein or corresponded with him, but he comes off as quite the asshole, having said this to a magazine interviewer: “But, you want to come at me hard, I am going to come back harder at you. And then if I can’t jam my fingers down your throat, then I am not worth it. . . . But a little bit of respect.” A couple years later, he followed up with some idiotic statements about the covid vaccine. Fine–the guy’s just a law professor, not a health economist or anyone else with relevant expertise here–the point is that Stanford appears to be stuck with him. In Bhattacharya’s words, “Given this history, members of the public could be forgiven if they wonder whether any Stanford research can be trusted.”
This sort of guilt-by-Stanford-association would represent poor reasoning. Just cos Stanford platforms idiots like Richard Epstein, it doesn’t mean we shouldn’t trust the research of serious scholars such as Rob Tibshirani and Jay Bhattacharya. But I guess that members of the public could be forgiven if they show less trust in the Stanford brand. Just as my Columbia affiliation is tarnished by my employer’s association with Mehmet Oz and Robert Hadden, and by its willingness to fake its U.S. News numbers. And Harvard is tarnished by its endorsement of various well-publicized bits of pseudoscience.
Reputation goes both ways. By publishing Epstein’s uninformed commentary, Stanford’s leadership undermined public and scientific confidence, and then Bhattacharya had to pay some of the price for this undermined confidence. That’s too bad, and it’s also too bad that he ended up on the advisory board of an organization that gave the following advice: “Currently, there is no one for whom the benefit would outweigh the risk of these [covid] vaccines–even the most vulnerable, elderly nursing home patients.”
The challenge is that legitimate arguments about policy responses under uncertainty get tangled with ridiculous claims such as that covid would only kill 500 Americans, or ridiculous policies such as removing the basketball hoops in the local park. On one side, you had people spreading what can only be called denial (for example, the claim that the pandemic was over in the summer of 2020); on the other side were public health authorities playing the fear card and keeping everyone inside.
So I can see how Bhattacharya was frustrated by the response to his study. But he’s missing the mark when he misrepresents what I wrote, and when, elsewhere in his post, he disparages Stephanie Lee for reporting on his study. We’re doing our jobs–I’m assessing the strength of statistical evidence, and Lee is tracking down leads in the story–just as Bhattacharya is doing his job by making policy recommendations.