This week’s Nature has an article and editorial on a case where the limitations of a major finding weren’t adequately explained to the public. About a month ago there was an announcement that an HIV vaccine trial in Thailand had shown modest effectiveness. Now that the results are published, there’s some controversy over whether the scientists involved over-hyped their results in the initial announcement.
The results first described had a P value of 0.04, meaning that if the vaccine actually had no effect, there would be only a 4% probability of seeing a difference at least this large between the vaccinated and placebo groups just by chance. Typically the cutoff for considering a result statistically significant is 5%, so this just barely met that cutoff, which was acknowledged in the initial press announcement. However, it turns out that excluding people who weren’t treated exactly as the procedures specified raised the P value to 0.16, which is not statistically significant. An additional red flag in the published results was that the vaccine didn’t seem to increase antibody levels in recipients, which you would expect if the vaccine were really responsible for the lower infection rate. It may turn out that the vaccine really did help, just by too small an amount to be reliably detected statistically; a P value of 0.16 doesn’t show the effect is absent, only that the evidence for it is weak. Either way, the situation is a lot more ambiguous than was initially described.
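To make the statistics concrete, here is a minimal sketch of the kind of calculation behind a P value like the ones above: a two-sided z-test for a difference between two infection rates, using the normal approximation. The infection counts below are hypothetical numbers chosen only for illustration, not the trial’s actual data, and the published analysis used more sophisticated methods than this.

```python
import math

def two_proportion_p_value(infected_a, total_a, infected_b, total_b):
    """Two-sided z-test for a difference between two proportions
    (normal approximation)."""
    p_a = infected_a / total_a
    p_b = infected_b / total_b
    # Pooled infection rate under the null hypothesis of no difference
    pooled = (infected_a + infected_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = abs(p_a - p_b) / se
    # Two-sided P value: probability of a standard normal exceeding |z|
    return math.erfc(z / math.sqrt(2))

# Hypothetical counts: 50 infections among 8000 vaccinated,
# 75 among 8000 on placebo.
p = two_proportion_p_value(50, 8000, 75, 8000)
print(p, p < 0.05)  # is it under the usual 5% cutoff?
```

The point the sketch makes is how sensitive this is: shift a handful of infections between the groups, or shrink the groups by excluding participants (as in the per-protocol analysis), and a P value can easily drift from just under 0.05 to well over it.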
The sponsors explained presenting only a single method of analysis at first by saying they wanted to give a clear result without needing to explain the statistical details, and they emphasized that even the modest effectiveness presented was a unique enough result to be worth noting. Personally, I think this is a little backwards. Given that there hasn’t been any significant progress in developing an HIV vaccine, scientists should be especially cautious when presenting any kind of result that might get people’s hopes up.
A reason for this is how the news media responded to the initial announcement compared to the follow-up findings. I remember the first stories about this being pretty easy to stumble on, but the first I heard of the follow-up was in Nature. When I went back to look specifically for coverage of the later findings I did find some stories, and this happened during a time when I wasn’t paying as much attention to the news, so I might simply have missed the later coverage. Still, it isn’t unusual for the first story to get a lot of publicity while the later details are buried. For that reason it’s important to make sure those initial stories include all the important details.