Confusing Literary Criticism for Science
Continuing to read the Insidevaccines group blog, I decided to search it for my own interest, autism. I found and read Helen Tucker’s “Bad Writing Can’t Save a Bad Study,” republished by Insidevaccines under the new title “Vaccine Science??? Part II.” Having read Tucker’s piece, I read the article she reviewed, and then re-read Tucker’s criticism. There are indeed a number of problems with Barlow et al. (2001), but what is remarkable is that Tucker cannot seem to correctly identify a single one of them. Instead, what Tucker offers us might be called a form of literary criticism.
“The entire paper was a convoluted description of how Barlow et al whittled the 679,942 children they said they studied in the abstract, to the 624 children they actually did study. The procedure was so complex that they used up most of the “Methods” section to explain it, and then resorted to a flow chart to present the final sample size as one of the results of the study.”
First, the entire paper is not a description of the ascertainment procedure; it contains all the appropriate components we should expect to see. Anyone with thirty seconds can verify this. Second, saying the authors “used up” the methods section is silly. The methods section is as long as it needs to be. In fact, in some research with complex procedures, the methods section is notably longer than what we find here. An actual concern in writing up research is a journal’s page limit, which means authors must sometimes weigh what gets the most attention. However, this is not the same thing as what Tucker describes.
Continuing, in the methods section the authors do describe a complex procedure, and they are both scrupulous and clear in their description of ascertainment. This does not mean the paper lacks the necessary parts. There are headings for (1) Study Sites, Sources of Data, and Identification and Classification of Cases, (2) Data on Immunization, (3) Statistical Analysis, and (4) Follow-up Study. Moreover, in the results section the authors do use a flow chart, showing how the potential seizure cases were narrowed to the analyzed sample. Frankly, so what? Tucker’s criticism here has absolutely zero bearing on statistical problems or internal validity.
“They did not find any increased risk for nonfebrile seizures for either vaccine, compared to the control group (the non-recently-vaccinated). Any increased risks were then downplayed by comparing study seizure rates with “background” seizure rates obtained from the same HMO’s.”
No, they didn’t downplay increased risk. The authors are very clear here; there is no room for confusion and no reason to state that risks were downplayed. Here is what the authors actually said:
“Using the background rates of seizure in the Group Health Cooperative, we found that there were 5.6 and 25.0 additional febrile seizures per 100,000 children receiving DTP and MMR vaccines, respectively. Using published background rates of seizure from Kaiser Permanente of Northern California, we found that there were 8.9 and 34.2 additional febrile seizures per 100,000 children immunized with DTP and MMR vaccines, respectively.16 For these calculations, we used the estimated relative risks in Table 1 for each period of exposure, since these are the best estimates.”
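Far from downplaying anything, the authors are translating relative risks into absolute terms. For readers unfamiliar with the arithmetic, “additional seizures per 100,000” is standard attributable risk: the background rate of seizures during the risk window multiplied by (relative risk − 1). A minimal sketch, with made-up inputs rather than the actual values from the paper’s Table 1 or its cited background rates:

```python
def excess_cases_per_100k(background_rate_per_100k: float, relative_risk: float) -> float:
    """Attributable (excess) cases per 100,000 vaccinated children:
    background rate during the risk window times (RR - 1)."""
    return background_rate_per_100k * (relative_risk - 1.0)

# Hypothetical inputs for illustration only -- NOT the values from
# Barlow et al.'s Table 1 or the published background rates.
print(excess_cases_per_100k(background_rate_per_100k=10.0, relative_risk=1.5))  # -> 5.0
```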
“This paper is so muddled that it is tempting to suspect that it is muddled on purpose, as if to obscure fundamental flaws in its design. However, in the end, the flaws are too monumental to hide behind a fog.”
And yet Tucker never seems able to cogently present what flaws these might be.
“A relative risk study should have never been retroactive in the first place–it would have been easier and more scientifically valid to follow children who were either vaccinated or not vaccinated.”
Again, Tucker misses the boat. Relative risk studies can be, and indeed are, done all the time using databases already in existence. That the data were already collected is in itself a frivolous criticism. The actual concern should be how well the given database controls for the six types of random and systematic statistical error, and how well the authors collected those data. Building a case for how they collected data from the Vaccine Safety Datalink (VSD) is something the authors spend a fair amount of time doing in their methods section.
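To make the point concrete, here is a minimal sketch of how a relative risk falls out of records that already sit in a database; all counts and person-time figures are invented for illustration:

```python
# Minimal sketch: relative risk from an existing (retrospective) database.
# All counts and person-time figures are hypothetical.
cases_exposed = 30                # first seizures within a post-vaccination risk window
persontime_exposed = 50_000       # child-days spent inside risk windows
cases_unexposed = 200             # first seizures outside any risk window
persontime_unexposed = 1_000_000  # child-days spent outside risk windows

rate_exposed = cases_exposed / persontime_exposed
rate_unexposed = cases_unexposed / persontime_unexposed
relative_risk = rate_exposed / rate_unexposed
print(f"RR = {relative_risk:.2f}")  # -> RR = 3.00
```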
“The risk ratios are meaningless without knowing how many study and control children did not have seizures.”
Huh? Yes, they did know. Given the way the data were collected, there is a bias toward missing seizures that did not require hospitalization. However, there is no reason to think that severe seizures were missed in a way that would bias the association, as the authors themselves note. Based on what the data indicate, the authors can calculate how many children did not have a seizure… at least not one severe enough to warrant hospitalization.
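In fact, the arithmetic Tucker claims is missing takes one line. A sketch using the cohort figures quoted in her own piece (679,942 children, 716 confirmed first seizures); this ignores person-time and follow-up windows and is for illustration only:

```python
# Non-cases are just the cohort minus the confirmed cases.
cohort_size = 679_942             # children in the study population
confirmed_first_seizures = 716    # confirmed first seizures

children_without_seizures = cohort_size - confirmed_first_seizures
rate_per_100k = confirmed_first_seizures / cohort_size * 100_000

print(children_without_seizures)   # -> 679226
print(f"{rate_per_100k:.1f}")      # -> 105.3 per 100,000 (crude, illustration only)
```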
“The control group itself was poorly distinguished from the study group (recently vaccinated vs. non-recently-vaccinated), especially in a long-term study.”
From the article:
“The reference group at the time of the seizure was composed of children matched for age, calendar time, and HMO but who had not had a vaccination in the preceding 30 days.”
So, actually, the difference between groups was quite clear. As to the long-term follow-up, why would the distinction be any different there unless explicitly stated otherwise? The authors noted at least one such explicit difference in their description of the follow-up study.
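The reference-group rule quoted above is sharp enough to code directly. A minimal sketch, assuming a simple per-child record with invented fields (birth_date, hmo, vaccination_dates) and an assumed age-matching tolerance; the study’s implementation is not published at this level of detail:

```python
from datetime import date

# Invented record structure for illustration:
#   child = {"birth_date": date, "hmo": str, "vaccination_dates": [date, ...]}

def recently_vaccinated(child, index_date, window_days=30):
    """True if the child had any vaccination in the preceding window_days."""
    return any(0 <= (index_date - d).days <= window_days
               for d in child["vaccination_dates"])

def eligible_reference(child, case, seizure_date):
    """Reference child: same HMO, similar age (assumed +/- 30 day tolerance),
    and no vaccination in the 30 days before the case's seizure date."""
    same_hmo = child["hmo"] == case["hmo"]
    similar_age = abs((child["birth_date"] - case["birth_date"]).days) <= 30
    return same_hmo and similar_age and not recently_vaccinated(child, seizure_date)
```

Whatever tolerance the authors actually used for age matching, the exposed/unexposed distinction itself is a sharp, date-based rule, not a fuzzy one.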
“Inclusion in and exclusions from the study were inadequately justified, yielding a sample that cannot be considered representative–of anything.”
The authors explicitly describe who was included and who was not. As to her comment that the sample is not representative, I have no idea what logical process led her to such a conclusion.
From the article:
“716 were confirmed to have had a first seizure during the study period. The primary reason for nonconfirmation was the identification of an earlier seizure.”
So, the authors didn’t include kids who had an earlier seizure. In other words, the study is representative of kids who didn’t already have seizures at the time of their DTP and MMR… seems pretty fair to me.
“And perhaps the most glaring of all, the follow-up study excluded the most widely reported neurobehavioral diagnoses temporally associated with vaccines: non-infantile autism and pervasive developmental disorders. Murkiness on details can’t hide errors this egregious.”
The authors use the clinical modification of the ICD-9 (the ICD-9-CM), which is appropriate, as this is what doctors used to report to HMOs in the United States. And while some of the PDD criteria are used in current autism epidemiology via the ICD-10, the practical everyday reporting happens with the ICD-9-CM. However, most of the recent autism epidemiology in the West is done using the DSM-IV-TR, a completely different manual altogether.
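Case-finding in such a database amounts to scanning encounter records for the relevant ICD-9-CM diagnosis codes. A minimal sketch; the prefixes shown (345 for epilepsy, 780.3 for convulsions) are common seizure-related ICD-9-CM codes used here for illustration, not necessarily the study’s exact case definition:

```python
# Hypothetical case-finding pass over HMO encounter records.
# The prefixes are illustrative, not the study's actual case definition.
SEIZURE_CODE_PREFIXES = ("345", "780.3")

def is_candidate_case(record):
    """True if any diagnosis code on an encounter matches a seizure prefix."""
    return any(code.startswith(SEIZURE_CODE_PREFIXES)
               for code in record["icd9cm_codes"])

# e.g. is_candidate_case({"icd9cm_codes": ["780.39", "V20.2"]}) -> True
```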
Tucker’s criticism here is unfortunate, because at the time of publication there was little evidence that other PDDs, and specifically PDD-NOS, made up the majority of the PDDs in the epidemiology. In fact, the good epidemiology used to help establish this, such as Bertrand et al. (2001) and Chakrabarti & Fombonne (2001), first appeared in the same year as Barlow et al., and wouldn’t receive confirmation until Chakrabarti & Fombonne (2005) several years later.
In summary, Tucker identifies no statistical problems, nor concerns with instrumentation, history, or selection bias. In fact, Tucker identifies no threats to the validity of Barlow et al. at all. I like reading science criticism and I like reading literary criticism, but I prefer a cleaner delineation between the two.
Further, some criticism needs to be directed at Insidevaccines. This may not have been their article, but they chose to reprint it in full, with author permission but without comment, and gave it blog-time on their site. They are the ones promoting it, and so can be held partly accountable for its lack of quality. That one or more of them didn’t write it is irrelevant; it is reprinted in full on their blog, and that is what matters.
For a blog promoted as a model of good science, I have thus far been very disappointed by Insidevaccines, both in the current post and in the one I reviewed the other day. More time will be needed to see whether some of their other “heavy science” articles are better than this. And while forming a group that looks at vaccine issues from a variety of viewpoints might be a good thing, sacrificing science in the process is not.
References
Bertrand, J., Mars, A., Boyle, C., Bove, F., Yeargin-Allsopp, M., & Decoufle, P. (2001). Prevalence of autism in a United States population: The Brick Township, New Jersey, investigation. Pediatrics, 108, 1155-1161.
Chakrabarti, S., & Fombonne, E. (2001). Pervasive developmental disorders in preschool children. Journal of the American Medical Association, 285, 3093-3099.
Chakrabarti, S., & Fombonne, E. (2005). Pervasive developmental disorders in preschool children: Confirmation of high prevalence. American Journal of Psychiatry, 162(6), 1133-1141.