Facilitated Communication: A Further Comment
A short time ago I wrote a post reviewing the research on Facilitated Communication (FC). I found a fair amount of research that did not support this method and a limited number of studies that did. One of the studies I referenced was Weiss, Wagner, and Bauman (1996). This study involved a single participant: a 13-year-old boy diagnosed with autism and severe mental retardation. The authors concluded that the young man’s communication was genuine.
I listed this study as support for FC after having read only the authors’ abstract. It is usually a dangerous policy to offer a review of an article having read only its abstract. I did so in this case because I had already read many of the articles I cited in my review, and because I simply couldn’t imagine that the Weiss et al. article would be so methodologically poor as to cause me not even to include it in a review.
I received a tip in the comments section of my previous post that several of the research articles supporting FC had significant problems. Some especially damaging points about Weiss et al. were raised. After an online link to the article was found, I took the time to review the article in depth.
I was rather horrified by what I read. Weiss et al. isn’t just a study with a few small problems; it is a study whose problems negate any worth it may have had. I will detail these problems below. I am withdrawing this article from the section of my review listing research that supports FC. It doesn’t merit inclusion.
The most glaring flaw in Weiss et al. is that the participant was asked many of the same questions about the same story before the test condition. The researchers called this practice phase “consolidation”. It involved an experimenter who knew the correct answers to the questions “facilitating” for the participant prior to the test condition with the “naive” facilitator.
Here is an example in trial 1:
Consolidation question: “What game did they play?”
Test question: “What game did they play in the story?”
I have honestly never seen contamination this bad in peer-reviewed research.
There are other flaws in this research as well. Please go to the comments of my previous post and see for yourself.
47 Comments:
Your previous post did not mention the "pointing" type communication described in "Strange Son." I understand that this was not the subject for a formal study so it would not belong in your review, but I am interested to hear your opinion of it. There are some videos of the technique in action here:
http://www.strangeson.com/index.php?page=media
I would imagine that this would be considered a type of facilitated communication, because it is clear that the child would not be doing this without the intervention of the facilitator. What is striking for me is that a) the content of the communication is remarkable; and b) although the child is being prompted and urged, I can't see how this would affect which letters he was pointing to.
Interverbal:
It doesn't matter how good the methodology was anyway. It was a single subject design. According to many others who are part of the HUB, that means it's worthless, or it could only be interpreted by a behavior analyst.....
When I read the study, I really couldn't believe the methodology. Is this what people think all single subject designs look like??? No wonder they don't think ABA has any research behind it!
One thing that really bothered me.
"We make no attempt to describe the nature of the physical support being offered by the facilitator, other than to say that she supported beneath Kenny's left wrist with her cupped open left hand and that she indicated that "resistance" was offered to Kenny's movement." I thought an important part of the Methods section included what you actually did during the experiment??? I guess I'm just old school.
My favorite line: "Clearly, it is possible that these words came from the uninformed facilitator and not Kenny." When it's correct, Kenny was the author. When it's incorrect, it was the facilitator's fault....... (shrug)
It certainly didn't make me any less skeptical of FC. Although it still doesn't mean that there aren't people out there who may really benefit from a keyboard and typing lessons either!
Hi VAB,
I have no opinion of the pointing type of communication.
It looks like there is the potential for abuse, but at the same time it might be an authentic means of communication for certain persons.
Excellent work on Weiss.
Now take a look at the Cardinal and Sheehan studies, both published in Mental Retardation at about the same time.
A reader who is genuinely interested in real evidence for or against FC doesn't have to read too far to be disappointed in those two studies. Neither includes a test for facilitator influence. Because influence is a demonstrated problem in FC, neglecting to check for it negates the usefulness of the studies as evidence of valid FC.
Sheehan should get some props for being honest--putting it all out there. She says how many right and wrong answers were given by each of the subjects (about 10% right and 90% wrong total). She describes the pre-test abilities of each of the subjects well enough for the reader to know that one of the subjects could already type independently and had enough other intact behavior that he might have cued the facilitator in unknown ways. She says some illogical things about statistical significance that should cause a reader to question her methodological abilities. Or, perhaps, we should lament how badly methodology is being taught nowadays.
But inference from correlated performance and gnashing of teeth about bad clinical education is not needed to question the results. The procedures include so much contamination and potential for contamination that the results are meaningless. And no test for facilitator influence.
An aside: In all our consideration of where bias and contamination come from, we seem to reliably forget about the subject as a potential source of contamination and influence. A subject who has some established responses to certain objects, or could learn to make a specific response to a few of the stimuli in the experiment--lick lips, fidget, make certain sounds--could easily cue a facilitator to type something specific, especially after practice, making the result look better than chance. This is a problem in ESP studies. These studies often have hundreds or thousands of trials. The sender and receiver learn to make and respond to arbitrary and subtle correlated responses that raise the results just enough to make them seem interesting. This is another reason why tests of facilitator influence are always necessary. The facilitator might not know anything; even the observers might be kept out of the loop. But a non-verbal subject might do at least a few meaningful things that could lead to spurious positive results.
Back to our story. Cardinal has tighter procedures. Or so it seems. But look at the set-up of the experimental room. The observer who knows each of the answers is in view of the facilitator. The subject, who might or might not have a little native communicative ability, is there too, available to add subtle influence. However, unlike Sheehan, Cardinal doesn't say how many right and wrong answers his subjects gave. He buries these things in a table designed to tell a story of gradual acquisition of FC, from virtually no correct responses in the first couple of trials to better performance later. He's trying to make the argument that one-shot validation tests are inherently doomed to failure because FC takes time to develop. However, after extracting the numbers from the table--I think I'm right; he's not clear what anything means--the reader will find that the real overall results seem to be about the same as Sheehan's. Another table on another page hides more treasure. An hour or so of calculation seems to reveal that there were as many or more correctly spelled wrong answers as correctly spelled right ones. I say this provisionally too. The meaning of the numbers in the tables and their referents are not completely clear. Even so, I think the reasons for not just coming out with these numbers are obvious. Of course, without a test for facilitator influence, the small number of right answers doesn't mean much, even if it is above the 1/100 chance Cardinal suggests. I do think the number of meaningful wrong answers tells an interesting story.
If the FC people want to prove their technique works, it looks like they are just going to have to do a really complicated, costly, hard-to-replicate study with lots of subjects, fabricate some plausible results, and get a naive editor to put it all in print. Or they could just do like Biklen and his people at Syracuse have: give up on controlled studies entirely and use the Nancy Lurie Marks money to spread the word by giving educational seminars to journalists.
Good luck finding the other studies and happy reading.
Rapid prompting (RP) is FC in disguise. It is a wolf dressed in another wolf's clothing.
We don't make the association with FC because Portia Iverson, Morton Gernsbacher, and Soma Mukhopadyay have done a great job staying away from the FC people and their frequent forays into weirdness--except for Gernsbacher breaking ranks to speak at Autcom and endorse the Savarese book.
But to claim that sliding a finger back and forth over a letterboard held in the air by someone else, or pointing at letters or words on the wall or table, is distinctly different from FC? No. It's clever to hide the cueing in the upward pressure of the board against the person's finger as it moves about approaching the right letter. Even alleged methodologist Gernsbacher couldn't see how having the letters arranged just right on the wall or piece of paper can permit a limited number of cues to signal accurate positional responding. (Sort of like how legitimate single-button, hierarchical augmentative communication systems are set up. Sort of like how the "educated animal" tricks are done in Vegas.)
Like the FC people, the RP people stay miles away from anyone interested in examining their method closely or doing validation tests. Of course, they needn't worry about validation. If FC has taught us anything, it is that people will believe absolutely anything if it is associated with autism. Like with FC, everyone, especially people in the media, goes all slack-jawed and goggle-eyed when they see RP, and becomes incapable of asking even the most basic critical questions. Too bad for the people with autism, becoming puppets for the aggrandizement of others. I guess someone has to make the sacrifices.
Interverbal writes:
"The most glaring flaw in Weiss et al. is that the participant was asked many of the same questions about the same story before the test condition."
Why is that a flaw? If the facilitator in the test condition didn't know the answers, and the person asking the questions was not able to cue the facilitator or the FC user, then where are the correct answers coming from?
"Why is that a flaw? If the facilitator in the test condition didn't know the answers, and the person asking the questions was not able to cue the facilitator or the FC user, then where are the correct answers coming from?"
Because it is no longer proof that the young man could give a correct answer based on what was read to him in the story.
The young man may simply be making the same response the non-naive facilitator helped him make in the "consolidation" phase. It might be proof that the young man can repeat typed words others previously helped him type. It is no proof of authentic reading comprehension skills.
This problem isn't just a confound, it is a horrific confound.
Interverbal wrote:
"The young man may simply be making the same response the non-naive facilitator helped him make in the "consolidation" phase."
Except he's not. He didn't rotely type the same answers in the test phase. In trial 1 the characters played baseball. In the consolidation phase, Kenny types "FBASEFBAKIOOLLT". In the test phase, he types "GBASEBALL". How did he learn to type GBASEBALL if your hypothesis is correct?
"Except he's not. He didn't rotely type the same answers in the test phase. In trial 1 the characters played baseball. In the consolidation phase, Kenny types "FBASEFBAKIOOLLT". In the test phase, he types "GBASEBALL". How did he learn to type GBASEBALL if your hypothesis is correct?"
It is not a "hypothesis"; it is a criticism of an apparent confound. I have no idea whether it is true or not. However, the fact that such a glaring uncontrolled confound exists is just incredible.
The young man may have learned it from having his non-naive facilitator help him spell "FBASEFBAKIOOLLT".
Don't forget that having the right answers in the test phase is meaningless as evidence of independent communication because no test for facilitator influence was done to determine where the answers were coming from. We can't just throw a designed demonstration out there and pretend that it's a source validation study.
Interverbal is right that there can be all kinds of response acquisition and cuing going on between the consolidation and test phases. But not to test for the source of the typing, especially in a study in which facilitator influence was actually invoked to selectively dismiss some of the results (but not others), is an amazing fatal flaw. It is remarkable, really, that such a thing found its way into print. Editor Stephen J. Taylor's unusual published rationalizations for publishing the Weiss and Cardinal studies were remarkable as well. (He may have included Sheehan's study, which is just as bad, but I don't recall.) His commentary accompanies the Weiss and Cardinal articles in Mental Retardation. It is rare that valid studies with good methodology require such justification. Taylor's comments suggest, to me at least, that these things were steered toward an FC-friendly review panel by an FC-friendly editor after the original panel had serious methodological objections. His implication that editors need to be unbiased at the outset rings false to me. Editors should not be biased by the wrong things. Bad science--studies that do not include even the most rudimentary controls--is something an editor can be safely biased against.
But our little foray into the sociology of peer review is beside the point. Without a test for facilitator influence, nothing in the Weiss study means much of anything, except that the top-performing middle school student, who has obviously taken many tests successfully with FC before this one, was unable to come close to his previous performance level with second-grade-level material. Weiss is a failure as a validity study while being an excellent example of comprehensive methodological badness.
"Original Anonymous"
Note: The anonymous comments above re the apparent confound are mine. I'm now able to login; sorry for any confusion. There can be only one "anonymous" ;-)
This is a long post, but the bottom line of pretty much all my posting about FC is this: however controversial FC is, it's possible to check it out one's self, rigorously and honestly. As the saying goes, trust, but verify. Trust in your child's potential, and verify that s/he is able to read and to type. Set up ways to control for the inevitable fact that as a facilitator, you will influence the communication to some degree.
For IV: The confound you speak of is possible in theory, but the data gathered don't support that it actually happened. I don't think there's a very high likelihood that the subject, while failing to understand the meaning of the words, managed to get a naive facilitator to type "GBASEBALL" as a result of previously being "taught" to do so by a non-naive facilitator cueing him to type "FBASEFBAKIOOLLT". That's really pretty improbable. Also, some of the other questions and answers in the test phase were different. If Occam's Razor means anything, it's much more plausible to conclude that (absent un-blinding of the facilitator) what was seen in this study was a genuine message-pass.
Similarly, for Anon, your logic is flawed when you argue that the lack of a control for facilitator influence moots the results. If the facilitator is truly blinded, then she can cue the kid 100% and not get the right answer, because she doesn't know the right answer. It's highly improbable that a blinded facilitator would come up with most of the answers obtained in the test phase, no matter how many Kohlberg-style stories she had heard. Even though those stories all have similar plotlines, you still have to explain how she can guess correctly the particulars. And you can't do that, except by asserting that somehow the experiment was screwed up and the facilitator wasn't really blinded. But there's nothing in the experimental procedure to suggest that was going on, which makes your criticism completely non-specific. It's like arguing that a person cheated on a test despite there being no evidence to that effect.
It's also false that "facilitator influence was actually invoked to selectively dismiss some of the results (but not others)". The authors used the word "may", and the point was that the second trial was terminated because of the subject's apparent anxiety. Facilitator influence is well-known as a factor in FC, but in this example (i.e., the origin of the words in the second trial, and why that trial was aborted), facilitator influence is not a factor in interpreting the results of the study. It’s a complete red herring. Such objections are at best the result of careless reading, and at worst demonstrate intellectual dishonesty.
And why go on repeating the canard that the test invalidated Kenny's academic achievement? First of all, the study set out to demonstrate message-passing, not to validate the subject's overall academic achievement. And once again, in what sort of parallel reality does getting nearly everything right on two out of three second-grade level tests invalidate higher scholastic achievement? Any individual could take three such tests, not complete one due to an anxiety attack, and ace the other two: and it would be impossible to determine whether that person was in 2nd, 5th or 12th grade. You repeat this objection despite my having earlier corrected your math error (i.e., your assumption that Kenny’s 72% correct rate included, rather than excluded, the aborted trial), which was what your objection was supposedly based on. Data be damned, full speed ahead on the argument, eh?
Similarly, you have the temerity to argue that Kenny's typos somehow reflected upon his cognition and/or academic level. But FC users have typos all the time, which is to be expected if they're dyspraxic. Another rhetorical belly-flop is your holding FC proponents to a double standard (bad for FC types to accept money for services, OK for other professionals). I think the sloppiness (to put it charitably) of your approach speaks for itself. One sees this often in the popular discourse when science is distorted for political ends.
In any case, I'm going to address Anon's other objections to this paper, and Mostert's and Jacobson's, later in a post on my own blog.
IMO, the Weiss study is adequate as a demonstration of valid message-passing. The real problem, as you correctly point out, is that there aren't more like it. But that could be a reflection of FC researchers as much as the phenomenon itself. It's curious that some, including some who call themselves autism advocates, seem more hell-bent on skewering the former than understanding the latter.
I remember reading on a Usenet group a media report to the effect that one student with whom Weiss was acquainted had gone on to type independently. I'm not sure whether it was the Kenny of this study or not. Anyway, that phenomenon was what really persuaded me to have a look at FC: the fact that some FC users, whatever the percentage may be, do go on to type independently and in some cases even speak (Jamie Burke, Sharisa Joy Kochmeister, Sue Rubin, Lucy Blackman, Richard Attfield etc.). That, along with various approaches to emerging literacy (Karen Erickson, M. Gernsbacher), allowed me to get my own kid to start communicating.
Initially, Ben would just do multiple-choice questions, crossing out his choice on a large dry-erase board. (Note that the multiple choice is generally with no physical support; a lot of OT had gotten him to the point where he could mark a single vertical line with chalk, unassisted). After a single session of FC training (for which I was asked to pay exactly $0, but insisted on paying for), my son began FC-ing. Notably, the words he can and cannot spell via FC are in line with the literacy he displays via unassisted multiple-choice.
The main advantage of FC is that it provides unlimited utterance. Sometimes we can't guess what's on Ben's mind. And that's a very important way that FC suggests its utility in real life: one sees major behavioral changes when an FC user successfully says what s/he needs to say, and gets what s/he wants. And of course there are all sorts of opportunities in daily life to work on message-passing and fading prompts. All that is, btw, in line with what FC proponents currently recommend.
So, for me, the validity of FC doesn't rise or fall with how convincing the admittedly slim evidence base in the literature is. It's valid enough where it matters most, i.e. my own family. With a reasonable amount of common sense and scientific literacy, I was able to set up conditions in which I could validate the process myself. That's the approach I'd advocate for anyone sitting on the fence with regard to FC, as I was a few years ago.
regards,
jim
Hi Jim,
Nice to have you back; I hope your trip was successful. I have some replies both for my portion and some of the ideas you shared with Anonymous.
You write “For IV: The confound you speak of is possible in theory, but the data gathered don't support that it actually happened. I don't think there's a very high likelihood that the subject, while failing to understand the meaning of the words, managed to get a naive facilitator to type "GBASEBALL" as a result of previously being "taught" to do so by a non-naive facilitator cueing him to type "FBASEFBAKIOOLLT". That's really pretty improbable.”
The data neither confirm nor deny the confound I mentioned. That’s because the methods were not designed to answer whether or not the confound occurred.
Jim, any time we introduce another phase in research, we need to be scrupulously careful that what the person is exposed to in that phase cannot contaminate their answers in the testing phase. There was no such caution here. This is a no-no, and it ruins any (and yes, I mean “any”) confidence we should have in this research.
You are welcome to argue that the answers between the phases are just too different, and that there is no way the young man or the facilitator could add a few letters here and there. However, since no data were taken to control for this, you don’t have a data-based rebuttal to this problem. You are left to ad hoc reasoning.
You write “If Occam's Razor means anything, it's much more plausible to conclude that (absent un-blinding of the facilitator) what was seen in this study was a genuine message-pass.”
Which version of Occam’s razor? The “simplest explanation is usually the best” version, or the “don’t multiply explanatory entities beyond what is necessary” version? Also, if we are going to deal with the razor, are you willing to apply it to the following case:
A young man is reported to be in the 7th grade and pulling an A-B average using a communication method whose authenticity is in question, but the student can only manage a C average on lower elementary reading comprehension tasks during a validation trial with inadequate controls. So, what would William of Occam tell us here?
Also, I agree with Anonymous that facilitator influence could be an issue here. I don’t think the facilitator guessed; I think it is possible the facilitator found out somehow. Again, because this study lacks a control for this aspect (which is known to occur in previous research), we don’t know.
The research with psychics and spoon benders should be a lesson here. Research isn’t about assuming anyone’s honesty or denying it. It is about recognizing a potential threat and adequately dealing with it. This research fails to deal with that.
What scares me is that in the name of science, people are going to get very hurt.
"the fact that some FC users, whatever the percentage may be, do go on to type independently and in some cases even speak (Jamie Burke, Sharisa Joy Kochmeister, Sue Rubin, Lucy Blackman, Richard Attfield etc.)." -- EXACTLY. And these people have written about what it is like to have done FC, when it is clear there is no influence in the form of physical support while doing so.
I would think that good science wouldn't ignore these accounts either, even if there is not yet a good study on this.
It's clear there is some crap called FC. But the general disability rights principle of "least harmful assumption" also seems important. The consequences of being wrong about even one "exception" (and I'm not sure these exceptions are exactly rare) is beyond horrible if the assumption is that a person is not communicating and only is influenced.
Certainly, in cases of abuse and such, verify. But actually verify, don't dismiss (what strangely never gets mentioned in FC discussions is the abuse accusations that turned out to be *true* - not that this proves or disproves FC, but it does disprove the idea that all FC abuse allegations are bunk).
I'd love to see some better studies that recognize the realities of FC, autism, anxiety, motor skills, etc. In the meantime, I think we should be very careful about dismissing the abilities of autistic people who use FC, unless we are damn well sure it's nothing more than a facilitator's words. The standard of proof that this demands is far more than anything I've seen on this blog.
Hi new Anonymous,
You say “What scares me is that in the name of science, people are going to get very hurt.”
Well, science has hurt people; you raise a legitimate concern. Being part of research even when there was informed consent has left bad feelings lingering more than once. Also, the findings of any single example of research can be over-applied. Further, research can potentially invalidate ideas or concepts that we hold dear. It can take a lot of courage to face this last issue especially.
You write “I would think that good science wouldn't ignore these accounts either, even if there is not yet a good study on this.”
Yes, these cases teach us that there are a few cases where people who used FC have become independent typists. What these cases don’t do is provide research-based support for FC. Another thing they don’t do is validate FC in any specific case where the typist isn’t independent.
“It's clear there is some crap called FC.”
Do you think you would be able to tell good FC from crap and then back this up with validation trials? Do you think anyone could do this? Why hasn’t it happened already, do you think?
“But the general disability rights principle of "least harmful assumption" also seems important. The consequences of being wrong about even one "exception" (and I'm not sure these exceptions are exactly rare) is beyond horrible if the assumption is that a person is not communicating and only is influenced.”
I think the least harmful assumption is a valuable ethical principle if used carefully. I also think it can be an excuse for quackery if used inappropriately. I do not believe that accurate information will hurt autistics, whether or not that information supports FC.
You write “Certainly, in cases of abuse and such, verify. But actually verify, don't dismiss (what strangely never gets mentioned in FC discussions is the abuse accusations that turned out to be *true* - not that this proves or disproves FC, but it does disprove the idea that all FC abuse allegations are bunk).”
I think that in any case where FC is reported to be valid we should verify. As to abuse, I don’t have a problem with the occasional case shown to be valid. My problem (and I suspect many other folks’ problem) is the high rate of cases that are shown not to be valid (Botash et al.). I think many of us are curious what is going on here.
You write “I'd love to see some better studies, that recognize the realities of FC, autism, anxiety, motor skills, etc. In the meantime, I think we should be very careful about dismissing the abilities of autistic people who use FC, unless we are damn well sure it's nothing more than a facilitator's words.”
You are welcome to criticize the available negative research on FC. However, in the end, the effort of validation concerning FC falls squarely on its advocates. It is unfortunate that I don’t see them embracing this effort. In fact, at the 2006 AutCom conference, which our original anonymous commenter linked to, we have Dr. Kluth reviewing the “Tipping Point” and advocating that less effort be put into validation studies and more effort into testimonials. A picture of Dr. Kluth is shown; it is followed by one of Dr. Larson and what appears to be his presentation at AutCom of the overtly quackish “Chiropractic craniopathy in autism”.
This isn’t science, this isn’t even pseudo-science. This is anti-science.
“The standard of proof that this demands is far more than anything I've seen on this blog.”
Far more than appears anywhere, from what I can tell. I hope advocates like you can conduct the research you seek or convince others to do so. In either case, you will have to do much better than Weiss et al. in terms of controls.
"In fact at the 2006 AutCom which our original anonymous linked to, we have Dr. Kluth reviewing the “Tipping Point” and advocating that less effort be put on validation studies and more effort be put on testimonials. A picture of Dr. Kluth is listed; it is followed by one of Dr. Larson and what appears to be his presentation at AutCom of the overtly quackish “Chiropractic craniopathy in autism”.
This isn’t science, this isn’t even pseudo-science. This is anti-science."
No, it's not anti-science to educate people about non-scientifically-proven modalities (especially when they are generally safe, apart from a freely-chosen hit to the pocketbook; obviously, caveat emptor). Nor is it unethical to try them, especially in situations where anything that helps is a blessing.
However, it is ad hominem to use that topic to criticize Autcom and FC. What does cranial osteopathy have to do with FC, other than the guilt-by-association you insinuate? You'll find a full spectrum of presentations at Autcom, including some gentle alt-med stuff that may verge on placebo, but the alt-med stuff is hardly central. What is front and center is the emphasis on respect, dignity and quality of life for people with autism. Here's the link (PDF) again to the newsletter covering the 2006 conference, where that emphasis is readily apparent.
What drew me to Autcom was positions like this (emphasis added):
"The Committee further believes that the principles of social justice can only be upheld through organizational methods which reflect those principles. We welcome the participation of all ... who wish to implement, not debate, the right to self-determination by hearing and heeding the voices of people with autism."
That's why Autcom, unlike some other prominent autism advocacy organizations, actually has autistic people in leadership positions.
But I do agree with you that Paula Kluth was partly wrong. Of course we need more and better studies. (Sidebar: I was invited to join the Autcom board this year, and in my application I stated my opinion that the "tipping point" for FC will depend on a stronger evidence base in the peer-reviewed literature, and that as a board member I intended to do what I could to help improve that evidence base. My nomination was approved, so that ought to tell you something about Autcom's view of science.)
However, Dr Kluth was partly right, also. If FC works, but the evidence base is thin, and the quality of life for some people depends on it... then yes, pending better studies, testimonials are going to help. If I hadn't met Jamie Burke, I might not be FC-ing with my son, and he wouldn't have unlimited utterance, and therefore his quality of life (and that of his parents, I'm guessing: cf. the behavior of anyone who can't express what xe wants) would be pretty bad.
Again, what is often overlooked in these conversations about science and ethics is that it is entirely ethical, in intractable situations, to try modalities whose efficacy is unproven, especially when their risks are relatively well-understood. Nor is anyone that I know of making a ton of money off of FC, fleecing the desperate hordes. Quite the opposite, in my experience. I spent thousands of dollars on speech and occupational therapy, the former of marginal value and the latter of moderate value (depending on how good the therapist was). I also dropped a grand on nutritional counseling that was of dubious value, and was invited to spend thousands more on all sorts of interventions. Developmental optometry was pushed pretty hard at a few "seminars" I attended. I had one father earnestly tell me that I really should get ABA for my kid, and as an illustration of how the hundreds of hours (and presumably dollars) had helped, he said that his kid could now make a fist and lift his pinky finger on demand. FC cost me nothing, apart from a voluntary donation of a hundred bucks to the person who trained us. FC, and the independent-pointing, multiple-choice modality I mentioned above, made the biggest difference in addressing my son's communication deficits.
So for my family, FC is not about money or quackery. It's one thing, and one of the only things, that happens to work, since my son is one of those people who can read. (The other major thing was a neurological consult, sleep EEG, diagnosis, and treatment with an anticonvulsant.)
P.S. Time flies, so my responses regarding Weiss will have to wait till tomorrow.
Regarding the Botash study on the prevalence of false allegations of sexual abuse made via FC: I think if you dig a little deeper, IV, you'll change your interpretation. A Wikipedia article notes that "False allegations should not be confused with unsubstantiated reports or unfounded reports since the latter two terms refer to cases where abuse may or may not have occurred but there is not enough evidence to pursue an investigation." Certainly a valid point.
With that in mind, this link is self-explanatory. The relevant excerpt, quoting Botash, follows:
As reported in the article, Child Protective Services made the determination of "indicated" "when the allegation of sexual abuse was substantiated by other suspicious family or child characteristics." Corroborating evidence of sexual abuse was described to include "physical examination findings that are considered to be suspicious or clear evidence of abuse, the child's additional verbal or independent typing disclosure, and/or a confession by the perpetrator." Supportive evidence was defined as "CPS determinations, court findings, and siblings' disclosures" (p. 1283).
During the period January 1, 1990 to March 10, 1993, 1096 children were evaluated by the CARE program for suspected sexual abuse. Thirteen (1.2%) of these children disclosed abuse using Facilitated Communication. Thirty-one percent (4/13) of the children who disclosed sexual abuse via FC had corroborating evidence. An additional five had supportive evidence. Two children had physical examination findings considered suspicious for sexual abuse. Seven children had nonspecific physical examination findings (defined as "unusual rectal findings that may or may not be due to abuse or unusual bruises in non-genital sites") (p. 1283).
There was enough evidence to legally prove the allegations of sexual abuse of three children, and one additional child's perpetrator confessed. Although there may not have been enough evidence for legal prosecution, another seven of the children's cases were determined to be indicative of abuse by CPS. The indication rate for abuse and neglect found in this study is consistent with the upstate New York indication rate of approximately 47% (p. 1287).
"These results demonstrate that allegations of abuse that are initiated owing to an FC disclosure should be taken seriously" (p. 1287) (emphasis added).
In sum, there is not enough evidence to compare the prevalence of false allegations of sexual abuse made with and without FC, and the little that does exist -- the Botash study -- found that, if anything, they were comparable.
Hi Jim,
“No, it's not anti-science to educate people about non-scientifically-proven modalities”
No dice, Jim. The position that was advocated was to focus on personal stories and testimonials rather than to rely on validation studies. This is anti-science.
“Nor is it unethical to try them, especially in situations where anything that helps is a blessing.”
How is this position different from parents who use Lupron and chelation? How does this make you different from parents of a generation ago who approved of exclusionary, humiliating, or painful teaching procedures? You may differ from them on an ethical level, but your supporting logic could be used to justify any of them.
You write “However, it is ad hominem to use that topic to criticize Autcom and FC.”
I am very well aware that I can criticize Dr. Kluth for making an anti-scientific comment. I can also state that she presented at AutCom. I can certainly criticize AutCom for having a speaker advocate and discuss a book where anti-science is used.
Moreover, chiropractic craniopathy might be a gentle alt-med technique, but it has no research basis. It is what many skeptics like me would call quackery. I criticize AutCom for associating with this. I think this reflects very poorly on AutCom. Jim, I am well aware that none of this qualifies as an argumentum ad hominem. You have used this term inappropriately here.
You write “That's why Autcom, unlike some other prominent autism advocacy organizations, actually has autistic people in leadership positions.”
AutCom has been a model both of advocacy for social justice for autistics and of a group that includes autistics in its leadership.
I am pleased that you are on the AutCom board. I hope that you can encourage a high standard in your presentations and that you advocate well controlled studies be conducted concerning FC. You will have to do better than Weiss et al.
Thank you for the Botash information, but I have already seen it. Only 4 out of the 13 had corroborating evidence. If this study is a support of FC's validity in this regard, then perhaps FC deserves its poor reputation in this area.
In response to: "Nor is it unethical to try them, especially in situations where anything that helps is a blessing."
Jim, please read this, or at least the first 8 paragraphs. It's kinda long.
http://people.uncw.edu/kozloffm/fads.html
Jim: "If FC works, but the evidence base is thin, and the quality of life for some people depends on it... "
If FC works, the evidence base should not be thin... The underlying mechanisms at work should have been identified by now. It would be quite easy to put together a study that demonstrated its effectiveness. That could be a group design or a single-subject design.
Hi Interverbal -- catching up here on a variety of topics...
Nice to have you back; I hope your trip was successful.
You're very kind, thank you! It was a good trip, but exhausting and overwhelming to travel with Ben without anyone else to provide backup childcare support. My heart goes out to single parents, especially those with kids with special needs.
Regarding the possible confound in the Weiss study wherein, as you suggested, Kenny "may simply be making the same response the non-naive facilitator helped him make in the 'consolidation' phase", you wrote:
The data neither confirm nor deny the confound I mentioned. That’s because the methods were not designed to answer whether or not the confound occurred.
That doesn't necessarily follow. Data often tells us things we weren't specifically looking for.
For example, I may set up a chemistry experiment with the possible confound that my lab-mate, who holds a grudge against me, may pour his coffee in. I don’t control for that: I don’t replicate the experiment outside my lab in a place where my spiteful lab-mate can’t get at it. But if the final mixture ends up not looking coffee-colored, and analysis of the mixture doesn’t show any coffee, then the confound didn’t occur, even though I didn’t control for it.
Jim, any time in research we introduce another phase, we need to be scrupulously careful that what the person is exposed to in the phase could not contaminate their answers in the testing phase. There was no such caution here. This is a no-no, and it ruins any (and yes I mean “any”) confidence we should have in this research.
You are welcome to argue that the answers between the phases are just too different, and that there is no way the young man or the facilitator could add a few letters here and there. However, since no data were taken to control for this, you don’t have a data based rebuttal to this problem. You are left to ad hoc reasoning.
I wouldn't agree that my reasoning, or that of the researchers, is ad hoc. The explanation can be generalized to other instances, as in my chemistry example above. Data can tell you more than what you specifically designed the experiment to tell you. Anyone who has done research could tell you that. Your skepticism is overdone because it fails to take that principle into account. It's not ad hoc reasoning; it's a general point that can manifest in any number of ways, depending on the experiment.
I still hold that when the study finds substantially different strings of letters being typed in the trial vs. consolidation phases – GBASEBALL is most certainly different from FBASEFBAKIOOLLT, relative to the confound you suggest – we can reasonably infer the confound didn't happen. Even though they didn't control for it, it's extremely unlikely that typing the latter string to a nonliterate kid would teach him how to spell "GBASEBALL". And similarly for the other data; for example, the correct answer "T~OWO BROITHERS" from trial #3 isn't given in the consolidation phase at all. We are allowed to use common sense here, just as the authors did when they said:
“Third, Kenny's responses during the test phase could not be explained as a repetition of a previously learned motor program. The literal stroke-by-stroke transcripts during the two phases bear little or no resemblance to one another, indicating that Kenny did not simply "play back" a previously memorized message.”
to be continued...
Hi Jim,
“That doesn't necessarily follow. Data often tells us things we weren't specifically looking for.”
Let's return to what I wrote. I was very specific when I argued that "the methods were not designed to answer whether or not the confound occurred." The fact that data can produce unexpected or novel results doesn't mean that data can rebut a confound. That doesn't follow.
Also, if we don't control for a confound, we can have no confidence as to whether or not it occurred. I don't think your parable of the coffee is an adequate answer here.
“I wouldn't agree that my reasoning, or that of the researchers, is ad hoc. The explanation can be generalized to other instances, as in my chemistry example above. Data can tell you more than what you specifically designed the experiment to tell you.”
The researchers didn’t control for a potential confound. You argue that the data rebut the confound because the data have variations. This meets the definition of an ad hoc argument. There was no pro-active measure to control for the potential threat.
“Anyone who has done research could tell you that.”
I have done research as well. I have no problem stating that research can produce unexpected or novel results. My disagreement here is with the claim that the data can be an adequate rebuttal to a potential confound.
“Your skepticism is overdone because it fails to take that principle into account. It's not ad hoc reasoning; it's a general point that can manifest in any number of ways, depending on the experiment.”
We already agree that data can show more than what your methods attempted to show. Where we disagree is whether this principle can be applied to confounds. I guess I am still waiting for an argument that can convince me otherwise. I do not accept your criticism.
We apparently strongly disagree about what the various spellings of "baseball" can or cannot indicate. As to "common sense," whose common sense are we talking about here? I don't think an appeal to "common sense" is any real argument at all. In fact, I would call it a logical fallacy.
Hi Interverbal,
Yes, it sounds like we disagree on the importance of the confound in the Weiss study that you discussed. You suggest (if I read you correctly) that adequate controls weren't taken, so we can't possibly rely on the data no matter what they say. My view is that the data (linked again, for convenience, at the table at the bottom of this page) demonstrate that the confound didn't occur. I think it's very unlikely that a nonliterate child could learn from the consolidation phase to type the answers that appear in the test phase. Examples abound, e.g. "T~OWO BROITHERS" in the test phase doesn't occur at all in the consolidation phase.
You say you reject my pouring-coffee-in-the-solution example above as an analogy, but you don't explain why. The confound in my example is real. By your logic, if I don't explicitly control for it at the outset, my data mean nothing. It doesn't matter if I don't find any evidence of coffee in my mixture: I didn't control for it, so it might have happened anyway, according to your logic. That's where common sense comes in, in the form of your argument being vulnerable to reductio ad absurdum. To rebut that charge, you need to show why your reasoning regarding confounds -- that if not explicitly controlled for, they ruin all data, no matter what the data are -- doesn't also apply to my example of the coffee confound.
Regarding Occam's Razor, I mean it in the sense that "entities should not be multiplied beyond necessity". One frequently sees this principle ignored in debate on FC. In this case, you argue that Kenny's being nonliterate, and parroting words in the consolidation phase, is a better explanation of the data than genuine message-passing. To argue that, though, you have to introduce some mysterious factor X that allows the answers to change significantly between the two phases. (Another great example I've seen is the argument that a facilitator can create entire paragraphs with an occasional light touch on the shoulder.)
OK, to keep posts manageable I'll address other points in separate posts.
cheers,
jim
The problem, again, with drawing conclusions about false sexual abuse allegations is threefold:
First, we need adequate data. One study (Botash) is by definition not sufficient for meta-analysis.
Second, we need to distinguish between true, false and unsubstantiated allegations.
Third, we need a baseline: what is the incidence of true, false and unsubstantiated allegations made with FC vs without?
The data we have aren't sufficient to draw conclusions.
Botash looked at 13 claims made with FC out of a total of ca. 1000 made in the upstate New York area over a period of about three years.
If I read summaries of Botash correctly (do you have the original, IV?), three cases were considered "unfounded" and not investigated further by CPS, and another three had "indications" of abuse but were dismissed. That's 6 of 13 that could be considered unsubstantiated or false. Another four had outcomes that were unknown or undisclosed at the time the article was written, so they can't be put in any category. Finally, three were found to be true.
What is the baseline, i.e. those allegations not made with FC? This article cites the following:
The National Study of Child Abuse (Smith, 1985) estimated that 1.1 million child abuse and neglect reports have been filed with Child Protective Agencies each year and that more than 600,000 of these probably cannot be substantiated even using the broad definitions of child abuse and neglect often used by the protective service agencies.
That's a little over half unsubstantiated (or false), which (FWIW) is in the ballpark of the 6/13 in Botash.
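For what it's worth, the arithmetic behind this comparison is easy to check. Here is a minimal sketch using only the figures quoted in this thread; note that lumping the "unfounded" and "dismissed" cases into one unsubstantiated bucket is this commenter's reading of the summaries, not necessarily Botash's own categorization:

```python
# Figures as quoted in this thread (commenter's grouping, not Botash's).

# Botash: 13 FC-initiated allegations out of 1096 children evaluated
fc_allegations, total_evaluations = 13, 1096
fc_share = fc_allegations / total_evaluations          # ~1.2% of evaluations

# Of the 13 FC cases: 3 "unfounded" + 3 dismissed = 6 unsubstantiated/false
fc_unsubstantiated_rate = 6 / 13                       # ~46%

# National Study of Child Abuse baseline (Smith, 1985): ~600,000 of
# ~1.1 million reports per year probably cannot be substantiated
baseline_rate = 600_000 / 1_100_000                    # ~55%

print(f"FC share of all evaluations:  {fc_share:.1%}")
print(f"FC unsubstantiated rate:      {fc_unsubstantiated_rate:.1%}")
print(f"Baseline unsubstantiated:     {baseline_rate:.1%}")
```

With a sample of 13, the difference between ~46% and ~55% is well within noise, which is consistent with the point above: the data are too sparse to show FC producing an unusually high rate of false allegations, but also too sparse to rule it out.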
Maybe FC results in an unusually high rate of false allegations of abuse, or maybe it doesn't. We lack systematic data to be able to say for sure. What we do know is that both false reports, and unreported abuse, are terrible tragedies. And there are credible, documented instances of both.
IV wrote:
Also, if we are going to deal with the razor are you willing to apply it to the following case:
A young man is reported to be in the 7th grade and pulling an A-B average using a communication method whose authenticity is in question, but the student can only manage a C average on low-elementary reading comprehension tasks during the validation trial with inadequate controls. So, what would William of Occam tell us here?
I think he'd first check to see that the assumptions in the question are valid. I discussed above why depicting Kenny's results as a "C" average is misleading. Acing two trials and failing one can hardly be taken as an indicator of the validity of the child's academic work. (The failed trial was apparently due to anxiety, which one would imagine a competently written IEP would accommodate.)
So, my bet is that the good William of Occam would retain a healthy skepticism over judging overall academic potential and performance by the results of three short tests.
IV wrote:
Also, I agree with Anonymous that facilitator influence could be an issue here. I don’t think the facilitator guessed, I think it is possible the facilitator found out some how.
Why do you believe that? What aspect of the experimental protocol evidently allowed for facilitator unblinding?
As I said, facilitator influence is known to occur in FC, but if you blind the facilitator, you adequately control for it. The OD Heck study design had conditions where the facilitator was given true, false, and no information relative to what the FC user saw.
You are suggesting (for some reason that you haven't explained) that in this study, the facilitator was somehow unblinded. If that's true, then it doesn't matter whether the experimenters provide true/false/no information; the facilitator has already been corrupted by true information in all conditions.
But I don't see where that happens in this study.
I am very well aware that I can criticize Dr. Kluth for making an anti-scientific comment.
I wouldn't characterize her remarks as anti-scientific unless she came out and said that the scientific method is not to be believed in. It is not anti-scientific to say FC is valid for some people. That's a difference of opinion consistent with the scientific findings we have. It is not anti-scientific to tell laypeople that they can make a difference via testimonials. That is called political advocacy. While testimonials are not scientific, relying on them is not anti-scientific unless the thing being advocated is disproven. FC is scientifically non-proven, not known to be fakery. That's a key difference.
Jim, I am well aware that none of this qualifies as an argumentum ad hominem. You have used this term inappropriately here.
You're right. I take that back. Your anti-science criticism is off the mark, as I said above, but it was incorrect of me to characterize your remarks as ad hominem, and I hope you will accept my apology. What I mean to say, and this applies to arguments made elsewhere but not by you, is that it's ad hominem to say that "some FC advocates believe flaky things, therefore FC itself is flaky".
I am pleased that you are on the AutCom board. I hope that you can encourage a high standard in your presentations and that you advocate well controlled studies be conducted concerning FC. You will have to do better than Weiss et al.
Thanks. I still disagree that the Weiss study is significantly flawed, but I can see how reasonable people could think it is. Plus, since FC is controversial, studies should be as tight as possible. That's good to know. As Carl Sagan said, valid criticism does you a favor. I don't know how much of an impact this discussion will have on this stuff, but at least the reasoned discourse is helping.
KeithABA, thanks for the link above to the essay. I did read it. I'd balance it with this, especially the second paragraph:
Marcello Truzzi - On Pseudo-Skepticism
If FC works, the evidence base should not be thin... The underlying mechanisms that work should be identified by now.
That doesn't follow. There are lots of things being practiced today that are not well understood. Some are placebo or otherwise not valid and will be shown as such. Some are real and will be shown as such, and this perhaps well before the mechanisms are elucidated. In the case of FC, perhaps proponents have just done a poor job investigating it via controlled studies.
Above, I wrote, regarding unproven treatments:
Nor is it unethical to try them, especially in situations where anything that helps is a blessing.
You wrote:
How is this position different from parents who use Lupron and chelation? How does this make you different from parents of a generation ago who approved of exclusionary, humiliating, or painful teaching procedures? You may differ from them on an ethical level, but your supporting logic could be used to justify any of them.
My position differs because I qualify it by limiting it to low-risk, low-harm modalities. I did say that above, but maybe not strongly enough, so again: In intractable situations, it is ethical to use treatments of uncertain efficacy but known low risk and low harm.
FC qualifies as such, in my book, as long as appropriate practices are followed: in our case, we encourage independence (including independent pointing at words on a large field) and accurate message-passing. We also ensure that the facilitators are known, trustworthy people. Those measures minimize the risks of false disclosure or excessive dependence on physical support. I believe that in our case, the result -- namely, unlimited utterance, something Ben can't get by pointing at preassigned choices -- is worth it.
I recommend approaching every individual case both with due caution and open-mindedness.
For a couple years after Marcello Truzzi died, I kept getting emails from him. Since I knew Marcello was too careful to have sent me so many emails if he didn't mean to, I have to conclude that he was actually alive. I mean, what better proof is there? What confused me was why he thought I wanted to meet Russian women eager to marry American men.
For a couple years after Marcello Truzzi died, I kept getting emails from him. ... What confused me was why he thought I wanted to meet Russian women eager to marry American men.
So these were false sexual allegations?
Randi is spelled "Randi." Sexual allegations? That resembles humor. No. They were spam messages from a virus-compromised computer with Truzzi's email address in the Outlook/Entourage address book. It is likely that dozens of people got them.
As for being James Randy (or James Randi), the evidence for that attribution, consisting of an inference from the contents of the statement and not a real authentication, is insufficient and misleading. Sort of like the evidence for attributing the communications reported in the Weiss study to the subject.
Hi Jim,
I seem to have been quite busy the last several days. However, I will try to catch up now.
I think we will continue to disagree over the existence of a confound in the Weiss study. I also think that future research will have to deal with this confound in a better way than Weiss and company. I think until that time I will maintain my criticism.
I reject your coffee analogy, because it doesn’t address the problem I brought up. It deals with another issue entirely. Your example shows that a confound can be controlled for even when there was no intent to control for it. This is not a point we disagree on. We are in complete agreement here.
We disagree on whether the data here can confirm or deny that a given confound occurred. Let’s look at my original statement: “The data neither confirm nor deny the confound I mentioned. That’s because the methods were not designed to answer whether or not the confound occurred.”
You seem to argue (feel free to correct this) that if a skeptical chemist came along later and offered a criticism about an apparently uncontrolled confound, we could point her in the direction of the chemical analysis where coffee was apparently absent and that this in itself would be a rebuttal.
Honestly, if I were the chemist, I would still be skeptical. I would expect the initial researcher or his colleagues to repeat the experiment, this time deliberately adding coffee to one batch while, in the other, carefully controlling to make sure no nefarious lab partner was allowed access.
All that said, I don't agree that a chemical analysis is anything even close to looking at two samples of a misspelled word and concluding, based on one's personal version of common sense, that they are both authentic examples of a child's communication. If you think you can perform a reductio ad absurdum on my logic here, then please proceed.
You write “In this case, you argue that Kenny's being nonliterate, and parroting words in the consolidation phase, is a better explanation of the data than genuine message-passing. To argue that, though, you have to introduce some mysterious factor X that allows the answers to change significantly between the two phases.”
Not so much. I can simply argue that Kenny demonstrates some acquisition ability when responses are modeled. I see this regularly. I recall there is a rather vast amount of literature on the efficacy of modeling.
I agree that Botash et al. is not a final answer to the question of abuse. However, Botash et al. does nothing to alleviate such skepticism.
You write “I think he'd first check to see that the assumptions in the question are valid. I discussed above why depicting Kenny's results as a "C" average is misleading.”
I read your rebuttal to Anonymous very carefully. However, I did not agree with you. I think that it is highly unlikely that a student who performs at an A to B level on 7th grade curricula would earn only a C average on low-elementary listening tasks. Something is not concordant here; it doesn't match up. The special excuse is offered that Kenny was nervous. I can think of no valid reason to allow this to invalidate the trial.
You write “(The failed trial was apparently due to anxiety, for which one would imagine a competently-written IEP would accomodate.)”
That's not necessarily true if the test in question is standardized. In such cases, if you deviate from the protocol, the child's performance cannot be compared to the normed reference. It sometimes happens, but it is incorrect practice.
You ask in regards to my agreement that facilitator influence could be a factor: “Why do you believe that? What aspect of the experimental protocol evidently allowed for facilitator unblinding?”
The same principle that we would demand in situations involving talented psychics: when multiple people know "the answers," it becomes tricky to blind the participant. There were a great many people coming in and out in this experiment. It is possible that the information was passed, accidentally or on purpose, to the facilitator. Anonymous had a comment in this area with which I agree. One never... ever... assumes.
In regards to Dr. Kluth you write “I wouldn't characterize her remarks as anti-scientific unless she came out and said that the scientific method is not to be believed in.”
That makes no sense at all. One does not have to say that science is not to be trusted to make an anti-scientific remark. That is not the criterion for anti-science. For all I know, Dr. Kluth may be an outstanding supporter of science. However, when she advocates that FC should be propped up via testimonials rather than research, she makes an anti-scientific argument. That is all that is required.
You write “It is not anti-scientific to say FC is valid for some people. That's a difference of opinion consistent with the scientific findings we have. It is not anti-scientific to tell laypeople that they can make a difference via testimonials. That is called political advocacy.”
However, when we tell people to testify, and we specify that they should not focus on the existing research, then our advocacy also becomes anti-science.
Thank you for correcting your statement about the ad hominem. I value our discourse even if we disagree.
Regarding my question asking how your statement differs from certain parents' justification of Lupron et al., you write: "My position differs because I qualify it by limiting it to low-risk, low-harm modalities. I did say that above, but maybe not strongly enough, so again: In intractable situations, it is ethical to use treatments of uncertain efficacy but known low risk and low harm."
But… many of these parents would be the first to say that chelation has limited risk, maybe even infinitesimal risk. Bet ya you can go to any of the big autism biomed discussion boards and get that answer.
They are dead wrong in my opinion, but not in theirs. I don’t think there would be any difference at all, in the essential logic that they use and you use. Just different opinions as to what constitutes “low risk”.
Let's all try to stick to our own names, or pseudonyms as the case might be.
I could not follow who wrote what, but somebody wrote before:
"If FC works, the evidence base should not be thin... The underlying mechanisms that work should be identified by now."
Marcello Truzzi responded:
"That doesn't follow. There are lots of things being practiced today that are not well understood. Some are placebo or otherwise not valid and will be shown as such. Some are real and will be shown as such, and this perhaps well before the mechanisms are elucidated. In the case of FC, perhaps proponents have just done a poor job investigating it via controlled studies."
Starting about 15 years ago, some persons involved in FC (including myself in May 1994) did begin to identify the underlying mechanisms that work in FC and brought our information to the attention of the leading proponents of FC. These leading proponents of FC (such as Douglas Biklen and Rosemary Crossley) chose to suppress our information and to not attempt investigating it via controlled studies. Since our information was just the beginning, I will not bother to mention it in this forum. I am not qualified to investigate it via controlled studies so I will have to wait until those who are decide to do so.
BTW, about 12 years ago, I was the facilitator one time with Kenny, the subject in the study by Dr. Michael Weiss. Kenny and I had a very interesting discussion about theological matters that no one else could have had.
Hi Interverbal,
Yes, I’ve been busy too, and tend to alternate between phases of posting a lot and then disappearing. Good discussion, agree; will add a little more here while I can.
You are right when you say we agree on Statement A:
"...a confound can be controlled for even when there was no intent to control for it."
Then you mention your earlier comments, which I’m also going to label for convenience:
(Statement B:) “The data neither confirm nor deny the confound I mentioned.
(Statement C:) That’s because the methods were not designed to answer whether or not the confound occurred."
IOW, you were saying that statement B follows from statement C. But now I’m confused, because it seems to me that statement A contradicts that inference. We both agree that statement A is true. The purpose of my chemistry example was to show that A is true and that B does not follow from C. Anyway, if we agree on A, that's the main thing.
You go on to say “Honestly, if I was the chemist, I would still be skeptical." But my point is that skepticism is justified only under certain conditions. If the solution obtained at the end of the experiment was a brown liquid, then sure, it would make sense to suspect coffee as a possible contaminant. But if the solution were clear, and/or analysis of its components showed only things that aren’t found in coffee, we don’t have to worry about whether or not coffee was added.
Similarly, I think the results in Weiss et al. adequately address the "contamination confound". And I think you and I have done a good job of clarifying where we disagree. You believe that it's possible that a nonliterate Kenny was parroting in the test phase what he was cued to type in the consolidation phase. I doubt anything more I say will convince you, although I will note that you haven’t adequately explained how nonliterate parroting could lead to results like these, from the story featuring brothers Jim and Tom and their mother:
Consolidation:
"Who was In the story?" TKM
"And who else?" JIJJMOLTGH3ER
Test:
"Who was in the story?" MOcTHER
"Who else?" T~OWO BROITHERS
"What are their names?" BOB JIM
You wrote:
I agree that Botash et al is not a final answer to question of abuse. However, Botash et al. does nothing to alleviate such skepticism.
Indeed. But neither Botash, nor the fact that (as you mentioned earlier) you had read some accounts of false abuse allegations in magazines, justifies your stance in your original post, where you said:
It seems that a surprising number of facilitators have accused parents, often the father, of abusing the client. Also unsurprisingly, most of these cases turn out to be garbage.
You offer no data to back up those opinions. That is inconsistent with the stated purpose of your blog, to take a “critical look at science in the autism world”. All that we can infer from the data is that both true and false allegations of sexual abuse have been made via FC. We cannot infer the frequency of false allegations made via FC relative to that of false allegations made via speech or other forms of communication. Given the data at hand, all you have done in your post is repeat an unfounded rumor. Will you now retract it?
You wrote:
I think that it is highly unlikely that a student who performs at an A to B level on 7th grade curricula would only take a C average on low Elementary listening tasks. Something is not concordant here. It doesn’t match up. The special excuse is offered that Kenny was nervous. I can think of no valid reason to allow this to invalidate the trial.
First, it bears repeating that the purpose of this study was to test for Kenny’s message-passing, not to measure his academic work. Second, three short tests are probably insufficient. Third, I am – how to put this -- not at all surprised at an autistic child manifesting inconsistency and a shutdown-like total failure to accomplish a communication-related task. Those are central features of the disorder. So what’s not concordant? Good question.
I wrote "(The failed trial was apparently due to anxiety, for which one would imagine a competently-written IEP would accommodate.)", and you responded: "That’s not necessarily true if the test in question is standardized."
Your statement is true, but also irrelevant. The controlled experiment utilized a quiz format that was not standardized. We don’t know what forms of testing were used with Kenny in the classroom. What we do know is that the trial was terminated due to anxiety, and that it is not unusual for anxiety to have a major impact on the performance of autistic kids.
With regard to facilitator unblinding, you write:
There was a great deal of people coming in and out in this experiment. It is possible that the information was passed accidentally or on purpose to the facilitator.
Sure it’s possible, just as it’s possible that any given study in the literature deviated from the protocol described. In this specific study, the authors tell us: "Prior to the initial story presentation, the test facilitator was escorted far out of the room to ensure that she was unable to see or hear any of the information presented." Would I bet the mortgage on that being absolutely true? No, but I wouldn’t do so for any other study either. I don’t see the specific flaw here. Your criticism is generic. IOW, what sort of study design would you recommend to address your objections? Have the authors say they made really, really sure the facilitator was blinded, or something?
You write, commenting on a summary of Paula Kluth’s talk:
However, when she advocates that FC should be propped up via testimonials rather than research, she uses an anti-scientific argument. That is all that is required to use an anti-scientific argument.
It’s only anti-scientific if she is speaking in a global sense. If she is speaking about what her audience can do, and her audience is composed of laypeople, and she is not saying that testimonials trump or substitute for research – and I was at the talk, and that is what I heard -- then her argument is not anti-science. A stronger evidence base is necessary, but not sufficient, for a procedure to be popularly accepted. I agree that testimonials are appropriate, if the testimonial is about something valid (e.g., my hearing about Jamie Burke FC-ing and going on to type independently and even speak). I also believe, evidently more strongly than many FC supporters, that better scientific evidence is very much needed.
Regarding my statement that "in intractable situations, it is ethical to use treatments of uncertain efficacy but known low risk and low harm," you wrote that advocates of chelation would
"say that chelation has limited risk, maybe even infinitesimal risk. Bet ya that you can go to any of the big autism biomed discussion boards and get that answer.
They are dead wrong in my opinion, but not in theirs. I don’t think there would be any difference at all, in the essential logic that they use and you use. Just different opinions as to what constitutes ‘low risk’."
There is evidence that chelation does have risks. Someone saying so doesn’t change that fact. I can say that I’m in favor of interrogating terrorism suspects, but that I oppose torture. The fact that someone else may attempt to define waterboarding as “not torture” doesn’t mean I agree with them. And it’s fallacious to say that my logic falls based on someone else’s wrong definition. Since you like fallacies, google for the fallacy of “low redefinition”.
The principle of doing no harm, but trying something if it might work, is a reasonable one. People use it all the time. Doctors do. Sure there is potential for a slippery slope, but the principle is defensible, even excellent, because one might discover something useful and make a great difference. It depends on the situation.
OK, gotta go.
cheers,
jim
Hi Jim,
You write “IOW, you were saying that statement B follows from statement C. But now I’m confused, because it seems to me that statement A contradicts that inference. We both agree that statement A is true. The purpose of my chemistry example was to show that A is true and that B does not follow from C. Anyway, if we agree on A, that's the main thing.”
I think that we don’t agree on (A) after all. We seem to be looking at the same bit of writing and interpreting it quite differently.
I am not arguing that a researcher can’t unwittingly manage to control for a confound. I am arguing that a researcher can’t be certain that she controlled for a confound in an after-the-fact analysis, if the control was unwitting. That is what this is really about. Look again at what I wrote in the last post.
“You go on to say “Honestly, if I was the chemist, I would still be skeptical." But my point is that skepticism is justified only under certain conditions. If the solution obtained at the end of the experiment was a brown liquid, then sure, it would make sense to suspect coffee as a possible contaminant. But if the solution were clear, and/or analysis of its components showed only things that aren’t found in coffee, we don’t have to worry about whether or not coffee was added.”
So, that’s it? Especially if both articles were published? I know very little about the chemistry field, but if an analogous situation happened in special ed or psychology, there would be some real questions raised. I am sticking with my original answer here.
“I doubt anything more I say will convince you”
You would have to find a new argument. Or you could get some data that specifically tackle the issue. Data that could deal with the concerns that have been raised here.
“although I will note that you haven’t adequately explained how nonliterate parroting could lead to results like these, from the story featuring brothers Jim and Tom and their mother”
I don’t think I have even been asked about that aspect yet (I could have missed it). I think that to explain the portions you give I could cite facilitator influence.
“You offer no data to back up those opinions.”
Actually, I made reference to Botash et al., albeit a bit indirectly, in my first article.
“That is inconsistent with the stated purpose of your blog, to take a “critical look at science in the autism world”. All that we can infer from the data is that both true and false allegations of sexual abuse have been made via FC. We cannot infer the frequency of false allegations made via FC relative to that of false allegations made via speech or other forms of communication. Given the data at hand, all you have done in your post is repeat an unfounded rumor. Will you now retract it?”
Simply put, “No”…..
I do find the number of cases we hear about to be surprising. It is usually the father that is blamed. And corroborating evidence was only found in 4 out of the 13 cases. Moreover, the burden here falls on the advocates of FC, not the skeptics. I strongly reject your criticism in this area, including that my statement was inconsistent with the stated purpose of this blog. I have a real problem with the potential for false assertions in FC and I refuse to be shy about stating such. I will change my tune if your organization or any organization does some valid research in this area that supports the points I argued against. Data trumps opinion.
“Third, I am – how to put this -- not at all surprised at an autistic child manifesting inconsistency and a shutdown-like total failure to accomplish a communication-related task.”
I can’t think of any other type of research for children with autism where such an excuse is allowed. I have read my fair share of autism educational research and even helped do some. I’ll be blunt, I don’t buy it.
“Your statement is true, but also irrelevant. The controlled experiment utilized a quiz format that was not standardized. We don’t know what forms of testing were used with Kenny in the classroom. What we do know is that the trial was terminated due to anxiety, and that it is not unusual for anxiety to have a major impact on the performance of autistic kids.”
Maybe, maybe not. Any number of achievement tests are standardized. Some classrooms use them; others do not.
Also, we don’t know that the trial was terminated due to anxiety. We know that it was terminated due to reported anxiety via FC, that happened to follow very poor performance.
“IOW, what sort of study design you would recommend to address your objections? Have the authors say they made really, really sure the facilitator was blinded, or something?”
I would have been happy with a simple test for facilitator influence, like what is used in a great deal of the other FC research.
“It’s only anti-scientific if she is speaking in a global sense.”
I guess that is where we very strongly disagree.
“There is evidence that chelation does have risks. Someone saying so doesn’t change that fact.”
The point is that even though the underlying axioms are different, the same logic is used. The logic is generic.
“And it’s fallacious to say that my logic falls based on someone else’s wrong definition.”
Well…. I don’t say your logic falls. I say that your logic can be used to justify a cornucopia of treatments, some of which are in my book, unethical or dangerous.
Thanks Jim,
Hi IV,
Regarding the chemistry example, it doesn't take special knowledge of chemistry to infer that a clear solution, which upon analysis shows no coffee, didn't have coffee added to it. In such a case, one doesn't have to go back and run the experiment again to be sure. The same principle does apply in other fields. In the Weiss study, it's not as clear-cut, but I agree with the authors that the pattern of responses suggests parroting is unlikely.
You wrote, regarding the improbability of some answers like "BROITHERS" being nonliterate parroting:
I don’t think I have even been asked about that aspect yet (I could have missed it).
In fact, I mentioned it twice before. Please use the "find in this page" function in your browser for "BROITHERS".
I think that to explain the portions you give I could cite facilitator influence.
If the facilitator was unblinded, yes, although you have yet to identify any specific reason to assume this could have happened. You seem to be arguing "well, you can never be sure".
With regard to sexual abuse allegations, I find it remarkable that you're digging in and attempting to support a completely unsupportable position. You're entitled to your own opinion, but in this case the data don't back you up at all, so I have every right to call you on your errors.
I already quoted an excerpt from Botash in my comment above that starts "Regarding the Botash study...". That study does not support the sweeping claims you go on to make.
Now let's look at your original post again:
It seems that a surprising number of facilitators have accused parents, often the father, of abusing the client.
"Surprising number". Tell us that number. Tell us what makes it "surprising" and why.
Also unsurprisingly, most of these cases turn out to be garbage.
Source please? It wouldn't be Botash, cf. my comments above where I showed that the 6/13 unproven and/or false allegations appeared to be in the same ballpark as national numbers.
In comments on your earlier post, you write:
I formed mine after reading Botash and some of the stories about the false allegations.
Reading anecdotal accounts isn't going to give you any idea of prevalence, since false allegations are naturally going to get more media play than true ones. So all you have quantitatively is Botash, which doesn't support your claim at all.
Just above, you say:
It is usually the father that is blamed.
Source? And how does that differ from other allegations of sexual abuse (true or false, FC or not)?
And corroborating evidence was only found in 4 out of the 13 cases.
"Corroborating" is a term of art. See the excerpts above. ("An additional five had supportive evidence", etc.)
Moreover, the burden here falls on the advocates of FC, not the skeptics.
Here you are confusing the positions of agnosticism and denial. It is the burden of FC advocates to show the technique is valid. It is not the burden of FC advocates to falsify the claims you make (e.g. that a "surprising number" of allegations are made via FC, or that the rate of false claims made via FC exceeds the rate of non-FC'd claims).
Here is sociologist Marcello Truzzi, explaining the distinction that you are missing:
In science, the burden of proof falls upon the claimant; and the more extraordinary a claim, the heavier is the burden of proof demanded. The true skeptic takes an agnostic position, one that says the claim is not proved rather than disproved. He asserts that the claimant has not borne the burden of proof and that science must continue to build its cognitive map of reality without incorporating the extraordinary claim as a new "fact." Since the true skeptic does not assert a claim, he has no burden to prove anything. He just goes on using the established theories of "conventional science" as usual. But if a critic asserts that there is evidence for disproof, that he has a negative hypothesis --saying, for instance, that a seeming psi result was actually due to an artifact--he is making a claim and therefore also has to bear a burden of proof.
You continue:
I strongly reject your criticism in this area, including that my statement was inconsistent with the stated purpose of this blog.
If you want to reject these criticisms "strongly", please try doing so with facts and logic.
I have a real problem with the potential for false assertions in FC and I refuse to be shy about stating such.
Then why not just say that, rather than making unfounded claims about evidence that you can't back up? You're only undermining your position with such an unreasonable stance.
Data trumps opinion.
Indeed....
Regarding the second trial in the Kenny study, I wrote:
“Third, I am – how to put this -- not at all surprised at an autistic child manifesting inconsistency and a shutdown-like total failure to accomplish a communication-related task.”
You replied:
I can’t think of any other type of research for children with autism where such an excuse is allowed. I have read my fair share of autism educational research and even helped do some. I’ll be blunt, I don’t buy it.
Excuse for what? You and Anonymous are attempting to spin the results of the study as invalidating the child's academic work, when in fact that study was designed solely to measure valid instances of FC. To borrow an analogy from Anne Donnellan, the idea was to see if there were any fish in the sea by trying to catch fish. One doesn't expect to catch fish on every try.
So I don't see any problem at all with aborting that trial having a negative impact on the study's results. It's ethically sound, also, to stop a trial if the subject expresses discomfort. I do take exception to the notion that these trials provide a meaningful measure of the child's academic ability.
With regard to facilitator influence, you write: I would have been happy with a simple test for facilitator influence, like what is used in a great deal of the other FC research.
In response, let me repeat my comments above that you left unanswered:
(quote)
As I said, facilitator influence is known to occur in FC, but if you blind the facilitator, you adequately control for it. The OD Heck study design had conditions where the facilitator was given true, false, and no information relative to what the FC user saw.
You are suggesting (for some reason that you haven't explained) that in this study, the facilitator was somehow unblinded. If that's true, then it doesn't matter whether the experimenters provide true/false/no information; the facilitator has already been corrupted by true information in all conditions.
(end quote)
In light of the above comments, tell us precisely how you would design such a control in this case?
Finally, with regard to my earlier statement that "in intractable situations, it is ethical to use treatments of uncertain efficacy but known low risk and low harm", you say that my logic could be used to justify chelation and similar risky treatments. You say this is so not because chelation is without risks, but because some people may say it is and thereby use the same logic.
This is a very simple issue, and I'm surprised you still don't get it. In a nutshell, redefining "safe" amounts to a logical fallacy:
LOW REDEFINITION: Fallacy in which the meaning of a word is stretched in an attempt to defend a questionable proposition (‘You’re still using your student discount card though you graduated five years ago’ - ‘Ah, but we’re all students, really’). Contrast with HIGH REDEFINITION.
regards,
Jim
Jim,
Re the coffee example. We seem to be rehashing the same arguments. Let’s either find some new arguments here or move on.
Re "BROITHERS". You are quite right. I was happy to finally provide an answer.
“If the facilitator was unblinded, yes, although you have yet to identify any specific reason to assume this could have happened. You seem to be arguing "well, you can never be sure".”
No, I am not arguing that you can never be sure. In fact I am arguing that “you can be sure”. But to be sure, I would require certain controls. I have already explained numerous times what these controls consist of. They are absent.
“With regard to sexual abuse allegations, I find it remarkable that you're digging in and attempting to support a completely unsupportable position. You're entitled to your own opinion, but in this case the data don't back you up at all, so I have every right to call you on your errors.”
As for your criticism here... I partially accept them. I will specify below.
1) I refuse to remove my opinion that the number is surprising. However, I agree to clarify that this is an opinion.
2) I have read Botash in its entirety. I recall that most of the cases specified the father. However, I am currently away from my resources. I can’t back that up. I agree to remove this claim until I can review the article. However, if I am correct, I will reinstate my comment.
3) I am looking for the strong corroborating evidence. If the cases don’t have it, then they don’t have it. I maintain my claim that most of the cases turn out to be garbage, as based on Botash et al. That’s just what the data show.
On the whole I am sorry that you believe me to be “digging in”. I think I am taking a skeptical, but fair view of the data. You argue that I am “digging in” concerning this issue. I think that you are trying to find a fault in me as opposed to my argument. Your criticisms are welcome, but I don’t necessarily think they have much merit in this area. If you continue to be surprised by what you feel is my “digging in”, I would recommend that you take your search for answers to some other forum.
“Here you are confusing the positions of agnosticism and denial. It is the burden of FC advocates to show the technique is valid.”
I accept your criticism, but only because of my sloppy wording. I maintain that FC advocates must prove that FC is a valid means of communication for a person making a legal claim. That burden does fall on FC advocates.
As to Marcello Truzzi, I have repeatedly been exposed to his writings, including the passage you reference. His writing is usually whipped out by advocates of a questionable theory when they are in the presence of a skeptic. He is less popular than Semmelweis and Galileo as a rebuttal to skeptics, but still quite popular. I am not saying that “only quacks cite Truzzi”; all I am saying is “Nothing here we ain’t seen before”.
What it boils down to is that many skeptics do not agree with Truzzi about what a skeptic “believes”.
For example:
“The true skeptic takes an agnostic position, one that says the claim is not proved rather than disproved.”
No true Scotsman fallacy. Many skeptics, if there is at least some evidence one way or the other, will lean in that direction, especially if other factors come into play.
Skepticism here is a ruthless, data-driven fairness. Not a ruthless agnosticism.
“If you want to reject these criticisms "strongly", please try doing so with facts and logic.”
Certainly, and if you wish to provide a critique, please do the same.
“Excuse for what? You and Anonymous are attempting to spin the results of the study as invalidating the child's academic work, when in fact that study was designed solely to measure valid instances of FC.”
No, I am stating that I don’t agree that the excuse of purported nervousness invalidates the participant’s very poor responding in the second trial.
“It's ethically sound, also, to stop a trial if the subject expresses discomfort.”
But it was expressed via FC. That is pretty intense circular logic.
“In light of the above comments, tell us precisely how you would design such a control in this case?”
You know I am pretty sure I did answer this before despite your claim. I would use a phase where the facilitator was given different information from the participant.
“This is a very simple issue, and I'm surprised you still don't get it. In a nutshell, redefining "safe" amounts to a logical fallacy”
No dice. The fallacy of low redefinition isn’t the same as stating the basal logic you used is generic in the field of autism. These aren’t even close to being the same thing.
I am not saying that your logic is wrong. I am saying that it could be (and has been!) used by many people justifying a cornucopia of treatments. There is no logical fallacy here.
Hi Interverbal -
Today is 12/20/07, and I'm amending my post from a couple days ago to clarify terms.
If you respond to just one thing in this post, please answer this:
What are the baseline numbers for unsubstantiated allegations of childhood sexual abuse? That is, those not made via FC?
The issue, of course, is how do the unsubstantiated FC allegations compare to that baseline.
Additionally: I'm pretty sure most allegations of sexual abuse, FC or not, involve the father. A majority turn out to be unsubstantiated, too, according to one source (Smith, 1985, quoted here).
So, I don't see what is so surprising about the Botash study.
I'll reply to the rest of your post later. In the meantime, when you have a moment, please share the baseline numbers you were referring to when you expressed surprise about the FC numbers.
regards,
Jim
Hi Jim,
I have been quite busy lately, but I should have more time now.
“What are the baseline numbers for unsubstantiated allegations of childhood sexual abuse? That is, those not made via FC?”
According to this site http://www.betterendings.org/justice/Stats.htm
30-40% of cases are substantiated. That is consistent with what Gilmer cited and Botash found.
However, there is some variability in special cases. The satanic abuse scare of the late 80s and early 90s turned up no validated cases.
However, I don’t see FC allegation rates and other general abuse allegation rates as looking at the same issue. In the general case, the child could be directly or indirectly asked. Not so, in many of the FC cases. I want to know the rate in other cases where the child cannot or will not answer.
I am interested in Jim Butler's take on this "proof" of FC posted on a Calling Dr. Gupta blog by "Kacky" in response to questions about the authenticity of communications attributed to Sharisa Kochmeister. Kacky says:
"You obviously have no experience with FC. So what if they need someone touching them? It's easily tested in daily life. My son wrote a note through me to his housemate, who wasn't feeling well. A week later, the housemate mentioned the note while typing with someone who knew nothing about it. Proof positive. We don't care what you think. You're expendable."
Aside from "Kacky" declaring another human being to be "expendable"--another proud moment in FC advocacy--there seems to be a bit of illogic here.
Is this really "proof positive" of FC, or proof of facilitator influence?
Just in case anybody speaks Spanish, I have on my website a decent article plus a video on FC.
La Comunicacion Facilitada
Regards,
Jorge
I know this is a very old discussion, but I had missed part of it and want to comment about the person doing well in school at one level and testing poorly at another.
I thought that this kind of thing was well-known about autistic people. I don't know any specific studies, but I know a number of autistic people who have gotten totally different scores on standardized tests than most people would expect under the circumstances.
Some (me included) have taken the same test twice, identical test, and done worse the second time. Significantly worse (I'm thinking of a particular short test the name of which I can't remember) in my case -- I was given a test twice within months of each other. The first time I got one of the highest scores they'd ever seen, the second time my score was very low.
I have a friend who excels in some areas of mathematics, such as calculus, but who would have been in remedial math forever if someone hadn't encouraged her talents in higher math, because she could not then and cannot now do any but the most basic arithmetic. So she has at times in her life performed at college level in mathematics while she at the time would never have passed a second-grade math test.
Uneven scores on IQ tests are known, and I know there are studies but I can't access them right now. A number of autistic people I know, me again included, have tested significantly higher as children with less knowledge, than as adults with more knowledge.
Donna Williams scored 67 on an IQ test (which usually involves not getting past "simpler" tasks on the test in at least some areas, since those tests tend to go from easier to harder within each category) after getting one or more university degrees.
What I am confused about is why it is that such discrepancies in people who communicate 'independently' are... if not accepted, at least somewhat known to exist, and not taken as evidence that we're not really communicating. But if FC users have the same odd discrepancies, it's seen as reflecting on the FC process somehow.
Hi Amanda,
Old thread for sure, but you are welcome to revive it.
You ask:
"What I am confused about is why it is that such discrepancies in people who communicate 'independently' are... if not accepted, at least somewhat known to exist, and not taken as evidence that we're not really communicating. But if FC users have the same odd discrepancies, it's seen as reflecting on the FC process somehow."
Because of the nature of the basic facts. The independent typists may have inconsistent test performance, but they can still type independently. They have a near incontrovertible proof.
The facilitation-dependent group lacks such a proof. And so it is unclear whether the poor performance is caused by inconsistent performance on tests, or fraud.
In the absence of such a proof and in the face of questions of authenticity, 3 possibilities exist.
1) The communication is entirely from the facilitator.
2) The communication is partially from the facilitator.
3) The communication is entirely authentic.
In the absence of a proof any of these 3 are possible.
So, I acknowledge your point about the potential for inconsistent or poor test performance to account for the research results, but until we have some type of controlled proof that shows otherwise, I see no reason to be anything but cautiously skeptical about the validity of FC in these cases.
Further, while I acknowledge that controlled trials for FC are an imperfect tool, I also think they are the best available tool..... and more importantly, a good enough tool for the job. Even with inconsistent performance I would suspect that there would be at least one winner in the FC controlled study pile. But it isn't there.
Now maybe the advocates of FC have just had a string of really rotten luck in terms of research. Or maybe it is not bad luck at all, but rather that the autistic persons who were tested hated, and couldn't perform under, the non-naturalistic conditions with strangers present. However, regardless of whether one of the above is true, I am totally sure that the ball (burden of proof) is in the FC advocates' court.