Kate Chatfield


Kate divides her time between private practice, teaching, and research. In addition to teaching at the University of Central Lancashire, she is a director at the Galway College of Homeopathy. She sits on the Research Committee and the Research Ethics Committee of The Society of Homeopaths, of which she is a co-founder, and is a coordinator and board member for the European Network of Homeopathy Researchers. She has an active international practice, consulting on matters of research in homeopathy and on curriculum development for homeopathic education.


Neil:
Hi, Kate. First, let me thank you for agreeing to occupy our hot seat this month. As you know, this issue, and next month’s issue also, is devoted to scientific research in homeopathy, so your extensive involvement in research is obviously of special interest to us now.

But let me start, if I might, with a personal question: I noticed that your original training was in philosophy and biology, and only later did you become involved in homeopathy. So, a question that is common in homeopathic circles: can you tell us how it was that you first became involved with homeopathic medicine?

Just to explain this question a bit, let me say I don’t raise it merely out of idle curiosity, but because it reflects the fact that many of us in homeopathy were first introduced to it through personal experiences. So this whole question, in the context of “science,” “research,” and “objectivity,” is central to the controversy that has surrounded homeopathy for the last 200 years: the “personal testimonial” reflects the enormous power of ultra-dilute remedies, while mainstream science has produced predominantly negative evidence regarding the efficacy of homeopathic practice.

Kate: My main motivation for looking for an alternative emerged from dissatisfaction with allopathy at an early age. Incompetence on the part of our GP led to a long delay in detecting my mother’s breast cancer. The cancer was so extensive by the time it was diagnosed that her subsequent treatment was highly invasive and appeared to me to be totally barbaric. This happened during my teenage years when I was considering a career in medicine. I began at that age to explore alternatives. I know that I am not alone in this experience and many of us are involved with homeopathy because of dissatisfaction with allopathy.

Some research that I carried out several years ago with Society of Homeopaths members in the UK asked them why they had become homeopaths. Dissatisfaction with allopathy was cited as a reason by 18% of those surveyed. Experience of beneficial homeopathic treatment, whether their own or that of close friends or family, was the second most commonly cited factor, but the biggest motivator appeared to be a sense of calling or vocation. It looks as if many people become homeopaths because it ‘feels’ right to them.

Strangely enough, no one mentioned being attracted by the ‘scientific’ evidence! This in itself raises for me all kinds of questions about homeopaths and their apparent ability to override commonly held ‘rational’ perspectives. This same piece of research indicated that over 60% of the homeopaths had qualifications other than homeopathy at degree level or above. Obviously these are a well-educated group of people who are quite comfortable with challenging the prevailing scientific paradigm. It would be great to see someone doing some more research in that area.

Neil: Obviously, this raises the question of the competing value of the “personal testimonial” compared to the supposedly more “objective” character of evidence produced by the methods of mainstream science, which has produced predominantly negative testimony regarding the efficacy of homeopathic practices.

Kate: The introduction of the evidence-based medicine movement has brought with it a supposed hierarchy of evidence that places research evidence from randomized controlled trials (RCTs) and reviews of these trials at the top of the hierarchy. RCTs are often referred to as the ‘gold standard’ in research evidence. Personal experience or anecdotal evidence is placed right at the bottom of the hierarchy. The reason for this is that we cannot be certain what is having the curative effect in these cases. Maybe the person would have got better anyway; maybe there are other factors involved; maybe they improve as a result of the placebo effect. Personal experience is not considered to be scientific because scientific inquiry involves systematic collection and collation of data.

However, this is not the same as saying that personal experience has no value. It may not offer scientific evidence but we can still learn a lot from anecdotal evidence.

Much of our information in homeopathy is passed on through case studies and this appears to be a good way of testing the clinical effects of remedies. I believe we need to be more careful however about the claims based upon this kind of evidence. Homeopathy is a complex intervention that often includes the benefits of a deep therapeutic relationship as well as the effect of the remedy.

This brings me to another of your points regarding the “negative testimony” of the RCTs. But before I go into this, I want to correct a common misconception that you appear to share: that research has produced predominantly negative results regarding homeopathy. This is not the case. The jury is still out over whether the balance of research evidence favors efficacy of homeopathy or not.

Numerous systematic reviews and meta-analyses of homeopathy have been carried out. A systematic review is a means of examining results from more than one trial to look for trends. A meta-analysis takes this overall view further by performing statistical analysis on the combined results to look at the statistical significance of these trends. Some of these reviews claim that homeopathy does have an effect over and above placebo while others support the claim that it does not. Really it depends upon which trials are examined and how. There are a large number of trials that appear to demonstrate a positive effect over and above that of placebo, but there are also trials that have the opposite result.
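To make the "statistical analysis on the combined results" concrete, here is a minimal sketch of the kind of pooling a meta-analysis performs. This is a generic inverse-variance fixed-effect calculation, not the method of any particular homeopathy review, and the trial numbers are entirely made up for illustration:

```python
import math

def fixed_effect_meta(effects, std_errs):
    """Pool per-trial effect sizes by inverse-variance (fixed-effect) weighting.

    Each trial's weight is 1/SE^2, so precise trials count for more.
    Returns the pooled effect, its standard error, and a 95% confidence interval.
    """
    weights = [1.0 / se**2 for se in std_errs]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci

# Hypothetical effect sizes and standard errors from three imaginary trials
effects = [0.30, -0.05, 0.20]
std_errs = [0.15, 0.20, 0.10]
pooled, se, ci = fixed_effect_meta(effects, std_errs)
print(pooled, se, ci)
```

The point Kate makes, that conclusions depend on "which trials are examined and how," shows up directly here: changing which `effects` enter the pool, or how they are weighted, changes the pooled estimate and whether its confidence interval excludes zero.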

When researchers first started to systematically research homeopathy they adopted the allopathic model of research. As I have already pointed out, however, real world homeopathy is a complex intervention that is not only complicated by a therapeutic effect but also involves individualized prescribing and whole person treatment. For this reason many trials used isopathy or combination remedies and some have produced highly significant results in favor of homeopathy, such as a series of trials carried out by David Taylor Reilly in patients with hay fever, asthma and perennial rhinitis.

Trying to fit the whole-person, individualized approach to homeopathic treatment into the randomized, placebo-controlled model is fraught with difficulties and many are now claiming that it is simply not possible to test homeopathy in this way. Poor research methods in the past have generated spurious results that now feed into meta-analyses of homeopathy and influence their conclusions. Many researchers are now looking for more sensitive ways to research the effects of complex holistic interventions and mostly they do not include the randomized placebo-controlled trial.

Neil: Let me pick up on your point, whether research has produced predominantly negative evidence regarding homeopathy. On this you are, of course, correct. My wording was imprecise if it suggested that there was not a mix of results to be found in the literature. The paper you yourself helped put together, An Overview of Positive Homeopathy Research and Surveys, which, as you of course know, we plan to publish in the December issue of this ezine, is but one example that documents the increasing volume of evidence favoring homeopathy; and I have reviewed a book with the same thrust, in the present issue of the ezine, namely, the GUNA publication, Homeopathy: The Scientific Proofs of Efficacy.

Still, as a practical matter it remains true that the “common” perception is that research is overwhelmingly weighing in contra: witness, to take an especially egregious example, the ill-considered editorial last year in The Lancet, announcing “the end of homeopathy.”

My question for you, in this connection, is: what ongoing research is, in your opinion, representative of the best contemporary work in the field, offering the best and most reliable outcomes, and the best hope for documenting the positive effects of homeopathy that are not reflected in much of contemporary “scientific research?”

Kate: Neil, this is a difficult question for me in many ways. My background in philosophy compels me to challenge concepts of proof, objectivity, and facts, such that I don’t believe it is possible to use clinical research to prove the efficacy of homeopathy. It’s my belief that we can add to the body of evidence and increase probability, but we can’t actually prove anything through this kind of inquiry. The biggest challenge for us is undoubtedly the issue of using ultra-high dilutions (UHDs). It doesn’t seem to matter how many clinical trials with positive outcomes we present as evidence while there is still the prevailing belief that UHDs cannot possibly have an effect.

For this reason I am very pleased that there are people who are experimenting with using UHDs independently of homeopathy. The research that most impressed me of late involved international collaboration, the Irish subset being led by Madeleine Ennis, a pharmacologist at Queen’s University, Belfast. Ennis’ team looked at the effects of highly diluted solutions of histamine on basophils. The study, replicated in four different laboratories in Europe, found that the highly diluted solutions worked just like histamine. This kind of research will do far more for the credibility of homeopathy than any number of RCTs.

As an aside here I would like to mention that I have great concerns about the ethics of some research that is being carried out in animals with intent to demonstrate the effects of UHDs. One of the things that attracted me to homeopathy was the understanding that our medicines were tested on humans. It saddens me to see animals being used in this fashion.

Generally I would really like to see a shift in emphasis away from trying to prove that homeopathy works towards more research that seeks to improve practice. Observational studies like that carried out at Bristol Homeopathic Hospital can provide us with some very useful information. This study evaluated 23,000 consultations over a six-year period, demonstrating that, on average, more than 70% of those seen improved.

The research was highly criticized externally because this kind of study is way down in the hierarchy of evidence. Subjective self-assessment by the patients is not considered reliable, and the lack of controls in the study means that we can’t claim it was the homeopathic remedy that made the difference. Whilst admittedly this study cannot prove it was the homeopathic remedy that had an effect, it does show us that the homeopathic intervention is highly effective in the clinic. After all, over 70% of our patients improve, a figure that has been mirrored and bettered in similar studies, so we must be getting something right. We can use this type of study to find out what kinds of things people present with, what kinds of remedies they are prescribed, what things improve more easily than others, and link this to information about remedies, potency, and so on. So for us as homeopaths, observational studies can provide some very basic information that isn’t generally available.

In addition, I agree with you about the possible beneficial use of single case studies. The benefits of studying one person’s case in many respects versus the study of many people’s cases in few respects is another hot topic in research. It is argued by some that it can be of immense value to assess an intervention in a holistic and deep sense with one person at a time. This kind of research can include quantitative measurements as well as qualitative data to provide the patient’s perspective about their experience of the intervention.

It’s not possible for me to go into all types of research methods and their pros and cons in this interview. It’s also not possible for me to mention as many good examples of research as I would like to, but in summary, I prefer to see research that is fully reflective of practice in the real world that holds some potential for learning about and improving our practice.

Neil: Thanks, Kate. That provides a useful panorama, with informative specifics, of the range of work being done, or needed, in research into homeopathy. I think the point I would like to pick up on, is the question of clinical research (e.g., the case study) versus controlled, or systematic, or statistical research – whatever terminology you prefer.

As a clinician, I am obviously biased, but let me state my case this way: would you prefer to trust the findings of a closely reasoned, comprehensive case study, complete with pre-, intercurrent, and post-treatment labs … or would you trust, in preference to that, any one of the numerous, highly problematic controlled trials such as formed the basis for the latest conclusions drawn by Linde and company?

Kate: Well firstly let me try to avoid confusion through differences in terminology. All research is by definition systematic because that is what defines it as research as opposed to other forms of observation or study, whether this is a single case study, large-scale trial or basic research. Clinical research refers to study of a clinical intervention whether this is in a real world situation or in a tightly controlled environment. Quantitative research involves the use of measuring or counting as opposed to qualitative, which seeks to gain subjective views and opinions.

In answer to your question about case studies, these can be carried out in a systematic way that would constitute research or an unsystematic way such as we often see in homeopathic journals. Writing up a case in retrospect does not constitute systematic research and provides us with a different quality of information from one that is the subject of study from the outset.

It may appear that we can prove causality through a good case study, but is this truly the case? Take for example a case where a mostly linear intervention, such as an allopathic drug, appears to produce a set of unwanted symptoms in a patient. The most obvious explanation is that the drug is causing the side effects, and that may well be the case, but there is always an alternate explanation, however remote the possibility. The response may be a placebo response because the patient doesn’t want to take the drug, or has come to associate allopathic drugs with having side effects. There may be something else in the person’s life that coincides with taking the drug. And ludicrous as it may sound, how could you prove to me that the side effects are not the result of divine intervention?

Okay, so this is an extreme example that I use to make a point, but what I wish to highlight here is the fact that the homeopathic intervention is even more complicated than the above example because it is in no way linear; it is highly complex. We don’t prescribe remedies with intent to effect change in one specific area; we aim at the whole vitality. Changes may be subtle, on many different levels, and difficult to measure. Then there is the issue of how we disentangle the effects of the remedy from the effects of the therapeutic relationship. Another problem associated with uncontrolled studies is that of how we prove that it was the homeopathy that triggered a healing response and not some apparently irrelevant factor such as eating strawberries for lunch on Tuesday.

One of the main problems with using single case studies in research is that we can’t assume the conclusions can be generalized. What affects one person, in one particular state, in one particular moment, under particular (irreproducible) circumstances may not ever produce the same effect again. I’m not saying here that we cannot learn a great deal from case studies because I’m actually a great fan of this as a method for research and study, I just think we need to be careful about the claims we make about them.

Finally I would like to point out that I don’t think it’s a matter of which method, case studies or large-scale trials, I trust the most. Large-scale trials are fraught with their own problems and subject to many forms of bias. It’s more a matter of different tools for different tasks and each method provides us with a different sort of information. I maintain a healthy amount of mistrust of all conclusions that come out of clinical research because however seemingly objective it is there is always room for human error.

Neil: Your definitions really go to the core of some critical questions in contemporary medical research. Let me focus on one key issue raised by these definitions: you state “What affects one person, in one particular state, in one particular moment, under particular (irreproducible) circumstances may not ever produce the same effect again.”

Of course, this is the central problem with clinical practice in the first place, that, unlike a lab experiment, the same “experimental setup” can never be replicated – you can never see the same patient for the same “first interview,” and, once you have made an initial prescription, or intervention, you can never replicate his condition as it was prior to that first intervention.

But case studies are also constructed “systematically.” All good case studies will include the same elements, such as family history, personal development, present circumstances, health history, symptom picture, course of illness, lab results, etc. In a word, the fact that the Cochrane Collaboration has established its own hierarchy of standards for its own brand of systematic research does not mean that there are not other models for systematizing “data collection” – a.k.a. “case taking,” or for analyzing that data and formulating a statement regarding its reliability.

In brief, you are right that there is always room for bias and error, but that is true for “systematic” (quantitative) research as well as for “systematic” case taking. Neither produces anything like 100% confidence. My question, I guess, comes down to this, then: given conflicting outcomes, how do you resolve the difference, and, lacking a resolution to the disagreement, how do you choose whom to believe?

Kate: It’s a tough one, isn’t it? But such is the world of uncertainty that we live in. Science is not a static thing, and scientific theories are constantly being adapted or replaced to fit with the available evidence. There’s a theory in the philosophy of science known as underdetermination, which was introduced by Duhem and Quine. They pointed out that the same pool of data could be used to support many different, even contradictory theories. We see a clear example of this in meta-analyses of homeopathy, where different statisticians review and analyze the same available data source, RCTs, to come up with contradictory conclusions. Some meta-analyses support the conclusion that homeopathy has an effect over and above placebo and some conclude the opposite.

So which do we believe? I believe this is largely a matter of personal perspective. It’s easy for me to believe that homeopathy has an effect because I am not firmly rooted in a reductionist paradigm and I see people improve with homeopathic treatment every day. I can comfortably accept some things that appear contrary to scientific opinion because my study of the history and philosophy of science tells me that scientific opinion will inevitably change.

Equally I can appreciate the perspective of a person who is firmly rooted in the prevailing scientific paradigm and why they find it difficult to accept homeopathy. I don’t believe it is my task to convince these people of the efficacy of homeopathy because that will happen in its own time.

What motivates me in research is not the search for evidence or proof, rather that there is so much that is yet undiscovered in homeopathy. Most of the people who come to us have tried allopathy first. The fact that we can do anything at all for these patients is pretty amazing but if over 70% of them are improving then what we have to offer is worthy of serious investigation.

Neil: That so much “systematic” research has contributed to the current, common impression that homeopathy does not work, certainly raises the question how much faith we can reliably put in any system of research in and of itself. In short, doesn’t the current popularity of the controlled trial itself reflect the pattern you describe, of the ever-changing landscape of intellectual fashion in the evolution of science? And doesn’t that call into question how much faith we should put in “evidence based” and “systematic” research methodologies, at least at the expense of empiricism, or “systematic observation?”

Kate: Well, yes and yes would be my simple answer. I agree that the current popularity and importance placed upon RCTs is part of an intellectual ‘fashion’ and we should indeed be wary of how much importance is placed upon them. The whole of the evidence-based movement is premised upon the acceptance of a hierarchy of types of evidence, and I’ve never been a fan of hierarchies of any kind, but we have a lot of our own in homeopathy, don’t we? The evidence hierarchy is man-made, of course, not written in stone and not the ultimate truth. It serves a purpose, and at present that purpose is to drive out subjectivity in clinical decision-making and rely more upon so-called objective facts. Whether this will be achieved remains to be seen. I’m not convinced it is even desirable, since this trend appears to value highly the science of medicine whilst completely negating the art.

Neil: The last part of your last response surprised me, and takes me in a somewhat different direction. In a way, though you are not involved in laboratory experimentation, your interest in research, as you frame it now, is really closer to “basic research” than to “controlled research.” You are not trying to prove efficacy, as that has already been established in your mind, but to enhance understanding of the processes by which it achieves its successes.

So my next question is, on a more “personal” professional note: what are your special areas of interest in research into homeopathy? What are some of the more important gains that have been recorded by “researchers” in recent years? If you could have one wish, to “prove” one thing, or to make one discovery, what would it be?

Kate: Whilst trying not to invite too high a dose of sulphur, I have to admit that my special interest in research at present is to take a step back, view from a wide perspective and try to understand the underlying philosophy behind what and where we are going with it. We haven’t been well resourced in homeopathy research either in terms of finance or expertise and some lessons have been learned the hard way. Through jumping on to the allopathic bandwagon in clinical research we have learned that homeopathy does not adapt easily to fit this model. This is perhaps one of the most significant things we have learned.

For over 30 years now the emphasis in clinical research has been placed upon proof of efficacy and as far as I can tell we are no closer to it than we were 30 years ago. In terms of what we have learned about our practice and how we can improve it, I can’t think of anything at all.

Much of the research in allopathy is not focused upon effectiveness over and above placebo, rather it is about improving practice: comparing one drug with another; comparing one technique with another. This is the direction I would like to see us take in homeopathy research. There are many reasons why we have not taken this route in the past. Political pressures to offer evidence of efficacy are tangible and we do not have the luxury of the immense resources that are poured into allopathic research.

Still, as an objective I believe it is worthy. It was to this end that I first established the European Network of Researchers in Homeopathy with Petter Viksveen, chair of ECCH. There are so few researchers in homeopathy in Europe that I believe it essential we all exchange ideas and information about what we are trying to achieve.

We have a lot of sacred cows in homeopathy and not many people have the time, the expertise or the inclination to challenge them. Through the MSc Homeopathy by e-learning that we are about to start at UCLan we will be bringing together experienced homeopaths, with an interest in research, from all over the world to work on such topics. I think this is essential for the growth and future of our practice.

On a more personal note I am about to embark on a new direction in my research, looking at the use of homeopathy for people with mental health problems. In a recent nationwide evaluation of Society of Homeopaths’ members we found that this was the largest single concern that patients present with. It is also apparent that people who work in the conventional field of mental health care generally have no idea that homeopathy has anything to offer so it’s something I think we should be addressing.

Needless to say there will be no RCT involved in this research. It began in late September with a survey sent to Society members to ask for their opinions on what kinds of things they treat, how they personally rate efficacy, what concerns they have etc. Naturally this information will come entirely from the homeopaths’ perspective but that is intentional. I can think of no better way to find out what homeopathy is effective at treating than to ask the homeopaths themselves. After this initial survey has been analyzed I will decide which direction to take the research next.

Thanks for inviting me, it has been a pleasure.

Neil: Thank you.

—————————————————–

Kate Chatfield can be contacted at [email protected]

About the author

Neil D. Shere
Neil is a Board Certified clinical social worker, specializing in psychotherapy with children, individual therapy with adults, and marital counseling. Neil has worked as a therapist, supervisor, and administrator in the public schools, in family service and mental health agencies.

Presently, Neil works in his own practice, Neil D. Shere & Associates, in suburbs near Chicago. Neil also serves voluntarily with the LAN (Local Area Network), a local, state-sponsored inter-agency committee that awards grants to families of children experiencing emotional and behavioral problems and situational distress.
