If Ivermectin really works, how come the studies are so unclear? A guide for laypeople to understand how the evidence emphatically supports Ivermectin without reservation.
It is far from self-evident just by looking at the studies on Ivermectin that it is the panacea claimed by its proponents, so how can they know so definitively that Ivermectin works?
Due to the already mind-numbing length of this article, I refrained from developing the individual points as fully as I could have. I am relying on the reader to take what’s written and develop and flesh out the individual arguments further in his or her own mind.
One of the questions I get most frequently is something along the lines of “You keep saying that we know without the slightest shred of doubt that Ivermectin works against covid, but the studies are far more divided and unclear (at minimum), so how can you be so sure?” The people who ask me this are almost always well-meaning and sincere individuals just trying to sort out the confusing cacophony of contradictory claims about Ivermectin, to whom the numerous studies purportedly showing that Ivermectin doesn’t work seem formidable and convincing.
I am writing this article to address this problem: to explain, in language and concepts accessible and intelligible to laypeople, what the academic and real-world evidence is, and why and how it can reasonably be extrapolated from the evidence base that Ivermectin works beyond a shadow of a doubt. I am writing this largely using language and examples specific to Ivermectin and medical treatments, but the concepts are equally applicable regardless of the subject. I apologize in advance for the length of this article, but it is an unavoidable nuisance, necessary in order to explain the subject properly, so I hope you’ll forgive me.
Explaining the Different Types of Evidence:
The first step is to explain the nature of the different types of evidence - namely, what is being observed, and what the significance of the observation is relative to what you’re trying to prove. I will list each category and the different types within it, along with the defining characteristic that makes each a unique type of evidence, in the sense that we see its evidentiary ‘power’ or ‘convincingness’ as distinct from other types of evidence. This is organized starting from the most granular / least ‘formal’ (you’ll see what I mean by that) to the least granular / most formal. It should go without saying that no evidence type is foolproof, and no expert is too expert-y to make a mistake, especially when all sorts of powerful emotions and biases are involved.
Type 1: Experiential Observation
Experiential Observation refers to the direct experience - or observation - of real-world reality, the “facts on the ground” as it were. There are 3 distinct types of “experiential evidence”:
1. Expert Clinical Experience:
This refers to the experience of individual doctors or clinicians who use Ivermectin to treat patients, and see the results.
Defining characteristic: The unique expertise of a doctor to discern that an intervention is the reason for the change in the patient. Doctors have a clinical intuition born from years of experience and from their medical knowledge, with both conscious and subconscious components. A doctor who has been treating patients for years develops a conscious knowledge of - and a subconscious sense or intuition for - the accompanying characteristics of various types of interventions working (or not working). They integrate numerous details and changes in the patient, details whose association with the success or failure of various treatments or interventions they are intimately familiar with. Thus, a seasoned doctor or clinician has an ability to “tell” if a treatment they’re using is the reason for the patient’s improvement or recovery.
2. Mass Usage on a Massive Population
This refers to the observation of the correlation between mass-distributing Ivermectin and the macro-level results in the covid metrics.
Defining characteristic: The sheer scale of the observation, especially to people who have specific experience in analyzing and understanding society-level phenomena. The size of the “experiment” is what makes this type of evidence convincing - a few people, or even a town or a small city, could be fluky due to some unknown factor or factors that can influence covid outcomes. The more people involved, though, the more unlikely it is that some other significant factor is causing the results. An important subset of this is that experts in this specific field (in the sense that they possess genuine experience from years or decades of studying and/or working on the ground with mass-scale interventions, not that they have “credentials”) have an intuition similar to a doctor’s clinical intuition about an individual patient, but applied to macro-level outcomes; they have a much more pronounced ability to tell whether the intervention is what’s really driving the result, in the same manner a doctor can perceive the effect (or lack thereof) of a treatment he’s administering to a patient. (Obviously, this is less powerful/reliable than the “doctor intuition”, and, more importantly, far more easily corrupted by personal biases or conflicts of interest that can lead the expert to fool or convince himself into seeing what he wants to see. I am certainly not advocating for blind trust in sociological “experts”.)
3. Personal Experience:
Defining characteristic: Laypeople’s comprehension of their own medical experience or observation - while people generally can tell when something makes them get better quickly, laypeople are also prone to mistaking correlation or coincidence for causation and internalizing an erroneous understanding of their own experience or observation. This type of evidence is generally useful only in the aggregate: the collection of individual people who have taken Ivermectin, or were involved with someone’s usage of Ivermectin, and saw or experienced an improvement in the course of their covid disease (or, in theory, failed to see or experience one, were that a widespread phenomenon). Regarding Ivermectin, this becomes a category of genuine evidence, as opposed to “internet rumors”, because the sheer number of people self-reporting positive results from Ivermectin usage well exceeds the degree of prevalence that can reasonably be attributed to various manners of societal ‘mischief’ in the form of rumors, conspiracy-spreading, confusion, etc.
(These 3 categories as applied to all subjects would be: individual-case expert experience, macro-level observation/macro-level expert experience, and individual self-reported observation)
Type 2: Analytical Observation
Analytical observation refers to the data of observed reality - think of the kind of information that detailed statistics are typically built from. In other words, it is the layering of data points - the “details” - on top of the observed reality - what happened - in order to better define and understand what we see happening. This then determines how much credence we give to different instances of the thing being studied, in this case Ivermectin usage and covid outcomes. This is the essence of a “study”: analyzed observations.
There are 2 distinct types of Analytical Observation. Before listing them, I think that it is important to stipulate in advance that the quality of both types can range from convincing to complete garbage and depends upon a host of other characteristics. These don’t fundamentally define a unique type of evidence though, as I hope will be conveyed clearly by how I describe the 2 types of analytical observation:
1. Human Controlled Observation
Defining characteristic: The unique pros and cons relating to evidentiary value inherent to something subject to human control and decision making. In more practical terms, this is when people design a study, so they are choosing the characteristics of the study’s subjects and environment.
The pros are that people can design a study to remove other things that can “muddy the waters” of what we’re trying to figure out. (Easy hypothetical example to illustrate the point: if you want to see how well Drug “A” works for Disease “X”, you only include people not taking Drug “B” which also might work on Disease “X”, because if you include patients on Drug “B” in the study, you can’t tell if the people are being cured of Disease “X” from Drug “A” or Drug “B”.)
The cons are that a study can be designed in a way that itself compromises the experiment, which is something that cannot be stressed strongly enough. (And scientists often think they are way smarter than they really are.)
2. Uncontrolled Observation
Defining characteristic: The unique pros and cons relating to evidentiary value inherent in either randomness or the environment of the study. In practical terms, there are many things that can cause something without being noticeable or intuitive to people looking at what happened - in other words, bias an observation - that can be caused by “random” or background factors.
The pros are that human biases and corruption are not meddling (as much) with the “natural” experiment or observation.
The cons are that it’s harder to be confident in what we think we see, because we can’t rule out potential unknown factors as easily, because “we don’t know what it is we don’t know”.
The reason that these are distinct types of evidence is that their unique pros and cons are critical to how we analyze, conceptualize, and rate the quality of the evidence. How we assess these types of evidence obviously also varies depending on the specific situation or evidentiary point; neither one is “always” superior to the other - it depends on the specific example. And this difference is important to understanding the fight over the Ivermectin evidence base.
The Upshot:
There is a big distinction between the first category of evidence and the second: whereas you see the first category directly, without any distortions, you see the second only through the lens of analytical adjustments - adjustments which can be tampered with, or otherwise fall victim to human error, folly, or malice and deceit. This difference cannot be stressed enough.
How Would We Rank the Types of Evidence?
The next step is to establish a hierarchy, or ranking system, of the evidence types to be sort of a default “rule of thumb” that is the starting point for analyzing an evidence base.
What makes evidence better/more convincing or worse/less convincing?
This is the critical lynchpin for the entire analysis - we need a clear definition of what is the essence of proving something, what makes something look convincing to us, because by definition, weighing conflicting evidence is trying to determine which facts are more convincing.
The value or power of evidence is:
How improbable is it for the observations/facts we’re making the inference from (the “evidence”) to exist, if the conclusion we are inferring from the evidence (what the evidence proves) is not actually true.
In other words, what makes something look convincing is that “there’s no way this could be if “X” wasn’t true”.
So the question that we must ask in analyzing an evidence base is: how likely is it that any piece of evidence could be there even if what we think it indicates isn’t true in reality? You can think of this question like this: “How likely is it that a study is showing a result that isn’t true?”
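This “how likely would the evidence be if the conclusion were false” question is, at bottom, Bayesian reasoning. As a purely numerical illustration (all numbers below are hypothetical and have nothing to do with any actual study), here is how much a single observation shifts confidence depending on how improbable it would be under a false conclusion:

```python
# Hypothetical illustration of "evidentiary power" as a likelihood ratio:
# evidence is strong when the observation would be very improbable
# if the conclusion were false. All numbers below are made up.

def posterior(prior, p_obs_if_true, p_obs_if_false):
    """Bayes' rule: updated probability that the hypothesis is true
    after seeing the observation."""
    numerator = prior * p_obs_if_true
    denominator = numerator + (1 - prior) * p_obs_if_false
    return numerator / denominator

# Weak evidence: the observation is almost as likely either way.
weak = posterior(prior=0.5, p_obs_if_true=0.6, p_obs_if_false=0.5)

# Strong evidence: the observation is very improbable if the
# hypothesis is false ("there's no way this could be if X weren't true").
strong = posterior(prior=0.5, p_obs_if_true=0.9, p_obs_if_false=0.01)

print(round(weak, 3))    # 0.545 - barely moves the needle
print(round(strong, 3))  # 0.989 - a large update
```

The same observation counts for much more when it would be very unlikely under a false conclusion - which is exactly the “there’s no way this could be if X wasn’t true” intuition.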
Evidence Type Rankings:
This is where things are going to get a bit complicated. Despite the categories of evidence types laid out above, in real life they do not come so cleanly packaged. This is because there’s automatically a gap between people who are trying to read through the evidence and the evidence itself. How do you know a doctor is truthful about his personal experience? How do you know if a study was run with integrity or corruptly?
Here is where the confusion about the Ivermectin evidence base starts from. The entire controversy really can be boiled down to one simple argument: How do you rank the various pieces of evidence against each other?
I will first rank the basic types of evidence, and then explore why the anti-Ivermectin people rank evidence according to a system that is quite literally upside-down. I am going to rely on a bit of common sense to adjudicate this, but the ranking should be manifestly self-evident to anyone who’s generally intellectually honest (and rational). Anyway, here are the rankings:
1. Clinical Experience
Let me add one caveat - we’re talking about the widespread clinical experience of thousands of doctors worldwide, not just one or two or a meager handful, where it would be unclear whether you can believe the doctors or the reports about the doctors. This is far and away the most compelling evidence, and there’s nothing close. There is no way in a million years that literally every doctor who uses Ivermectin to treat covid - and by “use”, I mean with reasonable dosing, and without giving his patients the impression that Ivermectin is somehow stupid or risky in a way that would lead them not to take it, etc. - would have such success if Ivermectin did not work. Look at the results that the Ivermectin doctors report vs the societal rates of covid outcomes.
2. Mass Distribution of Ivermectin
The next most convincing evidence is the set of successful Ivermectin mass-distribution campaigns by multiple countries, or jurisdictions within countries, that produced a tightly correlated and profoundly impactful near-elimination of covid. It is hard to make a rational case that a country can experience a sudden, unexpected, rapid, and massive change in its covid trajectory that magically happened to coincide with the mass distribution of Ivermectin if Ivermectin wasn’t the driving force.
3. The Studies
The studies are the weakest form of evidence, because they are susceptible to all sorts of problems. The two biggest issues that compromise studies are poor design/methodology and corruption (regardless of what type, and if they’re pro- or anti- Ivermectin).
The confusion about Ivermectin is entirely the fault of overreliance on studies, and of the mischaracterization of the evidentiary value, or power, of the studies.
So we will now go for a deep dive into the various studies on Ivermectin (don’t worry, I will keep the technical jargon and concepts to a bare minimum).
Analysis of the Ivermectin Studies
By way of introduction, there are two very necessary points to make clear from the outset:
The evidence from the thousands of doctors worldwide successfully using Ivermectin to treat covid is by itself irrefutable proof that Ivermectin works, and we could really go home just with that. The mass distribution campaigns are a (massive) cherry on top that hammers down that conclusion.
The Ivermectin studies properly interpreted overwhelmingly show that Ivermectin works.
What we are trying to figure out is why the medical establishment is misinterpreting (that’s being nice) the Ivermectin studies.
The 3 Types of Ivermectin Studies:
There are 3 distinct categories that the Ivermectin studies fall into:
1. Non-RCT studies, usually small with few subjects
These are the bulk of the Ivermectin studies, and these are the ones that overwhelmingly tend to find positive results for Ivermectin.
2. RCT’s
RCT’s, or Randomized Controlled Trials, are considered the “gold standard” of studies that conduct a live experiment (as opposed to the next category). An RCT is a study that “randomly” sorts the subjects into the study’s two groups - the group getting the treatment, and the group getting a placebo (a “fake treatment” that is supposed to be indistinguishable both to the subjects taking it and to the doctors administering it) - but does so in a way that balances the groups so that they are as similar as possible. For example, if the Ivermectin group had healthier, less sick patients on average, then you couldn’t conclude from a comparison of the two groups that Ivermectin helped, because it could just be that the patients getting Ivermectin got better because they were healthier and less sick to begin with.
There are a bunch of Ivermectin RCT’s of varying sizes, and with widely divergent results, with some finding benefit, some finding harm, and some finding Ivermectin did not make a difference.
3. Meta-Analyses
Meta-analyses (“metas”) are a type of study that analyzes the existing studies on a specific topic or issue. There are a few meta-studies on Ivermectin, with some finding that it works and some finding that there is no evidence that it works.
Cochrane Analysis
There is a special type of meta-study called a Cochrane Review, in which the scientists conducting the study are (supposed to be) even more constrained, further reducing the degree to which human decisions can factor into the review. A Cochrane Review is essentially an analysis of evidence that grades the evidence according to a very comprehensive collection of standards and tests dictating the quality and quantity of evidentiary value assigned to a data point, collection of data points, or body of evidence, based on characteristics such as the source’s size, sampling, biases, protocols, and so on. It also provides a series of statistical methodologies by which one can assign a value to data or data sets, and combine data from different sources. Ultimately, though, the application of Cochrane standards relies upon the research capacity and integrity of whoever is performing the review, which can easily be biased or influenced by decidedly non-evidentiary considerations. (That’s quite a mouthful, but you don’t really need to remember the specifics to understand this article.)
There is an institution called the Cochrane Library, which is basically exactly what it sounds like, a collection of the Cochrane Reviews performed for all manner of topics, including Ivermectin.
For our purposes, a Cochrane Review is the same as a meta study.
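To make the randomization idea concrete, here is a minimal, hypothetical sketch of what random assignment into two arms looks like, and why it tends to balance the groups. The patients, the group size, and the toy “severity” score are all made up for illustration - this is not a real trial design:

```python
# A minimal sketch of the random assignment an RCT performs.
# Patients are shuffled and split into treatment and placebo arms;
# with enough subjects, randomization tends to balance traits like
# baseline severity across the two groups. (Hypothetical data.)
import random

random.seed(42)  # fixed seed so the example is reproducible

patients = [{"id": i, "severity": random.randint(1, 10)} for i in range(200)]
random.shuffle(patients)

treatment = patients[:100]   # would receive the drug
placebo = patients[100:]     # would receive an indistinguishable placebo

def mean_severity(group):
    return sum(p["severity"] for p in group) / len(group)

# The two arms' average baseline severity should come out close,
# so an outcome difference can be attributed to the treatment.
print(round(mean_severity(treatment), 2))
print(round(mean_severity(placebo), 2))
```

The point of the balancing step described above is visible here: the two arms end up with similar average baseline severity, so a later difference in outcomes is harder to attribute to the groups simply starting out differently.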
What do the Various Ivermectin Studies Tell Us?
A study is basically a fifth-grade science report for professional scientists - it’s the documentation of an experiment or observation (though hopefully held to much higher standards than fifth-grade reports). Just because a study claims something does not make it true, or even mean that the study authors have correctly interpreted their own data.
By way of introduction: a few decades ago, the medical community invented what is referred to as “Evidence-Based Medicine” (EBM). Sounds good, right? Who could be against using evidence to resolve medical questions or disputes?
Well, EBM in practice turned out not to be very evidence based. The basic gist of EBM is that it codified a basic hierarchy of evidence that placed clinical experience at the bottom and RCT’s (eventually displaced by meta studies) at the top. This is quite backwards, and incredibly radical. Clinical experience of bedside doctors who are treating patients is far better quality information and proof, because doctors can see if something works, and also have the skillset and knowledge to pair a treatment with other interventions to improve its beneficial effects (or reduce harmful effects). On the other hand, RCT’s, while in theory the most thorough type of observation (because the randomization and equalizing the groups would - in theory - remove biases better than any other type of study or observation), are not intrinsically better quality evidence “just because”. RCT’s are subject to the same human biases and corruption that all studies are subject to, and without proper safeguards to “keep them honest”, they are as corruptible as any other study, and in fact are uniquely vulnerable to corrupting biases because of how they are typically funded. (A full review of the limitations of RCT’s is unfortunately beyond the scope of this article.)
The pertinent question regarding the Ivermectin studies is simply: which set of studies are more likely to have been biased or corrupted by either design flaws/limitations or actual malicious corruption? Let’s take a look:
The non-RCT Ivermectin studies are essentially stories of a doctor, or a small team of doctors/healthcare workers/clinicians, reporting their personal experience with two groups of patients, one of which they treated with Ivermectin; or reporting their personal observations of the differences in covid outcomes between patients who had been treated with Ivermectin and those who hadn’t been. Their biggest shortcoming is that they are typically small, so luck may have played a role, and they also aren’t able to rule out things besides Ivermectin that could have contributed to the covid outcomes. In other words, it is plausible that, on an individual basis, any one of these studies could have produced a very favorable result for Ivermectin even if Ivermectin didn’t work. However, the numerous studies, by numerous doctors/scientists, in numerous countries/jurisdictions, taken together make for quite powerful and convincing evidence. It is beyond unlikely that dozens of small studies, conducted separately and with widely ranging characteristics, would almost uniformly show that Ivermectin works - and works brilliantly - if Ivermectin did not actually work.
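The “taken together” reasoning above is, at bottom, a multiplication of probabilities. Here is a hedged sketch of the arithmetic - assuming, unrealistically, that the studies are fully independent, and using a made-up 20% chance that any single small study shows a false positive (both assumptions are mine, for illustration only):

```python
# Hypothetical illustration of the "many small studies" argument:
# if each independent study had, say, a 20% chance of showing a
# benefit purely by luck when there is no real effect, the chance
# that every one of a few dozen studies leans positive shrinks fast.
# The 20% figure and the study counts are made up for illustration.

def prob_all_false_positives(p_single, n_studies):
    """Chance that every one of n independent studies is a false
    positive, assuming each has probability p_single of being one."""
    return p_single ** n_studies

print(round(prob_all_false_positives(0.2, 5), 6))  # 0.00032
print(prob_all_false_positives(0.2, 30))           # vanishingly small
```

Real studies share funding sources, methods, populations, and publication incentives, so they are never fully independent; this sketch only shows why independence, to whatever degree it holds, makes a uniform run of false positives rapidly less likely.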
The RCT studies are a mess, which I will avoid getting into too deeply in order to keep the technical jargon out. Basically, RCT’s are very easily corrupted and/or “rigged”. Multiple RCT’s don’t add anything if they repeat the same “mistakes” that the previous RCT’s did. And there is ample basis to suspect that deep conflicts of interest and other non-scientific influences are affecting all the anti-Ivermectin RCT’s, which are prohibitively expensive to run, require very generous benefactors, and always come with all sorts of strings attached. The methodology/design of the anti-Ivermectin RCT’s is uniformly of very poor quality for assessing whether Ivermectin, at proper dosing and administered correctly, was an effective treatment for covid. Then there is the issue of “statistical significance”, another technical requirement that is easy to use as a way of hiding that Ivermectin works, but that is similarly way beyond the scope of this article. Therefore, it is very reasonable and likely for RCT’s to show that Ivermectin doesn’t work even if Ivermectin actually does work.
The various meta-analysis studies of the Ivermectin studies are the results of human decision making about which studies to include, how much weight to give to the studies individually, etc. They are no better than the RCT’s. Therefore, it is also very reasonable and likely for meta-studies to show that Ivermectin doesn’t work even if Ivermectin actually does work.
This is not even close to being a sufficient explanation of the nature of the Ivermectin studies. My point in bringing this up is to stress that the studies generally considered to be the “gold standard” are also the most easily corrupted, compared to a group of studies that are individually smaller and less fundamentally sound, but that collectively can overcome each other’s flaws and weaknesses.
The basic upshot of this is that the most reliable studies are the collection of individually weak, underpowered studies that together are powerful evidence because they can negate each other’s flaws and weaknesses - in simple terms, the probability of the vast, vast majority of studies (not just the RCT’s) showing Ivermectin works if Ivermectin doesn’t actually work is so infinitesimal that it’s not a credible possibility. However, the massive, technically “superior” RCT’s that show Ivermectin doesn’t work are easily and quite plausibly malleable enough to be “coaxed” into producing the ‘preferred’ outcome. The meta-studies against Ivermectin simply disregard all the smaller studies, and don’t bother to consider them in concert with each other, which is an indefensible omission, and by itself an incurable disqualification of any study that excludes most of the Ivermectin studies.
The Great Ivermectin Literature Debate
This is pretty simple to lay out with all the above as introduction:
The medical establishment/scientific community - the anti-Ivermectin side - holds the view that evidence is graded based on whether it “dots the I’s and crosses the T’s” - that is, conforms - to the myriad and comprehensive minutiae of EBM standards. By this system, clinical/observational experience is disregarded, and small studies are considered basically worthless altogether (and are therefore excluded from the meta-analyses of the Ivermectin literature). One of the more prominent attacks on the pro-Ivermectin studies, and one of the more prominent sources of confusion, is that the smaller pro- studies, and even some of the larger, controlled pro- studies, don’t adhere to the formal, technical rules for a variety of reasons. That lack of adherence is instinctively and automatically understood and internalized by the anti-Ivermectin side as a fatal flaw that is essentially incurable. (Fleshing out the nitty-gritty details of this requires its own article, and lots and lots of technical jargon.) Throw in the two pro-Ivermectin studies against which there are serious allegations of deliberate fraud, and that’s how you get a verdict of “Ivermectin doesn’t work”.
The pro-Ivermectin doctors know Ivermectin works because they use it, and know numerous colleagues who use it, and can see the profound effect on countries that mass distributed Ivermectin. And they often have contact with some of the doctors/scientists who conducted the various studies that showed Ivermectin works, so their understanding of those studies is informed by the clinical expertise of the doctors who conducted those studies and can add their personal expert clinical observation to validate their study’s results.
Are the Anti-Ivermectin Scientists/Doctors Against Ivermectin in Good Faith?
I will say this: the failure of the anti-Ivermectin professionals to consult with any of the numerous doctors around the US and around the world who have been successfully using Ivermectin for the past year is indefensible and egregious, so much so as to indicate a significant degree of deliberate malfeasance and rampant dishonesty. Their abject failure to be remotely honest about the issue has created a confusing morass of scientific opacity that effectively hides the indisputable efficacy of Ivermectin from even the well-intentioned public trying to look into the issue in good faith.
I hope that this has sufficiently articulated, in terms accessible to laypeople, how the different types of evidence relate to one another, and why the studies appear so much murkier than they really are.