It has become fashionable in the media recently to lament the apparent lack of faith people have in science today. “Anti-vaxxers,” in particular, are often singled out for censure as “anti-science.” Nowhere is this trend better exemplified than in a March 2015 National Geographic cover story written by Joel Achenbach: “Why Do Many Reasonable People Doubt Science?”
A diorama of a moon landing graces the magazine’s cover, and the article’s caption reads “We live in an age when all manner of scientific knowledge — from climate change to vaccinations — faces furious opposition. Some even have doubts about the moon landing,” erroneously implying that all scientific “doubt” springs from the same source and is of equal value or validity — or lack thereof — and that “doubting” the validity of science with regard to vaccines and genetically modified organisms is equivalent to “doubting” climate science, evolution, and moon landings.
As a geeky physics major who happens to be quite proud of my father’s contribution to the moon landings (he was part of the team responsible for the lunar module’s antenna and communication system), yet who has the temerity to question the science and wisdom behind the current vaccine schedule and the widespread dissemination of genetically modified organisms, I finally find myself irritated enough by this journalistic trend to rebut the popular conception of those who question vaccine science as “anti-science.”
Despite his title, Achenbach makes the case that (all) people who “doubt science” are not in fact “reasonable.” Rather, they are driven by emotion – what he calls “intuitions” or “naïve beliefs.” “We have trouble digesting randomness,” he says, “our brains crave pattern and meaning,” implying that we use our own experiences or “anecdotes” to see pattern and meaning where there actually is none – insisting on believing things that are counter to the evidence.
I find Achenbach’s piece fascinating – well-written and persuasive, yet built upon logical inconsistencies and false assumptions that, taken together, make a better case against his thesis than for it – at least with regard to vaccine science. He quotes geophysicist Marcia McNutt, editor of Science magazine, as saying, “Science is not a body of facts. Science is a method for deciding whether what we choose to believe has a basis in the laws of nature or not.” And he himself claims, “Scientific results are always provisional, susceptible to being overturned by some future experiment or observation. Scientists rarely proclaim an absolute truth or absolute certainty. Uncertainty is inevitable at the frontiers of knowledge.”
Having studied science pretty intensively at Williams College, including a major in physics and concentrations in astronomy and chemistry – there were semesters it felt like I lived in the science quad – I fully concur with these statements. Sitting through lecture after lecture laying out elaborate scientific theories that were once accepted and used to further scientific knowledge and then discarded when it became clear they did not account for all the available data, it would have been hard not to be aware that scientific results are provisional and subject to change when a more complete picture is developed. And indeed, information received by the Hubble Telescope and continuing work conducted by people like Stephen Hawking have changed the landscape in astronomy and physics rather dramatically since I graduated in 1983.
The episodic nature of scientific progress
In his seminal 1962 work, The Structure of Scientific Revolutions, philosopher Thomas Kuhn instigated a revolution of his own – in our understanding of how science and scientific understanding progress. Kuhn’s main idea was that science does not simply progress by the gradual accretion of knowledge but is instead more episodic in nature, characterized by periods of “normal science” — “puzzle-solving” that is guided by the prevailing scientific paradigm – punctuated by periods of intense “revolutionary science,” when the old paradigm gives way to a new paradigm that better explains the totality of observed phenomena.
The old paradigm is never given up lightly or easily in the face of new evidence. In fact, an established paradigm is generally not abandoned until overwhelming evidence accumulates that the paradigm cannot account for all the observed phenomena in scientific research and an alternative credible hypothesis has been developed. Wikipedia summarizes it well,
As a paradigm is stretched to its limits, anomalies — failures of the current paradigm to take into account observed phenomena — accumulate. Their significance is judged by the practitioners of the discipline. . . But no matter how great or numerous the anomalies that persist, Kuhn observes, the practicing scientists will not lose faith in the established paradigm until a credible alternative is available; to lose faith in the solvability of the problems would in effect mean ceasing to be a scientist.
When The Structure of Scientific Revolutions was first published, it garnered some controversy, according to Wikipedia, because of “Kuhn’s insistence that a paradigm shift was a mélange of sociology, enthusiasm and scientific promise, but not a logically determinate procedure.” Since 1962, though, Kuhn’s theory has become largely accepted, and his book has come to be considered “one of The Hundred Most Influential Books Since the Second World War,” according to the Times Literary Supplement, and is taught in college history of science courses all over the country.
It would seem likely that a journalist writing a high-profile article on science for National Geographic would not only be aware of Kuhn’s work, but would also understand it well. Achenbach seems to understand the evolution of science as inherently provisional and subject to change when new information comes in, but then undercuts that understanding with the claim, “The media would also have you believe that science is full of shocking discoveries made by lone geniuses. Not so. The (boring) truth is that it usually advances incrementally, through the steady accretion of data and insights gathered by many people over many years.”
This statement is patently false. First off, the mainstream media tends to downplay, if not completely ignore, any contributions of “lone geniuses” to science, as exemplified by the 2014 Time magazine cover story proclaiming “Eat Butter! Scientists labeled fat the enemy. Why they were wrong.” Suddenly, everyone was reporting that consumption of fat, in general, and saturated fat, in particular, is not the cause of high serum cholesterol levels and is not in fact bad for you. “Lone geniuses” (also known as “quacks” in the parlance of the old paradigm) understood and accepted these facts 25-30 years ago and have been operating under a completely different paradigm ever since, but it wasn’t until 2014 that a tipping point occurred in mainstream medical circles and the mainstream media finally took note.
Secondly, “the steady accretion of data and insights gathered by many people over many years,” what Kuhn calls “normal science,” cannot by its nature bring about the biggest advancements in science – the scientific revolutions. Also from Wikipedia,
In any community of scientists, Kuhn states, there are some individuals who are bolder than most. These scientists, judging that a crisis exists, embark on what Thomas Kuhn calls revolutionary science, exploring alternatives to long-held, obvious-seeming assumptions. Occasionally this generates a rival to the established framework of thought. The new candidate paradigm will appear to be accompanied by numerous anomalies, partly because it is still so new and incomplete. The majority of the scientific community will oppose any conceptual change (emphasis mine), and, Kuhn emphasizes, so they should. To fulfill its potential, a scientific community needs to contain both individuals who are bold and individuals who are conservative. There are many examples in the history of science in which confidence in the established frame of thought was eventually vindicated. It is almost impossible to predict whether the anomalies in a candidate for a new paradigm will eventually be resolved. Those scientists who possess an exceptional ability to recognize a theory’s potential will be the first whose preference is likely to shift in favour of the challenging paradigm (emphasis mine). There typically follows a period in which there are adherents of both paradigms. In time, if the challenging paradigm is solidified and unified, it will replace the old paradigm, and a paradigm shift will have occurred.
That paradigm shift will usher in a scientific revolution resulting in an explosion of new ideas and new directions for research. Achenbach recognizes this tension between the bolder and more conservative scientists to a degree:
Even for scientists, the scientific method is a hard discipline. Like the rest of us, they’re vulnerable to what they call confirmation bias — the tendency to look for and see only evidence that confirms what they already believe. But unlike the rest of us, they submit their ideas to formal peer review before publishing them.
Scientific consensus relies heavily on the flawed process of peer review
Achenbach acknowledges that scientists are human beings and, as such, are subject to the very same biases and stresses to which other human beings are subject, but implies that those biases are somehow held in check by the magical process of peer review. What Achenbach fails to mention, however, is the fact that the process of peer review is hardly a “scientific” discipline itself. In fact, peer review is so imperfect in practice that Richard Smith, former editor of the prestigious British Medical Journal, described it this way in his 2006 article, “Peer Review: A Flawed Process at the Heart of Science and Journals”:
My point is that peer review is impossible to define in operational terms (an operational definition is one whereby if 50 of us looked at the same process we could all agree most of the time whether or not it was peer review). Peer review is thus like poetry, love, or justice. But it is something to do with a grant application or a paper being scrutinized by a third party — who is neither the author nor the person making a judgement (sic) on whether a grant should be given or a paper published. But who is a peer? Somebody doing exactly the same kind of research (in which case he or she is probably a direct competitor)? Somebody in the same discipline? Somebody who is an expert on methodology? And what is review? Somebody saying ‘The paper looks all right to me‘, which is sadly what peer review sometimes seems to be. Or somebody pouring (sic) all over the paper, asking for raw data, repeating analyses, checking all the references, and making detailed suggestions for improvement? Such a review is vanishingly rare.
What is clear is that the forms of peer review are protean. Probably the systems of every journal and every grant giving body are different in at least some detail; and some systems are very different. There may even be some journals using the following classic system. The editor looks at the title of the paper and sends it to two friends whom the editor thinks know something about the subject. If both advise publication the editor sends it to the printers. If both advise against publication the editor rejects the paper. If the reviewers disagree the editor sends it to a third reviewer and does whatever he or she advises. This pastiche—which is not far from systems I have seen used—is little better than tossing a coin, because the level of agreement between reviewers on whether a paper should be published is little better than you’d expect by chance.
That is why Robbie Fox, the great 20th century editor of the Lancet, who was no admirer of peer review, wondered whether anybody would notice if he were to swap the piles marked ‘publish’ and ‘reject’. He also joked that the Lancet had a system of throwing a pile of papers down the stairs and publishing those that reached the bottom. When I was editor of the BMJ I was challenged by two of the cleverest researchers in Britain to publish an issue of the journal comprised only of papers that had failed peer review and see if anybody noticed. I wrote back ‘How do you know I haven’t already done it?’
In the introduction to their book Peerless Science: Peer Review and U.S. Science Policy, Daryl E. Chubin and Edward J. Hackett lament
Peer review is not a popular subject. Scientists, federal program managers, journal editors, academic administrators and even our social science colleagues, become uneasy when it is discussed. This occurs because the study of peer review challenges the current state of affairs. Most prefer not to question the way things are done – even if at times those ways appear illogical, unfair and detrimental to the collective life of science and the prospects of one’s own career. Instead, it is more comfortable to defer to tradition, place faith in collective wisdom and hope that all shall be well.
In short, exactly what Achenbach does.
Problems with scientific research run much deeper than peer review
Dr. Marcia Angell, former editor-in-chief of the also-prestigious New England Journal of Medicine, makes the case that the problems with scientific research, especially with respect to the pharmaceutical industry, go much deeper than peer review issues. In May 2000 she wrote an editorial in the NEJM that asked “Is Academic Medicine for Sale?” about the increasingly blurry lines between academic institutions (and their research) and the pharmaceutical companies that pay the bills. The editorial was prompted by a research article written by authors whose conflicts-of-interest disclosures were longer than the article itself. In 2005, Angell wrote The Truth About the Drug Companies: How They Deceive Us and What to Do About It, a book that Janet Maslin of The New York Times described as “a scorching indictment of drug companies and their research and business practices . . . tough, persuasive and troubling.”
What determines who will be among the bold scientists who usher in a paradigm shift and who will be the more conservative scientists opposing it? I submit that it is those very scientists who can extrapolate from their own experiences and observations, i.e. “anecdotes,” and synthesize them with their understanding of the scientific research to date who possess the “exceptional ability to recognize a theory’s potential.” In other words, those who can step back from the “puzzle-solving” of “normal science” far enough to see the bigger picture. Pediatric neurologist and Harvard researcher Dr. Martha Herbert describes this eloquently in her introduction to Robert F. Kennedy Jr.’s book, Thimerosal: Let the Science Speak:
What is an error? Put simply, it is a mismatch between our predictions and the outcomes. Put in systems terms, an “error” is an action that looks like a success when viewed through a narrow lens, but whose disruptive additional effects become apparent when we zoom out.
Why do predictions fail to anticipate major complications? Ironically the exquisite precision of our science may itself promote error generation. This is because precision is usually achieved by ignoring context and all the variation outside of our narrow focus, even though biological systems in particular are intrinsically variable and complex rather than uniform and simple. In fact our brains utilize this subtlety and context to make important distinctions, but our scientific methods mostly do not. The problems that come back to bite us then come from details we didn’t consider.
Once an error is entrenched it can be hard to change course. The initial investment in the error, plus fear of the likely expense (both in terms of time and money) of correcting the error, as well as the threat of damage to the reputations of those involved — these all serve as deterrents to shifting course. Patterns of avoidance then emerge that interfere with free and unbiased conduct of scientific investigations and public discourse. But if the error is not corrected, its negative consequences will continue to accumulate. When change eventually becomes unavoidable, it will be a bigger, more complicated, and expensive problem to correct – with further delay making things still worse.
Personally, I think a large part of the brewing paradigm shift in medical science (which I expect to predominate in the near future) comes from the very tension that Herbert describes between the view of bodies, biological systems, as machines that respond predictably and reliably to a particular force or intervention and the view of bodies as “intrinsically variable and complex.” Virtually every area of biological research has identified outliers to every kind of treatment or intervention that are not explainable in terms of the old paradigm, arguing for a more individualized approach to medicine that takes the whole person into account.
For instance, it is clear that most overweight people will lose weight on a high-protein/very low-carbohydrate diet such as the one Dr. Robert C. Atkins promoted or the Paleo Diet that is currently all the rage. What is not clear, however, is how an individual will feel on that diet, and how that individual feels will determine to a large degree the overall outcome of the diet strategy. Some will feel fantastic, while others will feel like the cat’s dinner after it has been vomited up on the carpet. Logically, one can see that it doesn’t make sense to make both types of people conform to one type of diet. “One size” does not fit all.
There are those who believe that all we need is more biological information about a particular system in order to reliably predict outcomes, but there is a good deal of evidence to show that this may never be the case as biological systems appear to be as susceptible to subtle energetic differences as they are to gross chemical and physical interventions. The old paradigm of body as predictable machine has no mechanism to account for the effectiveness of acupuncture on easing chronic pain or the difference that group prayer can make in the length of a hospital stay. The biological sciences may be giving way to their own version of quantum theory, just as Newtonian physics had to.
Intuition as a characteristic of scientists who perform “revolutionary science”
The ability to “utilize this subtlety and context to make important distinctions” that Herbert describes constitutes the difference between the scientific revolutionaries and those who will continue defending an error until long past the point that it has been well and truly proven to be an error. It is an ability that Albert Einstein possessed to a larger degree than most. Einstein felt that “The true sign of intelligence is not knowledge but imagination.” And that “All great achievements of science must start from intuitive knowledge. I believe in intuition and inspiration . . . . At times I feel certain I am right while not knowing the reason.” Interestingly, another well-known scientist whom many consider to have been “revolutionary” was known to place a great deal of emphasis on intuition. Jonas Salk, the creator of the first inactivated polio vaccine to be licensed, even wrote a book called Anatomy of Reality: Merging Intuition and Reason.
Gavin de Becker, private security expert and author of the 1999 best-selling book The Gift of Fear, upended the prevailing idea that the eruption of violent behavior is inherently unpredictable by explaining how we can and do predict it with the use of intuition. Like Einstein and Salk, far from denigrating intuition as an irrational response based on “naïve beliefs,” de Becker considers intuition a valid form of knowledge that does not involve the conscious mind. He teaches people to recognize, honor, and rely upon their intuition in order to keep themselves and their loved ones safe. In fact, if we could not do so and had to rely solely upon our conscious minds to protect us from danger, chances are very good human beings would no longer walk the earth.
Achenbach makes the argument that our intuition will lead us astray, encouraging men to get a prostate-specific antigen test, for instance, even though it’s no longer recommended because studies have shown that, on a population level, the PSA test doesn’t increase the overall number of positive outcomes. But there are people whose first indication of prostate cancer was a high PSA result, and those people’s lives might have been saved due to having that test. Who is to say that the person requesting the test will not be among them? In other words, intuition is not necessarily wrong just because it encourages you to do something that is statistically out of the norm or has yet to be “proven” by science.
With regard to proof that a hazardous waste dump is causing a high rate of cancer, Achenbach says that
To be confident there’s a causal connection between the [hazardous waste] dump and the [local cluster of] cancers, you need statistical analysis showing that there are many more cancers than would be expected randomly, evidence that the victims were exposed to chemicals from the dump, and evidence that the chemicals really can cause cancer.
That’s true, of course, but surely it’s not all that you would – or should – take into account when deciding whether or not to build your house next to the hazardous waste dump. And if you had to wait for the corporation doing the dumping to produce that statistical analysis, something that could presumably be expected to run counter to its own interests, it seems likely that the stronger the correlation between the dumping and the cancers, the longer you would be waiting for that analysis to appear.
Consider the case of a child growing up in a house with chain smokers in the early 1900s, listening to them hacking up phlegm after every cigarette and upon rising every morning. The smokers die youngish, at least one riddled with lung cancer that made every breath a torture. The child has an inkling that the cigarette smoking, the coughing, and the subsequent lung cancer are all related. What would be the best choice for that child to make – to assume that the lung cancer and the smoking were not related until science had proven 50 years later that smoking does indeed cause lung cancer, or to listen to that initial intuition and steer clear of cigarettes in the first place? Obviously, in retrospect, the latter option would have been the far better choice. As indeed avoidance of the hazardous waste dump may be as well in Achenbach’s example.
Some of you may know that I was on Larry Wilmore’s The Nightly Show in February of this year because it is well known that I do not vaccinate my children. That show also featured Dr. Holly Phillips, medical contributor on CBS News. What most of you won’t know is that Dr. Phillips also spent her undergraduate years at Williams College, my alma mater, but unlike me, she didn’t major in science; she majored in English literature. (Coincidentally, I was also on the CBS News show UpClose with Diana Williams that month with Dr. Richard Besser who was also at Williams while I was there. He was an economics major.)
I’m quite certain that Dr. Phillips learned the body of facts taught in medical school as well as anyone, but I think it’s very likely she’s deeply entrenched in the old paradigm of the body as predictable machine. I was taken aback and, frankly, horrified to hear her say, “I think it’s one of those things where there’s a mother’s intuition where you don’t necessarily want to put a needle in your [healthy] child, but I think this is one of those times when you have to let science trump intuition.”
Did she actually tell people to ignore their intuition – that ability to utilize subtlety and context to make distinctions extolled by Einstein, Salk and de Becker – in favor of someone else’s interpretation of “scientific consensus”? To quote those fabulously creative geniuses Phineas and Ferb, “Yes. Yes, she did.” I regret not finding an opportunity that night to point out how dangerous Dr. Phillips’s advice was.
“Doubt” of scientific consensus is due to adherence to the “tribe”
Achenbach’s thesis ultimately fails due to his reliance on a theory by Dan Kahan of Yale University to explain all science “doubt,”
Americans fall into two basic camps, Kahan says. Those with a more “egalitarian” and “communitarian” mind-set are generally suspicious of industry and apt to think it’s up to something dangerous that calls for government regulation; they’re likely to see the risks of climate change. In contrast, people with a “hierarchical” and “individualistic” mind-set respect leaders of industry and don’t like government interfering in their affairs; they’re apt to reject warnings about climate change, because they know what accepting them could lead to—some kind of tax or regulation to limit emissions.
In the U.S., climate change somehow has become a litmus test that identifies you as belonging to one or the other of these two antagonistic tribes. When we argue about it, Kahan says, we’re actually arguing about who we are, what our crowd is. We’re thinking, People like us believe this. People like that do not believe this. For a hierarchical individualist, Kahan says, it’s not irrational to reject established climate science: Accepting it wouldn’t change the world, but it might get him thrown out of his tribe.
This is crystallized by another quote from Marcia McNutt,
We’re all in high school. We’ve never left high school. People still have a need to fit in, and that need to fit in is so strong that local values and local opinions are always trumping science. And they will continue to trump science, especially when there is no clear downside to ignoring science.
The problem with this viewpoint is that it is inherently contradictory. On the one hand, it pretends that only science that fits in with the prevailing viewpoint is “correct” science or worthy of note, when it is apparent from Kuhn’s work on scientific revolution that that is not the case. When, then, is it “okay” to “ignore” science? Achenbach makes the case that it is okay to ignore any science that does not fit the “scientific consensus,” or the prevailing paradigm. For instance, he says that “vaccines really do save lives,” without ever mentioning the fact that, while that may be true, they maim and kill some people as well, and he says that “people who believe vaccines cause autism . . . are undermining ‘herd immunity’ to such diseases as whooping cough and measles” when science has made it clear that, at least for now, they are doing no such thing. In addition, he pretends that there is no other science than the infamous 1998 case series of 12 children written by Andrew Wakefield and twelve of his eminent colleagues that supports a link between vaccines and autism, when there are in fact a large number of studies that do so.
Corporate interests may slant scientific findings
Achenbach uses an interesting argument to encourage “ignoring” climate science that opposes the prevailing paradigm: “It’s very clear, however, that organizations funded in part by the fossil fuel industry have deliberately tried to undermine the public’s understanding of the scientific consensus by promoting a few skeptics.” It may surprise you to know that I tend to agree with Achenbach on this point. I don’t have an opinion on climate science because I haven’t read it. What I do have is a healthy distrust of “consensus” – given what I know about paradigm shifts – coupled with an even stronger distrust of science that is conducted by an industry that stands to gain from the outcome of that science, and an intuition that leads me to believe that we have been heaping abuse upon the planet and that recent bizarre weather patterns – tornadoes in Brooklyn? – are among the many signs that it will not be long before the Earth can no longer sustain that level of abuse.
But, illogically, Achenbach doesn’t show that same mistrust of science performed or financed by an industry that stands to gain when the industry itself controls the prevailing paradigm. The vast majority of vaccine science, for instance, is conducted by the vaccine manufacturers themselves or the Centers for Disease Control and Prevention, which is largely staffed by people with tremendous conflicts of interest. Vaccines are one of the fastest rising sectors in a hugely profitable industry. In fact, according to Marcia Angell, for over two decades the pharmaceutical industry has been far and away the most profitable in the United States. This year, total sales of vaccines, a number the World Health Organization says tripled from 2000 to 2013, are expected to reach $40 billion, and the WHO predicts that they will rise to $100 billion by 2025 (by the way, is anyone else a little creeped out by all the economic data on vaccine profitability in that WHO report?) – none of which could possibly be finding its way to the people staffing our government agencies or making decisions on which vaccines to “recommend,” could it?
Julie Gerberding, who left her job as the director of the CDC to take over the vaccine division at Merck soon after overseeing most of the research that supposedly “exonerates” vaccines in general, and Merck’s MMR in particular, of any role in rising autism rates, was just an anomaly, right? Unfortunately, no. No, she wasn’t. Robert F. Kennedy Jr.’s description of the CDC as a “cesspool of corruption” may be strongly stated, but it is largely borne out by recent studies. And the situation is eerily similar when it comes to safety studies on genetically modified organisms conducted by Monsanto and rubber-stamped by the FDA.
Scientists willing to break with the “tribe” are more likely to be truth tellers
Practically in the same breath that Achenbach tells us we should ignore any science that does not fit the prevailing paradigm, he makes the claim that those scientists who are most dedicated to truth – and therefore presumably the most trustworthy – are the ones who are willing to break with their “tribe” in order to accurately report what they have observed or discovered, despite the risks of censure, loss of prestige, or even loss of career. But what is a scientist’s “tribe” made up of but other scientists – the very ones so invested in the prevailing paradigm of “scientific consensus”? It sounds to me as if those truth-telling scientists might even be accurately described as “lone geniuses.” But didn’t Achenbach just imply that we should ignore those people willing to say that the Emperor is in fact naked, despite the inherent risk in doing so, in favor of the “tribe” of “scientific consensus”? Does he truly not see the inherent irony of this position?
There is no one who sacrificed his position in the “tribe” by speaking his truth more than Andrew Wakefield. Prior to publication of the infamous 1998 case study, he was a well-respected gastroenterologist with a prestigious position at the Royal Free Hospital in London – a deeply entrenched member of the “tribe” of physicians, in other words. As a result of standing by his work and that of his colleagues, he has since had his medical license revoked and almost never sees his name in print without the word “discredited” next to it, yet he still performs and supports work that undercuts the prevailing paradigm because, as he puts it, “this issue is far too important.” By Achenbach’s own argument, Andrew Wakefield is inherently more credible than all the scientists clinging to the “vaccines are (all) safe and effective” “consensus” position. Frankly, I’m inclined to agree.
Achenbach uses this fear of betrayal of the tribe to explain why people do not put their faith in the prevailing paradigm. And it may perhaps explain certain aspects of scientific doubt in some quarters, but it certainly does not explain why most of the people who question the safety or wisdom of vaccines or genetically modified organisms, and the science that purports to establish it, do so. Time and time again I hear about people losing friends, loved ones, and even jobs when they question the current vaccine schedule – and heaven forbid they should express active opposition to it! It can be a very lonely position to take indeed. So lonely, in fact, that many people express profound relief at finally finding like-minded people online. (If you peruse the numerous vaccine blog posts on this website, you will see many examples of this in the comments.) In effect, having given up their place in the tribe, they must seek out a new tribe, a tribe of truth tellers. Evangelical Christians and traditional Catholics in particular – the very people one might think of as most likely to be “hierarchical individualists” – may have the loneliest road of all, as many of their periodicals and organizations have come out strongly in support of the vaccine program.
Every doctor who publicly expresses perfectly rational questions about vaccine reactions in certain subpopulations is vilified by the press and by an increasingly vitriolic group of self-identified “science” bloggers and their followers, despite the fact that many of those doctors started out as vocal supporters of, and believers in, the basic premise of vaccines. In other words, any doctor who even dares to question our current vaccine schedule risks his or her membership in the “tribe.” And yet, surprisingly, quite a few have the courage to do so anyway. Among them is Dr. Bernadine Healy, former head of the National Institutes of Health – a de facto “tribal chief,” if ever there was one. In a 2008 interview with former CBS correspondent Sharyl Attkisson, Healy disclosed that “when she began researching autism and vaccines she found credible published, peer-reviewed scientific studies that support the idea of an association. That seemed to counter what many of her colleagues had been saying for years. She dug a little deeper and was surprised to find that the government has not embarked upon some of the most basic research that could help answer the question of a link.”
In order to get at truth, scientific or otherwise, one needs to be able to take a step back and see the whole picture, incorporating one’s own observations and experience with that of others, including the subtleties and the context. In addition, when it comes to science, one needs to unflinchingly and critically examine all the evidence presented and be willing to break with the prevailing paradigm if the evidence demands it.
In 1987 a “holistic” doctor put me on a diet to lower my cholesterol. Counter to the prevailing paradigm at the time, she put me on a hypoglycemia diet that was very low in carbohydrates but quite high in saturated fat, including cholesterol. In the prevailing paradigm, that would have been a recipe for disaster: if anything, it should have increased my serum cholesterol, since the proportion of saturated fat in my diet was certainly higher than it had been previously – which is exactly what I feared would happen. So what did happen? My cholesterol dropped from 280 to 140 in a month. Fluke? Could be . . . Only the doctor showed zero surprise at my result, which implied that, while I may have been shocked, she herself had seen many results like it before.
Since that time I have read study after study confirming the truth of that doctor’s understanding: serum cholesterol levels can be adequately controlled by diet – just not a low-cholesterol diet. Also since that time, I have bored the heck out of my older brothers, at least three of whom have had high cholesterol, with lectures about how the low-cholesterol diets their doctors had prescribed were useless and the statins were unnecessary and maybe even dangerous, given that cholesterol performs a protective anti-inflammatory function in the body. (If you bring down the cholesterol level without bringing down the underlying inflammation, you are setting someone up for disaster.) I briefly hoped they would take note when Dr. Barry Sears’s book Enter the Zone became a bestseller in 1995, but they had to find out the hard way. And now that the mainstream has finally caught up with what the “alternative health” folks have known for more than 25 years, it’s a little hard to resist an “I told you so.”
The same is true every time yet another study comes out that supports and confirms the alternative health (a.k.a. “new paradigm”) view of autism as a medical condition with its roots in gut dysbiosis and toxicity, exacerbated by impaired detox pathways, rather than a psychiatric condition.
Technology does not equal science
The biggest problem with Achenbach’s piece – and every other piece that laments the “rejection of science” – is that it conflates rejection of technology with rejection of science. As Alice Dreger, a professor of clinical medical humanities and bioethics at Northwestern University’s Feinberg School of Medicine, beautifully illustrates in an article in The Atlantic titled “The Most Scientific Birth Is Often the Least Technological Birth,” technology does not equal science. “In fact,” says Dreger, “if you look at scientific studies of birth, you find over and over again that many technological interventions increase risk to the mother and child rather than decreasing it.”
Dreger quotes Bernard Ewigman, the chair of family medicine at the University of Chicago and NorthShore University Health System and author of a major U.S. study of over 15,000 pregnancies, who says that our culture has “a real fascination with technology, and we also have a strong desire to deny death. And the technological aspects of medicine really market well to that kind of culture.” Dreger herself adds, “Whereas a low-interventionist approach to medical care – no matter how scientific – does not.” Indeed. What many scientists forget is that just because something “cool” can be done, doesn’t mean it should be done.
Which brings me to the “Precautionary Principle.” There is no single accepted wording of the Precautionary Principle, perhaps because it is something that is largely informed by intuition. According to the Science & Environmental Health Network, “All statements of the Precautionary Principle contain a version of this formula: When the health of humans and the environment is at stake, it may not be necessary to wait for scientific certainty to take protective action.” In other words, if there is any uncertainty about the risk of harm, it is better to err on the side of caution. That seems not only intuitively obvious to me, but logical as well. Science sometimes takes quite a long time to prove something is harmful – long enough that many drugs have done tremendous damage before they were withdrawn from the market: Thalidomide, Vioxx, DES, Darvon, and Dexatrim are just a few of the myriad examples.
What the Precautionary Principle isn’t is anti-science. In fact, it supports one of Achenbach’s goals: making efforts to avoid disastrous climate change. It would also support making damned sure that genetically modified organisms can’t do systemic damage to either people or the environment before licensing them for general use (with follow-up studies verifying that is indeed the case after licensing), and testing the vaccines we use against true placebos, and in the combinations we actually use them, before “recommending” them for every newborn in the country – as well as studying the health outcomes of vaccinated vs. unvaccinated populations after licensing. It would also support investigation into the commonalities of children with regressive autism whose parents claim that their children were harmed by their vaccines, in order to identify possible subpopulations that may be more susceptible to vaccine injury – like, oh say . . . children who exhibit genes that can cause impairment in detoxification pathways. Wait a second . . . What’s going on here? It sounds like I’m recommending science!
Science should serve humanity over corporations
When it comes down to it, science is a tool. And like any tool, it can be used ethically or unethically, morally or immorally, humanely or inhumanely, in pursuit of ends ranging from the sublime to the unquestionably evil. Is it anti-science to deplore the experiments conducted by Josef Mengele on concentration camp captives? Is it anti-science to condemn the ethics of the “Tuskegee Study of Untreated Syphilis in the Negro Male”? Was J. Robert Oppenheimer, known as the “father of the atomic bomb,” anti-science when he said, “ . . . the physicists felt a peculiarly intimate responsibility for suggesting, for supporting, and in the end, in large measure, for achieving the realization of atomic weapons . . . . the physicists have known sin, and this is a knowledge which they cannot lose”? Was Hans Albrecht Bethe, Director of the Theoretical Division at Los Alamos during the Manhattan Project, anti-science when, 50 years later, he called upon fellow scientists to refuse to make atomic weapons?
Science can serve corporate interests or it can serve the interests of humanity. There will certainly be places where the two will intersect, but there will always be places where they will be in opposition and science cannot serve them both. Is it “anti-science” to insist that, where the interests of the two are opposed, science must serve humanity over corporations? Certainly not. A far better description would be “pro-humanity.” We are not even close to being able to say that science is currently putting humanity’s interests first, however, and while it may be prudent for individual scientists to stick with the tribe in order to further their careers, it cannot be prudent for us as a human collective to let corporate interests govern what that tribe thinks and does.
Until the day we can say that we truly use science in service to humanity first, not only is it prudent for us to question, analyze, and even scrutinize “scientific consensus” from a humanist viewpoint, it is also incumbent upon us to do so.