PSYCHOLOGICAL IMPEDIMENTS TO COGENT THINKING

Discussion: Carefully read the attached essays by Donald Simanek, Pseudoscientists and Their Worlds; Harriet Hall, Playing by the Rules; and Elizabeth Sherman, Science and Anti-science in America: Why It Matters. As you read these essays, take careful notes. You are asked to write a personal response in your ‘main post’ and a ‘response post’ to another student’s ‘main post’.
Book Review
Donald Simanek
Volume 32.4, July / August 2008
Worlds of Their Own: A Brief History of Misguided Ideas: Creationism, Flat-Earthism, Energy Scams, and the Velikovsky Affair. By Robert Schadewald. Xlibris, 2008. ISBN 978-1-4363-0435-1. 272 pp. Paper, $19.99; hardcover, $29.99.
The word pseudoscience is a bit slippery. It suggests something “fake” or “fraudulent”—something that is not a science but pretends to be. We can

easily name some of the classic examples: astrology, phrenology, homeopathy, parapsychology, and creationism. People who promote such

pseudosciences have been called “paradoxers,” because they propose ideas that superficially seem plausible but on closer examination are internally

contradictory or counter to what is possible in the real world. The term has been applied to circle-squarers, perpetual motionists, and those who

believe the Earth is flat. Sometimes the term “fringe science” is used.
We must admit that in the history of science, some of the early “accepted” ideas would, if judged by the standards of today’s science, qualify as

pseudoscientific: astrology, alchemy, geocentric solar system models, the luminiferous ether. So how do we distinguish science from pseudoscience?
Bob Schadewald had a continuing interest in fringe science and pseudoscience. This posthumous collection of his published and unpublished materials

(skillfully edited by Schadewald’s sister Lois) is a highly readable account of several varieties of pseudoscience, including Flat Earth theories,

perpetual motion, creationism, and predictions of the end of the world. The unifying theme is “fringe thinkers” who create their own versions of

reality, contemptuous of the models of nature accepted by established mainstream science. Schadewald treats his subjects with respect and even

sympathy (he knew many of them personally), but he clearly reveals why their ideas are flawed and misguided.
Here you will find the stories of colorful characters such as Immanuel Velikovsky, who rewrote the book on solar system astronomy; Charles Johnson,

who was certain that Earth was as flat as a pancake; John Keely, who claimed he could tap etheric energy to power a freight train coast-to-coast on

a gallon of water; and assorted creationists, who freely engaged in “lying for God.”
One might suppose that these folks and their worldviews have little in common. Surely one who believes the Earth is flat and one who believes it is

hollow cannot think alike. But, as this book reveals, they have more in common with each other than they do with mainstream science. Looming large

in their thinking and their motivations was a literal belief in the King James Bible. Velikovsky used biblical sources freely. Flat earthers’

beliefs were bound up with fundamentalist religious beliefs. Creationists and flat earthers have common historical roots, and I don’t know of a

single perpetual motionist who was not also a religious fundamentalist. The flat earthers were united in their contempt for the idea of

gravitational force. To them, it was a sufficient explanation to observe that “things fall because they are heavy.” Even here we find a parallel to

Velikovsky, whose 1950 book Worlds in Collision and three subsequent books made much of electromagnetic interactions between planets and comets

while dismissing gravity as nonexistent or relatively unimportant.
Velikovsky supposed that a comet was ejected from Jupiter, went careening around the solar system brushing Earth and Mars, and finally settled down

to become the planet Venus. In several passes of Earth it managed to cause the walls of Jericho to tumble, interrupted Earth’s rotation (making the

Sun appear to stand still for Joshua), caused the plagues of Egypt, the parting of the Red Sea, and miscellaneous other seemingly miraculous events

of recorded history. Few who read these books realized that Velikovsky had published a little-known pamphlet Cosmos without Gravitation (1946) in

which he declared “The moon does not ‘fall,’ attracted to Earth from an assumed inertial motion along a straight line, nor is the phenomena of

objects falling in the terrestrial atmosphere comparable to the ‘falling effect’ in the movement of the moon, a conjecture which is the basic

element of the Newtonian theory of gravitation.” Velikovsky clearly rejected Newtonian gravity, replacing it with electromagnetic interactions.
Bob Schadewald recognized that some pseudosciences are relatively harmless, but he considered the creationists a serious threat to the integrity of

science because of their political campaign to inject their religiously motivated philosophy into public-school science courses. For this reason he

attended creationist conferences (calling them “great entertainment”) to see what they were up to and was on friendly terms with many of the

prominent creationist spokesmen. But at the same time, he helped found the National Center for Science Education and served on its board. This

organization is on the front lines in the battle to preserve the integrity of science in the schools against the efforts of creationists to

redefine science to include the supernatural.
This book can be enjoyed on several levels, for Schadewald writes with droll humor, and many of his characters have comic dimensions. Included are

his interviews with Immanuel Velikovsky and flat-earther Charles Johnson. Here is the story of naturalist Alfred Russel Wallace, who in 1870

unwisely accepted a wager with flat-earther John Hampden on the flatness of the water in the Old Bedford Canal. John Worrell Keely’s story was

fodder for late-nineteenth-century journalists, who delighted in reporting on his antics promoting machines that ran on etheric energy. Keely was a

clever showman who kept his Keely Motor Company going for twenty-six years without producing a single product or paying a dividend to his wealthy

investors. Nor did he reveal his secrets.
Concluding chapters on “The Philosophy of Pseudoscience” explore the common characteristics of these independent thinkers. This is an informative

and entertaining book of continuing relevance, for pseudoscientific ideas of this sort never die but are continually reborn in new clothing.
Donald Simanek
Donald Simanek is an emeritus professor of physics at Lock Haven University of Pennsylvania. His website includes science, pseudoscience, humor,

and satire.

Playing by the Rules
Feature
Harriet Hall
Volume 33.3, May / June 2009
It is useless for skeptics to argue with someone who doesn’t play by the rules of science and reason.

If no amount of evidence will change your opponent’s mind, you are wasting your breath.
I recently read Flock of Dodos: Behind Modern Creationism, Intelligent Design, and the Easter Bunny (Barrett Brown and Jon P. Alston, Cambridge

House Press, New York, 2007, no relation to the movie Flock of Dodos). It’s a hilarious, no-holds-barred send-up of the lies and poor reasoning

employed by the intelligent design movement. I was particularly struck by a quotation from William Dembski’s book Intelligent Design: “We are

dealing here with something more than a straightforward determination of scientific facts or confirmation of scientific theories. Rather we are

dealing with competing world-views and incompatible metaphysical systems.”
That doesn’t just apply to intelligent design. It cuts to the essence of what skeptics encounter on every front, from
dowsing to homeopathy, from ESP to therapeutic touch. We are trying to evaluate the science behind claims that are often not based on science but

on beliefs that are incompatible with science. The claimants are happy to use science when it supports them, but when it doesn’t they are likely to


unfairly critique the science or even to dismiss the entire scientific enterprise as a “materialistic worldview” or “closed-minded.” We are talking

at cross purposes. How can we communicate if we say “this variety of apple is red,” and they insist “it feels green to me”?
We get frustrated when we show these folks the scientific evidence and they refuse to accept it. Dowsing fails all tests, but dowsers “know” from

personal experience that it works for them. Homeopathy is not only implausible, but it has been tested and has failed the tests. Yet proponents

refuse to acknowledge those failures and still want to talk about data from the nineteenth century and make claims for the memory of water. We have

to realize we are not even speaking the same language. We are trying to play a civilized game of gin rummy, and they are dribbling a basketball all

over the card table. Before competing, doesn’t it make sense to define what game you’re playing and what the rules are?
Before arguing with a mathematician about the solution to a geometry problem, it’s essential to establish whether he is following the rules of

Euclidean geometry, where parallel lines never cross, or non-Euclidean geometry, where they sometimes do.
Science has been a very successful self-correcting group endeavor. It wouldn’t be successful if it didn’t follow a strict set of rules designed to

avoid errors. (Note: there are no rules written in stone; I’m talking about conventions that are generally understood and accepted by scientists,

conventions that grow naturally out of reason and critical thinking.) If proponents of intelligent design or alternative medicine want to play the

science game, they ought to play by the rules. If they won’t play by the rules, they effectively take themselves out of the scientific arena and

into the metaphysical arena. In that case, it is useless for us to talk to them about science.
If you want to play the science game, here’s what you do:
1. Submit your hypothesis to proper testing. Testimonials, intuitions, personal experience, and “other ways of knowing” don’t count.
2. See if you can falsify the hypothesis.
3. Try to rule out alternative explanations and confounding factors.
4. Report your findings in journal articles submitted to peer review.
5. Allow the scientific community to critique the published evidence and engage in dialogue and debate.
6. Withhold judgment until your results can be replicated elsewhere.
7. Respect the consensus of the majority of the scientific community as to whether your hypothesis is probably true or false (always allowing for

revision based on further evidence).
8. Be willing to follow the evidence and admit you are wrong if that’s what the evidence says.
If you want to play the science game, here are some of the things you don’t do:
1. Accuse the entire scientific community of being wrong (unless you have compelling evidence, in which case you should argue for it in the

scientific journals and at professional meetings, not in the media).
2. Design poor-quality experiments that are almost guaranteed to show your hypothesis is true whether it really is or not. Use science to show that

your treatment works, not to ask if it works.
3. Keep using arguments that have been thoroughly discredited. (The intelligent design folks are still claiming the eye could not have evolved

because it is irreducibly complex; homeopaths are still claiming homeopathy cured more patients than conventional medicine during nineteenth-

century epidemics).
4. Write books for the general public to promote your thesis—as if public opinion could influence science!
5. Form an activist organization to promote your beliefs.
6. Step outside the scientific paradigm and appeal to intuition and belief.
7. Mention the persecution of Galileo and compare yourself to him.
8. Invent a conspiracy theory (Big Pharma is suppressing the truth!).
9. Claim to be a lone genius who knows more than all scientists put together.
10. Offer a treatment to the public after only the most preliminary studies have been conducted.
11. Set up a Web site to sell products that are not backed by good evidence.
12. Refuse to admit when your hypothesis is proven wrong.
Changing Our Minds
Scientists will change their minds when the evidence warrants. Before we waste time arguing, one thing we can do is ask our opponents what it would

take to change their minds. One woman I asked said no amount of evidence could change her mind because she knew from personal experience that her

claim was true, so any evidence that said otherwise would have to be false and fabricated. End of discussion. She’s out of the game.
The rules of science are pretty clear about what it takes to change our minds. I’ll use the example of Helicobacter and ulcers. We used to think

that stress and too much stomach acid caused ulcers; now we think a bacterium causes ulcers. Here’s a summary of why we changed our minds:
1. Scientists noticed bacteria in biopsy samples from ulcers.
2. They identified the bacteria as Helicobacter pylori.
3. They found a strong correlation between ulcers and the presence of the bacteria.
4. One of the researchers, who was healthy and not a Helicobacter carrier, was able to induce an ulcer in himself by ingesting the bacteria.
5. They found that treating patients with antibiotics cured ulcers.
6. They found that antibiotics were superior to previous ulcer treatments.
7. The studies were replicated and conducted in different ways that corroborated each other.
8. The bacterial hypothesis was not inconsistent with the rest of scientific knowledge.
If we had the same quantity and quality of evidence for homeopathy, we’d gladly accept it. In fact, if the evidence met criteria 1 through 7, we’d

provisionally accept it while we kept checking the data and tried like crazy to figure out the mechanism behind homeopathy. (For more on this, see

“Bacteria, Ulcers, and Ostracism” in the November/December 2004 Skeptical Inquirer.)
There are two issues that are often misunderstood: scientific consensus and prior plausibility.
Prior Plausibility
Homeopathy is completely implausible. If robust evidence showed that it worked, we would have to accept it, but we would require much stronger evidence than we would for, say, a new antibiotic. If the claims for homeopathy were true, we would have to revise much of what we know about physics, chemistry, and physiology.
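
To make the role of prior plausibility concrete, here is a minimal sketch of a Bayesian update, with invented numbers that are not from Hall’s essay; it shows why the same positive trial result is far less persuasive for an implausible claim than for a plausible one.

    # Minimal illustration of prior plausibility (hypothetical numbers only).
    # A "positive trial" is assumed to favor the hypothesis with a
    # likelihood ratio of 10:1.

    def posterior(prior: float, likelihood_ratio: float) -> float:
        """Update a prior probability with a likelihood ratio via Bayes' rule."""
        prior_odds = prior / (1.0 - prior)
        post_odds = prior_odds * likelihood_ratio
        return post_odds / (1.0 + post_odds)

    LIKELIHOOD_RATIO = 10.0  # the same evidence in both cases

    # A new antibiotic: the mechanism is well understood, so a moderate prior.
    print(posterior(prior=0.5, likelihood_ratio=LIKELIHOOD_RATIO))    # ~0.91

    # Homeopathy: it conflicts with physics and chemistry, so a tiny prior.
    print(posterior(prior=0.001, likelihood_ratio=LIKELIHOOD_RATIO))  # ~0.01

The same evidence that would nearly settle the question for the antibiotic barely moves the implausible claim, which is why far stronger evidence is demanded of homeopathy.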
The crossword analogy is helpful. If you think the answer to 1-across should be “library” but the clue to 1-down is a five-letter word for the

author of Tom Sawyer and the clue to 2-down is a four-letter-word for the name of Eve’s husband in Genesis, you have to reject “library” and keep

looking for a word that starts with T-A. You have to recognize that no matter how strong your conviction that 1-across must be “library,” you must

be wrong and there must be another answer that you just haven’t considered.
Consensus
It’s easy to dismiss the scientific consensus as a popularity contest, a vote on opinions. But it’s far more than that. The body of evidence stands

or falls on its own merits, and when the weight clearly tips the balance to one side, everybody can see it. The scientific community is made up of

experts who know how to evaluate the evidence and who thrash out disagreements in medical journals and scientific conferences. It is easy for the

scientific community to reach an agreement based on clear evidence. There are times when the evidence is less clear and controversy among

scientists is appropriate, but there comes a time when it would be perverse not to accept the evidence, just as it is perverse to deny evolution or


germ theory. The scientific consensus on evolution and the germ theory is a recognition of reality, not a matter of opinion.
A reasonable default assumption is that the scientific consensus is usually right; if it isn’t, it will change as the evidence becomes clearer.

Truth will prevail. It does no good to attack the scientific consensus as prejudiced or closed-minded. The consensus will change only when it

incorporates new and better evidence. One of the irrational tactics we’ve seen over and over is for opponents to cite one or a handful of studies

to support their belief. They assume, ridiculously, that this is new information that those who reached the scientific consensus had failed to consider, or that it somehow outweighs all the other studies that found the opposite to be true.
Play by the Rules or Go Play Your Own Game
There’s no point in arguing scientific facts with someone whose worldview is metaphysical and nonscientific. There’s no point in presenting

geological age data to someone who “knows” the age of the Earth from the Bible. Before we get into a useless debate, maybe we should find out what

game our opponents are really playing. If they are playing ping pong, it’s silly for us to bring a football to the table. It would be handy if we

could get them to say up front what game they are really playing, but all too often they have deluded themselves into truly believing they are

following the rules of science.
If they won’t play the science game by the rules, we are justified in crying “foul” and disqualifying them. Then they can go away somewhere else

and play their own game by whatever rules they want, and we won’t be able to refute them. If they are relying on beliefs unsupported by evidence,

let them say so. Wouldn’t it be refreshing to hear a homeopath say, “I believe homeopathy works based on my personal experience and on

nonscientific evidence like testimonials, and I categorically reject the results of any scientific trial that fails to support my beliefs.

Homeopathy cured my neighbor’s uncle’s cousin of cancer. Trust me. I’m a nice guy so you should believe whatever I tell you.”
If they’d say that up front, we wouldn’t waste any of our valuable time rehashing scientific evidence that they will just ignore. They would be out

of the game, permanently. And patients would have a better basis for giving truly informed consent.
Harriet Hall
Harriet Hall is a retired physician who lives in Puyallup, Washington, and writes about alternative medicine and pseudoscience for many skeptical

magazines.

Science and Antiscience in America: Why It Matters
Feature
Elizabeth Sherman
Volume 33.2, March / April 2009
If science doesn’t inform the decisions we make, the consequence is that people suffer.
Every time I fly, I do something that ensures the plane won’t crash. Just as I am stepping aboard the aircraft, I touch the outside fuselage next

to the door. And then the plane doesn’t crash! It’s a causal gesture. Every time I’ve flown I’ve touched the outside of the plane, and it hasn’t

crashed. One event reliably preceding another proves that the first causes the second, right? Well, of course, intellectually, I know that my

touching the plane doesn’t ensure that it won’t crash. After all, I am a scientist and I have been studying how the material world works all my

professional life. Having said that, do you think I can ever bring myself to abandon my touching-the-fuselage practice? Well, what’s the harm? So

what if science doesn’t inform my behavior?
Yet as a biology professor, I am concerned that science does not inform our behavior, not just as individuals but as a society. I can recall how this concern captured my attention with the urgency it now has for me: I was listening to the then-president of the United States, George W. Bush, on the news as he suggested that the jury was still out on evolution. I began to push myself to articulate why I was so distressed. Each time I answered myself, I pushed again: well, so what? And again: so what? So what if science doesn’t inform the decisions we make as a country, a people, a world?
The answer is that people suffer.
The absence of an understanding of how the AIDS virus is transmitted, for instance, has contributed to countless deaths and millions of children

being orphaned in Africa. Scientists had been predicting that a Katrina-like storm was bound to hit low-lying areas in the U.S., and we now know

the consequences of having ignored that prediction. Now scientists are concerned that global climate change will have terrible consequences for

people living in poor countries.
One obstacle to people’s understanding of science is that we have a tendency to infer that one event, A, causes another, B, simply if B follows A.

Moreover, we want knowledge to provide us with certainty. Science doesn’t always confirm causality and can’t always provide certainty. We don’t

know when the next Katrina-like storm will occur or when or what the next pandemic will be. But these assumptions about direct causality and

certainty speak of a misunderstanding of science.
People seem predisposed to infer causality. I’ve wondered how this predisposition might have come about. Biologically speaking, how might it have served our fitness as we evolved? Consider this: which is riskier, failing to attend to a true positive (Uncle Bob ate that mushroom and died) or attending to a false positive (when I touch the outside of a plane, it doesn’t crash)? Attending to a false positive might not hurt me too much (that is, touching a plane before I get on is not particularly detrimental to me), but ignoring a true positive? (Oops, I ignored the fact that Uncle Bob died after eating the mushroom, and I then ate the mushroom and also died.) So perhaps we are predisposed to infer causality. It serves us to make associations. If we happen to goof on a false positive (the airplane), we can still reproduce. But if we don’t make the association when someone eats a mushroom and dies, then we may die too. So on average, it probably helped us to infer causality.
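
A back-of-the-envelope calculation, with payoff numbers invented purely for illustration (none of them come from Sherman’s essay), makes the asymmetry concrete: a harmless superstition costs little, while a missed real danger can be fatal.

    # Hypothetical fitness costs chosen only to illustrate the asymmetry
    # described above; the specific values are made up.

    P_REAL_DANGER = 0.02         # chance an observed "A then B" link is real
    COST_FALSE_ALARM = 1.0       # small cost of a useless ritual (touching the plane)
    COST_MISSED_DANGER = 1000.0  # huge cost of ignoring a real hazard (the mushroom)

    # Strategy 1: always infer causality -> pay the false-alarm cost when the
    # link is spurious, but never miss a real danger.
    always_infer = (1 - P_REAL_DANGER) * COST_FALSE_ALARM

    # Strategy 2: never infer causality -> pay nothing for spurious links, but
    # pay the full cost whenever the danger is real.
    never_infer = P_REAL_DANGER * COST_MISSED_DANGER

    print(f"expected cost, always infer: {always_infer:.2f}")  # 0.98
    print(f"expected cost, never infer:  {never_infer:.2f}")   # 20.00

With costs this lopsided, over-inferring causality is the cheaper error to make on average, which is consistent with the suggestion that the predisposition could have been favored by selection.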
But what’s the harm in inferring causality at the least provocation? Recently, I read a report noting that some parents in Indonesia have inferred

a causal relationship between polio immunization and contracting the disease. In one instance, an Indonesian mother brought her child to be

immunized and a day later he developed polio. The most likely explanation is that this child already had the virus incubating in his body prior to

the vaccination and was vaccinated too late. But without understanding how the disease is contracted and how the vaccine works, the mother’s logic

made sense. She discouraged her neighbors from immunizing their children, which will contribute to the spread of the disease.
Yet science relies on the association of events to make sense of the universe. Once we find an association or a correlation, we can begin to look

for causality—the mechanisms underlying a phenomenon. For instance, scientists noticed an association between the acidification of lakes in the

Northeast and the loss of many aquatic species of animals. And now, we are beginning to uncover the causal relationship, the mechanisms by which

the acid content in lakes hurts animals.
The absence of certainty also contributes to a misunderstanding of science. Not every human being who smokes cigarettes will develop lung cancer;


we can’t even predict (at least not yet) who will. So what do we know? Of thousands of people who smoke, some proportion of them will die

prematurely as a consequence. We can only move closer to the truth of how the material world works through the play of large numbers, and thus

probabilities.
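
As a small illustration of “the play of large numbers” (the 15 percent risk figure below is invented, not taken from the essay), a simulation shows that although we cannot say which individual smoker will be harmed, the population-level proportion becomes quite stable once the sample is large.

    import random

    random.seed(1)
    TRUE_RISK = 0.15  # hypothetical probability that a given smoker is harmed

    def observed_proportion(n: int) -> float:
        """Simulate n smokers and return the fraction who are harmed."""
        harmed = sum(random.random() < TRUE_RISK for _ in range(n))
        return harmed / n

    # Small samples bounce around; large samples settle near the true risk,
    # which is the kind of probabilistic knowledge the essay describes.
    for n in (10, 100, 10_000):
        print(n, observed_proportion(n))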
Science requires openness to possibilities and skepticism about how things work. What were your hypotheses? By what observations or experiments did

you test these hypotheses? What is your evidence?
The scientific method is such a powerful process because it is self-correcting: a hypothesis not supported by evidence doesn’t hang around long.

Scientists are constantly testing their ideas and those of others with the bar set pretty high for what it takes to be persuaded. Just recently, a

Nobel Prize-winning scientist retracted a paper she co-authored because she could not replicate the results.
Science is powerful because it accurately predicts events from the virtually certain (if I drop a ball from a building, it will fall to earth) to

the probabilistic (people who don’t smoke are likely to lead healthier lives than those who do).
A misunderstanding of science is pervasive in many institutions that shape how we see and act in the world. There are too many such institutions to

mention in this essay, so I’ll just highlight a few.
There is compelling evidence that the Bush administration manipulated data and coerced scientists when the data were not consistent with the

administration’s view of the world. I was gratified when, in his inaugural address, President Obama stated that we would “restore science to its rightful place.” Nevertheless, we must attend to the consequences of the Bush administration’s disregard of evidence in its decision-making

process. In February of 2004, sixty-two leading scientists (including Nobel laureates, National Medal of Science recipients, and advisors to the

Eisenhower and Nixon administrations) criticized the Bush administration for its science policies. Their declaration includes the statement that

“When scientific knowledge has been found to be in conflict with its political goals, the administration has often manipulated the process through

which science enters into its decisions.” For example:
• After Bush took office, the Department of Health and Human Services deleted Web site references to the efficacy of condom use in the fight

against the spread of AIDS
• NASA scientists have reported that they have been pressured repeatedly by Bush appointees to alter or delete climate change findings in their

reports
• The Bush administration interfered with stem-cell research, which could have facilitated the development of treatments to ameliorate Parkinson’s

disease and diabetes
So what if science doesn’t inform our decisions? People suffer.
Alas, the way in which science is often taught at colleges and universities can contribute to its misunderstanding. Too often, science is presented

as a disembodied collection of facts. How many of us had science classes that failed to engage us in the actual enterprise? How many science

classes insist that students generate their own questions, design and carry out appropriate experiments, and grapple with evidence?
I also have concerns regarding the ways in which the media report on scientific issues. For example, in the fall of 2004, the Dover (Pennsylvania)

Area School Board passed the following resolution: “Students will be made aware of gaps/problems in Darwin’s theory of evolution and of other

theories of evolution including, but not limited to intelligent design.” The school board further required that science teachers read a kind of

evolutionary disclaimer to their biology classes. The board was sued by a group of parents upset by this decision, and the case was widely reported

for some time. Various print, TV, and Internet media interviewed one person who was in favor of the resolution and one who was not, as though both

points of view reflected equally legitimate scientific understandings. At the time, I was teaching a course called “Science and Antiscience in

America,” and I asked my students what they thought about this tendency of the media to present all sides (or more typically “both sides”) of an

issue, particularly as it pertains to scientific questions. Mostly, my students thought that it was an appropriate way to cover an issue in order

“to be fair.” I asked them to suppose the story was about teaching that the world was flat versus round. “Oh, that’s different,” they’d say. Yet

the preponderance of evidence for the fact of evolution is as robust as that for a round earth.
It is, at times, difficult for any of us to confront our own biases and examine them in light of evidence. Many of my students had no difficulty

disparaging the folks who eschew evolution. But some of them bristled when I suggested that dismissing science as simply “a vehicle for continued

male domination” is equally problematic. When you begin your inquiry with the answer rather than the question, whether the answer is “God did it”

or “Western intellectual thought is simply a way to ensure the power of white men,” then it isn’t inquiry at all; it’s dogma.
Finally, I am deeply disturbed that roughly half of Americans don’t accept evolution. (I don’t like to use the phrase “believe in” evolution; it’s

like choosing whether or not to believe in gravity.) Darwinian evolution (including the modifications biologists have brought forth over the years)

is the only explanation that scientists have found for the relevant data. The wealth of data is so vast, evolution explains these data so well, and

nearly the entire community of professionally trained biologists is so persuaded by this explanation that it is unlikely any other explanation will

come along to supplant it. However, as good scientists, we remain open to the possibility of a better idea developing to explain the data. Until it

does, there is no scientifically valid reason to hold any other view than that our species (and all other species of animals, plants, fungi, and

bacteria) have arisen on the planet through the process of evolution.
But more than that, this denial of evolution speaks to an anti-intellectualism, a brand of antiscience that contributes to human suffering. If

people can deny evolution, which is well supported by scientific evidence and widely accepted by the professional scientific community, then they

will deny any scientific findings they dislike. The same methods and insights that have informed how scientists understand the movement of the

planets, how molecules work, and what medical remedies are most effective have also informed our understanding of evolution. We can choose to

cherry pick only the data that support a particular bias about how the world works, but how does that help us if the world does not work that way?
Science is a way of asking testable questions about the material world; the knowledge we have gained is imperfect and provisional and can be derived only through the play of large numbers, and yet it is the best we can do in addressing certain problems. Einstein expressed this view most

eloquently:
All our science, measured against reality, is primitive and childlike—and yet it is the most precious thing we have.
Elizabeth Sherman
Elizabeth Sherman is a professor of biology at Bennington College, Bennington, Vermont, and teaches classes in animal behavior and physiology. Her

research focuses on the responses of amphibians to environmental stresses.
