Saturday, February 18, 2012

Magicians, misdirection, and science: Who is fooling whom?

An older article in Scientific American (Dec. 2008), posted on the SA blog, "Magic and the Brain: How Magicians 'Trick' the Mind," elicited the following response from reader Denimbius (comment 20, Feb. 13, 2012):

This article reminds me of a criticism I read of the "Double-blind test of astrology" by magician-scientist Shawn Carlson, published in Nature (Vol. 318, 5 Dec. 1985, pp. 419-425). The criticism suggested that Carlson used misdirection techniques in the published article itself. This is in addition to the criticism that the tests were unfairly designed to begin with, making them far harder than they needed to be.

Carlson's stated protocol required the participating astrologers to examine each natal chart they were supplied and to select the correct CPI, from the 3 supplied for each chart, as either their 1st or 2nd choice. Yet in his evaluation Carlson draws our attention to the 3rd choice, which was chosen no better than chance, and declares that the 1st and 2nd choices must therefore also have been made no better than chance. The data in the article show that the 1st and 2nd choices were actually correct at a marginally significant rate.
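To see concretely why the 3rd-choice result says nothing about the stated protocol, here is a minimal sketch with hypothetical counts (not Carlson's actual figures). Under the chance model each of the 3 CPIs is equally likely, so a hit (the correct CPI placed 1st or 2nd) has probability 2/3, and that hit rate, not the 3rd-choice rate, is the quantity the protocol says to test:

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): one-sided upper tail."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical counts (NOT Carlson's actual data): suppose the astrologers
# placed the correct CPI 1st or 2nd for 85 of the 116 charts. Under the
# chance model, each of the 3 CPIs is equally likely, so P(1st or 2nd) = 2/3.
n_charts = 116
hits_first_or_second = 85
p_chance = 2 / 3

p_value = binom_sf(hits_first_or_second, n_charts, p_chance)
print(f"one-sided p-value: {p_value:.3f}")

# The point: whether the 3rd choice matched at chance says nothing by
# itself; the stated protocol scores a hit when the correct CPI is the
# 1st or 2nd choice, so that is the rate to compare against 2/3.
```

With these hypothetical numbers the tail probability lands near the conventional significance threshold, illustrating how a "marginally significant" hit rate can coexist with a 3rd-choice rate at chance.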

The finding from Carlson's 3-choice test helps sustain the illusion created by another test, which depended on dividing the data into 3 groups. The astrologers rated (on a scale of 1-10) the accuracy of each of the 3 CPIs supplied for each chart (110 CPIs in this test vs. 116 in the 3-choice test). Applying the three bogus categories and drawing attention to the negative slope of the 1st category, Carlson declares that the result was no better than chance. His own data show, however, that when left ungrouped, the 10-point rating test was significant for the astrologers, more so than the 3-choice test.
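The statistical cost of coarse grouping is easy to demonstrate. The sketch below uses made-up rating frequencies (not Carlson's data) and compares a two-sample z statistic on the raw 1-10 ratings against a z statistic computed after collapsing the same ratings into a coarse "high" band; the grouped version discards ordering information, and the statistic shrinks:

```python
import math

def z_two_sample_means(a, b):
    """Large-sample z statistic for a difference of two means."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Made-up rating frequencies (NOT Carlson's data): ratings of the correct
# CPIs sit slightly above ratings of the incorrect ones on the 1-10 scale.
correct = [4] * 10 + [5] * 20 + [6] * 40 + [7] * 25 + [8] * 15    # n = 110
incorrect = [4] * 40 + [5] * 60 + [6] * 70 + [7] * 35 + [8] * 15  # n = 220

z_raw = z_two_sample_means(correct, incorrect)

# Collapse the same ratings into a coarse band (8-10 = "high") and test
# only the share of high ratings, mimicking an analysis of grouped data.
def high_share(xs):
    return sum(1 for r in xs if r >= 8) / len(xs)

p1, p2 = high_share(correct), high_share(incorrect)
n1, n2 = len(correct), len(incorrect)
p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
z_grouped = (p1 - p2) / math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))

print(f"z on raw ratings:     {z_raw:.2f}")
print(f"z on grouped ratings: {z_grouped:.2f}")
```

With these made-up counts the raw-ratings statistic comes out clearly larger than the grouped one: the ungrouped test retains information, and hence power, which is the commenter's point about the 10-point rating data.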

In a control-group test, Carlson found that the volunteer students could not identify their own chart interpretation, written by the astrologers, out of the 3 supplied; yet the control group chose the "correct" interpretation at a highly significant rate, which Carlson explains away as a "statistical fluctuation."

In a related test, the students rated the accuracy (on a scale of 1-10) of the individual paragraphs within the same astrological interpretations. This test might have clarified whether the surprising "statistical fluctuation" results had somehow been switched. But Carlson objected that he could not be certain the volunteers had followed his instructions, and he discarded this test without reporting the results.

Read uncritically, with the bias that typical readers of Nature hold against astrology, the article shows the reader exactly what he or she expects to see. Read critically, it tells another story. Was Carlson inadvertently fooling himself?

Read the Carlson article:
