When the data and theory don’t match

Bertram Gawronski

Research output: Chapter in Book/Report/Conference proceeding › Chapter

1 Citation (Scopus)

Abstract

A few years ago, a study in my lab produced a pattern of results that was not only unexpected but inconsistent with a theory that my collaborators and I had proposed several years before. Making the situation even worse, the finding was directly implied by a competing theory that we aimed to refute. Our theory predicted that repeated exposure to two co-occurring stimuli would form a mental association between the two stimuli even when people reject the co-occurrence as meaningless or invalid. A useful example to illustrate this hypothesis is the concern that repeated claims of Barack Obama being Muslim may create a mental association between Obama and Muslim even when people know that the claim is factually wrong. This possibility is explicitly denied by theories assuming that newly formed memory representations depend on how people construe co-occurrences and whether they consider them as valid or invalid. Consistent with the latter theories, and in contrast to the predictions of our own theory, our study showed that the effects of repeated exposure to information about other individuals were generally qualified by the perceived validity of this information; there was no evidence for unqualified message effects that were independent of perceived validity. My graduate student and I replicated this pattern in three independent studies, so there was no question about its reliability. Yet, a major question was: What should we do with the data? Should we publish them and discredit our own theory? Or should we ignore the data and pretend that our theory is correct despite our discovery that one of its central predictions has failed? We eventually decided to submit the data for publication, and after an initial rejection the paper was accepted pending minor revisions at another journal. It was not easy to state in the paper that our theory includes an incorrect assumption, but the data ultimately helped us better understand the phenomena our theory had been designed to explain. Since the paper came out, some people have asked me why we invested so much effort into conducting and publishing research that discredits our own theory. Looking back, I still think it was the right thing to do, because the data told us something important that was inconsistent with what we believed at that time. Two years later, someone else published a study on the same question using a different operationalization. Their results confirmed the original prediction of our theory, so it turned out that we were not completely mistaken with our initial assumptions. However, taken together, the two articles suggest that our theory is at least incomplete, in that it fails to specify an important moderator of the predicted effect (which still needs to be identified). And that’s important to know if our goal is to advance science instead of pursuing our own personal agenda.

Original language: English (US)
Title of host publication: Ethical Challenges in the Behavioral and Brain Sciences
Publisher: Cambridge University Press
Pages: 85-86
Number of pages: 2
ISBN (Electronic): 9781139626491
ISBN (Print): 9781107039735
DOIs: https://doi.org/10.1017/CBO9781139626491.029
State: Published - Jan 1 2015

ASJC Scopus subject areas

  • Psychology (all)

Cite this

Gawronski, B. (2015). When the data and theory don’t match. In Ethical Challenges in the Behavioral and Brain Sciences (pp. 85-86). Cambridge University Press. https://doi.org/10.1017/CBO9781139626491.029
