Article 3: The Tipping Point of Perceived Change
How many sleepless nights does it take to officially conclude that he loves you less? And how many heartwarming moments does it take to conclude that he loves you more?
Today we will talk about the psychology of tipping points: the moment when, after many signs, we finally conclude that things have changed and that the change will last.
A good guy gone bad and a bad guy gone good are equally intriguing narratives, frequently portrayed in the media. The paper today (O’Brien and Klein, 2017) asks the following question:
"Do we need to see more evidence before we conclude (e.g., the tipping point) that an improvement has occurred as compared to a decline?"
The answer is yes.
Compared to change for the better, we require fewer instances, a shorter duration, and a smaller magnitude of change to conclude that things have officially changed for the worse. For instance, we do not need as many signs of decline to believe a once-great football player is no longer good. Even when the chances of improvement and decline are explicitly stated as 50-50, people are more likely to detect a lasting decline.
Why Are We Quicker to Conclude a Decline?
Two reasons might explain this asymmetry.
First, signals of decline may simply feel more alarming. From an evolutionary perspective, negative signals are more threatening to our survival than positive ones. We are wired to act quickly upon a negative signal. This is known as the negativity bias. In essence: better safe than sorry.
The second, and perhaps more interesting, reason concerns our beliefs about how likely decline is compared to improvement. Think of it this way: a plate can exist in many states of “broken”, but only in one state of “unbroken”. Similarly, for a struggling student to become a “good” student, a lot of factors must align: effort, support, skills, circumstances. But it may only take one setback to derail a “good” student. So, if we already believe that decline is more likely than improvement, we don’t need much evidence before making that judgment.
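To make that intuition concrete, here is a minimal sketch of the prior-belief argument, written as a small Bayesian-updating exercise. It is not the paper’s method, and every number in it (the priors, the signal reliabilities, the decision threshold) is an illustrative assumption; the only point is that a higher prior for change means fewer consistent signs are needed to cross the “things have really changed” threshold.

def signs_needed(prior_change, hit_rate=0.8, false_alarm=0.5, threshold=0.9):
    """Count the consistent signs needed before P(lasting change) exceeds threshold.

    prior_change : assumed prior probability that a lasting change has occurred
    hit_rate     : assumed P(seeing a consistent sign | lasting change)
    false_alarm  : assumed P(seeing such a sign | no lasting change, i.e., noise)
    """
    p = prior_change
    n = 0
    while p <= threshold:
        n += 1
        # Bayes' rule after observing one more consistent sign
        p = (p * hit_rate) / (p * hit_rate + (1 - p) * false_alarm)
    return n

# Illustrative priors: decline is believed to be more likely a priori than improvement.
print("Signs needed to conclude decline:    ", signs_needed(prior_change=0.3))  # 7
print("Signs needed to conclude improvement:", signs_needed(prior_change=0.1))  # 10

With these made-up numbers, the observer concludes “decline” after 7 bad signs but needs 10 good signs to conclude “improvement”; the exact counts do not matter, only the asymmetry that falls out of the unequal priors.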
What This Means for Our Decisions
In reality, both factors may contribute to this phenomenon, but the paper focuses more on the second explanation. It shows that even when prematurely concluding a decline is made explicitly more costly (and therefore more alarming), people still detect a decline more quickly than an improvement, because they believe a decline is simply more likely. However, when the researchers experimentally increased the perceived likelihood of improvement, the pattern reversed.
This has real-world consequences. It suggests that we might be too quick to punish people who seem to have gotten worse and too slow to reward those who have improved. As we discussed in the second blog post, this can exacerbate inequality.
The findings also carry policy implications. It takes substantial evidence to convince others that a societal problem (i.e., a decline) exists. But it might take even more evidence to prove that your solution (i.e., an improvement) is working.
To Conclude...
In the second paper, we saw how we might discount small progress when it fails to meet a categorical threshold (e.g., a passing grade of 60). In today’s paper, we see that we might discount progress overall, compared to decline.
In sum, it takes more to convince us that a villain has turned into a saint than the other way around.
But… should that necessarily be the case?