You can barely open your Facebook feed these days without hearing about the benefits of mindfulness. Taking time out of our busy days to be 'in the moment' and sit with our feelings for a few minutes has been shown to make us less stressed and reduce inflammation, and just this week there were reports that mindfulness can be as effective as antidepressants.
But new research suggests that we might all be doing a little too much wishful thinking when it comes to mindfulness, and it seems to be skewing which results are getting published. After analysing more than 100 published trials, scientists have found evidence that researchers are glossing over negative outcomes.
A team from McGill University in Canada looked at 124 peer-reviewed trials that assessed the effectiveness of mindfulness as a mental-health treatment, and found that positive findings were being reported 60 percent more often than is statistically likely.
They also looked at another 21 registered clinical trials, and found that, 30 months after completion, only 62 percent of them had published their results - hinting that some findings are going unreported.
"The proportion of mindfulness-based therapy trials with statistically significant results may overstate what would occur in practice," the researchers conclude in PLOS ONE.
If this turns out to be the case, it's a pretty big deal for mental health practitioners, who have no choice but to base their recommendations on the latest peer-reviewed data. Which isn't great if that data is being skewed by researchers' own desire for mindfulness to work.
"I think this is a very important finding," psychologist Christopher Ferguson from Stetson University in Florida, who wasn't involved in the study, told Nature. "We'll invest a lot of social and financial capital in these issues, and a lot of that can be misplaced unless we have good data."
The McGill researchers calculated, for each of the 124 studies they analysed, the probability that its sample size was large enough to detect the effect it reported - in other words, its statistical power. The effects of chance are bigger in trials with small sample sizes, so you'd expect it to be harder for them to achieve statistically significant positive results. In fact, based on those power calculations, the researchers predicted that only around 66 of the 124 trials should have been able to report positive results.
But in reality, 108 of them announced positive results. And when the team looked at those 21 registered clinical trials, they found that none of them specified in advance which variable they'd be tracking to measure success - which means researchers can theoretically ignore an unexpectedly bad result and report on something positive they found instead.
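To make that logic concrete, here's a minimal sketch of the kind of "excess significance" check described above. The per-trial power values below are made-up illustrative numbers, not the figures the McGill team actually computed; only the counts of 124 trials, roughly 66 expected positives, and 108 observed positives come from the article.

```python
# Rough sketch of an excess-significance check (illustrative only).
import random

random.seed(1)

n_trials = 124
observed_positive = 108  # trials that reported a statistically significant result

# Hypothetical statistical power for each trial: the probability that a trial
# of its size would detect the effect it reported. These values are invented
# for illustration.
powers = [random.uniform(0.3, 0.8) for _ in range(n_trials)]

# If results were reported without bias, the expected number of significant
# trials is simply the sum of the per-trial powers.
expected_positive = sum(powers)  # the paper's estimate was roughly 66

excess = observed_positive - expected_positive
print(f"expected ~{expected_positive:.0f} positive trials, observed {observed_positive}")
print(f"an excess of roughly {excess:.0f} trials is a red flag for selective reporting")
```

The idea is simply that a big gap between the observed and expected number of positive trials is hard to explain by chance alone, which is what points towards unreported negative results or selective outcome reporting.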
That's not to say that mindfulness doesn't work - far from it. The body of peer-reviewed research singing the praises of the meditative technique is too large to be ignored. But this new research does raise concerns that we might be overlooking some of mindfulness's limitations.
"I have no doubt that mindfulness helps a lot of people," said one of the researchers, Brett Thombs. "[But] I think that we need to have honestly and completely reported evidence to figure out for whom it works and how much."
So what's the solution? The McGill team point to trials with larger sample sizes. In their analysis, the 30 trials with the largest number of participants showed no sign of over-reporting positive results.
Pre-registering trials could also work, Thombs told Nature. That means a journal reviews and accepts a study before it begins and before any data is collected, so the journal is committed to publishing the results regardless of how they turn out.
After all, science only works properly if we know which hypotheses don't hold up to testing - the successful results only tell half the story.
"For the health-care system," said Thombs, "it's just as important to know what doesn't work."