Category Archives: Scientific Encounters

Thoughts as Words/Images and Thoughts as Something Else

“The word thinking is arguably the most problematic word in the exploration of pristine experience.” (Hurlburt and Heavey, 2015, p. 151).

University of Nevada, Las Vegas psychologist Russell T. Hurlburt and his colleagues have been conducting a series of studies in which subjects are beeped at random moments and asked to jot down whatever they are experiencing at the instant of the beep. This procedure has revealed five common features of everyday inner experience: inner speech, inner seeing, feelings, sensory awareness, and “feature 5.” Feature 5 is hard to describe. It’s as if the concept of feature 5 doesn’t fit with our understanding of inner experience – even though we may all experience this feature. So what is feature 5? It’s something Hurlburt calls “unsymbolized thinking”, which he describes as follows:

“Unsymbolized thinking is the experience of an explicit, differentiated thought that does not include the experience of words, images, or any other symbols. For example, if you had been beeped a moment ago, you might have experienced an unsymbolized thought which, if expressed in words, might have been something like I wonder what Feature 5 is. But if this was an unsymbolized thought, there would have been no experienced words–no experience of the word ‘wonder’ or of ‘Feature 5.’ There would have been no experienced images–no seeing of a beeper or of anything else.” – Russell T. Hurlburt, “Thinking Without Words,” https://www.psychologytoday.com/blog/pristine-inner-experience/201111/thinking-without-words, accessed 11/20/15 at 6:11pm

Not everyone agrees that unsymbolized thinking even exists. For some, this may be a matter of how thoughts are conceived: as verbally mediated mental processes (Shulman et al. 1997). Others may be skeptical of the idea of unsymbolized thinking because the act of self-reflection in itself produces verbal versions of experiences that may not have originally involved words or other symbols. Poor research design may also contribute to the misimpression of thoughts as steeped in language. For instance, during experience sampling studies, how subjects are questioned about their experiences may bias their responses, such as when they are asked what they were just “thinking” (Hurlburt et al. 2015). It would be better for researchers to simply ask what the subjects had been experiencing.

There are cognitive spaces between the words and images. These spaces aren’t empty, but unless their content is converted into a form that can be maintained in working memory, it will likely be forgotten in a matter of seconds. We tend to remember what we have reported to ourselves, which requires that our experience be in reportable form – and for the most part, that means in words and images.

References:

Hurlburt, R. T., Alderson-Day, B., Fernyhough, C., and Kühn, S. (2015). What goes on in the resting-state? A qualitative glimpse into resting-state experience in the scanner. Frontiers in Psychology, 6, Article 1535. http://dx.doi.org/10.3389/fpsyg.2015.01535

Hurlburt, R. T., and Heavey, C. L. (2015). Investigating pristine inner experience: implications for experience sampling and questionnaires. Consciousness and Cognition, 31, 148–159. doi:10.1016/j.concog.2014.11.002

Shulman, G. L., Fiez, J. A., Corbetta, M., Buckner, R. L., Miezin, F. M., Raichle, M. E., et al. (1997). Common blood flow changes across visual tasks: II. Decreases in cerebral cortex. Journal of Cognitive Neuroscience, 9(5), 648–663.

Awareness and the Brain, Part II

“First, attention is a physical process in the brain, whereas awareness is in the form of knowledge that the brain can potentially report. Second, although the content of awareness and the content of attention overlap most of the time, it is sometimes possible to attend to a stimulus without being aware of it. In that case, the brain’s reportable knowledge about what is currently “in mind” becomes dissociated from what it is actually attending to, suggesting that like all representations constructed by the brain awareness is an imperfect model.”

— Yin T. Kelly, Taylor W. Webb, Jeffrey D. Meier, Michael J. Arcaro, and Michael S. A. Graziano, “Attributing awareness to oneself and to others” (2014), PNAS Early Edition (approved February 21, 2014), www.pnas.org/cgi/doi/10.1073/pnas.1401201111

“Awareness is the brain’s simplified schematic model of the complicated, data handling process of attention.”

– Michael S. A. Graziano, Consciousness and the Social Brain (2013), Oxford University Press, p. 156 (Kindle)

We are animals with brains. Awareness evolved because it helped our ancestors survive and reproduce. Brains produce awareness. Awareness tracks attention, most of the time. Awareness is a constantly updated experience of our dynamically changing state of attention. Attention enhances signals and reflects competition among signals in the brain. Attention is a form of brain behavior. Awareness allows the brain to understand that behavior, its dynamics and consequences. Awareness is “experienceness” (Graziano 2013).

Awareness is a type of representation. All representations, however, are simplifications: not perfectly accurate but good enough to “keep track of the essentials” (Graziano 2013, Kindle p. 1144).

None of this is magic. Awareness belongs to the masses. There is not an elite who possess awareness in greater abundance than the hoi polloi. Awareness is something the human brain does.

 

Awareness and the Brain, Part I

In some circles, “awareness” is a higher state of consciousness imbued with magical properties, a kind of portal onto the true nature of the world. This magical awareness allows one to overcome the barriers of mind and body to participate in the “really real”, to use Clifford Geertz’s phrase for the sense “upon which the religious perspective rests”. With this religious sense of awareness, comes great revelation.

Such an idealized conception of awareness draws power from its conceptual other, its “as opposed to”: the “waking sleep” of ordinary consciousness, where one is stumbling about in the dark forest of illusion. In this waking sleep, we’re not really awake – that is, not truly aware – although we probably think we are. The standard example used by those wanting to convince us of how unaware we are is that of driving to work. See? Somehow you got to the office but can’t remember a single thing about the trip. That’s because you weren’t aware. You were asleep at the wheel and didn’t know it.

Not being the religious sort myself, I’m skeptical.  I have questions.  Is memory-on-demand proof of awareness? If so, does that mean that with awareness, comes great remembering? And what are we talking about here? Declarative memory, yeah – but what type: visual memory, auditory memory, verbal memory, emotional memory, spatial memory, memory of physical sensations? At least one bit of a memory trace out of the thousands of percepts being experienced every second? Are we also in a state of awareness while we are remembering? What neurological evidence is there to distinguish real awareness from illusory awareness? And if we weren’t really aware when we thought we were, what were we instead?

I don’t mean to imply there are no neural correlates to certain kinds of religious experience – there probably are, insofar as religious experience is a thing – that is, something that has common elements among its various manifestations. When people are in the grip of some sort of religious ecstasy, their experience may very well correlate with certain patterns of brain activation and neurotransmitter release. Ditto when people are feeling serenely unattached. Or when feeling a sense of profound understanding. These brain patterns may or may not be connected to any specific revelatory content (that is, specific beliefs about the nature of what is). That’s for science to find out and me to wonder.

And I do not doubt that the brains of people who are experts in religious experience exhibit certain neural regularities that distinguish them from novices or worse.

Coming up: more questions! Starting with: what is “awareness” in the brain? Are there different types of awareness? Are there different levels of awareness? Are some sorts of awareness better than others? What makes them better?

Reference: Geertz, Clifford (1973). The Interpretation of Cultures. Chapter: “Religion as a Cultural System.” Basic Books.

The Agile Mind

Book recommendation (it’s long but worth it):

Koutstaal, Wilma (2012). The Agile Mind. New York: Oxford University Press.

For me, this book strengthens the case against simplistic dichotomies that are pervasive within the field of clinical psychology: the good (authentic/real/rational/intentional) and the bad (inauthentic/false/irrational/reactive). Such divisions are steeped in ideology and are fundamentally anti-scientific.

The scientific mind thinks in terms of continua. Fuzzy boundaries rule. Context matters.

The ideological mind likes to divide the world into exclusive categories. Purity matters.

Yeah, yeah – that’s just what I did, what with my “scientific mind” and “ideological mind”. However, these should be considered “ideal types”, which rarely exist in their pure form. Categories and ideal types can help us see and appreciate stuff we otherwise may not have noticed, but it’s important to remember they can also prevent us from seeing and appreciating other things.

In that spirit, here’s a quote from The Agile Mind:

“Although inappropriate reliance on more automatic, heuristic modes of processing has frequently been shown to lead to error and biases, it is essential, indeed vital, that we refrain from any temptation to unilaterally characterize less directed, more intuitive, spontaneous, or nondeliberative modes of processing as inherently pernicious. Context here is extremely important, both the extreme of too enthusiastically and unequivocally endorsing the virtues of deliberate thought, and the extreme of too strongly endorsing the benefits of undirected and “undeliberate” thought must be avoided. …we need to more fully understand how they work with and complement one another, in dynamic and ongoing moment-to-moment interchange and mutual support.” (p. 22)

 

Helping Scientists Serve the Greater Good

“Refutations have often been regarded as establishing the failure of a scientist, or at least of his theory. It should be stressed that this is an inductivist error. Every refutation should be regarded as a great success. … Even if a new theory … should meet an early death, it should not be forgotten; rather its beauty should be remembered, and history should record our gratitude to it.”

– Karl Popper, Conjectures and Refutations: The Growth of Scientific Knowledge (1963)

In Why it’s time to publish research “failures”, Lucy Goodchild van Hilten writes about the movement to counter publication bias that favors positive results and leads to under-reporting of negative findings. For instance, the World Health Organization (WHO) is now calling for all results – including null results – to be published within 12 months of study completion. Journals dedicated to negative findings are springing up, and there are serious campaigns within the scientific community to get researchers to report negative results.

Of course there’s push-back. And for good reason: scientists are super-busy individuals. Many work 60 or more hours a week. It takes a lot of time, energy and focus to do research and write publishable papers. Given that null results are much more common than positive findings, is it really reasonable to ask scientists to more than double their workload, risking health, career and relationships, for a cause that serves the Greater Good but accrues little personal benefit to themselves? As one researcher put it: “If I chronicled all my negative results during my studies, the thesis would have been 20,000 pages instead of 200.”

To tackle the time requirements, reporting of negative findings needs to be streamlined. Unless otherwise inclined, researchers shouldn’t be expected to engage in lengthy background discussions or analysis when reporting null results. Keep it simple and short when possible. Ideally, publications will develop clear guidelines with fairly low word-limits to encourage submissions. Ideally, funding sources will work closely with researchers to facilitate the collection and reporting of all findings. For instance, some funders require quarterly reports – these reports should also include sections for null results. When researchers have to organize, analyze, and report all findings on an ongoing basis as a condition of continued funding, subsequent publishing of the same findings will involve much less time and effort.

Truth and Consequences

The promise of science:

“…truth emerges as a large number of flawed and limited minds battle it out.” (Jonathan Haidt – The Righteous Mind: Why Good People Are Divided by Politics and Religion)

“The values of science: to seek to explain the world, to evaluate candidate explanations objectively, and to be cognizant of the tentativeness and uncertainty of our understanding at any time.” (Steven Pinker, The Better Angels of Our Nature: Why Violence Has Declined)

Compare with:

“…truth does not proceed from the application of general scientific rules that are valid also in natural science, but is defined by its origin.” (Leszek Kołakowski, Main Currents of Marxism, on how the Communist Party defined the criteria of truth).

Ideological commitment often makes for bad science, because it’s easier for ideologues to rationalize away non-confirming evidence. Being smitten with the grand vision can blind one to the inconvenient facts on the ground. The broader and longer the view, the more room for confirmation bias to work.

Ideologues don’t pivot easily. They hold on to their canned goods long after the past-due date. Businesses are more likely to pivot because it’s in their self-interest to do so. For example, conspiracy theories notwithstanding, it’s rarely in a pharmaceutical company’s self-interest to suppress negative evidence from clinical trials, because if a drug has problems, it will come back and bite them. Survival in a competitive marketplace requires quickly identifying and fixing one’s mistaken notions. It’s less love of truth than aversion to the consequences of getting something wrong.

The attitude of reverence gives founders and masters a special authority on the truth, so that the search for truth requires achieving a correct understanding of what the founders and masters meant when they said whatever. This has nothing to do with science. Scientists don’t consult masters or sacred texts to figure out the “right” way to understand something. We don’t look to Darwin for a correct understanding of evolution, even though we may look to Darwin for insights on how evolution works.

Beware of certainty on topics one can’t possibly be certain about. Voltaire said doubt is uncomfortable, but certainty is absurd. Comfort with uncertainty may go against our nature but it is vital to getting closer to the truth of things.

Paraphrasing David Eagleman, the 3 words that science has given humankind: “I don’t know”.

 

Null and Beautiful

Per wonderful Wikipedia, which is not everything and not always right or balanced, but anyway – thank you Wikipedia! – here’s a definition of ‘null result’:

“In science, a null result is a result without the expected content: that is, the proposed result is absent. It is an experimental outcome which does not show an otherwise expected effect. This does not imply a result of zero or nothing, simply a result that does not support the hypothesis.”

But null results aren’t published enough, so we often get a skewed idea of the range of scientific findings on a subject of interest. Almost 80% of null results – at least in the social sciences – go unwritten and/or unpublished. This is per Publication bias in the social sciences: Unlocking the file drawer.

What to do? It’s easy to say “publish the null results, you numskulls!” Or, “do more replication research”. Try having a career that way. Try attracting money. The forces are against it.

Funding organizations (or some far-seeing billionaires) need to expand their missions to include research dedicated to sniffing out false positives. And academic journals need to be more welcoming of papers based on such research. Of course, the research would have to be on the up-and-up, where design is sound and commitment to the scientific method trumps hoped-for findings.

Ideally, journals would have sections consisting of critiques of previously published papers summarizing research findings. But this is unlikely to happen, since it’s not exactly in a journal’s self-interest to expose weaknesses in what it had previously agreed to publish. Perhaps there should be journals dedicated to critiquing papers that were published elsewhere. To avoid a lot of sloppy, mean-spirited writing, standards for critiques would have to be high, and the original researchers would be invited to respond.

Research on the Benefits of Mindfulness

The benefits of mindfulness receive a lot of press (e.g., see the Huffington Post). Mindfulness boosters frequently cite scientific studies to support the case for mindfulness as a kind of cure-all for the ills of the modern age. Given that mindfulness meditation involves the near-constant control of attentional processes and ongoing mental distancing through “observing” thoughts, labeling mental activity as “just thoughts”, and gently redirecting attention away from thoughts, it makes a lot of sense that certain neuropsychological tendencies would be found in meditators.

Given a worldview that values loving kindness and calm nonreactivity, it makes sense that mindfulness practitioners would report less stress and show fewer biomarkers for stress. It makes sense that mindfulness would be associated with greater well-being and happiness. Given hundreds or thousands of hours of practice directing and redirecting attention, it makes sense that neural efficiency and connectivity patterns would be altered. The brain, body and personality all change with experience. If you spend hours and hours regulating cognitive, emotional and physiological processes in specific ways, your brain, body and personality will change in specific ways.

Questions remain regarding the mechanisms of change and how large and consistent these effects are. In books, blogs and the popular press, one often sees statements that “researchers have found” or “studies show” without information on the quality or size of the studies involved or the robustness of the findings. When I check out the actual research, more often than not the researchers acknowledge the tentativeness of their conclusions and the need for replication. More often than not, the study design was not a randomized controlled trial, and even if there was a control group, there was not a suitable comparison treatment condition. More often than not, the researchers did not appear to control for the placebo effect or factors common to most interventions (“common factors”). A few examples:

Take the study Prevention of Relapse/Recurrence in Major Depression by Mindfulness-Based Cognitive Therapy by Teasdale, Segal, Williams and others. This study has been frequently cited in academic papers and used by mindfulness advocates as strong evidence of the benefits of mindfulness. For instance, here’s how Jon Kabat-Zinn summarizes the study:

“…people with a prior history of three or more episodes of major depression taking the MBCT [Mindfulness-Based Cognitive Therapy] program relapsed at half the rate of the control group, which only received routine health care from their doctor…This was a staggering result…” (Full Catastrophe Living, Kindle p. 7322)

Now for some context. This particular study had no active comparison therapy. The control group received “treatment as usual” (aka routine health care). The MBCT group actually had a higher rate of relapse for participants who had two or fewer prior depressive episodes – not quite statistically significant, but trending that way (p ≥ .10). The benefit for MBCT (for participants with 3+ prior episodes) was seen with as few as 4 treatment sessions (out of 8 possible), but the authors do not let us know whether additional sessions (up to 8) increased the benefit. We also have no idea what the actual ingredients of change are. Without an active comparison group that matches MBCT in factors common to all efficacious treatments, we don’t know if anything specific to MBCT made a difference in participant outcomes.

(Quick word about “common factors”: these include things like therapeutic alliance, empathy, goal consensus/collaboration, “buy-in”, positive regard/affirmation, and congruence/genuineness. Common factors are thought to exert much more influence over therapy outcomes than factors specific to individual therapies – for more on common factors, see Laska, Gurman and Wampold 2014.)

Other types of therapies, such as Maintenance Cognitive-Behavioral Therapy and Behavioral Activation Therapy, have also been associated with reduced relapse in chronic depression. So when we are told the results of the MBCT study are “staggering”, I’m thinking: promising, yes – staggering, hardly. Mindfulness-based cognitive therapy clearly has some value; for one thing, it provides practical tools to help reduce stress and regulate unruly thoughts and emotions. It probably does help with unproductive rumination. But are mindfulness meditation and mindfulness-based therapies that much better than what’s already out there? Hard to say – since the quality of the research often leaves much to be desired.

Unless I want to spend the next decade on this project, I won’t be going into a lot of detail about each study that addresses the benefits of mindfulness. Let’s just look at a couple of meta-analyses. One, “Mindfulness-based therapy: A comprehensive meta-analysis” (2013), concludes that mindfulness-based therapies are “an effective treatment for a variety of psychological problems”, but the authors also note that the moderate effectiveness of MBT “did not differ from traditional CBT [Cognitive-Behavioral Therapy] or behavioral therapies … or pharmacological treatments.”

The other meta-analysis was “The effect of mindfulness-based therapy on anxiety and depression: A meta-analytic review” (2010), which analyzed 39 studies (out of 727 originally identified as possible candidates for review). The authors found that mindfulness-based treatments were moderately effective for anxiety and depression, with stronger effects for individuals with anxiety and mood disorders. But their meta-analysis included many non-controlled studies, so how can we interpret these results?

Looking more closely at the 39 studies: 23 had no control or comparison group, and 16 included one – of which 8 were waitlist controls, 3 were treatment-as-usual (TAU), and 5 actually had an active comparison treatment. So that’s 5 out of 39 MBT studies with a decent control group. But wait: of the 5 studies that were described as having “active controls”, two were “education programs” and two were types of art therapy. Education programs and art therapy are insufficient comparison treatments because they do not match the main intervention in common factors of efficacious treatments or placebo effects. (Note: I have designed such comparison interventions, so I know a bit whereof I speak.) Only one of the 5 studies listed as having an active control condition could be called an empirically supported “real” intervention – and that was cognitive-behavior group therapy, a condition with a grand total of 18 participants, representing just 1.5% of the 1,140 participants covered in the meta-analysis.
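For readers who like to check the tallies, the study counts above can be verified with a few lines of arithmetic. This is just a sketch; the numbers are those quoted from the 2010 meta-analysis, not anything computed from the original data:

```python
# Tally of the 39 studies in the 2010 meta-analysis, per the breakdown above.
no_control = 23                    # no control or comparison group
waitlist, tau, active = 8, 3, 5    # breakdown of the 16 studies that had one
assert no_control + waitlist + tau + active == 39

# Only one "active control" arm was an empirically supported treatment:
# cognitive-behavior group therapy, with 18 of the 1,140 total participants.
cbt_share = 18 / 1140 * 100
print(f"CBT comparison arm: {cbt_share:.1f}% of participants")
```

The exact figure is about 1.58%, which the post rounds to 1.5% – either way, a sliver of the total sample.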

The authors of the 2010 meta-analysis actually criticize an earlier meta-analysis on the effect of mindfulness-based treatments partly because the authors of the earlier meta-analysis only reviewed controlled studies – and the other meta-analysis concluded that MBT does not have reliable effects on anxiety and depression. To quote: “Our study suggests that this conclusion was premature and unsubstantiated. The authors included only controlled studies, thereby excluding a substantial portion of the MBT research.”

Well, yeah, that is a legitimate problem. I’d recommend more high-quality controlled studies to address it. Then do another meta-analysis.

The problem with a lot of research on mindfulness is the same problem that plagues a lot of psychotherapy research: experimenter bias, which can taint even controlled studies. James Coyne puts this point beautifully in Salvaging Psychotherapy Research: a Manifesto:

“The typical RCT [Randomized Controlled Trial] is a small, methodologically flawed study conducted by investigators with strong allegiances to one of the treatments being evaluated. Which treatment is preferred by investigators is a better predictor of the outcome of the trial than the specific treatment being evaluated…Overall, meta-analyses too heavily depend on underpowered, flawed studies conducted by investigators with strong allegiances to a particular treatment or to finding that psychotherapy is in general efficacious. When controls are introduced for risk of bias or investigator allegiance, effects greatly diminish or even disappear.”

So, where does that leave us? With the need to do more, better research on mindfulness-based treatments. In the meantime, it’s probably safe to say that mindfulness practice and mindfulness-based treatments are helpful in some ways, for some people – but a lot of questions remain unanswered. To the degree that mindfulness advocates present evidence about the wonderful effects of mindfulness as unequivocal and/or uncontested (much less “staggering”), they are exaggerating and overstating their case.

Note: This post is also in Observing Mindfulness under the title “Mindfulness and the Ideological Square: Emphasize Our good things – Part II”

Reference: Jon Kabat-Zinn (2013). Full Catastrophe Living: Using the Wisdom of Your Body and Mind to Face Stress, Pain, and Illness (Kindle version, revised edition). New York: Bantam Books.

 

Common Factors Behind Successful Outcomes in Psychotherapy

In the past post, I mentioned that concepts and techniques specific to various types of psychotherapy may account for as little as 1% of outcomes. Here is a fuller accounting of the numbers for “percentage of variability of outcomes” in psychotherapy, care of Laska and Gurman (2014):

Common factors shared by all effective therapies (therapeutic alliance, empathy, goal consensus/collaboration, positive regard/affirmation, congruence/genuineness, and therapist characteristics): 43% total

Differences between treatments: <1%

Specific ingredients (dismantling): 0%

Adherence to protocol: <0.1%

Rated competence in delivering the particular treatment: 0.5%

And the rest, care of Lambert and Bergin (1994):

Extra-therapeutic events: 40%

Placebo effect (expectations): 15%

The above adds up to about 100%.
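As a quick sanity check, the cited estimates can be totaled. This is a sketch only: the sub-1% items are counted at their stated upper bounds, so the sum slightly overstates the total:

```python
# Rough tally of the variance-of-outcomes estimates quoted above.
# "Differences between treatments" (<1%) and "adherence to protocol" (<0.1%)
# are counted at their upper bounds, so this is approximate.
estimates = {
    "common factors (Laska & Gurman 2014)": 43.0,
    "differences between treatments (<1%)": 1.0,
    "specific ingredients (dismantling)": 0.0,
    "adherence to protocol (<0.1%)": 0.1,
    "rated competence": 0.5,
    "extra-therapeutic events (Lambert & Bergin 1994)": 40.0,
    "placebo effect (expectations)": 15.0,
}
total = sum(estimates.values())
print(f"total: {total:.1f}%")  # 99.6% – "about 100%", as noted above
```

The leftover fraction of a percent is consistent with these being rough estimates rather than an exact decomposition.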

The Laska and Gurman paper posits the following common factors as “necessary and sufficient for change: (a) an emotionally charged bond between the therapist and patient, (b) a confiding healing setting in which therapy takes place, (c) a therapist who provides a psychologically derived and culturally embedded explanation for emotional distress, (d) an explanation that is adaptive (i.e., provides viable and believable options for overcoming specific difficulties) and is accepted by the patient, and (e) a set of procedures or rituals engaged by the patient and therapist that leads the patient to enact something that is positive, helpful, or adaptive.” (p. 469)

Laska and Gurman argue that therapies and treatments (“interventions”) that contain all these factors are likely to be efficacious. Note that some of these factors enhance the placebo effect – hope and higher expectations would be inspired by a caring therapist with a believable explanation for one’s troubles and believable options for change. This makes any clear division between “common factors” and “placebo effect” somewhat problematic.

The two papers also didn’t provide estimates for the contribution of client characteristics to outcomes, although I’d imagine client characteristics are an important source of variance. And where does “regression towards the mean” enter the picture? Bottom line: the above percentages are suggestive but the actual numbers and variables could use some tweaking.

Laska and Gurman point out that although randomized controlled trials (RCTs) are the gold standard in research, they are often flawed, and so the conclusions one can draw from them are limited. For example, some RCTs have active comparison interventions that are supposed to match the main intervention in important respects (e.g., length of time, number of sessions). If the main intervention has better outcomes than the comparison intervention, the researchers may conclude there is something specific to the main intervention that made the difference. But most comparison interventions don’t include all the “necessary and sufficient” factors common to all effective interventions, so successful outcomes in the main intervention may not be attributable to anything special it does.

What we need are studies that compare interventions that share all the common factors and aim to inspire the same degree of hope, expectation, and buy-in, not just for the subjects but for the therapists as well. Then maybe we’ll be closer to designing therapies that offer more than what is commonly available.

References

Laska, K. M., Gurman, A. S., & Wampold, B. E. (2014). Expanding the lens of evidence-based practice in psychotherapy: A common factors perspective. Psychotherapy, 51(4), 467–481.

Lambert, M. J., & Bergin, A. E. (1994). The effectiveness of psychotherapy. In A. E. Bergin & S. L. Garfield (Eds.), Handbook of psychotherapy and behavior change (4th ed., pp. 143–189). New York, NY: Wiley.

 

Harnessing the Placebo Effect in Psychotherapy

Often when we think of the placebo effect, we think of sugar pills that have no “real” medicinal benefit but work their magic through expectation of benefit. In physical medicine, there is some reluctance to rely on psychological mechanisms for healing (even if these mechanisms trigger physiological processes, like the release of endorphins) given the taint of deception. After all, the physician is asked to “do no harm” and isn’t encouraging self-delusion a form of harm? Truth and goodness go together, right?

Well, yeah, in my book – but it’s complicated. Take psychotherapy. Psychotherapy is all about harnessing psychological mechanisms to get better. So are we to privilege some mechanisms over others? And when we try to heal ourselves, we often self-consciously rely on “belief” even though part of us knows we’re placing more faith in optimism than is warranted by the cold, harsh facts of our situation. But we also know that such faith can make a difference in getting through or getting buried.

A lot of psychological interventions instill hope, provide a plausible narrative that makes sense of one’s misery, and show a credible way out. The specific narrative and techniques matter less than whether the client buys into them. Plus there are the “common factors” of therapy that make a big difference in outcomes: the experience of positive regard, empathy, collaboration, and an alliance with the therapist. Leaving out placebo effects, common factors, environmental factors (e.g., “fresh start” experiences like a new job or new partner), regression to the mean, and personal characteristics of the therapist and client, the specific treatment may contribute as little as 1% to treatment outcomes (ouch!).

These thoughts were triggered by a recent meta-analysis showing the benefits of Cognitive Behavioral Therapy (CBT) to be declining as time goes on. The authors of the study speculate that this may be because CBT’s placebo effect is wearing off. Let’s face it: CBT is pretty old-hat. Yesterday it was CBT; today, it’s mindfulness. When you take away the hype, and the expectations it engenders, what’s left? Mostly the warm and fuzzies.

I’m still a big advocate of The Truth, come hell or high water. But I know that a lot of people aren’t such sticklers for Getting It Right. And for some people, belief really helps. It changes their physiology, behavior and environment – and somewhere in that sea of change, something real happens.

Still…