What do weight-loss studies REALLY show? Looking behind the scenes

Every so often, one of the big medical journals publishes a study by hardworking medical researchers looking for answers to the question, “Which weight-loss programs are effective for long-term weight loss?” (SPOILER ALERT: the answer is “none of them”).

A couple of weeks ago, the Annals of Internal Medicine published a meta-analysis examining a bunch of studies that themselves tested various commercial weight-loss programs for effectiveness. By “effectiveness”, they meant whether and to what extent the participants in the studies (not the programs in general) registered weight loss 12 months after starting the program. Among the programs evaluated were Weight Watchers, Jenny Craig, Atkins, and Health Management Resources (HMR), all of which come up below.

So, what did the study show? Well, it depends on where you read about the study, and what you can read from the study. Of course, readers of this blog have been well-served by many posts that show and explain scientific evidence for how unsuccessful diets are. And, if we look behind the scenes at what’s actually in this study, we can see just how limited the evidence really is.

This matters for a lot of reasons, but two big ones come to mind: 1) Almost all of these weight loss programs are expensive—at least $100/month, and some cost upwards of $600/month. Most people can’t afford that, and if their insurance covers it, that’s money not being spent on other (more effective) health promotion treatments. 2) Given the dismal success rates of all of the programs, they set people up to fail and feel bad about failing (while paying money for the privilege). So let’s now take a closer look.

If you read about the commercial weight-loss program study in the popular news media, you heard a variety of messages. Both the LA Times and Reuters reported the most digestible results, which were that Weight Watchers and Jenny Craig were more effective for long-term weight loss, with 3% and 5% average weight losses, respectively, at the end of a year.

Time provided a more thorough and nuanced report, in which the positive results are listed, but they’re put in perspective by medical experts who say that 3-5% weight loss is not much, and the design of the studies limits their applicability to real life (they’re not kidding—more on this in a minute). However, ever-optimistic about the possibility of long-term substantial weight loss, they quote several experts who maintain that 3-5% is a good start, and that a small amount of weight loss can nonetheless translate into better health measures (like blood pressure, blood sugar and cholesterol).

The Associated Press story is more upbeat about the study’s results, and even throws in a personal story of a former employee of one of the researchers who is now on Weight Watchers and has lost 7 pounds in one month. We’re now all supposed to think: Wow! Maybe that could be me, too! Or: Oh, I tried that program and gained back all the weight I lost and then some; what’s wrong with me?

So what’s the real story here? What are we supposed to think about commercial weight loss programs based on this study?

Reading through a medical journal article takes some time, patience, and background knowledge. I happen to benefit from the technical assistance and experience of my partner Dan, who’s an internist at a community health center. And I also do research on obesity and behavior change, so I’m used to plowing through this kind of information. Still, deciphering the careful understatement of medical authors takes some doing.

Here’s what I found in a close read of the article.

In order to evaluate a study for effectiveness, you need to take into account the following factors:

Duration—any study that lasts less than 12 months is useless, as we know that short-term weight loss can happen on just about any diet (grapefruits, anyone?). The hard (which is to say, virtually impossible) part is maintaining it long-term.

Adherence—this refers to how many people actually followed the diet plan of the study. Any study with low adherence rates is not helpful. Why not? Well, maybe they didn’t keep up the plan because the diet was not doable (or too expensive—more on this in a bit).

Attrition—this refers to how many people dropped out of the study for whatever reason (e.g. they moved, they had bad side effects, they lost interest, they didn’t like the diet plan, etc.). This is relevant for reasons similar to adherence.

Bias features—this is a bit complicated, but studies need to select their participants, evaluate their progress, and make conclusions in scientifically scrupulous ways. Any study with a high risk-of-bias is suspect (more on this, too).

Cost—the authors, being sincere and medical and all, actually want to apply their findings in the form of clinical recommendations for primary care providers. This means that cost matters (I mentioned this already but it bears repeating), as many patients can’t afford or aren’t willing to pay large sums for these commercial programs. Also, insurance companies that might consider extending coverage for these plans take cost into account.

Adverse outcomes—this is medical-speak for “when bad stuff happens to people”. For instance, in one of the diet plans (Health Management Resources, a very-low-calorie diet plan that includes meal replacement shakes), 6% of the participants had gall bladder operations during the study (many times higher than the incidence of such operations in the general population in North America), and 56% of them reported constipation. Both of these are adverse outcomes (although one is significantly worse than the other).

So here’s the inside skinny on these features of their study:

Duration: Only Weight Watchers, Atkins and Jenny Craig were subjected to 12-month studies. The rest of the programs were used in 3-6 month studies, some of which reported 2-10% weight loss results, but without follow-up, they don’t mean anything (do I have to bring up the grapefruit diet again?). There are lots of ways to lose weight quickly, but so far no systematic ways to maintain weight loss in the long term.

Adherence: Many of the studies didn’t measure adherence—it is hard to measure, as you either have to sequester people in a medical setting (this happens for some short-term studies, but it’s expensive and hard to find participants) or ask people to fill out diet recall forms, which are known to have problems with accuracy. This means that when a diet-plan study shows poor results, we don’t really know how much of that is due to the diet itself and how much is due to people not following it.

In addition, we don’t know WHY people didn’t follow it—this is a question I’m particularly interested in. Does the food not appeal to them? Does it require preparation that they don’t know how to do? Does it conflict with some of their regular eating habits? Do their friends and family not support eating in this way? We don’t eat in isolation—a big study on social networks and obesity showed that our friends’ body weights have a strong influence on ours. There’s more to say here, but then again, I’ve got a weekly column now (yay!), so I’ll leave it here for now.

Attrition: Attrition is easier to measure: you just count up who is still in the study at the end. Right. But again, as with adherence, we want to know WHY people drop out. The same intriguing questions remain, but these studies aren’t designed to ask them. Why not? Beats me.

Bias features: The authors came up with a way to rate the bias of each study, based on the ways participants were selected, ways progress was evaluated, and ways conclusions were drawn. They gave each study one of three ratings: Low-risk-of-bias, Medium-risk-of-bias, or High-risk-of-bias. It turns out that of the 36 studies included, most of them were rated medium to high-risk. And almost all of the Weight Watchers and Jenny Craig studies used to create those positive headlines were medium to high-risk. For instance, one of the studies selected their participants from women who had had breast cancer. Surely this is not a representative sample of the population. Moving on…

Cost: Most of these programs are expensive, some of them very expensive. As I mentioned above, they simply may not be worth the money—either yours or your insurance company’s.

Adverse outcomes: I already mentioned some adverse outcomes reported in the study for the HMR very-low-calorie diet. But many of the studies that the researchers looked at didn’t report adverse outcomes at all, so we don’t know much about them. Let me be clear: I don’t suspect that these diets are causing serious unreported medical conditions. But since very few of the studies examined lasted longer than 3-6 months, and none lasted more than 12-18 months, we don’t know about the most relevant adverse effect: regain of the weight lost, or the yo-yo diet effect.

Yo-yo dieting has lots of negative effects, both physical and psychological. This is well-known in the medical literature. However, in their recommendations for clinicians, the authors of this study fail to mention the potential downside of yo-yo dieting for their patients. At the very least this suggests that any serious medical studies on weight-loss programs need to follow their participants for longer periods. Of course, studies are very expensive, and body weight variation is complicated. It responds to an array of influences that range from environmental to economic to ethnic to psychosocial. So this is not easy science to do or understand.

One area of research I strongly support is trying to better understand people’s conceptions of health and healthy eating and healthy activity. If public health and medicine can help people better achieve their own health and eating goals, what would that be like? That’s a thought….
