New Study Shows that Lying About Your Hamburger Intake Prevents Disease and Death When You Eat a Low-Carb Diet High in Carbohydrates

A few readers alerted me to a new study claiming that a plant-based, low-carb “Eco-Atkins” diet is associated with a lower risk of mortality and disease, while an animal-based low-carb diet is associated with an increased risk of mortality and disease, as well as an editorial by Dean Ornish in the Huffington Post lauding the study for supporting his emphatic declaration that an optimal diet is one high in fruits, vegetables, whole grains, legumes, soy products, fat-free dairy, and egg whites. Dr. Ornish titled the article “Atkins Diet Increases All-Cause Mortality” as if the study had shown cause-and-effect (or had anything to do with the Atkins diet), when of course it only would have shown this in an alternate universe where the laws of logic are reversed, or perhaps on Opposite Day.

Denise Minger has already produced a great critique of the specific findings of the study. I highly recommend hopping over to her blog and reading it if you haven't already. In this post, I'd like to make a few comments about its approach, and the abuse and misuse to which its findings have been subjected. In brief, these comments will address the following themes:

  • Anyone who draws the conclusion from this study that a low-carb, animal-based diet promotes disease while a low-carb, plant-based diet prevents disease should be teaching a graduate class called “How to Abuse Logic, Misuse Statistics, and Invert the Scientific Method.”
  • This study used a bizarre classification of “low-carbohydrate,” only one-third of which was determined by carbohydrate intake.
  • Epidemiological studies about meat intake usually tell us less about the risk or benefit of eating meat and more about people's propensity to lie about how much hamburger they eat.

Going Backwards

This study is an epidemiological, observational study. It looks at what people are eating of their own choosing, follows them for a few decades, and looks at how many people contract a given disease or die. According to the scientific method, this qualifies as an observation.

The scientific method goes like this: we make observations, we come up with ideas to explain them called hypotheses, we perform experiments to test predictions generated from those hypotheses, we make conclusions based on the experiments about whether our ideas are correct, and then if other people can replicate those experiments, more and more people will begin to accept our conclusions.

Let's depict it graphically. If we pretend this is a map and pay close attention to the arrows, we can see why the approach of this study is a bit like trying to travel from California to Virginia by going west. You're going to get pretty wet.

These authors decided to test their hypothesis by making more observations:

Because the leading causes of death in the United States — cardiovascular disease (CVD) and cancer — develop over many years, long-term studies of low-carbohydrate diets are needed to evaluate effects on mortality. However, randomized trials of low-carbohydrate diets on mortality are not feasible because of the difficulty in maintaining adherence and follow-up over many years. . . . Therefore, we prospectively examined the relationship between different types of low-carbohydrate diets and all-cause and cause-specific mortality in 2 large cohorts in the United States.

Splash! I hope they enjoy swimming in the Pacific, but the arrows simply don't flow backwards from hypothesis to observation, and amassing more and more observations to “test” your hypothesis leaves you with nothing but the same hypothesis and a huge stack of paper.

The syllabus for “How to Abuse Logic, Misuse Statistics, and Invert the Scientific Method” might look something like this:

  • 9/3/10 — How to Escape the Restraints of the Scientific Method
  • 9/10/10 — The Brazilian Paradox: Scientists Struggle to Explain How the Correlation Between Shoe Size and Reading Ability During Childhood Persists in Brazil Despite the Complete Absence From the Market of Brazilian Children's Books About How to Increase the Size of Your Feet
  • 9/17/10 — Mama's Muffins Always Tasted Good, Proof That Mama Must be Making Mario's Pizza, Verified to Taste Good in the Italian Prevention With Pepperoni Cohort, Consisting of Over 45,000 Person-Years and 76,000,000 Consumed Pepperonis
  • 9/24/10 — The Application of a New Statistical Method to Prove that Animal Protein Causes Cancer by Estimating Total Meat Intake From the Consumption of Hamburger Buns
  • 9/31/10 — The Scientific Method as an Anachronism: An Historical Overview
  • 10/6/10 — Application of a New Statistical Method to Prove the U.S. Constitution is an Anachronism by Measuring the Growth of Bureaucracy Over Time

And so on. What a class! Sign me up. Expert panels, here I come.

A Fuzzy Definition

The authors had a pretty funny way of determining what constitutes “low-carbohydrate.” They constructed a score out of thirty points. Ten of the points came from eating low-carbohydrate, ten came from eating high-fat, and ten came from eating high-protein. They didn't give detailed information on the protein and carbohydrate intakes of all the groups, so it's easier to look at the paper in which they originally defined this score (a separate study).

In table one, we see that those with the highest protein intakes consumed more than 26 percent of their calories as protein, those with the highest fat intake consumed more than 47 percent of their calories as fat, and those with the lowest carbohydrate intake consumed less than 29 percent of their calories as carbs.

This means that if you consumed 10 or 20 percent of your calories as carbs, it wouldn't bump your low-carb score up any more than eating 25 percent of your calories as carbs would. If you ate 50 percent of your calories as fat and 15 percent of your calories as protein, you couldn't gain any points by eating 10 more percent of your calories as fat, but you could gain a whopping NINE POINTS out of a thirty-point score by eating 10 more percent of your calories as protein.
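The arithmetic of a decile score like this is easy to sketch. The snippet below is a minimal illustration, not the study's actual code: the decile cutoffs are hypothetical (only the extremes match the figures quoted above), and it assumes each macronutrient's deciles are scored 1 through 10.

```python
import bisect

# Hypothetical decile cutoffs, in percent of calories. The cohort-specific
# cutoffs aren't fully reported; only the extremes here match the figures
# quoted above (top protein decile >26%, top fat decile >47%, bottom
# carbohydrate decile <29%).
PROTEIN_CUTS = [13, 15, 16, 17, 18, 19, 21, 23, 26]  # 9 cutoffs -> 10 deciles
FAT_CUTS = [26, 29, 31, 33, 35, 38, 41, 44, 47]
CARB_CUTS = [29, 33, 37, 40, 43, 46, 50, 54, 58]

def decile(value, cuts):
    """Return which decile (1-10) a value falls into, given ascending cutoffs."""
    return bisect.bisect_right(cuts, value) + 1

def low_carb_score(pct_carb, pct_fat, pct_protein):
    """Up to 10 points each for high protein, high fat, and LOW carbohydrate."""
    protein_points = decile(pct_protein, PROTEIN_CUTS)
    fat_points = decile(pct_fat, FAT_CUTS)
    carb_points = 11 - decile(pct_carb, CARB_CUTS)  # inverted: fewer carbs, more points
    return protein_points + fat_points + carb_points

# 10% and 25% of calories from carbs fall in the same bottom decile,
# so they earn identical carbohydrate points:
print(decile(10, CARB_CUTS) == decile(25, CARB_CUTS))  # both are decile 1
```

Under this scheme, climbing from the bottom to the top protein decile is worth nine extra points, while cutting carbohydrate from 25 to 10 percent of calories is worth nothing, which is exactly the asymmetry described above.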


No wonder, as Denise Minger already pointed out, these “low-carb” diets were so high in carbs.

Proof That Lying About Your Hamburger Intake Prevents Heart Disease

People obviously do not make dietary choices in a vacuum. They make them within a complex network of lifestyle, belief, culture, and perception. As a result, no matter how much adjustment is done to correlations, the correlations are never fully adjusted. To believe that “the correlations have been adjusted” is to believe that at any given point in history, all or most of the potential knowledge existing in the universe is known. Pretty arrogant, and pretty silly.

Epidemiological studies would be really interesting in populations that had no cultural beliefs about what constitutes a healthy diet, if ever those populations could be found. Otherwise, the people who most adhere to the prevailing cultural beliefs about what constitutes “healthy” are, on average, the ones who are most motivated, have the most sense of self-responsibility, and are most likely to be health-conscious. That these people have a lower-than-average risk of disease is the most uninteresting and unsurprising discovery one could possibly make.

For example, this study was based on nurses and doctors. About two-thirds of the population came from the Nurses' Health Study, and about one-third from the Health Professionals Follow-Up Study. They are employees of the medical establishment. That they really believe what the medical establishment teaches can be seen from the fact that every group listed in the paper had a PUFA intake between fifty percent higher than and almost double the national average of 6-7% of calories, something the American Heart Association recommends, albeit something that randomized controlled trials have shown to be harmful and fatal. It is in any case a clear seal of belief in the mainstream position.

They were all eating more than “5 a day” of fruits and vegetables too, something likewise recommended but which is probably actually health-promoting. Again, however, it is not typical of the standard American diet (SAD). These are the True Believers.

How accurate are the estimates of animal product consumption in these studies? Check out the graph below, which I adapted from the validation of the Nurses' Health Study 1980 questionnaire:

The Nurses' Health Study evaluated the accuracy of its food frequency questionnaire (FFQ) in a smaller subset of the nurses who participated. They had them fill out one FFQ at the beginning of the year. Then, four times through the year, once during each season, they had them spend a whole week meticulously weighing out and recording everything they ate. Then they filled out the FFQ a second time.

The dietary record is supposed to represent the “true” intake of food. The validation was remarkably rigorous because the investigators accounted for seasonality and extended the record over 28 days instead of three days, as was done in the China Study. Other studies have “validated” their FFQ with a 24-hour recall. The assumptions are not perfect, but the method of validation used for the Nurses' Health Study is about the best we can do short of placing a secret video camera in everyone's kitchen. (I probably shouldn't have said that. I don't want to give any of the government bureaucrats who read my blog any ideas!)

In order to determine how accurate an FFQ is, researchers calculate a value called “r-squared” between the dietary record and the FFQ. This represents the degree to which the true intake of a food accounts for the intake predicted by the FFQ. In other words, it's a rough measure of the FFQ's accuracy.

I pointed out once before in my post on red meat and mortality that the accuracy of the FFQ to predict the intake of animal foods is pretty abysmal.

In the chart, each food has two bars. The one on the left represents the accuracy of the FFQ at the beginning of the year and the one on the right represents the accuracy of the FFQ from the end of the year.

The FFQ's ability to accurately predict egg intake is the best of all the animal foods, coming in at almost 50%. The other animal foods don't look so hot. I included tea and beer, which come in at 75-80%, just to prove the point that an FFQ isn't inherently pathetic. But look at the accuracy for hamburgers: only 1.4%! Even after meticulously weighing out and recording their foods for 28 days, the participants still only reported their hamburger intake on the FFQ with 6.8% accuracy.

Perhaps the people just didn't remember how much hamburger they ate, or perhaps the 28 days they spent recording their diet were not representative of the other 337 days of the year. But the authors of the validation study noted a trend that suggests something else is going on:

Focusing on the second questionnaire, we found that butter, whole milk, eggs, processed meat, and cold breakfast cereal were underestimated by 10 to 30% on the questionnaire. In contrast, a number of fruits and vegetables, yoghurt and fish were overestimated by at least 50%. These findings for specific foods suggest that participants over-reported consumption of foods often considered desirable or healthy, such as fruit and vegetables, and underestimated foods considered less desirable. . . . This general tendency to over-report socially desirable foods, whether conscious or unconscious, will probably be difficult to eliminate by an alteration of questionnaire design.

In other words, a study showing a statistical relationship between hamburger consumption and some health-related variable is telling us less about hamburger consumption and more about health professionals' overwhelming propensity to lie about how much hamburger they eat.

Back to the Scientific Method

The end result is that this study constitutes an observation, and cannot be used to support a hypothesis of any kind. Hypotheses are ideas developed to try to explain observations. You cannot test a hypothesis by making more observations. It is not impossible to test a hypothesis about diet over the long term; indeed, trials have done this in the past, though they were usually quietly swept under the rug because the establishment didn't like their results. Compliance with low-carb diets will never be perfect, but it will probably be better than the 1.4% accuracy with which food frequency questionnaires predict hamburger intake. Logical fallacies cannot substitute for the scientific method just because the scientific method seems difficult or even infeasible.

Nevertheless, logical fallacies will not be disappearing from the scene any time soon. So there's lots of work in the blogger world ahead.


  1. I'd like to see somebody take the information gathered by this "study," follow the correct procedure, and give us a hypothesis of what this actually shows.

  2. Thanks for the sharp critique! Unfortunately, your pre-determined world-view pushes through between the lines.

    Dissecting a study the way you do (without a seed of constructiveness and without setting it into a bigger picture) does not do much more than reinforce pre-existing attitudes (whether against or pro low-carb).

    Important questions to ask here include: What do other similar studies report on the phenomenon (in this case, other prospective cohort trials on heart and cancer outcomes)? What do surrogate marker studies state on the phenomenon? What do randomized outcome trials demonstrate on the phenomenon?

    And, why do I only want to publish critique without a seed of constructiveness? Who do I serve?

    Want to check my analysis on Fung et al. and more?

    Want to check my analysis on Fung et al. and more?

    1. I'm sorry but I could find no "analysis" in your series of slides. It is just a collation of results reported in a few studies with no discussion of methodology or validity. It adds nothing to the discussion.

  3. It's just marketing.

    1. Buy expensive ad space in numerous mainstream media publications, say, 10.

    2. For the same price, fund a tiny study you support with promo afterwards, and get that study into AP and it becomes an article in more mainstream sources than you can count, from magazines to blogs to foreign language stuff, all at no further cost to you, and all coming through "the doorway of authority as NEWS" instead of as an ad.

    I see most research funding as nothing but "alternative marketing" now.


  4. This kind of study has long been known to be useless (yet expensive!) and I'm glad more people are catching onto that. Thanks for expanding on some of the minor points which make the data from this study less than useless.

  5. Hi Chris
    I too have some queasy moments when reading of "research" like this, but for a slightly different reason – MARKETING.

    For example, another Eco-Atkins study published last June in the Archives of Internal Medicine showed some pretty nifty correlations between eating soy, nuts, and other 'Eco-Atkins-friendly' foods in the "new" low-carb diet and a subsequent lowering of LDL (bad) cholesterol.

    Pretty nifty – until you read the fine print at the very bottom, the part containing the conflict-of-interest disclosures from the study authors.

    Turns out that the lead author boasts a laundry list of extensive financial ties with industry, particularly Solae, the world's largest soy company, which funded the "research". Two study co-authors listed were actually current or former full-time employees of Solae.

    More on this at: 'Doctors On The Take: How To Read The Fine Print in Medical Research Reports' on THE ETHICAL NAG: MARKETING ETHICS FOR THE EASILY SWAYED at:

    I've just edited it to include a link back to your article here.


  6. Hey Bob,

    Thanks! Unfortunately I'm a bit buried in work so had to scan through parts of it, but looks like a great article. It's ok that you used the images. Perhaps put in a link right near the image so it's easy to navigate back to the original? Either way I'm fine with it, the more exposure to truth the merrier. 🙂


  7. These types of "research papers" and articles written by the likes of Ornish really make it more difficult for the unknowing public to understand the truth. How someone takes such a misleading approach linking it to a "dead Dr. Atkins" kind of spoils the delicious, grass-fed roast beef I had last night.
    Thanks Chris for the clear-headed articulation. It would be nice to see a rebuttal in that rag.

  8. Anonymous,

    Thanks! I'm glad you liked the article!


    Well I don't know if they can read, but if you look at the size of the Federal Register they can sure write!

    Dr. Emily,

    I think the quality and value of research is roughly normally distributed with a mean of "mediocre." You may be able to divide that into similar distributions by journal, with means of varying quality, but even in the best, there is going to be junk in the tails of the distribution.

    Epidemiology is also going to get reviewed most often by epidemiologists, and epidemiologists tend to favor very loose interpretations of correlation, with a bias towards inferring causation even when doing so is illogical, simply because it inflates the importance of their work, which researchers in most fields like to do. Fortunately these journals will always publish a well-argued critical letter. Thanks for stopping in on my blog!

    Heather, I agree that excess iron is likely disease-promoting and that most low-carb research is confounded by the wheat issue, as well as the carb-quality issue. Thanks for stopping by!


    The main problem with that approach is that few people are going to be willing to record what they eat every single day, and the ones who are willing will not be a representative sample of the population. And if you want FitDay to be accurate, you still have to weigh your food, so you are back to the weighted dietary record.

    Thank you so much everyone for stopping by!


  9. Obviously older studies drawing data from 20 years ago wouldn't be able to do this, but new studies could potentially resolve much of this inaccuracy by using something like FitDay to have subjects document their food intakes daily, so very little memory would be required, and data entry could be easy.

    And, as you so handily point out, defining diets by their actual macronutrient contents would be a refreshing change for the better!

  10. Wow! So this study is basically a collection of groundless interpretations drawn from nearly useless data slapped with "correlation equals causation" headlines by those who want it to support their cause. Science at its best!

  11. I'd like to see a study where they track overall iron intake with mortality. "Red meat" isn't the biggest source of iron in the US. Your average McDonald's meal gets most of the iron from the bun and the fries. But trying to figure out what sources of "carb" people eat (enriched vs. not enriched) just adds even more variables to what you already point out is a complex issue. Plus iron is absorbed in wildly different ways, depending on the foods it is paired with and the genetics of the person eating the meal.

    However, stored iron is something that people CAN easily measure, and have, and ferritin levels DO track with both diabetes and heart disease. So until you figure out the "iron variable", the whole fat/meat/carb thing is a red herring.

    Not to mention the impact of gluten intolerance and celiac. If the "low carb" means "not much wheat", then that has a huge impact right there, for some subset of the respondents. But they don't take THAT into account either.

  12. I've been paying a lot more attention to the primary source nutrition literature recently, and I'm used to seeing crap like this study in AJCN, and don't even get me started on the lesser nutrition journals. But the Annals is probably in the second tier of internal medicine journals and should be more reliable than this. Embarrassing. That is one thing, though, that is not obvious on pubmed. There, all articles seem equal, when the truth is one should really know a bit about the level of journal one is reading.

  13. "I don't want to give any of the government bureaucrats who read my blog any ideas!"

    Government bureaucrats know how to read? 😉

    Anybody who bases conclusions on an ad hoc "score" distilled from data automatically gets filed in my "idiot box". You obviously can "prove" anything you want if you're willing to stick an arbitrary function in the middle of your analysis.

    1. I believe that was the point…
      I don't think they care about people who do things like fact-check, but if they influence one internet-article-reading simpleton out of five, they have succeeded.
