11.3.14

Why I’m Not Dismissing the Latest “Animal Protein is Bad” Study (But Not Losing Sleep Over It, Either)



I keep doing this thing where I stand in the shower writing blog
posts in my head, emerging from the suds giddy and prune-fingered,
feeling strangely accomplished about the words I have not yet typed. And
then I squeegee the fog off the bathroom mirror and tell myself you can do it Denise!
and think about how awesome it will be to actually update my blog after
so much horrible silence. And then I load WordPress and think I’m blogging, I’m blogging, I’m finally blogging, it’s really happening.


And then suddenly it’s three hours later and I’ve opened 800 new
browser tabs in Firefox and have become distracted by something shiny,
Facebooky, or delicious, at which point all hope is lost.


This madness must end. Today, we blog.




So now I stand before you here in Cyberland, up on my
soapbox, rantin’ muscles ready to flex. In case you haven’t heard, the
world just got slammed with a new “meat is bad” tsunami—and it’s a
doozy. We’ve got the familiar swirl of headlines designed to strike fear
in our hearts (“That chicken wing you’re eating could be as deadly as a cigarette!” – The Financial Express), and pretty much every mainstream outlet caught it on their radar (hello ABC, Fox, The Guardian, Scientific American, Washington Post,
and any other big-hitters I left out). The actual study, which is
decidedly less popular than the press releases heralding its existence,
is available here: Low Protein Intake Is Associated with a Major Reduction in IGF-1, Cancer, and Overall Mortality in the 65 and Younger but Not Older Population.
Go take a gander. The gist is that animal protein will (purportedly)
shorten your life and increase your risk of chronic disease—at least if
you’re eating a bunch of it before you turn 66. (Once you’re in your
golden years, though, the study implies animal protein is a good thing. Tricky, eh?)
So what’s really going on here? Should we all go vegan until we retire?

To be honest, I get weary blogging about what seems like
the same study repackaged and regurgitated every few months under a
different name (and it appears I’m not the only one). Observational meat studies are a dime a dozen. The media-viral ones seem to pop up at least a few times per year (I’ve already dissected a few).
Ultimately, there’s only so much you can say about a study that uses
wobbly survey methods, tries to squeeze causation from correlation, and
falls victim to the confounders plaguing most epidemiological projects
involving food. So whenever I see a new Meat Is Gon’ Kill Ya Dead study
hijacking the airwaves, I feel kind of like ugh, except with more sadness and less nostril flare.

But this latest study grabbed my attention for a few reasons.
For one, it doesn’t orbit around the usual meat-damning suspects—saturated fat and cholesterol—but instead looks at animal protein, which I’m rather fond of discussing due to my previous shenanigans
on this blog. And two, the researchers padded their observational study
with some follow-up work on mice and cells, which at least earns them
an A for effort. It’s still not the sort of research that should keep
you awake at night, but at least in my mind, it’s interesting enough to
warrant a closer look.
And perhaps more importantly, I think there might be some truth to the researchers’ findings. Yep, I said it. Gasp shock horror!

So let’s plow into this thing, shall we?

The Study Low-Down

Here’s the gist. The study itself was a two-parter: half human, half
mouse (I realize that sounds like some kind of weird centaur). The human
part grabbed the most media attention, so let’s start with that.


For this leg of the study, the researchers analyzed data from NHANES III—a
giant survey of the health and nutritional status of American adults
and kiddos, which churns up reams of info about what the good folks of
this country eat. Basically, the researchers pulled data from almost
6,400 NHANES III participants aged 50 and over, looked at their food
consumption (gleaned from a 24-hour recall survey they answered two
decades ago), divided them up based on reported protein intake, and
followed their disease and mortality outcomes for up to 18 years. (As
best I can tell, that single recall survey was the sole source of the
study’s dietary data.)




Those eating less than 10 percent of their calories from protein were
scooted into the “low protein” group; those eating between 10 and 19
percent of their calories from protein comprised the “moderate protein”
group; and those eating at least 20 percent of their calories from
protein became the “high protein” group. Simple enough.
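(If you like seeing the arithmetic, here's a quick sketch of that bucketing in Python. This is purely my own illustration, not anything from the paper; the 4-calories-per-gram figure is just the standard rough conversion for protein.)

```python
# Hypothetical helper: sort one day's reported intake into the study's protein groups.
def protein_group(protein_grams: float, total_kcal: float) -> str:
    protein_kcal = protein_grams * 4            # protein is roughly 4 kcal per gram
    pct = 100 * protein_kcal / total_kcal       # percent of calories from protein
    if pct < 10:
        return "low protein"
    elif pct < 20:
        return "moderate protein"
    else:
        return "high protein"

# Example: 75 g of protein on a 2,000-calorie day is 15 percent of calories.
print(protein_group(75, 2000))   # -> "moderate protein"
```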




Initially, the only visible pattern was a much higher death rate from
diabetes among the moderate- and high-protein groupers—not really worth
sweating, though, because the sample size was too small to draw any
meaningful conclusions. Other than that, protein consumption didn’t seem
to be doing anything statistically noteworthy for the group as a whole:
it was unrelated to all-cause mortality, death from cancer, and death
from heart disease.




But here’s where it gets interesting. Instead of keeping all the
participants lumped together, the researchers tried stratifying folks
based on age—with the 50 to 65 year olds ushered into one group and the
66+ folks into another. The goal was to test for an age interaction, where a variable behaves differently depending on how old the participants are.
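(For the stats-curious: testing an interaction like this usually means adding an "age group × protein" term to a survival model and checking whether it's significant. Here's a bare-bones sketch of what that looks like with the lifelines library in Python. The column names (follow_years, died, high_protein, age_66_plus) and the CSV file are entirely hypothetical; this shows the general shape of the analysis, not the paper's actual code.)

```python
import pandas as pd
from lifelines import CoxPHFitter   # Cox proportional hazards model

# Hypothetical prepared dataset: one row per participant.
df = pd.read_csv("nhanes_subset.csv")
df["protein_x_age"] = df["high_protein"] * df["age_66_plus"]   # the interaction term

cph = CoxPHFitter()
cph.fit(
    df[["follow_years", "died", "high_protein", "age_66_plus", "protein_x_age"]],
    duration_col="follow_years",   # years of follow-up
    event_col="died",              # 1 if the participant died during follow-up
)
cph.print_summary()   # a significant protein_x_age term means protein's effect differs by age group
```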




And it turned out “age interaction” was there in spades. Suddenly, a
whole slew of diet-disease links cropped up—highlighting a trend for
high protein to be bad news for middle-aged adults (50 to 65 years) but a
boon for anyone older than that. Weird, right? It’s why protein didn’t
have many meaningful correlations for the participant pool as a whole:
its positive effects in the older crowd were canceled out by the
negative effects in the younger crowd, creating the illusion of
neutrality.



Anyway, the most interesting findings of that age stratification included:


  • The 50 to 65 crowd had a 74 percent greater risk of death from all
    causes for the high-protein group compared to the low-protein group
    (hazard ratio: 1.74), and more than four times the risk of dying from
    cancer (hazard ratio: 4.33; more on that arithmetic just after this list).
  • Folks aged 66 and older had a 60 percent lower risk of cancer
    mortality for the high-protein group compared to the low-protein group
    (hazard ratio: 0.40), and a 28 percent decrease in deaths from all
    causes (hazard ratio: 0.72).
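(As promised, here's that hazard-ratio arithmetic spelled out. A hazard ratio above 1 translates to "X percent greater risk," and one below 1 to "X percent lower risk." This little sketch is mine, not the researchers'.)

```python
# Translate a hazard ratio into the percent phrasing used in the bullets above.
def describe_hazard_ratio(hr: float) -> str:
    if hr >= 1:
        return f"{(hr - 1) * 100:.0f}% greater risk ({hr:.2f} times the risk)"
    return f"{(1 - hr) * 100:.0f}% lower risk"

for hr in (1.74, 4.33, 0.40, 0.72):
    print(hr, "->", describe_hazard_ratio(hr))
# 1.74 -> 74% greater risk (1.74 times the risk)
# 4.33 -> 333% greater risk (4.33 times the risk)
# 0.40 -> 60% lower risk
# 0.72 -> 28% lower risk
```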
In other words, the middle-aged adults eating the most protein faced
higher rates of cancer mortality and deadness in general. Meanwhile, the
66-and-older crowd was apparently benefiting from all things
proteinaceous, and those eating the most were living longer and more
cancer-freely. And because I can’t not: here’s a friendly reminder that
this is an observational study, and we can’t slap a cause-and-effect label on any of these relationships.



* Important caveat: both in the media hoopla and throughout the text of the Cell Metabolism paper, the results are reported as relative risk (e.g., “five-fold greater chance of getting heart disease”) rather than absolute risk
(e.g., “3 percent died of heart disease”)—a great way to make findings
seem wildly more dramatic and scary than they really are. For instance,
this study found that among the NHANES III participants who were
diabetes-free at the study’s onset, those eating the most protein were
73 times more likely to die of diabetes (yikes!). But if we
look at the absolute numbers, which are tucked away in a little PDF
supplement accompanying the study, we’d see that 0.2 percent of the
low-protein group died of diabetes (one person) versus 2.0 percent of
the high-protein group. That’s an absolute difference of 1.8 percentage points, which no longer sounds quite as horrifying.
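(To see how differently the same numbers can sound depending on the framing, here's the diabetes example worked out both ways. Note that the paper's 73-fold figure is a covariate-adjusted hazard ratio, so it won't match the crude ratio below; the point is the relative-versus-absolute contrast, not the exact multiplier.)

```python
# Percent of each group who died of diabetes, per the study's supplement.
low_protein_pct = 0.2    # the low-protein group (one person)
high_protein_pct = 2.0   # the high-protein group

crude_relative_risk = high_protein_pct / low_protein_pct    # 10-fold difference
absolute_difference = high_protein_pct - low_protein_pct    # 1.8 percentage points

print(f"crude relative risk: {crude_relative_risk:.0f}x")
print(f"absolute difference: {absolute_difference:.1f} percentage points")
```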



The researchers also added another layer to their analysis: percent
of calories from animal protein and percent of calories from plant
protein. Here’s where the plot thickens. When adjusting for animal
protein, those links between protein intake, cancer mortality, and
all-cause mortality largely went poof into the abyss: the
protein-cancer connection diminished substantially, and the
protein-total-mortality connection disappeared entirely. But when the
researchers tried adjusting for plant protein in the same way, nothing
happened.



So what does that mean? In a nutshell, that animal protein
specifically was driving those disease links, whereas plant protein
didn’t elicit an effect one way or another. (That rules out the
possibility that plant protein had special mortality-slaying superpowers
that made animal protein look bad by comparison.)



Should You Freak Out?
To figure out how seriously we should take this, let’s look at the study’s lifeblood: its dietary intake data. Although the Cell Metabolism paper
is strangely silent about how people’s food intakes were gauged (a bit
unnerving, considering how heavily this study depends on that data being
sound), we know that NHANES collects its information via 24-hour
recalls. The CDC website has a file discussing the whole process.
Basically, participants get phoned by an interviewer, are asked to name
everything they ate from midnight to midnight of the previous day, get
prodded to make sure they didn’t forget any snacks or butter pats or
late-night cookie nibbles, and then receive some follow-up questions
about tap water and salt and other fun things. According to the CDC
file, the participants also answer a short questionnaire “to ascertain
whether the person’s intake on the previous day was usual or unusual.”
After looking over that questionnaire, I’ve got to say the
word “ascertain” seems a bit optimistic to me. Keep in mind, the 24-hour
recall is the sole source of dietary data in this study—so it darn well
better strive for accuracy. And indeed, the NHANES survey employs a
five-step strategy to help participants remember every bite they ate,
described in “Nutrition in the Prevention and Treatment of Disease” (PDF) as follows:
  1. An initial “quick list,” in which the respondent reports all the
    foods and beverages consumed, without interruption from the interviewer;
  2. A forgotten foods list of nine food categories commonly omitted in 24-hour recall reporting;
  3. Time and occasion, in which the time each eating occasion began and what the respondent would call it are reported;
  4. A detail pass, in which probing questions ask for more detailed
    information about the food and portion size, in addition to review of
    the eating occasions and times between the eating occasions; and
  5. Final review, in which any other item not already reported is asked.
As far as boosting reporting accuracy, that’s all a great
help. But it appears the interviewers only asked one question to gauge
how typical each participant’s reported diet was, relative to
what they generally eat: “Was the amount of food that you ate yesterday
much more than usual, usual, or much less than usual?”
That’s it. No qualifier for what “much more” or
“much less” actually meant; no queries about specific foods; no prodding
to see whether yesterday happened to feature a birthday barbeque, thus
skewing the day’s frankfurter-to-kale ratio in a meatier direction than
usual. Just one vague question about total food quantity, whose answer
could only ever be subjective. (After the diet recall, each person’s
reported intake was converted into food codes and nutrient components—so
any flaws in that initial reporting trickled straight downstream into the final
statistical analysis.)
And it gets worse. While it’d be nice to suspend disbelief
and pretend the NHANES III recall data still manages to be solid, that’s
apparently not the case. A 2013 study took NHANES to task
and tested how accurate its “caloric intake” data was, as calculated
from those 24-hour recall surveys. The results? Across the board, NHANES
participants did a remarkable job of being wrong. Nearly everyone
under-reported how many calories they were consuming—with obese folks
underestimating their intake by an average of 716 calories per day for
men and 856 calories for women. That’s kind of a lot. The study’s
researchers concluded that throughout the NHANES’ 40-year existence,
“energy intake data on the majority of respondents … was not
physiologically plausible.” D’oh. If such a thing is possible, the
24-hour recall rests at an even higher tier of suckitude than does its
cousin, the loathsome food frequency questionnaire.
(And in case that’s not enough to make your blood boil: the
NHANES data is what the US government uses to determine what the
country is eating, formulate dietary guidelines, and divvy up funding.
Your tax dollars hard at work!)
If it’s that bad with calories, can we really expect the protein data to be much better?
In case you’re wondering why anyone uses such a destined-for-failure
way of determining food intake, the answer is simple: it’s a heck of a
lot cheaper (and easier) to ask people what they’re eating than to
helicopterishly stalk them all day long, weighing and measuring every
morsel of food headed for their lips. When it comes to massive surveys
like NHANES that track thousands of enrollees, affordability and
convenience reign supreme. And sometimes that means cutting corners with
precision.


Bottom line, it’s almost a given that the recall data here is less
than stellar. And despite all the magical things math can do, no amount
of statistical wizardry will heal numbers that are wrong from the start.


And to add insult to injury, keep in mind that this was the only diet
information collected for each participant over the course of 18 whoppin’ years.
Even if the recall managed to be accurate for the time it was recorded,
there’s no way to know whether the participants’ diets evolved over the
next two decades, and how any changes in their noshing habits impacted
mortality outcomes.



That’s a lot of trust to put in one day’s worth of self-reported eating!



Diamonds Among Coals?



Now that I’ve bashed the NHANES diet survey to the moon and
back, let’s look at why it might actually have some legitimacy. Bear
with me!
While combing through the Cell Metabolism paper, one thought
kept tickling my brain. Typically, if we dig into an observational
study about meat, we see the heavy meat eaters—particularly those
daredevils mowing down on red and processed varieties—engaging in a
variety of lifestyle practices that spell I AM NOT HEALTH CONSCIOUS loud
and clear: more smoking and drinking, less exercise, higher calorie
intake, fewer fruits and vegetables each day, the works.



In turn, all those health-defeating behaviors tend to confound the
true relationship between meat and various diseases and mortality. It’s
hard to decipher whether meat itself is increasing cancer and
heart disease and early death, or if reckless-with-their-health
folks—already on a collision course towards chronic illness—just eat more of
it because they don’t heed any conventional advice about diet and
lifestyle.



If a situation like that was at play in this study, and protein
intake was a surrogate for health-unconsciousness the same way meat
tends to be, we’d expect to see the folks in the high-protein group
fitting a similar anti-health profile—poorer diets overall, more risky
behaviors. In turn, that would mean the study’s results could’ve been
biased against the high-protein consumers due to all that residual
confounding.



So was that the case?



Unfortunately, the paper doesn’t make it very easy to answer that
question. There’s no data for biggies like drinking or exercise in the
paper’s “participant characteristic” list. But we can see that the high-protein group actually had relatively fewer
smokers than the low-protein group (18.2 percent versus 21.8 percent
for current smokers; 37.8 percent versus 39.8 percent for former
smokers), and that the high-protein group reported a lower
calorie intake than the low-protein group (though heaven knows if
that’s accurate). In addition, more people in the high-protein group
than the low-protein group reported trying to lose weight during the
past year (43.9 percent versus 37.5 percent), as well as changing their
diet for health reasons (29.3 percent versus 15 percent). But it’s hard
to say whether that’s a reflection of greater health awareness or poorer
health when the study started.



What can we piece together from that?



Here’s my take. Contrary to what we might assume, the deck probably wasn’t
stacked against the high-protein eaters from the start. If anything,
the study’s confounders should have given them an advantage in their
health outcomes. And I think that possibility is supported by more than
just the (admittedly sparse) participant characteristics.
Here’s why. When the researchers took their protein
correlations and adjusted for fat and carbohydrate intake (as percent of
total calories), the numbers didn’t budge. That’s pretty
interesting, because this batch of NHANES III surveys happened at the
height of the nation’s fat-phobia, when mainstream thought was that all
fat was bad—regardless of whether it came from something hooved, winged,
or rooted in the dirt. Since adjusting for fat intake didn’t dissolve
the links between protein and mortality, it reduces the likelihood that
fat was acting as a confounder here.
Likewise, protein—at least until this study came out and
ignited terror in omnivorous hearts near and far—has been the only
macronutrient not demonized by any popular diets or mainstream
health authorities. Fat and carbs have received more than their fair
share of bashing over the years, but protein, as far as conventional
thought goes, has clung tightly to its health halo—emerging unscathed
from even the bloodiest of diet wars. (And the perception of “good
protein” certainly includes that from animal sources, thanks in large
part to the USDA’s push to consume our meat and dairy lean. How many
egg-white omelets and skinless chicken breasts have been choked down
in the name of health?)
So again, if we were going to find any bias in the survey data, it’d probably lean towards protein being a good
thing—at least in the eyes of the health-conscious crowd. The fact that
a non-stigmatized macronutrient had such defined links with mortality
cranks up its relevance, in my mind.
Of Mice and Rodent Chow (And Growth Factors and Protein)





Is your brain full yet? Save room, because there’s still
another piece of the study to run through our wringer—and this one’s a
lot more rambunctious and furry. To understand why protein might be
linked to cancer and overall mortality as their human study suggested,
the researchers conducted a series of experiments on mice, feeding them a
range of protein levels mirroring that of the NHANES III participants—4
percent to 18 percent of calories. The prime goal was to see whether
tweaking those protein levels would impact levels of insulin-like growth factor 1 (IGF-1) circulating in the mice’s bodies, as well as cancer incidence and progression.
But first, let’s back up for a moment and get some context on this whole IGF-1 thing and why it’s so relevant.
As you might’ve seen in some of the news reports, the lead
researcher of this study was Valter Longo—the director of the University
of Southern California’s Longevity Institute, who already has a scroll
of really cool studies under his belt (mostly on fasting and cancer). And he was profiled on “Through the Wormhole” with Morgan Freeman, which ups his awesomeness quotient considerably. Because science.

And in the world of aging research, IGF-1 is a bona-fide spotlight stealer. As its name implies, insulin-like growth factor 1 is
a hormone molecularly similar to insulin, with its modus operandi being
“grow, grow, grow!” It promotes growth for nearly every cell in your
body—building muscle, making young’uns get taller, creating new brain
cells, repairing nerve damage, and doing other awesome things that keep
your body cranking like the fabulous machine it is. But IGF-1 is kind of
a double-edged sword. And the bad-slicey side plunges right through the
heart of longevity.
Part of the problem is that, while fulfilling its
growth-promoting duties, IGF-1 doesn’t distinguish between healthy cells
and damaged ones—potentially spurring cancer proliferation and contributing to tumor growth, if the conditions are right. High levels of IGF-1 have been linked to breast cancer, prostate cancer, bladder cancer, colorectal cancer, endometrial cancer, and lung cancer
(though most of that research is observational, so there’s always the
possibility of tumors increasing IGF-1 levels instead of the other way
around, or a third unmeasured variable raising both). On the flip side,
folks with a genetic deficiency in IGF-1 appear nearly immune to cancer—a phenomenon Longo himself has investigated.
Apart from the potential cancer connection, IGF-1 plays a
huge role in the aging process. After all, the cycle of cells growing,
dividing, and repairing is just a fancy way of explaining that they’re
aging—so IGF-1 is pretty much orchestrating how rapidly that happens.
And the evidence comes from more than just the usual rat
and test-tube studies. As far as human data goes, there’s some
interesting research showing a connection between IGF-1 levels and
lifespan when we look at the oldest of the old. A disproportionate
number of centenarians have mutations affecting their IGF-1 receptor activity, which probably plays a role in their long-livedness. Likewise, the offspring of centenarians have lower IGF-1 levels
than others of their age, gender, and BMI—suggesting the hereditary
component of longevity could be due to reduced IGF-1 trickling through a
family’s bloodline. (It’s less useful to look at IGF-1 levels in
centenarians themselves, since the hormone naturally declines with age
and will be pretty low in anyone who reaches the century mark.)
For longevity researchers, there’s an ongoing quest to
“hack” all this life-extending genetic stuff and help us average Joe
Shmoes reap the same benefits. Quite a few things can influence your
body’s levels of IGF-1 beyond genes—everything from your stress level to
your ethnicity to your estrogen status to the time of day—but diet is a
huge determinant, and perhaps the easiest to tweak and control. So it’s
not too surprising that food gets so much attention in this field. And
as Longo is keenly aware, protein is chief among the IGF-1 governors
we consume.
But the life-extension hunt is still a work in progress. For at least 60 years, the darling of longevity seekers was calorie restriction (CR)—with the first case of its life-extension properties appearing in 1935, when an experiment showed that rats lived longer if their energy intake was reduced. Ditto for mice, flies, crustaceans, and yeast, more studies revealed. And subsequent research showed the same longevity effect in calorie-restricted roundworms, which idled in their larval “dauer”
stage instead of catapulting towards maturity like usual. (Which made
their lives longer, but not necessarily more enjoyable: the dauer stage
consists mostly of not eating, not reproducing, and sitting like a lump
on a bump until food supply becomes abundant. Even if we humans had a
dauer stage, I can’t imagine wanting to stay there for very long. It
sounds too much like high school.)



Please, come join me in existential limbo.
The reasons behind calorie restriction’s perks? A biggie was thought
to be its suppressive effect on growth hormone and IGF-1—essentially
slowing down aging and age-related diseases. Calorie-restricted
organisms had much lower levels of IGF-1 than their more abundantly fed
counterparts, at least in the creatures and fungi that’d been studied up
to that point. And those reduced IGF-1 levels seemed to help protect
cells from DNA damage—a boon for warding off cancer. It made sense:
there’s a huge evolutionary and survival advantage to halting growth in
times of food scarcity.


Although there wasn’t controlled data available for humans (we live
too darn long to make lifespan studies an easy feat), the benefits of
calorie restriction were expected to be universal. And thus emerged an
era of books, gurus, theories, and research funding all pouring towards
the promising new field of “CR.”



But soon cracks in the calorie-restriction theory started appearing. More comprehensive rodent studies, including one looking at 41 different strains of mice, found that calorie restriction shortened the
lifespan in more strains than it extended it. Likewise, in wild mice,
as opposed to massively lab-domesticated ones, a lower energy intake did nada for average life expectancy
(though it did curtail cancer rates). A 25-year rhesus monkey
study—whose completion the longevity world had awaited with bated
breath—failed to show any life-extension benefit from feeding them
less. And while studies on calorie-restricted humans weren’t far enough
along to offer mortality data, the existing numbers showed their IGF-1 levels were pretty much the same as everyone else’s, casting doubt on the hope that those earlier rodent, yeast, and worm studies would translate to humans.



Well, f%@#.
What the heck was going on?



Eventually it emerged that calorie restriction, for most species, was only effective if it also restricted protein intake.
And as the study gods breathed more and more research into being, it
seemed all that deliberate hunger might be for naught. Protein
restriction alone could push down IGF-1 levels
and spark a cascade of longevity-enhancing changes. (In case you’re
wondering, neither fat restriction nor carbohydrate restriction seemed
to increase lifespan, at least in rodent models.)



But it didn’t end there! A new wave of studies zeroed in on methionine, a
sulfur-containing amino acid abundant in muscle meat and eggs. In mice,
restricting methionine—without reducing calories—was enough to increase lifespan and induce health perks
like slowing down immune aging, improving blood glucose, reducing IGF-1
and insulin levels, and protecting organ cells from oxidative damage.
The reason? It appeared to be twofold: methionine tends to generate
toxic byproducts as it’s metabolized, and restricting it dials down
growth signaling (including that IGF-1 hormone we keep circling back to).

And then the plot turned once more! Seriously, this saga had more
twists than a pretzel factory. A fascinating but woefully little-known
study in 2011 showed that in mice, supplementing with glycine—an amino
acid found abundantly in connective tissue and gelatin and bone
broth—had the exact same life-extending effect
as restricting methionine. Without reducing calories or other amino
acids, glycine supplementation increased the rodents’ lifespan, reduced
fasting glucose and insulin, decreased IGF-1 levels, and nearly halved
their triglycerides—the very perks that’ve variously been attributed to
calorie restriction, protein restriction, and methionine restriction.


Let me make it abundantly clear: THIS IS HUGE.
If the glycine finding translates over to humans (which I strongly
suspect it does), life-extension-seekers may be barking up the wrong
tree—or at least an unnecessarily complicated one—by trying to
selectively reduce animal protein in order to live longer, as Longo
seems to support. A wiser method could be simply getting a more
“biologically appropriate” balance of amino acids than the standard
Western diet typically provides. That means eating more of the
glycine-rich foods that’ve been gradually stripped from our menus—things
like skin, bones, tendons, tripe, feet, hooves, ears, connective
tissue, and some organ meats—and less of the muscle meat typically
dominating our fridges and freezers.



So to put all that in a Reader’s Digest version, the history of life-extension research went something like this:




“Calorie restriction extends rats’ lifespans. We must eat less to live longer!”



“Wait… reducing protein without reducing calories does the same thing. We must eat less protein to live longer!”



“Well I’ll be darned! The whole ‘life extension’ thing works just by
limiting methionine. Methionine bad. Other amino acids okay! Down with
meat!”



And now, it seems we’re at yet another crossroads—one where
methionine intake may become less important than its balance and
interaction with other nutrients, especially glycine.



*Note: This is deliberately oversimplified and lots of other
really interesting discoveries happened. But heaven knows this blog post
doesn’t need to be even longer than it already is.



Now back to Longo’s mice.
Exhibit A: Denise has inserted yet another cute animal picture to distract you from the fact this blog post is horrifyingly long and word-full.
In a nutshell, the researchers put groups of mice on
different experimental diets: one relatively high protein (18 percent of
total calories) and one very low (4 to 7 percent of total calories).
Then the mice were injected with cancer cells—melanoma for one
experiment, breast cancer for another—in order to kick off the
tumor-growing process.
Now brace yourself for some China Study déjà-vu.
Fifteen days after getting their melanoma implants, all of the mice on
the high-protein diets had developed measurable tumors—compared to only
80 percent of the low-protein group. (Over the course of the experiment,
that number rose to 90 percent, but never got any higher.) What’s more,
the low-protein group’s tumors seemed to grow at a much slower rate: by
the final day of the experiment, the average tumor size in the
high-protein group was 78 percent bigger than in the low protein group.
When that experiment was repeated with breast cancer cells,
the results were pretty similar—except the tumor rate of the
low-protein group maxed out at 70 percent of the mice, while the
high-protein group was universally tumor-stricken.
And to tie IGF-1 back into the picture, the low-protein mice—as we
might expect—had significantly lower levels than their protein-gorging
brethren.



So there’s the gist. Is it a legitimate strike against eating lots of protein?
There’s one major reason I’m reluctant to draw any conclusions from all this (apart from the whole we aren’t mice
thing). And that reason is called “AIN-93G standard chow.” That’s the
name of the lab diet used for the high-protein mice, according to some
notes in the paper’s supplement. You can download the AIN-93G specs here, but if you’d like to save yourself the effort (and hard drive space), here are the top six ingredients:
  • Corn starch (397 g)
  • Casein (200 g)
  • Maltodextrin (132 g)
  • Sucrose (100 g)
  • Soybean oil (70 g)
  • Cellulose (50 g)
Yum?



A lot of things are wrong with this picture, such as “where are the food things?”—but
for the sake of brevity, I’m just going to focus on that second
ingredient: casein. It’s one of the major proteins in milk, and it’s got
an awful track record for promoting tumor growth more than other types of protein, including its dairy-derived cousin whey.



I’ve already written tomes on casein, cancer, and rodents in previous blog entries—including my Forks Over Knives critique and China Study critique—so I won’t torture you by rehashing it all here. Chris Masterjohn also has some awesomesauce posts
on the subject, so hop on over there if you’re insatiably curious about
it all. The bottom line is that when we look at the mice-and-protein
studies outlined in Longo’s paper, this is what we’re dealing
with: a cocktail of purified ingredients, with the protein component
being a well-known promoter of cancer in rodents. It’s not at all
surprising that the mice eating the most of it sprouted tumors like mad.
But it’s impossible to say how much of that’s due to protein per se, or
to casein—especially casein that’s been stripped of all the other
goodies in dairy and tossed into a party bag of refined junk.



Putting It All Together



For those of us in the ancestral, paleo, “real food,” low carb, and
other related communities, there’s a tendency to see a study like this
and be like RAAWRRR KILL IT BEFORE IT BREEDS at the first whiff of its
correlation-is-causation tone. And as someone who generally places
epidemiology in the ninth circle of Research Hell, I’ve certainly been
guilty of that myself. But one of the biggest gifts of observational
studies like this one is the opportunity to explore new hypotheses and
test out perspectives that challenge what we believe.



I think that’s definitely the case here.



Think of it this way. For most of human history, dietary consistency
was a fairy tale. Famines struck. Periods of scarcity tangoed with those
of abundance. We gorged on energy-dense foods like meat when they
became available, knowing full well we might not be so lucky the next
day or week. And to be sure, we ate the whole freakin’ animal after a
kill—not just the skeletal muscle.



Constant abundance and pickiness are absolutely new to our bodies,
even for those of us eating foods we deem ancient or ancestral. So it’s
really not all that far-fetched to think that America’s animal protein
habits—heavy on the methionine-rich muscle meats, scant on the glycine,
swimming in ceaseless surplus instead of punctuated with scarcity—could
be a problem for our health.



Perhaps it’s not a coincidence that many of the world’s
longest-living populations eat fairly low-methionine diets or
periodically abstain from protein-rich foods (like in Ikaria,
where the predominantly Orthodox Christian residents cyclically fast
from animal products). And perhaps just as relevant as the types
of foods we eat is the manner in which we eat them—nose-to-tail for
animals, with some plant-only days thrown in for good measure.



That doesn’t mean the solution is to go vegan. Nor is it necessarily
to eat a low-animal-protein diet. But perhaps it’s time to seriously
explore options like protein cycling, periodic fasting, or just cooking
up heaps o’ bone broth to get that glycine down our gullets.



Just to be clear, nothing I’ve written here—even my moments of
quasi-defending this study—changes the fact that the NHANES III data is
observational and the diet recalls are basically handicapped from the
start, thanks to the history-revising sinkhole that is the human mind.
As always, correlation isn’t causation. It’s pretty
disappointing that the study’s own researchers seemed to forget that.
The reason I’m not sliding this study straight into the slush pile is
because regardless of its validity, it at least opens the door to some
important discussion. The bigger point is that the trends it excavated
and hypotheses it explored could feasibly be real—evolutionarily,
biologically, logically. In my opinion, the greatest value of this
study, then, is its role as a springboard for breaking out of the
comfort zone of what we think—and want—to be true.



Otherwise, I guess it could make a nice doorstop.