The Nordic Nutrition Recommendations 2023 – balancing nutrition, health, and environment.

Authors:

March 2024 UPDATE: We published parts of our review as a peer-reviewed article in Nutrition. See the full text here!

As risk modelers, it’s exciting to see researchers advancing their field by combining different sources of evidence in a systematic way while also considering knowledge gaps and statistical uncertainty. So, when MatPrat – the Norwegian Egg and Meat Council – asked if we would be interested in performing an independent review of the new methodology developed to establish the most recent Nordic Nutrition Recommendations (known as the NNR), we were happy to accept.

We wrote a full report with our findings, so on this page we highlight the key findings from our review to spare you from reading 40+ pages (though we still want you to read the full report, and please send us your feedback once you do!).

A bit of background on the NNR and its relevance to nutrition recommendations worldwide  

Food, our health, and the environment are deeply interlinked. Yet, the development and adoption of food-based dietary guidelines (FBDGs) that explicitly blend human health and environmental aspects have been elusive. This is the very goal that the Nordic Nutrition Recommendations set for their 2023 edition.

The NNR are a set of dietary recommendations for the Nordic and Baltic countries. They are updated every few years – the previous version was NNR2012 – in the form of a public report and sometimes peer-reviewed manuscripts. The NNR report’s food-based dietary guidelines are intended to provide the scientific evidence that each country can use to develop its own dietary guidelines. The report is drafted by a committee of scientists from the participating countries who are subject matter experts in nutrition. The committee collects input from many scientists who author chapters on specific foods, nutrients, or other supporting subjects. Some of these chapters are published as peer-reviewed manuscripts in academic journals.

This collaboration has a rich history dating back to 1980, and the scope of these guidelines has become progressively more comprehensive: in 2004, they took a stride by incorporating exercise recommendations alongside nutritional guidance, and, as we mentioned earlier, the 2023 edition aimed to integrate environmental impacts for the first time. Given the novelty of this inclusion, the NNR Committee enlisted Chatham House to assemble background articles on the environmental impact of diets.

Although their guidance is intended for the Nordic and Baltic countries, the NNR is very influential worldwide. Indeed, their most recent report is gaining a lot of attention from the international community, as reflected in the list of international speakers at their launch event, which included the WHO Director-General (who participated remotely) and representatives from the FAO and the World Wide Fund for Nature. The influential nature of the NNR work highlights the need for their work to be transparent and science-based so that other researchers can use it, and policy-makers can rely on their recommendations.

What we found

In our report, we focused primarily on processed red meat (think hot dogs or bacon) and unprocessed red meat (e.g., beef steaks). The NNR authors recommended that 350 grams/week (50 g/day) or less of unprocessed red meat be consumed, and as little processed meat as possible. Overall, we found multiple issues, ranging from how the evidence for those foods was selected and used to a lack of consistent methods for combining health and environmental impact factors into the FBDGs. And beyond technical issues, we also found inconsistencies in the recommendations across food groups, which we summarized in Table 3 of our report, suggesting a general lack of transparency and reproducibility.

But at the end of the day, any scientific endeavor has drawbacks, so it is important for us to stay focused on whether these findings have real-life implications for the FBDGs.

Inconsistencies in the recommendations: an eggsasperating problem

Let’s review an example of the inconsistencies we found using the classic nutritional conundrum that is the egg: are eggs good or bad? How many can I eat? In Table 24, “Science advice for food groups for adults”, the NNR wrote that there was “Not sufficient evidence to inform a quantitative FBDG” based on the health effects of eggs on chronic diseases, and then advised authorities (in Nordic and Baltic countries) that “A moderate intake of egg may be part of a healthy and environment-friendly diet.” So far, so good: the information in that table remains consistent. However, in the section summarizing the recommendations for eggs, the language used under adverse health effects suggests that intake beyond one egg daily is not recommended, i.e., “Consumption of up to 1 egg per day can be part of a healthy diet.” And immediately prior to that sentence, the authors wrote that “evidence on health outcomes from intakes of more than one egg per day is limited.” The logic seems to be that the absence of evidence of risks above 1 egg/day might suggest that a risk exists above that amount!?

So, if we summarize the information and recommendations for eggs: the NNR authors stated that they don’t have enough information to provide quantitative recommendations for eggs, then proceeded to provide a quantitative recommendation of up to 1 egg per day. And puzzlingly, this maximum recommendation seems to be based on the fact that no negative health effects of egg consumption were observed in the literature they reviewed. Confusing, isn’t it?

Where do these inconsistencies come from, and how can they be addressed?

We don’t know what systematic reviews were initially considered for review, and why they were excluded

Again, no study is perfect. Systematic reviews, which aim to collect and summarize all available data to answer a question (a question like “Does eating red meat raise my risk of colon cancer?”), can disagree because of the way they choose to include or exclude studies, the way they carry out their analysis, or simply by chance. For example, the Burden of Proof study on unprocessed red meat and colorectal cancer (CRC) concluded that eating 0-98 grams per day of unprocessed red meat was associated with at least a 6% higher risk of CRC, yet also described the strength of this association as “weak” based on their star rating; this is the study the NNR chose to base their recommendations for red meat and CRC on (what they call a “qualified systematic review”). On the other hand, the NutriRECS consortium authors concluded that “The possible absolute effects of red and processed meat consumption on cancer mortality and incidence are very small, and the certainty of evidence is low to very low” and found no significant impact of red meat on colorectal cancer incidence.

Which one is right? How do we tell? The Burden of Proof study is a couple of years newer, so it includes some more recent studies and more study participants, but the NutriRECS study did a better job of choosing studies based on the participants’ consumption of strictly unprocessed meat OR processed meat instead of a mixture of the two. The Burden of Proof study was intended to focus on unprocessed meat, yet included some studies where participants may have consumed a mixture of processed and unprocessed meat, which might confound some of their estimates. Ultimately, although their estimates of the size and significance of the association between unprocessed red meat and cancer differed, the authors of the Burden of Proof study seem to have agreed with the NutriRECS authors, as they state: “While there is some evidence that eating unprocessed red meat is associated with increased risk of disease incidence and mortality, it is weak and insufficient to make stronger or more conclusive recommendations.” However, the NNR chose the Burden of Proof study and the (older) 2018 World Cancer Research Fund colorectal cancer report as their evidence base, and discarded the NutriRECS study. This exclusion may have had some merits, but we don’t know how they arrived at their final evidence base (or why NutriRECS was excluded).

Why do inclusion and exclusion of evidence matter?

The problem is that without a list of the reviews the NNR authors initially found, reviewed, and ultimately rejected, we don’t know why they picked the BoP study, and we can’t judge whether they were aware of the limitations of their choice. For example, as we described earlier, the BoP relies on some studies that didn’t differentiate unprocessed and processed red meats in their assessment. But processed meat is cured, smoked, or otherwise preserved with added ingredients, and it can be anything from ham hocks to mortadella and other deli meats, all the way to your more familiar strip of bacon. In contrast, as the name suggests, unprocessed red meat is muscle tissue from cattle (and perhaps sheep or lamb) that has not undergone any such processing.

With these comparisons in mind, it’s easy to understand why mixing both products can lead to estimation problems; at the very least, we are treating two very different products as one and the same. Would we consider decaf and caffeinated coffee equivalent if we wanted to understand the effect of coffee consumption on health?

To further confuse things, the original Meat and Meat Products chapter draft (incidentally, this file is no longer available on the NNR’s website, but it’s archived here courtesy of the Wayback Machine), which was supposed to inform the Committee’s report, doesn’t include the BoP as a source, and no updated version has been released at the time of writing. These inconsistencies put a gap between the evidence base and the final recommendation.

We don’t know how statistical uncertainty around health and nutrition data was handled or if it was even considered. Given that some food recommendations didn’t include ranges, statistical uncertainty may have been ignored altogether, but we can’t say because we don’t know how the NNR authors went from evidence to recommendations.

Let’s back up for a second and talk about what we mean by statistical uncertainty. To keep it short, we use the term “uncertainty”, or the fancier “epistemic uncertainty”, to numerically represent our lack of knowledge (or lack of confidence) about a statistic such as an average, typically expressed as a range of plausible values. The wider that range is, the more uncertain we are about the true value we are trying to quantify. This means our “confidence”, or the reliance we can put on a specific number being the “right” number, drops. For example, if you visit a new city and the bus you take to the city center is late two out of the five times you ride that route, it would be a long shot to state that the true delay rate for that route is 40% (2/5), as you only took it a handful of times. But with that information (and quite a few assumptions!), we could use this method to state that the plausible range for that delay rate goes from 11% to 77%, and the full range of uncertainty in this rate would look like Figure 1.

Figure 1: An example of statistical uncertainty.
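To make the bus example concrete, here is a minimal Python sketch (our own illustration, not code from the NNR or from our report) of the approach described in the Model Assist reference: with a uniform prior, the uncertainty about a true proportion after observing s “successes” in n trials can be described by a Beta(s + 1, n - s + 1) distribution. The exact endpoints depend on the interval convention used, but the resulting range is close to the 11% to 77% figure above.

```python
from scipy.stats import beta

# Hypothetical bus example: s = 2 late arrivals observed out of n = 5 trips.
s, n = 2, 5

# With a uniform prior, the uncertainty about the true delay rate can be
# described by a Beta(s + 1, n - s + 1) distribution.
delay_rate = beta(s + 1, n - s + 1)

# Best single guess (the mean) and a 95% uncertainty range.
print(f"mean delay rate: {delay_rate.mean():.2f}")   # ~0.43
low, high = delay_rate.interval(0.95)
print(f"95% range: {low:.0%} to {high:.0%}")         # roughly 12% to 78%
```

With more trips, the Beta distribution narrows and the range shrinks, which is exactly what “reducing uncertainty” means here.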

We also use the term “model uncertainty” when we are not sure which statistical model best represents our data. For example, if we had enough data on bus trips and accidents, we could 1) model the risk of accidents as increasing linearly with the miles traveled on that bus (assuming it’ll arrive at some point), or 2) perhaps assume an exponential relationship between miles and accidents. If both models fit the data well, we would say that we have model uncertainty, as either model could be an accurate description of the data.
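Continuing the bus illustration, here is a small Python sketch (with made-up data, purely for illustration) that fits both candidate shapes to the same observations; when neither clearly fits better, the choice of model is itself a source of uncertainty.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

# Hypothetical data: miles traveled on a bus route and the observed accident risk.
miles = np.array([10.0, 20.0, 40.0, 60.0, 80.0, 100.0])
risk = np.array([0.02, 0.035, 0.08, 0.13, 0.22, 0.35])

# Candidate model 1: risk grows linearly with miles.
lin = linregress(miles, risk)
risk_linear = lin.intercept + lin.slope * miles

# Candidate model 2: risk grows exponentially with miles.
def exponential(x, a, b):
    return a * np.exp(b * x)

(a, b), _ = curve_fit(exponential, miles, risk, p0=(0.01, 0.02))
risk_expo = exponential(miles, a, b)

# Compare how well each candidate reproduces the observations (sum of squared errors).
sse_linear = float(np.sum((risk - risk_linear) ** 2))
sse_expo = float(np.sum((risk - risk_expo) ** 2))
print(f"linear SSE: {sse_linear:.5f}, exponential SSE: {sse_expo:.5f}")
# When both fits are comparably good, the choice between them is model uncertainty.
```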

But we digress! The point here is that the NNR’s recommendations for some foods, like vegetables and fruits, have a range around them (500-800 grams/day), whereas the recommendations for other foods, such as cereals and meats, have a lower or upper limit, but not a range. The report does not provide enough detail to know how these numbers, whether a range or a single value, reflect the uncertainty in the data behind them. Does the 50 gram-per-day recommendation for red meat come from an average risk-free amount that doesn’t consider uncertainty, or from a more conservative measure like the lower confidence bound of that average? What sources of uncertainty are included in that estimate? And what model, if any, was used to derive these values?

Why does uncertainty matter?

When we ignore uncertainty, we are pinning our decisions on a value that might not be exactly the correct number, as in our earlier bus example. Putting an estimate on uncertainty lets us calculate how likely it is that we have the right value (what our “confidence” in that value is), and by how much we might expect our results to vary. Ignoring uncertainty can lead to all sorts of trouble. Consider election polls: leading up to the 1960 US presidential election between Richard Nixon and John F. Kennedy, the polls had the two neck-and-neck. Famed pollster Elmo Roper went so far as to predict a Nixon victory in 1960 based on a poll with a 1% margin of error (the term pollsters have come up with to express uncertainty in their polls). However, the two candidates ended up so nearly tied in the election that a shift of only a tiny fraction of voters would have changed the course of history, well within the poll’s margin of error.
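For readers who want to see where a margin of error comes from, here is a tiny Python sketch (our own illustration, with hypothetical poll sizes) using the standard formula for the 95% margin of error of an estimated proportion, z * sqrt(p(1-p)/n).

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical polls of a race that is a dead heat (p = 0.5).
print(f"1,000 respondents: +/- {margin_of_error(0.5, 1000):.1%}")   # about 3.1%
print(f"9,604 respondents: +/- {margin_of_error(0.5, 9604):.1%}")   # about 1.0%
# A 1-point lead means little when the margin of error is 3 points, and even a
# 1% margin of error cannot separate candidates who are essentially tied.
```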

To see the difference uncertainty can make, let’s review Figure 2, based on analyses we performed using the data presented in the NNR report. We built this figure by fitting models to data from the studies cited by the qualified systematic reviews the NNR referenced in their Meat and Meat Products chapter draft; this resulted in a total of 8 studies cited by the WCRF on CRC and red meat. The vertical axis on the left represents the amount of unprocessed red meat consumed per day, and the horizontal axis on the bottom represents the relative risk of developing CRC at some point, based on a daily consumption of 0-140 grams/day. Each colored point represents the average (the “mean”) risk for that amount relative to someone who does not eat unprocessed red meat. Each color represents the result of using a different type of model to combine the data from all the sources cited by the NNR on the topic, giving a sense of the model uncertainty here. For example, if we were to use a two-stage dose-response meta-analysis model (one of the more common methods for this type of work), we would estimate the average extra risk of CRC for someone who eats 100 grams per day at about 19%. But once we consider uncertainty, shown by the whiskers coming out from either side of each point, this value could include negative values, as it overlaps the vertical line at neutral risk. The official interpretation of confidence intervals is a little involved, but the short version is that when the interval (the whiskers in our plot) overlaps the vertical line, we cannot say there is an extra risk to consuming 100 grams/day of unprocessed red meat based on these data and this model.

Figure 2: The impact of statistical model assumptions on the estimation of colorectal cancer risks from red meat consumption.
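For readers curious about what a two-stage dose-response meta-analysis roughly looks like, here is a greatly simplified Python sketch. The study data below are made up for illustration (they are not the 8 studies from the WCRF report), and the sketch ignores within-study correlation between dose categories and between-study heterogeneity, both of which a proper analysis (e.g., the Greenland and Longnecker approach) would handle.

```python
import numpy as np

# Hypothetical inputs: for each study, dose levels (g/day of unprocessed red meat),
# the reported log relative risk at each level (vs. no intake), and its standard error.
# These numbers are made up; they are NOT the studies cited by the NNR or the WCRF.
studies = [
    {"dose": np.array([30.0, 70.0, 120.0]),
     "log_rr": np.array([0.03, 0.09, 0.12]),
     "se": np.array([0.05, 0.06, 0.08])},
    {"dose": np.array([25.0, 60.0, 100.0]),
     "log_rr": np.array([-0.01, 0.04, 0.10]),
     "se": np.array([0.04, 0.05, 0.07])},
    {"dose": np.array([40.0, 90.0]),
     "log_rr": np.array([0.06, 0.05]),
     "se": np.array([0.06, 0.09])},
]

# Stage 1: within each study, estimate a log-linear slope (log RR per g/day) through
# the origin (RR = 1 at zero intake), weighting each dose level by its inverse variance.
slopes, slope_vars = [], []
for s in studies:
    w = 1.0 / s["se"] ** 2
    slope = np.sum(w * s["dose"] * s["log_rr"]) / np.sum(w * s["dose"] ** 2)
    slopes.append(slope)
    slope_vars.append(1.0 / np.sum(w * s["dose"] ** 2))

# Stage 2: pool the study-specific slopes with a fixed-effect inverse-variance average.
# (A real analysis would also consider random effects and within-study correlations.)
wts = 1.0 / np.array(slope_vars)
pooled = np.sum(wts * np.array(slopes)) / np.sum(wts)
pooled_se = np.sqrt(1.0 / np.sum(wts))

# Translate the pooled slope into a relative risk at 100 g/day, with a 95% interval.
dose = 100.0
rr = np.exp(pooled * dose)
rr_lo = np.exp((pooled - 1.96 * pooled_se) * dose)
rr_hi = np.exp((pooled + 1.96 * pooled_se) * dose)
print(f"RR at {dose:.0f} g/day: {rr:.2f} (95% CI {rr_lo:.2f} to {rr_hi:.2f})")
# If the interval includes 1.0, the data and this model cannot rule out "no extra risk".
```

Whether the resulting interval crosses 1.0 is exactly the question Figure 2 illustrates for the real data, under several candidate models.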

The NNR did not describe a quantitative method to combine data across multiple metrics of any kind, such as different health outcomes, or to balance health and environmental concerns.

There are many different ways to balance factors in a decision, but the more complex a decision is, and the more factors it involves, the more important a systematic approach becomes to reaching a good result. Some researchers have suggested that humans cannot process more than four variables at a time when making a choice, so a decision with the multitude of factors covered by human health impacts, not to mention environmental impacts, definitely qualifies as complicated. In our report, we discuss methods for systematic decision-making that could have been used here.

To get a sense of the problem, think about the health impacts that the NNR associated with unprocessed red meat: cardiovascular disease, type 2 diabetes, and colorectal cancer. How would you choose a single safe amount among the different safe amounts for each illness? It might seem straightforward to pick the one with the lowest value (because then you can be sure you protect against the others). Indeed, this may have been the tactic of the NNR Committee, but no details to confirm this approach were included in the report. But also consider: what if the illness with the lowest safe amount were extremely rare, or a non-fatal condition? Would it still be reasonable to go with the lowest limit? The Global Burden of Disease project has previously taken the approach of weighting the health impacts associated with a risk factor by their total global burden to create an average minimum-risk value, so that’s a documented approach that could’ve been taken by the NNR.
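To illustrate how much the combination rule matters, here is a short Python sketch with entirely hypothetical numbers (neither the intake limits nor the burden figures come from the NNR or the GBD project); it contrasts the “take the lowest limit” rule with a burden-weighted average.

```python
# Hypothetical per-outcome "safe" intake levels (g/day) and hypothetical disease
# burdens (e.g., DALYs), used purely to illustrate two ways of combining them.
safe_amounts = {"cardiovascular disease": 70.0, "type 2 diabetes": 90.0, "colorectal cancer": 50.0}
burden = {"cardiovascular disease": 3.0e6, "type 2 diabetes": 1.0e6, "colorectal cancer": 0.5e6}

# Option A: take the most protective (lowest) limit across outcomes.
most_protective = min(safe_amounts.values())

# Option B: weight each outcome's limit by its share of total burden (a GBD-style average).
total_burden = sum(burden.values())
burden_weighted = sum(safe_amounts[k] * burden[k] / total_burden for k in safe_amounts)

print(f"lowest limit: {most_protective:.0f} g/day")          # 50 g/day
print(f"burden-weighted limit: {burden_weighted:.0f} g/day")  # ~72 g/day
# The point is not the numbers, but that the combination rule must be stated explicitly.
```

The two rules give different limits from the same inputs, which is why the report needs to say which rule (if any) was applied.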

Why does the combination of evidence matter?

If one fails to use a systematic approach to a decision when this many variables are involved, and when expert opinion is applied without guidelines, it’s very easy for bias to be introduced, even with the best intentions of being objective. Without a method to balance the importance of environmental impacts relative to each other and relative to human health impacts, and to ensure that factors were weighed the same way for each food, it’s really hard for outside observers to understand, let alone replicate, the decision process taken to guide the FBDGs.

Parting thoughts

Overall, we found that the NNR2023 Committee’s intended transparency fell short because:

  1. We don’t know what systematic reviews were initially considered for review, and why they were excluded. 
  2. We don’t know enough about how statistical uncertainty around health and nutrition data was handled or if it was even considered. Given that some food recommendations didn’t include ranges, statistical uncertainty may have been ignored altogether, but we can’t say with certainty because we don’t know how the NNR authors went from evidence to recommendations. 
  3. The NNR did not describe a quantitative method to combine data across multiple metrics of any kind, such as different health outcomes, or to balance health and environmental concerns.

If the foundation of the recommendations lacks clarity or reproducibility, their transparency comes into question. The substantial efforts invested in supporting these recommendations end up detached from the FBDGs, ultimately eroding their scientific validity. 

The integration of nutrition, health, and environmental impact is a crucial objective that has been pursued in other contexts and is likely to be adopted in the future. Other nations and regions can take lessons from this endeavor and improve upon these methods to establish transparent, systematic, and scientifically grounded food-based dietary guidelines to achieve the best possible outcomes for the health of both humans and the planet.

References

  1. Lescinsky, H., Afshin, A., Ashbaugh, C. et al. Health effects associated with consumption of unprocessed red meat: a Burden of Proof study. Nat Med 28, 2075–2082 (2022). https://www.nature.com/articles/s41591-022-01968-z#citeas
  2. NNR 2023 Committee. Global interest in the new Nordic Nutrition Recommendations – here are the speakers for the launch. https://www.norden.org/en/news/global-interest-new-nordic-nutrition-recommendations-here-are-speakers-launch. 2023-09-24
  3. Mi Ah Han, Dena Zeraatkar, Gordon H. Guyatt, et al. Reduction of Red and Processed Meat Intake and Cancer Mortality and Incidence: A Systematic Review and Meta-analysis of Cohort Studies. Ann Intern Med. 2019;171:711-720. [Epub 1 October 2019]. doi:10.7326/M19-0699
  4. Järvinen, R., Knekt, P., Hakulinen, T. et al. Dietary fat, cholesterol and colorectal cancer in a prospective study. Br J Cancer 85, 357–361 (2001). https://www.nature.com/articles/6691906
  5. Pramil N. Singh, Gary E. Fraser, Dietary Risk Factors for Colon Cancer in a Low-risk Population, American Journal of Epidemiology, Volume 148, Issue 8, 15 October 1998, Pages 761–774, https://academic.oup.com/aje/article/148/8/761/69260.
  6. Gallup, Inc. Gallup Presidential Election Trial-Heat Trends, 1936-2008. September 24, 2008. https://news.gallup.com/poll/110548/gallup-presidential-election-trialheat-trends-19362004.aspx#4 
  7. World Cancer Research Fund. Diet, nutrition, physical activity, and colorectal cancer. Continuous Update Project. Revised 2018. https://www.wcrf.org/wp-content/uploads/2021/02/Colorectal-cancer-report.pdf. ISBN 978-1-912259-00-7.
  8. John F. Kennedy Library. Fast Facts about John F. Kennedy.  February 2004. https://www.jfklibrary.org/learn/about-jfk/life-of-john-f-kennedy/fast-facts-john-f-kennedy/closeness-of-1960-presidential-election
  9. Halford, G. S., Baker, R., McCredden, J. E., & Bain, J. D. (2005). How Many Variables Can Humans Process? Psychological Science, 16(1), 70–76. https://doi.org/10.1111/j.0956-7976.2005.00782.x 
  10. EpiX Analytics, LLC. Model Assist. Estimation of the Probability p after having observed s successes in n trials.  https://modelassist.epixanalytics.com/display/EA/Estimation+of+the+probability+p+after+having+observed+s+successes+in+n+trials