How do I write the recommendation Key info?

Click on a recommendation to open it, then navigate to the Key information tab (see the picture below).

There you fill in a summary for each of the four factors in going from Evidence to Recommendations in GRADE:

  • Benefits & Harms
  • Quality of Evidence
  • Preferences and values
  • Resources and other considerations

Choose your own style when writing these factors, but remember that the end user should be able to read it and feel that it helps their decision-making process.
Be short, but not too short. What you write should be concrete and useful.

A step-by-step 'interactive Evidence to Decision' (iEtD) framework has been developed through DECIDE. The guidance for the different factors is found here: Guidance for judgements for each criterion

You can use this iEtD framework for discussion and voting directly from your PICO questions in MAGICapp. Go to the options menu of any PICO and click "Create iEtD".

Benefits and Harms
State absolute numbers for the most important outcomes and the differences in benefits and harms.
This is perceived as useful for decision-making and can be used by the end user when informing their patients or in discussions with colleagues.
Remember that clinicians have easy access to the Evidence profiles, where all the expected benefits and harms are listed, so concentrate on the outcomes that differ the most between the interventions and are likely to be the tipping point in decision-making.
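If you want to present such absolute numbers, one common approach is to derive them from a baseline risk and a relative effect estimate. The sketch below illustrates that arithmetic only; the outcome, baseline risk and relative risk are made-up figures and not taken from this guidance.

```python
# Minimal sketch: deriving absolute effects per 1000 patients from a
# baseline risk and a relative risk (RR). All figures are hypothetical.

def absolute_effects_per_1000(baseline_risk: float, relative_risk: float):
    """Return (events per 1000 with the intervention, risk difference per 1000)."""
    control = baseline_risk * 1000                        # events per 1000 without the intervention
    intervention = baseline_risk * relative_risk * 1000   # events per 1000 with the intervention
    return intervention, intervention - control

# Example: baseline mortality of 10% and RR 0.80 (invented for illustration)
with_intervention, difference = absolute_effects_per_1000(0.10, 0.80)
print(f"{with_intervention:.0f} per 1000 with the intervention "
      f"({difference:+.0f} per 1000 compared with control)")
# -> 80 per 1000 with the intervention (-20 per 1000 compared with control)
```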

See full Evidence to Decision guidance 

Quality of Evidence/Confidence in effect estimates
We often see people get confused by the term Quality of Evidence. What we really mean is how confident we are in our effect estimates. That is, if we state that we have low Quality of Evidence, we do not necessarily mean that the studies we looked at were done poorly; they might be excellent. But they might be non-randomized, have a different patient population than we would ideally want, or we may suspect publication bias. We then have lower confidence in our effect estimates, because the real effects may differ substantially from the estimates. It is this message that should be conveyed to the end user.
 
You can state your overall confidence across all outcomes (which is generally the lowest confidence you have in any of the critical outcomes), or also state the confidence you have in some of the most critical outcomes. For example, if you have high confidence in the effect estimates for mortality and moderate confidence for stroke, you end up with overall moderate confidence in your effect estimates, but it might still be worth mentioning that you have high confidence in the effect on mortality.
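To make the "lowest rating across the critical outcomes" rule concrete, here is a minimal sketch; the outcomes and ratings are hypothetical and only illustrate the logic described above.

```python
# Minimal sketch: overall confidence is generally the lowest rating among
# the critical outcomes. Outcome names and ratings are hypothetical.

RATING_ORDER = ["Very low", "Low", "Moderate", "High"]  # lowest to highest

critical_outcomes = {
    "Mortality": "High",
    "Stroke": "Moderate",
}

overall = min(critical_outcomes.values(), key=RATING_ORDER.index)
print(f"Overall confidence in effect estimates: {overall}")  # -> Moderate
```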

Preferences and values
This is a hard summary to write, as we generally know little for certain about people's preferences. State where your comments come from: is it the panel's experience, did you include patients in the authoring process, or do you have preference studies?
Sometimes you will want to include societal values, such as keeping antibiotic resistance down versus achieving the highest cure rate with a first-line antibiotic.
We suggest you highlight the instances where you would expect some patients to choose an alternative to the recommended/suggested strategy.

See full Evidence to Decision guidance 

Resources and other considerations
Here you give information about expected resource use, either for the patient or for the institution responsible for the costs incurred. Resources are not only money paid directly, but also personnel and time spent.
Do you have a cost-benefit analysis? Should you make one?
Does the suggested strategy affect other aspects of the treatment or health care system?
The information included in 'Resources' will likely differ when moving from an international setting to a more local setting (national, institutional).

See full Evidence to Decision guidance 

For all factors: the level of your confidence, given the information you have about the factor
You can set the level of confidence you have in each factor's ability to bring about benefits. This both helps the guideline authors in their discussion and final decision on the strength of the recommendation, and gives the end user of the recommendation a visual cue of how the different factors weighed in on that strength.
In our user testing so far, not all users understood the color coding, but those who did found it helpful. Those who did not understand it were not bothered by it; they simply saw it as decoration.

You can also read about this in the GRADE handbook: Going from evidence to recommendations

