Definitive evidence-based medicine: the NICE–SUGAR trial
Last year, we commented on the hard job doctors have in gathering evidence from the scientific literature, learning from it and then actually applying it in clinical practice. As an example, we described the issue of hyperglycaemia associated with insulin resistance in critically ill patients: 'tight' glycaemic control had recently been recommended in such patients in order to improve several outcomes, including intensive care unit survival; nevertheless, a clear indication from high-level evidence studies was surprisingly lacking, and a meta-analysis showed negative results. We also briefly described a large randomized controlled trial (RCT), in progress at that time, planned to definitively compare glucose targets of 81–108 mg/dL versus 144–180 mg/dL in critically ill patients whose expected length of stay in the intensive care unit (ICU) was more than three days [Normoglycaemia in Intensive Care Evaluation and Survival Using Glucose Algorithm Regulation (NICE–SUGAR)]. The last patient of the NICE–SUGAR trial was randomized before the end of last year, and its results were recently published. The trial's primary outcome was 90-day all-cause mortality. Secondary outcome measures included survival time during the first 90 days, cause-specific death, the durations of mechanical ventilation and renal replacement therapy (RRT), and length of stay in the ICU and hospital. Randomization assigned 3054 patients to intensive control and 3050 patients to conventional control. In both groups, the management of glucose levels was guided by specifically designed web-accessed treatment algorithms. A blood glucose level of 40 mg/dL or less was considered a serious adverse event and was treated accordingly. Each event was recorded and reviewed by four study committees. The two groups had similar characteristics at baseline. 
The two treatment groups showed good glycaemic separation, with a mean absolute difference of 29 mg/dL in overall blood glucose levels. Severe hypoglycaemia was reported in 6.8% of patients in the intensive-control group and 0.5% in the conventional-control group (P < 0.001). Unexpectedly, more patients died in the intensive-control group (27.5%) than in the conventional-control group (24.9%) (P = 0.02). The treatment effect did not differ significantly between surgical patients and non-surgical (medical) patients. There was no significant difference between the two treatment groups in the median number of days in the ICU (P = 0.84) or hospital (P = 0.86), or in the median number of days of mechanical ventilation (P = 0.56) or RRT (P = 0.39). As far as sample size, methodology, scientific value, clinical effort and strength of final results are concerned, this trial is a new 'milestone' from the Australian and New Zealand Intensive Care Society Clinical Trials Group and collaborators. We previously commented that current clinical practice requires evidence-based medicine to answer unresolved debates and further improve applied protocols; our opinion, however, was that only results of a 'definitive level of evidence' should encourage changes in therapeutic strategies and the abandonment of good sense-based medicine. Indeed, non-definitive (and possibly contradictory) trials tend to foster widespread 'literature skepticism' in the medical community. The NICE–SUGAR trial can be presented as an example of a definitive level of evidence; for well over 80 years, the scientific and clinical endocrine community had remained unsure about the right targets in glycaemic management. Now, on the basis of this trial, we are able to conclude that a glucose target of <140 mg/dL in critically ill adults should no longer be recommended. 
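For readers more accustomed to SI units, the glucose thresholds above convert from mg/dL to mmol/L by dividing by the molar-mass factor for glucose (≈18.016 mg/dL per mmol/L). A minimal illustrative sketch (the constant and helper name are our own, not from the trial):

```python
GLUCOSE_MG_DL_PER_MMOL_L = 18.016  # molar mass of glucose (180.16 g/mol) / 10

def mg_dl_to_mmol_l(mg_dl: float) -> float:
    """Convert a blood glucose value from mg/dL to mmol/L."""
    return mg_dl / GLUCOSE_MG_DL_PER_MMOL_L

# NICE-SUGAR intensive target band, 81-108 mg/dL
print(round(mg_dl_to_mmol_l(81), 1))   # 4.5
print(round(mg_dl_to_mmol_l(108), 1))  # 6.0
# severe-hypoglycaemia threshold, 40 mg/dL
print(round(mg_dl_to_mmol_l(40), 1))   # 2.2
```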
As the accompanying editorial remarks, whether the observed harm resulted from the reduced blood glucose level, the increased administration of insulin, the occurrence of hypoglycaemia, methodologic factors specific to the trial or other factors remains unclear. Interestingly, in a letter, Van den Berghe and colleagues criticized the feeding practices of the NICE–SUGAR trial, in which there was a clear preference for enteral nutrition described as 'hypocaloric', whereas the patients in the Leuven studies received a high caloric intake from parenteral nutrition; in truth, the optimal amount of nutrition for critically ill patients is currently unknown. As a matter of fact, even before the NICE–SUGAR study, extensive observational data had shown a consistent, almost linear relationship between blood glucose levels in hospitalized patients and adverse clinical outcomes, even in patients without established diabetes. It has never been entirely clear, however, whether glycaemia served as a mediator of these outcomes or merely as a marker of the sickest patients, who present the well-known counter-regulatory stress response to illness. Careful glycaemic control and prevention of severe hyperglycaemia remain a primary goal and a standard of care in critically ill adults. Nevertheless, hypoglycaemia may be as harmful as severe hyperglycaemia. It might be speculated that a stress-induced elevation of glucose levels is the body's proper response to illness, attempting to shunt energy from temporarily non-essential skeletal muscle to critical organs. 
A recent retrospective examination of data from the Australian and New Zealand Intensive Care Society Adult Patient Database, covering 66 184 adult admissions, showed that, within the first 24 hours of ICU admission, the cumulative incidences of hypoglycaemia (defined as glucose levels <72 mg/dL) and blood glucose variability (defined as the occurrence of both hypoglycaemia and hyperglycaemia, with levels >216 mg/dL) were 13.8% and 2.9%, respectively. Increasing severity of hypoglycaemia was associated with a 'dose–response' increase in crude and adjusted ICU and hospital mortality, and blood glucose variability carried an independent risk of ICU and hospital mortality. These findings show that derangements of glycaemic levels have significant clinical relevance in critically ill patients and that protocol-driven tight glycaemic control must take into account patients who tend to develop early derangements of blood glucose, towards either hyper- or hypoglycaemia.
Dialysis: dose but not overdose
The Randomized Evaluation of Normal versus Augmented Level Replacement Therapy (RENAL) trial was eagerly anticipated by us last year: this other example of a 'definitive evidence' study was planned to test the hypothesis that higher-dose continuous veno-venous haemodiafiltration (CVVHDF), at an effluent rate of 40 mL/kg/hour, would increase survival compared with CVVHDF at an effluent rate of 25 mL/kg/hour. The trial randomized 1508 critically ill patients in 35 intensive care units in Australia and New Zealand: 747 were randomly assigned to higher-intensity therapy and 761 to lower-intensity therapy. The two study groups had similar baseline characteristics and received the study treatment for an average of 6.3 and 5.9 days, respectively. At 90 days after randomization, 322 deaths had occurred in the higher-intensity group and 332 in the lower-intensity group, for a mortality of 44.7% in each group. Overall, mortality rates were significantly lower, and recovery of kidney function in surviving patients more common, in the RENAL study than in the Acute Renal Failure Trial Network (ATN) study. It is possible that these differences are related to the different strategies for the timing of initiation of RRT and to the greater use of continuous, as opposed to intermittent, therapy as the initial mode of RRT in the RENAL study. However, they may also be due to differences between the two study populations. There was no significant difference between the intensity groups in the percentage of patients still receiving RRT after 90 treatment days, whereas hypophosphataemia was more common in the higher-intensity group than in the lower-intensity group (65% versus 54%).
A few months before this important Australian randomized trial, an interesting European multicentre observational study, the DOse REsponse Multicentre International (DoReMi) collaborative initiative, found similar results. The delivered RRT dose was examined in patients treated exclusively with either continuous RRT (CRRT) or intermittent RRT (IRRT) during their ICU stay, and the study provided no evidence of a survival benefit from higher-dose RRT. The dose was categorized as more intensive (CRRT ≥35 mL/kg/hour, IRRT ≥6 sessions per week) or less intensive (CRRT <35 mL/kg/hour, IRRT <6 sessions per week). The authors analysed 553 acute kidney injury (AKI) patients treated with RRT, including 338 who received CRRT only and 87 who received IRRT only. For CRRT, the median delivered dose was about 27 mL/kg/hour; for IRRT, the median dose was seven sessions per week. Of note, only a minority of the CRRT patients (22%) received a more intensive dose. Crude ICU mortality was 60.8% among the intensive CRRT patients versus 52.5% among the less intensive; in IRRT, it was 23.6% versus 19.4%, respectively. On multivariable analysis, there was no significant association between RRT dose and ICU mortality. Among survivors, a shorter ICU stay and duration of mechanical ventilation were observed in the more intensive RRT groups.
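The weight-normalized effluent dose used for this categorization is simply the effluent flow divided by patient weight. A minimal sketch of the arithmetic (the 35 mL/kg/hour cutoff follows the DoReMi categorization; the helper functions themselves are ours, for illustration only):

```python
CRRT_INTENSIVE_CUTOFF = 35.0  # mL/kg/hour, DoReMi 'more intensive' threshold

def crrt_dose(effluent_ml_per_hour: float, weight_kg: float) -> float:
    """Weight-normalized delivered CRRT dose in mL/kg/hour."""
    return effluent_ml_per_hour / weight_kg

def is_more_intensive(dose_ml_kg_h: float) -> bool:
    """Classify a delivered CRRT dose against the DoReMi cutoff."""
    return dose_ml_kg_h >= CRRT_INTENSIVE_CUTOFF

# e.g. 2000 mL/hour of effluent in an 80-kg patient
dose = crrt_dose(2000, 80)
print(dose, is_more_intensive(dose))  # 25.0 False
```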
The results of ATN, RENAL and DoReMi imply that, if a threshold dose of therapy must be achieved to optimize clinical outcomes, then increasing the intensity of therapy beyond this dose does not seem to provide further clinical benefit. Unfortunately, as shown by the DoReMi study group, even such a minimal dosing threshold is often not achieved.
Biomarkers: neutrophil gelatinase-associated lipocalin
In 2007, in the section 'Kidney disease beyond nephrology', we presented the most important results available at that time on urinary and plasma biomarkers explored in experimental and human AKI. Since then, one of the most active fields of study in nephrology has been that of novel biomarkers for AKI. Of these, neutrophil gelatinase-associated lipocalin (NGAL) is among those that received the most attention in the past year: in 2009 alone, over 135 articles on the subject were published, in both nephrology and non-nephrology journals. Human NGAL is a 25-kDa protein covalently bound to gelatinase and is one of the most upregulated transcripts in the kidney very early after acute injury. It can be measured in urine and plasma/serum using 'research-based' assays or standardized clinical laboratory platforms. Several studies have evaluated the performance of NGAL as an early marker of cardiac surgery-associated AKI (CSA-AKI). This is a particularly convenient clinical setting for the study of early AKI biomarkers, since there is a temporally predictable, discrete injury to the kidneys, and it is possible to measure urine and blood levels of these biomarkers before the actual injury and compare them with levels at pre-specified time points afterwards. Wagener and colleagues studied serial urinary NGAL levels in a large population of 426 adult cardiac surgical patients, of whom 20% developed AKI, defined as an increase in serum creatinine from pre-operative values of >50% or >0.3 mg/dL within 48 hours. Urinary NGAL, but not serum creatinine, correlated significantly with cardiopulmonary bypass and aortic cross-clamp times. However, urinary NGAL immediately and 3, 18 and 24 hours after cardiac surgery had limited diagnostic accuracy for predicting CSA-AKI, with areas under the receiver operating characteristic curve (AUC-ROC) ranging from 0.573 to 0.611. These findings contrasted with those of a smaller study by the same investigators. 
This later study has been criticized for the storage of specimens at −20°C, which may have affected the results: it has been demonstrated that urinary NGAL protein degrades significantly after prolonged storage at −20°C, but not at −80°C. Other, smaller studies were subsequently published. Tuladhar and colleagues studied 50 patients undergoing cardiac surgery requiring cardiopulmonary bypass (CPB): 18% developed AKI within the first 48 hours post-CPB, defined as an increase in serum creatinine of >0.5 mg/dL. Two hours after CPB, both plasma and urinary NGAL had increased significantly in the CSA-AKI patients, and both were predictive of subsequent AKI, with AUC-ROCs of 0.80 (plasma NGAL) and 0.96 (urinary NGAL), respectively. Haase et al. compared the performance of plasma NGAL and serum cystatin C in 100 adult cardiac surgical patients at baseline, on arrival in the ICU and at 24 hours post-operatively. On arrival in the ICU, both plasma NGAL and serum cystatin C were of good predictive value, with AUC-ROCs of 0.80 and 0.83, respectively; the performance of serum creatinine (AUC-ROC 0.68) and urea (AUC-ROC 0.60) was comparatively inferior. The authors also noted that NGAL and cystatin C on arrival in the ICU correlated with the subsequent duration and severity of AKI and with length of stay in the ICU. Although these studies evaluated the performance of individual biomarkers, it is likely that no single marker will suffice, and a panel of biomarkers may be the better option. Han et al. evaluated the panel approach to CSA-AKI in 90 adults, 40% of whom developed AKI; urinary kidney injury molecule-1 (KIM-1), N-acetyl-beta-D-glucosaminidase (NAG) and NGAL were measured at five time points during the first 24 hours after cardiac surgery. 
In this study, combining the three biomarkers enhanced the sensitivity of early detection of CSA-AKI compared with any individual biomarker: the AUC-ROCs for the three biomarkers combined were 0.75 (immediately after operation) and 0.78 (3 hours after operation), providing supportive evidence of the potential advantage of a biomarker panel.
AKI is also an important problem in the ICU, affecting up to 50% of patients. In contrast to cardiac surgery, neither the timing nor the severity of the renal insult is known in a general ICU. The renal insult may even occur before ICU admission: an observational study showed that AKI may already be present on the day of ICU admission in as many as 36% of patients. In the ICU, the kidney is likely to receive 'multiple hits' of varying severity, quite different from the seemingly 'single hit' model encountered in most surgical settings. Although biomarker levels can be measured in critically ill patients within the first 24 hours of admission to the ICU, this is unlikely to be the first 24 hours of their disease process. Furthermore, the infections, comorbid conditions and underlying chronic kidney disease (CKD) frequently encountered in an adult ICU population may elevate biomarker levels independently of renal injury, which could confound their predictive ability for AKI. Experimental studies have suggested that septic AKI may be characterized by a distinct pathophysiology differing from that of ischaemic or toxic kidney injury. These events may be reflected in unique patterns of biomarkers in septic versus non-septic AKI, both of which are frequently encountered in the ICU. All these factors contribute to the unique challenge of studying early AKI biomarkers in a heterogeneous adult ICU population.
Cruz and colleagues performed a prospective study on 307 adult ICU patients to estimate the diagnostic accuracy of plasma NGAL for the early detection of AKI. AKI occurred in 44% of patients. NGAL was higher in patients who already had AKI at enrolment (213.7 ng/mL) and in those who developed AKI within 48 hours (209.0 ng/mL) than in non-AKI patients (82.0 ng/mL). ROC analysis was conducted only on patients who did not have AKI at enrolment [26,27]. Plasma NGAL was a good diagnostic marker for AKI development (AUC-ROC 0.78) and allowed the diagnosis of AKI up to 48 hours before a clinical diagnosis based on the RIFLE criteria (an AKI severity classification whose acronym stands for Risk, Injury, Failure, Loss of function, End-stage renal disease). Plasma NGAL levels also correlated with the severity of AKI (R = 0.554, P ≤ 0.001) and had a good predictive value for RRT use, with an AUC-ROC of 0.82. Constantin et al. studied 88 ICU patients, 59% of whom developed AKI; ROC analysis was similarly conducted on patients who did not already have AKI at enrolment. They found plasma NGAL to be an excellent predictor of AKI (AUC-ROC 0.96) and a good predictor of RRT (AUC-ROC 0.79). Recently, Siew and co-workers prospectively studied a heterogeneous population of 451 critically ill adults, 64 (14%) and 86 (19%) of whom developed AKI within 24 and 48 hours of enrolment, respectively. Median urinary NGAL (uNGAL) at enrolment was higher among patients who developed AKI within 48 hours than among those who did not (190 versus 57 ng/mg creatinine, P < 0.001). The areas under the ROC curves describing the relationship between uNGAL level and the occurrence of AKI within 24 and 48 hours were 0.71 and 0.64 [95% confidence interval (CI): 0.57–0.71], respectively. 
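The AUC-ROC figures quoted throughout this section can be read as the probability that a randomly chosen AKI patient has a higher biomarker level than a randomly chosen non-AKI patient (the Mann–Whitney statistic). A minimal illustrative sketch on made-up NGAL values (the numbers are invented for illustration, not taken from any of the studies discussed):

```python
def auc_roc(cases, controls):
    """Empirical AUC: fraction of (case, control) pairs in which the
    case's biomarker value exceeds the control's (ties count 0.5)."""
    wins = 0.0
    for c in cases:
        for k in controls:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(cases) * len(controls))

# invented plasma NGAL values (ng/mL), for illustration only
aki = [210, 190, 320, 150]
no_aki = [80, 120, 60, 150, 95]
print(auc_roc(aki, no_aki))  # 0.975
```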
Urinary NGAL remained independently associated with the development of AKI after adjustment for age, the serum creatinine value closest to enrolment, illness severity, sepsis and ICU location, although it only marginally improved the predictive performance of the clinical model alone. According to these authors, although a single measurement of uNGAL exhibited moderate predictive utility for the development and severity of AKI in a heterogeneous ICU population, its additional contribution over conventional clinical risk predictors appears limited. In contrast to the above studies, which primarily evaluated the ability of NGAL to predict AKI, Bagshaw et al. looked at 83 patients with established AKI. They found plasma and urine NGAL to be significantly higher in septic AKI (293 ng/mL and 204 ng/mg creatinine, respectively) than in non-septic AKI (166 ng/mL and 39 ng/mg creatinine, respectively). As expected, the septic AKI patients also had more comorbid disease, higher illness severity and more organ dysfunction. It is worth noting that, in the absence of a non-AKI comparison group, with or without sepsis, this study cannot answer the question of whether NGAL levels can discriminate between the pathophysiologic processes of septic and non-septic AKI, or whether the presence of sepsis independently causes an additional increase in NGAL levels. This important topic will most likely be addressed in future studies. As something of a conclusion to these conflicting results, the NGAL Meta-analysis Investigator Group recently published an analysis of data from 19 studies in eight countries, involving 2538 patients, of whom 487 (19.2%) developed AKI. 
The authors found that, in the literature, different definitions of AKI, various AKI settings and varying timings of NGAL measurement with respect to the renal insult have been used to assess the predictive value of the NGAL level, all acting as effect modifiers of NGAL's usefulness as a biomarker. Furthermore, a clear cutoff NGAL concentration for the detection of AKI has not yet been established. Overall, NGAL levels clearly appeared to be of diagnostic and prognostic value for AKI: performance was better when standardized platforms were used rather than 'research-based' NGAL assays, in cardiac surgery patients rather than critically ill patients, and in children rather than adults. The diagnostic accuracy of plasma/serum NGAL, however, was similar to that of urine NGAL. The biomarker was also a useful prognostic tool with regard to the prediction of RRT initiation and in-hospital mortality.
The renaissance of ECMO
Extracorporeal membrane oxygenation (ECMO) is an increasingly utilized technique able to support patients with cardiac or respiratory failure who do not respond to conventional intensive care. Significant improvements in the indications, timing and technical utilization of ECMO are currently ongoing. The rationale for adding RRT in series with extracorporeal cardiopulmonary support is also intriguing, since protective ventilation can be combined with the removal of inflammatory mediators, a reduced need for diuretics and optimal control of fluid balance.
In 1971, prolonged extracorporeal circulation was used for the first time to treat a young man with acute respiratory distress syndrome (ARDS) after trauma. In the following years, several other cases of support for cardiac and respiratory failure from a variety of causes were reported. The entire technology of prolonged extracorporeal life support came to be known as ECMO: the technology involves much more than oxygenation, of course, but the acronym has remained. In 1975, the National Institutes of Health (NIH) sponsored a prospective, randomized, multicentre trial of ECMO for ARDS in adult patients. Only 10% of patients in both the ECMO and control groups survived, effectively halting research on ECMO for adult respiratory failure for the next 15 years.
The debate surrounding the use of ECMO as a rescue therapy for severe (but potentially reversible) cardio-respiratory failure reflects the difficulty of this type of research in the critically ill. This year, the results of an ambitious trial on the safety, clinical efficacy and cost-effectiveness of ECMO were published (CESAR: conventional ventilatory support versus extracorporeal membrane oxygenation for severe adult respiratory failure): 180 adults were randomized in a 1:1 ratio to receive continued conventional ventilation (n = 90) or referral for consideration of treatment by ECMO (n = 90). Strict enrolment criteria were set: patients were eligible if aged between 18 and 65 years with severe (Murray score >3.0 or pH <7.20) but potentially reversible respiratory failure. Exclusion criteria were high-pressure (peak inspiratory pressure >30 cm H2O) or high-FiO2 (>0.8) ventilation for more than 7 days, intracranial bleeding, any other contraindication to limited heparinization, or any contraindication to continuation of active treatment. All ECMO treatments were performed in the veno-venous mode with percutaneous cannulation; servo-controlled roller pumps and polymethylpentene oxygenators were used. The trial was necessarily unblinded, and only the researchers who performed the 6-month follow-up were masked to treatment assignment. Only 68 (75%) of the patients referred for consideration of treatment by ECMO actually received ECMO: of the remaining 22, 16 improved with conventional management (14 of these ultimately survived), 3 died within 48 hours, before transfer, 2 died during transfer and 1 had a contraindication to heparin. In the ECMO group, however, 63% (57/90) of patients survived to 6 months without disability compared with 47% (41/87) of those allocated to conventional management (relative risk 0.69; 95% CI 0.05–0.97; P = 0.03). 
Referral for consideration of treatment by ECMO led to a gain of 0.03 quality-adjusted life-years (QALYs) at 6-month follow-up, and a lifetime model predicted ECMO to be cost-effective in terms of cost per QALY. The authors recommended transferring adult patients with severe but potentially reversible respiratory failure, whose Murray score exceeds 3.0 or who have a pH of <7.20 on optimum conventional management, to a centre with an ECMO-based management protocol in order to significantly improve survival without severe disability. A strength of the study design is that the transport risk was included in the comparison: the relative risk benefit reported is, in fact, not for ECMO treatment but for referral to a single ECMO-capable hospital for ECMO assessment and management if criteria are met. It must be remarked that survival without severe disability in patients referred for ECMO assessment was achieved at twice the length of stay and twice the unadjusted cost. Furthermore, the ECMO benefit was significantly blunted if those transferred but not treated with ECMO were excluded from the analysis: the external validity of the trial might also be questioned, since it is not clear whether the results would have been the same (for either conventional management or ECMO) in a different referral centre.
Recently, Santiago and coauthors showed that a CRRT device can be safely and easily introduced into an ECMO circuit in order to treat renal dysfunction efficiently and improve fluid balance without significant technical complications or unexpected therapy interruptions. These observations represent a positive evolution of the concerns we reported last year about the interaction of two extracorporeal devices not conceived to work in parallel. Again, however, such innovative and probably life-saving therapies may lead to disappointing results if not delivered in referral centres.
In this light, another interesting technique coupling the expertise of nephrologists and intensivists in the management of respiratory failure is the use of low-flow extracorporeal gas exchange specifically configured to reduce the high carbon dioxide (CO2) levels associated with protective low tidal volume (VT) ventilation. It is well known that ventilation with low VT (6 mL/kg), compared with conventional VT ventilation (10–12 mL/kg), not only improved mortality from ARDS but also improved kidney function: specifically, patients receiving 6 mL/kg VT had fewer days with renal failure, defined as an increase in serum creatinine of more than 50% over baseline, in the first 28 ICU days than patients receiving 12 mL/kg VT. Nevertheless, respiratory acidosis and 'permissive hypercapnia' commonly occur in patients ventilated with protective techniques. In this light, extracorporeal CO2 removal may be seen as an adjunctive therapy to mechanical ventilation, or as a partial respiratory extracorporeal support. Vascular access is achieved with an arteriovenous or, ideally, a veno-venous catheter. Recently, arteriovenous CO2 removal (AVCO2R) has been performed in various experimental models and small case series [37,38]. Commercially available low-resistance gas exchangers have been tested in order to achieve near-total extracorporeal removal of CO2 production with blood flows ranging from 300 to 1200 mL/min in different models [39–41]. AVCO2R should allow CO2 normalization at low tidal volumes and respiratory rates. The results from experimental models are promising, but the absence of dedicated equipment feasible for semi-continuous and potentially prolonged therapy, the need for relatively high blood flows and the requirement for adequate anticoagulation still limit the routine use of such therapy in critically ill patients. 
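Protective VT is prescribed per kilogram of predicted body weight (PBW), not actual weight; the ARDS Network derived PBW from height and sex. A minimal sketch of that arithmetic (the helper names are ours, added for illustration):

```python
def predicted_body_weight(height_cm: float, male: bool) -> float:
    """ARDS Network predicted body weight (kg) from height and sex."""
    base = 50.0 if male else 45.5
    return base + 0.91 * (height_cm - 152.4)

def protective_vt(height_cm: float, male: bool, ml_per_kg: float = 6.0) -> float:
    """Tidal volume (mL) at a given mL/kg of predicted body weight."""
    return ml_per_kg * predicted_body_weight(height_cm, male)

# e.g. a 175-cm man: PBW ~70.6 kg, protective VT ~423 mL at 6 mL/kg
print(round(predicted_body_weight(175, True), 1))  # 70.6
print(round(protective_vt(175, True)))             # 423
```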
Nevertheless, it must be said that the technology and materials for extracorporeal circulation and vascular access are rapidly improving, and the availability of extracorporeal life support has become a reality in several centres [42,43]. In a recent clinical trial, Terragni and coauthors verified whether the respiratory acidosis caused by a protective VT of <6 mL/kg could be managed by extracorporeal CO2 removal. In this study, CO2 removal was performed through a dedicated pump-driven extracorporeal veno-venous circuit with a neonatal membrane lung and a haemofilter coupled in series. The original, though complex, design of the circuit coupled a membrane lung to a haemofilter in order, as the authors explain, to: (i) increase the pressure inside the membrane lung by adding the downstream resistance exerted by the haemofilter, thereby reducing the risk of air bubble formation; (ii) minimize the need for heparin by diluting the blood entering the membrane lung with the recirculated plasma water separated by the haemofilter; and (iii) enhance the performance of the extracorporeal device by extracting the CO2 dissolved in the plasma water separated by the haemofilter and recirculated through the membrane lung. The blood flow used in the trial was relatively low (191–422 mL/min), and the membrane lung had the surface area of a neonatal one (0.33 m2); the final priming volume of the system was small (140–160 mL). The authors showed that the extracorporeal assist normalized PaCO2 (with an absolute reduction of >30%), optimized pH and allowed the use of a VT between 3.7 and 4.6 mL/kg for about 144 hours. An improvement of morphological markers of lung protection and a reduction in pulmonary cytokine concentrations were observed after 72 hours of such 'ultraprotective' ventilation. The device proved to be safe and efficient. 
The enthusiastic editorial accompanying this study foresees that, in the near future, the management of ARDS will include a minimally invasive extracorporeal CO2 removal circuit and non-invasive continuous positive airway pressure ventilation: tracheal tubes will be avoided, sedation minimized, and ventilator-induced acute lung injury and nosocomial infections prevented.
Critical care medicine is making important strides forward: important clinical trials, efficient diagnostic tools (for AKI) and improved technological devices for extracorporeal support characterized the last year. Promising as these achievements seem, they will, as usual, need to be confirmed by the actual modification of significant outcomes for critical care medicine: improvement of long-term survival and/or quality of life.
Conflict of interest statement. D.N.C. and C.R. have received an honorarium for speaking for Biosite Incorporated.