Added Value of Intraoperative Data for Predicting Postoperative Complications

Abstract 


Background

Models that predict postoperative complications often ignore important intraoperative events and physiological changes. This study tested the hypothesis that accuracy, discrimination, and precision in predicting postoperative complications would improve when using both preoperative and intraoperative input data compared with preoperative data alone.

Methods

This retrospective cohort analysis included 43,943 adults undergoing 52,529 inpatient surgeries at a single institution during a 5-y period. Random forest machine learning models in the validated MySurgeryRisk platform made patient-level predictions for seven postoperative complications and for mortality occurring during hospital admission, using electronic health record data and patient neighborhood characteristics. For each outcome, one model was trained with preoperative data alone and another with both preoperative and intraoperative data. Models were compared by accuracy, discrimination (expressed as area under the receiver operating characteristic curve), precision (expressed as area under the precision-recall curve), and reclassification indices.

Results

Machine learning models incorporating both preoperative and intraoperative data had greater accuracy, discrimination, and precision than models using preoperative data alone for predicting all seven postoperative complications (intensive care unit length of stay >48 h, mechanical ventilation >48 h, neurologic complications including delirium, cardiovascular complications, acute kidney injury, venous thromboembolism, and wound complications), and in-hospital mortality (accuracy: 88% versus 77%; area under the receiver operating characteristic curve: 0.93 versus 0.87; area under the precision-recall curve: 0.21 versus 0.15). Overall reclassification improvement was 2.4%-10.0% for complications and 11.2% for in-hospital mortality.

Conclusions

Incorporating both preoperative and intraoperative data significantly increased the accuracy, discrimination, and precision of machine learning models predicting postoperative complications and mortality.


Introduction

Predicting postoperative complications in the preoperative setting better informs the surgeon’s decision to offer an operation as well as the patient’s decision to undergo surgery. These predictions can also guide targeted risk-reduction strategies (i.e., prehabilitation) for modifiable risk factors, plans for postoperative triage and resource use, and expectations regarding short- and long-term functional recovery. Online risk calculators, mobile device applications, and automated predictive analytic platforms can be easily accessed to accomplish these goals.(1–4) However, these models often ignore intraoperative data, and thereby miss potentially important opportunities to generate updated predictions that can further inform future decisions regarding postoperative triage, surveillance for complications, and targeted preventative measures (e.g., renal protection bundles for patients at high risk for acute kidney injury (AKI) and continuous cardiorespiratory monitoring for patients at high risk for cardiovascular complications).

Although it seems logical and advantageous to use intraoperative data in predicting postoperative complications, this advantage remains theoretical until it is established that predictive performance improves with the incorporation of intraoperative data. Furthermore, we would hope that these enhanced predictions could translate into better decisions and outcomes for patients undergoing surgery. This study addresses the former objective by quantifying the added value of intraoperative data for predicting seven postoperative complications and mortality with a MySurgeryRisk extension that incorporates vital sign and mechanical ventilator data collected during surgery. The original MySurgeryRisk platform uses electronic health record (EHR) data and patient neighborhood characteristics to predict postoperative complications and mortality but ignores intraoperative data.(4) We hypothesized that accuracy, discrimination, and precision in predicting postoperative complications and mortality would improve when using both preoperative and intraoperative input features compared with preoperative data alone.

Materials and Methods

We created a single-center longitudinal cohort of surgical patients with data from the preoperative, intraoperative, and postoperative phases of care. We used random forest machine learning models to predict seven major postoperative complications and death during admission, comparing models using preoperative data alone (i.e., EHR data and patient neighborhood characteristics) with models using the same preoperative data plus intraoperative physiological time-series vital sign and mechanical ventilator data. The University of Florida Institutional Review Board and Privacy Office approved this study with a waiver of informed consent (IRB #201600223).

Predictor Features

The risk assessment used 367 demographic, socioeconomic, comorbidity, medication, laboratory, operative, and physiological variables from the preoperative and intraoperative phases of care. The preoperative model used 134 variables; an additional 233 intraoperative features were added to develop the postoperative models. We derived preoperative comorbidities from International Classification of Diseases (ICD) codes to calculate Charlson comorbidity indices.(5) We modeled primary procedure type on ICD-9-CM codes with a forest structure in which nodes represented groups of procedures, roots represented the most general groups of procedures, and leaf nodes represented specific procedures. Medications were derived from RxNorm codes grouped into drug classes, as previously described.(4) Intraoperative input features added to the preoperative features to generate the postoperative model included heart rate, systolic blood pressure, diastolic blood pressure, body temperature, respiratory rate, minimum alveolar concentration (MAC), positive end-expiratory pressure (PEEP), peak inspiratory pressure (PIP), fraction of inspired oxygen (FiO2), blood oxygen saturation (SpO2), and end-tidal carbon dioxide (EtCO2). Each intraoperative time series was converted into statistical features, including the minimum, maximum, mean, short- and long-term variability, duration of measurement, and counts of readings within value ranges defined by the mean and standard deviation of the measurements across the overall dataset.(6) We also included surgical variables (e.g., nighttime surgery, surgery duration, operative blood loss, and urine output). Supplemental Digital Content 3 lists all input features and their statistical characteristics. Supplemental Digital Content 4 lists the percentage of missing values for each variable in the training and testing cohorts.
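To make this feature-engineering step concrete, the sketch below (Python; the function and parameter names are our own illustrations, not the MySurgeryRisk code) shows how a single intraoperative vital-sign series might be collapsed into the summary statistics described above, with value-range bins defined by a population mean and standard deviation.

```python
import numpy as np
import pandas as pd

def summarize_vital(series: pd.Series, pop_mean: float, pop_std: float) -> dict:
    """Collapse one intraoperative time series (e.g., heart rate sampled during
    surgery) into summary statistical features. pop_mean/pop_std are
    population-level values used to define the value-range bins (hypothetical)."""
    values = series.dropna().to_numpy(dtype=float)
    diffs = np.diff(values) if values.size > 1 else np.array([0.0])
    return {
        "min": values.min(),
        "max": values.max(),
        "mean": values.mean(),
        # short-term variability: change between consecutive measurements
        "short_term_var": np.abs(diffs).mean(),
        # long-term variability: spread of the series over the whole case
        "long_term_var": values.std(ddof=0),
        "n_measurements": values.size,
        # counts of readings in ranges defined by the population mean +/- 1 SD
        "n_below_1sd": int((values < pop_mean - pop_std).sum()),
        "n_within_1sd": int(((values >= pop_mean - pop_std) & (values <= pop_mean + pop_std)).sum()),
        "n_above_1sd": int((values > pop_mean + pop_std).sum()),
    }

# Example: heart-rate readings recorded once per minute during a case
hr = pd.Series([72, 75, 80, 78, 110, 105, 90, 85])
features = summarize_vital(hr, pop_mean=80.0, pop_std=15.0)
```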

Predictive Analytic Workflow

The proposed MySurgeryRisk PostOp algorithm is conceptualized as a dynamic model that readjusts preoperative risk predictions using physiological time series and other data collected during surgery. The resulting adjusted postoperative risk is assessed immediately at the end of surgery. This workflow simulates the clinical tasks faced by physicians involved in perioperative care, whose preoperative assessment of a patient is subsequently enriched by the influx of new data from the operating room. The final output, MySurgeryRisk PostOp, is a personalized risk panel for complications after surgery with both preoperative and immediate postoperative risk assessments. The algorithm consists of two main layers, preoperative and intraoperative, each containing a data transformer core and a data analytics core.(4) Details regarding the MySurgeryRisk predictive analytic workflow are provided in Supplemental Digital Content 5. Briefly, the MySurgeryRisk platform uses a data transformer to integrate data from multiple sources, including the EHR and zip code links to US Census data for patient neighborhood characteristics and distance from the hospital, and optimizes the data for analysis through preprocessing, feature transformation, and feature selection. In the data analytics core, the MySurgeryRisk PostOp algorithm was trained to calculate patient-level immediate postoperative risk probabilities for the selected complications using all available preoperative and intraoperative data with random forest classifiers.(7) We chose random forest methods to maintain consistency with the original MySurgeryRisk model;(4) that publication also describes our methods for reducing data dimensionality. Random forest models are composed of an ensemble of decision trees (i.e., a forest of trees); each decision tree performs a classification or prediction task, and the most common class (i.e., the majority vote) or the average prediction is taken as the output. Supplemental Digital Content 6 lists allowable ranges for continuous variables, determined by clinical expertise. Figure 1 illustrates our method for building the random forest machine learning models and the model analytic flow.
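As a minimal illustration of the analytics core, the sketch below trains one random forest on preoperative features alone and one on combined preoperative and intraoperative features, then compares their discrimination and precision. It uses scikit-learn with synthetic placeholder data and is not the MySurgeryRisk implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score

# Placeholder feature matrices: X_pre holds preoperative features only,
# X_all adds intraoperative features for the same surgeries; y is one binary outcome.
rng = np.random.default_rng(0)
n = 1000
X_pre = rng.normal(size=(n, 20))
X_intra = rng.normal(size=(n, 30))
X_all = np.hstack([X_pre, X_intra])
y = rng.binomial(1, 0.15, size=n)

results = {}
for name, X in {"preoperative": X_pre, "postoperative": X_all}.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
    model = RandomForestClassifier(n_estimators=200, random_state=0)  # majority vote over trees
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]  # patient-level risk probability
    results[name] = {
        "AUROC": roc_auc_score(y_te, proba),
        "AUPRC": average_precision_score(y_te, proba),
    }
```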

Model Performance

We assessed each model’s discrimination using the area under the receiver operating characteristic curve (AUROC). For each complication, we calculated the Youden index threshold to identify the point on the receiver operating characteristic curve with the highest combined sensitivity and specificity, and used this point as the cut-off value for low versus high risk.(8) We used these cut-off values to determine the fraction of correct classifications as well as the sensitivity, specificity, positive predictive value, and negative predictive value of each model. When rare events are being predicted, a model can achieve high accuracy simply by favoring negative predictions in a predominantly negative dataset.(9) False negative predictions of complications are particularly harmful because patients and their caregivers may consent to an operation under an overly optimistic postoperative prognosis, opportunities to mitigate risk factors preoperatively through prehabilitation and other optimization strategies may be missed, and appropriate escalation in the level of monitoring and care may not occur. Therefore, model performance was also evaluated by calculating the area under the precision-recall curve (AUPRC), which is well suited to evaluating predictive performance for rare events.(10) To assess the statistical significance of differences in AUROC, AUPRC, and accuracy between models, we performed the Wilcoxon signed-rank test.(11) We used bootstrap sampling and non-parametric methods to obtain 95% confidence intervals for all performance metrics. We used the net reclassification improvement (NRI) index to quantify how well the postoperative model reclassified patients compared with the preoperative model.(12)
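The sketch below illustrates, under our own naming assumptions, how the Youden cut-off and the reported classification metrics could be derived from a model’s predicted risk probabilities; it follows the general definitions above rather than the authors’ exact code.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score, average_precision_score, confusion_matrix

def evaluate(y_true, risk_proba):
    """Pick the Youden-index cut-off and report the metrics used in Table 2.
    Illustrative only; function and variable names are assumptions."""
    fpr, tpr, thresholds = roc_curve(y_true, risk_proba)
    youden_j = tpr - fpr                       # sensitivity + specificity - 1
    cutoff = thresholds[np.argmax(youden_j)]   # low- vs high-risk threshold
    y_pred = (risk_proba >= cutoff).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "cutoff": cutoff,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "AUROC": roc_auc_score(y_true, risk_proba),
        "AUPRC": average_precision_score(y_true, risk_proba),
    }
```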

Results

Participant Baseline Characteristics and Outcomes

Table 1 lists subject characteristics of primary interest. Supplemental Digital Content 7 lists all additional subject characteristics used to build the models. Approximately 49% of the population was female. Average age was 57 years. The incidence of complications in the testing cohort was as follows: 28% for prolonged ICU stay, 6% for mechanical ventilation for >48 hours, 20% for neurological complications and delirium, 18% for acute kidney injury, 19% for cardiovascular complications, 8% for venous thromboembolism, 25% for wound complications, and 2% for in-hospital mortality. The distribution of outcomes did not significantly differ between training and testing cohorts, as listed in Table 1.

Table 1:

                                           Training                 Testing
Date ranges                                June 2014 - Feb 2018     March 2018 - Feb 2019
                                           (n=40560)                (n=11969)
Average age (years)                        56.5                     57.5
Ethnicity, n (%)
  Not Hispanic                             38116 (93.9)             11210 (93.6)
  Hispanic                                 1772 (4.4)               599 (5)
  Missing                                  717 (1.8)                171 (1.4)
Race, n (%)
  White                                    31399 (77.3)             9376 (78.3)
  African American                         6136 (15.1)              1739 (14.5)
  Other                                    2483 (6.1)               702 (5.9)
  Missing                                  587 (1.5)                163 (1.4)
Gender, n (%)
  Male                                     20614 (50.8)             6072 (50.7)
  Female                                   19991 (49.2)             5908 (49.3)
Primary insurance, n (%)
  Medicare                                 18581 (45.8)             5774 (48.2)
  Private                                  12463 (30.7)             3308 (27.6)
  Medicaid                                 6577 (16.2)              1928 (16.1)
  Uninsured                                2984 (7.4)               970 (8.1)
Outcomes, n (%)
  ICU stay > 48 hours                      10355 (25.5)             3408 (28.5)
  MV duration > 48 hours                   2372 (5.9)               767 (6.4)
  Neurological complications and delirium  5860 (14.5)              2364 (19.8)
  Acute kidney injury                      6098 (15)                2111 (17.6)
  Cardiovascular complication              5866 (14.5)              2240 (18.7)
  Venous thromboembolism                   2283 (5.6)               943 (7.9)
  Wound                                    7548 (18.6)              3044 (25.4)
  Hospital mortality (a)                   192 (2.3)                93 (2.6)

Model Performance

Compared with the model using preoperative data alone, the postoperative model using both preoperative and intraoperative data had higher accuracy, AUROC, and AUPRC for all complication and mortality predictions, as described below and in Table 2. The net reclassification index, along with the event, non-event, and overall classification improvements for each outcome, is listed in Table 3. Figures 2-9 illustrate predictive performance for individual complications and mortality. Figures include gray regions in which predictive discrimination or precision is ≤0.2, precluding reasonable clinical application. In addition, feature weights from the best-performing model for each complication are provided in Supplemental Digital Content 6, along with feature names and descriptions.

Table 2:

                         Sensitivity        Specificity        NPV                PPV                Accuracy           AUROC              AUPRC
ICU stay > 48 hours
  Preoperative           0.82 (0.81–0.83)   0.74 (0.74–0.75)   0.91 (0.91–0.92)   0.56 (0.55–0.57)   0.77 (0.76–0.77)   0.87 (0.86–0.87)   0.72 (0.71–0.74)
  Postoperative          0.75 (0.73–0.76)   0.87 (0.86–0.87)   0.90 (0.89–0.90)   0.69 (0.68–0.70)   0.83 (0.83–0.84)   0.88 (0.88–0.89)   0.80 (0.78–0.81)
MV duration > 48 hours
  Preoperative           0.80 (0.78–0.82)   0.82 (0.82–0.83)   0.98 (0.98–0.99)   0.24 (0.22–0.25)   0.82 (0.81–0.83)   0.89 (0.87–0.89)   0.45 (0.42–0.48)
  Postoperative          0.91 (0.89–0.93)   0.92 (0.92–0.92)   0.99 (0.99–1.00)   0.45 (0.41–0.45)   0.92 (0.91–0.92)   0.96 (0.95–0.97)   0.71 (0.68–0.74)
Neurological complications and delirium
  Preoperative           0.79 (0.77–0.80)   0.78 (0.77–0.78)   0.94 (0.93–0.94)   0.47 (0.45–0.48)   0.78 (0.77–0.79)   0.86 (0.85–0.87)   0.64 (0.63–0.66)
  Postoperative          0.81 (0.80–0.82)   0.81 (0.80–0.82)   0.95 (0.94–0.95)   0.51 (0.49–0.53)   0.81 (0.79–0.82)   0.89 (0.88–0.89)   0.69 (0.67–0.71)
Acute kidney injury
  Preoperative           0.80 (0.79–0.82)   0.67 (0.66–0.67)   0.94 (0.93–0.94)   0.34 (0.33–0.35)   0.69 (0.68–0.70)   0.81 (0.80–0.82)   0.47 (0.45–0.49)
  Postoperative          0.71 (0.70–0.72)   0.80 (0.79–0.82)   0.93 (0.93–0.93)   0.44 (0.42–0.46)   0.79 (0.78–0.80)   0.84 (0.83–0.85)   0.57 (0.55–0.59)
Cardiovascular complication
  Preoperative           0.78 (0.76–0.80)   0.69 (0.67–0.69)   0.93 (0.92–0.94)   0.36 (0.35–0.37)   0.70 (0.69–0.71)   0.80 (0.79–0.81)   0.51 (0.49–0.53)
  Postoperative          0.80 (0.76–0.81)   0.77 (0.74–0.83)   0.94 (0.93–0.94)   0.45 (0.41–0.51)   0.78 (0.76–0.82)   0.87 (0.86–0.88)   0.66 (0.64–0.68)
Venous thromboembolism
  Preoperative           0.79 (0.76–0.79)   0.69 (0.69–0.72)   0.97 (0.97–0.98)   0.18 (0.17–0.20)   0.70 (0.69–0.73)   0.80 (0.79–0.82)   0.25 (0.23–0.28)
  Postoperative          0.76 (0.74–0.79)   0.75 (0.73–0.75)   0.97 (0.97–0.98)   0.21 (0.19–0.22)   0.75 (0.74–0.75)   0.83 (0.81–0.84)   0.28 (0.26–0.31)
Wound
  Preoperative           0.69 (0.69–0.72)   0.66 (0.60–0.68)   0.86 (0.86–0.87)   0.41 (0.38–0.43)   0.67 (0.63–0.68)   0.74 (0.73–0.75)   0.50 (0.48–0.52)
  Postoperative          0.66 (0.64–0.67)   0.70 (0.70–0.71)   0.86 (0.85–0.87)   0.43 (0.42–0.45)   0.69 (0.69–0.70)   0.75 (0.74–0.76)   0.52 (0.50–0.54)
In-hospital mortality
  Preoperative           0.83 (0.73–0.87)   0.76 (0.78–0.80)   0.99 (0.99–1.00)   0.09 (0.08–0.11)   0.77 (0.77–0.80)   0.87 (0.84–0.90)   0.15 (0.12–0.20)
  Postoperative          0.85 (0.80–0.91)   0.88 (0.86–0.88)   1.00 (0.99–1.00)   0.16 (0.13–0.18)   0.88 (0.86–0.88)   0.93 (0.91–0.95)   0.21 (0.17–0.27)

Table 3:

                                                                            Classification improvement (%)
Complication                               NRI (95% CI)        p        Event    Non-event    Overall
ICU stay > 48 hours                        0.05 (0.03–0.06)    <0.001   −7.9     12.6         6.8
MV duration > 48 hours                     0.21 (0.16–0.22)    <0.001   10.9     9.9          10.0
Neurological complications and delirium    0.05 (0.03–0.07)    <0.001   2.1      3.1          2.9
Acute kidney injury                        0.05 (0.04–0.07)    <0.001   −8.7     13.9         9.9
Cardiovascular complication                0.12 (0.10–0.12)    <0.001   2.3      9.2          7.9
Venous thromboembolism                     0.03 (0.04–0.06)    0.09     −2.7     5.6          4.9
Wound complication                         0.02 (0.01–0.04)    0.14     −2.5     4.1          2.4
Hospital mortality                         0.14 (0.06–0.21)    0.024    2.2      11.5         11.2
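For readers unfamiliar with reclassification metrics, the sketch below computes the standard two-category net reclassification improvement from the low/high-risk labels assigned by the preoperative and postoperative models; the exact event, non-event, and overall percentages reported in Table 3 may be defined somewhat differently, so treat this only as an illustration of the concept.

```python
import numpy as np

def net_reclassification_improvement(y, high_risk_old, high_risk_new):
    """Two-category NRI comparing the preoperative (old) and postoperative (new)
    classifications. Inputs are binary arrays; names are our own assumptions."""
    y = np.asarray(y, dtype=bool)
    old = np.asarray(high_risk_old, dtype=bool)
    new = np.asarray(high_risk_new, dtype=bool)

    up = new & ~old      # moved from low- to high-risk
    down = ~new & old    # moved from high- to low-risk

    # Events benefit from moving up; non-events benefit from moving down.
    nri_event = up[y].mean() - down[y].mean() if y.any() else 0.0
    nri_nonevent = down[~y].mean() - up[~y].mean() if (~y).any() else 0.0
    return {"event": nri_event, "non_event": nri_nonevent, "NRI": nri_event + nri_nonevent}
```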

Prolonged Mechanical Ventilation

The postoperative model achieved greater accuracy (0.92 vs. 0.82, p<0.001), discrimination (AUROC 0.96 vs. 0.89, p<0.001), and precision (AUPRC 0.71 vs. 0.45, p<0.001) in predicting mechanical ventilation >48 hours with greater sensitivity, specificity, and positive predictive value, and similar negative predictive value compared with the model using preoperative data alone ( Table 2). The postoperative model correctly reclassified 11.0% of all cases that featured prolonged mechanical ventilation and 9.9% of all cases that did not ( Figure 3). Overall reclassification improvement was 10.0%.

Neurological Complications and Delirium

The postoperative model achieved greater accuracy (0.81 vs. 0.78, p<0.001), discrimination (AUROC 0.89 vs. 0.86, p<0.001), and precision (AUPRC 0.69 vs. 0.64, p<0.001) in predicting postoperative neurological complications and delirium with greater specificity, positive predictive value, and negative predictive value than the model limited to preoperative data alone ( Table 2). The postoperative model correctly reclassified 2.1% of all cases that featured postoperative neurological complications and delirium and 3.1% of all cases that did not ( Figure 4). Overall reclassification improvement was 2.9%.

Cardiovascular Complications

The postoperative model achieved greater accuracy (0.78 vs. 0.70, p<0.001), discrimination (AUROC 0.87 vs. 0.80, p<0.001), and precision (AUPRC 0.66 vs. 0.51, p<0.001) in predicting postoperative cardiovascular complications with greater sensitivity, specificity, negative predictive value, and positive predictive value than the model using preoperative data alone ( Table 2). The postoperative model correctly reclassified 2.3% of all cases that featured postoperative cardiovascular complications and 9.2% of all cases that did not ( Figure 5). Overall, there was 7.9% reclassification improvement by the postoperative model.

Venous Thromboembolism

The postoperative model achieved greater accuracy (0.75 vs. 0.7, p<0.001), discrimination (AUROC 0.83 vs. 0.80, p<0.001), and precision (AUPRC 0.28 vs. 0.25, p<0.001) in predicting postoperative venous thromboembolism with greater specificity and positive predictive value, but similar negative predictive value and lower sensitivity (0.76 vs 0.79, p<0.001) than the model using preoperative data alone ( Table 2). The postoperative model misclassified 2.7% of all cases that featured postoperative venous thromboembolism and correctly reclassified 5.6% of all cases that did not ( Figure 7). Overall, there was 4.9% reclassification improvement by the postoperative model.

Discussion

By adding intraoperative physiological data to preoperative data, we improved the accuracy, discrimination, and precision of a machine learning model that predicts postoperative complications, relative to a previous model that used preoperative data alone. This improvement held true for all postoperative complications tested as well as for in-hospital mortality; there was no case in which accuracy, discrimination, or precision failed to improve with the incorporation of intraoperative data. The only negative results occurred in predicting prolonged ICU stay, venous thromboembolism, and wound complications; specifically, the postoperative models had lower sensitivity than the models using preoperative data alone. In predicting prolonged ICU stay, the model using preoperative data alone appears to have had an unusually low threshold for classifying patients as high risk. The postoperative model raised this threshold, correctly classifying a greater proportion of patients and achieving greater accuracy, discrimination, and precision at the cost of lower sensitivity. For venous thromboembolism and wound complications, although postoperative model accuracy, discrimination, and precision were greater than those of the preoperative model, the overall reclassification improvements were not statistically significant. Additionally, the optimal thresholds for predicting in-hospital mortality for both models fell outside the range of clinically applicable discrimination or precision (i.e., ≤0.2). This likely occurred because mortality rates were low (approximately 2%) and mortality predictions were tested using 30% of the test cohort, representing only 3,591 of the 52,529 surgeries in the entire cohort, whereas predictions for the other seven postoperative complications were tested using the entire test cohort (11,969 surgeries). In this dataset, mortality was described better by the predicted risk scores for the complications than by the raw variables used to estimate those complication risks. Because the complication risk models had to be developed and validated before their outputs could be used as mortality prediction factors, only the test cohort could be used to train, validate, and test the in-hospital mortality predictions; therefore, mortality model performance was reported on 30% of the test cohort.

Online risk calculators like the National Surgical Quality Improvement Program (NSQIP) Surgical Risk Calculator can reduce variability and increase the likelihood that patients will engage in prehabilitation, but their time-consuming manual data acquisition and entry requirements hinder clinical adoption.(13–18) Emerging technologies can circumvent this problem. The MySurgeryRisk platform autonomously draws data from multiple input sources and uses machine learning techniques to predict postoperative complications and mortality. However, easily and readily available predictions are only useful if they are accurate and precise enough to augment clinical decision-making. In a prospective study of the original MySurgeryRisk platform, the algorithm predicted postoperative complications with greater accuracy than physicians, but there was room for continued improvement.(19) The present study demonstrates that incorporating intraoperative physiological time-series data improves predictive accuracy, discrimination, and precision, presumably by representing important intraoperative events and physiological changes that influence postoperative clinical trajectories and complications. Dziadzko et al.(20) used a random forest model to predict mortality or the need for more than 48 hours of mechanical ventilation using EHR data from patients admitted to academic hospitals, achieving excellent discrimination (AUROC 0.90), comparable to MySurgeryRisk discrimination for mechanical ventilation for more than 48 hours (AUROC 0.96) using both preoperative and intraoperative data. The MySurgeryRisk PostOp extension therefore takes another step toward clinical utility, maintaining autonomous function while improving accuracy, discrimination, and precision.

Despite advances in ease of use and performance, predictive analytic platforms face a major barrier to clinical adoption: predictions do not directly translate into decisions. When the predicted risk for postoperative AKI is very low or very high, it is relatively clear whether the patient would benefit from a renal-protection bundle. Similarly, when the predicted risk for cardiovascular complications is very low or very high, it is relatively clear whether the patient would benefit from continuous cardiac monitoring. However, a substantial number of patients are at intermediate risk for these complications, for whom the need for additional intervention or investigation remains uncertain. In the present study, we dichotomized outcome predictions into low- and high-risk categories to facilitate analysis of model performance; however, the risk for any complication exists on a continuum. The MySurgeryRisk platform addresses this by making predictions along a continuum (i.e., a 0%-100% chance of a complication), but this approach is also unable to augment clinical decisions in intermediate-risk scenarios. Because intermediate risk is usually defined around the average risk across a population, this challenge affects most patients and their corresponding risks, leaving additional room for modeling improvements.

We predict that advances in machine learning technologies will rise to meet this challenge. Predictive analytics only indirectly inform the discrete choices facing clinicians; reinforcement learning models can provide instructive feedback by identifying the specific actions that yield the highest probability of achieving a defined goal. For example, a reinforcement learning model could be trained to achieve hospital discharge with baseline renal and cardiovascular function and without major adverse kidney or cardiac events, making recommendations for or against renal protection bundles and continuous cardiac monitoring according to these goals. Similar models have been used to recommend vasopressor doses and intravenous fluid resuscitation volumes for septic patients, demonstrating efficacy relative to clinician decision-making in large retrospective datasets.(21) However, to our knowledge, these models have not been tested clinically or applied to surgical decision-making scenarios. Therefore, the potential benefits of reinforcement learning for augmenting surgical decision-making remain theoretical.

This study used data from a single institution, limiting the generalizability of these findings. As previously discussed, the true risk for complications is not dichotomous, but we dichotomized risk in this study to facilitate evaluation and comparison of model performance. We used administrative codes to identify complications, so coding errors could have influenced results. The MySurgeryRisk algorithm learned predictive features from raw data, so it may have used features that are not classic risk factors; this approach has the potential advantage of discovering and incorporating unknown or underused risk factors, and the disadvantage that the existence and identity of these risk factors remain unknown. Finally, in some cases, intraoperative model input features may have been evidence of a complication rather than true predictors of a complication (e.g., intraoperative oliguria may be evidence of AKI rather than predictive of developing AKI).

References

2. Bilimoria KY, Liu Y, Paruch JL, Zhou L, Kmiecik TE, et al. Development and evaluation of the universal ACS NSQIP surgical risk calculator: a decision aid and informed consent tool for patients and surgeons. J Am Coll Surg 2013;217:833–842 e831–833.
3. Bertsimas D, Dunn J, Velmahos GC, Kaafarani HMA. Surgical risk is not linear: derivation and validation of a novel, user-friendly, and machine-learning-based Predictive OpTimal Trees in Emergency Surgery Risk (POTTER) calculator. Ann Surg 2018;268:574–583.
4. Bihorac A, Ozrazgat-Baslanti T, Ebadi A, Motaei A, Madkour M, et al. MySurgeryRisk: development and validation of a machine-learning risk algorithm for major complications and death after surgery. Ann Surg 2018;269:652–662.
5. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis 1987;40:373–383.
6. Saria S, Rajani AK, Gould J, Koller D, Penn AA. Integration of early physiological responses predicts later illness severity in preterm infants. Sci Transl Med 2010;2:48ra65.
7. Breiman L. Random forests. Machine Learning 2001;45:5–32.
8. Youden WJ. Index for rating diagnostic tests. Cancer 1950;3:32–35.
9. Saito T, Rehmsmeier M. The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets. PLoS One 2015;10:e0118432.
10. Chiew CJ, Liu N, Wong TH, Sim YE, Abdullah HR. Utilizing machine learning methods for preoperative prediction of postsurgical mortality and intensive care unit admission. Ann Surg 2019.
11. Wilcoxon F. Individual comparisons by ranking methods. Biometrics Bull 1945;1:80–83.
12. Pencina MJ, D’Agostino RB, Steyerberg EW. Extensions of net reclassification improvement calculations to measure usefulness of new biomarkers. Stat Med 2011;30:11–21.
13. Chiu AS, Jean RA, Resio B, Pei KY. Early postoperative death in extreme-risk patients: a perspective on surgical futility. Surgery 2019.
14. Clark DE, Fitzgerald TL, Dibbins AW. Procedure-based postoperative risk prediction using NSQIP data. J Surg Res 2018;221:322–327.
15. Lubitz AL, Chan E, Zarif D, Ross H, Philp M, et al. American College of Surgeons NSQIP Risk Calculator accuracy for emergent and elective colorectal operations. J Am Coll Surg 2017;225:601–611.
16. Cohen ME, Liu Y, Ko CY, Hall BL. An examination of American College of Surgeons NSQIP Surgical Risk Calculator accuracy. J Am Coll Surg 2017;224:787–795 e781.
17. Hyde LZ, Valizadeh N, Al-Mazrou AM, Kiran RP. ACS-NSQIP risk calculator predicts cohort but not individual risk of complication following colorectal resection. Am J Surg 2019;218:131–135.
18. Leeds IL, Rosenblum AJ, Wise PE, Watkins AC, Goldblatt MI, et al. Eye of the beholder: risk calculators and barriers to adoption in surgical trainees. Surgery 2018;164:1117–1123.
19. Brennan M, Puri S, Ozrazgat-Baslanti T, Feng Z, Ruppert M, et al. Comparing clinical judgment with the MySurgeryRisk algorithm for preoperative risk assessment: a pilot usability study. Surgery 2019;165:1035–1045.
20. Dziadzko MA, Novotny PJ, Sloan J, Gajic O, Herasevich V, et al. Multicenter derivation and validation of an early warning score for acute respiratory failure or death in the hospital. Crit Care 2018;22:286.
21. Komorowski M, Celi LA, Badawi O, Gordon AC, Faisal AA. The Artificial Intelligence Clinician learns optimal treatment strategies for sepsis in intensive care. Nat Med 2018;24:1716–1720.

Funding 


Funders who supported this work.

NCATS NIH HHS (2)

  • Grant ID: UL1 TR000064

  • Grant ID: UL1 TR001427

NIBIB NIH HHS (1)

  • Grant ID: R21 EB027344

NIGMS NIH HHS (2)

  • Grant ID: P50 GM111152

  • Grant ID: R01 GM110240
