
Explainable model of deep learning for outcomes prediction of in-hospital cardiac arrest patients

Introduction: Deep learning has outperformed traditional methods in predicting healthcare outcomes. However, deep learning models lack explainability and are often considered black boxes. This article demonstrates that the output of Shapley additive explanations (SHAP) analysis can provide meaningful insight into a model's predictions.
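As a rough illustration of the kind of analysis the abstract describes, the sketch below applies the open-source `shap` package to a small neural-network classifier. The model, features, and synthetic data are hypothetical stand-ins, not the cohort or architecture used in the article; it shows only the general mechanics of producing per-feature Shapley attributions for a prediction.

```python
# Minimal sketch: SHAP attributions for a neural-network classifier.
# All data and model choices here are illustrative assumptions.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic binary-outcome data standing in for patient records.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                      random_state=0).fit(X, y)

# KernelExplainer is model-agnostic; a small background sample keeps
# the Shapley-value estimation tractable.
background = X[:50]
explainer = shap.KernelExplainer(lambda d: model.predict_proba(d)[:, 1],
                                 background)

# shap_values[i, j] is feature j's contribution to prediction i;
# averaging absolute values gives a global importance ranking.
shap_values = explainer.shap_values(X[:20])
print(np.abs(shap_values).mean(axis=0))
```

Plotting utilities such as `shap.summary_plot(shap_values, X[:20])` render the same attributions as the familiar beeswarm chart, which is how SHAP output is typically presented for clinical interpretation.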