Explaining deep classification of time-series data with learned prototypes

Alan H. Gee, Diego Garcia-Olano, Joydeep Ghosh, David Paydarfar

Research output: Contribution to journal › Conference article

Abstract

The emergence of deep learning networks raises a need for explainable AI so that users and domain experts can be confident applying them to high-risk decisions. In this paper, we leverage data from the latent space induced by deep learning models to learn stereotypical representations or "prototypes" during training to elucidate the algorithmic decision-making process. We study how leveraging prototypes affects classification decisions on two-dimensional time-series data in three settings: (1) electrocardiogram (ECG) waveforms to detect clinical bradycardia, a slowing of heart rate, in preterm infants; (2) respiration waveforms to detect apnea of prematurity; and (3) audio waveforms to classify spoken digits. We improve upon existing models by optimizing for increased prototype diversity and robustness, visualize how these prototypes in the latent space are used by the model to distinguish classes, and show that prototypes are capable of learning features from two-dimensional time-series data. The prototypes learn real-world features - bradycardia in ECG, apnea in respiration, and articulation in speech - as well as features within sub-classes. Our work leverages a learned prototypical framework on two-dimensional time-series data to produce explainable insights during classification tasks.
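
To make the prototype idea concrete, the sketch below (PyTorch is our assumption; this is not the authors' released code) shows a prototype-based classifier: an encoder maps a time-series window to a latent vector, distances to learned prototype vectors drive the class logits, and a hinge-style penalty pushes prototypes apart, loosely mirroring the diversity objective described in the abstract. Names such as PrototypeClassifier and diversity_penalty are illustrative, and for simplicity the encoder takes a single-channel window rather than the two-dimensional representations used in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeClassifier(nn.Module):
    def __init__(self, latent_dim=32, n_prototypes=10, n_classes=2):
        super().__init__()
        # 1-D convolutional encoder for a univariate time-series window
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )
        # Learned prototype vectors living in the same latent space as the encoder output
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, latent_dim))
        # Distances to prototypes are mapped to class logits
        self.classifier = nn.Linear(n_prototypes, n_classes)

    def forward(self, x):                       # x: (batch, 1, window_length)
        z = self.encoder(x)                     # (batch, latent_dim)
        d = torch.cdist(z, self.prototypes)     # (batch, n_prototypes) distances
        return self.classifier(-d), d           # closer prototype -> larger logit

def diversity_penalty(prototypes, margin=1.0):
    # Penalize pairs of prototypes closer than `margin` in latent space,
    # pushing them apart so they cover distinct regions of the data.
    pdist = torch.cdist(prototypes, prototypes)
    pdist = pdist + margin * torch.eye(len(prototypes), device=prototypes.device)
    return F.relu(margin - pdist).sum()

# Example: classify a batch of four 250-sample windows into two classes,
# adding the diversity penalty to the usual cross-entropy loss.
model = PrototypeClassifier()
logits, dists = model(torch.randn(4, 1, 250))
loss = F.cross_entropy(logits, torch.tensor([0, 1, 0, 1])) \
       + 0.05 * diversity_penalty(model.prototypes)

Because each prototype lives in the latent space, it can be decoded or matched to its nearest training examples, which is what makes the resulting classification decisions inspectable.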

Original language: English (US)
Pages (from-to): 15-22
Number of pages: 8
Journal: CEUR Workshop Proceedings
Volume: 2429
State: Published - Jan 1 2019
Event: 4th International Workshop on Knowledge Discovery in Healthcare Data, KDH 2019 - Macao, China
Duration: Aug 10 2019 - Aug 16 2019

Fingerprint

Time series
Electrocardiography
Decision making
Deep learning

ASJC Scopus subject areas

  • Computer Science (all)

Cite this

Explaining deep classification of time-series data with learned prototypes. / Gee, Alan H.; Garcia-Olano, Diego; Ghosh, Joydeep; Paydarfar, David.

In: CEUR Workshop Proceedings, Vol. 2429, 01.01.2019, p. 15-22.

Research output: Contribution to journal › Conference article

Gee, Alan H. ; Garcia-Olano, Diego ; Ghosh, Joydeep ; Paydarfar, David. / Explaining deep classification of time-series data with learned prototypes. In: CEUR Workshop Proceedings. 2019 ; Vol. 2429. pp. 15-22.
@article{f589163c8f1c4fa192b95985997f8a30,
title = "Explaining deep classification of time-series data with learned prototypes",
abstract = "The emergence of deep learning networks raises a need for explainable AI so that users and domain experts can be confident applying them to high-risk decisions. In this paper, we leverage data from the latent space induced by deep learning models to learn stereotypical representations or {"}prototypes{"} during training to elucidate the algorithmic decision-making process. We study how leveraging prototypes affects classification decisions on two-dimensional time-series data in three settings: (1) electrocardiogram (ECG) waveforms to detect clinical bradycardia, a slowing of heart rate, in preterm infants; (2) respiration waveforms to detect apnea of prematurity; and (3) audio waveforms to classify spoken digits. We improve upon existing models by optimizing for increased prototype diversity and robustness, visualize how these prototypes in the latent space are used by the model to distinguish classes, and show that prototypes are capable of learning features from two-dimensional time-series data. The prototypes learn real-world features - bradycardia in ECG, apnea in respiration, and articulation in speech - as well as features within sub-classes. Our work leverages a learned prototypical framework on two-dimensional time-series data to produce explainable insights during classification tasks.",
author = "Gee, {Alan H.} and Diego Garcia-Olano and Joydeep Ghosh and David Paydarfar",
year = "2019",
month = "1",
day = "1",
language = "English (US)",
volume = "2429",
pages = "15--22",
journal = "CEUR Workshop Proceedings",
issn = "1613-0073",
publisher = "CEUR-WS",

}

TY - JOUR

T1 - Explaining deep classification of time-series data with learned prototypes

AU - Gee, Alan H.

AU - Garcia-Olano, Diego

AU - Ghosh, Joydeep

AU - Paydarfar, David

PY - 2019/1/1

Y1 - 2019/1/1

AB - The emergence of deep learning networks raises a need for explainable AI so that users and domain experts can be confident applying them to high-risk decisions. In this paper, we leverage data from the latent space induced by deep learning models to learn stereotypical representations or "prototypes" during training to elucidate the algorithmic decision-making process. We study how leveraging prototypes affects classification decisions on two-dimensional time-series data in three settings: (1) electrocardiogram (ECG) waveforms to detect clinical bradycardia, a slowing of heart rate, in preterm infants; (2) respiration waveforms to detect apnea of prematurity; and (3) audio waveforms to classify spoken digits. We improve upon existing models by optimizing for increased prototype diversity and robustness, visualize how these prototypes in the latent space are used by the model to distinguish classes, and show that prototypes are capable of learning features from two-dimensional time-series data. The prototypes learn real-world features - bradycardia in ECG, apnea in respiration, and articulation in speech - as well as features within sub-classes. Our work leverages a learned prototypical framework on two-dimensional time-series data to produce explainable insights during classification tasks.

UR - http://www.scopus.com/inward/record.url?scp=85071664014&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85071664014&partnerID=8YFLogxK

M3 - Conference article

AN - SCOPUS:85071664014

VL - 2429

SP - 15

EP - 22

JO - CEUR Workshop Proceedings

JF - CEUR Workshop Proceedings

SN - 1613-0073

ER -