Abstract

This editorial refers to ‘Feasibility of using deep learning to detect coronary artery disease based on facial photo’, by S. Lin et al., on page 4400.

Medical research has seen major advances during the past few years, particularly as big data have emerged in biomedical research.1 Large registries with a wealth of available data offer unique opportunities for the deployment of artificial intelligence (AI)-powered technologies for the diagnosis and prognosis of disease.2 Implementation of AI technology in day-to-day clinical practice has already begun, with emerging applications using it to interpret medical images, read pathology slides, analyse electrocardiograms (ECGs), track vital signs, and many other uses.3

In the field of cardiology, AI—mainly deep learning—has been used primarily in automated ECG interpretation. The first machine-read ECGs emerged almost 40 years ago and naturally lacked accuracy at the time; the latest deep neural networks now report accuracy equivalent to, if not better than, that of cardiologists in classifying a broad range of distinct arrhythmias.4 In echocardiography, convolutional neural networks have been reported to accurately predict structural cardiac disease, with reported c-statistics >0.85.5 The recent surge in computed tomography (CT) and magnetic resonance imaging (MRI) has brought these two imaging modalities to the forefront of cardiovascular diagnostics, with a large rise in AI and deep learning research being published around them.6 Neural networks have been trained to accurately segment the anatomy of the human heart,7 whereas studies from large registries are reporting coronary artery disease (CAD) detection algorithms using unsupervised approaches from chest CT imaging data.8 AI technology is also being utilized for prognostic evaluation of cardiovascular disease and residual risk identification, harnessing the combined strengths of CT, radiomics, and machine learning.9

In this issue of the European Heart Journal, Lin et al.10 draw our attention yet again to images in cardiology, this time not to the usual medical imaging from ultrasound, CT, or MRI we are accustomed to. Rather, their study focuses on facial images in an effort to explore, dissect, and present the potential they may have in our battle against CAD. The authors deployed a large training set of 5216 individuals to develop their deep learning algorithm, which was then tested in a group of 1013 individuals, predominantly of Han Chinese ethnicity, recruited in tertiary centres across China. All patients underwent a standardized protocol for acquisition of facial images, and coronary computed tomography angiography (CCTA) was used as the reference method for dichotomizing the cohort by CAD presence, defined as stenosis >50%. The algorithm yielded an area under the curve (AUC) of 0.73 [95% confidence interval (CI) 0.699–0.761], sensitivity of 0.80, and specificity of 0.54 in the test group. Interestingly, the algorithm outperformed scores typically used to assess CAD pre-test probability, marking it, according to the authors, as a promising successor to the CAD consortium clinical score model or its older predecessor, the Diamond–Forrester model.11
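For readers less familiar with these operating-point metrics, the sensitivity and specificity reported above can be made concrete with a short sketch. This is not the authors' code, and the labels below are purely illustrative; it only shows how the two quantities are defined against a CCTA-confirmed reference standard.

```python
def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for binary labels
    (1 = CAD present on CCTA, 0 = CAD absent)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp, fp, tn, fn = confusion_counts(y_true, y_pred)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative toy labels, not study data: reference standard vs. model output.
sens, spec = sensitivity_specificity([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])
```

Read this way, a sensitivity of 0.80 with a specificity of 0.54 means the model misses about one in five diseased patients while flagging nearly half of the disease-free group.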

Further analyses showed that the algorithm had excellent ability to discriminate between the two sexes but only moderate accuracy in predicting hypertension, hyperlipidaemia, or diabetes, and that the part of the face contributing the most to the algorithm’s predictions appeared to be the cheek.

Overall, the study by Lin et al. highlights a new potential in medical diagnostics. Facial appearance has long been identified as a marker of cardiovascular risk, with features such as male pattern baldness, earlobe crease, xanthelasmata, and skin wrinkling being the most common predictors.12 However, these past approaches required human intervention to evaluate and analyse the images. The robustness of the approach of Lin et al. lies in the fact that their deep learning algorithm requires simply a facial image as the sole data input, rendering it easily applicable at large scale (Take home figure). Effective screening tools are highly sought after, given that optimal selection of the populations that benefit from more specialized testing is crucial to disease management and prognosis, as well as financially beneficial to healthcare systems. Using selfies as a screening method could enable a simple yet efficient way to filter the general population towards more comprehensive clinical evaluation. Such an approach could also be highly relevant to regions of the globe that are underfunded and have weak screening programmes for cardiovascular disease. A selection process that can be done as easily as taking a selfie would allow for a stratified flow of people fed into healthcare systems for first-line diagnostic testing with CCTA. Indeed, the ‘high risk’ individuals could undergo CCTA, which would allow reliable risk stratification with the use of the new, AI-powered methodologies for CCTA image analysis.9

Schematic proposal for using face tracking for CAD screening. Outside the hospital setting, facial features can be traced from images using deep learning to compute the risk for CAD. Eligible individuals would then be followed with further screening by clinicians and imaging modalities. CCTA, coronary computed tomography angiography; CAD, coronary artery disease. Vectors are taken and used from the Noun Project per the Creative Commons Attribution license requirements.
Take home figure


There are still a few points for consideration, however, that make practical application of the current algorithm challenging. The low specificity of the method raises concern about false-positive results that may confuse both patient and clinician and eventually overload the system with redundant and unnecessary testing. The authors acknowledge this in the limitations section, proposing the use of their algorithm in its current form in target populations with a relatively high CAD risk. In any case, the study population is too small to allow safe conclusions, and external validation cohorts will be needed to test the validity of the algorithm. Furthermore, it should be noted that in the present study, CAD was defined as the presence of >50% stenosis in one major coronary vessel evaluated on CCTA. This may be a simplistic and rather crude classification, as it pools into the non-CAD group both individuals who are truly healthy and people who have already developed the disease but are still at early stages (which might explain the low specificity observed). In addition, the test group was significantly different from the training group, with overall lower percentages of cardiac and lifestyle risk factors, which might explain the lower diagnostic accuracy of the standard scores (CAD consortium clinical score and Diamond–Forrester model) and overestimate the difference in performance compared with the deep learning model. The photo pre-processing used may be another issue for consideration; resolution was reduced to 256 × 256 pixels, which hinders the detection of fine facial features, such as arcus lipoides, that may play a role in the diagnostic accuracy of the model. Moreover, proper external validation of deep learning models in independent populations is needed to ascertain their use and functionality. Here, the use of a test group recruited from the same centres as the training group provides only a first indication of the robustness of the algorithm; application of the algorithm in cohorts recruited from different centres or even countries would provide more concrete evidence. Finally, in an era witnessing a record surge in cosmetic surgery, we should keep in mind that artificial facial alterations may severely discredit such screening tools.
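The false-positive concern above can be quantified with Bayes’ rule. Taking the reported sensitivity (0.80) and specificity (0.54) as given, the positive predictive value (PPV) of the test collapses at low disease prevalence; the prevalence values below are illustrative assumptions, not figures from the study.

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value of a binary test via Bayes' rule."""
    true_pos = sensitivity * prevalence                # P(test+ and diseased)
    false_pos = (1 - specificity) * (1 - prevalence)   # P(test+ and healthy)
    return true_pos / (true_pos + false_pos)

# Reported operating point, evaluated at assumed prevalence levels.
low = ppv(0.80, 0.54, 0.05)   # general-population-like screening setting
high = ppv(0.80, 0.54, 0.50)  # high pre-test-probability referral setting
```

Under these assumptions, at 5% prevalence only roughly 1 in 12 positive calls would be a true case, whereas at 50% prevalence roughly two in three would be, which is consistent with the authors’ proposal to restrict the algorithm to high-risk target populations.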

Speedy diagnostic testing is rapidly becoming an important part of medical practice. Information extracted from the analysis of an individual’s facial photo using the proposed technology could benefit the individual, the attending physician, and the healthcare system alike. Early detection of individuals at risk of CAD can initiate lifestyle and other personal mitigation approaches, guide medication treatment, and inspire a novel approach to diagnostic testing and screening algorithms for the general population. At the same time, such a technology may raise concerns about misuse of information for discriminatory purposes. Unwanted dissemination of sensitive health data that can easily be extracted from a facial photo renders technologies such as the one discussed here a significant threat to personal data protection, potentially affecting insurance options. Such fears have already been expressed over misuse of genetic data13 and should be extensively revisited regarding the use of AI in medicine.

Despite these challenges, the full potential of such novel, out-of-the-box diagnostics lies ahead of us. Deep learning, and AI in general, is steadily claiming centre stage in biomedical research. Combined with advances in technology, it will pave the way for highly accurate, personalized diagnostics and revolutionize medicine as we know it.

The opinions expressed in this article are not necessarily those of the Editors of the European Heart Journal or of the European Society of Cardiology.

Footnotes

doi:10.1093/eurheartj/ehaa640.

Acknowledgements

This work was supported by the Engineering and Physical Sciences Research Council, the Scatcherd Fund at the University of Oxford, and the British Heart Foundation (FS/16/15/32047, TG/16/3/32687).

Conflict of interest: C.A. is a founder, shareholder, and director of Caristo Diagnostics, a spinout company of the University of Oxford. C.A. is also director of the Oxford Academic Cardiovascular CT Core lab. C.P.K. has no conflicts to declare.

References

1. Krittanawong C, Johnson KW, Rosenson RS, Wang Z, Aydar M, Baber U, Min JK, Tang WHW, Halperin JL, Narayan SM. Deep learning for cardiovascular medicine: a practical primer. Eur Heart J 2019;40:2058–2073.

2. Pennell D, Delgado V, Knuuti J, Maurovich-Horvat P, Bax JJ. The year in cardiology: imaging. Eur Heart J 2020;41:739–747.

3. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019;25:44–56.

4. Hannun AY, Rajpurkar P, Haghpanahi M, Tison GH, Bourn C, Turakhia MP, Ng AY. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nat Med 2019;25:65.

5. Zhang J, Gajjala S, Agrawal P, Tison GH, Hallock LA, Beussink-Nelson L, Lassen MH, Fan E, Aras MA, Jordan C. Fully automated echocardiogram interpretation in clinical practice: feasibility and diagnostic accuracy. Circulation 2018;138:1623–1635.

6. Oikonomou EK, Siddique M, Antoniades C. Artificial intelligence in medical imaging: a radiomic guide to precision phenotyping of cardiovascular disease. Cardiovasc Res 2020; doi:10.1093/cvr/cvaa021.

7. Oktay O, Ferrante E, Kamnitsas K, Heinrich M, Bai W, Caballero J, Cook SA, De Marvao A, Dawes T, O’Regan DP. Anatomically constrained neural networks (ACNNs): application to cardiac image enhancement and segmentation. IEEE Trans Med Imaging 2017;37:384–395.

8. Al’Aref SJ, Maliakal G, Singh G, van Rosendael AR, Ma X, Xu Z, Alawamlh OAH, Lee B, Pandey M, Achenbach S. Machine learning of clinical variables and coronary artery calcium scoring for the prediction of obstructive coronary artery disease on coronary computed tomography angiography: analysis from the CONFIRM registry. Eur Heart J 2020;41:359–367.

9. Oikonomou EK, Williams MC, Kotanidis CP, Desai MY, Marwan M, Antonopoulos AS, Thomas KE, Thomas S, Akoumianakis I, Fan LM, Kesavan S, Herdman L, Alashi A, Centeno EH, Lyasheva M, Griffin BP, Flamm SD, Shirodaria C, Sabharwal N, Kelion A, Dweck MR, Van Beek EJR, Deanfield J, Hopewell JC, Neubauer S, Channon KM, Achenbach S, Newby DE, Antoniades C. A novel machine learning-derived radiotranscriptomic signature of perivascular fat improves cardiac risk prediction using coronary CT angiography. Eur Heart J 2019;40:3529–3543.

10. Lin S, Li Z, Fu B, Chen S, Li X, Wang Y, Wang X, Lv B, Xu B, Song X, Zhang Y-J, Cheng X, Huang W, Pu J, Zhang Q, Xia Y, Du B, Ji X, Zheng Z. Feasibility of using deep learning to detect coronary artery disease based on facial photo. Eur Heart J 2020;41:4400–4411.

11. Bittencourt MS, Hulten E, Polonsky TS, Hoffman U, Nasir K, Abbara S, Di Carli M, Blankstein R. European Society of Cardiology-recommended Coronary Artery Disease Consortium Pretest Probability Scores more accurately predict obstructive coronary disease and cardiovascular events than the Diamond and Forrester score: the Partners Registry. Circulation 2016;134:201–211.

12. Gunn DA, de Craen AJM, Dick JL, Tomlin CC, van Heemst D, Catt SD, Griffiths T, Ogden S, Maier AB, Murray PG, Griffiths CEM, Slagboom PE, Westendorp RGJ, Kritchevsky S. Facial appearance reflects human familial longevity and cardiovascular disease risk in healthy individuals. J Gerontol A 2012;68:145–152.

13. Bélisle-Pipon J-C, Vayena E, Green RC, Cohen IG. Genetic testing, insurance discrimination and medical research: what the United States can learn from peer countries. Nat Med 2019;25:1198–1204.

This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model)