Christos P Kotanidis, Charalambos Antoniades, Selfies in cardiovascular medicine: welcome to a new era of medical diagnostics, European Heart Journal, Volume 41, Issue 46, 7 December 2020, Pages 4412–4414, https://doi.org/10.1093/eurheartj/ehaa608
Abstract
This editorial refers to ‘Feasibility of using deep learning to detect coronary artery disease based on facial photo’†, by S. Lin et al., on page 4400.
Medical research has seen major advances during the past few years, particularly as big data have emerged in biomedical research.1 The presence of large registries with a wealth of data available offers unique opportunities for the deployment of artificial intelligence (AI)-powered technologies, for diagnosis and prognosis of disease.2 Implementation of AI technology in day-to-day clinical practice has already begun, with emerging applications using it to interpret medical images, read pathology slides, analyse electrocardiograms (ECGs), track vital signs, and many other uses.3
In the field of cardiology, AI—mainly deep learning—has been used primarily in automated ECG interpretation. The first machine-read ECGs emerged almost 40 years ago and naturally at the time lacked accuracy, whereas the latest deep neural networks report accuracy equivalent to, if not better than, that of cardiologists in classifying a broad range of distinct arrhythmias.4 In echocardiography, convolutional neural networks have been reported to accurately predict structural cardiac disease with reported c-statistics >0.85.5 The recent surge in computed tomography (CT) and magnetic resonance imaging (MRI) has brought these two imaging modalities to the forefront of cardiovascular diagnostics, with a large rise in AI and deep learning research being published around them.6 Neural networks have been trained to accurately segment the anatomy of the human heart,7 whereas studies from large registries are reporting coronary artery disease (CAD) detection algorithms using unsupervised approaches on chest CT imaging data.8 AI technology is also being utilized for prognostic evaluation of cardiovascular disease and residual risk identification, harnessing the combined strengths of CT, radiomics, and machine learning.9
In this issue of the European Heart Journal, Lin et al.10 draw our attention yet again to images in cardiology, this time not the usual medical imaging from ultrasound, CT, or MRI we are accustomed to. Rather, their study focuses on facial images in an effort to explore, dissect, and present the potential they may hold in our battle against CAD. The authors used a large training set of 5216 individuals to develop their deep learning algorithm, which was then tested in a group of 1013 individuals, predominantly of Han Chinese ethnicity, recruited in tertiary centres across China. All patients underwent a standardized protocol for acquisition of facial images, and coronary computed tomography angiography (CCTA) served as the reference method for dichotomizing the cohort by CAD presence, defined as stenosis >50%. The algorithm yielded an area under the curve (AUC) of 0.73 [95% confidence interval (CI) 0.699–0.761], sensitivity of 0.80, and specificity of 0.54 in the test group. Interestingly, the algorithm outperformed scores typically used to assess CAD pre-test probability, making it, according to the authors, a promising successor to the CAD consortium clinical score model and its older predecessor, the Diamond–Forrester model.11
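For readers less familiar with these metrics, the minimal sketch below shows how AUC, sensitivity, and specificity are conventionally derived from a classifier's predicted probabilities once an operating threshold is chosen. The labels, scores, and threshold are synthetic illustrations, not the authors' data or pipeline.

```python
# Illustrative only: computing the reported test-set metrics (AUC,
# sensitivity, specificity) from a binary classifier's outputs.
# All data below are synthetic; they are not the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
n = 1013  # size of the study's test group

# Hypothetical ground truth (CCTA-defined CAD) and model probabilities
y_true = rng.integers(0, 2, size=n)
y_score = np.clip(y_true * 0.3 + rng.normal(0.45, 0.25, size=n), 0, 1)

auc = roc_auc_score(y_true, y_score)  # threshold-independent discrimination

# Dichotomize at a chosen operating threshold (0.5 here, an assumption)
y_pred = (y_score >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # true-positive rate
specificity = tn / (tn + fp)  # true-negative rate

print(f"AUC={auc:.2f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```

Note that a single AUC corresponds to a whole family of sensitivity/specificity pairs; the reported 0.80/0.54 reflects one chosen operating point on the curve.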
In further analyses, the algorithm showed excellent ability to discriminate between the two sexes but only moderate accuracy in predicting hypertension, hyperlipidaemia, or diabetes, while the part of the face that contributed most to its predictions appeared to be the cheek.
Overall, the study by Lin et al. highlights a new potential in medical diagnostics. Facial appearance has long been recognized as a marker of cardiovascular risk, with features such as male-pattern baldness, earlobe crease, xanthelasmata, and skin wrinkling being the most common predictors.12 These earlier approaches, however, require human intervention to evaluate and analyse the images. The robustness of the approach of Lin et al. lies in the fact that their deep learning algorithm requires only a facial image as the sole data input, rendering it easily applicable at large scale (Take home figure). Effective screening tools are highly sought after, given that optimal selection of the populations that benefit from more specialized testing is crucial to disease management and prognosis, as well as beneficial to healthcare systems from a monetary perspective. Using selfies as a screening method could offer a simple yet efficient way to filter the general population towards more comprehensive clinical evaluation; a back-of-the-envelope calculation of this triage step is sketched after the Take home figure. Such an approach could also be highly relevant to regions of the globe that are underfunded and have weak screening programmes for cardiovascular disease. A selection process as easy as taking a selfie would allow a stratified flow of people to be fed into healthcare systems for first-line diagnostic testing with CCTA. Indeed, the 'high-risk' individuals could undergo CCTA, which would allow reliable risk stratification with the use of the new, AI-powered methodologies for CCTA image analysis.9

Take home figure. Schematic proposal for using face tracking for CAD screening. Outside the hospital setting, facial features can be traced from images using deep learning to compute the risk for CAD. Eligible individuals would then be followed up with further screening by clinicians and imaging modalities. CCTA, coronary computed tomography angiography; CAD, coronary artery disease. Vectors are taken and used from the Noun Project per the Creative Commons Attribution license requirements.
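To make the screening arithmetic concrete, the sketch below applies the reported operating point (sensitivity 0.80, specificity 0.54) to a hypothetical screened population. The 5% CAD prevalence and the population size are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope triage: how many of 100,000 screened individuals
# would be referred for CCTA, and how many referrals are true positives,
# at the reported operating point. The 5% prevalence is an assumption
# for illustration only; it is not taken from the study.

def screening_yield(n_screened: int, prevalence: float,
                    sensitivity: float, specificity: float) -> dict:
    """Expected counts for a single-threshold screening test."""
    diseased = n_screened * prevalence
    healthy = n_screened - diseased
    tp = sensitivity * diseased            # correctly flagged CAD
    fn = diseased - tp                     # missed CAD
    fp = (1 - specificity) * healthy       # false alarms sent to CCTA
    tn = healthy - fp
    referred = tp + fp
    return {
        "referred_to_CCTA": round(referred),
        "true_positives": round(tp),
        "false_positives": round(fp),
        "missed_cases": round(fn),
        "PPV": tp / referred,              # P(CAD | positive selfie screen)
        "NPV": tn / (tn + fn),
    }

# Reported operating point of the facial-photo algorithm
print(screening_yield(100_000, prevalence=0.05,
                      sensitivity=0.80, specificity=0.54))
```

Under these assumptions, roughly 47 700 of 100 000 screened individuals would be referred onwards, of whom only about 4000 truly have CAD (positive predictive value ≈8%), which quantifies the false-positive concern discussed below.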
There are still a few points for consideration, however, that make practical application of the current algorithm challenging. The low specificity of the method raises a concern about false-positive results that may confuse both patient and clinician, and eventually overload the system with redundant and unnecessary testing. The authors acknowledge this in the limitations section, proposing that the algorithm in its current form be used in target populations with a relatively high CAD risk. In any case, the study population is too small to allow safe conclusions to be drawn, and external validation cohorts will be needed to test the validity of the algorithm. Furthermore, it should be noted that in the present study CAD was defined as the presence of >50% stenosis in one major coronary vessel on CCTA. This may be a simplistic and rather crude classification, as it pools in the non-CAD group both individuals who are truly healthy and people who have already developed the disease but are still at an early stage (which might explain the low specificity observed). In addition, the test group differed significantly from the training group, with overall lower percentages of cardiac and lifestyle risk factors; this might explain the lower diagnostic accuracy of the standard scores (CAD consortium clinical score and Diamond–Forrester model) and overestimate the difference in performance compared with the deep learning model. The photo pre-processing used may be another issue for consideration: resolution was reduced to 256 × 256 pixels, which hinders the detection of fine facial features, such as arcus lipoides, that may play a role in the diagnostic accuracy of the model. Moreover, proper external validation of deep learning models in independent populations is needed to ascertain their use and functionality. Here, the use of a test group recruited from the same centres as the training group provides a first indication of the robustness of the algorithm; application of the algorithm in cohorts recruited from different centres or even countries, however, would provide more concrete evidence. Finally, in an era witnessing a record surge in cosmetic surgery, we should keep in mind that artificial facial alterations may severely undermine such screening tools.
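To illustrate the resolution point, the sketch below downsamples a facial photograph to the 256 × 256 input size reported in the study. The use of Pillow, the centre-crop step, and the file path are assumptions for illustration; this is not the authors' pre-processing code.

```python
# Illustration of the resolution concern: downsampling a photo to the
# 256 x 256 input size reported in the study discards fine detail, such
# as subtle corneal or periocular features. Pillow and the file path
# are assumptions for illustration; this is not the study's pipeline.
from PIL import Image

def preprocess_face(path: str, size: int = 256) -> Image.Image:
    """Load a photo, crop it to a centred square, and downsample."""
    img = Image.open(path).convert("RGB")
    side = min(img.size)                       # largest centred square
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side))
    # LANCZOS resampling averages neighbouring pixels, so features finer
    # than a few pixels at 256 x 256 are irreversibly blurred away.
    return img.resize((size, size), Image.LANCZOS)

small = preprocess_face("face_photo.jpg")      # hypothetical input file
print(small.size)                              # (256, 256)
```

A feature such as a thin corneal arcus spanning only a handful of pixels in the original photograph may vanish entirely after this reduction, which is why input resolution can cap a model's attainable accuracy.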
Speedy diagnostic testing is rapidly becoming an important part of medical practice. Information extracted from analysis of an individual's facial photo using the proposed technology could unquestionably benefit the individual, the attending physician, and the healthcare system alike. Early detection of individuals at risk for CAD can initiate lifestyle and other personal mitigation approaches, guide medical treatment, and inspire a novel approach to diagnostic testing and screening algorithms for the general population. At the same time, such a technology may raise concerns about misuse of information for discriminatory purposes. Unwanted dissemination of sensitive health data, which could easily be extracted from a facial photo, makes technologies such as the one discussed here a significant threat to personal data protection, potentially affecting insurance options. Such fears have already been expressed over misuse of genetic data,13 and should be extensively revisited regarding the use of AI in medicine.
Despite these challenges, the full potential of such novel and out-of-the-box diagnostics lies ahead of us. Deep learning and AI in general are slowly claiming the central spot in biomedical research. Combined with advances in technology, they will pave the way for highly accurate, personalized diagnostics and revolutionize medicine as we know it.
The opinions expressed in this article are not necessarily those of the Editors of the European Heart Journal or of the European Society of Cardiology.
Footnotes
† doi:10.1093/eurheartj/ehaa640.
Acknowledgements
This work was supported by the Engineering and Physical Sciences Research Council, the Scatcherd Fund at the University of Oxford, and the British Heart Foundation (FS/16/15/32047, TG/16/3/32687).
Conflict of interest: C.A. is a founder, shareholder, and director of Caristo Diagnostics, a spinout company of the University of Oxford. C.A. is also director of the Oxford Academic Cardiovascular CT Core lab. C.P.K. has no conflicts to declare.
References