Predicting coronary artery occlusion risk from noninvasive images by combining CFD-FSI, cGAN and CNN

Cardiovascular complications, such as heart attacks, are often linked to the long-term accumulation of blood cholesterol within coronary arteries. This accumulation contributes to the formation of atherosclerotic plaque and can constrict blood flow; consequently, the heart muscle is deprived of oxygen1,2,3,4,5,6. Numerous factors, including blood density, residence time, and vessel geometry, can influence plaque formation and progression. Hemodynamic factors, such as low or fluctuating shear stress on vessel walls, play a significant role in occlusion progression and plaque detachment7,8,9. Given the high mortality rate associated with coronary artery disease (CAD), researchers from various scientific fields seek accurate prediction methods to prevent disease progression and develop novel treatments. Wall shear stress (WSS), a crucial in vivo parameter, is the tangential force exerted by blood on vessel walls and reflects the frictional force experienced by endothelial cells. WSS can affect plaque formation and progression in various ways; for example, low WSS may promote plaque formation, while high WSS can detach plaque from vessel surfaces, leading to obstruction10,11,12,13,14. Modeling methods and numerical simulations offer unique insights into blood flow behavior and hemodynamic properties, facilitating the detection of in vivo parameters contributing to congestion or plaque separation. These computational approaches complement traditional medical techniques, such as imaging, by revealing information about changes in vessel shape and blood flow patterns that may be difficult to assess directly15,16,17,18,19. Consequently, these methods have gained attention as new tools for predicting blood flow behavior and hemodynamic characteristics20,21,22,23. Computational fluid dynamics (CFD) and its coupling with fluid-structure interaction (FSI) are the two primary methods for simulating blood flow.
The vessel wall is considered rigid in CFD, whereas FSI accounts for wall elasticity24. Because CFD is computationally less expensive, most studies use it, even though FSI provides more accurate and realistic simulations25,26. Notably, treating the vessel wall as rigid in CFD can overestimate blood flow velocity and WSS27. Straughan et al.6 presented an algorithmic method for automatically generating 3D finite element simulations from optical coherence tomography images. Their method enables large-scale clinical analysis as well as personalized diagnosis and treatment planning. Carvalho et al.28 employed CFD-FSI coupling to evaluate the influence of arterial wall compliance on hemodynamic outcomes by examining blood flow in coronary arteries. Their findings demonstrated that CFD-FSI provides significantly more accurate blood flow simulations and revealed that wall compliance strongly affects WSS distribution while having a minor impact on other parameters. Malvè et al.29 investigated the effects of wall compliance on WSS distribution by comparing left coronary artery WSS simulations using both FSI and rigid wall models. They found that WSS distribution is affected by wall compliance, leading to substantial differences in minimum and maximum WSS values. Lopes et al.20 performed carotid blood flow simulations using FSI and rigid wall models for specific geometries and transient boundary conditions. Their study further emphasized the importance of FSI models, as rigid wall models tend to overestimate flow velocity and WSS. Simulation methods are appropriate alternatives for examining coronary artery occlusion compared with high-risk and expensive medical procedures such as angiography, which carries risks of death, heart attack, and stroke30,31.
However, the time-consuming nature of these simulations poses a challenge: each individual's unique vessel geometry requires repeating the entire process for every new case, making CAD predictions cost-prohibitive32,33,34. Moreover, other technologies used to quantify WSS, such as four-dimensional phase-contrast magnetic resonance imaging (4D flow MRI), suffer from low resolution, a limitation that artificial intelligence-based methods can help overcome35,36. Consequently, an alternative technique is needed to overcome these limitations. Although there are no definitive early signs for diagnosing CAD, image-processing systems have emerged as powerful tools for noninvasive prediction20,37,38,39,40. Machine learning (ML) algorithms have already shown significant potential in medical imaging and disease prediction, and their importance and application in advancing modern medicine and improving healthcare services continue to grow41,42,43,44. Jordanski et al.45 presented an ML-based approach to reduce computational time for CFD-based WSS distribution calculations. They accurately predicted WSS distribution at various time points using multivariate linear regression (MLR), a multilayer perceptron neural network (MLPNN), and Gaussian conditional random fields (GCRF). Tesche et al.46 compared CFD results with an ML algorithm for determining fractional flow reserve (FFR) values, verifying their findings against several reference standards; both methods yielded comparable results with minimal differences. Building on this, Wang et al.47 developed the DEEPVESSEL-FFR platform using an automatic quantification method and deep learning (DL), enabling efficient evaluation of coronary artery stenosis. Coenen et al.48 evaluated their proposed method by comparing FFR results from CFD and ML methods using data from 525 vessels, with invasive coronary angiography (ICA) as the reference; both methods performed equally well.
In another study, Koo et al.49 combined machine learning and CFD simulations to assess the impact of various parameters on the performance of a machine learning-based computed FFR diagnostic method. Despite close statistical agreement between the two methods, the ML method was more appropriate for lesion-specific ischemia prediction. Deep learning and artificial neural networks have recently demonstrated remarkable progress50,51. These techniques have revealed meaningful insights by uncovering complex, non-linear relationships between parameters that shallow machine learning approaches cannot capture52,53,54,55. DL algorithms have successfully bridged this gap, demonstrating the ability to quickly identify and categorize cases prone to CAD with high accuracy56,57,58. Gharleghi et al.59 analyzed blood flow in coronary arteries by applying DL to the time-averaged wall shear stress (TAWSS) hemodynamic index as a predictive measure based on vessel geometry. By combining CFD simulations and DL, they achieved highly accurate TAWSS predictions. Li et al.60 employed CFD and DL to predict 3D cardiovascular hemodynamics in bypass surgery and successfully reproduced the relationship between cardiovascular geometry and in vivo hemodynamics for FFR. Suk et al.61 used mesh convolutional neural networks to estimate WSS in a 3D artery model. Their DL model achieved 90.5% accuracy with a 1.6% absolute average error in predicting WSS. Raissi et al.62 employed a DL-based method as an alternative to Navier-Stokes equation-based approaches for visualizing flow patterns in biological systems. This innovative method successfully extracted quantitative information that could not be measured directly. Using physical simulations and machine learning, Feiger et al.63 examined the influence of stenosis degree, blood flow velocity, and blood viscosity on pressure gradient and WSS.
They proposed a model to predict these parameters, achieving a pressure gradient estimation error of 1.18 mmHg and a WSS estimation error of 0.99 Pa. Arzani et al.64 explored near-wall blood flow and calculated WSS using a physics-informed neural network (PINN) model that incorporates information from the governing equations. Their results revealed the potential of DL models to improve WSS quantification. Chen et al.65 presented an effective image inpainting algorithm based on a partial multi-scale channel attention mechanism and deep neural networks (the Res-U-Net module) to deal with problems such as fuzzy images, texture distortion, and semantic inaccuracy. This approach can adequately represent multi-scale features with many irregular defects. Generative adversarial networks (GANs) are a class of powerful machine learning models that have revolutionized many fields, including medical imaging and diagnostics. Researchers have used the ability of GANs to generate high-quality, realistic medical images to address challenges in diagnosing and treating CAD66,67,68. Gurusubramani and Latha66 introduced a novel hybrid GAN with semantic resonance for generating and analyzing synthetic cardiac images, addressing the crucial need for accuracy and clinical relevance in cardiac image synthesis. Their method achieved high accuracy by incorporating pre-trained CNN classifiers and optimizing adversarial and classification losses. Another application of GANs is data augmentation. Ahmadi Golilarz et al.68 introduced a deep learning model called generative adversarial networks-multi discriminator (GAN-MD) as a noninvasive method to diagnose myocarditis using cardiac magnetic resonance (CMR) images. GAN-MD addresses the challenges of imbalanced classification and image generation by incorporating a reconstruction loss, regularization techniques, and focal loss-based training.
This research achieved superior performance compared with other methods, making GAN-MD a promising tool for detecting and monitoring myocarditis. Anomaly detection (AD) is another application of GANs. Saeeda et al.69 provided an extensive survey on using GANs for AD in various applications, such as digital healthcare. Their study discussed state-of-the-art approaches, available evaluation datasets, challenges, and future research directions to further enhance the effectiveness of GAN-based AD techniques. Although there have been many studies on predicting and classifying coronary artery occlusion using DL combined with numerical simulations, none have been able to predict each patient's unique vascular occlusion risk with high accuracy and speed. The motivation of this study is therefore to develop a method that overcomes both the uncertainty in computational models and the low resolution of experimental hemodynamic quantification. In this study, 350 fluid-structure interaction (FSI)-coupled computational fluid dynamics (CFD) simulations were performed to closely replicate blood flow behavior in coronary arteries under various occlusion scenarios. These scenarios included different occlusion percentages, locations, and lengths, as well as combinations of multiple occlusions with various percentages. Each simulation's hemodynamic features were used to label the corresponding wall shear stress (WSS) contour as a data point. To enhance the accuracy of the employed algorithms, the dataset was expanded while maintaining the label of each data point. Subsequently, a conditional generative adversarial network (cGAN) model was employed to predict the WSS distribution, a crucial parameter in CAD analysis, on the inner surfaces of the arteries. An 11-layer convolutional neural network (CNN) model was then employed to classify the WSS data into three trained grades for CAD risk prediction.
Finally, we used a practical case as untrained data to demonstrate the efficacy of the proposed methods. The main contributions of this paper are as follows: (1) proposing a novel cGAN-based method, called WSSGAN, to accurately predict WSS contours from noninvasive images; (2) developing the WSSGAN neural network model to generate WSS contour images, enhancing data generation capabilities; (3) designing the cGAN model architecture as an encoder-decoder network for WSS prediction, showcasing technical advances in image processing; (4) utilizing an 11-layer CNN model to classify WSS contours into three grades, enabling accurate patient classification.

Methodology

Simulation details

To better understand the study process, the flow-process diagram is shown in Fig. 1.

Fig. 1 Flow-process diagram for predicting the risk of each patient's unique vascular occlusion.

Since one of the most essential parts of this study is performing coupled CFD-FSI simulations of blood flow in coronary vessels, the following sections give the simulation details.

Geometry

The geometry under study was part of the exact and detailed geometry of the human left coronary artery and its bifurcation regions, constructed in a study at the University of Colorado70 (Fig. 2). The models consist of major bifurcation regions (diagonal, circumflex, obtuse marginal (OM), and other regions) and vessel segments.
To cover most of the possible states, 350 separate CFD-FSI simulations were run across the branches under different conditions: occlusion percentages (i.e., the ratio of the area of the obstruction to the cross-sectional area of the vessel) of 80, 60, 40, and 20 at different locations of the left coronary artery (in various vessels, at different distances from each other, and at junctions); different occlusion lengths (i.e., one, two, and three times the vessel diameter); and combinations of two or three consecutive occlusions with different percentages along the vessel length or around the multi-branch points (Fig. 2).

Fig. 2 The geometry of the human left coronary artery and bifurcation regions.

Governing equations

This research aimed to make the blood flow characteristics in the simulations as close as possible to actual conditions in the body. The blood flow was therefore assumed to be incompressible, non-Newtonian, unsteady, and laminar7,71,72,73. The average Reynolds number of the blood flow inside the artery was 320. The density and specific heat capacity of blood were taken as \(1050\ \mathrm{kg/m^{3}}\) and \(3470\ \mathrm{J/(kg\,^{\circ}C)}\), respectively. To bring the results closer to actual conditions, the numerical simulations were run with CFD coupled to FSI using ANSYS Fluent 17.274, where the FSI formulation accounts for wall elasticity. Equations 1 and 2 give the continuity and Navier-Stokes equations, respectively:$$\nabla\cdot\mathbf{v}=0$$
(1)
$$\rho\left(\frac{\partial\mathbf{v}}{\partial t}+\mathbf{v}\cdot\nabla\mathbf{v}\right)=-\nabla p+\mu\nabla^{2}\mathbf{v}$$
(2)
where \(\mathbf{v}\) is the velocity vector, \(p\) is the static pressure, \(\rho\) is the fluid density, and \(\mu\) is the dynamic viscosity73. In addition, since blood is considered a non-Newtonian fluid, the Carreau model is used to describe its behavior75:$$\mu=\mu_{\infty}+\left(\mu_{0}-\mu_{\infty}\right)\left[1+\left(\lambda\dot{\gamma}\right)^{2}\right]^{\frac{n-1}{2}}$$
(3)
where the infinite-shear viscosity \(\mu_{\infty}\) and the zero-shear viscosity \(\mu_{0}\) are 0.00345 and 0.0565 Pa·s, respectively, and \(\dot{\gamma}\) is the instantaneous shear rate. Also, \(\lambda\) is the time constant and \(n\) is the power-law index; their values are taken as 3.313 and 0.3568, respectively75. The solid part of the simulation, i.e., the artery wall, is considered an incompressible, isotropic, linear elastic solid. The equation of motion of the solid and the stress tensor are given in Eqs. 4 and 5, respectively.$$\rho_{s}\frac{\partial^{2}\vec{u}}{\partial t^{2}}-\nabla\cdot\bar{\bar{\sigma}}=\rho_{s}\vec{b}$$
(4)
$$\bar{\bar{\sigma}}=2\mu_{L}\bar{\bar{\epsilon}}+\lambda_{L}\,\mathrm{tr}\left(\bar{\bar{\epsilon}}\right)\mathbf{I}$$
(5)
where \(\rho_{s}\) is the solid density, \(\vec{u}\) is the solid displacement vector, \(\vec{b}\) is the body force acting on the solid, \(\bar{\bar{\sigma}}\) is the Cauchy stress tensor, \(\mu_{L}\) and \(\lambda_{L}\) are the Lamé parameters, \(\bar{\bar{\epsilon}}\) is the strain tensor, \(\mathrm{tr}\) is the trace function, and \(\mathbf{I}\) is the identity matrix68. The 3D form of WSS required in this study is obtained from Eq. 6, where \(\dot{\gamma}\) and \(\mu\) are the deformation rate and the dynamic viscosity, respectively76,77.$$\tau_{wss}=\mu\left(\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\right)=\mu\dot{\gamma}$$
(6)
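As a numerical check, the Carreau model of Eq. 3 with the parameter values quoted above can be evaluated directly (a minimal sketch; the function name is ours, not from the original study):

```python
def carreau_viscosity(gamma_dot, mu_inf=0.00345, mu_0=0.0565, lam=3.313, n=0.3568):
    """Effective dynamic viscosity (Pa.s) of blood at shear rate gamma_dot (1/s),
    using the Carreau parameters given in the text (Eq. 3)."""
    return mu_inf + (mu_0 - mu_inf) * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1.0) / 2.0)

# At zero shear the model returns the zero-shear viscosity mu_0;
# at very high shear rates it approaches the infinite-shear viscosity mu_inf.
print(carreau_viscosity(0.0))   # equals mu_0 (0.0565 Pa.s)
print(carreau_viscosity(1e6))   # close to mu_inf (0.00345 Pa.s)
```

The viscosity decreases monotonically with shear rate, which is the shear-thinning behavior the Carreau model is meant to capture.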
Boundary conditions

The pulsatile velocity profile was applied as the boundary condition at the vessel inlet using a user-defined function (UDF); this profile is shown in Fig. 3. The pressure at the vessel outlet was set to 90 mmHg. While a constant outlet pressure may not account for the pressure variability across different branches of the coronary tree, the simulation methodology of this study provides valuable insight into the hemodynamic consequences of occlusions in specific branches. This approach offers a controlled framework for assessing the impact of localized disease and serves as a foundation for future studies incorporating more complex physiological conditions. All inlet and outlet boundaries were considered fixed, and the remaining fluid and solid boundaries were defined as fluid-structure interfaces78,79.

Fig. 3 Pulsatile velocity profile at the inlet of the vessel.

Solution details

Simulations were performed with CFD coupled to FSI using ANSYS Fluent, based on the finite volume method. CFD-FSI simulations are essential for accurately simulating blood flow in coronary arteries and for understanding the impact of wall compliance on parameters such as the WSS distribution; they provide a more realistic representation of blood flow than rigid wall models, ensuring that flow velocity and WSS are not overestimated. The SIMPLE scheme was used for pressure-velocity coupling with a time step of 0.01 s.
A time step of 0.01 s is appropriate for simulating blood flow in the coronary arteries with the FSI method, as it can capture the relevant hemodynamic and structural changes over the cardiac cycle. The CFD method was applied to the fluid domain, which was divided into several control volumes, and the conservation equations were applied to each of them. The governing equations for the solid boundary (i.e., the vessel wall) were solved using the finite element method (FEM). A one-way FSI approach was used: first, the fluid flow simulation was performed, and then the resulting fluid forces were applied as boundary conditions to the deformable vessel wall. The fluid and solid meshes were coupled at the fluid-solid interface using techniques such as overset meshing. The vessel wall was modeled as a hyperelastic material using the Ogden model. The calculated changes were then transferred back to the fluid, and these steps were repeated until the difference between the changes in the last two iterations was less than 0.5%80. An unstructured mesh was used in both the fluid and solid domains. To ensure mesh quality, the blood flow simulation was run for one of the branches with different numbers of elements to find the optimal element count. The maximum flow volume occurs at 20% occlusion, which means the largest control volume and the most elements were used in the calculations; if the results for this occlusion percentage are independent of the number of elements, the results for the other occlusion percentages can also be assumed mesh-independent. For this purpose, the velocity profiles at the beginning, middle, and end of the occlusion were calculated for five meshes with different numbers of elements. The velocity profiles for these meshes at the three cross-sections are compared in Fig. 4.
As can be seen, "mesh 3" with 650,928 elements was the best choice, because there is little difference between the velocity profile of this mesh and those of the finer meshes.

Fig. 4 Mesh independence for velocity profiles at the (a) beginning, (b) middle, and (c) end of the occlusion.

The experimental data of Gijsen et al.81,82 for the axial blood velocity profile in a vessel were used to validate the CFD-FSI results. These data are for a carotid bifurcation with unsteady, non-Newtonian blood flow, where the vessel length is six times its diameter (L = 6D). This bifurcation is also part of the coronary vessels in Fig. 2. Figure 5 compares the velocity profile calculated with the CFD-FSI method against the corresponding experimental values; there is good agreement between the simulation results and the experimental data81,82.

Fig. 5 Comparison of the calculated velocity profile based on CFD-FSI coupling and the corresponding experimental data81,82.

Dataset

Building the dataset

Building a dataset is one of the essential steps toward this study's final purpose. It consists of collecting the numerical simulation results, normalizing and labeling them, and generating artificial data. The dataset consists of the 3D WSS contours from each numerical simulation. Normalization, one of the essential data pre-processing measures, is necessary to maintain the value and stability of the data before they are used in the models, and it significantly increases the learning speed of the proposed model. In this study, the value of each image pixel is normalized to the range between 0 and 1 according to the minimum and maximum WSS values.
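The per-image min-max normalization described above can be sketched as follows (a minimal NumPy sketch; the function name and array shape are illustrative, not from the original implementation):

```python
import numpy as np

def normalize_wss(image):
    """Scale a WSS contour image to [0, 1] using its own min and max WSS values."""
    image = np.asarray(image, dtype=float)
    lo, hi = image.min(), image.max()
    if hi == lo:                      # guard against a constant image
        return np.zeros_like(image)
    return (image - lo) / (hi - lo)

contour = np.array([[2.0, 4.0], [6.0, 10.0]])   # toy WSS values in Pa
print(normalize_wss(contour))                   # values now span 0.0 ... 1.0
```

Because each image is scaled by its own extrema, the network sees inputs on a common scale regardless of the absolute WSS magnitudes of a given simulation.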
Since all the data are obtained by simulation, none of them are missing, repetitive, or otherwise problematic and in need of removal.

Data labeling

Since the purpose of this study is to provide a method for detecting the risk of coronary artery occlusion, proper labeling and identification for data classification are very important. The necessary information was collected from reliable scientific sources83,84,85,86,87,88,89,90,91 that quantitatively examine the relationship between WSS and the risk of atherosclerotic plaques, as well as the relationship of WSS with velocity and pressure contours in different situations. The effect of vascular anatomy on WSS and the classification of coronary arteries using quantitative and qualitative parameters were also studied. The data were labeled manually based on the parameters describing the anatomy of the coronary arteries (occlusion location, occlusion length, and vessel radius in the occlusion region), minimum and maximum WSS, pressure, velocity, blood rheological properties, and other parameters. All data were classified into three groups based on their characteristics: Grade 1 (low risk), Grade 2 (medium risk), and Grade 3 (high risk). Figure 6 shows an example from each class.

Fig. 6 Samples of the three grades: (a) low risk, (b) medium risk, and (c) high risk.

To facilitate a comprehensive understanding of the data generation process, a flowchart illustrating the key steps in creating a representative dataset sample is provided in Fig. 7.

Fig. 7 Flow-process diagram for generating a data sample.

Increasing the amount of data

In deep learning strategies, the amount of data plays an essential role in training the model and in the accuracy of the final results, because a large amount of data gives the model enough information for correct training92,93.
In problems like the present study, obtaining a large amount of data requires considerable money and time, so techniques are used to build a dataset that still provides reliable and acceptable results. New data are created by applying changes such as rescaling, rotation, lighting changes, and added noise to the primary data, and are added to the dataset while keeping the class label and the pixel count of the original image. Figure 8 shows the four data enhancement techniques used: (a) shrinking the image by 20%, (b) rotating the original image by 90 degrees to the right, (c) rotating the original image by 45 degrees counterclockwise, and (d) adding Gaussian noise with mean 0 and variance 0.5. These four techniques were applied to all 350 data points, increasing the total amount of data fourfold.

Fig. 8 Four techniques to increase the amount of data in a sample: (1) original data, (a) shrinking the image by 20%, (b) rotating the original image by 90 degrees to the right, (c) rotating the original image by 45 degrees counterclockwise, (d) adding Gaussian noise.

WSSGAN model

Generative adversarial networks (GANs) were first proposed by Goodfellow et al.94 in 2014. A GAN is a generative model that produces new data similar to the training dataset; in this study, the algorithm's output is an image of the WSS contour. GANs consist of two neural networks, a generator and a discriminator, which can automatically discover and learn the patterns in the input data95. A widely used model based on the GAN algorithm is the conditional generative adversarial network (cGAN), which improves performance by conditioning the generator and the discriminator to control the input and output images96,97,98,99.
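The four augmentation operations of Fig. 8 (20% shrink, 90° clockwise and 45° counterclockwise rotations, and zero-mean Gaussian noise with variance 0.5) can be sketched with NumPy and SciPy; the class label is simply carried over unchanged. This is an illustrative sketch under our own assumptions (padding the shrunk image back to the original size), not the authors' exact pipeline:

```python
import numpy as np
from scipy import ndimage

def augment(image, label, rng=None):
    """Return the four augmented copies of (image, label) shown in Fig. 8."""
    rng = rng or np.random.default_rng(0)
    h, w = image.shape
    # (a) shrink by 20%, then zero-pad back so the pixel count is unchanged
    small = ndimage.zoom(image, 0.8)
    shrunk = np.zeros_like(image)
    dh, dw = (h - small.shape[0]) // 2, (w - small.shape[1]) // 2
    shrunk[dh:dh + small.shape[0], dw:dw + small.shape[1]] = small
    # (b) rotate 90 degrees clockwise; (c) rotate 45 degrees counterclockwise
    rot90 = np.rot90(image, k=-1)
    rot45 = ndimage.rotate(image, 45, reshape=False)
    # (d) add Gaussian noise with mean 0 and variance 0.5 (std = sqrt(0.5))
    noisy = image + rng.normal(0.0, np.sqrt(0.5), size=image.shape)
    return [(shrunk, label), (rot90, label), (rot45, label), (noisy, label)]
```

Applying `augment` to each of the 350 labeled contours yields four extra samples per original, matching the fourfold increase described in the text.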
The wall shear stress generative adversarial network (WSSGAN), a cGAN-type model, is evaluated on two datasets obtained by randomly dividing the created dataset into two parts. In the first part, only the geometry is used as input to the generator; in the second, the WSS contours are entered directly into the discriminator. These two networks compete to check, record, and repeat the changes in the dataset. The generator network produces fake data to train the discriminator, while the generator itself learns to create acceptable data. The samples produced by the generator are treated as negative training samples for the discriminator. The primary purpose of the generator is to trick the discriminator into classifying its output as true samples; the task of the discriminator is to identify and separate the true data from the generated fake data. During training, the discriminator is connected to two loss functions; it ignores the generator loss and uses only the discriminator loss. The architecture of the cGAN model is shown in Fig. 9. The WSSGAN architecture was designed as an encoder-decoder network and generates a feature vector of size 512 at the bottleneck. The input to the generator was a geometry modeled on a 128 × 128 mesh, with void regions modeled by applying an infinitesimally small Young's modulus. The generated output is a 128 × 128 mesh showing the WSS distribution. The encoder includes log2(m) downsampling blocks, each with a convolutional layer, a batch normalization layer, and a LeakyReLU layer. The decoder consists of log2(m) upsampling blocks, each with a deconvolutional layer, a batch normalization layer, and a ReLU layer. Both the convolutional and deconvolutional layers had a kernel size of 5 × 5 and a stride of 2. WSSGAN was trained with a learning rate of 0.001 using the Adam optimizer100 and a batch size of 64. To better illustrate the working process of the WSSGAN algorithm, its pseudocode is shown in Fig. 10.

Fig. 9 WSSGAN architecture: the generator (top) and the discriminator (bottom).

Fig. 10 The pseudocode of the WSSGAN algorithm.

Evaluating the performance of each model is very important. Four criteria are used to evaluate the performance of WSSGAN: mean absolute error (MAE), percentage mean absolute error (PMAE), peak absolute error (PAE), and percentage peak absolute error (PPAE). MAE, defined in Eq. 7, evaluates the overall quality of the predicted shear stress distribution. In this equation, n is the total number of samples, and \(y_{j}\) and \(y_{j}^{*}\) are the actual and predicted values, respectively101. Lower values of these criteria indicate better model performance.$$\mathrm{MAE}=\frac{1}{n}\sum_{j=1}^{n}\left|y_{j}-y_{j}^{*}\right|$$
(7)
PMAE measures the model's prediction accuracy as a percentage, as given in Eq. 8. In this equation, y is the sample value, and min(y) and max(y) are the minimum and maximum sample values, respectively101.$$\mathrm{PMAE}=\frac{\mathrm{MAE}}{\max\left(y\right)-\min\left(y\right)}\times 100$$
(8)
PAE and PPAE measure the error in the local critical WSS value and are defined in Eqs. 9 and 10101.$$\mathrm{PAE}=\left|\max\left(y\right)-\max\left(y^{*}\right)\right|$$
(9)
$$\mathrm{PPAE}=\frac{\mathrm{PAE}}{\max\left(y\right)}\times 100$$
(10)
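Equations 7-10 translate directly into code (a minimal NumPy sketch; `y` and `y_pred` stand for flattened actual and predicted WSS arrays, and the function name is ours):

```python
import numpy as np

def wss_metrics(y, y_pred):
    """MAE, PMAE, PAE and PPAE of a predicted WSS distribution (Eqs. 7-10)."""
    y, y_pred = np.asarray(y, float), np.asarray(y_pred, float)
    mae = np.mean(np.abs(y - y_pred))              # Eq. 7: mean absolute error
    pmae = mae / (y.max() - y.min()) * 100         # Eq. 8: MAE as % of the range
    pae = abs(y.max() - y_pred.max())              # Eq. 9: error in the peak value
    ppae = pae / y.max() * 100                     # Eq. 10: peak error as a %
    return mae, pmae, pae, ppae

y_true = np.array([1.0, 2.0, 3.0, 5.0])
y_hat = np.array([1.1, 1.9, 3.2, 4.8])
print(wss_metrics(y_true, y_hat))   # MAE=0.15, PMAE=3.75, PAE=0.2, PPAE=4.0
```

MAE and PMAE summarize the whole contour, while PAE and PPAE check that the local WSS peak, which drives the risk assessment, is reproduced.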
CNN model

Convolutional neural networks (CNNs) are a type of neural network architecture commonly used for image recognition102. Since two-dimensional convolutional filters can detect the edges of images, CNNs are well suited to generalizing image patterns103, and they perform excellently in many applications, such as image classification, object detection, and medical image analysis104,105,106. This study employed a CNN to predict vessel occlusion risk in three classes: high, medium, and low. The WSS contour from the cGAN model was used as input, and the output is the risk prediction for vessel occlusion. The CNN model proposed in this study consists of four feature-detection (convolutional) layers, four max-pooling layers, one flatten layer, and two fully connected layers that classify the data into three classes. The specifications, parameter values, and network architecture are shown in Fig. 11. The convolution operation in the first layer uses 32 kernels of size 5 × 5 with a rectified linear unit (ReLU) activation function, followed by max pooling of size 2 × 2 with a stride of 2. In the following layers the kernel size is 3 × 3. The second convolution stage uses 64 kernels with ReLU, again followed by 2 × 2 max pooling with stride 2, and the stochastic gradient descent (SGD) optimizer. The same settings are repeated for the third and fourth convolutions. The fully connected part consists of two hidden layers with 3823 and 492 neurons, respectively. The activation function of the output layer is softmax, and the optimizer was treated as a hyperparameter. Figure 12 shows a pseudocode for this network.

Fig. 11 CNN model architecture, including inputs, feature maps, hidden units, and outputs.

Fig. 12 The pseudocode of the CNN algorithm.

To check the proposed model's accuracy and validity, several criteria were calculated from the confusion matrix107.
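The layer stack described above can be sketched in PyTorch. This is a sketch under our own assumptions, not the authors' exact model: the single-channel 128 × 128 input size, 'same' padding, and 64 kernels in the third and fourth convolutions are our choices, while the kernel counts and sizes of the first two stages and the dense-layer widths (3823 and 492 neurons, three output classes) follow the text:

```python
import torch
import torch.nn as nn

class OcclusionRiskCNN(nn.Module):
    """Sketch of the 11-layer CNN: 4 conv + 4 max-pool layers, flatten, 2 dense layers."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 3823), nn.ReLU(),   # 128/2^4 = 8 per side
            nn.Linear(3823, 492), nn.ReLU(),
            nn.Linear(492, 3),   # three risk grades; softmax is applied in the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = OcclusionRiskCNN()
logits = model(torch.zeros(1, 1, 128, 128))
print(logits.shape)   # one logit per risk grade: torch.Size([1, 3])
```

Each max-pool halves the spatial size, so a 128 × 128 contour is reduced to an 8 × 8 × 64 feature map before the dense classifier.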
This matrix contains the real and predicted values for a two-class classification problem (positive and negative), as shown in Table 1.

Table 1 Confusion matrix for checking the proposed model's accuracy and validity.

In general, accuracy expresses how much of the output the model has predicted correctly; it indicates the number of classifications that have been correctly identified107. According to Table 1, the accuracy is defined as Eq. 11.$$\mathrm{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN}$$
(11)
Sensitivity is the ratio of positive cases that the model has correctly identified as positive and is calculated as follows107:$$\mathrm{Sensitivity}=\frac{TP}{TP+FN}$$
(12)
Precision is the ratio of the number of cases correctly assigned to a particular class to the total number of cases the model assigned to that class, whether correctly or incorrectly107.$$\mathrm{Precision}=\frac{TP}{TP+FP}$$
(13)
In cases where identifying the negative class is important, specificity is a suitable criterion to use alongside sensitivity. It is the ratio of the number of correctly classified negative cases to the total number of cases in that class107:$$\mathrm{Specificity}=\frac{TN}{TN+FP}$$
(14)
The false positive rate (FPR) is the probability of incorrectly classifying a negative case as positive107.$$\mathrm{FPR}=\frac{FP}{FP+TN}$$
(15)
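All five confusion-matrix criteria (Eqs. 11-15) can be computed in a few lines (a minimal sketch; the function name and the toy counts are illustrative):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, precision, specificity and FPR from a 2-class
    confusion matrix (Eqs. 11-15)."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),   # Eq. 11
        "sensitivity": tp / (tp + fn),                    # Eq. 12
        "precision":   tp / (tp + fp),                    # Eq. 13
        "specificity": tn / (tn + fp),                    # Eq. 14
        "fpr":         fp / (fp + tn),                    # Eq. 15
    }

# Toy confusion matrix: 80 TP, 90 TN, 10 FP, 20 FN
m = classification_metrics(80, 90, 10, 20)
print(m["accuracy"], m["sensitivity"], m["fpr"])   # 0.85 0.8 0.1
```

For the three-grade classifier of this study, the same formulas apply per class in a one-vs-rest fashion.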
