GAN-Augmented Naïve Bayes for identifying high-risk coronary artery disease patients using CT angiography data

The proposed system is illustrated in Fig. 1. The first step is the input of CCTA images, to which Generative Adversarial Networks (GANs) are applied to identify patients at high risk of CAD. Next, the images are preprocessed through normalization and enhancement. The preprocessed images are then fed to the two primary components of the GAN, the generator and the discriminator.

Fig. 1 Block diagram of the proposed model to identify CAD.

The discriminator distinguishes between real and synthetic images to improve the output of the generator, which produces synthetic CCTA images or enriches the original images to emphasize CAD-relevant features. Once trained, the intermediate layers of the discriminator are used to extract rich, informative features from the real CCTA images. These features serve as input to the Naïve Bayes classifier, which models the probability distribution of the features for CAD and non-CAD cases to identify high-risk patients.

Preprocessing for GAN-based segmentation converts the cardiovascular images from the spatial domain to the frequency domain using the Fourier Transform. This transformation expresses the visual data in terms of its frequency components, making it possible to reduce noise and accentuate pertinent information: specific features of the cardiovascular architecture can be efficiently emphasized or suppressed with filters. After filtering, the images are transformed back into the spatial domain using the inverse Fourier Transform. The treated images exhibit fewer noise artifacts and sharper structural features, making them optimal inputs for training the GAN segmentation models.

Preprocessing

During acquisition, CCTA images can be affected by several types of noise, including electrical and photon noise. The preprocessing steps are illustrated in Fig. 2.

Fig. 2 Preprocessing of CCTA images.

As shown in Fig. 2, the input CCTA images are first captured and normalized to standardize their intensity values; this is where the Fourier Transform preprocessing pipeline begins. A 2D Fourier Transform is then applied to shift the images from the spatial domain to the frequency domain, where filtering techniques amplify or suppress particular frequency components. A 2D inverse Fourier Transform returns the filtered images to the spatial domain, and the final images undergo post-processing, including noise reduction and image enhancement, to increase quality.

Noise artifacts can hide crucial anatomical information and affect the accuracy of diagnostic evaluations; the preprocessing procedure reduces them. The Fourier Transform preprocessing of the cardiovascular images for GAN segmentation proceeds as follows. A 2D Fourier Transform converts an image from the spatial domain to the frequency domain by decomposing it into sine and cosine components, describing the CCTA image in terms of its frequencies:
$$F(u,v)=\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}f(x,y)\cdot e^{-j2\pi\left(\frac{ux}{M}+\frac{vy}{N}\right)}$$
(1)
where \(f(x,y)\) is the pixel value at position \((x,y)\); M and N are the dimensions of the image; u and v are the frequency-domain coordinates; and j is the imaginary unit. The FFT shift moves the zero-frequency component to the center of the frequency-domain image for better visualization and processing:
$$F_{shifted}(u,v)=\mathrm{FFTShift}\left(F(u,v)\right)$$
(2)
Filters are applied in the frequency domain to enhance features and reduce noise. For example, a band-pass filter with cutoff radii \(D_{1}\) and \(D_{2}\) is defined as
$$H_{band\text{-}pass}(u,v)=\begin{cases}1 & \text{if } D_{1}\le \sqrt{u^{2}+v^{2}}\le D_{2}\\ 0 & \text{otherwise}\end{cases}$$
(3)
The shifted Fourier Transform is multiplied by the chosen filter to enhance or suppress specific frequencies:
$$F_{filtered}(u,v)=F_{shifted}(u,v)\cdot H(u,v)$$
(4)
The inverse 2D Fourier Transform is applied to convert the processed image back to the spatial domain:
$$f_{processed}(x,y)=\mathrm{IFFT2}\left(F_{inv\text{-}shifted}(u,v)\right)$$
(5)
Diagnostic accuracy and image quality can be compromised by CCTA image artifacts, such as motion artifacts resulting from patient movement or beam-hardening abnormalities caused by variations in tissue density. To ensure that the pixel values fall within a standard range, such as [0, 1] or [0, 255], the processed image is normalized:
$$f_{normalized}(x,y)=255\cdot \frac{f_{processed}(x,y)-\min\left(f_{processed}\right)}{\max\left(f_{processed}\right)-\min\left(f_{processed}\right)}$$
(6)
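The full preprocessing chain of Eqs. (1)–(6) is small enough to sketch directly. The following minimal NumPy example is illustrative, assuming a single 2D slice as input; the band-pass cutoff radii d1 and d2 are placeholder values, not tuned settings.

    import numpy as np

    def preprocess_ccta(image, d1=5.0, d2=60.0):
        """Fourier-domain preprocessing of a 2D CCTA slice: FFT with shift
        (Eqs. 1-2), band-pass filtering (Eqs. 3-4), inverse FFT (Eq. 5),
        and min-max normalization (Eq. 6)."""
        M, N = image.shape
        # Eqs. (1)-(2): 2D FFT, zero-frequency component shifted to the center.
        F = np.fft.fftshift(np.fft.fft2(image))
        # Eq. (3): band-pass mask from the distance to the spectrum center.
        u = np.arange(M) - M / 2
        v = np.arange(N) - N / 2
        D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
        H = ((D >= d1) & (D <= d2)).astype(float)
        # Eq. (4): element-wise filtering in the frequency domain.
        F_filtered = F * H
        # Eq. (5): undo the shift and apply the inverse 2D FFT.
        f_processed = np.real(np.fft.ifft2(np.fft.ifftshift(F_filtered)))
        # Eq. (6): normalize pixel values to [0, 255].
        f_min, f_max = f_processed.min(), f_processed.max()
        return 255.0 * (f_processed - f_min) / (f_max - f_min + 1e-12)

    # Example: filter a synthetic 256 x 256 slice.
    slice_ = np.random.rand(256, 256)
    enhanced = preprocess_ccta(slice_)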
Figure 3 shows the high-risk CCTA images available in the dataset and the corresponding preprocessed images obtained through the Fourier Transform. The step-by-step preprocessing of the CCTA images using the Fourier Transform is shown in Fig. 4. The input CCTA image, shown in Fig. 4(a), includes noise, artifacts, and inconsistent image quality. In Fig. 4(b), the image is converted into the frequency domain using a 2D Fourier Transform, which represents the frequency components of the image. Applying a low-pass filter decreases high-frequency noise and highlights important structures, as shown in Fig. 4(c).

Fig. 3 (a)–(c) Input CCTA images; (d)–(f) CCTA images preprocessed by the Fourier Transform.

Fig. 4 Preprocessed CCTA (CS) images through the step-by-step process of the Fourier Transform.

A high-pass filter, which focuses on higher frequencies, enhances the edges and fine details, as shown in Fig. 4(d). A band-pass filter, which allows a certain range of frequencies to pass through, balances the enhancement of both large structures and fine details, as shown in Fig. 4(e). The preprocessed images, transformed back into the spatial domain using the 2D inverse Fourier Transform, are shown in Fig. 4(f). Preprocessing produces images with less noise and better visibility of important anatomical features, which are necessary for precise diagnosis and analysis.

Segmentation

After preprocessing, the enhanced cardiovascular images are fed into the GAN for segmentation. GANs are a powerful technique in medical image analysis for detecting individuals at high risk of CAD from CCTA imaging data. A GAN is composed of two neural networks, a generator and a discriminator, which are trained in parallel in a competitive manner. The generator aims to produce realistic synthetic CCTA images that resemble genuine images from high-risk patients, whereas the discriminator seeks to distinguish between real and synthetic images.

Fig. 5 GAN for segmentation of CCTA imaging data.

As shown in Fig. 5, the generator network operates as follows. The input of generator G is random noise z, typically drawn from a simple distribution such as a Gaussian:
$$z \sim N(0,1)$$
(7)
Generator G consists of multiple layers of neural network units that transform the input noise z into a higher-dimensional representation, which can be interpreted as an image. The parameters \(\theta_{G}\) of the generator are tuned during the training phase to learn a mapping from the input noise space to the output image space; within the GAN framework, the gradients from discriminator D are used to update \(\theta_{G}\) during this backpropagation learning phase.

The final output of the generator network is a synthetic CCTA image \(\widehat{X}\), generated in such a way that it resembles real CCTA scans of high-risk patients in terms of key features and patterns. The generator's task is to reduce the degree to which discriminator D can distinguish the produced images \(\widehat{X}\) from the actual CCTA images X; to deceive D into judging the images genuine, G must make \(\widehat{X}\) sufficiently realistic.

The generator G is updated during training using the gradients obtained from the feedback of discriminator D. Iteratively, this adversarial process enhances G's capacity to produce increasingly accurate and lifelike synthetic CCTA images. The GAN framework comprises the following elements.

Generator (G)

The generator network G learns to generate synthetic CCTA images \(\widehat{X}\) from random noise z:
$$\widehat{X}=G(z;\theta_{G})$$
(8)
where \(\theta_{G}\) represents the parameters of the generator network. The transformation includes many layers, such as activation functions, batch normalization layers, transposed convolutional layers, and fully connected layers. The overall framework is represented as
$$\widehat{X}=f_{L}\left(f_{L-1}\left(\cdots f_{2}\left(f_{1}(z)\right)\right)\right)$$
(9)
where \(f_{l}\) denotes the function represented by the \(l\)th layer in the network and L is the number of layers.

Discriminator (D)

The discriminator network D aims to distinguish between real CCTA images X from high-risk patients and synthetic images \(\widehat{X}\):
$$D(X;\theta_{D})=g_{K}\left(g_{K-1}\left(\cdots g_{2}\left(g_{1}(X)\right)\right)\right)$$
(10)
where \(\theta_{D}\) represents the parameters of the discriminator network, \(g_{k}\) denotes the function represented by the \(k\)th layer, and K is the number of layers.
$$D(X;\theta_{D})\approx 1 \rightarrow \text{probability that } X \text{ is real}$$
(11)
$$D(\widehat{X};\theta_{D})\approx 0 \rightarrow \text{probability that } \widehat{X} \text{ is real}$$
(12)
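To make Eqs. (8)–(12) concrete, the sketch below defines a minimal generator and discriminator pair in PyTorch. The layer counts, channel widths, the 64 × 64 single-channel image size, and the latent dimension are illustrative assumptions rather than the exact architecture used here.

    import torch
    import torch.nn as nn

    LATENT_DIM = 100  # dimensionality of the noise z ~ N(0, 1); illustrative choice

    class Generator(nn.Module):
        """Maps noise z to a synthetic 64x64 single-channel image (Eqs. 8-9)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                # f_1: project the noise vector to a 4x4 feature map.
                nn.ConvTranspose2d(LATENT_DIM, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
                # f_2 ... f_{L-1}: transposed convolutions upsample to image size.
                nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
                nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
                # f_L: final layer producing pixel intensities in [-1, 1].
                nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),
            )
        def forward(self, z):
            return self.net(z.view(z.size(0), LATENT_DIM, 1, 1))

    class Discriminator(nn.Module):
        """Scores an image as real (output near 1) or synthetic (near 0), Eqs. 10-12."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(  # g_1 ... g_{K-1}: intermediate feature layers
                nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),
                nn.Conv2d(32, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
                nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            )
            self.classifier = nn.Sequential(  # g_K: final real/fake probability
                nn.Flatten(), nn.Linear(128 * 8 * 8, 1), nn.Sigmoid(),
            )
        def forward(self, x):
            return self.classifier(self.features(x))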
Adversarial training

The training process minimizes the following objective function, which balances the generator's goal of fooling the discriminator against the discriminator's goal of correctly distinguishing real images from synthetic ones:
$$\min_{G}\max_{D}\;\mathbb{E}_{X\sim p_{data}(X)}\left[\log D(X)\right]+\mathbb{E}_{z\sim p_{z}(z)}\left[\log\left(1-D(G(z))\right)\right]$$
(13)
where \(p_{data}(X)\) is the distribution of real CCTA images from high-risk patients and \(p_{z}(z)\) is the prior distribution of the input noise z. Once trained, generator G can produce synthetic CCTA images \(\widehat{X}\) that resemble CCTA images of high-risk patients. By examining these images, clinicians can detect characteristics or trends that point to CAD risk factors, including stenosis, plaque accumulation, and other anomalies.

Loss functions for training

Different loss functions are used to train the discriminator and the generator. The generator loss is
$$L_{G}=-\mathbb{E}_{z\sim p_{z}(z)}\left[\log D(G(z))\right]$$
(14)
The discriminator loss is the negative of the discriminator's objective in Eq. (13):
$$L_{D}=-\mathbb{E}_{X\sim p_{data}(X)}\left[\log D(X)\right]-\mathbb{E}_{z\sim p_{z}(z)}\left[\log\left(1-D(G(z))\right)\right]$$
(15)
Gradient descent updates

Training updates the parameters of G and D using gradient descent on their respective losses. The update rule for the discriminator is
$$\theta_{D}\leftarrow \theta_{D}-\eta\nabla_{\theta_{D}}L_{D}$$
(16)
where \(\eta\) is the learning rate. For the generator, the update rule is
$$\theta_{G}\leftarrow \theta_{G}-\eta\nabla_{\theta_{G}}L_{G}$$
(17)
Because of their capacity to recognize intricate patterns and variability in medical imaging data, GANs may make CAD risk assessment from CCTA scans more accurate. This approach aims to help physicians with early identification and individualized treatment planning for high-risk CAD patients, and it is an innovative use of deep learning in healthcare.

Training algorithm

The GAN training algorithm iterates over the following steps (a minimal training-loop sketch follows the list):

Step 1: Sample a mini-batch of real CCTA images \(\{X^{(i)}\}_{i=1}^{m}\) from the real data distribution \(p_{data}(X)\).
Step 2: Sample a mini-batch of noise vectors \(\{z^{(i)}\}_{i=1}^{m}\) from the prior distribution \(p_{z}(z)\).
Step 3: Generate synthetic images \(\{\widehat{X}^{(i)}\}_{i=1}^{m}\) using the generator G, as shown in Eq. (8).
Step 4: Compute the discriminator loss \(L_{D}\) using the real and generated images, as shown in Eq. (15).
Step 5: Update the discriminator parameters \(\theta_{D}\) using gradient descent, as shown in Eq. (16).
Step 6: Compute the generator loss \(L_{G}\) using the generated images, as shown in Eq. (14).
Step 7: Update the generator parameters \(\theta_{G}\) using gradient descent, as shown in Eq. (17).

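The following PyTorch sketch implements Steps 1–7 with the losses of Eqs. (14)–(15); it assumes the illustrative Generator and Discriminator defined earlier, and the Adam optimizer (a gradient-descent variant standing in for Eqs. 16–17), learning rate, and epoch count are likewise placeholder choices.

    import torch
    # LATENT_DIM, Generator, Discriminator as defined in the earlier sketch.

    def train_gan(G, D, loader, epochs=100, lr=2e-4):
        """Minimal adversarial training loop following Steps 1-7."""
        opt_D = torch.optim.Adam(D.parameters(), lr=lr)
        opt_G = torch.optim.Adam(G.parameters(), lr=lr)
        for _ in range(epochs):
            for real in loader:                    # Step 1: mini-batch of real images
                m = real.size(0)
                z = torch.randn(m, LATENT_DIM)     # Step 2: noise z ~ N(0, 1)
                fake = G(z)                        # Step 3: synthetic images (Eq. 8)
                # Steps 4-5: discriminator loss L_D (Eq. 15) and update (Eq. 16).
                loss_D = -(torch.log(D(real) + 1e-8).mean()
                           + torch.log(1 - D(fake.detach()) + 1e-8).mean())
                opt_D.zero_grad(); loss_D.backward(); opt_D.step()
                # Steps 6-7: generator loss L_G (Eq. 14) and update (Eq. 17).
                loss_G = -torch.log(D(fake) + 1e-8).mean()
                opt_G.zero_grad(); loss_G.backward(); opt_G.step()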
In a GAN, generator G gains the ability to produce artificial CCTA images by converting random noise z into images \(\widehat{X}\) using a neural network trained with the adversarial feedback of discriminator D. Optimization entails reducing the generator's loss while improving the discriminator's capacity to discern genuine from artificial images. Through this adversarial training procedure, the generator learns the intricate patterns and characteristics found in actual CCTA images from high-risk CAD patients.

Feature extraction

The discriminator D consists of multiple layers, and \(\varphi(x)\) denotes the output of its intermediate layers. Let \(X_{real}\) be the actual CCTA images segmented by the generator; these images pass through the discriminator to obtain intermediate features. \(D(x)\) is composed of L layers. Then,
$$D(x)=D_{L}\circ D_{L-1}\circ \cdots \circ D_{1}(x)$$
(18)
where \(D_{i}\) denotes the \(i\)th layer of the discriminator. The feature representation \(\varphi(x)\) can be obtained from any layer \(i\):
$$\varphi(x)=D_{i}\circ D_{i-1}\circ \cdots \circ D_{1}(x)$$
(19)
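In code, this truncated forward pass amounts to running the image through the first layers only. A minimal sketch, assuming the illustrative Discriminator above, whose features attribute holds the intermediate layers:

    import torch

    def extract_features(D, images):
        """Compute phi(x) (Eq. 19): run images through the discriminator's
        intermediate layers only and flatten the activations into one
        feature vector per image."""
        D.eval()
        with torch.no_grad():
            phi = D.features(images)         # phi(x) = D_i o ... o D_1(x)
            return phi.flatten(start_dim=1)  # rows of the feature matrix F_real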
To obtain the feature representations for the real images, the following equation is used:
$$\mathrm{F}_{real}=\varphi(x_{real})$$
(20)
where \(\mathrm{F}_{real}\) is the matrix of feature vectors for the real images.

Pseudocode for discriminator-based feature extraction

The procedure for extracting features from CCTA images using the GAN discriminator is described in the following pseudocode:

    # Define GAN structure
    initialize generator G
    initialize discriminator D

    # Train GAN
    for each epoch in training_epochs:
        for each batch in dataset:
            # Train discriminator D with real and fake images
            real_images = get_real_images(batch)
            fake_images = G(generate_random_noise(batch_size))
            train D with real_images and fake_images
            # Train generator G to fool discriminator D
            fake_images = G(generate_random_noise(batch_size))
            train G to fool D with fake_images

    # Use the trained discriminator D to extract features
    for each preprocessed_image in dataset:
        features = D.extract_features(preprocessed_image)
        store features in feature_list

Classification

High-quality features from the GAN discriminator help the GAN-Augmented Naïve Bayes (GAN-ANB) classifier perform better, as these features are likely to be less correlated. The Gaussian Naïve Bayes classifier uses the extracted features \(\mathrm{F}_{real}\) for classification. Naïve Bayes provides a simple yet efficient method that aids disease detection and clinical decision-making by assuming the conditional independence of the features given the class label. The posterior probability of a class \(C_{k}\) given the features \(x_{1},x_{2},\dots,x_{n}\) is
$$P(C_{k}\mid x_{1},x_{2},\dots,x_{n})=\frac{P(C_{k})\prod_{i=1}^{n}P(x_{i}\mid C_{k})}{P(x_{1},x_{2},\dots,x_{n})}$$
(21)
where:

\(P(C_{k}\mid x_{1},x_{2},\dots,x_{n})\) is the posterior probability of class \(C_{k}\) given the features \(x_{1},x_{2},\dots,x_{n}\).

\(P(C_{k})\) is the prior probability of class \(C_{k}\).

\(P(x_{i}\mid C_{k})\) is the conditional probability of feature \(x_{i}\) given class \(C_{k}\).

\(P(x_{1},x_{2},\dots,x_{n})\) is the evidence probability, which acts as a normalization factor.

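A minimal sketch of this posterior computation, evaluated in log space for numerical stability; the priors array and likelihood function are hypothetical stand-ins for whatever estimates are fitted from the training data (e.g., Eqs. (22) and (23) below):

    import numpy as np

    def nb_posterior(x, priors, likelihood):
        """Posterior P(C_k | x_1..x_n) from Eq. (21), computed in log space.
        priors[k] holds P(C_k); likelihood(i, x_i, k) returns P(x_i | C_k)."""
        log_post = np.log(np.asarray(priors, dtype=float))
        for k in range(len(priors)):
            for i, xi in enumerate(x):
                log_post[k] += np.log(likelihood(i, xi, k))  # add log P(x_i | C_k)
        # Dividing by the evidence in Eq. (21) is a normalization over classes.
        post = np.exp(log_post - log_post.max())
        return post / post.sum()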
GAN-ANB is suitable for discrete features and is typically used with counts or frequencies:
$$P(x_{i}\mid C_{k})=\frac{\mathrm{count}(x_{i},C_{k})+\alpha}{\sum_{x^{\prime}}\mathrm{count}(x^{\prime},C_{k})+\alpha}$$
(22)
where \(\mathrm{count}(x_{i},C_{k})\) is the count of feature \(x_{i}\) in class \(C_{k}\), and \(\alpha\) is a smoothing parameter used to handle unseen features. The prior probability of each class \(C_{k}\) is calculated from the training set:
$$P(C_{k})=\frac{\text{number of instances of class } C_{k}}{\text{total number of instances}}$$
(23)
The conditional probabilities \(P(x_{i}\mid C_{k})\) are estimated for each feature \(x_{i}\) given each class \(C_{k}\) from the training data. For a new CCTA image with features \(x_{1},x_{2},\dots,x_{n}\), the posterior probability is calculated as
$$P(C_{k}\mid x_{1},x_{2},\dots,x_{n})\propto P(C_{k})\prod_{i=1}^{n}P(x_{i}\mid C_{k})$$
(24)
Normalizing by the evidence probability over all classes yields \(P(C_{k}\mid x_{1},x_{2},\dots,x_{n})\). The class label \(\widehat{y}=\arg\max_{k}P(C_{k}\mid x_{1},x_{2},\dots,x_{n})\), the class with the largest posterior probability, is then assigned. Assessing the degree of blockage, narrowing, or plaque in the coronary arteries is part of the treatment of CAD; signs such as enlarged ventricles, poor ejection fraction, or aberrant heart-wall movement are considered when classifying heart failure.

Pseudocode of GAN-ANB classification

The following pseudocode outlines the process of classifying CCTA images using GAN-ANB, incorporating the features extracted by the GAN discriminator:

    # Convert features and labels into arrays
    features_array = convert feature_list to array
    labels_array = convert labels to array

    # Divide the data into training and testing sets
    train_features, test_features, train_labels, test_labels = split dataset into train and test sets

    # Train the GAN-ANB classifier
    naive_bayes = initialize GAN-ANB classifier
    naive_bayes.fit(train_features, train_labels)

    # Evaluate the classifier
    predicted_labels = naive_bayes.predict(test_features)
    accuracy = compute_accuracy(test_labels, predicted_labels)

To generate personalized risk predictions for CAD using the GAN-ANB model, a multistep approach was employed. First, relevant patient features, including demographic information and CAD indicators, were extracted from the clinical and imaging data. The GAN component was then used to augment the dataset by generating synthetic data that simulate various CAD manifestations, which helps balance and enrich the dataset. The augmented data, combined with the real patient data, were fed into the Naïve Bayes classifier, which calculates the probabilities of high and low CAD risk from the patient's features. The risk score S was computed as
$$S=\frac{P(Y=1\mid X)}{P(Y=0\mid X)}$$
(25)
where \(P(Y=1\mid X)\) denotes the probability of being high risk given the features X, and \(P(Y=0\mid X)\) is the probability of being low risk. Alternatively, if the model provides a direct risk score, it is represented as \(S=f(X;\theta)\), where f is the function of the GAN-ANB model and θ represents the trained parameters. This process allows the GAN-ANB model to generate nuanced, sex-specific risk scores and recommendations, improving personalized risk assessment and management of CAD by incorporating both real and synthetic data to capture a comprehensive risk profile for each patient.

The proposed study employed a thorough strategy to identify high-risk categories. Patients with substantial stenosis, defined as a coronary artery narrowing of ≥ 70% and indicating serious blockage, were included in this group. Patients whose CAD affects two or more coronary arteries, indicating widespread disease, were also taken into account. Additionally, high-risk individuals are identified by key features of ACS-prone plaques, such as positive remodeling, wherein the plaque causes the artery to grow outward, and spotty calcification, an uneven, patchy calcification that increases plaque instability. High-risk individuals also present noticeable clinical symptoms or a history of significant CAD events, such as myocardial infarction or recurrent angina. This comprehensive definition enabled the proposed method to concentrate on patients with the most severe and unstable CAD characteristics.

Results and discussions

The proposed model leverages a combination of a GAN and GAN-ANB. To guarantee quality and consistency, the dataset underwent extensive preprocessing, and it was separated into training and testing sets to enable a thorough assessment of model performance. The output of the GAN is used to augment the training data, which enhances the variability and richness of the dataset. The augmented dataset is then fed into the Naïve Bayes classifier, selected for probabilistic classification tasks owing to its efficiency and simplicity. The combined approach of GAN-based data augmentation and Naïve Bayes classification demonstrates promising results, showing improved classification performance owing to the enhanced dataset diversity provided by the GAN, as shown in Fig. 6. This hybrid model offers a novel approach to medical image classification, potentially aiding more accurate and earlier diagnosis of cardiovascular diseases. The following performance metrics offer a thorough assessment of the model's performance in classifying and segmenting the CVD images.

Fig. 6 CVD classification results of the proposed model.

Dice similarity coefficient (DSC)

The DSC is a statistical metric that compares the similarity between two sets. It is particularly useful in image segmentation for measuring the overlap between the predicted segmentation X and the ground truth Y:
$$DSC=\frac{2\times\left|X\cap Y\right|}{\left|X\right|+\left|Y\right|}$$
(26)
A DSC of one indicates complete overlap, whereas a DSC of zero indicates no overlap.

Mean intersection over union (mIoU)

The mIoU is another method for assessing image segmentation performance; it averages the per-class IoU over all classes in multiclass segmentation, where the IoU of a single class is
$$IoU=\frac{\left|X\cap Y\right|}{\left|X\cup Y\right|}$$
(27)
Higher mIoU values indicate better segmentation performance; an IoU of one indicates perfect segmentation.

Recall

Recall, also known as sensitivity or the true positive rate, is the proportion of genuine positives that the model accurately detects:
$$Recall=\frac{\text{True Positives}}{\text{True Positives}+\text{False Negatives}}$$
(28)
Recall ranges from 0 to 1, with 1 indicating that all positive cases are correctly recognized.

Precision

Precision, often referred to as the positive predictive value, is the proportion of predicted positives that are correct:
$$Precision=\frac{\text{True Positives}}{\text{True Positives}+\text{False Positives}}$$
(29)
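All four metrics follow directly from the overlap counts of a predicted binary mask and its ground truth. A short NumPy sketch follows (for mIoU over multiple classes, the single-class IoU below would be averaged across the class masks):

    import numpy as np

    def segmentation_metrics(pred, gt):
        """DSC, IoU, recall, and precision (Eqs. 26-29) for binary masks;
        assumes the masks are non-empty."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        tp = np.logical_and(pred, gt).sum()       # |X intersect Y|
        fp = np.logical_and(pred, ~gt).sum()
        fn = np.logical_and(~pred, gt).sum()
        dsc = 2 * tp / (pred.sum() + gt.sum())    # Eq. (26)
        iou = tp / np.logical_or(pred, gt).sum()  # Eq. (27), single class
        recall = tp / (tp + fn)                   # Eq. (28)
        precision = tp / (tp + fp)                # Eq. (29)
        return dsc, iou, recall, precision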
As shown in Fig. 7, the segmentation results obtained for the different cardiovascular structures using the proposed technique were highly promising. The performance metrics, including the DSC, mIoU, recall, and precision, show how well the model distinguishes and delineates important cardiovascular structures. Table 4 shows the technique's strong segmentation performance for the left ventricle, with a DSC of 0.95, an mIoU of 0.92, a recall of 0.94, a precision of 0.96, and an accuracy of 0.93. The right ventricle also displayed impressive results, with a DSC of 0.90, an mIoU of 0.88, a recall of 0.89, a precision of 0.91, and an accuracy of 0.89. The myocardium likewise demonstrated strong performance, with a DSC of 0.93, an mIoU of 0.90, a recall of 0.92, a precision of 0.94, and an accuracy of 0.91. The left and right atria also achieved good metrics, demonstrating the model's reliability in segmenting these structures: the left atrium attained a DSC of 0.92, an mIoU of 0.89, a recall of 0.91, a precision of 0.93, and an accuracy of 0.90, and the right atrium attained a DSC of 0.89, an mIoU of 0.86, a recall of 0.88, a precision of 0.90, and an accuracy of 0.87. These findings imply that the proposed method is highly effective at differentiating between cardiovascular structures, making it a useful tool for predicting CVD and potentially contributing to earlier and more precise diagnosis.

Fig. 7 Comparison of the obtained segmentation results with competitive methods.

Table 4 Segmentation findings for several cardiovascular structures obtained using the proposed methodology.

As shown in Fig. 8, GAN-ANB exhibits the best overall performance across most metrics, making it a strong choice for CVD classification. As shown in Table 5, a comparison of several methods for classifying CVD reveals their distinct strengths and limitations. UNet provides a solid framework for segmentation tasks, although it cannot capture all the complicated patterns in the data [35]. ResUNet improves on UNet by including residual learning, which yields higher accuracy, but it still struggles with very complex images [36]. K-Nearest Neighbors (KNN) performs worse, most likely because of its simplicity and limited capacity to handle high-dimensional data [37]. Convolutional Neural Networks (CNNs) provide robust results through deep learning, but they are computationally expensive and require a large amount of training data [38]. The GAN-ANB model combines the generative and classification approaches, leading to considerable accuracy improvements [39].

Fig. 8 Comparison of CVD classification results with competitive methods.

Table 5 Comparison of CVD classification methods.

The proposed method outperformed all the other methods, delivering greater performance across the measures, although it may increase implementation complexity and require careful calibration of both the GAN and Naïve Bayes components. GAN-ANB had the highest DSC value of 0.91, indicating a good balance in segmenting CCTA images; CNN (0.76), U-Net (0.75), and KNN (0.74) showed strong but slightly lower performance, and ResUNet scored the lowest at 0.72, as shown in Fig. 9. GAN-ANB also leads with an mIoU of 0.90; KNN, CNN, and U-Net follow with mIoU values of 0.81, 0.75, and 0.74, respectively, and ResUNet had the lowest mIoU of 0.71, as shown in Fig. 10. GAN-ANB showed the best recall, with a value of 0.96.
This means that it correctly identified a high proportion of the actual positive cases. CNN, U-Net, and KNN had similar recall values of 0.78, 0.77, and 0.76, respectively; ResUNet again scored the lowest, with a recall of 0.74, as shown in Fig. 11. GAN-ANB achieved the highest precision of 0.98, indicating a high proportion of correctly predicted pixels. CNN (0.86), U-Net (0.85), and KNN (0.84) followed closely, showing their robustness; ResUNet has the lowest precision at 0.83, indicating that it is slightly less reliable in terms of pixel-wise accuracy, as shown in Fig. 12.

Fig. 13 ROC of the proposed technique for the classification of CVD.

The Receiver Operating Characteristic (ROC) curve depicts the relationship between the True Positive Rate (TPR), also referred to as sensitivity, and the False Positive Rate (FPR), i.e., one minus the specificity, for the various classes. The Area Under the Curve (AUC) of the ROC is used to assess how well the classification procedure performed. The ROC curves and the corresponding AUC values are shown in Fig. 13 to visualize the classifier's performance. Overall, the Naïve Bayes classification approach performed well. The 1-year, 3-year, and 5-year Overall Survival (OS) forecasts had AUC values of 0.89, 0.91, and 0.83, respectively, as shown in Fig. 13(a); the model performed particularly well in predicting survival at three years (AUC = 0.91). As shown in Fig. 13(b), the patient fatality rate increases in tandem with the risk score. With a p-value < 0.0001, the high-risk group had considerably worse OS than the low-risk group, as shown in Fig. 13(c).
