Features derived from the primary tumor were employed. 3D Slicer, Otsu’s thresholding method, and UPerNet were used to extract key features from the images26,27,28. The features obtained from radiomics and from Otsu’s thresholding were then used to classify images with support vector machines. Finally, the UPerNet framework, a multi-task model developed by Tete Xiao and designed for complex scene-understanding tasks, was applied. UPerNet is a convolutional neural network (CNN) that captures multi-level information in images by combining modules such as encoders, decoders, and pyramid pooling modules. The UPerNet architecture uses unified perceptual parsing to build a hierarchical network, enabling the simultaneous resolution of multiple levels of visual abstraction, the learning of distinct patterns across diverse image datasets, and the integration of these insights for joint reasoning about complex visual relationships. By leveraging UPerNet’s capabilities, the goal is to develop a solution that helps healthcare professionals diagnose small intestinal angiodysplasias more accurately and quickly, ultimately improving patient outcomes and the diagnostic process29. During training, UPerNet learns to extract information from heterogeneous annotations, including bounding boxes and semantic segmentation maps. Given a large amount of labeled data, it can learn to recognize different objects, parts, textures, and materials in images. In the testing phase, UPerNet receives a new image and outputs a semantic segmentation map containing category information for each region of the image.

Statistical methods

3D Slicer (version 5.6.1) was used to extract a comprehensive set of 1075 tumor-specific features.
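Otsu’s method, mentioned above as one of the segmentation techniques, selects the gray-level threshold that maximizes the between-class variance of foreground and background pixels. As a minimal sketch, the following pure-NumPy implementation is equivalent in principle to the OpenCV routine used in this study; the synthetic two-population image is purely illustrative.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the Otsu threshold for an 8-bit grayscale image by
    maximizing the between-class variance over all candidate thresholds."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    bins = np.arange(256)

    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (prob[:t] * bins[:t]).sum() / w0    # class means
        mu1 = (prob[t:] * bins[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Illustrative image with two intensity populations (dark background, bright lesion)
img = np.concatenate([np.full(500, 40, np.uint8),
                      np.full(500, 200, np.uint8)]).reshape(25, 40)
t = otsu_threshold(img)
mask = (img >= t).astype(np.uint8)  # binary foreground/background mask
```

In OpenCV this corresponds to `cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)`, which returns the same optimal threshold without the explicit loop.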
Subsequently, we refined the feature list by removing version-specific and non-essential entries, resulting in a final count of 1037 relevant features. The OpenCV library in Python was used to carry out Otsu threshold segmentation, followed by classification of the segmented images with a support vector machine (SVM). Two metrics were used to evaluate the performance of our model: accuracy and Intersection over Union (IoU)30,31. Accuracy measures the correctness of model predictions, calculated as: accuracy = (number of correctly predicted samples / total number of samples) × 100%. IoU measures the degree of overlap between the predicted results and the true labels, calculated as: IoU = (intersection area between the predicted result and the true label / union area between the predicted result and the true label) × 100%.

Results

In the comparative experiment, the classification accuracy (ACC) of the model using 3D Slicer for feature extraction followed by the support vector machine (SVM) was only 0.33. In contrast, the data processed with the Otsu threshold segmentation method achieved a markedly higher classification accuracy of 0.59. Figure 2 outlines the detailed process of establishing the AI model. The journey began with data collection, during which a comprehensive set of medical images containing tumors, along with their corresponding clinical information, was gathered. This data served as the foundation for training and testing the model.

Fig. 2 Comprehensive guide to AI model development.

Next, data preprocessing was conducted, a crucial step that involved cleaning and enhancing the images to ensure their suitability for analysis. Data standardization was implemented as a comprehensive multi-step process to ensure the accuracy, consistency, and comparability of the data.
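The two evaluation metrics defined in the Methods can be computed directly from prediction and label arrays. A minimal NumPy sketch, with illustrative 2×4 binary masks standing in for segmentation outputs:

```python
import numpy as np

def accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    """Percentage of correctly predicted samples (pixels)."""
    return (pred == truth).mean() * 100.0

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Intersection over Union between binary masks, as a percentage."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return 100.0 * inter / union if union else 100.0

# Illustrative predicted and ground-truth masks
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0]])
true = np.array([[1, 1, 1, 0],
                 [1, 1, 0, 0]])
acc = accuracy(pred, true)   # 7 of 8 pixels agree -> 87.5
overlap = iou(pred, true)    # intersection 4, union 5 -> 80.0
```

Accuracy treats all pixels equally, so a large background inflates it; IoU scores only the overlap of the foreground regions, which is why both are reported.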
Crucial steps such as data selection, format conversion, and data labeling were included to guarantee that the data was in a standardized, usable format for analysis and comparison. After preprocessing, feature extraction was carried out. Advanced image processing techniques and deep learning algorithms were used to identify meaningful patterns and characteristics in the images. Features such as the shape, texture, and location of the tumors formed the basis for the model’s understanding of tumors. Once the features were extracted, the model training phase began. The labeled data (images with known tumor characteristics and survival outcomes) was fed into the model, and training was conducted to recognize patterns and make predictions. This process involved optimizing the model’s parameters to minimize errors and maximize accuracy. After training, the model’s performance was evaluated on independent test data. This assessment allowed for evaluation of its generalization capabilities and identification of areas for improvement. Finally, the model was iterated and refined based on the evaluation results. Adjustments were made to the network architecture, hyperparameters were changed, and additional data was incorporated to enhance the model’s performance. This iterative process continued until satisfactory results were achieved. By following this rigorous model-building process, as depicted in Fig. 2, a robust and accurate AI model was developed that can assist doctors in tumor diagnosis and treatment. When describing the results in Table 2, specific analyses were conducted on the models, datasets, and the corresponding accuracy and IoU ratios presented in the table. The table shows the performance of the AI model on two different datasets, Group 1 and Group 2.
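The feature-based branch of the pipeline described above (standardize extracted features, train an SVM, evaluate on held-out data) can be sketched as follows. This is a minimal illustration on randomly generated stand-in features, not the study’s actual data or tuned model; scikit-learn is assumed as the SVM implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 200, 20                    # 200 synthetic "patients", 20 features each
X = rng.normal(size=(n, d))       # stand-ins for extracted radiomics features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic binary outcome label

# Hold out an independent test split for evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Standardize features, then fit an RBF-kernel SVM
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)
test_acc = model.score(X_test, y_test)  # accuracy on held-out data
```

Wrapping the scaler and classifier in one pipeline ensures the standardization statistics are fit only on the training split, avoiding leakage into the held-out evaluation.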
By comparing the accuracy and IoU ratios of different models on the same dataset, a general understanding of model performance was obtained.

Table 2 Performance of AI model.

Group 1: The artificial intelligence model achieved an accuracy of 93.66% on Group 1, with an intersection-over-union ratio (IoU) of 89.79%. This indicates that the model demonstrated strong classification and localization capabilities on Group 1, allowing accurate identification and localization of targets.

Group 2: On Group 2, the model’s accuracy rose slightly to 94.14%, and the IoU to 89.9%. This difference may suggest that the targets or backgrounds in Group 1 were more complex than those in Group 2, resulting in slightly lower performance on Group 1.

The effectiveness of our AI model was evident. Tumor segmentation and survival prediction are depicted in Fig. 3. Firstly, the model was notable for its ability to segment tumor regions accurately in medical images. Advanced image processing techniques and deep learning algorithms were employed, enabling precise differentiation between the boundaries of tumors and adjacent healthy tissues. As a result, essential information, including the tumor’s location, shape, and size, was captured with high accuracy. This step proved critical for doctors, as it enriched their understanding of tumor characteristics and provided vital support for subsequent treatment planning.

Fig. 3 The dual capabilities of our AI model.

Secondly, in addition to tumor segmentation, the model was capable of predicting whether a patient’s survival period would exceed three years. This prediction was based on a thorough analysis of various factors, including tumor characteristics such as size, location, and shape.
Through meticulous training and optimization, our model attained a high level of prediction accuracy, supplying doctors with invaluable reference information. The integration of these two functionalities positions our AI model as a significant asset in tumor diagnosis and treatment. It aids doctors in developing a deeper understanding of tumors and in crafting more targeted, personalized treatment strategies for patients, ultimately enhancing treatment efficacy and improving patient survival rates.

Discussion

The integration of clinical, genomic, and imaging data enables the development of AI-powered targeted drug therapy for RCC, which identifies patient-specific biomarkers and predicts treatment responses13,15,20. By leveraging machine learning algorithms and large datasets, AI models can uncover novel patterns and relationships that may not be apparent to human clinicians, thereby facilitating more precise and effective treatment plans32. Given the limitation of a relatively small sample size in this study, feature extraction using 3D Slicer and Otsu thresholding proved suboptimal, whereas the artificial-intelligence-based UPerNet model demonstrated significantly better performance in feature extraction. Traditional techniques, including 3D Slicer and Otsu thresholding, were used alongside the UPerNet AI model for image analysis. While the results from the traditional methods were less than satisfactory, the outcomes from the UPerNet analysis were highly promising. Our research focuses on deep analysis of limited 2D slices from patient CT scans. This approach aims to explore how medical imaging technology can be used effectively to advance medical diagnosis and treatment evaluation under conditions of data scarcity33,34.
Our work is inspired by a series of cutting-edge studies, such as the three major AI data challenges based on CT and ultrasound35, which not only promoted algorithm development but also demonstrated the potential of AI in analyzing complex medical imaging data. Similarly, we have drawn on research on COVID-19 pneumonia, in which newly developed AI algorithms predicted the therapeutic effect of favipiravir through quantitative CT texture analysis33, revealing the value of AI in predicting drug response. In addition, we have been inspired by research using AI tools to assess multiple myeloma bone marrow infiltration on [18F]FDG PET/CT35, which demonstrates the broad application prospects of AI in precision medicine. Our specific research will focus on AI-assisted CT segmentation technology, especially validation studies of body composition analysis. Although existing studies have shown the high accuracy and reproducibility of AI in CT segmentation29,33,34,36, we hope to further explore how to achieve more accurate assessment of individual body composition by optimizing algorithms and data-processing pipelines when patient numbers are limited. This research will not only help improve the rigor of clinical decision-making but may also provide strong support for developing personalized treatment plans. Concurrently, our research addresses the application of deep learning in medical image registration, with particular interest in the potential of non-rigid image registration for high-dose-rate fractionated cervical cancer brachytherapy36.
While this research focus is distinct from our primary objectives in AI predictive modeling for renal cancer patients undergoing targeted therapy, it provides valuable perspectives on harnessing AI technology to tackle intricate challenges in medical image processing.This study demonstrates the clinical value of AI predictive modeling in personalizing treatment decisions for renal cancer patients undergoing targeted therapy. By analyzing patient data, the AI model was able to identify specific patient subgroups that were more likely to respond well to particular treatments, allowing clinicians to make more informed decisions about therapy selection. This has significant implications for improving patient outcomes, as patients who are most likely to benefit from a particular treatment can receive it earlier and avoid unnecessary exposure to ineffective or toxic therapies. Moreover, the model’s ability to identify patients at high risk of poor outcomes enables early intervention and adjustment of treatment plans, which can lead to better survival rates and improved quality of life. The use of AI predictive modeling in this study highlights its potential to transform the way we approach personalized medicine in oncology, enabling clinicians to deliver more effective and efficient care for patients with renal cancer.Our study has led to the development of a novel survival prediction model for targeted drug therapy in patients with RCC, leveraging AI to analyze tumor characteristics from CT imaging data. The model integrates a small-scale clinical dataset, CT imaging data, and targeted therapy information to predict patient survival outcomes. Our findings demonstrate exceptional prediction accuracy on the validation set, with accurate forecasting of patient survival outcomes. 
This finding has significant implications for personalized treatment strategies in RCC patient management, ultimately enhancing patient outcomes and quality of life.

Limitations of the study

While this study has yielded promising results, it is not without limitations. Notably, the sample size is relatively small and may not fully capture the diversity of patients with renal cell carcinoma (RCC). To address this, future research should prioritize expanding the sample size and enhancing the model’s generalizability. Furthermore, this study’s focus on predicting survival outcomes is a crucial first step, but it is equally important to explore the potential of AI technology in optimizing and personalizing treatment plans. By doing so, we can provide more accurate and effective treatment plans tailored to individual patients’ needs. Future studies should aim to integrate AI-driven decision-making into treatment planning, ultimately improving patient outcomes. The model was trained on a small dataset, which increased the risk of overfitting: it may have learned noise and random fluctuations in the data, leading to overly optimistic results that do not generalize to new data. The validation process revealed a drop in predictive accuracy and an increase in error rates on a separate validation set, indicating that the model’s performance might not be as robust in a broader patient population. This raises concerns about the generalizability of the findings and the applicability of the model to other datasets or patient groups. To investigate further, the model’s performance was compared across different data subsets; the observed variation confirmed the presence of overfitting, and a larger dataset will be needed to mitigate it.

AI has been leveraged to reconstruct three-dimensional (3D) models from computed tomography (CT) images to personalize surgical treatment of renal cell carcinoma (RCC).
Researchers have employed deep learning algorithms to create 3D models from CT images, achieving improved surgical planning and outcome prediction. For instance, deep learning algorithms were used to create 3D models for surgical planning and outcome prediction37. Similarly, convolutional neural networks were used to segment and reconstruct CT images for RCC surgery planning, achieving comparable results38. AI-assisted 3D reconstruction significantly improved surgical accuracy and reduced complications in RCC surgery39. These studies exemplify the potential of AI in enhancing surgical planning and outcome prediction for RCC patients. This study’s findings and limitations serve as a foundation for future research directions, guided by the following key areas:
1. Expanding the Horizon: Scalability and Generalizability. Increasing the sample size will enable the model to generalize more accurately to diverse patient populations, thereby enhancing its predictive capabilities and broadening its applicability.

2. Tailoring Treatment: AI-Driven Personalization. Investigating the potential of AI technology in developing and optimizing treatment plans will allow for the creation of personalized, data-driven treatment strategies that improve patient outcomes and enhance patient-centered care.

3. Navigating Ethical Landscapes: Responsible Adoption and Protection. Strengthening research in this area will ensure that AI applications in medicine adhere to established ethical norms, laws, and regulations, thereby safeguarding patient confidentiality and trust, and promoting responsible innovation.