Predicting radiographic outcomes of vertebral body tethering in adolescent idiopathic scoliosis patients using machine learning

by Ausilah Alfraihat, Amer F. Samdani, Sriram Balasubramanian

Anterior Vertebral Body Tethering (AVBT) is an increasingly used alternative to spinal fusion for adolescent idiopathic scoliosis (AIS). While AVBT aims to correct spinal deformity through growth correction, its outcomes have been mixed. To improve surgical outcomes, this study aimed to develop a machine learning-based tool to predict short- and midterm spinal curve correction in AIS patients who underwent AVBT surgery, using the most predictive clinical, radiographic, and surgical parameters. After institutional review board approval and based on inclusion criteria, 91 AIS patients who underwent AVBT surgery were selected from the Shriners Hospitals for Children, Philadelphia. For all patients, longitudinal standing (PA or AP, and lateral) and side-bending spinal radiographs were retrospectively obtained at six visits: preoperative, first standing, one year, two years, and five years postoperative, and the most recent follow-up. Demographic, radiographic, and surgical features associated with curve correction were collected. The sequential backward feature selection method was used to eliminate correlated features and to provide a rank-ordered list of the features most predictive of AVBT correction. A Gradient Boosting Regressor (GBR) model was trained and tested using the selected features to predict the final curve correction in AIS patients. The eleven most predictive features were identified. The GBR model predicted the final Cobb angle with an average error of 6.3 ± 5.6 degrees. The model also provided a prediction interval, and 84% of the actual values fell within the 90% prediction interval. A list of the most predictive features for AVBT curve correction was provided. The GBR model, trained on these features, predicted the final curve magnitude with a clinically acceptable margin of error. This model can be used as a clinical tool to plan AVBT surgical parameters and improve outcomes.
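The pipeline described above (backward feature selection, a Gradient Boosting Regressor for the point prediction, and a 90% prediction interval) can be sketched as follows. This is a minimal illustration assuming scikit-learn, run on synthetic placeholder data rather than the study's patient data; the quantile-loss construction of the interval is an assumption, since the abstract does not state how the interval was built.

```python
# Minimal sketch of the described pipeline: backward feature selection,
# a Gradient Boosting Regressor, and a 90% prediction interval.
# Data and feature count (beyond "select 11") are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 91, 15                      # 91 patients (as in the study), 15 candidate features (assumed)
X = rng.normal(size=(n, p))
# Synthetic "final Cobb angle" driven by two informative features plus noise.
y = 10 * X[:, 0] + 5 * X[:, 1] + rng.normal(scale=3, size=n)

# Sequential backward elimination down to the 11 features the study reports.
selector = SequentialFeatureSelector(
    GradientBoostingRegressor(n_estimators=50, random_state=0),
    n_features_to_select=11, direction="backward", cv=3,
)
X_sel = selector.fit_transform(X, y)

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=0)

# Point prediction of the final curve magnitude.
gbr = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
mean_abs_err = np.mean(np.abs(gbr.predict(X_te) - y_te))

# One way to form a 90% prediction interval: quantile-loss models at the
# 5th and 95th percentiles (an assumption, not the paper's stated method).
lo = GradientBoostingRegressor(loss="quantile", alpha=0.05, random_state=0).fit(X_tr, y_tr)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.95, random_state=0).fit(X_tr, y_tr)
covered = np.mean((lo.predict(X_te) <= y_te) & (y_te <= hi.predict(X_te)))
print(f"MAE: {mean_abs_err:.1f} deg, 90% PI coverage: {covered:.0%}")
```

Empirical coverage of the interval (the study's 84% figure) is measured exactly as in the last line: the fraction of held-out true values falling between the lower and upper quantile predictions.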

Evaluating the performance of artificial intelligence software for lung nodule detection on chest radiographs in a retrospective real-world UK population

by A. Maiter, K. Hocking, S. Matthews, J. Taylor, M. Sharkey, P. Metherall, S. Alabed, K. Dwivedi, Y. Shahin, E. Anderson, S. Holt, C. Rowbotham, M. A. Kamil, N. Hoggard, S. P. Balasubramanian, A. Swift, C. S. Johns
Objectives

Early identification of lung cancer on chest radiographs improves patient outcomes. Artificial intelligence (AI) tools may increase diagnostic accuracy and streamline this pathway. This study evaluated the performance of commercially available AI-based software trained to identify cancerous lung nodules on chest radiographs.

Design

This retrospective study included primary care chest radiographs acquired in a UK centre. The software evaluated each radiograph independently, and its outputs were compared with two reference standards: (1) the radiologist report and (2) the diagnosis of cancer by multidisciplinary team decision. Failure analysis was performed by interrogating the software's marker locations on the radiographs.

Participants

5722 consecutive chest radiographs were included from 5592 patients (median age 59 years, 53.8% women, 1.6% prevalence of cancer).

Results

Compared with radiologist reports for nodule detection, the software demonstrated sensitivity 54.5% (95% CI 44.2% to 64.4%), specificity 83.2% (82.2% to 84.1%), positive predictive value (PPV) 5.5% (4.6% to 6.6%) and negative predictive value (NPV) 99.0% (98.8% to 99.2%). Compared with cancer diagnosis, the software demonstrated sensitivity 60.9% (50.1% to 70.9%), specificity 83.3% (82.3% to 84.2%), PPV 5.6% (4.8% to 6.6%) and NPV 99.2% (99.0% to 99.4%). Normal or variant anatomy was misidentified as an abnormality in 69.9% of the 943 false positive cases.
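The four reported metrics follow directly from a 2x2 confusion matrix. As a minimal sketch (Python assumed): the 943 false positives are taken from the abstract, while the other three counts are illustrative values chosen only to be consistent with the reported rates and the 5722-radiograph total, not figures from the study.

```python
# Screening metrics from a 2x2 confusion matrix.
# fp=943 is reported in the abstract; tp, fn, tn are illustrative
# counts consistent with the reported rates (not the study's data).
def screening_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # of true cancers, fraction flagged
        "specificity": tn / (tn + fp),  # of non-cancers, fraction cleared
        "ppv": tp / (tp + fp),          # of flags, fraction truly cancer
        "npv": tn / (tn + fn),          # of clears, fraction truly non-cancer
    }

m = screening_metrics(tp=55, fp=943, fn=46, tn=4678)
print({k: round(v, 3) for k, v in m.items()})
```

The arithmetic makes the paper's central point visible: with only ~1.6% prevalence, even 83% specificity yields far more false than true positives, so the PPV collapses to ~5% while the NPV stays near 99%.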

Conclusions

The software demonstrated considerable underperformance in this real-world patient cohort. Failure analysis suggested a lack of generalisability in the training and testing datasets as a potential factor. The low PPV carries the risk of over-investigation and limits the translation of the software to clinical practice. Our findings highlight the importance of training and testing software in representative datasets, with broader implications for the implementation of AI tools in imaging.
