
Descriptive study of the challenges when implementing an app for patients with neovascular age-related macular degeneration to monitor their vision at home

By: Reeves B. C., Wickens R., O'Connor S. R., Gidman E. A., Ward E., Treanor C., Peto T., Burton B. J. L., Knox P. C., Lotery A., Sivaprasad S., Donnelly M., Rogers C. A., Hogg R. E.
Objectives

Remote monitoring of health has the potential to reduce the burden of face-to-face appointments on patients and make healthcare more efficient. Apps are available for patients to self-monitor vision at home, for example, to detect reactivation of age-related macular degeneration (AMD). Describing the challenges of implementing apps for self-monitoring of vision at home was an objective of the MONARCH study, which evaluated two vision-monitoring apps (Multibit and MyVisionTrack) on an iPod Touch.

Design

Diagnostic Test Accuracy study.

Setting

Six UK hospitals.

Methods

The study provides an example of the real-world implementation of such apps across health sectors in an older population. Challenges described include the following: (1) frequency and reason for incoming calls made to a helpline and outgoing calls made to participants; (2) frequency and duration of events responsible for the tests being unavailable; and (3) other technical and logistical challenges.

Results

Patients (n=297) in the study were familiar with technology; 252/296 (85%) had internet at home and 197/296 (67%) had used a smartphone. Nevertheless, 141 (46%) called the study helpline, more often than anticipated. Of 435 reasons for calling, all but 42 (10%) related to testing with the apps or hardware, which contributed to reduced adherence. The team made at least one call to 133 patients (44%) to investigate why data had not been transmitted. The Multibit and MyVisionTrack apps were unavailable for 15 and 30 of 1318 testing days, respectively, for reasons that were the responsibility of the app providers. Researchers also experienced technical challenges with a multiple device management system. Logistical challenges included regulations for transporting lithium-ion batteries and malfunctioning chargers.

Conclusions

Implementation of similar technologies should incorporate a well-resourced helpline and build in additional training time for participants and troubleshooting time for staff. There should also be robust evidence that chosen technologies are fit for the intended purpose.

Trial registration number

ISRCTN79058224.

Implementing a complex mental health intervention in occupational settings: process evaluation of the MENTUPP pilot study

By: Tsantila F., Coppens E., De Witte H., Arensman E., Aust B., Pashoja A. C., Corcoran P., Cully G., De Winter L., Doukani A., Dushaj A., Fanaj N., Griffin E., Hogg B., Holland C., Leduc C., Leduc M., Mathieu S., Maxwell M., Ni Dhalaigh
Background

According to the Medical Research Council (MRC) framework, theorising how multilevel, multicomponent interventions work, and understanding how they interact with their implementation context, are necessary to evaluate such interventions despite their complexity. More research providing good examples of this approach is needed to produce evidence-based information on implementation practices.

Objectives

This article reports on the results of the process evaluation of a complex mental health intervention in small and medium enterprises (SMEs) tested through a pilot study. The overarching aim is to contribute to the evidence base related to the recruitment, engagement and implementation strategies of applied mental health interventions in the workplace.

Method

The Mental Health Promotion and Intervention in Occupational Settings (MENTUPP) intervention was pilot tested in 25 SMEs in three work sectors and nine countries. The evaluation strategy of the pilot test relied on a mixed-methods approach combining qualitative and quantitative research methods. The process evaluation was inspired by the RE-AIM framework and the taxonomy of implementation outcomes suggested by Proctor and colleagues and focused on seven dimensions: reach, adoption, implementation, acceptability, appropriateness, feasibility and maintenance.

Results

Factors facilitating implementation included the variety of the provided materials, the support provided by the research officers (ROs) and the existence of a structured plan for implementation, among others. Main barriers to implementation were the difficulty of talking about mental health, familiarisation with technology, difficulty in fitting the intervention into the daily routine and restrictions caused by COVID-19.

Conclusions

The results will be used to optimise the MENTUPP intervention and the theoretical framework that we developed to evaluate the causal mechanisms underlying MENTUPP. Conducting this systematic and comprehensive process evaluation strengthens the evidence base related to mental health interventions in the workplace and can serve as a guide to overcoming their contextual complexity.

Trial registration number

ISRCTN14582090.

Evaluating the performance of artificial intelligence software for lung nodule detection on chest radiographs in a retrospective real-world UK population

By: Maiter A., Hocking K., Matthews S., Taylor J., Sharkey M., Metherall P., Alabed S., Dwivedi K., Shahin Y., Anderson E., Holt S., Rowbotham C., Kamil M. A., Hoggard N., Balasubramanian S. P., Swift A., Johns C. S.
Objectives

Early identification of lung cancer on chest radiographs improves patient outcomes. Artificial intelligence (AI) tools may increase diagnostic accuracy and streamline this pathway. This study evaluated the performance of commercially available AI-based software trained to identify cancerous lung nodules on chest radiographs.

Design

This retrospective study included primary care chest radiographs acquired in a UK centre. The software evaluated each radiograph independently and outputs were compared with two reference standards: (1) the radiologist report and (2) the diagnosis of cancer by multidisciplinary team decision. Failure analysis was performed by interrogating the software marker locations on radiographs.

Participants

5722 consecutive chest radiographs were included from 5592 patients (median age 59 years, 53.8% women, 1.6% prevalence of cancer).

Results

Compared with radiologist reports for nodule detection, the software demonstrated sensitivity 54.5% (95% CI 44.2% to 64.4%), specificity 83.2% (82.2% to 84.1%), positive predictive value (PPV) 5.5% (4.6% to 6.6%) and negative predictive value (NPV) 99.0% (98.8% to 99.2%). Compared with cancer diagnosis, the software demonstrated sensitivity 60.9% (50.1% to 70.9%), specificity 83.3% (82.3% to 84.2%), PPV 5.6% (4.8% to 6.6%) and NPV 99.2% (99.0% to 99.4%). Normal or variant anatomy was misidentified as an abnormality in 69.9% of the 943 false positive cases.
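The very low PPV alongside a high NPV is what Bayes' rule predicts at a 1.6% prevalence: even moderate specificity generates far more false positives than there are true cases. A minimal sketch illustrating this, using the rounded radiologist-report figures above (the small gap to the reported 5.5% PPV is expected, since the study computed PPV from per-radiograph counts rather than these rounded rates):

```python
def ppv_npv(sensitivity, specificity, prevalence):
    """Positive and negative predictive value from Bayes' rule."""
    tp = sensitivity * prevalence              # true positive rate in the population
    fp = (1 - specificity) * (1 - prevalence)  # false positive rate in the population
    fn = (1 - sensitivity) * prevalence        # missed cases
    tn = specificity * (1 - prevalence)        # correctly ruled out
    return tp / (tp + fp), tn / (tn + fn)

# Rounded figures reported above for the radiologist-report comparison
ppv, npv = ppv_npv(0.545, 0.832, 0.016)
print(f"PPV ~ {ppv:.1%}, NPV ~ {npv:.1%}")  # PPV ~ 5.0%, NPV ~ 99.1%
```

At this prevalence, PPV stays in single digits unless specificity rises well above 95%, which is why the authors flag the risk of over-investigation.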

Conclusions

The software demonstrated considerable underperformance in this real-world patient cohort. Failure analysis suggested a lack of generalisability in the training and testing datasets as a potential factor. The low PPV carries the risk of over-investigation and limits the translation of the software to clinical practice. Our findings highlight the importance of training and testing software in representative datasets, with broader implications for the implementation of AI tools in imaging.
