Remote monitoring of health has the potential to reduce the burden of face-to-face appointments on patients and to make healthcare more efficient. Apps are available for patients to self-monitor vision at home, for example, to detect reactivation of age-related macular degeneration (AMD). One objective of the MONARCH study, which evaluated two vision-monitoring apps (Multibit and MyVisionTrack) on an iPod Touch, was to describe the challenges of implementing apps for self-monitoring of vision at home.
The MONARCH study was a diagnostic test accuracy study conducted at six UK hospitals.
The study provides an example of the real-world implementation of such apps across health sectors in an older population. Challenges described include the following: (1) the frequency of and reasons for incoming calls made to a helpline and outgoing calls made to participants; (2) the frequency and duration of periods during which the tests were unavailable; and (3) other technical and logistical challenges.
Patients (n=297) in the study were familiar with technology; 252/296 (85%) had internet at home and 197/296 (67%) had used a smartphone. Nevertheless, 141 (46%) called the study helpline, more often than anticipated. Of 435 reasons for calling, all but 42 (10%) related to testing with the apps or hardware, which contributed to reduced adherence. The team made at least one call to 133 patients (44%) to investigate why data had not been transmitted. The Multibit and MyVisionTrack apps were unavailable for 15 and 30 of 1318 testing days, respectively, for reasons that were the responsibility of the app providers. Researchers also experienced technical challenges with a multiple device management system. Logistical challenges included regulations for transporting lithium-ion batteries and malfunctioning chargers.
Implementation of similar technologies should incorporate a well-resourced helpline and build in additional training time for participants and troubleshooting time for staff. There should also be robust evidence that chosen technologies are fit for the intended purpose.
According to the Medical Research Council (MRC) framework, evaluating multilevel, multicomponent interventions despite their complexity requires theorising how they work and understanding how they interact with their implementation context. More research is needed to provide good examples of this approach and thereby produce evidence-based information on implementation practices.
This article reports on the results of the process evaluation of a complex mental health intervention in small and medium enterprises (SMEs) tested through a pilot study. The overarching aim is to contribute to the evidence base related to the recruitment, engagement and implementation strategies of applied mental health interventions in the workplace.
The Mental Health Promotion and Intervention in Occupational Settings (MENTUPP) intervention was pilot-tested in 25 SMEs across three work sectors and nine countries. The evaluation strategy of the pilot test relied on a mixed-methods approach combining qualitative and quantitative research methods. The process evaluation was informed by the RE-AIM framework and the taxonomy of implementation outcomes suggested by Proctor and colleagues, and focused on seven dimensions: reach, adoption, implementation, acceptability, appropriateness, feasibility and maintenance.
Factors facilitating implementation included, among others, the variety of the materials provided, the support provided by the research officers (ROs) and the existence of a structured plan for implementation. The main barriers to implementation were the difficulty of talking about mental health, unfamiliarity with technology, difficulty in fitting the intervention into the daily routine and restrictions caused by COVID-19.
The results will be used to optimise the MENTUPP intervention and the theoretical framework that we developed to evaluate the causal mechanisms underlying MENTUPP. Conducting this systematic and comprehensive process evaluation contributes to the enhancement of the evidence base related to mental health interventions in the workplace and it can be used as a guide to overcome their contextual complexity.
Early identification of lung cancer on chest radiographs improves patient outcomes. Artificial intelligence (AI) tools may increase diagnostic accuracy and streamline this pathway. This study evaluated the performance of commercially available AI-based software trained to identify cancerous lung nodules on chest radiographs.
This retrospective study included primary care chest radiographs acquired in a UK centre. The software evaluated each radiograph independently, and its outputs were compared with two reference standards: (1) the radiologist report and (2) the diagnosis of cancer by multidisciplinary team decision. Failure analysis was performed by interrogating the software marker locations on radiographs.
5722 consecutive chest radiographs were included from 5592 patients (median age 59 years, 53.8% women, 1.6% prevalence of cancer).
Compared with radiologist reports for nodule detection, the software demonstrated sensitivity 54.5% (95% CI 44.2% to 64.4%), specificity 83.2% (82.2% to 84.1%), positive predictive value (PPV) 5.5% (4.6% to 6.6%) and negative predictive value (NPV) 99.0% (98.8% to 99.2%). Compared with cancer diagnosis, the software demonstrated sensitivity 60.9% (50.1% to 70.9%), specificity 83.3% (82.3% to 84.2%), PPV 5.6% (4.8% to 6.6%) and NPV 99.2% (99.0% to 99.4%). Normal or variant anatomy was misidentified as an abnormality in 69.9% of the 943 false positive cases.
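The four accuracy metrics above all derive from a single 2×2 confusion matrix. As a minimal sketch, the function below computes them from true/false positive and negative counts; the counts in the usage example are approximate values back-calculated from the reported cancer-diagnosis comparison (n=5722, prevalence about 1.6%) for illustration only, not the study's exact table.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Compute diagnostic test accuracy metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Illustrative counts consistent with the reported figures
# (sensitivity ~60.9%, specificity ~83.3%, PPV ~5.6%, NPV ~99.2%).
metrics = diagnostic_metrics(tp=56, fp=940, fn=36, tn=4690)
for name, value in metrics.items():
    print(f"{name}: {value:.1%}")
```

Note how the low prevalence drives the low PPV: even with specificity above 80%, the false positives (940) dwarf the true positives (56), which is the over-investigation risk discussed below.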
The software demonstrated considerable underperformance in this real-world patient cohort. Failure analysis suggested a lack of generalisability in the training and testing datasets as a potential factor. The low PPV carries the risk of over-investigation and limits the translation of the software to clinical practice. Our findings highlight the importance of training and testing software in representative datasets, with broader implications for the implementation of AI tools in imaging.