Quality collaboratives improve quality of care at the hospital and collaborative levels, but less is known about how such efforts affect patient-level disparities. This study evaluated how a quality improvement (QI) effort (increasing multiarterial grafting during coronary artery bypass grafting (CABG)) translated to populations that historically receive lower-quality care (females and patients of low socioeconomic status).
Retrospective cohort study.
All non-federal hospitals in the state of Michigan that perform cardiac surgery and participate in a statewide collaborative database (n=33).
Patients undergoing first-time, isolated CABG receiving at least two bypass grafts from 2011 to 2022 were identified.
The association of sex and socioeconomic status with multiarterial grafting was evaluated across the study period. The Distressed Communities Index (DCI), a socioeconomic ranking (0=not distressed, 100=severely distressed), was matched to each patient's zip code. Hierarchical regression modelling was performed to associate DCI and sex with multiarterial grafting, incorporating patient factors and hospital and surgeon effects. Sex-by-surgery-year and DCI-by-surgery-year interaction terms were included to assess the change in the rate of multiarterial grafting over time.
A total of 40 322 patients underwent CABG at 33 centres; the median age was 66 years and 24% were female. The overall rate of multiarterial grafting was 15%, though lower among females (10% vs 17%) and in the highest (most distressed) versus lowest DCI quartile (14% vs 18%). After risk adjustment, females were less likely to receive multiarterial grafting (ORadj 0.51, 95% CI 0.45 to 0.58), as were patients from more distressed communities (ORadj 0.35 per 10-point DCI increase, 95% CI 0.24 to 0.51).
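The "per 10-point increase" odds ratio reported above comes from rescaling a per-unit model coefficient. A minimal sketch of that conversion, using a hypothetical coefficient value rather than anything from this study:

```python
import math

def or_per_increment(beta_per_unit, increment=10):
    """Convert a per-unit log-odds coefficient to an odds ratio
    over a larger increment (e.g. 10 DCI points)."""
    return math.exp(beta_per_unit * increment)

# Hypothetical: each DCI point lowers the log-odds of multiarterial
# grafting by 0.05, so a 10-point increase gives OR = exp(-0.5).
beta = -0.05
print(round(or_per_increment(beta), 3))  # 0.607
```

The same rescaling applies to the confidence limits, which is why per-10-point intervals look wider than per-unit ones.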
Despite a large overall increase in multiarterial grafting due to QI efforts, females and patients with low socioeconomic status had lower rates of multiarterial grafting. QI efforts should be evaluated both overall and among patients who historically receive lower quality care to improve quality and equity.
by Maria Grazia De Iorio, Michele Polli, Sara Ghilardi, Stefano Frattini, Mara Bagardi, Alessandra Paganelli, Maria Cristina Cozzi, Kenza Seghrouchni, Paola Giuseppina Brambilla, Giulietta Minozzi
Non-epidermolytic ichthyosis (NEI) is a hereditary skin disorder affecting several dog breeds, most notably the Golden Retriever. It is primarily caused by a loss-of-function variant in the PNPLA1 gene, while a second, less common form is associated with a deletion in the ABHD5 gene. This retrospective study aimed to assess the prevalence and temporal trends of both mutations in Golden Retrievers tested in Italy between 2017 and September 2025. A total of 508 genetic tests were analyzed, including 463 dogs tested for the PNPLA1 mutation, 42 for the ABHD5 deletion, and 3 for both variants. DNA was extracted from blood or buccal samples and analyzed by real-time PCR followed by confirmatory Sanger sequencing. Among the PNPLA1-tested dogs, 42% were clear (wt/wt), 37% carriers (wt/mut), and 21% affected (mut/mut), with calculated allele frequencies of 60% wild-type and 40% mutant. A significant temporal decline in mutant allele frequency was observed, accompanied by an increasing number of animals tested over time, suggesting growing interest in genetic screening and its impact on selective breeding. Conversely, all dogs tested for the ABHD5 deletion were wild-type, supporting its rarity in the breed. Overall, these findings confirm that PNPLA1-related ichthyosis remains one of the most prevalent hereditary disorders in Golden Retrievers, although its frequency is decreasing. The results emphasize the effectiveness of genetic testing in disease prevention and highlight the importance of continued monitoring to maintain genetic health within the breed.

In May 2023, the US Food and Drug Administration (FDA) initially approved an AS01E-adjuvanted respiratory syncytial virus (RSV) prefusion F protein-based vaccine (adjuvanted RSVPreF3) for adults aged ≥60 years. The approval was expanded in June 2024 to include adults 50–59 years of age at increased risk for RSV-associated lower respiratory tract disease.
In this paper, we describe the protocol of a postmarketing safety study evaluating the association between adjuvanted RSVPreF3 and new-onset Guillain-Barré syndrome (GBS), acute disseminated encephalomyelitis (ADEM) and atrial fibrillation (AF) among adults ≥50 years of age in the USA and provide our rationale for key methodological decisions.
The potential associations between adjuvanted RSVPreF3 and GBS, ADEM and AF will be evaluated using secondary healthcare data and the self-controlled risk interval (SCRI) design. Data from five research partners in the USA spanning August 2023 through June 2030 will be used for the conduct of yearly monitoring queries and, sample size permitting, SCRI analyses. Claims-based definitions for new-onset outcomes (first diagnosis in 365 days) are: ≥1 inpatient diagnosis for GBS and ADEM; ≥1 inpatient or ≥2 ambulatory/emergency diagnoses for AF. The primary risk and control windows are 1–42 and 43–84 days, respectively, for GBS and ADEM; and 1–8 and 9–16 days for AF. SCRI analyses for GBS and ADEM will include chart-confirmed cases. SCRI analyses for AF will adjust for the positive predictive value obtained from validation against charts. Conditional Poisson regression will be used to calculate incidence rate ratios.
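Under the SCRI design described above, with equal-length risk and control windows, the incidence rate ratio reduces to a ratio of within-person event counts. A minimal sketch with hypothetical counts (the study itself will fit conditional Poisson regression, which yields the same point estimate in this simple case):

```python
import math

def scri_irr(risk_events, control_events, risk_days, control_days, z=1.96):
    """Incidence rate ratio for a self-controlled risk interval,
    with a log-scale normal-approximation CI. Illustrative sketch only."""
    irr = (risk_events / risk_days) / (control_events / control_days)
    se = math.sqrt(1 / risk_events + 1 / control_events)
    lo = math.exp(math.log(irr) - z * se)
    hi = math.exp(math.log(irr) + z * se)
    return irr, (lo, hi)

# Hypothetical GBS counts in the 1-42 day risk and 43-84 day control windows
irr, ci = scri_irr(risk_events=6, control_events=3, risk_days=42, control_days=42)
print(round(irr, 2))  # 2.0
```

With so few events the interval is wide, which is why the protocol makes SCRI analyses conditional on sufficient sample size.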
This study was approved by the Institutional Review Boards (IRB) of Harvard Pilgrim Health Care Institute; WIRB-Copernicus Group, Inc and its affiliates (collectively, ‘WCG’); WCG IRB, Inc; and Sterling IRB, with Federal Wide Assurance (FWA) numbers FWA00000100, FWA00033319 and FWA00025632, respectively, for all participating research partners. Study results will be shared with the US FDA and publicly disseminated through national or international clinical or scientific conferences and peer-reviewed publications.
This protocol has been registered in the Heads of Medicines Agencies–European Medicines Agency Real World Data Catalogues (EUPAS1000000486).
Patient falls in hospitals lead to patient harm, staff distress and economic burden on health systems. There are few strategies with robust evidence demonstrating benefit for the prevention of falls, especially in acute hospital settings. Education and multicomponent fall prevention approaches are promising. Rigorous systematic measurement of implementation has been lacking in most hospital fall prevention trials. This paper describes the protocol for a trial that will evaluate the impact of supported implementation of tailored multicomponent fall prevention interventions on patient falls in hospital.
A stepped-wedge hybrid type I effectiveness implementation cluster randomised trial will be conducted. Twelve inpatient wards across four metropolitan hospitals will be enrolled in the trial, clustered into groups of four and randomised to commence the intervention at one of three time periods. Patients and ward staff will be recruited to complete pre-implementation surveys, which, combined with analysis of routinely collected local falls data and staff brainstorming, will inform tailored multicomponent fall prevention interventions for each ward. Wards will receive quality improvement training, clinical facilitation and staff education for at least 4 months to support implementation of their fall prevention interventions. The primary outcome—rate of falls—will be measured using routinely collected hospital falls data from the incident management system and medical records. Pre-implementation and post-implementation patient and staff surveys, qualitative interviews and bedside audits will measure secondary effectiveness and implementation outcomes. Healthcare utilisation from hospital data will inform the cost-effectiveness analysis.
The Sydney Local Health District Human Research Ethics Committee (RPAH Zone) approved this trial (protocol number X24-0087 and 2024/ETH00583). The trial is registered with the Australian and New Zealand Clinical Trials Registry (ACTRN12624000896572). Data collection commenced in October 2024 and is due for completion in May 2026. Results will be published in reputable international journals and presented at relevant conferences.
Australian and New Zealand Clinical Trials Registry (ACTRN12624000896572).
by Sian E. Wanstall, Brandon W. J. Brown, Meagan E. Crowther, Claire Dunbar, Robert J. Adams, Anjum Naweed, Amy C. Reynolds
BackgroundParamedics face unique occupational hazards, including high operational demands, trauma exposure and shift work, all of which affect mental well-being. Suboptimal sleep is also common in this workforce and is closely linked to adverse mental health outcomes. This scoping review synthesizes the evidence to date on interventions to support paramedic mental well-being, including sleep-based interventions.
Materials and methodsThis review was pre-registered on the Open Science Framework (https://doi.org/10.17605/OSF.IO/7VSD9). Systematic database searches were conducted in October 2024 for original research published after 2004. Data were narratively synthesised, and findings reported following established guidelines.
ResultsNineteen sources were included, involving 1,067 participants across seven countries. Seventeen interventions were examined, predominantly via randomized controlled trials (58%), using a total of 43 different measurement scales to evaluate mental health and sleep outcomes. Interventions comprised psychological (37%); sleep, fatigue and/or shift work (32%); and complementary and alternative medicine (32%) approaches, which primarily focused on the individual level (94%). Studies were limited by small sample sizes, design and quality shortcomings, limited long-term follow-up, and low baseline symptom levels.
ConclusionsThis review highlights a critical gap in robust, evidence-based, system-level interventions to address poor sleep and mental well-being in paramedics. Future research should prioritise co-designed, context-sensitive approaches, ideally integrated within organisational structures to ensure relevance and accessibility.
Chronic venous leg ulcers (CVLUs) affect 1%–3% of adults. Standard compression therapy achieves healing in only 40%–70% of cases at 24 weeks. Evidence for hyperbaric oxygen (HBO) therapy remains controversial, with limited sham-controlled trials. To evaluate whether adjunctive HBO improves healing of refractory CVLUs compared to standard care alone. Single-centre, open-label randomised trial of 80 adults with CVLUs that persisted > 3 months despite standard care (defined as < 30% area reduction after 4 weeks of compression therapy). All consecutive eligible patients were randomised to HBO (20 sessions at 2.4 ATA, 90 min) plus standard care (n = 40) or standard care alone (n = 40). Primary outcome: percentage ulcer area reduction at day 30. Blinded assessors measured wounds, though participants knew their treatment allocation. HBO group had greater area reduction (62.1% ± 22.1% vs. 41.7% ± 21.5%; mean difference 20.4%, 95% CI: 10.1–30.7, p < 0.001; Cohen's d = 0.95). Complete healing at 90 days occurred in 62.5% vs. 30.0% (NNT = 3). TcPO2 increased from 26.1 ± 6.3 to 150.3 ± 45.6 mmHg in HBO group (p < 0.001). Pain decreased more with HBO (ΔVAS −5.0 vs. −1.5, p < 0.001). Three patients (7.5%) had mild ear barotrauma that resolved spontaneously. Main limitations were lack of sham control and 90-day follow-up. In this trial, adjunctive HBO was associated with faster short-term healing of refractory venous ulcers < 20 cm2. However, the open-label design and single-centre setting limit confidence in these findings. Sham-controlled multicentre trials with longer follow-up are needed before recommending routine use.
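Two of the summary statistics above can be recomputed directly from the abstract's numbers. A quick check (the trial's exact pooling of SDs may differ slightly, hence the small gap to the reported d = 0.95):

```python
import math

def cohens_d(m1, sd1, m2, sd2):
    """Cohen's d with an equal-n pooled SD (both arms had n = 40)."""
    pooled = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (m1 - m2) / pooled

def nnt(p_treat, p_control):
    """Number needed to treat = 1 / absolute risk difference."""
    return 1 / (p_treat - p_control)

d = cohens_d(62.1, 22.1, 41.7, 21.5)  # area reduction: HBO vs standard care
n = nnt(0.625, 0.300)                 # complete healing at 90 days
print(round(d, 2), round(n))          # 0.94 3
```

The recomputed d (~0.94) is consistent with the reported 0.95, and the risk difference of 32.5% reproduces the reported NNT of 3.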
by Hemant Mahajan, Poppy Alice Carson Mallinson, Judith Lieber, Santhi Bhogadi, Santosh Kumar Banjara, Anoop Shah, Vipin Gupta, Gagandeep Kaur Walia, Bharati Kulkarni, Sanjay Kinra
Background and AimCardiovascular diseases (CVDs) represent a growing public-health challenge in India, where nearly one in four deaths is CVD-related. Accurate risk stratification underpins targeted prevention, yet laboratory-dependent tools are often impractical in resource-limited settings. The World Health Organization (WHO) and GLOBORISK initiatives both offer non-laboratory-based 10-year CVD risk algorithms alongside their laboratory-based counterparts. We aimed to compare laboratory- and non-laboratory-based WHO and GLOBORISK CVD risk scores, assess their concordance, and examine relationships with sub-clinical atherosclerosis in a rural Indian cohort.
Materials and MethodsWe conducted a cross-sectional analysis of 2,465 adults (1,184 men, 1,281 women) aged 40−74 years from the third wave (2010−12) of the Andhra Pradesh Children and Parents Study (APCAPS). Participants with prior CVD were excluded. Ten-year CVD risk was calculated using sex-specific WHO (South Asia) and India-calibrated GLOBORISK models, both laboratory-based (age, sex, smoking, systolic blood pressure, diabetes, total cholesterol) and non-laboratory-based (age, sex, smoking, systolic blood pressure, BMI) algorithms. Categorical agreement was quantified via percentage agreement and quadratic weighted kappa (κ); continuous agreement by Bland-Altman analysis. We also evaluated linear associations between each risk score (categorical and continuous) and three sub-clinical atherosclerosis markers: carotid intima-media thickness (CIMT), pulse-wave velocity (PWV), and augmentation index (AIx), through sex-stratified multi-level linear regression with a random intercept at the household level, adjusting for multiple testing.
ResultsMedian WHO-CVD-risk was 6.0% (IQR 4%−9%) in men and 3.0% (2%−4%) in women for both lab and non-lab models; median GLOBORISK-CVD-risk was 12.0% (9%−16%) for the lab model vs. 15.0% (10%−16%) for the non-lab model in men, and 5.0% (3%−9%) for both models in women. Categorical agreement was substantial to almost perfect: WHO κ = 0.82 (overall), GLOBORISK κ = 0.72. Bland-Altman analyses demonstrated small mean differences between laboratory- and non-laboratory-based scores.
ConclusionNon-laboratory-based WHO and GLOBORISK CVD risk scores exhibit high overall agreement with laboratory-based models and correlate strongly with subclinical atherosclerosis in rural India. However, modest underestimation in high-risk subgroups (patients with diabetes or hypercholesterolemia) warrants cautious interpretation. These findings support the feasibility of non-laboratory risk assessment in resource-constrained settings, while underscoring the need for prospective validation against hard cardiovascular outcomes prior to large-scale implementation.
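Quadratic weighted kappa, the categorical agreement statistic used above, penalises disagreements by the squared distance between risk categories. A minimal self-contained sketch on hypothetical confusion matrices (not APCAPS data):

```python
def quadratic_weighted_kappa(m):
    """Quadratic-weighted kappa for a k x k confusion matrix
    (rows: lab-based category, columns: non-lab category)."""
    k = len(m)
    n = sum(sum(row) for row in m)
    row_tot = [sum(row) for row in m]
    col_tot = [sum(m[i][j] for i in range(k)) for j in range(k)]
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            w = (i - j) ** 2 / (k - 1) ** 2          # quadratic weight
            num += w * m[i][j]                       # observed weighted disagreement
            den += w * row_tot[i] * col_tot[j] / n   # chance-expected disagreement
    return 1 - num / den

# Perfect agreement between lab and non-lab categories -> kappa = 1
assert quadratic_weighted_kappa([[10, 0], [0, 10]]) == 1.0
# Mostly concordant, with some one-step misclassification
print(round(quadratic_weighted_kappa([[40, 5, 0], [5, 30, 5], [0, 5, 10]]), 2))  # 0.8
```

Because one-step misclassifications carry small weights, quadratic kappa rewards near-agreement, which suits ordered risk bands like these.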
Enteric fever, primarily caused by Salmonella enterica Typhi and Salmonella enterica Paratyphi A (SPA), is endemic mainly in South Asia, disproportionately affecting school-age children. Although typhoid conjugate vaccines (TCVs) are effective and implemented in many countries, no licensed vaccine exists against paratyphoid A. Bivalent vaccines targeting both S. Typhi and SPA may address this gap. Although field efficacy trials are not considered feasible, controlled human infection models (CHIMs) offer an alternative pathway for evaluating vaccine efficacy. This will be the first efficacy study of a bivalent vaccine against typhoid and paratyphoid A using a paratyphoid CHIM.
This is a phase II multicentre, double-blind, randomised controlled trial assessing the efficacy and immunogenicity of a bivalent conjugate vaccine candidate, Serum Institute of India Typhoid Conjugate Vaccine (Bivalent) (SII-TCV(B)), against SPA using a CHIM in healthy UK adults aged 18–55 years. A total of 192 participants will be randomised 1:1 to receive either SII-TCV(B) or a licensed Vi-polysaccharide typhoid vaccine (Vi-PS). All participants will be orally challenged with S. Paratyphi A (strain NVGH308) 28 days postvaccination. Participants will be monitored closely for 14 days and treated at 14 days postchallenge or promptly on diagnosis, according to prespecified criteria. The primary objective is to evaluate vaccine efficacy of SII-TCV(B) against paratyphoid infection using a CHIM. The coprimary immunogenicity objective is to assess non-inferiority of the typhoid IgG response compared with a licensed Vi-PS control.
The study has received ethical approval from the Berkshire Research Ethics Committee (24/SC/0309) and regulatory approval from the UK Medicines and Healthcare products Regulatory Agency. Results will be disseminated via peer-reviewed publications and scientific meetings.
Nursing education traditionally combines theory with structured practice in hospital and/or outpatient settings, so that students develop the skills needed to interpret, intervene and care for patients. This study describes the clinical practice learning experiences of Bachelor of Nursing interns during the 2024-2025 period. This was a qualitative study with a phenomenological approach; nine social-service interns were randomly selected from 110 and interviewed for an average of 20 minutes. The study adhered to the Secretaría de Salud (Ministry of Health) guidelines on research involving human subjects and to the Declaration of Helsinki. Findings span experiences, lived events, authentic learning, clinical settings, skills and clinical faculty. The experiences fall into two phases. In the first semester, students perceived the internship as an essential opportunity to apply theoretical knowledge and develop practical skills in a real setting; this initial view was accompanied by emotions such as fear and anxiety, reflecting insecurity about the demands of the clinical environment and expectations of technical learning and adaptation to new responsibilities. In the second half of the clinical practice, participants valued the internship as a transformative experience in which they acquired technical competencies, such as handling medical equipment, and socioemotional skills, such as empathy and conflict resolution.
Cardiogenic shock (CS) is a complex syndrome characterised by primary cardiac dysfunction. Despite advances in therapeutic options such as mechanical cardiac support, it remains associated with high mortality. Although previous registries have described heterogeneous populations and outcomes across different centres, contemporary real-world data on management practices remain limited. This gap is particularly evident in low- and middle-income countries, where there is no robust registry that clearly defines the current state of CS management. Therefore, a multicentre registry is needed to better characterise current practices and outcomes. Our study aims to gain insight into current therapeutic trends in Mexico, a low- to middle-income country with a significant cardiovascular disease burden.
The Mexican Registry of Cardiogenic Shock is a quality initiative that aims to identify therapeutic trends, demographic characteristics and clinical presentations. It also aims to evaluate outcomes, including mortality and cognitive function at in-hospital and 1-year follow-ups, and to identify areas for improvement in the care process across the broad spectrum of CS.
Ethical approval for this multicentre study was obtained from the local research ethics committees of all participating institutions. The study results will be disseminated to all participating institutions in the form of summary reports and presentations on completion of the analysis.
The substantial case detection gap in the field of child tuberculosis (TB) disease is largely driven by inadequate diagnostic tools and approaches. Chest radiographs (CXRs) remain a key component in the evaluation of children and young adolescents (0–15 years) with presumptive TB, aiding clinicians in making the diagnosis and discriminating children with TB from those with other diseases. Widespread use and optimal interpretation of CXR is hampered by a lack of access to well-trained specialists to interpret images. Artificial intelligence CXR interpretation software, termed computer-aided detection (CAD), is now well developed for adults, yet few products have been evaluated in children. The CXR features of child TB are different from those of adults, and as a result, the performance of these CAD algorithms, largely developed for use in adults, will be suboptimal when used in children. Adapting, or fine-tuning adult CAD algorithms, using CXR images from children with presumptive TB, could allow optimisation of these products for use in children. We, therefore, set out to develop a large image and data repository collected from children evaluated for TB (called Catalysing Artificial Intelligence for Paediatric Tuberculosis Research, CAPTURE) with the purpose of evaluating current CAD products and then working with developers and other partners to optimise CAD algorithms for use in children.
We identified approximately 20 studies, from which potentially up to 11 000 CXRs could be used for the proposed project. CXRs and data were eligible for inclusion in the CAPTURE repository if collected from high-quality child TB diagnostic studies that enrolled children with presumptive TB and if CXRs were obtained as part of the baseline assessment. All lead investigators of these studies are members of the CAPTURE consortium. The images and metadata contributed are centrally collated and the key variable of TB case classification as confirmed, unconfirmed or unlikely TB, using an established consensus case definition, is available. All CXRs included in the CAPTURE repository have a consensus radiological interpretation allocated by a panel of independent expert child TB CXR readers who have classified them as ‘unreadable’, ‘normal’, ‘abnormal typical of TB’ or ‘abnormal not typical of TB’. To determine diagnostic performance of existing CAD products, we will evaluate these against a primary composite clinical reference standard (confirmed TB and unconfirmed TB vs unlikely TB), as well as other secondary microbiological and radiological reference standards. A subset of images will be subsequently allocated to a ‘training set’ and made available to developers, academic groups or other parties to either develop novel paediatric CAD products or fine-tune existing adult ones, which will then be re-evaluated by the CAPTURE team using an image subset (‘validation set’) that is independent of the training set.
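Once each CXR has a CAD call and a composite reference label (confirmed/unconfirmed TB vs unlikely TB), diagnostic performance reduces to sensitivity and specificity over paired labels. An illustrative sketch with made-up labels; CAPTURE's actual analysis will be more involved (score thresholds, ROC curves, secondary reference standards):

```python
def sens_spec(cad_positive, reference_positive):
    """Sensitivity and specificity of binary CAD calls against a
    binary reference standard. Labels here are hypothetical."""
    pairs = list(zip(cad_positive, reference_positive))
    tp = sum(1 for p, r in pairs if p and r)
    fn = sum(1 for p, r in pairs if not p and r)
    tn = sum(1 for p, r in pairs if not p and not r)
    fp = sum(1 for p, r in pairs if p and not r)
    return tp / (tp + fn), tn / (tn + fp)

cad = [True, True, False, True, False, False]   # hypothetical CAD output
ref = [True, True, True, False, False, False]   # confirmed/unconfirmed TB vs unlikely
print(sens_spec(cad, ref))  # both 2/3 on this toy example
```

Holding the validation set apart from the training set, as the protocol specifies, is what keeps these estimates honest for fine-tuned models.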
The CAPTURE study has been approved by Stellenbosch University Health Research Ethics Committee (N22/09/113), with additional ethics approval or waivers by relevant local authorities obtained by consortium members contributing data if required. The final pooled, harmonised and cleaned dataset, as well as the deidentified, renamed CXR images, is stored on a secure cloud-based server. All analyses of existing CAD products, as well as the paediatric-optimised products, will be published in peer-reviewed publications and shared with other stakeholders like the WHO and donor and procurement organisations to guide policy updates and procurement pathways to ensure widespread uptake.
The objective of this study was to determine the association between viral subtype/clade and disease severity.
Multicentre retrospective cohort study.
This study used data from the Global Influenza Hospital Surveillance Network (GIHSN). The dataset comprised hospitalised influenza patients with viral sequencing data across 14 countries, collected from August 2022 through October 2023.
A total of 761 hospitalised patients were enrolled during the study period, and 745 patients were included in the analysis. We excluded patients with missing data on explanatory or outcome variables, those infected with viral clades represented by fewer than 11 sequences, and those enrolled at study sites contributing fewer than 5 patients.
Disease severity was defined by admission to intensive care unit (ICU), receipt of non-invasive oxygen supplementation, 3-variable definition (ICU, mechanical ventilation or death) or 4-variable definition (3-variable plus oxygen supplementation).
Outcomes were analysed in association with subtype or clade using mixed-effects logistic regression models, adjusting for age group, sex, underlying medical conditions, influenza vaccination status, antiviral use, country income level and epidemic period, with study site included as a random effect.
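As a simplified illustration of the quantity these mixed-effects models estimate, an unadjusted odds ratio with a Wald confidence interval can be computed directly from a 2x2 table. The counts below are hypothetical, not GIHSN data:

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Unadjusted OR with log-scale Wald CI.
    a, b: exposed with/without outcome; c, d: unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical ICU admission counts by subtype: A(H1N1)pdm09 30/263 vs A(H3N2) 25/380
or_, ci = odds_ratio(30, 233, 25, 355)
print(round(or_, 2))  # 1.83
```

The study's adjusted estimates additionally condition on covariates and site-level clustering, so they can differ substantially from a crude table like this.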
745 patients were included: 263 A(H1N1)pdm09, 380 A(H3N2) and 102 B/Victoria. A(H1N1)pdm09 infection was associated with increased odds of ICU admission (adjusted OR (aOR) 2.5, 95% CI 1.1 to 5.8) compared with A(H3N2). The 6B.1A.5a.2a.1 clade of A(H1N1)pdm09 was associated with increased severity compared with the 6B.1A.5a.2a clade (aOR 3.0, 95% CI 1.0 to 9.5 for the 3-variable definition; aOR 5.4, 95% CI 1.6 to 18.3 for the 4-variable definition). Among A(H3N2) infections, the 3C.2a1b.2a.2b clade showed a trend toward increased severity under the 4-variable definition compared with the 2a.1b clade (aOR 2.9, 95% CI 0.8 to 10.0).
This analysis highlights the differential impact of influenza subtypes and clades on disease severity in hospitalised patients. Future research should investigate the role of specific viral mutations of these clades in modulating immune evasion or disease severity. These findings reinforce the GIHSN’s critical role in global surveillance. Ongoing genomic surveillance is crucial for understanding the clinical impact of emerging influenza variants and informing public health responses.
Atrial fibrillation (AF) is the most common arrhythmia worldwide, affecting an estimated 5% of people over the age of 65, and is a leading cause of stroke and heart failure. Identification of patients at risk allows preventative measures and treatment before these complications occur. Conventional risk prediction models are static, lack the flexibility to incorporate dynamic risk factors and possess only modest predictive value. Artificial intelligence and machine learning-powered health virtual twin technology offer transformative methods for risk prediction and guiding clinical decisions.
In this prospective observational study, 1200 patients will be recruited in two tertiary centres. Patients hospitalised with acute illnesses (sepsis, heart failure, respiratory failure, stroke or critical illness) and patients having undergone high-risk surgery (major vascular surgery, upper gastrointestinal surgery and emergency surgery) will be monitored with a patch-based remote wireless monitoring system for up to 14 days. Clinical and electrocardiographic data will be used for modelling the risk of new-onset AF. The primary outcome is episodes of AF >30 s and will be described as ratio of episodes/patient and as percentage of patients having episodes of AF. Secondary outcomes include 30-day and 90-day readmission rates and complications of AF.
The aim of this study is to generate data for the development and validation of health virtual twins predicting onset of AF in an at-risk population. The intelligent monitoring to predict atrial fibrillation (NOTE-AF) study is part of the TARGET project, a Horizon Europe funded programme which includes risk prediction, diagnosis and management of AF-related stroke (https://target-horizon.eu/).
The study has received approval by the Health Research Authority and the National Research Ethics Service (REC reference 24/NW/0170, IRAS project ID: 342528) in the UK and has been registered on clinicaltrials.gov (NCT06600620). Results will be disseminated as outlined in the TARGET protocol to communicate project ideas, activities and results to diverse audiences.
Establishing comparability between measured outcomes in clinical trials poses a significant obstacle for systematic reviewers. Core outcome sets (COSs) were developed to address this issue. The macular degeneration (MD) COS is designed to standardise outcome measurement across clinical trials for MD. This study investigates the uptake of the MD COS in standardising outcome measurement across clinical trials.
Cross-sectional analysis.
We searched ClinicalTrials.gov for MD clinical trials registered from 5 years before COS publication through the search date of 26 June 2023, obtaining a pool of 2152 registered studies. After applying inclusion and exclusion criteria, we analysed 159 trials. We then assessed COS uptake using an interrupted time series analysis (ITSA) and performed analyses of variance (ANOVAs) and Pearson correlations to evaluate associations between trial characteristics and outcome measurement.
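The ITSA here amounts to segmented regression with level-change and slope-change terms at the publication date. A minimal sketch on noiseless synthetic data whose true slopes mirror the reported pre- and post-publication values (0.24 and 0.07% per month); nothing below is the study's actual series:

```python
import numpy as np

t = np.arange(48)                       # months of observation
post = (t >= 24).astype(float)          # indicator: after COS publication
t_post = np.where(t >= 24, t - 24, 0)   # months elapsed since publication

# Synthetic monthly completion: pre-slope 0.24, slope change -0.17,
# so the post-publication slope is 0.24 - 0.17 = 0.07 (no level change).
y = 10 + 0.24 * t - 0.17 * t_post

# Segmented OLS: intercept, baseline trend, level change, slope change
X = np.column_stack([np.ones_like(t, dtype=float), t, post, t_post])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pre_slope, post_slope = coef[1], coef[1] + coef[3]
print(round(pre_slope, 2), round(post_slope, 2))  # 0.24 0.07
```

In the study's analysis the same coefficients come with p-values, and it is the non-significant slope-change term that supports the "no meaningful uptake change" conclusion.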
ITSA showed no significant change in uptake following the MD COS (2016): mean percentage of completion of the COS increased by 0.24% per month before publication (p=0.27) and by 0.07% per month after publication (p=0.62), indicating no meaningful post-publication slope change in COS use. For context, visual acuity was most commonly measured, while several patient-reported and disutility domains were infrequently captured.
No discernible patterns in COS usage for MD trials were observed. We recommend further collaboration between regulators and COS developers to help with COS uptake. Additionally, we suggest that further studies analyse adherence to COSs in respect to regulatory recommendations.
by Petar Stanimirović, Tea Borozan, Katarina Petrović, Dragan Bjelica, Zorica Mitrović, Marko Mihić, Dejan Petrović, Anđelija Đorđević Tomić
Young people often face uncertainty during the transition from education to work, along with high unemployment and job dissatisfaction, which is addressed in the EU Youth Strategy, highlighting the need for better career support. This study aimed to identify the main factors influencing youth career decisions and to develop a decision-making model. Five core constructs were defined through a literature review: Dealing with Uncertainty, Risk Preference, Adaptability and Resilience, Education and Support, and Life Satisfaction. Data were collected from 673 engineering students. Regression analysis was used to test the proposed model and hypotheses, while Mann-Whitney and Kruskal-Wallis tests examined group differences. The developed model accounts for 46.2% (R² = 0.462) of the variability in students' career choices. Adaptability and resilience emerged as the most influential factor (β = 0.557). Differences for specific constructs were also observed in relation to family income, gender and extracurricular activity engagement. The model supports more informed career decisions and provides insights that may help improve career guidance and educational policy. The findings may also contribute to bridging theory and practice in career development research. The study is limited by its sample, which included only engineering students from the Republic of Serbia, potentially restricting the generalizability of the results.

Commentary on: MacGregor KA, Ho FK, Celis-Morales CA, et al. Association between menstrual cycle phase and metabolites in healthy, regularly menstruating women in UK Biobank, and effect modification by inflammatory markers and risk factors for metabolic disease. BMC Med. 2023;21:488.
Implications for practice and research Fat mass, physical activity level and cardiorespiratory fitness were identified as factors that influence the relationship between the menstrual cycle and levels of glucose, triglycerides, the triglyceride-to-glucose index, high-density lipoproteins (HDL) and low-density lipoproteins (LDL) cholesterol and the total-to-HDL cholesterol ratio. Future studies should investigate whether these relationships indicate a causal mechanism responsible for the variations in metabolic control throughout the menstrual cycle.
The rate of impaired metabolic regulation is rising among premenopausal women, characterised by decreased insulin sensitivity, increased fasting blood sugar levels and abnormal lipid profiles.
To use a socio-technical approach to identify the uses and implications of generative artificial intelligence (GenAI) tools in nursing education and practice.
Descriptive qualitative study.
Online interviews with 32 nursing students, faculty and practitioners between February and April 2024. Data were analysed using the Framework Method.
Theme 1 describes participants' use of eight GenAI tools across seven use cases. Theme 2 describes the implications of using GenAI tools on nursing education. The subthemes include (2.1) facing a new pedagogical reality, (2.2) negative sentiments on using GenAI tools in nursing education and (2.3) opportunities to improve nursing education with GenAI tools. Theme 3 describes the implications of using GenAI tools on nursing practice. Subthemes include (3.1) embedding in patient care, (3.2) nursing workflow integration and (3.3) organisational support. Theme 4 describes GenAI capacity-building. Subthemes include (4.1) to develop an AI-ready workforce, (4.2) to promote responsible and ethical use and (4.3) to advance the nursing profession.
Although GenAI tools initially disrupted nursing education, it is only a matter of time before they disrupt nursing practice. Nurses across education and practice settings should be trained in the responsible and ethical use of GenAI tools to mitigate risks and maximise benefits.
GenAI tools will profoundly affect how the nurses of today and tomorrow learn and practise the profession. It is crucial for nurses to actively participate in shaping this technology to minimise risks and maximise benefits to the nursing profession and patient care.
This study revealed the socio-technical intricacies of using GenAI tools in nursing education and practice. We also present wicked problems that nurses will face when using GenAI tools.
COREQ.
This study did not include patient or public involvement in its design, conduct or reporting.
To explore the influence of broader cultural and social factors on clinicians' care delivery to patients from culturally and linguistically diverse backgrounds in the emergency department.
A qualitative exploratory study.
A social ecological perspective drawn from a Social Ecological Model was used to guide the study. Clinicians from two public hospital emergency departments in Southeast Queensland, Australia, were recruited using purposive and snowball sampling strategies. Semi-structured interviews were undertaken between October 2022 and September 2023. Data were analysed using a content analysis approach.
Seventeen clinicians participated in the interviews: nine nurses and eight doctors. Nine participants were born in a country outside of Australia. Three main themes were generated from the interview data: (i) cultural and religious diversity and challenges in care delivery; (ii) social interactions and communication in clinical care; and (iii) perception about care delivery, services and supports.
Findings from this study offer insight into clinicians' experiences and perspectives regarding the influence of cultural and religious diversity, as well as cross-cultural communication and prejudice, on care delivery. Social interactions and communication in clinical care were found to facilitate the care delivery process and help clinicians navigate challenges. Cultural competency training, multicultural services and targeted resources can help support clinicians in providing culturally appropriate care in the emergency department.
The findings of this study may help inform the development of practical guidelines and strategies to support clinicians in care delivery. Appropriate training regarding cultural competency is essential to promote culturally appropriate care. Developing a tailored multicultural service and targeted resources in the emergency department is recommended in clinical practice.
The consolidated criteria for reporting qualitative research checklist was used.
A health consumer representative was involved to provide advice on the study conceptualisation and data interpretation.
Health systems must guarantee access to quality, safe and effective medicines. Essential medicine lists (EMLs) are crucial prioritisation tools to inform coverage decisions and steward limited health resources in the context of universal healthcare. This study aims to develop a consolidated framework for prioritising the assessment of health technologies in order to review and update the EML for treating diseases or health problems managed in primary healthcare (PHC).
A mixed-methods approach was designed to validate the framework. An initial scoping review will be conducted to identify studies that describe criteria used to prioritise the assessment of health technologies for PHC. Relevant studies will be examined using the Joanna Briggs Institute methodological framework for scoping reviews. A comprehensive search was conducted in the following sources: PubMed, Embase, Cochrane Library, Virtual Health Library (LILACS, WHO IRIS, IBECS, PAHO-IRIS, PAHO, LIS, BRISA), Health Systems Evidence, Global Health, Health Evidence and Epistemonikos, from inception until February 2025. Two review authors will screen studies and extract data independently. The extracted data will be analysed qualitatively and presented in diagrammatic or tabular form, alongside a narrative summary, in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses: Extension for Scoping Reviews reporting guidelines. An iterative online hybrid Delphi process with stakeholders, using predetermined consensus thresholds and a combination of a four-point Likert scale and open-ended questions, will be conducted to select and validate the criteria identified in the scoping review.
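A hedged sketch of how a predetermined consensus threshold might be applied to four-point Likert ratings in a Delphi round like the one described above. The ≥75% agreement cut-off and the treatment of ratings 3-4 as agreement are assumed conventions for illustration, not thresholds stated in the protocol.

```python
# Hypothetical illustration of a Delphi consensus check on a four-point
# Likert scale (1 = strongly disagree ... 4 = strongly agree).
# The 75% agreement threshold is an assumed convention, not taken from
# the study protocol.
AGREE = {3, 4}      # ratings counted as agreement (assumption)
THRESHOLD = 0.75    # assumed consensus threshold

def reaches_consensus(ratings):
    """Return True if the share of agree-side ratings meets the threshold."""
    share = sum(r in AGREE for r in ratings) / len(ratings)
    return share >= THRESHOLD

# Example: 9 of 12 stakeholders rate the criterion 3 or 4 -> 75% -> consensus.
panel = [4, 3, 3, 4, 2, 3, 4, 1, 3, 4, 3, 2]
print(reaches_consensus(panel))  # -> True
```

Criteria failing the threshold would typically be revised using the open-ended feedback and re-rated in the next Delphi round.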
We will provide a consolidated framework to inform decision-makers for prioritising the assessment of health technologies for the national EML for PHC. This is an important step in using evidence to inform public health policies. We plan to share findings through a variety of means, including publications in peer-reviewed journals, presentations at national conferences, invited workshops and webinars, email discussion lists affiliated with our institutions and professional associations, and academic social media.
To identify and understand the different approaches to local consensus discussions that have been used to implement perioperative pathways for common elective surgeries.
Systematic review.
Five databases (MEDLINE, CINAHL, EMBASE, Web of Science and the Cochrane Library) were searched electronically for literature published between 1 January 2000 and 6 April 2023.
Two reviewers independently screened studies for inclusion and assessed quality. Data were extracted using a structured extraction tool. A narrative synthesis was undertaken to identify and categorise the core elements of local consensus discussions reported. Data were synthesised into process models for undertaking local consensus discussions.
The initial search returned 1159 articles after duplicates were removed. Following title and abstract screening, 135 articles underwent full-text review. A total of 63 articles met the inclusion criteria. Reporting of local consensus discussions varied substantially across the included studies. Four elements were consistently reported, which together define a structured process for undertaking local consensus discussions.
Local consensus discussions are a common implementation strategy used to reduce unwarranted clinical variation in surgical care. Several models for undertaking local consensus discussions and their implementation are presented.
Our understanding of consensus-building processes in perioperative pathway development could be significantly advanced by refining reporting standards to include criteria for achieving consensus and assessment of implementation fidelity, and by advocating a systematic approach to the use of consensus discussions in hospitals.
These findings address recognised gaps in the literature, including how decisions are commonly made in the design and implementation of perioperative pathways, and further our understanding of consensus processes that clinicians can use when undertaking improvement initiatives.
This review adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.
No patient or public contribution.
Trial Registration: CRD42023413817