To improve healthcare provider knowledge of Tanzanian newborn care guidelines, we developed adaptive Essential and Sick Newborn Care (aESNC), an adaptive e-learning environment. The objectives of this study were to (1) assess implementation success with the use of an in-person support and nudging strategy and (2) describe baseline provider knowledge and metacognition.
This was a 6-month observational study at one zonal hospital and three health centres in Mwanza, Tanzania. To assess implementation success, we used the Reach, Efficacy, Adoption, Implementation and Maintenance framework; to describe baseline provider knowledge and metacognition, we used Howell’s conscious-competence model. Additionally, we explored provider characteristics associated with initial learning completion or persistent activity.
aESNC reached 85% (195/231) of providers: 75 medical, 53 nursing and 21 clinical officers; 110 (56%) were at the zonal hospital and 85 (44%) at health centres. Median clinical experience was 4 years (IQR 1–9) and 45 (23%) had previous in-service training in both essential newborn care and sick newborn care. Efficacy was 42% (SD ±17%). Providers averaged 78% (SD ±31%) completion of initial learning and 7% (SD ±11%) of refresher assignments. 130 (67%) providers had ≥1 episode of inactivity >30 days; no episodes were due to lack of internet access. Baseline conscious-competence was 53% (IQR 38%–63%), unconscious-incompetence 32% (IQR 23%–42%), conscious-incompetence 7% (IQR 2%–15%) and unconscious-competence 2% (IQR 0%–3%). Higher baseline conscious-competence (OR 31.6 (95% CI 5.8 to 183.5)) and being a nursing officer (aOR 5.6 (95% CI 1.8 to 18.1)), compared with a medical officer, were associated with initial learning completion or persistent activity.
aESNC reach was high in a population of frontline providers across diverse levels of care in Tanzania. Use of in-person support and nudging increased reach, initial learning and refresher assignment completion, although refresher assignment completion remained low. Providers were often unaware of knowledge gaps, and lower baseline knowledge may decrease initial learning completion or activity. Further study to identify barriers to adaptive e-learning normalisation is needed.
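As an aside for readers less familiar with the association statistics reported above, odds ratios with 95% CIs of this kind are conventionally derived from a 2×2 table or from logistic regression. A minimal sketch of the 2×2-table calculation with a Wald confidence interval, using hypothetical counts rather than the study's data:

```python
import math

# Hypothetical 2x2 table (not study data):
# rows = exposed / unexposed, columns = outcome present / absent.
a, b = 40, 10   # exposed: completed learning vs did not
c, d = 20, 30   # unexposed: completed learning vs did not

# Odds ratio: cross-product of the table.
or_ = (a * d) / (b * c)

# Wald 95% CI on the log-odds scale, then exponentiated back.
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)

print(f"OR {or_:.1f} (95% CI {lo:.1f} to {hi:.1f})")  # prints "OR 6.0 (95% CI 2.5 to 14.7)"
```

An adjusted odds ratio (aOR), as reported for nursing officers, would instead come from a multivariable logistic regression that conditions on the other covariates.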
Among people experiencing severe and multiple disadvantage (SMD), poor oral health is common and linked to smoking, substance use and high sugar intake. Studies have explored interventions addressing oral health and related behaviours; however, factors related to the implementation of these interventions remain unclear. This mixed-methods systematic review aimed to synthesise evidence on the implementation and sustainability of interventions to improve oral health and related health behaviours among adults experiencing SMD.
Bibliographic databases (MEDLINE, EMBASE, PsycINFO, CINAHL, EBSCO, Scopus) and grey literature were searched from inception to February 2023. Studies meeting the inclusion criteria were screened and extracted independently by two researchers. Quality appraisal was undertaken, and results were synthesised using narrative and thematic analyses.
Seventeen papers were included (published between 1995 and 2022). Studies were mostly of moderate quality and included views from SMD groups and service providers. From the qualitative synthesis, most findings were related to aspects such as trust, resources and motivation levels of SMD groups and service providers. None of the studies reported on diet and none included repeated offending (one of the aspects of SMD). From the quantitative synthesis, no difference was observed in programme attendance between the interventions and usual care, although there was some indication of sustained improvements in participation in the intervention group.
This review provides some evidence that trust, adequate resources and motivation levels are potentially important in implementing interventions to improve oral health and substance use among SMD groups. Further high-quality research is needed, particularly research focusing on diet in this population.
CRD42020202416.
To explore and describe senior nursing students’ perspectives on clinical practice during COVID-19 and provide the most up-to-date information on the quality of clinical experience for nursing students in relation to nursing practice, nursing education, and nursing research.
A qualitative, explorative, descriptive design was employed to explore and describe nursing students’ perceptions of clinical training during the COVID-19 pandemic.
The study took place in a local university located in the Northwest province, South Africa.
The population consisted of 16 senior nursing students (14 women and 2 men) who had been exposed to clinical practice during the COVID-19 pandemic. The study included full-time, registered undergraduate nursing students who enrolled in 2019. All nursing students who had not engaged in clinical practice before or during COVID-19 were excluded.
There were no direct interventions in this study; however, recommendations were made for each of the themes that emerged.
The aim of the study was to elicit nursing students’ perspectives on clinical training during a global pandemic through interviews and focus group discussions, and such feedback was indeed obtained from the participants.
Four major themes emerged: (1) a lack of preceptors to facilitate clinical teaching; (2) not being allowed to work in COVID-19 wards; (3) difficulties with online classes and tests; and (4) poor communication.
The COVID-19 pandemic influenced how students viewed and experienced clinical training, which in turn had an impact on their learning experiences. These effects also had some impact on their experiences and decisions to continue working as professional nurses.
The QCovid 2 and 3 algorithms are risk prediction tools developed during the second wave of the COVID-19 pandemic that can be used to predict the risk of COVID-19 hospitalisation and mortality, taking vaccination status into account. In this study, we assess their performance in Scotland.
We used the Early Pandemic Evaluation and Enhanced Surveillance of COVID-19 national data platform consisting of individual-level data for the population of Scotland (5.4 million residents). Primary care data were linked to reverse-transcription PCR virology testing, hospitalisation and mortality data. We assessed the discrimination and calibration of the QCovid 2 and 3 algorithms in predicting COVID-19 hospitalisations and deaths between 8 December 2020 and 15 June 2021.
Our validation dataset comprised 465 058 individuals, aged 19–100. We found the following performance metrics (95% CIs) for QCovid 2 and 3: Harrell’s C 0.84 (0.82 to 0.86) for hospitalisation, and 0.92 (0.90 to 0.94) for death, observed-expected ratio of 0.24 for hospitalisation and 0.26 for death (ie, both the number of hospitalisations and the number of deaths were overestimated), and a Brier score of 0.0009 (0.00084 to 0.00096) for hospitalisation and 0.00036 (0.00032 to 0.0004) for death.
We found good discrimination of the QCovid 2 and 3 algorithms in Scotland, although performance was worse in higher age groups. Both the number of hospitalisations and the number of deaths were overestimated.
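For readers less familiar with the calibration metrics reported above, the observed-expected ratio and the Brier score can each be computed directly from predicted risks and binary outcomes. A minimal sketch with hypothetical values (not the validation dataset):

```python
# Hypothetical predicted risks and observed outcomes (1 = event occurred).
predicted = [0.5] * 8
observed = [1, 0, 0, 0, 0, 0, 0, 0]

n = len(predicted)

# Observed-expected ratio: values well below 1 (eg, 0.24 for hospitalisation
# in the abstract) mean the model predicts more events than actually occur,
# ie, risk is overestimated.
oe_ratio = sum(observed) / sum(predicted)

# Brier score: mean squared difference between predicted risk and outcome.
# Lower is better; rare outcomes push the score close to zero, which is why
# the abstract's values are so small.
brier = sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n

print(oe_ratio, brier)  # prints "0.25 0.25"
```

Discrimination (Harrell's C) is a separate, rank-based measure: the probability that, of two randomly chosen individuals, the one with the event was assigned the higher risk.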
The rapid spread of the SARS-CoV-2 Omicron variant has raised concerns regarding waning vaccine-induced immunity and durability. We evaluated the protection of third-dose and fourth-dose mRNA vaccines against the SARS-CoV-2 Omicron variant and its sublineages.
Systematic review and meta-analysis.
Electronic databases and other resources (PubMed, Embase, CENTRAL, MEDLINE, CINAHL PLUS, APA PsycINFO, Web of Science, Scopus, ScienceDirect, MedRxiv and bioRxiv) were searched until December 2022.
We included studies that assessed the effectiveness of mRNA vaccine booster doses against SARS-CoV-2 infection and severe COVID-19 outcomes caused by Omicron subvariants.
Estimates of vaccine effectiveness (VE) at different time points after the third-dose and fourth-dose vaccination were extracted. Random-effects meta-analysis was used to compare VE of the third dose versus the primary series, no vaccination and the fourth dose at different time points. The certainty of the evidence was assessed by Grading of Recommendations, Assessments, Development and Evaluation approach.
This review included 50 studies. The third-dose VE, compared with the primary series, against SARS-CoV-2 infection was 48.86% (95% CI 44.90% to 52.82%, low certainty) at ≥14 days, and gradually decreased to 38.01% (95% CI 13.90% to 62.13%, very low certainty) at ≥90 days after the third-dose vaccination. The fourth-dose VE peaked at 14–30 days (56.70% (95% CI 50.36% to 63.04%), moderate certainty), then quickly declined at 61–90 days (22% (95% CI 6.40% to 37.60%), low certainty). Compared with no vaccination, the third-dose VE was 75.84% (95% CI 40.56% to 111.12%, low certainty) against BA.1 infection, and 70.41% (95% CI 49.94% to 90.88%, low certainty) against BA.2 infection at ≥7 days after the third-dose vaccination. The third-dose VE against hospitalisation remained stable over time and maintained 79.30% (95% CI 58.65% to 99.94%, moderate certainty) at 91–120 days. The fourth-dose VE up to 60 days was 67.54% (95% CI 59.76% to 75.33%, moderate certainty) for hospitalisation and 77.88% (95% CI 72.55% to 83.21%, moderate certainty) for death.
The boosters provided substantial protection against severe COVID-19 outcomes for at least 6 months, although the duration of protection remains uncertain, suggesting the need for a booster dose within 6 months of the third-dose or fourth-dose vaccination. However, the certainty of evidence in our VE estimates varied from very low to moderate, indicating significant heterogeneity among studies that should be considered when interpreting the findings for public health policies.
CRD42023376698.
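The pooled VE estimates in the review above come from random-effects meta-analysis. A minimal sketch of one common estimator, DerSimonian-Laird, applied to hypothetical study-level VE values (not the review's data); VE is converted to a log relative risk before pooling, since VE = 1 − RR:

```python
import math

# Hypothetical study-level estimates: (VE, lower 95% CI, upper 95% CI) as fractions.
studies = [
    (0.50, 0.40, 0.58),
    (0.45, 0.30, 0.57),
    (0.55, 0.48, 0.61),
]

def to_log_rr(ve, lo, hi):
    # Convert a VE interval to a log relative-risk point estimate and its SE.
    # The VE upper bound corresponds to the RR lower bound, and vice versa.
    log_rr = math.log(1 - ve)
    se = (math.log(1 - lo) - math.log(1 - hi)) / (2 * 1.96)
    return log_rr, se

ys, ses = zip(*(to_log_rr(*s) for s in studies))
w = [1 / se**2 for se in ses]                            # fixed-effect weights
y_fe = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, ys))  # Cochran's Q
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(ys) - 1)) / c)                 # between-study variance
w_re = [1 / (se**2 + tau2) for se in ses]                # random-effects weights
y_re = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)

pooled_ve = 1 - math.exp(y_re)
print(f"pooled VE = {pooled_ve:.1%}")
```

The between-study variance tau² is what distinguishes the random-effects model from a fixed-effect one: when studies disagree more than sampling error explains, tau² grows and the weights flatten toward equality.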
by Mamo Kassa, Farai Madzimbamuto, Gaone Kediegite, Eugene Tuyishime
Introduction: Little is known about regional anesthesia practice in low-resource settings (LRS). The aim of this study was to describe the regional anesthesia capacity, the characteristics of regional anesthesia practice, and the challenges and solutions of practicing safe regional anesthesia in public hospitals in Botswana.
Methods: This was a cross-sectional survey of anesthesia providers working in public hospitals in Botswana. A purposive sampling method was used to achieve representation of different hospital levels across Botswana. Paper-based questionnaires were sent to anesthesia providers at the selected hospitals. Descriptive statistics were used for analysis.
Results: Questionnaires were distributed to 47 anesthesia providers from the selected hospitals; 38 (80.9%) were returned. Most participants were nurse anesthetists and medical officers (57.8%). All hospitals performed spinal anesthesia; other regional techniques were performed by a small number of participants at one referral hospital. Most hospitals had adequate regional anesthesia drugs and sedation medications; however, most hospitals (except one referral hospital) lacked an ultrasound machine and regional anesthesia kits. The most commonly reported challenges were lack of knowledge and skills, lack of equipment and supplies, and lack of hospital engagement and support. Proposed solutions included regional anesthesia training and engaging hospital management to obtain resources.
Conclusions: The results of this study suggest that spinal anesthesia is the most common regional anesthesia technique performed by anesthesia providers working in public hospitals in Botswana, followed by a small number of upper limb blocks. However, most public hospitals lack sufficient training capacity, equipment and supplies for regional anesthesia. Greater engagement of hospital management, investment in regional anesthesia resources and further training are needed to improve regional anesthesia capacity and provide safe surgery and anesthesia in Botswana.
Training programmes for obstetrics and gynaecology (O&G) and general surgery (GS) vary significantly, but both require proficiency in laparoscopic skills. We sought to compare performance in simulated laparoscopic tasks between the two specialties.
Prospective, observational study.
Health Education England North-West, UK.
47 surgical trainees (24 O&G and 23 GS) were subdivided into four groups: 11 junior O&G, 13 senior O&G, 11 junior GS and 12 senior GS trainees.
Trainees were tested on four simulated laparoscopic tasks: laparoscopic camera navigation (LCN), hand–eye coordination (HEC), bimanual coordination (BMC) and suturing with intracorporeal knot tying (suturing).
O&G trainees completed LCN (p
GS trainees performed better than O&G trainees in core laparoscopic skills, and the structure of O&G training may require modification.
ClinicalTrials.gov Registry (NCT05116332).
Digital health is now routinely being applied in clinical care, and with a variety of clinician-facing systems available, healthcare organisations are increasingly required to make decisions about technology implementation and evaluation. However, few studies have examined how digital health research is prioritised, particularly research focused on clinician-facing decision support systems. This study aimed to identify criteria for prioritising digital health research, examine how these differ from criteria for prioritising traditional health research and determine priority decision support use cases for a collaborative implementation research programme.
Drawing on an interpretive listening model for priority setting and a stakeholder-driven approach, our prioritisation process involved stakeholder identification, eliciting decision support use case priorities from stakeholders, generating initial use case priorities and finalising preferred use cases based on consultations. In this qualitative study, online focus group session(s) were held with stakeholders, audiorecorded, transcribed and analysed thematically.
Fifteen participants attended the online priority setting sessions. Criteria for prioritising digital health research fell into three themes, namely: public health benefit, health system-level factors and research process and feasibility. We identified criteria unique to digital health research as the availability of suitable governance frameworks, a candidate technology’s alignment with other technologies in use, and the possibility of data-driven insights from health technology data. The final selected use cases were remote monitoring of patients with pulmonary conditions, sepsis detection and automated breast screening.
The criteria for determining digital health research priority areas are more nuanced than those for traditional, health condition-focused research and can be viewed neither solely through a clinical lens nor solely through a technological lens. As digital health research relies heavily on health technology implementation, the digital health prioritisation criteria comprised enablers of successful technology implementation. Our prioritisation process could be applied to other settings and collaborative projects where research institutions partner with healthcare delivery organisations.