The primary objective was to study the cumulative incidence and incidence density of category II-IV pressure ulcers (PUs; including suspected deep tissue injury and unstageable PUs) over a 30-day observation period associated with the use of the CuroCell S.A.M. PRO powered reactive air support surface in nursing home residents at risk of PU development. Secondary objectives were to study (a) the cumulative incidence and incidence density of category I PUs and (b) user (caregiver and resident) experiences and perceptions of comfort associated with the use of the support surface under study. A multicentre cohort study was set up in 37 care units of 12 Belgian nursing homes. The sample consisted of 191 residents at risk of PU development (Braden score ≤ 17). The cumulative PU incidence was 4.7% (n = 9). The PU incidence density was 1.7/1000 observation days (9 PUs/5370 days). The analysis of experiences and perceptions of comfort revealed that the CuroCell S.A.M. PRO powered reactive air support surface was comfortable for daily use. The mode of action and the quietness of the pump had a positive impact on sleep quality. Patient comfort and sleep quality are essential criteria in the selection of a support surface.
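The two incidence measures reported above follow directly from their definitions; a minimal sketch reproducing the reported arithmetic (9 PUs among 191 residents over 5370 observation days):

```python
# Cumulative incidence and incidence density as defined in the study above.
def cumulative_incidence(new_cases: int, residents_at_risk: int) -> float:
    """New PU cases as a percentage of residents at risk."""
    return new_cases / residents_at_risk * 100

def incidence_density(new_cases: int, observation_days: int) -> float:
    """New PU cases per 1000 observation days."""
    return new_cases / observation_days * 1000

print(f"{cumulative_incidence(9, 191):.1f}%")         # 4.7%
print(f"{incidence_density(9, 5370):.1f}/1000 days")  # 1.7/1000 days
```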
The aim of this study was to translate into Mexican Spanish, cross-culturally adapt, and validate the wound-specific quality of life (QoL) instrument Cardiff wound impact schedule (CWIS) for Mexican patients. The instrument went through the full linguistic translation process based on the guidelines of Beaton et al (Beaton DE, Bombardier C, Guillemin F, Ferraz MB. Guidelines for the process of cross-cultural adaptation of self-report measures. Spine (Phila Pa 1976). 2000;25:3186-3191). We included a total of 500 patients with chronic leg ulcers. The expert committee evaluated face validity and agreed unanimously that the instrument was adequate to assess the QoL of these patients, covering all relevant areas presented by them. The content validity index obtained was 0.95. Construct validity demonstrated moderately significant correlations between related subscales of the CWIS and SF-36 (P = .010 to P < .001). The instrument was able to discriminate between healed and unhealed ulcers. The instrument obtained an overall Cronbach's alpha of .952, corresponding to excellent internal consistency (alpha range for domains: .771-.903). The CWIS can be appropriately used to assess the health-related QoL of Mexican patients with chronic leg ulcers.
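Cronbach's alpha, reported above as the measure of internal consistency, can be computed from respondents' item scores. A minimal sketch (the toy data are illustrative, not the study's):

```python
def cronbach_alpha(items):
    """items: one list of scores per questionnaire item, aligned across
    respondents. Standard formula:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(sample_var(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum_item_vars / sample_var(totals))

# Three items answered identically by three respondents -> alpha ≈ 1.0
print(cronbach_alpha([[1, 2, 3], [1, 2, 3], [1, 2, 3]]))
```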
The objective of this study was to investigate the mechanism whereby the innate immune molecule surfactant protein D (SP‐D) attenuates sepsis‐induced acute kidney injury (AKI) through modulating apoptosis and nuclear factor kappa‐B (NFκB)‐mediated inflammation. A mouse sepsis model was established by cecal ligation and puncture in SP‐D knockout (KO) mice and wild‐type (WT) mice; a sham‐operated group was included as the control. Samples were collected 6 and 24 hours postoperatively. Plasma levels of tumour necrosis factor alpha (TNF‐α) and monocyte chemoattractant protein‐1 (MCP‐1) were determined by enzyme‐linked immunosorbent assay (ELISA). Apoptosis was measured by double staining with Annexin V/propidium iodide and flow cytometry. The levels of NFκB in renal tissues were measured by ELISA and Western blotting. Apoptosis was also detected by TUNEL assays. There were no significant differences in plasma TNF‐α levels between the WT sham group and the KO sham group at 6 and 24 hours postoperatively (P > .05), but the levels of TNF‐α in the WT sepsis and KO sepsis groups were significantly higher than those in controls (P < .05). The levels of TNF‐α in the KO sepsis group were significantly higher than those in the WT sepsis group (P < .05). TNF‐α levels in the WT sepsis group and the KO sepsis group at 24 hours postoperatively were significantly higher than those at 6 hours postoperatively (P < .05). The levels of MCP‐1 in the WT sepsis group and the KO sepsis group at 6 and 24 hours postoperatively were significantly higher than those in the control group (P < .05), and MCP‐1 levels in the KO sepsis group were significantly higher than those in the WT sepsis group (P < .05). MCP‐1 levels in the WT sepsis group and the KO sepsis group at 24 hours postoperatively were significantly higher than those at 6 hours postoperatively (P < .05). The expression of SP‐D in WT kidneys was significantly lower at 6 and 24 hours postoperatively (P < .05).
The number of TUNEL‐positive cells in the kidneys of septic SP‐D KO mice was significantly higher than in septic WT mice (P < .05). The levels of NFκB in septic mice were significantly increased at 6 and 24 hours after induction of sepsis compared with the sham‐operated group, and were higher in septic SP‐D KO mice than in septic WT mice (P < .05). The innate immune molecule SP‐D significantly decreased plasma levels of inflammatory cytokines in mice and attenuated sepsis‐induced AKI by inhibiting NFκB activity and apoptosis.
Fistula formation in head and neck wounds is considered one of the most challenging complications that a head and neck reconstructive surgeon may encounter. The current mainstay of treatment is aggressive surgical debridement followed by vascularised soft tissue coverage. Negative pressure wound therapy (NPWT) has been successfully used for the closure of complicated wounds for decades. This study analysed the outcomes and complications of NPWT in the management of head and neck wounds with fistulas. A systematic search of studies published between January 1966 and September 2019 was conducted using the PubMed, MEDLINE, EMBASE, and SCOPUS databases with the following keywords: “negative pressure wound therapy,” “head and neck,” and “fistula.” We included human studies with an abstract and full text available. The analysed endpoints were the rate of fistula closure, follow‐up duration, and complications, if reported. Nine retrospective case series (Level IV evidence) that collectively included 122 head and neck wounds with orocutaneous fistulas, pharyngocutaneous fistulas, and salivary contamination were examined. The number of patients included in each study ranged from 5 to 64. The mode of NPWT varied among the included studies, with most adopting a continuous pressure of −125 mm Hg. Mean durations of NPWT ranged from 3.7 to 23 days, and the reported fistula closure rate ranged from 78% to 100%. To achieve complete wound healing, six studies used additional procedures after stopping NPWT, including conventional wound dressings and vascularised tissue transfer. Information regarding follow‐up was provided in only three of the nine studies, where patients were followed for 5, 10, and 18 months. No serious adverse events were reported. NPWT for head and neck wounds with fistulas may be considered a safe treatment method that yields beneficial outcomes with a low risk of complications.
The current data originate mainly from studies with low levels of evidence and considerable heterogeneity. Therefore, definitive recommendations cannot be offered on the basis of these data. Additional high‐quality trials are warranted to corroborate the findings of this systematic review.
The aim of this study was to examine the role of Th1/Th2 cell‐associated chemokines in the formation of hypertrophic scars in rabbit ears. Twenty‐six New Zealand white rabbits were used to establish a hypertrophic scar model on the rabbit ear and a normal scar model on the rabbit's back. Two rabbits were sacrificed on days 0, 21, 28, 35, 42, 49, 56, and 63 after operation. The specimens were stained with haematoxylin‐eosin (HE), and the scar elevation index (SEI) was used to quantify scar hypertrophy; real‐time polymerase chain reaction (PCR) was used to detect the expression of 10 Th1/Th2 cell‐associated chemokines during both types of scar formation. Real‐time PCR results showed that two chemokines (CXCL10, CXCL12) were highly expressed during the formation of normal scars, with almost no expression during the formation of hypertrophic scars (P < 0.05). Seven chemokines (CCL2, CCL3, CCL4, CCL5, CCL7, CCL13, CX3CL1) were almost non‐expressed in the formation of normal scars but were expressed for a long time in the formation of hypertrophic scars; four of these (CCL2, CCL4, CCL5, and CX3CL1) maintained a long‐term high expression level during the formation of hypertrophic scars (P < 0.01). Three further chemokines (CCL14, CCL19, CCL21) were almost undetectable in normal scarring but showed transient low‐level expression (P < 0.05) only during the peak proliferative phase in hypertrophic scarring. Th1/Th2 cell‐associated chemokines thus differ in type, expression level, and duration of expression between normal and hypertrophic scar formation in rabbit ears.
Diabetic neuropathy is defined as the presence of symptoms and signs of peripheral nerve dysfunction in people with diabetes. The aim of this study was to develop a predictive logistic model to identify the risk of losing protective sensitivity in the foot. This descriptive cross‐sectional study included 111 patients diagnosed with diabetes mellitus. Participants completed a questionnaire designed to evaluate neuropathic symptoms, and multivariate analysis was subsequently performed to identify an optimal predictive model. Explanatory capacity was evaluated by calculating Nagelkerke's R² coefficient. Predictive capacity was evaluated by calculating sensitivity, specificity, and the area under the receiver operating characteristic (ROC) curve. Protective sensitivity loss was detected in 19.1% of participants. Variables significantly associated in the multivariate analysis were educational level (OR: 31.4, 95% CI: 2.5‐383.3, P = .007) and two items from the questionnaire: one related to bleeding and wet socks (OR: 28.3, 95% CI: 3.7‐215.9, P = .001) and the other related to electrical sensations (OR: 52.9, 95% CI: 4.3‐643.9, P = .002). The predictive model included the variables age, sex, duration of diabetes, and educational level, and had a sensitivity of 81.3% and a specificity of 95.5%. This model has a high predictive capacity to identify patients at risk of developing sensory neuropathy.
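The sensitivity and specificity reported for the model above come from a simple confusion-matrix tally on dichotomised predictions; a sketch (the 0.5 threshold and toy data are illustrative, not the study's):

```python
def sensitivity_specificity(y_true, y_prob, threshold=0.5):
    """Dichotomise predicted probabilities and tally the confusion matrix.
    Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    y_pred = [int(p >= threshold) for p in y_prob]
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity([1, 1, 0, 0], [0.9, 0.4, 0.2, 0.6])
print(sens, spec)  # 0.5 0.5
```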
For optimal wound bed preparation, wound debridement is essential to eliminate bacterial biofilms. However, it is challenging for clinicians to determine whether a biofilm has been completely removed. A newly developed biofilm detection method based on wound blotting technology may be useful. Thus, we aimed to investigate the effect of biofilm elimination on wound area decrease in pressure ulcers, as confirmed using the wound blotting method. In this retrospective observational study, we enrolled patients with pressure ulcers who underwent sharp debridement with pre‐ and post‐debridement wound blotting. Biofilm was detected on the nitrocellulose membrane using ruthenium red or alcian blue staining. Patients were included if the test was positive for biofilm before wound debridement. The percent decrease in wound area after 1 week was calculated as the outcome measure. We classified the wounds into a biofilm‐eliminated group and a biofilm‐remaining group based on the post‐debridement wound blotting result. Sixteen wound blotting samples from nine pressure ulcers were collected. The percent decrease in wound area was significantly higher in the biofilm‐eliminated group (median: 14.4%, interquartile range: 4.6% to 20.1%) than in the biofilm‐remaining group (median: −14.5%, interquartile range: −25.3% to 9.6%; P = .040). The presence of remaining biofilm was an independent predictor of a reduced percent decrease in wound area (coefficient = −22.84, P = .040). Biofilm‐based wound care guided by wound blotting is a promising measure to help clinicians eliminate bacterial bioburden more effectively for wound area reduction.
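The outcome measure above, percent decrease in wound area after one week, is a simple relative change; a sketch (the areas are illustrative):

```python
def percent_decrease(baseline_area: float, followup_area: float) -> float:
    """Relative change in wound area, in percent. Positive values indicate
    shrinkage; negative values (as in the biofilm-remaining group's median
    of -14.5%) indicate enlargement."""
    return (baseline_area - followup_area) / baseline_area * 100

print(percent_decrease(8.0, 6.0))   # 25.0  (wound shrank)
print(percent_decrease(8.0, 10.0))  # -25.0 (wound enlarged)
```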
Exact data regarding the clinical role of maggot debridement therapy (MDT) for wound care in a specific country are not available. Thus, we analysed the use of MDT in hospitalised patients in Germany. Detailed lists of all hospitalised cases treated with MDT in Germany for the years 2011 to 2016 were provided by the Federal Statistical Office, as well as lists of the 15 most frequent principal and additional diagnoses, respectively, and the 10 most frequent procedures documented with MDT in 2016. Within the 6‐year period of the study, the number of cases treated with MDT increased by 11%, from 4513 in 2011 to 5017 in 2016. The lower leg and foot were the most frequent anatomic sites of treatment, accounting for 83.9% of all cases. In addition, MDT procedures for temporary soft tissue coverage including negative pressure wound therapy were often performed: for treatment of large areas in 36.7% and small areas in 6.2% of cases. Infection with Escherichia coli was documented in 41.3% of all cases treated with MDT, and infection with Bacteroides fragilis in 35.9%. Our analysis shows a limited use of MDT, with only a small increase over the last 6 years in German hospitals. MDT is predominantly used to treat foot or leg ulcers.
Early, reliable, and valid screening, diagnosis, and treatment improve peripheral arterial disease outcomes, yet screening and diagnostic practices vary across settings and specialties. A scoping literature review described the reliability and validity of peripheral ischaemia diagnostic and screening tools. Clinical studies in the PubMed database from January 1, 1970, to August 13, 2018, were reviewed, summarising the ranges of reliability and validity of peripheral ischaemia diagnostic and screening tools for patients with non‐neuropathic lower leg ischaemia. Peripheral ischaemia screening and diagnostic practices varied in parameters measured, such as timing, frequency, setting, ordering clinicians, degree of invasiveness, costs, definitions, and cut‐off points informing clinical and referral decisions. A traditional ankle/brachial systolic blood pressure index <0.9 was a reliable, valid lower leg ischaemia screening test to trigger specialist referral for detailed diagnosis. For patients with advanced peripheral ischaemia or calcified arteries, the toe‐brachial index, claudication assessment, or invasive angiographic imaging techniques, which can have complications, were reliable, valid screening and diagnostic tools to inform management decisions. Ankle/brachial index testing is sufficiently reliable and valid for use during routine examinations to improve the timing and consistency of peripheral ischaemia screening, triggering prompt specialist referral for more reliable, accurate Doppler or other diagnostic testing to inform treatment decisions.
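The traditional screening rule described above — specialist referral when the ankle/brachial index (ABI) falls below 0.9 — can be sketched as follows (pressures in mm Hg; the values are illustrative):

```python
def ankle_brachial_index(ankle_systolic: float, brachial_systolic: float) -> float:
    """Ankle systolic pressure divided by brachial systolic pressure."""
    return ankle_systolic / brachial_systolic

def refer_to_specialist(abi: float, cutoff: float = 0.9) -> bool:
    # ABI < 0.9 is the traditional cut-off for lower leg ischaemia screening.
    # As the review above notes, calcified arteries can yield falsely high
    # ABIs, where the toe-brachial index or imaging is preferred instead.
    return abi < cutoff

print(refer_to_specialist(ankle_brachial_index(108, 135)))  # True (ABI 0.8)
```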
by Akiyoshi Matsugi, Naoki Yoshida, Satoru Nishishita, Yohei Okada, Nobuhiko Mori, Kosuke Oku, Shinya Douchi, Koichi Hosomi, Youichi Saitoh

Objective: To investigate whether gaze stabilization exercises (GSEs) improve eye and head movements and whether low-frequency cerebellar repetitive transcranial magnetic stimulation (rTMS) inhibits GSE trainability.

Methods: Twenty-five healthy adults (real rTMS, n = 12; sham rTMS, n = 13) were recruited. Real or sham rTMS was performed for 15 min (1 Hz, 900 stimulations). The center of the butterfly coil was set 1 cm below the inion in the real rTMS condition. Following stimulation, 10 trials of 1 min of a GSE were conducted at 1 min intervals. In the GSE, the subjects were instructed to stand upright and horizontally rotate their heads in time with a beeping sound at 2 Hz while gazing at a point ahead of them. Electrooculograms were used to estimate the horizontal gaze direction of the right eye, and gyroscopic measurements were performed to estimate the horizontal head angular velocity during the GSE trials. The percentage change from the first trial in the motion range of the eye and head was calculated for each measurement. The percent change of the eye/head range ratio was calculated to assess synchronous changes of the eye and head movements as the exercise progressed.

Results: Bayesian two-way analysis of variance showed that cerebellar rTMS affected the eye motion range and the eye/head range ratio. A post hoc comparison (Bayesian t-test) showed evidence that the eye motion range and eye/head range ratio were reduced in the fifth, sixth, and seventh trials compared with the first trial in the sham stimulation condition.

Conclusions: GSEs can modulate eye movements with respect to head movements, and the cerebellum may be associated with eye–head coordination trainability for dynamic gazing during head movements.
by Jesse Burk-Rafel, Ricardo W. Pulido, Yousef Elfanagely, Joseph C. Kolars

Introduction: The United States Medical Licensing Examination (USMLE) Step 1 and Step 2 Clinical Knowledge (CK) are important for trainee medical knowledge assessment and licensure, medical school program assessment, and residency program applicant screening. Little is known about how USMLE performance varies between institutions. This observational study attempts to identify institutions with above-predicted USMLE performance, which may indicate educational programs successful at promoting students’ medical knowledge.

Methods: Self-reported institution-level data were tabulated from publicly available US News and World Report and Association of American Medical Colleges publications for 131 US allopathic medical schools from 2012–2014. Bivariate and multiple linear regression were performed. The primary outcome was an institutional mean USMLE Step 1 or Step 2 CK score outside a 95% prediction interval (≥2 standard deviations above or below predicted) based on multiple regression accounting for students’ prior academic performance.

Results: Eighty-nine US medical schools (54 public, 35 private) reported complete USMLE scores over the three-year study period, representing over 39,000 examinees. Institutional mean grade point average (GPA) and Medical College Admission Test (MCAT) score achieved an adjusted R² of 72% for Step 1 (standardized βMCAT 0.7, βGPA 0.2) and 41% for Step 2 CK (standardized βMCAT 0.5, βGPA 0.3) in multiple regression. Using this regression model, 5 institutions were identified with above-predicted institutional USMLE performance, while 3 institutions had below-predicted performance.

Conclusions: This exploratory study identified several US allopathic medical schools with significantly above- or below-predicted USMLE performance. Although limited by self-reported data, the findings raise questions about inter-institutional USMLE performance parity and, thus, educational parity. Additional work is needed to determine the etiology and robustness of the observed performance differences.
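The flagging rule in the study above — an institution is notable when its mean score lands outside a 95% prediction interval (roughly two residual standard deviations) from a regression on prior academic performance — can be sketched with a one-predictor least-squares fit. The data below are illustrative, not the study's:

```python
def flag_outliers(x, y, k=2.0):
    """Fit y ~ x by least squares and flag observations whose residual
    magnitude is at least k residual standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    residuals = [b - (intercept + slope * a) for a, b in zip(x, y)]
    resid_sd = (sum(r * r for r in residuals) / (n - 2)) ** 0.5
    return [abs(r) >= k * resid_sd for r in residuals]

# Nine points on a line plus one clear outlier: only the last is flagged.
flags = flag_outliers(list(range(10)), [0, 1, 2, 3, 4, 5, 6, 7, 8, 19])
print(flags.index(True), sum(flags))  # 9 1
```

The study's actual model has two predictors (MCAT and GPA) and a proper prediction interval rather than a flat two-sigma band, but the flagging logic is the same.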
by Linh V. Nguyen, Quang V. Ta, Thao B. Dang, Phu H. Nguyen, Thach Nguyen, Thi Van Huyen Pham, Trang HT. Nguyen, Stephen Baker, Trung Le Tran, Dong Joo Yang, Ki Woo Kim, Khanh V. Doan

Catecholamine excess, reflecting adrenergic overdrive of the sympathetic nervous system (SNS), has been proposed to be linked to hyperleptinemia in obesity and may contribute to the development of metabolic disorders. However, the relationship between catecholamine levels and plasma leptin in obesity has not yet been investigated. Moreover, whether pharmacological blockade of the adrenergic overdrive in obesity by third-generation beta-blocker agents such as carvedilol could help to prevent metabolic disorders is controversial and remains to be determined. Using the high fat diet (HFD)-induced obese mouse model, we found that basal plasma norepinephrine, the principal catecholamine and an index of SNS activity, was persistently elevated and highly correlated with plasma leptin concentration during obesity development. Targeting the adrenergic overdrive from this chronic norepinephrine excess in HFD-induced obesity with carvedilol, a third-generation beta-blocker with vasodilating action, blunted the HFD-induced hepatic glucose overproduction by suppressing the induction of gluconeogenic enzymes, and enhanced the muscular insulin signaling pathway. Furthermore, carvedilol treatment in HFD-induced obese mice decreased the enlargement of white adipose tissue and improved glucose tolerance and insulin sensitivity without affecting body weight and blood glucose levels. Our results suggest that catecholamine excess in obesity might directly link to the hyperleptinemic condition and that therapeutic targeting of chronic adrenergic overdrive in obesity with carvedilol might help to attenuate obesity-related metabolic disorders.
by Nicholas Roman, Patrick C. Carney, Nadine Fiani, Santiago Peralta

Cleft lip (CL), cleft palate (CP) and cleft lip and palate (CLP) are the most common types of orofacial clefts in dogs. Orofacial clefts in dogs are clinically relevant because of the associated morbidity and high newborn mortality rate and are of interest as comparative models of disease. However, the incidence of CL, CP and CLP has not been investigated in purebred dogs, and the financial impact on breeders is unknown. The aims of this study were to document the incidence patterns of CL, CP and CLP in different breeds of dogs, determine whether defect phenotype is associated with skull type, genetic cluster and geographic location, and estimate the financial impact in breeding programs in the United States by means of an anonymous online survey. A total of 228 orofacial clefts were reported among 7,429 puppies whelped in the 12 preceding months. Breeds in the mastiff/terrier genetic cluster and brachycephalic breeds were predisposed to orofacial clefts. Certain breeds in the ancient genetic cluster were at increased odds of orofacial clefts. Male purebred dogs were at increased odds of CPs. Results confirm that brachycephalic breeds are overrepresented among cases of orofacial clefts. Furthermore, geographic region appeared to be a relevant risk factor and orofacial clefts represented a considerable financial loss to breeders. Improved understanding of the epidemiology of orofacial clefts (frequency, causes, predictors and risk factors) may help in identifying ways to minimize their occurrence. Information gained may potentially help veterinarians and researchers to diagnose, treat and prevent orofacial clefts.
by Adam Gilbertson, Barrack Ongili, Frederick S. Odongo, Denise D. Hallfors, Stuart Rennie, Daniel Kwaro, Winnie K. Luseno

Introduction: Voluntary medical male circumcision (VMMC) provides significant reductions in the risk of female-to-male HIV transmission. Since 2007, VMMC has been a key component of the United States President’s Emergency Plan for AIDS Relief’s (PEPFAR) strategy to mitigate the HIV epidemic in countries with high HIV prevalence and low circumcision rates. To ensure intended effects, PEPFAR sets ambitious annual circumcision targets and provides funding to implementation partners to deliver local VMMC services. In Kenya to date, 1.9 million males have been circumcised; in 2017, 60% of circumcisions were among 10-14-year-olds. We conducted a qualitative field study to learn more about VMMC program implementation in Kenya.

Methods and results: The study setting was a region in Kenya with high HIV prevalence and low male circumcision rates. From March 2017 through April 2018, we carried out in-depth interviews with 29 VMMC stakeholders, including “mobilizers”, HIV counselors, clinical providers, schoolteachers, and policy professionals. Additionally, we undertook observation sessions at 14 VMMC clinics while services were provided and observed mobilization activities at 13 community venues, including two schools, four public marketplaces, two fishing villages, and five inland villages. Analysis of interview transcripts and observation field notes revealed multiple unintended consequences linked to the pursuit of targets. Ebbs and flows in the availability of school-age youths, together with the drive to meet targets, may result in increased burdens on clinics, long waits for care, potentially misleading mobilization practices, and deviations from the standard of care.

Conclusion: Our findings indicate shortcomings in the quality of procedures in VMMC programs in a low-resource setting and, more importantly, that the pursuit of ambitious public health targets may lead to compromised service delivery and protocol adherence. There is a need to develop improved or alternative systems to balance the goal of increasing service uptake with the responsible conduct of VMMC.
by Arne Georg Kieback, Christine Espinola-Klein, Claudia Lamina, Susanne Moebus, Daniel Tiller, Roberto Lorbeer, Andreas Schulz, Christa Meisinger, Daniel Medenwald, Raimund Erbel, Alexander Kluttig, Philipp S. Wild, Florian Kronenberg, Knut Kröger, Till Ittermann, Marcus Dörr

Purpose and methods: A meta-analysis using data from seven German population-based cohorts was performed by the German Epidemiological consortium of Peripheral Arterial Disease (GEPArD) to investigate whether one question about claudication is more efficient for PAD screening than established questionnaires. Claudication was defined on the basis of the answer to a single question asking about pain in the leg during normal walking. This simple question was compared with established questionnaires, including the Edinburgh questionnaire. The associations of claudication with continuous ABI values and decreased ABI were analyzed by linear and logistic regression analysis, respectively. The results of the studies were pooled in a random effects meta-analysis, which included data from 27,945 individuals (14,052 women; age range 20–84 years).

Results: Meta-analysis revealed a significant negative association between claudication and ABI, which was stronger in men (β = -0.07; 95% CI -0.10, -0.04) than in women (β = -0.02; 95% CI -0.02, -0.01). Likewise, the presence of claudication symptoms was related to increased odds of a decreased ABI in both men (odds ratio = 5.40; 95% CI 4.20, 6.96) and women (odds ratio = 1.99; 95% CI 1.58, 2.51).

Conclusions: Asking only one question about claudication identified many individuals with a high likelihood of a reduced ABI, with markedly higher sensitivity and only slightly reduced specificity compared with more complex questionnaires. At least in men, this question should be established as a first screening step.
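Inverse-variance pooling underlies the meta-analytic estimates above; a minimal fixed-effect sketch (a random-effects model, as used in the study, would additionally add a between-study variance term such as the DerSimonian-Laird estimate to each weight; the toy inputs are illustrative):

```python
import math

def pool_inverse_variance(estimates, std_errors):
    """Fixed-effect inverse-variance pooling: weight each study estimate
    by 1/SE^2 and return the pooled estimate with a 95% CI."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Two equally precise studies: the pooled estimate is their mean.
est, ci = pool_inverse_variance([1.0, 3.0], [1.0, 1.0])
print(est)  # 2.0
```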
by David M. Markowitz, Jeffrey T. Hancock, Jeremy N. Bailenson, Byron Reeves

This preregistered study examined the psychological and physiological consequences of exercising self-control with the mobile phone. A total of 125 participants were randomly assigned to sit in an unadorned room for six minutes and either (a) use their mobile phone, (b) sit alone with no phone, or (c) sit with their device but resist using it. Consistent with prior work, participants self-reported more concentration difficulty and more mind wandering with no device present compared with using the phone. Resisting the phone led to greater perceived concentration abilities than sitting without the device (not having external stimulation). Failing to replicate prior work, however, participants without external stimulation did not rate the experience as less enjoyable or more boring than those who had something to do. We also observed that skin conductance data were consistent across conditions for the first three minutes of the experiment, after which participants who resisted the phone were less aroused than those who were without the phone. We discuss how the findings contribute to our understanding of exercising self-control with mobile media and how psychological consequences, such as increased mind wandering and focusing challenges, relate to periods of idleness or free thinking.
by Craig D. Soutar, John Stavrinides

The enterobacterial genus Pantoea contains both free-living and host-associating species, with considerable debate as to whether documented reports of human infections by members of this species group are accurate. MALDI-TOF-based identification methods are commonly used in clinical laboratories as a rapid means of identification, but their reliability for identification of Pantoea species is unclear. In this study, we carried out cpn60-based molecular typing of 54 clinical isolates that had been identified as Pantoea using MALDI-TOF and other clinical typing methods. We found that 24% had been misidentified and were actually strains of Citrobacter, Enterobacter, Kosakonia, Klebsiella, Pseudocitrobacter, members of the newly described Erwinia gerundensis, and even several unclassified members of the Enterobacteriaceae. The 40 clinical strains that were confirmed to be Pantoea were identified as Pantoea agglomerans, Pantoea allii, Pantoea dispersa, Pantoea eucalypti, and Pantoea septica, as well as the proposed species group Pantoea latae. Some species groups considered largely environmental or plant-associated, such as P. allii and P. eucalypti, were also among the clinical specimens. Our results indicate that MALDI-TOF-based identification methods may misidentify strains of the Enterobacteriaceae as Pantoea.