by Aishwarya Subramanian, Rachel M. Germain
Animals navigate landscapes based on perceived risks versus rewards, as inferred from features of the landscape. In the wild, how strongly animal movement is directed by landscape features is difficult to ascertain, but widespread disturbances such as wildfires can serve as natural experiments. We tested the hypothesis that wildfires homogenize the risk/reward landscape, causing movement to become less directed, given that fires reduce landscape complexity as habitat structures (e.g., tree cover, dense brush) are burned. We used satellite imagery of a research reserve in Northern California to count and categorize paths made primarily by mule deer (Odocoileus hemionus) in grasslands. Specifically, we compared pre-wildfire (August 2014) and post-wildfire (September 2018) image history layers among locations that were or were not impacted by wildfire (i.e., a Before/After Control/Impact design). Wildfire significantly altered spatial patterns of deer movement: more new paths were gained and more old paths were lost in areas of the reserve that were impacted by wildfire, and movement patterns became less directed in response to fire, suggesting that the risk/reward landscape became more homogeneous, as hypothesized. We also found evidence that wildfire affects deer populations at spatial scales beyond its scale of direct impact, raising the interesting possibility that deer perceive risks and rewards at different spatial scales. In conclusion, our study provides an example of how animals integrate spatial information from the environment to make movement decisions, setting the stage for future work on the broader ecological implications for populations, communities, and ecosystems, an emerging interest in ecology.
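For readers unfamiliar with the Before/After Control/Impact (BACI) logic, the disturbance effect is the period-by-impact interaction, over and above any background change at control sites. The following is a minimal sketch of such a contrast on path counts, assuming Python with pandas and statsmodels; the counts and variable names are hypothetical illustrations, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical per-site deer-path counts; "impacted" marks burned sites.
df = pd.DataFrame({
    "paths":    [12, 14, 11, 13, 12, 5, 10, 4],   # hypothetical counts
    "period":   ["before"] * 4 + ["after"] * 4,
    "impacted": [0, 1, 0, 1] * 2,
})

# In a BACI design, the period-by-impact interaction isolates the fire
# effect from background (control-site) change between image dates.
fit = smf.glm("paths ~ period * impacted", data=df,
              family=sm.families.Poisson()).fit()
print(fit.summary())
```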
To understand community antibiotic practices and their drivers, comprehensively and in contextually sensitive ways, we explored the individual, community and health system-level factors influencing community antibiotic practices in rural West Bengal, India.
Qualitative study using focus group discussions and in-depth interviews.
Two contrasting village clusters in South 24 Parganas district, West Bengal, India. Fieldwork was conducted between November 2019 and January 2020.
98 adult community members (42 men and 56 women) were selected purposively for 8 focus group discussions. In-depth interviews were conducted with 16 community key informants (7 teachers, 4 elected village representatives, 2 doctors and 3 social workers) and 14 community health workers.
Significant themes at the individual level included sociodemographics (age, gender, education), cognitive factors (knowledge and perceptions of modern antibiotics within non-biomedical belief systems), affective influences (emotive interpretations of appropriate medicine consumption) and economic constraints (affordability of antibiotic courses and overall costs of care). Antibiotics were viewed as essential fever remedies, akin to antipyretics, with decisions to halt mid-course influenced by non-biomedical beliefs associating prolonged use with toxicity. Themes at the community and health system levels included the health stewardship roles of village leaders and knowledge brokering by informal providers, pharmacists and public sector accredited social health activists. However, these community resources lacked sufficient knowledge to address people’s doubts and concerns. Qualified doctors were physically and socially inaccessible, creating a barrier to seeking their expertise.
The interplay of sociodemographic, cognitive and affective factors, and economic constraints at the individual level, underscores the complexity of antibiotic usage. Additionally, community leaders and health workers emerge as crucial players, yet their knowledge gaps and lack of empowerment pose challenges in addressing public concerns. This comprehensive analysis highlights the need for targeted interventions that address both individual beliefs and community health dynamics to promote judicious antibiotic use.
by Ausilah Alfraihat, Amer F. Samdani, Sriram Balasubramanian
Anterior Vertebral Body Tethering (AVBT) is a growing alternative to spinal fusion for the treatment of adolescent idiopathic scoliosis (AIS). While AVBT aims to correct spinal deformity through growth modulation, its outcomes have been mixed. To improve surgical outcomes, this study aimed to develop a machine learning-based tool to predict short- and midterm spinal curve correction in AIS patients who underwent AVBT surgery, using the most predictive clinical, radiographic, and surgical parameters. After institutional review board approval and based on inclusion criteria, 91 AIS patients who underwent AVBT surgery at Shriners Hospitals for Children, Philadelphia were selected. For all patients, longitudinal standing (PA or AP, and lateral) and side-bending spinal radiographs were retrospectively obtained at six visits: preoperative, first standing, one year, two years, and five years postoperative, and at the most recent follow-up. Demographic, radiographic, and surgical features associated with curve correction were collected. The sequential backward feature selection method was used to eliminate correlated features and to provide a rank-ordered list of the features most predictive of AVBT correction. A Gradient Boosting Regressor (GBR) model was trained and tested on the selected features to predict the final curve correction in AIS patients. The eleven most predictive features were identified. The GBR model predicted the final Cobb angle with an average error of 6.3 ± 5.6 degrees. The model also provided a prediction interval; 84% of the actual values fell within the 90% prediction interval. A list of the most predictive features for AVBT curve correction was provided. The GBR model, trained on these features, predicted the final curve magnitude with a clinically acceptable margin of error. This model can be used as a clinical tool to plan AVBT surgical parameters and improve outcomes.
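As a rough illustration of the modelling pipeline described above (sequential backward feature selection, a gradient-boosted point estimate, and a quantile-based prediction interval), the sketch below uses scikit-learn on synthetic placeholder data; the feature count, sample sizes, and hyperparameters are assumptions for illustration, not the study's actual configuration.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the clinical/radiographic/surgical feature table.
X, y = make_regression(n_samples=91, n_features=15, noise=8.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Sequential backward selection down to the 11 most predictive features.
sfs = SequentialFeatureSelector(
    GradientBoostingRegressor(random_state=0),
    n_features_to_select=11, direction="backward", cv=5)
sfs.fit(X_tr, y_tr)
X_tr_s, X_te_s = sfs.transform(X_tr), sfs.transform(X_te)

# Point prediction (squared-error loss) plus a 90% prediction interval
# from two quantile-loss models at the 5th and 95th percentiles.
point = GradientBoostingRegressor(random_state=0).fit(X_tr_s, y_tr)
q_lo = GradientBoostingRegressor(loss="quantile", alpha=0.05,
                                 random_state=0).fit(X_tr_s, y_tr)
q_hi = GradientBoostingRegressor(loss="quantile", alpha=0.95,
                                 random_state=0).fit(X_tr_s, y_tr)

pred = point.predict(X_te_s)
print("mean abs error:", np.mean(np.abs(pred - y_te)))
inside = (q_lo.predict(X_te_s) <= y_te) & (y_te <= q_hi.predict(X_te_s))
print("90% interval coverage:", inside.mean())
```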
Implementation of enhanced recovery pathways (ERPs) has resulted in improved patient-centred outcomes and decreased costs. However, there is a lack of high-level evidence for many ERP elements. We have designed a randomised, embedded, multifactorial, adaptive platform perioperative medicine (REMAP Periop) trial to evaluate the effectiveness of several perioperative therapies for patients undergoing complex abdominal surgery as part of an ERP. This trial will begin with two domains: postoperative nausea/vomiting (PONV) prophylaxis and regional/neuraxial analgesia. Patients enrolled in the trial will be randomised to arms within both domains, with the possibility of adding further domains in the future.
In the PONV domain, patients are randomised to optimal versus supraoptimal prophylactic regimens. In the regional/neuraxial domain, patients are randomised to one of five single-injection techniques or combinations of techniques. The primary study endpoint is hospital-free days at 30 days, with additional domain-specific secondary endpoints of PONV incidence and postoperative opioid consumption. The efficacy of an intervention arm within a given domain will be evaluated at regular interim analyses using Bayesian statistical analysis. At the beginning of the trial, participants will have an equal probability of being allocated to any given intervention within a domain (ie, simple 1:1 randomisation), with response-adaptive randomisation guiding changes to allocation ratios after interim analyses, when applicable, based on prespecified statistical triggers. Triggers met at an interim analysis may also result in an intervention being dropped.
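To make the response-adaptive step concrete, here is a minimal sketch of one common Beta-Binomial approach: posterior draws give each arm's probability of being best, and the next allocation ratio is tilted toward the better-performing arms. The two-arm counts, uniform priors, and square-root damping are illustrative assumptions, not the trial's prespecified model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical interim counts of a binary "good outcome" in a two-arm domain.
successes = np.array([30, 42])
failures = np.array([20, 12])

# Uniform Beta(1, 1) priors -> Beta(1 + s, 1 + f) posteriors per arm.
draws = rng.beta(1 + successes, 1 + failures, size=(10_000, 2))

# Posterior probability that each arm is the best in the domain.
p_best = np.bincount(draws.argmax(axis=1), minlength=2) / len(draws)

# Tilt the next allocation ratio toward likely-best arms; square-root
# damping keeps some exploration of the apparently worse arm.
weights = np.sqrt(p_best)
print("P(best):", p_best, "next allocation:", weights / weights.sum())
```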
The core protocol and domain-specific appendices were approved by the University of Pittsburgh Institutional Review Board. A waiver of informed consent was obtained for this trial. Trial results will be announced to the public and healthcare providers once prespecified statistical triggers of interest are reached as described in the core protocol, and the most favourable interventions will then be implemented as a standardised institutional protocol.
Early identification of lung cancer on chest radiographs improves patient outcomes. Artificial intelligence (AI) tools may increase diagnostic accuracy and streamline this pathway. This study evaluated the performance of commercially available AI-based software trained to identify cancerous lung nodules on chest radiographs.
This retrospective study included primary care chest radiographs acquired in a UK centre. The software evaluated each radiograph independently and outputs were compared with two reference standards: (1) the radiologist report and (2) the diagnosis of cancer by multidisciplinary team decision. Failure analysis was performed by interrogating the software marker locations on radiographs.
5722 consecutive chest radiographs were included from 5592 patients (median age 59 years, 53.8% women, 1.6% prevalence of cancer).
Compared with radiologist reports for nodule detection, the software demonstrated sensitivity 54.5% (95% CI 44.2% to 64.4%), specificity 83.2% (82.2% to 84.1%), positive predictive value (PPV) 5.5% (4.6% to 6.6%) and negative predictive value (NPV) 99.0% (98.8% to 99.2%). Compared with cancer diagnosis, the software demonstrated sensitivity 60.9% (50.1% to 70.9%), specificity 83.3% (82.3% to 84.2%), PPV 5.6% (4.8% to 6.6%) and NPV 99.2% (99.0% to 99.4%). Normal or variant anatomy was misidentified as an abnormality in 69.9% of the 943 false positive cases.
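For readers who want to trace how such operating characteristics arise from a 2x2 confusion table, the sketch below back-solves approximate counts from the reported percentages (against the radiologist-report standard) and recomputes the metrics with Wilson score intervals, one standard choice; the paper's exact CI method and raw counts are assumptions here, not published data.

```python
from statsmodels.stats.proportion import proportion_confint

# Approximate counts back-solved from the reported percentages
# (n = 5722 radiographs, 943 false positives); not the published raw data.
tp, fn, fp = 55, 46, 943
tn = 5722 - tp - fn - fp  # 4678

def report(name, k, n):
    lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
    print(f"{name}: {k / n:.1%} (95% CI {lo:.1%} to {hi:.1%})")

report("Sensitivity", tp, tp + fn)  # ~54.5%
report("Specificity", tn, tn + fp)  # ~83.2%
report("PPV", tp, tp + fp)          # ~5.5%
report("NPV", tn, tn + fn)          # ~99.0%
```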
The software demonstrated considerable underperformance in this real-world patient cohort. Failure analysis suggested a lack of generalisability in the training and testing datasets as a potential factor. The low PPV carries the risk of over-investigation and limits the translation of the software to clinical practice. Our findings highlight the importance of training and testing software in representative datasets, with broader implications for the implementation of AI tools in imaging.