Optimising wound monitoring: Can digital tools improve healing outcomes and clinic efficiency?

Abstract

Background

Chronic wounds present significant challenges for patients and nursing care teams worldwide. Digital health tools offer potential for more standardised and efficient nursing care pathways but require further rigorous evaluation.

Objective

This retrospective matched cohort study aimed to compare the impact of a digital tracking application for wound documentation with that of traditional manual nursing assessments.

Methods

Data from 5236 patients with various wound types were analysed. Propensity score matching was used to balance the groups, and bivariate tests, correlation analyses, linear regression, and Hayes' PROCESS macro (Model 15) were used for a mediation-moderation model.
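
The abstract does not provide the analysis code; purely as an illustration of the kind of pipeline described (propensity score matching to balance groups, then regression on the matched sample), a minimal Python sketch is given below. The data layout and column names (digital_tracking, healing_days, and the covariate list) are assumptions for illustration, not details taken from the study; the Hayes' PROCESS (Model 15) step is not shown.

```python
# Illustrative sketch only: propensity-score matching followed by a
# regression of healing duration on treatment in the matched sample.
# Column names (digital_tracking, healing_days, covariates) are assumed.
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

COVARIATES = ["age", "baseline_wound_size", "diabetes", "peripheral_vascular_disease"]

def matched_comparison(df: pd.DataFrame):
    # 1. Estimate propensity scores: P(digital tracking | covariates).
    ps_model = LogisticRegression(max_iter=1000)
    ps_model.fit(df[COVARIATES], df["digital_tracking"])
    df = df.assign(pscore=ps_model.predict_proba(df[COVARIATES])[:, 1])

    treated = df[df["digital_tracking"] == 1]
    control = df[df["digital_tracking"] == 0]

    # 2. 1:1 nearest-neighbour matching on the propensity score.
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched = pd.concat([treated, control.iloc[idx.ravel()]])

    # 3. Compare healing duration between groups in the matched sample.
    X = sm.add_constant(matched[["digital_tracking"] + COVARIATES])
    return sm.OLS(matched["healing_days"], X).fit()
```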

Results

Compared with standard nursing monitoring, digital wound tracking was associated with significantly shorter healing durations (15 vs. 35 days) and fewer clinic nursing visits (3 vs. 5.8 visits), as well as greater wound size reduction over time. The laboratory values tested did not consistently predict healing outcomes. Digital tracking exhibited moderate negative correlations with the total number of nursing visits. Regression analysis identified wound complexity, hospitalisations, and initial wound size as clinical predictors of more nursing visits in patients with diabetes mellitus (p < .01). Digital tracking significantly reduced the number of associated nursing visits for patients with peripheral vascular disease.

Conclusion

These findings suggest that digital wound management may streamline nursing care and provide advantages, particularly for comorbid populations facing treatment burdens.

Reporting Method

Reporting of this observational study adhered to the STROBE guidelines.

Relevance to Clinical Practice

By streamlining documentation and potentially shortening healing times, digital wound tracking could help optimise nursing resources, enhance wound care standards, and improve patient experiences. This supports further exploration of digital health innovations to advance evidence-based nursing practice.

Patient or public contribution

This study involved retrospective analysis of existing patient records and did not directly include patients or the public in the design, conduct, or reporting of the research.

A comparative vignette study: Evaluating the potential role of a generative AI model in enhancing clinical decision‐making in nursing

Abstract

Aim

This study explores the potential of a generative artificial intelligence tool (ChatGPT) as clinical support for nurses. Specifically, we aim to assess whether ChatGPT can demonstrate clinical decision-making equivalent to that of expert nurses and novice nursing students, evaluated by comparing ChatGPT's responses to clinical scenarios with those of nurses at different levels of experience.

Design

This is a cross-sectional study.

Methods

Emergency room registered nurses (i.e. experts; n = 30) and nursing students (i.e. novices; n = 38) were recruited during March–April 2023. Clinical decision-making was measured using three validated clinical scenarios, each involving an initial assessment and a re-evaluation. The aspects of clinical decision-making assessed were the accuracy of the initial assessment, the appropriateness of recommended tests and resource use, and the capacity to re-evaluate decisions. Performance was also compared in terms of response generation time and word count. Expert nurses and novice students completed online questionnaires (via Qualtrics), while ChatGPT responses were obtained from OpenAI.
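
The abstract does not give the exact prompts or interface used to obtain and time the ChatGPT responses; as an illustration only, a minimal Python sketch of timing a model's answer to a scenario and counting its words is shown below. The use of the OpenAI chat completions client and the model name are assumptions, not details reported by the study.

```python
# Illustrative sketch only: time a generative model's answer to a clinical
# scenario and count its words. The model name and the OpenAI client call
# are assumptions; the study's actual procedure may have differed.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def timed_response(scenario: str, model: str = "gpt-3.5-turbo") -> dict:
    start = time.perf_counter()
    completion = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": scenario}],
    )
    elapsed = time.perf_counter() - start
    answer = completion.choices[0].message.content or ""
    return {
        "answer": answer,
        "seconds": round(elapsed, 2),
        "word_count": len(answer.split()),
    }
```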

Results

Regarding clinical decision-making, and compared with both novices and experts: (1) ChatGPT was indecisive in its initial assessments; (2) ChatGPT tended to suggest unnecessary diagnostic tests; and (3) when new information required re-evaluation, ChatGPT's responses demonstrated inaccurate understanding and inappropriate modifications. In terms of performance, ChatGPT's answers used 27–41 times more words than those of both experts and novices, and its responses were generated approximately four times faster than those of novices and twice as fast as those of expert nurses. ChatGPT responses maintained logical structure and clarity.

Conclusions

A generative AI tool demonstrated indecisiveness and a tendency towards over-triage compared to human clinicians.

Impact

The study shows that the implementation of ChatGPT as a nurse's digital assistant should be approached with caution. Further research is needed to optimize the model's training and algorithms so that it provides accurate healthcare support that aids clinical decision-making.

Reporting method

This study adhered to relevant EQUATOR guidelines for reporting observational studies.

Patient or public contribution

Patients were not directly involved in the conduct of this study.
