Local climate change risk assessments (LCCRAs) are best supported by a quantitative integration of physical hazards, exposures, and vulnerabilities that includes the characterization of uncertainties. We propose to use Bayesian Networks (BNs) for this task and show how to integrate freely available output of multiple global hydrological models (GHMs) into BNs in order to probabilistically assess risks for water supply. Projected relative changes in hydrological variables computed by three GHMs driven by the output of four global climate models were processed using MATLAB, taking into account local information on water availability and use. A roadmap to set up BNs and apply probability distributions of risk levels under historic and future climate and water use was co-developed with experts from the Maghreb (Tunisia, Algeria, Morocco), who positively evaluated the BN application for LCCRAs. We conclude that the presented approach is suitable for application in the many LCCRAs necessary for successful adaptation to climate change worldwide.
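The core computation in such a BN is marginalization: combining the probability of hazard states (estimated, e.g., from the spread of an ensemble of GHM projections) with water-demand probabilities through a conditional probability table to obtain a risk distribution. The following is a minimal sketch of that idea; the node states, marginal probabilities, and CPT values are illustrative assumptions, not values from the study:

```python
from itertools import product

# Assumed marginal probabilities of hazard states (change in water
# availability), illustratively derived from a GHM ensemble spread.
p_hazard = {"decrease": 0.5, "stable": 0.3, "increase": 0.2}

# Assumed distribution of future water demand.
p_demand = {"low": 0.4, "high": 0.6}

# Conditional probability table P(risk | hazard, demand) -- illustrative values.
cpt = {
    ("decrease", "low"):  {"low": 0.20, "medium": 0.50, "high": 0.30},
    ("decrease", "high"): {"low": 0.05, "medium": 0.35, "high": 0.60},
    ("stable",   "low"):  {"low": 0.70, "medium": 0.25, "high": 0.05},
    ("stable",   "high"): {"low": 0.40, "medium": 0.40, "high": 0.20},
    ("increase", "low"):  {"low": 0.90, "medium": 0.09, "high": 0.01},
    ("increase", "high"): {"low": 0.60, "medium": 0.30, "high": 0.10},
}

def marginal_risk():
    """Marginalize out hazard and demand: P(r) = sum_{h,d} P(r|h,d) P(h) P(d)."""
    risk = {"low": 0.0, "medium": 0.0, "high": 0.0}
    for h, d in product(p_hazard, p_demand):
        weight = p_hazard[h] * p_demand[d]
        for r, p in cpt[(h, d)].items():
            risk[r] += weight * p
    return risk

print(marginal_risk())
```

In practice a dedicated BN library would replace the hand-rolled marginalization, and the hazard probabilities would be re-derived for each climate scenario so that historic and future risk distributions can be compared.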
Background: Experienced and anticipated regret influence physicians’ decision-making. In medicine, diagnostic decisions and diagnostic errors can have a severe impact on both patients and physicians. Little empirical research exists on regret experienced by physicians when they make diagnostic decisions in primary care that later prove inappropriate or incorrect. The aim of this study was to explore the experience of regret following diagnostic decisions in primary care.
Methods: In this qualitative study, we administered an online questionnaire to a sample of German primary care physicians. We asked participants to report on cases in which the final diagnosis differed from their original opinion, and in which treatment was at the very least delayed, possibly resulting in harm to the patient. We asked about original and final diagnoses, illness trajectories, and the reactions of other physicians, patients, and relatives. We used thematic analysis to assess the data, supported by MAXQDA 11 and Microsoft Excel 2016.
Results: Twenty-nine GPs described one case each (14 female/15 male patients, aged 1.5–80 years, response rate < 1%). In 26 of 29 cases, the final diagnosis was more serious than the original diagnosis. In two cases, the diagnoses were equally serious, and in one case less serious. Clinical trajectories and the reactions of patients and relatives differed widely. Although only one third of cases involved preventable harm to patients, the vast majority of physicians (27 of 29) expressed deep feelings of regret.
Conclusion: Even if harm to patients is unavoidable, regret following diagnostic decisions can be devastating for clinicians, making them ‘second victims’. Procedures and tools are needed to analyse cases involving undesirable diagnostic events, so that ‘true’ diagnostic errors, in which harm could have been prevented, can be distinguished from others. Further studies should also explore how physicians can be supported in dealing with such events in order to prevent them from practicing defensive medicine.
Quantitative models have several advantages over qualitative methods for pest risk assessments (PRA). Quantitative models do not require the definition of categorical ratings and can be used to compute numerical probabilities of entry and establishment, and to quantify spread and impact. These models are powerful tools, but they include several sources of uncertainty that need to be taken into account by risk assessors and communicated to decision makers. Uncertainty analysis (UA) and sensitivity analysis (SA) are useful for analyzing uncertainty in models used in PRA, and are becoming more popular. However, these techniques should be applied with caution because several factors may influence their results. In this paper, a brief overview of UA and SA methods is given. In addition, a series of practical rules is defined that risk assessors can follow to improve the reliability of UA and SA results. These rules are illustrated in a case study based on the infection model of Magarey et al. (2005), where the results of UA and SA are shown to be highly dependent on the assumptions made about the probability distributions of the model inputs.