Background: Glioblastoma (GBM) patients are at particularly high risk for thrombotic complications. In the event of a postoperative pulmonary embolism, therapeutic anticoagulation (tAC) is indispensable. The impact of therapeutic anticoagulation on the recurrence pattern in GBM is currently unknown. Methods: We conducted a matched-pair cohort analysis of 57 GBM patients with or without tAC that were matched for age, sex, gross total resection and MGMT methylation status in a ratio of 1:2. Patients’ characteristics and clinical course were evaluated using medical charts. MRI characteristics were evaluated by two independent authors blinded to the tAC status. Results: The morphologic MRI appearance of first GBM recurrence showed a significantly higher prevalence of multifocal, midline-crossing and sharply demarcated recurrence patterns in patients with tAC compared to the matched control group. Although statistically non-significant, the tAC cohort showed increased survival. Conclusion: Therapeutic anticoagulation induced significant morphologic changes in GBM recurrences. The underlying pathophysiology is discussed in this article but remains to be further elucidated.
This paper reports on Monte Carlo simulation results for future measurements of the moduli of the time-like proton electromagnetic form factors, |GE| and |GM|, using the p̄p → μ+μ− reaction at PANDA (FAIR). The electromagnetic form factors are fundamental quantities parameterizing the electric and magnetic structure of hadrons. This work estimates the statistical and total accuracy with which the form factors can be measured at PANDA, using an analysis of simulated data within the PandaRoot software framework. The most crucial background channel is p̄p → π+π−, due to the very similar behavior of muons and pions in the detector. The suppression factors are evaluated for this and all other relevant background channels at different values of the antiproton beam momentum. The signal/background separation is based on a multivariate analysis using the Boosted Decision Trees method. An expected background subtraction is included in this study, based on realistic angular distributions of the background contribution. Systematic uncertainties are considered and the relative total uncertainties of the form factor measurements are presented.
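A suppression factor for a background channel is defined as the ratio of events before and after the selection, alongside the signal efficiency of the same cut. A minimal sketch with invented event counts (these numbers are purely illustrative, not PANDA/PandaRoot simulation results):

```python
# Hypothetical event counts, purely to illustrate how a background
# suppression factor and signal efficiency are defined; these are NOT
# actual PANDA simulation results.
n_bkg_generated = 1_000_000   # simulated pbar p -> pi+ pi- events
n_bkg_surviving = 10          # background events passing the BDT selection
n_sig_generated = 5_000       # simulated pbar p -> mu+ mu- events
n_sig_surviving = 2_000       # signal events passing the same selection

suppression_factor = n_bkg_generated / n_bkg_surviving
signal_efficiency = n_sig_surviving / n_sig_generated

print(f"background suppression: {suppression_factor:.0f}")
print(f"signal efficiency:      {signal_efficiency:.2f}")
```

A tighter cut raises the suppression factor at the cost of signal efficiency; the working point is chosen from the multivariate classifier output.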
Standard monitoring of heart rate, blood pressure and arterial oxygen saturation during endoscopy is recommended by current guidelines on procedural sedation. A number of studies indicated a reduction of hypoxic (arterial oxygenation < 90% for > 15 s) and severe hypoxic events (arterial oxygenation < 85%) with additional use of capnography. Therefore, the U.S. and European guidelines note that additional capnography monitoring can be considered for long or deep sedation. The Integrated Pulmonary Index® (IPI) is an algorithm-based monitoring parameter that combines oxygenation measured by pulse oximetry (arterial oxygenation, heart rate) and ventilation measured by capnography (respiratory rate, apnea > 10 s, partial pressure of end-tidal carbon dioxide [PetCO2]). The aim of this paper was to analyze the value of the IPI as a parameter for monitoring the respiratory status of patients receiving propofol sedation during the PEG procedure. Patients presenting for PEG placement under sedation were randomized 1:1 into either the standard monitoring group (SM) or the capnography monitoring group including IPI (IM). Heart rate, blood pressure and arterial oxygen saturation were monitored in SM. In IM, additional monitoring was performed by measuring PetCO2, respiratory rate and IPI. Capnography and IPI values were recorded for all patients but were visible to the endoscopic team only for the IM group. IPI values range between 1 and 10 (10 = normal; 8–9 = within normal range; 7 = close to normal range, requires attention; 5–6 = requires attention and may require intervention; 3–4 = requires intervention; 1–2 = requires immediate intervention). Results on capnography versus standard monitoring in the same study population were published previously. A total of 147 patients (74 in SM and 73 in IM) were included in the present study. Hypoxic events occurred in 62 patients (42%) and severe hypoxic events in 44 patients (29%), respectively. Baseline characteristics were equally distributed between both groups.
The parameters IPI = 1 and IPI < 7, as well as PetCO2 = 0 mmHg and apnea > 10 s, had high sensitivity for hypoxic and severe hypoxic events, respectively (IPI = 1: 81%/81% [hypoxic/severe hypoxic event], IPI < 7: 82%/88%, PetCO2 = 0 mmHg: 69%/68%, apnea > 10 s: 84%/84%). All four parameters had low specificity for both hypoxic and severe hypoxic events (IPI = 1: 13%/12%, IPI < 7: 7%/7%, PetCO2 = 0 mmHg: 29%/27%, apnea > 10 s: 7%/7%). In multivariate analysis, only SM and PetCO2 = 0 mmHg were independent risk factors for hypoxia. The IPI (IPI = 1 and IPI < 7), as well as the individual parameters PetCO2 = 0 mmHg and apnea > 10 s, allow a fast and convenient assessment of patients’ respiratory status in a morbid patient population. Sensitivity is good for most parameters, but specificity is poor. In conclusion, the IPI can be a useful metric to assess respiratory status during propofol sedation for PEG placement. However, the IPI was not superior to PetCO2 = 0 mmHg and apnea > 10 s.
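Sensitivity and specificity figures like those above follow directly from the 2×2 table of alarm parameter versus observed event. A short sketch with hypothetical counts (invented for illustration, not the study's actual 2×2 table):

```python
# Hypothetical 2x2 table for one alarm parameter (e.g. IPI = 1) against
# hypoxic events; the counts are invented and not taken from the study.
tp = 50  # alarm fired, hypoxic event occurred
fn = 12  # no alarm, but a hypoxic event occurred
fp = 74  # alarm fired, no hypoxic event
tn = 11  # no alarm, no hypoxic event

sensitivity = tp / (tp + fn)  # fraction of events the alarm caught
specificity = tn / (tn + fp)  # fraction of non-events without a false alarm

print(f"sensitivity: {sensitivity:.0%}, specificity: {specificity:.0%}")
```

With counts like these, a parameter can be highly sensitive (few missed events) yet poorly specific (many false alarms), which is exactly the pattern reported above.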
Background: The development of robotic systems has provided an alternative to frame-based stereotactic procedures. The aim of this experimental phantom study was to compare the mechanical accuracy of the Robotic Surgery Assistant (ROSA) and the Leksell stereotactic frame by reducing clinical and procedural factors to a minimum.
Methods: To precisely compare mechanical accuracy, a stereotactic system was chosen as the reference for both methods. A thin-layer CT scan was performed with an acrylic phantom fixed to the frame and a localizer enabling the software to recognize the coordinate system. For each of the five phantom targets, two different trajectories were planned, resulting in 10 trajectories. A series of five repetitions was performed, each time based on a new CT scan. Hence, 50 trajectories were analyzed for each method. X-rays of the final cannula position were fused with the planning data. The coordinates of the target point and the endpoint of the robot- or frame-guided probe were visually determined using the robotic software. The target point error (TPE) was calculated as the Euclidean distance between them. The depth deviation along the trajectory and the lateral deviation were calculated separately.
Results: The robotic system was significantly more accurate, with an arithmetic mean TPE of 0.53 mm (95% CI 0.41–0.55 mm) compared to 0.72 mm (95% CI 0.63–0.80 mm) for stereotaxy (p < 0.05). In robotics, the mean depth deviation along the trajectory was −0.22 mm (95% CI −0.25 to −0.14 mm) and the mean lateral deviation was 0.43 mm (95% CI 0.32–0.49 mm). In frame-based stereotaxy, the mean depth deviation amounted to −0.20 mm (95% CI −0.26 to −0.14 mm) and the mean lateral deviation to 0.65 mm (95% CI 0.55–0.74 mm).
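The error decomposition described in the Methods can be written compactly: the TPE is the Euclidean distance between the planned target and the probe endpoint, the depth deviation is the projection of that error onto the trajectory direction, and the lateral deviation is the remaining perpendicular component. A minimal sketch with invented coordinates (not phantom data from the study):

```python
import numpy as np

# Invented coordinates in mm, for illustration only.
entry = np.array([0.0, 0.0, 0.0])        # trajectory entry point
target = np.array([10.0, 20.0, 30.0])    # planned target point
endpoint = np.array([10.3, 20.4, 30.0])  # measured probe endpoint

error = endpoint - target
tpe = np.linalg.norm(error)  # target point error (Euclidean distance)

# Unit vector along the planned trajectory
u = (target - entry) / np.linalg.norm(target - entry)
depth = float(error @ u)                     # signed deviation along trajectory
lateral = np.linalg.norm(error - depth * u)  # perpendicular component

# Sanity check: depth and lateral components recombine to the TPE
assert np.isclose(depth**2 + lateral**2, tpe**2)
```

Splitting the TPE this way distinguishes errors in insertion depth from errors in trajectory alignment, which is why the study reports them separately.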
Conclusion: Both the robotic and frame-based approach proved accurate. The robotic procedure showed significantly higher accuracy. For both methods, procedural factors occurring during surgery might have a more relevant impact on overall accuracy.
A central motivation for the development of x-ray free-electron lasers has been the prospect of time-resolved single-molecule imaging with atomic resolution. Here, we show that x-ray photoelectron diffraction—where a photoelectron emitted after x-ray absorption illuminates the molecular structure from within—can be used to image the increase of the internuclear distance during the x-ray-induced fragmentation of an O2 molecule. By measuring the molecular-frame photoelectron emission patterns for a two-photon sequential K-shell ionization in coincidence with the fragment ions, and by sorting the data as a function of the measured kinetic energy release, we can resolve the elongation of the molecular bond by approximately 1.2 a.u. within the duration of the x-ray pulse. The experiment paves the way toward time-resolved pump-probe photoelectron diffraction imaging at high-repetition-rate x-ray free-electron lasers.
Investigators in the cognitive neurosciences have turned to Big Data to address persistent replication and reliability issues by increasing sample sizes, statistical power, and the representativeness of data. While there is tremendous potential to advance science through open data sharing, these efforts unveil a host of new questions about how to integrate data arising from distinct sources and instruments. We focus on the most frequently assessed area of cognition - memory testing - and demonstrate a process for reliable data harmonization across three common measures. We aggregated raw data from 53 studies from around the world that measured at least one of three distinct verbal learning tasks, totaling N = 10,505 healthy and brain-injured individuals. A mega-analysis was conducted using empirical Bayes harmonization to isolate and remove site effects, followed by linear models that adjusted for common covariates. After corrections, a continuous item response theory (IRT) model estimated each individual subject’s latent verbal learning ability while accounting for item difficulties. Harmonization significantly reduced inter-site variance by 37% while preserving covariate effects. The effects of age, sex, and education on scores were found to be highly consistent across memory tests. IRT methods for equating scores across auditory verbal learning tests (AVLTs) agreed with held-out data from dually administered tests, and these tools are made available for free online. This work demonstrates that large-scale data sharing and harmonization initiatives can offer opportunities to address reproducibility and integration challenges across the behavioral sciences.
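The idea behind empirical Bayes site-effect removal can be illustrated with a toy version: estimate each site's mean offset, shrink it toward zero (the empirical Bayes step), and subtract it. This is a deliberately simplified sketch, not the actual ComBat-style model used in the paper; the site offsets, sample sizes, and prior strength `k` are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one memory score measured at three sites, each with
# its own additive "site" offset (offsets are invented for illustration).
sites = {"A": 1.5, "B": -1.0, "C": 0.3}
scores, labels = [], []
for name, offset in sites.items():
    scores.append(rng.normal(loc=0.0, scale=1.0, size=200) + offset)
    labels += [name] * 200
y = np.concatenate(scores)
labels = np.array(labels)

grand_mean = y.mean()
k = 10.0  # prior strength (assumed; a real model estimates this from the data)
harmonized = y.copy()
for name in sites:
    mask = labels == name
    n = mask.sum()
    # Empirical-Bayes-style shrinkage: pull the observed site effect
    # toward zero, more strongly when the site has fewer subjects.
    w = n / (n + k)
    site_effect = w * (y[mask].mean() - grand_mean)
    harmonized[mask] -= site_effect

raw_spread = np.var([y[labels == s].mean() for s in sites])
harm_spread = np.var([harmonized[labels == s].mean() for s in sites])
print(f"between-site variance: {raw_spread:.3f} -> {harm_spread:.3f}")
```

After this step, between-site differences in the site means collapse while within-site (subject-level) variation, which carries the covariate effects of interest, is left intact.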
Background The COVID-19 pandemic has spurred large-scale, inter-institutional research efforts. To enable these efforts, researchers must agree on dataset definitions that not only cover all elements relevant to the respective medical specialty but that are also syntactically and semantically interoperable. Following such an effort, the German Corona Consensus (GECCO) dataset has been developed previously as a harmonized, interoperable collection of the most relevant data elements for COVID-19-related patient research. As GECCO has been developed as a compact core dataset across all medical fields, the focused research within particular medical domains demands the definition of extension modules that include those data elements that are most relevant to the research performed in these individual medical specialties.
Objective To (i) specify a workflow for the development of interoperable dataset definitions that involves a close collaboration between medical experts and information scientists and to (ii) apply the workflow to develop dataset definitions that include data elements most relevant to COVID-19-related patient research in immunization, pediatrics, and cardiology.
Methods We developed a workflow to create dataset definitions that are (i) content-wise as relevant as possible to a specific field of study and (ii) universally usable across computer systems, institutions, and countries, i.e., interoperable. We then gathered medical experts from three specialties (immunization, pediatrics, and cardiology) to select the data elements most relevant to COVID-19-related patient research in the respective specialty. We mapped the data elements to international standardized vocabularies and created data exchange specifications using HL7 FHIR. All steps were performed in close interdisciplinary collaboration between medical domain experts and medical information scientists. The profiles and vocabulary mappings were syntactically and semantically validated in a two-stage process.
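The FHIR-based exchange specifications mentioned above constrain standard resources such as Observation. As a schematic illustration only (this is not an actual GECCO profile; the LOINC and SNOMED CT codes are chosen merely as plausible examples), a COVID-19 PCR result might be represented as:

```python
import json

# Minimal, illustrative HL7 FHIR Observation resource; not an actual
# GECCO-profiled instance.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "94500-6",
            "display": "SARS-CoV-2 (COVID-19) RNA [Presence]",
        }]
    },
    "valueCodeableConcept": {
        "coding": [{
            "system": "http://snomed.info/sct",
            "code": "260373001",
            "display": "Detected",
        }]
    },
}

# FHIR resources are exchanged as JSON (or XML) over a REST API.
payload = json.dumps(observation, indent=2)
```

Binding each element to a standardized vocabulary (LOINC for the test, SNOMED CT for the result) is what makes the dataset semantically interoperable across institutions.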
Results We created GECCO extension modules for the immunization, pediatrics, and cardiology domains to address pandemic-related research needs. The data elements included in each of these modules were selected by medical experts from the respective specialty according to the consensus-based workflow developed here, ensuring that the contents are aligned with the respective research needs. We defined dataset specifications for a total of 48 (immunization), 150 (pediatrics), and 52 (cardiology) data elements that complement the GECCO core dataset. We created and published implementation guides, example implementations, and dataset annotations for each extension module.
Conclusions The GECCO extension modules presented here, which contain the data elements most relevant to COVID-19-related patient research in immunization, pediatrics, and cardiology, were defined in an interdisciplinary, iterative, consensus-based workflow that may serve as a blueprint for the development of further dataset definitions. The GECCO extension modules provide a standardized and harmonized definition of specialty-specific datasets that can help enable inter-institutional and cross-country COVID-19 research in these specialties.