University Publications
Aims: To compare the effects of Ayurvedic and conventional nutritional therapy in patients with irritable bowel syndrome (IBS). Methods: Sixty-nine patients with IBS were randomized to Ayurvedic (n = 35) or conventional nutritional therapy according to the recommendations of the German Nutrition Society including the low-FODMAP diet (n = 34). Study visits took place at baseline and after 1, 3, and 6 months. The primary outcome was IBS symptom severity (IBS-SSS) after 3 months; secondary outcomes included stress (CPSS), anxiety and depression (HADS), well-being (WHO-5) and IBS-specific quality of life (IBS-QOL). A repeated measures general linear model (GLM) for intent-to-treat analyses was applied in this explorative study. Results: After 3 months, estimated marginal means for IBS-SSS reductions were 123.8 [95% confidence interval (95% CI) = 92.8–154.9; p < 0.001] in the Ayurvedic and 72.7 (95% CI = 38.8–106.7; p < 0.001) in the conventional group. The IBS-SSS reduction was significantly higher in the Ayurveda group compared to the conventional therapy group (estimated marginal mean = 51.1; 95% CI = 3.8–98.5; p = 0.035) and clinically meaningful. Sixty-eight percent of the variance in IBS-SSS reduction after 3 months can be explained by treatment, 6.5% by patients' expectations for their therapies and 23.4% by IBS-SSS at pre-intervention. Both therapies are equivalent in their contribution to the outcome variance. The higher the IBS-SSS score at pre-intervention and the larger the patients' expectations, the greater the IBS-SSS reduction. There were no significant group differences in any secondary outcome measures. No serious adverse events occurred in either group. Conclusion: Patients with IBS appear to benefit significantly from Ayurvedic or conventional nutritional therapy. The results warrant further studies with longer-term follow-ups and larger sample sizes.
Clinical Trial Registration: https://clinicaltrials.gov/ct2/show/NCT03019861, identifier: NCT03019861.
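The relationship summarized above, with larger baseline IBS-SSS and higher expectations predicting a greater symptom reduction on top of a treatment effect, can be sketched as a simple least-squares fit. All numbers below are invented for illustration; the study's actual analysis was a repeated-measures GLM on intent-to-treat data.

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with an intercept column; returns coefficients."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Synthetic illustration (all effect sizes invented): reduction grows with
# baseline IBS-SSS, patient expectation, and assignment to treatment 1.
rng = np.random.default_rng(0)
n = 69
baseline = rng.normal(300, 50, n)      # IBS-SSS at pre-intervention
expectation = rng.normal(5, 2, n)      # expectation score
treatment = rng.integers(0, 2, n)      # 0 = conventional, 1 = Ayurvedic
reduction = (0.4 * baseline + 8.0 * expectation
             + 50.0 * treatment + rng.normal(0, 20, n))

# beta = [intercept, treatment, expectation, baseline]
beta = fit_ols(np.column_stack([treatment, expectation, baseline]), reduction)
```

On such data the fitted coefficients for treatment, expectation and baseline all come out positive, mirroring the direction of the reported effects.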
Graves' disease (Morbus Basedow) is among the most common causes of hyperthyroidism. Treatment options comprise antithyroid drug therapy as well as surgical and nuclear medicine procedures.
The latter two options are definitive procedures by virtue of their mode of action: thyroid tissue is removed or destroyed, so that lifelong substitution of the vital thyroid hormones is usually required. With antithyroid drug therapy, by contrast, the entire thyroid gland remains intact and functional. Its drawback is the high relapse rate of over 50% compared with definitive therapy. For more patients to benefit from the advantages of antithyroid drug therapy, it must be optimized to reduce the relapse rate.
The aim of this work was to determine, by retrospective analysis, which anamnestic, clinical, sonographic and laboratory parameters are associated with relapse of Graves' disease in patients receiving antithyroid drug therapy. In addition, sonographic and laboratory values over the course of the disease were analysed in order to derive indicators for the optimal duration of antithyroid therapy. For this purpose, data from 260 patients were compared between the remission and relapse groups with respect to the following factors: age at disease onset, sex, duration of antithyroid treatment, vitamin D level, smoking, endocrine orbitopathy, family history of autoimmune disease, family history of thyroid disease, and changes in other hormonal axes. A time-series analysis of thyroid-specific laboratory values (fT3, fT4, TSH, TRAb, anti-TPO antibodies, Tg antibodies) and of the sonographically determined thyroid volume was also performed at diagnosis and six and twelve months thereafter. The relapse rate in the study population was 68.8%.
Significant differences between the remission and relapse cohorts were found for age at onset, duration of therapy, thyroid volume, thyroid function parameters and TSH receptor antibodies. Patients who had not yet reached 35 years of age at diagnosis relapsed significantly more often than older patients. In the remission group, therapy duration was also significantly longer, at twelve months, than in the relapse group. Patients whose thyroid was enlarged beyond normal on sonography at diagnosis or twelve months later relapsed significantly more often, as did patients with persistently pathological thyroid function parameters six and twelve months after diagnosis. TSH receptor antibody levels were significantly higher in the relapse group at all measurement time points. For the medical treatment of Graves' disease, these results suggest that the duration of antithyroid therapy should be adapted to the course of the disease, as reflected in thyroid function values and TSH receptor antibody levels, in order to increase its success rate. They further suggest that younger patients and patients with an enlarged thyroid carry an increased risk of relapse and may benefit from a prolonged course of therapy.
While the current European guideline on the treatment of autoimmune hyperthyroidism recommends a fixed span of twelve to eighteen months of antithyroid drug treatment, the American hyperthyroidism guideline recommends continuing antithyroid therapy until the TSH receptor antibodies normalize. The results of the present work argue for aligning the European guideline with the American one in this respect.
In psychiatry, there has been a growing focus on identifying at-risk populations. For schizophrenia, these efforts have led to the development of early recognition and intervention measures. Despite a similar disease burden, the populations at risk of bipolar disorder have not been sufficiently characterized. Within the BipoLife consortium, we used magnetic resonance imaging (MRI) data from a multicenter study to assess structural gray matter alterations in N = 263 help-seeking individuals from seven study sites. We defined the risk using the EPIbipolar assessment tool as no-risk, low-risk, and high-risk and used a region-of-interest approach (ROI) based on the results of two large-scale multicenter studies of bipolar disorder by the ENIGMA working group. We detected significant differences in the thickness of the left pars opercularis (Cohen’s d = 0.47, p = 0.024) between groups. The cortex was significantly thinner in high-risk individuals compared to those in the no-risk group (p = 0.011). We detected no differences in the hippocampal volume. Exploratory analyses revealed no significant differences in other cortical or subcortical regions. The thinner cortex in help-seeking individuals at risk of bipolar disorder is in line with previous findings in patients with the established disorder and corresponds to the region of the highest effect size in the ENIGMA study of cortical alterations. Structural alterations in prefrontal cortex might be a trait marker of bipolar risk. This is the largest structural MRI study of help-seeking individuals at increased risk of bipolar disorder.
Periodontal furcation lesions: a survey of diagnosis and management by general dental practitioners
(2021)
Aim: The aim of this study was to explore general dental practitioners' (GDPs) attitude to periodontal furcation involvement (FI). Materials and methods: An online survey focused on diagnosis and management of periodontal FI was circulated to GDPs in seven different countries. Results: A total of 400 responses were collected. Nearly a fifth of participants reported rarely or never taking 6-point pocket charts; 65.8% of participants had access to a Nabers probe in their practice. When shown clinical pictures and radiographs of FI-involved molars, the majority of participants correctly diagnosed it. Although 47.1% of participants were very/extremely confident in detecting FI, only 8.9% felt very/extremely confident at treating it. Differences in responses were detected according to country and year of qualification, with a trend towards less interest in periodontal diagnosis and treatment in younger generations. Lack of knowledge of management/referral pathways (reported by 22.8%) and lack of correct equipment were considered the biggest barriers to FI management. Most participants (80.9%) were interested in learning more about FI, ideally face to face followed by online tutorials. Conclusions: Plans should be put in place to improve general dentists' knowledge and ability to manage FI, as this can have a significant impact on public health.
The cytochrome P450 (CYP) signalling pathway has been shown to play a vital role in the vasoreactivity of the wild-type mouse ophthalmic artery. In this study, we determined the expression, vascular responses and potential mechanisms of CYP-derived arachidonic acid metabolites. Expression of murine CYP (Cyp2c44) and soluble epoxide hydrolase (sEH) in the wild-type ophthalmic artery was determined by immunofluorescence, which showed predominant expression of Cyp2c44 in vascular smooth muscle cells (VSMC), while sEH was found mainly in the endothelium. Arteries of Cyp2c44−/− and sEH−/− mice served as negative controls. Targeted mass-spectrometry-based lipidomics analysis of endogenous epoxides and diols of the wild-type artery detected only 14,15-EET. Vasorelaxant responses of isolated vessels to selective pharmacological blockers and agonists were analysed ex vivo. Direct antagonism of epoxyeicosatrienoic acids (EETs) with a selective inhibitor caused partial vasodilation, suggesting that EETs may act as vasoconstrictors. Exogenous administration of synthetic EET regioisomers significantly constricted the vessels in a concentration-dependent manner, with the strongest responses elicited by 11,12- and 14,15-EET. Our results provide the first experimental evidence that Cyp2c44-derived EETs in the VSMC mediate vasoconstriction of the ophthalmic artery.
Objectives: To test the effect of race/ethnicity on cancer-specific mortality after radical prostatectomy or external beam radiotherapy in localized prostate cancer patients. Methods: In the Surveillance, Epidemiology and End Results database 2004–2016, we identified intermediate-risk and high-risk white (n = 151 632), Asian (n = 11 189), Hispanic/Latino (n = 20 077) and African American (n = 32 550) localized prostate cancer patients, treated with external beam radiotherapy or radical prostatectomy. Race/ethnicity-stratified cancer-specific mortality analyses relied on competing risks regression, after propensity score matching for patient and cancer characteristics. Results: Compared with white patients, Asian intermediate- and high-risk external beam radiotherapy patients showed lower cancer-specific mortality (hazard ratio 0.58 and 0.70, respectively, both P ≤ 0.02). Additionally, Asian high-risk radical prostatectomy patients also showed lower cancer-specific mortality than white patients (hazard ratio 0.72, P = 0.04), but not Asian intermediate-risk radical prostatectomy patients (P = 0.08). Conversely, compared with white patients, African American intermediate-risk radical prostatectomy patients showed higher cancer-specific mortality (hazard ratio 1.36, P = 0.01), but not African American high-risk radical prostatectomy or intermediate- and high-risk external beam radiotherapy patients (all P ≥ 0.2). Finally, compared with white patients, no cancer-specific mortality differences were recorded for Hispanic/Latino patients after external beam radiotherapy or radical prostatectomy, in both risk levels (P ≥ 0.2). Conclusions: Relative to white patients, an important cancer-specific mortality advantage applies to intermediate-risk and high-risk Asian prostate cancer patients treated with external beam radiotherapy, and to high-risk Asian patients treated with radical prostatectomy.
These observations should be considered in pretreatment risk stratification and decision-making.
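The propensity score matching step mentioned in the Methods above can be sketched as a greedy 1:1 nearest-neighbour match on an estimated score. The covariates (age, PSA), cohort size and matching rule below are illustrative assumptions, not the study's actual implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ps_match(X, treated):
    """Greedy 1:1 nearest-neighbour matching on the estimated propensity score.
    Returns (treated_index, control_index) pairs; each control is used once."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    controls = list(np.flatnonzero(treated == 0))
    pairs = []
    for i in np.flatnonzero(treated == 1):
        if not controls:
            break
        j = min(controls, key=lambda c: abs(ps[i] - ps[c]))
        pairs.append((int(i), j))
        controls.remove(j)
    return pairs

# Invented cohort: older patients are more likely to receive the treatment,
# so the raw groups are imbalanced in age before matching.
rng = np.random.default_rng(1)
n = 400
age = rng.normal(65, 8, n)
psa = rng.normal(10, 4, n)
X = np.column_stack([age, psa])
treated = (rng.random(n) < 1 / (1 + np.exp(-(age - 72) / 5))).astype(int)

pairs = ps_match(X, treated)
gap_before = abs(age[treated == 1].mean() - age[treated == 0].mean())
matched_t, matched_c = zip(*pairs)
gap_after = abs(age[list(matched_t)].mean() - age[list(matched_c)].mean())
```

After matching, the mean age difference between the groups shrinks, which is the point of the procedure before any survival comparison.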
Inflammatory diseases including psoriasis are associated with metabolic and cardiovascular comorbidities, including obesity and metabolic syndrome. Obesity is associated with greater psoriasis disease severity and reduced response to treatment. Therefore, targeting metabolic comorbidities could improve patients’ health status and psoriasis-specific outcomes. METABOLyx is a randomized controlled trial evaluating the combination of a lifestyle intervention program with secukinumab treatment in psoriasis. Here, the rationale, methodology and baseline patient characteristics of METABOLyx are presented. A total of 768 patients with concomitant moderate to severe plaque psoriasis and metabolic syndrome were randomized to secukinumab 300 mg, or secukinumab 300 mg plus a tailored lifestyle intervention program, over 24 weeks. A substudy of immunologic and metabolic biomarkers is ongoing. The primary endpoint of METABOLyx is PASI90 response at week 24. Other endpoints include patient-reported outcomes and safety. METABOLyx represents the first large scale clinical trial of an immunomodulatory biologic in combination with a standardized lifestyle intervention.
Cardiac rehabilitation (CR) is a multidisciplinary intervention including patient assessment and medical actions to promote stabilization, management of cardiovascular risk factors, vocational support, psychosocial management, physical activity counselling, and prescription of exercise training. Millions of people with cardiac implantable electronic devices live in Europe, and their numbers are progressively increasing; large subsets of patients admitted to CR facilities therefore have a cardiac implantable electronic device. Recipients of cardiac implantable electronic devices are considered eligible for a CR programme. This is related not only to the underlying heart disease but also to specific issues, such as psychological adaptation to living with an implanted device and, in implantable cardioverter-defibrillator patients, the risk of arrhythmia, syncope, and sudden cardiac death. These patients should therefore receive special attention, as their needs may differ from those of other patients participating in CR. As evidence from studies of CR in patients with cardiac implantable electronic devices is sparse, detailed clinical practice guidelines are lacking. Here, we aim to provide practical recommendations for CR in recipients of cardiac implantable electronic devices in order to increase CR implementation, efficacy, and safety in this subset of patients.
(1) Background: The aim of our study was to identify specific risk factors for fatal outcome in critically ill COVID-19 patients. (2) Methods: Our data set consisted of 840 patients enrolled in the LEOSS registry. Using lasso regression for variable selection, a multifactorial logistic regression model was fitted to the response variable survival. Specific risk factors and their odds ratios were derived. A nomogram was developed as a graphical representation of the model. (3) Results: 14 variables were identified as independent factors contributing to the risk of death for critically ill COVID-19 patients: age (OR 1.08, CI 1.06–1.10), cardiovascular disease (OR 1.64, CI 1.06–2.55), pulmonary disease (OR 1.87, CI 1.16–3.03), baseline statin treatment (OR 0.54, CI 0.33–0.87), oxygen saturation (unit = 1%, OR 0.94, CI 0.92–0.96), leukocytes (unit 1000/μL, OR 1.04, CI 1.01–1.07), lymphocytes (unit 100/μL, OR 0.96, CI 0.94–0.99), platelets (unit 100,000/μL, OR 0.70, CI 0.62–0.80), procalcitonin (unit ng/mL, OR 1.11, CI 1.05–1.18), kidney failure (OR 1.68, CI 1.05–2.70), congestive heart failure (OR 2.62, CI 1.11–6.21), severe liver failure (OR 4.93, CI 1.94–12.52), and a quick SOFA score of 3 (OR 1.78, CI 1.14–2.78). The nomogram graphically displays the importance of these factors for mortality. (4) Conclusions: There are risk factors that are specific to the subpopulation of critically ill COVID-19 patients.
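The modelling pipeline named in the Methods, lasso regression for variable selection followed by a logistic model reporting odds ratios, can be sketched with an L1-penalized logistic regression. Predictors, effect sizes and the regularization strength below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic sketch: 840 "patients", 8 candidate predictors, of which only
# the first three truly drive the (simulated) mortality outcome.
rng = np.random.default_rng(42)
n, p = 840, 8
X = rng.normal(size=(n, p))
logit = 1.2 * X[:, 0] - 1.0 * X[:, 1] - 0.8 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# The L1 penalty zeroes out uninformative coefficients (variable selection);
# odds ratios then follow from exponentiating the surviving coefficients.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
selected = np.flatnonzero(model.coef_[0])
odds_ratios = np.exp(model.coef_[0][selected])
```

An odds ratio above 1 marks a risk factor, below 1 a protective factor, as with the statin treatment in the abstract.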
Severe traumatic injury induces phenotypic and functional changes of neutrophils and monocytes
(2021)
Background: Severe traumatic injury has been associated with high susceptibility to secondary complications caused by a dysbalanced immune response. As the first line of the cellular immune response, neutrophils and monocytes recruited to the site of tissue damage and/or infection are divided into three subsets according to their CD16/CD62L and CD16/CD14 expression, respectively. Their differential functions are not yet clearly understood. We therefore evaluated phenotypic changes of neutrophil and monocyte subsets together with their function regarding oxidative burst and phagocytic capacity in severely traumatized patients. Methods: Peripheral blood was withdrawn from severely injured trauma patients (TP; n = 15, ISS ≥ 16) within the first 12 h post-trauma and from healthy volunteers (HV; n = 15) and stimulated with fMLP and PMA. CD16dimCD62Lbright (immature), CD16brightCD62Lbright (mature) and CD16brightCD62Ldim (CD62Llow) neutrophil subsets and CD14brightCD16− (classical), CD14brightCD16+ (intermediate) and CD14dimCD16+ (non-classical) monocyte subsets of HV and TP were either analyzed directly by flow cytometry, or the examined subsets of HV were first sorted by fluorescence-activated cell sorting and subsequently analyzed. Subset-specific generation of reactive oxygen species (ROS) and phagocytosis of E. coli bioparticles were evaluated. Results: In TP, counts of immature neutrophils were significantly increased vs. HV. The numbers of mature and CD62Ldim neutrophils remained unchanged, but ROS production was significantly enhanced in TP vs. HV, and stimulation with fMLP significantly increased ROS generation in the mature and CD62Ldim neutrophils of HV. The counts of phagocyting neutrophils did not change, but the mean phagocytic capacity showed an increasing trend in TP. In TP, the monocytes shifted toward the intermediate phenotype, whereas the classical and non-classical monocytes became less abundant. ROS generation was significantly increased in all monocyte subsets in TP vs. HV, and PMA stimulation significantly increased those levels in both HV and TP. However, the PMA-induced mean ROS generation was significantly lower in intermediate monocytes of TP vs. HV. Sorting of monocyte and neutrophil subsets revealed a significant increase in ROS and decrease in phagocytic capacity vs. whole-blood analysis. Conclusions: Neutrophils and monocytes display a phenotypic shift following severe injury. The increased functional abnormalities of certain subsets may contribute to the dysbalanced immune response and attenuate antimicrobial function, and thus may represent a potential therapeutic target. Further studies on isolated subsets are necessary to evaluate their physiological role after severe traumatic injury.
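The CD16/CD62L subset definitions used above can be expressed as a small gating function. The numeric bright/dim cut-off here is a placeholder; in practice, flow cytometry gates are set per experiment against controls.

```python
def neutrophil_subset(cd16, cd62l, threshold=1000.0):
    """Classify a neutrophil by CD16/CD62L fluorescence intensity.

    The `threshold` separating 'dim' from 'bright' is hypothetical and
    only illustrates the three subsets named in the abstract.
    """
    bright16 = cd16 >= threshold
    bright62 = cd62l >= threshold
    if not bright16 and bright62:
        return "immature"      # CD16dim CD62Lbright
    if bright16 and bright62:
        return "mature"        # CD16bright CD62Lbright
    if bright16 and not bright62:
        return "CD62Llow"      # CD16bright CD62Ldim
    return "unclassified"      # CD16dim CD62Ldim: outside the three subsets
```

Applied per event, such a function yields the subset counts that the study compares between trauma patients and healthy volunteers.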
Human babesiosis in Europe
(2021)
Babesiosis is attracting increasing attention as a worldwide emerging zoonosis. The first case of human babesiosis in Europe was described in the late 1950s and since then more than 60 cases have been reported in Europe. While the disease is relatively rare in Europe, it is significant because the majority of cases present as life-threatening fulminant infections, mainly in immunocompromised patients. Although appearing clinically similar to human babesiosis elsewhere, particularly in the USA, most European forms of the disease are distinct entities, especially concerning epidemiology, human susceptibility to infection and clinical management. This paper describes the history of the disease and reviews all published cases that have occurred in Europe with regard to the identity and genetic characteristics of the etiological agents, pathogenesis, aspects of epidemiology including the eco-epidemiology of the vectors, the clinical courses of infection, diagnostic tools and clinical management and treatment.
Objectives: Our study aimed to assess the frequency of potentially inappropriate medication (PIM) use (according to three PIM lists) and to examine the association between PIM use and cognitive function among participants in the MultiCare cohort. Design: MultiCare is conducted as a longitudinal, multicentre, observational cohort study. Setting: The MultiCare study is located in eight different study centres in Germany. Participants: 3189 patients (59.3% female). Primary and secondary outcome measures: The study had a cross-sectional design using baseline data from the German MultiCare study. Prescribed and over-the-counter drugs were classified using the FORTA (Fit fOR The Aged), PRISCUS (Latin for 'time-honoured') and EU(7)-PIM lists. A mixed-effect multivariate linear regression was performed to calculate the association between PIM use and patients' cognitive function, measured with the Letter Digit Substitution Test (LDST). Results: Patients (n = 3189) used 2152 FORTA PIM (mean 0.9 ± 1.03 per patient), 936 PRISCUS PIM (0.3 ± 0.58) and 4311 EU(7)-PIM (1.4 ± 1.29). The most common FORTA PIM was phenprocoumon (13.8%); the most prevalent PRISCUS PIM was amitriptyline (2.8%); the most common EU(7)-PIM was omeprazole (14.0%). The lists rate PIM differently, with an overall overlap of 6.6%. Increasing use of PIM is significantly associated with reduced cognitive function, with correlation coefficients of −0.60 for FORTA PIM (p = 0.002), −0.72 for PRISCUS PIM (p = 0.025) and −0.44 for EU(7)-PIM (p = 0.005). Conclusion: The FORTA, PRISCUS and EU(7)-PIM lists identified PIM differently, and PIM use was associated with cognitive impairment according to the LDST, with the FORTA list best explaining cognitive decline in the German population. These findings are consistent with a negative impact of PIM use on outcomes in multimorbid elderly patients.
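The negative association reported above between PIM count and cognitive score can be illustrated with a simple regression on synthetic data. All numbers are invented, and the study's actual model was a mixed-effects regression adjusting for centre effects, not this plain fit.

```python
import numpy as np
from scipy import stats

# Hypothetical illustration: more PIMs, lower cognitive test score.
rng = np.random.default_rng(7)
n = 3189
pim_count = rng.poisson(1.4, n)                    # e.g. EU(7)-PIM, mean 1.4
ldst = 35 - 0.6 * pim_count + rng.normal(0, 5, n)  # simulated cognitive score

# A negative slope with a small p-value reflects the reported direction
# of the association between PIM use and cognition.
res = stats.linregress(pim_count, ldst)
```

With a cohort of this size even a modest per-PIM decrement produces a clearly significant slope, which is why the study could detect effects for all three lists.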
Studies on HIV-associated immune reconstitution inflammatory syndrome in tuberculosis
(2021)
Patients co-infected with HIV and tuberculosis (TB) can develop immune reconstitution inflammatory syndrome (IRIS) as a complication after starting antiretroviral therapy (ART). IRIS involves the new appearance or worsening of clinical symptoms or radiological findings related to the TB. It can present either as a sudden deterioration of the infection after starting ART ("paradoxical IRIS") or as the unmasking of a previously clinically inapparent, untreated infection ("unmasking IRIS"). Because diagnostic criteria are not uniformly defined, establishing the diagnosis can be challenging in everyday clinical practice.
The aim of this dissertation was therefore to identify clinical characteristics, risk factors and possible protective factors for the development of IRIS in TB. These results were intended to contribute to a better understanding and prediction of TB-associated IRIS.
To this end, data from 52 patients with HIV infection admitted for treatment of active tuberculosis to the infectious diseases ward of Frankfurt University Hospital between 1 January 2010 and 30 June 2016 were collected retrospectively in pseudonymized form. Sources included physicians' letters, laboratory findings, fever charts and ward-round reports from the patient management system "ORBIS", the database "epidem" and the laboratory information system "Nexus swisslab" of Frankfurt University Hospital. In addition to patient-specific data such as age and sex, the parameters included routine laboratory values, serologies, type of TB, the exact ART and TB regimens, and laboratory parameters indicative of developing immune reconstitution and virological suppression, in particular HIV viral load and CD4 and CD8 cell counts over a period of 48 weeks from the start of ART.
To examine the different types of IRIS, the patients were divided into two groups: patients already pretreated with ART, in whom an unmasking IRIS could develop, and ART-naïve patients, who could theoretically develop a paradoxical IRIS. Based on the clinical course, with particular attention to HIV viral load during ART, the presence of IRIS was determined according to the IRIS definition of French et al. (2004). Unclear cases were discussed and definitively classified in the clinic's internal colloquium. Finally, statistical analysis was performed with the statistics program "bias", comparing the IRIS with the non-IRIS group using Fisher's exact test for categorical and the Wilcoxon-Mann-Whitney test for numeric variables.
The incidence of paradoxical IRIS was 29.7%, that of unmasking IRIS 46.7%. In the Frankfurt cohort, IRIS most commonly presented with fever, followed by lymphadenopathy and respiratory complaints. Patients with either paradoxical or unmasking IRIS had a significantly longer hospital stay than patients who did not develop IRIS. No further statistically significant parameters were found for unmasking IRIS, partly owing to limitations such as the very small study population (15 patients).
Patients with paradoxical IRIS also had a significantly higher rehospitalization rate (63.3% vs. 15.4%; p = 0.006), underscoring the clinical relevance. Furthermore, extrathoracic TB manifestations (p = 0.025), low CD4+ lymphocyte counts (p = 0.006) and high viral load (p = 0.017) before the start of ART correlated with the development of paradoxical TB-IRIS. These patients should therefore be monitored particularly closely after starting ART, as IRIS is more likely in them. Elevated serum lactate dehydrogenase (LDH) and decreased serum albumin were also statistically significant. In combination with the aforementioned parameters, these values could help to estimate the individual risk of paradoxical IRIS in tuberculosis. ART components and the time between the start of TB therapy and ART had no influence in this study.
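The two tests named in this analysis, Fisher's exact test for categorical and the Wilcoxon-Mann-Whitney test for numeric variables, can be run as follows. The 2×2 counts are chosen to mirror the reported rehospitalization rates (63.3% of 30 vs. 15.4% of 13); the CD4 values are invented.

```python
from scipy import stats

# Categorical variable: rehospitalized yes/no in the IRIS vs. non-IRIS group.
table = [[19, 11],   # IRIS group: 19 rehospitalized, 11 not (63.3% of 30)
         [2, 11]]    # non-IRIS group: 2 rehospitalized, 11 not (15.4% of 13)
odds_ratio, p_cat = stats.fisher_exact(table)

# Numeric variable (e.g. pre-ART CD4 counts per µL, values invented):
iris_cd4 = [45, 60, 30, 85, 20, 55, 70, 40]
no_iris_cd4 = [180, 220, 150, 300, 210, 190, 260, 170]
u_stat, p_num = stats.mannwhitneyu(iris_cd4, no_iris_cd4)
```

Both tests are distribution-free, which suits small cohorts such as the 52 patients analysed here.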
Background: The SARS-CoV-2 pandemic is one of the most threatening in human history. As of the date of this analysis, it had claimed about 2 million lives worldwide, and the number is rising sharply. Governments, societies, and scientists are equally challenged under this burden. Objective: This study aimed to map global coronavirus research in 2020 according to various influencing factors to highlight incentives or necessities for further research. Methods: The application of established and advanced bibliometric methods combined with the visualization technique of density-equalizing mapping provided a global picture of incentives and efforts in coronavirus research in 2020. Countries' funding patterns and their epidemiological and socioeconomic characteristics as well as their publication performance data were included. Results: Research output exploded in 2020 with momentum, including citation and networking parameters. China and the United States were the countries with the highest publication performance. Globally, however, publication output correlated significantly with COVID-19 cases. Research funding has also increased immensely. Conclusions: Nonetheless, the abrupt decline in publication efforts following previous coronavirus epidemics should demonstrate to global researchers that they should not lose interest even after containment, as the next epidemiological challenge is certain to come. Validated reporting worldwide and the inclusion of low-income countries are additionally important for a successful future research strategy.
Following publication of the original article, the authors noticed an incorrect affiliation for Christine Stürken and Udo Schumacher. The correct affiliations are as follows: Christine Stürken: Institute of Anatomy and Experimental Morphology, University Medical Center Hamburg-Eppendorf, Martinistrasse 52, 20246 Hamburg, Germany. Udo Schumacher: Institute of Anatomy and Experimental Morphology, University Medical Center Hamburg-Eppendorf, Martinistrasse 52, 20246 Hamburg, Germany. The affiliations have been correctly published in this correction and the original article has been updated.
Morbus Parkinson ist die zweithäufigste neurodegenerativen Erkrankung, die durch Untergang der dopaminergen Neuronen im Mesenzephalon zu einer Störung des extrapyramidalen motorischen Systems führt. Daraus resultierende Bewegungsstörungen, zu denen Rigor, Tremor, Hypokinese und posturale Instabilität gehören, werden von nichtmotorischen Symptomen wie autonome Dysregulation, veränderte sensorische Wahrnehmung, sowie kognitive und psychische Störungen begleitet.
Mehrere Studien berichten über erhöhte Schmerzprävalenz bei Parkinson Patienten. Die genaue Pathogenese der gestörten Schmerzwahrnehmung bleibt unklar. Zusätzlich zu den zentralen Mechanismen entstehen die Schmerzen bei Morbus Parkinson wahrscheinlich durch eine Schädigung der peripheren somatosensorischen und autonomen Neuronen, die sich in sensorischen Defiziten, sowie in erhöhter Schmerzempfindlichkeit manifestieren. Als Korrelat dazu wurden abnormale somatosensorisch evozierte Potenziale, pathologische Ergebnisse in der quantitativen sensorischen Testung und eine Abnahme der Nervenfaserdichte beschrieben.
Ein Schwerpunkt unserer Untersuchungen lag auf der Erforschung von potentiellen Veränderungen von Lipidsignalmolekülen. Eine Reihe von Studien zeigen eine Schmerzlinderung durch Cannabis-Einnahme, sowie eine Tendenz zur Schmerzentwicklung bei Parkinson Patienten mit dem bekannten FAAHPolymorphismus. Die Ergebnisse deuten darauf hin, dass eine Störung im Endocannabinoid-System höchstwahrscheinlich zu erhöhter Schmerzprävalenz bei Morbus Parkinson beiträgt. Eine weitere wichtige Lipid-Gruppe sind Glycosylceramide. Ihr Abbau kann durch heterozygote Mutationen des lipidabbauenden Enzyms Glukocerebrosidase 1 (GBA1) gestört sein. GBA1 Mutationen sind mit der schnell progredienten sporadischen Verlaufsform der Parkinson-Krankheit assoziiert.
Im Rahmen der Studie wurden zwei Kohorten von Parkinson Patienten analysiert. Die 128 Patienten aus Israel wurden im ersten Teil mit 224 jungen gesunden deutschen Probanden verglichen. Im zweiten Teil wurden 50 deutschen Patienten und 50 gesunde altersgleiche Probanden untersucht. Die Schmerzevaluation erfolgte anhand der "Brief Pain Inventory“ und "Neuro Detect“ Fragebögen. Bei allen Probanden wurde quantitative sensorische Testung durchgeführt und die Plasmakonzentrationen der Lipidsignalmoleküle mittels quantitativer HPLC-Tandem-Massenspektrometrie analysiert.
Nach Auswertung der Schmerzevaluation konnte eine erhöhte Schmerzprävalenz bei Parkinson Patienten festgestellt werden. Die Prävalenz betrug 66% im ersten Teil der Studie und 74% in der deutschen Kohorte, im Vergleich zu 40% bei den altersgleichen gesunden Probanden. Ergebnisse der quantitativen sensorischen Testung zeigen einen Verlust der thermischen Empfindung (erhöhte Schwellen) bei der gleichzeitigen mechanischen Überempfindlichkeit (erniedrigte Schwellen). In der multivariaten LipidAnalyse konnten erniedrigte Konzentrationen von Anandamid und Lysophosphatidsäure 20:4 und eine Erhöhung der Glucosylceramide nachgewiesen werden. Diese Veränderungen waren bei Parkinson Patienten mit Schmerzen stärker ausgeprägt. Außerdem wurde eine lineare Korrelation zwischen Glucosylceramiden (GlcCer 18:1, GlcCer 24:1) und der Schmerzintensität, sowie sensorischem Defizit festgestellt.
After careful evaluation of the study results, we conclude that changes in endocannabinoids and glucosylceramides contribute to the pathogenesis of pain and sensory neuropathy in Parkinson's disease. These findings could in future support diagnosis through early detection of premotor sensory symptoms. Moreover, our results could contribute to therapy optimisation by restoring lipid homeostasis.
This dissertation aims to answer the question of whether the health insurers' demand to perform umbilical and epigastric hernia repair as outpatient surgery is justified and sensible. It further aims to identify control variables and measures that can facilitate the transfer of this procedure to the outpatient setting.
Since the 1980s, attempts have been made to meet the demand for cost savings in healthcare through short-stay and outpatient surgery for various conditions. Health insurers require that closure of an umbilical hernia without plasty, with excision of an umbilical cyst; closure of an epigastric hernia without plasty; and closure of an umbilical hernia with plasty be performed as outpatient operations. Accordingly, these procedures were added to the catalogue of outpatient, ward-replacing services in 2005. Nevertheless, the average inpatient length of stay after this procedure remains 3.5 days.
Phylogenetically, umbilical hernias arise at anatomically preformed weak points of the abdominal wall where muscle is absent and only aponeuroses and fasciae are present. Their development is also favoured by comorbidities and risk factors.
After applying various exclusion criteria, 95 patients were included in the present study who had been operated on between 24 August 2009 and 24 June 2012 with a principal diagnosis of umbilical or epigastric hernia (ICD-10 codes K42.0, K42.1, K42.9, K43.0, K43.1 and K43.9) at the Department of General and Visceral Surgery of the Hochtaunuskliniken Bad Homburg. The selected patients comprised 61 primary umbilical hernias, five recurrent umbilical hernias, eleven epigastric hernias, three recurrent epigastric hernias and 15 combined procedures with simultaneous repair of an umbilical and an inguinal hernia.
The surgical techniques used were edge-to-edge suture (NSAS), the Mayo technique with fascial doubling, or implantation of alloplastic material, either a Ventralex™ patch or a Proceed™ patch in sublay technique, or, for extensive findings, retromuscular mesh plasty (RMMP). The laparoscopic technique used was the intraperitoneal onlay mesh (IPOM).
Descriptive statistics were computed with Microsoft® Excel® 2013; exploratory and inferential statistics were subsequently computed with BiAS. für Windows™, version 11/2015.
Correlation analyses of the patient cohort showed that age, the number of comorbidities, the number of risk factors, the ASA classification (American Society of Anesthesiologists), hernia defect size in centimetres and pain on the second postoperative day were weakly correlated with length of stay, with correlation coefficients rho (ρ) between 0.23 and 0.39, yet significant p-values (p ≤ 0.05). Operation duration and pain on the first postoperative day showed stronger associations with length of stay, with ρ of 0.42 and 0.40, respectively. Pain on the third postoperative day showed the strongest significant association, with ρ = 0.64.
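The rank correlations with length of stay described above can be sketched as follows. This is an illustrative example only, with synthetic data and assumed variable names, not the study's dataset.

```python
# Hedged sketch: Spearman rank correlations (rho) between candidate
# predictors and postoperative length of stay, as in the analysis above.
# All data below are synthetic; variable names are assumptions.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
n = 95  # cohort size reported in the study

length_of_stay = rng.integers(1, 8, size=n).astype(float)  # days
# Synthetic predictors loosely coupled to length of stay
pain_day3 = length_of_stay + rng.normal(0, 1.5, size=n)        # pain score
op_duration = 30 + 10 * length_of_stay + rng.normal(0, 25, n)  # minutes

for name, values in [("pain day 3", pain_day3), ("OP duration", op_duration)]:
    rho, p = spearmanr(values, length_of_stay)
    print(f"{name}: rho = {rho:.2f}, p = {p:.4f}")
```

With real data, the same call yields the weak (ρ 0.23–0.39) to moderate (ρ up to 0.64) associations reported above.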
Length of stay was also influenced by the choice of surgical technique: a significant prolongation of length of stay by surgical technique was found both in the overall cohort and in the NSAS, Mayo and patch subgroups.
Multivariate analyses subsequently showed that operation duration, surgical technique and ASA classification correlated significantly with length of stay (p ≤ 0.05), as did hernia defect size in centimetres and pain on the first and second postoperative days (p ≤ 0.05).
Based on this analysis and on the literature, the prerequisites for outpatient surgery are fulfilment of the medical requirements, of the criteria for outpatient operations and of the discharge criteria. Patients with cardiovascular disease, in particular heart failure, but also COPD, asthma, sleep apnoea syndrome or a BMI above 30 should not be considered for outpatient surgery. An ASA status above 2, side effects of (general) anaesthesia such as PONV, dizziness and drowsiness, an elevated postoperative pain level and a large defect size are likewise considered obstacles to performing these operations on an outpatient basis.
...
Conduct disorder (CD), a psychiatric disorder characterized by a repetitive pattern of antisocial behaviors, results from a complex interplay between genetic and environmental factors. The clinical presentation of CD varies both according to the individual’s sex and level of callous-unemotional (CU) traits, but it remains unclear how genetic and environmental factors interact at the molecular level to produce these differences. Emerging evidence in males implicates methylation of genes associated with socio-affective processes. Here, we combined an epigenome-wide association study with structural neuroimaging in 51 females with CD and 59 typically developing (TD) females to examine DNA methylation in relation to CD, CU traits, and gray matter volume (GMV). We demonstrate an inverse pattern of correlation between CU traits and methylation of a chromosome 1 region in CD females (positive) as compared to TD females (negative). The identified region spans exon 1 of the SLC25A24 gene, central to energy metabolism due to its role in mitochondrial function. Increased SLC25A24 methylation was also related to lower GMV in multiple brain regions in the overall cohort. These included the superior frontal gyrus, dorsolateral prefrontal cortex, supramarginal gyrus, secondary visual cortex and ventral posterior cingulate cortex, which are regions that have previously been implicated in CD and CU traits. While our findings are preliminary and need to be replicated in larger samples, they provide novel evidence that CU traits in females are associated with methylation levels in a fundamentally different way in CD and TD individuals, which in turn may relate to observable variations in GMV across the brain.
Background: Glucagon-like peptide-1 receptor agonists may be a treatment option in patients with non-alcoholic fatty liver disease (NAFLD). Aims: To investigate the effects of semaglutide on liver stiffness and liver fat in subjects with NAFLD using non-invasive magnetic resonance imaging (MRI) methods. Methods: This randomised, double-blind, placebo-controlled trial enrolled subjects with liver stiffness 2.50-4.63 kPa by magnetic resonance elastography (MRE) and liver steatosis ≥10% by MRI proton density fat fraction (MRI-PDFF). The primary endpoint was change from baseline to week 48 in liver stiffness assessed by MRE. Results: Sixty-seven subjects were randomised to once-daily subcutaneous semaglutide 0.4 mg (n = 34) or placebo (n = 33). Change from baseline in liver stiffness was not significantly different between semaglutide and placebo at week 48 (estimated treatment ratio 0.96 [95% CI 0.89, 1.03]; P = 0.2798); significant differences in liver stiffness were not observed at weeks 24 or 72. Reductions in liver steatosis were significantly greater with semaglutide at weeks 24, 48 and 72 (estimated treatment ratios: 0.70 [0.59, 0.84], P = 0.0002; 0.47 [0.36, 0.60], P < 0.0001; and 0.50 [0.39, 0.66], P < 0.0001), and more subjects achieved a ≥30% reduction in liver fat content with semaglutide at each of these time points (all P < 0.001). Decreases in liver enzymes, body weight and HbA1c were also observed with semaglutide. Conclusions: The change in liver stiffness in subjects with NAFLD was not significantly different between semaglutide and placebo. However, semaglutide significantly reduced liver steatosis compared with placebo which, together with improvements in liver enzymes and metabolic parameters, suggests a positive impact on disease activity and metabolic profile. ClinicalTrials.gov identifier: NCT03357380
Aim: Comparison of the clinical efficacy (digitally volumetric, aesthetic, patient-centred outcomes) of tunnel technique (TUN) with subepithelial connective tissue graft (CTG) versus coronally advanced flap (CAF) with enamel matrix derivative (EMD) 5 years after gingival recession therapy. Materials and methods: In 18 patients contributing 36 RT1 recessions, study models were collected at baseline and follow-ups. Optical scans were used for computer-assisted assessment of recessions [recession depth, recession reduction (RECred), complete root coverage (CRC), percentage of root coverage (RC), pointwise (pTHK) and mean areal (aTHK) marginal soft tissue thickness]. The Root Coverage Esthetic Score (RES) was used for aesthetic evaluation, and visual analogue scales were applied for patient-centred data collection. Results: Sixty months after surgery, 50.0% (TUN+CTG) and 0.0% (CAF+EMD) of sites showed CRC (p = 0.0118), and 82.2% (TUN+CTG) and 32.0% (CAF+EMD) achieved RC, respectively (p = 0.0023). CTG achieved significantly better RECred (TUN+CTG: 1.75±0.74 mm; CAF+EMD: 0.50 ± 0.39 mm; p = 0.0009) and aTHK (TUN+CTG: 0.95 ± 0.41 mm; CAF+EMD: 0.26 ± 0.28 mm; p = 0.0013). RES showed superior outcomes (p = 0.0533) for TUN+CTG (6.86 ± 2.31) compared to CAF+EMD (4.63 ± 1.99). The study failed to find significant differences related to patient-centred outcomes (TUN+CTG: 8.30 ± 2.21; CAF+EMD: 7.50 ± 1.51; p = 0.1136). Conclusions: Five years after treatment, CTG resulted in better clinical and aesthetic outcomes than CAF+EMD. Increased THK was associated with improved outcomes for RECred and RC.
Background: To test for rates of other cause mortality (OCM) and cancer-specific mortality (CSM) in elderly prostate cancer (PCa) patients treated with the combination of radical prostatectomy (RP) and external beam radiation therapy (EBRT) versus RP alone, since elderly PCa patients may be over-treated. Methods: Within the Surveillance, Epidemiology and End Results database (2004–2016), cumulative incidence plots, after propensity score matching for cT-stage, cN-stage, prostate specific antigen, age and biopsy Gleason score, and multivariable competing risks regression models (socioeconomic status, pathological Gleason score) addressed OCM and CSM in patients (70–79, 70–74, and 75–79 years) treated with RP and EBRT versus RP alone. Results: Of 18,126 eligible patients aged 70–79 years, 2520 (13.9%) underwent RP and EBRT versus 15,606 (86.1%) RP alone. After propensity score matching, 10-year OCM rates were respectively 27.9 versus 20.3% for RP and EBRT versus RP alone (p < .001), which resulted in a multivariable HR of 1.4 (p < .001). Moreover, 10-year CSM rates were respectively 13.4 versus 5.5% for RP and EBRT versus RP alone. In subgroup analyses separately addressing 70–74 year old and 75–79 years old PCa patients, 10-year OCM rates were 22.8 versus 16.2% and 39.5 versus 24.0% for respectively RP and EBRT versus RP alone patients (all p < .001). Conclusion: Elderly patients treated with RP and EBRT exhibited worrisome rates of OCM. These higher than expected OCM rates question the need for combination therapy (RP and EBRT) in elderly PCa patients and indicate the need for better patient selection, when combination therapy is contemplated.
Aims: Parkinson's disease (PD) is frequently associated with a prodromal sensory neuropathy manifesting with sensory loss and chronic pain. We have recently shown that PD-associated sensory neuropathy in patients is associated with high levels of glucosylceramides. Here, we assessed the underlying pathology and mechanisms in Pink1−/−SNCAA53T double mutant mice. Methods: We studied nociceptive and olfactory behaviour and the neuropathology of dorsal root ganglia (DRGs), including ultrastructure, mitochondrial respiration, transcriptomes, outgrowth and calcium currents of primary neurons, and tissue ceramides and sphingolipids before the onset of a PD-like disease that spontaneously develops in Pink1−/−SNCAA53T double mutant mice beyond 15 months of age. Results: Similar to PD patients, Pink1−/−SNCAA53T mice developed a progressive prodromal sensory neuropathy with a loss of thermal sensitivity starting as early as 4 months of age. In analogy to human plasma, lipid analyses revealed an accumulation of glucosylceramides (GlcCer) in the DRGs and sciatic nerves, which was associated with pathological mitochondria, impairment of mitochondrial respiration, and deregulation of transient receptor potential channels (TRPV and TRPA) at mRNA, protein and functional levels in DRGs. Direct exposure of DRG neurons to GlcCer caused transient hyperexcitability, followed by a premature decline of the viability of sensory neurons cultures upon repeated GlcCer application. Conclusions: The results suggest that pathological GlcCer contribute to prodromal sensory disease in PD mice via mitochondrial damage and calcium channel hyperexcitability. GlcCer-associated sensory neuron pathology might be amenable to GlcCer lowering therapeutic strategies.
Amide proton transfer-chemical exchange saturation transfer (APT-CEST) imaging provides important information for the diagnosis and monitoring of tumors. For such analysis, complete coverage of the brain is advantageous, especially when registration is performed with other magnetic resonance (MR) modalities, such as MR spectroscopy (MRS). However, the acquisition of Z-spectra across several slices via multislice imaging may be time-consuming. Therefore, in this paper, we present a new approach for fast multislice imaging, allowing us to acquire 16 slices per frequency offset within 8 s. The proposed fast CEST-EPI sequence employs a presaturation module, which drives the magnetization into the steady-state equilibrium for the first frequency offset. A second module, consisting of a single CEST pulse (for maintaining the steady-state) followed by an EPI acquisition, passes through a loop to acquire multiple slices and adjacent frequency offsets. Thus, the whole Z-spectrum can be recorded much faster than the conventional saturation scheme, which employs a presaturation for each single frequency offset. The validation of the CEST sequence parameters was performed by using the conventional saturation scheme. Subsequently, the proposed and a modified version of the conventional CEST sequence were compared in vitro on a phantom with different T1 times and in vivo on a brain tumor patient. No significant differences between both sequences could be found in vitro. The in vivo data yielded almost identical MTRasym contrasts for the white and gray matter as well as for tumor tissue. Our results show that the proposed fast CEST-EPI sequence allows for rapid data acquisition and provides similar CEST contrasts as the modified conventional scheme while reducing the scanning time by approximately 50%.
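The MTRasym contrast compared above is derived from the acquired Z-spectrum by magnetization transfer ratio asymmetry analysis. A minimal sketch, using the standard definition MTRasym(Δω) = Z(−Δω) − Z(+Δω) on a synthetic (not study-derived) Z-spectrum:

```python
# Hedged sketch of MTRasym analysis on a synthetic Z-spectrum.
# The spectrum shape and offsets below are assumptions for illustration.
import numpy as np

offsets = np.linspace(-5.0, 5.0, 41)  # saturation frequency offsets in ppm
# Synthetic S0-normalised Z-spectrum: direct water saturation at 0 ppm
# plus a small CEST dip at +3.5 ppm (amide proton pool)
z = (1.0
     - 0.8 * np.exp(-offsets**2 / 0.5)
     - 0.05 * np.exp(-(offsets - 3.5)**2 / 0.1))

def mtr_asym(offsets, z, delta):
    """MTRasym(delta) = Z(-delta) - Z(+delta); Z is already S0-normalised."""
    z_neg = np.interp(-delta, offsets, z)
    z_pos = np.interp(+delta, offsets, z)
    return z_neg - z_pos

print(f"MTRasym at 3.5 ppm: {mtr_asym(offsets, z, 3.5):.3f}")
```

The same computation applies per voxel regardless of whether the Z-spectrum was acquired with the conventional per-offset presaturation or the fast steady-state multislice scheme.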
Objective: To assess the effect of cesarean section (CS) timing, elective versus unplanned, on the residual myometrial thickness (RMT) and CS scars. Methods: This is a prospective single-blinded observational cohort study with 186 observations. Patients indicated to undergo a first singleton CS were preoperatively recruited. Exclusion criteria were history of repeated CS, vertical hysterotomy, diabetes, and additional uterine surgeries. Sonographic examination was performed 1 year after CS to assess the RMT ratio, the presence of a niche, fibrosis, and the distance from the scar to the internal os (SO). Power analysis was performed with α = 0.05 and β = 0.1, and all statistical analyses were conducted with Stata®. Results: Wilcoxon rank-sum tests for the association of CS timing with the RMT ratio and SO showed Z values of −0.59 and −4.94 (P = 0.553 and P < 0.001), respectively. There was no association between CS timing and niches or fibrosis (P > 0.99 and P = 0.268, respectively). Linear regression of SO on the extent of cervical dilatation showed a β of −0.45 (95% confidence interval −0.68 to −0.21) and an intercept of 10.22 mm (P < 0.001). Conclusion: RMT is independent of the timing of CS, but the SO distance shows a negative linear relationship with cervical dilatation.
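The two analyses described above, a Wilcoxon rank-sum test of SO by CS timing and a linear regression of SO on cervical dilatation, can be sketched as follows. Data, group sizes per arm and effect sizes are synthetic assumptions for illustration, not the study's data.

```python
# Hedged sketch of the study's two main analyses on synthetic data.
import numpy as np
from scipy.stats import ranksums, linregress

rng = np.random.default_rng(42)

# Synthetic scar-to-internal-os distances (mm), elective vs. unplanned CS
so_elective = rng.normal(10.0, 2.0, size=90)
so_unplanned = rng.normal(7.5, 2.5, size=96)
z_stat, p_value = ranksums(so_unplanned, so_elective)
print(f"Wilcoxon rank-sum: Z = {z_stat:.2f}, P = {p_value:.4f}")

# Synthetic regression: SO decreases with cervical dilatation (cm)
dilatation = rng.uniform(0, 10, size=186)
so = 10.22 - 0.45 * dilatation + rng.normal(0, 1.5, size=186)
fit = linregress(dilatation, so)
print(f"beta = {fit.slope:.2f}, intercept = {fit.intercept:.2f} mm")
```

`scipy.stats.ranksums` reports the same Z statistic and P value structure as the Stata® output quoted in the abstract.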
Aim: To evaluate preclinical education in Endodontology at Austrian, German and Swiss dental schools using an online survey. Methodology: An online survey divided into nine categories was sent using SurveyMonkey software to 37 dental schools, before the spread of the COVID-19 pandemic. The questionnaire included 50 questions to evaluate preclinical endodontic education, such as faculty-to-student ratios, topics taught and materials used, in preclinical phantom head courses. Seven and 14 days after the first e-mail contact, dental schools received a reminder e-mail. After four and six weeks, the dental schools were contacted by telephone and asked to participate in the online survey. The processing time was eight weeks in total. Results: The response rate was 89%. Preclinical endodontic education at the participating dental schools differs considerably. Theory classes ranged from 1 to 70 h (15 h mean), and practical classes ranged from 3 to 78 h (39 h mean). The faculty-to-student ratio varied between 1:4 and 1:38 (1:15 mean). Forty-five per cent of the dental schools had a specialist in endodontics teaching theory. Several dental microscopes were available for preclinical teaching purposes at 82% of the dental schools. The majority (82%) taught root canal preparation with rotary or reciprocating NiTi instruments. Overall, 85% of the dental schools taught lateral compaction, amongst other methods, for canal filling. Conclusion: A substantial divergence amongst the dental schools regarding the time dedicated to theory and practical instruction in Endodontology was reported. However, convergence in the use of root canal treatment techniques and materials was reported.
Background: Recently, an increase in the rates of high-risk prostate cancer (PCa) was reported. We tested whether the rates of low, intermediate, high and very high-risk PCa changed over time. We also tested whether the number of prostate biopsy cores contributed to changing rates over time. Methods: Within the Surveillance, Epidemiology and End Results (SEER) database (2010–2015), annual rates of low, intermediate and high-risk PCa according to traditional National Comprehensive Cancer Network (NCCN) criteria, and of high versus very high-risk PCa according to the Johns Hopkins classification, were tabulated without and with adjustment for the number of prostate biopsy cores. Results: In 119,574 eligible prostate cancer patients, the rates of NCCN low, intermediate, and high-risk PCa were, respectively, 29.7%, 47.8%, and 22.5%. Of high-risk patients, 39.6% and 60.4% fulfilled high and very high-risk criteria, respectively. Without adjustment for the number of prostate biopsy cores, the estimated annual percentage changes (EAPC) for low, intermediate, high and very high-risk were respectively −5.5% (32.4%–24.9%, p < .01), +0.5% (47.6%–48.4%, p = .09), +4.1% (8.2%–9.9%, p < .01), and +8.9% (11.8%–16.9%, p < .01), between 2010 and 2015. After adjustment for the number of prostate biopsy cores, differences in rates over time disappeared and ranged from 29.8%–29.7% for low risk, 47.9%–47.9% for intermediate risk, 8.9%–9.0% for high-risk, and 13.6%–13.6% for very high-risk PCa (all p > .05). Conclusions: The rates of high and very high-risk PCa are strongly associated with the number of prostate biopsy cores, which in turn may be driven by broader use of magnetic resonance imaging (MRI).
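The EAPC statistic reported above is conventionally obtained from a log-linear regression of annual rates on calendar year, with EAPC = 100 × (exp(slope) − 1). A minimal sketch; the intermediate-year rates below are interpolated assumptions (only the 2010 and 2015 endpoints, 11.8% and 16.9%, come from the abstract), so the resulting EAPC will not exactly reproduce the reported +8.9%:

```python
# Hedged sketch of the EAPC computation on partly assumed annual rates.
import numpy as np
from scipy.stats import linregress

years = np.arange(2010, 2016)
# Very high-risk PCa rates (%): endpoints from the abstract,
# intermediate years interpolated for illustration only
rates = np.array([11.8, 12.6, 13.5, 14.6, 15.7, 16.9])

fit = linregress(years, np.log(rates))
eapc = 100 * (np.exp(fit.slope) - 1)
print(f"EAPC = {eapc:+.1f}% per year")
```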
BAG3 is a negative regulator of ciliogenesis in glioblastoma and triple-negative breast cancer cells
(2021)
By regulating several hallmarks of cancer, BAG3 exerts oncogenic functions in a wide variety of malignant diseases including glioblastoma (GBM) and triple-negative breast cancer (TNBC). Here we performed global proteomic/phosphoproteomic analyses of CRISPR/Cas9-mediated isogenic BAG3 knockouts of the two GBM lines U343 and U251 in comparison to parental controls. Depletion of BAG3 evoked major effects on proteins involved in ciliogenesis/ciliary function and the activity of the related kinases aurora-kinase A and CDK1. Cilia formation was significantly enhanced in BAG3 KO cells, a finding that could be confirmed in BAG3-deficient versus -proficient BT-549 TNBC cells, thus identifying a completely novel function of BAG3 as a negative regulator of ciliogenesis. Furthermore, we demonstrate that enhanced ciliogenesis and reduced expression of SNAI1 and ZEB1, two key transcription factors regulating epithelial to mesenchymal transition (EMT) are correlated to decreased cell migration, both in the GBM and TNBC BAG3 knockout cells. Our data obtained in two different tumor entities identify suppression of EMT and ciliogenesis as putative synergizing mechanisms of BAG3-driven tumor aggressiveness in therapy-resistant cancers.
Rationale: Postinfectious bronchiolitis obliterans (PIBO) is a rare, chronic respiratory condition, which follows an acute insult due to a severe infection of the lower airways. Objectives: The objective of this study was to investigate the long-term course of bronchial inflammation and pulmonary function testing in children with PIBO. Methods: Medical charts of 21 children with PIBO were analyzed retrospectively at the Children's University Hospital Frankfurt/Main, Germany. Pulmonary function tests (PFTs) with an interval of at least 1 month were studied between 2002 and 2019. A total of 382 PFTs were analyzed retrospectively and, per year, the two best PFTs, 217 in total, were evaluated. Additionally, 56 sputum analyses were assessed and the sputum neutrophils were evaluated. Results: The evaluation of the 217 PFTs showed a decrease in FEV1 with a loss of 1.07% and a loss in z score of −0.075 per year. FEV1/FVC decreased by 1.44 per year. FVC remained stable, showing a nonsignificant increase of 0.006 in z score per year. However, FEV1 and FVC in L increased significantly with height, by 0.032 L (FEV1) and 0.048 L (FVC) per cm. Sputum neutrophils showed a significant increase of 2.12% per year. Conclusion: Our results demonstrated that in patients with PIBO pulmonary function decreased significantly, showing persistent obstruction over an average follow-up period of 8 years. However, persistent lung growth was revealed. In addition, pulmonary inflammation persisted, clearly shown by an increasing amount of neutrophils in induced sputum. Patients did not present with a general susceptibility to respiratory infections.
Objective: To assess tooth loss (TL) in initially periodontally healthy/gingivitis (PHG) and periodontally compromised (PC) individuals during a 15- to 25-year follow-up in a specialist practice and to identify the factors influencing TL. Materials and methods: Patients were re-examined 240 ± 60 months after active periodontal therapy (PC) or initial examination (PHG). PHG patients were periodontally healthy or had gingivitis, and PC patients exhibited at least stage II periodontitis. TL, patient-related outcomes, and risk factors for TL were assessed at the patient level (group-relation, gender, age, smoking, bleeding on probing, educational status, mean number of visits/year). Results: Fifty-six PC patients receiving regular supportive periodontal care (12 female, mean age 49.1 ± 10.9 years, stage II: 10, stage III/IV: 46) lost 38 teeth (0.03 ± 0.05 teeth/year). Fifty-one PHG patients (23 female, mean age 34.5 ± 12.4 years) following regular oral prevention lost 39 teeth (0.04 ± 0.05 teeth/year) (p = .631). Both PC and PHG groups did not show any significant differences regarding visual analogue scale measurements [aesthetics (p = .309), chewing function (p = .362), hygiene (p = .989)] and overall Oral Health Impact Profile (p = .484). Age at the start of follow-up was identified as a risk factor for TL (p < .0001). Conclusion: PC and PHG patients exhibited similarly small TL rates over 240 ± 60 months, which should, however, be interpreted with caution in view of the group heterogeneity. Clinical trial number: DRKS00018840 (URL: https://drks.de).
Degradation of the endoplasmic reticulum (ER) via selective autophagy (ER-phagy) is vital for cellular homeostasis. We identify FAM134A/RETREG2 and FAM134C/RETREG3 as ER-phagy receptors, which predominantly exist in an inactive state under basal conditions. Upon autophagy induction and ER stress signal, they can induce significant ER fragmentation and subsequent lysosomal degradation. FAM134A, FAM134B/RETREG1, and FAM134C are essential for maintaining ER morphology in a LC3-interacting region (LIR)-dependent manner. Overexpression of any FAM134 paralogue has the capacity to significantly augment the general ER-phagy flux upon starvation or ER-stress. Global proteomic analysis of FAM134 overexpressing and knockout cell lines reveals several protein clusters that are distinctly regulated by each of the FAM134 paralogues as well as a cluster of commonly regulated ER-resident proteins. Utilizing pro-Collagen I, as a shared ER-phagy substrate, we observe that FAM134A acts in a LIR-independent manner and compensates for the loss of FAM134B and FAM134C, respectively. FAM134C instead is unable to compensate for the loss of its paralogues. Taken together, our data show that FAM134 paralogues contribute to common and unique ER-phagy pathways.
Introduction: Adeno-associated virus (AAV)-based gene therapy for haemophilia presents a challenge to the existing structure of haemophilia centres and requires a rethink of current collaboration and information exchange with the aim of ensuring a system that is fit-for-purpose for advanced therapies to maximise benefits and minimise risks. In Europe, a certification process based on the number of patients and facilities is offered to the haemophilia centres by European Haemophilia Network (EUHANET). Aim and methods: This joint European Association for Haemophilia and Allied Disorders (EAHAD) and European Haemophilia Consortium (EHC) publication describes criteria for centres participating in gene therapy care that require a reassessment of the infrastructure of comprehensive care and provides an outlook on how these criteria can be implemented in the future work of haemophilia centres. Results: The core definition of a haemophilia treatment centre remains, but additional roles could be implemented. A modifiable ‘hub-and-spoke’ model addresses all aspects associated with gene therapy, including preparation and administration of the gene therapy product, determination of coagulation and immunological parameters, joint score and function, and liver health. This will also include the strategy on how to follow-up patients for a long-term safety and efficacy surveillance. Conclusion: We propose a modifiable, networked ‘hub and spoke’ model with a long term safety and efficacy surveillance system. This approach will be progressively developed with the goal of making haemophilia centres better qualified to deliver gene therapy and to make gene therapy accessible to all persons with haemophilia, irrespective of their country or centre of origin.
Aim: It can be challenging to distinguish COVID-19 in children from other common infections. We set out to determine the rate at which children consulting a primary care paediatrician with an acute infection are infected with SARS-CoV-2 and to compare distinct findings. Method: In seven out-patient clinics, children aged 0–13 years with any new respiratory or gastrointestinal symptoms and presumed infection were invited to be tested for SARS-CoV-2. Factors that were correlated with testing positive were determined. Samples were collected from 25 January 2021 to 01 April 2021. Results: Seven hundred and eighty-three children participated in the study (median age 3 years and 0 months, range 1 month to 12 years and 11 months). Three hundred and fifty-eight were female (45.7%). SARS-CoV-2 RNA was detected in 19 (2.4%). The most common symptoms in children with as well as without detectable SARS-CoV-2 RNA were rhinitis, fever and cough. Known recent exposure to a case of COVID-19 was significantly correlated with testing positive, but symptoms or clinical findings were not. Conclusion: COVID-19 among the children with symptoms of an acute infection was uncommon, and the clinical presentation did not differ significantly between children with and without evidence of an infection with SARS-CoV-2.
The majority of excitatory synapses terminating on cortical neurons are found on dendritic spines. The geometry of spines, in particular the size of the spine head, tightly correlates with the strength of the excitatory synapse formed with the spine. Under conditions of synaptic plasticity, spine geometry may change, reflecting functional adaptations. Since the cytokine tumor necrosis factor (TNF) has been shown to influence synaptic transmission as well as Hebbian and homeostatic forms of synaptic plasticity, we speculated that TNF-deficiency may cause concomitant structural changes at the level of dendritic spines. To address this question, we analyzed spine density and spine head area of Alexa568-filled granule cells in the dentate gyrus of adult C57BL/6J and TNF-deficient (TNF-KO) mice. Tissue sections were double-stained for the actin-modulating and plasticity-related protein synaptopodin (SP), a molecular marker for strong and stable spines. Dendritic segments of TNF-deficient granule cells exhibited ∼20% fewer spines in the outer molecular layer of the dentate gyrus compared to controls, indicating a reduced afferent innervation. Of note, these segments also had larger spines containing larger SP-clusters. This pattern of changes is strikingly similar to the one seen after denervation-associated spine loss following experimental entorhinal denervation of granule cells: Denervated granule cells increase the SP-content and strength of their remaining spines to homeostatically compensate for those that were lost. Our data suggest a similar compensatory mechanism in TNF-deficient granule cells in response to a reduction in their afferent innervation.
Background: Neuroblastoma is the most common extracranial solid tumour of childhood. Despite advances in therapy, patients in the high-risk group still have a very poor prognosis. The development of resistance and the subsequent progression of the disease are characteristic phenomena within this patient group.
The characterisation presented here of the MYCN-amplified, cisplatin-adapted, chemoresistant neuroblastoma sublines UKF-NB-3rCDDP1000 I to XII is a fundamental step towards a better understanding of the phenotype of multiresistant/high-risk neuroblastoma. Furthermore, this characterisation could lead to a better understanding of the role of cancer stem cells in neuroblastoma.
Methods: Sensitivity to various cytostatic drugs was examined by viability assay. The expression of several stem cell markers was assessed by flow cytometry. Western blotting was used to examine the expression of the proteins p53, p21, XIAP and survivin. Proliferation of the individual sublines was examined by colony formation assay.
Results: This work demonstrated that the cisplatin-adapted sublines show additional resistance to further classical cytostatic drugs. Beyond the acquired cisplatin resistance, the cisplatin sublines show increased IC50 values for the agents YM-155, doxorubicin, melphalan, vincristine, docetaxel, etoposide, carboplatin, and vinblastine (each compared to UKF-NB-3). Of the classical cytostatic drugs tested, only gemcitabine retained good efficacy against the cisplatin-adapted sublines. The expression of several stem cell markers was demonstrated both in the cisplatin-resistant sublines and in the parental cell line UKF-NB-3. Cisplatin adaptation resulted in differences in the expression of CD-133, Nanog, nestin, Sox-2, and GD2. No major differences were found in the colony formation assay; the cisplatin-adapted sublines tend to show lower colony formation than UKF-NB-3.
Conclusion: The detection of different stem cell markers in the neuroblastoma sublines UKF-NB-3rCDDP1000 I to XII is an important indication of the existence of cells with stem cell properties within the sublines.
A better understanding of the biological characteristics of resistant neuroblastoma cells could reveal novel targeted therapeutic strategies. Many of the molecules examined in this work may play a role in the development of resistance and in maintaining the proliferation and survival of neuroblastoma cells and neuroblastoma cancer stem cells. Consequently, these target molecules (CD-133, Nanog, nestin, Sox-2, and GD2) could be used in the future to develop new therapeutic strategies that more effectively kill both multiresistant neuroblastoma cells and neuroblastoma cancer stem cells. In addition, gemcitabine is of clinical interest as a drug after cisplatin therapy.
In adult acute lymphoblastic leukemia (ALL), the 5-year survival rate remains below 40% despite improved therapies. The prognosis is significantly worsened by the occurrence of relapses. ALL arises from genetic alterations of lymphoid progenitor cells in the bone marrow, which lead to a differentiation block and a strong expansion of progenitor cells. One possible explanation for the persistently high risk of relapse is the incomplete elimination of leukemia-initiating cells (LIC) by primary therapy. Identification and characterization of LIC in ALL by means of specific surface markers has not been possible so far; the molecular and functional characterization of LIC is therefore indispensable for the development of modern therapeutic approaches. In preliminary work, metabolic analyses of primary long-term ALL cultures (LTC) showed that, with increasing leukemogenic potential of the established LTC, carbohydrate metabolism deviated markedly from the physiological metabolic profile of a bone marrow cell toward the use of glycolysis. Consequently, this dissertation investigated the relationship between higher glucose affinity, faster glucose uptake, and a higher leukemogenic potential of the cells, and thus a definition of LIC based on their energy metabolism.
For this purpose, in vivo experiments in a mouse model and in vitro assays were performed with three ALL LTC: CR, PH, and BV. Using the fluorescently labeled glucose analog 2-NBDG and an antibody directed against GLUT-1, we established flow cytometric methods for the quantitative measurement of glucose uptake. Based on these parameters, different cell populations of the LTC were enriched by FACS and xenotransplanted to evaluate potential differences in leukemogenic potential.
Flow cytometric measurements distinguished three subpopulations of cells in each of the three LTC based on their glucose affinity (2-NBDG negative, 2-NBDG positive, and 2-NBDG high-affinity). Differences in the kinetics of glucose uptake were also observed among the three LTC tested, with CR cells taking up 2-NBDG by far the fastest, followed by PH. The faster glucose uptake of the LTC CR and PH was caused by increased expression of the GLUT-1 receptor and a higher proportion of GLUT-1-positive cells. Interestingly, higher leukemogenic potential also correlated with faster glucose uptake and stronger GLUT-1 expression. Here, HIF-1α stabilization under normoxia resulted in increased GLUT-1 expression and consequently increased glucose uptake. Prospective enrichment of distinct cell subpopulations of the LTC CR and PH based on their glucose uptake (measured by 2-NBDG) and transplantation of the sorted cell populations into NSG recipient mice showed no coherent relationship between the glucose affinity of the cells and the development of leukemia. While 2-NBDG-positive sorted CR leukemia cells initially expanded faster, without a significant effect on the overall survival of the recipient mice, serial transplantation of 2-NBDG-negative cells led to earlier death of the animals. For the LTC PH, 2-NBDG-negative cells expanded faster in primary recipient mice than positive cells. Cytotoxic effects of 2-NBDG itself could be excluded. Likewise, transplantation of GLUT-1-positive and GLUT-1-negative CR cells showed that GLUT-1-negative cells expanded faster in the mice, caused a more aggressive leukemia, and led to earlier death of the mice.
These results show no direct correlation between glucose uptake or GLUT-1 expression and the leukemogenicity of the ALL cells examined. These properties therefore cannot be used to prospectively enrich LIC in ALL. However, this dissertation also showed that the LTC differed in their overall populations with respect to glucose uptake behavior and the proportion of GLUT-1-positive cells. Further studies are needed to determine the reason for the differential expression of GLUT-1, and the associated increased glucose uptake, of individual cells in ALL.
Stereotactic biopsies have been standard operations in many neurosurgical departments for many years. Samples of brain lesions are taken for histopathological examination.
The histopathological diagnosis of unclear brain lesions is essential for adequate therapy. Further treatment may consist of radiotherapy, chemotherapy, a combination of both, or resection. In a few cases, a second or third biopsy is needed to reach a definitive diagnosis. The aim of this study was to examine in more detail those patients in whom the first biopsy had not yielded a definitive result. Most of these patients had to undergo a second biopsy. We conducted a comprehensive review of the last 10 years and compiled a database of the patients in whom the first biopsy had not yielded a result.
Clinical parameters that may influence a non-diagnostic biopsy were recorded, described, and discussed. The parameters included the number of samples taken, contrast enhancement of the lesion, location of the lesion, the surgeon's experience, the suspected neuroradiological diagnosis, and prior treatment.
In this retrospective study, we focused on the clinical aspects of the individual patients in whom the first biopsy did not yield a definitive result.
No abnormalities were found that were positively associated with a non-diagnostic biopsy.
We conclude that a definitive diagnosis can be expected in most cases. It remains unclear in which patients the biopsy will be non-diagnostic, requiring them to undergo a repeat biopsy.
When human remains are found, in addition to estimating the postmortem interval, the question of a possible homicide invariably arises. Since soft tissue withstands decomposition, putrefaction, and environmental influences only to a limited extent, it is only conditionally suited to preserving traces of violence over the long term. Bone tissue, by contrast, can display lesions nearly unchanged even after long periods and thus represents a forensically important carrier of evidence.
This retrospective study sought to clarify to what extent bony injuries occur in homicides. A further key question was whether defined forms of lethal violence are associated with different frequencies of bony injuries, and whether preferentially affected body regions can be identified.
Evaluation of the autopsy protocols of a total of 897 homicides committed in Germany and abroad and autopsied at the Institute of Legal Medicine in Frankfurt am Main between 01.01.1994 and 31.12.2014 showed that, regardless of the type of lethal violence, 70.9% of victims exhibited at least one bony injury; moreover, multiple bony injuries were detected in 45.5% of victims.
It was also found that different defined forms of lethal violence result in correspondingly characteristic frequencies and distributions of bony injuries. Bony lesions were most frequent in gunshot victims, at 92.6%, followed by blunt force (80%) and sharp force (66.3%); even after lethal violence against the neck, at least one bony lesion was detectable in 53.3% of cases.
The absence of bony injuries in 29% of the 897 homicides examined during the study period also shows that even with completely intact skeletal remains, homicide can by no means be excluded. In addition to the forensic autopsy, complementary forensic workup of the human remains should always be demanded. Physical and chemical methods should be considered here, but above all radiological examinations. Further analysis of the results obtained, within the scope of a subsequent study, should clarify the value that can be attributed to postmortem computed tomography.
Correction to: Infection (2020) 48:723–733 https://doi.org/10.1007/s15010-020-01469-6. The original version of this article unfortunately contained a mistake. In this article the authors Dirk Schürmann at affiliation Charité, University Medicine, Berlin, Olaf Degen at affiliation University Clinic Hamburg Eppendorf, Hamburg and Heinz-August Horst at affiliation University Hospital Schleswig–Holstein, Kiel, Germany were missing from the author list. The original article has been corrected.
Objective: Prisoners constitute a high-risk group for suicide, with suicide rates about 5 to 8 times higher than in the general population. The first weeks of imprisonment are a particularly vulnerable time, but there is limited knowledge about the risk factors for either early or late suicide events. Methods: Based on a national total sample of prison suicides in Germany between 2005 and 2017, suicides within the first 2 (4 and 8) weeks after reception into prison were matched by age and penalty length with cases that occurred later. Factors that potentially influence the timing of suicide were investigated. Results: The study has shown that 16.7% (31.5%) of all 390 suicides in German prisons occurred within the first two weeks (two months) of imprisonment. Factors that facilitate adaptation to the prison environment (e.g. prior prison experience) were negatively associated with early suicide events. Factors that hindered the adaptation process (e.g. withdrawal from illicit drugs) were observed more frequently in early suicide events than in late ones. These factors are active at different times of imprisonment. Conclusion: At reception, particular attention should be paid to the following factors associated with early suicide events: widowed marital status, lack of prison experience, and drug dependency.
Objective: We investigated the health-related quality of life (HRQoL) of patients with gastrointestinal stromal tumours (GIST). Methods: In the multicentre PROSa study, the HRQoL of adult GIST patients was assessed between 2017 and 2019 using the European Organisation for Research and Treatment of Cancer HRQoL questionnaire (EORTC QLQ-C30). We performed group comparisons and multivariate linear regressions. Results: Among 130 patients from 13 centres, the mean global HRQoL was 63.3 out of 100 points; higher scores indicate better HRQoL. The greatest restrictions were in emotional, social, and role functioning, insomnia, fatigue, and pain. In multivariate linear regression, we found no significant differences between patients receiving tyrosine kinase inhibitor (TKI) treatment and those without TKI treatment, nor between patients treated with curative or palliative intent. Patients who received multiple lines of TKI treatment had the most restrictions, notably in physical (unstandardized regression coefficient [B] = −15.7), role (B = −25.7), social (B = −18.4), and cognitive functioning (B = −19.7); fatigue (B = 15.93); general health (B = −14.23); and the EORTC sum score (B = −13.82), compared to all other patients. Conclusion: The greatest HRQoL restrictions were found in GIST patients receiving multiple lines of TKI therapy. The underlying causes need further investigation.
Objective: Using multimodal imaging, we tested the hypothesis that patients after hemispherotomy recruit non-primary motor areas and non-pyramidal descending motor fibers to restore motor function of the impaired limb. Methods: Functional and structural MRI data were acquired in a group of 25 patients who had undergone hemispherotomy and in a matched group of healthy controls. Patients' motor impairment was measured using the Fugl-Meyer Motor Assessment. Cortical areas governing upper-extremity motor control were identified by task-based functional MRI. The resulting areas were used as nodes for functional and structural connectivity analyses. Results: In hemispherotomy patients, movement of the impaired upper extremity was associated with widespread activation of non-primary premotor areas, whereas movement of the unimpaired extremity, and movement in the control group, was related to activations located predominantly in the primary motor cortex (all p ≤ 0.05, FWE-corrected). Non-pyramidal tracts originating in premotor/supplementary motor areas and descending through the pontine tegmentum showed relatively higher structural connectivity in patients (p < 0.001, FWE-corrected). Significant correlations between structural connectivity and motor impairment were found for non-pyramidal (p = 0.023, FWE-corrected), but not for pyramidal connections. Interpretation: A premotor/supplementary motor network and non-pyramidal fibers seem to mediate motor function in patients after hemispherotomy. After a hemispheric lesion, the resulting motor deficit may be compensated not by the homologous regions of the contralesional hemisphere, but by the functionally redundant premotor network.
Ependymomas encompass a heterogeneous group of central nervous system (CNS) neoplasms that occur along the entire neuroaxis. In recent years, extensive (epi-)genomic profiling efforts have identified several molecular groups of ependymoma that are characterized by distinct molecular alterations and/or patterns. Based on unsupervised visualization of a large cohort of genome-wide DNA methylation data, we identified a highly distinct group of pediatric-type tumors (n = 40) forming a cluster separate from all established CNS tumor types, of which a high proportion were histopathologically diagnosed as ependymoma. RNA sequencing revealed recurrent fusions involving the pleomorphic adenoma gene-like 1 (PLAGL1) gene in 19 of the 20 samples analyzed, with the most common fusion being EWSR1:PLAGL1 (n = 13). Five tumors showed a PLAGL1:FOXO1 fusion and one a PLAGL1:EP300 fusion. High transcript levels of PLAGL1 were noted in these tumors, with concurrent overexpression of the imprinted genes H19 and IGF2, which are regulated by PLAGL1. Histopathological review of cases with sufficient material (n = 16) demonstrated a broad morphological spectrum of tumors with predominant ependymoma-like features. Immunohistochemically, tumors were GFAP positive and OLIG2 and SOX10 negative. In 3/16 of the cases, a dot-like positivity for EMA was detected. All tumors in our series were located in the supratentorial compartment. Median age of the patients at the time of diagnosis was 6.2 years. Median progression-free survival was 35 months (for the 11 patients with data available). In summary, our findings suggest the existence of a novel group of supratentorial neuroepithelial tumors that are characterized by recurrent PLAGL1 fusions and enriched for pediatric patients.
Background: Compound flaps offer the advantage of one-stage defect reconstruction respecting all relevant tissues and of early functional recovery through optimal vascularity of all components. Due to their specific vascular anatomy and the three-dimensional donor site, compound flaps with bone components may result in higher complication rates compared to soft-tissue compound flaps. This meta-analysis summarizes the available evidence and evaluates whether bone components are a risk factor for periprocedural complications in multidimensional defect reconstruction of the upper extremity. Method: PubMed and Embase were searched for all publications addressing compound free flaps for upper extremity defect reconstruction with bone or soft tissue components published between January 1988 and May 2018. Methodological quality was assessed with the American Society of Plastic Surgeons Evidence Rating Scale for Therapeutic Studies. Flap loss, thrombosis rate, early infection, hematoma, seroma, and donor-site complications were extracted and analyzed. Results: Twelve out of 1157 potentially eligible studies (evidence-III) comprising 159 patients were finally included, with publication bias for all summarized complication rates. Complication rates for flaps with/without bone components were: total flap loss 5%, 95% CI = 3%–10% (6%/5%); partial flap loss 8%, 95% CI = 5%–15% (9%/8%); arterial thrombosis 7%, 95% CI = 4%–12% (8%/5%); venous thrombosis 14%, 95% CI = 9%–21% (16%/6%, P < .05), with higher risk for flaps with bone components; infection 6%, 95% CI = 3%–12% (6%/6%); hematoma 6%, 95% CI = 3%–11% (6%/5%); seroma 5%, 95% CI = 3%–10% (5%/5%); dehiscence 10%, 95% CI = 6%–17% (11%/9%). Conclusion: Compound flaps for upper extremity defect reconstruction that include bone components have a higher venous thrombosis rate compared to compound soft-tissue flaps.
Scores to identify patients at high risk of progression of coronavirus disease (COVID-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), may become instrumental for clinical decision-making and patient management. We used patient data from the multicentre Lean European Open Survey on SARS-CoV-2-Infected Patients (LEOSS) and applied variable selection to develop a simplified scoring system to identify patients at increased risk of critical illness or death. A total of 1946 patients who tested positive for SARS-CoV-2 were included in the initial analysis and assigned to derivation and validation cohorts (n = 1297 and n = 649, respectively). Stability selection from over 100 baseline predictors for the combined endpoint of progression to the critical phase or COVID-19-related death enabled the development of a simplified score consisting of five predictors: C-reactive protein (CRP), age, clinical disease phase (uncomplicated vs. complicated), serum urea, and D-dimer (abbreviated as CAPS-D score). This score yielded an area under the curve (AUC) of 0.81 (95% confidence interval [CI]: 0.77–0.85) in the validation cohort for predicting the combined endpoint within 7 days of diagnosis and 0.81 (95% CI: 0.77–0.85) during full follow-up. We used an additional prospective cohort of 682 patients, diagnosed largely after the “first wave” of the pandemic to validate the predictive accuracy of the score and observed similar results (AUC for the event within 7 days: 0.83 [95% CI: 0.78–0.87]; for full follow-up: 0.82 [95% CI: 0.78–0.86]). An easily applicable score to calculate the risk of COVID-19 progression to critical illness or death was thus established and validated.
Background: The number of positive prostate biopsy cores represents a key determinant between high-risk and very high-risk prostate cancer (PCa). We performed a critical appraisal of the association between the number of positive prostate biopsy cores and cancer-specific mortality (CSM) in high versus very high-risk PCa. Methods: Within the Surveillance, Epidemiology, and End Results database (2010–2016), 13,836 high-risk versus 20,359 very high-risk PCa patients were identified. Discrimination according to 11 different positive prostate biopsy core cut-offs (≥2–≥12) was tested in Kaplan–Meier, cumulative incidence, and multivariable Cox and competing risks regression models. Results: Among the 11 tested positive prostate biopsy core cut-offs, more than or equal to 8 (high-risk vs. very high-risk: n = 18,986 vs. n = 15,209, median prostate-specific antigen [PSA]: 10.6 vs. 16.8 ng/ml, p < .001) yielded optimal discrimination and was closely followed by the established more than or equal to 5 cut-off (high-risk vs. very high-risk: n = 13,836 vs. n = 20,359, median PSA: 16.5 vs. 11.1 ng/ml, p < .001). Stratification according to more than or equal to 8 positive prostate biopsy cores resulted in CSM rates of 4.1 versus 14.2% (delta: 10.1%, multivariable hazard ratio: 2.2, p < .001), and stratification according to more than or equal to 5 positive prostate biopsy cores in CSM rates of 3.7 versus 11.9% (delta: 8.2%, multivariable hazard ratio: 2.0, p < .001), in high versus very high-risk PCa, respectively. Conclusions: The more than or equal to 8 positive prostate biopsy cores cut-off yielded optimal results. It was very closely followed by more than or equal to 5 positive prostate biopsy cores. In consequence, virtually the same endorsement may be made for either cut-off. However, the more than or equal to 5 positive prostate biopsy cores cut-off, given its existing wide implementation, might represent the optimal choice.
Ataxia-telangiectasia (A-T) is a hereditary immune system disorder with neurodegeneration. Its first neurologic symptoms include ataxic gait in early childhood, with slowly progressive cerebellar ataxia, oculomotor apraxia, oculocutaneous telangiectasia, and progressive muscle weakness. Neonatal screening for severe T-cell deficiency was recently found to identify A-T patients with a significantly reduced naïve T-cell pool. Our study includes 69 A-T patients between 8 January 2002 and 1 December 2019. Nineteen cases of cancer were diagnosed in 17 patients (25%), with a median overall survival [OS; 95% confidence interval (CI)] of 26.9 years for the entire cohort. The 15-year OS of 82.5% (72–95%) was significantly decreased among A-T patients with malignancies, who had a median OS of 2.11 years and a two-year estimated OS of 50.7% (31–82%). Haematological malignancies were the major cause of death within the initial years of life, with a 15-fold increased risk of death [HR (95% CI): 6.9 (3.1–15.2), P < 0.001] upon malignancy diagnosis. Male patients with A-T are at a higher cancer risk than their female counterparts. This manuscript highlights the need for cancer surveillance and prevention, as well as optimal treatment, in this cohort.
Background: Fabry disease (FD), the second most prevalent lysosomal storage disorder, is classified as a rare disease. It often leads to significant quality-of-life impairments and premature death. Many cases remain undiagnosed due to its rarity and heterogeneity. Furthermore, treatment-related costs often constitute a substantial financial burden for patients and health systems. While its epidemiology is still unclear, newborn screenings suggest that its actual prevalence is significantly higher than previously suspected. Methods: Based on well-established methodologies, this study gives an overview of the development of FD-related research and provides a critical view of future needs. Results: The benchmarking findings show increasing research activity on FD. The most active publishing countries are the USA, several European countries, Japan, Taiwan, and South Korea. In general, high-income countries publish comparatively more on FD than low- or middle-income economies. The countries' financial and infrastructural backgrounds emerge as crucial factors for FD research activity. Conclusions: Overall, there is a need to foster FD research infrastructure in developing and emerging countries, with a focus on cost-intensive genetic research that is independent of the economic interests of big pharmaceutical companies.
Objective: Deep brain stimulation (DBS) of the ventral intermediate nucleus (VIM) is a mainstay treatment for severe and drug-refractory essential tremor (ET). Although stimulation-induced dysarthria has been extensively described, possible impairment of swallowing has not yet been systematically investigated. Methods: Twelve patients with ET and bilateral VIM-DBS who self-reported dysphagia after VIM-DBS were included. Swallowing function was assessed clinically and by flexible endoscopic evaluation of swallowing in the stim-ON and stim-OFF conditions. Presence, severity, and improvement of dysphagia were recorded. Results: During stim-ON, dysphagia could be objectified in all patients, with 42% showing mild, 42% moderate, and 16% severe dysphagia. During stim-OFF, all patients experienced a statistically significant improvement of swallowing function. Interpretation: VIM-DBS may affect swallowing physiology in ET patients. Further studies to elucidate the prevalence and underlying pathophysiological mechanisms are warranted.
Background: Breast cancer is the leading cause of cancer-related deaths in women, demanding new treatment options. With the advent of immune checkpoint blockade, immunotherapy emerged as a treatment option. In addition to lymphocytes, tumor-associated macrophages exert a significant, albeit controversial, impact on tumor development. Pro-inflammatory macrophages are thought to hinder, whereas anti-inflammatory macrophages promote tumor growth. However, molecular markers to identify prognostic macrophage populations remain elusive. Methods: We isolated two macrophage subsets from 48 primary human breast tumors, distinguished by the expression of CD206. Their transcriptomes were analyzed via RNA-Seq, and potential prognostic macrophage markers were validated by PhenOptics in tissue microarrays of patients with invasive breast cancer. Results: Normal human breast tissue contained mainly CD206+ macrophages, while increased relative amounts of CD206− macrophages were observed in tumors. The presence of CD206+ macrophages correlated with a pronounced lymphocyte infiltrate, and subsets of CD206+ macrophages, expressing SERPINH1 and collagen 1, or MORC4, were unexpectedly associated with improved survival of breast cancer patients. In contrast, MHCIIhi CD206− macrophages were linked with a poor survival prognosis. Conclusion: Our data highlight the heterogeneity of tumor-infiltrating macrophages and suggest the use of multiple phenotypic markers to predict the impact of macrophage subpopulations on cancer prognosis. We identified novel macrophage markers that correlate with the survival of patients with invasive mammary carcinoma.
Aims: Systemic inflammatory response, identified by increased total leucocyte counts, was shown to be a strong predictor of mortality after transcatheter aortic valve implantation (TAVI). Yet the mechanisms of inflammation-associated poor outcome after TAVI are unclear. Therefore, the present study aimed at investigating individual inflammatory signatures and functional heterogeneity of circulating myeloid and T-lymphocyte subsets and their impact on 1 year survival in a single-centre cohort of patients with severe aortic stenosis undergoing TAVI. Methods and results: One hundred twenty-nine consecutive patients with severe symptomatic aortic stenosis admitted for transfemoral TAVI were included. Blood samples were obtained at baseline, immediately after, and 24 h and 3 days after TAVI, and these were analysed for inflammatory and cardiac biomarkers. Myeloid and T-lymphocyte subsets were measured using flow cytometry. The inflammatory parameters were first analysed as continuous variables; and in case of association with outcome and area under receiver operating characteristic (ROC) curve (AUC) ≥ 0.6, the values were dichotomized using optimal cut-off points. Several baseline inflammatory parameters, including high-sensitivity C-reactive protein (hsCRP; HR = 1.37, 95% CI: 1.15–1.63; P < 0.0001) and IL-6 (HR = 1.02, 95% CI: 1.01–1.03; P = 0.003), lower counts of Th2 (HR = 0.95, 95% CI: 0.91–0.99; P = 0.009), and increased percentages of Th17 cells (HR = 1.19, 95% CI: 1.02–1.38; P = 0.024) were associated with 12 month all-cause mortality. Among postprocedural parameters, only increased post-TAVI counts of non-classical monocytes immediately after TAVI were predictive of outcome (HR = 1.03, 95% CI: 1.01–1.05; P = 0.003). The occurrence of SIRS criteria within 48 h post-TAVI showed no significant association with 12 month mortality (HR = 0.57, 95% CI: 0.13–2.43, P = 0.45). 
In multivariate analysis of discrete or dichotomized clinical and inflammatory variables, the presence of diabetes mellitus (HR = 3.50; 95% CI: 1.42–8.62; P = 0.006), low left ventricular (LV) ejection fraction (HR = 3.16; 95% CI: 1.35–7.39; P = 0.008), increased baseline hsCRP (HR = 5.22; 95% CI: 2.09–13.01; P < 0.0001), and low baseline Th2 cell counts (HR = 8.83; 95% CI: 3.02–25.80) were significant predictors of death. The prognostic value of the linear prediction score calculated of these parameters was superior to the Society of Thoracic Surgeons score (AUC: 0.88; 95% CI: 0.78–0.99 vs. 0.75; 95% CI: 0.64–0.86, respectively; P = 0.036). Finally, when analysing LV remodelling outcomes, ROC curve analysis revealed that low numbers of Tregs (P = 0.017; AUC: 0.69) and increased Th17/Treg ratio (P = 0.012; AUC: 0.70) were predictive of adverse remodelling after TAVI. Conclusions: Our findings demonstrate an association of specific pre-existing inflammatory phenotypes with increased mortality and adverse LV remodelling after TAVI. Distinct monocyte and T-cell signatures might provide additive biomarkers to improve pre-procedural risk stratification in patients referred to TAVI for severe aortic stenosis.
Aims: Inadequate treatment is one of the factors that interfere with a successful social and working life. Among students, it can impair health and learning progress. In the field of medicine, the problem of inadequate treatment appears to be widespread. This study examines whether inadequate treatment during internships differs between medicine and other academic disciplines.
Method: Using a questionnaire, the frequency, forms and severity of inadequate treatment among students were compared between the disciplines of medicine, civil engineering and teaching.
Results: 69.3% of medical students reported inadequate treatment during their internships, about twice as many as students of the other disciplines. The proportions of verbal, non-verbal, and organisational inadequate treatment were similar across the academic disciplines. However, medical students performed tasks without sufficient safety precautions or training significantly more often (seven times as often) than students of the other disciplines. Overall, however, the experienced incidents of inadequate treatment were rated as similarly severe across the academic fields.
Conclusion: Inadequate treatment of students during internships is a larger problem in medicine than in civil engineering or teaching, particularly concerning the performance of unsafe tasks. With regard to the health of students and patients, inadequate treatment in medical education should be tackled. Previous studies suggest that this goal can be achieved only through long-term, extensive measures at the level of students, lecturers, faculties and teaching hospitals.
Allergic rhinitis (AR) is one of the most common chronic airway diseases, affecting approximately 500 million people worldwide. In a subset of patients with rhinitis symptoms, however, conventional tests show no evidence of allergen sensitisation. In the past, these patients were frequently assigned to the group of non-allergic rhinitis (NAR), which affects more than 200 million people worldwide. Over the last two decades, local allergic rhinitis (LAR) has emerged as an important differential diagnosis to NAR or idiopathic rhinitis (IR). Some authors postulate that up to a quarter of chronic rhinitis patients may be affected by LAR, and that up to 62.5% of patients previously classified as NAR or IR may in fact have LAR. LAR is defined by rhinitis symptoms suggestive of allergy, a positive reaction in the nasal provocation test (NPT) with inhalant allergens, and the occasional presence of specific antibodies in the nasal mucosa, without evidence of systemic sensitisation.
Since reported LAR prevalence rates vary widely, the aim of this study was to determine the prevalence in individuals with perennial rhinitis and to examine the nasal mucosa for local specific IgE (sIgE).
For this purpose, 63 of a total of 156 screened subjects were investigated further. Twenty-one patients with perennial NAR were identified and examined, and their results were compared with those of 24 AR patients with house dust mite (HDM) allergy and 18 controls. We assessed the severity of clinical symptoms as well as the skin prick test reaction, total IgE, and sIgE against the mite species Dermatophagoides pteronyssinus (D1) and Dermatophagoides farinae (D2) in serum and nasal secretions (NS), and performed an NPT with D2 in all participants. The NPT was evaluated by measuring the peak nasal inspiratory flow (PNIF) and the Lebel score.
While the clinical symptoms of NAR and AR patients were very similar, none of the NAR patients showed nasal sIgE against HDM or a positive NPT reaction to D2. The median nasal sum score was 11 of 24 points in both AR and NAR patients (range: 6–21 and 6–20 points, respectively) and differed significantly from that of the controls, who had a score of 0 points (range: 0–5 points). The median sIgE-D1 and sIgE-D2 in NS was 0.1 kU/L (range: 0.1–0.1 kU/L) in both NAR patients and controls, with no significant difference between the two. In contrast, 94.12% of the AR samples examined showed elevated sIgE-D1 or sIgE-D2 in NS. In AR patients, the median NS concentration was 1.19 kU/L (range: 0.1–14.93 kU/L) for sIgE-D1 and 2.34 kU/L (range: 0.1–22.14 kU/L) for sIgE-D2. The NPT with D2 was positive in 13/14 AR patients (92.86%) and in none of the NAR patients or controls. Both the absolute and the percentage PNIF decrease after HDM provocation differed significantly between AR patients and controls as well as between AR and NAR patients. The percentage PNIF reduction after HDM provocation was 55.85% in the AR group, 7.14% in the NAR group and 0% in the controls; there was, however, no significant difference between controls and NAR patients.
Based on these results, positive NPTs and nasal sIgE against HDM species were detected only in the AR group, so that for this study we find a prevalence of LAR among NAR patients of 0%. Taking our findings together, we therefore assume that the prevalence of LAR within NAR or IR in the examined population living in Germany must be considerably lower than previously reported in other populations.
Colorectal carcinoma (CRC) is the second most common cause of cancer death among men and women in Germany.
CRC is therefore of great importance in the surgical and radiological disciplines. Numerous procedures and treatment methods play a central role in treating CRC and the colorectal liver metastases arising from it, and in identifying the best possible therapy. Over the last decades, many different methods for the treatment of CRLMs have been developed, such as microwave ablation (MWA), laser-induced interstitial thermotherapy (LITT), radiofrequency ablation (RFA) and surgery. The most promising of these techniques is surgical resection. The problem, however, is that many affected patients are no longer in sufficiently good physical condition to undergo resection without major risks.
The main aim of this study was to carry out as precise and meaningful an analysis as possible of patient groups diagnosed with colorectal liver metastases. In the present study, 132 patients with colorectal liver metastases (CRLM) were examined who were treated between 2010 and 2018 with CT-guided MWA at the Institute for Diagnostic and Interventional Radiology of the University Hospital Frankfurt am Main. Of particular interest was which prognostic parameters influence survival times and survival rates. The data were collected from extensive patient records and the corresponding treatment courses. In addition, CT images acquired during treatment were used to derive further parameters. The data and measurements were collected retrospectively and covered a large patient group, which substantially increases the validity of the results. Particular attention was paid to dividing the patients into two groups according to their treatment indication.
The prognostic factors included the ablation system, the location of the metastases, the number of metastases, technical success, energy and power, the diameter and volume of the metastases, pre- and post-treatment, and local recurrences.
The patient group with a palliative treatment indication (1.08 years) showed a significantly shorter median survival time than the curative patient group (3.48 years). The median survival time of all patients was 2.68 years. In addition, survival rates were determined. The 1- and 3-year survival rates of all treated patients during the study period were 82.7% and 41.6%. The 1- and 3-year survival rates of the 57 patients with a palliative treatment indication were 54.4% and 14.9%. In comparison, the 1- and 3-year survival rates of the curatively treated group were 96.9% and 55.1%. The median follow-up time after treatment was 2.39 years, during which 96.2% of all patients achieved local tumour control (127/132). The survival times of patients with one, two or three, four or five, and multiple liver metastases were 3.79, 2.13, 1.09 and 0.93 years, respectively (all p < 0.017). There was a single relevant complication (abscess) across all treatments (1/257; 0.4%). All differences in survival times by primary tumour origin (p < 0.038) and by number of metastases were significant. The other prognostic factors showed no statistical significance. Prognostic factors such as the number of liver metastases, the location of the primary tumour and the ablation system used had a considerable influence on the survival times of the CRLM patients in this study. The results of this study can be regarded as novel, because such a strict allocation of patients to curative and palliative treatment indications had not previously been carried out for the analysis of survival data in this form.
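The median survival times reported above come from right-censored follow-up data; a minimal Kaplan-Meier sketch (with invented follow-up times, not the study's patient records) illustrates how such a median is estimated:

```python
# Sketch: product-limit (Kaplan-Meier) estimate of median survival from
# right-censored follow-up times. Data below are invented for illustration.

def km_median(times, events):
    """Return the smallest time at which S(t) drops to <= 0.5, else None.

    times:  follow-up in years; events: 1 = death observed, 0 = censored.
    Ties are handled naively (one subject per time point) in this sketch.
    """
    at_risk = len(times)
    surv = 1.0
    for t, d in sorted(zip(times, events)):
        if d:  # death observed at time t
            surv *= (at_risk - 1) / at_risk
            if surv <= 0.5:
                return t
        at_risk -= 1  # deaths and censored cases both leave the risk set
    return None  # median not reached during follow-up

times  = [0.5, 1.0, 1.5, 2.0, 2.7, 3.0, 3.5, 4.0]   # hypothetical
events = [1,   1,   0,   1,   1,   0,   1,   0]
median = km_median(times, events)   # -> 2.7
```

Group comparisons such as palliative versus curative indication would then use a log-rank test on the two estimated curves, which this sketch omits.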
The prognostic factors and their influence on survival times provide useful reference values for future radiological prognoses and therapeutic measures for CRLM patients, and offer forward-looking guidance for radiologists and physicians as well as for patients and their relatives.
This dissertation investigates to what extent inadequate treatment during practical training differs between medical students and students of other disciplines. It also examines what influence the participants attribute to hierarchy in their intended profession in this regard, and how personality traits affect the likelihood of experiencing inadequate treatment.
The present work compares the image quality of supine chest radiographs of patients in the intensive care unit of the University Hospital Frankfurt acquired with either a parallel anti-scatter grid or a virtual grid (image-processing software). It was investigated whether the virtual grid achieves an image quality at least equivalent to that of the parallel grid while simultaneously saving radiation dose.
A total of 378 chest radiographs of 126 patients were included in the study, each patient imaged once with the parallel grid, once with the virtual grid, and once with the same virtual grid at reduced dose. The virtual grid mimics the scatter reduction of the parallel grid. The patients' overweight, an inclusion criterion of the study, justified the use of the parallel grid. Each patient was radiographed only on clinical indication, so the time interval between two chest radiographs of different acquisition techniques of the same patient varied. The same indirect flat-panel detector was used for all chest radiographs. The tube voltage was constant at 125 kV; the tube current-time product was 1.4 mAs (for the parallel and virtual grid) and 1.0 mAs (for the virtual grid with dose reduction). The dose-area product was determined for each chest radiograph. Four radiologists evaluated the image quality with respect to six criteria (lung parenchyma, soft tissue, thoracic spine, foreign bodies, pathologies and overall quality) on a 9-point scale. The Friedman test (p < 0.05: significant) was applied, and inter-reader agreement was calculated using intraclass correlation coefficients.
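The Friedman test used here compares paired reader ratings of the three acquisition techniques across images. A library-free sketch of the standard rank-based statistic follows; the 9-point score matrix is invented, not the study's ratings:

```python
# Sketch: Friedman test statistic chi2 = 12/(n*k*(k+1)) * sum(Rj^2) - 3n(k+1)
# for n images (blocks) rated under k paired conditions. No tie correction
# is applied in this sketch, so the toy data below avoids tied scores.

def friedman_statistic(blocks):
    """blocks: one list of k condition ratings per image."""
    n, k = len(blocks), len(blocks[0])
    rank_sums = [0.0] * k
    for ratings in blocks:
        order = sorted(range(k), key=lambda j: ratings[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) \
        - 3.0 * n * (k + 1)

# Hypothetical 9-point scores: [parallel, virtual, virtual + dose reduction]
scores = [[5, 7, 8], [4, 6, 7], [5, 6, 8], [6, 7, 9], [4, 5, 6], [5, 7, 8]]
stat = friedman_statistic(scores)   # -> 12.0; compare against chi2 with k-1 df
```

In practice one would use a library routine that handles ties and returns the p-value directly (e.g. `scipy.stats.friedmanchisquare`).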
The virtual grid with and without dose reduction was rated significantly better overall by all four radiologists than the parallel anti-scatter grid for soft tissue, the thoracic spine, foreign bodies and overall image quality (p ≤ 0.018).
For lung parenchyma and pathologies, both significant and non-significant results were obtained; where results were significant, the virtual grid with and without dose reduction was again rated better than the parallel grid (p ≤ 0.002). The only exception was the lung parenchyma rating of one radiologist, who rated the virtual grid with and without dose reduction significantly worse than the parallel grid (p < 0.0001). Overall, the virtual grid with dose reduction was rated best compared with the parallel grid for the following criteria, in descending order: foreign bodies, thoracic spine, soft tissue, overall image quality, pathologies and lung parenchyma. Agreement among the four radiologists in their image-quality ratings was at best poor. With the virtual grid, an average of about 28.7% of the dose-area product was saved compared with the parallel anti-scatter grid (p < 0.0001).
So far, only four studies have investigated scatter-correction software for supine chest radiographs, two of them in living humans. Limitations of the present study are the subjectivity of the radiologists' ratings; the possible recognisability of the radiographs acquired with the parallel anti-scatter grid as the standard technique at the radiology department of the University Hospital Frankfurt; the constancy of the exposure parameters irrespective of patient BMI; and the limited comparability of chest radiographs of the same patient owing to changes in pathologies, foreign bodies, etc. over (long) intervals between examinations.
The virtual grid achieved image quality partly equivalent and partly superior to that of the parallel grid at a simultaneous dose reduction of 28.7% and can therefore replace it for supine chest radiographs. Further studies should investigate the use of the virtual grid for chest radiographs (upright and supine) and other body regions with regard to image quality, (greater) dose savings and workflow.
Haemophilia A (HA) is an X-linked recessive bleeding disorder characterised by a complete absence or functional deficiency of coagulation factor VIII (FVIII). Despite therapeutic advances in recent years, HA patients continue to experience multiple complications even under regular FVIII replacement therapy, including joint damage, development of an immune response (inhibitors) and reduced quality of life. In contrast to current treatment options, gene therapy (GT) holds out the promise of a permanent increase in FVIII levels and potentially a cure for HA.
In the present work, a suitable HA cell model based on primary human hepatic sinusoidal endothelial cells (HHSEC) was established to enable the in vitro evaluation of a future SaCas-CRISPR-based HA gene therapy, and important insights for subsequent work were gained.
Through stable integration of the doxycycline-inducible large T oncogene, a well-characterised, immortal HHSEC_LT cell line was generated that expresses functional FVIII. It was further shown that doxycycline-dependent immortalisation is essential for subsequent cell-culture experiments in order to circumvent stress responses of the HHSEC caused by rapid senescence and apoptosis.
In the further course of the GT project, various HHSEC F8 mutation cell lines were to be generated. In addition to gene sequencing, several candidate FVIII detection methods were tested in the present work in order to demonstrate, also at the protein level, the success of an introduced F8 gene mutation in HHSEC and of its subsequent repair later in the GT project. It was shown that immunofluorescence (IF) microscopy and quantification of FVIII activity (FVIII:C) by aPTT-based measurement are particularly well suited to the specific detection of FVIII in HHSEC for this purpose.
Modelled on patient-specific F8 gene mutations with a frameshift effect, five different sgRNA/SaCas9-CRISPR expression vectors were constructed and stably transduced into the immortalised HHSEC by lentiviral gene transfer. After PCR amplification of the relevant genomic loci of these five stably transduced HHSEC F8 mutation cell lines, sequencing showed that four of the five constructs generated genetic alterations with potential frameshift effects in HHSEC, two of which achieved very good results. In line with the sequencing results, reductions in FVIII fluorescence intensity in IF microscopy and in FVIII:C in aPTT-based measurements were also demonstrated.
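Whether an introduced indel has a frameshift effect depends only on its net length modulo 3; a toy sketch with invented sequences (not the F8 loci used in the study) makes this concrete:

```python
# Sketch: classifying a CRISPR-induced indel by whether it shifts the
# reading frame, i.e. whether the net length change is divisible by 3.
# Sequences below are invented placeholders, not the study's F8 loci.

def is_frameshift(ref, edited):
    """True if the net insertion/deletion length breaks the codon frame."""
    return (len(edited) - len(ref)) % 3 != 0

ref   = "ATGGCTGTTCCTGAA"   # 15 bp reference
del_2 = "ATGGCTTCCTGAA"     # 2-bp deletion  -> frameshift
del_3 = "ATGGCTCCTGAA"      # 3-bp deletion  -> in-frame
```

Real indel calling from Sanger or NGS traces is of course more involved (alignment, mixed populations, tools such as TIDE or ICE); this only captures the frame rule itself.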
Furthermore, when assessing the morphological appearance of the stably transduced HHSECs, visibly altered cell morphology and a growth disadvantage were observed in the two cell pools with the highest indel rates and the lowest FVIII:C. These observations allowed the formulation of novel, promising hypotheses regarding the basic understanding of HA.
In the present study, patients with structural epilepsy caused by focal cortical dysplasia (FCD) were examined using advanced magnetic resonance imaging (MRI) techniques.
FCDs are malformations of the cerebral cortex associated with high epileptogenic activity. Some of these patients undergo epilepsy surgery but still do not achieve adequate seizure control afterwards, which gives reason to assume that factors other than the focal cortical dysplasia itself may cause epileptic seizures.
Based on this consideration, T2 relaxometry was used to investigate whether patients with FCDs show microstructural changes in parts of the cortex that appear normal and healthy on conventional MRI. It is assumed that microstructural changes are also present outside the FCD in these patients, for example due to damage from seizures or to treatment effects.
For the study, 16 patients with a neuroradiologically confirmed FCD and 16 healthy controls matched for age and sex were recruited.
The data were acquired on a 3 Tesla (T) MRI scanner. To measure the T2 relaxation time, spin-echo datasets with different echo times (TE) were recorded. Conventional fluid-attenuated inversion recovery (FLAIR) datasets were acquired to delineate the extent of the FCD. For tissue segmentation, synthetic T1-weighted magnetization-prepared rapid acquisition of gradient echoes (MP-RAGE) datasets were computed from quantitative T1 maps. The cortex and its boundary surfaces were identified with FreeSurfer from the MP-RAGE datasets, and cortical thickness was measured. The FCD areas were marked manually in the FLAIR datasets and excluded from the T2 maps so that FCD-associated changes would not enter the analysis.
Cortical T2 values were then extracted and stored in surface datasets in order to determine mean cortical T2 values for each participant and to compare them between groups using an unpaired t-test. In addition, the Pearson correlation coefficient between cortical T2 values and clinical parameters was calculated. A surface-based group analysis of cortical T2 values and cortical thickness was also performed, using permutation simulations to detect cortical clusters indicating focal group differences and to correct for multiple comparisons.
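The correlation analysis and the permutation idea described above can be sketched without libraries; `t2` and `seizures` below are toy vectors, not the study's measurements:

```python
# Sketch: Pearson correlation between mean cortical T2 values and a clinical
# parameter, with a two-sided permutation p-value. Toy data only.
import math
import random

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def permutation_p(x, y, n_perm=2000, seed=0):
    """How often a shuffled pairing is at least as extreme as observed."""
    rng = random.Random(seed)
    observed = abs(pearson(x, y))
    ys, hits = list(y), 0
    for _ in range(n_perm):
        rng.shuffle(ys)
        if abs(pearson(x, ys)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one to avoid a p of exactly 0

t2 = [88.1, 90.3, 89.5, 91.0, 87.9, 92.2]   # ms, hypothetical per-patient means
seizures = [2, 5, 3, 6, 1, 8]               # per 3 months, hypothetical
r = pearson(t2, seizures)
p = permutation_p(t2, seizures)
```

The study's surface-based cluster analysis applies the same permutation principle per-vertex with cluster-level correction, which goes well beyond this scalar sketch.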
The analysis showed that mean cortical T2 values outside the FCD were significantly increased in the patient cohort compared with the healthy controls. These T2 changes correlated significantly neither with the number of seizures in the preceding three months nor with the number of antiepileptic drugs ever taken. T2 increases were found in particular in frontal, parietal and some temporal regions. The surface-based analysis of cortical thickness showed no significant group differences.
Using T2 relaxometry and surface-based analysis techniques, T2 changes were thus detected in cerebral cortex that appears normal on conventional MRI in patients with FCD and epilepsy.
The results indicate the presence of microstructural changes that cannot be captured with conventional MRI. Potential causes of these changes include effects of anticonvulsant medication as well as gliotic tissue remodelling due to previous epileptic seizures. The study suggests that structural epilepsies are more than a symptom of a focal lesion and instead affect the brain as a whole.
Active hearing implants have been in clinical use since the mid-1980s. Given the now very large number of users and the very long average duration of use, they are considered very safe. Nevertheless, complications can occur. In the present work, a complication was defined as the occurrence of a negative event outside the intended course of treatment.
The aim of this work was to categorise and quantify the complications that occurred. In addition, it was investigated whether certain factors influence the frequency of complications, in particular with respect to the different implant and electrode-array models. Besides recording and quantifying adverse events, four hypotheses derived from clinical experience with the systems were formulated: (H1) children develop inflammations more frequently after cochlear implantation; (H2) implant models with a magnet pocket lead to infections more frequently; (H3) perimodiolar electrode arrays lead to tip fold-over (folding over of the electrode array tip) more frequently; (H4) straight electrode arrays lead to electrode-array dislocation more frequently.
This work included all patients fitted with active hearing implants at the University Hospital Frankfurt from January 2006 to December 2016. Among the 1274 patients, 583 experienced at least one complication, the majority being pain (16.9%), rotational vertigo (15.6%) and infections during follow-up (8.3%).
From the data, a group of 503 affected patients was formed who reported a complication for the first time after surgery. In this 'first event' cohort, complications occurred mainly in the areas of inflammation (281 patients), hearing (183 patients) and balance (158 patients). Among unilaterally implanted patients in this cohort, the first event occurred on average after 5.64 years; among bilaterally operated patients, after 7.35 years.
The implant models differed in the occurrence of complications: the highest complication rates occurred with the models HiRes90K with 37 of 81 (45.7%), Synchrony with 62 of 140 (44.3%), and Nucleus 5 with 115 of 274 (42.0%). The electrode-array designs differed significantly (p < 0.001) from one another: most complications occurred with the designs Medium (75%), Midscala (58.8%), Slim Modiolar (54.3%), and Straight (52.1%). Infections occurred particularly early with the implants Synchrony (1.34 years) and Clarion (1.57 years), while the models Pulsar (7.51 years) and CI24RE (6.13 years) showed rather late onset. For the occurrence of infection across implant models, p was below 0.001, indicating significant differences in the time of onset. The electrode-array design showed highly significant (p < 0.001) differences with respect to hearing impairment and to electrode-array-related complications such as tip fold-over, migration or incomplete insertion. Designs such as Midscala, Straight and Slim Modiolar led to a first occurrence of an altered hearing impression after CI implantation early, after an average of one (Slim Modiolar) to 2.5 (Straight) years. Problems with the electrode array such as tip fold-over, migration or incomplete insertion occurred somewhat more frequently with the models Flex Soft and Helix, and most frequently with the model Flex 24.
(H1) Implant-related inflammations occurred significantly (p < 0.001) more frequently in children than in adults: in the 'first event' group, 66.0% of children and 23.7% of adults had an inflammation. (H2) The presence of a magnet pocket on the implant receiver coil did not lead to a significantly higher occurrence of inflammation. (H3) Pre-curved electrode arrays showed a higher incidence of tip fold-over than straight electrode arrays. (H4) Straight electrode arrays showed a higher incidence of electrode-array migration; in total, 11 migrations occurred in the 'first event' cohort, 10 of them with straight electrode arrays (p = 0.03).
Overall, factors such as implant model, electrode-array design and patient age lead to an earlier occurrence of complications. For future studies, a separate analysis of the still relatively new (2012) middle-ear implants would be of interest.
1.1. Introduction
Periprosthetic joint infection (PJI) is considered one of the most serious complications after endoprosthetic joint replacement, and its treatment requires considerable financial, personnel and time resources. The condition has been known since the beginnings of endoprosthetics, and knowledge of its pathophysiology has deepened since then. Therapy has been considerably extended by stage-adapted concepts, effective antibiotics and improved implants. Nevertheless, the incidence of PJI of the knee remains unchanged at between one and two percent, and considerably higher in high-risk patients (1). In the event of a PJI, the primary goals from the patient's perspective are restoration of walking ability and freedom from pain through implantation of a mobile revision prosthesis. However, after repeated prosthesis exchanges, joint function, patient satisfaction and outcome decline. After multiple operations on the affected joint, the extensor mechanism suffers and bone loss is unavoidable. In these extreme situations, salvage procedures such as creating a stable arthrodesis must be considered alongside the ultima ratio of amputation. The clinical outcome and quality of life after these procedures have so far been largely unknown. This work therefore compares the clinical and functional treatment outcome, together with the resulting quality of life, of patients in whom a stable arthrodesis was clinically required with that of patients after implantation of a revision prosthesis as the result of a multi-stage septic prosthesis exchange.
1.2. Materials and methods
The study comprised 104 patients (2010-2017), all of whom had a periprosthetic infection of a total knee arthroplasty (TKA). The implant was exchanged in a multi-stage procedure. After resolution of the infection, a revision implant was placed. In cases of extensive bone defects or loss of the extensor mechanism, a modular intramedullary arthrodesis nail was used (knee arthrodesis module, KAM group; n=52). In the control group, a coupled revision prosthesis was reimplanted (rotating hinge knee, RHK group; n=52). Infection remission rates and the clinical outcome (using the Knee Society Score (KSS) and the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC)) as well as quality of life (using the Short Form Health Survey 12 (SF-12)) were measured. In addition, patient-related data such as comorbidities (Charlson Comorbidity Index (CCI)) and pain level (visual analogue scale) were examined.
1.3. Results
The mean age of the study participants was 72.5 years. The Charlson Comorbidity Index was slightly higher in the KAM group (3.3 KAM versus 2.8 RHK). The infection remission rate was 89.4% (88.5% KAM versus 90.4% RHK). In cases of reinfection, prosthesis retention was possible mainly in the RHK group (7.7%), while amputations had to be performed mainly in the KAM group (9.6%). Significant pain reduction was achieved in both groups (visual analogue scale pre-operative: 7.9, post-operative: 2.8). Walking distance was significantly reduced in the KAM group compared with the RHK group (504 metres KAM versus 1064 metres RHK). The KSS function score and the WOMAC (25 KAM versus 40 RHK and 35 KAM versus 64 RHK, respectively) were also significantly lower in the KAM group. A somewhat lower quality of life was observed in the KAM group (SF-12 physical subscale 34 KAM versus 40 RHK; SF-12 mental subscale 51 KAM versus 56 RHK). Overall satisfaction with treatment was 88% in the KAM group and 81% in the RHK group.
1.4. Conclusions
High infection remission rates were achieved both with revision prosthesis therapy and with arthrodesis. Walking distance and joint function were reduced after arthrodesis implantation, but rehabilitation time was considerably shorter. Arthrodesis with an intramedullary nail offers a good treatment option for limb preservation, pain reduction and preservation of quality of life and everyday mobility when, owing to loss of bone substance and extensor mechanism insufficiency, implantation of a revision prosthesis is no longer possible.
Autophagy is a highly conserved catabolic process cells use to maintain their homeostasis by degrading misfolded, damaged and excessive proteins, nonfunctional organelles, foreign pathogens and other cellular components. Hence, autophagy can be nonselective, where bulky portions of the cytoplasm are degraded upon stress, or a highly selective process, where preselected cellular components are degraded. To distinguish between different cellular components, autophagy employs selective autophagy receptors, which will link the cargo to the autophagy machinery, thereby sequestering it in the autophagosome for its subsequent degradation in the lysosome. Autophagy receptors undergo post-translational and structural modifications to fulfil their role in autophagy, or upon executing their role, for their own degradation. We highlight the four most prominent protein modifications – phosphorylation, ubiquitination, acetylation and oligomerisation – that are essential for autophagy receptor recruitment, function and turnover. Understanding the regulation of selective autophagy receptors will provide deeper insights into the pathway and open up potential therapeutic avenues.
Objectives: To compare radiation dose and image quality of single-energy (SECT) and dual-energy (DECT) head and neck CT examinations performed with second- and third-generation dual-source CT (DSCT) in matched patient cohorts. Methods: 200 patients (mean age 55.1 ± 16.9 years) who underwent venous phase head and neck CT with a vendor-preset protocol were retrospectively divided into four equal groups (n = 50) matched by gender and BMI: second (Group A, SECT, 100-kV; Group B, DECT, 80/Sn140-kV), and third-generation DSCT (Group C, SECT, 100-kV; Group D, DECT, 90/Sn150-kV). Assessment of radiation dose was performed for an average scan length of 27 cm. Contrast-to-noise ratio measurements and dose-independent figure-of-merit calculations of the submandibular gland, thyroid, internal jugular vein, and common carotid artery were analyzed quantitatively. Qualitative image parameters were evaluated regarding overall image quality, artifacts and reader confidence using 5-point Likert scales. Results: Effective radiation dose (ED) was not significantly different between SECT and DECT acquisition for each scanner generation (p = 0.10). Significantly lower effective radiation dose (p < 0.01) values were observed for third-generation DSCT groups C (1.1 ± 0.2 mSv) and D (1.0 ± 0.3 mSv) compared to second-generation DSCT groups A (1.8 ± 0.1 mSv) and B (1.6 ± 0.2 mSv). Figure-of-merit/contrast-to-noise ratio analysis revealed superior results for third-generation DECT Group D compared to all other groups. Qualitative image parameters showed non-significant differences between all groups (p > 0.06). Conclusion: Contrast-enhanced head and neck DECT can be performed with second- and third-generation DSCT systems without radiation penalty or impaired image quality compared with SECT, while third-generation DSCT is the most dose-efficient acquisition method.
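The contrast-to-noise ratio and the dose-independent figure of merit named above are simple ratios of ROI statistics; a sketch with illustrative attenuation values (not the study's measurements):

```python
# Sketch: contrast-to-noise ratio (CNR) from ROI measurements and the
# dose-independent figure of merit FOM = CNR^2 / effective dose, used to
# compare protocols with different dose levels. ROI values are invented.

def cnr(roi_mean, background_mean, background_sd):
    """Contrast-to-noise ratio of a tissue ROI against background noise."""
    return (roi_mean - background_mean) / background_sd

def figure_of_merit(cnr_value, effective_dose_msv):
    """Squaring CNR and dividing by dose makes protocols comparable."""
    return cnr_value ** 2 / effective_dose_msv

# Hypothetical attenuation values (HU) for a thyroid ROI:
c = cnr(roi_mean=110.0, background_mean=50.0, background_sd=10.0)  # -> 6.0
fom = figure_of_merit(c, effective_dose_msv=1.0)                   # -> 36.0
```

Because FOM scales CNR² by dose, a protocol that halves the dose at equal CNR doubles its FOM, which is why it is called dose-independent.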
Advances in knowledge: Differences in radiation dose between SECT and DECT of the dose-vulnerable head and neck region using DSCT systems have not been evaluated so far. Therefore, this study directly compares radiation dose and image quality of standard SECT and DECT protocols of second- and third-generation DSCT platforms.
Endothelial tip cells are essential for VEGF-induced angiogenesis, but underlying mechanisms are elusive. The Ena/VASP protein family, consisting of EVL, VASP, and Mena, plays a pivotal role in axon guidance. Given that axonal growth cones and endothelial tip cells share many common features, from the morphological to the molecular level, we investigated the role of Ena/VASP proteins in angiogenesis. EVL and VASP, but not Mena, are expressed in endothelial cells of the postnatal mouse retina. Global deletion of EVL (but not VASP) compromises the radial sprouting of the vascular plexus in mice. Similarly, endothelial-specific EVL deletion compromises the radial sprouting of the vascular plexus and reduces the endothelial tip cell density and filopodia formation. Gene sets involved in blood vessel development and angiogenesis are down-regulated in EVL-deficient P5-retinal endothelial cells. Consistently, EVL deletion impairs VEGF-induced endothelial cell proliferation and sprouting, and reduces the internalization and phosphorylation of VEGF receptor 2 and its downstream signaling via the MAPK/ERK pathway. Together, we show that endothelial EVL regulates sprouting angiogenesis via VEGF receptor-2 internalization and signaling.
Objectives: Four-dimensional ultrasound (4D-US) enables imaging of the aortic segment and simultaneous determination of wall expansion. The method offers high spatial and temporal resolution, but its in vivo reliability for low measurement values has so far been unknown. The present study determines the intraobserver repeatability and interobserver reproducibility of 4D-US in the atherosclerotic and non-atherosclerotic infrarenal aorta. Methods: In all, 22 patients with a non-aneurysmal aorta were examined by an experienced examiner and a medical student. After registration of 4D images, both examiners marked the aortic wall manually before the commercially implemented speckle-tracking algorithm was applied. The cyclic changes of the aortic diameter and the circumferential strain were determined with the help of custom-made software. The reliability of 4D-US was tested by the intraclass correlation coefficient (ICC). Results: The 4D-US measurements showed very good reliability for the maximum aortic diameter and the circumferential strain for all patients and for the non-atherosclerotic aortae (ICC >0.7), but low reliability for circumferential strain in calcified aortae (ICC = 0.29). The observer- and masking-related variances for both maximum diameter and circumferential strain were close to zero. Conclusions: Despite the low measured values, the high spatial and temporal resolution of 4D-US enables a reliable evaluation of cyclic diameter changes and circumferential strain in non-aneurysmal aortae independent of observer experience, but with some limitations for calcified aortae. 4D-US opens up a new perspective with regard to noninvasive, in vivo assessment of the kinematic properties of the vessel wall in the abdominal aorta.
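The abstract does not state which ICC form was applied; a common choice for absolute agreement between two observers measuring the same subjects is the two-way random-effects, single-measure ICC(2,1). A self-contained sketch under that assumption:

```python
import numpy as np

def icc_2_1(data):
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1).
    data: array of shape (n_subjects, k_raters)."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-rater means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subject mean square
    msc = ss_cols / (k - 1)                 # between-rater mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters in perfect agreement yield ICC = 1.0
print(icc_2_1([[1, 1], [2, 2], [3, 3]]))
```

With the study's threshold, ICC > 0.7 would count as very good reliability and the calcified-aorta strain value of 0.29 as low.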
Abstract: The Children's Communication Checklist-2 (CCC-2) is often applied to assess pragmatic language impairment, which is highly prevalent in autism spectrum disorder (ASD) and several mental health conditions. We replicated previous findings on the limited applicability of the CCC-2 in clinical samples and the inconsistent findings concerning the factor structure. The aim of the present study was, thus, to develop a concise, simplified, and revised version of the CCC-2 in a large German-speaking sample. Four groups of children and adolescents aged 4 to 17 years were included: ASD (n = 195), intellectual disability (ID, n = 83), diverse mental health conditions (MHC, n = 144) and a typically developing control group (TD, n = 417). We reduced the original number of items from 70 to 39, based on item analysis, exploratory factor analysis and the exclusion of communication-unrelated items. The revised version, CCC-R (α = 0.96), consists of two empirically derived factors: a pragmatic-language (α = 0.96) and a grammatical-semantic-language factor (α = 0.93). All clinical groups (ASD, ID, and MHC) had significantly increased CCC-R total scores, with the highest scores being in the neurodevelopmental disorder groups (ASD and ID). In addition, we found group-specific patterns of elevated pragmatic-language scores in the ASD group and grammatical-semantic scores in the ID group. The CCC-R was comparable to the CCC-2 in distinguishing ASD from the other groups. The CCC-R is proposed as a simplified and easily applied clinical questionnaire for caregivers, assessing pragmatic language impairments across neurodevelopmental disorders and mental health conditions. Lay Summary: The CCC-2 is a questionnaire designed to identify children who have problems in the social use of language. However, it is limited in its clinical application and exhibits an inconsistent factor structure.
We have created a shorter and simpler version of the CCC-2, called the CCC-R, which overcomes the previous limitations of the CCC-2. It consists of two subscales: pragmatic language and grammatical-semantic language. The CCC-R can be used as a short and clinically relevant caregiver questionnaire which assesses pragmatic language impairments in children and adolescents. Autism Res 2021, 14: 759–772. © 2021 The Authors. Autism Research published by International Society for Autism Research and Wiley Periodicals LLC.
Cerumen has been found to be a promising alternative specimen for the detection of drugs. In a pilot study, drugs of abuse were identified at a higher detection rate and with a longer detection window in cerumen than in urine. In this study, cerumen from subjects was analyzed after controlled ingestion of the designer stimulant 4-fluoroamphetamine (4-FA). Methods: Twelve subjects ingested placebo and 100 mg of 4-FA. Five of them were also given 150 mg of 4-FA in 150 mL of Royal Club bitter lemon drink at least 7 days later. Cerumen was sampled using cotton swabs at baseline, 1 h after ingestion of the drug and at the end of the study day (12 h). After extraction with ethyl acetate followed by solid-phase extraction, the extracts were analyzed using liquid chromatography coupled with tandem mass spectrometry (LC–MS/MS). Results and discussion: 4-FA was detected in the cerumen of all 12 subjects 12 h after ingestion; in most subjects, it was already detectable 1 h after ingestion, with amounts ranging from 0.06 to 13.90 (median 1.52) ng per swab. The detection of 4-FA in cerumen sampled 7 days or more after the first dose suggests a long detection window for cerumen. Conclusions: Cerumen can be successfully used to detect a single drug ingestion, even immediately after the ingestion, when a sufficient amount of cerumen is collected.
Background: The approval of everolimus (EVE) for the treatment of angiomyolipoma (2013), subependymal giant cell astrocytoma (2013) and drug-refractory epilepsy (2017) in patients with tuberous sclerosis complex (TSC) represents the first disease-modifying treatment option available for this rare and complex genetic disorder. Objective: The objective of this study was to analyse the use, efficacy, tolerability and treatment retention of EVE in patients with TSC in Germany from the patient’s perspective. Methods: A structured cross-age survey was conducted at 26 specialised TSC centres in Germany and by the German TSC patient advocacy group between February and July 2019, enrolling children, adolescents and adult patients with TSC. Results: Of 365 participants, 36.7% (n = 134) reported the current or past intake of EVE, including 31.5% (n = 115) who were taking EVE at study entry. The mean EVE dosage was 6.1 ± 2.9 mg/m2 (median: 5.6 mg/m2, range 2.0–15.1 mg/m2) in children and adolescents and 4 ± 2.1 mg/m2 (median: 3.7 mg/m2, range 0.8–10.1 mg/m2) in adult patients. An early diagnosis of TSC, the presence of angiomyolipoma, drug-refractory epilepsy, neuropsychiatric manifestations, subependymal giant cell astrocytoma, cardiac rhabdomyoma and overall multi-organ involvement were associated with the use of EVE as a disease-modifying treatment. The reported efficacy was 64.0% for angiomyolipoma (75% in adult patients), 66.2% for drug-refractory epilepsy, and 54.4% for subependymal giant cell astrocytoma. The overall retention rate for EVE was 85.8%. The retention rates after 12 months of EVE therapy were higher among adults (93.7%) than among children and adolescents (88.7%; 90.5% vs 77.4% after 24 months; 87.3% vs 77.4% after 36 months). 
Tolerability was acceptable, with 70.9% of patients overall reporting adverse events; stomatitis (47.0%), acne-like rash (7.7%), and increased susceptibility to common infections and lymphoedema (each 6.0%) were the most frequently reported symptoms. Patients currently being treated with EVE showed an elevated Liverpool Adverse Event Profile, with a total score of 41.7 compared with 36.8 among patients not taking EVE. Noticeable deviations in the sub-items ‘tiredness’, ‘skin problems’ and ‘mouth/gum problems’, which are likely related to EVE-typical adverse effects, were more frequently reported among patients taking EVE. Conclusions: From the patients’ perspective, EVE is an effective and relatively well-tolerated disease-modifying treatment option for children, adolescents and adults with TSC, associated with a high long-term retention rate, and can be considered individually for each patient. Everolimus therapy should ideally be supervised by a centre experienced in the use of mechanistic target of rapamycin inhibitors, and adverse effects should be monitored on a regular basis.
Our purpose was to analyze the robustness and reproducibility of magnetic resonance imaging (MRI) radiomic features. We constructed a multi-object fruit phantom to perform MRI acquisition as scan-rescan using a 3 Tesla MRI scanner. We applied T2-weighted (T2w) half-Fourier acquisition single-shot turbo spin-echo (HASTE), T2w turbo spin-echo (TSE), T2w fluid-attenuated inversion recovery (FLAIR), T2 map and T1-weighted (T1w) TSE. Images were resampled to isotropic voxels. Fruits were segmented. The workflow was repeated by a second reader and by the first reader after a pause of one month. We applied PyRadiomics to extract 107 radiomic features per fruit and sequence from seven feature classes. We calculated concordance correlation coefficients (CCC) and dynamic range (DR) to obtain measurements of feature robustness. The intraclass correlation coefficient (ICC) was calculated to assess intra- and inter-observer reproducibility. We calculated Gini scores to test the pairwise discriminative power specific to the features and MRI sequences. We show Bland–Altman plots of the features with top discriminative power (Mann–Whitney U test). Shape features were the most robust feature class. The T2 map was the most robust imaging technique (robust features (rf), n = 84). The HASTE sequence led to the smallest number of rf (n = 20). Intra-observer ICC was excellent (≥ 0.75) for nearly all features (max–min; 99.1–97.2%). Deterioration of ICC values was seen in the inter-observer analyses (max–min; 88.7–81.1%). Complete robustness across all sequences was found for 8 features. Shape features and the T2 map yielded the highest pairwise discriminative performance. Radiomics validity depends on the MRI sequence and feature class. The T2 map seems to be the most promising imaging technique, with the highest feature robustness, high intra-/inter-observer reproducibility and the most promising discriminative power.
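Scan-rescan robustness of a radiomic feature is typically quantified with Lin's concordance correlation coefficient between the two acquisitions; a minimal sketch (the function name and example values are illustrative, not taken from the study):

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between scan and rescan
    feature values across phantom objects (1.0 = perfect agreement)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

identical = lins_ccc([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])  # -> 1.0
shifted = lins_ccc([1.0, 2.0, 3.0, 4.0], [2.0, 3.0, 4.0, 5.0])    # < 1.0
```

Unlike Pearson correlation, the CCC penalizes a constant offset between scan and rescan, which is why it is preferred for agreement rather than mere association.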
Patients with ataxia-telangiectasia (A-T) suffer from progressive cerebellar ataxia, immunodeficiency, respiratory failure, and cancer susceptibility. From a clinical point of view, A-T patients with IgA deficiency show more symptoms and may have a poorer prognosis. In this study, we analyzed mortality and immunity data of 659 A-T patients with regard to IgA deficiency collected from the European Society for Immunodeficiencies (ESID) registry and from 66 patients with classical A-T who attended the Frankfurt Goethe-University between 2012 and 2018. We studied peripheral B- and T-cell subsets and the T-cell repertoire of the Frankfurt cohort and survival rates of all A-T patients in the ESID registry. Patients with A-T have significant alterations in their lymphocyte phenotypes. All subsets (CD3, CD4, CD8, CD19, CD4/CD45RA, and CD8/CD45RA) were significantly diminished compared to standard values. Patients with IgA deficiency (n = 35) had significantly lower lymphocyte counts compared to A-T patients without IgA deficiency (n = 31) due to a further decrease of naïve CD4 T-cells, central memory CD4 cells, and regulatory T-cells. Although both patient groups showed affected TCR-ß repertoires compared to controls, no differences could be detected between patients with and without IgA deficiency. Overall survival of patients with IgA deficiency was significantly diminished. For the first time, our data show that patients with IgA deficiency have significantly lower lymphocyte counts and subsets, accompanied by reduced survival, compared to A-T patients without IgA deficiency. IgA deficiency is thus a simple surrogate marker indicating the poorest prognosis for classical A-T patients. Both non-interventional clinical trials were registered at clinicaltrials.gov in 2012 (Susceptibility to infections in ataxia-telangiectasia; NCT02345135) and 2017 (Susceptibility to Infections, tumor risk and liver disease in patients with ataxia-telangiectasia; NCT03357978).
Physical inactivity is discussed as one of the most detrimental influences on lifestyle-related medical complications such as obesity, heart disease, hypertension, diabetes and premature mortality in in- and outpatients with major depressive disorder (MDD). In contrast, intervention studies indicate that moderate-to-vigorous-intensity physical activity (MVPA) might reduce complications and depression symptoms themselves. Self-reported data on depression [Beck-Depression-Inventory-II (BDI-II)], general habitual well-being (FAHW), self-esteem and physical self-perception (FAHW, MSWS) were administered in a cross-sectional study with 76 in- and outpatients with MDD. MVPA was documented using ActiGraph wGT3X+® accelerometers and fitness was measured using cardiopulmonary exercise testing (CPET). Subgroups were built according to activity level (low PA defined as MVPA < 30 min/day, moderate PA defined as MVPA 30–45 min/day, high PA defined as MVPA > 45 min/day). Statistical analysis was performed using Mann–Whitney U and Kruskal–Wallis tests, Spearman correlation and mediation analysis. BDI-II scores and MVPA values of in- and outpatients were comparable, but fitness differed between the two groups. Analysis of the outpatient group showed a negative correlation between BDI-II and MVPA. No association between inpatient MVPA and psychopathology was found. General habitual well-being and self-esteem mediated the relationship between outpatient MVPA and BDI-II. The level of depression determined by the BDI-II score was significantly higher in the outpatient low- and moderate-PA subgroups compared to outpatients with high PA. Fitness showed no association with depression symptoms or well-being. To ameliorate depressive symptoms of MDD outpatients, intervention strategies should promote habitual MVPA and exercise exceeding the duration recommended for general health (≥ 30 min/day).
Further studies should investigate MVPA strategies sufficient to impact MDD symptoms in inpatient settings. Exercise effects seem to be driven by changes in well-being rather than by increased physical fitness.
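The mediation finding (well-being and self-esteem mediating the MVPA–BDI-II link) can be illustrated with a Baron–Kenny-style decomposition into indirect (a·b) and direct (c') paths. This is a simplified sketch with hypothetical data; the study's actual analysis (e.g. with bootstrapped confidence intervals) is more elaborate:

```python
import numpy as np

def simple_mediation(x, m, y):
    """Ordinary-least-squares sketch of simple mediation:
    a: effect of X on mediator M; b: effect of M on Y controlling for X;
    returns (a, b, indirect=a*b, direct=c')."""
    x, m, y = (np.asarray(v, float) for v in (x, m, y))
    X1 = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(X1, m, rcond=None)[0][1]
    X2 = np.column_stack([np.ones_like(x), x, m])
    coef = np.linalg.lstsq(X2, y, rcond=None)[0]
    direct, b = coef[1], coef[2]
    return a, b, a * b, direct

# Hypothetical toy data: M tracks X, Y is fully determined by M,
# so the effect of X on Y should be entirely indirect (direct ~ 0).
a, b, indirect, direct = simple_mediation(
    x=[0, 1, 2, 3], m=[0, 2, 5, 6], y=[0, 6, 15, 18])
```

In this construction y = 3·m exactly, so b = 3 and the direct path c' vanishes: the whole X→Y effect flows through the mediator.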
Background: Approximately one in three patients suffers from preoperative anaemia. Even though haemoglobin is measured before surgery, anaemia management is not implemented in every hospital. Objective: Here, we demonstrate the implementation of an anaemia walk-in clinic at an orthopaedic university hospital. To improve the diagnosis of iron deficiency (ID), we examined whether reticulocyte haemoglobin (Ret-He) could be a useful additional parameter. Material and Methods: In August 2019, an anaemia walk-in clinic was established. Between September and December 2019, major orthopaedic surgical patients were screened for preoperative anaemia. The primary endpoint was the incidence of preoperative anaemia. Secondary endpoints included Ret-He level, red blood cell (RBC) transfusion rate, in-hospital length of stay and anaemia at hospital discharge. Results: A total of 104 patients were screened for anaemia. The preoperative anaemia rate was 20.6%. Intravenous iron was supplemented in 23 patients. Transfusion of RBC units per patient (1.7 ± 1.2 vs. 0.2 ± 0.9; p = 0.004) and hospital length of stay (13.1 ± 4.8 days vs. 10.6 ± 5.1 days; p = 0.068) were increased in anaemic patients compared to non-anaemic patients. Ret-He values were significantly lower in patients with ID anaemia (33.3 pg [28.6–40.2 pg]) compared to patients with ID (35.3 pg [28.9–38.6 pg]; p = 0.015) or patients without anaemia (35.4 pg [30.2–39.4 pg]; p = 0.001). Conclusion: Preoperative anaemia is common in orthopaedic patients. Our results demonstrated the feasibility of an anaemia walk-in clinic to manage preoperative anaemia. Furthermore, our analysis supports the use of Ret-He as an additional parameter for the diagnosis of ID in surgical patients.
USP22 controls necroptosis by regulating receptor-interacting protein kinase 3 ubiquitination
(2020)
Dynamic control of ubiquitination by deubiquitinating enzymes is essential for almost all biological processes. Ubiquitin-specific peptidase 22 (USP22) is part of the SAGA complex and catalyzes the removal of mono-ubiquitination from histones H2A and H2B, thereby regulating gene transcription. However, novel roles for USP22 have emerged recently, such as tumor development and cell death. Apart from apoptosis, the relevance of USP22 in other programmed cell death pathways still remains unclear. Here, we describe a novel role for USP22 in controlling necroptotic cell death in human tumor cell lines. Loss of USP22 expression significantly delays TNFα/Smac mimetic/zVAD.fmk (TBZ)-induced necroptosis, without affecting TNFα-mediated NF-κB activation or extrinsic apoptosis. Ubiquitin remnant profiling identified receptor-interacting protein kinase 3 (RIPK3) lysines 42, 351, and 518 as novel, USP22-regulated ubiquitination sites during necroptosis. Importantly, mutation of RIPK3 K518 reduced necroptosis-associated RIPK3 ubiquitination and amplified necrosome formation and necroptotic cell death. In conclusion, we identify a novel role of USP22 in necroptosis and further elucidate the relevance of RIPK3 ubiquitination as crucial regulator of necroptotic cell death.
Background and purpose: Superficial siderosis of the central nervous system is a sporadic finding in magnetic resonance imaging, resulting from recurrent bleedings into the subarachnoid space. This study aimed to determine the frequency of spinal dural cerebrospinal fluid (CSF) leaks amongst patients with a symmetric infratentorial siderosis pattern. Methods: In all, 97,733 magnetic resonance images performed between 2007 and 2018 in our neurocenter were screened by a keyword search for “hemosiderosis” and “superficial siderosis.” Siderosis patterns on brain imaging were classified according to a previously published algorithm. Potential causative intracranial bleeding events were also assessed. Patients with a symmetric infratentorial siderosis pattern but without causative intracranial bleeding events in history were prospectively evaluated for spinal pathologies. Results: Forty-two patients with isolated supratentorial siderosis, 30 with symmetric infratentorial siderosis and 21 with limited (non-symmetric) infratentorial siderosis were identified. Amyloid angiopathy and subarachnoid hemorrhage were causes for isolated supratentorial siderosis. In all four patients with a symmetric infratentorial siderosis pattern but without a causative intracranial bleeding event in history, spinal dural abnormalities were detected. Dural leaks were searched for in patients with symmetric infratentorial siderosis and a history of intracranial bleeding event without known bleeding etiology, considering that spinal dural CSF leaks themselves may also cause intracranial hemorrhage, for example by inducing venous thrombosis due to low CSF pressure. Thereby, one additional spinal dural leak was detected. Conclusions: Persisting spinal dural CSF leaks can frequently be identified in patients with a symmetric infratentorial siderosis pattern. Diagnostic workup in these cases should include magnetic resonance imaging of the whole spine.
Linking epigenetic signature and metabolic phenotype in IDH mutant and IDH wildtype diffuse glioma
(2020)
Aims: Changes in metabolism are known to contribute to tumour phenotypes. If and how metabolic alterations in brain tumours contribute to patient outcome is still poorly understood. Epigenetics impact metabolism and mitochondrial function. The aim of this study is a characterisation of metabolic features in molecular subgroups of isocitrate dehydrogenase mutant (IDHmut) and isocitrate dehydrogenase wildtype (IDHwt) gliomas. Methods: We employed DNA methylation pattern analyses with a special focus on metabolic genes, large-scale metabolism panel immunohistochemistry (IHC), qPCR-based determination of mitochondrial DNA copy number and immune cell content using IHC and deconvolution of DNA methylation data. We analysed molecularly characterised gliomas (n = 57) for in-depth DNA methylation, a cohort of primary and recurrent gliomas (n = 22) for mitochondrial copy number and validated these results in a large glioma cohort (n = 293). Finally, we investigated the potential of metabolic markers in Bevacizumab (Bev)-treated gliomas (n = 29). Results: DNA methylation patterns of metabolic genes successfully distinguished the molecular subtypes of IDHmut and IDHwt gliomas. Promoter methylation of lactate dehydrogenase A negatively correlated with protein expression and was associated with IDHmut gliomas. Mitochondrial DNA copy number was increased in IDHmut tumours and did not change in recurrent tumours. Hierarchical clustering based on metabolism panel IHC revealed distinct subclasses of IDHmut and IDHwt gliomas with an impact on patient outcome. Further quantification of these markers allowed for the prediction of survival under anti-angiogenic therapy. Conclusion: A mitochondrial signature was associated with increased survival in all analyses, which could indicate tumour subgroups with specific metabolic vulnerabilities.
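Hierarchical clustering of marker panels, as used above for the metabolism IHC data, can be sketched with SciPy's agglomerative Ward linkage. The matrix below is random placeholder data, not the study's marker scores, and the variable names are hypothetical:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Placeholder matrix: rows = tumours, columns = IHC marker scores (hypothetical)
rng = np.random.default_rng(0)
marker_scores = rng.random((12, 6))

# Agglomerative clustering with Ward linkage, then cut the dendrogram
# into two candidate subclasses
Z = linkage(marker_scores, method="ward")
subclass = fcluster(Z, t=2, criterion="maxclust")
```

In a real analysis the resulting subclass labels would then be tested against patient outcome (e.g. survival), as the study reports for its IHC-derived glioma subclasses.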
Long-term effects on cirrhosis and portal hypertension of direct antiviral agent (DAA)-based eradication of hepatitis C virus (HCV) are still under debate. We analysed dynamics of liver and spleen elastography to assess potential regression of cirrhosis and portal hypertension 3 years post-treatment. Fifty-four patients with HCV-associated cirrhosis and DAA-induced SVR were included. Liver and spleen stiffness were measured at baseline (BL), end of treatment (EOT), 24 weeks after EOT (FU24) and 1, 2 and 3 (FU144) years post-treatment by transient liver elastography (L-TE) and point shear wave elastography (pSWE) using acoustic radiation force impulse (ARFI) of the liver (L-ARFI) and spleen (S-ARFI). Biochemical, virological and clinical data were also obtained. Liver stiffness assessed by L-TE decreased between BL [median (range), 32.5(9.1–75) kPa] and EOT [21.3(6.7–73.5) kPa; p < .0001] and EOT and FU144 [16(4.1–75) kPa; p = .006]. L-ARFI values improved between EOT [2.5(1.2–4.1) m/s] and FU144 [1.7(0.9–4.1) m/s; p = .001], while spleen stiffness remained unchanged. Overall, L-TE improved in 38 of 54 (70.4%) patients at EOT and 29 of 38 (76.3%) declined further until FU144, whereas L-ARFI values decreased in 30/54 (55.6%) patients at EOT and continued to decrease in 28/30 (93.3%) patients at FU144. Low bilirubin and high albumin levels at BL were associated with improved L-ARFI values (p = .048) at EOT or regression of cirrhosis (<12.5 kPa) by L-TE at FU144 (p = .005), respectively. Liver stiffness, but not spleen stiffness, continued to decline in a considerable proportion of patients with advanced liver disease after HCV eradication.
Beneficial acute effects of resistance exercise on cognitive functions may be modified by exercise intensity or by habitual physical activity. Twenty-six participants (9 female and 17 male; 25.5 ± 3.4 years) completed four resistance exercise interventions in randomized order on separate days (≥48 h washout). The intensities were set at 60%, 75%, and 90% of the one-repetition maximum (1RM). Three interventions had matched workloads (equal resistance × number of repetitions). One intervention applied 75% of the 1RM with the workload reduced to 50% (resistance × number of repetitions = 50%). Cognitive attention (Trail Making Test A, TMTA), task switching (Trail Making Test B, TMTB), and working memory (Digit Reading Spans Backward) were assessed before and immediately after exercise. Habitual activity was assessed as MET hours per week using the International Physical Activity Questionnaire. TMTB time to completion was significantly shorter after exercise at an intensity of 60% 1RM and at 75% 1RM with 100% workload. A Friedman test indicated a significant effect of exercise intensity in favor of 60% 1RM. TMTA performance was significantly shorter after exercise at an intensity of 60% 1RM, 90% 1RM, and 75% 1RM (50% workload). Habitual activity at vigorous intensity correlated positively with baseline TMTB and Digit Span Forward performance but not with pre- to post-intervention changes. Task switching, based on working memory, mental flexibility, and inhibition, was beneficially influenced by acute exercise of moderate intensity, whereas attention performance increased after exercise of moderate and vigorous intensity. Regular habitual activity had no impact on acute exercise effects.
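The workload matching in this design (workload = resistance × repetitions) reduces to simple arithmetic; a sketch with hypothetical repetition counts, assuming resistance is expressed as a fraction of the 1RM:

```python
def workload(intensity_frac_1rm, repetitions):
    """Workload as defined in the matched-load design:
    resistance (fraction of 1RM) times number of repetitions."""
    return intensity_frac_1rm * repetitions

def matched_reps(ref_intensity, ref_reps, new_intensity):
    """Repetitions at new_intensity that reproduce the reference workload."""
    return ref_intensity * ref_reps / new_intensity

# e.g. a hypothetical set of 15 repetitions at 60% 1RM is matched by
# 10 repetitions at 90% 1RM (0.6 * 15 == 0.9 * 10)
reps_90 = matched_reps(ref_intensity=0.60, ref_reps=15, new_intensity=0.90)
```

The 50%-workload condition at 75% 1RM then simply halves the repetition count relative to the matched 75% condition.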
Purpose: 10-year retrospective study to assess burden of illness in individuals with tuberous sclerosis complex (TSC) identified from German healthcare data. Methods: Patients with TSC were identified by International Classification of Diseases code Q85.1. Patients with epilepsy were identified by epilepsy diagnosis or antiseizure medication (ASM) prescription after TSC diagnosis. Results: Using data from 2016 (final study year), 100 patients with TSC were identified (mean [range] age: 38 [1–86] years; male: 40%); prevalence: 7.9 per 100,000 (TSC), 2.2 per 100,000 (TSC with epilepsy). During the 10-year study period (2007–2016), 256 patients with TSC were identified and followed up for 1,784 patient-years (epilepsy: 36%, 616 patient-years). TSC manifestations/comorbidities (apart from epilepsy) were identified more frequently in patients with epilepsy than without. Mean annual healthcare costs for patients with TSC were €6,139 per patient-year (PPY), mostly attributable to medication (35%) and inpatient care (29%). Patients with epilepsy incurred costs more than double those without. Mean (standard deviation [SD]) annual hospitalisation rate (AHR) and length of stay (LOS) PPY: 0.5 (1.0) and 5.9 (18.6) days for TSC. AHR and LOS were greater in patients with epilepsy than without. Mean (SD) number of ASMs prescribed (TSC with epilepsy): 3.0 (2.3) over the entire observable time per patient. Mortality rates (vs. control): 5.08% (vs. 1.69%, p<0.001) for TSC, 7.53% (vs. 0.98%, p<0.001) for TSC with epilepsy, 3.68% (vs. 2.03%, p = 0.003) for TSC without epilepsy. Conclusion: Healthcare costs, resource utilisation, and mortality were greater in patients with TSC and epilepsy than in those without epilepsy.
Glioblastoma is the most common malignant primary brain tumor. To date, clinically relevant biomarkers are restricted to isocitrate dehydrogenase (IDH) gene 1 or 2 mutations and O6-methylguanine DNA methyltransferase (MGMT) promoter methylation. Long non-coding RNAs (lncRNAs) have been shown to contribute to glioblastoma pathogenesis and could potentially serve as novel biomarkers. The clinical significance of HOXA Transcript Antisense RNA, Myeloid-Specific 1 (HOTAIRM1) was determined by analyzing HOTAIRM1 in multiple glioblastoma gene expression data sets for associations with prognosis, as well as IDH mutation and MGMT promoter methylation status. Finally, the role of HOTAIRM1 in glioblastoma biology and radiotherapy resistance was characterized in vitro and in vivo. We identified HOTAIRM1 as a candidate lncRNA whose up-regulation is significantly associated with shorter survival of glioblastoma patients, independent of IDH mutation and MGMT promoter methylation. Glioblastoma cell line models uniformly showed reduced cell viability, decreased invasive growth and diminished colony formation capacity upon HOTAIRM1 down-regulation. Integrated proteogenomic analyses revealed impaired mitochondrial function, and determination of reactive oxygen species (ROS) levels confirmed increased ROS levels upon HOTAIRM1 knock-down. HOTAIRM1 knock-down decreased expression of transglutaminase 2 (TGM2), a candidate protein implicated in mitochondrial function, and knock-down of TGM2 mimicked the phenotype of HOTAIRM1 down-regulation in glioblastoma cells. Moreover, HOTAIRM1 modulates radiosensitivity of glioblastoma cells both in vitro and in vivo. Our data support a role for HOTAIRM1 as a driver of biological aggressiveness, radioresistance and poor outcome in glioblastoma. Targeting HOTAIRM1 may be a promising new therapeutic approach.
Dermatofibrosarcoma protuberans (DFSP) is a rare fibroblastic soft tissue sarcoma. To date, little is known about the optimal therapy of primary and, in particular, recurrent DFSP in childhood and adolescence. In addition, very few clinical data exist on fibrosarcomatous DFSP (FS-DFSP) in paediatric patients, an intermediate-malignant variant of DFSP. The present study examined dermatofibrosarcoma protuberans in childhood and adolescence with regard to therapy and prognosis of the primary disease and of recurrences. Data from 40 patients with DFSP who were prospectively registered in the Cooperative Weichteilsarkomstudiengruppe (CWS) between 1996 and 2016 were analysed retrospectively. In addition, the therapy and course of 3 patients diagnosed with FS-DFSP are described.
All patients primarily underwent surgical tumour resection. A secondary resection was performed in 18 patients after incomplete or marginally complete primary resection. Overall, a microscopically complete surgical resection (R0) at best resection was achieved in 85% (n = 34/40). All patients achieved complete remission after the primary disease, and the 5-year overall survival was 100% (± 0; CI, 95%). R0 resection (IRS I) was a significant factor for the prevention of recurrence. A local recurrence occurred after a median of 1.1 years in 15% (n = 6/40) of patients overall and was treated by repeat surgical resection; all of these patients achieved complete remission. Two of 3 patients with FS-DFSP survived in complete remission after an R0 resection.
In summary, it was shown that DFSP has a good prognosis in paediatric patients. The most important prognostic factor for the prevention of recurrence is a microscopically complete surgical resection. In the case of recurrence or the presence of FS-DFSP, complete surgical resection should likewise be the aim.
This research project investigated how motor activity, such as cycling, influences the acquisition of foreign language vocabulary under two distinct conditions of auditory-motor synchronisation. In a mixed-subjects design, 48 participants had to learn 40 Polish-German vocabulary pairs presented auditorily over headphones under two different conditions while cycling on a bicycle ergometer: in experiment 1, vocabulary was presented in a fixed rhythm, while in experiment 2, participants self-initiated the presentation of vocabulary through pedalling. After having listened to the word pairs, participants completed online vocabulary tests, one directly after the learning session and a second one 24 hours later from home. Additionally, the individual pitch perception preference (i.e. fundamental vs. spectral pitch perception) of the participants was determined.
The results showed that fundamental listeners forgot significantly more vocabulary than spectral listeners in the fixed condition; in the self-initiated condition, there was no difference between the groups. The analysis of the motor data revealed significantly more accurate synchronisation for fundamental listeners during the fixed condition. This study therefore provides first evidence for the benefit of self-initiated auditory-motor synchronisation in the process of learning a foreign language in adults. It also reveals that pitch perception preference has an effect on auditory-motor synchronisation.
Multiple sclerosis (MS) is one of the most common chronic inflammatory diseases of the central nervous system in Germany and can become symptomatic through visual disturbances, pareses, or sensory deficits.
Conventional magnetic resonance imaging (MRI) techniques make an important contribution to the diagnosis of MS, as they can depict the lesion load of the white matter well. Earlier studies suggest that cognitive and psychomotor symptoms of MS, such as fatigue as well as concentration and memory deficits, may be related to damage of the cerebral cortex. Conventional MRI techniques can capture cortical atrophy, but not the underlying microstructural cortical remodeling processes. In the present study, quantitative MRI (qMRI) techniques were therefore used that can measure and quantify precisely these diffuse cortical tissue changes. Using diffusion tensor imaging (DTI) as the qMRI technique, diffusion abnormalities could be analyzed and characterized. Two tissue parameters were determined in the brain: mean diffusivity (MD) and fractional anisotropy (FA). Since previous studies have yielded inconsistent results regarding changes of DTI parameters in the gray matter in MS, we addressed the questions of whether cortical MD and FA changes in patients with relapsing-remitting MS (RRMS) can be detected using optimized DTI measurement techniques, how these changes are characterized, and how they are distributed across the cortex.
Twenty-four patients with RRMS and 25 healthy control subjects participated in the present study. Disease severity was graded using the Expanded Disability Status Scale (EDSS).
For MRI data acquisition, an optimized DTI method with intrinsic eddy-current compensation was used. MD and FA were determined for each voxel. Cortical parameter values were extracted and stored in surface datasets. A surface-based statistical group comparison was performed. Mean cortical values were determined for MD and FA and compared between the groups.
For parameters with demonstrated global group differences, the correlation with clinical status (quantified by the EDSS) was determined.
The analysis of mean cortical values showed an increase in MD in the patient group. The MD changes were spatially extensive, and clusters with increased MD values were found in the patient group, particularly in temporal, occipital, and parietal regions. Furthermore, a significant positive correlation between the EDSS score and cortical MD was found. In addition, focal FA decreases were detected in the temporal and occipital lobes. MD quantifies the magnitude and FA the directionality of diffusion.
MD may thus provide information on the intactness of microstructural barriers, and FA on the integrity of fiber connections. Our results could therefore indicate that, in the cortex of MS patients, the breakdown of microstructural barriers is spatially more extensive than the disruption of axonal structures. The correlation of MD with clinical status suggests that clinically relevant cortical tissue changes can be quantified and that these techniques may therefore be relevant for clinical studies.
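For reference, the two DTI parameters discussed above are standard scalar invariants of the diffusion tensor; given its eigenvalues, they are defined as

```latex
\mathrm{MD} = \bar{\lambda} = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3},
\qquad
\mathrm{FA} = \sqrt{\frac{3}{2}}\,
\frac{\sqrt{(\lambda_1-\bar{\lambda})^2 + (\lambda_2-\bar{\lambda})^2 + (\lambda_3-\bar{\lambda})^2}}
     {\sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}
```

so that MD averages the diffusion along the three principal axes, while FA ranges from 0 (fully isotropic diffusion) to 1 (diffusion restricted to a single axis).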
Background: Breast cancer (BC) is the most frequent female cancer and preferentially metastasizes to bone. The transcription factor TGFB-induced factor homeobox 1 (TGIF) is involved in bone metabolism. However, it is not yet known whether TGIF is associated with BC bone metastasis or patient outcome and thus of potential interest. Methods: TGIF expression was analyzed by immunohistochemistry in 1197 formalin-fixed, paraffin-embedded tissue samples from BC patients treated in the GAIN (German Adjuvant Intergroup Node-Positive) study with two adjuvant dose-dense schedules of chemotherapy with or without the bisphosphonate ibandronate. TGIF expression was categorized into negative/low and moderate/strong staining. Endpoints were disease-free survival (DFS), overall survival (OS) and time to primary bone metastasis as first site of relapse (TTPBM). Results: We found associations of higher TGIF protein expression with smaller tumor size (p = 0.015), well differentiated phenotype (p < 0.001) and estrogen receptor (ER)-positive BC (p < 0.001). Patients with higher TGIF expression levels showed significantly longer disease-free survival (DFS: HR 0.75 [95% CI 0.59–0.95], log-rank p = 0.019) and overall survival (OS: HR 0.69 [95% CI 0.50–0.94], log-rank p = 0.019), but no association with TTPBM (HR 0.77 [95% CI 0.51–1.16]; p = 0.213). Univariate analysis in molecular subgroups emphasized that elevated TGIF expression was prognostic for both DFS and OS in ER-positive BC patients (DFS: HR 0.68 [95% CI 0.51–0.91], log-rank p = 0.009, interaction p = 0.130; OS: HR 0.60 [95% CI 0.41–0.88], log-rank p = 0.008, interaction p = 0.107) and in the HER2-negative subgroup (DFS: HR 0.67 [95% CI 0.50–0.88], log-rank p = 0.004, interaction p = 0.034; OS: HR 0.57 [95% CI 0.40–0.81], log-rank p = 0.002, interaction p = 0.015). Conclusions: Our results suggest that moderate to high TGIF expression is a common feature of breast cancer cells and that it is not associated with bone metastases as first site of relapse.
However, a reduced expression is linked to tumor progression, especially in HER2-negative breast cancer.
High-resolution fMRI in the sub-millimeter regime allows researchers to resolve brain activity across cortical layers and columns non-invasively. While these high-resolution data make it possible to address novel questions of directional information flow within and across brain circuits, the corresponding data analyses are challenged by MRI artifacts, including image blurring, image distortions, low SNR, and restricted coverage. These challenges often result in insufficient spatial accuracy of conventional analysis pipelines. Here we introduce a new software suite that is specifically designed for layer-specific functional MRI: LayNii. This toolbox is a collection of command-line executable programs written in C/C++ and is distributed open source and as pre-compiled binaries for Linux, Windows, and macOS. LayNii is designed for layer-fMRI data that suffer from SNR and coverage constraints and thus cannot be straightforwardly analyzed in alternative software packages. Some of the most popular programs of LayNii provide 'layerification' and columnarization in the native voxel space of functional data as well as many other layer-fMRI-specific analysis tasks: layer-specific smoothing, model-based vein mitigation of GE-BOLD data, quality assessment of artifact-dominated sub-millimeter fMRI, as well as analyses of VASO data.
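To illustrate what 'layerification' means conceptually, the following toy Python sketch bins cortical voxels into equidistant layers by their normalized cortical depth. This is an illustration of the general idea only, not LayNii's actual implementation (which works on voxelized ribbons and also offers equivolume layering); the function name and interface are made up.

```python
import math

def equidistant_layers(depths, n_layers):
    """Assign each cortical voxel a layer label 1..n_layers from its
    normalized cortical depth (0 = white-matter surface, 1 = pial
    surface). None marks voxels outside the cortical ribbon -> label 0."""
    labels = []
    for d in depths:
        if d is None:
            labels.append(0)  # outside the cortical ribbon
        else:
            # map depth in [0, 1] to bins 1..n_layers
            labels.append(min(n_layers, int(math.floor(d * n_layers)) + 1))
    return labels

# toy example: five voxels at increasing depth, three layers
print(equidistant_layers([0.0, 0.2, 0.5, 0.9, None], 3))  # [1, 1, 2, 3, 0]
```

Equivolume layering would instead warp the bin boundaries with local curvature so that each layer preserves its volume fraction along folds, which is one reason a dedicated tool is preferable to this naive binning.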
Aims: Cardio-oncology is a growing interdisciplinary field which aims to improve cardiological care for cancer patients in order to reduce morbidity and mortality. The impact of cardiac biomarkers, echocardiographic parameters, and cardiological assessment on risk stratification is still unclear. We aimed to identify potential parameters that allow an early risk stratification of cancer patients. Methods and results: In this cohort study, we evaluated 930 patients admitted to the cardio-oncology outpatient clinic of the University Hospital Heidelberg from January 2016 to January 2019. We performed echocardiography, including global longitudinal strain (GLS) analysis, and measured cardiac biomarkers including N-terminal pro brain-type natriuretic peptide (NT-proBNP) and high-sensitivity cardiac troponin T (hs-cTnT) levels. Most patients were suffering from breast cancer (n = 450, 48.4%), upper gastrointestinal carcinoma (n = 99, 10.6%) or multiple myeloma (n = 51, 5.5%). At the initial visit, 86.7% of patients had a preserved left ventricular ejection fraction (LVEF >50%). At the second follow-up, 78.9% of patients still showed a preserved LVEF. Echocardiographic parameters and elevated NT-proBNP did not significantly correlate with all-cause mortality (ACM) (logistic regression, LVEF <50%: P = 0.46; NT-proBNP: P = 0.16) and failed to identify high-risk patients. In contrast, hs-cTnT above the median (≥7 ng/L) was an independent marker of ACM (multivariable logistic regression, OR: 2.21, P = 0.0038) among all included patients. In particular, hs-cTnT levels before the start of chemotherapy were predictive of ACM. Conclusions: Based on our non-selected cohort of cardio-oncological patients, hs-cTnT identified patients with high mortality using a low cutoff of 7 ng/L. We conclude that measurement of hs-cTnT is an important tool to stratify the mortality risk of cancer patients before starting chemotherapy.
We retrospectively investigated histopathological growth patterns in individuals with advanced nodular lymphocyte-predominant Hodgkin lymphoma (NLPHL) treated within the randomized HD18 study. In all, 35/60 patients (58%) presented with atypical growth patterns. Patients with atypical growth patterns more often had stage IV disease (P = 0.0354) and splenic involvement (P = 0.0048) than patients with typical growth patterns; a positive positron emission tomography after two cycles of chemotherapy (PET-2) tended to be more common (P = 0.1078). Five-year progression-free survival [hazard ratio (HR) = 0.86; 95% confidence interval (CI) = 0.49–1.47] and overall survival (HR = 0.85; 95% CI = 0.49–1.51) did not differ between the groups after study treatment with PET-2-guided escalated BEACOPP (bleomycin, etoposide, doxorubicin, cyclophosphamide, vincristine, procarbazine, prednisone). Thus, advanced NLPHL is often associated with atypical growth patterns, but their prognostic impact is compensated by PET-2-guided escalated BEACOPP.
Since the early 1970s, several studies have reported distal splenic artery embolization, better known as partial spleen embolization (PSE), as an efficacious treatment of portal hypertensive variceal bleeding and hypersplenism in cirrhosis (1, 2). However, the effect of PSE on portal pressure is secondary to the induction of splenic infarction. Depending on both the infarct volume and possible infection, PSE can induce serious complications including death (2, 3). On the other hand, proximal splenic artery embolization (PSAE), which mimics surgical splenic artery ligation, prevents large infarction of the spleen, favoring collateral perfusion of its intact distal vasculature (3). For this reason, PSAE has been widely preferred over PSE for reducing portal hyperflow and treating refractory ascites (RA) after whole or partial liver transplantation (LT) (3, 4). We report here a case of PSAE used to treat RA in a patient with cirrhosis not eligible for transjugular intrahepatic portosystemic shunt (TIPS) or LT.
Inhibition of fatty acid synthesis (FAS) stimulates tumor cell death and reduces angiogenesis. When SH-SY5Y cells or primary neurons are exposed to hypoxia only, inhibition of FAS yields significantly enhanced cell injury. The pathophysiology of stroke, however, is not restricted to hypoxia but also includes reoxygenation injury. Hence, an oxygen-glucose-deprivation (OGD) model with subsequent reoxygenation in both SH-SY5Y cells and primary neurons as well as a murine stroke model were used herein in order to study the role of FAS inhibition and its underlying mechanisms. SH-SY5Y cells and cortical neurons exposed to 10 h of OGD and 24 h of reoxygenation displayed prominent cell death when treated with the acetyl-CoA carboxylase inhibitor TOFA or the fatty acid synthase inhibitor cerulenin. Such FAS inhibition reduced the reduction potential of these cells, as indicated by increased NADH2+/NAD+ ratios under both in vitro and in vivo stroke conditions. As observed in the OGD model, FAS inhibition also resulted in increased cell death in the stroke model. Stroke mice treated with cerulenin not only displayed increased brain injury but also showed reduced neurological recovery during the observation period of 4 weeks. Interestingly, cerulenin treatment enhanced endothelial cell leakage, reduced transcellular electrical resistance (TER) of the endothelium and contributed to poststroke blood-brain barrier (BBB) breakdown. The latter was a consequence of the activated NF-κB pathway, stimulating MMP-9 and ABCB1 transporter activity on the luminal side of the endothelium. In conclusion, FAS inhibition aggravated poststroke brain injury as a consequence of BBB breakdown and NF-κB-dependent inflammation.
Background: No simple staging system has emerged for basal cell carcinomas (BCCs), since they do not follow the TNM process and practitioners have failed to agree on simple clinical or pathological criteria as a basis for a classification. An operational classification of BCCs is required for decision-making, trials and guidelines. Unsupervised clustering of real cases of difficult-to-treat BCCs (DTT-BCCs; part 1) demonstrated that experts could blindly agree on a five-group classification of DTT-BCCs based on five patterns of clinical situations. Objective: To use these five patterns to generate an operational and comprehensive classification of BCCs. Method: Practitioners' agreement when using the five-pattern classification was tested to ensure that it is robust enough to be used in practice, and a first version of a staging system of BCCs based on pattern recognition was generated. Results: Sixty-two physicians, including 48 practitioners and the 14 experts who participated in the generation of the five different patterns of DTT-BCCs, agreed on 90% of cases when classifying 199 DTT-BCC cases using the five-pattern classification (part 1), attesting that this classification is understandable and usable in practice. To cover the whole field of BCCs, these five groups of DTT-BCCs were complemented by a group representing the large number of easy-to-treat BCCs, for which sub-classification is of little interest, and a group of very rare metastatic cases, resulting in a four-stage, seven-substage staging system of BCCs. Conclusion: A practical classification adapted to the specificities of BCCs is proposed. It is the first tumour classification based on pattern recognition of clinical situations, and it proves to be consistent and usable. This EADO staging system version 1 will be improved step by step and tested as a decision tool and a prognostic instrument.
Autophagy is a core molecular pathway for the preservation of cellular and organismal homeostasis. Pharmacological and genetic interventions impairing autophagy responses promote or aggravate disease in a plethora of experimental models. Consistently, mutations in autophagy-related processes cause severe human pathologies. Here, we review and discuss preclinical data linking autophagy dysfunction to the pathogenesis of major human disorders including cancer as well as cardiovascular, neurodegenerative, metabolic, pulmonary, renal, infectious, musculoskeletal, and ocular disorders.
BAG3 is a negative regulator of ciliogenesis in glioblastoma and triple-negative breast cancer cells
(2021)
By regulating several hallmarks of cancer, BAG3 exerts oncogenic functions in a wide variety of malignant diseases including glioblastoma (GBM) and triple-negative breast cancer (TNBC). Here we performed global proteomic/phosphoproteomic analyses of CRISPR/Cas9-mediated isogenic BAG3 knockouts of the two GBM lines U343 and U251 in comparison to parental controls. Depletion of BAG3 evoked major effects on proteins involved in ciliogenesis/ciliary function and the activity of the related kinases aurora kinase A and CDK1. Cilia formation was significantly enhanced in BAG3 KO cells, a finding that could be confirmed in BAG3-deficient versus -proficient BT-549 TNBC cells, thus identifying a completely novel function of BAG3 as a negative regulator of ciliogenesis. Furthermore, we demonstrate that enhanced ciliogenesis and reduced expression of SNAI1 and ZEB1, two key transcription factors regulating epithelial-to-mesenchymal transition (EMT), are correlated with decreased cell migration, both in the GBM and TNBC BAG3 knockout cells. Our data obtained in two different tumor entities identify suppression of EMT and ciliogenesis as putative synergizing mechanisms of BAG3-driven tumor aggressiveness in therapy-resistant cancers.
Purpose: Auditory functional MRI (fMRI) often uses silent inter-volume delays for stimulus presentation. However, maintaining the steady-state of the magnetization usually requires constant delays. Here, a novel acquisition scheme dubbed “pre-Saturated EPI using Multiple delays in Steady-state” (SEPIMS) is proposed, using spin saturation at a fixed delay before each volume to maintain steady-state conditions, independent of previous spin history. This concept allows for variable inter-volume delays and thus for flexible stimulus design in auditory fMRI. The purpose was to compare the signal stability of SEPIMS and conventional sparse EPI (CS-EPI). Methods: The saturation module comprises two non-selective adiabatic saturation pulses. The efficiency of the saturation and its effect on the SEPIMS signal stability is tested in vitro and in vivo. Results: Data show that SEPIMS yields the same signal stability as CS-EPI, even for extreme variations between inter-volume delay durations. However, dual saturation pulses are required to achieve sufficiently high saturation efficiency in compartments with long T1 values. Importantly, spoiler gradient pulses after the EPI readout have to be optimized to avoid eddy-current-induced image distortions. Conclusion: The proposed SEPIMS sequence maintains high signal stability in the presence of variable inter-volume durations, thus allowing for flexible stimulus design.
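The spin-history independence claimed above follows from the saturation-recovery signal equation: if the longitudinal magnetization is nulled by the saturation module at a fixed delay before each excitation, the magnetization available at readout is

```latex
M_z(\tau_{\mathrm{sat}}) = M_0 \left( 1 - e^{-\tau_{\mathrm{sat}}/T_1} \right)
```

which, assuming complete saturation, depends only on the fixed saturation-to-excitation delay and on the tissue's T1, and not on the variable inter-volume delay that precedes the saturation module.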
Background and Objective: Long-term tooth retention is the ultimate goal of periodontal therapy. The aim of this study was to evaluate tooth loss (TL) during 10 years of supportive periodontal therapy (SPT) in periodontally compromised patients and to identify factors influencing TL on the patient level. Material and Methods: Patients were re-examined 120 ± 12 months after active periodontal therapy. TL and risk factors [smoking, initial diagnosis, SPT adherence, interleukin-1 polymorphism, cardiovascular diseases, age at baseline, bleeding on probing (BOP), change of practitioner, insurance status, number of SPT visits, marital and educational status] influencing TL on the patient level were assessed. Results: One hundred patients (52 female, mean age 65.6 ± 11 years) lost 121 of 2428 teeth (1.21 teeth/patient; 0.12 teeth/patient/y) during 10 years of SPT. Forty-two of these were lost for periodontal reasons (0.42 teeth/patient; 0.04 teeth/patient/y). Significantly more teeth were lost due to other reasons (P < .001). Smoking, baseline severity of periodontitis, non-adherent SPT, positive interleukin-1 polymorphism, marital and educational status, private insurance, older age at baseline, BOP, and a small number of SPT visits were identified as patient-related risk factors for TL (P < .05). Conclusion: During 120 ± 12 months of SPT, only a small number of teeth were lost in periodontally compromised patients, showing the positive effect of a well-established periodontal treatment concept. The remaining risk for TL should be considered using risk-adapted SPT allocation.
CD4+ T cell lymphopenia predicts mortality from Pneumocystis pneumonia in kidney transplant patients
(2020)
Background: Pneumocystis jirovecii pneumonia (PcP) remains a life-threatening opportunistic infection after solid organ transplantation, even in the era of Pneumocystis prophylaxis. The association between risk of developing PcP and low CD4+ T cell counts has been well established. However, it is unknown whether lymphopenia in the context of post-renal transplant PcP increases the risk of mortality. Methods: We carried out a retrospective analysis of a cohort of kidney transplant patients with PcP (n = 49) to determine the risk factors for mortality associated with PcP. We correlated clinical and demographic data with the outcome of the disease. For CD4+ T cell counts, we used the Wilcoxon rank sum test for in-hospital mortality and a Cox proportional-hazards regression model for 60-day mortality. Results: In univariate analyses, high CRP, high neutrophils, CD4+ T cell lymphopenia, mechanical ventilation, and high acute kidney injury network stage were associated with in-hospital mortality following presentation with PcP. In a receiver operating characteristic (ROC) analysis, an optimum cutoff of ≤200 CD4+ T cells/µL predicted in-hospital mortality, and CD4+ T cell lymphopenia remained a risk factor in a Cox regression model. Conclusions: A low CD4+ T cell count in kidney transplant recipients is a biomarker for disease severity and a risk factor for in-hospital mortality following presentation with PcP.
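An 'optimum cutoff' in a ROC analysis is commonly chosen by maximizing Youden's J statistic (sensitivity + specificity − 1). The abstract does not state which criterion was used, so the Python sketch below, with made-up CD4+ counts, only illustrates this general approach for a marker where low values predict the event.

```python
def optimal_cutoff(values, outcomes):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1
    for a marker where LOW values predict the event (value <= cutoff -> test
    positive). values: marker measurements; outcomes: 1 = event, 0 = no event."""
    positives = sum(outcomes)
    negatives = len(outcomes) - positives
    best_cut, best_j = None, -1.0
    for cut in sorted(set(values)):
        tp = sum(1 for v, o in zip(values, outcomes) if v <= cut and o == 1)
        fp = sum(1 for v, o in zip(values, outcomes) if v <= cut and o == 0)
        # J = sensitivity - (1 - specificity)
        j = tp / positives - fp / negatives
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# hypothetical CD4+ counts (cells/uL) and in-hospital deaths (1 = died)
cd4 = [50, 120, 180, 200, 250, 400, 600, 800]
died = [1, 1, 1, 1, 0, 0, 0, 0]
print(optimal_cutoff(cd4, died))  # (200, 1.0)
```

In this perfectly separated toy data the cutoff of 200 cells/µL yields J = 1.0; real cohort data would of course produce overlapping distributions and J < 1.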
Background: Cerebral O2 saturation (ScO2) reflects cerebral perfusion and can be measured noninvasively by near-infrared spectroscopy (NIRS). Objectives: In this pilot study, we describe the dynamics of ScO2 during transcatheter aortic valve implantation (TAVI) in nonventilated patients and its impact on procedural outcome. Methods and Results: We measured ScO2 of both frontal lobes continuously by NIRS in 50 consecutive analgo-sedated patients undergoing transfemoral TAVI (female 58%, mean age 80.8 years). Compared to baseline, ScO2 dropped significantly during rapid ventricular pacing (RVP) (59.3% vs. 53.9%, p < .01). Five minutes after RVP, ScO2 values normalized (post RVP 62.6% vs. 53.9% during RVP, p < .01; pre RVP 61.6% vs. post RVP 62.6%, p = .53). Patients with an intraprocedural pathological ScO2 decline of >20% (n = 13) had a higher EuroSCORE II (3.42% vs. 5.7%, p = .020) and experienced delirium (24% vs. 62%, p = .015) and stroke (0% vs. 23%, p < .01) more often after TAVI. Multivariable logistic regression revealed higher age and large ScO2 drops as independent risk factors for delirium. Conclusions: During RVP, ScO2 declined significantly compared to baseline. A ScO2 decline of >20% is associated with a higher incidence of delirium and stroke and is a valid cut-off value to screen for these complications. NIRS measurement during the TAVI procedure may be an easy-to-implement diagnostic tool to detect patients at high risk for cerebrovascular complications and delirium.
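A relative-decline criterion like the ">20% drop" can be evaluated directly on the NIRS trace against the individual baseline. The function below is an illustrative sketch; its name and flagging logic are assumptions, not taken from the study protocol.

```python
def pathological_sco2_drop(baseline, values, threshold=0.20):
    """Flag whether cerebral O2 saturation (ScO2, %) ever fell more than
    `threshold` (relative, e.g. 0.20 = 20%) below the individual
    pre-procedural baseline during the procedure."""
    relative_drops = [(baseline - v) / baseline for v in values]
    return max(relative_drops) > threshold

# baseline 60% ScO2; a dip to 46% is a ~23% relative decline -> flagged
print(pathological_sco2_drop(60.0, [59.0, 54.0, 46.0, 61.0]))  # True
```

Note the criterion is relative to each patient's own baseline, so a drop from 60% to 46% counts, while the same absolute 14-point drop from a baseline of 75% would not (14/75 ≈ 19%).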
Cortical changes in epilepsy patients with focal cortical dysplasia: new insights with T2 mapping
(2020)
Background: In epilepsy patients with focal cortical dysplasia (FCD) as the epileptogenic focus, global cortical signal changes are generally not visible on conventional MRI. However, epileptic seizures or antiepileptic medication might affect normal-appearing cerebral cortex and lead to subtle damage. Purpose: To investigate cortical properties outside FCD regions with T2-relaxometry. Study Type: Prospective study. Subjects: Sixteen patients with epilepsy and FCD and 16 age-/sex-matched healthy controls. Field Strength/Sequence: 3T, fast spin-echo T2-mapping, fluid-attenuated inversion recovery (FLAIR), and synthetic T1-weighted magnetization-prepared rapid acquisition of gradient-echoes (MP-RAGE) datasets derived from T1-maps. Assessment: Reconstruction of the white matter and cortical surfaces based on MP-RAGE structural images was performed to extract cortical T2 values, excluding lesion areas. Three independent raters confirmed that morphological cortical/juxtacortical changes in the conventional FLAIR datasets outside the FCD areas were definitely absent for all patients. Averaged global cortical T2 values were compared between groups. Furthermore, group comparisons of regional cortical T2 values were performed using a surface-based approach. Tests for correlations with clinical parameters were carried out. Statistical Tests: General linear model analysis, permutation simulations, paired and unpaired t-tests, and Pearson correlations. Results: Cortical T2 values were increased outside FCD regions in patients (83.4 ± 2.1 msec, control group 81.4 ± 2.1 msec, P = 0.01). T2 increases were widespread, affecting mainly frontal, but also parietal and temporal regions of both hemispheres. Significant correlations were not observed (P ≥ 0.55) between cortical T2 values in the patient group and the number of seizures in the last 3 months or the number of anticonvulsive drugs in the medical history. 
Data Conclusion: Widespread increases in cortical T2 were found in patients with FCD-associated epilepsy, suggesting that structural epilepsy in patients with FCD is not only a symptom of a focal cerebral lesion but also leads to global cortical damage not visible on conventional MRI. Evidence Level: 2. Technical Efficacy: Stage 3. J. MAGN. RESON. IMAGING 2020;52:1783–1789.
Background and Objectives: Patient blood (more accurately: haemoglobin, Hb) management (PBM) aims to optimize endogenous Hb production and to minimize iatrogenic Hb loss while maintaining patient safety and optimal effectiveness of medical interventions. PBM was adopted as policy for patients by the World Health Organization (WHO) and, all the more, should be applied to healthy donors. Materials and Methods: Observational data from 489 bone marrow (BM) donors were retrospectively analysed, and principles of patient blood management were applied to healthy volunteer BM donations. Results and Conclusion: We rendered BM aspiration safe for donors, notably avoiding the collection of autologous blood units and blood transfusions entirely, through iron management, establishment and refinement of a high-yield aspiration technique, limitation of the collection volume to 1.5% of donor body weight, and development of volume prediction algorithms for the requested cell dose.
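The 1.5%-of-body-weight limit implies a simple per-donor volume cap. The sketch below assumes an aspirate density of about 1 g/mL to convert the mass fraction into a volume; this density, and the function itself, are illustrative assumptions not stated in the abstract.

```python
def max_collection_volume_ml(body_weight_kg, fraction=0.015, density_g_per_ml=1.0):
    """Maximum bone-marrow collection volume implied by the
    1.5%-of-body-weight limit; assumes aspirate density ~1 g/mL
    (illustrative assumption)."""
    # body weight in grams times mass fraction, converted to mL via density
    return body_weight_kg * 1000 * fraction / density_g_per_ml

print(max_collection_volume_ml(70))  # ~1050 mL for a 70 kg donor
```

In practice the actually requested volume would be the smaller of this safety cap and the volume predicted to reach the requested cell dose.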