Background and Objectives: Delirium is a common and major complication subsequent to cardiac surgery. Despite scientific efforts, there are no parameters that reliably predict postoperative delirium. In delirium pathophysiology, natriuretic peptides (NPs) interfere with the blood–brain barrier and thus promote delirium. Therefore, we aimed to assess whether NPs may predict postoperative delirium and long-term outcomes. Materials and Methods: To evaluate the predictive value of NPs for delirium, we retrospectively analyzed data from a prospective, randomized study for serum levels of atrial natriuretic peptide (ANP) and the precursor of C-type natriuretic peptide (NT-proCNP) in patients undergoing coronary artery bypass grafting (CABG) with or without cardiopulmonary bypass (off-pump coronary bypass grafting; OPCAB). Delirium was assessed by a validated chart-based method. Long-term outcomes were assessed 10 years after surgery by a telephone interview. Results: The overall incidence of delirium in the total cohort was 48%, regardless of the surgical approach (CABG vs. OPCAB). Serum ANP levels >64.6 pg/mL predicted delirium with a sensitivity (95% confidence interval) of 100% (75.3–100) and a specificity of 42.9% (17.7–71.1). Serum NT-proCNP levels >1.7 pg/mL predicted delirium with a sensitivity (95% confidence interval) of 92.3% (64.0–99.8) and a specificity of 42.9% (17.7–71.1). Neither NP predicted postoperative survival or long-term cognitive decline. Conclusions: We found a positive correlation between delirium and preoperative serum levels of ANP and NT-proCNP. A well-powered prospective study might establish NPs as biomarkers indicating the risk of delirium and postoperative cognitive decline in patients at risk for postoperative delirium.
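The reported confidence intervals behave like exact (Clopper–Pearson) binomial intervals on small counts: 13/13 reproduces the 75.3–100 sensitivity interval and 12/13 reproduces 64.0–99.8. The underlying counts are not stated in the abstract and are reconstructed here as an assumption; a minimal self-contained sketch of the Clopper–Pearson construction:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) CI for k successes in n trials, by bisection."""
    lo, hi = 0.0, 1.0
    if k > 0:  # lower limit: p such that P(X >= k | p) = alpha / 2
        a, b = 0.0, 1.0
        for _ in range(60):
            mid = (a + b) / 2
            if 1 - binom_cdf(k - 1, n, mid) < alpha / 2:
                a = mid  # p too small: upper tail still below alpha / 2
            else:
                b = mid
        lo = a
    if k < n:  # upper limit: p such that P(X <= k | p) = alpha / 2
        a, b = 0.0, 1.0
        for _ in range(60):
            mid = (a + b) / 2
            if binom_cdf(k, n, mid) > alpha / 2:
                a = mid  # p too small: lower tail still above alpha / 2
            else:
                b = mid
        hi = a
    return lo, hi

# 13/13 -> (0.753, 1.0) and 12/13 -> (0.640, 0.998), matching the reported CIs
print(clopper_pearson(13, 13))
print(clopper_pearson(12, 13))
```

Where SciPy is available, `scipy.stats.binomtest(k, n).proportion_ci(method="exact")` yields the same interval.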
Background: Liver fibrosis in human immunodeficiency virus (HIV)-infected individuals is mostly attributable to co-infection with hepatitis B or C. The impact of other risk factors, including prolonged exposure to combined antiretroviral therapy (cART) is poorly understood. Our aim was to determine the prevalence of liver fibrosis and associated risk factors in HIV-infected individuals based on non-invasive fibrosis assessment using transient elastography (TE) and serum biomarkers (Fibrotest [FT]).
Methods: In 202 consecutive HIV-infected individuals (159 men; mean age 47 ± 9 years; 35 with hepatitis-C-virus [HCV] co-infection), TE and FT were performed. Repeat TE examinations were conducted 1 and 2 years after study inclusion.
Results: Significant liver fibrosis was present in 16% and 29% of patients, respectively, when assessed by TE (≥ 7.1 kPa) and FT (> 0.48). A combination of TE and FT predicted significant fibrosis in 8% of all patients (31% in HIV/HCV co-infected and 3% in HIV mono-infected individuals). Chronic ALT, AST and γ-GT elevation was present in 29%, 20% and 51% of all cART-exposed patients and in 19%, 8% and 45.5% of HIV mono-infected individuals. Overall, factors independently associated with significant fibrosis as assessed by TE (OR, 95% CI) were co-infection with HCV (7.29, 1.95-27.34), chronic AST (6.58, 1.30-33.25) and γ-GT (5.17, 1.56-17.08) elevation and time on dideoxynucleoside therapy (1.01, 1.00-1.02). In 68 HIV mono-infected individuals who had repeat TE examinations, TE values did not differ significantly during a median follow-up time of 24 months (median intra-patient changes at last TE examination relative to baseline: -0.2 kPa, p = 0.20).
Conclusions: Chronic elevation of liver enzymes was observed in up to 45.5% of HIV mono-infected patients on cART. However, only a small subset had significant fibrosis as predicted by TE and FT. There was no evidence for fibrosis progression during follow-up TE examinations.
Background & Aims: Thrombopoietin receptor agonists are a new class of compounds licenced for the treatment of immune thrombocytopenic purpura. They are currently being studied for patients with thrombocytopenia in advanced liver disease or under therapy for hepatitis C. There are indications that the risk of developing portal vein thrombosis in patients with advanced liver cirrhosis might be increased under therapy with thrombopoietin receptor agonists. We report a case of a patient with Child class B liver cirrhosis and concurrent immune thrombocytopenic purpura who developed portal vein thrombosis under therapy with the thrombopoietin receptor agonist romiplostim.
Methods: A 50-year-old woman with hepatitis C virus-associated immune thrombocytopenic purpura and Child class B liver cirrhosis presented to our emergency department with rapidly evolving hydropic decompensation and general malaise. For immune thrombocytopenic purpura, the patient had been started on the thrombopoietin receptor agonist romiplostim nine months earlier.
Results: During hospitalization, the platelet count was measured above 330,000/μl and partial portal vein thrombosis was diagnosed by imaging studies. The thrombotic event was assumed to be associated with the romiplostim treatment for immune thrombocytopenic purpura via excessive elevation of platelet count. After anticoagulation with heparin and cessation of romiplostim treatment, complete recanalisation of the portal vein was achieved.
Conclusions: We conclude that romiplostim should be used with caution in patients with hepatitis C-associated immune thrombocytopenic purpura and advanced liver cirrhosis, as the risk of thrombotic complications may increase significantly.
Background. Transcatheter aortic valve implantation (TAVI) is currently recommended for patients with severe aortic stenosis at intermediate or high surgical risk. The decision process during TAVI evaluation includes a thorough benefit-risk assessment, and knowledge about long-term benefits and outcomes may improve patients’ expectation management. Objective. To evaluate patients’ perceived health status and self-reported long-term outcome more than 5 years after TAVI. Methods and Results. Demographic and procedure data were obtained from all patients treated with TAVI at our institution from 2006 to 2012. A cross-sectional survey was conducted among the surviving patients, measuring health status, including the EQ-5D-5L questionnaire, and clinical outcomes. 103 patients (22.8%) were alive at a median follow-up period of 7 years (5.4–9.8). 99 (96%) of the 103 patients were included in the final analysis. The mean age at follow-up was 86.5 ± 8.0 years, and 56.6% were female. Almost all patients (93.9%) described an improvement in their quality of life after receiving TAVI. At late follow-up, the mean utility index and EQ-VAS score were 0.80 ± 0.20 and 58.49 ± 11.49, respectively. Mobility was the most frequently reported limitation (85.4%), while anxiety/depression was the least frequently reported limitation (19.8%). With respect to functional class, 64.7% were in New York Heart Association (NYHA) class III or IV, compared to 67.0% prior to TAVI (p = 0.51). Self-reported long-term outcomes revealed mainly low long-term complication rates. 74 total hospitalizations were reported after TAVI, and among those, 43% were for cardiovascular reasons. Among cardiovascular rehospitalizations, new pacemaker implantations were the most frequently reported (18.9%), followed by cardiac decompensation and coronary heart disease (15.6%). Conclusion. The majority of the patients described an improvement in health status after TAVI.
More than five years after TAVI, the patients’ perceived health status was satisfactory, and the incidence of clinical events and hospitalizations was very low.
Extensive black shale deposits formed in the Early Cretaceous South Atlantic, supporting the notion that this emerging ocean basin was a globally important site of organic carbon burial. The magnitude of organic carbon burial in marine basins is known to be controlled by various tectonic, oceanographic, hydrological, and climatic processes acting on different temporal and spatial scales, the nature and relative importance of which are poorly understood for the young South Atlantic. Here we present new bulk and molecular geochemical data from an Aptian–Albian sediment record recovered from the deep Cape Basin at Deep Sea Drilling Project (DSDP) Site 361, which we combine with general circulation model results to identify driving mechanisms of organic carbon burial. A multimillion-year decrease (i.e., Early Aptian–Albian) in organic carbon burial, reflected in a lithological succession of black shale, gray shale, and red beds, was caused by increasing bottom water oxygenation due to abating hydrographic restriction via South Atlantic–Southern Ocean gateways. These results emphasize basin evolution and ocean gateway development as a decisive primary control on enhanced organic carbon preservation in the Cape Basin at geological timescales (> 1 Myr). The Early Aptian black shale sequence comprises alternations of shales with high (> 6 %) and relatively low (∼ 3.5 %) organic carbon content of marine sources, the former being deposited during the global Oceanic Anoxic Event (OAE) 1a, as well as during repetitive intervals before and after OAE 1a. In all cases, these short-term intervals of enhanced organic carbon burial coincided with strong influxes of sediments derived from the proximal African continent, indicating closely coupled climate–land–ocean interactions. 
Supported by our model results, we show that fluctuations in weathering-derived nutrient input from the southern African continent, linked to changes in orbitally driven humidity and aridity, were the underlying drivers of repetitive episodes of enhanced organic carbon burial in the deep Cape Basin. These results suggest that deep marine environments of emerging ocean basins responded sensitively and directly to short-term fluctuations in riverine nutrient fluxes. We attribute this relationship to the lack of wide, mature continental shelf seas that could have acted as a barrier or filter for nutrient transfer from the continent into the deep ocean.
Long non-coding RNAs (lncRNAs) contribute to cardiac (patho)physiology. Aging is the major risk factor for cardiovascular disease, with cardiomyocyte apoptosis as one underlying cause. Here, we report the identification of the aging-regulated lncRNA Sarrah (ENSMUST00000140003), which is anti-apoptotic in cardiomyocytes. Importantly, loss of SARRAH (OXCT1-AS1) in human engineered heart tissue results in impaired contractile force development. SARRAH directly binds to the promoters of genes downregulated after SARRAH silencing via RNA–DNA triple helix formation, and cardiomyocytes lacking the triple helix-forming domain of Sarrah show an increase in apoptosis. One of the direct SARRAH targets is NRF2, and restoration of NRF2 levels after SARRAH silencing partially rescues the reduction in cell viability. Overexpression of Sarrah in mice resulted in better recovery of cardiac contractile function after acute myocardial infarction (AMI) compared to control mice. In summary, we identified the anti-apoptotic, evolutionarily conserved lncRNA Sarrah, which is downregulated by aging, as a regulator of cardiomyocyte survival.
Purpose: Amblyopia with eccentric fixation, especially when not diagnosed early, is a therapeutic challenge, as visual outcome is known to be poorer than in amblyopia with central fixation. Consequently, treatment after late diagnosis is often denied. Electronic monitoring of occlusion offers the chance to gain a first focused insight into age-dependent dose response and treatment efficiency, as well as the shift of fixation in this rare group of paediatric patients. Methods: In our prospective pilot study, we examined amblyopes with eccentric fixation during 12 months of occlusion treatment. We evaluated their visual acuity, recorded patching duration using a TheraMon®-microsensor, and determined their fixation with a direct ophthalmoscope. Dose-response relationship and treatment efficiency were calculated. Results: The study included 12 participants with strabismic and combined amblyopia aged 2.9–12.4 years (mean 6.5). Median prescription of occlusion was 7.7 h/day (range 6.6–9.9) and median daily received occlusion was 5.2 h/day (range 0.7–9.7). At study end, median acuity gain was 0.6 log units (range 0–1.6) and residual interocular visual acuity difference (IOVAD) 0.3 log units (range 0–1.8). There was neither significant acuity gain nor reduction in IOVAD after the 6th month of treatment. Children younger than 4 years showed the best response, with the lowest residual IOVAD at study end. Efficiency calculation showed an acuity gain of approximately one line from 100 h of patching in the first 2 months and half a line after 6 months. There was a significant decline of treatment efficiency with age (p = 0.01). Foveolar fixation was achieved after a median of 3 months (range 1–6). Three patients (> 6 years) did not gain central fixation. Conclusion: Eccentric fixation is a challenge to therapy success. Based on electronic monitoring, our study quantified for the first time the reduction of treatment efficiency with increasing age in amblyopes with eccentric fixation.
Despite some improvement in patients up to 8 years of age, older patients showed significantly lower treatment efficiency. In younger patients with good adherence, central fixation and a low residual IOVAD could be attained after a median of 3 months despite poor initial acuity. Hence, the necessity of early diagnosis and intensive occlusion should be emphasized.
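The efficiency metric used above (acuity gain per 100 h of received patching, with one logMAR line = 0.1 log units) reduces to simple bookkeeping over the sensor records. A minimal sketch; the function names and sample numbers are illustrative, not the study's data pipeline:

```python
# Illustrative dose-response bookkeeping; names and numbers are hypothetical.
LOG_UNITS_PER_LINE = 0.1  # one logMAR line

def received_dose_hours(daily_sensor_hours):
    """Total occlusion actually received, from electronic sensor records."""
    return sum(daily_sensor_hours)

def efficiency_lines_per_100h(acuity_gain_log_units, patch_hours):
    """Acuity gain in logMAR lines per 100 h of received patching."""
    lines = acuity_gain_log_units / LOG_UNITS_PER_LINE
    return lines / (patch_hours / 100.0)

# e.g. one line (0.1 log units) gained over the first 100 h of patching
print(efficiency_lines_per_100h(0.1, 100))  # 1.0 line per 100 h
```

The same quotient applied to 600 h for a half-line gain per 100 h reproduces the "half a line after 6 months" figure reported above.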
Background: Biological psychiatry aims to understand mental disorders in terms of altered neurobiological pathways. However, for one of the most prevalent and disabling mental disorders, Major Depressive Disorder (MDD), patients only marginally differ from healthy individuals on the group level. Whether Precision Psychiatry can solve this discrepancy and provide specific, reliable biomarkers remains unclear, as current Machine Learning (ML) studies suffer from shortcomings pertaining to methods and data, which lead to substantial over- as well as underestimation of true model accuracy.
Methods: Addressing these issues, we quantify classification accuracy on a single-subject level in N=1,801 patients with MDD and healthy controls employing an extensive multivariate approach across a comprehensive range of neuroimaging modalities in a well-curated cohort, including structural and functional Magnetic Resonance Imaging, Diffusion Tensor Imaging as well as a polygenic risk score for depression.
Findings: Training and testing a total of 2.4 million ML models, we find accuracies for diagnostic classification between 48.1% and 62.0%. Multimodal data integration of all neuroimaging modalities does not improve model performance. Similarly, training ML models on individuals stratified based on age, sex, or remission status does not lead to better classification. Even under simulated conditions of perfect reliability, performance does not substantially improve. Importantly, model error analysis identifies symptom severity as one potential target for MDD subgroup identification.
Interpretation: Although multivariate neuroimaging markers increase predictive power compared to univariate analyses, single-subject classification – even under conditions of extensive, best-practice Machine Learning optimization in a large, harmonized sample of patients diagnosed using state-of-the-art clinical assessments – does not reach clinically relevant performance. Based on this evidence, we sketch a course of action for Precision Psychiatry and future MDD biomarker research.
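A back-of-the-envelope check (our own arithmetic, not part of the study): with roughly 1,800 subjects and approximately balanced classes — both assumptions here — even accuracies only a few points above 50% are statistically distinguishable from coin-flipping, so the limitation described above concerns clinical usefulness rather than statistical significance. A one-sided binomial test via the normal approximation:

```python
from math import sqrt, erfc

def binom_sf_normal(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): normal approximation with
    continuity correction, adequate for n this large."""
    mu, sigma = n * p, sqrt(n * p * (1 - p))
    z = (k - 0.5 - mu) / sigma
    return 0.5 * erfc(z / sqrt(2))  # upper-tail probability of the z-score

n = 1801  # cohort size from the abstract
for acc in (0.52, 0.55, 0.62):
    k = round(acc * n)  # number of correct classifications at this accuracy
    print(f"accuracy {acc:.0%}: one-sided p vs. chance = {binom_sf_normal(k, n):.2g}")
```

Even 52% accuracy yields p < 0.05 at this sample size, while 62% is astronomically far from chance — yet neither approaches clinically relevant single-subject performance.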
The Weissert Event ~133 million years ago marked a profound global cooling that punctuated the Early Cretaceous greenhouse. We present modelling, high-resolution bulk organic carbon isotopes, and chronostratigraphically calibrated sea surface temperatures (SSTs) based on an organic paleothermometer (the TEX86 proxy), which capture the Weissert Event in the semi-enclosed Weddell Sea basin, offshore Antarctica (paleolatitude ~54 °S; paleowater depth ~500 meters). We document a ~3–4 °C drop in SST coinciding with the Weissert cold end, and we converge the Weddell Sea data, climate simulations, and available worldwide multi-proxy temperature data on a single unifying solution that provides a best fit between all lines of evidence. The outcome confirms a 3.0 °C (±1.7 °C) global mean surface cooling across the Weissert Event, which translates into a ~40% drop in atmospheric pCO2 over a period of ~700 thousand years. Consistent with geologic evidence, this pCO2 drop favoured the potential build-up of local polar ice.
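The translation of a 3.0 °C cooling into a ~40% pCO2 drop can be illustrated with the standard logarithmic CO2–temperature relation ΔT = S · log2(C/C0). The effective sensitivity S ≈ 4 °C per CO2 doubling used below is our illustrative assumption, not a value stated by the study:

```python
from math import log2

def pco2_ratio(delta_t, sensitivity):
    """pCO2 ratio C/C0 implied by delta_t = sensitivity * log2(C/C0)."""
    return 2.0 ** (delta_t / sensitivity)

ratio = pco2_ratio(-3.0, 4.0)               # ~0.59: pCO2 falls to ~60% of C0
assert abs(4.0 * log2(ratio) + 3.0) < 1e-9  # round-trip sanity check
print(f"~{1 - ratio:.0%} drop in pCO2")
```

With S in the 4.0–4.1 °C range, a 3.0 °C cooling corresponds to a ~40% drop, consistent with the figure reported above.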
Background and aims: Individualization of treatment with peginterferon alfa and ribavirin in patients with chronic hepatitis C showed benefit in controlled trials and was implemented in treatment guidelines to increase response rates and to reduce side effects and costs. However, it is unknown whether individualization was adopted in routine daily practice and whether it translated into improved outcomes.
Methods: From a large noninterventional cohort study, clinical and virologic response data of 10,262 HCV patients who received peginterferon alfa-2a and ribavirin between 2003-2007 and 2008-2011 were analyzed. To account for treatment individualization, a matched-pair analysis (2,997 matched pairs) was performed. Variation in treatment duration and dosing of ribavirin were analyzed as indicators for individualization.
Results: Sustained virological response (SVR) rates were similar between 2003-2007 and 2008-2011 (62.0% vs. 63.7%). Patients with comorbidities were more prevalent in the later period (44.3% vs. 57.1%). The subsequent matched-pair analysis demonstrated higher SVR rates in the 2008-2011 period (64.3%) than in the 2003-2007 period (61.2%, p=0.008). More patients received abbreviated or extended treatment regimens in the later than in the earlier period, as an indicator of treatment individualization. To the same end, ribavirin doses were higher in the later period (12.6 versus 11.6 mg/kg/day). Factors independently associated with SVR included HCV genotype, low baseline viral load, younger age, route of infection, absence of concomitant diseases, lower APRI score, normal gamma-GT, higher ribavirin doses, no substitution for drug abuse, treatment duration, and treatment in the 2008-2011 period.
Conclusions: Treatment individualization with peginterferon alfa and ribavirin was implemented in daily routine between 2003-2007 and 2008-2011, and SVR rates improved over the same period. These findings may be most relevant in resource-limited settings.
Early T-cell precursor acute lymphoblastic leukemia (ETP-ALL) has been identified as a high-risk subgroup of acute T-lymphoblastic leukemia (T-ALL) with a high rate of FLT3 mutations in adults. To unravel the underlying pathomechanisms and the clinical course, we assessed molecular alterations and clinical characteristics in a large cohort of ETP-ALL (n = 68) in comparison to adult non-ETP T-ALL patients. Interestingly, we found a high rate of FLT3 mutations in ETP-ALL samples (n = 24, 35%). Furthermore, FLT3-mutated ETP-ALL was characterized by a specific immunophenotype (CD2+/CD5-/CD13+/CD33-), a distinct gene expression pattern (aberrant expression of IGFBP7, WT1, GATA3) and mutational status (absence of NOTCH1 mutations and a low frequency, 21%, of clonal TCR rearrangements). The observed low GATA3 expression and high WT1 expression, in combination with the lack of NOTCH1 mutations and the low rate of TCR rearrangements, point to a leukemic transformation at the pluripotent prothymocyte stage in FLT3-mutated ETP-ALL. The clinical outcome in ETP-ALL patients was poor, but encouraging in patients who underwent allogeneic stem cell transplantation (3-year OS: 74%). To further explore the efficacy of targeted therapies, we demonstrate that T-ALL cell lines transfected with FLT3 expression constructs were particularly sensitive to tyrosine kinase inhibitors. In conclusion, FLT3-mutated ETP-ALL defines a molecularly distinct, stem-cell-like leukemic subtype. These data warrant clinical studies with the implementation of FLT3 inhibitors in addition to early allogeneic stem cell transplantation for this high-risk subgroup.
Background: Threonine Aspartase 1 (Taspase1) mediates cleavage of the mixed lineage leukemia (MLL) protein and of leukemia-provoking MLL fusions. In contrast to other proteases, the understanding of Taspase1's (patho)biological relevance and function is limited, since neither small-molecule inhibitors nor cell-based functional assays for Taspase1 are currently available. Methodology/Findings: Efficient cell-based assays to probe Taspase1 function in vivo are presented here. These are composed of glutathione S-transferase, autofluorescent protein variants, Taspase1 cleavage sites, and rational combinations of nuclear import and export signals. The biosensors localize predominantly to the cytoplasm, whereas expression of biologically active Taspase1, but not of inactive Taspase1 mutants or of the protease Caspase3, triggers their proteolytic cleavage and nuclear accumulation. Compared to in vitro assays using recombinant components, the in vivo assay was highly efficient. Employing an optimized nuclear translocation algorithm, the triple-color assay could be adapted to a high-throughput microscopy platform (Z′-factor = 0.63). Automated high-content data analysis was used to screen a focused compound library, selected by an in silico pharmacophore screening approach, as well as a collection of fungal extracts. Screening identified two compounds, N-[2-[(4-amino-6-oxo-3H-pyrimidin-2-yl)sulfanyl]ethyl]benzenesulfonamide and 2-benzyltriazole-4,5-dicarboxylic acid, which partially inhibited Taspase1 cleavage in living cells. Additionally, the assay was exploited to probe endogenous Taspase1 in solid tumor cell models and to identify an improved consensus sequence for efficient Taspase1 cleavage. This allowed the in silico identification of novel putative Taspase1 targets. Those include the FERM Domain-Containing Protein 4B, the Tyrosine-Protein Phosphatase Zeta, and DNA Polymerase Zeta.
Cleavage site recognition and proteolytic processing of these substrates were verified in the context of the biosensor. Conclusions: The assay not only allows genetic probing of Taspase1 structure–function in vivo but is also applicable to high-content screening for Taspase1 inhibitors. Such tools will provide novel insights into Taspase1's function and its potential therapeutic relevance.
Synaptic long-term potentiation (LTP) at spinal neurons directly communicating pain-specific inputs from the periphery to the brain has been proposed to serve as a trigger for pain hypersensitivity in pathological states. Previous studies have functionally implicated the NMDA receptor-NO pathway and the downstream second messenger, cGMP, in these processes. Because cGMP can broadly influence diverse ion channels, kinases, and phosphodiesterases, pre- as well as postsynaptically, the precise identity of cGMP targets mediating spinal LTP, their mechanisms of action, and their locus in the spinal circuitry are still unclear. Here, we found that Protein Kinase G1 (PKG-I) localized presynaptically in nociceptor terminals plays an essential role in the expression of spinal LTP. Using the Cre-loxP system, we generated nociceptor-specific knockout mice lacking PKG-I specifically in presynaptic terminals of nociceptors in the spinal cord, but not in postsynaptic neurons or elsewhere (SNS-PKG-I−/− mice). Patch clamp recordings showed that activity-induced LTP at identified synapses between nociceptors and spinal neurons projecting to the periaqueductal grey (PAG) was completely abolished in SNS-PKG-I−/− mice, although basal synaptic transmission was not affected. Analyses of synaptic failure rates and paired-pulse ratios indicated a role for presynaptic PKG-I in regulating the probability of neurotransmitter release. Inositol 1,4,5-trisphosphate receptor 1 and myosin light chain kinase were recruited as key phosphorylation targets of presynaptic PKG-I in nociceptive neurons. Finally, behavioural analyses in vivo showed marked defects in SNS-PKG-I−/− mice in several models of activity-induced nociceptive hypersensitivity, and pharmacological studies identified a clear contribution of PKG-I expressed in spinal terminals of nociceptors.
Our results thus indicate that presynaptic mechanisms involving an increase in release probability from nociceptors are operational in the expression of synaptic LTP on spinal-PAG projection neurons and that PKG-I localized in presynaptic nociceptor terminals plays an essential role in this process to regulate pain sensitivity.
Background: The angiogenic function of endothelial cells is regulated by numerous mechanisms, but the impact of long noncoding RNAs (lncRNAs) has hardly been studied. We set out to identify novel and functionally important endothelial lncRNAs.
Methods: Epigenetically controlled lncRNAs in human umbilical vein endothelial cells were identified by exon-array analysis after knockdown of the histone demethylase JARID1B. Molecular mechanisms were investigated by RNA pulldown and immunoprecipitation, mass spectrometry, microarray, several knockdown approaches, CRISPR-Cas9, assay for transposase-accessible chromatin sequencing, and chromatin immunoprecipitation in human umbilical vein endothelial cells. Patient samples from lung and tumor tissue were studied for MANTIS expression.
Results: A search for epigenetically controlled endothelial lncRNAs yielded lncRNA n342419, here termed MANTIS, as the most strongly regulated lncRNA. Controlled by the histone demethylase JARID1B, MANTIS was downregulated in patients with idiopathic pulmonary arterial hypertension and in rats treated with monocrotaline, whereas it was upregulated in carotid arteries of Macaca fascicularis subjected to an atherosclerosis regression diet and in endothelial cells isolated from human glioblastoma patients. CRISPR-Cas9-mediated deletion or silencing of MANTIS with small interfering RNAs or GapmeRs inhibited angiogenic sprouting and alignment of endothelial cells in response to shear stress. Mechanistically, the nuclear-localized MANTIS lncRNA interacted with BRG1, the catalytic subunit of the switch/sucrose nonfermentable chromatin-remodeling complex. This interaction was required for nucleosome remodeling by keeping the ATPase function of BRG1 active. Thereby, the transcription of key endothelial genes such as SOX18, SMAD6, and COUP-TFII was regulated by ensuring efficient binding of the RNA polymerase II machinery.
Conclusion: MANTIS is a novel, differentially regulated lncRNA that facilitates endothelial angiogenic function.
5-Lipoxygenase (5-LO) is the key enzyme in the formation of pro-inflammatory leukotrienes (LT), which play an important role in a number of inflammatory diseases. Accordingly, 5-LO inhibitors are frequently used to study the role of 5-LO and LT in models of inflammation and cancer. Interestingly, the therapeutic efficacy of these inhibitors is highly variable. Here we show that the frequently used 5-LO inhibitors AA-861, BWA4C, C06, CJ-13,610 and the FDA-approved compound zileuton, as well as the pan-LO inhibitor nordihydroguaiaretic acid, interfere with prostaglandin E2 (PGE2) release into the supernatants of cytokine-stimulated (TNFα/IL-1β) HeLa cervix carcinoma, A549 lung cancer and HCA-7 colon carcinoma cells, with potencies similar to their LT-inhibitory activities (IC50 values ranging from 0.1–9.1 µM). In addition, AA-861, BWA4C, CJ-13,610 and zileuton concentration-dependently inhibited bacterial lipopolysaccharide-triggered prostaglandin (PG) release into human whole blood. Western blot analysis revealed that inhibition of the expression of enzymes involved in PG synthesis was not part of the underlying mechanism. Likewise, liberation of arachidonic acid, the substrate for PG synthesis, as well as PGH2 and PGE2 formation were not impaired by the compounds. However, accumulation of intracellular PGE2 was found in inhibitor-treated HeLa cells, suggesting inhibition of PG export as the major mechanism. Further experiments showed that the PG exporter ATP-binding cassette transporter multidrug resistance protein 4 (MRP-4) is targeted by the inhibitors and may be involved in the 5-LO inhibitor-mediated PGE2 inhibition. In conclusion, the pharmacological effects of a number of 5-LO inhibitors are compound-specific and involve potent inhibition of PGE2 export.
Results from experimental models of inflammation and pain obtained with 5-LO inhibitors may therefore be misleading, and the use of these compounds as pharmacological tools has to be revisited. In addition, 5-LO inhibitors may serve as new scaffolds for the development of potent prostaglandin export inhibitors.
Background: Intensive care resources are heavily utilized during the COVID-19 pandemic. However, risk stratification and prediction of clinical outcomes of SARS-CoV-2 patients upon ICU admission remain inadequate. This study aimed to develop a machine learning model, based on retrospective and prospective clinical data, to stratify patient risk and predict ICU survival and outcomes. Methods: A Germany-wide electronic registry was established to pseudonymously collect admission, therapeutic and discharge information of SARS-CoV-2 ICU patients retrospectively and prospectively. Machine learning approaches were evaluated for the accuracy and interpretability of predictions. The Explainable Boosting Machine approach was selected as the most suitable method. Individual, non-linear shape functions for predictive parameters and parameter interactions are reported. Results: 1039 patients were included in the Explainable Boosting Machine model, 596 collected retrospectively and 443 prospectively. The model for prediction of general ICU outcome proved more reliable in predicting “survival”. Age, inflammatory and thrombotic activity, and severity of ARDS at ICU admission were shown to be predictive of ICU survival. Patients’ age, pulmonary dysfunction and transfer from an external institution were predictors for ECMO therapy. The interaction of patient age with D-dimer levels on admission, and of creatinine levels with the SOFA score without GCS, were predictors for renal replacement therapy. Conclusions: Using Explainable Boosting Machine analysis, we confirmed and weighed previously reported predictors and identified novel predictors of outcome in critically ill COVID-19 patients. Using this strategy, predictive modeling of COVID-19 ICU patient outcomes can be performed while overcoming the limitations of linear regression models. Trial registration: ClinicalTrials.gov, NCT04455451.
Background: Clinical practice guidelines for patients with primary biliary cholangitis (PBC) have recently been revised and implement well-established criteria for the response to standard first-line ursodeoxycholic acid (UDCA) therapy at 12 months after treatment initiation, allowing early identification of high-risk patients with inadequate treatment responses who may require treatment modification. However, only very limited data are available concerning the real-world clinical management of patients with PBC in Germany. Objective: The aim of this retrospective multicenter study was to evaluate response rates to standard first-line UDCA therapy and subsequent second-line treatment regimens in a large cohort of well-characterized patients with PBC from 10 independent hepatological referral centers in Germany prior to the introduction of obeticholic acid as a licensed second-line treatment option. Methods: Diagnostic confirmation of PBC, standard first-line UDCA treatment regimens and response rates at 12 months according to the Paris-I, Paris-II, and Barcelona criteria, the follow-up cut-off alkaline phosphatase (ALP) ≤ 1.67 × upper limit of normal (ULN), and normalization of bilirubin (bilirubin ≤ 1 × ULN) were retrospectively examined between June 1986 and March 2017. The management and hitherto applied second-line treatment regimens in patients with an inadequate response to UDCA, and subsequent response rates at 12 months, were also evaluated. Results: Overall, 480 PBC patients were included in this study. The median UDCA dosage was 13.2 mg UDCA/kg body weight (BW)/d. Adequate UDCA treatment response rates according to the Paris-I, Paris-II, and Barcelona criteria were observed in 91%, 71.3%, and 61.3% of patients, respectively. In 83.8% of patients, ALP ≤ 1.67 × ULN was achieved. A total of 116 patients (24.2%) showed an inadequate response to UDCA according to at least one criterion.
The diverse second-line treatment regimens applied led to significantly higher response rates according to the Paris-II criteria (35 vs. 60%, p = 0.005), the Barcelona criteria (13 vs. 34%, p = 0.0005), and ALP ≤ 1.67 × ULN with bilirubin ≤ 1 × ULN (52.1 vs. 75%, p = 0.002). The addition of bezafibrate appeared to induce the strongest beneficial effect in this cohort (Paris-II: 24 vs. 74%, p = 0.004; Barcelona: 50 vs. 84%, p = 0.046; ALP ≤ 1.67 × ULN and bilirubin ≤ 1 × ULN: 33 vs. 86%, p = 0.001). Conclusion: Our large retrospective multicenter study confirms high response rates following first-line standard UDCA treatment in patients with PBC and highlights the need for close monitoring and early treatment modification in high-risk patients with an insufficient response to UDCA, since early treatment modification significantly increases subsequent response rates.