University Publications
This article proposes a rapprochement between two distinct fields of research with notable elective affinities: archaeology and philosophical aesthetics. It asks how, within Adorno's dialectical thought, the concepts of prehistory and proto-history are articulated, taking as its guiding thread the theme of cave art and its modern counterpart, that is, technical reproducibility. The starting point of this rapprochement is a thought-provoking paragraph in Adorno's posthumous work "Ästhetische Theorie", in the subsection labeled by Rolf Tiedemann "Moderne Kunst und industrielle Produktion", in which Adorno claims a convergence between cave painting and the photographic camera: both rest on objectivation (Objektivation), that is, on the act of separating the subjective act from the object that is seen. Building on this observation, the article's main contribution lies in identifying a kind of proto-history of technical reproducibility in the prehistoric world. From a radically dialectical perspective, the virtual and technological progress felt in recent decades thus represents nothing qualitatively new in human history, being merely the unfolding of a tendency already contained in prehistory, which leads us to believe that we have not yet overcome the state of mythic immanence so extensively denounced by Adorno and Horkheimer in the "Dialektik der Aufklärung". To demonstrate this, the article first reconstructs the main lines of force of the "Dialectic of Enlightenment", centering on the category of myth (Section 1). It then presents the relation between prehistory and proto-history in the context of Adorno's thought, especially in the works and essays of the middle period of his bibliography, such as "Minima Moralia" and "Prismen" (Section 2). Finally, it presents some of Adorno's reflections on cave art and technical reproducibility in the "Aesthetic Theory" (Section 3).
This text deals with the relation between the public sphere and the mass media in Jürgen Habermas's bibliographic corpus, in the 50 years since Strukturwandel der Öffentlichkeit (1962). Its aim is to show that, contrary to some critical studies, this is not an investigative gap (an absence, abandonment, or non-exploration of the theme) but a secondary, implicit approach; that this secondary treatment of the theme is related to Habermas's original pessimistic position on the negative influence of the mass media in depoliticizing the public sphere; and that Habermas's pessimism about the negative effects of the mass media retains an internal connection to the original orientation of Adorno's critique of mass culture. This means that, despite reformulations and new diagnoses, Habermas's skeptical position on the democratic potential of the mass media for repoliticizing the public sphere appears not to have changed in its foundations over these 50 years.
Society as a whole is experiencing an ethical crisis, above all a crisis of collective ethics. "Small" barbarities settle silently into society, fostered by an economic system that is, at its root, exclusionary and merciless toward those who do not somehow fit into it. The small banalizations of injustice create warning signs that Auschwitz could repeat itself, a fear present and made explicit by Adorno in his most important works. This work aims to answer the following questions: 1) How can the deconstruction of the culture of violence and prejudice advancing across the world be translated into the everyday practice of education? 2) How can we work toward a pedagogy of imagination that is, first and foremost, the capacity to imagine the other and to imagine oneself in the other's place? 3) What are the epistemological bases guiding such a goal? To this end, it will be shown that, through a pedagogy of anti-prejudice, grounded above all in the thought of Adorno, Horkheimer, and Marcuse, it is possible to promote an interdisciplinarity that brings reason closer to affect and denounces the denial of prejudice, while also clarifying that a more humanized society requires passing through the deconstruction, comprehension, and reconstruction of education. Understanding and clarifying how the average person enables, in everyday education, the exclusionary and prejudiced attitudes imposed both by the human condition and by contemporary capitalism is one of its goals. Finally, the work proposes to affirm the possibility of an ethics for the technological society of semi-formation (Halbbildung), grounded in the pedagogy of anti-prejudice, whose purpose is the formation of subjects aware of the limits of science and of the use of technology to prevent barbarism, whose banner is the fight against intolerance and violence, and whose principal instrument is imagination.
By means of the analysis of two texts by Theodor Adorno, temporally very distant from each other (one written at the beginning of his career, the other in his maturity), this article shows that the essay was for him not merely a theme of reflection, but also, and above all, a kind of matrix for his thought. Within this matrix, drawing on a tradition begun in modernity with Montaigne and consolidated with Leibniz and the English empiricists, Adorno seeks to build, in the last phase of his philosophy, his conception of an "anti-system", in which the indispensable coherence of thought can be kept safe from instrumentalization by the system of domination.
This article discusses the current transformations in educational systems across the world. Focusing on the European Union (EU) and the Organisation for Economic Co-operation and Development (OECD) as policy actors, it argues that these transformations imply a threefold "economization" of education policy, observable at every level of the educational field. The growing importance of these organizations in educational matters marks a transition toward a "post-national constellation" in the educational field as well, insofar as national educational sovereignty is, at the very least, undergoing readjustment. However, the "economization" of education policies is not limited to bringing education closer to the needs of the economy and turning its services into tradable commodities. It also affects the operational level of education. A production logic is being implemented in the self-description of the institutions of the educational system, which are no longer conceived as bureaucratically administered establishments but as managerially controlled commercial enterprises, an activity in which entrepreneurial action becomes necessary. This new type of administration raises the problem of the democratic legitimation of political decisions, which ideally combines three elements: the democratic, the "expertocratic", and the ethical-professional. The article discusses the consequences of a shift in the balance of these three elements in the case of Germany.
This text reflects on the education of the senses through a study of Theodor W. Adorno's "The Culture Industry" and Walter Benjamin's "The Work of Art in the Age of Its Technical Reproducibility". We first provide a brief historical contextualization of the period in which these texts were written. We then seek to show, through Walter Benjamin's thought, how art came to educate the senses of the working classes through its technical reproducibility. In the final considerations, we point out the importance of reading both works for reflecting on how discourse is formed in social and cultural decision-making today, and we also seek to understand the political function of art in the critical formation of the subject.
Vegetation responds to drought through a complex interplay of plant hydraulic mechanisms, posing challenges for model development and parameterization. We present a mathematical model that describes the dynamics of leaf water potential over time while considering the different strategies by which plant species regulate their water potentials. The model has two parameters: λ, describing the adjustment of the leaf water potential to changes in soil water potential, and Δψww, describing the typical 'well-watered' leaf water potential at non-stressed (near-zero) levels of soil water potential. The model was tested and calibrated on 110 time-series datasets containing the leaf and soil water potentials of 66 species under drought and non-drought conditions. It successfully reproduces the measured leaf water potentials over time based on three different regulation strategies under drought. We found that three parameter sets derived from the measurement data reproduced the dynamics of 53% of the drought dataset and 52% of the control dataset [root mean square error (RMSE) < 0.5 MPa]. We conclude that, instead of quantifying the water potential regulation of different plant species by complex modeling approaches, a small set of parameters may be sufficient to describe water potential regulation behavior for large-scale modeling. Thus, our approach paves the way for a parsimonious representation of the full spectrum of plant hydraulic responses to drought in dynamic vegetation models.
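The two-parameter regulation model described above can be sketched numerically. The linear form ψ_leaf = Δψ_ww + λ·ψ_soil, the function names, and the data values below are illustrative assumptions for this sketch, not the authors' published code or data:

```python
import math

def leaf_water_potential(psi_soil, lam, delta_psi_ww):
    """Illustrative linearization: predicted leaf water potential (MPa)
    as a function of soil water potential (MPa).
    lam          -- slope: adjustment of leaf to soil water potential
                    (near 0: isohydric-like; near 1: anisohydric-like)
    delta_psi_ww -- 'well-watered' leaf water potential at psi_soil ~ 0
    """
    return delta_psi_ww + lam * psi_soil

def rmse(predicted, observed):
    """Root mean square error, the calibration criterion (< 0.5 MPa)."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(observed))

# Hypothetical time series (MPa): a drying soil and measured leaf potentials.
psi_soil = [0.0, -0.5, -1.0, -1.5, -2.0]
psi_leaf_obs = [-0.5, -0.9, -1.3, -1.7, -2.1]

# A strongly coupled parameter set (lam = 0.8, delta_psi_ww = -0.5 MPa)
# reproduces this particular series with near-zero RMSE.
pred = [leaf_water_potential(ps, 0.8, -0.5) for ps in psi_soil]
err = rmse(pred, psi_leaf_obs)
```

In a calibration against a real time series, λ and Δψ_ww would be chosen to minimize this RMSE, and a series would count as reproduced when the minimum falls below the 0.5 MPa threshold.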
The upcoming commissioning of the first-of-series cryomodule of the superconducting (SC) continuous-wave Helmholtz linear accelerator will demand precise alignment of the four internal SC cavities and the two SC solenoids. For optimal results, a beam-based alignment method is used to reduce the misalignment of the whole cryomodule as well as of its individual components. This method requires a symmetric beam of low transverse emittance, which is to be formed by a collimation system consisting of two separate plates with milled slits, aligned in the horizontal and vertical directions. The collimation system and the alignment measurements are proposed, investigated, and realized. The complete setup of this system and its integration into the existing environment at the GSI High Charge State Injector are presented, together with the results of the recent reference measurements.
As a centerpiece of antigen processing, the ATP-binding cassette transporter associated with antigen processing (TAP) became a main target for viral immune evasion. The herpesviral ICP47 inhibits TAP function, thereby suppressing an adaptive immune response. Here, we report on a thermostable ICP47-TAP complex, generated by fusion of different ICP47 fragments. These fusion complexes allowed us to determine the direction and positioning of ICP47 in the central cavity of TAP. ICP47-TAP fusion complexes are arrested in a stable conformation, as demonstrated by MHC I surface expression, melting temperature, and the mutual exclusion of herpesviral TAP inhibitors. We unveiled a conserved region next to the active domain of ICP47 as essential for the complete stabilization of the TAP complex. Binding of the active domain of ICP47 arrests TAP in an open, inward-facing conformation, rendering the complex inaccessible to other viral factors. Based on our findings, we propose a dual interaction mechanism for ICP47: a per se destabilizing active domain inhibits the function of TAP, whereas a conserved C-terminal region additionally stabilizes the transporter. These new insights into the ICP47 inhibition mechanism can be applied in future structural analyses of the TAP complex.
Derived from a biophysical model for the motion of a crawling cell, the evolution system (⋆) u_t = Δu − ∇·(u∇v), 0 = Δv − kv + u, is investigated in a finite domain Ω ⊂ R^n, n ≥ 2, with k ≥ 0. Whereas a comprehensive literature is available for cases in which (⋆) describes chemotaxis-driven population dynamics and hence is accompanied by homogeneous Neumann-type boundary conditions for both components, the modeling context considered here, besides requiring the flux ∂_ν u − u ∂_ν v to vanish on ∂Ω, inherently involves homogeneous Dirichlet boundary conditions for the attractant v, which in the current setting corresponds to the cell's cytoskeleton being free of pressure at the boundary. This modification of the boundary setting is shown to go along with a substantial change with respect to the potential to support the emergence of singular structures: it is, inter alia, revealed that in contexts of radial solutions in balls there exist two critical mass levels, distinct from each other whenever k > 0 or n ≥ 3, that separate ranges within which (i) all solutions are global in time and remain bounded, (ii) both global bounded and exploding solutions exist, or (iii) all nontrivial solutions blow up. While critical mass phenomena distinguishing between regimes of type (i) and (ii) belong to the well-understood characteristics of (⋆) when posed under classical no-flux boundary conditions in planar domains, the discovery of a distinct secondary critical mass level related to the occurrence of (iii) seems to have no nearby precedent. In the planar case with the domain being a disk, the analytical results are supplemented with some numerical illustrations, and it is discussed how the findings can be interpreted biophysically for the situation of a cell on a flat substrate.
Purpose: A study of real-time adaptive radiotherapy systems was performed to test the hypothesis that, across delivery systems and institutions, the dosimetric accuracy is improved with adaptive treatments over non-adaptive radiotherapy in the presence of patient-measured tumor motion.
Methods and materials: Ten institutions with robotic (2), gimbaled (2), MLC (4), or couch-tracking (2) systems used common materials, including CT and structure sets, motion traces, and planning protocols, to create a lung and a prostate plan. For each motion trace, the plan was delivered twice to a moving dosimeter, with and without real-time adaptation. Each measurement was compared to a static measurement, and the percentage of failed points for γ-tests was recorded.
Results: For all lung traces, all measurement sets showed improved dose accuracy, with a mean 2%/2 mm γ-fail rate of 1.6% with adaptation and 15.2% without adaptation (p < 0.001). For all prostate traces, the mean 2%/2 mm γ-fail rate was 1.4% with adaptation and 17.3% without adaptation (p < 0.001). The difference between the four systems was small, with an average 2%/2 mm γ-fail rate of <3% for all systems with adaptation, for both lung and prostate.
Conclusions: The investigated systems all accounted for realistic tumor motion accurately and performed to a similar high standard, with real-time adaptation significantly outperforming non-adaptive delivery methods.
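The 2%/2 mm γ-fail rate used as the endpoint above can be sketched in one dimension. This is a minimal illustration of the standard gamma-index idea (combined dose-difference and distance-to-agreement criterion), not the evaluation software used in the study, and the dose profiles are hypothetical:

```python
import math

def gamma_fail_rate(ref, meas, positions, dose_tol=0.02, dta_mm=2.0):
    """1D global gamma analysis: for each measured point, search all reference
    points for the minimum combined dose-difference / distance-to-agreement
    metric; a point fails when gamma > 1.
    dose_tol -- dose criterion as a fraction of the reference maximum (2%)
    dta_mm   -- distance-to-agreement criterion in mm (2 mm)
    """
    d_max = max(ref)
    failed = 0
    for xm, dm in zip(positions, meas):
        gamma_sq = min(
            ((xr - xm) / dta_mm) ** 2 + ((dr - dm) / (dose_tol * d_max)) ** 2
            for xr, dr in zip(positions, ref)
        )
        if math.sqrt(gamma_sq) > 1.0:
            failed += 1
    return 100.0 * failed / len(meas)

# Hypothetical Gaussian dose profile on a 1 mm grid.
positions = [float(i) for i in range(50)]
ref = [100.0 * math.exp(-((x - 25.0) / 10.0) ** 2) for x in positions]
meas_static = list(ref)                       # perfect delivery: 0% fail
meas_shifted = [100.0 * math.exp(-((x - 30.0) / 10.0) ** 2)
                for x in positions]           # 5 mm motion error: points fail
```

A static (motion-free) delivery reproduces the reference exactly and yields a 0% fail rate, while an uncompensated 5 mm shift pushes points in the gradient region beyond both the 2% dose and 2 mm distance criteria, mirroring the with/without-adaptation contrast reported above.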
The prediction of protein–ligand interactions and their corresponding binding free energy is a challenging task in structure-based drug design and related applications. Docking and scoring is broadly used to propose the binding mode and underlying interactions as well as to provide a measure for ligand affinity or differentiate between active and inactive ligands. Various studies have revealed that most docking software packages reliably predict the binding mode, although scoring remains a challenge. Here, a diverse benchmark data set of 99 matched molecular pairs (3D-MMPs) with experimentally determined X-ray structures and corresponding binding affinities is introduced. This data set was used to study the predictive power of 13 commonly used scoring functions to demonstrate the applicability of the 3D-MMP data set as a valuable tool for benchmarking scoring functions.
Correlation functions provide information on the properties of mesons in vacuum and of hot nuclear matter. In this work, we present a new method to derive a well-defined spectral representation for correlation functions. Combining this method with the quark gap equation and the inhomogeneous Bethe–Salpeter equation in the rainbow-ladder approximation, we calculate in-vacuum masses of light mesons and the electrical conductivity of the quark–gluon plasma. The analysis can be extended to other observables of strong-interaction systems.
RcsF, a proposed auxiliary regulator of the regulation of capsule synthesis (rcs) phosphorelay system, is a key element for understanding the RcsC-D-A/B signaling cascade, which is responsible for the regulation of more than 100 genes and is involved in cell division, motility, biofilm formation, and virulence. The RcsC-D-A/B system is one of the most complex bacterial signal transduction pathways, consisting of several membrane-bound and soluble proteins. RcsF is a lipoprotein attached to the outer membrane and plays an important role in activating the RcsC-D-A/B pathway. The exact mechanism by which RcsF activates the rcs phosphorelay, however, remains unknown. We have analyzed the sequence of RcsF and identified three structural elements: 1) an N-terminal membrane-anchored helix (residues 3-13), 2) a loop (residues 14-48), and 3) a C-terminal folded domain (residues 49-134). We have determined the structure of this C-terminal domain and started to investigate its interaction with potential partners. Important features of its structure are two disulfide bridges, between Cys-74 and Cys-118 and between Cys-109 and Cys-124. To evaluate the importance of this RcsF disulfide bridge network in vivo, we have examined the ability of the full-length protein and of specific Cys mutants to initiate the rcs signaling cascade. The results indicate that the Cys-74/Cys-118 and the Cys-109/Cys-124 residues correlate pairwise with the activity of RcsF. Interaction studies showed a weak interaction with an RNA hairpin. However, no interaction could be detected with reagents that are believed to activate the rcs phosphorelay, such as lysozyme, glucose, or Zn(2+) ions.
In this article, originally an inaugural address at the Otto Suhr Institute of the Free University of Berlin, Axel Honneth outlines the program of an intersubjective theory of recognition, using this category as the conceptual core of a Critical Theory of society in which the pre-scientific experience of disrespect toward social expectations is connected to the formation of emancipatory demands.
Polo-like kinase 1 (PLK1) is a crucial regulator of cell cycle progression. It is established that the activation of PLK1 depends on the coordinated action of Aurora-A and Bora. Nevertheless, very little is known about the spatiotemporal regulation of PLK1 during G2, specifically, the mechanisms that keep cytoplasmic PLK1 inactive until shortly before mitosis onset. Here, we describe PLK1 dimerization as a new mechanism that controls PLK1 activation. During the early G2 phase, Bora supports transient PLK1 dimerization, thus fine-tuning the timely regulated activation of PLK1 and modulating its nuclear entry. At late G2, the phosphorylation of T210 by Aurora-A triggers dimer dissociation and generates active PLK1 monomers that support entry into mitosis. Interfering with this critical PLK1 dimer/monomer switch prevents the association of PLK1 with importins, limiting its nuclear shuttling, and causes nuclear PLK1 mislocalization during the G2-M transition. Our results suggest a novel conformational space for the design of a new generation of PLK1 inhibitors.
This study investigates the idea of legal recognition in Axel Honneth's theory through an analysis of the work The Struggle for Recognition. Honneth, anchored in the theories of Hegel and Mead, establishes the role of law as a sphere of individual recognition and its potential for securing self-respect. The paper reconstructs Honneth's theory with regard to the roles law plays in The Struggle for Recognition and then, drawing on the reading Honneth develops of Thomas Marshall's theory, analyzes the role of fundamental subjective rights as a medium for the sedimentation and expansion of new forms of recognition and citizenship.
The regulation of cellular copper homeostasis is crucial in biology. Impairments lead to severe dysfunctions and are known to affect aging and development. Previously, a loss-of-function mutation in the gene encoding the copper-sensing and copper-regulated transcription factor GRISEA of the filamentous fungus Podospora anserina was reported to lead to cellular copper depletion and a pleiotropic phenotype with hypopigmentation of the mycelium and the ascospores, affected fertility, and a lifespan increased by approximately 60% compared to the wild type. This phenotype is linked to a switch from copper-dependent standard respiration to an alternative respiration, leading to a reduced generation of both reactive oxygen species (ROS) and adenosine triphosphate (ATP). We performed a genome-wide comparative transcriptome analysis of a wild-type strain and the copper-depleted grisea mutant. We unambiguously assigned 9,700 sequences of the transcriptome in both strains to the more than 10,600 predicted and annotated open reading frames of the P. anserina genome, indicating 90% coverage of the transcriptome. 4,752 of the transcripts differed significantly in abundance, with 1,156 transcripts differing at least 3-fold. Selected genes were investigated by qRT-PCR analyses. Apart from this general characterization, we analyzed the data with special emphasis on molecular pathways related to the grisea mutation, taking advantage of the available complete genomic sequence of P. anserina. This analysis verified but also corrected conclusions from earlier data obtained by single-gene analysis, identified new candidate factors of the cellular copper homeostasis system, including target genes of transcription factor GRISEA, and provides a rich reference source of quantitative data for further in-depth investigations.
Overall, the present study demonstrates the importance of systems biology approaches also in cases where mutations in single genes are analyzed to explain the underlying mechanisms controlling complex biological processes like aging and development.
Stationarity of the constituents of the body and of its functionalities is a basic requirement for life, being equivalent to survival in the first place. Assuming that the resting-state activity of the brain serves essential functionalities, stationarity entails that the dynamics of the brain needs to be regulated on a time-averaged basis. The combination of recurrent and driving external inputs must therefore lead to a non-trivial stationary neural activity, a condition which is fulfilled for afferent signals of varying strengths only close to criticality. In this view, the benefits of working in the vicinity of a second-order phase transition, such as signal enhancements, are not the underlying evolutionary drivers, but side effects of the requirement to keep the brain functional in the first place. It is hence more appropriate to use the term 'self-regulated' in this context, instead of 'self-organized'.
We present a deterministic workflow for genotyping single and double transgenic individuals directly at birth that prevents overproduction and reduces the number of surplus animals by two-thirds. In our vector concepts, transgenes are accompanied by two of four clearly distinguishable transformation markers that are embedded in interwoven but incompatible Lox site pairs. Following Cre-mediated recombination, the genotypes of single and double transgenic individuals were successfully identified by specific marker combinations in 461 scorings.
Here we present a formal description of Biremis panamae Barka, Witkowski et Weisenborn sp. nov., which was isolated from the marine littoral environment of the Pacific Ocean coast of Panama. The description is based on morphology (light and electron microscopy) and the rbcL, psbC and SSU sequences of one clone of this species. The new species is included in Biremis due to its morphological features; i.e. two marginal rows of foramina, chambered striae, and girdle composed of numerous punctate copulae. The new species also possesses a striated valve face which is not seen in most known representatives of marine littoral Biremis species. In this study we also present the relationship of Biremis to other taxa using morphology, DNA sequence data and observations of auxosporulation. Our results based on these three sources point to an evolutionary relationship between Biremis, Neidium and Scoliopleura. The unusual silicified incunabular caps present in them are known otherwise only in Muelleria, which is probably related to the Neidiaceae and Scoliotropidaceae. We also discuss the relationship between Biremis and the recently described Labellicula and Olifantiella.
Organ-on-a-chip technology has the potential to accelerate pharmaceutical drug development, improve the clinical translation of basic research, and provide personalized intervention strategies. In the last decade, big pharma has engaged in many academic research cooperations to develop organ-on-a-chip systems for future drug discoveries. Although most organ-on-a-chip systems present proof-of-concept studies, miniaturized organ systems still need to demonstrate translational relevance and predictive power in clinical and pharmaceutical settings. This review explores whether microfluidic technology succeeded in paving the way for developing physiologically relevant human in vitro models for pharmacology and toxicology in biomedical research within the last decade. Individual organ-on-a-chip systems are discussed, focusing on relevant applications and highlighting their ability to tackle current challenges in pharmacological research.
Knowledge of consumers' willingness to pay (WTP) is a prerequisite to profitable price-setting. To gauge consumers' WTP, practitioners often rely on a direct single question approach in which consumers are asked to explicitly state their WTP for a product. Despite its popularity among practitioners, this approach has been found to suffer from hypothetical bias. In this paper, we propose a rigorous method that improves the accuracy of the direct single question approach. Specifically, we systematically assess the hypothetical biases associated with the direct single question approach and explore ways to de-bias it. Our results show that by using the de-biasing procedures we propose, we can generate a de-biased direct single question approach that is accurate enough to be useful for managerial decision-making. We validate this approach with two studies in this paper.
Background: Modulation of cortical excitability by transcranial magnetic stimulation (TMS) is used for investigating human brain functions. A common observation is the high variability of long-term depression (LTD)-like changes in human (motor) cortex excitability. This study aimed at analyzing the response subgroup distribution after paired continuous theta burst stimulation (cTBS) as a basis for subject selection.
Methods: The effects of paired cTBS at 80% active motor threshold (AMT) were assessed in 31 healthy volunteers at the primary motor cortex (M1) representation of the first dorsal interosseous (FDI) muscle of the left hand, before and up to 50 min after plasticity induction. The changes in motor evoked potentials (MEPs) were analyzed using machine-learning-derived methods, implemented as Gaussian mixture modeling (GMM) and computed ABC analysis.
Results: The probability density distribution of the MEP changes from baseline was trimodal, with a clear separation at 80.9%. The n = 6 subjects displaying at least this degree of LTD-like change were classified as responders. By contrast, n = 7 subjects displayed a paradoxical response with an increase in MEPs. Reassessment using ABC analysis as an alternative approach identified the same n = 6 subjects as a distinct category.
Conclusion: Depressive effects of paired cTBS at 80% AMT endure for at least 50 min, but only in a small subgroup of healthy subjects. Hence, plasticity induction by paired cTBS might not reflect a general mechanism of human motor cortex excitability. A mathematically supported criterion is proposed to select responders for enrollment in assessments of human brain functional networks using virtual brain lesions.
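The Gaussian mixture modeling used above to separate responder subgroups can be illustrated with a minimal one-dimensional expectation-maximization fit. The two-component simplification, the from-scratch EM implementation, and the synthetic MEP-change values are illustrative assumptions, not the study's trimodal modeling pipeline:

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_gmm_1d(xs, iters=200):
    """Fit a two-component 1D Gaussian mixture by expectation-maximization.
    Returns (weights, means, stdevs), sorted by mean."""
    lo_, hi_ = min(xs), max(xs)
    mus = [lo_, hi_]                       # crude initialization at the extremes
    spread = (hi_ - lo_) / 4 or 1.0
    sigmas = [spread, spread]
    ws = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each data point.
        resp = []
        for x in xs:
            p = [ws[k] * normal_pdf(x, mus[k], sigmas[k]) for k in range(2)]
            s = sum(p) or 1e-300
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means, and standard deviations.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            ws[k] = nk / len(xs)
            mus[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var = sum(r[k] * (x - mus[k]) ** 2 for r, x in zip(resp, xs)) / nk
            sigmas[k] = max(math.sqrt(var), 1e-3)  # floor to avoid collapse
    order = sorted(range(2), key=lambda k: mus[k])
    return ([ws[k] for k in order], [mus[k] for k in order],
            [sigmas[k] for k in order])

# Synthetic post/pre MEP ratios (%): an LTD-like responder cluster around 70%
# and a non-responder cluster around 110%.
random.seed(1)
data = ([random.gauss(70, 5) for _ in range(10)]
        + [random.gauss(110, 5) for _ in range(20)])
weights, means, stdevs = em_gmm_1d(data)
```

A responder cutoff can then be placed between the fitted component means, analogous to the 80.9% separation point reported in the Results.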
Based on accumulating evidence of a role of lipid signaling in many physiological and pathophysiological processes including psychiatric diseases, the present data driven analysis was designed to gather information needed to develop a prospective biomarker, using a targeted lipidomics approach covering different lipid mediators. Using unsupervised methods of data structure detection, implemented as hierarchal clustering, emergent self-organizing maps of neuronal networks, and principal component analysis, a cluster structure was found in the input data space comprising plasma concentrations of d = 35 different lipid-markers of various classes acquired in n = 94 subjects with the clinical diagnoses depression, bipolar disorder, ADHD, dementia, or in healthy controls. The structure separated patients with dementia from the other clinical groups, indicating that dementia is associated with a distinct lipid mediator plasma concentrations pattern possibly providing a basis for a future biomarker. This hypothesis was subsequently assessed using supervised machine-learning methods, implemented as random forests or principal component analysis followed by computed ABC analysis used for feature selection, and as random forests, k-nearest neighbors, support vector machines, multilayer perceptron, and naïve Bayesian classifiers to estimate whether the selected lipid mediators provide sufficient information that the diagnosis of dementia can be established at a higher accuracy than by guessing. This succeeded using a set of d = 7 markers comprising GluCerC16:0, Cer24:0, Cer20:0, Cer16:0, Cer24:1, C16 sphinganine, and LacCerC16:0, at an accuracy of 77%. 
By contrast, using random lipid markers reduced the diagnostic accuracy to 65% or less, whereas training the algorithms with randomly permuted data resulted in complete failure to diagnose dementia, emphasizing that the selected lipid mediators display a particular pattern in this disease, possibly qualifying them as biomarkers.
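The validation logic described above (a reduced marker panel, supervised classification, and a permuted-label control) can be sketched with synthetic data. Everything below, including the marker counts and effect sizes, is an assumed stand-in for the study's lipidomics data, not the authors' code.

```python
# Hedged sketch: a random-forest classifier trained on simulated "lipid
# marker" data, with a permuted-label control. All data are synthetic;
# the 7 informative columns merely mimic a selected marker panel.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 94                                           # subjects, as in the study
y = rng.integers(0, 2, size=n)                   # 1 = dementia, 0 = other
X_signal = rng.normal(size=(n, 7)) + y[:, None]  # informative "markers"
X_noise = rng.normal(size=(n, 28))               # uninformative "markers"
X = np.hstack([X_signal, X_noise])               # d = 35 columns in total

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc_true = cross_val_score(clf, X, y, cv=5).mean()

# Control: training on randomly permuted diagnoses should fail,
# mirroring the permutation check reported above.
acc_perm = cross_val_score(clf, X, rng.permutation(y), cv=5).mean()
```

With informative features present, the cross-validated accuracy clearly exceeds the permuted-label control, which hovers around chance level.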
The Gini index is a measure of the inequality of a distribution that can be derived from Lorenz curves. While commonly used in, e.g., economic research, it is ambiguous because it does not preserve Lorenz dominance. Here, investigation of large sets of empirical income distributions of the world's countries over several years indicated, first, that the Gini indices are centered on a value of 33.33%, corresponding to the Gini index of the uniform distribution, and second, that the Lorenz curves of these distributions are consistent with Lorenz curves of log-normal distributions. This can be exploited to provide a Lorenz dominance preserving equivalent of the Gini index. Therefore, a modified measure based on log-normal approximation and standardization of Lorenz curves is proposed. The so-called UGini index provides a meaningful and intuitive standardization on the uniform distribution, as this characterizes societies that provide equal chances. The novel UGini index preserves Lorenz dominance. Analysis of the probability density distributions of the UGini index of the world's countries' income data indicated multimodality in two independent data sets. Applying Bayesian statistics provided a data-based classification of the world's countries' income distributions. The UGini index can be re-transformed into the classical index to preserve comparability with previous research.
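As a minimal illustration (not the authors' implementation), the empirical Gini index can be computed as one minus twice the area under the Lorenz curve, and the Gini index of a log-normal distribution has the closed form 2*Phi(sigma/sqrt(2)) - 1, which underlies the log-normal standardization described above.

```python
# Empirical Gini index via the Lorenz curve, plus the closed-form Gini
# of a log-normal distribution. Minimal sketch, not the paper's code.
import numpy as np
from math import erf

def gini(incomes):
    """Gini index of a sample: 1 minus twice the area under its Lorenz curve."""
    x = np.sort(np.asarray(incomes, dtype=float))
    lorenz = np.concatenate([[0.0], np.cumsum(x) / x.sum()])
    p = np.linspace(0.0, 1.0, lorenz.size)
    area = ((lorenz[1:] + lorenz[:-1]) / 2.0 * np.diff(p)).sum()  # trapezoids
    return 1.0 - 2.0 * area

def gini_lognormal(sigma):
    """Gini of a log-normal: 2*Phi(sigma/sqrt(2)) - 1, which equals erf(sigma/2)."""
    return erf(sigma / 2.0)
```

A uniform distribution yields a Gini index of 1/3 (33.33%), the reference value on which the UGini standardization is centered.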
Persistent and, in particular, neuropathic pain is a major healthcare problem with still insufficient pharmacological treatment options. This has triggered research activities aimed at finding analgesics with a novel mechanism of action. Results of these efforts will need to pass through the phases of drug development, in which experimental human pain models are established components, e.g., implemented as chemical hyperalgesia induced by capsaicin. We aimed at ranking the various readouts of a human capsaicin-based pain model with respect to the most relevant information about the effects of a potential reference analgesic. In a placebo-controlled, randomized cross-over study, seven different pain-related readouts were acquired in 16 healthy individuals before and after oral administration of 300 mg pregabalin. Effect sizes on pain induced by intradermal injection of capsaicin were quantified by calculating Cohen's d. While pregabalin provided a small effect (Cohen's d exceeding 0.2) in four of the seven pain-related parameters, an item categorization technique implemented as computed ABC analysis identified the pain intensities in the area of secondary hyperalgesia and of allodynia as the most suitable parameters to quantify the analgesic effects of pregabalin. Results of this study provide further support for the ability of the intradermal capsaicin pain model to show analgesic effects of pregabalin. Results can serve as a basis for the design of studies in which the inclusion of this particular pain model and pregabalin is planned.
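For reference, Cohen's d in a paired pre/post design is the mean of the individual differences divided by their standard deviation. The ratings below are invented for illustration, not study data.

```python
# Cohen's d for paired samples; the pain ratings are hypothetical.
import numpy as np

def cohens_d_paired(pre, post):
    """Mean of paired differences divided by their sample SD (ddof=1)."""
    diff = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    return diff.mean() / diff.std(ddof=1)

pre = [6.0, 7.5, 5.0, 8.0, 6.5, 7.0]    # e.g., pain ratings before drug
post = [5.5, 6.0, 5.0, 7.0, 6.0, 6.5]   # and after; values are invented
d = cohens_d_paired(pre, post)          # negative: ratings decreased
```

By the convention used above, |d| > 0.2 counts as at least a small effect.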
An easy-to-use model to evaluate conductivities at high and middle latitudes in the height range 70–100 km is presented. It is based on electron density profiles obtained with the EISCAT VHF radar during 11 years and on the neutral atmospheric model MSIS95. The model uses solar zenith angle, geomagnetic activity and season as input parameters. It was mainly constructed to study the properties of Schumann resonances that depend on such conductivity profiles.
Dual-task paradigms encompass a broad range of approaches to measuring cognitive load in instructional settings. As a common characteristic, an additional task is implemented alongside a learning task to capture the individual's unengaged cognitive capacities during the learning process. Measures to determine these capacities are, for instance, reaction times and interval errors on the additional task, while performance on the learning task is to be maintained. In contrast to retrospectively applied subjective ratings, the continuous assessment within a dual-task paradigm allows simultaneous monitoring of changes in performance on previously defined tasks. Following Cognitive Load Theory, these changes in performance correspond to cognitive changes related to the establishment of permanent knowledge structures. Yet the current state of research indicates a clear lack of standardization of dual-task paradigms across study settings and task procedures. Typically, dual-task designs are adapted uniquely for each study, albeit with some similarities across different settings and task procedures. These similarities range from the type of modality to the frequency used for the additional task. This results in a lack of validity and comparability between studies due to arbitrarily chosen frequency patterns without a sound scientific basis, potentially confounding variables, or unresolved adaptation potentials for future studies. In this paper, the lack of validity and comparability between dual-task settings is presented, current taxonomies are compared, and future steps toward better standardization and implementation are discussed.
This article aims to comparatively analyze the similarities in the critiques of liberal democracy found in selected works of Carl Schmitt (1888-1985) and Robert Kurz (1943-2012). Despite the former's close association with the Nazi regime after 1933, and despite the latter usually being characterized as a Marxist thinker (although highly critical of "orthodox" Marxism), numerous similarities can be observed between the two when they set out to analyze the characteristics of parliamentary liberalism in twentieth-century democracies. One hypothesis that may explain these similarities is the influence Schmitt exerted on several theorists of the Frankfurt School, with whom Kurz frequently engages in his writings and who inspired some of his reflections, in particular Walter Benjamin, Theodor Adorno and Max Horkheimer, although Schmitt also influenced Franz Neumann, Otto Kirchheimer, Karl Korsch and Herbert Marcuse. Another line of interpretation addressed here concerns the possibility that Schmitt found, in his theories of the state and of law, the epistemological limits of modern liberalism, which constitutes Kurz's main object of research and was a recurring theme in the writings of the Frankfurt theorists.
This article analyzes Adorno's critique of Heidegger's ontology. To this end, it uses Heidegger's interpretation of Kant as its leitmotiv. We seek to show that, for Adorno, the construction of fundamental ontology on the basis of Kant's philosophy is an illegitimate interpretation of the latter. Finally, the article points to a possible way out, within Adorno's philosophy, of the problem of the need to ground philosophical discourse. This way out involves recognizing the importance of art for the construction of universality in philosophy.
A critical role for VEGF and VEGFR2 in NMDA receptor synaptic function and fear-related behavior
(2016)
Vascular endothelial growth factor (VEGF) is known to be required for the action of antidepressant therapies, but its impact on brain synaptic function is poorly characterized. Using a combination of electrophysiological, single-molecule imaging and conditional transgenic approaches, we identified the molecular basis of the VEGF effect on synaptic transmission and plasticity. VEGF increases the postsynaptic responses mediated by the N-methyl-d-aspartate type of glutamate receptors (GluNRs) in hippocampal neurons. This is concurrent with the formation of new synapses and with the synaptic recruitment of GluNR expressing the GluN2B subunit (GluNR-2B). VEGF induces a rapid redistribution of GluNR-2B at synaptic sites by increasing the surface dynamics of these receptors within the membrane. Consistently, silencing the expression of the VEGF receptor 2 (VEGFR2) in neural cells impairs hippocampal-dependent synaptic plasticity and consolidation of emotional memory. These findings demonstrate the direct involvement of VEGF signaling in neurons, via VEGFR2, in proper synaptic function. They highlight the potential of VEGF as a key regulator of GluNR synaptic function and suggest a role for VEGF in new therapeutic approaches targeting GluNR in depression.
Review of: Psychology of Retention: Theory, Research and Practice / Melinde Coetzee, Ingrid L. Potgieter and Nadia Ferreira (Eds.). Springer Nature, 2018. ISBN: 978-3-319-98919-8. R1600 (South African price).
Background: Invasive off- or on-pump cardiac surgery (elective and emergency procedures, excluding transplants) is routinely performed to treat complications of ischaemic heart disease. Randomised controlled trials (RCTs) evaluate the effectiveness of treatments in the setting of cardiac surgery. However, the impact of RCTs is weakened by heterogeneity in outcome measurement and reporting, which hinders comparison across trials. Core outcome sets (COS; a set of outcomes that should be measured and reported, as a minimum, in clinical trials of a specific clinical field) help reduce this problem. In light of the above, we developed a COS for cardiac surgery effectiveness trials.
Methods: Potential core outcomes were identified a priori by analysing data on 371 RCTs of 58,253 patients. We reached consensus on core outcomes in an international three-round eDelphi exercise. Outcomes for which at least 60% of the participants chose the response option "no" and less than 20% chose the response option "yes" were excluded.
Results: Eighty-six participants from 23 different countries, including adult cardiac patients, cardiac surgeons, anaesthesiologists, nursing staff and researchers, contributed to this eDelphi. The panel reached consensus on four core outcomes to be included in adult cardiac surgery trials: 1) measure of mortality, 2) measure of quality of life, 3) measure of hospitalisation and 4) measure of cerebrovascular complication.
Conclusion: This study used robust research methodology to develop a minimum core outcome set for clinical trials evaluating the effectiveness of treatments in the setting of cardiac surgery. As a next step, appropriate outcome measurement instruments have to be selected.
Commercialization of consumers’ personal data in the digital economy poses serious conceptual and practical challenges to the traditional approach of European Union (EU) Consumer Law. This article argues that widespread, automated, algorithmic decision-making casts doubt on the foundational paradigm of EU consumer law: consent and autonomy. Moreover, it poses threats of discrimination and of undermining consumer privacy. It is argued that the recent legislative reaction by the EU Commission, in the form of the ‘New Deal for Consumers’, was a step in the right direction, but fell short due to its continued reliance on consent and autonomy and its failure to adequately protect consumers from indirect discrimination. It is posited that a focus on creating a contracting landscape in which the consumer may be properly informed in material respects is required, which in turn necessitates blending the approaches of competition, consumer protection and data protection laws.
A consistent muscle activation strategy underlies crawling and swimming in Caenorhabditis elegans
(2014)
Although undulatory swimming is observed in many organisms, the neuromuscular basis for undulatory movement patterns is not well understood. To better understand the basis for the generation of these movement patterns, we studied muscle activity in the nematode Caenorhabditis elegans. Caenorhabditis elegans exhibits a range of locomotion patterns: in low viscosity fluids the undulation has a wavelength longer than the body and propagates rapidly, while in high viscosity fluids or on agar media the undulatory waves are shorter and slower. Theoretical treatment of observed behaviour has suggested a large change in force–posture relationships at different viscosities, but analysis of bend propagation suggests that short-range proprioceptive feedback is used to control and generate body bends. How muscles could be activated in a way consistent with both these results is unclear. We therefore combined automated worm tracking with calcium imaging to determine muscle activation strategy in a variety of external substrates. Remarkably, we observed that across locomotion patterns spanning a threefold change in wavelength, peak muscle activation occurs approximately 45° (1/8th of a cycle) ahead of peak midline curvature. Although the location of peak force is predicted to vary widely, the activation pattern is consistent with required force in a model incorporating putative length- and velocity-dependence of muscle strength. Furthermore, a linear combination of local curvature and velocity can match the pattern of activation. This suggests that proprioception can enable the worm to swim effectively while working within the limitations of muscle biomechanics and neural control.
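The closing claim has a simple mathematical core: a signal that leads curvature by 45° (1/8th of a cycle) is exactly a linear combination of the curvature and its time derivative. A toy least-squares fit on synthetic sinusoids, not the worm data, makes this concrete.

```python
# Toy demonstration: a 45-degree phase-advanced signal is recovered as a
# linear combination of curvature and curvature velocity. Synthetic data.
import numpy as np

t = np.linspace(0.0, 4.0 * np.pi, 1000)
curvature = np.sin(t)                        # midline curvature, omega = 1
curvature_vel = np.gradient(curvature, t)    # local curvature velocity
activation = np.sin(t + np.pi / 4.0)         # "activation", 1/8 cycle ahead

# Least-squares fit: activation ~ a*curvature + b*curvature_vel
A = np.column_stack([curvature, curvature_vel])
coef, *_ = np.linalg.lstsq(A, activation, rcond=None)
a, b = coef
residual = np.abs(A @ coef - activation).max()
```

Both coefficients come out near cos(45°) = sin(45°) ≈ 0.707 and the fit is essentially exact, mirroring the observation that local curvature plus velocity can match the activation pattern.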
Introduction: Encouraged by the change in licensing regulations, practical professional skills have gained priority in Germany and are therefore increasingly taught in medical schools. This created the need for greater standardization. On the initiative of the German skills labs, the German Medical Association Committee for Practical Skills was established and developed a competency-based catalogue of learning objectives, whose origin and structure are described here.
The goal of the catalogue is to define the practical skills to be acquired in undergraduate medical education and to give medical schools a rational basis for planning the resources necessary to teach them.
Methods: Building on existing German catalogues of learning objectives, a multi-iterative process of condensation was performed, corresponding to the development of S1 guidelines, in order to obtain broad professional and political support.
Results: 289 different practical learning objectives were identified and assigned to twelve different organ systems, with three areas overlapping other fields of expertise and one area of cross-organ-system skills. Each objective was assigned one of three levels of depth and one of three chronological dimensions, and the objectives were matched with their Swiss and Austrian equivalents.
Discussion: This consensus statement may provide the German medical faculties with a basis for planning the teaching of practical skills and is an important step towards a national standard of medical learning objectives.
Looking ahead: The consensus statement may have a formative effect on how medical schools teach practical skills and plan their resources accordingly.
Publicly available compound and bioactivity databases provide an essential basis for data-driven applications in life-science research and drug design. By analyzing several bioactivity repositories, we discovered differences in compound and target coverage advocating the combined use of data from multiple sources. Using data from ChEMBL, PubChem, IUPHAR/BPS, BindingDB, and Probes & Drugs, we assembled a consensus dataset focusing on small molecules with bioactivity on human macromolecular targets. This allowed improved coverage of compounds and targets, and enabled automated comparison and curation of structural and bioactivity data to reveal potentially erroneous entries and increase confidence. The consensus dataset comprises more than 1.1 million compounds with over 10.9 million bioactivity data points, with annotations on assay type and bioactivity confidence, providing a useful ensemble for computational applications in drug design and chemogenomics.
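The curation step, comparing activity values reported for the same compound/target pair across sources, can be sketched as follows. The column names, identifiers and the one-log-unit discordance threshold are illustrative assumptions, not the dataset's actual schema.

```python
# Hypothetical sketch of cross-source bioactivity curation with pandas.
# Keys, values and the discordance threshold are invented for illustration.
import pandas as pd

chembl = pd.DataFrame({
    "inchikey": ["AAA", "BBB"], "target": ["EGFR", "ABL1"],
    "pact": [7.1, 6.0], "source": "ChEMBL"})
bindingdb = pd.DataFrame({
    "inchikey": ["AAA", "BBB"], "target": ["EGFR", "ABL1"],
    "pact": [7.3, 8.5], "source": "BindingDB"})

merged = pd.concat([chembl, bindingdb], ignore_index=True)
stats = merged.groupby(["inchikey", "target"])["pact"].agg(["mean", "min", "max"])
# Flag pairs whose reported activities disagree by more than 1 log unit:
stats["discordant"] = (stats["max"] - stats["min"]) > 1.0
```

Concordant pairs can then be averaged with higher confidence, while discordant ones are routed to manual review.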
Ubiquitin fold modifier 1 (UFM1) is a member of the ubiquitin-like protein family. UFM1 undergoes a cascade of enzymatic reactions including activation by UBA5 (E1), transfer to UFC1 (E2) and selective conjugation to a number of target proteins via UFL1 (E3). Despite the importance of ufmylation in a variety of cellular processes and its role in the pathogenicity of many human diseases, the molecular mechanisms of the ufmylation cascade remain unclear. In this study we focused on the biophysical and biochemical characterization of the interaction between UBA5 and UFC1. We explored the hypothesis that the unstructured C-terminal region of UBA5 serves as a regulatory region, controlling cellular localization of the elements of the ufmylation cascade and effective interaction between them. We found that the last 20 residues in UBA5 are pivotal for binding to UFC1 and can accelerate the transfer of UFM1 to UFC1. We solved the structure of a complex of UFC1 and a peptide spanning the last 20 residues of UBA5 by NMR spectroscopy. This structure, in combination with additional NMR titration and isothermal titration calorimetry experiments, revealed the mechanism of interaction and confirmed the importance of the C-terminal unstructured region of UBA5 for the ufmylation cascade.
Background: The differentiation between Gaucher disease type 3 (GD3) and type 1 is challenging because pathognomonic neurologic symptoms may be subtle and develop at late stages. The ophthalmologist plays a crucial role in identifying the typical impairment of horizontal saccadic eye movements, followed by vertical ones. Little is known about further ocular involvement. The aim of this monocentric cohort study is to comprehensively describe the ophthalmological features of Gaucher disease type 3. We suggest recommendations for a set of useful ophthalmologic investigations for diagnosis and follow up and for saccadometry parameters enabling a correlation to disease severity.
Methods: Sixteen patients with biochemically and genetically diagnosed GD3 completed ophthalmologic examination including optical coherence tomography (OCT), clinical oculomotor assessment and saccadometry by infrared based video-oculography. Saccadic peak velocity, gain and latency were compared to 100 healthy controls, using parametric tests. Correlations between saccadic assessment and clinical parameters were calculated.
Results: Peripapillary subretinal drusen-like deposits with retinal atrophy (2/16), preretinal opacities of the vitreous (4/16) and increased retinal vessel tortuosity (3/16) were found. Oculomotor pathology with clinically slowed saccades was more frequent horizontally (15/16) than vertically (12/16). Saccadometry revealed slowed peak velocity compared to 100 controls (most evident horizontally and downwards). Saccades were delayed and hypometric. Best correlating with SARA (scale for the assessment and rating of ataxia), disease duration, mSST (modified Severity Scoring Tool) and reduced IQ was peak velocity (both up- and downwards). Motility restriction occurred in 8/16 patients affecting horizontal eye movements, while vertical motility restriction was seen less frequently. Impaired abduction presented with esophoria or esotropia, the latter in combination with reduced stereopsis.
Conclusions: Vitreoretinal lesions may occur in 25% of Gaucher type 3 patients, while we additionally observed subretinal lesions with retinal atrophy in advanced disease stages. Vertical saccadic peak velocity seems the most promising "biomarker" for neuropathic manifestation for future longitudinal studies, as it correlates best with other neurologic symptoms. Apart from the well documented abduction deficit in Gaucher type 3 we were able to demonstrate motility impairment in all directions of gaze.
Background: Alterations in the DNA methylation pattern are a hallmark of leukemias and lymphomas. However, most epigenetic studies in hematologic neoplasms (HNs) have focused either on the analysis of few candidate genes or on many genes and few HN entities, and comprehensive studies are required. Methodology/Principal Findings: Here, we report for the first time a microarray-based DNA methylation study of 767 genes in 367 HNs diagnosed with 16 of the most representative B-cell (n = 203), T-cell (n = 30), and myeloid (n = 134) neoplasias, as well as 37 samples from different cell types of the hematopoietic system. Using appropriate controls of B-, T-, or myeloid cellular origin, we identified a total of 220 genes hypermethylated in at least one HN entity. In general, promoter hypermethylation was more frequent in lymphoid than in myeloid malignancies, with germinal center mature B-cell lymphomas as well as B- and T-precursor lymphoid neoplasias being the entities with the highest frequency of gene-associated DNA hypermethylation. We also observed a significant correlation between the number of hypermethylated and hypomethylated genes in several mature B-cell neoplasias, but not in precursor B- and T-cell leukemias. Most of the genes that became hypermethylated contained promoters with high CpG content, and a significant fraction of them are targets of the polycomb repressor complex. Interestingly, T-cell prolymphocytic leukemias show low levels of DNA hypermethylation and a comparatively large number of hypomethylated genes, many of them showing increased gene expression. Conclusions/Significance: We have characterized the DNA methylation profile of a wide range of different HN entities. As well as identifying genes showing aberrant DNA methylation in certain HN subtypes, we also detected six genes (DBC1, DIO3, FZD9, HS3ST2, MOS, and MYOD1) that were significantly hypermethylated in B-cell, T-cell, and myeloid malignancies.
These might therefore play an important role in the development of different HNs.
Immersion freezing is the most relevant heterogeneous ice nucleation mechanism through which ice crystals are formed in mixed-phase clouds. In recent years, an increasing number of laboratory experiments utilizing a variety of instruments have examined immersion freezing activity of atmospherically relevant ice nucleating particles (INPs). However, an inter-comparison of these laboratory results is a difficult task because investigators have used different ice nucleation (IN) measurement methods to produce these results. A remaining challenge is to explore the sensitivity and accuracy of these techniques and to understand how the IN results are potentially influenced or biased by experimental parameters associated with these techniques.
Within the framework of INUIT (Ice Nucleation research UnIT), we distributed an illite-rich sample (illite NX) as a representative surrogate for atmospheric mineral dust particles to investigators to perform immersion freezing experiments using different IN measurement methods and to obtain IN data as a function of particle concentration, temperature (T), cooling rate and nucleation time. Seventeen measurement methods were involved in the data inter-comparison. Experiments with seven instruments started with the test sample pre-suspended in water before cooling, while ten other instruments employed water vapor condensation onto dry-dispersed particles followed by immersion freezing. The resulting comprehensive immersion freezing dataset was evaluated using the ice nucleation active surface-site density (ns) to develop a representative ns(T) spectrum that spans a wide temperature range (−37 °C < T < −11 °C) and covers nine orders of magnitude in ns.
Our inter-comparison results revealed a discrepancy between suspension and dry-dispersed particle measurements for this mineral dust. While the agreement was good below ~ −26 °C, the ice nucleation activity, expressed in ns, was smaller for the wet suspended samples and higher for the dry-dispersed aerosol samples between about −26 and −18 °C. Only measurement techniques using wet suspended samples were able to measure ice nucleation above −18 °C. A possible explanation for the deviation between −26 and −18 °C is discussed. In general, the seventeen immersion freezing measurement techniques deviate, within a range of about 7 °C in terms of temperature, by three orders of magnitude with respect to ns. In addition, we show evidence that the immersion freezing efficiency (i.e., ns) of illite NX particles is relatively independent of droplet size, particle mass in suspension, particle size and cooling rate during freezing. A strong temperature dependence and weak time and size dependence of the immersion freezing efficiency of illite-rich clay mineral particles enabled the ns parameterization solely as a function of temperature. We also characterized the ns(T) spectra and identified a section with a steep slope between −20 and −27 °C, where a large fraction of active sites of our test dust may trigger immersion freezing. This slope was followed by a region with a gentler slope at temperatures below −27 °C. A multiple exponential distribution fit is expressed as ns(T) = exp(23.82 × exp(−exp(0.16 × (T + 17.49))) + 1.39) based on the specific surface area and ns(T) = exp(25.75 × exp(−exp(0.13 × (T + 17.17))) + 3.34) based on the geometric area (ns and T in m−2 and °C, respectively). These new fits, constrained by using identical reference samples, will help to compare IN measurement methods that are not included in the present study and, thereby, IN data from future IN instruments.
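The two fits quoted above can be transcribed directly (ns in m−2, T in °C). This is a plain transcription of the published formulas, useful for benchmarking other instruments against the illite NX reference.

```python
# The two published ns(T) fits for illite NX, transcribed verbatim
# (ns in m^-2, T in degrees Celsius; valid roughly for -37 < T < -11).
import numpy as np

def ns_ssa(T):
    """Fit based on the specific surface area."""
    return np.exp(23.82 * np.exp(-np.exp(0.16 * (T + 17.49))) + 1.39)

def ns_geo(T):
    """Fit based on the geometric surface area."""
    return np.exp(25.75 * np.exp(-np.exp(0.13 * (T + 17.17))) + 3.34)
```

Both fits decrease monotonically with temperature: colder droplets activate more surface sites.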
Immersion freezing is the most relevant heterogeneous ice nucleation mechanism through which ice crystals are formed in mixed-phase clouds. In recent years, an increasing number of laboratory experiments utilizing a variety of instruments have examined immersion freezing activity of atmospherically relevant ice-nucleating particles. However, an intercomparison of these laboratory results is a difficult task because investigators have used different ice nucleation (IN) measurement methods to produce these results. A remaining challenge is to explore the sensitivity and accuracy of these techniques and to understand how the IN results are potentially influenced or biased by experimental parameters associated with these techniques.
Within the framework of INUIT (Ice Nuclei Research Unit), we distributed an illite-rich sample (illite NX) as a representative surrogate for atmospheric mineral dust particles to investigators to perform immersion freezing experiments using different IN measurement methods and to obtain IN data as a function of particle concentration, temperature (T), cooling rate and nucleation time. A total of 17 measurement methods were involved in the data intercomparison. Experiments with seven instruments started with the test sample pre-suspended in water before cooling, while 10 other instruments employed water vapor condensation onto dry-dispersed particles followed by immersion freezing. The resulting comprehensive immersion freezing data set was evaluated using the ice nucleation active surface-site density, ns, to develop a representative ns(T) spectrum that spans a wide temperature range (−37 °C < T < −11 °C) and covers 9 orders of magnitude in ns.
In general, the 17 immersion freezing measurement techniques deviate, within a range of about 8 °C in terms of temperature, by 3 orders of magnitude with respect to ns. In addition, we show evidence that the immersion freezing efficiency expressed in ns of illite NX particles is relatively independent of droplet size, particle mass in suspension, particle size and cooling rate during freezing. A strong temperature dependence and weak time and size dependence of the immersion freezing efficiency of illite-rich clay mineral particles enabled the ns parameterization solely as a function of temperature. We also characterized the ns(T) spectra and identified a section with a steep slope between −20 and −27 °C, where a large fraction of active sites of our test dust may trigger immersion freezing. This slope was followed by a region with a gentler slope at temperatures below −27 °C. While the agreement between different instruments was reasonable below ~ −27 °C, there seemed to be a different trend in the temperature-dependent ice nucleation activity from the suspension and dry-dispersed particle measurements for this mineral dust, in particular at higher temperatures. For instance, the ice nucleation activity expressed in ns was smaller for the average of the wet suspended samples and higher for the average of the dry-dispersed aerosol samples between about −27 and −18 °C. Only instruments making measurements with wet suspended samples were able to measure ice nucleation above −18 °C. A possible explanation for the deviation between −27 and −18 °C is discussed. Multiple exponential distribution fits in both linear and log space for both specific surface area-based ns(T) and geometric surface area-based ns(T) are provided. These new fits, constrained by using identical reference samples, will help to compare IN measurement methods that are not included in the present study and IN data from future IN instruments.
Analysis of whole cell lipid extracts of bacteria by means of ultra-performance (UP)LC-MS allows a comprehensive determination of the lipid molecular species present in the respective organism. The data allow conclusions on its metabolic potential as well as the creation of lipid profiles, which visualize the organism's response to changes in internal and external conditions. Herein, we describe: i) a fast reversed phase UPLC-ESI-MS method suitable for detection and determination of individual lipids from whole cell lipid extracts of all polarities ranging from monoacylglycerophosphoethanolamines to TGs; ii) the first overview of a wide range of lipid molecular species in vegetative Myxococcus xanthus DK1622 cells; iii) changes in their relative composition in selected mutants impaired in the biosynthesis of α-hydroxylated FAs, sphingolipids, and ether lipids; and iv) the first report of ceramide phosphoinositols in M. xanthus, a lipid species previously found only in eukaryotes.
Covalent inhibition has become more accepted in the past two decades, as illustrated by the clinical approval of several irreversible inhibitors designed to covalently modify their target. Elucidation of the structure-activity relationship and potency of such inhibitors requires a detailed kinetic evaluation. Here, we elucidate the relationship between the experimental read-out and the underlying inhibitor binding kinetics. Interactive kinetic simulation scripts are employed to highlight the effects of in vitro enzyme activity assay conditions and inhibitor binding mode, thereby showcasing which assumptions and corrections are crucial. Four stepwise protocols to assess the biochemical potency of (ir)reversible covalent enzyme inhibitors targeting a nucleophilic active site residue are included, with accompanying data analysis tailored to the covalent binding mode. Together, this will serve as a guide to make an educated decision regarding the most suitable method to assess covalent inhibition potency. © 2022 The Authors. Current Protocols published by Wiley Periodicals LLC.
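As background to the kinetic evaluation these protocols describe, a common model for two-step irreversible covalent inhibition relates the observed inactivation rate to inhibitor concentration via k_obs = k_inact · [I] / (K_I + [I]), with product formation plateauing as active enzyme is consumed. The sketch below uses assumed parameter values and is not taken from the published protocols.

```python
# Generic two-step irreversible inhibition kinetics; parameter values
# are assumptions for illustration only.
import numpy as np

def k_obs(I, k_inact, K_I):
    """Observed inactivation rate at inhibitor concentration I."""
    return k_inact * I / (K_I + I)

def progress_curve(t, v_i, kobs):
    """Product vs. time for an exponentially inactivating enzyme:
    P(t) = (v_i / kobs) * (1 - exp(-kobs * t))."""
    return (v_i / kobs) * (1.0 - np.exp(-kobs * t))

t = np.linspace(0.0, 600.0, 601)                  # seconds
kobs = k_obs(I=1e-6, k_inact=0.01, K_I=1e-6)      # [I] = K_I: half-maximal rate
P = progress_curve(t, v_i=0.5, kobs=kobs)         # arbitrary signal units
```

Fitting k_obs at several inhibitor concentrations and regressing against [I] yields k_inact and K_I, the potency descriptors tailored to the covalent binding mode.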
Apigenin (4′,5,7-trihydroxyflavone) (Api) is an important component of the human diet, being distributed in a wide number of fruits, vegetables and herbs, with the most important sources represented by chamomile, celery, celeriac and parsley. This study was designed for a comprehensive evaluation of Api as an antiproliferative, proapoptotic, antiangiogenic and immunomodulatory phytocompound. Under the set experimental conditions, Api presented antiproliferative activity against the A375 human melanoma cell line, a G2/M arrest of the cell cycle, and cytotoxic events as revealed by lactate dehydrogenase release. Caspase 3 activity was inversely proportional to the tested Api doses, namely 30 μM and 60 μM. Phenomena of early apoptosis, late apoptosis and necrosis following incubation with Api were detected by Annexin V-PI double staining. The flavone interfered with mitochondrial respiration by modulating both the glycolytic and mitochondrial pathways for ATP production. The metabolic activity of human dendritic cells (DCs) under LPS activation was clearly attenuated by stimulation with high concentrations of Api. IL-6 and IL-10 secretion was almost completely blocked, while TNF-alpha secretion was reduced by about 60%. Api elicited antiangiogenic properties in a dose-dependent manner. Both concentrations of Api influenced tumour cell growth and migration, inducing a limited tumour area inside the application ring, associated with a low number of capillaries.
Translation is an important step in gene expression. The initiation of translation is phylogenetically diverse; currently, five different initiation mechanisms are known. Bacteria use the three initiation factors IF1–IF3, whereas archaea and eukaryotes contain a considerably higher number of initiation factor genes. Because eukaryotes and archaea use non-overlapping sets of initiation mechanisms, orthologous proteins of the two domains do not necessarily fulfill the same function. The genome of Haloferax volcanii contains 14 annotated genes that encode (subunits of) initiation factors. To gain a comprehensive overview of the importance of these genes, we attempted to construct single-gene deletion mutants for all of them. In nine cases, single deletion mutants were successfully constructed, showing that the respective genes are not essential. In contrast, the genes encoding initiation factors aIF1, aIF2γ, aIF5A, aIF5B and aIF6 were found to be essential. Factors aIF1A and aIF2β are each encoded by two orthologous genes in H. volcanii. Attempts to generate double mutants failed in both cases, indicating that these factors are also essential. A translatome analysis of one of the single aIF2β deletion mutants revealed that the translational efficiency of the second ortholog was enhanced tenfold, so the two proteins can replace one another. The phenotypes of the single deletion mutants also revealed that the two aIF1As and the two aIF2βs have redundant but not identical functions. Remarkably, the gene encoding aIF2α, a subunit of aIF2 involved in initiator tRNA binding, could be deleted; however, the mutant had a severe growth defect under all tested conditions. Conditional depletion mutants were generated for the five essential genes. The phenotypes of the deletion mutants and conditional depletion mutants were compared with that of the wild type under various conditions, and growth characteristics are discussed.
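As a side note on the translatome measurement, translational efficiency (TE) is commonly defined as the ratio of ribosome-associated (translatome) signal to total mRNA (transcriptome) signal. A minimal sketch with hypothetical expression values; the actual pipeline and values used in the study may differ:

```python
# TE = translatome signal / transcriptome signal (a common definition;
# all values below are hypothetical, in arbitrary expression units)
def translational_efficiency(translatome: float, transcriptome: float) -> float:
    return translatome / transcriptome

te_wt  = translational_efficiency(120.0, 100.0)  # second ortholog, wild type
te_mut = translational_efficiency(1152.0, 96.0)  # second ortholog, aIF2β deletion mutant

# A tenfold TE enhancement, as reported for the second aIF2β ortholog
fold_change = te_mut / te_wt
```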
In this work we present, for the first time, the non-perturbative renormalization of the unpolarized, helicity and transversity quasi-PDFs in an RI′ scheme. The proposed prescription addresses all aspects of renormalization simultaneously: logarithmic divergences, finite renormalization, and the linear divergence present in the matrix elements of fermion operators containing Wilson lines. Furthermore, for the unpolarized quasi-PDF, we describe how to eliminate the unwanted mixing with the twist-3 scalar operator.
We use one-loop perturbation theory to compute the conversion factor that brings the renormalization functions to the MS-bar scheme at a scale of 2 GeV. We also explain how to improve the estimates of the renormalization functions by eliminating lattice artifacts; the latter can be computed in one-loop perturbation theory and to all orders in the lattice spacing.
We apply the renormalization methodology to an ensemble of twisted mass fermions with Nf = 2 + 1 + 1 dynamical quarks and a pion mass of around 375 MeV.
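Schematically, RI′-type prescriptions of the kind described above impose momentum-space conditions on the quark propagator S(p) and on the amputated vertex function V(p) of the operator at a scale μ0, followed by a perturbative conversion to MS-bar. The notation below is generic and illustrative, not a verbatim reproduction of the paper's conventions:

```latex
Z_q = \frac{1}{12}\,\mathrm{Tr}\!\left[\big(S(p)\big)^{-1} S^{\mathrm{Born}}(p)\right]
\Big|_{p^2=\mu_0^2},
\qquad
\frac{Z_q}{Z_{\mathcal{O}}}\,\frac{1}{12}\,
\mathrm{Tr}\!\left[\mathcal{V}(p)\,\big(\mathcal{V}^{\mathrm{Born}}(p)\big)^{-1}\right]
\Big|_{p^2=\mu_0^2} = 1,
```

with the one-loop conversion applied afterwards as

```latex
Z_{\mathcal{O}}^{\overline{\mathrm{MS}}}(2\,\mathrm{GeV})
= C^{\overline{\mathrm{MS}},\,\mathrm{RI}'}\!\left(2\,\mathrm{GeV},\mu_0\right)
Z_{\mathcal{O}}^{\mathrm{RI}'}(\mu_0),
```

where the superscript "Born" denotes the tree-level value and the trace runs over spin and color indices.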