Universitätspublikationen
In-depth analyses of cancer cell proteomes are needed to elucidate oncogenic pathomechanisms, as well as to identify potential drug targets and diagnostic biomarkers. However, methods for quantitative proteomic characterization of patient-derived tumors and in particular their cellular subpopulations are largely lacking. Here we describe an experimental set-up that allows quantitative analysis of proteomes of cancer cell subpopulations derived from either liquid or solid tumors. This is achieved by combining cellular enrichment strategies with quantitative Super-SILAC-based mass spectrometry followed by bioinformatic data analysis. To enrich specific cellular subsets, liquid tumors are first immunophenotyped by flow cytometry followed by FACS-sorting; for solid tumors, laser-capture microdissection is used to purify specific cellular subpopulations. In a second step, proteins are extracted from the purified cells and subsequently combined with a tumor-specific, SILAC-labeled spike-in standard that enables protein quantification. The resulting protein mixture is subjected to either gel electrophoresis or Filter Aided Sample Preparation (FASP) followed by tryptic digestion. Finally, tryptic peptides are analyzed using a hybrid quadrupole-orbitrap mass spectrometer, and the data obtained are processed with bioinformatic software suites including MaxQuant. By means of the workflow presented here, up to 8,000 proteins can be identified and quantified in patient-derived samples, and the resulting protein expression profiles can be compared among patients to identify diagnostic proteomic signatures or potential drug targets.
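The quantification step of such a Super-SILAC workflow can be illustrated with a minimal sketch: each protein's "light" intensity (patient-derived sample) is divided by the "heavy" intensity of the SILAC-labeled spike-in standard, and the log2 ratios are median-normalized so profiles become comparable across patients. The function name and intensity values below are hypothetical; real pipelines such as MaxQuant do far more.

```python
import math

def silac_ratios(light, heavy):
    """Return median-normalized log2(light/heavy) ratios per protein."""
    logr = {p: math.log2(light[p] / heavy[p]) for p in light if p in heavy}
    # upper median for even counts; exact median for odd counts
    med = sorted(logr.values())[len(logr) // 2]
    return {p: r - med for p, r in logr.items()}

# Hypothetical intensities for three proteins
light = {"P1": 2000.0, "P2": 500.0, "P3": 1000.0}
heavy = {"P1": 1000.0, "P2": 1000.0, "P3": 1000.0}
print(silac_ratios(light, heavy))
```

Because every sample is compared against the same spike-in standard, ratios of ratios cancel the standard out, which is what makes cross-patient comparison possible.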
In the title compound, C20H24N2O4, both peptide bonds adopt a trans configuration with respect to the —N—H and —C=O groups. The dihedral angle between the aromatic rings is 53.58 (4)°. The molecular conformation is stabilized by an intramolecular N—H⋯O hydrogen bond. The crystal packing is characterized by zigzag chains of N—H⋯O hydrogen-bonded molecules running along the b-axis direction.
The main goal of the present work was to determine the energy-dependent cross sections of (γ,n) reactions for 169Tm, 170Yb, 176Yb and 130Te using the photoactivation method.
To this end, the efficiencies of the detectors used were first corrected with the aid of simulations, since the targets have an extended geometry, in contrast to the point-like calibration sources. It turned out that the simulations made it possible to correct the efficiencies of the MCA detectors as a function of energy, since the simulations reproduced the shape of the measured efficiencies well. For the efficiencies of the LEPS detectors, by contrast, no energy-dependent correction could be applied, because the LEPS detectors showed strong summing effects owing to the small source-detector distance. Within the scope of this work, these summing effects could not be corrected or otherwise accounted for.
In the EU there are longstanding and ongoing pressures towards a tax that is levied at the EU level to substitute for national contributions. We discuss conditions under which such a transition can make sense, starting from what we call a "decentralization theorem of taxation" that is analogous to Oates's (1972) famous result that, in the absence of spill-over effects and economies of scale, decentralized public good provision weakly dominates central provision. We then drop assumptions that turn out to be unnecessary for this result. While spill-over effects of taxation may call for central rules for taxation, as long as spill-over effects do not depend on the intra-regional distribution of the tax burden, decentralized taxation plus tax coordination is found to be superior to a union-wide tax.
Do markets correct individual behavioral biases? In an experimental asset market, we compare the outcomes of a standard market economy to those of an island economy in which market interactions are removed. We observe asset price bubbles in the market economy, while prices are stable in the island economy. We also find that subjects took more risk following larger losses, resulting in higher prices, consistent with a gambling-for-resurrection motive. This motive can translate into bubbles in the market economy because higher prices increase average losses and thus reinforce the desire to resurrect. By contrast, the absence of such a strategic complementarity in island economies can explain their more stable outcome. These results suggest that markets do not correct behavioral biases; rather the contrary.
This paper analyzes sovereign risk shift-contagion, i.e. positive and significant changes in the propagation mechanisms, using bond yield spreads for the major eurozone countries. Using two econometric approaches based on quantile regressions (standard quantile regression and Bayesian quantile regression with heteroskedasticity), we find that the propagation of shocks in eurozone bond yield spreads shows almost no evidence of shift-contagion. All the increases in correlation witnessed in recent years come from larger shocks propagated with higher intensity across Europe.
Research on interbank networks and systemic importance is starting to recognise that the web of exposures linking banks' balance sheets is more complex than the single-layer-of-exposure paradigm. We use data on exposures between large European banks, broken down by both maturity and instrument type, to characterise the main features of the multiplex structure of the network of large European banks. This multiplex network exhibits positively correlated multiplexity and a high similarity between layers, as shown both by standard similarity analyses and by core-periphery analyses of the different layers. We propose measures of systemic importance that fit the case in which banks are connected through an arbitrary number of layers (be it by instrument, maturity or a combination of both). Such measures allow the global systemic importance index of any bank to be decomposed into the contributions of each of the sub-networks, providing a useful tool for banking regulators and supervisors. We use the dataset of exposures between large European banks to illustrate the proposed measures.
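How a global index can decompose into per-layer contributions may be sketched as follows. The toy measure here (a bank's total out-going exposure strength, split additively across layers) is an assumption for illustration only, not the paper's actual systemic importance index; bank names and exposures are hypothetical.

```python
def layer_strengths(layers, bank):
    """Per-layer out-strength of `bank`; layers map name -> {(i, j): exposure}."""
    return {name: sum(w for (i, j), w in edges.items() if i == bank)
            for name, edges in layers.items()}

# Hypothetical two-layer network (short-term vs long-term exposures)
layers = {
    "short_term": {("A", "B"): 5.0, ("B", "C"): 2.0},
    "long_term":  {("A", "C"): 3.0, ("C", "A"): 1.0},
}
contrib = layer_strengths(layers, "A")
total = sum(contrib.values())   # per-layer contributions sum to the global index
print(contrib, total)
```

The additive structure is what makes such measures useful to supervisors: a bank's overall importance can be traced back to the instrument or maturity layer driving it.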
Although banks are at the center of systemic risk, other institutions contribute to it as well. With the publication of the leveraged lending guideline in March 2013, the U.S. regulators showed that they are especially worried about private equity firms and their high-risk deals. Given these risks and the interconnectedness of banks through LBO loan syndicates, I shed light on the impact of a bank's LBO loan exposure on its systemic risk. Using 3,538 observations between 2000 and 2013 from 165 global banks, I show that banks with higher LBO exposure also have a higher level of systemic risk. Other loan purposes do not show this positive relationship. The main drivers strengthening this relationship are the bank's interconnectedness with other LBO-financing banks and its size. Lending experience with a specific PE sponsor, experience with leading LBO syndicates, or a bank's credit rating, however, lead to a lower impact of the LBO loan exposure on systemic risk.
In the mid-1990s, institutional investors entered the syndicated loan market and started to serve borrowers as lead arrangers. Why are non-banks able to compete for this role against banks? How do the composition of syndicates and loan pricing differ among lead arrangers? Using a dataset of 12,847 leveraged loans between 1997 and 2012, I aim to answer these questions. Non-banks benefit from looser regulatory requirements, have industry expertise that helps them in the screening and monitoring of borrowers, and focus on firms that ask for loans only, rather than on the additional cross-selling of other services. I show that non-banks specialize in more opaque and less experienced borrowers, are more likely than banks to choose participants that help to reduce potentially higher information asymmetries, and earn 105 basis points more than banks.
This paper analyzes the influence Leveraged Buyouts (LBOs) have on the operating performance of the LBO target companies' direct competitors. A unique, hand-collected data set on LBOs in the United States in the period 1985-2009 allows us to analyze the effects that different restructuring activities undertaken as part of the LBO have on competitors' revenues. These restructuring activities include changes to leverage, governance, or the operating business, as well as M&A activities of the LBO target company. We find that although LBOs themselves have a negative influence on competitors' revenue growth, some restructuring mechanisms might actually benefit competing companies.
The Liikanen Group proposes contingent convertible (CoCo) bonds as a potential mechanism to enhance financial stability in the banking industry. Especially life insurance companies could serve as CoCo bond holders as they are already the largest purchasers of bank bonds in Europe. We develop a stylized model with a direct financial connection between banking and insurance and study the effects of various types of bonds such as non-convertible bonds, write-down bonds and CoCos on banks' and insurers' risk situations. In addition, we compare insurers' capital requirements under the proposed Solvency II standard model as well as under an internal model that ex-ante anticipates additional risks due to possible conversion of the CoCo bond into bank shares. In order to check the robustness of our findings, we consider different CoCo designs (write-down factor, trigger value, holding time of bank shares) and compare the resulting capital requirements with those for holding non-convertible bonds. We identify situations in which insurers benefit from buying CoCo bonds due to lower capital requirements and higher coupon rates. Furthermore, our results highlight how the Solvency II standard model can mislead insurers in their CoCo investment decision due to economically irrational incentives.
I assess how Basel III, Solvency II and the low interest rate environment will affect the financial connection between the bank and insurance sector by changing the funding patterns of banks as well as the investment strategies of life insurance companies. Especially for life insurance companies, the current low interest rate environment poses a key risk since declining returns on investments jeopardize the guaranteed return on life insurance contracts, a core component of traditional life insurance contracts in several European countries. I consider a contingent claim framework with a direct financial connection between banks and life insurers via bank bonds. The results indicate that life insurers' demand for bank bonds increases over the mid-term but ultimately declines in the long-run. Since life insurers are the largest purchasers of bank bonds in Europe, banks could lose one of their main funding sources. In addition, I show that shareholder value driven life insurers' appetite for risk increases when the gap between asset return and liability growth diminishes. To check the robustness of the findings, I calibrate a prolonged low interest rate scenario. The results show that the insurer's risk appetite is even higher when interest rates remain persistently low. A sensitivity analysis regarding industry-specific regulatory safety levels reveals that contagion between bank and life insurer is driven by the insurers' demand for bank bonds which itself depends on the regulatory safety level of banks.
The creation of the Banking Union is likely to come with substantial implications for the governance of Eurozone banks. The European Central Bank, in its capacity as supervisory authority for systemically important banks, as well as the Single Resolution Board, under the EU Regulations establishing the Single Supervisory Mechanism and the Single Resolution Mechanism, have been provided with a broad mandate and corresponding powers that allow for far-reaching interference with the relevant institutions’ organisational and business decisions. Starting with an overview of the relevant powers, the present paper explores how these could – and should – be exercised against the backdrop of the fundamental policy objectives of the Banking Union. The relevant aspects directly relate to a fundamental question associated with the reallocation of the supervisory landscape, namely: Will the centralisation of supervisory powers, over time, also lead to the streamlining of business models, corporate and group structures of banks across the Eurozone?
This paper examines the dynamic relationship between credit risk and liquidity in the sovereign bond market in the context of the European Central Bank (ECB) interventions. Using a comprehensive set of liquidity measures obtained from a detailed, quote-level dataset of the largest interdealer market for Italian government bonds, we show that changes in credit risk, as measured by the Italian sovereign credit default swap (CDS) spread, generally drive the liquidity of the market: a 10% change in the CDS spread leads to an 11% change in the bid-ask spread. This relationship is stronger, and the transmission is faster, when the CDS spread is above the 500 basis point threshold, estimated endogenously, and can be ascribed to changes in margins and collateral, as well as to clientele effects. Moreover, we show that the Long-Term Refinancing Operations (LTRO) intervention by the ECB weakened the sensitivity of the market makers' liquidity provision to changes in the Italian government's credit risk. We also document the importance of market-wide and dealer-specific funding liquidity measures in determining the market liquidity for Italian government bonds.
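A pass-through of this kind (a 10% CDS move mapping to an 11% bid-ask move) is an elasticity, i.e. the slope of a regression in logarithms. A minimal OLS sketch on synthetic data with a built-in elasticity of 1.1 (not the paper's quote-level dataset):

```python
import math

def ols_slope(x, y):
    """Slope of an ordinary least squares fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

cds = [100.0, 200.0, 400.0, 800.0]          # CDS spread in basis points
bid_ask = [c ** 1.1 / 50.0 for c in cds]    # synthetic series, elasticity 1.1
slope = ols_slope([math.log(c) for c in cds],
                  [math.log(b) for b in bid_ask])
print(round(slope, 3))  # ~1.1: a 10% CDS move maps to an ~11% spread move
```

Taking logs of both series turns the multiplicative relationship into a linear one, so the fitted slope reads directly as "percent change in spread per percent change in CDS".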
The European Commission has published a Green Paper outlining possible measures to create a single market for capital in Europe. Our comments on the Commission's capital markets union project take the functional finance approach as a starting point. Policy decisions, from the functional finance perspective, should be essentially neutral (agnostic) with respect to institutions (a level playing field). The main angle from which we assess proposals for the capital markets union agenda is information asymmetries and the agency problems (screening, monitoring) that arise as a result. Within this perspective, we make a number of more specific proposals.
The paper traces developments from the formation of the European Economic and Monetary Union to the present. It discusses the fact that the primary mandate of the European System of Central Banks (ESCB) is confined to safeguarding price stability and does not include general economic policy. Finally, the paper contributes to the discussion on whether the primary law of the European Union would support a eurozone exit. The Treaty of Maastricht imposed on the European Union (EU) the strict obligation to establish an economic and monetary union, now Article 3(4) TEU. This economic and monetary union is, however, not designed as a separate entity but as an integral part of the EU. The single currency was to become the currency of the EU and to be legal tender in all Member States unless an exemption was explicitly granted in the primary law of the EU, as in the case of the UK and Denmark. Newly admitted Member States are obliged to introduce the euro as their currency as soon as they fulfil the admission criteria. Technically, this has been achieved by transferring the exclusive competence for the monetary policy of the Member States whose currency is the euro to the EU, Article 3(1)(c) TFEU, and by bestowing on the euro the quality of legal tender, the only legal tender in the EU, Article 128(1) sentence 3 TFEU.
German tax policy combines high tax rates with numerous exemptions. This opens up gaps in fairness, channels investment toward the wrong purposes, and complicates the tax system, at times beyond recognition. This is particularly striking in the case of the inheritance tax. The attempt to bring consistency to the inheritance and gift tax through minimally invasive corrections is almost inevitably doomed to failure. There is instead a strong case for markedly lower tax rates combined with the simultaneous abolition of the preferential treatment of business assets.
Critical discourse is essential to scholarship. That is a truism, but it is frequently forgotten in the current dispute over "Münkler Watch", a blog in which students at Humboldt-Universität Berlin anonymously criticize a lecture course by the political scientist Prof. Herfried Münkler. Yet the students, too, seem to be after attention rather than a substantive dialogue.
25 years of ISOE – event documentation online +++ More than just housing: event series "Gemeinsam Leben in der Stadt" (Living Together in the City) +++ Construction of wind turbines: parties to the conflict in dialogue +++ Ceremony at BiK-F – admission to the Senckenberg Gesellschaft für Naturforschung +++ ISOE Lecture in the 2014/15 winter semester at Goethe-Universität Frankfurt +++ From ISOE: European biodiversity research: ISOE is a member of ALTER-Net +++ Dates +++ Publications
ISOE research team accompanies "Reallabore" (real-world laboratories) in Baden-Württemberg +++ ISOE at the Berliner Energietage +++ Water for the dry season – handover of the flood-water collection facility in Namibia +++ City of the future – ISOE is a partner of the 2015 Wissenschaftsjahr (Science Year) +++ The World Water Decade ends – problems in global water supply remain +++ Capital4Health – research network for transdisciplinary health research +++ From ISOE: Dr. Alexandra Lux is the new head of the research unit "Transdisciplinary Methods and Concepts" +++ Dates +++ Publications
The best! New additions to the blogroll
(2015)
Every now and then you simply have to do a thorough decluttering and tidying up. That applies to life in general and, every now and then, to the Bretterblog as well. Today was one of those days. Highly motivated by the bold plans from our last editorial meeting, I tackled, among other things, our blogroll: clicked through everything, cleared out the dead blogs, and marveled at how many strong blogs there are out there that one tends to lose sight of from time to time!...
The design of rainwater-harvesting-based gardens requires considering not only the current climate but also climate change over the lifespan of the facility. The goal of this study is to present an approach for designing garden variants that can be safely supplied with harvested rainwater, taking into account climate change and adaptation measures. In addition, the study presents a methodology to quantify the effects of climate change on rainwater-harvesting-based gardening. Results of the study may not be accurate due to the assumptions made for climate projections and may need to be further refined. We used a tank flow model and an irrigation water model. We then established three simple climate scenarios and analyzed the impact of climate change on harvested rainwater and horticultural production for a semi-arid region in northern Namibia. In the two climate scenarios with decreased precipitation and a medium or high temperature increase, adaptation measures are required to avoid substantial decreases in horticultural production. The study found that the most promising adaptation measures to sustain yields and revenues are a more water-efficient garden variant and an enlargement of the roof size. The proposed measures can partly or completely compensate for the negative impacts of climate change.
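A tank flow model of the kind mentioned above can be reduced to a daily mass balance: rainfall times roof area times a runoff coefficient flows in, irrigation demand is drawn out, and anything above tank capacity spills as overflow. The parameters and three-day rainfall series below are hypothetical, not the study's.

```python
def simulate_tank(rain_mm, roof_m2, demand_m3, capacity_m3, runoff_coeff=0.9):
    """Return (final storage, total overflow, unmet demand) over the series."""
    storage = overflow = deficit = 0.0
    for rain, demand in zip(rain_mm, demand_m3):
        inflow = rain / 1000.0 * roof_m2 * runoff_coeff  # mm on roof -> m3
        storage += inflow
        if storage > capacity_m3:                        # tank spills the excess
            overflow += storage - capacity_m3
            storage = capacity_m3
        draw = min(storage, demand)                      # irrigate what we can
        deficit += demand - draw
        storage -= draw
    return storage, overflow, deficit

# Hypothetical 3-day spell: heavy rain, then two dry irrigation days
print(simulate_tank([20.0, 0.0, 0.0], roof_m2=100.0,
                    demand_m3=[0.5, 0.5, 0.5], capacity_m3=1.0))
```

Within such a balance, the study's two headline adaptation measures are directly visible: enlarging the roof raises the inflow term, while a more water-efficient garden lowers the demand term.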
Bayesian Networks are computer-based environmental models that are frequently used to support decision-making under uncertainty. Under data-scarce conditions, Bayesian Networks can be developed, parameterized, and run on the basis of expert knowledge alone. However, the efficiency of expert-based Bayesian Network modeling is limited by the difficulty of deriving model inputs in the time available during expert workshops. This thesis therefore aimed at developing a simple and robust method for deriving conditional probability tables from expert estimates in a time-efficient way. The design and application of this new elicitation and conversion method are demonstrated using a case study in Xinjiang, Northwest China. The key characteristics of the method are its time-efficiency and its use of different conversion tables depending on the expert's level of confidence. Although the method has its limitations (e.g. it can only be applied to variables with a single conditioning variable), it offers the opportunity to support the parameterization of Bayesian Networks that would otherwise remain half-finished due to time constraints. In addition, a case study in the Murray-Darling Basin, Australia, is used to compare Bayesian Network types and software in order to improve the presentational clarity of large Bayesian Networks. Both case studies aimed at gaining insights into how to improve the applicability of Bayesian Networks to support environmental management.
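The conversion step described above (turning verbal expert estimates into a conditional probability table) can be sketched as a lookup-and-normalize routine. The verbal scale and the weight values below are invented for illustration; they are not the thesis's actual conversion tables.

```python
CONVERSION = {            # verbal confidence -> probability weight (invented)
    "very unlikely": 0.05,
    "unlikely": 0.2,
    "uncertain": 0.5,
    "likely": 0.8,
    "very likely": 0.95,
}

def cpt_row(estimates):
    """Normalize expert weights over the states of the conditioned variable."""
    weights = [CONVERSION[e] for e in estimates]
    total = sum(weights)
    return [w / total for w in weights]

# One parent state; child with three states ("low", "medium", "high")
row = cpt_row(["unlikely", "likely", "very unlikely"])
print(row)   # a valid CPT row: probabilities sum to 1
```

Normalizing per parent-state row is what guarantees each CPT row is a proper probability distribution, however rough the verbal estimates are.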
After last week's hiatus, there is a Netzschau again this week. Everything as usual, I am tempted to say. But that is not quite true. This edition is a transitional Netzschau and therefore comes in a very slim format. We are considering changing the format drastically. No decision has yet been made on the "how", though. Anyone who wants to give us tips on what her or his perfect Netzschau would look like, or what we should definitely change, please comment on this post. Thanks!
According to the prevailing reading, exclusivity and access interests collide on the internet. Copyright law is supposed to bring this conflict into an appropriate balance. The following article interprets the disputes over digital copyright differently. On this view, online communication is shaped by two coexisting cultures that relate to copyright in different ways. The design of digital copyright law will help decide whether the dynamic coexistence of the culture of exclusivity and the culture of access endures, or whether one of the two cultures is displaced. Copyright law is therefore to be understood as part of internet regulation.
FIAS Scientific Report 2014
(2015)
This dissertation shows that global coherence in life narratives first emerges in adolescence and continues to develop in adulthood. It was also shown that the fragmentary use of the life story, in the form of autobiographical reasoning, contributes to maintaining self-continuity in times of profound life change.
A study of the scholarly output on the School of Salamanca in recent years and its future prospects. The difficulty of delimiting the School in time is raised, and a broadening of its field of study is proposed: not only the traditional topics such as theology (morals, the problem of evil, the De auxiliis controversy), law (natural law and human rights, sovereignty, just war, …) and economics (private property, money, value and price, interest), but also scientific problems concerning space, time, and other matters.
The following article presents a reconstruction of the sixteenth-century debate on the theological-political condition of the American Indians. It concentrates, in particular, on one element of a complex controversy: the opinions on the paganism of the peoples "discovered" in America and Asia. After analyzing the condemnations of the Indians for "idolatry" found in the writings of chroniclers such as López de Gómara or Fernández de Oviedo, it summarizes the arguments developed by leading masters of the University of Salamanca (De Paz, Vitoria, Soto) to reject the confused way in which the theological dilemmas arising from the discovery of the new pagan peoples were being framed. The article also emphasizes the important role played by the Salamancan theologians in a broader process of conceptualizing the "innocent" nature of the "idolatries" of the native Americans, in which other missionary theologians (Las Casas, Zumárraga, …) also took part, though drawing on other methods and arguments. The final sections are devoted to studying the consolidation of the arguments forged by the Salamancan theologians in the ongoing debate over evangelization and Spanish rule over the Indies. Specifically, the article revisits some writings of two prominent figures, Alonso de la Veracruz and Domingo de Salazar, to show how, under the influence of Vitoria and Soto, their professors at the University of Salamanca, Veracruz and Salazar adapted some of their ideas to the missionary contexts of America and Asia.
The purpose of this article is to discuss, first, to what extent the masters linked to the University of Salamanca and its associated School of Salamanca contributed to validating a body of knowledge related to the discoveries that made it possible to conceive a new geographical configuration of the Earth. Second, we show how the University of Salamanca, together with other institutions of knowledge, operated as a center of 'scientific' activity in the service of the projects of the Spanish monarchy.
The first part of this text sets out the requirements for a scientific concept of crime. The subsequent examination of "general theories of crime" shows that they cannot make good on their claim because they lack a scientifically sound concept of crime. Yet by failing to acknowledge this deficiency, instead veiling the gap with silence or with loose notions of crime, they deceptively gloss over it.
Background: Acquired resistance to standard chemotherapy causes treatment failure in patients with metastatic bladder cancer. Overexpression of pro-survival Bcl-2 family proteins has been associated with a poor chemotherapeutic response, suggesting that Bcl-2-targeted therapy may be a feasible strategy in patients with these tumors. The small-molecule pan-Bcl-2 inhibitor (−)-gossypol (AT-101) is known to induce apoptotic cell death, but can also induce autophagy through release of the pro-autophagic BH3-only protein Beclin-1 from Bcl-2. The potential therapeutic effects of (−)-gossypol in chemoresistant bladder cancer and the role of autophagy in this context are hitherto unknown.
Methods: Cisplatin (5637rCDDP1000, RT4rCDDP1000) and gemcitabine (5637rGEMCI20, RT4rGEMCI20) chemoresistant sub-lines of the chemo-sensitive bladder cancer cell lines 5637 and RT4 were established for the investigation of acquired resistance mechanisms. Cell lines carrying a stable lentiviral knockdown of the core autophagy regulator ATG5 were created from chemosensitive 5637 and chemoresistant 5637rGEMCI20 and 5637rCDDP1000 cell lines. Cell death and autophagy were quantified by FACS analysis of propidium iodide, Annexin and Lysotracker staining, as well as LC3 translocation.
Results: Here we demonstrate that (−)-gossypol induces an apoptotic type of cell death in 5637 and RT4 cells which is partially inhibited by the pan-caspase inhibitor z-VAD. Cisplatin- and gemcitabine-resistant bladder cancer cells exhibit enhanced basal and drug-induced autophagosome formation and lysosomal activity which is accompanied by an attenuated apoptotic cell death after treatment with both (−)-gossypol and ABT-737, a Bcl-2 inhibitor which spares Mcl-1, in comparison to parental cells. Knockdown of ATG5 and inhibition of autophagy by 3-MA had no discernible effect on apoptotic cell death induced by (−)-gossypol and ABT-737 in parental 5637 cells, but evoked a significant increase in early apoptosis and overall cell death in BH3 mimetic-treated 5637rGEMCI20 and 5637rCDDP1000 cells.
Conclusions: Our findings show for the first time that (−)-gossypol concomitantly triggers apoptosis and a cytoprotective type of autophagy in bladder cancer and support the notion that enhanced autophagy may underlie the chemoresistant phenotype of these tumors. Simultaneous targeting of Bcl-2 proteins and the autophagy pathway may be an efficient new strategy to overcome their "autophagy addiction" and acquired resistance to current therapy.
The family of lysosome-associated membrane proteins (LAMP) includes the ubiquitously expressed LAMP1 and LAMP2, which account for half of the proteins in the lysosomal membrane. Another member of the LAMP family is LAMP3, which is expressed only in certain cell types and differentiation stages. LAMP3 expression is linked with poor prognosis of certain cancers, and the locus where it is encoded was identified as a risk factor for Parkinson's disease (PD). Here, we investigated the role of LAMP3 in the two main cellular degradation pathways, the proteasome and autophagy. LAMP3 mRNA was not detected in mouse models of PD or in the brain of human patients. However, it was strongly induced upon proteasomal inhibition in the neuroblastoma cell line SH-SY5Y. Induction of LAMP3 mRNA following proteasomal inhibition was dependent on UPR transcription factor ATF4 signaling and induced autophagic flux. Prevention of LAMP3 induction enhanced apoptotic cell death. In summary, these data demonstrate that LAMP3 regulation as part of the UPR contributes to protein degradation and cell survival during proteasomal dysfunction. This link between autophagy and the proteasome may be of special importance for the treatment of tumor cells with proteasomal inhibitors.
This contribution is a review essay on Beatrice Brunhöber's dissertation Die Erfindung „demokratischer Repräsentation" in den Federalist Papers (Mohr Siebeck, Tübingen 2010: Grundlagen der Rechtswissenschaft, vol. 14), in which Brunhöber works out the innovative force, one that also shaped constitutional development elsewhere, of the American constitutional founders' combination of democracy, political representation, and the idea of federalism. Building on Brunhöber's study, the essay asks in particular how the 'old' concept designed by Hamilton, Madison, and Jay for shaping a strong polity (including the trust-building principle of the separation of powers) can be made fruitful for an integrative approach to the 'modern' realities of pluralistic societies, with a view to the entirety (and diversity) of the people as the foundation of legitimate rule. In the background stands the broader question of how historical reassurances can be made useful for today's debates at all.
Background: Influenza vaccination is recommended for all healthcare personnel (HCP) and most institutions offer vaccination for free and on site. However, medical students do not always have such easy access, and the predictors that might guide the motivation of medical students to get vaccinated are largely unknown.
Methods: We conducted a cross-sectional survey study among pre-clinical medical students in a German University hospital to assess the social cognitive predictors of influenza vaccination, as well as reasons for refusal and acceptance of the vaccine.
Results: Findings show that pre-clinical medical students have knowledge gaps and negative attitudes towards influenza vaccination comparable to those previously reported among HCP. Lower injunctive norms and higher feelings of autonomy contributed to a lack of intention to get vaccinated against influenza, whereas a positive instrumental attitude and higher feelings of autonomy contributed to a high intention to get vaccinated. The variables in the regression model explained 20% of the variance in intention to get vaccinated. Conclusions: The identified factors should be addressed early in medical education, and hospitals might benefit from a more inclusive vaccination program that makes free vaccines accessible to their medical students.
Background: The objective measurement of the mechanical component and its role in chronic ankle instability is still a matter of scientific debate. We analyzed the known-groups and diagnostic validity of our ankle arthrometer. Additionally, functional aspects of chronic ankle instability were evaluated in relation to the anterior talar drawer.
Methods: Based on manual stress testing, 41 functionally unstable ankles were classified as mechanically stable (n = 15) or mechanically unstable (n = 26). Ankle laxity was quantified using an ankle arthrometer. Stiffness values were calculated from the load-displacement curves between 40 and 60 N. Known-groups validity and eta² were established by comparing manual and arthrometer testing results. Diagnostic validity of the ankle arthrometer was determined by a 2 × 2 contingency table. Functional ankle instability severity was quantified with the German version of the Foot and Ankle Ability Measure (FAAM-G). Stiffness (40–60 N) and FAAM-G values were correlated.
Results: Mechanically unstable ankles had lower 40–60 N stiffness values than mechanically stable ankles (p = 0.006 and <0.001). Eta for the relation between manual and arthrometer anterior talar drawer testing was 0.628. With a cut-off value of 5.1 N/mm, accuracy, sensitivity, and specificity were 85%, 81%, and 93%, respectively. The correlation between individual 40–60 N arthrometer stiffness values and FAAM-G scores was r = 0.286 and 0.316 (p = 0.07 and 0.04).
Conclusions: In this investigation, the ankle arthrometer demonstrated a high diagnostic validity for the determination of mechanical ankle instability. A clear interaction between mechanical (ankle arthrometer) and functional (FAAM-G) measures could not be demonstrated.
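The diagnostic validity figures above follow directly from a 2 × 2 contingency table of the manual stress test (reference standard) against the 5.1 N/mm arthrometer cut-off. As a minimal sketch, the cell counts below are hypothetical and not taken from the study; they are chosen only because they are consistent with the reported n = 41 sample (26 unstable, 15 stable) and the 85%/81%/93% figures:

```python
def diagnostic_validity(tp, fn, fp, tn):
    """Accuracy, sensitivity and specificity from a 2x2 contingency table.

    tp: manually unstable ankles below the stiffness cut-off (test positive)
    fn: manually unstable ankles above the cut-off (test negative)
    fp: manually stable ankles below the cut-off
    tn: manually stable ankles above the cut-off
    """
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical counts consistent with the reported percentages (the abstract
# gives only the derived figures, not the underlying table):
acc, sens, spec = diagnostic_validity(tp=21, fn=5, fp=1, tn=14)
# acc ~ 0.85, sens ~ 0.81, spec ~ 0.93
```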
Background: Patients with liver cirrhosis have a highly elevated risk of developing bacterial infections that significantly decrease survival rates. One of the most relevant infections is spontaneous bacterial peritonitis (SBP). Recently, NOD2 germline variants were found to be potential predictors of infectious complications and mortality in patients with cirrhosis. The aim of the INCA (Impact of NOD2 genotype-guided antibiotic prevention on survival in patients with liver Cirrhosis and Ascites) trial is to investigate whether primary antibiotic prophylaxis of SBP improves survival in this genetically defined high-risk group of patients with cirrhosis carrying NOD2 variants.
Methods/Design: The INCA trial is a double-blind, placebo-controlled clinical trial with two parallel treatment arms (arm 1: norfloxacin 400 mg once daily; arm 2: placebo once daily; 12-month treatment and observational period). Balanced randomization of 186 eligible patients with stratification for the protein content of the ascites (<15 versus ≥15 g/L) and the study site is planned. In this multicenter national study, patients are recruited in at least 13 centers throughout Germany. The key inclusion criterion is the presence of a NOD2 risk variant in patients with decompensated liver cirrhosis. The most important exclusion criteria are current SBP or previous history of SBP and any long-term antibiotic prophylaxis. The primary endpoint is overall survival after 12 months of treatment. Secondary objectives are to evaluate whether the frequencies of SBP and other clinically relevant infections necessitating antibiotic treatment, as well as the total duration of unplanned hospitalization due to cirrhosis, differ in both study arms. Recruitment started in February 2014.
Discussion: Preventive strategies are required to avoid life-threatening infections in patients with liver cirrhosis, but unselected use of antibiotics can trigger resistant bacteria and worsen outcome. Thus, individualized approaches that direct intervention only to patients with the highest risk are urgently needed. This trial meets this need by suggesting stratified prevention based on genetic risk assessment. To our knowledge, the INCA trial is first in the field of hepatology aimed at rapidly transferring and validating information on individual genetic risk into clinical decision algorithms.
Trial registrations: German Clinical Trials Register DRKS00005616. Registered 22 January 2014. EU Clinical Trials Register EudraCT 2013-001626-26. Registered 26 January 2015.
Recent studies have revealed an important role for Ltbp-4 in elastogenesis. Its mutational inactivation in humans causes autosomal recessive cutis laxa type 1C (ARCL1C), which is a severe disorder caused by defects of the elastic fiber network. Although the human gene involved in ARCL1C has been discovered based on similar elastic fiber abnormalities exhibited by mice lacking the short Ltbp-4 isoform (Ltbp4S−/−), the murine phenotype does not replicate ARCL1C. We therefore inactivated both Ltbp-4 isoforms in the mouse germline to model ARCL1C. Comparative analysis of Ltbp4S−/− and Ltbp4-null (Ltbp4−/−) mice identified Ltbp-4L as an important factor for elastogenesis and postnatal survival, and showed that it has distinct tissue expression patterns and specific molecular functions. We identified fibulin-4 as a previously unknown interaction partner of both Ltbp-4 isoforms and demonstrated that at least Ltbp-4L expression is essential for incorporation of fibulin-4 into the extracellular matrix (ECM). Overall, our results contribute to the current understanding of elastogenesis and provide an animal model of ARCL1C.
The three-dimensional quantification of small-scale processes in the upper troposphere and lower stratosphere is one of the challenges of current atmospheric research and requires the development of new measurement strategies. This work presents the first results from the newly developed Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA) obtained during the ESSenCe (ESa Sounder Campaign) and TACTS/ESMVal (TACTS: Transport and composition in the upper troposphere/lowermost stratosphere, ESMVal: Earth System Model Validation) aircraft campaigns. The focus of this work is on the so-called dynamics-mode data, characterized by medium spectral and very high spatial resolution. The retrieval strategy for the derivation of two- and three-dimensional constituent fields in the upper troposphere and lower stratosphere is presented. Uncertainties of the main retrieval targets (temperature, O3, HNO3, and CFC-12) and their spatial resolution are discussed. During ESSenCe, high-resolution two-dimensional cross-sections have been obtained. Comparisons to collocated remote-sensing and in situ data indicate a good agreement between the data sets. During TACTS/ESMVal, a tomographic flight pattern was flown to sense an intrusion of stratospheric air deep into the troposphere. It was possible to reconstruct this filament at an unprecedented spatial resolution of better than 500 m vertically and 20 × 20 km horizontally.
Microstructural abnormalities in white matter (WM) are often reported in Alzheimer's disease (AD) and may reflect primary or secondary circuitry degeneration (i.e., due to cortical atrophy). The interpretation of diffusion tensor imaging (DTI) eigenvectors, known as multiple indices, may provide new insights into the main pathological models supporting primary or secondary patterns of WM disruption in AD, the retrogenesis, and Wallerian degeneration models, respectively. The aim of this review is to analyze the current literature on the contribution of DTI multiple indices to the understanding of AD neuropathology, taking the retrogenesis model as a reference for discussion. A systematic review using MEDLINE, EMBASE, and PUBMED was performed. Evidence suggests that AD evolves through distinct patterns of WM disruption, in which retrogenesis or, alternatively, the Wallerian degeneration may prevail. Distinct patterns of WM atrophy may be influenced by complex interactions which comprise disease status and progression, fiber localization, concurrent risk factors (i.e., vascular disease, gender), and cognitive reserve. The use of DTI multiple indices in addition to other standard multimodal methods in dementia research may help to determine the contribution of retrogenesis hypothesis to the understanding of neuropathological hallmarks that lead to AD.
Numerous studies reported a strong link between working memory capacity (WMC) and fluid intelligence (Gf), although views differ with respect to how closely the two constructs are related. In the present study, we used a WMC task with five levels of task demands to assess the relationship between WMC and Gf by means of a new methodological approach referred to as fixed-links modeling. Fixed-links models belong to the family of confirmatory factor analysis (CFA) and are of particular interest for experimental, repeated-measures designs. With this technique, processes systematically varying across task conditions can be disentangled from processes unaffected by the experimental manipulation. Proceeding from the assumption that experimental manipulation in a WMC task leads to increasing demands on WMC, the processes systematically varying across task conditions can be assumed to be WMC-specific. Processes not varying across task conditions, on the other hand, are probably independent of WMC. Fixed-links models allow for representing these two kinds of processes by two independent latent variables. In contrast to traditional CFA, where a common latent variable is derived from the different task conditions, fixed-links models facilitate a more precise or purified representation of the WMC-related processes of interest. By using fixed-links modeling to analyze data of 200 participants, we identified a non-experimental latent variable, representing processes that remained constant irrespective of the WMC task conditions, and an experimental latent variable which reflected processes that varied as a function of experimental manipulation. This latter variable represents the increasing demands on WMC and, hence, was considered a purified measure of WMC controlled for the constant processes. Fixed-links modeling showed that both the purified measure of WMC (β = .48) and the constant processes involved in the task (β = .45) were related to Gf.
Taken together, these two latent variables explained the same portion of variance of Gf as a single latent variable obtained by traditional CFA (β = .65) indicating that traditional CFA causes an overestimation of the effective relationship between WMC and Gf. Thus, fixed-links modeling provides a feasible method for a more valid investigation of the functional relationship between specific constructs.
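The variance bookkeeping behind this comparison can be sketched in a few lines. Assuming the two fixed-links latent variables are specified as orthogonal (as the abstract describes them, independent), the Gf variance they jointly explain is the sum of their squared standardized paths, which essentially equals the squared single-factor path from traditional CFA:

```python
# Variance in Gf explained by two uncorrelated latent predictors versus a
# single common factor, using the standardized coefficients quoted above.
beta_wmc, beta_constant = 0.48, 0.45   # fixed-links model (orthogonal latents)
beta_single = 0.65                     # traditional one-factor CFA

r2_fixed_links = beta_wmc ** 2 + beta_constant ** 2   # ~0.43
r2_single_factor = beta_single ** 2                   # ~0.42

# The explained-variance totals are nearly identical, but only beta_wmc
# reflects WMC-specific processes; reading beta_single as a pure WMC-Gf
# relation therefore overstates that relation.
```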
Introduction: In this article three research questions are addressed: (1) Is there an association between socioeconomic status (SES) and patient-reported outcomes in a cohort of multimorbid patients? (2) Does the association vary according to SES indicator used (income, education, occupational position)? (3) Can the association between SES and patient-reported outcomes (self-rated health, health-related quality of life and functional status) be (partly) explained by burden of disease?
Methods: Analyses are based on the MultiCare Cohort Study, a German multicentre, prospective, observational cohort study of multimorbid patients from general practice. We analysed baseline data and data from the first follow-up after 15 months (N = 2,729). To assess burden of disease we used the patients’ morbidity data from standardized general practitioner (GP) interviews based on a list of 46 groups of chronic conditions including the GP’s severity rating of each chronic condition ranging from marginal to very severe.
Results: In the cross-sectional analyses SES was significantly associated with the patient-reported outcomes at baseline. Associations with income were more consistent and stronger than with education and occupational position. Associations were partly explained (17% to 44%) by burden of disease. In the longitudinal analyses only income (but not education and occupational position) was significantly related to the patient-reported outcomes at follow-up. Associations between income and the outcomes were reduced by 18% to 27% after adjustment for burden of disease.
Conclusions: Results indicate social inequalities in self-rated health, functional status and health related quality of life among older multimorbid patients. As associations with education and occupational position were inconsistent, these inequalities were mainly due to income. Inequalities were partly explained by burden of disease. However, even among patients with a similar disease burden, those with a low income were worse off in terms of the three patient-reported outcomes under study.
Background: Xanthophyllomyces dendrorhous is a basal agaricomycete with uncertain taxonomic placement, known for its unique ability to produce astaxanthin, a carotenoid with antioxidant properties. It was the aim of this study to elucidate the organization of its CoA-derived pathways and to use the genomic information of X. dendrorhous for a phylogenomic investigation of the Basidiomycota.
Results: The genome assembly of a haploid strain of Xanthophyllomyces dendrorhous revealed a genome of 19.50 Megabases with 6385 protein coding genes. Phylogenetic analyses were conducted including 48 fungal genomes. These revealed Ustilaginomycotina and Agaricomycotina as sister groups. In the latter a well-supported sister-group relationship of two major orders, Polyporales and Russulales, was inferred. Wallemia occupies a basal position within the Agaricomycotina and X. dendrorhous represents the basal lineage of the Tremellomycetes, highlighting that the typical tremelloid parenthesomes have either convergently evolved in Wallemia and the Tremellomycetes, or were lost in the Cystofilobasidiales lineage. A detailed characterization of the CoA-related pathways was done and all genes for fatty acid, sterol and carotenoid synthesis have been assigned.
Conclusions: The current study ascertains that Wallemia with tremelloid parenthesomes is the most basal agaricomycotinous lineage and that Cystofilobasidiales without tremelloid parenthesomes are deeply rooted within Tremellomycetes, suggesting that parenthesomes at septal pores might be the core synapomorphy for the Agaricomycotina. Apart from evolutionary insights the genome sequence of X. dendrorhous will facilitate genetic pathway engineering for optimized astaxanthin or oxidative alcohol production.
Background: Aging is associated with loss of balance and activity in daily life. It impacts postural control and increases the risk of falls. The current study was conducted to determine the feasibility and long-term impact of stochastic resonance whole-body vibration (SR-WBV) on static and dynamic balance and reaction time among elderly individuals.
Methods: A randomized crossover pilot study with blinding of the participants. Twenty elderly participants were allocated to group A (SR-WBV 5 Hz, Noise 4/SR-WBV 1 Hz, Noise 1) or group B (SR-WBV 1 Hz, Noise 1/SR-WBV 5 Hz, Noise 1). Feasibility outcomes included recruitment, compliance and safety. Secondary outcomes were Semi-Tandem Stand (STS), Functional Reach Test (FRT), Expanded Timed Get Up-and-Go (ETGUG), walking under single-task (ST) and dual-task (DT) conditions, and hand and foot reaction time (RTH/RTF). Puri and Sen Rank-Order L Statistics were used to analyse carry-over effects. Wilcoxon signed-rank tests were used to analyse SR-WBV effects.
Results: With a good recruitment rate (55%) and good compliance (attrition 15%; adherence 85%), the intervention was deemed feasible. Three participants dropped out, two due to knee pain and one for personal reasons. ETGUG 0 to 2 m (p = 0.143; ES: 0.36) and ETGUG total time (p = 0.097; ES: 0.40) showed medium effect sizes.
Conclusions: Stochastic resonance training is feasible in untrained elderly individuals, resulting in good recruitment and compliance. Low-volume SR-WBV exercise over 12 training sessions at 5 Hz, Noise 4 seems to be a sufficient stimulus to improve ETGUG total time. The stimulation did not elicit changes in the other outcomes.
Trial registration: This trial has been registered at the U.S. National Institutes of Health under ClinicalTrials.gov: NCT01045746.
The genetics responsible for the inter-individually variable G-CSF responsiveness remain elusive. A single nucleotide polymorphism (SNP) in the 3'UTR of CXCL12, rs1801157, was implicated in X4-tropic HIV susceptibility and later, in two small studies, in G-CSF responsiveness in patients and donors. The position of the SNP in the 3'UTR, together with in-silico predictions, suggested differential binding of microRNA-941 (miR-941) as an underlying mechanism. In a cohort of 515 healthy stem cell donors we attempted to reproduce the correlation between the CXCL12 3'UTR SNP and mobilization responses and tested the role of miR-941 in this context. The SNP was distributed with the expected frequency. Mobilization efficiency for CD34+ cells in wild-type, heterozygous and homozygous SNP individuals was indistinguishable, even after controlling for gender. miR-941 expression in non-hematopoietic bone marrow cells was undetectable, and miR-941 did not interact with the 3'UTR of CXCL12. The proposed effects of the SNP rs1801157 on G-CSF responsiveness could not be confirmed in this larger cohort.
Genocide against the Armenians: diplomatic considerations must not stand in the way of recognition
(2015)
In their guest contribution, Matthias Winkler and Timo Leimeister of Genocide Alert argue that Germany, despite possible diplomatic tensions, should not shy away from explicitly naming the 1915 genocide against the Armenians as such. A century ago, large parts of the Armenian people in the Ottoman Empire were annihilated in a genocide. The German Reich was a close ally of the Ottoman government of the time and placed alliance politics above the survival of the Armenians. On the contrary, by also acknowledging its own historical responsibility for these events, the Federal Republic can strengthen the position of those in Turkey who advocate reconciliation...
Single-molecule super-resolution microscopy allows imaging of fluorescently tagged proteins in live cells with a precision well below the diffraction limit. Here, we demonstrate 3D sectioning with single-molecule super-resolution microscopy by making use of the fitting information that is usually discarded to reject fluorophores that emit from above or below a virtual 'light-sheet', a thin volume centred on the focal plane of the microscope. We describe an easy-to-use routine (implemented as an open-source ImageJ plug-in) to quickly analyse a calibration sample to define and use such a virtual light-sheet. In addition, the plug-in is easily usable on almost any existing 2D super-resolution instrumentation. This optical sectioning of super-resolution images is achieved by applying well-characterised width and amplitude thresholds to diffraction-limited spots; these thresholds can be used to tune the thickness of the virtual light-sheet. This allows qualitative and quantitative imaging improvements: by rejecting out-of-focus fluorophores, the super-resolution image gains contrast and local features may be revealed; by retaining only fluorophores close to the focal plane, virtual light-sheet single-molecule localisation microscopy improves the probability that all emitting fluorophores will be detected, fitted and quantitatively evaluated.
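The rejection step can be illustrated with a small sketch (function and parameter names are ours, not the plug-in's API). In-focus single molecules fit to narrow, bright diffraction-limited spots, while out-of-focus ones fit to wider, dimmer spots, so thresholding the fitted width and amplitude keeps only emitters inside the virtual light-sheet; in the actual routine the thresholds are derived from a calibration sample:

```python
def virtual_light_sheet_filter(spots, width_range, min_amplitude):
    """Keep localisations whose fitted PSF parameters indicate emission close
    to the focal plane. `spots` is a list of (fitted_width_nm, amplitude)
    pairs; widening `width_range` corresponds to a thicker light-sheet.
    (Illustrative sketch; names and thresholds are not the plug-in's own.)"""
    lo, hi = width_range
    return [lo <= width <= hi and amp >= min_amplitude for width, amp in spots]

# Narrow, bright spots pass; wide or dim (defocused) spots are rejected.
keep = virtual_light_sheet_filter(
    spots=[(130, 900), (145, 850), (260, 200), (150, 780), (310, 120)],
    width_range=(110, 180),
    min_amplitude=500,
)
# keep -> [True, True, False, True, False]
```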
The formation of particles from precursor vapors is an important source of atmospheric aerosol. Research at the Cosmics Leaving OUtdoor Droplets (CLOUD) facility at CERN tries to elucidate which vapors are responsible for this new-particle formation, and how in detail it proceeds. Initial measurement campaigns at the CLOUD stainless-steel aerosol chamber focused on investigating particle formation from ammonia (NH3) and sulfuric acid (H2SO4). Experiments were conducted in the presence of water, ozone and sulfur dioxide. Contaminant trace gases were suppressed at the technological limit. For this study, we mapped out the compositions of small NH3–H2SO4 clusters over a wide range of atmospherically relevant environmental conditions. We covered [NH3] in the range from < 2 to 1400 pptv, [H2SO4] from 3.3 × 106 to 1.4 × 109 cm−3 (0.1 to 56 pptv), and a temperature range from −25 to +20 °C. Negatively and positively charged clusters were directly measured by an atmospheric pressure interface time-of-flight (APi-TOF) mass spectrometer, as they initially formed from gas-phase NH3 and H2SO4, and then grew to larger clusters containing more than 50 molecules of NH3 and H2SO4, corresponding to mobility-equivalent diameters greater than 2 nm. Water molecules evaporate from these clusters during sampling and are not observed. We found that the composition of the NH3–H2SO4 clusters is primarily determined by the ratio of gas-phase concentrations [NH3] / [H2SO4], as well as by temperature. Pure binary H2O–H2SO4 clusters (observed as clusters of only H2SO4) only form at [NH3] / [H2SO4] < 0.1 to 1. For larger values of [NH3] / [H2SO4], the composition of NH3–H2SO4 clusters was characterized by the number of NH3 molecules m added for each added H2SO4 molecule n (Δm/Δn), where n is in the range 4–18 (negatively charged clusters) or 1–17 (positively charged clusters). For negatively charged clusters, Δm/Δn saturated between 1 and 1.4 for [NH3] / [H2SO4] > 10.
Positively charged clusters grew on average by Δm/Δn = 1.05 and were only observed at sufficiently high [NH3] / [H2SO4]. The H2SO4 molecules of these clusters are partially neutralized by NH3, in close resemblance to the acid–base bindings of ammonium bisulfate. Supported by model simulations, we substantiate previous evidence for acid–base reactions being the essential mechanism behind the formation of these clusters under atmospheric conditions and up to sizes of at least 2 nm. Our results also suggest that electrically neutral NH3–H2SO4 clusters, unobservable in this study, have generally the same composition as ionic clusters for [NH3] / [H2SO4] > 10. We expect that NH3–H2SO4 clusters form and grow also mostly by Δm/Δn > 1 in the atmosphere's boundary layer, as [NH3] / [H2SO4] is mostly larger than 10. We compared our results from CLOUD with APi-TOF measurements of NH3–H2SO4 anion clusters during new-particle formation in the Finnish boreal forest. However, the exact role of NH3–H2SO4 clusters in boundary layer particle formation remains to be resolved.
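The Δm/Δn measure used above can be made concrete with a worked illustration (the growth path below is hypothetical, not data from the study):

```python
def dm_dn(path):
    """Average number of NH3 molecules (m) added per added H2SO4 molecule (n)
    between the first and last composition of a cluster growth path given as
    (n, m) pairs. Illustrative helper; not code from the study."""
    (n0, m0), (n1, m1) = path[0], path[-1]
    return (m1 - m0) / (n1 - n0)

# Hypothetical anion-cluster growth path in the NH3-rich regime
# ([NH3]/[H2SO4] > 10), roughly one base molecule per added acid molecule:
path = [(4, 2), (8, 6), (12, 11), (18, 18)]
ratio = dm_dn(path)   # (18 - 2) / (18 - 4) ~= 1.14, inside the 1-1.4 band
```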
Seven different instruments and measurement methods were used to examine the immersion freezing of bacterial ice nuclei from Snomax® (hereafter Snomax), a product containing ice-active protein complexes from non-viable Pseudomonas syringae bacteria. The experimental conditions were kept as similar as possible for the different measurements. Of the participating instruments, some examined droplets which had been made from suspensions directly, and the others examined droplets activated on previously generated Snomax particles, with particle diameters of mostly a few hundred nanometers and up to a few micrometers in some cases. Data were obtained in the temperature range from −2 to −38 °C, and it was found that all ice-active protein complexes were already activated above −12 °C. Droplets with different Snomax mass concentrations covering 10 orders of magnitude were examined. Some instruments had very short ice nucleation times down to below 1 s, while others had comparably slow cooling rates around 1 K min−1. Displaying data from the different instruments in terms of numbers of ice-active protein complexes per dry mass of Snomax, nm, showed that within their uncertainty, the data agree well with each other as well as with previously reported literature results. Two parameterizations were taken from the literature for a direct comparison to our results: a time-dependent approach based on a contact angle distribution (Niedermeier et al., 2014) and a modification of the parameterization presented in Hartmann et al. (2013) representing a time-independent approach. The agreement between these and the measured data was good; i.e., they agreed within a temperature range of 0.6 K or equivalently a range in nm of a factor of 2. From the results presented herein, we propose that Snomax, at least when carefully stored and prepared, is a suitable material to test and compare different instruments for their accuracy of measuring immersion freezing.
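Expressing results as ice-active entities per dry mass, as done above, typically follows a Vali-type cumulative spectrum, in which the frozen fraction of a droplet population is converted via the Snomax mass contained in each droplet. A minimal sketch (the function, variable names and the numbers in the example are ours, not values from the study):

```python
import math

def n_m(frozen_fraction, mass_conc_g_per_mL, droplet_volume_mL):
    """Ice-active entities per gram of dry Snomax from the frozen fraction of
    a droplet population (Vali-type cumulative spectrum):
        n_m(T) = -ln(1 - f_ice(T)) / (C * V)
    where C is the suspension mass concentration and V the droplet volume,
    so C * V is the Snomax mass per droplet."""
    mass_per_droplet_g = mass_conc_g_per_mL * droplet_volume_mL
    return -math.log(1.0 - frozen_fraction) / mass_per_droplet_g

# Hypothetical example: half of the 1-uL droplets of a 1e-5 g/mL suspension
# are frozen at a given temperature:
nm = n_m(frozen_fraction=0.5, mass_conc_g_per_mL=1e-5, droplet_volume_mL=1e-3)
# nm ~ 6.9e7 ice-active entities per gram of dry Snomax
```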
Ecolabels are frequently presented as consumer information tools that efficiently promote environmental aims such as the sustainability of fisheries. Two recent WTO dispute settlement cases, Tuna II and COOL, have called into question the characterisation of labels as ‘consumer information tools’ by illuminating the regulatory power and purposes of labelling. Tuna II moreover clarifies that WTO law does not necessarily privilege ecolabelling over more openly interventionist government measures aimed at environmental protection. In this contribution I first sketch two views of ecolabelling: one that depicts ecolabelling as primarily aiming at consumer information, and another that stresses the regulatory function of labelling. I then turn to the dispute settlement reports in Tuna II and COOL in order to specify the government authority involved in many labelling schemes. I conclude this contribution with a call for a critical assessment of ecolabelling. The power of ecolabelling may be employed to reshape markets and promote green growth. At the same time, however, it may consolidate a trend that places the consumer at the centre of initiatives for societal change and loses sight of potentially more radical transformations through the engagement of human beings as citizens.
The relationship between the law of compulsory enforcement and constitutional law is a topical issue, and not only in Germany, in debates on civil procedure, constitutional law, and (procedural) legal policy, as the choice of topic for the above-mentioned annual conference of the International Association of Procedural Law (IAPL) demonstrates. This national report, written from the perspective of German (procedural) law, addresses one segment of that overall topic: under the general theme 'Constitution, Fundamental Rights and Enforcement Law', it deals in particular with the tension between the colliding fundamental rights of the judgment debtor and the judgment creditor.
Although much is known about the critical importance of active verbal rehearsal for successful recall, knowledge about the mechanisms of rehearsal and their respective development in children is very limited. To be able to rehearse several items together, these items have to be available or, if presented and rehearsed previously, retrieved from memory. Therefore, joint rehearsal of several items may itself be considered recall. Accordingly, by analyzing free recall, one can gain insight not only into how recall and rehearsal unfold, but also into how the principles that govern children's recall govern their rehearsal. Over a period of three and a half years (beginning at grade 3), 54 children were longitudinally assessed seven times on several overt-rehearsal free recall trials. A first set of analyses on recall revealed significant age-related increases in the primacy effect and an age-invariant recency effect. In the middle portion of the list, wave-shaped recall characteristics emerged and increased with age, indicating grouping of the list into subsequences. In a second set of analyses, overt rehearsal behavior was decomposed into distinct rehearsal sets. Analyses of these sets revealed that the distribution of rehearsals within each set resembled the serial position curves, with one- or two-item primacy and recency effects and wave-shaped rehearsal patterns in between. In addition, rehearsal behavior throughout the list was characterized by a decreasing tendency to begin rehearsal sets with the first list item. This result parallels the phenomenon of beginning recall with the first item on short lists and with the last item on longer lists.
Prostaglandin E2 (PGE2) favors multiple aspects of tumor development and immune evasion. Therefore, microsomal prostaglandin E synthase (mPGES-1/-2) is a potential target for cancer therapy. We explored whether inhibiting mPGES-1 in human and mouse models of breast cancer affects tumor-associated immunity. A new model of breast tumor spheroid killing by human PBMCs was developed. In this model, tumor killing required CD80 expression by tumor-associated phagocytes to trigger cytotoxic T cell activation. Pharmacological mPGES-1 inhibition increased CD80 expression, whereas addition of PGE2, a prostaglandin E2 receptor 2 (EP2) agonist, or activation of signaling downstream of EP2 reduced CD80 expression. Genetic ablation of mPGES-1 resulted in markedly reduced tumor growth in PyMT mice. Macrophages of mPGES-1-/- PyMT mice indeed expressed elevated levels of CD80 compared with their wild-type counterparts. CD80 expression in tumor-spheroid-infiltrating mPGES-1-/- macrophages translated into antigen-specific cytotoxic T cell activation. In conclusion, mPGES-1 inhibition elevates CD80 expression by tumor-associated phagocytes to restrict tumor growth. We propose that mPGES-1 inhibition combined with immune cell activation might be part of a therapeutic strategy to overcome the immunosuppressive tumor microenvironment.
The aim of this study was to assess whether endosperm-specific carotenoid biosynthesis influenced core metabolic processes in maize embryo and endosperm and how global seed metabolism adapted to this expanded biosynthetic capacity. Although enhancement of carotenoid biosynthesis was targeted to the endosperm of maize kernels, a concurrent up-regulation of sterol and fatty acid biosynthesis in the embryo was measured. Targeted terpenoid analysis, and non-targeted metabolomic, proteomic, and transcriptomic profiling revealed changes especially in carbohydrate metabolism in the transgenic line. In-depth analysis of the data, including changes of metabolite pools and increased enzyme and transcript concentrations, gave a first insight into the metabolic variation precipitated by the higher up-stream metabolite demand by the extended biosynthesis capacities for terpenoids and fatty acids. An integrative model is put forward to explain the metabolic regulation for the increased provision of terpenoid and fatty acid precursors, particularly glyceraldehyde 3-phosphate and pyruvate or acetyl-CoA from imported fructose and glucose. The model was supported by higher activities of fructokinase, glucose 6-phosphate isomerase, and fructose 1,6-bisphosphate aldolase indicating a higher flux through the glycolytic pathway. Although pyruvate and acetyl-CoA utilization was higher in the engineered line, pyruvate kinase activity was lower. A sufficient provision of both metabolites may be supported by a by-pass in a reaction sequence involving phosphoenolpyruvate carboxylase, malate dehydrogenase, and malic enzyme.
Global warming, changes in the hydrological cycle and enhanced marine primary productivity all have been invoked as having contributed to the occurrence of widespread ocean anoxia during the Cenomanian–Turonian oceanic anoxic event (OAE2; ~94 Ma), but disentangling these factors on a regional scale has remained problematic. In an attempt to separate these forcing factors, we generated palynological and organic geochemical records using a core spanning the OAE2 from Wunstorf, Lower Saxony Basin (LSB; northern Germany), which exhibits cyclic black shale–marl alternations related to the orbital precession cycle.
Despite the widely varying depositional conditions complicating the interpretation of the obtained records, TEX86H indicates that sea-surface temperature (SST) evolution in the LSB during OAE2 resembles that of previously studied sites throughout the proto-North Atlantic. Cooling during the so-called Plenus Cold Event interrupted black shale deposition during the early stages of OAE2. However, TEX86 does not vary significantly across black shale–marl alternations, suggesting that temperature variations did not force the formation of the cyclic black shale horizons. Relative (i.e., with respect to marine palynomorphs) and absolute abundances of pollen and spores are elevated during phases of black shale deposition, indicative of enhanced precipitation and run-off. High abundances of cysts from inferred heterotrophic and euryhaline dinoflagellates support high run-off, which likely introduced additional nutrients to the epicontinental shelf, resulting in elevated marine primary productivity.
We conclude that orbitally forced enhanced precipitation and run-off, in tandem with elevated marine primary productivity, were critical in cyclic black shale formation on the northern European epicontinental shelf and potentially for other OAE2 sections in the proto-Atlantic and Western Interior Seaway at similar latitudes as well.
The forest, savanna, and grassland biomes, and the transitions between them, are expected to undergo major changes in the future due to global climate change. Dynamic global vegetation models (DGVMs) are very useful for understanding vegetation dynamics under the present climate, and for predicting its changes under future conditions. However, several DGVMs display high uncertainty in predicting vegetation in tropical areas. Here we perform a comparative analysis of three different DGVMs (JSBACH, LPJ-GUESS-SPITFIRE and aDGVM) with regard to their representation of the ecological mechanisms and feedbacks that determine the forest, savanna, and grassland biomes, in an attempt to bridge the knowledge gap between ecology and global modeling. The outcomes of the models, which include different mechanisms, are compared to observed tree cover along a mean annual precipitation gradient in Africa. By drawing on the large number of recent studies that have delivered new insights into the ecology of tropical ecosystems in general, and of savannas in particular, we identify two main mechanisms that need improved representation in the examined DGVMs. The first mechanism includes water limitation to tree growth, and tree–grass competition for water, which are key factors in determining savanna presence in arid and semi-arid areas. The second is a grass–fire feedback, which maintains both forest and savanna presence in mesic areas. Grasses constitute the majority of the fuel load, and at the same time benefit from the openness of the landscape after fires, since they recover faster than trees. Additionally, these two mechanisms are better represented when the models also include tree life stages (adults and seedlings), and distinguish between fire-sensitive and shade-tolerant forest trees, and fire-resistant and shade-intolerant savanna trees.
Including these basic elements could improve the predictive ability of the DGVMs, not only under current climate conditions but also, and especially, under future scenarios.
The Tarim River basin, located in Xinjiang, NW China, is the largest endorheic river basin in China and one of the largest in all of Central Asia. Due to the extremely arid climate, with an annual precipitation of less than 100 mm, the water supply along the Aksu and Tarim rivers depends solely on river water; both anthropogenic activities (e.g., agriculture) and natural and semi-natural ecosystems compete for this water. The ongoing increase in water consumption by agriculture and other human activities in this region has been intensifying the competition for water between human needs and nature. Against this background, 11 German and 6 Chinese universities and research institutes have formed the consortium SuMaRiO (Sustainable Management of River Oases along the Tarim River; http://www.sumario.de), which aims to create a holistic picture of the availability of water resources in the Tarim River basin and of the impacts on anthropogenic activities and natural ecosystems caused by the water distribution within the basin. On the basis of the results from field studies and modeling approaches, as well as of suggestions by the relevant regional stakeholders, a decision support tool (DST) will be implemented that will assist stakeholders in balancing the competition for water, acknowledging the major external effects of water allocation to agriculture and to natural ecosystems. The consortium was formed in 2011 and is funded by the German Federal Ministry of Education and Research. As the data collection phase was finished this year, the paper presented here brings together results from the disciplines of climate modeling, cryology, hydrology, agricultural sciences, ecology, geoinformatics, and social sciences in order to present a comprehensive picture of the effects of different water availability schemes on anthropogenic activities and natural ecosystems along the Tarim River.
The second objective is to present the project structure of the whole consortium and the current status of work (i.e., major new results and findings), to explain the foundation of the decision support tool as a key product of this project, and to conclude with application recommendations for the region. The discharge of the Aksu River, which is the major tributary of the Tarim, has been increasing over the past 6 decades. From 1989 to 2011, the agricultural area more than doubled: cotton became the major crop and there was a shift from small-scale to large-scale intensive farming. The ongoing increase in irrigated agricultural land leads to an increased threat of salinization and soil degradation caused by increased evapotranspiration. Aside from agricultural land, the major natural and semi-natural ecosystems are riparian (Tugai) forests, shrub vegetation, reed beds, and other grassland, as well as urban and peri-urban vegetation. Within the SuMaRiO cluster, the focus has been set on the Tugai forests, with Populus euphratica as the dominant tree species, because these forests are among the most productive and species-rich natural ecosystems of the Tarim River basin. At sites close to the groundwater, the annual stem diameter increments of Populus euphratica correlated with the river runoff of the previous year. However, the natural river dynamics cease along the downstream course and thus hamper the recruitment of Populus euphratica. A study on the willingness to pay for the conservation of the natural ecosystems was conducted to estimate the concern of the people in the region and in China's capital. These household surveys revealed a considerable willingness to pay for the conservation of the natural ecosystems, with the mitigation of dust and sandstorms considered the most important ecosystem service. Stakeholder dialogues contributed to creating a scientific basis for sustainable management in the future.
The present thesis takes up Paul Natorp's (1907) question of what community means for education and, conversely, what education means for community, and sets out to explore it empirically. At the centre of the research interest is the relationship between school and community in the 'post-national constellation' (Habermas 1998), which the thesis approaches through an analysis of classroom communication on the topics of 'National Socialism/Holocaust' and 'multiculturalism/racism'.
To prepare the empirical study, a study of semantics first traces the pedagogical discourse on community, from the earliest precursors of community-oriented pedagogical thought to current conceptual proposals. The function and significance of the figure of community as a reference category of reflection on education in modernity are worked out. Furthermore, the reconstructed concepts of community education are examined with regard to their potential for tension with the guiding principles of democratic education.
Using sequence-analytic interpretation, the study then pursues the question of what role references to community play as a focal point of pedagogical efforts to exert influence in the classroom. At the same time, it asks whether, and in what way, classroom teaching draws on community as an enabling form for realizing its pedagogical intentions. Finally, the analysis turns its attention to how the potential challenges are handled that, under conditions of migration, accompany recourse to community in the setting of (history) lessons.
The interpretations show how classroom teaching draws on community in order to secure the conditions of its own processing. Furthermore, they reveal two contrasting types of recourse to community as a focal point of pedagogical communication on the topic of National Socialism. As a way of dealing with the migration-related challenges of history lessons on National Socialism, a tendency emerges to refer increasingly to a universalistic culture of remembrance and responsibility.
The potential for danger ascribed to the idea of community in the current debate in educational science proves to be limited in the classroom sequences examined. Rather, the observations suggest the thesis of a contained form of community education in present-day teaching practice, in which the risk that, according to its critics, is inscribed in community-education approaches is curbed by the normative self-commitments of pedagogical practice.
I’m probably not alone in observing that there seems to be an increasing number of data articles being published in the field of conflict studies and IR. Together with some colleagues, I’m even preparing one myself at the moment! Is that perceived increase in data publication actually measurable? And does it indeed amount to “drowning”?
This paper explores how banks adjust their risk-based capital ratios and asset allocations following an exogenous shock to their asset quality caused by Hurricane Katrina in 2005. We find that independent banks based in the disaster areas increase their risk-based capital ratios after the hurricane, while banks that are part of a bank holding company do not. The effect on independent banks comes mainly from the subgroup of highly capitalized banks. These banks increase their holdings of government securities and reduce their loans to non-financial firms. Hence, banks that become more stable achieve this at the cost of reduced lending.
A number of recent studies regress a "narratively" identified measure of a macroeconomic shock directly on an outcome variable. In this note, we argue that this approach can be viewed as the reduced-form regression of an instrumental variable approach in which the narrative time series is used as an instrument for an endogenous series of interest. This motivates evaluating the validity of narrative measures through the lens of a randomized experiment. We apply our framework to four recently constructed narrative measures of tax shocks by Romer and Romer (2010), Cloyne (2013), and Mertens and Ravn (2012). All of them turn out to be weak instruments for observable measures of taxes. After correcting for weak instruments, we find that using any of the considered narrative tax measures as an instrument for cyclically adjusted tax revenues yields tax multiplier estimates that are indistinguishable from zero. We conclude that the literature currently understates the uncertainty associated with quantifying the tax multiplier.
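The logic described above, treating the narrative series as an instrument and checking its strength, can be illustrated with a small, self-contained sketch on synthetic data. All variable names and the "true" multiplier of −1.5 are illustrative assumptions, not values from the note; for a single just-identified instrument, the IV estimate reduces to cov(z, y)/cov(z, x), and a first-stage F statistic below roughly 10 is the usual warning sign of a weak instrument.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Synthetic data: z is a "narrative" shock series used as an instrument
# for an endogenous tax variable x; u is an unobserved confounder.
z = rng.normal(size=n)                        # narrative instrument
u = rng.normal(size=n)                        # confounder
x = 0.8 * z + 0.5 * u + rng.normal(size=n)    # endogenous regressor
y = -1.5 * x + 0.5 * u + rng.normal(size=n)   # assumed true multiplier: -1.5

def iv_estimate(y, x, z):
    """Just-identified IV estimator: beta = cov(z, y) / cov(z, x)."""
    return np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

def first_stage_F(x, z):
    """F statistic of the first-stage regression x = a + b*z (single
    instrument), i.e. the squared t statistic of the slope."""
    zc = z - z.mean()
    b = (zc @ x) / (zc @ zc)
    resid = x - x.mean() - b * zc
    se = np.sqrt((resid @ resid) / (len(x) - 2) / (zc @ zc))
    return (b / se) ** 2

beta_hat = iv_estimate(y, x, z)   # close to -1.5 for this strong instrument
F = first_stage_F(x, z)           # here well above the rule-of-thumb 10
```

With a weakly relevant instrument (e.g. replacing 0.8 by 0.05 above), F drops below 10 and beta_hat becomes erratic, which is the situation the note diagnoses for the narrative tax measures.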
This paper studies a dynamic stochastic general equilibrium model involving climate change. Our model allows for damages to economic growth resulting from global warming. In the calibration, we capture effects from climate change and feedback effects on the temperature dynamics. We solve for the optimal state-dependent abatement policy. In our simulations, the costs of this policy, measured in terms of lost GDP growth, are moderate. On the other hand, postponing abatement action could reduce the probability that the climate can be stabilized. For instance, waiting for 10 years reduces this probability from 60% to 30%; waiting for another 10 years leads to a probability of less than 10%. Finally, doing nothing carries the risk that temperatures rise uncontrollably and economic growth decreases significantly.
The banking system is highly interconnected and these connections can be conveniently represented as an interbank network. This survey presents a systematic overview of the recent advances in the theoretical literature on interbank networks. We assess our current understanding of the structure of interbank networks, of how network characteristics affect contagion in the banking system and of how banks form connections when faced with the possibility of contagion and systemic risk. In particular, we highlight how the theoretical literature on interbank networks offers a coherent way of studying interconnections, contagion processes and systemic risk, while emphasizing at the same time the challenges that must be addressed before general results on the link between the structure of the interbank network and financial stability can be established. The survey concludes with a discussion of the policy relevance of interbank network models with a special focus on macroprudential policies and monetary policy.
Immersion freezing is the most relevant heterogeneous ice nucleation mechanism through which ice crystals are formed in mixed-phase clouds. In recent years, an increasing number of laboratory experiments utilizing a variety of instruments have examined immersion freezing activity of atmospherically relevant ice-nucleating particles. However, an intercomparison of these laboratory results is a difficult task because investigators have used different ice nucleation (IN) measurement methods to produce these results. A remaining challenge is to explore the sensitivity and accuracy of these techniques and to understand how the IN results are potentially influenced or biased by experimental parameters associated with these techniques.
Within the framework of INUIT (Ice Nuclei Research Unit), we distributed an illite-rich sample (illite NX) as a representative surrogate for atmospheric mineral dust particles to investigators to perform immersion freezing experiments using different IN measurement methods and to obtain IN data as a function of particle concentration, temperature (T), cooling rate and nucleation time. A total of 17 measurement methods were involved in the data intercomparison. Experiments with seven instruments started with the test sample pre-suspended in water before cooling, while 10 other instruments employed water vapor condensation onto dry-dispersed particles followed by immersion freezing. The resulting comprehensive immersion freezing data set was evaluated using the ice nucleation active surface-site density, ns, to develop a representative ns(T) spectrum that spans a wide temperature range (−37 °C < T < −11 °C) and covers 9 orders of magnitude in ns.
In general, the results of the 17 immersion freezing measurement techniques deviate from one another by about 8 °C in terms of temperature, corresponding to about 3 orders of magnitude in ns. In addition, we show evidence that the immersion freezing efficiency expressed in ns of illite NX particles is relatively independent of droplet size, particle mass in suspension, particle size and cooling rate during freezing. A strong temperature dependence and weak time and size dependence of the immersion freezing efficiency of illite-rich clay mineral particles enabled the ns parameterization solely as a function of temperature. We also characterized the ns(T) spectra and identified a section with a steep slope between −20 and −27 °C, where a large fraction of the active sites of our test dust may trigger immersion freezing. This slope was followed by a region with a gentler slope at temperatures below −27 °C. While the agreement between different instruments was reasonable below ~ −27 °C, there seemed to be a different trend in the temperature-dependent ice nucleation activity from the suspension and dry-dispersed particle measurements for this mineral dust, in particular at higher temperatures. For instance, the ice nucleation activity expressed in ns was smaller for the average of the wet suspended samples and higher for the average of the dry-dispersed aerosol samples between about −27 and −18 °C. Only instruments making measurements with wet suspended samples were able to measure ice nucleation above −18 °C. A possible explanation for the deviation between −27 and −18 °C is discussed. Multiple exponential distribution fits in both linear and log space are provided for both the specific surface area-based ns(T) and the geometric surface area-based ns(T). These new fits, constrained by using identical reference samples, will help to compare IN measurement methods that are not included in the present study and IN data from future IN instruments.
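The kind of exponential ns(T) parameterization mentioned above amounts to a straight-line fit of log ns against temperature. The following sketch illustrates this on synthetic data; the slope, intercept, and noise level are invented for illustration and are not the INUIT illite NX fit parameters.

```python
import numpy as np

# Synthetic ns(T) data mimicking a steep-slope regime between about
# -27 and -20 degC; ns in m^-2, T in degC. Values are illustrative only.
T = np.linspace(-27.0, -20.0, 15)
ns_true = np.exp(2.0 - 0.8 * T)          # assumed "true" exponential spectrum
rng = np.random.default_rng(1)
ns_obs = ns_true * rng.lognormal(sigma=0.2, size=T.size)  # multiplicative scatter

# Exponential fit ns(T) = exp(a + b*T), done as a linear least-squares
# fit of log(ns) against T; polyfit returns [slope, intercept].
b, a = np.polyfit(T, np.log(ns_obs), 1)

# The fitted spectrum can then be evaluated at any temperature:
ns_fit = np.exp(a + b * T)
```

A fit in log space like this weights all decades of ns equally, which matters when, as here, ns spans many orders of magnitude over the temperature range.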
Objectives: The aim of our study was to find out how much energy is applicable in second-generation dual source high-pitch computed tomography (CT) in imaging of the abdomen.
Materials and methods: We examined an upper abdominal phantom using a Somatom Definition Flash CT scanner (Siemens, Forchheim, Germany). The study protocol consisted of scan series at 100 kV and 120 kV. In each scan series we started with a pitch of 3.2 and reduced it in steps of 0.2 until a pitch of 1.6 was reached. The tube current was set to the maximum the scanner could achieve. Energy values, image noise, image quality, and radiation exposure were evaluated.
Results: At a pitch of 3.2, the maximum applicable tube current–time product was 142 mAs at 120 kV and 114 mAs at 100 kV. For conventional abdominal imaging, levels of 200 to 260 mAs are generally used. To achieve similar levels, we had to decrease the pitch to 1.8 at 100 kV; at this pitch we could perform our imaging at 204 mAs. At a pitch of 2.2 at 120 kV we could apply 206 mAs.
Conclusion: If a higher tube current–time product is needed, the pitch has to be reduced. In high-pitch dual-source CT, the pitch should therefore be adjusted to the energy needed in the body region to be imaged, so that the clinical question at hand can be answered.
This paper presents results from the "INUIT-JFJ/CLACE 2013" field campaign at the high alpine research station Jungfraujoch in January/February 2013. The chemical composition of ice particle residuals (IPR) in a size diameter range of 200–900 nm was measured in orographic, convective and non-convective clouds with a single particle mass spectrometer (ALABAMA) under ambient conditions characterized by temperatures between −28 and −4 °C and wind speeds from 0.1 to 21 km h−1. Additionally, background aerosol particles in cloud-free air were investigated. The IPR were sampled from mixed-phase clouds with two inlets which selectively extract small ice crystals in-cloud, namely the Counterflow Virtual Impactor (Ice-CVI) and the Ice Selective Inlet (ISI). The IPR as well as the aerosol particles were classified into seven different particle types: (1) black carbon, (2) organic carbon, (3) black carbon internally mixed with organic carbon, (4) minerals, (5) one particle group (termed "BioMinSal") that may contain biological particles, minerals, or salts, (6) industrial metals, and (7) lead-containing particles. For each sampled particle population, the single particle mass spectrometer was used to determine how many of the analyzed particles belonged to each of these categories. Accordingly, between 20 and 30% of the IPR and roughly 42% of the background particles contained organic carbon. The measured fractions of minerals in the IPR composition varied from 6 to 33%, while the values for the "BioMinSal" group were between 15 and 29%. Between 4 and 31% of the IPR contained organic carbon mixed with black carbon. Both inlets delivered similar results for the chemical composition and the particle size distribution, although lead was found only in the IPR sampled by the Ice-CVI.
The results show that the ice particle residual composition varies substantially between different cloud events, which indicates the influence of different meteorological conditions, such as origin of the air masses, temperature and wind speed.
Irrigation intensifies land use by increasing crop yield but also impacts water resources. It affects water and energy balances and consequently the microclimate in irrigated regions. Therefore, knowledge of the extent of irrigated land is important for hydrological and crop modelling, global change research, and assessments of resource use and management. Information on the historical evolution of irrigated lands is limited. The new global historical irrigation data set (HID) provides estimates of the temporal development of the area equipped for irrigation (AEI) between 1900 and 2005 at 5 arcmin resolution. We collected sub-national irrigation statistics from various sources and found that the global extent of AEI increased from 63 million ha (Mha) in 1900 to 111 Mha in 1950 and 306 Mha in 2005. We developed eight gridded versions of time series of AEI by combining sub-national irrigation statistics with different data sets on the historical extent of cropland and pasture. Different rules were applied to maximize consistency of the gridded products to sub-national irrigation statistics or to historical cropland and pasture data sets. The HID reflects very well the spatial patterns of irrigated land as shown on historical maps for the western United States (around year 1900) and on a global map (around year 1960). Mean aridity on irrigated land increased and mean natural river discharge on irrigated land decreased from 1900 to 1950 whereas aridity decreased and river discharge remained approximately constant from 1950 to 2005. The data set and its documentation are made available in an open-data repository at https://mygeohub.org/publications/8 (doi:10.13019/M20599).
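Combining sub-national irrigation statistics with gridded cropland data, as described above, amounts to a spatial allocation problem. The following is a minimal sketch of one plausible allocation rule, distributing a sub-national AEI total over grid cells in proportion to cropland and capping each cell at its cropland extent; this is an illustrative assumption, not the exact HID procedure, and the numbers are invented.

```python
import numpy as np

def allocate_aei(aei_total, cropland):
    """Allocate aei_total (ha) over grid cells in proportion to their
    cropland area (ha per cell), never exceeding the cropland in a cell.
    Any amount blocked by the cap is re-distributed over remaining cells."""
    alloc = np.zeros_like(cropland, dtype=float)
    free = cropland.astype(float).copy()   # remaining capacity per cell
    remaining = float(aei_total)
    while remaining > 1e-9 and free.sum() > 1e-9:
        share = free / free.sum() * remaining   # proportional shares
        add = np.minimum(share, free)           # respect the cropland cap
        alloc += add
        free -= add
        remaining -= add.sum()
    return alloc

# Toy example: three cells with 100, 50 and 10 ha of cropland,
# and a sub-national AEI total of 120 ha to distribute.
cells = np.array([100.0, 50.0, 10.0])
aei = allocate_aei(120.0, cells)
```

A rule of this shape maximizes consistency with the sub-national statistic (the allocated total matches it whenever enough cropland exists) while staying consistent with the cropland map, which mirrors the trade-off between the eight gridded HID versions described above.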
During January/February 2013, a measurement campaign centered on atmospheric ice-nucleating particles (INP) and ice particle residuals (IPR) was carried out at the High Alpine Research Station Jungfraujoch. Three different techniques for separating INP and IPR from the non-ice-active particles are compared. The Ice Selective Inlet (ISI) and the Ice Counterflow Virtual Impactor (Ice-CVI) sample ice particles from mixed-phase clouds and allow for the analysis of the residuals. The combination of the Fast Ice Nucleus Chamber (FINCH) and the Ice Nuclei Pumped Counterflow Virtual Impactor (IN-PCVI) provides ice-activating conditions to aerosol particles and extracts the activated INP for analysis. Collected particles were analyzed by scanning electron microscopy and energy-dispersive X-ray microanalysis to determine size, chemical composition and mixing state. All INP/IPR-separating techniques had considerable abundances (median 20–70%) of instrumental contamination artifacts (ISI: Si-O spheres, probably calibration aerosol; Ice-CVI: Al-O particles; FINCH+IN-PCVI: steel particles). Also, potential sampling artifacts (e.g., pure soluble material) occurred with a median abundance of < 20%. While these could be explained as IPR formed by ice break-up, for INP their ice-nucleating pathway is less clear. After removal of the contamination artifacts, silicates and Ca-rich particles, carbonaceous material and metal oxides were the major INP/IPR particle types separated by all three techniques. Soot was a minor contributor. Lead was detected in less than 10% of the particles, of which the majority were internal mixtures with other particle types. Sea salt and sulfates were identified by all three methods as INP/IPR. Most samples showed a maximum of the INP/IPR size distribution at 400 nm geometric diameter. In a few cases, a second super-micron maximum was identified. Soot/carbonaceous material and metal oxides were present mainly in the submicron range.
ISI and FINCH yielded silicates and Ca-rich particles mainly with diameters above 1 μm, while the Ice-CVI also separated many submicron IPR. As strictly parallel sampling could not be performed, part of the discrepancies between the different techniques may result from variations in meteorological conditions and the subsequent INP/IPR composition. The observed differences in the particle group abundances, as well as in the mixing state of INP/IPR, point to the need for further studies to better understand the influence of the separating techniques on the INP/IPR chemical composition.
The ancestors of the Australian marsupials entered Australia around 60 (54-72) million years ago from Antarctica, and radiated into the four living orders Peramelemorphia, Dasyuromorphia, Diprotodontia and Notoryctemorphia. The relationship between the four Australian marsupial orders has been a long-standing question, because different phylogenetic studies were not able to consistently reconstruct the same topology. Initial in silico analysis of the Tasmanian devil genome and experimental screening in the seven marsupial orders revealed 20 informative transposable element insertions for resolving the inter- and intraordinal relationships of Australian and South American orders. However, the retrotransposon insertions support three conflicting topologies regarding Peramelemorphia, Dasyuromorphia and Notoryctemorphia, indicating that the split between the three orders may be best understood as a network. This finding is supported by a phylogenetic re-analysis of nuclear gene sequences, using a consensus network approach that allows depicting hidden phylogenetic conflict that would otherwise be lost when forcing the data into a bifurcating tree. The consensus network analysis agrees with the transposable element analysis in that all possible topologies regarding Peramelemorphia, Dasyuromorphia, and Notoryctemorphia in a rooted four-taxon topology are equally well supported. In addition, the retrotransposon insertion data support the South American order Didelphimorphia as the sister group to all other living marsupial orders. The four Australian orders originated within three million years around the Cretaceous-Paleogene boundary. The rapid divergences left conflicting phylogenetic information in the genome, possibly generated by incomplete lineage sorting or introgressive hybridisation, leaving the relationships among the Australian marsupial orders unresolvable as a bifurcating process even millions of years later.
In the course of the (renewed) attention that a wide range of IR theories pay to postcolonial problems, interest among 'northern' IR scholars in dialogue and engagement with the IR communities of the 'global South' has grown. This contribution aims to map the opportunities and obstacles of such a dialogue for the case of the South American IR communities. It sets out to provide, as it were, a 'user's manual' that makes it easier for the interested outsider to understand the interests, theoretical preferences, and concrete working conditions of South American IR scholars. This is done, of course, while emphasizing the incompleteness of the description and the observer-dependence of the perspective.
This week is not devoted to #varoufake, quite simply because there are more important things than the German Michel foaming at the mouth. Instead, we have collected debates on conflict studies and options for overseeing them, news on Venezuela, postcolonial reflections, information on PPP projects, and another ISA recap for you. And #blockupy is not to be missed either. Enjoy!
In the course of the Ukraine conflict, the West has imposed sanctions on Russia. Yet, given the absence of any change in behaviour, the usefulness of the sanctions is being called into question. Even though falling oil revenues may be more likely to show an effect, we must not forget that sanctions also have a symbolic effect in upholding existing law. The West should therefore stick to them.
Stimulation of a principal whisker yields sparse action potential (AP) spiking in layer 2/3 (L2/3) pyramidal neurons in a cortical column of rat barrel cortex. The low AP rates in pyramidal neurons could be explained by activation of interneurons in L2/3 providing inhibition onto L2/3 pyramidal neurons. L2/3 interneurons classified as local inhibitors based on their axonal projection in the same column were reported to receive strong excitatory input from spiny neurons in L4, which are also the main source of the excitatory input to L2/3 pyramidal neurons. Here, we investigated the remaining synaptic connection in this intracolumnar microcircuit. We found strong and reliable inhibitory synaptic transmission between intracolumnar L2/3 local-inhibitor-to-L2/3 pyramidal neuron pairs [inhibitory postsynaptic potential (IPSP) amplitude -0.88 ± 0.67 mV]. On average, 6.2 ± 2 synaptic contacts were made by L2/3 local inhibitors onto L2/3 pyramidal neurons at 107 ± 64 µm path distance from the pyramidal neuron soma, thus overlapping with the distribution of synaptic contacts from L4 spiny neurons onto L2/3 pyramidal neurons (67 ± 34 µm). Finally, using compartmental simulations, we determined the synaptic conductance per synaptic contact to be 0.77 ± 0.4 nS. We conclude that the synaptic circuit from L4 to L2/3 can provide efficient shunting inhibition that is temporally and spatially aligned with the excitatory input from L4 to L2/3.
The study of large-scale functional interactions in the human brain with functional magnetic resonance imaging (fMRI) dates back almost to the first applications of this technology. Due to historical reasons and preconceptions about the limitations of this brain imaging method, most studies have focused on assessing connectivity over extended periods of time. It is now clear that fMRI can resolve the temporal dynamics of functional connectivity, like other faster imaging techniques such as electroencephalography and magnetoencephalography (albeit on a different temporal scale). However, the indirect nature of fMRI measurements can hinder the interpretability of the results. After briefly summarizing recent advances in the field, we discuss how the simultaneous combination of fMRI with electrophysiological activity measurements can contribute to a better understanding of dynamic functional connectivity in humans, during both rest and task, in wakefulness and in other brain states.
Background: Bats belong to one of the most species-rich orders within the Mammalia. They show a worldwide distribution, a high degree of ecological diversification, as well as a high diversity of associated parasites and pathogens. Despite their prominent and unique role, knowledge of their parasite-host relationships, as well as of the mechanisms of co-evolutionary processes, is scarce, partly due to strict conservation regulations.
Methods: Juvenile specimens of the greater mouse-eared bat (Myotis myotis) from a roosting colony in Gladenbach (Hesse, Germany) were examined for metazoan endo- and ectoparasite infections and for pathogens. Morphometric data were recorded, and the individuals were checked for Lyssavirus-specific antigen using a direct immunofluorescence test. For unambiguous species identification, the bats were analysed by cyt-b sequence comparison.
Results: Myotis myotis were parasitized by six insect and arachnid ectoparasite species: Ixodes ricinus, Ischnopsyllus octactenus, Ichoronyssus scutatus, Steatonyssus periblepharus, Spinturnix myoti and Cimex dissimilis. Additionally, the nematode Molinostrongylus alatus and the cestode Vampirolepis balsaci were recorded. Each bat was parasitized by at least four species. The parasites in part reached extreme, previously unrecorded infestation rates, with more than 1,440 parasites on a single host. Ichoronyssus scutatus, Steatonyssus periblepharus, Vampirolepis balsaci and Molinostrongylus alatus are recorded for the first time in Germany. A checklist for Europe is presented, containing records of 98 parasite species from 14 Myotis species.
Conclusions: The Myotis myotis from Gladenbach (Hesse, Germany) were parasitized by a diverse parasite fauna at high infestation rates. We assume that the number of parasites is generally higher in juvenile than in adult Myotis because immune competence and behavioural adaptations are acquired only later in life. Our results reveal new insights into the parasite fauna of M. myotis and of European bats in general. The finding of endoparasitic cyclophyllidean cestodes, which have a two-host life cycle, is rather unusual given the stationary behaviour of the juvenile bats and suggests a non-predatory transmission mechanism (e.g. via autoinfection).
A further insight gained from the collated literature is that the Europe-wide composition of the Myotis parasite fauna is dominated by a few specific taxonomic groups.
Background: The presence of the recently introduced primary dengue virus vector mosquito Aedes aegypti in Nepal, in association with the likely indigenous secondary vector Aedes albopictus, raises public health concerns. Chikungunya fever cases have also been reported in Nepal, and the virus causing this disease is also transmitted by these mosquito species. Here we report the results of a study on the risk factors for the presence of chikungunya and dengue virus vectors, their elevational ceiling of distribution, and climatic determinants of their abundance in central Nepal.
Methodology/Principal findings: We collected immature stages of mosquitoes during six monthly cross-sectional surveys covering six administrative districts along an altitudinal transect in central Nepal that extended from Birgunj (80 m above sea level [asl]) to Dhunche (highest altitude sampled: 2,100 m asl). The dengue vectors Ae. aegypti and Ae. albopictus were commonly found up to 1,350 m asl in Kathmandu valley and were present, but rarely found, from 1,750 to 2,100 m asl in Dhunche. The lymphatic filariasis vector Culex quinquefasciatus was commonly found throughout the study transect. Physiographic region, month of collection, collection station and container type were significant predictors of the occurrence and co-occurrence of Ae. aegypti and Ae. albopictus. The climatic variables rainfall, temperature, and relative humidity were significant predictors of chikungunya and dengue virus vector abundance.
Conclusions/Significance: We conclude that chikungunya and dengue virus vectors have already established their populations up to the High Mountain region of Nepal and that this may be attributed to the environmental and climate change that has been observed over the decades in Nepal. The rapid expansion of the distribution of these important disease vectors in the High Mountain region, previously considered to be non-endemic for dengue and chikungunya fever, calls for urgent actions to protect the health of local people and tourists travelling in the central Himalayas.
Saphenous vein graft disease is a pressing problem in coronary artery bypass grafting. After exposure of the vein to arterial blood flow, a progressive modification of the wall begins, driven by proliferation of smooth muscle cells in the intima. As a consequence, the graft progressively occludes, leading to recurrent ischemia. In the present study we employed a novel ex vivo culture system to assess the biological effects of arterial-like pressure on human saphenous vein structure and physiology, and compared the results with those obtained under a constant low pressure and flow mimicking physiologic venous perfusion. While under both conditions we found activation of Matrix Metallo-Proteases 2/9 and of microRNAs-21/146a/221, a specific effect of the arterial-like pressure was observed. It consisted of marked geometrical remodeling, suppression of Tissue Inhibitor of Metallo-Protease-1, enhanced expression of TGF-β1 and BMP-2 mRNAs and, finally, upregulation of microRNAs-138/200b/200c. In addition, veins exposed to arterial-like pressure showed an increased density of adventitial vasa vasorum and of cells co-expressing the NG2, CD44 and SM22α markers in the adventitia. Cells with nuclear expression of Sox-10, a transcription factor characterizing multipotent vascular stem cells, were also found in adventitial vessels. Our findings suggest, for the first time, a role of arterial-like wall strain in activating pro-pathologic pathways that result in adventitial vessel growth, activation of vasa vasorum cells, and upregulation of specific gene products associated with vascular remodeling and inflammation.
The degradation of natural forests to modified forests threatens subtropical and tropical biodiversity worldwide. Yet, species responses to forest modification vary considerably. Furthermore, effects of forest modification can differ, whether with respect to diversity components (taxonomic or phylogenetic) or to local (α-diversity) and regional (β-diversity) spatial scales. This real-world complexity has so far hampered our understanding of subtropical and tropical biodiversity patterns in human-modified forest landscapes. In a subtropical South African forest landscape, we studied the responses of three successive plant life stages (adult trees, saplings, seedlings) and of birds to five different types of forest modification distinguished by the degree of within-forest disturbance and forest loss. Responses of the two taxa differed markedly. For example, the taxonomic α-diversity of birds was negatively correlated with the diversity of all plant life stages and, contrary to plant diversity, increased with forest disturbance. Conversely, forest disturbance reduced the phylogenetic α-diversity of all plant life stages but not that of birds. Forest loss affected neither the taxonomic nor the phylogenetic diversity of either taxon. On the regional scale, taxonomic but not phylogenetic β-diversity of both taxa was well predicted by variation in forest disturbance and forest loss. In contrast to adult trees, the phylogenetic diversity of saplings and seedlings showed signs of contemporary environmental filtering. In conclusion, forest modification in this subtropical landscape strongly shaped both local and regional biodiversity, but with contrasting outcomes. Phylogenetic diversity of plants may be more threatened than that of mobile species such as birds. The reduced phylogenetic diversity of saplings and seedlings suggests losses in biodiversity that are not visible in adult trees, potentially indicating time-lags and contemporary shifts in forest regeneration.
The different responses of taxonomic and phylogenetic diversity to forest modifications imply that biodiversity conservation in this subtropical landscape requires the preservation of natural and modified forests.
Simultaneous and dose dependent melanoma cytotoxic and immune stimulatory activity of betulin
(2015)
Conventional cytostatic cancer treatments rarely result in the complete eradication of tumor cells. Therefore, new therapeutic strategies focus on antagonizing the immunosuppressive activity of established tumors. In particular, recent studies of antigen-loaded dendritic cells (DCs) eliciting a specific antitumor immune response have raised hopes of achieving the complete elimination of tumor tissue. Genistein, fingolimod and betulin have already been described as active compounds in different types of cancer. Herein, we applied an integrated screening approach to characterize both their cytostatic and their immune-modulating properties side by side. As described in detail, our data confirmed that all three compounds exerted proapoptotic and antiproliferative activity in different B16 melanoma cell lines to varying extents, as revealed by an MTT assay and CFSE and DAPI staining. However, while genistein and fingolimod also affected the survival of primary bone marrow (BM)-derived DCs of C57BL/6 mice, betulin exhibited lower cytotoxicity for BMDCs than for the melanoma cells. Moreover, we show for the first time that only betulin caused a simultaneous, highly specific immune-stimulating activity, measured by ELISA as the IL-12p70 release of Toll-like receptor 4-stimulated BMDCs, which was due to increased IL-12p35 mRNA expression. Interestingly, the activation of DCs resulted in enhanced T-lymphocyte stimulation, indicated by increased IL-2 and IFN-γ production of cytotoxic T cells in spleen cell co-culture assays, which led to decreased viability of B16 cells in an antigen-specific model system. This may overcome the immunosuppressive environment of a tumor and destroy tumor cells more effectively in vivo if the immune response is specifically targeted against the tumor tissue by antigen-loaded dendritic cells.
In summary, cytostatic agents, such as betulin, that simultaneously exhibit immune stimulatory activity may serve as lead compounds and hold great promise as a novel approach for an integrated cancer therapy.
African trypanosomes cause a parasitic disease known as sleeping sickness. Mitochondrial transcript maturation in these organisms requires an RNA editing reaction that is characterized by the insertion and deletion of U-nucleotides into otherwise non-functional mRNAs. Editing represents an ideal target for a parasite-specific therapeutic intervention since the reaction cycle is absent in the infected host. In addition, editing relies on a macromolecular protein complex, the editosome, that only exists in the parasite. Therefore, all attempts to search for editing-interfering compounds have focused on molecules that bind to proteins of the editing machinery. However, in analogy to other RNA-driven biochemical pathways, it should be possible to stall the reaction by targeting its substrate RNAs. Here we demonstrate inhibition of editing by specific aminoglycosides. The molecules bind into the major groove of the gRNA/pre-mRNA editing substrates, thereby stabilizing the RNA molecules through charge compensation and increased stacking. The data shed light on mechanistic details of the editing process and identify critical parameters for the development of new trypanocidal compounds.
The involvement of the ubiquitin-proteasome system (UPS) in the course of various age-associated neurodegenerative diseases is well established. The single RING finger type E3 ubiquitin-protein ligase PARK2 is mutated in a Parkinson’s disease (PD) variant and was found to interact with ATXN2, a protein in which polyglutamine expansions cause Spinocerebellar ataxia type 2 (SCA2) or increase the risk for Levodopa-responsive PD and for the motor neuron disease Amyotrophic lateral sclerosis (ALS). We previously reported evidence for a transcriptional induction of FBXW8, a component of the multi-subunit RING finger Skp1/Cul/F-box (SCF) type E3 ubiquitin-protein ligase complex, in global microarray profiling of ATXN2-expansion mouse cerebellum and demonstrated its role in ATXN2 degradation in vitro. Now, we documented co-localization in vitro and co-immunoprecipitations both in vitro and in vivo, which indicate associations of FBXW8 with ATXN2 and PARK2. Both FBXW8 and PARK2 proteins are driven into insolubility by expanded ATXN2. Whereas the FBXW8 transcript upregulation by ATXN2-expansion was also confirmed in qPCR of skin fibroblasts and blood samples of SCA2 patients, no FBXW8 expression dysregulation was observed in ATXN2-deficient mice, nor was a PARK2 transcript dysregulation observed in any samples. Jointly, all available data suggest that the degradation of wildtype and mutant ATXN2 is dependent on FBXW8, and that ATXN2 accumulation selectively modulates FBXW8 levels, while PARK2 might act indirectly through FBXW8. The effects of ATXN2-expansions on FBXW8 expression in peripheral tissues like blood may become useful for clinical diagnostics.
Intrinsic motivations drive the acquisition of knowledge and skills on the basis of novel or surprising stimuli or the pleasure of learning new skills. In so doing, they differ from extrinsic motivations, which are mainly linked to drives that promote survival and reproduction. Intrinsic motivations have been implicitly exploited in several psychological experiments but, due to the lack of proper paradigms, they are rarely a direct subject of investigation. This article investigates how different intrinsic motivation mechanisms can support the learning of visual skills, such as "foveate a particular object in space", using a gaze-contingency paradigm. In the experiment, participants could freely foveate objects shown on a computer screen. Foveating each of two “button” pictures caused a different effect: one caused the appearance of a simple image (a blue rectangle) in unexpected positions, while the other evoked the appearance of an always-novel picture (objects or animals). The experiment studied how two possible intrinsic motivation mechanisms might guide learning to foveate one or the other button picture. One mechanism is based on the sudden, surprising appearance of a familiar image at unpredicted locations; the second is based on the content novelty of the images. The results show the comparative effectiveness of the mechanism based on image novelty, whereas they do not support the operation of the mechanism based on the surprising location of the image appearance. Interestingly, these results were also obtained with participants who, according to a post-experiment questionnaire, had not understood the functions of the different buttons, suggesting that novelty-based intrinsic motivation mechanisms might operate even at an unconscious level.
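The novelty-based mechanism this abstract finds effective can be sketched as a simple count-based novelty reward guiding the choice between two options. This is a hypothetical illustration under assumed parameters, not the model or analysis used in the study; the class and variable names are invented.

```python
import random

# Hypothetical sketch of a count-based novelty bonus guiding the
# choice between two "buttons": one always shows the same familiar
# image, the other an always-novel one. Not code from the study.

class NoveltyLearner:
    def __init__(self, n_options=2, lr=0.2):
        self.values = [0.0] * n_options  # learned value of fixating each button
        self.seen = set()                # images encountered so far
        self.lr = lr

    def novelty_reward(self, image_id):
        """Return 1.0 for a never-seen image, 0.0 otherwise."""
        r = 0.0 if image_id in self.seen else 1.0
        self.seen.add(image_id)
        return r

    def update(self, option, image_id):
        # Simple delta-rule update toward the intrinsic reward.
        r = self.novelty_reward(image_id)
        self.values[option] += self.lr * (r - self.values[option])

random.seed(0)
learner = NoveltyLearner()
for t in range(200):
    option = random.choice([0, 1])
    # Button 0 repeats the familiar image; button 1 shows a new one.
    image_id = "rectangle" if option == 0 else f"novel_{t}"
    learner.update(option, image_id)

# The always-novel button acquires the higher learned value.
print(learner.values)
```

Under this kind of rule the familiar button is rewarding only once, so its value decays toward zero, while the always-novel button keeps receiving the novelty bonus; this captures, in miniature, why content novelty can sustain gaze-contingent learning where a merely surprising location cannot.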
kurz und kn@pp news : Nr. 33
(2015)