The extinction of conditioned fear depends on an efficient interplay between the amygdala and the medial prefrontal cortex (mPFC). In rats, high-frequency electrical mPFC stimulation has been shown to improve extinction by means of a reduction of amygdala activity. However, so far it is unclear whether stimulation of homologous regions in humans might have similar beneficial effects. Healthy volunteers received one session of either active or sham repetitive transcranial magnetic stimulation (rTMS) covering the mPFC while undergoing a 2-day fear conditioning and extinction paradigm. Repetitive TMS was applied offline after fear acquisition, in which one of two faces (CS+ but not CS−) was associated with an aversive scream (UCS). Immediate extinction learning (day 1) and extinction recall (day 2) were conducted without UCS delivery. Conditioned responses (CR) were assessed in a multimodal approach using fear-potentiated startle (FPS), skin conductance responses (SCR), functional near-infrared spectroscopy (fNIRS), and self-report scales. Consistent with the hypothesis of a modulated processing of conditioned fear after high-frequency rTMS, the active group showed reduced CS+/CS− discrimination during extinction learning, as evident in FPS as well as in SCR and arousal ratings. FPS responses to the CS+ further showed a linear decrement throughout both extinction sessions. This study describes the first experimental approach to influencing conditioned fear by using rTMS and can thus be a basis for future studies investigating mPFC stimulation as a complement to cognitive behavioral therapy (CBT).
This paper provides a systematic analysis of individual attitudes towards ambiguity, based on laboratory experiments. The design of the analysis allows us to capture individual behavior across various levels of ambiguity, ranging from low to high. Attitudes towards risk and attitudes towards ambiguity are disentangled, providing pure measures of ambiguity aversion. Ambiguity aversion is captured in several ways, namely as a discount factor net of a risk premium and as an estimated parameter in a generalized utility function. We find that ambiguity aversion varies across individuals and with the level of ambiguity, being most prominent for intermediate levels. Around one third of subjects show no aversion, one third show maximum aversion, and one third show intermediate levels of ambiguity aversion, while there is almost no ambiguity seeking. While most theoretical work on ambiguity builds on maxmin expected utility (MEU), our results provide evidence that MEU does not adequately capture individual attitudes towards ambiguity for the majority of individuals. Instead, our results support models that allow for intermediate levels of ambiguity aversion. Moreover, we find risk aversion to be statistically unrelated to ambiguity aversion on average. Taken together, the results support the view that ambiguity is an important and distinct argument in decision making under uncertainty.
The n_TOF facility operates at CERN with the aim of addressing the need for high-accuracy nuclear data for advanced nuclear energy systems as well as for nuclear astrophysics. Thanks to the features of the neutron beam, important results have been obtained on neutron-induced fission and capture cross sections of U, Pu and minor actinides. Recently, the construction of a second beam line has started; the new line will be complementary to the first one, making it possible to further extend the experimental program foreseen for the next measurement campaigns.
We present the results of two-pion production in tagged quasi-free np collisions at a deuteron incident beam energy of 1.25 GeV/c measured with the High-Acceptance Di-Electron Spectrometer (HADES) installed at GSI. The specific acceptance of HADES allowed high-precision data on π+π− and π−π0 production in np collisions to be obtained for the first time in a region corresponding to large transverse momenta of the secondary particles. The obtained differential cross section data provide strong constraints on the production mechanisms and on the various baryon resonance contributions (∆∆, N(1440), N(1520), ∆(1600)). The invariant mass and angular distributions from the np → npπ+π− and np → ppπ−π0 reactions are compared with different theoretical model predictions.
The elements in the universe are mainly produced by charged-particle fusion reactions and neutron-capture reactions. About 35 proton-rich isotopes, the p-nuclei, cannot be produced via neutron-induced reactions. To date, nucleosynthesis simulations of possible production sites fail to reproduce the p-nuclei abundances observed in the solar system. In particular, the origin of the light p-nuclei 92Mo, 94Mo, 96Ru and 98Ru is poorly understood. The nucleosynthesis simulations rely on assumptions about the seed abundance distributions, the nuclear reaction network and the astrophysical environment. This work addressed the nuclear data input.
The key reaction 94Mo(γ,n) for the production ratio of the p-nuclei 92Mo and 94Mo was investigated via Coulomb dissociation at the LAND/R3B setup at the GSI Helmholtzzentrum für Schwerionenforschung in Darmstadt, Germany. A beam of 94Mo with an energy of 500 AMeV was directed onto a lead target. The neutron-dissociation reactions following the Coulomb excitation by virtual photons of the electromagnetic field of the target nucleus were investigated. All particles in the incoming and outgoing channels of the reaction were identified and their kinematics determined in a complex analysis. The systematic uncertainties were analyzed by calculating the cross sections for all possible combinations of the data selection criteria. The integral Coulomb dissociation cross section of the reaction 94Mo(γ,n) was determined to be (571 ± 14 (stat) ± 46 (syst)) mb. The result was compared to data obtained in a real-photon experiment carried out at the Saclay linear accelerator. The ratio of the integral cross sections was found to be 0.63 ± 0.07, which is lower than the expected value of about 0.8.
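To make the quoted systematic-uncertainty procedure concrete, here is a minimal sketch, not the actual LAND/R3B analysis code: the cross section is recomputed for every combination of data-selection criteria, and the spread of the results is taken as the systematic error. The criteria names and all numbers are invented placeholders.

```python
import itertools
import numpy as np

# Hypothetical selection criteria, each with alternative settings.
criteria = {
    "charge_cut":  [0.9, 1.0, 1.1],
    "mass_window": [2.0, 2.5],
    "track_chi2":  [3.0, 5.0],
}

def cross_section(combo):
    """Placeholder: a real implementation would count the events passing
    this cut combination and divide by luminosity and efficiency."""
    rng = np.random.default_rng(abs(hash(combo)) % 2**32)
    return 571.0 + rng.normal(0.0, 30.0)  # dummy values around the result

# Evaluate sigma for every combination of the selection criteria.
sigmas = [cross_section(combo)
          for combo in itertools.product(*criteria.values())]

# The spread over all combinations serves as the systematic uncertainty.
print(f"sigma = {np.mean(sigmas):.0f} mb, syst = {np.std(sigmas):.0f} mb")
```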
The nucleosynthesis of the light p-nuclei 92Mo, 94Mo, 96Ru and 98Ru was investigated in post-processing nucleosynthesis simulations within the NuGrid research platform. The impact of the rate uncertainties of the most important production and destruction reactions was studied for a type II supernova model. It could be shown that the light p-nuclei are mainly produced via neutron-dissociation reactions on heavier nuclei in the isotopic chains, and that the final abundances of these p-nuclei are determined by their main destruction reactions. The nucleosynthesis of 92Mo and 94Mo was also studied in different environments of a type Ia supernova model. It was concluded that the maximum temperature and the duration of the high-temperature phase determine the final abundances of 92Mo and 94Mo.
A measurement of the transverse momentum spectra of jets in Pb-Pb collisions at √sNN = 2.76 TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R of 0.2 and 0.3 in pseudorapidity |η|<0.5. The transverse momentum pT of charged particles is measured down to 0.15 GeV/c, which gives access to the low-pT fragments of the jet. Jets found in heavy-ion collisions are corrected event-by-event for average background density and on an inclusive basis (via unfolding) for residual background fluctuations and detector effects. A strong suppression of jet production in central events with respect to peripheral events is observed. The suppression is found to be similar to the suppression of charged hadrons, which suggests that substantial energy is radiated at angles larger than the jet resolution parameter R=0.3 considered in the analysis. The fragmentation bias introduced by selecting jets with a high-pT leading particle, which rejects jets with a soft fragmentation pattern, has a similar effect on the jet yield for central and peripheral events. The ratio of jet spectra with R=0.2 and R=0.3 is found to be similar in Pb-Pb and simulated PYTHIA pp events, indicating no strong broadening of the radial jet structure in the reconstructed jets with R<0.3.
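As a minimal numeric illustration of the event-by-event background subtraction described above (pT_corr = pT_raw − ρ·A; invented placeholder numbers, not ALICE software):

```python
import numpy as np

# Hypothetical reconstructed jets: raw transverse momentum (GeV/c) and
# jet area A from the anti-kT clustering (placeholder values).
jets = np.array([(62.4, 0.13), (41.7, 0.28), (18.9, 0.12)],
                dtype=[("pt_raw", "f8"), ("area", "f8")])

rho = 138.0  # average background density of this event, GeV/c per unit area

# Event-by-event correction; residual fluctuations and detector effects
# are then handled on an inclusive basis via unfolding (not shown).
pt_corr = jets["pt_raw"] - rho * jets["area"]
print(pt_corr)
```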
This thesis is structured into 7 chapters:
• Chapter 2 gives an overview of the interaction of ultrashort, high-intensity laser pulses with matter. The laser interaction with an induced plasma is described, starting from the kinematics of single-electron motion, followed by collective electron effects, the ponderomotive motion in the laser focus, and the plasma transparency for the laser beam. The three different mechanisms by which electrons are accelerated and propagated through matter are discussed. The subsequent indirect acceleration of protons is explained by the Target Normal Sheath Acceleration (TNSA) mechanism. Finally, some possible applications of laser-accelerated protons are briefly described.
• Chapter 3 deals with the modeling of the geometry and field map of the magnetic lens. Initial proton and electron distributions, fitted to measured PHELIX data, are generated; a brief description of the codes and simulation techniques employed is given; and the aberrations at the solenoid focal spot are studied.
• Chapter 4 presents a simulation study of suggested corrections to optimize the proton beam for use as a beam source in later applications (a small numeric sketch of the energy-selection idea follows this chapter overview). Two tools have been employed in these suggested corrections: an aperture placed at the solenoid focal spot as an energy-selection tool, and a scattering foil placed in the proton beam to smooth the radial energy correlation of the beam profile at the focal spot caused by chromatic aberrations. A further suggested correction has been investigated, namely optimizing the beam radius at the focal spot by controlling the lens geometry.
• Chapter 5 presents a simulation study of the de-neutralization problem in TNSA caused by the fringing fields of the pulsed magnetic solenoid and quadrupole. In this simulation we followed an electrostatic model, in which the evolution of both the self and mutual fields through the pulsed magnetic solenoid can be found; this is not the case for the quadrupole, where only the growth of the self fields can be determined. The field maps of the magnetic elements are generated with a Matlab program, while the TraceWin code is employed to study the tracking through the magnetic elements.
• Chapter 6 describes the PHELIX laser parameters at GSI with the chirped-pulse amplification (CPA) technique, and Gafchromic radiochromic film (RCF) as a spatially and energy-resolving film detector. The results of the laser proton acceleration experiments, which were performed in two experimental areas at GSI (the Z6 area and the PHELIX Laser Hall (PLH)), are presented in section 6.3.
• Chapter 7 summarizes the main results of this work, draws conclusions, and gives a perspective on future experimental activities.
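The following is a small numeric sketch of the energy-selection idea from Chapter 4, under the assumptions of a TNSA-like exponential proton spectrum and a chromatically focusing solenoid treated as a simple energy band-pass at the aperture; all values are placeholders, not PHELIX parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
energies = rng.exponential(scale=4.0, size=100_000)  # proton energies, MeV

E_design = 10.0    # energy focused onto the aperture by the solenoid (MeV)
acceptance = 0.05  # relative energy acceptance of the aperture (+-5%)

# Chromatic focusing means only protons near the design energy are focused
# through the aperture; all others are blocked.
selected = energies[np.abs(energies - E_design) < acceptance * E_design]
print(f"transmitted fraction: {selected.size / energies.size:.4f}")
```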
Mathematical modeling of Arabidopsis thaliana with focus on network decomposition and reduction
(2014)
Systems biology has become an important research field during the last decade. It focuses on understanding the systems that generate the measured data. An important part of this research field is network analysis, the investigation of biological networks. An essential step in working with these network models is their validation, i.e., the successful comparison of predicted properties to measured data. Here, Petri nets in particular have proven useful as a modeling technique, offering sound analysis methods and an intuitive representation of biological network data.
A very important tool for network validation is the analysis of the transition invariants (TI), which represent possible steady-state pathways, and the investigation of the liveness property. The computational complexity of determining both the TI and the liveness property often hampers their investigation.
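To make the notion of a transition invariant concrete, here is a minimal sketch (a toy example, not from the thesis): a TI is a non-negative integer vector x with C·x = 0, where C is the places-by-transitions incidence matrix of the Petri net.

```python
from sympy import Matrix

# Toy Petri net: a 3-step cycle t1 -> t2 -> t3 over places P1..P3.
C = Matrix([
    [-1,  0,  1],   # place P1: consumed by t1, produced by t3
    [ 1, -1,  0],   # place P2: produced by t1, consumed by t2
    [ 0,  1, -1],   # place P3: produced by t2, consumed by t3
])

# The nullspace spans all TI candidates; dedicated algorithms additionally
# enforce non-negativity and minimality, which is where the combinatorial
# complexity mentioned in the text comes from.
for v in C.nullspace():
    print(v.T)   # here: (1, 1, 1), i.e. firing each transition once
```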
To investigate this issue, a metabolic network model is created. It describes the core metabolism of Arabidopsis thaliana and is solely based on data from the literature. The model is too complex for the TI and the liveness property to be determined directly.
Several strategies are followed to enable an analysis and validation of the network. A network decomposition is utilized in two different ways: manually, motivated by the idea of preserving the integrity of biological pathways, and automatically, motivated by the idea of minimizing the number of crossing edges. As a decomposition may not preserve important properties such as coveredness, a network reduction approach is suggested, which is mathematically proven to conserve these important properties. To deal with the large amount of data coming from the TI analysis, new organizational structures are proposed. The liveness property is investigated by reducing the complexity of the calculation method and adapting it to biological networks.
The results obtained by these approaches suggest a valid network model. In conclusion, the proposed approaches and strategies can be used in combination to allow the validation and analysis of highly complex biological networks.
A recent paper on the phylogenetic relationships of species within the cephalopod family Mastigoteuthidae represented great progress in stabilizing the classification of the family. The authors, however, left the generic placement of Mastigoteuthis pyrodes unresolved. This problem is corrected here by placing this species in a new monotypic genus, Mastigotragus, based on unique structures of the photophores and the funnel/mantle locking apparatus.
More than 100 years after Henry James’s death, criticism is still working through unresolved gender issues in his fiction. This study proposes a new interdisciplinary approach to the gendered power relations in James’s novels that fills a crucial gap in the literature. Reading James’s intricately woven narrative form through the lens of relational sociology, specifically Pierre Bourdieu’s concept of symbolic domination, reconciles some of the most fiercely disputed positions in James studies of the past decades. With its focus on gender-related symbolic domination, this study demonstrates the approach’s potential to probe the depths of James’s fictional social worlds while developing the narratological tools to do so.
Many critics have paid attention to the relational nature of James’s social fictions as well as his talent for capturing unspoken, invisible, hidden social constraints. Blatantly missing from the literature is a systematic relational analysis of the specifically Jamesian method of narrating the socio-psychological, embodied responses to power and oppression. The present study closes this research gap. It reveals how James persistently narrates his characters as social agents whose perception, affects, and bodily practices are products of the social structures that they in turn continue to shape and reproduce. Moreover, it traces a development throughout James’s career that reflects his growing sensitivity to the stubbornness of some seemingly insurmountable social constraints. James’s fictional social worlds are relational ones through and through. This study is the first sustained effort to investigate the way in which his narratives capture this interrelatedness.
This article explores life insurance consumption in 31 European countries from 2003 to 2012 and aims to investigate the extent to which market transparency can affect life insurance demand. The cross-country evidence for the entire sample period shows that greater market transparency, which resolves asymmetric information, can generate a higher demand for life insurance. However, when considering the financial crisis period (2008-2012) separately, the results suggest a negative impact of enhanced market transparency on life insurance consumption. The mixed findings imply a trade-off between the reduction in adverse selection under greater market transparency and the possible negative effects on life insurance consumption during the crisis period due to more effective market discipline. Furthermore, this article studies the extent to which transparency can influence the reaction of life insurance demand to bad market outcomes: i.e., low solvency ratios or low profitability. The results indicate that the markets with bad outcomes generate higher life insurance demand under greater transparency compared to the markets that also experience bad outcomes but are less transparent.
Many Zanjian settlements (8th to 13th centuries AD) on Tanzania’s coast are considered to have collapsed and are not regarded as belonging to the formation of the Swahili culture (13th to 16th centuries AD). In this regard, Swahili traditions found on Tanzania’s coast are seldom linked to local Zanjian precursors but rather to external influence, especially from the Lamu archipelago on the Kenyan coast. Nevertheless, new archaeological evidence from Pangani Bay on the northern coast of Tanzania suggests that external influences on cultural continuity and change from the Zanjian to the Swahili period are overemphasized. This conclusion is grounded on archaeological fieldwork conducted in the surroundings of Pangani Bay in 2010 and 2012, where major Swahili sites directly overlie Zanjian sites without recognizable changes in the cultural materials. The study compares and contrasts cultural materials (in particular pottery) and remains of economy and trade (fauna and glass beads) from both the Zanjian and Swahili phases. The aim of this comparative analysis is to trace change and continuity in archaeological traditions for a better understanding of the origin of Swahili culture in Pangani Bay.
In this endeavour, the analysis of ceramics, faunal remains and glass beads from Pangani Bay reveals negligible differences in material and economic traditions from the late 1st to the 2nd millennium AD. That is, the local ceramic styles of the Swahili show only minor differences from those used by their ancestors, while the faunal data suggest a similarity in subsistence economy between the Zanjian and Swahili periods. Correspondingly, the glass bead data indicate that although maritime trade became highly sophisticated during the Swahili period, early involvement in long-distance oceanic trade began in the Zanjian period. This thesis brings all these issues together. It presents the research objectives, fieldwork methods as well as the analysis and interpretation of the results, with a main focus on the ceramic, faunal and bead data. Supported by the archaeological evidence, the current work concludes that there is more continuity than change in most of the Zanjian traditions that facilitated the origin of Swahili culture in Pangani Bay.
The predictive likelihood is of particular relevance in a Bayesian setting when the purpose is to rank models in a forecast comparison exercise. This paper discusses how the predictive likelihood can be estimated for any subset of the observable variables in linear Gaussian state-space models with Bayesian methods, and proposes to utilize a Kalman filter that handles missing observations consistently in the process of achieving this objective. As an empirical application, we analyze euro area data and compare the density forecast performance of a DSGE model to DSGE-VARs and reduced-form linear Gaussian models.
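A minimal sketch of the underlying idea, assuming a univariate local-level model rather than the paper's DSGE setting: a Kalman filter that simply skips missing observations (NaN) while accumulating the log predictive likelihood, sum over t of log p(y_t | y_1:t-1).

```python
import numpy as np

def log_predictive_likelihood(y, T=1.0, Z=1.0, H=0.5, Q=0.1):
    """Univariate Kalman filter: y_t = Z*a_t + e_t, a_t = T*a_{t-1} + n_t.
    Missing observations contribute no update and no likelihood term."""
    a, P = 0.0, 1.0            # prior mean and variance of the state
    logpl = 0.0
    for obs in y:
        a, P = T * a, T * P * T + Q           # prediction step
        if np.isnan(obs):                     # missing: skip consistently
            continue
        F = Z * P * Z + H                     # one-step-ahead variance
        v = obs - Z * a                       # prediction error
        logpl += -0.5 * (np.log(2 * np.pi * F) + v * v / F)
        K = P * Z / F                         # Kalman gain
        a, P = a + K * v, (1 - K * Z) * P     # update step
    return logpl

y = np.array([0.3, np.nan, 0.5, 0.4, np.nan, 0.6])  # toy series with gaps
print(log_predictive_likelihood(y))
```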
Mapping is an important tool for the management of plant invasions. If landscapes are mapped in an appropriate way, results can help managers decide when and where to prioritize their efforts. We mapped vegetation with the aim of providing key information for managers on the extent, density and rates of spread of multiple invasive species across the landscape. Our case study focused on an area of Galapagos National Park that is faced with the challenge of managing multiple plant invasions. We used satellite imagery to produce a spatially explicit database of plant species densities in the canopy, finding that 92% of the humid highlands had some degree of invasion and 41% of the canopy was comprised of invasive plants. We also calculated the rate of spread of eight invasive species using known introduction dates, finding that species with the most limited dispersal ability had the slowest spread rates while those able to disperse long distances had a range of spread rates. Our results on spread rate fall at the lower end of the range of published spread rates of invasive plants. This is probably because most studies are based on the entire geographic extent, whereas our estimates took plant density into account. A spatial database of plant species densities, such as the one developed in our case study, can be used by managers to decide where to apply management actions and thereby help curtail the spread of current plant invasions. For example, it can be used to identify sites containing several invasive plant species, to find the density of a particular species across the landscape or to locate where native species make up the majority of the canopy. Similar databases could be developed elsewhere to help inform the management of multiple plant invasions over the landscape.
Telecommunications companies traditionally offer several tariffs from which their customers can choose the one that best suits their preferences. Yet customers sometimes make choices that are not optimal for them because they do not minimize their bill for a given usage amount. We show in this paper that companies should be very concerned about choices in which customers pick tariffs that are too small for them, because such choices lead to a significant increase in customer churn. In contrast, this is not the case if customers choose tariffs that are too big for them. The reason is that flat rates in particular provide customers with the additional benefit of a guaranteed constant bill amount, so that consumption can be enjoyed more freely because all costs are already accounted for.
Financial service providers face serious problems if many of their customers leave quickly, because such customers have little long-term value. Still, current reporting primarily focuses on current profitability, which represents the short-term value of the customers; the long-term value typically receives little attention. Customer equity reporting presents a means to focus on the long-term value of the company's customers. It avoids the risk that short-term profits are increased at the expense of long-term value creation, and its central metric, customer equity, serves as an early warning indicator for risk management systems that focus on customer loss.
5-lipoxygenase (5-LO) is an enzyme with a substantial role in inflammatory processes. In vitro kinase assays using [32P]-ATP in combination with mutagenesis have revealed that serine residues 271, 523 and 663 can be phosphorylated by the kinases MK2, PKA and ERK2, respectively. The few available reports on the 5-LO protein sequence, based on amino acid sequencing, have covered up to 30% of the sequence, including Ser663. In LC-MS/MS analyses of 5-LO tryptic digests from different cellular sources, various peptides have been detected; however, none of the three phosphorylations has been detected, and only Ser663 was included in the covered sequence.
As there was no comprehensive mass spectrometric analysis of 5-LO, the purpose of this study was to optimize the experimental conditions under which the detection of the aforementioned phosphorylation events, as well as of other possible post-translational modifications (PTMs), would be feasible. Matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) was used for peptide analysis of 5-LO cleaved either by chemical reagents or by proteases. The sequence coverage of 5-LO could be brought close to completion by combining the results of digestions with trypsin, AspN and chymotrypsin. In-gel trypsin digestion followed by in-solution AspN digestion proved to be a useful sample treatment for the reproducible detection of the Ser271-containing peptide.
Nevertheless, in none of the examined cleavage protocols was the sequence around Ser523 detected reproducibly or with acceptable signal intensity for subsequent peptide fragmentation. Propionic anhydride and the sulfo-NHS-SS-biotin cross-linker (EZ-link™) were used for derivatization of lysine side chains to hinder the recognition of lysine residues by trypsin. Phosphopeptide enrichment became possible after tryptic digestion of these samples, not only due to the formation of an individual Ser523-containing peptide, but also because the TiO2-mediated enrichment, which is performed at acidic pH, was no longer impaired by positively charged free lysine side chains. Additionally, biotinylation of lysine residues was exploited for an intermediate enrichment step of the lysine-containing peptides prior to TiO2 phosphopeptide enrichment.
MALDI-MS analysis after in-vitro phosphorylation of 5-LO by the three kinases showed that Ser271 was phosphorylated in the MK2 and PKA kinase assays, while Ser523 was phosphorylated only in the PKA kinase assay. Surprisingly, no phosphopeptides were detected in the in-vitro kinase assays with ERK2, even though the unmodified counterpart of the Ser663-containing peptide was easily detected. The detection limit for each of the three phosphorylation sites was determined by the use of custom-made phosphopeptides, and an amount of 0.06 pmol of phosphopeptide in 1 μg 5-LO (representing a 0.5% phosphorylation rate) was sufficient in all cases for successful enrichment and detection by MS.
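As a plausibility check of the stated phosphorylation rate (a back-of-the-envelope calculation assuming a molecular mass of roughly 78 kDa for human 5-LO, a value not given in the text):

$$ n_{\mathrm{5\text{-}LO}} = \frac{1\,\mu\mathrm{g}}{78\,000\,\mathrm{g/mol}} \approx 12.8\,\mathrm{pmol}, \qquad \frac{0.06\,\mathrm{pmol}}{12.8\,\mathrm{pmol}} \approx 0.5\,\% $$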
In-vitro kinase assays with [32P]-ATP were performed for some kinases that were expected to phosphorylate 5-LO according to in-silico data. Three members of the Src tyrosine kinase family (Fgr, Hck and Yes) and the Ser/Thr-specific kinase DNA-PK used 5-LO as their substrate, and mainly residues in the N-terminal part of 5-LO were detected as phosphorylated by MS (e.g. Y42, Y53). Additional in-vitro assays for recombinant 5-LO modification included incubation with glutathione or with compound U73122, previously described as an inhibitor of 5-LO.
Since the in-vitro assays might have generated artifacts, a method for purifying 5-LO from human cells was sought in order to examine the modification state of the protein in its cellular context. ATP-agarose affinity purification and anti-5-LO immunoprecipitation proved inappropriate for sample purification for MALDI-MS analysis. Consequently, two human cell lines that express 5-LO (Rec-1 B-lymphocytes and MM6 monocytes) were transduced with a DNA cassette containing the recombinant human 5-LO sequence with an N-terminal FLAG tag. Anti-FLAG immunoprecipitation was then performed effectively in cell lysates, and the precipitated FLAG-5-LO was separated by SDS-PAGE before MALDI-MS analysis.
The examined cell stimuli were expected to result in phosphorylation of 5-LO at Ser523 by PKA in Rec-1 cells, and in phosphorylation of Ser271 and/or Ser663 in MM6 cells by activated MK2 and ERK2, respectively. Additionally, under the conditions of MM6 cell stimulation, the Fgr, Hck and Yes kinases, which phosphorylated 5-LO in vitro, were expected to be activated, and the possibility of 5-LO phosphorylation on tyrosine was investigated. Although immunoblotting results indicated that all the aforementioned phosphorylation events occurred in the examined samples, MALDI-MS analysis verified only the phosphorylation at Ser271 in differentiated MM6 cells, interestingly regardless of cell stimulation.
Finally, the primary-amine derivatization procedure with EZ-link™ was utilized for MS analysis of lysine-rich proteins. In the past, chemical propionylation of histones had been employed prior to trypsin digestion; however, it was easily confused in MS with combinations of other PTMs (e.g. acetylation, methylation). Moreover, propionylation is itself a PTM of histone H3, and this information was lost. Consequently, the EZ-link reagent proved more useful for the analysis of histones, as unambiguous assignment of PTMs and detection of native propionylation on bovine H3 became possible.
Background: Malaria is still a priority public health problem in Nepal, where about 84% of the population is at risk. The aim of this paper is to highlight the past and present malaria situation in the country and the challenges for long-term malaria elimination strategies.
Methods: Malariometric indicator data of Nepal recorded through routine surveillance of health facilities for the years between 1963 and 2012 were compiled. Trends and differences in malaria indicator data were analysed.
Results: The trend of confirmed malaria cases in Nepal between 1963 and 2012 shows fluctuation, with a peak of 42,321 cases in 1985, the highest malaria case-load ever recorded in Nepal. This was followed by a steeply declining trend, interrupted by some major outbreaks. Nepal has made significant progress in controlling malaria transmission over the past decade: total confirmed malaria cases declined by 84% (12,750 in 2002 vs 2,092 in 2012), and there was only one reported death in 2012. Based on the evaluation of the National Malaria Control Programme in 2010, Nepal recently adopted a long-term malaria elimination strategy for the years 2011–2026, with the ambitious vision of a malaria-free Nepal by 2026. However, there has been an increasing trend in the proportions of Plasmodium falciparum and imported malaria in the last decade. Furthermore, the analysis of malariometric indicators of 31 malaria-risk districts between 2004 and 2012 shows a statistically significant reduction in the incidence of confirmed malaria and of Plasmodium vivax, but not in the incidence of P. falciparum and clinically suspected malaria.
Conclusions: Based on the achievements the country has made over the last decade, Nepal is preparing to move towards malaria elimination by 2026. However, considerable challenges lie ahead. These especially include the need to improve access to diagnostic facilities to confirm and treat clinically suspected cases, the development of resistance in parasites and vectors, climate change, and increasing numbers of imported cases across a porous border with India. Therefore, caution is needed before the country embarks on malaria elimination.
The European Central Bank (ECB) has finalized its comprehensive assessment of the solvency of the largest banks in the euro area and on October 26 disclosed the results of this assessment. In the present paper, Acharya and Steffen compare the outcomes of the ECB's assessment to their own benchmark stress tests conducted for 39 publicly listed financial institutions that are also included in the ECB's regulatory review. The authors identify a negative correlation between their benchmark estimates for capital shortfalls and the regulatory capital shortfall, but a positive correlation between their benchmark estimates for losses under stress in both the banking book and the trading book. They conclude that the regulatory stress test outcomes are potentially heavily affected by the discretion of national regulators in measuring what counts as capital, and especially by the use of risk-weighted assets in calculating the prudential capital requirement.
Europeana provides a common access point to digital cultural heritage objects across different cultural domains, including libraries. The recent development of the Europeana Data Model (EDM) provides new ways for libraries to experiment with Linked Data. Indeed, the model is designed as a framework reusing various well-known standards developed in the Semantic Web community, such as the Resource Description Framework (RDF), OAI Object Reuse and Exchange (ORE), and the Dublin Core namespaces. It provides new opportunities for libraries to contribute rich and interlinked metadata to the Europeana aggregation.
However, to be able to provide data to Europeana, libraries need to create mappings from the library standard to EDM. This step involves decisions based on domain-specific requirements and on the possibilities offered by EDM. As the cross-domain nature of EDM limits in some cases the completeness of the mappings, extensions of the model have been proposed to accommodate library needs.
The "Digitised Manuscripts to Europeana" project (DM2E) has created an extension of EDM to optimise the mappings of librarydata for manuscripts. This extension is in the form of subclasses and subproperties that further specialise EDM concepts and properties. It includes spatial creation and publishing information, specific contributor and publication type properties and more.
Furthermore, the granularity of the mapping has been extended to allow references and annotations at page level, as required for scholarly work. As part of this project, the metadata of the Hebrew manuscripts as well as of the medieval manuscripts presented in the Digital Collections of the Frankfurt University Library have been mapped to this extension. This includes links to the Integrated Authority File (GND) of the German National Library, with further links to the Virtual International Authority File (VIAF).
Based on this development, a new comprehensive mapping from the digitisation metadata format METS/MODS to EDM has been established for all materials of the Frankfurt Judaica in "Judaica Europeana". It demonstrates today’s capabilities for creating Linked Data structures in Europeana based on library catalogue data and structural data from the digitisation process.
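As a minimal illustration of what such an EDM record looks like in RDF (a sketch using rdflib; all URIs, titles and identifiers are invented placeholders, not records of the Frankfurt University Library):

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC, RDF

# Standard EDM and ORE namespaces referenced in the text.
EDM = Namespace("http://www.europeana.eu/schemas/edm/")
ORE = Namespace("http://www.openarchives.org/ore/terms/")

g = Graph()
g.bind("edm", EDM)
g.bind("ore", ORE)
g.bind("dc", DC)

cho = URIRef("http://example.org/manuscript/1")   # provided cultural object
agg = URIRef("http://example.org/aggregation/1")  # its ORE aggregation

g.add((cho, RDF.type, EDM.ProvidedCHO))
g.add((cho, DC.title, Literal("Hebrew manuscript (placeholder title)")))
# Link to an authority file, analogous to the GND/VIAF links in the text.
g.add((cho, DC.creator, URIRef("http://d-nb.info/gnd/placeholder-id")))

g.add((agg, RDF.type, ORE.Aggregation))
g.add((agg, EDM.aggregatedCHO, cho))
g.add((agg, EDM.dataProvider, Literal("Frankfurt University Library")))

print(g.serialize(format="turtle"))
```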
Cryptochrome 1a, located in the UV/violet-sensitive cones in the avian retina, is discussed as a receptor molecule for the magnetic compass of birds. Our previous immunohistochemical studies of chicken retinae with an antiserum that labelled only activated cryptochrome 1a had shown activation of cryptochrome 1a under 373 nm UV, 424 nm blue, 502 nm turquoise and 565 nm green light. Green light, however, does not allow the first step of photoreduction of oxidized cryptochromes to the semiquinone. As the chickens had been kept under ‘white’ light before, we suggested that there was a supply of the semiquinone present at the beginning of the exposure to green light, which could be further reduced and then re-oxidized. To test this hypothesis, we exposed chickens to various wavelengths (1) for 30 min after being kept in daylight, (2) for 30 min after a 30 min pre-exposure to total darkness, and (3) for 1 h after being kept in daylight. In the first case, we found activated cryptochrome 1a under UV, blue, turquoise and green light; in the latter two cases we found activated cryptochrome 1a only under UV to turquoise light, where the complete redox cycle of cryptochrome can run, but not under green light. This observation is in agreement with the hypothesis that activated cryptochrome 1a is found as long as there is some of the semiquinone left, but not when the supply is depleted. It supports the idea that the crucial radical pair for magnetoreception is generated during re-oxidation.
Cryo-electron tomography provides a snapshot of the cellular proteome. With template matching, the spatial positions of various macromolecular complexes within their native cellular context can be detected. However, the growing awareness of the reference bias introduced by cross-correlation-based approaches, and more importantly the lack of a reliable confidence measure for the selection of these macromolecular complexes, has restricted the use of these applications. Here we propose a heuristic in which the reference bias is measured in real space, in a way analogous to the R-free value in X-ray crystallography. We measure the reference bias within the mask used to outline the area of the template, and do not modify the template itself. The heuristic works by splitting the mask into a working and a testing area in a volume ratio of 9:1. While only the working area is used during the calculation of the cross-correlation function, the information from both areas is explored to calculate the M-free score. We show, using artificial data, that the M-free score gives a reliable measure of the reference bias. The heuristic can be applied in template matching and in sub-tomogram averaging. We further test its applicability in tomograms of purified macromolecules and tomograms of whole Mycoplasma cells.
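A minimal sketch of the splitting idea, based on our reading of the description above rather than the authors' implementation: the mask is divided 9:1 into a working and a testing area, matching uses only the working area, and the held-out testing area provides a control score that is free of the reference bias.

```python
import numpy as np

rng = np.random.default_rng(0)
mask = np.ones((8, 8, 8), dtype=bool)   # toy spherical mask stand-in

# Random 9:1 split of the mask voxels into working / testing areas.
idx = np.flatnonzero(mask)
test_idx = rng.choice(idx, size=idx.size // 10, replace=False)
work_mask = mask.ravel().copy()
work_mask[test_idx] = False
work_mask = work_mask.reshape(mask.shape)
test_mask = mask & ~work_mask

def ncc(a, b, m):
    """Normalized cross-correlation of volumes a and b inside mask m."""
    a, b = a[m], b[m]
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

template = rng.normal(size=mask.shape)
subtomogram = template + rng.normal(scale=2.0, size=mask.shape)  # noisy copy

cc_work = ncc(template, subtomogram, work_mask)  # drives the matching
cc_test = ncc(template, subtomogram, test_mask)  # held out, bias-free
# The paper defines the actual M-free score; the gap between the two
# scores is shown here only to illustrate how bias would manifest.
print(cc_work, cc_test, cc_work - cc_test)
```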
Lysimachia mauritiana Lam. (family Primulaceae), a small short-lived herb native to India, Indian and Pacific Ocean islands, and coastal east Asia, is described as a new naturalised record from the eastern suburbs of Sydney, New South Wales, Australia. It was first recorded in 1981 near Coogee, and grows in exposed rock crevices and seepages on the seacoast, very similar to its natural habitat overseas. Lysimachia mauritiana is known to have been cultivated in the area in 1961 in a home garden, which is the likely source of this introduction; it appears to be spreading locally as a weed.
Bacteria communicate via small diffusible molecules to mediate group-coordinated behavior, a process designated quorum sensing. The basic molecular quorum sensing system of Gram-negative bacteria consists of a LuxI-type autoinducer synthase producing acyl-homoserine lactones (AHLs) as signaling molecules, and a LuxR-type receptor detecting the AHLs to control expression of specific genes. However, many proteobacteria possess one or more unpaired LuxR-type receptors that lack a cognate LuxI-like synthase, referred to as LuxR solos. The enteric and insect-pathogenic bacteria of the genus Photorhabdus harbor an extraordinarily high number of LuxR solos, more than any other known bacteria, and all lack a LuxI-like synthase. Here, we focus on the presence and the different types of LuxR solos in the three known Photorhabdus species using bioinformatics analyses. Generally, the N-terminal signal-binding domain (SBD) of AHL-sensing LuxR-type receptors has a motif of six conserved amino acids that is important for binding and specificity of the signaling molecule. However, this motif is altered in the majority of the Photorhabdus-specific LuxR solos, suggesting the use of signaling molecules other than AHLs. Furthermore, all Photorhabdus species contain at least one LuxR solo with an intact AHL-binding motif, which might allow them to sense AHLs of other bacteria. Moreover, all three species have high AHL-degrading activity caused by the presence of different AHL-lactonases and AHL-acylases, revealing a high quorum quenching activity against other bacteria. However, the majority of the other LuxR solos in Photorhabdus have an N-terminal so-called PAS4 domain instead of an AHL-binding domain, containing amino acid motifs different from those of the AHL sensors, which potentially allows the recognition of a highly variable range of signaling molecules apart from AHLs. These PAS4-LuxR solos are proposed to be involved in host sensing, and therefore in inter-kingdom signaling. Overall, Photorhabdus species are perfect model organisms to study bacterial communication via LuxR solos and their role in a symbiotic and pathogenic lifestyle.
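As a minimal sketch of such a motif check (not the authors' pipeline; the positions and residues below follow the TraR numbering commonly cited in the quorum sensing literature and are an assumption here, not taken from this paper):

```python
# Hypothetical six-residue AHL-binding motif, 1-based positions in an
# alignment to a TraR-like reference; treat these values as placeholders.
MOTIF = {57: "W", 61: "Y", 70: "D", 71: "P", 85: "W", 113: "G"}

def ahl_motif_intact(aligned_seq: str) -> bool:
    """True if all six reference residues are present at their positions."""
    return all(len(aligned_seq) >= pos and aligned_seq[pos - 1] == aa
               for pos, aa in MOTIF.items())

# Usage with a hypothetical aligned LuxR-solo SBD sequence:
# print(ahl_motif_intact(my_aligned_sequence))
```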
Noise-induced hearing loss is one of the most common auditory pathologies, resulting from overstimulation of the human cochlea, an exquisitely sensitive micromechanical device. At very low frequencies (less than 250 Hz), however, the sensitivity of human hearing, and therefore the perceived loudness, is poor. Perceived loudness is mediated by the inner hair cells of the cochlea, which are driven very inadequately at low frequencies. To assess the impact of low-frequency (LF) sound, we exploited a by-product of the active sound amplification performed by outer hair cells (OHCs), so-called spontaneous otoacoustic emissions. These are faint sounds produced by the inner ear that can be used to detect changes in cochlear physiology. We show that a short exposure to perceptually unobtrusive LF sounds significantly affects OHCs: a 90 s, 80 dB(A) LF sound induced slow, concordant and positively correlated frequency and level oscillations of spontaneous otoacoustic emissions that lasted for about 2 min after LF sound offset. LF sounds, contrary to their unobtrusive perception, thus strongly stimulate the human cochlea and affect amplification processes in the most sensitive and important frequency range of human hearing.
Low-energy effective models for two-flavor quantum chromodynamics and the universality hypothesis
(2014)
The study of nature at extreme length scales has always led to groundbreaking insights and innovations, in particular to our present-day understanding that nucleons (protons and neutrons) are composed of quarks, which are bound as a consequence of the strong interaction, mediated by gluon exchange. With the advent of the quark model, quantum chromodynamics (QCD) soon became successful in describing many measurable properties of the strong interaction. To put it in Goethe's words, modern high-energy accelerator experiments seek to improve our understanding of what holds the world together at its innermost core. At the Large Hadron Collider (LHC), for instance, protons are accelerated and brought to collision in such a way that hitherto unattained energy densities occur, under which temperature and baryochemical potential take on values comparable to those of the early universe. There are both theoretical and experimental indications that hadronic matter undergoes a phase transition with increasing temperature and/or increasing baryochemical potential, into an exotic state known as the quark-gluon plasma. This transition is accompanied by a so-called chiral transition. An important question is whether this chiral transition is a genuine phase transition (of first or second order) or a so-called crossover. Some results point to a crossover at vanishing baryochemical potential and to a first-order phase transition at vanishing temperature, but do not yet permit a definitive conclusion as to whether this actually corresponds to reality. If it does, it is natural to assume that a critical endpoint exists at which the chiral transition is of second order. Indeed, a critical endpoint exists in several theoretical approaches to the description of the chiral phase transition, whose predictive power has always been the subject of lively debate. A central goal of the future CBM experiment at GSI in Darmstadt is to test its existence experimentally.
Near the QCD (phase) transition, it is the absence of any perturbative expansion parameter that forbids exact analytical calculations. The same holds for realistic effective models of QCD. Non-perturbative methods are therefore indispensable for the investigation of the QCD phase diagram. Among the most popular of these approaches are lattice QCD, resummation techniques, the Dyson-Schwinger formalism, and the functional renormalization group (FRG). All of these methods complement one another and are in part also combined with each other. One of the strengths of the FRG method is that it can be applied successfully not only to effective models but also to QCD itself. For the latter ab-initio calculations, the results obtained from effective models of QCD are of great value.
The focus of the present work lies on the question of the order of the chiral phase transition in the case of exactly two light quark flavors. Problems such as finding the conditions for the existence of a second-order phase transition, or determining the universality class in that case, require knowledge from several fields.
Chapter 1 consists of a general introduction.
In Chapter 2, we first present some general aspects of phase transitions that are particularly relevant for understanding the renormalization group approach to them. Our focus lies on a critical examination of the universality hypothesis. In particular, the justification of the linear sigma model as an effective theory for the chiral order parameter rests on its validity.
Chapter 3 deals with the chiral phase transition from a general point of view. We supplement well-known facts with a detailed discussion of the so-called O(4) hypothesis. Testing its validity is finally tackled in Chapters 6 and 7.
In Chapter 4, we introduce the FRG method that we employ. We also discuss the connection between effective theories for QCD and QCD itself.
Chapter 5 treats a mathematical topic that is indispensable for all of our investigations, namely the systematic construction of polynomial invariants for a given symmetry. We present a simple yet novel algorithm for the practical construction of invariants of a given polynomial order.
Chapter 6 is devoted to renormalization group studies of a series of dimensionally reduced theories. Of central interest here is the linear sigma model, in particular in the presence of the axial anomaly. It turns out that the fixed-point structure of the latter is comparatively complicated and requires a deeper understanding of the underlying method and its assumptions. This leads us to a careful analysis of the fixed-point structure of models with a wide variety of symmetries. In connection with the investigation of the influence of vector and axial-vector mesons, we encounter a new universality class.
While there is little freedom in the choice of the symmetry group of the effective theory for the chiral order parameter, the identification of the order-parameter components with the relevant mesonic degrees of freedom is highly non-trivial. This choice corresponds to the choice of a representation of the group and cannot, at present, be derived unambiguously from QCD. It is therefore essential to test different possibilities. A well-known choice is to assign the pion and its chiral partner, the sigma meson, to the O(4) representation of SU(2)_A x SU(2)_V, which permits a second-order phase transition. This scenario, however, is only sensible if all other mesons are correspondingly heavy near the critical temperature. In the case of exactly two light quark masses, this requires a sufficiently large anomaly strength. If, in addition to the pion and the sigma meson, the eta meson and the a_0 meson are taken into account, our current explicit calculations provide no evidence for the existence of a second-order phase transition. Instead, the absence of a physical (with respect to the masses) infrared-stable fixed point points to a fluctuation-induced first-order phase transition. This result is also to be expected (though not implied) from the mere existence of two quadratic invariants. There remains, however, a hypothetical chance of a second-order phase transition in the SU(2)_A x U(2)_V universality class. This would be the case if the corresponding unphysical infrared-stable fixed point that we found were to become physical at higher truncation order. Interestingly, at finite temperature we find a second-order phase transition for certain parameters. It is unclear whether this choice of parameters falls within the range of validity of the dimensionally reduced theory.
Only recently (at the end of September 2013) was the existence of an infrared-stable U(2)_A x U(2)_V-symmetric fixed point verified by Pelissetto and Vicari (the corresponding anomalous dimension is quoted as 0.12). This result was very surprising, since for two light quark flavors and a vanishing anomaly a first-order phase transition had appeared relatively certain, in particular on the basis of the epsilon expansion. Evidently, however, the latter fails in the limit D=3, i.e., for three spatial dimensions, since it can only find fixed points that also exist near D=4. Inspired by this important finding, we carry out an FRG fixed-point study in the local potential approximation at high truncation order (up to tenth order in the fields). Unfortunately, the stability analysis is inconclusive, since the stability matrix of the Gaussian fixed point possesses marginal eigenvalues. We are convinced that this is no longer the case once one goes beyond the local potential approximation and allows for a non-vanishing anomalous dimension. The results so far highlight the limitations of the local potential approximation and of the epsilon expansion, on which our investigations of the universality hypothesis largely rest. Systematic investigations of the fixed-point structure of models with eight order-parameter components have been carried out in the literature within the epsilon expansion, and within this dissertation in the local potential approximation. Most predictions of the epsilon expansion could be confirmed, while some are called into question by the appearance of marginal stability-matrix eigenvalues.
Some important questions cannot be treated within a dimensionally reduced theory, since the explicit temperature dependence has been eliminated in that case.
In particular, it is then not possible to predict the strength of a first-order phase transition, since it depends on observables (the meson masses and the pion decay constant in vacuum) to which one must fit at vanishing temperature. This circumstance leads us to FRG studies in which the temperature remains an explicit parameter.
A considerable part of the working time available for the present dissertation was spent on developing our own implementations of suitable algorithms for the numerical solution of the partial differential equations that arise. Exemplary routines (which use exclusively well-known methods) are provided in an appendix. The main goal of the present work, the application to effective models for QCD, is presented in Chapter 7. Our (preliminary) FRG studies of the linear sigma model with axial anomaly at non-vanishing temperature allow for different scenarios: an extremely weak as well as a very pronounced first-order phase transition, depending on the choice of the ultraviolet cutoff scale and of the parameters mentioned above. Even a second-order phase transition appears possible for certain parameter values. To draw reliable conclusions, further investigations are necessary and already under way. In Chapter 7 we also verify previously known numerical results for the quark-meson model.
The High Acceptance DiElectron Spectrometer HADES [1] is installed at the Helmholtzzentrum für Schwerionenforschung (GSI) accelerator facility in Darmstadt. It investigates dielectron emission and strangeness production in the 1-3 AGeV regime. A recent experiment series focuses on medium modifications of light vector mesons in cold nuclear matter. In two runs, p+p and p+Nb reactions were investigated at 3.5 GeV beam energy; about 9·10^9 events have been registered. In contrast to other experiments, the high acceptance of HADES allows for a detailed analysis of electron pairs with low momenta relative to the nuclear matter, where modifications of the spectral functions of vector mesons are predicted to be most prominent. Comparing these low-momentum electron pairs to the reference measurement in the elementary p+p reaction, we indeed find a strong modification of the spectral distribution in the whole vector meson region.
Background: Risk stratification, detection of minimal residual disease (MRD), and implementation of novel therapeutic agents have improved outcome in acute lymphoblastic leukemia (ALL), but survival of adult patients with T-cell acute lymphoblastic leukemia (T-ALL) remains unsatisfactory. Thus, novel molecular insights and therapeutic approaches are urgently needed.
Methods: We studied the impact of B-cell CLL/lymphoma 11b (BCL11b), a key regulator of normal T-cell development, in T-ALL patients enrolled in the German Multicenter Acute Lymphoblastic Leukemia Study Group trials (GMALL; n = 169). The mutational status (exon 4) of BCL11b was analyzed by Sanger sequencing, and mRNA expression levels were determined by quantitative real-time PCR. In addition, gene expression profiles generated on the Human Genome U133 Plus 2.0 Array (Affymetrix) were used to investigate BCL11b low- and high-expressing T-ALL patients.
Results: We demonstrate that BCL11b is aberrantly expressed in T-ALL, and gene expression profiles reveal an association of low BCL11b expression with up-regulation of immature markers. T-ALL patients characterized by low BCL11b expression exhibit an adverse prognosis [5-year overall survival (OS): low 35% (n = 40) vs. high 53% (n = 129), P = 0.02]. Within the standard-risk group of thymic T-ALL (n = 102), low BCL11b expression identified patients with an unexpectedly poor outcome compared to those with high expression (5-year OS: 20%, n = 18 versus 62%, n = 84, P < 0.01). In addition, sequencing of exon 4 revealed a high mutation rate (14%) of BCL11b.
Conclusions: In summary, our data from a large adult T-ALL patient cohort show that low BCL11b expression was associated with poor prognosis, particularly in the standard-risk group of thymic T-ALL. These findings can be utilized for improved risk prediction in a significant proportion of adult T-ALL patients who carry a high risk of standard therapy failure despite a favorable immunophenotype.
Objectives: Low-energy shock waves have been shown to induce angiogenesis, improve left ventricular ejection fraction and decrease angina symptoms in patients suffering from chronic ischemic heart disease. Whether there is an effect in acute ischemia as well had not yet been investigated.
Methods: Hind-limb ischemia was induced in 10- to 12-week-old male C57/Bl6 wild-type mice by excision of the left femoral artery. Animals were randomly divided into a treatment group (SWT, 300 shock waves at 0.1 mJ/mm2, 5 Hz) and untreated controls (CTR), n = 10 per group. The treatment group received shock wave therapy immediately after surgery.
Results: Higher gene expression and protein levels of the angiogenic factors VEGF-A and PlGF, as well as of their receptors Flt-1 and KDR, were found. This resulted in significantly more vessels per high-power field in SWT animals compared to controls. The improvement of blood perfusion in treated animals was confirmed by laser Doppler perfusion imaging. Receptor tyrosine kinase profiling revealed significant phosphorylation of VEGF receptor 2 as an underlying mechanism of action. The effect of VEGF signaling was abolished upon incubation with a VEGFR2 inhibitor, indicating that the effect is indeed VEGFR2 dependent.
Conclusions: Low-energy shock wave treatment induces angiogenesis in acute ischemia via VEGF receptor 2 stimulation and shows the same promising effects as known from chronic myocardial ischemia. It may therefore develop into an adjunct to the treatment armamentarium for acute muscle ischemia in limbs and myocardium.
Loudness in the novel
(2014)
The novel is composed entirely of voices: the most prominent among them is typically that of the narrator, which is regularly intermixed with those of the various characters. In reading through a novel, the reader "hears" these heterogeneous voices as they occur in the text. When the novel is read out loud, the voices are audibly heard. They are also heard, however, when the novel is read silently: in this latter case, the voices are not verbalized for others to hear, but acoustically created and perceived in the mind of the reader. Simply put: sound, in the context of the novel, is fundamentally a product of the novel’s voices. This conception of sound mechanics may at first seem unintuitive—sound seems to be the product of oral reading—but it is only by starting with the voice that one can fully appreciate sound’s function in the novel. Moreover, such a conception of sound mechanics finds affirmation in the works of both Mikhail Bakhtin and Elaine Scarry: "In the novel," writes Bakhtin, "we can always hear voices (even while reading silently to ourselves)."
The mitochondrial kinase PINK1 and the ubiquitin ligase Parkin participate in quality control after CCCP- or ROS-induced mitochondrial damage, and their dysfunction is associated with the development and progression of Parkinson’s disease. Furthermore, PINK1 expression is also induced by starvation, indicating an additional role for PINK1 in the stress response. Therefore, the effects of PINK1 deficiency on the autophago-lysosomal pathway during stress were investigated. Under trophic deprivation, SH-SY5Y cells with stable PINK1 knockdown showed downregulation of key autophagic genes, including Beclin, LC3 and LAMP-2. In good agreement, protein levels of LC3-II and LAMP-2, but not of LAMP-1, were reduced in different cell model systems with PINK1 knockdown or knockout after addition of different stressors. This downregulation of autophagic factors caused increased apoptosis, which could be rescued by overexpression of LC3 or PINK1. Taken together, the reduction of key autophagic factors during stress caused by PINK1 deficiency resulted in increased cell death, thus defining an additional pathway that could contribute to the progression of Parkinson’s disease in patients with PINK1 mutations.
The deregulation of Polo-like kinase 1 (Plk1) is inversely linked to the prognosis of patients with diverse human tumors. Targeting Plk1 has been widely considered one of the most promising strategies for molecular anticancer therapy. While the preclinical results are encouraging, the clinical outcomes are less inspiring, showing only limited anticancer activity. It is thus important to identify molecules and mechanisms responsible for sensitivity to Plk1 inhibition. We have recently shown that p21Cip1/CDKN1A is involved in the regulation of mitosis and that its loss prolongs mitotic duration, accompanied by defects in chromosome segregation and cytokinesis in various tumor cells. In the present study, we demonstrate that p21 affects the efficacy of Plk1 inhibitors, especially Poloxin, a specific inhibitor of the unique Polo-box domain. Intriguingly, upon treatment with Plk1 inhibitors, p21 is increased in the cytoplasm, associated with anti-apoptosis, DNA repair and cell survival. By contrast, deficiency of p21 renders tumor cells more susceptible to Plk1 inhibition, as shown by pronounced mitotic arrest, DNA damage and apoptosis. Furthermore, long-term treatment with Plk1 inhibitors strongly induced senescence in tumor cells with functional p21. We suggest that the p21 status may be a useful biomarker for predicting the efficacy of Plk1 inhibition.
Background: In this study, we examined patients who had non-progressive disease for at least 2 years after diagnosis of inoperable locoregional recurrent or metastatic breast cancer under continuous trastuzumab treatment. Our primary goal was to assess the long-term outcome of patients with durable response to trastuzumab.
Methods: 268 patients with HER2-positive inoperable locally recurrent or metastatic breast cancer and non-progressive disease for at least 2 years under trastuzumab treatment were documented retrospectively or prospectively in the HER-OS registry, an online documentation tool, between December 2006 and September 2010 by 71 German oncology centers. The study end point was time to tumor progression.
Results: Overall, 47.1% of patients (95% confidence interval (CI): 39.9–54.1%) remained in remission for more than 5 years, while the median time to progression was 4.5 years (95% CI: 4.0–6.6 years). Lower age (<50 years) and good performance status (ECOG 0) at the time of trastuzumab treatment initiation, as well as complete remission after initial trastuzumab treatment, were associated with a longer time to progression. Interruption of trastuzumab therapy correlated with a shorter time to progression.
Conclusions: HER2-positive patients who initially respond to palliative treatment with trastuzumab can achieve long-term tumor remission lasting several years.
In the interest of understanding the development of a multicellular organism, subcellular events must be seen in the context of the entire three-dimensional tissue. In addition, events that occur within a short period of time can be of great importance for the relatively long developmental process of the organ. It is therefore necessary to capture subcellular events in a larger spatio-temporal context, which has until now been a technical challenge. In developmental biology, light microscopy has always been an important tool. The dilemma of light microscopy, in particular fluorescence microscopy, is that the specimen is exposed to high light intensities that might change the conformation of molecules, which can have signaling or toxic effects. In Light Sheet-based Fluorescence Microscopy (LSFM), the energy required for a single recording is reduced by several orders of magnitude compared to other fluorescence microscopy techniques. During the last ten years, LSFM has emerged as a preferred tool to capture all cells during embryogenesis of the zebrafish Danio rerio, the fruit fly Drosophila melanogaster or, recently, the red flour beetle Tribolium castaneum over a period of several days.

The motivation of this work was to gain new insights into development-related processes of plant organs. The aim was to establish a protocol for imaging plant growth over a long period of time using LSFM and to perform comprehensive analyses at the cellular level. Plants have to cope with a variety of environmental conditions; therefore, the conditions inside the microscope chamber had to be brought under control. The sample preparation methods and the standardized conditions at a physiological level allowed the study of gravity response, day-night rhythms and organ shape development, as well as of the intracellular dynamics of the cytoskeleton and endosomal compartments, in an unprecedented manner. Several of these projects were successfully published in collaborations with Prof. Jozef Šamaj (Palacký University Olomouc, Czech Republic), Prof. Niko Geldner (University of Lausanne, Switzerland), Prof. Malcolm Bennett (University of Nottingham, UK) and Dr. Jürgen Kleine-Vehn (University of Natural Resources and Life Sciences, Austria).

The main part of my work focused on the formation of lateral roots in Arabidopsis thaliana and was conducted in close collaboration with Dr. Alexis Maizel (University of Heidelberg, Germany). Previously, most experiments describing lateral root formation had been performed on a small number of cells and for short periods of time. Capturing the complete process of lateral root formation is an ambitious goal: first, the primordium of a lateral root is located deep inside the primary root, and imaging quality is impaired by scattering in the overlying tissue. Second, the process takes about 48 h, i.e. the plant has to be kept healthy for the whole period. Third, the amount of excitation light required for the necessary spatio-temporal resolution might have phototoxic effects that arrest growth, at least with conventional microscopy techniques. In Arabidopsis embryogenesis, the sequence of cell divisions is relatively invariant. However, whether lateral root organogenesis follows particular cell division patterns has been unknown. Here, the complete process of lateral root formation was captured from the first cell division until after emergence from the main root. Images of a nuclei marker and a plasma membrane marker were recorded every 5 min for a period of up to 64 h.
The positions and cell divisions of all cells were tracked manually. In collaboration with Alexander Schmitz (Goethe University Frankfurt am Main, Germany) and Dr. Jens Fangerau (University of Heidelberg, Germany), comprehensive analyses of the data were performed. A lateral root forms from initially 8-15 founder cells, arranged in a patch of 5-8 parallel files. The generation of new cell layers by periclinal divisions, as well as the sequence of layer formation, was conserved and resembles the sequence suggested by Malamy and Benfey in 1997. Besides this stereotyped occurrence of periclinal divisions, radial divisions appeared stochastically, following no particular pattern. Large variability was also found in the contribution of founder cells and cell files to the final lateral root. In summary, the results suggest that a stereotyped pattern of cell divisions at particular developmental stages and a dynamically adapted control of cell divisions exist in parallel. Both properties allow controlled but flexible development of the organ according to variations in cell topology and in the mechanical properties of the surrounding tissue. This work shows that LSFM, together with the sample preparation methods and controlled environmental conditions, makes it possible to capture and analyse the development of plants over several days at high resolution in an unprecedented manner.
Locative inversion in Cuwabo
(2014)
This paper proposes a detailed description of locative inversion (LI) constructions in Cuwabo, in terms of morphosyntactic properties and thematic restrictions. Of particular interest are the use of disjoint verb forms in LI, and the co-existence of formal and semantic LI, which challenges the widespread belief that the two constructions cannot be found in the same language.
Late stage cancer is often associated with reduced immune recognition and a highly immunosuppressive tumor microenvironment. The presence of tumor infiltrating lymphocytes (TILs) and specific gene-signatures prior to treatment are linked to good prognosis, while the opposite is true for extensive immunosuppression. The use of adenoviruses as cancer vaccines is a form of active immunotherapy to initialise a tumor-specific immune response that targets the patient’s unique tumor antigen repertoire. We report a case of a 68-year-old male with asbestos-related malignant pleural mesothelioma who was treated in a Phase I study with a granulocyte-macrophage colony-stimulating factor (GM-CSF)-expressing oncolytic adenovirus, Ad5/3-D24-GMCSF (ONCOS-102). The treatment resulted in prominent infiltration of CD8+ lymphocytes into the tumor, marked induction of systemic antitumor CD8+ T-cells and induction of Th1-type polarization in the tumor. These results indicate that ONCOS-102 treatment sensitizes tumors to other immunotherapies by inducing a T-cell positive phenotype in an initially T-cell negative tumor.
Local active information storage as a tool to understand distributed neural information processing
(2014)
Every act of information processing can in principle be decomposed into the component operations of information storage, transfer, and modification. Yet, while this is easily done for today's digital computers, the application of these concepts to neural information processing was hampered by the lack of proper mathematical definitions of these operations on information. Recently, definitions were given for the dynamics of these information processing operations on a local scale in space and time in a distributed system, and the specific concept of local active information storage (LAIS) was successfully applied to the analysis and optimization of artificial neural systems. However, no attempt to measure the space-time dynamics of LAIS in neural data has been made to date. Here we measure LAIS in voltage-sensitive dye imaging data from area 18 of the cat. We show that storage reflects neural properties such as stimulus preferences and surprise upon unexpected stimulus change, and in area 18 reflects the abstract concept of an ongoing stimulus despite the locally random nature of this stimulus. We suggest that LAIS will be a useful quantity for testing theories of cortical function, such as predictive coding.
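For readers new to the measure: local active information storage quantifies, at each moment, how much of the next sample of a process is predictable from that process's own immediate past. With a history of length k as a finite approximation of the semi-infinite past, the local value at time t+1 is

    a_X(t+1) = \log_2 \frac{p(x_t^{(k)}, x_{t+1})}{p(x_t^{(k)})\, p(x_{t+1})},

and its time average is the (average) active information storage. A minimal plug-in estimator for a discretised one-dimensional time series might look as follows; the function name, the choice of k and the toy example are illustrative and not taken from the study:

    import numpy as np
    from collections import Counter

    def local_ais(x, k=2):
        # Plug-in (frequency-based) estimate of local active information
        # storage for a discrete 1-D time series x with history length k.
        # Returns one local value (in bits) per time step t = k .. len(x)-1.
        pasts = [tuple(x[i - k:i]) for i in range(k, len(x))]
        nexts = [x[i] for i in range(k, len(x))]
        n = len(nexts)
        p_joint = Counter(zip(pasts, nexts))
        p_past = Counter(pasts)
        p_next = Counter(nexts)
        return np.array([
            np.log2((p_joint[(h, s)] / n) /
                    ((p_past[h] / n) * (p_next[s] / n)))
            for h, s in zip(pasts, nexts)
        ])

    # A perfectly alternating binary sequence is fully predictable from
    # its past, so the mean local storage approaches 1 bit.
    print(local_ais([0, 1] * 50, k=2).mean())

In practice, continuous neural signals are first discretised or handled with kernel or nearest-neighbour estimators, and the history length k is chosen by model selection rather than fixed a priori.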
Banks can deal with their liquidity risk by holding liquid assets (self-insurance), by participating in interbank markets (coinsurance), or by using flexible financing instruments, such as bank capital (risk-sharing). We use a simple model to show that undiversifiable liquidity risk, i.e. the liquidity risk that banks are unable to coinsure on interbank markets, represents an important risk factor affecting their capital structures. Banks facing higher undiversifiable liquidity risk hold more capital. We posit that, empirically, banks that are more exposed to undiversifiable liquidity risk are less active on interbank markets. Therefore, we test for the existence of a negative relationship between bank capital and interbank market activity and find support for it in a large sample of U.S. commercial banks.
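To make the empirical strategy concrete, the hypothesised negative relationship can be summarised as a regression of the schematic form (a sketch with illustrative variable names, not the authors' exact specification):

    \mathit{capital}_i = \alpha + \beta \cdot \mathit{interbank}_i + \gamma' X_i + \varepsilon_i, \qquad \text{prediction: } \beta < 0,

where \mathit{capital}_i is bank i's capital ratio, \mathit{interbank}_i measures its interbank market activity, and X_i collects bank-level controls.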
Dendritic morphology has been shown to have a dramatic impact on neuronal function. However, population features such as the inherent variability in dendritic morphology between cells belonging to the same neuronal type are often overlooked when studying computation in neural networks. While detailed models for morphology and electrophysiology exist for many types of single neurons, the role of detailed single cell morphology in the population has not been studied quantitatively or computationally. Here we use the structural context of the neural tissue in which dendritic trees exist to drive their generation in silico. We synthesize the entire population of dentate gyrus granule cells, the most numerous cell type in the hippocampus, by growing their dendritic trees within their characteristic dendritic fields bounded by the realistic structural context of (1) the granule cell layer that contains all somata and (2) the molecular layer that contains the dendritic forest. This process enables branching statistics to be linked to larger scale neuroanatomical features. We find large differences in dendritic total length and individual path length measures as a function of location in the dentate gyrus and of somatic depth in the granule cell layer. We also predict the number of unique granule cell dendrites invading a given volume in the molecular layer. This work enables the complete population-level study of morphological properties and provides a framework to develop complex and realistic neural network models.
Channelrhodopsin-2 (ChR2) is a cation-selective light-gated channel from Chlamydomonas reinhardtii (Nagel G, Szellas T, Huhn W, Kateriya S, Adeishvili N, Berthold P, et al. Channelrhodopsin-2, a directly light-gated cation-selective membrane channel. Proc Natl Acad Sci USA 2003;100:13940-5), which has become a powerful tool in optogenetics. Two-dimensional crystals of the slow photocycling C128T ChR2 mutant were exposed to 473 nm light and rapidly frozen to trap the open state. Projection difference maps at 6 Å resolution show the location, extent and direction of light-induced conformational changes in ChR2 during the transition from the closed state to the ion-conducting open state. Difference peaks indicate that transmembrane helices (TMHs) TMH2, TMH6 and TMH7 reorient or rearrange during the photocycle. No major differences were found near TMH3 and TMH4 at the dimer interface. While conformational changes in TMH6 and TMH7 are known from other microbial-type rhodopsins, our results indicate that TMH2 has a key role in light-induced channel opening and closing in ChR2.
This paper studies the life cycle consumption-investment-insurance problem of a family. The wage earner faces the risk of a health shock that significantly increases his probability of dying. The family can buy term life insurance with realistic features. In particular, the available contracts are long term, so that decisions are sticky and can only be revised at significant cost. Furthermore, a revision is only possible as long as the insured person is healthy. A second important and realistic feature of our model is that the labor income of the wage earner is unspanned. We document that the combination of unspanned labor income and the stickiness of insurance decisions reduces insurance demand significantly. This is because an income shock induces the need to reduce insurance coverage, since premia become less affordable. Since such a reduction is costly and families anticipate these potential costs, they buy less protection at all ages. In particular, young families stay away from life insurance markets altogether.
This country report was prepared for the 19th World Congress of the International Academy of Comparative Law in Vienna in 2014. It is structured as a questionnaire and provides an overview of the legal framework for Free and Open Source Software (FOSS) and other alternative license models, such as Creative Commons, under German law. The first set of questions addresses the applicable statutory provisions and the reported case law in this area. The second section concerns contractual issues, in particular with regard to the interpretation and validity of open content licenses. The third section deals with copyright aspects of open content models, for example regarding revocation rights and rights to equitable remuneration. The final set of questions pertains to patent, trademark and competition law issues of open content licenses.
"Library Buildings around the World" is a survey based on several years of research. The objective was to gather library buildings on an international level, starting with 1990.
The sections on Germany, France, the United Kingdom and the United States have been thoroughly revised, supplemented and completed for this 2nd edition. A revision of the other countries is planned for the next edition.
In the United States, on April 1, 2014, the set of rules commonly known as the “Volcker Rule”, prohibiting proprietary trading activities in banks, became effective. The implementation of this rule took more than three years, as “proprietary trading” is an inherently vague concept, overlapping strongly with genuinely economically useful activities such as market-making. As a result, the final Rule is a complex and lengthy combination of prohibitions and exemptions.
In January 2014, the European Commission put forward its proposal on banking structural reform. The proposal includes a Volcker-like provision, prohibiting large, systemically relevant financial institutions from engaging in proprietary trading or hedge fund-related business. This paper offers lessons to be learned from the implementation process for the Volcker rule in the US for the European regulatory process.
This paper distils three lessons for bank regulation from the experience of the 2009-12 euro-area financial crisis. First, it highlights the key role that sovereign debt exposures of banks have played in the feedback loop between bank and fiscal distress, and inquires how the regulation of banks’ sovereign exposures in the euro area should be changed to mitigate this feedback loop in the future. Second, it explores the relationship between the forbearance of non-performing loans by European banks and the tendency of EU regulators to rescue rather than resolve distressed banks, and asks to what extent the new regulatory framework of the euro-area “banking union” can be expected to mitigate excessive forbearance and facilitate the resolution of insolvent banks. Finally, the paper highlights that capital requirements based on the ratio of Tier-1 capital to banks’ risk-weighted assets were massively gamed by large banks, which engaged in various forms of regulatory arbitrage to minimize their capital charges while expanding leverage. This argues in favor of relying on a set of simpler and more robust indicators to determine banks’ capital shortfall, such as book and market leverage ratios.
This paper investigates the risk channel of monetary policy on the asset side of banks’ balance sheets. We use a factor-augmented vector autoregression (FAVAR) model to show that aggregate lending standards of U.S. banks, such as their collateral requirements for firms, are significantly loosened in response to an unexpected decrease in the Federal Funds rate. Based on this evidence, we reformulate the costly state verification (CSV) contract to allow for an active financial intermediary, embed it in a New Keynesian dynamic stochastic general equilibrium (DSGE) model, and show that – consistent with our empirical findings – an expansionary monetary policy shock implies a temporary increase in bank lending relative to borrower collateral. In the model, this is accompanied by a higher default rate of borrowers.
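For reference, the canonical FAVAR setup in the spirit of Bernanke, Boivin and Eliasz (2005) combines an observation equation and a factor-VAR transition equation; the following is a sketch of the general form, not necessarily the exact specification estimated here:

    X_t = \Lambda^f F_t + \Lambda^y Y_t + e_t,
    \begin{pmatrix} F_t \\ Y_t \end{pmatrix} = \Phi(L) \begin{pmatrix} F_{t-1} \\ Y_{t-1} \end{pmatrix} + v_t,

where X_t is a large panel of indicators (here including the lending standards), F_t are latent factors, Y_t contains observed policy variables such as the Federal Funds rate, and \Phi(L) is a lag polynomial. Monetary policy shocks are then identified from the innovations v_t, for example via a recursive ordering with the policy rate last.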
Concepts of legal capacity and legal subjectivity have developed gradually through intermediate stages. Accordingly, there are numerous types of legal subjects and partial legal subjects, and ever-new types can develop, at the latest once the law confronts new social and technological challenges. Today such challenges seem to be making themselves felt especially in the field of information and communication technologies. The specific communicative conditions resulting from the technological networking of social communication have a particularly pronounced influence on legal attributions of identity and action, and hence above all on issues of liability in electronic commerce. Here in particular it is becoming increasingly difficult to distinguish concrete human actors and, for example, to identify them as the authors of declarations of intent or even as individually responsible agents of legal transgressions. The communicative processes in this area appear instead as new kinds of chains of effects, whose actors are better described as socio-technical ensembles of people and things – and the artificial components of these hybrid human-thing linkages can sometimes even appear as driving forces and independent agents.
The subatomic world is governed by the strong interaction of quarks and gluons, described by Quantum Chromodynamics (QCD). Quarks experience confinement into colourless objects, i.e. they cannot be observed as free particles. Under extreme conditions such as high temperature or high density, this constraint softens and a transition to a phase where quarks and gluons are quasi-free particles, the Quark-Gluon Plasma, can occur. This environment resembles the conditions prevailing during the early stages of the universe shortly after the Big Bang.
The phase diagram of QCD is under investigation in current and future collider experiments, for example at the Large Hadron Collider (LHC) or at the Facility for Antiproton and Ion Research (FAIR). Due to the strength of the strong interaction in the energy regime of interest, analytic methods cannot be applied rigorously. The only tool to study QCD from first principles is simulation of its discretised version, Lattice QCD (LQCD).
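To illustrate the discretisation: in LQCD the gauge fields live on the links of a four-dimensional space-time lattice, and the simplest gauge action, the Wilson plaquette action, reads

    S_G = \beta \sum_{x,\, \mu < \nu} \left( 1 - \frac{1}{3} \, \mathrm{Re} \, \mathrm{Tr} \, U_{\mu\nu}(x) \right), \qquad \beta = \frac{6}{g^2},

where U_{\mu\nu}(x) is the product of link variables around an elementary plaquette and g the bare coupling. Expectation values are then computed by Monte Carlo sampling of the discretised path integral. (This is the textbook form; improved actions are common in practice and the thesis's setup may differ.)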
These simulations fall into the realm of high-performance computing; hence, the numerical aspects of LQCD are a vital part of this field of research. In recent years, Graphics Processing Units (GPUs) have been incorporated into these simulations, as they have become a standard tool for general-purpose calculations.
In the course of this thesis, the LQCD application cl2qcd was developed. Being based on OpenCL, it allows for simulations on GPUs as well as on traditional CPUs, and it constitutes the first OpenCL application for Wilson-type fermions. It provides excellent performance and has been applied in the physics studies presented in this thesis.
The investigation of the QCD phase diagram is hampered by the notorious sign problem, which restricts current simulation algorithms to small values of the chemical potential.
Theoretically, studying unphysical parameter ranges allows one to constrain the phase diagram. Of utmost importance is the clarification of the order of the finite-temperature transition in the Nf=2 chiral limit at zero chemical potential; it is not known whether it is of first or second order. To this end, simulations utilising Twisted Mass Wilson fermions aiming at the chiral limit are presented in this thesis.
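For orientation, the Twisted Mass formulation adds a chirally rotated mass term to the Wilson-Dirac operator, acting on a mass-degenerate flavour doublet:

    D_{tm} = D_W + m_0 + i \mu \gamma_5 \tau^3,

where D_W is the massless Wilson operator, m_0 the bare untwisted mass, \mu the twisted mass and \tau^3 the third Pauli matrix in flavour space. At maximal twist, physical observables are automatically O(a) improved, which makes this discretisation attractive for studies aiming at the chiral limit. (This is the standard formulation; details of the setup used in the thesis may differ.)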
Another possibility is the investigation of QCD at purely imaginary chemical potential. In this region, QCD is known to possess a rich phase structure, which can be used to constrain the phase diagram of QCD at real chemical potential and to clarify the nature of the Nf=2 chiral limit. This phase structure is studied within this thesis; in particular, the nature of the Roberge-Weiss endpoint is mapped out using Wilson fermions.
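The structure being exploited is the Roberge-Weiss symmetry: for N_c = 3 colours the partition function is periodic in the imaginary chemical potential,

    Z(T, i\mu_I) = Z\left(T, i\left(\mu_I + \tfrac{2\pi T}{3}\right)\right),

so that at high temperature first-order transitions occur at the critical values \mu_I / T = (2k+1)\pi/3 with integer k. These transition lines terminate in the Roberge-Weiss endpoint, whose nature (first order, second order or triple-point character) depends on the quark masses and is what is mapped out here.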