Samples of freshly fallen snow were collected at the high alpine research station Jungfraujoch (Switzerland) in February and March 2006 and 2007, during the Cloud and Aerosol Characterization Experiments (CLACE) 5 and 6. In this study a new technique was developed and demonstrated for the measurement of organic acids in fresh snow. The melted snow samples were subjected to solid-phase extraction and the resulting solutions analysed for organic acids by HPLC-MS-TOF using negative electrospray ionization. A series of linear dicarboxylic acids from C5 to C13, as well as phthalic acid, was identified and quantified. In several samples the biogenic pinonic acid was also observed. In fresh snow the median concentration of the most abundant acid, adipic acid, was 0.69 µg L-1 in 2006 and 0.70 µg L-1 in 2007. Glutaric acid was the second most abundant dicarboxylic acid, with median values of 0.46 µg L-1 in 2006 and 0.61 µg L-1 in 2007, while the aromatic phthalic acid showed a median concentration of 0.34 µg L-1 in 2006 and 0.45 µg L-1 in 2007. The concentrations in the samples from various snowfall events varied significantly and were found to depend on the back trajectory of the air mass arriving at Jungfraujoch. Air masses of marine origin showed the lowest acid concentrations, whereas the highest concentrations were measured when the air mass was strongly influenced by boundary-layer air.
Current atmospheric models do not include secondary organic aerosol (SOA) production from gas-phase reactions of polycyclic aromatic hydrocarbons (PAHs). Recent studies have shown that primary semivolatile emissions, previously assumed to be inert, undergo oxidation in the gas phase, leading to SOA formation. This opens the possibility that low-volatility gas-phase precursors are a potentially large source of SOA. In this work, SOA formation from gas-phase photooxidation of naphthalene, 1-methylnaphthalene (1-MN), 2-methylnaphthalene (2-MN), and 1,2-dimethylnaphthalene (1,2-DMN) is studied in the Caltech dual 28 m3 chambers. Under high-NOx conditions and aerosol mass loadings between 10 and 40 µg m-3, the SOA yields (mass of SOA per mass of hydrocarbon reacted) ranged from 0.19 to 0.30 for naphthalene, 0.19 to 0.39 for 1-MN, 0.26 to 0.45 for 2-MN, and were constant at 0.31 for 1,2-DMN. Under low-NOx conditions, the SOA yields were measured to be 0.73, 0.68, and 0.58 for naphthalene, 1-MN, and 2-MN, respectively. The SOA was observed to be semivolatile under high-NOx conditions and essentially nonvolatile under low-NOx conditions, owing to the higher fraction of ring-retaining products formed under low-NOx conditions. When applying these measured yields to estimate SOA formation from primary emissions of diesel engines and wood burning, PAHs are estimated to yield 3-5 times more SOA than light aromatic compounds. PAHs can also account for up to 54% of the total SOA from oxidation of diesel emissions, representing a potentially large source of urban SOA.
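The yield definition used above (mass of SOA formed per mass of hydrocarbon reacted) is simple enough to state in code; the numbers below are hypothetical illustrations, not the chamber measurements.

```python
def soa_yield(delta_m_soa, delta_hc):
    """SOA yield Y = mass of SOA formed / mass of hydrocarbon reacted.

    Both arguments must be in the same units, e.g. micrograms per
    cubic metre of chamber air.
    """
    if delta_hc <= 0:
        raise ValueError("reacted hydrocarbon mass must be positive")
    return delta_m_soa / delta_hc

# Hypothetical chamber numbers: 30 ug m-3 of SOA formed from
# 100 ug m-3 of precursor reacted gives a yield of 0.30.
print(soa_yield(30.0, 100.0))  # -> 0.3
```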
It has become popular for journalists who are trying to sell newspapers, and politicians who are trying to solicit votes, to refer to this financial crisis as the worst since the Great Depression or WWII. I don't know whether it is the worst or not, so I will leave that question to the historians and economists of the future once the storm has passed. But it is indeed a "storm", as described by Vince Cable, a Member of Parliament, in his UK bestselling book "The Storm: The World Economic Crisis and What It Means". He describes this "storm" as a very destructive one, displacing jobs, businesses, banks and whole economies from Iceland to the United Kingdom to the United States. I propose to offer a short chronology and summary of the causes of the current economic crisis. Then I will review several of the regulatory responses to the crisis, focusing on the Turner Report, the de Larosière Group and certain US Treasury statements. I will offer my critiques of these proposals and then make some predictions of what the financial services industry may look like in the future.
Since the Investment Amendment Act (Investmentänderungsgesetz) came into force on 28 December 2007, the investment industry has had a new structural option for an investment vehicle at its disposal: the externally managed investment stock corporation (fremdverwaltete Investmentaktiengesellschaft). The externally managed investment stock corporation appoints a capital investment company (Kapitalanlagegesellschaft) as its management company and transfers to it the general administrative activity as well as the investment and management of its funds. The following contribution examines the liability of the management company towards the shareholders of the externally managed investment stock corporation. It concludes that a statutory obligation exists, for the breach of which the shareholders of the investment stock corporation can claim damages from the management company pursuant to §§ 280(1), 249 et seq. BGB.
In this thesis the first fully integrated Boltzmann+hydrodynamics approach to relativistic heavy-ion reactions has been developed. After a short introduction that motivates the study of heavy-ion reactions as a tool to gain insight into the QCD phase diagram, the most important theoretical approaches to describe the system are reviewed. To model the dynamical evolution of the collective system under the assumption of local thermal equilibrium, ideal hydrodynamics is a good tool. Nowadays, the development of either viscous hydrodynamic codes or hybrid approaches is favoured. For the microscopic description of the hadronic as well as the partonic stage of the evolution, transport approaches have been successfully applied, since they generate the full phase-space dynamics of all the particles. The hadron-string transport approach that this work is based on is the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) approach. It constitutes an effective solution of the relativistic Boltzmann equation and is restricted to binary collisions of the propagated hadrons. Therefore, the Boltzmann equation and the basic assumptions of this model are introduced. Furthermore, predictions for the charged-particle multiplicities at LHC energies are made. The next step is the development of a new framework to calculate the baryon number density in a transport approach. Time evolutions of the net baryon number and the quark density have been calculated at AGS, SPS and RHIC energies, and the new approach leads to reasonable results over the whole energy range. Studies of phase-diagram trajectories using hydrodynamics are performed as a first step towards the development of the hybrid approach. The hybrid approach that has been developed as the main part of this thesis is based on the UrQMD transport approach with an intermediate hydrodynamical evolution for the hot and dense stage of the collision.
The initial energy and baryon number density distributions are not smooth and not symmetric in any direction, and the initial velocity profiles are non-trivial since they are generated by the non-equilibrium transport approach. The full (3+1)-dimensional ideal relativistic one-fluid evolution is solved using the SHASTA algorithm. For the present work, three different equations of state have been used, namely a hadron gas equation of state without a QGP phase transition, a chiral EoS and a bag model EoS including a strong first-order phase transition. For the freeze-out transition from hydrodynamics to the cascade calculation, two different set-ups are employed: either a freeze-out that is isochronous in the computational frame, or a gradual freeze-out that mimics an iso-eigentime criterion. The particle vectors are generated by Monte Carlo methods according to the Cooper-Frye formula, and UrQMD takes care of the final decoupling of the particles. The parameter dependences of the model are investigated and the time evolution of different quantities is explored. The final pion and proton multiplicities are lower in the hybrid model calculation due to the isentropic hydrodynamic expansion, while the yields for strange particles are enhanced due to the local equilibrium in the hydrodynamic evolution. The elliptic flow values at SPS energies are shown to be in line with an ideal hydrodynamic evolution if a proper initial state is used and the final freeze-out proceeds gradually. The hybrid model calculation is able to reproduce the experimentally measured integrated as well as transverse-momentum-dependent $v_2$ values for charged particles. The multiplicity and mean transverse mass excitation functions are calculated for pions, protons and kaons in the energy range $E_{\rm lab}=2-160A~$GeV. It is observed that the different freeze-out procedures have almost as much influence on the mean transverse mass excitation function as the equation of state.
The experimentally observed step-like behaviour of the mean transverse mass excitation function is only reproduced if a first-order phase transition with a large latent heat is applied or the EoS is effectively softened by non-equilibrium effects in the hadronic transport calculation. The HBT correlations of the negatively charged pion source created in central Pb+Pb collisions at SPS energies are investigated with the hybrid model. It has been found that the latent heat visibly influences the emission of particles and hence the HBT radii of the pion source. The final hadronic interactions after the hydrodynamic freeze-out are very important for the HBT correlations, since a large number of collisions and decays still takes place during this period.
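The elliptic flow coefficient $v_2$ discussed above is, in its simplest form, the event average of cos(2φ) over the particles' azimuthal angles φ measured relative to the reaction plane. A minimal sketch of that definition on a toy particle sample (this is illustrative only, not the hybrid-model code):

```python
import math

def elliptic_flow_v2(phis):
    """v2 = <cos(2*phi)>, with phi the azimuthal angle of each
    particle relative to the reaction plane, in radians."""
    return sum(math.cos(2.0 * phi) for phi in phis) / len(phis)

# Toy in-plane-enhanced sample: particles at phi = 0 and pi each
# contribute cos(0) = cos(2*pi) = 1, the particle at pi/2
# contributes cos(pi) = -1, so v2 = (1 + 1 - 1 + 1) / 4 = 0.5.
print(elliptic_flow_v2([0.0, math.pi, math.pi / 2, 0.0]))  # -> 0.5
```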
Background: Heme oxygenase-1 is an inducible cytoprotective enzyme which handles oxidative stress by generating the anti-oxidant bilirubin and the vasodilator carbon monoxide. A (GT)n dinucleotide repeat and a -413A>T single nucleotide polymorphism in the promoter region of HMOX1 have both been reported to influence the occurrence of coronary artery disease (CAD) and myocardial infarction (MI). We sought to validate these observations in persons scheduled for coronary angiography. Methods: We included 3219 subjects in the current analysis: 2526 with CAD, including a subgroup with CAD and MI (n = 1339), and 693 controls. Coronary status was determined by coronary angiography. Risk factors and biochemical parameters (bilirubin, iron, LDL-C, HDL-C, and triglycerides) were determined by standard procedures. The dinucleotide repeat was analysed by PCR and subsequent sizing by capillary electrophoresis, the -413A>T polymorphism by PCR and RFLP. Results: In the LURIC study the allele frequencies for the -413A>T polymorphism are A = 0.589 and T = 0.411. The (GT)n repeats range between 14 and 39 repeats, with 22 (19.9%) and 29 (47.1%) as the two most common alleles. We found no association of the genotypes or allele frequencies with any of the biochemical parameters, nor with CAD or previous MI. Conclusion: Although an association of these polymorphisms with the occurrence of CAD and MI has been published before, our results strongly argue against a relevant role of the (GT)n repeat or the -413A>T SNP in the HMOX1 promoter in CAD or MI.
We calculate leading-order dilepton yields from a quark-gluon plasma which has a time-dependent anisotropy in momentum space. Such anisotropies can arise during the earliest stages of quark-gluon plasma evolution due to the rapid longitudinal expansion of the created matter. A phenomenological model for the proper time dependence of the parton hard momentum scale, p_hard, and the plasma anisotropy parameter, xi, is proposed. The model describes the transition of the plasma from a 0+1 dimensional collisionally-broadened expansion at early times to a 0+1 dimensional ideal hydrodynamic expansion at late times. We find that high-energy dilepton production is enhanced by pre-equilibrium emission up to 50% at LHC energies, if one assumes an isotropization/thermalization time of 2 fm/c. Given sufficiently precise experimental data this enhancement could be used to determine the plasma isotropization time experimentally.
Introduction: Impaired renal function and/or pre-existing atherosclerosis in the deceased donor increase the risk of delayed graft function and impaired long-term renal function in kidney transplant recipients. Case presentation: We report delayed graft function occurring simultaneously in two kidney transplant recipients, aged 57 and 39 years, who received renal allografts from the same deceased donor. The 62-year-old donor died of cardiac arrest during an asthmatic state. Renal-allograft biopsies performed in both recipients because of delayed graft function revealed cholesterol-crystal embolism. Empiric statin therapy in addition to low-dose acetylsalicylic acid was initiated. After 10 and 6 hemodialysis sessions at 48-hour intervals, respectively, both renal allografts started to function. Glomerular filtration rates at discharge were 26 ml/min/1.73 m2 and 23.9 ml/min/1.73 m2, and remained stable in follow-up examinations. Possible causes of cholesterol-crystal embolism related to the donor and to the surgical procedure are discussed. Conclusion: Cholesterol-crystal embolism should be considered as a cause of delayed graft function and long-term impaired renal allograft function, especially in the older donor population.
Methods for dichoptic stimulus presentation in functional magnetic resonance imaging: a review
(2009)
Dichoptic stimuli (different stimuli displayed to each eye) are increasingly being used in functional brain imaging experiments with visual stimulation. These studies include investigations into binocular rivalry, interocular information transfer and three-dimensional depth perception, as well as impairments of the visual system such as amblyopia and stereodeficiency. In this paper, we review various approaches to dichoptic stimulus display used in functional magnetic resonance imaging experiments. These include traditional approaches using filters (red-green, red-blue, polarizing) with optical assemblies, as well as newer approaches using bi-screen goggles.
In this paper, we argue that difficulties in the definition of coreference itself contribute to lower inter-annotator agreement in certain cases. Data from a large referentially annotated corpus serves to corroborate this point, using a quantitative investigation to assess which effects or problems are likely to be the most prominent. Several examples where such problems occur are discussed in more detail. We then propose a generalisation of Poesio, Reyle and Stevenson's Justified Sloppiness Hypothesis to provide a unified model for these cases of disagreement, and argue that a deeper understanding of the phenomena involved allows us to tackle problematic cases in a more principled fashion than would be possible using only pre-theoretic intuitions.
Traditionally, parsers are evaluated against gold-standard test data. This can cause problems if there is a mismatch between the data structures and representations used by the parser and the gold standard. A particular case in point is German, for which two treebanks (TiGer and TüBa-D/Z) are available with highly different annotation schemes for the acquisition of (e.g.) PCFG parsers. The differences between the TiGer and TüBa-D/Z annotation schemes make fair and unbiased parser evaluation difficult [7, 9, 12]. The resource (TEPACOC) presented in this paper takes a different approach to parser evaluation: instead of providing evaluation data in a single annotation scheme, TEPACOC uses comparable sentences and their annotations for 5 selected key grammatical phenomena (with 20 sentences per phenomenon) from both the TiGer and TüBa-D/Z resources. This provides a 2 x 100 sentence comparable test suite which allows us to evaluate TiGer-trained parsers against the TiGer part of TEPACOC, and TüBa-D/Z-trained parsers against the TüBa-D/Z part of TEPACOC for key phenomena, instead of comparing them against a single (and potentially biased) gold standard. To overcome the problem of inconsistency in human evaluation and to bridge the gap between the two different annotation schemes, we provide an extensive error classification, which enables us to compare parser output across the two treebanks. In the remaining part of the paper we present the test suite and describe the grammatical phenomena covered in the data. We discuss the different annotation strategies used in the two treebanks to encode these phenomena and present our error classification of potential parser errors.
We present several parsing algorithms for Range Concatenation Grammars (RCG), including a new Earley-style algorithm, within the deductive parsing paradigm. Our work is motivated by the recent interest in this type of grammar, and fills a gap in the existing literature.
In the recent literature the phenomenon of long distance agreement has become the focus of several studies as it seems to violate certain locality conditions which require that agreeing elements in general stand in clause-mate relationships. In particular, it involves a verb agreeing with a constituent which is located in the verb's clausal complement and hence poses a challenge for theories that assume a strictly local relationship for agreement. In this paper we present empirical evidence from Greek and Romanian for the reality of long distance agreement. Specifically, we focus on raising constructions in these two languages and we show that they do not involve movement but rather instantiate long distance agreement. We further argue that subjunctives allowing long distance agreement lack both a CP layer and semantic Tense. However, since the embedded verb also bears phi-features, these constructions pose a further problem for assumptions that view the presence of phi-features as evidence for the presence of a C layer. Finally, we raise the question of the common properties that these languages have that lead to the presence of long distance agreement.
Distributional approximations to lexical semantics are very useful not only in supporting the creation of lexical semantic resources (Kilgarriff et al., 2004; Snow et al., 2006), but also when directly applied in tasks that can benefit from large-coverage semantic knowledge, such as coreference resolution (Poesio et al., 1998; Gasperin and Vieira, 2004; Versley, 2007), word sense disambiguation (McCarthy et al., 2004) or semantic role labeling (Gordon and Swanson, 2007). We present a model that is built from Web-based corpora using both shallow patterns for grammatical and semantic relations and a window-based approach, using singular value decomposition to decorrelate the feature space, which is otherwise too heavily influenced by the skewed topic distribution of Web corpora.
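The SVD step described above can be sketched with numpy: build a word-by-context count matrix, then keep only the top-k singular dimensions, which decorrelates the features. The tiny matrix below is toy data, not the Web-corpus counts used in the paper.

```python
import numpy as np

# Toy word-by-context co-occurrence counts
# (rows: words, columns: context features).
counts = np.array([
    [4.0, 3.0, 0.0, 1.0],
    [3.0, 4.0, 0.0, 0.0],
    [0.0, 0.0, 5.0, 4.0],
])

# Truncated SVD: keep the k strongest singular dimensions.
k = 2
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
reduced = U[:, :k] * s[:k]  # word vectors in the reduced space

def cos(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words 0 and 1 share contexts, word 2 mostly does not, and this
# survives the dimensionality reduction.
print(cos(reduced[0], reduced[1]) > cos(reduced[0], reduced[2]))  # -> True
```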
Parsing coordinations
(2009)
The present paper is concerned with statistical parsing of constituent structures in German. It presents four experiments that aim at improving the parsing performance for coordinate structures: 1) reranking the n-best parses of a PCFG parser, 2) enriching the input to a PCFG parser with gold scopes for each conjunct, 3) reranking the parser output for all conjunct scopes that are permissible with regard to clause structure, and 4) reranking a combination of the parses from experiments 1 and 3. The experiments show that n-best parsing combined with reranking improves results by a large margin. Providing the parser with different scope possibilities and reranking the resulting parses increases the F-score from 69.76 for the baseline to 74.69. While this F-score is similar to that of the first experiment (n-best parsing and reranking), the first experiment yields higher recall (75.48% vs. 73.69%) and the third higher precision (75.43% vs. 73.26%). Combining the two methods gives the best result, with an F-score of 76.69.
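The F-scores quoted above are the balanced harmonic mean of precision and recall. As a quick sanity check on the reported figures (pairing the quoted precision and recall values per experiment is an assumption here, so the results only approximate the reported F-scores):

```python
def f_score(precision, recall):
    """Balanced F-score: the harmonic mean of precision and recall."""
    return 2.0 * precision * recall / (precision + recall)

# Experiment 1 (n-best parsing + reranking): higher recall.
print(round(f_score(73.26, 75.48), 2))  # -> 74.35
# Experiment 3 (scope variants + reranking): higher precision.
print(round(f_score(75.43, 73.69), 2))  # -> 74.55
```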
The aim of this paper is to address two main counterarguments raised in Landau (2007) against the movement analysis of Control, and especially against the phenomenon of Backward Control. The paper shows that unlike the situation described in Tsez (Polinsky & Potsdam 2002), Landau's objections do not hold for Greek and Romanian, where all obligatory control verbs exhibit Backward Control. Our results thus provide stronger empirical support for a theoretical approach to Control in terms of Movement, as defended in Hornstein (1999 and subsequent work).
The recent financial crisis has led to a vigorous debate about the pros and cons of fair-value accounting (FVA). This debate presents a major challenge for FVA going forward and standard setters’ push to extend FVA into other areas. In this article, we highlight four important issues as an attempt to make sense of the debate. First, much of the controversy results from confusion about what is new and different about FVA. Second, while there are legitimate concerns about marking to market (or pure FVA) in times of financial crisis, it is less clear that these problems apply to FVA as stipulated by the accounting standards, be it IFRS or U.S. GAAP. Third, historical cost accounting (HCA) is unlikely to be the remedy. There are a number of concerns about HCA as well and these problems could be larger than those with FVA. Fourth, although it is difficult to fault the FVA standards per se, implementation issues are a potential concern, especially with respect to litigation. Finally, we identify several avenues for future research. JEL Classification: G14, G15, G30, K22, M41, M42
The utility-maximizing consumption and investment strategy of an individual investor receiving an unspanned labor income stream seems impossible to find in closed form and very difficult to find using numerical solution techniques. We suggest an easy procedure for finding a specific, simple, and admissible consumption and investment strategy which is near-optimal in the sense that the wealth-equivalent loss compared to the unknown optimal strategy is very small. We first explain and implement the strategy in a simple setting with constant interest rates, a single risky asset, and an exogenously given income stream, but we also show that the success of the strategy is robust to changes in parameter values, to the introduction of stochastic interest rates, and to endogenous labor supply decisions.
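For context, the no-income special case of this problem has the classic closed-form Merton solution, against which such near-optimal strategies are typically benchmarked. A minimal sketch of that benchmark (the parameter values are hypothetical, and this is not the paper's proposed strategy):

```python
def merton_risky_fraction(mu, r, sigma, gamma):
    """Merton's constant optimal fraction of wealth in the risky
    asset for CRRA utility without labor income:
    (mu - r) / (gamma * sigma**2)."""
    return (mu - r) / (gamma * sigma ** 2)

# Hypothetical inputs: 7% expected stock return, 2% interest rate,
# 20% volatility, relative risk aversion 2.5
# -> hold 50% of wealth in the risky asset.
print(round(merton_risky_fraction(0.07, 0.02, 0.20, 2.5), 2))  # -> 0.5
```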
In this paper, we analyze economies of scale for German mutual fund complexes. Using 2002-2005 data on 41 investment management companies, we specify a hedonic translog cost function. Applying a fixed-effects regression to a one-way error component model, we find clear evidence of significant overall economies of scale. At the level of individual mutual fund complexes we find significant economies of scale for all of the companies in our sample. With regard to cost efficiency, we find that the average mutual fund complexes in all size quartiles deviate considerably from the best-practice cost frontier. JEL Classification: G2, L25. Keywords: mutual fund complex, investment management company, cost efficiency, economies of scale, hedonic translog cost function, fixed effects regression, one-way error component model
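In a translog cost function, ln C = a0 + a1 ln Q + (a2/2)(ln Q)^2 + ..., economies of scale are conventionally read off the cost elasticity d ln C / d ln Q: an elasticity below one means costs grow more slowly than output. A minimal sketch with hypothetical coefficients (not the paper's estimates, and with the hedonic terms omitted):

```python
import math

def cost_elasticity(q, a1, a2):
    """d ln C / d ln Q for a translog cost function
    ln C = a0 + a1*ln(Q) + 0.5*a2*ln(Q)**2 (hedonic terms omitted)."""
    return a1 + a2 * math.log(q)

def scale_economies(q, a1, a2):
    """Conventional measure SE = 1 - elasticity; SE > 0 indicates
    economies of scale at output level q."""
    return 1.0 - cost_elasticity(q, a1, a2)

# Hypothetical coefficients: elasticity 0.85 at Q = 1 -> SE = 0.15.
print(round(scale_economies(1.0, 0.85, 0.02), 2))  # -> 0.15
```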
This paper examines whether the majority shareholder of a company attempts to influence capital market expectations negatively in the run-up to a squeeze-out of minority shareholders. Such "manipulative" behaviour is frequently assumed in both the legal and the business literature, since the share price forms the lower bound for the compensation amount. Our empirical study of the financial reporting and press release policy of squeeze-out companies on the German capital market in the run-up to the announcement of such a measure shows that a significant increase (decrease) in pessimistically (optimistically) toned press releases can indeed be observed in this period. However, it also shows that the shares of squeeze-out candidates earn such high positive abnormal returns in the run-up to and on the day of the announcement that the cumulative effect of the disclosure policy on the market valuation, as quantified by us, is only very small overall and is dominated by other factors (e.g. speculation about the compensation). JEL: M41, M40, G14, K22
Gauging risk with higher moments: handrails in measuring and optimising conditional value at risk
(2009)
The aim of the paper is to study empirically the influence of higher moments of the return distribution on conditional value at risk (CVaR). To be more precise, we attempt to reveal the extent to which the risk given by CVaR can be estimated from the mean, standard deviation, skewness and kurtosis. Furthermore, we study how this relationship can be utilised in portfolio optimisation. First, based on a database of 600 individual equity returns from 22 emerging world markets, factor models incorporating the first four moments of the return distribution were constructed at different confidence levels for CVaR, and the contribution of the identified factors to explaining CVaR was determined. Following this, the influence of higher moments was examined in a portfolio context, i.e. asset allocation decisions were simulated by creating emerging-market portfolios from the viewpoint of US investors. This can be regarded as the normal decision-making process of a hedge fund focusing on investments in emerging markets. In our analysis we compared and contrasted two approaches with which one can overcome the shortcomings of the variance as a risk measure. First, we solved a multi-objective portfolio optimisation problem for different sets of conflicting higher-moment preferences. In addition, portfolio optimisation was performed in the mean-CVaR framework, characterised by using CVaR as the measure of risk. As part of the analysis, pairwise comparisons of the different higher-moment metrics of the mean-variance and mean-CVaR efficient portfolios were also made. Throughout the work, special attention was given to the preferences for the different higher moments implied in optimising CVaR. We also examined the extent to which model risk, namely the risk of wrongly assuming normally distributed returns, can deteriorate the optimal portfolio choice. JEL Classification: G11, G15, C61
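Conditional value at risk at confidence level alpha is the expected loss over the worst (1 - alpha) tail of the return distribution. A minimal empirical estimator on a sample of returns can be sketched as follows (the sample is toy data, not the 600-equity database):

```python
import math

def cvar(returns, alpha=0.95):
    """Empirical CVaR: the average loss over the worst (1 - alpha)
    fraction of the return sample (losses reported as positives)."""
    losses = sorted((-r for r in returns), reverse=True)  # worst first
    k = max(1, math.ceil((1.0 - alpha) * len(losses)))
    return sum(losses[:k]) / k

# Toy daily returns; with alpha = 0.8 the worst 20% of these 10
# observations (2 of them: -6% and -4%) are averaged.
sample = [0.01, -0.02, 0.015, -0.06, 0.005, 0.02, -0.04, 0.01, 0.0, -0.01]
print(round(cvar(sample, alpha=0.8), 4))  # -> 0.05
```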
At the 67th German Jurists Forum (Deutscher Juristentag, DJT) in Erfurt, a fundamental question of German stock corporation law was discussed: a stronger differentiation between listed and unlisted stock corporations was demanded. Individual deregulation proposals in this context concerned the reach of the principle of strict statutory rules for corporate charters (Satzungsstrenge), restrictions on share transfers (Vinkulierung) and multiple voting rights. The following study deals with the question of whether a differentiation between listed and unlisted stock corporations is convincing, in particular against the background of a comparative and empirical analysis. In detail, Bayer's proposal at the 67th DJT is first briefly presented (II.). Next, the importance of over-the-counter trading in Germany is examined (III.). Subsequently, German, English and, cursorily, US stock corporation and capital market law are compared (IV.). This is followed by a comment on Bayer's reform proposal (V.). A conclusion completes the study (VI.).
Islamic mystical Quranic exegesis (at-tafsir al-isari) is a school of Quranic interpretation. The exegetes belonging to this school interpret individual verses of the Quran through kashf (lit. unveiling, discovery) and ilham (inspiration). According to their conviction, the meaning of these verses was placed into their hearts by God.
In the present work, the risk of knee osteoarthritis (gonarthrosis) from lifting and carrying heavy loads at work was examined in a case-control study with 295 cases and 327 controls. A statistically significant increase in risk was found for the cumulative lifting and carrying of heavy loads. Likewise, a statistically significant increase in risk was found for the cumulative duration of activities performed kneeling, squatting or sitting on the heels in combination with the cumulative lifting and carrying of heavy loads. A dose-response relationship could be demonstrated, which supports a causal link. The results of this work indicate that elevated BMI values are associated with an increased risk of knee osteoarthritis; the data were therefore adjusted for BMI. Certain sporting activities are accompanied by increased wear of the knee joints, and the results gave slight indications that jogging or track-and-field athletics promote knee joint wear. The data were therefore also adjusted for these exposures, which were treated as confounders. The present work showed a doubled risk of knee osteoarthritis already from a cumulative lifting and carrying of heavy loads over the entire working life of more than 5120 kg*h. For the combination of a cumulative duration of kneeling, squatting or heel-sitting activities of 4757 to <10800 hours with a cumulative lifting and carrying of heavy loads of 5120 to <37000 kg*h, a more than doubled risk of knee osteoarthritis was shown. The present work provides evidence for the assumption of an increased risk of knee osteoarthritis from lifting and carrying heavy loads.
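Risk increases of the kind reported here ("a doubled risk") are, in a case-control design, expressed as odds ratios from a 2x2 exposure table. A minimal sketch with hypothetical counts (the split of the 295 cases and 327 controls into exposed/unexposed below is invented, not the study's data):

```python
def odds_ratio(exposed_cases, unexposed_cases,
               exposed_controls, unexposed_controls):
    """Odds ratio for a case-control 2x2 table:
    OR = (a / c) / (b / d) = (a * d) / (b * c)."""
    return (exposed_cases * unexposed_controls) / \
           (unexposed_cases * exposed_controls)

# Hypothetical split: 120 exposed / 175 unexposed cases,
# 80 exposed / 247 unexposed controls -> roughly doubled odds.
print(round(odds_ratio(120, 175, 80, 247), 2))  # -> 2.12
```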
If we want to develop a semantic analysis for explicit performatives such as "I promise you to free Willy", we are faced with the following puzzle: In order to account for the speech act expressed by the performative verb, one can assume that the so-called performative clause is purely performative and provides the illocutionary force of the speech act whose content is given by the semantic object denoted by the complement clause. Yet under this perspective, the performative clause (that is, besides the performative verb, the indexicals "I" and "you" that refer to the speaker and the addressee of the utterance context) is semantically invisible and does not compositionally contribute its meaning to the meaning of the entire explicit performative sentence. Conversely, if we account for the truth-conditional contribution of the performative clause and deny that the meaning of the performative verb is purely performative, then we have to find a way to account for the speech act expressed by the performative verb. Of course, there is already the widely accepted and very appealing indirectness account of explicit performative utterances developed by Bach & Harnish (1979). Roughly, Bach and Harnish solve this puzzle by deriving the performativity through a pragmatic inference process. According to them, the important speech act performed by means of the utterance of the explicit performative sentence is a kind of conventionalized indirect speech act. However, the boundary between semantics and pragmatics can be drawn in many different ways. Therefore, I think there could be other perspectives on the interface between the truth-functional treatment of declarative explicit performative sentences and the speech acts performed with their utterances, which are expressed by the performative verbs.
Hence, this thesis is an experiment in developing a further analysis and in examining its consequences for the semantics and pragmatics of explicit performative utterances and for the interface that emerges. Briefly, the experiment runs as follows: first, I develop an analysis for explicit performative sentences framed by parenthetical structures, as in (1)(a). In a second step, this parenthetical analysis is applied to the proper Austinian explicit performative sentences, as in (1)(b). (1) a. Tomorrow, I promise you this, I will teach them Tyrolean songs. b. I promise you that I will teach them Tyrolean songs. Analyzing explicit performatives framed by parenthetical structures first has the advantage that we are faced with two utterances of two main clauses. In (1)(a) there is the utterance of the host sentence Tomorrow I will teach them Tyrolean songs, and the utterance of the explicit parenthetical I promise you this, where the demonstrative this refers to the utterance of Tomorrow I will teach them Tyrolean songs. Since speakers perform speech acts with utterances of main clauses, I assume that the meaning of the explicit parenthetical I promise you this specifies that the actual illocutionary force of the utterance of Tomorrow I will teach them Tyrolean songs is the illocutionary force of a promise. Hence, instead of deriving an indirect illocutionary force by means of a pragmatic inference schema, we can deal with an ordinary direct speech act that is performed with the utterance of the host sentence. This kind of analysis stresses the particular discourse function of explicit performative utterances: performative verbs are used whenever the contextual information is not sufficient to determine the illocutionary force of the corresponding implicit speech act. The resulting consequences of the parenthetical analysis are interesting, since they cast a different light on performative verbs.
Surprisingly, the performative verbs turn out not to be performative at all. They do not constitute the execution of a speech act, but are execution-supporting: instead of constituting the particular illocutionary force, they merely specify the illocutionary force of the utterance of the host sentence. For instance, the speaker utters the explicit parenthetical I promise you this in order to specify what he is simultaneously doing. Hence the speaker does not succeed in performing the promise simply because he is uttering I promise you this. Rather, by means of the information conveyed by the utterance of I promise you this, the potential illocutionary forces of the utterance of the host sentence are disambiguated. Thus, it is not the case that explicit parentheticals are trivially true when uttered; their function is more complex, and their self-verifying property ('saying so makes it so') is explained by means of disambiguation. Furthermore, according to the parenthetical analysis, instead of being purely performative, the performative verbs compositionally contribute their meanings to the truth conditions of the entire explicit performative sentence. Together with its consequences, this analysis is applied to the proper Austinian performatives, which display subordination. I assume that, regardless of their structure, explicit performatives always behave semantically and pragmatically as the parenthetical analysis predicts.
Ornithology has a long tradition in Frankfurt. Long before the university was founded in 1914, for example, naturalists and nature enthusiasts were active in associations such as the Senckenberg Gesellschaft für Naturforschung (SGN, founded 1817) and the Zoologische Gesellschaft Frankfurt (ZGF, founded 1858). Biographical sketches trace the path from the pioneering days of Frankfurt ornithology to the present day.
Until a few years ago, ribonucleic acid (RNA) was a neglected child of research, not least because its versatility was underestimated. It is now known that RNA not only delivers the genetic blueprints from the cell nucleus to the ribosomes, but can also take on important catalytic and regulatory functions. Thanks to two professorships in Chemical Biology endowed by the Aventis Foundation, Goethe University was able to strengthen its activities in this field in 2008: while Beatrix Süß investigates the biochemical properties of RNA, Jens Wöhnert concentrates on elucidating its structure. ...
Biventricular pacing has been suggested for end-stage heart failure. We present a 59-year-old patient undergoing a second re-do CABG (coronary artery bypass graft) and carotid artery endarterectomy. The ejection fraction was 15%, the QRS width 175 ms. Following the carotid and CABG procedures, an implanted single-chamber ICD (implantable cardioverter defibrillator) was upgraded to permanent biventricular DDD pacing by implantation of one epicardial left ventricular and one epicardial atrial electrode. At follow-up two months postoperatively, the ejection fraction had significantly improved to 45%; the patient underwent a stress test with adequate load and reported a good quality of life.
Background Hepatitis C virus (HCV) is a leading cause of chronic liver disease, end-stage cirrhosis, and liver cancer, but little is known about the burden of disease caused by the virus. We summarised the burden of disease data presently available for Europe, compared the data to current expert estimates, and identified areas in which better data are needed. Methods The literature and international health databases were systematically searched for HCV-specific burden of disease data, including incidence, prevalence, mortality, disability-adjusted life-years (DALYs), and liver transplantation. Data were collected for the WHO European region with emphasis on 22 countries. If HCV-specific data were unavailable, they were calculated via HCV-attributable fractions. Results HCV-specific burden of disease data for Europe are scarce. Incidence data provided by national surveillance are not fully comparable and need to be standardised. HCV prevalence data are often inconclusive. According to the available data, an estimated 7.3–8.8 million people (1.1–1.3%) are infected in our 22 focus countries. HCV-specific mortality, DALY, and transplantation data are unavailable. Estimates via HCV-attributable fractions indicate that HCV caused more than 86,000 deaths and 1.2 million DALYs in the WHO European region in 2002. Most of the DALYs (95%) were accumulated by patients in preventable disease stages. About one quarter of the liver transplants performed in 25 European countries in 2004 were attributable to HCV. Conclusion Our results indicate that hepatitis C is a major health problem and highlight the importance of timely antiviral treatment. However, data on the burden of disease of hepatitis C in Europe are scarce, outdated or inconclusive, which indicates that hepatitis C is still a neglected disease in many countries. Public awareness, co-ordinated action plans, and better data are needed.
European physicians should be aware that many infections are still undetected, provide timely testing and antiviral treatment, and avoid iatrogenic transmission.
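The methods above derive HCV-specific burden where direct figures are missing by multiplying the total burden of a disease endpoint by the fraction of that endpoint attributable to HCV. A minimal sketch of this arithmetic (the figures below are hypothetical, not the study's data):

```python
def hcv_attributable_burden(total_burden, attributable_fraction):
    """Burden attributable to HCV = total burden of an endpoint
    (e.g. cirrhosis deaths or DALYs) times the HCV-attributable
    fraction for that endpoint (between 0 and 1)."""
    return total_burden * attributable_fraction

# Hypothetical example: 100,000 cirrhosis deaths in a region,
# of which 25% are assumed attributable to HCV.
print(hcv_attributable_burden(100_000, 0.25))  # 25000.0
```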
Background Public health systems are confronted with constantly rising costs, while diagnostic and treatment services are becoming more and more specialized. These were the reasons for an interdisciplinary project aiming, on the one hand, at simplifying the planning and scheduling of patient appointments and, on the other hand, at fulfilling all requirements of efficiency and treatment quality. Methods To understand the procedures involved and to solve the problems identified, the responsible project group proceeded strictly in four methodical steps: analysis of the current state, analysis of causes, corrective measures, and examination of effectiveness. Various methods of quality management were applied, for instance opinion polls, data collections, and several procedures for problem identification and for generating solution proposals. All activities were carried out according to the requirements of the clinic's ISO 9001:2000 certified quality management system. The development of this project is described step by step, from the planning phase to its introduction into the daily routine of the clinic and the subsequent control of effectiveness. Results Five significant problem fields were identified. After an analysis of causes, the major remedial measures were: installation of a patient telephone hotline, standardization of appointment arrangements for all patients, modification of the appointment book to take the reason for the visit into account by planning defined working periods for certain symptoms and treatments, improvement of telephone counselling, and transition to flexible time planning with daily updates of the appointment book. After implementation of these changes into the clinic's routine, success could be demonstrated by significantly reduced waiting times and a resulting increase in patient satisfaction.
Conclusion Systematic scrutiny of the existing organizational structures of our clinic's outpatient department by means of current-state analysis and analysis of causes revealed the need for improvement. In accordance with quality management principles, corrective measures and a subsequent examination of effectiveness were performed. These changes resulted in higher satisfaction of patients, referring colleagues, and clinic staff alike. Additionally, the clinic is now able to cope with an increasing demand for outpatient appointments, and the clinic's human resources are employed more effectively.
Background Ongoing changes in cancer care are increasing the complexity of cases, characterized by modern treatment techniques and a higher demand for patient information about the underlying disease and therapeutic options. At the same time, the restructuring of health services and reduced funding have led to the downsizing of hospital care services. These trends strongly influence the workplace environment and are a potential source of stress and burnout among professionals working in radiotherapy. Methods and patients A postal survey was sent to members of the workgroup "Quality of Life", which is part of DEGRO (German Society for Radiooncology). Thus far, 11 departments have answered the survey. 406 (76.1%) of 534 cancer care workers (23% physicians, 35% radiographers, 31% nurses, 11% physicists) from 8 university hospitals and 3 general hospitals completed the FBAS form (Stress Questionnaire of Physicians and Nurses; 42 items, 7 scales), a self-designed questionnaire regarding their work situation, and one question on global job satisfaction. Furthermore, the participants could make voluntary suggestions about how to improve their situation. Results Nurses and physicians showed the highest level of job stress (total score 2.2 and 2.1). The greatest sources of job stress for physicians, nurses and radiographers stemmed from structural conditions (e.g. underpayment, ringing of the telephone) and from "stress by compassion" (e.g. "long suffering of patients", "patients will be kept alive using all available resources against the conviction of staff"). In multivariate analyses, professional group (p < 0.001), working night shifts (p = 0.001), age group (p = 0.012) and free-time compensation (p = 0.024) gained significance for the total FBAS score. Global job satisfaction was 4.1 on a 9-point scale (from 1, very satisfied, to 9, not satisfied).
Comparing the total stress scores across hospitals and job groups, we found significant differences for nurses (p = 0.005) and physicists (p = 0.042) and borderline significance for physicians (p = 0.052). In multivariate analyses, professional group (p = 0.006) and vocational experience (p = 0.036) were associated with job satisfaction (cancer care workers with < 2 years of vocational experience having higher global job satisfaction). The total FBAS score correlated with job satisfaction (Spearman's rho = 0.40; p < 0.001). Conclusion Current workplace environments have a negative impact on the stress levels and satisfaction of radiotherapy staff. Identifying and removing the critical points mentioned above requires various changes, which should lead to a reduction of stress.
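The correlation reported between the total FBAS score and job satisfaction is Spearman's rho, i.e. the Pearson correlation of the rank-transformed data. A minimal stdlib-only sketch of that computation (the toy data below are invented, not the survey's responses):

```python
def ranks(values):
    """Assign 1-based average ranks, giving tied values their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the ranks of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Perfectly monotone toy data yield rho = 1
print(round(spearman_rho([1, 2, 3, 4], [10, 20, 30, 40]), 6))  # 1.0
```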
Analysis of knockout/knockin mice that express a mutant FasL lacking the intracellular domain
(2009)
Fas ligand (FasL; CD178; CD95L) is a type II transmembrane protein belonging to the tumour necrosis factor (TNF) family; its binding to the Fas receptor (CD95; APO-1) triggers apoptosis in the receptor-bearing cell. Signalling through this pathway plays a pivotal role during the immune response and in immune system homeostasis. As for other TNF family members, the intracellular domain of FasL has been reported to transmit signals into the FasL-bearing cell (reverse signalling). Recently, we identified the proteases ADAM10 and SPPL2a as molecules important for the processing of FasL. Protease cleavage releases the intracellular domain, which is then able to translocate to the nucleus and repress reporter gene activity. To study the physiological importance of FasL reverse signalling in vivo, we established knockout/knockin mice carrying a FasL deletion mutant that lacks the intracellular portion (FasLDeltaIntra). Co-culture experiments confirmed that the truncated FasL protein is still capable of inducing apoptosis in Fas-sensitive cells. Preliminary immunohistochemistry data suggest that, in contrast to published data, the absence of the intracellular FasL domain does not alter the intracellular FasL localization in activated T cells. We are currently investigating the signalling and proliferative capacities of T cells derived from homozygous FasLDeltaIntra mice to validate a co-stimulatory role of FasL reverse signalling.
Mitochondria are essential for respiration and oxidative phosphorylation. Mitochondrial dysfunction due to aging processes is involved in the pathology and pathogenesis of a series of cardiovascular disorders. Accumulating results show that the enzyme telomerase, with its catalytic subunit telomerase reverse transcriptase (TERT), has a beneficial effect on heart function. The benefit of short-term running of mice for heart function is dependent on TERT expression. TERT can translocate into the mitochondria, and mitochondrial TERT (mtTERT) protects against stress-induced stimuli and binds to mitochondrial DNA (mtDNA). Because mtDNA is highly susceptible to damage produced by reactive oxygen species (ROS), which are generated in close proximity to the respiratory chain, the aim of this study was to determine the functions of mtTERT in vivo and in vitro. Therefore, mitochondria from hearts of adult, 2nd-generation TERT-deficient mice (TERT -/-) and wild-type littermates were isolated and state 3 respiration was measured. Strikingly, mitochondria from TERT -/- mice revealed a significantly lower state 3 respiration (TERT wt: 987 +/- 72 pmol/s*mg vs. TERT -/-: 774 +/- 38 pmol/s*mg, p < 0.05, n = 5). These results demonstrate that TERT -/- mice have a so far undiscovered heart phenotype. In contrast, mitochondria isolated from liver tissue did not show any differences. To gain further insight into the molecular mechanisms, we reduced endogenous TERT levels by shRNA and measured mitochondrial reactive oxygen species (mtROS). mtROS were increased after ablation of TERT (scrambled: 4.98 +/- 1.1% gated vs. shTERT: 2.03 +/- 0.7% gated, p < 0.05, n = 4). We next determined mtDNA deletions, which are caused by mtROS. Semiquantitative real-time PCR of mtDNA deletions revealed that mtTERT protects mtDNA from oxidative damage.
To analyze whether mitochondrial integrity is required for protection from apoptosis, vectors with mitochondrially targeted TERT (mitoTERT) and wild-type TERT (wtTERT) were transfected and apoptosis was measured. mitoTERT showed the most prominent protective effect against H2O2-induced apoptosis. In conclusion, mtTERT plays a protective role in mitochondria by contributing substantially to mtDNA integrity and thereby enhancing the respiratory capacity of the heart.