University publications
Refine
Year of publication
- 2009 (608)
Document Type
- Article (178)
- Doctoral Thesis (115)
- Part of Periodical (101)
- Book (64)
- Review (49)
- Conference Proceeding (26)
- Working Paper (17)
- Report (13)
- Diploma Thesis (11)
- Bachelor Thesis (9)
- Magister's Thesis (7)
- Diplomthesis (6)
- Periodical (5)
- Part of a Book (4)
- Magisterthesis (3)
Language
- German (366)
- English (225)
- French (7)
- Portuguese (7)
- Spanish (2)
- Multiple languages (1)
Is part of the Bibliography
- no (608)
Keywords
- Europa (7)
- Deutschland (4)
- Lambda-Kalkül (4)
- China (3)
- Forschung (3)
- Frankfurt <Main> / Universität (3)
- Frankreich (3)
- Zeitschrift (3)
- reactive oxygen species (3)
- Adorno (2)
Institute
- Medizin (122)
- Präsidium (75)
- Biochemie und Chemie (46)
- Gesellschaftswissenschaften (46)
- Rechtswissenschaft (41)
- Geowissenschaften (34)
- Physik (33)
- Biowissenschaften (30)
- E-Finance Lab e.V. (24)
- Informatik (24)
Abrupt climate changes of the last deglaciation detected in a western Mediterranean forest record
(2009)
Abrupt changes in Western Mediterranean climate during the last deglaciation (20 to 6 cal ka BP) are detected in marine core MD95-2043 (Alboran Sea) through the investigation of high-resolution pollen data and pollen-based climate reconstructions by the modern analogue technique (MAT) for annual precipitation (Pann) and mean temperatures of the coldest and warmest months (MTCO and MTWA). Changes in temperate Mediterranean forest development and composition and MAT reconstructions indicate major climatic shifts with parallel temperature and precipitation changes at the onsets of Heinrich stadial 1 (equivalent to the Oldest Dryas), the Bølling-Allerød (BA), and the Younger Dryas (YD). Multi-centennial-scale oscillations in forest development occurred throughout the BA, YD, and early Holocene. Shifts in vegetation composition and Pann reconstructions indicate that forest declines occurred during dry, and generally cool, episodes centred at 14.0, 13.3, 12.9, 11.8, 10.7, 10.1, 9.2, 8.3 and 7.4 cal ka BP. The forest record also suggests multiple, low-amplitude Preboreal (PB) climate oscillations, and a marked increase in moisture availability for forest development at the end of the PB at 10.6 cal ka BP. Dry atmospheric conditions in the Western Mediterranean occurred in phase with Lateglacial events of high-latitude cooling including GI-1d (Older Dryas), GI-1b (Intra-Allerød Cold Period) and GS-1 (YD), and during Holocene events associated with high-latitude cooling, meltwater pulses and North Atlantic ice-rafting. A possible climatic mechanism for the recurrence of dry intervals and an opposed regional precipitation pattern with respect to western-central Europe relates to the dynamics of the westerlies and the prevalence of atmospheric blocking highs.
Comparison of radiocarbon and ice-core ages for well-defined climatic transitions in the forest record suggests possible enhancement of marine reservoir ages in the Alboran Sea by 200 years (surface water age 600 years) during the Lateglacial.
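The modern analogue principle behind these reconstructions can be illustrated with a toy sketch: a fossil pollen spectrum is matched against modern surface samples of known climate, and the climate of the closest matches is averaged. All spectra and climate values below are invented for illustration; real MAT applications use large calibrated surface-sample databases.

```python
import math

# Hypothetical modern "analogue" pollen spectra (taxon proportions) with
# known climates (annual precipitation Pann in mm) -- illustrative values only.
modern = [
    ({"oak": 0.6, "pine": 0.3, "grass": 0.1}, 700.0),
    ({"oak": 0.2, "pine": 0.5, "grass": 0.3}, 450.0),
    ({"oak": 0.1, "pine": 0.2, "grass": 0.7}, 300.0),
]

def chord_distance(a, b):
    """Squared chord distance, the dissimilarity commonly used with MAT."""
    taxa = set(a) | set(b)
    return sum((math.sqrt(a.get(t, 0.0)) - math.sqrt(b.get(t, 0.0))) ** 2
               for t in taxa)

def mat_reconstruct(fossil, k=2):
    """Average the climate of the k closest modern analogues."""
    ranked = sorted(modern, key=lambda m: chord_distance(fossil, m[0]))
    return sum(clim for _, clim in ranked[:k]) / k

fossil_sample = {"oak": 0.5, "pine": 0.35, "grass": 0.15}
print(mat_reconstruct(fossil_sample))  # average of the two nearest analogues
```

In practice the analogue database, the number of analogues k, and dissimilarity thresholds are all calibration choices that dominate the uncertainty of the reconstruction.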
George Orwell's novel 1984, published in 1949, is commonly regarded as one of the classics of dystopian literature. Even though the actual year 1984 has long since passed, Orwell's vision of a repressive, totalitarian society has lost none of its topicality. Concepts such as "Big Brother" or "doublethink" have entered our everyday vocabulary, and Orwell's novel continues to serve as the model for many current dystopias. Yet it is not only Orwell's depiction of a bleak, futuristic surveillance state, in which a group of power holders attempts to control both the past and the thoughts of the population, that embodies key leitmotifs of dystopian literature. The role and use of language in this vision of the future has also left lasting traces in dystopian literature, even though this role is frequently overlooked in the research literature. Critics do regularly address the aspect of language in novels such as 1984 or Aldous Huxley's Brave New World, but there are hardly any comparative studies that treat language as a distinct, central dystopian motif; instead, language is usually subsumed under other aspects.
The present thesis addresses precisely this shortcoming. Drawing on eight dystopian novels in English, all published within the last eighty years, it works out the role of language and demonstrates its relevance for the genre of dystopia. The works considered are, in chronological order: Aldous Huxley's Brave New World (1932), George Orwell's 1984 (1949), Anthony Burgess's A Clockwork Orange (1962), Russell Hoban's Riddley Walker (1980), Suzette Haden Elgin's Native Tongue (1984) and The Judas Rose (1987), Margaret Atwood's The Handmaid's Tale (1985), and Will Self's The Book of Dave (2006). The novels were deliberately chosen to cover as broad a scope and time span as possible, taking in different currents and traditions within the genre of dystopian literature.
Before the actual textual analysis begins, the study first outlines the origins and characteristics of the dystopian concept. It looks briefly at the development of utopia, the counter-concept to dystopia, from classical antiquity to modernity, and then traces the emergence of anti-utopian tendencies up to the appearance of dystopia, a specific subcategory of anti-utopian literature, in the late nineteenth century. On this basis, some of the most important leitmotifs are introduced, which later also play a decisive role in connection with language. Finally, the study addresses the problem of organising and classifying language in the analysis that follows. Not only is language in itself a broad term; the use of language also differs greatly between the individual novels. Novels such as Riddley Walker, A Clockwork Orange and The Book of Dave, for instance, are written entirely or largely in an invented language of their own, forcing the reader to adjust his or her interpretive frame. In other novels, such as Brave New World, The Handmaid's Tale or 1984, language plays a role almost exclusively at the level of the plot. A comprehensive analysis would have to cover every aspect of language use, but the limited scope of this thesis allows only the most important aspects to be addressed.
The structure of the main analysis follows from these different forms of language use, in which language is considered both as a written and as a spoken medium. The first part examines the role of language at the level of the plot. Drawing on Michel Foucault's theory of discourse, it shows how language is used on the one hand by an authoritarian power or institution to impose particular discourses, to guarantee the stability of the dystopian society and to forestall the voicing of critical thoughts. On the other hand, in line with Foucault's notion of discourse, according to which a discourse always also produces its own resistance, language is employed in some novels as the opposite kind of medium: a medium for liberation and the preservation of individuality. This reciprocal relationship is analysed in depth. The third analytical section uncovers the relationship between social class and status.
The second half of the study turns away from the plot level and concentrates on stylistic and structural aspects. It shows how the authors use language to intensify the dystopian experience: how invented languages, para- and intertextuality, and naming are employed as stylistic devices that in turn highlight two of the most important characteristics of dystopian literature. The first is the didactic intent with which dystopias warn of a possible (and inevitably worse) future should no countermeasures be taken; the second is the way dystopias deliberately pick up aspects of the authors' own time and extrapolate them into the framework of the plot. On this basis, the study finally takes up a number of ideas from language and cultural theory that have found their way into the individual works, thereby enabling a distinct discourse of language within the dystopian novel.
The thesis concludes by taking up these findings and evaluating them with a view to a possible repositioning of language in research on the dystopian novel. Three specific functions of language use are derived from the analysis, and it is finally proposed that language be regarded in future as a motif in its own right within dystopian literature, since the aspect of language in the texts discussed here is inseparably bound up with the intent and form of dystopia.
In this paper, similarity hypotheses for the atmospheric surface layer (ASL) are reviewed using nondimensional characteristic invariants, referred to as π-numbers. The basic idea of this dimensional π-invariants analysis (sometimes also called Buckingham's π-theorem) is described in a mathematically generalized formalism. To illustrate the scope of this powerful method and how it can be applied to deduce a variety of reasonable solutions by the formalized procedure of non-dimensionalization, various instances are presented that are relevant to the turbulent transfer across the ASL and the prevailing structure of ASL turbulence. Within the framework of our review we consider both (a) Monin-Obukhov scaling for forced-convective conditions, and (b) Prandtl-Obukhov-Priestley scaling for free-convective conditions. It is shown that in the various instances of Monin-Obukhov scaling, generally two π-numbers occur that result in corresponding similarity functions. In contrast, Prandtl-Obukhov-Priestley scaling leads to only one π-number in each case, usually considered as a non-dimensional universal constant. Since an explicit mathematical relationship for the similarity functions cannot be obtained from a dimensional π-invariants analysis, elementary laws of π-invariants have to be established using empirical and/or theoretical findings. To evaluate the empirical similarity functions usually considered within the framework of flux-profile relationships, so-called integral similarity functions for momentum and sensible heat are presented and assessed on the basis of the friction velocity and the vertical component of the eddy flux densities of sensible and latent heat measured directly during the GREIV I 1974 field campaign.
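As a minimal illustration of the π-invariants procedure in the Monin-Obukhov setting, one can check mechanically that a standard textbook set of three governing surface-layer variables admits exactly one dimensionless group, proportional to z/L. The variable set below is a conventional choice for this illustration, not taken verbatim from the paper.

```python
# Dimensional exponents (metres, seconds) of three governing ASL variables:
dims = {
    "u_star": (1, -1),   # friction velocity                     [m s^-1]
    "z":      (1,  0),   # height above ground                   [m]
    "B":      (2, -3),   # buoyancy flux (g/theta)<w'theta'>     [m^2 s^-3]
}

def is_dimensionless(exponents):
    """True if prod(var^e) has zero net exponent in every base dimension."""
    return all(
        sum(e * d[i] for e, d in zip(exponents, dims.values())) == 0
        for i in range(2)  # base dimensions: metres, seconds
    )

# pi = z * B / u_star**3 is the single dimensionless group; up to a sign and
# the von Karman constant this is the familiar stability parameter z/L.
assert is_dimensionless((-3, 1, 1))
print("pi = z*B/u_star**3 is dimensionless")
```

With three variables and two independent base dimensions, the π-theorem predicts 3 − 2 = 1 such group, matching the single stability parameter of Monin-Obukhov similarity.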
Diabetic nephropathy (DN) is a major cause of end-stage renal failure worldwide. Oxidative stress has been reported to be a major culprit of the disease, and increased oxidized low density lipoprotein (oxLDL) immune complexes were found in patients with DN. In this study we present evidence that CXCL16 is the main receptor in human podocytes mediating the uptake of oxLDL. In contrast, in primary tubular cells CD36 was mainly involved in the uptake of oxLDL. We further demonstrate that oxLDL down-regulated α3-integrin expression and increased the production of fibronectin in human podocytes. In addition, oxLDL uptake induced the production of reactive oxygen species (ROS) in human podocytes. Inhibition of oxLDL uptake by CXCL16-blocking antibodies abrogated the fibronectin and ROS production and restored α3-integrin expression in human podocytes. Furthermore, we present evidence that hyperglycaemic conditions increased CXCL16 and reduced ADAM10 expression in podocytes. Importantly, in streptozotocin-induced diabetic mice an early induction of CXCL16 was accompanied by higher levels of oxLDL. Finally, immunofluorescence analysis of biopsies from patients with DN revealed increased glomerular CXCL16 expression, which was paralleled by high levels of oxLDL. In summary, regulation of CXCL16, ADAM10 and oxLDL expression may be an early event in the onset of DN, and therefore all three proteins may represent potential new targets for diagnosis and therapeutic intervention in DN.
The manifestation of chronic back pain depends on structural, psychosocial, occupational and genetic influences. Heritability estimates for back pain range from 30% to 45%. Genetic influences are caused by genes affecting intervertebral disc degeneration or the immune response, and by genes involved in pain perception, signalling and psychological processing. This inter-individual variability, which is partly due to genetic differences, would require individualized pain management to prevent the transition from acute to chronic back pain or to improve the outcome. The genetic profile may help to define patients at high risk for chronic pain. We summarize genetic factors that (i) impact on intervertebral disc stability, namely collagen IX, COL9A3, COL11A1, COL11A2, COL1A1, aggrecan (ACAN), cartilage intermediate layer protein, vitamin D receptor, metalloproteinase-3 (MMP3), MMP9, and thrombospondin-2; (ii) modify inflammation, namely interleukin-1 (IL-1) locus genes and IL-6; and (iii) modify pain signalling, namely guanosine triphosphate (GTP) cyclohydrolase 1, catechol-O-methyltransferase, the μ opioid receptor (OPRM1), melanocortin 1 receptor (MC1R), transient receptor potential channel A1 and fatty acid amide hydrolase, as well as analgesic drug metabolism (cytochrome P450 [CYP] 2D6, CYP2C9).
Protein catabolism should be reduced and protein synthesis promoted with parenteral nutrition (PN). Amino acid (AA) solutions should always be infused with PN. Standard AA solutions are generally used, whereas specially adapted AA solutions may be required in certain conditions such as severe disorders of AA utilisation or inborn errors of AA metabolism. An AA intake of 0.8 g/kg/day is generally recommended for adult patients with normal metabolism, which may be increased to 1.2–1.5 g/kg/day, or to 2.0 or 2.5 g/kg/day in exceptional cases. Sufficient non-nitrogen energy sources should be added in order to assure adequate utilisation of AA. A nitrogen:calorie ratio of 1:130 to 1:170 (g N/kcal) or 1:21 to 1:27 (g AA/kcal) is recommended under normal metabolic conditions. In critically ill patients glutamine should be administered parenterally, if indicated, in the form of peptides, for example 0.3–0.4 g glutamine dipeptide/kg body weight/day (= 0.2–0.26 g glutamine/kg body weight/day). No recommendation can be made for glutamine supplementation in PN for patients with acute pancreatitis or after bone marrow transplantation (BMT), or in newborns. The application of arginine as a supplement in PN in adults is currently not warranted. N-acetyl AAs are only of limited use as alternative AA sources. There is currently no indication for the use of AA solutions with an increased content of glycine, branched-chain AAs (BCAA) or ornithine-α-ketoglutarate (OKG) in all patients receiving PN. AA solutions with an increased proportion of BCAA are recommended in the treatment of hepatic encephalopathy (III–IV).
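The two quoted ratio ranges are mutually consistent under the conventional assumption (not stated in the text) that amino acids contain roughly 16% nitrogen by mass, i.e. the classical 6.25 protein-to-nitrogen conversion factor:

```python
# Conventional assumption: amino acids are ~16% nitrogen by mass
# (the classical 6.25 conversion factor). Not stated in the guideline text.
N_FRACTION = 0.16

for kcal_per_g_N in (130, 170):
    kcal_per_g_AA = kcal_per_g_N * N_FRACTION
    print(f"1:{kcal_per_g_N} (g N/kcal) ~= 1:{kcal_per_g_AA:.0f} (g AA/kcal)")
# prints:
# 1:130 (g N/kcal) ~= 1:21 (g AA/kcal)
# 1:170 (g N/kcal) ~= 1:27 (g AA/kcal)
```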
There are special challenges in implementing parenteral nutrition (PN) in paediatric patients, which arise from the wide range of patients, from extremely premature infants to teenagers weighing 100 kg and more, and from their varying substrate requirements. Age- and maturity-related changes in metabolism and in fluid and nutrient requirements must be taken into consideration, along with the clinical situation in which PN is applied. The indications, the procedure and the intake of fluid and substrates differ considerably from PN practice in adult patients; for example, the fluid, nutrient and energy needs of premature infants and newborns per kg body weight are markedly higher than those of older paediatric and adult patients. Premature infants born before 35 weeks of gestation and most sick term infants usually require full or partial PN. In neonates, the actual amount of PN administered must be calculated (not estimated). Enteral nutrition should be introduced gradually and should replace PN as quickly as possible in order to minimise any side effects from exposure to PN. Inadequate substrate intake in early infancy can cause long-term detrimental effects in terms of metabolic programming of the risk of illness in later life. If energy and nutrient demands in children and adolescents cannot be met through enteral nutrition, partial or total PN should be considered within 7 days or less, depending on nutritional state and clinical conditions.
The genomes of both epsilonproteobacteria Wolinella succinogenes and Campylobacter jejuni contain operons (sdhABE) that encode so far uncharacterized enzyme complexes annotated as 'non-classical' succinate:quinone reductases (SQRs). However, the role of such an enzyme, ostensibly involved in aerobic respiration, in an anaerobic organism such as W. succinogenes has hitherto been unknown. We have established the first genetic system for the manipulation and production of a member of the non-classical succinate:quinone oxidoreductase family. Biochemical characterization of the W. succinogenes enzyme reveals that the putative SQR is in fact a novel methylmenaquinol:fumarate reductase (MFR) with no detectable succinate oxidation activity, clearly indicative of its involvement in anaerobic metabolism. We demonstrate that the hydrophilic subunits of the MFR complex are, in contrast to all other previously characterized members of the superfamily, exported into the periplasm via the twin-arginine translocation (Tat) pathway. Furthermore, we show that a single amino acid exchange (Ala86→His) in the flavoprotein of that enzyme complex is the only additional requirement for the covalent binding of the otherwise non-covalently bound FAD. Our results provide an explanation for the previously published puzzling observation that the C. jejuni sdhABE operon is upregulated in an oxygen-limited environment as compared with microaerophilic laboratory conditions.
Perturbation theory for non-abelian gauge theories at finite temperature is plagued by infrared divergences caused by magnetic soft modes ~ g^2 T, corresponding to the gluon fields of a 3d Yang-Mills theory. While the divergences can be regulated by a dynamically generated magnetic mass on that scale, the gauge coupling drops out of the effective expansion parameter, requiring summation of all loop orders for the calculation of observables. Some gauge-invariant possibilities to implement such infrared-safe resummations are reviewed. We use a scheme based on the non-linear sigma model to estimate some of the contributions ~ g^6 of the soft magnetic modes to the QCD pressure through two loops. The NLO contribution amounts to ~ 10% of the LO, suggestive of a reasonable convergence of the series.
The so-called sign problem of lattice QCD prohibits Monte Carlo simulations at finite baryon density by means of importance sampling. Over the last few years, methods have been developed which are able to circumvent this problem as long as the quark chemical potential satisfies μ/T ≲ 1. After a brief review of these methods, their application to a first-principles determination of the QCD phase diagram for small baryon densities is summarised. The location and curvature of the pseudo-critical line of the quark-hadron transition is under control, and extrapolations to physical quark masses and the continuum are feasible in the near future. No definite conclusions can as yet be drawn regarding the existence of a critical end point, which turns out to be extremely quark-mass and cut-off sensitive. Investigations with different methods on coarse lattices show the light-mass chiral phase transition to weaken when a chemical potential is switched on. If this persists on finer lattices, it would imply that there is no chiral critical point or phase transition for physical QCD. Any critical structure would then be related to physics other than chiral symmetry breaking.
The chiral critical surface is a surface of second-order phase transitions bounding the region of first-order chiral phase transitions for small quark masses in the {m_{u,d}, m_s, μ} parameter space. The potential critical endpoint of the QCD (T, μ) phase diagram is widely expected to be part of this surface. Since for μ = 0 with physical quark masses QCD is known to exhibit an analytic crossover, this expectation requires the region of chiral transitions to expand with μ for a chiral critical endpoint to exist. Instead, on coarse Nt = 4 lattices, we find the area of chiral transitions to shrink with μ, which excludes a chiral critical point for QCD at moderate chemical potentials μ_B < 500 MeV. First results on finer Nt = 6 lattices indicate a curvature of the critical surface consistent with zero and unchanged conclusions. We also comment on the interplay of the phase diagrams of the Nf = 2 and Nf = 2+1 theories and its consequences for physical QCD.
We perform a two-flavor dynamical lattice computation of the Isgur-Wise functions τ1/2 and τ3/2 at zero recoil in the static limit. We find τ1/2(1) = 0.297(26) and τ3/2(1) = 0.528(23), fulfilling Uraltsev's sum rule by around 80%. We also comment on a persistent conflict between theory and experiment regarding semileptonic decays of B mesons into orbitally excited P-wave D mesons, the so-called "1/2 versus 3/2 puzzle", and we discuss the relevance of lattice results in this context.
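The quoted "around 80%" saturation can be checked arithmetically. Assuming the lowest-order form of Uraltsev's sum rule, in which the sum over radial excitations of |τ3/2|² − |τ1/2|² equals 1/4 and only the ground-state contribution (the two central values quoted above) is kept:

```python
t12, t32 = 0.297, 0.528   # central values at zero recoil quoted in the text

lhs = t32**2 - t12**2     # ground-state contribution to Uraltsev's sum rule
saturation = lhs / 0.25   # the full sum should equal 1/4
print(f"{lhs:.3f} -> {saturation:.0%} of the sum rule")
# prints: 0.191 -> 76% of the sum rule
```

The ground state alone thus saturates roughly three quarters of the sum rule, consistent with the "around 80%" statement once the quoted uncertainties are taken into account.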
We present a lattice QCD calculation of the heavy-light decay constants fB and fBs performed with Nf = 2 maximally twisted Wilson fermions, at four values of the lattice spacing. The decay constants have also been computed in the static limit, and the results are used to interpolate the observables between the charm and the infinite-mass sectors, thus obtaining the value of the decay constants at the physical b quark mass. Our preliminary results are fB = 191(14) MeV, fBs = 243(14) MeV, fBs/fB = 1.27(5). They are in good agreement with those obtained with a novel approach, recently proposed by our Collaboration (ETMC), based on the use of suitable ratios having an exactly known static limit.
We present first results from runs performed with Nf = 2+1+1 flavours of dynamical twisted mass fermions at maximal twist: a degenerate light doublet and a mass-split heavy doublet. An overview of the input parameters and tuning status of our ensembles is given, together with a comparison with results obtained with Nf = 2 flavours. The problem of extracting the masses of the K- and D-mesons is discussed, and the tuning of the strange and charm quark masses examined. Finally, we compare two methods of extracting the lattice spacings to check the consistency of our data, and we present some first results of χPT fits in the light meson sector.
"Between equal rights, force decides," declared Karl Marx in describing the antinomy of law in antagonistic situations within capitalist relations of production, in which "right [is pitted] against right". Here Marx touches on a question that lies at the centre of all critical legal theories: what kind of violence is veiled by the concealment mechanism called "law"? To answer this question, the following attempts to make Antonio Gramsci's theory of hegemony and his model of hegemonic law productive for the field of legal theory. This task faces a twofold difficulty: on the one hand, Gramsci was not a legal theorist in the strict sense, which is why the potential of his theory for an analysis of law has rarely been exploited. On the other hand, his approach can only be employed through a critique of the limitations bound to his time. This applies especially to his conception of the economy as base and as a hidden essentialist core (Laclau; Mouffe, 2001: 69), as well as to his 'classism' in the form of a one-sided focus on classes, where there is rather a "pluralism of power" and countless struggles (Litowitz, 2000: 536). Key arguments will accordingly be recovered and extended by drawing on recent insights from feminist and neo-materialist approaches to legal theory, Foucault's analyses of the technologies of power and, finally, a systems-theoretical interpretation of communicative autonomisations.
September 11 accelerated the development of a transnational security architecture that intervenes deeply in individual civil liberties, both in the basic rights of citizens of states and in the human rights of world citizens. The article outlines this architecture, shows how it dissolves the traditional legal categories that preserve liberty, and discusses why the priority of security over liberty is widely accepted today.
Os limites da tolerância
(2009)
This article presents the constitutive elements of the concept of toleration and discusses two different conceptions of the term, as permission and as moral respect, which express different ways of drawing the limits of toleration. Toleration is presented as a concept which, in order to acquire any content, depends normatively on a right to justification based on the idea of a public use of reason, according to which the practices and the political-legal institutions that determine citizens' social life must be justifiable in the light of norms that they cannot reciprocally and generally reject.
Background: A growing number of German hospitals have been privatized with the intention of increasing cost effectiveness and improving the quality of health care. Numerous studies have investigated the possible qualitative and economic consequences these changes might have on patient care. However, little is known about how this privatization trend relates to physicians' working conditions and job satisfaction. It was anticipated that different working conditions would be associated with different types of hospital ownership. To that end, this study's purpose is to compare how physicians working for public and privatized hospitals rate their respective psychosocial working conditions and job satisfaction.
Methods: The study was designed as a cross-sectional comparison using questionnaire data from 203 physicians working at German hospitals of different ownership types (private for-profit, public and private nonprofit).
Results: The present study shows that several aspects of physicians' perceived working conditions differ significantly depending on hospital ownership. However, results also indicated that physicians' job satisfaction does not vary between different types of hospital ownership. Finally, it was demonstrated that job demands and resources are associated with job satisfaction, while type of ownership is not.
Conclusion: This study is one of only a few that investigate the effect of hospital ownership on physicians' work situation, and it demonstrates that the type of ownership is a potential factor accounting for differences in working conditions. The findings provide an informative basis for developing solutions to improve physicians' work at German hospitals.
This thesis analyses the concept and the value of freedom in the writings of the Canadian philosopher Charles Taylor, with reference to his political philosophy and philosophical anthropology. The conceptual clarification is based on a systematisation of the positive uses of the concept of freedom across Taylor's work. The value analysis interprets the results of this systematisation with regard to the question of whether freedom, as Taylor understands it, is an extrinsic or an intrinsic value.
In its admissibility decision in the Al-Saadoon case the ECtHR held that the United Kingdom had jurisdiction over the applicants, who had been arrested by British forces and kept in a British-run military prison in Iraq. Just before the respective mandate of the Security Council expired on 31 December 2008, the applicants were transferred to Iraqi custody at Iraqi request and thereby exposed to the risk of an unfair trial followed by capital punishment. In this respect, the case resembles the Soering case, although the applicants were, unlike Soering, not on British territory but on occupied Iraqi soil before they were handed over. This aspect raises the question of Iraqi sovereignty as a norm competing with the UK's human rights obligations. The authors trace the ECtHR's case law concerning the extraterritorial application of the Convention and analyse the UK judgments and the ECtHR's admissibility decision in the Al-Saadoon affair from this angle. Furthermore, they consider the doctrinal consequences of the ECHR's extraterritorial effect in cases like Soering and Al-Saadoon, where contracting parties violate guarantees of the Convention by exposing a person within their jurisdiction to the risk of treatment contrary to these guarantees by a third state. Finally, they test the argument brought forward by the UK that not transferring the applicants would have violated Iraqi sovereignty, and identify patterns in how the ECtHR and the UK courts have dealt in the past with norms of international law potentially competing with the Convention.
* Cooperation between "jeder-fehlerzaehlt.de" and the Techniker statutory insurance company
* "PRIoritising multiple medication in multi-morbid patients" – PRIMUM-Pilot study gets off to successful start
* New work area: Quality promotion and concept development
* Frankfurt Training Program in Evidence-Based Medicine
* New at the institute: Sabine Pommeresch
* 2nd General Practice Day in Frankfurt
kurz und kn@pp news : Nr. 17
(2009)
kurz und kn@pp news : Nr. 16
(2009)
* Cooperation between "jeder-Fehler-zaehlt.de" and the Techniker statutory health insurance fund
* "PRIoritising MUltimedication in multimorbidity" – PRIMUM pilot study successfully launched
* New work area: quality promotion and concept development
* New at the institute: Sabine Pommeresch
* Frankfurt training series in evidence-based medicine
* 2nd Frankfurt General Practice Day: now online
kurz und kn@pp news : Nr. 15
(2009)
GOeTHEO : Ausgabe 1
(2009)
GoeTheo is a project that serves the transparency of the department and the flow of information to students, supporters and other interested parties.
The departmental journal is a joint publication of the Association of Friends and Supporters of Protestant Theology in Frankfurt/Main and the Department of Protestant Theology at Goethe University. Since the 2009/10 winter semester, GoeTheo has appeared at the start of each semester, providing information on the courses of the coming semester as well as on current developments in the department's research and teaching.
In contrast to the US and, more recently, Europe, Japan appears to be unsuccessful in establishing new industries. An oft-cited example is Japan's practical invisibility in the global business software sector. The literature has ascribed Japan's weakness – or conversely, America's strength – to the specific institutional settings and competences of actors within the respective national innovation system. It has additionally been argued that, unlike the American innovation system with its proven ability to give birth to new industries, the inherent path dependency of the Japanese innovation system makes innovation and the establishment of new industries quite difficult. However, there are two notable weaknesses underlying current propositions postulating that only certain innovation systems enable the creation of new industries: first, they mistakenly confound context-specific with general empirical observations; and second, they grossly underestimate – or altogether fail to examine – the dynamics within innovation systems. This paper shows that it is precisely these dynamics within innovation systems – dynamics founded on the concept of path plasticity – which have enabled Japan to emerge as a global leader in highly innovative fields: the game software sector and the biotechnology industry.
European scholars, colonial administrators, missionaries, bibliophiles and others were the main collectors of Malay books in the nineteenth century, in both manuscript and printed form. Among these persons were many well-known names in the field of Malay literature and culture, such as Raffles, Marsden, Crawfurd, Klinkert, van der Tuuk, von Dewall, Roorda, Favre, Maxwell, Overbeck, Wilkinson and Skeat, to name only a few. Their collections were often handed over to public libraries, where they form an important part of the relevant Oriental or Southeast Asian manuscript collections.
Knowledge of the intellectual culture of the Malay Peninsula, and of the Malay world in general, has therefore depended very much on these manuscripts and printed books, often collected by chance or in a rather unsystematic way. The collections strongly reflect the interests of their administrative or philologist collectors: court histories, genealogies of aristocratic lineages, law collections (adat-istiadat as well as undang-undang) and prose belles-lettres form the vast bulk of these collections, while Islamic religious texts and the poetic forms popular in the 19th century (especially syair) are fairly underrepresented. Malay manuscripts and books held in religious institutions such as mosques or pondok/pesantren schools have not been searched for; to this day there are virtually no systematic studies of these collections. Since, according to some statistics, religious texts make up about 20% of all extant Malay manuscripts, their neglect by European scholars leads to a distorted view of the literary culture in the Malay language.
Following the discovery of context-dependent synchronization of oscillatory neuronal responses in the visual system, the role of neural synchrony in cortical networks has been expanded to provide a general mechanism for the coordination of distributed neural activity patterns. In the current paper, we present an update on the status of this hypothesis by summarizing recent results from our laboratory that suggest important new insights regarding the mechanisms, function and relevance of this phenomenon. In the first part, we present recent results derived from animal experiments and mathematical simulations that provide novel explanations and mechanisms for zero and near-zero phase-lag synchronization. In the second part, we discuss the role of neural synchrony in expectancy during perceptual organization and its role in conscious experience. This is followed by evidence indicating that, in addition to supporting conscious cognition, neural synchrony is abnormal in major brain disorders such as schizophrenia and autism spectrum disorders. We conclude the paper with suggestions for further research as well as with critical issues that need to be addressed in future studies.