This paper discusses the syntactic properties of 'prepositional numeral constructions' (PNCs) in English, exemplified by 'about 250 babies' and 'over 16,000 animals'. In PNCs a preposition is followed by a numeral. Previous analyses have claimed that the preposition and the numeral form a prepositional phrase in PNCs, but we argue that this is not a satisfactory approach. In HPSG there are several analyses that might be proposed, but there are reasons for supposing that the best analysis is one in which the preposition is a functor, a non-head selecting a numeral head.
This paper investigates the syntax of the English "not only ... but also ..." construction, focusing on the linearization possibilities of "not only". Based on novel corpus data, I argue that the "not only ... but also ..." construction exhibits different properties from the "not ... but ..." construction or the adverbial "only". I propose that a linearization-based account, along with coordinate ellipsis, can explain the various linearization possibilities of "not only". I also propose that the construction as a whole is a subtype of the type correlative-coord-ph, which is a novel subtype of coord-ph. Finally, I argue that subject-auxiliary inversion triggered by the clause-initial not only is a new subtype of the type "negative-inversion-ph".
Partial inversion in English
(2017)
A typical finite clause in English has a single constituent that serves as subject. This constituent precedes the finite verb in non-inverted clauses like simple declarative clauses, follows the finite verb in inverted clauses like polar questions, agrees in person and number with the finite verb and with a tag subject when a tag is present, undergoes subject raising, and so on (Postal 2004). Five constructions violate these generalizations and in the literature have called into question the identity of the subject constituent. In each of these five constructions the finite verb agrees with a following constituent in a declarative clause despite the fact, among others, that the constituent preceding the verb exhibits subject behaviors of the kind identified by Keenan (1976). To the authors' knowledge, despite intensive analysis of several of these patterns, the group as a whole has not been subject to prior study. The constructions are: Presentational Inversion (e.g., On the porch stood marble pillars), Presentational there (e.g., "The earth was now dry, and there grew a tree in the middle of the earth"), Deictic Inversion (e.g., "Here comes the bus"), Existential there (e.g., There’s a big problem here) and Reversed Specificational be (e.g., "The only thing we’ve taken back recently are plants"). The approach of Sign-Based Construction Grammar (Sag 2012) enables us to establish precisely what all five patterns have in common and what is particular to each, revealing that a constructional, constraint-based approach can extract the correct grammatical generalizations, not only in 'core' areas of a grammar, but also in the hard cases, where concepts such as subject, which readily handle the more tractable facts, fail to fit the facts at hand. We see further that the five split-subject patterns, sometimes identified as clausal, yield to a strictly lexical analysis.
Against split morphology
(2017)
In this paper I present data from several Niger-Congo languages, illustrating how the paradigms which make up the noun class systems of these languages are problematic to analyze within traditional morphosyntactic frameworks. I outline possible solutions to this problem, and argue for the introduction of an exemplar-based Word and Paradigm (Blevins 2006) approach to morphology within SBCG. I then outline the consequences of this approach for the structure of the SBCG lexicon.
In this paper I present an incremental approach to gapping and conjunction reduction where it is assumed that the first sentence in these constructions is fully parsed before the second sentence with the elided verb is parsed. I will show that the two phenomena can be given a uniform analysis by letting the construction type of the first conjunct be carried over to the second conjunct. This construction type imposes constraints on the arguments that the second conjunct can have. The difference between gapping and conjunction reduction is captured by the already existing constructions for sentence and VP coordination. The analysis is implemented in an HPSG grammar of Norwegian.
This paper explores the conundrum posed by two different control constructions in Yucatec Maya, a Mayan language spoken by around 800,000 speakers in the Yucatán Peninsula and northern Belize. Basic syntactic structure of the language is introduced, and a general SBCG treatment of control in YM is presented, along with an example of motion verbs as control matrices. The unruly case of intransitive subjunctive control, where the controllee appears with an unexpected status (incompletive) and without set-A morphology, is discussed and a proposal to treat it as nominalization is evaluated. The nominalization proposal is rejected on the following grounds: (1) nominalization tends to attract definitive morphology, which is absent from intransitive subjunctive control constructions, (2) nominalization does not truly explain the lack of set-A morphology if one desires to provide a unified account of set-A morphemes, (3) verbs bereft of otherwise expected set-A morphemes have an independent motivation in the form of agent focus constructions.
In this paper we discuss two contrasting views of exponence in inflectional morphology: the atomistic view, where content is associated individually with minimal segmentable morphs, and the holistic view, where the association is made for the whole word between complex content and constellations of morphs. On the basis of data from Estonian and Swahili, we argue that an adequate theory of inflection should be able to accommodate both views. We then show that the framework of Information-based Morphology (Crysmann and Bonami, 2016) is indeed compatible with both views, thanks to relying on realisation rules that associate m units of form with n units of content.
Over the past few years, there has been renewed interest in the treatment of resumption in HPSG: despite areas of convergence, e.g. the recognition of resumptive dependencies as dependencies, as motivated by Across-the-Board (ATB) extraction, there is no unified theory to date, with differences pertaining, e.g., to the exact formulation of amalgamation (Ginzburg and Sag, 2000), or the place of island constraints in grammar. While Borsley (2010) and Alotaibi and Borsley (2013) relegate the difference in locality of gap and resumptive dependencies to the performance system, Crysmann (2012, 2016) captures insensitivity to strong islands as part of the grammar. The need to harmonise existing proposals becomes even more acute if we consider the cross-linguistic similarity of the phenomenon, in particular if we compare languages like Hausa and Arabic, which both feature island insensitivity to some degree, as well as bound pronominal resumptive objects and zero pronominal resumptive subjects, to name just a few of the parallels. In this paper, I shall reexamine resumption (and extraction) in Modern Standard Arabic (henceforth: MSA) and propose a reanalysis that improves on Alotaibi and Borsley (2013) in several areas: first, I shall argue that controlling the distribution of gaps and resumptives by means of case is not only empirically under-motivated but also leads to counter-intuitive constraint specifications in the majority of cases. Second, I shall show that the case-based account of Alotaibi and Borsley (2013) can be straightforwardly supplanted with the weight-based account I proposed in Crysmann (2016): in doing so, one not only gets a better alignment of case assignment constraints with overtly observable manifestations of case; such an account is also general enough to scale from case languages, such as MSA, to languages without case, such as Hausa, or many Arabic vernaculars.
Finally, I shall address case in ATB extraction and propose a refinement of the Coordination Constraint of Pollard and Sag (1994) that accounts for exactly the kind of mismatch observed in mixed gap/resumptive ATB extraction.
Explanations and "engineering solutions"? Aspects of the relation between Minimalism and HPSG
(2017)
It is not simple to compare Minimalism and HPSG, but it is possible to identify a variety of differences, some not so important but others of considerable importance. Two of the latter are: (1) the fact that Minimalism is a very lexically-based approach whereas HPSG is more syntactically-based, and (2) the fact that Minimalism uses Internal Merge in the analysis of unbounded dependencies whereas HPSG employs the SLASH feature. In both cases the HPSG approach seems to offer a better account of the facts. Thus, in two important respects it seems preferable to Minimalism.
The Polynesian language Tongan appears to lack surface-oriented motivation for a VP constituent. Even so, adverbial elements appear in both a rightwards location and a leftwards location, superficially similar to the S-adverbs and VP-adverbs of well-studied western European languages. This paper explores how the Tongan "VP-adverbs" (as well as others) can be analyzed in HPSG without a VP for those adverbs to attach to. Several kinds of analyses, representing different strands of research on the syntax of adjuncts in HPSG, are explored: an Adjuncts-as-Valents analysis, a VAL-sensitive Adjuncts-as-Selectors analysis, and a WEIGHT-sensitive Adjuncts-as-Selectors analysis. All suggest that an analysis of the adverbs without a VP is possible; the WEIGHT-sensitive Adjuncts-as-Selectors analysis seems to have the fewest issues.
This paper is the third in a series of papers dedicated to the investigation of subjunctive complement clauses in Modern Standard Arabic. It began with Arad Greshler et al.'s (2016) search for obligatory control predicates in the language and continued with Arad Greshler et al.'s (2017) empirical and theoretical investigation of the backward control construction. In this paper we show that Arad Greshler et al.'s (2017) findings and ultimate analysis, which is cast in a transformational framework, can be straightforwardly formalized using the existing principles and tools of HPSG. Our proposed analysis accounts for all the patterns attested with subjunctive complement clauses in Modern Standard Arabic, including instances of control and no-control.
This paper investigates the structure and agreement of coordinated binominals of the form Det N1 et N2 in French. We provide corpus data and experimental data to show that different strategies exist, depending on their readings: singular Det for the joint reading (mon collègue et ami, 'my.MSG colleague.MSG and friend.MSG'), plural Det agreement (mes frère et soeur 'my.PL brother.MSG and sister.FSG') or closest conjunct agreement (mon nom et prénom, 'my.MSG surname.MSG and first name.MSG') for the split reading. These results challenge previous syntactic analyses of binominals (Le Bruyn and de Swart, 2014), according to which Det combines with N1 to form a DP, and the latter coordinates with N2. We then propose an HPSG analysis to account for French binominals.
Modern Standard Arabic (MSA) has simple and complex comparatives, which look rather like their counterparts in many other languages. MSA simple comparatives are indeed like those of other languages, but MSA complex comparatives are quite different. They involve an adjective with a nominal complement, which may be an adjectival noun or an ordinary noun, and are rather like so-called 'adjectival constructs'. Simple comparatives, complex comparatives, and adjectival constructs can all be analysed with lexical rules within HPSG.
The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √s_NN = 2.76 TeV down to zero transverse momentum in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with respect to the one measured in pp collisions scaled by the number of binary nucleon-nucleon collisions. The nuclear modification factor, integrated over the 0-80% most central collisions, is 0.545±0.032(stat.)±0.083(syst.) and does not exhibit a significant dependence on the collision centrality. These features appear significantly different from measurements at lower collision energies. Models including J/ψ production from charm quarks in a deconfined partonic phase can describe our data.
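For context, the nuclear modification factor quoted in this abstract is conventionally defined as the per-event yield in Pb-Pb divided by the pp yield scaled by the average number of binary nucleon-nucleon collisions. The following is the standard textbook definition with generic symbols, not a formula taken from the paper itself:

```latex
R_{AA} \;=\; \frac{\mathrm{d}^2 N_{AA}/\mathrm{d}p_{\mathrm{T}}\,\mathrm{d}y}
                  {\langle N_{\mathrm{coll}}\rangle \,\; \mathrm{d}^2 N_{pp}/\mathrm{d}p_{\mathrm{T}}\,\mathrm{d}y}
```

Here R_AA = 1 would correspond to no nuclear modification, so the quoted value of 0.545 indicates a suppression of roughly a factor of two relative to scaled pp collisions.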
Although Therese Huber (née Heyne, widowed Forster) never published a single translation under her own name during her lifetime - apart from a novel adaptation that was published as her original work - she is nevertheless a figure not to be neglected in the history of translation at the turn of the 18th to the 19th century.
Debates about gender are conducted regularly, including with a focus on school. The PISA study, with its results in the year 2000, thus stimulated a discourse on educational success and gender. Jürgen Budde (2009), for example, suspects a close connection between a boy's success at school and the respective school culture, and notes a research gap in this context. The study on GeschlechterSchulKulturen ('gender school cultures') follows on from this with respect to research on school culture with a focus on gender.
Using a grounded procedure combining an ethnographic research style with Grounded Theory methodology, interactions and verbal attributions are investigated in the setting of two integrated comprehensive all-day schools. The study is based on the following research questions: Which gender constructions can be reconstructed using the example of two comprehensive all-day schools? Can specific GeschlechterSchulKulturen be captured analytically via gender constructions in the school context, and do the two research fields differ in this respect? Within these questions, the focus lies in particular on which gender constructions fifth-grade pupils themselves produce. The public discourse on the formation of gender identity is conducted by adults, but how do pupils construct gender in their everyday school life? Which gender attributions do they make, and what possible connections can be reconstructed with the respective schools?
The study on GeschlechterSchulKulturen examines verbal attributions among pupils, which are then, in a second step, compared across the two schools studied and subsequently contextualized with the respective gender (pre)structurings of the schools, in order to trace the concept of GeschlechterSchulKulturen analytically. The study thereby provides numerous insights into verbal pupil attributions. This is followed by a discussion of possible implications for school practice. The study offers assumptions about specific school cultures that may foster certain construction processes. It also addresses methodological questions of school-based gender research and contributes to the representability of reflexive research.
We report the first observation of the decay Λ+c→Σ−π+π+π0, based on data obtained in e+e− annihilations with an integrated luminosity of 567 pb−1 at √s = 4.6 GeV. The data were collected with the BESIII detector at the BEPCII storage rings. The absolute branching fraction B(Λ+c→Σ−π+π+π0) is determined to be (2.11±0.33(stat.)±0.14(syst.))%. In addition, an improved measurement of B(Λ+c→Σ−π+π+) is determined as (1.81±0.17(stat.)±0.09(syst.))%.
Measurements of the cross section of e⁺e⁻ → pp̄π⁰ at center-of-mass energies between 4.008 and 4.600 GeV
(2017)
Based on e+e− annihilation data samples collected with the BESIII detector at the BEPCII collider at 13 center-of-mass energies from 4.008 to 4.600 GeV, measurements of the Born cross section of e+e− → pp̄π0 are performed. No significant resonant structure is observed in the measured energy dependence of the cross section. The upper limit on the Born cross section of e+e− → Y(4260) → pp̄π0 at the 90% C.L. is determined to be 0.01 pb. The upper limit on the ratio of the branching fractions B(Y(4260)→pp̄π0)/B(Y(4260)→π+π−J/ψ) at the 90% C.L. is determined to be 0.02%.
Using data samples collected with the BESIII detector at the BEPCII collider at six center-of-mass energies between 4.008 and 4.600 GeV, we observe the processes e+e− → φφω and e+e− → φφφ. The Born cross sections are measured and the ratio of the cross sections σ(e+e− → φφω)/σ(e+e− → φφφ) is estimated to be 1.75 ± 0.22 ± 0.19 averaged over the six energy points, where the first uncertainty is statistical and the second is systematic. The results represent the first measurements of these processes.
We report the first measurement of the absolute branching fraction for Λ+c→Λμ+νμ. This measurement is based on a sample of e+e− annihilation data at a center-of-mass energy of √s = 4.6 GeV collected with the BESIII detector at the BEPCII storage rings. The sample corresponds to an integrated luminosity of 567 pb−1. The branching fraction is determined to be B(Λ+c→Λμ+νμ)=(3.49±0.46(stat)±0.27(syst))%. In addition, we calculate the ratio B(Λ+c→Λμ+νμ)/B(Λ+c→Λe+νe) to be 0.96±0.16(stat)±0.04(syst).
Background: High reproducibility and low intra- and interobserver variability are important strengths of cardiac magnetic resonance (CMR). In clinical practice a significant learning curve may however be observed. Basic CMR courses offer an average of 1.4 h dedicated to lecturing and demonstrating left ventricular (LV) function analysis. The purpose of this study was to evaluate the effect of initial teaching on complete and intermediate beginners’ quantitative measurements of LV volumes and function by CMR.
Methods: Standard clinical cine CMR sequences were acquired in 15 patients. Five observers (two complete beginners, one intermediate, two experienced) measured LV volumes. Before the initial evaluation, beginners read the SCMR guidelines on CMR analysis. After the initial evaluation, beginners participated in a two-hour teaching session including cases and hands-on training, representative of most basic CMR courses; beyond such courses, it is uncertain to what extent different centres provide continued teaching and feedback in-house. The Dice Similarity Coefficient (DSC) was used to assess delineations. Agreement, accuracy, precision, repeatability and reliability were assessed by Bland-Altman, coefficient of variation, and intraclass correlation coefficient methods.
Results: Endocardial DSC improved after teaching (+0.14 ± 0.17; p < 0.001) for complete beginners. Low intraobserver variability was found before and after teaching, however with wide limits of agreement. Beginners underestimated volumes by up to 44 ml (EDV), 27 ml (ESV) and overestimated LVM by up to 53 g before teaching, improving to an underestimation of up to 9 ml (EDV), 7 ml (ESV) and an overestimation of up to 30 g (LVM) after teaching. For the intermediate beginner, however, accuracy was quite high already before teaching.
Conclusions: Initial teaching of complete beginners increases accuracy in the assessment of LV volumes, however with high bias and low precision even after standardised teaching as offered in most basic CMR courses. Even though the intermediate beginner showed quite high accuracy already before teaching, precision generally did not improve after standardised teaching. To maintain CMR as a technique known for high accuracy and reproducibility and low intra- and inter-observer variability in quantitative measurements, internationally standardised training should be encouraged, including high-quality feedback mechanisms. Objective measurements of training methods, training duration and, above all, quality of assessments are required.
The decays of χc2→K+K−π0, KSK±π∓ and π+π−π0 are studied with the ψ(3686) data samples collected with the Beijing Spectrometer (BESIII). For the first time, the branching fractions of χc2→K∗K̄, χc2→a2(1320)±π∓/a2(1320)0π0 and χc2→ρ(770)±π∓ are measured. Here K∗K̄ denotes both K∗±K∓ and K∗0K̄0+c.c., and K∗ denotes the resonances K∗(892), K∗2(1430) and K∗3(1780). The observations indicate a strong violation of the helicity selection rule in χc2 decays into vector and pseudoscalar meson pairs. The measured branching fractions of χc2→K∗(892)K̄ are more than 10 times larger than the upper limit on χc2→ρ(770)±π∓, which constitutes the first direct observation of a significant U-spin symmetry breaking effect in charmonium decays.
We study the decays of J/ψ and ψ(3686) to the final states Σ(1385)0Σ̄(1385)0 and Ξ0Ξ̄0 based on a single baryon tag method using data samples of (1310.6±7.0)×10⁶ J/ψ and (447.9±2.9)×10⁶ ψ(3686) events collected with the BESIII detector at the BEPCII collider. The decays to Σ(1385)0Σ̄(1385)0 are observed for the first time. The measured branching fractions of J/ψ and ψ(3686)→Ξ0Ξ̄0 are in good agreement with, and much more precise than, the previously published results. The angular parameters for these decays are also measured for the first time. The measured angular decay parameter for J/ψ→Σ(1385)0Σ̄(1385)0, α=−0.64±0.03±0.10, is found to be negative, different from the other decay processes in this measurement. In addition, the "12% rule" and isospin symmetry in the J/ψ and ψ(3686)→ΞΞ̄ and Σ(1385)Σ̄(1385) systems are tested.
By analyzing the large-angle Bhabha scattering events e+e− → (γ)e+e− and diphoton events e+e− → (γ)γγ for the data sets collected at center-of-mass (c.m.) energies between 2.2324 and 4.5900 GeV (131 energy points in total) with the upgraded Beijing Spectrometer (BESIII) at the Beijing Electron-Positron Collider (BEPCII), the integrated luminosities have been measured individually at each c.m. energy. The results are important inputs for the R value and J/ψ resonance parameter measurements.
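Schematically, a counting-based luminosity determination of the kind described in this abstract relates the number of selected Bhabha (or diphoton) candidates to the QED-calculable visible cross section. This is the generic textbook relation, with generic symbols, not a formula quoted from the paper:

```latex
\mathcal{L}_{\mathrm{int}} \;=\; \frac{N_{\mathrm{obs}} - N_{\mathrm{bkg}}}{\varepsilon \,\sigma_{\mathrm{vis}}}
```

where N_obs is the number of candidate events passing the selection, N_bkg the estimated background, ε the combined trigger and selection efficiency, and σ_vis the visible cross section of the reference QED process within the detector acceptance at the given c.m. energy.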
The G2A receptor (GPR132) contributes to oxaliplatin-induced mechanical pain hypersensitivity
(2017)
Chemotherapy-induced peripheral neuropathic pain (CIPN) is a common and severely debilitating side effect of many widely used cytostatics. However, there is no approved pharmacological treatment for CIPN available. Among other substances, oxaliplatin causes CIPN in up to 80% of treated patients. Here, we report the involvement of the G protein-coupled receptor G2A (GPR132) in oxaliplatin-induced neuropathic pain in mice. We found that mice deficient in the G2A receptor show decreased mechanical hypersensitivity after oxaliplatin treatment. Lipid ligands of G2A were found in increased concentrations in the sciatic nerve and dorsal root ganglia of oxaliplatin-treated mice. Calcium imaging and patch-clamp experiments show that G2A activation sensitizes the ligand-gated ion channel TRPV1 in sensory neurons via activation of PKC. Based on these findings, we conclude that targeting G2A may be a promising approach to reduce oxaliplatin-induced TRPV1 sensitization and the hyperexcitability of sensory neurons and thereby to reduce pain in patients treated with this chemotherapeutic agent.
Identification of unique cardiolipin and monolysocardiolipin species in Acinetobacter baumannii
(2017)
Acidic glycerophospholipids play an important role in determining the resistance of Gram-negative bacteria to stress conditions and antibiotics. Acinetobacter baumannii, an opportunistic human pathogen which is responsible for an increasing number of nosocomial infections, exhibits broad antibiotic resistances. Here, lipids of A. baumannii have been analyzed by combined MALDI-TOF/MS and TLC analyses; in addition, GC-MS analyses of fatty acid methyl esters released by methanolysis of membrane phospholipids have been performed. The main glycerophospholipids are phosphatidylethanolamine, phosphatidylglycerol, acyl-phosphatidylglycerol and cardiolipin together with monolysocardiolipin, a lysophospholipid only rarely detected in bacterial membranes. The major acyl chains in the phospholipids are C16:0 and C18:1, plus minor amounts of short-chain fatty acids. The structures of the cardiolipin and monolysocardiolipin have been elucidated by post-source decay mass spectrometry analysis. A large variety of cardiolipin and monolysocardiolipin species were found in A. baumannii. Similar lysocardiolipin levels were found in the two clinical strains A. baumannii ATCC19606T and AYE, whereas in the nonpathogenic strain Acinetobacter baylyi ADP1 lysocardiolipin levels were highly reduced.
Different insurance activities exhibit different levels of persistence of shocks and volatility. For example, life insurance is typically more persistent but less volatile than non-life insurance. We examine how diversification among life, non-life insurance, and active reinsurance business affects an insurer's contribution and exposure to the risk of other companies. Our model shows that a counterparty's credit risk exposure to an insurance group substantially depends on the relative proportion of the insurance group's life and non-life business. The empirical analysis confirms this finding with respect to several measures for spillover risk. The optimal proportion of life business that minimizes spillover risk decreases with leverage of the insurance group, and increases with active reinsurance business.
Background Microdeletions are known to confer risk to epilepsy, particularly at genomic rearrangement “hotspot” loci. However, their role outside hotspots has not been deciphered, and risk assessment by epilepsy sub-type has not been conducted.
Methods We assessed the burden, frequency and genomic content of rare, large microdeletions found in a previously published cohort of 1,366 patients with Genetic Generalized Epilepsy (GGE) plus two sets of additional unpublished genome-wide microdeletions found in 281 Rolandic Epilepsy (RE) and 807 Adult Focal Epilepsy (AFE) patients, totaling 2,454 cases. These microdeletion sets were assessed in a combined analysis and in sub-type specific approaches against 6,746 ethnically matched controls.
Results When hotspots are considered, we detected an enrichment of microdeletions in the combined epilepsy analysis (adjusted P = 2.00×10⁻⁷; OR = 1.89; 95% CI: 1.51-2.35), where the implicated microdeletions overlapped with rarely deleted genes and those involved in neurodevelopmental processes. Sub-type specific analyses showed that hotspot deletions in the GGE subgroup contribute most of the signal (adjusted P = 1.22×10⁻¹²; OR = 7.45; 95% CI = 4.20-11.97). Outside hotspot loci, microdeletions were enriched in the GGE cohort for neurodevelopmental genes (adjusted P = 4.78×10⁻³; OR = 2.30; 95% CI = 1.42-3.70), whereas no additional signal was observed for RE and AFE. Still, gene content analysis was able to identify known (NRXN1, RBFOX1 and PCDH7) and novel (LOC102723362) candidate genes affected in more than one epilepsy sub-type but not in controls.
Conclusions Our results show a heterogeneous effect of recurrent and non-recurrent microdeletions as part of the genetic architecture of GGE and a minor to negligible contribution in the etiology of RE and AFE.
Why do humans cooperate and often punish norm violations of others? In the present study, we sought to investigate the genetic bases of altruistic punishment (AP), which refers to the costly punishment of norm violations with potential benefit for other individuals. Recent evidence suggests that norm violations and unfairness are indexed by the feedback-related negativity (FRN), an anterior cingulate cortex (ACC) generated neural response to expectancy violations. Given evidence on the role of serotonin and dopamine in AP as well as in FRN generation, we explored the impact of genetic variation of serotonin and dopamine function on FRN and AP behavior in response to unfair vs. fair monetary offers in a Dictator Game (DG) with punishment option. In a sample of 45 healthy participants we observed larger FRN amplitudes to unfair DG assignments both for 7-repeat allele carriers of the dopamine D4 receptor (DRD4) exon III polymorphism and for l/l-genotype carriers of the serotonin transporter gene-linked polymorphic region (5-HTTLPR). Moreover, 5-HTTLPR l/l-genotype carriers punished unfair offers more strongly. These findings support the role of serotonin and dopamine in AP, potentially via their influence on neural mechanisms implicated in the monitoring of expectancy violations and their relation to impulsive and punishment behavior.
Telemonitoring devices can be used to screen consumers' characteristics and mitigate information asymmetries that lead to adverse selection in insurance markets. However, some consumers value their privacy and dislike sharing private information with insurers. In the second-best efficient Wilson-Miyazaki-Spence framework, we allow for consumers to reveal their risk type for an individual subjective cost and show analytically how this affects insurance market equilibria as well as utilitarian social welfare. Our analysis shows that the choice of information disclosure with respect to revelation of their risk type can substitute deductibles for consumers whose transparency aversion is sufficiently low. This can lead to a Pareto improvement of social welfare and a Pareto efficient market allocation. However, if all consumers are offered cross-subsidizing contracts, the introduction of a transparency contract decreases or even eliminates cross-subsidies. Given the prior existence of a WMS equilibrium, utility is shifted from individuals who do not reveal their private information to those who choose to reveal. Our analysis provides a theoretical foundation for the discussion on consumer protection in the context of digitalization. It shows that new technologies bring new ways to challenge cross-subsidization in insurance markets and stresses the negative externalities that digitalization has on consumers who are not willing to take part in this development.
This paper investigates the effects of a rise in interest rates and of lapse risk of endowment life insurance policies on the liquidity and solvency of life insurers. We model the book and market value balance sheet of an average German life insurer, subject to both GAAP and Solvency II regulation, featuring an existing back book of policies and an existing asset allocation calibrated with historical data. The balance sheet is then projected forward under stochastic financial markets. Lapse rates are modeled stochastically and depend on the granted guaranteed rate of return and the prevailing level of interest rates. Our results suggest that in the case of a sharp increase in interest rates, lapse rates rise sharply and the solvency position of the insurer deteriorates in the short run. This result is particularly driven by the interaction between a reduction in the market value of assets, large guarantees for existing policies, and a very slow adjustment of asset returns to interest rates. A sharp or gradual rise in interest rates is associated with substantial and persistent liquidity needs that are particularly driven by lapse rates.
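A common deterministic baseline for the interest-rate-dependent lapse behavior the abstract describes is a bounded linear function of the spread between the prevailing market rate and the policy's guaranteed rate. The functional form, parameter names, and values below are illustrative assumptions, not the paper's calibrated stochastic lapse model.

```python
# Hedged sketch: dynamic lapse rate as a bounded linear function of the
# spread between the market rate and the guaranteed rate. Policyholders
# lapse more when market rates exceed their guarantee (all parameters
# are illustrative assumptions).
def lapse_rate(market_rate, guaranteed_rate,
               base=0.03, sensitivity=2.0, floor=0.0, cap=0.5):
    spread = market_rate - guaranteed_rate
    # clamp to [floor, cap] so the rate stays a valid probability
    return min(cap, max(floor, base + sensitivity * spread))
```

A stochastic version, as in the paper, would add a random shock around this baseline; the deterministic core already reproduces the key mechanism of lapses spiking after a sharp rise in interest rates.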
Under Solvency II, corporate governance requirements are a complementary, but nonetheless essential, element in building a sound regulatory framework for insurance undertakings, not least to address risks not specifically mitigated by solvency capital requirements alone. After recalling the provisions of the Second Pillar concerning the system of governance, the paper highlights the emerging regulatory trends in the corporate governance of insurance firms. Among other things, it signals the exceptional extension of the duties and responsibilities assigned to the board of directors, far beyond the traditional role of monitoring the chief executive officer and assessing the overall direction and strategy of the business. However, better risk governance is not necessarily built on narrow rule-based approaches to corporate governance.
A tontine provides a mortality-driven, age-increasing payout structure through the pooling of mortality risk. Because a tontine does not entail any guarantees, its payout structure is determined by the pooled individual characteristics of the tontinists. The surrender decision of a single tontinist therefore directly affects the remaining members' payouts. Nevertheless, the opportunity to surrender is crucial to the success of a tontine from a regulatory as well as a policyholder perspective. This paper therefore derives the fair surrender value of a tontine, first on the basis of expected values, and then incorporates the increasing payout volatility to determine an equitable surrender value. Results show that the surrender decision requires a discount on the fair surrender value as security for the remaining members. The discount intensifies with decreasing tontine size and increasing risk aversion. However, tontinists are less willing to surrender as tontine size decreases and risk aversion increases, creating a natural protection against tontine runs stemming from short-term liquidity shocks. Furthermore, we argue that a surrender decision based on private information also requires a discount on the fair surrender value.
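As a first approximation, the fair surrender value based on expected values can be read as the expected present value of the member's remaining tontine payouts. The helper below sketches that actuarial calculation; the function name and inputs are hypothetical, and the paper's equitable value would additionally apply the volatility-based discount described in the abstract.

```python
# Hedged sketch: fair surrender value as the expected present value of
# remaining payouts (illustrative; not the paper's equitable value, which
# further discounts for the payout-volatility transfer to the pool).
def fair_surrender_value(payouts, survival_probs, rate):
    """payouts[t]: expected payout in year t+1, conditional on survival.
    survival_probs[t]: probability of surviving to receive payouts[t].
    rate: flat annual discount rate."""
    v = 1.0 / (1.0 + rate)  # one-year discount factor
    return sum(p * q * v ** (t + 1)
               for t, (p, q) in enumerate(zip(payouts, survival_probs)))
```

A discount on this value, as the paper argues, would then be applied before paying out the surrendering member.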
We study the impact of estimation errors of firms on social welfare. For this purpose, we present a model of the insurance market in which insurers face parameter uncertainty about expected loss sizes. As consumers react to under- and overestimation by increasing and decreasing demand, respectively, insurers require a safety loading for parameter uncertainty. If the safety loading is too small, less risk averse consumers benefit from less informed insurers by speculating on them underestimating expected losses. Otherwise, social welfare increases with insurers’ information. We empirically estimate safety loadings in the US property and casualty insurance market, and show that these are likely to be sufficiently large for consumers to benefit from more informed insurers.
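A minimal way to picture a safety loading for parameter uncertainty: the insurer charges the estimated mean loss plus a margin sized to cover likely estimation error, so that consumers cannot profitably speculate on underestimation. The normal-approximation margin and all names below are illustrative assumptions, not the paper's empirical estimator.

```python
# Hedged sketch: premium with a safety loading for parameter uncertainty.
# The loading covers the standard error of the estimated mean loss at a
# chosen confidence level (z = 1.645 ~ 95% one-sided, normal approximation).
import math
import statistics

def premium_with_safety_loading(loss_sample, z=1.645):
    n = len(loss_sample)
    mu_hat = statistics.fmean(loss_sample)          # estimated mean loss
    se = statistics.stdev(loss_sample) / math.sqrt(n)  # std. error of the mean
    loading = z * se / mu_hat                       # relative safety loading
    return mu_hat * (1.0 + loading)
```

In the paper's terms, if the loading is too small relative to estimation error, less risk-averse consumers can gain by betting on underestimated premiums; a sufficiently large loading restores the welfare gain from better-informed insurers.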
Fossil dental remains are an archive of unique information for paleobiological studies. Computed microtomography based on X-ray microfocus sources (X-μCT) and synchrotron radiation (SR-μCT) allows subtle quantification, at the micron and sub-micron scale, of the meso- and microstructural signature imprinted in mineralized tissues such as enamel and dentine through high-resolution "virtual histology". Nonetheless, depending on the degree of alteration undergone during fossilization, X-ray analyses of tooth tissues do not always provide distinct imaging contrasts, preventing the extraction of essential morphological and anatomical details. We illustrate here, through three examples, the successful application of neutron microtomography (n-μCT) in cases where X-rays had previously failed to deliver contrast between the dental tissues of fossilized specimens.
Ebola virus (EBOV) infection causes a high death toll, killing a large proportion of infected patients within 7 days. Comprehensive data on EBOV infection are fragmented, hampering efforts to develop therapeutics and vaccines against EBOV. Under these circumstances, mathematical models become valuable tools for exploring potential control strategies. In this paper, we employed experimental data from EBOV-infected nonhuman primates (NHPs) to construct a mathematical framework for determining windows of opportunity for treatment and vaccination. Considering a prophylactic vaccine based on recombinant vesicular stomatitis virus expressing the EBOV glycoprotein (rVSV-EBOV), vaccination could be protective if a subject is vaccinated during a period from one week to four months before infection. For a therapeutic vaccine based on monoclonal antibodies (mAbs), a single dose might resolve invasive EBOV replication even if administered as late as four days after infection. Our mathematical models can serve as building blocks for evaluating therapeutic and vaccine modalities as well as public health intervention strategies in outbreaks. Future laboratory experiments will help to validate and refine the estimates of the windows of opportunity proposed here.
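The abstract does not specify the mathematical framework; a standard building block for such within-host analyses is the target-cell-limited viral dynamics model (susceptible target cells T, infected cells I, free virus V). The sketch below integrates it with a simple Euler scheme; the model choice and all parameter values are illustrative assumptions, not the authors' NHP-calibrated model.

```python
# Hedged sketch: target-cell-limited viral dynamics, a generic within-host
# model often used as a building block in such studies (illustrative
# parameters, not fitted to the paper's NHP data).
#   dT/dt = -beta*T*V          (target cells infected by virus)
#   dI/dt =  beta*T*V - delta*I (infected cells die at rate delta)
#   dV/dt =  p*I - c*V          (virus produced by I, cleared at rate c)
def simulate(beta=1e-6, delta=1.0, p=100.0, c=5.0,
             T0=1e6, I0=0.0, V0=1.0, dt=0.001, days=20.0):
    T, I, V = T0, I0, V0
    peak = V
    for _ in range(int(days / dt)):
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T += dT * dt
        I += dI * dt
        V += dV * dt
        peak = max(peak, V)
    return T, I, V, peak
```

With these illustrative parameters the basic reproductive number R0 = beta*T0*p/(delta*c) exceeds one, so the viral load first expands and then resolves as target cells are depleted; fitting such trajectories to data is what lets one locate windows of opportunity for intervention.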
Formation of Hubbard-like bands as a fingerprint of strong electron-electron interactions in FeSe
(2017)
We use angle-resolved photoemission spectroscopy (ARPES) to explore the electronic structure of single crystals of FeSe over a wide range of binding energies and study the effects of strong electron-electron correlations. We provide evidence for the existence of "Hubbard-like bands" at high binding energies consisting of incoherent many-body excitations originating from Fe 3d states, in addition to the renormalized quasiparticle bands near the Fermi level. Many high-energy features of the observed ARPES data can be accounted for when incorporating effects of strong local Coulomb interactions in calculations of the spectral function via dynamical mean-field theory, including the formation of a Hubbard-like band. This shows that over the energy scale of several eV, local correlations arising from the on-site Coulomb repulsion and Hund's coupling are essential for a proper understanding of the electronic structure of FeSe and other related iron-based superconductors.
Wilhelm Fraenger, 1890–1964
(2017)
We present the charged-particle multiplicity distributions over a wide pseudorapidity range (−3.4 < η < 5.0) for pp collisions at √s = 0.9, 7, and 8 TeV at the LHC. Results are based on information from the Silicon Pixel Detector and the Forward Multiplicity Detector of ALICE, extending the pseudorapidity coverage of the earlier publications and the high-multiplicity reach. The measurements are compared to results from the CMS experiment and to PYTHIA, PHOJET and EPOS LHC event generators, as well as IP-Glasma calculations.
The transverse momentum distributions of the strange and double-strange hyperon resonances (Σ(1385)±, Ξ(1530)0) produced in p-Pb collisions at √sNN = 5.02 TeV were measured in the rapidity range −0.5 < yCMS < 0 for event classes corresponding to different charged-particle multiplicity densities, ⟨dNch/dηlab⟩. The mean transverse momentum values are presented as a function of ⟨dNch/dηlab⟩, as well as a function of the particle masses, and compared with previous results on hyperon production. The integrated yield ratios of excited to ground-state hyperons are constant as a function of ⟨dNch/dηlab⟩. The equivalent ratios to pions exhibit an increase with ⟨dNch/dηlab⟩, depending on their strangeness content.
Epigenetic control of microsomal prostaglandin E synthase-1 by HDAC-mediated recruitment of p300
(2017)
Nonsteroidal anti-inflammatory drugs are the most widely used medicines to treat pain and inflammation, and to inhibit platelet function. Understanding the regulation of expression of enzymes of the prostanoid pathway is of great medical relevance. Histone acetylation crucially controls gene expression. We set out to identify the impact of histone deacetylases (HDACs) on the generation of prostanoids and examine the consequences for vascular function. HDAC inhibition (HDACi) with the pan-HDAC inhibitor vorinostat attenuated prostaglandin (PG)E2 generation in the murine vasculature and in human vascular smooth muscle cells. In line with this, the expression of the key enzyme for PGE2 synthesis, microsomal PGE synthase-1 (PTGES1), was reduced by HDACi. Accordingly, the relaxation to arachidonic acid was decreased after ex vivo incubation of murine vessels with HDACi. To identify the underlying mechanism, chromatin immunoprecipitation (ChIP) and ChIP-sequencing analyses were performed. These results suggest that HDACs are involved in the recruitment of the transcriptional activator p300 to the PTGES1 gene and that HDACi prevented this effect. In line with the acetyltransferase activity of p300, H3K27 acetylation was reduced after HDACi and resulted in the formation of heterochromatin in the PTGES1 gene. In conclusion, HDAC activity maintains PTGES1 expression by recruiting p300 to its gene.
The introduction of Patient Blood Management (PBM) brings a paradigm shift in the recognition and treatment of anemia and identifies measures to prevent anemia from developing in the first place. PBM supports the physician in the decision dilemma between the beneficial effects and the adverse side effects of blood transfusions. With PBM, blood use is reduced substantially and side effects are lowered. Not only the therapeutic measures but also the diagnostic PBM measures in the laboratory lead to a relevant reduction in the blood volume drawn. PBM study results show a significant reduction in morbidity and mortality and an improvement in patient outcomes. A further positive side effect is the conservation of resources in all areas involved, which leads to a relevant cost reduction and increased efficiency. In addition, PBM raises awareness of the presence, development, and treatment of anemia, as well as of the careful handling of the precious resource blood. The importance of PBM is now also supported by industry for the laboratory; for point-of-care testing (POCT), however, PBM has not yet been adequately realized technically.