University Publications
An invariant differential cross section measurement of inclusive π0 and η meson production at mid-rapidity in pp collisions at √s = 8 TeV was carried out by the ALICE experiment at the LHC. The spectra of π0 and η mesons were measured in transverse momentum ranges of 0.3 < pT < 35 GeV/c and 0.5 < pT < 35 GeV/c, respectively. Next-to-leading-order perturbative QCD calculations using the fragmentation functions DSS14 for the π0 and AESSS for the η overestimate the cross sections of both neutral mesons, although such calculations agree with the measured η/π0 ratio within uncertainties. The results were also compared with PYTHIA 8.2 predictions, for which the Monash 2013 tune yields the best agreement with the measured neutral meson spectra. The measurements confirm a universal behavior of the η/π0 ratio, seen in NA27, PHENIX and ALICE data for pp collisions from √s = 27.5 GeV to √s = 8 TeV, within experimental uncertainties. A relation between the π0 and η production cross sections in pp collisions at √s = 8 TeV is given by mT scaling for pT > 3.5 GeV/c. However, a deviation from this empirical scaling rule is observed in the η/π0 ratio for transverse momenta below 3.5 GeV/c, with a significance of 6.2σ.
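The mT-scaling relation referred to above maps one meson spectrum onto another through the transverse mass mT = √(pT² + m²): under the scaling hypothesis, the η spectrum at a given pT equals a constant times the π0 spectrum evaluated at the pT giving the same mT. A minimal numeric sketch of that kinematic mapping (PDG masses; illustrative only, not the ALICE fitting procedure):

```python
import math

# PDG masses in GeV/c^2
M_PI0 = 0.13498
M_ETA = 0.54786

def mt(pt, mass):
    """Transverse mass m_T = sqrt(pT^2 + m^2) (natural units, GeV)."""
    return math.sqrt(pt**2 + mass**2)

def pt_pi0_equivalent(pt_eta):
    """pT at which a pi0 has the same m_T as an eta with transverse momentum pt_eta.

    Under m_T scaling, the eta yield at pt_eta is proportional to the
    pi0 yield evaluated at this pT.
    """
    mt_eta = mt(pt_eta, M_ETA)
    return math.sqrt(mt_eta**2 - M_PI0**2)

# At pT = 3.5 GeV/c (where the abstract reports the scaling starts to hold),
# the equivalent pi0 pT is only slightly larger than the eta pT:
print(round(pt_pi0_equivalent(3.5), 3))
```

The mass difference matters most at low pT, where mT(η) ≫ mT(π0) for equal pT, which is exactly the region where the measured η/π0 ratio deviates from the scaling prediction.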
Neutral pion and η meson invariant differential yields were measured in non-single-diffractive p–Pb collisions at √sNN = 5.02 TeV with the ALICE experiment at the CERN LHC. The analysis combines results from three complementary photon measurements, utilizing the PHOS and EMCal calorimeters and the Photon Conversion Method. The invariant differential yields of π0 and η meson inclusive production are measured near mid-rapidity in a broad transverse momentum range of 0.3 < pT < 20 GeV/c and 0.7 < pT < 20 GeV/c, respectively. The measured η/π0 ratio increases with pT and saturates for pT > 4 GeV/c at 0.483 ± 0.015 (stat) ± 0.015 (syst). A deviation from mT scaling is observed for pT < 2 GeV/c. The measured η/π0 ratio is consistent with previous measurements from proton–nucleus and pp collisions over the full pT range. The measured η/π0 ratio at high pT also agrees within uncertainties with measurements from nucleus–nucleus collisions. The π0 and η yields in p–Pb relative to the scaled pp interpolated reference, RpPb, are presented for 0.3 < pT < 20 GeV/c and 0.7 < pT < 20 GeV/c, respectively. The results are compared with theoretical model calculations. The values of RpPb are consistent with unity for transverse momenta above 2 GeV/c. These results support the interpretation that the suppressed yield of neutral mesons measured in Pb–Pb collisions at LHC energies is due to parton energy loss in the hot QCD medium.
ϕ meson measurements provide insight into strangeness production, which is one of the key observables for the hot medium formed in high-energy heavy-ion collisions. ALICE measured ϕ production through its decay into muon pairs in Pb–Pb collisions at √sNN = 2.76 TeV in the intermediate transverse momentum range 2 < pT < 5 GeV/c and in the rapidity interval 2.5 < y < 4. The ϕ yield was measured as a function of the transverse momentum and collision centrality. The nuclear modification factor was obtained as a function of the average number of participating nucleons. Results were compared with the ones obtained via the kaon decay channel in the same pT range at midrapidity. The values of the nuclear modification factor in the two rapidity regions are in agreement within uncertainties.
A measurement of beauty hadron production at mid-rapidity in proton–lead collisions at a nucleon–nucleon centre-of-mass energy √sNN = 5.02 TeV is presented. The semi-inclusive decay channel of beauty hadrons into J/ψ is considered, where the J/ψ mesons are reconstructed in the dielectron decay channel at mid-rapidity down to transverse momenta of 1.3 GeV/c. The bb̄ production cross section at mid-rapidity, dσbb̄/dy, and the total cross section extrapolated over full phase space, σbb̄, are obtained. This measurement is combined with results on inclusive J/ψ production to determine the prompt J/ψ cross sections. The results in p–Pb collisions are then compared to expectations from pp collisions at the same centre-of-mass energy to derive the nuclear modification factor RpPb, and compared to models to study possible nuclear modifications of the production induced by cold-nuclear-matter effects. RpPb is found to be smaller than unity at low pT both for J/ψ coming from beauty hadron decays and for prompt J/ψ.
We apply the phenomenological Reggeon field theory framework to investigate rapidity gap survival (RGS) probability for diffractive dijet production in proton–proton collisions. In particular, we study in some detail rapidity gap suppression due to elastic rescatterings of intermediate partons in the underlying parton cascades, described by enhanced (Pomeron–Pomeron interaction) diagrams. We demonstrate that such contributions play a subdominant role, compared to the usual, so-called “eikonal”, rapidity gap suppression due to elastic rescatterings of constituent partons of the colliding protons. On the other hand, the overall RGS factor proves to be sensitive to color fluctuations in the proton. Hence, experimental data on diffractive dijet production can be used to constrain the respective model approaches.
Inclusive ϒ(1S) and ϒ(2S) production have been measured in Pb–Pb collisions at the centre-of-mass energy per nucleon–nucleon pair √sNN = 5.02 TeV, using the ALICE detector at the CERN LHC. The ϒ mesons are reconstructed in the centre-of-mass rapidity interval 2.5 < y < 4 and in the transverse momentum range pT < 15 GeV/c, via their decays to muon pairs. In this Letter, we present results on the inclusive ϒ(1S) nuclear modification factor RAA as a function of collision centrality, transverse momentum and rapidity. The ϒ(1S) and ϒ(2S) RAA, integrated over the centrality range 0–90%, are 0.37 ± 0.02 (stat) ± 0.03 (syst) and 0.10 ± 0.04 (stat) ± 0.02 (syst), respectively, leading to a ratio RAA(ϒ(2S))/RAA(ϒ(1S)) of 0.28 ± 0.12 (stat) ± 0.06 (syst). The observed ϒ(1S) suppression increases with the centrality of the collision, and no significant variation is observed as a function of transverse momentum and rapidity.
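The central value of the quoted RAA double ratio follows from simple division of the two RAA values; its statistical uncertainty can be approximated by naive quadrature propagation, assuming the two statistical errors are uncorrelated (an illustration only; the published analysis treats correlated systematics properly, which is why the quoted values differ slightly from this back-of-the-envelope result):

```python
import math

# R_AA values quoted above (0-90% centrality)
r1s, r1s_stat = 0.37, 0.02   # Upsilon(1S)
r2s, r2s_stat = 0.10, 0.04   # Upsilon(2S)

ratio = r2s / r1s
# naive quadrature propagation of the statistical uncertainties,
# assuming they are uncorrelated
ratio_stat = ratio * math.sqrt((r1s_stat / r1s)**2 + (r2s_stat / r2s)**2)

print(f"{ratio:.2f} +- {ratio_stat:.2f} (stat)")
```

The ϒ(2S) statistical error dominates the ratio's uncertainty, since its relative size (40%) is far larger than that of the ϒ(1S) (about 5%).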
The measurement of dielectron production is presented as a function of invariant mass and transverse momentum (pT) at midrapidity (|ye| < 0.8) in proton–proton (pp) collisions at a centre-of-mass energy of √s = 13 TeV. The contributions from light-hadron decays are calculated from their measured cross sections in pp collisions at √s = 7 TeV or 13 TeV. The remaining continuum stems from correlated semileptonic decays of heavy-flavour hadrons. Fitting the data with templates from two different MC event generators, PYTHIA and POWHEG, the charm and beauty cross sections at midrapidity are extracted for the first time at this collision energy: dσcc̄/dy|y=0 = 974 ± 138 (stat.) ± 140 (syst.) ± 214 (BR) μb and dσbb̄/dy|y=0 = 79 ± 14 (stat.) ± 11 (syst.) ± 5 (BR) μb using PYTHIA simulations, and dσcc̄/dy|y=0 = 1417 ± 184 (stat.) ± 204 (syst.) ± 312 (BR) μb and dσbb̄/dy|y=0 = 48 ± 14 (stat.) ± 7 (syst.) ± 3 (BR) μb for POWHEG. These values, whose uncertainties are fully correlated between the two generators, are consistent with extrapolations from lower energies. The different results obtained with POWHEG and PYTHIA imply different kinematic correlations of the heavy-quark pairs in these two generators. Furthermore, comparisons of dielectron spectra in inelastic events and in events collected with a trigger on high charged-particle multiplicities are presented in various pT intervals. The differences are consistent with the already measured scaling of light-hadron and open-charm production at high charged-particle multiplicity as a function of pT. Upper limits for the contribution of virtual direct photons are extracted at 90% confidence level and found to be in agreement with pQCD calculations.
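The template fit underlying the cross-section extraction, stripped to its essentials, is a linear least-squares problem: the measured spectrum is modelled as a sum of fixed-shape MC templates with free normalisations, which for two templates has a closed-form solution via the 2×2 normal equations. A sketch with made-up toy numbers (not the ALICE templates, and ignoring per-bin uncertainties that a real χ² fit would weight by):

```python
def template_fit(data, t1, t2):
    """Normalisations (a, b) minimising sum_i (data_i - a*t1_i - b*t2_i)^2."""
    s11 = sum(x * x for x in t1)
    s22 = sum(x * x for x in t2)
    s12 = sum(x * y for x, y in zip(t1, t2))
    d1 = sum(x * y for x, y in zip(data, t1))
    d2 = sum(x * y for x, y in zip(data, t2))
    det = s11 * s22 - s12 * s12
    return (d1 * s22 - d2 * s12) / det, (d2 * s11 - d1 * s12) / det

# toy template shapes (made up): "data" constructed as 2*t1 + 0.5*t2,
# so the fit should recover exactly those normalisations
t1 = [5.0, 3.0, 1.0, 0.5]
t2 = [0.5, 1.0, 2.0, 3.0]
data = [2 * a + 0.5 * b for a, b in zip(t1, t2)]
print(template_fit(data, t1, t2))
```

Different generators (PYTHIA vs. POWHEG) produce different template shapes t1, t2 for the same physics, which is why the fitted normalisations, and hence the extracted cross sections, differ between them.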
Inclusive J/ψ production is studied in Xe–Xe interactions at a centre-of-mass energy per nucleon pair of √sNN = 5.44 TeV, using the ALICE detector at the CERN LHC. The J/ψ meson is reconstructed via its decay into a muon pair, in the centre-of-mass rapidity interval 2.5 < y < 4 and down to zero transverse momentum. In this Letter, the nuclear modification factors RAA for inclusive J/ψ, measured in the centrality range 0–90% as well as in the centrality intervals 0–20% and 20–90% are presented. The RAA values are compared to previously published results for Pb–Pb collisions at √sNN = 5.02 TeV and to the calculation of a transport model. A good agreement is found between Xe–Xe and Pb–Pb results as well as between data and the model.
The elliptic flow of inclusive and direct photons was measured at mid-rapidity in two centrality classes, 0–20% and 20–40%, in Pb–Pb collisions at √sNN = 2.76 TeV by ALICE. Photons were detected with the highly segmented electromagnetic calorimeter PHOS and via conversions in the detector material with the e+e− pairs reconstructed in the central tracking system. The results of the two methods were combined and the direct-photon elliptic flow was extracted in the transverse momentum range 0.9 < pT < 6.2 GeV/c. A comparison to RHIC data shows a similar magnitude of the measured direct-photon elliptic flow. Hydrodynamic and transport model calculations are systematically lower than the data, but are found to be compatible.
In this Letter, the ALICE Collaboration presents the first measurements of the charged-particle multiplicity density, dNch/dη, and total charged-particle multiplicity, Nch^tot, in Xe–Xe collisions at a centre-of-mass energy per nucleon–nucleon pair of √sNN = 5.44 TeV. The measurements are performed as a function of collision centrality over a wide pseudorapidity range of −3.5 < η < 5. The values of dNch/dη at mid-rapidity and Nch^tot for central collisions, normalised to the number of nucleons participating in the collision (Npart), follow as a function of √sNN the trends established in previous heavy-ion measurements. The same quantities are also found to increase as a function of Npart, and up to the 5% most central collisions the trends are the same as the ones observed in Pb–Pb at a similar energy. For more central collisions, the Xe–Xe scaled multiplicities exceed those in Pb–Pb for a similar Npart. The results are compared to phenomenological models and theoretical calculations based on different mechanisms for particle production in nuclear collisions. All considered models describe the data reasonably well within 15%.
The production of Z0 bosons at large rapidities in Pb–Pb collisions at √sNN = 5.02 TeV is reported. Z0 candidates are reconstructed in the dimuon decay channel (Z0 → μ+ μ−), based on muons selected with pseudo-rapidity −4.0 < η < −2.5 and pT > 20 GeV/c. The invariant yield and the nuclear modification factor, RAA, are presented as a function of rapidity and collision centrality. The value of RAA for the 0–20% central Pb–Pb collisions is 0.67 ± 0.11 (stat.) ± 0.03 (syst.) ± 0.06 (corr. syst.), exhibiting a deviation of 2.6σ from unity. The results are well-described by calculations that include nuclear modifications of the parton distribution functions, while the predictions using vacuum PDFs deviate from data by 2.3σ in the 0–90% centrality class and by 3σ in the 0–20% central collisions.
We present a measurement of inclusive J/ψ production at mid-rapidity (|y| < 1) in p+p collisions at a center-of-mass energy of √s = 200 GeV with the STAR experiment at the Relativistic Heavy Ion Collider (RHIC). The differential production cross section for J/ψ as a function of transverse momentum (pT) for 0 < pT < 14 GeV/c and the total cross section are reported and compared to calculations from the color evaporation model and the non-relativistic Quantum Chromodynamics model. The dependence of J/ψ relative yields in three pT intervals on charged-particle multiplicity at mid-rapidity is measured for the first time in p+p collisions at √s = 200 GeV and compared with that measured at √s = 7 TeV, with the PYTHIA8 and EPOS3 Monte Carlo generators, and with the Percolation model prediction.
New measurements of directed flow for charged hadrons, characterized by the Fourier coefficient v1, are presented for transverse momenta pT and centrality intervals in Au+Au collisions recorded by the STAR experiment for the center-of-mass energy range √sNN = 7.7–200 GeV. The measurements underscore the importance of momentum conservation, and the characteristic dependencies on √sNN, centrality and pT are consistent with the expectations of geometric fluctuations generated in the initial stages of the collision, acting in concert with a hydrodynamic-like expansion. The centrality and pT dependencies of v1^even, as well as an observed similarity between its excitation function and that for v3, could serve as constraints for initial-state models. The v1^even excitation function could also provide an important supplement to the flow measurements employed for precision extraction of the temperature dependence of the specific shear viscosity.
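The flow coefficients vn discussed above are the Fourier harmonics of the azimuthal particle distribution relative to the event plane, vn = ⟨cos n(φ − Ψn)⟩. A toy Monte Carlo sketch of that estimator (illustrative sampling with a known input v1; the real STAR analysis additionally corrects for finite event-plane resolution):

```python
import math
import random

def v_n(phis, psi_n, n):
    """Flow coefficient v_n estimated as <cos(n*(phi - Psi_n))>."""
    return sum(math.cos(n * (p - psi_n)) for p in phis) / len(phis)

# toy sample drawn from dN/dphi ~ 1 + 2*v1*cos(phi - Psi1)
rng = random.Random(5)
v1_true, psi1 = 0.05, 0.8
phis = []
while len(phis) < 200000:
    phi = rng.uniform(0, 2 * math.pi)
    # accept-reject against the flow-modulated azimuthal distribution
    if rng.uniform(0, 1 + 2 * v1_true) < 1 + 2 * v1_true * math.cos(phi - psi1):
        phis.append(phi)

print(round(v_n(phis, psi1, 1), 3))  # statistically close to v1_true
```

The same estimator with n = 2 or n = 3 yields the elliptic and triangular flow coefficients mentioned elsewhere in this listing.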
Fluctuations of conserved quantities such as baryon number, charge, and strangeness are sensitive to the correlation length of the hot and dense matter created in relativistic heavy-ion collisions and can be used to search for the QCD critical point. We report the first measurements of the moments of net-kaon multiplicity distributions in Au+Au collisions at √sNN = 7.7, 11.5, 14.5, 19.6, 27, 39, 62.4, and 200 GeV. The collision centrality and energy dependence of the mean (M), variance (σ²), skewness (S), and kurtosis (κ) of the net-kaon multiplicity distributions, as well as the ratio σ²/M and the products Sσ and κσ², are presented. Comparisons are made with Poisson and negative binomial baseline calculations, as well as with the UrQMD transport model, which does not include effects from the QCD critical point. Within current uncertainties, the net-kaon cumulant ratios appear to be monotonic as a function of collision energy.
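The quoted products can be written as cumulant ratios of the net-kaon distribution, Sσ = C3/C2 and κσ² = C4/C2. For the Poisson baseline, where K+ and K− counts are independent Poisson variables, the net distribution is Skellam and these ratios reduce to (μ+ − μ−)/(μ+ + μ−) and 1. A sketch with made-up toy multiplicities μ±:

```python
import math
import random

def cumulants(samples):
    """First four cumulants C1..C4 of a sample (C4 = mu4 - 3*mu2^2)."""
    n = len(samples)
    mean = sum(samples) / n
    mu = [sum((x - mean)**k for x in samples) / n for k in (2, 3, 4)]
    return mean, mu[0], mu[1], mu[2] - 3 * mu[0]**2

def poisson(lam, rng):
    """Poisson sampler (Knuth's multiplication algorithm, fine for small lam)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

# Skellam baseline: net-K = Poisson(mu_plus) - Poisson(mu_minus)
rng = random.Random(7)
mu_p, mu_m = 5.0, 4.0   # toy K+ and K- mean multiplicities
net = [poisson(mu_p, rng) - poisson(mu_m, rng) for _ in range(200000)]
c1, c2, c3, c4 = cumulants(net)

# Skellam expectations: C3/C2 = (mu+ - mu-)/(mu+ + mu-) = 1/9, C4/C2 = 1
print(round(c3 / c2, 2), round(c4 / c2, 2))
```

A critical point would show up as a non-monotonic deviation of the measured κσ² from such baselines as a function of √sNN, which is why monotonic behaviour within uncertainties is the key statement of the abstract.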
Azimuthally differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the nature and dynamics of the system evolution. While the HBT radii oscillations relative to the second harmonic event plane measured recently reflect mostly the spatial geometry of the source, model studies have shown that the HBT radii oscillations relative to the third harmonic event plane are predominantly defined by the velocity fields. In this Letter, we present the first results on azimuthally differential pion femtoscopy relative to the third harmonic event plane as a function of the pion pair transverse momentum kT for different collision centralities in Pb–Pb collisions at √sNN = 2.76 TeV. We find that the Rside and Rout radii, which characterize the pion source size in the directions perpendicular and parallel to the pion transverse momentum, oscillate in phase relative to the third harmonic event plane, similar to the results from 3+1D hydrodynamical calculations. The observed radii oscillations unambiguously signal a collective expansion and anisotropy in the velocity fields. A comparison of the measured radii oscillations with Blast-Wave model calculations indicates that the initial-state triangularity is washed out at freeze-out.
The first measurements of anisotropic flow coefficients vn for mid-rapidity charged particles in Xe–Xe collisions at √sNN = 5.44 TeV are presented. Comparing these measurements to those from Pb–Pb collisions at √sNN = 5.02 TeV, v2 is found to be suppressed for mid-central collisions at the same centrality, and enhanced for central collisions. The values of v3 are generally larger in Xe–Xe than in Pb–Pb at a given centrality. These observations are consistent with expectations from hydrodynamic predictions. When both v2 and v3 are divided by their corresponding eccentricities for a variety of initial state models, they generally scale with transverse density when comparing Xe–Xe and Pb–Pb, with some deviations observed in central Xe–Xe and Pb–Pb collisions. These results assist in placing strong constraints on both the initial state geometry and medium response for relativistic heavy-ion collisions.
We report measurements of the inclusive J/ψ yield and average transverse momentum as a function of the charged-particle pseudorapidity density dNch/dη in p–Pb collisions at √sNN = 5.02 TeV with ALICE at the LHC. The observables are normalised to their corresponding averages in non-single-diffractive events. An increase of the normalised J/ψ yield with the normalised dNch/dη (the latter measured at mid-rapidity) is observed at mid-rapidity and backward rapidity. At forward rapidity, a saturation of the relative yield is observed for high charged-particle multiplicities. The normalised average transverse momentum at forward and backward rapidities increases with multiplicity at low multiplicities and saturates beyond moderate multiplicities. In addition, the forward-to-backward nuclear modification factor ratio is also reported, showing an increasing suppression of J/ψ production at forward rapidity with respect to backward rapidity for increasing charged-particle multiplicity.
First results on the longitudinal asymmetry and its effect on the pseudorapidity distributions in Pb–Pb collisions at √sNN = 2.76 TeV at the Large Hadron Collider are obtained with the ALICE detector. The longitudinal asymmetry arises because of an unequal number of participating nucleons from the two colliding nuclei, and is estimated for each event by measuring the energy in the forward neutron Zero Degree Calorimeters (ZNs). The effect of the longitudinal asymmetry on the pseudorapidity distributions of charged particles is measured in the regions |η| < 0.9, 2.8 < η < 5.1 and −3.7 < η < −1.7 by taking the ratio of the pseudorapidity distributions from events corresponding to different regions of asymmetry. The coefficients of a polynomial fit to the ratio characterise the effect of the asymmetry. A Monte Carlo simulation using a Glauber model for the colliding nuclei is tuned to reproduce the spectrum in the ZNs and provides a relation between the measurable longitudinal asymmetry and the shift in the rapidity (y0) of the participant zone formed by the unequal number of participating nucleons. The dependence of the coefficient of the linear term in the polynomial expansion, c1, on the mean value of y0 is investigated.
This letter presents the first measurement of jet mass in Pb–Pb and p–Pb collisions at √sNN = 2.76 TeV and √sNN = 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet quenching in the hot Quantum Chromodynamics (QCD) matter created in nuclear collisions at collider energies. Jets are reconstructed from charged particles using the anti-kT jet algorithm with resolution parameter R = 0.4. The jets are measured in the pseudorapidity range |ηjet| < 0.5 and in three intervals of transverse momentum between 60 GeV/c and 120 GeV/c. The measurement of the jet mass in central Pb–Pb collisions is compared to the jet mass as measured in p–Pb reference collisions, to vacuum event generators, and to models including jet quenching. It is observed that the jet mass in central Pb–Pb collisions is consistent within uncertainties with p–Pb reference measurements. Furthermore, the measured jet mass in Pb–Pb collisions is not reproduced by the quenching models considered in this letter and is found to be consistent with PYTHIA expectations within systematic uncertainties.
We present a measurement of azimuthal correlations between inclusive J/ψ and charged hadrons in p–Pb collisions recorded with the ALICE detector at the CERN LHC. The J/ψ are reconstructed at forward (p-going, 2.03 < y < 3.53) and backward (Pb-going, −4.46 < y < −2.96) rapidity via their μ+μ− decay channel, while the charged hadrons are reconstructed at mid-rapidity (|η| < 1.8). The correlations are expressed in terms of associated charged-hadron yields per J/ψ trigger. A rapidity gap of at least 1.5 units is required between the trigger J/ψ and the associated charged hadrons. Possible correlations due to collective effects are assessed by subtracting the associated per-trigger yields in low-multiplicity collisions from those in high-multiplicity collisions. After the subtraction, we observe a strong indication of remaining symmetric structures at Δφ ≈ 0 and Δφ ≈ π, similar to those previously found in two-particle correlations at mid- and forward rapidity. The corresponding second-order Fourier coefficient (v2) in the transverse momentum interval between 3 and 6 GeV/c is found to be positive with a significance of about 5σ. The obtained results are similar to the J/ψ v2 coefficients measured in Pb–Pb collisions at √sNN = 5.02 TeV, suggesting a common mechanism at the origin of the J/ψ v2.
The production of the charm-strange baryon Ξc0 is measured for the first time at the LHC via its semileptonic decay into e+Ξ−νe in pp collisions at √s = 7 TeV with the ALICE detector. The transverse momentum (pT) differential cross section multiplied by the branching ratio is presented in the interval 1 < pT < 8 GeV/c at mid-rapidity, |y| < 0.5. The transverse momentum dependence of the Ξc0 baryon production relative to the D0 meson production is compared to predictions of event generators with various tunes of the hadronisation mechanism, which are found to underestimate the measured cross-section ratio.
The transversity distribution, which describes transversely polarized quarks in transversely polarized nucleons, is a fundamental component of the spin structure of the nucleon, and is only loosely constrained by global fits to existing semi-inclusive deep inelastic scattering (SIDIS) data. In transversely polarized p↑+p collisions it can be accessed using transverse-polarization-dependent fragmentation functions, which give rise to azimuthal correlations between the polarization of the struck parton and the final-state scalar mesons. This letter reports on spin-dependent di-hadron correlations measured by the STAR experiment. The new dataset corresponds to 25 pb−1 integrated luminosity of p↑+p collisions at √s = 500 GeV, an increase of more than a factor of ten compared to our previous measurement at √s = 200 GeV. Non-zero asymmetries sensitive to transversity are observed at a Q² of several hundred GeV² and are found to be consistent with the former measurement and a model calculation. We expect that these data will enable an extraction of transversity with comparable precision to current SIDIS datasets but at much higher momentum transfers, where subleading effects are suppressed.
The production of Σ0 baryons in the nuclear reaction p (3.5 GeV) + Nb (corresponding to √sNN = 3.18 GeV) is studied with the detector set-up HADES at GSI, Darmstadt. Σ0 hyperons were identified via the decay Σ0 → Λγ with the subsequent decay Λ → pπ−, in coincidence with an e+e− pair from either external (γ → e+e−) or internal (Dalitz decay γ* → e+e−) gamma conversion. The differential Σ0 cross section integrated over the detector acceptance, i.e. the rapidity interval 0.5 < y < 1.1, has been extracted as ΔσΣ0 = 2.3 ± 0.2 (stat) +0.6/−0.6 (syst) ± 0.2 (norm) mb, yielding the inclusive production cross section in full phase space σΣ0,total = 5.8 ± 0.5 (stat) +1.4/−1.4 (syst) ± 0.6 (norm) ± 1.7 (extrapol) mb by averaging over different extrapolation methods. The Λall/Σ0 ratio within the HADES acceptance is equal to 2.3 ± 0.2 (stat) +0.6/−0.6 (syst). The obtained rapidity and momentum distributions are compared to transport model calculations. The Σ0 yield agrees with the statistical model of particle production in nuclear reactions. Keywords: Hyperons, Strangeness, Proton, Nucleus.
We present data on charged kaons (K±) and ϕ mesons in Au(1.23A GeV)+Au collisions. It is the first simultaneous measurement of K− and ϕ mesons in central heavy-ion collisions below a kinetic beam energy of 10A GeV. The ϕ/K− multiplicity ratio is found to be surprisingly high, with a value of 0.52 ± 0.16, and shows no dependence on the centrality of the collision. Consequently, the different slopes of the K+ and K− transverse-mass spectra can be explained solely by feed-down, which substantially softens the spectra of K− mesons. Hence, in contrast to the commonly adopted argumentation in the literature, the different slopes do not necessarily imply diverging freeze-out temperatures of K+ and K− mesons caused by different couplings to baryons.
Challenges of FAIR phase 0
(2018)
After a two-year shutdown, the GSI accelerators, together with the newly added CRYRING storage ring, will return to operation in 2018 as FAIR phase 0, with the goal of meeting the needs of the scientific community and of FAIR accelerator and detector development. Even though GSI is well known for operating a variety of ion beams, from protons up to uranium, for research areas such as nuclear physics, astrophysics, biophysics and materials science, the upcoming beam time faces a number of challenges: re-commissioning the existing circular accelerators with a brand-new control system and upgraded beam instrumentation, as well as coping with the rising failure rate of dated components and systems. The synchrotron SIS18 has been undergoing a set of upgrade measures in preparation for future FAIR operation, many of which will also be commissioned during the upcoming beam time. This paper presents the highlights of these challenges, such as re-establishing high-intensity heavy-ion operation and the parallel operation mode for serving multiple users. The status of preparations, including commissioning results, is also reported.
An automated beam-setting optimization application has been implemented on top of FAIR’s control system software stack based on CERN’s LSA framework. The optimization functionality is built using the Jenetics software library implemented in Java. Tests of the software with beam have been performed at the CRYRING@ESR ion storage ring.
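The evolutionary-optimization idea behind such an application can be sketched with a minimal genetic-algorithm loop (plain Python here; the actual application uses the Java Jenetics library on top of the LSA stack, and the objective function below is a made-up stand-in for a beam-quality figure of merit):

```python
import random

def figure_of_merit(settings):
    """Stand-in beam-quality objective: peaks at a hypothetical optimum.

    In a real machine this would be a measured quantity such as
    transmission or beam position error; the optimum here is invented.
    """
    optimum = [0.2, -0.5, 1.0]   # hypothetical magnet-setting optimum
    return -sum((s - o)**2 for s, o in zip(settings, optimum))

def evolve(pop_size=40, generations=60, mut_sigma=0.1, rng=random.Random(3)):
    """Truncation selection + uniform crossover + Gaussian mutation."""
    pop = [[rng.uniform(-2, 2) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=figure_of_merit, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half (elitism)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            # uniform crossover of the two parents, then Gaussian mutation
            children.append([rng.choice(pair) + rng.gauss(0, mut_sigma)
                             for pair in zip(a, b)])
        pop = parents + children
    return max(pop, key=figure_of_merit)

best = evolve()
print([round(x, 1) for x in best])  # should land near the hypothetical optimum
```

In the production setup the fitness evaluation is the expensive step, since each candidate setting has to be applied to the machine and measured with beam, which is why library support for bookkeeping and convergence control (as Jenetics provides) matters more than the loop itself.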
Synovial adipose stem cells (sASC) can be differentiated into catecholamine-expressing sympathetic neuron-like cells to treat experimental arthritis. However, the pro-inflammatory tumor necrosis factor (TNF) is known to be toxic to catecholaminergic cells (see Parkinson disease), and this may prevent anti-inflammatory effects in inflamed tissue. We hypothesized that TNF exhibits inhibitory effects on human differentiated sympathetic tyrosine hydroxylase-positive (TH+) neuron-like cells. For the first time, iTH+ neuron-like sympathetic cells were generated from sASCs of rheumatoid arthritis (RA) and osteoarthritis (OA) synovial tissue. Compared to untreated controls in both OA and RA, TNF-treated iTH+ cells demonstrated a weaker staining of catecholaminergic markers in cell cultures of RA/OA patients, and the amount of produced noradrenaline was markedly lower. These effects were reversed by etanercept. Exposure of iTH+ cells to synovial fluid of RA patients showed similar inhibitory effects. In mixed synovial cells, significant effects of TNF on catecholamine release were observed only in OA. This study shows that TNF inhibits iTH+ synovial cells, leading to a decrease in secreted noradrenaline. This might be a reason why the newly appearing TH+ cells discovered in the synovium are not able to develop their possible full anti-inflammatory role in arthritis.
Guide to active recruiting: attract more female faculty – increase diversity – optimize quality
(2018)
Within democratic orders, it is the declared aim of a state of exception to secure or restore the endangered foundation of democracy. The provided measures are, however, undemocratic insofar as they directly affect individual rights as the principle on which democracy is based: By suspending rights, the state of exception treats individuals not as members of a democratic community (demos), but as parts of a population which has to be secured. Whereas individual rights enable individuals to be part of the demos, the state of exception – by restraining rights – enforces a politics of population. In my article, I show in what way individual rights, too, are used as a strategy of governing the population. Referring to the history of individual rights in the early modern period, I describe a specific form of alienation of individual rights. I argue that this alienation consists in the separation of a private from the political component of individual rights. This alienation is the reason for a dialectical shift from demos to population which occurs in an extreme form in the state of exception. Against this background, the question of the state of exception and the question of individual rights appear in an unfamiliar but crucial relation. In order to oppose the dialectical shift and the misuse of exceptional measures, I claim it necessary to insist on the inextricable link between the private and the political component of individual rights – that is, to extend the domain of democracy.
The Eastern Steppe of Mongolia is one of the world's largest mostly intact grassland ecosystems and is characterised by a close coupling of societal and natural processes. In this ecosystem, mobility is one of the key characteristics of wildlife and human societies alike. The current economic development of Mongolia is accompanied by extensive societal transformation and changes in nomadic lifestyles, which potentially affects the unique steppe ecosystem and its biodiversity. The changing lifestyles are mainly characterised by rural-urban migration, resulting in reduced mobility of herders and their livestock, and presumably affecting wildlife. The question is how mobility can be fostered under these transformation processes. Time is pressing as a new generation is born which is growing up in urban environments and with new skill sets but a potential loss of the tight connection to nature and the nomadic lifestyle.
Traditionally, in deciding whether some strategy or action in war is proportionate and necessary, and thus permissible, both international law and just war theory focus exclusively on civilian deaths and the destruction of civilian infrastructure. I argue in this paper that any argument that can explain why we should care about collateral killing and damage to infrastructure can also explain why collateral displacement matters. I argue that displacement is a foreseeable near-proximate cause of lethal harm to civilians and is relevant for proportionality and necessity calculi. Accepting my argument has significant consequences for what we are permitted to do in war and for what obligations we have towards refugees that result from our actions in war.
Moral refugee markets
(2018)
States are increasingly paying other states to host refugees. For example, in 2010 the EU paid Libya €50 million to continue hosting the refugees within its borders, and five years later Australia offered Cambodia $31.16 million to accept asylum seekers living in Nauru. These exchanges, which I call 'refugee markets,' have faced criticism by philosophers. Some philosophers claim the markets fail to ensure true protection, and are demeaning, expressing just how much refugees are unwanted. In response, some have defended refugee markets, claiming they can ensure refugees have protection and are not demeaned. I argue that many markets do demean refugees, and therefore have moral costs, but can still be all-things-considered preferable to alternative schemes if they protect refugees more than these alternative schemes.
This essay develops, within the terms of the recent New York Declaration, an account of the shared responsibility of states to refugees and of how the character of that responsibility affects the ways in which it can be fairly shared. However, it also moves beyond the question of the general obligations that states owe to refugees to consider ways in which refugee choices and refugee voice can be given appropriate standing within the global governance of refuge. It offers an argument for the normative significance of refugees’ reasons for choosing states of asylum and links this to consideration of a refugee matching system and to refugee quota trading conceived as responsibility-trading, before turning to the issue of the inclusion of refugee voice in relation to the justification of the norms of refugee governance and in relation to the institutions and practices of refugee governance through which those norms are given practical expression.
The issue of statelessness poses problems for the statist (or nationalist) approach to the philosophy of immigration. Despite the fact that the statist approach claims to constrain the state’s right to exclude with human rights considerations, the arguments statists offer for the right of states to determine their own immigration policies would also justify citizenship rules that would render some children stateless. Insofar as rendering a child stateless is best characterized as a violation of human rights and insofar as some states have direct responsibility for causing such harm, the problem of non-refugee stateless children points to greater constraints than most statists accept on states’ right to determine their own rules for membership. While statists can ultimately account for the right not to be rendered stateless, recognizing these additional human rights constraints ultimately weakens the core of the statist position.
While global justice theorists heatedly discuss the responsibilities of the affluent and powerful, those states which can legitimately be seen as victims of global injustice have seldom, if ever, been considered as duty bearers to whom responsibilities can be attached. However, recognising agents whose options are constrained not only as victims, but also as duty bearers is necessary as proof of respect for their agency and indispensable to mobilise the type of action required to alter global injustices. In this article, I explore what responsibilities state officials of dominated states have. I argue that they have the responsibility to resist domination in the name of the dominated states’ members. While under particular circumstances this responsibility gives rise to a duty to engage in acts of state civil disobedience, under other circumstances state officials of dominated states ought to resist domination in an internal, attitudinal way by recognising themselves as outcome responsible agents.
Fair Trade is under fire. Some critics argue, for instance, that there is no obligation to purchase Fair Trade certified products and that doing so may even be counter-productive. Others worry that well-justified conceptions of what makes trade fair can conflict. Yet others suggest that the common arguments for Fair Trade cannot justify purchasing Fair Trade certified goods, in particular. This paper starts by sketching one common argument for Fair Trade and defends it against this last line of criticism. In particular, it argues that we should purchase Fair Trade certified goods because doing so benefits the poor even though there are other ways to alleviate poverty. It then considers how other common arguments for Fair Trade fare in light of similar criticism and concludes that they may well succeed.
Political realists claim that international relations are in a state of anarchy, and therefore every state is allowed to disregard its moral duties towards other states and their inhabitants. Realists argue that complying with moral duties is simply too risky for a state’s national security. Political moralists convincingly show that realists exaggerate both the extent of international anarchy and the risks it poses to states who act morally. Yet moralists do not go far enough, since they do not question realism’s normative core: the claim that when national security is really at risk, states are allowed to disregard their moral duties. I contend that there is at least one moral duty that states should not disregard even if their inhabitants are at risk of death by military aggression: the duty to reduce extreme global poverty. The reason is that even granting that national security is about securing individuals’ right to life, global poverty relief is about that as well.
There are longstanding calls for international organizations (IOs) to be more inclusive of the voices and interests of people whose lives they affect. There is nevertheless widespread disagreement among practitioners and political theorists over who ought to be included in IO decision-making and by what means. This paper focuses on the inclusion of IOs’ ‘intended beneficiaries,’ both in principle and practice. It argues that IOs’ intended beneficiaries have particularly strong normative claims for inclusion because IOs can affect their vital interests and their political agency. It then examines how these claims to inclusion might be feasibly addressed. The paper proposes a model of inclusion via representation and communication, or ‘mediated inclusion.’ An examination of existing practices in global governance reveals significant opportunities for the mediated inclusion of IOs’ intended beneficiaries, as well as pervasive obstacles. The paper concludes that the inclusion of intended beneficiaries by IOs is both appropriate and feasible.
This article outlines a new approach to answering the foundational question in democratic theory of how the boundaries of democratic political units should be delineated. Whereas democratic theorists have mostly focused on identifying the appropriate population-group – or demos – for democratic decision-making, it is argued here that we should also take account of considerations relating to the appropriate scope of a democratic unit’s institutionalized governance capabilities – or public power. These matter because democratically legitimate governance is produced not only through the decision-making agency of a demos, but also through the institutionally distinct sources of political agency that shape the governance capabilities of public power. To develop this argument, the article traces a new theoretical account of the normative and institutional sources of collective agency, political legitimacy, and democratic boundaries, and illustrates it through a democratic reconstruction of the classical body politic metaphor. It further shows how this theoretical account lends strong prescriptive support to pluralist institutional boundaries within democratic global governance.
The democratic boundary problem raises the question of who has democratic participation rights in a given polity and why. One possible solution to this problem is the all-affected principle (AAP), according to which a polity ought to enfranchise all persons whose interests are affected by the polity’s decisions in a morally significant way. While AAP offers a plausible principle of democratic enfranchisement, its supporters have so far not paid sufficient attention to economic participation rights. I argue that if one commits oneself to AAP, one must also commit oneself to the view that political participation rights are not necessarily the only, and not necessarily the best, way to protect morally weighty interests. I also argue that economic participation rights raise important worries about democratic accountability, which is why their exercise must be constrained by a number of moral duties.
The individual parameter
(2018)
The present thesis tackles the unification of two-dimensional semantic systems, which are designed to deal with context-dependency of a certain kind, i.e. indexicality, with dynamic theories of meaning, designed to capture facts about anaphoricity and the distribution of definite and indefinite articles. The need for a more principled look at this unification is twofold. Firstly, there is an overlap between these two families of theories in terms of empirical data, namely third person personal pronouns as well as definite descriptions. Both kinds of expressions have anaphoric as well as non-anaphoric usages, some of the latter of which can be captured in terms of indexicality. On the other hand, no language, especially not German and English, the main sources of data in this thesis, seems to distinguish these two usages formally, i.e. by employing different expressions. Hence the need for a unified framework in which this sort of ambiguity can be treated. Secondly, the theoretical state of the art is dissatisfactory in that the two families of theories take very disparate forms that are not easy to relate conceptually.
The overlap in empirical area of application strongly suggests that this dichotomy is an artifact of the way these theories traditionally are developed and justified. This thesis seeks to overcome this state of the field. It proceeds as follows.
The first chapter discusses the way in which theories of indexicality are designed. After taking a closer look at some hallmarks of these theories, such as the notions of index- and context-dependency themselves, double indexing, etc., it develops a notion of index dependency that makes use of a more complex individual parameter than the one usually assumed in the literature. Apart from agents and addressees, the two standard components of indices that represent contexts, additional objects are assumed. This leads to a variant of the semantics of deictically used third person expressions called the ‘indexical theory of demonstratives’, which is then investigated further.
The second chapter discusses the classics of dynamic semantics: DRT, DPL, and FCS. It arrives at the common core of all of these theories, which consists in the assumption of a novel sort of variable, namely active variables as opposed to free and bound ones, intended to model the behavior of (in)definite descriptions and pronouns. The projection behavior of these variables, or discourse referents, is described either in (discourse-)syntactic or semantic terms. The chapter also arrives at a new formulation of the uniqueness condition thought to be part of the semantics of definite descriptions and sketches an account of transparent negation.
The third chapter then combines the insights of the previous ones by developing a notion of representation that connects the entities of evaluation of the first chapter, i.e. indices, with those of the second, namely sets of assignments, a.k.a. files. The formal language that emerged in the second chapter is endowed with two kinds of variables for situations to allow for double indexing within a dynamic setting. A novel interpretation mechanism for the language so designed is proposed, which is shown to capture not only those aspects known to exist in two-dimensional frameworks, but also certain other index-index interactions described in yet another body of literature.
The final chapter discusses potential flaws of the theory and sketches an account of allegedly bound indexicals that is compatible with Kaplan’s infamous ban on monsters.
We investigate privacy concerns and the privacy behavior of users of the AR smartphone game Pokémon Go. Pokémon Go accesses several functionalities of the smartphone and, in turn, collects a plethora of data of its users. For assessing the privacy concerns, we conduct an online study in Germany with 683 users of the game. The results indicate that the majority of the active players are concerned about the privacy practices of companies. This result hints at the existence of a cognitive dissonance, i.e. the privacy paradox. Since this result is common in the privacy literature, we complement the first study with a second one with 199 users, which aims to assess the behavior of users with regard to which measures they undertake for protecting their privacy. The results are highly mixed and dependent on the measure, i.e. relatively many participants use privacy-preserving measures when interacting with their smartphone. This implies that many users know about risks and might take actions to protect their privacy, but deliberately trade off their information privacy for the utility generated by playing the game.
Privacy concerns as well as trust and risk beliefs are important factors that can influence users’ decision to use a service. One popular model that integrates these factors relates the Internet Users Information Privacy Concerns (IUIPC) construct to trust and risk beliefs. However, studies have not yet applied it to a privacy-enhancing technology (PET) such as an anonymization service. Therefore, we conducted a survey among 416 users of the anonymization service JonDonym [1] and collected 141 complete questionnaires. We rely on the IUIPC construct and the related trust-risk model and show that it needs to be adapted for the case of PETs. In addition, we extend the original causal model by including trust beliefs in the anonymization service provider and show that they have a significant effect on the actual use behavior of the PET.
This paper provides an assessment framework for privacy policies of Internet of Things services which is based on particular GDPR requirements. The objective of the framework is to serve as a supportive tool for users to take privacy-related informed decisions. For example, when buying a new fitness tracker, users could compare different models with respect to privacy friendliness or more particular aspects of the framework, such as whether data is given to a third party. The framework consists of 16 parameters with one to four yes-or-no questions each and allows users to bring in their own weights for the different parameters. We assessed 110 devices which had 94 different policies. Furthermore, we did a legal assessment for the parameters to deal with the case that there is no statement at all regarding a certain parameter. The results of this comparative study show that most of the examined privacy policies of IoT devices/services are insufficient to address particular GDPR requirements and beyond. We also found a correlation between the length of the policy and the privacy transparency score.
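Aggregating yes/no answers per parameter with user-supplied weights can be sketched as a simple weighted average. The parameter names, answer encodings, and weights below are illustrative assumptions, not the framework's actual ones.

```python
# Minimal sketch of a weighted privacy-policy score from yes/no answers.
# Parameter names and weights are hypothetical examples.

def privacy_score(answers, weights):
    """answers: parameter -> fraction of its yes/no questions answered
    'yes' (privacy-friendly); weights: parameter -> user-chosen weight."""
    total = sum(weights.values())
    return sum(weights[p] * answers[p] for p in answers) / total

answers = {"third_party_sharing": 0.0, "data_deletion": 1.0, "encryption": 0.5}
weights = {"third_party_sharing": 3.0, "data_deletion": 2.0, "encryption": 1.0}
print(round(privacy_score(answers, weights), 3))  # 0.417
```

Different user weightings over the same answers yield different rankings of the same devices, which is the point of letting users bring in their own weights.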
Participatory policy making is a contested concept that can be understood in multiple ways. So how do those involved with participatory initiatives make sense of contrasting ideas of participation? What purposes and values do they associate with participatory governance? This paper reflects on a Q‐method study with a range of actors, from citizen activists to senior civil servants, involved with participatory initiatives in U.K. social policy. Using principal components analysis, supplemented with data from qualitative interviews, it identifies three shared participation preferences: participation as collective decision making, participation as knowledge transfer, and participation as agonism. These preferences demonstrate significant disagreements between the key informants, particularly concerning the objectives of participation, how much power should be afforded to the public, and what motivates people to participate. Their contrasting normative orientations are used to highlight how participatory governance theory and practice frequently fail to take seriously legitimate diversity in procedural preferences. Moreover, it is argued that, despite the diversity of preferences, there is a lack of imagination about how participation can function when social relations are conflictual.
Good quality data on precipitation are a prerequisite for applications like short-term weather forecasts, medium-term humanitarian assistance, and long-term climate modelling. In Sub-Saharan Africa, however, the meteorological station networks are frequently insufficient, as in the Cuvelai-Basin in Namibia and Angola. This paper analyses six rainfall products (ARC2.0, CHIRPS2.0, CRU-TS3.23, GPCCv7, PERSIANN-CDR, and TAMSAT) with respect to their performance in a crop model (APSIM) to obtain nutritional scores of a household’s requirements for dietary energy and further macronutrients. All products were calibrated to an observed time series using Quantile Mapping. The crop model output was compared against official yield data. The results show that the products (i) reproduce well the Basin’s spatial patterns, and (ii) temporally agree to station records (r = 0.84). However, differences exist in absolute annual rainfall (range: 154 mm), rainfall intensities, dry spell duration, rainy day counts, and the rainy season onset. Though calibration aligns key characteristics, the remaining differences lead to varying crop model results. While the model well reproduces official yield data using the observed rainfall time series (r = 0.52), the products’ results are heterogeneous (e.g., CHIRPS: r = 0.18). Overall, 97% of a household’s dietary energy demand is met. The study emphasizes the importance of considering the differences among multiple rainfall products when ground measurements are scarce.
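The calibration step can be illustrated with a minimal empirical quantile mapping sketch: each product value is passed through the product series' empirical CDF and then through the inverse empirical CDF of the observed series. The plotting-position choice and the toy rainfall values are assumptions for illustration, not the paper's actual implementation.

```python
from bisect import bisect_right

def ecdf(sorted_vals, x):
    """Empirical CDF with Hazen plotting position: (rank - 0.5) / n."""
    return (bisect_right(sorted_vals, x) - 0.5) / len(sorted_vals)

def quantile(sorted_vals, q):
    """Nearest-rank empirical quantile for q in [0, 1]."""
    idx = min(max(int(q * len(sorted_vals)), 0), len(sorted_vals) - 1)
    return sorted_vals[idx]

def quantile_map(product, observed):
    """Bias-correct product values via F_obs^-1(F_prod(x))."""
    sp, so = sorted(product), sorted(observed)
    return [quantile(so, ecdf(sp, x)) for x in product]

# A product series that is uniformly 20% too low is rescaled
# toward the observed distribution (values are illustrative).
obs = [0, 5, 10, 20, 40, 80]
prod = [0, 4, 8, 16, 32, 64]
print(quantile_map(prod, obs))  # [0, 5, 10, 20, 40, 80]
```

Note that quantile mapping aligns the distributions (totals, intensities), but, as the abstract stresses, it cannot correct rainy-day counts, dry-spell timing, or season onset, which is why the calibrated products still produce heterogeneous crop model results.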
Understanding how households react to the arrival of permanent and transitory income is of interest for researchers and regulators. Previous studies had to use imprecise survey data to measure consumption, and thus conclusions often diverged. We leverage granular personal finance management fintech data to test Friedman's permanent income hypothesis and to assess household spending elasticity and marginal propensity to consume for various spending categories in response to different income types.
Against the background of fragmented European equities trading, market operators have employed different strategies to increase liquidity on their market relative to other trading venues. One of these strategies is to incentivize liquidity providers via fee rebates. This article presents an empirical investigation of the introduction of the Xetra Liquidity Provider Program at Deutsche Börse and its impact on liquidity and trading volume on the introducing market itself and on the consolidated European market.
This exploratory study investigates drivers of cryptocurrency exchange competition. We examine the impact of market-related and community-related aspects of cryptocurrency exchanges on two distinct types of competition. Our empirical analysis of three datasets indicates that the competition for trading frequency is driven by both the market and the community, whereas the competition for trading quantity is driven solely by the market.
To make profitable investment decisions, investors need to assess the financial future of firms. Due to investors’ lack of internal information about firms’ future prospects, they often have to rely on managers’ verbal statements for this task. However, as managers might have an incentive to present positively biased information, the value of their statements for investors is not clear.
In this report, we show how textual analysis tools can be used to assess the value of managers’ verbal statements during earnings conference calls for investors. We find that in particular managers’ negative statements significantly predict lower future earnings.
ICOs and improvement potentials for a global digital market infrastructure: Martin Steinbach
(2018)
Digitization challenges companies to accelerate their innovation cycles to stay competitive. This research investigates how IT knowledge established on different hierarchical levels leads to organizational innovativeness. Differentiating between strategically more and less digitized organizations, the results reveal that organizational innovativeness is influenced significantly more strongly by the IT knowledge of business employees in organizations that attach high importance to the digital business strategy, whereas the management's role decreases. We further deduce the CIO's positive role for IT-enabled business innovation in knowledge-intensive industries, such as the financial services sector.
The growing demand for differentiated quality-of-service requirements of various mobile applications establishes the need for elastic cloudlet resource allocations. Here, we consider the dynamic optimization of resource allocations in remote as well as edge cloud infrastructures. We consider time-varying application demands and optimize the cloudlet resource allocation over a finite time horizon, showing that the corresponding computational effort is reduced by three orders of magnitude.
Extant strategy concepts are challenged by ongoing digitization, which fundamentally changes conditions for all market participants. This research compares the concept of IT alignment with the recently introduced “digital business strategy” (DBS), which describes a cross-functional and agile fusion of business and IT strategy. The results reveal a total absence of a direct influence of IT leaders (CIOs) on DBS, whereas a high impact on IT alignment is still given. Business leaders, in turn, have a stronger impact on DBS.
In the current regime of low interest rates, taking sound savings decisions poses a significant challenge to most individuals. Fund savings plans allow investors to accumulate private savings via automated recurring investments in selected funds. Low fees and small minimum investment amounts make them a suitable savings vehicle also for low-net-worth individuals. While traditional financial advisors only reluctantly provide advice on small-scale investments, the recent surge of robo-advisors enables access to advice on savings plan choices for investors from all wealth bands. In this report, we present empirical results on the impact of introducing an automated investment tool at a large German online bank on private investors’ savings decisions.
Assessment of selective mutism (SM) is hampered by the lack of diagnostic measures. The Frankfurt Scale of Selective Mutism was developed for kindergarteners, schoolchildren, and adolescents, including the diagnostic scale (DS) and the severity scale (SS). The objective of this study was to evaluate this novel, parent-rated questionnaire among individuals aged 3 to 18 years (n = 334) with SM, social phobia, internalizing disorders, and a control group. Item analysis resulted in high item-total correlations, and internal consistency in both scales was excellent with Cronbach’s α = .90-.98. Exploratory factor analysis of the SS consistently yielded a one-factor solution. Mean sum scores of the DS differed significantly between the diagnostic groups, and the receiver operating characteristic analysis resulted in optimal cutoffs for distinguishing SM from all other groups with the area under the curves of 0.94-1.00. The SS sum scores correlated significantly with SM’s clinician-rated symptom severity.
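For reference, the internal consistency statistic reported here follows the standard formula α = k/(k − 1) · (1 − Σσ²_item / σ²_total). A minimal sketch with made-up item scores (not the study's data):

```python
# Cronbach's alpha from a matrix of item scores; the data below are
# invented for illustration only.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one inner list of respondent scores per scale item."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]  # per-respondent sum score
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

items = [[3, 2, 3, 1], [3, 1, 3, 1], [2, 2, 3, 0]]
print(cronbach_alpha(items))
```

Values approaching 1, like the .90–.98 range reported for the two scales, indicate that the items covary strongly relative to their individual variances.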
In the current contribution we present a comprehensive study on the heteronuclear carbonyl complex H2FeRu3(CO)13, covering its low energy electron induced fragmentation in the gas phase through dissociative electron attachment (DEA) and dissociative ionization (DI), its decomposition when adsorbed on a surface under controlled ultrahigh vacuum (UHV) conditions and exposed to irradiation with 500 eV electrons, and its performance in focused electron beam induced deposition (FEBID) at room temperature under HV conditions. The performance of this precursor in FEBID is poor, resulting in a maximum metal content of 26 atom % under optimized conditions. Furthermore, the Ru/Fe ratio in the FEBID deposit (≈3.5) is higher than the predicted 3:1 ratio. This is somewhat surprising, as in recent FEBID studies on a structurally similar bimetallic precursor, HFeCo3(CO)12, metal contents of about 80 atom % are achievable on a routine basis and the deposits are found to maintain the initial Co/Fe ratio. Low temperature (≈213 K) surface science studies on thin films of H2FeRu3(CO)13 demonstrate that electron stimulated decomposition leads to significant CO desorption (an average of 8–9 CO groups per molecule) to form partially decarbonylated intermediates. However, once formed, these intermediates are largely unaffected by either further electron irradiation or annealing to room temperature, with a predicted metal content similar to what is observed in FEBID. Furthermore, gas phase experiments indicate formation of Fe(CO)4 from H2FeRu3(CO)13 upon low energy electron interaction. This fragment could desorb at room temperature under high vacuum conditions, which may explain the slight increase in the Ru/Fe ratio of deposits in FEBID.
With the combination of gas phase experiments, surface science studies and actual FEBID experiments, we can offer new insights into the low energy electron induced decomposition of this precursor and how this is reflected in the relatively poor performance of H2FeRu3(CO)13 as compared to the structurally similar HFeCo3(CO)12.
Neuraminidase inhibitors in influenza treatment and prevention – is it time to call it a day?
(2018)
Stockpiling neuraminidase inhibitors (NAIs) such as oseltamivir and zanamivir is part of a global effort to be prepared for an influenza pandemic. However, the contribution of NAIs to the treatment and prevention of influenza and its complications is largely debatable due to constraints in the ability to control for confounders and to explore unobserved areas of the drug effects. For this study, we used a mathematical model of influenza infection which allowed transparent analyses. The model recreated the oseltamivir effects and indicated that: (i) the efficacy was limited by design, (ii) a 99% efficacy could be achieved by using high drug doses (however, taking high doses of the drug 48 h post-infection could only yield a maximum 1.6-day reduction in the time to symptom alleviation), and (iii) contributions of oseltamivir to epidemic control could be high, but were observed only in fragile settings. In a typical influenza infection, NAIs’ efficacy is inherently not high, and even if their efficacy is improved, the effect can be negligible in practice.
Atrial septostomy (AS) is recommended for pulmonary arterial hypertension (PAH)-associated right ventricular (RV) failure, recurrent syncope, or pulmonary hypertensive crisis (PHC). We aimed to evaluate the feasibility and efficacy of AS to manage PAH from infancy to adulthood. From June 2009 to December 2016, transcatheter atrial communications were created in 11 PAH patients (4 girls/women; median age = 4.3 years; range = 33 days–26 years; median body weight = 14 kg; range = 3–71 kg; NYHA-/Ross class IV; n = 11). PAH was classified as idiopathic (n = 6) or secondary (n = 5). A history of syncope was dominant (n = 6); two patients with a patent foramen ovale (PFO) were admitted with recurrent PHC, and three patients required resuscitation before AS. Three patients had PAH-associated low cardiac output. The average pulmonary arterial pressures (PAP systolic/diastolic) were 101/50 (±34/23); the corresponding systemic arterial pressures (SAP) were 99/54 (±23/11); and the mean ratio of PAPd/SAPd was 0.97 (±0.4). Percutaneous trans-septal puncture was performed uneventfully in nine patients; a PFO was dilated in two patients. There was no procedure-related mortality. The median balloon size was 10 mm (range = 6–14 mm); the mean catheter time was 174.6 ± 48 min; fluoroscopy time was 19.8 (±11) min. Syncope and PHC were successfully treated in all patients. The mean arterial oxygen saturation decreased from 97 ± 2 to 89 ± 11.7. One patient died awaiting lung transplantation, and one continues to be listed; two patients received a reverse Potts shunt; one patient died during follow-up; seven patients are stable with PAH-specific treatment. Percutaneous AS is an effective method of palliating PAH-associated syncope, PHCs, or right (bi-)ventricular heart failure.
The main sources for the discussion of the category “relation” were Aristotle’s Categories and Metaphysics. Before their translation into Arabic in the 8th and 9th centuries, Christian theologians and in their footsteps Syriac scholars considered Aristotle’s works to be a useful tool in Christological discussions. This article analyzes the category of relation and its development in Arabic-Islamic philosophy in authors such as Kindī and his student Aḥmad Ibn aṭ-Ṭayyib as-Saraḫsī, Fārābī, Ibn Sīnā, Ghazālī, Ibn Rušd, the Sufi Ibn ʿArabī and others.
The purpose of this text is to present an interpretation of Theodor Adorno’s critical reading of authors considered revisionists of Sigmund Freud’s psychoanalytic theory, particularly Karen Horney. We critically discuss Adorno’s favorable positioning towards the Freudian conception of the individual psychic nucleus, in contrast to the hasty sociologization of psychoanalysis practiced by the revisionism of Karen Horney. In the final part we try to show how the Adornian perspective ends up making, in its own way, the same mistake of a hasty sociologization of psychoanalysis that he imputed to the revisionists, and advocates a theoretical emphasis on the sociological realm that also seems problematic.
This article deals with the analysis of the Frankfurt School theorists, especially Adorno, Marcuse, Walter Benjamin and Horkheimer, and their relevance for education. Motivation: faced with a world in which extreme-right values and religious fundamentalisms are promoted, such a scenario motivates us to question the role that education plays in combating extremism and intolerance. Scope: this article is directly related to the philosophy of education. Justification and relevance: this topic is justified because it deals with teleological aspects of the function of education, in the sense of questioning the teleological character of education on the basis of philosophical concepts that seek the autonomy of the subject rather than the mere adaptation of the human being to what is settled. As a methodology, it draws on bibliographical studies and critical reflections on education and its political character in the construction of a social conscience emancipated from values that legitimize oppression. Results and discussion: a study of the Critical Theory of Adorno, Horkheimer, Benjamin, Habermas and Marcuse was conducted, as a contribution to the construction of an education that, in addition to seeking inclusion, also seeks to be a political instrument to combat prejudice, which is alive again today with the rise of religious fundamentalisms, xenophobia and extreme-right political ideas. Conclusion: the school has the political purpose of educating for a world of solidarity and respect for differences.
Purpose: Acute kidney injury (AKI) is a severe complication in medical and surgical intensive care units, accounting for high morbidity and mortality. Incidence, risk factors, and prognostic impact of this deleterious condition are well established in this setting, but data concerning neurocritically ill patients are scarce. Therefore, the aim of this study was to determine the incidence of AKI and elucidate risk factors in this special population.
Methods: Patients admitted to a specialized neurocritical care unit between 2005 and 2011 with a length of stay above 48 hours were analyzed retrospectively for incidence, cause, and outcome of AKI (AKI Network-stage ≥2).
Results: The study population comprised 681 neurocritically ill patients from a mixed neurosurgical and neurological intensive care unit. The prevalence of chronic kidney disease (CKD) was 8.4% (57/681). Overall incidence of AKI was 11.6% with 36 (45.6%) patients developing dialysis-requiring AKI. Sepsis was the main cause of AKI in nearly 50% of patients. Acute kidney injury and renal replacement therapy are independent predictors of worse outcome (hazard ratio [HR]: 3.704; 95% confidence interval [CI]: 1.867-7.350; P < .001; and HR: 2.848; CI: 1.301-6.325; P = .009). Chronic kidney disease was the strongest independent risk factor (odds ratio: 12.473; CI: 5.944-26.172; P < .001), whereas surgical intervention or contrast agents were not associated with AKI.
Conclusions: Acute kidney injury in neurocritical care has a high incidence and is a crucial risk factor for mortality independently of the underlying neurocritical condition. Sepsis is the main cause of AKI in this setting. Therefore, careful prevention of infectious complications and consideration of CKD in treatment decisions may lower the incidence of AKI and thereby improve outcome in neurocritical care.
Complex problem solving (CPS) is a highly transversal competence needed in educational and vocational settings as well as everyday life. The assessment of CPS is often computer-based, and therefore provides data regarding not only the outcome but also the process of CPS. However, research addressing this issue is scarce. In this article we investigated planning activities in the process of complex problem solving. We operationalized planning through three behavioral measures indicating the duration of the longest planning interval, the delay of the longest planning interval, and the variance of intervals between each two successive interactions. We found a significant negative average effect for our delay indicator, indicating that early planning in CPS is more beneficial. However, we also found task-dependent effects and interaction effects for all three indicators, suggesting that the effects of different planning behaviors on CPS are highly intertwined.
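The three behavioral indicators described above can be computed directly from timestamped interaction logs. The following sketch assumes a simple list-of-timestamps log format; the function name and log format are illustrative assumptions, not the authors' implementation:

```python
# Sketch of the three planning indicators: duration of the longest planning
# interval, its delay (onset time), and the variance of intervals between
# successive interactions. Input: interaction timestamps in seconds since
# task start (hypothetical log format).

def planning_indicators(timestamps):
    """Return (longest_interval, delay_of_longest, variance_of_intervals)."""
    if len(timestamps) < 2:
        raise ValueError("need at least two interactions")
    # intervals between each two successive interactions
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    longest = max(intervals)                      # duration of longest planning interval
    delay = timestamps[intervals.index(longest)]  # when the longest interval began
    mean = sum(intervals) / len(intervals)
    variance = sum((x - mean) ** 2 for x in intervals) / len(intervals)
    return longest, delay, variance

# Example: a participant who pauses 12 s at the very start, then interacts steadily.
longest, delay, variance = planning_indicators([0.0, 12.0, 14.0, 16.0, 18.0])
# longest = 12.0, delay = 0.0 (early planning), variance = 18.75
```

A small delay value, as in the example, corresponds to the early planning that the study found beneficial on average.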
Previous research on scalar implicature has primarily relied on metalinguistic judgment tasks and found varying rates of such inferences depending on the nature of the task and contextual manipulations. This paper introduces a novel interactive paradigm involving both a production and a comprehension component, thereby fixing a precise conversational context.
The main research question is what is reliably communicated by "some" in this communicative setting, when the quantifier occurs in unembedded as well as embedded positions. Our new paradigm involves an action-based task from which participants' interpretation of utterances can be inferred. It incorporates a game-theoretic design, including a precise model to predict participants' behaviour in the experimental context.
Our study shows that embedded and unembedded implicatures are reliably communicated by "some". We propose two cognitive principles describing what can be left unsaid. In our experimental context, a production strategy based on these principles is more efficient (with equal communicative success and shorter utterances) than a strategy based on literal descriptions.
NADH:ubiquinone oxidoreductase (complex I) is the first and largest enzyme of the respiratory chain. It catalyzes the transfer of two electrons from NADH to ubiquinone via a series of enzyme-bound redox centers, a flavin mononucleotide (FMN) and iron-sulfur (Fe-S) clusters, and couples this exergonic reaction with the endergonic translocation of four protons across the membrane. Bacteria contain the minimal form of complex I, which is composed of 14 conserved core subunits with a molecular mass of around 550 kDa. Complex I has an L-shaped structure that can be subdivided into two major parts (arms). The hydrophilic arm, protruding into the bacterial cytosol (or mitochondrial matrix), harbors the binding site for the substrate NADH, the two-to-one-electron switch FMN, and all one-electron-transferring Fe-S clusters, and is therefore considered the catalytic unit. The membrane arm consists of the membrane-spanning subunits and carries out the proton pumping. The quinone-binding site is located at the interface of the two arms. ...
The article reports three simulation studies conducted to find out whether the effect of a time limit for testing impairs model fit in investigations of structural validity, whether representing the assumed source of the effect prevents the impairment of model fit, and whether it is possible to identify and discriminate this method effect from another method effect. Omissions due to the time limit for testing were not treated as missing data but as information on the participants' processing speed. In simulated data, the presence of a time-limit effect impaired the comparative fit index and the nonnormed fit index, whereas the normed chi-square, the root mean square error of approximation, and the standardized root mean square residual indicated good model fit. Explicitly accounting for the time-limit effect with an additional model component improved model fit. Effect-specific assumptions included in the measurement model enabled the discrimination of the time-limit effect from another possible method effect.
Background: The MRI Breast Imaging-Reporting and Data System (BI-RADS) lexicon recommends that a breast MRI protocol contain T2-weighted and dynamic contrast-enhanced (DCE) MRI sequences. The addition of diffusion-weighted imaging (DWI) significantly improves diagnostic accuracy. This study aims to clarify which descriptors from DCE-MRI, DWI, and T2-weighted imaging are most strongly associated with a breast cancer diagnosis.
Purpose/Hypothesis: To develop a multiparametric MRI (mpMRI) model for breast cancer diagnosis incorporating American College of Radiology (ACR) BI-RADS recommended descriptors for breast MRI with DCE, T2-weighted imaging, and DWI with apparent diffusion coefficient (ADC) mapping.
Study Type: Retrospective.
Subjects: In all, 188 patients (mean age 51.6 years) with 210 breast tumors (136 malignant and 74 benign) who underwent mpMRI from December 2010 to September 2014.
Field Strength/Sequence: IR (inversion recovery), DCE-MRI (dynamic contrast-enhanced magnetic resonance imaging), VIBE (volume-interpolated breathhold examination), FLASH (turbo fast low-angle shot), TWIST (time-resolved angiography with stochastic trajectories).
Assessment: Two radiologists in consensus and another radiologist independently evaluated the mpMRI data. Characteristics for mass (n = 182) and nonmass (n = 28) lesions were recorded on DCE and T2-weighted imaging according to BI-RADS, as well as DWI descriptors. Two separate models were analyzed, using DCE-MRI BI-RADS descriptors, T2-weighted imaging, and ADCmean either in continuous or in binary form, the latter using a previously published ADC cutoff value of ≤1.25 × 10−3 mm2/s for differentiation between benign and malignant lesions. Histopathology was the standard of reference.
Statistical Tests: χ2 test, Fisher's exact test, Kruskal-Wallis test, Pearson correlation coefficient, multivariate logistic regression analysis, Hosmer-Lemeshow goodness-of-fit test, receiver operating characteristic analysis.
Results: In Model 1, ADCmean (P = 0.0031), mass margins on DCE (P = 0.0016), and delayed enhancement on DCE (P = 0.0016) were significantly and independently associated with breast cancer diagnosis; Model 2 identified ADCmean (P = 0.0031), mass margins on DCE (P = 0.0012), initial enhancement (P = 0.0422), and delayed enhancement on DCE (P = 0.0065) as significantly and independently associated with breast cancer diagnosis. T2-weighted imaging variables were not included in the final models.
Background: Native T1 may be a sensitive, contrast-free, non-invasive cardiovascular magnetic resonance (CMR) marker of myocardial tissue changes in patients with pulmonary artery hypertension. However, the diagnostic and prognostic value of native T1 mapping in this patient group has not been fully explored. The aim of this work was to determine whether elevation of native T1 in myocardial tissue in pulmonary hypertension: (a) varies according to pulmonary hypertension subtype; (b) has prognostic value and (c) is associated with ventricular function and interaction.
Methods: Data were retrospectively collected from a total of 490 consecutive patients during their clinical 1.5 T CMR assessment at a pulmonary hypertension referral centre in 2015. Three hundred sixty-nine patients had pulmonary hypertension [58 ± 15 years; 66% female], an additional 39 had pulmonary hypertension due to left heart disease [68 ± 13 years; 60% female], and 82 patients did not have pulmonary hypertension [55 ± 18 years; 68% female]. Twenty-five healthy subjects were also recruited [58 ± 4 years; 51% female]. T1 mapping was performed with a MOdified Look-Locker Inversion recovery (MOLLI) sequence. The prognostic value of T1 in patients with pulmonary arterial hypertension was assessed using multivariate Cox proportional hazards regression analysis.
Results: Patients with pulmonary artery hypertension had elevated T1 in the right ventricular (RV) insertion point (pulmonary hypertension patients: T1 = 1060 ± 90 ms; patients without pulmonary hypertension: T1 = 1020 ± 80 ms, p < 0.001; healthy subjects: T1 = 940 ± 50 ms, p < 0.001), with no significant difference between the major pulmonary hypertension subtypes. The RV insertion point was the most successful T1 region for discriminating patients with pulmonary hypertension from healthy subjects (area under the curve = 0.863); however, it could not accurately discriminate between patients with and without pulmonary hypertension (area under the curve = 0.654). T1 metrics did not contribute to the prediction of overall mortality (septal: p = 0.552; RV insertion point: p = 0.688; left ventricular free wall: p = 0.258). The systolic interventricular septal angle was a significant predictor of T1 in patients with pulmonary hypertension (p < 0.001).
Conclusions: Elevated myocardial native T1 was found to a similar extent in pulmonary hypertension patient subgroups and is independently associated with increased interventricular septal angle. Native T1 mapping may not be of additive value in the diagnostic or prognostic evaluation of patients with pulmonary artery hypertension.
Objective: A high unilateral load on the musculoskeletal system is characteristic of formation dance. Given the lack of data, the aim of this study was a side-related (right vs. left) analysis of strength and balance capability in relation to injuries, gender, and performance level.
Methods: N = 51 dancers (male: n = 24, female: n = 27) from two performance levels participated in this cross-sectional study. Bilateral tests of the isometric maximal strength of relevant muscle groups and of balance capability were carried out, supplemented by a self-report questionnaire.
Results: Tests of isometric maximal strength at the elite performance level showed significant differences between the two sides of the body. For balance capability, no significant side-related differences could be found. Correlations between strength capability and injuries were observed in both groups.
Conclusion: The significant strength differences are presumably caused by the right-sided load in dance-specific movements. A cautious conclusion is that movement patterns challenge the stability of both sides of the body alike. The increased injury frequency on the muscularly stronger side of the body primarily results from overload. Additional muscular training should be considered as a preventive measure.
Background: The year 2016 has marked the highest number of displaced people worldwide on record. A large number of these refugees are women, yet little is known about their specific situation and the hurdles they have to face during their journey. Herein, we investigated whether sociodemographic characteristics and traumatic experiences in the home country and during the flight affected the quality of life of refugee women arriving in Germany in 2015–2016.
Methods: Six hundred sixty-three women from six countries (Afghanistan, Syria, Iran, Iraq, Somalia, and Eritrea) living in shared reception facilities in five distinct German regions were interviewed by native speakers using a structured questionnaire. Sociodemographic data and information about reasons for fleeing, traumatic experiences, symptoms, quality of life, and expectations towards their future were elicited. All information was stored in a central database in Berlin. Descriptive analyses, correlations, and multivariate analyses were performed.
Results: The most frequently cited reasons for fleeing were war, terror, and threat to one's life or the life of a family member. Eighty-seven percent of women resorted to smugglers to make the journey to Europe, which significantly correlated with residence in a war zone (odds ratio (OR) = 2.5, 95% confidence interval (CI) = 1.4–4.6, p = 0.003) and homelessness prior to fleeing (OR = 2.1, 95% CI = 1–4.3, p = 0.04). Overall, the quality of life described by the women was moderate (overall mean = 3.23, range 1–5) and slightly worse than that of European populations (overall mean = 3.68, p < 0.0001). The main factors correlating with lower quality of life were older age, having had a near-death experience, having been attacked by a family member, and absence of health care in case of illness.
Conclusions: Refugee women experience multiple traumatic experiences before and/or during their journey, some of which are gender-specific. These experiences affect the quality of life in their current country of residence and might impact their integration. We encourage the early investigation of these traumatic experiences to rapidly identify women at higher risk and to improve health care for somatic and mental illness.
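For context, odds ratios with confidence intervals like those reported above are conventionally derived from 2×2 contingency tables. The following is a generic textbook sketch using the standard Wald interval; the study's multivariate analysis may use a different procedure, and the counts below are invented for illustration:

```python
import math

# Odds ratio with a 95% Wald confidence interval from a 2x2 table:
#                exposed  unexposed
# outcome           a        b
# no outcome        c        d
# Generic formula sketch; counts are made up, NOT the study's data.

def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    # standard error of log(OR) via the Woolf/Wald method
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(40, 20, 30, 45)  # hypothetical counts
# or_ = 3.0, CI roughly (1.48, 6.09)
```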
The accurate knowledge of groundwater storage variation (ΔGWS) is essential for reliable water resource assessment, particularly in arid and semi-arid environments (e.g., Australia, the North China Plain (NCP)) where water storage is significantly affected by human activities and spatiotemporal climate variations. Large-scale ΔGWS can be simulated with a land surface model (LSM), but high model uncertainty is a major drawback that reduces the reliability of the estimates, so evaluating the model estimate is very important to assess its accuracy. To improve model performance, the terrestrial water storage variation derived from the Gravity Recovery And Climate Experiment (GRACE) satellite mission is commonly assimilated into LSMs to enhance the accuracy of the ΔGWS estimate. This study assimilates GRACE data into the PCRaster Global Water Balance (PCR-GLOBWB) model. The GRACE data assimilation (DA) is developed based on the three-dimensional ensemble Kalman smoother (EnKS 3D), which considers the statistical correlation of all extents (spatial, temporal, vertical) in the DA process. The ΔGWS estimates from GRACE DA and four LSM simulations (PCR-GLOBWB, the Community Atmosphere Biosphere Land Exchange (CABLE) model, the Water Global Assessment and Prognosis Global Hydrology Model (WGHM), and World-Wide Water (W3)) are validated against in situ groundwater data. The evaluation is conducted in terms of temporal correlation, seasonality, long-term trend, and detection of groundwater depletion. The GRACE DA estimate shows a significant improvement in all measures; notably, the correlation coefficients (with respect to the in situ data) are always higher than the values obtained from the model simulations alone (e.g., ~0.15 greater in Australia and ~0.1 greater in the NCP).
GRACE DA also improves the estimation of groundwater depletion, which the models cannot accurately capture due to incorrect information on groundwater demand (in, e.g., PCR-GLOBWB, WGHM) or the absence of a groundwater consumption routine (in, e.g., CABLE, W3). In addition, this study conducts an inter-comparison of the four model simulations and reveals that PCR-GLOBWB and CABLE provide a more accurate ΔGWS estimate in Australia (owing to calibrated parameters), while PCR-GLOBWB and WGHM are more accurate in the NCP (owing to the inclusion of anthropogenic factors). The analysis can be used to assess the status of the ΔGWS estimate, as well as to itemize possible improvements for future model development.
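The EnKS 3D used in the study generalizes the ensemble Kalman update across spatial, temporal, and vertical extents. As a toy illustration of the core analysis step only, a stochastic ensemble Kalman update for a GRACE-like total-storage observation can be sketched as follows; all dimensions, values, and variable names are illustrative assumptions, not the study's configuration:

```python
import numpy as np

# Minimal stochastic ensemble Kalman analysis step: each ensemble member holds
# a few model water-storage states (mm), and a single GRACE-like observation
# of TOTAL storage updates all of them through the ensemble covariance.

rng = np.random.default_rng(0)
n_state, n_ens = 3, 50                           # e.g. soil, surface, groundwater stores
X = rng.normal(100.0, 10.0, (n_state, n_ens))    # prior ensemble (toy values)
H = np.ones((1, n_state))                        # GRACE observes the sum of all stores
y_obs = np.array([270.0])                        # observed total storage anomaly (mm)
obs_err = 5.0                                    # observation error std (mm)

# Perturbed observations (stochastic EnKF variant) and Kalman gain from
# the sample covariance of the ensemble.
Y = y_obs[:, None] + rng.normal(0.0, obs_err, (1, n_ens))
A = X - X.mean(axis=1, keepdims=True)
P = A @ A.T / (n_ens - 1)                        # sample state covariance
S = H @ P @ H.T + obs_err**2 * np.eye(1)         # innovation covariance
K = P @ H.T @ np.linalg.solve(S, np.eye(1))      # Kalman gain
X_post = X + K @ (Y - H @ X)                     # analysis (posterior) ensemble
```

A smoother such as EnKS 3D applies this kind of update jointly over a window of time steps and neighbouring grid cells rather than one state vector at a time.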
Blow flies are the first insect group to colonize a dead body, and thus correct species identification is a crucial step in forensic investigations for estimating the minimum postmortem interval, as developmental times are species-specific. Given the difficulties of traditional morphology-based identification, such as the morphological similarity of closely related species and the lack of taxonomic keys covering all developmental stages, DNA-based identification has attracted increasing interest, especially in high-biodiversity areas such as Thailand. In this study, the effectiveness of long mitochondrial cytochrome c oxidase subunit I and II (COI and COII) sequences (1247 and 635 bp, respectively) in identifying 16 species of forensically relevant blow flies in Thailand (Chrysomya bezziana, Chrysomya chani, Chrysomya megacephala, Chrysomya nigripes, Chrysomya pinguis, Chrysomya rufifacies, Chrysomya thanomthini, Chrysomya villeneuvi, Lucilia cuprina, Lucilia papuensis, Lucilia porphyrina, Lucilia sinensis, Hemipyrellia ligurriens, Hemipyrellia pulchra, Hypopygiopsis infumata, and Hypopygiopsis tumrasvini) was assessed using distance-based (Kimura two-parameter distances based on Best Match, Best Close Match, and All Species Barcodes criteria) and tree-based (grouping taxa by sequence similarity in the neighbor-joining tree) methods. Analyses of the obtained sequence data demonstrated that the COI and COII genes are effective markers for accurate species identification of Thai blow flies. This study has not only demonstrated the genetic diversity of Thai blow flies, but also provided a reliable DNA reference database for further use in forensic entomology within the country and other regions where these species exist.
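The Kimura two-parameter distances underlying the Best Match and Best Close Match criteria follow Kimura's (1980) formula d = -0.5 ln((1 - 2P - Q)√(1 - 2Q)), where P and Q are the proportions of transitions and transversions between two aligned sequences. A minimal sketch; real barcoding pipelines additionally handle alignment, gaps, and ambiguity codes:

```python
import math

# Kimura two-parameter (K2P) distance between two aligned DNA sequences.
# P = transition proportion (purine<->purine or pyrimidine<->pyrimidine),
# Q = transversion proportion (purine<->pyrimidine).

PURINES = {"A", "G"}

def k2p_distance(seq1, seq2):
    # keep only unambiguous, ungapped aligned positions
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]
    n = len(pairs)
    transitions = sum(1 for a, b in pairs
                      if a != b and (a in PURINES) == (b in PURINES))
    transversions = sum(1 for a, b in pairs
                        if a != b and (a in PURINES) != (b in PURINES))
    P, Q = transitions / n, transversions / n
    # Kimura (1980): d = -0.5 * ln((1 - 2P - Q) * sqrt(1 - 2Q))
    return -0.5 * math.log((1 - 2 * P - Q) * math.sqrt(1 - 2 * Q))

d = k2p_distance("ACGTACGTAC", "ACGTACGTGC")  # one transition (A->G) in 10 sites
```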
Due to the resurgence of data-hungry models (such as deep convolutional neural nets), there is an increasing demand for large-scale labeled datasets and benchmarks in the computer vision (CV) field. However, collecting real data across diverse scene contexts along with high-quality annotations is often expensive and time-consuming, especially for detailed pixel-level label prediction tasks such as semantic segmentation. To address the scarcity of real-world training sets, recent works have proposed the use of computer graphics (CG) generated data to train and/or characterize the performance of modern CV systems. CG-based virtual worlds provide easy access to ground truth annotations and control over scene states. Most of these works utilized training data simulated from video games and pre-designed virtual environments and demonstrated promising results. However, little effort has been devoted to the systematic generation of massive quantities of sufficiently complex synthetic scenes for training scene understanding algorithms. In this work, we develop a full pipeline for simulating large-scale datasets along with per-pixel ground truth information. Our simulation pipeline consists of two main components: (a) a stochastic scene generative model that automatically synthesizes traffic scene layouts using marked point processes coupled with 3D CAD objects and factor potentials, and (b) an annotated-image rendering tool that renders a sampled 3D scene as an RGB image with a chosen rendering method, along with pixel-level annotations such as semantic labels, depth, and surface normals. This pipeline is capable of automatically generating and rendering a potentially infinite variety of outdoor traffic scenes that can be used to train convolutional neural nets (CNNs).
However, several recent works, including our own initial experiments, demonstrated that CV models trained naively on simulated data lack generalization capabilities to real-world scenes. This opens up several fundamental questions about what simulated data lacks compared to real data and how to use it effectively. Furthermore, there has been a long debate since the 1980s on the usefulness of CG-generated data for tuning CV systems. In particular, the impact of modeling errors and computational rendering approximations, arising from various choices in the rendering pipeline, on the generalization performance of trained CV systems is still not clear. In this thesis, we take a case study in the context of traffic scenarios to empirically analyze the performance degradation when CV systems trained on virtual data are transferred to real data. We first explore performance tradeoffs due to the choice of rendering engine (e.g., Lambertian shading (LS), ray tracing (RT), and Monte Carlo path tracing (MCPT)) and its parameters. A CNN architecture that performs semantic segmentation, DeepLab, is chosen as the CV system being evaluated. In our case study involving traffic scenes, a CNN trained with CG data samples generated by photorealistic rendering methods (such as RT or MCPT) already shows reasonably good performance on real-world testing data from the CityScapes benchmark. Using samples from an elementary rendering method, i.e., LS, degraded CNN performance by nearly 20%. This result indicates that training data must be sufficiently photorealistic for good generalizability of the trained CNN models. Furthermore, the use of physics-based MCPT rendering improved performance by 6%, but at the cost of more than three times the rendering time.
When this MCPT-generated dataset is augmented with just 10% of the real-world training data from the CityScapes dataset, the performance achieved is comparable to that of training the CNN with the complete CityScapes dataset.
The next aspect we study in the thesis is the impact of the parameter settings of the scene generation model on the generalization performance of CNN models trained with the generated data. To this end, we first propose an algorithm to estimate our scene generation model parameters given an unlabeled real-world dataset from the target domain. This unsupervised tuning approach utilizes the concept of generative adversarial training, which aims at adapting the generative model by measuring the discrepancy between generated and real data in terms of their separability in the feature space of a deep discriminatively trained classifier. Our method involves an iterative estimation of the posterior density over the prior distributions of the generative graphical model used in the simulation. Initially, we assume uniform distributions as priors over the parameters of a scene described by our generative graphical model. As iterations proceed, these uniform priors are updated sequentially to distributions over the simulation model parameters that lead to simulated data whose statistics are closer to those of the unlabeled target data.
...
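The iterative prior-updating scheme described above can be caricatured in a few lines: sample parameters from the current prior, simulate data, score each sample by how separable its simulated data are from the unlabeled real data, and refit the prior on the reweighted samples. In this toy sketch a 1-NN two-sample classifier stands in for the deep discriminator, the "scene model" is a one-parameter Gaussian, and all distributions and constants are invented for illustration:

```python
import numpy as np

# Toy caricature of discriminator-guided prior tuning: low separability
# between simulated and real data earns a sample a high weight, and the
# prior is refit on the reweighted parameter samples.

rng = np.random.default_rng(1)

def nn_separability(sim, real):
    """Leave-one-out 1-NN accuracy on the merged two-sample problem.

    0.5 means the classifier cannot tell simulated from real data;
    values near 1.0 mean the two samples are easily separable.
    """
    data = np.concatenate([sim, real])
    labels = np.concatenate([np.zeros(len(sim)), np.ones(len(real))])
    dist = np.abs(data[:, None] - data[None, :])
    np.fill_diagonal(dist, np.inf)            # exclude each point itself
    return float(np.mean(labels[dist.argmin(axis=1)] == labels))

real = rng.normal(3.0, 1.0, 80)               # stand-in for unlabeled target data
mu, sigma = 0.0, 3.0                          # initial (broad) prior over the parameter
for _ in range(3):
    thetas = rng.normal(mu, sigma, 200)       # sample parameters from current prior
    accs = np.array([nn_separability(rng.normal(t, 1.0, 80), real) for t in thetas])
    w = np.exp(-20.0 * np.maximum(accs - 0.5, 0.0))   # indistinguishable -> weight ~1
    w /= w.sum()
    mu = float(np.sum(w * thetas))                    # refit prior on weighted samples
    sigma = float(np.sqrt(np.sum(w * (thetas - mu) ** 2)) + 1e-3)
# After a few iterations the prior concentrates near the target parameter (3.0).
```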
Can the democratic constitutions of Hungary and Poland survive an autocratic majority? Hardly. Hungary and Poland seem to be lost for liberal and democratic constitutionalism, at least for the time being. The next question is how democratic constitutionalism can guard against an autocratic majority. The task is to make it difficult for an autocratic parliamentary majority to capture the institutions of critique and control of government and to undermine the separation of powers.
What motivates welfare attitudes during economic crises? While existing research highlights self-interest, this conclusion rests on a predominant conceptualization of citizens’ crisis experiences as personal job loss. However, during economic downturns, people are likely to also witness colleagues or distant others being laid off, which might affect welfare attitudes for reasons beyond self-interest. This article analyses how personal job loss as well as that of colleagues and acquaintances during the Great Recession is related to welfare attitudes in the UK, Germany and Sweden, where welfare regimes and crisis policies differ systematically. Based on Eurobarometer data from 2010, the findings reveal that the importance of personal job loss as well as that of colleagues and acquaintances varies cross-nationally. In the liberal UK – with its modest crisis response – demand for greater public welfare provision is associated with personal job loss. In social-democratic Sweden – with its active crisis management – demand for greater welfare provision is associated with acquaintances’ job loss. In conservative Germany – with its labour market insider-focused crisis response – no clear picture emerges. These findings support a sociological perspective emphasizing the importance of other-regarding concerns for welfare attitudes and the role of institutions in structuring people’s self-interest and normative orientations.
On October 7th, general elections were held in Bosnia and Herzegovina. Its Constitution was meant to be an interim solution; it set up a complex structure of power-sharing between the three major ethnic groups that has led to political paralysis. Constitutional reform is thus a pressing issue, but the recent elections appear to reinforce the deadlock rather than paving the way for much-needed change.
The illiberal turn in Europe has many facets. Of particular concern are Member States in which ruling majorities uproot the independence of the judiciary. For reasons well described in the Verfassungsblog, the current focus is on Poland. Since the Polish development is emblematic for a broader trend, more is at stake than the rule of law in that Member State alone (as if that were not enough). If the Polish emblematic development is not resisted, illiberal democracies might start co-defining the European constitutional order, in particular, its rule of law-value in Article 2 TEU. Accordingly, the conventional liberal self-understanding of Europe could easily erode, with tremendous implications.
Carbon-fiber-reinforced plastics are widely used in lightweight marine structures due to their high strength and superior fatigue behavior. In this article, we will present an innovative methodology for simultaneous load and structural monitoring of a carbon-fiber-reinforced plastic rudder stock as part of a big commercial vessel. Experimental results are presented here from a quasi-static tensile test in which the load monitoring is performed using embedded strain sensors. Structural monitoring is based on high-frequency electromechanical impedance spectroscopy combined with dedicated signal processing and surface-mounted piezoelectric transducers. We have achieved the following results: (1) the demonstration of a hybrid monitoring system including load and structural monitoring, (2) successful embedding of strain gauges during composite manufacturing of the carbon-fiber-reinforced plastic rudder stock, (3) development of instrumentation hardware for multichannel electromechanical impedance measurements, and (4) successful damage detection by means of electromechanical impedance spectroscopy in thick carbon-fiber-reinforced plastic rudder stock samples exploiting strain data.
BIOfid is a specialized information service currently being developed to mobilize biodiversity data dormant in printed historical and modern literature and to offer a platform for open access journals on the science of biodiversity. Our team of librarians, computer scientists and biologists produces high-quality text digitizations, develops new text-mining tools and generates detailed ontologies enabling semantic text analysis and semantic search by means of user-specific queries. In a pilot project we focus on German publications on the distribution and ecology of vascular plants, birds, moths and butterflies extending back to the Linnaeus period about 250 years ago. The three organism groups have been selected according to current demands of the relevant research community in Germany. The text corpus defined for this purpose comprises over 400 volumes with more than 100,000 pages to be digitized and will be complemented by journals from other digitization projects, copyright-free and project-related literature. With TextImager (Natural Language Processing & Text Visualization) and TextAnnotator (Discourse Semantic Annotation) we have already extended and launched tools that focus on the text-analytical section of our project. Furthermore, taxonomic and anatomical ontologies elaborated by us for the taxa prioritized by the project's target group - German institutions and scientists active in biodiversity research - are constantly improved and expanded to maximize scientific data output. Our poster describes the general workflow of our project, from literature acquisition through software development to data availability on the BIOfid web portal (http://biofid.de/), and the implementation into existing platforms which serve to promote global accessibility of biodiversity data.
This study presents comprehensive real-world data on the use of anti-human epidermal growth factor receptor 2 (HER2) therapies in patients with HER2-positive metastatic breast cancer (MBC). Specifically, it describes therapy patterns with trastuzumab (H), pertuzumab + trastuzumab (PH), lapatinib (L), and trastuzumab emtansine (T-DM1). The PRAEGNANT study is a real-time, real-world registry for MBC patients. All therapy lines are documented. This analysis describes the utilization of anti-HER2 therapies as well as therapy sequences. Among 1936 patients in PRAEGNANT, 451 were HER2-positive (23.3%). In the analysis set (417 patients), 53% of whom were included in PRAEGNANT in the first-line setting, 241 were treated with H, 237 with PH, 85 with L, and 125 with T-DM1 during the course of their therapies. The sequence PH → T-DM1 was administered in 51 patients. Higher Eastern Cooperative Oncology Group (ECOG) scores, negative hormone receptor status, and visceral or brain metastases were associated with more frequent use of this therapy sequence. Most patients received T-DM1 after treatment with pertuzumab. Both novel therapies (PH and T-DM1) are utilized in a high proportion of HER2-positive breast cancer patients. As most patients receive T-DM1 after PH, real-world data may help to clarify whether the efficacy of this sequence is similar to that in the approval study.
Structural and vibrational studies have been carried out for the most stable conformer of 3,3′-ethane-1,2-diyl-bis-1,3,5-triazabicyclo[3.2.1]octane (ETABOC) at the DFT/B3LYP/6-31G(d,p) level using the Gaussian 03 software. In light of the computed vibrational parameters, the observed IR Bohlmann bands for the C2v, C2, and Ci symmetrical structures of ETABOC have been analyzed. Hyperconjugative interactions were examined by natural bond orbital (NBO) analysis. The interpretation of the hyperconjugative interaction of the lone pairs on the bridgehead nitrogen atoms with the neighboring C–N and C–C bonds explains the conformational preference of the title compound. Comparison of the recorded X-ray diffraction bond parameters with theoretical values calculated at the B3LYP/6-31G(d,p) and HF/6-31G(d,p) levels of theory showed that ETABOC adopts a chair conformation and possesses an inversion center.
The synthesis and single crystal structure of a new cocrystal, which is composed of OHphenolic∙∙∙OHphenolic∙∙∙Naminalic supramolecular heterosynthons assembled from 4-tert-butylphenol and the macrocyclic aminal TATU, is presented. This cocrystal was prepared by solvent-free assisted grinding, which is a commonly used mechanochemical method. Crystal structure, supramolecular assembly through hydrogen bonding interactions as well as the physical and spectroscopic properties of the title cocrystal are presented in this paper.
Background: Enterovirus 71 (EV71) is one of the major causative agents of hand, foot, and mouth disease (HFMD), which is sometimes associated with severe central nervous system disease in children. There is currently no specific medication for EV71 infection. Quercetin, one of the most widely distributed flavonoids in plants, has been demonstrated to inhibit various viral infections. However, investigation of the anti-EV71 mechanism has not been reported to date.
Methods: The anti-EV71 activity of quercetin was evaluated by phenotype screening, determining the cytopathic effect (CPE) and EV71-induced cell apoptosis. The effects on EV71 replication were evaluated further by determining virus yield, viral RNA synthesis, and protein expression. The mechanism of action against EV71 was determined from effective-stage and time-of-addition assays. The possible inhibitory functions of quercetin via viral 2Apro, 3Cpro, or 3Dpol were tested. The interaction between EV71 3Cpro and quercetin was predicted and calculated by molecular docking.
Results: Quercetin inhibited EV71-mediated cytopathogenic effects, reduced EV71 progeny yields, and prevented EV71-induced apoptosis with low cytotoxicity. Investigation of the underlying mechanism of action revealed that quercetin exhibited a preventive effect against EV71 infection and inhibited viral adsorption. Moreover, quercetin mediated its powerful therapeutic effects primarily by blocking the early post-attachment stage of viral infection. Further experiments demonstrated that quercetin potently inhibited the activity of the EV71 protease 3Cpro, thereby blocking viral replication, but not the activity of the protease 2Apro or the RNA polymerase 3Dpol. Molecular docking of the 3Cpro–quercetin complex predicted that quercetin inserts into the substrate-binding pocket of EV71 3Cpro, blocking substrate recognition and thereby inhibiting 3Cpro activity.
Conclusions: Quercetin can effectively prevent EV71-induced cell injury with low toxicity to host cells. Quercetin may act in more than one way to deter viral infection, exhibiting both a preventive and a powerful therapeutic effect against EV71. Furthermore, quercetin potently inhibits EV71 3Cpro activity, thereby blocking EV71 replication.
Biomechanical analysis of the fixation strength of a novel plate for greater tuberosity fractures
(2018)
Background: The incidence of isolated greater tuberosity fractures has been estimated at 20% of all proximal humeral fractures. It is generally accepted that displaced (>5 mm) fractures should be treated surgically, but the optimal surgical fixation of greater tuberosity fractures remains unclear.
Objective: The goal of this study was to simulate the loading environment of a new plate system (Kaisidis plate, Königsee) for fractures of the greater tuberosity, and to demonstrate the stability of the plate.
Methods: A Finite Element Method (FEM) simulation analysis was performed on a Kaisidis plate fixed with nine screws in a greater tuberosity fracture model. SolidWorks 2015 simulation software was used for the analysis. The Kaisidis plate is a bone plate intended for greater tuberosity fractures. It is a low-profile plate with nine holes for 2.4 mm diameter locking screws, eight suture holes, and additional K-wire holes for temporary fixation of the fragment.
The supraspinatus tendon has the greatest effect on the fracture zone and was therefore the primary focus of this study. Only linear calculations were performed.
Results: The calculations were performed such that the total applied force resulted in a maximum stress of 816 N/mm². The findings indicated that the most critical points of the Kaisidis system are the screws anchored in the bone. The maximal force generated by the supraspinatus tendon was 784 N, which is higher than the minimal acceptable force.
The results of the FEM analysis showed that the maximal supraspinatus force was 11.6% higher than the minimal acceptable force. Accordingly, before the screws or the plate would show first signs of plastic deformation, the applied load would have to exceed twice the maximal force required to tear the supraspinatus tendon.
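The force margin reported above can be reproduced with a short back-of-the-envelope check. This is a sketch only: the minimal acceptable force is not stated explicitly in the abstract and is derived here from the quoted 11.6% margin.

```python
# Back-of-the-envelope check of the reported force margin
# (values taken from the abstract; the minimal acceptable force
# is an implied quantity, derived from the 11.6% figure).
max_supraspinatus_force_N = 784.0

# 784 N is stated to be 11.6% higher than the minimal acceptable force:
min_acceptable_force_N = max_supraspinatus_force_N / 1.116
print(f"implied minimal acceptable force: {min_acceptable_force_N:.1f} N")
# -> implied minimal acceptable force: 702.5 N

# Safety criterion quoted in the abstract: plastic deformation of
# screws/plate occurs only above twice the tendon tear force.
safety_factor = 2.0
required_load_N = safety_factor * max_supraspinatus_force_N
print(f"load before plastic deformation must exceed: {required_load_N:.0f} N")
# -> load before plastic deformation must exceed: 1568 N
```

The check confirms internal consistency of the reported numbers; it does not substitute for the FEM analysis itself.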
Conclusion: Based on the results of this analysis and the fulfilment of our acceptance criterion, the FEM model indicated that the strength of the Kaisidis plate exceeded the proposed maximum loads under non-cyclic loading conditions.