Highlights
• Six Newton methods for solving matrix quadratic equations in linear DSGE models.
• Compared to QZ using 99 different DSGE models including Smets and Wouters (2007).
• Newton methods more accurate than QZ with comparable computation burden.
• Apt for refining solutions from alternative methods or nearby parameterizations.
Abstract
This paper presents and compares Newton-based methods from the applied mathematics literature for solving the matrix quadratic that underlies the recursive solution of linear DSGE models. The methods are compared using nearly 100 different models from the Macroeconomic Model Data Base (MMB) and iteratively varied parameterizations of the monetary policy rule in the medium-scale New Keynesian model of Smets and Wouters (2007). We find that Newton-based methods compare favorably in solving DSGE models, providing higher accuracy as measured by the forward error of the solution at a comparable computation burden. The methods, however, suffer from their inability to guarantee convergence to a particular (e.g., the unique stable) solution, but their iterative procedures lend themselves to refining solutions obtained from different methods or parameterizations.
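The underlying object is the matrix quadratic A X² + B X + C = 0, whose solvent X gives the recursive model solution. A generic Newton iteration for it solves a generalized Sylvester equation for each update; the NumPy sketch below uses Kronecker vectorization, which is suitable only for small systems and is an illustrative stand-in, not any of the paper's six specific methods:

```python
import numpy as np

def newton_matrix_quadratic(A, B, C, X0, tol=1e-12, max_iter=50):
    """Newton's method for the matrix quadratic F(X) = A X^2 + B X + C = 0.

    Each step solves the linearization A*D*X + (A*X + B)*D = -F(X)
    for the update D, here via Kronecker vectorization (only viable
    for small matrices, since the linear system has size n^2 x n^2).
    """
    n = X0.shape[0]
    I = np.eye(n)
    X = X0.copy()
    for _ in range(max_iter):
        R = A @ X @ X + B @ X + C            # residual F(X)
        if np.linalg.norm(R, 'fro') < tol:
            break
        # vec(A D X + (A X + B) D) = (X^T kron A + I kron (A X + B)) vec(D)
        K = np.kron(X.T, A) + np.kron(I, A @ X + B)
        d = np.linalg.solve(K, -R.flatten(order='F'))
        X = X + d.reshape((n, n), order='F')
    return X
```

As the abstract notes, nothing here guarantees convergence to the unique stable solvent; which solution is found depends on the starting point X0, which is exactly why the iteration suits refining a solution from another method or a nearby parameterization.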
The hierarchical feature regression (HFR) is a novel graph-based regularized regression estimator, which mobilizes insights from the domains of machine learning and graph theory to estimate robust parameters for a linear regression. The estimator constructs a supervised feature graph that decomposes parameters along its edges, adjusting first for common variation and successively incorporating idiosyncratic patterns into the fitting process. The graph structure has the effect of shrinking parameters towards group targets, where the extent of shrinkage is governed by a hyperparameter, and group compositions as well as shrinkage targets are determined endogenously. The method offers rich resources for the visual exploration of the latent effect structure in the data, and demonstrates good predictive accuracy and versatility when compared to a panel of commonly used regularization techniques across a range of empirical and simulated regression tasks.
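The shrinkage-toward-group-targets idea can be illustrated with a deliberately simplified stand-in: a ridge-type estimator that penalizes deviations of coefficients from their group means, with the group structure supplied by hand rather than determined endogenously from a supervised feature graph as in the HFR:

```python
import numpy as np

def group_shrinkage_regression(X, y, groups, lam):
    """Linear regression with coefficients shrunk toward group means.

    Minimizes ||y - X b||^2 + lam * ||(I - P) b||^2, where P projects
    a coefficient vector onto its group-wise means. This is a toy
    illustration of shrinkage toward group targets, NOT the HFR itself.
    """
    p = X.shape[1]
    P = np.zeros((p, p))
    for g in set(groups):
        idx = [j for j in range(p) if groups[j] == g]
        P[np.ix_(idx, idx)] = 1.0 / len(idx)
    M = np.eye(p) - P                 # penalizes deviation from group mean
    return np.linalg.solve(X.T @ X + lam * M, X.T @ y)
```

With lam = 0 this reduces to ordinary least squares; as lam grows, coefficients within a group are pulled together, mirroring how the hyperparameter in the HFR governs the extent of shrinkage toward group targets.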
In a unifying framework generalizing established theories, we characterize the conditions under which Joint Ownership of assets creates the best cooperation incentives in a partnership. We endogenise renegotiation costs and assume that they weakly increase with additional assets. A salient sufficient condition for optimal cooperation incentives among patient partners is that Joint Ownership be a Strict Coasian Institution, for which transaction costs impede an efficient asset reallocation after a breakdown. In contrast to Halonen (2002), the logic behind our results is that Joint Ownership maximizes both the value of the relationship and the costs of renegotiating ownership after a broken relationship.
Highlights
• An airport can result in high particle concentrations in a distant residential area.
• The particle size distribution indicated the airport as the main source of particles.
• Lower air traffic during the COVID-19 pandemic led to lower particle concentrations.
• The particle concentration showed high temporal variations.
Abstract
Exposure to ultrafine particles has a significant influence on human health. In regions with large commercial airports, air traffic and ground operations can represent a potential particle source. The particle number concentration was measured in a low-traffic residential area about 7 km from Frankfurt Airport with a Condensation Particle Counter in a long-term study. In addition, the particle number size distribution was determined using a Fast Mobility Particle Sizer.
The particle number concentrations showed high variations over the entire measuring period and even within a single day. A maximum 24 h mean of 24,120 cm⁻³ was detected. Very high particle number concentrations were measured in particular when the wind came from the direction of the airport. In this case, the particle number size distribution showed a maximum in the particle size range between 5 and 15 nm. Particles produced by combustion in jet engines typically fall in this size range and have a high potential to be deposited in the alveoli. During a period with high air traffic volume, significantly higher particle number concentrations were measured than during a period with low air traffic volume, such as during the COVID-19 pandemic.
A large commercial airport thus has the potential to lead to a high particle number concentration even in a distant residential area. Due to the high particle number concentrations, the critical particle size, and strong concentration fluctuations, long-term measurements are essential for a realistic exposure analysis.
Correlations in azimuthal angle extending over a long range in pseudorapidity between particles, usually called the "ridge" phenomenon, were discovered in heavy-ion collisions and later found in pp and p−Pb collisions. In large systems, they are thought to arise from the expansion (collective flow) of the produced particles. Extending these measurements over a wider range in pseudorapidity and final-state particle multiplicity is important for better understanding the origin of these long-range correlations in small collision systems. In this Letter, measurements of the long-range correlations in p−Pb collisions at √sNN = 5.02 TeV are extended to a pseudorapidity gap of Δη ∼ 8 between particles using the ALICE forward multiplicity detectors. After suppressing non-flow correlations, e.g., from jet and resonance decays, the ridge structure is observed to persist up to a very large gap of Δη ∼ 8 for the first time in p−Pb collisions. This shows that the collective flow-like correlations extend over an extensive pseudorapidity range also in small collision systems such as p−Pb. The pseudorapidity dependence of the second-order anisotropic flow coefficient, v2(η), is extracted from the long-range correlations. The v2(η) results are presented for a wide pseudorapidity range of −3.1 < η < 4.8 in various centrality classes in p−Pb collisions. To gain a comprehensive understanding of the source of anisotropic flow in small collision systems, the v2(η) measurements are compared to hydrodynamic and transport model calculations. The comparison suggests that final-state interactions play a dominant role in developing the anisotropic flow in small collision systems.
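As a toy illustration of what a flow coefficient like v2 means, one can sample single-particle azimuthal angles from a Fourier-modulated distribution dN/dφ ∝ 1 + 2 v2 cos(2(φ − Ψ)) and recover v2 as the mean of cos(2(φ − Ψ)). This is a simplified single-particle picture with a known symmetry plane, not the two-particle correlation analysis with non-flow suppression used in the Letter:

```python
import numpy as np

def sample_azimuthal(n, v2, psi=0.0, rng=None):
    """Draw angles from dN/dphi ∝ 1 + 2*v2*cos(2*(phi - psi))
    by rejection sampling against a constant envelope 1 + 2*v2."""
    if rng is None:
        rng = np.random.default_rng()
    out = np.empty(0)
    while out.size < n:
        phi = rng.uniform(0.0, 2.0 * np.pi, 2 * n)
        u = rng.uniform(0.0, 1.0 + 2.0 * v2, 2 * n)
        keep = u < 1.0 + 2.0 * v2 * np.cos(2.0 * (phi - psi))
        out = np.concatenate([out, phi[keep]])
    return out[:n]

def estimate_v2(phi, psi=0.0):
    """With a known symmetry plane psi, v2 = <cos(2*(phi - psi))>."""
    return float(np.mean(np.cos(2.0 * (phi - psi))))
```

In real data Ψ is not known and non-flow contaminates the signal, which is why the measurement instead uses long-range two-particle correlations with a large Δη gap.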
We investigate the applicability of the well-known multilevel Monte Carlo (MLMC) method to the class of density-driven flow problems, in particular the problem of salinisation of coastal aquifers. As a test case, we solve the uncertain Henry saltwater intrusion problem. Unknown porosity, permeability and recharge parameters are modelled using random fields. The classical deterministic Henry problem is non-linear and time-dependent, and can easily take several hours of computing time. Uncertain settings require the solution of multiple realisations of the deterministic problem, and the total computational cost increases drastically. Instead of computing hundreds of random realisations, typically the mean value and the variance are computed. Standard methods such as Monte Carlo or surrogate-based methods are a good choice, but they compute all stochastic realisations on the same, often very fine, mesh. They also do not balance the stochastic and discretisation errors. These facts motivated us to apply the MLMC method. We demonstrate that by solving the Henry problem on multilevel spatial and temporal meshes, the MLMC method reduces the overall computational and storage costs. To further reduce the computing cost, parallelisation is performed in both the physical and stochastic spaces. To solve each deterministic scenario, we run the parallel multigrid solver ug4 in a black-box fashion.
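The telescoping idea behind MLMC can be shown on a toy problem: write E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}] and spend many samples on cheap coarse meshes and few on expensive fine ones, with each correction term evaluated on the same random input at both resolutions. The "solver" below is a hypothetical stand-in (trapezoidal quadrature of a random integrand), not the Henry problem or ug4:

```python
import numpy as np

def solver(omega, level):
    """Toy stand-in for one deterministic solve: trapezoidal
    approximation of ∫_0^1 exp(omega*x) dx on 2**level cells."""
    x = np.linspace(0.0, 1.0, 2**level + 1)
    f = np.exp(omega * x)
    h = 1.0 / 2**level
    return h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

def mlmc_mean(levels, samples_per_level, rng):
    """MLMC estimate of E[P_L] via the telescoping sum
    E[P_0] + sum_l E[P_l - P_{l-1}]; each correction uses the
    SAME random inputs on the fine and coarse mesh, so its
    variance shrinks as the meshes converge."""
    est = 0.0
    for l, n in zip(levels, samples_per_level):
        omegas = rng.standard_normal(n)
        fine = np.array([solver(w, l) for w in omegas])
        if l == levels[0]:
            est += fine.mean()
        else:
            coarse = np.array([solver(w, l - 1) for w in omegas])
            est += (fine - coarse).mean()
    return est
```

A typical call uses decreasing sample counts on finer levels, e.g. `mlmc_mean([2, 3, 4, 5], [4000, 1000, 250, 60], rng)`: the cheap coarse level absorbs most of the statistical error, while the few fine-level samples remove the discretisation bias.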
Highlights
• We present the first results of a deep learning model based on a convolutional neural network for earthquake magnitude estimation, using HR-GNSS displacement time series.
• The influence of different dataset configurations, such as station numbers, epicentral distances, signal duration, and earthquake size, was analyzed to determine how the model can be adapted to various scenarios.
• The model was tested using real data from different regions and magnitudes, resulting in the best cases with 0.09 ≤ RMS ≤ 0.33.
Abstract
High-rate Global Navigation Satellite System (HR-GNSS) data can be highly useful for earthquake analysis as it provides continuous high-frequency measurements of ground motion. This data can be used to analyze diverse parameters related to the seismic source and to assess the potential of an earthquake to produce strong motions at certain distances and even generate tsunamis. In this work, we present the first results of a deep learning model based on a convolutional neural network for earthquake magnitude estimation, using HR-GNSS displacement time series. The influence of different dataset configurations, such as station numbers, epicentral distances, signal duration, and earthquake size, was analyzed to determine how the model can be adapted to various scenarios. We explored the potential of the model for global application and compared its performance using both synthetic and real data from different seismogenic regions. The performance of our model at this stage was satisfactory in estimating earthquake magnitude from synthetic data with 0.07 ≤ RMS ≤ 0.11. Comparable results were observed in tests using synthetic data from a different region than the training data, with RMS ≤ 0.15. Furthermore, the model was tested using real data from different regions and magnitudes, resulting in the best cases with 0.09 ≤ RMS ≤ 0.33, provided that the data from a particular group of stations had similar epicentral distance constraints to those used during the model training. The robustness of the DL model can be improved to work independently of the window size of the time series and the number of stations, enabling faster estimation by the model using only near-field data. Overall, this study provides insights for the development of future DL approaches for earthquake magnitude estimation with HR-GNSS data, emphasizing the importance of proper handling and careful data selection for further model improvements.
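The architecture class described (1-D convolutions over multi-component displacement traces feeding a regression head) can be sketched at the shape level in plain NumPy. The layer sizes are illustrative assumptions and the weights are random placeholders, not the study's trained model:

```python
import numpy as np

def conv1d(x, kernels, stride=1):
    """Valid cross-correlation: x is (channels, T), kernels is
    (n_filters, channels, k); returns (n_filters, T_out)."""
    n_f, _, k = kernels.shape
    t_out = (x.shape[1] - k) // stride + 1
    out = np.zeros((n_f, t_out))
    for f in range(n_f):
        for t in range(t_out):
            out[f, t] = np.sum(kernels[f] * x[:, t*stride:t*stride + k])
    return out

def magnitude_net(x, params):
    """Toy regression head: conv -> ReLU -> global average pool -> linear,
    mapping a displacement time series to a scalar magnitude estimate."""
    h = np.maximum(conv1d(x, params["conv"]), 0.0)    # (filters, T')
    pooled = h.mean(axis=1)                           # global average pool
    return float(params["w"] @ pooled + params["b"])  # scalar magnitude

rng = np.random.default_rng(0)
params = {"conv": 0.1 * rng.standard_normal((8, 3, 16)),  # 8 filters over 3 GNSS components
          "w": 0.1 * rng.standard_normal(8),
          "b": 0.0}
x = rng.standard_normal((3, 512))    # hypothetical 3-component displacement, 512 samples
m_hat = magnitude_net(x, params)
```

The global average pool is one simple way to make such a head less sensitive to the window size, which the abstract flags as a direction for improving robustness.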
Previous phylogenetic analyses of the grass-specialist leafhopper tribe Chiasmini have resolved relationships among genera but have included few representatives of individual genera. Here the phylogeny of 20 Chinese species belonging to 8 chiasmine genera was investigated by combining DNA sequence data from two mitochondrial genes (COI, 16S) and two nuclear genes (H3, 28S). In both maximum likelihood (ML) and Bayesian inference (BI) analyses, relationships among genera were largely consistent with prior analyses, with most members of the tribe placed into two sister clades: (Exitianus + Nephotettix) and the remaining five sampled genera. To examine morphology-based species definitions in the taxonomically difficult genus Exitianus Ball, 1929, one mitochondrial gene (COI) and one nuclear gene (ITS2) were used to infer the phylogenetic relationships and status of two common and widespread species and to compare the performance of different molecular species-delimitation methods. These analyses divide the included populations into two well-supported clades corresponding to current morphological species concepts, but some inconsistencies occurred under the jMOTU, ABGD and bPTP methods, depending on which gene and analytical parameter values were selected. Considering the variable results yielded by methods employing single loci, the BPP method, which combines data from multiple loci, may be more reliable in Exitianus.
The Cladonematidae are a family of hydrozoans with a worldwide distribution and morphological adaptations for a benthic mode of life. Species of this family are characterized by high morphological variability, which has caused many taxonomic debates, mainly for the species of the genera Eleutheria Quatrefages, 1842 and Staurocladia Hartlaub, 1917. Herein, we describe Staurocladia dzilamensis sp. nov., a new species of crawling hydromedusa from the southern Gulf of Mexico. This finding also constitutes the first record of the genus Staurocladia in the Gulf of Mexico. The presence of additional nematocyst clusters, supplementing the apical one on the upper branch of the tentacles, places it within Staurocladia. The presence of exumbrellar buds, a conspicuous marginal ring of nematocysts, 6–11 bifid tentacles with lower branches longer than their upper counterparts, a cnidome with stenoteles of two size classes, and two nematocyst clusters on the upper branch supplementing the apical one, placed alternately on its aboral and oral sides, permits S. dzilamensis to be differentiated from its congeners. A taxonomic key for the species of Staurocladia is provided.
Coming of voting age. Evidence from a natural experiment on the effects of electoral eligibility
(2024)
In recent years, several jurisdictions have lowered the voting age, with many more discussing it. Sceptics question whether young people are ready to vote, while supporters argue that allowing them to vote would increase their specific engagement with politics. To test the latter argument, we use a series of register-based surveys of over 10,000 German adolescents. Knowing the exact birthdates of our respondents, we estimate the causal effect of eligibility on their information-seeking behaviour in a regression discontinuity design. While eligible and non-eligible respondents do not differ in their fundamental political dispositions, those allowed to vote are more likely to discuss politics with their family and friends and to use a voting advice application. This effect appears to be stronger for voting age 16 than for 18. The right to vote changes behaviour. Therefore, we cannot conclude from the behaviour of ineligible citizens that they are unfit to vote.