Zinc finger (ZnF) domains occur in a variety of structural contexts and, despite their small size, achieve diverse target specificities, covering single-stranded and double-stranded DNA and RNA as well as proteins. Combined with other RNA-binding domains, ZnFs enhance the affinity and specificity of RNA-binding proteins (RBPs). The ZnF-containing immunoregulatory RBP Roquin initiates mRNA decay, thereby controlling the adaptive immune system. Its unique ROQ domain shape-specifically recognizes stem-looped cis-elements in mRNA 3'-untranslated regions (UTRs). The N-terminus of Roquin contains a RING domain for protein-protein interactions and a ZnF, which has been suggested to play an essential role in Roquin-mediated RNA decay. The ZnF domain boundaries, its RNA motif preference and its interplay with the ROQ domain have remained elusive, in part owing to the lack of high-resolution data for this challenging protein. We provide the solution structure of the Roquin-1 ZnF and use an RBNS-NMR pipeline to show that the ZnF recognizes AU-rich elements (AREs). We systematically quantify the contributions of adenines in a poly(U) background to specific complex formation. With the simultaneous binding of ROQ and ZnF to a natural target transcript of Roquin, our study suggests for the first time how Roquin integrates RNA shape and sequence specificity through the ROQ-ZnF tandem.
Telemonitoring devices can be used to screen consumers' characteristics and mitigate information asymmetries that lead to adverse selection in insurance markets. However, some consumers value their privacy and dislike sharing private information with insurers. In the second-best efficient Wilson-Miyazaki-Spence (WMS) framework, we allow consumers to reveal their risk type at an individual subjective cost and show analytically how this affects insurance market equilibria as well as utilitarian social welfare. Our analysis shows that the choice to disclose one's risk type can substitute for deductibles for consumers whose transparency aversion is sufficiently low. This can lead to a Pareto improvement of social welfare and a Pareto-efficient market allocation. However, if all consumers are offered cross-subsidizing contracts, the introduction of a transparency contract decreases or even eliminates cross-subsidies. Given the prior existence of a WMS equilibrium, utility is shifted from individuals who do not reveal their private information to those who choose to reveal. Our analysis provides a theoretical foundation for the discussion on consumer protection in the context of digitalization. It shows that new technologies bring new ways to challenge cross-subsidization in insurance markets, and it stresses the negative externalities that digitalization has on consumers who are not willing to take part in this development.
The modern tontine: An innovative instrument for longevity risk management in an aging society
(2016)
Changing social, financial and regulatory frameworks, such as an increasingly aging society, the current low-interest-rate environment and the implementation of Solvency II, are driving the search for new product forms for private pension provision. To address these various issues, such product forms should reduce or avoid investment guarantees and longevity risks, still provide reliable insurance benefits, and simultaneously take account of the increasing financial resources required at very high ages. In this context, we examine whether a historical concept of insurance, the tontine, entails enough innovative potential to extend and improve the prevailing privately funded pension solutions in a modern way. The tontine basically generates an age-increasing cash flow, which can help to match the increasing financing needs at old ages. However, the tontine generates volatile cash flows, so that, especially in the context of an aging society, the insurance character of the tontine cannot be guaranteed in every situation. We show that partial tontinization of retirement wealth can serve as a reliable supplement to existing pension products.
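To make the age-increasing but volatile payout profile concrete, here is a minimal Python sketch of a tontine pool; the pool size, premium, investment return, mortality rate and distribution share are all hypothetical choices, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters: pool size, single premium, flat investment
# return, flat one-year death probability, share of the fund paid out yearly.
n0, premium, r, q, share = 1000, 10_000.0, 0.02, 0.05, 0.04
horizon = 30

fund = n0 * premium
alive = n0
payouts = []  # payout per survivor in each year

for t in range(horizon):
    fund *= 1 + r                       # investment return on the pooled fund
    coupon = fund * share               # fixed share of the fund is distributed
    fund -= coupon
    alive = rng.binomial(alive, 1 - q)  # survivors after this year's mortality
    if alive == 0:
        break
    payouts.append(coupon / alive)      # mortality credits: fewer survivors, higher payout

print(f"first payout {payouts[0]:,.0f}, last payout {payouts[-1]:,.0f}")
```

Per-capita payouts grow because the survivor count shrinks faster than the distributed coupon, while the binomial mortality draw makes them volatile, which is exactly the trade-off discussed above.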
The Solvency II standard formula employs an approximate Value-at-Risk approach to define risk-based capital requirements. This paper investigates how the standard formula's stock risk calibration influences the equity position and investment strategy of a shareholder-value-maximizing insurer with limited liability. The capital requirement for stock risks is determined by multiplying a regulation-defined stock risk parameter by the value of the insurer's stock portfolio. Intuitively, a higher stock risk parameter should reduce risky investments as well as insolvency risk. However, we find that the default probability does not necessarily decrease when reducing the investment risk (by increasing the stock investment risk parameter). We also find that depending on the precise interaction between assets and liabilities, some insurers will invest conservatively, whereas others will prefer a very risky investment strategy, and a slight change of the stock risk parameter may lead from a conservative to a high-risk asset allocation.
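In essence, the stock risk charge discussed here is one multiplication; a minimal sketch (the 39% base stress for type-1 equity is the standard-formula value before any symmetric adjustment, and the portfolio numbers are made up):

```python
# Minimal illustration, not the paper's model: the standard formula's
# capital requirement for stock risk is a regulation-defined stress
# parameter times the market value of the stock portfolio.
def scr_stock(stock_value: float, stress: float = 0.39) -> float:
    # 0.39 is the Solvency II base stress for type-1 equity,
    # before the symmetric adjustment.
    return stress * stock_value

assets, stock_share = 100.0, 0.20
print(scr_stock(assets * stock_share))  # 7.8 units of capital for stock risk
```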
European insurers are allowed to make discretionary decisions in the calculation of Solvency II capital requirements. These choices include the design of risk models (ranging from a standard formula to a full internal model) and the use of long-term guarantees measures. This article examines the impact and the drivers of discretionary decisions with respect to capital requirements for market risks. In a first step of our analysis, we assess the risk profiles of 49 stock insurers using daily market data. In a second step, we exploit hand-collected Solvency II data for the years 2016 to 2020. We find that long-term guarantees measures substantially influence the reported solvency ratios. The measures are chosen particularly by less solvent insurers and firms with high interest rate and credit spread sensitivities. Internal models are used more frequently by large insurers and especially for risks for which the firms have already found adequate immunization strategies.
This paper compares the shareholder-value-maximizing capital structure and pricing policy of insurance groups with those of stand-alone insurers. Groups can utilise intra-group risk diversification by means of capital and risk transfer instruments. We show that using these instruments enables the group to offer insurance with less default risk and at lower premiums than is optimal for stand-alone insurers. We also take into account that shareholders of groups could find it more difficult to prevent inefficient overinvestment or cross-subsidisation, which we model by higher dead-weight costs of carrying capital. The trade-off between risk diversification on the one hand and higher dead-weight costs on the other can result in group building being beneficial for shareholders but detrimental for policyholders.
Greater firm-level transparency through enhanced disclosure provides more information regarding the risk situation of an insurer to its outside stakeholders such as stock investors and policyholders. The disclosure of the insurer's risk-taking can result in negative influences on, for example, its stock performance and insurance demand when stock investors and policyholders are risk-averse. Insurers, which are concerned about the potential ex post adverse effects of risk-taking under greater transparency, are thus inclined to limit their risks ex ante. In other words, improved firm-level transparency can reduce insurers' risk-taking incentives. This article investigates empirically the relationship between firm-level transparency and insurers' strategies on capitalization and risky investments. By exploring the disclosure levels and the risk behavior of 52 European stock insurance companies from 2005 to 2012, the results show that insurers tend to hold more equity capital under the anticipation of greater transparency, and this capital-holding strategy is consistent across different types of insurance businesses. When considering the influence of improved transparency on the investment policy of insurers, the results are mixed for different types of insurers.
This article explores life insurance consumption in 31 European countries from 2003 to 2012 and aims to investigate the extent to which market transparency can affect life insurance demand. The cross-country evidence for the entire sample period shows that greater market transparency, which resolves asymmetric information, can generate a higher demand for life insurance. However, when considering the financial crisis period (2008-2012) separately, the results suggest a negative impact of enhanced market transparency on life insurance consumption. The mixed findings imply a trade-off between the reduction in adverse selection under greater market transparency and the possible negative effects on life insurance consumption during the crisis period due to more effective market discipline. Furthermore, this article studies the extent to which transparency can influence the reaction of life insurance demand to bad market outcomes, i.e., low solvency ratios or low profitability. The results indicate that markets with bad outcomes generate higher life insurance demand under greater transparency compared to markets that also experience bad outcomes but are less transparent.
This paper sheds light on the life insurance sector's liquidity risk exposure. Life insurers are important long-term investors on financial markets. Due to their long-term investment horizon, they cannot quickly adapt to changes in macroeconomic conditions. Rising interest rates in particular can expose life insurers to run-like situations, since a slow interest rate pass-through incentivizes policyholders to terminate insurance policies and invest the proceeds at relatively high market interest rates. We develop and empirically calibrate a granular model of policyholder behavior and life insurance cash flows to quantify insurers' liquidity risk exposure stemming from policy terminations. Our model predicts that a sharp interest rate rise of 4.5 percentage points within two years would force life insurers to liquidate 12% of their initial assets. While the associated fire sale costs are small under reasonable assumptions, policy terminations plausibly erase 30% of life insurers' capital due to mark-to-market accounting. Our analysis reveals a mechanism by which monetary policy tightening increases the liquidity risk exposure of non-bank financial intermediaries with long-term assets.
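A stylized version of the termination incentive might look as follows; the convex functional form and all parameter values are assumptions for illustration, not the paper's empirically calibrated policyholder model.

```python
import numpy as np

def lapse_rate(market_rate, policy_rate, base=0.03, slope=2.0):
    """Stylized lapse rate: rises convexly once market rates exceed the
    (sticky) policy rate, capturing the run-like incentive described above."""
    spread = np.maximum(market_rate - policy_rate, 0.0)
    return base + slope * spread**2

for m in (0.01, 0.03, 0.055):
    print(f"market rate {m:.2%}: lapse rate {lapse_rate(m, 0.01):.2%}")
```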
This paper investigates the effects of a rise in interest rates and of lapse risk of endowment life insurance policies on the liquidity and solvency of life insurers. We model the book and market value balance sheet of an average German life insurer, subject to both GAAP and Solvency II regulation, featuring an existing back book of policies and an existing asset allocation calibrated with historical data. The balance sheet is then projected forward under stochastic financial markets. Lapse rates are modeled stochastically and depend on the granted guaranteed rate of return and the prevailing level of interest rates. Our results suggest that in the case of a sharp increase in interest rates, policyholders sharply increase lapses and the solvency position of the insurer deteriorates in the short run. This result is particularly driven by the interaction between a reduction in the market value of assets, large guarantees for existing policies, and a very slow adjustment of asset returns to interest rates. A sharp or gradual rise in interest rates is associated with substantial and persistent liquidity needs, which are particularly driven by lapses.
Under Solvency II, corporate governance requirements are a complementary, but nonetheless essential, element to build a sound regulatory framework for insurance undertakings, also to address risks not specifically mitigated by the solvency capital requirements alone. After recalling the provisions of the Second Pillar concerning the system of governance, the paper highlights the emerging regulatory trends in the corporate governance of insurance firms. Among other things, it signals the exceptional extension of the duties and responsibilities assigned to the board of directors, far beyond the traditional role of both monitoring the chief executive officer and assessing the overall direction and strategy of the business. However, better risk governance is not necessarily built on narrow rule-based approaches to corporate governance.
Depending on the point of time and location, insurance companies are subject to different forms of solvency regulation. In modern regulation regimes, such as the future standard Solvency II in the EU, insurance pricing is liberalized and risk-based capital requirements will be introduced. In many economies in Asia and Latin America, on the other hand, supervisors require the prior approval of policy conditions and insurance premiums, but do not conduct risk-based capital regulation. This paper compares the outcome of insurance rate regulation and risk-based capital requirements by deriving stock insurers' best responses. It turns out that binding price floors affect insurers' optimal capital structures and induce them to choose higher safety levels. Risk-based capital requirements are a more efficient instrument of solvency regulation and allow for lower insurance premiums, but may come at the cost of investment efforts into adequate risk monitoring systems. The paper derives threshold values for the regulator's investments into risk-based capital regulation and provides starting points for designing a welfare-enhancing insurance regulation scheme.
Insurance guarantee schemes aim to protect policyholders from the costs of insurer insolvencies. However, guarantee schemes can also reduce insurers’ incentives to conduct appropriate risk management. We investigate stock insurers’ risk-shifting behavior for insurance guarantee schemes under the two different financing alternatives: a flat-rate premium assessment versus a risk-based premium assessment. We identify which guarantee scheme maximizes policyholders’ welfare, measured by their expected utility. We find that the risk-based insurance guarantee scheme can only mitigate the insurer’s risk-shifting behavior if a substantial premium loading is present. Furthermore, the risk-based guarantee scheme is superior for improving policyholders’ welfare compared to the flat-rate scheme when the mitigating effect occurs.
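One way to make the flat-rate versus risk-based contrast concrete is to price the risk-based premium as the policyholders' default put on the insurer's assets (a Merton-style sketch, used here only as an illustration; the paper's model and numbers differ). The flat assessment is insensitive to asset risk, which is what leaves room for risk shifting:

```python
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def default_put(assets, liabilities, sigma, r=0.0, T=1.0):
    """Black-Scholes value of the policyholders' default put: a put on the
    insurer's assets struck at the liabilities (Merton-style sketch)."""
    d1 = (log(assets / liabilities) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return liabilities * exp(-r * T) * N(-d2) - assets * N(-d1)

flat = 0.005 * 100  # flat-rate assessment: 0.5% of liabilities, independent of risk
for sigma in (0.05, 0.15, 0.30):
    print(f"asset vol {sigma:.0%}: risk-based {default_put(110, 100, sigma):6.3f} vs flat {flat}")
```

Raising asset volatility raises the risk-based premium but leaves the flat one unchanged, reproducing the risk-shifting incentive the paper analyzes.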
Through the lens of market participants' objective to minimize counterparty risk, we provide an explanation for the reluctance to clear derivative trades in the absence of a central clearing obligation. We develop a comprehensive understanding of the benefits and potential pitfalls with respect to a single market participant's counterparty risk exposure when moving from a bilateral to a clearing architecture for derivative markets. Previous studies suggest that central clearing is beneficial for single market participants in the presence of a sufficiently large number of clearing members. We show that three elements can render central clearing harmful for a market participant's counterparty risk exposure regardless of the number of its counterparties: 1) correlation across and within derivative classes (i.e., systematic risk), 2) collateralization of derivative claims, and 3) loss sharing among clearing members. Our results have substantial implications for the design of derivatives markets, and highlight that recent central clearing reforms might not incentivize market participants to clear derivatives.
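The trade-off between bilateral netting across derivative classes and multilateral netting within a cleared class can be sketched with a standard exposure calculation; the counterparty count, class count and position values below are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 10, 2                  # counterparties and derivative classes (made up)
V = rng.normal(size=(K, D))   # mark-to-market value of positions per counterparty/class

# Bilateral architecture: net across all classes with each counterparty,
# then sum the positive (i.e., at-risk) exposures.
bilateral = np.maximum(V.sum(axis=1), 0).sum()

# Clearing class 0: that class nets multilaterally at the CCP,
# the remaining classes stay bilateral and lose cross-class netting.
cleared = max(V[:, 0].sum(), 0) + np.maximum(V[:, 1:].sum(axis=1), 0).sum()

print(f"bilateral exposure {bilateral:.2f}, with CCP {cleared:.2f}")
```

Whether clearing lowers exposure depends on how much cross-class netting is given up, which is exactly why correlation across classes, collateralization and loss sharing can flip the sign of the benefit.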
Central clearing counterparties (CCPs) were established to mitigate default losses resulting from counterparty risk in derivatives markets. In a parsimonious model, we show that clearing benefits are distributed unevenly across market participants. Loss sharing rules determine who wins or loses from clearing. Current rules disproportionately benefit market participants with flat portfolios. Instead, those with directional portfolios are relatively worse off, consistent with their reluctance to voluntarily use central clearing. Alternative loss sharing rules can address cross-sectional disparities in clearing benefits. However, we show that CCPs may favor current rules to maximize fee income, with externalities on clearing participation.
Market risks account for an integral part of life insurers' risk profiles. This paper explores the market risk sensitivities of insurers in two large life insurance markets, namely the U.S. and Europe. Based on panel regression models and daily market data from 2012 to 2018, we analyze the reaction of insurers' stock returns to changes in interest rates and CDS spreads of sovereign counterparties. We find that the influence of interest rate movements on stock returns is more than 50% larger for U.S. than for European life insurers. Falling interest rates reduce stock returns in particular for less solvent firms, insurers with a high share of life insurance reserves and unit-linked insurers. Moreover, life insurers' sensitivity to interest rate changes is seven times larger than their sensitivity towards CDS spreads. Only European insurers significantly suffer from rising CDS spreads, whereas U.S. insurers are immunized against increasing sovereign default probabilities.
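A regression of this type could be set up as in the following sketch; the data are simulated and all variable names and magnitudes are placeholders, not the paper's sample of US and European life insurers.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_firms, n_days = 20, 500

# Simulated panel: daily changes in interest rates and sovereign CDS spreads
# are common shocks; each firm has its own interest rate sensitivity.
df = pd.DataFrame({
    "firm": np.repeat(np.arange(n_firms), n_days),
    "d_rate": np.tile(rng.normal(0, 0.01, n_days), n_firms),
    "d_cds": np.tile(rng.normal(0, 0.02, n_days), n_firms),
})
beta_rate = rng.normal(3.0, 0.5, n_firms)
df["ret"] = beta_rate[df["firm"]] * df["d_rate"] - 0.4 * df["d_cds"] \
            + rng.normal(0, 0.02, len(df))

# Firm fixed effects, standard errors clustered by firm.
model = smf.ols("ret ~ d_rate + d_cds + C(firm)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(model.params[["d_rate", "d_cds"]])
```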
Life insurance convexity
(2023)
Life insurers sell savings contracts with surrender options, which allow policyholders to prematurely receive guaranteed surrender values. These surrender options move toward the money when interest rates rise. Hence, higher interest rates raise surrender rates, as we document empirically by exploiting plausibly exogenous variation in monetary policy. Using a calibrated model, we then estimate that surrender options would force insurers to sell up to 2% of their investments during an enduring interest rate rise of 25 bps per year. We show that these fire sales are fueled by surrender value guarantees and insurers’ long-term investments.
Life insurance convexity
(2021)
Life insurers sell large volumes of savings contracts with surrender options which allow policyholders to withdraw a guaranteed amount before maturity. These options move toward the money when interest rates rise. Using data on German life insurers, we estimate that a 1 percentage point increase in interest rates raises surrender rates by 17 basis points. We quantify the resulting liquidity risk in a calibrated model of surrender decisions and insurance cash flows. Simulations predict that surrender options can force insurers to sell up to 3% of their assets, depressing asset prices by 90 basis points. The effect is amplified by the duration of insurers' investments, and its impact on the term structure of interest rates depends on life insurers' investment strategy.
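Taking the 17-basis-point estimate at face value, a back-of-the-envelope readout (the baseline surrender rate is a hypothetical value, not from the paper):

```python
base_surrender = 0.04   # hypothetical baseline annual surrender rate
per_pp = 0.0017         # +17 bps of surrender per +1 pp of interest rates (from the abstract)

for d_rate_pp in (1, 2, 3):
    print(f"rates +{d_rate_pp} pp -> surrender rate "
          f"{base_surrender + per_pp * d_rate_pp:.2%}")
```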
This paper documents that the bond investments of insurance companies transmit shocks from insurance markets to the real economy. Liquidity windfalls from household insurance purchases increase insurers' demand for corporate bonds. Exploiting the fact that insurers persistently invest in a small subset of firms for identification, I show that these increases in bond demand raise bond prices and lower firms' funding costs. In response, firms issue more bonds, especially when their bond underwriters are well connected with investors. Firms use the proceeds to raise investment rather than equity payouts. The results emphasize the significant impact of investor demand on firms' financing and investment activities.
Korean immigrants have migrated to New Zealand over the past three decades in search of a happier and more balanced life. While they anticipated that their children would be integrated into New Zealand society, they have primarily settled in Korean ethnic enclaves. In this context, younger Korean New Zealanders have been exposed to and influenced by New Zealand’s national and Korean ethnic cultures. This study examined success beliefs and well-being among Korean youth in New Zealand with a Third Culture Kid background (TCK K-NZ) in comparison to Korean youth in Korea (K-Korean) and European New Zealand youth (Pākehā). Results indicated that TCK K-NZ youth endorsed extrinsic success similarly to K-Korean youth, but that valuing extrinsic success predicted lowered well-being only for K-Korean youth. Conversely, valuing intrinsic success predicted higher well-being across the three groups. Results also revealed that TCK K-NZ youth's well-being levels were between those of K-Korean and Pākehā youth, potentially influenced by different structural relations between success beliefs and well-being, as well as their position as “third culture kids” in New Zealand. This study contributes to understanding cultures' roles in formulating success beliefs and the relationship between success beliefs and well-being for Korean New Zealander youth.
Background: Patients with cancer have an increased risk of VTE. We compared VTE rates and bleeding complications in 1) cancer patients receiving LMWH or UFH and 2) patients with or without cancer.
Methods: Acutely ill, non-surgical patients ≥70 years with (n = 274) or without cancer (n = 2,965) received certoparin 3,000 U anti-Xa o.d. or UFH 5,000 IU t.i.d. for 8-20 days.
Results: 1) Thromboembolic events in cancer patients (proximal DVT, symptomatic non-fatal PE and VTE-related death) occurred at 4.50% with certoparin and 6.03% with UFH (OR 0.73; 95% CI 0.23-2.39). Major bleeding rates were comparable, and minor bleeding (0.75% vs. 5.67%) was nominally less frequent with certoparin. 7.5% of certoparin- and 12.8% of UFH-treated patients experienced serious adverse events. 2) Thromboembolic event rates were comparable in patients with or without cancer (5.29% vs. 4.13%), as were bleeding complications. All-cause death was increased in cancer (OR 2.68; 95% CI 1.22-5.86). 10.2% of patients with and 5.81% of those without cancer experienced serious adverse events (OR 1.85; 95% CI 1.21-2.81).
Conclusions: Certoparin 3,000 U anti-Xa o.d. and UFH 5,000 IU t.i.d. were equally effective and safe with respect to bleeding complications in patients with cancer. There were no statistically significant differences in the risk of thromboembolic events between patients with and without cancer receiving adequate anticoagulation.
Trial Registration: clinicaltrials.gov, NCT00451412
The interpretive reporting of individual case constellations, given suitable parameter constellations and clinical questions, is a central component of the medical remit of the specialty of laboratory medicine.
To comprehensively support the laboratory component of medical diagnostics, specialized laboratory reporting should, independently of the use of knowledge-based systems, generally be performed whenever the corresponding clinical questions and parameter constellations arise, the appropriate methodology is available, the relevant disease prevalences are present, and the requisite laboratory-medicine expertise exists. This need, however, is often not met to the required extent because of the effort involved in producing individual, case-related reports.
When knowledge-based systems are used correctly, specialized laboratory reporting can be supported efficiently, optimized at a high level and, where sensible, standardized. This is one of the main objectives of the Pro.M.D. development (a Prolog system for the support of medical diagnostics). Further goals of the Pro.M.D. development, some of which have likewise already been achieved to a large extent, are the creation of a common notation level for the knowledge that can be formalized in specialized laboratory reporting and the improvement in case-related exchange of experience that this makes possible.
Highlights
• Protocol for extracting and analyzing pollen grains from fossil insects
• Individual fossil grains can be analyzed using a combined approach
• Simple and fast TEM embedding and sectioning protocol
• Protocol enables a taxonomic assignment of pollen
Summary
This protocol explains how to extract pollen from fossil insects, with subsequent descriptions of pollen treatment. We also describe how to document morphological and ultrastructural features with light microscopy and electron microscopy. The protocol enables a taxonomic assignment of pollen that can be used to interpret flower-insect interactions, the foraging and feeding behavior of insects, and the paleoenvironment. The protocol is limited by the state of the fossil, the presence/absence of pollen on fossil specimens, and the availability of extant pollen for comparison.
Highlights
• We present the first results of a deep learning model based on a convolutional neural network for earthquake magnitude estimation, using HR-GNSS displacement time series.
• The influence of different dataset configurations, such as station numbers, epicentral distances, signal duration, and earthquake size, was analyzed to determine how the model can be adapted to various scenarios.
• The model was tested using real data from different regions and magnitudes, resulting in the best cases with 0.09 ≤ RMS ≤ 0.33.
Abstract
High-rate Global Navigation Satellite System (HR-GNSS) data can be highly useful for earthquake analysis, as they provide continuous high-frequency measurements of ground motion. These data can be used to analyze diverse parameters related to the seismic source and to assess the potential of an earthquake to produce strong motions at certain distances or even generate tsunamis. In this work, we present the first results of a deep learning model based on a convolutional neural network for earthquake magnitude estimation, using HR-GNSS displacement time series. The influence of different dataset configurations, such as station numbers, epicentral distances, signal duration, and earthquake size, was analyzed to determine how the model can be adapted to various scenarios. We explored the potential of the model for global application and compared its performance using both synthetic and real data from different seismogenic regions. The performance of our model at this stage was satisfactory in estimating earthquake magnitude from synthetic data, with 0.07 ≤ RMS ≤ 0.11. Comparable results were observed in tests using synthetic data from a different region than the training data, with RMS ≤ 0.15. Furthermore, the model was tested using real data from different regions and magnitudes, resulting in the best cases in 0.09 ≤ RMS ≤ 0.33, provided that the data from a particular group of stations had epicentral distance constraints similar to those used during model training. The robustness of the DL model can be improved so that it works independently of the window size of the time series and the number of stations, enabling faster estimation using only near-field data. Overall, this study provides insights for the development of future DL approaches for earthquake magnitude estimation with HR-GNSS data, emphasizing the importance of proper handling and careful data selection for further model improvements.
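A minimal PyTorch sketch in the spirit of the approach; the layer layout, channel counts and window length are assumptions, not the authors' published model. The adaptive pooling layer is one way to approach the window-size independence discussed above.

```python
import torch
import torch.nn as nn

class MagNet(nn.Module):
    """Toy 1-D CNN regressing magnitude from multi-station HR-GNSS
    displacement series, e.g. 3 components x 10 stations x 512 samples."""
    def __init__(self, n_channels=30):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # makes the head independent of window size
        )
        self.head = nn.Linear(128, 1)  # scalar magnitude estimate

    def forward(self, x):              # x: (batch, n_channels, n_samples)
        return self.head(self.features(x).squeeze(-1)).squeeze(-1)

x = torch.randn(8, 30, 512)            # dummy batch of displacement windows
mags = MagNet()(x)
loss = nn.functional.mse_loss(mags, torch.full((8,), 7.0))  # RMS-style objective
print(mags.shape, loss.item())
```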
PolarCAP – A deep learning approach for first motion polarity classification of earthquake waveforms
(2022)
Highlights
• We present PolarCAP, a deep learning model that can classify the polarity of a waveform with 98% accuracy.
• The first-motion polarity of seismograms is a useful parameter, but its manual determination can be laborious and imprecise.
• We demonstrate that in several cases the model can assign trace polarity more accurately than a human analyst.
Abstract
The polarity of first P-wave arrivals plays a significant role in the effective determination of focal mechanisms, especially for smaller earthquakes. Manual estimation of polarities is not only time-consuming but also prone to human error. This warrants an automated algorithm for first-motion polarity determination. We present a deep learning model, PolarCAP, that uses an autoencoder architecture to identify first-motion polarities of earthquake waveforms. PolarCAP is trained in a supervised fashion using more than 130,000 labelled traces from the Italian seismic dataset (INSTANCE) and is cross-validated on 22,000 traces to choose the optimal set of hyperparameters. We obtain an accuracy of 0.98 on a completely unseen test dataset of almost 33,000 traces. Furthermore, we check the model's generalizability by testing it on the datasets provided by previous works and show that our model achieves a higher recall on both positive and negative polarities.
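A hedged sketch of the general idea, not the published PolarCAP architecture or weights: a convolutional encoder compresses a short window around the P arrival and a small head outputs up/down polarity logits.

```python
import torch
import torch.nn as nn

class PolarityNet(nn.Module):
    """Toy polarity classifier for a short vertical-component window."""
    def __init__(self, n_samples=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Flatten(),
        )
        self.classifier = nn.Sequential(
            nn.Linear(32 * (n_samples // 4), 32), nn.ReLU(),
            nn.Linear(32, 2),          # logits for up / down first motion
        )

    def forward(self, x):              # x: (batch, 1, n_samples)
        return self.classifier(self.encoder(x))

logits = PolarityNet()(torch.randn(4, 1, 64))
print(logits.argmax(dim=1))            # predicted polarity class per trace
```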
Hypoxia inhibits ferritinophagy, increases mitochondrial ferritin, and protects from ferroptosis
(2020)
Highlights
• Hypoxia decreases NCOA4 transcription in primary human macrophages.
• NCOA4 mRNA is a target of miR-6862-5p.
• Lowering NCOA4 increases FTMT abundance under hypoxia.
• FTMT and FTH protect from ferroptosis.
• Tumor cells lack the hypoxic decrease of NCOA4 and fail to stabilize FTMT.
Abstract
Cellular iron, at the physiological level, is essential to maintain several metabolic pathways, while an excess of free iron may cause oxidative damage and/or provoke cell death. Consequently, iron homeostasis has to be tightly controlled. Under hypoxia, these regulatory mechanisms in human macrophages are not well understood. Hypoxic primary human macrophages reduced intracellular free iron and increased ferritin expression, including mitochondrial ferritin (FTMT), to store iron. In parallel, nuclear receptor coactivator 4 (NCOA4), a master regulator of ferritinophagy, decreased and was proven to directly regulate FTMT expression. Reduced NCOA4 expression resulted from a lower rate of hypoxic NCOA4 transcription combined with microRNA-6862-5p-dependent degradation of NCOA4 mRNA, the latter being regulated by c-Jun N-terminal kinase (JNK). Pharmacological inhibition of JNK under hypoxia increased NCOA4 and prevented FTMT induction. FTMT and ferritin heavy chain (FTH) cooperated to protect macrophages from RSL-3-induced ferroptosis under hypoxia, as this form of cell death is linked to iron metabolism. In contrast, in HT1080 fibrosarcoma cells, which are sensitive to ferroptosis, NCOA4 and FTMT are not regulated. Our study helps to understand mechanisms of hypoxic FTMT regulation and to link ferritinophagy and macrophage sensitivity to ferroptosis.
G protein-coupled receptors (GPCRs) play a crucial role in modulating physiological responses and serve as major drug targets. Specifically, salmeterol and salbutamol, which are used for the treatment of pulmonary diseases, exert their effects by activating the GPCR β2-adrenergic receptor (β2AR). In our study, we employed coarse-grained molecular dynamics simulations with the Martini 3 force field to investigate the dynamics of drug molecules in membranes in the presence and absence of β2AR. Our simulations reveal that in more than 50% of the flip-flop events the drug molecules use the β2AR surface to permeate the membrane. The pathway along the GPCR surface is significantly more energetically favorable for the drug molecules, as revealed by umbrella sampling simulations along spontaneous flip-flop pathways. Furthermore, we assessed the behavior of drugs with intracellular targets, such as kinase inhibitors, whose therapeutic efficacy could benefit from this observation. In summary, our results show that β2AR surface interactions can significantly enhance membrane permeation of drugs, emphasizing their potential for consideration in future drug development strategies.
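Umbrella sampling along the spontaneous flip-flop pathways is typically turned into a free-energy profile by reweighting the biased windows, e.g. with WHAM. The self-contained toy below samples a made-up double-well potential and recovers its barrier; it illustrates only the analysis principle, not the Martini 3 simulations themselves.

```python
import numpy as np

rng = np.random.default_rng(0)
kT, k_umb = 1.0, 10.0
centers = np.linspace(-2, 2, 11)        # umbrella window centers along z

def U(z):                               # hypothetical double-well "membrane" potential
    return (z**2 - 1.0)**2

# Crude Metropolis sampling inside each harmonically biased window.
samples = []
for c in centers:
    z, traj = c, []
    for _ in range(20_000):
        z_new = z + rng.normal(0, 0.1)
        dE = U(z_new) - U(z) + 0.5 * k_umb * ((z_new - c)**2 - (z - c)**2)
        if dE < 0 or rng.random() < np.exp(-dE / kT):
            z = z_new
        traj.append(z)
    samples.append(np.array(traj))

edges = np.linspace(-2.5, 2.5, 101)
mids = 0.5 * (edges[1:] + edges[:-1])
hist = np.array([np.histogram(s, bins=edges)[0] for s in samples])
bias = 0.5 * k_umb * (mids[None, :] - centers[:, None])**2
N, f = hist.sum(axis=1), np.zeros(len(centers))

for _ in range(500):                    # WHAM self-consistency iterations
    denom = (N[:, None] * np.exp((f[:, None] - bias) / kT)).sum(axis=0)
    P = hist.sum(axis=0) / np.maximum(denom, 1e-12)
    f = -kT * np.log(np.maximum((P[None, :] * np.exp(-bias / kT)).sum(axis=1), 1e-300))
    f -= f[0]

F = -kT * np.log(np.maximum(P, 1e-300))
print(f"recovered barrier ~ {F[np.abs(mids).argmin()] - F.min():.2f} kT (true: 1.00)")
```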
Despite the recent popularity of predictive processing models of brain function, the term prediction is often instantiated very differently across studies. These differences in definition can substantially change the type of cognitive or neural operation hypothesised and thus have critical implications for the corresponding behavioural and neural correlates during visual perception. Here, we propose a five-dimensional scheme to characterise different parameters of prediction: flow of information, mnemonic origin, specificity, complexity, and temporal precision. We describe these dimensions and provide examples of their application to previous work. Such a characterisation not only facilitates the integration of findings across studies, but also helps stimulate new research questions.
Hadron lists based on experimental studies summarized by the Particle Data Group (PDG) are a crucial input for the equation of state and thermal models used in the study of strongly interacting matter produced in heavy-ion collisions. Modeling of these strongly interacting systems is carried out via hydrodynamical simulations, which are followed by hadronic transport codes that also require a hadronic list as input. To remain consistent throughout the different stages of modeling of a heavy-ion collision, the same hadron list with its corresponding decays must be used at each step. It has been shown that even the most uncertain states listed in the PDG from 2016 are required to reproduce partial pressures and susceptibilities from Lattice Quantum Chromodynamics with the hadronic list known as the PDG2016+. Here, we update the hadronic list for use in heavy-ion collision modeling by including the latest experimental information for all states listed in the Particle Data Booklet in 2021. We then compare our new list, called PDG2021+, to Lattice Quantum Chromodynamics results and find that it achieves even better agreement with the first-principles calculations than the PDG2016+ list. Furthermore, we develop a novel scheme based on intermediate decay channels that allows for only binary decays, such that PDG2021+ will be compatible with the hadronic transport framework SMASH. Finally, we use these results to make comparisons to experimental data and discuss the impact on particle yields and spectra.
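How such a list feeds the equation of state can be sketched with the ideal hadron resonance gas in the Boltzmann approximation, where each state contributes p_i/T^4 = g_i/(2π²)(m_i/T)² K₂(m_i/T); the three-entry list below is a toy stand-in for PDG2021+.

```python
import numpy as np
from scipy.special import kn

# Toy hadron list: (name, degeneracy g, mass in MeV). A real list such as
# PDG2021+ would contain hundreds of states plus their decay channels.
hadrons = [
    ("pi", 3, 138.0),     # three pion charge states, spin 0
    ("K", 4, 496.0),      # K+, K-, K0, K0bar, spin 0
    ("N", 8, 938.9),      # nucleons and antinucleons, spin 1/2
]

def pressure_over_T4(T):
    """Ideal HRG pressure p/T^4 in the Boltzmann approximation (natural units)."""
    total = 0.0
    for _, g, m in hadrons:
        x = m / T
        total += g / (2 * np.pi**2) * x**2 * kn(2, x)
    return total

for T in (120.0, 150.0, 170.0):
    print(f"T = {T:.0f} MeV: p/T^4 = {pressure_over_T4(T):.3f}")
```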
The tremendous diversity of life in the ocean has proven to be a rich source of inspiration for drug discovery, with success rates for marine natural products up to 4 times higher than for other naturally derived compounds. Yet the marine biodiscovery pipeline is characterized by chronic underfunding, bottlenecks and, ultimately, untapped potential. For instance, a lack of taxonomic capacity means that, on average, 20 years pass between the discovery of new organisms and the formal publication of scientific names, a prerequisite to proceed with detecting and isolating promising bioactive metabolites. The need for "edge" research that can spur novel lines of discovery, and the lengthy, high-risk drug discovery process, are poorly matched with research grant cycles. Here we propose five concrete pathways to broaden the biodiscovery pipeline and open up the social and economic potential of the ocean genome for global benefit: (1) investing in fundamental research, even when the links to industry are not immediately apparent; (2) cultivating equitable collaborations between academia and industry that share both risks and benefits during these foundational research stages; (3) providing new opportunities for early-career researchers and under-represented groups to engage in high-risk research without risking their careers; (4) sharing data with global networks; and (5) protecting genetic diversity at its source through strong conservation efforts. The treasures of the ocean have provided fundamental breakthroughs in human health and remain under-utilised for human benefit, yet that potential may be lost if we allow the biodiscovery pipeline to become blocked in a search for quick-fix solutions.
Recent lattice QCD results, compared to a hadron resonance gas model, have shown the need for hundreds of particles in hadronic models. These extra particles influence both the equation of state and hadronic interactions within hadron transport models. Here, we introduce the PDG21+ particle list, which contains the most up-to-date database of particles and their properties. We then convert all particle decays into two-body decays so that they are compatible with SMASH, in order to produce a more consistent description of a heavy-ion collision.
Muller's ratchet, in its prototype version, models a haploid, asexual population whose size N is constant over the generations. Slightly deleterious mutations are acquired along the lineages at a constant rate, and individuals carrying fewer mutations have a selective advantage. The classical variant considers fitness-proportional selection, but other fitness schemes are conceivable as well. Inspired by the work of Etheridge et al. [EPW09], we propose a parameter scaling which fits well to the "near-critical" regime that was in the focus of [EPW09] (and in which the mutation-selection ratio diverges logarithmically as N→∞). Using a Moran model, we investigate the "rule of thumb" given in [EPW09] for the click rate of the "classical ratchet" by putting it into the context of new results on the long-time evolution of the size of the best class of the ratchet with (binary) tournament selection, which (other than that of the classical ratchet) follows an autonomous dynamics up to the time of its extinction. In [GSW23] it was discovered that the tournament ratchet has a hierarchy of dual processes which can be constructed on top of an ancestral selection graph with a Poisson decoration. For a regime in which the mutation-selection ratio remains bounded away from 1, this was used in [GSW23] to reveal the asymptotics of the click rates as well as that of the type frequency profile between clicks. We describe how these ideas can be extended to the near-critical regime, in which the mutation-selection ratio of the tournament ratchet converges to 1 as N→∞.
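The click dynamics can be illustrated with a toy Moran ratchet under binary tournament selection: two potential parents are drawn, the one carrying fewer mutations wins (ties broken at random), and the offspring mutates with probability u. All parameter values are illustrative; in this toy the best class becomes critical as u approaches 1/2, loosely mimicking a mutation-selection ratio close to 1.

```python
import numpy as np

rng = np.random.default_rng(7)
N, u, steps = 200, 0.49, 300_000    # u close to 1/2 puts this toy near criticality
k = np.zeros(N, dtype=int)          # deleterious mutation count per individual

clicks, best = 0, 0
for _ in range(steps):
    i, j = rng.integers(N, size=2)  # binary tournament between two candidates
    parent = i if k[i] < k[j] or (k[i] == k[j] and rng.random() < 0.5) else j
    offspring = k[parent] + (rng.random() < u)  # mutate with probability u
    k[rng.integers(N)] = offspring              # Moran step: replace a random individual
    if k.min() > best:                          # best class lost: the ratchet clicks
        clicks += k.min() - best
        best = k.min()

print(f"{clicks} clicks in {steps} Moran steps; best class now carries {best} mutations")
```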
Motivated by the question of the impact of selective advantage in populations with skewed reproduction mechanisms, we study a Moran model with selection. We assume that there are two types of individuals, where the reproductive success of one type is larger than that of the other. The higher reproductive success may stem from either more frequent reproduction or from larger numbers of offspring, and is encoded in a measure Λ for each of the two types. Our approach consists of constructing a Λ-asymmetric Moran model in which individuals of the two populations compete, rather than considering a Moran model for each population. Under certain conditions, which we call the "partial order of adaptation", we can couple these measures. This allows us to construct the central object of this paper, the Λ-asymmetric ancestral selection graph, leading to a pathwise duality of the forward-in-time Λ-asymmetric Moran model with its ancestral process. Interestingly, the construction also provides a connection to the theory of optimal transport. We apply the ancestral selection graph in order to obtain scaling limits of the forward and backward processes, and note that the frequency process converges to the solution of an SDE with discontinuous paths. Finally, we derive a Griffiths representation for the generator of the SDE and use it to find a semi-explicit formula for the probability of fixation of the less beneficial of the two types.
The transporter associated with antigen processing (TAP) is an essential machine of the adaptive immune system that translocates antigenic peptides from the cytosol into the endoplasmic reticulum lumen for loading of major histocompatibility class I molecules. To examine this ABC transport complex in mechanistic detail, we have established, after extensive screening and optimization, the solubilization, purification, and reconstitution of TAP such that its function is preserved at each step. This allowed us to determine the substrate-binding stoichiometry of the TAP complex by fluorescence cross-correlation spectroscopy. In addition, the TAP complex shows strict coupling between peptide binding and ATP hydrolysis, revealing no basal ATPase activity in the absence of peptides. These results represent an optimal starting point for detailed mechanistic studies of the transport cycle of TAP by single-molecule experiments, analyzing single steps of peptide translocation and the stoichiometry between peptide transport and ATP hydrolysis.
Mollusca is the second-largest animal phylum, with over 100,000 species among eight distinct taxonomic classes. Across the 1,000 living species in the class Polyplacophora, chitons have a relatively constrained morphology but with some notable deviations. Several genera possess "shell eyes", true eyes with a lens and retina that are embedded within the dorsal shells and represent the most recent evolution of animal eyes. The phylogeny of major chiton clades is mostly well established, as a set of superfamily and higher-level taxa supported by various approaches, including multiple gene markers, mitogenome phylogeny and phylotranscriptomic approaches as well as morphological studies. However, one critical lineage has remained unclear: Schizochiton was controversially suggested as a potential independent origin of chiton shell eyes. Here, with the draft genome sequencing of Schizochiton incisus (superfamily Schizochitonoidea) plus the assembly of transcriptome data from other polyplacophorans, we present phylogenetic reconstructions using both mitochondrial genomes and phylogenomic approaches with multiple methods. Phylogenetic trees from mitogenomic data are inconsistent, reflecting larger-scale confounding factors in molluscan mitogenomes. A consistent, robust topology was generated with protein-coding genes using different models and methods. Our results support Schizochitonoidea as the sister group to Chitonoidea within Chitonina, in agreement with the established classification. This suggests that the earliest origin of shell eyes lies in Schizochitonoidea, with shell eyes gained secondarily in other genera of Chitonoidea. Our results provide a holistic review of the internal relationships within Polyplacophora and a better understanding of the evolution of the class.
Common systemic risk measures focus on the instantaneous occurrence of triggering and systemic events. However, systemic events may also occur with a time-lag to the triggering event. To study this contagion period and the resulting persistence of institutions' systemic risk, we develop and employ the Conditional Shortfall Probability (CoSP), which is the likelihood that a systemic market event occurs with a specific time-lag to the triggering event. Based on CoSP we propose two aggregate systemic risk measures, namely the Aggregate Excess CoSP and the CoSP-weighted time-lag, which reflect the systemic risk aggregated over time and the average time-lag of an institution's triggering event, respectively. Our empirical results show that 15% of the financial companies in our sample are significantly systemically important with respect to the financial sector, while 27% are significantly systemically important with respect to the American non-financial sector. Still, the aggregate systemic risk of systemically important institutions is larger with respect to the financial market than with respect to non-financial markets. Moreover, the aggregate systemic risk of insurance companies is similar to that of banks, while insurers are also exposed to the largest aggregate systemic risk within the financial sector.
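An illustrative CoSP-style estimator on simulated returns (the paper's exact definition and estimation differ in detail): condition on the institution's 5% tail event and measure how often the market's 5% tail event follows at lag tau. The two-day lag built into the simulated data shows up as a peak.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 5000
common = rng.normal(size=T)
inst = 0.6 * common + 0.8 * rng.normal(size=T)                 # institution returns
market = 0.5 * np.roll(common, 2) + 0.9 * rng.normal(size=T)   # market reacts 2 days later

trigger = inst < np.quantile(inst, 0.05)        # triggering (institution tail) event
mkt_event = market < np.quantile(market, 0.05)  # systemic (market tail) event

for tau in range(5):
    systemic = np.roll(mkt_event, -tau)          # market tail event tau days after t
    print(f"tau={tau}: CoSP ~ {systemic[trigger].mean():.1%} (unconditional: 5.0%)")
```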
A tontine provides a mortality-driven, age-increasing payout structure through the pooling of mortality. Because a tontine does not entail any guarantees, its payout structure is determined by the pooled individual characteristics of the tontinists. Therefore, the surrender decision of a single tontinist directly affects the remaining members' payouts. Nevertheless, the opportunity to surrender is crucial to the success of a tontine from a regulatory as well as a policyholder perspective. This paper therefore derives the fair surrender value of a tontine, first on the basis of expected values, and then incorporates the increasing payout volatility to determine an equitable surrender value. Results show that the surrender decision requires a discount on the fair surrender value as security for the remaining members. The discount intensifies with decreasing tontine size and increasing risk aversion. However, tontinists are less willing to surrender for decreasing tontine size and increasing risk aversion, creating a natural protection against tontine runs stemming from short-term liquidity shocks. Furthermore, we argue that a surrender decision based on private information requires a discount on the fair surrender value as well.
Under Solvency II, corporate governance requirements are a complementary, but nonetheless essential, element to build a sound regulatory framework for insurance undertakings, also to address risks not specifically mitigated by the solvency capital requirements alone. After recalling the provisions of the second pillar concerning the system of governance, the paper is devoted to highlighting the emerging regulatory trends in the corporate governance of insurance firms. Among other things, it signals the exceptional extension of the duties and responsibilities assigned to the board of directors, far beyond the traditional role of both monitoring the chief executive officer and assessing the overall direction and strategy of the business. However, better risk governance is not necessarily built on narrow rule-based approaches to corporate governance.
We study the impact of insurers' estimation errors on social welfare. For this purpose, we present a model of the insurance market in which insurers face parameter uncertainty about expected loss sizes. As consumers react to underestimation and overestimation by increasing and decreasing demand, respectively, insurers require a safety loading for parameter uncertainty. If the safety loading is too small, less risk-averse consumers benefit from less informed insurers by speculating on them underestimating expected losses. Otherwise, social welfare increases with insurers' information. We empirically estimate safety loadings in the US property and casualty insurance market, and show that these are likely to be sufficiently large for consumers to benefit from more informed insurers.
Tail-correlation matrices are an important tool for aggregating risk measurements across risk categories, asset classes and/or business segments. This paper demonstrates that traditional tail-correlation matrices—which are conventionally assumed to have ones on the diagonal—can lead to substantial biases of the aggregate risk measurement’s sensitivities with respect to risk exposures. Due to these biases, decision-makers receive an odd view of the effects of portfolio changes and may be unable to identify the optimal portfolio from a risk-return perspective. To overcome these issues, we introduce the “sensitivity-implied tail-correlation matrix”. The proposed tail-correlation matrix allows for a simple deterministic risk aggregation approach which reasonably approximates the true aggregate risk measurement according to the complete multivariate risk distribution. Numerical examples demonstrate that our approach is a better basis for portfolio optimization than the Value-at-Risk implied tail-correlation matrix, especially if the calibration portfolio (or current portfolio) deviates from the optimal portfolio.
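The square-root aggregation the paper studies can be sketched in a few lines: a VaR-implied tail correlation with ones on the diagonal reproduces the aggregate VaR at the calibration portfolio, but becomes biased at other portfolios. The bivariate Student-t setup below is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000
z1, z2 = rng.standard_t(4, n), rng.standard_t(4, n)
L = np.column_stack([z1, 0.4 * z1 + 0.6 * z2])   # two dependent loss drivers

var = lambda x: np.quantile(x, 0.995)            # 99.5% Value-at-Risk
stand_alone = np.array([var(L[:, 0]), var(L[:, 1])])

def implied_r(w):
    """Pairwise tail correlation implied by the aggregate VaR at portfolio w."""
    s = w * stand_alone
    return (var(L @ w)**2 - s[0]**2 - s[1]**2) / (2 * s[0] * s[1])

r = implied_r(np.array([1.0, 1.0]))              # calibrate at the 50/50 portfolio
R = np.array([[1.0, r], [r, 1.0]])

w_new = np.array([1.0, 3.0])                     # evaluate at a different portfolio
s_new = w_new * stand_alone
print(f"sqrt(s'Rs): {np.sqrt(s_new @ R @ s_new):.3f}  true VaR: {var(L @ w_new):.3f}")
```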
Historical evidence such as the global financial crisis of 2007-09 highlights that sector concentration risk can play an important role for the solvency of insurers. However, current microprudential frameworks like the US RBC framework and Solvency II consider only name concentration risk explicitly in their solvency capital requirements for asset concentration risk and neglect sector concentration risk. Using US insurers' asset holdings from 2009 to 2018, we show that substantial sectoral asset concentrations exist in the financial, public and real estate sectors, and find indicative evidence of a sectoral search-for-yield behavior. Based on a theoretical solvency capital allocation scheme, we demonstrate that the current regulatory approaches can lead to inappropriate and biased levels of solvency capital for asset concentration risk and should be revised. Our findings also have important implications for the ongoing discussion of asset concentration risk in the context of macroprudential insurance regulation.
Testing frequency and severity risk under various information regimes and implications in insurance
(2023)
We build on Peter et al. (2017), who examined the benefit of testing frequency risk under various information regimes. We first consider testing only severity risk, and ask whether the principle of indemnity, i.e., the usual contract term that excludes claims payments above the resulting insured loss, affects the insurance contracts offered and purchased. Under information regimes that are less restrictive (in terms of obtaining and using customer information), it is possible for the insurer to offer different contracts for tested and untested individuals. In the absence of the principle of indemnity, individuals will test their severity risk and a separating equilibrium ensues. With the principle of indemnity, given an actuarially fair pooled contract, individuals will not test for severity under less restrictive information regimes; a pooling equilibrium thus ensues. Under more restrictive information regimes, the insurer offers separating contracts; individuals will test for severity and purchase the appropriate contracts. We also consider testing for both frequency and severity risk. The results here are more varied. The highest gain in efficiency from testing results from one of the more restrictive information regimes. Generally, under all information regimes, there is a greater gain in efficiency without the principle of indemnity than with it.