University Publications
Colocation services offered by stock exchanges enable market participants to achieve execution costs for large orders that are substantially lower and less sensitive to transacting against high-frequency traders. However, these benefits manifest only for orders that colocated brokers execute on their own behalf, whereas the execution costs of their customers' orders are substantially higher. Analyses of individual order executions indicate that customer orders originating from colocated brokers are monitored less actively and achieve inferior execution quality. This suggests that brokers do not make effective use of their technology, possibly due to agency frictions or poor algorithm selection and parameter choices by customers.
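A minimal sketch of the underlying cost comparison: for each order, a signed implementation shortfall against the arrival price is computed and then averaged by whether the executing broker is colocated and whether the order is a customer order. The column names and the shortfall definition are illustrative assumptions, not the study's exact methodology.

```python
import pandas as pd

def execution_cost_bps(orders: pd.DataFrame) -> pd.Series:
    """Mean signed implementation shortfall in basis points, by order origin.

    Assumed columns (illustrative names): side (+1 buy, -1 sell),
    arrival_price, avg_exec_price, colocated (bool), customer (bool).
    """
    shortfall = orders["side"] * (orders["avg_exec_price"] - orders["arrival_price"]) / orders["arrival_price"]
    return (shortfall * 1e4).groupby([orders["colocated"], orders["customer"]]).mean()
```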
Borrelia miyamotoi, a relapsing fever spirochete transmitted by ixodid ticks, causes B. miyamotoi disease (BMD). To evade the human host's immune response, relapsing fever borreliae, including B. miyamotoi, produce distinct variable major proteins. Here, we investigated Vsp1, Vlp15/16, and Vlp18, all of which are currently being evaluated as antigens for the serodiagnosis of BMD. Comparative analyses identified Vlp15/16, but not Vsp1 or Vlp18, as a plasminogen-interacting protein of B. miyamotoi. Furthermore, Vlp15/16 bound plasminogen in a dose-dependent fashion with high affinity. Binding of plasminogen to Vlp15/16 was significantly inhibited by the lysine analog tranexamic acid, suggesting that the protein–protein interaction is mediated by lysine residues. By contrast, ionic strength did not have an effect on the binding of plasminogen to Vlp15/16. Of relevance, plasminogen bound to the borrelial protein cleaved the chromogenic substrate S-2251 upon conversion by urokinase-type plasminogen activator (uPA), demonstrating that it retained its physiological activity. Interestingly, further analyses revealed a complement-inhibitory activity of Vlp15/16 and Vlp18 on the alternative pathway via a Factor H-independent mechanism. More importantly, both borrelial proteins protect serum-sensitive Borrelia garinii cells from complement-mediated lysis, suggesting multiple roles of these two variable major proteins in the immune evasion of B. miyamotoi.
The leading premium
(2022)
In this paper, we consider conditional measures of lead-lag relationships between aggregate growth and industry-level cash-flow growth in the US. Our results show that firms in leading industries pay an average annualized return 3.6% higher than that of firms in lagging industries. Using both time-series and cross-sectional tests, we estimate an annual pure timing premium ranging from 1.2% to 1.7%. This finding can be rationalized in a model in which (a) agents price growth news shocks, and (b) leading industries provide valuable resolution of uncertainty about the growth prospects of lagging industries.
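A minimal sketch of how such a lead-lag classification and the leading-minus-lagging spread could be operationalized is shown below; the correlation horizon, the portfolio breakpoints, and the input series are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
import pandas as pd

def lead_lag_score(industry_growth: pd.Series, aggregate_growth: pd.Series, max_lead: int = 4) -> float:
    """Average correlation of industry cash-flow growth with future aggregate growth.

    A higher score marks an industry as "leading"; the 1- to 4-period horizon
    is an assumption for illustration.
    """
    corrs = [industry_growth.corr(aggregate_growth.shift(-h)) for h in range(1, max_lead + 1)]
    return float(np.nanmean(corrs))

def leading_minus_lagging(returns: pd.DataFrame, scores: pd.Series, q: float = 0.3) -> pd.Series:
    """Equal-weighted return spread between top- and bottom-score industries."""
    leaders = scores[scores >= scores.quantile(1 - q)].index
    laggards = scores[scores <= scores.quantile(q)].index
    return returns[leaders].mean(axis=1) - returns[laggards].mean(axis=1)
```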
Advances in Machine Learning (ML) have led organizations to increasingly implement predictive decision aids intended to improve employees' decision-making performance. While such systems improve organizational efficiency in many contexts, they might be a double-edged sword when there is a danger of system discontinuance. Following cognitive theories, the provision of ML-based predictions can adversely affect the development of decision-making skills, a deficit that comes to light when people lose access to the system. The purpose of this study is to put this assertion to the test. Using a novel experiment specifically tailored to deal with organizational obstacles and endogeneity concerns, we show that the initial provision of ML decision aids can latently prevent the development of decision-making skills, which becomes apparent once the system is discontinued. We also find that the degree to which individuals 'blindly' trust the observed predictions determines the ultimate performance drop in the post-discontinuance phase. Our results suggest that making it clear to people that ML decision aids are imperfect can have benefits, especially if there is a reasonable danger of (temporary) system discontinuance.
Search costs for lenders when evaluating potential borrowers are driven by the quality of the underwriting model and by access to data. Both have undergone radical change over recent years, due to the advent of big data and machine learning. For some, this holds the promise of inclusion and better access to finance. Invisible prime applicants perform better under AI than under traditional metrics. Broader data and more refined models help to detect them without triggering prohibitive costs. However, not all applicants profit to the same extent. Historic training data shape algorithms, biases distort results, and data as well as model quality are not always assured. Against this background, an intense debate over algorithmic discrimination has developed. This paper takes a first step towards developing principles of fair lending in the age of AI. It submits that there are fundamental difficulties in fitting algorithmic discrimination into the traditional regime of anti-discrimination laws. Received doctrine, with its focus on causation, is in many cases ill-equipped to deal with algorithmic decision-making under both the disparate treatment and the disparate impact doctrines. The paper concludes with a suggestion to reorient the discussion and with an attempt to outline the contours of fair lending law in the age of AI.
Many nations incentivize retirement saving by letting workers defer taxes on pension contributions, imposing them when retirees withdraw their funds. Using a dynamic life cycle model, we show how ‘Rothification’ – that is, taxing 401(k) contributions rather than payouts – alters saving, investment, consumption, and Social Security claiming patterns. We find that taxing pension contributions instead of withdrawals leads to delayed retirement, somewhat lower lifetime tax payments, and relatively small reductions in consumption. Indeed, the two tax regimes generate quite similar relative inequality metrics: the relative consumption inequality ratio under TEE is only four percent higher than in the EET case. Moreover, results indicate that the Gini measures are also strikingly similar under the EET and the TEE regimes for lifetime consumption, cash on hand, and 401(k) assets, differing by only 1-4 percent. While tax payments are higher early in life under the TEE regime, they are slightly lower in the long run. Moreover, higher EET tax payments are also accompanied by higher volatility. We therefore find few reasons for policymakers to favor either tax approach on egalitarian or revenue-enhancing grounds.
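For intuition on why the two regimes deliver similar long-run outcomes, a stylized single-contribution comparison (not taken from the paper) is helpful: a pre-tax contribution C is invested for T years at return r and taxed at rate τw on withdrawal under EET (contributions and returns exempt, withdrawals taxed) or at rate τc up front under TEE (contributions taxed, returns and withdrawals exempt).

```latex
W_{\text{EET}} = C\,(1+r)^{T}\,(1-\tau_{w}), \qquad
W_{\text{TEE}} = C\,(1-\tau_{c})\,(1+r)^{T}.
```

With a constant tax rate (τc = τw) terminal wealth coincides, so the differences reported above stem mainly from the timing of tax payments, tax rates that vary over the life cycle, and behavioral responses such as delayed retirement.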
We analyze how market fragmentation affects the market quality of SME and other less actively traded stocks. Compared to large stocks, they are less likely to be traded on multiple venues and, if they are, show only low levels of fragmentation. Concerning the impact of fragmentation on market quality, we find evidence of a hockey-stick effect: fragmentation has no effect for infrequently traded stocks, a negative effect on the liquidity of slightly more active stocks, and increasing benefits for the liquidity of large, actively traded stocks. Consequently, being traded on multiple venues is not necessarily harmful to SME stock market quality.
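One way to examine such a hockey-stick pattern, sketched below under assumed variable names, is to let the fragmentation slope differ across trading-activity terciles in a simple regression; this illustrates the idea rather than the paper's specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

def hockey_stick_fit(panel: pd.DataFrame):
    """Regress a liquidity proxy on fragmentation with activity-dependent slopes.

    Assumed stock-level columns: spread (liquidity proxy), fragmentation
    (e.g. one minus the Herfindahl index of venue market shares), turnover.
    """
    panel = panel.copy()
    panel["activity"] = pd.qcut(panel["turnover"], 3, labels=["low", "mid", "high"])
    # Interacting fragmentation with activity terciles allows the slope to be
    # flat, negative, or positive across the activity spectrum.
    return smf.ols("spread ~ fragmentation * C(activity)", data=panel).fit()
```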
Implementing an automated monitoring process in a digital, longitudinal observational cohort study
(2021)
Background: Clinical data collection requires correct and complete data sets in order to perform valid statistical analyses and draw valid conclusions. While much effort in randomized clinical trials is devoted to data monitoring, this is rarely the case in observational studies, due to the high number of cases and often restricted resources. We have developed a valid and cost-effective monitoring tool that can substantially contribute to increased data quality in observational research.
Methods: An automated digital monitoring system for cohort studies developed by the German Rheumatism Research Centre (DRFZ) was tested within the disease register RABBIT-SpA, a longitudinal observational study including patients with axial spondyloarthritis and psoriatic arthritis. Physicians and patients complete electronic case report forms (eCRFs) twice a year for up to 10 years. Automatic plausibility checks were implemented to verify all data after entry into the eCRF. To identify conflicts that cannot be found by this approach, all possible conflicts were compiled into a catalog. This “conflict catalog” was used to create queries, which are displayed as part of the eCRF. The proportions of queried eCRFs and of responses were analyzed using descriptive methods. For the analysis of responses, each conflict was classified as either a single conflict (affecting individual items) or a conflict that required the entire eCRF to be queried.
Results: Data from 1883 patients were analyzed. A total of n = 3145 eCRFs submitted between baseline (T0) and T3 (12 months) had conflicts (40–64%). Between 56% and 100% of the queries regarding completely missing eCRFs were answered. A mean of 1.4 to 2.4 single conflicts occurred per eCRF, of which 59–69% were answered. The most common missing values were CRP, ESR, Schober's test, data on systemic glucocorticoid therapy, and presence of enthesitis.
Conclusion: Providing high data quality in large observational cohort studies is a major challenge, which requires careful monitoring. An automated monitoring process was successfully implemented and well accepted by the study centers. Two-thirds of the queries were answered with new data. While conventional manual monitoring is resource-intensive and may itself create new sources of error, automated processes are a convenient way to augment data quality.
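As a rough illustration of the automated process described in the Methods, a plausibility check can be expressed as a catalog of rules against which each eCRF is screened, with one query generated per detected conflict. The items, rules, and thresholds below are invented examples, not the actual DRFZ conflict catalog.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    item: str
    description: str
    is_conflict: Callable[[dict], bool]

# Invented example rules; a real catalog would cover all items of the eCRF.
CONFLICT_CATALOG = [
    Check("crp", "CRP value missing or negative",
          lambda ecrf: ecrf.get("crp") is None or ecrf["crp"] < 0),
    Check("schober", "Schober's test outside plausible range (0-10 cm)",
          lambda ecrf: ecrf.get("schober") is not None and not 0 <= ecrf["schober"] <= 10),
]

def generate_queries(ecrf: dict) -> list[str]:
    """Return one query per detected conflict, to be displayed inside the eCRF."""
    return [f"Query [{c.item}]: {c.description}" for c in CONFLICT_CATALOG if c.is_conflict(ecrf)]

# Example: a record with a missing CRP value triggers exactly one query.
print(generate_queries({"crp": None, "schober": 4.5}))
```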
SAFE Update December 2022
(2022)