University publications
Causality is a widely used concept in theoretical and empirical economics. The recent financial economics literature has used Granger causality to detect the presence of contemporaneous links between financial institutions and, in turn, to obtain a network structure. Subsequent studies combined the estimated networks with traditional pricing or risk measurement models to improve their fit to empirical data. In this paper, we make two contributions: we show how to use a linear factor model as a device for estimating a combination of several networks that monitor the links across variables from different viewpoints; and we demonstrate that Granger causality should be combined with quantile-based causality when the focus is on risk propagation. The empirical evidence supports the latter claim.
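The pairwise Granger step behind such network estimates can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's estimator; the function names are our own, and the quantile-based variant would replace the OLS regressions with quantile regressions.

```python
import numpy as np
from scipy.stats import f as f_dist

def granger_pvalue(y, x, p=2):
    """p-value for H0: lags of x do not help predict y (Granger non-causality).

    Compares a restricted OLS model (y on its own p lags) against an
    unrestricted one (y on p lags of itself and of x) with an F-test.
    """
    T = len(y)
    rows = T - p
    Y = y[p:]
    own = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
    cross = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
    Xr = np.column_stack([np.ones(rows), own])          # restricted model
    Xu = np.column_stack([np.ones(rows), own, cross])   # unrestricted model

    def rss(X):
        beta = np.linalg.lstsq(X, Y, rcond=None)[0]
        return np.sum((Y - X @ beta) ** 2)

    dfd = rows - Xu.shape[1]
    F = ((rss(Xr) - rss(Xu)) / p) / (rss(Xu) / dfd)
    return f_dist.sf(F, p, dfd)

# Synthetic example: series 0 drives series 1 with one lag.
rng = np.random.default_rng(0)
T = 500
x1 = rng.standard_normal(T)
x2 = np.zeros(T)
for t in range(1, T):
    x2[t] = 0.8 * x1[t - 1] + 0.3 * rng.standard_normal()
series = np.column_stack([x1, x2])

# adj[i, j] = 1 means "series i Granger-causes series j" at the 5% level.
n = series.shape[1]
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(n):
        if i != j and granger_pvalue(series[:, j], series[:, i]) < 0.05:
            adj[i, j] = 1
```

Running every ordered pair of institutions through such a test yields the directed adjacency matrix that the network-based studies then feed into pricing or risk models.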
Software evolves. Developers and programmers manifest the needs that arise due to evolving software by making changes to the source code. While developers make such changes, reusing old code and rewriting existing code are inevitable. A developer faces many challenges when manually reusing old code or rewriting existing code. Software tools and program transformation systems aid such reuse or rewriting of program source code. But there are frequently occurring development tasks that are hard to accomplish manually and that the current state-of-the-art tools are still not able to adequately automate. In this thesis, we discuss some of these unexplored challenges that a developer faces while reusing and rewriting program source code, the significance of such challenges, the existing automation support for these challenges and how we can improve upon it.
Modern software development relies on code reuse, which software developers typically realize through hand-written abstractions, such as functions, methods, or classes. However, such abstractions can be challenging to develop and maintain. An alternative form of reuse is copy-paste-modify, in which developers explicitly duplicate source code to adapt the duplicate for a new purpose. Copy-pasted code results in code clones, i.e., groups of code fragments that are similar to each other. Past research strongly suggests that copy-paste-modify is a popular technique among software developers. In this paper, we perform a small user study that shows that copy-paste-modify can be substantially faster to use than manual abstraction.
One might propose that software developers should forego hand-written abstractions in favour of copying and pasting. However, empirical evidence also shows that copy-paste-modify complicates software maintenance and increases the frequency of bugs. Furthermore, the developers in an informal poll we conducted strongly preferred to read code written using abstractions. To address the concern around copy-paste-modify, we propose a tool that merges similar pieces of code and automatically creates suitable abstractions. Our tool allows developers to get the best of both worlds: easy reuse together with custom abstractions. Because different kinds of abstractions may be beneficial in different contexts, our tool provides multiple abstraction mechanisms, which we selected based on a study of popular open-source repositories.
To demonstrate the feasibility of our approach, we have designed and implemented a prototype merging tool for C++ and evaluated it on a number of clones exhibiting some variation, i.e., near-clones, in popular open-source packages. We observed that maintainers find our algorithmically created abstractions to be largely preferable to existing duplicated code.

Rewriting existing code can be considered a form of program transformation, in which a program in one form is transformed into a program in another form. One significant form of program transformation is data representation migration, which involves changing the type of a particular data structure and then updating all of the operations that have a control or data dependence on that data structure according to the new type. Changing the data representation can provide benefits such as improving efficiency and improving the quality of the computed results. Performing such a transformation is challenging, because it requires applying data-type-specific changes to code fragments that may be widely scattered throughout the source code and connected by dataflow dependencies. Refactoring systems are typically sensitive to dataflow dependencies, but are not programmable with respect to the features of particular data types. Existing program transformation languages provide the needed flexibility, but do not concisely support reasoning about dataflow dependencies.
To address the needs of data representation migration, we propose a new approach to program transformation that relies on a notion of semantic dependency: every transformation step propagates the transformation process onward to code that somehow depends on the transformed code. Our approach provides a declarative transformation specification language for expressing type-specific transformation rules. We further provide scoped rules, a mechanism for guiding rule application, and tags, a device for simple program analysis within our framework, to enable more powerful program transformations.
We have implemented a prototype transformation system based on these ideas for C and C++ code and evaluated it against three example specifications, including vectorization, transformation of integers to big integers, and transformation of array-of-structs data types to struct-of-arrays format. Our evaluation shows that our approach can improve program performance and the precision of the computed results, and that it scales to programs of at least 3700 lines.
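The thesis targets C and C++, but the data-layout change behind an array-of-structs to struct-of-arrays migration can be illustrated in a few lines of Python (an illustration of the layout change only, not of the transformation system itself). Note that every operation touching the data must be rewritten for the new layout, which is why the migration has to follow dataflow dependencies.

```python
import numpy as np

# Array-of-structs: one record per particle (poor locality, not vectorisable).
aos = [{"x": float(i), "y": 2.0 * i, "mass": 1.0 + i} for i in range(1000)]
momentum_aos = [p["mass"] * p["x"] for p in aos]   # per-record loop

# Struct-of-arrays: one contiguous array per field (vectorisable).
soa = {
    "x": np.array([p["x"] for p in aos]),
    "y": np.array([p["y"] for p in aos]),
    "mass": np.array([p["mass"] for p in aos]),
}
momentum_soa = soa["mass"] * soa["x"]              # single vectorised expression

# Both layouts compute the same result; only the access pattern differs.
assert np.allclose(momentum_aos, momentum_soa)
```

Every site in the program that reads `p["x"]` on a record must become an indexed access into the field array, which is exactly the kind of scattered, dependency-connected rewrite the migration approach is designed to automate.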
This dissertation provides a comprehensive account of the grammar of relative clause extraposition in English. Based on a systematic review and evaluation of the empirical generalizations and theoretical approaches provided in the literature on generative grammar, it is shown that none of the previous theories is able to account for all the relevant facts. Among the most problematic data are the Principle C and scope effects of relative clause extraposition, cases with obligatory relative clauses, and relative clauses with elliptical NPs as antecedents.
I propose a new analysis of relative clause extraposition within the constraint-based, monostratal grammatical framework of Head-driven Phrase Structure Grammar (HPSG), enhanced with the semantic theory of Lexical Resource Semantics (LRS). Crucially, it is a general analysis of relative clause attachment, since both canonical and extraposed relative clauses are licensed by the same syntactic and semantic constraints. The basic assumption is that a relative clause can be adjoined to any phrase that contains a suitable antecedent of the relative pronoun. The semantic information that licenses the relative clause is introduced by the determiner of the antecedent NP. The techniques of underspecified semantics and the standard semantic representation language used by LRS make it possible to formulate constraints which yield the correct intersective interpretation of the relative clause (arbitrarily distant from its antecedent NP) and at the same time link the scope of the antecedent NP to the adjunction site of the relative clause.
In combination with the revised HPSG binding theory developed in this dissertation, the proposed analysis is able to capture the major properties of relative clause attachment within a unified and internally consistent monostratal constraint-based grammatical framework.
People who delay claiming Social Security receive higher lifelong benefits upon retirement. We survey individuals about their willingness to delay claiming if they could receive a lump sum in lieu of the higher annuity payment. Using a moment-matching approach, we calibrate a lifecycle model tracking observed claiming patterns under current rules and predict optimal claiming outcomes under the lump sum approach. Our model correctly predicts that early claimers under current rules would delay claiming most when offered actuarially fair lump sums, and for lump sums worth 87% as much, claiming ages would still be higher than at present.
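An actuarially fair lump sum equates the payment to the expected present value of the forgone benefit increase. A toy calculation sketches this; the survival curve, discount rate, and benefit figures below are illustrative assumptions, not the paper's calibration.

```python
def fair_lump_sum(delta_benefit, survival, r):
    """Expected present value of a lifelong annual benefit increment,
    where survival[t] = P(alive t+1 years after claiming)."""
    return sum(delta_benefit * s / (1 + r) ** (t + 1)
               for t, s in enumerate(survival))

# Toy inputs: an 8% bump on a $20,000 annual benefit, a 20-year horizon,
# and a linearly declining survival curve as a crude mortality stand-in.
delta = 0.08 * 20_000
survival = [max(0.0, 1 - 0.03 * t) for t in range(20)]
fair = fair_lump_sum(delta, survival, r=0.03)
discounted = 0.87 * fair   # the paper's 87%-of-fair treatment
```

The finding that claiming ages rise even at 87% of the fair value says that the offered lump sum `discounted` still induces delay relative to current rules.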
The international diffusion of technology plays a key role in stimulating global growth and explaining co-movements of international equity returns. Existing empirical evidence suggests that countries are heterogeneous in their attitude toward innovation: Some countries rely more on technology adoption while other countries rely more on internal technology production. European countries that rely more on adoption are also typically characterized by lower fiscal policy flexibility and higher labor market rigidity. We develop a two-country model, where both countries rely on R&D and adoption, to study the short-run and long-run effects of aggregate technology and adoption probability shocks on economic growth in the presence of the aforementioned asymmetries. Our framework suggests that an increase in the ability to adopt technology from abroad stimulates economic growth in the country that benefits from higher adoption rates, but the beneficial effects also spread to the foreign country. Moreover, it helps explain the differences in macro quantities and equity returns observed in the international data.
Asymmetric social norms
(2017)
Studies of cooperation in infinitely repeated matching games focus on homogeneous economies, where full cooperation is efficient and any defection is collectively sanctioned. Here we study heterogeneous economies where occasional defections are part of efficient play, and show how to support those outcomes through contagious punishments.
This paper sets the background for the Special Issue of the Journal of Empirical Finance on the European Sovereign Debt Crisis. It identifies the channel through which risks in the financial industry leaked into the public sector. It discusses the role of the bank rescues in igniting the sovereign debt crisis and reviews approaches to detect early warning signals to anticipate the buildup of crises. It concludes with a discussion of potential implications of sovereign distress for financial markets.
Low probability events are overweighted in the pricing of out-of-the-money index puts and single stock calls. We find that this behavioral bias is strongly time-varying, linked to equity market sentiment and to higher moments of the risk-neutral density. An implied volatility (IV) sentiment measure that is jointly derived from index and single stock options best explains investors' overweighting of tail events. Our findings also suggest that IV-sentiment predicts equity market reversals better than the overweighting of small probabilities itself. When employed in a trading strategy, IV-sentiment delivers economically significant results, which are more consistent than those produced by the market sentiment factor. The joint use of information from the single stock and index option markets seems to explain the forecasting power of IV-sentiment. Out-of-sample tests on reversal prediction show that our IV-sentiment measure adds value over and above traditional factors in the equity risk premium literature, especially as an equity-buying signal. This reversal prediction seems to improve time-series and cross-sectional momentum strategies.
BACKGROUND: The analysis of microarray time series promises a deeper insight into the dynamics of the cellular response following stimulation. A common observation in this type of data is that some genes respond with quick, transient dynamics, while other genes change their expression slowly over time. The existing methods for detecting significant expression dynamics often fail when the expression dynamics show a large heterogeneity. Moreover, these methods often cannot cope with irregular and sparse measurements.
RESULTS: The method proposed here is specifically designed for the analysis of perturbation responses. It combines different scores to capture fast and transient dynamics as well as slow expression changes, and performs well in the presence of low replicate numbers and irregular sampling times. The results are given in the form of tables including links to figures showing the expression dynamics of the respective transcript. These make it possible to quickly recognise the relevance of a detection, to identify possible false positives and to discriminate between early and late changes in gene expression. An extension of the method allows the analysis of the expression dynamics of functional groups of genes, providing a quick overview of the cellular response. The performance of this package was tested on microarray data derived from lung cancer cells stimulated with epidermal growth factor (EGF).
CONCLUSION: Here we describe a new, efficient method for the analysis of sparse and heterogeneous time course data with high detection sensitivity and transparency. It is implemented as the R package TTCA (transcript time course analysis) and can be installed from the Comprehensive R Archive Network, CRAN. The source code is provided in Additional file 1.
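TTCA's actual scores are defined in the package itself; purely as an illustration of the underlying idea of combining a transient score with a slow-change score on irregularly sampled time courses, consider the following sketch (all names and formulas below are our own simplifications, not TTCA's):

```python
import numpy as np

def transient_score(expr):
    """Peak deviation from baseline that is NOT retained at the end --
    large for quick, spike-like responses that return to baseline."""
    dev = np.abs(expr - expr[0])
    return np.max(dev) - dev[-1]

def slow_change_score(times, expr):
    """Absolute least-squares slope scaled by the sampling span --
    large for gradual, sustained expression changes."""
    slope = np.polyfit(times, expr, 1)[0]
    return abs(slope) * (times[-1] - times[0])

# Irregularly sampled toy time courses (hours after stimulation).
t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 24.0])
spike = np.array([1.0, 4.0, 5.0, 2.0, 1.1, 1.0, 1.0])  # transient response
drift = np.array([1.0, 1.1, 1.3, 1.6, 2.2, 3.4, 7.0])  # slow, sustained rise
```

Ranking genes by the maximum of several such complementary scores is what lets a method of this kind detect both fast, transient responders and slowly drifting transcripts in the same screen.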