We study the impact of higher capital requirements on banks’ balance sheets and its transmission to the real economy. The 2011 EBA capital exercise provides an almost ideal quasi-natural experiment, which allows us to identify the effect of higher capital requirements using a difference-in-differences matching estimator. We find that treated banks increase their capital ratios not by raising their levels of equity, but by reducing their credit supply. We also show that this reduction in credit supply results in lower firm, investment, and sales growth for firms which obtain a larger share of their bank credit from the treated banks.
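As an illustration of the identification strategy, the difference-in-differences logic can be sketched on synthetic data (our toy numbers, not the paper's sample; the magnitudes and the absence of matching covariates are simplifications):

```python
import random

random.seed(0)

# Hypothetical illustration: credit supply of treated banks (subject to the
# EBA capital exercise) vs. matched control banks, before and after the
# exercise. Numbers are invented for the sketch.
pre_treated  = [100 + random.gauss(0, 2) for _ in range(50)]
post_treated = [ 95 + random.gauss(0, 2) for _ in range(50)]  # credit falls
pre_control  = [100 + random.gauss(0, 2) for _ in range(50)]
post_control = [100 + random.gauss(0, 2) for _ in range(50)]  # roughly flat

def mean(xs):
    return sum(xs) / len(xs)

# DiD estimate: change for treated minus change for matched controls
did = (mean(post_treated) - mean(pre_treated)) \
    - (mean(post_control) - mean(pre_control))
print(round(did, 2))  # negative: treated banks cut credit relative to controls
```

In the paper the comparison is of course run with matched bank pairs and firm-level outcomes; the sketch only shows why the double difference nets out common trends.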
Following branching deregulation in the 1990s and 2000s, banks facing relatively high locally non-diversifiable risks in their home region expand more across states than banks that do not face such risks. These banks with high locally non-diversifiable risks also benefit relatively more from deregulation in terms of higher bank stability. Further, these banks expand more into counties where risks are relatively high and positively correlated with risks in their home region, suggesting that they not only diversify but also build on their expertise in local risks when they expand into new regions.
We introduce an innovative approach to measure bank integration, based on the corporate culture of multinational banking conglomerates. The new measure, the Power Index, assesses the prevalence of a language of power and authority in the financial reports of global banks. We employ a two-step approach: as a first step, we investigate whether parent-bank or parent-country characteristics are more important for bank integration. In a second step, we analyze whether bank integration affects the transmission of shocks across borders. We find that the level of integration of global banks is determined by parent-bank-specific factors, as well as by the social centralization in the parent’s country: ethnically diverse and linguistically homogeneous countries nurture decentralized corporate structures. Political and economic factors, such as corruption, political rights, and economic development, also affect bank integration. Furthermore, we find that organizational integration affects the transmission of exogenous shocks from parent banks to their subsidiaries: the more centralized a global bank is, the lower the lending of its subsidiaries after a solvency shock. Wholesale shocks do not appear to be transmitted through this channel. Also, past experience with solvency shocks reduces the integration between parents and subsidiaries.
We investigate how solvency and wholesale funding shocks to 84 OECD parent banks affect the lending of 375 foreign subsidiaries. We find that parent solvency shocks are more important than wholesale funding shocks for subsidiary lending. Furthermore, we find that parent undercapitalization does not affect the transmission of shocks, while wholesale shocks transmit to foreign subsidiaries of parents that rely primarily on wholesale funding. We also find that transmission is affected by the strategic role of the subsidiary for the parent and follows a locational rather than an organizational pecking order. Surprisingly, liquidity regulation exacerbates the transmission of adverse wholesale shocks. We further document that parent banks tend to use their own capital and liquidity buffers first, before transmitting. Finally, we show that solvency shocks have a greater impact on large subsidiary banks with low growth opportunities in mature markets.
The paper analyses the relationship between deposit insurance, debt-holder monitoring, bank charter values, and risk taking for European banks. Utilising cross-sectional and time series variation in the existence of deposit insurance schemes in the EU, we find that the establishment of explicit deposit insurance significantly reduces the risk taking of banks. This finding stands in contrast to most of the previous empirical literature. It supports the hypothesis that in the absence of deposit insurance, European banking systems have been characterised by strong implicit insurance operating through the expectation of public intervention at times of distress. Hence the introduction of an explicit system may imply a de facto reduction in the scope of the safety net. This finding provides a new perspective on the effects of deposit insurance on risk taking. Unless the absence of any safety net is credible, the introduction of deposit insurance serves to explicitly limit the safety net and, hence, moral hazard. We also test further hypotheses regarding the interaction between deposit insurance and monitoring, charter values and "too-big-to-fail." We find that banks with lower charter values and more subordinated debt reduce risk taking more after the introduction of explicit deposit insurance, in support of the notion that charter values and subordinated debt may mitigate moral hazard. Finally, large banks (as measured in relation to the banking system as a whole) do not change their risk taking in response to the introduction of deposit insurance, which suggests that the introduction of explicit deposit insurance does not mitigate "too-big-to-fail" problems.
Poster presentation: The brain is autonomously active and this self-sustained neural activity is in general modulated, but not driven, by the sensory input data stream [1,2]. Traditionally, this eigendynamics has been regarded as resulting from inter-modular recurrent neural activity [3]. Understanding the basic modules for cognitive computation is, in this view, the primary focus of research, and the overall neural dynamics would be determined by the topology of the intermodular pathways. Here we examine an alternative point of view, asking whether certain aspects of the neural eigendynamics have a central functional role for overall cognitive computation [4,5]. Transiently stable neural activity is regularly observed on the cognitive time-scale of 80–100 ms, with indications that neural competition [6] plays an important role in the selection of the transiently stable neural ensembles [7], also denoted winning coalitions [8]. We report on a theory approach which implements these two principles, transient-state dynamics and neural competition, in terms of an associative neural network with clique encoding [9]. A cognitive system [10] with a non-trivial internal eigendynamics has two seemingly contrasting tasks to fulfill: the internal processes need to be regular and not chaotic on the one side, but sensitive to the afferent sensory stimuli on the other. We show that these two contrasting demands can be reconciled within our approach, based on competitive transient-state dynamics, when the sensory stimuli are allowed to modulate the competition for the next winning coalition. Testing the system with the bars problem, we find an emerging cognitive capability: based only on the two basic architectural principles, neural competition and transient-state dynamics, with no explicit algorithmic encoding, the system performs on its own a non-linear independent component analysis of the input data stream. The system has rudimentary biological features.
All learning is local Hebbian-style, unsupervised and online. The system exhibits an ever-ongoing eigendynamics: at no time is the state or the value of the synaptic strengths reset or the system restarted; there is no separation between training and performance. We believe this kind of approach, cognitive computation with autonomously active neural networks, to be an emerging field, relevant both for systems neuroscience and for synthetic cognitive systems.
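The bars problem used to test the network has a simple generative structure; the sketch below follows the standard setup under our own conventions (grid size and bar probability are illustrative choices, not taken from the cited work):

```python
import random

random.seed(1)

# Minimal sketch of the bars problem. On an L x L grid, each of the 2L
# possible horizontal and vertical bars appears independently with
# probability p; the stimulus is the superposition of the active bars.
# Recovering the individual bars from such mixtures is a non-linear
# independent component analysis task.
L, p = 5, 0.2

def bars_stimulus():
    grid = [[0] * L for _ in range(L)]
    for i in range(L):
        if random.random() < p:          # horizontal bar in row i
            for j in range(L):
                grid[i][j] = 1
        if random.random() < p:          # vertical bar in column i
            for j in range(L):
                grid[j][i] = 1
    return grid

stim = bars_stimulus()
for row in stim:
    print("".join("#" if x else "." for x in row))
```

Because bars overlap additively but pixel values saturate at 1, the mixing is non-linear, which is what makes the task a meaningful benchmark for the competitive network described above.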
An empirical study of the per capita yield of science Nobel prizes: is the US era coming to an end?
(2018)
We point out that the Nobel prize production of the USA, the UK, Germany and France has occurred in numbers large enough to allow for a reliable analysis of the long-term historical developments. Nobel prizes are often split, such that up to three awardees receive a corresponding fractional prize. The historical trends for the fractional number of Nobelists per population are surprisingly robust, indicating in particular that Nobel productivity peaked in the 1970s for the USA and around 1900 for both France and Germany. The yearly success rates of these three countries are to date of the order of 0.2–0.3 physics, chemistry and medicine laureates per 100 million inhabitants, with the US value being a factor of 2.4 down from the maximum attained in the 1970s. The UK, in contrast, managed to retain during most of the last century a rate of 0.9–1.0 science Nobel prizes per year and per 100 million inhabitants. For the USA, one finds that the entire history of science Nobel prizes is described on a per capita basis, to an astonishing accuracy, by a single large productivity boost decaying at a continuously accelerating rate since its peak in 1972.
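The quoted per capita rates follow from simple arithmetic; a back-of-the-envelope sketch with round, present-day population figures (our own illustrative numbers, not the paper's data):

```python
# A country producing, on average, 0.8 fractional science laureates per year
# with roughly 330 million inhabitants (illustrative figures):
laureates_per_year = 0.8
population_in_100m = 3.3          # ~330 million

# rate in laureates per year per 100 million inhabitants
rate = laureates_per_year / population_in_100m
print(round(rate, 2))  # prints 0.24, inside the quoted 0.2-0.3 band
```

Fractional counting means a shared prize contributes 1/2 or 1/3 of a laureate, which is why yearly rates can be non-integer even before dividing by population.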
Envy, the inclination to compare rewards, can be expected to unfold when inequalities in terms of pay-off differences are generated in competitive societies. It is shown that increasing levels of envy lead inevitably to a self-induced separation into a lower and an upper class. Class stratification is Nash stable and strict, with members of the same class receiving identical rewards. Upper-class agents play exclusively pure strategies, all lower-class agents the same mixed strategy. The fraction of upper-class agents decreases progressively with larger levels of envy, until a single upper-class agent is left. Numerical simulations and a complete analytic treatment of a basic reference model, the shopping trouble model, are presented. The properties of the class-stratified society are universal and only indirectly controllable through the underlying utility function, which implies that class-stratified societies are intrinsically resistant to political control. Implications for human societies are discussed. It is pointed out that the repercussions of envy are amplified when societies become increasingly competitive.
Human societies are characterized by three constituent features, among others. (A) Options, as for jobs and societal positions, differ with respect to their associated monetary and non-monetary payoffs. (B) Competition leads to reduced payoffs when individuals compete for the same option as others. (C) People care about how they are doing relative to others. The latter trait, the propensity to compare one’s own success with that of others, expresses itself as envy. It is shown that the combination of (A)–(C) leads to spontaneous class stratification. Societies of agents split endogenously into two social classes, an upper and a lower class, when envy becomes relevant. A comprehensive analysis of the Nash equilibria characterizing a basic reference game is presented. Class separation is due to the condensation of the strategies of lower-class agents, which play an identical mixed strategy. Upper-class agents do not condense, following individualist pure strategies. The model and results are size-consistent, holding for arbitrarily large numbers of agents and options. Analytic results are confirmed by extensive numerical simulations. An analogy to interacting confined classical particles is discussed.
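The interplay of features (A)–(C) can be sketched as a toy payoff function (our own minimal construction; the shopping trouble model and its Nash analysis in the papers above are more elaborate):

```python
# Toy sketch: N agents each pick one of M options. An option's payoff is
# split among the agents choosing it (competition, feature B), and each
# agent's utility is reduced by an envy term over better-off agents
# (feature C). Option payoffs differ (feature A).
def utilities(choices, payoffs, envy):
    counts = {}
    for c in choices:
        counts[c] = counts.get(c, 0) + 1
    rewards = [payoffs[c] / counts[c] for c in choices]   # (A) + (B)
    n = len(rewards)
    # (C): subtract envy-weighted average shortfall vs. better-off agents
    return [r - envy * sum(max(0.0, s - r) for s in rewards) / n
            for r in rewards]

payoffs = [1.0, 0.6, 0.3]          # option payoffs, best option first
choices = [0, 0, 1, 2]             # two agents compete for the best option
print(utilities(choices, payoffs, 0.0))  # no envy: raw competitive rewards
print(utilities(choices, payoffs, 0.5))  # envy penalizes worse-off agents
```

Even in this toy form one sees the mechanism the abstracts describe: envy leaves the best-off agent untouched while depressing the utilities of everyone below, sharpening the incentive differences out of which the class structure emerges.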
Stationarity of the constituents of the body and of its functionalities is a basic requirement for life, being equivalent to survival in the first place. Assuming that the resting-state activity of the brain serves essential functionalities, stationarity entails that the dynamics of the brain needs to be regulated on a time-averaged basis. The combination of recurrent and driving external inputs must therefore lead to a non-trivial stationary neural activity, a condition which is fulfilled for afferent signals of varying strengths only close to criticality. In this view, the benefits of working in the vicinity of a second-order phase transition, such as signal enhancements, are not the underlying evolutionary drivers, but side effects of the requirement to keep the brain functional in the first place. It is hence more appropriate to use the term 'self-regulated' in this context, instead of 'self-organized'.