How do insiders trade?
(2016)
We characterize how informed investors trade in the options market ahead of corporate news when they receive private, but noisy, information about (i) the timing of the announcement and (ii) its impact on stock prices. Our theoretical framework generates a rich set of predictions about the insiders’ behavior and their maximum expected returns. Three different analyses offer empirical support for our approach. First, predicted trades resemble illegal insider trades documented in SEC litigation cases with insiders being more likely to trade in options that offer higher expected returns. Second, pre-announcement patterns in unusual activity in the options market ahead of significant corporate news are consistent with the predictions of our framework. We employ our approach to characterize informed trading ahead of twelve different types of news including the announcement of earnings, corporate guidance, M&As, product innovations, management changes, and analyst recommendations. Third, to address concerns that pre-announcement patterns are driven by speculation, we show that measures capturing trading activity in call (put) options with high expected returns predict significant positive (negative) corporate news in the aggregate cross-section.
Using novel monthly data for 226 euro-area banks from 2007 to 2015, we investigate the determinants of changes in banks’ sovereign exposures and their effects during and after the crisis. First, public, bailed out and poorly capitalized banks responded to sovereign stress by purchasing domestic public debt more than other banks, with public banks’ purchases growing especially in coincidence with the largest ECB liquidity injections. Second, bank exposures significantly amplified the transmission of risk from the sovereign and its impact on lending. This amplification of the impact on lending does not appear to arise from spurious correlation or reverse causality.
Of the novelties introduced by digitization in the study of literature, the size of the archive is probably the most dramatic: we used to work on a couple of hundred nineteenth-century novels, and now we can analyze thousands of them, tens of thousands, tomorrow hundreds of thousands. It's a moment of euphoria, for quantitative literary history: like having a telescope that makes you see entirely new galaxies. And it's a moment of truth: so, have the digital skies revealed anything that changes our knowledge of literature? This is not a rhetorical question. In the famous 1958 essay in which he hailed "the advent of a quantitative history" that would "break with the traditional form of nineteenth-century history", Fernand Braudel mentioned as its typical materials "demographic progressions, the movement of wages, the variations in interest rates [...] productivity [...] money supply and demand." These were all quantifiable entities, clearly enough; but they were also completely new objects compared to the study of legislation, military campaigns, political cabinets, diplomacy, and so on. It was this double shift that changed the practice of history; not quantification alone. In our case, though, there is no shift in materials: we may end up studying 200,000 novels instead of 200; but, they're all still novels. Where exactly is the novelty?
Since the outbreak of the financial crisis, the macro-prudential policy paradigm has gained increasing prominence (Bank of England, 2009; Bernanke, 2011). The dynamics of this shift in the economic discourse, and the reasons this shift did not take place prior to the crisis, have not been addressed systematically. This paper investigates the evolution of the economic discourse on systemic risk and banking regulation to better understand these changes and their timing. Further, we use our sample to inquire whether, and if so why, economic regulatory studies failed to recommend a reliable banking regulation prior to the crisis. Following a discourse analysis, we establish that the economic discourse on banking regulation has not been suitable for providing the knowledge basis required for a dynamically reliable banking regulation, and we identify the underlying reasons for this failure. These reasons include the obsession of the economic discourse with optimization and with particular forms of formalism, especially partial equilibrium analysis. Further, the economic discourse on banking regulation excludes historical and practitioners’ discourses and ignores weak signals. We point out that, post-crisis, these epistemological failures of the economic discourse on banking regulation were not sufficiently recognized, and that recent attempts to conceptualize systemic risk as a negative externality, and thus to price it, point to the persistence of formalism, equilibrium thinking and optimization, with their attendant dangers.
The ECB’s Outright Monetary Transactions (OMT) program, launched in summer 2012, indirectly recapitalized periphery country banks through its positive impact on the value of sovereign bonds. However, the regained stability of the European banking sector has not fully transferred into economic growth. We show that zombie lending behavior of banks that still remained undercapitalized after the OMT announcement is an important reason for this development. As a result, there was no positive impact on real economic activity like employment or investment. Instead, firms mainly used the newly acquired funds to build up cash reserves. Finally, we document that creditworthy firms in industries with a high prevalence of zombie firms suffered significantly from the credit misallocation, which slowed down the economic recovery.
The following guide explains various methods for accessing the resource management system developed by the AG Texttechnologie. The resource management is identical for all applications. The guide illustrates how to read out the resource management of the project “PHI Picturing Atlas”. All requests are made via RESTful calls. The API documentation can be found at http://phi.resources.hucompute.org.
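As a minimal sketch of such a RESTful call, the snippet below builds a request URL for the “PHI Picturing Atlas” project and fetches the response. The `/resources/<project>` route and the JSON response format are assumptions for illustration only; the actual routes are documented at http://phi.resources.hucompute.org.

```python
# Hypothetical sketch of a RESTful call to the resource management API.
# The '/resources/<project>' route is an assumption, not the documented API.
import json
import urllib.parse
import urllib.request

BASE_URL = "http://phi.resources.hucompute.org"

def build_resource_url(project: str) -> str:
    """Construct the request URL for a project's resource listing."""
    # Percent-encode the project name so spaces are URL-safe.
    return f"{BASE_URL}/resources/{urllib.parse.quote(project)}"

def fetch_resources(project: str) -> dict:
    """Perform the GET call and decode the (assumed) JSON response."""
    with urllib.request.urlopen(build_resource_url(project)) as resp:
        return json.load(resp)

print(build_resource_url("PHI Picturing Atlas"))
```

Since the resource management is identical for all applications, the same call pattern should carry over to other projects by substituting the project name.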
The Shared Task on Source and Target Extraction from Political Speeches (STEPS) first ran in 2014 and is organized by the Interest Group on German Sentiment Analysis (IGGSA). This volume presents the proceedings of the workshop of the second iteration of the shared task. The workshop was held at KONVENS 2016 at Ruhr-University Bochum on September 22, 2016.
As in its first edition, STEPS focused on fine-grained sentiment analysis and offered a full task as well as two subtasks for the extraction of Subjective Expressions and/or their respective Sources and Targets.
In order to make the task more accessible, the annotation schema was revised for this year’s edition and an adjudicated gold standard was used for the evaluation. In contrast to the pilot task, this iteration provided training data for the participants, opening the shared task to systems based on machine learning approaches.
The gold standard as well as the evaluation tool have been made publicly available to the research community via the STEPS website.
We would like to thank the GSCL for their financial support in annotating the 2014 test data, which were available as training data in this iteration. A special thanks also goes to Stephanie Köser for her support in preparing and carrying out the annotation of this year’s test data. Finally, we would like to thank all the participants for their contributions and discussions at the workshop.
NLP4CMC III: 3rd workshop on natural language processing for computer-mediated communication
(2016)
The present paper reports the first results of the compilation and annotation of a blog corpus for German. The main aim of the project is the representation of the blog discourse structure and the relations between its elements (blog posts, comments) and participants (bloggers, commentators). The data included in the corpus were manually collected from the scientific blog portal SciLogs. The feature catalogue for the corpus annotation includes three types of information which are directly or indirectly provided in the blog or can be construed by means of statistical analysis or computational tools. At this point, only directly available information (e.g., title of the blog post, name of the blogger, etc.) has been annotated. We believe our blog corpus can be of interest for the general study of blog structure and related research questions as well as for the development of NLP methods and techniques (e.g., for authorship detection).
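To make the annotated structure concrete, here is a hypothetical sketch of a record holding the directly available information mentioned above (post title, blogger name, comments). The field names are illustrative assumptions, not the corpus’s actual annotation schema.

```python
# Hypothetical record for one annotated blog post; field names are
# assumptions for illustration, not the corpus's real schema.
from dataclasses import dataclass, field

@dataclass
class BlogPostAnnotation:
    title: str                                      # title of the blog post
    blogger: str                                    # name of the blogger
    comments: list = field(default_factory=list)    # commentator names

post = BlogPostAnnotation(title="Example post", blogger="example_blogger")
post.comments.append("commentator_1")
```

Indirectly derived features (e.g., statistics computed over posts) would extend such a record once they are annotated.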