Our paper evaluates recent regulatory proposals mandating the deferral of bonus payments and claw-back clauses in the financial sector. We study a broadly applicable principal-agent setting in which the agent exerts effort on an immediately observable task (acquisition) and on a task for which information becomes available only gradually over time (diligence). Optimal compensation contracts trade off the cost of delay arising from agent impatience against the benefit of the informational gain. Mandatory deferral may increase or decrease equilibrium diligence, depending on the importance of the acquisition task. We provide concrete conditions on economic primitives under which mandatory deferral is socially (un)desirable.
This paper provides a complete characterization of optimal contracts in principal-agent settings where the agent's action has persistent effects. We model general information environments via the stochastic process of the likelihood ratio. The martingale property of this performance metric captures the information benefit of deferral. Costs of deferral may arise both from the agent's relative impatience and from her consumption-smoothing needs. If the relatively impatient agent is risk-neutral, optimal contracts take a simple form: they reward only maximal performance, at no more than two payout dates. If the agent is additionally risk-averse, optimal contracts stipulate rewards for a larger selection of dates and performance states: the performance hurdle required to obtain the same level of compensation increases over time, whereas the pay-performance sensitivity declines.
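The martingale property invoked in the abstract can be stated compactly. The sketch below uses generic notation not taken from the paper itself: $\ell_t$ for the likelihood ratio between an alternative measure $Q$ (induced by the agent's action) and a reference measure $P$, restricted to the information set $\mathcal{F}_t$.

```latex
\ell_t \;=\; \left.\frac{dQ}{dP}\right|_{\mathcal{F}_t},
\qquad
\mathbb{E}^{P}\!\left[\,\ell_{t+1} \mid \mathcal{F}_t\,\right] \;=\; \ell_t .
```

Because the process is a martingale under $P$, deferring the payout date never degrades the performance signal in expectation, while each additional realization refines it; this is the "information benefit of deferral" the abstract refers to.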
Non-standard errors
(2021)
In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer feedback, and (iii) is underestimated by participants.
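The distinction between the two error types can be illustrated with a small simulation. This is a hypothetical sketch, not the paper's data or method: the teams, the trimming rules, and all parameters below are invented purely to show how a standard error (sampling uncertainty for one analysis) differs from a non-standard error (dispersion of estimates across teams analysing the same sample).

```python
import numpy as np

rng = np.random.default_rng(0)

# One fixed sample that every team receives (hypothetical DGP).
n_teams, n_obs = 20, 500
sample = rng.normal(loc=1.0, scale=2.0, size=n_obs)

# Standard error: sampling uncertainty of a single team's mean estimate.
standard_error = sample.std(ddof=1) / np.sqrt(n_obs)

# EGP variation (invented for illustration): each team applies its own
# arbitrary trimming rule before estimating the mean, so estimates differ
# across teams even though the data are identical.
estimates = np.array([
    np.mean(np.sort(sample)[k : n_obs - k])  # team k trims k obs per tail
    for k in range(n_teams)
])

# Non-standard error: dispersion of the point estimates across teams.
non_standard_error = estimates.std(ddof=1)

print(f"standard error:     {standard_error:.4f}")
print(f"non-standard error: {non_standard_error:.4f}")
```

The key point of the sketch is that `non_standard_error` is computed across researchers' analytic choices, not across hypothetical resamples of the data, so it captures a source of uncertainty that conventional standard errors do not.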