Diagnosing and treating acute severe and recurrent antivenom-related anaphylaxis (ARA) is challenging and reported experience is limited. Herein, we describe our experience of severe ARA in patients with neurotoxic snakebite envenoming in Nepal. Patients were enrolled in a randomised, double-blind trial of high vs. low dose antivenom, given by intravenous (IV) push, followed by infusion. Training in ARA management emphasised stopping antivenom and giving intramuscular (IM) adrenaline, IV hydrocortisone, and IV chlorphenamine at the first sign(s) of ARA. Later, IV adrenaline infusion (IVAI) was introduced for patients with antecedent ARA requiring additional antivenom infusions. Pre-antivenom subcutaneous adrenaline (SCAd) was introduced in the second study year (2012). Of 155 envenomed patients who received ≥ 1 antivenom dose, 13 (8.4%), three children (aged 5−11 years) and 10 adults (18−52 years), developed clinical features consistent with severe ARA, including six with overlapping signs of severe envenoming. Four and nine patients received low and high dose antivenom, respectively, and six had received SCAd. Principal signs of severe ARA were dyspnoea alone (n=5 patients), dyspnoea with wheezing (n=3), hypotension (n=3), shock (n=3), restlessness (n=3), respiratory/cardiorespiratory arrest (n=7), and early (n=1) and late laryngeal oedema (n=1); rash was associated with severe ARA in 10 patients. Four patients were given IVAI. Of the 8 (5.1%) deaths, three occurred in transit to hospital. Severe ARA was common and recurrent and had overlapping signs with severe neurotoxic envenoming. Optimising the management of ARA at different health system levels needs more research. This trial is registered with NCT01284855.
Investigators in the cognitive neurosciences have turned to Big Data to address persistent replication and reliability issues by increasing sample sizes, statistical power, and representativeness of data. While there is tremendous potential to advance science through open data sharing, these efforts unveil a host of new questions about how to integrate data arising from distinct sources and instruments. We focus on the most frequently assessed area of cognition - memory testing - and demonstrate a process for reliable data harmonization across three common measures. We aggregated raw data from 53 studies from around the world which measured at least one of three distinct verbal learning tasks, totaling N = 10,505 healthy and brain-injured individuals. A mega-analysis was conducted using empirical Bayes harmonization to isolate and remove site effects, followed by linear models which adjusted for common covariates. After corrections, a continuous item response theory (IRT) model estimated each individual subject's latent verbal learning ability while accounting for item difficulties. Harmonization reduced inter-site variance by 37% while preserving covariate effects. The effects of age, sex, and education on scores were found to be highly consistent across memory tests. IRT methods for equating scores across AVLTs agreed with held-out data of dually-administered tests, and these tools are made available for free online. This work demonstrates that large-scale data sharing and harmonization initiatives can offer opportunities to address reproducibility and integration challenges across the behavioral sciences.
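The site-effect removal step described in the abstract above can be illustrated with a minimal location-and-scale sketch. This is a simplification of the full empirical Bayes (ComBat-style) procedure, not the authors' actual pipeline; the simulated data, variable names, and the simple per-site standardization are all illustrative assumptions:

```python
# Minimal sketch of location/scale site harmonization (a simplification of
# empirical Bayes ComBat). All data below are simulated for illustration.
import numpy as np

def harmonize(scores, sites, covariates):
    """Remove per-site shifts and scales from scores, preserving covariate effects."""
    # Pooled linear fit so covariate effects (e.g. age) are estimated once.
    X = np.column_stack([np.ones(len(scores)), covariates])
    beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
    resid = scores - X @ beta                      # covariate-adjusted residuals
    out = np.empty_like(resid, dtype=float)
    for s in np.unique(sites):
        m = sites == s
        mu, sd = resid[m].mean(), resid[m].std(ddof=1)
        out[m] = (resid[m] - mu) / sd              # strip site location and scale
    # Restore the pooled residual scale and the covariate effects.
    return out * resid.std(ddof=1) + X @ beta

rng = np.random.default_rng(0)
n = 300
site = rng.integers(0, 3, n)                       # three hypothetical sites
age = rng.uniform(20, 80, n)
obs = 0.1 * age + rng.normal(0, 1, n)
obs = obs + np.array([0.0, 2.0, -1.5])[site]       # injected additive site effects
adj = harmonize(obs, site, age[:, None])
```

A full empirical Bayes treatment would additionally shrink the per-site location and scale estimates toward pooled priors, which stabilizes the correction for sites with few subjects.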
Recently there has been an explosion of research on whether the equilibrium real interest rate has declined, an issue with significant implications for monetary policy. A common finding is that the rate has declined. In this paper we provide evidence that contradicts this finding. We show that the perceived decline may well be due to shifts in regulatory policy and monetary policy that have been omitted from the research. In developing the monetary policy implications, it is promising that much of the research approaches the policy problem through the framework of monetary policy rules, as uncertainty in the equilibrium real rate is not a reason to abandon rules in favor of discretion. But the results are still inconclusive and too uncertain to incorporate into policy rules in the ways that have been suggested.
In the aftermath of the global financial crisis and great recession, many countries face substantial deficits and growing debts. In the United States, federal government outlays as a ratio to GDP rose substantially from about 19.5 percent before the crisis to over 24 percent after the crisis. In this paper we consider a fiscal consolidation strategy that brings the budget to balance by gradually reducing this spending ratio over time to the level that prevailed prior to the crisis. A crucial issue is the impact of such a consolidation strategy on the economy. We use structural macroeconomic models to estimate this impact, focusing primarily on a dynamic stochastic general equilibrium model with price and wage rigidities and adjustment costs. We separate out the impact of reductions in government purchases and transfers, and we allow for a reduction in both distortionary taxes and government debt relative to the baseline of no consolidation. According to the model simulations GDP rises in the short run upon announcement and implementation of this fiscal consolidation strategy and remains higher than the baseline in the long run. We explore the role of the mix of expenditure cuts and tax reductions as well as gradualism in achieving this policy outcome. Finally, we conduct sensitivity studies regarding the type of model used and its parameterization.
In the aftermath of the global financial crisis and great recession, many countries face substantial deficits and growing debts. In the United States, federal government outlays as a ratio to GDP rose substantially from about 19.5 percent before the crisis to over 24 percent after the crisis. In this paper we consider a fiscal consolidation strategy that brings the budget to balance by gradually reducing this spending ratio over time to the level that prevailed prior to the crisis. A crucial issue is the impact of such a consolidation strategy on the economy. We use structural macroeconomic models to estimate this impact. We consider two types of dynamic stochastic general equilibrium models: a neoclassical growth model and more complicated models with price and wage rigidities and adjustment costs. We separate out the impact of reductions in government purchases and transfers, and we allow for a reduction in both distortionary taxes and government debt relative to the baseline of no consolidation. According to the initial model simulations GDP rises in the short run upon announcement and implementation of this fiscal consolidation strategy and remains higher than the baseline in the long run.
Recently, we evaluated a fiscal consolidation strategy for the United States that would bring the government budget into balance by gradually reducing government spending relative to GDP to the ratio that prevailed prior to the crisis (Cogan et al., JEDC 2013). Specifically, we published an analysis of the macroeconomic consequences of the 2013 Budget Resolution that was passed by the U.S. House of Representatives in March 2012. In this note, we provide an update of our research that evaluates this year's budget reform proposal that is to be discussed and voted on in the House of Representatives in March 2013. Contrary to the views voiced by critics of fiscal consolidation, we show that such a reduction in government purchases and transfer payments can increase GDP immediately and permanently relative to a policy without spending restraint. Our research makes use of a modern structural model of the economy that incorporates the long-standing essential features of economics: opportunity costs, efficiency, foresight and incentives. GDP rises because households take into account that spending restraint helps avoid future increases in tax rates. Lower taxes imply less distorted incentives for work, investment and production relative to a scenario without fiscal consolidation and lead to higher growth.
Renewed interest in fiscal policy has increased the use of quantitative models to evaluate policy. Because of modeling uncertainty, it is essential that policy evaluations be robust to alternative assumptions. We find that models currently being used in practice to evaluate fiscal policy stimulus proposals are not robust. Government spending multipliers in an alternative empirically-estimated and widely-cited new Keynesian model are much smaller than in these old Keynesian models; the estimated stimulus is extremely small with GDP and employment effects only one-sixth as large.
The nucleosynthesis of elements beyond iron is dominated by neutron captures in the s and r processes. However, 32 stable, proton-rich isotopes cannot be formed during those processes, because they are shielded from the s-process flow and r-process β-decay chains. These nuclei are attributed to the p and rp process.
For all those processes, current research in nuclear astrophysics addresses the need for more precise reaction data involving radioactive isotopes. Depending on the particular reaction, direct or inverse kinematics and the forward or time-reversed direction are investigated to determine, or at least to constrain, the desired reaction cross sections.
The Facility for Antiproton and Ion Research (FAIR) will offer unique, unprecedented opportunities to investigate many of the important reactions. The high yield of radioactive isotopes, even far away from the valley of stability, allows the investigation of isotopes involved in processes as exotic as the r or rp processes.
In this paper we investigate the comparative properties of empirically-estimated monetary models of the U.S. economy using a new database of models designed for such investigations. We focus on three representative models due to Christiano, Eichenbaum, and Evans (2005), Smets and Wouters (2007), and Taylor (1993a). Although these models differ in terms of structure, estimation method, sample period, and data vintage, we find surprisingly similar economic impacts of unanticipated changes in the federal funds rate. However, optimized monetary policy rules differ across models and lack robustness. Model averaging offers an effective strategy for improving the robustness of policy rules.