TY  - UNPD
A1  - Svensson, Lars E.O.
A1  - Williams, Noah
T1  - Bayesian and adaptive optimal policy under model uncertainty
T2  - Center for Financial Studies (Frankfurt am Main): CFS working paper series ; No. 2007,11
N2  - We study the problem of a policymaker who seeks to set policy optimally in an economy where the true economic structure is unobserved, and policymakers optimally learn from their observations of the economy. This is a classic problem of learning and control, variants of which have been studied in the past, but rarely with forward-looking variables, which are a key component of modern policy-relevant models. As in most Bayesian learning problems, the optimal policy typically includes an experimentation component reflecting the endogeneity of information. We develop algorithms to solve numerically for the Bayesian optimal policy (BOP). However, the BOP is feasible only in relatively small models, so we also consider a simpler specification we term adaptive optimal policy (AOP), which allows policymakers to update their beliefs but shortcuts the experimentation motive. In our setting, the AOP is significantly easier to compute, and in many cases it provides a good approximation to the BOP. We provide a simple example to illustrate the role of learning and experimentation in an MJLQ framework. JEL Classification: E42, E52, E58
T3  - CFS working paper series - 2007, 11
KW  - Optimal monetary policy
KW  - Learning
KW  - Recursive saddlepoint method
KW  - Monetary policy
KW  - Bayesian decision theory
Y1  - 2006
UR  - http://publikationen.ub.uni-frankfurt.de/frontdoor/index/index/docId/1610
UR  - https://nbn-resolving.org/urn:nbn:de:hebis:30-38218
N1  - Version November 2006
IS  - November 2006
ER  -