We study the quantum Zeno effect in quantum statistical mechanics within the operator algebraic framework. We formulate a condition for the appearance of the effect in W*-dynamical systems, in terms of the short-time behaviour of the dynamics. Examples of quantum spin systems show that this condition can be effectively applied to quantum statistical mechanical models. Furthermore, we derive an explicit form of the Zeno generator, and use it to construct Gibbs equilibrium states for the Zeno dynamics. As a concrete example, we consider the X-Y model, for which we show that frequent measurement at a microscopic level, e.g. a single lattice site, can produce a macroscopic effect in changing the global equilibrium. PACS classification: 03.65.Xp, 05.30.-d, 02.30. See the corresponding papers: Schmidt, Andreas U.: "Zeno Dynamics of von Neumann Algebras" and "Mathematics of the Quantum Zeno Effect" and the talk "Zeno Dynamics in Quantum Statistical Mechanics" - http://publikationen.ub.uni-frankfurt.de/volltexte/2005/1167/
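For orientation, the mechanism behind such results is the standard quantum Zeno limit of Misra and Sudarshan, stated here in its textbook Hilbert-space form rather than in the paper's W*-algebraic generality: for a projection P and a self-adjoint Hamiltonian H, under suitable domain conditions,

$$ \lim_{n\to\infty}\bigl(P\,e^{-\mathrm{i}tH/n}\,P\bigr)^{n} \;=\; e^{-\mathrm{i}t\,PHP}\,P , $$

so the Zeno dynamics is generated by the compression PHP of H to the range of P.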
The dynamical quantum Zeno effect is studied in the context of von Neumann algebras. It is shown that the Zeno dynamics coincides with the modular dynamics of a localized subalgebra. This relates the modular operator of that subalgebra to the modular operator of the original algebra by a variant of the Kato-Lie-Trotter product formula.
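For reference, the classical Lie-Trotter product formula of which this is a variant reads, in its standard form for self-adjoint A and B whose sum is essentially self-adjoint on D(A) ∩ D(B):

$$ e^{\mathrm{i}t\overline{(A+B)}} \;=\; \operatorname*{s-lim}_{n\to\infty}\bigl(e^{\mathrm{i}tA/n}\,e^{\mathrm{i}tB/n}\bigr)^{n} . $$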
We present a method for the construction of a Krein space completion for spaces of test functions, equipped with an indefinite inner product induced by a kernel which is more singular than a distribution of finite order. This generalizes a regularization method for infrared singularities in quantum field theory, introduced by G. Morchio and F. Strocchi, to the case of singularities of infinite order. We give conditions for the possibility of this procedure in terms of local differential operators and the Gelfand-Shilov test function spaces, as well as an abstract sufficient condition. As a model case we construct a maximally positive definite state space for the Heisenberg algebra in the presence of an infinite infrared singularity. See the corresponding paper: Schmidt, Andreas U.: "Mathematical Problems of Gauge Quantum Field Theory: A Survey of the Schwinger Model" and the presentation "Infinite Infrared Regularization in Krein Spaces"
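For orientation, recall the standard notion (not specific to the paper): a Krein space is an indefinite inner product space that decomposes as

$$ \mathcal{K} = \mathcal{K}_+ \oplus \mathcal{K}_- , \qquad [x, y] = (x_+, y_+) - (x_-, y_-) , $$

where (K_+, [.,.]) and (K_-, -[.,.]) are Hilbert spaces; the fundamental symmetry J = P_+ - P_- converts the indefinite product into the Hilbert inner product (x, y) = [Jx, y].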
This extended write-up of a talk gives an introductory survey of mathematical problems of the quantization of gauge systems. Using the Schwinger model as an exactly tractable but nontrivial example which exhibits general features of gauge quantum field theory, I cover the following subjects: the axiomatics of quantum field theory, formulation of quantum field theory in terms of Wightman functions, reconstruction of the state space, the local formulation of gauge theories, indefiniteness of the Wightman functions in general and in the special case of the Schwinger model, the state space of the Schwinger model, and special features of the model. New results are contained in the Mathematical Appendix, where I consider in an abstract setting the Pontrjagin space structure of a special class of indefinite inner product spaces - the so-called quasi-positive ones. This is motivated by the indefinite inner product space structure appearing in the above context and generalizes results of Morchio and Strocchi [J. Math. Phys. 31 (1990) 1467], and Dubin and Tarski [J. Math. Phys. 7 (1966) 574]. See the corresponding paper: Schmidt, Andreas U.: "Infinite Infrared Regularization and a State Space for the Heisenberg Algebra" and the presentation "Infinite Infrared Regularization in Krein Spaces".
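In the notation of the Krein-space sketch above (again a standard definition, not taken from the paper), a Pontrjagin space is a Krein space whose negative component is finite-dimensional,

$$ \kappa := \dim \mathcal{K}_- < \infty ; $$

the appendix's result is that the quasi-positive spaces it considers carry such a structure.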
Drug target 5-lipoxygenase : a link between cellular enzyme regulation and molecular pharmacology
(2005)
Leukotrienes (LT) are bioactive lipid mediators involved in a variety of inflammatory diseases such as asthma, psoriasis, arthritis and allergic rhinitis. LT also play a role in the pathogenesis of diseases such as cancer, osteoarthritis and atherosclerosis. 5-Lipoxygenase (5-LO) is the enzyme responsible for the formation of LT. Given the physiological properties of LT, the development of potential drugs targeting 5-LO is of considerable interest. In vitro, 5-LO activity is regulated by Ca2+, ATP, phosphatidylcholine and lipid hydroperoxides (LOOH), and by the p38-dependent kinases MK-2/3. Inhibitor studies indicate that the MEK1/2 pathway is also involved in 5-LO activation in vivo. The main aim of this work was to investigate the role of the MEK1/2 pathway in 5-LO activation and the influence of the 5-LO activation route on the efficacy of potential inhibitors. In-gel kinase and in vitro kinase assays showed that 5-LO is a substrate for extracellular signal-regulated kinase (ERK) and MK-2/3. Addition of unsaturated fatty acids (UFA), such as AA or oleic acid, increased the degree of 5-LO phosphorylation by both ERK1/2 and MK-2/3. These kinases are accordingly also responsible for 5-LO activation by natural stimuli that barely affect cellular Ca2+ levels. Phosphorylation of 5-LO by ERK1/2 and/or MK-2/3 thus constitutes an alternative activation mechanism alongside Ca2+. Originally, nonredox-type 5-LO inhibitors were developed as competitive agents that compete with AA for binding to the catalytic domain of 5-LO. Representatives of this class, such as ZM230487 and L-739,010, potently inhibit LT biosynthesis in various test systems, yet they failed in clinical trials. In this work we showed that the efficacy of these inhibitors depends on the 5-LO activation pathway. Compared with 5-LO activity induced by the non-physiological stimulus Ca2+ ionophore, inhibition of cell stress-induced activity requires 10- to 100-fold higher concentrations of nonredox-type 5-LO inhibitors. The non-phosphorylatable 5-LO mutant (Ser271Ala/Ser663Ala) was considerably more sensitive to nonredox-type inhibitors than the wild type when the enzyme was activated via the 5-LO kinases. These results show that, in contrast to Ca2+, 5-LO activation by phosphorylation markedly reduces the efficacy of nonredox-type inhibitors. Furthermore, the pharmacological profile of the novel 5-LO inhibitor CJ-13,610 was characterized in various in vitro test systems. In intact PMNL stimulated with Ca2+ ionophore, the compound inhibited 5-LO product formation with an IC50 of 70 nM. Addition of exogenous AA reduced its potency and raised the IC50 of the inhibitor, pointing to a competitive mode of action. Like the established nonredox-type inhibitors, CJ-13,610 loses efficacy at elevated cellular peroxide levels; unlike them, however, it shows no dependence on the 5-LO activation pathway. It is therefore of fundamental importance in the development of new drugs to understand the cellular context, in particular the regulation of enzyme activity.
As shown in this work, phosphorylation of 5-LO strongly influences the regulation of 5-LO activity and has a decisive effect on the inhibition of the enzyme by various drugs.
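The reported IC50 shift with added substrate is exactly what the classical Cheng-Prusoff relation for a competitive inhibitor predicts (a standard pharmacology identity, quoted here for orientation; the thesis does not derive it):

$$ \mathrm{IC}_{50} = K_i \left( 1 + \frac{[S]}{K_m} \right) , $$

so raising the substrate concentration [S] (here exogenous AA) increases the measured IC50 linearly while the inhibition constant K_i stays fixed.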
A fundamental work on THz measurement techniques for application to steel manufacturing processes
(2004)
Until the invention of a photo-mixing technique at Bell Laboratories in 1984 [1], terahertz (THz) waves could not be obtained except with huge systems such as free-electron lasers. The first method, using the Auston switch, could generate frequencies up to 1 THz [2]. Subsequent efforts to extend the frequency limit, combining antennas for generation and detection, reached several THz [3, 4]. The technique has since developed by gradually filling the so-called 'THz gap', and much research has also aimed at increasing the output power [5-7]. In the 1990s, non-linear optical methods brought a major advance in the accessible frequency band [8-11]: they drastically expanded the frequency region and recently enabled measurements up to 41 THz [12]. Other approaches have meanwhile yielded new generation and detection methods, for CW THz as well as pulsed generation [13-19]. In particular, THz luminescence and lasing, which originated in research on the Bloch oscillator, have recently been obtained from quantum cascade structures, albeit only at a low temperature of 60 K [20-22]. This research attracts much attention because its low cost and ease of operation could be the breakthrough that spreads THz techniques into industry as well as research. The development of the THz field has naturally been helped by short-pulse laser technology: with the appearance of the stable Ti:sapphire laser and the high-power chirped pulse amplification (CPA) laser in place of the dye laser, much work has concentrated on pulse-compression and amplification techniques [23]. Viewed from the application side, THz techniques have come into the limelight as a promising measurement method. The discovery of absorption peaks of proteins and DNA in the THz region has, over the past several years, been pushing the technique into practice in medicine and pharmaceutical science [24-27]. Since some absorption lines of light polar molecules also lie in this region, gas and water-content monitoring has been proposed for the chemical and food industries [28-32]. Furthermore, reports on measurements of carrier distributions in semiconductors, refractive indices of thin films, and object shapes by radar indicate that the technique has a wide range of applications [33-37]. I believe it is worth the challenge to apply it to the steel-making industry, owing to its unique advantages. For remote surface inspection, the THz wavelength range of 30-300 µm is long enough to be insensitive to the surface roughness of steel products yet short enough for detection with sub-millimeter precision. THz techniques may also measure the thickness or dielectric constants of relatively highly conductive materials, thanks to high transmission through non-polar dielectric materials, short-pulse detection, and high signal-to-noise ratios of 10^3-10^5. Finally, THz measurements could be applicable at high temperatures, since they are less influenced by thermal radiation than visible and infrared light. These ideas motivated me to start this THz work.
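For orientation, the quoted wavelength range converts to frequency via λ = c/ν:

$$ \nu = 1\,\mathrm{THz} \;\Rightarrow\; \lambda = \frac{3\times10^{8}\,\mathrm{m\,s^{-1}}}{10^{12}\,\mathrm{s^{-1}}} = 300\,\mu\mathrm{m}, \qquad \nu = 10\,\mathrm{THz} \;\Rightarrow\; \lambda = 30\,\mu\mathrm{m}, $$

so 30-300 µm corresponds to the 1-10 THz band.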
The Kochen-Specker theorem has been discussed intensely ever since its original proof in 1967. It is one of the central no-go theorems of quantum theory, showing the non-existence of a certain kind of hidden-state model. In this paper, we first offer a new, non-combinatorial proof for quantum systems with a type I_n factor as algebra of observables, including I_infinity. Afterwards, we give a proof of the Kochen-Specker theorem for an arbitrary von Neumann algebra R without summands of types I_1 and I_2, using a known result on two-valued measures on the projection lattice P(R). We also make some connections with the presheaf formulations proposed by Isham and Butterfield.
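In its standard Hilbert-space form (stated for orientation; the paper works with general von Neumann algebras), the theorem says that for dim H >= 3 there is no two-valued map v: P(H) -> {0,1} with v(1) = 1 that is additive on orthogonal families:

$$ \sum_i P_i = \mathbf{1}, \;\; P_i P_j = 0 \;(i \neq j) \;\Longrightarrow\; \sum_i v(P_i) = 1 , $$

i.e. there is no finitely additive two-valued measure on the projection lattice.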
This paper has shown that some of the principal arguments against shareholder voice are unfounded. It has shown that shareholders do own corporations, and that the nature of their property interest is structured to meet the needs of the relationships found in stock corporations. The paper has explained that fiduciary and other duties restrain the actions of shareholders just as they do those of management, and that critics cannot reasonably expect court-imposed fiduciary duties to extend beyond the actual powers of shareholders. It has also illustrated how, although corporate statutes give shareholders complete power to structure governance as they will, the default governance structures of U.S. corporations leave shareholders almost powerless to initiate any sort of action, and the interaction between state and federal law makes it almost impossible for shareholders to elect directors of their choice. Lastly, the paper has recalled how the percentage of U.S. corporate equities owned by institutional investors has increased dramatically in recent decades, and it has outlined some of the major developments in shareholder rights that followed this increase. I hope that this paper has deflated some of the strong rhetoric used against shareholder voice by contrasting rhetoric to law, and that it has illustrated why the picture of weak owners painted in the early 20th century should be updated to new circumstances, which will help avoid projecting an old description as a current normative model that perpetuates the inevitability of "managerialism", perhaps better known as "dirigisme".
This paper proves correctness of Nöcker's method of strictness analysis, implemented in the Clean compiler, an effective method of strictness analysis for lazy functional languages that is based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt on the correctness of the abstract reduction rules. Our method fully considers the cycle detection rules, which are the main strength of Nöcker's strictness analysis. Our algorithm SAL is a reformulation of Nöcker's strictness analysis algorithm in a higher-order call-by-need lambda-calculus with case, constructors, letrec, and seq, extended by set constants like Top or Inf, denoting sets of expressions. It is also possible to define new set constants by recursive equations with a greatest fixpoint semantics. The operational semantics is a small-step semantics. Equality of expressions is defined by a contextual semantics that observes termination of expressions. Basically, SAL is a non-termination checker. The proof of its correctness, and hence of Nöcker's strictness analysis, is based mainly on an exact analysis of the lengths of normal order reduction sequences, the main measure being the number of 'essential' reductions in such a sequence. Our tools and results provide new insights into call-by-need lambda-calculi, the role of sharing in functional programming languages, and into strictness analysis in general. The correctness result provides a foundation for Nöcker's strictness analysis in Clean, and also for its use in Haskell.
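A minimal Haskell sketch (illustrative only; this is neither the SAL algorithm nor the Clean implementation) of the property strictness analysis establishes, namely that f is strict iff f undefined = undefined, and of the seq-based transformation such an analysis licenses:

import Control.Exception (SomeException, evaluate, try)

-- sumAcc is strict in its accumulator: demanding the result demands acc.
sumAcc :: Int -> [Int] -> Int
sumAcc acc []     = acc
sumAcc acc (x:xs) = sumAcc (acc + x) xs

-- The seq forces the accumulator at each step; inserting it is exactly
-- the call-by-value optimisation a strictness analyser justifies.
sumAcc' :: Int -> [Int] -> Int
sumAcc' acc []     = acc
sumAcc' acc (x:xs) = let acc' = acc + x in acc' `seq` sumAcc' acc' xs

main :: IO ()
main = do
  print (sumAcc' 0 [1 .. 1000000])   -- constant-space accumulation
  -- Witness strictness operationally: sumAcc undefined [] is bottom.
  r <- try (evaluate (sumAcc undefined [])) :: IO (Either SomeException Int)
  putStrLn (either (const "bottom, as strictness analysis predicts") show r)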
Syndicated loans and the number of lending relationships have attracted growing attention. All other terms being equal (e.g. seniority), syndicated loans provide larger payments (in basis points) to lenders funding larger amounts. The paper explores empirically the motivation for such price discrimination on sovereign syndicated loans in the period 1990-1997. First evidence suggests larger premia are associated with renegotiation prospects. This is consistent with the hypothesis that price discrimination is aimed at reducing the number of lenders and thus the expected renegotiation costs. However, larger payment discrimination is also associated with more targeted market segments and with larger loans, thus minimising borrowing costs and/or attempting to widen the circle of lending relationships in order to successfully raise the requested amount. JEL Classification: F34, G21, G33 This version: June, 2002. Later version (October 2003) with the title: "Why Borrowers Pay Premiums to Larger Lenders: Empirical Evidence from Sovereign Syndicated Loans" : http://publikationen.ub.uni-frankfurt.de/volltexte/2005/992/
We use consumer price data for 205 cities/regions in 21 countries to study deviations from the law-of-one-price before, during and after the major currency crises of the 1990s. We combine data from industrialised nations in North America (United States, Canada, Mexico), Europe (Germany, Italy, Spain and Portugal) and the Asia-Pacific (Japan, Korea, New Zealand, Australia) with corresponding data from emerging market economies in South America (Argentina, Bolivia, Brazil, Colombia) and Asia (India, Indonesia, Malaysia, Philippines, Taiwan, Thailand). We confirm previous results that both distance and border explain a significant amount of relative price variation across different locations. We also find that currency attacks had major disintegration effects by significantly increasing these border effects, and by raising within-country relative price dispersion in emerging market economies. These effects are found to be quite persistent since relative price volatility across emerging markets today is still significantly larger than a decade ago. JEL classification: F40, F41
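A sketch of the standard border-effect specification such studies estimate (an Engel-Rogers-type regression, stated for orientation; the paper's exact specification may differ): with q_{ij} = ln(p_i/p_j) the relative price between locations i and j,

$$ V(q_{ij}) = \alpha + \beta \ln d_{ij} + \gamma B_{ij} + \varepsilon_{ij} , $$

where V(.) is a volatility measure (e.g. the standard deviation of changes in q_{ij}), d_{ij} is the distance between the locations, and B_{ij} is a dummy equal to one if they lie in different countries; a positive estimate of β captures the distance effect and a positive estimate of γ the border effect.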
We use consumer price data for 81 European cities (in Germany, Austria, Switzerland, Italy, Spain and Portugal) to study deviations from the law-of-one-price before and during the European Economic and Monetary Union (EMU). Analysing both aggregate and disaggregated CPI data for 7 categories of goods, we find that the distance between cities explains a significant amount of the variation in the prices of similar goods in different locations. We also find that the variation of the relative price is much higher for two cities located in different countries than for two equidistant cities in the same country. Under EMU, the elimination of nominal exchange rate volatility has largely reduced these border effects, but distance and border still matter for intra-European relative price volatility. JEL classification: F40, F41
This paper analyzes a comprehensive data set of 108 non-venture-backed, 58 venture-backed and 33 bridge-financed companies going public at Germany's Neuer Markt between March 1997 and March 2000. I examine whether these three types of issues differ with regard to issuer characteristics, balance sheet data or offering characteristics. Moreover, this empirical study contributes to the underpricing literature by focusing on the complementary or rather competing role of venture capitalists and underwriters in certifying the quality of a company when going public. Companies backed by a prestigious venture capitalist and/or underwritten by a top bank are expected to show less underpricing at the initial public offering (IPO) due to a reduced ex-ante uncertainty. This study provides evidence to the contrary: VC-backed IPOs appear to be more underpriced than non-VC-backed IPOs.
The paper analyses the effects of three sets of accounting rules for financial instruments - Old IAS before IAS 39 became effective, Current IAS or US GAAP, and the Full Fair Value (FFV) model proposed by the Joint Working Group (JWG) - on the financial statements of banks. We develop a simulation model that captures the essential characteristics of a modern universal bank with investment banking and commercial banking activities. We run simulations for different strategies (fully hedged, partially hedged) using historical data from periods with rising and falling interest rates. We show that under Old IAS a fully hedged bank can portray its zero economic earnings in its financial statements. As Old IAS offer much discretion, this bank may also present income that is either positive or negative. We further show that because of the restrictive hedge accounting rules, banks cannot adequately portray their best practice risk management activities under Current IAS or US GAAP. We demonstrate that - contrary to assertions from the banking industry - mandatory FFV accounting adequately reflects the economics of banking activities. Our detailed analysis identifies, in addition, several critical issues of the accounting models that have not been covered in previous literature. December 2002. Revised: June 2003. Later version: http://publikationen.ub.uni-frankfurt.de/volltexte/2005/1026/ with the title: "Accounting for financial instruments in the banking industry : conclusions from a simulation model"
The paper provides a comprehensive overview of the gradual evolution of the supervisory policy adopted by the Basle Committee for the regulatory treatment of asset securitisation. We carefully highlight the pathology of the new “securitisation framework” to facilitate a general understanding of what constitutes the current state of computing adequate capital requirements for securitised credit exposures. Although we incorporate a simplified sensitivity analysis of the varying levels of capital charges depending on the security design of asset securitisation transactions, we do not engage in a profound analysis of the benefits and drawbacks implicated in the new securitisation framework. JEL classification: E58, G21, G24, K23, L51. Forthcoming in Journal of Financial Regulation and Compliance, Vol. 13, No. 1.
This paper characterizes the optimal inflation buffer consistent with a zero lower bound on nominal interest rates in a New Keynesian sticky-price model. It is shown that a purely forward-looking version of the model that abstracts from inflation inertia would significantly underestimate the inflation buffer. If the central bank follows the prescriptions of a welfare-theoretic objective, a larger buffer appears optimal than would be the case employing a traditional loss function. Taking also into account potential downward nominal rigidities in the price-setting behavior of firms appears not to impose significant further distortions on the economy. JEL classification: C63, E31, E52.
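For orientation, the purely forward-looking core of such New Keynesian models, with the zero lower bound imposed, takes the standard textbook form (the paper's calibration and welfare-theoretic objective add further structure):

$$ \pi_t = \beta\,\mathbb{E}_t\pi_{t+1} + \kappa x_t, \qquad x_t = \mathbb{E}_t x_{t+1} - \sigma\bigl(i_t - \mathbb{E}_t\pi_{t+1} - r^n_t\bigr), \qquad i_t \ge 0, $$

where π_t is inflation, x_t the output gap, i_t the nominal interest rate and r^n_t the natural real rate; inflation inertia adds a lagged term in π_{t-1} to the Phillips curve, which is the departure from the purely forward-looking version discussed here and in the next abstract.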
Ignoring the existence of the zero lower bound on nominal interest rates leads one to considerably understate the value of monetary commitment in New Keynesian models. A stochastic forward-looking model with lower bound, calibrated to the U.S. economy, suggests that low values for the natural rate of interest lead to sizeable output losses and deflation under discretionary monetary policy. The fall in output and deflation are much larger than in the case with policy commitment and do not show up at all if the model abstracts from the existence of the lower bound. The welfare losses of discretionary policy increase even further when inflation is partly determined by lagged inflation in the Phillips curve. These results emerge because private sector expectations and the discretionary policy response to these expectations reinforce each other and cause the lower bound to be reached much earlier than under commitment. JEL classification: E31, E52
Using data from the Consumer Expenditure Survey, we first document that the recent increase in income inequality in the US has not been accompanied by a corresponding rise in consumption inequality. Much of this divergence is due to different trends in within-group inequality, which has increased significantly for income but little for consumption. We then develop a simple framework that allows us to analytically characterize how within-group income inequality affects consumption inequality in a world in which agents can trade a full set of contingent consumption claims, subject to endogenous constraints emanating from the limited enforcement of intertemporal contracts (as in Kehoe and Levine, 1993). Finally, we quantitatively evaluate, in the context of a calibrated general equilibrium production economy, whether this set-up, or alternatively a standard incomplete markets model (as in Aiyagari, 1994), can account for the documented stylized consumption inequality facts from the US data. JEL classification: E21, D91, D63, D31, G22
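The limited-enforcement friction in the spirit of Kehoe and Levine can be sketched as a participation constraint (standard form, for orientation; not the paper's exact notation): at every date and state, each agent must weakly prefer the contingent contract to reverting to autarky,

$$ \mathbb{E}_t \sum_{s \ge t} \beta^{\,s-t} u(c_s) \;\ge\; \mathbb{E}_t \sum_{s \ge t} \beta^{\,s-t} u(y_s) , $$

where y_s is the agent's endowment. These constraints endogenously limit risk sharing; in the Aiyagari-style incomplete-markets alternative, agents instead trade only a risk-free asset subject to a borrowing limit.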
In this paper, we examine the cost of insurance against model uncertainty for the Euro area considering four alternative reference models, all of which are used for policy analysis at the ECB. We find that maximal insurance across this model range in terms of a Minimax policy comes at moderate costs in terms of lower expected performance. We extract priors that would rationalize the Minimax policy from a Bayesian perspective. These priors indicate that full insurance is strongly oriented towards the model with the highest baseline losses. Furthermore, this policy is not as tolerant towards small perturbations of policy parameters as the Bayesian policy rule. We propose to strike a compromise and use preferences for policy design that allow for intermediate degrees of ambiguity aversion. These preferences allow the specification of priors but also give extra weight to the worst uncertain outcomes in a given context. JEL classification: E52, E58, E61
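One common way to formalize the comparison (a sketch for orientation; the paper's loss functions and model set are specific to the ECB context): given reference models m = 1, ..., 4 with expected losses L_m(p) under policy p and priors μ_m,

$$ p_{\mathrm{Bayes}} = \arg\min_p \sum_m \mu_m L_m(p), \qquad p_{\mathrm{Minimax}} = \arg\min_p \max_m L_m(p), $$

and an intermediate degree of ambiguity aversion θ in [0,1] interpolates between the two criteria:

$$ p_\theta = \arg\min_p \Bigl[(1-\theta)\sum_m \mu_m L_m(p) + \theta \max_m L_m(p)\Bigr]. $$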