University Publications
Social policy debates currently revolve, with increasing intensity, around the design of the safeguards for the socio-cultural subsistence minimum, around an alleged "cost explosion" in basic income support for jobseekers, and around suspicions of widespread benefit abuse. Attention is thus focused strongly on the poverty that the state "combats", or is supposed to combat, through transfers on the basis of the Social Code (SGB). Against this background, the studies of poverty in Germany based on relative thresholds – 50% of the arithmetic mean or 60% of the median of net equivalent incomes – are to be complemented by a poverty analysis that focuses on the income range below the statutory subsistence minimum. The following investigation is concerned not only with the overall size of the needy population group, but also with the relative importance of the causes of neediness – unemployment, part-time employment, low earnings, old age –, with gender-specific differences, and with the extent to which children are affected. Up-to-date empirical information on these questions has so far been lacking. Data on the number and structure of recipients of basic income support – that is, of unemployment benefit II (Alg II) and social allowance, basic income support in old age and in the event of reduced earning capacity, or subsistence assistance (HLu) under social assistance – convey only "half the truth". ...
The selection of features for classification, clustering and approximation is an important task in pattern recognition, data mining and soft computing. For real-valued features, this contribution shows how feature selection for a large number of features can be implemented using mutual information. In particular, the common problem in mutual information computation of estimating joint probabilities over many dimensions from only a few samples is addressed by using the Rényi mutual information of order two as the computational basis. To this end, the Grassberger-Takens correlation integral, which was developed for estimating probability densities in chaos theory, is used. Additionally, an adaptive procedure for computing the hypercube size is introduced, and for real-world applications the treatment of missing values is included. The computation procedure is accelerated by exploiting the ranking of the set of real feature values, especially for time series. As an example, a small blackbox-glassbox example shows how the relevant features and their time lags are determined in the time series even if the input feature time series determine the output nonlinearly. A more realistic example from the chemical industry shows that this enables a better approximation of the input-output mapping than the best neural network approach developed for an international contest. Thanks to the computationally efficient implementation, mutual information becomes an attractive tool for feature selection even for a large number of real-valued features.
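The order-two route can be sketched in a few lines: the Grassberger-Takens correlation integral estimates the sum of squared probabilities as the fraction of sample pairs closer than a hypercube size eps, and the Rényi-2 mutual information follows from three such sums. This is a minimal illustration with a fixed eps, not the paper's adaptive hypercube procedure or its rank-based acceleration.

```python
import numpy as np

def correlation_integral(points, eps):
    """Grassberger-Takens correlation sum: fraction of distinct point
    pairs whose max-norm distance is below eps (estimates sum_i p_i^2)."""
    n = len(points)
    dist = np.max(np.abs(points[:, None, :] - points[None, :, :]), axis=-1)
    close = (dist < eps).sum() - n          # drop the self-pairs on the diagonal
    return close / (n * (n - 1))

def renyi2_mi(x, y, eps=0.3):
    """Renyi mutual information of order two:
    I2(X;Y) = H2(X) + H2(Y) - H2(X,Y), with H2 = -log C(eps)."""
    cx = correlation_integral(x.reshape(-1, 1), eps)
    cy = correlation_integral(y.reshape(-1, 1), eps)
    cxy = correlation_integral(np.column_stack([x, y]), eps)
    return np.log(cxy / (cx * cy))

# Feature ranking: a feature that drives the target scores higher than noise.
rng = np.random.default_rng(0)
x = rng.standard_normal(400)
target = x + 0.1 * rng.standard_normal(400)   # target depends on x
z = rng.standard_normal(400)                  # irrelevant feature
```

For an irrelevant feature the joint correlation sum factorizes into the product of the marginal sums, so the estimate is near zero, while the relevant feature scores clearly higher; ranking features by this score is the selection step.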
It is well known that artificial neural nets can be used as approximators of any continuous function to any desired degree of accuracy. Nevertheless, for a given application and a given network architecture the non-trivial task remains of determining the necessary number of neurons and the necessary accuracy (number of bits) per weight for satisfactory operation. In this paper the problem is treated by an information-theoretic approach. The values for the weights and thresholds in the approximator network are determined analytically. Furthermore, the accuracy of the weights and the number of neurons are seen as general system parameters which determine the maximal output information (i.e. the approximation error) by the absolute amount and the relative distribution of information contained in the network. A new principle of optimal information distribution is proposed and the conditions for the optimal system parameters are derived. For the simple, instructive example of a linear approximation of a non-linear, quadratic function, the principle of optimal information distribution gives the optimal system parameters, i.e. the number of neurons and the different resolutions of the variables.
Towards correctness of program transformations through unification and critical pair computation
(2010)
Correctness of program transformations in extended lambda-calculi with a contextual semantics is usually based on reasoning about the operational semantics which is a rewrite semantics. A successful approach is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules, which results in so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first order way by variants of critical pair computation that use unification algorithms. As a case study of an application we describe a finitary and decidable unification algorithm for the combination of the equational theory of left-commutativity modelling multi-sets, context variables and many-sorted unification. Sets of equations are restricted to be almost linear, i.e. every variable and context variable occurs at most once, where we allow one exception: variables of a sort without ground terms may occur several times. Every context variable must have an argument-sort in the free part of the signature. We also extend the unification algorithm by the treatment of binding-chains in let- and letrec-environments and by context-classes. This results in a unification algorithm that can be applied to all overlaps of normal-order reductions and transformations in an extended lambda calculus with letrec that we use as a case study.
Measuring confidence and uncertainty during the financial crisis : evidence from the CFS survey
(2010)
The CFS survey covers the individual situations of banks and other companies in the financial sector during the financial crisis. This provides a rare opportunity to analyze appraisals, expectations and forecast errors of the core sector of the recent turmoil. Following standard ways of aggregating individual survey data, we first present and introduce the CFS survey by comparing CFS indicators of confidence and predicted confidence to ifo and ZEW indicators. The major contribution is the analysis of several indicators of uncertainty. In addition to well-established concepts, we introduce innovative measures based on the skewness of forecast errors and on the share of 'no response' replies. Results show that the uncertainty indicators fit quite well with the patterns of real and financial time series over the period 2007 to 2010. Keywords: Business Sentiment, Financial Crisis, Survey Indicator, Uncertainty
This paper provides theory as well as empirical results for pre-averaging estimators of the daily quadratic variation of asset prices. We derive jump-robust inference for pre-averaging estimators, corresponding feasible central limit theorems and an explicit test on serial dependence in microstructure noise. Using transaction data of different stocks traded at the NYSE, we analyze the estimators' sensitivity to the choice of the pre-averaging bandwidth and suggest an optimal interval length. Moreover, we investigate the dependence of pre-averaging based inference on the sampling scheme, the sampling frequency, microstructure noise properties as well as the occurrence of jumps. As a result of a detailed empirical study we provide guidance for optimal implementation of pre-averaging estimators and discuss potential pitfalls in practice. Keywords: Quadratic Variation, Market Microstructure Noise, Pre-averaging, Sampling Schemes, Jumps
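In its simplest form, a bias-corrected pre-averaging estimator with the common kernel g(x) = min(x, 1-x) and a fixed window K can be sketched as below. This is an illustrative implementation under i.i.d. noise with standard constants for this kernel; the paper's bandwidth selection, jump robustness and noise-dependence tests are omitted.

```python
import numpy as np

def preaveraged_iv(log_prices, K):
    """Pre-averaging estimator of integrated variance under i.i.d.
    microstructure noise, kernel g(x) = min(x, 1-x).
    First term: scaled sum of squared pre-averaged returns.
    Second term: bias correction via the noise-dominated realized variance."""
    r = np.diff(log_prices)
    g = np.minimum(np.arange(1, K) / K, 1 - np.arange(1, K) / K)
    psi2 = (g ** 2).sum() / K                                          # ~ int g(x)^2 dx
    psi1 = K * (np.diff(np.concatenate(([0.0], g, [0.0]))) ** 2).sum() # ~ int g'(x)^2 dx
    # g is symmetric, so 'valid' convolution equals sum_j g_j * r_{i+j}
    pav = np.convolve(r, g, mode="valid")
    return (pav ** 2).sum() / (K * psi2) - psi1 * (r ** 2).sum() / (2 * K ** 2 * psi2)

# Simulated noisy efficient price: true integrated variance is sigma^2 = 0.04,
# while the naive realized variance is biased upward by roughly 2*n*omega^2.
rng = np.random.default_rng(1)
n, sigma, omega = 23_400, 0.2, 5e-4
efficient = np.cumsum(sigma * np.sqrt(1 / n) * rng.standard_normal(n))
observed = efficient + omega * rng.standard_normal(n)
rv = (np.diff(observed) ** 2).sum()
iv_hat = preaveraged_iv(observed, K=150)
```

On this simulated sample the pre-averaged estimate lands much closer to the true 0.04 than the naive realized variance, which absorbs the noise variance of every single tick.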
SUMMARY RECOMMENDATIONS 1. One of the major lessons from the current financial crisis refers to the systemic dimension of financial risk, which had been almost completely neglected by bankers and supervisors in the pre-2007 years. 2. Accordingly, the most needed change in financial regulation, in order to avoid a repetition of such a crisis in the future, consists of influencing individual bank behaviour such that systemic risk is decreased. This objective is new and distinct from what Basle II was intended to achieve. 3. It is important, therefore, to evaluate proposed new regulatory instruments on the grounds of whether or not they contribute to a reduction, or containment, of systemic risk. We see two new regulatory measures of paramount importance: the introduction of a Systemic Risk Charge (SRC), and the implementation of a transparent bank resolution regime. Both measures complement each other, thus both have to be realized to be effective. 4. We propose a Systemic Risk Charge (SRC), a levy capturing the contribution of any individual bank to overall systemic risk, which is distinct from the institution's own default risk. The SRC is set up such that the more systemic risk a bank contributes, the higher the cost it has to bear. Therefore, the SRC serves to internalize the cost of systemic risk which, up to now, was borne by the taxpayer. 5. Major details of our SRC refer to the use of debt that may be converted into equity when systemic risk threatens the stability of the banking system. Also, the SRC raises some revenues for government. 6. The SRC has to be compared to several bank levies currently debated. The Financial Transaction Tax (FTT) does not directly address systemic risk and is therefore inferior to a SRC. Nevertheless, a FTT may offer the opportunity to subsidize on-exchange trading at the expense of off-exchange (over-the-counter, OTC) transactions, thereby enhancing financial market stability.
The Financial Activity Tax (FAT) is similar to a VAT on financial services. It is the least adequate instrument among all instruments discussed above to limit systemic risk. 7. Bank resolution regime: No instrument to contain systemic risk can be effective unless the restructuring of bank debt, and the ensuing loss given default to creditors, is a real possibility. As the crisis has taught, bank restructuring is very difficult in light of contagion risk between major banks. We therefore need a regulatory procedure that allows winding down banks, even large banks, on short notice. Among other things, the procedure will require distinguishing systemically relevant exposures from those that are irrelevant. Only the former will be saved with government money, and it will then be the task of the supervisor to ensure a sufficient amount of non-systemically relevant debt on the balance sheet of all banks. 8. Further issues discussed in this policy paper and its appendices refer to the necessity of a global level playing field, or the lack thereof, for these new regulatory measures; the convergence of our SRC proposal with what is expected to be the long-term outcome of the Basle III discussions; as well as the role of global imbalances.
Many studies show that most people are not financially literate and are unfamiliar with even the most basic economic concepts. However, the evidence on the determinants of economic literacy is scant. This paper uses international panel data on 55 countries from 1995 to 2008, merging indicators of economic literacy with a large set of macroeconomic and institutional variables. Results show that there is substantial heterogeneity of financial and economic competence across countries, and that human capital indicators (PISA test scores and college attendance) are positively correlated with economic literacy. Furthermore, inhabitants of countries with more generous social security systems are generally less literate, lending support to the hypothesis that the incentives to acquire economic literacy are related to the amount of resources available for private accumulation. JEL Classification: E2, D8, G1
Price pressures
(2010)
We study price pressures in stock prices—price deviations from fundamental value due to a risk-averse intermediary supplying liquidity to asynchronously arriving investors. Empirically, twelve years of daily New York Stock Exchange intermediary data reveal economically large price pressures. A $100,000 inventory shock causes an average price pressure of 0.28% with a half-life of 0.92 days. Price pressure causes average transitory volatility in daily stock returns of 0.49%. Price pressure effects are substantially larger, and last longer, in smaller stocks. Theoretically, in a simple dynamic inventory model the 'representative' intermediary uses price pressure to control risk through inventory mean reversion. She trades off the revenue loss due to price pressure against the price risk associated with remaining in a nonzero inventory state. The model's closed-form solution identifies the intermediary's relative risk aversion and the distribution of investors' private values for trading from the observed time series patterns. These allow us to estimate the social costs—deviations from constrained Pareto efficiency—due to price pressure, which average 0.35 basis points of the value traded. JEL Classification: G12, G14, D53, D61
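The headline estimates translate into a simple decay profile: an inventory shock creates an immediate price pressure that mean-reverts with the estimated half-life. The exponential decay form below is an illustrative assumption used to visualize the reported numbers (0.28% per $100,000, half-life 0.92 days); it is not the paper's structural model.

```python
import numpy as np

def price_pressure_path(shock_usd, days, dt=0.01,
                        pressure_per_100k=0.0028, half_life=0.92):
    """Price pressure (as a return fraction) after an inventory shock,
    decaying exponentially with the estimated half-life in days.
    Defaults mirror the reported estimates; the functional form is
    an illustrative choice."""
    kappa = np.log(2.0) / half_life      # mean-reversion speed implied by half-life
    t = np.arange(0.0, days, dt)
    p0 = pressure_per_100k * shock_usd / 100_000
    return t, p0 * np.exp(-kappa * t)

t, p = price_pressure_path(shock_usd=100_000, days=5)
```

At t = 0 the path starts at the 0.28% estimate and has fallen to half of that by t = 0.92 days, matching the reported half-life by construction.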
This paper presents a model to analyze the consequences of competition for order flow between a profit-maximizing stock exchange and an alternative trading platform for the decisions concerning trading fees and listing requirements. Listing requirements, set by the exchange, provide public information on listed firms and contribute to better liquidity on all trading venues. It is sometimes asserted that competition induces the exchange to lower its listing standards compared to a situation in which it is a monopolist, because the trading platform can free-ride on this regulatory activity and compete more aggressively on trading fees. The present analysis shows that this is not always true and depends on the existence and size of gains related to multi-market trading. These gains relax competition on trading fees. The higher these gains are, the more the exchange can increase its revenue from listing and trading when it raises its listing standards. For large enough gains from multi-market trading, the exchange is not induced to lower its listing standards when a competing trading platform appears. As a second result, this analysis also reveals a cross-subsidization effect between the listing and the trading activity when listing is not competitive. This model yields implications about the fee structures on stock markets, the regulation of listings and the social optimality of competition for volume. JEL Classification: G10, G18, G12
This paper proposes the Shannon entropy as an appropriate one-dimensional measure of behavioural trading patterns in financial markets. The concept is applied to the illustrative example of algorithmic vs. non-algorithmic trading and empirical data from Deutsche Börse's electronic cash equity trading system, Xetra. The results reveal pronounced differences between algorithmic and non-algorithmic traders. In particular, trading patterns of algorithmic traders exhibit a medium degree of regularity while non-algorithmic trading tends towards either very regular or very irregular trading patterns. JEL Classification: C40, D0, G14, G15, G20
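The one-dimensional measure itself is straightforward to compute. The sketch below applies Shannon entropy to discretized inter-order durations; the discretization (equal-width bins over a fixed range) is an illustrative choice, not Xetra's or the paper's exact specification.

```python
import numpy as np

def pattern_entropy(durations, bins=10, value_range=(0.0, 2.0)):
    """Shannon entropy (in bits) of the empirical distribution of
    inter-event durations over fixed equal-width bins."""
    counts, _ = np.histogram(durations, bins=bins, range=value_range)
    p = counts / counts.sum()
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(2)
algo_like = np.full(1000, 1.0)             # perfectly regular submissions
human_like = rng.uniform(0.0, 2.0, 1000)   # highly irregular submissions
```

A perfectly regular stream has entropy 0 bits, and a maximally irregular one approaches log2(bins) ≈ 3.32 bits; the paper's empirical finding is that algorithmic traders cluster at intermediate values between these extremes.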
How ordinary consumers make complex economic decisions: financial literacy and retirement readiness
(2010)
This paper explores who is financially literate, whether people accurately perceive their own economic decision-making skills, and where these skills come from. Self-assessed and objective measures of financial literacy can be linked to consumers’ efforts to plan for retirement in the American Life Panel, and causal relationships with retirement planning examined by exploiting information about respondent financial knowledge acquired in school. Results show that those with more advanced financial knowledge are those more likely to be retirement-ready.
We examined financial literacy among the young using the most recent wave of the 1997 National Longitudinal Survey of Youth. We showed that financial literacy is low; fewer than one-third of young adults possess basic knowledge of interest rates, inflation, and risk diversification. Financial literacy was strongly related to sociodemographic characteristics and family financial sophistication. Specifically, a college-educated male whose parents had stocks and retirement savings was about 45 percentage points more likely to know about risk diversification than a female with less than a high school education whose parents were not wealthy. These findings have implications for consumer policy. JEL Classification: D91
This paper investigates the accuracy and heterogeneity of output growth and inflation forecasts during the current and the four preceding NBER-dated U.S. recessions. We generate forecasts from six different models of the U.S. economy and compare them to professional forecasts from the Federal Reserve’s Greenbook and the Survey of Professional Forecasters (SPF). The model parameters and model forecasts are derived from historical data vintages so as to ensure comparability to historical forecasts by professionals. The mean model forecast comes surprisingly close to the mean SPF and Greenbook forecasts in terms of accuracy even though the models only make use of a small number of data series. Model forecasts compare particularly well to professional forecasts at a horizon of three to four quarters and during recoveries. The extent of forecast heterogeneity is similar for model and professional forecasts but varies substantially over time. Thus, forecast heterogeneity constitutes a potentially important source of economic fluctuations. While the particular reasons for diversity in professional forecasts are not observable, the diversity in model forecasts can be traced to different modeling assumptions, information sets and parameter estimates. JEL Classification: C53, D84, E31, E32, E37 Keywords: Forecasting, Business Cycles, Heterogeneous Beliefs, Forecast Distribution, Model Uncertainty, Bayesian Estimation
This paper analyzes loan pricing when there is multiple banking and borrower distress. Using a unique data set on SME lending collected from major German banks, we can instrument for effective coordination between lenders, carrying out a panel estimation. The analysis allows us to distinguish between rents that accrue due to single bank lending, rents that accrue due to relationship lending, and rents that accrue due to the elimination of competition among multiple lenders. We find relationship lending to have no discernible impact on loan spreads, while both single lending and coordinated multiple lending significantly increase the spread. Thus, contrary to predictions in the literature, multiple lending does not insure the borrower against hold-up. JEL Classification: D74, G21, G33, G34
Summary and conclusions: It is still too early to give a final assessment of the developments in the financial markets over the past two years. In any case, however, all rules must be put to the test. Supervisory law as a whole has failed in its task of safeguarding financial stability. Essential steps for a fundamental reform are: - a strict understanding of supervisory law as a special branch of public-order law - a drastic reduction of the complexity of the legal provisions - the internationalisation and Europeanisation of supervision - greater transparency of securitisation, including a possible authorisation procedure and the prohibition of certain dangerous "products" - a complete reorientation of the valuation of financial firms and their "products" ("ratings") - the creation of suitable rules and procedures to expose even systemically relevant institutions to market discipline, that is, to their failure - the creation of a basis for short-term decisions on the continuation, break-up or winding-down of an institution as a measure of hazard prevention; a special insolvency law for banks is not called for - the inclusion of human behaviour and the personality structure of the key decision-makers in financial institutions
SUMMARY AND CONCLUSIONS (1) The creation of the European Systemic Risk Board does not meet with decisive legal objections. (2) It is not certain that the establishment of the new European Supervisory Authorities is permissible without a corresponding amendment of primary law. (3) The decisive question is which legally binding powers to issue individual instructions are actually conferred on the authorities. (4) The powers of the authorities to issue individual instructions to private parties and to national supervisory authorities that remain after the compromise of 2 December 2009 rest on a weak legal basis. (5) If the sovereign powers are largely or completely removed, concerns arise as to the suitability and necessity of the institutions. (6) The far-reaching guarantees of independence cannot be reconciled with the requirements of democratic oversight and control. (7) Under German constitutional law, granting independence requires an explicit provision in the constitution, as in Art. 88 sentence 2 GG. (8) Transnational cooperation between administrative authorities requires a statutory authorisation at least where de facto binding decisions are taken.
This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in the deterministic call-by-need lambda calculus with letrec. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations. Although this property may be a natural one to expect, to the best of our knowledge, this paper is the first one providing a proof. The proof technique is to transfer the contextual approximation into Abramsky's lazy lambda calculus by a fully abstract and surjective translation. This also shows that the natural embedding of Abramsky's lazy lambda calculus into the call-by-need lambda calculus with letrec is an isomorphism between the respective term-models. We show that the equivalence property proven in this paper transfers to a call-by-need letrec calculus developed by Ariola and Felleisen.
This note shows that in non-deterministic extended lambda calculi with letrec, the tool of applicative (bi)simulation is in general not usable for contextual equivalence, by giving a counterexample adapted from data flow analysis. It is also shown that there is a flaw in a lemma and a theorem concerning finite simulation in a conference paper by the first two authors.
A logical framework consisting of a polymorphic call-by-value functional language and a first-order logic on the values is presented, which is a reconstruction of the logic of the verification system VeriFun. The reconstruction uses contextual semantics to define the logical value of equations. It equates undefinedness and non-termination, which is a standard semantical approach. The main results of this paper are: meta-theorems about the globality of several classes of theorems in the logic, and proofs of global correctness of transformations and deduction rules. The deduction rules of VeriFun are globally correct if rules depending on termination are appropriately formulated. The reconstruction also gives hints on generalizations of the VeriFun framework: reasoning on non-terminating expressions and functions, mutually recursive functions and abstractions in the data values, and formulas with arbitrary quantifier prefixes could be allowed.
Opting out of the great inflation: German monetary policy after the breakdown of Bretton Woods
(2009)
During the turbulent 1970s and 1980s the Bundesbank established an outstanding reputation in the world of central banking. Germany achieved a high degree of domestic stability and provided a safe haven for investors in times of turmoil in the international financial system. Eventually the Bundesbank provided the role model for the European Central Bank. Hence, we examine an episode of lasting importance in European monetary history. The purpose of this paper is to highlight how the Bundesbank's monetary policy strategy contributed to this success. We analyze the strategy as it was conceived, communicated and refined by the Bundesbank itself. We propose a theoretical framework (following Söderström, 2005) where monetary targeting is interpreted, first and foremost, as a commitment device. In our setting, a monetary target helps to anchor inflation and inflation expectations. We derive an interest rate rule and show empirically that it approximates the way the Bundesbank conducted monetary policy over the period 1975-1998. We compare the Bundesbank's monetary policy rule with those of the Fed and of the Bank of England. We find that the Bundesbank's policy reaction function was characterized by strong persistence of policy rates as well as a strong response to deviations of inflation from target and to the activity growth gap. In contrast, the response to the level of the output gap was not significant. In our empirical analysis we use real-time data, as available to policy-makers at the time. JEL Classification: E31, E32, E41, E52, E58
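The estimated reaction function has the familiar inertial form, which can be written out directly. The coefficients below (persistence, inflation response, activity-growth response, neutral rate) are illustrative placeholders for exposition, not the paper's estimates.

```python
def policy_rate(prev_rate, inflation, growth_gap,
                rho=0.8, r_star=2.0, pi_star=2.0,
                alpha=1.5, beta=0.5):
    """Inertial Taylor-type rule: strong persistence in the policy rate,
    a response to inflation deviations from target, and a response to the
    activity *growth* gap (not the output-level gap). All rates are in
    percent per annum. Coefficients are illustrative, not estimates."""
    target = r_star + inflation + alpha * (inflation - pi_star) + beta * growth_gap
    return rho * prev_rate + (1 - rho) * target
```

With inflation at target and a zero growth gap, the rate settles at the neutral level r_star + pi_star = 4%; an inflation overshoot raises it only gradually, reflecting the strong persistence the paper finds in the Bundesbank's policy rates.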
Pion and strangeness puzzles
(1996)
Data on the mean multiplicity of strange hadrons produced in minimum bias proton-proton and central nucleus-nucleus collisions at momenta between 2.8 and 400 GeV/c per nucleon have been compiled. The multiplicities for nucleon-nucleon interactions were constructed. The ratios of strange particle multiplicity to participant nucleon as well as to pion multiplicity are larger for central nucleus-nucleus collisions than for nucleon-nucleon interactions at all studied energies. The data at AGS energies suggest that the latter ratio saturates with increasing masses of the colliding nuclei. The strangeness to pion multiplicity ratio observed in nucleon-nucleon interactions increases with collision energy in the whole energy range studied. A qualitatively different behaviour is observed for central nucleus-nucleus collisions: the ratio rapidly increases when going from Dubna to AGS energies and changes little between AGS and SPS energies. This change in the behaviour can be related to the increase in the entropy production observed in central nucleus-nucleus collisions at the same energy range. The results are interpreted within a statistical approach. They are consistent with the hypothesis that the Quark Gluon Plasma is created at SPS energies, the critical collision energy being between AGS and SPS energies.
The data on average hadron multiplicities in central A+A collisions measured at the CERN SPS are analysed with the ideal hadron gas model. It is shown that the full chemical equilibrium version of the model fails to describe the experimental results. The agreement of the data with the off-equilibrium version allowing for partial strangeness saturation is significantly better. The freeze-out temperature of about 180 MeV seems to be independent of the system size (from S+S to Pb+Pb) and in agreement with that extracted in e+e-, pp and pp̄ collisions. The strangeness suppression is discussed at both the hadron and the valence quark level. It is found that the hadronic strangeness saturation factor gamma_S increases from about 0.45 for pp interactions to about 0.7 for central A+A collisions, with no significant change from S+S to Pb+Pb collisions. The quark strangeness suppression factor lambda_S is found to be about 0.2 for elementary collisions and about 0.4 for heavy ion collisions, independently of collision energy and type of colliding system.
The transverse momentum and rapidity distributions of net protons and negatively charged hadrons have been measured for minimum bias proton-nucleus and deuteron-gold interactions, as well as central oxygen-gold and sulphur-nucleus collisions at 200 GeV per nucleon. The rapidity density of net protons at midrapidity in central nucleus-nucleus collisions increases both with target mass for sulphur projectiles and with the projectile mass for a gold target. The shape of the rapidity distributions of net protons forward of midrapidity for d+Au and central S+Au collisions is similar. The average rapidity loss is larger than 2 units of rapidity for reactions with the gold target. The transverse momentum spectra of net protons for all reactions can be described by a thermal distribution with temperatures between 145 ± 11 MeV (p+S interactions) and 244 ± 43 MeV (central S+Au collisions). The multiplicity of negatively charged hadrons increases with the mass of the colliding system. The shape of the transverse momentum spectra of negatively charged hadrons changes from minimum bias p+p and p+S interactions to p+Au and central nucleus-nucleus collisions. The mean transverse momentum is almost constant in the vicinity of midrapidity and shows little variation with the target and projectile masses. The average number of produced negatively charged hadrons per participant baryon increases slightly from p+p and p+A to central S+S,Ag collisions.
A statistical model of the early stage of central nucleus-nucleus (A+A) collisions is developed. We suggest a description of the confined state with several free parameters fitted to a compilation of A+A data at the AGS. For the deconfined state a simple Bag model equation of state is assumed. The model leads to the conclusion that a Quark Gluon Plasma is created in central nucleus-nucleus collisions at the SPS. This result is in quantitative agreement with existing SPS data on pion and strangeness production and gives a natural explanation for their scaling behaviour. The localization and the properties of the transition region are discussed. It is shown that the deconfinement transition can be detected by observation of the characteristic energy dependence of pion and strangeness multiplicities, and by an increase of the event-by-event fluctuations. An attempt to understand the data on J/psi production in Pb+Pb collisions at the SPS within the same approach is presented.