Refine
Year of publication
- 2003 (449)
Document Type
- Article (161)
- Working Paper (109)
- Part of a Book (40)
- Preprint (40)
- Doctoral Thesis (33)
- Part of Periodical (29)
- Conference Proceeding (15)
- Book (8)
- Report (6)
- Review (5)
Language
- English (449)
Keywords
- Deutschland (24)
- Morphologie (14)
- Phonologie (12)
- Geldpolitik (11)
- Aspekt (10)
- Englisch (9)
- Europäische Union (7)
- Going Public (7)
- Kindersprache (7)
- Phonetik (7)
Institute
- Physik (63)
- Center for Financial Studies (CFS) (56)
- Wirtschaftswissenschaften (35)
- Medizin (17)
- Rechtswissenschaft (14)
- Geowissenschaften (12)
- Informatik (12)
- Biowissenschaften (11)
- Extern (11)
- Institut für Deutsche Sprache (IDS) Mannheim (11)
Learning and equilibrium selection in a monetary overlapping generations model with sticky prices
(2003)
We study adaptive learning in a monetary overlapping generations model with sticky prices and monopolistic competition for the case where learning agents observe current endogenous variables. Observability of current variables is essential for informational consistency of the learning setup with the model setup, but it generates multiple temporary equilibria when prices are flexible and prevents a straightforward construction of the learning dynamics. Sticky prices overcome this problem by avoiding simultaneity between prices and price expectations. Adaptive learning then robustly selects the determinate (monetary) steady state, independently of the degree of imperfect competition. The indeterminate (non-monetary) steady state and non-stationary equilibria are never stable. Stability in a deterministic version of the model may differ because perfect foresight equilibria can be the limit of restricted perceptions equilibria of the stochastic economy with vanishing noise and thereby inherit different stability properties. This discontinuity at zero shock variance suggests analyzing learning in stochastic models.
This paper considers a sticky price model with a cash-in-advance constraint where agents forecast inflation rates with the help of econometric models. Agents use least squares learning to estimate two competing models, of which one is consistent with rational expectations once learning is complete. When past performance governs the choice of forecast model, agents may prefer to use the inconsistent forecast model, which generates an equilibrium where forecasts are inefficient. While average output and inflation are the same as under rational expectations, higher moments differ substantially: output and inflation show persistence, inflation responds sluggishly to nominal disturbances, and the dynamic correlations of output and inflation match U.S. data surprisingly well.
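Least squares learning of the kind described above is usually implemented as a recursive update of the agents' coefficient estimates. A minimal sketch under assumed notation (the function name, the decreasing gain 1/t, and the moment-matrix recursion are the standard adaptive-learning textbook form, not necessarily this paper's exact specification):

```python
import numpy as np

def rls_update(phi, R, x, y, t):
    """One step of recursive least squares learning.

    phi : current coefficient estimate (K,)
    R   : running estimate of the second-moment matrix E[x x'] (K, K)
    x   : current regressor vector (K,)
    y   : realized value being forecast, e.g. inflation (scalar)
    t   : time step; 1/t is the decreasing gain
    """
    # update the moment matrix toward the current outer product
    R_new = R + (np.outer(x, x) - R) / t
    # move the coefficients toward reducing the current forecast error
    phi_new = phi + np.linalg.solve(R_new, x) * (y - x @ phi) / t
    return phi_new, R_new
```

Iterated on data generated by a fixed linear law of motion, the estimate converges to the true coefficient, which is the sense in which learning can "complete" into rational expectations.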
This paper compares Bayesian decision theory with robust decision theory, where the decision maker optimizes with respect to the worst state realization. For a class of robust decision problems there exists a sequence of Bayesian decision problems whose solutions converge to the robust solution. It is shown that the limiting Bayesian problem displays infinite risk aversion and that decisions are insensitive (robust) to the precise assignment of prior probabilities. This holds independently of whether the preference for robustness is global or restricted to local perturbations around some reference model.
We study optimal nominal demand policy in an economy with monopolistic competition and flexible prices when firms have imperfect common knowledge about the shocks hitting the economy. Parametrizing firms' information imperfections by a (Shannon) capacity parameter that constrains the amount of information flowing to each firm, we study how policy that minimizes a quadratic objective in output and prices depends on this parameter. When price-setting decisions of firms are strategic complements, optimal policy nominally accommodates mark-up shocks in the short run for a large range of capacity values. This finding is robust to the policy maker observing shocks imperfectly or being uncertain about firms' capacity parameter. With persistent mark-up shocks accommodation may increase in the medium term but decreases in the long run, thereby generating a hump-shaped price response and a slow reduction in output. In contrast, when prices are strategic substitutes, policy tends to react restrictively to mark-up shocks. However, rational expectations equilibria may then not exist even with small amounts of imperfect common knowledge.
We report measurements of single-particle inclusive spectra and two-particle azimuthal distributions of charged hadrons at high transverse momentum (high pT) in minimum bias and central d+Au collisions at sqrt[sNN]=200 GeV. The inclusive yield is enhanced in d+Au collisions relative to binary-scaled p+p collisions, while the two-particle azimuthal distributions are very similar to those observed in p+p collisions. These results demonstrate that the strong suppression of the inclusive yield and back-to-back correlations at high pT previously observed in central Au+Au collisions are due to final-state interactions with the dense medium generated in such collisions.
Pion-kaon correlation functions are constructed from central Au+Au collision data taken at sqrt[sNN]=130 GeV by the STAR detector at the Relativistic Heavy Ion Collider (RHIC). The results suggest that pions and kaons are not emitted at the same average space-time point. Space-momentum correlations, i.e., transverse flow, lead to a space-time emission asymmetry of pions and kaons that is consistent with the data. This result provides new independent evidence that the system created at RHIC undergoes a collective transverse expansion.
We report high statistics measurements of inclusive charged hadron production in Au+Au and p+p collisions at sqrt[sNN]=200 GeV. A large, approximately constant hadron suppression is observed in central Au+Au collisions for 5<pT<12 GeV/c. The collision energy dependence of the yields and the centrality and pT dependence of the suppression provide stringent constraints on theoretical models of suppression. Models incorporating initial-state gluon saturation or partonic energy loss in dense matter are largely consistent with observations. We observe no evidence of pT-dependent suppression, which may be expected from models incorporating jet attenuation in cold nuclear matter or scattering of fragmentation hadrons.
We present the results of charged particle fluctuation measurements in Au+Au collisions at sqrt[sNN]=130 GeV using the STAR detector. Dynamical fluctuation measurements are presented for inclusive charged particle multiplicities as well as for identified charged pions, kaons, and protons. The net charge dynamical fluctuations are found to be large and negative, providing clear evidence that positive and negative charged particle production is correlated within the pseudorapidity range investigated. Correlations are smaller than expected based on model-dependent predictions for a resonance gas or a quark-gluon gas which undergoes fast hadronization and freeze-out. Qualitative agreement is found with comparably scaled p+p measurements and with a heavy ion jet interaction generator model calculation based on independent particle collisions, although a small deviation from the 1/N scaling dependence expected from this model is observed.
Data from the first physics run at the Relativistic Heavy-Ion Collider at Brookhaven National Laboratory, Au+Au collisions at sqrt[sNN]=130 GeV, have been analyzed by the STAR Collaboration using three-pion correlations with charged pions to study whether pions are emitted independently at freeze-out. We have made a high-statistics measurement of the three-pion correlation function and calculated the normalized three-particle correlator to obtain a quantitative measurement of the degree of chaoticity of the pion source. It is found that the degree of chaoticity seems to increase with increasing particle multiplicity.
The balance function is a new observable based on the principle that charge is locally conserved when particles are pair produced. Balance functions have been measured for charged particle pairs and identified charged pion pairs in Au+Au collisions at sqrt[sNN]=130 GeV at the Relativistic Heavy Ion Collider using STAR. Balance functions for peripheral collisions have widths consistent with model predictions based on a superposition of nucleon-nucleon scattering. Widths in central collisions are smaller, consistent with trends predicted by models incorporating late hadronization.
Azimuthal anisotropy (v2) and two-particle angular correlations of high pT charged hadrons have been measured in Au+Au collisions at sqrt[sNN]=130 GeV for transverse momenta up to 6 GeV/c, where hard processes are expected to contribute significantly. The two-particle angular correlations exhibit elliptic flow and a structure suggestive of fragmentation of high pT partons. The monotonic rise of v2(pT) for pT<2 GeV/c is consistent with collective hydrodynamical flow calculations. At pT>3 GeV/c, a saturation of v2 is observed which persists up to pT=6 GeV/c.
Geant4 is a toolkit for simulating the passage of particles through matter. It includes a complete range of functionality including tracking, geometry, physics models and hits. The physics processes offered cover a comprehensive range, including electromagnetic, hadronic and optical processes, a large set of long-lived particles, materials and elements, over a wide energy range starting, in some cases, from 250 eV and extending in others to the TeV energy range. It has been designed and constructed to expose the physics models utilised, to handle complex geometries, and to enable its easy adaptation for optimal use in different sets of applications. The toolkit is the result of a worldwide collaboration of physicists and software engineers. It has been created exploiting software engineering and object-oriented technology and implemented in the C++ programming language. It has been used in applications in particle physics, nuclear physics, accelerator design, space engineering and medical physics. PACS: 07.05.Tp; 13; 23
In this study a regime switching approach is applied to estimate the chartist and fundamentalist (c&f) exchange rate model originally proposed by Frankel and Froot (1986). The c&f model is tested against alternative regime switching specifications applying likelihood ratio tests. Nested atheoretical models like the popular segmented trends model suggested by Engel and Hamilton (1990) are rejected in favour of the multi agent model. Moreover, the c&f regime switching model seems to describe the data much better than a competing regime switching GARCH(1,1) model. Finally, our findings turned out to be relatively robust when estimating the model in subsamples. The empirical results suggest that the model is able to explain daily DM/Dollar forward exchange rate dynamics from 1982 to 1998.
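Regime-switching estimation of the kind used above rests on filtered regime probabilities computed by the Hamilton filter. A minimal two-regime sketch with illustrative inputs (the per-observation likelihoods, transition matrix, and initial probabilities are assumptions, not the c&f model's actual specification):

```python
import numpy as np

def hamilton_filter(lik, P, pi0):
    """Filtered regime probabilities for a two-regime switching model.

    lik : (T, 2) likelihood of each observation under each regime
    P   : (2, 2) transition matrix, P[i, j] = Pr(s_t = j | s_{t-1} = i)
    pi0 : (2,) initial regime probabilities
    Returns the (T, 2) filtered probabilities and the log-likelihood,
    which is what a likelihood ratio test of nested specifications uses.
    """
    T = lik.shape[0]
    filt = np.empty((T, 2))
    pred = np.asarray(pi0, dtype=float)
    loglik = 0.0
    for t in range(T):
        joint = pred * lik[t]          # predict-and-weight step
        denom = joint.sum()            # likelihood of observation t
        loglik += np.log(denom)
        filt[t] = joint / denom        # Bayes update of regime beliefs
        pred = filt[t] @ P             # one-step-ahead regime prediction
    return filt, loglik
```

Maximizing the returned log-likelihood over the regime parameters, and comparing maximized values across nested specifications, is the basis of the likelihood ratio tests mentioned in the abstract.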
Remodeling of extracellular matrix (ECM) is an important physiologic feature of normal growth and development. In addition to this critical function in physiology, many diseases have been associated with an imbalance of ECM synthesis and degradation. In the kidney, dysregulation of ECM turnover can lead to interstitial fibrosis and glomerulosclerosis. The major physiologic regulators of ECM degradation in the glomerulus are the large family of zinc-dependent proteases collectively referred to as matrix metalloproteinases (MMPs). The tight regulation of most of these proteases is accomplished by different mechanisms, including the regulation of MMP gene expression, the processing and conversion of the inactive zymogen by other proteases such as serine proteases and, finally, the inhibition of active MMPs by endogenous inhibitors of MMPs, denoted tissue inhibitors of metalloproteinases (TIMPs). In particular, MMP-9 has been shown to be critically involved in the dysregulation of ECM turnover associated with severe pathologic conditions such as rheumatoid arthritis or fibrosis of lung, skin and kidney. In the present work I searched for a possible modulation of MMP-9 expression and/or activity in glomerular mesangial cells (MC), which are thought of as key players in many inflammatory and non-inflammatory glomerular diseases. I found that various structurally different PPARalpha agonists such as WY-14,643, LY-171883 and fibrates potently suppress the cytokine-induced MMP-9 expression in renal MC. Furthermore, I demonstrate that the inhibition of MMP-9 expression by PPARalpha agonists was paralleled by a strong increase of cytokine-induced iNOS expression and subsequent NO formation, suggesting that the PPARalpha-dependent effects on MMP-9 expression primarily result from alterations in NO production, which in turn reduces the MMP-9 mRNA half-life.
Searching for the detailed mechanism of the NO-dependent effects on MMP-9 mRNA stability, I found that NO, whether given from exogenous sources or produced endogenously, increases MMP-9 mRNA degradation by decreasing the expression of the mRNA-stabilizing factor HuR. Furthermore, I demonstrate a reduction in the RNA-binding capacity of HuR-containing complexes to MMP-9 ARE motifs in cells treated with NO. Since the reduction of HuR expression can be mimicked by the cGMP analog 8-bromo-cGMP, I suggest that NO reduces HuR expression in a cGMP-dependent manner. Finally, I elucidated the modulatory effect of extracellular nucleotides, mainly ATP, on cytokine-triggered MMP-9 expression. Interestingly, I found that, in contrast to NO, ATP-gamma-S, the stable analog of ATP, potently amplifies the IL-1beta-mediated MMP-9 expression. The increase in mRNA stability was paralleled by an increase in the nuclear-cytosolic shuttling of the mRNA-stabilizing factor HuR. Furthermore, I demonstrate an increase in the RNA-binding capacity of HuR-containing complexes to the 3'-UTR of MMP-9 upon ATP treatment. In summary, the data presented here may help to find new targets (posttranscriptional regulation) that could be used to manipulate or modulate the expression not only of MMP-9 but also of other genes regulated at the level of mRNA stability.
The paper is structured as follows. Section 2.1 introduces the basic classes of adjectives that constitute the factual core of the paper. Section 2.2 summarizes in greater detail the X° and XP movement approaches to word order variation within the DP. Section 3 briefly discusses problems for both approaches. Sections 4.1, 5.1, and 5.2 draw on Alexiadou (2001) and contain a discussion of Greek DS and its relevance for a re-analysis of the word order variation in the Romance DP. Section 4.2 introduces refinements to Alexiadou & Wilder (1998) and Alexiadou (2001). Section 5.3 discusses certain issues that arise from the analysis of postnominal adjectives in Romance as involving raising of XPs. Section 6 discusses phenomena found in other languages which at first sight seem similar to DS. However, I show that double definiteness in e.g. Hebrew, Scandinavian or other Balkan languages constitutes a different type of phenomenon from Greek DS, thus motivating a distinction between determiners that introduce CPs (Greek) and those that are merely morphological/agreement markers (Hebrew, Scandinavian, Albanian).
Directed and elliptic flow of charged pions and protons in Pb + Pb collisions at 40 and 158 A GeV
(2003)
Directed and elliptic flow measurements for charged pions and protons are reported as a function of transverse momentum, rapidity, and centrality for 40 and 158 A GeV Pb + Pb collisions as recorded by the NA49 detector. Both the standard method of correlating particles with an event plane and the cumulant method of studying multiparticle correlations are used. In the standard method the directed flow is corrected for momentum conservation. In the cumulant method elliptic flow is reconstructed from genuine 4-, 6-, and 8-particle correlations, providing the first unequivocal evidence for collective motion in A+A collisions at SPS energies.
Observation of an exotic S = -2, Q = -2 baryon resonance in proton-proton collisions at the CERN SPS
(2003)
Results of resonance searches in the Xi- pi-, Xi- pi+, antiXi+ pi- and antiXi+ pi+ invariant mass spectra in proton-proton collisions at sqrt[s]=17.2 GeV are presented. Evidence is shown for the existence of a narrow Xi- pi- baryon resonance with a mass of 1.862+/-0.002 GeV/c^2 and a width below the detector resolution of about 0.018 GeV/c^2. The significance is estimated to be 4.0 sigma. This state is a candidate for the hypothetical exotic Xi_(3/2)^-- baryon with S = -2, I = 3/2 and a quark content of (d s d s ubar). At the same mass, a peak is observed in the Xi- pi+ spectrum which is a candidate for the Xi_(3/2)^0 member of this isospin quartet with a quark content of (d s u s dbar). The corresponding antibaryon spectra also show enhancements at the same invariant mass.
A rapidly growing literature has documented important improvements in volatility measurement and forecasting performance through the use of realized volatilities constructed from high-frequency returns coupled with relatively simple reduced-form time series modeling procedures. Building on recent theoretical results from Barndorff-Nielsen and Shephard (2003c,d) for related bi-power variation measures involving the sum of high-frequency absolute returns, the present paper provides a practical framework for non-parametrically measuring the jump component in realized volatility measurements. Exploiting these ideas for a decade of high-frequency five-minute returns for the DM/$ exchange rate, the S&P500 market index, and the 30-year U.S. Treasury bond yield, we find the jump component of the price process to be distinctly less persistent than the continuous sample path component. Explicitly including the jump measure as an additional explanatory variable in an easy-to-implement reduced form model for realized volatility results in highly significant jump coefficient estimates at the daily, weekly and quarterly forecast horizons. As such, our results hold promise for improved financial asset allocation, risk management, and derivatives pricing, by separate modeling, forecasting and pricing of the continuous and jump components of total return variability.
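The decomposition described above can be sketched numerically: realized variance (RV) sums squared high-frequency returns and captures both components, while bi-power variation (BV) sums scaled products of adjacent absolute returns and is robust to jumps, so their truncated difference estimates the jump component. The function name and the truncation at zero are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np

def realized_measures(returns):
    """Split realized variance into continuous and jump parts.

    returns : array of intraday (e.g. five-minute) returns for one day
    Returns (rv, bv, jump) where jump = max(rv - bv, 0).
    """
    r = np.asarray(returns, dtype=float)
    rv = np.sum(r ** 2)                                   # realized variance
    # bi-power variation: pi/2 rescales E|z| for standard normal z
    bv = (np.pi / 2.0) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))
    jump = max(rv - bv, 0.0)                              # truncate at zero
    return rv, bv, jump
```

On a smooth return path the two measures roughly agree and the jump estimate is zero; a single large return inflates RV much more than BV, producing a positive jump component.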
Results are presented on event-by-event fluctuations in transverse momentum of charged particles, produced at forward rapidities in p+p, C+C, Si+Si and Pb+Pb collisions at 158 AGeV. Three different characteristics are discussed: the average transverse momentum of the event, the Phi_pT fluctuation measure and two-particle transverse momentum correlations. In the kinematic region explored, the dynamical fluctuations are found to be small. However, a significant system size dependence of Phi_pT is observed, with the largest value measured in peripheral Pb+Pb interactions. The data are compared with predictions of several models. PACS numbers: 14.20.Jn, 13.75.Cs, 12.39.-x
Pentatomidae (Heteroptera) of Honduras : a checklist with description of a new ochlerine genus
(2003)
Through collecting, surveys of museum collections, and searches of the literature, we are able to list 181 species of Pentatomidae as occurring within the boundaries of the Republic of Honduras. Most of these (129 species, around 70%) are widespread in the American tropics. Twenty-nine species are new country records, reported from Honduras for the first time. Four species of pentatomids are endemic to Honduras, including a new genus and species of ochlerine (Discocephalinae) described herein. Although a few species extend from South America into Honduras (the Gondwanan element), and a few from North America extend into Honduras (the Nearctic element), the most important faunal element is one that is native to nuclear Central America.
Hepatitis E virus (HEV) is a positive-stranded RNA virus with a 7.2 kb genome that is capped and polyadenylated. The virus is currently unclassified: the organisation of the genome resembles that of the Caliciviridae, but sequence analyses suggest that it is more closely related to the Togaviridae. HEV is an enterically transmitted virus that causes both epidemics and sporadic cases of acute hepatitis in many countries of Asia and Africa but only rarely causes disease in more industrialised countries. Initially the virus was believed to have a limited geographical distribution. However, serological studies suggest that HEV may also be endemic in the United States and Europe even though it infrequently causes overt disease in these countries. Many different animal species worldwide have recently been shown to have antibodies to HEV, suggesting that hepatitis E may be zoonotic. Although two related strains have been experimentally transmitted between species, direct transmission from an animal to a human has not been documented. Our main objective in this study was to evaluate the suitability of currently available HEV antibody assays for use in low-endemicity areas such as Germany. Methods: We selected sera on the basis of at least borderline reactivity in the routinely used Abbott EIA. Most were tested as part of routine screening of long-term expatriates in endemic countries. The following assays (recombinant antigens: ORF2 and ORF3) were used: Abbott EIA, Genelabs ELISA, Mikrogen recomBlot and a prototype DSL-ELISA. We observed a wide range of sensitivity (average 56.8%) and specificity (average 61.4%) across these assays. These results imply that the assays might be unreliable for detection of HEV infection in areas where hepatitis E is not endemic. However, most anti-HEV assays have not been correlated with HEV RNA determined by reverse transcription.
Many of these unexpected results and discrepancies can be attributed to the following reasons: I. the choice and size of the HEV antigen; II. the duration of antibody persistence; III. cross-reactivity with a different agent; IV. geographic species differences; V. the low sensitivity of the available assays; and VI. infection with a non-pathogenic HEV strain (a zoonotic strain?). We therefore suggest that further studies will be required to improve the sensitivity and specificity of the commercially available assays.
Japan and south eastern Australia have a large exotic flora in common, in spite of contrasting histories, physiographies and land-use patterns. There are some 187 common invading species and at least 71 of these are widespread in both locations. Some 15 widespread exotic invaders in Japan have not been recorded in Australia, and a number of native Japanese plants that could be introduced as ornamentals and escape cultivation are noted. The incursion of most exotic species to Japan has been historically recent. The lack of quarantine for plants (apart from parasitic plants and plants infected with disease), coupled with large importations of wheat and soybeans from North America and contaminated grain and fodder for farm animals, has led to an exponential rate of plant invasion in Japan. The apparent lack of impact of woody invading species in Japanese forests and forest margins may be due simply to the relatively short time invading species (some with long juvenile periods) have been naturalised.
In this paper we focus on the similarities tying together the second segment of an onset cluster and a singleton coda segment. We offer a proposal based on Baertsch (2002) accounting for this similarity and show how it captures a number of observations which have defied previous explanation. In accounting for the similarity of patterning between the second member of an onset and a coda consonant, we propose to augment Prince & Smolensky's (P&S, 1993/2002) Margin Hierarchy so as to distinguish between structural positions that prefer low sonority and those that prefer high sonority. P&S's Margin Hierarchy, which gives preference to segments of low sonority, applies to singleton onsets; this is our M1 hierarchy. Our proposed M2 hierarchy applies both to the second member of an onset and to a singleton coda. The M2 hierarchy differs from the M1 hierarchy in giving preference to consonants of high sonority. Splitting the Margin Hierarchy into the M1 and M2 hierarchies allows us to explain typological, phonotactic, and acquisitional observations that have defied previous explanation. In Section 2 of this paper, we briefly provide background on the links that tie together the second member of an onset and a singleton coda. In Section 3, we review P&S's Margin Hierarchy, showing that it becomes problematic when extended to coda consonants. We then offer our proposal for a split margin hierarchy. Section 4 extends the split margin approach to complex onsets. We then show how it is able to account for various typological, phonotactic, and acquisitional observations. In Section 5, we will conclude the paper by briefly sketching how the split margin approach enables us to analyze syllable contact phenomena without requiring a specific syllable contact constraint (or additional hierarchy) or reference to an external sonority scale.
After more than a decade of post-socialist transition, transition theories are increasingly criticised for their inability to grasp the new post-socialist reality. However, even in the light of political, economic, social and cultural restructuring processes taking place on a global scale, the structural legacies of socialist and pre-socialist development are not erased. On the contrary, they continue to play an important role by filtering the impact of global tendencies upon post-socialist societies. With reference to a case study from the Romanian city of Timisoara, I address in the following the ambivalences connected to the efforts of local elites to implement global-level requirements in a post-socialist environment.
Financial markets are to a very large extent influenced by the arrival of information. Such disclosures, however, do not only contain information about the fundamentals underlying the markets; they also serve as a focal point for the beliefs of market participants. This dual role of information gains further importance for explaining the development of asset valuations when taking into account that information may be perceived individually (private information) or may be commonly shared by all traders (public information). This study investigates the recently developed theoretical structures explaining the operating mechanisms of the two types of information and emphasizes the empirical testability of, and differentiation between, the roles of private and public information. Concluding from a survey of experimental studies and our own econometric analyses, it is argued that public information most often dominates private information. This finding justifies central bankers' unease when disseminating news to the markets and argues against the recent trend of demanding full transparency both for financial institutions and for financial markets themselves.
Taking shareholder protection seriously? : Corporate governance in the United States and Germany
(2003)
The paper undertakes a comparative study of the set of laws affecting corporate governance in the United States and Germany, and an evaluation of their design if one assumes that their objective were the protection of the interests of minority outside shareholders. The rationale for such an objective is reviewed, in terms of agency cost theory, and then the institutions that serve to bound agency costs are examined and critiqued. In particular, there is discussion of the applicable legal rules in each country, the role of the board of directors, the functioning of the market for corporate control, and (briefly) the use of incentive compensation. The paper concludes with the authors' views on what taking shareholder protection seriously, in each country's legal system, would require.
Taking shareholder protection seriously? : Corporate governance in the United States and Germany
(2003)
The attitude expressed by Carl Fuerstenberg, a leading German banker of his time, succinctly embodies one of the principal issues facing the large enterprise – the divergence of interest between the management of the firm and outside equity shareholders. Why do, or should, investors put some of their savings in the hands of others, to expend as they see fit, with no commitment to repayment or a return? The answers are far from simple, and involve a complex interaction among a number of legal rules, economic institutions and market forces. Yet crafting a viable response is essential to the functioning of a modern economy based upon technology with scale economies whose attainment is dependent on the creation of large firms.
Lexicon of Zionism
(2003)
Based on a broad set of regional aggregated and disaggregated consumer price index (CPI) data from major industrialized countries in Asia, North America and Europe, we examine the role that national borders play for goods market integration. In line with the existing literature we find that intra-national markets are better integrated than international markets. Additionally, our results show that there is a large "ocean" effect, i.e., inter-continental markets are significantly more segmented than intra-continental markets. To examine the impact of the establishment of the European Monetary Union (EMU) on integration, we split our sample into a pre-EMU and an EMU sample. We find that border effects across EMU countries have declined by about 80% to 90% after 1999, whereas border estimates across non-EMU countries have remained basically unchanged. Since global factors have affected all countries in our sample similarly and major integration efforts across EMU countries were made before 1999, we suggest that most of the reduction in EMU border estimates has been "nominal". Panel unit root evidence shows that the observed large differences in integration across intra- and inter-continental markets remain valid in the long run. This finding implies that real factors are responsible for the documented segmentation across our sample countries.
Despite the apparent stability of the wage bargaining institutions in West Germany, aggregate union membership has been declining dramatically since the early 1990s. However, aggregate gross membership numbers do not distinguish by employment status and it is impossible to disaggregate them sufficiently. This paper uses four waves of the German Socioeconomic Panel (1985, 1989, 1993, and 1998) to perform a panel analysis of net union membership among employees. We estimate a correlated random effects probit model, as suggested in Chamberlain (1984), to take proper account of individual-specific effects. Our results suggest that at the individual level the propensity to be a union member has not changed considerably over time. Thus, the aggregate decline in membership is due to composition effects. We also use the estimates to predict net union density at the industry level based on the IAB employment subsample for the period 1985 to 1997. JEL Classification: J5
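The Chamberlain-style correlated random effects device amounts to augmenting the covariates with their individual-specific time averages before fitting a pooled probit, letting the effects correlate with the regressors through those averages. A hypothetical helper sketching just the augmentation step (names and array layout are illustrative, not the paper's code):

```python
import numpy as np

def mundlak_augment(X, ids):
    """Append individual time averages of the covariates to a panel
    design matrix (the Mundlak/Chamberlain device).

    X   : (NT, K) stacked covariates for all individuals and waves
    ids : (NT,) individual identifiers aligned with the rows of X
    Returns the (NT, 2K) matrix [X, X_bar_i]; the second block lets a
    pooled probit absorb correlation between the individual effect
    and the covariates.
    """
    X = np.asarray(X, dtype=float)
    ids = np.asarray(ids)
    means = np.empty_like(X)
    for i in np.unique(ids):
        mask = ids == i
        means[mask] = X[mask].mean(axis=0)   # within-individual average
    return np.hstack([X, means])
```

The augmented matrix is then passed to any standard probit routine; coefficients on the time-average block capture the correlated part of the individual effect.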
The focus of this study was Celtic gold coins excavated from the Martberg, a Celtic oppidum and sanctuary occupied in the first century B.C. by a Celtic tribe known as the Treveri. These coins, and a number of associated coinages, were characterised in terms of their alloy compositions and their geochemical and isotopic signatures so as to answer archaeological and numismatic questions of coinage development and metal sources. This required the development of analytical methods involving Electron Microprobe (EPMA), Laser Ablation-ICP-MS, solution Multicollector-ICP-MS and LA-MC-ICP-MS. The alloy compositions (Au-Ag-Cu-Sn) were determined by EPMA on a small polished area on the edge of the coins. A large beam size, 50 µm in diameter, was used to overcome the extreme heterogeneity of these alloys. These analyses were shown to be representative of the bulk composition of the coins. The metallurgical development of the coinages was defined and showed that the earlier coinages followed a debasement trend, which was superseded by a trend of increasing copper at the expense of silver while gold compositions remained stable. This change occurred with the appearance of the inscribed "POTTINA" coinage, Scheers 30/V. Two typologically different coinages, Scheers 16 and 18 ("Armorican Types"), were found to have markedly different compositions which do not fit into the trends described above. A flan for a gold coin, which may indicate the presence of a mint at the Martberg, was found to have an identical weight and composition to the Scheers 30/I coins, which preceded the majority of the coins found at the Martberg in the coin development chronology. The trace element analyses were made by Laser Ablation-ICP-MS, using an Aridus™ desolvating nebuliser to introduce matrix-matched solution standards to calibrate the measurements, which were then normalised to 100%.
Quantitative results were obtained for the following elements: Sc, Ti, Cr, Mn, Co, Ni, Cu, Zn, Se, Ru, Rh, Pd, Ag, Sb, Te, W, Ir, Pt, Pb, Bi. The remaining elements (V, Fe, Ga, Ge, As, Mo, Sn, Re, Os, Hg) remain problematic, as they produced incorrect standardisations, mainly due to chemical effects in solution such as adsorption onto the beaker walls or oxidation. Changes in the sources of Au, Ag and Cu were observed during the development of the coinages through the variation of trace elements which correlate positively with the major components of the coin alloys. Changes in the Pt/Au ratios show that the Scheers 23 coins contain distinctly different gold from the later coinages and that the Scheers 18 gold source was also different. Te/Ag was used to show that the Sch. 23 coins also contained different silver, and some subgroups were observed in the Sch. 30/V coins. A major change in copper source is indicated by the sudden increase of Sb and Ni with the introduction of the Sch. 30/V coins (POTTINA), which can be linked to a similar change in copper observed in the contemporary silver coinage, Sch. 55 (with a ring). Lead isotopic analyses were made by solution- and Laser Ablation-MC-ICP-MS. The laser technique proved to be in good agreement with the solution analyses, with precisions between 1 and 0.1‰ (per mil). The development of the laser method opens the way for easy and virtually non-destructive Pb isotopic determinations of ancient gold coins. The results showed that Sch. 23 is very different from the following coinages; Sch. 16 and 18 are also different, forming their own group; and all the later "Eye" staters (Sch. 30/I-VI) lie on a mixing line controlled by the addition of copper from a Mediterranean source, probably Sardinia or Spain. An identification of gold and silver sources should be possible with further analyses of the Sch. 23 and Rainbow Cup gold coins and the Sch. 54 and 55 silver coinages.
Copper isotopic analyses were made by solution- and Laser Ablation-MC-ICP-MS. Both techniques require further development to produce more reproducible results. The results show that there appears to be a trend to more positive δ65Cu values for the later coinages, and the link between the copper used in the Sch. 30/V (POTTINA) coins and the silver Sch. 55 (with a ring) coins is also shown by similarly positive δ65Cu values. The full suite of analyses was also made on samples of gold from the region, mostly "placer gold", i.e. alluvial gold found in rivers. It was found that when a study is restricted to a limited number of deposits or areas, it is possible to distinguish between deposits based on the concentrations of those elements which are least affected by transport-related alteration processes. These elements include the PGEs, due to their refractory nature, and those elements which are usually present in high enough concentrations to remain relatively unaffected, e.g. Cu, Pb and Sb. Due to the nature of the coin alloy it is not possible to link the gold used in the coins studied here with gold deposits, as the large amounts of Ag and Cu added to the coin alloys have masked the Au signature. However, further Pb isotopic analyses of gold deposits should prove useful in determining from which regions Celtic gold was derived.
In this thesis the anti-proton to proton ratio in 197Au + 197Au collisions, measured at mid-rapidity at a center-of-mass energy of √sNN = 200 GeV, is reported. The value was measured to be p̄/p = 0.81 ± 0.002(stat) ± 0.05(syst) in the 5% most central collisions. The ratio shows no dependence on rapidity in the range |y| < 0.5. Furthermore, a dependence on transverse momentum within 0.4 < p⊥ < 1.0 GeV/c is not observed. At higher p⊥, a slight drop in the ratio is observed. In the present analysis, the highest momentum considered is p⊥ = 4.5 GeV/c, yielding p̄/p = 0.645 ± 0.005(stat) ± 0.10(syst); however, the systematic error is higher in this momentum range. A slight centrality dependence was observed: the ratio decreases from p̄/p = 0.83 ± 0.002(stat) ± 0.05(syst) for the most peripheral collisions (less than 80% central) to p̄/p = 0.78 ± 0.002(stat) ± 0.05(syst) for the 5% most central collisions. An estimate of the feed-down contributions from the decay of heavier strange baryons results in p̄/p = 0.77 ± 0.05(syst). The measured ratio is ~12.5 times higher than at the highest SPS energy of √sNN = 17.3 GeV and indicates an "almost net-baryon free" region at mid-rapidity. The asymmetry of protons and anti-protons may be explained by the contribution of valence quarks in a nucleus break-up picture. In such a scenario, the absolute value of the ratio and the fact that the ratio does not depend on rapidity (at mid-rapidity) are well reproduced. Fragmentation of quarks and anti-quarks into protons and anti-protons is assumed. An estimate of the ratio, when the feed-down correction is taken into consideration, agrees well with the prediction of a statistical model analysis at a temperature of T = 177 ± 7 MeV and a baryon chemical potential of μB = 29 ± 8 MeV. The temperature achieved is only slightly higher than at the top SPS energy, while the baryochemical potential is a factor ~10 lower.
As in the case of the SPS results, these parameters are close to the phase boundary of Figure 1.6. The measurement of the ratio at high transverse momentum was of special interest in this analysis, since at RHIC energies the cross section for hadrons at high transverse momentum is increased with respect to SPS energies. The weak dependence of the ratio on the transverse momentum is well described by the non-perturbative quenched and baryon junction scenario (i.e. the Soft+Quench model), where baryon creation is enhanced by baryon junctions. In comparison, the ratio does not decrease within the considered momentum range as predicted by pQCD.
We present an effort for the development of multilingual named entity grammars in a unification-based finite-state formalism (SProUT). Following an extended version of the MUC7 standard, we have developed Named Entity Recognition grammars for German, Chinese, Japanese, French, Spanish, English, and Czech. The grammars recognize person names, organizations, geographical locations, currency, time and date expressions. Subgrammars and gazetteers are shared as much as possible for the grammars of the different languages. Multilingual corpora from the business domain are used for grammar development and evaluation. The annotation format (named entity and other linguistic information) is described. We present an evaluation tool which provides detailed statistics and diagnostics, allows for partial matching of annotations, and supports user-defined mappings between different annotation and grammar output formats.
In contrast to the class A heat stress transcription factors (Hsfs) of plants, a considerable number of Hsfs assigned to classes B and C have no evident function as transcription activators on their own. In the course of my PhD work I showed that tomato HsfB1, a heat stress induced member of the class B Hsf family, is a novel type of transcriptional coactivator in plants. Together with class A Hsfs, e.g. tomato HsfA1, it plays an important role in efficient transcription initiation during heat stress by forming a type of enhanceosome on fragments of Hsp promoters. Characterization of the architecture of hsp promoters led to the identification of novel, complex heat stress element (HSE) clusters, which are required for optimal synergistic interactions of HsfA1 and HsfB1. In addition, HsfB1 showed synergistic activation of the expression of a subset of viral and house-keeping promoters. The CaMV35S promoter, the most widely used constitutive promoter, turned out to be the most interesting candidate to study this effect in detail: for most house-keeping promoters tested during this study, the activators responsible for constitutive expression are not known, whereas for the CaMV35S promoter they are quite well known (the bZIP proteins TGA1/2). These proteins belong to the acidic activators, similar to class A Hsfs. Thus, on heat stress inducible promoters HsfA1 or other class A Hsfs are the synergistic partners of HsfB1, whereas on house-keeping or viral promoters HsfB1 shows synergistic transcriptional activation in cooperation with the promoter-specific acidic activators, e.g. with TGA proteins on the 35S promoter. In agreement with this, binding sites for HsfB1 were identified in both house-keeping and 35S promoters. This study suggests that HsfB1 acts in the maintenance of transcription of a subset of house-keeping and viral genes during heat stress.
The coactivator function of HsfB1 depends on a single lysine residue in the GRGK motif in its CTD. Since this motif is highly conserved among histones as the acetylation motif, especially in histones H2A and H4, it was suggested that the GRGK motif acts as a recruitment motif and, together with the acidic activator, is responsible for co-recruitment of a histone acetyl transferase (HAT). Therefore, the effect of mammalian CBP (a well-known HAT) and its plant ortholog (HAC1) was tested on the stimulation of synergistic reporter gene activation obtained with HsfA1 and HsfB1. Both in plant and mammalian cells, CBP/HAC1 further stimulated the HsfA1/B1 synergistic effect. Co-recruitment of HAC1 was proven by in vitro pull-down assays, in which the NTD of HAC1 interacted specifically with both HsfA1 and HsfB1. Formation of a ternary complex between HsfA1, HsfB1 and CBP/HAC1 was shown via coimmunoprecipitation and electrophoretic mobility shift assays (EMSA). In conclusion, the work presented in my thesis proposes a new model for transcriptional regulation during an ongoing heat stress.
In an attempt to search for potential candidate molecules involved in the pathogenesis of endometriosis, a novel 2910 bp cDNA encoding a putative 411 amino acid protein, shrew-1, was discovered. By computational analysis it was predicted to be an integral membrane protein with an outside-in transmembrane domain, but no homology with any known protein or domain could be identified. Antibodies raised against the putative open-reading-frame peptide of shrew-1 labelled a protein of ca. 48 kDa in extracts of shrew-1 mRNA-positive tissues and also detected ectopically expressed shrew-1. In the course of my PhD work, I confirmed the prediction that shrew-1 is indeed a transmembrane protein by expressing epitope-tagged shrew-1 in epithelial cells and analysing the transfected cells by surface biotinylation and immunoblots. Additionally, I could show that shrew-1 is able to target to E-cadherin-mediated adherens junctions and interacts with the E-cadherin-catenin complex in polarised MCF7 and MDCK cells, but not with the N-cadherin-catenin complex in non-polarised epithelial cells. A direct interaction of shrew-1 with beta-catenin could be shown in an in vitro pull-down assay. From these data, it could be assumed that shrew-1 might play a role in the function and/or regulation of the dynamics of E-cadherin-mediated junctional complexes. In the next part of my thesis, I showed that stable overexpression of shrew-1 in normal MDCK cells causes changes in the morphology of the cells and turns them invasive. Furthermore, beta-catenin-mediated transcription was activated in these MDCK cells stably overexpressing shrew-1. It was probably the imbalance of shrew-1 protein at the adherens junctions that led to the misregulation of adherens-junction-associated proteins, i.e. E-cadherin and beta-catenin. Caveolin-1 is another integral membrane protein that forms complexes with E-cadherin-beta-catenin complexes and also plays a role in the endocytosis of E-cadherin during junctional disruption.
By immunofluorescence and biochemical studies, caveolin-1 was identified as another interacting partner of shrew-1. However, the functional relevance of this interaction is still not clear. In conclusion, it can be said that shrew-1 interacts with the key players of invasion and metastasis, E-cadherin and caveolin-1, suggesting its possible role in these processes and making it an interesting candidate to unravel other unknown mechanisms involved in the complex process of invasion.
Sino-Tibetan is a prime example of how strongly a language family can typologically diversify under the pressure of areal spread features (Matisoff 1991, 1999). One of the manifestations of this is the average length of prosodic words. In Southeast Asia, prosodic words tend to average one or one-and-a-half syllables. In the Himalayas, by contrast, it is not uncommon to encounter prosodic words containing five to ten syllables. The following pair of examples illustrates this.
In many languages, clauses can be subordinated by means of case markers. For Bodic languages, a branch of Sino-Tibetan, Genetti (1986) has shown that the meaning of case markers on clauses is in most instances a natural extension of their function on nouns. A dative, for example, which marks a referential goal with a noun, signals a situational goal, i.e., a purpose, when used on a clause. Among the case markers recruited for subordination, we not only get relatively concrete cases like datives, comitatives and various types of locatives, but also core argument relators such as ergatives and accusatives. In this paper, I focus on ergative markers in one subgroup of Bodic, viz. in Kiranti languages spoken in Eastern Nepal, especially in Belhare.
We consider the long-time behaviour of spatially extended random populations with locally dependent branching. We treat two classes of models: 1) Systems of continuous-time random walks on the d-dimensional grid with state dependent branching rate. While there are k particles at a given site, a branching event occurs there at rate s(k), and one of the particles is replaced by a random number of offspring (according to a fixed distribution with mean 1 and finite variance). 2) Discrete-time systems of branching random walks in random environment. Given a space-time i.i.d. field of random offspring distributions, all particles act independently, the offspring law of a given particle depending on its position and generation. The mean number of children per individual, averaged over the random environment, equals one. The long-time behaviour is determined by the interplay of the motion and the branching mechanism: in the case of recurrent symmetrised individual motion, systems of the second type become locally extinct. We prove a comparison theorem for convex functionals of systems of type 1) which implies that these systems also become locally extinct in this case, provided that the branching rate function grows at least linearly. Furthermore, the analysis of a caricature model leads to the conjecture that local extinction prevails generically in this case. In the case of transient symmetrised individual motion the picture is more complex: branching random walks with state dependent branching rate converge towards a non-trivial equilibrium, which preserves the initial intensity, whenever the branching rate function grows subquadratically. Systems of type 1) and systems of type 2) with quadratic branching rate function show very similar behaviour. They converge towards a non-trivial equilibrium if a conditional exponential moment of the collision time of two random walks, of an order that reflects the variability in the branching mechanism, is finite almost surely.
The equilibrium population has finite variance of the local particle number if the corresponding unconditional exponential moment is finite. These results are proved by means of genealogical representations of the locally size-biased population. Furthermore, we compute the threshold values for existence of conditional exponential moments of the collision time of two random walks in terms of the entropy of the transition functions, using tools from large deviations theory. Our results prove in particular that, in contrast to the classical case of independent branching, there is a regime of equilibria with infinite variance of the local number of particles.
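The recurrent-motion case described above can be made concrete with an illustrative toy simulation: a critical branching random walk on Z with simple (recurrent) motion and a 0-or-2-children offspring law of mean one. This is a deliberately simplified stand-in, not the paper's model (no random environment, no state-dependent rate), but local extinction at the origin is the expected long-time behaviour in this regime:

```python
import random

def step(config, rng):
    """One generation of a critical branching random walk on Z:
    each particle has 0 or 2 children (mean 1), and each child then
    takes a simple random walk step to a neighbouring site."""
    new = {}
    for site, n in config.items():
        for _ in range(n):
            for _ in range(rng.choice((0, 2))):  # critical offspring law
                s = site + rng.choice((-1, 1))   # recurrent motion on Z
                new[s] = new.get(s, 0) + 1
    return new

rng = random.Random(1)
config = {0: 200}                # start with 200 particles at the origin
for _ in range(300):
    config = step(config, rng)
origin = config.get(0, 0)        # typically 0 after many generations
```

The expected total population stays constant (criticality), but the local configuration thins out, which is the qualitative content of local extinction.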
In the present paper, I will argue that even in a language like German, where the verb system does not contain a grammaticized aspect distinction, aspectual features do underlie the early form-function mapping of verb forms in L1 acquisition. Furthermore, it will be argued that it is not only past tense forms that may receive an aspectual interpretation in early child language but also other forms of the verbal input. In the case of German, these are the forms of the present tense paradigm and the past participle. Showing and discussing various pieces of evidence for this assumption should strengthen the "aspect before tense" or "primacy of aspect" hypothesis. In general, the paper aims at a deeper understanding of the hierarchical relation between tense and aspect, whereby aspect is the basic category and, therefore, aspectual features are the inevitable starting point of the acquisition of grammar.
Yields, rapidity and transverse momentum spectra of Δ++(1232), Λ(1520), Σ±(1385) and the meson resonances K0(892), Φ, ρ0 and f0(980) are predicted. Hadronic rescattering leads to a suppression of reconstructable resonances, especially at low p⊥. A mass shift of the ρ of 10 MeV is obtained from the microscopic simulation, due to late-stage ρ formation in the cooling pion gas.
We study the production of transversely polarized Λ hyperons in high-energy collisions of protons with large nuclei. The large gluon density of the target at saturation provides an intrinsic semi-hard scale which should naturally allow for a weak-coupling QCD description of the process in terms of a convolution of the quark distribution of the proton with the elementary quark–nucleus scattering cross section (resummed to all twists) and a fragmentation function. In this case of transversely polarized Λ production we employ a so-called polarizing fragmentation function, which is an odd function of the transverse momentum of the Λ relative to the fragmenting quark. Due to this kt-odd nature, the resulting Λ polarization is essentially proportional to the derivative of the quark–nucleus cross section with respect to transverse momentum, which peaks near the saturation momentum scale. Such processes might therefore provide generic signatures for high parton density effects and for the approach to the “black-body” (unitarity) limit of hadronic scattering.
In this study explanations are sought for the often reported associations in child language between tense/aspect morphology and situation type. The study is done on the basis of adult-adult data, child language and input language to the children. First of all it is shown that the associations are natural, since they are strong in adult-adult English as well. Only in the early stages does child language differ from this distribution, in that the associations are either stronger or different. Input data appear to account to a large extent for these differing patterns. An additional explanation is found in the discourse topics: within the context of talking about the here-and-now, the combinations of morphology and situation type that can be seen as unmarked suffice. In the context of talking about past events and of giving general comments about the world, marked combinations are necessary. It is shown that children and their parents mainly talk about the here-and-now at the early ages, whereas adults among themselves hardly ever do so. Later, describing past events and commenting on the world become more frequent in child language and input and, as a consequence, marked combinations of tense/aspect morphology and situation types increase in use.
We use a simple hard-core gas model to study the dynamics of small exploding systems. The system is initially prepared in a thermalized state in a spherical container and then allowed to expand freely into the vacuum. We follow the expansion dynamics by recording the coordinates and velocities of all particles until their last collision points (freeze-out). We have found that the entropy per particle calculated for the ensemble of freeze-out points is very close to the initial value. This is in apparent contradiction with the Joule experiment in which the entropy grows when the gas expands irreversibly into a larger volume.
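The Joule-experiment benchmark the study contrasts its result with is the textbook relation ΔS = N·k_B·ln(V2/V1) for an ideal gas expanding freely into a larger volume at constant internal energy (and hence constant temperature). A one-line check of the per-particle entropy gain for a volume doubling:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def joule_entropy_per_particle(v_ratio):
    """Ideal-gas entropy gain per particle for irreversible free expansion
    V -> v_ratio * V at constant internal energy (temperature unchanged)."""
    return k_B * math.log(v_ratio)

ds = joule_entropy_per_particle(2.0)  # doubling the volume: k_B * ln 2
```

This is the growth the abstract refers to; the finding for the freeze-out ensemble of the exploding system is that, unlike in this reference case, the entropy per particle stays close to its initial value.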
Erina
(2003)
Johan comes to Africa to manage a tea plantation. He meets Erina, and his life changes forever. The story takes a leap into the unknown, cleverly blending an African setting with the fantastic premise at its core: the arrival of a black female Christ-figure. The use of AIDS as a weapon to effect the ultimate defeat of Satan adds a powerful and provocative dimension. Erina won Best First Book at the Zimbabwe Book Publishers Association awards.
This paper analyses the effects of the Initial Public Offering (IPO) market on real investment decisions in emerging industries. We first propose a model of IPO timing based on divergence of opinion among investors and short-sale constraints. Using a real options approach, we show that firms are more likely to go public when the ratio of overvaluation to profits is high, that is, after stock market run-ups. Because initial returns increase with the demand from optimistic investors at the time of the offer, the model provides an explanation for the observed positive causality between average initial returns and IPO volume. Second, we discuss the possibility of real overinvestment in high-tech industries. We claim that investing in the industry gives agents an option to sell the project on the stock market at an overvalued price, thereby enabling the financing of positive-NPV projects which would not be undertaken otherwise. It is shown, however, that the IPO market can also lead to overinvestment in new industries. Finally, we present some econometric results supporting the idea that funds committed to the financing of high-tech industries may respond positively to optimistic stock market valuations.
We calculate open charm and charmonium production in Au + Au reactions at √s = 200 GeV within the hadron-string dynamics (HSD) transport approach, employing open charm cross sections from pN and πN reactions that are fitted to results from PYTHIA and scaled in magnitude to the available experimental data. Charmonium dissociation with nucleons and formed mesons to open charm (D + D̄ pairs) is included dynamically. The comover dissociation cross sections are described by a simple phase-space model including a single free parameter, i.e. an interaction strength M₀², that is fitted to the J/Ψ suppression data for Pb + Pb collisions at SPS energies. As a novel feature we implement the backward channels for charmonium reproduction by D D̄ channels employing detailed balance. From our dynamical calculations we find that the charmonium recreation is comparable to the dissociation by comoving mesons. This leads to the final result that the total J/Ψ suppression at √s = 200 GeV as a function of centrality is slightly less than the suppression seen at SPS energies by the NA50 Collaboration, where the comover dissociation is substantial and the backward channels play no role. Furthermore, even in case all directly produced J/Ψ mesons dissociate immediately (or are not formed as a mesonic state), a sizeable amount of charmonia is found asymptotically due to the D + D̄ → J/Ψ + meson channels in central collisions of Au + Au at √s = 200 GeV, which, however, is lower than the J/Ψ yield expected from pp collisions.
In bioinformatics, biochemical pathways can be modeled by many differential equations. It is still an open problem how to fit the huge number of parameters of the equations to the available data. Here, an approach that systematically learns the parameters is necessary. In this paper, for the small but important example of inflammation modeling, a network is constructed and different learning algorithms are proposed. It turned out that, due to the nonlinear dynamics, evolutionary approaches are necessary to fit the parameters to sparse given data. Proceedings of the 15th IEEE International Conference on Tools with Artificial Intelligence - ICTAI 2003
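The kind of evolutionary parameter fitting described here can be sketched with a simple (1+1) evolution strategy applied to a deliberately small stand-in ODE, a logistic equation, since the abstract does not reproduce the inflammation network itself. The "measurements" below are synthetic and sparse, mimicking the paper's setting of few data points:

```python
import random

def simulate(a, K, x0=0.1, dt=0.1, steps=100):
    """Euler integration of the logistic ODE dx/dt = a*x*(1 - x/K)."""
    xs, x = [], x0
    for _ in range(steps):
        x += dt * a * x * (1 - x / K)
        xs.append(x)
    return xs

def loss(params, data):
    """Sum of squared errors at the sparse measurement times."""
    xs = simulate(*params)
    return sum((xs[t] - y) ** 2 for t, y in data)

# Sparse synthetic "measurements" from assumed true parameters a=0.8, K=2.0.
true = simulate(0.8, 2.0)
data = [(t, true[t]) for t in (10, 40, 90)]

rng = random.Random(0)
best = (0.3, 1.0)                       # initial guess
best_loss = loss(best, data)
for _ in range(2000):                   # (1+1) evolution strategy
    cand = tuple(max(1e-3, p + rng.gauss(0, 0.05)) for p in best)
    cand_loss = loss(cand, data)
    if cand_loss < best_loss:           # greedy selection: keep the better
        best, best_loss = cand, cand_loss
```

Gaussian mutation plus greedy selection is the simplest member of the evolutionary family; the paper's algorithms for the full nonlinear network are necessarily more elaborate.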
The Internet, as the biggest human library ever assembled, keeps on growing. Although all kinds of information carriers (e.g. audio/video/hybrid file formats) are available, text-based documents dominate. It is estimated that about 80% of all information stored electronically worldwide exists in (or can be converted into) text form. More and more, all kinds of documents are generated by means of a text processing system and are therefore available electronically. Nowadays, many printed journals are also published online and may even discontinue to appear in print form tomorrow. This development has many convincing advantages: the documents are available faster (cf. prepress services) and cheaper, they can be searched more easily, the physical storage only needs a fraction of the space previously necessary, and the medium will not age. For most people, fast and easy access is the most interesting feature of the new age; computer-aided search for specific documents or Web pages becomes the basic tool for information-oriented work. But this tool has problems. The current keyword-based search machines available on the Internet are not really appropriate for such a task: either (way) too many documents matching the specified keywords are presented, or none at all. The problem lies in the fact that it is often very difficult to choose appropriate terms describing the desired topic in the first place. This contribution discusses the current state-of-the-art techniques in content-based searching (along with common visualization/browsing approaches) and proposes a particular adaptive solution for intuitive Internet document navigation, which not only enables the user to provide full texts instead of manually selected keywords (if available), but also allows him/her to explore the whole database.
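The content-based alternative to keyword matching that the contribution argues for can be sketched with plain TF-IDF vectors and cosine similarity, where a full text rather than a keyword list serves as the query. This is a minimal bag-of-words illustration with made-up documents, not the proposed adaptive navigation system:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Bag-of-words TF-IDF vectors for a small document collection."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(w for toks in tokenized for w in set(toks))  # document freq.
    n = len(docs)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity of two sparse vectors given as dicts."""
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A full text (here just a phrase) is the query; documents are ranked by
# similarity instead of boolean keyword matching.
docs = ["celtic gold coins from the martberg",
        "tea plantation in africa",
        "gold and silver sources"]
query = "gold coin alloys"
vecs = tfidf_vectors(docs + [query])
scores = [cosine(vecs[-1], v) for v in vecs[:-1]]
```

Ranking by graded similarity rather than exact keyword hits is what avoids the all-or-nothing behaviour of boolean search; real systems add stemming, weighting schemes, and dimensionality reduction on top.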
Introduction: This open label, multicentre study was conducted to assess the times to offset of the pharmacodynamic effects and the safety of remifentanil in patients with varying degrees of renal impairment requiring intensive care.
Methods: A total of 40 patients, who were aged 18 years or older and had normal/mildly impaired renal function (estimated creatinine clearance ≥ 50 ml/min; n = 10) or moderate/severe renal impairment (estimated creatinine clearance <50 ml/min; n = 30), were entered into the study. Remifentanil was infused for up to 72 hours (initial rate 6–9 μg/kg per hour), with propofol administered if required, to achieve a target Sedation–Agitation Scale score of 2–4, with no or mild pain.
Results: There was no evidence of increased offset time with increased duration of exposure to remifentanil in either group. The times to offset of the effects of remifentanil (at 8, 24, 48 and 72 hours during scheduled down-titrations of the infusion) were more variable and statistically significantly longer in the moderate/severe group than in the normal/mild group at 24 hours and 72 hours. These observed differences were not clinically significant (the difference in mean offset at 72 hours was only 16.5 min). Propofol consumption was lower with the remifentanil-based technique than with hypnotic-based sedative techniques. There were no statistically significant differences between the renal function groups in the incidence of adverse events, and no deaths were attributable to remifentanil use.
Conclusion: Remifentanil was well tolerated, and the offset of pharmacodynamic effects was not prolonged either as a result of renal dysfunction or prolonged infusion up to 72 hours.
Sand mining has been responsible for much of the degradation of the indigenous flora of sand dunes in New South Wales, to the extent that authentic foredune plant communities are now uncommon in much of NSW and southern Queensland. Dune heaths are very susceptible to invasion and infestation by the weed bitou bush (Chrysanthemoides monilifera subsp. rotunda). This paper compares the floristic composition of dunes in 1941 (before sand mining) and in 1997 and 1999 (after sand mining and invasion by bitou bush) at Bennetts Beach, Hawks Nest, on the lower north coast of NSW. The 1941 data provide a unique example of authentic foredune vegetation and constitute the first quantitative analysis of coastal dune vegetation in NSW. In 1941, 25 native species were recorded in the 0.5 ha site. Nine of these were considered to be characteristic of dune communities, and eight of these nine were also recorded in a 1939 survey at Myall Lakes. Four other studies in the intervening 60 years contain species lists of dune vegetation in this general area (1986, 1995, 1997 and 1999). Of a total of 17 species considered to be strongly associated with dune habitats, five were reported in all six surveys and 15 occurred in one or more of the more recent surveys (1986 and later); the two exceptions were Austrofestuca littoralis and Senecio spathulatus. Only one introduced weed was recorded in 1941 (Cakile edentula), and the only weeds recorded in 1939 were Cakile edentula and Oxalis corniculata, both cosmopolitan species. Thirteen additional weed species, the most abundant being Chrysanthemoides monilifera, were recorded in the more recent surveys. A set of 14 native species that are more typical of heath and eucalypt forest and woodland communities than of the dunes were absent in the 1939 and 1941 surveys but occurred in one or more of the post-mining surveys of 1995, 1997 and 1999. Detailed plant distribution and abundance were assessed in the same part of Bennetts Beach in 1941, 1997 and 1999.
All show some patterns of zonation across the sand dune. However, clear phytosociological patterns of the dominant species that were obvious in 1941 were lacking in the 1997 and 1999 analyses. These contrasts suggest that post-mining revegetation has resulted in weed invasion, the addition of native species from other communities, and a disruption of the distributions of typical dune species across the sand dunes that has only partially recovered since sand mining and the invasion of bitou bush.
One of the known apoptotic pathways in mammalian cells involves release of mitochondrial Cytochrome c (Cyt c) into the cytosol. Cyt c then, together with ATP or dATP, induces a conformational change in the adaptor protein Apaf-1 (a homologue of the C. elegans CED4 protein) (Zou, Henzel et al. 1997), leading to its oligomerization and the recruitment of several pro-Casp-9 molecules. This protein complex assembly, called the "apoptosome", leads to the activation of Casp-9, which then initiates or amplifies the caspase cascade. The cell death program can be stalled at several points, and we were interested in identifying new proteins inhibiting cell death downstream of Cyt c release. This thesis describes how I screened a cDNA library derived from a pool of human breast carcinomas in a yeast-based survival screen, using the S. pombe yeast strain HC4 containing an inducible CED4 construct (James, Gschmeissner et al. 1997). The screen resulted in the identification of six proteins displaying cell death-inhibiting activity in S. pombe as well as anti-apoptotic potential in mammalian cells. These six molecules were RoRet (Ruddy, Kronmal et al. 1997), Aven (Chau, Cheng et al. 2000), Fte-1/S3a (Kho, Wang et al. 1996), PGC2 (Padilla, Kaur et al. 2000; Goetze, Eilers et al. 2002), SAA1-2ß (Moriguchi, Terai et al. 2001) and FBP (Brockstedt, Rickers et al. 1998), of which I selected RoRet, Aven and Fte-1/S3a for further analysis. RoRet is a new anti-apoptotic molecule that can inhibit the mitochondrial pathway via its PRY-SPRY domain. RoRet does not seem to bind to Apaf-1 and does not co-localize with the activated Apaf-1/Caspase-9 complex. Aven was published to act as an anti-apoptotic protein and suggested to function via the recruitment of Bcl-XL to Apaf-1. This work shows that its C-terminal domain can bind to Apaf-1 and has a strong anti-apoptotic activity by itself.
Moreover, Aven co-localizes with the activated Apaf-1/Caspase-9 complex, suggesting that it is a component of the apoptosome. Furthermore, the expression of Aven is regulated in mammary glands during the pregnancy cycle. Fte-1/S3a has already been implicated in the increased transformation capacity of v-Fos in fibroblasts (Kho and Zarbl 1992; Kho, Wang et al. 1996). This work shows that it has anti-apoptotic activity and can protect against Bak- and Apaf-1-induced apoptosis. It can bind directly to activated Apaf-1 at the linker domain between the WD40 repeats and the CED4-like domain, suggesting that it may protect by sequestering the activated Apaf-1 to organelles whose nature remains to be determined. Moreover, expression studies at the mRNA and protein level showed upregulation of Fte-1/S3a in colon, lung and kidney carcinoma. HMGB1 (Flohr, Rogalla et al. 2001; Pasheva, Ugrinova et al. 2002; Stros, Ozaki et al. 2002) was identified in a survival screen performed with a NIH 3T3 mouse fibroblast cDNA library in a Bak-expressing S. pombe strain. HMGB1 can protect against Bak-, UV-, FasL- and TRAIL-induced apoptosis. Significant overexpression of HMGB1 was found in breast and colon carcinoma, and elevated mRNA amounts were detected in uterus, colon and stomach carcinoma, suggesting that it may be a tumour marker (Brezniceanu et al., 2003).
It has been previously reported that in languages demonstrating the Root Infinitive (RI) Stage the use of RIs is characterized by two properties: these forms are overwhelmingly eventive and have, in the majority of instances, a modal interpretation. Hoekstra and Hyams (1998, 1999) have proposed a theory stating that these two properties of RIs are co-dependent in that the application of the modal reference restriction limits the use of the aspectual verbal classes to eventive predicates. Furthermore, this theory assumed that the described mutual dependency of these constraints was valid cross-linguistically.
In this paper, we investigate the application of this theory to RIs in Russian, one of the languages exhibiting the RI Stage. Using new longitudinal data from two monolingual Russian-speaking children, we demonstrate that the predictions of Hoekstra and Hyams’ approach are not borne out in Russian child speech. While the constraint requiring that RIs have a modal reference does not seem to apply in Russian, since the infinitival forms do receive past and present tense interpretations, these predicates are still overwhelmingly eventive, and stative predicates appear mostly as finite verbs. Having shown that a theory connecting the application of the two restrictions on RIs does not account for the Russian data, we examine several alternative analyses of Russian RIs. We conclude that an explanation based on the lack of the event variable in stative predicates (Kratzer 1989), which is necessary for the interpretation of RIs in discourse (Avrutin 1997), succeeds in handling the Russian data presented in this article.
Mechanisms of contrasting Korean velar stops: A catalogue of acoustic and articulatory parameters
(2003)
The Korean stop system exhibits a three-way distinction in velar stops among /g/, /k'/ and /kh/. If the differentiation is regarded as being based on voicing, such a system is rather unusual, because even a two-way distinction between a voiced and a voiceless unaspirated stop is easily lost in the languages of the world, especially in the case of velars. One possibility for maintaining this distinction is that supralaryngeal characteristics such as articulator velocity, the duration of surrounding vowels, or stop closure duration are involved. The aim of the present study is to set up a catalogue of parameters involved in the distinction of Korean velar stops in intervocalic position.
Two Korean speakers were recorded via Electromagnetic Articulography. The word material consisted of VCV sequences, where V is one of the three vowels /a/, /i/ or /u/ and C one of the Korean velars /g/, /k'/ or /kh/. Articulatory and acoustic signals were analysed. It turned out that the distinction is only partly built on laryngeal parameters and that supralaryngeal characteristics differ for the three stops. Another result is that the voicing contrast is not a matter of a single parameter; rather, a set of parameters is always involved. Furthermore, speakers seem to have a certain freedom in the choice of these parameters.
Escapist policy rules
(2003)
We study a simple, microfounded macroeconomic system in which the monetary authority employs a Taylor-type policy rule. We analyze situations in which the self-confirming equilibrium is unique and learnable according to Bullard and Mitra (2002). We explore the prospects for the use of 'large deviation' theory in this context, as employed by Sargent (1999) and Cho, Williams, and Sargent (2002). We show that our system can sometimes depart from the self-confirming equilibrium towards a non-equilibrium outcome characterized by persistently low nominal interest rates and persistently low inflation. Thus we generate events that have some of the properties of "liquidity traps" observed in the data, even though the policymaker remains committed to a Taylor-type policy rule which otherwise has desirable stabilization properties.
Mitogen activated protein kinases (MAPKs) are found in all eukaryotic cells and represent crucial elements in the signal transduction from the plasma membrane to the nucleus. Although a broad variety of extracellular stimuli activate MAPKs, they evoke very distinct cellular responses. The amplitude and duration of MAPK activation determine signal identity and ultimately cell fate. A tight and finely tuned regulation is therefore critical for a specific cellular response. The role and the regulation of extracellular signal-regulated kinase 5 (ERK5), a MAPK with a large and unique C-terminal tail, were studied in different cellular systems. The study highlights two aspects of ERK5 regulation: control of the phosphorylation state and regulated protein stability. In analogy to other MAPKs ERK5 is activated by dual phosphorylation of threonine and tyrosine residues in its activation motif. A first part of the study concentrates on whether and how the protein tyrosine phosphatase PTP-SL is involved in the downregulation of the ERK5 signal. The direct interaction of both proteins is shown to result in mutual modulation of their enzymatic activities. PTP-SL is a substrate of ERK5 and, independent of its phosphorylation, binding to the kinase enhances its catalytic phosphatase activity. On the other hand, interaction with PTP-SL does not only downregulate enzymatic ERK5 activity but also effectively impedes its translocation to the nucleus. The second part of this study focuses on the interaction of ERK5 with c-Abl and its oncogenic variants Bcr/Abl and v-Abl. In this study these tyrosine kinases are demonstrated to regulate ERK5 by two mechanisms: first, by induction of kinase activity and secondly, by stabilisation of the ERK5 protein. Stabilisation involves the direct interaction of unique ERK5 domains with Abl kinases and is independent of MAPK cascade activation. 
The level of ERK5 and its intrinsic basal activity – rather than its activation – are essential for v-Abl-induced transformation as well as for survival of Bcr/Abl-positive leukaemia cells. Stabilisation of ERK5 thus contributes to cell survival and should therefore be considered as an additional aspect in therapy of chronic myeloid leukaemia. Taken together, the results obtained in this study demonstrate that diverse pathways regulate ERK5 signalling by affecting kinase activity, localisation and protein stability. While the phosphatase PTP-SL is involved in negative regulation of ERK5, Abl kinases potently activate ERK5 and increase its half-life. Protein stabilisation thus is presented as a novel mechanism in the regulation of MAPKs.
The development of the renormalization group technique, which in its field-theoretic version goes back to ideas of Stückelberg and Petermann and in condensed matter physics to K. G. Wilson, has yielded essential insights into the nature of physical systems. In particular, the concept of so-called universality classes explains why systems described by seemingly very different Hamiltonians nevertheless exhibit essentially the same (low-energy) physics. A further reason for the success of this method is that it sums infinitely many Feynman diagrams in a systematic way and thus goes beyond conventional perturbation theory. This plays an important role in condensed matter physics above all when the physical system at hand is strongly correlated. In line with the multitude of possible applications, a wide range of different formulations of the renormalization group technique has emerged over the past decades. One of these is the so-called functional renormalization group, which goes back to Wegner and Houghton and which is also used and further developed in the present work. Here we have placed particular emphasis on the inclusion of the important rescaling steps. As a first field of application of the newly developed formalism, strongly correlated electrons in one spatial dimension were chosen, in particular a model known as the Tomonaga-Luttinger model (TLM). In the TLM, electrons with a strictly linear energy dispersion interact exclusively via so-called forward-scattering processes. Owing to the linearization of the energy dispersion near the Fermi points, one obtains a model that can be solved exactly, e.g. by means of the so-called bosonization technique. The main goal of the present work is to reproduce the known spectral function of this model using the renormalization group formalism.
Compared with previous implementations of the renormalization group, in which only the flow of a finite number of coupling constants is considered, the calculation of the flow of entire correlation functions represents an enormous extension. The success of this approach in the TLM strengthens the hope that in the future it will also be possible to use this method to compute the spectral functions of other models for which conventional techniques fail.
Receptor tyrosine kinases of the epidermal growth factor (EGF) receptor family regulate essential cellular functions such as proliferation, survival, migration, and differentiation but also play central roles in the etiology and progression of tumors. We have identified short peptide sequences from a random peptide library integrated into the thioredoxin scaffold protein, which specifically bind to the intracellular domain of the EGF receptor (EGFR). These molecules have the potential to selectively inhibit specific aspects of EGF receptor signaling and might become valuable as anticancer agents. Intracellular expression of the aptamer encoding gene construct KDI1 or introduction of bacterially expressed KDI1 via a protein transduction domain into EGFR-expressing cells results in KDI1·EGF receptor complex formation, a slower proliferation, and reduced soft agar colony formation. Aptamer KDI1 did not summarily block the EGF receptor tyrosine kinase activity but selectively interfered with the EGF-induced phosphorylation of the tyrosine residues 845, 1068, and 1148 as well as the phosphorylation of tyrosine 317 of p46 Shc. EGF-induced phosphorylation of Stat3 at tyrosine 705 and Stat3-dependent transactivation were also impaired. Transduction of a short synthetic peptide aptamer sequence not embedded into the scaffold protein resulted in the same impairment of EGF-induced Stat3 activation.
Recently, we reported that in crude enzyme preparations, a monocyte-derived soluble protein (M-DSP) renders 5-lipoxygenase (5-LO) activity Ca2+-dependent. Here we provide evidence that this M-DSP is glutathione peroxidase (GPx)-1. Thus, the inhibitory effect of the M-DSP on 5-LO could be overcome by the GPx-1 inhibitor mercaptosuccinate and by the broad-spectrum GPx inhibitor iodoacetate, as well as by addition of 13(S)-hydroperoxy-9Z,11E-octadecadienoic acid (13(S)-HPODE). Also, the chromatographic characteristics and the estimated molecular mass (80-100 kDa) of the M-DSP fit GPx-1 (87 kDa), and GPx-1, isolated from bovine erythrocytes, mimicked the effects of the M-DSP. Intriguingly, only a trace amount of thiol (10 µM GSH) was required for reduction of 5-LO activity by GPx-1 or the M-DSP. Moreover, the requirement of Ca2+ allowing 5-LO product synthesis in various leukocytes correlated with the respective GPx-1 activities. Mutation of the Ca2+ binding sites within the C2-like domain of 5-LO resulted in strong reduction of 5-LO activity by M-DSP and GPx-1, also in the presence of Ca2+. In summary, our data suggest that interaction of Ca2+ at the C2-like domain of 5-LO protects the enzyme against the effect of GPx-1. Apparently, in the presence of Ca2+, a low lipid hydroperoxide level is sufficient for 5-LO activation.
Role in routing to the plasma membrane of the L0 domain of the multidrug resistance protein MRP1
(2003)
Multidrug resistance is based on the increased transport of xenobiotics out of the cell, which leads to a dramatic reduction in the intracellular concentration of chemotherapeutic agents. This effect is caused by transmembrane transporter proteins of the ABC family. MRP1, which can transport a large variety of substrates, belongs to this family. MRP1 is a 190 kDa glycoprotein with a predicted topology that, in addition to the typical P-gp-like core (ΔMRP1), contains an amino-proximal transmembrane domain consisting of five transmembrane alpha-helices. This domain is connected to ΔMRP1 by a cytoplasmic linker loop (L0). When MRP1 is expressed in polarized cells, it is routed to the basolateral membrane. In the present work, the function of the amino-terminal region of MRP1, which consists of the first transmembrane domain TMD0 and the cytoplasmic linker loop L0, was investigated by expression and coexpression of various MRP1 mutants in polarized MDCKII cells. It was shown that the L0 region contains an amphipathic helix that is necessary for the functionality of MRP1; that the isolated L0 peptide is able to associate with ΔMRP1 (thereby restoring the function of the protein and its localization in the basolateral membrane); that TMD0L0 is partly located in the basolateral membrane and that its presence is sufficient to enable the glycosylation (Fig. 4.17 in the dissertation) and the basolateral membrane localization of ΔMRP1 (Fig. 4.18 in the dissertation); that coexpression of the two complementary fragments yields wild-type-like transport activity (Fig. 4.19 in the dissertation); and that the two fragments interact (Fig. 4.21 in the dissertation).
In addition, a chimeric protein consisting of TMD0 of MRP1 and L0 of MRP2 was constructed and expressed in MDCKII and MDCKII-ΔMRP1 cells. It was found that this protein is incompletely glycosylated (Fig. 4.24 in the dissertation) and that it localizes to the endoplasmic reticulum (Fig. 4.25 in the dissertation).
A total of thirteen mosses are reported as new for Chile: Aloinella andina Delgad., Coscinodontella bryanii R.S. Williams, Didymodon acutus (Brid.) K. Saito, Erythrophyllopsis fuscula (Müll. Hal.) Hilp., Fissidens excurrentinervis R.S. Williams, Grimmia molesta J. Muñoz, Grimmia pseudoanodon Deguchi, Jaffueliobryum williamsii (Deguchi) Delgad., Leptopteriginandrum austroalpinum Müll. Hal., Pseudocrossidium elatum (R.S. Williams) Delgad., Rhexophyllum subnigrum (Mitt.) Hilp., Saitobryum lorentzii (Müll. Hal.) Ochyra, and Syntrichia fragilis (Taylor) Ochyra. In addition, Grimmia plagiopodia Hedw., which was previously known from Southern Chile, is reported ca. 3500 km further north, near the Bolivian border.
Dynamics of strange, charm and high momentum hadrons in relativistic nucleus-nucleus collisions
(2003)
We investigate the production and attenuation of hadrons with strange and charm quarks (or antiquarks), as well as of high transverse momentum hadrons, in relativistic nucleus-nucleus collisions from 2 A·GeV to 21.3 A·TeV within two independent transport approaches (UrQMD and HSD). Both transport models are based on quark, diquark, string and hadronic degrees of freedom, but do not include any explicit phase transition to a quark-gluon plasma. From our dynamical calculations we find that neither model describes the maximum in the K+/π+ ratio at 20 - 30 A·GeV in central Au+Au collisions found experimentally, though the excitation functions of strange mesons are reproduced well in HSD and UrQMD. Furthermore, the transport calculations show that charmonium recreation by D + D̄ → J/Ψ + meson reactions is comparable to the dissociation by comoving mesons at RHIC energies, contrary to SPS energies. This leads to the final result that the total J/Ψ suppression as a function of centrality at RHIC should be less than the suppression seen at SPS energies, where the comover dissociation is substantial and the backward channels play no role. Furthermore, our transport calculations in comparison to experimental data on transverse momentum spectra from pp, d+Au and Au+Au reactions show that pre-hadronic effects are responsible for both the hardening of the hadron spectra at low transverse momenta (Cronin effect) and the suppression of high pT hadrons. The mutual interactions of formed hadrons are found to be negligible in central Au+Au collisions at √s = 200 GeV for pT ≥ 6 GeV/c, and the sizeable suppression seen experimentally is attributed to a large extent to the interactions of leading pre-hadrons with the dense environment.
The transporter associated with antigen processing (TAP) plays a key role in the adaptive immune response by pumping antigenic peptides into the endoplasmic reticulum for subsequent loading of major histocompatibility complex class I molecules. TAP is a heterodimer consisting of TAP1 and TAP2. Each subunit is composed of a transmembrane domain and a nucleotide-binding domain, which energizes the peptide transport. To analyze ATP hydrolysis of each subunit we developed a method of trapping 8-azido-nucleotides to TAP in the presence of phosphate transition state analogs followed by photocross-linking, immunoprecipitation, and high resolution SDS-PAGE. Strikingly, trapping of both TAP subunits by beryllium fluoride is peptide-specific. The peptide concentration required for half-maximal trapping is identical for TAP1 and TAP2 and directly correlates with the peptide binding affinity. Only a background level of trapping was observed for low affinity peptides or in the presence of the herpes simplex viral protein ICP47, which specifically blocks peptide binding to TAP. Importantly, the peptide-induced trapped state is reached after ATP hydrolysis and not in a backward reaction of ADP binding and trapping. In the trapped state, TAP can neither bind nor exchange nucleotides, whereas peptide binding is not affected. In summary, these data support the model that peptide binding induces a conformation that triggers ATP hydrolysis in both subunits of the TAP complex within the catalytic cycle.
In this study, we perform a quantitative assessment of the role of money as an indicator variable for monetary policy in the euro area. We document the magnitude of revisions to euro area-wide data on output, prices, and money, and find that monetary aggregates have a potentially significant role in providing information about current real output. We then proceed to analyze the information content of money in a forward-looking model in which monetary policy is optimally determined subject to incomplete information about the true state of the economy. We show that monetary aggregates may have substantial information content in an environment with high variability of output measurement errors, low variability of money demand shocks, and a strong contemporaneous linkage between money demand and real output. As a practical matter, however, we conclude that money has fairly limited information content as an indicator of contemporaneous aggregate demand in the euro area.
Price stability and monetary policy effectiveness when nominal interest rates are bounded at zero
(2003)
This paper employs stochastic simulations of a small structural rational expectations model to investigate the consequences of the zero bound on nominal interest rates. We find that if the economy is subject to stochastic shocks similar in magnitude to those experienced in the U.S. over the 1980s and 1990s, the consequences of the zero bound are negligible for target inflation rates as low as 2 percent. However, the effects of the constraint are non-linear with respect to the inflation target and produce a quantitatively significant deterioration of the performance of the economy with targets between 0 and 1 percent. The variability of output increases significantly and that of inflation also rises somewhat. Also, we show that the asymmetry of the policy ineffectiveness induced by the zero bound generates a non-vertical long-run Phillips curve. Output falls increasingly short of potential with lower inflation targets.
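The nonlinearity of the zero bound with respect to the inflation target can be illustrated with a toy calculation. The sketch below is not the paper's structural model: it simply truncates a desired Taylor-rule rate at zero under Gaussian shocks, with all numerical values (an equilibrium real rate of 2 and a shock standard deviation of 2) chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def zlb_frequency(pi_target, r_star=2.0, sigma=2.0, n=1_000_000):
    """Fraction of draws in which the zero lower bound binds when the
    desired Taylor-rule rate i* = r_star + pi_target + shock is truncated
    at zero. Gaussian shocks and all parameter values are illustrative
    assumptions, not taken from the paper's model."""
    desired = r_star + pi_target + rng.normal(0.0, sigma, size=n)
    return np.mean(desired < 0.0)   # i = max(0, i*), so the bound binds here
```

With these illustrative numbers, the bound binds roughly 16% of the time at a zero inflation target but only about 2% of the time at a 2 percent target, echoing the sharp deterioration the abstract reports for targets between 0 and 1 percent.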
In this paper we estimate a small model of the euro area to be used as a laboratory for evaluating the performance of alternative monetary policy strategies. We start with the relationship between output and inflation and investigate the fit of the nominal wage contracting model due to Taylor (1980) and three different versions of the relative real wage contracting model proposed by Buiter and Jewitt (1981) and estimated by Fuhrer and Moore (1995a) for the United States. While Fuhrer and Moore reject the nominal contracting model in favor of the relative contracting model, which induces more inflation persistence, we find that both models fit euro area data reasonably well. When considering France, Germany and Italy separately, however, we find that the nominal contracting model fits German data better, while the relative contracting model does quite well in countries which transitioned out of a high inflation regime, such as France and Italy. We close the model by estimating an aggregate demand relationship and investigate the consequences of the different wage contracting specifications for the inflation-output variability tradeoff when interest rates are set according to Taylor's rule.
In this paper we study the role of the exchange rate in conducting monetary policy in an economy with near-zero nominal interest rates as experienced in Japan since the mid-1990s. Our analysis is based on an estimated model of Japan, the United States and the euro area with rational expectations and nominal rigidities. First, we provide a quantitative analysis of the impact of the zero bound on the effectiveness of interest rate policy in Japan in terms of stabilizing output and inflation. Then we evaluate three concrete proposals that focus on depreciation of the currency as a way to ameliorate the effect of the zero bound and evade a potential liquidity trap. Finally, we investigate the international consequences of these proposals.
We estimate a Bayesian vector autoregression for the U.K. with drifting coefficients and stochastic volatilities. We use it to characterize posterior densities for several objects that are useful for designing and evaluating monetary policy, including local approximations to the mean, persistence, and volatility of inflation. We present diverse sources of uncertainty that impinge on the posterior predictive density for inflation, including model uncertainty, policy drift, structural shifts and other shocks. We use a recently developed minimum entropy method to bring outside information to bear on inflation forecasts. We compare our predictive densities with the Bank of England's fan charts.
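The state-space structure behind a drifting-coefficient, stochastic-volatility model can be sketched in its simplest univariate form. The random-walk laws of motion and all numerical values below are illustrative assumptions; the paper itself works with a multivariate VAR and characterizes full posterior densities rather than a single simulated path.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
phi = np.empty(T)   # drifting autoregressive coefficient
h = np.empty(T)     # log-volatility of the measurement shock
y = np.empty(T)     # observed series (e.g. inflation)
phi[0], h[0], y[0] = 0.9, 0.0, 0.0

for t in range(1, T):
    phi[t] = phi[t - 1] + rng.normal(0.0, 0.01)    # coefficient drift (random walk)
    h[t] = h[t - 1] + rng.normal(0.0, 0.05)        # stochastic log-volatility (random walk)
    y[t] = phi[t] * y[t - 1] + np.exp(h[t] / 2) * rng.normal()
```

Estimation then runs this logic in reverse: given the observed y, Bayesian methods recover posterior densities for the latent phi and h paths, which is what allows persistence and volatility of inflation to change over time.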
Intangible assets such as goodwill, licenses, research and development, or customer relations are becoming increasingly important in high-technology and service-oriented economies. Yet a comparison of the book values of listed companies with their market capitalization suggests that financial reports fail to meet the information needs of market participants regarding the estimation of proper firm value. Moreover, with the introduction of Anglo-American accounting systems in Europe and Asia, we can observe diverging accounting practices for intangible assets, caused by different accounting standards, even in the accounts of companies domiciled in the same jurisdiction. To assess the relevance of intangible assets in the Japanese and German accounts of listed companies, we therefore measure certain balance sheet and profit and loss relations for goodwill and self-developed software. We compare and analyze valuation rules for goodwill and software costs according to German GAAP, Japanese GAAP, US GAAP and IAS to determine the possible impact of diverging rules on the comparability of the accounts. Our results show that comparability is impaired because of different accounting practices. The recognition and valuation of goodwill and self-developed software vary significantly according to the accounting regime applied. However, for the recognition of self-developed software, the average impact on asset coefficients or profit is not that high. Moreover, an industry bias can only be found for the financial industry. In contrast, for goodwill accounting we found major differences, especially between German and Japanese blue chips. The introduction of the new goodwill impairment-only approach and the prohibition of the pooling method may have a major impact, especially on Japanese companies’ accounts.
I would like to begin my presentation by quoting the first sentence of Shafii’s treatise er-Risala, the earliest work on the foundations of Islamic jurisprudence that has reached us: “Praise be to God; gratitude for one of His favors can only be paid through another favor of Him. And this favor generates a favor to be bestowed, wherefore one should feel obliged continuously to pay gratitude to God for each favor.” It is possible to conceive that Mercy (al-Rahma), the common expression of all favors granted by the Almighty Creator to human beings, has two salient characteristics: one is vertical, that is, with regard to the Creator and creatures; the other is horizontal, that is, concerning human relations among themselves as well as with other creatures. When the concept of Mercy is evaluated from the perspective of God-human relations in the existing world, it indicates that God’s favors are granted to all human beings without discrimination. ...
Crosslinguistic research on the production of tense morphology in child language has shown that young children use past or perfective forms mainly with telic predicates and present or imperfective forms mainly with atelic predicates. However, this pattern, which has come to be known as the Aspect First Hypothesis, has been challenged in a number of comprehension studies. These studies suggest that children do not rely on aspectual information for their interpretation of tense morphology. The present paper tests the validity of the Aspect First Hypothesis in child Greek by investigating Greek-speaking children’s early comprehension of present, past and future tense morphology as well as the role that lexical aspect plays in the early use of tense morphology. It is suggested that although Greek-speaking children have not yet fully mapped the tense concepts to the correct tense morphology, tense acquisition does not seem to be significantly affected by the aspectual characteristics (i.e. the telicity) of the verb.
This memorandum describes the approach of the U.S. Securities and Exchange Commission (the "SEC") in monitoring and, where appropriate, regulating the use of research reports by investment banking firms in connection with securities transactions. The memorandum addresses the historical system of regulation, which continues in large measure to apply. It also examines the new initiatives taken, following a number of prominent corporate, accounting and banking scandals and a significant decline in U.S. and international capital markets, to supplement the current system in what some have dubbed the "post-Enron era".
Sensitivity of output of a linear operator to its input can be quantified in various ways. In Control Theory, the input is usually interpreted as disturbance and the output is to be minimized in some sense. In stochastic worst-case design settings, the disturbance is considered random with imprecisely known probability distribution. The prior set of probability measures can be chosen so as to quantify how far the disturbance deviates from the white-noise hypothesis of Linear Quadratic Gaussian control. Such deviation can be measured by the minimal Kullback-Leibler informational divergence from the Gaussian distributions with zero mean and scalar covariance matrices. The resulting anisotropy functional is defined for finite power random vectors. Originally, anisotropy was introduced for directionally generic random vectors as the relative entropy of the normalized vector with respect to the uniform distribution on the unit sphere. The associated a-anisotropic norm of a matrix is then its maximum root mean square or average energy gain with respect to finite power or directionally generic inputs whose anisotropy is bounded above by a >= 0. We give a systematic comparison of the anisotropy functionals and the associated norms. These are considered for unboundedly growing fragments of homogeneous Gaussian random fields on multidimensional integer lattice to yield mean anisotropy. Correspondingly, the anisotropic norms of finite matrices are extended to bounded linear translation invariant operators over such fields.
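For a finite-power random vector w in R^m with probability density f, the anisotropy functional described above admits a closed form. The notation below follows the standard anisotropy-based control literature and is an assumption of this sketch rather than a quotation from the paper:

```latex
% Anisotropy of an m-dimensional finite-power random vector w with density f:
% the minimal Kullback-Leibler divergence from the Gaussian densities
% p_{m,\lambda} with zero mean and scalar covariance matrix \lambda I_m.
\mathbf{A}(w)
  = \min_{\lambda > 0} \mathrm{D}\!\left( f \,\middle\|\, p_{m,\lambda} \right)
  = \frac{m}{2} \ln\!\left( \frac{2\pi e}{m}\, \mathbf{E}\,|w|^{2} \right) - h(w)
```

Here D denotes Kullback-Leibler divergence and h(w) is the differential entropy of f; the minimization over the scalar variance λ is what makes the functional measure deviation from the white-noise hypothesis rather than from one fixed Gaussian. The a-anisotropic norm of an operator is then its worst-case root mean square gain over inputs satisfying A(w) ≤ a.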
We estimate a model with latent factors that summarize the yield curve (namely, level, slope, and curvature) as well as observable macroeconomic variables (real activity, inflation, and the stance of monetary policy). Our goal is to provide a characterization of the dynamic interactions between the macroeconomy and the yield curve. We find strong evidence of the effects of macro variables on future movements in the yield curve and much weaker evidence for a reverse influence. We also relate our results to a traditional macroeconomic approach based on the expectations hypothesis.
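The abstract does not name a parameterization, but level, slope, and curvature factors are commonly encoded via Nelson-Siegel factor loadings. The sketch below, with an assumed decay parameter, shows how a yield at each maturity loads on the three factors:

```python
import numpy as np

def nelson_siegel_loadings(tau, lam=0.0609):
    """Loadings of the yield at maturity tau (in months) on the level,
    slope, and curvature factors of a Nelson-Siegel curve. The decay
    parameter lam = 0.0609 is a common monthly choice, assumed here
    rather than taken from the abstract."""
    x = lam * tau
    level = 1.0                         # loads equally at all maturities
    slope = (1.0 - np.exp(-x)) / x      # decays from 1 toward 0 with maturity
    curvature = slope - np.exp(-x)      # humped: near 0 at both extremes
    return level, slope, curvature
```

Short maturities load almost entirely on level and slope, while the curvature loading peaks at medium maturities, which is why the third latent factor captures the hump of the yield curve.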
This paper proposes an intertemporal model of venture capital investment with screening and advising where the venture capitalist´s time endowment is the scarce input factor. Screening improves the selection of firms receiving finance, advising allows firms to develop a marketable product, both have a variable intensity. In our setup, optimal linear contracts solves the moral hazard problem. Screening however asks for an entrepreneur wage and does not allow for upfront payments which would cause severe adverse selection. Project characteristics have implications for screening and advising intensity and the distribution of profits. Finally, we develop a formal version of the "venture capital cycle" by extending the basic setup to a simple model of venture capital supply and demand.
Revised Draft: January 2005, First Draft: December 8, 2004. The picture of dispersed, isolated and uninterested shareholders so graphically drawn by Adolf Berle and Gardiner Means in 1932 is for the most part no longer accurate in today's market, although their famous observations on the separation of control and ownership of public corporations remain true.
The influence of high and low energy hadronic models on the lateral distribution functions of cosmic ray air showers at Auger energies is explored. A large variety of presently used high and low energy hadron interaction models is analysed and the resulting lateral distribution functions are compared. We show that the slope of the lateral distribution functions depends on both the high and the low energy hadronic model used. The models are confronted with available hadron-nucleus data from accelerator experiments.
The SENECA model, a new hybrid approach to air shower simulations, is presented. It combines the use of efficient cascade equations in the energy range where a shower can be treated as one-dimensional with a traditional Monte Carlo method which traces individual particles. This allows one to reproduce both the natural fluctuations of individual showers and the lateral spread of low energy particles, while remaining efficient in computation time. As an application of the new approach, the influence of the low energy hadronic models on shower properties at AUGER energies is studied. We conclude that these models have a significant impact on the tails of the lateral distribution functions and therefore deserve more attention.
Tetratheca juncea Smith (family Tremandraceae) is a terrestrial herbaceous plant now mainly found in the Lake Macquarie area of coastal NSW and listed as Vulnerable under Schedule 2 of the NSW Threatened Species Conservation Act 1995. This study, carried out from July 2001 to June 2002, records the observation and identification of two species of native bee buzz-pollinating its flowers and describes a direct relationship between the first appearance of a pollinator and the commencement of seed set. Findings from this study with respect to the pollination ecology of Tetratheca juncea are:
• There is a strong flowering period from September to January, though a number of flowers can be found on some plants across the geographic range of the plant in all months of the year;
• Two species of native bee, Lasioglossum convexum and Exoneura sp., were confirmed collecting pollen from the flowers by way of buzz pollination;
• Fruiting only occurred in coincidence with flower pollination by these bees;
• Flowering, seed set and seed release proceeded concurrently for as long as the bees were active;
• The bees are polylectic and the sexual reproductive process in Tetratheca juncea appears to be pollinator-limited.
Recent empirical work shows that a better legal environment leads to lower expected rates of return in an international cross-section of countries. This paper investigates whether differences in firm-specific corporate governance also help to explain expected returns in a cross-section of firms within a single jurisdiction. Constructing a corporate governance rating (CGR) for German firms, we document a positive relationship between the CGR and firm value. In addition, there is strong evidence that expected returns are negatively correlated with the CGR, if dividend yields and price-earnings ratios are used as proxies for the cost of capital. Most results are robust to endogeneity, with causation running from corporate governance practices to firm fundamentals. Finally, an investment strategy that bought high-CGR firms and shorted low-CGR firms would have earned abnormal returns of around 12 percent on an annual basis during the sample period. We rationalize the empirical evidence with lower agency costs and/or the removal of certain governance malfunctions for the high-CGR firms.
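The zero-investment strategy mentioned above amounts to a simple long-short spread computation. In the sketch below, all return figures are hypothetical placeholders, not the paper's data:

```python
import numpy as np

# Hypothetical annual returns for firms in the top and bottom governance
# quintiles (high vs. low corporate governance rating); illustration only.
high_cgr_returns = np.array([0.14, 0.18, 0.11])  # long leg
low_cgr_returns = np.array([0.02, 0.05, 0.04])   # short leg

def long_short_return(long_leg, short_leg):
    """Return of a zero-investment portfolio: long the high-CGR firms,
    short the low-CGR firms, equally weighted within each leg."""
    return long_leg.mean() - short_leg.mean()

spread = long_short_return(high_cgr_returns, low_cgr_returns)
```

The paper's abnormal-return figure additionally adjusts this raw spread for risk exposure, which the sketch omits.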
"[...] In 1639, Martin Opitz rescued for us the only complete surviving text of the Annolied (circa 1083), and now Graeme Dunphy has made available a reprint of the Opitz edition and with it Opitz’s prologue and notes, a new English translation, and the translator’s informative notes on the translation and on Opitz’s commentary. In his prologue Opitz expresses the purpose of the edition, which is to demonstrate that the German language was inherited by his contemporaries in an unbroken line from earliest times. This is a strikingly early formulation of the romantic thesis the Grimm brothers developed later. Thus by including Opitz’s prologue and notes on his sources and philological explanations, Dunphy gives us the essential tools to re-invigorate research in three areas: Opitz, who is too frequently thought of as a narrowly focused poeticist, the serious study of philology and history in the sixteenth century, and most importantly, the Annolied itself. [...]" Source: Maria Dobozy: http://www.iaslonline.de/index.php?vorgang_id=751
As major sources of reactive oxygen species (ROS), mitochondrial structures are exposed to high concentrations of ROS and may therefore be particularly susceptible to oxidative damage. Mitochondrial damage could play a pivotal role in the cell death decision. A decrease in mitochondrial energy charge and redox state, loss of transmembrane potential (depolarization), mitochondrial respiratory chain impairment, and release of substances such as calcium and cytochrome c all contribute to apoptosis. These mitochondrial abnormalities may constitute part of the spectrum of chronic oxidative stress in Alzheimer's disease. Accumulation of amyloid beta (Abeta) in the form of senile plaques is also thought to play a central role in the pathogenesis of Alzheimer's disease mediated by oxidative stress. In addition, increasing evidence shows that Abeta generates free radicals in vitro, which mediate the toxicity of this peptide. In our study, PC12 cells were used to examine the protective features of EGb 761 (for a definition, see the editorial) on mitochondria stressed with hydrogen peroxide and antimycin, an inhibitor of complex III. In addition, we investigated the efficacy of EGb 761 in Abeta-induced MTT reduction in PC12 cells. Moreover, we examined the effects of EGb 761 on ROS levels and ROS-induced apoptosis in lymphocytes from aged mice after in vivo administration. We report that EGb 761 was able to protect mitochondria from the attack of hydrogen peroxide, antimycin and Abeta. Furthermore, EGb 761 reduced ROS levels and ROS-induced apoptosis in lymphocytes from aged mice treated orally with EGb 761 for 2 weeks. Our data further emphasize the neuroprotective properties of EGb 761, such as protection against Abeta toxicity, and its antiapoptotic properties, which are probably due to its preventive effects on mitochondria.
While hedge funds have been around at least since the 1940s, it has only been in the last decade or so that they have attracted the widespread attention of investors, academics and regulators. Investors, mainly wealthy individuals but also increasingly institutional investors, are attracted to hedge funds because they promise high “absolute” returns -- high returns even when returns on mainstream asset classes like stocks and bonds are low or negative. This prospect, not surprisingly, has increased interest in hedge funds in recent years as returns on stocks have plummeted around the world, and as investors have sought alternative investment strategies to insulate themselves in the future from the kind of bear markets we are now experiencing. Government regulators, too, have become increasingly attentive to hedge funds, especially since the notorious collapse of the hedge fund Long-Term Capital Management (LTCM) in September 1998. Over the course of only a few months during the summer of 1998 LTCM lost billions of dollars because of failed investment strategies that were not well understood even by its own investors, let alone by its bankers and derivatives counterparties. LTCM had built up huge leverage both on and off the balance sheet, so that when its investments soured it was unable to meet the demands of creditors and derivatives counterparties. Had LTCM’s counterparties terminated and liquidated their positions with LTCM, the result could have been a severe liquidity shortage and sharp changes in asset prices, which many feared could have impaired the solvency of other financial institutions and destabilized financial markets generally. The Federal Reserve did not wait to see if this would happen. It intervened to organize an immediate (September 1998) creditor-bailout by LTCM’s largest creditors and derivatives counterparties, preventing the wholesale liquidation of LTCM’s positions.
Over the course of the year that followed the bailout, the creditor committee charged with managing LTCM’s positions effected an orderly work-out and liquidation of LTCM’s positions. We will never know what would have happened had the Federal Reserve not intervened. In defending the Federal Reserve’s unusual actions in coming to the assistance of an unregulated financial institution like a hedge fund, William McDonough, the president of the Federal Reserve Bank of New York, stated that it was the Federal Reserve’s judgement that the “...abrupt and disorderly close-out of LTCM’s positions would pose unacceptable risks to the American economy. ... there was a likelihood that a number of credit and interest rate markets would experience extreme price moves and possibly cease to function for a period of one or more days and maybe longer. This would have caused a vicious cycle: a loss of investor confidence, leading to further liquidations of positions, and so on.” The near-collapse of LTCM galvanized regulators throughout the world to examine the operations of hedge funds to determine if they posed a risk to investors and to financial stability more generally. Studies were undertaken by nearly every major central bank, regulatory agency, and international “regulatory” committee (such as the Basle Committee and IOSCO), and reports were issued by, among others, the President’s Working Group on Financial Markets, the United States General Accounting Office (GAO), the Counterparty Risk Management Policy Group, the Basle Committee on Banking Supervision, and the International Organization of Securities Commissions (IOSCO). Many of these studies concluded that there was a need for greater disclosure by hedge funds in order to increase transparency and enhance market discipline by creditors, derivatives counterparties and investors. In the Fall of 1999 two bills were introduced before the U.S.
Congress, directed at increasing hedge fund disclosure (the “Hedge Fund Disclosure Act” [the “Baker Bill”] and the “Markey/Dorgan Bill”). But when the legislative firestorm sparked by the LTCM episode finally quieted, there was no new regulation of hedge funds. This paper provides an overview of the regulation of hedge funds and examines the key regulatory issues that now confront regulators throughout the world. In particular, two major issues are examined. First, whether hedge funds pose a systemic threat to the stability of financial markets, and, if so, whether additional government regulation would be useful. And second, whether existing regulation provides sufficient protection for hedge fund investors, and, if not, what additional regulation is needed.
Equal size, equal role? : interest rate interdependence between the Euro area and the United States
(2003)
This paper investigates whether the degree and the nature of economic and monetary policy interdependence between the United States and the euro area have changed with the advent of EMU. Using real-time data, it addresses this issue from the perspective of financial markets by analysing the effects of monetary policy announcements and macroeconomic news on daily interest rates in the United States and the euro area. First, the paper finds that the interdependence of money markets has increased strongly around EMU. Although spillover effects from the United States to the euro area remain stronger than in the opposite direction, we present evidence that US markets have started reacting also to euro area developments since the onset of EMU. Second, beyond these general linkages, the paper finds that certain macroeconomic news releases about the US economy have a large and significant effect on euro area money markets, and that these effects have become stronger in recent years. Finally, we show that US macroeconomic news releases have become good leading indicators for economic developments in the euro area. This indicates that the higher money market interdependence between the United States and the euro area is at least partly explained by the increased real integration of the two economies in recent years.
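The announcement-effect estimation described above can be sketched as an event-study regression of daily rate changes on the surprise component of an announcement (released value minus survey expectation). All numbers below are simulated placeholders, not the paper's real-time data:

```python
import numpy as np

# Simulated data: 200 announcement days. The "true" response of euro area
# money market rates to a one-standard-deviation US news surprise is set
# to 4 basis points (0.04), plus idiosyncratic daily noise.
rng = np.random.default_rng(0)
surprise = rng.normal(size=200)  # standardized news surprises
rate_change = 0.04 * surprise + rng.normal(scale=0.02, size=200)

# OLS with intercept: rate_change = alpha + beta * surprise + error
X = np.column_stack([np.ones_like(surprise), surprise])
alpha, beta = np.linalg.lstsq(X, rate_change, rcond=None)[0]
# beta estimates the interest rate response per unit of news surprise
```

Comparing such beta estimates across subsamples (pre- and post-EMU) is one way to test whether spillovers have strengthened over time.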
The endothelin B receptor belongs to the rhodopsin-like family of G-protein coupled receptors. It plays an important role in vasodilatation and is found in the membranes of the endothelial cells enveloping blood vessels. During the course of this work, the production of recombinant human ETB receptor in yeast, insect and mammalian cells was evaluated. A number of different receptor constructs for production in the yeast P. pastoris were prepared. Various affinity tags were appended to the receptor N- and C-termini to enable receptor detection and purification. The clone pPIC9KFlagHisETBBio, with an expression level of 60 pmol/mg, yielded the highest amount of active receptor (1.2 mg of receptor per liter of shaking culture). The expression level of the same clone in fermentor culture was 17 pmol/mg, and from a 10 L fermentor it was possible to obtain 3 kg of cells that contained 20-39 mg of the receptor. For receptor production in insect cells, Sf9 (S. frugiperda) suspension cells were infected with the recombinant baculovirus pVlMelFlagHisETBBio. The peak of receptor production was reached at 66 h post infection, and radioligand binding assays on insect cell membranes showed 30 pmol of active receptor/mg of membrane protein. Subsequently, the efficiency of different detergents in solubilizing the active receptor was evaluated. N-dodecyl-beta-D-maltoside (LM), lauryl-sucrose and digitonin/cholate performed best, and LM was chosen for further work. The ETB receptor was produced in mammalian cells using the Semliki Forest Virus expression system. Radioligand binding assays on membranes from CHO cells infected with the recombinant virus pSFV3CAPETBHis showed 7 pmol of active receptor/mg of membrane protein. Since the receptor yield from mammalian cells was much lower than in yeast and insect cells, this system was not used for further large-scale receptor production.
After production in yeast and insect cells, the ETB receptor was saturated with its ligand, endothelin-1, in order to stabilize its native form. The receptor was subsequently solubilized with n-dodecyl-beta-D-maltoside and subjected to purification on various affinity matrices. Two-step affinity purification via Ni2+-NTA and monomeric avidin proved the most efficient way to purify milligram amounts of the receptor. The purity of the receptor preparation after this procedure was over 95%, as judged from silver-stained gels. However, the tendency of the ETB receptor produced in yeast to form aggregates was a constant problem. Attempts were made to stabilize the active, monomeric form of the receptor by testing a variety of different buffer conditions, but further efforts in this direction will be necessary in order to solve the aggregation problem. In contrast to preparations from yeast, the purification of the ETB receptor produced in insect cells yielded homogeneous receptor preparations, as shown by gel filtration analysis. This work has demonstrated that the amounts of receptor expressed in yeast and insect cells, and the final yield of receptor isolated by purification, represent a good basis for beginning 3D and continuing 2D crystallization trials.
Some of the most widely expressed myths about the German financial system are concerned with the close ties and intensive interaction between banks and firms, often described as Hausbank relationships. Links between banks and firms include direct shareholdings, board representation, and proxy voting and are particularly significant for corporate governance. Allegedly, these relationships promote investment and improve the performance of firms. Furthermore, German universal banks are believed to play a special role as large and informed monitoring investors (shareholders). However, for the very same reasons, German universal banks are frequently accused of abusing their influence on firms by exploiting rents and sustaining the entrenchment of firms against efficient transfers of firm control. In this paper, we review recent empirical evidence regarding the special role of banks for the corporate governance of German firms. We differentiate between large exchange-listed firms and small and medium-sized companies throughout. With respect to the role of banks as monitoring investors, the evidence does not uniformly support a special role of banks for large firms. Only one study finds that banks' control of management goes beyond what nonbank shareholders achieve. Proxy-voting rights apparently do not provide a significant means for banks to exert management control. Most of the recent evidence regarding small firms suggests that a Hausbank relationship can indeed be beneficial. Hausbanks are more willing to sustain financing when borrower quality deteriorates, and they invest more often than arm's-length banks in workouts if borrowers face financial distress.
The development of tractable forward looking models of monetary policy has led to an explosion of research on the implications of adopting Taylor-type interest rate rules. Indeterminacies have been found to arise for some specifications of the interest rate rule, raising the possibility of inefficient fluctuations due to the dependence of expectations on extraneous "sunspots". Separately, recent work by a number of authors has shown that sunspot equilibria previously thought to be unstable under private agent learning can in some cases be stable when the observed sunspot has a suitable time series structure. In this paper we generalize the "common factor" technique used in this analysis to examine standard monetary models that combine forward looking expectations and predetermined variables. We consider a variety of specifications that incorporate both lagged and expected inflation in the Phillips Curve, and both expected inflation and inertial elements in the policy rule. We find that some policy rules can indeed lead to learnable sunspot solutions and we investigate the conditions under which this phenomenon arises.
Most systematic discussion of dyad morphemes has focussed on Australian languages, owing to a combination of their relative prevalence there and the development of a descriptive tradition that investigates them in some depth. In the course of researching this paper, however, I became aware of functionally and semantically similar morphemes in many other parts of the world, almost invariably described in isolation from any typological reference point. I have incorporated such data as far as I am aware of it, in the hope that a systematic study will encourage other investigators to identify, and investigate in detail, similar constructions in a range of languages. The current state of research, however, together with some interesting geographical skewings that I discuss below (outside Australia, dyad constructions almost exclusively employ reciprocal morphology), means that most of this paper will focus on Australian languages.
An economy in which deposit-taking banks of a Diamond/Dybvig style and an asset market coexist is modelled. Firstly, within this framework we characterize distinct financial systems depending on the fraction of households with direct investment opportunities that are less efficient than those available to banks. With this fraction comparatively low, the evolving financial system can be interpreted as market-oriented. In this system, banks only provide efficient investment opportunities to households with inferior investment alternatives. Banks are not active in the secondary financial market nor do they provide any liquidity insurance to their depositors. Households participate to a large extent in the primary as well as in the secondary financial markets. In the other case of a relatively high fraction of households with inefficient direct investment opportunities, a bank-dominated financial system arises, in which banks provide liquidity transformation, are active in secondary financial markets and are the only player in primary markets, while households only participate in secondary financial markets. Secondly, we analyze the effect a run on a single bank has on the entire financial system. Interestingly, we can show that a bank run on a single bank causes contagion via the financial market neither in market-oriented nor in extremely bank-dominated financial systems. In only moderately bank-dominated (or hybrid) financial systems, however, do fire sales of long-term financial claims by a distressed bank cause a sudden drop in asset prices that precipitates other banks into crisis.
Obstacle detection is an important part of video processing because it is indispensable for collision prevention in autonomously navigating moving objects. For example, vehicles driving without human guidance need a robust prediction of potential obstacles, like other vehicles or pedestrians. Most common approaches to obstacle detection so far use analytical and statistical methods like motion estimation or the generation of maps. In the first part of this contribution a statistical algorithm for obstacle detection in monocular video sequences is presented. The proposed procedure is based on motion estimation and a planar world model which is appropriate to traffic scenes. The processing steps of the statistical procedure are feature extraction, a subsequent displacement vector estimation and a robust estimation of the motion parameters. Since the proposed procedure is composed of several processing steps, the error propagation of the successive steps often leads to inaccurate results. In the second part of this contribution it is demonstrated that the above mentioned problems can be efficiently overcome by using Cellular Neural Networks (CNN). It will be shown that a direct obstacle detection algorithm can easily be performed based only on CNN processing of the input images. Besides the enormous computing power of programmable CNN-based devices, the proposed method is also very robust in comparison to the statistical method, because it shows much less sensitivity to noisy inputs. Using the proposed approach to obstacle detection in planar worlds, real-time processing of large input images has been made possible.
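The displacement vector estimation step mentioned above can be sketched with exhaustive block matching. This is a simplified stand-in for the feature-based estimator of the paper, not its actual algorithm; block size and search range are arbitrary choices:

```python
import numpy as np

def block_matching(prev, curr, block=8, search=4):
    """Estimate displacement vectors between two grayscale frames by
    exhaustive block matching with the sum of absolute differences (SAD).

    For each non-overlapping block in the previous frame, searches a
    (2*search+1)^2 neighbourhood in the current frame and returns the
    best-matching (dy, dx) displacement per block.
    """
    H, W = prev.shape
    vectors = {}
    for y in range(0, H - block + 1, block):
        for x in range(0, W - block + 1, block):
            ref = prev[y:y + block, x:x + block]
            best, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= H - block and 0 <= xx <= W - block:
                        sad = np.abs(curr[yy:yy + block, xx:xx + block] - ref).sum()
                        if sad < best:
                            best, best_v = sad, (dy, dx)
            vectors[(y, x)] = best_v
    return vectors
```

Fitting the planar-world motion model to such a vector field, with a robust estimator to reject outliers, corresponds to the subsequent steps of the statistical procedure.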