University Publications
In bioinformatics, biochemical pathways can be modeled by many coupled differential equations. How to fit the large number of equation parameters to the available data is still an open problem, so an approach that systematically learns the parameters is needed. In this paper, a network is constructed for the small but important example of inflammation modeling, and different learning algorithms are proposed. It turns out that, due to the nonlinear dynamics, evolutionary approaches are necessary to fit the parameters to the sparse available data. Keywords: model parameter adaptation, septic shock, coupled differential equations, genetic algorithm.
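As a rough illustration of the approach sketched in this abstract, the following is a minimal genetic-algorithm fit of the rate constants of a small coupled-ODE model to sparse observations. The two-state model, the rate names k1/k2 and all GA settings are illustrative assumptions, not the paper's actual inflammation network.

```python
import numpy as np
from scipy.integrate import odeint

# Illustrative two-state pathway model; the paper's actual network differs.
def model(y, t, k1, k2):
    a, b = y
    da = -k1 * a * b          # nonlinear coupling
    db = k1 * a * b - k2 * b  # production and decay
    return [da, db]

def loss(params, t_obs, y_obs):
    """Mean squared deviation between simulated trajectories and data."""
    sol = odeint(model, y_obs[0], t_obs, args=tuple(params))
    return np.mean((sol - y_obs) ** 2)

def genetic_fit(t_obs, y_obs, pop_size=50, gens=200, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.01, 2.0, size=(pop_size, 2))   # random initial rate pairs
    for _ in range(gens):
        fitness = np.array([loss(p, t_obs, y_obs) for p in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]       # truncation selection
        idx = rng.integers(0, len(parents), size=(pop_size - len(parents), 2))
        children = 0.5 * (parents[idx[:, 0]] + parents[idx[:, 1]])  # arithmetic crossover
        children += rng.normal(0.0, sigma, children.shape)          # mutation
        pop = np.vstack([parents, np.abs(children)])                # keep rates positive
    return pop[np.argmin([loss(p, t_obs, y_obs) for p in pop])]
```

Because the loss surface of such nonlinear dynamics is multimodal, a population-based search of this kind avoids the local minima that trap gradient methods, which is the point the abstract makes.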
The selection of features for classification, clustering and approximation is an important task in pattern recognition, data mining and soft computing. For real-valued features, this contribution shows how feature selection for a high number of features can be implemented using mutual information. In particular, the common problem in mutual information computation of estimating joint probabilities for many dimensions from only a few samples is treated by using the Rényi mutual information of order two as the computational basis. For this, the Grassberger-Takens correlation integral is used, which was developed for estimating probability densities in chaos theory. Additionally, an adaptive procedure for computing the hypercube size is introduced, and for real-world applications the treatment of missing values is included. The computation procedure is accelerated by exploiting the ranking of the set of real feature values, especially for time series. As an example, a small blackbox-glassbox experiment shows how the relevant features and their time lags are determined in the time series even if the input feature time series determine the output nonlinearly. A more realistic example from the chemical industry shows that this enables a better approximation of the input-output mapping than the best neural network approach developed for an international contest. With this computationally efficient implementation, mutual information becomes an attractive tool for feature selection even for a high number of real-valued features.
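A minimal sketch of the core estimator described above: order-2 Rényi mutual information computed from Grassberger-Takens correlation sums. The fixed hypercube size eps is a simplification; the paper adapts it to the data and additionally handles missing values and ranking-based acceleration, which are omitted here.

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_integral(data, eps):
    """Grassberger-Takens correlation sum: the fraction of sample pairs closer
    than eps in the maximum norm, estimating the order-2 probability integral."""
    d = pdist(data.reshape(len(data), -1), metric="chebyshev")
    return np.mean(d < eps)

def renyi_mi2(x, y, eps=0.25):
    """Order-2 Rényi mutual information I2 = H2(X) + H2(Y) - H2(X,Y), with each
    H2 estimated as -log of a correlation integral. Assumes x and y are roughly
    unit-scale (standardize first); eps is fixed here rather than adapted."""
    cx = correlation_integral(x, eps)
    cy = correlation_integral(y, eps)
    cxy = correlation_integral(np.column_stack([x, y]), eps)
    return np.log(cxy / (cx * cy))
```

Features (or lagged copies of time series) can then be ranked by their estimated I2 with the target and the top candidates retained.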
In its first part, this contribution briefly reviews the application of neural network methods to medical problems and characterizes their advantages and problems in the context of the medical background. Successful application examples show that human diagnostic capabilities are significantly worse than those of the neural diagnostic systems. Then the paradigm of neural networks is briefly introduced, and the main problems of medical databases and the basic approaches for training and testing a network on medical data are described. Additionally, the problem of interfacing the network and interpreting its results is discussed, and the neuro-fuzzy approach is presented. Finally, as a case study of neural rule-based diagnosis, septic shock diagnosis is described, on the one hand by a growing neural network and on the other hand by a rule-based system. Keywords: statistical classification, adaptive prediction, neural networks, neuro-fuzzy, medical systems
In contrast to the symbolic approach, neural networks are seldom designed to explain what they have learned. This is a major obstacle to their use in everyday life. With the appearance of neuro-fuzzy systems, which use vague, human-like categories, the situation has changed. Based on the well-known learning mechanisms for RBF networks, a special neuro-fuzzy interface is proposed in this paper. It is especially useful in medical applications, using the notation and habits of physicians and other medically trained people. As an example, a liver disease diagnosis system is presented.
The prevention of credit card fraud is an important application for prediction techniques. One major obstacle to using neural network training techniques is the high diagnostic quality required: since only one financial transaction in a thousand is invalid, no prediction success below 99.9% is acceptable. Due to these proportions of credit card transactions, completely new concepts had to be developed and tested on real credit card data. This paper shows how advanced data mining techniques and neural network algorithms can be combined successfully to obtain a high fraud coverage combined with a low false alarm rate.
This paper describes the use of a Radial Basis Function (RBF) neural network in the approximation of process parameters for the extrusion of a rubber profile in tyre production. After introducing the rubber industry problem, the RBF network model and its learning algorithm are developed; the algorithm grows the number of RBF units to compensate for the approximation error until a desired error limit is reached. Its performance is shown for simple analytic examples. The paper then describes the modelling of the industrial problem. Simulations show good results, even when using only a few training samples. The paper is concluded by a discussion of possible systematic error influences, improvements and potential generalisation benefits. Keywords: adaptive process control; parameter estimation; RBF nets; rubber extrusion
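A minimal sketch of a growing RBF scheme of the kind described above: units are inserted until the maximum residual falls below the desired error limit. Placing each new unit at the worst-approximated sample and using a fixed Gaussian width are simplifying assumptions; the paper's actual insertion rule may differ.

```python
import numpy as np

def rbf(x, centers, widths):
    """Gaussian RBF activations for a batch of inputs x (N, D)."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * widths**2))

def grow_rbf_net(x, y, err_limit=1e-2, width=0.5, max_units=100):
    """Grow the hidden layer one unit at a time, refitting the linear output
    weights after each insertion, until the worst residual is below err_limit."""
    centers = x[:1].copy()
    widths = np.array([width])
    while True:
        phi = rbf(x, centers, widths)
        w, *_ = np.linalg.lstsq(phi, y, rcond=None)   # refit output weights
        resid = y - phi @ w
        worst = np.argmax(np.abs(resid))
        if np.abs(resid[worst]) < err_limit or len(centers) >= max_units:
            return centers, widths, w
        centers = np.vstack([centers, x[worst]])      # new unit at worst point
        widths = np.append(widths, width)
```

For the small training sets mentioned in the abstract, such constructive schemes are attractive because the network size adapts to the data rather than being fixed in advance.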
The encoding of images by semantic entities is still an unresolved task. This paper proposes encoding images by only a few important components or image primitives. Classically, this can be done by Principal Component Analysis (PCA). Recently, Independent Component Analysis (ICA) has attracted strong interest in the signal processing and neural network communities. Using it for pattern primitives, we aim for source patterns with the highest occurrence probability or highest information. For the example of a synthetic image composed of characters, this idea selects the salient ones. For natural images it does not lead to an acceptable reproduction error, since no a priori probabilities can be computed. Combining the traditional principal component criteria of PCA with the independence property of ICA, we obtain a better encoding. It turns out that the Independent Principal Components (IPC), in contrast to the Principal Independent Components (PIC), implement the classical demand of Shannon's rate distortion theory.
This paper proposes a new approach for encoding images by only a few important components. Classically, this is done by Principal Component Analysis (PCA). Recently, Independent Component Analysis (ICA) has attracted strong interest in the neural network community. Applied to images, we aim for the most important source patterns with the highest occurrence probability or highest information, called principal independent components (PIC). For the example of a synthetic image composed of characters, this idea selects the salient ones. For natural images it does not lead to an acceptable reproduction error, since no a priori probabilities can be computed. Combining the traditional principal component criteria of PCA with the independence property of ICA, we obtain a better encoding. It turns out that this definition of PIC implements the classical demand of Shannon's rate distortion theory.
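The combination of the PCA and ICA criteria discussed in the two abstracts above can be sketched as follows: first reduce to the leading principal subspace, then rotate that subspace towards statistically independent directions. The use of scikit-learn's PCA/FastICA and the patch-matrix input format are assumptions for illustration, not the papers' own algorithm.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def independent_principal_components(patches, n_components=16):
    """patches: (n_samples, n_pixels) matrix of flattened image patches.
    Step 1 (PCA): keep the high-variance subspace, minimizing reconstruction
    error. Step 2 (ICA): rotate inside that subspace to maximally independent
    directions. The result combines both encoding criteria."""
    pca = PCA(n_components=n_components, whiten=True)
    scores = pca.fit_transform(patches)
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(scores)
    return sources, pca, ica
```

Doing ICA inside the principal subspace, rather than on the raw pixels, keeps the reproduction error of PCA while adding the independence property, which mirrors the combination the abstracts argue for.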
This paper describes the problems of process control in the rubber industry and an adaptive solution for them. We show that the human and economic benefits of an adaptive solution for the approximation of process parameters are very attractive. The industrial problem is modelled by means of artificial neural networks. For the example of the extrusion of a rubber profile in tire production, our method shows good results even when using only a few training samples.
In this paper we first consider the situation where parallel channels are disturbed by noise. With the goal of maximal information conservation, we deduce the conditions for a transform which "immunizes" the channels against noise influence before the signals are used in later operations. It turns out that the signals have to be decorrelated and normalized by the filter, which for the case of one channel corresponds to the classical result of Shannon. Additional simulations for image encoding and decoding show that this constitutes an efficient approach for noise suppression. Furthermore, from a corresponding objective function we deduce the stochastic and deterministic learning rules for a neural network that implements the data orthonormalization. In comparison with existing normalization networks, our network performs approximately the same in the stochastic case but, by its generic derivation, ensures convergence and enables its use as an independent building block in other contexts, e.g. whitening for independent component analysis. Keywords: information conservation, whitening filter, data orthonormalization network, image encoding, noise suppression.
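For reference, a closed-form batch equivalent of the data orthonormalization that the network above learns adaptively: a symmetric whitening filter that decorrelates and normalizes the channels so their covariance becomes the identity. This is a minimal sketch of the operation, not the paper's learning rules.

```python
import numpy as np

def whiten(x):
    """Symmetric (ZCA-style) whitening of zero-mean data x (n_samples, n_channels):
    after the transform, the channel covariance is (approximately) the identity."""
    xc = x - x.mean(axis=0)
    cov = np.cov(xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    w = vecs @ np.diag(1.0 / np.sqrt(vals + 1e-12)) @ vecs.T  # whitening filter
    return xc @ w, w
```

The adaptive network in the paper converges to (a rotation of) the same filter, which is why it can serve as a plug-in preprocessing block, e.g. before independent component analysis.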
This paper describes the use of a radial basis function (RBF) neural network. It approximates the process parameters for the extrusion of a rubber profile used in tyre production. After introducing the problem, we describe the RBF net algorithm and the modeling of the industrial problem. The algorithm shows good results even when using only a few training samples. It turns out that the "curse of dimensionality" plays an important role in the model. The paper concludes with a discussion of possible systematic error influences and improvements.
The paper focuses on the division of the sensor field into subsets of sensor events and proposes the linear transformation with the smallest achievable reproduction error: the transform coding approach using principal component analysis (PCA). For the implementation of the PCA, this paper introduces a new symmetrical, laterally inhibited neural network model, proposes an objective function for it and deduces the corresponding learning rules. The necessary conditions for the learning rate and the inhibition parameter for balancing the cross-correlations vs. the autocorrelations are computed. The simulation reveals that increasing inhibition can slightly speed up the convergence process in the beginning. In the remainder of the paper, the application of the network to picture encoding is discussed. Here, the use of non-completely connected networks for the self-organized formation of templates in cellular neural networks is shown. It turns out that the self-organizing Kohonen map is just the nonlinear, first-order approximation of a general self-organizing scheme. Hereby, classical transform picture coding is changed to a parallel, local model of linear transformation by locally changing sets of self-organized eigenvector projections with overlapping input receptive fields. This approach favors an effective, cheap implementation of sensor encoding directly on the sensor chip. Keywords: transform coding, principal component analysis, laterally inhibited network, cellular neural network, Kohonen map, self-organized eigenvector jets.
After a short introduction to traditional image transform coding, multirate systems and multiscale signal coding, the paper focuses on image encoding by a neural network. Also taking noise into account, a network model is proposed which not only learns the optimal localized basis functions for the transform but also learns to implement a whitening filter by multi-resolution encoding. A simulation showing the multi-resolution capabilities concludes the contribution.
We present a framework for the self-organized formation of high-level learning by statistical preprocessing of features. The paper focuses first on the formation of the features in the context of layers of feature-processing units, as a kind of resource-restricted associative multiresolution learning. We claim that such an architecture must reach maturity through basic statistical properties, optimizing the information-processing capabilities of each layer. The final symbolic output is learned by pure association of features of different levels and kinds of sensory input. Finally, we also show that common error-correction learning for motor skills can be accomplished by non-specific associative learning as well. Keywords: feedforward network layers, maximal information gain, restricted Hebbian learning, cellular neural nets, evolutionary associative learning
One of the most interesting domains for feedforward networks is the processing of sensor signals. There exist networks which extract most of the information by implementing the maximum entropy principle for Gaussian sources. This is done by transforming input patterns to the basis of eigenvectors of the input autocorrelation matrix with the largest eigenvalues. The basic building block of these networks is the linear neuron, learning with the Oja learning rule. Nevertheless, some researchers in pattern recognition theory argue that pattern recognition and classification require clustering transformations which reduce the intra-class entropy. This leads to stable, reliable features and is implemented for Gaussian sources by a linear transformation using the eigenvectors with the smallest eigenvalues. In an earlier paper (Brause 1992) it is shown that the basic building block for such a transformation can be implemented by a linear neuron using an anti-Hebb rule and restricted weights. This paper shows the analog VLSI design for such a building block, using standard modules for multiplication and addition. The most tedious problem in this VLSI application is the design of an analog vector normalization circuit. It can be shown that the standard approaches of weight summation will not give the convergence to the eigenvectors needed for a proper feature transformation. To avoid this problem, our design differs significantly from the standard approaches by computing the true Euclidean norm. Keywords: minimum entropy, principal component analysis, VLSI, neural networks, surface approximation, cluster transformation, weight normalization circuit.
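A software analogue of the learning behaviour the circuit above implements, under stated simplifying assumptions: an anti-Hebbian update with explicit Euclidean weight normalization, which for zero-mean data and a small learning rate drifts towards the minor component (the eigenvector with the smallest eigenvalue). The update schedule and parameters are illustrative.

```python
import numpy as np

def minor_component(x, eta=0.01, epochs=50, seed=0):
    """Anti-Hebbian learning with true Euclidean weight normalization
    (the operation the analog circuit computes). x: zero-mean data (N, D)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=x.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for xi in x:
            y = w @ xi
            w -= eta * y * xi          # anti-Hebb: unlearn correlated directions
            w /= np.linalg.norm(w)     # Euclidean norm, not mere weight summation
    return w
```

The final normalization step is exactly where the paper says naive weight-summation circuits fail: without the true Euclidean norm, the iteration does not converge to the desired eigenvector.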
It is well known that artificial neural nets can approximate any continuous function to any desired degree and can therefore be used e.g. in high-speed, real-time process control. Nevertheless, for a given application and a given network architecture, the non-trivial task remains to determine the necessary number of neurons and the necessary accuracy (number of bits) per weight for satisfactory operation; these are critical issues in VLSI and computer implementations of non-trivial tasks. In this paper, the accuracy of the weights and the number of neurons are seen as general system parameters which determine the maximal approximation error through the absolute amount and the relative distribution of information contained in the network. We define the error-bounded network descriptional complexity as the minimal number of bits for a class of approximation networks which exhibit a certain approximation error, and achieve the conditions for this goal by the new principle of optimal information distribution. For two examples, a simple linear approximation of a non-linear, quadratic function and a non-linear approximation of the inverse kinematic transformation used in robot manipulator control, the principle of optimal information distribution gives the optimal number of neurons and the resolutions of the variables, i.e. the minimal amount of storage for the neural net. Keywords: Kolmogorov complexity, ε-entropy, rate-distortion theory, approximation networks, information distribution, weight resolutions, Kohonen mapping, robot control.
It is well known that artificial neural nets can approximate any continuous function to any desired degree. Nevertheless, for a given application and a given network architecture, the non-trivial task remains to determine the necessary number of neurons and the necessary accuracy (number of bits) per weight for satisfactory operation. In this paper the problem is treated by an information-theoretic approach. The values for the weights and thresholds in the approximator network are determined analytically. Furthermore, the accuracy of the weights and the number of neurons are seen as general system parameters which determine the maximal output information (i.e. the approximation error) through the absolute amount and the relative distribution of information contained in the network. A new principle of optimal information distribution is proposed and the conditions for the optimal system parameters are derived. For the simple, instructive example of a linear approximation of a non-linear, quadratic function, the principle of optimal information distribution gives the optimal system parameters, i.e. the number of neurons and the different resolutions of the variables.
Clathrates are candidate materials for thermoelectric applications because of a number of unique properties. The clathrate I phases in the Ba-Ni-Ge ternary system allow controlled variation of the charge carrier concentration by adjusting the Ni content. Depending on the Ni content, the physical properties vary from metal-like to insulator-like and show a transition from p-type to n-type conduction. Here we present first results on the characterization of millimeter-sized single crystals grown by the Bridgman technique. Single crystals with a composition of Ba8Ni3.5Ge42.1□0.4 show metallic behavior (dρ/dT > 0), albeit with a high resistivity at room temperature [ρ(300 K) = 1 mΩ cm]. The charge carrier concentration at 300 K, as determined from Hall-effect measurements, is 2.3 e−/unit cell. The dimensionless thermoelectric figure of merit estimated at 680 K is ZT ≈ 0.2. Keywords: clathrates, thermoelectric material, intermetallic compound, nickel
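For reference, the dimensionless figure of merit quoted above is conventionally defined (standard definition, not restated in the abstract) as

$$ ZT = \frac{S^2 \sigma}{\kappa}\,T, $$

where $S$ is the Seebeck coefficient, $\sigma$ the electrical conductivity, $\kappa$ the thermal conductivity and $T$ the absolute temperature; tuning the Ni content shifts the carrier concentration and hence the trade-off between $S$, $\sigma$ and $\kappa$.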
We suggest a new method to compute the spectrum and wave functions of excited states. We construct a stochastic basis of Bargmann link states, drawn from a physical probability density distribution, and compute transition amplitudes between stochastic basis states. From this transition matrix we extract wave functions and the energy spectrum. We apply the method to U(1) lattice gauge theory in 2+1 dimensions. As a test we compute the energy spectrum, wave functions and thermodynamic functions of the electric Hamiltonian and compare them with analytical results, finding excellent agreement. We observe scaling of energies and wave functions in the time variable. We also present first results on a small lattice for the full Hamiltonian including the magnetic term.
Central elements of the Bologna declaration have been implemented in a huge variety of curricula in the humanities, social sciences, natural sciences and engineering sciences at German universities. Overall, the results have been nothing less than disastrous. Surprisingly, this seems to be the perfect time for German universities to talk about introducing a curriculum fully compatible with the Bologna declaration for medical education as well. However, German medical education does not have the problems the Bologna declaration is intended to solve, such as quality, mobility, internationalization and employability. It is already in the post-Bologna age.
Towards correctness of program transformations through unification and critical pair computation
(2010)
Correctness of program transformations in extended lambda calculi with a contextual semantics is usually based on reasoning about the operational semantics, which is a rewrite semantics. A successful approach is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules, which results in so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first-order way by variants of critical pair computation that use unification algorithms. As a case study of an application, we describe a finitary and decidable unification algorithm for the combination of the equational theory of left-commutativity modelling multi-sets, context variables and many-sorted unification. Sets of equations are restricted to be almost linear, i.e. every variable and context variable occurs at most once, where we allow one exception: variables of a sort without ground terms may occur several times. Every context variable must have an argument sort in the free part of the signature. We also extend the unification algorithm by the treatment of binding chains in let- and letrec-environments and by context classes. This results in a unification algorithm that can be applied to all overlaps of normal-order reductions and transformations in an extended lambda calculus with letrec that we use as a case study.
Measuring confidence and uncertainty during the financial crisis: evidence from the CFS survey
(2010)
The CFS survey covers the individual situations of banks and other companies of the financial sector during the financial crisis. This provides a rare opportunity to analyze appraisals, expectations and forecast errors of the core sector of the recent turmoil. Following standard ways of aggregating individual survey data, we first present and introduce the CFS survey by comparing CFS indicators of confidence and predicted confidence to ifo and ZEW indicators. The major contribution is the analysis of several indicators of uncertainty. In addition to well-established concepts, we introduce innovative measures based on the skewness of forecast errors and on the share of 'no response' replies. Results show that the uncertainty indicators fit quite well with the patterns of real and financial time series over the period 2007 to 2010. Keywords: business sentiment, financial crisis, survey indicator, uncertainty
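The two survey-based uncertainty measures mentioned above can be sketched very simply; this is an illustrative reading of the abstract (NaN-coded missing replies standing in for 'no response'), not the paper's exact construction.

```python
import numpy as np
from scipy.stats import skew

def uncertainty_measures(forecasts, outcomes):
    """Two simple uncertainty indicators in the spirit of the abstract:
    (1) the skewness of forecast errors across respondents, and
    (2) the share of missing ('no response') replies."""
    errors = np.asarray(forecasts) - np.asarray(outcomes)
    valid = ~np.isnan(errors)
    return skew(errors[valid]), 1.0 - valid.mean()
```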
This paper provides theory as well as empirical results for pre-averaging estimators of the daily quadratic variation of asset prices. We derive jump-robust inference for pre-averaging estimators, corresponding feasible central limit theorems and an explicit test on serial dependence in microstructure noise. Using transaction data for different stocks traded at the NYSE, we analyze the estimators' sensitivity to the choice of the pre-averaging bandwidth and suggest an optimal interval length. Moreover, we investigate the dependence of pre-averaging based inference on the sampling scheme, the sampling frequency, microstructure noise properties as well as the occurrence of jumps. As a result of a detailed empirical study we provide guidance for optimal implementation of pre-averaging estimators and discuss potential pitfalls in practice. Keywords: quadratic variation, market microstructure noise, pre-averaging, sampling schemes, jumps
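A simplified sketch of a pre-averaging estimator of quadratic variation with the usual correction for i.i.d. microstructure noise. The weight function g(x) = min(x, 1−x) and the finite-sample constants follow the common choice in this literature and are not necessarily the exact variant used in the paper.

```python
import numpy as np

def preaveraged_qv(returns, k):
    """Pre-averaging estimator of daily quadratic variation from noisy
    high-frequency returns. k is the pre-averaging bandwidth (window length)."""
    n = len(returns)
    g = np.minimum(np.arange(1, k) / k, 1 - np.arange(1, k) / k)  # weights g(i/k)
    psi2 = np.sum(g**2) / k
    psi1 = k * np.sum(np.diff(np.concatenate(([0.0], g, [0.0]))) ** 2)
    # overlapping pre-averaged returns: local weighted sums smooth out the noise
    rbar = np.convolve(returns, g[::-1], mode="valid")
    qv = (n / (n - k + 2)) * np.sum(rbar**2) / (k * psi2)
    bias = psi1 * np.sum(returns**2) / (2 * psi2 * k**2)  # noise-variance correction
    return qv - bias

# Typical usage: choose k proportional to sqrt(n), e.g. k = int(theta * n**0.5),
# which is the bandwidth choice the sensitivity analysis above is concerned with.
```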
Nucleation experiments starting from the reaction of OH radicals with SO2 have been performed in the IfT-LFT flow tube under atmospheric conditions at 293±0.5 K for relative humidities of 13–61%. The presence of different additives (H2, CO, 1,3,5-trimethylbenzene) for adjusting the OH radical concentration, with resulting OH levels in the range (4–300)×10⁵ molecules cm⁻³, did not influence the nucleation process itself. The number of detected particles as well as the threshold H2SO4 concentration needed for nucleation was found to be strongly dependent on the counting efficiency of the counting devices used. High-sensitivity particle counters allowed the measurement of freshly nucleated particles with diameters down to about 1.5 nm. A parameterization of the experimental data was developed using power-law equations for H2SO4 and H2O vapour. The exponent for H2SO4 from different measurement series was in the range 1.7–2.1, in good agreement with exponents arising from the analysis of nucleation events in the atmosphere. For increasing relative humidity, an increase of the particle number was observed. The exponent for H2O vapour was found to be 3.1, representing an upper limit. Addition of 1.2×10¹¹ molecule cm⁻³ or 1.2×10¹² molecule cm⁻³ of NH3 (the range of atmospheric NH3 peak concentrations) revealed that NH3 has a measurable, promoting effect on the nucleation rate under these conditions. The promoting effect was found to be more pronounced for relatively dry conditions, i.e. a rise of the particle number by 1–2 orders of magnitude at RH = 13% and only by a factor of 2–5 at RH = 47% (NH3 addition: 1.2×10¹² molecule cm⁻³). Using the amine tert-butylamine instead of NH3, the enhancing impact of the base on nucleation and particle growth appears to be stronger: tert-butylamine addition of about 10¹⁰ molecule cm⁻³ at RH = 13% enhances particle formation by about two orders of magnitude, while NH3 showed only a small or negligible effect on nucleation in this concentration range. This suggests that amines can strongly influence atmospheric H2SO4-H2O nucleation and are probably promising candidates for explaining existing discrepancies between theory and observations.
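Schematically, the power-law parameterization described above takes the form

$$ J = A\,[\mathrm{H_2SO_4}]^{\eta}\,[\mathrm{H_2O}]^{\mu}, \qquad \eta \approx 1.7\text{--}2.1, \quad \mu \lesssim 3.1, $$

where $J$ is the nucleation rate. The abstract reports only the exponents; the prefactor $A$ and the exact functional form are assumptions here, standing in for the fitted parameterization.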
We report the first measurements of 1,1,1,2,3,3,3-heptafluoropropane (HFC-227ea), a substitute for ozone depleting compounds, in remote regions of the atmosphere and present evidence for its rapid growth. Observed mixing ratios ranged from below 0.01 ppt in deep firn air to 0.59 ppt in the northern mid-latitudinal upper troposphere. Firn air samples collected in Greenland were used to reconstruct a history of atmospheric abundance. Year-on-year increases were deduced, with acceleration in the growth rate from 0.026 ppt per year in 2000 to 0.057 ppt per year in 2007. Upper tropospheric air samples provide evidence for a continuing growth until late 2009. Furthermore we calculated a stratospheric lifetime of 370 years from measurements of air samples collected on board high altitude aircraft and balloons. Emission estimates were determined from the reconstructed atmospheric trend and suggest that current "bottom-up" estimates of global emissions for 2005 are too high by more than a factor of three.
We report the first measurements of 1,1,1,2,3,3,3-heptafluoropropane (HFC-227ea), a substitute for ozone depleting compounds, in air samples originating from remote regions of the atmosphere and present evidence for its accelerating growth. Observed mixing ratios ranged from below 0.01 ppt in deep firn air to 0.59 ppt in the current northern mid-latitudinal upper troposphere. Firn air samples collected in Greenland were used to reconstruct a history of atmospheric abundance. Year-on-year increases were deduced, with acceleration in the growth rate from 0.029 ppt per year in 2000 to 0.056 ppt per year in 2007. Upper tropospheric air samples provide evidence for a continuing growth until late 2009. Furthermore we calculated a stratospheric lifetime of 370 years from measurements of air samples collected on board high altitude aircraft and balloons. Emission estimates were determined from the reconstructed atmospheric trend and suggest that current "bottom-up" estimates of global emissions for 2005 are too high by a factor of three.
A comprehensive evaluation of seasonal backward trajectories initialized in the northern hemisphere lowermost stratosphere (LMS) has been performed to investigate the factors that determine the temporal and spatial structure of troposphere-to-stratosphere transport (TST) and its impact on the LMS. In particular, we explain the fundamental role of the transit time since last TST (tTST) for the chemical composition of the LMS. According to our results, the structure of the LMS can be characterized by a layer with tTST < 40 days forming a narrow band around the local tropopause. This layer extends about 30 K above the local dynamical tropopause, corresponding to the extratropical tropopause transition layer (ExTL) as identified by CO. The LMS beyond this layer shows a relatively well defined separation, marked by an abrupt transition to longer tTST, indicating less frequent mixing and a smaller fraction of tropospheric air. Thus the LMS constitutes a region of two well defined regimes of tropospheric influence, characterized mainly by different transport times from the troposphere and different fractions of tropospheric air. Carbon monoxide (CO) mirrors this structure of tTST due to its finite lifetime on the order of three months. Water vapour isopleths, on the other hand, do not uniquely indicate TST and are independent of tTST, but are determined by the Lagrangian Cold Point (LCP) of air parcels. Most of the backward trajectories from the LMS experienced their LCP in the tropics and subtropics, and TST often occurs 20 days after trajectories have encountered their LCP. Therefore, ExTL properties deduced from CO and H2O provide quite different information on transport, and on TST in particular, for the LMS.
Two different single particle mass spectrometers were operated in parallel at the Swiss High Alpine Research Station Jungfraujoch (JFJ, 3580 m a.s.l.) during the Cloud and Aerosol Characterization Experiment (CLACE 6) in February and March 2007. During mixed-phase cloud events, ice crystals of 5–20 µm were separated from larger ice aggregates, non-activated interstitial aerosol particles and supercooled droplets using an Ice Counterflow Virtual Impactor (Ice-CVI). During one cloud period, supercooled droplets were additionally sampled and analyzed by changing the Ice-CVI setup. The small ice particles and droplets were evaporated by injection into dry air inside the Ice-CVI. The resulting ice and droplet residues (IR and DR) were analyzed for size and composition by the two single particle mass spectrometers: a custom-built Single Particle Laser-Ablation Time-of-Flight Mass Spectrometer (SPLAT) and a commercial Aerosol Time-of-Flight Mass Spectrometer (ATOFMS, TSI Model 3800). During CLACE 6 the SPLAT instrument characterized 355 individual IR that produced a mass spectrum for at least one polarity, and the ATOFMS measured 152 IR. The mass spectra were binned into classes based on the combination of dominating substances, such as mineral dust, sulfate, potassium and elemental carbon or organic material. The derived chemical information from the ice residues is compared to the JFJ ambient aerosol that was sampled while the measurement station was out of clouds (several thousand particles analyzed by SPLAT and ATOFMS) and to the composition of the residues of supercooled cloud droplets (SPLAT: 162 cloud droplet residues analyzed, ATOFMS: 1094). The measurements showed that mineral dust was strongly enhanced in the ice particle residues. Nearly all of the SPLAT spectra from ice residues contained signatures of mineral compounds, albeit combined with varying amounts of soluble compounds. Similarly, nearly all of the ATOFMS IR spectra show a mineral or metallic component. Pure sulfate- and nitrate-containing particles were depleted in the ice residues, whereas sulfate and nitrate were found to dominate the droplet residues (~90% of the particles). The results from the two different single particle mass spectrometers were generally in agreement. Differences in the results originate from several causes, such as the different wavelengths of the desorption and ionisation lasers and different size-dependent particle detection efficiencies.
Background: The integration of the non-cross-resistant chemotherapeutic agents capecitabine and vinorelbine into an intensified dose-dense sequential anthracycline- and taxane-containing regimen in high-risk early breast cancer (EBC) could improve efficacy, but this combination has not been examined in this context so far. Methods: Patients with stage II/IIIA EBC (four or more positive lymph nodes) received post-operative intensified dose-dense sequential epirubicin (150 mg/m2 every 2 weeks) and paclitaxel (225 mg/m2 every 2 weeks) with filgrastim and darbepoetin alfa, followed by capecitabine alone (dose levels 1 and 3) or with vinorelbine (dose levels 2 and 4). Capecitabine was given on days 1-14 every 21 days at 1000 or 1250 mg/m2 twice daily (dose levels 1/2 and 3/4, respectively). Vinorelbine 25 mg/m2 was given on days 1 and 8 of each 21-day course (dose levels 2 and 4). Results: Fifty-one patients were treated. There was one dose-limiting toxicity (DLT) at dose level 1. At dose level 2 (capecitabine and vinorelbine), five of 10 patients experienced DLTs. Evaluation of vinorelbine was therefore abandoned and dose level 3 (capecitabine monotherapy) was expanded. Hand-foot syndrome and diarrhoea were dose-limiting with capecitabine 1250 mg/m2 twice daily. At 35.2 months' median follow-up, the estimated 3-year relapse-free and overall survival rates were 82% and 91%, respectively. Administration of capecitabine monotherapy after sequential dose-dense epirubicin and paclitaxel is feasible in node-positive EBC, while the combination of capecitabine and vinorelbine as used here caused more DLTs. Trial registration: Current Controlled Trials ISRCTN38983527.
Background: European robins, Erithacus rubecula, show two types of directional responses to the magnetic field: (1) compass orientation that is based on radical pair processes and lateralized in favor of the right eye and (2) so-called 'fixed direction' responses that originate in the magnetite-based receptors in the upper beak. Both responses are light-dependent. Lateralization of the 'fixed direction' responses would suggest an interaction between the two magnetoreception systems. Results: Robins were tested with either the right or the left eye covered, or with both eyes uncovered, for their orientation under different light conditions. Under 502 nm turquoise light, the birds showed normal compass orientation, whereas they displayed an easterly 'fixed direction' response under a combination of 502 nm turquoise with 590 nm yellow light. Monocularly right-eyed birds with their left eye covered oriented just as the binocular control birds did: under turquoise in their northerly migratory direction, under turquoise-and-yellow towards east. The response of monocularly left-eyed birds differed: under turquoise light they were disoriented, reflecting a lateralization of the magnetic compass system in favor of the right eye, whereas they continued to head eastward under turquoise-and-yellow light. Conclusion: 'Fixed direction' responses are not lateralized. Hence the interactions between the magnetite receptors in the beak and the visual system do not seem to involve the magnetoreception system based on radical pair processes, but rather other, non-lateralized components of the visual system.
Background: Pythium ultimum (P. ultimum) is a ubiquitous oomycete plant pathogen responsible for a variety of diseases on a broad range of crop and ornamental species. Results: The P. ultimum genome (42.8 Mb) encodes 15,290 genes and has extensive sequence similarity and synteny with related Phytophthora species, including the potato blight pathogen Phytophthora infestans. Whole transcriptome sequencing revealed expression of 86% of genes, with detectable differential expression of suites of genes under abiotic stress and in the presence of a host. The predicted proteome includes a large repertoire of proteins involved in plant pathogen interactions although surprisingly, the P. ultimum genome does not encode any classical RXLR effectors and relatively few Crinkler genes in comparison to related phytopathogenic oomycetes. A lower number of enzymes involved in carbohydrate metabolism were present compared to Phytophthora species, with the notable absence of cutinases, suggesting a significant difference in virulence mechanisms between P. ultimum and more host specific oomycete species. Although we observed a high degree of orthology with Phytophthora genomes, there were novel features of the P. ultimum proteome including an expansion of genes involved in proteolysis and genes unique to Pythium. We identified a small gene family of cadherins, proteins involved in cell adhesion, the first report in a genome outside the metazoans. Conclusions: Access to the P. ultimum genome has revealed not only core pathogenic mechanisms within the oomycetes but also lineage specific genes associated with the alternative virulence and lifestyles found within the pythiaceous lineages compared to the Peronosporaceae.
Hereditary angioedema (C1 inhibitor deficiency, HAE) is associated with intermittent swellings which are disabling and may be fatal. Effective treatments are available and these are most useful when given early in the course of the swelling. The requirement to attend a medical facility for parenteral treatment results in delays. Home therapy offers the possibility of earlier treatment and better symptom control, enabling patients to live more healthy, productive lives. This paper examines the evidence for patient-controlled home treatment of acute attacks ('self or assisted administration') and suggests a framework for patients and physicians interested in participating in home or self-administration programmes. It represents the opinion of the authors who have a wide range of expert experience in the management of HAE.
Background: The human chromosomal region 9p21.3 has been shown to be strongly associated with Coronary Heart Disease (CHD) in several Genome-wide Association Studies (GWAS). Recently, this region has also been shown to be associated with Aggressive Periodontitis (AgP), strengthening the hypothesis that the established epidemiological association between periodontitis and CHD is caused by a shared genetic background, in addition to common environmental and behavioural risk factors. However, the size of the analyzed cohorts in this primary analysis was small compared to other association studies on complex diseases. Using our own AgP cohort, we attempted to confirm the described associations for the chromosomal region 9p21.3. Methods: We analyzed our cohort, consisting of patients suffering from the most severe form of AgP, generalized AgP (gAgP) (n = 130), and appropriate periodontally healthy control individuals (n = 339), by genotyping four tagging SNPs (rs2891168, rs1333042, rs1333048 and rs496892) located in the chromosomal region 9p21.3 that have been associated with AgP. Results: The results confirmed significant associations between three of the four SNPs and gAgP. Combining our results in a meta-analysis of the four tagging SNPs with those of the study that first described this association produced clearly lower p-values than either individual study. According to these results, the most plausible genetic model for the association of all four tested SNPs with gAgP is the multiplicative one. Conclusion: We positively replicated the finding of an association between the chromosomal region 9p21.3 and gAgP. This result strengthens support for the hypothesis that shared susceptibility genes within this chromosomal locus might be involved in the pathogenesis of both CHD and gAgP.
Summary recommendations:
1. One of the major lessons from the current financial crisis refers to the systemic dimension of financial risk, which had been almost completely neglected by bankers and supervisors in the pre-2007 years.
2. Accordingly, the most needed change in financial regulation, in order to avoid a repetition of such a crisis in the future, consists of influencing individual bank behaviour such that systemic risk is decreased. This objective is new and distinct from what Basle II was intended to achieve.
3. It is important, therefore, to evaluate proposed new regulatory instruments on the grounds of whether or not they contribute to a reduction, or containment, of systemic risk. We see two new regulatory measures of paramount importance: the introduction of a Systemic Risk Charge (SRC), and the implementation of a transparent bank resolution regime. Both measures complement each other, thus both have to be realized to be effective.
4. We propose a Systemic Risk Charge (SRC), a levy capturing the contribution of any individual bank to the overall systemic risk, which is distinct from the institution's own default risk. The SRC is set up such that the more systemic risk a bank contributes, the higher the cost it has to bear. The SRC therefore serves to internalize the cost of systemic risk which, up to now, was borne by the taxpayer.
5. Major details of our SRC refer to the use of debt that may be converted into equity when systemic risk threatens the stability of the banking system. The SRC also raises some revenue for government.
6. The SRC has to be compared to several bank levies currently debated. The Financial Transaction Tax (FTT) does not directly address systemic risk and is therefore inferior to an SRC. Nevertheless, an FTT may offer the opportunity to subsidize on-exchange trading at the expense of off-exchange (over-the-counter, OTC) transactions, thereby enhancing financial market stability. The Financial Activity Tax (FAT) is similar to a VAT on financial services. It is the least adequate instrument among all instruments discussed above to limit systemic risk.
7. Bank resolution regime: No instrument to contain systemic risk can be effective unless the restructuring of bank debt, and the ensuing loss given default to creditors, is a real possibility. As the crisis has taught, bank restructuring is very difficult in light of contagion risk between major banks. We therefore need a regulatory procedure that allows winding down banks, even large banks, on short notice. Among other things, the procedure will require distinguishing systemically relevant exposures from those that are irrelevant. Only the former will be saved with government money, and it will then be the task of the supervisor to ensure a sufficient amount of non-systemically relevant debt on the balance sheets of all banks.
8. Further issues discussed in this policy paper and its appendices refer to the necessity of a global level playing field, or the lack thereof, for these new regulatory measures; the convergence of our SRC proposal with the expected long-term outcome of the Basle III discussions; as well as the role of global imbalances.
Many studies show that most people are not financially literate and are unfamiliar with even the most basic economic concepts. However, the evidence on the determinants of economic literacy is scant. This paper uses international panel data on 55 countries from 1995 to 2008, merging indicators of economic literacy with a large set of macroeconomic and institutional variables. Results show that there is substantial heterogeneity of financial and economic competence across countries, and that human capital indicators (PISA test scores and college attendance) are positively correlated with economic literacy. Furthermore, inhabitants of countries with more generous social security systems are generally less literate, lending support to the hypothesis that the incentives to acquire economic literacy are related to the amount of resources available for private accumulation. JEL Classification: E2, D8, G1
This paper investigates the accuracy and heterogeneity of output growth and inflation forecasts during the current and the four preceding NBER-dated U.S. recessions. We generate forecasts from six different models of the U.S. economy and compare them to professional forecasts from the Federal Reserve’s Greenbook and the Survey of Professional Forecasters (SPF). The model parameters and model forecasts are derived from historical data vintages so as to ensure comparability to historical forecasts by professionals. The mean model forecast comes surprisingly close to the mean SPF and Greenbook forecasts in terms of accuracy even though the models only make use of a small number of data series. Model forecasts compare particularly well to professional forecasts at a horizon of three to four quarters and during recoveries. The extent of forecast heterogeneity is similar for model and professional forecasts but varies substantially over time. Thus, forecast heterogeneity constitutes a potentially important source of economic fluctuations. While the particular reasons for diversity in professional forecasts are not observable, the diversity in model forecasts can be traced to different modeling assumptions, information sets and parameter estimates. JEL Classification: G14, G15, G24
Price pressures
(2010)
We study price pressures in stock prices—price deviations from fundamental value due to a risk-averse intermediary supplying liquidity to asynchronously arriving investors. Empirically, twelve years of daily New York Stock Exchange intermediary data reveal economically large price pressures. A $100,000 inventory shock causes an average price pressure of 0.28% with a half-life of 0.92 days. Price pressure causes average transitory volatility in daily stock returns of 0.49%. Price pressure effects are substantially larger with longer durations in smaller stocks. Theoretically, in a simple dynamic inventory model the ‘representative’ intermediary uses price pressure to control risk through inventory mean reversion. She trades off the revenue loss due to price pressure against the price risk associated with remaining in a nonzero inventory state. The model’s closed-form solution identifies the intermediary’s relative risk aversion and the distribution of investors’ private values for trading from the observed time series patterns. These allow us to estimate the social costs—deviations from constrained Pareto efficiency—due to price pressure which average 0.35 basis points of the value traded. JEL Classification: G12, G14, D53, D61
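A small numerical illustration of the half-life figure quoted above, assuming, as a simplification, that price pressure decays exponentially while the intermediary's inventory mean-reverts; the numbers are the point estimates from the abstract.

```python
import numpy as np

half_life = 0.92            # days, from the estimates quoted above
pressure0 = 0.28            # percent, after a $100,000 inventory shock

# Exponential decay: p(t) = p0 * 2**(-t / half_life)
days = np.arange(0, 5)
pressure = pressure0 * 2.0 ** (-days / half_life)
for t, p in zip(days, pressure):
    print(f"day {t}: {p:.3f}%")  # roughly 0.014% remains after four days
```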
This paper presents a model to analyze the consequences of competition in order flow between a profit-maximizing stock exchange and an alternative trading platform on the decisions concerning trading fees and listing requirements. Listing requirements, set by the exchange, provide public information on listed firms and contribute to better liquidity on all trading venues. It is sometimes asserted that competition induces the exchange to lower its level of listing standards compared to a situation in which it is a monopolist, because the trading platform can free-ride on this regulatory activity and compete more aggressively on trading fees. The present analysis shows that this is not always true and depends on the existence and size of gains related to multi-market trading. These gains relax competition on trading fees. The higher these gains are, the more the exchange can increase its revenue from listing and trading when it raises its listing standards. For large enough gains from multi-market trading, the exchange is not induced to lower the level of listing standards when a competing trading platform appears. As a second result, this analysis also reveals a cross-subsidization effect between the listing and the trading activity when listing is not competitive. This model yields implications about the fee structures on stock markets, the regulation of listings and the social optimality of competition for volume. JEL Classification: G10, G18, G12
This paper proposes the Shannon entropy as an appropriate one-dimensional measure of behavioural trading patterns in financial markets. The concept is applied to the illustrative example of algorithmic vs. non-algorithmic trading and empirical data from Deutsche Börse's electronic cash equity trading system, Xetra. The results reveal pronounced differences between algorithmic and non-algorithmic traders. In particular, trading patterns of algorithmic traders exhibit a medium degree of regularity while non-algorithmic trading tends towards either very regular or very irregular trading patterns. JEL Classification: C40, D0, G14, G15, G20
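A minimal sketch of the entropy measure: the Shannon entropy of the empirical distribution of short action patterns. The categorization of trading events into buy/sell symbols and the block length are illustrative assumptions; the paper derives its patterns from Xetra order data in its own way.

```python
import numpy as np
from collections import Counter

def shannon_entropy(events):
    """Shannon entropy (in bits) of the empirical distribution of events."""
    p = np.array(list(Counter(events).values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def trading_pattern_entropy(actions, block=2):
    """Entropy of length-`block` action patterns: a strictly repeating trader
    produces few distinct blocks (low entropy), an erratic one many (high)."""
    blocks = [tuple(actions[i:i + block]) for i in range(len(actions) - block + 1)]
    return shannon_entropy(blocks)

rng = np.random.default_rng(0)
regular = list("BS" * 500)                        # alternating buy/sell
random_ = list(rng.choice(list("BS"), 1000))
print(trading_pattern_entropy(regular))           # ~1 bit: only "BS" and "SB" occur
print(trading_pattern_entropy(random_))           # ~2 bits: all four pairs occur
```

A one-dimensional measure of this kind is what lets the paper place algorithmic traders in a medium-regularity band between the very regular and very irregular human patterns.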
How ordinary consumers make complex economic decisions: financial literacy and retirement readiness
(2010)
This paper explores who is financially literate, whether people accurately perceive their own economic decision-making skills, and where these skills come from. Self-assessed and objective measures of financial literacy can be linked to consumers’ efforts to plan for retirement in the American Life Panel, and causal relationships with retirement planning examined by exploiting information about respondent financial knowledge acquired in school. Results show that those with more advanced financial knowledge are those more likely to be retirement-ready.
We examined financial literacy among the young using the most recent wave of the 1997 National Longitudinal Survey of Youth. We showed that financial literacy is low; fewer than one-third of young adults possess basic knowledge of interest rates, inflation, and risk diversification. Financial literacy was strongly related to sociodemographic characteristics and family financial sophistication. Specifically, a college-educated male whose parents had stocks and retirement savings was about 45 percentage points more likely to know about risk diversification than a female with less than a high school education whose parents were not wealthy. These findings have implications for consumer policy. JEL Classification: D91
This paper investigates the accuracy and heterogeneity of output growth and inflation forecasts during the current and the four preceding NBER-dated U.S. recessions. We generate forecasts from six different models of the U.S. economy and compare them to professional forecasts from the Federal Reserve’s Greenbook and the Survey of Professional Forecasters (SPF). The model parameters and model forecasts are derived from historical data vintages so as to ensure comparability to historical forecasts by professionals. The mean model forecast comes surprisingly close to the mean SPF and Greenbook forecasts in terms of accuracy even though the models only make use of a small number of data series. Model forecasts compare particularly well to professional forecasts at a horizon of three to four quarters and during recoveries. The extent of forecast heterogeneity is similar for model and professional forecasts but varies substantially over time. Thus, forecast heterogeneity constitutes a potentially important source of economic fluctuations. While the particular reasons for diversity in professional forecasts are not observable, the diversity in model forecasts can be traced to different modeling assumptions, information sets and parameter estimates. JEL Classification: C53, D84, E31, E32, E37 Keywords: Forecasting, Business Cycles, Heterogeneous Beliefs, Forecast Distribution, Model Uncertainty, Bayesian Estimation
This paper analyzes loan pricing when there is multiple banking and borrower distress. Using a unique data set on SME lending collected from major German banks, we can instrument for effective coordination between lenders in a panel estimation. The analysis allows us to distinguish between rents that accrue due to single-bank lending, rents that accrue due to relationship lending, and rents that accrue due to the elimination of competition among multiple lenders. We find relationship lending to have no discernible impact on loan spreads, while both single lending and coordinated multiple lending significantly increase the spread. Thus, contrary to predictions in the literature, multiple lending does not insure the borrower against hold-up. JEL Classification: D74, G21, G33, G34
Grace in Sikhism
(2010)
As in all other religions, there are two contrary streams in Sikhism too. One teaches that the meaning and value of human existence depend on human works; we call this the operative model. The other stream preaches that the Holy One's grace is the substance of man's ultimate destination and alone gives meaning to human existence; this position we call the receptive model. As a third stream we can identify the doctrine of conditioned gratification, which means that humans receive Divine support for achieving the salvation of their souls. This third one is obviously the predominant model in all religions. The religious books of the Sikhs have incorporated all positions; therefore they are widespread and popular, and everybody finds what suits him. We will reconstruct the receptive model as it is presented in the Nitnem, in which the daily prayers of the Sikhs are collected.
In this thesis we have studied the physics of different ultracold Bose-Fermi mixtures in optical lattices, as well as spin-1/2 fermions in a harmonic trap. To study these systems we generalized dynamical mean-field theory to a mixture of fermions and bosons, as well as to an inhomogeneous environment. Generalized dynamical mean-field theory (GDMFT) is a method that describes a mixture of fermions and bosons. It consists of Gutzwiller mean-field theory for the bosons and dynamical mean-field theory for the fermions, coupled on-site by the Bose-Fermi density-density interaction and possibly a Feshbach term which converts a pair of up and down fermions into a molecule, i.e. a boson. We derived the self-consistency equations and showed that this method is well-controlled in the limit of high lattice coordination number z. We also developed real-space dynamical mean-field theory for studying systems in an inhomogeneous environment, e.g. in a harmonic trap. The crucial difference compared to standard DMFT is that different sites are no longer treated as equivalent, so the inhomogeneity of the system is taken into account; the different sites are coupled by the real-space Dyson equation. ...
On demand treatment and home therapy of hereditary angioedema in Germany - the Frankfurt experience
(2010)
Background: The manifestation of acute edema in hereditary angioedema (HAE) is characterized by interindividual and intraindividual variability in symptom expression over time. Flexible therapy options are needed. Methods: We describe and report on the outcomes of the highly individualized approach to HAE therapy practiced at our HAE center in Frankfurt (Germany). Results: The HAE center at the Frankfurt University Hospital currently treats 450 adults with HAE or AAE and 107 pediatric HAE patients with highly individualized therapeutic approaches. 73.9% of the adult patients treat HAE attacks by on-demand therapy with pasteurized pd C1-INH concentrate, 9.8% use additional prophylaxis with attenuated androgens, and 1% of the total patient population in Frankfurt has been treated with Icatibant up to now. In addition, adult and selected pediatric patients with a high frequency of severe attacks are instructed to apply individual replacement therapy (IRT) with pasteurized pd C1-INH concentrate. Improvement on quality-of-life items was shown for these patients compared to previous long-term danazol prophylaxis. Home treatment of HAE patients was developed at the Frankfurt HAE center in line with experience in hemophilia therapy and has so far been practiced over a period of 28 years. At present, 248 (55%) of the adult patients and 26 (24%) of the pediatric patients practice home treatment, either as on-demand treatment or as IRT. Conclusions: The individualized home therapies provided by our HAE center aim to limit the disruption to normal daily activities that many HAE patients experience. Furthermore, we seek to limit the economic burden of the disease while offering a maximum quality of life to our patients.
Background: We published the Canadian 2003 International Consensus Algorithm for the Diagnosis, Therapy, and Management of Hereditary Angioedema (HAE; C1 inhibitor [C1-INH] deficiency) and updated this as Hereditary angioedema: a current state-of-the-art review: Canadian Hungarian 2007 International Consensus Algorithm for the Diagnosis, Therapy, and Management of Hereditary Angioedema. Objective: To update the International Consensus Algorithm for the Diagnosis, Therapy and Management of Hereditary Angioedema (circa 2010). Methods: The Canadian Hereditary Angioedema Network (CHAEN)/Réseau Canadien d'angioedème héréditaire (RCAH) (www.haecanada.com) and cosponsors University of Calgary and the Canadian Society of Allergy and Clinical Immunology (with an unrestricted educational grant from CSL Behring) held our third conference May 15th to 16th, 2010, in Toronto, Canada to update our consensus approach. The consensus document was reviewed at the meeting and then circulated for review. Results: This manuscript is the 2010 International Consensus Algorithm for the Diagnosis, Therapy and Management of Hereditary Angioedema that resulted from that conference. Conclusions: A consensus approach is only an interim guide to a complex disorder such as HAE and should be replaced as soon as possible with large phase III and IV clinical trials, meta-analyses, and database-registry validation of approaches, including quality-of-life and cost-benefit analyses, followed by large head-to-head clinical trials and then evidence-based guidelines and standards for HAE disease management.
To date it is not clear at which stage of differentiation mature T cell leukaemia/lymphoma is initiated. Previous studies by our group showed that mature T cells are relatively resistant to transformation. We wanted to further investigate the transformation potential of NPM-ALK, p21SNFT and the viral oncoprotein Tax on mature T cells. First, we analyzed the effects on T cell growth in vitro after transducing human T cell lines with gammaretroviral vectors encoding these genes. No growth- or proliferation-promoting effect was observed for any of the three genes. In the second part of the project, we transduced murine mature T cells and/or haematopoietic stem cells (HPCs/HSCs) and transplanted these cells into Rag-1-deficient recipients. All mice transplanted with NPM-ALK-transduced monoclonal mature T cells (OT-1) developed leukaemia/lymphoma. In contrast, only a few mice transplanted with NPM-ALK-transduced polyclonal T cells or HPCs/HSCs developed leukaemia/lymphoma. From the p21SNFT group, only two mice transplanted with transduced OT-1 T cells developed leukaemia/lymphoma, which showed high eGFP and, interestingly, CD19 expression. No malignancies have been observed in Tax-transplanted animals so far; furthermore, these recipients do not show any eGFP marking in the periphery. In conclusion, our results show that, compared to polyclonal T cells, monoclonal T cells are transformable after gammaretroviral transfer of NPM-ALK and p21SNFT.
Background: The potential anti-cancer effects of mammalian target of rapamycin (mTOR) inhibitors are being intensively studied. To date, however, few randomised clinical trials (RCTs) have been performed to demonstrate anti-neoplastic effects in the pure oncology setting, and at present no oncology endpoint-directed RCT has been reported in the high-malignancy-risk population of immunosuppressed transplant recipients. Interestingly, since mTOR inhibitors have both immunosuppressive and anti-cancer effects, they have the potential to simultaneously protect against immunologic graft loss and tumour development. Therefore, we designed a prospective RCT to determine whether the mTOR inhibitor sirolimus can improve hepatocellular carcinoma (HCC)-free patient survival in liver transplant (LT) recipients with a pre-transplant diagnosis of HCC. Methods: The study is an open-label RCT comparing sirolimus-containing versus mTOR-inhibitor-free immunosuppression in patients undergoing LT for HCC. Patients with a histologically confirmed HCC diagnosis are randomised into two groups within 4-6 weeks after LT; one arm is maintained on a centre-specific mTOR-inhibitor-free immunosuppressive protocol, and the second arm is maintained on a centre-specific mTOR-inhibitor-free immunosuppressive protocol for the first 4-6 weeks, at which time sirolimus is initiated. A 3-year recruitment phase is planned with a 5-year follow-up, testing HCC-free survival as the primary endpoint. Our hypothesis is that sirolimus use in the second arm of the study will improve HCC-free survival. The study is a non-commercial investigator-initiated trial (IIT) sponsored by the University Hospital Regensburg and is endorsed by the European Liver and Intestine Transplant Association; 13 countries within Europe, Canada and Australia are participating. Discussion: If our hypothesis is correct that mTOR inhibition can reduce HCC tumour growth while simultaneously providing immunosuppression to protect the liver allograft from rejection, patients should experience fewer post-transplant problems with HCC recurrence and could therefore expect a longer and better quality of life. A positive outcome will likely change the standard of post-transplant immunosuppressive care for LT patients with HCC. (trial registered at www.clinicaltrials.gov: NCT00355862) (EudraCT Number: 2005-005362-36)