University Publications
Refine
Year of publication
Document Type
- Preprint (729)
- Article (370)
- Working Paper (68)
- Doctoral Thesis (64)
- Book (36)
- Bachelor Thesis (35)
- Conference Proceeding (23)
- Diploma Thesis (19)
- Part of a Book (11)
- Contribution to a Periodical (10)
Has Fulltext
- yes (1380)
Is part of the Bibliography
- no (1380)
Keywords
- Heavy Ion Experiments (19)
- Lambda-Kalkül (12)
- Hadron-Hadron Scattering (11)
- Formale Semantik (10)
- Hadron-Hadron scattering (experiments) (10)
- LHC (8)
- Heavy-ion collision (7)
- concurrency (6)
- functional programming (6)
- Operationale Semantik (5)
Institute
- Informatik (1380)
After a short introduction to traditional image transform coding, multirate systems, and multiscale signal coding, the paper focuses on image encoding by a neural network. Taking noise into account, a network model is proposed that not only learns the optimal localized basis functions for the transform but also learns to implement a whitening filter through multi-resolution encoding. A simulation demonstrating the multi-resolution capabilities concludes the contribution.
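The whitening step mentioned above can be illustrated with a minimal NumPy sketch: a linear transform that makes the covariance of correlated data the identity. The 2-D toy signal and the ZCA form of the whitening matrix are illustrative assumptions, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 2-D signal standing in for image data.
X = rng.normal(size=(1000, 2)) @ np.array([[2.0, 0.0], [1.5, 0.5]])
X -= X.mean(axis=0)

# Whitening: decorrelate and rescale so the covariance becomes the identity.
C = np.cov(X, rowvar=False)
eigval, eigvec = np.linalg.eigh(C)
W = eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T   # ZCA whitening matrix
Xw = X @ W.T                                             # whitened signal
```

After the transform, `np.cov(Xw, rowvar=False)` is the identity matrix up to floating-point error; a network that learns such a filter removes second-order redundancy from its input.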
We present a framework for the self-organized formation of high-level learning by statistical preprocessing of features. The paper focuses first on the formation of features in the context of layers of feature-processing units, a kind of resource-restricted associative multiresolution learning. We claim that such an architecture must reach maturity through basic statistical properties, optimizing the information-processing capabilities of each layer. The final symbolic output is learned by pure association of features of different levels and kinds of sensorial input. Finally, we also show that common error-correction learning for motor skills can be accomplished by non-specific associative learning. Keywords: feedforward network layers, maximal information gain, restricted Hebbian learning, cellular neural nets, evolutionary associative learning
One of the most interesting domains for feedforward networks is the processing of sensor signals. Some networks extract most of the information by implementing the maximum-entropy principle for Gaussian sources. This is done by transforming input patterns to the basis of eigenvectors of the input autocorrelation matrix with the largest eigenvalues. The basic building block of these networks is the linear neuron, learning with the Oja learning rule. Nevertheless, some researchers in pattern recognition theory argue that pattern recognition and classification require clustering transformations which reduce the intra-class entropy. This leads to stable, reliable features and is implemented for Gaussian sources by a linear transformation using the eigenvectors with the smallest eigenvalues. In another paper (Brause 1992) it is shown that the basic building block for such a transformation can be implemented by a linear neuron using an anti-Hebb rule and restricted weights. This paper shows the analog VLSI design for such a building block, using standard modules for multiplication and addition. The most tedious problem in this VLSI application is the design of an analog vector-normalization circuit. It can be shown that the standard approaches of weight summation do not converge to the eigenvectors needed for a proper feature transformation. To avoid this problem, our design differs significantly from the standard approaches by computing the true Euclidean norm. Keywords: minimum entropy, principal component analysis, VLSI, neural networks, surface approximation, cluster transformation, weight normalization circuit.
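The Oja rule named above, combined with explicit Euclidean weight normalization, can be sketched in a few lines. This is a software illustration of the learning rule only, under assumed toy data, not the paper's VLSI circuit; the data, learning rate, and iteration count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Zero-mean data whose autocorrelation has a clearly dominant direction (e1).
X = rng.normal(size=(5000, 2)) * np.array([3.0, 0.5])

w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)   # Oja's rule: Hebbian term with weight decay
    w /= np.linalg.norm(w)       # explicit Euclidean normalization, as in the paper
```

The weight vector converges to the principal eigenvector of the autocorrelation matrix (here, up to sign, the first coordinate axis). The per-step division by the Euclidean norm is exactly the operation whose analog realization the abstract identifies as the hard part of the VLSI design.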
It is well known that artificial neural nets can approximate any continuous function to any desired degree and can therefore be used, e.g., in high-speed, real-time process control. Nevertheless, for a given application and a given network architecture the non-trivial task remains to determine the necessary number of neurons and the necessary accuracy (number of bits) per weight for satisfactory operation; these are critical issues in VLSI and computer implementations of non-trivial tasks. In this paper the accuracy of the weights and the number of neurons are seen as general system parameters which determine the maximal approximation error through the absolute amount and the relative distribution of information contained in the network. We define the error-bounded network descriptional complexity as the minimal number of bits for a class of approximation networks exhibiting a given approximation error, and achieve the conditions for this goal by the new principle of optimal information distribution. For two examples, a simple linear approximation of a non-linear, quadratic function and a non-linear approximation of the inverse kinematic transformation used in robot manipulator control, the principle of optimal information distribution yields the optimal number of neurons and the resolutions of the variables, i.e. the minimal amount of storage for the neural net. Keywords: Kolmogorov complexity, ε-entropy, rate-distortion theory, approximation networks, information distribution, weight resolutions, Kohonen mapping, robot control.
It is well known that artificial neural nets can approximate any continuous function to any desired degree. Nevertheless, for a given application and a given network architecture the non-trivial task remains to determine the necessary number of neurons and the necessary accuracy (number of bits) per weight for satisfactory operation. In this paper the problem is treated by an information-theoretic approach. The values for the weights and thresholds in the approximator network are determined analytically. Furthermore, the accuracy of the weights and the number of neurons are seen as general system parameters which determine the maximal output information (i.e. the approximation error) through the absolute amount and the relative distribution of information contained in the network. A new principle of optimal information distribution is proposed and the conditions for the optimal system parameters are derived. For the simple, instructive example of a linear approximation of a non-linear, quadratic function, the principle of optimal information distribution yields the optimal system parameters, i.e. the number of neurons and the different resolutions of the variables.
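The trade-off in the two abstracts above, spending a fixed bit budget on either more neurons or more bits per weight, can be made concrete with a toy sketch. The piecewise-linear fit of x² and the uniform quantizer below are illustrative assumptions, not the papers' networks or their optimality conditions.

```python
import numpy as np

def quantize(v, bits):
    """Uniform quantization of values in [0, 1] to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(v * levels) / levels

def max_error(n_neurons, bits):
    """Worst-case error of a piecewise-linear fit of x**2 on [0, 1]
    whose n_neurons knot values are each stored with `bits` bits."""
    knots = np.linspace(0.0, 1.0, n_neurons)
    weights = quantize(knots ** 2, bits)        # quantized output weights
    xs = np.linspace(0.0, 1.0, 2001)
    return np.max(np.abs(np.interp(xs, knots, weights) - xs ** 2))

# Three ways to spend the same 64-bit budget on the approximator:
few_precise = max_error(n_neurons=4,  bits=16)  # few neurons, fine weights
balanced    = max_error(n_neurons=8,  bits=8)
many_coarse = max_error(n_neurons=16, bits=4)   # many neurons, coarse weights
```

With the budget fixed, the balanced allocation beats both extremes: too few neurons leave a large discretization error regardless of weight precision, while too coarse weights waste the extra neurons. Distributing the available information well is exactly the kind of question the principle of optimal information distribution answers analytically.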
Towards correctness of program transformations through unification and critical pair computation
(2010)
Correctness of program transformations in extended lambda-calculi with a contextual semantics is usually based on reasoning about the operational semantics which is a rewrite semantics. A successful approach is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules, which results in so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first order way by variants of critical pair computation that use unification algorithms. As a case study of an application we describe a finitary and decidable unification algorithm for the combination of the equational theory of left-commutativity modelling multi-sets, context variables and many-sorted unification. Sets of equations are restricted to be almost linear, i.e. every variable and context variable occurs at most once, where we allow one exception: variables of a sort without ground terms may occur several times. Every context variable must have an argument-sort in the free part of the signature. We also extend the unification algorithm by the treatment of binding-chains in let- and letrec-environments and by context-classes. This results in a unification algorithm that can be applied to all overlaps of normal-order reductions and transformations in an extended lambda calculus with letrec that we use as a case study.
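The critical-pair computations described above build on first-order unification. A minimal Robinson-style sketch of the plain syntactic core is given below; the term encoding is an assumption for illustration, and the paper's actual algorithm additionally handles context variables, sorts, left-commutativity, and letrec binding chains.

```python
# Terms: variables are strings starting with '?'; constants and compound
# terms are tuples ("f", arg1, ...), with constants as 0-ary tuples ("a",).

def walk(term, subst):
    """Follow variable bindings in the substitution."""
    while isinstance(term, str) and term in subst:
        term = subst[term]
    return term

def occurs(var, term, subst):
    """Occurs check: does var appear inside term under subst?"""
    term = walk(term, subst)
    if term == var:
        return True
    return isinstance(term, tuple) and any(occurs(var, a, subst) for a in term[1:])

def unify(s, t, subst=None):
    """Robinson-style unification; returns a substitution dict or None."""
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if isinstance(s, str):                      # s is an unbound variable
        return None if occurs(s, t, subst) else {**subst, s: t}
    if isinstance(t, str):
        return unify(t, s, subst)
    if s[0] != t[0] or len(s) != len(t):        # clash of head symbols
        return None
    for a, b in zip(s[1:], t[1:]):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# Overlap-style example: unify f(?x, g(?y)) with f(g(a), ?x).
sigma = unify(("f", "?x", ("g", "?y")), ("f", ("g", ("a",)), "?x"))
```

Here `sigma` binds `?x` to `g(a)` and `?y` to `a`; a head-symbol clash or an occurs-check failure yields `None`, signalling that the two left-hand sides have no overlap.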
This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in the deterministic call-by-need lambda calculus with letrec. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations. Although this property may be a natural one to expect, to the best of our knowledge, this paper is the first one providing a proof. The proof technique is to transfer the contextual approximation into Abramsky's lazy lambda calculus by a fully abstract and surjective translation. This also shows that the natural embedding of Abramsky's lazy lambda calculus into the call-by-need lambda calculus with letrec is an isomorphism between the respective term-models. We show that the equivalence property proven in this paper transfers to a call-by-need letrec calculus developed by Ariola and Felleisen.
This note shows that in non-deterministic extended lambda calculi with letrec, the tool of applicative (bi)simulation is in general not usable for contextual equivalence, by giving a counterexample adapted from data flow analysis. It is also shown that there is a flaw in a lemma and a theorem concerning finite simulation in a conference paper by the first two authors.
A logical framework consisting of a polymorphic call-by-value functional language and a first-order logic on the values is presented, which is a reconstruction of the logic of the verification system VeriFun. The reconstruction uses contextual semantics to define the logical value of equations. It equates undefinedness and non-termination, which is a standard semantical approach. The main results of this paper are meta-theorems about the globality of several classes of theorems in the logic, and proofs of global correctness of transformations and deduction rules. The deduction rules of VeriFun are globally correct if rules depending on termination are appropriately formulated. The reconstruction also gives hints on generalizations of the VeriFun framework: reasoning on non-terminating expressions and functions, mutually recursive functions and abstractions in the data values, and formulas with arbitrary quantifier prefixes could be allowed.
Microarray experiments are used for genome-wide gene expression analysis. The aim of this work is to evaluate methods for the preprocessing of Affymetrix microarrays and to improve the VSN method for experiments with fewer than 1000 cells. In this technology, the expression of each gene is measured by several probe sets. Each probe set consists of a perfect match (PM) and an associated mismatch (MM). The expression value per gene is computed from the individual probe values in a four-step procedure: background correction, normalization, PM adjustment, and aggregation. Several algorithms exist for each of these steps. Here, the methods MAS5, RMA, and VSN implemented in the affy package of Bioconductor were used, as well as the sRMA method of Cope et al. [Cope et al., 2006] in combination with the VSN method of Huber et al. [Huber et al., 2002]. The first part of this work is the reanalysis of the data sets of Küppers et al. [Küppers et al., 2003] and Piccaluga et al. [Piccaluga et al., 2007] with the VSN method. It could be shown that the VSN method offers advantages over the approach of Klein et al. [Klein et al., 2001]. In both data sets, additional genes were found that may be important for the pathogenesis of the respective tumor types. Some of the additionally found genes were confirmed by other scientific publications. The genes that so far have no known connection to the tumor type under investigation are an opportunity for further research. Above all, the cytokine/cytokine signaling pathway was recognized as over-represented in both reanalyses. Since for some microarray experiments the number of cells, and hence the amount of mRNA, is limited, the laboratory work and the statistical analyses must be adapted. To this end, five preprocessing methods are examined in order to evaluate which of them is suitable for processing such expression data.
On the basis of a test data set that had already served to establish the laboratory process, expression values are simulated via the empirical distribution, a gamma distribution, and a linear mixed model. The simulation can be divided into four steps: choice of the distribution, simulation of the expression matrix, simulation of differential expression, and sorting of the probes within the probe set. The five preprocessing methods are then examined for their sensitivity and specificity on these simulated expression data. While no clear result emerges for the empirically and gamma-distributed simulated expression data, sVSN shows the highest sensitivity and the highest specificity on the data from the linear mixed model. The sVSN algorithm developed in this work was applied and evaluated here for the first time. Finally, a subset of the data of Brune et al. is used and examined with respect to the five preprocessing methods. The results of the sVSN method are followed up in detail. The additionally found genes can be confirmed by previously published work. Ultimately, it turns out that newer statistical methods (such as sVSN, developed in the course of this work) offer an advantage in the analysis of Affymetrix microarrays. The sVSN and sRMA methods show advantages because the probes are weighted after normalization, before they are aggregated. The MAS5 method performs worst and should not be used with small cell quantities. For the analysis of small amounts of mRNA, further investigations are required in order to find a suitable statistical method for the analysis of the expression data.
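At the heart of VSN-style preprocessing is a variance-stabilizing generalized-log (arsinh) transform. The sketch below applies it to simulated intensities whose variance grows with the mean; the error model and the fixed offset/scale parameters are illustrative assumptions, whereas VSN fits these parameters per array.

```python
import numpy as np

rng = np.random.default_rng(0)

# Multiplicative-plus-additive error model typical of array intensities:
# the standard deviation grows strongly with the true signal level.
true_level = np.array([50.0, 200.0, 800.0, 3200.0])
intensities = (true_level * np.exp(rng.normal(0.0, 0.2, size=(500, 4)))
               + rng.normal(0.0, 20.0, size=(500, 4)))

# Generalized-log (arsinh) transform: behaves like log for large values
# but stays defined for small and negative background-corrected values.
a, b = 0.0, 1.0 / 20.0            # offset/scale: fixed here, fitted in VSN
glog = np.arcsinh(a + b * intensities)

raw_sd = intensities.std(axis=0)  # spans more than an order of magnitude
glog_sd = glog.std(axis=0)        # roughly constant across signal levels
```

After the transform, the per-level standard deviations are nearly equal, which is what allows probes at very different intensity levels to be compared and weighted on a common scale before aggregation.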