Considered are the classes QL (quasilinear) and NQL (nondeterministic quasilinear) of all those problems that can be solved by deterministic (nondeterministic, respectively) Turing machines in time O(n(log n)^k) for some k. Efficient algorithms have time bounds of this type, it is argued. Many of the "exhaustive search" type problems such as satisfiability and colorability are complete in NQL with respect to reductions that take O(n(log n)^k) steps. This implies that QL = NQL iff satisfiability is in QL. CR CATEGORIES: 5.25
We present a hierarchy of polynomial time lattice basis reduction algorithms that stretch from Lenstra–Lenstra–Lovász reduction to Korkine–Zolotareff reduction. Let λ(L) be the length of a shortest nonzero element of a lattice L. We present an algorithm which for k ∈ ℕ finds a nonzero lattice vector b so that |b|² ⩽ (6k²)^(n/k)·λ(L)². This algorithm uses O(n²(k^(k+o(k)) + n²) log B) arithmetic operations on O(n log B)-bit integers, provided that the given basis vectors b1,…,bn ∈ ℤⁿ are integral and have length bound B. The algorithm successively applies Korkine–Zolotareff reduction to blocks of length k of the lattice basis. We also improve Kannan's algorithm for Korkine–Zolotareff reduction.
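The hierarchy interpolates between these two reductions via the block size k. For orientation, here is a minimal textbook sketch of plain Lenstra–Lenstra–Lovász reduction, the starting point of the hierarchy, in Python with exact rational arithmetic; it is not the paper's block algorithm, and the repeated Gram–Schmidt recomputation is chosen for clarity over efficiency:

```python
from fractions import Fraction

def gram_schmidt(b):
    """Gram-Schmidt orthogonalization of the rows of b; returns b* and mu."""
    n = len(b)
    bstar = [[Fraction(x) for x in v] for v in b]
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            d = sum(x * x for x in bstar[j])
            mu[i][j] = sum(Fraction(b[i][k]) * bstar[j][k] for k in range(len(b[0]))) / d
            for k in range(len(b[0])):
                bstar[i][k] -= mu[i][j] * bstar[j][k]
    return bstar, mu

def lll(b, delta=Fraction(3, 4)):
    """Textbook LLL reduction of an integer row basis b."""
    b = [list(v) for v in b]
    k = 1
    while k < len(b):
        for j in range(k - 1, -1, -1):          # size-reduce b[k] against b[j]
            _, mu = gram_schmidt(b)
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bstar, mu = gram_schmidt(b)
        lhs = sum(x * x for x in bstar[k])
        rhs = (delta - mu[k][k - 1] ** 2) * sum(x * x for x in bstar[k - 1])
        if lhs >= rhs:                           # Lovász condition holds
            k += 1
        else:                                    # otherwise swap and step back
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return b

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```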
The general subset sum problem is NP-complete. However, there are two algorithms, one due to Brickell and the other to Lagarias and Odlyzko, which in polynomial time solve almost all subset sum problems of sufficiently low density. Both methods rely on basis reduction algorithms to find short nonzero vectors in special lattices. The Lagarias-Odlyzko algorithm would solve almost all subset sum problems of density < 0.6463... in polynomial time if it could invoke a polynomial-time algorithm for finding the shortest nonzero vector in a lattice. This paper presents two modifications of that algorithm, either one of which would solve almost all problems of density < 0.9408... if it could find shortest nonzero vectors in lattices. These modifications also yield dramatic improvements in practice when they are combined with known lattice basis reduction algorithms.
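The lattice behind these attacks is simple to write down. The following hedged sketch builds the original Lagarias–Odlyzko basis for weights a_1,…,a_n and target s; the paper's modifications alter this basis (e.g. by re-centering the solution coordinates), and the scaling constant N and the toy instance are choices made for this illustration:

```python
import math

def lagarias_odlyzko_basis(a, s):
    """Row basis of the Lagarias-Odlyzko lattice for weights a and target s.

    A 0/1 solution e with sum(a_i * e_i) = s corresponds to the short lattice
    vector e_1*b_1 + ... + e_n*b_n - b_{n+1} = (e_1, ..., e_n, 0), which a
    basis reduction algorithm may then find.
    """
    n = len(a)
    N = math.isqrt(n) + 1          # scaling > sqrt(n); a common choice, assumed here
    rows = []
    for i in range(n):
        row = [0] * (n + 1)
        row[i] = 1
        row[n] = N * a[i]
        rows.append(row)
    rows.append([0] * n + [N * s])
    return rows

# Tiny instance: weights {3, 5, 7, 11}, target 3 + 7 = 10.
for row in lagarias_odlyzko_basis([3, 5, 7, 11], 10):
    print(row)
```

A reduction algorithm (such as the LLL sketch above) is then run on these rows; a reduced vector of the form (e_1,…,e_n, 0) with all e_i in {0,1} reveals the subset.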
It is well known that artificial neural nets can approximate any continuous function to any desired degree and can therefore be used e.g. in high-speed, real-time process control. Nevertheless, for a given application and a given network architecture the non-trivial task remains to determine the necessary number of neurons and the necessary accuracy (number of bits) per weight for satisfactory operation; these are critical issues in VLSI and computer implementations of non-trivial tasks. In this paper the accuracy of the weights and the number of neurons are seen as general system parameters which determine the maximal approximation error by the absolute amount and the relative distribution of information contained in the network. We define the error-bounded network descriptional complexity as the minimal number of bits for a class of approximation networks that shows a certain approximation error, and we achieve the conditions for this goal by the new principle of optimal information distribution. For two examples, a simple linear approximation of a non-linear, quadratic function and a non-linear approximation of the inverse kinematic transformation used in robot manipulator control, the principle of optimal information distribution gives the optimal number of neurons and the resolutions of the variables, i.e. the minimal amount of storage for the neural net. Keywords: Kolmogorov complexity, ε-entropy, rate-distortion theory, approximation networks, information distribution, weight resolutions, Kohonen mapping, robot control.
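As a toy illustration of the bits-per-weight trade-off the paper formalizes (and nothing more: this sketch quantizes all weights uniformly, whereas the principle of optimal information distribution allocates resolution per weight), one can quantize the weights of a fitted approximator and watch the maximal error grow:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 200)
y = np.exp(x)                               # smooth nonlinear target (an assumption of the sketch)

# Linear-in-parameters approximator: cubic polynomial, least-squares weights.
A = np.vander(x, 4)
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def quantize(w, bits):
    """Uniform signed quantization of every weight to `bits` bits."""
    scale = np.abs(w).max()
    levels = 2 ** (bits - 1) - 1
    return np.round(w / scale * levels) / levels * scale

for bits in (16, 8, 4, 2):
    err = np.max(np.abs(A @ quantize(w, bits) - y))
    print(f"{bits:2d} bits per weight -> max approximation error {err:.5f}")
```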
Let b1,…,bm ∈ ℝⁿ be an arbitrary basis of lattice L that is a block Korkin–Zolotarev basis with block size β, and let λi(L) denote the successive minima of lattice L. We prove that for i = 1,…,m

(4/(i+3))·γβ^(−2(i−1)/(β−1)) ≤ ‖bi‖²/λi(L)² ≤ ((i+3)/4)·γβ^(2(m−i)/(β−1)),

where γβ is the Hermite constant. For β = 3 we establish the optimal upper bound ‖b1‖²/λ1(L)² ≤ (3/2)^((m−1)/2), and we present block Korkin–Zolotarev lattice bases for which this bound is tight. We improve the Nearest Plane Algorithm of Babai (1986) using block Korkin–Zolotarev bases. Given a block Korkin–Zolotarev basis b1,…,bm with block size β and x ∈ span(b1,…,bm), a lattice point v can be found in time β^O(β) satisfying ‖x − v‖² ≤ m·γβ^(2m/(β−1))·min_{u∈L} ‖x − u‖².
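Babai's nearest plane procedure itself fits in a few lines. The sketch below is the standard textbook version in floating point, not the paper's improved variant; the quality of the returned point depends on how strongly reduced the input basis is, which is exactly where block Korkin–Zolotarev bases enter:

```python
import numpy as np

def nearest_plane(B, x):
    """Babai's nearest plane algorithm: a lattice point of L(B) close to x.

    B holds the basis vectors as rows. The approximation guarantee depends
    on the reduction quality of B.
    """
    B = np.asarray(B, dtype=float)
    Bs = np.array(B)                    # Gram-Schmidt orthogonalization of the rows
    for i in range(len(B)):
        for j in range(i):
            Bs[i] -= (B[i] @ Bs[j]) / (Bs[j] @ Bs[j]) * Bs[j]
    v = np.zeros_like(B[0])
    t = np.asarray(x, dtype=float)
    for i in reversed(range(len(B))):
        c = round((t @ Bs[i]) / (Bs[i] @ Bs[i]))   # index of the nearest hyperplane
        v += c * B[i]
        t -= c * B[i]
    return v

print(nearest_plane([[1, 0], [0.3, 1]], [5.4, 2.2]))   # -> [5.6, 2.0]
```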
Parallel FFT-hashing
(1994)
We propose two families of scalable hash functions for collision resistant hashing that are highly parallel and based on the generalized fast Fourier transform (FFT). FFT hashing is based on multipermutations. This is a basic cryptographic primitive for perfect generation of diffusion and confusion which generalizes the boxes of the classic FFT. The slower FFT hash functions iterate a compression function. For the faster FFT hash functions all rounds are alike with the same number of message words entering each round.
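To make the wiring concrete, here is a toy sketch; the modulus and all names are assumptions of this illustration, not the paper's actual compression function. The classic butterfly over a prime field is a multipermutation, and an FFT layout applies n/2 independent butterflies per layer, which is what makes the construction highly parallel:

```python
P = 2 ** 16 + 1     # a Fermat prime, a natural modulus for FFT-style mixing (assumed here)

def butterfly(a, b):
    """Classic FFT butterfly (a, b) -> (a + b, a - b) mod P. Fixing either
    input, each output is a permutation of the other input: a
    multipermutation, the diffusion/confusion primitive the abstract names."""
    return (a + b) % P, (a - b) % P

def fft_diffusion(x):
    """Toy network: log2(n) layers of butterflies wired like an FFT.
    Illustrates the parallel wiring only, not the real hash function."""
    n = len(x)                      # n must be a power of two
    x = list(x)
    stride = 1
    while stride < n:
        for i in range(n):
            j = i ^ stride
            if j > i:               # each pair is touched once per layer
                x[i], x[j] = butterfly(x[i], x[j])
        stride *= 2
    return x

print(fft_diffusion([1, 2, 3, 4, 5, 6, 7, 8]))
```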
We present a framework for the self-organized formation of high-level learning by a statistical preprocessing of features. The paper focuses first on the formation of the features in the context of layers of feature-processing units, as a kind of resource-restricted associative multiresolution learning. We claim that such an architecture must reach maturity by basic statistical proportions, optimizing the information processing capabilities of each layer. The final symbolic output is learned by pure association of features of different levels and kinds of sensorial input. Finally, we show that common error-correction learning for motor skills can also be accomplished by non-specific associative learning. Keywords: feedforward network layers, maximal information gain, restricted Hebbian learning, cellular neural nets, evolutionary associative learning
After a short introduction to traditional image transform coding, multirate systems and multiscale signal coding, the paper focuses on image encoding by a neural network. Taking noise into account as well, a network model is proposed which not only learns the optimal localized basis functions for the transform but also learns to implement a whitening filter by multi-resolution encoding. A simulation showing the multi-resolution capabilities concludes the contribution.
The paper focuses on the division of the sensor field into subsets of sensor events and proposes the linear transformation with the smallest achievable reproduction error: the transform coding approach using principal component analysis (PCA). For the implementation of the PCA, this paper introduces a new symmetrical, laterally inhibited neural network model, proposes an objective function for it and deduces the corresponding learning rules. The necessary conditions on the learning rate and the inhibition parameter for balancing the cross-correlations against the autocorrelations are computed. The simulation reveals that increasing inhibition can slightly speed up the convergence process in the beginning. The remainder of the paper discusses the application of the network to picture encoding. Here, the use of non-completely connected networks for the self-organized formation of templates in cellular neural networks is shown. It turns out that the self-organizing Kohonen map is just the non-linear, first-order approximation of a general self-organizing scheme. Hereby, classical transform picture coding is changed into a parallel, local model of linear transformation by locally changing sets of self-organized eigenvector projections with overlapping input receptive fields. This approach favors an effective, cheap implementation of sensor encoding directly on the sensor chip. Keywords: Transform coding, Principal component analysis, Lateral inhibited network, Cellular neural network, Kohonen map, Self-organized eigenvector jets.
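The paper's lateral-inhibition network is not reproduced here, but the flavor of neural PCA can be conveyed by Oja's classical single-neuron rule, which converges (up to sign) to the first principal component; the data and learning rate below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
# Correlated 2-D data whose principal axis is roughly diagonal.
X = rng.normal(size=(5000, 2)) @ np.array([[2.0, 1.5], [0.0, 0.5]])
X -= X.mean(axis=0)

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)      # Oja's rule: Hebbian term plus weight decay

w /= np.linalg.norm(w)
# Compare with the leading covariance eigenvector (sign may differ).
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
print("Oja:", w, " eig:", eigvecs[:, -1])
```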
This paper describes the use of a radial basis function (RBF) neural network to approximate the process parameters for the extrusion of a rubber profile used in tyre production. After introducing the problem, we describe the RBF net algorithm and the modeling of the industrial problem. The algorithm shows good results even when only a few training samples are used. It turns out that the „curse of dimensions“ plays an important role in the model. The paper concludes with a discussion of possible systematic error influences and improvements.
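For reference, a minimal sketch of the RBF-net idea: Gaussian units with a linear output layer fitted by least squares. The centers, width, and toy target are assumptions of this illustration, not the extrusion model from the paper:

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF design matrix: one column per center."""
    return np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) ** 2
                  / (2 * width ** 2))

def rbf_fit(X, y, centers, width):
    """Fit the linear output weights of the RBF net by least squares."""
    w, *_ = np.linalg.lstsq(rbf_design(X, centers, width), y, rcond=None)
    return w

# Few training samples, as in the abstract's setting.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(15, 1))
y = np.sin(3 * X[:, 0])
centers = np.linspace(-1, 1, 7)[:, None]
w = rbf_fit(X, y, centers, width=0.4)

X_test = np.linspace(-1, 1, 5)[:, None]
pred = rbf_design(X_test, centers, 0.4) @ w
print(np.c_[np.sin(3 * X_test[:, 0]), pred])   # target vs. RBF prediction
```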
We analyse a continued fraction algorithm (abbreviated CFA) for arbitrary dimension n, showing that it produces simultaneous diophantine approximations which are, up to the factor 2^((n+2)/4), best possible. Given a real vector x = (x_1,...,x_{n-1},1) ∈ R^n, this CFA generates a sequence of vectors (p_1^(k),...,p_{n-1}^(k),q^(k)) ∈ Z^n, k = 1,2,..., with increasing integers |q^(k)| satisfying, for i = 1,...,n-1,

| x_i - p_i^(k)/q^(k) | <= 2^((n+2)/4) sqrt(1+x_i^2) / |q^(k)|^(1+1/(n-1)).

By a theorem of Dirichlet this bound is best possible in that the exponent 1+1/(n-1) can in general not be increased.
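For n = 2 the vector is (x_1, 1) and the exponent becomes 2, which is the classical one-dimensional continued fraction guarantee |x_1 − p/q| < 1/q². A short sketch of that classical case (not the paper's multidimensional CFA):

```python
from fractions import Fraction
import math

def cf_convergents(x, steps):
    """Continued fraction convergents p/q of a real number x."""
    p0, q0, p1, q1 = 1, 0, math.floor(x), 1
    t = x - math.floor(x)
    out = [Fraction(p1, q1)]
    for _ in range(steps):
        if t == 0:
            break
        x = 1 / t
        a = math.floor(x)
        t = x - a
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        out.append(Fraction(p1, q1))
    return out

x = math.pi
for c in cf_convergents(x, 4):
    # Dirichlet-quality approximation: |x - p/q| < 1/q^2.
    print(c, abs(x - c), 1 / c.denominator ** 2)
```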
A memory checker for a data structure provides a method to check that the output of the data structure operations is consistent with the input, even if the data is stored on some insecure medium. In [8] we present a general solution for all data structures that are based on insert(i,v) and delete(j) commands; in particular this includes stacks, queues, deques (double-ended queues) and lists. Here, we describe more time- and space-efficient solutions for stacks, queues and deques. Each algorithm takes only a single evaluation of a pseudorandom-like function such as DES, or of a collision-free hash function such as MD5 or SHA, per push/pop resp. enqueue/dequeue command, making our methods applicable to smart cards.
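One standard way such a checker can work for stacks, sketched below with SHA-256 standing in for the pseudorandom-like function (the paper's exact scheme may differ): the trusted side stores only a single digest, and each push and pop costs one hash evaluation.

```python
import hashlib

def h(prev_digest, value):
    """One hash evaluation per operation (SHA-256 used here for the sketch)."""
    return hashlib.sha256(prev_digest + value.encode()).digest()

class CheckedStack:
    """Stack on untrusted storage; the checker keeps only `top_digest`
    (constant trusted memory) and verifies every pop."""
    def __init__(self):
        self.untrusted = []              # adversary-controlled in the model
        self.top_digest = b"\x00" * 32

    def push(self, value):
        self.untrusted.append((value, self.top_digest))
        self.top_digest = h(self.top_digest, value)

    def pop(self):
        value, below = self.untrusted.pop()
        if h(below, value) != self.top_digest:
            raise RuntimeError("memory checker: stack content was tampered with")
        self.top_digest = below
        return value

s = CheckedStack()
s.push("a"); s.push("b")
s.untrusted[-1] = ("x", s.untrusted[-1][1])   # simulate tampering
try:
    s.pop()
except RuntimeError as e:
    print(e)
```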
A new approach to optimizing multilevel logic circuits is introduced. Given a multilevel circuit, the synthesis method optimizes its area while simultaneously enhancing its random-pattern testability. The method is based on structural transformations at the gate level. New transformations involving EX-OR gates as well as Reed–Muller expansions are introduced into the synthesis of multilevel circuits. This method is augmented with transformations that specifically enhance random-pattern testability while reducing the area. Testability enhancement is an integral part of our synthesis methodology. Experimental results show that the proposed methodology not only achieves lower area than other similar tools but also better testability than available testability enhancement tools such as tstfx. Specifically, for the ISCAS-85 benchmark circuits it was observed that the EX-OR gate-based transformations successfully contributed toward generating smaller circuits compared to other state-of-the-art logic optimization tools.
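Since the method builds on Reed–Muller expansions, a small refresher may help: the positive-polarity Reed–Muller form (algebraic normal form) of a Boolean function is obtained from its truth table by a butterfly-style Möbius transform, sketched here. This is background, not the paper's synthesis procedure:

```python
def reed_muller(truth_table):
    """Positive-polarity Reed-Muller (algebraic normal form) coefficients of a
    Boolean function given as a truth table of length 2**n, computed in place
    by the Moebius transform."""
    f = list(truth_table)
    n = len(f)
    step = 1
    while step < n:
        for i in range(n):
            if i & step:
                f[i] ^= f[i ^ step]     # XOR the subfunction without this variable
        step <<= 1
    return f

# XOR of two variables, truth table indexed by (x1 x0):
# result [0, 1, 1, 0] means f = x0 XOR x1 (no constant, no x0*x1 term).
print(reed_muller([0, 1, 1, 0]))
```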
In this paper we first consider the situation where parallel channels are disturbed by noise. With the goal of maximal information conservation, we derive the conditions for a transform which "immunizes" the channels against noise influence before the signals are used in later operations. It turns out that the signals have to be decorrelated and normalized by the filter, which in the case of one channel corresponds to the classical result of Shannon. Additional simulations for image encoding and decoding show that this constitutes an efficient approach to noise suppression. Furthermore, from a corresponding objective function we derive the stochastic and deterministic learning rules for a neural network that implements the data orthonormalization. In comparison with existing normalization networks, our network performs approximately the same in the stochastic case but, by its generic derivation, ensures convergence and enables its use as an independent building block in other contexts, e.g. whitening for independent component analysis. Keywords: information conservation, whitening filter, data orthonormalization network, image encoding, noise suppression.
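The transform such a network learns can also be written in closed form from the covariance eigensystem. A minimal reference sketch (ZCA-style whitening; the epsilon regularizer is an assumption of the sketch, not part of the paper):

```python
import numpy as np

def whiten(X, eps=1e-9):
    """Decorrelate and normalize channels: a whitening transform computed
    directly from the covariance eigensystem (ZCA form)."""
    X = X - X.mean(axis=0)
    cov = np.cov(X.T)
    eigvals, E = np.linalg.eigh(cov)
    W = E @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ E.T
    return X @ W

rng = np.random.default_rng(3)
X = rng.normal(size=(10000, 3)) @ rng.normal(size=(3, 3))   # correlated channels
print(np.round(np.cov(whiten(X).T), 3))                      # approximately the identity
```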
Between 1 November 1993 and 30 March 1997, 1149 general surgical intensive care patients were prospectively recorded, 114 of whom met the criteria for septic shock. The mortality of the patients with septic shock was 47.3%. After training a neural network with 91 (of a total of n = 114) patients, testing on the remaining 23 patients, taking into account parameter changes from day 1 to day 2 of septic shock, gave the following result: all 10 deceased patients were correctly predicted as non-survivors, and of the 13 survivors, 12 were correctly predicted as survivors (sensitivity 100%; specificity 92.3%).
In this paper we present a non-deterministic call-by-need (untyped) lambda calculus λnd with a constant choice and a let-syntax that models sharing. Our main result is that λnd has the nice operational properties of the standard lambda calculus: confluence on sets of expressions, and normal order reduction is sufficient to reach head normal form. Using a strong contextual equivalence we show the correctness of several program transformations, in particular of lambda-lifting using deterministic maximal free expressions. These results show that λnd is a new and natural combination of non-determinism and the lambda calculus which offers many opportunities for parallel evaluation. An intended application of λnd is as a foundation for compiling lazy functional programming languages with I/O based on direct calls. The set of correct program transformations can be rigorously distinguished from incorrect ones. All program transformations are permitted, with the slight exception that for transformations like common subexpression elimination and lambda-lifting with maximal free expressions the involved subexpressions have to be deterministic.