One of the most interesting application domains of feedforward networks is the processing of sensor signals. Several networks extract most of the information by implementing the maximum entropy principle for Gaussian sources: input patterns are transformed to the basis of those eigenvectors of the input autocorrelation matrix that have the largest eigenvalues. The basic building block of these networks is the linear neuron, learning with the Oja rule. Nevertheless, some researchers in pattern recognition theory argue that pattern recognition and classification require clustering transformations that reduce the intra-class entropy. This leads to stable, reliable features and is implemented for Gaussian sources by a linear transformation using the eigenvectors with the smallest eigenvalues. In an earlier paper (Brause 1992) it is shown that the basic building block for such a transformation can be implemented by a linear neuron using an anti-Hebb rule and restricted weights. This paper presents the analog VLSI design for such a building block, using standard modules for multiplication and addition. The most tedious problem in this VLSI application is the design of an analog vector normalization circuit. It can be shown that the standard approaches based on weight summation do not converge to the eigenvectors needed for a proper feature transformation. To avoid this problem, our design differs significantly from the standard approaches by computing the true Euclidean norm. Keywords: minimum entropy, principal component analysis, VLSI, neural networks, surface approximation, cluster transformation, weight normalization circuit.
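The learning dynamics described above can be illustrated in software. The following sketch (a simplification, not the paper's VLSI circuit; the synthetic Gaussian source, learning rate, and sample count are assumptions for illustration) shows an anti-Hebbian linear neuron with explicit Euclidean weight normalization converging to the eigenvector with the smallest eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed 2-D Gaussian source with autocorrelation matrix C.
C = np.array([[3.0, 1.0],
              [1.0, 2.0]])
L = np.linalg.cholesky(C)
X = (L @ rng.standard_normal((2, 5000))).T  # 5000 samples, zero mean

w = rng.standard_normal(2)
w /= np.linalg.norm(w)
eta = 0.01  # illustrative learning rate
for x in X:
    y = w @ x
    w -= eta * y * x            # anti-Hebb step: decrease output variance
    w /= np.linalg.norm(w)      # explicit Euclidean norm, as in the paper's circuit

# The weight vector should align with the minor eigenvector of C.
vals, vecs = np.linalg.eigh(C)   # eigenvalues in ascending order
v_min = vecs[:, 0]
alignment = abs(w @ v_min)
```

Note that renormalizing with the true Euclidean norm (rather than a weight sum) is exactly what keeps the stochastic updates on the unit sphere, so the iteration performs projected gradient descent on the output variance.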
The encoding of images by semantic entities is still an unresolved task. This paper proposes encoding images by only a few important components, or image primitives. Classically, this can be done by Principal Component Analysis (PCA). Recently, Independent Component Analysis (ICA) has attracted strong interest in the signal processing and neural network communities. Using these as pattern primitives, we aim for source patterns with the highest occurrence probability or highest information. For the example of a synthetic image composed of characters, this idea selects the salient ones. For natural images it does not lead to an acceptable reproduction error, since no a priori probabilities can be computed. Combining the traditional principal-component criteria of PCA with the independence property of ICA, we obtain a better encoding. It turns out that the Independent Principal Components (IPC), in contrast to the Principal Independent Components (PIC), implement the classical demand of Shannon's rate-distortion theory.
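The PCA half of the encoding pipeline can be sketched as follows (the ICA step is omitted; the synthetic low-rank "patches", component count, and noise level are assumptions for illustration, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed synthetic data: 500 image patches of 64 pixels with rank-4 structure.
basis = rng.standard_normal((4, 64))
coeff = rng.standard_normal((500, 4))
patches = coeff @ basis + 0.01 * rng.standard_normal((500, 64))

mean = patches.mean(axis=0)
Xc = patches - mean

# PCA via SVD; the rows of Vt are the principal components (image primitives).
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 4
codes = Xc @ Vt[:k].T           # encode: k coefficients per patch
recon = codes @ Vt[:k] + mean   # decode from the few primitives
err = np.mean((patches - recon) ** 2)
```

Because the data is (by construction) nearly rank 4, four primitives suffice for a small reproduction error; on natural images, as the abstract notes, the picture is less favorable.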
We introduce novel security proofs that use combinatorial counting arguments rather than reductions to the discrete logarithm or to the Diffie-Hellman problem. Our security results are sharp and clean, with no polynomial reduction times involved. We consider a combination of the random oracle model and the generic model. This corresponds to assuming an ideal hash function H given by an oracle and an ideal group of prime order q, where the binary encoding of the group elements is useless for cryptographic attacks. In this model, we first show that Schnorr signatures are secure against the one-more signature forgery: a generic adversary performing t generic steps, including l sequential interactions with the signer, cannot produce l+1 signatures with a better probability than (t choose 2)/q. We also characterize the different power of sequential and of parallel attacks. Secondly, we prove that signed ElGamal encryption is secure against the adaptive chosen-ciphertext attack, in which an attacker can arbitrarily use a decryption oracle except on the challenge ciphertext. Moreover, signed ElGamal encryption is secure against the one-more decryption attack: a generic adversary performing t generic steps, including l interactions with the decryption oracle, cannot distinguish the plaintexts of l+1 ciphertexts from random strings with a probability exceeding (t choose 2)/q.
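For readers unfamiliar with the scheme being analyzed, here is textbook Schnorr signing and verification over a toy group (the parameters p = 23, q = 11, g = 2 and SHA-256-mod-q as the "random oracle" are illustrative assumptions only, far too small to be secure):

```python
import hashlib

# Toy parameters (NOT secure): p = 2q + 1, g generates the subgroup of order q.
p, q, g = 23, 11, 2

def H(*parts):
    """Idealized hash oracle: SHA-256 reduced mod the group order q."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen(x):
    """Secret key x in [1, q-1]; public key y = g^x mod p."""
    return pow(g, x, p)

def sign(x, m, r):
    """r is the per-signature nonce; it must be fresh and secret in practice."""
    R = pow(g, r, p)
    c = H(R, m)              # challenge from the hash oracle
    s = (r + c * x) % q
    return c, s

def verify(y, m, c, s):
    """Recompute R' = g^s * y^(-c); accept iff H(R', m) == c."""
    R2 = (pow(g, s, p) * pow(y, -c, p)) % p   # negative exponent: Python >= 3.8
    return H(R2, m) == c
```

Verification works because g^s · y^(-c) = g^(r + cx) · g^(-cx) = g^r = R, so the recomputed challenge matches. The counting argument in the abstract bounds how often a generic adversary can arrange such a match without the secret key.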
In intensive care units, physicians are aware of the high lethality rate of septic shock patients. In this contribution we present typical problems and results of a retrospective, data-driven analysis based on two neural network methods applied to the data of two clinical studies. Our approach includes the necessary steps of data mining, i.e. building up a database, cleaning and preprocessing the data, and finally choosing an adequate analysis for the medical patient data. We chose two architectures based on supervised neural networks. The patient data is classified into two classes (survived and deceased) by a diagnosis based either on the black-box approach of a growing RBF network or on a second network that can explain its diagnosis by human-understandable diagnostic rules. The advantages and drawbacks of these classification methods for an early warning system are discussed.
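The black-box branch of the approach can be sketched in miniature. The following is a minimal fixed (non-growing) RBF classifier on synthetic two-class data; the toy clusters, the single center per class, the RBF width, and the least-squares readout are all assumptions for illustration, not the paper's architecture or patient data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two assumed synthetic clusters standing in for "survived" / "deceased".
X0 = rng.normal([0.0, 0.0], 0.5, (50, 2))
X1 = rng.normal([3.0, 3.0], 0.5, (50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50, dtype=float)

centers = np.vstack([X0.mean(axis=0), X1.mean(axis=0)])  # one RBF per class
width = 2.0  # illustrative kernel width

def rbf_features(points):
    """Gaussian activations of each point for every RBF center."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d / width) ** 2)

# Linear readout fitted by least squares on the RBF activations.
Phi = rbf_features(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = (rbf_features(X) @ w > 0.5).astype(int)
accuracy = (pred == y).mean()
```

A growing RBF network would additionally insert new centers where the residual error is large; here the centers are simply fixed at the class means.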
The Internet, the biggest human library ever assembled, keeps on growing. Although all kinds of information carriers (e.g. audio/video/hybrid file formats) are available, text-based documents dominate. It is estimated that about 80% of all information stored electronically worldwide exists in (or can be converted into) text form. More and more, documents of all kinds are produced with text processing systems and are therefore available electronically. Nowadays, many printed journals are also published online and may even cease to appear in print tomorrow. This development has many convincing advantages: the documents are available both faster (cf. prepress services) and cheaper, they can be searched more easily, their physical storage needs only a fraction of the space previously necessary, and the medium does not age. For most people, fast and easy access is the most interesting feature of the new age; computer-aided search for specific documents or Web pages becomes the basic tool for information-oriented work. But this tool has problems. The current keyword-based search engines available on the Internet are not really appropriate for such a task: either far too many documents matching the specified keywords are returned, or none at all. The problem lies in the fact that it is often very difficult to choose appropriate terms describing the desired topic in the first place. This contribution discusses the current state-of-the-art techniques in content-based searching (along with common visualization/browsing approaches) and proposes a particular adaptive solution for intuitive Internet document navigation, which not only enables the user to provide full texts instead of manually selected keywords (if available), but also allows him/her to explore the whole database.
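The core idea of matching a full text against a document base, rather than a handful of keywords, can be sketched with plain TF-IDF weighting and cosine similarity (the tiny corpus, the tokenizer, and the weighting scheme are illustrative assumptions, not the adaptive system proposed in the paper):

```python
import math
from collections import Counter

# Assumed miniature document base.
docs = {
    "d1": "neural networks learn feature transformations from data",
    "d2": "keyword search engines match query terms against documents",
    "d3": "adaptive navigation lets users explore a document database",
}

tokenized = {k: v.split() for k, v in docs.items()}
df = Counter(t for toks in tokenized.values() for t in set(toks))
n = len(docs)

def vectorize(tokens):
    """TF-IDF vector; terms unseen in the corpus are dropped."""
    tf = Counter(tokens)
    return {t: tf[t] * math.log(n / df[t]) for t in tf if df[t] > 0}

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vecs = {k: vectorize(toks) for k, toks in tokenized.items()}
# A full-text "query document" instead of manually selected keywords.
query = vectorize("search documents with query terms".split())
ranking = sorted(docs, key=lambda k: cosine(query, vecs[k]), reverse=True)
```

Providing a whole text as the query sidesteps the keyword-selection problem the abstract describes, since the weighting picks out the discriminative terms automatically.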
We study queueing strategies in the adversarial queueing model. Rather than discussing individual prominent queueing strategies, we tackle the issue on a general level and analyze classes of queueing strategies. We introduce the class of queueing strategies that base their preferences on knowledge of the entire graph, the path of the packet, and its progress. This restriction only rules out time-keeping information such as a packet's age or its current waiting time.
We show that all strategies without time stamping have exponential queue sizes, suggesting that time keeping is necessary to obtain subexponential performance bounds. We further introduce a new method for proving the stability of strategies without time stamping and show how it can be used to completely characterize a large class of strategies with respect to their 1-stability and universal stability.
This volume contains the papers presented at the First International Workshop on Rewriting Techniques for Program Transformations and Evaluation (WPTE 2014) which was held on July 13, 2014 in Vienna, Austria during the Vienna Summer of Logic 2014 (VSL 2014) as a workshop of the Sixth Federated Logic Conference (FLoC 2014). WPTE 2014 was affiliated with the 25th International Conference on Rewriting Techniques and Applications joined with the 12th International Conference on Typed Lambda Calculi and Applications (RTA/TLCA 2014).
In this contribution we examine development trends of infrastructures in the digital humanities. We argue that considerable pressure to adapt has arisen for the further development of such infrastructures as a consequence of (1) the availability of ever more data on socio-semiotic networks, (2) the inflation of methods in humanities disciplines, (3) the increasingly hybrid division of labor between humans and machines, and (4) the explosive proliferation of artificially generated texts. In this context we describe three information systems that differ, among other things, in the interaction possibilities they offer their users to meet these challenges. With VienNA, we sketch a novel architecture for such systems which, owing to its flexibility, could offer the means to master the latter challenges.
With the Smart Learning infrastructure, a novel didactic concept for continuing-education courses has been developed. This infrastructure can be applied in many contexts. First analyses of courses show that participants who worked through all exercises correctly achieve a grade better than the course average. This contribution describes a concept for a gamification module that uses game elements to encourage participants, as early as possible, to work through all exercises of a course correctly and attentively.