University Publications
We present a hierarchy of polynomial time lattice basis reduction algorithms that stretches from Lenstra–Lenstra–Lovász reduction to Korkine–Zolotareff reduction. Let λ(L) be the length of a shortest nonzero element of a lattice L. We present an algorithm which, for k ∈ N, finds a nonzero lattice vector b such that |b|^2 ≤ (6k^2)^(n/k) λ(L)^2. This algorithm uses O(n^2 (k^(k+o(k)) + n^2) log B) arithmetic operations on O(n log B)-bit integers, provided that the given basis vectors b_1, …, b_n ∈ Z^n are integral and have length bound B. The algorithm successively applies Korkine–Zolotareff reduction to blocks of length k of the lattice basis. We also improve Kannan's algorithm for Korkine–Zolotareff reduction.
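As an illustrative companion to the abstract above, here is a minimal sketch of plain Lenstra–Lenstra–Lovász (LLL) reduction, the starting point of the hierarchy, in pure Python with exact rational arithmetic. It is not the block Korkine–Zolotareff algorithm the abstract analyzes; the function names and the δ = 3/4 parameter are illustrative choices.

```python
from fractions import Fraction

def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(basis):
    """Gram-Schmidt orthogonalization; returns B* and the mu coefficients."""
    n = len(basis)
    bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in basis[i]]
        for j in range(i):
            mu[i][j] = dot(basis[i], bstar[j]) / dot(bstar[j], bstar[j])
            v = [a - mu[i][j] * b for a, b in zip(v, bstar[j])]
        bstar.append(v)
    return bstar, mu

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction of an integral basis (list of int vectors)."""
    basis = [list(b) for b in basis]
    n, k = len(basis), 1
    while k < n:
        bstar, mu = gram_schmidt(basis)
        # size-reduce b_k against b_{k-1}, ..., b_0
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q:
                basis[k] = [x - q * y for x, y in zip(basis[k], basis[j])]
                bstar, mu = gram_schmidt(basis)
        # Lovász condition: advance if satisfied, otherwise swap and step back
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            basis[k], basis[k - 1] = basis[k - 1], basis[k]
            k = max(k - 1, 1)
    return basis
```

On a small 3-dimensional example the reduced basis spans the same lattice (the determinant is unchanged up to sign) while its first vector satisfies the usual LLL length bound.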
Performance and storage requirements of topology-conserving maps for robot manipulator control
(1989)
A new programming paradigm for the control of a robot manipulator by learning the mapping between Cartesian space and joint space (the inverse kinematics) is discussed. It is based on Kohonen's neural network model of optimal mapping between two high-dimensional spaces. This paper describes the approach and presents the optimal mapping, based on the principle of maximal information gain. It is shown that Kohonen's mapping in the two-dimensional case is optimal in this sense. Furthermore, the principal control error made by the learned mapping is evaluated for the example of the commonly used PUMA robot, the trade-off between storage resources and positional error is discussed, and an optimal position-encoding resolution is proposed.
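The learned Cartesian-to-joint mapping itself is beyond a short sketch, but the topology-conserving Kohonen map underlying the abstract above can be illustrated. The following is a minimal sketch, not the paper's setup: all names, the decay schedules and the one-dimensional toy input space are illustrative assumptions.

```python
import math
import random

def train_som(n_units=10, n_steps=2000, seed=0):
    """Train a 1-D Kohonen map on uniform samples from [0, 1].

    Each step moves the winning unit and its neighbours toward the
    sample; learning rate and neighbourhood width decay over time.
    """
    rng = random.Random(seed)
    # ordered initialization so topology preservation is visible
    w = [i / (n_units - 1) for i in range(n_units)]
    for t in range(n_steps):
        x = rng.random()
        lr = 0.2 * (1 - t / n_steps)            # decaying learning rate
        sigma = 2.0 * (1 - t / n_steps) + 0.1   # shrinking neighbourhood
        winner = min(range(n_units), key=lambda i: abs(x - w[i]))
        for i in range(n_units):
            h = math.exp(-((i - winner) ** 2) / (2 * sigma ** 2))
            w[i] += lr * h * (x - w[i])
    return w
```

After training, the unit weights remain ordered (the map conserves topology) and spread over the sampled interval, which is the property the paper's storage/error trade-off builds on.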
It is well known that artificial neural nets can be used to approximate any continuous function to any desired degree. Nevertheless, for a given application and a given network architecture the non-trivial task remains to determine the necessary number of neurons and the necessary accuracy (number of bits) per weight for satisfactory operation. In this paper the problem is treated by an information-theoretic approach. The values for the weights and thresholds in the approximator network are determined analytically. Furthermore, the accuracy of the weights and the number of neurons are seen as general system parameters which determine the maximal output information (i.e. the approximation error) by the absolute amount and the relative distribution of information contained in the network. A new principle of optimal information distribution is proposed and the conditions for the optimal system parameters are derived. For the simple, instructive example of a linear approximation of a non-linear, quadratic function, the principle of optimal information distribution gives the optimal system parameters, i.e. the number of neurons and the different resolutions of the variables.
It is well known that artificial neural nets can be used to approximate any continuous function to any desired degree and can therefore be used, e.g., in high-speed, real-time process control. Nevertheless, for a given application and a given network architecture the non-trivial task remains to determine the necessary number of neurons and the necessary accuracy (number of bits) per weight for satisfactory operation; these are critical issues in VLSI and computer implementations of non-trivial tasks. In this paper the accuracy of the weights and the number of neurons are seen as general system parameters which determine the maximal approximation error by the absolute amount and the relative distribution of information contained in the network. We define the error-bounded network descriptional complexity as the minimal number of bits for a class of approximation networks that exhibit a given approximation error, and achieve the conditions for this goal by the new principle of optimal information distribution. For two examples, a simple linear approximation of a non-linear, quadratic function and a non-linear approximation of the inverse kinematic transformation used in robot manipulator control, the principle of optimal information distribution gives the optimal number of neurons and the resolutions of the variables, i.e. the minimal amount of storage for the neural net. Keywords: Kolmogorov complexity, e-entropy, rate-distortion theory, approximation networks, information distribution, weight resolutions, Kohonen mapping, robot control.
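The trade-off the two abstracts above analyze, approximation error against the number of units and the bits spent per stored weight, can be made concrete with a toy experiment. The sketch below is a simplified stand-in, not the papers' information-distribution method: it approximates x² on [0, 1] by chords whose endpoint values are stored with a given bit width, so both the segment count and the bit width limit the achievable error.

```python
def quantize(value, bits, lo=-1.0, hi=1.0):
    """Uniform quantization of value to 2**bits levels on [lo, hi]."""
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1)
    idx = round((min(max(value, lo), hi) - lo) / step)
    return lo + idx * step

def piecewise_linear_error(n_segments, bits):
    """Max error of approximating x**2 on [0, 1] by n_segments chords
    whose endpoint heights are stored with the given bit width."""
    xs = [i / n_segments for i in range(n_segments + 1)]
    ys = [quantize(x * x, bits, 0.0, 1.0) for x in xs]
    err = 0.0
    for i in range(n_segments):
        for k in range(101):  # dense probe of each segment
            t = k / 100
            x = xs[i] + t * (xs[i + 1] - xs[i])
            approx = ys[i] + t * (ys[i + 1] - ys[i])
            err = max(err, abs(approx - x * x))
    return err
```

Increasing either resource reduces the error, and past a point one resource is wasted unless the other grows too, which is the balance the optimal-information-distribution principle formalizes.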
One of the most interesting domains for feedforward networks is the processing of sensor signals. Some networks exist which extract most of the information by implementing the maximum entropy principle for Gaussian sources. This is done by transforming input patterns to the basis of eigenvectors of the input autocorrelation matrix with the biggest eigenvalues. The basic building block of these networks is the linear neuron, learning with the Oja learning rule. Nevertheless, some researchers in pattern recognition theory claim that pattern recognition and classification require clustering transformations which reduce the intra-class entropy. This leads to stable, reliable features and is implemented for Gaussian sources by a linear transformation using the eigenvectors with the smallest eigenvalues. In another paper (Brause 1992) it is shown that the basic building block for such a transformation can be implemented by a linear neuron using an anti-Hebb rule and restricted weights. This paper shows the analog VLSI design for such a building block, using standard modules of multiplication and addition. The most tedious problem in this VLSI application is the design of analog vector normalization circuitry. It can be shown that the standard approaches of weight summation do not give convergence to the eigenvectors for a proper feature transformation. To avoid this problem, our design differs significantly from the standard approaches by computing the real Euclidean norm. Keywords: minimum entropy, principal component analysis, VLSI, neural networks, surface approximation, cluster transformation, weight normalization circuit.
We present a framework for the self-organized formation of high-level learning by statistical preprocessing of features. The paper focuses first on the formation of the features in the context of layers of feature-processing units, as a kind of resource-restricted associative multiresolution learning. We claim that such an architecture must reach maturity through basic statistical properties, optimizing the information-processing capabilities of each layer. The final symbolic output is learned by pure association of features of different levels and kinds of sensory input. Finally, we also show that common error-correction learning for motor skills can be accomplished by non-specific associative learning as well. Keywords: feedforward network layers, maximal information gain, restricted Hebbian learning, cellular neural nets, evolutionary associative learning
After a short introduction to traditional image transform coding, multirate systems and multiscale signal coding, the paper focuses on the subject of image encoding by a neural network. Also taking noise into account, a network model is proposed which not only learns the optimal localized basis functions for the transform but also learns to implement a whitening filter by multi-resolution encoding. A simulation showing the multi-resolution capabilities concludes the contribution.
The paper focuses on the division of the sensor field into subsets of sensor events and proposes the linear transformation with the smallest achievable reproduction error: the transform-coding approach using principal component analysis (PCA). For the implementation of the PCA, this paper introduces a new symmetrical, laterally inhibited neural network model, proposes an objective function for it and deduces the corresponding learning rules. The necessary conditions for the learning rate and the inhibition parameter for balancing the cross-correlations against the autocorrelations are computed. The simulation reveals that increasing inhibition can slightly speed up the convergence process in the beginning. In the remainder of the paper, the application of the network to picture encoding is discussed. Here, the use of non-completely connected networks for the self-organized formation of templates in cellular neural networks is shown. It turns out that the self-organizing Kohonen map is just the non-linear, first-order approximation of a general self-organizing scheme. Hereby, classical transform picture coding is changed to a parallel, local model of linear transformation by locally changing sets of self-organized eigenvector projections with overlapping input receptive fields. This approach favors an effective, cheap implementation of sensor encoding directly on the sensor chip. Keywords: Transform coding, Principal component analysis, Lateral inhibited network, Cellular neural network, Kohonen map, Self-organized eigenvector jets.
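A minimal illustration of the PCA at the heart of this transform-coding approach: the dominant eigenvector of the covariance matrix can be found by power iteration. The sketch below is a generic textbook procedure, not the paper's lateral-inhibition network, and all names are illustrative.

```python
def covariance(data):
    """Sample covariance matrix of a list of equal-length vectors."""
    n, d = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for row in data:
        c = [x - m for x, m in zip(row, mean)]
        for i in range(d):
            for j in range(d):
                cov[i][j] += c[i] * c[j] / (n - 1)
    return cov

def principal_direction(cov, n_iter=200):
    """Dominant eigenvector of cov (unit norm) via power iteration."""
    d = len(cov)
    v = [1.0] * d
    for _ in range(n_iter):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

Projecting the data onto this direction (and the next few) is exactly the transform-coding step: most of the signal energy is kept in a handful of coefficients.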
This paper describes the use of a radial basis function (RBF) neural network. It approximates the process parameters for the extrusion of a rubber profile used in tyre production. After introducing the problem, we describe the RBF net algorithm and the modeling of the industrial problem. The algorithm shows good results even when using only a few training samples. It turns out that the "curse of dimensions" plays an important role in the model. The paper concludes with a discussion of possible systematic error influences and improvements.
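For illustration, an interpolating RBF net of the general kind the abstract describes can be sketched as follows: one Gaussian unit per training sample, with the output weights obtained by solving the interpolation equations. This is a generic textbook construction, not the paper's industrial model; the width parameter and the names are assumptions.

```python
import math

def rbf_fit(xs, ys, width=1.0):
    """Interpolating RBF net: one Gaussian unit per training sample.

    Returns a callable approximator f(x) = sum_i w_i exp(-((x-x_i)/width)^2),
    with the weights solved exactly so the net reproduces the training data.
    """
    n = len(xs)
    phi = [[math.exp(-((xs[i] - xs[j]) / width) ** 2) for j in range(n)]
           for i in range(n)]
    # solve phi * w = ys by Gaussian elimination with partial pivoting
    A = [row[:] + [y] for row, y in zip(phi, ys)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n + 1):
                A[r][k] -= f * A[col][k]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (A[r][n] - sum(A[r][k] * w[k] for k in range(r + 1, n))) / A[r][r]
    return lambda x: sum(wi * math.exp(-((x - xi) / width) ** 2)
                         for wi, xi in zip(w, xs))
```

With few training samples the net fits them exactly; the quality between samples then depends on the basis width, which is where the dimensionality problems mentioned in the abstract enter.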
In this paper we first consider the situation where parallel channels are disturbed by noise. With the goal of maximal information conservation we derive the conditions for a transform which "immunizes" the channels against noise before the signals are used in later operations. It turns out that the signals have to be decorrelated and normalized by the filter, which for the case of one channel corresponds to the classical result of Shannon. Additional simulations for image encoding and decoding show that this constitutes an efficient approach to noise suppression. Furthermore, by a corresponding objective function we derive the stochastic and deterministic learning rules for a neural network that implements the data orthonormalization. In comparison with existing normalization networks our network behaves approximately the same in the stochastic case but, by its generic derivation, ensures convergence and enables its use as an independent building block in other contexts, e.g. whitening for independent component analysis. Keywords: information conservation, whitening filter, data orthonormalization network, image encoding, noise suppression.
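The decorrelate-and-normalize filter described above can be illustrated in closed form for two channels: eigendecompose the 2×2 covariance and scale each eigendirection by the inverse square root of its eigenvalue. This sketch is a generic whitening transform, not the paper's learning network; the names are illustrative.

```python
import math

def whiten(data):
    """Whiten 2-D samples: decorrelate and normalize so the transformed
    (mean-centered) data has identity covariance."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    dev = [(x - mx, y - my) for x, y in data]
    a = sum(u * u for u, _ in dev) / (n - 1)
    b = sum(u * v for u, v in dev) / (n - 1)
    c = sum(v * v for _, v in dev) / (n - 1)
    # closed-form eigendecomposition of the covariance [[a, b], [b, c]]
    mean_ev = (a + c) / 2
    gap = math.sqrt(((a - c) / 2) ** 2 + b * b)
    l1, l2 = mean_ev + gap, mean_ev - gap
    if abs(b) < 1e-12:
        u1 = (1.0, 0.0) if a >= c else (0.0, 1.0)
    else:
        norm = math.hypot(b, l1 - a)
        u1 = (b / norm, (l1 - a) / norm)
    u2 = (-u1[1], u1[0])
    # whitening filter W = sum_k lambda_k^(-1/2) u_k u_k^T
    w = [[u1[i] * u1[j] / math.sqrt(l1) + u2[i] * u2[j] / math.sqrt(l2)
          for j in range(2)] for i in range(2)]
    return [(w[0][0] * u + w[0][1] * v, w[1][0] * u + w[1][1] * v)
            for u, v in dev]
```

After this transform the channels are uncorrelated with unit variance, which is the orthonormalized state the paper's network learns to produce adaptively.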
This paper describes the problems and an adaptive solution for process control in the rubber industry. We show that the human and economic benefits of an adaptive solution for the approximation of process parameters are very attractive. The modeling of the industrial problem is done by means of artificial neural networks. For the example of the extrusion of a rubber profile in tire production, our method shows good results even when using only a few training samples.
The encoding of images by semantic entities is still an unresolved task. This paper proposes the encoding of images by only a few important components or image primitives. Classically, this can be done by the Principal Component Analysis (PCA). Recently, the Independent Component Analysis (ICA) has found strong interest in the signal processing and neural network community. Using these as pattern primitives, we aim for source patterns with the highest occurrence probability or highest information. For the example of a synthetic image composed of characters, this idea selects the salient ones. For natural images it does not lead to an acceptable reproduction error, since no a-priori probabilities can be computed. Combining the traditional principal component criteria of PCA with the independence property of ICA, we obtain a better encoding. It turns out that the Independent Principal Components (IPC), in contrast to the Principal Independent Components (PIC), implement the classical demand of Shannon's rate distortion theory.
This paper proposes a new approach for the encoding of images by only a few important components. Classically, this is done by the Principal Component Analysis (PCA). Recently, the Independent Component Analysis (ICA) has found strong interest in the neural network community. Applied to images, we aim for the most important source patterns with the highest occurrence probability or highest information, called principal independent components (PIC). For the example of a synthetic image composed of characters, this idea selects the salient ones. For natural images it does not lead to an acceptable reproduction error, since no a-priori probabilities can be computed. Combining the traditional principal component criteria of PCA with the independence property of ICA, we obtain a better encoding. It turns out that this definition of PIC implements the classical demand of Shannon's rate distortion theory.
Between 1 November 1993 and 30 March 1997, 1149 general surgical intensive-care patients were prospectively recorded, of whom 114 fulfilled the criteria of septic shock. The mortality of the patients with septic shock was 47.3%. After training a neural network with 91 (of a total of n = 114) patients, testing on the remaining 23 patients, taking into account parameter changes from day 1 to day 2 of septic shock, yielded the following result: all 10 deceased patients were correctly predicted as non-surviving, and 12 of the 13 survivors were correctly predicted as surviving (sensitivity 100%; specificity 92.3%).
This paper argues for a rational treatment of patient data and, to this end, examines the analysis of such data with neural networks in more detail. Successful example applications show that human diagnostic capabilities are clearly inferior to neural diagnostic systems. For the example of the newer architecture with RBF networks, the functionality is explained in more detail and it is shown how human and neural expertise can be coupled. The outlook sketches applications and the practical issues of such systems.
This paper describes the use of a Radial Basis Function (RBF) neural network in the approximation of process parameters for the extrusion of a rubber profile in tyre production. After introducing the rubber industry problem, the RBF network model and the RBF net learning algorithm are developed; the algorithm uses a growing number of RBF units to compensate for the approximation error down to the desired error limit. Its performance is shown for simple analytic examples. Then the paper describes the modelling of the industrial problem. Simulations show good results, even when using only a few training samples. The paper concludes with a discussion of possible systematic error influences, improvements and potential generalisation benefits. Keywords: Adaptive process control; Parameter estimation; RBF-nets; Rubber extrusion
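The growing-network idea can be illustrated on a simple analytic example of the kind the abstract mentions. The sketch below repeatedly adds a Gaussian unit at the worst-approximated sample until the maximum error falls below a limit; the target function, fixed width and center placement are illustrative assumptions, not the paper's exact algorithm.

```python
import math

# Hedged sketch of a growing RBF net: add a Gaussian unit at the sample
# with the largest residual until the error limit is reached. Width and
# target function are invented for illustration.
def target(x):
    return math.sin(3 * x)

samples = [(i / 10.0, target(i / 10.0)) for i in range(11)]  # x in [0, 1]
width = 0.12                      # fixed RBF width (assumption)
units = []                        # list of (center, weight)

def net(x):
    return sum(w * math.exp(-((x - c) / width) ** 2) for c, w in units)

err_limit = 0.05
for _ in range(200):              # grow until the error limit is met
    errors = [(abs(y - net(x)), x, y) for x, y in samples]
    worst, x_w, y_w = max(errors)
    if worst < err_limit:
        break
    # new unit centered on the worst sample; its weight is the residual
    # there, so the error at that point drops to zero
    units.append((x_w, y_w - net(x_w)))

print(len(units), round(max(abs(y - net(x)) for x, y in samples), 3))
```

Because each new unit exactly cancels the residual at its center, the procedure is a greedy coordinate-wise solve of the RBF interpolation system, and the maximum training error decreases until the limit is reached.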
The prevention of credit card fraud is an important application for prediction techniques. One major obstacle to using neural network training techniques is the high diagnostic quality required: since only one financial transaction in a thousand is invalid, no prediction accuracy below 99.9% is acceptable. Due to these proportions of credit card transactions, completely new concepts had to be developed and tested on real credit card data. This paper shows how advanced data mining techniques and neural network algorithms can be combined successfully to obtain a high fraud coverage combined with a low false alarm rate.
In contrast to the symbolic approach, neural networks are seldom designed to explain what they have learned. This is a major obstacle to their use in everyday life. With the appearance of neuro-fuzzy systems, which use vague, human-like categories, the situation has changed. Based on the well-known learning mechanisms for RBF networks, a special neuro-fuzzy interface is proposed in this paper. It is especially useful in medical applications, as it uses the notation and habits of physicians and other medically trained people. As an example, a liver disease diagnosis system is presented.
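The link between RBF networks and fuzzy rules can be sketched generically: a Gaussian RBF unit can be read as a fuzzy rule whose activation is the membership degree of a vague category such as "bilirubin around 1.0". The rules, feature names and thresholds below are invented for illustration and are not the paper's interface or diagnosis system.

```python
import math

# Illustrative sketch (not the paper's interface): each Gaussian RBF unit
# is read as a fuzzy rule; its activation is the membership degree of a
# vague, human-like category. All rule values here are invented.
rules = [
    # (center, width, conclusion) -- hypothetical liver-diagnosis rules
    ((1.0, 40.0), (0.5, 15.0), "healthy"),
    ((3.5, 90.0), (1.0, 30.0), "hepatitis suspected"),
]

def membership(x, center, width):
    # product of per-feature Gaussian memberships
    d = sum(((x[i] - center[i]) / width[i]) ** 2 for i in range(len(x)))
    return math.exp(-d)

def diagnose(x):
    graded = [(membership(x, c, w), concl) for c, w, concl in rules]
    return max(graded)            # rule with the highest membership wins

mu, concl = diagnose((1.2, 45.0))  # (bilirubin, ALT) -- invented units
print(concl)                       # prints "healthy"
```

Reading the trained units this way is what makes the network's decision explainable in the vague categories a physician actually uses, which is the point the abstract makes.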
In its first part, this contribution briefly reviews the application of neural network methods to medical problems and characterizes their advantages and problems in the context of the medical background. Successful application examples show that human diagnostic capabilities are significantly worse than those of neural diagnostic systems. Then the paradigm of neural networks is briefly introduced, and the main problems of medical data bases and the basic approaches for training and testing a network with medical data are described. Additionally, the problem of interfacing the network and interpreting its results is discussed, and the neuro-fuzzy approach is presented. Finally, as a case study of neural rule-based diagnosis, septic shock diagnosis is described, on the one hand with a growing neural network and on the other hand with a rule-based system. Keywords: Statistical Classification, Adaptive Prediction, Neural Networks, Neurofuzzy, Medical Systems