At present, there are no quantitative, objective methods for diagnosing Parkinson's disease. Existing methods of quantitative analysis based on myograms suffer from inaccuracy and strain the patient; electronic-tablet analysis is limited to the visible drawing and does not capture writing forces or hand movements. In this paper we show how handwriting analysis can be performed with a new electronic pen and new features extracted from the recorded signals, which yields good diagnostic results. Keywords: Parkinson diagnosis, electronic pen, automatic handwriting analysis
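As a rough illustration of the idea (not the paper's actual feature set — the statistics and the tremor band below are hypothetical), one could extract simple descriptive features and a tremor-band power estimate from a sampled pen-pressure signal:

```python
import math
from statistics import mean, stdev

def handwriting_features(pressure, fs):
    """Illustrative features from a pen pressure signal sampled at fs Hz.
    This feature set is a sketch, not the method used in the paper."""
    n = len(pressure)
    # Crude estimate of spectral power at one frequency via a single-bin DFT
    def band_power(f):
        re = sum(p * math.cos(2 * math.pi * f * i / fs) for i, p in enumerate(pressure))
        im = -sum(p * math.sin(2 * math.pi * f * i / fs) for i, p in enumerate(pressure))
        return (re * re + im * im) / n
    # Parkinsonian tremor is typically in the 4-6 Hz range
    tremor = sum(band_power(f) for f in (4.0, 5.0, 6.0))
    return {"mean_pressure": mean(pressure),
            "pressure_std": stdev(pressure),
            "tremor_power": tremor}

# Toy signal: steady pressure plus a 5 Hz tremor component, sampled at 100 Hz
fs = 100.0
signal = [1.0 + 0.2 * math.sin(2 * math.pi * 5.0 * t / fs) for t in range(200)]
feats = handwriting_features(signal, fs)
```

Such per-writer feature vectors could then feed any standard classifier.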
Modern experiments in heavy-ion collisions operate at huge data rates that cannot be fully stored on currently available storage devices. The data flow must therefore be reduced by selecting those collisions that potentially carry the physics of interest. The future CBM experiment will have no simple criteria for selecting such collisions and requires full online reconstruction of the collision topology, including reconstruction of short-lived particles.
In this work the KF Particle Finder package for online reconstruction and selection of short-lived particles is proposed and developed. It reconstructs more than 70 decays, covering signals from all the physics cases of the CBM experiment: strange particles, strange resonances, hypernuclei, low mass vector mesons, charmonium, and open-charm particles.
The package is based on the Kalman filter method and provides a full set of particle parameters together with their errors, including position, momentum, mass, energy, and lifetime. It shows a high quality of the reconstructed particles, high efficiencies, and high signal-to-background ratios.
The KF Particle Finder is extremely fast, achieving a reconstruction speed of 1.5 ms per minimum-bias Au+Au collision at 25 AGeV beam energy on a single CPU core. It is fully vectorized and parallelized and shows strong linear scalability on many-core architectures with up to 80 cores. Within the First Level Event Selection package it also scales on many-core clusters with up to 3200 cores.
The developed KF Particle Finder package is a universal platform for short-lived particle reconstruction, physics analysis, and online selection.
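The Kalman filter at the core of such reconstruction packages combines a predicted state with a measurement, weighting each by its uncertainty. As a minimal sketch of a single measurement update (a generic toy illustration, not the actual KF Particle Finder code; all names here are hypothetical):

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman filter measurement update: state estimate x with
    covariance P, measurement z with model H and noise covariance R."""
    y = z - H @ x                      # innovation (measurement residual)
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y                  # updated state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P  # updated covariance
    return x_new, P_new

# toy example: refine a 1-D position estimate with one noisy measurement
x = np.array([0.0]); P = np.array([[1.0]])
z = np.array([1.0]); H = np.array([[1.0]]); R = np.array([[1.0]])
x1, P1 = kalman_update(x, P, z, H, R)  # estimate moves halfway, variance halves
```

With equal prior and measurement variance, the update lands midway between prediction and measurement, and the covariance shrinks, which is the mechanism that lets the package report each particle parameter together with its error.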
Conceptual design of an ALICE Tier-2 centre integrated into a multi-purpose computing facility
(2012)
This thesis discusses the issues and challenges associated with the design and operation of a data analysis facility for a high-energy physics experiment at a multi-purpose computing centre. In the spotlight is a Tier-2 centre of the distributed computing model of the ALICE experiment at the Large Hadron Collider at CERN in Geneva, Switzerland. The design steps examined in the thesis include analysis and optimization of the I/O access patterns of the user workload, integration of the storage resources, and development of techniques for effective system administration and operation of the facility in a shared computing environment. A number of I/O performance issues on multiple levels of the I/O subsystem, introduced by the use of hard disks for data storage, have been addressed by means of exhaustive benchmarking and thorough analysis of the I/O of the user applications in the ALICE software framework. Defining the set of requirements for the storage system, describing the potential performance bottlenecks and single points of failure, and examining possible ways to avoid them leads to guidelines for integrating the storage resources. A solution for preserving an experiment-specific software stack in a shared environment is presented along with its effects on the performance of the user workload. The thesis concludes with a proposal for a flexible model for deploying and operating the ALICE Tier-2 infrastructure and applications in a virtual environment through adoption of cloud computing technology and the 'Infrastructure as Code' concept. Scientific software applications can be computed efficiently in a virtual environment, and there is an urgent need to adapt the infrastructure for effective use of cloud resources.
Co-design of a trustworthy AI system in healthcare: deep learning based skin lesion classifier
(2021)
This paper documents how an ethically aligned co-design methodology ensures trustworthiness in the early design phase of an artificial intelligence (AI) system component for healthcare. The system explains decisions made by deep learning networks that analyze images of skin lesions. The co-design of trustworthy AI developed here used a holistic approach rather than a static ethical checklist and required a multidisciplinary team of experts working with the AI designers and their managers. Ethical, legal, and technical issues potentially arising from the future use of the AI system were investigated. This paper is a first report on co-designing in the early design phase. Our results can also serve as guidance for the early-phase development of similar AI tools.
Every day, 2.5 quintillion bytes of data are generated. This enormous volume of data comes, for example, from digital images, videos, social media posts, smart sensors, retail and financial transactions, and GPS signals from mobile phones. This is Big Data. There is no doubt that Big Data, and what we do with it, has the potential to become a significant driver of innovation and value creation.
Human lymph nodes play a central part in the immune defense against infectious agents and tumor cells. Lymphoid follicles are spherical compartments of the lymph node, mainly filled with B cells. B cells are cellular components of the adaptive immune system. In the course of a specific immune response, lymphoid follicles pass through different morphological differentiation stages. The morphology and spatial distribution of lymphoid follicles can sometimes be associated with a particular causative agent and the development stage of a disease. We report a new approach for the automatic detection of follicular regions in histological whole slide images of tissue sections immunostained for actin. The method is divided into two phases: (1) shock-filter-based detection of transition points and (2) segmentation of follicular regions. Follicular regions in 10 whole slide images were manually annotated by visual inspection, and sample surveys were conducted by an expert pathologist. The results of our method were validated by comparison with the manual annotation. On average, we achieved a Zijdenbos similarity index of 0.71, with a standard deviation of 0.07.
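The Zijdenbos similarity index used for validation is equivalent to the Dice coefficient: twice the overlap of two binary masks divided by the sum of their sizes. A minimal sketch (hypothetical function and variable names, not the paper's code):

```python
import numpy as np

def zijdenbos_index(a, b):
    """Zijdenbos similarity index (= Dice coefficient) of two binary
    segmentation masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0

# toy masks: manual annotation vs. automatic segmentation
manual = np.array([[1, 1, 0], [1, 0, 0]])
auto   = np.array([[1, 0, 0], [1, 1, 0]])
score  = zijdenbos_index(manual, auto)  # 2*2 / (3+3) = 0.666...
```

A score of 1.0 means perfect overlap; the 0.71 reported above would indicate substantial but imperfect agreement between automatic and manual follicle regions.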
Network graphs have become a popular tool to represent complex systems composed of many interacting subunits; especially in neuroscience, network graphs are increasingly used to represent and analyze functional interactions between multiple neural sources. Interactions are often reconstructed using pairwise bivariate analyses, which overlook the multivariate nature of interactions: investigating the effect of one source on a target requires taking all other sources into account as potential nuisance variables, and combinations of sources may act jointly on a given target. Bivariate analyses therefore produce networks that may contain spurious interactions, which reduce the interpretability of the network and its graph metrics. A truly multivariate reconstruction, however, is computationally intractable because of the combinatorial explosion in the number of potential interactions. We thus have to resort to approximative methods to handle the intractability of multivariate interaction reconstruction and thereby enable the use of networks in neuroscience. Here, we suggest such an approximative approach in the form of an algorithm that extends fast bivariate interaction reconstruction by identifying potentially spurious interactions post hoc: the algorithm uses interaction delays reconstructed for directed bivariate interactions to tag potentially spurious edges on the basis of their timing signatures in the context of the surrounding network. Such tagged interactions may then be pruned, which produces a statistically conservative network approximation that is guaranteed to contain non-spurious interactions only. We describe the algorithm and present a reference implementation in MATLAB to test the algorithm's performance on simulated networks as well as on networks derived from magnetoencephalographic data. We discuss the algorithm in relation to other approximative multivariate methods and highlight suitable application scenarios.
Our approach is a tractable and data-efficient way of reconstructing approximative networks of multivariate interactions. It is preferable if available data are limited or if fully multivariate approaches are computationally infeasible.
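The core idea of delay-based tagging can be sketched in a few lines: a directed edge u→w is suspicious if its reconstructed delay matches the summed delays of an indirect path u→v→w, i.e. it may be a "cascade" artifact of bivariate reconstruction. The following is a simplified illustration of that timing-signature check, not the published MATLAB reference implementation; all names are hypothetical:

```python
def tag_spurious_edges(delays, tol=1.0):
    """Tag directed edges (u, w) whose interaction delay matches the
    summed delays of an indirect path u -> v -> w within `tol`; such
    edges may be spurious cascade detections of bivariate analyses.
    `delays` maps (source, target) -> reconstructed delay."""
    tagged = set()
    for (u, w), d_uw in delays.items():
        for (a, v), d_uv in delays.items():
            if a != u or v == w:
                continue                      # only paths starting at u via v != w
            d_vw = delays.get((v, w))
            if d_vw is not None and abs(d_uv + d_vw - d_uw) <= tol:
                tagged.add((u, w))            # timing signature matches a cascade
    return tagged

# toy network: A->B (5 ms), B->C (4 ms), and a direct A->C at 9 ms = 5 + 4
delays = {("A", "B"): 5.0, ("B", "C"): 4.0, ("A", "C"): 9.0}
suspect = tag_spurious_edges(delays)  # {("A", "C")}
```

Pruning the tagged edges yields the conservative approximation described above: edges whose timing is fully explained by an indirect route are removed, at the cost of possibly discarding some true direct interactions with coincidentally matching delays.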
Suche im Semantic Web : Erweiterung des VRP um eine intuitive und RQL-basierte Anfrageschnittstelle
(2003)
Data deluge on the World Wide Web: a problem for every Internet user. Classical Internet search engines are overwhelmed and deliver useful results less and less often. The Semantic Web, largely based on RDF, promises a remedy. The Semantic Web will presumably first reach the public in specialized information portals, so-called infomediaries. Visitors to such portals need a query language that is as easy to use as an ordinary Internet search engine. No such query language currently exists for RDF. This thesis presents a novel query language that meets this requirement: eRQL. Part of this work is the eRQL processor eRqlEngine, implemented in Java, which can be obtained from http://www.wleklinski.de/rdf/ and http://www.dbis.informatik.uni-frankfurt.de/~tolle/RDF/eRQL/.
The volume of digitally available documents is growing steadily, making adequate methods for searching very large document collections ever more important. In contrast to exact search, which looks for documents with known file names, Information Retrieval (IR) techniques are used to find results relevant to a query. For some years, collections of structured documents have increasingly been searched, especially since the eXtensible Markup Language (XML) became an official standard of the World Wide Web Consortium (W3C). There are by now a number of research approaches that apply IR methods to XML documents. XML Information Retrieval (XML-IR) exploits the structure of the documents to make searching for and within them more effective, i.e., to improve the quality of search results, for example by focusing on particularly relevant parts of a document. Existing solutions, however, are all centralized stand-alone search engines built for research purposes; they cannot search very large data collections distributed over many machines. Techniques for distributed XML-IR are also needed in practice wherever the system to be searched consists of many local, heterogeneous XML collections whose users cannot or do not want to store their documents on a central server; such users often join together in decentralized peer-to-peer (P2P) networks. Yet there are currently no search engines, neither for distributed systems in general nor for P2P systems in particular, with which relevant documents can be found. This dissertation therefore uses P2P networks as an example to investigate, for the first time, whether and to what extent XML-IR in distributed systems can be effective and efficient at all.
To this end, a general architectural model for developing P2P search engines for XML retrieval is designed, in which functionality from the areas of XML-IR and P2P is arranged in abstract layers. The model serves as the basis for the design of a concrete P2P search engine for XML-IR. Various techniques for distributed XML-IR are developed to implement the individual phases of the search: indexing the documents, routing the queries, ranking suitable documents, and retrieving results. Particular attention is paid to multiterm queries consisting of several search terms and to distribution aspects. Besides the search quality to be achieved, the necessary communication overhead is a central concern. The developed methods are implemented in a P2P search engine for distributed XML retrieval comprising almost 40,000 lines of Java code. This search engine, named SPIRIX, is fully functional: it can search for XML documents in a P2P network and rank their relevance based on content. For communication between peers, a P2P protocol named SpirixDHT is designed, which builds on Chord and is specifically adapted for XML-IR. To evaluate the designed techniques, the search quality of SPIRIX is first demonstrated through participation in INEX, the international initiative for the evaluation of XML retrieval, in which XML-IR solutions from around the world are compared every year. For 2008, SPIRIX achieved a search precision comparable to the quality of the top-10 XML-IR solutions. In further experiments, the designed methods for distributed XML retrieval are evaluated with INEX tools, contrasting the achieved search quality with the required overhead in each case.
The insights gained are applied to the routing process; here the question of how XML structure can be used to improve the efficiency of a distributed system is of particular interest. The evaluation of the designed routing techniques shows a significant reduction in the number and size of messages sent, and thus in network load, while at the same time increasing search quality. The dissertation thus demonstrates that distributed XML-IR is both effective and efficient. It also shows how using XML-IR techniques in query routing can reduce the necessary search effort, especially for communication between peers, to the point where the system scales to a large number of participating peers while still maintaining high search quality.
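The indexing and multiterm-query phases mentioned in this abstract rest on the classic inverted-index idea: map each term to the documents containing it, and answer a conjunctive multiterm query by intersecting posting sets. A minimal sketch of that idea (purely illustrative; hypothetical names, not SPIRIX code):

```python
from collections import defaultdict

def build_index(docs):
    """Minimal inverted index: term -> set of document ids."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def multiterm_query(index, terms):
    """Conjunctive multiterm query: ids of documents containing every term."""
    postings = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*postings) if postings else set()

docs = {1: "XML retrieval in P2P networks", 2: "distributed XML indexing"}
index = build_index(docs)
hits = multiterm_query(index, ["xml", "indexing"])  # {2}
```

In a distributed setting the posting sets live on different peers, which is exactly why multiterm queries and the routing of the intersection work are highlighted as the hard part above.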
A key competence for open-ended learning is the formation of increasingly abstract representations useful for driving complex behavior. Abstract representations ignore specific details and facilitate generalization. Here we consider the learning of abstract representations in a multi-modal setting with two or more input modalities. We treat the problem as a lossy compression problem and show that generic lossy compression of multimodal sensory input naturally extracts abstract representations that tend to strip away modality-specific details and preferentially retain information that is shared across the different modalities. Furthermore, we propose an architecture that learns abstract representations by identifying and retaining only the information that is shared across multiple modalities while discarding any modality-specific information.
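The claim that lossy compression of multimodal input preferentially keeps shared information can be illustrated with the simplest possible compressor: a rank-1 linear projection (PCA) of two synthetic modalities that share a common latent variable. This is a hedged toy demonstration of the principle, not the paper's architecture; all variable names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
shared = rng.normal(size=n)                              # latent common to both modalities
mod_a = shared[:, None] + 0.3 * rng.normal(size=(n, 3))  # modality A: shared + private noise
mod_b = shared[:, None] + 0.3 * rng.normal(size=(n, 3))  # modality B: shared + private noise
x = np.hstack([mod_a, mod_b])                            # concatenated multimodal input

# rank-1 "lossy compression": project onto the top principal component
x = x - x.mean(axis=0)
_, _, vt = np.linalg.svd(x, full_matrices=False)
code = x @ vt[0]                                         # 1-D compressed representation

# the compressed code tracks the shared variable, not the private noise
r = abs(np.corrcoef(code, shared)[0, 1])
```

Because the shared variable contributes variance to all six input dimensions while each noise term affects only one, the top principal component aligns with the shared direction, so the 1-D code correlates almost perfectly with `shared`: the compression has stripped away the modality-specific details.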
Recent advances in artificial neural networks have enabled the quick development of new learning algorithms, which, among other things, pave the way to novel robotic applications. Traditionally, robots are programmed by human experts to accomplish pre-defined tasks. Such robots must operate in a controlled environment to guarantee repeatability, are designed to solve one unique task, and require costly hours of development. In developmental robotics, researchers try to artificially imitate the way living beings acquire their behavior by learning. Learning algorithms are key to conceiving versatile and robust robots that can adapt to their environment and solve multiple tasks efficiently. In particular, Reinforcement Learning (RL) studies the acquisition of skills through teaching via rewards. In this thesis, we introduce RL and present recent advances in RL applied to robotics. We review Intrinsically Motivated (IM) learning, a special form of RL, and apply in particular the Active Efficient Coding (AEC) principle to the learning of active vision. We also give an overview of Hierarchical Reinforcement Learning (HRL), another special form of RL, and apply its principle to a robotic manipulation task.
Various concurrency primitives have been added to functional programming languages in different ways: in Haskell such a primitive is the MVar, joins are described in JoCaml, and AliceML uses futures to provide concurrent behaviour. Although these concurrency libraries seem to behave well, their equivalence to one another has not yet been proven; an expressive formal system is needed. In their paper "On proving the equivalence of concurrency primitives", Jan Schwinghammer, David Sabel, Joachim Niehren, and Manfred Schmidt-Schauß define a universal calculus for concurrency primitives known as the typed lambda calculus with futures. There, equivalence of processes was proved, and an encoding of simple one-place buffers was worked out. This bachelor's thesis is about encoding more complex concurrency abstractions in the lambda calculus with futures and proving the correctness of their operational semantics. Given the new abstractions, we discuss program equivalence between them. Finally, we present a library written in Haskell that exposes futures and our concurrency abstractions as a proof of concept.
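The one-place buffer mentioned above, the simplest of these concurrency abstractions, behaves like Haskell's MVar: `put` blocks while the buffer is full and `take` blocks while it is empty. As a hedged illustration of that contract in Python (a toy analogue of the semantics, not the thesis's Haskell library; the class name is hypothetical):

```python
import threading

class OnePlaceBuffer:
    """A one-place buffer in the style of Haskell's MVar:
    put blocks while full, take blocks while empty."""
    def __init__(self):
        self._cond = threading.Condition()
        self._full = False
        self._value = None

    def put(self, value):
        with self._cond:
            while self._full:            # wait until the buffer is emptied
                self._cond.wait()
            self._value, self._full = value, True
            self._cond.notify_all()

    def take(self):
        with self._cond:
            while not self._full:        # wait until a value is available
                self._cond.wait()
            value, self._full = self._value, False
            self._cond.notify_all()
            return value

buf = OnePlaceBuffer()
out = []
t = threading.Thread(target=lambda: out.append(buf.take()))
t.start()          # take() blocks until a value is put
buf.put(42)
t.join()
```

The blocking take/put pair is exactly the synchronization behaviour that the futures encoding has to reproduce, and whose operational correctness the thesis proves.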
Wir haben in dieser Arbeit einige Probleme auf Objekten betrachtet, deren Struktur wohlgeformten Klammerworten entspricht. Dies waren spezielle Routing-Probleme, das Umformen und Auswerten algebraischer Ausdrücke, sowie die Berechnung korrespondierender Symbole zweier Ausdrücke. Eine effiziente Lösung dieser Probleme gelang durch einen rekursiven Divide-and-Conquer Ansatz, der auf Grund der “natürlichen” rekursiven Definition der betrachteten Objekte auch nahe liegt. Im Divide-Schritt wurde das jeweilige Problem in viele wesentlich kleinere Teilprobleme zerlegt, so daß die gesamte Laufzeit des Algorithmus asymptotisch gleich der des Divide-Schrittes und des Conquer-Schrittes blieb. Das Zerlegen der Probleme erfolgte im wesentlichen unter Anwendung bekannter Routing-Algorithmen für monotone Routings und Bit-Permute-Complement Permutationen. Im Conquer-Schritt für das Klammerrouting und das Knotenkorrespondenzproblem wurden nur die Datenbewegungen des Divide-Schrittes rückwärts ausgeführt. Für das Tree-Contraction-Problem wurde dagegen im Conquer-Schritt die Hauptarbeit geleistet. Die Methode der Simulation eines PRAMAlgorithmus durch die Berechnung seiner Kommunikationsstruktur und eine entsprechende Umordnung der Datenelemente konnte sowohl für eine effiziente Implementierung des Tree-Contraction Conquer-Schrittes auf dem Hyperwürfel als auch für die Konstruktion eines einfachen NC1-Schaltkreises zum Auswerten Boolescher Formeln angewandt werden. In einer Implementierung eines Divide-and-Conquer Algorithmus auf einem Netzwerk müssen den generierten Teilproblemen für ihre weitere Bearbeitung Teile des Netzwerks zugeordnet werden. Um die weiteren Divide-Schritte nach der gleichen Methode ausführen zu können, sollte die Struktur dieser Teilnetzwerke analog zu der des gesamten Netzwerks sein. Wir haben das Teilnetzwerk-Zuweisungsproblem für den Hyperwürfel und einige hyperwürfelartige Netzwerke untersucht. 
Der Hyperwürfel und das Butterfly-Netzwerk können so in Teilnetzwerke vorgegebener Größen aufgeteilt werden, daß nur ein geringer Anteil der Prozessoren ungenutzt bleibt, und die Teilprobleme können schnell in die ihnen zugeordneten Teilnetzwerke gesendet werden. Unter Anwendung dieser Teilnetzwerk-Zuweisungs-Algorithmen haben wir optimale Implementierungen für eine große Klasse von Divide-and-Conquer Algorithmen auf dem Hyperwüfel und hyperwürfelartigen Netzwerken erhalten. Wir konnten garantieren, daß die Laufzeit der gesamten Implementierung des Divide-and-Conquer Algorithmus asymptotisch gleich der Laufzeit ist, die sich aus dem gegebenen Divide-Schritt und Conquer-Schritt ergibt, wenn man alle mit der Teilnetzwerk-Zuweisung verbundenen Probleme außer acht läßt. Wir haben die hier vorgestellte allgemeine Divide-and-Conquer Implementierung im optimalen Teilwürfel-Zuweisungs-Algorithmus, im Klammerrouting-Algorithmus, der selbst ein wesentlicher Teil des Tree-Contraction-Algorithmus ist, und im Algorithmus für das Knotenkorrespondenzproblem eingesetzt.
This thesis will first introduce in more detail the Bayesian theory and its use in integrating multiple information sources. I will briefly talk about models and their relation to the dynamics of an environment, and how to combine multiple alternative models. Following that I will discuss the experimental findings on multisensory integration in humans and animals. I start with psychophysical results on various forms of tasks and setups that show that the brain uses and combines information from multiple cues. Specifically, the discussion will focus on the finding that humans integrate this information in a way that is close to the theoretically optimal performance. Special emphasis will be put on results about the developmental aspects of cue integration, highlighting experiments showing that children do not perform in line with the Bayesian predictions. This section also includes a short summary of experiments on how subjects handle multiple alternative environmental dynamics. I will also talk about neurobiological findings of cells receiving input from multiple receptors, both in dedicated brain areas and in primary sensory areas. I will proceed with an overview of existing theories and computational models of multisensory integration. This will be followed by a discussion of reinforcement learning (RL). First I will talk about the original theory, including the two main approaches, model-free and model-based reinforcement learning. The important variables will be introduced as well as different algorithmic implementations. Secondly, a short review of the mapping of those theories onto brain and behaviour will be given. I mention the most influential papers that showed correlations between the activity in certain brain regions and RL variables, most prominently between dopaminergic neurons and temporal difference errors.
I will try to motivate why I think that this theory can help to explain the development of near-optimal cue integration in humans. The next main chapter will introduce our model that learns to solve the task of audio-visual orienting. Many of the results in this section have been published in [Weisswange et al. 2009b, Weisswange et al. 2011]. The model agent starts without any knowledge of the environment and acts based on predictions of rewards, which are adapted according to the reward signaling the quality of the performed action. I will show that after training this model performs similarly to the prediction of a Bayesian observer. The model can also deal with more complex environments in which it has to handle multiple possible underlying generating models (perform causal inference). In these experiments I use different formulations of Bayesian observers for comparison with our model, and find that it is most similar to the fully optimal observer doing model averaging. Additional experiments using various alterations of the environment show the ability of the model to react to changes in the input statistics without explicitly representing probability distributions. I will close the chapter with a discussion of the benefits and shortcomings of the model. The thesis continues with a report on an application of the learning algorithm introduced before to two real-world cue integration tasks on a robotic head. For these tasks our system outperforms a commonly used approximation to Bayesian inference, reliability-weighted averaging. The approximation is handy because of its computational simplicity, but it relies on certain assumptions that are usually controlled for in a laboratory setting and often do not hold for real-world data. This chapter is based on the paper [Karaoguz et al. 2011]. Our second modeling approach tries to address the neuronal substrates of the learning process for cue integration.
I again use a reward-based training scheme, but this time implemented as a modulation of synaptic plasticity mechanisms in a recurrent network of binary threshold neurons. I start the chapter with an additional introduction section to discuss recurrent networks and especially the various forms of neuronal plasticity that I will use in the model. The performance on a task similar to that of chapter 3 will be presented together with an analysis of the influence of different plasticity mechanisms on it. Again, benefits, shortcomings, and the general potential of the method will be discussed. I will close the thesis with a general conclusion and some ideas about possible future work.
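The reliability-weighted averaging used as a baseline in this thesis is the textbook Bayes-optimal fusion rule for independent Gaussian cues: each cue's estimate is weighted by its inverse variance. A minimal sketch of that computation (hypothetical function name, illustrative values):

```python
def integrate_cues(estimates, variances):
    """Reliability-weighted averaging: the Bayes-optimal fusion of
    independent Gaussian cues weights each estimate by its inverse
    variance (its reliability)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    var = 1.0 / total   # fused variance is smaller than any single cue's
    return mean, var

# auditory cue at 10 deg (variance 4), visual cue at 6 deg (variance 1):
# the fused estimate is pulled toward the more reliable visual cue
mu, var = integrate_cues([10.0, 6.0], [4.0, 1.0])  # -> (6.8, 0.8)
```

This is the prediction against which human behaviour and the reward-trained models above are compared; the fused estimate sits closer to the more reliable cue, and the fused variance beats either cue alone.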
Web-based trainings (WBTs) are multimedia, interactive, thematically self-contained learning units delivered in a browser. Since the emergence of the Internet in the 1990s they have been an important and established building block in the design and development of eLearning scenarios. These learning units are usually created by instructors with dedicated authoring systems; in rarer cases they are individually programmed one-off solutions. From the learners' perspective, however, content not explicitly created as learning units is increasingly used as well, provided it matches the needs of the individual learner (in the context of informal and self-directed learning). This is due, on the one hand, to the growing availability and variety of "alternative learning content" on the Internet in general (free licenses and innovative authoring tools) and, on the other hand, to the possibility of easily finding such content anywhere and at any time (mobile Internet, search engines, and voice assistants) or having it curated and recommended (recommender systems and social media).
This shift raises the central question of this dissertation: does the concept of a dedicated WBT authoring system still meet the new requirements posed by freely available, interactive learning content (Khan Academy, YouTube, and Wikipedia) and by a multitude of constantly growing, free authoring tools for arbitrary web content (H5P, PowToon, or Pageflow), and where, in that case, do the unique characteristics of a WBT actually lie?
To answer this question, the thesis fundamentally examines the term "web-based training", the framework conditions that have changed over time, and the resulting implications for the development of WBT authoring systems. Using the chosen design-based research (DBR) approach, with continuous cycles of design, implementation, analysis, and redesign across several eLearning projects, the term WBT could be redefined and reinterpreted so that the focus of the definition lies on what distinguishes WBTs at their core from other content and functions on the Internet: the teaching/learning aspect (hereafter Web-based Training 2.0 (WBT 2.0)).
Based on this new definition, four core functionalities were elaborated that address the challenges mentioned above and are described in detail in the form of a design framework. The various aspects and functions of WBTs 2.0 were investigated and developed in the iterative "meso cycles" of the DBR approach, with each project conducted within them yielding its own results, which were discussed from didactic and, above all, technical points of view. The insights gained fed into the development process of the LernBar (the "macro cycle"), a WBT authoring system developed in the course of this work together with studiumdigitale, the central eLearning unit of Goethe University. The developments were continuously reviewed and refined on the basis of user feedback (annual user meetings, trainings, surveys, support).
Finally, the last development cycle of the DBR approach concludes with the design and implementation of three WBT 2.0 system components that allow arbitrary web content to be flexibly extended with WBT 2.0 functionality, making activities carried out in open teaching/learning processes transparent, traceable, and thus verifiable (constructive alignment).
This research thus offers an interdisciplinary, user-centered, and field-tested approach to implementing and deploying WBTs in the context of open teaching/learning processes. The previous focus shifts from pure media production to a holistic approach in which the teaching/learning aspect takes center stage (identifying, meeting, and verifying a learning need). Crucially, all available resources on the Internet can be used to meet a learning need, with WBTs 2.0 merely defining the didactic process and making it transparent and accessible for instructors and learners.
WBTs 2.0 will thus benefit from the growing variety and availability of content and functions on the Internet and allow developers of WBT 2.0 authoring systems to concentrate on the essential: the teaching/learning process.
The amyloid precursor protein (APP) was discovered in the 1980s as the precursor protein of the amyloid A4 peptide. The amyloid A4 peptide, also known as A-beta (Aβ), is the main constituent of the senile plaques implicated in Alzheimer's disease (AD). In association with the amyloid deposits, increasing impairments in learning and memory as well as the degeneration of neurons, especially in the hippocampal formation, are hallmarks of the pathogenesis of AD. Within the last decades much effort has been expended on understanding the pathogenesis of AD. However, little is known about the physiological role of APP within the central nervous system (CNS). Allocating APP to the proteome of the highly dynamic presynaptic active zone (PAZ) identified APP as a novel player within this neuronal communication and signaling network. The analysis of the hippocampal PAZ proteome derived from APP-mutant mice demonstrates that APP is tightly embedded in the underlying protein network. Strikingly, APP deletion accounts for major dysregulation within the PAZ proteome network: Ca2+ homeostasis, neurotransmitter release, and mitochondrial function are affected, resembling the outcome during the pathogenesis of AD. The observed changes in protein abundance that occur in the absence of APP as well as in AD suggest that APP is a structural and functional regulator within the hippocampal PAZ proteome. In this review article, we introduce APP as an important player within the hippocampal PAZ proteome and outline the impact of APP deletion on individual PAZ proteome subcommunities.
FIFO is the most prominent queueing strategy due to its simplicity and the fact that it works with local information only. Its analysis within adversarial queueing theory, however, has shown that there are networks that are not stable under the FIFO protocol, even at arbitrarily low rates. On the other hand, there are networks that are universally stable, i.e., stable under every greedy protocol at any rate r < 1. The question which networks are stable under the FIFO protocol thus arises naturally. We offer the first polynomial-time algorithm for deciding FIFO stability and simple-path FIFO stability of a directed network, answering an open question posed in [1, 4]. It turns out that there are networks that are FIFO stable but not universally stable; hence FIFO is not a worst-case protocol in this sense. Our characterization of FIFO stability is constructive and disproves an open characterization in [4].
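The adversarial queueing model behind these results can be made concrete with a tiny discrete-time simulation: packets follow fixed edge paths, and each edge forwards one packet per step in FIFO order. This is only a toy sketch of the model (hypothetical names, not the paper's decision algorithm for FIFO stability):

```python
from collections import deque

def simulate_fifo(edges, injections, steps):
    """Discrete-time FIFO simulation: each edge forwards one packet per
    step in arrival order. `injections[t]` lists packets (as edge paths)
    injected at time t. Returns the total backlog after `steps` steps."""
    queues = {e: deque() for e in edges}
    for t in range(steps):
        for path in injections.get(t, []):
            queues[path[0]].append(list(path))   # packet enters its first edge
        forwarded = []
        for e in edges:                          # each edge transmits its head packet
            if queues[e]:
                pkt = queues[e].popleft()
                pkt.pop(0)                       # edge e has been traversed
                if pkt:                          # not yet absorbed: move to next edge
                    forwarded.append((pkt[0], pkt))
        for e, pkt in forwarded:                 # arrivals take effect after the step
            queues[e].append(pkt)
    return sum(len(q) for q in queues.values())

edges = ["e1", "e2"]
inj = {0: [["e1", "e2"]], 1: [["e2"]]}          # two packets at low injection rate
backlog = simulate_fifo(edges, inj, steps=3)    # everything is delivered
```

A network is stable when, for bounded-rate injections, such a backlog stays bounded over time; the instability results cited above exhibit networks and injection patterns where FIFO lets the backlog grow without bound even at low rates.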
We study queueing strategies in the adversarial queueing model. Rather than discussing individual prominent queueing strategies we tackle the issue on a general level and analyze classes of queueing strategies. We introduce the class of queueing strategies that base their preferences on knowledge of the entire graph, the path of the packet and its progress. This restriction only rules out time keeping information like a packet’s age or its current waiting time.
We show that all strategies without time stamping have exponential queue sizes, suggesting that time keeping is necessary to obtain subexponential performance bounds. We further introduce a new method to prove stability for strategies without time stamping and show how it can be used to completely characterize a large class of strategies as to their 1-stability and universal stability.