Document Type
- Doctoral Thesis (91)
- Article (58)
- Bachelor Thesis (17)
- Book (13)
- Master's Thesis (10)
- Conference Proceeding (4)
- Contribution to a Periodical (4)
- Habilitation (2)
- Preprint (2)
- Diploma Thesis (1)
Keywords
- Machine Learning (5)
- NLP (4)
- ALICE (3)
- Annotation (3)
- Machine learning (3)
- Text2Scene (3)
- TextAnnotator (3)
- Virtual Reality (3)
- mathematics education (3)
- Artificial intelligence (2)
- Blockchain (2)
- CBM experiment (2)
- Cellular Automaton (2)
- Computer Vision (2)
- Experimental nuclear physics (2)
- Experimental particle physics (2)
- FPGA (2)
- MathCityMap (2)
- Natural Language Processing (2)
- Positive polynomials (2)
- Prostate cancer (2)
- Simulation (2)
- Sums of arithmetic-geometric exponentials (2)
- Tracking (2)
- algorithms (2)
- 11N45 (1)
- 14N10 (secondary) (1)
- 2-SAT (1)
- 30F30 (1)
- 32G15 (primary) (1)
- AI Safety (1)
- ALICE experiment (1)
- Ageing (1)
- Agent (1)
- Akademische Zertifikate (1)
- Algebraic Hodge polynomial (1)
- Algebraic number theory (1)
- Anabelian Geometry (1)
- Anemia management (1)
- Approximation Algorithms (1)
- Arithmetic-geometric exponentials (1)
- Augmented reality (1)
- Autonomous Driving (1)
- Autophagy (1)
- Autorensystem (1)
- BIOfid (1)
- Bayesian Persuasion (1)
- Bayesian Statistics (1)
- Belief Propagation (1)
- Bert (1)
- Bifurcation Theory (1)
- Big Data (1)
- Big Data Benchmarks (1)
- Biodiversity (1)
- Bioinformatik (1)
- Biological sciences (1)
- Blood loss calculator (1)
- Blood loss formula (1)
- Blood management (1)
- Boundary elements (1)
- Brownian motion (1)
- C++ (1)
- CBM (1)
- COGNIMUSE (1)
- COVID-19 pandemic (1)
- Calderón operator (1)
- Cannings model (1)
- Capital gains taxes (1)
- Changes in labor markets (1)
- Classification (1)
- Cognitive Maps (1)
- Cognitive Spatial Distortions (1)
- Convexity (1)
- Convolution quadrature (1)
- Curvature measure (1)
- Cycle class (1)
- DDC (1)
- DNN Robustness (1)
- Data Acquisition (1)
- Datenanalyse (1)
- Deep Learning (1)
- Delegated Search (1)
- Demuškin groups (1)
- Dewey Decimal Classification (1)
- Diagnostic markers (1)
- Diagramme und Mathematiklernen (1)
- Digitale Pathologie (1)
- Distributional super-solution (1)
- Docker (1)
- Dual cone (1)
- Educational texttechnology (1)
- Event Buffering (1)
- Exponential sums (1)
- External-memory graph algorithms (1)
- Failure Erasure Code (1)
- Finance (1)
- Finite elements (1)
- Finitely many measurements (1)
- Fractional Laplacian (1)
- Functional magnetic resonance imaging (1)
- Future of work (1)
- GABAergic (1)
- GPGPU (1)
- GPU (1)
- Gale-dual pairs (1)
- Gaussian Processes (1)
- Gesten beim Mathematiklernen (1)
- Gesten-Lautsprache-Relationen (1)
- Google Bert (1)
- Graph Neural Networks (1)
- Graph generation (1)
- Graphentheorie (1)
- Ground Texture (1)
- HLT (1)
- HPC (1)
- Hadron-hadron interactions (1)
- Hardy’s inequality (1)
- Heavy Ion experiments (1)
- Hidden Markov Model (1)
- High energy physics (1)
- High-Level-Trigger (1)
- Higher education (1)
- Historical Document Analysis (1)
- Hodge conjecture (1)
- Hopf boundary lemma (1)
- Human factors (1)
- Human-enhancing technologies (1)
- I/O efficiency (1)
- Immunology (1)
- Individual differences (1)
- Information Retrieval (1)
- Intelligence augmentation (1)
- Inter-annotator agreement (1)
- Inverse Problem (1)
- IsoSpace (1)
- Kalman Filter (1)
- Kapitalertragsteuern (1)
- K–12 (1)
- LDPC Codes (1)
- Lattice path matroids (1)
- Leapfrog (1)
- Learning analytics (1)
- Limit mixed Hodge structures (1)
- Linear regression analysis (1)
- Linpack (1)
- Lipschitz–Killing measures (1)
- Localization (1)
- Loewner order (1)
- Log convex sets (1)
- Lyapunov exponents (1)
- Many-core computer architectures (1)
- Mathematical biosciences (1)
- Mathematik (1)
- Mathematikdidaktik (1)
- Mathtrails (1)
- Mc Kean martingale (1)
- MediaEval 2016 (1)
- Mobile (1)
- Mobile Learning (1)
- Moduli space of semi-stable sheaves (1)
- Mollifier decorrelation (1)
- Mollifier multiscale reconstruction and decomposition (1)
- Monocular Scene Flow (1)
- Monotonicity (1)
- Multiparametric MRI (1)
- Multiplicative convexity (1)
- Named entity recognition (1)
- Networking (1)
- Neural Networks (1)
- Neural networks (1)
- Neuronales Netz (1)
- Neuroscience (1)
- Nodal curves (1)
- Non-Fungible-Token (1)
- Non-negativity certificate (1)
- Nonlinear Schrödinger equation (1)
- Nonlocal Neumann conditions (1)
- Nonlocal normal derivative (1)
- Nonlocal operators (1)
- Online Algorithms (1)
- OpenStreetMap (1)
- OpenStreetMap quality evaluation (1)
- Optimal stopping problem (1)
- Optimales Stoppproblem (1)
- Orbital stability (1)
- Parallel Computing (1)
- Parallel and SIMD calculations (1)
- Partial Differential Equations (1)
- Pedestrian Detection (1)
- Perfect graphs (1)
- Permutation (1)
- Podospora anserina (1)
- Pointwise super-solution (1)
- Polyhedron (1)
- Positive function (1)
- Positive signomials (1)
- Potential methods in exploration (1)
- Preclinical research (1)
- Prediction (1)
- Predictive markers (1)
- Processor (1)
- Prognostic markers (1)
- Protein-protein interaction (1)
- Pseudo-Riemannian manifolds (1)
- Public Administration (1)
- Public Transport (1)
- Quantitative features (1)
- RADIUS Protocol (1)
- Radiomics (1)
- Random CSP (1)
- Random Graphs (1)
- Random Matrices (1)
- Random graphs (1)
- Reflexive polytopes (1)
- Regional Laplacian (1)
- Regional fractional Laplacian (1)
- Reinforcement Learning (1)
- Relativistic heavy-ion collisions (1)
- Robotics (1)
- SIMD (1)
- SLAM (1)
- STAR (1)
- STAR experiment (1)
- STEM education (1)
- Script Compression (1)
- Second-order cone (1)
- Semantic portal (1)
- Semantics (1)
- Semiotik nach C. S. Peirce (1)
- Sensory perception (1)
- Sign-changing solutions (1)
- Signed Birkhoff polytopes (1)
- Simplicial complexes (1)
- Smartphone (1)
- Specialized information service (1)
- Spectral Theory (1)
- Standard monomials (1)
- Standing waves (1)
- Statistical analysis (1)
- Strange particles (1)
- Student expectations (1)
- Sublinear circuit (1)
- Sums of non-negative circuit polynomials (1)
- Sums of nonnegative circuit polynomials (SONC) (1)
- Surgical blood loss (1)
- Symmetries (1)
- Symmetry Breaking (1)
- TRD (1)
- TTLab (1)
- Taxon (1)
- Text Annotation (1)
- TextImager (1)
- Themenklassifikation (1)
- Thermoelastic wave equation (1)
- Tobler's First Law (1)
- Tokenisierung (1)
- Toxicity (1)
- Traffic Scenes (1)
- Transcriptome analysis (1)
- Translational research (1)
- Transparent boundary conditions (1)
- UIMA (1)
- Unconditional polytopes (1)
- Unimodular triangulations (1)
- Unity (1)
- UrQMD (1)
- Valuation (1)
- Vannotator (1)
- Variational Methods (1)
- Verkehr (1)
- Virtual reality (1)
- Virtuelle Realität (1)
- Vision (1)
- Visual cortex (1)
- Volunteered Geographic Information (1)
- Wavelet decomposition (1)
- Weak super-solution (1)
- Web (1)
- Web Based Training (1)
- Weyl principle (1)
- affective computing (1)
- algebraic thinking (1)
- algorithm engineering (1)
- anabelian geometry (1)
- ancestral selection graph (1)
- approximation algorithms (1)
- arithmetic geometry (1)
- autoregressive GANs (1)
- average-case complexity (1)
- barrel cortex (1)
- bioinformatics (1)
- bistable perception (1)
- catastrophic forgetting (1)
- central limit theorem (1)
- changepoint (1)
- chatbots (1)
- cluster computing (1)
- co-located collaboration analytics (1)
- coding theory (1)
- collaboration (1)
- collaboration analytics (1)
- computational thinking (1)
- computer vision (1)
- continual deep learning (1)
- convergence (1)
- cover times (1)
- data parallel (1)
- data structures (1)
- debugging (1)
- deep generative models (1)
- deformable model (1)
- density maps (1)
- density visualization (1)
- digital distractions (1)
- digital learning (1)
- digitization (1)
- directional selection (1)
- disaster risk management (1)
- discrepancy principle (1)
- distance learning (1)
- domains (1)
- dynamic algorithms (1)
- education (1)
- educational technology (1)
- emotion generation (1)
- emotion prediction (1)
- epilepsy, epileptogenesis, model, neuro-immune, neuroinflammation, blood brain barrier, seizure (1)
- equity and access to technology (1)
- erasure codes (1)
- error correction codes (1)
- event reconstruction (1)
- external memory (1)
- extreme value theory (1)
- field mapping (1)
- field papers (1)
- flood risk perception (1)
- flooding (1)
- fringe tree (1)
- fundamental theorem of asset pricing (1)
- generic tasks (1)
- graph theory (1)
- group speech analytics (1)
- hierarchical fields (1)
- high performance computing (1)
- independence number (1)
- information processing (1)
- information transfer (1)
- inquiry-based education (1)
- interactive data analysis (1)
- k-shortest path (1)
- literature review (1)
- machine learning (1)
- math trails (1)
- mathematics (1)
- media multitasking (1)
- mikroskopisch (1)
- multimodal (1)
- multimodal fusion (1)
- multimodal learning analytics (1)
- neural network decoder (1)
- neural networks (1)
- neural ordinary differential equation (1)
- neuronal morphology (1)
- neuroscience (1)
- no unbounded profit with bounded risk (1)
- octonions (1)
- online bayesian change point detection (1)
- open-set recognition (1)
- optimal coding (1)
- optimality (1)
- outdoor activities (1)
- outdoors (1)
- parallel file systems (1)
- parallel programming (1)
- patricia trie (1)
- pedagogical roles (1)
- phase coding (1)
- point inversion (1)
- point process (1)
- positivity preserving property (1)
- privacy (1)
- privacy-enhancing technologies (1)
- probability of fixation (1)
- problem solving (1)
- proportional transaction costs (1)
- protein assembly (1)
- protein structure (1)
- random energy model (1)
- random tree (1)
- real world problems (1)
- representation learning (1)
- respiratory complex I (1)
- sampling duality (1)
- satisfiability problem (1)
- section conjecture (1)
- security (1)
- security management (1)
- self-attention (1)
- self-control (1)
- self-regulation (1)
- semimartingales (1)
- shape prior (1)
- shortest path (1)
- social engineering (1)
- spectral cut-off (1)
- spike timing (1)
- spin group (1)
- statistical inverse problems (1)
- statistical shape analysis (1)
- stochastic integration (1)
- stochastic model (1)
- storage (1)
- sum-product algorithm (1)
- synaptogenesis (1)
- synchronous teaching (1)
- task design (1)
- teaching with technology (1)
- technology-enhanced learning (1)
- torsion function (1)
- transfer entropy (1)
- valuation (1)
- variational inference (1)
- vectorization (1)
- video prediction (1)
- visual programming (1)
- ÖPNV (1)
- 𝒮-cone (1)
In the first part of the thesis we investigate Lyapunov exponents for general flat vector bundles over Riemann surfaces and describe properties of Lyapunov exponents on special loci of the moduli space of flat vector bundles. In the second part we show how knowledge of the Lyapunov exponents over a sporadic Teichmüller curve can be used to compute the algebraic equation of the associated universal family of curves.
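On the numerical side, investigations of this kind often reduce to estimating a top Lyapunov exponent as the average log-growth rate of a matrix product. A minimal, generic sketch of that recipe (unrelated to the thesis's specific flat bundles; the matrices below are simply the two standard unipotent generators of SL(2, Z), chosen i.i.d. uniformly, for which Furstenberg's theorem guarantees a strictly positive exponent):

```python
import math
import random

def top_lyapunov(mats, n_steps=200_000, seed=0):
    """Estimate the top Lyapunov exponent of an i.i.d. random product of
    2x2 matrices: track the log-growth of a vector, renormalizing at
    every step to avoid overflow."""
    rng = random.Random(seed)
    v = (1.0, 0.0)
    log_growth = 0.0
    for _ in range(n_steps):
        a, b, c, d = rng.choice(mats)
        w = (a * v[0] + b * v[1], c * v[0] + d * v[1])
        norm = math.hypot(w[0], w[1])
        log_growth += math.log(norm)
        v = (w[0] / norm, w[1] / norm)
    return log_growth / n_steps

# Matrices stored row-wise as (a, b, c, d) for [[a, b], [c, d]].
mats = [(1.0, 1.0, 0.0, 1.0), (1.0, 0.0, 1.0, 1.0)]
lam = top_lyapunov(mats)
```

The renormalization at each step is the standard trick that keeps the computation stable over long products.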
Many software systems today need to make real-time decisions to optimize an objective of interest. This could be maximizing the click-through rate of an ad displayed on a web page or the profit of an online trading system. The performance of these systems is crucial for the parties involved. Although great progress has been made over the years in understanding such online systems and devising efficient algorithms, a fine-grained analysis and problem-specific solutions are often missing. This dissertation focuses on two such specific problems: bandit learning and pricing in gross-substitutes markets.
Bandit learning problems are a prominent class of sequential learning problems with several real-world applications. The classical algorithms proposed for these problems, although optimal in a theoretical sense, often tend to overlook model-specific properties. With this as our motivation, we explore several sequential learning models and give efficient algorithms for them. Our approaches, inspired by several classical works, incorporate the model-specific properties to derive better performance bounds.
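As an illustration of the classical baselines such analyses start from, here is a minimal sketch of the UCB1 index policy (a generic textbook algorithm, not one of the thesis's model-specific methods; the two Bernoulli arms are invented for the example):

```python
import math
import random

def ucb1(reward_fns, horizon, seed=0):
    """UCB1: pull each arm once, then always pull the arm maximizing
    empirical mean + sqrt(2 * ln t / n_i)."""
    rng = random.Random(seed)
    k = len(reward_fns)
    counts = [0] * k
    sums = [0.0] * k
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                     # initialization: one pull per arm
        else:
            arm = max(range(k),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = reward_fns[arm](rng)
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    return total_reward, counts

# Two hypothetical Bernoulli arms with means 0.3 and 0.7; over time the
# index concentrates the pulls on the better arm.
arms = [lambda rng: float(rng.random() < 0.3),
        lambda rng: float(rng.random() < 0.7)]
total, counts = ucb1(arms, horizon=5000)
```

The confidence-width term shrinks as an arm accumulates pulls, which is exactly the exploration/exploitation trade-off the abstract refers to.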
The second part of the thesis investigates an important class of price update strategies in static markets. Specifically, we investigate the effectiveness of these strategies in terms of the total revenue generated by the sellers and the convergence of the resulting dynamics to market equilibrium. We further extend this study to a class of dynamic markets. Interestingly, in contrast to most prior works on this topic, we demonstrate that these price update dynamics may be interpreted as resulting from revenue optimizing actions of the sellers. No such interpretation was known previously. As a part of this investigation, we also study some specialized forms of no-regret dynamics and prediction techniques for supply estimation. These approaches based on learning algorithms are shown to be particularly effective in dynamic markets.
This thesis is concerned with stemmatology, i.e., primarily the reconstruction of the copying history of documents transmitted in manuscript form. The central object of stemmatology is the stemma, a visual representation of the copying history, which in graph-theoretic terms is usually a tree or a directed acyclic graph: the nodes represent the textual witnesses (i.e., the text variants), while the edges stand for individual copying processes. At the heart of the discipline are the question of the author's original (if a single such original ever existed) and the question of reconstructing its text; the stemma itself is a means to this main end (Cameron 1987). The original text, progressively altered by the deviations characteristic of manual copying, is usually not transmitted directly. The aim of this thesis is to describe semi-automatic stemmatology comprehensively and to advance it with tools and analytical methods. The first part describes the history of computer-assisted stemmatology, including its classical precursors, and culminates in the presentation of a simple tool for the dynamic graphical display of stemmata. An excursus on the guiding philological phenomenon of the lectio difficilior discusses its possible psycholinguistic causes in the faster lexical access to high-frequency lexemes. The second part then examines the most existential of all stemmatological debates, initiated by Joseph Bédier, with mathematical arguments based on a stemmatic model proposed by Paul Maas in 1937. In addition, the author simulates stemmata in this chapter in order to estimate the potential influence of the distribution of copying frequencies per manuscript.
In the next part the author presents a self-compiled corpus in Persian, which is examined qualitatively alongside three of the well-known artificial corpora (Parzival, Notre Besoin, Heinrichi). Subsequently, the Multi Modal Distance, a method for stemma generation that relies on external data of psycholinguistically determined letter-confusion probabilities, is applied. In the final part the author works with minimum spanning trees for stemma generation, conducting, evaluating, and discussing a comparative study that combines four methods of distance-matrix generation with four methods of stemma generation.
The thesis deals with the analysis and modeling of point processes emerging from different experiments in neuroscience. In particular, the description and detection of different types of variability changes in point processes is of interest.
A non-stationary rate or variance of life times is a well-known problem in the description of point processes like neuronal spike trains and can affect the results of further analyses requiring stationarity. Moreover, non-stationary parameters might also contain important information themselves. The goal of the first part of the thesis is the (further) development of a technique to detect both rate and variance changes that may occur on multiple time scales, separately or simultaneously. A two-step procedure building on the multiple filter test (Messer et al., 2014) is used: it first tests the null hypothesis of rate homogeneity while allowing for an inhomogeneous variance and estimates change points in the rate if the null hypothesis is rejected. In the second step, the null hypothesis of variance homogeneity is tested and variance change points are estimated, with the rate change points used as input. The main idea is the comparison of estimated variances in adjacent windows of different sizes sliding over the process. To determine the rejection threshold, functionals of Brownian motion are identified as limit processes under the null hypothesis of variance homogeneity. The non-parametric procedure is not restricted to the case of at most one change point. Simulation studies show that the corresponding test keeps the asymptotic significance level for a wide range of parameters and that the test power is considerable. The practical applicability of the procedure is underlined by the analysis of neuronal spike trains.
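The window-comparison idea can be illustrated in a deliberately simplified toy form (this is a stand-in for the multiple filter test, with a single window size, invented parameters, and no calibrated rejection threshold):

```python
import random
import statistics

def variance_change_point(x, h):
    """Slide two adjacent windows of width h over the series and return
    the index where the difference of the empirical variances in the
    left and right windows is largest."""
    best_stat, best_t = -1.0, None
    for t in range(h, len(x) - h):
        d = abs(statistics.variance(x[t - h:t])
                - statistics.variance(x[t:t + h]))
        if d > best_stat:
            best_stat, best_t = d, t
    return best_t, best_stat

rng = random.Random(42)
# Simulated life times: constant mean, but the variance jumps at index 300.
x = ([rng.gauss(5.0, 0.2) for _ in range(300)]
     + [rng.gauss(5.0, 1.0) for _ in range(300)])
t_hat, stat = variance_change_point(x, h=100)
```

The statistic peaks where one window still covers the homogeneous regime and the other covers the changed regime; the actual test additionally combines several window sizes and calibrates the threshold via the Brownian-motion limit process.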
Point processes resulting from experiments on bistable perception are analyzed in the second part of the thesis. Visual illusions allowing for more than one possible percept lead to unpredictable changes of perception. In the thesis, data from (Schmack et al., 2015) are used. A rotating sphere with switching perceived rotation direction was presented to the participants of the study. The stimulus was presented continuously and intermittently, i.e., with short periods of "blank display" between the presentation periods. There are remarkable differences in the response patterns between the two types of presentation. During continuous presentation, the distribution of dominance times, i.e., the intervals of constant perception, is right-skewed and unimodal with a mean of about five seconds. In contrast, during intermittent presentation one observes very long, stable dominance times of more than one minute alternating with very short, unstable dominance times of less than five seconds, i.e., an increase of variability.
The main goal of the second part is to develop a model for the response patterns to bistable perception that builds a bridge between empirical data analysis and mechanistic modeling. Thus, the model should be able to describe both the response patterns to continuous presentation and to intermittent presentation. Moreover, the model should be fittable to typically short experimental data, and the model should allow for neuronal correlates. Current approaches often use detailed assumptions and large parameter sets, which complicate parameter estimation.
First, a Hidden Markov Model is applied. Second, to allow for neuronal correlates, a Hierarchical Brownian Model (HBM) is introduced, where perception is modeled by the competition of two neuronal populations. The activity difference between these two populations is described by a Brownian motion with drift fluctuating between two borders, where each first hitting time causes a perceptual change. To model the response patterns to intermittent presentation a second layer with competing neuronal populations (coding a stable and an unstable state) is assumed. Again, the data are described very well, and the hypothesis that the relative time in the stable state is identical in a group of patients with schizophrenia and a control group is rejected. To sum up, the HBM intends to link empirical data analysis and mechanistic modeling and provides interesting new hypotheses on potential neuronal mechanisms of cognitive phenomena.
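The first-hitting mechanism of the HBM's bottom layer can be illustrated with a simplified simulation (all parameter values below are invented; the actual HBM adds the second layer with stable/unstable states and is fitted to experimental data):

```python
import random

def simulate_dominance_times(mu=0.3, sigma=1.0, border=1.0, dt=0.001,
                             t_max=200.0, seed=0):
    """The activity difference between two competing populations follows
    a Brownian motion with drift between two borders; each first hitting
    of a border triggers a perceptual switch.  Returns the dominance
    durations (times between consecutive switches)."""
    rng = random.Random(seed)
    x = 0.0
    direction = 1                   # drift direction of the current percept
    t, last_switch = 0.0, 0.0
    durations = []
    while t < t_max:
        # Euler step of Brownian motion with drift.
        x += direction * mu * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
        if x >= border or x <= -border:
            durations.append(t - last_switch)
            last_switch = t
            x = 0.0                 # restart between the borders
            direction = -direction  # the other percept now dominates
    return durations

dom = simulate_dominance_times()
```

With these (made-up) parameters the simulation yields a right-skewed, unimodal distribution of dominance times, qualitatively matching the continuous-presentation pattern described above.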
This thesis deals with inverse problems for partial differential equations. Modern solution methods for such inverse problems often have to solve the underlying partial differential equation (PDE) a great number of times, and with regard to runtime these repeated PDE solves account for the bulk of the computation time. This leads to the basic idea of this work: solution methods for inverse problems are to be accelerated by reducing the time needed for the forward solution. More precisely, the forward solution is replaced by an approximation that is cheap to compute. To obtain such an inexpensive approximation of the forward solution, the reduced basis method, a model order reduction technique, is used.
The goal of the classical reduced basis method is to construct a global reduced basis space (RB space). This is a low-dimensional subspace of the solution space of the PDE that yields a good approximation of the PDE solution for every parameter in the parameter space. One exemplary way to construct such a space is to select parameters judiciously and to use the corresponding PDE solutions as basis vectors of the RB space. The orthogonal projection of the PDE onto this RB space then yields the corresponding reduced basis solutions. The special feature of this work is that the PDEs under consideration have a very high-dimensional and unbounded parameter space, which is known to pose an immense difficulty for the reduced basis method.
In Chapter 1, an ill-posed inverse model problem, the reconstruction of the thermal conductivity of an object from measurements of its temperature, is introduced, and the nonlinear Landweber method is presented as an iterative regularization scheme for solving this inverse problem. The fundamentals of the reduced basis method are laid out, and it is explained why the classical variant of the method fails in this image reconstruction context. A novel approach, an adaptive reduced basis approach, is then developed. The following steps form its basis:
1. Given an RB space, project the solution algorithm of the inverse problem onto this RB space.
2. Generate new iterates with this projected method until either an iterate solves the inverse problem or the RB space has to be enlarged.
3. In the first case the method terminates; in the second case the forward solution belonging to the current iterate is used to improve the RB space. Then continue with the first step.
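The adaptive loop above admits a compact sketch on a toy problem. Everything below is invented for illustration: the "PDE" is a small diagonal parametric linear system with a single scalar parameter, the inverse-problem update is a Gauss-Newton step rather than the Landweber iteration of the thesis, and the enrichment check uses a full solve where in practice a cheap a posteriori error estimator would be used.

```python
import numpy as np

# Toy parametric "PDE": A(p) u = b with A(p) = A0 + p * A1, scalar p.
n = 50
A0 = np.diag(2.0 + np.arange(n))            # SPD base operator
A1 = np.diag(1.0 + 0.1 * np.arange(n))      # parameter-dependent part
b = np.ones(n)

def solve_full(p):
    """Expensive 'truth' solve of the forward problem."""
    return np.linalg.solve(A0 + p * A1, b)

def solve_rb(p, V):
    """Cheap solve: Galerkin projection onto the RB space spanned by V."""
    A = A0 + p * A1
    return V @ np.linalg.solve(V.T @ A @ V, V.T @ b)

p_true = 1.7
u_meas = solve_full(p_true)                 # synthetic measurement
p = 0.0
V = np.zeros((n, 0))                        # RB space starts empty
for _ in range(20):
    # Step 3: if the RB solution is not accurate enough at the current
    # iterate, enrich the space with a snapshot (a full solution there).
    if V.shape[1] == 0 or np.linalg.norm(solve_rb(p, V) - solve_full(p)) > 1e-8:
        V, _ = np.linalg.qr(np.column_stack([V, solve_full(p)]))
    # Steps 1-2: one projected update for the inverse problem, using only
    # cheap RB solves (Gauss-Newton on the scalar parameter).
    r = solve_rb(p, V) - u_meas
    h = 1e-6
    dr = (solve_rb(p + h, V) - solve_rb(p - h, V)) / (2 * h)
    p -= float(dr @ r) / float(dr @ dr)
```

The RB space is thus built locally, from snapshots at exactly the parameters the inverse iteration visits, rather than globally over the whole parameter space.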
A locally approximating RB space is thus constructed step by step, with the parameters for new basis vectors found by means of a projected variant of the solution algorithm of the inverse problem. The novel reduced basis Landweber method is the main result of Chapter 1; the method is studied numerically in detail and compared with the original Landweber method.
In Chapter 2 of this thesis, the previously developed adaptive reduced basis approach is applied to a complex problem of practical relevance, and the resulting new method is analyzed theoretically in detail with regard to convergence. The second part of this work is therefore devoted to the problem of magnetic resonance electrical impedance tomography (MREIT).
MREIT is an imaging technique that has been developed over the last three decades. An object with electrodes attached to it is placed in an MRI scanner, and the goal of the method is to determine the electrical conductivity of the object. The required data are obtained as follows: applying a current at one of the electrodes generates a current flow, which in turn induces a change of the magnetic flux density. This change can be measured with the MRI scanner, so that a full set of interior data is at hand, from which highly resolved images of the electrical conductivity of the object can be reconstructed.
As a solution algorithm for this practically relevant problem, the well-known harmonic Bz algorithm is presented. The problem and the algorithm are analyzed with regard to convergence, and a convergence result is proved that extends the existing convergence theory to an approximate harmonic Bz algorithm. The result does not depend on which kind of approximation of the forward solution of the corresponding PDE is used in the approximate harmonic Bz algorithm, as long as it satisfies a regularity and a quality condition. This yields the second main result of this work: the numerical convergence of the harmonic Bz algorithm. It should be emphasized that convergence results in the field of inverse problems (where they exist at all) usually assume knowledge of the exact forward solution, so that no numerical convergence of the corresponding method follows (a numerical implementation always works with an approximation of the forward solution). This convergence result is thus a step toward the numerical convergence of other solution methods for inverse problems.
Since the theoretical result does not depend on the kind of approximation, one also obtains the convergence of the novel reduced basis harmonic Bz algorithm, which combines the adaptive reduced basis approach developed in Chapter 1 with the harmonic Bz algorithm. A short numerical study shows that this reduced basis harmonic Bz algorithm is faster than the harmonic Bz algorithm while the quality of the reconstruction remains the same. The adaptive reduced basis approach developed here thus also works when applied to this complex, practically relevant inverse problem of MREIT.
The results of this thesis lie in the area of convex algebraic geometry, which is the intersection of real algebraic geometry, convex geometry, and optimization.
We study sums of nonnegative circuit polynomials (SONC) and their related cone, both geometrically and in application to polynomial optimization. SONC polynomials are certain sparse polynomials having a special structure in terms of their Newton polytopes and supports, and serve as a certificate of nonnegativity for real polynomials, which is independent of sums of squares.
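For a single circuit polynomial, the nonnegativity certificate reduces to a closed-form comparison: write the inner exponent β as a convex combination Σ λ_j α(j) of the outer exponents, form the circuit number Θ_f = Π (c_j/λ_j)^{λ_j}, and (for even β) the polynomial is nonnegative exactly when the inner coefficient is at least -Θ_f. A minimal sketch of that computation (the helper below is our own illustration, not code from the thesis):

```python
import numpy as np

def circuit_number(outer_exps, outer_coeffs, inner_exp):
    """Solve sum_j lambda_j * alpha(j) = beta, sum_j lambda_j = 1 for the
    barycentric coordinates lambda, then return the circuit number
    Theta_f = prod_j (c_j / lambda_j)**lambda_j."""
    A = np.vstack([np.array(outer_exps, dtype=float).T,
                   np.ones(len(outer_exps))])
    rhs = np.append(np.array(inner_exp, dtype=float), 1.0)
    lam = np.linalg.solve(A, rhs)
    theta = float(np.prod((np.array(outer_coeffs) / lam) ** lam))
    return theta, lam

# Motzkin polynomial x^4 y^2 + x^2 y^4 + 1 - 3 x^2 y^2: outer terms at
# (4,2), (2,4), (0,0) with coefficients 1, inner term -3 at beta = (2,2).
theta, lam = circuit_number([(4, 2), (2, 4), (0, 0)], [1.0, 1.0, 1.0], (2, 2))
# theta evaluates to 3 (up to floating point); the inner coefficient is
# -3 = -theta, a boundary case, matching the fact that the Motzkin
# polynomial is nonnegative but has zeros.
```

The Motzkin polynomial is also the classical example of a nonnegative polynomial that is not a sum of squares, which is exactly why SONC certificates are independent of sums of squares.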
The first part of this thesis is dedicated to the convex geometric study of the SONC cone. As main results we show that the SONC cone is full-dimensional in the cone of nonnegative polynomials, we exactly determine the number of zeros of a nonnegative circuit polynomial, and we give a complete and explicit characterization of the number of zeros of SONC polynomials and forms. Moreover, we provide a first approach to the study of the exposed faces of the SONC cone and their dimensions.
In the second part of the thesis we use SONC polynomials to tackle constrained polynomial optimization problems (CPOPs).
As a first step, we derive a lower bound for the optimal value of CPOP based on SONC polynomials by using a single convex optimization program, which is a geometric program (GP) under certain assumptions. GPs are a special type of convex optimization problems and can be solved in polynomial time. We test the new method experimentally and provide examples comparing our new SONC/GP approach with Lasserre's relaxation, a common approach for tackling CPOPs, which approximates nonnegative polynomials via sums of squares and semidefinite programming (SDP). The new approach comes with the benefit that in practice GPs can be solved significantly faster than SDPs. Furthermore, increasing the degree of a given problem has almost no effect on the runtime of the new program, which is in sharp contrast to SDPs.
As a second step, we establish a hierarchy of efficiently computable lower bounds converging to the optimal value of CPOP based on SONC polynomials. For a given degree each bound is computable by a relative entropy program. This program is also a convex optimization program, which is more general than a geometric program, but still efficiently solvable via interior point methods.
Powerful environment perception systems are a fundamental prerequisite for the successful deployment of intelligent vehicles, from advanced driver assistance systems to self-driving cars. Arguably the most essential task of such systems is the reliable detection and localization of obstacles in order to avoid collisions. Two particularly challenging scenarios in this context are represented by small, unexpected obstacles on the road ahead, and by potentially dynamic objects observed from a large distance. Both scenarios become exceedingly critical when the ego-vehicle is traveling at high speed. As a consequence, two major requirements placed on environment perception systems are the capability of (a) high-sensitivity generic object detection and (b) high-accuracy obstacle distance estimation. The present thesis addresses both requirements by proposing novel approaches based on stereo vision for spatial perception.
First, this work presents a novel method for the detection of small, generic obstacles and objects at long range directly from stereo imagery. The detection is based on sound statistical tests using local geometric criteria which are applicable to both static and moving objects. The approach is not limited to predefined sets of semantic object classes and does not rely on restrictive assumptions on the environment, such as oversimplified global ground surface models. Free-space and obstacle hypotheses are evaluated based on a statistical model of the input image data in order to avoid a loss of sensitivity through intermediate processing steps. In addition to the detection result, the algorithm simultaneously yields refined estimates of object distances, originating from an implicit optimization of the geometric obstacle hypothesis models. The proposed detection system provides multiple flexible output representations, ranging from 3D obstacle point clouds to compact mid-level obstacle segments to bounding box representations of object instances suitable for model-based tracking. The core algorithm concept lends itself to massive parallelization and can be implemented efficiently on dedicated hardware. Real-time execution is demonstrated on a test vehicle in real-world traffic. For a thorough quantitative evaluation of the detection performance, two dedicated datasets are employed, covering small and hard-to-detect obstacles in urban environments as well as distant dynamic objects in highway driving scenarios. The proposed system is shown to significantly outperform current general purpose obstacle detection approaches in both setups, providing a considerable increase in detection range while reducing the false positive rate at the same time.
Second, this work considers the high-accuracy estimation of object distances from stereo vision, particularly at long range. Several new methods for optimizing the stereo-based distance estimates of detected objects are proposed and compared to state-of-the-art concepts. A comprehensive statistical evaluation is performed on an extensive dedicated dataset, establishing reference values for the accuracy limits actually achievable in practice. Notably, the refined distance estimates implicitly provided by the proposed obstacle detection system are shown to yield highly accurate results, on par with the top-performing dedicated stereo matching algorithms considered in the analysis.
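The core geometry behind stereo distance estimation is the inverse relation between depth and disparity, which makes long-range accuracy the hard case. A small sketch with hypothetical camera parameters (the focal length and baseline values below are made up for illustration):

```python
# Depth from a rectified stereo pair: Z = f * B / d, where f is the focal
# length in pixels, B the baseline in meters, and d the disparity in pixels.
# For a fixed disparity error dd, the depth error grows quadratically with
# distance: dZ ≈ (Z**2 / (f * B)) * dd.

def depth_from_disparity(d_px, focal_px=1200.0, baseline_m=0.30):
    return focal_px * baseline_m / d_px

def depth_error(z_m, dd_px, focal_px=1200.0, baseline_m=0.30):
    return z_m ** 2 * dd_px / (focal_px * baseline_m)

z_near = depth_from_disparity(36.0)    # 10 m
z_far = depth_from_disparity(3.6)      # 100 m
# With a 0.1 px disparity error, the depth uncertainty at 100 m is
# 100x the uncertainty at 10 m.
err_near = depth_error(z_near, 0.1)
err_far = depth_error(z_far, 0.1)
```

This quadratic error growth is why sub-pixel disparity refinement, as evaluated in the thesis, matters most for distant objects.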
In this thesis we introduce the imaginary projection of (multivariate) polynomials as the projection of their variety onto its imaginary part, I(f) = { Im(z_1, ... , z_n) : f(z_1, ... , z_n) = 0 }. This induces a geometric viewpoint to stability, since a polynomial f is stable if and only if its imaginary projection does not intersect the positive orthant. Accordingly, the thesis is mainly motivated by the theory of stable polynomials.
Interested in the number and structure of the components of the complement of an imaginary projection, we show as a key result that there are only finitely many such components and that all of them are convex. This offers a connection to the theory of amoebas and coamoebas as well as to the theory of hyperbolic polynomials.
For hyperbolic polynomials, we show that the hyperbolicity cones coincide with the components of the complement of the imaginary projection, which establishes a strong structural relationship between these two sets. Based on this, we prove a tight upper bound for the number of hyperbolicity cones and, correspondingly, for the number of components of the complement in the case of homogeneous polynomials. Besides this, we investigate various further aspects of imaginary projections and compute the imaginary projections of several classes of polynomials explicitly.
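A standard illustration of this correspondence (a textbook example, not taken from the thesis) is the Lorentz quadratic:

```latex
% f(x) = x_1^2 - x_2^2 - \dots - x_n^2 is hyperbolic with respect to
% e = (1, 0, \dots, 0), since for every x \in \mathbb{R}^n the univariate
% restriction
f(x + t e) \;=\; (x_1 + t)^2 - x_2^2 - \dots - x_n^2
% has only real roots  t = -x_1 \pm \sqrt{x_2^2 + \dots + x_n^2}.
% Its two hyperbolicity cones are the forward and backward Lorentz cones
\Lambda_{\pm} \;=\; \bigl\{\, x \in \mathbb{R}^n \;:\; \pm x_1 > \sqrt{x_2^2 + \dots + x_n^2} \,\bigr\},
% which, by the correspondence above, are exactly the (convex) components
% of the complement of the imaginary projection of f.
```

Here the tightness phenomenon is already visible: the homogeneous quadratic attains the extremal count of two hyperbolicity cones, matching two complement components.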
Finally, we initiate the study of a conic generalization of stability by considering polynomials whose roots have no imaginary part in the interior of a given real, n-dimensional, proper cone K. This notion appears to be very natural, since many statements known for univariate and multivariate stable polynomials carry over to the conic situation, such as the Hermite-Biehler Theorem and the Hermite-Kakeya-Obreschkoff Theorem. Taking K to be the cone of positive semidefinite matrices, we prove a criterion for the conic stability of determinantal polynomials.
As an integral part of ALICE, the dedicated heavy-ion experiment at CERN’s Large Hadron Collider, the Transition Radiation Detector (TRD) contributes to the experiment’s tracking, triggering and particle identification. The central element in the TRD’s processing chain is its trigger and readout processor, the Global Tracking Unit (GTU). The GTU implements fast triggers on various signatures, which rely on the reconstruction of up to 20 000 particle track segments into global tracks, and performs the buffering and processing of event raw data as part of a complex detector readout tree.
The high data rates the system has to handle and its dual use as trigger and readout processor with shared resources and interwoven processing paths require the GTU to be a unique, high-performance parallel processing system. To achieve high data taking efficiency, all elements of the GTU are optimized for high running stability and low dead time.
The solutions presented in this thesis for the handling of readout data in the GTU, from the initial reception to the final assembly and transmission to the High-Level Trigger computer farm, address all these aspects. The presented concepts employ multi-event buffering, in-stream data processing, extensive embedded diagnostics, and advanced features of modern FPGAs to build a robust high-performance system that can conduct the high-bandwidth readout of the TRD with maximum stability and minimized dead time. The work summarized here covers not only the conceptual layout of the multi-event data handling and segment control, but also its implementation, simulation, verification, commissioning and operation. It furthermore covers the system upgrade for the second data taking period and presents an analysis of the actual system performance.
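The dead-time benefit of multi-event buffering can be illustrated with a toy model (a conceptual sketch, in no way the GTU firmware; the class and its fields are invented for illustration): incoming events are queued in a bounded FIFO, so the front-end only goes busy, i.e. accrues dead time, when the buffer is full, rather than during every single readout.

```python
from collections import deque

class MultiEventBuffer:
    """Toy multi-event FIFO decoupling event reception from readout.
    The front-end is only 'busy' (dead time) while the buffer is full."""

    def __init__(self, depth):
        self.buf = deque()
        self.depth = depth
        self.accepted = 0
        self.rejected = 0   # arrivals hitting a full buffer -> dead time

    def receive(self, event):
        """Accept an event if buffer space is available."""
        if len(self.buf) >= self.depth:
            self.rejected += 1          # buffer full: front-end busy
            return False
        self.buf.append(event)
        self.accepted += 1
        return True

    def read_out(self):
        """Ship the oldest buffered event downstream, if any."""
        return self.buf.popleft() if self.buf else None

# Bursty arrivals: four events arrive back-to-back, readout drains
# only one per step, yet just one event is lost to dead time.
b = MultiEventBuffer(depth=3)
for ev in range(4):
    b.receive(ev)       # 4th arrival hits a full buffer
b.read_out()            # readout frees one slot
b.receive(4)            # accepted again
print(b.accepted, b.rejected)   # -> 4 1
```

A single-event buffer (depth=1) would instead reject every arrival that overlaps an ongoing readout, which is exactly the dead-time behaviour that multi-event buffering is designed to avoid.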
The presented design of the GTU’s input stage, which comprises 90 FPGA-based nodes, supports multi-event buffering for the data received from the 18 TRD supermodules on 1080 optical links at the full sender aggregate net bandwidth of 2.16 Tbit/s. With careful design of the control logic and the overall data path, the readout on the 18 concentrator nodes of the supermodule stage can utilize an effective aggregate output bandwidth of initially 3.33 GiB/s and, after the successful readout bandwidth upgrade, 6.50 GiB/s via 18 optical links. The high achievable readout link utilization of more than 99% and the intermediate buffering of events on the GTU help to keep the dead time associated with local event building and readout typically below 10%. The GTU has been used for production data taking since the start-up of the experiment and has performed the event buffering, local event building and readout for the TRD in a correct, efficient and highly dependable fashion ever since.