Proteins are biological macromolecules playing essential roles in all living organisms.
Proteins often bind with each other forming complexes to fulfill their function. Such protein complexes assemble along an ordered pathway. An assembled protein complex can often be divided into structural and functional modules. Knowing the order of assembly and the modules of a protein complex is important to understand biological processes and treat diseases related to misassembly.
Typical structures in the Protein Data Bank (PDB) contain two to three subunits and a few thousand atoms. Recent developments have led to the resolution of large protein complexes. The increasing number and size of protein complexes demand computational assistance for visualization and analysis. One such large protein complex is respiratory complex I, which comprises 45 subunits in Homo sapiens.
Complex I is a well-understood protein complex that served as a case study to validate our methods.
Our aim was to analyze time-resolved Molecular Dynamics (MD) simulation data, identify modules of a protein complex and generate hypotheses for the assembly pathway of a protein complex. For that purpose, we abstracted the topology of protein complexes to Complex Graphs of the Protein Topology Graph Library (PTGL). The subunits are represented as vertices, and spatial contacts as edges. The edges are weighted with the number of contacts based on a distance threshold. This allowed us to apply graph-theoretic methods to visualize and analyze protein complexes.
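The graph abstraction described above can be sketched in a few lines. This is a minimal illustration, not the PTGL implementation; the subunit names, coordinates, and the 4 Å threshold are hypothetical.

```python
from itertools import combinations
from math import dist

def complex_graph(subunits, threshold=4.0):
    """Build a weighted contact graph: vertices are subunit names,
    edge weights count atom pairs closer than `threshold` (Angstrom)."""
    edges = {}
    for (a, atoms_a), (b, atoms_b) in combinations(subunits.items(), 2):
        contacts = sum(1 for p in atoms_a for q in atoms_b
                       if dist(p, q) <= threshold)
        if contacts:
            edges[(a, b)] = contacts
    return edges

# Two toy "subunits" given as lists of atom coordinates (invented data)
subunits = {
    "A": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
    "B": [(2.0, 0.0, 0.0), (50.0, 0.0, 0.0)],
}
print(complex_graph(subunits))  # {('A', 'B'): 2}
```

Once the complex is reduced to such a weighted graph, any standard graph algorithm (clustering, shortest paths, centrality) can be applied directly.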
We extended the implementations of two methods to achieve computation of Complex Graphs in feasible runtimes. The first method skipped checks for contacts by using the information about which residues are sequential neighbors; we extended this method to protein complexes and to structures containing ligands. The second method introduced spheres encompassing all atoms of a subunit and skipped the check for contacts if the corresponding spheres do not overlap. Combined, the two methods allowed skipping up to 93 % of the contact checks for sample complexes of 40 subunits, compared to up to 10 % with the previous implementation. We showed that the runtime of the combined method scaled linearly with the number of atoms, in contrast to the non-linear scaling of the previous implementation. We implemented a third method that fixes the assignment of an orientation to secondary structure elements: we placed a three-dimensional vector in each secondary structure element and computed the angle between secondary structure elements to assign an orientation. This method sped up the runtime especially for large structures, such as the capsid of the human immunodeficiency virus, for which the runtime decreased from 43 hours to less than 9.
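The sphere pre-check can be illustrated as follows. This is a sketch assuming centroid-based bounding spheres, which need not be the exact construction used in the thesis; the coordinates are invented.

```python
from math import dist

def bounding_sphere(atoms):
    """Centroid-based bounding sphere of a subunit (cheap, not minimal)."""
    n = len(atoms)
    center = tuple(sum(c) / n for c in zip(*atoms))
    radius = max(dist(center, a) for a in atoms)
    return center, radius

def may_have_contacts(atoms_a, atoms_b, threshold=4.0):
    """Cheap pre-check: if the bounding spheres are farther apart than the
    contact threshold, no atom pair can be in contact, so the expensive
    all-pairs distance check for this subunit pair can be skipped."""
    (ca, ra), (cb, rb) = bounding_sphere(atoms_a), bounding_sphere(atoms_b)
    return dist(ca, cb) <= ra + rb + threshold
```

The pre-check costs O(n) per subunit instead of O(n·m) per subunit pair, which is where the large savings for many-subunit complexes come from.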
The feasible runtimes allowed us to investigate two data sets of MD trajectories of respiratory complex I of Thermus thermophilus that we received. The data sets differ only in whether ubiquinone is bound to the complex. We implemented a pipeline, PTGLdynamics, to compute the contacts and Complex Graphs for all time steps of the trajectories. We investigated different methods to track changes of contacts during the simulation and created a heat map, projected onto the three-dimensional structure, visualizing the changes. We also created line plots to visualize the changes of contacts over the course of the simulation. Both visualizations helped in spotting exceptionally flexible or rigid regions of the structure, or time points of the simulation at which major dynamics occur.
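One simple way to quantify per-interface flexibility over a trajectory — a sketch of the general idea, not the exact statistics used in PTGLdynamics — is the spread of each edge's contact count across frames:

```python
from statistics import pstdev

def edge_flexibility(frames):
    """Given per-frame contact-count dicts {(a, b): n}, return the standard
    deviation of each edge's weight over the trajectory; high values mark
    flexible interfaces, zero marks rigid ones."""
    edges = set().union(*frames)
    return {e: pstdev([f.get(e, 0) for f in frames]) for e in edges}

# Three invented frames of a trajectory
frames = [{("A", "B"): 10, ("B", "C"): 5},
          {("A", "B"): 10, ("B", "C"): 9},
          {("A", "B"): 10, ("B", "C"): 1}]
flex = edge_flexibility(frames)
# ("A", "B") is rigid (stdev 0.0); ("B", "C") fluctuates (stdev ~3.27)
```

Mapping these per-edge values onto the structure gives exactly the kind of heat map described above.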
We introduced normalizations of the edge weights of Complex Graphs for identifying modules and predicting the assembly pathway. The idea is to normalize the number of contacts by the number of residues of a subunit. We defined five different normalizations.
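For illustration, one plausible normalization (the thesis defines five, which are not reproduced here) divides each contact count by the residue count of the smaller of the two subunits, so that large subunits do not dominate purely by size:

```python
def normalize_min(edges, residues):
    """Divide each edge's contact count by the residue count of the
    smaller subunit of the pair. One illustrative variant only; the
    five normalizations of the thesis are not specified here."""
    return {(a, b): w / min(residues[a], residues[b])
            for (a, b), w in edges.items()}

edges = {("A", "B"): 30, ("B", "C"): 30}          # invented contact counts
residues = {"A": 100, "B": 300, "C": 50}          # invented residue counts
print(normalize_min(edges, residues))  # {('A', 'B'): 0.3, ('B', 'C'): 0.6}
```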
To identify structural and functional modules, we applied the Leiden graph clustering algorithm to the Complex Graphs of respiratory complex I and the respiratory supercomplex. We examined the results for the different normalizations of the weights of the Complex Graphs. The absolute edge weight produced the best result, identifying three of the four modules that have been defined in the literature for respiratory complex I.
We applied agglomerative hierarchical clustering to the edges of a Complex Graph to create hypotheses of the assembly pathway. The rationale was that subunits with an extensive interface in the final structure assemble early. We tested our method against two existing methods on a data set of 21 proteins with reported assembly pathways. Our prediction outperformed the other methods and ran in feasible runtimes of a few minutes at most.
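The agglomerative idea — the heaviest interface merges first — can be sketched as follows. This is a simplified greedy version built on the stated rationale, not the thesis's exact algorithm, and the edge weights are hypothetical.

```python
def assembly_order(edges):
    """Repeatedly merge the two clusters with the heaviest total contact
    weight between them; the merge sequence is read as a hypothesis for
    the assembly pathway (largest interface = earliest assembly step)."""
    clusters = {frozenset([v]) for e in edges for v in e}

    def between(c1, c2):
        return sum(w for (a, b), w in edges.items()
                   if (a in c1 and b in c2) or (a in c2 and b in c1))

    merges = []
    while len(clusters) > 1:
        c1, c2 = max(((x, y) for x in clusters for y in clusters if x != y),
                     key=lambda p: between(*p))
        clusters = (clusters - {c1, c2}) | {c1 | c2}
        merges.append(frozenset(c1 | c2))
    return merges

# Hypothetical weighted Complex Graph with four subunits:
# A-B is the heaviest interface, so A and B are predicted to assemble first
merges = assembly_order({("A", "B"): 10, ("B", "C"): 4, ("C", "D"): 8})
```

Reading the merge list from first to last yields a binary dendrogram, which is what the thesis compares against reference pathways.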
We also tested our method on respiratory complex I, the respiratory supercomplex and the respiratory megacomplex. We compared the results for the different normalizations with an assembly pathway of respiratory complex I described in the literature. We transformed the assembly pathways to dendrograms and compared the predictions to the reference using the Robinson-Foulds distance and clustering information distance. We analyzed the landscape of the clustering information distance by generating random dendrograms and showed that our result is far better than expected at random. We showed in a detailed analysis that the assembly prediction using one normalization was able to capture key features of the assembly pathway that has been proposed in the literature.
In conclusion, we presented different applications of graph theory to automatically analyze the topology of protein complexes. Our programs run in feasible runtimes even for large complexes. We showed that graph-theoretic modeling of the protein structure can be used to analyze MD simulation data, identify modules of protein complexes and predict assembly pathways.
The present paper is concerned with the half-space Dirichlet problem [...] where ℝ^N_+ := {x ∈ ℝ^N : x_N > 0} for some N ≥ 1, and p > 1, c > 0 are constants. We analyse the existence, non-existence and multiplicity of bounded positive solutions to (P_c). We prove that the existence and multiplicity of bounded positive solutions to (P_c) depend in a striking way on the value of c > 0 and also on the dimension N. We find an explicit number c_p ∈ (1, √e), depending only on p, which determines the threshold between existence and non-existence. In particular, in dimensions N ≥ 2, we prove that, for 0 < c < c_p, problem (P_c) admits infinitely many bounded positive solutions, whereas, for c > c_p, there are no bounded positive solutions to (P_c).
Goal-Conditioned Reinforcement Learning (GCRL) is a popular framework for training agents to solve multiple tasks in a single environment. It is crucial to train an agent on a diverse set of goals to ensure that it can learn to generalize to unseen downstream goals. Therefore, current algorithms try to learn to reach goals while simultaneously exploring the environment for new ones (Aubret et al., 2021; Mendonca et al., 2021). This creates a form of the prominent exploration-exploitation dilemma. To relieve the pressure of a single agent having to optimize for two competing objectives at once, this thesis proposes the novel algorithm family Goal-Conditioned Reinforcement Learning with Prior Intrinsic Exploration (GC-π), which separates exploration and goal learning into distinct phases. In the first exploration phase, an intrinsically motivated agent explores the environment and collects a rich dataset of states and actions. This dataset is then used to learn a representation space, which acts as the distance metric for the goal-conditioned reward signal. In the final phase, a goal-conditioned policy is trained with the help of the representation space, and its training goals are randomly sampled from the dataset collected during the exploration phase. Multiple variations of these three phases have been extensively evaluated in the classic AntMaze MuJoCo environment (Nachum et al., 2018). The final results show that the proposed algorithms are able to fully explore the environment and solve all downstream goals while using every dimension of the state space for the goal space. This makes the approach more flexible compared to previous GCRL work, which only ever uses a small subset of the dimensions for the goals (S. Li et al., 2021a; Pong et al., 2020).
Nowadays, digitalization has an immense impact on the landscape of jobs. This technological revolution creates new industries and professions, promises greater efficiency and improves the quality of working life. However, emerging technologies such as robotics and artificial intelligence (AI) are reducing human intervention, thus advancing automation and eliminating thousands of jobs and whole occupational profiles. To prepare employees for the changing demands of work, adequate and timely training of the workforce and real-time support of workers in new positions are necessary. Therefore, we investigate whether user-oriented technologies, such as augmented reality (AR) and virtual reality (VR), can be applied "on-the-job" for such training and support—also known as intelligence augmentation (IA). To address this problem, this work synthesizes the results of a systematic literature review as well as a practically oriented search on augmented reality and virtual reality use cases within the IA context. A total of 150 papers and use cases are analyzed to identify suitable areas of application in which it is possible to enhance employees' capabilities. The results of both the theoretical and the practical work show that VR is primarily used to train employees without prior knowledge, whereas AR is used to expand the scope of competence of individuals in their field of expertise while on the job. Based on these results, a framework is derived which provides practitioners with guidelines as to how AR or VR can support workers at their job so that they can keep up with anticipated skill demands. Furthermore, it shows for which application areas AR or VR can provide workers with sufficient training to learn new job tasks. Thereby, this research provides practical recommendations to accompany the imminent disruptions caused by AI and similar technologies and to alleviate associated negative effects on the German labor market.
We present a symmetry result for solutions of equations involving the fractional Laplacian in a domain with at least two perpendicular symmetries. We show that if the solution is continuous, bounded, and odd in one direction such that it has a fixed sign on one side, then it will be symmetric in the perpendicular direction. Moreover, the solution will be monotonic in the part where it is of fixed sign. In addition, we also present a class of examples in which our result can be applied.
Motivated by Gröbner basis theory for finite point configurations, we define and study the class of standard complexes associated to a matroid. Standard complexes are certain subcomplexes of the independence complex that are invariant under matroid duality. For the lexicographic term order, the standard complexes satisfy a deletion-contraction-type recurrence. We explicitly determine the lexicographic standard complexes for lattice path matroids using classical bijective combinatorics.
For an abeloid variety A over a complete algebraically closed field extension K of Qp, we construct a p-adic Corlette–Simpson correspondence, namely an equivalence between finite-dimensional continuous K-linear representations of the Tate module and a certain subcategory of the Higgs bundles on A. To do so, our central object of study is the category of vector bundles for the v-topology on the diamond associated to A. We prove that any pro-finite-étale v-vector bundle can be built from pro-finite-étale v-line bundles and unipotent v-bundles. To describe the latter, we extend the theory of universal vector extensions to the v-topology and use this to generalise a result of Brion by relating unipotent v-bundles on abeloids to representations of vector groups.
Nodular lymphocyte-predominant Hodgkin lymphoma (NLPHL) can show variable histological growth patterns and presents remarkable overlap with T-cell/histiocyte-rich large B-cell lymphoma (THRLBCL). Previous studies suggest that NLPHL histological variants represent progression forms of NLPHL and that THRLBCL transformation occurs in aggressive disease. Since molecular studies of both lymphomas are limited by the low number of tumor cells, the present study aimed to determine whether a better understanding of these lymphomas is possible via detailed measurements of nuclear and cell size features in 2D and 3D sections. Whereas no significant differences were visible in 2D analyses, a slightly increased nuclear volume and a significantly enlarged cell size were noted in 3D measurements of the tumor cells of THRLBCL in comparison with typical NLPHL cases. Interestingly, not only was the size of the tumor cells increased in THRLBCL but also the nuclear volume of concomitant T cells in the reactive infiltrate when compared with typical NLPHL. Particularly CD8+ T cells made frequent contacts with tumor cells of THRLBCL. However, the nuclear volume of B cells was comparable in all cases. These results clearly demonstrate that 3D tissue analyses are superior to conventional 2D analyses of histological sections. Furthermore, the results point to a strong activation of T cells in THRLBCL, representing a cytotoxic response against the tumor cells of unclear effectiveness, resulting in enhanced swelling of the tumor cell bodies and limited proliferative potential. Further molecular studies combining 3D tissue analyses and molecular data will help to gain profound insight into these ill-defined cellular processes.
Through the lens of didactic reduction, we consider a (periodic) tessellation Δ of either Euclidean or hyperbolic n-space M. By a piecewise isometric rearrangement of Δ we mean the process of cutting M along corank-1 tile-faces into finitely many convex polyhedral pieces, and rearranging the pieces to a new tight covering of the tessellation Δ. Such a rearrangement defines a permutation of the (centers of the) tiles of Δ, and we are interested in the group PI(Δ) of all piecewise isometric rearrangements of Δ. In this paper, we offer (a) an illustration of piecewise isometric rearrangements in the visually attractive hyperbolic plane, (b) an explanation of how this is related to Richard Thompson's groups, (c) a section on the structure of the group pei(ℤ^n) of all piecewise Euclidean rearrangements of the standard cubically tessellated ℝ^n, and (d) results on the finiteness properties of some subgroups of pei(ℤ^n).
Conditional Sums-of-AM/GM-Exponentials (conditional SAGE) is a decomposition method to prove nonnegativity of a signomial or polynomial over some subset X of real space. In this article, we undertake the first structural analysis of conditional SAGE signomials for convex sets X. We introduce the X-circuits of a finite subset A ⊂ ℝ^n, which generalize the simplicial circuits of the affine-linear matroid induced by A to a constrained setting. The X-circuits serve as the main tool in our analysis and exhibit particularly rich combinatorial properties for polyhedral X, in which case the set of X-circuits consists of one-dimensional cones of suitable polyhedral fans. The framework of X-circuits transparently reveals when an X-nonnegative conditional AM/GM-exponential can in fact be further decomposed as a sum of simpler X-nonnegative signomials. We develop a duality theory for X-circuits with connections to the geometry of sets that are convex according to the geometric mean. This theory provides an optimal power cone reconstruction of conditional SAGE signomials when X is polyhedral. In conjunction with a notion of reduced X-circuits, the duality theory facilitates a characterization of the extreme rays of conditional SAGE cones. Since signomials under logarithmic variable substitutions give polynomials, our results also have implications for nonnegative polynomials and polynomial optimization.
In this article, we prove the Hodge conjecture for a desingularization of the moduli space of rank 2, semi-stable, torsion-free sheaves with fixed odd degree determinant over a very general irreducible nodal curve of genus at least 2. We also compute the algebraic Poincaré polynomial of the associated cohomology ring.
Background: The ability to approximate intra-operative hemoglobin loss with reasonable precision and linearity is a prerequisite for determining a relevant surgical outcome parameter: this information enables comparison of surgical procedures between different techniques, surgeons or hospitals, and supports anticipation of transfusion needs. Different formulas have been proposed, but none of them has been validated for accuracy, precision and linearity against a cohort with precisely measured hemoglobin loss and, possibly for that reason, none has established itself as a gold standard. We sought to identify the minimal dataset needed to generate reasonably precise and accurate hemoglobin loss prediction tools, and to derive and validate an estimation formula.
Methods: Routinely available clinical and laboratory data from a cohort of 401 healthy individuals with controlled hemoglobin loss between 29 and 233 g were extracted from medical charts. Supervised learning algorithms were applied to identify a minimal data set and to generate and validate a formula for calculation of hemoglobin loss.
Results: Of the classical supervised learning algorithms applied, the linear and Ridge regression models performed at least as well as the more complex models. Because it is the most straightforward to analyze and to check for robustness, we proceeded with linear regression. Weight, height, sex and the hemoglobin concentrations before and on the morning after the intervention were sufficient to generate a formula for estimation of hemoglobin loss. The resulting model yields an R² of 53.2% with similar precision throughout the entire range of volumes and donor sizes, thereby meaningfully outperforming previously proposed medical models.
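The modelling step can be sketched as follows on synthetic data. Everything here — the coefficient structure, value ranges and noise — is invented for illustration; only the choice of predictors (weight, height, sex, hemoglobin before/after) and the use of ordinary linear regression come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 401  # cohort size as in the study; all values below are synthetic
weight = rng.normal(75, 12, n)         # kg
height = rng.normal(172, 9, n)         # cm
sex = rng.integers(0, 2, n).astype(float)
hb_before = rng.normal(145, 12, n)     # g/L
hb_after = hb_before - rng.uniform(5, 40, n)   # morning after intervention

# Invented ground truth: loss grows with the Hb drop scaled by body size
hb_loss = 0.07 * weight * (hb_before - hb_after) / 10 + rng.normal(0, 5, n)

X = np.column_stack([weight, height, sex, hb_before, hb_after, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, hb_loss, rcond=None)
pred = X @ coef
r2 = 1 - ((hb_loss - pred) ** 2).sum() / ((hb_loss - hb_loss.mean()) ** 2).sum()
```

The fitted coefficient vector is exactly the kind of closed-form estimation formula the study derives, which is what makes linear regression easy to audit for robustness.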
Conclusions: The resulting formula will allow objective benchmarking of surgical blood loss, enabling informed decision making as to the need for pre-operative type-and-cross only vs. reservation of packed red cell units, depending on a patient’s anemia tolerance, and thus contributing to resource management.
The novel coronavirus (SARS-CoV-2), identified in China at the end of December 2019 and causing the disease COVID-19, has meanwhile led to outbreaks all over the globe, with about 2.2 million confirmed cases and more than 150,000 deaths as of April 17, 2020 [37]. In view of the most recent information on testing activity [32], we present here an update of our initial work [4]. In this work, mathematical models have been developed to study the spread of COVID-19 among the population in Germany and to assess the impact of non-pharmaceutical interventions. Systems of differential equations of SEIR type are extended here to account for undetected infections, as well as for stages of infection and age groups. The models are calibrated on data until April 5; data from April 6 to 14 are used for model validation. We simulate different possible strategies for the mitigation of the current outbreak, slowing down the spread of the virus and thus reducing the peak in daily diagnosed cases, the demand for hospitalization or intensive care unit admissions, and eventually the number of fatalities. Our results suggest that a partial (and gradual) lifting of introduced control measures could soon be possible if accompanied by further increased testing activity, strict isolation of detected cases and reduced contact to risk groups.
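A minimal SEIR backbone of such models can be sketched as follows. The papers' actual systems add undetected cases, infection stages and age groups; all parameter values here are illustrative only.

```python
def seir(beta, sigma, gamma, s0, e0, i0, r0, days, dt=0.1):
    """Forward-Euler integration of the basic SEIR system:
    S' = -beta*S*I/N,  E' = beta*S*I/N - sigma*E,
    I' = sigma*E - gamma*I,  R' = gamma*I."""
    s, e, i, r = float(s0), float(e0), float(i0), float(r0)
    n = s + e + i + r
    for _ in range(int(days / dt)):
        new_e = beta * s * i / n * dt   # new exposures
        new_i = sigma * e * dt          # exposed become infectious
        new_r = gamma * i * dt          # infectious recover or die
        s -= new_e
        e += new_e - new_i
        i += new_i - new_r
        r += new_r
    return s, e, i, r

# Contact reduction (smaller beta) flattens and shrinks the outbreak
unmitigated = seir(0.6, 1 / 5, 1 / 7, 82e6, 0, 1000, 0, 180)
mitigated = seir(0.2, 1 / 5, 1 / 7, 82e6, 0, 1000, 0, 180)
```

Comparing the final removed compartment of the two runs reproduces, in miniature, the kind of intervention comparison the paper performs with its far richer model.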
This thesis presents a first-of-its-kind phenomenological framework that formally describes the development of acquired epilepsy and the role of the neuro-immune axis in this development. Formulated as a system of nonlinear differential equations, the model describes the interaction of processes such as neuroinflammation, blood-brain barrier disruption, neuronal death, circuit remodeling, and epileptic seizures. The model allows for the simulation of epilepsy development courses caused by a variety of neurological injuries. The simulation results are in agreement with experimental findings from three distinct animal models of epileptogenesis. Simulations capture injury-specific temporal patterns of seizure occurrence, neuroinflammation, blood-brain barrier leakage, and progression of neuronal death. In addition, the model provides insights into phenomena related to epileptogenesis such as the emergence of paradoxically long time scales of disease development after injury, the dose-dependence of epileptogenesis features on injury severity, and the variability of clinical outcomes in subjects exposed to identical injury. Moreover, the developed framework allows for the simulation of therapeutic interventions, which provides insights into the injury-specificity of prominent intervention strategies. Thus, the model can be used as an in silico tool for the generation of testable predictions, which may aid pre-clinical research for the development of epilepsy treatments.
In the recent past, huge progress has been made in the field of Artificial Intelligence. Since the rise of neural networks, astonishing new frontiers have continuously been discovered. The development is so fast that, overall, no major technical limits are in sight. Digitization has thus expanded from its base in academia and industry to such an extent that it is now prevalent in politics, the mass media, and even the popular arts. The DFG-funded project Specialized Information Service for Biodiversity Research and the BMBF-funded project Linked Open Tafsir can be placed exactly within that overall development. Both projects aim to build an intelligent, up-to-date, modern research infrastructure on biodiversity and theological studies for scholars working in these respective fields of historical science. Starting from digitized German and Arabic historical literature containing hitherto unavailable, valuable knowledge on biodiversity and theological studies, this dissertation at its core aims to incorporate state-of-the-art Machine Learning methods for analyzing natural-language texts in low-resource languages and to enable foundational Natural Language Processing tasks on them, such as Sentence Boundary Detection, Named Entity Recognition, and Topic Modeling. This ultimately paves the way for new scientific discoveries in the historical disciplines of the natural sciences and humanities. By enriching the landscape of historical low-resource languages with valuable annotation data, our work becomes part of the greater movement of digitizing society, thus allowing people to focus on the things that really matter in science and industry.
We provide a Hopf boundary lemma for the regional fractional Laplacian (−Δ)^s_Ω, with Ω ⊂ ℝ^N a bounded open set. More precisely, given u a pointwise or weak super-solution of the equation (−Δ)^s_Ω u = c(x)u in Ω, we show that the ratio u(x)/(dist(x, ∂Ω))^{2s−1} is strictly positive as x approaches the boundary ∂Ω of Ω. We also prove a strong maximum principle for distributional super-solutions.
The emergence of digital networks can be traced back to the constant development and transformation of new information technologies.
This structural change leads to extremely complex systems in many different areas of life.
There is therefore an increased need to investigate and understand the essential underlying properties of real-world networks.
In this context, network analysis is used as a means of studying networks and represents observed structures with the help of mathematical models.
Here, parameterizable random graphs are typically used to enable a systematic experimental evaluation of algorithms and data structures.
In view of the growing amount of information, many aspects of network analysis are data-driven and rely on efficient algorithms for their interpretation.
Algorithmic solutions must therefore carefully take into account both the structural properties of the input and the characteristics of the underlying machines that execute them.
Generating and analyzing massive networks is accordingly a demanding task in itself.
This thesis therefore provides algorithmic solutions for the generation and analysis of massive graphs.
To this end, we develop algorithms for generating graphs with prescribed degree sequences, for computing the connected components of massive graphs, and for certifying graph recognition on instances that exceed the size of main memory.
Our algorithms and implementations are practically efficient for various machine models and offer sequential, shared-memory parallel, and/or I/O-efficient solutions.
Antimicrobial-resistant infections arise as a consequence of evolutionary mechanisms within microbes that protect them from the effects of antimicrobials. The frequent occurrence of resistant infections poses a global public health threat, as their control has become challenging despite many efforts. The dynamics of such infections are driven by processes at multiple levels. For a long time, mathematical models have proved valuable for unravelling complex mechanisms in the dynamics of infections. In this thesis, we focus on mathematical approaches to modelling the development and spread of resistant infections at the between-host (population-wide) and within-host (individual) levels.
Within an individual host, switching between treatments has been identified as one of the methods that can be employed for the gradual eradication of resistant strains in the long term. With this as motivation, we study the problem using dynamical systems and notions from control theory. We present a model based on deterministic logistic differential equations which captures the general dynamics of microbial resistance inside an individual host. Fundamentally, this model describes the spread of resistant infections whilst accounting for evolutionary mutations observed in resistant pathogens, capturing them in mutation matrices. We extend this model to explore the implications of therapy switching from a control-theoretic perspective by using switched systems and developing control strategies with the goal of reducing the appearance of drug-resistant pathogens within the host.
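The switched-systems idea above can be sketched as follows: several strains grow logistically under a shared carrying capacity, a mutation matrix couples them, and periodically switching the active drug changes the per-strain kill rates. All rates, the mutation structure, and the periodic switching rule below are illustrative assumptions, not the thesis's calibrated model or control law.

```python
import numpy as np

def step(x, growth, M, kill, capacity=1.0, dt=0.01):
    """Forward-Euler step: logistic growth + mutation flow + drug action."""
    total = x.sum()
    dx = growth * x * (1 - total / capacity)   # shared carrying capacity
    dx = dx + M @ x                            # mutation between strains
    dx = dx - kill * x                         # kill rates of the active drug
    return np.maximum(x + dt * dx, 0.0)        # densities stay non-negative

def simulate(drugs, period=200, steps=2000):
    """Cycle through `drugs`, switching the active therapy every `period` steps."""
    growth = np.array([1.0, 0.9, 0.8])          # assumed: wild type grows fastest
    # Mutation matrix: columns sum to zero (outflow on the diagonal).
    M = 1e-4 * np.array([[-2.0, 1.0, 1.0],
                         [1.0, -2.0, 1.0],
                         [1.0, 1.0, -2.0]])
    kills = {0: np.array([2.0, 0.1, 1.5]),      # drug 0 barely touches strain 1
             1: np.array([2.0, 1.5, 0.1])}      # drug 1 barely touches strain 2
    x = np.array([0.5, 0.01, 0.01])             # wild type + two rare mutants
    for t in range(steps):
        drug = drugs[(t // period) % len(drugs)]
        x = step(x, growth, M, kills[drug])
    return x
```

Under these assumed rates, monotherapy with drug 0 lets the strain it barely affects take over, while periodic switching keeps both resistant strains suppressed, which is the qualitative effect the control strategies aim at.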
At the between-host level, we use compartmental models to describe the transmission of infection between multiple individuals in a population. In particular, we make a case study of the evolution and spread of the novel coronavirus (SARS-CoV-2) pandemic. So far, vaccination remains a critical component in the eventual solution to this public health crisis. However, as with many other pathogens, vaccine resistant variants of the virus have been a major concern in control efforts by governments and all stakeholders. Using network theory, we investigate the spread and transmission of the disease on social networks by compartmentalising and studying the progression of the disease in each compartment, considering both the original virus strain and one of its highly transmissible vaccine-resistant mutant strains. We investigate these dynamics in the presence of vaccinations and other interventions. Although vaccinations are of absolute importance during viral outbreaks, resistant variants coupled with population hesitancy towards vaccination can lead to further spread of the virus.
We give theorems about asymptotic normality of general additive functionals on patricia tries, derived from results on tries. These theorems are applied to show asymptotic normality of the distribution of random fringe trees in patricia tries. Formulas for asymptotic mean and variance are given. The proportion of fringe trees with k keys is asymptotically, ignoring oscillations, given by (1 − ρ(k))/((H + J)k(k − 1)), with the source entropy H, an entropy-like constant J that equals H in the binary case, and an exponentially decreasing function ρ(k). Another application gives asymptotic normality of the independence number and the number of k-protected nodes.
We thoroughly study the properties of conically stable polynomials and imaginary projections. A multivariate complex polynomial is called stable if it is nonzero whenever all coordinates of the respective argument have a positive imaginary part. In this dissertation we consider the generalized notion of K-stability. A multivariate complex polynomial is called K-stable if it is nonzero whenever the imaginary part of the respective argument lies in the relative interior of the cone K. We study connections to various other objects, including imaginary projections, as well as preservers and combinatorial criteria for conically stable polynomials.
In particle collider experiments, elementary particle interactions with large momentum transfer produce quarks and gluons (known as partons) whose evolution is governed by the strong force, as described by the theory of quantum chromodynamics (QCD) [1]. These partons subsequently emit further partons in a process that can be described as a parton shower [2], which culminates in the formation of detectable hadrons. Studying the pattern of the parton shower is one of the key experimental tools for testing QCD. This pattern is expected to depend on the mass of the initiating parton, through a phenomenon known as the dead-cone effect, which predicts a suppression of the gluon spectrum emitted by a heavy quark of mass mQ and energy E, within a cone of angular size mQ/E around the emitter [3]. Previously, a direct observation of the dead-cone effect in QCD had not been possible, owing to the challenge of reconstructing the cascading quarks and gluons from the experimentally accessible hadrons. We report the direct observation of the QCD dead cone by using new iterative declustering techniques [4,5] to reconstruct the parton shower of charm quarks. This result confirms a fundamental feature of QCD. Furthermore, the measurement of a dead-cone angle constitutes a direct experimental observation of the non-zero mass of the charm quark, which is a fundamental constant in the standard model of particle physics.
People can describe spatial scenes with language and, vice versa, create images based on linguistic descriptions. However, current systems do not even come close to matching the complexity of humans when it comes to reconstructing a scene from a given text. Even the ever-advancing development of better and better Transformer-based models has not been able to achieve this so far. This task, the automatic generation of a 3D scene based on an input text, is called text-to-3D scene generation. The key challenges, and the focus of this dissertation, relate to the following topics:
(a) Analyses of how well current language models understand spatial information, how static embeddings compare, and whether they can be improved by anaphora resolution.
(b) Automated resource generation for context expansion and grounding that can help in the creation of realistic scenes.
(c) Creation of a VR-based text-to-3D scene system that can be used as an annotation and active-learning environment, but can also be easily extended in a modular way with additional features to solve more contexts in the future.
(d) Analysis of existing practices and tools for digital and virtual teaching, learning, and collaboration, as well as of the conditions and strategies in the context of VR.
In the first part of this work, we showed that static word embeddings do not benefit significantly from pronoun substitution. We explain this result by the loss of contextual information, the reduction in the relative occurrence of rare words, and the absence of pronouns to be substituted. However, we were also able to show that both static and contextualizing language models appear to encode object knowledge, but require a sophisticated apparatus to retrieve it. The models themselves, in combination with the measures, differ greatly in the amount of knowledge they allow to be extracted.
Classifier-based variants perform significantly better than the unsupervised methods from bias research, but this is also due to overfitting. The resources generated for this evaluation are later also an important component of point three.
In the second part, we present AffordanceUPT, a modularization of UPT trained on the HICO-DET dataset, which we have extended with Gibsonian/telic annotations. We then show that AffordanceUPT can effectively make the Gibsonian/telic distinction and that the model learns other correlations in the data in order to make such distinctions (e.g., the presence of hands in the image), which have important implications for grounding images to language.
The third part first presents a VR project to support spatial annotation according to IsoSpace. The direct spatial visualization and the immediate interaction with the 3D objects are intended to make labeling more intuitive and thus easier. The project was later incorporated as part of the Semantic Scene Builder (SeSB). The project itself in turn relies on the Text2SceneVR presented here for generating spatial hypertext, which in turn is based on the VAnnotatoR. Finally, we introduce Semantic Scene Builder (SeSB), a VR-based text-to-3D scene framework using the Semantic Annotation Framework (SemAF) as a scheme for annotating semantic relations. It integrates a wide range of tools and resources by utilizing SemAF and UIMA as a unified data structure to generate 3D scenes from textual descriptions, and it also supports annotations. When evaluating SeSB against another state-of-the-art tool, we found that our approach not only performed better but also allowed a wider variety of scenes to be modeled. The final part reviews existing practices and tools for digital and virtual teaching, learning, and collaboration, as well as the conditions and strategies needed to make the most of technological opportunities in the future.
The electrical and computational properties of neurons in our brains are determined by a rich repertoire of membrane-spanning ion channels and elaborate dendritic trees. However, the precise reason for this inherent complexity remains unknown. Here, we generated large stochastic populations of biophysically realistic hippocampal granule cell models comparing those with all 15 ion channels to their reduced but functional counterparts containing only 5 ion channels. Strikingly, valid parameter combinations in the full models were more frequent and more stable in the face of perturbations to channel expression levels. Scaling up the numbers of ion channels artificially in the reduced models recovered these advantages confirming the key contribution of the actual number of ion channel types. We conclude that the diversity of ion channels gives a neuron greater flexibility and robustness to achieve target excitability.
The 𝒮-cone provides a common framework for cones of polynomials or exponential sums which establish non-negativity upon the arithmetic-geometric inequality, in particular for sums of non-negative circuit polynomials (SONC) or sums of arithmetic-geometric exponentials (SAGE). In this paper, we study the 𝒮-cone and its dual from the viewpoint of second-order representability. Extending results of Averkov and of Wang and Magron on the primal SONC cone, we provide explicit generalized second-order descriptions for rational 𝒮-cones and their duals.
In the human brain, the incoming light to the retina is transformed into meaningful representations that allow us to interact with the world. In a similar vein, the RGB pixel values are transformed by a deep neural network (DNN) into meaningful representations relevant to solving a computer vision task it was trained for. Therefore, in my research, I aim to reveal insights into the visual representations in the human visual cortex and DNNs solving vision tasks.
In the previous decade, DNNs have emerged as the state-of-the-art models for predicting neural responses in the human and monkey visual cortex. Research has shown that training on a task related to a brain region’s function leads to better predictivity than a randomly initialized network. Based on this observation, we proposed that we can use DNNs trained on different computer vision tasks to identify functional mapping of the human visual cortex.
To validate our proposed idea, we first investigate a brain region, the occipital place area (OPA), using DNNs trained on a scene parsing task and a scene classification task. From previous investigations of OPA's functions, we knew that it encodes navigational affordances, which require spatial information about the scene. Therefore, we hypothesized that OPA's representation should be closer to a scene parsing model than to a scene classification model, as the scene parsing task explicitly requires spatial information about the scene. Our results showed that scene parsing models had representations closer to OPA than scene classification models, thus validating our approach.
We then selected multiple DNNs performing a wide range of computer vision tasks, from low-level tasks such as edge detection, through 3D tasks such as surface normal estimation, to semantic tasks such as semantic segmentation. We compared the representations of these DNNs with all the regions in the visual cortex, thus revealing the functional representations of different regions of the visual cortex. Our results converged strongly with previous investigations of these brain regions, validating the feasibility of the proposed approach for finding functional representations of the human brain. Our results also provided new insights into under-investigated brain regions, which can serve as starting hypotheses and promote further investigation into those brain regions.
We applied the same approach to find representational insights about the DNNs themselves. A DNN usually consists of multiple layers, with each layer performing a computation leading to the final layer, which performs the prediction for a given task. Training on different tasks could lead to very different representations. Therefore, we first investigate at which stage the representations in DNNs trained on different tasks start to differ. We further investigate whether DNNs trained on similar tasks lead to similar representations and DNNs trained on dissimilar tasks to more dissimilar ones. We selected the same set of DNNs used in the previous work, trained on the Taskonomy dataset on a diverse range of 2D, 3D, and semantic tasks. Then, given a DNN trained on a particular task, we compared the representations of multiple layers to the corresponding layers in the other DNNs. From this analysis, we aimed to reveal where in the network architecture task-specific representations become prominent. We found that task specificity increases as we go deeper into the DNN architecture and that similar tasks start to cluster in groups. We also found that the grouping obtained using representational similarity was highly correlated with a grouping based on transfer learning, suggesting an interesting application of the approach to model selection in transfer learning.
During the previous works, several new measures were introduced to compare DNN representations. We therefore identified the commonalities between the different measures and unified them into a single framework referred to as duality diagram similarity. This work opens up new possibilities for similarity measures to understand DNN representations. While demonstrating a much higher correlation with transfer learning than previous state-of-the-art measures, we extend the framework to understanding layer-wise representations of models trained on the ImageNet and Places datasets using different tasks, and demonstrate its applicability to layer selection for transfer learning.
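Duality diagram similarity itself is not reproduced here; as a generic illustration of how two layers' representations can be compared on the same stimuli, the sketch below implements linear Centered Kernel Alignment (CKA), a widely used representational similarity measure, not the framework from this work.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representations.

    X, Y: (n_samples, n_features) activation matrices recorded on the
    same stimuli. Returns a similarity score in [0, 1].
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den
```

A useful property for comparing layers of different networks is that linear CKA is invariant to orthogonal transformations of the feature space: a rotated copy of a representation scores 1, while unrelated random representations score near zero.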
In all the previous works, we used task-specific DNN representations to understand the representations in the human visual cortex and in other DNNs. We were able to interpret our findings in terms of computer vision tasks such as edge detection, semantic segmentation, and depth estimation; however, we were not able to map the representations to human-interpretable concepts. Therefore, in our most recent work, we developed a new method that associates individual artificial neurons with human-interpretable concepts.
Overall, the works in this thesis revealed new insights into the representation of the visual cortex and DNNs...
Polarization of Λ and ¯Λ hyperons along the beam direction in Pb-Pb collisions at √sNN=5.02 TeV
(2022)
The polarization of the Λ and ¯Λ hyperons along the beam (z) direction, Pz, has been measured in Pb-Pb collisions at √sNN = 5.02 TeV recorded with ALICE at the Large Hadron Collider (LHC). The main contribution to Pz comes from elliptic flow-induced vorticity and can be characterized by the second Fourier sine coefficient Pz,s2 = ⟨Pz sin(2φ − 2Ψ2)⟩, where φ is the hyperon azimuthal emission angle and Ψ2 is the elliptic flow plane angle. We report the measurement of Pz,s2 for different collision centralities and, in the 30%–50% centrality interval, as a function of the hyperon transverse momentum and rapidity. The Pz,s2 is positive, similarly to that measured by the STAR Collaboration in Au-Au collisions at √sNN = 200 GeV, but with a somewhat smaller amplitude in semicentral collisions. This is the first experimental evidence of a nonzero hyperon Pz in Pb-Pb collisions at the LHC. The comparison of the measured Pz,s2 with hydrodynamic model calculations shows sensitivity to the competing contributions from thermal and the recently found shear-induced vorticity, as well as to whether the polarization is acquired in the quark-gluon plasma or the hadronic phase.
In this thesis, we cover two intimately related objects in combinatorics, namely random constraint satisfaction problems and random matrices. First, we solve a classic constraint satisfaction problem, 2-SAT, using the graph structure and a message passing algorithm called Belief Propagation. We also explore another message passing algorithm called Warning Propagation and prove a useful result that can be employed to analyze various types of random graphs. In particular, we use Warning Propagation to study a Bernoulli sparse parity matrix and reveal a unique phase transition regarding replica symmetry. Lastly, we use variational methods and a version of the local limit theorem to prove a sufficient condition for a general random matrix to have full rank.
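The thesis analyses 2-SAT via Belief Propagation; as a self-contained companion, the sketch below instead uses the classical implication-graph criterion, which decides the same problem: a 2-CNF formula is satisfiable iff no variable lies in the same strongly connected component as its negation.

```python
# 2-SAT via the implication graph and Kosaraju's SCC algorithm.
# Each clause (a or b) yields the implications (not a -> b), (not b -> a).

def satisfiable(n_vars, clauses):
    """clauses: list of pairs of nonzero ints; k means x_k, -k means not x_k."""
    def node(lit):                       # literal -> graph node index
        v = abs(lit) - 1
        return 2 * v + (0 if lit > 0 else 1)
    N = 2 * n_vars
    adj = [[] for _ in range(N)]
    for a, b in clauses:
        adj[node(-a)].append(node(b))
        adj[node(-b)].append(node(a))
    # Pass 1: iterative DFS to record finish order.
    order, seen = [], [False] * N
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            u, i = stack.pop()
            if i < len(adj[u]):
                stack.append((u, i + 1))
                v = adj[u][i]
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, 0))
            else:
                order.append(u)
    # Pass 2: DFS on the reverse graph in reverse finish order -> SCC labels.
    radj = [[] for _ in range(N)]
    for u in range(N):
        for v in adj[u]:
            radj[v].append(u)
    comp, c = [-1] * N, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            u = stack.pop()
            for v in radj[u]:
                if comp[v] == -1:
                    comp[v] = c
                    stack.append(v)
        c += 1
    # Satisfiable iff no variable shares an SCC with its negation.
    return all(comp[2 * v] != comp[2 * v + 1] for v in range(n_vars))
```

This linear-time criterion gives a ground truth against which message-passing heuristics such as Belief Propagation can be checked on small instances.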
The starting point of this research is learners' use of gestures in mathematical interactions. We investigate the extent to which gestures are part of the mathematical negotiation process. The central research concern is thus the reconstruction of a potentially subject-specific meaning of gesture use in mathematics learning.
The work is theoretically framed by findings from psychological-linguistic gesture research on the systematic description of gesture in interplay with simultaneously uttered speech (McNeill, 1992; Kendon, 2004). Selected research on gesture in mathematics learning is also examined (Arzarello, 2006; Wille, 2020; Kiesow, 2016). The interaction theory of mathematics education grounds the social-constructivist notion of learning (Krummheuer, 1992). Selected aspects of C. S. Peirce's semiotics provide a theoretical foundation for the notion of the sign and for the core of mathematical activity, understood as diagrammatic work (Peirce, 1931, CP 1.54 and 1932, CP 2.228).
Of particular importance for the present research is Fricke's (2007, 2012) linguistic approach of code integration and code manifestation of speech-accompanying gestures in the language system, in connection with Peirce's notion of the diagram. This perspective enables a theoretical grounding of the initially empirically observable multimodality of learners' modes of expression when doing mathematics together. Peirce's notion of the diagram serves here to reconstruct a systemic relevance of gestures for doing mathematics: certain gestures can be described semiotically as mathematical signs and have a potentially constitutive function for the learners' diagrammatic work. The overarching research question is: How do primary school students use gesture and speech, especially in their interplay, to introduce their mathematical ideas into the interactive negotiation process and to take them up over the course of the interaction, possibly developing them further or discarding them? In its differentiation, the analysis focuses on the function of the gestures used and on the reconstruction of gestures potentially shared by the interacting parties.
Methodologically, the research belongs to qualitative social research (Bohnsack, 2008) and to interpretative classroom research in mathematics education (Krummheuer & Naujok, 1999). Examples from mathematical interaction situations are analyzed in which pairs of second graders work on a mathematical problem from combinatorics and geometry. A transcript score developed specifically in line with the theory serves to process the video data. Two hierarchically structured analysis procedures are used: text-based interaction analysis (Krummheuer, 1992) and the graphically oriented semiotic analysis (Schreiber, 2010) in a further development of the semiotic process maps (Huth, 2014).
The central research results are 1) the functional and formal flexibility of gesture use in the learners' diagrammatic work, 2) the reconstruction of mode interfaces of gestures with other modes of expression in function, interactional attribution of meaning, and chronology, and 3) the frequent use of gestures as the learners' mode of choice in mathematical interactions. Gestures are immediately available without preconditions, offer functional and formal flexibility in mathematical engagement, and can (temporarily) take over the functions of other modes. A constitutive and subject-specific significance of gesture for the learners' mathematical-diagrammatic activity becomes apparent. From this, the thesis finally develops the double continuum of gestures for mathematics learning. Along the dimension of the function of gesture use and the dimension of the object reference of the gesture form, it shows the diversity of gesture functions in the learners' joint diagrammatic work and provides insight into the gesture forms used.
The research reveals the need to take gestures into account in the didactic planning and design of mathematics teaching and in research on and diagnostics of learners' mathematical development. Gestures in mathematical interactions are not a mere accessory to the utterance but a subject-relevant mode with respect to mathematics learning. The use of gesture makes it possible to create diagrams in an instant and, looking ahead, opens up research into their significance for mathematical teaching-learning processes.
The literature cited in this summary can be found in the bibliography of the submitted thesis.
AI-based computer vision systems play a crucial role in environment perception for autonomous driving. Although the development of self-driving systems has been pursued for multiple decades, it is only recently that breakthroughs in Deep Neural Networks (DNNs) have led to their widespread application in perception pipelines, which are becoming more and more sophisticated. However, with this rising trend comes the need for a systematic safety analysis to evaluate the DNN's behavior in difficult scenarios as well as to identify the various factors that cause misbehavior in such systems. This work aims to deliver a crucial contribution to the sparse literature on the systematic analysis of Performance Limiting Factors (PLFs) for DNNs by investigating the task of pedestrian detection in urban traffic from a monocular camera mounted on an autonomous vehicle. To investigate the common factors that lead to DNN misbehavior, six commonly used state-of-the-art object detection architectures and three detection tasks are studied using a new large-scale synthetic dataset and a smaller real-world dataset for pedestrian detection. The systematic analysis includes 17 factors from the literature and four novel factors that are introduced as part of this work. Each of the 21 factors is assessed based on its influence on the detection performance and on whether it can be considered a PLF. To support the evaluation of the detection performance, a novel and task-oriented Pedestrian Detection Safety Metric (PDSM) is introduced, which is specifically designed to aid in the identification of individual factors that contribute to DNN failure. This work further introduces a training approach for F1-score maximization whose purpose is to ensure that the DNNs are assessed at their highest performance. Moreover, a new occlusion estimation model is introduced to replace the missing pedestrian occlusion annotations in the real-world dataset.
Based on a qualitative analysis of the correlation graphs that visualize the correlation between the PLFs and the detection performance, this study identified 16 of the initial 21 factors as being PLFs for DNNs out of which the entropy, the occlusion ratio, the boundary edge strength, and the bounding box aspect ratio turned out to be most severely affecting the detection performance. The findings of this study highlight some of the most serious shortcomings of current DNNs and pave the way for future research to address these issues.
Non-fungible tokens and blockchain technology have gained more and more popularity over the past year. As with any new technology, however, the question arises in which areas it can usefully be applied.
The aim of this thesis is to answer whether non-fungible tokens and blockchain technology have a sensible application in the area of academic certificates.
To answer this question, reasons for using non-fungible tokens are weighed against disadvantages, and possible solutions for potential risks are identified. In addition, an ERC-721 token contract for academic certificates was developed independently using Solidity.
The thesis shows that blockchain-based academic certificates above all support student mobility, reduce the administrative effort of issuing and verifying diplomas, and counteract the forgery of degrees. Furthermore, the risks and disadvantages considered can be avoided by institutions joining together in a consortium blockchain.
The successful development of the ERC-721 token contract "MetaDip" shows a potential implementation for the digitization of diplomas and demonstrates that academic certificates based on non-fungible tokens are already technically feasible today.
The thesis shows that non-fungible tokens and blockchain technology offer a promising future for academic certificates and are already being implemented by individual institutions. However, some precautions still need to be taken before a broad implementation of blockchain-based academic certificates is possible.
In this paper, we introduce an approach for future frames prediction based on a single input image. Our method is able to generate an entire video sequence based on the information contained in the input frame. We adopt an autoregressive approach in our generation process, i.e., the output from each time step is fed as the input to the next step. Unlike other video prediction methods that use “one shot” generation, our method is able to preserve much more details from the input image, while also capturing the critical pixel-level changes between the frames. We overcome the problem of generation quality degradation by introducing a “complementary mask” module in our architecture, and we show that this allows the model to only focus on the generation of the pixels that need to be changed, and to reuse those that should remain static from its previous frame. We empirically validate our methods against various video prediction models on the UT Dallas Dataset, and show that our approach is able to generate high quality realistic video sequences from one static input image. In addition, we also validate the robustness of our method by testing a pre-trained model on the unseen ADFES facial expression dataset. We also provide qualitative results of our model tested on a human action dataset: The Weizmann Action database.
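The autoregressive rollout with a complementary mask described above can be sketched in a few lines: the predictor outputs a candidate next frame and a per-pixel change mask, and only masked pixels are overwritten while the rest are reused from the previous frame. The predictor interface, names, and shapes here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def masked_update(prev_frame, generated, mask):
    """Complementary blend: take `generated` where mask ~ 1,
    keep `prev_frame` where mask ~ 0."""
    return mask * generated + (1.0 - mask) * prev_frame

def rollout(first_frame, predictor, steps):
    """Autoregressive generation: each output feeds the next step.

    `predictor` is an assumed callable frame -> (generated, mask),
    standing in for the trained network.
    """
    frames, frame = [], first_frame
    for _ in range(steps):
        generated, mask = predictor(frame)
        frame = masked_update(frame, generated, mask)
        frames.append(frame)
    return frames
```

Because unmasked pixels are copied verbatim from the previous frame, static background detail from the input image survives arbitrarily long rollouts, which is the motivation given above for the complementary mask.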
Tasks are a key resource in the process of teaching and learning mathematics, which is why task design continues to be one of the main research issues in mathematics education. Different settings can influence the principles underlying the formulation of tasks, and so does the outdoor context. Specifically, a math trail can be a privileged context, known to promote positive attitudes and additional engagement in the learning of mathematics, confronting students with a sequence of real-life tasks related to a particular mathematical theme. Recently, mobile devices and apps such as MathCityMap have been recognized as an important resource to facilitate the extension of the classroom to the outdoors. The study reported in this paper intends to identify design principles for mobile theme-based math trails (TBT) that result in rich learning experiences in early algebraic thinking. A design-based research methodology is used, through a qualitative approach, to develop and refine design principles for TBT about Sequences and Patterns. The iterative approach is described by cycles with the intervention of the researchers, pre-service and in-service teachers, and students of the targeted school levels. The results are discussed taking into account previous research and data collected along the cycles, leading to the development of general design principles for TBT tasks.
Existence of nonradial domains for overdetermined and isoperimetric problems in nonconvex cones
(2022)
In this work we address the question of the existence of nonradial domains inside a nonconvex cone for which a mixed boundary overdetermined problem admits a solution. Our approach is variational, and consists in proving the existence of nonradial minimizers, under a volume constraint, of the associated torsional energy functional. In particular we give a condition on the domain D on the sphere spanning the cone which ensures that the spherical sector is not a minimizer. Similar results are obtained for the relative isoperimetric problem in nonconvex cones.
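For orientation, the torsional energy functional referred to above has, in standard notation, the following schematic form for a domain \(\Omega\) inside the cone \(\Sigma\) spanned by \(D\) (a sketch, not a quotation; the precise function space encoding the mixed boundary conditions is as in the thesis):

```latex
E(\Omega) \;=\; \min\Big\{ \int_\Omega \Big( \tfrac12 |\nabla u|^2 - u \Big)\, dx
  \;:\; u \in H^1(\Omega),\ u = 0 \text{ on } \partial\Omega \cap \Sigma \Big\},
\qquad \Omega \subset \Sigma,\ |\Omega| = c,
```

minimized among domains of fixed volume \(c\); a nonradial minimizer then produces a nonradial domain for which the overdetermined problem admits a solution.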
The main task of modern large heavy-ion experiments such as CBM (FAIR), STAR (BNL) and ALICE (CERN) is a detailed study of the phase diagram of quantum chromodynamics (QCD) and of the quark-gluon plasma (QGP), the equation of state of matter at extremely high baryonic densities, and the transition from the hadronic phase of matter to the quark-gluon phase.
In the thesis, the missing mass method is developed for the reconstruction of short-lived particles with neutral particles among their decay products, together with its implementation in the form of fast algorithms and a software suite for practical application in heavy-ion physics experiments. Mathematical procedures implementing the method were developed and implemented within the KF Particle Finder package for the future CBM (FAIR) experiment, and subsequently adapted and applied to the processing and analysis of real data in the STAR (BNL) experiment.
The KF Particle Finder package is designed to reconstruct most signal particles from the physics program of the CBM experiment, including strange particles, strange resonances, hypernuclei, light vector mesons, charm particles and charmonium. The package includes searches for over a hundred decays of short-lived particles. This makes the KF Particle Finder a universal platform for short-lived particle reconstruction and physics analysis both online and offline.
The missing mass method has been proposed to reconstruct decays of short-lived charged particles when one of the daughter particles is neutral and is not registered in the detector system. The implementation of the missing mass method was integrated into the KF Particle Finder package to search for 18 decays with a neutral daughter particle.
Like all other algorithms of the KF Particle Finder package, the missing mass method is implemented with extensive use of vector (SIMD) instructions and is optimized for parallel operation on modern many-core high-performance computer clusters, which can include both processors and coprocessors. A set of algorithms implementing the method was tested on computers with tens of cores and showed high speed and practically linear scalability with respect to the number of cores involved.
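The lane-per-candidate pattern behind the SIMD implementation can be illustrated with NumPy arrays standing in for vector registers: the missing-mass formula is evaluated for many decay candidates in one pass. This is a hedged sketch with invented numbers, not the KF Particle Finder code:

```python
import numpy as np

def missing_mass(E_mother, p_mother, E_charged, p_charged):
    # Missing mass of the unobserved neutral daughter, evaluated for
    # many decay candidates at once: each array lane is one candidate,
    # mirroring SIMD lanes. Energies and momenta in GeV; illustrative
    # sketch only, not the experiment's code.
    E_nu = E_mother - E_charged
    p_nu = p_mother - p_charged                      # shape (n, 3)
    m2 = E_nu ** 2 - np.sum(p_nu ** 2, axis=-1)
    return np.sqrt(np.clip(m2, 0.0, None))

# One toy candidate: mother at rest with E = 2 GeV, charged daughter
# with E = 1 GeV carrying 0.6 GeV of momentum along x.
m = missing_mass(np.array([2.0]), np.zeros((1, 3)),
                 np.array([1.0]), np.array([[0.6, 0.0, 0.0]]))
```

Because every operation is element-wise over the candidate axis, the same code processes one candidate or millions, which is what makes the near-linear scaling over cores possible.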
It is extremely important, especially for the initial stage of the CBM experiment planned for 2025, to demonstrate already now, on real data, the reliability of the developed approach, as well as the high efficiency of the current implementation of both the entire KF Particle Finder package and its integral part, the missing mass method. Such an opportunity was provided by the FAIR Phase-0 program, which motivated the use in the STAR experiment of software packages originally developed for the CBM experiment.
Application of the method to real data of the STAR experiment shows very good results with a high signal-to-background ratio and a large significance value. The results demonstrate the reliability and high efficiency of the missing mass method in the reconstruction of both charged mother particles and their neutral daughter particles. Being an integral part of the KF Particle Finder package, now the main approach for reconstruction and analysis of short-lived particles in the STAR experiment, the missing mass method will continue to be used for the physics analysis in online and offline modes.
The high quality of the express data analysis results has earned them the status of preliminary physics results, which may be presented at international physics conferences and meetings on behalf of the STAR Collaboration.
Statistical shape models learn to capture the most characteristic geometric variations of anatomical structures given samples from their population. Accordingly, shape models have become an essential tool for many medical applications and are used in, for example, shape generation, reconstruction, and classification tasks. However, established statistical shape models require precomputed dense correspondence between shapes, often lack robustness, and ignore the global surface topology. This thesis presents a novel neural flow-based shape model that does not require any precomputed correspondence. The proposed model relies on continuous flows of a neural ordinary differential equation to model shapes as deformations of a template. To increase the expressivity of the neural flow and disentangle global, low-frequency deformations from the generation of local, high-frequency details, we propose to apply a hierarchy of flows. We evaluate the performance of our model on two anatomical structures, the liver and the distal femur. Our model outperforms state-of-the-art methods in providing an expressive and robust shape prior, as indicated by its generalization ability and specificity. Moreover, we demonstrate the effectiveness of our shape model on shape reconstruction tasks and find anatomically plausible solutions. Finally, we assess the quality of the emerging shape representation in an unsupervised setting and discriminate healthy from pathological shapes.
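The core idea of modeling shapes as flow-based deformations of a template can be caricatured with explicit Euler integration of a velocity field. In the thesis the velocity field is a learned neural network inside an ODE solver; here a fixed toy field stands in for it, so everything below is an invented illustration:

```python
import numpy as np

def flow_points(points, velocity, steps=10, dt=0.1):
    # Deform template points by integrating a velocity field with
    # explicit Euler steps: a toy stand-in for the continuous flow of
    # a neural ODE, where `velocity` would be a learned network
    # (hypothetical illustration, not the thesis's model).
    x = points.copy()
    for _ in range(steps):
        x = x + dt * velocity(x)
    return x

# Toy velocity field: rigid translation along the x-axis.
template = np.zeros((4, 3))
deformed = flow_points(template,
                       lambda x: np.broadcast_to([1.0, 0.0, 0.0], x.shape))
```

Because the flow is a continuous deformation of one template, point identities are preserved, which is why no precomputed correspondence between training shapes is needed.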
Debate topic expansion
(2022)
Given a debate topic, it is often useful to expand it, for several reasons: (1) the scope of the debate topic may be too narrow and we want to discuss more; (2) a debate topic is often related to other topics, and the discussion is incomplete if those are left out; (3) we may want to discuss a particular concept at the core of the debate topic. It is therefore worthwhile to build a model that finds expansions of a given topic.
In 2019, an IBM Research team proposed a method to expand the boundaries of a given debate topic and find expansion topics. Their paper distinguishes two types of topic expansion, consistent and contrastive; we focus on consistent expansions, defined as expansions that extend the topic in a positive, or at least neutral, direction.
The main objective of this paper is to follow and examine the steps of the IBM Research team's approach. Since the original work treats the model in English, we implement a topic expansion model with seven steps, including pattern extraction, filtering, and training, in another language (German) using a translator, compare the results of the different models, and propose a final German model.
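Two of the pipeline steps named above, pattern extraction and filtering, can be sketched as follows. The pattern, sentences, and threshold are invented for illustration; the actual pipeline also involves training, translation to German, and further steps:

```python
import re
from collections import Counter

# Hypothetical lexical pattern for consistent expansions ("X such as Y"
# suggests Y as a candidate related topic).
PATTERN = re.compile(r"such as ([\w ]+)")

def extract_candidates(sentences):
    # Collect expansion-topic candidates matched by the pattern.
    return [m.group(1).strip() for s in sentences for m in PATTERN.finditer(s)]

def filter_candidates(cands, min_count=2):
    # Keep only candidates that occur at least min_count times.
    counts = Counter(cands)
    return sorted({c for c in cands if counts[c] >= min_count})

topics = extract_candidates([
    "policy instruments such as carbon taxes",
    "market-based levies such as carbon taxes",
    "ideas such as degrowth",
])
kept = filter_candidates(topics)
```

A frequency filter of this kind removes one-off pattern matches before the surviving candidates are passed on to the trained ranking model.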
The anan project is a tool for fault finding in distributed high-performance computers. The novelty of the contribution is that well-known methods, already used successfully for debugging software and hardware, have been transferred to high-performance computing. As part of this work, a tool named anan was implemented that assists in fault finding. It can also be used as a more dynamic form of monitoring. Both use cases have been
tested.
The tool consists of two parts:
1. a part named anan, which is operated interactively by the user,
2. and a part named anand, which automatically collects the requested measurements and executes commands where necessary.
The anan part runs sensors (small pattern-driven algorithms) whose results are merged by anan. To a first approximation, anan can be described as a monitoring system that (1) can be reconfigured quickly and (2) can measure more complex values, going beyond correlations of simple time series.
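A sensor in this sense, a small algorithm over collected measurements, might look like the following sketch. The node names, numbers, and the outlier rule are invented for illustration and are not taken from anan:

```python
import statistics

def sensor_outliers(samples, threshold=2.0):
    # A toy "sensor": flag nodes whose load deviates from the cluster
    # median by more than `threshold` population standard deviations.
    # Illustrative sketch only; not the tool's actual code.
    values = list(samples.values())
    med = statistics.median(values)
    sd = statistics.pstdev(values) or 1.0
    return sorted(node for node, v in samples.items()
                  if abs(v - med) / sd > threshold)

# Measurements as they might be collected by the anand part.
loads = {"node01": 0.9, "node02": 1.0, "node03": 1.1, "node04": 9.5}
flagged = sensor_outliers(loads)
```

A value like this (which nodes deviate from the cluster) is exactly the kind of derived measurement that goes beyond correlating individual time series.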
In this thesis we discuss the group Out(Gal_K) of outer automorphisms of the absolute Galois group Gal_K of a p-adic number field K. Using results about the mapping class group of a surface S, as well as a result by Jannsen and Wingberg on the structure of the absolute Galois group Gal_K, we construct a large subgroup of Out(Gal_K) arising as images of certain Dehn twists on S.
Clothing modeling concerns the design of clothing for persons who can, for example, be depicted in scenes. The design draws on information from an underlying data source. Rendering scenes in which persons are depicted is fundamentally an interplay of complex subaspects. The plausibility of a modeled scene, or of modeled avatars, in the eye of the beholder is determined to a considerable extent by appropriately chosen clothing.
This thesis presents approaches and methods for clothing modeling based on text documents. It discusses ways to extract information from texts and use it for modeling.
To address the task, a contextual model known from machine learning is first trained and applied for multi-class classification. Subsequently, a dedicated knowledge resource dealing with the topic of clothing at the textual level is built and populated with extensive information from existing resources. The new resource is designed as a graph database, in which relations between the individual elements are created with the help of static models as well as a contextual model, the BERT model. Finally, building on the developed graph database, a program written in Python is presented that processes input texts, drawing on the information and relations within the graph database, and detects garments.
After the theoretical treatment of the developed approaches, the resulting findings are discussed and remaining problems encountered in addressing the task are pointed out. Finally, the thesis is summarized and suggestions for further work on this topic are presented.
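The detection step can be caricatured with a tiny in-memory graph: garment terms plus one hop of related terms. The terms and relations below are invented for illustration, whereas the thesis uses a populated graph database with BERT-derived relations:

```python
# Hypothetical mini "graph": garment -> related terms (one relation hop).
CLOTHING_GRAPH = {
    "coat": {"jacket", "parka"},
    "dress": {"gown", "skirt"},
}

def detect_clothing(text):
    # Return garments found in the text, either directly or via a term
    # reachable in one relation hop (toy sketch, not the thesis's code).
    tokens = set(text.lower().split())
    hits = set()
    for item, related in CLOTHING_GRAPH.items():
        if item in tokens or tokens & related:
            hits.add(item)
    return sorted(hits)

found = detect_clothing("She wore a red parka over a gown")
```

The real system replaces this lookup with graph queries and contextual embeddings, but the flow (text in, related garment nodes out) is the same.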
This thesis is concerned with the study of symmetry breaking phenomena for several different semilinear partial differential equations. Roughly speaking, this encompasses equations whose symmetries are not necessarily inherited by their solutions, which is particularly interesting for ground state solutions.
Reactive oxygen species are a class of naturally occurring, highly reactive molecules that change the structure and function of macromolecules. This can often lead to irreversible intracellular damage. Conversely, they can also cause reversible changes through post-translational modification of proteins which are utilized in the cell for signaling. Most of these modifications occur on specific cysteines. Which structural and physicochemical features contribute to the sensitivity of cysteines to redox modification is currently unclear. Here, I investigated the influence of protein structural and sequence features on the modifiability of proteins and specific cysteines therein using statistical and machine learning methods. I found several strong structural predictors for redox modification, such as a higher accessibility to the cytosol and a high number of positively charged amino acids in the close vicinity. I detected a high frequency of other post-translational modifications, such as phosphorylation and ubiquitination, near modified cysteines. Distribution of secondary structure elements appears to play a major role in the modifiability of proteins. Utilizing these features, I created models to predict the presence of redox modifiable cysteines in proteins, including human mitochondrial complex I, NKG2E natural killer cell receptors and proximal tubule cell proteins, and compared some of these predictions to earlier experimental results.
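As a toy illustration of such feature-based prediction, one can score a cysteine from a few of the predictors named above. The weights and feature names here are invented for illustration and are not the thesis's fitted model:

```python
def redox_score(accessibility, n_positive_nearby, n_other_ptms_nearby):
    # Linear score combining three of the predictors discussed above:
    # cytosolic accessibility (0..1), count of nearby positively charged
    # residues, and count of nearby other post-translational
    # modifications. Weights are invented, not fitted values.
    return (2.0 * accessibility
            + 0.5 * n_positive_nearby
            + 0.3 * n_other_ptms_nearby)

# A buried cysteine versus an exposed one in a favourable environment.
buried = redox_score(accessibility=0.1, n_positive_nearby=0, n_other_ptms_nearby=0)
exposed = redox_score(accessibility=0.9, n_positive_nearby=3, n_other_ptms_nearby=2)
```

A fitted statistical model replaces the hand-set weights with learned ones, but the principle of combining structural and sequence features into a modifiability score is the same.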