510 Mathematics
We consider a linear ill-posed equation in the Hilbert space setting. Multiple independent unbiased measurements of the right-hand side are available. A natural approach is to take the average of the measurements as an approximation of the right-hand side and to estimate the data error as the inverse of the square root of the number of measurements. We calculate the optimal convergence rate (as the number of measurements tends to infinity) under classical source conditions and introduce a modified discrepancy principle, which asymptotically attains this rate.
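As a worked illustration of the averaging step (the notation is ours, not necessarily the paper's): given $n$ independent unbiased measurements $y_1,\dots,y_n$ of the exact right-hand side $y^\dagger$ with per-measurement variance $\sigma^2 = \mathbb{E}\,\|y_i - y^\dagger\|^2$,

```latex
\[
  \bar y_n \;=\; \frac{1}{n}\sum_{i=1}^{n} y_i, \qquad
  \mathbb{E}\,\bar y_n \;=\; y^\dagger, \qquad
  \mathbb{E}\,\bigl\|\bar y_n - y^\dagger\bigr\|^2 \;=\; \frac{\sigma^2}{n},
\]
```

so the data error scales like $\delta_n = \sigma/\sqrt{n}$, which is the quantity against which a discrepancy principle can be calibrated as $n \to \infty$.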
For genus $g = 2i \geq 4$ and the length $g-1$ partition $\mu = (4,2,\dots,2,-2,\dots,-2)$ of $0$, we compute the first coefficients of the class of $\overline{D}(\mu)$ in $\operatorname{Pic}_{\mathbb{Q}}(\overline{R}_g)$, where $D(\mu)$ is the divisor consisting of pairs $[C,\eta] \in R_g$ with $\eta \cong \mathcal{O}_C(2x_1 + x_2 + \dots + x_{i-1} - x_i - \dots - x_{2i-1})$ for some points $x_1,\dots,x_{2i-1}$ on $C$. We further provide several enumerative results that will be used for this computation.
Between his arrival in Frankfurt in 1922 and his proof of his famous finiteness theorem for integral points in 1929, Siegel had no publications. He did, however, write a letter to Mordell in 1926 in which he explained a proof of the finiteness of integral points on hyperelliptic curves. Recognizing the importance of this argument (and Siegel's views on publication), Mordell sent the relevant extract to be published under the pseudonym "X".
The purpose of this note is to explain how to optimize Siegel's 1926 technique to obtain the following bound. Let $K$ be a number field, $S$ a finite set of places of $K$, and $f \in \mathcal{O}_{K,S}[t]$ monic of degree $d \geq 5$ with discriminant $\Delta_f \in \mathcal{O}_{K,S}^{\times}$. Then:
$$\#|\{(x,y) : x, y \in \mathcal{O}_{K,S},\ y^2 = f(x)\}| \;\leq\; 2^{\operatorname{rank} \operatorname{Jac}(C_f)(K)} \cdot O(1)^{d^3 \cdot ([K:\mathbb{Q}] + \#|S|)}.$$
This improves bounds of Evertse-Silverman and Bombieri-Gubler from 1986 and 2006, respectively.
The main point underlying our improvement is that, informally speaking, we insist on "executing the descents in the presence of only one root (and not three) until the last possible moment".
The present paper is concerned with the half-space Dirichlet problem $(P_c)$ [...] where $\mathbb{R}^N_+ := \{x \in \mathbb{R}^N : x_N > 0\}$ for some $N \geq 1$, and $p > 1$, $c > 0$ are constants. We analyse the existence, non-existence and multiplicity of bounded positive solutions to $(P_c)$. We prove that the existence and multiplicity of bounded positive solutions to $(P_c)$ depend in a striking way on the value of $c > 0$ and also on the dimension $N$. We find an explicit number $c_p \in (1, \sqrt{e})$, depending only on $p$, which determines the threshold between existence and non-existence. In particular, in dimensions $N \geq 2$, we prove that, for $0 < c < c_p$, problem $(P_c)$ admits infinitely many bounded positive solutions, whereas, for $c > c_p$, there are no bounded positive solutions to $(P_c)$.
In an earlier paper we proposed a recursive model for epidemics; in the present paper we generalize this model to include asymptomatic and unrecorded symptomatic people, which we call dark people (the dark sector). We call this the SEPARd model. A delay differential equation version of the model is added; it allows a better comparison with other models. We carry this out through a comparison with the classical SIR model and indicate why we believe that the SEPARd model may work better for Covid-19 than other approaches.
In the second part of the paper we explain how to deal with the data provided by the JHU; in particular, we explain how to derive central model parameters from the data. Other parameters, like the size of the dark sector, are less accessible and have to be estimated more roughly, at best from results of representative serological studies, which are available only for a few countries. We start our country studies with Switzerland, where such data are available. Then we apply the model to a collection of other countries: three European ones (Germany, France, Sweden) and the most stricken countries from three other continents (USA, Brazil, India). Finally we show that even the aggregated world data can be well represented by our approach.
At the end of the paper we discuss the use of the model. Perhaps the most striking application is that it allows a quantitative analysis of the influence of the time until people are sent to quarantine or hospital. This suggests that measures which shorten this time are a powerful tool for flattening the curves.
Correction to: Höllbacher, S., Wittum, G., A sharp interface method using enriched finite elements for elliptic interface problems. Numer. Math. 147, 783 (2021). DOI: 10.1007/s00211-021-01180-0.
We present an immersed boundary method for the solution of elliptic interface problems with discontinuous coefficients which provides a second-order approximation of the solution. The proposed method can be categorised as an extended or enriched finite element method. In contrast to other extended FEM approaches, the new shape functions are projected so as to satisfy the Kronecker-delta property with respect to the interface. The resulting combination of projection and restriction was already derived in Höllbacher and Wittum (TBA, 2019a) for application to particulate flows. The crucial benefits are the preservation of the symmetry and positive definiteness of the continuous bilinear operator. Moreover, no additional stabilisation terms are necessary. Furthermore, since our enrichment can be interpreted as adaptive mesh refinement, standard integration schemes can be applied to the cut elements. Finally, small cut elements do not impair the conditioning of the scheme, and we propose a simple procedure to ensure good conditioning independent of the location of the interface. Stability and convergence of the solution are proven, and numerical tests demonstrate the optimal order of convergence.
Several novel imaging and non-destructive testing technologies are based on reconstructing the spatially dependent coefficient in an elliptic partial differential equation from measurements of its solution(s). In practical applications, the unknown coefficient is often assumed to be piecewise constant on a given pixel partition (corresponding to the desired resolution), and only finitely many measurements can be made. This leads to the problem of inverting a finite-dimensional non-linear forward operator $F\colon D(F) \subseteq \mathbb{R}^n \to \mathbb{R}^m$, where evaluating $F$ requires one or several PDE solutions.
Numerical inversion methods require the implementation of this forward operator and its Jacobian. We show how to efficiently implement both using a standard FEM package and prove convergence of the FEM approximations to their true-solution counterparts. We present simple example codes for Comsol with the Matlab Livelink package, and numerically demonstrate the challenges that arise from non-uniqueness, non-linearity and instability. We also discuss monotonicity and convexity properties of the forward operator that arise for symmetric measurement settings.
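A minimal 1D sketch of such a forward operator and its Jacobian, written in plain NumPy rather than the paper's Comsol/Matlab setting (the toy problem, names and parameters are ours): the coefficient is piecewise constant on `n_pix` pixels, `F` returns point values of the linear FEM solution of $-(\sigma u')' = 1$ on $(0,1)$ with homogeneous Dirichlet conditions, and the Jacobian comes from the sensitivity equations, exploiting that the stiffness matrix depends linearly on the coefficient.

```python
# Toy 1D analogue (our own sketch, not the paper's code):
# F maps a piecewise-constant coefficient sigma (one value per pixel) to
# point values of the FEM solution u of -(sigma u')' = 1 on (0,1),
# u(0) = u(1) = 0, so F: R^{n_pix} -> R^{len(meas)}.
import numpy as np

n_pix, n_el = 4, 64                        # pixels and fine FEM elements
h = 1.0 / n_el
pix_of = np.arange(n_el) * n_pix // n_el   # pixel containing each element
meas = np.array([n_el // 4, n_el // 2, 3 * n_el // 4])  # measurement nodes

def stiffness(sigma):
    """Stiffness matrix for piecewise-linear FEM, interior nodes only."""
    K = np.zeros((n_el - 1, n_el - 1))
    for e in range(n_el):
        ke = sigma[pix_of[e]] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
        for a in range(2):
            for b in range(2):
                i, j = e + a - 1, e + b - 1   # global interior indices
                if 0 <= i < n_el - 1 and 0 <= j < n_el - 1:
                    K[i, j] += ke[a, b]
    return K

def forward_and_jacobian(sigma):
    K = stiffness(sigma)
    u = np.linalg.solve(K, np.full(n_el - 1, h))   # load vector for f = 1
    F = u[meas - 1]
    # Sensitivity equations: K (du/dsigma_j) = -(dK/dsigma_j) u, where
    # dK/dsigma_j = stiffness(e_j) because K is linear in sigma.
    J = np.zeros((len(meas), n_pix))
    for j in range(n_pix):
        ej = np.zeros(n_pix)
        ej[j] = 1.0
        du = np.linalg.solve(K, -stiffness(ej) @ u)
        J[:, j] = du[meas - 1]
    return F, J

F0, J0 = forward_and_jacobian(np.ones(n_pix))
print(F0, J0, sep="\n")
```

In practice one would factorize `K` once and reuse it for all `n_pix` sensitivity solves; the convergence of such FEM approximations to their true-solution counterparts is exactly what the note proves.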
This text assumes the reader to have basic knowledge of finite element methods, including the variational formulation of elliptic PDEs, the Lax–Milgram theorem, and Céa's lemma. Section 3 also assumes that the reader is familiar with the concept of Fréchet differentiability.
Consider two independent random walks. By chance, there will be spells of association between them where the two processes move in the same direction, or in opposite directions. We compute the probabilities of the length of the longest spell of such random association for a given sample size, and discuss measures like the mean and mode of the exact distributions. We observe that long spells (relative to small sample sizes) of random association occur frequently, which explains why nonsense correlation between short independent random walks is the rule rather than the exception. The exact figures are compared with approximations. Our finite sample analysis, as well as the approximations, rely on two older results popularized by Révész (Statistical Papers 31:95–101, 1990). Moreover, we consider spells of association between correlated random walks. Approximate probabilities are compared with finite sample Monte Carlo results.
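A small Monte Carlo sketch of the quantity in question (sample size, replication count and function names are ours): two independent symmetric walks move in the same direction with probability 1/2 at each step, so the longest spell of agreement is the longest head run in a sequence of fair coin tosses, which is where the coin-tossing results of Révész enter.

```python
# Estimate the distribution of the longest spell during which two
# independent +/-1 random walks move in the same direction.
import numpy as np

rng = np.random.default_rng(0)

def longest_spell(n):
    """Longest run of coinciding increments of two independent walks."""
    same = rng.integers(0, 2, size=n) == rng.integers(0, 2, size=n)
    best = run = 0
    for s in same:
        run = run + 1 if s else 0
        best = max(best, run)
    return best

n, reps = 50, 20_000
spells = np.array([longest_spell(n) for _ in range(reps)])
print("mean:", spells.mean(), "mode:", np.bincount(spells).argmax())
```

Even for n = 50, the typical longest spell is already of noticeable length relative to the sample, illustrating why spurious association between short walks is so common.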
We establish weighted $L^p$-Fourier extension estimates for $O(N-k)\times O(k)$-invariant functions defined on the unit sphere $S^{N-1}$, allowing for exponents $p$ below the Stein–Tomas critical exponent $\frac{2(N+1)}{N-1}$. Moreover, in the more general setting of an arbitrary closed subgroup $G \subset O(N)$ and $G$-invariant functions, we study the implications of weighted Fourier extension estimates with regard to boundedness and nonvanishing properties of the corresponding weighted Helmholtz resolvent operator. Finally, we use these properties to derive new existence results for $G$-invariant solutions to the nonlinear Helmholtz equation
$$-\Delta u - u = Q(x)|u|^{p-2}u, \qquad u \in W^{2,p}(\mathbb{R}^N),$$
where $Q$ is a nonnegative, bounded and $G$-invariant weight function.
This thesis concerns three specific constraint satisfaction problems: the $k$-SAT problem, random linear equations and the Potts model. We investigate a phenomenon called replica symmetry, its consequences and its limitations. For the $k$-SAT problem, we show that replica symmetry holds up to a threshold $d^{*}$. Beyond another critical threshold $d^{**}$, however, replica symmetry can no longer hold, which enables us to establish the existence of a replica symmetry breaking region. For the random linear problem, a peculiar phenomenon occurs: a more robust version of replica symmetry (strong replica symmetry) holds up to the threshold $d=e$ and ceases to hold thereafter. This is linked to the fact that below the threshold $d=e$ the fraction of frozen variables, i.e. variables forced to take the same value in all solutions, is concentrated around a deterministic value, whereas it vacillates between two values with equal probability for $d>e$. Lastly, for the Potts model, we show that a phenomenon called metastability occurs, which can be understood as a consequence of a trivial replica symmetry breaking scheme. This metastability further yields slow mixing results for two famous Markov chains, the Glauber and the Swendsen–Wang dynamics.
In this survey paper, we present a multiscale post-processing method in exploration. Based on a physically relevant mollifier technique involving the elasto-oscillatory Cauchy–Navier equation, we mathematically describe the extractable information within 3D geological models obtained by migration as is commonly used for geophysical exploration purposes. More explicitly, the developed multiscale approach extracts and visualizes structural features inherently available in signature bands of certain geological formations such as aquifers, salt domes etc. by specifying suitable wavelet bands.
Monte Carlo methods: barrier option pricing with stable Greeks and multilevel Monte Carlo learning
(2021)
For discretely observed barrier options, no closed-form solution exists under the Black–Scholes model. Thus, it is often helpful to use Monte Carlo simulations, which are easily adapted to such models. However, the discontinuous payoff may lead to instability in the option's sensitivities computed by Monte Carlo algorithms.
This thesis presents a new Monte Carlo algorithm that can calculate the pathwise sensitivities of discretely monitored barrier options. The idea is based on Glasserman and Staum's one-step survival strategy and on the results of Alm et al., with which we can stably determine the option's sensitivities, such as Delta and Vega, by finite differences. The basic idea of Glasserman and Staum is to sample from a truncated normal distribution, which excludes the values above the barrier (e.g. for knock-up-out options), instead of from the full normal distribution. This approach avoids the discontinuity generated by Monte Carlo paths crossing the barrier and yields a Lipschitz-continuous payoff function.
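The truncated-normal step just described admits a compact sketch (a toy up-and-out call under Black–Scholes with illustrative parameters of our choosing; the thesis treats more general diffusions): each monitoring step multiplies the path weight by the survival probability and then samples the log-increment conditioned on staying below the barrier, so every simulated path survives and the payoff is weighted accordingly.

```python
# Minimal one-step survival Monte Carlo sketch for an up-and-out call
# under Black-Scholes (parameters illustrative, names ours).
import numpy as np
from scipy.stats import norm

def osm_up_and_out_call(S0, K, B, r, sigma, T, n_mon, n_paths, rng):
    dt = T / n_mon
    drift, vol = (r - 0.5 * sigma**2) * dt, sigma * np.sqrt(dt)
    S = np.full(n_paths, S0, dtype=float)
    w = np.ones(n_paths)                      # survival weights
    for _ in range(n_mon):
        zstar = (np.log(B / S) - drift) / vol  # threshold for S_next < B
        p = norm.cdf(zstar)                    # one-step survival prob.
        w *= p
        u = rng.uniform(size=n_paths)
        z = norm.ppf(u * p)                    # truncated normal sample
        S *= np.exp(drift + vol * z)           # path stays below barrier
    payoff = w * np.maximum(S - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

rng = np.random.default_rng(1)
print(osm_up_and_out_call(100, 100, 120, 0.05, 0.2, 1.0, 12, 200_000, rng))
```

Because the weights and the sampled paths depend smoothly on the model parameters, finite-difference Greeks computed on top of this estimator avoid the discontinuity of the plain knock-out indicator.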
The new contribution is an extended algorithm that estimates the sensitivities directly, without simulating at multiple parameter values as in finite differences.
Consider the local volatility model, which is a generalisation of the Black-Scholes model. Although standard Monte Carlo algorithms work well for the pricing of continuously monitored barrier options within this model, they often do not behave stably with respect to numerical differentiation.
To bypass this problem, one generally either resorts to regularised differentiation schemes or derives an algorithm for precise differentiation. Unfortunately, while the widespread Brownian bridge approach leads to accurate first derivatives, these are not Lipschitz continuous, which causes instability with respect to numerical differentiation for second-order Greeks.
To alleviate this problem, i.e. to produce Lipschitz-continuous first-order derivatives, and to reduce variance, we generalise the idea of one-step survival to general scalar stochastic differential equations. This approach leads to the new one-step survival Brownian bridge approximation, which allows for stable second-order Greek calculations.
To demonstrate the new approach's numerical efficiency, we present a corresponding Monte Carlo pathwise sensitivity estimator for the first-order Greeks and study different methods to compute second-order Greeks stably. Finally, we develop a one-step survival Brownian bridge multilevel Monte Carlo algorithm to reduce the computational cost in practice.
This thesis proves unbiasedness and variance reduction of our new one-step survival version with respect to the classical Brownian bridge approach. Furthermore, we present a new convergence result for the Brownian bridge approach using the Milstein scheme under certain conditions. Overall, these properties imply convergence of the new one-step survival Brownian bridge approach.
In recent years, deep learning has become pervasive in various fields. As a family of machine learning methods, it is used in a broad set of applications such as image processing, voice recognition, email filtering and computer vision. Most modern deep learning algorithms are based on artificial neural networks, inspired by the biological neural networks constituting animal brains. Deep learning may also be of use in computational finance: when no closed-form solution is available for an option price, Monte Carlo simulations are essential for its estimation. Instead of repeatedly recomputing prices whenever the volatility term is updated, one could instead evaluate a neural network.
If such a neural network is available, its evaluation can lead to substantial savings and be highly efficient: once trained, the network saves further expensive estimations. In practice, however, the challenge lies in the training process of the neural network.
We study and compare the computational complexity of two generic neural network training algorithms. Then, we introduce a new multilevel training algorithm that combines a deep learning algorithm with the idea of multilevel Monte Carlo path simulation. The idea is to train several neural networks with training data computed from the so-called level estimators of the multilevel Monte Carlo approach introduced by Giles. We formulate a complexity theorem showing that the new method can reduce computational complexity.
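The level estimators referred to here come from Giles' multilevel decomposition; schematically (notation ours):

```latex
\[
  \mathbb{E}[P_L] \;=\; \mathbb{E}[P_0] \;+\; \sum_{\ell=1}^{L} \mathbb{E}\,[P_\ell - P_{\ell-1}],
\]
```

where $P_\ell$ denotes the quantity of interest computed from a path simulation with step size $h_\ell = h_0\, 2^{-\ell}$, and each summand is estimated from independent samples. The abstract's description suggests that one network is trained per level, on samples of the corresponding correction $P_\ell - P_{\ell-1}$: coarse-level samples are cheap to generate, while fine-level corrections have small variance.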
In this article we use techniques from tropical and logarithmic geometry to construct a non-Archimedean analogue of Teichmüller space $\overline{T}_g$ whose points are pairs consisting of a stable projective curve over a non-Archimedean field and a Teichmüller marking of the topological fundamental group of its Berkovich analytification. This construction is closely related to and inspired by the classical construction of a non-Archimedean Schottky space for Mumford curves by Gerritzen and Herrlich. We argue that the skeleton of non-Archimedean Teichmüller space is precisely the tropical Teichmüller space introduced by Chan–Melo–Viviani as a simplicial completion of Culler–Vogtmann Outer space. As a consequence, Outer space turns out to be a strong deformation retract of the locus of smooth Mumford curves in $\overline{T}_g$.
The risk of developing severe complications from an influenza virus infection is increased in patients with chronic inflammatory diseases such as psoriasis (PsO) and atopic dermatitis (AD). However, low influenza vaccination rates have been reported. The aim of this study was to determine vaccination rates in PsO compared to AD patients and explore patient perceptions of vaccination. A multicenter cross-sectional study was performed in 327 and 98 adult patients with PsO and AD, respectively. Data on vaccination, patient and disease characteristics, comorbidity, and patient perceptions were collected with a questionnaire. Medical records and vaccination certificates were reviewed. A total of 49.8% of PsO and 32.7% of AD patients were vaccinated at some point, while in season 2018/2019, 30.9% and 13.3% received an influenza vaccination, respectively. A total of 96.6% of PsO and 77.6% of AD patients had an indication for influenza vaccination due to age, immunosuppressive therapy, comorbidity, occupation, and/or pregnancy. Multivariate regression analysis revealed higher age (p < 0.001) and a history of bronchitis (p = 0.023) as significant predictors of influenza vaccination in PsO patients. Considering that most patients had an indication for influenza vaccination, the rate of vaccinated patients was inadequately low.
HbA1c is the gold standard test for monitoring medium/long-term glycemia in diabetes care, which is a critical factor in reducing the risk of chronic diabetes complications. Current technologies for measuring HbA1c concentration are invasive, and adequate assays are still limited to laboratory-based methods that are not widely available worldwide. The development of a non-invasive diagnostic tool for HbA1c concentration could decrease the rate of undiagnosed cases and facilitate early detection in diabetes care. We present a preliminary validation diagnostic study of W-band spectroscopy for detection and monitoring of sustained hyperglycemia, using the HbA1c concentration as reference. A group of 20 patients with type 1 diabetes mellitus and 10 healthy subjects were non-invasively assessed at three different visits over a period of 7 months by a millimeter-wave spectrometer (transmission mode) operating across the full W-band. The relationship between the W-band spectral profile and the HbA1c concentration is studied using longitudinal and non-longitudinal functional data analysis methods. A potential blind discrimination between patients with or without diabetes is obtained, and, more importantly, an excellent relation (R-squared = 0.97) between the non-invasive assessment and the HbA1c measure is achieved. These results support the view that W-band spectroscopy has great potential for developing a non-invasive diagnostic tool for in-vivo HbA1c concentration monitoring in humans.
We prove new existence results for a nonlinear Helmholtz equation with sign-changing nonlinearity of the form
$$-\Delta u - k^2 u = Q(x)|u|^{p-2}u, \qquad u \in W^{2,p}(\mathbb{R}^N),$$
with $k > 0$, $N \geq 3$, $p \in \big[\tfrac{2(N+1)}{N-1}, \tfrac{2N}{N-2}\big)$ and $Q \in L^{\infty}(\mathbb{R}^N)$. Due to the sign changes of $Q$, our solutions have infinite Morse index in the corresponding dual variational formulation.
The recently introduced Lipschitz–Killing curvature measures on pseudo-Riemannian manifolds satisfy a Weyl principle, i.e. are invariant under isometric embeddings. We show that they are uniquely characterized by this property. We apply this characterization to prove a Künneth-type formula for Lipschitz–Killing curvature measures, and to classify the invariant generalized valuations and curvature measures on all isotropic pseudo-Riemannian space forms.
In the model of randomly perturbed graphs we consider the union of a deterministic graph $G$ with minimum degree $\alpha n$ and the binomial random graph $G(n,p)$. This model was introduced by Bohman, Frieze, and Martin, and for Hamilton cycles their result bridges the gap between Dirac's theorem and the results by Pósa and Korshunov on the threshold in $G(n,p)$. In this note we extend this result in $G \cup G(n,p)$ to sparser graphs with $\alpha = o(1)$. More precisely, for any $\varepsilon > 0$ and $\alpha\colon \mathbb{N} \to (0,1)$ we show that a.a.s. $G \cup G(n, \beta/n)$ is Hamiltonian, where $\beta = -(6+\varepsilon)\log(\alpha)$. If $\alpha > 0$ is a fixed constant, this gives the aforementioned result by Bohman, Frieze, and Martin, and if $\alpha = O(1/n)$ the random part $G(n,p)$ alone is sufficient for a Hamilton cycle. We also discuss embeddings of bounded degree trees and other spanning structures in this model, which lead to interesting questions on almost-spanning embeddings into $G(n,p)$.
In our work, we establish the existence of standing waves to a nonlinear Schrödinger equation with inverse-square potential on the half-line. We apply a profile decomposition argument to overcome the difficulty arising from the non-compactness of the setting. We obtain convergent minimizing sequences by comparing the problem to the problem at “infinity” (i.e., the equation without inverse square potential). Finally, we establish orbital stability/instability of the standing wave solution for mass subcritical and supercritical nonlinearities respectively.
The sum of Lyapunov exponents $L_f$ of a semi-stable fibration is the ratio of the degree of the Hodge bundle to the Euler characteristic of the base. This ratio is bounded from above by the Arakelov inequality. Sheng-Li Tan showed that for fiber genus $g \geq 2$ the Arakelov equality is never attained. We investigate whether there are sequences of fibrations approaching the Arakelov bound asymptotically. The answer turns out to be no if the fibration is smooth, or non-hyperelliptic, or has small base genus. Moreover, we construct examples of semi-stable fibrations showing that Teichmüller curves do not attain the maximal possible value of $L_f$.
We obtain spectral inequalities and asymptotic formulae for the discrete spectrum of the operator $\frac{1}{2}\log(-\Delta)$ in an open set $\Omega \subset \mathbb{R}^d$, $d \geq 2$, of finite measure, with Dirichlet boundary conditions. We also derive some results regarding lower bounds for the eigenvalue $\Lambda_1(\Omega)$ and compare them with previously known inequalities.
Topological phases set themselves apart from other phases since they cannot be understood in terms of the usual Landau theory of phase transitions. This fact, which is a consequence of the property that topological phase transitions can occur without breaking symmetries, is reflected in the complicated form of topological order parameters. While the mathematical classification of phases through homotopy theory is known, an intuition for the relation between phase transitions and changes to the physical system is largely inhibited by the general complexity.
In this thesis we aim to regain some of this intuition by studying the properties of the Chern number (a topological order parameter) in two scenarios. First, we investigate the effect of electronic correlations on topological phases in the Green's function formalism. By developing a statistical method that averages over all possible solutions of the many-body problem, we extract general statements about the shape of the phase diagram and investigate the stability of topological phases with respect to interactions. In addition, we find that in many topological models the local approximation, which is part of many standard methods for solving the many-body lattice model, is able to produce qualitatively correct phase transitions at low to intermediate correlations.
We then extend the statistical method to study the effect of the lattice, where we evaluate possible applications of standard machine learning techniques against our information-theoretic approach. We define a measure for the information about particular topological phases encoded in individual lattice parameters, which allows us to construct a qualitative phase diagram that gives a more intuitive understanding of the topological phase.
Finally, we discuss possible applications of our method that could facilitate the discovery of new materials with topological properties.
Point forecasts can be interpreted as functionals (i.e., point summaries) of predictive distributions. We extend methodology for the identification of the functional based on time series of point forecasts and associated realizations. Focusing on state-dependent quantiles and expectiles, we provide a generalized method of moments estimator for the functional, along with tests of optimality under general joint hypotheses of functional relationships and information bases. Our tests are more flexible, and in simulations better calibrated and more powerful than existing solutions. In empirical examples, economic growth forecasts and model output for precipitation are indicative of overstatement in anticipation of extreme events.
In the first part of this thesis, we introduce the concept of prospective strict no-arbitrage for discrete-time financial market models with proportional transaction costs. The prospective strict no-arbitrage condition, which is a variant of strict no-arbitrage, is slightly weaker than the robust no-arbitrage condition. It still implies that the set of portfolios attainable from zero initial endowment is closed in probability. Consequently, prospective strict no-arbitrage implies the existence of consistent prices, which may lie on the boundary of the bid-ask spread. A weak version of prospective strict no-arbitrage turns out to be equivalent to the existence of a consistent price system.
In continuous-time financial market models with proportional transaction costs, efficient friction, i.e., nonvanishing transaction costs, is a standing assumption. Together with robust no free lunch with vanishing risk, it rules out strategies of infinite variation, which usually appear in frictionless financial markets. In the second part of this thesis, we show how models with and without transaction costs can be unified. The bid and the ask price of a risky asset are given by càdlàg processes which are locally bounded from below and may coincide at some points. In a first step, we show that if the bid-ask model satisfies no unbounded profit with bounded risk for simple long-only strategies, then there exists a semimartingale lying between the bid and the ask price process.
In a second step, under the additional assumption that the zeros of the bid-ask spread are either starting points of an excursion away from zero or inner points from the right, we show that for every bounded predictable strategy specifying the amount of risky assets, the semimartingale can be used to construct the corresponding self-financing risk-free position in a consistent way. Finally, the set of most general strategies is introduced, which also provides a new view on the frictionless case.
For a class of Cannings models we prove Haldane's formula, $\pi(s_N) \sim \frac{2 s_N}{\rho^2}$, for the fixation probability of a single beneficial mutant in the limit of large population size $N$ and in the regime of moderately strong selection, i.e. for $s_N \sim N^{-b}$ and $0 < b < 1/2$. Here, $s_N$ is the selective advantage of an individual carrying the beneficial type, and $\rho^2$ is the (asymptotic) offspring variance. Our assumptions on the reproduction mechanism allow for a coupling of the beneficial allele's frequency process with slightly supercritical Galton–Watson processes in the early phase of fixation.
Sublinear circuits are generalizations of the affine circuits in matroid theory, and they arise as the convex-combinatorial core underlying constrained non-negativity certificates of exponential sums and of polynomials based on the arithmetic-geometric inequality. Here, we study the polyhedral combinatorics of sublinear circuits for polyhedral constraint sets. We give results on the relation between sublinear circuits and their supports and provide necessary as well as sufficient criteria for sublinear circuits. Based on these characterizations, we provide some explicit results and enumerations for two prominent polyhedral cases, namely the non-negative orthant and the cube $[-1,1]^n$.
We derive a shape derivative formula for the family of principal Dirichlet eigenvalues $\lambda_s(\Omega)$ of the fractional Laplacian $(-\Delta)^s$ associated with bounded open sets $\Omega \subset \mathbb{R}^N$ of class $C^{1,1}$. This extends, with the help of a new approach, a result of Dalibard and Gérard-Varet (Calc. Var. 19(4):976–1013, 2013), which was restricted to the case $s = \frac{1}{2}$. As an application, we consider the maximization problem for $\lambda_s(\Omega)$ among annular-shaped domains of fixed volume of the type $B \setminus \overline{B'}$, where $B$ is a fixed ball and $B'$ is a ball whose position is varied within $B$. We prove that $\lambda_s(B \setminus \overline{B'})$ is maximal when the two balls are concentric. Our approach also allows us to derive similar results for the fractional torsional rigidity. More generally, we characterize one-sided shape derivatives for best constants of a family of subcritical fractional Sobolev embeddings.
Solving an inverse elliptic coefficient problem by convex non-linear semidefinite programming
(2021)
Several applications in medical imaging and non-destructive material testing lead to inverse elliptic coefficient problems, where an unknown coefficient function in an elliptic PDE is to be determined from partial knowledge of its solutions. This is usually a highly non-linear, ill-posed inverse problem, for which unique reconstructability results, stability estimates and global convergence of numerical methods are very hard to achieve. The aim of this note is to point out a new connection between inverse coefficient problems and semidefinite programming that may help address these challenges. We show that an inverse elliptic Robin transmission problem with finitely many measurements can be equivalently rewritten as a uniquely solvable convex non-linear semidefinite optimization problem. This allows us to explicitly estimate the number of measurements required to achieve a desired resolution, to derive an error estimate for noisy data, and to overcome the problem of local minima that usually appears in optimization-based approaches for inverse coefficient problems.
We investigated whether dichotomous data showed the same latent structure as the interval-level data from which they originated. Given constancy of dimensionality and factor loadings reflecting the latent structure of data, the focus was on the variance of the latent variable of a confirmatory factor model. This variance was shown to summarize the information provided by the factor loadings. The results of a simulation study did not reveal exact correspondence of the variances of the latent variables derived from interval-level and dichotomous data but shrinkage. Since shrinkage occurred systematically, methods for recovering the original variance were fleshed out and evaluated.
The problem of unconstrained or constrained optimization occurs in many branches of mathematics and various fields of application. It is, however, an NP-hard problem in general. In this thesis, we examine an approximation approach based on the class of SAGE exponentials, which are nonnegative exponential sums. We examine this SAGE-cone, its geometry, and generalizations. The thesis consists of three main parts:
1. In the first part, we focus purely on the cone of sums of globally nonnegative exponential sums with at most one negative term, the SAGE-cone. We examine the duality theory and extreme rays of the cone, and provide two efficient optimization approaches over the SAGE-cone and its dual.
2. In the second part, we introduce and study the so-called S-cone, which provides a uniform framework for SAGE exponentials and SONC polynomials. In particular, we focus on second-order representations of the S-cone and its dual, using extremality results from the first part.
3. In the third and last part of this thesis, we turn towards examining the conditional SAGE-cone. We develop a notion of sublinear circuits leading to new duality results and a partial characterization of extremality. In the case of polyhedral constraint sets, this examination is simplified and allows us to classify sublinear circuits and extremality completely in some cases. For constraint sets with certain properties, such as sets with symmetries, conic or polyhedral sets, various optimization and representation results from the unconstrained setting can be applied to the constrained case.
The aim of this bachelor thesis is to compare and empirically test the use of classification to improve the topic models Latent Dirichlet Allocation (LDA) and Author Topic Modeling (ATM) in the context of the social media platform Twitter. For this purpose, a corpus was classified with the Dewey Decimal Classification (DDC) and then used to train the topic models. A second, unclassified corpus was used for comparison. The assumption that the use of classification could improve the topic models did not prove true for the LDA topic model: a sufficiently good improvement of the models could not be achieved. The ATM model, on the other hand, could be improved by using the classification. In general, the ATM model performed significantly better than the LDA model. In the context of the social media platform Twitter, the ATM model is thus superior to the LDA model and can additionally be improved by classifying the data.
We show that throughout the satisfiable phase the normalized number of satisfying assignments of a random 2-SAT formula converges in probability to an expression predicted by the cavity method from statistical physics. The proof is based on showing that the Belief Propagation algorithm renders the correct marginal probability that a variable is set to “true” under a uniformly random satisfying assignment.
Within the last thirty years, the contraction method has become an important tool for the distributional analysis of random recursive structures. While it was mainly developed to show weak convergence, the contraction approach can additionally be used to obtain bounds on the rate of convergence in an appropriate metric. Based on ideas of the contraction method, we develop a general framework to bound rates of convergence for sequences of random variables as they mainly arise in the analysis of random trees and divide-and-conquer algorithms. The rates of convergence are bounded in the Zolotarev distances. In essence, we present three different versions of convergence theorems: a general version, an improved version for normal limit laws (providing significantly better bounds in some examples with normal limits) and a third version with a relaxed independence condition. Moreover, concrete applications are given which include parameters of random trees, quantities of stochastic geometry as well as complexity measures of recursive algorithms under either a random input or some randomization within the algorithm.
In 2020, Germany and Spain experienced lockdowns of their school systems. This resulted in a new challenge for learners and teachers: lessons moved from the classroom to the children's homes. Therefore, teachers had to set rules, implement procedures and make didactical–methodical decisions regarding how to handle this new situation. In this paper, we focus on the roles of mathematics teachers in Germany and Spain. The article first describes how mathematics lessons were conducted using distance learning. Second, problems encountered throughout this process were examined. Third, teachers drew conclusions from their mathematics teaching experiences during distance learning. To address these research interests, a questionnaire was answered by N = 248 teachers (N1 = 171 German teachers; N2 = 77 Spanish teachers). Using a mixed-methods approach, differences between the countries can be observed; e.g., German teachers conducted more lessons asynchronously. In contrast, Spanish teachers used synchronous teaching more frequently, but still regard the lack of personal contact as a main challenge. Finally, for both countries, the digitization of mathematics lessons seems to have been normalized by the pandemic.
We show the existence of additive kinematic formulas for general flag area measures, which generalizes a recent result by Wannerer. Building on previous work by the second named author, we introduce an algebraic framework to compute these formulas explicitly. This is carried out in detail in the case of the incomplete flag manifold consisting of all (p+1)-planes containing a unit vector.
Studying large discrete systems is of central interest in discrete mathematics, computer science and statistical physics, among other fields. The study of phase transitions, i.e. points in the evolution of a large random system at which the behaviour of the system changes drastically, became of interest in the classical field of random graphs, in the theory of spin glasses, as well as in the analysis of algorithms [78, 82, 121].
It turns out that ideas from the statistical physics point of view on spin glass systems can be used to study inherently combinatorial problems in discrete mathematics and theoretical computer science (for instance, satisfiability) or to analyse phase transitions occurring in inference problems (like the group testing problem) [68, 135, 168]. A mathematical flaw of this approach is that the physical methods only yield conjectures, as they are not known to be rigorous.
In this thesis, we will discuss the results of six contributions. For instance, we will explore how the theory of diluted mean-field models for spin glasses helps in studying random constraint satisfaction problems, through the example of the random 2-SAT problem. We will derive a formula for the number of satisfying assignments that a random 2-SAT formula typically possesses [2].
Furthermore, we will discuss how ideas from spin glass models (more precisely, from their planted versions) can be used to facilitate inference in the group testing problem. We will answer all major open questions with respect to non-adaptive group testing if the number of infected individuals scales sublinearly in the population size, and draw a complete picture of phase transitions with respect to the complexity and solubility of this inference problem [41, 46].
Subsequently, we study the group testing problem under sparsity constraints and obtain a (not fully understood) phase diagram in which only small regions stay unexplored [88].
In all these cases, we will discover that important results can be achieved by combining the rich theory of the statistical physics approach towards spin glasses with inherent combinatorial properties of the underlying random graph.
Furthermore, based on partial results of Coja-Oghlan, Perkins and Skubch [42] and Coja-Oghlan et al. [49], we introduce in [47] a consistent limit theory for discrete probability measures akin to the graph limit theory [31, 32, 128]. This limit theory involves the extensive study of a special variant of the cut-distance, and we obtain a continuous version of a very simple algorithm, the pinning operation, which allows us to decompose the phase space of an underlying system into parts such that a probability measure, restricted to this decomposition, is close to a product measure under the cut-distance. We will see that this pinning lemma can be used, at least in some special cases, to rigorise predictions based on the physical idea of a Bethe state decomposition when applied to the Boltzmann distribution.
Finally, we study sufficient conditions for the existence of perfect matchings, Hamilton cycles and bounded degree trees in randomly perturbed graph models if the underlying deterministic graph is sparse [93].
We provide extensions of the dual variational method for the nonlinear Helmholtz equation due to Evéquoz and Weth. In particular, we prove the existence of dual ground state solutions in the Sobolev-critical case, extend the dual method beyond the standard Stein–Tomas and Kenig–Ruiz–Sogge range, and generalize the method to sign-changing nonlinearities.