Dendritic spines are considered a morphological proxy for excitatory synapses, making them a target of many different lines of research. In recent years it has become possible to simultaneously image large numbers of dendritic spines in 3D volumes of neural tissue. Exploiting such datasets, however, requires new tools for the fully automated detection and analysis of large numbers of spines, and currently no automated method exists that comes close to the detection performance of human experts. Here, we developed an efficient analysis pipeline to detect large numbers of dendritic spines in volumetric fluorescence imaging data. The core of our pipeline is a deep convolutional neural network, which was pretrained on a general-purpose image library and then optimized on the spine detection task. This transfer learning approach is data efficient while achieving high detection precision. To train and validate the model, we generated a labelled dataset using five human expert annotators to account for the variability in human spine detection. The pipeline enables fully automated dendritic spine detection and reaches near human-level detection performance. Our method for spine detection is fast, accurate and robust, and thus well suited for large-scale datasets with thousands of spines. The code is easily applicable to new datasets, achieving high detection performance even without any retraining or adjustment of model parameters.
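The candidate-extraction step downstream of such a CNN can be illustrated with a minimal sketch: the network produces a 3D score map, and detections are the thresholded local maxima of that map. This is not the authors' code; `local_maxima_3d` is a hypothetical helper, written here with plain loops for clarity rather than speed.

```python
import numpy as np

def local_maxima_3d(score, threshold, radius=1):
    """Return voxel coordinates that exceed `threshold` and are maximal
    within their (2*radius+1)^3 neighbourhood of the 3D score map."""
    coords = []
    Z, Y, X = score.shape
    for z in range(Z):
        for y in range(Y):
            for x in range(X):
                v = score[z, y, x]
                if v < threshold:
                    continue
                # clip the neighbourhood at the volume borders
                z0, z1 = max(0, z - radius), min(Z, z + radius + 1)
                y0, y1 = max(0, y - radius), min(Y, y + radius + 1)
                x0, x1 = max(0, x - radius), min(X, x + radius + 1)
                if v >= score[z0:z1, y0:y1, x0:x1].max():
                    coords.append((z, y, x))
    return coords
```

In a real pipeline this non-maximum-suppression step would be vectorized (e.g. with a maximum filter), but the logic is the same.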
Active efficient coding explains the development of binocular vision and its failure in amblyopia
(2020)
The development of vision during the first months of life is an active process that comprises the learning of appropriate neural representations and the learning of accurate eye movements. While it has long been suspected that the two learning processes are coupled, there is still no widely accepted theoretical framework describing this joint development. Here, we propose a computational model of the development of active binocular vision to fill this gap. The model is based on a formulation of the active efficient coding theory, which proposes that eye movements as well as stimulus encoding are jointly adapted to maximize the overall coding efficiency. Under healthy conditions, the model self-calibrates to perform accurate vergence and accommodation eye movements. It exploits disparity cues to deduce the direction of defocus, which leads to coordinated vergence and accommodation responses. In a simulated anisometropic case, where the refraction power of the two eyes differs, an amblyopia-like state develops in which the foveal region of one eye is suppressed due to inputs from the other eye. After correcting for refractive errors, the model can only reach healthy performance levels if receptive fields are still plastic, in line with findings on a critical period for binocular vision development. Overall, our model offers a unifying conceptual framework for understanding the development of binocular vision.
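The core loop of active efficient coding, i.e. choosing eye movements that make the binocular input cheap to encode, can be caricatured in a toy 1-D setting. This is not the authors' model: the smooth random "scene", the squared left/right residual as a stand-in for coding cost, and the greedy vergence search are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def coding_cost(signal, d, positions=(100, 200, 300), size=128):
    # proxy for binocular coding cost: a joint encoder represents
    # well-aligned left/right patches cheaply, so we use the residual
    # between the "left eye" patch and the "right eye" patch, which is
    # offset by the current vergence error d
    return sum(
        float(np.sum((signal[p:p + size] - signal[p + d:p + d + size]) ** 2))
        for p in positions)

def simulate(init_disparity=5, steps=40):
    # smooth 1-D "scene" viewed by two eyes offset by the vergence error
    signal = np.convolve(rng.standard_normal(600), np.ones(7) / 7, "same")
    d = init_disparity
    for _ in range(steps):
        # greedy vergence command: try small eye movements and keep the
        # one that minimises the coding cost (the self-calibration loop)
        d += min((-1, 0, 1), key=lambda a: coding_cost(signal, d + a))
    return d
```

Driven only by the coding-cost signal, the vergence error shrinks towards zero, which is the self-calibration behaviour the model describes (the full model additionally learns the encoding itself).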
Future operation of the CBM detector requires ultra-fast analysis of the continuous data stream from all subdetector systems. Determining the inter-system time shifts between the individual detector systems of the existing prototype experiment mCBM is an essential step in data processing, and in particular for stable data taking. Starting from the raw measurements of all detector systems, the corresponding time correlations can be obtained at the digital level by evaluating the differences between time stamps. If the relevant systems are stable during data taking and sufficiently many digital measurements are available, the distribution of time differences displays a clear peak. Until now, the processed time differences have been stored in histograms and the maximum peak determined only after evaluating all timeslices of a run, leading to significant run times. The results presented here demonstrate the stability of the synchronicity of the mCBM systems. They further show that relatively small numbers of raw measurements suffice to evaluate the time correlations between individual mCBM detectors, enabling fast online monitoring of these correlations in future online data processing.
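The timestamp-difference method described above can be sketched in a few lines: pair up hits from two detector streams within a coincidence window, histogram the time differences, and take the most populated bin as the inter-system shift. This is a generic illustration, not mCBM code; the window and bin widths are arbitrary example values.

```python
import numpy as np

def time_shift_peak(ts_a, ts_b, window_ns=1000.0, bin_ns=10.0):
    """Histogram the differences ts_b - ts_a for all pairs within
    +/- window_ns and return the centre of the most populated bin,
    i.e. the estimated inter-system time shift."""
    ts_b = np.sort(np.asarray(ts_b, dtype=float))
    diffs = []
    for t in np.asarray(ts_a, dtype=float):
        # binary search keeps this O(N log N) instead of all pairs
        lo = np.searchsorted(ts_b, t - window_ns)
        hi = np.searchsorted(ts_b, t + window_ns)
        diffs.extend(ts_b[lo:hi] - t)
    edges = np.arange(-window_ns, window_ns + bin_ns, bin_ns)
    counts, edges = np.histogram(diffs, bins=edges)
    i = int(np.argmax(counts))
    return 0.5 * (edges[i] + edges[i + 1])
```

Because correlated hits pile up in one bin while uncorrelated pairs spread flatly over the whole window, even a small sample of raw measurements yields a clear peak, which is what makes the online-monitoring use case feasible.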
In computational linguistics, the automatic generation of scenes from text written in natural language has been an important strand of research for many decades, with applications in art, teaching, and robotics. New technologies in the field of artificial intelligence (AI) make new developments possible that simplify this generation, but they also encourage opaque decisions made internally by the model.
The goal of the proposed solution, "ARES: Annotation von Relationen und Eigenschaften zur Szenengenerierung" (annotation of relations and properties for scene generation), is to design a modular system whose individual processes remain comprehensible to the user. In addition, it should be possible to incorporate new entities and relations, provided by the text analysis, into the generation of three-dimensional scenes without any code being strictly necessary.
The focus lies on the syntactically correct arrangement of the elements in space. Semantic correctness, by contrast, can be improved through further manual adjustments, which are stored for later generations. Finally, the number of annotations required for rendering should remain as small as possible, and new scene-related annotations can be added with the implemented annotation tools.
This thesis presents the Certainty-Tool, an extension of the Unity-based part of the Stolperwege project. It carries forward the idea of the VAnnotatoR and allows the visualization of the informational uncertainty of the buildings digitally reconstructed in the Stolperwege practical course. The tool incorporates the concept behind BIM (Building Information Modelling), a novel planning method in the AEC industry that makes parts of a building aware of their own information. In the Certainty-Tool, levels of informational uncertainty are defined and assigned to parts of a building. The tool is demonstrated on a digital reconstruction of the destroyed Rothschild-Palais. Furthermore, an evaluation based on the Usability Metric for User Experience was carried out, and further developments and improvements of the tool are discussed.
We investigate the QCD phase diagram for nonzero background magnetic fields using first-principles lattice simulations. At the physical point (in terms of quark masses), the thermodynamics of this system is controlled by two opposing effects: magnetic catalysis (enhancement of the quark condensate) at low temperature and inverse magnetic catalysis (reduction of the condensate) in the transition region. While the former is known to be robust and independent of the details of the interactions, inverse catalysis arises as a result of a delicate competition, effective only for light quarks. By performing simulations at different quark masses, we determine the pion mass above which inverse catalysis does not take place in the transition region anymore. Even for pions heavier than this limiting value — where the quark condensate undergoes magnetic catalysis — our results are consistent with the notion that the transition temperature is reduced by the magnetic field. These findings will be useful to guide low-energy models and effective theories of QCD.
The SU(3) spin model with chemical potential corresponds to a simplified version of QCD with static quarks in the strong coupling regime. It has been studied previously as a testing ground for new methods aiming to overcome the sign problem of lattice QCD. In this work we show that the equation of state and the phase structure of the model can be fully determined to reasonable accuracy by a linked cluster expansion. In particular, we compute the free energy to 14th order in the nearest-neighbour coupling. The resulting predictions for the equation of state and the location of the critical end points agree with numerical determinations to O(1%) and O(10%), respectively. While the accuracy for the critical couplings is still limited at the current series depth, the approach is equally applicable at zero and non-zero imaginary or real chemical potential, as well as to effective QCD Hamiltonians obtained by strong coupling and hopping expansions.
The production of the Λ(1520) baryonic resonance has been measured at midrapidity in inelastic pp collisions at √s = 7 TeV and in p–Pb collisions at √sNN = 5.02 TeV for non-single diffractive events and in multiplicity classes. The resonance is reconstructed through its hadronic decay channel Λ(1520) → pK− and the charge conjugate with the ALICE detector. The integrated yields and mean transverse momenta are calculated from the measured transverse momentum distributions in pp and p–Pb collisions. The mean transverse momenta follow mass ordering as previously observed for other hyperons in the same collision systems. A Blast-Wave function constrained by other light hadrons (π, K, K0S, p, Λ) describes the shape of the Λ(1520) transverse momentum distribution up to 3.5 GeV/c in p–Pb collisions. In the framework of this model, this observation suggests that the Λ(1520) resonance participates in the same collective radial flow as other light hadrons. The ratio of the yield of Λ(1520) to the yield of the ground state particle Λ remains constant as a function of charged-particle multiplicity, suggesting that there is no net effect of the hadronic phase in p–Pb collisions on the Λ(1520) yield.
We present the charged-particle multiplicity distributions over a wide pseudorapidity range (−3.4 < η < 5.0) for pp collisions at √s = 0.9, 7, and 8 TeV at the LHC. Results are based on information from the Silicon Pixel Detector and the Forward Multiplicity Detector of ALICE, extending the pseudorapidity coverage of the earlier publications and the high-multiplicity reach. The measurements are compared to results from the CMS experiment and to PYTHIA, PHOJET and EPOS LHC event generators, as well as IP-Glasma calculations.
We report measurements of the production of prompt D0, D+, D*+ and D+s mesons in Pb–Pb collisions at the centre-of-mass energy per nucleon-nucleon pair √sNN = 5.02 TeV, in the centrality classes 0–10%, 30–50% and 60–80%. The D-meson production yields are measured at mid-rapidity (|y| < 0.5) as a function of transverse momentum (pT). The pT intervals covered in central collisions are: 1 < pT < 50 GeV/c for D0, 2 < pT < 50 GeV/c for D+, 3 < pT < 50 GeV/c for D*+, and 4 < pT < 16 GeV/c for D+s mesons. The nuclear modification factors (RAA) for non-strange D mesons (D0, D+, D*+) show minimum values of about 0.2 for pT = 6–10 GeV/c in the most central collisions and are compatible within uncertainties with those measured at √sNN = 2.76 TeV. For D+s mesons, the values of RAA are larger than those of non-strange D mesons, but compatible within uncertainties. In central collisions the average RAA of non-strange D mesons is compatible with that of charged particles for pT > 8 GeV/c, while it is larger at lower pT. The nuclear modification factors for strange and non-strange D mesons are also compared to theoretical models with different implementations of in-medium energy loss.
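For readers unfamiliar with the observable, the nuclear modification factor quoted throughout is the standard ratio RAA(pT) = (dN_AA/dpT) / (⟨T_AA⟩ · dσ_pp/dpT); RAA = 1 means the heavy-ion yield is a simple superposition of pp collisions, and values well below 1 (such as the ≈ 0.2 minimum above) indicate in-medium suppression. A minimal bin-by-bin sketch with hypothetical numbers:

```python
def nuclear_modification_factor(yield_aa, cross_pp, t_aa):
    """R_AA per pT bin: heavy-ion yield dN_AA/dpT divided by the
    nuclear overlap <T_AA> times the pp cross section dsigma_pp/dpT.
    All inputs are illustrative, not measured values."""
    return [y / (t_aa * s) for y, s in zip(yield_aa, cross_pp)]
```

With matching units for ⟨T_AA⟩ (e.g. mb^-1) and the pp cross section (mb/(GeV/c)), the ratio is dimensionless.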