Hyperhomocysteinemia has been suggested to contribute to a variety of pathologies, such as Alzheimer’s disease (AD). While the impact of hyperhomocysteinemia on AD has been investigated extensively, data on the effect of AD on hyperhomocysteinemia are scarce. The aim of this in vivo study was to investigate the kinetics of homocysteine (HCys) and homocysteic acid (HCA) and the effects of AD-like pathology on their endogenous levels. AppNL-G-F knock-in and C57BL/6J wild type mice received a B-vitamin-deficient diet for eight weeks, followed by a return to a balanced control diet for another eight weeks. Serum, urine, and brain tissues were analyzed for HCys and HCA using LC-MS/MS methods. Eight weeks on the deficient diet induced hyperhomocysteinemic levels in both wild type and knock-in mice, followed by a rapid normalization after the return to control chow. Hyperhomocysteinemic AppNL-G-F mice had significantly higher HCys in all matrices, but not HCA, compared to wild type controls. Higher serum concentrations were associated with elevated levels in both brain and urine. Our findings confirm a significant impact of AD-like pathology on hyperhomocysteinemia in the AppNL-G-F mouse model. The immediate normalization of HCys and HCA after the resupply of B-vitamins strengthens the idea of B-vitamin intervention as a potentially preventive treatment option for HCys-related disorders such as AD.
We propose and create a new data model for learning-specific environments and learning analytics applications. It is motivated by experience with the Fiber Bundle Data Model used for large, time- and space-dependent data. Our proposed data model integrates file- or stream-based data structures from capturing devices more easily. Learning analytics algorithms are added directly to the data, and queries and analytics are formulated in Python. The model is designed to improve collaboration in the field of learning analytics. We leverage a hierarchical data structure in which varying data is located near the leaves. Abstract data types are identified in four distinct pathways, which allow storing highly diverse data sources. We compare different implementations regarding their memory footprint and performance. Our tests indicate that LeAn Bundles can be smaller than a naïve xAPI export, and the benchmarks show performance comparable to MongoDB, while having the benefit of being portable and extensible.
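The abstract does not publish the LeAn Bundle API, so the following is only a minimal, hypothetical Python sketch of the general idea it describes: a hierarchical structure with varying data near the leaves, queried with plain Python. All names here (Node, records, query, the example xAPI-like record) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a hierarchical, leaf-varying data model in the
# spirit of the description above; not the actual LeAn Bundle API.
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class Node:
    """A node in the hierarchy; varying data lives in leaf records."""
    name: str
    children: dict[str, Node] = field(default_factory=dict)
    records: list[dict[str, Any]] = field(default_factory=list)

    def child(self, name: str) -> Node:
        """Get or create a child node along a pathway."""
        return self.children.setdefault(name, Node(name))

    def query(self, predicate: Callable[[dict[str, Any]], bool]) -> list[dict[str, Any]]:
        """Formulate analytics as plain Python predicates over all leaf records."""
        hits = [r for r in self.records if predicate(r)]
        for child in self.children.values():
            hits.extend(child.query(predicate))
        return hits


# Example: store an xAPI-like statement under a course/learner pathway
# and query it with a Python lambda.
bundle = Node("course-101")
bundle.child("learner-42").records.append(
    {"verb": "completed", "object": "quiz-3", "score": 0.9}
)
passed = bundle.query(lambda r: r.get("score", 0.0) >= 0.5)
print(passed)
```

Keeping the varying records at the leaves, as the abstract suggests, lets the fixed hierarchy above them be shared and traversed cheaply, while each pathway can hold a differently shaped payload.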
Point-based geometry representations have become widely used in numerous contexts, ranging from particle-based simulations and stereo image matching to depth sensing via light detection and ranging. Our application focus is on the reconstruction of curved line structures in noisy 3D point cloud data. Algorithms operating on such point clouds often rely on the notion of a local neighborhood. For these neighborhoods, our approach employs multi-scale variants, for which weighted covariance measures of local points are determined. Curved line structures are reconstructed via vector field tracing, using a bidirectional piecewise streamline integration. We also introduce an automatic selection of optimal starting points via multi-scale geometric measures. The pipeline development and choice of parameters were driven by an extensive, automated initial analysis process on over a million prototype test cases. The behavior of our approach is controlled by several parameters; the majority are set automatically, leaving only three to be controlled by a user. In an extensive, automated final evaluation, we cover over one hundred thousand parameter sets, including 3D test geometries with varying curvature, sharp corners, intersections, data holes, and systematically applied varying types of noise. Further, we analyzed different choices for the point of reference in the covariance computation; using a weighted mean performed best in most cases. In addition, we compared our method to current, publicly available line reconstruction frameworks, achieving up to thirty times faster execution in some cases at comparable error measures. Finally, we demonstrate an exemplary application on four real-world 3D light detection and ranging datasets, extracting power line cables.
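As a sketch of the core local measure: the abstract does not specify the weighting scheme or the multi-scale setup, so the code below assumes a Gaussian distance weighting at a single scale. It computes a weighted covariance about the weighted mean (the reference point reported to perform best) and takes the eigenvector of the largest eigenvalue as the local line direction; function names and the scale parameter are illustrative assumptions.

```python
# Weighted covariance of a local point neighborhood, assuming Gaussian
# distance weights; the paper's exact weighting and multi-scale selection
# are not given in the abstract.
import numpy as np


def weighted_covariance(points: np.ndarray, center: np.ndarray, scale: float):
    """Weighted 3x3 covariance of `points` (N, 3) around a query `center` (3,)."""
    d = points - center
    w = np.exp(-np.sum(d * d, axis=1) / (2.0 * scale**2))  # Gaussian weights
    w /= w.sum()
    mean = (w[:, None] * points).sum(axis=0)  # weighted mean as reference point
    c = points - mean
    return (w[:, None] * c).T @ c, mean


def line_direction(points: np.ndarray, center: np.ndarray, scale: float) -> np.ndarray:
    """Local line direction: eigenvector of the largest eigenvalue."""
    cov, _ = weighted_covariance(points, center, scale)
    _, vecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return vecs[:, -1]


# Example: points scattered around the x-axis yield a direction near (1, 0, 0).
rng = np.random.default_rng(0)
pts = np.column_stack([
    rng.uniform(-1.0, 1.0, 200),       # spread along x
    0.01 * rng.standard_normal(200),   # small noise in y
    0.01 * rng.standard_normal(200),   # small noise in z
])
print(line_direction(pts, np.zeros(3), scale=0.5))
```

A bidirectional tracer in the paper's spirit would repeatedly step from a starting point along this direction and along its negation, re-estimating the direction at each new position.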
The nucleosynthesis of elements beyond iron is dominated by neutron captures in the s and r processes. However, 32 stable, proton-rich isotopes cannot be formed during those processes, because they are shielded from the s-process flow and from the r-process β-decay chains. These nuclei are attributed to the p and rp processes.
For all those processes, current research in nuclear astrophysics addresses the need for more precise reaction data involving radioactive isotopes. Depending on the particular reaction, direct or inverse kinematics and the forward or time-reversed direction are investigated to determine, or at least constrain, the desired reaction cross sections.
The Facility for Antiproton and Ion Research (FAIR) will offer unprecedented opportunities to investigate many of these important reactions. The high yield of radioactive isotopes, even far from the valley of stability, allows the investigation of isotopes involved in processes as exotic as the r and rp processes.