000 Computer science, information science, general works
"... der Wissenschaft einen Tempel bauen" ("... building a temple to science"): on the 300th birthday of Johann Christian Senckenberg
(2006)
The photo on the homepage of Prof. Dr. Norman Davis shows a mischievously smiling, white-bearded man. He has pushed his glasses jauntily down to the tip of his nose. The US-American neurobiologist sits relaxed, in a blue-grey patterned polo shirt, at his microscope in the Division of Neurobiology at the University of Arizona in Tucson. Davis is a research professor there on the team of Prof. Dr. John Hildebrand. He used to hold a chair at one of the renowned East Coast universities. Yet even at an advanced age he gave no thought to retiring. Instead, he now carries on his research as a perfectly ordinary member of the team, without any extravagances. ...
Anyone who wants to explore our planet has to get moving. For almost all of the scientists who present results of their work in this issue, research expeditions are naturally part of the job. That they gather not only purely research-related but also very personal experiences along the way enriches their lives. And they have wonderful little stories to tell, which we do not want to withhold from our readers. ...
In America, crack, a smokable form of cocaine, has been consumed since the mid-1980s. In Germany, people believed themselves safe from this "ghetto drug". But since the mid-1990s there have also been crack scenes in Frankfurt and Hamburg. In the Main metropolis it was at first a small smoking scene, separate from the heroin addicts, but by 2002 crack had completely displaced powder cocaine and had even pushed heroin, until then the most widely used drug, into second place. Today, 60 percent of the junkies in the Frankfurt scene consume heroin several times a week, but over 80 percent (often the same addicts) also consume crack several times a week. Because this drug's kick is strong but short-lived, so that the junkies never feel sated, the addicts are driven by an enormous restlessness. Observations and interviews with those affected show how consumption of this drug aggravates the lives of the junkies and thereby affects the entire scene.
We consider isolated spelling error correction as a specific subproblem of the more general string-to-string translation problem. In this context, we investigate four general string-to-string transformation models that have been suggested in recent years and apply them within the spelling error correction paradigm. In particular, we examine how a simple 'k-best decoding plus dictionary lookup' strategy performs in this setting and find that such an approach can significantly outperform baselines such as edit distance, weighted edit distance, and the noisy-channel model of Brill and Moore for spelling error correction. We also consider elementary combination techniques for our models, such as language-model-weighted majority voting and center-string combination. Finally, we consider real-world OCR post-correction on a dataset sampled from medieval Latin texts.
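The edit-distance baseline mentioned above can be sketched in a few lines. The toy Python illustration below is not any of the paper's four transformation models; it simply ranks dictionary words by plain Levenshtein distance and keeps the k best candidates, and the function names are invented for illustration.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming (one-row version)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution / match
        prev = cur
    return prev[-1]

def correct(word: str, dictionary: list[str], k: int = 2) -> list[str]:
    """Return the k best dictionary candidates, ranked by edit distance."""
    return sorted(dictionary, key=lambda w: edit_distance(word, w))[:k]
```

A weighted variant would replace the unit costs with learned character-confusion costs, which is where the trained models enter.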
After a short introduction to traditional image transform coding, multirate systems, and multiscale signal coding, the paper focuses on image encoding by a neural network. Also taking noise into account, a network model is proposed which not only learns the optimal localized basis functions for the transform but also learns to implement a whitening filter through multi-resolution encoding. A simulation demonstrating the multi-resolution capabilities concludes the contribution.
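As a minimal illustration of multiscale signal coding (not the proposed neural network itself), a plain-Python Haar decomposition splits a signal into a coarse approximation plus detail bands at each scale; the helper names are invented:

```python
def haar_step(signal):
    """One level of the Haar transform: pairwise averages (coarse) and differences (detail)."""
    avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    dets = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs, dets

def haar_multires(signal, levels):
    """Recursively decompose into a coarse approximation plus one detail band per level."""
    details = []
    for _ in range(levels):
        signal, det = haar_step(signal)
        details.append(det)
    return signal, details
```

In a coding context, the coarse band would be kept at full precision while the detail bands are quantized more aggressively.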
Acceleration of Biomedical Image Processing and Reconstruction with FPGAs
Increasing chip sizes and better programming tools have made it possible to push the boundaries of application acceleration with reconfigurable computer chips. In this thesis, the potential for acceleration with Field Programmable Gate Arrays (FPGAs) is examined for applications in biomedical image processing and reconstruction. The dataflow paradigm was used to port the analysis of image data for localization microscopy and for 3D electron tomography from an imperative description to the FPGA for the first time.
After the primitives of image processing on FPGAs are presented, a general workflow is given for analyzing imperative source code and converting it into a hardware pipeline in which every node processes image data in parallel. This theoretical foundation is then used to accelerate both example applications. For localization microscopy, a speedup factor of 185 compared to an Intel i5 450 CPU was achieved, and electron tomography was sped up by a factor of 5 over an Nvidia Tesla C1060 graphics card, while maintaining full accuracy in both cases.
Magnetoencephalography (MEG) measures neural activity non-invasively and at an excellent temporal resolution. Since its invention (Cohen, 1968, 1972), MEG has proven a most valuable tool in neurocognitive (Salmelin et al., 1994) and clinical research (Stufflebeam et al., 2009; Van ’t Ent et al., 2003). MEG is able to measure rapid changes in electrophysiological neural signals related to sensory and cognitive processes. The magnetic fields measured outside the head by MEG directly reflect the cortical currents generated by the synchronised activity of thousands of neuronal sources. This distinguishes MEG from functional magnetic resonance imaging (fMRI), where measurements are only indirectly related to electrophysiological activity through neurovascular coupling...
Biological ageing is a degenerative and irreversible process, ultimately leading to the death of the organism. The process is complex and under the control of genetic, environmental, and stochastic factors. Although many theories have been established over the last decades, none of them fully describes the complex mechanisms that lead to ageing. In general, biological processes and environmental factors cause molecular damage and an accumulation of impaired cellular components. Counteracting surveillance systems, including the repair, remodelling, and degradation of damaged or impaired components, work against this. Nevertheless, at some point these systems are no longer effective, either because the growing amount of molecular damage can no longer be removed efficiently or because the repair and removal mechanisms themselves become impaired. The organism finally declines and dies. To investigate and understand these counteracting mechanisms and the complex interplay of decline and maintenance, holistic, systems-biological investigations are required. Hence, the processes that lead to ageing in the fungal model organism Podospora anserina were analysed using various advanced bioinformatics methods. In contrast to many other ageing models, P. anserina exhibits a short lifespan and low biochemical complexity, and it is readily accessible to genetic manipulation.
To obtain a general overview of the different biochemical processes that are affected during ageing in P. anserina, an initial comprehensive investigation was carried out, aimed at revealing genes that are significantly regulated and expressed in an age-dependent manner. This investigation was based on an age-dependent transcriptome analysis. Sophisticated and comprehensive analyses revealed different age-related pathways and indicated that autophagy in particular may play a crucial role during ageing. For example, it was found that the expression of autophagy-associated genes increases in the course of ageing.
Subsequently, to investigate and characterise the autophagy pathway, its individual components, and their interactions, Path2PPI, a new bioinformatics approach, was developed. Path2PPI enables the prediction of protein-protein interaction networks of particular pathways by means of a homology-comparison approach and was applied to construct the protein-protein interaction network of autophagy in P. anserina.
The predicted network was extended with experimental data, comprising the transcriptome data as well as newly generated protein-protein interaction data obtained from a yeast two-hybrid analysis. Using different mathematical and statistical methods, the topological properties of the constructed network were compared with those of randomly generated networks to confirm its biological significance. In addition, based on this topological and functional analysis, the most important proteins were determined and functional modules were identified that correspond to the different sub-pathways of autophagy. Thanks to the integrated transcriptome data, the autophagy network could be linked to the ageing process. For example, several proteins were identified whose genes are continuously up- or down-regulated during ageing, and it was shown for the first time that autophagy-associated genes are significantly often co-expressed during ageing.
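The comparison against random networks can be sketched generically: compute a topological statistic of the observed network, here the mean clustering coefficient, and compare it against null-model graphs with matching node and edge counts. This plain-Python illustration uses invented helper names and is not Path2PPI's actual statistical procedure:

```python
import random
from itertools import combinations

def clustering(adj):
    """Mean local clustering coefficient of an undirected graph given as {node: set_of_neighbours}."""
    coeffs = []
    for node, nbrs in adj.items():
        if len(nbrs) < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
        coeffs.append(2 * links / (len(nbrs) * (len(nbrs) - 1)))
    return sum(coeffs) / len(coeffs)

def random_graph(nodes, n_edges, seed=0):
    """Erdos-Renyi-style null model with the same node and edge counts."""
    rng = random.Random(seed)
    adj = {n: set() for n in nodes}
    while sum(len(s) for s in adj.values()) // 2 < n_edges:
        u, v = rng.sample(nodes, 2)
        adj[u].add(v); adj[v].add(u)
    return adj
```

If the observed coefficient sits far outside the distribution over many null-model draws, the network's modular structure is unlikely to be random.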
The presented biological network provides a systems-biological view of autophagy and enables further studies aimed at analysing the relationship between autophagy and ageing. Furthermore, it allows the investigation of potential methods for intervening in the ageing process and extending the healthy lifespan of P. anserina as well as of other eukaryotic organisms, in particular humans.
Had the term "brain drain" already been known in the first half of the last century, it would certainly not have referred to German scientists, for the intellectual elite was not yet leaving its homeland in droves. On the contrary! Back then, the international scientific elite followed the call to Germany, because researchers of worldwide distinction worked and taught here. That was also true of Frankfurt University. Names such as Paul Ehrlich, Franz Oppenheimer, and Friedrich Dessauer stand for top-flight research that drew foreign students and scientists to the Main metropolis, until the Nazi regime's persecution of Jewish scientists put an abrupt end to this golden age and many researchers had to flee abroad, above all to the USA.
A few days ago, the 50th visiting poetics lectureship at the University of Frankfurt came to a close. When Elisabeth Borchers spoke in lecture hall VI about "worlds of light" ("Lichtwelten"), she embodied the principle of these "lectures": for 44 years, well-known German-language authors have been speaking in Frankfurt about literature and about their own conceptions of it. Over the years, the "Frankfurter Poetikdozentur" has become a trademark that the literary life of Frankfurt, and of Germany, could no longer do without. Reason enough for a brief look back at how it all began.
We present an implementation of an interpreter, LRPi, for the call-by-need calculus LRP, based on a variant of Sestoft's abstract machine Mark 1 and extended with an eager garbage collector. It is used as a tool for exact space-usage analyses in support of our investigations into space improvements of call-by-need calculi.
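Call-by-need evaluation itself, sharing a suspended computation and updating it with its value on first demand, can be illustrated with a memoized thunk. This toy Python class is not the LRPi machine; dropping the closure after evaluation only loosely mirrors the eager garbage collection of suspensions mentioned above:

```python
class Thunk:
    """A suspended computation that is evaluated at most once (call-by-need)."""
    def __init__(self, compute):
        self.compute = compute
        self.evaluated = False
        self.value = None

    def force(self):
        if not self.evaluated:
            self.value = self.compute()   # evaluate on first demand
            self.evaluated = True
            self.compute = None           # drop the closure: the suspension becomes garbage
        return self.value

calls = []
t = Thunk(lambda: calls.append("run") or 42)
```

Under call-by-name the computation would rerun on every demand; the `evaluated` flag is exactly the update step that distinguishes call-by-need, and discarding `compute` early is what an eager collector exploits to reduce space usage.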
Driven by rapid technological advancements, the amount of data that is created, captured, communicated, and stored worldwide has grown exponentially over the past decades. Along with this development, it has become critical for many disciplines of science and business to be able to gather and analyze large amounts of data. The sheer volume of the data often exceeds the capabilities of classical storage systems; as a result, current large-scale storage systems are highly distributed and consist of a large number of individual storage components. As with any other electronic device, the reliability of storage hardware is governed by certain probability distributions, which in turn are influenced by the physical processes used to store the information. The traditional way to deal with the inherent unreliability of combined storage systems is to replicate the data several times. Another popular approach to achieving failure tolerance is to calculate the block-wise parity in one or more dimensions. With a better understanding of the different failure modes of storage components, it has become evident that sophisticated high-level error detection and correction techniques are indispensable for ever-growing distributed systems. The use of powerful cyclic error-correcting codes, however, comes with a high computational penalty, since the required operations over finite fields do not map well onto current commodity processors. This thesis introduces a versatile coding scheme with fully adjustable fault tolerance that is tailored specifically to modern processor architectures. To reduce stress on the memory subsystem, the conventional table-based algorithm for multiplication over finite fields has been replaced with a polynomial version.
This arithmetically intense algorithm is better suited to the wide SIMD units of currently available general-purpose processors, and it also shows significant benefits on modern many-core accelerator devices (for instance, the popular general-purpose graphics processing units). A CPU implementation using SSE and a GPU version using CUDA are presented. The performance of the multiplication depends on the distribution of the polynomial coefficients in the finite field elements. This property has been used to create suitable matrices that generate a linear systematic erasure-correcting code with significantly increased multiplication performance for the relevant matrix elements. Several approaches for obtaining the optimized generator matrices are elaborated and their implications discussed. A Monte-Carlo-based construction method makes it possible to influence the specific shape of the generator matrices and thus to adapt them to special storage and archiving workloads. Extensive benchmarks on CPU and GPU demonstrate the superior performance and the future application scenarios of this novel erasure-resilient coding scheme.
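Table-free polynomial multiplication over a finite field can be sketched as carry-less shift-and-XOR with modular reduction. The Python sketch below works in GF(2^8) with the AES reduction polynomial for concreteness; the thesis targets vectorized SIMD/GPU variants of this idea, and the exact field and polynomial there may differ:

```python
def gf256_mul(a: int, b: int, poly: int = 0x11B) -> int:
    """Multiply two GF(2^8) elements by carry-less shift-and-XOR,
    reducing modulo x^8 + x^4 + x^3 + x + 1 (0x11B, the AES polynomial)."""
    result = 0
    while b:
        if b & 1:
            result ^= a          # add (XOR) the current partial product
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= poly            # reduce as soon as the degree reaches 8
    return result
```

Unlike a 64 KiB lookup table, this loop touches no memory beyond its registers, which is why it vectorizes well across wide SIMD lanes and GPU threads.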