Despite advances in myocardial reperfusion therapies, acute myocardial ischaemia/reperfusion injury and consequent ischaemic heart failure represent the number one cause of morbidity and mortality in industrialized societies. Although different therapeutic interventions have been shown to be beneficial in preclinical settings, an effective cardioprotective or regenerative therapy has yet to be successfully introduced in the clinical arena. Given the complex pathophysiology of the ischaemic heart, large-scale, unbiased, global approaches capable of identifying multiple branches of the signalling networks activated in the ischaemic/reperfused heart might be more successful in the search for novel diagnostic or therapeutic targets. High-throughput techniques allow high-resolution, genome-wide investigation of genetic variants, epigenetic modifications, and associated gene expression profiles. Platforms such as proteomics and metabolomics (not described here in detail) also offer simultaneous readouts of hundreds of proteins and metabolites. Isolated omics analyses usually produce Big Data requiring large data storage, advanced computational resources, and complex bioinformatics tools. The possibility of integrating different omics approaches gives new hope of better understanding the molecular circuitry activated by myocardial ischaemia, putting it in the context of the human ‘diseasome’. Since modifications of cardiac gene expression have been consistently linked to the pathophysiology of the ischaemic heart, the integration of epigenomic and transcriptomic data seems a promising approach to identify crucial disease networks. Thus, the scope of this Position Paper will be to highlight the potential and limitations of these approaches, and to provide recommendations to optimize the search for novel diagnostic or therapeutic targets for acute ischaemia/reperfusion injury and ischaemic heart failure in the post-genomic era.
The value of plant ecological datasets with hundreds or thousands of species is principally determined by the taxonomic accuracy of their plant names. However, combining existing lists of species to assemble a harmonized dataset that is clean of taxonomic errors can be a difficult task for non-taxonomists. Here, we describe the range of taxonomic difficulties likely to be encountered during dataset assembly and present an easy-to-use taxonomic cleaning protocol aimed at assisting researchers not familiar with the finer details of taxonomic cleaning. The protocol produces a final dataset (FD) linked to a companion dataset (CD), providing clear details of the path from existing lists to the FD taken by each cleaned taxon. Taxa are checked off against ten categories in the CD that succinctly summarize all taxonomic modifications required. Two older, publicly available lists of naturalized Asteraceae in Australia were merged into a harmonized dataset as a case study to quantify the impacts of ignoring the critical process of taxonomic cleaning in invasion ecology. Our FD of naturalized Asteraceae contained 257 species and infra-species. Without implementation of the full cleaning protocol, the dataset would have contained 328 taxa, overestimating taxon richness by 71 taxa (28%). Our naturalized Asteraceae CD described the exclusion of 88 names due to nomenclatural issues (e.g. synonymy), the inclusion of 26 updated currently accepted names and four taxa newly naturalized since the production of the source datasets, and the exclusion of 13 taxa that were either found not to be in Australia or were in fact doubtfully naturalized. This study also supports the notion that automated processes alone will not be enough to ensure taxonomically clean datasets, and that manual scrutiny of data is essential. In the long term, this will best be supported by increased investment in taxonomy and botany in university curricula.
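The FD/CD idea described above can be sketched in a few lines: merge two source lists, resolve each name against a lookup of accepted names, and record the cleaning action taken for every input name. This is a minimal illustrative sketch only; the synonym table, the excluded-taxon set, and all species names below are hypothetical examples standing in for real taxonomic reference data, and the paper's ten CD categories are reduced here to four.

```python
# Hypothetical reference data (stand-ins for a real taxonomic backbone).
SYNONYMS = {
    "Aster novi-belgii": "Symphyotrichum novi-belgii",  # superseded name
}
NOT_NATURALIZED = {"Taraxacum hypotheticum"}  # e.g. doubtfully naturalized


def clean(list_a, list_b):
    """Merge two species lists into a final dataset (FD) and a
    companion dataset (CD) logging the action taken for each name."""
    fd, cd = set(), []
    for name in list_a + list_b:
        if name in NOT_NATURALIZED:
            cd.append((name, "excluded: not/doubtfully naturalized"))
            continue
        accepted = SYNONYMS.get(name, name)
        if accepted != name:
            cd.append((name, f"synonym of {accepted}"))
        elif accepted in fd:
            cd.append((name, "duplicate across source lists"))
        else:
            cd.append((name, "accepted unchanged"))
        fd.add(accepted)
    return sorted(fd), cd


fd, cd = clean(
    ["Aster novi-belgii", "Sonchus oleraceus"],
    ["Symphyotrichum novi-belgii", "Sonchus oleraceus", "Taraxacum hypotheticum"],
)
# The five input names collapse to two accepted taxa in the FD, while the
# CD retains one line per input name explaining its fate.
```

The key design point, mirroring the protocol, is that nothing is silently dropped: every exclusion, synonymization, or deduplication leaves a traceable CD entry, which is what allows the overestimation of taxon richness to be quantified afterwards.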
Mapping a public discourse with the tools of computational text analysis comes with many contingencies in the areas of corpus curation, data processing and analysis, and visualisation. However, the complexity of algorithmic assemblies and the beauty of resulting images give the impression of ‘objectivity’. Instead of concealing uncertainties and artefacts in order to tell a coherent and all-encompassing story, retaining the variety of alternative assemblies may actually strengthen the method. By utilising the mobility of digital devices, we could create mutable mobiles that allow access to our laboratories and enable challenging rearrangements and interpretations.
Background: The digital transformation of the healthcare system is changing the medical profession. Data literacy is regarded as one of the leading future competencies in this context, yet it currently receives no attention either in the implemented medical school curricula or in the ongoing reform processes (Masterplan Medizinstudium 2020 and Nationaler Kompetenzbasierter Lernzielkatalog).
Objective: This article aims, first, to illuminate the aspects bundled in the concept of data literacy in the medical context. Second, it presents a teaching concept that, for the first time, embeds data literacy in the medical curriculum in the context of the digital transformation.
Materials and methods: The blended-learning curriculum „Medizin im digitalen Zeitalter“ ("Medicine in the Digital Age") addresses in five modules the diverse transformation process of medicine, ranging from digital communication through smart devices and medical apps, telemedicine, virtual/augmented and robotic surgery, to individualized medicine and Big Data. This paper presents the concept of, and experience with, the first implementation of the fifth module, which explains the aspect of data literacy in a transdisciplinary and integrative way.
Results: The course concept was evaluated both qualitatively and quantitatively, demonstrating a gain in competence in the areas of knowledge and skills as well as a more differentiated attitude after course completion.
Conclusions: The curricular integration of data literacy is a transdisciplinary and longitudinal task. When developing such curricula, the high speed of change driven by the digital transformation should be taken into account, and curricular adaptation in the sense of agility by design should be addressed already at the conception stage.
Research in the field of Digital Humanities, also known as Humanities Computing, has seen a steady increase over the past years. Situated at the intersection of computing science and the humanities, present efforts focus on making resources such as texts, images, musical pieces and other semiotic artifacts digitally available, searchable and analysable. To this end, computational tools enabling textual search, visual analytics, data mining, statistics and natural language processing are harnessed to support the humanities researcher. The processing of large data sets with appropriate software opens up novel and fruitful approaches to questions in the traditional humanities. This report summarizes the Dagstuhl seminar 14301 on “Computational Humanities - bridging the gap between Computer Science and Digital Humanities”.
1998 ACM Subject Classification I.2.7 Natural Language Processing, J.5 Arts and Humanities