Current research on theory and practice of digital libraries: best papers from TPDL 2019 & 2020
(2022)
This volume presents a special issue on selected papers from the 2019 & 2020 editions of the International Conference on Theory and Practice of Digital Libraries (TPDL). They cover different research areas within Digital Libraries, from Ontology and Linked Data to quality in Web Archives and Topic Detection. We first provide a brief overview of both TPDL editions, and we introduce the selected papers.
The exhibition in the University Library has been extended until 26 February 2023.
Biodiversity information is contained in countless digitized and unprocessed scholarly texts. Although automated extraction of these data has been gaining momentum for years, there are still innumerable text sources that are poorly accessible and require a more advanced range of methods to extract relevant information. To improve access to semantic biodiversity information, we have launched the BIOfid project (www.biofid.de) and have developed a portal to access the semantics of German-language biodiversity texts, mainly from the 19th and 20th century. However, to make such a portal work, several methods had to be developed or adapted first. In particular, text-technological information extraction methods were needed to extract the required information from the texts. Such methods draw on machine learning techniques, which in turn are trained on learning data. To this end, we gathered, among other resources, the BIOfid text corpus, a cooperatively built resource developed by biologists, text technologists, and linguists. A special feature of BIOfid is its multiple annotation approach, which takes into account both general and biology-specific classifications and thereby goes beyond previous, typically taxon- or ontology-driven proper name detection. We describe the design decisions and the genuine Annotation Hub Framework underlying the BIOfid annotations and present agreement results. The tools used to create the annotations are introduced, and the use of the data in the semantic portal is described. Finally, some general lessons are drawn, in particular regarding multiple annotation projects.
The Specialized Information Service Biodiversity Research (BIOfid) has been launched to mobilize valuable biological data hidden in printed literature in German libraries over the past 250 years. In this project, we annotate German texts converted by OCR from historical scientific literature on the biodiversity of plants, birds, moths and butterflies. Our work enables the automatic extraction of biological information previously buried in the mass of papers and volumes. For this purpose, we generated training data for the tasks of Named Entity Recognition (NER) and Taxa Recognition (TR) in biological documents. We use this data to train a number of leading machine learning tools and create a gold standard for TR in biodiversity literature. More specifically, we perform a practical analysis of our newly generated BIOfid dataset through various downstream-task evaluations and establish a new state of the art for TR with an F-score of 80.23%. In this sense, our paper lays the foundations for future work in the field of information extraction from biology texts.
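The F-score reported for Taxa Recognition combines precision and recall. As a quick reference, the sketch below shows how a generic entity-level F1 is computed; this is illustrative only, not the BIOfid evaluation code, and the example counts are invented:

```python
def f1(tp, fp, fn):
    """Entity-level F1 from counts of true positives (tp),
    false positives (fp) and false negatives (fn)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# A hypothetical system finding 8 of 10 gold taxa with 1 spurious hit:
print(round(f1(tp=8, fp=1, fn=2), 4))
```

Precision and recall both enter symmetrically, so a system must balance spurious and missed taxa to reach a high score such as the 80.23% reported above.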
The authors reflect on their experiences as the founding editors of the History of Knowledge blog. Situating the project in its specific institutional, geographical, and historiographical contexts, they highlight its role in scholarly communication and research alongside journals and books in a research domain that is still young, especially when viewed from an international perspective. At the same time, the authors discuss the blog’s role as a tool for classifying and structuring a corpus of work as it grows over time and as new themes and connections emerge from the contributions of its many authors.
In 23 survey areas with woodland vegetation or woodland succession in Frankfurt/Main with a total size of 134 hectares, woody species were surveyed (excluding species only occurring as planted individuals). We found 149 woody taxa; 42% of them indigenous, and 58% non-native. Out of the 86 non-native taxa, 49 were naturalized in Frankfurt while 37 were considered as casual. Among non-native taxa, East Asian taxa formed the largest phytogeographic group. We found taxa originating from horticulture (cultigens) to be an important part of the woody flora of Frankfurt/Main. The most common taxa were Acer pseudoplatanus, A. platanoides, Betula pendula, and Sambucus nigra; the two Acer species were regarded as naturalized. Non-native woody species were generally common (with percentages ranging from 24% to 79% in individual areas).
The scientific innovation process embraces the steps from problem definition through the development and evaluation of innovative solutions to their successful exploitation. The challenges imposed by this process can be answered by the creation of a powerful and flexible next-generation e-Science infrastructure, which exploits leading-edge information and knowledge technologies and enables a comprehensive and intelligent means of supporting this process. This paper describes our vision of a knowledge-based e-Science infrastructure, based on the results of an in-depth study of researchers' requirements. Furthermore, it introduces the Fraunhofer e-Science Cockpit as a first implementation of our vision.
The correspondence between the terminology used for querying and the terminology used in the content objects to be retrieved is a crucial prerequisite for effective retrieval technology. However, as terminology evolves over time, a growing gap opens up between older documents in (long-term) archives and the active language used for querying such archives. Thus, technologies for detecting and systematically handling terminology evolution are required to ensure "semantic" accessibility of (Web) archive content in the long run. As a starting point for dealing with terminology evolution, this paper formalizes the problem and discusses issues, first ideas and relevant technologies.
High impact events, political changes and new technologies are reflected in our language and lead to constant evolution of terms, expressions and names. Not knowing about names used in the past for referring to a named entity can severely decrease the performance of many computational linguistic algorithms. We propose NEER, an unsupervised method for named entity evolution recognition independent of external knowledge sources. We find time periods with high likelihood of evolution. By analyzing only these time periods using a sliding window co-occurrence method we capture evolving terms in the same context. We thus avoid comparing terms from widely different periods in time and overcome a severe limitation of existing methods for named entity evolution, as shown by the high recall of 90% on the New York Times corpus. We compare several relatedness measures for filtering to improve precision. Furthermore, using machine learning with minimal supervision improves precision to 94%.
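The sliding-window co-occurrence step described above can be illustrated with a minimal sketch. All names here are ours, not from the NEER implementation, and the real pipeline additionally selects high-change time periods and filters candidates with relatedness measures:

```python
from collections import Counter

def window_cooccurrences(tokens, target, window=5):
    """Count terms appearing within +/- `window` tokens of `target`.
    Terms sharing a context with a named entity in a period of likely
    evolution become candidate co-references for that entity."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            counts.update(t for t in tokens[lo:hi] if t != target)
    return counts

# Invented toy example: both names of one entity in a shared context.
tokens = ("pope benedict xvi formerly known as cardinal "
          "joseph ratzinger elected pope benedict").split()
print(window_cooccurrences(tokens, "ratzinger", window=3))
```

Because only documents from the detected change periods are scanned, terms from widely different eras are never compared directly, which is what keeps recall high without exploding the candidate set.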
We present a method for detecting word sense changes by utilizing automatically induced word senses. Our method works on the level of individual senses and allows a word to have, e.g., one stable sense and then add a novel sense that later experiences change. Senses are grouped based on polysemy to find linguistic concepts, and we can detect broadening and narrowing as well as novel (polysemous and homonymic) senses. We evaluate on a test set and report recall as well as estimates of the time between expected and detected change.
Web archives created by the Internet Archive (IA) (https://archive.org), national libraries and other archiving services contain large amounts of information collected for a time period of over twenty years. These archives constitute a valuable source for research in many disciplines, including the digital humanities and the historical sciences by offering a unique possibility to look into past events and their representation on the Web.
Most Web archive services aim to capture the entire Web (IA) or national top-level domains and are therefore broad in their scope, diverse regarding the topics they contain and the time intervals they cover. Due to their large size and broad scope, it is difficult for interested researchers to locate relevant information in the archives, as search facilities are very limited. Many users are more interested in studying smaller and topically coherent event-centric collections of documents contained in a Web archive [1,2]. Such collections can reflect specific events such as elections or natural disasters, e.g. the Fukushima nuclear disaster (2011) or the German federal elections.
BIOfid is a specialized information service currently being developed to mobilize biodiversity data dormant in printed historical and modern literature and to offer a platform for open access journals on the science of biodiversity. Our team of librarians, computer scientists and biologists produces high-quality text digitizations, develops new text-mining tools and generates detailed ontologies enabling semantic text analysis and semantic search by means of user-specific queries. In a pilot project we focus on German publications on the distribution and ecology of vascular plants, birds, moths and butterflies extending back to the Linnaeus period about 250 years ago. The three organism groups have been selected according to current demands of the relevant research community in Germany. The text corpus defined for this purpose comprises over 400 volumes with more than 100,000 pages to be digitized and will be complemented by journals from other digitization projects, copyright-free and project-related literature. With TextImager (Natural Language Processing & Text Visualization) and TextAnnotator (Discourse Semantic Annotation) we have already extended and launched tools that focus on the text-analytical section of our project. Furthermore, taxonomic and anatomical ontologies elaborated by us for the taxa prioritized by the project’s target group - German institutions and scientists active in biodiversity research - are constantly improved and expanded to maximize scientific data output. Our poster describes the general workflow of our project ranging from literature acquisition via software development, to data availability on the BIOfid web portal (http://biofid.de/), and the implementation into existing platforms which serve to promote global accessibility of biodiversity data.
The concept of culturomics was born out of the availability of massive amounts of textual data and the interest to make sense of cultural and language phenomena over time. Thus far, however, culturomics has only made use of, and shown the great potential of, statistical methods. In this paper, we present a vision for a knowledge-based culturomics that complements traditional culturomics. We discuss the possibilities and challenges of combining knowledge-based methods with statistical methods and address major challenges that arise due to the nature of the data: diversity of sources, changes in language over time, and temporal dynamics of information in general. We address all layers needed for knowledge-based culturomics, from natural language processing and relations to summaries and opinions.
The web and the social web play an increasingly important role as an information source for Members of Parliament and their assistants, journalists, political analysts and researchers. It provides important and crucial background information, like reactions to political events and comments made by the general public. The case study presented in this paper is driven by two European parliaments (the Greek and the Austrian parliament) and targets an effective exploration of political web archives. In this paper, we describe semantic technologies deployed to ease the exploration of the archived web and social web content and present evaluation results.
The World Wide Web is the largest information repository available today. However, this information is very volatile and Web archiving is essential to preserve it for the future. Existing approaches to Web archiving are based on simple definitions of the scope of Web pages to crawl and are limited to basic interactions with Web servers. The aim of the ARCOMEM project is to overcome these limitations and to provide flexible, adaptive and intelligent content acquisition, relying on social media to create topical Web archives. In this article, we focus on ARCOMEM’s crawling architecture. We introduce the overall architecture and we describe its modules, such as the online analysis module, which computes a priority for the Web pages to be crawled, and the Application-Aware Helper which takes into account the type of Web sites and applications to extract structure from crawled content. We also describe a large-scale distributed crawler that has been developed, as well as the modifications we have implemented to adapt Heritrix, an open source crawler, to the needs of the project. Our experimental results from real crawls show that ARCOMEM’s crawling architecture is effective in acquiring focused information about a topic and leveraging the information from social media.
The constantly growing amount of Web content and the success of the Social Web lead to increasing needs for Web archiving. These needs go beyond the pure preservation of Web pages. Web archives are turning into “community memories” that aim at building a better understanding of the public view on, e.g., celebrities, court decisions and other events. Due to the size of the Web, the traditional “collect-all” strategy is in many cases not the best method to build Web archives. In this paper, we present the ARCOMEM (From Collect-All Archives to Community Memories) architecture and implementation, which uses semantic information, such as entities, topics and events, complemented with information from the Social Web, to guide a novel Web crawler. The resulting archives are automatically enriched with semantic meta-information to ease access and allow retrieval based on conditions that involve high-level concepts.
The Specialised Information Service Performing Arts (SIS PA) is part of a funding programme by the German Research Foundation that enables libraries to develop tailor-made services for individual disciplines in order to provide researchers with direct access to relevant materials and resources from their field. For the field of performing arts, the SIS PA aggregates metadata about theater and dance resources, currently mostly from German-speaking cultural heritage institutions, in a VuFind-based search portal.
In this article, we focus on metadata quality and its impact on the aggregation workflow by describing the different, possibly data provider-specific, process stages of improving data quality in order to achieve a searchable, interlinked knowledge base. We also describe lessons learned and limitations of the process.
The Goethe University Frankfurt has updated its APC expenditures, providing data for the 2019 period.
The University Library Johann Christian Senckenberg is in charge of the University’s Open Access Publishing Fund, which is supported under the DFG’s Open Access Publishing Programme.
The contact person is Roland Wagner.
Biodiversity research heavily relies on recent and older literature, and the data contained therein. Despite great effort, large parts of the literature and the data it holds are still not available in appropriate formats needed for efficient compilation and analysis. As a part of the current funding strategy of the German Research Council (Deutsche Forschungsgemeinschaft, DFG), and resulting from an extensive dialogue with the scientific community in Germany, a "Specialised Information Service" (Fachinformationsdienst, FID) for Biodiversity Research will be established with the objective of making further segments of literature about biodiversity available in up-to-date formats. This project, starting 2017, is conducted by the University Library Johann Christian Senckenberg (Frankfurt/Main, Germany) together with the Senckenberg Gesellschaft für Naturforschung and the Text Technology Lab of the Goethe University (Frankfurt/Main).
The new Specialised Information Service for Biodiversity Research (FID Biodiversitätsforschung) comprises four core elements: (A) a text mining approach which encompasses advanced text technologies and a large body of 20th-century literature; (B) the digitisation of selected German biodiversity literature; (C) a platform for Open Access journals; and (D) the acquisition of specialised print literature.
In order to promote the accessibility of biodiversity data in historic and contemporary literature, we introduce a new interdisciplinary project called BIOfid (FID=Fachinformationsdienst, a service for providing specialized information). The project aims at a mobilization of data available in print only by combining digitization of scientific biodiversity literature with the development of innovative text mining tools for complex, eventually semantic searches throughout the complete text corpus. A major prerequisite for the development of such search tools is the provision of sophisticated anatomy ontologies on the one hand, and of complete lists of species names (currently considered valid as well as all synonyms) at a global scale on the other hand. In the initial stage, we chose examples from German publications of the past 250 years dealing with the geographic distribution and ecology of vascular plants (Tracheophyta), birds (Aves), as well as moths and butterflies (Lepidoptera) in Germany. These taxa have been prioritized according to current demands of German research groups (about 50 sites) aiming at analyses and modeling of distribution patterns and their changes through time. In the long term, we aim at providing data and open source software applicable for any taxon and geographic region. For this purpose, a platform for open access journals for long-term availability of professional e-journals will be established. All generated data will also be made accessible through GFBio (German Federation for Biological Data). BIOfid is supported by the LIS-Scientific Library Services and Information Systems program of the German Research Foundation (DFG).
This paper introduces a novel research tool for the field of linguistics: the Lin|gu|is|tik web portal provides a virtual library which offers scientific information on every linguistic subject. It comprises selected internet sources and databases as well as catalogues for linguistic literature, and addresses an interdisciplinary audience. The virtual library is the most recent outcome of the Special Subject Collection Linguistics of the German Research Foundation (DFG), and also integrates the knowledge accumulated in the Bibliography of Linguistic Literature. In addition to the portal, we describe long-term goals and prospects with a special focus on ongoing efforts regarding an extension towards integrating language resources and Linguistic Linked Open Data.
Europeana provides a common access point to digital cultural heritage objects across different cultural domains, among them libraries. The recent development of the Europeana Data Model (EDM) provides new ways for libraries to experiment with Linked Data. Indeed, the model is designed as a framework reusing various well-known standards developed in the Semantic Web community, such as the Resource Description Framework (RDF), the OAI Object Reuse and Exchange (ORE) specification, and the Dublin Core namespaces. It provides new opportunities for libraries to deliver rich and interlinked metadata to the Europeana aggregation.
However, to be able to provide data to Europeana, libraries need to create mappings from the library standard to EDM. This step involves decisions based on domain-specific requirements and on the possibilities offered by EDM. As the cross-domain nature of EDM limits in some cases the completeness of the mappings, extensions of the model have been proposed to accommodate the needs of libraries.
The "Digitised Manuscripts to Europeana" project (DM2E) has created an extension of EDM to optimise the mappings of library data for manuscripts. This extension takes the form of subclasses and subproperties that further specialise EDM classes and properties. It includes spatial creation and publishing information, specific contributor and publication type properties, and more.
Furthermore, the granularity of the mapping has been extended to allow references and annotations at page level, as required for scholarly work. As part of this project, the metadata of the Hebrew Manuscripts as well as of the Medieval Manuscripts presented in the Digital Collections of the Frankfurt University Library have been mapped to this extension. This includes links to the Integrated Authority File (GND) of the German National Library with further links to the Virtual International Authority File (VIAF).
Based on this development, a new comprehensive mapping from the digitisation metadata format METS/MODS to EDM has been established for all materials of the Frankfurt Judaica in "Judaica Europeana". It demonstrates today’s capabilities for creating Linked Data structures in Europeana based on library catalogue data and structural data from the digitisation process.
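A metadata mapping of this kind can be thought of as a table of rules from source fields to target properties. The sketch below is purely illustrative: the field paths and property names are simplified assumptions of ours, not the actual DM2E or METS/MODS-to-EDM mapping rules, and real mappings operate on XML/RDF rather than flat dictionaries:

```python
# Hypothetical rule table: MODS-like field paths -> EDM/Dublin Core-like
# property names. Real mappings are far richer and context-dependent.
MODS_TO_EDM = {
    "titleInfo/title": "dc:title",
    "name/namePart": "dc:creator",
    "originInfo/dateIssued": "dcterms:issued",
    "physicalDescription/extent": "dcterms:extent",
}

def map_record(mods_record):
    """Apply the rule table; fields without a rule are dropped,
    which is where completeness of a mapping can be lost."""
    return {MODS_TO_EDM[k]: v for k, v in mods_record.items()
            if k in MODS_TO_EDM}

record = {"titleInfo/title": "Hebrew Manuscript X",
          "originInfo/dateIssued": "1450",
          "note": "binding damaged"}
print(map_record(record))
```

The dropped `note` field illustrates why extensions such as the DM2E subproperties matter: without a more specific target property, source information either disappears or is flattened into a generic field.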
Cultural heritage reconstructed - Compact Memory and the Frankfurt Digital Judaica Collection
(2014)
Compact Memory, the internet archive of German Jewish periodicals, provides free global internet access to the vast majority of German-Jewish newspapers and periodicals of the 19th and 20th century.
Jewish historical newspapers are invaluable sources that supply direct and detailed information on the transformation process of Jewry and offer new insights into European Jewish history. The use of these historical sources, however, is extremely difficult, as complete sets of periodicals are rare and scattered all over the world in different libraries and archives and in different physical formats (paper, microfilm).
Compact Memory contains the 110 most important Jewish German newspapers and periodicals in Central Europe in the period from 1806 to 1938, covering the complete range of religious, political, social, cultural and academic aspects of Jewish life. The texts are available partly as full texts processed by OCR, partly as graphic documents with corresponding index options. The database offers advanced search options and the downloading and printing of articles. Thousands of essays by more than 10,000 individual contributors have been bibliographically indexed.
Compact Memory was established by the Judaica Division of the University Library Frankfurt am Main and is maintained today in cooperation with the Aachen Chair of German-Jewish Literary History and the Cologne library Germania Judaica.
Compact Memory is one database within the Digital Collection Judaica, which, as part of Europeana and other digital portals, offers resources for the reconstruction and representation of Jewish cultural heritage.
"Library Buildings around the World" is a survey based on several years of research. The objective was to gather library buildings at an international level, starting with 1990.
The parts Germany, France, United Kingdom, United States have been thoroughly revised, supplemented and completed for this 2nd edition. A revision of the other countries is planned for the next edition.
Management Summary: Conducted within the project “Economic Implications of New Models for Information Supply for Science and Research in Germany”, the Houghton Report for Germany provides a general cost and benefit analysis for scientific communication in Germany comparing different scenarios according to their specific costs and explicitly including the German National License Program (NLP).
Based on the scholarly lifecycle process model outlined by Björk (2007), the study compared the following scenarios according to their accounted costs:
- Traditional subscription publishing,
- Open access publishing (Gold Open Access; refers primarily to journal publishing where access is free of charge to readers, while the authors or funding organisations pay for publication)
- Open Access self-archiving (authors deposit their work in online open access institutional or subject-based repositories, making it freely available to anyone with Internet access; further divided into (i) ‘Green Open Access’ self-archiving operating in parallel with subscription publishing; and (ii) the ‘overlay services’ model in which self-archiving provides the foundation for overlay services (e.g. peer review, branding and quality control services))
- the NLP.
Within all scenarios, five core activity elements (fund research and research communication; perform research and communicate the results; publish scientific and scholarly works; facilitate dissemination, retrieval and preservation; study publications and apply the knowledge) were modelled and priced with all the activities they comprise.
Modelling the impacts of an increase in accessibility and efficiency resulting from more open access on returns to R&D over a 20 year period and then comparing costs and benefits, we find that the benefits of open access publishing models are likely to substantially outweigh the costs and, while smaller, the benefits of the German NLP also exceed the costs.
This analysis of the potential benefits of more open access to research findings suggests that different publishing models can make a material difference to the benefits realised, as well as the costs faced. It seems likely that more Open Access would have substantial net benefits in the longer term and, while net benefits may be lower during a transitional period, they are likely to be positive for both ‘author-pays’ Open Access publishing and the ‘overlay journals’ alternatives (‘Gold Open Access’), and for parallel subscription publishing and self-archiving (‘Green Open Access’). The NLP returns substantial benefits and savings at a modest cost, returning one of the highest benefit/cost ratios available from unilateral national policies during a transitional period (second to that of ‘Green Open Access’ self-archiving). Whether ‘Green Open Access’ self-archiving in parallel with subscriptions is a sustainable model over the longer term is debatable, and what impact the NLP may have on the take-up of Open Access alternatives is also an important consideration. So too is the potential for developments in Open Access or other scholarly publishing business models to significantly change the relative cost-benefit of the NLP over time.
The results are comparable to those of previous studies from the UK and the Netherlands. Green Open Access in parallel with the traditional model yields the best benefit/cost ratio. Besides its benefit/cost ratio, the case for the NLP rests on its enforceability. The true cost of toll-access publishing (besides the "buyback" of information) is the denial of access to research and knowledge for society.
New projects, services and collaborations have recently brought the infrastructural services for African Studies a big step forward. This report gives an account of new subject gateways and digitisation projects. It discusses recent European cooperation ventures in the field of librarianship. Additionally, new developments and services of the Africa Collection at Frankfurt University Library are presented, which help to address the changing needs of researchers and to handle information overload, while keeping up with the latest developments. Nevertheless, the fragmentation and compartmentalisation of the different services still hinder more integrated information services.
Lecture held as part of the symposium of the Frankfurt am Main University Library, in cooperation with the Frankfurt Book Fair 2011, "Economy and Acceptance of Open Access Strategies", on 14 October 2011.
Using faculty-librarian partnerships to ensure that students become information fluent in the 21st century
In the 21st century, educators in partnership with librarians must prepare students effectively for productive use of information, especially in higher education. Students will need to graduate from universities with appropriate information and technology skills to enable them to become productive citizens in the workplace and in society. Technology is having a major impact on society: in economics, e-business is moving to the forefront; in communication, e-mail, the Internet and cellular telephones have transformed how people communicate; in the work environment, computers and web applications are emphasized; and in education, virtual learning and teaching are becoming more important. These few examples indicate how the 21st-century information environment requires future members of the workforce to be information fluent, so that they will have the ability to locate information efficiently, evaluate information for specific needs, organize information to address issues, apply information skillfully to solve problems, use information to communicate effectively, and use information responsibly to ensure a productive work environment. Individuals can achieve information fluency by acquiring cultural, visual, computer, technology, research and information management skills that enable them to think critically.
Teaching information literacy: substance and process
This presentation explores the concept of information literacy within the broader context of higher education. It argues that, certain assertions in the library literature notwithstanding, the concepts associated with information literacy are not new, but rather very closely resemble the qualities traditionally considered to characterize a well-educated person. The presentation also considers the extent to which the higher education system does indeed foster the attributes commonly associated with information literacy. The term information literacy has achieved the immediacy it currently enjoys within the library community with the advent of the so-called "information age". The information age is commonly touted in the literature, both popular and professional, as constituting nothing short of a revolution. Academic librarians and other educators have of course felt called upon to make their teaching reflect both the growing proliferation of information formats and the major transformations affecting the process of information seeking. Faced with so much novelty and uncertainty, it is no surprise that many have felt that these changes call for a revolution in teaching. It is within this context that the concept of information literacy has flourished. It is argued in this presentation, however, that by treating information literacy as an essentially new specialty that owes much of its importance to the plethora of electronic information, we risk obscuring some of the most fundamental and enduring educational values we should be imparting to our students. Much of the literature on information literacy assumes, rather than argues, that recent changes in the way we approach education are indications of progress. Indeed, much of the self-narrative that institutions produce (in bulletins, mission statements, web sites, etc.) endorses an approach to education that will result in lifelong learners who are critical consumers of information. After critically examining the degree to which such statements of educational approach reflect reality, this presentation concludes by considering the effects of certain changes in the culture of higher education. It considers particularly the transformation, at least in North America, of the traditional model of higher education from a public good to a market-driven business model. It poses the question of whether a change of this significance might in fact detract from, rather than promote, the development of information-literate students.
The emperor's new colonies
(2008)
The Colonial Picture Archive in Frankfurt offers a unique pictorial record of German colonial history. For many years the collection was virtually forgotten. However, following painstaking description and digitisation, the photographic documents are now available on the Internet to researchers in Germany and abroad.
On 3-4 November 2008, the following conference took place in Frankfurt am Main: 21st Century Libraries: Changing Forms, Changing Challenges, Changing Objectives = 8th Frankfurt Scientific Symposium. It was organised by the Universitätsbibliothek Johann Christian Senckenberg in cooperation with the Deutsches Architekturmuseum (Frankfurt am Main) and the Akademie der Architekten- und Stadtplanerkammer Hessen (Wiesbaden). The 8th Frankfurt Symposium puts contemporary library architecture up for discussion, together with the developments and problems of current library building. Several theoretical and technical contributions round off the programme. Two central themes of the symposium will be the integration of library buildings into their urban surroundings and the effects of socio-political and technological developments on the architecture of libraries.
Contents
- BIX: pole position and runner-up
- Frankfurt University Library: its responsibilities, its collections, its databases, its supra-regional collecting responsibilities, and some statistics
- The "Sondersammelgebiet" Germanistik: its scope and contents, its principal strengths, present situation, and budget
- Sammlung Deutscher Drucke: the 1801-1870 segment of the "Distributed National Library"
- Information Services: Bibliographie der deutschen Sprach- und Literaturwissenschaft (BDSL), Neuerwerbungsliste Germanistik, Bibliographie germanistischer Bibliographien (BgB), DigiZeitschriften, information bulletins
- Work of the Subject Specialist: exhibitions, publicity material
In several academic fields (most notably: physics, mathematics, economics, astronomy, and computer science), most current research papers are freely accessible on the Internet in both pre- and post-publication formats. For these disciplines, open-access dissemination of publications and data has created a robust and useful information environment that is highly valued by researchers. While the acceptance of open-access dissemination has been disruptive to traditional scholarly publishing, the status and economic value of the elite journals has remained largely intact. Indeed, publication in the most prestigious journals (e.g., Science, Nature, Cell, BMJ, etc.) may have more influence than ever in determining the advancement of academic careers. Traditional publishing and open access will continue to coexist uncomfortably for years to come, but the next wave of digital publishing systems (empowered social networking applications) will establish open access repositories as indispensable infrastructure for the sciences and social sciences.
University 2.0
(2007)
The major challenge facing universities in the next decade is to reinvent themselves as information organizations. Universities are, at their core, organizations that cultivate knowledge, seeking both to create new knowledge and to preserve and convey existing knowledge, but they are remarkably inefficient and therefore ineffective in the way that they leverage their own information resources to advance that core activity. This talk will explore ways that the university could learn from what is now widely called "Web 2.0" -- a term that is meant to identify a shift in emphasis from the computer as platform to the network as platform, from hardware to data, from the wisdom of the expert to the wisdom of crowds, and from fixity to remixability.
Universities of the 21st century depend heavily on an efficient IT infrastructure for teaching, research and administration. E-learning environments, blended learning and all sorts of multimedia and cooperative environments are important requirements for teaching at universities and for further education. Many organizational structures, such as continuous examinations, interdisciplinary studies, the ECTS system and many more, require efficient examination administration systems as well as room and personnel management. Research is based on Internet inquiries, eScience, eLibrary and other IT-supported media. Research results must be documented and archived digitally, and results must be distributed and marketed through the Internet. The efficient administration of all kinds of university resources must be planned using management support systems. Decisions of university heads must be prepared from well-documented statistics and analysis software. In the past, many of the applications named above for teaching, research and administration have been performed by separate software applications and run in distributed environments of universities. Powerful server structures and networking features, as well as new software technology like service-oriented architectures, make it necessary to recentralize the IT services of the university after a long period of decentralization. Based on metadirectories and unified access procedures, all of the software components must be integrated into a seamless IT infrastructure. To guarantee consistency, data must not be stored redundantly. Project IntegraTUM of Technische Universität München started in 2003 and is an umbrella project to define such a seamless IT infrastructure for a university with 22,000 students and approximately 10,000 staff.
The talk describes the project, which besides the definition of new technology is based on a fundamental process analysis of the university and many changes in the organizational structure.
Working closely with teaching and research staff is critical to the success of libraries and information services. Indeed, the degree of integration with a University's academic work is one of the factors that distinguish a successful service from a poor one. This paper will consider the relationship between information services and how universities operate. Using the challenges facing institutions as a starting point - including the move towards a single European higher education market - the impact of information provision on institutional strategies will be explored. Information resources underpin all learning, teaching and research activities and the presentation will consider the professional practice which ensures that libraries and computing services are fully exploited. The focus on the experience of students is leading some institutions to integrate information services with a wide range of other activities and the paper will consider the opportunities and challenges which this brings, including the need to build working relationships with a broader range of professional groups.
Trends for distributed, open, and increasingly collaborative models of information delivery challenge the library's classic roles. In addition, trends within the research community for more interdisciplinary and collaborative scholarship create an opportunity for more enabling information infrastructure. In an age of Amazon, Google, and "social" tools, how should the library respond? My presentation will focus on strategies for bringing the library's "assets" into the flow of researchers' work. How can the library integrate its resources into the scholar's workflow? What are the emerging challenges of this integration?