Kagbeni and its irrigated oasis are surrounded by subdesert dwarf scrubland. In the present study, a list of 78 species of vascular plants is presented for Kagbeni and its immediate surroundings, supplemented with data on the distribution of the species within the entire Mustang District. The data derive from the authors' own investigations and from the geobotanical literature. A phytogeographical analysis shows the prevalence of western over eastern elements. Species with a wide distribution in Eurasia, which constitute one third of the total flora of Kagbeni, are of great importance as weeds on arable fields and in ruderal places within the irrigated oasis. Their occurrence is closely related to human activity. Presumably, most of these weeds reached the area under study in connection with agriculture a long time ago. Weeds from the New World, although recorded in other villages of Mustang District, have not been found in Kagbeni. The weed vegetation of Kagbeni is documented by nine vegetation relevés and is compared with relevés from Jomsom and Marpha. A floristic gradient from south to north, detected by earlier investigations throughout the whole district, can be reproduced at the local scale. With regard to the weed flora, the effects of different crops are minimal compared to the effects of altitude and of other factors related to altitude.
Teaching information literacy: substance and process
This presentation explores the concept of information literacy within the broader context of higher education. It argues that, certain assertions in the library literature notwithstanding, the concepts associated with information literacy are not new, but rather very closely resemble the qualities traditionally considered to characterize a well-educated person. The presentation also considers the extent to which the higher education system does indeed foster the attributes commonly associated with information literacy. The term information literacy has achieved the immediacy it currently enjoys within the library community with the advent of the so-called "information age." The information age is commonly touted in the literature, both popular and professional, as constituting nothing short of a revolution. Academic librarians and other educators have of course felt called upon to make their teaching reflect both the growing proliferation of information formats and the major transformations affecting the process of information seeking. Faced with so much novelty and uncertainty, it is no surprise that many have felt that these changes call for a revolution in teaching. It is within this context that the concept of information literacy has flourished. It is argued in this presentation, however, that by treating information literacy as an essentially new specialty that owes much of its importance to the plethora of electronic information, we risk obscuring some of the most fundamental and enduring educational values we should be imparting to our students. Much of the literature on information literacy assumes - rather than argues - that recent changes in the way we approach education are indications of progress. Indeed, much of the self-narrative that institutions produce (in bulletins, mission statements, web sites, etc.)
endorses an approach to education that will result in lifelong learners who are critical consumers of information. After critically examining the degree to which such statements of educational approach reflect reality, this presentation concludes by considering the effects of certain changes in the culture of higher education. It considers particularly the transformation - at least in North America - of the traditional model of higher education as a public good to a market-driven business model. It poses the question of whether a change of this significance might in fact detract from, rather than promote, the development of information literate students.
Using faculty-librarian partnerships to ensure that students become information fluent in the 21st century
In the 21st century, educators in partnership with librarians must prepare students effectively for productive use of information, especially in higher education. Students will need to graduate from universities with appropriate information and technology skills to enable them to become productive citizens in the workplace and in society. Technology is having a major impact on society: in economics, e-business is moving to the forefront; in communication, e-mail, the Internet and cellular telephones have transformed how people communicate; in the work environment, computers and web applications are emphasized; and in education, virtual learning and teaching are becoming more important. These few examples indicate how the 21st-century information environment requires future members of the workforce to be information fluent, so that they will have the ability to locate information efficiently, evaluate information for specific needs, organize information to address issues, apply information skillfully to solve problems, use information to communicate effectively, and use information responsibly to ensure a productive work environment. Individuals can achieve information fluency by acquiring cultural, visual, computer, technology, research and information management skills that enable them to think critically.
Information literacy is a mosaic of attitudes, understandings, capabilities and knowledge about which there are three myths. The first myth is that it is about the ability to use ICTs to access a wealth of information. The second is that students entering higher education are information literate because student-centred, resource-based, and ICT-focused learning are now pervasive in secondary education. The third myth is that information literacy development can be addressed by library-centric generic approaches. This paper addresses those myths and emphasises the need for information literacy to be recognised as a critical whole-of-education and societal issue, fundamental to an information-enabled and better world. In formal education, information literacy can only be developed by infusion into curriculum design, pedagogies, and assessment.
Navigating information, facilitating knowledge: the library, the academy, and student learning
(2004)
Understanding the nature and complementarity of the phenomena of information and knowledge not only lends epistemological clarity to their relationship, but also reaffirms the place of the library in the academic mission of knowledge transfer, acquisition, interpretation, and creation. This in turn reasserts the legitimacy of the academic library as a necessary participant in the teaching enterprise of colleges and universities. Such legitimacy induces an obligation to teach, and that obligation needs to be explored and implemented with adequate vigor and reach. Librarians and the academy must, however, concede that the scope of the task calls for a solution that goes beyond shared responsibilities. Academic libraries should assume a full teaching function even as they continue their exploration and design of activities and programs aimed at reinforcing information literacy in the various disciplines on campus. All must concede that the need for collaboration cannot provide grounds for questioning the desirability of autonomous teaching status for the academic library in information literacy education.
Course management software: supporting the university’s teaching with technology initiatives
(2004)
An increasingly important element of the teaching-with-technology activities at Northwestern University is the course management system, a web-based class communication and administration environment. The system's usage growth is substantial and amplifies the need for integration with other web services and resources. Integration is particularly material in the area of library services. This presentation contains a case study of Northwestern University's implementation of its course management system software and highlights examples of how the system is being used to enhance teaching and learning. A description of the integration efforts with library resources is provided. The goal of the presentation is to equip librarians with the basic knowledge required to engage their colleagues in conversations surrounding the integration of these systems within the teaching and learning landscapes of their home institutions.
In the age of the Internet and digital knowledge transfer, historical scholarship too has come to appreciate photography as source material for documenting historical living conditions and events. Besides the humanities aspect of such photographic documents, there is also a technical and conservational aspect.
While science claims to be universal, the notion of universality actually covers two very different facets: on the one hand, it refers to the universal value of the epistemological claims of science while, on the other hand, it addresses the issue of how fully the process of scientific communication is presently globalized. How the issue of open access crosses that of the globalization of scientific communication will be the theme of this presentation. The conclusion will be that, without open access, the globalization of scientific communication will lead to increased knowledge and digital divisions.
The lecture was given at the 5th Frankfurt Scientific Symposium (22-23 October 2005). Viewing the video is (unfortunately) only possible with the browsers Internet Explorer 5.0 or later, Netscape Navigator 7.0 or later, or Internet Explorer 5.2.2 or later for Mac (see document 1.html). All conference contributions are available at http://publikationen.ub.uni-frankfurt.de/volltexte/2005/1992/.
In keeping with the views of its guru, Stevan Harnad, the open access movement is only prepared to discuss the two models of the "green road" and the "golden road" as the sole alternatives for the future of scientific publishing. The "golden road" is put forward as the royal road to solving the journals crisis. However, no one has drawn attention to the fact that the golden road represents a purely socialist solution to a free-market problem and thus continues the "samizdat" tradition of underground literature in the former Eastern bloc. The present paper reveals the alarmingly low level at which the open access movement intends to publish top-class results from science and research, and the low degree of professionalism with which it is satisfied.
The economical and organizational debates about open access have mostly been concerned with journals. This is not surprising since the open access movement can be seen largely as a response to the serials crisis. Recently the open access debate has been extended to include access to government produced data in different forms. In this presentation I'll critically look at some economic and organizational issues pertaining to the open access provision of bibliographical data.
In this increasingly complex world of learned information delivery and discovery - is it possible that the "free lunch" the Publishing world worries about could come true? Although Open Access and Institutional Repositories have not (yet) created the "scorched earth" effect many were predicting, they are slowly and inevitably gaining momentum. Broader access to top-level information via Google (and others) does indeed appear to be "good enough" for many in their search for content. But you rarely get food for free in a good quality restaurant. You pay for the selection, preparation, speed and expertise of the delivery. At the soup kitchen the food can often be filling - but the queue will be long, the wait even longer and there is no chance of silver service or à la carte. If you are unfortunate enough to have little choice then this may be a great solution. Others will be willing to pay for a more satisfactory meal. As in all aspects of life, diversification and specialisation are fundamental forces. The publishing community in the years to come will continue to develop its offerings for a variety of needs that require more than just broth. To stretch the analogy, the ongoing presence of tap water in our lives has done little to halt the extraordinary rise of bottled water as part of our staple diet. Business reality will continue to settle these types of debate; my bet is that the commercial publishers see a role as providing information that commands an intrinsic value proposition to enough customers to remain economically viable for some time to come. Inspired by the comments and ideas expounded by Dr. James O'Donnell of Georgetown University on the liblicense listserv on 20th July this year, this paper will look to expand on the analogy and identify the good, the bad - but importantly the difference in information quality and access that will result in the radically changed (but still co-existent) information landscape of tomorrow.
Background and Purpose of this Meeting
With an opening reception sponsored by Thomson Scientific on the evening of Thursday, October 5, the University Library of Frankfurt and the German-North American Resources Partnership (GNARP) will be hosting an important two-day conference this autumn in Frankfurt, Germany: »The World According to GNARP: Prospects for Transatlantic Library Partnership in the Digital Age«. Sessions at this meeting will explore the wealth of library resources - archival, print, and digital - available to students and researchers (in Germany and the United States) in five selected subject areas: North American Studies, German Studies, Judaica, Africana, and South Asia/India, highlighting both existing avenues (and obstacles) for transatlantic resource sharing along with future prospects. In addition, several other important topics will be highlighted through individual presentations and panel discussions: the future of German as a language of the sciences; existing and planned electronic journal archives in Germany and the U.S.; print and digital repositories; and a special panel on »comparative cataloging cultures« on both sides of the Atlantic. The »World According to GNARP« conference will be taking place simultaneously with the Frankfurt Book Fair, the largest book-related event in the world, attracting some 285,000 visitors annually (2005), thus giving participants who arrive early the chance to combine attendance at both the Book Fair and the Conference. A cultural event and dinner in Frankfurt are planned for Friday 6th October.
The LOCKSS (Lots of Copies Keep Stuff Safe) Alliance is an international community of about 100 libraries and partners like OCLC. For almost a decade the LOCKSS open source model has been tested for its robustness against attack and for its ability to migrate formats. LOCKSS »boxes« at 150 institutions in more than 20 countries comprise a peer-to-peer system that automatically cross-checks content to ensure the accuracy and completeness of all member archives. Eighty publishers, including large publishers like Oxford University Press, are now participating in LOCKSS or actively preparing to add their journals to the program.
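The cross-checking idea behind the peer-to-peer system described above can be illustrated with a minimal sketch. This is not the actual LOCKSS polling protocol, merely a toy model of the principle: boxes holding copies of the same content compare hashes and a minority copy damaged by bit rot is repaired from the majority. All box names and contents below are invented.

```python
# Toy model of majority-vote content auditing (NOT the real LOCKSS
# protocol): each "box" holds a copy; hashes are compared and any
# minority copy is overwritten with the majority's content.
import hashlib
from collections import Counter


def digest(content: bytes) -> str:
    """Content fingerprint used for the comparison."""
    return hashlib.sha256(content).hexdigest()


def audit_and_repair(copies: dict[str, bytes]) -> dict[str, bytes]:
    """Compare all boxes' copies and repair minority copies from the
    content that the majority of boxes agree on."""
    votes = Counter(digest(c) for c in copies.values())
    winning_hash, _ = votes.most_common(1)[0]
    canonical = next(c for c in copies.values() if digest(c) == winning_hash)
    return {box: canonical for box in copies}


# Usage: one of three hypothetical boxes holds a corrupted copy.
copies = {
    "box-frankfurt": b"article v1",
    "box-stanford": b"article v1",
    "box-oxford": b"articl3 v1",  # bit rot
}
repaired = audit_and_repair(copies)
assert all(c == b"article v1" for c in repaired.values())
```

The real system adds safeguards this sketch omits, notably slow, randomized polling so that an attacker cannot outvote honest boxes quickly.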
The mission of the Harvard Judaica Collection is to comprehensively document Jewish history and civilization in all places and periods. To accomplish its mission, the Judaica Collection collects materials in all languages and in all formats: books, pamphlets, periodicals, newspapers, sound recordings, videos, posters, broadsides, and photographs. A particular focus is the Library’s Documenting Israel program, which covers all aspects of Israeli life and culture in great depth; Harvard has the largest collection of Israeli publications and Israel-related materials outside the State of Israel. The Harvard Judaica Collection also attempts to have comprehensive coverage of the publications of Jewish communities throughout the globe, including a significant collection of publications from countries across Europe. Collecting these materials requires cooperation with a wide array of institutions and individuals around the world.
The growing importance of digital documents has prompted various activities on how to deal with them. One line of work came from the more general field of "scientific publishing", which was handled in detail by DINI (Deutsche Initiative für Netzwerkinformation). For this initiative, however, long-term archiving was only one field of many and not its primary focus. DINI first of all concentrated on the elaboration of effective and standardized methods and tools for publishing and related services on the basis of an open access policy, via the use of institutional repositories. The second line of projects came from the more general perspective of maintaining cultural heritage in a digital world as well. Especially under the patronage of the Ministry of Education and Research, important projects have been financed. Strategic solutions involving archives, libraries, and museums are discussed and elaborated within NESTOR, while more technical solutions oriented towards practicability are developed within KOPAL. KOPAL brought together industry (IBM), a publicly funded technical center (GWDG), and two libraries (DNB and SUB Göttingen). Within this project, a general software implementation that takes into consideration all necessary international standards was completed last month and has now been available for about two weeks. Based on early results within NESTOR, it also seemed important to strengthen all activities by giving them a legal basis. With the revised law concerning the German National Library of June 22nd this year (DNBG), the library was therefore given all the necessary instruments to collect digital documents in "non-physical" form as well. With this law, Germany is at the moment in the rare position of being one of the few countries where the collection of network publications is part of the overall legal deposit strategy.
To stimulate further discussion, I would like to briefly tackle the following questions: * How can one become informed about what is going on in German Studies in the US? * What kinds of American guides to German resources are available? * What kinds of German Studies resources are being produced in the US? * What do we know about how scholars are using (or not) these guides and resources?
U.S. library resources on South Asia that were built around the limited needs of a handful of Sanskritists before World War II have made a long journey during the past half century. Since the inception of the Library of Congress Cooperative South Asia Acquisition Program (formerly called the "PL-480" program) in 1962, libraries have built significant collections with financial support from governmental agencies and philanthropic foundations, to support teaching and research in all areas of the social sciences and humanities. These collections have been supplemented by efforts to build retrospective collections and to microfilm rare materials in British and South Asian libraries and archives. Today, in cooperation with South Asian libraries, several projects are underway to preserve and digitize rapidly deteriorating materials so that these riches can be shared with the global scholarly community through electronic means.
A number of pressures on academic libraries imperil the long-term survivability of printed knowledge and heritage materials. Ever-growing volumes of materials, costs of preserving and delivering paper-based research resources, and researchers’ growing demand for source materials in electronic formats all produce strain on our institutions. ...
The University Library in Frankfurt/Main owns the largest collection of literature on Judaism and Israel in Germany and one of the major collections in the world. Its task is to document the history of the Jewish people and to serve as a resource for study and research in Germany. The Jewish Division is therefore collecting all relevant national and international publications covering all aspects of post-biblical Judaism and Jewish culture in a most comprehensive manner, as well as all publications on the modern State of Israel. Two databases offer access to the large collections of the Judaica Division. Yiddish Literature is the database that offers online access to the page images of the outstanding historical Yiddish collection, containing about 800 extremely rare and precious Yiddish and German-Jewish books printed in Hebrew letters from the 16th century onwards. Compact Memory is a gateway to more than 100 Jewish periodicals of the German language area published in the 19th and 20th centuries, providing partly images, partly full-text search, and a bibliographic database of articles. The implementation of a third, new digital project, the »Virtual Judaica Collection«, has just started: the digitization and online presentation of the historical Judaica resources. Formed by its curator, Prof. Aron Freimann, it was the largest and most significant Judaica collection on the European continent before the war. The goal is to offer free access to about 18,000 books with 2 million pages over the web. In light of these developments, the presentation will evaluate the current possibilities of German-North American cooperation in the area of digital projects.
The paper will provide a brief background to the history of the organization and cooperative efforts of African studies librarians in the United States, including their efforts at international cooperation. Particular emphasis will be placed on the current opportunities for improved cooperation as digitization activities increase. Examples will include the DISA and Aluka initiatives as well as the Timbuktu manuscript digitization project at the Center for Research Libraries. Particular emphasis will be placed on the possibilities for German-North American cooperation in digital projects involving historical photographs, given the extensive collections held at Northwestern and Frankfurt.
... This year's Scientific Symposium of the University Library is already the sixth in the series. Like some of the previous conferences, it was again prepared and organised together with our North American partners. This means that a continuous specialists’ discussion and a professional partnership have already been established. All librarians and information managers are invited to learn more about the results of this cooperation every year when it is time for the next Symposium during the Frankfurt Book Fair. ...
The paper presents an overview of some internationally relevant digital resource projects in Germany. Online presentations of primary sources, e.g. photographic material, and bibliographic tools supporting research, such as cross-searching, will be presented as potential partners for resource sharing with North America. Not only will the possibilities for cooperation be sketched, but the necessary preliminary work and some obstacles will also be outlined. This report is accompanied by a short characterization of African studies in Germany and the status quo of Open Access initiatives.
Large American research libraries have been acquiring - by purchase and by lease - huge multi-disciplinary electronic collections of primary and secondary source materials. For example, the Digital Evans and Canadian Poetry easily make available to scholars primary materials that once were scattered in libraries across North America and Europe. The American State Papers, 1789 – 1838 collection allows easier searching of fragile rare materials. Collections made by libraries digitizing their own holdings, such as the Archive of Early American Images from the John Carter Brown Library at Brown University, make research materials more discoverable and usable. Yet recent scholarship in American Studies by American and European scholars makes relatively little use of these new materials. Both disparities and congruities in what scholars use and what research libraries collect are apparent. Some simple reasons explain the dissonance. Furthermore, conversations with scholars suggest that materials and collections alone will not suffice to support research. Librarians’ skills and actions will increase the value of the new research materials.
Since 2005 the library of the South Asia Institute, in cooperation with Heidelberg University Library, has held responsibility for the DFG-funded special subject collection (Sondersammelgebiet) South Asia. It thereby took over from Tübingen University Library a special subject collection rich in tradition, whose history reaches back to 1949. The lecture will, on the one hand, give a short overview of the historical context of the SSG South Asia and, on the other hand, report on new developments, such as the Virtual Subject Library South Asia (Virtuelle Fachbibliothek Südasien), which has been built up at the library of the South Asia Institute over the past two years. Against this background, the potential for cooperation in the area of digital information resources will be examined in particular.
Contents - BIX: pole position and runner-up - Frankfurt University Library: its responsibilities, its collections, its databases, its supra-regional collecting responsibilities – and some statistics - The "Sondersammelgebiet" Germanistik: its scope and contents, its principal strengths, present situation, and budget - Sammlung Deutscher Drucke: the 1801-1870 segment of the "Distributed National Library" - Information Services: Bibliographie der deutschen Sprach- und Literaturwissenschaft (BDSL), Neuerwerbungsliste Germanistik, Bibliographie germanistischer Bibliographien (BgB), DigiZeitschriften, information bulletins - Work of the Subject Specialist: exhibitions, publicity material
The scientific innovation process embraces the steps from problem definition through the development and evaluation of innovative solutions to their successful exploitation. The challenges imposed by this process can be answered by the creation of a powerful and flexible next-generation e-Science infrastructure, which exploits leading-edge information and knowledge technologies and enables a comprehensive and intelligent means of supporting this process. This paper describes our vision of a knowledge-based e-Science infrastructure, based on the results of an in-depth study of researchers' requirements. Furthermore, it introduces the Fraunhofer e-Science Cockpit as a first implementation of our vision.
University 2.0
(2007)
The major challenge facing universities in the next decade is to reinvent themselves as information organizations. Universities are, at their core, organizations that cultivate knowledge, seeking both to create new knowledge and to preserve and convey existing knowledge, but they are remarkably inefficient and therefore ineffective in the way that they leverage their own information resources to advance that core activity. This talk will explore ways that the university could learn from what is now widely called "Web 2.0" -- a term that is meant to identify a shift in emphasis from the computer as platform to the network as platform, from hardware to data, from the wisdom of the expert to the wisdom of crowds, and from fixity to remixability.
In several academic fields (most notably: physics, mathematics, economics, astronomy, and computer science), most current research papers are freely accessible on the Internet in both pre- and post-publication formats. For these disciplines, open-access dissemination of publications and data has created a robust and useful information environment that is highly valued by researchers. While the acceptance of open-access dissemination has been disruptive to traditional scholarly publishing, the status and economic value of the elite journals has remained largely intact. Indeed, publication in the most prestigious journals (e.g., Science, Nature, Cell, BMJ, etc.) may have more influence than ever in determining the advancement of academic careers. Traditional publishing and open access will continue to coexist uncomfortably for years to come, but the next wave of digital publishing systems (empowered social networking applications) will establish open access repositories as indispensable infrastructure for the sciences and social sciences.
The aim of the meeting is to expose this current topic to critical discussion with international speakers and participants and to find solutions which optimize the integration of information services into university structures. Presentations and discussions will consider: * integrated versus cooperative models * single-unit operations, central or de-centralized faculty organisations * outsourcing services versus own organisation/effort * institutional repository versus discipline-based repository * information supply in the era of "Google Print" The »Integration of Information Services into University Structures« Symposium will be taking place simultaneously with the Frankfurt Book Fair, the largest book-related event in the world, attracting 286,621 people annually (2006), thus giving participants who arrive early the chance to combine attendance at both the Book Fair and the Symposium. A cultural event and dinner in one of Frankfurt's historical rooms on Friday will be a social highlight! A contingent of hotel rooms has been reserved on a »first come, first served« basis outside Frankfurt at non-Book-Fair prices. More information on request.
Rather than introducing a new system for global identity management, the University of Freiburg decided to continue with the existing software systems (esp. from HIS), to identify the leading system for each set of data, and to mirror the data between the various systems. A clearly defined workflow ensures that changes to data are made only on the relevant "leading" system and then propagated to the other systems. User authentication for systems managed by the computer center is done via LDAP. Consequently, while access rights are granted by the LDAP system, the decision of whether or not a person is a member of the University is left to the administration. As a consequence, the implementation of a portal called mylogin to obtain the necessary tickets for Shibboleth is a straightforward process, as it only remains to check the data against LDAP before issuing the corresponding tickets.
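The leading-system workflow described above can be sketched in a few lines. This is a hypothetical illustration, not Freiburg's actual implementation: the attribute names, system names, and data are invented, and the real data flow runs between HIS, LDAP, and other production systems rather than in-memory dictionaries.

```python
# Hypothetical sketch of a "leading system" mirroring workflow:
# every attribute has exactly one leading system; a write is accepted
# only on that system and is then propagated to all mirrors.
LEADING = {"surname": "HIS", "mail": "LDAP"}  # attribute -> leading system

# Invented example data held redundantly in two systems.
systems = {
    "HIS": {"surname": "Mueller", "mail": "m@uni.example"},
    "LDAP": {"surname": "Mueller", "mail": "m@uni.example"},
}


def update(attribute: str, value: str, via_system: str) -> None:
    """Reject writes arriving at a non-leading system, then mirror the
    accepted change to every system so all copies stay consistent."""
    if via_system != LEADING[attribute]:
        raise PermissionError(f"{attribute} is led by {LEADING[attribute]}")
    for data in systems.values():
        data[attribute] = value


# Usage: a name change is accepted on HIS (the leading system for
# surname) and automatically mirrored to LDAP.
update("surname", "Meier", via_system="HIS")
assert systems["LDAP"]["surname"] == "Meier"
```

The design choice this illustrates is that consistency comes from the single write path, not from reconciling conflicting edits after the fact.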
Working closely with teaching and research staff is critical to the success of libraries and information services. Indeed, the degree of integration with a University's academic work is one of the factors that distinguish a successful service from a poor one. This paper will consider the relationship between information services and how universities operate. Using the challenges facing institutions as a starting point - including the move towards a single European higher education market - the impact of information provision on institutional strategies will be explored. Information resources underpin all learning, teaching and research activities and the presentation will consider the professional practice which ensures that libraries and computing services are fully exploited. The focus on the experience of students is leading some institutions to integrate information services with a wide range of other activities and the paper will consider the opportunities and challenges which this brings, including the need to build working relationships with a broader range of professional groups.
Universities of the 21st century heavily depend on an efficient IT infrastructure for teaching, research and administration. E-learning environments, blended learning and all sorts of multimedia and cooperative environments are important requirements for teaching at universities and for further education. Many of the organizational structures, such as continuous examinations, interdisciplinary studies, the ECTS system and many more, require efficient examination administration systems as well as room and personnel management. Research is based on Internet inquiries, e-science, e-library and other IT-supported media. Research results must be documented and archived digitally, and results must be distributed and marketed through the Internet. The efficient administration of all kinds of resources of the university must be planned using management support systems. Decisions of university heads must be supported by well-documented statistics and analysis software. In the past, many of the applications named above for teaching, research and administration have been performed by separate software applications run in distributed environments of universities. Powerful server structures and networking features as well as new software technology like service-oriented architectures make it necessary to recentralize the IT services of the university after a long period of decentralization. Based on metadirectories and unified access procedures, all of the software components must be integrated into a seamless IT infrastructure. To guarantee consistency, data must not be stored in a redundant way. Project IntegraTUM of Technische Universität München started in 2003 and is an umbrella project to define such a seamless IT infrastructure for a university with 22,000 students and approximately 10,000 staff.
The talk describes the project, which besides the definition of new technology is based on a fundamental process analysis of the university and many changes in the organizational structure.
Information Supply in the era of mass digitization
Drawing on his experience at the Bodleian Library and now at the British Library, Ronald Milne will share his first-hand impressions of 'boutique' and mass digitization programmes, such as those being undertaken by Google and Microsoft, and their effect on information supply. Collections define libraries. What does this mean in the 21st Century? Will all libraries become equal as the digital revolution progresses? What might the digitization and indexing of millions of works mean for university researchers and the intellectually curious more generally? What are the benefits and what are the strategic issues that we are bound to consider?
Information supply is the genuine task of academic institutions as well as of publishers. Publishers profit from copyright provisions which give them exclusive rights in their products. The same copyright provisions are often the limiting factor when academic institutions try to improve their service to the academic community. This is the case in particular when it comes to digital access to information. In a so-called "Second Basket", the German copyright act has just been revised, introducing explicit legal exemptions for document deliveries and on the spot consultation of works contained in public libraries' collections. At the same time, unresolved issues remain with respect to existing legal exemptions as well as the new ones. What will the legal parameters look like for academic institutions once the "Second basket" has been put into force? How can libraries work with these provisions in practice?
In the year 2000 the Deutsche Initiative für Netzwerkinformation (DINI) / German Coalition of Network Information was founded. Its founding charter consists of ten theses, "Changes in information infrastructure – challenges to universities and their information and communications facilities" (see http://www.dini.de).
Thesis 4 states: "The universities need to establish information management structures to integrate departments. University managements, departments and central institutions ought to prepare a university development plan for the areas of information, communication and multimedia." ...
Trends for distributed, open, and increasingly collaborative models of information delivery challenge the library's classic roles. In addition, trends within the research community for more interdisciplinary and collaborative scholarship create an opportunity for more enabling information infrastructure. In an age of Amazon, Google, and "social" tools, how should the library respond? My presentation will focus on strategies for bringing the library's "assets" into the flow of researchers' work. How can the library integrate its resources into the scholar's workflow? What are the emerging challenges of this integration?
The Frankfurt University Library possesses one of the outstanding Africana collections in continental Europe; its regional and disciplinary scope is unique in Germany. With about 5,000 new acquisitions a year, the collection has grown to over 200,000 items on Africa south of the Sahara. Some 50,000 historical and rare photographs are fully digitized and freely accessible. Together with a collection of around 18,000 books stemming from the collections of the German Colonial Society at the end of the 19th and the beginning of the 20th century, they constitute the historical foundations of the collection. Recently the University Library Frankfurt and the library of the GIGA Institute of African Affairs, Hamburg, started the project ilissAfrica (internet library sub-Saharan Africa), a central subject gateway for online resources and a powerful tool for bibliographic research. These new services will be indispensable for researchers and librarians of African Studies and will promote African studies worldwide.
The emperor's new colonies
(2008)
The Colonial Picture Archive in Frankfurt offers a unique pictorial record of German colonial history. For many years the collection was virtually forgotten. However, following painstaking description and digitalisation, the photo documents are now available on the Internet to researchers in Germany and abroad.
On 3–4 November 2008 the following conference took place in Frankfurt am Main: 21st Century Libraries: Changing Forms, Changing Challenges, Changing Objectives = 8th Frankfurt Scientific Symposium. It was organized by the Universitätsbibliothek Johann Christian Senckenberg in cooperation with the Deutsches Architekturmuseum (Frankfurt am Main) and the Akademie der Architekten- und Stadtplanerkammer Hessen (Wiesbaden). The 8th Frankfurt Symposium put contemporary library architecture, and the developments and problems of present-day library construction, up for discussion. Several theoretical and technical contributions rounded out the programme. Two central focal points of the symposium were the integration of library buildings into their urban surroundings and the effects of socio-political and technological developments on the architecture of libraries.
The correspondence between the terminology used for querying and that used in the content objects to be retrieved is a crucial prerequisite for effective retrieval technology. However, as terminology evolves over time, a growing gap opens up between older documents in (long-term) archives and the active language used for querying such archives. Thus, technologies for detecting and systematically handling terminology evolution are required to ensure the "semantic" accessibility of (Web) archive content in the long run. As a starting point for dealing with terminology evolution, this paper formalizes the problem and discusses issues, first ideas and relevant technologies.
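The detection idea sketched in this abstract can be illustrated with a toy frequency comparison between an older archive slice and current usage. The function name, the threshold, and the example terms below are illustrative assumptions, not the paper's actual formalization:

```python
from collections import Counter

def evolution_candidates(old_counts, new_counts, min_freq=5):
    """Flag terms that are frequent in an older archive slice but absent
    from current usage -- simple candidates for terminology evolution.
    (Threshold and data are illustrative only.)"""
    candidates = [term for term, freq in old_counts.items()
                  if freq >= min_freq and new_counts.get(term, 0) == 0]
    return sorted(candidates)

# Toy slices: early-20th-century archive language vs. today's query language
old_counts = Counter({"wireless": 12, "telegraphy": 8, "radio": 1, "news": 20})
new_counts = Counter({"radio": 15, "news": 25, "podcast": 9})
print(evolution_candidates(old_counts, new_counts))  # ['telegraphy', 'wireless']
```

A real system would of course normalize for corpus size and track gradual shifts rather than simple absence, but the gap between archive language and query language is the quantity being measured either way.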
New projects, services and collaborations have recently brought the infrastructural services for African Studies a big step forward. This report gives an account of new subject gateways and digitisation projects. It discusses recent European cooperation ventures in the field of librarianship. Additionally, new developments and services of the Africa Collection at Frankfurt University Library are presented, which help to address the changing needs of researchers and to handle information overload, while keeping up with the latest developments. Nevertheless, the fragmentation and compartmentalisation of the different services still hinder more integrated information services.
Presentation given at the symposium of the Frankfurt University Library, held in cooperation with the Frankfurt Book Fair 2011, "Economy and Acceptance of Open Access Strategies", on 14 October 2011.
Presentation given at the symposium of the Frankfurt University Library, held in cooperation with the Frankfurt Book Fair 2011, "Economy and Acceptance of Open Access Strategies", on 14 October 2011.
Presentation given at the symposium of the Frankfurt University Library, held in cooperation with the Frankfurt Book Fair 2011, "Economy and Acceptance of Open Access Strategies", on 14 October 2011.
Management Summary: Conducted within the project “Economic Implications of New Models for Information Supply for Science and Research in Germany”, the Houghton Report for Germany provides a general cost and benefit analysis for scientific communication in Germany comparing different scenarios according to their specific costs and explicitly including the German National License Program (NLP).
Based on the scholarly lifecycle process model outlined by Björk (2007), the study compared the following scenarios according to their accounted costs:
- Traditional subscription publishing,
- Open access publishing (Gold Open Access; refers primarily to journal publishing where access is free of charge to readers, while the authors or funding organisations pay for publication)
- Open Access self-archiving (authors deposit their work in online open access institutional or subject-based repositories, making it freely available to anyone with Internet access; further divided into (i) ‘Green Open Access’ self-archiving operating in parallel with subscription publishing; and (ii) the ‘overlay services’ model in which self-archiving provides the foundation for overlay services (e.g. peer review, branding and quality control services))
- the NLP.
Within all scenarios, five core activity elements (Fund research and research communication; perform research and communicate the results; publish scientific and scholarly works; facilitate dissemination, retrieval and preservation; study publications and apply the knowledge) were modeled and priced together with all the activities they comprise.
Modelling the impacts of an increase in accessibility and efficiency resulting from more open access on returns to R&D over a 20-year period and then comparing costs and benefits, we find that the benefits of open access publishing models are likely to substantially outweigh the costs and that, while smaller, the benefits of the German NLP also exceed the costs.
This analysis of the potential benefits of more open access to research findings suggests that different publishing models can make a material difference to the benefits realised, as well as the costs faced. It seems likely that more Open Access would have substantial net benefits in the longer term and, while net benefits may be lower during a transitional period, they are likely to be positive for both ‘author-pays’ Open Access publishing and the ‘overlay journals’ alternatives (‘Gold Open Access’), and for parallel subscription publishing and self-archiving (‘Green Open Access’). The NLP returns substantial benefits and savings at a modest cost, returning one of the highest benefit/cost ratios available from unilateral national policies during a transitional period (second to that of ‘Green Open Access’ self-archiving). Whether ‘Green Open Access’ self-archiving in parallel with subscriptions is a sustainable model over the longer term is debatable, and what impact the NLP may have on the take-up of Open Access alternatives is also an important consideration. So too is the potential for developments in Open Access or other scholarly publishing business models to significantly change the relative cost-benefit of the NLP over time.
The results are comparable to those of previous studies from the UK and the Netherlands. Green Open Access in parallel with the traditional model yields the best benefit/cost ratio. Besides its benefit/cost ratio, the significance of the NLP lies in its enforceability. The true costs of toll-access publishing (besides the "buyback" of information) are the denial of access to research and knowledge for society.
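As a rough illustration of the kind of comparison such an analysis performs, the following toy calculation discounts a stream of annual benefits and costs over a 20-year horizon and forms their ratio. All figures and the discount rate are invented for the example and do not reflect the study's actual model:

```python
def benefit_cost_ratio(annual_benefit, annual_cost, years=20, discount=0.035):
    """Discounted benefit/cost ratio over a modelling horizon.
    All inputs here are illustrative assumptions."""
    present_value = lambda x: sum(x / (1 + discount) ** t
                                  for t in range(1, years + 1))
    return present_value(annual_benefit) / present_value(annual_cost)

# With constant annual flows the discounting cancels out: 50/10 = 5.0
print(round(benefit_cost_ratio(50.0, 10.0), 1))  # 5.0
```

The actual study models time-varying benefits (returns to R&D build up with a lag), which is where discounting, rather than a simple ratio of annual flows, starts to matter.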
High impact events, political changes and new technologies are reflected in our language and lead to constant evolution of terms, expressions and names. Not knowing about names used in the past for referring to a named entity can severely decrease the performance of many computational linguistic algorithms. We propose NEER, an unsupervised method for named entity evolution recognition independent of external knowledge sources. We find time periods with high likelihood of evolution. By analyzing only these time periods using a sliding window co-occurrence method we capture evolving terms in the same context. We thus avoid comparing terms from widely different periods in time and overcome a severe limitation of existing methods for named entity evolution, as shown by the high recall of 90% on the New York Times corpus. We compare several relatedness measures for filtering to improve precision. Furthermore, using machine learning with minimal supervision improves precision to 94%.
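A minimal sketch of the sliding-window co-occurrence idea follows; the function, window size, and toy sentence are illustrative assumptions rather than the NEER implementation:

```python
from collections import Counter

def cooccurring_names(tokens, window=5):
    """Count pairs of capitalized terms co-occurring within a sliding
    window -- candidate name variants for an evolving named entity."""
    pairs = Counter()
    for i, tok in enumerate(tokens):
        if not tok[0].isupper():
            continue
        for other in tokens[i + 1:i + 1 + window]:
            if other[0].isupper() and other != tok:
                pairs[tuple(sorted((tok, other)))] += 1
    return pairs

# Toy sentence from a period in which the entity's name changed
tokens = "Cardinal Ratzinger was elected and Ratzinger became Pope Benedict".split()
pairs = cooccurring_names(tokens)
print(pairs.most_common(1))  # [(('Cardinal', 'Ratzinger'), 2)]
```

Restricting this counting to time periods with a high likelihood of change, as the abstract describes, is what keeps terms from widely different eras from being compared directly.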
"Library Buildings around the World" is a survey based on several years of research. The objective was to compile library buildings at an international level, starting with 1990.
The parts Germany, France, United Kingdom, United States have been thoroughly revised, supplemented and completed for this 2nd edition. A revision of the other countries is planned for the next edition.
The World Wide Web is the largest information repository available today. However, this information is very volatile and Web archiving is essential to preserve it for the future. Existing approaches to Web archiving are based on simple definitions of the scope of Web pages to crawl and are limited to basic interactions with Web servers. The aim of the ARCOMEM project is to overcome these limitations and to provide flexible, adaptive and intelligent content acquisition, relying on social media to create topical Web archives. In this article, we focus on ARCOMEM’s crawling architecture. We introduce the overall architecture and we describe its modules, such as the online analysis module, which computes a priority for the Web pages to be crawled, and the Application-Aware Helper which takes into account the type of Web sites and applications to extract structure from crawled content. We also describe a large-scale distributed crawler that has been developed, as well as the modifications we have implemented to adapt Heritrix, an open source crawler, to the needs of the project. Our experimental results from real crawls show that ARCOMEM’s crawling architecture is effective in acquiring focused information about a topic and leveraging the information from social media.
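The priority-driven crawling idea described above can be sketched as follows; `fetch`, `topic_score`, and the toy link graph are hypothetical stand-ins for ARCOMEM's online analysis module, not its actual code:

```python
import heapq

def focused_crawl(seeds, fetch, score, limit=100):
    """Priority-driven crawl: an online analysis step scores each fetched
    page for topical relevance, and newly found links inherit that score,
    so the frontier stays focused on the campaign topic."""
    frontier = [(-1.0, url) for url in seeds]  # max-heap via negated scores
    heapq.heapify(frontier)
    seen, archive = set(seeds), []
    while frontier and len(archive) < limit:
        _, url = heapq.heappop(frontier)
        text, outlinks = fetch(url)
        relevance = score(text)              # stand-in for online analysis
        archive.append((url, relevance))
        for link in outlinks:
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-relevance, link))
    return archive

# Toy link graph: the off-topic branch below "b" is deferred by the queue
web = {
    "seed": ("election results", ["a", "b"]),
    "a": ("election debate coverage", ["c"]),
    "b": ("cooking recipes", ["d"]),
    "c": ("polling stations report", []),
    "d": ("more recipes", []),
}
topic_score = lambda text: text.count("election") + text.count("polling")
pages = focused_crawl(["seed"], lambda url: web[url], topic_score, limit=4)
print([url for url, _ in pages])  # ['seed', 'a', 'b', 'c']
```

With the crawl budget of four pages, the low-priority page "d" behind the off-topic page "b" is never fetched, which is the focusing effect a priority queue provides over a breadth-first "collect-all" strategy.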
Europeana provides a common access point to digital cultural heritage objects across different cultural domains, including libraries. The recent development of the Europeana Data Model (EDM) provides new ways for libraries to experiment with Linked Data. Indeed, the model is designed as a framework reusing various well-known standards developed in the Semantic Web community, such as the Resource Description Framework (RDF), the OAI Object Reuse and Exchange (ORE), and Dublin Core namespaces. It offers new opportunities for libraries to provide rich and interlinked metadata to the Europeana aggregation.
However, to be able to provide data to Europeana, libraries need to create mappings from their library standard to EDM. This step involves decisions based on domain-specific requirements and on the possibilities offered by EDM. As the cross-domain nature of EDM limits in some cases the completeness of the mappings, extensions of the model have been proposed to accommodate library needs.
The "Digitised Manuscripts to Europeana" project (DM2E) has created an extension of EDM to optimise the mappings of library data for manuscripts. This extension takes the form of subclasses and subproperties that further specialise EDM concepts and properties. It includes spatial creation and publishing information, specific contributor and publication type properties, and more.
Furthermore the granularity of the mapping has been extended to allow references and annotations on page level as required for scholarly work. As part of this project the metadata of the Hebrew Manuscripts as well as of the Medieval Manuscripts presented in the Digital Collections of the Frankfurt University Library have been mapped to this extension. This includes links to the Integrated Authority File (GND) of the German National Library with further links to the Virtual International Authority File (VIAF).
Based on this development, a new comprehensive mapping from the digitisation metadata format METS/MODS to EDM has been established for all materials of the Frankfurt Judaica in "Judaica Europeana". It demonstrates today's capabilities for creating Linked Data structures in Europeana based on library catalogue data and structural data from the digitisation process.
The constantly growing amount of Web content and the success of the Social Web lead to increasing needs for Web archiving. These needs go beyond the pure preservation of Web pages. Web archives are turning into "community memories" that aim at building a better understanding of the public view on, e.g., celebrities, court decisions and other events. Due to the size of the Web, the traditional "collect-all" strategy is in many cases not the best method to build Web archives. In this paper, we present the ARCOMEM (From Collect-All Archives to Community Memories) architecture and implementation that uses semantic information, such as entities, topics and events, complemented with information from the Social Web to guide a novel Web crawler. The resulting archives are automatically enriched with semantic meta-information to ease access and allow retrieval based on conditions that involve high-level concepts.
The web and the social web play an increasingly important role as an information source for Members of Parliament and their assistants, journalists, political analysts and researchers. They provide important and crucial background information, such as reactions to political events and comments made by the general public. The case study presented in this paper is driven by two European parliaments (the Greek and the Austrian parliament) and targets an effective exploration of political web archives. In this paper, we describe semantic technologies deployed to ease the exploration of the archived web and social web content and present evaluation results.
Cultural heritage reconstructed - Compact Memory and the Frankfurt Digital Judaica Collection
(2014)
Compact Memory, the internet archive of German Jewish periodicals, provides free global internet access to the vast majority of German-Jewish newspapers and periodicals of the 19th and 20th century.
Jewish historical newspapers are invaluable sources that supply direct and detailed information on the transformation process of Jewry and offer new insights into European Jewish history. The use of these historical sources, however, is extremely difficult, as complete sets of periodicals are very rarely to be found; they are scattered all over the world in different libraries and archives and in different physical formats (paper, microfilm).
Compact Memory contains the 110 most important Jewish German newspapers and periodicals in Central Europe in the period from 1806 to 1938, covering the complete range of religious, political, social, cultural and academic aspects of Jewish life. The texts are available partly as full texts, processed by OCR, and partly as graphic documents with corresponding index options. The database offers advanced search options, downloading and printing of articles. Thousands of essays by more than 10,000 individual contributors have been bibliographically indexed.
Compact Memory was established by the Judaica Division of the University Library Frankfurt am Main and is maintained today in cooperation with the Aachen Chair of German-Jewish Literary History and the Cologne library Germania Judaica.
Compact Memory is one database within the Digital Collection Judaica, which, as part of Europeana and other digital portals, offers resources for the reconstruction and representation of Jewish cultural heritage.
The concept of culturomics was born out of the availability of massive amounts of textual data and the interest to make sense of cultural and language phenomena over time. Thus far however, culturomics has only made use of, and shown the great potential of, statistical methods. In this paper, we present a vision for a knowledge-based culturomics that complements traditional culturomics. We discuss the possibilities and challenges of combining knowledge-based methods with statistical methods and address major challenges that arise due to the nature of the data; diversity of sources, changes in language over time as well as temporal dynamics of information in general. We address all layers needed for knowledge-based culturomics, from natural language processing and relations to summaries and opinions.
This paper introduces a novel research tool for the field of linguistics: The Linguistik web portal provides a virtual library which offers scientific information on every linguistic subject. It comprises selected internet sources and databases as well as catalogues for linguistic literature, and addresses an interdisciplinary audience. The virtual library is the most recent outcome of the Special Subject Collection Linguistics of the German Research Foundation (DFG), and also integrates the knowledge accumulated in the Bibliography of Linguistic Literature. In addition to the portal, we describe long-term goals and prospects with a special focus on ongoing efforts regarding an extension towards integrating language resources and Linguistic Linked Open Data.
Web archives created by the Internet Archive (IA) (https://archive.org), national libraries and other archiving services contain large amounts of information collected for a time period of over twenty years. These archives constitute a valuable source for research in many disciplines, including the digital humanities and the historical sciences by offering a unique possibility to look into past events and their representation on the Web.
Most Web archive services aim to capture the entire Web (IA) or national top-level domains and are therefore broad in their scope, diverse regarding the topics they contain and the time intervals they cover. Due to the large size and the broad scope it is difficult for interested researchers to locate relevant information in the archives as search facilities are very limited. Many users are more interested in studying smaller and topically coherent event-centric collections of documents contained in a Web archive [1,2]. Such collections can reflect specific events such as elections, or natural disasters, e.g. the Fukushima nuclear disaster (2011) or the German federal elections.
In order to promote the accessibility of biodiversity data in historic and contemporary literature, we introduce a new interdisciplinary project called BIOfid (FID=Fachinformationsdienst, a service for providing specialized information). The project aims at a mobilization of data available in print only by combining digitization of scientific biodiversity literature with the development of innovative text mining tools for complex, eventually semantic searches throughout the complete text corpus. A major prerequisite for the development of such search tools is the provision of sophisticated anatomy ontologies on the one hand, and of complete lists of species names (currently considered valid as well as all synonyms) at a global scale on the other hand. In the initial stage, we chose examples from German publications of the past 250 years dealing with the geographic distribution and ecology of vascular plants (Tracheophyta), birds (Aves), as well as moths and butterflies (Lepidoptera) in Germany. These taxa have been prioritized according to current demands of German research groups (about 50 sites) aiming at analyses and modeling of distribution patterns and their changes through time. In the long term, we aim at providing data and open source software applicable for any taxon and geographic region. For this purpose, a platform for open access journals for long-term availability of professional e-journals will be established. All generated data will also be made accessible through GFBio (German Federation for Biological Data). BIOfid is supported by the LIS-Scientific Library Services and Information Systems program of the German Research Foundation (DFG).
We present a method for detecting word sense changes by utilizing automatically induced word senses. Our method works on the level of individual senses and allows a word to have, e.g., one stable sense and then add a novel sense that later experiences change. Senses are grouped based on polysemy to find linguistic concepts, and we can find broadening and narrowing as well as novel (polysemous and homonymic) senses. We evaluate on a test set and present recall as well as estimates of the time between expected and detected change.
Biodiversity research heavily relies on recent and older literature, and the data contained therein. Despite great effort, large parts of the literature and the data it holds are still not available in appropriate formats needed for efficient compilation and analysis. As a part of the current funding strategy of the German Research Council (Deutsche Forschungsgemeinschaft, DFG), and resulting from an extensive dialogue with the scientific community in Germany, a "Specialised Information Service" (Fachinformationsdienst, FID) for Biodiversity Research will be established with the objective of making further segments of literature about biodiversity available in up-to-date formats. This project, starting 2017, is conducted by the University Library Johann Christian Senckenberg (Frankfurt/Main, Germany) together with the Senckenberg Gesellschaft für Naturforschung and the Text Technology Lab of the Goethe University (Frankfurt/Main).
The new Specialised Information Service for Biodiversity Research (FID Biodiversitätsforschung) comprises four core elements: (A) a text mining approach which encompasses advanced text technologies and a large body of 20th century literature; (B) the digitisation of selected German biodiversity literature; (C) a platform for Open Access journals; and (D) the acquisition of specialised print literature.
BIOfid is a specialized information service currently being developed to mobilize biodiversity data dormant in printed historical and modern literature and to offer a platform for open access journals on the science of biodiversity. Our team of librarians, computer scientists and biologists produce high-quality text digitizations, develop new text-mining tools and generate detailed ontologies enabling semantic text analysis and semantic search by means of user-specific queries. In a pilot project we focus on German publications on the distribution and ecology of vascular plants, birds, moths and butterflies extending back to the Linnaeus period about 250 years ago. The three organism groups have been selected according to current demands of the relevant research community in Germany. The text corpus defined for this purpose comprises over 400 volumes with more than 100,000 pages to be digitized and will be complemented by journals from other digitization projects, copyright-free and project-related literature. With TextImager (Natural Language Processing & Text Visualization) and TextAnnotator (Discourse Semantic Annotation) we have already extended and launched tools that focus on the text-analytical section of our project. Furthermore, taxonomic and anatomical ontologies elaborated by us for the taxa prioritized by the project’s target group - German institutions and scientists active in biodiversity research - are constantly improved and expanded to maximize scientific data output. Our poster describes the general workflow of our project ranging from literature acquisition via software development, to data availability on the BIOfid web portal (http://biofid.de/), and the implementation into existing platforms which serve to promote global accessibility of biodiversity data.
The Specialized Information Service Biodiversity Research (BIOfid) has been launched to mobilize valuable biological data from printed literature that has lain hidden in German libraries over the past 250 years. In this project, we annotate German texts converted by OCR from historical scientific literature on the biodiversity of plants, birds, moths and butterflies. Our work enables the automatic extraction of biological information previously buried in the mass of papers and volumes. For this purpose, we generated training data for the tasks of Named Entity Recognition (NER) and Taxa Recognition (TR) in biological documents. We use this data to train a number of leading machine learning tools and create a gold standard for TR in biodiversity literature. More specifically, we perform a practical analysis of our newly generated BIOfid dataset through various downstream-task evaluations and establish a new state of the art for TR with 80.23% F-score. In this sense, our paper lays the foundations for future work in the field of information extraction in biology texts.
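The F-score quoted above is a span-level measure; a minimal sketch of how such a score is computed against a gold standard follows, with invented spans and labels purely for illustration:

```python
def span_f1(gold, predicted):
    """Span-level F1: a predicted span counts as correct only if its
    boundaries and label exactly match a gold-standard span."""
    if not gold or not predicted:
        return 0.0
    true_positives = len(gold & predicted)
    precision = true_positives / len(predicted)
    recall = true_positives / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Invented (start, end, label) spans over a tokenized sentence
gold = {(0, 2, "TAXON"), (5, 6, "LOCATION")}
predicted = {(0, 2, "TAXON"), (5, 6, "PERSON")}
print(span_f1(gold, predicted))  # 0.5
```

In the example, one of two predictions matches exactly (precision 0.5) and one of two gold spans is recovered (recall 0.5), giving F1 = 0.5; a mislabeled span counts as both a false positive and a false negative.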
The Goethe University Frankfurt has updated its APC expenditures, providing data for the 2019 period.
The University Library Johann Christian Senckenberg is in charge of the University’s Open Access Publishing Fund, which is supported under the DFG’s Open Access Publishing Programme.
The contact person is Roland Wagner.
The Specialised Information Service Performing Arts (SIS PA) is part of a funding programme by the German Research Foundation that enables libraries to develop tailor-made services for individual disciplines in order to provide researchers direct access to relevant materials and resources from their field. For the field of performing arts, the SIS PA is aggregating metadata about theater and dance resources, currently mostly from German-speaking cultural heritage institutions, in a VuFind-based search portal.
In this article, we focus on metadata quality and its impact on the aggregation workflow by describing the different, possibly data provider-specific, process stages of improving data quality in order to achieve a searchable, interlinked knowledge base. We also describe lessons learned and limitations of the process.
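One way to picture the provider-specific stages of such an aggregation workflow is a simple pipeline of cleaning functions applied to each record. The stage names and the date-format quirk below are hypothetical examples, not the SIS PA's actual code:

```python
def normalize_record(record, stages):
    """Run one metadata record through an ordered list of cleaning stages,
    as a provider-specific aggregation workflow might."""
    for stage in stages:
        record = stage(record)
    return record

def strip_whitespace(rec):
    # Generic stage: trim stray whitespace in all string fields
    return {k: v.strip() if isinstance(v, str) else v for k, v in rec.items()}

def reformat_date(rec):
    # Hypothetical provider quirk: dates delivered as DD.MM.YYYY
    day, month, year = rec["date"].split(".")
    return {**rec, "date": f"{year}-{month}-{day}"}

raw = {"title": "  Faust  ", "date": "28.03.1912"}
print(normalize_record(raw, [strip_whitespace, reformat_date]))
# {'title': 'Faust', 'date': '1912-03-28'}
```

Composing per-provider stage lists in this way keeps generic fixes shared while isolating each data provider's quirks, which is the kind of staged quality improvement the article describes.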
Biodiversity information is contained in countless digitized and unprocessed scholarly texts. Although automated extraction of these data has been gaining momentum for years, there are still innumerable text sources that are poorly accessible and require a more advanced range of methods to extract relevant information. To improve the access to semantic biodiversity information, we have launched the BIOfid project (www.biofid.de) and have developed a portal to access the semantics of German language biodiversity texts, mainly from the 19th and 20th century. However, to make such a portal work, a couple of methods had to be developed or adapted first. In particular, text-technological information extraction methods were needed, which extract the required information from the texts. Such methods draw on machine learning techniques, which in turn are trained by learning data. To this end, among others, we gathered the BIOfid text corpus, which is a cooperatively built resource, developed by biologists, text technologists, and linguists. A special feature of BIOfid is its multiple annotation approach, which takes into account both general and biology-specific classifications, and by this means goes beyond previous, typically taxon- or ontology-driven proper name detection. We describe the design decisions and the genuine Annotation Hub Framework underlying the BIOfid annotations and present agreement results. The tools used to create the annotations are introduced, and the use of the data in the semantic portal is described. Finally, some general lessons, in particular with multiple annotation projects, are drawn.
In 23 survey areas with woodland vegetation or woodland succession in Frankfurt/Main with a total size of 134 hectares, woody species were surveyed (excluding species only occurring as planted individuals). We found 149 woody taxa; 42% of them indigenous, and 58% non-native. Out of the 86 non-native taxa, 49 were naturalized in Frankfurt while 37 were considered as casual. Among non-native taxa, East Asian taxa formed the largest phytogeographic group. We found taxa originating from horticulture (cultigens) to be an important part of the woody flora of Frankfurt/Main. The most common taxa were Acer pseudoplatanus, A. platanoides, Betula pendula, and Sambucus nigra; the two Acer species were regarded as naturalized. Non-native woody species were generally common (with percentages ranging from 24% to 79% in individual areas).
The authors reflect on their experiences as the founding editors of the History of Knowledge blog. Situating the project in its specific institutional, geographical, and historiographical contexts, they highlight its role in scholarly communication and research alongside journals and books in a research domain that is still young, especially when viewed from an international perspective. At the same time, the authors discuss the blog’s role as a tool for classifying and structuring a corpus of work as it grows over time and as new themes and connections emerge from the contributions of its many authors.
Current research on theory and practice of digital libraries: best papers from TPDL 2019 & 2020
(2022)
This volume presents a special issue on selected papers from the 2019 & 2020 editions of the International Conference on Theory and Practice of Digital Libraries (TPDL). They cover different research areas within Digital Libraries, from Ontology and Linked Data to quality in Web Archives and Topic Detection. We first provide a brief overview of both TPDL editions, and we introduce the selected papers.