The paper presents an overview of some of the internationally relevant digital resource projects in Germany. Online presentations of primary sources, e.g. photographic material, and bibliographic tools supporting research, such as cross-searching, are presented as potential partners for resource sharing with North America. The paper not only sketches possibilities for cooperation, but also outlines the necessary preliminary work and some obstacles. The report is accompanied by a short characterization of African studies in Germany and the status quo of Open Access initiatives.
Large American research libraries have been acquiring, by purchase and by lease, huge multi-disciplinary electronic collections of primary and secondary source materials. For example, the Digital Evans and Canadian Poetry make easily available to scholars primary materials that were once scattered in libraries across North America and Europe. The American State Papers, 1789–1838 collection allows easier searching of fragile rare materials. Collections created by libraries digitizing their own holdings, such as the Archive of Early American Images from the John Carter Brown Library at Brown University, make research materials more discoverable and usable. Yet recent scholarship in American Studies by American and European scholars makes relatively little use of these new materials. Both disparities and congruities between what scholars use and what research libraries collect are apparent. Some simple reasons explain the dissonance. Furthermore, conversations with scholars suggest that materials and collections alone will not suffice to support research. Librarians' skills and actions will increase the value of the new research materials.
Since 2005, the library of the South Asia Institute, in cooperation with Heidelberg University Library, has been responsible for the DFG-funded special subject collection (Sondersammelgebiet) South Asia. It thereby took over from Tübingen University Library a special subject collection rich in tradition, whose history reaches back to 1949. The talk will give a short overview of the historical context of the SSG South Asia on the one hand, and of new developments on the other, such as the Virtuelle Fachbibliothek Südasien (virtual subject library South Asia), which has been built up at the library of the South Asia Institute over the last two years. Against this background, the potential for cooperation in the field of digital information resources will be highlighted.
Contents:
- BIX: pole position and runner-up
- Frankfurt University Library: its responsibilities, its collections, its databases, its supra-regional collecting responsibilities, and some statistics
- The "Sondersammelgebiet" Germanistik: its scope and contents, its principal strengths, present situation, and budget
- Sammlung Deutscher Drucke: the 1801–1870 segment of the "Distributed National Library"
- Information Services: Bibliographie der deutschen Sprach- und Literaturwissenschaft (BDSL), Neuerwerbungsliste Germanistik, Bibliographie germanistischer Bibliographien (BgB), DigiZeitschriften, information bulletins
- Work of the Subject Specialist: exhibitions, publicity material
The scientific innovation process embraces the steps from problem definition through the development and evaluation of innovative solutions to their successful exploitation. The challenges imposed by this process can be answered by the creation of a powerful and flexible next-generation e-Science infrastructure, which exploits leading-edge information and knowledge technologies and enables comprehensive, intelligent support for this process. This paper describes our vision of a knowledge-based e-Science infrastructure, based on the results of an in-depth study of researchers' requirements. Furthermore, it introduces the Fraunhofer e-Science Cockpit as a first implementation of our vision.
University 2.0
(2007)
The major challenge facing universities in the next decade is to reinvent themselves as information organizations. Universities are, at their core, organizations that cultivate knowledge, seeking both to create new knowledge and to preserve and convey existing knowledge, but they are remarkably inefficient and therefore ineffective in the way that they leverage their own information resources to advance that core activity. This talk will explore ways that the university could learn from what is now widely called "Web 2.0" -- a term that is meant to identify a shift in emphasis from the computer as platform to the network as platform, from hardware to data, from the wisdom of the expert to the wisdom of crowds, and from fixity to remixability.
In several academic fields (most notably: physics, mathematics, economics, astronomy, and computer science), most current research papers are freely accessible on the Internet in both pre- and post-publication formats. For these disciplines, open-access dissemination of publications and data has created a robust and useful information environment that is highly valued by researchers. While the acceptance of open-access dissemination has been disruptive to traditional scholarly publishing, the status and economic value of the elite journals has remained largely intact. Indeed, publication in the most prestigious journals (e.g., Science, Nature, Cell, BMJ, etc.) may have more influence than ever in determining the advancement of academic careers. Traditional publishing and open access will continue to coexist uncomfortably for years to come, but the next wave of digital publishing systems (empowered social networking applications) will establish open access repositories as indispensable infrastructure for the sciences and social sciences.
The aim of the meeting is to put this current topic up for critical discussion with international speakers and participants and to find solutions that optimize the integration of information services into university structures. Presentations and discussions will consider:
- integrated versus cooperative models
- single-unit operations, central or decentralized faculty organisations
- outsourcing services versus in-house organisation/effort
- institutional repositories versus discipline-based repositories
- information supply in the era of "Google Print"
The symposium "The Integration of Information Services into University Structures" will take place simultaneously with the Frankfurt Book Fair, the largest book-related event in the world, attracting 286,621 visitors annually (2006 figure), thus giving participants who arrive early the chance to combine attendance at both the Book Fair and the symposium. A cultural event and dinner in one of Frankfurt's historical rooms on Friday will be a social highlight! A contingent of hotel rooms has been reserved on a first-come, first-served basis outside Frankfurt at non-Book-Fair prices. More information on request.
Rather than introducing a new system for global identity management, the University of Freiburg decided to continue with its existing software systems (especially from HIS), to identify the leading system for each set of data, and to mirror the data between the various systems. A clearly defined workflow ensures that changes to data are made only on the relevant "leading" system and then propagated to the other systems. User authentication for systems managed by the computer center is done via LDAP. Consequently, while access rights are granted by the LDAP system, the decision of whether or not a person is a member of the university is left to the administration. As a consequence, the implementation of a portal called mylogin to obtain the necessary tickets for Shibboleth is a straightforward process, as it only remains to check the data against LDAP before issuing the corresponding tickets.
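As an illustration only (not the Freiburg implementation), the following Python sketch shows the "check credentials against LDAP, then issue the ticket" step using the ldap3 library; the server address and DN layout are hypothetical placeholders.

```python
# Illustrative only: the "check against LDAP, then issue the ticket" step.
# Server address and DN layout are hypothetical placeholders.
from ldap3 import Server, Connection, ALL

def authenticate(uid: str, password: str) -> bool:
    """Bind to the directory with the user's own credentials.

    A successful bind proves the credentials are valid; whether the
    person is (still) a member of the university is decided upstream
    by the administration, which maintains the leading system.
    """
    server = Server("ldap.example-uni.de", use_ssl=True, get_info=ALL)
    user_dn = f"uid={uid},ou=people,dc=example-uni,dc=de"
    conn = Connection(server, user=user_dn, password=password)
    if not conn.bind():   # failed bind -> wrong or expired credentials
        return False
    conn.unbind()
    return True

# Only after this check would a portal like mylogin issue the
# corresponding Shibboleth ticket for the authenticated user.
```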
Working closely with teaching and research staff is critical to the success of libraries and information services. Indeed, the degree of integration with a University's academic work is one of the factors that distinguish a successful service from a poor one. This paper will consider the relationship between information services and how universities operate. Using the challenges facing institutions as a starting point - including the move towards a single European higher education market - the impact of information provision on institutional strategies will be explored. Information resources underpin all learning, teaching and research activities and the presentation will consider the professional practice which ensures that libraries and computing services are fully exploited. The focus on the experience of students is leading some institutions to integrate information services with a wide range of other activities and the paper will consider the opportunities and challenges which this brings, including the need to build working relationships with a broader range of professional groups.
Universities of the 21st century depend heavily on an efficient IT infrastructure for teaching, research and administration. E-learning environments, blended learning and all sorts of multimedia and cooperative environments are important requirements for teaching at universities and for further education. Many of the organizational structures, such as continuous examinations, interdisciplinary studies, the ECTS system and more, require efficient examination administration systems as well as room and personnel management. Research is based on Internet inquiries, e-science, e-library and other IT-supported media. Research results must be documented and archived digitally, and must be distributed and marketed through the Internet. The efficient administration of all kinds of university resources must be planned using management support systems, and the decisions of university heads must be supported by well-documented statistics and analysis software.
In the past, many of the applications named above for teaching, research and administration were performed by separate software applications running in distributed university environments. Powerful server structures and networking features as well as new software technology, such as service-oriented architectures, make it possible and necessary to recentralize the IT services of the university after a long period of decentralization. Based on metadirectories and unified access procedures, all of the software components must be integrated into a seamless IT infrastructure. To guarantee consistency, data must not be stored redundantly.
Project IntegraTUM of Technische Universität München, started in 2003, is an umbrella project to define such a seamless IT infrastructure for a university with 22,000 students and approximately 10,000 staff. The talk describes the project, which, besides the definition of new technology, is based on a fundamental process analysis of the university and many changes to the organizational structure.
Information Supply in the Era of Mass Digitization
Drawing on his experience at the Bodleian Library and now at the British Library, Ronald Milne will share his first-hand impressions of 'boutique' and mass digitization programmes, such as those undertaken by Google and Microsoft, and their effect on information supply. Collections define libraries. What does this mean in the 21st century? Will all libraries become equal as the digital revolution progresses? What might the digitization and indexing of millions of works mean for university researchers and the intellectually curious more generally? What are the benefits, and what are the strategic issues that we are bound to consider?
Information supply is a genuine task of academic institutions as well as of publishers. Publishers profit from copyright provisions which give them exclusive rights in their products. The same copyright provisions are often the limiting factor when academic institutions try to improve their service to the academic community. This is the case in particular when it comes to digital access to information. In the so-called "Second Basket", the German copyright act has just been revised, introducing explicit legal exemptions for document delivery and on-the-spot consultation of works contained in public libraries' collections. At the same time, unresolved issues remain with respect to existing legal exemptions as well as the new ones. What will the legal parameters look like for academic institutions once the "Second Basket" has been put into force? How can libraries work with these provisions in practice?
In the year 2000, the Deutsche Initiative für Netzwerkinformation (DINI) / German Coalition of Network Information was founded. Its founding charter consists of ten theses, "Changes in information infrastructure – challenges to universities and their information and communications facilities" (see http://www.dini.de).
Thesis 4 states: "The universities need to establish information management structures to integrate departments. University managements, departments and central institutions ought to prepare a university development plan for the areas of information, communication and multimedia." ...
Trends for distributed, open, and increasingly collaborative models of information delivery challenge the library's classic roles. In addition, trends within the research community for more interdisciplinary and collaborative scholarship create an opportunity for more enabling information infrastructure. In an age of Amazon, Google, and "social" tools, how should the library respond? My presentation will focus on strategies for bringing the library's "assets" into the flow of researchers' work. How can the library integrate its resources into the scholar's workflow? What are the emerging challenges of this integration?
The Frankfurt University Library possesses one of the outstanding Africana collections in continental Europe; its regional and disciplinary scope is unique in Germany. With about 5,000 new acquisitions a year, the collection has grown to over 200,000 items on Africa south of the Sahara. Some 50,000 historical and rare photographs are fully digitized and freely accessible. Together with a collection of around 18,000 books stemming from the collections of the German Colonial Society at the end of the 19th and the beginning of the 20th century, they constitute the historical foundations of the collection. Recently, the University Library Frankfurt and the library of the GIGA Institute of African Affairs, Hamburg, started the project ilissAfrica (internet library sub-Saharan Africa), a central subject gateway for online resources and a powerful tool for bibliographic research. These new services will be indispensable for researchers and librarians in African Studies and will promote African studies worldwide.
The emperor's new colonies
(2008)
The Colonial Picture Archive in Frankfurt offers a unique pictorial record of German colonial history. For many years the collection was virtually forgotten. However, following painstaking description and digitisation, the photo documents are now available on the Internet to researchers in Germany and abroad.
On 3–4 November 2008, the following conference took place in Frankfurt am Main: 21st Century Libraries: Changing Forms, Changing Challenges, Changing Objectives = 8th Frankfurt Scientific Symposium. It was organised by the Universitätsbibliothek Johann Christian Senckenberg in cooperation with the Deutsches Architekturmuseum (Frankfurt am Main) and the Akademie der Architekten- und Stadtplanerkammer Hessen (Wiesbaden). The 8th Frankfurt Symposium put contemporary library construction, together with its developments and problems, up for discussion; some theoretical and technical contributions rounded off the programme. Two central focal points of the symposium were the integration of library buildings into their urban surroundings and the effects of socio-political and technological developments on the architecture of libraries.
The correspondence between the terminology used for querying and that used in the content objects to be retrieved is a crucial prerequisite for effective retrieval technology. However, as terminology evolves over time, a growing gap opens up between older documents in (long-term) archives and the active language used for querying such archives. Thus, technologies for detecting and systematically handling terminology evolution are required to ensure the "semantic" accessibility of (Web) archive content in the long run. As a starting point for dealing with terminology evolution, this paper formalizes the problem and discusses issues, first ideas and relevant technologies.
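To make the problem concrete, here is a minimal, hypothetical Python sketch of one cheap signal for terminology evolution: terms whose relative frequency collapses between two time slices of an archive. It is not the paper's formalization, just an illustration; the corpus format and thresholds are assumptions.

```python
# Hypothetical sketch: flag terms whose relative frequency collapses
# between an old and a new time slice of an archive -- one cheap
# signal that the active vocabulary has moved on.
from collections import Counter

def relative_freqs(docs):
    """docs: iterable of token lists; returns term -> relative frequency."""
    counts = Counter(tok for tokens in docs for tok in tokens)
    total = sum(counts.values()) or 1
    return {t: c / total for t, c in counts.items()}

def fading_terms(old_docs, new_docs, drop=10.0, min_freq=1e-5):
    """Terms frequent in the old slice but >= `drop` times rarer now."""
    old, new = relative_freqs(old_docs), relative_freqs(new_docs)
    return sorted(t for t, f in old.items()
                  if f >= min_freq and f / new.get(t, 1e-12) >= drop)

# Terms flagged here (e.g. historical names or spellings) would then
# need to be mapped to their modern counterparts so that today's
# queries still retrieve the older archive content.
```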
New projects, services and collaborations have recently brought the infrastructural services for African Studies a big step forward. This report gives an account of new subject gateways and digitisation projects. It discusses recent European cooperation ventures in the field of librarianship. Additionally, new developments and services of the Africa Collection at Frankfurt University Library are presented, which help to address the changing needs of researchers and to handle information overload, while keeping up with the latest developments. Nevertheless, the fragmentation and compartmentalisation of the different services still hinder more integrated information services.
Talk given at the symposium "Economy and Acceptance of Open Access Strategies" of the Frankfurt am Main University Library in cooperation with the Frankfurt Book Fair 2011, on 14 October 2011.
Management Summary: Conducted within the project “Economic Implications of New Models for Information Supply for Science and Research in Germany”, the Houghton Report for Germany provides a general cost and benefit analysis for scientific communication in Germany comparing different scenarios according to their specific costs and explicitly including the German National License Program (NLP).
Based on the scholarly lifecycle process model outlined by Björk (2007), the study compared the following scenarios according to their accounted costs:
- Traditional subscription publishing,
- Open access publishing (Gold Open Access; refers primarily to journal publishing where access is free of charge to readers, while the authors or funding organisations pay for publication),
- Open Access self-archiving (authors deposit their work in online open access institutional or subject-based repositories, making it freely available to anyone with Internet access; further divided into (i) ‘Green Open Access’ self-archiving operating in parallel with subscription publishing, and (ii) the ‘overlay services’ model, in which self-archiving provides the foundation for overlay services, e.g. peer review, branding and quality control services), and
- the NLP.
Within all scenarios, five core activity elements (fund research and research communication; perform research and communicate the results; publish scientific and scholarly works; facilitate dissemination, retrieval and preservation; study publications and apply the knowledge) were modelled and priced together with all their constituent activities.
Modelling the impacts of an increase in accessibility and efficiency resulting from more open access on returns to R&D over a 20-year period and then comparing costs and benefits, we find that the benefits of open access publishing models are likely to substantially outweigh the costs; the benefits of the German NLP, while smaller, also exceed its costs.
This analysis of the potential benefits of more open access to research findings suggests that different publishing models can make a material difference to the benefits realised, as well as to the costs faced. It seems likely that more Open Access would have substantial net benefits in the longer term and, while net benefits may be lower during a transitional period, they are likely to be positive both for ‘author-pays’ Open Access publishing and the ‘overlay journals’ alternative (‘Gold Open Access’), and for parallel subscription publishing and self-archiving (‘Green Open Access’). The NLP returns substantial benefits and savings at a modest cost, returning one of the highest benefit/cost ratios available from unilateral national policies during a transitional period (second to that of ‘Green Open Access’ self-archiving). Whether ‘Green Open Access’ self-archiving in parallel with subscriptions is a sustainable model over the longer term is debatable, and what impact the NLP may have on the take-up of Open Access alternatives is also an important consideration. So too is the potential for developments in Open Access or other scholarly publishing business models to significantly change the relative cost-benefit of the NLP over time.
The results are comparable to those of previous studies from the UK and the Netherlands. Green Open Access in parallel with the traditional model yields the best benefit/cost ratio. Beyond its benefit/cost ratio, the case for the NLP rests on its enforceability. The true cost of toll-access publishing (besides the "buyback" of information) is that it denies society access to research and knowledge.
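For illustration, a deliberately simplified Python sketch of the kind of calculation behind such benefit/cost ratios: the added returns to R&D from an accessibility/efficiency gain are discounted over 20 years and set against the discounted recurring cost of the publishing model. All figures are invented placeholders, not numbers from the German study.

```python
# Placeholder figures, not results from the study: a stylized version
# of the comparison -- extra returns to R&D from an accessibility and
# efficiency gain, discounted over 20 years, divided by the discounted
# cost of running the publishing model.

def npv(annual, years=20, rate=0.035):
    """Net present value of a constant annual cash flow."""
    return sum(annual / (1 + rate) ** t for t in range(1, years + 1))

def benefit_cost_ratio(rd_spend, rate_of_return, gain, annual_cost):
    extra_return = rd_spend * rate_of_return * gain  # per year
    return npv(extra_return) / npv(annual_cost)

# e.g. 10 bn EUR annual R&D, a 20% social rate of return, a 5% gain in
# accessibility/efficiency, and 40 m EUR annual model cost:
print(benefit_cost_ratio(10e9, 0.20, 0.05, 40e6))  # -> 2.5
```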
High-impact events, political changes and new technologies are reflected in our language and lead to the constant evolution of terms, expressions and names. Not knowing the names used in the past to refer to a named entity can severely decrease the performance of many computational linguistic algorithms. We propose NEER, an unsupervised method for named entity evolution recognition that is independent of external knowledge sources. We find time periods with a high likelihood of evolution. By analyzing only these time periods using a sliding-window co-occurrence method, we capture evolving terms in the same context. We thus avoid comparing terms from widely different periods in time and overcome a severe limitation of existing methods for named entity evolution, as shown by a high recall of 90% on the New York Times corpus. We compare several relatedness measures for filtering to improve precision. Furthermore, using machine learning with minimal supervision improves precision to 94%.
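A rough sketch of the sliding-window co-occurrence step (simplified relative to NEER): within a period flagged as a likely change point, collect the terms that frequently share a context window with the known entity name, as candidates for evolved names. The corpus format and thresholds here are assumptions.

```python
# Simplified sketch of the sliding-window co-occurrence step: collect
# terms that frequently share a context window with the entity name
# inside a period flagged as a likely change point.
from collections import Counter

def candidate_corefs(docs, entity, window=10, min_count=5):
    counts = Counter()
    for tokens in docs:                 # documents from the flagged period
        for i, tok in enumerate(tokens):
            if tok == entity:
                ctx = tokens[max(0, i - window): i + window + 1]
                counts.update(t for t in ctx if t != entity)
    # frequent context terms are candidates for evolved/alternate names,
    # to be filtered further with relatedness measures
    return [t for t, c in counts.most_common() if c >= min_count]
```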
"Library Buildings around the World" is a survey based on several years of research. Its objective is to document library buildings worldwide, starting with those built in 1990.
The parts on Germany, France, the United Kingdom and the United States have been thoroughly revised, supplemented and completed for this 2nd edition. A revision of the other countries is planned for the next edition.
The World Wide Web is the largest information repository available today. However, this information is very volatile and Web archiving is essential to preserve it for the future. Existing approaches to Web archiving are based on simple definitions of the scope of Web pages to crawl and are limited to basic interactions with Web servers. The aim of the ARCOMEM project is to overcome these limitations and to provide flexible, adaptive and intelligent content acquisition, relying on social media to create topical Web archives. In this article, we focus on ARCOMEM’s crawling architecture. We introduce the overall architecture and we describe its modules, such as the online analysis module, which computes a priority for the Web pages to be crawled, and the Application-Aware Helper which takes into account the type of Web sites and applications to extract structure from crawled content. We also describe a large-scale distributed crawler that has been developed, as well as the modifications we have implemented to adapt Heritrix, an open source crawler, to the needs of the project. Our experimental results from real crawls show that ARCOMEM’s crawling architecture is effective in acquiring focused information about a topic and leveraging the information from social media.
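The following Python skeleton illustrates the general pattern of such priority-driven focused crawling; it is a schematic, not ARCOMEM's code, and `score`, `fetch` and `extract_links` are caller-supplied stand-ins for the online analysis module and the (e.g. Heritrix-based) fetcher.

```python
# Schematic of priority-driven focused crawling, not ARCOMEM's code:
# an online analysis step scores candidate URLs for topical relevance,
# and a priority queue decides what is fetched next.
import heapq

def focused_crawl(seeds, score, fetch, extract_links, budget=1000):
    frontier = [(-1.0, url) for url in seeds]   # max-heap via negated priority
    heapq.heapify(frontier)
    seen, archive = set(seeds), []
    while frontier and len(archive) < budget:
        _, url = heapq.heappop(frontier)        # most relevant URL first
        page = fetch(url)
        archive.append((url, page))
        for link in extract_links(page):
            if link not in seen:
                seen.add(link)
                # online analysis: estimate topical relevance of the link
                heapq.heappush(frontier, (-score(link, page), link))
    return archive
```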
Europeana provides a common access point to digital cultural heritage objects across different cultural domains, including libraries. The recent development of the Europeana Data Model (EDM) provides new ways for libraries to experiment with Linked Data. Indeed, the model is designed as a framework reusing various well-known standards developed in the Semantic Web community, such as the Resource Description Framework (RDF), OAI Object Reuse and Exchange (ORE), and the Dublin Core namespaces. It provides new opportunities for libraries to deliver rich and interlinked metadata to the Europeana aggregation.
However, to be able to provide data to Europeana, libraries need to create mappings from their library standards to EDM. This step involves decisions based on domain-specific requirements and on the possibilities offered by EDM. Since the cross-domain nature of EDM limits in some cases the completeness of the mappings, extensions of the model have been proposed to accommodate library needs.
The "Digitised Manuscripts to Europeana" project (DM2E) has created an extension of EDM to optimise the mapping of library data for manuscripts. The extension takes the form of subclasses and subproperties that further specialise EDM classes and properties. It includes spatial creation and publishing information, specific contributor and publication type properties, and more.
Furthermore, the granularity of the mapping has been extended to allow references and annotations at page level, as required for scholarly work. As part of this project, the metadata of the Hebrew Manuscripts as well as of the Medieval Manuscripts presented in the Digital Collections of the Frankfurt University Library has been mapped to this extension. This includes links to the Integrated Authority File (GND) of the German National Library, with further links to the Virtual International Authority File (VIAF).
Based on this development, a new comprehensive mapping from the digitisation metadata format METS/MODS to EDM has been established for all materials of the Frankfurt Judaica in "Judaica Europeana". It demonstrates today's capabilities for creating Linked Data structures in Europeana based on library catalogue data and structural data from the digitisation process.
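As a hedged illustration of the shape of data such a mapping produces, the rdflib sketch below builds a tiny EDM record with an ore:Aggregation and a GND link; the item identifiers and values are invented, and only the namespaces are the real EDM/ORE/Dublin Core ones.

```python
# Invented record values; only the namespaces are the real
# EDM / ORE / Dublin Core ones. Sketch of the shape of data a
# METS/MODS -> EDM mapping produces.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC, RDF

EDM = Namespace("http://www.europeana.eu/schemas/edm/")
ORE = Namespace("http://www.openarchives.org/ore/terms/")

g = Graph()
g.bind("edm", EDM); g.bind("ore", ORE); g.bind("dc", DC)

cho = URIRef("http://example.org/item/ms-hebr-0001")          # hypothetical
agg = URIRef("http://example.org/aggregation/ms-hebr-0001")   # hypothetical

g.add((cho, RDF.type, EDM.ProvidedCHO))
g.add((cho, DC.title, Literal("Hebrew manuscript (sample)", lang="en")))
# link into the GND authority file, as the DM2E mappings do:
g.add((cho, DC.creator, URIRef("http://d-nb.info/gnd/118540238")))

g.add((agg, RDF.type, ORE.Aggregation))
g.add((agg, EDM.aggregatedCHO, cho))
g.add((agg, EDM.isShownAt, URIRef("http://example.org/view/ms-hebr-0001")))

print(g.serialize(format="turtle"))
```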
The constantly growing amount of Web content and the success of the Social Web lead to increasing needs for Web archiving. These needs go beyond the pure preservation of Web pages. Web archives are turning into "community memories" that aim at building a better understanding of the public view on, e.g., celebrities, court decisions and other events. Due to the size of the Web, the traditional "collect-all" strategy is in many cases not the best method to build Web archives. In this paper, we present the ARCOMEM (From Collect-All Archives to Community Memories) architecture and implementation, which uses semantic information, such as entities, topics and events, complemented with information from the Social Web, to guide a novel Web crawler. The resulting archives are automatically enriched with semantic meta-information to ease access and allow retrieval based on conditions that involve high-level concepts.
The web and the social web play an increasingly important role as information sources for Members of Parliament and their assistants, journalists, political analysts and researchers. They provide important and crucial background information, such as reactions to political events and comments made by the general public. The case study presented in this paper is driven by two European parliaments (the Greek and the Austrian parliament) and targets the effective exploration of political web archives. In this paper, we describe semantic technologies deployed to ease the exploration of archived web and social web content and present evaluation results.
Cultural heritage reconstructed - Compact Memory and the Frankfurt Digital Judaica Collection
(2014)
Compact Memory, the internet archive of German Jewish periodicals, provides free global internet access to the vast majority of German-Jewish newspapers and periodicals of the 19th and 20th century.
Jewish historical newspapers are invaluable sources that supply direct and detailed information on the transformation process of Jewry and offer new insights into European Jewish history. The use of these historical sources, however, is extremely difficult, as complete sets of periodicals are rarely to be found, and they are scattered all over the world in different libraries and archives and in different physical formats (paper, microfilm).
Compact Memory contains the 110 most important Jewish German newspapers and periodicals in Central Europe from the period 1806–1938, covering the complete range of religious, political, social, cultural and academic aspects of Jewish life. The texts are available partly as full texts, processed by OCR, and partly as graphic documents with corresponding index options. The database offers advanced search options as well as downloading and printing of articles. Thousands of essays by more than 10,000 individual contributors have been bibliographically indexed.
Compact Memory was established by the Judaica Division of the University Library Frankfurt am Main, which runs it today in cooperation with the Aachen Chair of German-Jewish Literary History and the Cologne library Germania Judaica.
Compact Memory is one database within the Digital Collection Judaica, which, as part of Europeana and other digital portals, offers resources for the reconstruction and representation of Jewish cultural heritage.
The concept of culturomics was born out of the availability of massive amounts of textual data and the interest in making sense of cultural and language phenomena over time. Thus far, however, culturomics has only made use of, and shown the great potential of, statistical methods. In this paper, we present a vision for a knowledge-based culturomics that complements traditional culturomics. We discuss the possibilities and challenges of combining knowledge-based methods with statistical methods and address major challenges that arise due to the nature of the data: the diversity of sources, changes in language over time, as well as the temporal dynamics of information in general. We address all layers needed for knowledge-based culturomics, from natural language processing and relations to summaries and opinions.
This paper introduces a novel research tool for the field of linguistics: the Lin|gu|is|tik web portal provides a virtual library which offers scientific information on every linguistic subject. It comprises selected internet sources and databases as well as catalogues of linguistic literature, and addresses an interdisciplinary audience. The virtual library is the most recent outcome of the Special Subject Collection Linguistics of the German Research Foundation (DFG), and also integrates the knowledge accumulated in the Bibliography of Linguistic Literature. In addition to the portal, we describe long-term goals and prospects, with a special focus on ongoing efforts to extend the portal towards integrating language resources and Linguistic Linked Open Data.
Web archives created by the Internet Archive (IA) (https://archive.org), national libraries and other archiving services contain large amounts of information collected over a period of more than twenty years. These archives constitute a valuable source for research in many disciplines, including the digital humanities and the historical sciences, by offering a unique possibility to look into past events and their representation on the Web.
Most Web archive services aim to capture the entire Web (IA) or national top-level domains and are therefore broad in scope and diverse regarding the topics they contain and the time intervals they cover. Due to the large size and the broad scope, it is difficult for interested researchers to locate relevant information in the archives, as search facilities are very limited. Many users are more interested in studying smaller, topically coherent, event-centric collections of documents contained in a Web archive [1,2]. Such collections can reflect specific events such as elections or natural disasters, e.g. the German federal elections or the Fukushima nuclear disaster (2011).
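A minimal sketch of deriving such an event-centric sub-collection, assuming a simplified index of archived captures with a date and extracted text (the record format and field names are hypothetical; real archive indexes such as CDX files would need more preprocessing):

```python
# Assumed, simplified record format (date plus extracted text); real
# archive indexes (e.g. CDX) would need more preprocessing.
from datetime import date

def event_collection(records, keywords, start, end):
    """Select captures inside the event's time interval that match
    at least one topical keyword."""
    kws = [k.lower() for k in keywords]
    return [r for r in records
            if start <= r["date"] <= end
            and any(k in r["text"].lower() for k in kws)]

# e.g. a Fukushima-centred collection:
# fukushima = event_collection(archive, ["fukushima", "nuclear"],
#                              date(2011, 3, 11), date(2011, 6, 30))
```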
In order to promote the accessibility of biodiversity data in historic and contemporary literature, we introduce a new interdisciplinary project called BIOfid (FID = Fachinformationsdienst, a service providing specialized information). The project aims to mobilize data available only in print by combining the digitization of scientific biodiversity literature with the development of innovative text mining tools for complex, ultimately semantic, searches throughout the complete text corpus. A major prerequisite for the development of such search tools is the provision of sophisticated anatomy ontologies on the one hand, and of complete lists of species names (those currently considered valid as well as all synonyms) at a global scale on the other. In the initial stage, we chose examples from German publications of the past 250 years dealing with the geographic distribution and ecology of vascular plants (Tracheophyta), birds (Aves), as well as moths and butterflies (Lepidoptera) in Germany. These taxa have been prioritized according to the current demands of German research groups (about 50 sites) aiming at analyses and modelling of distribution patterns and their changes through time. In the long term, we aim to provide data and open source software applicable to any taxon and geographic region. For this purpose, a platform of open access journals for the long-term availability of professional e-journals will be established. All generated data will also be made accessible through GFBio (German Federation for Biological Data). BIOfid is supported by the LIS (Scientific Library Services and Information Systems) program of the German Research Foundation (DFG).
We present a method for detecting word sense changes by utilizing automatically induced word senses. Our method works at the level of individual senses and allows a word to have, for example, one stable sense and then add a novel sense that later experiences change. Senses are grouped based on polysemy to find linguistic concepts, and we can find broadening and narrowing as well as novel (polysemous and homonymic) senses. We evaluate on a test set and present recall as well as estimates of the time between expected and detected change.
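As a toy illustration of sense-level comparison (not the paper's method), induced senses for two periods can be represented as sets of context words and matched by overlap; a new cluster without a close counterpart in the earlier period then signals a novel sense. All thresholds are arbitrary assumptions.

```python
# Toy illustration: induced senses as sets of context words, matched
# across two periods by Jaccard overlap. Thresholds are arbitrary.
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def classify_senses(old_senses, new_senses, novel_below=0.2):
    """old_senses/new_senses: lists of sets of context words per sense."""
    report = []
    for sense in new_senses:
        best = max((jaccard(sense, old) for old in old_senses), default=0.0)
        label = "novel sense" if best < novel_below else "continuation"
        report.append((label, round(best, 2), sorted(sense)[:5]))
    return report

# A sense labelled "continuation" whose matched cluster has grown or
# shrunk markedly would be a candidate for broadening or narrowing.
```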