Aerosol particles act as condensation nuclei for cloud droplets (cloud condensation nuclei, CCN) or ice crystals and are therefore crucial for cloud and precipitation formation. Both aerosol particles and clouds can scatter sunlight efficiently, which exerts a cooling effect on the climate. Some particles, such as wind-blown dust or sea salt, are injected directly into the atmosphere; the largest fraction of particles, however, and about half of all CCN, are formed by the condensation of gaseous substances. This process is called nucleation or new particle formation (NPF). Despite intensive research, NPF is not yet fully understood, owing to the complexity of the chemical processes in the atmosphere and to the difficulty of identifying and quantifying the relevant substances at extremely low mixing ratios (about one molecule or cluster per 10^12 to 10^15 molecules). Besides the question of which substances participate in nucleation, it is also still unclear whether ion-induced nucleation is an important process for the climate. The CLOUD project (Cosmics Leaving OUtdoor Droplets) at CERN addresses these questions by simulating particle formation in a chamber experiment under extremely well-controlled conditions. The chemical systems discussed in this thesis comprise the binary (H2SO4-H2O), the ternary ammonia (H2SO4-H2O-NH3), and the ternary dimethylamine (H2SO4-H2O-(CH3)2NH) system.
Some of the key results from experiments at the CLOUD chamber are discussed. They show that the binary and the ternary ammonia system can explain atmospheric nucleation at low temperatures, whereas the ternary dimethylamine system is in principle capable of describing the high nucleation rates observed near the ground at atmospherically relevant sulfuric acid concentrations. Furthermore, two measurement methods essential for nucleation studies are presented. The Chemical Ionization Mass Spectrometer (CIMS) is used to measure gas-phase sulfuric acid, since H2SO4 is presumably the most important substance in atmospheric nucleation. The Chemical Ionization-Atmospheric Pressure interface-Time Of Flight (CI-APi-TOF) mass spectrometer measures sulfuric acid and neutral clusters. Both instruments were optimized for deployment at CLOUD, and instrumental developments were made concerning the ion source, which uses a corona discharge. In addition, a calibration unit providing defined sulfuric acid concentrations was developed, and the CI-APi-TOF was set up. For the ternary dimethylamine system, nucleation rates and the first measurements of large nucleating neutral clusters are presented. Sulfuric acid monomer and dimer concentrations measured with the CIMS at low temperatures served to derive the thermodynamic properties of dimer formation in the binary and ternary ammonia systems. To determine nucleation rates as accurately as possible, a new method was developed that makes it possible to account for the effect of self-coagulation during nucleation.
The studies summarized here contribute significantly to the understanding of new particle formation.
Musical serialism is still dismissed today as a mechanistic compositional procedure, seemingly typical of the unsettled post-war period of the 1950s. This view overlooks the fundamental impulses that these approaches continue to generate. This can be seen, for example, in how strongly a formal, structure-oriented way of thinking has taken hold in electronic music since the 1990s, something not yet foreseeable in the sound- and MIDI-oriented procedures of the 1980s. A qualitatively new type of form- and structure-generating tool in the virtual realm of digital symbol processing is now available in collaborative real-time programs, which let 'composition' be experienced as a new structural dispositif and can be seen as a direct continuation of serialism.
Only the inclusion of the foreign protects against sterile identity, said Adorno after returning to Frankfurt from exile in the United States. How do matters stand today with foreignness and strangers, and with the relationship between the familiar and the foreign? The philosopher and author Rolf Wiggershaus explores this in conversation with five Frankfurt professors.
Do prejudices serve a purpose? In human evolutionary history they presumably did, allowing friend to be distinguished from foe. In today's global, yet more complex world, however, it is important to know why prejudices arise and which group processes lie behind them. Since the 1950s, social psychology has been able to point to a wealth of experiments with fascinating results. One of them: the more contact people from different groups have with one another, the weaker their prejudices become.
The encounter with the neighbor remains almost free of communication if one only takes care that the branches of the birch tree do not grow across the property line. Has the neighbor long since ceased to be merely the other from next door or across the street, and tended instead to become a stranger? Have neighborhoods really lost most of their character of social obligation? The cultural anthropologist goes in search of clues.
The 16 books that Ed Ruscha published between 1963 and 1978 share a number of characteristic features. In their unpretentious appearance they are unmistakably striking for their clear layout and sober typography, which seem to contradict the often somewhat absurd-sounding titles such as 'Twentysix Gasoline Stations', 'Nine Swimming Pools', 'Some Los Angeles Apartments', or 'Every Building on the Sunset Strip'. Inside each book is a series of mostly black-and-white photographs depicting exactly what the title announces: gas stations, swimming pools, parking lots, and so on. Although the photographs are what first catch the eye when leafing through, it quickly becomes clear that they always stand in close relation to their carrier medium, the book. Two aspects of Ruscha's photo books are of interest with regard to the principle of seriality: on the one hand, the artist's prototypical procedure in producing his publications, and on the other, the connection between the presentation of a photographic series in book form and the double temporality that results from it.
One of Nietzsche's most famous sentences reads: "What, then, is truth? A movable host of metaphors, metonymies, anthropomorphisms: in short, a sum of human relations which have been poetically and rhetorically intensified, transferred, and embellished, and which, after long usage, seem to a people fixed, canonical, and binding [...]". The sentence is quoted and invoked again and again whenever the validity claims of theories are to be called into question, concepts and normative ideas deconstructed, or the share of the figurative in thought brought to the fore. The emphasis then falls on the dominant tropes of metaphor and metonymy, if not on metaphor alone. In every case, however, it is the identification of truth with metaphor that is moved into the center of attention: of truth with image, or with transferred, 'improper', displaced meaning; of truth with a meaning that cannot be fixed but is in motion; and the displacement comes about, in keeping with the literal sense of the word, through tropes. For traditional rhetoric, at least, defines tropes as those figures in which the meaning turns away from the original content of the word (τρέπεσϑαι). Tropes are turnings of sense. Regarding Nietzsche's famous sentence, attention shall briefly be directed here to two other aspects: first, to the metaphor contained in the sentence itself, that of the host; and second, to the structure of the sentence, that is, to the fact that the predication of truth as a "host of metaphors, metonymies, anthropomorphisms" is itself a rhetorical figure, namely an enumeration.
One of the main things that we as humans do throughout our lives is recognize and classify all kinds of visual objects. It is known that about fifty percent of the neocortex is devoted to visual processing. This tells us that object recognition (OR) is a complex task for the human and animal brain, yet we accomplish it in a fraction of a second.
The main question is: how exactly does the brain do it? Does the brain use a feature extraction algorithm for OR tasks? The hierarchical structure of the visual cortex, and studies of the part of it called V1, suggest that our brain performs feature extraction for OR tasks using Gabor filters. We also use prior knowledge to detect and recognize objects we have never seen before, and as we grow up we learn new objects faster than before.
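The Gabor-filter feature extraction mentioned above can be sketched as follows. This is a generic illustration, not the system developed in this work; the parameter values (`size`, `lam`, `sigma`, `gamma`, the four orientations) are conventional defaults chosen for the example:

```python
import numpy as np

def gabor_kernel(size=15, theta=0.0, lam=6.0, sigma=3.0, gamma=0.5, psi=0.0):
    """Real part of a Gabor filter: a sinusoidal carrier at angle theta
    (wavelength lam) under a Gaussian envelope (scale sigma, aspect gamma)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier

def gabor_features(image, n_orientations=4):
    """Convolve the image with a small bank of oriented Gabor filters
    and return the mean absolute response per orientation."""
    feats = []
    for k in range(n_orientations):
        kern = gabor_kernel(theta=k * np.pi / n_orientations)
        kh, kw = kern.shape
        # 'valid' 2-D correlation via explicit sliding windows
        windows = np.lib.stride_tricks.sliding_window_view(image, (kh, kw))
        response = np.einsum('ijkl,kl->ij', windows, kern)
        feats.append(np.mean(np.abs(response)))
    return np.array(feats)
```

An image of vertical stripes whose wavelength matches the carrier responds most strongly to the filter at theta = 0, which is the orientation-selectivity that V1 simple cells are modeled with.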
These facts imply that the visual cortex of humans and other animals uses common (universal) features, at least in the first stages, to distinguish between different objects. In this context we might ask: do universal features exist in images such that, by using them, we can efficiently recognize any unknown object? Is it necessary to extract new, special features for every new object, or can existing features from other tasks be reused? Can features extracted for one specific task be used efficiently for other tasks? Are there general features in natural and non-natural images that can also be used for specific object recognition? For example, can features extracted from natural images also be used for handwritten digit classification?
In this context, our work proposes a new information-based approach and attempts to answer the questions above. We found that we could indeed extract unique features that are valid across all three kinds of tasks considered. They yield classification results about as good as, or even better than, those reported in the corresponding literature for specialized systems.
Another problem of the OR task is recognizing objects independently of perceptual changes. Humans and animals can recognize objects despite many deformations (e.g., changes in illumination, rotation in any direction or angle, distortion, and scaling up or down) in a fraction of a second. When observing an object we have never seen, we can imagine the rotated or scaled object in our mind. Here, too, the question arises: how does the brain solve this problem? Does it learn a mapping algorithm (transformation) that is independent of the objects and their features?
There are many approaches to modeling the mapping task. One of the most versatile is the idea of dynamically changing mappings, dynamic link mapping (DLM). Although DLM systems show interesting results, they suffer from high computational complexity. In addition, because DLM uses the least mean squared error as its risk function, its classification performance is not optimal: when outliers are present in the data, the system may not work well, because outliers influence a mean-squared-error classifier much more strongly than they influence probability-based systems. We therefore aim to complement the DLM system with a modified approach.
In our contribution, we introduce a new system that employs information criteria (i.e., probabilities) to overcome the outlier problem of DLM systems while having a smaller computational complexity. The new information-based self-organised system can solve the problem of invariant object recognition, especially for rotation in depth, without the disadvantages of current DLM systems.
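The claimed sensitivity of squared-error criteria to outliers, compared with likelihood-based ones, can be illustrated in a few lines. This is a toy example with invented data, not the proposed system: the mean minimizes squared error, while the median maximizes the likelihood under a heavy-tailed (Laplace) model.

```python
import numpy as np

# Clean measurements and the same data with a single outlier appended
data = np.array([1.0, 1.2, 0.9, 1.1, 1.0])
with_outlier = np.append(data, 10.0)

# Squared-error estimate of location = mean; Laplace-likelihood estimate = median
mean_clean, mean_out = data.mean(), with_outlier.mean()
med_clean, med_out = np.median(data), np.median(with_outlier)

print(f"squared-error estimate shift:     {abs(mean_out - mean_clean):.2f}")
print(f"likelihood-based estimate shift:  {abs(med_out - med_clean):.2f}")
```

One outlier moves the squared-error estimate by roughly 1.5 here, but the likelihood-based one by only 0.05, which is the effect the modified approach is meant to avoid.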
The present work examines the clash between the neo-idealist philosopher Giovanni Gentile (1875-1944) and Catholic Modernism. The aim of the work is to offer, for the first time, a complete examination of the controversy, which lasted six years, from 1903 to 1909, through a church-historical and philosophical-historical contextualization, based throughout on the publication of unpublished archival sources.
Background aims: Immunomagnetic enrichment of CD34+ hematopoietic “stem” cells (HSCs) using paramagnetic nanobead-coupled CD34 antibody and immunomagnetic extraction with the CliniMACS plus system is the standard approach to generating T-cell-depleted stem cell grafts. Their clinical benefit in selected indications is established. Even though CD34+ selected grafts are typically given in the context of severely immunosuppressive conditioning with anti-thymocyte globulin or similar, the degree of T-cell depletion appears to affect clinical outcomes; thus, in addition to CD34+ cell recovery, the degree of T-cell depletion critically describes process quality. An automatic immunomagnetic cell processing system, CliniMACS Prodigy, including a protocol for fully automatic CD34+ cell selection from apheresis products, was recently developed. We performed a formal process validation to support submission of the protocol for CE release, a prerequisite for clinical use of Prodigy CD34+ products.
Methods: Granulocyte-colony stimulating factor–mobilized healthy-donor apheresis products were subjected to CD34+ cell selection using Prodigy with clinical reagents and consumables and advanced beta versions of the CD34 selection software. Target and non-target cells were enumerated using sensitive flow cytometry platforms.
Results: Nine successful clinical-scale CD34+ cell selections were performed. Beyond setup, no operator intervention was required. Prodigy recovered 74 ± 13% of target cells with a viability of 99.9 ± 0.05%. Per 5 × 10^6 CD34+ cells, which we consider a per-kilogram dose of HSCs, products contained 17 ± 3 × 10^3 T cells and 78 ± 22 × 10^3 B cells.
Conclusions: The process for CD34 selection with Prodigy is robust and labor-saving but not time-saving. Compared with clinical CD34+ selected products concurrently generated with the predecessor technology, product properties, importantly including CD34+ cell recovery and T-cell contents, were not significantly different. The automatic system is suitable for routine clinical application.
Multi-scale entropy (MSE) has recently been established as a promising tool for the analysis of the moment-to-moment variability of neural signals. Appealingly, MSE provides a measure of the predictability of neural operations across the multiple time scales on which the brain operates. An important limitation in the application of MSE to some classes of neural signals is its apparent reliance on long time series. However, this sparse-data limitation could potentially be overcome by estimating MSE across shorter time series that are not necessarily acquired continuously (e.g., in fMRI block designs). In the present study, using simulated, EEG, and fMRI data, we examined how the accuracy and precision of MSE estimates depend on the number of data points per segment and the total number of data segments. As hypothesized, MSE estimation across discontinuous segments was comparably accurate and precise, regardless of segment length. A key advance of our approach is that it allows the calculation of MSE scales not previously accessible from the native segment lengths. Consequently, our results may permit a far broader range of applications of MSE when gauging moment-to-moment dynamics in the sparse and/or discontinuous neurophysiological data typical of many modern cognitive neuroscience study designs.
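The coarse-graining plus sample-entropy computation that MSE rests on can be sketched as follows. This is a minimal reference implementation under the standard conventions, not the authors' code; `m = 2` and `r = 0.2` are the usual defaults, and the scale set is arbitrary:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where B counts template matches of
    length m and A matches of length m + 1, within tolerance r * std(x)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def count_matches(mm):
        # All overlapping templates of length mm
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templ) - 1):
            # Chebyshev distance between template i and all later templates
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            count += np.sum(d <= tol)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 4)):
    """Coarse-grain by non-overlapping averaging, then compute
    sample entropy of the coarse-grained series at each scale."""
    out = {}
    for s in scales:
        n = len(x) // s
        coarse = np.asarray(x[:n * s]).reshape(n, s).mean(axis=1)
        out[s] = sample_entropy(coarse)
    return out
```

A highly regular signal (e.g., a sine wave) yields a lower sample entropy than white noise, which is the predictability contrast the measure exploits; the sparse-data issue arises because reliable A and B counts require enough templates per segment.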
In this polemical essay, the art of the classical style is considered as a natural product and educational instrument of the free society and, in this respect, as the antithesis of so-called avant-garde art.
Background: One aspect of premating isolation between diverging, locally-adapted population pairs is female mate choice for resident over alien male phenotypes. Mating preferences often show considerable individual variation, and whether or not certain individuals are more likely to contribute to population interbreeding remains to be studied. In the Poecilia mexicana-species complex different ecotypes have adapted to hydrogen sulfide (H2S)-toxic springs, and females from adjacent non-sulfidic habitats prefer resident over sulfide-adapted males. We asked if consistent individual differences in behavioral tendencies (animal personality) predict the strength and direction of the mate choice component of premating isolation in this system.
Results: We characterized focal females for their personality and found behavioral measures of ‘novel object exploration’, ‘boldness’ and ‘activity in an unknown area’ to be highly repeatable. Furthermore, the interaction term between our measures of exploration and boldness affected focal females’ strength of preference (SOP) for the resident male phenotype in dichotomous association preference tests. High exploration tendencies were coupled with stronger SOPs for resident over alien mating partners in bold, but not shy, females. Shy and/or little explorative females had an increased likelihood of preferring the non-resident phenotype and thus, are more likely to contribute to rare population hybridization. When we offered large vs. small conspecific stimulus males instead, less explorative females showed stronger preferences for large male body size. However, this effect disappeared when the size difference between the stimulus males was small.
Conclusions: Our results suggest that personality affects female mate choice in a very nuanced fashion. Hence, population differences in the distribution of personality types could be facilitating or impeding reproductive isolation between diverging populations, depending on the study system and the male trait(s) upon which females base their mating decisions.
This dissertation provides a comprehensive account of the grammar of relative clause extraposition in English. Based on a systematic review and evaluation of the empirical generalizations and theoretical approaches provided in the literature on generative grammar, it is shown that none of the previous theories is able to account for all the relevant facts. Among the most problematic data are the Principle C and scope effects of relative clause extraposition, cases with obligatory relative clauses, and relative clauses with elliptical NPs as antecedents.
I propose a new analysis of relative clause extraposition within the constraint-based, monostratal grammatical framework of Head-driven Phrase Structure Grammar (HPSG), enhanced with the semantic theory of Lexical Resource Semantics (LRS). Crucially, it is a general analysis of relative clause attachment, since both canonical and extraposed relative clauses are licensed by the same syntactic and semantic constraints. The basic assumption is that a relative clause can be adjoined to any phrase that contains a suitable antecedent of the relative pronoun. The semantic information that licenses the relative clause is introduced by the determiner of the antecedent NP. The techniques of underspecified semantics and the standard semantic representation language used by LRS make it possible to formulate constraints which yield the correct intersective interpretation of the relative clause (arbitrarily distant from its antecedent NP) and at the same time link the scope of the antecedent NP to the adjunction site of the relative clause.
In combination with the revised HPSG binding theory developed in this dissertation, the proposed analysis is able to capture the major properties of relative clause attachment within a unified and internally consistent monostratal constraint-based grammatical framework.
Previous studies suggest that the application of Controlled Language (CL) rules can significantly improve the readability, consistency, and machine-translatability of source text. One of the justifications for the application of CL rules is that they can have a similar impact on several target languages by reducing the post-editing effort required to bring Machine Translation (MT) output to acceptable quality. In certain situations, however, post-editing services may not always be a viable solution. Web-based information is often expected to be made available in real time to ensure that its access is not restricted to certain users based on their locale. Uncertainties remain with regard to the actual usefulness of MT output for such users, as no empirical study has examined the impact of CL rules on the usefulness, comprehensibility, and acceptability of MT technical documents from a Web user's perspective. In this study, a two-phase approach is used to determine whether Controlled English rules can have a significant impact on these three variables. First, individual CL rules are evaluated within an experimental environment, which is loosely based on a test suite. Two documents are then published and subject to a randomised evaluation within the framework of an online experiment using a customer satisfaction questionnaire. The findings indicate that a limited number of CL rules have a similar impact on the comprehensibility of French and German output at the segment level. The results of the online experiment show that the application of certain CL rules has the potential to significantly improve the comprehensibility of German MT technical documentation. Our findings also show that the introduction of CL rules did not lead to any significant improvement of the comprehensibility, usefulness, and acceptability of French MT technical documentation.
The research reported in this thesis examines two main questions: firstly, which dictionary type, bilingual or monolingual, is most effective for intermediate learners of German for reading comprehension, and secondly, which features make monolingual dictionary definitions effective for these learners. These questions divide the thesis into two parts. The first part compares the effectiveness of the bilingual versus the monolingual dictionary, and the second part compares two different monolingual definition styles.
The research was originally motivated by the observation that Hong Kong Chinese intermediate learners of German prefer to use a German-English bilingual dictionary. Since the translations are presented in the learners' second language, the effectiveness of this bilingual dictionary is doubtful. On the other hand, the learners are reluctant to use the monolingual dictionary, recommended to them by their language teachers. Three investigations were conducted in order to gain more detailed knowledge about the learners' dictionary preference, and the effectiveness of the two dictionary types. The learners' dictionary preference was investigated by means of a survey of ninety-eight foreign language students. The effectiveness of the bilingual and monolingual dictionary for reading comprehension and incidental vocabulary learning was first measured experimentally. The think-aloud method was then used in order to discover factors which determine the effectiveness of the two dictionary types.
The results of the experiment revealed that the German-English bilingual dictionary was not significantly more effective for the learners than the monolingual dictionary. The only monolingual dictionary available for German at that time, however, was linguistically too difficult for this proficiency level. Because of these findings, the research turned to monolingual dictionary definitions with the aim of identifying features that make them accessible to intermediate learners. Based on findings from the first think-aloud study, and on principles promoted as user-friendly in the lexicographic literature, new definitions were developed for the target words in the research. These new definitions were compared with those in the existing dictionary. A second think-aloud study was conducted in order to generate hypotheses about individual definition features. These hypotheses were then tested in the second experiment, which was conducted with eighty-six learners of German in Shanghai.
The investigations reveal several features that determine the effectiveness of monolingual definitions for intermediate learners. The findings have theoretical and pedagogical implications. In the theoretical field, some lexicographic principles were recommended that are, unlike previous principles, based on empirical insights into user needs. In the pedagogical field, the research findings provide an empirical basis for the evaluation and recommendation of suitable dictionaries to intermediate learners.
A model of dictionary effectiveness is proposed. This model could help to assess the effectiveness of different information categories in dictionaries for different proficiency levels and different activity contexts. It could also provide lexicographic principles for the design of dictionaries. This research contributes one component to the proposed model: criteria for the effectiveness of definition features for intermediate learners in the activity context of reading.
Towards a German grammar programme for post-leaving certificate students at Dublin City University
(1999)
With the introduction of the communicative method of language learning, overall standards of grammatical competence and performance among Irish second level students would appear to have been significantly reduced. As a consequence, learners who continue to study a given language at third level apparently no longer possess the knowledge which, under the grammar-translation methodology, further education institutions were able to build upon. This thesis examines the basis for the above perceptions, investigates the role of formal grammar instruction in the second language acquisition process and reports on a programme which was developed at Dublin City University (DCU) in order to ease, for Irish university students of German, the transition from a primarily memory-based approach to language acquisition to the analytical approach which is still being considered crucial to a university student's linguistic education. While the research was undertaken in response to locally existing difficulties, it may also be considered as a case study of more general interest, and as such serve as an exemplar to German departments in other universities as well as to other foreign language departments both within DCU or outside. The aim of the programme under investigation was to ease the transition on a socio-affective, cognitive and metacognitive level without lowering overall proficiency expectations and standards. Primary research was conducted among secondary school teachers, post-Leaving Certificate students on entry into DCU and among third level lecturers. The purpose of this research was to identify and define the programme’s content and progression. To this effect, the German junior and senior cycle syllabi at second level were also taken into consideration. The subsequent German grammar programme was implemented at DCU in the academic year 1996/7. 
While the programme would appear to have been judged favourably regarding some affective and cognitive-motivational aspects, results show mixed success rates for the other two factors under investigation, cognitive-analytical and metacognitive skills. Thus, some degree courses and some language combinations clearly benefited more from the programme than others. One of the conclusions drawn from this research suggests that unless certain changes are introduced prior to students’ entry into third level, university graduates are likely to remain well below the standards of accuracy and overall proficiency which were previously achieved.
Two decades after the predicted “end of ideology”, we are observing a re-emphasis on party ideology under Hu Jintao. The paper looks into the reasons for and the factors shaping the re-formulation of the Chinese Communist Party’s (CCP) ideology since 2002 and assesses the progress and limits of this process. Based on the analysis of recent elite debates, it is argued that the remaking of ideology has been the consequence of perceived challenges to the legitimacy of CCP rule. Contrary to many Western commentators, who see China’s successful economic performance as the most important if not the only source of regime legitimacy, Chinese party theorists and scholars have come to regard Deng Xiaoping’s formula of performance-based legitimacy as increasingly precarious. In order to tackle the perceived “performance dilemma” of party rule, the adaptation and innovation of party ideology is regarded as a crucial measure to relegitimize CCP rule.
For more than two decades, the National Planning Office for Philosophy and Social Sciences (NPOPSS) has been managing official funding of social science research in China under the orbit of the Communist Party of China’s (CPC) propaganda system. By focusing on “Major Projects”, the most prestigious and well-funded program initiated by the NPOPSS in 2004, this contribution outlines the political and institutional ramifications of this line of official funding and attempts to identify larger shifts during the past decade in the “ideologics” of official social science research funding – the changing ideological circumscriptions of research agendas in the more narrow sense of echoing party theory and rhetoric and – in the broader sense – of adapting to an increasingly dominant official discourse of cultural and national self-assertion. To conclude, this article offers reflections on the potential repercussions of these shifts for international academic collaboration.