This dissertation contains five independent chapters dealing with wage dispersion and unemployment. The first chapter addresses the explanation of international changes in wage inequality and unemployment in the 1980s and 1990s. Both theoretically and empirically, social benefits and their link to average income are blamed for the different experiences across countries. The second chapter discusses the search framework used to explain residual wage inequality and finds that institutional wage compression has ambiguous effects on employment. In the third chapter, we apply the theory to German data and show that job-to-job transitions are important in explaining both frictions and career advances. In the fourth chapter, we empirically assess the relationship between wage dispersion and unemployment for homogeneous workers. We find that neither a frictional nor a neo-classical view convincingly explains this relationship: unemployment within cells is not negatively correlated with wage dispersion. Finally, the last chapter builds a theoretical model that treats heterogeneous individuals in a production function framework with a frictional labor market. The model generates wage dispersion both within and between skill groups, and both frictional and structural unemployment. In sum, the dissertation stresses the importance of modelling frictions to understand different types of wage inequality and unemployment.
The dissertation collects four self-contained essays which contribute to the literature on wage structures, heterogeneous labor demand, and the impact of trade unions. The first paper provides a detailed description of the evolution of wage inequality in East and West Germany in the late years of the twentieth century. In contrast to previous decades, wage inequality has been rising in several dimensions during that period. The second paper identifies cohort effects in the evolution of both wages and employment. Observed structures are consistent with a labor demand framework that incorporates steady skill-biased technical change. Substitutability between skill and age groups in the German labor market is found to be relatively high. Simulations based on estimated elasticities of substitution illustrate that higher wage dispersion between skill groups would have contributed to a reduction in unemployment. The third paper estimates determinants of individual union membership decisions and studies the erosion of union density in East and West Germany. Using corresponding predictions of net union density, the fourth paper analyzes the link between union strength and the structure of wages. A higher union density is associated with lower residual wage dispersion, reduced skill wage differentials, and a lower wage level. This finding is in line with an insurance motive for union action. The thesis comprises the following articles: (1) “Rising Wage Dispersion, After All! The German Wage Structure at the Turn of the Century,” IZA Discussion Paper 2098, April 2006. (2) “Skill Wage Premia, Employment, and Cohort Effects: Are Workers in Germany All of the Same Type?”, IZA Discussion Paper 2185, June 2006, joint with Bernd Fitzenberger. (3) “The Erosion of Union Membership in Germany: Determinants, Densities, Decompositions,” IZA Discussion Paper 2193, July 2006, joint with Bernd Fitzenberger and Qingwei Wang. (4) “Equal Pay for Equal Work? On Union Power and the Structure of Wages in West Germany, 1985–1997,” translation of “Gleicher Lohn für gleiche Arbeit? Zum Zusammenhang zwischen Gewerkschaftsmitgliedschaft und Lohnstruktur in Westdeutschland 1985–1997,” Zeitschrift für Arbeitsmarkt-Forschung, 38 (2/3), 125-146, joint with Bernd Fitzenberger, 2005.
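For intuition, the substitution mechanism behind the second paper can be summarized by the standard CES relative labor demand relation (a textbook formula, not quoted from the thesis):

```latex
% Standard CES relative labor demand for two skill groups with wages
% w_1, w_2 and employment L_1, L_2 (textbook relation, not quoted from
% the thesis):
\[
  \ln\frac{L_1}{L_2} \;=\; d \;-\; \sigma \,\ln\frac{w_1}{w_2},
\]
% where d is a relative demand shifter (e.g. skill-biased technical
% change) and \sigma is the elasticity of substitution. A high \sigma
% means that modest widening of skill wage differentials can absorb
% large relative employment shifts, which is the sense in which higher
% wage dispersion between skill groups could have reduced unemployment.
```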
1 Purpose of the Study:
The purpose of this retrospective study was to assess the volumetric changes of pediatric neuroblastomas at our institution in response to various therapeutic protocols.
2 Materials and Methods:
A retrospective study was conducted on children with neuroblastoma arising from different anatomical locations, including suprarenal, paraspinal, pelvic, mediastinal, and cervical primaries. These children underwent tumor-stage-based therapeutic protocols at Johann Wolfgang Goethe University Hospital, Frankfurt am Main, Germany, between January 1996 and July 2008. The study included 72 patients (44 males and 28 females). Patient demographics (age and gender), disease-related symptoms, laboratory results (tumor biomarkers including ferritin, neuron-specific enolase, and urine catecholamines), and histopathological reports were collected from the electronic medical archiving system and subsequently analyzed.
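The abstract does not state how tumor volumes were measured; for orientation, a common convention in MRI tumor volumetry approximates the volume from the three orthogonal diameters (an assumption about standard practice, not this study's documented protocol):

```latex
% Common ellipsoid approximation for tumor volume from the three
% orthogonal MRI diameters L, W and H (standard practice in tumor
% volumetry; the study's exact measurement protocol is not stated in
% this abstract):
\[
  V \;\approx\; \frac{\pi}{6}\, L \cdot W \cdot H
\]
```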
Patients were classified into the following groups according to the anatomical origin of the primary neuroblastoma:
1) Suprarenal neuroblastoma group: patients with neuroblastoma arising from the suprarenal gland; 54 patients (32 males, 22 females).
2) Paravertebral neuroblastoma group: 6 male patients.
3) Mediastinal neuroblastoma group: patients with mediastinal neuroblastoma; 3 patients (1 male, 2 females).
4) Pelvic neuroblastoma group: patients with pelvic neuroblastoma; 6 patients (3 males, 3 females).
5) Cervical neuroblastoma group: patients with cervical neuroblastoma; 2 male patients.
3 Results:
The mean pre-therapy volume in the suprarenal neuroblastoma group as a whole was 176.62 cm³ (SD: 234.15; range: 239.4–968.9 cm³). The mean initial volume in the subgroup of suprarenal neuroblastomas managed with the observation protocol was 86.0378 cm³ (SD: 114.44; range: 5.2–347.94 cm³). Volumetric evaluation of suprarenal neuroblastoma under the observation ("wait and see") protocol revealed a continuous, statistically significant reduction of tumor volumes during follow-up for up to 12 months (p < 0.05); the volumetric changes thereafter were statistically insignificant.
The mean initial volume in the subgroup of suprarenal neuroblastomas treated with the primary surgery protocol was 42.4 cm³ (SD: 28.5; range: 7.5–90 cm³). Complete surgical resection of the tumor was not feasible in all lesions, owing to local tumor extension and/or infiltration with the associated risk of injury to nearby organs or structures. However, statistical analysis of the volumetric changes over the successive follow-up periods did not reveal statistical significance.
Volumetric estimation of the tumor in the subsequent follow-up periods revealed significant changes within the first period (3–9 months); the changes thereafter were statistically non-significant. On the other hand, the mean initial volume in the subgroup of suprarenal neuroblastomas treated with combined chemotherapy and stem cell transplantation alone, without surgical intervention, was 99.98 cm³ (SD: 46.2; range: 48.48–160.48 cm³). In this group the volumetric changes were variable, and the differences in volume during follow-up were statistically non-significant.
The mean initial volume in the abdominal paravertebral neuroblastoma group was 249.197 cm³ (SD: 249.63; range: 9.6–934 cm³). The mean initial volume in the pelvic neuroblastoma group was 118.88 cm³ (SD: 50.61; range: 73.4–173.4 cm³). The mean initial volume in the mediastinal neuroblastoma group was 189.7 cm³ (SD: 139.057; range: 10.7–415 cm³). The mean initial volume in the cervical neuroblastoma group was 189.7 cm³ (SD: 139.057; range: 10.7–415 cm³). The volumetric measurements over the corresponding follow-up periods, according to the therapeutic protocol, for the abdominal paravertebral, pelvic, mediastinal, and cervical neuroblastoma groups revealed significant change in tumor volume within the first 3–6 months after initial therapy, while subsequent volumetric changes were statistically non-significant.
4 Conclusion:
In conclusion, the role of MRI volumetry in the evaluation of tumor response depends on the risk-adapted treatment concept for neuroblastoma, on the combination of different imaging modalities, and on the therapeutic protocol. MRI volumetry, together with newer protocols such as whole-body imaging and 3D visualization techniques, is gaining importance and acceptance.
Through an examination of Joseph Roth’s reportage and fiction published between 1923 and 1932, this thesis seeks to provide a systematic analysis of a particular aspect of the author’s literary style, namely his use of sharply focused visual representations, which are termed Heuristic Visuals. Close textual analysis, supplemented by insights from reader-response theory, psychology, psycholinguistics and sociology, illuminates the function of these visual representations. The thesis also seeks to discover whether there are significant differences and correspondences in the use of visual representations between the reportage and fiction genres. Roth believed that writers should be engagiert, and that the truth could only be arrived at through close observation of reality, not subordinated to theory. The research analyses the techniques by which Roth challenges his readers and encourages them to discover the truth for themselves. Three basic variants of Heuristic Visuals are identified, and their use in different contexts, including that of dialectical presentations, is explored. There is evidence of the use of different variants of Heuristic Visuals according to the respective rhetorical demands of particular thematic issues. It has also been possible to establish synchronic correspondences between the different genres, and diachronic correspondences within genres. Although there are examples within the reportage where an entire article is based on a Heuristic Visual, the use of Heuristic Visuals cannot be seen as a key organizing principle in Roth’s work as a whole. As his mastery of the technique reaches its highest point in the early 1930s, Heuristic Visuals are often incorporated into the reconstruction of a complete sensory experience. Analysis of Roth’s heuristic use of visual representations has led to important insights, including a reinterpretation of the endings of Roth’s two most famous novels: Hiob and Radetzkymarsch.
Virtuous democrats, liberal aristocrats: political discourse and the Pennsylvania Constitution
(2001)
Virtual screening of potential bioactive substances using the support vector machine approach
(2005)
This dissertation is a cumulative work presented in a total of eight scientific publications (five published, two submitted, and one in preparation). In this research project, applications of machine learning to the virtual screening of molecular databases were carried out. The primary goal was to introduce and validate the support vector machine (SVM) approach for virtual screening for potential drug candidates. The introduction of the thesis describes the role of virtual screening in drug design. Virtual screening methods can be applied in almost every area of pharmaceutical research: machine learning can be used from the selection of the first molecules and the optimization of lead structures through to the prediction of ADMET (Absorption, Distribution, Metabolism, Toxicity) properties. Section 4.2 presents the methods that can be used to describe chemical structures, so as to bring these structures into a format (descriptors) that can serve as input for machine learning methods such as neural networks or SVMs. The focus is on the methods used in the present work. Most of these methods compute descriptors based only on the two-dimensional (2D) structure; standard examples are physicochemical properties, atom and bond counts, etc. (Section 4.2.1). CATS descriptors, a topological pharmacophore concept, are also 2D-based (Section 4.2.2). Another type of descriptor captures properties derived from a three-dimensional (3D) molecular model; the success of this description depends strongly on how representative the 3D conformation used to compute the descriptor is. A further description employed in our work was fingerprints. In our case, the fingerprints used were unsuitable for training neural networks because the fingerprint vector had too many dimensions (~10^5). In contrast, training SVMs with fingerprints worked: compared with other methods, SVMs have the advantage of classifying well in very high-dimensional spaces. This combination of SVMs and fingerprints was a novelty, first introduced into chemoinformatics by us. Section 4.3 focuses on the SVM method, which was used for almost all classification tasks in this work and was a central topic of the dissertation. Owing to space restrictions, a detailed description of SVMs was omitted from the attached publications; for this reason, Section 4.3 gives a complete introduction to SVMs, including a full discussion of SVM theory: the optimal hyperplane, the soft-margin hyperplane, and quadratic programming as the technique for finding this optimal hyperplane. Section 4.3 also discusses kernel functions, which determine the exact shape of the optimal hyperplane. Section 4.4 introduces the various methods we used for descriptor selection and works out the difference between "filter"-based and "wrapper"-based descriptor selection.
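To make the fingerprint/SVM point concrete, here is a minimal sketch in Python (illustrative only; scikit-learn, the random 2048-bit fingerprints, and all parameter values are our assumptions, not the software used in the thesis):

```python
# Illustrative sketch only: an SVM separating "drug-like" from "non-drug-like"
# molecules on high-dimensional fingerprint vectors. scikit-learn, the random
# 2048-bit fingerprints, and all parameters are assumptions for demonstration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_molecules, n_bits = 400, 2048
X = rng.integers(0, 2, size=(n_molecules, n_bits)).astype(float)  # fingerprints
y = rng.integers(0, 2, size=n_molecules)  # 1 = drug-like, 0 = non-drug-like

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# SVMs classify well even in very high-dimensional spaces, which is what made
# fingerprints usable here although they were too large for neural networks.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```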
In Publication 3 (Section 7.3) we compared the advantages and disadvantages of filter- and wrapper-based methods in virtual screening. Section 7 consists of the publications containing our research results. Our first publication (Publication 1) was a review article (Section 7.1). In this article we gave an overview of the applications of SVMs in bioinformatics and chemoinformatics. We discuss applications of SVMs to gene-chip analysis, DNA sequence analysis, and the prediction of protein structures and protein interactions. We also described examples in which SVMs were used to predict the subcellular localization of proteins. It became clear that SVMs were not yet widespread in the field of virtual screening. To justify the use of SVMs as the main method of our research, in our next publication (Publication 2) (Section 7.2) we carried out a detailed comparison between SVMs and various neural networks, which had established themselves as a standard method in virtual screening. The comparison concerned the separation of drug-like and non-drug-like molecules ("druglikeness" prediction). The SVM classified 82% of all molecules correctly, and the classification was more robust than with three-layer feedforward ANNs using different numbers of hidden neurons. In this project we computed various descriptors to characterize the molecules: Ghose-Crippen fragment descriptors [86], physicochemical properties [9], and topological pharmacophores (CATS) [10]. The development of further methods building on the SVM concept is described in the publications in Sections 7.3 and 7.8. Publication 3 presents the development of a new SVM-based method for selecting the descriptors relevant for a given activity. The same descriptors as in the project described above were used. As characteristic groups of molecules we selected several subsets of the COBRA database: 195 thrombin inhibitors, 226 kinase inhibitors, and 227 factor Xa inhibitors. We succeeded in reducing the number of descriptors from originally 407 to about 50 without significant loss of classification accuracy. We compared our method with a standard method for this application, the Kolmogorov-Smirnov statistic. The SVM-based method proved superior to the comparison methods in every case considered, in terms of prediction accuracy at the same number of descriptors. A detailed description is given in Section 4.4, where various "wrappers" for descriptor selection are also described. Publication 8 describes the application of active learning with SVMs. The idea of active learning is to select molecules for the learning procedure from the boundary region between the molecule classes to be distinguished; in this way, the local classification can be improved. The following groups of molecules were used: ACE (angiotensin-converting enzyme), COX2 (cyclooxygenase-2), CRF (corticotropin-releasing factor) antagonists, DPP (dipeptidyl peptidase) IV, HIV (human immunodeficiency virus) protease, nuclear receptors, NK (neurokinin) receptors, PPAR (peroxisome proliferator-activated receptor), thrombin, GPCR, and matrix metalloproteinases.
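A wrapper-style, SVM-driven descriptor reduction like that of Publication 3 might look like the following sketch (synthetic data; recursive feature elimination merely stands in for the thesis's own algorithm, which is not reproduced here):

```python
# Sketch of a wrapper-style, SVM-driven descriptor selection: recursive
# feature elimination stands in for the thesis's own method. The data are
# synthetic; 407 descriptors and the ~50 target mirror the numbers above.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 407))     # 200 molecules, 407 descriptors
y = rng.integers(0, 2, size=200)    # e.g. thrombin inhibitors vs. the rest

# A linear SVM supplies per-descriptor weights; RFE repeatedly drops the
# weakest descriptors until about 50 remain.
selector = RFE(LinearSVC(C=0.1, dual=False), n_features_to_select=50, step=20)
selector.fit(X, y)
print("descriptors kept:", int(selector.support_.sum()))
```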
Active learning was able to improve the performance of virtual screening, as this retrospective study showed. It remains to be seen whether the method will become established, since despite the gain in prediction accuracy it is computationally expensive because of the repeated SVM training. The publications in Sections 7.5, 7.6, and 7.7 (Publications 5-7) show practical applications of our SVM methods in drug design in combination with other techniques, such as similarity searching and neural networks for property prediction. In two cases we found novel ligands for COX-2 (cyclooxygenase-2) and dopamine D3/D2 receptors with this approach. We were thus able to show clearly that SVM methods can be usefully applied to the virtual screening of compound collections. As part of this work, a fast method for generating large combinatorial molecular libraries, based on the SMILES notation, was also developed. In the early stages of drug design it is important to test as "diverse" a group of molecules as possible. There are various established methods for selecting such a subset. We developed a new method intended to be more accurate than the well-known MaxMin method. As a first step, the probability density estimate (PDE) of the available molecules was computed [78]: each molecule was described by descriptors and the PDE was calculated in the N-dimensional descriptor space. Molecules were then selected with the Metropolis algorithm [87]. The idea is to select few molecules from high-density regions and more molecules from low-density regions. The results obtained revealed two drawbacks, however: first, molecules with unrealistic descriptor values were selected, and second, our algorithm was too slow. This aspect of the work was therefore not pursued further. In Publication 6 (Section 7.6), in collaboration with the molecular modeling group of Aventis Pharma Deutschland (Frankfurt), we developed an SVM-based ADME filter for the early recognition of CYP 2C9 ligands. This nonlinear SVM filter achieved a significantly higher prediction accuracy (q² = 0.48) than a PLS model developed on the same data (q² = 0.34). Three-point pharmacophore descriptors based on a three-dimensional molecular model were used. One of the important problems in computer-based drug design is the selection of a suitable conformation for a molecule. We attempted to apply SVMs to this problem: the training data set was enriched with several conformations per molecule and an SVM model was computed, after which the conformations with the worst-predicted IC50 values were discarded. The remaining conformations preferred by the SVM model were, however, unrealistic. This result shows limits of the SVM approach. We nevertheless believe that further research in this area can lead to better results.
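The margin-based query step of active learning (Publication 8) can be sketched as follows (a toy example with synthetic descriptors; the real study used the molecule classes listed above and its own selection protocol):

```python
# Toy sketch of SVM active learning by margin sampling: in each round, the
# unlabeled molecules closest to the decision boundary are queried. Synthetic
# descriptors replace the real molecule classes discussed in the text.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X_pool = rng.normal(size=(500, 20))                      # descriptor vectors
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)   # toy activity labels

# Small initial training set containing both classes.
labeled = list(np.where(y_pool == 0)[0][:5]) + list(np.where(y_pool == 1)[0][:5])
for _ in range(5):
    clf = SVC(kernel="rbf", gamma="scale").fit(X_pool[labeled], y_pool[labeled])
    margin = np.abs(clf.decision_function(X_pool))       # distance to boundary
    margin[labeled] = np.inf                             # skip known molecules
    labeled.extend(np.argsort(margin)[:10])              # query 10 most ambiguous
# Each round retrains the SVM, which is the computational cost noted above.
print("final training-set size:", len(labeled))
```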
Time-critical applications process a continuous stream of input data and have to meet specific timing constraints. A common approach to ensuring that such an application satisfies its constraints is over-provisioning: the application is deployed in a dedicated cluster environment with enough processing power to achieve the target performance for every specified data input rate. This approach comes with a drawback: at times of decreased data input rates, the cluster resources are not fully utilized. A typical use case is the HLT-Chain application, which processes physics data at runtime of the ALICE experiment at CERN. From a perspective of cost and efficiency, it is desirable to exploit temporarily unused cluster resources. Existing approaches pursue this goal by running additional applications; these approaches, however, a) lack the flexibility to dynamically grant the time-critical application the resources it needs, b) are insufficient for isolating the time-critical application from harmful side effects introduced by additional applications, or c) are not general because application-specific interfaces are used. In this thesis, a software framework is presented that makes it possible to exploit unused resources in a dedicated cluster without harming a time-critical application. Additional applications are hosted in Virtual Machines (VMs), and unused cluster resources are allocated to these VMs at runtime. In order to avoid resource bottlenecks, the resource usage of the VMs is dynamically modified according to the needs of the time-critical application. For this purpose, a combination of methods not previously used together is employed. On a global level, appropriate VM manipulations such as hot migration, suspend/resume, and start/stop are determined by an informed search heuristic and applied at runtime. Locally, on the cluster nodes, a feedback-controlled adaptation of VM resource usage is carried out in a decentralized manner, as sketched below. Employing this framework increases a cluster’s usage by running additional applications while at the same time preventing negative impact on the time-critical application. This capability of the framework is shown for the HLT-Chain application: in an empirical evaluation, cluster CPU usage is increased from 49% to 79%, additional results are computed, and no negative effects on the HLT-Chain application are observed.
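As an illustration of the node-local, feedback-controlled adaptation described above, the following sketch shows a simple proportional controller (the function name, gain, and share bounds are hypothetical, not the framework's actual interface):

```python
# Hypothetical sketch (not the framework's real interface) of node-local,
# feedback-controlled adaptation: a proportional controller shrinks a guest
# VM's CPU share when the time-critical application misses its target data
# rate and grows it again when there is slack.
def adapt_vm_share(share, measured_rate, target_rate,
                   gain=0.5, min_share=0.05, max_share=0.8):
    """Return the VM's new CPU share (fraction of one node's CPU)."""
    error = (measured_rate - target_rate) / target_rate  # < 0: app falls behind
    share *= 1.0 + gain * error
    return max(min_share, min(max_share, share))

share = 0.4
for measured in (1.05, 0.90, 0.95, 1.10):  # target rate normalized to 1.0
    share = adapt_vm_share(share, measured, target_rate=1.0)
    print(f"VM CPU share -> {share:.2f}")
```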
Autophagy, together with the ubiquitin-proteasome system, is the main quality control pathway responsible for maintaining cell homeostasis. There are several types of autophagy, distinguished by cargo selectivity and means of induction. This thesis focuses on macroautophagy, hereafter autophagy, in which a double-layered membrane originating from the endoplasmic reticulum (ER) engulfs cargo selectively or unselectively. Subsequently, a vesicle, the autophagosome, forms around the cargo and eventually fuses with the lysosome, leading to degradation of the vesicle content and release of the cargo “building blocks”. Basal autophagy occurs continuously, unselectively engulfing a portion of the cytoplasm. However, autophagy can also be induced by stress such as starvation, protein aggregation, damaged organelles, or intracellular pathogens. In this case, the cargo is selectively targeted, and the fate of the autophagosome is the same as in basal autophagy. In recent years, interest in identifying mechanisms of autophagy regulation has risen owing to its importance in neurodegenerative diseases and cancer. Given the complexity of the process, its execution is tightly regulated from initiation, autophagosome formation, expansion, and closure to the final fusion with the lysosome. Each of these steps involves different protein complexes, whose timely activity is orchestrated by post-translational modifications. One of them is ubiquitination. Ubiquitin is a small, 76-amino-acid protein conjugated to other proteins in a three-step reaction; the modification is reversible, meaning it can be removed by deubiquitinases. Originally described as a degradation signal targeting proteins to the proteasome, ubiquitin is today known to have various additional non-proteolytic functions, such as regulating a protein’s activity, localization, or interaction partners. The role of ubiquitin in autophagy has already been shown. However, given the reversibility and fine-tuning of the ubiquitin signal, many expected regulators remain unidentified. This work aimed to identify novel deubiquitinating enzymes that regulate autophagy. We identified ubiquitin-specific protease 11 (USP11) as a novel negative regulator of autophagy. Loss of USP11 leads to an increase in autophagic flux, whereas overexpression of USP11 attenuates it. Moreover, this observation was reproducible in the model organism Caenorhabditis elegans, emphasizing the importance of USP11 in autophagy regulation. To identify the mechanism of USP11-dependent autophagy regulation, we performed a USP11 interactome screen after a 4-hour Torin1 treatment and identified a plethora of autophagy-related proteins. Following the most prominent hits, we investigated the various ways in which USP11 regulates autophagy. USP11 interacts with the PI3KC3 complex, whose role is to phosphorylate lipids of the ER, thereby initiating the formation of the autophagosomal membrane. The phosphorylated lipids serve as a recruitment signal for the downstream effector proteins necessary for membrane expansion. The core components of the complex are VPS34, the lipid kinase; ATG14, the protein responsible for targeting the complex to the ER; VPS15, a pseudokinase with a scaffolding role; Beclin1, a regulatory subunit; and NRBF2, the dimer-inducing subunit. We found that USP11 interacts with the complex and that, depending on its activity, USP11 influences the post-translational status of all the aforementioned subunits except ATG14.
Moreover, we found that loss of USP11 leads to an increase in NRBF2 levels, whereas the levels of the other proteins remain unchanged. Given that dimerization of the complex increases its activity, we investigated whether the complex is more tightly formed in the absence of USP11, and whether it is more active. We found both to be the case. Although the exact mechanism of USP11-dependent PI3KC3 complex regulation remains to be identified, loss of USP11 stimulates complex formation and activity, likely contributing to the general effect of USP11 on autophagic flux. Additionally, we found that USP11 modulates the levels of mTOR, the most upstream kinase in the initiation of autophagy and a general, multifaceted regulator of metabolism. Loss of USP11 led to downregulation of mTOR levels, suggesting that USP11 may rescue mTOR from proteasome-mediated degradation. Furthermore, we found mTOR to be differentially modified depending on the activity of USP11. However, it remains to be shown whether USP11-dependent mTOR regulation contributes to the observed autophagy phenotype. Taken together, USP11 is a novel, versatile, negative regulator of autophagy and an important addition to our knowledge of the regulation of autophagy by the ubiquitin system.
With the increasing energies and intensities of heavy-ion accelerator facilities, the problem of excessive activation of accelerator components caused by beam losses becomes more and more important. Numerical experiments using Monte Carlo transport codes are performed in order to assess the levels of activation. The heavy-ion versions of these codes were released only about a decade ago, so verification is needed to ensure that they give reasonable results. The present work focuses on obtaining experimental data on the activation of targets by heavy-ion beams. Several experiments were performed at GSI Helmholtzzentrum für Schwerionenforschung. The interaction of nitrogen, argon, and uranium beams with aluminum targets, as well as the interaction of nitrogen and argon beams with copper targets, was studied. After irradiation of the targets with different ion beams from the SIS18 synchrotron at GSI, γ-spectroscopy analysis was carried out: the γ-spectra of the residual activity were measured, the radioactive nuclides were identified, and their amounts and depth distributions were determined. The experimental results were compared with the results of Monte Carlo simulations using FLUKA, MARS, and SHIELD. The discrepancies and agreements between experiment and simulation are pointed out, and the origin of the discrepancies is discussed. The obtained results allow for a better verification of the Monte Carlo transport codes and also provide information for their further development. The necessity of activation studies for accelerator applications is discussed. The limits of applicability of the heavy-ion beam-loss criteria were studied using the FLUKA code, and FLUKA simulations were performed to determine the materials most preferable, from a radiation protection point of view, for use in accelerator components.
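For context, the residual activity analyzed in such γ-spectroscopy measurements follows the standard activation-decay relation (textbook physics, not a result of this thesis):

```latex
% Standard activation-decay relation underlying residual-activity
% measurements (textbook physics, not a result of this thesis): for a
% nuclide produced at constant rate P during irradiation time t_irr and
% measured after cooling time t_cool, with decay constant \lambda,
\[
  A(t_{\mathrm{cool}}) \;=\; P\left(1 - e^{-\lambda t_{\mathrm{irr}}}\right)
  e^{-\lambda t_{\mathrm{cool}}}
\]
```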
The venture capital industry is relevant for entrepreneurs looking for money to finance an innovative project, for investors seeking to make money by investing in entrepreneurial firms, and for governments trying to promote innovation and entrepreneurship. Venture capital investment can facilitate innovation and thus a better economy.
Venture capital has enabled the U.S. to support its entrepreneurial talent by turning ideas into world-famous products and services, building companies from mere business plans into mature and powerful organizations. Three of the five largest U.S. public companies by market capitalization – Apple, Google and Microsoft – received most of their early external funding from venture capital. With its ups and downs, venture capital investment in the U.S. expanded from virtually zero in the mid-1970s to $8 billion in 1995 and $49.3 billion in 2014. Venture-backed companies have been a prime driver of economic growth in the U.S. Across the Pacific, venture capital investment in China has grown out of the transition from a centrally planned economy to a free market economy over the past three decades, becoming an important pillar supporting China’s innovation system. In 2015, a total of 2,824 venture capital investment deals provided an aggregate investment of $36.9 billion. Venture capital has long been a hot topic in China’s capital market, particularly since the government decided to boost “mass entrepreneurship and innovation” in 2014.
In the U.S., most venture capital firms are organized as limited partnerships, with the venture capitalists being general partners and the investors limited partners. Studies have shown that investors choose to invest through venture funds as an intermediary rather than placing their investments directly with the entrepreneurs; because of the high-risk nature of the entrepreneur’s business, it is hard for entrepreneurs to obtain bank loans or direct equity investments. Conflicts may also arise, however, between the venture capitalists acting as agents and the investors as principals. Although the limited partnership has certain merits and is still the most commonly chosen business form for venture capital funds, this agency problem may be particularly severe, since venture capital provides money for businesses with high potential and high risk. At the same time, the fact that general partners have total control of the partnership business necessitates that the agency problem be addressed by legal rules, contracts, and other mechanisms.
Meanwhile, despite the rapid growth of venture capital investment in China, little attention has been paid to the organizational form of venture capital funds. In contrast to the U.S., most Chinese venture funds have been structured as corporations. One may argue that this was due to legislative reasons: the limited partnership was not recognized by Chinese law when venture capital first appeared in China. However, even after a chapter governing limited partnerships was adopted in the Partnership Enterprise Law (PEL) in 2007, most venture funds kept their original corporate form, while those opting for the limited partnership have encountered difficulties: the limited partners have trouble trusting the general partners with their money and therefore interfere with the operation of the partnership business, which may lead to dissolution of the partnership.
This thesis applies transaction cost theory to explain the benefits and costs of choosing the limited partnership as a business form in the special context of venture capital investment. It shows that the potential agency conflict between the general partners and the limited partners has been mitigated by legal and other mechanisms in the United States, and that U.S. investors could therefore exploit the merits of the limited partnership form in venture capital financing. In China, investors have found different answers to the agency problem. Similarly to the situation in the U.S., Chinese partners employ contract terms to deal with agency problems, and the legislators enact laws aimed at regulating the limited partnership form; some legislation was even transplanted from the U.S., such as the part of the PEL that governs limited partnerships. It seems, then, that similar mechanisms for dealing with agency problems also exist in China. However, given the unique history of the development of China’s innovation system and venture capital market, the effectiveness of these constraints is questionable. Chinese venture capital investors have therefore characteristically behaved differently from U.S. investors: rather than relying on these questionable mechanisms, Chinese investors as well as the Chinese government have developed different approaches to addressing these agency problems.