50 years of amino acid hydrophobicity scales: revisiting the capacity for peptide classification
(2016)
Background: Physicochemical properties are frequently analyzed to characterize protein sequences of known and unknown function. The hydrophobicity of amino acids in particular is often used for structural prediction or for the detection of membrane-associated or membrane-embedded β-sheets and α-helices. For this purpose, many scales classifying amino acids according to their physicochemical properties have been defined over the past decades. In parallel, several hydrophobicity parameters have been defined for calculating peptide properties. We analyzed the performance of separating sequence pools using 98 hydrophobicity scales and five different hydrophobicity parameters, namely the overall hydrophobicity, the hydrophobic moment for the detection of α-helical and of β-sheet membrane segments, the alternating hydrophobicity, and the exact β-strand score.
Results: Most of the scales are capable of discriminating between transmembrane α-helices and transmembrane β-sheets, but the assignment of peptides to pools of soluble peptides of different secondary structures is not achieved with the same quality. The separation capacity, as a measure of the discrimination between different structural elements, is highest when the five hydrophobicity parameters are combined, although adding the alternating hydrophobicity provides little extra benefit. An in silico evolutionary approach shows that the separation capacity of scales is limited, with a general maximum of about 0.6. We observed that scales derived from the evolutionary approach performed best in separating the different peptide pools when the values for arginine and tyrosine were largely distinct from the value for glutamate. Finally, the separation of secondary-structure pools via hydrophobicity can be supported by specific detectable patterns of four amino acids.
Conclusion: The quality of a scale's separation capacity appears to depend on the spacing of the hydrophobicity values of certain amino acids. Despite the wealth of hydrophobicity scales, no scale exists that separates all kinds of secondary structures or distinguishes soluble from transmembrane peptides, reflecting that properties other than hydrophobicity also affect secondary-structure formation. Nevertheless, applying hydrophobicity scales allows distinguishing between peptides with transmembrane α-helices and those with transmembrane β-sheets. Furthermore, the overall separation capacity score of 0.6 obtained with the different hydrophobicity parameters can be supplemented by searching protein sequences for specific patterns of four amino acids.
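To make the hydrophobicity parameters concrete, here is a minimal Python sketch of two of them, the overall hydrophobicity and an Eisenberg-style hydrophobic moment, using the Kyte-Doolittle scale as a stand-in for the 98 scales studied. The normalization (per-residue averaging) and the test peptide are illustrative assumptions, not the paper's exact definitions.

```python
import math

# Kyte-Doolittle hydropathy values (one example of the 98 scales studied).
KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
      "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
      "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9, "R": -4.5}

def overall_hydrophobicity(seq, scale=KD):
    """Mean hydrophobicity of the peptide -- the simplest of the parameters."""
    return sum(scale[a] for a in seq) / len(seq)

def hydrophobic_moment(seq, delta_deg, scale=KD):
    """Eisenberg-style hydrophobic moment for a periodicity of delta_deg
    (~100 deg per residue for an alpha-helix, ~180 deg for a beta-strand)."""
    delta = math.radians(delta_deg)
    s = sum(scale[a] * math.sin(i * delta) for i, a in enumerate(seq))
    c = sum(scale[a] * math.cos(i * delta) for i, a in enumerate(seq))
    return math.hypot(s, c) / len(seq)

peptide = "GLFDIIKKIAESF"  # hypothetical amphipathic test sequence
print(overall_hydrophobicity(peptide))
print(hydrophobic_moment(peptide, 100.0))  # alpha-helical periodicity
print(hydrophobic_moment(peptide, 180.0))  # beta-strand periodicity
```

A high moment at one periodicity but not the other is the kind of signal such parameters exploit to assign a peptide to an α-helical or β-sheet pool.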
The calculus LRP is a polymorphically typed call-by-need lambda calculus extended by data constructors, case-expressions, seq-expressions, and type abstraction and type application. This report is devoted to the extension LRPw of LRP by scoped sharing decorations. The extension cannot be properly encoded into LRP if improvements are defined w.r.t. the number of lbeta-, case-, and seq-reductions, which makes it necessary to reconsider the claims and proofs of the calculus's properties. We show correctness of the improvement properties of reduction and transformation rules, and also of the computation rules for decorations, in the extended calculus LRPw. We conjecture that the embedding of LRP in LRPw is conservative.
An improvement is a correct program transformation that optimizes the program, the criterion being that the number of computation steps until a value is obtained decreases. This paper investigates improvements in both an untyped and a polymorphically typed call-by-need lambda calculus with letrec, case, constructors, and seq. Besides showing that several local optimizations are improvements, the main result of the paper is a proof that common subexpression elimination is correct and an improvement, which proves a conjecture and thus closes a gap in Moran and Sands' improvement theory. We also prove that several different length measures used for improvement in Moran and Sands' call-by-need calculus and in our calculus are equivalent.
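As an informal illustration of why sharing makes common subexpression elimination an improvement under call-by-need, the following Python sketch emulates call-by-need with a memoized thunk and counts evaluations; the step counter is a crude stand-in for the calculus's reduction-length measure, not the measure itself.

```python
class Thunk:
    """A memoized suspension: call-by-need evaluation of a 0-argument function."""
    def __init__(self, compute):
        self.compute, self.value, self.forced = compute, None, False
    def force(self):
        if not self.forced:
            self.value, self.forced = self.compute(), True
        return self.value

steps = 0
def expensive():
    global steps
    steps += 1           # count one "reduction step" per evaluation
    return 6 * 7

# Duplicated expression (f e e): the work is done twice.
steps = 0
result = expensive() + expensive()
print(result, steps)     # 84 2

# After CSE (let x = e in f x x): the shared thunk is forced only once.
steps = 0
x = Thunk(expensive)
result = x.force() + x.force()
print(result, steps)     # 84 1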
The degradation of cytosol-invading pathogens by autophagy, a process known as xenophagy, is an important mechanism of the innate immune system. Inside the host, Salmonella Typhimurium invades epithelial cells and resides within a specialized intracellular compartment, the Salmonella-containing vacuole. A fraction of these bacteria does not persist inside the vacuole and enters the host cytosol, where it becomes a target of the autophagy machinery for degradation. The xenophagy pathway has only recently been discovered, its exact molecular processes are not yet fully characterized, and complete kinetic data for the individual molecular processes are not available so far. We therefore developed a mathematical model of the xenophagy pathway to investigate this key defense mechanism. In this paper, we present a Petri net model of Salmonella xenophagy in epithelial cells. The model is based on functional information derived from literature data and comprises the molecular mechanisms of galectin-8-dependent and ubiquitin-dependent autophagy, including regulatory processes such as the nutrient-dependent regulation of autophagy and the TBK1-dependent activation of the autophagy receptor OPTN. To model the activation of TBK1, we propose a new mechanism suggesting a spatial and temporal regulation of this process. Using standard Petri net analysis techniques, we found basic functional modules which describe different pathways of the autophagic capture of Salmonella and reflect the basic dynamics of the system. To verify the model, we performed in silico knockout experiments, introducing a new concept of knockout analysis that systematically computes and visualizes the results in an in silico knockout matrix. The results of the in silico knockout analyses were consistent with published experimental results and provide a basis for future investigations of the Salmonella xenophagy pathway.
Author Summary
Salmonellae are Gram-negative bacteria that cause the majority of foodborne diseases worldwide. Serovars of Salmonella cause a broad range of diseases in a variety of hosts, ranging from diarrhea to typhoid fever. In 2010, Salmonella Typhi caused 7.6 million cases of foodborne disease and 52,000 deaths, and Salmonella enterica was responsible for 78.7 million cases and 59,000 deaths. After the invasion of host epithelial cells by Salmonella, a small fraction of the bacteria escapes from a specialized intracellular compartment and replicates inside the host cytosol. Xenophagy is a host defense mechanism that protects the host cell from such cytosolic pathogens. Understanding how Salmonella is recognized and targeted for xenophagy is an important subject of current research. To the best of our knowledge, no mathematical model describing the process of Salmonella Typhimurium xenophagy has been presented so far. Here, we present a manually curated and mathematically verified theoretical model of Salmonella Typhimurium xenophagy in epithelial cells that is consistent with the current state of knowledge. Our model reproduces literature data and postulates new hypotheses for future investigations.
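A minimal sketch, in Python, of the two ideas the abstract names: a place/transition net whose transitions fire when their input places are marked, and an in silico knockout realized by disabling a transition and re-running the token game. The place and transition names below are illustrative simplifications, not the model's actual nodes, and the Boolean token game abstracts away token multiplicities.

```python
# Transition -> (input places, output places); names are hypothetical.
net = {
    "escape":       ({"SCV_Salmonella"}, {"cytosolic_Salmonella"}),
    "ubiquitinate": ({"cytosolic_Salmonella"}, {"Ub_coated_Salmonella"}),
    "recruit_OPTN": ({"Ub_coated_Salmonella", "active_TBK1"}, {"autophagosome"}),
}

def reachable(marking, net, knockout=frozenset()):
    """Fire enabled transitions until a fixpoint; return all marked places."""
    marked = set(marking)
    changed = True
    while changed:
        changed = False
        for name, (pre, post) in net.items():
            if name in knockout:
                continue                      # the in silico knockout
            if pre <= marked and not post <= marked:
                marked |= post
                changed = True
    return marked

initial = {"SCV_Salmonella", "active_TBK1"}
print("autophagosome" in reachable(initial, net))                             # True
print("autophagosome" in reachable(initial, net, knockout={"recruit_OPTN"}))  # False
```

Running every single-transition knockout and tabulating which output places stay reachable yields exactly the kind of knockout matrix the abstract describes.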
The behaviour of electronic circuits is influenced by ageing effects. Modelling the behaviour of circuits is a standard approach in the design of faster, smaller, more reliable and more robust systems. In this thesis, we propose a formalization of robustness that is derived from a failure model based purely on the behavioural specification of a system. For a given specification, simulation can reveal whether a system fails to comply with it, and thus provides a failure model. Ageing usually works against the specified properties, and ageing models can be incorporated to quantify its impact on specification violations, failures, and robustness. We study ageing effects in the context of analogue circuits, where models must account for infinitely many circuit states. Ageing effects have a cause and an impact, both of which require models, and on both ends the circuit state is highly relevant and must be factored in. For example, static empirical models for ageing effects are invalid in many cases because the assumed operating states do not agree with the circuit simulation results. This thesis identifies essential properties of ageing effects, and we argue that they need to be taken into account when modelling the interrelation of cause and impact. These properties include frequency dependence, monotonicity, memory and relaxation mechanisms, as well as control by arbitrarily shaped stress levels. Starting from decay processes, we define a class of ageing models that fits these requirements well while remaining arithmetically accessible by means of a simple structure.
Modelling ageing effects in semiconductor circuits becomes more relevant with higher integration and smaller structure sizes. With respect to miniaturization, digital systems are ahead of analogue systems, and ageing models likewise focus predominantly on digital applications. In the digital domain, signal levels are either on or off, or switching in between. Given an ageing model as a physical effect bound to signal levels, ageing models for components and whole systems can be inferred by means of average operation modes and cycle counts. Functional and faithful ageing-effect models for analogue components, where signal levels can take arbitrary values, often require a more fine-grained characterization of the physical processes. Such fine-grained, physically inspired ageing models do not scale to larger applications and are hard to simulate in reasonable time.

To close the gap between physical processes and system-level ageing simulation, we propose a data-based modelling strategy, according to which measurement data is turned into ageing models for analogue applications. Ageing data is a set of pairs of stress patterns and the corresponding parameter deviations. Assuming additional properties, such as monotonicity or frequency independence, learning algorithms can find a complete model that is consistent with the data set. These ageing-effect models decompose into a controlling stress level, an ageing process, and a parameter that depends on the state of this process. Using this representation, we are able to embed a wide range of ageing effects into behavioural models for circuit components. Based on the developed modelling techniques, we introduce a novel transistor-level ageing model targeting analogue circuits for the BTI effect, an ageing effect that permits relaxation. Similarly, we demonstrate how ageing data from analogue transistor-level circuit models lifts to purely behavioural block models. With this, we are the first to present a data-based hierarchical ageing modelling scheme.

An ageing simulator for circuit or system-level models computes long-term transients, i.e. solutions of a differential equation. Long-term transients are often close to quasi-periodic, in some sense repetitive. If the evaluation of ageing models under quasi-periodic conditions can be done efficiently, long-term simulation becomes practical. We describe an adaptive two-time simulation algorithm that essentially skips periods during simulation, advancing faster on a second time axis. The bottleneck of two-time simulation is the extrapolation through skipped frames, which involves both the evaluation of the ageing models and the consistency of the boundary conditions. We propose a simulator that computes long-term transients by exploiting the structure of the proposed ageing models: they permit extrapolation of the ageing state by means of a locally equivalent stress, a sort of average stress level, which can be computed efficiently and also gives rise to a dynamic step-control mechanism.

Ageing simulation has a wide range of applications, and this thesis vastly improves its applicability to analogue circuits in terms of modelling and efficiency. An ageing-effect model that is part of a circuit-component model accounts for parametric drift that is directly related to the operation mode. For example, asymmetric load on a comparator or power stage may lead to offset drift, which is not an empiric effect.
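The following Python sketch illustrates, under stated assumptions, the flavour of the model class and the locally equivalent stress idea: a first-order decay process driven by a stress level, one period integrated finely, then a constant stress recovered by bisection so that many periods can be extrapolated in closed form. The rate constants, the duty cycle, and the specific model form are hypothetical, not the thesis's actual BTI model.

```python
import math

KS, KR = 0.8, 0.05   # hypothetical stress build-up and relaxation rate constants

def rhs(a, s):
    """First-order ageing process: build-up under stress s, relaxation otherwise."""
    return KS * s * (1.0 - a) - KR * a

def simulate_period(a0, stress, T=1.0, n=1000):
    """Fine-grained explicit-Euler integration of one stress period."""
    a, dt = a0, T / n
    for i in range(n):
        a += dt * rhs(a, stress(i * dt))
    return a

def extrapolate(a0, s_eq, t):
    """Closed-form state under constant stress s_eq (the ODE is linear in a)."""
    r = KS * s_eq + KR
    a_inf = KS * s_eq / r
    return a_inf + (a0 - a_inf) * math.exp(-r * t)

def equivalent_stress(a0, a1, T=1.0):
    """Bisection for the constant stress reproducing the per-period increment."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if extrapolate(a0, mid, T) < a1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

stress = lambda t: 1.0 if t % 1.0 < 0.3 else 0.0   # 30% duty-cycle square wave
a1 = simulate_period(0.0, stress)
s_eq = equivalent_stress(0.0, a1)
# Skip 10^4 periods in one closed-form step instead of integrating them all.
print(a1, s_eq, extrapolate(0.0, s_eq, 1e4))
```

In spirit, the two-time algorithm alternates such fine periods and closed-form jumps, with the jump length controlled dynamically.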
Monitor circuits can report such effects during operation, once they become significant, and simulating the behaviour of these monitors is important during their development. Ageing effects can be compensated using redundant parts, and annealing can restore broken components to a functional state. We show that such mechanisms can be simulated in place using our models and algorithms. The aim of automated circuit synthesis is to create a circuit that implements a specification for a certain use case. Ageing simulation can identify candidates that are more reliable, and efficient ageing simulation makes it possible to factor in various operation modes and helps refine the selection. Using long-term ageing simulation, we have analysed the fitness of a set of synthesized operational amplifiers with similar properties across various use cases. This procedure enables the automatic selection of the most ageing-resilient implementation.
We present techniques to prove termination of cycle rewriting, that is, string rewriting on cycles: strings in which the start and the end are connected. Our main technique is to transform cycle rewriting into string rewriting and then apply state-of-the-art techniques to prove termination of the resulting string rewrite system. We present three such transformations and prove that all of them are sound and complete. In this way, not only does termination of string rewriting of the transformed system imply termination of the original cycle rewrite system; a similar conclusion can be drawn for non-termination. Apart from this transformational approach, we present a uniform framework of matrix interpretations, covering most of the earlier approaches to automatically proving termination of cycle rewriting. All our techniques serve both for proving termination and relative termination. We present several experiments showing the power of our techniques.
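A minimal Python sketch of the object itself, as a point of reference: a cycle is a string considered up to rotation, so a rule's left-hand side may also match across the seam between end and start. The rule ab -> ba, which terminates on ordinary strings, loops on cycles, as canonicalizing the successor shows.

```python
def cycle_rewrite_steps(cycle, lhs, rhs):
    """All one-step successors of a cyclic string under the rule lhs -> rhs.
    Every rotation is tried, so matches across the seam are found too."""
    n = len(cycle)
    succs = set()
    for k in range(n):
        rot = cycle[k:] + cycle[:k]
        i = rot.find(lhs)
        while i != -1:
            succs.add(rot[:i] + rhs + rot[i + len(lhs):])
            i = rot.find(lhs, i + 1)
    return succs

def canon(cycle):
    """Canonical representative: the lexicographically least rotation."""
    return min(cycle[k:] + cycle[:k] for k in range(len(cycle)))

succ = cycle_rewrite_steps("ab", "ab", "ba")
print(succ, {canon(s) for s in succ})
# {'ba'} {'ab'} -- the cycle rewrites to itself: non-terminating on cycles,
# although ab -> ba trivially terminates as a string rewrite rule.
```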
To accommodate the growth of the software industry, programming languages are becoming increasingly easy to use. The latest trend in simplifying the software development process is the use of visual programming environments. To make visual programming effective, the graph-like representation of the source code must be clearly arranged. This thesis details some of the difficulties of automatic layout generation and proposes an interface as well as two implementations of automatic layout generators that integrate into the VWorkflows visual programming framework.
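To hint at what an automatic layout generator does with such a dataflow graph, here is a minimal Python sketch of longest-path layering, one standard building block of hierarchical (Sugiyama-style) layout. It is not one of the thesis's two generators, and the node names are hypothetical.

```python
def longest_path_layers(nodes, edges):
    """Assign each node of a DAG to a horizontal layer via longest-path
    layering, so every edge points from a lower layer to a higher one."""
    preds = {v: set() for v in nodes}
    for u, v in edges:
        preds[v].add(u)
    layer = {}
    def depth(v):
        if v not in layer:
            layer[v] = 1 + max((depth(u) for u in preds[v]), default=-1)
        return layer[v]
    for v in nodes:
        depth(v)
    return layer

nodes = ["load", "filter", "fft", "plot"]
edges = [("load", "filter"), ("filter", "fft"), ("load", "plot"), ("fft", "plot")]
print(longest_path_layers(nodes, edges))
# {'load': 0, 'filter': 1, 'fft': 2, 'plot': 3}
```

Ordering nodes within each layer to reduce edge crossings is the harder follow-up problem such generators must also solve.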
Mobile Virtual Assistants are rising in popularity and ship with most new smartphones out of the box, yet theoretical work in the field is hard to come by, so a test is in order to establish the status quo of development. We manually tested six Mobile Virtual Assistants in the categories Voice Recognition, Online Search, Phone Control, and Natural Conversation. The results show that Siri is currently the best Mobile Virtual Assistant on the market, with an average success rate of 65.8% over all four categories.
This thesis contributes to the field of machine learning, with a specific focus on methods for learning relations between inputs. Learning relationships between images is one of the most common primitives in vision: there are many vision tasks in which relationships across images play an important role, among them motion estimation, activity recognition, stereo vision, multi-view geometry, and visual odometry. Many such tasks depend mainly on motion and disparity cues, which are inferred from the relations across multiple image pairs. The approaches presented in this thesis mainly deal with, but are not limited to, learning representations for motion and depth. This thesis by articles consists of five articles which present relational feature learning models along with their applications in computer vision. In the first article, we present an approach for encoding motion in videos. To this end, we show that the detection of spatial transformations can be viewed as the detection of coincidence, or synchrony, between a given sequence of frames and a sequence of features related by the transformation we wish to detect. Learning to detect synchrony becomes possible by introducing "multiplicative interactions" into the hidden units of single-layer sparse coding models.
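A minimal numpy sketch of such multiplicative interactions, with assumed (random, untrained) parameters: filter responses of two frames are multiplied element-wise, and mapping units pool the products, yielding a representation of the transformation relating the frames. The dimensions and names are illustrative, not the article's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D, F, H = 64, 32, 16              # input dim, factors, mapping (hidden) units

# Hypothetical parameters of a factored gated model (random stand-ins).
U = rng.standard_normal((F, D))   # filters applied to frame x
V = rng.standard_normal((F, D))   # filters applied to frame y
W = rng.standard_normal((H, F))   # pooling of the factor products

def mapping_units(x, y):
    """Multiplicative interactions: products of filter responses respond to
    synchrony between x and y; hidden units pool the products."""
    factors = (U @ x) * (V @ y)                    # element-wise product
    return 1.0 / (1.0 + np.exp(-(W @ factors)))    # sigmoid pooling

x = rng.standard_normal(D)
y = np.roll(x, 3)                  # y is a (circularly) shifted copy of x
print(mapping_units(x, y).shape)   # (16,) -- a representation of the shift
```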
We show that the learned motion representations, employed for the task of activity recognition, achieve competitive performance on multiple benchmarks.

Stereo vision is an important challenge in computer vision and useful for many applications in the field. In the second article, we extend the energy-based learning models previously used for motion encoding to the context of depth perception. Given the common architecture of the models for encoding motion and depth, we show that it is possible to define a single model that learns a unified representation for both cues. Our experimental results show that learning a combined representation for depth and motion makes it possible to achieve state-of-the-art performance at the task of 3-D activity analysis, and to perform better than existing hand-engineered 3-D motion features.

The autoencoder is a popular unsupervised learning method for learning an efficient encoding of a given set of data samples. Regularized autoencoders, which are typically used to learn over-complete and sparse representations of the input data, have been shown to fail on intrinsically high-dimensional data like videos. In the third article, we investigate the reason for this behavior. It can be observed that regularized autoencoders typically learn negative hidden-unit biases. We show that the learning of negative biases is the result of hidden units being responsible for both the sparsity and the representation of the input data. As a result, the behavior of the model resembles clustering methods, which would require an exponentially large number of features to model intrinsically high-dimensional data. Based on this understanding, we propose a new activation function which decouples these two roles of the hidden layer and uses a linear encoding. This makes it possible to learn representations on data with very high intrinsic dimensionality. We also show that the gating connections in the bi-linear models and the single-layer models from articles one and two of this thesis can be thought of as a way to attain a linear encoding scheme, which allows them to learn good representations on videos.

Visual odometry is the task of inferring the egomotion of a moving object from visual information such as images and videos. It is primarily used for localization and has many applications in robotics and navigation. The work in article four was motivated by the idea of applying deep learning techniques, which are successful in many vision tasks, to visual odometry. The visual odometry task mainly requires inferring motion and depth information from visual input, which can then be mapped to velocity and change in direction. We use the relational feature models presented in articles one and two to infer a combined motion and depth representation from stereo video sequences. The combined representation is then mapped to discrete velocity and change-in-direction labels using convolutional neural networks. Our approach is an end-to-end deep learning architecture which uses a single type of computational model and learning rule. Preliminary results show that the architecture is capable of learning the mapping from input video to egomotion.

Activity recognition is a challenging computer vision task with many real-world applications, and it is well known that applying computer vision research in real-time applications is hard.
In the fifth article of this thesis, we present a real-time activity recognition system based on deep learning. Our approach uses energy-based relational feature learning models to compute local motion features directly from videos. A bag-of-words over the local motion features is used for the analysis of activity in a given video sequence. We implement this system on a distributed computational platform and demonstrate its performance on the iCub robot. Using GPUs, we demonstrate real-time performance, which makes the deployment of activity recognition systems in real-world scenarios possible.
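A hedged sketch of the bag-of-words stage, under stated assumptions: each local motion feature is quantized against a codebook (which the real system would learn, e.g. with k-means over training features), and the normalized histogram of codeword counts summarizes the whole video for a downstream classifier. The feature dimension and the codebook below are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
K, D = 50, 128                       # vocabulary size, feature dimension

# Stand-in for a vocabulary learned over training features (e.g. by k-means).
codebook = rng.standard_normal((K, D))

def bag_of_words(local_features, codebook):
    """Quantize each local motion feature to its nearest codeword and
    return the normalized histogram describing the whole video."""
    d = ((local_features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d.argmin(axis=1)                     # nearest-codeword index
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

video_features = rng.standard_normal((300, D))   # e.g. 300 local motion features
h = bag_of_words(video_features, codebook)
print(h.shape, h.sum())                          # (50,) 1.0
```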