The encoding of images by semantic entities is still an unresolved task. This paper proposes encoding images by only a few important components or image primitives. Classically, this is done by Principal Component Analysis (PCA). Recently, Independent Component Analysis (ICA) has found strong interest in the signal processing and neural network communities. Using independent components as pattern primitives, we aim for the source patterns with the highest occurrence probability or highest information. For the example of a synthetic image composed of characters, this idea selects the salient ones. For natural images it does not lead to an acceptable reproduction error, since no a priori probabilities can be computed. Combining the traditional principal component criterion of PCA with the independence property of ICA, we obtain a better encoding. It turns out that the Independent Principal Components (IPC), in contrast to the Principal Independent Components (PIC), implement the classical demand of Shannon's rate distortion theory.
This paper proposes a new approach for encoding images by only a few important components. Classically, this is done by Principal Component Analysis (PCA). Recently, Independent Component Analysis (ICA) has found strong interest in the neural network community. Applied to images, we aim for the most important source patterns, those with the highest occurrence probability or highest information, called Principal Independent Components (PIC). For the example of a synthetic image composed of characters, this idea selects the salient ones. For natural images it does not lead to an acceptable reproduction error, since no a priori probabilities can be computed. Combining the traditional principal component criterion of PCA with the independence property of ICA, we obtain a better encoding. It turns out that this definition of PIC implements the classical demand of Shannon's rate distortion theory.
Classically, the encoding of images by only a few important components is done by Principal Component Analysis (PCA). Recently, a data analysis tool called Independent Component Analysis (ICA), devised for separating independent influences in signals, has found strong interest in the neural network community, and it has also been applied to images. Whereas that approach assumes continuous source channels mixed into the same number of channels by a mixing matrix, we assume that images are composed of only a few image primitives; for images, there are thus fewer sources than pixels. Additionally, in order to discard unimportant information, we aim only for the most important source patterns, those with the highest occurrence probabilities or greatest information, called "Principal Independent Components" (PIC). For the example of a synthetic picture composed of characters, this idea yields the most important ones. Nevertheless, for natural images, where no a priori probabilities can be computed, this does not lead to an acceptable reproduction error. Combining the traditional principal component criterion of PCA with the independence property of ICA, we obtain a better encoding. It turns out that this definition of PIC implements the classical demand of Shannon's rate distortion theory.
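The abstracts above build on encoding image patches by a few principal components. As a minimal illustrative sketch (not the papers' actual ICA/PIC method), the PCA baseline can be written with numpy: project mean-centered patches onto the top k principal directions and reconstruct, with the reproduction error shrinking as k grows.

```python
import numpy as np

# Minimal PCA-encoding sketch: represent 8x8 "patches" (here random toy
# data) by their first k principal components, then reconstruct.
rng = np.random.default_rng(0)
patches = rng.normal(size=(500, 64))           # toy stand-in for image patches
mean = patches.mean(axis=0)
X = patches - mean
# Rows of Vt are the principal directions of the patch set.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
k = 8                                          # keep only a few components
codes = X @ Vt[:k].T                           # low-dimensional encoding
recon = codes @ Vt[:k] + mean                  # decode back to patch space
mse = float(np.mean((patches - recon) ** 2))
print(round(mse, 4))
```

Keeping all 64 components reproduces the patches exactly; the papers' point is that independence criteria can pick *which* few components to keep more meaningfully than variance alone.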
The efficient management of large multimedia databases requires the development of new techniques to process, characterize, and search for multimedia objects. Especially in the case of image data, the rapidly growing number of documents prohibits a manual description of the images' content. Instead, automated characterization is highly desirable to support annotation and retrieval of digital images. However, this is a very complex and still unsolved task. To contribute to a solution of this problem, we have developed a mechanism for recognizing objects in images based on the query-by-example paradigm. To this end, the most salient image features of an example image representing the searched object are extracted to obtain a scale-invariant object model. The use of this model provides an efficient and robust strategy for recognizing objects in images independently of their size. Further applications of the mechanism are classical recognition tasks such as scene decomposition or object tracking in video sequences.
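The core of the query-by-example idea is locating an example pattern inside a larger image. A much simpler stand-in for the paper's scale-invariant feature model is plain template matching by sum of squared differences, sketched here with an assumed toy image:

```python
import numpy as np

# Query-by-example in its simplest form: slide an example patch over the
# image and keep the position with the smallest sum of squared differences.
# (The paper uses a scale-invariant feature model instead; this is only
# the underlying matching idea.)
rng = np.random.default_rng(3)
image = rng.uniform(size=(32, 32))
template = image[10:14, 20:24].copy()          # the "example" of the object

h, w = template.shape
best, best_pos = np.inf, None
for r in range(image.shape[0] - h + 1):
    for c in range(image.shape[1] - w + 1):
        ssd = np.sum((image[r:r + h, c:c + w] - template) ** 2)
        if ssd < best:
            best, best_pos = float(ssd), (r, c)
print(best_pos)
```

Such raw matching fails when the object changes size, which is exactly the limitation the abstract's scale-invariant object model addresses.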
This paper argues for a rational treatment of patient data and, to this end, takes a closer look at analyzing such data with neural networks. Successful example applications show that human diagnostic abilities are clearly inferior to neural diagnosis systems. For the example of the newer RBF-network architecture, the functionality is explained in more detail, and it is shown how human and neural expertise can be coupled. The outlook sketches applications and practical issues of such systems.
Performance and storage requirements of topology-conserving maps for robot manipulator control
(1989)
A new programming paradigm for controlling a robot manipulator by learning the mapping between Cartesian space and joint space (the inverse kinematics) is discussed. It is based on Kohonen's neural network model of an optimal mapping between two high-dimensional spaces. This paper describes the approach and presents the optimal mapping, based on the principle of maximal information gain. It is shown that Kohonen's mapping is optimal in this sense in the two-dimensional case. Furthermore, the principal control error made by the learned mapping is evaluated for the example of the commonly used PUMA robot, the trade-off between storage resources and positional error is discussed, and an optimal position encoding resolution is proposed.
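The topology-conserving map underlying this approach can be sketched in its simplest, one-dimensional form (a generic Kohonen map on a toy interval, not the paper's robot setup): neurons compete for each input, and the winner and its neighbours move toward the sample, so neighbouring neurons end up representing neighbouring regions of the input space.

```python
import numpy as np

# Generic 1-D Kohonen (self-organizing) map sketch on the interval [0, 1].
# All constants (neuron count, rates, schedules) are illustrative choices.
rng = np.random.default_rng(1)
n_neurons, n_steps = 10, 2000
w = rng.uniform(0.0, 1.0, size=n_neurons)      # one weight per neuron
for t in range(n_steps):
    x = rng.uniform(0.0, 1.0)                  # sample from the input space
    winner = int(np.argmin(np.abs(w - x)))     # best-matching unit
    lr = 0.5 * (1.0 - t / n_steps)             # decaying learning rate
    sigma = max(1.0 * (1.0 - t / n_steps), 0.1)  # shrinking neighbourhood
    for i in range(n_neurons):
        h = np.exp(-((i - winner) ** 2) / (2 * sigma ** 2))
        w[i] += lr * h * (x - w[i])            # pull winner and neighbours
print(np.round(np.sort(w), 2))
```

After training, the weights tile the interval, which is the storage/resolution trade-off the abstract analyzes: more neurons mean finer position encoding but larger maps.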
In bioinformatics, biochemical signal pathways can be modeled by many differential equations. It is still an open problem how to fit the huge number of parameters of these equations to the available data. Here, an approach that systematically obtains the most appropriate model and learns its parameters is extremely interesting. One of the most frequently used criteria for model selection is to choose the least complex model which "fits the needs". For noisy measurements, however, the model with the smallest mean squared error on the observed data fits the data too closely: it overfits. Such a model will perform well on the training data, but worse on unknown data. This paper proposes as model selection criterion the least complex description of the observed data by the model, the minimum description length. The performance of the approach is evaluated for the small but important example of inflammation modeling.
Keywords: biochemical pathways, differential equations, septic shock, parameter estimation, overfitting, minimum description length
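The contrast between minimum-squared-error and minimum-description-length selection can be sketched on a toy regression problem (polynomial fits rather than the paper's differential equations, and using the common BIC-style two-part code length as the MDL approximation): the raw error always prefers the most complex model, while the description length balances data cost against parameter cost.

```python
import numpy as np

# MDL-style model selection sketch: fit polynomials of increasing degree
# to noisy quadratic data and pick the degree with the shortest two-part
# description (BIC-style approximation: data code + parameter code).
rng = np.random.default_rng(2)
n = 50
x = np.linspace(-1.0, 1.0, n)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(scale=0.2, size=n)

def description_length(degree):
    coef = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coef, x) - y) ** 2))
    k = degree + 1                              # number of parameters
    # data code length + parameter code length
    return 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n)

best = min(range(1, 10), key=description_length)
print(best)
```

The raw residual error keeps shrinking up to degree 9, but the description length is minimized near the true model order, which is the anti-overfitting behaviour the abstract argues for.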
In bioinformatics, biochemical pathways can be modeled by many differential equations. It is still an open problem how to fit the huge number of parameters of these equations to the available data. Here, an approach that systematically learns the parameters is necessary. In this paper, a network is constructed for the small but important example of inflammation modeling, and different learning algorithms are proposed. It turned out that, due to the nonlinear dynamics, evolutionary approaches are necessary to fit the parameters to the sparse given data.
Proceedings of the 15th IEEE International Conference on Tools with Artificial Intelligence - ICTAI 2003
The early prediction of mortality is one of the unresolved tasks in intensive care medicine. This contribution models medical symptoms as observations caused by transitions between hidden Markov states. Learning the underlying state transition probabilities results in a prediction success rate of about 91%. The results are discussed and put in relation to the model used. Finally, the rationale for using the model is reflected upon: are there states in the septic shock data?
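The hidden-Markov view above can be made concrete with the standard forward (filtering) recursion: given transition and emission probabilities, update the probability of each hidden state after every observed symptom. All numbers below are illustrative, not the paper's learned septic-shock parameters.

```python
import numpy as np

# Forward-algorithm sketch for a 2-state HMM with 2 observation symbols.
A = np.array([[0.9, 0.1],       # state transition probabilities
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],       # P(observation | state), rows = states
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])       # initial state distribution

obs = [0, 1, 1, 1]              # observed symptom codes over time
alpha = pi * B[:, obs[0]]
alpha /= alpha.sum()
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
    alpha /= alpha.sum()            # normalize to filtered probabilities
print(np.round(alpha, 3))
```

After a run of symbol-1 observations, the filtered distribution shifts toward the state that emits that symbol most often; in the paper's setting the analogous quantity drives the mortality prediction.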
Data driven automatic model selection and parameter adaptation – a case study for septic shock
(2004)
In bioinformatics, biochemical pathways can be modeled by many differential equations. It is still an open problem how to fit the huge number of parameters of these equations to the available data. Here, an approach that systematically learns the parameters is necessary. This paper proposes as model selection criterion the least complex description of the observed data by the model, the minimum description length. The performance of the approach is evaluated for the small but important example of inflammation modeling.