In this work the nuclear structure of exotic and superheavy nuclei is studied in a relativistic framework. In the relativistic mean-field (RMF) approximation, the nucleons interact with each other through the exchange of various effective mesons (scalar, vector, isovector-vector). Ground-state properties of exotic and superheavy nuclei are studied in RMF theory with three different parameter sets (ChiM, NL3, NL-Z2). Axially deformed calculations of nuclei between the two drip lines are performed with the parameter set ChiM. The positions of the drip lines are investigated with the three parameter sets (ChiM, NL3, NL-Z2) and compared with experimentally known drip-line nuclei. In addition, the structure of hypernuclei is studied, and for a certain isotope a hyperon halo nucleus is predicted.

Chapter 1 contains the general background of our work. We briefly discuss important aspects of quantum chromodynamics (QCD) and introduce the concept of the chiral condensate as an order parameter for the chiral phase transition. Our focus is on the concept of universality and the arguments why the O(4) model should fall into the same universality class as the effective Lagrangian for the order parameter of (massless) two-flavor QCD. Chapter 2 pedagogically explains the CJT formalism and is concerned with the WKB method. In chapter 3 the CJT formalism is then applied to a simple Z(2)-symmetric toy model featuring a one-minimum classical potential. As for all other models we are concerned with in this thesis, we study the behavior at nonzero temperature. This is done in 1+3 dimensions as well as in 1+0 dimensions. In the latter case we are able to compare the effective potential at its global minimum (which is minus the pressure) with our result from the WKB approximation. In chapter 4 this program is also carried out for the toy model with a double-well classical potential, which allows for spontaneous symmetry breaking and tunneling. Our major interest, however, is in the O(2) model with the fields treated as polar coordinates. This model can be regarded as the first step towards the O(4) model in four-dimensional polar coordinates. Although in principle independent, all subjects discussed in this thesis are directly related to questions arising from the investigation of this particular model. In chapter 5 we start from the generating functional in Cartesian coordinates and carry out the transition to polar coordinates. Then we are concerned with the question under which circumstances it is allowed to use the same Feynman rules in polar coordinates as in Cartesian coordinates. This question turns out to be non-trivial. On the basis of the common Feynman rules we apply the CJT formalism in chapter 6 to the polar O(2) model.
The case of 1+0 dimensions was intended to be a toy model on the basis of which one could more easily explore the transition to polar coordinates. However, it turns out that we are faced with an additional complication in this case, the infrared divergence of thermal integrals. This problem requires special attention and motivates the explicit study of a massless field under topological constraints in chapter 8. In chapter 7 we investigate the Cartesian O(2) model in 1+0 dimensions. We compare the effective potential at its global minimum calculated in the CJT formalism and via the WKB approximation. Appendix B reviews the derivation of standard thermal integrals in 1+0 and 1+3 dimensions and constitutes the basis for our CJT calculations and the discussion of infrared divergences. In chapter 9 we discuss the so-called path integral collapse and propose a solution to this problem. In chapter 10 we present our conclusions and an outlook. Since we were interested in organizing our work as pedagogically as possible within the narrow scope of a diploma thesis, we decided to make extensive use of appendices. Appendices A-H are intended for students who are not familiar with several important concepts we are concerned with. We will refer to them explicitly to establish the connection between our work and the general context in which it is set.
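For orientation, the central object of the CJT formalism referred to throughout is the effective potential for the one- and two-point functions. In a common finite-temperature convention (the notation here is assumed, not copied from the thesis),

```latex
V[\bar\phi, G] \;=\; U(\bar\phi)
\;+\; \frac{1}{2}\int_k \Big[\, \ln G^{-1}(k)
\;+\; D^{-1}(k;\bar\phi)\,G(k) \;-\; 1 \,\Big]
\;+\; V_2[\bar\phi, G]\,,
```

where $U$ is the classical potential, $D$ the tree-level propagator in the presence of the background field $\bar\phi$, and $V_2$ the sum of two-particle-irreducible vacuum diagrams; stationarity with respect to $\bar\phi$ and $G$ yields the gap equations.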

Neutron stars are very dense objects: one teaspoon of their material would have a mass of five billion tons. Their gravitational force is so strong that an object falling from just one meter would hit the surface at two thousand kilometers per second. In such dense bodies, particles different from the ones present in atomic nuclei, the nucleons, can exist. These can be hyperons, which carry non-zero strangeness, or broader resonances. There can also be different states of matter inside neutron stars, such as meson condensates and, if the density is high enough to deconfine the nucleons, quark matter. As new degrees of freedom appear in the system, different aspects of matter have to be taken into account, the most important of them being the restoration of chiral symmetry. This symmetry is spontaneously broken, a fact related to the presence of a condensate of scalar quark-antiquark pairs, which for this reason is called the chiral condensate. This condensate is present at low densities and even in vacuum. It is important to remember at this point that the modern concept of the vacuum is far from emptiness: it is full of virtual particles that are constantly created and annihilated, their existence allowed by the uncertainty principle. At very high temperature or density, when the composite particles are dissolved into their constituents, the chiral condensate vanishes and chiral symmetry is restored. To explain how and when chiral symmetry is restored in neutron stars we use the non-linear sigma model. This is an effective relativistic quantum model that was developed to describe systems of hadrons interacting via meson exchange. The model is constructed from symmetry relations, which make it chirally invariant.
The first consequence of this invariance is that there are no bare mass terms in the Lagrangian density, so that all, or most, of the particle masses come from interactions with the medium. There are other interesting features of neutron stars that cannot be found anywhere else in nature. One of them is the high isospin asymmetry: in a normal nucleus, the numbers of protons and neutrons are roughly equal, whereas in a neutron star the number of neutrons is much higher than that of protons. The resulting extra energy (the Fermi energy) increases the energy of the system, allowing the star to support more mass against gravitational collapse. As a consequence, in early stages of the neutron star evolution, when there are still many trapped neutrinos, the proton fraction is higher than in later stages, and consequently the maximum mass the star can support against gravity is smaller. This, among many other features, shows how microscopic phenomena can be reflected in the macroscopic properties of the star. Another important property of neutron stars is charge neutrality. It is a required assumption for stability, but there are others; one example is chemical equilibrium, which means that the number of particles of each kind is not conserved, but particles are created and annihilated through specific reactions that proceed at the same rate in both directions. Although the microscopic physics of neutron stars can be calculated in the space-time of special relativity, Minkowski space, this is not true for the global properties of the star, for which general relativity has to be used. The solution of Einstein's equations, simplified to static, spherical and isotropic stars, corresponds to configurations in which the star is in hydrostatic equilibrium: the internal pressure, coming mainly from the Fermi energy of the neutrons, balances gravity and prevents collapse.
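The hydrostatic equilibrium of a static, spherical star is governed by the Tolman-Oppenheimer-Volkoff (TOV) equations. A minimal sketch, using a toy polytropic equation of state and a simple Euler integrator in geometrized units (illustrative only, not the code or the equation of state of this thesis):

```python
import numpy as np

def tov_solve(eps_c, K=100.0, gamma=2.0, dr=1e-3, r_max=20.0):
    """Integrate the TOV equations outwards from the center for a toy
    polytrope p = K * eps**gamma, in geometrized units (G = c = Msun = 1).
    Illustrative sketch, not the thesis code."""
    p = K * eps_c**gamma          # central pressure
    m, r = 0.0, dr                # enclosed mass, radial coordinate
    while p > 1e-12 and r < r_max:
        eps = (p / K) ** (1.0 / gamma)                  # invert the toy EoS
        dpdr = -(eps + p) * (m + 4*np.pi*r**3*p) / (r * (r - 2*m))
        p += dpdr * dr                                   # pressure profile
        m += 4*np.pi*r**2*eps * dr                       # mass continuity
        r += dr
    return m, r   # gravitational mass and stellar radius (geometrized units)

M, R = tov_solve(eps_c=2.8e-3)   # a typical neutron-star central density
```

Integration stops where the pressure vanishes, which defines the stellar surface; scanning over central densities then traces out a mass-radius curve.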
When rotation is included the star becomes more stable and, consequently, can be more massive. The rotation also makes it non-spherical, which requires the metric of the star to be a function of the polar coordinate as well. Another important feature that has to be taken into account is the dragging of the local inertial frame: it generates centrifugal forces that do not originate from interactions with other bodies, but from the non-rotation of the frame of reference within which observations are made. These modifications are introduced through Hartle's approximation, which solves the problem by applying perturbation theory. In the mean-field approximation, the couplings and parameters of the non-linear sigma model are calibrated to reproduce massive neutron stars. The introduction of new degrees of freedom decreases the maximum mass allowed for the neutron star, as they soften the equation of state. In practice, the only baryons present in the star besides the nucleons are the Lambda and Sigma- when the baryon octet is included, and the Lambda and Delta-,0,+,++ when the baryon decuplet is included. Leptons are included to ensure charge neutrality. We chose to carry out our calculations including the baryon octet but not the decuplet, in order to avoid uncertainties in the couplings. The couplings of the hyperons were fitted to the depths of their potentials in nuclei. In this case chiral symmetry restoration can be observed through the behavior of the related order parameter: the symmetry begins to be restored inside neutron stars, and the transition is a smooth crossover. Different stages of the neutron star cooling are reproduced taking into account trapped neutrinos, finite temperature and entropy. Finite-temperature calculations include the heat bath of hadronic quasiparticles within the grand canonical potential of the system.
Different schemes are considered, with constant temperature, metric-dependent temperature and constant entropy. The neutrino chemical potential is introduced by fixing the lepton number in the system, which also controls the amounts of electrons and protons (for charge neutrality). The balance between these two features is delicate and influenced mainly by baryon number conservation. Isolated stars have a fixed number of baryons, which links the different stages of the cooling. The maximum masses allowed in each stage of the cooling process are determined: the stage with high entropy and trapped neutrinos, the deleptonized stage with high entropy, and the cold stage in beta equilibrium. The cooling process is also influenced by constraints related to the rotation of the star. When rotation is included the star becomes more stable and, consequently, can be more massive. The rotation also deforms it, requiring modifications of the metric that are introduced through perturbation theory. The analysis of the first stages of the neutron star, when it is called a proto-neutron star, gives constraints on the possible rotation frequencies in the colder stages. Instability windows are calculated, in which the star can be stable during certain stages but collapses into a black hole during the cooling process. In the last part of the work the hadronic SU(3) model is extended to include quark degrees of freedom. A new effective potential for the order parameter of deconfinement, the Polyakov loop, connects the physics at low chemical potential and high temperature of the QCD phase diagram with the high-chemical-potential, low-temperature part. This is done by introducing a chemical potential dependence into the already temperature-dependent potential. Analyzing the effect of both order parameters, the chiral condensate and the Polyakov loop, we can draw a phase diagram for symmetric as well as for star matter.
The diagram contains a crossover region as well as a first-order phase transition line. The new couplings and parameters of the model are chosen mainly to fit lattice QCD results, including the position of the critical point. Finally, this matter, containing different degrees of freedom depending on which phase of the diagram we are in, is used to calculate the properties of hybrid stars.

Compact stars can be treated as the ultimate laboratories for testing theories of dense matter. They are not only extremely dense objects, but they are known to be associated with strong magnetic fields, fast rotation and, in certain cases, with very high temperatures. Here, we present several different approaches to model numerically the signatures and properties of these stars, namely:
• The effects of strong magnetic fields on hybrid stars, using a fully general relativistic approach. We solved the coupled Maxwell-Einstein equations in a self-consistent way, taking into consideration the anisotropy of the energy-momentum tensor due purely to the magnetic field, magnetic field effects on the equation of state, and the interaction between matter and the magnetic field (magnetization). We showed that the effects of the magnetization and of the magnetic field on the equation of state of matter do not play an important role in the global properties of neutron stars (only the pure magnetic field contribution does). In addition, the magnetic field breaks the spherical symmetry of stars, inducing major changes in the populated degrees of freedom inside these objects and, potentially, converting a hybrid star into a hadronic star over time.
• The effects of magnetic fields and rotation on the structure and composition of proto-neutron stars. We found that the magnetic field not only deforms these stars, but also significantly alters the number of trapped neutrinos in the stellar interior, together with the strangeness content and temperature in each evolution stage from a hot proto-neutron star to a cold neutron star.
• The influence of quark-hadron phase transitions in neutron stars. In particular, previous calculations have shown that fast-rotating neutron stars, when subjected to a quark-hadron phase transition in their interiors, could give rise to the backbending phenomenon characterized by a spin-up era. In this work, we obtained this backbending phenomenon for fast-spinning neutron stars. More importantly, we showed that a magnetic field, assumed to be axisymmetric and poloidal, can also be enhanced by the phase transition from normal hadronic matter to quark matter in highly magnetized neutron stars. Therefore, in parallel to the spin-up era, classes of neutron stars endowed with strong magnetic fields may go through a 'magnetic-up era' in their lives.
• Finally, we were also able to calculate super-heavy white dwarfs in the presence of strong magnetic fields. White dwarfs are the progenitors of Type Ia supernova explosions, and they are widely used as standard candles to show that the expansion of the Universe is accelerating. However, observations of ultraluminous supernovae have suggested that the progenitors of such explosions should be white dwarfs with masses above the well-known Chandrasekhar limit of ~1.4 solar masses. In agreement with other works, but using a fully general relativistic framework, we also obtained strongly magnetized white dwarfs with masses of ~2.0 solar masses.

The present work deals with the integration of the variable renewable energy sources wind and solar into the European and US power grids. In contrast to other networks, such as the gas supply mains, the electricity network is practically unable to store energy; generation and consumption therefore always have to be balanced. Currently, the load curve is viewed as a rigid boundary condition which the generation system must follow. The basic idea of the approach followed here is that weather-dependent generation shifts the focus of the electricity supply: at high shares of wind and solar generation, the role of the rigid boundary condition falls to the residual load, that is, the load remaining after subtraction of renewable generation. The goal is to include the weather dependence as well as the load curve in the design of the future electricity supply.
After a brief introduction, the present work first turns to the underlying weather, generation and load data, which form the starting point of the analysis. In addition, some basic concepts of energy economics that are needed in the following are discussed.
In the main part of the thesis, several algorithms are developed to determine the load flow in a network with a high share of wind and solar energy, and to determine the backup supply needed at the same time. Minimization of the energy needed from controllable power plants, of the capacity of variable power plants, and of the storage capacity serve as guiding principles. In addition, the optimization problem of grid extensions is considered, and it is shown that it can be formulated as a convex optimization problem. It turns out that with an optimized international transmission network of about four times the currently available transmission capacity, much of the potential savings in backup energy (about 40%) in Europe can be realized. In contrast, a twelvefold increase of the transmission capacity would be necessary to realize all possible savings in dispatchable power plants.
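The convexity mentioned here is what makes the problem tractable: minimising backup energy subject to transmission constraints is a linear programme. A toy two-node version with hypothetical residual loads, using scipy as the solver (not the thesis model or data), illustrates the structure:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical residual loads (load minus renewables) at two nodes, 4 hours.
r1 = np.array([2.0, -1.0, 3.0, 0.0])
r2 = np.array([0.0, 2.0, -2.0, 1.0])
T, F = len(r1), 1.0                      # time steps, line capacity

# Variables x = [b1(0..T), b2(0..T), f(0..T)]; minimise total backup energy.
c = np.r_[np.ones(T), np.ones(T), np.zeros(T)]
A, b = [], []
for t in range(T):
    row1 = np.zeros(3*T); row1[t] = -1.0; row1[2*T + t] = 1.0      # -b1 + f <= -r1
    row2 = np.zeros(3*T); row2[T + t] = -1.0; row2[2*T + t] = -1.0  # -b2 - f <= -r2
    A += [row1, row2]; b += [-r1[t], -r2[t]]
bounds = [(0, None)] * 2*T + [(-F, F)] * T   # backup >= 0, flow within line limit
res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=bounds)
```

The inequality constraints allow curtailment of surplus renewables; without the line (F = 0) the optimum is the sum of all positive residuals, while raising F lets surplus at one node offset deficits at the other.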
The reduction of the dispatchable generation capacity and of the storage capacity, however, presents a greater challenge. Due to correlations in the generation time series of the individual countries, it can be reduced only with difficulty, and by only about 30%.
Next, the influence of the relative share of wind and solar energy is examined, together with its interplay with the transmission line capacities. A stronger transmission network tends to allow a higher proportion of wind energy to be integrated. With increasing line capacity, the optimal mix in Europe therefore shifts from about 70% to 80% wind. Similar analyses are carried out for the US, with comparable results.
In addition, the cost of the overall system can be reduced. Interestingly, the advantages for network integration may outweigh the higher production costs of individual technologies, so that from the viewpoint of the entire system it can be more favourable to use the more expensive technologies.
Finally, attention is given to the flexibility of the dispatchable power plants. Starting from a Fourier-like decomposition of the load curve as it was a few years ago, when hardly any renewable generation capacity was present, the capacities of different flexibility classes of dispatchable power plants are calculated. For this purpose, it is assumed that the power plant fleet is able to follow the load curve without significant surpluses or deficits. From this examination, the minimum capacity that must be available is derived, without having to resort to a detailed database of existing power plants.
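Such a Fourier-like decomposition can be sketched as follows: split a (here synthetic) hourly load curve at a cutoff frequency, assign the slow component to base-load plants and the remainder to flexible ones, and read off the required capacities as the component maxima. The cutoff and the data are illustrative assumptions, not the thesis values:

```python
import numpy as np

# Synthetic hourly load for one year: base level, seasonal and diurnal cycles.
t = np.arange(365 * 24)
load = 50 + 10*np.cos(2*np.pi*t/(365*24)) + 8*np.cos(2*np.pi*t/24)

# Fourier split: keep only frequencies slower than the assumed cutoff.
cutoff = 1.0 / 48.0                         # cycles per hour (sub-diurnal)
spec = np.fft.rfft(load)
freq = np.fft.rfftfreq(load.size, d=1.0)    # cycles per hour
slow = np.fft.irfft(np.where(freq < cutoff, spec, 0), n=load.size)
fast = load - slow                          # what flexible plants must follow

cap_slow = slow.max()   # capacity of the slowly flexible (base-load) class
cap_fast = fast.max()   # additional highly flexible capacity
```

For this synthetic curve the slow class recovers the base plus seasonal component (peak 60) and the flexible class the diurnal swing (peak 8); with real load data the split is less clean and depends on where the cutoff is placed.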
Assuming strong European cooperation, with a stronger international transmission network, the dispatchable power capacity can be significantly reduced while maintaining security of supply and generating relatively small surpluses in dispatchable power plants.

In this work the flexibility requirements of a highly renewable European electricity network that has to cover fluctuations of wind and solar power generation on different temporal and spatial scales are studied. Cost-optimal ways to do so are analysed, including the optimal distribution of the infrastructure, large-scale transmission, storage, and dispatchable generators. In order to examine these issues, a model of increasing sophistication is built: first different flexibility classes of conventional generation are considered, then storage is added, before finally transmission is included, in order to see the effects of each.
To conclude, in this work it was shown that slowly flexible base-load generators can only be used in energy systems with renewable shares of less than 50%, independent of the expansion of an interconnecting transmission network within Europe. Furthermore, for a system with a dominant fraction of renewable generation, highly flexible generators are essentially the only necessary class of backup generators. The total backup capacity can only be decreased significantly if interconnecting transmission is allowed, clearly favouring a Europe-wide energy network. These results are independent of the complexity level of the cost assumptions used for the models. The use of storage technologies allows the required conventional backup capacity to be reduced further. This highlights the importance of including additional technologies in the energy system that provide flexibility to balance the fluctuations caused by the renewable energy sources. These technologies could, for example, be advanced energy storage systems, interconnecting transmission in the electricity network, and hydro power plants.
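The effect of storage on the required backup capacity can be illustrated with a toy greedy dispatch against a short residual-load series (hypothetical numbers and a deliberately simple charge/discharge rule, not the optimisation model of the thesis):

```python
import numpy as np

def backup_needed(residual, e_max=0.0, p_max=np.inf, eta=1.0):
    """Greedy storage dispatch: charge on renewable surplus, discharge on
    deficit. Returns the peak backup power still required.
    Toy model, not the thesis code."""
    soc, peak = 0.0, 0.0                 # state of charge, peak backup power
    for r in residual:
        if r < 0:                        # surplus: charge as much as possible
            charge = min(-r, p_max, (e_max - soc) / eta)
            soc += eta * charge
        else:                            # deficit: discharge first
            discharge = min(r, p_max, soc)
            soc -= discharge
            peak = max(peak, r - discharge)   # remainder comes from backup
    return peak

residual = np.array([-3.0, 4.0, -1.0, 5.0, 2.0])   # hypothetical series
no_store = backup_needed(residual)                  # -> 5.0
with_store = backup_needed(residual, e_max=3.0)     # -> 4.0
```

Even this crude rule shows the qualitative result: a store shaves the backup peak, but correlated deficits longer than the storage horizon limit how far it can be reduced.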
It was demonstrated that a cost-optimal European electricity system with almost 100% renewable generation can have total system costs comparable to today's. However, this requires a very large transmission grid expansion, to nine times the line volume of the present-day system. Limiting transmission increases the system cost by up to a third; however, a compromise grid with four times today's line volume already locks in most of the cost benefits. It is therefore clear that increasing the pan-European network connectivity enables a cost-efficient integration of renewable energies, which is strongly needed to reach current climate change prevention goals.
It was also shown that a similarly cost-efficient, highly renewable European electricity system can be achieved under a wide range of additional policy constraints and plausible changes of economic parameters.

Nanomaterials, i.e., materials that are manufactured at a very small spatial scale, can possess unique physical and chemical properties and exhibit novel characteristics compared to the same material without nanoscale features. The reduction of size down to the nanometer scale leads to an abundance of potential applications in different fields of technology. For instance, tailoring the physicochemical properties of nanomaterials to modify their interaction with a biological environment is reflected in a number of biomedical applications.
Strategies for choosing the size and composition of nanoscale systems are often hindered by a limited understanding of interactions that are difficult to study experimentally. This gap can, however, be closed by means of advanced computer simulations. This thesis explores, from theoretical and computational viewpoints, the stability and the electronic and thermo-mechanical properties of nanoscale systems and materials related to biomedical applications.
We examine the ability of existing classical interatomic potentials, fitted to describe ground-state properties of perfect bulk materials, to reproduce the stability and thermo-mechanical properties of metal systems.
It is found that existing classical interatomic potentials describe highly excited vibrational states poorly when the system is far from the potential energy minimum. Since a reliable computational model is essential for the further development of nanomaterials for applications, a new interatomic potential is proposed in this work that correctly reproduces both the melting temperature and the ground-state properties of different metals, such as gold, platinum, titanium, and magnesium, in classical molecular dynamics simulations. The suggested modification of a many-body potential is of a general nature and can be utilized for similar numerical explorations of the thermo-mechanical properties of a broad range of molecular and solid-state systems experiencing phase transitions.
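Many-body potentials for metals of the kind discussed here are often of the Gupta (second-moment tight-binding) form: a pairwise repulsion plus an attractive square-root "band" term. A minimal evaluation with illustrative, unfitted parameters (this is a generic sketch, not the modified potential proposed in the thesis):

```python
import numpy as np

def gupta_energy(pos, A=0.2, xi=1.8, p=10.0, q=3.0, r0=1.0):
    """Gupta / TB-SMA energy of a cluster: sum over atoms of a pairwise
    exponential repulsion minus the square root of a many-body band term.
    Parameters here are illustrative only, not fitted to any metal."""
    n, E = len(pos), 0.0
    for i in range(n):
        rep, band = 0.0, 0.0
        for j in range(n):
            if i == j:
                continue
            r = np.linalg.norm(pos[i] - pos[j])
            rep += A * np.exp(-p * (r / r0 - 1.0))            # repulsive pair term
            band += xi**2 * np.exp(-2.0 * q * (r / r0 - 1.0))  # band contribution
        E += rep - np.sqrt(band)   # non-additive many-body attraction
    return E

# Equilateral triangle at the nominal bond length r0:
tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, np.sqrt(3)/2, 0.0]])
E_tri = gupta_energy(tri)
```

The square root is what makes the attraction non-additive: the binding per bond weakens with coordination, which is the feature such potentials use to capture surface and melting behaviour, and the term a modification would typically target.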
The applicability of the classical interatomic potentials to the description of nanoscale systems consisting of several tens to hundreds of atoms is also explored in this study. This issue is important, for instance, in the case of nanostructured materials, where grains or nanocrystals have a typical size of a few nanometers. We validate the classical potentials through comparison with density-functional theory calculations of small atomic clusters made of titanium and nickel. By this analysis, we demonstrate that classical potentials fitted to describe ground-state properties of a bulk material can describe the energetics of nanoscale systems with reasonable accuracy.
In this work, we also analyze the electronic properties of nanometer-size nanoparticles made of gold, platinum, silver, and gadolinium; nanoparticles composed of these materials are of current interest for radiation therapy applications. We focus on the production of low-energy electrons with kinetic energies from a few electronvolts to several tens of electronvolts. It is now established that low-energy secondary electrons of such energies play an important role in the nanoscale mechanisms of biological damage resulting from ionizing radiation. We provide a methodology for analyzing the dynamic response of nanoparticles of experimentally relevant sizes, namely of about several nanometers, exposed to ionizing radiation. Because of the large number of constituent atoms (about 1,000-10,000) and the consequently high computational cost, the electronic properties of such systems can hardly be described by ab initio methods based on a quantum-mechanical treatment of electrons, and the analysis must rely on model approaches. By comparing the response of smaller systems (of about 1 nm size) calculated within the ab initio and the model frameworks, we validate this methodology and make predictions for the electron production in larger systems.
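The collective excitations behind this response can be estimated, at the crudest level, in the free-electron (jellium) picture, where the dipole surface-plasmon (Mie) energy of a metal sphere is ħω_p/√3. A quick estimate for a gold-like conduction-electron density; note that interband d-electron screening, which shifts the real resonance considerably, is neglected here:

```python
import numpy as np

# Physical constants (SI)
e, m_e, eps0, hbar = 1.602e-19, 9.109e-31, 8.854e-12, 1.055e-34

def mie_energy_eV(n_e):
    """Free-electron (jellium) estimate of the dipole surface-plasmon
    energy of a metal sphere, hbar * omega_p / sqrt(3), in eV.
    Interband screening is neglected."""
    omega_p = np.sqrt(n_e * e**2 / (eps0 * m_e))   # bulk plasma frequency
    return hbar * omega_p / np.sqrt(3) / e

E_mie = mie_energy_eV(5.9e28)   # conduction-electron density of gold, m^-3
```

The estimate lands around 5 eV, i.e. squarely in the low-energy-electron window discussed above, which is why plasmon decay is an efficient source of such electrons.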
We have revealed that a significant increase in the number of low-energy electrons emitted from nanometer-size noble metal nanoparticles arises from collective electron excitations formed in these systems. It is demonstrated that the dominant mechanisms of electron yield enhancement are related to the formation of plasmons excited in the whole system and of atomic giant resonances formed due to the excitation of valence d electrons in individual atoms of a nanoparticle. Embedded in a biological medium, noble metal nanoparticles thus represent an important source of low-energy electrons, able to produce significant irreparable damage in biological systems.
A general methodology for studying the electronic properties of nanosystems is used to make quantitative predictions for electron production by non-metal nanoparticles. The analysis illustrates that, due to a prominent collective response to an external electric field, carbon nanoparticles embedded in a biological medium also enhance the production of low-energy electrons. The number of low-energy electrons emitted from carbon nanoparticles is shown to be several times higher than in the case of liquid water.

Defossilisation of the energy system is crucial in the face of the impending risks of climate change. Electricity generation by burning fossil fuels is being displaced by renewable energy sources like hydro, wind and solar, driven by support schemes and by falling costs from technological advances as well as manufacturing scale effects. The unavoidable shift from flexibly dispatchable generation to weather-dependent, spatio-temporally varying generators turns the generation and distribution of electricity into highly interdependent complex systems spanning multiple dimensions and disciplines:
In time, different scales, stretching from intra-day, diurnal and synoptic to seasonal oscillations of the weather, interact with the years and decades needed for the planning and construction of capacity. In space, long-range correlations and local variations of weather systems, as well as local bottlenecks in transmission networks, shape the solutions. Investment decisions about the technological mix and the spatial distribution of capacity follow economic principles, within restrictions which adapt in social feedback loops to public opinion and lobbying influences.
In this work, a family of self-consistent models is developed which map the physical steady-state operation, capacity investments and exogenous restrictions of a European electricity system in higher simultaneous spatial and temporal detail, as well as scope, than has previously been computationally tractable. Increasing the spatial detail of the renewable resources and co-optimizing the expansion of only a few transmission lines reveals solutions that serve the European electricity demand at about today's electricity cost with only 5% of its carbon-dioxide emissions; importantly, their electricity mix differs from the findings at low spatial resolution.
As important intermediate steps,
• new algorithms for the convex optimization of electricity system infrastructure are derived from graph-theoretic decompositions of network flows. Only these enable the investigation of model detail beyond previous computational limitations.
• a comprehensive European electricity network model down to individual substations at the transmission voltage levels is built by combining and completing data from freely available sources.
• a network reduction technique is developed to approximate the detailed model at a sequence of spatial resolutions to investigate the role of spatial scale, and identify a level of spatial resolution which captures all relevant detail, but is still computationally tractable.
• a method to trace the flow of power through the network, which is related to a vector diffusion process on a directed flow graph embedded in a network, is used to analyse the resulting technology mix and its interactions with the power network.
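The network calculations underlying these steps rest on linearised (DC) load flow, where flows follow from a graph Laplacian built out of the node-line incidence matrix. A minimal three-node sketch (toy network, not the thesis model):

```python
import numpy as np

# Toy 3-node ring: lines (0-1), (1-2), (0-2), each with unit susceptance.
K = np.array([[ 1,  0,  1],    # node-line incidence matrix
              [-1,  1,  0],
              [ 0, -1, -1]], dtype=float)
B = np.eye(3)                  # diagonal matrix of line susceptances
L = K @ B @ K.T                # weighted graph Laplacian

p = np.array([1.0, 0.5, -1.5])     # nodal injections (must sum to zero)
theta = np.linalg.pinv(L) @ p      # voltage angles (pseudo-inverse removes
                                   # the slack degree of freedom)
flows = B @ K.T @ theta            # resulting DC line flows
```

The pseudo-inverse handles the singularity of the Laplacian (angles are defined up to a constant); by construction the flows satisfy the nodal balance K @ flows = p, and decompositions of exactly this structure are what the graph-theoretic algorithms above exploit.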
The open-source nature of the model and the restriction to freely available data encourage an accessible and transparent discussion about the future European electricity system, primarily based on renewable wind and solar resources.

Nanotechnology is a rapidly developing branch of science focused on the study of phenomena at the nanometer scale, in particular those related to the possibilities of manipulating matter. One of the main goals of nanotechnology is the development of controlled, reproducible, and industrially transferable nanostructured materials.
The conventional technique of thin-film growth by deposition of atoms, small atomic clusters and molecules on surfaces is the general method often used in nanotechnology for the production of new materials. Recent experiments show that patterns with different morphologies can be formed in the course of the deposition of nanoparticles on a surface. In this context, predicting the final architecture of the growing material is a fundamental problem worth studying.
Another factor which plays an important role in industrial applications of new materials is the post-growth stability of the deposited structures. Understanding the post-growth relaxation processes would make it possible to estimate the lifetime of the deposited material depending on the conditions under which it was fabricated. Controlled post-growth manipulation of the architecture of deposited structures opens a new path for the engineering of nanostructured materials.
The task of this thesis is to advance the understanding of the mechanisms of formation and post-growth evolution of nanostructured materials fabricated by the deposition of atomic clusters on a surface. In order to achieve this goal, the following main problems were addressed:
1. The properties of isolated clusters can differ significantly from those of analogous clusters on a solid surface. The difference is caused by the interaction between the cluster and the solid. Therefore, understanding the structural and dynamical properties of an atomic cluster on a surface is of intense interest from the scientific and technological points of view. In the thesis, the stability, energy, and geometry of an atomic cluster on a solid surface were studied using a liquid-drop approach which takes the cluster-solid interaction into account. The geometries of the deposited clusters are compared with those of isolated clusters and the differences are discussed.
2. The formation scenarios of patterns on a surface in the course of cluster deposition depend strongly on the dynamics of the deposited clusters. Therefore, an important step towards predicting pattern morphology is to study the dynamics of a single cluster on a surface. The process of cluster diffusion on a surface was modeled with the classical molecular dynamics technique, and the diffusion coefficients for silver nanoclusters were obtained from the analysis of the trajectories of the clusters. The dependence of the diffusion coefficient on the system's temperature and on the cluster-surface interaction was established. The results of the calculations are compared with the available experimental results for the diffusion coefficient of silver clusters on a graphite surface.
3. The methods of classical molecular dynamics cannot be used for modeling the self-assembly of atomic clusters on a surface, because these processes occur on the timescale of minutes, which would require unachievable computational resources. Based on the results of molecular dynamics simulations for a single cluster on a surface, a Monte Carlo based approach has been developed to describe the dynamics of the self-assembly of nanoparticles on a surface. This method accounts for free particle diffusion on a surface, aggregation into islands and detachment from these islands. The developed method allows the study of pattern formation in structures up to thousands of nanometers in size, as well as of the stability of these structures. The method was implemented in the MBN Explorer computer package.
4. The process of pattern formation on a surface was modeled for several different scenarios. Based on the analysis of the simulation results, a criterion was suggested which can be used to distinguish between different patterns formed on a surface, for example between fractals and compact islands. This criterion can be used to predict the final morphology of a growing structure.
5. The post-growth evolution of patterns on a surface was also analyzed. In particular, attention in the thesis is paid to a systematic theoretical analysis of the post-growth processes occurring in nanofractals on a surface. The time evolution of the fractal morphology in the course of the post-growth relaxation was analyzed, and the results of these calculations were compared with the experimental data available for the post-growth relaxation of silver cluster fractals on a graphite substrate.
All the aforementioned problems are discussed in detail in the thesis.
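As an illustration of the trajectory analysis in point 2: the diffusion coefficient follows from the mean-square displacement, which grows as ⟨r²(t)⟩ = 4Dt for two-dimensional diffusion on a surface. A sketch with a synthetic random walk standing in for an MD trajectory (illustrative data, not the thesis simulations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D random walk standing in for a cluster trajectory on a surface.
n_steps, dt, step = 20000, 1.0, 0.1
traj = np.cumsum(rng.normal(0.0, step, size=(n_steps, 2)), axis=0)

# Mean-square displacement versus lag time, averaged over time origins.
lags = np.arange(1, 101)
msd = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag])**2, axis=1))
                for lag in lags])

# <r^2> = 4 D t in 2-D: weighted least-squares fit through the origin.
t_lag = lags * dt
D = np.sum(msd * t_lag) / (4.0 * np.sum(t_lag**2))
```

For this walk the true coefficient is D = 0.005 (per-axis step variance 0.01 over unit time), and the fit recovers it to within a few percent; with real MD data the linear regime must first be identified before fitting.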