Robotic gesture recognition
(1998)
Robots of the future should communicate with humans in a natural way. We are especially interested in vision-based gesture interfaces. In the context of robotics several constraints exist, which make the task of gesture recognition particularly challenging. We discuss these constraints and report on progress being made in our lab in the development of techniques for building robust gesture interfaces which can handle these constraints. In an example application, the techniques are shown to be easily combined to build a gesture interface for a real robot grasping objects on a table in front of it.
The study of hidden charm production is an important part of the heavy ion program. The standard approach to this problem [1] assumes that c¯c bound states are created only at the initial stage of the reaction and then partially destroyed at later stages due to interactions with the medium [2, 3, 4].
The binding problem is regarded as one of today's key questions about brain function. Several solutions have been proposed, yet the issue is still controversial. The goal of this article is twofold. Firstly, we propose a new experimental paradigm requiring feature binding, the "delayed binding response task". Secondly, we propose a binding mechanism employing fast reversible synaptic plasticity to express the binding between concepts. We discuss the experimental predictions of our model for the delayed binding response task.
Nuclear collisions at intermediate, relativistic, and ultra-relativistic energies offer unique opportunities to study in detail manifold fragmentation and clustering phenomena in dense nuclear matter. At intermediate energies, the well known processes of nuclear multifragmentation -- the disintegration of bulk nuclear matter into clusters of a wide range of sizes and masses -- allow the study of the critical point of the equation of state of nuclear matter. At very high energies, ultra-relativistic heavy-ion collisions offer a glimpse of the substructure of hadronic matter by crossing the phase boundary to the quark-gluon plasma. The hadronization of the quark-gluon plasma created in the fireball of an ultra-relativistic heavy-ion collision can be considered, again, as a clustering process. We present two models which allow the simulation of nuclear multifragmentation and of hadronization via the formation of clusters in an interacting gas of quarks, and discuss the importance of clustering to our understanding of hadronization in ultra-relativistic heavy-ion collisions.
Entropy production in the compression stage of heavy ion collisions is discussed within three distinct macroscopic models (i.e. generalized RHTA, geometrical overlap model and three-fluid hydrodynamics). We find that within these models \sim 80% or more of the experimentally observed final-state entropy is created in the early stage. It is thus likely followed by a nearly isentropic expansion. We employ an equation of state with a first-order phase transition. For low net baryon density, the entropy density exhibits a jump at the phase boundary. However, the excitation function of the specific entropy per net baryon, S/A, does not reflect this jump. This is due to the fact that for final states (of the compression) in the mixed phase, the baryon density \rho_B increases with \sqrt{s}, but not the temperature T. Calculations within the three-fluid model show that a large fraction of the entropy is produced by nuclear shockwaves in the projectile and target. With increasing beam energy, this fraction of S/A decreases. At \sqrt{s}=20 AGeV it is on the order of the entropy of the newly produced particles around midrapidity. Hadron ratios are calculated for the entropy values produced initially at beam energies from 2 to 200 AGeV.
Impact parameter dependencies in Pb(160 AGeV)+Pb reactions: hydrodynamical vs. cascade calculations
(1999)
We investigate the impact parameter dependence of the specific entropy S/A in relativistic heavy ion collisions. In particular, the anti-Lambda/antiproton ratio is found to be a useful tool to distinguish between the chemical equilibrium assumed in hydrodynamics (here: the 3-fluid model) and the chemical non-equilibrium scenario of microscopic models such as UrQMD.
Entropy production in the initial compression stage of relativistic heavy-ion collisions from AGS to SPS energies is calculated within a three-fluid hydrodynamical model. The entropy per participating net baryon is found to increase smoothly and does not exhibit a jump or a plateau as in the 1-dimensional one-fluid shock model. Therefore, the excess of pions per participating net baryon in nucleus-nucleus collisions as compared to proton-proton reactions also increases smoothly with beam energy.
The isospin and strangeness dimensions of the Equation of State are explored. RIA and the SIS200 accelerator at GSI will make it possible to explore these regions in compressed baryonic matter. 132Sn + 132Sn and 100Sn + 100Sn collisions, as well as the excitation functions of K/pi and Lambda/pi and the centrality dependence of charmonium suppression from the UrQMD and HSD transport models, are presented and compared to data. Unambiguous proof for the creation of a 'novel phase of matter' from strangeness and charm yields is not in sight.
We study Mach shocks generated by fast partonic jets propagating through a deconfined strongly-interacting matter. Our main goal is to take into account different types of collective motion during the formation and evolution of this matter. We predict a significant deformation of Mach shocks in central Au+Au collisions at RHIC and LHC energies as compared to the case of jet propagation in a static medium. The observed broadening of the near-side two-particle correlations in pseudorapidity space is explained by the Bjorken-like longitudinal expansion. Three-particle correlation measurements are proposed for a more detailed study of the Mach shock waves.
Nonequilibrium models (three-fluid hydrodynamics, UrQMD, and quark molecular dynamics) are used to discuss the uniqueness of often proposed experimental signatures for quark matter formation in relativistic heavy ion collisions from the SPS via RHIC to LHC. It is demonstrated that these models - although they do treat the most interesting early phase of the collisions quite differently (thermalizing QGP vs. coherent color fields with virtual particles) -- all yield a reasonable agreement with a large variety of the available heavy ion data. Hadron/hyperon yields, including J/Psi meson production/suppression, strange matter formation, dileptons, and directed flow (bounce-off and squeeze-out) are investigated. Observations of interesting phenomena in dense matter are reported. However, we emphasize the need for systematic future measurements to search for simultaneous irregularities in the excitation functions of several observables in order to come close to pinning the properties of hot, dense QCD matter from data. The role of future experiments with the STAR and ALICE detectors is pointed out.
We study the effects of the isovector-scalar meson delta on the equation of state (EOS) of neutron star matter in strong magnetic fields. The EOS of neutron-star matter and the nucleon effective masses are calculated in the framework of Lagrangian field theory, solved within the mean-field approximation. The numerical results show that the delta field leads to a remarkable splitting of the proton and neutron effective masses. The strength of the delta field decreases with increasing magnetic field and becomes small at ultrastrong fields. The proton effective mass is strongly influenced by magnetic fields, while the effect on the neutron effective mass is negligible. After including the delta field, the EOS turns out to be stiffer at B < 10^15 G but becomes softer at stronger magnetic fields. The AMM terms affect the system only at ultrastrong magnetic fields (B > 10^19 G). In the range 10^15 G - 10^18 G the properties of neutron-star matter are found to be similar to those without magnetic fields.
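The mass splitting described above follows directly from the structure of the mean-field Lagrangian: the isovector-scalar delta field couples to protons and neutrons with opposite signs. In the standard relativistic mean-field form (the sign conventions here are the common textbook choice and may differ from the ones used in this work):

```latex
m^*_p = m_N - g_\sigma \sigma - g_\delta \delta^{(3)}, \qquad
m^*_n = m_N - g_\sigma \sigma + g_\delta \delta^{(3)}
```

so the splitting m*_n - m*_p = 2 g_delta delta^(3) is driven entirely by the third isospin component of the delta field and vanishes in isospin-symmetric matter.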
There is currently strong interest in the physics of high-intensity, high-energy beams: intense proton or deuteron beams are required in various fields of science and industry, including sources of neutrons for research experiments and material processing, nuclear physics experiments, tritium production and nuclear waste transmutation. High-current heavy ion beams are envisaged for power production facilities (inertial fusion). Several projects presently under study are based on rf linacs as driver, sometimes followed by accumulation and/or compressor rings [Acc98]. The critical issue for all of them is operation in a low-loss regime, because of activation problems in the structure. For this reason careful investigations have to be performed in order to understand and control the beam behaviour, aiming at conserving the beam quality, reducing emittance growth and filamentation, and avoiding the formation of halo. The beam current that can be accelerated is in practice limited by the amount of beam losses, which depends on the beam halo: in order to reduce induced radioactivity and to allow for hands-on maintenance, losses of <1 W/m are normally considered acceptable [Sto96]. One of the major facilities under study is the European Spallation Source (ESS), a project based on an H- linac accelerating a 107 mA peak current beam (360 ns pulse in the DTL) and on two compressor rings, producing 5 MW average beam power [ESS]. The USA is also developing a proposal for a Spallation Neutron Source (SNS), providing a short-pulse H- beam with an average power of 1-2 MW; a 30 mA linac is required [SNS]. The Accelerator for Production of Tritium (APT), studied at Los Alamos, requires a 100 mA proton beam current (cw) to produce a power of 130-170 MW [APT]. A similar but smaller accelerator (40 mA, 40 MW beam power) would serve as driver for the Accelerator Driven Transmutation of Waste (ADTW) system [ATW].
The accelerator system for the International Fusion Material Irradiation Facility (IFMIF) will test the behaviour of materials to be used for magnetic fusion (e.g. ITER); it consists of two parallel 125 mA deuteron beams, generating a fusion-like neutron spectrum with 10 MW cw [IFM]. In the field of heavy ions, scientists have been working for about 20 years on inertial confinement fusion, as an alternative to magnetic confinement, to find a practical and cleaner method of producing energy. Nuclear fusion occurs when the nuclei of lighter elements (in a state of matter called "plasma") merge to form heavier elements; the extremely high temperatures and densities needed to make the nuclei collide in the proper way and release large amounts of energy are obtained in a small "pellet" of fusion fuel, which receives energy from laser or ion beams and implodes; its inertia compresses the plasma and holds it together until it reaches ignition. Both laser and accelerator facilities have been investigated as drivers: a demonstration of ignition at low gain is more easily accessible with lasers, whereas the intrinsic properties of accelerators (efficiency and repetition rate) will be essential for a medium-gain power plant. One study of a fusion power system driven by heavy ion beams (HIBALL) was completed in Europe as early as 1982 [Bad81]. When the USA declassified essential information on pellet design, "indirect drive" targets began to be considered openly, in which the pellet is hit by X-rays generated from laser or ion beams rather than directly by the beams. The main progress of recent years has been in the understanding of pellet dynamics after ignition, i.e. in plasma physics [Sym1][Sym2][Sym3][Bas97][Lut97], imposing new requirements on the layout of the driver accelerator facilities.
In 1994-95 Frankfurt University and several other European laboratories (led by GSI) started a new collaboration called HIDIF (Heavy Ion Driven Ignition Facility) in order to simplify the accelerator plant design, exploiting the new technique of indirectly driven targets and some technological improvements. The first studies were oriented towards the conceptual goal of a facility providing just enough beam energy for the ignition of fusion reactions at very low gain (a "proof of principle") [Hof98]. In a recent phase of the study, it was realized that the proposed concept would make this scheme a more appropriate choice for energy production than for ignition; the acronym HIDIF was therefore reinterpreted as Heavy Ion Driven Inertial Fusion, and the parameters are being modified accordingly [Hof96][Hof97][Hof98]. The scenario presently discussed by this group proposes the formation and acceleration of an intense beam (400 mA) of singly charged heavy ions of three different atomic species, with mass differences of about 10% (the reference species is 209Bi+), in a main rf linac; they are then injected into storage rings at an energy of 50 MeV/u, bunched in induction linacs and finally transported to a target with different velocities in such a way that the three species merge on the pellet ("telescoping") at 500 TW peak power. In this thesis the main linac of the HIDIF proposal is extensively investigated as an example of a high-intensity heavy ion linac. Results are presented from numerical simulations of multi-particle beam dynamics carried out for the first time in this context. After a short presentation of the HIDIF reference scenario (Ignition Facility), including a discussion of the motivation for a high-current heavy ion linac, some elements of the theory of beam transport and acceleration are recalled [Con91][Hof82][Kap85][Lap87][Law88][Mit78][Rei94][Str83].
The simulation programs used are then described, and a particle dynamics layout of a conventional 200 MHz Alvarez DTL is discussed with respect to low emittance growth at high transmission, including large space-charge effects and taking into account the influence of different kinds of statistical errors and of input mismatch on the beam dynamics. The modifications needed for "telescoping" are investigated with simulations for the nominal mass difference (10%) and for a smaller one (5%); finally the transfer line between DTL and rings is discussed and studied both analytically and by numerical calculations. The large mass number (A = 209) helps to reduce the space-charge effects with respect to protons, so the behaviour of the beam is not space-charge dominated. Nevertheless the tune depression values (similar, e.g., to those of the ESS linac) indicate that these effects cannot be neglected. For a linac with a low duty cycle, as in the case of an ignition facility, the results from particle dynamics calculations can be considered a reliable guideline for the DTL layout, since they indicate that such a high-intensity linac can fulfil the requirements of smooth beam behaviour and low losses.
The D-meson spectral density at finite temperature is obtained within a self-consistent coupled-channel approach. For the bare meson-baryon interaction, a separable potential is taken, whose parameters are fixed by the position and width of the Lambda_c(2593) resonance. The quasiparticle peak stays close to the free D-meson mass, indicating a small change in the effective mass at finite density and temperature. However, the considerable width of the spectral density implies physics beyond the quasiparticle approach. Our results indicate that the medium modifications of the D-mesons in nucleus-nucleus collisions at FAIR (GSI) will predominantly affect the width and not, as previously expected, the mass.
We study properties of compact stars with the deconfinement phase transition in their interiors. The equation of state of cold baryon-rich matter is constructed by combining a relativistic mean-field model for the hadronic phase and the MIT Bag model for the deconfined phase. In a narrow parameter range two sequences of compact stars (twin stars), which differ by the size of the quark core, have been found. We demonstrate the possibility of a rapid transition between the twin stars with an energy release of about 10^52 erg. This transition should be accompanied by a prompt neutrino burst and a delayed gamma-ray burst.
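The quoted energy release of about 10^52 erg can be checked with a back-of-envelope estimate: a transition that deepens the gravitational binding of the star by a fraction of a percent of a solar rest mass already reaches this scale. The sketch below assumes a mass-energy difference of 0.005 solar masses purely for illustration; this number is not taken from the paper.

```python
# Back-of-envelope estimate of the energy released in a transition
# between twin-star configurations. The assumed mass-energy
# difference (0.005 solar masses) is an illustrative assumption.
M_SUN_G = 1.989e33        # solar mass in grams
C_CM_S = 2.998e10         # speed of light in cm/s

delta_m = 0.005 * M_SUN_G             # assumed binding-energy gain, in grams
delta_e_erg = delta_m * C_CM_S**2     # E = m c^2, in erg

print(f"Energy release ~ {delta_e_erg:.1e} erg")  # of order 10^52 erg
```

The estimate shows that even a sub-percent rearrangement of the stellar mass-energy budget suffices to power bursts at the quoted scale.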
Potential energy surfaces are calculated using the most advanced asymmetric two-center shell model, which yields shell and pairing corrections that are added to the Yukawa-plus-exponential model deformation energy. Shell effects are of crucial importance for the experimental observation of spontaneous disintegration by heavy ion emission. Results for 222Ra, 232U, 236Pu and 242Cm illustrate the main ideas and show, for the first time for a cluster emitter, a potential barrier obtained using the macroscopic-microscopic method.
The wave function of a spheroidal harmonic oscillator without spin-orbit interaction is expressed in terms of associated Laguerre and Hermite polynomials. The pairing gap and Fermi energy are found by solving the BCS system of two equations. Analytical relationships for the matrix elements of inertia are obtained as functions of the main quantum numbers and the potential derivative. They may be used to test the complex computer codes one has to develop for a realistic treatment of fission dynamics. The results, given for the 240Pu nucleus, are compared with a hydrodynamical model. The importance of taking into account the correction term due to the variation of the occupation number is stressed.
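The "BCS system of two equations" referred to above is, in its standard textbook form, the coupled gap and particle-number equations determining the pairing gap Delta and the Fermi energy lambda (the conventions shown here are the usual ones and may differ in detail from those of the paper):

```latex
\frac{2}{G} = \sum_k \frac{1}{\sqrt{(\epsilon_k-\lambda)^2+\Delta^2}}, \qquad
N = \sum_k \left[ 1 - \frac{\epsilon_k-\lambda}{\sqrt{(\epsilon_k-\lambda)^2+\Delta^2}} \right]
```

with quasiparticle energies E_k = sqrt((epsilon_k - lambda)^2 + Delta^2) and occupation numbers v_k^2 = (1/2)[1 - (epsilon_k - lambda)/E_k], whose variation with deformation gives rise to the correction term stressed in the last sentence.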
Complex fission phenomena
(2004)
Complex fission phenomena are studied in a unified way. Very general reflection-asymmetric equilibrium (saddle point) nuclear shapes are obtained by solving an integro-differential equation without the need to specify a particular shape parametrization. The mass asymmetry in binary cold fission of Th and U isotopes is explained as the result of adding a phenomenological shell correction to the liquid drop model deformation energy. Applications to binary, ternary, and quaternary fission are outlined.
We developed a three-center phenomenological model able to explain qualitatively the recently obtained experimental results concerning the quasimolecular stage of a light-particle-accompanied fission process. It was derived from the liquid drop model under the assumption that the aligned configuration, with the emitted particle between the light and heavy fragments, is reached by increasing continuously the separation distance, while the radii of the heavy fragment and of the light particle are kept constant. In this way a new minimum appears in the deformation energy at a separation distance very close to the touching point. This minimum allows the existence of a short-lived quasimolecular state, decaying into the three final fragments. The influence of shell effects is discussed. The half-lives of some quasimolecular states which could be formed in the $^{10}$Be and $^{12}$C accompanied fission of $^{252}$Cf are roughly estimated to be on the order of 1 ns and 1 ms, respectively.
A three-center phenomenological model able to explain, at least qualitatively, the difference between the observed yield of particle-accompanied fission and that of binary fission was developed. It is derived from the liquid drop model under the assumption that the aligned configuration, with the emitted particle between the light and heavy fragments, is obtained by increasing continuously the separation distance, while the radii of the light fragment and of the light particle are kept constant. During the first stage of the deformation one has a two-center evolution until the neck radius becomes equal to the radius of the emitted particle. Then the third center starts developing, the two tip distances decreasing by the same amount. In this way a second minimum, typical for a cluster molecule, appears in the deformation energy. Examples are presented for the $^{240}$Pu parent nucleus emitting $\alpha$-particles and $^{14}$C in a ternary process.
A very general saddle point nuclear shape may be found as a solution of an integro-differential equation without assuming any shape parametrization a priori. By introducing phenomenological shell corrections one obtains minima of the deformation energy for binary fission of parent nuclei at finite (non-zero) mass asymmetry. Results are presented for reflection-asymmetric saddle point shapes of even-mass thorium and uranium isotopes with A = 226-238 and A = 230-238, respectively.
While science claims to be universal, the notion of universality actually covers two very different facets: on the one hand, it refers to the universal value of the epistemological claims of science while, on the other hand, it addresses the issue of how fully the process of scientific communication is presently globalized. How the issue of open access crosses that of the globalization of scientific communication will be the theme of this presentation. The conclusion will be that, without open access, the globalization of scientific communication will lead to increased knowledge and digital divisions.
In this increasingly complex world of learned information delivery and discovery - is it possible that the "free lunch" the Publishing world worries about could come true? Although Open Access and Institutional Repositories have not (yet) created the "scorched earth" effect many were predicting, they are slowly and inevitably gaining momentum. Broader access to top-level information via Google (and others) does indeed appear to be "good enough" for many in their search for content. But you rarely get food for free in a good quality restaurant. You pay for the selection, preparation, speed and expertise of the delivery. At the soup kitchen the food can often be filling - but the queue will be long, the wait even longer and there is no chance of silver service or à la carte. If you are unfortunate enough to have little choice then this may be a great solution. Others will be willing to pay for a more satisfactory meal. As in all aspects of life, diversification and specialisation are fundamental forces. The publishing community in the years to come will continue to develop its offerings for a variety of needs that require more than just broth. To stretch the analogy, the ongoing presence of tap water in our lives has done little to halt the extraordinary rise of bottled water as part of our staple diet. Business reality will continue to settle these types of debate; my bet is that the commercial publishers see a role as providing information that commands an intrinsic value proposition to enough customers to remain economically viable for some time to come. Inspired by the comments and ideas expounded by Dr. James O'Donnell of Georgetown University on the liblicense listserv on 20th July this year, this paper will look to expand on the analogy and identify the good, the bad - but importantly the difference in information quality and access that will result in the radically changed (but still co-existent) information landscape of tomorrow.
The economical and organizational debates about open access have mostly been concerned with journals. This is not surprising since the open access movement can be seen largely as a response to the serials crisis. Recently the open access debate has been extended to include access to government produced data in different forms. In this presentation I'll critically look at some economic and organizational issues pertaining to the open access provision of bibliographical data.
In keeping with the views of its guru, Stevan Harnad, the open access movement is only prepared to discuss the two models of the "green road" and the "golden road" as sole alternatives for the future of scientific publishing. The "golden road" is put forward as the royal road for solving the journals crisis. However, no one has drawn attention to the fact that the golden road represents a purely socialist solution to a free-market problem and thus continues the "samizdat" tradition of underground literature in the former Eastern bloc. The present paper reveals the alarmingly low level at which the open access movement intends to publish top-class results from science and research, and the low degree of professionalism with which they are satisfied.
The talk was given at the 5th Frankfurt Scientific Symposium (22-23 October 2005). Viewing the video is (unfortunately) only possible with Internet Explorer 5.0 or later, Netscape Navigator 7.0 or later, or Internet Explorer 5.2.2 or later for Mac (see document 1.html). All conference contributions are available at http://publikationen.ub.uni-frankfurt.de/volltexte/2005/1992/.
A novel mechanism of H0 and strangelet production in hadronic interactions within the Gribov-Regge approach is presented. In contrast to traditional distillation approaches, here the production of multiple (strange) quark bags does not require large baryon densities or a QGP. The production cross section increases with center of mass energy. Rapidity and transverse momentum distributions of the H0 are predicted for pp collisions at E_lab = 160 AGeV (SPS) and \sqrt s = 200 AGeV (RHIC). The predicted total H0 multiplicities are of the order of the Omega-baryon yield and can be accessed by the NA49 and STAR experiments.
We apply a microcanonical statistical model to investigate hadron production in pp collisions. The parameters of the model are the energy E and the volume V of the system, which we determine by fitting the average multiplicities of charged pions, protons and antiprotons in pp collisions at different collision energies. We then make predictions of the mean multiplicities and mean transverse momenta of all identified hadrons. Our predictions for nonstrange hadrons are in good agreement with the data, as are the mean transverse momenta of strange hadrons. However, the mean multiplicities of strange hadrons are overpredicted. This agrees with canonical and grand-canonical studies, where a strangeness suppression factor is needed. We also investigate the influence of event-by-event fluctuations of the E parameter.
A micro-canonical treatment is used to study particle production in pp collisions. First this micro-canonical treatment is compared to some canonical ones. Then proton, antiproton and pion 4pi multiplicities from proton-proton collisions at various center of mass energies are used to fix the micro-canonical parameters E and V. The dependence of the micro-canonical parameters on the collision energy is parameterised for further study of pp reactions with this micro-canonical treatment.
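The last step, parameterising the fitted (E, V) as functions of collision energy, can be sketched as a simple power-law fit on log-log axes. The data points below are made up for illustration only; the point is the fitting procedure, not the values.

```python
import numpy as np

# Hypothetical fitted microcanonical energies E (GeV) at several
# centre-of-mass energies sqrt(s) (GeV); the values are illustrative
# placeholders, not results from the paper.
sqrt_s = np.array([6.3, 12.0, 17.3, 27.4, 62.0])
E_fit = np.array([2.1, 3.6, 4.8, 6.9, 13.1])

# Fit E = a * sqrt(s)^b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(sqrt_s), np.log(E_fit), 1)
a = np.exp(log_a)
print(f"E(sqrt(s)) ~ {a:.2f} * sqrt(s)^{b:.2f}")
```

Once such a parametrisation is in hand, the micro-canonical treatment can be applied at any intermediate collision energy without refitting.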
The production of multiple strange baryons in pp interactions is studied. Here one can directly probe the microscopic decay of color flux tubes, making it possible to differentiate between different string models and a statistical description of hadronization. To analyse the different stages of a heavy ion collision, the time evolution of the elastic and inelastic collision rates in central Pb+Pb interactions is studied. The microscopic simulation supports the idea of separated phases (non-equilibrium -> chemical freeze-out -> kinetic freeze-out) in the evolution of the system. The spectra and abundances of Lambda(1520), K0(892) and other resonances are used to study the break-up dynamics of the source between chemical and thermal freeze-out.
We show that an unambiguous way of determining the universal limiting fragmentation region is to consider the derivative d^2n/d eta^2 of the pseudo-rapidity distribution per participant pair. In addition, we find that the transition region between the fragmentation and central plateau regions exhibits a second kind of universal behavior that is only apparent in d^2n/d eta^2. The sqrt(s) dependence of the height of the central plateau, (dn/d eta)_{eta=0}, and the total charged particle multiplicity n_total critically depend on the behavior of this universal transition curve. Analyzing available RHIC data, we show that (dn/d eta)_{eta=0} can be bounded by ln^2 s and n_total by ln^3 s. We also show that the deuteron-gold data from RHIC have exactly the same features as the gold-gold data, indicating that these universal behaviors are a feature of the initial-state parton-nucleus interactions and not a consequence of final-state interactions. Predictions for LHC energy are also given.
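Extracting d^2n/d eta^2 from a measured dn/d eta is a straightforward numerical differentiation. The sketch below uses a synthetic plateau-plus-tails distribution (a difference of hyperbolic tangents, an assumption made here only for illustration) and differentiates it twice on a uniform grid, showing that the structure of the second derivative is concentrated in the fragmentation/transition regions rather than on the plateau.

```python
import numpy as np

# Synthetic pseudorapidity distribution: a flat central plateau with
# fragmentation tails, modelled (illustratively) as a difference of
# tanh edges located near |eta| = y_beam.
eta = np.linspace(-8.0, 8.0, 1601)
y_beam = 5.0
dn_deta = 0.5 * (np.tanh(eta + y_beam) - np.tanh(eta - y_beam))

# Two successive numerical derivatives on the uniform grid.
d1 = np.gradient(dn_deta, eta)
d2 = np.gradient(d1, eta)

# Compare |d2| on the plateau with |d2| in the transition regions.
plateau = np.abs(eta) < 2.0
edges = np.abs(np.abs(eta) - y_beam) < 1.0
print(np.abs(d2[plateau]).max(), np.abs(d2[edges]).max())
```

The same finite-difference procedure applied to binned experimental dn/d eta data (with appropriate smoothing) isolates the transition curve discussed in the abstract.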
Yields, rapidity and transverse momentum spectra of Delta++(1232), Lambda(1520), Sigma+-(1385) and the meson resonances K0(892), Phi, rho0 and f0(980) are predicted. Hadronic rescattering leads to a suppression of reconstructable resonances, especially at low p_perp. A mass shift of the rho of 10 MeV is obtained from the microscopic simulation, due to late stage rho formation in the cooling pion gas.
Recent calculations applying statistical mechanics indicate that in a setting with compactified large extra dimensions a black hole might evolve into a (quasi-)stable state with mass close to the new fundamental scale M_f. Black holes and therefore their relics might be produced at the LHC in the case of extra-dimensional topologies. In this energy regime, Hawking's evaporation scenario is modified due to energy conservation and quantum effects. We reanalyse the evaporation of small black holes including the quantisation of the emitted radiation due to the finite surface of the black hole. It is found that observable stable black hole relics with masses ~ 1-3 M_f would form, which could be identified by a delayed single jet with a corresponding hard momentum kick to the relic and by ionisation, e.g. in a TPC.
String theory suggests the existence of a minimum length scale. An exciting quantum mechanical implication of this feature is a modification of the uncertainty principle. In contrast to the conventional approach, this generalised uncertainty principle does not allow one to resolve spacetime distances below the Planck length. In models with extra dimensions, which are also motivated by string theory, the Planck scale can be lowered to values accessible to ultra-high-energy cosmic rays (UHECRs) and to future colliders, i.e. M_f approximately equal to 1 TeV. It is demonstrated that in this novel scenario, short-distance physics below 1/M_f is completely cloaked by the uncertainty principle. Therefore, Planckian effects could be the final physics discovery at future colliders and in UHECRs. As an application, we predict the modifications to the e+ e- to f+ f- cross-sections.
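A common way to write the generalised uncertainty principle discussed here (conventions and the value of the parameter beta vary between papers, so this form is an illustration rather than necessarily the one used in this work) is:

```latex
\Delta x\, \Delta p \;\gtrsim\; \frac{\hbar}{2}\left[ 1 + \beta\, \frac{(\Delta p)^2}{M_f^2 c^2} \right]
\quad\Longrightarrow\quad
\Delta x_{\min} \sim \frac{\hbar \sqrt{\beta}}{M_f c}
```

The right-hand side is minimised at a finite Delta x, so no measurement can resolve distances below a scale of order 1/M_f in natural units; this is the "cloaking" of short-distance physics described above.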
Within the scenario of large extra dimensions, the Planck scale is lowered to values soon accessible. Among the predicted effects, the production of TeV mass black holes at the LHC is one of the most exciting possibilities. Though the final phases of the black hole’s evaporation are still unknown, the formation of a black hole remnant is a theoretically well motivated expectation. We analyze the observables emerging from a black hole evaporation with a remnant instead of a final decay. We show that the formation of a black hole remnant yields a signature which differs substantially from a final decay. We find the total transverse momentum of the black hole event to be significantly dominated by the presence of a remnant mass providing a strong experimental signature for black hole remnant formation.