Syntactic coindexing restrictions are by now known to be of central importance to practical anaphor resolution approaches. Since, in particular due to structural ambiguity, the assumption of the availability of a unique syntactic reading proves to be unrealistic, robust anaphor resolution relies on techniques to overcome this deficiency. In this paper, two approaches are presented which generalize the verification of coindexing constraints to deficient descriptions. First, a partly heuristic method is described, which has been implemented. Second, a provably complete method is specified. It provides the means to exploit the results of anaphor resolution for further structural disambiguation. By rendering possible a parallel processing model, this method exhibits, in a general sense, a higher degree of robustness. As a practically optimal solution, a combination of the two approaches is suggested.
An anaphor resolution algorithm is presented which relies on a combination of strategies for narrowing down and selecting from antecedent sets for reflexive pronouns, nonreflexive pronouns, and common nouns. The work focuses on syntactic restrictions which are derived from Chomsky's Binding Theory. It is discussed how these constraints can be incorporated adequately in an anaphor resolution algorithm. Moreover, by showing that pragmatic inferences may be necessary, the limits of syntactic restrictions are elucidated.
Coreference-Based Summarization and Question Answering: a Case for High Precision Anaphor Resolution
(2003)
Approaches to Text Summarization and Question Answering are known to benefit from the availability of coreference information. Based on an analysis of its contributions, a more detailed look at coreference processing for these applications will be proposed: it should be considered as a task of anaphor resolution rather than coreference resolution. It will be further argued that high precision approaches to anaphor resolution optimally match the specific requirements. Three such approaches will be described and empirically evaluated, and the implications for Text Summarization and Question Answering will be discussed.
Syntactic coindexing restrictions are by now known to be of central importance to practical anaphor resolution approaches. Since, in particular due to structural ambiguity, the assumption of the availability of a unique syntactic reading proves to be unrealistic, robust anaphor resolution relies on techniques to overcome this deficiency.
This paper describes the ROSANA approach, which generalizes the verification of coindexing restrictions in order to make it applicable to the deficient syntactic descriptions that are provided by a robust state-of-the-art parser. By a formal evaluation on two corpora that differ with respect to text genre and domain, it is shown that ROSANA achieves high-quality robust coreference resolution. Moreover, by an in-depth analysis, it is proven that the robust implementation of syntactic disjoint reference is nearly optimal. The study reveals that, compared with approaches that rely on shallow preprocessing, the largely nonheuristic disjoint reference algorithmization opens up the possibility for a slight improvement. Furthermore, it is shown that more significant gains are to be expected elsewhere, particularly from a text-genre-specific choice of preference strategies.
The performance study of the ROSANA system crucially rests on an enhanced evaluation methodology for coreference resolution systems, the development of which constitutes the second major contribution of the paper. As a supplement to the model-theoretic scoring scheme that was developed for the Message Understanding Conference (MUC) evaluations, additional evaluation measures are defined that, on the one hand, support the developer of anaphor resolution systems, and, on the other hand, shed light on application aspects of pronoun interpretation.
This paper is focused on the coordination of order and production policy between buyers and suppliers in supply chains. When a buyer and a supplier of an item work independently, the buyer will place orders based on his economic order quantity (EOQ). However, the buyer's EOQ may not lead to an optimal policy for the supplier. It can be shown that a cooperative batching policy can reduce total cost significantly. Should the buyer have the more powerful position to enforce his EOQ on the supplier, then no incentive exists for him to deviate from his EOQ in order to choose a cooperative batching policy. To provide an incentive to order in quantities suitable to the supplier, the supplier could offer a side payment. One critical assumption made throughout the literature dealing with incentive schemes to influence the buyer's ordering policy is that the supplier has complete information regarding the buyer's cost structure. However, this assumption is far from realistic. As a consequence, the buyer has no incentive to report truthfully on his cost structure. Moreover, there is an incentive to overstate the total relevant cost in order to obtain as high a side payment as possible. This paper provides a bargaining model with asymmetric information about the buyer's cost structure, assuming that the buyer has the bargaining power to enforce his EOQ on the supplier in case of a breakdown in negotiations. An algorithm for the determination of an optimal set of contracts, which are specifically designed for different cost structures of the buyer as assumed by the supplier, will be presented. This algorithm was implemented in a software application that supports the supplier in determining the optimal set of contracts.
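For orientation, the classical EOQ underlying the buyer's independent policy minimizes the sum of ordering and holding costs; a minimal sketch with made-up demand and cost figures (not taken from the paper):

```python
import math

def eoq(annual_demand, ordering_cost, holding_cost):
    """Classical economic order quantity: sqrt(2*D*S / H)."""
    return math.sqrt(2 * annual_demand * ordering_cost / holding_cost)

def total_relevant_cost(q, annual_demand, ordering_cost, holding_cost):
    """Ordering cost plus average inventory holding cost at lot size q."""
    return annual_demand / q * ordering_cost + q / 2 * holding_cost

# hypothetical figures: 1000 units/year demand, 50 per order, 2 per unit-year
q_star = eoq(1000, 50, 2)   # about 223.6 units
```

At q_star the two cost components balance, which is why a supplier who prefers a different lot size must offer a side payment to move the buyer away from it.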
This paper provides global terrestrial surface balances of nitrogen (N) at a resolution of 0.5 by 0.5 degree for the years 1961, 1995 and 2050 as simulated by the model WaterGAP-N. The terms livestock N excretion (Nanm), synthetic N fertilizer (Nfert), atmospheric N deposition (Ndep) and biological N fixation (Nfix) are considered as input, while N export by plant uptake (Nexp) and ammonia volatilization (Nvol) are taken into account as output terms. The different terms in the balance are compared to results of other global models and uncertainties are described. Total global surface N surplus increased from 161 Tg N yr-1 in 1961 to 230 Tg N yr-1 in 1995. Using assumptions for the scenario A1B of the Special Report on Emission Scenarios (SRES) of the Intergovernmental Panel on Climate Change (IPCC) as quantified by the IMAGE model, total global surface N surplus is estimated to be 229 Tg N yr-1 in 2050. However, the implementation of these scenario assumptions leads to negative surface balances in many agricultural areas on the globe, which indicates that the assumptions about N fertilizer use and crop production changes are not consistent. Recommendations are made on how to change the assumptions about N fertilizer use to obtain a more consistent scenario, which would lead to higher N surpluses in 2050 as compared to 1995.
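The surface balance described above is a simple bookkeeping identity over the named input and output terms; as a sketch (variable names follow the abstract, the cell values are made up):

```python
def n_surplus(nanm, nfert, ndep, nfix, nexp, nvol):
    """Surface N surplus = inputs (livestock excretion, fertilizer,
    deposition, fixation) minus outputs (plant-uptake export,
    ammonia volatilization). Negative values flag inconsistent inputs."""
    return (nanm + nfert + ndep + nfix) - (nexp + nvol)

# made-up grid-cell values in kg N per hectare and year
surplus = n_surplus(nanm=30, nfert=80, ndep=10, nfix=15, nexp=90, nvol=12)
```

A negative result of this function in an agricultural cell is exactly the inconsistency signal the abstract refers to: exports exceed all accounted inputs.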
The Land and Water Development Division of the Food and Agriculture Organization of the United Nations and the Johann Wolfgang Goethe University, Frankfurt am Main, Germany, are cooperating in the development of a global irrigation-mapping facility. This report describes an update of the Digital Global Map of Irrigated Areas for the continent of Asia. For this update, an inventory of subnational irrigation statistics for the continent was compiled. The reference year for the statistics is 2000. Adding up the irrigated areas per country as documented in the report gives a total of 188.5 million ha for the entire continent. The total number of subnational units used in the inventory is 4 428. In order to distribute the irrigation statistics per subnational unit, digital spatial data layers and printed maps were used. Irrigation maps were derived from project reports, irrigation subsector studies, and books related to irrigation and drainage. These maps were digitized and compared with satellite images of many regions. In areas without spatial information on irrigated areas, additional information was used to locate areas where irrigation is likely, such as land-cover and land-use maps that indicate agricultural areas or areas with crops that are usually grown under irrigation. Contents:
1. Working Report I: Generation of a map of administrative units compatible with statistics used to update the Digital Global Map of Irrigated Areas in Asia
2. Working Report II: The inventory of subnational irrigation statistics for the Asian part of the Digital Global Map of Irrigated Areas
3. Working Report III: Geospatial information used to locate irrigated areas within the subnational units in the Asian part of the Digital Global Map of Irrigated Areas
4. Working Report IV: Update of the Digital Global Map of Irrigated Areas in Asia, Results Maps
Pseudorandom function tribe ensembles based on one-way permutations: improvements and applications
(1999)
Pseudorandom function tribe ensembles are pseudorandom function ensembles that have an additional collision resistance property: almost all functions have disjoint ranges. We present an alternative to the construction of pseudorandom function tribe ensembles based on one-way permutations given by Canetti, Micciancio and Reingold [CMR98]. Our approach yields two different but related solutions: One construction is somewhat theoretic, but conceptually simple and therefore gives an easier proof that one-way permutations suffice to construct pseudorandom function tribe ensembles. The other, slightly more complicated solution provides a practical construction; it starts with an arbitrary pseudorandom function ensemble and assimilates the one-way permutation to this ensemble. Therefore, the second solution inherits important characteristics of the underlying pseudorandom function ensemble: it is almost as efficient, and if the starting pseudorandom function ensemble is efficiently invertible (given the secret key) then so is the derived tribe ensemble. We also show that the latter solution yields so-called committing private-key encryption schemes, i.e., where each ciphertext corresponds to exactly one plaintext independently of the choice of the secret key or the random bits used in the encryption process.
We introduce the relationship between incremental cryptography and memory checkers. We present an incremental message authentication scheme based on the XOR MACs which supports insertion, deletion and other single block operations. Our scheme takes only a constant number of pseudorandom function evaluations for each update step and produces smaller authentication codes than the tree scheme presented in [BGG95]. Furthermore, it is secure against message substitution attacks, where the adversary is allowed to tamper with messages before update steps, making it applicable to virus protection. From this scheme we derive memory checkers for data structures based on lists. Conversely, we use a lower bound for memory checkers to show that so-called message substitution detecting schemes produce signatures or authentication codes with size proportional to the message length.
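The incremental flavor of XOR-style MACs can be sketched as follows: the tag is an XOR of per-block PRF values, so replacing one block costs two PRF calls instead of recomputing everything. This is a simplified illustration (HMAC-SHA256 standing in for the PRF, no randomizer block), not the scheme of the paper, and it is not secure as written:

```python
import hmac, hashlib

def prf(key, data):
    """HMAC-SHA256 as a stand-in pseudorandom function."""
    return hmac.new(key, data, hashlib.sha256).digest()

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def mac(key, blocks):
    """Tag = XOR over PRF(index || block) for all blocks."""
    tag = bytes(32)
    for i, blk in enumerate(blocks):
        tag = xor_bytes(tag, prf(key, i.to_bytes(8, "big") + blk))
    return tag

def replace_block(key, tag, i, old, new):
    """Incremental update: XOR out the old contribution, XOR in the new."""
    tag = xor_bytes(tag, prf(key, i.to_bytes(8, "big") + old))
    return xor_bytes(tag, prf(key, i.to_bytes(8, "big") + new))
```

The point is the constant-cost update step; real XOR-MAC constructions additionally bind a counter or randomizer into the tag.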
A memory checker for a data structure provides a method to check that the output of the data structure operations is consistent with the input even if the data is stored on some insecure medium. In [8] we present a general solution for all data structures that are based on insert(i,v) and delete(j) commands. In particular this includes stacks, queues, deques (double-ended queues) and lists. Here, we describe more time- and space-efficient solutions for stacks, queues and deques. Each algorithm takes only a single evaluation of a pseudorandom-like function such as DES, or of a collision-free hash function such as MD5 or SHA, for each push/pop or enqueue/dequeue command, making our methods applicable to smart cards.
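One standard way such a stack checker can work is a keyed hash chain: the trusted side keeps only the current chain tag, values and previous tags live on the untrusted medium, and each pop re-derives and checks the tag. A minimal sketch of that idea (HMAC-SHA256 as the keyed function; an illustration, not the paper's construction):

```python
import hmac, hashlib, secrets

class CheckedStack:
    """Stack memory checker: one 32-byte tag in trusted memory,
    the stack contents themselves on an untrusted medium."""

    def __init__(self):
        self.key = secrets.token_bytes(32)   # small trusted memory
        self.tag = bytes(32)                 # chain tag for empty stack
        self.untrusted = []                  # insecure medium

    def _mac(self, value, prev_tag):
        return hmac.new(self.key, value + prev_tag, hashlib.sha256).digest()

    def push(self, value: bytes):
        # store value plus the previous tag untrusted; advance the chain
        self.untrusted.append((value, self.tag))
        self.tag = self._mac(value, self.tag)

    def pop(self) -> bytes:
        value, prev_tag = self.untrusted.pop()
        # one MAC evaluation per pop, as in the abstract's cost claim
        if not hmac.compare_digest(self._mac(value, prev_tag), self.tag):
            raise ValueError("memory checker: inconsistency detected")
        self.tag = prev_tag
        return value
```

Any modification of a stored value or stored previous tag breaks the chain and is caught on the next pop.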
We present efficient non-malleable commitment schemes based on standard assumptions such as RSA and Discrete-Log, and under the condition that the network provides publicly available RSA or Discrete-Log parameters generated by a trusted party. Our protocols require only three rounds and a few modular exponentiations. We also discuss the difference between the notion of non-malleable commitment schemes used by Dolev, Dwork and Naor [DDN00] and the one given by Di Crescenzo, Ishai and Ostrovsky [DIO98].
We address the problem of factoring a large composite number by lattice reduction algorithms. Schnorr has shown that under reasonable number-theoretic assumptions this problem can be reduced to a simultaneous Diophantine approximation problem. The latter in turn can be solved by finding sufficiently many l_1-short vectors in a suitably defined lattice. Using lattice basis reduction algorithms, Schnorr and Euchner applied Schnorr's reduction technique to 40-bit integers. Their implementation needed several hours to compute a 5% fraction of the solution, i.e., 6 out of the 125 congruences which are necessary to factorize the composite. In this report we describe a more efficient implementation using stronger lattice basis reduction techniques incorporating ideas of Schnorr, Hörner and Ritter. For 60-bit integers our algorithm yields a complete factorization in less than 3 hours.
Based on the quadratic residuosity assumption we present a non-interactive crypto-computing protocol for the greater-than function, i.e., a non-interactive procedure between two parties such that only the relation of the parties' inputs is revealed. In comparison to previous solutions our protocol reduces the number of modular multiplications significantly. We also discuss applications to conditional oblivious transfer, private bidding and the millionaires' problem.
We propose a new security measure for commitment protocols, called Universally Composable (UC) Commitment. The measure guarantees that commitment protocols behave like an "ideal commitment service," even when concurrently composed with an arbitrary set of protocols. This is a strong guarantee: it implies that security is maintained even when an unbounded number of copies of the scheme are running concurrently, it implies non-malleability (not only with respect to other copies of the same protocol but even with respect to other protocols), it provides resilience to selective decommitment, and more. Unfortunately, two-party UC commitment protocols do not exist in the plain model. However, we construct two-party UC commitment protocols, based on general complexity assumptions, in the common reference string model where all parties have access to a common string taken from a predetermined distribution. The protocols are non-interactive, in the sense that both the commitment and the opening phases consist of a single message from the committer to the receiver.
We review the representation problem based on factoring and show that this problem gives rise to alternative solutions for many cryptographic protocols in the literature. While these solutions so far usually rely either on the RSA problem or on the intractability of factoring integers of a special form (e.g., Blum integers), the solutions here work with the most general factoring assumption. Protocols we discuss include identification schemes secure against parallel attacks, secure signatures, blind signatures and (non-malleable) commitments.
We show that non-interactive statistically-secret bit commitment cannot be constructed from arbitrary black-box one-to-one trapdoor functions and thus from general public-key cryptosystems. Reducing the problems of non-interactive crypto-computing, rerandomizable encryption, non-interactive statistically-sender-private oblivious transfer and low-communication private information retrieval to such commitment schemes, it follows that these primitives are likewise not constructible from one-to-one trapdoor functions and public-key encryption in general. Furthermore, our separation sheds some light on statistical zero-knowledge proofs. There is an oracle relative to which one-to-one trapdoor functions and one-way permutations exist, while the class of promise problems with statistical zero-knowledge proofs collapses to P. This indicates that nontrivial problems with statistical zero-knowledge proofs require more than (trapdoor) one-wayness.
We show lower bounds for the signature size of incremental schemes which are secure against substitution attacks and support single block replacement. We prove that for documents of n blocks such schemes produce signatures of \Omega(n^(1/(2+c))) bits for any constant c>0. For schemes accessing only a single block resp. a constant number of blocks for each replacement this bound can be raised to \Omega(n) resp. \Omega(sqrt(n)). Additionally, we show that our technique yields a new lower bound for memory checkers.
Given a real vector α = (α_1, ..., α_d) and a real number ε > 0, a good Diophantine approximation to α is a number Q such that ‖Qα mod Z‖_∞ ≤ ε, where ‖·‖_∞ denotes the maximum norm ‖x‖_∞ := max_{1≤i≤d} |x_i| for x = (x_1, ..., x_d). Lagarias [12] proved the NP-completeness of the corresponding decision problem, i.e., given a vector α ∈ Q^d, a rational number ε > 0 and a number N ∈ N_+, decide whether there exists a number Q with 1 ≤ Q ≤ N and ‖Qα mod Z‖_∞ ≤ ε. We prove that, unless ...
Given x ∈ R^n, an integer relation for x is a non-trivial vector m ∈ Z^n with inner product ⟨m, x⟩ = 0. In this paper we prove the following: Unless every NP language is recognizable in deterministic quasi-polynomial time, i.e., in time O(n^{poly(log n)}), the ℓ_∞-shortest integer relation for a given vector x ∈ Q^n cannot be approximated in polynomial time within a factor of 2^{log^{0.5−γ} n}, where γ is an arbitrarily small positive constant. This result is quasi-complementary to positive results derived from lattice basis reduction. A variant of the well-known L3-algorithm approximates for a vector x ∈ Q^n the ℓ_2-shortest integer relation within a factor of 2^{n/2} in polynomial time. Our proof relies on recent advances in the theory of probabilistically checkable proofs, in particular on a reduction from 2-prover 1-round interactive proof systems. The same inapproximability result is valid for finding the ℓ_∞-shortest integer solution for a homogeneous linear system of equations over Q.
We analyse a continued fraction algorithm (abbreviated CFA) for arbitrary dimension n, showing that it produces simultaneous Diophantine approximations which are, up to the factor 2^{(n+2)/4}, best possible. Given a real vector x = (x_1, ..., x_{n-1}, 1) ∈ R^n, this CFA generates a sequence of vectors (p_1^{(k)}, ..., p_{n-1}^{(k)}, q^{(k)}) ∈ Z^n, k = 1, 2, ..., with increasing integers |q^{(k)}| satisfying, for i = 1, ..., n-1,

  |x_i − p_i^{(k)}/q^{(k)}| ≤ 2^{(n+2)/4} · sqrt(1 + x_i^2) / |q^{(k)}|^{1+1/(n-1)}.

By a theorem of Dirichlet this bound is best possible in that the exponent 1 + 1/(n-1) can in general not be increased.
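Dirichlet's pigeonhole bound behind this can be checked experimentally by brute force: for d reals there is always a denominator q ≤ N^d with max_i ‖q·x_i‖ ≤ 1/N, where ‖·‖ is the distance to the nearest integer. A small illustrative search (not the CFA of the paper, which avoids exhaustive search):

```python
import math

def nearest_int_dist(t):
    """Distance from t to the nearest integer, i.e. ||t||."""
    return abs(t - round(t))

def best_denominator(xs, q_max):
    """Exhaustively find q in 1..q_max minimizing max_i ||q * x_i||."""
    best_q, best_d = 1, float("inf")
    for q in range(1, q_max + 1):
        d = max(nearest_int_dist(q * x) for x in xs)
        if d < best_d:
            best_q, best_d = q, d
    return best_q, best_d

# d = 2 irrationals; Dirichlet guarantees some q <= 30**2 with error <= 1/30
q, err = best_denominator([math.sqrt(2), math.sqrt(3)], 30**2)
```

The exhaustive search is exponential in the output size; the point of CFA-type algorithms is to reach (near-)Dirichlet quality in polynomial time.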
In discussing final status issues, Palestinians and Israelis approach the question of the refugees and the right of return from radically different perspectives. The Palestinian narrative maintains that the Zionists forcibly expelled the Arab refugees in 1948. The Palestinians insist on the right of the refugees to return to their homes or, for those who choose not to do so, to accept compensation. And they demand that Israel unilaterally acknowledge its complete moral responsibility for the injustice of the refugees’ expulsion. In contrast, the Israeli narrative rejects the refugees’ right of return. Israel argues that it was the Arabs who caused the Palestinian refugee problem, by rejecting the creation of the State of Israel and declaring war upon it—a war which, like most wars, created refugee problems, including a Jewish one. Israel sees the return of Palestinian refugees as an existential threat, insofar as it would undermine the Jewish character and the viability of the state. The two sides’ traditional solutions make no attempt to reconcile these opposing narratives. Yet such an attempt is vital if the issue is to be engaged. Hence the Joint Working Group on Israeli–Palestinian Relations developed two compromise solutions. They narrow the gap between the positions, but do not fully reconcile them. The compromise solution espoused by the Palestinian members of the Joint Working Group would insist that Israel acknowledge both its responsibility for creating the refugee problem and the individual moral right of Palestinian refugees to return. But it recognizes that, in view of the changed situation of the refugees over 50 years, and taking into account Israel’s constraints, the return of only a limited number would be feasible. Israel would pay both individual and collective compensation. The Palestinians’ case for an Israeli withdrawal to the 1967 borders would be strengthened as a result of their willingness to absorb the refugees in the Palestinian state. 
Under the compromise solution proposed by the Israeli members of the Joint Working Group, Israel would acknowledge that it shares, with the other parties to the 1948 war, practical, but not moral, responsibility for the suffering of the refugees, and that rectification of their plight is a central goal of the peace process. Israel would accept repatriation of tens of thousands of refugees under its family reunification program. Israel would pay collective compensation to the Palestinian state, paralleled by Arab State compensation for Jewish refugees from 1948. In seeking to further reconcile these two compromise solutions, we note that they reflect a large measure of agreement between Palestinians and Israelis: that Israel had a historic role in the events that created the refugee issue; that a massive exercise of the right of return is unrealizable, and “return”/family reunification will be limited; that a larger number of Palestinians will “return” to the Palestinian state; that some resettlement will take place in host states, primarily Jordan; that Israel will pay some form of compensation; and that closing the file on the refugee issue means the dismantling of the entire international apparatus that has sustained the refugees—camps, UNRWA, etc. But there remain significant gaps between the two sides’ compromise proposals as well. These concern the nature of Israeli acknowledgement of Palestinian suffering and the responsibility for it; the nature and number of “return”/family reunification; the nature and size of compensation, and its linkage to compensation for Jewish refugees from 1948; and the size of “return” to the Palestinian state. In order to negotiate an agreed solution that bridges these remaining gaps, Israelis and Palestinians will have to develop the mutual trust required to further accommodate each other’s narratives. 
They will also, inevitably, have to factor the refugee/right of return issue into the broader fabric of tradeoffs and compromises that will characterize a comprehensive solution to the conflict. This will involve additional parties—primarily the refugee host countries—as well as related substantive issues, such as borders.
We generalize the concept of block reduction for lattice bases from the l2-norm to arbitrary norms. This extends the results of Schnorr. We give algorithms for block reduction and apply the resulting enumeration concept to solve subset sum problems. The deterministic algorithm solves all subset sum problems. For up to 66 weights it needs on average less than two hours on a HP 715/50 under HP-UX 9.05.
We propose a fast variant of the Gaussian algorithm for the reduction of two-dimensional lattices for the l1-, l2- and l∞-norm. The algorithm runs in at most O(n M(B) log B) bit operations for the l∞-norm and in O(n log n M(B) log B) bit operations for the l1- and l2-norm on input vectors a, b ∈ Z^n with norm at most 2^B, where M(B) is a time bound for B-bit integer multiplication. This generalizes Schönhage's monotone algorithm [Sch91] to the centered case and to various norms.
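For orientation, the classical (non-fast) Gaussian reduction for the l2-norm alternately puts the shorter vector first and subtracts the nearest integer multiple of it from the longer one; a minimal sketch (the paper's fast variant and the other norms are not reproduced here):

```python
def gauss_reduce(a, b):
    """Classical Gaussian reduction of a 2D lattice basis (Euclidean norm).

    Assumes a, b are linearly independent integer 2-tuples. Terminates
    when the size-reduction step makes no further progress."""
    def dot(u, v):
        return u[0] * v[0] + u[1] * v[1]

    while True:
        if dot(a, a) > dot(b, b):       # keep the shorter vector first
            a, b = b, a
        mu = round(dot(a, b) / dot(a, a))
        if mu == 0:
            return a, b                 # reduced: |<a,b>| <= |a|^2 / 2
        b = (b[0] - mu * a[0], b[1] - mu * a[1])
```

For example, gauss_reduce((12, 2), (13, 4)) yields the reduced basis ((1, 2), (9, -4)) of the same lattice.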
This study analyses the labour market effects of fixed-term contracts (FTCs) in West Germany by microeconometric methods using individual and establishment level data. In the first part of the study the role of FTCs in firms’ labour demand is analysed. An econometric investigation of the firms’ reasons for using FTCs focussing on the identification of the link between dismissal protection for permanent contract workers and the firms’ use of FTCs is presented. Furthermore, a descriptive analysis of the role of FTCs in worker and job flows at the firm level is provided. The second part of the study evaluates the short-run effects of being employed on an FTC on working conditions and wages using a large cross-sectional dataset of employees. The final part of the study analyses whether taking up an FTC increases the (permanent contract) employment opportunities in the long run (stepping-stone effect) and whether FTCs affect the job finding behaviour of unemployed job searchers. Firstly, an econometric unemployment duration analysis distinguishing between both types of contracts as destination states is performed. Secondly, the effects of entering into FTCs from unemployment on future (permanent contract) employment opportunities are evaluated, attempting to account for the sequential decision problem of job searchers.
We present an efficient variant of LLL-reduction of lattice bases in the sense of Lenstra, Lenstra, Lovász [LLL82]. We organize LLL-reduction in segments of size k. Local LLL-reduction of segments is done using local coordinates of dimension 2k. Strong segment LLL-reduction yields bases of the same quality as LLL-reduction but the reduction is n times faster for lattices of dimension n. We extend segment LLL-reduction to iterated subsegments. The resulting reduction algorithm runs in O(n^3 log n) arithmetic steps for integer lattices of dimension n with basis vectors of length 2^{O(n)}, compared to O(n^5) steps for LLL-reduction.
We introduce algorithms for lattice basis reduction that are improvements of the famous L3-algorithm. If a random L3-reduced lattice basis b_1, b_2, ..., b_n is given such that the vector of reduced Gram-Schmidt coefficients ({μ_{i,j}}, 1 ≤ j < i ≤ n) is uniformly distributed in [0,1)^{n(n-1)/2}, then the pruned enumeration finds with positive probability a shortest lattice vector. We demonstrate the power of these algorithms by solving random subset sum problems of arbitrary density with 74 and 82 many weights, by breaking the Chor-Rivest cryptoscheme in dimensions 103 and 151 and by breaking Damgård's hash function.
We call a vector x ∈ R^n highly regular if it satisfies ⟨m, x⟩ = 0 for some short, non-zero integer vector m, where ⟨·,·⟩ is the inner product. We present an algorithm which, given x ∈ R^n and α ∈ N, finds a highly regular nearby point x' and a short integer relation m for x'. The nearby point x' is 'good' in the sense that no short relation m̃ of length less than α/2 exists for points x̃ within half the x'-distance from x. The integer relation m for x' is, for random x, up to an average factor 2^{α/2} a shortest integer relation for x'. Our algorithm uses, for arbitrary real input x, at most O(n^4(n + log α)) many arithmetical operations on real numbers. If x is rational, the algorithm operates on integers having at most O(n^5 + n^3(log α)^2 + log(‖qx‖^2)) many bits, where q is the common denominator for x.
We study the following problem: given x ∈ R^n, either find a short integer relation m ∈ Z^n, so that ⟨m, x⟩ = 0 holds for the inner product ⟨·,·⟩, or prove that no short integer relation exists for x. Håstad, Just, Lagarias and Schnorr (1989) give a polynomial time algorithm for the problem. We present a stable variation of the HJLS-algorithm that preserves lower bounds on λ(x) for infinitesimal changes of x. Given x ∈ R^n and α ∈ N, this algorithm finds a nearby point x' and a short integer relation m for x'. The nearby point x' is 'good' in the sense that no very short relation exists for points x̃ within half the x'-distance from x. On the other hand, if x' = x then m is, up to a factor 2^{n/2}, a shortest integer relation for x. Our algorithm uses, for arbitrary real input x, at most O(n^4(n + log α)) many arithmetical operations on real numbers. If x is rational, the algorithm operates on integers having at most O(n^5 + n^3(log α)^2 + log(‖qx‖^2)) many bits, where q is the common denominator for x.
Black box cryptanalysis applies to hash algorithms consisting of many small boxes, connected by a known graph structure, so that the boxes can be evaluated forward and backwards by given oracles. We study attacks that work for any choice of the black boxes, i.e. we scrutinize the given graph structure. For example we analyze the graph of the fast Fourier transform (FFT). We present optimal black box inversions of FFT-compression functions and black box constructions of collisions. This determines the minimal depth of FFT-compression networks for collision-resistant hashing. We propose the concept of multipermutation, which is a pair of orthogonal latin squares, as a new cryptographic primitive that generalizes the boxes of the FFT. Our examples of multipermutations are based on the operations circular rotation, bitwise xor, addition and multiplication.
Parallel FFT-hashing
(1994)
We propose two families of scalable hash functions for collision resistant hashing that are highly parallel and based on the generalized fast Fourier transform (FFT). FFT hashing is based on multipermutations. This is a basic cryptographic primitive for perfect generation of diffusion and confusion which generalizes the boxes of the classic FFT. The slower FFT hash functions iterate a compression function. For the faster FFT hash functions all rounds are alike with the same number of message words entering each round.
We report on improved practical algorithms for lattice basis reduction. We propose a practical floating point version of the L3-algorithm of Lenstra, Lenstra, Lovász (1982). We present a variant of the L3-algorithm with "deep insertions" and a practical algorithm for block Korkin-Zolotarev reduction, a concept introduced by Schnorr (1987). Empirical tests show that the strongest of these algorithms solves almost all subset sum problems with up to 66 random weights of arbitrary bit length within at most a few hours on a UNISYS 6000/70 or within a couple of minutes on a SPARC1+ computer.
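As a baseline for these variants, plain L3 (LLL) reduction with δ = 3/4 can be sketched as follows; this is an educational exact-arithmetic version, deliberately slow, and not the floating-point, deep-insertion, or block Korkin-Zolotarev algorithms of the paper:

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Plain LLL reduction of an integer basis, exact rational arithmetic."""
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        # orthogonalized vectors b* and the mu coefficients, recomputed
        # from scratch each time (simple, not efficient)
        bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = list(b[i])
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size-reduce b_k
            _, mu = gram_schmidt()
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bstar, mu = gram_schmidt()
        # Lovász condition
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]
```

The practical algorithms of the paper replace the exact rationals by floating point and strengthen the swap criterion; the output guarantee used below is the standard LLL bound ‖b_1‖ ≤ 2^{(n−1)/4} (det L)^{1/n}.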
We call a distribution on n-bit strings (ε, e)-locally random if for every choice of e ≤ n positions the induced distribution on e-bit strings is in the L1 norm at most ε away from the uniform distribution on e-bit strings. We establish local randomness in polynomial random number generators (RNG) that are candidate one-way functions. Let N be a squarefree integer and let f_1, ..., f_k be polynomials with coefficients in Z_N = Z/NZ. We study the RNG that stretches a random x ∈ Z_N into the sequence of least significant bits of f_1(x), ..., f_k(x). We show that this RNG provides local randomness if for every prime divisor p of N the polynomials f_1, ..., f_k are linearly independent modulo the subspace of polynomials of degree ≤ 1 in Z_p[x]. We also establish local randomness in polynomial random function generators. This yields candidates for cryptographic hash functions. The concept of local randomness in families of functions extends the concept of universal families of hash functions by Carter and Wegman (1979). The proofs of our results rely on upper bounds for exponential sums.
We propose two improvements to the Fiat-Shamir authentication and signature scheme. We reduce the communication of the Fiat-Shamir authentication scheme to a single round while preserving the efficiency of the scheme. This also reduces the length of Fiat-Shamir signatures. Using secret keys consisting of small integers, we reduce the time for signature generation by a factor of 3 to 4. We propose a variation of our scheme using class groups that may be secure even if factoring large integers becomes easy.
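For orientation, the basic (unimproved) Fiat-Shamir identification round can be sketched as follows. The modulus and secret below are toy values chosen for illustration; a real deployment uses a large RSA modulus of unknown factorization and several parallel secrets, and the paper's improvements compress the rounds and shrink the secrets.

```python
import random

# Toy parameters (illustrative only): in practice n is an RSA modulus
# whose factorization is secret.
n = 3233                 # 61 * 53, toy only
s = 123                  # prover's secret
v = (s * s) % n          # public key v = s^2 mod n

def fiat_shamir_round():
    """One round of basic Fiat-Shamir identification (1 bit of security)."""
    r = random.randrange(1, n)
    x = (r * r) % n                   # prover commits to r^2 mod n
    e = random.randrange(2)           # verifier's random 1-bit challenge
    y = (r * pow(s, e, n)) % n        # prover answers y = r * s^e mod n
    # verifier accepts iff y^2 == x * v^e (mod n)
    return (y * y) % n == (x * pow(v, e, n)) % n
```

Repeating the round k times reduces a cheating prover's success probability to 2^-k; the single-round scheme of the paper achieves comparable security with less communication.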
We introduce novel security proofs that use combinatorial counting arguments rather than reductions to the discrete logarithm or to the Diffie-Hellman problem. Our security results are sharp and clean, with no polynomial reduction times involved. We consider a combination of the random oracle model and the generic model. This corresponds to assuming an ideal hash function H given by an oracle and an ideal group of prime order q, where the binary encoding of the group elements is useless for cryptographic attacks. In this model, we first show that Schnorr signatures are secure against the one-more signature forgery: a generic adversary performing t generic steps, including l sequential interactions with the signer, cannot produce l+1 signatures with probability better than \binom{t}{2}/q. We also characterize the differing power of sequential and of parallel attacks. Secondly, we prove that signed ElGamal encryption is secure against the adaptive chosen ciphertext attack, in which an attacker can arbitrarily use a decryption oracle except for the challenge ciphertext. Moreover, signed ElGamal encryption is secure against the one-more decryption attack: a generic adversary performing t generic steps, including l interactions with the decryption oracle, cannot distinguish the plaintexts of l+1 ciphertexts from random strings with probability exceeding \binom{t}{2}/q.
Assuming a cryptographically strong cyclic group G of prime order q and a random hash function H, we show that ElGamal encryption with an added Schnorr signature is secure against the adaptive chosen ciphertext attack, in which an attacker can freely use a decryption oracle except for the target ciphertext. We also prove security against the novel one-more-decryption attack. Our security proofs are in a new model, corresponding to a combination of two previously introduced models, the random oracle model and the generic model. The security extends to the distributed threshold version of the scheme. Moreover, we propose a very practical scheme for private information retrieval that is based on blind decryption of ElGamal ciphertexts.
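A minimal sketch of ElGamal with an added Schnorr signature along these lines, assuming a toy prime-order subgroup and SHA-256 as a stand-in for the random oracle; all parameters are illustrative, not the paper's. The signature proves knowledge of the encryption randomness r, which is what lets the decryptor reject adversarially crafted ciphertexts.

```python
import hashlib
import random

# Toy group (illustrative only): p = 2q + 1, g generates the order-q
# subgroup of Z_p^*. Real schemes use a ~256-bit prime q.
p, q, g = 2039, 1019, 4

def H(*vals):
    """Random-oracle stand-in: SHA-256 of the inputs, reduced mod q."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = random.randrange(1, q)        # receiver's secret key
y = pow(g, x, p)                  # receiver's public key

def encrypt(m):
    r = random.randrange(1, q)
    gr, my = pow(g, r, p), (m * pow(y, r, p)) % p   # plain ElGamal part
    s = random.randrange(1, q)    # Schnorr signature proving knowledge of r
    c = H(pow(g, s, p), gr, my)
    z = (s + c * r) % q
    return gr, my, c, z

def decrypt(gr, my, c, z):
    # Recover g^s = g^z * (g^r)^(-c); negative exponent needs Python >= 3.8.
    gs = (pow(g, z, p) * pow(gr, -c, p)) % p
    if H(gs, gr, my) != c:
        return None               # reject: invalid proof, thwarting CCA queries
    return (my * pow(gr, -x, p)) % p           # m = my / (g^r)^x
```

Decryption first verifies the proof and only then strips off y^r, so the decryption oracle is useless on ciphertexts the attacker did not honestly form.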
Let b_1, ..., b_m ∈ R^n be an arbitrary basis of a lattice L that is a block Korkin-Zolotarev basis with block size β, and let λ_i(L) denote the successive minima of L. We prove that for i = 1, ..., m, (4/(i+3)) γ_β^{-2(i-1)/(β-1)} ≤ ‖b_i‖²/λ_i(L)² ≤ ((i+3)/4) γ_β^{2(m-i)/(β-1)}, where γ_β is the Hermite constant. For β = 3 we establish the optimal upper bound ‖b_1‖²/λ_1(L)² ≤ (3/2)^{(m-1)/2}, and we present block Korkin-Zolotarev lattice bases for which this bound is tight. We improve the Nearest Plane Algorithm of Babai (1986) using block Korkin-Zolotarev bases. Given a block Korkin-Zolotarev basis b_1, ..., b_m with block size β and a vector x, a lattice point v can be found in time β^{O(β)} satisfying ‖x - v‖² ≤ m γ_β^{2m/(β-1)} min_{u∈L} ‖x - u‖².
With the ubiquitous use of digital camera devices, especially in mobile phones, privacy is no longer threatened by governments and companies only. The new technology creates a new threat from ordinary people, who now have the means to take and distribute pictures of one's face at no risk and little cost in any situation in public and private spaces. Fast distribution via web-based photo albums, online communities and web pages exposes an individual's private life to the public in unprecedented ways. Social and legal measures are increasingly taken to deal with this problem, but they lack efficiency, as they are hard to enforce in practice. In this paper, we discuss a supportive infrastructure targeting the distribution channel: as soon as a picture is publicly available, the exposed individual has a chance to find it and take proper action.
Correction to: C.P. Schnorr: Security of 2^t-Root Identification and Signatures, Proceedings CRYPTO '96, Springer LNCS 1109 (1996), pp. 143-156; page 148, section 3, line 5 of the proof of Theorem 3. The correction was presented as "Factoring N via proper 2^t-Roots of 1 mod N" at the Eurocrypt '97 rump session.
Let G be a finite cyclic group with generator \alpha and with an encoding so that multiplication is computable in polynomial time. We study the security of bits of the discrete log x when given \exp_{\alpha}(x), assuming that the exponentiation function \exp_{\alpha}(x) = \alpha^x is one-way. We reduce the general problem to the case that G has odd order q. If G has odd order q, the security of the least-significant bits of x and of the most-significant bits of the rational number \frac{x}{q} \in [0,1) follows from the work of Peralta [P85] and Long and Wigderson [LW88]. We generalize these bits and study the security of consecutive shift bits lsb(2^{-i}x mod q) for i = k+1, ..., k+j. When we restrict \exp_{\alpha} to arguments x such that some sequence of j consecutive shift bits of x is constant (i.e., not depending on x), we call it a 2^{-j}-fraction of \exp_{\alpha}. For groups of odd order q we show that every two 2^{-j}-fractions of \exp_{\alpha} are equally one-way via a polynomial-time transformation: either they are all one-way or none of them is. Our key theorem shows that arbitrary j consecutive shift bits of x are simultaneously secure when given \exp_{\alpha}(x) iff the 2^{-j}-fractions of \exp_{\alpha} are one-way. In particular, this applies to the j least-significant bits of x and to the j most-significant bits of \frac{x}{q} \in [0,1). For one-way \exp_{\alpha}, the individual bits of x are secure when given \exp_{\alpha}(x) by the method of Håstad and Näslund [HN98]. For groups of even order 2^{s}q we show that the j least-significant bits of \lfloor x/2^s \rfloor, as well as the j most-significant bits of \frac{x}{q} \in [0,1), are simultaneously secure iff the 2^{-j}-fractions of \exp_{\alpha'} are one-way for \alpha' := \alpha^{2^s}. We use and extend the models of generic algorithms of Nechaev (1994) and Shoup (1997). We determine the generic complexity of inverting fractions of \exp_{\alpha} for the case that \alpha has prime order q.
As a consequence, arbitrary segments of (1-\varepsilon)\lg q consecutive shift bits of random x are, for constant \varepsilon > 0, simultaneously secure against generic attacks. Every generic algorithm using t generic steps (group operations) to distinguish bit strings of j consecutive shift bits of x from random bit strings has advantage at most O((\lg q)\, j\sqrt{t}\, (2^j/q)^{1/4}).
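The shift-bit notation lsb(2^{-i} x mod q) used above can be made concrete with a small sketch; the modulus below is a toy value chosen only so that the arithmetic is easy to follow, and q must be odd so that 2 is invertible mod q.

```python
# Toy odd modulus (illustrative only); real groups use a large prime q.
q = 1019

def shift_bits(x, q, k, j):
    """The j consecutive shift bits lsb(2^(-i) * x mod q), i = k+1, ..., k+j."""
    inv2 = pow(2, -1, q)             # 2^(-1) mod q (Python >= 3.8)
    v = (x * pow(inv2, k, q)) % q    # start from 2^(-k) * x mod q
    bits = []
    for _ in range(j):
        v = (v * inv2) % q           # now v = 2^(-i) * x mod q
        bits.append(v & 1)           # lsb of the current value
    return bits
```

For k = 0 these are the generalized least-significant bits whose simultaneous security the key theorem characterizes in terms of the one-wayness of the 2^{-j}-fractions.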
Let G be a group of prime order q with generator g. We study hardcore subsets H ⊆ G of the discrete logarithm (DL) log_g in the model of generic algorithms. In this model we count group operations such as multiplication and division, while computations with non-group data are free. It is known from Nechaev (1994) and Shoup (1997) that generic DL-algorithms for the entire group G must perform √(2q) generic steps. We show that DL-algorithms for small subsets H ⊆ G require m/2 + o(m) generic steps for almost all H of size #H = m with m ≤ √q. Conversely, m/2 + 1 generic steps are sufficient for all H ⊆ G of even size m. Our main result justifies generating secret DL-keys from seeds that are only (1/2) log_2 q bits long.
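The √q-type generic lower bound for the full group is matched, up to a small constant factor, by Shanks' baby-step giant-step method; a sketch with toy parameters (not from the paper) is below. By contrast, for a known small key set H of size m, trying the m candidates directly costs only about m/2 group operations on average, which is the trade-off behind short DL-seeds.

```python
from math import isqrt

def bsgs(g, h, p, q):
    """Solve g^x = h (mod p) for x in [0, q), where g has order q.

    Shanks' baby-step giant-step: about 2*sqrt(q) group operations,
    matching the generic sqrt-bound up to a constant.
    """
    m = isqrt(q) + 1
    baby = {}
    e = 1
    for j in range(m):                 # baby steps: store g^j -> j
        baby.setdefault(e, j)
        e = (e * g) % p
    step = pow(g, -m, p)               # g^(-m); negative exponent: Python >= 3.8
    gamma = h % p
    for i in range(m):                 # giant steps: test h * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * step) % p
    return None                        # no solution in [0, m^2)
```

The toy group below uses p = 2039 and the order-1019 subgroup generated by g = 4; real applications use a ~256-bit prime order.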
We present a novel practical algorithm that, given a lattice basis b_1, ..., b_n, finds in average time O(n² (k/6)^{k/4}) a vector shorter than b_1, provided that b_1 is (k/6)^{n/(2k)} times longer than the shortest nonzero lattice vector. We assume that the given basis b_1, ..., b_n has an orthogonalization that is typical for worst-case lattice bases. The new reduction method samples short lattice vectors in high-dimensional sublattices and advances in sporadic big jumps. It decreases the approximation factor achievable in a given time by known methods to less than its fourth root. We further speed up the new method by the simple and the general birthday method.
We enhance the security of Schnorr blind signatures against the novel one-more forgery of Schnorr [Sc01] and Wagner [W02], which is possible even if the discrete logarithm is hard to compute. We show two limitations of this attack. Firstly, replacing the group G by the s-fold direct product G^s increases the work of the attack, for a given number of signer interactions, to its s-th power, while increasing the work of the blind signature protocol merely by a factor of s. Secondly, we bound the number of additional signatures per signer interaction that can be forged effectively. That fraction of additional forged signatures can be made arbitrarily small.
We modify the concept of LLL-reduction of lattice bases in the sense of Lenstra, Lenstra, Lovász [LLL82] towards a faster reduction algorithm. We organize LLL-reduction in segments of the basis. Our SLLL-bases approximate the successive minima of the lattice in nearly the same way as LLL-bases. For integer lattices of dimension n given by a basis of length 2^{O(n)}, SLLL-reduction runs in O(n^{5+ε}) bit operations for every ε > 0, compared to O(n^{7+ε}) for the original LLL algorithm and O(n^{6+ε}) for the LLL-algorithms of Schnorr (1988) and Storjohann (1996). We present an even faster algorithm for SLLL-reduction via iterated subsegments running in O(n³ log n) arithmetic steps.
We show that P(n)_*(P(n)) for p = 2 with its geometrically induced structure maps is not a Hopf algebroid, because neither the augmentation ε nor the coproduct Δ is multiplicative. As a consequence, the algebra structure of P(n)_*(P(n)) is slightly different from what was previously supposed. We give formulas for ε(xy) and Δ(xy) and show that the inversion of the formal group of P(n) is induced by an antimultiplicative involution Ξ : P(n) → P(n). Some consequences for multiplicative and antimultiplicative automorphisms of K(n) for p = 2 are also discussed.
The general subset sum problem is NP-complete. However, there are two algorithms, one due to Brickell and the other to Lagarias and Odlyzko, which in polynomial time solve almost all subset sum problems of sufficiently low density. Both methods rely on basis reduction algorithms to find short nonzero vectors in special lattices. The Lagarias-Odlyzko algorithm would solve almost all subset sum problems of density < 0.6463... in polynomial time if it could invoke a polynomial-time algorithm for finding the shortest nonzero vector in a lattice. This paper presents two modifications of that algorithm, either one of which would solve almost all problems of density < 0.9408... if it could find shortest nonzero vectors in lattices. These modifications also yield dramatic improvements in practice when they are combined with known lattice basis reduction algorithms.
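One modification of this kind replaces the 0/1 coordinates of the target vector by ±1/2 coordinates, which shortens the target relative to the rest of the lattice and raises the density threshold from 0.6463... to 0.9408... The sketch below, with hypothetical toy numbers, only constructs the lattice basis and checks that the target vector is the expected integer combination of the rows; actually finding it is the job of a reduction algorithm such as LLL or BKZ.

```python
from fractions import Fraction

def cjloss_basis(a, s, N=None):
    """Rows of a +-1/2-style subset-sum lattice basis.

    For a subset-sum solution e (sum_i e_i * a_i = s) the lattice contains
    the short vector (e_1 - 1/2, ..., e_n - 1/2, 0), which lattice
    reduction then tries to find.
    """
    n = len(a)
    N = N or n                # weight on the last coordinate; a large N
                              # penalizes any mismatch of the subset sum
    B = [[Fraction(int(i == j)) for j in range(n)] + [Fraction(N * a[i])]
         for i in range(n)]
    B.append([Fraction(1, 2)] * n + [Fraction(N * s)])
    return B

# Hypothetical toy instance: 385 + 392 = 777.
a, s, e = [366, 385, 392, 401, 422], 777, [0, 1, 1, 0, 0]
B = cjloss_basis(a, s)
n = len(a)
# The integer combination sum_i e_i*B[i] - B[n] is the short target vector:
target = [sum((e[i] if i < n else -1) * B[i][j] for i in range(n + 1))
          for j in range(n + 1)]
```

Every coordinate of the target has absolute value 1/2 except the last, which is 0 exactly when e solves the subset sum; that is why the solution shows up as an unusually short lattice vector.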
Public key signature schemes are necessary for the access control to communication networks and for proving the authenticity of sensitive messages such as electronic fund transfers. Since the invention of the RSA scheme by Rivest, Shamir and Adleman (1978), research has focused on improving the efficiency of these schemes. In this paper we present an efficient algorithm for generating public key signatures which is particularly suited for interactions between smart cards and terminals.
We present a novel parallel one-more signature forgery against blind Okamoto-Schnorr and blind Schnorr signatures in which an attacker interacts l times with a legitimate signer and produces from these interactions l + 1 signatures. Security against the new attack requires that the following ROS-problem is intractable: find an overdetermined, solvable system of linear equations modulo q with random inhomogeneities (right-hand sides). There is an inherent weakness in the security result of Pointcheval and Stern: Theorem 26 of [PS00] does not cover attacks with 4 parallel interactions for elliptic curves of order 2^200. Covering such attacks would require the intractability of the ROS-problem, a plausible but novel complexity assumption. Conversely, assuming the intractability of the ROS-problem, we show that Schnorr signatures are secure in the random oracle and generic group model against the one-more signature forgery.
We present a practical algorithm that, given an LLL-reduced lattice basis of dimension n, runs in time O(n³(k/6)^{k/4} + n⁴) and approximates the length of the shortest nonzero lattice vector to within a factor (k/6)^{n/(2k)}. This result is based on reasonable heuristics. Compared to previous practical algorithms, the new method reduces the proven approximation factor achievable in a given time to less than its fourth root. We also present a sieve algorithm inspired by Ajtai, Kumar, Sivakumar [AKS01].
Let G be a Fuchsian group containing two torsion-free subgroups defining isomorphic Riemann surfaces. Then these surface subgroups K and αKα^{-1} are conjugate in PSL(2,R), but in general the conjugating element α cannot be taken in G or a finite-index Fuchsian extension of G. We show that in the case of a normal inclusion in a triangle group G, these α can be chosen in some triangle group extending G. It turns out that the method leading to this result also allows us to answer the question of how many different regular dessins of the same type can exist on a given quasiplatonic Riemann surface.
We consider Schwarz maps for triangles whose angles are rather general rational multiples of π. Under which conditions can they have algebraic values at algebraic arguments? The answer is based mainly on considerations of complex multiplication of certain Prym varieties in Jacobians of hypergeometric curves. The paper can serve as an introduction to transcendence techniques for hypergeometric functions, but it also contains new results and examples.
The main subjects of this survey are Belyi functions and dessins d'enfants on Riemann surfaces. Dessins are certain bipartite graphs on 2-manifolds defining on them a conformal and even an algebraic structure. In principle, all deeper properties of the resulting Riemann surfaces or algebraic curves should be encoded in these dessins, but the decoding turns out to be difficult and leads to many open problems. We emphasize arithmetical aspects like Galois actions, the relation to the ABC theorem in function fields, and arithmetic questions in the uniformization theory of algebraic curves defined over number fields.
The large-conductance voltage- and Ca2+-activated potassium (BK) channel has been suggested to play an important role in the signal transduction process of cochlear inner hair cells. BK channels have been shown to be composed of the pore-forming alpha-subunit coexpressed with the auxiliary beta-1-subunit. Analyzing the hearing function and cochlear phenotype of BK channel alpha- (BKalpha–/–) and beta-1-subunit (BKbeta-1–/–) knockout mice, we demonstrate normal hearing function and cochlear structure of BKbeta-1–/– mice. Most surprisingly, during the first 4 postnatal weeks BKalpha–/– mice also did not show any obvious hearing deficits. High-frequency hearing loss developed in BKalpha–/– mice only from ca. 8 weeks postnatally onward and was accompanied by a lack of distortion product otoacoustic emissions, suggesting outer hair cell (OHC) dysfunction. Hearing loss was linked to a loss of the KCNQ4 potassium channel in membranes of OHCs in the basal and midbasal cochlear turn, preceding hair cell degeneration and leading to a phenotype similar to that elicited by pharmacologic blockade of KCNQ4 channels. Although the actual link between BK gene deletion, loss of KCNQ4 in OHCs, and OHC degeneration requires further investigation, the data already suggest the human BK-coding slo1 gene mutation as a susceptibility factor for progressive deafness, similar to KCNQ4 potassium channel mutations. © 2004, The National Academy of Sciences. Freely available online through the PNAS open access option.
Presentation at the AMS Southeastern Sectional Meeting, 14-16 March 2003, and the Workshop 'Asymptotic Analysis, Stability, and Generalized Functions', 17-19 March 2003, Louisiana State University, Baton Rouge, Louisiana. See the corresponding papers "Mathematical Problems of Gauge Quantum Field Theory: A Survey of the Schwinger Model" and "Infinite Infrared Regularization and a State Space for the Heisenberg Algebra".
Presentation at the Università di Pisa, Pisa, Italy, 3 July 2002; the conference on 'Irreversible Quantum Dynamics', the Abdus Salam ICTP, Trieste, Italy, 29 July - 2 August 2002; and the University of Natal, Pietermaritzburg, South Africa, 14 May 2003. Version of 24 April 2003: examples added; 16 December 2002: revised; 12 September 2002. See the corresponding papers "Zeno Dynamics of von Neumann Algebras", "Zeno Dynamics in Quantum Statistical Mechanics" and "Mathematics of the Quantum Zeno Effect".
Background: The existence of a constitutively expressed machinery for death in individual cells has led to the notion that survival factors repress this machinery and, if such factors are unavailable, cells die by default. In many cells, however, mRNA and protein synthesis inhibitors induce apoptosis, suggesting that in some cases transcriptional activity might actually impede cell death. To identify transcriptional mechanisms that interfere with cell death and survival, we combined gene trap mutagenesis with site-specific recombination (Cre/loxP system) to isolate genes from cells undergoing apoptosis by growth factor deprivation.
Results: From an integration library consisting of approximately 2 × 10^6 unique proviral integrations obtained by infecting the interleukin-3 (IL-3)-dependent hematopoietic cell line FLOXIL3 with U3Cre gene trap virus, we have isolated 125 individual clones that converted to factor independence upon IL-3 withdrawal. Of 102 cellular sequences adjacent to U3Cre integration sites, 17% belonged to known genes, 11% matched single expressed sequence tags (ESTs) or full cDNAs with unknown function, and 72% had no match within the public databases. Most of the known genes recovered in this analysis encoded proteins with survival functions.
Conclusions: We have shown that hematopoietic cells undergoing apoptosis after withdrawal of IL-3 activate survival genes that impede cell death. This results in reduced apoptosis and improved survival of cells treated with a transient apoptotic stimulus. Thus, apoptosis in hematopoietic cells is the end result of a conflict between death and survival signals, rather than a simple death by default.
An excess of the proinflammatory substance IL-18 is present in joints of patients with rheumatoid arthritis (RA), and expression of IL-18 receptor (IL-18R) regulates IL-18 bioactivity in various cell types. We examined the expression of IL-18R alpha-chain and beta-chain and the biologic effects of IL-18 in fibroblast-like synoviocytes (FLS) after long-term culture. The presence of both IL-18R chains was a prerequisite for IL-18 signal transduction in FLS. However, all FLS cultures studied were either resistant or barely responsive to IL-18 stimulation as regards cell proliferation, expression of adhesion molecules ICAM-1 and vascular cell adhesion molecule (VCAM)-1, and the release of interstitial collagenase and stromelysin, IL-6 and IL-8, prostaglandin E2, or nitric oxide. We conclude that the presence of macrophages or IL-18R+ T cells that can respond directly to IL-18 is essential for the proinflammatory effects of IL-18 in synovitis in RA. Open Access: Published: 14 November 2001 © 2002 Möller et al., licensee BioMed Central Ltd (Print ISSN 1465-9905; Online ISSN 1465-9913)
Background: Allogeneic hematopoietic stem cell transplantation (allo-HSCT) is performed mainly in patients with high-risk or advanced hematologic malignancies and congenital or acquired aplastic anemias. In the context of the significant risk of graft failure after allo-HSCT from alternative donors and the risk of relapse in recipients transplanted for malignancy, the precise monitoring of posttransplant hematopoietic chimerism is of utmost interest. Useful molecular methods for chimerism quantification after allogeneic transplantation, aimed at distinguishing precisely between donor's and recipient's cells, are PCR-based analyses of polymorphic DNA markers. Such analyses can be performed regardless of donor's and recipient's sex. Additionally, in patients after sex-mismatched allo-HSCT, fluorescent in situ hybridization (FISH) can be applied. Methods: We compared different techniques for analysis of posttransplant chimerism, namely FISH and PCR-based molecular methods with automated detection of fluorescent products in an ALFExpress DNA Sequencer (Pharmacia) or ABI 310 Genetic Analyzer (PE). We used the Spearman correlation test. Results: We have found high correlation between results obtained from the PCR/ALFExpress and PCR/ABI 310 Genetic Analyzer. Lower, but still positive, correlations were found between results of the FISH technique and results obtained using automated DNA sizing technology. Conclusions: All the methods applied enable a rapid and accurate detection of post-HSCT chimerism.
Background: To investigate the occupational risk of tuberculosis (TB) infection in a low-incidence setting, data from a prospective study of patients with culture-confirmed TB conducted in Hamburg, Germany, from 1997 to 2002 were evaluated. Methods: M. tuberculosis isolates were genotyped by IS6110 RFLP analysis. Results of contact tracing and additional patient interviews were used for further epidemiological analyses. Results: Out of 848 cases included in the cluster analysis, 286 (33.7%) were classified into 76 clusters comprising 2 to 39 patients. In total, two patients in the non-cluster and eight patients in the cluster group were health-care workers. Logistic regression analysis confirmed work in the health-care sector as the strongest predictor for clustering (OR 17.9). However, only two of the eight transmission links among the eight clusters involving health-care workers had been detected previously. Overall, conventional contact tracing performed before genotyping had identified only 26 (25.2%) of the 103 contact persons with the disease among the clustered cases whose transmission links were epidemiologically verified. Conclusion: Recent transmission was found to be strongly associated with health-care work in a setting with low incidence of TB. Conventional contact tracing alone was shown to be insufficient to discover recent transmission chains. The data presented also indicate the need for establishing improved TB control strategies in health-care settings.
Introduction: ScFv(FRP5)-ETA is a recombinant antibody toxin with binding specificity for ErbB2 (HER2). It consists of an N-terminal single-chain antibody fragment (scFv), genetically linked to truncated Pseudomonas exotoxin A (ETA). Potent antitumoral activity of scFv(FRP5)-ETA against ErbB2-overexpressing tumor cells was previously demonstrated in vitro and in animal models. Here we report the first systemic application of scFv(FRP5)-ETA in human cancer patients.
Methods: We have performed a phase I dose-finding study, with the objective to assess the maximum tolerated dose and the dose-limiting toxicity of intravenously injected scFv(FRP5)-ETA. Eighteen patients suffering from ErbB2-expressing metastatic breast cancers, prostate cancers, head and neck cancer, non small cell lung cancer, or transitional cell carcinoma were treated. Dose levels of 2, 4, 10, 12.5, and 20 μg/kg scFv(FRP5)-ETA were administered as five daily infusions each for two consecutive weeks.
Results: No hematologic, renal, and/or cardiovascular toxicities were noted in any of the patients treated. However, transient elevation of liver enzymes was observed, and considered dose-limiting, in one of six patients at the maximum tolerated dose of 12.5 μg/kg, and in two of three patients at 20 μg/kg. Fifteen minutes after injection, peak concentrations of more than 100 ng/ml scFv(FRP5)-ETA were obtained at a dose of 10 μg/kg, indicating that predicted therapeutic levels of the recombinant protein can be applied without inducing toxic side effects. Induction of antibodies against scFv(FRP5)-ETA was observed 8 days after initiation of therapy in 13 patients, but neutralizing activity could be detected in only five of these patients. Two patients showed stable disease, and in three patients clinical signs of activity were observed (all treated at doses ≥ 10 μg/kg). Disease progression occurred in 11 of the patients.
Conclusion: Our results demonstrate that systemic therapy with scFv(FRP5)-ETA can be safely administered up to a maximum tolerated dose of 12.5 μg/kg in patients with ErbB2-expressing tumors, justifying further clinical development.
Background: The cosmopolitan moon jelly Aurelia is characterized by high degrees of morphological and ecological plasticity, and consequently by an unclear taxonomic status. The latter has been revised repeatedly over the last century, dividing the genus Aurelia into as many as 12 or as few as two species. We used molecular data and phenotypic traits to unravel speciation processes and phylogeographic patterns in Aurelia.
Results: Mitochondrial and nuclear DNA data (16S and ITS-1/5.8S rDNA) from 66 specimens sampled world-wide reveal star-like tree topologies, unambiguously differentiating 7 (mtDNA) and 8 (ncDNA) genetic entities with sequence divergences ranging from 7.8 to 14% (mtDNA) and 5 to 32% (ncDNA), respectively. Phylogenetic patterns strongly suggest historic speciation events and the reconstruction of at least 7 different species within Aurelia. Both genetic divergences and life-history traits showed associations with environmental factors, suggesting ecological differentiation forced by divergent selection. Hybridization and introgression between Aurelia lineages likely occurred due to secondary contacts, which, however, did not blur the unambiguous genetic separation.
Conclusions: Our findings recommend Aurelia as a model system for using the combined power of organismic, ecological, and molecular data to unravel speciation processes in cosmopolitan marine organisms.
© 2002 Schroth et al; licensee BioMed Central Ltd. Verbatim copying and redistribution of this article are permitted in any medium for any non-commercial purpose, provided this notice is preserved along with the article's original URL: http://www.biomedcentral.com/1471-2148/2/1
Dendritic cells (DC) are known to present exogenous protein Ag effectively to T cells. In this study we sought to identify the proteases that DC employ during antigen processing. The murine epidermal-derived DC line XS52, when pulsed with PPD, optimally activated the PPD-reactive Th1 clone LNC.2F1 as well as the Th2 clone LNC.4k1, and this activation was completely blocked by chloroquine pretreatment. These results validate the capacity of XS52 DC to digest PPD into immunogenic peptides inducing antigen-specific T cell immune responses. XS52 DC, as well as splenic DC and DC derived from bone marrow, degraded standard substrates for cathepsins B, C, D/E, H, J, and L, tryptase, and chymases, indicating that DC express a variety of protease activities. Treatment of XS52 DC with pepstatin A, an inhibitor of aspartic acid proteases, completely abrogated their capacity to present native PPD, but not trypsin-digested PPD fragments, to Th1 and Th2 cell clones. Pepstatin A also inhibited cathepsin D/E activity selectively among the XS52 DC-associated protease activities. On the other hand, inhibitors of serine proteases (dichloroisocoumarin, DCI) or of cysteine proteases (E-64) did not impair XS52 DC presentation of PPD, nor did they inhibit cathepsin D/E activity. Finally, all tested DC populations (XS52 DC, splenic DC, and bone marrow-derived DC) constitutively expressed cathepsin D mRNA. These results suggest that DC primarily employ cathepsin D (and perhaps E) to digest PPD into antigenic peptides.
Background: The neurophysiological and neuroanatomical foundations of persistent developmental stuttering (PDS) are still a matter of dispute. A main argument is that stutterers show atypical anatomical asymmetries of speech-relevant brain areas, which possibly affect speech fluency. The major aim of this study was to determine whether adults with PDS have anomalous anatomy in cortical speech-language areas. Methods: Adults with PDS (n = 10) and controls (n = 10) matched for age, sex, hand preference, and education were studied using high-resolution MRI scans. Using a new variant of the voxel-based morphometry technique (augmented VBM) the brains of stutterers and non-stutterers were compared with respect to white matter (WM) and grey matter (GM) differences. Results: We found increased WM volumes in a right-hemispheric network comprising the superior temporal gyrus (including the planum temporale), the inferior frontal gyrus (including the pars triangularis), the precentral gyrus in the vicinity of the face and mouth representation, and the anterior middle frontal gyrus. In addition, we detected a leftward WM asymmetry in the auditory cortex in non-stutterers, while stutterers showed symmetric WM volumes. Conclusions: These results provide strong evidence that adults with PDS have anomalous anatomy not only in perisylvian speech and language areas but also in prefrontal and sensorimotor areas. Whether this atypical asymmetry of WM is the cause or the consequence of stuttering is still an unanswered question. This article is available from: http://www.biomedcentral.com/1471-2377/4/23 © 2004 Jäncke et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
First paragraph (this article has no abstract): Persistent stimulation of nociceptors results in sensitization of nociceptive sensory neurons, which is associated with hyperalgesia and allodynia. The release of NO and subsequent synthesis of cGMP in the spinal cord are involved in this process. cGMP-dependent protein kinase I (PKG-I) has been suggested to act as a downstream target of cGMP, but its exact role in nociception had not yet been characterized. To further evaluate the NO/cGMP/PKG-I pathway in nociception, we assessed the effects of PKG-I inhibition and activation in the rat formalin assay and analyzed the nociceptive behavior of PKG-I-/- mice. Open access article.
Background: In general, shell-less slugs are considered to be slimy animals with a rather dull appearance and a pest to garden plants. Marine slugs, however, are usually beautifully coloured animals belonging to the less well-known Opisthobranchia. They are characterized by a large array of interesting biological phenomena, usually related to foraging and/or defence. In this paper our knowledge of shell reduction, correlated with the evolution of different defensive and foraging strategies, is reviewed, and new results on the histology of different glandular systems are included. Results: Based on a phylogeny obtained from morphological and histological data, the parallel reduction of the shell within the different groups is outlined. Major food sources are given, and glandular structures are described as possible defensive structures in the external epithelia and as internal glands. Conclusion: According to phylogenetic analyses, the reduction of the shell correlates with the evolution of defensive strategies. Many different kinds of defence structures, like cleptocnides, mantle dermal formations (MDFs), and acid glands, are only present in shell-less slugs. In several cases, it is not clear whether the defensive devices were a prerequisite for the reduction of the shell, or whether reduction occurred before. Reduction of the shell and acquisition of different defensive structures had implications for the exploration of new food sources and therefore likely enhanced the adaptive radiation of several groups. © 2005 Wägele and Klussmann-Kolb; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited: http://www.frontiersinzoology.com/content/2/1/3/
Background: Tumor development remains one of the major obstacles following organ transplantation. Immunosuppressive drugs such as cyclosporine and tacrolimus directly contribute to enhanced malignancy, whereas the influence of the novel compound mycophenolate mofetil (MMF) on tumor cell dissemination has not been explored. We therefore investigated the adhesion capacity of colon, pancreas, prostate and kidney carcinoma cell lines to endothelium, as well as their beta1 integrin expression profile before and after MMF treatment. Methods: Tumor cell adhesion to endothelial cell monolayers was evaluated in the presence of 0.1 and 1 μM MMF and compared to unstimulated controls. beta1 integrin analysis included alpha1beta1 (CD49a), alpha2beta1 (CD49b), alpha3beta1 (CD49c), alpha4beta1 (CD49d), alpha5beta1 (CD49e), and alpha6beta1 (CD49f) receptors, and was carried out by reverse transcriptase-polymerase chain reaction, confocal microscopy and flow cytometry. Results: Adhesion of the colon carcinoma cell line HT-29 was strongly reduced in the presence of 0.1 μM MMF. This effect was accompanied by down-regulation of alpha3beta1 and alpha6beta1 surface expression and of alpha3beta1 and alpha6beta1 coding mRNA. Adhesion of the prostate tumor cell line DU-145 was blocked dose-dependently by MMF. In contrast to MMF's effects on HT-29 cells, MMF dose-dependently up-regulated alpha1beta1, alpha2beta1, alpha3beta1, and alpha5beta1 on DU-145 tumor cell membranes. Conclusion: We conclude that MMF possesses distinct anti-tumoral properties, particularly in colon and prostate carcinoma cells. Adhesion blockage of HT-29 cells was due to the loss of alpha3beta1 and alpha6beta1 surface expression, which might contribute to a reduced invasive behaviour of this tumor entity. The enhancement of integrin beta1 subtypes observed in DU-145 cells possibly causes re-differentiation towards a low-invasive phenotype.
Background: In rat, deafferentation of one labyrinth (unilateral labyrinthectomy) results in a characteristic syndrome of ocular and motor postural disorders (e.g., barrel rotation, circling behavior, and spontaneous nystagmus). Behavioral recovery (e.g., diminished symptoms), encompassing 1 week after unilateral labyrinthectomy, has been termed vestibular compensation. Evidence suggesting that the histamine H3 receptor plays a key role in vestibular compensation comes from studies indicating that betahistine, a histamine-like drug that acts as both a partial histamine H1 receptor agonist and an H3 receptor antagonist, can accelerate the process of vestibular compensation. Results: Expression levels for the histamine H3 receptor (total) as well as for three isoforms which display variable lengths of the third intracellular loop of the receptor were analyzed using in situ hybridization on brain sections containing the rat medial vestibular nucleus after unilateral labyrinthectomy. We compared these expression levels to H3 receptor binding densities. Total H3 receptor mRNA levels (detected by oligo probe H3X) as well as mRNA levels of the three receptor isoforms studied (detected by oligo probes H3A, H3B, and H3C) showed a pattern of increase, which was bilaterally significant at 24 h post-lesion for both H3X and H3C, followed by significant bilateral decreases in medial vestibular nuclei occurring 48 h (H3X and H3B) and 1 week post-lesion (H3A, H3B, and H3C). Expression levels of H3B were an exception to the aforementioned pattern, with significant decreases already detected at 24 h post-lesion. Coinciding with the decreasing trends in H3 receptor mRNA levels was an observed increase in H3 receptor binding densities in the ipsilateral medial vestibular nuclei 48 h post-lesion.
Conclusion: Progressive recovery of the resting discharge of the deafferented medial vestibular nucleus neurons results in functional restoration of the static postural and oculomotor deficits, usually occurring within a time frame of 48 hours in rats. Our data suggest that the H3 receptor may be an essential part of the pre-synaptic mechanisms required for re-establishing resting activities 48 h after unilateral labyrinthectomy.
Introduction: This open label, multicentre study was conducted to assess the times to offset of the pharmacodynamic effects and the safety of remifentanil in patients with varying degrees of renal impairment requiring intensive care.
Methods: A total of 40 patients, who were aged 18 years or older and had normal/mildly impaired renal function (estimated creatinine clearance ≥ 50 ml/min; n = 10) or moderate/severe renal impairment (estimated creatinine clearance <50 ml/min; n = 30), were entered into the study. Remifentanil was infused for up to 72 hours (initial rate 6–9 μg/kg per hour), with propofol administered if required, to achieve a target Sedation–Agitation Scale score of 2–4, with no or mild pain.
Results: There was no evidence of increased offset time with increased duration of exposure to remifentanil in either group. The times to offset of the effects of remifentanil (at 8, 24, 48 and 72 hours during scheduled down-titrations of the infusion) were more variable and statistically significantly longer in the moderate/severe group than in the normal/mild group at 24 hours and 72 hours. These observed differences were not clinically significant (the difference in mean offset at 72 hours was only 16.5 min). Propofol consumption was lower with the remifentanil-based technique than with hypnotic-based sedative techniques. There were no statistically significant differences between the renal function groups in the incidence of adverse events, and no deaths were attributable to remifentanil use.
Conclusion: Remifentanil was well tolerated, and the offset of pharmacodynamic effects was not prolonged either as a result of renal dysfunction or prolonged infusion up to 72 hours.
The study of organisms with restricted dispersal abilities and a presence in the fossil record is particularly well suited to understanding the impact of climate changes on the distribution and genetic structure of species. Trochoidea geyeri (Soós 1926) is a land snail restricted to a patchy, insular distribution in Germany and France. Fossil evidence suggests that current populations of T. geyeri are relicts of a much more widespread distribution during more favourable climatic periods in the Pleistocene. Results: Phylogeographic analysis of mitochondrial 16S rDNA and nuclear ITS-1 sequence variation was used to infer the history of the remnant populations of T. geyeri. Nested clade analysis for both loci suggested that the origin of the species lies in the Provence, from where it expanded its range first to Southwest France and subsequently from there to Germany. Estimated divergence times predating the last glacial maximum 25–17 ka ago implied that the colonization of the northern part of the current species range occurred during the Pleistocene. Conclusion: We conclude that T. geyeri could persist quite successfully in cryptic refugia during major climatic changes in the past, despite the restricted capacity of individuals to actively avoid unfavourable conditions.
Western cultures have witnessed a tremendous cultural and social transformation of sexuality in the years since the sexual revolution. Apart from a few public debates and scandals, the process has moved along gradually and quietly. Yet its real and symbolic effects are probably much more consequential than those generated by the sexual revolution of the sixties. Sigusch refers to the broad-based recoding and reassessment of the sexual sphere during the eighties and nineties as the "neosexual revolution". The neosexual revolution is dismantling the old patterns of sexuality and reassembling them anew. In the process, dimensions, intimate relationships, preferences and sexual fragments emerge, many of which had previously been submerged, unnamed, or simply nonexistent. In general, sexuality has lost much of its symbolic meaning as a cultural phenomenon. Sexuality is no longer the great metaphor for pleasure and happiness, nor is it so greatly overestimated as it was during the sexual revolution. It is now widely taken for granted, much like egotism or motility. Whereas sex was once mystified in a positive sense, as ecstasy and transgression, it has now taken on a negative mystification characterized by abuse, violence and deadly infection. While the old sexuality was based primarily upon sexual instinct, orgasm and the heterosexual couple, neosexualities revolve predominantly around gender difference, thrills, self-gratification and prosthetic substitution. From the vast number of interrelated processes from which neosexualities emerge, three empirically observable phenomena have been selected for discussion here: the dissociation of the sexual sphere, the dispersion of sexual fragments and the diversification of intimate relationships. The outcome of the neosexual revolution may be described as "lean sexuality" and "self-sex".
Background: Common warts (verrucae vulgares) are human papilloma virus (HPV) infections with a high incidence and prevalence, most often affecting hands and feet, and can impair quality of life. The roughly 30 different therapeutic regimens described in the literature reveal the lack of a single decisive strategy. Recent publications showed positive results of photodynamic therapy (PDT) with 5-aminolevulinic acid (5-ALA) in the treatment of HPV-induced skin diseases, especially warts, using visible light (VIS) to stimulate an absorption band of endogenously formed protoporphyrin IX. Additional experience with adding water-filtered infrared A (wIRA) during 5-ALA-PDT revealed positive effects. Aim of the study: First prospective randomised controlled blind study including PDT and wIRA in the treatment of recalcitrant common hand and foot warts. Comparison of "5-ALA cream (ALA) vs. placebo cream (PLC)" and "irradiation with visible light and wIRA (VIS+wIRA) vs. irradiation with visible light alone (VIS)". Methods: Pre-treatment with keratolysis (salicylic acid) and curettage. PDT treatment: topical application of 5-ALA (Medac) in "unguentum emulsificans aquosum" vs. placebo; irradiation: combination of VIS and a large amount of wIRA (Hydrosun® radiator type 501, 4 mm water cuvette, water-filtered spectrum 590-1400 nm, contact-free, typically painless) vs. VIS alone. Post-treatment with retinoic acid ointment. One to three therapy cycles every 3 weeks. Main variable of interest: "percent change of total wart area of each patient over time" (18 weeks). Global judgement by patient and by physician and subjective rating of feeling/pain (visual analogue scales). 80 patients with therapy-resistant common hand and foot warts were assigned randomly to one of the four therapy groups, with comparable numbers of warts at comparable sites in all groups.
Results: The individual total wart area decreased during 18 weeks significantly more in group 1 (ALA+VIS+wIRA) and in group 2 (PLC+VIS+wIRA) than in both groups without wIRA (group 3 (ALA+VIS) and group 4 (PLC+VIS)); medians and interquartile ranges: -94% (-100%/-84%) vs. -99% (-100%/-71%) vs. -47% (-75%/0%) vs. -73% (-92%/-27%). After 18 weeks the two groups with wIRA differed remarkably from the two groups without wIRA: 42% vs. 7% completely cured patients; 72% vs. 34% vanished warts. Global judgement by patient and by physician and subjective rating of feeling were much better in the two groups with wIRA than in the two groups without wIRA. Conclusions: The complete treatment scheme for hand and foot warts described above (keratolysis, curettage, PDT treatment, irradiation with VIS+wIRA, retinoic acid ointment; three therapy cycles every 3 weeks) proved to be effective. Within this treatment scheme, wIRA as a non-invasive and painless treatment modality proved to be an important, effective factor, while photodynamic therapy with 5-ALA in the described form did not contribute recognisably to clinical improvement, either alone (without wIRA) or in combination with wIRA. For future treatment of warts an improved scheme is proposed: one treatment cycle (keratolysis, curettage, wIRA, without PDT) once a week for six to nine weeks. © 2004 Fuchs et al; licensee German Medical Science. This is an Open Access article: verbatim copying and redistribution of this article are permitted in all media for any purpose, provided this notice is preserved along with the article's original URL: http://www.egms.de/en/gms/volume2.shtml
Apparent contradiction between negative effects of UV radiation and positive effects of sun exposure
(2005)
We would like to comment on the three contributions in the Journal of the National Cancer Institute, Vol. 97, No. 3, February 2, 2005: Kathleen M. Egan, Jeffrey A. Sosman, William J. Blot: Editorial: Sunlight and Reduced Risk of Cancer: Is the Real Story Vitamin D? (pp. 161-163) ; Marianne Berwick, Bruce K. Armstrong, Leah Ben-Porat, Judith Fine, Anne Kricker, Carey Eberle, Raymond Barnhill: Sun Exposure and Mortality From Melanoma. (pp. 195-199) ; Karin Ekström Smedby, Henrik Hjalgrim, Mads Melbye, Anna Torrång, Klaus Rostgaard, Lars Munksgaard, et al.: Ultraviolet Radiation Exposure and Risk of Malignant Lymphomas. (pp. 199-209).
In this short note on my talk I want to point out the mathematical difficulties that arise in the study of the relation of Wightman and Euclidean quantum field theory, i.e., the relation between the hierarchies of Wightman and Schwinger functions. The two extreme cases where the reconstructed Wightman functions are either tempered distributions - the well-known Osterwalder-Schrader reconstruction - or modified Fourier hyperfunctions are discussed in some detail. Finally, some perspectives towards a classification of Euclidean reconstruction theorems are outlined and preliminary steps in that direction are presented.
We reconsider estimates for the heat kernel on weighted graphs recently found by Metzger and Stollmann. In the case that the weights satisfy a positive lower bound as well as a finite upper bound, we obtain a specialized lower estimate and a proper generalization of a previous upper estimate. Reviews: Math. Rev. 1979406, Zbl. Math. 0934.46042
We present an overview of the mathematics underlying the quantum Zeno effect. Classical, functional analytic results are put into perspective and compared with more recent ones. This yields some new insights into the mathematical preconditions entailing the Zeno paradox, in particular a simplified proof of Misra's and Sudarshan's theorem. We emphasise the complex-analytic structures associated with the issue of existence of the Zeno dynamics. On the grounds of the assembled material, we reason about possible future mathematical developments pertaining to the Zeno paradox and its counterpart, the anti-Zeno paradox, both of which seem to be close to complete characterisations. PACS classification: 03.65.Xp, 03.65.Db, 05.30.-d, 02.30.T . See the corresponding presentations: Schmidt, Andreas U.: "Zeno Dynamics of von Neumann Algebras" and "Zeno Dynamics in Quantum Statistical Mechanics"
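For orientation, the core of the Misra-Sudarshan result referred to above can be stated schematically (the precise domain and continuity assumptions are what the cited literature makes rigorous): for a Hamiltonian H and a projection P onto the subspace selected by the measurement, frequently repeated measurement yields the product-formula limit

```latex
\lim_{n\to\infty}\left(P\,e^{-\mathrm{i}Ht/n}\,P\right)^{n}
\;=\; P\,e^{-\mathrm{i}\,(PHP)\,t}\,P ,
```

so that, when the limit exists, the system remains confined to the range of P and evolves under the reduced generator PHP; the Zeno paradox is precisely this freezing of transitions out of the measured subspace.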
We study the quantum Zeno effect in quantum statistical mechanics within the operator-algebraic framework. We formulate a condition for the appearance of the effect in W*-dynamical systems, in terms of the short-time behaviour of the dynamics. Examples of quantum spin systems show that this condition can be effectively applied to quantum statistical mechanical models. Furthermore, we derive an explicit form of the Zeno generator, and use it to construct Gibbs equilibrium states for the Zeno dynamics. As a concrete example, we consider the X-Y model, for which we show that a frequent measurement at a microscopic level, e.g. a single lattice site, can produce a macroscopic effect in changing the global equilibrium. PACS classification: 03.65.Xp, 05.30.-d, 02.30. See the corresponding papers: Schmidt, Andreas U.: "Zeno Dynamics of von Neumann Algebras" and "Mathematics of the Quantum Zeno Effect" and the talk "Zeno Dynamics in Quantum Statistical Mechanics" - http://publikationen.ub.uni-frankfurt.de/volltexte/2005/1167/
The dynamical quantum Zeno effect is studied in the context of von Neumann algebras. It is shown that the Zeno dynamics coincides with the modular dynamics of a localized subalgebra. This relates the modular operator of that subalgebra to the modular operator of the original algebra by a variant of the Kato-Lie-Trotter product formula.
We present a method for the construction of a Krein space completion for spaces of test functions, equipped with an indefinite inner product induced by a kernel which is more singular than a distribution of finite order. This generalizes a regularization method for infrared singularities in quantum field theory, introduced by G. Morchio and F. Strocchi, to the case of singularities of infinite order. We give conditions for the possibility of this procedure in terms of local differential operators and the Gelfand-Shilov test function spaces, as well as an abstract sufficient condition. As a model case we construct a maximally positive definite state space for the Heisenberg algebra in the presence of an infinite infrared singularity. See the corresponding paper: Schmidt, Andreas U.: "Mathematical Problems of Gauge Quantum Field Theory: A Survey of the Schwinger Model" and the presentation "Infinite Infrared Regularization in Krein Spaces"
This extended write-up of a talk gives an introductory survey of mathematical problems of the quantization of gauge systems. Using the Schwinger model as an exactly tractable but nontrivial example which exhibits general features of gauge quantum field theory, I cover the following subjects: the axiomatics of quantum field theory, formulation of quantum field theory in terms of Wightman functions, reconstruction of the state space, the local formulation of gauge theories, indefiniteness of the Wightman functions in general and in the special case of the Schwinger model, the state space of the Schwinger model, and special features of the model. New results are contained in the Mathematical Appendix, where I consider in an abstract setting the Pontrjagin space structure of a special class of indefinite inner product spaces - the so-called quasi-positive ones. This is motivated by the indefinite inner product space structure appearing in the above context and generalizes results of Morchio and Strocchi [J. Math. Phys. 31 (1990) 1467], and Dubin and Tarski [J. Math. Phys. 7 (1966) 574]. See the corresponding paper: Schmidt, Andreas U.: "Infinite Infrared Regularization and a State Space for the Heisenberg Algebra" and the presentation "Infinite Infrared Regularization in Krein Spaces".
Drug target 5-lipoxygenase : a link between cellular enzyme regulation and molecular pharmacology
(2005)
Leukotrienes (LT) are bioactive lipid mediators that are involved in a variety of inflammatory diseases such as asthma, psoriasis, arthritis and allergic rhinitis. Furthermore, LT play a role in the pathogenesis of diseases such as cancer, osteoarthritis and atherosclerosis. 5-Lipoxygenase (5-LO) is the enzyme responsible for the formation of LT. Given the physiological properties of LT, the development of potential drugs that target 5-LO is of considerable interest. In vitro, the activity of 5-LO is determined by Ca2+, ATP, phosphatidylcholine and lipid hydroperoxides (LOOH), and by the p38-dependent kinases MK-2/3. Inhibitor studies indicate that the MEK1/2 pathway is also involved in 5-LO activation in vivo. The main aim of this work was to investigate the role the MEK1/2 pathway plays in the activation of 5-LO, and the influence of the 5-LO activation route on the efficacy of potential inhibitors. "In gel kinase" and "in vitro kinase" assays showed that 5-LO is a substrate for the extracellular signal-regulated kinase (ERK) and for MK-2/3. The addition of polyunsaturated fatty acids (UFA), such as AA or oleic acid, enhanced the degree of 5-LO phosphorylation by both ERK1/2 and MK-2/3. These kinases are therefore also responsible for 5-LO activation by natural stimuli that hardly affect cellular Ca2+ levels. It follows that phosphorylation of 5-LO by ERK1/2 and/or MK-2/3 constitutes an alternative activation mechanism besides Ca2+. Nonredox-type 5-LO inhibitors were originally developed as competitive agents that compete with AA for binding to the catalytic domain of 5-LO. Representatives of this class, such as ZM230487 and L-739,010, show potent inhibition of LT biosynthesis in various test systems.
However, they failed in clinical trials. In this work we were able to show that the efficacy of these inhibitors depends on the 5-LO activation pathway. Compared with 5-LO activity induced by the non-physiological stimulus Ca2+ ionophore, inhibition of cell-stress-induced activity requires 10- to 100-fold higher concentrations of the nonredox-type 5-LO inhibitors. The non-phosphorylatable 5-LO mutant (Ser271Ala/Ser663Ala) was considerably more sensitive to nonredox-type inhibitors than the wild type when the enzyme was activated by 5-LO kinases. These results thus show that, in contrast to Ca2+, 5-LO activation via phosphorylation markedly reduces the efficacy of nonredox-type inhibitors. Furthermore, the pharmacological profile of the novel 5-LO inhibitor CJ-13,610 was characterized in various in vitro test systems. In intact PMNL stimulated with Ca2+ ionophore, the compound inhibited 5-LO product formation with an IC50 of 70 nM. Addition of exogenous AA reduced this effect and increased the inhibitor's IC50, pointing to a competitive mode of action. Like the known nonredox-type inhibitors, CJ-13,610 loses its potency at elevated cellular peroxide levels; unlike them, however, its efficacy shows no dependence on the 5-LO activation pathway. In the development of new drugs it is therefore of fundamental importance to understand the cellular context, in particular the regulation of enzyme activity. As shown in this work, phosphorylation of 5-LO strongly influences the regulation of 5-LO activity and has a major impact on the inhibition of the enzyme by different agents.
A fundamental work on THz measurement techniques for application to steel manufacturing processes
(2004)
Until the invention of a photo-mixing technique at Bell Laboratories in 1984 [1], terahertz (THz) waves could not be generated except with very large systems such as free-electron lasers. The first method, using the Auston switch, could generate frequencies up to 1 THz [2]. Subsequent efforts to extend the frequency limit led to combinations of antennas for generation and detection reaching several THz [3, 4]. This technique has since developed, gradually filling the so-called "THz gap", and much research has also aimed at increasing the output power [5-7]. In the 1990s, non-linear optical methods brought a major advance in the accessible frequency band [8-11]; they drastically expanded the frequency region and recently enabled measurements up to 41 THz [12]. In parallel, other approaches have yielded new generation and detection methods, for continuous-wave THz as well as pulsed generation [13-19]. In particular, THz luminescence and lasing, originating in research on the Bloch oscillator, have recently been obtained from quantum cascade structures, albeit only at low temperatures of about 60 K [20-22]. This research attracts much attention because, owing to its low cost and easier operation, it could be the breakthrough that spreads THz techniques into industry as well as research. The development of the THz field has naturally been helped by short-pulse laser technology: against the background of the appearance of stable Ti:sapphire lasers and high-power chirped pulse amplification (CPA) lasers, replacing dye lasers, much effort has been concentrated on pulse compression and amplification techniques [23]. Viewed from the application side, the THz technique has come into the limelight as a promising measurement method.
The discovery of absorption peaks of proteins and DNA in the THz region has, over the past several years, promoted practical applications of the technique in medicine and pharmaceutical science [24-27]. It is also known that absorption lines of light polar molecules lie in this region; accordingly, gas and water-content monitoring has been proposed for the chemical and food industries [28-32]. Furthermore, many reports, such as measurements of carrier distributions in semiconductors, of the refractive index of thin films, and of object shapes by radar, indicate that the technique has a wide range of applications [33-37]. I believe it is worth the challenge to apply it in the steel-making industry, owing to its unique advantages. The THz wavelength range of 30-300 μm offers both insensitivity to the surface roughness of steel products and detection with sub-millimetre precision, suitable for remote surface inspection. There is also the possibility of measuring the thickness or dielectric constants of relatively highly conductive materials, thanks to high transmission through non-polar dielectric materials, short-pulse detection, and a high signal-to-noise ratio of 10^3-10^5. Furthermore, THz measurement could be applicable at high temperature, since it is less influenced by thermal radiation than visible and infrared light. These ideas motivated me to start this THz work.
The Kochen-Specker theorem has been discussed intensely ever since its original proof in 1967. It is one of the central no-go theorems of quantum theory, showing the non-existence of a certain kind of hidden-states model. In this paper, we first offer a new, non-combinatorial proof for quantum systems with a type I_n factor as algebra of observables, including I_infinity. Afterwards, we give a proof of the Kochen-Specker theorem for an arbitrary von Neumann algebra R without summands of types I_1 and I_2, using a known result on two-valued measures on the projection lattice P(R). Some connections with the presheaf formulations proposed by Isham and Butterfield are made.
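For orientation, the familiar Hilbert-space version of the theorem can be stated schematically as follows (the paper's operator-algebraic result replaces the projection lattice of B(H) by P(R)): if dim H ≥ 3, there is no valuation on the projection lattice taking only the values 0 and 1 that is additive across orthogonal resolutions of the identity,

```latex
v : \mathcal{P}(\mathcal{H}) \to \{0,1\}, \qquad
\sum_{i} v(P_i) = 1 \quad \text{whenever} \quad
\sum_{i} P_i = \mathbb{1},\; P_iP_j = \delta_{ij}P_i ,
```

which rules out non-contextual assignments of definite values to all quantum observables at once.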
This paper has shown that some of the principal arguments against shareholder voice are unfounded. It has shown that shareholders do own corporations, and that the nature of their property interest is structured to meet the needs of the relationships found in stock corporations. The paper has explained that fiduciary and other duties restrain the actions of shareholders just as they do those of management, and that critics cannot reasonably expect court-imposed fiduciary duties to extend beyond the actual powers of shareholders. It has also illustrated how, although corporate statutes give shareholders complete power to structure governance as they will, the default governance structures of U.S. corporations leave shareholders almost powerless to initiate any sort of action, and the interaction between state and federal law makes it almost impossible for shareholders to elect directors of their choice. Lastly, the paper has recalled how the percentage of U.S. corporate equities owned by institutional investors has increased dramatically in recent decades, and it has outlined some of the major developments in shareholder rights that followed this increase. I hope that this paper has deflated some of the strong rhetoric used against shareholder voice by contrasting rhetoric with law, and that it has illustrated why the picture of weak owners painted in the early 20th century should be updated to new circumstances, which will help avoid projecting an old description as a current normative model that perpetuates the inevitability of "managerialism", perhaps better known as "dirigisme".
This paper proves the correctness of Nöcker's method of strictness analysis, implemented in the Clean compiler, which is an effective method of strictness analysis for lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt on the correctness of the abstract reduction rules. Our method fully considers the cycle detection rules, which are the main strength of Nöcker's strictness analysis. Our algorithm SAL is a reformulation of Nöcker's strictness analysis algorithm in a higher-order call-by-need lambda-calculus with case, constructors, letrec, and seq, extended by set constants like Top or Inf, denoting sets of expressions. It is also possible to define new set constants by recursive equations with a greatest fixpoint semantics. The operational semantics is a small-step semantics. Equality of expressions is defined by a contextual semantics that observes termination of expressions. Basically, SAL is a non-termination checker. The proof of its correctness, and hence of Nöcker's strictness analysis, is based mainly on an exact analysis of the lengths of normal order reduction sequences, the main measure being the number of 'essential' reductions in a normal order reduction sequence. Our tools and results provide new insights into call-by-need lambda-calculi, the role of sharing in functional programming languages, and strictness analysis in general. The correctness result provides a foundation for Nöcker's strictness analysis in Clean, and also for its use in Haskell.
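The semantic notion underlying strictness analysis is that a function f is strict in an argument iff f(⊥) = ⊥. The following toy sketch (not the paper's algorithm SAL; all names are invented for illustration) models ⊥ as a diverging thunk and call-by-need parameters as zero-argument callables, so "forcing" an argument means calling its thunk:

```python
class Bottom(Exception):
    """Models the undefined value ⊥: forcing it 'diverges' (here: raises)."""

def bottom():
    raise Bottom()

def is_strict(f):
    """f is strict iff forcing f(⊥) yields ⊥.

    Arguments are passed as thunks (zero-argument callables) to model
    call-by-need: a parameter is only evaluated when the body forces it.
    """
    try:
        f(bottom)          # apply f to the diverging thunk
        return False       # f returned without forcing its argument
    except Bottom:
        return True        # f(⊥) diverged, so f is strict

# succ forces its argument: strict
succ = lambda x: x() + 1
# const42 ignores its argument: non-strict (lazy)
const42 = lambda x: 42

print(is_strict(succ))     # → True
print(is_strict(const42))  # → False
```

A real analysis such as SAL must of course decide strictness statically, by abstract reduction over set constants, rather than by running the program; the sketch only illustrates the property being approximated.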
Syndicated loans and the number of lending relationships have attracted growing attention. All other terms being equal (e.g. seniority), syndicated loans provide larger payments (in basis points) to lenders funding larger amounts. The paper explores empirically the motivation for such price discrimination on sovereign syndicated loans in the period 1990-1997. First evidence suggests that larger premia are associated with renegotiation prospects. This is consistent with the hypothesis that price discrimination is aimed at reducing the number of lenders and thus the expected renegotiation costs. However, larger payment discrimination is also associated with more targeted market segments and with larger loans, thus minimising borrowing costs and/or attempting to widen the circle of lending relationships in order to successfully raise the requested amount. JEL Classification: F34, G21, G33 This version: June, 2002. Later version (October 2003) with the title "Why Borrowers Pay Premiums to Larger Lenders: Empirical Evidence from Sovereign Syndicated Loans": http://publikationen.ub.uni-frankfurt.de/volltexte/2005/992/
We use consumer price data for 205 cities/regions in 21 countries to study deviations from the law of one price before, during and after the major currency crises of the 1990s. We combine data from industrialised nations in North America (United States, Canada, Mexico), Europe (Germany, Italy, Spain and Portugal) and the Asia-Pacific region (Japan, Korea, New Zealand, Australia) with corresponding data from emerging market economies in South America (Argentina, Bolivia, Brazil, Colombia) and Asia (India, Indonesia, Malaysia, Philippines, Taiwan, Thailand). We confirm previous results that both distance and borders explain a significant amount of relative price variation across different locations. We also find that currency attacks had major disintegration effects, significantly increasing these border effects and raising within-country relative price dispersion in emerging market economies. These effects are found to be quite persistent, since relative price volatility across emerging markets today is still significantly larger than a decade ago. JEL classification: F40, F41
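Border-effect studies of this kind typically regress the volatility of the log relative price between two locations on log distance and a cross-border dummy. The following is a minimal sketch of that regression on synthetic data (pure-stdlib OLS; all coefficients and variable names are illustrative, not taken from the paper):

```python
import random

random.seed(0)

def ols(X, y):
    """Ordinary least squares via the normal equations X'X b = X'y,
    solved by Gauss-Jordan elimination (fine for a handful of regressors)."""
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    A = [r[:] + [b] for r, b in zip(XtX, Xty)]  # augmented matrix
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(A[r][c]))  # partial pivoting
        A[c], A[p] = A[p], A[c]
        for r in range(k):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [A[i][k] / A[i][i] for i in range(k)]

# Synthetic city-pair data: volatility of the log relative price rises with
# log distance and jumps for cross-border pairs (the "border effect").
pairs = []
for _ in range(500):
    log_dist = random.uniform(4.0, 8.0)      # log kilometres between cities
    border = random.random() < 0.5           # True for a cross-border pair
    vol = 0.01 + 0.004 * log_dist + 0.02 * border + random.gauss(0.0, 0.002)
    pairs.append((log_dist, border, vol))

X = [[1.0, d, float(b)] for d, b, _ in pairs]
y = [v for _, _, v in pairs]
intercept, b_dist, b_border = ols(X, y)
print(f"distance effect: {b_dist:.4f}, border effect: {b_border:.4f}")
```

With the generated data, the recovered distance and border coefficients land close to the values built into the simulation; in the paper's setting the interesting finding is how the estimated border coefficient changes around the crisis episodes.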