The extinction of conditioned fear depends on an efficient interplay between the amygdala and the medial prefrontal cortex (mPFC). In rats, high-frequency electrical mPFC stimulation has been shown to improve extinction by reducing amygdala activity. However, it is so far unclear whether stimulation of homologous regions in humans might have similar beneficial effects. Healthy volunteers received one session of either active or sham repetitive transcranial magnetic stimulation (rTMS) covering the mPFC while undergoing a 2-day fear conditioning and extinction paradigm. Repetitive TMS was applied offline after fear acquisition, in which one of two faces (CS+ but not CS−) was associated with an aversive scream (UCS). Immediate extinction learning (day 1) and extinction recall (day 2) were conducted without UCS delivery. Conditioned responses (CR) were assessed in a multimodal approach using fear-potentiated startle (FPS), skin conductance responses (SCR), functional near-infrared spectroscopy (fNIRS), and self-report scales. Consistent with the hypothesis of modulated processing of conditioned fear after high-frequency rTMS, the active group showed reduced CS+/CS− discrimination during extinction learning, evident in FPS as well as in SCR and arousal ratings. FPS responses to CS+ further showed a linear decrement throughout both extinction sessions. This study describes the first experimental approach to influencing conditioned fear using rTMS and can thus serve as a basis for future studies investigating mPFC stimulation as a complement to cognitive behavioral therapy (CBT).
This paper provides a systematic analysis of individual attitudes towards ambiguity, based on laboratory experiments. The design of the analysis makes it possible to capture individual behavior across various levels of ambiguity, ranging from low to high. Attitudes towards risk and attitudes towards ambiguity are disentangled, providing pure measures of ambiguity aversion. Ambiguity aversion is captured in several ways, i.e., as a discount factor net of a risk premium, and as an estimated parameter in a generalized utility function. We find that ambiguity aversion varies across individuals and with the level of ambiguity, being most prominent for intermediate levels. Around one third of subjects show no aversion, one third show maximum aversion, and one third show intermediate levels of ambiguity aversion, while there is almost no ambiguity seeking. While most theoretical work on ambiguity builds on maxmin expected utility (MEU), our results provide evidence that MEU does not adequately capture individual attitudes towards ambiguity for the majority of individuals. Instead, our results support models that allow for intermediate levels of ambiguity aversion. Moreover, we find risk aversion to be statistically unrelated to ambiguity aversion on average. Taken together, the results support the view that ambiguity is an important and distinct argument in decision making under uncertainty.
The n_TOF facility operates at CERN with the aim of meeting the demand for high-accuracy nuclear data for advanced nuclear energy systems as well as for nuclear astrophysics. Thanks to the features of the neutron beam, important results have been obtained on neutron-induced fission and capture cross sections of U, Pu and minor actinides. Recently, the construction of a second beam line has started; the new line will be complementary to the first one, making it possible to further extend the experimental program foreseen for the next measurement campaigns.
We present the results of two-pion production in tagged quasi-free np collisions at a deuteron incident beam energy of 1.25 GeV/c, measured with the High-Acceptance Di-Electron Spectrometer (HADES) installed at GSI. The specific acceptance of HADES made it possible for the first time to obtain high-precision data on π+π− and π−π0 production in np collisions in a region corresponding to large transverse momenta of the secondary particles. The obtained differential cross section data provide strong constraints on the production mechanisms and on the various baryon resonance contributions (∆∆, N(1440), N(1520), ∆(1600)). The invariant mass and angular distributions from the np → npπ+π− and np → ppπ−π0 reactions are compared with different theoretical model predictions.
The elements in the universe are mainly produced by charged-particle fusion reactions and neutron-capture reactions. About 35 proton-rich isotopes, the p-nuclei, cannot be produced via neutron-induced reactions. To date, nucleosynthesis simulations of possible production sites fail to reproduce the p-nuclei abundances observed in the solar system. In particular, the origin of the light p-nuclei 92Mo, 94Mo, 96Ru and 98Ru is poorly understood. The nucleosynthesis simulations rely on assumptions about the seed abundance distributions, the nuclear reaction network and the astrophysical environment. This work addressed the nuclear data input.
The key reaction 94Mo(γ,n) for the production ratio of the p-nuclei 92Mo and 94Mo was investigated via Coulomb dissociation at the LAND/R3B setup at the GSI Helmholtzzentrum für Schwerionenforschung in Darmstadt, Germany. A beam of 94Mo with an energy of 500 AMeV was directed onto a lead target. The neutron-dissociation reactions following the Coulomb excitation by virtual photons of the electromagnetic field of the target nucleus were investigated. All particles in the incoming and outgoing channels of the reaction were identified, and their kinematics were determined in a complex analysis. The systematic uncertainties were analyzed by calculating the cross sections for all possible combinations of the data selection criteria. The integral Coulomb dissociation cross section of the reaction 94Mo(γ,n) was determined to be (571 ± 14 (stat) ± 46 (syst)) mb. The result was compared to the data obtained in a real-photon experiment carried out at the Saclay linear accelerator. The ratio of the integral cross sections was found to be 0.63 ± 0.07, which is lower than the expected value of about 0.8.
The nucleosynthesis of the light p-nuclei 92Mo, 94Mo, 96Ru and 98Ru was investigated in post-processing nucleosynthesis simulations within the NuGrid research platform. The impact of rate uncertainties of the most important production and destruction reactions was studied for a Supernova type II model. It could be shown that the light p-nuclei are mainly produced via neutron-dissociation reactions on heavier nuclei in the isotopic chains, and that the final abundances of these p-nuclei are determined by their main destruction reactions. The nucleosynthesis of 92Mo and 94Mo was also studied in different environments of a Supernova type Ia model. It was concluded that the maximum temperature and the duration of the high temperature phase determine the final abundances of 92Mo and 94Mo.
A measurement of the transverse momentum spectra of jets in Pb-Pb collisions at √sNN = 2.76 TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R of 0.2 and 0.3 in pseudo-rapidity |η|<0.5. The transverse momentum pT of charged particles is measured down to 0.15 GeV/c, which gives access to the low-pT fragments of the jet. Jets found in heavy-ion collisions are corrected event-by-event for the average background density and on an inclusive basis (via unfolding) for residual background fluctuations and detector effects. A strong suppression of jet production in central events with respect to peripheral events is observed. The suppression is found to be similar to the suppression of charged hadrons, which suggests that substantial energy is radiated at angles larger than the jet resolution parameter R=0.3 considered in the analysis. The fragmentation bias introduced by selecting jets with a high-pT leading particle, which rejects jets with a soft fragmentation pattern, has a similar effect on the jet yield for central and peripheral events. The ratio of jet spectra with R=0.2 and R=0.3 is found to be similar in Pb-Pb and simulated PYTHIA pp events, indicating no strong broadening of the radial jet structure in the reconstructed jets with R<0.3.
This thesis is structured into 7 chapters:
• Chapter 2 gives an overview of ultrashort high-intensity laser interaction with matter. The laser interaction with an induced plasma is described, starting from the kinematics of single-electron motion, followed by collective electron effects, the ponderomotive motion in the laser focus, and the plasma transparency for the laser beam. The three different mechanisms by which electrons are accelerated and propagated through matter are discussed. The subsequent indirect acceleration of protons is explained by the Target Normal Sheath Acceleration (TNSA) mechanism. Finally, some possible applications of laser-accelerated protons are briefly outlined.
• Chapter 3 deals with the modeling of the geometry and field map of the magnetic lens. Initial proton and electron distributions, fitted to measured PHELIX data, are generated; a brief description of the codes and simulation techniques employed is given; and the aberrations at the solenoid focal spot are studied.
• Chapter 4 presents a simulation study of suggested corrections to optimize the proton beam as a beam source for later use. Two tools have been employed in these corrections: an aperture placed at the solenoid focal spot as an energy-selection tool, and a scattering foil placed in the proton beam to smooth the correlation between radial position and energy in the beam profile at the focal spot caused by chromatic aberrations. A further correction has been investigated to optimize the beam radius at the focal spot by controlling the lens geometry.
• Chapter 5 presents a simulation study of the de-neutralization problem in TNSA caused by the fringing fields of the pulsed magnetic solenoid and quadrupole. In this simulation we followed an electrostatic model, in which the evolution of both the self and mutual fields through the pulsed magnetic solenoid could be determined; in the quadrupole only the growth of the self fields could be determined. The field maps of the magnetic elements are generated with a MATLAB program, while the TraceWin code is employed to study the tracking through the magnetic elements.
• Chapter 6 describes the PHELIX laser parameters at GSI with the chirped-pulse amplification (CPA) technique, and Gafchromic radiochromic film (RCF) as a spatially resolving film detector for energy. The results of the laser proton acceleration experiments, which were performed in two experimental areas at GSI (the Z6 area and the PHELIX Laser Hall (PLH)), are presented in section 6.3.
• Chapter 7 summarizes the main results of this work, draws conclusions, and gives a perspective on future experimental activities.
Mathematical modeling of Arabidopsis thaliana with focus on network decomposition and reduction
(2014)
Systems biology has become an important research field during the last decade. It focuses on understanding the systems that generate the measured data. An important part of this research field is network analysis, the investigation of biological networks. An essential step in examining these network models is their validation, i.e., the successful comparison of predicted properties to measured data. Here, Petri nets in particular have proven their usefulness as a modeling technique, offering sound analysis methods and an intuitive representation of biological network data.
A very important tool for network validation is the analysis of transition invariants (TI), which represent possible steady-state pathways, and the investigation of the liveness property. The computational complexity of determining both the TI and the liveness property often hampers their investigation.
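Concretely, a transition invariant is a non-negative integer vector x satisfying C·x = 0, where C is the net's incidence matrix; firing each transition x_i times returns the net to its original marking. A minimal toy sketch (not from the thesis, chosen purely for illustration):

```python
# T-invariants of a Petri net: non-negative integer solutions x of C @ x = 0,
# where C is the incidence matrix (rows = places, columns = transitions).
# Toy net: a reversible conversion A <-> B with transitions
# t1 (A -> B) and t2 (B -> A).
import numpy as np

C = np.array([[-1,  1],   # place A: consumed by t1, produced by t2
              [ 1, -1]])  # place B: produced by t1, consumed by t2

x = np.array([1, 1])      # fire t1 once and t2 once
assert (C @ x == 0).all() # marking restored: a steady-state cycle
print("T-invariant:", x)
```

For realistic metabolic networks the incidence matrix has hundreds of columns, and enumerating all minimal T-invariants is what drives the computational complexity mentioned above.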
To investigate this issue, a metabolic network model is created. It describes the core metabolism of Arabidopsis thaliana, and it is solely based on data from the literature. The model is too complex to determine the TI and the liveness property.
Several strategies are followed to enable an analysis and validation of the network. A network decomposition is utilized in two different ways: manually, motivated by the idea of preserving the integrity of biological pathways, and automatically, motivated by the idea of minimizing the number of crossing edges. Since a decomposition may not preserve important properties such as coveredness, a network reduction approach is proposed, which is mathematically proven to conserve these important properties. To deal with the large amount of data produced by the TI analysis, new organizational structures are proposed. The liveness property is investigated by reducing the complexity of the calculation method and adapting it to biological networks.
The results obtained by these approaches suggest a valid network model. In conclusion, the proposed approaches and strategies can be used in combination to allow the validation and analysis of highly complex biological networks.
A recent paper on the phylogenetic relationships of species within the cephalopod family Mastigoteuthidae marked great progress in stabilizing the classification of the family. The authors, however, left the generic placement of Mastigoteuthis pyrodes unresolved. This problem is corrected here by placing this species in a new monotypic genus, Mastigotragus, based on unique structures of the photophores and the funnel/mantle locking apparatus.
More than 100 years after Henry James’s death, criticism is still working through unresolved gender issues in his fiction. This study proposes a new interdisciplinary approach to the gendered power relations in James’s novels that fills a crucial vacancy in the literature. Reading James’s intricately woven narrative form through the lens of relational sociology, specifically Pierre Bourdieu’s concept of symbolic domination, reconciles some of the most fiercely disputed positions in James studies of the past decades. With its focus on gender-related symbolic domination, this study demonstrates this approach’s potential to probe the depths of James’s fictional social worlds while developing the narratological tools to do so.
Many critics have paid attention to the relational nature of James’s social fictions as well as his talent for capturing unspoken, invisible, hidden social constraints. Conspicuously missing from the literature is a systematic relational analysis of the specifically Jamesian method of narrating the socio-psychological, embodied responses to power and oppression. The present study closes this research gap. It reveals how James persistently narrates his characters as social agents whose perception, affects, and bodily practices are products of the social structures that they in turn continue to shape and reproduce. Moreover, it traces a development throughout James’s career that reflects his growing sensitivity to the stubbornness of some seemingly insurmountable social constraints. James’s fictional social worlds are relational ones through and through. This study is the first sustained effort to investigate the way in which his narratives capture this interrelatedness.
This article explores life insurance consumption in 31 European countries from 2003 to 2012 and aims to investigate the extent to which market transparency can affect life insurance demand. The cross-country evidence for the entire sample period shows that greater market transparency, which resolves asymmetric information, can generate a higher demand for life insurance. However, when considering the financial crisis period (2008-2012) separately, the results suggest a negative impact of enhanced market transparency on life insurance consumption. The mixed findings imply a trade-off between the reduction in adverse selection under greater market transparency and the possible negative effects on life insurance consumption during the crisis period due to more effective market discipline. Furthermore, this article studies the extent to which transparency can influence the reaction of life insurance demand to bad market outcomes: i.e., low solvency ratios or low profitability. The results indicate that the markets with bad outcomes generate higher life insurance demand under greater transparency compared to the markets that also experience bad outcomes but are less transparent.
Many Zanjian settlements (8th to 13th centuries AD) on Tanzania’s coast are considered to have collapsed and are not regarded as belonging to the formation of the Swahili culture (13th to 16th centuries AD). In this regard, Swahili traditions found on Tanzania’s coast are seldom linked to local Zanjian precursors but to external influence, especially from the Lamu archipelago on the Kenyan coast. Nevertheless, new archaeological evidence from Pangani Bay on the northern coast of Tanzania suggests that the external influences on cultural continuity and change from the Zanjian to the Swahili period have been overemphasized. This conclusion is grounded in archaeological fieldwork conducted in the surroundings of Pangani Bay in 2010 and 2012, where major Swahili sites directly overlie Zanjian sites without recognizable changes in the cultural materials. The study compares and contrasts cultural materials (in particular pottery) and remains of economic and trade traditions (fauna and glass beads) from both the Zanjian and Swahili phases. The aim of this comparative analysis is to trace change and continuity in archaeological traditions for a better understanding of the origin of Swahili culture in Pangani Bay.
In this endeavour, the analysis of ceramics, faunal remains and glass beads from Pangani Bay suggests negligible differences in material and economic traditions from the late 1st to the 2nd millennium AD. That is, local ceramic styles of the Swahili show only minor differences from those used by their ancestors, while the faunal data suggest a similarity in subsistence economy between the Zanjian and Swahili periods. Correspondingly, the glass bead data indicate that although maritime trade became highly sophisticated during the Swahili period, early involvement in long-distance oceanic trade began in the Zanjian period. This thesis brings all these issues together. It presents research objectives, fieldwork methods, and the analysis and interpretation of the results, with a main focus on the ceramic, faunal and bead data. Supported by the archaeological evidence, the current work concludes that there is more continuity than change in most of the Zanjian traditions that facilitated the origin of Swahili culture in Pangani Bay.
The predictive likelihood is of particular relevance in a Bayesian setting when the purpose is to rank models in a forecast comparison exercise. This paper discusses how the predictive likelihood can be estimated for any subset of the observable variables in linear Gaussian state-space models with Bayesian methods, and proposes to utilize a missing-observations-consistent Kalman filter in the process of achieving this objective. As an empirical application, we analyze euro area data and compare the density forecast performance of a DSGE model to DSGE-VARs and reduced-form linear Gaussian models.
Mapping is an important tool for the management of plant invasions. If landscapes are mapped in an appropriate way, results can help managers decide when and where to prioritize their efforts. We mapped vegetation with the aim of providing key information for managers on the extent, density and rates of spread of multiple invasive species across the landscape. Our case study focused on an area of Galapagos National Park that is faced with the challenge of managing multiple plant invasions. We used satellite imagery to produce a spatially explicit database of plant species densities in the canopy, finding that 92% of the humid highlands had some degree of invasion and 41% of the canopy was comprised of invasive plants. We also calculated the rate of spread of eight invasive species using known introduction dates, finding that species with the most limited dispersal ability had the slowest spread rates while those able to disperse long distances had a range of spread rates. Our results on spread rate fall at the lower end of the range of published spread rates of invasive plants. This is probably because most studies are based on the entire geographic extent, whereas our estimates took plant density into account. A spatial database of plant species densities, such as the one developed in our case study, can be used by managers to decide where to apply management actions and thereby help curtail the spread of current plant invasions. For example, it can be used to identify sites containing several invasive plant species, to find the density of a particular species across the landscape or to locate where native species make up the majority of the canopy. Similar databases could be developed elsewhere to help inform the management of multiple plant invasions over the landscape.
Telecommunications companies traditionally offer several tariffs from which their customers can choose the one that best suits their preferences. Yet customers sometimes make choices that are not optimal for them because they do not minimize their bill for a certain usage amount. We show in this paper that companies should be very concerned about choices in which customers pick tariffs that are too small for them, because such choices lead to a significant increase in customer churn. In contrast, this is not the case if customers choose tariffs that are too big for them. The reason is that flat rates in particular provide customers with the additional benefit of guaranteeing a constant bill amount, so that consumption can be enjoyed more freely because all costs are already accounted for.
Financial service providers face serious problems if many of their customers leave quickly, because such customers have little long-term value. Still, current reporting primarily focuses on current profitability, which represents the short-term value of the customers. The long-term value typically receives little attention. Customer equity reporting presents a means to focus on the long-term value of the company's customers. It avoids the risk that short-term profits are increased at the expense of long-term value creation, and its central metric, customer equity, serves as an early warning indicator for risk management systems that focus on customer loss.
5-lipoxygenase (5-LO) is an enzyme with a substantial role in inflammatory processes. In vitro kinase assays using [32P]-ATP in combination with mutagenesis have revealed that serine residues 271, 523 and 663 can be phosphorylated by the kinases MK2, PKA and ERK2, respectively. The few available reports on the 5-LO protein sequence have covered up to 30% of the sequence by amino acid sequencing, including Ser663. In LC-MS/MS analyses of 5-LO tryptic digests from different cellular sources, various peptides have been detected; however, none of the three phosphorylations has been detected, and only Ser663 was included in the covered sequence.
As there was no comprehensive mass spectrometric analysis of 5-LO, the purpose of this study was to optimize the experimental conditions under which detection of the aforementioned phosphorylation events, as well as of other possible post-translational modifications (PTMs), would be feasible. Matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) was used for peptide analysis of 5-LO cleaved either by chemical reagents or by proteases. Sequence coverage of 5-LO could be brought close to completion by combining the results of digestions with trypsin, AspN and chymotrypsin. In-gel trypsin digestion followed by in-solution AspN digestion proved to be a useful sample treatment for reproducible detection of the Ser271-containing peptide.
Nevertheless, in none of the examined cleavage protocols was the sequence around Ser523 detected reproducibly or with acceptable signal intensity for subsequent peptide fragmentation. Propionic anhydride and the sulfo-NHS-SS-biotin cross-linker (EZ-Link™) were used for derivatization of lysine side chains and hindrance of lysine residue recognition by trypsin. Phosphopeptide enrichment became possible after tryptic digestion of these samples, not only due to the formation of an individual Ser523-containing peptide, but also because TiO2-mediated enrichment, which is performed at acidic pH, was not impaired by positively charged free lysine side chains. Additionally, biotinylation of lysine residues was exploited for an intermediate enrichment step of the lysine-containing peptides prior to TiO2 phosphopeptide enrichment.
MALDI-MS analysis after in vitro phosphorylation of 5-LO by the three kinases showed that Ser271 was phosphorylated in the MK2 and PKA kinase assays, while Ser523 was phosphorylated only in the PKA kinase assay. Surprisingly, no phosphopeptides were detected in the in vitro kinase assays with ERK2, even though the unmodified counterpart of the Ser663-containing peptide was easily detected. The detection limit for each of the three phosphorylation sites was determined by the use of custom-made phosphopeptides, and an amount of 0.06 pmol of phosphopeptide in 1 μg 5-LO (representing a 0.5% phosphorylation rate) was sufficient in all cases for successful enrichment and detection by MS.
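The quoted 0.5% phosphorylation rate follows from simple stoichiometry; a back-of-the-envelope check (the ~78 kDa molecular weight of human 5-LO is an assumption on my part, not stated in the abstract):

```python
# Stoichiometry check: 0.06 pmol phosphopeptide per 1 ug of 5-LO
# corresponds to roughly 0.5% of the 5-LO molecules being phosphorylated.
mw_5lo = 78_000                                   # g/mol, assumed MW of 5-LO
protein_ug = 1.0
protein_pmol = protein_ug * 1e-6 / mw_5lo * 1e12  # ~12.8 pmol of 5-LO
phospho_pmol = 0.06
rate = phospho_pmol / protein_pmol                # fraction phosphorylated
print(f"phosphorylation rate: {rate:.1%}")        # ~0.5%
```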
In vitro kinase assays with [32P]-ATP were performed for several kinases that were expected to phosphorylate 5-LO according to in silico data. Three members of the Src tyrosine kinase family (Fgr, Hck and Yes) and the Ser/Thr-specific kinase DNA-PK used 5-LO as their substrate, and MS detected phosphorylation mainly at residues in the N-terminal part of 5-LO (e.g., Y42, Y53). Additional in vitro assays for recombinant 5-LO modification included incubation with glutathione or with the compound U73122, previously described as an inhibitor of 5-LO.
Since the in vitro assays might have generated artifacts, a method for 5-LO purification from human cells was sought in order to examine the modification state of the protein in the cellular context. ATP-agarose affinity purification and anti-5-LO immunoprecipitation proved inappropriate for sample purification for MALDI-MS analysis. Consequently, two human cell lines that are able to express 5-LO (Rec-1 B-lymphocytes and MM6 monocytes) were transduced with a DNA cassette that contained the recombinant human 5-LO sequence with an attached N-terminal FLAG tag. Anti-FLAG immunoprecipitation was then performed effectively in cell lysates, and the precipitated FLAG-5-LO was separated by SDS-PAGE before MALDI-MS analysis.
The examined cell stimuli were expected to result in phosphorylation of 5-LO at Ser523 by PKA in Rec-1 cells, and in phosphorylation of Ser271 and/or Ser663 in MM6 cells by activated MK2 and ERK2, respectively. Additionally, under the conditions of MM6 cell stimulation, the Fgr, Hck and Yes kinases, which phosphorylated 5-LO in vitro, were expected to be activated, and the possibility of 5-LO phosphorylation on tyrosine was investigated. Although immunoblotting results indicated that all the aforementioned phosphorylation events occurred in the examined samples, MALDI-MS analysis verified only phosphorylation on Ser271 in differentiated MM6 cells, interestingly regardless of cell stimulation.
Finally, the primary amine derivatization procedure with EZ-Link™ was utilized for MS analysis of lysine-rich proteins. In the past, chemical propionylation of histones had been employed prior to trypsin digestion; however, it was easily confused in MS with combinations of other PTMs (e.g., acetylation, methylation). Moreover, propionylation is itself a PTM of histone H3, and this information was lost. Consequently, the EZ-Link reagent was more useful for the analysis of histones, as unambiguous assignment of PTMs and detection of native propionylation on bovine H3 became possible.
Background: Malaria is still a priority public health problem in Nepal, where about 84% of the population is at risk. The aim of this paper is to highlight the past and present malaria situation in this country and the challenges for its long-term malaria elimination strategies.
Methods: Malariometric indicator data of Nepal recorded through routine surveillance of health facilities for the years between 1963 and 2012 were compiled. Trends and differences in malaria indicator data were analysed.
Results: The trend of confirmed malaria cases in Nepal between 1963 and 2012 shows fluctuation, with a peak in 1985 when the number reached 42,321, representing the highest malaria case-load ever recorded in Nepal. This was followed by a steeply declining trend of malaria with some major outbreaks. Nepal has made significant progress in controlling malaria transmission over the past decade: total confirmed malaria cases declined by 84% (12,750 in 2002 vs 2,092 in 2012), and there was only one reported death in 2012. Based on the evaluation of the National Malaria Control Programme in 2010, Nepal recently adopted a long-term malaria elimination strategy for the years 2011–2026 with the ambitious vision of a malaria-free Nepal by 2026. However, there has been an increasing trend in the proportions of Plasmodium falciparum and imported malaria in the last decade. Furthermore, the analysis of malariometric indicators of 31 malaria-risk districts between 2004 and 2012 shows a statistically significant reduction in the incidence of confirmed malaria and of Plasmodium vivax, but not in the incidence of P. falciparum and clinically suspected malaria.
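The reported 84% decline follows directly from the quoted case counts; a quick arithmetic check:

```python
# Relative decline in confirmed malaria cases, from the figures in the
# abstract (12,750 cases in 2002 vs 2,092 in 2012).
cases_2002, cases_2012 = 12_750, 2_092
decline = (cases_2002 - cases_2012) / cases_2002
print(f"decline: {decline:.0%}")  # 84%
```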
Conclusions: Based on the achievements the country has made over the last decade, Nepal is preparing to move towards malaria elimination by 2026. However, considerable challenges lie ahead. These especially include the need to improve access to diagnostic facilities to confirm and treat clinically suspected cases, the development of resistance in parasites and vectors, climate change, and increasing numbers of imported cases across a porous border with India. Therefore, caution is needed before the country embarks on malaria elimination.