An essential part of many scientific problems is the computation of the integral

    I = ∫ g(x) dx.    (1)
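As a concrete illustration (not taken from the text), the crude Monte Carlo estimate of such an integral replaces it with a sample average over uniform draws. The function `mc_integral` below is a minimal sketch, assuming for simplicity that the integration region is [0, 1]:

```python
import random

def mc_integral(g, n=100_000, seed=0):
    """Crude Monte Carlo estimate of I = \\int_0^1 g(x) dx.

    Draw n points uniformly in [0, 1] and average g over them;
    the sample mean converges to I at rate O(n^{-1/2}).
    """
    rng = random.Random(seed)
    return sum(g(rng.random()) for _ in range(n)) / n

# Example: \int_0^1 x^2 dx = 1/3
est = mc_integral(lambda x: x * x)
```

The O(n^{-1/2}) error rate is independent of the dimension of x, which is the main reason Monte Carlo is attractive for high-dimensional integrals.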
To generate random variables that follow a general probability distribution function π, we first need to generate random variables uniformly distributed in [0,1]. These random variables are often called random numbers for simplicity. However, this “simple-sounding” task is not easily achievable on a computer. But even if it were possible, it might not be desirable to use authentic random numbers because...
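Once uniform random numbers are available, one standard route from uniforms to a general distribution is the inverse-transform method: if U ~ Uniform(0,1) and F is the target's distribution function, then F⁻¹(U) follows the target. A minimal sketch for the exponential distribution (the helper name is illustrative, not from the text):

```python
import math
import random

def sample_exponential(rate, rng):
    """Inverse-transform sampling: if U ~ Uniform(0,1), then
    -log(1 - U) / rate has the Exp(rate) distribution, because the
    exponential CDF F(x) = 1 - exp(-rate * x) inverts in closed form."""
    u = rng.random()
    return -math.log(1.0 - u) / rate

rng = random.Random(42)
draws = [sample_exponential(2.0, rng) for _ in range(50_000)]
mean = sum(draws) / len(draws)   # should be close to 1/rate = 0.5
```

The same recipe applies to any distribution whose CDF can be inverted; when it cannot, rejection or transformation methods take over.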
In the previous chapter, we introduced the basic framework of sequential importance sampling (SIS), in which one builds up the trial sampling distribution sequentially and computes the importance weights recursively.
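The recursive weight computation can be sketched in a few lines. The example below is illustrative only (all names are assumptions, not the book's code): the target is taken to be a Gaussian random walk with unit-variance increments, and the trial density deliberately uses a wider Gaussian, so each step multiplies the running weight by the density ratio f(x_t | x_{t-1}) / q(x_t | x_{t-1}):

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def sis_estimate(n_particles=20_000, n_steps=5, seed=1):
    """Sequential importance sampling for a Gaussian random-walk target.

    Trial: x_t ~ N(x_{t-1}, 2^2); target increments are N(x_{t-1}, 1).
    Weight recursion: w_t = w_{t-1} * f(x_t|x_{t-1}) / q(x_t|x_{t-1}).
    Returns the self-normalized estimate of E[X_T] (true value 0)."""
    rng = random.Random(seed)
    xs = [0.0] * n_particles
    ws = [1.0] * n_particles
    for _ in range(n_steps):
        for i in range(n_particles):
            x_new = rng.gauss(xs[i], 2.0)                       # propose from trial
            ws[i] *= normal_pdf(x_new, xs[i], 1.0) / normal_pdf(x_new, xs[i], 2.0)
            xs[i] = x_new
    wsum = sum(ws)
    return sum(w * x for w, x in zip(ws, xs)) / wsum

est = sis_estimate()
```

In realistic models the weights degenerate as t grows, which is exactly what the resampling and rejection-control steps discussed in the framework are designed to counter.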
The previous chapter outlines a general Monte Carlo framework based on the sequential buildup strategy. Several essential elements are (a) the choice of the trial densities, (b) the resampling method, (c) the marginalization strategy, and (d) the rejection control. This chapter will illustrate how these generic strategies are applied to various application problems.
We have discussed in the previous chapters the important role of Monte Carlo methods in evaluating integrals and simulating stochastic systems. The most critical step in developing an efficient Monte Carlo algorithm is the simulation (sampling) from an appropriate probability distribution π(x). When directly generating independent samples from π(x) is not possible, we have to either opt for an importance sampling...
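When π(x) is known only up to a normalizing constant, self-normalized importance sampling estimates expectations by weighting draws from a tractable trial density. A minimal sketch under assumed toy densities (the unnormalized target here is a N(1, 0.5²) shape; all names are illustrative):

```python
import math
import random

def target_unnorm(x):
    """Unnormalized target: the N(1, 0.5^2) density without its constant."""
    return math.exp(-0.5 * ((x - 1.0) / 0.5) ** 2)

def self_normalized_is(n=100_000, seed=3):
    """Self-normalized importance sampling with a N(0, 2^2) trial density.

    Weights w = pi_unnorm(x) / q(x); the unknown normalizing constant of
    the target cancels in the ratio (sum w*x) / (sum w)."""
    rng = random.Random(3)
    num = den = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 2.0)
        q = math.exp(-0.5 * (x / 2.0) ** 2) / (2.0 * math.sqrt(2 * math.pi))
        w = target_unnorm(x) / q
        num += w * x
        den += w
    return num / den   # estimates E_pi[X] = 1

est = self_normalized_is()
```

The choice of trial density is the critical design decision: a trial with lighter tails than the target can make the weights, and hence the estimator's variance, explode.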
The proposal transition T(x,y) in a Metropolis sampler is often an arbitrary choice made out of convenience. In many applications, the proposal is chosen to be a locally uniform move. In fact, the use of symmetric and locally uniform proposals is so prevalent that these are often referred to as “unbiased proposals” in the literature.
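With a symmetric proposal, T(x,y) = T(y,x) cancels in the Metropolis ratio, and the move is accepted with probability min(1, π(y)/π(x)). A minimal sketch for a N(0,1) target with a locally uniform proposal (parameter names are illustrative):

```python
import math
import random

def metropolis_uniform(n_steps=200_000, half_width=1.0, seed=7):
    """Random-walk Metropolis with the symmetric, locally uniform proposal
    y ~ Uniform(x - half_width, x + half_width); target is N(0, 1).

    Because the proposal is symmetric, the acceptance probability
    reduces to min(1, pi(y) / pi(x))."""
    rng = random.Random(seed)
    log_pi = lambda z: -0.5 * z * z          # log target, up to a constant
    x, samples = 0.0, []
    for _ in range(n_steps):
        y = x + rng.uniform(-half_width, half_width)
        if math.log(rng.random()) < log_pi(y) - log_pi(x):
            x = y                            # accept; otherwise keep x
        samples.append(x)
    return samples

s = metropolis_uniform()
mean = sum(s) / len(s)
second_moment = sum(x * x for x in s) / len(s)   # should approach Var = 1
```

The half-width is a tuning parameter: too small and the chain diffuses slowly, too large and most proposals are rejected.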
In Section 1.3, we introduced the Ising model, which is used by physicists to model the magnetization phenomenon and has been studied extensively in statistical physics literature. A closely related model is the Potts model.
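In the Potts model each site takes one of q colors, and the energy rewards agreement between nearest neighbours; the Ising model is the special case q = 2. A small sketch of the energy function on a 2-D grid with free boundaries (the function name is illustrative):

```python
def potts_energy(config, J=1.0):
    """Potts energy H(x) = -J * (number of nearest-neighbour pairs with
    equal colors) on a 2-D grid with free boundary conditions.

    config is a list of rows of integer color labels; Ising corresponds
    to labels drawn from two values."""
    rows, cols = len(config), len(config[0])
    energy = 0.0
    for i in range(rows):
        for j in range(cols):
            if i + 1 < rows and config[i][j] == config[i + 1][j]:
                energy -= J          # vertical matching pair
            if j + 1 < cols and config[i][j] == config[i][j + 1]:
                energy -= J          # horizontal matching pair
    return energy

# An all-equal 2x2 configuration has 4 matching pairs -> energy -4J
e_aligned = potts_energy([[1, 1], [1, 1]])
```

The Boltzmann distribution π(x) ∝ exp(−H(x)/T) built on this energy is the sampling target in the simulations discussed throughout the book.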
The fundamental idea underlying all Markov chain Monte Carlo algorithms is the construction of implementable Markov transition rules that leave the target distribution π(x) invariant. Although the Metropolis-Hastings algorithm for constructing a desirable Markov chain is very simple and powerful, a potential problem with the Metropolis algorithm, as explained in the previous chapter, is that the proposal...
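On a finite state space the invariance requirement can be checked directly: build the Metropolis-Hastings transition matrix P and verify that πP = π. The helper below is an illustrative sketch, not code from the text:

```python
def metropolis_kernel(pi, proposal):
    """Metropolis-Hastings transition matrix on a finite state space.

    P[i][j] = Q[i][j] * min(1, pi[j] Q[j][i] / (pi[i] Q[i][j])) for j != i,
    with the leftover probability placed on the diagonal (rejection).
    The resulting chain leaves pi invariant by detailed balance."""
    n = len(pi)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and proposal[i][j] > 0:
                accept = min(1.0, (pi[j] * proposal[j][i]) / (pi[i] * proposal[i][j]))
                P[i][j] = proposal[i][j] * accept
        P[i][i] = 1.0 - sum(P[i])
    return P

pi = [0.2, 0.3, 0.5]
Q = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]                     # symmetric proposal
P = metropolis_kernel(pi, Q)
piP = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]  # == pi
```

Detailed balance, π(i)P(i,j) = π(j)P(j,i), is a sufficient (though not necessary) condition for this invariance, and it is what the Metropolis acceptance ratio enforces.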
Molecular dynamics (MD) simulation is a deterministic procedure for integrating the equations of motion based on the principles of classical mechanics (Hamilton's equations). This method was first proposed by Alder and Wainwright (1959) and has become one of the most widely used research tools for complex physical systems. In a typical MD simulation study, one first sets up the quantitative system...
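The workhorse integrator in MD is the leapfrog (Störmer-Verlet) scheme, which alternates half-step momentum updates with full-step position updates and nearly conserves the Hamiltonian. A minimal one-dimensional sketch (assuming unit mass and H(q,p) = U(q) + p²/2; names are illustrative):

```python
def leapfrog(q, p, grad_U, eps, n_steps):
    """Leapfrog integration of Hamiltonian dynamics H(q, p) = U(q) + p^2 / 2.

    Half-step in p, alternating full steps in q and p, closing half-step
    in p; the scheme is time-reversible and volume-preserving."""
    p = p - 0.5 * eps * grad_U(q)
    for _ in range(n_steps - 1):
        q = q + eps * p
        p = p - eps * grad_U(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)
    return q, p

# Harmonic oscillator U(q) = q^2 / 2, grad U = q; start at (1, 0),
# total energy 0.5. Leapfrog keeps the energy error bounded at O(eps^2).
q, p = leapfrog(1.0, 0.0, lambda q: q, eps=0.01, n_steps=1000)
energy = 0.5 * q * q + 0.5 * p * p
```

Time-reversibility and volume preservation are also what make leapfrog trajectories usable as proposals inside hybrid (Hamiltonian) Monte Carlo.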
In this chapter, we describe a few innovative ideas in using auxiliary distributions and multiple Markov chains (in parallel) to improve the efficiency of Monte Carlo simulations. Roughly speaking, in order to improve the mixing property of an underlying Monte Carlo Markov chain, one can build a few “companion chains” whose sole purpose is to help bridge parts of the sample space that are separated...
In parallel tempering (PT; Section 10.4), the target distribution is embedded into a larger system hosting a number of similar distributions that differ from one another only in a temperature parameter. Parallel Markov chains are then run to sample from these distributions simultaneously. An important step which makes PT effective and which connects the multiple distributions in the...
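The connecting step is the exchange move: two chains at inverse temperatures β_i and β_j, with π_β(x) ∝ exp(−βE(x)), propose to swap their current configurations. The acceptance probability follows from the Metropolis ratio for the joint system; a small sketch (function name is illustrative):

```python
import math

def pt_swap_prob(E_i, E_j, beta_i, beta_j):
    """Acceptance probability for swapping configurations between two
    tempered chains with targets pi_beta(x) proportional to exp(-beta * E(x)).

    Metropolis ratio for the joint system:
        [pi_i(x_j) pi_j(x_i)] / [pi_i(x_i) pi_j(x_j)]
      = exp((beta_i - beta_j) * (E_i - E_j)).
    """
    return min(1.0, math.exp((beta_i - beta_j) * (E_i - E_j)))

# If the hotter chain (smaller beta) holds the lower-energy state,
# the swap is always accepted:
p_accept = pt_swap_prob(E_i=2.0, E_j=1.0, beta_i=1.0, beta_j=0.5)   # -> 1.0
```

Swaps are usually proposed only between adjacent temperatures, since the acceptance rate decays quickly as the two distributions drift apart.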
When running an MCMC sampler, one is often fascinated by the fact that the sampler can produce desirable random samples from a target distribution by making a series of local changes to an arbitrary initial state. It is therefore natural to ask: What makes this operation work? Why can we obtain “typical samples” from a target distribution by conducting a series of local moves? A basic tool...
Topics in this chapter include the covariance analysis of iterative conditional sampling; the comparison of Metropolis algorithms based on Peskun’s theorem; the eigen-analysis of the independence sampler; perfect simulation, convergence diagnostics, and a theory for dynamic weighting. The interested reader is encouraged to read the related literature for more detailed analyses.