Summary. Advances in understanding the biological underpinnings of many cancers have led increasingly to the use of molecularly targeted anticancer therapies. Because the platelet‐derived growth factor receptor (PDGFR) has been implicated in the progression of prostate cancer bone metastases, it is of great interest to examine possible relationships between PDGFR inhibition and therapeutic outcomes...
Summary. We analyse the effects of various treatments on cotton aphids (Aphis gossypii). The standard analysis of count data on cotton aphids determines parameter values by assuming a deterministic growth model and combines these with the corresponding stochastic model to make predictions on population sizes, depending on treatment. Here, we use an integrated stochastic model to capture the intrinsic...
Summary. Racial disparities in risks of mortality adjusted for socio‐economic status are not well understood. To add to the understanding of racial disparities, we construct and analyse a data set that links, at individual and zip code levels, three government databases: Medicare, the Medicare Current Beneficiary Survey and US census. Our study population includes more than 4 million Medicare enrollees...
Summary. Health economic decision models are subject to various forms of uncertainty, including uncertainty about the parameters of the model and about the model structure. These uncertainties can be handled within a Bayesian framework, which also allows evidence from previous studies to be combined with the data. As an example, we consider a Markov model for assessing the cost‐effectiveness of implantable...
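The abstract does not give the model's states or probabilities, so as a generic illustration only, a Markov cohort model of the kind used in cost-effectiveness analysis can be sketched as follows; every state, transition probability, cost, and utility below is hypothetical, not taken from the implantable-device study.

```python
# Minimal Markov cohort model (illustrative; all numbers hypothetical).
# States: 0 = well, 1 = complication, 2 = dead (absorbing).
P = [
    [0.90, 0.07, 0.03],   # transitions from "well"
    [0.00, 0.80, 0.20],   # transitions from "complication"
    [0.00, 0.00, 1.00],   # "dead" is absorbing
]
cost_per_cycle = [100.0, 2500.0, 0.0]   # cost incurred in each state
qaly_per_cycle = [1.00, 0.60, 0.00]     # quality-adjusted life-years

def run_cohort(P, cost, qaly, n_cycles=20):
    """Propagate the cohort distribution, accumulating cost and QALYs."""
    dist = [1.0, 0.0, 0.0]              # everyone starts in "well"
    total_cost = total_qaly = 0.0
    for _ in range(n_cycles):
        total_cost += sum(d * c for d, c in zip(dist, cost))
        total_qaly += sum(d * q for d, q in zip(dist, qaly))
        dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
    return total_cost, total_qaly

cost, qaly = run_cohort(P, cost_per_cycle, qaly_per_cycle)
```

In a Bayesian treatment such as the one the abstract describes, the transition probabilities would carry posterior distributions rather than the point values used here.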
Summary. Sample selection models attempt to correct for non‐randomly selected data in a two‐model hierarchy where, on the first level, a binary selection equation determines whether a particular observation will be available for the second level, i.e. in the outcome equation. Ignoring the non‐random selection mechanism that is induced by the selection equation may result in biased estimation of the...
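The bias the abstract warns about is easy to demonstrate by simulation. The sketch below is a generic Heckman-type setup with made-up coefficients, not the paper's model: a binary selection equation with errors correlated with the outcome equation, so that a naive slope estimate on the selected sample is attenuated.

```python
import random

# Illustrative two-level selection model (all coefficients and the error
# correlation are hypothetical). The selection equation decides whether an
# observation reaches the outcome equation; ignoring it biases the slope.
random.seed(1)

rho, n = 0.8, 20000
xs, ys, sel = [], [], []
for _ in range(n):
    x = random.gauss(0, 1)
    u = random.gauss(0, 1)                          # selection-equation error
    e = rho * u + (1 - rho ** 2) ** 0.5 * random.gauss(0, 1)
    sel.append(0.5 * x + u > 0)                     # binary selection equation
    xs.append(x)
    ys.append(1.0 + 2.0 * x + e)                    # outcome equation, slope 2

def ols_slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

full = ols_slope(xs, ys)                            # close to the true slope 2
obs = ols_slope([a for a, s in zip(xs, sel) if s],
                [b for b, s in zip(ys, sel) if s])  # attenuated by selection
```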
Summary. Material indentation studies, in which a probe is brought into controlled physical contact with an experimental sample, have long been a primary means by which scientists characterize the mechanical properties of materials. More recently, the advent of atomic force microscopy, which operates on the same fundamental principle, has in turn revolutionized the nanoscale analysis of soft biomaterials...
Summary. In 1996, the discovery of variant Creutzfeldt–Jakob disease in the UK raised serious concerns about a large‐scale epidemic. These concerns have been heightened by the recent discovery of people in Britain who were infected through blood transfusion. The outbreak of variant Creutzfeldt–Jakob disease in France emerged more recently with 23 cases observed to date. We use a hidden Markov model...
Summary. Nicholls and Gray have described a phylogenetic model for trait data. They used their model to estimate branching times on Indo‐European language trees from lexical data. Alekseyenko and co‐workers extended the model and gave applications in genetics. We extend the inference to handle data missing at random. When trait data are gathered, traits are thinned in a way that depends on both the...
Summary. Remote sensing is one example where data sets that vary across space and time have become so large that ‘standard’ approaches employed by statistical modellers for applied analysis are no longer feasible. We present a Bayesian methodology, which makes use of recently developed algorithms in applied mathematics, for the analysis of large space–time data sets. In particular, a Markov chain...
Summary. Data structures in modern applications frequently combine the necessity of flexible regression techniques handling, for example, non‐linear and spatial effects with high dimensional covariate vectors. Whereas estimation of the former is typically achieved by supplementing the likelihood with a suitable smoothness penalty, the latter are usually assigned shrinkage penalties that enforce sparse...
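As one generic example of a sparseness-enforcing shrinkage penalty of the kind the abstract refers to (not the paper's own penalty), the lasso soft-thresholding operator is the building block of coordinate-descent estimation under an L1 penalty and drives small coefficients exactly to zero:

```python
# Lasso soft-thresholding operator (generic illustration of an
# L1 shrinkage penalty; not the specific penalty used in the paper).

def soft_threshold(z, lam):
    """Solution of argmin_b 0.5 * (z - b)**2 + lam * abs(b)."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0          # coefficients within [-lam, lam] shrink to exactly 0

shrunk = [soft_threshold(z, 0.5) for z in [-2.0, -0.3, 0.0, 0.4, 1.5]]
# → [-1.5, 0.0, 0.0, 0.0, 1.0]
```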
Summary. The paper describes the use of a longitudinal tobit model to characterize cognitive decline over a 13‐year period in a cohort of 2087 elderly Australians. Use of a tobit formulation allows for the so‐called ‘ceiling effect’ wherein many subjects achieve perfect test scores. A Bayesian hierarchical joint model is presented that allows for random subject‐specific intercepts and slopes, as...
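The ceiling effect the abstract mentions can be illustrated with a generic right-censored (tobit-type) normal likelihood, in which scores at the maximum contribute a survivor-function term rather than a density term. The scores and the fixed scale below are hypothetical, and the grid search stands in for a proper optimiser:

```python
import math

# Illustrative tobit log-likelihood with a ceiling (hypothetical data).

def norm_logpdf(z):
    return -0.5 * z * z - 0.5 * math.log(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def tobit_loglik(mu, sigma, scores, ceiling):
    ll = 0.0
    for y in scores:
        if y >= ceiling:   # at the ceiling: latent score >= ceiling
            ll += math.log(1 - norm_cdf((ceiling - mu) / sigma))
        else:              # observed exactly
            ll += norm_logpdf((y - mu) / sigma) - math.log(sigma)
    return ll

scores = [24, 27, 30, 30, 30, 28, 26, 30]   # hypothetical scores, maximum 30
# Crude grid search for the latent mean with sigma fixed at 3:
best_mu = max(range(20, 41), key=lambda m: tobit_loglik(m, 3.0, scores, 30))
```

Because half the scores sit at the ceiling, the latent-mean estimate exceeds the naive sample mean, which is the point of the tobit formulation.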
Summary. The paper is concerned with a dynamic factor model for spatiotemporal coupled environmental variables. The model is proposed in a state space formulation which, through Kalman recursions, allows a unified approach to prediction and estimation. Full probabilistic inference for the model parameters is facilitated by adapting standard Markov chain Monte Carlo algorithms for dynamic linear models...
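The Kalman recursions the abstract relies on can be sketched in their simplest form, a one-dimensional local-level model y_t = x_t + v_t, x_t = x_{t-1} + w_t; this is a generic predict/update illustration, not the paper's spatiotemporal factor model:

```python
# One-dimensional Kalman filter for a local-level state space model
# (generic sketch; noise variances and data below are hypothetical).

def kalman_filter(ys, q=0.1, r=1.0, x0=0.0, p0=10.0):
    """Filtered state means given state noise q and observation noise r."""
    x, p = x0, p0
    means = []
    for y in ys:
        p = p + q                 # predict: state variance grows by q
        k = p / (p + r)           # Kalman gain
        x = x + k * (y - x)       # update mean toward the observation
        p = (1 - k) * p           # update (shrink) the variance
        means.append(x)
    return means

ys = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3, 0.8]
filtered = kalman_filter(ys)
```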
Summary. This work is concerned with the vulnerability of spaceborne microelectronics to single‐event upset, which is a change of state caused by high‐energy charged particles in the solar wind or the cosmic ray environment striking a sensitive node. To measure the susceptibility of a semiconductor device to single‐event upsets, testing is conducted by exposing it to high‐energy heavy ions or protons...
Summary. Meta‐analysis is often undertaken in two stages, with each study analysed separately in stage 1 and estimates combined across studies in stage 2. The study‐specific estimates are assumed to arise from normal distributions with known variances equal to their corresponding estimates. In contrast, a one‐stage analysis estimates all parameters simultaneously. A Bayesian one‐stage approach offers...
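Stage 2 of the two-stage approach the abstract describes is, in its fixed-effect form, inverse-variance pooling: each study's variance is treated as known and its estimate weighted accordingly. The estimates and variances below are hypothetical:

```python
import math

# Stage-2 inverse-variance (fixed-effect) pooling of study estimates
# whose variances are treated as known (hypothetical numbers).
estimates = [0.42, 0.31, 0.55, 0.28]   # study-specific estimates (stage 1)
variances = [0.04, 0.09, 0.02, 0.06]   # assumed known, as in the abstract

weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
```

A one-stage analysis, by contrast, would model the individual data from all studies jointly instead of treating these variances as fixed.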
The paper considers data from an aphid infestation on a sugar cane plantation and illustrates the use of an individual level infectious disease model for making inference on the biological process underlying these data. The data are interval censored, and the practical issues involved with the use of Markov chain Monte Carlo algorithms with models of this sort are explored and developed. As inference...
The analysis of genomic alterations that may occur in nature when segments of chromosomes are copied (known as copy number alterations) has been a focus of research to identify genetic markers of cancer. One high‐throughput technique that has recently been adopted is the use of molecular inversion probes to measure probe copy number changes. The resulting data consist of high dimensional copy number...
Our application data are produced from a scalable, on‐line expert elicitation process that incorporates hundreds of participating raters who score the importance of research goals for suicide prevention, with the aim of informing policy making. We develop a Bayesian formulation for analysis of ordinal multirater data motivated by our application. Our model employs a non‐parametric mixture...
We present a novel analysis of a landmark table of dose–response mortality counts from lung cancer in men. The data were originally collected by Doll and Hill. Our inferences are based on Poisson models for which the rates of occurrence are partially ordered according to two covariates. The partial ordering of the mortality rates enforces the well‐established knowledge that lung cancer mortality rates...
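One generic way to enforce an ordering on rate estimates, shown here as an illustration rather than the paper's Bayesian method, is the weighted pool-adjacent-violators algorithm applied to raw Poisson rates (events per unit exposure). The counts below are hypothetical, not Doll and Hill's table:

```python
# Isotonic adjustment of Poisson rates via pool-adjacent-violators
# (illustrative stand-in for an order constraint; hypothetical counts).
deaths   = [2, 7, 3, 12, 20]        # events per dose level
exposure = [100, 100, 100, 120, 130]

def pava(values, weights):
    """Weighted pool-adjacent-violators: non-decreasing fitted values."""
    blocks = []                     # each block: [pooled value, weight, size]
    for v, w in zip(values, weights):
        blocks.append([v, w, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2, c2 = blocks.pop()   # merge adjacent violating blocks
            v1, w1, c1 = blocks.pop()
            mw = w1 + w2
            blocks.append([(v1 * w1 + v2 * w2) / mw, mw, c1 + c2])
    out = []
    for v, w, c in blocks:
        out.extend([v] * c)
    return out

raw_rates = [d / e for d, e in zip(deaths, exposure)]
iso_rates = pava(raw_rates, exposure)   # non-decreasing in dose
```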
The paper proposes a fully Bayesian approach for the analysis of triadic data in social networks. Inference is based on Markov chain Monte Carlo methods as implemented in the software package WinBUGS. We apply the methodology to two data sets to highlight the ease with which cognitive social structures can be analysed.
Often in regionally aggregated spatiotemporal models, a single variance parameter is used to capture variability in the spatial structure of the model, ignoring the effect that spatially varying factors may have on the variability in the underlying process. We extend existing methodologies to allow for region‐specific variance components in our analysis of monthly asthma hospitalization rates in California...