# METRON


In the present paper, we define a new measure of divergence between two probability distribution functions $$F_{1}$$ and $$F_{2}$$ based on the Jensen inequality and the Gini mean difference. The proposed measure, which we call the Jensen–Gini measure of divergence (*JG*), is symmetric and its square root is a metric. We show that the *JG* can be represented as a mixture of Cramér’s distance (*CD*) between...
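For a concrete point of reference, both ingredients of the measure above have simple empirical forms. The following sketch (an illustration only, not the paper's construction) computes the plug-in Gini mean difference $$E|X - X'|$$ and the empirical Cramér's distance $$\int (F_1(t) - F_2(t))^2 \, dt$$ from two samples:

```python
from bisect import bisect_right

def gini_mean_difference(x):
    """Plug-in estimator of the Gini mean difference E|X - X'|,
    averaged over all ordered pairs of sample points."""
    n = len(x)
    return sum(abs(a - b) for a in x for b in x) / (n * n)

def cramer_distance(x, y):
    """Empirical Cramér's distance: the integral of (F1(t) - F2(t))^2 dt,
    computed exactly as a step integral on the pooled sample grid."""
    xs, ys = sorted(x), sorted(y)
    grid = sorted(set(xs + ys))
    total = 0.0
    for a, b in zip(grid, grid[1:]):
        # Empirical CDFs are right-continuous step functions, constant on [a, b).
        d = bisect_right(xs, a) / len(xs) - bisect_right(ys, a) / len(ys)
        total += d * d * (b - a)
    return total
```

Since the empirical CDFs are piecewise constant, the step integral here is exact rather than a quadrature approximation.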

This study establishes a new approach for the analysis of variance (ANOVA) of time series. ANOVA has been sufficiently tailored for cases with independent observations, but there has recently been substantial demand across many fields for ANOVA in cases with dependent observations. For example, ANOVA for dependent observations is important to analyze differences among industry averages within financial...

The use of p values in null hypothesis statistical tests (NHST) has been controversial throughout the history of applied statistics, owing to a number of problems: arbitrary levels of Type I error, failure to trade off Type I and Type II error, misunderstanding of p values, failure to report effect sizes, and overlooking better means of reporting estimates of policy impacts, such as effect sizes, interpreted...

This paper presents the estimation procedures for a bivariate cointegration model when the errors are generated by a constant conditional correlation model. In particular, the method of maximum likelihood is discussed when the errors follow Generalised Autoregressive Conditional Heteroskedastic (GARCH) models with Gaussian and some non-Gaussian innovations. The method of estimation is illustrated using...

This paper deals with linear models for a time-dependent response and explanatory variables in a high-dimensional setting. We account for the time dependency in the data by explicitly adding autoregressive terms to the response variable in the model together with an autoregressive process for the residuals. We present a penalized likelihood approach for parameter estimation and discuss its theoretical...

Whenever the computation of the data distribution is infeasible or inconvenient, the classical predictive procedures prove not to be useful, since they rely on the conditional distribution of the future random variable, which is likewise unavailable. This paper considers a notion of composite likelihood for specifying composite predictive distributions, viewed as surrogates for true unknown predictive...

This paper studies the “truncated extended skew elliptically contoured” (TESEC) distributions and their related properties, which have never been discussed in the literature. In particular, we show that the exact distributions of order statistics arising from a doubly truncated bivariate elliptical distribution can be formulated as a mixture of six TESEC distributions. The explicit formulae for computing...

This paper examines, from a historical and model-based Bayesian perspective, two inferential issues: (1) the relation between confidence coverage and credibility of interval statements about model parameters; (2) the prediction of new values of a random variable. Confidence and credible intervals have different properties. This worries some statisticians, who want them to have the same properties,...

Survey data play an important role in many areas. The surveys typically consist of a list of direct questions. However, if survey data on sensitive topics (tax evasion, fraud, discrimination) are desired, direct questions lead to problems in data quality by answer refusal and untruthful answers. For this reason, there is a need for clever questioning procedures which protect the privacy of the respondents...
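A classical example of such a privacy-protecting questioning procedure (offered here for orientation, not necessarily the design developed in this paper) is Warner's (1965) randomized response model, in which each respondent is asked either the sensitive statement or its complement according to a private randomizer:

```python
import random

def warner_estimate(yes_rate, p):
    """Moment estimator of the sensitive proportion pi under Warner's design,
    where P(yes) = p*pi + (1 - p)*(1 - pi), so pi = (P(yes) - (1 - p)) / (2p - 1)."""
    return (yes_rate - (1 - p)) / (2 * p - 1)

# Simulation sketch: the interviewer sees only the yes/no answer, never which
# statement was asked, yet the population proportion remains estimable.
random.seed(1)
pi_true, p, n = 0.3, 0.7, 200_000
yes = 0
for _ in range(n):
    sensitive = random.random() < pi_true   # respondent's true status (hidden)
    direct = random.random() < p            # randomizer: which statement is asked
    yes += sensitive if direct else not sensitive
est = warner_estimate(yes / n, p)
```

The design probability `p` must differ from 1/2, otherwise the answers carry no information about the sensitive proportion.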

This paper presents a Bayesian approach, using the differential evolution Markov chain method, to estimate the parameters of the failure time distribution and its percentiles based on grouped and non-grouped degradation data. The observed failure times are modeled by a linear degradation path model with random degradation rates following a log-logistic distribution. Two Monte Carlo simulation studies are conducted...
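For intuition on why degradation data determine failure-time percentiles, consider a simplified version of this setup (the linear path $$D(t) = \beta t$$, fixed threshold, and parameterization below are illustrative assumptions, not the paper's exact model). Failure occurs when the path reaches the threshold, so $$T = \text{threshold}/\beta$$ and the quantiles of $$T$$ follow in closed form from those of $$\beta$$:

```python
def loglogistic_ppf(u, alpha, gamma):
    """Inverse CDF of the log-logistic distribution with scale alpha, shape gamma."""
    return alpha * (u / (1 - u)) ** (1 / gamma)

def failure_time_quantile(q, threshold, alpha, gamma):
    """If degradation is D(t) = beta * t with beta ~ log-logistic(alpha, gamma),
    failure occurs when D(t) reaches `threshold`, so T = threshold / beta and
    F_T(t) = 1 - F_beta(threshold / t); inverting gives the q-th percentile."""
    return threshold / loglogistic_ppf(1 - q, alpha, gamma)
```

Note that the reciprocal of a log-logistic variable is again log-logistic, so in this simplified case the failure time itself is log-logistic.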

Generally, the modelling of lifetime data is carried out using probability distributions with the aid of reliability functions such as the hazard rate, mean residual life, etc. In the present work, an alternative approach is proposed by considering bivariate copulas instead of bivariate distributions. We define analogues of reliability functions expressed in terms of copulas and study their properties...
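As a minimal illustration of working with a copula in place of a bivariate distribution (the Clayton family below is just a convenient example, not the paper's choice), the joint survival function underlying such reliability analogues follows directly from the copula evaluated at the marginals:

```python
def clayton(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0."""
    return (u ** -theta + v ** -theta - 1) ** (-1 / theta)

def joint_survival(u, v, theta):
    """P(U > u, V > v) = 1 - u - v + C(u, v) for uniform marginals U, V on (0, 1)."""
    return 1 - u - v + clayton(u, v, theta)
```

By Sklar's theorem, replacing `u` and `v` with marginal CDF values $$F_X(x)$$ and $$F_Y(y)$$ turns these into statements about an arbitrary bivariate lifetime model with those marginals.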

Despite flourishing activity, especially in recent times, in the study of flexible parametric classes of distributions, little work has dealt with the case where the tail weight and degree of peakedness are regulated by two parameters instead of a single one, as is usually the case. The present contribution starts off from the symmetric distributions introduced by Kotz in 1975, subsequently evolved...

The present article proposes a general class of estimators for estimating the population mean of a sensitive character using non-sensitive auxiliary information under five different scrambled response models in two-occasion successive sampling. Various well-known estimators have been modified for the estimation of the sensitive population mean and hence also become members of the proposed general...

Recent papers have discussed general procedures with complex models to obtain confidence interval statements for a parameter of interest in the presence of nuisance parameters. This paper discusses the role of the likelihood in these procedures, and points out the simplicity of the Bayesian credible interval approach to the same models.

A common approach to analyzing categorical correlated time series data is to fit a generalized linear model (GLM) with past data as covariate inputs. Challenges remain in conducting inference for time series of short length. When the historical data are treated as covariate inputs, standard errors of GLM parameter estimates computed from the empirical Fisher information do not fully account...

In this paper, we develop a quantile regression model for analyzing ordinal longitudinal responses with random effects in the presence of non-ignorable and non-monotone missing data. The ordinal responses are related to underlying latent variables, which are assumed to have an asymmetric Laplace distribution. For modeling the missing-data mechanism, an ordinary probit model is used by specifying another...
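The link between quantile regression and the asymmetric Laplace distribution rests on the check loss: maximizing an asymmetric Laplace likelihood with skewness parameter $$\tau$$ is equivalent to minimizing the total check loss at level $$\tau$$. A minimal sketch (the brute-force grid search is for illustration only, not the paper's estimation method):

```python
def check_loss(u, tau):
    """Check loss rho_tau(u) = u * (tau - 1{u < 0}); minimizing its sum over
    residuals is equivalent to maximizing an asymmetric Laplace likelihood."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def sample_quantile(data, tau, grid_steps=2000):
    """Locate the tau-quantile of `data` as the grid point minimizing
    the total check loss over the sample range."""
    lo, hi = min(data), max(data)
    best, best_loss = lo, float("inf")
    for i in range(grid_steps + 1):
        q = lo + (hi - lo) * i / grid_steps
        loss = sum(check_loss(x - q, tau) for x in data)
        if loss < best_loss:
            best, best_loss = q, loss
    return best
```

Setting $$\tau = 0.5$$ recovers the median (least absolute deviations), while asymmetric $$\tau$$ shifts the minimizer to the corresponding sample quantile.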